How to keep your IT systems working when the worst happens, by IT consultant John Dryden
IT is the lifeblood of any modern charity, linking its head, heart and essential organs. If it stops flowing, things will instantly seize up.
This is especially true for international charities, for whom email is the most practical way to communicate with far-flung colleagues. Where staff are operating in different time zones and remote locations across the developing world, it can sometimes be the only way to communicate regularly.
For example, an international medical charity we work with has 1,400 staff spread across the globe. On an average day its London-based team send and receive more than 11,000 emails – some of them involving life-or-death medical decisions.
A poll of 500 SMEs in Europe and the US shows that 85% are experiencing cost-related challenges with backup and recovery, 83% report a lack of capabilities and 80% struggle with complexity.
Other problems include high ongoing management costs (51%), expensive licensing models (48%) and backups either requiring or using too much storage (44%).
This means at most 15% of SMEs currently have no issues with data protection, said backup, replication and virtualisation management firm Veeam Software, which commissioned the survey.
Preliminary results from a joint CII, London School of Economics and University of Plymouth research project on how financial organisations approach risk culture revealed that firms are becoming increasingly conservative, which could damage their profitability.
The research project was designed to deliver practical guidance for firms to improve the cultures and behaviours associated with risk-taking and control activities.
Interviews were carried out with risk management professionals at nine financial institutions, and the study also included findings from a survey of 2,258 CII members.
As the security industry continues to grapple with a shortage of skilled professionals, particularly within very specific niches like application security, the state of security professional development keeps the industry locked in a number of hotly contested debates. Beyond the most obvious argument over the value of security certifications, some security pundits have stepped up to argue about a more fundamental impediment to raising the tide for all boats in the industry: the cost of paid training.
"Mathematically it's easily demonstrable that organizations can't afford to send all of their employees to a class when you're talking classes that typically are around $1,000 a day," says Xeno Kovah, lead infosec engineer at The Mitre Corporation. "It's just not possible to take a group of 50 people out of your company, if you have a large one, and pay the amounts of money that are being asked to sufficiently bootstrap your employees."
Dozens of government agencies have no idea whether their websites or public kiosks are a security risk.
The widespread failing has been revealed in a review of 70 government departments and ministries, which identified 12 systems at risk because of insecure passwords, potential access by unauthorised users or connections to internal networks. However, there was no evidence of privacy breaches.
KPMG investigated 215 publicly accessible computer systems and found 73% lacked formal security standards and had no formal risk management processes.
The offenders included the Ministries of Social Development, Education and Justice, as well as the Earthquake Commission and the MidCentral District Health Board.
Why would you need a policy once you have a business impact analysis, a business continuity strategy and a business continuity plan? This is probably a question many experienced business continuity/disaster recovery practitioners are asking themselves, so here’s why ISO 22301 (a leading business continuity management standard) says it’s mandatory.
The main purpose of a business continuity policy is for top management to define what it wants to achieve with business continuity. Why would that be important? Because in many cases executives have no idea how business continuity can help their organization, which means they won’t be particularly interested in supporting the business continuity effort in their company.
Computerworld — SAN FRANCISCO - Bringing consumer technology into the enterprise doesn't mean corporate data will be at risk or that money spent on failed projects was wasted. Just ask NASA, which regularly brings shiny toys into its "IT petting zoo" to play with and test, many of which have gone on to be venerated products.
Tom Soderstrom, CTO of IT at NASA's Jet Propulsion Laboratory, regularly brings consumer tech into his shop to see if it will result in an increase in productivity and innovation.
"I'm often called chief toy officer ... and I'm proud of that title," Soderstrom told an audience at the CITE Conference and Expo here. "Ideas come from everywhere. Productize them and dare to fail. The ones that make sense go into pilot mode and then become products and typically last for years."
Federal agencies are grappling with an unprecedented growth in data at the same time that backup solutions are nearing capacity, a situation that could hamper efforts to recover data in the event of an emergency.
Moreover, agency officials are not testing their disaster recovery solutions as often as they should, raising questions about their preparedness for a natural disaster or man-made incident, according to a survey of 150 federal defense and civilian IT managers in a new MeriTalk report.
Reducing data at the source is the smart way to do backup. That is the conclusion I came to in my last post, If files were bricks, you'd change your backup strategy. But I also left off by saying “there are technologically different ways to do this, which have their own smart and dumb aspects.” Let’s take a look at them.
There are two common ways of reducing data at the host (as I mentioned last time, I am only considering traditional backup from servers, not disk-array snapshots). Since terminology can be used in different ways, I’ll define the terms as I use them.
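One way to picture "reducing data at the host" is source-side deduplication: the backup client hashes chunks of data and transmits only chunks the server has not seen before. This is a minimal sketch, not the author's implementation; the fixed chunk size and the in-memory hash index are illustrative simplifications:

```python
import hashlib

CHUNK_SIZE = 4096    # assumed fixed chunk size; real products vary this
seen_hashes = set()  # stands in for the backup server's chunk index

def backup_chunks(data: bytes) -> tuple[int, int]:
    """Return (chunks_total, chunks_actually_sent) for one backup pass."""
    total = sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        total += 1
        if digest not in seen_hashes:  # new data: index it and send it
            seen_hashes.add(digest)
            sent += 1                  # only novel chunks cross the wire
    return total, sent

# A file dominated by repeated content deduplicates heavily:
data = b"A" * CHUNK_SIZE * 10 + b"B" * CHUNK_SIZE
total, sent = backup_chunks(data)
print(total, sent)  # 11 chunks scanned, only 2 unique ones transmitted
```

The point of the sketch is the ratio: the host does the hashing work locally, so duplicate data never leaves the server, which is exactly the "reduce at the source" property the previous post argued for.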
Amidst internal and external security threats, natural disasters, hacking attempts and technological changes, banks and service providers today are constantly faced with the possibilities of data loss, security breaches and breaks in business continuity. These institutions are being asked more frequently than ever what plans they have in place for speedy recovery should systems be compromised. Following a number of hard-hitting storms in the United States, including Hurricane Sandy and the devastation wrought on the Midwest by recent tornadoes, attention is focused on preparing for a recovery after natural disasters. Though preparing for natural disasters is important, it is easy to forget that there is just as much, if not more, potential for malicious man-made threats from a security and technology perspective.
All disaster recovery efforts, whether they are for natural disasters or security threats, must ultimately be tested for efficiency and reliability. While banks across the board conduct regular tests, the way in which these tests are conducted is crucial to determining a bank’s true ability to recover in the event of a disaster. In most instances, testing can be considered either static or dynamic. Most disaster recovery tests currently conducted are static in nature, meaning they are crafted to be sterile and built for success, to allow banks to ‘prove’ they have the ability and tools needed to succeed in the event of disruption. In these instances, banks and service providers are able to conduct tests and prove they have a perfect fail-over recovery system in place. The issue here is that these tests are rarely built to actually mimic any real disaster.