Industry Hot News (7051)
The Cyber Kill Chain describes the different stages of an attack, from initial reconnaissance to objective completion. In this article Richard Cassidy describes the different elements of the Cyber Kill Chain and how to use it.
Today’s attackers are becoming increasingly sophisticated, using advanced techniques to infiltrate a business’s environment. Unlike in the past, when hackers primarily worked alone using ‘smash-and-grab’ techniques, today’s attackers prefer to work in groups, with each member bringing his or her own expertise. With highly skilled players in place, these groups are able to approach infiltration in a much more regimented way, following a defined process that enables them to evade detection and achieve their ultimate goal: turning sensitive, valuable data into a profit. With attackers ready to pounce on any business at any moment, how can businesses stay ahead and ensure their sensitive data remains safe? Most attacks follow a ‘process’ that describes attackers’ behaviours, ranging from researching, to launching an attack, and ultimately to data exfiltration: this is articulated as the ‘Cyber Kill Chain’.
The Cyber Kill Chain was developed by Lockheed Martin’s Computer Incident Response Team and describes the different stages of an attack, from initial reconnaissance to objective completion. This representation of the attack flow has been widely adopted by organizations to help them approach their defence strategies in the same way attackers approach infiltrating their businesses. As malicious activity continues to threaten sensitive data — whether it is personal data or company sensitive data — one certainty remains: attackers will continue to exploit weaknesses to infiltrate systems and extract data that they can turn into money. The best opportunity to get ahead of the hacker is to understand the steps he or she will go through, his or her motivations and techniques, and to build a security strategy around that understanding.
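As a rough illustration, the chain can be treated as an ordered sequence of stages: the earlier a defender detects activity, the more remaining stages there are at which to break the chain. The stage names below follow Lockheed Martin’s published model; the example defensive actions are illustrative suggestions, not an official mapping.

```python
# The seven stages of the Lockheed Martin Cyber Kill Chain, in order.
# The defensive actions are illustrative examples, not an official mapping.
KILL_CHAIN = [
    ("Reconnaissance",        "monitor for scanning and open-source probing"),
    ("Weaponization",         "threat intelligence on attacker tooling"),
    ("Delivery",              "email/web filtering, user awareness training"),
    ("Exploitation",          "patching, hardening, endpoint protection"),
    ("Installation",          "application control, host monitoring"),
    ("Command and Control",   "egress filtering, DNS/traffic analysis"),
    ("Actions on Objectives", "data loss prevention, anomaly detection"),
]

def stages_remaining(detected_stage: str) -> list[str]:
    """Stages still ahead of the attacker once activity is detected
    at a given stage -- earlier detection leaves defenders more
    opportunities to break the chain."""
    names = [name for name, _ in KILL_CHAIN]
    return names[names.index(detected_stage) + 1:]
```

For example, detection at the Delivery stage still leaves four later stages at which the attack can be disrupted before data is exfiltrated.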
Nick Lowe explores how current security measures against bulk data theft from organizations are broken, and how they can be fixed.
Another year, and another round of large-scale data breaches has started. We were barely a week into 2016 when Time Warner was forced to announce a breach of up to 320,000 users’ email account passwords; this followed 2015’s mega-breaches at organizations such as Ashley Madison, the US Government’s Office of Personnel Management, toy maker Vtech and many others.
Despite the scale of these ongoing data losses, and the reputational damage and remediation costs they cause, the methods for enterprise-level protection of bulk passwords and personally identifiable information (PII) have remained fundamentally unchanged over the past 20 years. And it’s evident that these approaches are simply not effective in preventing breaches.
A majority of data thefts are done from an organization’s bulk file storage. This is because once a successful attack is executed, whether via a social engineering exploit to gain administrator credentials, malware installation, or a privilege-escalation attack using known software flaws, the theft itself can be done remarkably quickly. A million username/password pairs may be stolen in just 60 seconds.
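To put that 60-second figure in perspective, a back-of-the-envelope calculation shows how little bandwidth such a theft actually requires. The 64-byte average record size below is an illustrative assumption, not a figure from the article:

```python
# Rough estimate of the bandwidth needed to exfiltrate one million
# username/password pairs in 60 seconds. The 64-byte average record
# size is an illustrative assumption.
records = 1_000_000
bytes_per_record = 64          # assumed average size of a credential pair
seconds = 60

total_mb = records * bytes_per_record / 1e6              # total data volume in MB
mbit_per_s = records * bytes_per_record * 8 / seconds / 1e6  # sustained rate needed

print(f"{total_mb:.0f} MB total, ~{mbit_per_s:.1f} Mbit/s sustained")
```

Under these assumptions the entire haul is roughly 64 MB, needing only single-digit megabits per second of sustained throughput — a trickle that is easy to miss on a busy corporate network.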
Don’t put all your eggs in one basket, or so the saying goes. When it comes to phone system resilience, this would seem to be sound advice. After all, phone availability is critical for many organisations and relying on just one solution to guarantee that availability would be foolhardy. However, single points of failure may lie in wait for the unwary, even in situations as simple as putting in a toll free number for use in an emergency.
Most data center servers operate at only 12 to 18 percent of their capacity, yet many companies aren’t taking advantage of the cost-saving potential offered by data center consolidation. Consider this: in the last five years, the US government saved nearly $2 billion by consolidating data centers. Companies like Microsoft, HPE, and IBM have likewise saved billions.
In an effort to cut costs and regain control of the data center environment, IT managers are asking that their environments be consolidated and made more efficient. The conversation revolves around aligning IT with business needs, which today often means greater IT agility. Managers and executives are trying to drive down cost and in doing so have prioritized data center consolidation and migration projects.
In creating a consolidation or data center migration plan, high-density server equipment, applications, virtualization technology, and end-user considerations all fall under the general scope.
Given the sensitivity of the data stored in customer relationship management (CRM) applications, it should come as no surprise that there is a lot of concern over how to secure that data. To address that issue, Salesforce today extended a security policy engine service that now makes it possible to limit who gets to see which data stored in its applications in real time.
Seema Kumar, senior director of product marketing for Salesforce, says the Transaction Security service is an extension of Salesforce Shield that makes use of new event monitoring tools that IT organizations can then use to either block entirely or simply generate an alert when a user tries to access a certain type of data without permission. The IT organization can use Salesforce Shield to determine the specific action across a broad set of data.
In addition, Kumar says Salesforce will soon extend this capability to not only its own applications, but all the applications that tap into the same customer records stored in the Salesforce cloud ecosystem.
Is customer-facing breach notification and response a part of your incident response plan? It should be! This is the part where you notify people that their information has been compromised, communicate to employees and the public about what happened and set the tone for recovery. It's more art than science, with different factors that influence what and how you do the notification and response. Unfortunately, many firms treat breach notification as an afterthought or only as a compliance obligation, missing out on an opportunity to reassure and make things right with their customers at a critical time when a breach has damaged customer trust.
At RSA Conference last week, I moderated a panel discussion with three industry experts (Bo Holland of AllClear ID, Lisa Sotto of Hunton & Williams, and Matt Prevost of Chubb) who offered their insights into what to do, how to do it, and how to pay for it and offset the risk as it relates to breach notification and response. Highlights from the discussion:
(TNS) - For some 25 volunteers the objective Saturday morning was equal parts simple and perplexing: find "Joe," or maybe it's "Bob."
Jackson County Search and Rescue manager Mark Mihaljevich was purposely vague with details to the volunteers completing Search and Rescue Academy training. Joe's an elderly man, they don't know his last name, he's wearing a hunting vest and a hat, but they don't know what color.
In actuality, Joe is a duffel bag hidden somewhere on the rural county-owned Givan property off Agate Road, but the unclear details the search and rescue volunteers were given is a common beginning to a missing persons investigation.
"This is typical," instructor Micki Evans said.
(TNS) - Local schools face tough choices on how much security is appropriate as last week’s shooting in Madison Twp. brought a nationwide issue close to home for the first time.
The challenge for schools is how far to go on a continuum with tons of options. More locks? More cameras? More guards? More drills? Adding metal detectors? Arming school staff? There’s no way to make everyone happy, as there are parents who support and oppose each of those steps.
“It’s a tough spot for schools and it comes down to one word — reasonableness. What is reasonable to reduce risk?” said Ken Trump, a national school safety consultant. “The majority of parents want safe schools, want risks reduced, want genuine preparedness.
Over the last ten to twenty years, we have witnessed the expansion of federal criminal prosecution of health and safety matters. Environmental and food and drug regulatory enforcement has been supplemented by aggressive criminal enforcement.
In the last few years, we have seen some landmark criminal cases involving companies and executives for food safety violations. Compliance programs in these high-risk industries can literally be a matter of life and death. Judges are handing out tough criminal sentences when warranted.
Each week we hear about the outbreak of a new foodborne illness. Weeks after that, we then usually hear about a criminal investigation against the company and sometimes individual executives.
I’ve often run into people who have to ‘send an email’ with a question for a person located a few seats away. Are they afraid of that person? Why can’t they just get up and go see them for a couple of minutes to ask what they need to ask? It seems the art of face-to-face communication is disappearing in favor of CYA (Cover Your A…) and audit concerns. If it’s not written down then it can’t be true. What have we done to ourselves?
This happens a lot when it comes to developing strategies for Business Continuity Management (BCM) and other contingency related initiatives. We don’t go and ask people; we develop questionnaires – sent by snail mail or email – or we purchase an expensive online tool, fill it with questions that get interpreted a myriad of ways, and expect recipients to respond in a timely and comprehensive manner. Huh!
Mergers generally fail and large mergers generally fail spectacularly, so I get why many of my peers think the Dell/EMC merger will be a train wreck. They also thought Dell couldn’t be taken private because, for a company like Dell, the path would generally be virtually impossible, particularly with a corporate raider like Carl Icahn working against you.
But here’s the thing: I’ve spent a lot of time looking at merger processes. I ran a merger clean-up team when I was at IBM (and I was really busy), and I’ve looked at Dell’s process in depth, one that was initially developed at IBM but refined at Dell. I learned there is nothing like it. Granted, a large merger will stress any process but, given EMC’s structure and Dell’s approach, there should be little customer impact for 12 to 18 months, and much of that initial impact should be positive.
In almost every other large merger, there would be a reason to run for the hills, largely because most large companies don’t want to learn from their mistakes and would rather focus on shooting the people that made them. But Dell is very different. It actually has an incredibly successful merger process that, for some screwy reason, no one else seems to want to emulate.
I’ll compare the HP/Compaq merger that I thought was idiotic to the Dell/EMC merger, so you get a sense of what makes this different.
US government agencies are no longer allowed to build or expand data centers unless they prove to the Office of the Federal CIO that it’s absolutely necessary, according to a new memo released by the White House’s Office of Management and Budget.
The new Data Center Optimization Initiative replaces the now six-year-old Federal Data Center Consolidation Initiative and has much stricter goals and additional rules meant to reduce the government’s sprawling data center inventory and the amount of money it takes to maintain it.
The government spent about $5.4 billion on physical data centers in fiscal year 2014. The new initiative’s goals are to reduce data center spending by $270 million in 2016, by $460 million in 2017, and by $630 million in 2018, for a total of $1.36 billion in savings over the next three years.
The current approach to business continuity, which generally focusses on ‘what could happen’, has significant limitations says Graham Goodenough. In this article he explains why this is the case; and suggests a better, more positive, method.
The term ‘resilient enterprise’, as used in this article, applies to a business that has been purposely designed with the ability to adapt to significant increases, or decreases, in production/service demands from the market it serves, and to adjust to those demands within a time frame that is not financially detrimental to the business. Establishing such an ability within critical activities – for both normal operations and unplanned disruptions – provides the flexibility to deliver capacity as needed and to maintain business income, whatever the cause of disruption.
In the second article in a three-part series exploring ‘people and resilience’, Paul Kudray looks at a common misconception: that when disaster strikes employees will automatically rally round and play their part in helping the organization recover.
I’m sure you’re familiar with the phrase: “I hate my job!” You may even have used it: possibly on more than one occasion.
You and I know there are people who have dream jobs; they work in their favourite place, doing the things they love to do, and they even have great bosses! Yes, it happens!
The employers they work for may even have a great resilience plan. Everyone in the organization may be aware of it and each person may know what to do when the proverbial hits the fan. In short it’s a fantastic resilient organization, based around the people who make it work.
Businesses often overlook the usefulness of service management tools that they already have at their fingertips as a way to streamline and effectively manage internal risk processes. Dean Coleman looks at some practical steps that businesses can take to utilise these for effective IT risk management.
IT is playing an increasingly prominent role within every organization and IT service managers need to be keenly aware of the importance of risk management to ensure they have control and influence over any issues likely to get in the way of the smooth running of the business. Technology is now so pivotal to the healthy running of the majority of companies that IT risk management has become a key discussion point on the corporate agenda of many boardrooms, as downtime of critical systems – whether accidental or malicious in origin – threatens to undermine the productivity of the entire organization. Yet, despite its importance, many organizations still use manual spreadsheets to manage risk; these are not dynamically linked to the IT real estate, so they lack any ability to equate theoretical IT risk with the actual situation on the ground.
Businesses often overlook the usefulness of service management tools that they already have at their fingertips as a way to streamline and effectively manage internal risk processes. Many service management tools are likely to already have a database of IT assets and users, so it makes sense to link IT risk management to your overall service management capabilities. That being so, what are the practical steps that businesses can take to wrest back control of their IT assets and ensure that problems in one area of the business don’t have a knock-on effect on other functions?
The Business Continuity Institute’s annual North America business continuity and resilience awards will be presented at a ceremony on March 15 at DRJ Spring World 2016 in Orlando. The shortlist of finalists is as follows:
Continuity and Resilience Consultant
Suzanne Bernier MBCI, President of SB Crisis Consulting
Christopher Duffy, Strategic BCP
Christopher Rivera MBCI, Lootok, Ltd
Continuity and Resilience Professional (Private Sector)
Pauline Williams-Banta, Business Continuity Manager, The Energy Authority
Aaron Miller MBCI, VP/Director of Business Continuity, Fulton Financial Corporation
Linda Laun, Chief Continuity Architect, IBM
Continuity and Resilience Newcomer
Bradley Hove AMBCI, Consultant, Emergency Response Management Consulting Ltd
Greg Greenwald, BCM Consultant, Lootok, Ltd
Bryan Weisbard, Head of Threat Intelligence, Investigations & Business Continuity, Twitter
Continuity and Resilience Professional (Public Sector)
Nina White, Business Continuity Manager, Talmer Bank and Trust
Ira Tannenbaum, Assistant Commissioner for Public/Private Initiatives, New York City Office of Emergency Management
Continuity and Resilience Team
Aon Business Continuity Team, Global/Americas Team
The Devry Online Service (DOS) Core Business Continuity Team
Aon’s Global Business Continuity Management Team
Health Partners Plan (HPP) Business Continuity Team
CBRE Business Continuity Management Team – Americas
Continuity and Resilience Provider (Service/Product)
Premier Continuum Inc ParaSolution BCM Software
Fusion Risk Management Inc, and the Fusion Framework BCM Software
AtHoc, a division of Blackberry
Strategic BCP® ResilienceONE® BCM Software
Continuity and Resilience Innovation
The Everbridge platform
Mars, Resiliency Summits, #WeGotThis, BCM Portal
Fairchild Consulting, FairchildApp
Most Effective Recovery
Aon Global Business Continuity Management Team – Americas
Frank Leonetti FBCI
Howard Mannella MBCI
Brian Zawada FBCI
This year’s international Business Continuity Awareness Week is taking place from 16th-20th May 2016 and a set of four posters for promoting it is now available.
The theme for BCAW 2016 is ‘return on investment’, so all four posters display the message ‘Discover the value of business continuity’.
The posters are free to download either as a PDF in various shapes and sizes, or as a JPG. They are also available with or without bleeds depending on whether you would like to print from your own computer, or you would like to get them professionally printed. The BCI also encourages sharing of the image versions through social media channels to spread the message.
NORTH LITTLE ROCK – Arkansas residents who have registered with FEMA for disaster aid are urged by recovery officials to “stay in touch.” It’s the best way to get answers and resolve potential issues that might result in assistance being denied.
“Putting your life back together after a disaster is difficult,” said John Long, federal coordinating officer for FEMA. “While the process of getting help from FEMA is intended to be simple, it’s easy to understand how sometimes providing important information is overlooked or missed.”
Residents of Benton, Carroll, Crawford, Faulkner, Jackson, Jefferson, Lee, Little River, Perry, Sebastian and Sevier counties affected by the severe storms Dec. 26 – Jan. 22, 2016 may be eligible for disaster assistance and are encouraged to register for assistance with FEMA.
After registering, it’s important to keep open the lines of communication. “It’s a two-way street,” said Long. “FEMA can’t offer assistance to survivors who – for whatever reason – have not provided all the necessary information.”
After registering with FEMA, applicants will receive notice by mail within 10 days on whether or not they qualify for federal disaster assistance.
- If eligible, the letter explains how much the grant will be, and how it is intended to be used.
- If ineligible – or if the grant amount reads “0” – you may still qualify. The denial may just mean the application is missing information or that you missed an appointment with an inspector.
Applicants who are denied assistance may call the Helpline to understand why, or go online to www.disasterassistance.gov or m.fema.gov. Becoming eligible for assistance may be as simple as supplying missing paperwork or providing additional information.
FEMA looks at a number of things to determine if a survivor will receive disaster assistance. The agency must be able to:
- Verify an applicant’s identity.
- Verify damages. If you believe the inspector didn’t see all of your damages, call the FEMA Helpline at 1-800-621-3362.
- Verify home occupancy. Applicants need to provide proof of occupancy such as a utility bill.
- Collect insurance information.
“FEMA personnel are here to help,” said Scott Bass, state coordinating officer with the Arkansas Department of Emergency Management. “Keep in touch. Use the Helpline. You’ll get answers to your questions and help with understanding the assistance process, and ways to move your personal recovery forward.”
To register for assistance:
- call 800-621-3362 (FEMA). If you are deaf, hard-of-hearing or have a speech disability and use a TTY, call 800-462-7585. If you use 711-Relay or Voice Relay Services, call 800-621-3362; or
- go to www.DisasterAssistance.gov
The toll-free telephone numbers will operate from 7 a.m. to 10 p.m. seven days a week. Multilingual operators are available.
# # #
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
We’re reading an item of interest from across the pond where the United Kingdom’s Institute of Directors (IoD) has issued a new report that gives insight into how companies tend to react if they are under a cyber attack.
The IoD study, supported by Barclays, revealed that most companies keep quiet, with under one third (28 percent) of cyber attacks reported to the police.
This is despite the fact that half (49 percent) of cyber attacks resulted in interruption of business operations, the IoD noted.
(TNS) - Illinois State University became the first university in Central Illinois and the second outside of northern Illinois to be designated as a “StormReady University” by the National Weather Service.
To earn the designation, the university had to meet seven criteria involving preparation to respond to severe weather conditions and weather emergencies, explained Chris Miller, warning coordination meteorologist with the National Weather Service office in Lincoln.
These included having designated storm shelters, multiple methods for issuing warnings, trained weather spotters and formal, written emergency plans that are tested.
A survey conducted by Lockheed Martin and the Government Business Council finds reasons to be hopeful about federal IT, as well as challenges that need to be addressed.
During Tuesday's House Judiciary Committee hearing on the challenge of balancing privacy with public safety, FBI Director James Comey faced skepticism about whether the agency had really fully explored how it might access an encrypted iPhone, currently the focus of a legal battle between the US government and Apple.
Though Comey insisted the FBI had sought assistance from other government agencies with cybersecurity expertise, not everyone was convinced.
Worcester Polytechnic Institute professor Susan Landau, in prepared remarks, said that law enforcement agencies should modernize their investigatory capabilities rather than relying on the assistance of the courts.
There is an old saying that there are two things certain in life: death and taxes. I would like to add a third: data security breaches. The Identity Theft Resource Center (ITRC) defines a data security breach as “an incident in which an individual name plus a Social Security, driver’s license number, medical record or financial records (credit/debit cards included) is potentially put at risk because of exposure.” The ITRC reports that 717 data breaches have occurred this year, exposing over 176 million records.
On the surface, finding a pattern across all such breaches may appear daunting considering how varied the targeted companies are. However, the ITRC argues that the impacted organizations are similar in that all of the data security breaches contained “personally identifiable information (PII) in a format easily read by thieves, in other words, not encrypted.” Based on my experience, I’d expect that a significant portion of the data breaches compromised data in on-premises systems. Being forced to realize the vulnerability of on-premises systems, organizations are beginning to rethink their cloud strategy.
For example, Tara Seals declares in her recent Infosecurity Magazine article that “despite cloud security fears, the ongoing epidemic of data breaches is likely to simply push more enterprises towards the cloud.” Is the move to the cloud simply a temporary, knee-jerk reaction to the growing trend in security breaches or are we witnessing a permanent shift towards the cloud? Some industry experts conclude that a permanent shift is happening. Tim Jennings from Ovum, for example, believes that a driving force behind enterprises’ move to the cloud is that they lack the in-house security expertise to deal with today’s threats and highly motivated bad actors. Perhaps the headline from the Onion, which declares “China Unable To Recruit Hackers Fast Enough To Keep Up With Vulnerabilities In U.S. Security Systems”, is not so funny after all.
In the latest edition of the Business Continuity Institute's Working Paper Series, Rudy Muls MBCI draws from his extensive experience to relate cyber resilience to its implications on business continuity practice. He further demonstrates possible opportunities for business continuity professionals to collaborate with their information security counterparts.
Cyber resilience is a topic of interest among practitioners as evidenced by the wealth of research on the subject. The BCI's most recent Horizon Scan Report revealed that cyber attacks and data breaches top the list of threats practitioners are most concerned about. The results of a global survey showed that 85% and 80% respectively expressed concern about the prospect of these threats materialising.
The paper concludes that there must be greater coordination and collaboration between those working in business continuity and information security, going as far as to say there could even be integration between the two functions. Furthermore, there should be more exercises to make staff and management aware of the cyber risk and how to react to incidents, as the involvement of all lines and areas of business management early in the incident management process is very important.
To download your free copy of ‘Digital business requires digital business continuity’, click here.
During military service (Reserve Captain) and as a voluntary fireman, Rudy Muls MBCI has gained a wealth of experience in crisis situations, and provided training in rescue and life saving techniques. During his professional career within an international financial institution he has been employed in different IT related positions, the most fulfilling of which started in 2010 when he was able to combine all his experience as business continuity manager and information security officer.
Think you already have enough on your plate, dealing with Wi-Fi and other network security in your organisation? You may have to add lighting to the list as well. A French start-up, Oledcomm, has been developing Internet by light, cunningly christened (you guessed it) Li-Fi. The technology is based on the concept of light flashes from an LED, rather like Morse Code on steroids. According to its inventors, Li-Fi also has at least two sizable advantages in terms of connectivity that, hopefully, will not be undermined by the existence of yet another attack vector.
It takes a long time and uses up a lot of expensive bandwidth to push 100 terabytes of data across a Wide Area Network.
Amazon’s answer to moving those kinds of data volumes from customer data centers to its cloud data centers has been to ship its customers high-capacity storage servers. The customer uploads their data to the server, which then gets shipped back to Amazon for upload to the cloud.
Amazon announced the service last year. Today, the company started offering the same service, but in reverse. If a customer has accumulated a lot of data in their AWS environment and wants to move it elsewhere, Amazon will put it on its Snowball data shipping servers and ship them to the customer.
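A quick calculation shows why shipping physical storage can beat the wire. The link speeds and the 70% effective-utilization factor below are illustrative assumptions, not figures from the article:

```python
# How long does it take to move a given volume of data over a WAN link?
# The 70% effective-utilization default is an illustrative assumption
# (protocol overhead, contention with other traffic, retransmits).
def transfer_days(terabytes: float, link_mbps: float, utilization: float = 0.7) -> float:
    bits = terabytes * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * utilization) # effective throughput
    return seconds / 86400

print(f"100 TB over 1 Gbps:  {transfer_days(100, 1000):.1f} days")
print(f"100 TB over 10 Gbps: {transfer_days(100, 10000):.1f} days")
```

Under these assumptions, 100 TB ties up a dedicated 1 Gbps link for roughly two weeks — which is the gap a courier-shipped storage appliance is designed to close.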
Your data center is alive.
It is a living, breathing, and sometimes even growing entity that constantly must adapt to change. The length of its life depends on use, design, build, and operation.
Equipment will be replaced, changed, and may be modified to best equip your specific data center’s individual specification to balance the total cost of ownership with risk and redundancy measures.
Just as with a human being, the individual care and love you show your data center can lengthen the life of your partnership.
The countdown has begun for Business Continuity Awareness Week (16-20 May 2016). We are only a few months away, and now we have published the posters that will be used to promote the week. The theme for BCAW this year is return on investment, so all four posters display the message ‘Discover the value of business continuity’, as ultimately we want to get the message across that business continuity can have benefits other than the obvious returns when disaster strikes.
The posters are free to download either as a PDF in various shapes and sizes, or as a JPG. They are also available with or without bleeds depending on whether you would like to print from your own computer, or you would like to get them professionally printed. Make sure you display these posters prominently in your workplace or any other suitable location, and share the image versions through your social media channels to really spread the message.
Business Continuity Awareness Week is your opportunity to help raise awareness of business continuity and highlight the value of your profession, so make sure you get involved. Ways you can take part include, but are not limited to: hosting a webinar, publishing a paper, recording a video, or writing a blog. All of which should demonstrate the theme for the week.
As an added incentive, all those who post a blog on the BC Eye blog site will be entered into a prize draw to win £250 worth of Amazon vouchers.
Cloud computing offers a wide range of solutions to companies, and online backup is one of the best: it keeps important data safe from disruptions and disasters, and provides a way to keep applications and data off-site in a highly secured environment.
There are great advantages to using backup technology, such as automation functionality and encrypted data. Some business experts state that the cloud is not a secure place for important data; however, online backups have encryption capabilities to keep data safe. By contrast, external hard drive storage is not secure, and could be stolen or misplaced. Online backup is also reasonably priced, giving companies an opportunity to keep important files and documents safe from disarray and disaster at a reasonable rate.
When data center operators examine data center cost, they generally look at high-level metrics, such as gigabytes of storage or Power Usage Effectiveness. These do matter of course, but to get to the real cost, you have to zero in on lower-level components.
Do you know how much the flash drives on your servers cost? How about the CPUs or DRAM cards? A different vendor supplies each one of those components, and they make a big difference in total cost of ownership of every data center.
Web-scale data center operators like Google and Facebook learned this lesson long ago. For years, they have been re-examining each individual component of their IT gear, looking for ways to get it cheaper.
Whether they love police, hate police, or anything in between, most community members want to know more about police. Police themselves, on the other hand, are hesitant to share information, for good reason, at least most of the time. The key to successful sharing, especially with tools collectively known as social media, is to find the balance between letting people have more information and not giving out so much it causes problems.
Generally, when talking about community engagement in social media, the typical advice is to follow people, provide good content, answer questions, be transparent, and so on. These basics are important, essential even. But where the rubber meets the road is what lies a mile or so beyond the next curve.
(TNS) - A new study has provided the first evidence that the Zika virus may be the cause of a spike in cases of a severe neurological disorder called Guillain-Barré syndrome (GBS).
The study, published in the medical journal The Lancet, showed 42 patients developed symptoms of GBS, which causes the immune system to attack parts of the nervous system.
The neurological symptoms include acute motor axonal neuropathy, which is characterised by severe paralysis. About a third of the patients also developed respiratory problems and needed medical assistance to breathe properly, the report said.
However, none of the patients died.
A study published Thursday confirmed that the 100,000 tons of methane that flowed out of Aliso Canyon was the largest natural gas leak disaster to be recorded in the United States, and that it doubled the methane emission rate of the entire Los Angeles basin.
Researchers with the University of California's Irvine and Davis campuses, along with the National Oceanic and Atmospheric Administration (NOAA), found that during the peak of the leak, "enough methane poured into the air every day to fill a balloon the size of the Rose Bowl."
University officials called it a first-of-its-kind study on the gas leak, published in the journal Science.
"The methane releases were extraordinarily high, the highest we've seen," said UCI atmospheric chemist Donald Blake in a statement. Blake, who has measured air pollutants worldwide for more than 30 years, collected surface air samples near homes in Porter Ranch.
The growing complexity of today’s enterprise computing environment means critical corporate data is stored in increasingly fragmented and heterogeneous infrastructures. Ensuring all this decentralized data is backed up in case of breach or disaster is a major cause of anxiety for both business executives and senior IT professionals.
That’s because comprehensive data protection is really not core to most people’s jobs – most of you have other things to worry about, and you just hope and pray that the systems you’ve implemented have backed up your data and will recover it in case of a disaster. But you’ve got your fingers crossed because you’re really not that confident that they will.
According to Jason Buffington, principal analyst for data protection at ESG, improving data backup and recovery systems has been a top five IT priority and area of investment for the past several years. That’s because continually-evolving computing infrastructures and production platforms are forcing companies to reexamine their data protection strategies. “When an organization goes from 30 percent virtualized to 70 percent, or from on-premises email servers to Office 365 in the cloud, these evolutions to your infrastructure drive the need to redefine your data protection strategy,” says Buffington. “Legacy approaches for data protection can’t protect all of the data in these more complex environments.”
Extending security to mobile devices and increasing the resilience of the enterprise against hackers are the two big moves Hewlett-Packard Enterprise will be announcing today at the RSA Conference in San Francisco.
The announcements mark a change of thinking at HPE, as the company wants to do a better job of weaving security into its service offerings and of responding to security issues "at machine speed," according to Chandra Rangan, vice president of marketing for HPE Security Products.
The company redefined the issues of today's threat landscape in its HPE Security Research Cyber Risk Report 2016. Looking at mobility threats, HPE used its Fortify on Demand threat assessment tool to scan more than 36,000 iOS and Android apps for needless data collection. Nearly half the apps logged geolocation, even though they didn't need to. Nearly half of all game and weather apps collected appointment data, even though that information is not needed, either. The analytics frameworks used in 60% of all mobile apps can store information that is vulnerable to hacking, and logging methods can also expose data.
(TNS) - For Harvey County Sheriff T. Walton and Community Chaplain Jason Reynolds, the past four days have been a blur.
While Walton was tasked with responding to a very dangerous situation, Reynolds was tasked with supporting first responders like Walton and all the others who showed up immediately at the mass shooting at Hesston’s Excel Industries, where four people, including the shooter, were killed Thursday and 14 others injured.
Finally, Monday was an opportunity for the two men to sit side-by-side and speak briefly of what they experienced.
For Walton, the tragedy began unfolding as he learned of a shooting victim near 12th and Meridian in Newton. As he was dealing with that incident, another 911 call came through.
“Everyone is coming to me and I hear of more shootings on the radio. I am trying to figure this out,” Walton said.
Why do we have business continuity management programmes? Is it because we want to make sure our organizations are able to respond to a disruption? Probably yes! It is common sense that we would want to be prepared for any future crisis.
In some cases, however, it is also because there is a legal obligation to do so. Many organizations are tightly regulated depending on their sector or the country they are based in, and therefore must have plans in place to deal with certain situations. Furthermore, the rules and regulations that govern us are often revised, and sometimes it can be difficult to keep up with which ones are applicable.
There is a solution however. The Business Continuity Institute has published what it believes to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based on information provided by the members of the Institute from all across the world. Some of the items may not relate directly to BCM, and should not be interpreted as being specifically designed for the industry, but rather they contain sections that could be useful to a BCM professional.
The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what the legal status of it is, who has authority for it and, of course, a link to the full document itself.
Looking to make it simpler and less expensive to back up data, Oracle today unveiled an update to its Oracle StorageTek Virtual Storage Manager System software that enables Oracle customers to back up and archive data directly into the Oracle cloud.
Steve Zivanic, vice president of the Storage Business Group at Oracle, says version 7.0 of StorageTek Virtual Storage Manager System makes it possible for IT organizations to back up and archive data from both mainframes and distributed systems to a common public cloud. In the case of the mainframe in particular, the cost savings associated with not having to locally back up data on to a mainframe platform are substantial, says Zivanic.
With more data than ever being generated by mobile computing devices, securing that information has become a major challenge for IT organizations that often don’t control either the endpoint or even the network being used to transmit data.
At the RSA Security 2016 conference today, Hewlett-Packard Enterprise (HPE) moved to address that issue with the release of HPE SecureData Mobile, a solution that extends HPE encryption software to devices running Apple iOS and Google Android operating systems.
Chandra Rangan, vice president of marketing for HPE Security, says that given the lack of control most IT organizations have over mobile computing, it's imperative that they find a way to encrypt data both at rest and in motion. In fact, a scan of 36,000 Apple iOS and Google Android applications conducted by HPE found that many of them routinely collect geolocation and calendar data. That information, notes Rangan, can in turn be used by hackers to enable all kinds of socially engineered attacks. In fact, the desire to get at that data helps explain why 10,000 new Android threats were discovered each day in 2015. And while Apple iOS devices benefit from being part of a closed ecosystem, the number of malware exploits aimed at Apple iOS rose 230 percent in 2015.
If you are wondering whether a mobile solution would be right for your crisis management plan, start with a look at how much business life has changed in recent years. Then ask whether your organization is keeping up or lagging behind when it comes to crisis planning.
In the past, it was sufficient to add crisis plans and emergency instructions to company intranets or send them by email. That was a huge improvement over handing executives a binder with the plans.
But now we are well into the twenty-first century, and the whole concept of crisis management has evolved. Beyond planning for fires, floods, and strikes, organizations must prepare to cope with workplace violence, terrorist attacks, epidemics, data loss, data breaches, reputation damage, and a host of other possibilities that were not even thought about twenty or thirty years ago. Some of these crises will occur with no warning, and reach catastrophic levels in minutes or hours.
Over the many years I’ve been working in a clean room, I’ve grown quite familiar with hard drives and the many pros and cons they present. Generally speaking, hard drives can be a fairly resilient medium when used correctly, and a technology I confidently use for storing my personal files. However, I know bad things can happen to good data, as I have witnessed countless instances of damage and failure in these devices that cause data loss.
In this post I will focus on physical issues in hard disk drives (HDDs), as the problems faced by this technology are completely different from those experienced by the alternatives available in the market, such as solid state drives (SSDs).
Buyers, beware! While a car with one careful previous owner (we’ve all heard that one, right?) may still be a viable purchase proposition, somebody else’s security may be ill-suited to your organisation. Second-hand security can crop up in situations like company mergers and acquisitions. One of the challenges is to see beyond what the other party is telling you. Your prospective business partner may be assuring you with all the honesty in the world that security in its firm covers all requirements. However, what is true for one organisation does not necessarily carry over to another.
The modern business is directly tied to the capabilities of its IT. Most of all, your data center now shapes how you create business goals and entire strategic directives. This means that business leaders and data center facilities managers must work in unison to create a truly cohesive ecosystem.
And decisions and actions on the IT side of the house can have a profound impact on mechanical systems and resulting operating costs and capacity of the data center.
When all sides of the house collaborate, there are specific benefits to the business and the entire data center environment. Consider these top challenges that collaboration aims to overcome:
Google announced a number of new security features for Gmail users in the enterprise today. Last year, the company launched its Data Loss Prevention (DLP) feature for Google Apps Unlimited users that helps businesses keep sensitive data out of emails. Today, it’s launching the first major update of this service at the RSA Conference in San Francisco.
The DLP feature allows businesses to set rules for what kind of potentially sensitive information is allowed to leave and enter its corporate firewall through email.
The most important new feature is that DLP for Gmail can now also use optical character recognition (OCR) to scan attachments for potentially sensitive information (think credit card numbers, driver’s license numbers, Social Security numbers, etc.) and objectionable words (perhaps swear words or a secret project’s codename).
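Under the hood, a DLP rule engine of this kind typically matches the extracted text against a pattern per data type. This is a minimal, hypothetical sketch of that idea, not Google's implementation: it flags Social Security and credit card numbers, and uses a Luhn checksum to discard digit runs that merely look like card numbers.

```python
import re

# hypothetical rule set -- real DLP engines ship far richer detectors
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum: true for plausible payment card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text: str):
    """Return (label, match) pairs for each sensitive pattern found."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue  # fails the checksum; likely not a real card
            hits.append((label, match.group()))
    return hits

print(scan("SSN 123-45-6789, card 4111 1111 1111 1111"))
```

The same scan would run over OCR output from an attachment; a rule could then block the message, quarantine it, or route it to an administrator.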
While most IT organizations are still held accountable for security breaches, many of them are now judged by the way they respond to a breach when one inevitably occurs. To help those IT organizations put a consistent incident response plan in place, today IBM at the RSA Security 2016 conference announced it has acquired Resilient Systems Inc.
Caleb Barlow, vice president of IBM Security, says Resilient Systems, one of the pioneering vendors in the category, extends IBM’s security portfolio beyond protecting against and detecting threats to include programmatically responding to them when they occur. Instead of wasting days trying to figure out what needs to be done in the event of a breach, Barlow says, organizations need to have a plan in place that everyone in the organization can follow. That plan, adds Barlow, needs to cover everything from remediating the breach to informing the media and the appropriate government agencies.
WASHINGTON – The Federal Emergency Management Agency (FEMA) is pleased to announce that the application period for the 2016 Individual and Community Preparedness Awards is open. The awards highlight innovative local practices and achievements by individuals and organizations that made outstanding contributions toward making their communities safer, better prepared, and more resilient.
Emergency management is most effective when the entire community is engaged and involved. Everyone, including faith-based organizations, voluntary agencies, the private sector, tribal organizations, youth, people with disabilities and others with access and functional needs, and older adults can make a difference in their communities before, during, and after disasters.
FEMA will review all entries and select the finalists. A distinguished panel of representatives from the emergency management community will then select winners in each of the following categories:
- Outstanding Citizen Corps Council
- Community Preparedness Champions
- Awareness to Action
- Technological Innovation
- Outstanding Achievement in Youth Preparedness
- Preparing the Whole Community
- Outstanding Inclusive Initiatives in Emergency Management (new category)
- Outstanding Private Sector Initiatives (new category)
- Outstanding Community Emergency Response Team Initiatives
- Outstanding Citizen Corps Partner Program
- America’s PrepareAthon! in Action (new category)
Winners will be announced in the fall of 2016 and will be invited as FEMA’s honored guests at a recognition ceremony. The winner of the Preparing the Whole Community category will receive the John D. Solomon Whole Community Preparedness Award.
More information about the awards is available at ready.gov/preparedness-awards.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
The RSA security conference is being held this week in San Francisco where security pros come together to discuss strategy. IBM made several security announcements this morning ahead of the conference, headlined by the purchase of Resilient Systems.
Instead of trying to prevent an attack, Resilient gives customers a plan to deal with a breach after it has happened. While IBM offers pieces for protecting and defending the network, no security system is foolproof, and there will be times when hackers slip through the defenses (or the attack comes from within).
“What happens when an attack happens, which unfortunately has become an inevitability? You need resilience to get back up and running and minimize the damage. There has to be muscle memory of what you will do and how you will react,” Caleb Barlow, vice president of security at IBM, told TechCrunch.
What do you think happens when the computer reservation system of an airline crashes? Well, a major airline experienced that exact situation last September – watch this three-minute video and learn about the domino effect.
When a problem occurs with an airline computer system, it creates a ripple effect that can quickly become a real mess as passengers are stuck in airports. The airline will soon order that all aircraft be grounded. Passengers will start complaining and calling the reservation desk to book other flights. In addition, labor laws will prevent the crew from working or flying.
Renewable energy is tricky to use, and it’s even trickier to use in data centers, which have to be running around the clock, regardless of whether or not the sun is shining or the wind is blowing.
For data center operators that have turned to renewable energy, the three answers have been a) using a combination of renewable generation and energy storage to supplement a data center’s power supply, not replace it; b) investing in renewable energy generation for the same grid that feeds the data center – the grid that also has coal, nuclear, and other traditional energy sources; and c) simply buying Renewable Energy Credits equivalent to some or all energy a data center consumes.
Researchers behind an experimental project in Massachusetts hope to push the progress further by studying the performance over time of a solar-powered micro data center launched this month. The test bed is called the Mass Net Zero Data Center, or MassNZ. The project’s goal is to help researchers understand how to reduce data center energy consumption and increase data centers’ ability to use renewable energy.
(TNS) - The last damaging earthquake in Washington struck 15 years ago, on Feb. 28, 2001.
The next one is scheduled for June 7.
The ground isn’t expected to actually shake this spring. But nearly 6,000 emergency and military personnel will pretend it is during a four-day exercise to test response to a seismic event that will dwarf the 2001 Nisqually quake: A Cascadia megaquake and tsunami.
Called “Cascadia Rising,” the exercise will be the biggest ever conducted in the Pacific Northwest. Which is fitting, because a rupture on the offshore fault called the Cascadia Subduction Zone could be the biggest natural disaster in U.S. history.
Resilient specializes in incident response, which helps IT security teams bolster their defenses against data breaches. The Resilient incident response platform is deployed in more than 100 of the Fortune 500 corporations – which is IBM’s sweet spot.
“We are thrilled with our plans to have the Resilient team join IBM Security,” said Marc van Zadelhoff, general manager, IBM Security. “The Resilient team includes some of the best security talent in the industry, along with leading products that enable clients to automate and consistently manage all aspects of responding to a security incident.”
The storage industry has historically been left out of the conversation when discussing the innovative and ground-breaking feats coming out of the technology world. Now, that focus is shifting thanks to three emerging trends in the enterprise: the move away from integrated systems to software-on-commodity hardware architectures, the focus on utilization rates of physical resources, and the increasing need to support millions of individual workloads.
Companies such as Facebook, Google and Amazon have devoted massive resources to build and maintain customized data center infrastructures from the ground up. In doing so, these companies have realized tremendous levels of scalability, flexibility, and efficiency. Enterprises today are experiencing large and growing amounts of data storage requirements and are focused on achieving the same benefits, driving these trends.
A majority of IT projects undertaken by government fail to deliver satisfactory results, cost more than anticipated or take longer to implement than planned. How often do we, as project managers and government employees, hear things like this: “It’s software development; it’ll take as long as it takes.” “I know it’s what I told you to build, but it’s not what I need.” “Tell me again why it’s going to cost an additional $50,000.”
Faced with tighter budgets, increasing expectations and closer public scrutiny, government IT organizations are under extreme pressure to deliver technology solutions that meet the needs of their users quickly and at low cost. Where the traditional project management approach has failed, agencies need to find alternatives to address these heightened expectations. Agile development has sparked the interest of public-sector change-makers as a way to save government IT from the debacle of skyrocketing costs and redundant systems.
Agile is an entirely new way of approaching project delivery, especially for public agencies. Many of the concepts employed in agile are not particularly new. They have been used in software development under names like prototyping, extreme programming or rapid application development. Frameworks like Scrum bring a structured methodology to these same concepts. It’s about breaking up large, complex projects into easily digested pieces and routinely getting feedback to make sure what is being delivered is in line with what’s needed.
Disaster Recovery Journal Spring World 2016 is taking place March 13-16, 2016 at Disney’s Coronado Spring Resort in Orlando, FL. We’re looking forward to another amazing show with numerous educational sessions and awesome people!
We have a lot planned during DRJ Spring World, and we hope you’ll join us:
- for lunch and a demo of Catalyst
- at the booth (707/709)
- for an educational session
Please take a look below for more details. We look forward to seeing you soon!
JOIN US FOR LUNCH AND A DEMO
OPTIMIZE YOUR CONTINUITY PROGRAM WITH CATALYST
Monday, March 14, 2016 | 12:00-1:15 PM | Coronado E | Lunch Provided
Speakers: Brian Zawada and Dustin Mackie, Avalution Consulting
Reserve your seat in advance: bccatalyst.com/drj
Catalyst makes business continuity and IT disaster recovery planning easy and repeatable for every organization. Join us to learn how Catalyst:
- Delivers the fastest implementation on the market
- Covers the ENTIRE continuity lifecycle
- Generates truly insightful (and automatic!) program metrics
- Saves you time by automating all administrative tasks
- Provides the lowest total cost of ownership
JOIN US IN THE EXHIBIT HALL
BOOTHS 707 & 709
Stop by our booths during exhibit hours to meet our team, learn about our business continuity and IT disaster recovery consulting and software solutions, and enter for a chance to win a hoverboard (don’t worry – we’ll ship it home for the winner)!
Want to learn more about our products and services before the show? Check out:
JOIN US AT A SESSION
FAILING BACK HOME CAN BE A TRIP
Solutions Track 3
When: Sunday, March 13, 2016 | 4:00-5:00 PM
Speakers: Michael Bratton and Bill DiMartini, Avalution Consulting
Many organizations design IT Disaster Recovery solutions like they’re booking a one-way flight – able to get to their destination but without a plan for how they’ll get back home. Even when plans include procedures for returning to the restored data center, those procedures are rarely tested and validated. This session is for you if you are responsible for developing and maintaining your organization’s IT Disaster Recovery Plan or for auditing IT Disaster Recovery Programs.
BCI 20/20 – THE FUTURE OF THE CONTINUITY INDUSTRY
General Session 3
When: Monday, March 14, 2016 | 10:30-11:45 AM
Moderator: Tracey Forbes Rice, Fusion Risk Management
Panelists: Brian Zawada, Avalution Consulting, Ann Pickren, MIR3, John Jackson, Fusion Risk Management
Where will continuity be in 10 years? What’s new in the continuity toolbox? This panel of subject matter experts, consisting of DRJ’s executive council members, will discuss the BCI 20/20 visionary think-tank project and what the future holds for the professionals of this industry. Discussion will include eliminating blind spots and recognizing the risk posed by near- and far-sighted thinking. The panel will think outside the box with the goal of developing a 360-degree view of risk in today’s leading organizations. Join this lively discussion to form a vision of what the future holds for this profession.
BCI HORIZON STUDY – A COMPREHENSIVE LOOK AT THE 2015 RESULTS
Senior Advanced Track 2
When: Monday, March 14, 2016 | 2:45-3:45 PM
Speakers: Brian Zawada, Avalution Consulting, John Jackson, Fusion Risk Management
The Horizon Scan Survey seeks to consolidate the assessment of near-term business threats and uncertainties based on in-house analysis of business continuity (BC) practitioners worldwide. This session will present and discuss the results of the survey.
We sat down with VMware CEO Pat Gelsinger during the 2016 Mobile World Congress to learn more about the company's strategic partnership with IBM. Gelsinger also opened up about how the Dell-EMC deal has been affecting VMware's business, and shared an update on partner relationships.
BARCELONA – VMware's latest strategic partnership with IBM, the challenges it's faced as part of the Dell-EMC merger, and the status of partner relationships were among the topics discussed by VMware CEO Pat Gelsinger during an interview with InformationWeek at Mobile World Congress here.
On Feb. 22, IBM and VMware announced a strategic partnership that aims to enable enterprise customers to easily extend their existing workloads, as they are, from their on-premises software-defined data center to the cloud. As part of the deal, according to Gelsinger, IBM is "taking the full set of VMware technologies -- vSphere, NSX, plus our storage, plus our management -- and delivering that full set to the IBM cloud customers. IBM as an enterprise cloud provider is very significant, with 45 data centers worldwide, and they are making very vast investments into that strategy."
Storage is one of the hottest IT topics today. Acquisitions are happening regularly, as more users are moving to flash and new types of storage controller ecosystems. We’re seeing powerful hybrid systems emerge and even more impact around extending environments to cloud storage. Throughout all of this, organizations must understand how to utilize these new types of storage resources, and where they apply to their data centers.
The challenge to virtualization and storage engineers is this: How do you manage and work with all the new storage capabilities? Even more important, how can you dynamically manage workload storage requirements within a virtual environment?
Small businesses are bracing for another year of costly compliance change and complexity from Washington, D.C. While they expect a cascade of regulations, their focus is on three priorities: the Affordable Care Act, Fair Labor Standards Act overtime regulations, and mandatory paid family and medical leave.
Responding to a data breach is one of the more challenging events any company can face. On the one hand, a data breach requires nearly instantaneous decision making. Which servers are affected and should be removed from the network (but not shut off)? Who should be notified? Should law enforcement, a regulator or the insurer be contacted first? When should the breach be made public, if at all? What experts should be engaged, how much do their services cost and can that budget be approved on a Sunday night? And what is the home phone number for the Director of IT?
Even for the most agile of companies, informed and responsible decision making requires the input of an array of constituencies, some of whom rarely, if ever, have been in the same room together. The classic example is the C-Suite and IT personnel. The executives may have a difficult time understanding the scope of the breach, and the language IT speaks is decidedly not the language of the boardroom. The legal requirements can be contradictory—for example, a regulator (or the FBI) may ask that you notify no one, but your insurer may require notice within 10 days to trigger coverage. The scope of the breach may be unknown, resulting in over-protection or even paralysis based on the lack of information. These complications multiply with the size and public profile of the organization.
Iron Mountain, the nearly 70-year-old “information management” company that grew out of a big early 20th century underground mushroom growing operation, has joined a White House program created to push companies and government agencies to improve their data center energy efficiency.
President Barack Obama’s administration rolled out the Better Buildings Initiative in parallel with its clean energy investment program in 2011. The Better Buildings Challenge, one part of the initiative, called on companies and agencies to make specific energy efficiency improvement commitments for their facilities in return for access to some technical assistance from the government, shared best practices, and, of course, good publicity.
So far, Boston-based Iron Mountain is one of 11 private-sector data center operators to have accepted the challenge, pledging to reduce energy intensity of eight of its data centers by 20 percent in 10 years. The others are eBay, Facebook, Intel, Intuit, Home Depot, Staples, and Schneider Electric, as well as data center providers Digital Realty Trust, CoreSite Realty, and Sabey Data Centers.
(TNS) -- Area hospitals are riddled with cybersecurity flaws that could allow attackers to hack into medical devices and kill patients, a team of Baltimore-based researchers has concluded after a two-year investigation.
Hackers at Independent Security Evaluators broke into one hospital's systems remotely to take control of several patient monitors, which would let an attacker disable alarms or display false information.
The team strolled into one hospital's lobby and used an easily accessible kiosk to commandeer computer systems that track medicine delivery and bloodwork requests — more opportunities for malicious hackers to create mayhem.
The firm worked with the knowledge and cooperation of a dozen hospitals, including facilities in Baltimore, Towson and Washington, but did not release the names of the hospitals.
(TNS) - Jakki Lewis was nearing the end of her first day of work at Excel Industries on Thursday, when she heard gunshots.
"I never did see him. We just heard bullets," Lewis said. "He was running all over the plant, chasing people."
Another employee, a man armed with a long gun and a pistol, pulled into the parking lot of the plant where about 1,000 people work, manufacturing lawn mowers, and started shooting. He walked inside, where he shot three people near the front office, Harvey County Sheriff T. Walton said later.
After hearing shots, Jeff Lusk, who was at Excel for an interview at 5 p.m., said he saw the shooter and then got under a desk.
Living with Climate Change: How Communities Are Surviving and Thriving in a Changing Climate (Jane Bullock, George Haddow, Kim Haddow, Damon Coppola) is a wide-ranging look at many aspects of past and present disaster mitigation efforts across the United States. The authors look at these efforts through the lens of climate change, and they understand that the debate on the cause of a warming climate is not accepted in all political circles. The book includes a number of case studies that look specifically at the previous benefits of the FEMA Project Impact program.
The body of the text comes primarily from a wide selection of contributors with direct experience in academia, as well as emergency management practitioners. While the book’s anticipated primary use might be as a classroom text for undergraduate and graduate students pursuing degrees in emergency management, it also has broad application for practicing emergency managers at the local, state and federal levels. We are entering a new era in which climate impacts are beginning to reveal themselves. Emergency managers will need a resource that documents what has worked in the past and can be applied to a new and undetermined future in which climate change exacerbates what were previously considered rare weather phenomena.
With new and more aggressive hazards comes the need to understand terminology that is being used in different contexts. The two-page monograph by Cooper Martin, in which he explains the difference between the terms “sustainability” and “resilience,” is quite helpful.
(TNS) - The county’s emergency planning agency is betting that moviegoers, after watching a 300-foot tsunami barrel through a Norwegian fjord toward a small town, will be more receptive to information about disaster preparedness.
The Clark Regional Emergency Services Agency will host a screening of the disaster thriller The Wave at 6 p.m. March 4 at Kiggins Theatre in Vancouver. It’s the first of what agency Emergency Management Coordinator Eric Frank hopes will be a recurring disaster movie night.
A movie night might draw a bigger and different crowd than the agency’s other modes of outreach, he said. “We do a lot of events every single year, but we know we’re still missing some demographics in there.”
The Zika virus, a mosquito-borne virus linked to neurological birth disorders, continues to be a serious problem worldwide. More cases in the US are being announced every day, with 14 new cases of sexually transmitted Zika virus announced by the CDC just this week, several of them among pregnant women. The CDC wrote in a recent statement, “These new reports suggest sexual transmission may be a more likely means of transmission for Zika virus than previously considered.”
As the Zika outbreak progresses, Zika preparedness and planning become a critical talking point for leaders in the public and private sectors. Questions such as how to handle an infected employee in the office, or where to direct citizens so they can find accurate, up-to-date information, need to be answered to ensure the highest level of citizen and employee safety.
Managing and analyzing big data -- the exponentially growing body of information collected from social media, sensors attached to "things" in the Internet of Things (IoT), structured data, unstructured data, and everything else that can be collected -- has become a massive challenge. To tackle the task, developers have created a new set of open source technologies.
The flagship software, Apache Hadoop, an Apache Software Foundation project, celebrated its 10th anniversary last month. A lot has happened in those 10 years. Many other technologies are now also a part of the big data and Hadoop ecosystem, mostly within the Apache Software Foundation, too.
Spark, Hive, HBase, and Storm are among the options developers and organizations are using to create big data technologies and contribute them to the open source community for further development and adoption.
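What these frameworks generalize is the MapReduce pattern Hadoop popularized: a map step that emits partial results in parallel, and a reduce step that merges them. As a toy illustration only (plain Python, not the actual Hadoop or Spark APIs), a word count in that style might look like:

```python
from collections import Counter
from functools import reduce

def map_phase(line):
    # Map: emit per-line word counts (the "(word, 1)" pairs, pre-combined)
    return Counter(line.lower().split())

def reduce_phase(a, b):
    # Reduce: merge partial counts produced by independent mappers
    return a + b

lines = [
    "big data needs big tools",
    "open source tools for big data",
]
counts = reduce(reduce_phase, map(map_phase, lines))
print(counts["big"])  # 3
```

In a real cluster the map calls run on many machines over distributed storage, and the framework handles shuffling the partial counts to the reducers; the logical shape of the computation is the same.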
It’s no secret that Microsoft already has a lot of cloud data centers around the world. And the company is planning to build a whole lot more as it attempts to bite further into Amazon’s stranglehold on the cloud services market.
As it continues to build out its global cloud data center empire, Microsoft has to make sure it’s doing it in the most environmentally responsible way it can. It is one of tech’s biggest names and as such, it is under a lot of scrutiny by environmentalists and the public.
To help the cause, Microsoft has created a new role, dedicated specifically to data center sustainability. Not corporate sustainability, not energy strategy, not data center strategy, but data center sustainability. This week, the company announced it has hired Jim Hanna, who until recently led environmental affairs at Starbucks, to fill that role.
Cybercrime and cyber security attacks hardly seem to be out of the news these days and the threat is growing globally. Be it a major financial institution or an individual, nobody would appear immune to malicious and offensive acts targeting computer networks, infrastructures and personal computer devices. Firms clearly must invest to stay resilient.
Indeed, according to the latest results of the 2016 Global Asset Management and Administration Survey from Linedata, a NYSE Euronext-listed IT vendor providing solutions to the investment management industry around the world, cybercrime is viewed as the “greatest business disruptor” over the next five years. Alongside this, regulation remains a priority for financial firms.
The 20-page survey, which was conducted by the fintech vendor in the fourth quarter of 2015 and canvassed two hundred market participants either face-to-face at Linedata Exchange events in London and San Francisco or via an online survey, found that more than a third (36%) of respondents were concerned about the threat from cyber criminals.
The 2015-16 El Nino season is far from over, and for many parts of the United States, the last couple of months have not been easy. In fact, the City of Pacifica, CA declared a state of emergency last month after pounding waves and powerful winds caused destruction up and down the coastline. The effects of El Nino span globally too – Stephen O’Brien, a United Nations’ under-secretary-general, said that El Nino has pushed the planet into “uncharted territory.” According to O’Brien, “the impacts, especially on food security, may last as long as two years.”
But has this El Nino season gone as planned? Back in December of 2015, we sat down with David Gold and Mike Gauthier of Weather Decision Technologies who took us through several prediction scenarios and preparation techniques for the impending El Nino season. Fast forward two months and we are back to take a look at how the current season is panning out. The results may surprise you.
The parade of data center REITs reporting exceptional Q4 and full-year 2015 results has just become even more impressive.
CyrusOne (CONE) crushed results across the board during 2015, including record leasing of 30MW across more than 200,000 square feet of data center space in the fourth quarter alone. The company is expanding capacity across six markets, but its biggest expansion plans are in New Jersey.
CyrusOne CEO Gary Wojtaszek said the flexibility for his customers to lease anywhere from a single rack to 10MW of capacity was a key reason for success in 2015. He also pointed to the company’s ability to deliver data halls in just a few months’ time at less than $7 million per megawatt.
In his final budget proposal, President Obama is asking for an increase in spending on cybersecurity -- $19 billion, which is $5 billion more than last year. The requested increase is a response to the rise in cybersecurity threats being made against government agencies.
The budget request follows a trend as we’re seeing more organizations bumping up their cybersecurity budgets. In fact, estimates are that cybersecurity spending will continue to rise, with expectations of more than $170 billion spent on security by 2020.
But is all this spending actually improving cybersecurity? A new study from Venafi hints that much of that money may be wasted because it isn’t effective against certain attacks. The problem, according to the CIOs surveyed, is that layered security defenses can’t tell which keys and certificates should be trusted and which shouldn’t. A whopping 86 percent of those CIOs believe that stolen encryption keys and digital certificates are going to be the next big attack vector, which is a serious problem because, according to Information Age:
(TNS) - Cedar Rapids Mayor Ron Corbett said Wednesday officials are bracing for the increasing possibility that new federal flood protection money, which once seemed locked in, will never arrive.
At stake could be $70 million to $80 million for flood walls, levees and pump stations to protect low-lying areas from rising tides on the east bank of the Cedar River. Congress authorized $73 million in spending in 2014, but never appropriated the money.
“We are in serious risk of never being funded,” Mayor Ron Corbett said during his State of the City address.
The sentiment marks a transition for a city rocked by flooding in 2008 from hopeful waiting to wondering if it’s time to plot a Plan B. Eight years later, Cedar Rapids still is recovering.
(TNS) - McLean Fiscal Court approved the purchase of a critical communication service that is expected to help emergency management personnel keep the public better informed and alert.
The court approved the purchase of AlertSense, a public alert system that Emergency Management Director David Sunn said he believes could ultimately be a money saver for the county.
In the event of a critically dangerous event such as a hazardous material spill, the fire department, Sunn said, would be able to use AlertSense to determine a certain radius around the spill and send automatic phone calls or text messages to residents within the radius. That's important, he added, since the county includes vast portions of rural land where communication can be scarce.
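AlertSense’s internals aren’t public, but the radius-based selection Sunn describes is conceptually simple: compute each registered resident’s distance from the incident and notify those inside the radius. A minimal sketch, using the standard haversine great-circle formula and hypothetical resident records:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def residents_in_radius(residents, incident_lat, incident_lon, radius_km):
    # Select contacts whose registered location falls inside the alert radius
    return [r["phone"] for r in residents
            if haversine_km(r["lat"], r["lon"], incident_lat, incident_lon) <= radius_km]

# Hypothetical contact list (coordinates roughly in McLean County, KY)
residents = [
    {"phone": "270-555-0101", "lat": 37.52, "lon": -87.26},  # near the spill
    {"phone": "270-555-0202", "lat": 37.90, "lon": -87.90},  # well outside
]
print(residents_in_radius(residents, 37.53, -87.25, 5.0))  # ['270-555-0101']
```

A production system would then hand the selected contacts to a telephony or SMS gateway; the geofencing step shown here is what lets a rural county target only the residents actually at risk.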
Over the past several years, the cloud-based software-as-a-service (SaaS) model has proven to be a popular choice for enterprise applications, delivering efficiencies and value to organizations in many ways. Chief among these benefits are avoiding the major undertaking and licensing costs of deploying business-critical software across the organization and relieving IT of the burdens typically associated with maintaining on-premises software—including performing upgrades, installing patches and managing availability. Additionally, cloud-based solutions can enhance flexibility and scalability for enterprise applications and workloads. Of course, the benefits to be gained from adopting SaaS solutions in the enterprise must be balanced against potential risks. Ensuring your cloud applications are highly secure needs to be a top priority.
Luke Bird highlights the requirement for many different organizational departments and professions to work together for effective organizational resilience and provides some ideas for how to overcome the associated challenges.
Organizational resilience is a highly complex and sometimes controversial term. It comes with a variety of challenges in trying to understand how it works (or potentially how it could work) in organizations. The likes of the BCI and Continuity Central have worked tirelessly to generate wider discussion and thought leadership on this topic. However, our ongoing dialogue in recent years has barely progressed beyond reaching an agreement for a simple definition (despite many of us helping to produce the British Standard 65000).
The recently published BCI Position Statement certainly highlights that we’re still not quite there in our understanding of how to take this forward. Hopefully their official line will provoke a second wind of debate as many of us take the time to decide whether we agree or disagree. My own focus and interest, however, is on the subject of multi-disciplinary collaboration and some of the challenges that we could potentially face.
Geary Sikich explains why enterprise risk and business continuity managers need to think more broadly about organizational risks. He describes how the use of ‘risk dimensions’ and ‘risk spheres’ can help.
There exists an overabundance of guidance for conducting risk assessments. Yet, it seems that we still have difficulty in getting risk assessments to reflect the appropriate level of concern for the identified risks that we are assessing. We also tend to view risk in relation to the place where we are employed and the industry that we work in. When we look at risk assessment from this perspective it should be clear that we are missing the point, or at best, are being too narrowly focused, when it comes to assessing risk for our organizations. This is not to say that our efforts are wasted. The risk assessment process is valuable regardless of how limited or narrowly focused it is. So, the question we should be asking ourselves as we prepare to implement a risk assessment is: ‘What future are we planning for?’
Nearly any discussion of contemporary channel trends includes a lament that dates back to the era when 20 megabyte floppy disks were considered state of the art. To wit: How do you offset shrinking profit margins?
Nothing new under the sun here. Profit erosion is an inevitable by-product of commodity competition. And it has been part of the tech scene - especially on the hardware side - since the first PCs rolled off the assembly lines. There’s little point in building a business around keeping hardware up and running - not when the cloud’s self-service on-demand provisioning promise is being realized.
But while you can’t make a living by only focusing on hardware any longer, another part of the value chain is thriving.
In business utopia, organisations automatically avoid problems, suppliers are selected by computer on the basis of their reliability and cost-efficiency, and machines repair themselves before they break. In business dystopia, too often seen in real world situations, the converse occurs. Organisations automatically engender problems, suppliers are selected by computer by default, and machines break down without repairing themselves. Automation can play a big part in both scenarios, but the results in terms of business continuity can be poles apart.
Workload cloud migration startup Ravello Systems was acquired on Monday by Oracle to ease enterprise adoption of its public cloud. Oracle is reported to have paid between $400 million and $500 million for the California-based company, which maintains a research presence in Israel, and Oracle is now expected to open a cloud research and development facility in Israel, according to Ha’aretz.
Ravello was started in 2011 by the team behind the KVM hypervisor. It offers nested virtualization solutions, allowing KVM and VMware workloads to be developed, tested, and demonstrated in the cloud without migration, and migrations to new cloud providers and management platforms without rewriting applications. KVM was outperformed in benchmark tests in May by LXD, the Canonical-backed Linux container hypervisor.
NORTH LITTLE ROCK –Teams of specialists from FEMA will offer tips and techniques to lessen the impact of future disaster-related property damage at building supply stores in three Arkansas locations Thursday, Feb. 25 – March 1, 2016.
The teams will be at these Lowe’s stores:
- Jefferson County: 2906A E. Harding Ave., Pine Bluff
- Faulkner County: 1325 Hwy. 64W, Conway
- Benton County: 1100 NW Lowes Ave., Bentonville
Teams will be at each location from 8 a.m. to 4:30 p.m. Thursday through Tuesday, except Sunday, when hours are 8 a.m. to 1:30 p.m.
FEMA specialists offer “how-to” information on both retrofitting buildings to make them more resistant to weather damage and ways to elevate utilities against flooding. They also provide tips to clean and help prevent mold and mildew.
Many of the tips and techniques are specifically geared for the do-it-yourselfer and for building contractors. If you have a disability and need an accommodation, such as Braille or large-print materials or an ASL interpreter, please let our representatives know.
FEMA offers a number of free online resources for home and property owners. To get started, go to
# # #
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Application and data migration remains one of the most significant barriers to cloud adoption in the enterprise these days. And while today’s solutions are not perfect, there is at least a strong commitment on the part of vendors and cloud providers to address the issue.
The biggest move came this week with the announcement from IBM and VMware that they would work together to move legacy data center functions onto the IBM cloud. The pact is significant for two reasons. First, it combines the technical know-how of two leading IT vendors – IBM on the hardware and services side and VMware on the virtual layer – to craft what will likely be a very robust hybrid cloud infrastructure (Disclosure: I provide content services to IBM).
Secondly, it enables organizations to move legacy apps to the cloud without having to rewrite code. As The Wall Street Journal’s Angus Loten points out, this is crucial for organizations that are seeking the flexibility and scalability of the cloud but still need to leverage existing infrastructure for ongoing business processes.
(TNS) -- When people in the Kansas City area need emergency help, they can now send a text message to 911.
Text-to-911 service has been growing more common among cities across the country in recent years and is now fully operational at all emergency dispatch centers in the Kansas City metro area, the Mid-America Regional Council announced last week.
Sending a text to 911 instead of calling could be a lifesaving option for people in situations where they can’t speak safely, such as home invasions or active shooter incidents, according to MARC.
Over the past decade, the number of power outages in the United States has increased. A recent federal study found that the U.S. electric grid loses power 285% more often than it did 30 years ago. These surprising numbers are mainly attributed to aging infrastructure, a growing population, and more severe weather patterns. On top of the financial burden this places on businesses, residents’ daily lives are affected by these unexpected failures.
What can residents do to be best prepared in the case of a power outage? One of the key elements in being prepared is having a line of communication. During a power outage, watching the news for information from local officials is not an option. Having a system to send out a mass text or email notification is a huge advantage when traditional means of communication are cut off. During a power outage, residents are often left in the dark about how long the power will be out, what caused it, and whether the problem is being solved. By using Nixle, police departments and other officials can keep a line of communication open with residents to update them on the progress of the outage.
A new survey of 1,080 IT professionals conducted by cloud services company Evolve IP indicated the cloud has "gained corporate alignment, increased real business benefits and has near ubiquitous adoption."
Evolve IP's "2016 North American Cloud Adoption Survey" revealed 86 percent of respondents said they believe cloud computing represents "the future model of IT."
The hybrid cloud is going mainstream as more companies seek to capitalize on the benefits of both the private and public cloud.
But this tech transition is not without its sundry challenges, particularly when it comes to security - and that’s where managed service providers can play key roles as customers transform their IT infrastructures.
Many smaller companies view the hybrid cloud as a sensible balance between offloading storage and computational time to a public cloud, and keeping a firm’s computational services all on premises. The good news is that unlike bigger enterprises, SMEs moving to hybrid clouds won't need to jerry-rig older legacy infrastructures - potentially opening security holes in the computer network. MSPs can steer that migration to the hybrid cloud with "clean" deployments by starting from scratch.
(TNS) - At least three people have died in severe weather in the southern states of the United States, where tornadoes, damaging hail and flash floods left a swath of destruction.
Tornadoes churned across many states, from Louisiana to Georgia, but the most destructive were in Louisiana and Mississippi.
More than 30 people were injured in the storms. Two people died in the hamlet of Convent, Louisiana, after a tornado demolished more than 160 mobile homes.
The third victim died in a trailer park in Purvis, Mississippi.
The storm left tens of thousands of people without power in Louisiana, and John Bel Edwards, the state governor, declared a state of emergency in seven parishes.
The powerful storm developed when the jet stream dived across the region on Tuesday. A jet stream is a fast-flowing ribbon of air, blowing high above the Earth's surface, which can dictate the path of storms and can also encourage their development.
State health officials were heartened when President Barack Obama this month asked Congress for $1.8 billion to combat the spread of the Zika virus because they fear they don't have the resources to fight the potentially debilitating disease on their own.
Budget cuts have left state and local health departments seriously understaffed and, officials say, in a precarious situation if the country has to face outbreaks of two or more infectious diseases -- such as Zika, new strains of flu, or the West Nile and Ebola viruses -- at the same time.
"We have been lucky," said James Blumenstock of the Association of State and Territorial Health Officials, of states' and localities' ability to contain the flu, West Nile and Ebola threats of the last five years.
Cloud computing has become a significant topic of conversation in the technology industry and is being seen as a key delivery mechanism for enabling IT services. Today’s reality is that most organizations already are using some form of cloud because it opens up new opportunities and has become engrained in the fabric of how things are done and how business outcomes are achieved.
Cloud offers a host of service and deployment models: both on- and off-premises, across public, private, and managed clouds. We see some organizations starting with public cloud because of the perceived ease of entry and lower costs. Some organizations, such as test and development groups, use public clouds because they need to quickly stand up infrastructure, test and run their application and take it down, and this can’t be supported by their existing IT team. Other companies, such as startups, use public clouds because they simply don’t have the resources to build, own and manage a private cloud infrastructure today. We’re also seeing a rather significant shift back towards private clouds, which are becoming much easier and quicker to deploy and still come with IT control and peace-of-mind security benefits.
That said, every organization’s cloud is a unique reflection of its business strategies, priorities and needs; and this is why there is a great variation in how companies go about implementing their own specific clouds.
We’re constantly hearing about how the lack of rain in much of the Southwest has contributed to the worst drought in the history of the region, but the subject of water doesn’t come up much with respect to data centers.
However, it should garner just as much attention—specifically water treatment programs—according to Data Center World speaker Robert O’Donnell, managing partner of Aquanomix.
“The water management program is a huge risk in data centers; one that many facility owners don’t understand or give enough credence to,” he says.
Outbreaks of Zika have been reported in tropical Africa, Southeast Asia, the Pacific Islands, and most recently in the Americas. Because the mosquitoes that spread Zika virus are found throughout the world, it is likely that outbreaks will continue to spread. Here are 5 things that you really need to know about the Zika virus.
Zika is primarily spread through the bite of an infected mosquito.
Many areas in the United States have the type of mosquitoes that can become infected with and spread Zika virus. To date, there have been no reports of Zika being spread by mosquitoes in the continental United States. However, cases have been reported in travelers to the United States. With the recent outbreaks in the Americas, the number of Zika cases among travelers visiting or returning to the United States will likely increase.
These mosquitoes are aggressive daytime biters. They also bite at night. The mosquitoes that spread Zika virus also spread dengue and chikungunya viruses.
Protect yourself from mosquitoes by wearing long-sleeved shirts and long pants. Stay in places with air conditioning or that use window and door screens to keep mosquitoes outside. Sleep under a mosquito bed net if air conditioned or screened rooms are not available or if sleeping outdoors.
Use Environmental Protection Agency (EPA)-registered insect repellents. When used as directed, these insect repellents are proven safe and effective even for pregnant and breastfeeding women.
Do not use insect repellent on babies younger than 2 months old. Dress your child in clothing that covers arms and legs. Cover crib, stroller, and baby carrier with mosquito netting.
Read more about how to protect yourself from mosquito bites.
Infection with Zika during pregnancy may be linked to birth defects in babies.
Zika virus can pass from a mother to the fetus during pregnancy, but we are unsure of how often this occurs. There have been reports of a serious birth defect of the brain called microcephaly (a birth defect in which the size of a baby’s head is smaller than expected for age and sex) in babies of mothers who were infected with Zika virus while pregnant. Additional studies are needed to determine the degree to which Zika is linked with microcephaly. More lab testing and other studies are planned to learn more about the risks of Zika virus infection during pregnancy.
We expect that the course of Zika virus disease in pregnant women is similar to that in the general population. No evidence exists to suggest that pregnant women are more susceptible or experience more severe disease during pregnancy.
Because of the possible association between Zika infection and microcephaly, pregnant women should strictly follow steps to prevent mosquito bites.
Pregnant women should delay travel to areas where Zika is spreading.
Until more is known, CDC recommends that pregnant women consider postponing travel to any area where Zika virus is spreading. If you must travel to one of these areas, talk to your healthcare provider first and strictly follow steps to prevent mosquito bites during the trip.
If you have a male partner who lives in or has traveled to an area where Zika is spreading, either do not have sex or use condoms the right way every time during your pregnancy.
For women trying to get pregnant, before you or your male partner travel, talk to your healthcare provider about your plans to become pregnant and the risk of Zika virus infection. You and your male partner should strictly follow steps to prevent mosquito bites during the trip.
Returning travelers infected with Zika can spread the virus through mosquito bites.
During the first week of infection, Zika virus can be found in the blood and passed from an infected person to a mosquito through mosquito bites. The infected mosquito must live long enough for the virus to multiply and for the mosquito to bite another person.
Protect your family, friends, neighbors, and community! If you have traveled to a country where Zika has been found, make sure you take the same measures to protect yourself from mosquito bites at home as you would while traveling. Wear long-sleeved shirts and long pants, use insect repellent, and stay in places with air conditioning or that use window and door screens to keep mosquitoes outside.
For more information on the Zika virus, and for the latest updates, visit www.cdc.gov/zika.
February is American Heart Month. In light of that, it seems only fitting that we should check the pulse of a challenge faced by many in Healthcare IT: disaster recovery.
In a training class several weeks ago, Ryan, an incredibly enthusiastic sales engineer, and I had a conversation about disaster recovery. “Disaster recovery is so much more than the question of, ‘Will I pass the audit?’” he began. “Buildings fall apart, water rises, systems fail, snow falls, power surges,” he explained, making imaginary drawings in the air to emphasize his points. “Anything that stops hospital operations for a period of hours is definitely a disaster.”
“The great thing is that Citrix is on top of it,” he confidently added. Ryan backed that statement with a contrasting tale of two US hospitals – one in Texas that was plagued by human error and another in the Southwest that experienced equipment failure after a power surge.
The Internet of Things (IoT) generates a lot of data, which organizations can store in the cloud. But how are they keeping it all safe?
Many companies are realizing they face this challenge and are ramping up efforts to improve data security as they embrace new platforms, including IoT and cloud-based applications, according to a recent survey conducted by 451 Research.
The survey, sponsored by data and cloud security vendor Vormetric, polled 1,114 senior IT executives, representing companies ranging from $50 million to more than $2 billion in annual sales.
CHICAGO — With a forecast that includes the potential for heavy snow and high winds, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) Region V encourages everyone to get prepared.
“If you must leave home in dangerous weather conditions, take precautions to get to your destination safely,” FEMA Region V Administrator Andrew Velasquez III said. “Taking simple steps to prepare before the storm not only keeps you safe, but others as well.”
Follow the instructions of state and local officials and listen to local radio or TV stations for updated emergency information. If you are told to stay off the roads, stay home, and when it is safe, check on your neighbors or friends nearby who may need assistance.
Find valuable tips to help you prepare for severe winter weather at www.ready.gov/winter-weather or download the free FEMA app, available for your Android, Apple or Blackberry device. Visit the site or download the app today so you have the information you need to prepare for severe winter weather.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
You have probably heard the old saying that “a lie will go round the world while truth is pulling its boots on.” But you may not have considered this: “A crisis can do half its damage before the crisis plan is even found!”
And every minute a crisis goes unmanaged, costs may be piling up.
For example: the longer your people go without clear guidance, or worse, wait to execute on your crisis management plans, the more likely it is that your situation will escalate. And what if the instructions for shutting down a manufacturing line come too late? That expensive equipment could end up a total loss.
Application containers, namely Docker containers, have been heralded as the great liberators of developers from worrying about infrastructure. Package your app in containers, and it will run in your data center or in somebody’s cloud the same way it runs on your laptop.
That has been the promise of the technology, based on the long-existing concept of Linux containers, around which the San Francisco startup named Docker built its application building, testing, and deployment platform. While developers love the concept of Docker, IT managers who oversee the infrastructure those applications eventually have to be deployed on have certain processes, policies, requirements, and tools that weren’t necessarily designed to support the way apps in Docker containers are deployed and the rapid-fire software release cycle they are ultimately meant to enable.
This week, Docker rolled out into general availability its answer to the problem. Docker Datacenter is meant to translate Docker containers and the set of tools for using them for the traditional enterprise IT environment. It is a suite of products that enables the IT organization to stand up an entire Docker container-based application delivery pipeline that is compatible with IT infrastructure, tools, and policies already in place in the enterprise data center.
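The portability promise described above can be sketched with a few Docker CLI commands; the image name and registry address here are hypothetical, for illustration only:

```shell
# Build the application image once, from a Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Run it locally exactly as it would run on any other Docker host.
docker run -d -p 8080:8080 myapp:1.0

# Tag and push the same image to a private registry (for example, a Docker
# Trusted Registry instance) so it can be pulled unchanged into the data
# center or a cloud environment.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```

Because the image bundles the application and its dependencies, the artifact that passed testing on a laptop is byte-for-byte the artifact deployed to production.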
Dell Inc. said Tuesday that it has received U.S. regulatory clearance to proceed with its planned $67 billion purchase of data storage company EMC Corp.
Round Rock, Texas-based Dell Inc. has passed a mandated waiting period under antitrust laws that are intended to allow the U.S. Federal Trade Commission time to review the purchase. If no FTC action is taken, the purchase can proceed.
But the Dell Inc. deal still has to receive regulatory approvals from other jurisdictions and from EMC shareholders. The Reuters news service reported last week that European regulatory approval is expected.
In an article aimed at people new to business continuity, Jennifer Craig examines the basic content of a business continuity plan, describing seven components that should be incorporated in every plan:
1. Initial response
When something disrupts day-to-day operations, everyone should understand what – if anything – they should do immediately. By planning for that – and exercising it – no one will be running in circles muttering “What’ll we do? What’ll we do?”
Whoever notices the ‘event’ should know what to do (like calling emergency services, alerting Security, pulling the fire alarm, etc.). Protocols for alerting the proper decision-makers should be planned (along with contact information for those decision-makers).
The initial response should also include a clear plan for who will be ‘in charge’. Whether that authority sits locally, regionally, or corporately, make it clear so that all participants understand.
This year’s Disaster Recovery Journal Spring World event is nearly here; don’t miss MissionMode at this year’s show.
Event: DRJ Spring World 2016
Location: Orlando, FL
Date: March 13-16, 2016
This year’s theme, Innovation to Ensure Resiliency, is perfect for the largest assembly of business continuity professionals in the industry. This is your opportunity to learn about the latest tools and best practices for BC/DR success.
Make the most out of your time at Spring World:
Do Some Pre-Reading
Download MissionMode’s latest whitepaper, “Incident Management Systems – A Business Continuity Program Game Changer” to see how more and more companies are improving BC/DR program maturity by adopting incident management systems. These systems, including MissionMode’s Situation Center Suite, drive business continuity management efficiency and process standardization. Read our white paper on your trip to Orlando and stop by our booth for a demo.
Visit MissionMode Booth #507
Meet the MissionMode team and get a live demonstration of our Situation Center Suite. You won’t believe how easy the system is to use and how quickly it can help your business continuity teams better execute the plans you’ve developed.
Schedule time to meet with MissionMode Chief Operations Officer, Jason Zimmerman
For a serious discussion of how your organization can benefit from deployment of MissionMode Incident Management Solutions, schedule time with the experts. Jason has helped hundreds of MissionMode clients scope their needs and customize our Situation Center tools to address key pain points.
Have some fun in Orlando!
It’s winter, it’s Florida and it’s fun! Take a little time to enjoy some of Orlando’s top attractions:
- Walt Disney World
- The Wizarding World of Harry Potter
- Universal Studios
- Cirque du Soleil
Or just enjoy the area’s fine dining and warm winter temperatures. Today’s temperature – 81 degrees!
How your organization would respond if under attack from a physical assault or fire is obvious. Someone would dial 911 and emergency services would arrive quickly to assist. Unfortunately, the same can’t be said if your organization is the target of a cyber attack. Your best offense in this scenario is to create a resilient defense against cyber attacks. Let’s take a look at the top priorities any organization should adopt to build a reliable defense against cyber threats.
Evaluate Your Skills, Fill Gaps
It’s crucial to evaluate your security team’s core capabilities when it comes to shielding the organization from cyber threats. When gaps in expertise are uncovered, develop training, schedule mock exercises and partner with other entities who make it their business to shield yours from cyber attacks.
(TNS) - Before the walls shook, before the two-by-fours twisted and the roof began tearing off, Amanda Bose saw news about the tornado on television.
“Everybody in the bathroom — right now!” the 36-year-old mother told her 5-year-old and 15-year-old. There was almost no time to wonder, she says, whether the home would protect them — or collapse around them.
Similar scenes played out in homes across North Texas during the Dec. 26 storm, which destroyed 159 houses and did major damage to 311 in Rowlett alone. Damage from the storm will reach $1.2 billion, the Insurance Council of Texas estimates.
When an IT incident strikes, every minute spent offline could cost your company thousands. When Amazon.com experienced a 100 millisecond slowdown in webpage load times, it resulted in a 1% decrease in sales, equating to a loss of $660 million in online revenue!
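To put that figure in context, a quick back-of-the-envelope check (our arithmetic, not a figure from the webinar) shows the revenue base that a 1% = $660 million loss implies:

```shell
# If 1% of annual online revenue equals $660 million, the implied
# annual revenue base is 660e6 / 0.01, expressed here in billions.
awk 'BEGIN { printf "$%.0f billion\n", (660e6 / 0.01) / 1e9 }'
```

That works out to an annual online revenue base of roughly $66 billion, which is why even a 100 millisecond slowdown carries such a large price tag.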
Communication with your IT support team is the key to getting your company back up and running, but what if your IT professionals work thousands of miles away from the problem? Make sure you’re optimizing your communication strategies so you can resolve IT incidents faster and avoid costly disruptions.
Justin Ong moderated this panel discussion that covered best practices to reduce an IT incident’s Mean Time To Know, the leading cause for why IT incidents aren’t resolved as quickly as they could be. The webinar’s expert panel consisted of IT professional Liz Tesch, and Everbridge’s own Vincent Geffray and Frank Basso.
Directly addressing concerns about its readiness for production, application container leader Docker is rolling out a "container-as-a-service" platform designed to ease application development and management at scale.
The Docker Datacenter unveiled Tuesday (Feb. 23) seeks to combine the inherent agility of application containers with greater control and security as enterprises attempt to scale container technology. Aiming to deliver on its "build, ship and run" mantra, the new container service is a "metaphor for pulling everything together" as container technology moves to production, according to Scott Johnston, Docker's senior vice president of product management.
Docker's holistic approach includes a control plane that can be used in the datacenter or in a private cloud along with the company's trusted registry and lightweight runtime. As an example of container agility, Johnston noted in an interview that the new service could help reduce the time needed to push an application change to production from weeks to as little as a day.
At a time when security is top-of-mind for every IT and business leader–from the boardroom to the executive suite to the front lines of operations–Citrix is coming to RSA with solutions and strategies to address the latest enterprise security requirements.
To set the stage, this post provides essential resources for everyone concerned with managing risk in the enterprise to bring you up to date on the latest thinking so you can use your time at RSA productively.
As transformative trends like mobility, BYO and the Internet of Things drive the expansion and evolution of the network perimeter, enterprises need new ways to provide access for employees, contractors, partners and customers while managing risk. With Citrix solutions, companies can secure and control applications, data and usage in any scenario to keep people productive wherever and however they choose to work.
Read our solution brief “Managing Risk by Protecting Apps, Data and Usage” and watch the video below to learn more about the Citrix approach to enterprise security.
Docker announced a new container control center today it’s calling the Docker Datacenter (DDC), an integrated administrative console that has been designed to give large and small businesses control over creating, managing and shipping containers.
The DDC is a new tool made up of various commercial pieces including Docker Universal Control Plane (which also happens to be generally available today) and Docker Trusted Registry. It also includes open source pieces such as Docker Engine. The idea is to give companies the ability to manage the entire lifecycle of Dockerized applications from one central administrative interface.
Customers actually were the driving force behind this new tool. While companies liked the agility that Docker containers give them, they also wanted management control over administration, security and governance around the containers they were creating and shipping, Scott Johnston, SVP of product management, told TechCrunch.
(TNS) - Pennsylvania Gov. Tom Wolf today asked President Barack Obama to declare last month's record snowstorm a major disaster, which would make the state and municipalities in at least 26 counties eligible for reimbursement of 75 percent of their costs.
In a news release, the administration said that Pennsylvania has identified more than $55.4 million in expenses related to cleanup from the storm Jan. 22-23. The state Emergency Management Agency has been compiling costs reported by communities throughout the state to make the initial request for federal disaster relief.
The storm, which was concentrated more in central and eastern Pennsylvania, dumped more than three feet of snow in some areas. Weather-related traffic accidents tied up west-bound traffic on the Pennsylvania Turnpike and stranded some motorists for more than 24 hours between Bedford and Somerset.
When it comes to business IT solutions, cloud computing is unquestionably the way forward for many companies. Over the last few years, this technology has gone from being a hyped-up buzzword to a central part of the way organisations of all shapes and sizes operate. But if you’re coming to the cloud for the first time it may seem like a minefield, with a huge range of tools and deployment options to choose from. Get it right and you can be well set for years to come, but go down the wrong route and it can be costly and time-consuming to correct your course. One of the biggest decisions you’ll have to make is what type of cloud to go for. There are three key options here – public, private and hybrid. Each has its own pros and cons and may be better-suited to some scenarios than others. So which option is the best for your business? This decision will depend on many factors, such as the type of data you have, how flexible you need to be and your level of in-house IT resources. If you’re unsure about what will work best when you’re choosing a cloud solution, read on for our top tips on each option and what it could do for your business.
Pacific Rim economies’ exposure to the increasing threat of natural disasters has provided impetus for governments and the private sector to jointly address the need for more robust safeguards in the region.
Finance officials from the 21 APEC member economies, the world’s most disaster affected region, ramped up their collaboration to improve risk assessments and insurance coverage during meetings that concluded recently in Lima. The focus was on narrowing gaps in data gathering and financial protection needed to build economic resiliency among them, boosted by policy inputs from disaster risk experts from the OECD, the World Bank and industry.
“About two-thirds of reported disaster losses in APEC economies are uninsured on average and vulnerabilities in the region’s developing economies are even more severe,” noted Gregorio Belaunde, director of risk management at the Ministry of Economy and Finance of Peru, who guided the proceedings. “Quantifying disaster risk exposure is a prerequisite for reducing financial protection gaps which APEC is working to facilitate as climate change raises the stakes. It also helps to reduce physical disaster risk.”
APEC economies collectively account for about 3 billion people, half of global trade, 60 percent of total GDP and much of the world’s growth. They also experience more than 70 percent of all natural disasters and these are increasing in frequency and intensity as a result of climate change. Significantly, APEC economies incurred over USD 100 billion annually in related losses over the last decade.
Officials pinpointed the components of disaster risk as well as the technical requirements for model development and data gathering necessary to accurately assess them, drawing on best practices and case studies from the public and private sectors. They also shared real world lessons and guidance for creating systems that bring insurance companies together to form ‘catastrophe insurance pools’ that can rapidly boost insurance penetration.
The findings from IDC's recent IT services end-user survey reveal that the top themes for IT services spending in the Asia Pacific, excluding Japan, (APeJ) region are: security enhancement; business continuity and disaster recovery services; and IT staff retention and training.
“A comparison of two years’ results on the top themes for IT services spend shows that APeJ organizations have moved beyond the infrastructure consolidation phase to focus on improving reliability, security and resilience of the enterprise infrastructure and systems in order to be better prepared for the digital transformation wave. This is a huge and necessary positive step, allowing the CIO focus to shift from technology to people and process. As a result, we expect the IT education and training services market in the region to grow strongly, driven by a huge demand for re-skilling,” said Cathy Huang, research manager, Services and Cloud Research Group, IDC Asia/Pacific.
The survey data reveals interesting sub-trends within the broader context of enterprise expectations of transformative technologies and services.
The Internet of Things is rich in promises. Besides the old (by now) examples of connecting your fridge or coffee machine to the Web, the possibilities for connecting, controlling and optimizing “things” are vast. They range from monitoring and reducing energy usage in buildings to preventing oil pump failure in remote oil fields, and from cutting aircraft jet engine fuel bills to helping people park better in cities. In fact, “better” is often the keyword. The IoT or IIoT (Industrial Internet of Things) offers considerable potential for improvement. But what does it do for business continuity – and could we conceivably end up worse off for BC because of the IoT?
In today’s 24-hour news environment, most senior legal officers across corporate America acknowledge the importance of communications with stakeholders during high-profile lawsuits. Yet the majority have outdated strategies or no strategies at all to direct communications outside of court, according to a new survey conducted by Greentarget.
This lack of preparation leads to overly conservative communications, the survey shows, with decisions and actions that are often impulsive and governed by the fear of negative media attention. Ironically, these instincts can compound the likelihood of reputational damage.
“The fact is that most senior legal officers can name the top two or three lawsuits they never want their companies to face,” said Larry Larsen, senior vice president of Greentarget and head of the firm’s Crisis & Litigation Communications Group. “They should take some level of control and prepare for what’s to come.”
The Homeland Security Simulation Center offers realistic training on disaster preparedness and response through a virtual reality platform
After first responders in Gresham, Ore., handled a high school shooting, emergency management officials realized that they needed to improve their training, especially for law enforcement.
The incident had “a lot more complexity than just neutralizing a threat, which is what they’re focused on,” said Kelle Landavazo, emergency management coordinator for Gresham.
Reuniting students with panicked parents who are arriving at a campus — while keeping track of who has been picked up, and by whom — is a major logistical challenge. So is coordinating the efforts of everyone who is responding.
It seems the enterprise is approaching container technology with a mixture of anticipation and trepidation as it seeks to establish architectures that offer broader scalability and are more suitable to microservices than standard virtualization.
But the growing number of deployments is starting to point out the challenges inherent in container-based data environments, although it appears that most of the issues can be overcome by a proper management stack and a reasonably good understanding of what containers can and cannot do.
At the moment, much of the momentum behind containers comes from developers, says CIO.com’s Clint Boulton, while CIOs and other c-suite executives are a little more wary. At a recent Wall Street Journal gathering, Docker CEO Ben Golub focused primarily on the technology’s ability to support cloud-based app development and testing even as an online poll showed a fair amount of skepticism of containers’ value proposition and whether it could do anything that simple virtualization or platforms like Red Hat’s OpenShift could not. One key advantage that containers bring to the table is that they do not rely on a guest operating system, which in turn should provide a more integrated change management structure to enable the kind of continuous delivery and integration required of cloud-based apps and services.
The world’s biggest technology companies are handing over the keys to their success, making their artificial intelligence systems open-source.
Traditionally, computer users could see the end product of what a piece of software did by, for instance, writing a document in Microsoft Word or playing a video game. But the underlying programming – the source code – was proprietary, kept from public view. Opening source material in computer science is a big deal because the more people that look at code, the more likely it is that bugs and long-term opportunities and risks can be worked out.
Openness is increasingly a big deal in science as well, for similar reasons. The traditional approach to science involves collecting data, analyzing the data and publishing the findings in a paper. As with computer programs, the results were traditionally visible to readers, but the actual sources – the data and often the software that ran the analyses – were not freely available. Making the source available to all has obvious communitarian appeal; the business appeal of open source is less obvious.
(TNS) - Will Montgomery County build a backup 911 call center or opt for a regional service?
County officials, who have been mandated by the state to offer a backup facility, will have to make a decision concerning the center. The practicality of a backup was made plain the week of July 4, 2012, when a strong wind ripped the roof from the current center.
County Manager Matthew Woodard said the N.C. Legislature has passed a bill mandating that counties have a reserve facility in case the regular call center goes off-line or a widespread emergency requires backup. He said that in the case of the 2012 windstorm, emergency communications could have been disrupted if rain had damaged equipment.
(TNS) - Floodwaters, like many natural disasters, are not contained by political boundaries.
But on Monday, when overflowing Cowiche Creek inundated county and city homes, emergency management staff for both jurisdictions were not talking to each other about services for displaced residents.
“Between our office, the Red Cross, and the individuals in the Riverview Manor Mobile Home Park, there were some difficulties getting ahold of the city,” said Scott Miller, director of the Yakima County Office of Emergency Management.
NEW YORK – STOPit announces the launch of STOPit PRO – the only compliance reporting platform that enables companies to mitigate risk and prevent financial liabilities by empowering employees to anonymously report fraud, unethical behaviors and product-related issues.
As a 21st century solution for deterring, mitigating and investigating all forms of inappropriate conduct in the workplace, STOPit PRO provides uniquely anonymous two-way dialogue between the employee and company officials – including risk managers, general counsel and HR departments.
Employees can use the STOPit PRO mobile app to provide real-time reports and messages, including incident-related photo and video documentation. Employers can then follow up for additional information through the app, with all interactions remaining anonymous.
They say to be forewarned is to be forearmed, and nowhere is that more important than in IT security. Cisco this week unveiled a Cisco Firepower Next Generation Firewall that incorporates data from threat intelligence services to better secure applications before attacks are ever launched.
Rather than simply apply access controls to an application, says Dave Stuart, senior director of product marketing for network security in the Cisco Security Business Group, Cisco Firepower firewalls provide a more comprehensive approach to IT security that includes intrusion prevention, malware protection and reputation-based URL filtering. Stuart says that Cisco is moving to cut the time taken to discover malware from what is usually 100 to 200 days to an average of 17.5 hours.
The goal is to not only reduce the total cost of providing that security, but to also take advantage of technologies such as Cisco Identity Services Engine (ISE) to provide higher levels of security. Longer term, Stuart says, IT organizations should expect to see Cisco take advantage of machine algorithms and artificial intelligence to increasingly automate much of the management of IT security at the network layer.
It’s always been something of a conundrum. Plenty of midsize companies would certainly value being able to take advantage of software development resources that have emerged all over the world, but they simply lack the global footprint or resources of the Fortune 500 companies that have created the market for global outsourcing. However, it seems that conundrum may be a thing of the past.
A key change agent appears to be Accelerance, a global software development outsourcing services provider in Redwood City, Calif., that has created a global network of software development teams to work with SMBs. I recently had a fascinating conversation with Accelerance CEO Steve Mezak, and the company’s president, Andy Hilliard, and I got a first-hand account of how it all started. Mezak took me back to when he was working with a software development company in St. Petersburg, Russia:
In the early 2000s, I started looking at these other [software development] companies, and realized that some of them were good, but not all of them; and that the challenge my clients had in looking at using my firm, was that it was a very crowded market, which made it very difficult for them to decide who to choose. So I thought, let’s go out into the world and find great companies and vet them and make sure that they’re good, and then offer a variety of services to clients.
Are losses caused by the presence of, or exposure to, mold or fungus in a building covered by liability insurance? That question has never been easy to answer, and at the end of 2015, the Texas Court of Appeals added further complication to the already confusing structure of mold insurance law in America.
In a case titled In re: Liquidation of Legion Indemnity Company, the Director of Insurance in the State of Illinois was acting as liquidator for Legion Indemnity Company. The liquidator asked the court to disallow a claim by 23 governmental employees who had obtained a judgment against a construction company in a negligence action related to bodily injury the employees suffered from exposure to toxic mold during the course of their construction employment. Claimants sought to collect their judgment from the insurance company under a comprehensive general liability policy issued by Legion. Legion had been placed in liquidation prior to the claims for judgment being entered, so claimants filed a claim against the liquidator.
In the policy at issue, the insurance did not cover losses arising from either “contamination” of the environment by a pollutant or on account of a single, continuous or intermittent or repeated exposure to any “health hazard.” The policy defined the term “contaminant” to mean any unclean, unsafe, damaging, injurious or unhealthful condition arising out of the presence of any pollutant, whether permanent or transient, in any environment. The policy further defined “health hazard” to mean any chemical, alkaline, radioactive material or other irritants or any pollutant or other substance, product or waste product, where the fumes or other discharges or effects therefrom, whether liquid, gas or solid or gaseous are determined to be toxic or harmful to the health of any person, plant or animal.
The Business Continuity Institute recently published a very welcome positioning statement, looking to set out its view on organizational resilience. In this article David Honour, editor of Continuity Central, looks at the statement and invites business continuity and resilience professionals to have their say.
In the preamble to the positioning statement, BCI board member Tim Janes states that its aim “is to add clarity regarding the position of business continuity in the context of organizational resilience. It also provides the BCI’s perspective on how the development of resilience concepts may impact on the practice of business continuity.” There is certainly a need for such clarification. I have attended many webinars on the subject of organizational resilience and there is little agreement about how to define it, where its boundaries are, what it includes, and where it sits in relation to business continuity, risk management and other protective disciplines.
The Internet of Things (IoT) has gone from a concept not many people grasped clearly to a tangible, living and breathing phenomenon on the verge of changing the way we live—and the way data centers strategize for the future.
At the very least, data center managers had better develop new strategies for handling the IoT and all the data that could overwhelm current systems.
What does the volume of data look like? In the past five years, traffic volume has already increased five-fold; and according to a 2014 study by Cisco, annual global IP traffic will pass a zettabyte and surpass 1.6 zettabytes by 2018. Non-PC devices—expected to double the global population by that year—will generate more than half that traffic.
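A five-fold increase over five years is easier to reason about as an annual growth rate; a quick calculation (our arithmetic, not a figure from the Cisco study) gives the implied compound annual growth rate:

```shell
# Compound annual growth rate implied by 5x growth over 5 years:
#   CAGR = 5^(1/5) - 1, computed via exp/log since awk lacks a power operator
awk 'BEGIN { printf "%.1f%%\n", (exp(log(5) / 5) - 1) * 100 }'
```

That is roughly 38 percent growth per year, every year, which is the scale of sustained expansion data center capacity planning has to absorb.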
To be a successful managed service provider, you need to protect your customers’ critical business data. This involves a lot more than just providing a simple backup and disaster recovery solution. After all, what will you do if a client has lost all power or can’t access their office? The missing piece? Intelligent business continuity.
Here are three essentials of a top-notch business continuity plan for your customers' businesses (as well as your own).
JEFFERSON CITY, Mo. – Five more home improvement stores— in St. Louis, St. Charles and Jefferson counties — are teaming up with the Federal Emergency Management Agency (FEMA) to provide local residents with free information, tips, flyers and brochures to prevent and lessen damage from disasters.
FEMA mitigation specialists will be available over the next six days to answer questions and offer home improvement tips on making homes stronger and safer against disasters. Most of the information is geared toward do-it-yourself work and general contractors.
Advisers will be available February 18-23 at the following locations . . .
- Lowe's at 6302 Ronald Reagan Drive, Lake St. Louis, MO 63367 (St. Charles County)
- Home Depot at 3891 Mexico Rd, St. Charles, MO 63303 (St. Charles County)
- Home Depot at Chesterfield Commons, 390 THF Blvd., Chesterfield, MO 63005 (St. Louis County)
- Home Depot at 11215 St. Charles Rock Road, Bridgeton, MO 63044 (St. Louis County)
- Lowe’s at 920 Arnold Commons Drive, Arnold, MO 63010 (Jefferson County)
During these times . . .
- Thursday to Saturday 7 a.m. to 7 p.m.
- Sunday 8 a.m. to 6:30 p.m.
- Monday 7 a.m. to 7 p.m.
- Tuesday 7 a.m. to 4:30 p.m.
Mitigation teams will also have free reference booklets on protecting your home from flood damage. More information about strengthening property can be found at www.fema.gov/what-mitigation.
For breaking news about flood recovery, follow FEMA Region 7 on Twitter at https://twitter.com/femaregion7 and turn on mobile notifications or visit the FEMA webpages dedicated to this disaster at www.fema.gov/disaster/4250.
All FEMA disaster assistance will be provided without discrimination on the grounds of race, color, sex (including sexual harassment), religion, national origin, age, disability, limited English proficiency, economic status, or retaliation. If you believe your civil rights are being violated, call 800-621-3362 or 800-462-7585(TTY/TDD).
The security industry has started to go through a transformation. The transformation is part evolution and part maturity. Exploits and attack techniques advance rapidly and a quick look at the headlines on any given week demonstrates that traditional network and endpoint security solutions are proving inadequate. The companies that form the new breed of security are bringing unique and innovative approaches to the problem rather than just tweaking the same old broken security model.
If you follow the money, it seems investors also see the proverbial writing on the wall and are actively looking for the “next big thing”. Companies like HackerOne, Cylance, and Venafi have benefited from a spike in security industry investments. Code42 and Tenable even made the CB Insights list as the top-funded startups for their respective states. Today, Vera announced that it has closed a $17 million round of Series B financing—bringing its total to over $31 million in funding.
A post from CSO in August of 2015 explained, “CB Insights reported that in the first half of 2015, venture firms invested $1.2 billion into cybersecurity startups. Yup, you read it correctly – one point two billion in just the first six months of 2015.”
During the past decade, while security threats have evolved quickly, the goal of security staffs remains the same, but has gotten far harder to fulfill: Protect all the devices that hold critical data and offer potential ways into an organization’s back end.
Doug Cahill, the senior analyst on cybersecurity at Enterprise Strategy Group, discussed at Dark Reading findings and recommendations on endpoint security that emerged from interviews with what he says are dozens of security folks.
The best approaches involve picturing the elements of security (methodology, prevention, detection and response) holistically and not as discrete and separate elements: Protect as one dresses for the cold, in layers; be proactive (this suggestion is primarily aimed at large organizations); have a spectrum of starting points, or entry points, in the security realm.
Several years ago Facebook shut down an entire data center to test the resiliency of its application. According to Jay Parikh, the company’s head of engineering, the test went smoothly. The data center going offline did not disrupt anybody’s ability to mindlessly scroll through their Facebook feed instead of spending time being a contributing member of society.
Facebook and other web-scale data center operators, companies that built global internet services that make billions upon billions of dollars, have shifted the data center resiliency focus from redundancy and automation of the underlying infrastructure – the power and cooling systems – to software-driven failover. A globally distributed system that consists of so many servers can easily lose some of those servers without any significant impediment to the application’s performance.
That’s not to say they’ve abandoned backup generators, UPS systems, and automatic transfer switches. You’ll still see all of those things in Facebook data centers; it’s just that they are no longer the single line of defense.
Cloud native apps are now being built using distributed systems, clustering and built-in fault tolerance so that a failure of any component cannot bring the application down. Furthermore, the application can be scaled on demand.
So, why can’t we build IT management systems that way? They are nothing but a meta-app that converts bare metal hardware into a software-driven cloud that can be consumed via APIs.
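The software-driven failover described above can be reduced to a very small sketch. This is a toy illustration, not any particular vendor's implementation: replicas are stood in for by plain callables, and a request succeeds as long as at least one replica is up.

```python
def call_with_failover(replicas, request):
    """Try each replica in turn; return the first successful response.

    A replica that is down raises ConnectionError. This is the essence of
    software-driven resiliency: losing some servers does not take the
    application down, only exhausting all of them does.
    """
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as exc:
            errors.append(exc)  # record the failure and move on
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

def down(_request):
    raise ConnectionError("replica unreachable")

def up(request):
    return f"served: {request}"

# Two of three replicas are down; the request still succeeds.
print(call_with_failover([down, down, up], "GET /feed"))  # served: GET /feed
```

Real systems layer retries, health checks, and load balancing on top, but the principle is the same: availability comes from the software routing around failures, not from any single machine staying up.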
In the past I have argued that management systems are like puppies that need special attention. Their installation, maintenance and upgrade significantly increase the operational expenses of running an enterprise datacenter. Think about how Boeing builds new planes – every new model is better than the previous generation planes in fuel efficiency, level of automation, etc. That cannot be said of IT infrastructure management systems.
(TNS) - Lawrence County officials hope infrastructure upgrades in the aftermath of December’s flooding issues will help prevent future damage to county roads.
Repairs to shoulders and gravel roads are almost complete, County Engineer Ben Duncan said. Destroyed drainage systems on Lawrence 328, 326 and 429 will take some work.
“As long as everything runs smoothly, I would hope to be done within two months. That’s being optimistic,” Duncan said. “We’re still repairing things. We’ve still got a long ways to go.”
Duncan said officials discussed the reimbursement process for road repairs during last week’s meeting with the Federal Emergency Management Agency. FEMA declared 38 counties, including Lawrence, disaster areas after the Dec. 23-31 storms, making them eligible for federal funding.
(TNS) - Federal disaster-aid programs, including flood insurance, have paid nearly $43 million so far to Missouri residents and business owners who suffered damage from record rainfall and flooding in late December.
The largest single amount, $29 million, represents claims by 766 holders of flood-insurance policies. The Federal Emergency Management Agency, which administers the insurance program, made $9.7 million in grants to 1,715 households for uninsured losses. The U.S. Small Business Administration also has approved $4 million for 82 loans, mostly for residential repairs, with 141 other applications under review.
In theory, a flooded household can qualify for all three.
Apple's refusal to follow a court order to support the FBI's San Bernardino shooter investigation was the right move for the company and for its customers, as my colleagues and I cover in Fatemeh Khatibloo's blog post here, and in our full, detailed report, here. As we discuss, there are many constituents with a large stake in the outcome of this case, but I will focus on security and risk management decision makers in this post.
There are four key implications to consider:
2016 will be an exciting year for Mail-Gard as we celebrate our 20th Anniversary. But before we look ahead, we wanted to spend a moment reviewing 2015, which was another strong year for Mail-Gard.
We were fortunate to not have any formal disaster declarations in 2015. However, we had a few close calls with weather-related issues and possible work stoppages. Our customers know putting us on “alert” to a possible impending event is a smart preparation tool should an actual business disruption occur.
As our recovery business continued its growth, we saw an increase in the operational recovery services provided to our customers. Being able to assist them with peak production loads, as our testing schedule permits, is one of the benefits of our recovery solution, along with providing real-time recovery process reviews.
The cyber thief develops a new advantage, breaks into an IT system, and swipes data. An enterprise spots the hack too late, figures out how it was done, and changes its defense to stop the hack from happening again. The defense holds until the cyber thief figures out the next work-around.
That is the action/reaction cycle. Like a perverse iteration of Newton's third law, every clever action is followed by an equally clever reaction.
Companies are getting wise to this, adding depth to their cyber-defenses to contain, rather than prevent breaches. Yet, there can be no change in strategy without a change in thinking first.
From an investor’s point of view, Rackspace Hosting is now operating in uncharted territory, and Mr. Market hates uncertainty.
Fanatical belief in “fanatical support” and anecdotes about the potential of managed services for Amazon Web Services and Microsoft’s Azure, Private Cloud, and Office 365 simply didn’t excite analysts on the Q4 2015 earnings call.
Rackspace (RAX) investors bid the stock up 3 percent to close at $18.17 prior to the release of Q4 earnings and full-year 2015 results after the bell Tuesday.
Cloud computing has completely revolutionized the way businesses handle data. No longer limited by their own hardware, companies can now take advantage of technology tools offered by providers around the world. This trend will only continue as more organizations transition storage and compute power to the cloud. According to analysts at Gartner, cloud services are predicted to grow to $244 billion by 2017.
With all the benefits the cloud has to offer, it is imperative that businesses develop the essential awareness and master the fundamental security capabilities required to safely and securely deploy cloud computing solutions. This is especially critical for functions—and even entire industries—with a high risk of data breach, such as payroll processing, human resources management, health care services and anything related to financial data, from consumer banking to payment card transactions to retirement fund distributions.
Across the world, hackers are taking control of networks, locking away files and demanding sizeable ransoms to return data to the rightful owner. This is the ransomware nightmare, one that a Hollywood hospital has been swallowed up by in the last week. The hospital confirmed it agreed to pay its attackers $17,000 in Bitcoin to return to some kind of normality. Meanwhile, FORBES has learned of a virulent strain of ransomware called Locky that’s infecting at least 90,000 machines a day.
The Hollywood Presbyterian Medical Center’s own nightmare started on 5 February, when staff noticed they could not access the network. It was soon determined hackers had locked up those files and wanted 40 Bitcoins (worth around $17,000) for the decryption key required to unlock the machines. Original reports had put the ransom at 9,000 Bitcoin (worth roughly $3.6 million), but Allen Stefanek, president and CEO of Hollywood Presbyterian Medical Center, said in an official statement they were inaccurate.
Despite receiving assistance from local police and security experts, the hospital chose to pay the attackers. “The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key. In the best interest of restoring normal operations, we did this.”
In this era of shooting-from-the-hip, bombastic Donald Trump-style comments, and of nuisance and employment-focused litigation, companies need to take affirmative steps to reduce employment claims and related litigation.
There are three key steps that every company should take in order to reduce employment litigation exposure. Companies have to recognize potential employee concerns early and take steps to act according to policies and practices designed to minimize employment litigation claims.
As organisations have boldly gone where no enterprise has gone before, out to the far corners of cyberspace, the face of data security has changed significantly. The traditional firewall model has collapsed as companies store their data in cloud servers they do not own, perhaps even in countries where they have no corporate presence. External threat actors have developed new methods of attack and customer data breaches have become headline news. While organisations rethink their data security plans and actions, it is important to remember another risk that may need different treatment: the risk of employees stealing information about their colleagues.
For any data center cooling system to work to its full potential, IT managers who put servers on the data center floor have to be in contact with facilities managers who run the cooling system and have some degree of understanding of data center cooling.
“That’s the only way cooling works,” Adrian Jones, director of technical development at CNet Training Services, said. Every kilowatt-hour consumed by a server produces an equivalent amount of heat, which has to be removed by the cooling system, and the complete separation between IT and facilities functions in typical enterprise data centers is simply irrational, since they are all essentially managing a single system. “As processing power increases, so does the heat.”
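Jones's point about the one-to-one relationship between power and heat can be made concrete with a quick conversion. A minimal sketch: the 3,412 BTU/kWh factor is the standard physical conversion, while the 10 kW rack figure is a hypothetical example, not from the article.

```python
BTU_PER_KWH = 3412.14  # 1 kWh of electrical energy dissipates as ~3,412 BTU of heat

def heat_load_btu_per_hr(power_kw: float) -> float:
    """Heat a server (or rack) dissipates per hour, in BTU/hr.

    Essentially all electrical power IT equipment draws is converted to
    heat, so the cooling system must remove it at the same rate the
    equipment consumes it -- which is why IT and facilities are really
    managing a single system.
    """
    return power_kw * BTU_PER_KWH

# A hypothetical 10 kW rack:
print(round(heat_load_btu_per_hr(10.0)))  # 34121 BTU/hr of heat to remove
```

Every kilowatt an IT manager adds to the floor is a kilowatt the facilities team must remove, which is the coordination Jones argues for.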
Jones, who spent two decades designing telecoms infrastructure for the British Army and who then went on to design and manage construction of many data centers for major clients in the UK, will give a crash course in data center cooling for both IT and facilities managers at the Data Center World Global conference in Las Vegas next month. The primary Reuters data center in London and a data center for English emergency services – police and fire brigade – are two of the projects he’s been involved in that he’s at liberty to disclose.
WASHINGTON — The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA), in coordination with state, local, tribal, and territorial emergency managers and state broadcasters’ associations, will conduct a test of the Emergency Alert System (EAS) in twenty-two states, two territories, and the District of Columbia on Wednesday, February 24, at 2:20 p.m. (Eastern).
Broadcasters from the following locations are voluntarily participating in the test: Alabama, Arkansas, Delaware, District of Columbia, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Louisiana, Maryland, Mississippi, Missouri, Nebraska, New Jersey, New York, North Carolina, Oklahoma, Pennsylvania, Puerto Rico, South Carolina, Texas, U.S. Virgin Islands, and Virginia. The EAS test is made available to radio, broadcast and cable television systems and is scheduled to last approximately one minute.
The test will verify the delivery and broadcast, and assess the readiness for distribution of a national-level test message. The message of the test will be similar to the regular monthly test message of EAS, normally heard and seen by the public: “This is a national test of the Emergency Alert System. This is only a test.”
The EAS test might also be seen and heard in states and tribes bordering the states participating in the test.
Public safety officials need to be sure that in times of an emergency or disaster they have methods and systems that will deliver urgent alerts and warnings to the public when needed. Periodic testing of public alert and warning systems is a way to assess the operational readiness of the infrastructure for distribution of a national message and determine what improvements in technologies need to be made.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
When faced with unexpected outbreaks and emergencies like zoonotic plague, Ebola, or contaminated cilantro that causes cyclosporiasis, Career Epidemiology Field Officers (CEFOs) are the experts in the field. One of CDC’s newer field assignment programs, the CEFO program is made up of highly skilled professionals assigned to state, territorial, and local health departments across the country to strengthen nationwide epidemiologic capacity and public health preparedness. CEFOs accomplish this mission while supporting day-to-day operations and emergency response activities of health departments. Being in the field and embedded in the public health networks of the area, CEFOs are on the front lines where emergencies typically begin and end: the local level.
The CEFO program was launched in 2002 to boost public health surveillance, epidemiology, and response efforts following 9/11 and the 2001 anthrax attacks. As of November 2015, 34 CEFOs are assigned to 27 state, territorial, and local public health programs. CEFOs bring a direct CDC connection to the state and local level. Public health agencies request CEFO assistance for an initial 2-year commitment, after which they can extend annually. Selecting a CEFO with the right background and skillset for a specific agency’s needs is important for success.
Although CEFOs have diverse professional backgrounds (physicians, veterinarians, scientists, nurses, and health services), all are experts in applied epidemiology. CEFOs have either completed training through CDC’s Epidemic Intelligence Service (EIS) or have comparable practical experience. Agency assignments vary, but CEFO priorities include rapidly identifying and halting the spread of disease outbreaks and other public health threats. CEFOs accomplish this mission by enhancing public health surveillance, strengthening outbreak response, conducting epidemiologic investigations, and developing the public health workforce. They serve as liaisons between health departments, local and state emergency response partners, healthcare providers, and CDC. CEFOs also develop and implement jurisdictional preparedness plans for emergency situations. For instance, one CEFO is currently analyzing data to identify potential health threats and prioritize resource distribution following severe droughts in California. CEFOs use epidemiological tools to help guide public agencies towards fast and effective responses that can address the health needs of the community.
Do you want to be a CEFO?
According to CDC CEFO Supervisor, Brant Goode, CEFOs tend to be two things: highly personable and very intelligent. Though being a CEFO can be extremely rewarding, working as a CEFO does pose challenges. Goode provides a few tips to future CDC CEFOs:
- Utilize the data. Understanding the demographics and other aspects of a jurisdiction’s public health is a great way to tailor preparedness and response efforts to the population. Along with learning from healthcare providers and health department staff, using census and public health data to learn about the area can aid in planning and implementation.
- Be clear about roles. CEFOs are federal officers meant to strengthen a jurisdiction’s mission. Because CEFOs support both CDC and their jurisdiction, working well with diverse partners is crucial for success.
- Be comfortable with being uncomfortable. Working as a CEFO can be very rewarding, but also challenging. Going from the federal level to the state or local levels can come with a steep learning curve at an accelerated speed. CEFOs should be prepared to serve in emergency management roles.
- Accept agency support. The CDC, partnering jurisdictions, and fellow CEFOs can provide support to CEFOs in completing their mission. Utilize resources and refer to previous cases for best practices, as well as past mistakes, to improve efficiency and prevent “wheel reinvention.”
CEFOs serve as CDC’s frontline defense against public health threats. Through expertise in applied epidemiology, they continue to improve nationwide preparedness to respond to all types of public health emergencies.
(TNS) - For the first time, a wide-ranging voluntary directive to saltwater disposal well operators released Tuesday by Oklahoma regulators includes areas not yet experiencing major earthquakes.
The Oklahoma Corporation Commission said the directive would cut by 40 percent the volumes of saltwater injected into deep Arbuckle formation disposal wells that have been linked to the state's increase in earthquake activity.
The directive targets 245 disposal wells across more than 5,200 square miles of northwestern Oklahoma. It covers all or parts of Woods, Alfalfa, Grant, Harper, Woodward, Major and Garfield counties.
(TNS) - Only halfway through the school year, the Palm Beach County School District has witnessed nearly twice as many bomb threats (all false) as it did in the two previous years combined. Three of those prompted entire campuses to be emptied, while others triggered lockdowns that kept students secure in their classrooms.
So far, the district has not seen the sweeping multiple threats that have plagued other states – ones like the wave that swept through at least six school districts in Mississippi Tuesday, or the one in January that targeted 30 schools from New Jersey to Iowa.
One of the oldest schoolhouse crimes, it still goes uncounted by any national database.
(TNS) - There is at least one major difference between Ebola and the Zika virus: Zika can’t be transmitted through “casual contact,” health officials said.
So if a patient shows signs of Zika — which include mild fever, skin rash, conjunctivitis or red eye, muscle and joint pain and fatigue — they’re treated with standard procedures like anyone with an infection, said Dr. John Kennedy, vice president of Medical Affairs at Mercy Health-Fairfield Hospital.
Still, the spread of the Zika virus outside the United States has spurred a slew of new travel guidelines and protocols at blood centers and other medical facilities across the region, where one of the four cases reported last week in Ohio was diagnosed in a 56-year-old Butler County woman returning from Guyana.
For a growing number of companies, the compass points toward the Internet of Things (IoT) as a pathway for improving customer service, enhancing operations, and creating new business models. In fact, IDC predicts that by 2020, some 32 billion connected IoT devices will be in use. The challenge is extracting timely, meaningful IoT data to enable these digital transformations. Following are five critical demands enterprises need to consider in developing their IoT analytics strategies.
IoT Analytics Must be Distributed
Most enterprise IoT environments are inherently distributed. Like spider webs, they connect a myriad of sensors, gateways and collection points with data flying between them. Moreover, these webs constantly change as components are added and subtracted, and data flows are modified or repurposed.
Such environments place multiple demands on analytics. First, the software has to handle a variety of networking conditions, from weak 3G networks to ad-hoc peer-to-peer networks. It also needs to support a range of protocols, often either the Message Queuing Telemetry Transport (MQTT) or the Constrained Application Protocol (CoAP), and then either ZigBee or Bluetooth Low Energy (BLE).
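As a rough illustration of the protocol side, here is how a gateway might package a sensor reading for publication over MQTT. The topic scheme and payload fields are hypothetical, no broker connection is made, and a real deployment would use a client library such as Eclipse Paho for the actual publish.

```python
import json
import time

def make_mqtt_message(site: str, device_id: str, metric: str, value: float):
    """Build a (topic, payload) pair for an MQTT publish.

    MQTT organizes data into slash-delimited topics; subscribers can use
    wildcards (e.g. 'site-a/+/temperature') to receive matching streams.
    The topic layout and payload fields here are illustrative only.
    """
    topic = f"{site}/{device_id}/{metric}"
    payload = json.dumps({
        "value": value,
        "ts": int(time.time()),  # epoch seconds, so collectors can order readings
    })
    return topic, payload

topic, payload = make_mqtt_message("site-a", "sensor-17", "temperature", 21.5)
print(topic)  # site-a/sensor-17/temperature
```

The same reading could equally travel over CoAP as a REST-style resource; the analytics layer's job is to normalize data arriving over either protocol.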
A Southern California hospital fell victim to hackers last week — offering a glimpse at one of many digital threats facing health care.
Criminals reportedly infected Hollywood Presbyterian Medical Center computers with ransomware — malware that cryptographically locks devices. The thieves have demanded 9,000 bitcoins, the equivalent of $3.65 million, to unlock the machines, according to sources who spoke with Los Angeles television stations.
Microsoft is testing a self-contained data center that could be deployed deep underwater so as to reduce cooling costs and emissions from land-based centers, the New York Times has reported.
Code-named Project Natick, Microsoft's experimental data complex is enclosed in a steel capsule designed to sit on the cold ocean floor.
The company is also exploring suspending capsules just below the ocean surface in order to capture energy from currents and generate electricity.
While socializing with my partner (something that will abruptly stop for a while after the imminent birth of my second child), when I tell people that I recruit for Big Data & Data Science professionals, their reactions vary from a vacant, glazed look in their eyes to a knowing nod (that actually masks a total lack of understanding). It is fair to say that most people don’t really get what Big Data & Data Science is about.
The industry is developing at a rapid pace, with the technology improving month-on-month instead of year-on-year. There is such a buzz about Big Data that the narrative has almost taken on a life of its own – it has become this mythical being that can slay uncertainty and save any business from an untimely end.
That is, unfortunately, not the case, so I thought that it was about time to take a light look at five of the more prevalent myths:
In 2013 the Financial Stability Board (FSB), the single most globally influential financial and securities regulator, issued the guidance that calls on national regulators to codify a new regulatory expectation from Boards of Directors:
“The Board of Directors must establish the institution-wide RAF (Risk Appetite Framework) and approve the risk appetite statement, which is developed in collaboration with the Chief Executive Officer (CEO), Chief Risk Officer (CRO) and Chief Financial Officer (CFO).”[i]
Likewise, in the UK, the 2014 update of the “comply or explain” UK Corporate Governance Code, which governs all UK-listed public companies, states the following principle in section C.2, “Risk Management and Internal Control:”
Ready to offer cloud backup and disaster recovery (BDR) services?
A managed service provider that wants to enter the cloud BDR services market will need to determine how to price its offerings, which may seem exceedingly difficult.
There are three common pricing strategies that MSPs may use for their cloud BDR services:
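The article's own list is not reproduced here, but as a purely illustrative sketch, three models often seen in this market (per-gigabyte, per-device, and flat-rate tiers; all rates and tier boundaries hypothetical) could be compared like this:

```python
def per_gb_price(gb_stored: float, rate_per_gb: float = 0.50) -> float:
    """Charge by the gigabyte of protected data (hypothetical rate)."""
    return gb_stored * rate_per_gb

def per_device_price(devices: int, rate_per_device: float = 25.0) -> float:
    """Charge by the protected endpoint or server (hypothetical rate)."""
    return devices * rate_per_device

def flat_tier_price(gb_stored: float) -> float:
    """Flat monthly fee chosen by capacity tier (hypothetical tiers)."""
    for cap_gb, price in [(500, 199.0), (2000, 499.0)]:
        if gb_stored <= cap_gb:
            return price
    return 999.0  # everything above 2 TB

# A hypothetical client: 800 GB of data across 20 devices.
print(per_gb_price(800), per_device_price(20), flat_tier_price(800))
# 400.0 500.0 499.0 -- which model is cheapest depends on the client profile
```

The takeaway for an MSP is that no single model wins for every client, which is part of why pricing cloud BDR can seem exceedingly difficult.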
Just like a popular YouTube video is cheaper to deliver from a data center that’s in the same geographical region than from a remote one, both providers and users of enterprise cloud services benefit if the services are delivered from a local data center.
Quickly growing adoption of cloud services by enterprises has driven edge data center specialist EdgeConneX to locate its latest facility in Minneapolis. The Minneapolis-St. Paul metro has a population of about 3.8 million, yet digital content and cloud services consumed by its residents and companies have traditionally been served from data centers 400 miles away, in Chicago, Clint Heiden, chief commercial officer at EdgeConneX, said.
“When you have a [market] the size of Minneapolis-St. Paul pulling from another core market like Chicago, that to us screams like an edge market,” he said.
Apple CEO Tim Cook has written an open letter to customers warning them of a “dangerous” request from the FBI to effectively create a backdoor in their iPhones. Cook was writing in response to a court order asking Apple to create a tool that would allow for unlimited guesses at a user’s passcode, in this case to crack into the iPhone of one of the San Bernardino shooters, who killed 14 and injured 22 others in December 2015.
On standard iPhones, the user can only attempt to get the passcode right 10 times before the device wipes itself. The order, handed down under the All Writs Act of 1789, demands Apple write a program for the government that would undo that and allow for so-called “brute force” attacks on iPhones. This would effectively break any encryption protections, as the passcode is the only real barrier between a hacker, be they government or criminal, and an iPhone. Once the passcode is broken, most encryption protections on iPhones are bypassed.
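The arithmetic behind the concern is straightforward: once the 10-try limit is removed, a numeric passcode falls to exhaustive search. The guess rate below is a hypothetical figure for illustration, since the court order concerns removing the limits rather than any particular guessing speed.

```python
def worst_case_seconds(digits: int, guesses_per_second: float) -> float:
    """Worst-case time to brute-force a numeric passcode of `digits` digits,
    assuming the 10-attempt wipe and inter-attempt delays are disabled."""
    return 10 ** digits / guesses_per_second

# Hypothetical rate of 12.5 guesses per second:
print(worst_case_seconds(4, 12.5))         # 800.0 seconds (~13 minutes for 10,000 codes)
print(worst_case_seconds(6, 12.5) / 3600)  # ~22 hours for 1,000,000 codes
```

This is why the 10-attempt limit, not the passcode itself, carries most of the protection: without it, even a six-digit code yields to patience measured in hours, not years.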
Unfortunately in today’s world, active shooter preparation is becoming an essential emergency response practice for organizations of all shapes and sizes. In fact, between the years 2000 to 2013, “the FBI identified 160 active shooter incidents and 1,043 casualties – an average of 6.4 incidents occurred in the first seven years, and 16.4 occurring in the following seven.” 
Although each organization is different, there are steps you can take for active shooter training to ensure that your employees and managers are prepared to initiate a response plan and manage the consequences of each incident:
Amazon Web Services has signed an agreement to acquire NICE, a software-as-a-service company based in Italy that helps customers optimize and centralize their HPC, cloud and visualization resources. The terms of the deal were not disclosed, but it is expected to close in Q1 2016.
According to NICE’s sparse website, it will continue to operate under its existing brand, and continue to support and develop EnginFrame and Desktop Cloud Visualization (DCV) products.
AWS didn’t drone on about the acquisition, instead opting for a short blog post written by AWS’ Chief Evangelist Jeff Barr, to briefly sum up the news. While not a lot may be known about the acquisition at this point, it is clear there are three main reasons why AWS pulled the trigger on the deal.
During the historic 1998 El Niño season, which caused $550 million in damages, it was not until February that California experienced flooding damage that warranted a federal presidential disaster declaration.
OAKLAND, Calif. – The Federal Emergency Management Agency (FEMA) today released new data on National Flood Insurance Program (NFIP) policies, showing an increase of more than 27,000 new NFIP policies written in California during the month of December 2015. There is a 30 – 90 day waiting period for new policies to be reported to FEMA and the latest available data, released today, shows an increase of more than 55,500 new flood insurance policies purchased in California from August 31 – December 31, 2015.
The nearly 25% increase for the state is the first of its kind, in any state, in the history of the National Flood Insurance Program, created in 1968.
“FEMA recognizes that a government-centric approach to emergency management is not adequate to meet the challenges posed by a catastrophic incident,” said FEMA Region 9 Administrator Robert Fenton. “Utilizing a whole community approach to emergency management reinforces that FEMA is only one part of our nation’s emergency management team and individuals are arguably the most important part of that team.”
Although the agency does not directly correlate all NFIP claims this year to El Niño, FEMA has already seen 127 National Flood Insurance Program policyholders submit claims in California during January 2016 compared to only 1 claim submitted in California for the same period during the previous year.
Although parts of FEMA Region 9 have recently been in a relative dry period, according to the National Weather Service, the impact of El Niño is not over.
“It has not been uncommon during past strong El Niño events to go through drier periods, even during the winter months,” said National Oceanic and Atmospheric Administration/National Weather Service meteorologist Scott Carpenter. “A change in the weather pattern around the last week of February may start bringing the storm track farther south and across more of California into March.”
NOAA's Climate Prediction Center forecasts climate anomalies associated with the ongoing El Niño episode are expected to result in at least minimal improvements to the drought conditions across much of California and western Nevada through the end of April.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.
Flooding can happen anywhere, but certain areas are especially prone to serious flooding. Many areas in California are at increased flood risk from El Niño, as a direct result of wildfires and drought.
Residents should be aware of a couple things:
o You can’t get flood insurance at the last minute. In most cases, it takes 30 days for a new flood insurance policy to go into effect. So get your policy now.
o Only flood insurance covers flood damage. Most standard homeowner’s policies do not cover flood damage.
o Get all the coverage you need. An agent can walk you through coverage options.
o Know your flood risk. Visit FloodSmart.gov (or call 1-800-427-2419) to learn more about individual flood risk, explore coverage options and to find an agent in your area.
In September 2015, FEMA’s Region 9 office in Oakland, Calif., established an El Niño Task Force with the mission of preparing for the impact of El Niño. The task force is evaluating the core capabilities needed to protect against, mitigate, respond to, and recover from any flooding that occurs across the Region this winter and spring. In December 2015, FEMA Region 9 released its draft El Niño severe weather response plan and convened a Regional interagency steering committee meeting in Northern California to exercise the plan. The plan is a living document and is continuously updated as new information on the El Niño threat emerges.
FEMA administers the National Flood Insurance Program and works closely with more than 80 private insurance companies to offer flood insurance to homeowners, renters, and business owners. In order to qualify for flood insurance, the home or business must be in a community that has joined the NFIP and agreed to enforce sound floodplain management standards.
NFIP is a federal program and offers flood insurance which can be purchased through private property and casualty insurance agents. Rates are set nationally and do not differ from company to company or agent to agent.
These rates depend on many factors, which include the date and type of construction of your home, along with your building's level of risk.
Visit Ready.gov for more preparedness tips and information and follow @FEMARegion9 on Twitter.
This month (February), we focus on data centers built to support the Cloud. As cloud computing becomes the dominant form of IT, it exerts a greater and greater influence on the industry, from infrastructure and business strategy to design and location. Webscale giants like Google, Amazon, and Facebook have perfected the art and science of cloud data centers. The next wave is bringing the cloud data center to enterprise IT… or the other way around!
Here’s a collection of stories that ran on Data Center Knowledge in February, focusing on the data center and the cloud:
Telco Central Offices Get Second Life as Cloud Data Centers – As AT&T and other major telcos, such as Verizon, upend their sprawling network infrastructure to make it more agile through software, most of those facilities will eventually look less like typical central offices and more like cloud data centers.
Just in time for tax season comes word of all kinds of security breakdowns within important tax-related organizations.
In its review, the IRS identified unauthorized attempts involving about 464,000 unique Social Security numbers. About 101,000 Social Security numbers were used to access E-file PINs.
Also, several tax preparation companies reported breaches, likely caused by poor password management. One of those breached companies was TaxSlayer, whose director of customer support Lisa Daniel was quoted by eSecurity Planet:
In January, we focused on data center design. We looked into design best practices and examined some of the most interesting new design trends. Here are the stories we ran as part of our data center design month:
Data Center Design: Which Standards to Follow? – Codes must be followed when designing, building, and operating your data center, but “code” is the minimum performance requirement to ensure life safety and energy efficiency in most cases. A data center is probably going to be the most expensive facility your company ever builds or operates. Should it have only the minimum required by code?
Startup Envisions Data Centers for Cities of the Future – The Project Rhizome team is thinking of ways to design small urban data centers so they fit in urban environments functionally, economically, and aesthetically.
On Tuesday, IBM announced that it is rolling out the latest version of its z13 mainframe, which, according to the company, aims to attract mid-size enterprises with a hybrid cloud mainframe designed to encrypt data without slowing the computer's performance.
The IBM z13s, expected to be available beginning next month, is designed to encrypt and decrypt data at double the speed of previous generations because the security is embedded into the hardware.
Tom Rosamilia, senior vice president of IBM Systems, said in a statement:
With the new IBM z13s, clients no longer have to choose between security and performance. This speed of secure transactions, coupled with new analytics technology helping to detect malicious activity and integrated IBM Security offerings, will help mid-sized clients grow their organization with peace of mind.
The idea of fully outsourcing data infrastructure to the cloud is still novel enough to give many CIOs the shivers. But now that end-to-end data environments can be configured entirely in software, the notion is not as radical as it once was.
At the very least, the precise location of physical infrastructure is becoming less of an architectural criterion given that functions like security, governance and resource configuration are proving to be less costly and more effective when they are deployed on the application or data planes rather than a box somewhere. So this has some people wondering if we are on the cusp of a quiet revolution toward full utility-style computing, not because it is the latest must-have technology but because it is the most efficient, effective way to run a data environment.
For those who say their data is too broad or too complex to entrust to third-party infrastructure, we have only to look at Netflix, which recently shuttered its last video streaming data center to port its entire service to AWS. The company still maintains some back-office processes in-house, but the voluminous video feeds – the heart of its user-facing operation – are now 100 percent in the cloud. The company has made no secret that, given the scale and complexity of its operations, it had no choice but to turn to Amazon for support, which includes not just massive resources but a growing cadre of specialty services and feature sets.
NORTH LITTLE ROCK – Disaster recovery experts today urged applicants for federal assistance to complete a disaster loan application from the U.S. Small Business Administration. Taking a loan is not required; completing the application can open the door to all federal assistance, including possible additional grants from FEMA.
Most Arkansans who register for disaster assistance with the Federal Emergency Management Agency will receive an automated call with information on how to complete the loan application process. Low-interest loans from the SBA are the major source of funding for disaster recovery.
SBA offers low-interest loans to homeowners, renters, businesses of all sizes (including landlords) and private nonprofit organizations that have sustained disaster damage. There is no cost to apply and no obligation to accept a disaster loan.
Assistance from FEMA is limited to help jump-start the recovery; it may not cover all damage or property loss. Completing the SBA Loan application may make FEMA assistance available to replace essential household items, replace or repair a damaged vehicle, or cover storage expenses.
Interest rates can be as low as 4 percent for businesses, 2.625 percent for private nonprofit organizations and 1.813 percent for homeowners and renters with terms up to 30 years.
Eligible homeowners may borrow up to $200,000 for home repair or replacement of primary residences, and eligible homeowners and renters may borrow up to $40,000 to replace disaster-damaged or destroyed personal property, including a vehicle.
Businesses of all sizes may qualify for up to $2 million in low-interest loans to help cover physical damages.
Small businesses and most private nonprofits suffering economic impact due to the severe weather and flooding can apply for up to $2 million for any combination of property damage or economic injury under SBA’s Economic Injury Disaster Loan (EIDL) program.
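To put the loan terms above in concrete terms, the standard fixed-rate amortization formula can be applied to the published figures. The sketch below is an illustration only; actual SBA repayment schedules, fees and deferment periods may differ.

```python
def monthly_payment(principal: float, annual_rate_pct: float, years: int) -> float:
    """Fixed monthly payment for a fully amortizing loan."""
    r = annual_rate_pct / 100 / 12   # monthly interest rate
    n = years * 12                   # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# A homeowner borrowing the $200,000 maximum at 1.813% over 30 years:
payment = monthly_payment(200_000, 1.813, 30)
print(f"${payment:,.2f}/month")
```

At that rate the payment works out to roughly $720 a month, which shows why the article calls these low-interest loans the major source of disaster recovery funding.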
For additional information about SBA disaster loans, the application process, or for help completing the SBA application:
- Call 800-659-2955 (deaf and hard-of-hearing individuals may call 800-877-8339)
- Visit SBA’s website at www.sba.gov/disaster
People with storm losses who still need to register with FEMA can register anytime online at www.DisasterAssistance.gov, or with a smartphone or device at m.fema.gov. Survivors can also register by phone from 7 a.m. to 10 p.m. by calling FEMA at 800-621-3362. People who use TTY can call 800-462-7585. Multilingual operators are available.
Federal disaster assistance is available to eligible residents of Benton, Carroll, Crawford, Faulkner, Jackson, Jefferson, Lee, Little River, Perry, Sebastian and Sevier counties that suffered damage from the severe storms, tornadoes, straight-line winds and flooding Dec. 26, 2015 - January 22, 2016.
Cyber-attacks are inevitable. Thankfully we have IT security teams that keep the technology within an organization secure from hackers attempting to breach internal systems and gain control of private information. It is important not to be narrow-minded when thinking about information security. System threats come in all shapes and sizes. Some of the most common threats that companies face today are software attacks, property or identity theft, and even information extortion.
In recent years, there have been many companies that were victims of cyber-attacks. You may not always be able to prevent them, but you are responsible for all of the technology and information within your company. So one might ask, how can I protect my company, my employees and my customers from hackers?
Here are a few tips that will help safeguard your organization:
(TNS) - Among the items scattered on the conference room table were a hand-cranked flashlight, a tri-fold shovel and food packets with a five-year shelf life.
They were next to the “blood stopper,” labeled as dressing for wounds and trauma, and a “survival tin,” which included a sewing kit, fishing hooks and condoms. That last item also is included to protect supplies from the elements.
“They help keep things dry,” said John Caine, manager of new business development for Quake Kare, a company that touts itself as the country’s “leading source of emergency survival kits.”
The recent acts of terrorism in Paris stunned the world: some 150 people were killed and more than 300 were wounded. But the collateral damage went far beyond buildings being ripped apart and one of the most popular cities in the world being virtually shut down.
Business Travel Coalition, a U.S.-based lobby group, recently released a survey of 84 corporate, university and government travel and risk managers from 17 countries on their attitudes toward travel to France following the attacks. Twenty-one percent of the respondents (roughly one in five) said they were very or somewhat likely to cancel travel to France “for some period of time,” and 20% were somewhat likely to cancel travel to and within Europe. A large majority said they would probably allow employees to decide whether they were prepared to head to France. These are not surprising statistics.
Terrorism has been defined as “The use of violence to instill a state of fear,” and that effect is far-reaching; a bomb explodes in Paris and it’s likely that 5,600 miles away in California some corporate risk manager for a Fortune 500 company is seriously considering cancelling a business trip to Europe—a visceral reaction that could cost his company untold sums of money. Mission accomplished.
Strong forces are at work to make emergency alerts more mobile and precisely targeted. Long gone are the days when a siren blasting a loud horn near and far was sufficient to spur people to action. Now, people want information that’s precise, pertains specifically to them and is available wherever they are, regardless of what they’re doing. Plus, studies show that people generally won’t take protective action unless they get an alert from at least two sources.
Add to the mix the fact that today’s emergencies are local and difficult. Our threats don’t include a fear that bombs will be dropped on our cities from a warring nation. It’s more likely that a terrorist will plant a bomb where we live, work, learn, worship and play. Or a flood will hit an unexpected neighborhood. Or a tornado will abruptly change its path. Or someone will kidnap a child and head for the state’s border. We could go on.
It’s easy to see why emergency alerting has evolved and continues to do so. Targeting specific areas became more practical in the late 1990s when telephone alerting was introduced. Practitioners could draw a diagram on a digital map and direct alerts to specific home and business phone numbers. They can do much more now, according to Russ Johnson, director of Public Safety and Homeland/National Security for Esri, one of the first providers of digital mapping for alerting. He said alerts can be much “smarter” through use of real-time mapping where “live” information from many sources can be analyzed. Then, a geo-fence can be established around the area. If something or someone crosses into the fenced area, an alert can be automatically issued.
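At its core, the geo-fence trigger Johnson describes is a distance test. The sketch below is an illustration only, assuming a simple circular fence and made-up coordinates; production systems such as Esri's use arbitrary polygon fences and live data feeds.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def inside_fence(point, center, radius_km):
    """True if a reported position falls inside the circular fenced area."""
    return haversine_km(*point, *center) <= radius_km

# Hypothetical incident near downtown Los Angeles with a 5 km fence:
fence_center = (34.05, -118.25)
print(inside_fence((34.06, -118.24), fence_center, 5.0))   # nearby device -> alert
print(inside_fence((37.77, -122.42), fence_center, 5.0))   # distant device -> no alert
```

A real system would run this check whenever a device reports a new position, issuing the alert automatically on the transition from outside to inside the fence.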
In September 2008 a Metrolink commuter train collided head-on with a Union Pacific freight train in Chatsworth, Calif., killing 25 people and injuring more than 100. On Dec. 1, 2013, a Metro-North commuter train derailed in the Bronx, killing four and injuring dozens of others. The train’s engineer had fallen asleep and failed to slow the train from over 82 mph to the maximum authorized 30 mph as it entered a curve.
These and many other incidents could have been avoided, according to the National Transportation Safety Board, if railroads had implemented positive train control (PTC). They were supposed to do just that by the end of 2015. They missed the deadline, but got a reprieve, with Congress pushing back the deadline for PTC implementation to 2018.
Congress first mandated PTC in 2008 for rail lines used to transport passengers or toxic-by-inhalation materials. The unfunded mandate gave railroads seven years to comply. Questions arise: Why push back implementation to 2018? Why the delay? Will PTC actually help, whenever we get there? And what will it mean to emergency managers?
There has been a lot of talk lately in the Business Continuity industry about a “next generation” of Business Continuity planning. In a recent article from Continuity Central, David Lindstedt asserts that Business Continuity is Broken. But is it? Are we clinging too tightly to our old ways of creating plans and delivering results? Businesses and technologies change very rapidly; are we keeping up?
“The business continuity industry is evolving slowly. It must evolve, and some significant changes in perspective are warranted,” stated MHA CEO Michael Herrera. “We must be careful not to lose sight of the real goal: organizational survival/resilience.”
In the Continuity 2.0 Manifesto (first made available in September 2015) David Lindstedt and Mark Armour argue that “traditional approaches in business continuity management have become increasingly ineffectual.” Over the years, technology and organizations have undergone tremendous changes, but business continuity methodology has not kept pace. Small, incremental adjustments that focus increasingly on compliance over resilience are cited as contributors to “a progressively untenable state of ineffectual practice, executive disinterest, and an inability to demonstrate the value of continuity programs and practitioners.”
Partnership is the first with a U.S.-based MSP to sell intelligent converged platform in 1TB increments

MELVILLE, N.Y. — FalconStor Software® Inc. (NASDAQ: FALC), a 15-year innovator of software-defined storage solutions, today announced that it has signed Innovative Solutions Consulting Inc. (ISC) as the first managed service provider (MSP) partner in the United States to sell FreeStor® in 1TB increments. This agreement expands the reach of the company’s converged, hardware-agnostic, software-defined storage and data services platform to support organizations from the SMB community through the enterprise.

Based outside of Kansas City, Missouri, ISC provides high-quality IT products and services to carrier- and enterprise-level organizations. The company offers a wide variety of services tailored to exceed its clients’ IT requirements, including managed, professional, cloud and IT procurement services. With more than 25 years of experience working with customers across a wide variety of industries, ISC prides itself on providing its clients with unique custom solutions offering the elasticity and scalability to satisfy their future IT needs.

As a long-time reseller and integrator of FalconStor products, ISC sees FreeStor as a groundbreaking solution for its customers because it integrates the company’s entire suite of data management tools into a single product for a single, pay-as-you-grow price. ISC CTO Mardy Martin believes the flexibility FreeStor offers over competing point solutions makes it ideally suited to overcoming vendor lock-in, forklift upgrades, and cloud-based security issues.

“FreeStor is an incredible opportunity for us to be able to offer a software-defined storage technology that will allow our customers to use a platform that has been recognized globally for its excellence,” said Martin. “FreeStor gives MSPs the ability to manage the product in their cloud infrastructure or the customer’s environment. It gives us the flexibility to manage a customer’s entire environment, or just a portion of it, or to be the one they call in a managed services support model. It resolves a real issue within the mid-market around the need to continually invest in additional equipment to maintain and grow their environments. FreeStor eliminates that need by extending capabilities on existing hardware and by being the most open software-defined storage platform on the market today.”

FreeStor’s horizontal architecture unlocks a new world of storage opportunities, allowing IT managers, MSPs and CSPs to maximize efficiencies and lower costs while taking advantage of the public cloud, hybrid cloud, flash storage and software-defined storage. FalconStor’s Intelligent Abstraction® approach delivers seamless access and unified data services across entire storage infrastructures without having to invest in new technology, or rip and replace existing platforms. Always-on availability and continuity keep businesses running while enabling them to move, synchronize and protect data seamlessly across virtual and physical storage platforms.

“As we continue to expand the footprint of FreeStor throughout the world, we look at our MSP partners as the ideal ambassadors for advancing our message. There is no better way for organizations to gain greater efficiencies, reduced downtime, lower costs and improved simplicity from their IT infrastructures,” said Gary Quinn, FalconStor President and CEO. “Innovative Solutions has the passion and experience for providing innovative technology to its customer base. We are pleased to partner with them as the first MSP in the U.S. to offer FreeStor in increments as small as 1TB.”

About Innovative Solutions Consulting, Inc.

Innovative Solutions Consulting, Inc. is a woman-owned, Missouri-based company with over 25 years of IT industry experience providing high-quality IT products and services to businesses in the Kansas City metro area and nationwide.

About FalconStor

FalconStor® Software, Inc. (NASDAQ: FALC) is a leading software-defined storage company offering a converged data services software platform that is hardware-agnostic. Our open, integrated flagship solution, FreeStor®, reduces vendor lock-in and gives enterprises the freedom to choose the applications and hardware components that make the best sense for their business. We empower organizations to modernize their data centers with the right performance, in the right location, all while protecting existing investments. FalconStor’s mission is to maximize data availability and system uptime to ensure nonstop business productivity while simplifying data management to reduce operational costs. Our award-winning solutions are available and supported worldwide by OEMs as well as leading service providers, system integrators, resellers and FalconStor. The company is headquartered in Melville, N.Y., with offices throughout Europe and the Asia Pacific region. For more information, visit www.falconstor.com or call 1-866-NOW-FALC (866-669-3252).
There's a new sheriff in town and the title is chief data officer, or CDO. Found most often in regulated industries, the CDO is sometimes hired to help a company improve regulatory compliance, data management, and data governance. In other organizations the role may also be responsible for data analytics and/or data science. However broad or narrow, a CDO's charter depends on what the organization’s leadership thinks it requires, although the actual needs of the organization may vary over time. Here are a few important things to consider.
Is a CDO Necessary?
Large organizations in highly regulated industries are the most likely to employ a CDO. In smaller and data-first companies, a CDO's responsibilities may be shared among other titles or be the domain of a single individual, such as the CIO. The question is whether a CDO is actually necessary.
In a recent Forrester Research survey of 3,005 global data and analytics decision-makers, 45% of respondents said their company had appointed a CDO. The survey also revealed that "top performers" (those with 10% annual revenue growth) were 65% more likely to appoint a CDO than "low performers" that have less than 4% revenue growth.
Bitcoin, after reaching a peak value of $1,147 in December 2013, has now become a far more dependable currency valued at around $400 per bitcoin, with only comparatively limited value fluctuation. Despite the perception that it is used for nefarious and underground deals, with sites like Silk Road creating a media storm around the digital currency, it is becoming a more widely accepted payment option, with some of the biggest companies in the world now accepting it as currency. Established companies like PayPal, Subway, CVS and Whole Foods are even jumping on the bandwagon and accepting the digital currency on their sites.
However, all is not well in bitcoin use: companies are applying traditional data analytics methods to bitcoin payments, trying to track them in order to create actionable insights. Although this may sound sinister, it is a practice that has been used for credit cards, cheques and electronic payments for decades. The difference with bitcoin is that it is a currency founded on a certain level of anonymity, making some uncomfortable with the practice.
One of the key differences between the two payment systems is that a payment through a credit card or similar needs to pass through a third party, whereas a bitcoin transaction is recorded in a block that, once appended to the record of all previous transactions, extends the blockchain. This means that technically it is possible to see every single bitcoin transaction, which is a data scientist's dream. The problem is that although the transactions can be seen, each unique wallet address maps to a real identity known only to the two parties in the transaction.
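The block-linking just described can be sketched as a toy hash chain. Real Bitcoin blocks carry headers, timestamps, Merkle roots and proof-of-work; this standard-library sketch, with invented wallet addresses, shows only the linkage and the pseudonymity that make the ledger public yet hard to attribute.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    """Each block commits to the hash of its predecessor."""
    return {"prev_hash": prev_hash, "transactions": transactions}

# Transactions reference opaque wallet addresses, not names.
genesis = make_block("0" * 64, [["addr_a", "addr_b", 1.5]])
second = make_block(block_hash(genesis), [["addr_b", "addr_c", 0.7]])

# The chain links up: altering genesis would change its hash and break the link.
print(second["prev_hash"] == block_hash(genesis))
```

Because every block embeds the previous block's hash, the full transaction history is visible and tamper-evident, while the mapping from addresses to people stays private, exactly the combination the paragraph above describes.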
When you think of insider threats, your first thought is a malicious attack by an unhappy employee or a staffer who is about to quit or be fired. Unfortunately, if that were the whole story, there would be far fewer breaches and data leaks originating inside your four walls. On the flip side, most organizations inherently trust that their employees understand how to handle sensitive information and follow the company’s security best practices every day.
So much has been written about the rogue employee and how organizations must be vigilant in protecting customer and other sensitive data from theft and ultimately exposure. However, your model employee may be unknowingly exposing your organization’s most critical data at any given time. Regardless of the culprit, intentional or not, stopping insider threats is more difficult than hardening the perimeter, since insiders already have access to privileged information to do their jobs. While many organizations look at internal firewalls, intrusion detection and other system protections, the focus needs to move to the actual information that may be at risk – the data.
Every organization has significant risk exposures. The question is, do executive management and the Board of Directors really know what they are?
For many companies, the enterprise risk assessment (ERA) process focuses on the severity of impact of potential future events on the achievement of the organization’s business objectives and the likelihood of those events occurring within a stated time horizon. Developing risk maps, heat maps and risk rankings based on these subjective assessments is common practice. Encompassing an evaluation of available data, metrics and information, as well as the application of judgment by knowledgeable executives, the ERA process is intuitive to most people and provides a rough profile of the enterprise’s risks.
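The scoring behind such risk maps typically reduces to rating each risk's impact and likelihood and ranking by their product. The sketch below uses invented risks and 1-5 scales purely to illustrate the mechanics (and, implicitly, the subjectivity the next paragraph questions).

```python
# Hypothetical enterprise risk register with subjective 1-5 scores.
risks = [
    {"name": "data breach",       "impact": 5, "likelihood": 3},
    {"name": "supplier failure",  "impact": 4, "likelihood": 2},
    {"name": "regulatory change", "impact": 3, "likelihood": 4},
]

# Rank by the classic impact x likelihood product.
for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["name"]:18} impact={r["impact"]} likelihood={r["likelihood"]} score={r["score"]}')
```

The output is the familiar ranked list feeding a heat map; note that two very different risks can land on the same score, which is one reason the approach draws criticism.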
But there are some issues with the traditional risk-mapping approach:
Today IBM Corp. officially announced its z13s mainframe with speedy encryption, cyber analytics, and other security innovations which are baked into the new machine. Call it a cyberframe and watch the CIOs come running.
Big Blue spent five years and $1 billion developing the z13 mainframe, which was introduced last year for large customers. IBM describes it as the most sophisticated computer system ever built. Now the company has added an ‘s’ to the end, for security.
The z13 can process 2.5 billion transactions a day, or the equivalent of 100 Cyber Mondays every day, based on results from IBM internal lab measurements. The z13s has advanced cryptography features built into the hardware that allow it to encrypt and decrypt data twice as fast as previous generations, protecting information without compromising performance.
Mainframes aren’t dead yet. IBM is launching a new version of its z13 mainframe for mid-sized enterprises today that introduces a number of new security features. With up to 4 TB of RAM, the z13s also supports 8x as much memory as IBM’s previous single-frame mainframes.
IBM also says the z13s offers faster processing speeds than some of its previous mainframes in this price range, but the focus of the z13s is clearly on security.
One feature that makes today’s mainframes different from standard servers is that they include numerous specialized processors for features like memory control, I/O, and cryptography.
The Zika virus is turning out to be a bigger and more unwelcome surprise than expected. Those responsible for pandemic planning and emergency management know how fast critical situations can develop. However, ZIKV, as the Zika virus is also known, is rapidly increasing in severity in at least two dimensions at the same time: the number of people infected and the level of danger of those infections. Initially, there were only a handful of known cases and initial descriptions of “mild illness”, with symptoms such as headaches, rashes, fever, conjunctivitis, and joint pains. Estimates have now risen to the possibility of millions infected and severe health risks including malformations in newborn babies and deaths of adult patients.
In recent years, there has been a significant amount of attention given to the concept of organizational resilience across the business continuity industry. Much of the debate has focused on the principles and practice of organizational resilience, and how this relates to the established business continuity management discipline.
The aim of this position statement, which has been produced and ratified by the Board of the Business Continuity Institute, is to add clarity regarding the position of business continuity in the context of organizational resilience. It also provides the BCI’s perspective on how the development of resilience concepts may impact on the practice of business continuity.
The BCI believes that this position statement will contribute to our stated purpose to “promote a more resilient world”. We also hope that it helps to move forward the future development of organizational resilience concepts, beyond definitional debates, towards a collaborative understanding between participants across many management disciplines.
Tim Janes Hon. FBCI, BCI Board Member
Organizational Resilience - BCI Position Statement - February 2016
- Business continuity is not the same as organizational resilience.
- The effective enhancement of organizational resilience will require a collaborative effort between many management disciplines.
- No single management discipline or member association can credibly claim ‘ownership’ of organizational resilience, and organizational resilience cannot be described as a subset of another management discipline or standard.
- Business continuity principles and practices are an essential contribution for an organization seeking to develop and enhance effective resilience capabilities.
- The wide range of activities required to develop and enhance organizational resilience capabilities provide an opportunity for business continuity practitioners to broaden their skills and knowledge, building on the foundation of their business continuity experience and credentials.
- The BCI, working with related partners and industry groups where appropriate, will develop relevant knowledge resources and training to support members who wish to advance their organizational resilience knowledge and skills.
In recent years, the concept of organizational resilience has attracted a significant amount of attention across the business continuity industry. Debate has focused on the principles and practice of organizational resilience, and how it relates to the established business continuity discipline. On occasion, the term 'organizational resilience' has been taken to mean the same as 'business continuity'.
This paper does not intend to add further to the debate in terms of the formal definition of organizational resilience. Rather, the aim is to clarify the position of business continuity in the context of organizational resilience and how it impacts on business continuity practitioners. While there is still much debate on the definition of organizational resilience, for the sake of simplicity, this paper takes the definition contained in the draft ISO 22316.
Organizational Resilience is the:
"adaptive capacity of an organization in a complex and changing environment"
ISO /WD 22316. Societal Security – Guidelines for organizational resilience
It is clear from this statement that organizational resilience is characterised as a broad concept. It is also widely accepted that organizational resilience draws on the experience and efforts of a large number of interrelated management disciplines. Business continuity is just one of the management disciplines that contribute to an organization’s resilience capabilities. The list of contributory disciplines is extensive; just a few examples include emergency management, crisis management, ICT service continuity, occupational health and safety, environment protection, physical security, supply chain management, information security management and various forms of risk management (e.g. credit, market, enterprise).
For this reason, no one management discipline or member association can credibly claim ‘ownership’ of organizational resilience concepts and principles. Furthermore, organizational resilience cannot be properly described as a subset of another management discipline or standard.
Clearly, business continuity and organizational resilience are not the same thing. However, it is apparent that business continuity provides principles and practices that are an essential contributor for any organization seeking to develop and enhance its resilience capabilities.
For example, business continuity practices explain how organizations can identify their priority activities and the risks of disruption to those activities. Established business continuity standards help organizations to understand what is required to ensure priority activities can continue in the face of disruption, and to rehearse the capability to respond to disruption through practical exercises.
Therefore, business continuity practitioners possess many, but not all, of the knowledge and skills that are necessary to help organizations to develop and enhance resilience capabilities.
As noted previously, a wide range of business activities and management disciplines contribute towards enhanced organizational resilience. It is unlikely that a single person in any organization will possess the necessary knowledge and skills to implement and deliver all resilience objectives. The development and enhancement of organizational resilience capabilities will require a collaborative effort between participants across many management disciplines.
This presents an opportunity for BCI members. Business continuity practitioners who wish to become resilience professionals can build on their proven competencies, broaden their knowledge and develop new skills in areas that contribute further to an organization’s resilience activities.
It is the BCI’s stated purpose to “promote a more resilient world”. The BCI recognises that this objective is supported when business continuity practitioners have access to a broad range of resilience-focused information and training. The BCI will support its members who seek to develop their organizational resilience knowledge and skills by providing access to relevant resources. This may be directly through the BCI, through training partners, or by working in collaboration with related industry associates and professional member groups.
A major financial institution is likely to be hit by significant cyber criminal activity in 2016, according to the latest ThreatMetrix Cybercrime Report.
Analysis of more than 15 billion transactions in the past 12 months by the ThreatMetrix Digital Identity Network revealed a 40% increase in cyber criminal activity targeting the financial sector.
A record 21 million fraud attacks and 45 million bot attacks were detected in the last three months of 2015 alone.
Change, convergence, complexity and convenience. These are words that describe the technology landscape as businesses look to create digital enterprises. Digital transformation, while not new, is evolving. Every part of a business is changing as a result of the rise of mobile, cloud computing, big data and analytics. In the past, companies could typically focus on one or two technology transitions at a time. Increasingly, executives across the organization are being asked to make multiple technology decisions. On the IT side, there are too many choices and companies are seeking convergence. At the same time, employees and line of business managers want to eliminate complexity while gaining the convenience of anywhere access to services.
Vendors must respond to these changes or risk being cast aside. In response to these trends, VMware launched a new product last week called the Workspace ONE Platform, which is aimed at allowing people to work anywhere. Obviously not a new concept, but the difference may be in the execution. Workspace ONE offers a simple and secure digital workspace, integrating identity, device management and application delivery. Let’s look at the functionality the platform provides and how it fits into the market.
Workspace ONE Platform offers one-touch mobile Single Sign-On access leveraging Secure App Token Systems (SATS) that establish trust between the user, device, enterprise and cloud. Once authenticated, employees can subscribe to any of the corporation’s mobile, cloud or Windows applications based on the company’s policies. It also enables unified management of BYO and corporate-owned devices. With the new solution, an employee can self-configure BYO laptops, smartphones and tablets, choosing the level of services and IT restrictions they are comfortable with, increasing adoption of BYO programs and reducing the risk of data loss. Of course, IT will still set acceptable use policies and limit access to corporate resources based on various profiles. According to VMware, securing the data from the application through to the cloud with NSX is one of the company’s main differentiators. In truth, this only works if you purchase a full VMware stack. But if you do, it can deliver on that promise.
Only 6 percent of the world’s top 1,500 companies have appointed a Chief Digital Officer (CDO) to oversee the digital transformation of their business, but their ranks are growing, according to the results of a new study about the role from Strategy&, PwC's strategy consulting business.
The 2015 Chief Digital Officer Study looks at the top 1,500 public and private companies around the world by revenue to better understand how many companies have appointed a Chief Digital Officer, who they are, and where the position fits into companies’ hierarchies.
The tragic events in Paris last year represented a step change in the way that civilians were targeted at their most vulnerable, not only because of the primary mode of assault, but also in the way that the media responded. There has been a lot of analysis and discussion around this but for now, I would like to focus on the way that we responded to the incident using both the media and also social media.
This infamous video marked a step change in how information is reported during an incident. It represented one of the first times that live footage of an attack in a Western country was instantly streamed. The images from this video would never have been shown by any reputable media outlet, as there are very strict controls in place to prevent this. We are therefore seeing an evolution in the way that we communicate.
This was crystallised by the Facebook safety check, the ‘social good’ media response – a method whereby our friends can let us know that they are safe during an incident. This represents very well how we as a population can respond to a crisis. Twitter is also an interesting medium. It is the first port of call to find out what is going on, but you have to take the information with a pinch of salt, as it isn’t always correct. Twitter was used during the Paris attacks for both good and bad; for example, the local hospitals used it to say that they urgently needed blood.
Where does this media evolution leave us as business continuity/crisis managers?
HIMSS is pushing the National Institute of Standards and Technology to keep its Framework for Improving Critical Infrastructure Cybersecurity voluntary.
HIMSS, which represents more than 52,000 health IT professionals, wrote to NIST on Monday in response to its request for information. NIST has extended the original Tuesday comment deadline to Feb. 23.
NIST noted it was looking for ways in which the framework is being used to improve cybersecurity risk management; how best practices for using the framework are being shared; the relative value of different parts of the framework; the possible need for an update of the framework, and options for long-term governance of the framework.
Air Enterprises Acquisition, the exclusive US distributor of the heat wheel-based data center cooling system by KyotoCooling, has filed a lawsuit against competitor Nortek Air Solutions, accusing it of patent infringement.
The patent in question adapts heat wheels, a cooling technology used for many years in other industries, for data center cooling. Held by Netherlands-based KyotoCooling, it describes a data center cooling system that relies on a heat wheel in an indirect economization process.
Heat wheels are used to maximize the use of outside air for cooling. A heat wheel is a rotating heat exchanger with separate ducts for warm server-exhaust air and cool outside air. It addresses common problems with direct airside economization, such as air contamination and unwanted humidity, thus expanding the number of locations where economization is possible.
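As a back-of-the-envelope illustration, the supply temperature a heat wheel can deliver follows directly from its effectiveness. This is a simplified sensible-effectiveness model with assumed figures, not a description of KyotoCooling's patented design:

```python
def heat_wheel_supply_temp(t_return_c: float, t_outside_c: float,
                           effectiveness: float) -> float:
    # Sensible-effectiveness model: the wheel cools the warm
    # server-exhaust (return) air toward the outside temperature
    # without mixing the two air streams.
    return t_return_c - effectiveness * (t_return_c - t_outside_c)

# 35 C server exhaust, 15 C outside air, 75%-effective wheel (assumed values)
print(heat_wheel_supply_temp(35.0, 15.0, 0.75))  # 20.0
```

Because the two streams never mix, outdoor contaminants and humidity stay outside, which is the advantage over direct airside economization noted above.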
Verizon Communications, which several years ago had huge public cloud ambitions, is shutting down its public cloud service, which competes head to head with giants like Amazon Web Services and Microsoft Azure.
The company notified its cloud customers of the coming change Thursday, giving them one month to move their data or lose it forever. It has already removed any mention of public cloud compute services from its website.
The move appears to be a confirmation of what many in the industry have been predicting, especially since news started coming out of big telcos looking to offload massive data center portfolios they had amassed in recent years to go after the cloud services market. It has become almost impossible to compete with AWS, Azure, and to a lesser extent with Google Cloud Platform in the market for renting virtual compute power over the internet and charging by the hour.
Proposals from lawmakers to force US companies to provide government agencies with backdoors to encrypted data would put them at a competitive disadvantage, without reducing the global availability of encryption, according to a report released Thursday by Harvard University researcher Bruce Schneier. While emphasizing that the results are not a complete catalogue, but rather more of a survey, Schneier and his team conducted A Worldwide Survey of Encryption Products and found 865 devices or programs incorporating encryption originating from 56 countries, with about one-third of the products coming from the US.
Schneier, who is a fellow at the Berkman Center for Internet & Society, along with fellow researchers Kathleen Seidel and Saranya Vijayakumar, replicated a study conducted in 1999 by researchers at George Washington University. The original study attempted to catalogue non-US encryption products, and found over 800 hardware and software products from 35 countries.
US Senate Intelligence Committee Chairman Richard Burr (R-N.C.), with an assist from Senator Dianne Feinstein (D-Calif.), has been drafting legislation to provide backdoors to encryption with warrants. Burr also sponsored the controversial Cybersecurity Information Sharing Act, which passed through the Senate in October.
Businesses aren’t the only ones struggling to ramp up budget allocations to fortify against cyberrisk. In his new $4.1 trillion budget proposal, President Obama has asked for $19 billion for cybersecurity efforts, a 35% increase from last year.
The president directed his administration to “implement a Cybersecurity National Action Plan (CNAP) that takes near-term actions and puts in place a long-term strategy to enhance cybersecurity awareness and protections, protect privacy, maintain public safety as well as economic and national security, and empower Americans to take better control of their digital security.” In addition to a cybersecurity awareness campaign targeting both consumers and businesses, the plan calls for government-wide risk assessments, a nation-wide push for a range of better consumer data security measures, and a range of initiatives to attract more and better cybersecurity personnel. Some of these new employees will offer cybersecurity training to more than 1.4 million small businesses, and the Department of Homeland Security is expected to double the number of cybersecurity advisors available to assist private sector organizations with risk assessments and the implementation of best practices.
Obama’s plan also takes a page from the private sector, creating the position of Federal Chief Information Security Officer to drive cybersecurity policy, planning and implementation across the federal government.
Inside the eBay operations "war room" last December, data analysts and data scientists had one big question on their minds as traffic approached its holiday crescendo: What was the hottest selling item among the 800 million available on the eBay website?
The answer wasn't one that many of them had expected.
"We found that every 12 seconds, we were selling a hoverboard," recalls Debashis Saha, vice president of Commerce Platform and Infrastructure. "It was our hottest-selling item" and one that previously hadn't even shown up on eBay's radar.
With that information in hand, eBay executives could contact suppliers and manufacturers of hoverboards, alert them to the unexpectedly high demand, and urge them to keep their manufacturing going and inventories stocked. It was a way of keeping customers satisfied and safeguarding eBay's own business, one made possible through a fast data analysis system called Kylin.
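The kind of rollup Kylin precomputes at eBay scale can be sketched in miniature. This is a toy illustration of the aggregate in question, not Kylin's actual API:

```python
from collections import Counter

def hottest_item(sales_stream):
    # Count completed sales per item and return the current best seller:
    # the aggregate a cube engine like Kylin serves in near real time
    # over hundreds of millions of listings.
    counts = Counter(sales_stream)
    return counts.most_common(1)[0]

print(hottest_item(["hoverboard", "phone", "hoverboard", "case"]))
# → ('hoverboard', 2)
```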
(TNS) - When fired Los Angeles police officer Christopher Dorner went on his killing spree it drew the largest law enforcement response in San Bernardino County history — until the Dec. 2 terrorist attack at the Inland Regional Center. What they learned that week in February 2013 helped shape how emergency responders reacted at the IRC.
Law enforcement agencies from across Southern California, led by the San Bernardino County Sheriff’s Department, hunted Dorner after he implicated himself with an online manifesto in two murders.
Six days later, on Feb. 12, 2013, Dorner was killed during a shootout in a cabin near Angelus Oaks in the San Bernardino National Forest.
According to leaders of public safety departments who responded to both incidents, the lessons learned during the manhunt for the ex-LAPD officer turned cop killer helped stop IRC attackers Syed Farook and Tashfeen Malik before they could harm more people after killing 14 and wounding 22 others.
(TNS) - Fearing its standards would impede the rebuilding of tornado-stricken neighborhoods, Rowlett, 20 miles from Dallas, is scaling back its construction requirements to encourage residents to rebuild after the Dec. 26 storms.
Recent updates to codes dealing with new residential construction don’t necessarily fit the tone of Rowlett’s older neighborhoods. For instance, the city now requires 100 percent masonry on single-family residential exteriors. And it has outlawed garages that face streets.
At a special meeting Wednesday, city leaders said they feared that meeting the current standards would be costly for residents and that in the long run, fewer would rebuild, leaving more vacant lots.
One of the primary reasons so many relatively simple attacks wind up compromising IT security defenses is that the internal IT organization suffers from IT security fatigue. On any given day, any number of IT security technologies will generate a stream of alerts, most of which wind up being false positives. After a while, the IT organization becomes inured to the alerts until, of course, one of them involves a previously undiscovered vulnerability. By then, the damage is done.
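The arithmetic behind that fatigue is stark. A quick sketch with assumed (but representative-sounding) numbers, not figures from the article:

```python
alerts_per_day = 5_000        # assumed daily alert volume
false_positive_share = 0.99   # assumed share of alerts that are benign noise
real_incidents = round(alerts_per_day * (1 - false_positive_share))
print(real_incidents)  # 50 real incidents buried in 5,000 daily alerts
```

At that ratio an analyst dismisses 99 alerts for every one that matters, which is exactly the conditioning that lets the hundredth slip through.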
Arctic Wolf Networks this week unfurled AWN Cyber-SOC, a service through which security professionals provide a security information event management (SIEM) capability based on a combination of custom, open source and commercial software technologies that serves to reduce internal IT security fatigue.
Rather than take over IT security management completely, Arctic Wolf Networks CEO Brian NeSmith says, AWN Cyber-SOC is designed to supplement efforts of the internal IT security department. All the firewalls and endpoint security continue to be managed by the internal IT department. Arctic Wolf Networks takes over responsibility for keeping track of the number and types of attacks being launched and what vulnerabilities they are trying to exploit inside the organization, says NeSmith. In effect, NeSmith says, Arctic Wolf Networks becomes the security operations center for the organization that is responsible for all activities relating to IT security hygiene.
JEFFERSON CITY, Mo. – Missouri renters who lost their homes or personal property as a result of the severe storms and flooding between December 23 and January 9 may be eligible for recovery assistance from the Federal Emergency Management Agency (FEMA) and other agencies.
FEMA offers two kinds of help for eligible renters who were displaced from their homes by the recent storms:
- Money to rent a different place to live for a limited period of time while repairs are made to the household’s rented home
- A free referral service to find an adequate replacement rental property
FEMA also helps eligible renters with uninsured or underinsured expenses such as:
- Disaster-related medical and dental expenses
- Disaster-related funeral and burial expenses
- Replacement or repair of necessary personal property lost or damaged in the disaster, household items such as room furnishings or appliances, and tools and equipment required by the self-employed for their jobs
- Primary vehicles and approved second vehicles damaged by the disaster
Additionally, renters may borrow up to $40,000 from the U.S. Small Business Administration to repair or replace personal property.
To qualify for state/federal assistance, renters must first register with FEMA. They can do so online at www.DisasterAssistance.gov at any time or by calling 800-621-3362 (800-621-FEMA) or (TTY) 800-462-7585, 7 a.m. to 10 p.m. seven days a week. Those who use 711-Relay or Video Relay Services (VRS) can call 800-621-3362.
Multiple renters sharing the same dwelling (a.k.a. roommates or housemates) or boarders renting from the dwelling’s owner or leaseholder may apply separately for FEMA assistance after a disaster. Depending on certain conditions, they may be eligible for assistance to repair, clean or replace personal property or vehicles damaged during the disaster, as well as disaster-related expenses.
Renters who desire face-to-face assistance should visit one of FEMA’s Disaster Recovery Centers (DRCs) in Missouri or speak with someone from one of FEMA’s Disaster Survivor Assistance (DSA) teams currently going door-to-door in Missouri’s disaster-declared counties. The application deadline is March 21.
The 33 Missouri counties designated for federal disaster assistance to individuals are: Barry, Barton, Camden, Cape Girardeau, Cole, Crawford, Franklin, Gasconade, Greene, Hickory, Jasper, Jefferson, Laclede, Lawrence, Lincoln, Maries, McDonald, Morgan, Newton, Osage, Phelps, Polk, Pulaski, Scott, St. Charles, St. Francois, St. Louis, Ste. Genevieve, Stone, Taney, Texas, Webster and Wright.
For breaking news about flood recovery, follow FEMA Region 7 on Twitter at https://twitter.com/femaregion7 and turn on mobile notifications or visit the FEMA web pages dedicated to this disaster at www.fema.gov/disaster/4250.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
All FEMA disaster assistance will be provided without discrimination on the grounds of race, color, sex (including sexual harassment), religion, national origin, age, disability, limited English proficiency, economic status, or retaliation. If you believe your civil rights are being violated, call 800-621-3362 or 800-462-7585(TTY/TDD).
“What we’ve done is put together a pilot that is part of a portfolio of projects that the agency has to improve and modernize business practices statewide,” Drown said. “It’s open data to push, ultimately, a culture of data-based decision-making.”
How to optimize Skype for Business on any device
As you read (you did read it, right?) in “Securing Skype for Business in a Mobile World,” storing sensitive Skype for Business data in the data center is a secure alternative to help ensure files, contacts, logs and more all stay safe within the corporate vault. And hosting Skype for Business on XenApp provides a secure and efficient way to keep the apps next to the data they use. Until you try to make a voice or video call, that is.
Yes, logic would dictate that performance for voice and video would be degraded because of what we call the hairpin (or ‘tromboning’) effect. That is when your local camera, microphone and speakers send voice and video to the data center, where it makes a return trip to the person you are calling, who could be another 800 miles away.
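The latency cost of that detour can be estimated from the extra fiber distance alone. The figures are assumptions for illustration; real paths add routing and processing delay on top:

```python
def hairpin_extra_latency_ms(extra_km_one_way: float) -> float:
    # Light in fiber covers roughly 200 km per millisecond (~2/3 c),
    # and hairpinned media must travel the detour out and back.
    fiber_km_per_ms = 200.0
    return 2 * extra_km_one_way / fiber_km_per_ms

# A data center 800 miles (~1,287 km) out of the direct media path
print(round(hairpin_extra_latency_ms(1287), 1))  # 12.9 ms added each way
```

A double-digit millisecond penalty in each direction is very noticeable on an interactive call, which is why keeping media traffic out of the hairpin matters.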
“Magnetic tapes are dead”; “Tapes still have a role in modern IT”. These are two opinions frequently heard among system administrators, but which of them is right? In recent years, there has been a lot of debate about the role of the oldest storage medium still in use. Tapes were first invented in 1928 for sound recording purposes, but since the fifties they have evolved into one of the most widespread and reliable media for storing data on a specially coated medium. Used reliably for longer than half a century to store data, tapes have survived many attacks from competitors such as hard disk drives (HDDs) and solid state drives (SSDs), and optical media such as Blu-ray discs or DVDs.
Since its inception five years ago, Cisco’s Unified Computing System (Cisco UCS) offerings have consistently driven positive technical and business value for our customers at many levels. Some examples:
• Cisco UCS regularly delivers top-level performance as showcased via our leading benchmarking results.
• In their datacenters our customers have recognized material gains in operational efficiency with substantial benefits in provisioning, deployment, management, and staffing.
• In their physical environments, customers derive value from lower heating, cooling, space, and cabling costs.
The trend continues… Cisco UCS is the gift that keeps on giving! In a recent third-party survey we were able to gather insight on the benefits received by customers’ use of our Cisco UCS Integrated Infrastructure Solution for Big Data. Here’s an overview:
It’s done and dusted. Since sometime last month, everything Netflix does runs on Amazon Web Services, from streaming video to managing its employee and customer data.
In early January, whatever little bits of Netflix that were still running somewhere in a non-Amazon data center were shut down, Yuri Izrailevsky, the company’s VP of cloud and platform engineering, wrote in a blog post Thursday.
To be sure, most of Netflix had already been running in the cloud for some time, including all customer-facing applications. Netflix has been one of the big early adopters of AWS who famously went all-in with public cloud. Thursday’s announcement simply marks the completion of a seven-year process of transition from a data center-based infrastructure model to a 100-percent cloud one.
(TNS) -- The FBI still cannot unlock the encrypted cellphone of one of the San Bernardino shooters more than two months after the California terrorist attack.
FBI Director James Comey told the Senate Intelligence Committee on Tuesday that his agency’s inability to access the information in the retrieved phone is an example of the effect on law enforcement of the growing use of encryption technology.
Comey said the problem of “going dark” is overwhelmingly affecting law enforcement at all levels.
Paul Lachance is President of Smartware Group.
As the world becomes increasingly dependent on the Internet, data centers have come to power our everyday lives. In fact, the average US consumer spends roughly six hours a day online. When a data center goes down, it can negatively impact everything from professional and personal communications to finances and travel.
The financial implications of data center downtime are outrageous. Organizations lose an average of $138,000 for one hour of downtime. To put this in perspective, Amazon stands to lose $1,104 for every second Amazon.com is down. What’s more, 59 percent of Fortune 500 companies experience a minimum of 1.6 hours of downtime per week, which could lead to a loss of $46 million in labor costs annually.
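Those figures are easier to compare on a common footing. A quick check of the cited numbers (note that the $46 million figure is a labor-cost estimate and so uses a different basis than the $138,000 average):

```python
amazon_loss_per_second = 1_104            # dollars, as cited
print(amazon_loss_per_second * 3_600)     # 3974400 dollars for one hour offline

avg_loss_per_hour = 138_000               # dollars, as cited
weekly_downtime_hours = 1.6               # as cited for 59% of Fortune 500 firms
print(round(weekly_downtime_hours * 52, 1))                   # 83.2 hours per year
print(round(avg_loss_per_hour * weekly_downtime_hours * 52))  # 11481600 dollars/year at the average rate
```

Even at the average rate, 1.6 hours of weekly downtime compounds to more than 80 hours and eight figures of loss per year.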
According to the Uptime Institute, human error causes almost three-fourths of all data center outages. However, many other factors like cybercrime, natural disasters or flaws within the data centers themselves can also cause downtime. Even something as seemingly innocuous as a squirrel chewing through a cable can cause major damage to a data center.
OXFORD, Miss. — If disaster survivors in Mississippi apply for assistance with the Federal Emergency Management Agency and are referred to the U.S. Small Business Administration, it’s important for them to submit an SBA loan application to ensure that the federal disaster recovery process continues.
If you are a homeowner or renter and SBA determines you cannot afford a loan, you may be considered for FEMA’s Other Needs Assistance program. The program helps meet essential needs like medical and dental care, funeral costs and transportation expenses.
Next to insurance, an SBA loan is the primary source of funds for real estate property repairs and replacing lost contents following a disaster. Homeowners may be eligible for low interest loans up to $200,000 for primary residence structural repairs or rebuilding.
When applying for an SBA loan, survivors should start the process as soon as possible:
- Do not wait on an insurance settlement before submitting an SBA loan application. Survivors can begin their recovery immediately with a low-interest SBA disaster loan. The loan balance will be reduced by the insurance settlement. SBA loans may be available for losses not covered by insurance or other sources.
- Survivors should complete and return the applications as soon as possible. Failure to complete and submit the home disaster loan application may stop the FEMA grant process. Homeowners and renters who submit an SBA application and are not offered a loan may be considered for certain other FEMA grants and programs that could include assistance for disaster-related car repairs, clothing, household items and other expenses.
- SBA can help renters replace their important personal items. Homeowners and renters may be eligible to borrow up to $40,000 to repair or replace personal property, including automobiles damaged or destroyed in the disaster.
- SBA can help businesses and private nonprofit organizations with up to $2 million to repair or replace disaster-damaged real estate, and other business assets. Eligible small businesses and nonprofits can apply for economic injury disaster loans to help meet working capital needs caused by the disaster.
- Survivors don’t have to accept the loan if they qualify for one. Survivors who don’t qualify could be eligible for more assistance from FEMA and other organizations.
March 4, 2016, is the last day survivors can register with FEMA and apply for an SBA disaster loan for physical damage. Oct. 4, 2016, is the last day a small business or private, nonprofit organization may apply for an economic injury disaster loan.
Survivors can submit their SBA loan applications one of two ways: online at https://DisasterLoan.SBA.gov/ela or by mailing their paper application to:
U.S. Small Business Administration
Processing and Disbursement Center
14925 Kingsport Rd.
Ft. Worth, TX 76155-2243
Survivors who haven’t yet registered with FEMA can do so online at DisasterAssistance.gov or by calling FEMA’s helpline at 800-621-3362, which is video relay service accessible. Survivors who are deaf, hard of hearing or who have difficulty speaking may call TTY 800-462-7585.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.
All FEMA disaster assistance will be provided without discrimination on the grounds of race, color, sex (including sexual harassment), religion, national origin, age, disability, limited English proficiency, economic status, or retaliation. If you believe your civil rights are being violated, call 800-621-3362 or 800-462-7585(TTY/TDD).
FEMA’s temporary housing assistance and grants for public transportation expenses, medical and dental expenses, and funeral and burial expenses do not require individuals to apply for an SBA loan. However, applicants who receive SBA loan applications must submit them to SBA loan officers to be eligible for assistance that covers personal property, vehicle repair or replacement, and moving and storage expenses.