With more enterprise IT organizations relying on software-as-a-service (SaaS) applications than ever, securing the data that flows in and out of those applications has become a major challenge and concern.
To give IT organizations more control over that data, Protegrity today unveiled the Protegrity Cloud Gateway, a virtual appliance that, once deployed on a server, enables organizations to apply policies to the flow of data moving in and out of multiple SaaS applications.
Protegrity CEO Suni Munshani says it applies a mix of encryption and vaultless tokenization to ensure that data residing in a SaaS application can be viewed only by users who have been granted explicit rights to see it. Those rights are assigned using a “configuration-over-programming” (CoP) methodology that allows administrators to configure the gateway without needing programming skills.
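Vaultless tokenization can be illustrated with a minimal sketch. This is not Protegrity’s algorithm: the key, card number, and digit mapping below are illustrative, and a production system would use reversible format-preserving encryption (such as NIST FF1) rather than a one-way HMAC.

```python
import hashlib
import hmac

def tokenize(value: str, key: bytes) -> str:
    """Derive a deterministic, format-preserving token with no stored
    lookup vault: the token is computed from the value and a secret
    key alone (works for values up to 32 characters with SHA-256)."""
    digest = hmac.new(key, value.encode(), hashlib.sha256).hexdigest()
    # Map pairs of hex characters onto decimal digits so the token
    # keeps the length and character class of the original value.
    return "".join(str(int(digest[i:i + 2], 16) % 10)
                   for i in range(0, 2 * len(value), 2))

token = tokenize("4111111111111111", b"demo-secret-key")
```

Because the token is derived rather than stored, no token vault has to be secured or replicated, which is the main operational appeal of the vaultless approach.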
Support for SaaS applications is provided by accessing the public application programming interfaces (APIs) those applications expose, with support for each additional SaaS application that Protegrity supports taking a few days or weeks to add, depending on the complexity of the project.
The United Kingdom’s GCHQ, in association with the Centre for the Protection of National Infrastructure, the Cabinet Office and the Department for Business, Innovation and Skills, has re-issued its ‘10 Steps to Cyber Security’ publication, offering updated guidance on the practical steps that organizations can take to improve the security of their networks and the information carried on them.
Originally launched in 2012, the guidance has made a tangible difference in helping organizations large and small understand the key activities they should evaluate for cyber security risk management purposes. The 2014 Cyber Governance Health Check of FTSE 350 Boards showed that 58% of companies have assessed themselves against the 10 Steps guidance since it was first launched, compared to 40% in 2013.
‘10 Steps to Cyber Security’ has been updated to ensure its continuing relevance in the climate of an ever-growing cyber threat. It now highlights the new cyber security schemes and services that have been set up more recently under the National Cyber Security Programme.
The Business Continuity Institute’s Horizon Scan report has consistently shown that cyber attacks and data breaches are two of the biggest concerns for business continuity professionals, with the latest report highlighting that 73% of respondents to a survey expressed either concern or extreme concern at the prospect of one of these threats materialising.
Robert Hannigan, Director of GCHQ, said: “GCHQ continues to see real threats to the UK on a daily basis, and the scale and rate of these attacks shows little sign of abating. However, despite the increase in sophistication, it remains as true today as it did two years ago that there is much you can do yourself to protect your organisation by adopting the basic Cyber Security procedures in this guidance.”
You’ve taken the time to implement a disaster recovery (DR) plan for your company – you’re prepared for anything. You’ve covered all the milestones, including:
- Performing a Business Impact Analysis (BIA) to determine the recovery times you’ll need for your applications.
- Tiering your applications and documenting their interdependencies so you know which order your servers should be restored in.
- Putting your recovery infrastructure in a geographically-diverse data center.
- Creating a comprehensive recovery playbook and testing each and every step.
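The tiering step above — restoring servers in dependency order — can be sketched as a topological sort over a documented dependency map. The application names here are hypothetical:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map: each application lists the services
# that must be restored before it can come back online.
deps = {
    "web-frontend": {"app-server"},
    "app-server": {"database", "auth-service"},
    "auth-service": {"database"},
    "database": set(),
}

# static_order() yields applications with no outstanding
# dependencies first -- the order in which to restore servers.
restore_order = list(TopologicalSorter(deps).static_order())
```

As a side benefit, `TopologicalSorter` raises `CycleError` if the documented interdependencies are circular — itself a useful sanity check on the playbook.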
Bring on the storms … the floods … the power outages … you’re ready. But are you really?
To small business owners, the buzz words from the Big Data world (e.g., petabytes, zettabytes, feeds, analytics) seem very foreign indeed. According to research from the SMB Group, only 18 percent of small businesses currently make use of Big Data analytics and business intelligence solutions. On the other hand, midsize businesses have shown greater adoption, with 57 percent of those surveyed reporting that they use BI and analytics to gain actionable information.
However, many Big Data vendors have begun creating a better story for smaller businesses, focusing more on how they can use their tools to achieve deeper insight into business data to help them make more informed decisions. And the ones that listen to this retooled message will receive a decent payoff for their efforts.
Talk to many data storage experts about high-performance storage and a good portion will bring up Lustre, which was the subject of a recent Lustre Buying Guide. Some of the tips here, therefore, concern Lustre, but not all.
Use Parallel File Systems
Parallel file systems enable more data to be transferred in a shorter period of time than their alternatives.
Lustre is an open source parallel file system used heavily in big data workflows in High Performance Computing (HPC). Over half of the largest systems in the world use Lustre, said Laura Shepard, Director of HPC & Life Sciences Marketing, DataDirect Networks (DDN). This includes U.S. government labs like Oak Ridge National Laboratory’s Titan, as well as British Petroleum’s system in Houston.
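The idea behind that speedup can be sketched in plain Python. This only imitates striping with threads on a single machine; the real throughput gains come from a system like Lustre spreading stripes across many object storage targets and network paths.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_read(path: str, workers: int = 4) -> bytes:
    """Read a file in equal stripes concurrently, loosely mimicking
    how a parallel file system serves stripes from several storage
    targets at once (POSIX only, since it relies on os.pread)."""
    size = os.path.getsize(path)
    if size == 0:
        return b""
    stripe = -(-size // workers)  # ceiling division
    fd = os.open(path, os.O_RDONLY)
    try:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # os.pread is positional and thread-safe: each worker
            # reads its own offset without sharing a file position.
            chunks = pool.map(lambda offset: os.pread(fd, stripe, offset),
                              range(0, size, stripe))
            return b"".join(chunks)
    finally:
        os.close(fd)
```

On Lustre itself, striping is configured per file or directory (e.g., with `lfs setstripe`) rather than in application code, so applications get concurrent stripe access transparently.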
Whether you are planning a traditional data center build-out or all-new cloud infrastructure, the appeal of white box hardware is difficult to resist.
Provided you need enough of a particular device to benefit from economies of scale, and you have a plan to layer all the functionality you need via software, white box infrastructure can do wonders to reduce the capital costs of any project. Plus, you always have the option to rework the software should data requirements change.
But it isn’t all wine and roses in the white box universe. As IT consultant Keith Townsend noted to TechRepublic recently, white box support costs often emerge as a fly in the ointment. Large organizations like Facebook and Google have the in-house knowledge to deploy, configure and optimize legions of white boxes, but the typical data center does not. It takes a specialized set of skills to implement software-defined server, storage and networking environments, and white box providers as a rule do not offer much support beyond replacing entire units, even if only a single component has gone bad. There is also the added cost of implementing highly granular management and monitoring tools to provide the level of visibility needed to gauge a device’s operational status in the first place.
Is your business prepared for IT outages? Disaster preparedness is vital for businesses of all sizes, especially those that want to avoid prolonged service interruptions, and companies that prioritize it can also find ways to protect their critical data when outages occur.
Managed service providers (MSPs) can offer data backup and disaster recovery (BDR) solutions to help companies safeguard their sensitive data during IT outages. These service providers also can teach businesses about the different types of IT outages, and ultimately, help them prevent data loss.
What role could social media play in effectively communicating information about breaking news such as natural disasters and disease outbreaks? It’s not a new question, but one that lacks an easy answer. Researchers and emergency response personnel in San Diego plan to spend the next four years exploring the topic, and what they find may eventually serve as a model for other communities looking to better leverage social media for disaster response.
San Diego County and San Diego State University (SDSU) recently formed a partnership to research and develop a new social media-based platform for disseminating emergency warnings to citizens. The project aims to allow San Diego County’s Office of Emergency Services (OES) to spread disaster messages and distress calls quickly and to targeted geographic locations, even when traditional channels such as phone systems and radio stations are overwhelmed.
In a Jan. 13 presentation to the federal Health IT Policy Committee, Annie Fine, M.D., a medical epidemiologist in the New York City Department of Health and Mental Hygiene, described both the sophisticated software used to track disease outbreaks such as Ebola and how better integration with clinicians’ electronic health records (EHRs) would improve her department’s capabilities.
“In New York City, every day we are on the lookout for unusual clusters of illness. And we receive more than 1,000 reports a day just in my program,” Fine said. Epidemiologists run a weekly analysis to detect clusters in space and time, and use analytics and geocoding to compare current four-week periods with baselines of earlier four-week periods.
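The comparison Fine describes can be sketched as a simple ratio test against a historical baseline. The counts, names, and 2x threshold below are illustrative; real surveillance systems use space-time scan statistics and geocoding rather than a fixed ratio.

```python
from statistics import mean

def flag_cluster(current_4wk: int, baseline_4wk_counts: list[int],
                 threshold: float = 2.0) -> bool:
    """Flag a potential cluster when the current four-week case count
    exceeds the mean of earlier four-week baselines by the given factor."""
    baseline = mean(baseline_4wk_counts)
    return baseline > 0 and current_4wk / baseline >= threshold

# Hypothetical counts for one neighborhood and one disease:
history = [12, 9, 14, 11, 10]       # earlier four-week periods
spike = flag_cluster(31, history)   # well above baseline
normal = flag_cluster(15, history)  # within normal variation
```

Flagged clusters would then be handed off to epidemiologists for the kind of manual follow-up Fine describes next.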
“We get a large number of suspect cases reported, and they may be way out of proportion to the number of actual cases,” Fine said. Epidemiological investigations require hundreds of phone calls to providers and labs. “That could be made much less burdensome and more efficient if we could have improved integration with EHR data.”
Now that the dust has settled on the infamous hack of Sony Pictures Entertainment, it would be prudent to take a look back at how the attack was carried out, consider what lessons IT security professionals can learn from it, and formulate a plan to counter a similar attack.
To that end, I recently conducted an email interview with Gary Miliefsky, an information security specialist and founder and president of SnoopWall, a cybersecurity firm in Nashua, N.H. To kick it off, I asked him what the likelihood is that a Sony insider assisted with the attack, and whether it could have even been carried out without the help of an insider. Miliefsky dismissed the insider theory:
While many speculate that the attack on Sony Pictures Entertainment was done by a malicious insider, I believe that the DPRK carried out the attack themselves, originally initiated from IP addresses they lease from the Chinese government. I believe they initially eavesdropped on emails to learn a pattern of behavior for socially engineering a Remote Access Trojan to be installed via email of an unsuspecting employee, inside the network.