Summer Journal

Volume 29, Issue 3


Jon Seals

Snowflake Computing Raises the Bar for Security in Cloud Data Warehousing With a Comprehensive Set of Security Capabilities, Strong Partnerships, and Support for HIPAA Compliance

SAN MATEO, Calif. – Snowflake Computing, the cloud data warehousing company, today announced new features, third-party validations including support for HIPAA compliance, and a partner integration that together deliver a new standard in built-in security for cloud data warehousing. This comprehensive approach eliminates the complexity and burden of securing data that other solutions place on the customer, giving customers the peace of mind needed to trust important data to the Snowflake Elastic Data Warehouse.

Snowflake's announcement reinforces the growing recognition that security in the cloud can meet demanding customer requirements. In fact, Gartner predicts that by 2017, the number of enterprises with policies against placing any sensitive data in the public cloud will have dropped to 5%.(1) By building end-to-end security into its data warehousing service, and offering encryption everywhere, Snowflake ensures that customers can safely move data to the cloud and use the Snowflake service to power their analytics without the increasingly heavy burden of securing their own data environment.

"With organizations increasingly looking to the cloud, security has become a focal point of today's data storage conversation and concern," said Bob Muglia, CEO of Snowflake Computing. "To date, database offerings have put the burden of security on the customer, leaving them to configure, manage, and monitor infrastructure, data, and application security. Snowflake has brought world-class security expertise to designing security into our service from the start, going above and beyond other offerings to relieve users of this burden so that they can focus on solving business problems rather than suffer the headaches of security management."

The feature set delivers key offerings including:

  • Always-on, enterprise-grade encryption. Snowflake automatically encrypts all customer data by default, in transit and at rest, using the latest security standards and best practices at no additional cost.
  • Multi-factor authentication. Snowflake offers integrated multi-factor authentication to further control access to the Snowflake service and to reduce the threat of brute-force attacks.
  • Federated services. Snowflake offers federated services for organizations that want to leverage their existing SAML 2.0 investments.
  • Automatic key management. Snowflake rotates account and table keys on a regular basis, entirely transparently to the customer and requiring no configuration or management.
  • Intrusion detection. Snowflake integrates SIEM services that monitor for potentially suspicious activity and notify customers, helping them thwart attacks.
  • Role-based access. Snowflake provides role-based access control for both data access and operations.
  • Dedicated instances. For customers with sensitive data who have added security needs, including compliance requirements in a virtualized environment, Snowflake offers the option of dedicated compute resources.
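Role-based access of the kind listed above can be illustrated with a short sketch. This is a generic illustration of the RBAC concept, not Snowflake's implementation; the role and privilege names are hypothetical:

```python
# Minimal role-based access control (RBAC) sketch: roles map to sets of
# privileges, and a user is authorized if any of their roles grants the
# requested privilege. Role/privilege names are illustrative only.

ROLE_PRIVILEGES = {
    "analyst": {"SELECT"},
    "loader": {"SELECT", "INSERT"},
    "admin": {"SELECT", "INSERT", "CREATE", "DROP"},
}

def is_authorized(user_roles, privilege):
    """Return True if any of the user's roles grants the privilege."""
    return any(privilege in ROLE_PRIVILEGES.get(role, set())
               for role in user_roles)
```

Separating roles from individual users in this way is what lets an administrator change what "analyst" means in one place rather than per user.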

Snowflake has also partnered with Okta, the leading provider of identity and mobility management for the cloud and mobile enterprise, to help customers integrate Snowflake security with their broader application environment. This integration enables Snowflake customers to use Okta's federated authentication services to extend their own authentication mechanism within the Snowflake service.

"As people increasingly look to the cloud to help them get more from their data, they need confidence that their data is secure," said Ryan Carlson, Okta CMO. "Okta and Snowflake have worked together bringing their extensive knowledge to enable customers to securely connect users and applications with data in the cloud. Rather than worrying about how to build a secure solution, customers can take advantage of our deep expertise delivering the secure data warehouse as a service."

Snowflake has also received key third-party validations of its service and processes, including a Service Organization Control (SOC) 2 Type II report and HIPAA compliance. SOC 2 is an industry standard that validates the security of infrastructures and services for cloud-based service providers. The HIPAA Security Rule (issued under the Health Insurance Portability and Accountability Act of 1996 or "HIPAA") requires appropriate administrative, physical and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information. The Snowflake Service enables customers who are regulated by HIPAA to store and analyze data in a manner that meets HIPAA Security Rule requirements.

Snowflake's comprehensive approach provides customers the assurance they need to store and analyze their data in the cloud. CapSpecialty, a preferred provider of specialty insurance products to the SMB market, uses Snowflake's secure data warehouse as a service to analyze 10 years of data. "While selecting the right service, we came to the conclusion that achieving the level of security provided by Snowflake could only have been done internally at a far greater cost," said Bob Asensio, CIO of CapSpecialty.

To learn more about how CapSpecialty uses Snowflake's secure service to perform queries 200x faster, see the CapSpecialty case study.

Tweet This
@SnowflakeDB first comprehensive #cloud #datawarehouse w/ built-in #security, announces third-party certifications http://bit.ly/22vVN4A


About Snowflake
Snowflake Computing, the cloud data warehousing company, has reinvented the data warehouse for the cloud and today's data. The Snowflake Elastic Data Warehouse is built from the cloud up with a patent-pending new architecture that delivers the power of data warehousing, the flexibility of big data platforms and the elasticity of the cloud -- at a fraction of the cost of traditional solutions. Snowflake can be found online at snowflake.net.

(1) Gartner "Predicts 2016: Cloud Computing to Drive Digital Business" by David Mitchell Smith, et al., December 8, 2015


Integration Delivers Unprecedented Visibility and Protects Enterprises From Malware Attacks Across Their Entire Cloud Footprint

SANTA CLARA, Calif. – Palerra, the leader in cloud security automation, and Check Point® Software Technologies Ltd. (NASDAQ: CHKP) today announced integrations that unite Palerra LORIC Cloud Access Security Broker (CASB) capabilities with Check Point's industry-leading threat prevention platform. Through this integration, joint customers can now automatically detect and quarantine malware in shadow IT SaaS applications and sanctioned cloud services, as well as in custom applications built on top of PaaS or IaaS providers.

"Earlier this year we announced LORIC Discovery, an offering that addresses the modern challenges enterprises face with regard to Shadow IT. We are further extending our innovation by integrating with Check Point's malware sandboxing capabilities, thereby providing enterprises the strongest threat protection in the cloud," said Adina Simu, VP of products for Palerra. "Through this interoperability, enterprises can leverage Palerra LORIC to apply the latest threat analytics to all newly discovered Shadow IT applications as well as sanctioned cloud infrastructure and SaaS applications, enabling them to make intelligent decisions in real time."

With this integration, customers are able to:

  • Discover Modern Shadow IT Applications - Via integration with Check Point's next-generation firewall, uncover the use of unsanctioned SaaS applications, as well as federated applications integrated with sanctioned applications. In addition, discover the use of custom applications within IaaS and PaaS environments such as Amazon Web Services and Force.com.
  • Identify and Quarantine Zero-Day Malware - Leverage the latest sandboxing technology to identify zero-day malware on both newly discovered cloud services, as well as content in sanctioned cloud applications, and quarantine the service or the content as necessary.
  • Employ Consistent Threat Protection Across the Enterprise - Extend Check Point's industry-leading threat protection capabilities from securing offices and campuses to cloud-based infrastructure and applications.
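The discover-and-quarantine workflow in the bullets above can be sketched generically. The verdict values, app list, and function names below are hypothetical stand-ins for illustration; this is neither Check Point's nor Palerra's actual API:

```python
# Generic sketch of a sandbox-verdict-driven quarantine decision for
# discovered cloud applications. "verdict" stands in for a malware
# sandbox result; the policy below is illustrative only.

SANCTIONED_APPS = {"salesforce.com", "office365.com"}

def classify_app(domain):
    """Label a discovered cloud app as sanctioned or shadow IT."""
    return "sanctioned" if domain in SANCTIONED_APPS else "shadow"

def quarantine_action(domain, verdict):
    """Decide what to do with content based on the sandbox verdict."""
    if verdict == "malicious":
        return "quarantine"   # isolate the file or service outright
    if verdict == "unknown" and classify_app(domain) == "shadow":
        return "block"        # stricter stance for unsanctioned apps
    return "allow"
```

The point of the sketch is the two-axis decision the integration enables: the CASB contributes the sanctioned/shadow classification, while the sandbox contributes the malware verdict.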


About Palerra
Palerra helps organizations protect their business-critical cloud infrastructure and data with Palerra LORIC™, the industry-leading solution for cloud security automation. Palerra is the only Cloud Access Security Broker (CASB) that provides visibility and security across the entire security lifecycle from infrastructure to applications, enabling organizations to realize the full promise of the cloud. Leading enterprises including BMC Software, Jefferies, and VMware leverage LORIC for continuous monitoring and security of their cloud applications. Palerra is a privately held company funded by Norwest Venture Partners, Wing Venture Capital, and August Capital, and is headquartered in Santa Clara, Calif. For more information, visit www.palerra.com.


Software-Defined Storage Startup Releases Docker Volume Plugin to Simplify DevOps Provisioning of Persistent Container Volumes

SANTA CLARA, Calif. – Hedvig, the company modernizing storage and accelerating enterprise adoption of private and hybrid clouds, today announced the release of the Hedvig Docker Volume Plugin and associated integration of the Hedvig Distributed Storage Platform with Docker Datacenter. This new integration simplifies how IT Ops and DevOps provision persistent storage volumes for Docker containers, enabling faster time-to-market for container-based applications while reducing the capital and operational costs of Docker storage. Enterprises can now access a complete set of data services for Docker containers directly from the Docker Universal Control Plane.

More enterprises are adopting containers, and Docker in particular, for production environments. A recent DevOps.com survey found 94 percent of organizations are interested in deploying containers, with 38 percent already doing so in production and 65 percent expecting to have containers in production within 12 months. However, the same survey found 53 percent of organizations cited data management as a barrier to container adoption.

"Enterprises don't have the staff and resources to continuously experiment with tools. They need proven, integrated solutions that allow them to respond to business requirements faster, cut operating costs and improve customer satisfaction," said Mark Williams, CTO at Redapt. "We're excited that companies like Hedvig are extending Docker Datacenter to provide such enterprise-ready solutions."

Hedvig's integration with Docker Datacenter lets IT Ops and DevOps admins spin up containers on any host with access to persistent, shared storage on the Hedvig Distributed Storage Platform. To simplify workflows and increase Docker admin productivity, the Hedvig Docker Volume Plugin can be accessed natively from within Docker by using the Docker command line or through the Docker Universal Control Plane.

Enterprises using Docker in production environments can simplify the provisioning of persistent storage for containers through three capabilities unique to the Hedvig Docker Volume Plugin:

  • Enable container deployment of stateful microservices: Docker volumes are inherently available to any host running the Hedvig Docker Volume Plugin. Containers can be moved from one host to another while data persists, allowing stateful microservices like databases to be deployed in containers.
  • Increase flexibility and alignment with business requirements: IT Ops and DevOps admins can define granular, per-volume storage policies natively from the Docker Universal Control Plane. These include replication factor, deduplication, compression and cache acceleration so each container can have its own unique storage policy.
  • Improve performance and scalability of containers: The Hedvig Distributed Storage Platform is based on a distributed systems architecture, improving the performance, scalability and efficiency that stateful applications running in containers require.
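The per-volume storage policies described above (replication factor, deduplication, compression, cache acceleration) can be sketched as a small data structure. The field names and the replication bounds are illustrative assumptions, not Hedvig's actual plugin options:

```python
# Sketch of granular, per-volume storage policies: each container
# volume carries its own settings. Field names and valid ranges are
# illustrative, not the Hedvig Docker Volume Plugin's real options.

from dataclasses import dataclass

@dataclass
class VolumePolicy:
    replication_factor: int = 3
    deduplication: bool = False
    compression: bool = False
    cache_acceleration: bool = False

    def __post_init__(self):
        # Validate the policy at creation time (assumed bounds).
        if not 1 <= self.replication_factor <= 6:
            raise ValueError("replication_factor must be between 1 and 6")

# A stateful database volume and a low-durability scratch volume can
# then coexist on the same platform with different guarantees.
volumes = {
    "db-data": VolumePolicy(replication_factor=3, compression=True),
    "scratch": VolumePolicy(replication_factor=1),
}
```

This mirrors the design choice the bullet describes: policy lives with the volume, so moving a container between hosts does not change the data services its volume receives.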

"Containers are a disruptive and innovative force in the industry right now. Customers tell us Docker holds the most potential for them to achieve the scale and efficiency of Internet giants, but they need end-to-end solutions to make containers a reality in their data centers," said Avinash Lakshman, CEO and founder of Hedvig. "As the first solution to demonstrate Docker Datacenter compatibility, we believe we've taken an important step forward in delivering a complete solution. Adding enterprise-grade software-defined storage to Docker helps enterprises that are serious about deploying containers in production."

Hedvig provides the elasticity, simplicity and flexibility needed for next-generation infrastructure. The Hedvig Distributed Storage Platform is designed to make software-defined storage (SDS) technology accessible to enterprise IT, supporting common storage protocols like iSCSI and NFS along with object storage APIs like S3 and Swift. It is the only SDS solution that supports widely deployed workloads like SQL databases and virtualized server and desktop environments, along with modern workloads like NoSQL, OpenStack, Docker and Hadoop.

To read more about the technical integration and view a demonstration of Hedvig's integration with Docker Datacenter, please visit: http://www.hedviginc.com/blog/hedvig-software-defined-storage-integration-with-docker-datacenter

To register for a live webinar and an opportunity to speak with Hedvig experts, please visit:

About Hedvig
Hedvig reduces enterprise storage costs by 60 percent while accelerating migration to cloud. The Hedvig Distributed Storage Platform combines block, file, and object storage for bare metal, hypervisor and container environments. The only software-defined solution built on a true distributed system, Hedvig is built to keep pace with scale-out applications and the velocity of change in today's business climate. The Hedvig platform gets better and smarter as the system scales, transforming commodity hardware into the most advanced storage solution available today. Customers such as Intuit, LKAB, Mazzetti and Van Dijk use the Hedvig platform to transform their storage from a box where data resides to a fundamental business enabler. www.hedviginc.com

Connect with Hedvig:

Read our blog: http://hedviginc.com/blog
Follow us on Twitter: https://twitter.com/hedviginc
Like us on Facebook: https://www.facebook.com/hedviginc
Learn more: http://www.hedviginc.com/press-kit

LONDON, UK – eeGeo Limited announced that it has joined the Cisco® Solution Partner Program as a Solution Partner. The Internet of Everything (IoE) continues to bring together people, processes, data and things to enhance the relevancy of network connections. As a member of the Cisco Solution Partner Program, eeGeo is able to quickly create and deploy solutions to enhance the capabilities, performance and management of the network to capture value in the IoE.


“We have worked closely with Cisco to ensure our 3D mapping platform delivers visualization that enhances their Connected Mobile Experiences (CMX), Mobile Workspace and Enterprise Mobility Services Platform solutions”, said Jeremy Copp, eeGeo’s Chief Commercial Officer. “In an increasingly connected world it is vital to provide users with an engaging, intuitive and compelling mechanism to interact with the complex real time data generated by the IoE”.


The Cisco Solution Partner Program, part of the Cisco Partner Ecosystem, unites Cisco with third-party independent hardware and software vendors to deliver integrated solutions to joint customers. As a Solution Partner, eeGeo offers a complementary product offering and has started to collaborate with Cisco to meet the needs of joint customers. For more information on eeGeo, see eeGeo's Cisco Solution Partner Program Catalog profile.


About eeGeo:

eeGeo is revolutionising the way in which businesses engage with their markets. With its gaming industry heritage the cloud-based platform marries gaming software with mobile technology and big data. The result is a 3D geospatially accurate representation of the world, including building-level detail and interior and exterior mapping. The platform enables clients to present information and services to their customers in an environment differentiated from their competitors. It provides a new way to visualise local search results, businesses, destinations and marketing content within an interactive 3D environment, encouraging user acquisition, engagement and retention.

eeGeo is a privately held company founded in September 2010 with offices in San Francisco, London and Dundee and is funded through investment from the founder, strategic partners, private investment funds and venture capital. eeGeo’s global customer base includes top brands from a range of sectors including retail, tourism, smart cities, local media and advertising, the Internet of Things and property.

More information and a full list of eeGeo’s customers can be found on the eeGeo website.

Helping AWS Customers Launch and Manage Apps More Efficiently

SAN ANTONIO –  Rackspace® (NYSE: RAX) has achieved Amazon Web Services™ (AWS) DevOps Competency within the AWS Partner Competency Program, which recognizes members of the AWS Partner Network who have completed a rigorous third-party audit demonstrating their expertise in DevOps practices, tools, and proven customer success. Customers can access this expertise and 24x7 operational support through Fanatical Support® for AWS architects and engineers, who collectively hold over 270 AWS Professional and Associate certifications across the globe.

Historically, the process of deploying and managing applications has been manually intensive and customized. Manual deployment and management of applications generally leads to outcomes such as fewer release cycles, longer lead times, and unpredictable quality, all of which impact time to market and stable operations. Applying those manual and custom practices in a cloud environment negates the inherent advantage of software-defined infrastructure. DevOps is the use of practices, tools and automation that can improve the efficiency of how businesses run applications in the cloud.

Fanatical Support for AWS provides businesses access to certified AWS experts who can help implement and operate workloads on AWS using DevOps practices and tooling, both from AWS and from leading third parties. Rackspace DevOps expertise can help customers improve the speed, frequency, and quality of their software deployments and the accompanying operational processes. Customers using the Aviator™ service receive ongoing support for native AWS DevOps tools such as AWS OpsWorks, AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline and AWS CodeCommit. As an additional service, Rackspace engineers can utilize a broad range of leading third-party DevOps tools, such as Chef™, SaltStack, and Ansible, to help ensure that customers have choice around their preferred DevOps toolset.
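As a flavor of the declarative tooling mentioned above, a minimal AWS CloudFormation template can be assembled in code and serialized to JSON. This is a sketch of the template format only; the resource and bucket name are placeholders, and a real deployment would hand the rendered file to CloudFormation:

```python
# Build a minimal CloudFormation template as a Python dict and emit
# JSON. The single S3 bucket here is just a placeholder resource.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example template: a single S3 bucket.",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            # Placeholder name; S3 bucket names must be globally unique.
            "Properties": {"BucketName": "example-app-bucket"},
        }
    },
}

rendered = json.dumps(template, indent=2)
```

Because the desired state is declared rather than scripted step by step, the same template can be deployed repeatedly with predictable results, which is the repeatability advantage the paragraph above attributes to DevOps tooling.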

“Rackspace has been active in DevOps practices since 2012, working to help customers increase speed and consistency in their operations.  We are excited to be recognized by AWS and to apply our knowledge to customers working on the AWS platform,” said Chris Cochran, senior vice president and general manager of AWS at Rackspace. “Not all customers start with a high degree of expertise in this area, but Rackspace can help these businesses continually improve their efficiency with running apps in the cloud.”

For more information about Rackspace’s achievement of the AWS DevOps Competency, please visit http://blog.rackspace.com/rackspace-achieves-aws-devops-competency/.

For more information on Fanatical Support for AWS, please visit www.rackspace.com/aws.


About Rackspace

Rackspace (NYSE: RAX), the #1 managed cloud company, helps businesses tap the power of cloud computing without the challenge and expense of managing complex IT infrastructure and application platforms on their own. Rackspace engineers deliver specialized expertise on top of leading technologies developed by AWS, Microsoft, OpenStack, VMware and others, through a results-obsessed service known as Fanatical Support®. The company has more than 300,000 customers worldwide, including two-thirds of the FORTUNE 100. Rackspace was named a leader in the 2015 Gartner Magic Quadrant for Cloud-Enabled Managed Hosting, and has been honored as one of Fortune’s Best Companies to Work For in six of the past eight years. Learn more at www.rackspace.com.

Emergency communication planning has become a key element of many businesses, but a surprising number of organizations are still completely unprepared for a potential crisis. According to the Business Continuity Institute, 14 percent of businesses do not have an emergency communications plan, and 68 percent of those organizations would only create one if they experienced a business-impacting event. Of course, having a plan does not necessarily mean you are prepared for the next big crisis. When was the last time you truly assessed your communication plan?

For quick and effective response during an emergency, you should be testing your business communication plans on a regular basis, as well as any time there is a significant change in your company. This might include newly hired department heads or executives, business expansion, or the adoption of new technology platforms.

Of course, there are several ways to ensure that a crisis communication plan is up to date and performing as intended. Here, we look at four ways to test your organization’s plans and make sure they are getting the right message across:



A new study out from CloudPassage — a cloud security firm based in San Francisco — concludes that the American higher-education system is failing at preparing students for careers in cybersecurity.

CloudPassage hired a third-party consultant to analyze computer science programs at 121 universities listed on three rankings: U.S. News & World Report's Best Global Universities for Computer Science, Business Insider's Top 50 best computer-science and engineering schools in America, and the QS World University Rankings 2015 – Computer Science & Information.

The University of Michigan (ranked #12 on U.S. News & World Report's list) is the only program in the top 36 that requires a cybersecurity course for graduation.



You may not know it, but last month we celebrated World Backup Day, in which the tech industry encouraged both consumers and professionals to back up their important data. The occasion served as a good reminder for data center professionals that backing up critical data means having the right power protection strategy in place to ensure data center downtime doesn’t translate into lost revenue for their businesses.

But not everyone took notice. In fact, it’s somewhat surprising that many operators consider reliable power protection to be low on their list of priorities, even though it can have major implications for data loss. During the course of operation, power sags, surges and outages are unavoidable, and more than capable of damaging valuable IT equipment and cutting off access to important data. Because of this, it’s essential that data center operators incorporate a robust power protection solution into their overall data center design strategies.

This article will provide an introductory overview of why comprehensive power protection is critical to ensuring continuous uptime in the data center. Additionally, we’ll look at an example of how one data center operator, ByteGrid, recently implemented a comprehensive power management and monitoring solution to help ensure reliability and reduce the risk of downtime in its facility.



Aon Global Risk Consulting has published its 2016 Captive Cyber Survey report, which finds that the cost of business interruption due to a breach is the top cyber risk concern for businesses across all industries.

As Aon’s first cyber captive survey, the findings offer a better understanding of organizations’ current attitudes toward cyber threats, risk assessment, insurance purchasing trends and loss adjustment concerns, and provide insight into current retail market trends, including captives and other risk financing solutions.

“Our findings indicate that there is a disparity between companies recognizing that cyber is one of the fastest growing and permeating risks, and actually understanding what their individual exposures and coverage needs are,” said Peter Mullen, chief executive officer of Aon Risk Solutions’ Aon Captive and Insurance Management practice, who spearheaded the report. “Captives are a great alternative risk transfer solution for bridging this gap while the industry’s approach to cyber risk management catches up to the evolving pace of technology.”

The survey findings indicate that 94 percent of companies would share risk with others in their industry as part of a captive facility writing cyber. What’s more, Aon experts anticipate alternative risk transfer options to become increasingly sought after as these solutions give companies some control over underwriting, coverage scope and claims adjustment, while providing an opportunity to share best practices, experience and data in a private setting.

Additional highlights of the report include:

  • 61 percent of survey respondents buy cyber limits in the $10-25 million range, but overall 60 percent of large companies do not buy cyber insurance;
  • Of those that do, 68 percent of companies surveyed buy cyber for balance sheet protection closely followed by ensuring due diligence comfort for the board;
  • Only 25 percent of respondents that buy limits are confident that they comply with international best practices and standards for information security governance;
  • 95 percent of companies surveyed state clear policy wording as the most important issue in the cyber risk market, and 75 percent of large companies express concerns about the loss adjustment process.  

“Given the evolving nature and complexity of cyber exposures, we found that the use of cyber risk assessments is surprisingly low,” said Kevin Kalinich, global practice leader for cyber/network risk at Aon Risk Solutions. “Conducting such an assessment is a useful tool for improving risk understanding and maturity as well as for helping organizations better prepare for potential business interruption during or after a breach.”

Aon recommends the following three steps to begin a cyber risk assessment:

1. Scenario analysis: benchmark the existing cyber risk profile and work with business stakeholders to prioritize cyber risk scenarios;

2. Financial modeling: leverage advanced financial simulation tools using deterministic modeling to quantify first and third party costs of select cyber scenarios. Consider performing an analysis on non-damage business interruption scenarios using forensic accounting capabilities;

3. Insurability risk review: test the adequacy of limits against the assessed cyber risk as well as review the optimization of the proposed insurance program.
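Steps 2 and 3 above can be illustrated with a toy deterministic model: sum first- and third-party costs per scenario, then test a purchased limit against the worst case. All scenario names and dollar figures below are made up for illustration:

```python
# Toy deterministic model for cyber scenario costs. Each scenario has
# first-party costs (e.g., business interruption, forensics) and
# third-party costs (e.g., liability). All figures are illustrative.

scenarios = {
    "ransomware outage": {"first_party": 8_000_000, "third_party": 1_000_000},
    "customer data breach": {"first_party": 3_000_000, "third_party": 9_000_000},
}

def scenario_cost(scenario):
    """Total quantified cost of one scenario."""
    return scenario["first_party"] + scenario["third_party"]

def limit_is_adequate(scenarios, limit):
    """Insurability review: does the limit cover the worst-case scenario?"""
    return limit >= max(scenario_cost(s) for s in scenarios.values())
```

Even a model this simple makes the gap concrete: if the worst-case scenario totals $12 million, a $10 million limit fails the adequacy test while a $15 million limit passes.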


Many states are enduring tornado season, and all of the destruction and disaster that goes along with tornadoes. Tornadoes can cause tremendous devastation in a very short amount of time. David Conrad, EMA director, said that from the time a warning siren goes off, the tornado has often already passed within six minutes[1]. When communities issue a “watch,” it means that conditions are favorable and citizens should be on the lookout. Once a “warning” has been issued, it means that a tornado has been spotted.

In Osceola County, Florida, the city will sound the sirens once the National Weather Service issues a tornado warning for their area[2]. For the NWS to issue a tornado warning, weather conditions must line up perfectly. Often, the last minute siren is not enough notice to fully prepare for a full speed tornado. Osceola relies on social media and Nixle to help inform residents of looming weather conditions and keep them safe.