Jon Seals

Computerworld — WASHINGTON — From ocean sensors to orbiting satellites, the National Oceanic and Atmospheric Administration (NOAA) collects about 30 petabytes of environmental data annually. But only about 10% of the data is made public, something the agency now wants to change.

NOAA wants to move its vast amount of untapped data into a public cloud, but without having to pay a whopping cloud services bill.

The agency believes the data holds considerable untapped value and is now seeking partnerships with commercial entities, universities and others. An ideal partner would be one that can apply advanced analytics to the data to create new products and value-added services that also generate new jobs.

...

http://www.cio.com/article/748785/NOAA_Wants_to_Turn_its_Ocean_of_Data_Into_Jobs

OTTAWA, Canada – Enterprise storage has historically been hampered by technical and architectural factors that severely limit application performance. The I/O limitations of hard disk-based arrays, and the latency imposed by PCI-Express based SSDs and architectures such as SAN and NAS, create bottlenecks for applications where speed and determinism are paramount.

The emergence of solid-state flash continues to disrupt the storage market, most recently with the introduction of Memory Channel Storage™ (MCS™), an award-winning platform that delivers tens of terabytes of flash capacity in a single server at near-DRAM speeds. MCS technology puts NAND flash into a DIMM form factor and enables it to interface with the CPU via the integrated memory controller. The result is a new class of high-performance, in-memory storage that eliminates the OS/IO/network overhead inherent to legacy storage arrays.
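To make the data path concrete: flash presented over the memory channel invites memory-style access, ordinary stores into a mapped region followed by a flush as the persistence point, rather than read()/write() calls that traverse the filesystem, block, and (for SAN/NAS) network stacks. The sketch below is a conceptual illustration only, written in Python against a plain file as a stand-in; actual MCS capacity is presented through the vendor's drivers rather than this generic API.

    # Conceptual sketch only: persisting data with memory semantics.
    # The file is a stand-in so the snippet runs anywhere; it is not MCS tooling.
    import mmap
    import os

    PATH = "scratch.bin"                # stand-in for a flash-backed region
    SIZE = 4096

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, SIZE)

    region = mmap.mmap(fd, SIZE)        # map the region into the address space
    region[0:16] = b"persist me, fast"  # ordinary stores, no read()/write() calls
    region.flush()                      # msync(): the explicit persistence point

    region.close()
    os.close(fd)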

Diablo Technologies™, a proven innovator in memory system interface products and creator of the MCS architecture, offers eleven reasons to leverage this ground-breaking technology to optimize application performance for database, big data, virtualization, and low latency workloads.

1. Faster Data Persistence – Memory Channel Storage provides significantly lower write latency than any other flash storage technology. Creating and updating persistent data is now faster than ever, with write latency as low as 5 microseconds (a rough way to measure this is sketched after this list).

2. Zero-Compromise Performance – MCS eliminates the trade-off between IOPS and latency that is inherent to other flash storage solutions. Applications can now sustain heavy I/O while maintaining fast response times, so IT managers no longer need to tune for just one of these performance attributes.

3. Predictable Response Times – MCS provides extremely deterministic latency.  Uncertainty surrounding storage-related Quality-of-Service (QoS) can now be eliminated.  As an example, IT managers deploying VDI can be assured of consistent response times for virtual machines, providing their users with a satisfying session experience.

4. Efficient Scalability – The MCS architecture enables the flash storage solution to be sized tightly to customer requirements. MCS I/O performance scales linearly, and total capacity can be “right-sized” to match application needs. Current products based on Memory Channel Storage are available as 200 GB or 400 GB modules. Multiple modules can be integrated into servers or storage arrays as needed, based on the capacity and performance requirements of specific applications.

5. Platform Flexibility – Most data centers employ a variety of storage solutions depending on the challenges faced. With its combined advantages (ultra-fast persistence, heavy I/O without compromised response times, determinism, and scalability), MCS provides a uniquely flexible platform that can address a wide variety of workloads.

6. Strong Mixed-Workload Performance – Due to its distributed architecture, MCS provides strong mixed-workload performance. Not only can MCS-based modules be written to or read from in parallel, but applications also have the flexibility to write to individual modules while reading from others. Mixed-workload applications, such as databases and virtualized environments, benefit greatly and are ideally suited to MCS-based solutions.

7. Flexible Form Factor – By placing persistent memory on a standard DIMM module, MCS-based products fit into any server or storage design that uses the standard DIMM form factor. Flash memory can easily be integrated into standard servers and storage arrays with no modifications to motherboards or chassis, and without the complexity of blade servers requiring custom PCIe mezzanine cards. This makes MCS the most flexible means of deploying persistent memory for enterprise and storage applications.

8. Ecosystem Support – With support for the most critical operating systems and hypervisors, MCS can be deployed quickly and easily into virtually any enterprise environment.  Driver availability for Microsoft Windows™, VMware ESXi™ and the most prevalent Linux distributions and kernels means broad applicability to enterprise applications.

9. Memory Expansion – MCS redefines the idea of memory expansion by placing flash on the same memory channels as system DRAM.  Paging occurs at near DRAM speeds as these operations are simply transfers of data from flash to DRAM within the same memory controller.  No transfers through a storage stack or movements external to the processor are required, thereby enabling extremely fast paging. 

10. Reduced TCO – With MCS, each node in a cluster is able to complete more work in less time. With storage attached directly to the processors rather than across I/O expansion connections and storage stacks, data is accessed, manipulated, and rewritten to flash in significantly less time. Fewer nodes also mean fewer external storage arrays filled with hot, spinning media, along with lower power and cooling costs, all of which reduce Total Cost of Ownership.

11. Future-Proof Platform – The MCS architecture is designed with the ability to utilize current NAND-flash as well as future non-volatile memories, ensuring MCS customers will benefit from the capacity and performance enhancements of technologies such as 3D flash, phase-change memory, magnetoresistive RAM, and resistive RAM.
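Item 1 cites write latencies as low as 5 microseconds and item 3 stresses deterministic response times. A minimal sketch of how one might check both on any Linux block device is below; it is generic micro-benchmarking, not Diablo tooling, and the /dev/sdX path is a placeholder for a scratch device whose contents will be overwritten.

    # Minimal sketch: synchronous 4 KiB write latency and its tail on a block device.
    # O_DIRECT | O_SYNC bypasses the page cache, so each sample reflects the
    # device's own persistence latency. WARNING: this overwrites the device.
    import mmap
    import os
    import time

    DEVICE = "/dev/sdX"        # placeholder scratch device, never a production disk
    BLOCK = 4096               # a typical logical block size
    SAMPLES = 10_000

    # O_DIRECT requires an aligned buffer; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, BLOCK)
    buf.write(b"\xa5" * BLOCK)

    fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
    samples = []
    for i in range(SAMPLES):
        os.lseek(fd, (i % 1024) * BLOCK, os.SEEK_SET)
        t0 = time.perf_counter_ns()
        os.write(fd, buf)
        samples.append(time.perf_counter_ns() - t0)
    os.close(fd)

    samples.sort()
    print(f"median write latency: {samples[len(samples) // 2] / 1000:.1f} microseconds")
    print(f"99th percentile:      {samples[int(len(samples) * 0.99)] / 1000:.1f} microseconds")

The spread between the median and the 99th percentile is the practical measure of the determinism claimed in item 3.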

Twitter: https://twitter.com/diablo_tech
Facebook: https://www.facebook.com/pages/Diablo-Technologies/369582183128064
About Diablo Technologies
Founded in 2003, Diablo is at the forefront of developing breakthrough technologies to set the standard for next-generation enterprise computing. Diablo’s Memory Channel Storage solution combines innovative software and hardware architectures with Non-Volatile Memory to introduce a new and disruptive generation of Solid State Storage for data-intensive applications. 
The Diablo executive leadership team has decades of experience in system architecture, chip-set design and software development at companies including Nortel Networks, Intel, Cisco, AMD, SEGA, ATI, Cadence Design Systems, Matrox Graphics, BroadTel Communications and ENQ Semiconductor.
Website: http://www.diablo-technologies.com/

CIO — The demands of big data applications can put a lot of strain on a data center. Traditional IT seeks to operate in a steady state, with maximum uptime and continuous equilibrium. After all, most applications tend to have a fairly light compute load—they operate inside a virtual machine and use just some of its resources.

Big data applications, on the other hand, tend to suck up massive amounts of compute load. They also tend to feature spikes of activity—they start and end at a particular point in time.

"Big data is really changing the way data centers are operating and some of the needs they have," says Rob Clyde, CEO of Adaptive Computing, a specialist in private/hybrid cloud and technical computing environments. "The traditional data center is very much about achieving equilibrium and uptime."

...

http://www.cio.com/article/748742/Helping_Data_Centers_Cope_With_Big_Data_Workloads

IDG News Service (Boston Bureau) — A former Microsoft architect has founded a startup called Azuqua aimed at tackling the problem of joining together and automating business processes from multiple SaaS (software-as-a-service) applications.

The proliferation of SaaS and the "API [application programming interface] economy" provides a vast opportunity for a service that can easily pull together processes from multiple applications to serve various scenarios, CEO Nikhil Hasija said in an interview prior to Tuesday's launch of the company's platform.

There's also a need for a tool that can make doing this extremely easy for an average user, he said. While there is a wide range of cloud integration options, such as Dell Boomi and Informatica Cloud, "it requires a computer science degree to do something with them," Hasija claimed. "We're solving this for the business user and making IT look good for being able to deliver this."
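As a hedged illustration of the kind of cross-application flow such a platform automates, the Python sketch below reads new leads from one SaaS API and opens a ticket for each in another. The endpoints, tokens, and field names are invented stand-ins, not Azuqua's product or any vendor's real API.

    # Illustrative sketch only: a simple SaaS-to-SaaS "glue" flow.
    # All URLs, tokens, and field names below are hypothetical stand-ins.
    import requests

    CRM_URL = "https://crm.example.com/api/leads?status=new"     # hypothetical
    HELPDESK_URL = "https://helpdesk.example.com/api/tickets"    # hypothetical
    CRM_TOKEN = "crm-token"                                      # placeholder
    HELPDESK_TOKEN = "helpdesk-token"                            # placeholder

    def sync_new_leads_to_tickets():
        """Read new CRM leads and open a welcome ticket for each one."""
        leads = requests.get(
            CRM_URL, headers={"Authorization": f"Bearer {CRM_TOKEN}"}, timeout=10
        ).json()

        for lead in leads:
            ticket = {
                "subject": f"Welcome {lead['name']}",
                "requester_email": lead["email"],
                "body": "Auto-created from a CRM lead by the integration flow.",
            }
            requests.post(
                HELPDESK_URL,
                json=ticket,
                headers={"Authorization": f"Bearer {HELPDESK_TOKEN}"},
                timeout=10,
            ).raise_for_status()

    if __name__ == "__main__":
        sync_new_leads_to_tickets()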

...

http://www.cio.com/article/748749/Ex_microsoft_Architect_39_s_Startup_Focuses_on_Saas_Integration

IDG News Service (Boston Bureau) — Dell and NetSuite are broadening their relationship, with Dell becoming a global reseller and IT systems integrator for NetSuite's cloud ERP (enterprise resource planning) software.

NetSuite and Dell had already partnered around Dell's Boomi cloud integration technology, and signed off on the expanded agreement a couple of weeks ago, NetSuite CEO Zach Nelson said in an interview prior to Tuesday's announcement.

The deal has benefits for both companies. NetSuite will gain from Dell's vast global sales and service organizations, as well as the latter's specialization in industries such as health care and financial services.

...

http://www.cio.com/article/748737/Dell_to_Resell_Implement_Netsuite_39_s_Cloud_ERP_Software

Business Continuity Awareness Week takes place from 17th to 21st March 2014 and this year includes an opportunity to take part in the first business continuity ‘Flashblog’.

The Flashblog is basically a collection of short articles written around the same theme and published on the same date.

The topic which has been set is “Counting the cost, and benefits, for business continuity” and 500-word articles are being sought from the perspective of as many different types of authors as possible.

Articles will be published on various platforms (including Continuity Central), depending on the author’s preference, and will go live at 11am GMT on Tuesday 18th March using the hashtags #countingthecost and #bcFlashBlog.

For more details of how to take part go to http://bcflashblog.postach.io/join-in-the-bc-flashmob

The NFPA Technical Committee on Emergency Management and Business Continuity will meet from March 25th to 27th, 2014, to discuss progress on the 2016 edition of NFPA 1600.

The agenda for the First Draft Meeting, which will take place at Hilton St. Petersburg Carillon Park, St. Petersburg, FL, is as follows:

1. Starting time: 8:30 a.m., March 25, 2014.

2. Welcome (Don Schmidt, Chair)

3. Self-introduction of members and guests

4. Approval of Minutes of Pre-First Draft Meeting, Salt Lake City, 2013 Oct 22-23

5. Approval of agenda

6. NFPA staff liaison report (Orlando Hernandez)
Committee membership update
Distribution of sign-in sheets

7. Organizational reports/News related to NFPA 1600

8. Task group reports

9. Act on Public Comments to NFPA 1600. Take any other actions necessary to complete the ROC for NFPA 1600.

10. Old business.

11. New business

12. Adjourn

The minutes of the October 22nd-23rd pre-first draft meeting are available as a PDF.


The Risk Appetite Dialogue

Risk levels and uncertainty change significantly over time. Competitors make new and sometimes unexpected moves on the board, new regulatory mandates complicate the picture, economies fluctuate, disruptive technologies emerge and nations start new conflicts that can escalate quickly and broadly. Not to mention that, quite simply, stuff happens, meaning tsunamis, hurricanes, floods and other catastrophic events can hit at any time. Indeed, the world is a risky place in which to do business.

Yet like everything else, there is always the other side of the equation. Companies and organizations either grow or face inevitable difficulties in sustaining the business. Value creation is a goal many managers seek, and rightfully so, as no one doubts that successful organizations must take risk to create enterprise value and grow. The question is, how much risk should they take? A balanced approach to value creation means the enterprise accepts only those risks that are prudent to undertake and that it can reasonably expect to manage successfully in pursuing its value creation objectives.

...

http://www.corporatecomplianceinsights.com/the-risk-appetite-dialogue/

Computerworld — Now, here's a noble goal. U.K. telecom giant Orange on Friday (Feb. 21) launched a campaign to encourage companies to be much more transparent about the data they are collecting with their mobile apps, as well as helping consumers to better control how such data is used. Laudable, really -- and terribly unrealistic.

I'm not even talking about the fact that most companies would rather not be transparent about why they retain consumer data. ("We're trying to get you to buy expensive stuff that you don't need and probably don't even really want. Why do you ask?") The real problem is that you can't disclose what you don't know.

And companies seem to know frighteningly little about what their mobile apps are doing, if efforts by Starbucks, Delta, Facebook, Match.com and eHarmony are any indication.

...

http://www.cio.com/article/748725/Transparency_About_Data_Retention_Requires_Knowing_What_You_have

There is no question that technology today forms the core of business. In their role of facilitating transactions and storing sensitive data, both that of company staff and of clients, the systems and networks of companies are increasingly under siege. This makes data both the corporation's most precious asset and its most vulnerable one. Losing it may cause irreparable damage to a business's reputation, and with it the trust of shareholders. Logically, then, network security should be a key focal point in the disaster recovery plan of any business that wishes to stay afloat.

How, then, do we prepare our businesses to deal with threats to network security?

...

http://www.opscentre.com.au/blog/the-importance-of-network-security-in-disaster-recovery-planning/