Summer Journal

Volume 27, Issue 3


Jon Seals

The recent fire at the distribution centre of leading British online retailer ASOS is a textbook example of the importance of having an effective disaster recovery plan in place across your organization’s supply chain in order to ensure business continuity, says Jonathan Gibson, Head of Logistics at supply chain consultancy firm Crimson & Co.

The incident, which occurred in late June at ASOS’s main distribution centre in Barnsley, damaged 20 percent of the retailer’s stock and forced the business to temporarily cease trading. Despite this, the online retailer made an efficient recovery and was operational again within 48 hours. Gibson states that the impressive recovery strategy is an eye-opener for fellow retailers, demonstrating the importance of implementing a structured plan that can identify risks across the business.

“The ASOS warehouse fire brings home the importance of having backup and disaster recovery processes in place across your organization’s supply chain. Ultimately, consumers’ sympathy for an incident such as this will only go so far, and if you are offline for a significant amount of time customer loyalty will waver and they will start to look elsewhere.

...

http://www.continuitycentral.com/news07276.html

Digital Realty Trust, Inc. has released Australia-specific findings following its annual commissioned survey of Asia Pacific data centre trends conducted by Forrester Consulting.

According to the survey, 76 percent of Australian organizations expect to increase spending on data centre facilities over the next 12 months, with 59 percent of respondents expecting to increase spending by 5–10 percent and 17 percent expecting to increase spending by more than 10 percent.

Big Data was cited as the key driver of data centre growth in Australia by over half (51 percent), followed by virtualization (39 percent) and business continuity (37 percent).
Additional findings from the survey include:

  • CIOs continue to have the strongest influence on data centre spend in Australia with over half (52 percent) of respondents identifying the CIO or most senior IT decision maker as influencing the decision, closely followed by the CEO (46 percent) and the IT VP/manager/director (46 percent).
  • Over half (52 percent) of Australian organizations surveyed have between one and four data centres.
  • Exactly half of respondents (50 percent) cited the need to expand space and number of cabinets/racks as the main reason their data centre facilities are running out of capacity.

www.digitalrealty.com

For some organisations, it’s an explicit legal requirement. For others, it’s the consequence of prevailing laws and regulatory structures. The mandatory requirement defined by the Australian Government for its agencies sets the tone: “Agencies must establish a business continuity management (BCM) program to provide for the continued availability of critical services and assets, and of other services and assets when warranted by a threat and risk assessment.” And for the rest? There’s a strong argument to be made that business continuity management is no longer a choice for any enterprise – and that an obligation for BCM is a good thing anyway.

...

http://www.opscentre.com.au/blog/business-continuity-management-now-effectively-mandatory-for-all/

When BYOD first established itself, IT and telecommunications departments, and the higher-ups who sign their checks, were rightly concerned that the new trend would have fundamental and widespread effects. They acknowledged that a tremendous amount of hard work would be necessary to leverage the new approach without compromising security.

It is easy to react when something is new and exciting. The challenge is long-term commitment. Vendors continually bring new products to market and strategies change. Will IT departments continue to work to integrate new techniques and make BYOD increasingly secure – and not grow lazy or complacent? Will the budgets they need to keep up with what is new start to shrink as other priorities emerge?

Joe McKendrick at ZDNet took a look at a BYOD survey by CompTIA. The results suggest that people are not paying as much attention as they should:

...

http://www.itbusinessedge.com/blogs/data-and-telecom/are-organizations-paying-attention-to-the-byod-security-challenge.html

Enterprise executives are under intense pressure these days to deliver a wide range of new and expanding data services amid shrinking IT budgets. It seems that the front office has taken to heart the mantra “do more with less” and is buoyed by the notion that the cloud will come to the rescue when confronting all manner of data challenges.

This is part of the reason why top research firms like Gartner are starting to pull back on their IT spending predictions. As I noted last week, the company recently trimmed its 2014 growth outlook by about a third, from a 3.2 percent expansion to 2.1 percent, even though that still represents a substantial $3.75 trillion market. A deeper dive into the numbers, however, shows a data center hardware market flat-lining for the next year at about $140 billion, while an oversupply of SAN and NAS systems is likely to drive prices down even further. IT services, meanwhile, are looking at about 3.8 percent growth this year, representing nearly $1 trillion in revenue.
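The "about a third" characterization can be checked directly from the two growth figures quoted above:

```python
# Back-of-envelope check of the quoted Gartner outlook revision:
# the 2014 growth forecast was cut from 3.2% to 2.1%.
prior_growth = 3.2
revised_growth = 2.1

reduction = (prior_growth - revised_growth) / prior_growth
print(f"Outlook cut by {reduction:.0%}")  # prints "Outlook cut by 34%" -- roughly a third
```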

But is it really that bad? Are we on a one-way street to a utility-style, all-cloud data center? Not hardly, at least not yet. The fact remains that there are plenty of compelling reasons for enterprises of all stripes to build and maintain local data infrastructure, both as stand-alone IT environments and hybrid systems tied to third-party resources.

...

http://www.itbusinessedge.com/blogs/infrastructure/still-some-life-in-traditional-data-infrastructure.html

A few months ago, I was asked to write an excerpt on shadow IT for an e-book. I had to decline because I didn’t know much about shadow IT. Heck, I didn’t know anything about shadow IT—or so I thought. I just didn’t recognize it by that name. It turns out that it is a topic I’ve touched on; that whole idea of employees using outside technology, particularly cloud technologies, for business purposes but doing so without permission from the IT department. Thanks to free applications, downloads and the rise of BYOD, shadow IT has become common in the workforce. A study released earlier this year by Frost & Sullivan Stratecast and commissioned by McAfee defined shadow IT in this way:

SaaS applications used by employees for business, which have not been approved by the IT department or obtained according to IT policies. The non-approved applications may be adopted by individual employees, or by an entire workgroup or department. Note that we specified that the non-approved applications must be used for work tasks; this study is not about tracking employees’ personal Internet usage on company time.
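As a concrete illustration of the definition above, one common starting point for an IT team is to cross-reference the SaaS domains observed in web proxy logs against the approved-application list. A minimal sketch – the approved list and log data here are entirely hypothetical example values, not from the study:

```python
# Hypothetical sketch: flag SaaS domains seen in proxy logs that are not
# on the IT-approved list -- candidate "shadow IT" services to review.
approved = {"office365.com", "salesforce.com"}  # IT-sanctioned apps (example)

proxy_log_domains = [  # domains extracted from web proxy logs (example data)
    "office365.com", "dropbox.com", "salesforce.com", "pastebin.com",
]

shadow_candidates = sorted(set(proxy_log_domains) - approved)
print(shadow_candidates)  # prints ['dropbox.com', 'pastebin.com']
```

In practice this only surfaces candidates; per the definition, each would still need review to confirm it is being used for work tasks rather than personal browsing.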

...

http://www.itbusinessedge.com/blogs/data-security/shadow-it-is-risky-business.html

Emergency dispatchers and response teams are struggling with a widening language divide as they attempt to serve Waterloo’s growing population of non-English speakers.

The communication barrier creates problems for all parties involved, from the dispatcher deciphering a 911 call to the officer trying to put together an accurate police report to the concerned resident trying to communicate a problem with little to no knowledge of the English language.

Over recent years, Waterloo Police have dealt with a slew of languages including Bosnian, Spanish, Serbian, Croatian, Burmese, French and Vietnamese.

In 2006, Burmese refugees began settling in Waterloo for the employment opportunities at Tyson's meat plant, and the community has been growing ever since.

...

http://www.emergencymgmt.com/safety/Language-Barrier-Complicates-Emergency-Response-Scenarios.html

Flash-driven storage arrays provide the right combination of cost, performance and scalability
NEWARK, Calif. – Tegile Systems, the leading provider of flash-driven storage arrays for databases, virtualized server and virtual desktop environments, today announced that Grass Valley, a leading provider of end-to-end television production and content distribution workflow, has implemented a Tegile HA2300 hybrid array to provide the speed, cost and year-to-year investment protection for its growing VM environment.

Grass Valley deployed a FlexPod architecture that included NetApp storage, the Cisco Unified Computing System and VMware at its Hillsboro, Oregon engineering center to support its large virtual infrastructure. The FlexPod system was used by Grass Valley for a variety of mission-critical applications including software development, training, customer service, documentation, build operations, VDI and QA testing. Performance and latency issues resulting from the growth of VMs on NFS led the company to seek out a new solution to replace its NetApp FAS2240 data storage system.

With all of the Hillsboro operations except core IT running in its virtual infrastructure, Grass Valley began an analysis of storage offerings that would satisfy its demanding needs for speed, acquisition price point and long-term investment costs to accommodate year-after-year growth. The company evaluated options that included purchasing a replacement NetApp FAS2240 data storage system as well as offerings from EMC, Isilon and Hitachi before deciding to take a “leap of faith” and adopt a Tegile hybrid storage array to meet its business objectives.

“Tegile really does perform like you’ve never seen before,” said Tony Combs, Solutions Architect at Grass Valley. “They’re more cost effective than anyone in the industry right now. And Tegile does something no one else does. Their dedupe is inline, which is very impressive. It’s almost too good to be true.”

Tegile flash-driven arrays are designed to make the management of VDI easier, faster, more reliable, more scalable and less expensive. Whether used in conjunction with VMware View, Microsoft Terminal Services, Citrix Xen or other solutions, Tegile hybrid arrays allow organizations like Grass Valley to centralize operations, manage more machines without sacrificing capacity, mitigate the disruption of IOPS storms with seven times the performance at considerably lower latency, protect data at a vastly reduced cost compared to other arrays, and eliminate wear-leveling problems and data-integrity issues.

For the cost of the NetApp system, Grass Valley was able to implement a Tegile HA2300 flash-driven array that proved to be 10 times faster than the FAS2240. NFS latencies have been reduced to an “amazing” sub-1 millisecond with an “almost obscene” 40,000 IOPS and 9.5GB/sec throughput to the controller. Performance of the Tegile system easily handles the 480 virtual machines, 50 VDIs and 42 SQL databases utilized by the Grass Valley team. Additionally, the system went from running 90 percent CPU utilization to a 2 percent CPU load.

“We’re pleased that Grass Valley took that leap of faith and was able to meet their cost and performance requirements with the implementation of a Tegile hybrid storage array,” said Rob Commins, VP Marketing of Tegile Systems. “Whether as a replacement for an overloaded storage system or as part of a new storage architecture, Tegile maximizes capacity with on-the-fly de-duplication and data compression to enable more hosted virtual desktops for a lower investment in storage and network infrastructure – all without compromising performance.”

About Tegile Systems

Tegile Systems is a leading provider of intelligent flash storage arrays. Our mission is to accelerate the transformation of enterprise IT by changing the performance and economics of enterprise storage. Our flash storage arrays, with patented IntelliFlash™ architecture, deliver high I/O and low latency for business applications such as databases, server virtualization and virtual desktops. Our customers achieve business acceleration and unmatched storage capacity reduction.

Tegile is backed by premier venture capital firms August Capital and Meritech and strategic investors HGST and SanDisk. Follow us on Twitter @tegile or visit us at www.tegile.com
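Taken at face value, the performance figures quoted in the release imply a generous per-workload budget. A back-of-envelope sketch, assuming purely for illustration that the quoted 40,000 IOPS are spread evenly across the 480 virtual machines (real workloads are never this uniform):

```python
# Rough arithmetic on the figures quoted in the press release.
# Assumption (for illustration only): IOPS are shared evenly across VMs.
total_iops = 40_000   # aggregate IOPS quoted for the array
vm_count = 480        # virtual machines quoted as running on it

iops_per_vm = total_iops / vm_count
print(f"~{iops_per_vm:.0f} IOPS available per VM on average")  # ~83 IOPS per VM
```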
  • DE-CIX to launch a DE-CIX Apollon POP (point-of-presence) at Intergate.Manhattan, Sabey’s facility at 375 Pearl Street in downtown Manhattan.
  • Intergate.Manhattan represents an exponential expansion to New York City’s computing infrastructure capacity. The 32-story facility is also the most efficient data center in New York City, with storm proofing, brand new infrastructure and state-of-the-art climate control and electrical systems.
  • Sabey Intergate.Manhattan will use DE-CIX Apollon peering and Metro VLAN Ethernet capabilities in the NY/NJ metro to enhance interconnection options.


NEW YORK - Seattle-based Sabey Data Center Properties, one of the nation’s largest privately-owned multi-tenant data center owners and developers, announced today that DE-CIX, the world’s largest carrier- and data center-neutral Internet exchange, will launch a point-of-presence of its DE-CIX Apollon interconnection platform within Intergate.Manhattan, Sabey’s 1 million-square-foot facility at 375 Pearl Street in Lower Manhattan.

DE-CIX’s award-winning Apollon platform provides the infrastructure for direct and settlement-free IP interconnection among Internet service providers (ISPs), known as peering. DE-CIX operates Internet exchanges in Frankfurt, Munich, Hamburg, Dubai and New York. DE-CIX New York is already deployed in seven other data centers in New York and New Jersey and operates 111 access points in the metro area. Its expansion into Intergate.Manhattan is expected to take place within the next 30-60 days.

Sabey Data Centers’ proprietary IGX fiber network will provide DE-CIX with two redundant links on diverse routes to connect 375 Pearl Street and DE-CIX at 111 Eighth Avenue and 60 Hudson Street. The new DE-CIX POP allows all Sabey tenants to take advantage of DE-CIX’s Ethernet interconnection options. The connection is able to grow to multiple Terabits per second in the future.

Sabey Data Centers will use services from DE-CIX that include peering and Layer 2 connectivity to other New York City data centers.

John Sabey, President of Sabey Data Centers, said, “DE-CIX expressed its desire to be at 375 Pearl Street, and we have been working on our business agreement since January. This agreement clearly shows that both parties strongly support the growth of New York as an Internet hub for the region.”

“We are committed to growing DE-CIX New York to one of the world’s five largest Internet exchanges by 2020. Expanding to Sabey’s Intergate.Manhattan facility is part of our effort to establish a DE-CIX presence in every relevant data center and carrier hotel in this metro. DE-CIX New York is just a cross connect away from 99% of the providers in this metro market,” said Frank Orlowski, CMO for DE-CIX.

Mr. Sabey added, “We are very happy to partner with DE-CIX New York on this important market-enriching change. We are proud to be one of the first customers on the exchange, since having DE-CIX inside 375 Pearl leads to a larger choice of interconnection options for our customer base.” 

Intergate.Manhattan represents an exponential expansion to New York City’s computing infrastructure capacity. The 32-story Intergate.Manhattan is also the most efficient data center in New York City, with brand new infrastructure and state-of-the-art climate control and electrical systems. The building’s Con Ed substation is on the second and third floors, eliminating a risk from storm surge damage that affected so many buildings in Lower Manhattan during Superstorm Sandy.

Survey reveals 39% of enterprises are not managing, or are manually managing, their virtual environments, negating any anticipated cost savings

MAIDENHEAD, UK - Tensions between application producers and enterprises around virtualisation are likely to heat up. As software vendors and intelligent device manufacturers change their licensing rules to profit from virtualisation, the latest IDC/Flexera Software report, based on a survey of both application producers (software vendors and intelligent device manufacturers) and enterprises, shows that 42% of application producers plan on changing their licensing/compliance policies for virtualisation. At the same time, alarmingly, a large percentage of enterprises – 39% – either don't manage their software licences at all in virtual environments, or do so manually.

This disconnect is likely to increase tensions between buyers and sellers of enterprise software as more organisations are found to be out of compliance with virtualisation licensing rules, resulting in steeper "true-up" or cost-balancing penalties.

The new Flexera Software 2013-14 Key Trends in Software Pricing & Licensing Report was prepared jointly with analyst firm IDC and is the ninth annual assessment of key issues and trends on the minds of software vendors, intelligent device manufacturers, and enterprise IT executives and managers.

"Virtualisation adds great complexity around software licensing and creates new compliance challenges for customers," said Amy Konary, Research Vice President - Software Licensing and Provisioning at IDC. "We've seen instances in which the savings that organisations anticipate through virtualisation disappear, and costs actually increase due to higher licensing fees. Smart organisations should be aware of the licensing cost implications of virtualisation and implement software licence management best practices and technologies to help reduce that risk and make more informed decisions."

According to the survey report, application producers are rapidly evolving their software pricing and licensing strategies – but they are largely unaware of the difficulties their enterprise customers have managing software entitlements. 33% of application producers said they've changed their software pricing and licensing models in the past 18-24 months. 48% said their primary reason for making these changes was to generate more revenue. Yet in assessing the impact of those changes on their customers, a majority of application producers – 59% – say it is not difficult for enterprises to determine which products they are entitled to use.

In fact, enterprises experience tremendous difficulties managing their entitlements and staying in compliance. As reported in the previous Key Trends in Software Pricing and Licensing survey on Software Licence Audits: Costs & Risks to Enterprises, 85% of organisations are out of compliance with their software licence agreements. With 39% of enterprises in today's survey reporting that they either don't manage their virtualised software licences or do so manually, software vendor audits will likely yield increased findings of licence non-compliance – fuelling tensions between application producers and their customers.
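The entitlement reconciliation the report recommends can be sketched simply: compare deployed instance counts (physical plus virtual) against purchased entitlements and estimate the "true-up" exposure for anything over the line. All product names, counts and prices below are hypothetical examples, not from the survey:

```python
# Hypothetical entitlement reconciliation: flag products whose deployed
# instance count (physical + virtual) exceeds purchased entitlements,
# and estimate the true-up cost at list price.
entitlements = {"DBServer": 10, "AppSuite": 25}        # licences owned (example)
deployed = {"DBServer": 14, "AppSuite": 20}            # instances discovered (example)
list_price = {"DBServer": 5000.0, "AppSuite": 1200.0}  # per-licence cost (example)

true_up = {
    product: (deployed.get(product, 0) - owned) * list_price[product]
    for product, owned in entitlements.items()
    if deployed.get(product, 0) > owned
}
print(true_up)  # prints {'DBServer': 20000.0} -- 4 unlicensed instances
```

Real licence management tools must also handle virtualisation-specific metrics (per-core, per-socket, per-VM rules), which is exactly the complexity the report says manual tracking fails to capture.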

"There is already some strain on the producer/enterprise relationship. No organisation enjoys the disruption of having to defend against a vendor's software licence audit or paying a true-up fee," said Jim Ryan, Chief Operating Officer at Flexera Software. "Application producers need to understand how challenging it is for their well-intentioned customers to remain in compliance with their licensing terms. And prudent enterprises must understand that virtualisation adds a whole new layer of licence compliance risk exposure, requiring them to be proactive about implementing industry best practices and technology to manage those risks."