If you look at recent earnings reports by the biggest data center providers, you’ll get the impression that the industry is booming.
And it is. Enterprises are moving more workloads either to the cloud or to commercial colocation facilities, and data center providers are benefiting from both. As more companies use cloud services, cloud providers are racing to lease as much data center capacity as they can get their hands on, resulting in a boom for the big data center providers who can’t build new facilities fast enough to satisfy all the demand.
The sound of champagne corks popping after earnings reports by the biggest players in the market, however, can mask the fact that, overall, the number of new data centers being built for lease to one or multiple tenants in the US has been declining.
Next time you’re home when a heavy thunderstorm rolls in, take a moment to think about how damaging lightning losses can be and how insurance helps.
In fact, insurers paid out $790 million in lightning claims last year to nearly 100,000 policyholders, according to a new analysis by the Insurance Information Institute (I.I.I.) and State Farm.
Damage caused by lightning, such as fire, is covered by standard homeowners policies, and some policies also provide coverage for power surges that are the direct result of a lightning strike.
Why choose SSD?
For starters, you get faster access to your data and your operating system responds more quickly. The benefits of using a solid-state drive (SSD), both at home and within a company, are substantial. It’s no wonder that in recent years sales of traditional hard disk drives (HDDs) have fallen sharply while SSD sales continue to rise. Some experts estimate that HDD unit sales will decrease from around 475 million in 2012 to 409 million in 2017, while SSD sales will increase over the same period from 31 million to 227 million units.
More SSDs are being sold separately or shipped already built into notebooks, laptops, tablets and other mobile devices, and they can also be found in desktops, servers and other high-end storage systems. Even so, a recent report states that SSDs won’t overtake HDDs any time soon, because the latter are still much cheaper per unit of storage. It’s also worth noting that by the end of last year, only 15% of all new notebooks had an SSD built in.
Due to their higher price tag, SSDs are mainly bought for high-end devices where speed is critical: a premium notebook that needs to deliver the latest financial-market quotes within seconds, for example, or servers that constantly process and store big data for high-frequency trading.
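As a rough illustration of how such speed differences are measured, the sketch below times a sequential read of a test file. The file path and size are placeholder assumptions, and a real benchmark would have to account for operating-system caching and the specific drive under test.

```python
# Minimal sketch of a sequential-read throughput test, the kind of
# measurement behind SSD-vs-HDD speed comparisons. TEST_FILE and SIZE_MB
# are placeholder assumptions; OS caching can inflate results, so a real
# benchmark would use a file larger than RAM or drop caches first.
import os
import time

TEST_FILE = "testfile.bin"   # placeholder path on the drive under test
SIZE_MB = 512                # assumed test-file size in MiB
CHUNK = 1024 * 1024          # read and write in 1 MiB chunks

# Create the test file once with incompressible data.
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(os.urandom(CHUNK))

# Time a full sequential read of the file.
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
elapsed = time.perf_counter() - start

print(f"Sequential read: {SIZE_MB / elapsed:.1f} MiB/s")
os.remove(TEST_FILE)
```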
The result of the United Kingdom’s referendum on whether to leave or remain in the EU has been officially announced, and voters have decided to leave. The implications for businesses are unclear, and this page will be continually updated with information to help business continuity and risk managers steer through these turbulent waters.
On Twitter follow the hashtag #businessbrexit
Reusing data center heat instead of simply expelling it isn’t a new idea, but few have been able to do it effectively. The most frequently cited reason for that is that servers produce low-grade heat, meaning the heat energy is difficult to extract and move somewhere where it can be put to use.
One reason the heat is low-grade is that it usually comes in the form of hot air, and air is far from the most effective heat-transfer medium. Replace air with a liquid medium, and the problem of low-grade heat dissipates.
That’s exactly what a company called LiquidCool Solutions is proposing. Its data center cooling technology submerges server electronics in dielectric fluid, and recent tests at a US Department of Energy laboratory have shown that not only is the technology extremely efficient at cooling servers but it can also be used effectively to heat water for typical building uses, such as handwashing.
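A back-of-the-envelope comparison shows why the medium matters so much. The figures below are approximate textbook-style values, and the dielectric-fluid properties are generic assumptions rather than LiquidCool’s specifications.

```python
# Rough comparison of how much heat a cubic metre of each cooling medium
# can carry for a given temperature rise (Q = density * specific_heat * dT).
# Values are approximate; the dielectric-fluid figures are generic
# assumptions, not LiquidCool Solutions' specifications.
media = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "air":              (1.2,  1005),
    "water":            (1000, 4186),
    "dielectric fluid": (800,  1700),  # assumed generic engineered coolant
}

delta_t = 10.0  # assumed coolant temperature rise across the servers, in K

for name, (density, specific_heat) in media.items():
    heat_per_m3 = density * specific_heat * delta_t  # joules per cubic metre
    print(f"{name:>16}: {heat_per_m3 / 1e3:9.1f} kJ carried per m^3 at dT = {delta_t:.0f} K")
```

Per unit volume, the liquids carry on the order of a thousand times more heat than air, which is what makes the recovered heat concentrated enough to be worth reusing.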
This week, Apple released a crucial security patch for its AirPort routers. As PC World noted:
… the flaw is a memory corruption issue stemming from DNS (Domain Name System) data parsing that could lead to arbitrary code execution.
I don’t write much about DNS security, and maybe I should. A couple of recent studies show how vital it is and how much a DNS-related security incident can cost you.
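Even basic, scripted checks can surface DNS problems early. The sketch below, using only Python’s standard library, resolves a watched hostname and flags answers that differ from a known-good list; the domain and addresses are placeholder assumptions, and a production monitor would also track TTLs, query multiple resolvers, and alert rather than print.

```python
# Minimal sketch of a DNS resolution check using only the standard library.
# The watched hostname and "known good" addresses are placeholder assumptions;
# a real monitor would also track TTLs, query several resolvers and alert on
# changes instead of printing.
import socket

WATCHED = {
    # hostname: addresses last verified as legitimate (placeholder values)
    "example.com": {"93.184.216.34"},
}

for host, known_good in WATCHED.items():
    try:
        infos = socket.getaddrinfo(host, None, type=socket.SOCK_STREAM)
        answers = {info[4][0] for info in infos}
    except socket.gaierror as err:
        print(f"{host}: resolution FAILED ({err})")
        continue

    unexpected = answers - known_good
    if unexpected:
        print(f"{host}: unexpected answers: {', '.join(sorted(unexpected))}")
    else:
        print(f"{host}: resolving as expected")
```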
How IT Alerting Can Save Your Business Money
A large US hospitality holding company had a serious problem. It held more than 6,300 hotels, representing more than 500,000 rooms, in more than 35 countries and territories. When an IT outage hit these systems, the business faced revenue losses of approximately $27,000 per minute of downtime. The company had a 24/7 IT monitoring team for critical business systems such as the hotel chain’s reservation system, but every time an outage occurred, it would take 20-30 minutes to get the right IT experts together on a conference bridge to begin resolving the issue.
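Using the company’s own figures, the cost of that assembly delay alone adds up quickly (a simple illustration, not a formal loss model):

```python
# Rough cost of the responder-assembly delay, using the figures quoted above.
# An illustration only, not a formal loss model.
cost_per_minute = 27_000      # approximate revenue loss per minute of downtime
assembly_delays = (20, 30)    # minutes to get the right experts on a bridge

for minutes in assembly_delays:
    print(f"{minutes} min spent assembling responders = ${minutes * cost_per_minute:,}")
# -> roughly $540,000 to $810,000 per incident before troubleshooting even begins
```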
Thankfully, the holding company found Everbridge, and its IT Alerting solution.
“Within 15 days of starting with Everbridge, we had the tool set up and were ready to roll it out without any additional help. It’s that intuitive.”
Adding small amounts of flash as cache or dedicated storage is certainly a good way to accelerate a key application or two, but enterprises are increasingly adopting shared all-flash arrays to increase performance for every primary workload in the data center.
Flash is now competitively priced, operating an all-flash array is simpler than managing mixed storage, and the across-the-board performance acceleration produces visible business impact.
However, recent Taneja Group field research on all-flash data center adoption shows that successfully replacing traditional primary storage architectures with all-flash in the enterprise data center boils down to ensuring two key things: flash-specific storage engineering and mature enterprise-class storage features.
(TNS) - Southern California’s smaller cities and large businesses must take the threat of a crippling earthquake far more seriously than they have been, a committee of business, public policy and utility leaders said Thursday, warning that action is needed to “prevent the inevitable disaster from becoming a catastrophe.”
Despite strides made by the city of Los Angeles to focus on earthquake safety, Southern California still faces significant threats that haven’t been resolved.
One of the most ominous is the looming threat on the edge of Southern California’s sprawling metropolis — the Cajon Pass. It’s a narrow mountain pass where the San Andreas fault — California’s longest and one of its most dangerous — intersects with combustible natural gas and petroleum pipelines, electrical transmission lines, train tracks and Interstate 15 north of San Bernardino.
Digital disruption and pervasive innovation are redefining the way CIOs address the dynamics of today’s data center. Now more than ever, they require solutions that accommodate constant change within existing compute models and enable the build-out of a “future-ready” IT environment that drives the adoption of emerging technologies and hyperscale cloud services.
The modern era of computing requires CIOs to take a more flexible approach to building a data center that can handle the demands and workloads of today’s compute environment – all while allowing them to continue to address the priorities of their business and technology strategy.
Embracing a compute-centric strategy that synthesizes traditional and new IT creates a clear path to a future-proof data center that delivers power and flexibility via a common platform. By taking a compute-centric approach, CIOs can extend existing and new IT architectures to run a spectrum of applications and workloads for any size data center, when and where needed.