It seems something of a paradox that Data Center Infrastructure Management (DCIM) platforms are gaining in stature while the vast majority of enterprises are supposedly de-emphasizing local resources in favor of the cloud.
But the trend is clear: run-of-the-mill enterprises are turning to every means necessary to reduce costs and improve efficiency within their on-premises infrastructure, while large cloud providers and hyperscale organizations have no choice but to balance workloads against resource consumption or watch their business models collapse under the weight and complexity of their own IT operations.
The challenge going forward is not simply to deploy DCIM, says International Data Corp. in a new report, but to weigh the various DCIM platforms against emerging goals and technology developments. Not all DCIM solutions are alike; in fact, few of them are. Some focus largely on asset management and connectivity, while others are geared toward critical infrastructure and facilities control. Some are software-only, while others introduce a mix of hosted services. Weighing the pros and cons will require a clear assessment of the nature of current infrastructure (is it converged, distributed or both?), as well as internal skillsets, plus future requirements in terms of scale, integration and automation.
There’s a lot of buzz around the Internet of Things (IoT), not least with latest forecasts from Gartner suggesting that 20.8 billion connected things will be in use worldwide by 2020.
Already the estimated number of connected things in 2016—6.4 billion, according to Gartner—is a 30 percent increase over 2015. In fact, 5.5 million new things will get connected every day in 2016, Gartner predicts.
One scenario that any office manager will inevitably dread is the logistical nightmare that is moving to new premises. Whether you’re a booming business that’s outgrowing its existing space, or you’re looking to move into more economical digs, making sure the big day goes smoothly is just as stressful for a business as it is in your personal life.
This is something I’m well aware of at the moment, as we at Kroll Ontrack are moving ourselves this weekend. And in between packing up boxes and sorting out issues such as moving our communications and utilities, it’s got me thinking about another critical – yet often overlooked – factor that needs to be considered when moving offices. Namely, how can you be sure your digital data is secure throughout the process?
In recent cybersecurity talks, the prevalent theme has been, “We know we need to do something, but what?”
The recurring questions are: Where do we start, and how fast do we need to react to stop cyberattacks? What's become quite clear is that if we are to secure our digital world, we need to do it with technologies that run as fast as the networks and applications in which they operate — in milliseconds.
A recurring theme in recent discussions is the need for proactive defensive measures in cybersecurity, and how quickly they must react to stop today's hackers. Even the language in the new cybersecurity bill seems to fall short of true cybersecurity protection: it is geared toward sharing information to assist in the detection of, and recovery from, a cyberattack rather than toward a proactive solution that would stop the attack.
This leads to a few important questions: Is there a big disconnect between the public and private sectors when it comes to what cybersecurity is supposed to achieve? If so, what is that disconnect, and how can we move forward?
(TNS) - A violent rampage at UC Merced and threats of gunplay at Fresno State earlier this month are prompting universities to reassess the resources and policies in place to ensure safety and security on their campuses, and a school security training session is being planned in Angels Camp.
The Rural Domestic Preparedness Consortium will deliver a Department of Homeland Security-certified course in crisis management for school-based incidents in an all-day training Dec. 21 at Bret Harte High School in Angels Camp. The course is free for first responders and school administrators who register by Dec. 7.
At UC Merced, a student stabbed four people with a hunting knife Nov. 4 before being shot and killed by campus police. Two days earlier, a social media post attributed to a California State University, Fresno, student threatened that a shooting would take place that afternoon. Investigators made an arrest within hours.
Mark Armour and David Lindstedt recently proposed Continuity 2.0, a manifesto detailing how current approaches to business continuity planning might evolve. In this article Mark looks at how Continuity 2.0 might be applied in practice.
The following example is by no means definitive. Remember that the Continuity 2.0 principles are not about order of execution. The three steps suggested here provide just one example of how the principles could be applied in a fairly concise execution. So, without further ado: a practical approach to Continuity 2.0 in three easy steps.
The peak of our current El Niño is expected to occur in the next month or so… but what does that mean? We measure El Niño events by how much warmer the surface waters in a specific region of the equatorial Pacific are, compared to their long-term average. The difference from average is known as the “anomaly,” and we use the average anomaly in the Niño3.4 region as our primary index for El Niño. When the index in this region is at its highest, we have our peak El Niño.
However, El Niño-related impacts have been occurring around the globe for months already, and will continue for several months after the warmest temperatures occur in the tropical Pacific Ocean. For example, during the 1997-98 El Niño, the Niño3.4 Index peaked at 2.33°C in November (using ERSSTv4 data, the official dataset for measuring El Niño), and the most substantial U.S. effects occurred through the early spring of 1998. A bit later in this post, we’ll take a look at what’s been going on so far this year.
First, a quick update on the recent El Niño indicators
The average anomaly in the Niño3.4 region during August-October of this year was 1.7°C, second to the same period in 1997 (1).
The atmospheric response to the warmer waters is going strong. The Walker Circulation (tropical near-surface winds blowing from east to west, and upper-level winds blowing from west to east) is substantially weakened, as we expect during a strong El Niño.
In case you’re unimpressed by a 2°C (3.6°F) change, let’s do a little math. The area covered by the Niño3.4 region is a little more than 6 million square kilometers (2.4 million square miles). One cubic meter of water weighs 1,000 kg. So the top two meters (6.6 feet) of the Niño3.4 region contains about 12 quadrillion kilograms (about 13.6 trillion tons) of water.
The energy required to raise one kilogram of water one degree Celsius (the “specific heat”) is 4.19 kilojoules. A 2°C increase in just the top two meters of the Niño3.4 region adds up to an extra 100 quadrillion kilojoules (95 quadrillion BTUs), about equal to the annual energy consumption of the U.S.!
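The arithmetic above is easy to verify. Here is a quick sanity check using only the figures given in the text (area, depth, water density, specific heat, and a rough kJ-to-BTU conversion):

```python
# Back-of-the-envelope check of the heat-content numbers in the text.

AREA_M2 = 6.0e6 * 1.0e6    # ~6 million km^2, converted to m^2
DEPTH_M = 2.0              # top two meters of the Nino3.4 region
DENSITY_KG_M3 = 1000.0     # one cubic meter of water weighs 1,000 kg
SPECIFIC_HEAT_KJ = 4.19    # kJ to raise 1 kg of water by 1 degree C
WARMING_C = 2.0            # the 2 degree C anomaly in question
KJ_PER_BTU = 1.055         # rough conversion factor

mass_kg = AREA_M2 * DEPTH_M * DENSITY_KG_M3
energy_kj = mass_kg * SPECIFIC_HEAT_KJ * WARMING_C
energy_btu = energy_kj / KJ_PER_BTU

print(f"mass:   {mass_kg:.1e} kg")    # ~1.2e16 kg (12 quadrillion)
print(f"energy: {energy_kj:.1e} kJ")  # ~1.0e17 kJ (100 quadrillion)
print(f"energy: {energy_btu:.1e} BTU")  # ~9.5e16 BTU (95 quadrillion)
```

The result indeed lands at roughly 100 quadrillion kilojoules, or about 95 quadrillion BTUs, in line with annual U.S. energy consumption.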
Who’s feeling the effects?
In the U.S., the season of strongest El Niño impacts is December through March. While we’re waiting to see what the strong 2015-16 El Niño brings us, we’ll look around a few other corners of the world to see what’s happened so far.
El Niño has substantial impacts in two regions of Africa. I checked in with the Climate Prediction Center’s International Desk to see what’s been going on. In East Africa, including Ethiopia, Somalia, Kenya, Tanzania, Uganda, Burundi, and Rwanda, the primary impact season is October–December, when El Niño tends to enhance the “short rains” rainy season (the “long rains” season, which is much less ENSO-sensitive, is March–May), leading to wetter conditions. Over the last month, rain has begun to increase across much of the area, and some flooding has been seen in Somalia. Short-term forecasts suggest the wetter conditions should continue through the next few weeks, at least.
Southern Africa, including Zimbabwe, Botswana, Namibia, Angola, South Africa, Lesotho, Swaziland, and the southern half of Mozambique, tends to see a drier December–February during an El Niño. Areas of this region, especially South Africa, are very dry right now, after a failed monsoon last year. Another dry year would place more stress on water availability. You can check out recent rainfall conditions in Africa here, and see climate model forecasts for the continent here.
In a couple of short sentences, here are some huge impacts: El Niño-related dry conditions in Indonesia have set the stage for devastating fires, and the region is experiencing the greatest number of forest fires since 1997. Also, all the extra warm waters associated with this El Niño are placing heat stress on sea life, and an intense coral bleaching event is underway.
El Niños tend to enhance the hurricane season in the Pacific and suppress the Atlantic hurricane season. Phil Klotzbach of Colorado State University had this to say about the wild Pacific hurricane season: “So far this year, there have been a total of 21 Category 4 and 5 storms in the North Pacific, shattering the old record of 17, set in 1997. The North Central Pacific region (140-180W) has shattered records for most named storms, hurricanes, and major hurricanes tracking through the 140-180W region.”
According to Lindsey Long of the Climate Prediction Center, the Atlantic season has been fairly quiet, although the number of named storms has been close to average, at 11 storms so far (including Kate, which formed on Monday). The average is about 12… but the overall activity of this storm season (the combined strength and duration of all storms, measured as Accumulated Cyclone Energy, or ACE) has been less than 60% of average, and we’ve had 3 hurricanes, half the average number of 6.
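As a rough illustration of the ACE metric mentioned above: it is conventionally computed by summing the squares of a storm's maximum sustained wind (in knots) at six-hour intervals, counting only intervals at tropical-storm strength or above, and scaling by 10⁻⁴. The wind values below are invented for illustration; real ACE totals use official best-track data.

```python
# Sketch of the Accumulated Cyclone Energy (ACE) calculation.
# Winds are six-hourly maximum sustained winds in knots; only intervals
# at tropical-storm strength (>= 35 kt) count toward the total.

def storm_ace(six_hourly_winds_kt):
    """ACE contribution of one storm, from its 6-hourly max winds (knots)."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) * 1e-4

# Hypothetical storm: spins up, peaks at hurricane strength, then decays.
winds = [30, 40, 55, 70, 85, 80, 60, 45, 30]
print(round(storm_ace(winds), 2))  # -> 2.88
```

Summing contributions like this across every storm in a season, and comparing against the long-term average, is what yields statements such as "less than 60% of average."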
We won’t know until next spring what exact impact this El Niño will have on the U.S., but it is already making its presence felt around the world.
(1) Note that CPC subtracts past 30-year “normals” from the current sea surface value to obtain the Niño3.4 anomaly values, and the “normals” are updated every five years. Therefore, the long-term trends are removed. These monthly values are averaged together to obtain our Oceanic Niño Index (ONI).
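The procedure in the footnote can be sketched in a few lines: subtract a 30-year "normal" from each monthly Niño3.4 sea surface temperature to get anomalies, then average the monthly anomalies in overlapping 3-month windows. The SST and normal values below are invented for illustration; the real index uses ERSSTv4 data and period-specific normals.

```python
# Sketch of the Oceanic Nino Index (ONI) calculation: monthly anomalies
# (SST minus a 30-year normal) smoothed with a 3-month running mean.

def oni(monthly_sst, monthly_normals):
    """3-month running mean of Nino3.4 anomalies (degrees C)."""
    anomalies = [sst - norm for sst, norm in zip(monthly_sst, monthly_normals)]
    # Each ONI value is centered on the middle month of its 3-month window.
    return [sum(anomalies[i:i + 3]) / 3 for i in range(len(anomalies) - 2)]

# Invented Aug-Dec values (degrees C) for a strong warm-event year:
sst     = [28.0, 28.3, 28.6, 28.9, 29.0]
normals = [26.6, 26.7, 26.8, 26.8, 26.7]
print([round(v, 2) for v in oni(sst, normals)])
```

Sustained ONI values at or above +0.5°C indicate El Niño conditions; values near +2°C, as in 1997 and 2015, mark a very strong event.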
A couple of recent studies show that companies continue to struggle with endpoint security. This has to be a serious concern as more employees are connecting to the corporate network through multiple devices.
Let’s look at these different studies. First, last week, MeriTalk and Palo Alto Networks released the Endpoint Epidemic report, which looks at endpoint security within the federal government. Government agencies are failing badly when it comes to endpoint security: 44 percent of endpoints are either unknown or unprotected, and up to half of the agencies are doing little about it, as SC Magazine pointed out:
Just over half of federal IT managers (54 percent) responded that their current policies and standards are very effective, practical or enforceable. Further, less than half said their agency's endpoint security policies and standards are very well integrated into their overall IT security strategy. And, half said their agency isn't taking key steps to validate users and apps.
Cutter Fellow Bob Charette has been blogging over at IEEE Risk Factor for the past decade, looking at the myriad ways software projects fail. To mark that 10-year milestone, he set out to analyze what’s changed — and what hasn’t — in the area of systems development- and operations-related failures.
Bob doesn’t claim to have compiled a comprehensive “database of debacles” in Lessons From a Decade of IT Failures. Instead, he’s endeavored to bring together the “most interesting and illustrative examples of big IT systems and projects gone awry.” Be sure to spend some time with his colleague Josh Romero’s five super cool interactive visualizations of the data.
Transforming an acquired technology into a fully integrated product.
In 2014, Citrix acquired a company called ScaleXtreme, as part of our expansion into the world of enterprise SaaS solutions. ScaleXtreme was a powerful tool for automating delivery and management of IT services, and my design team was asked to redesign it to fit in with our existing products.
At the same time, we had to find a way to integrate the new product into an entirely new platform, Citrix Workspace Cloud, that was still being developed.
This was a multi-dimensional challenge, and one that many companies have to deal with. Success is far from guaranteed, and there are many potential pitfalls. It helps to have a clear strategy, early customer input, and, most importantly, teams that all work together to find the right solution.