WHEELING, Ill. – Response Team 1 has expanded and strengthened its national presence through the acquisition of four regional property restoration firms. Response Team 1 is the nation’s second largest property restoration company serving 34 states from 25 locations.
The companies acquired by Response Team 1 are Empire Construction & Technologies, Inc., Irvine, Calif.; ESN Restoration Services, San Marcos, Calif.; QCI Restoration, Elgin, Ill.; and Worldwide Restoration, Inc., Tulsa, Okla. They each provide a wide range of property restoration services, often in response to weather events and other disasters.
“The addition of these four companies is consistent with our strategy of expanding our best-in-class portfolio of property restoration and renovation firms,” said John M. Goense, Chairman of Response Team 1. “As a Response Team 1 company, individual firms will retain their leadership and commitment to service that has positioned them as leaders in their respective markets. This collaborative relationship will enhance their abilities to deliver high-quality services to their customers.”
With its resources and capabilities, Response Team 1 has the capacity and expertise to quickly restore any single family residence or large commercial building, or renovate any multifamily property coast-to-coast.
Response Team 1 was recently named to the Inc. Magazine 500 list as one of the fastest growing companies in America in 2014. The firm captured the 422nd slot in the ranking, making the Top 500 in its first year of eligibility.
About Empire Construction & Technologies, Inc.
Established in 2001 in Orange County, Calif., Empire Construction & Technologies specializes in all facets of residential and commercial restoration. More information is available at http://www.empirecompany.net/
About ESN Restoration Services
ESN Restoration Services provides emergency service and repair contracting for insurance carriers and property owners. More information is available at www.4esn.com.
About QCI Restoration
Founded in 1978, Elgin, Ill.-based QCI Restoration is a leading Chicagoland fire and flood restoration contractor. For more information about QCI Restoration visit www.qcirestoration.com.
About Worldwide Restoration, Inc.
Worldwide Restoration Inc. is a Tulsa, Okla.-based restoration provider for water, fire and storm damage. More information is available at www.wwrestoration.com.
About Response Team 1
Response Team 1 is an award-winning national leader in the property restoration, disaster loss recovery and multifamily renovation industries. We are committed to getting life back to normal quickly and correctly for our customers.
Response Team 1 serves 34 states from 25 locations with high quality residential and commercial property restoration and renovation services. The national size and footprint give Response Team 1 the ability to mobilize resources capable of responding to large losses and community weather events. The company is based in Wheeling, Ill.
More information is available at www.responseteam1.com.
Internap’s colocation and Performance IP services deliver performance, reliability and scale to meet massive, dynamic digital content demands of media and entertainment sector
ATLANTA – Internap Corporation (NASDAQ: INAP), a provider of high-performance Internet infrastructure services, today announced that renowned visual effects company Scanline VFX is using Internap’s high-density colocation and route-optimized Performance IP™ services to deliver the speed, reliability and scale needed to process high volumes of performance-intensive digital content workloads and easily accommodate business growth and changing project workflows.
With offices in Los Angeles, Vancouver and Munich, Scanline VFX specializes in creating complex visual effects for high-end feature films and TV commercials. Well-known for its Flowline software, used to create fluid effects like water and fire, the company’s portfolio includes 300: Rise of an Empire, Batman Vs. Superman, Captain America, Divergent, Furious 7, Hunger Games: Mockingjay – Part 1, Iron Man, San Andreas and The Wolf of Wall Street.
Scanline previously operated four corporate data centers to support its visual effects workflows, which include the continuous acquisition, editing and production of massive amounts of data-intensive digital content. However, as its in-house space neared full capacity, the company sought to merge its infrastructure into one location with an outsourced data center provider that could offer improved performance, reliability and scalability. Scanline VFX required an infrastructure solution with large amounts of CPU power, along with guaranteed availability and redundancy, to run thousands of renders and simulations concurrently, amounting to more than 210 terabytes of data daily, with an overall storage capacity of more than three petabytes. High-performance data center networking was also imperative to ensure that engineers across Scanline’s three global locations could rapidly access and process visual effects jobs at any time, without interruption.
“We run one of the biggest data islands in the country, and with the sheer number of renders and simulations that are constantly running, flawless data management is absolutely critical to our business – downtime or blips in performance aren’t an option,” said Scott Miller, studio manager at Scanline VFX. “Moving our entire internal data center footprint to Internap was an easy decision. With fully redundant infrastructure, high-power density capabilities and an optimized, managed network, as well as the ability to scale on short notice to meet new client and render capacity demands, we can focus our efforts where they matter most – on delivering outstanding visual effects.”
Internap’s colocation service supports high-density power of up to 18kW, with a unique design that enables Scanline VFX to scale power in-rack without consuming more floor space. Internap’s data centers also feature a concurrently maintainable design, 24x7 on-site data center engineers, advanced security features and remote management self-service tools. Its global Performance IP service with patented Managed Internet Route Optimizer™ (MIRO) technology, backed by a 100% uptime guarantee, evaluates available service networks in real time, ensuring low latency and delivering Scanline VFX traffic over the fastest Internet path. Additionally, as Scanline VFX’s workload needs change over time, it is able to future-proof its infrastructure investment, with the ability to hybridize colocation with Internap’s OpenStack-based public cloud and automated bare-metal server instances, as well as managed hosting – all managed via a single-pane-of-glass portal.
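The route-optimization idea described above can be illustrated in miniature: continually measure candidate paths and send traffic over the fastest one. This is a toy sketch of the general technique, not Internap's MIRO implementation; the carrier names and latency figures are invented for the example.

```python
# Toy illustration of latency-based route selection. Carrier names and
# latency values are hypothetical; a real optimizer would also weigh
# packet loss, jitter and policy, and re-measure continuously.

def pick_fastest_path(latencies_ms):
    """Return the candidate path with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {"carrier_a": 42.1, "carrier_b": 18.7, "carrier_c": 27.3}
print(pick_fastest_path(measured))  # -> carrier_b
```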
“Media and entertainment companies are facing new digital business models, changing consumer viewing habits and increased competition, requiring higher levels of business agility and infrastructure performance than ever before,” said Mike Higgins, senior vice president of data center services at Internap. “Internap delivers performance on every front, with an unmatched combination of scalable, ultra-high density power and optimized data center networking to help the media and entertainment industry, from visual effects studios to broadcast and social media companies, turn infrastructure into a competitive advantage.”
- Blog - Customer spotlight: Scanline VFX creates stunning visual effects with high-density infrastructure http://www.internap.com/2015/05/21/customer-scanline-vfx-creates-visual-effects-high-density-infrastructure
- Powerful Trends – Scalable Density is on the Rise: http://www.internap.com/2015/01/13/powerful-trends-scalable-density-rise/
- Overcome Latency, Improve Network Efficiency with MIRO: http://www.internap.com/2015/03/14/internet-latency-network-efficiency-miro/
Tweet this news: http://buff.ly/1BeLyo6
About Scanline VFX
Based in Los Angeles, Vancouver, Munich and Cologne, Scanline VFX specializes in creating complex visual effects for high-end feature films and commercials. Learn more at www.scanlinevfx.com.
About Internap
Internap is the high-performance Internet infrastructure provider that powers the applications shaping the way we live, work and play. Our hybrid infrastructure delivers performance without compromise – blending virtual and bare-metal cloud, hosting and colocation services across a global network of data centers, optimized from the application to the end user and backed by rock-solid customer support and a 100% uptime guarantee. Since 1996, the most innovative companies have relied on Internap to make their applications faster and more scalable. For more information, visit www.internap.com.
You’re ready for advancement, you want to learn and you’re looking for an educational programme to encompass the needs of your current or planned role in the protection and preservation of your organisation’s functionality, viability and profitability. The MSc Organisational Resilience at Buckinghamshire New University will be good for you – here’s why:
You will become confident, capable and thorough in your knowledge and understanding of organisational resilience
You will understand how resilience needs to match the context of a changing global operating and threat landscape
You will develop the important skill of not just being able to talk about resilience, but also to take an analytical approach that allows you to offer balanced and evaluated solutions to real problems and issues
To ensure the availability of high-performance, mission-critical IT services, IT departments need both solid monitoring capabilities and dedicated IT resources to resolve issues as they occur. But even with the right tools in place, when an abundance of alerts and alarms starts streaming in, it can quickly become overwhelming, particularly when IT staff have been asked to focus time and attention on activities that both support the organization’s end users and add to the company’s bottom line.
Logicalis US suggests that organizations need to ask the following five key questions to help ensure that enterprise IT monitoring is fit for purpose:
1. Is your monitoring tool configured properly? Most organizations have off-the-shelf monitoring tools that gather information from all of the devices on their network. The information coming from these tools can be overwhelming, and while it may be helpful to have access to all of that data, weeding through it in crunch-time can be cumbersome. To limit alerts to those that are most important takes training, knowledge and expertise, which leads many organizations that want to manage IT monitoring in house to employ full-time experts just to configure and manage their monitoring tools.
2. Do you update regularly? Since rules are continually being added to monitoring tools, monitoring isn’t an ‘implement and forget it’ situation, which means IT departments spend a considerable amount of time making sure the tools they depend on for alerts are as up to date as possible.
3. Can your tool provide event correlation? A single network error can have a ripple effect impacting applications that would otherwise be completely unrelated. As a result, it’s critical that an IT monitoring tool provide event correlation to speed diagnosis and remediation in all affected areas.
4. Does your monitoring tool offer historical trending data? When managing an enterprise environment, IT pros need to analyze historical trend data to identify recurring issues as well as to do capacity planning which, in many cases, can help prevent issues before they arise. Some of today’s popular monitoring tools, however, either operate in real time or store historical data for 30 days or less. Knowing what your tool offers is important information since being able to intelligently analyze and manage an organization’s IT environment can depend on having access to this historical data long term.
5. Do you have the right expertise in house? In an enterprise IT environment, it’s important to consider internal staffing needs and the expertise required to manage the monitoring tools and process in house. Keeping an enterprise environment up and running is no longer IT’s value-add; it’s an expectation. Today, most organizations want their IT staff delivering business results, which is why it may make sense to consider outsourcing monitoring to a third party skilled in assessing and limiting incident reports to only the handful that a busy internal staff actually needs to address.
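Two of the ideas in the checklist above lend themselves to a minimal sketch: filtering a flood of alerts down to the most important ones (question 1) and grouping alerts that likely share a root cause (question 3). The alert fields, severity levels and thresholds here are hypothetical and not drawn from any specific monitoring tool.

```python
# Minimal sketch of alert filtering and crude event correlation.
# Field names and severity levels are invented for illustration.
from collections import defaultdict

SEVERITY_ORDER = {"critical": 0, "warning": 1, "info": 2}

def filter_alerts(alerts, min_severity="warning"):
    """Keep only alerts at or above the given severity."""
    cutoff = SEVERITY_ORDER[min_severity]
    return [a for a in alerts if SEVERITY_ORDER[a["severity"]] <= cutoff]

def correlate_by_source(alerts):
    """Group alerts by the device that raised them - a crude stand-in
    for real event correlation, which would also use topology and time."""
    groups = defaultdict(list)
    for a in alerts:
        groups[a["device"]].append(a)
    return dict(groups)

alerts = [
    {"device": "core-switch-1", "severity": "critical", "msg": "link down"},
    {"device": "core-switch-1", "severity": "warning",  "msg": "high latency"},
    {"device": "app-server-3",  "severity": "info",     "msg": "backup done"},
]
important = filter_alerts(alerts)
print(len(important))  # 2 - the info-level alert is dropped
```

Real correlation engines go much further (topology-aware root-cause analysis, time-window clustering), which is why the checklist treats it as a differentiating capability.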
On 12th December 2014 NATS, the UK's leading provider of air traffic control services, experienced a failure in its Swanwick flight data system. The outage resulted in widespread flight delays and cancellations. A report has now been published which details the events behind the outage and subsequent business continuity response.
Written by an enquiry panel led by Sir Robert Walmsley, the report finds that:
- The failure occurred on 12th December because of a latent software fault that had been present since the 1990s. The fault lay in the software’s performance of a check on the maximum permitted number of Controller and Supervisor roles.
- The system error was triggered by a number of new Controller roles that had been added to the system the day before.
- The standard practice in NATS is that engineering recovery is coordinated through a group of designated engineers, known as the Engineering Technical Incident Cell (ETIC) and drawn from those available in the Systems Control Centre adjacent to the Operations Room. While some recovery actions are automated, ETIC manually controls all key recovery actions, e.g. the restoration of data, to ensure that decisions are made with due and careful deliberation; this is important, as the wrong decisions could have further downgraded performance.
- Identifying a software fault in such a large system (the total application exceeds 2 million lines of code), within only a few hours, is a surprising and impressive achievement. This was made possible because system logs contain details of the interactions at the workstations.
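The class of fault the panel describes, a hard-coded limit that lies dormant until a configuration change finally exceeds it, can be illustrated in simplified form. The limit value, function names and failure behaviour below are invented for illustration and do not reflect the actual NATS software.

```python
# Hypothetical illustration of a latent bounds-check fault: a hard-coded
# ceiling on role counts that never fires during normal operation, then
# trips when configuration growth exceeds it. All values are invented.

MAX_ROLES = 150  # a latent, hard-coded ceiling set decades earlier

def check_roles(controller_roles, supervisor_roles):
    """Validate the combined Controller and Supervisor role count."""
    total = controller_roles + supervisor_roles
    if total > MAX_ROLES:
        # In the real incident, exceeding the limit led to a system
        # failure rather than a graceful rejection like this one.
        raise RuntimeError(f"role count {total} exceeds limit {MAX_ROLES}")
    return total

check_roles(140, 10)    # within the limit: fine for years of operation
# check_roles(145, 10)  # new roles added the day before trip the limit
```

The point of the sketch is that nothing in day-to-day operation exercises the boundary, so the fault can sit undetected in millions of lines of code until the environment changes.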
The detailed 93-page report is available here as a PDF and should be of interest to business continuity managers whatever their sector. It shows how legacy systems can have unexpected and unanticipated impacts as well as giving useful details about the business continuity plans and strategies that were in place at the time of the incident.
The report makes clear that although this was a high profile incident which caused difficulties for NATS' direct customers and the supply chain, it was undoubtedly a business continuity success. Without a strong recovery team response and the pre-planned procedures that were in place the incident and disruption would have been much worse.
According to a new market research report published by MarketsandMarkets the mass notification market is estimated to grow from $3.81 billion in 2015 to $8.57 billion in 2020. This represents a compound annual growth rate (CAGR) of 17.6 percent from 2015 to 2020.
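The quoted growth rate can be checked directly from the figures in the report: a rise from $3.81 billion to $8.57 billion over the five years from 2015 to 2020.

```python
# Verifying the quoted compound annual growth rate (CAGR):
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 3.81, 8.57, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 17.6%
```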
The major forces driving this market are the growing need for public safety, increasing awareness for emergency communication solutions, the requirement for mass notification for business continuity, and the trend towards mobility.
The report says that business continuity and disaster recovery and public safety compliance standards are boosting the sales of mass notification solutions.
Mass notification solution providers are expected to collaborate and offer more competitive services in order to take advantage of the emerging mass notification market and to meet the need for complete crisis communication solutions.
Obtain the ‘Mass Notification Market by Solution (In-Building, Wide-Area, Distributed Recipient), by Application (Interoperable Emergency Communications, Business Continuity & Disaster Recovery, Integrated Public Alert & Warning, Business Operations), by Deployment, by Vertical & by Region - Global Forecast to 2020’ report from here.
Most people are visually oriented when it comes to taking in information. They also prefer analogue displays to digital ones. In other words, when it comes to understanding risk as part of business continuity, they like colours and graphics, rather than numbers in a spreadsheet. That makes the risk heat map a popular choice for presenting summary risk information to non-risk experts or senior management. Typically, areas in red on the heat map indicate the biggest risks and areas in green the smallest or most acceptable risks. But is this approach in fact too limited?
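The scoring behind a typical red/amber/green heat map is simple: rate likelihood and impact on numeric scales, multiply, and band the result into colours. The scales and band thresholds below are illustrative only; organisations tune these to their own risk appetite, which is part of the limitation the question above raises.

```python
# Minimal sketch of risk heat-map banding: likelihood and impact on
# 1-5 scales, with score bands mapped to colours. The thresholds are
# illustrative, not a standard.

def heat_map_colour(likelihood, impact):
    """Map a likelihood x impact score (1..25) to a heat-map colour."""
    score = likelihood * impact
    if score >= 15:
        return "red"    # biggest risks
    if score >= 6:
        return "amber"
    return "green"      # smallest / most acceptable risks

print(heat_map_colour(5, 4))  # -> red
print(heat_map_colour(2, 2))  # -> green
```

Note how the multiplication collapses two dimensions into one number: a rare catastrophe and a frequent nuisance can land in the same band, which is one reason the heat map is often criticised as a summary device.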
Virtually everyone is in favor of an energy-efficient data center. But if that is the case, why has the industry struggled so mightily to reduce power consumption?
Even with the remarkable gains in virtualization and other advanced architectures, the data center remains one of the primary energy consumers on the planet, and even worse, a top cost-center for the business.
But the options for driving greater efficiency in the data center are multiplying by the day – from low-power, scale-out hardware to advanced infrastructure and facilities management software to new forms of power generation and storage. As well, there is the option to offload infrastructure completely to the cloud and refocus IT around service and application delivery, in which case things like power consumption and efficiency become someone else’s problem.
Editor’s Note: This is part of a series on the factors changing data analytics and integration. The first post covered cloud infrastructure; the second discussed new data types, and the third focused on data services.
Data keeps expanding, but only recently have organizations been able to store the data in useful ways. Now, organizations can theoretically keep data at the ready, whether it’s in the cloud, a data lake or in-memory appliance.
Hopefully, it will soon be archaic to hear my doctor say, “Oh, we sent that x-ray to tape. We could get it — but it’s a huge hassle.”
The ability to store mass data is one of the five data evolutions that David Linthicum cited in his thesis on “The Death of Traditional Data Integration.” The ability to pool Big Data sets would not be disruptive, though, if it weren’t coupled with the ability to access it easily and as needed for analytics. As Informatica CEO Sohaib Abbasi points out, this “richness of big data is disrupting the analytics infrastructure.”
One of the often overlooked aspects of Big Data and the Internet of Things is the ability to model and simulate advanced data architectures. This is likely to become a crucial element in the emerging data-driven economy because it allows business leaders to further optimize their digital footprints in support of business goals without disrupting current operations.
As expected, there is a plethora of new simulation platforms hitting the channel that utilize both cloud and on-premises resources to, ironically, model cloud and on-premises infrastructure in support of advanced development and productivity applications.