ATLANTA – Internap Corporation (Nasdaq: INAP), a provider of high-performance Internet infrastructure services, and Akamai Technologies, Inc. (Nasdaq: AKAM), the global leader in Content Delivery Network (CDN) services, today announced that the companies have entered into an alliance to provide customers with Internap’s performance-optimized cloud, data center and network services combined with Akamai’s cloud security and data center protection services.
As part of the alliance, Internap will offer Akamai Cloud Security solutions to organizations in security-conscious industries – including financial services, healthcare and automotive – that require the infrastructure reliability, availability and scalability that Internap and Akamai jointly deliver. The alliance is expected to result in advanced web security solutions for organizations with performance-reliant applications and workloads, as well as more efficient access to both companies’ technologies.
“Organizations that demand optimal web and application performance simply can’t afford the potentially business-impacting outages and downtime associated with DDoS attacks,” said Michael Ruffolo, Internap’s president and CEO. “We believe this alliance is a logical and powerful combination that uniquely addresses this problem for our joint customers. Akamai will be our sole DDoS mitigation provider, bringing market-leading security capabilities to Internap’s state-of-the-art data center facilities and to our expertise in delivering high-performance cloud, colocation and networking services.”
According to research from Akamai, the frequency, size and sophistication of distributed denial of service (DDoS) attacks have been increasing sharply. Offering protection against the largest and most complex attacks, Akamai helps safeguard websites and other Internet-facing applications from the risks of downtime and data theft. Built on the Akamai Intelligent Platform™, Akamai Cloud Security solutions are designed to provide the scale to stop the largest DDoS and web application attacks without reducing performance, as well as offering intelligence into the latest threats and the expertise to adapt to shifting tactics and attack vectors.
“We view the combination of Internap’s high-performance Internet infrastructure services and Akamai’s cloud security solutions as a natural extension to what has been a very synergistic alliance between our two companies,” said Brad Rinklin, Chief Marketing Officer and SVP – Global Alliances, Akamai. “Faced with an ever-changing threat landscape, organizations require comprehensive security solutions that address many different protection scenarios. These include securing mission‑critical Web properties and applications from attack, as well as protecting enterprise IP applications across a data center. With this expanded alliance, Internap and Akamai are providing joint customers with both high-performing cloud and security capabilities.”
Internap offers one of the industry’s most flexible and high-performing Internet infrastructure portfolios – from OpenStack-based public cloud and automated bare-metal servers to managed hosting and colocation. Internap’s Performance IP™ connectivity with patented Managed Internet Route Optimizer™ (MIRO) technology probes network paths across 88 points of presence worldwide for packet loss, latency and jitter and dynamically routes traffic over the fastest path, resulting in faster, more consistent performance for customer applications.
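The route-optimization approach described above can be sketched in a few lines: probe each candidate path, score it on packet loss, latency and jitter, and send traffic over the best-scoring path. This is a hypothetical illustration only; the metric names, weights and carrier labels below are made up and are not Internap's actual implementation.

```python
# Illustrative metric-based path selection (NOT Internap's real MIRO code).
from dataclasses import dataclass

@dataclass
class PathProbe:
    name: str
    packet_loss_pct: float  # observed loss, 0-100
    latency_ms: float       # round-trip latency
    jitter_ms: float        # latency variance

def path_score(p: PathProbe,
               w_loss: float = 10.0,
               w_latency: float = 1.0,
               w_jitter: float = 2.0) -> float:
    """Lower is better; packet loss is weighted most heavily (weights are made up)."""
    return (w_loss * p.packet_loss_pct
            + w_latency * p.latency_ms
            + w_jitter * p.jitter_ms)

def best_path(probes: list[PathProbe]) -> PathProbe:
    """Route over the path with the lowest combined score."""
    return min(probes, key=path_score)

probes = [
    PathProbe("carrier_a", packet_loss_pct=0.5, latency_ms=40.0, jitter_ms=3.0),
    PathProbe("carrier_b", packet_loss_pct=0.0, latency_ms=55.0, jitter_ms=1.0),
]
print(best_path(probes).name)  # carrier_a (score 51.0 vs 57.0)
```

In practice a system like this would re-probe continuously and reroute as conditions change; the static example simply shows the scoring idea.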
Internap is a member of the Akamai NetAlliance Partner Program, which enables partners to align with Akamai through training and certification, sales and pre-sales support and co-marketing programs.
- Akamai Releases Q2 2015 State Of The Internet - Security Report
- Internap Positioned in Gartner Magic Quadrant for Cloud-Enabled Managed Hosting
- Internap blog: Enhanced Web Security and DDoS Protection
- Akamai Web security blog posts
As the global leader in Content Delivery Network (CDN) services, Akamai makes the Internet fast, reliable and secure for its customers. The company's advanced web performance, mobile performance, cloud security and media delivery solutions are revolutionizing how businesses optimize consumer, enterprise and entertainment experiences for any device, anywhere. To learn how Akamai solutions and its team of Internet experts are helping businesses move faster forward, please visit www.akamai.com or blogs.akamai.com/, and follow @Akamai on Twitter.
Internap is the high-performance Internet infrastructure provider that powers the applications shaping the way we live, work and play. Our hybrid infrastructure delivers performance without compromise – blending virtual and bare-metal cloud, hosting and colocation services across a global network of data centers, optimized from the application to the end user and backed by rock-solid customer support and a 100% uptime guarantee. Since 1996, the most innovative companies have relied on Internap to make their applications faster and more scalable. For more information, visit us at www.internap.com or www.internap.com/blog/, and follow us on Twitter: @Internap.
Geo-clusters are something I often get asked about, especially by clients looking to protect mission-critical applications and reduce the risk of data loss. In this post, we’ll analyse what they are and what they can be used for.
What is a geo-cluster and how can it help prevent data loss?
In order to address what a geo-cluster is, it is first important to understand the concept of a Database Availability Group (DAG). A DAG allows an organisation to maintain up to 16 copies of an Exchange Database (EDB). This comes into play when a failure (e.g. a crashed or offline server) prevents users from accessing the primary Exchange server. A more detailed explanation of potential scenarios and how to implement DAGs can be found here. Another important term to understand is High Availability (HA), which Microsoft defines as “the implementation of a system design that ensures a high level of operational continuity over a given period of time.”
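The failover idea behind a DAG can be sketched roughly as follows. This is an illustrative model only, not Exchange's actual best-copy selection logic; the server names and fields are made up.

```python
# Illustrative sketch of DAG-style failover: keep several copies of a
# database and, when the active copy fails, activate the healthiest
# passive copy with the least replication lag. (Not Exchange's real code.)
from dataclasses import dataclass

MAX_COPIES = 16  # a DAG supports up to 16 copies of a database

@dataclass
class DatabaseCopy:
    server: str
    healthy: bool
    copy_queue: int  # log records not yet replicated; lower = more current

def activate_best_copy(copies: list[DatabaseCopy]) -> DatabaseCopy:
    """Pick the healthy copy with the smallest replication lag."""
    assert len(copies) <= MAX_COPIES
    candidates = [c for c in copies if c.healthy]
    if not candidates:
        raise RuntimeError("no healthy copy available - data loss possible")
    return min(candidates, key=lambda c: c.copy_queue)

copies = [
    DatabaseCopy("EXCH01", healthy=False, copy_queue=0),  # failed active copy
    DatabaseCopy("EXCH02", healthy=True, copy_queue=3),
    DatabaseCopy("EXCH03", healthy=True, copy_queue=12),
]
print(activate_best_copy(copies).server)  # EXCH02
```

A geo-cluster extends this same idea across physically separate sites, so a whole-datacentre failure still leaves a current copy to activate.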
Things are seriously bad when one of the world’s most respected business-focused publications, the Financial Times (FT), asks whether the auto “industry faces ‘Libor moment’”. Yet that was a headline yesterday in the lead article in the FT about the still-expanding crisis involving the auto manufacturer Volkswagen (VW) and the emissions-test cheating that has unravelled over the past few days. Last week, the US accused VW of rigging 500,000 of its American diesel cars so they would pass emissions requirements when being tested yet belch out 30%-40% more pollution in actual operation. VW accomplished this through software that could distinguish between testing and normal operation.
What do you think the chances are that VW was not aware that the ‘defeat device’ software was in its vehicles? Does anyone out there think that VW negligently installed and upgraded software across multiple product lines for over six years in upwards of 11 million autos? If you do, it may be time for a very long session on the meaning of the word intentional.
However, the world was stunned this week when VW not only admitted that it had installed software to provide incorrect data on emissions tests for its diesel vehicles in the US but, as reported in the online publication Slate, “the German car manufacturer announced that 11 million of its cars were fitted with diesel engines that had been designed to cheat emissions standards.” The culture of the company obviously comes into serious question when such a worldwide, multiyear, systemic plan is designed and implemented to break the law.
As the latest Amazon earnings announcement for AWS suggests, enterprises have adopted cloud at a rapid pace over the last few years as part of the emerging Bimodal IT paradigm. However, given the focus on cost and agile development, the sourcing of cloud vendors has typically been cost-based, and the governance frameworks adopted largely ad hoc. The recent Sony cyberattacks have proved beyond doubt that enterprise data is the biggest source of competitive advantage in today’s digital era and needs to be preserved and protected at all costs. Today, as critical business processes and data move to the cloud, there is increasing clamour for newer and more specific risk and control measures to ensure information security. At the same time, the threat landscape and information security requirements change with each vendor, location, service, business priority and more. But this does not and should not mean that organizations need to re-invent their cloud management systems and governance processes every time the threat landscape evolves.
As cloud-based software deployment becomes the new normal, enterprises need to take a deeper and renewed look at Information Security and Risk Management instead of perpetually trying to rebuild their Governance, Risk and Compliance (GRC) programs to keep pace with regulations and emerging cloud service models and technologies. Leading organizations need to adopt a layered approach: a single GRC layer over their cloud ecosystem, one that can expand across multiple cloud vendors and models. This layering is imperative to ensure the cloud ecosystem can scale securely.
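The single-GRC-layer idea can be sketched as one common set of controls evaluated uniformly against each vendor's normalized configuration, so the controls evolve in one place rather than per vendor. Everything below (vendor names, control checks, config keys) is purely illustrative, not any real GRC product's API.

```python
# Illustrative single GRC layer: one control set applied across vendors.
# All names and checks here are hypothetical.
CONTROLS = {
    "encryption_at_rest": lambda cfg: cfg.get("encrypted", False),
    "region_allowed": lambda cfg: cfg.get("region") in {"eu-west", "us-east"},
    "mfa_required": lambda cfg: cfg.get("mfa", False),
}

def evaluate(cfg: dict) -> dict:
    """Run every control against one vendor's normalized config."""
    return {name: check(cfg) for name, check in CONTROLS.items()}

# Normalized view of a two-vendor cloud estate (made-up data).
estate = {
    "vendor_a": {"encrypted": True, "region": "eu-west", "mfa": True},
    "vendor_b": {"encrypted": False, "region": "ap-south", "mfa": True},
}

report = {vendor: evaluate(cfg) for vendor, cfg in estate.items()}
failures = {v: [c for c, ok in r.items() if not ok] for v, r in report.items()}
print(failures)
```

When the threat landscape shifts, only the `CONTROLS` dictionary changes; the per-vendor plumbing stays untouched, which is the point of the layering approach.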
The definition of VVOLs is simple but the effect is ground-breaking. Here is the simple definition part: Virtual Volumes (VVOL) is an out-of-band communication protocol between array-based storage services and vSphere 6.
And here is the ground-breaking part: VVOLs enable a VM to communicate its data management requirements directly to the storage array. The idea is to automate and optimize storage resources at the VM level instead of placing data services at the LUN (block storage) or the file share (NAS) level.
VMware replaces these aggregated datastores with one Virtual Volume (VVOL) endpoint whose data services match individual VM requirements. VVOLs enable more granular control over VMs and increase their visibility on the storage array. Note however that the array still operates within its own limitations. If an administrator has applied a policy to the VM with a specific snapshot schedule and the array cannot comply, then the VM doesn’t get that schedule.
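The capability-matching behavior described above can be sketched as follows: the VM states its storage policy, the array advertises what it can do, and any requirement the array cannot satisfy is simply not applied. The function and policy names here are hypothetical, not VMware's actual API.

```python
# Illustrative VVOL-style policy matching (not VMware's real interface).
def apply_policy(vm_policy: dict, array_capabilities: dict) -> dict:
    """Return the subset of the VM's policy the array can actually honor."""
    applied = {}
    for requirement, value in vm_policy.items():
        supported = array_capabilities.get(requirement)
        if supported is not None and value in supported:
            applied[requirement] = value
    return applied

vm_policy = {"snapshot_schedule": "hourly", "replication": "sync"}
array_capabilities = {
    "snapshot_schedule": {"daily", "weekly"},  # array offers no hourly snapshots
    "replication": {"sync", "async"},
}

# The VM gets sync replication, but not the hourly schedule the array
# cannot comply with, mirroring the behavior described above.
print(apply_policy(vm_policy, array_capabilities))
```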
Network World took a look at a study by tyntec that suggested that “a vast majority” of companies don’t protect themselves adequately from BYOD issues. About half (49 percent) of these firms have employees who at least partially use their own devices at work, which poses huge security risks. To that end, Molson Coors CIO Christine Vanderpool offers three lists covering the risks of BYOD, risk issues to keep in mind, and data access and security considerations.
eWeek highlighted two surveys by Bitglass, which found that employees and even IT personnel are unhappy with mobile device management (MDM) platforms, fearing they can access, alter or delete personal data.
People who work for an organization don’t want to be in a situation in which their personal data is under the control of their employer. The most telling statistics from the surveys show that IT personnel – the very people who will be called upon to make such programs work – are almost as skeptical as the folks from PR and accounting about MDM platforms and BYOD:
Larger data loads are coming to the enterprise, both as a function of Big Data and the steady uptick of normal business activity. This will naturally wreak havoc with much of today’s traditional storage infrastructure, which is tasked not only with providing more capacity but also with speeding up and simplifying the storage and retrieval process.
Most organizations already realize that with the changing nature of data, simply expanding legacy infrastructure is not the answer. Rather, we should be thinking about rebuilding storage from a fundamental level in order to derive real value from the multi-sourced, real-time data that is emerging in the new digital economy.
Designers and engineers at Citrix use human-centered innovation approaches such as Design Thinking to create compelling user experiences for mobile devices. Just a few recent experiments from our internal incubators show how designing with the user at the center of the stakeholder map can improve the overall UX, introduce new concepts to the market, and open up applications of existing Citrix products in new verticals and new use case scenarios.
For example, the Cubefree team created a Yelp-like app for mobile workers starting with low-fi prototypes, then iterating on both the product and the business model during a 3-month Citrix Startup Accelerator program. The PatientConsult team used a similar approach, starting with gaining empathy for doctors and specialists, identifying their specific needs, and prototyping an app for secure communication in the healthcare vertical. Not to mention the newly released Citrix Workspace Cloud that focuses on Citrix customer needs and seamlessly integrates multiple offerings to satisfy them!
(MCT) - A man walked into a biology lab at Lamar University and sprinkled food into a fish tank, sustaining Trinidad-plucked guppies while the professor monitoring them was unable to tend to the subjects of his life's research.
It was a minor happening, but a win nonetheless while worry and stress gripped the Beaumont university reeling from $50 million in damage by Hurricane Rita, a storm that a top official said highlighted deficiencies in emergency preparedness and threatened to derail students' lives.
Ten years later, people from across the Lamar community -- alumni, professors, officials and maintenance workers -- remembered how everyone came together to solve the most pressing issue: resuming classes as quickly as possible to avoid canceling graduation.
They also point to structural changes they said alleviated some of the problems three years later during Hurricane Ike and should help Lamar University the next time a major storm strikes southeast Texas.
Almost a quarter of businesses reported annual cumulative losses of at least $1.05 million (CAD $1.4 million) due to supply chain disruptions, and 76% of businesses reported at least one instance of supply chain disruption annually, according to a survey conducted by the Business Continuity Institute and Zurich. The top causes of supply chain failure among businesses surveyed are ones likely to become even more frequent in the coming years: unplanned IT outages, cyberattacks, and adverse weather.
As the supply chain continues to grow ever longer, adding more potentially disruptive risks along the way, businesses are learning some painful lessons about the financial and reputational damages that can result from failures to ensure supply chain resilience.
Check out the infographic below for some of Zurich’s top insights on supply chain visibility, including the biggest sources of damage and key steps to mitigate losses: