ATLANTA – Colo Atl, the leading provider of carrier-neutral colocation, data center and interconnection services at 55 Marietta Street in Atlanta, GA, today announces that Cyber Wurx, LLC, a privately held technology company located in downtown Atlanta, has completed an installation of 96 fiber strands (48 pairs) into Colo Atl’s Meet Me Room. The company hopes to add even more fiber to the facility over the next year.
A longtime customer and partner of Colo Atl’s, Cyber Wurx is also located at 55 Marietta Street. The company first collaborated with Colo Atl in 2004, when it sought interconnection to service providers via the Colo Atl facility’s extension to 56 Marietta Street.
“Our latest 48 fiber pairs will allow Cyber Wurx to offer more critical networking opportunities to our current and future customers,” states Chris Schwarz, CEO of Cyber Wurx. “In addition to its location, there are a number of advantages to working with Colo Atl. These include exceptionally fast installation turnaround times, a friendly and professional staff, and access to more than 80 network operators and service providers.”
“Cyber Wurx is a terrific tenant and technology partner of Colo Atl’s. Its fiber installation into the Colo Atl Meet Me Room supports Colo Atl’s ongoing mission to increase dark fiber availability and improve connectivity throughout the Southeast,” comments Tim Kiser, Owner and Founder of Colo Atl. “And by utilizing our facility and services, Cyber Wurx is able to facilitate easy cross connections to any of our providers here at 55 Marietta for its own customers, at no charge.”
Cyber Wurx also plans to further develop its working relationship with Colo Atl by becoming a member of Colo Atl’s sister company, the Southeast Network Access Point (SNAP). SNAP provides next-generation Internet Exchange (IX) solutions, including SDN peering, testing and implementation.
Founded in November 2001, Colo Atl provides a reasonable, accommodating and cost-effective interconnection environment for more than 80 local, regional and global network operators. In 2016, the company celebrates its 15th anniversary of service excellence and growth.
In addition to the Southeast Network Access Point (SNAP), the Colo Atl facility is also home to the Georgia Technology Center (GTC), a test bed and live production facility for network communications equipment.
About Colo Atl
Colo Atl, a JT Communications company, is the leading provider of network-neutral colocation, data center and interconnection solutions at 55 Marietta Street in the global telecom hub of Atlanta, GA. The company delivers carrier-neutral colocation, data center and interconnection services at an affordable rate in a network-neutral environment that allows all types of network operators to securely and conveniently cross connect within an SSAE 16 certified facility. Colo Atl charges no monthly recurring cross connect fees between tenants and provides exceptional customer service.
Colo Atl is also home to the Georgia Technology Center (GTC), a live laboratory for network equipment vendors to highlight their optical and electrical hardware and operating systems, and the Southeast Network Access Point (SNAP), which provides next-generation Internet Exchange (IX) solutions, including SDN peering, testing, collaboration and implementation.
About Cyber Wurx, LLC
Cyber Wurx, LLC is a privately held technology company located in downtown Atlanta, Georgia. Founded in 1997, Cyber Wurx offers affordable colocation solutions for individuals and enterprise businesses within its SSAE 16 SOC 2 certified facility. Cyber Wurx maintains a network-neutral, state-of-the-art data center with no monthly recurring fees between customers and providers. With extensive monitoring systems and a friendly, knowledgeable staff, Cyber Wurx is able to rapidly provide an outstanding level of support to clients of all levels.
For more information on the company’s services, visit https://cyberwurx.com.
The deadly terrorist bombings in Brussels this week have elicited an outpouring of support for the victims and for Belgium, along with renewed rage and consternation regarding ISIS. These are predictable reactions. What these acts also elicited, I’ve noticed, are numerous comments from many outlets that the attacks were not surprising.
The BBC, in fact, said the bombings were “not a surprise” and security experts chimed in with similar assessments. Even Belgians themselves admit that the attack wasn’t shocking—Prime Minister Charles Michel lamented that “what we feared, has happened.” Think about how much has changed in less than a generation. Now, when the capital of the EU and NATO becomes a war zone, many react as though this is business as usual.
When it comes to political violence and warfare, we (or at least Western Europe) are living in a brave new world. In fact, research I’ve conducted in recent weeks for a RIMS Executive Report on political risk confirms how much the paradigm has changed. Political risk experts I interviewed have been emphasizing this point. “I think it is truly a distinctive point in world affairs,” said one. Another confessed, “I’ve been doing this for nearly 20 years, and this is by far the most unstable, tenuous, deteriorating…risk environment I’ve ever seen.”
(TNS) -- NYPD Commissioner William Bratton said on national television Wednesday that a deadly terrorist attack like the one in Brussels could happen in New York City.
“Certainly. It can happen anywhere in the world,” Bratton said in a live interview on CBS This Morning.
Although intelligence efforts and preventive measures are important deterrents, living in a free society exposes cities and metropolitan areas to such terrorist attacks, both Bratton and John Miller, NYPD’s deputy commissioner for intelligence and counterterrorism, said in the live interview.
Bratton called large metropolitan areas like New York City “soft targets” for terrorists.
Google builds some of the largest, most sophisticated, and energy efficient data centers in the world. Unfortunately for data center professionals who don’t work for the company, Google data centers are closed to them.
Today at the first user conference for Google’s cloud infrastructure services in San Francisco, the company launched a 360-degree video tour of one of its data centers. At the conference, called Google Cloud Platform Next, the company’s top management is attempting to make the case to the industry that Google is not only serious about enterprise cloud but plans to lead the space, which is currently dominated by Amazon Web Services, with Microsoft Azure a distant second.
The company also announced at the event the launch of two new cloud data centers, in Oregon and Tokyo, and plans to launch 10 more between now and the end of 2017.
1.4 billion people in South Asia, 81% of the region’s population, are acutely exposed to at least one type of natural hazard and live in areas considered to have insufficient resources to cope with and rebound from an extreme event, according to a new study by Verisk Maplecroft.
The research also highlights a lack of resilience to hazards across the region, especially in India, Pakistan and Bangladesh where governments have struggled to translate record levels of economic growth into improved resilience against natural hazards, leaving investors open to disruption to economic outputs, risks to business continuity and threats to human capital.
South Asian nations lag behind the world’s leading economies when it comes to mitigating the worst impacts of natural hazards. The Natural Hazards Vulnerability Index, which assesses a country’s ability to prepare for, respond to, and recover from a natural hazard event, rates Japan (183) and the U.S. (173) as ‘low risk,’ while China (126) is considered ‘medium risk’. In comparison, the weaker institutional capacity, financial resources and infrastructure of Bangladesh (37), Pakistan (43) and India (49) mean they are rated ‘high risk,’ leaving organizations under greater threat if a significant event occurs.
The data identifies flooding as one of the most substantial risks to communities and business in South Asia. In India alone, 113 million people, or 9% of the population, are acutely exposed to flood hazard, with a further 76 million exposed in Bangladesh and 10 million in Pakistan. Indeed, heavy monsoon rain during November and December last year sparked record flooding in South India, which cost the country upwards of US$3 billion and displaced more than 100,000 people.
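The exposure figures above can be sanity-checked with a short sketch. The exposed-population counts come from the article; the national population totals are rough assumed values for illustration, not Verisk Maplecroft data.

```python
# Sketch of the flood-exposure arithmetic from the article.
# Population totals below are assumed round figures (circa 2015), not study data.
populations = {"India": 1.25e9, "Bangladesh": 161e6, "Pakistan": 189e6}
flood_exposed = {"India": 113e6, "Bangladesh": 76e6, "Pakistan": 10e6}  # per the article

for country, exposed in flood_exposed.items():
    share = exposed / populations[country] * 100
    print(f"{country}: {exposed / 1e6:.0f}M exposed (~{share:.0f}% of population)")
```

With these assumed populations, India's 113 million exposed works out to roughly 9% of its population, matching the article's figure.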
Adverse weather has slowly been dropping down the ranked threats in the Horizon Scan Report published by the Business Continuity Institute, but is still considered to be a concern by over half (55%) of the business continuity professionals who responded to a global survey. Meanwhile, earthquake/tsunami is considered a concern by nearly a quarter (25%).
“This data highlights the scale of the task facing governments and business in mitigating the threats to populations and workforces from natural hazards in these high risk regions,” states Dr James Allan, Director of Environment at Verisk Maplecroft. “With overseas investment pouring into the emerging Asian markets, companies have an increasing responsibility to understand their exposure and work with governments to build resilience.”
Businesses typically put a great deal of time and resources into customer communications, from elaborate public relations plans to customer surveys. But when it comes to internal communications—those formal and informal means by which employers communicate with staff—communication is often taken for granted. Employees have a critical impact on the outcome of every project, as well as the overall success of your business. Unfortunately, it’s easy for an organization’s leaders to fumble the ball when attempting to improve employee communications.
Here are 10 things your workforce probably wishes you knew about communicating with them:
Two days before Christmas the lights went out across the Ivano-Frankivsk region of Ukraine. As many as 225,000 customers lost power, the result of coordinated cyberattacks on three power grids.
The hackers tricked utility employees into downloading malware – BlackEnergy – that was linked to Russian spy agencies and that had been used to probe power companies across the world, including those in the U.S. On attack day they remotely shut off current to about 60 substations, inserted new code that blocked staff from reconnecting and even “phone bombed” the companies’ switchboards to discombobulate employees rushing to get power flowing again.
The Ukrainians claimed it was the first time a power grid had been knocked out by hackers and quickly pointed a finger at Russia. Robert M. Lee was skeptical. In the midst of preparing for a Christmas wedding in Alabama, the former Air Force cyberwarfare officer needed proof; there had only been two known destructive attacks on critical infrastructure. He and several colleagues in the U.S. cyber community coordinated with contacts inside Ukraine to recover malware from the network. Lee was the first person to report on the malware after reviewing the public information and analyzing the grid’s control systems. It was soon apparent: This was the real deal, though Lee shies away from blaming Russia. “What surprised me is the bold nature of it. … It was so coordinated. All the stuff we’ve seen before looked like intelligence. This looked like military. That’s kind of alarming.”
Taller hard drives, multiple actuator arms, and drives supplied in multi-drive packages: these are just a few of Google's suggestions as it calls for a complete rethink of storage disk design.
In a white paper called "Disks for Data Centers," published last month, the company gives some hints as to how the hard disk drive might evolve in the coming years.
There's a need for change, the white paper asserts, because the fastest-growing use case for hard drives is mass storage services housed in cloud data centers. YouTube alone requires 1 million GB (roughly a petabyte) of new hard drive capacity every day, and very soon cloud storage services will account for the majority of hard drive capacity in use, it says.
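To put the YouTube figure in perspective, a back-of-envelope sketch follows. The 1 million GB/day number is from the white paper; the 10 TB per-drive capacity is an assumed round figure for a nearline drive, not something the paper specifies.

```python
# Back-of-envelope for YouTube's storage growth: 1 million GB of new capacity per day.
# The 10 TB per-drive capacity is an assumption for illustration.
new_capacity_gb_per_day = 1_000_000   # ~1 PB/day, per the white paper
drive_capacity_gb = 10_000            # assumed 10 TB nearline drives

drives_per_day = new_capacity_gb_per_day / drive_capacity_gb
print(f"~{drives_per_day:.0f} new drives per day, ~{drives_per_day * 365:,.0f} per year")
```

Under that assumption, one service alone absorbs on the order of a hundred new drives every day, which helps explain why Google wants drive designs optimized for fleet-level deployment rather than single-drive reliability.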
In the software-defined data center (SDDC), all elements of the infrastructure, such as networking, compute, servers and storage, are virtualized and delivered as a service. Virtualization at the server and storage levels is a critical component on the journey to an SDDC, since it enables greater productivity through software automation and agility while shielding users from the underlying complexity of the hardware.
Today, applications are driving the enterprise – and these demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. The problem is that in a world that requires near instant response times and increasingly faster access to business-critical data, the needs of tier 1 enterprise applications such as SQL, Oracle and SAP databases have been largely unmet. For most data centers the number one cause of these delays is the data storage infrastructure.
Why? The major bottleneck has been I/O performance. Although most commodity servers already cost-effectively provide a wealth of powerful multiprocessor capabilities, those capabilities largely sit idle, unexploited. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies, and lack the software intelligence to fully harness more powerful server systems with multicore architectures.
If you think you know what Big Data is going to be like based on the volume of today’s workflows, well, to coin a phrase, “you ain’t seen nothin’ yet.”
The fact is that with the sensor-driven traffic of the Internet of Things barely under way, the full data load that will eventually hit the enterprise will be multiple orders of magnitude larger than it is today, and much of it will be unstructured and highly ephemeral in nature, meaning it will have to be analyzed and acted upon quickly or it loses all value.
The good news is that much of the processing will be done at the edge, where it can be leveraged for maximum benefit without flooding centralized resources. But a significant portion will still make it to the data center or the data lake, which means the enterprise will need to implement significant upgrades to infrastructure throughout the distributed data environment, and soon.