IRVINE, Calif. — ETAP®, the leading provider of software solutions for the design, optimization, and on-line operation of mission-critical electrical power infrastructure, today announced that Asahi Kasei Corp. is deploying ETAP® Real-Time™ software for a wide range of power management operations in one of its flagship chemical facilities.
From a power systems perspective, Asahi Kasei’s facility – located in Nobeoka, in the Miyazaki Prefecture of Japan – is a technological marvel. It is powered by hydroelectric and thermal power plants, utilizes both AC and DC power, and operates separate 50Hz and 60Hz grids connected by frequency changers.
In the short term, ETAP Real-Time will be used for power optimization and visualization; simulating maintenance operations; incident playback and analysis; and optimizing the balance between loss minimization and power system stability. Future applications will include real-time energy management and intelligent load shedding.
“The grids that power our facilities are very specialized, and require significant collaboration between our power systems and facilities experts,” said Takao Nishi, manager at Asahi Kasei Corp. “ETAP and ETAP Real-Time provide a technological foundation that allows both engineering and operations to work in a synergistic manner. From the design phase through live operations, the Asahi Kasei team has a thorough understanding of how to optimize our power infrastructure, both now and for the future.”
“The intricate and energy-intensive nature of Asahi Kasei's operations makes it an ideal environment for real-time power system analysis,” said Tanuj Khandelwal, vice president of technology for ETAP. “Their use of ETAP and ETAP Real-Time will help them achieve lifecycle-long power system reliability and energy efficiency, while promoting the safety of the employees who maintain their electrical infrastructure.”
About Asahi Kasei
Founded in 1931, the Asahi Kasei Group is one of Japan’s leading multinational corporations, with a diversified array of chemical, electronics, scientific, engineering, and industrial businesses operating in 19 countries. Revenue in 2012 from its nine core subsidiary companies was ¥1.7 trillion (US$16.4 billion). Founded in 1972 and based in Tokyo, Asahi Kasei Engineering Corp. is the engineering subsidiary responsible for facility planning, construction, operation, and maintenance. Visit asahi-kasei.co.jp for more information.
About ETAP
Founded in 1986 and headquartered in Irvine, Calif., ETAP is the global market and technology leader in electrical power system modeling, design, analysis, optimization, and predictive real-time solutions. The Company’s software technologies ensure that power systems are designed for optimal reliability, safety, and energy efficiency; when deployed in real-time mode, they enable organizations to manage energy as a strategic asset, maximize system utilization, lower costs, and achieve higher levels of financial stability. To date, more than 50,000 licenses of the Company’s ETAP and ETAP Real-Time products have been used in demanding generation, transmission, distribution, and industrial power system projects around the world. Visit ETAP.com for more information.
By Craig Garner
“A truth that’s told with bad intent
Beats all the lies you can invent.”
- William Blake
Formed through legislation signed by President Gerald Ford in 1976, the Office of the Inspector General (OIG) is one federal agency that should never be underestimated by those in the health care industry. In its pursuit to protect the integrity of health care programs and the welfare of their beneficiaries, the OIG boasts the power to determine the fate of most health care providers through standards both objective (42 U.S.C. § 1320a-7(a) – Mandatory Exclusions) and subjective (42 U.S.C. § 1320a-7(b) – Permissive Exclusions). While those unfortunate enough to find themselves on the List of Excluded Individuals/Entities (LEIE) may at times disagree, the pellucidity with which the OIG enforces its statutory directive is in perfect alignment with the transparency with which the agency insists providers conduct their business.
The recent examples of compliance program credits for Morgan Stanley and Ralph Lauren have demonstrated that, more than ever, an effective compliance program can protect a company from criminal indictment and generate bottom-line benefits by helping a company avoid or reduce fines and penalties. Much of the recent enforcement action has focused on liability for bribery and corruption committed by third parties on behalf of another company. When it comes to third-party corruption, many compliance program leaders worry that they don’t know where to start on a third-party compliance program and that they cannot afford the elaborate, richly funded programs so often profiled in the news.
Luckily, you don’t need a legion of compliance personnel and an unlimited budget to meet the standards recently outlined in A Resource Guide to the U.S. Foreign Corrupt Practices Act (FCPA Guidance), issued jointly by the United States Department of Justice (DOJ) and the Securities and Exchange Commission (SEC).
January 28th was the anniversary of the Space Shuttle Challenger disaster. The Rogers Commission detailed the official account of the disaster, laying bare all of the failures that led to the loss of a shuttle and its crew. Officially known as The Report of the Presidential Commission on the Space Shuttle Challenger Accident – The Tragedy of Mission 51-L, the report is five volumes long and covers every possible angle, from how NASA chose its vendors to the psychological traps that plagued the decision-making that led to that fateful morning. There are many lessons to be learned in those five volumes, and here I am going to share the ones that made the greatest impact on my approach to risk management. The first is the lesson of overconfidence.
In the late 1970s, NASA was assessing the likelihood and risk of a catastrophic loss of its new, reusable orbiter. NASA commissioned a study which, based on NASA’s prior launch history, estimated a catastrophic failure approximately once every 24 launches. NASA, which was planning to fly several shuttles with payloads to help pay for the program, decided that the number was too conservative. It then asked the United States Air Force (USAF) to repeat the study. The USAF concluded that the likelihood was once every 52 launches.
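Even the more optimistic USAF figure is sobering once compounded over a program's lifetime. As a rough illustration (the 100-launch horizon here is hypothetical, not from either study), assuming each launch is an independent event:

```python
# Rough illustration: convert a per-launch failure estimate into the
# chance of at least one catastrophic loss over a program of N launches,
# assuming launches are independent events with a constant failure rate.

def prob_at_least_one_failure(per_launch_rate: float, launches: int) -> float:
    """P(at least one failure) = 1 - P(no failure on every launch)."""
    return 1.0 - (1.0 - per_launch_rate) ** launches

# The two estimates cited above, over a hypothetical 100-launch program.
p_study = prob_at_least_one_failure(1 / 24, 100)   # ≈ 0.99
p_usaf = prob_at_least_one_failure(1 / 52, 100)    # ≈ 0.86

print(f"1-in-24 estimate: {p_study:.0%} chance of a loss in 100 launches")
print(f"1-in-52 estimate: {p_usaf:.0%} chance of a loss in 100 launches")
```

Under either estimate, a long shuttle program was far more likely than not to lose an orbiter, which is exactly why dismissing the more conservative number mattered.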
Experts have long talked about the 360-degree view of customers in near-mythical terms, as a generally worthwhile, if not actually achievable, goal. A new business imperative could up the ante for integrating data about customers, according to Gartner.
In the past, what that’s really meant is aligning channels, such as in-store, online and customer service. Now the goal is to improve customer engagement across business divisions as well. Basically, that means marketing and sales have been added to the mix.
That’s going to be a big job, too. A Scribe Software survey released in October found that only 16 percent of companies support full integration between CRM and other business systems. And I can’t swear by this data because it’s a few years old, but back in 2012, Scribe found that 35 percent of businesses planned to handle CRM integration by manually re-entering the data.
COMPUTERWORLD — WASHINGTON — From ocean sensors to orbiting satellites, the National Oceanic and Atmospheric Administration (NOAA) collects about 30 petabytes of environmental data annually. But only about 10% of the data is made public, something the agency now wants to change.
NOAA wants to move its vast amount of untapped data into a public cloud, but without having to pay a whopping cloud services bill.
The agency believes the data holds significant value and is now seeking partnerships with commercial entities, universities and others. An ideal partner might be one that can apply advanced analytics to the data to create new products and value-added services that also generate new jobs.
OTTAWA, Canada – Enterprise storage has historically been hampered by technical and architectural factors that severely limit application performance. The I/O limitations of hard disk-based arrays, and the latency imposed by PCI-Express based SSDs and architectures such as SAN and NAS, create bottlenecks for applications where speed and determinism are paramount. The emergence of solid state flash continues to disrupt the storage market with the recent introduction of Memory Channel Storage™ (MCS™), an award-winning platform that delivers tens of terabytes of flash capacity in a single server with near-DRAM speeds. MCS technology puts NAND flash into a DIMM form factor and enables it to interface with the CPU via the integrated memory controller. The result is a new class of high-performance, in-memory storage that eliminates the OS/IO/network overhead inherent to legacy storage arrays. Diablo Technologies™, a proven innovator in memory system interface products and creator of the MCS architecture, offers eleven reasons to leverage this ground-breaking technology to optimize application performance for database, big data, virtualization, and low-latency workloads.
1. Faster Data Persistence – Memory Channel Storage provides significantly lower write latency than any other flash storage technology. Creating and updating persistent data is now faster than ever, with write latency as low as 5 microseconds.
2. Zero-Compromise Performance – MCS eliminates the trade-off between IOPS and latency inherent to other flash storage solutions. Applications can now support heavy I/O while maintaining fast response times, so IT managers no longer need to tune for just one of these performance attributes.
3. Predictable Response Times – MCS provides extremely deterministic latency, eliminating uncertainty around storage-related Quality of Service (QoS). As an example, IT managers deploying VDI can be assured of consistent response times for virtual machines, providing their users with a satisfying session experience.
4. Efficient Scalability – The MCS architecture enables a flash storage solution to tightly fit customer requirements. MCS I/O performance scales linearly, and total capacity can be “right-sized” to match application needs. Current products based on Memory Channel Storage are available as 200 GB or 400 GB modules; multiple modules can be integrated into servers or storage arrays as needed, based on the capacity and performance requirements of specific applications.
5. Platform Flexibility – Most data centers employ a variety of storage solutions depending on the challenges faced. With its combined advantages (ultra-fast persistence, heavy I/O without compromised response times, determinism, scalability), MCS provides a uniquely flexible platform that can address a wide variety of workloads.
6. Strong Mixed-Workload Performance – Due to its distributed architecture, MCS delivers strong mixed-workload performance. Not only can MCS-based modules be written to or read from in parallel, but applications also have the flexibility to write to individual modules while reading from others. Mixed-workload applications, such as databases and virtualized environments, benefit greatly and are ideally suited to MCS-based solutions.
7. Flexible Form Factor – By placing persistent memory on a standard DIMM module, MCS-based products fit into any server or storage design that uses the standard DIMM form factor. Flash memory can easily be integrated into standard servers and storage arrays with no modifications to motherboards or chassis, and without the complexity of blade servers requiring custom PCIe mezzanine cards. This makes MCS the most flexible means of deploying persistent memory for enterprise and storage applications.
8. Ecosystem Support – With support for the most critical operating systems and hypervisors, MCS can be deployed quickly and easily into virtually any enterprise environment. Driver availability for Microsoft Windows™, VMware ESXi™, and the most prevalent Linux distributions and kernels means broad applicability to enterprise applications.
9. Memory Expansion – MCS redefines the idea of memory expansion by placing flash on the same memory channels as system DRAM. Paging occurs at near-DRAM speeds, as these operations are simply transfers of data from flash to DRAM within the same memory controller. No transfers through a storage stack or movements external to the processor are required, thereby enabling extremely fast paging.
10. Reduced TCO – With MCS, each node in a cluster is able to complete more work in less time. With storage attached directly to the processors rather than across I/O expansion connections and storage stacks, data is accessed, manipulated, and rewritten to flash in significantly less time. Fewer nodes also mean fewer external storage arrays filled with hot, spinning media, and lower power and cooling costs, all of which lead to a reduced Total Cost of Ownership.
11. Future-Proof Platform – The MCS architecture is designed to utilize current NAND flash as well as future non-volatile memories, ensuring MCS customers will benefit from the capacity and performance enhancements of technologies such as 3D flash, phase-change memory, magnetoresistive RAM, and resistive RAM.
https://twitter.com/diablo_tech
https://www.facebook.com/pages/Diablo-Technologies/369582183128064
About Diablo Technologies
Founded in 2003, Diablo is at the forefront of developing breakthrough technologies to set the standard for next-generation enterprise computing.
Diablo’s Memory Channel Storage solution combines innovative software and hardware architectures with Non-Volatile Memory to introduce a new and disruptive generation of Solid State Storage for data-intensive applications. The Diablo executive leadership team has decades of experience in system architecture, chip-set design and software development at companies including Nortel Networks, Intel, Cisco, AMD, SEGA, ATI, Cadence Design Systems, Matrox Graphics, BroadTel Communications and ENQ Semiconductor. Website: http://www.diablo-technologies.com/
CIO — The demands of big data applications can put a lot of strain on a data center. Traditional IT seeks to operate in a steady state, with maximum uptime and continuous equilibrium. After all, most applications tend to have a fairly light compute load—they operate inside a virtual machine and use just some of its resources.
Big data applications, on the other hand, tend to suck up massive amounts of compute load. They also tend to feature spikes of activity—they start and end at a particular point in time.
"Big data is really changing the way data centers are operating and some of the needs they have," says Rob Clyde, CEO of Adaptive Computing, a specialist in private/hybrid cloud and technical computing environments. "The traditional data center is very much about achieving equilibrium and uptime."
IDG News Service (Boston Bureau) — A former Microsoft architect has founded a startup called Azuqua aimed at tackling the problem of joining together and automating business processes from multiple SaaS (software-as-a-service) applications.
The proliferation of SaaS and the "API [application programming interface] economy" provides a vast opportunity for a service that can easily pull together processes from multiple applications to serve various scenarios, CEO Nikhil Hasija said in an interview prior to Tuesday's launch of the company's platform.
There's also a need for a tool that makes doing this extremely easy for an average user, he said. While there is a wide range of cloud integration options, such as Dell Boomi and Informatica Cloud, "it requires a computer science degree to do something with them," Hasija claimed. "We're solving this for the business user and making IT look good for being able to deliver this."
IDG News Service (Boston Bureau) — Dell and NetSuite are broadening their relationship, with Dell becoming a global reseller and IT systems integrator for NetSuite's cloud ERP (enterprise resource planning) software.
NetSuite and Dell had already partnered around Dell's Boomi cloud integration technology, and signed off on the expanded agreement a couple of weeks ago, NetSuite CEO Zach Nelson said in an interview prior to Tuesday's announcement.
The deal has benefits for both companies. NetSuite will gain from Dell's vast global sales and service organizations, as well as the latter's specialization in industries such as health care and financial services.