Certification Enables Enterprises to Design and Implement Their WAN Architecture on Interoperable Platforms
MOUNTAIN VIEW, Calif. – VeloCloud™ Networks Inc., the Cloud-Delivered SD-WAN™ company, today announced that VeloCloud Cloud-Delivered SD-WAN has achieved VMware Ready™ status for Network Functions Virtualization (NFV). This designation indicates that after a detailed validation process, VeloCloud Cloud-Delivered SD-WAN has achieved VMware's highest level of endorsement and can now be found on the VMware Solution Exchange (VSX).
VeloCloud Cloud-Delivered SD-WAN enables enterprises to support application growth, network agility and simplified branch implementations while delivering optimized access to cloud services, private datacenters and enterprise applications. Global service providers are able to deliver advanced services and increase flexibility by delivering elastic transport, performance for cloud applications, and integrated advanced services all via a zero-touch, multi-tenant deployment model.
"The VMware Ready certification for VeloCloud Cloud-Delivered SD-WAN is an important milestone as SD-WAN reaches an inflection point this year, and as it continues to grow over the next three to five years," said Michael Wood, Vice President of Marketing, VeloCloud. "The option to deploy VeloCloud Cloud-Delivered SD-WAN with VMware vSphere® gives service providers and enterprises more choices to deploy interoperable NFV solutions which can reduce IT complexity."
"We are pleased that VeloCloud Cloud-Delivered SD-WAN qualifies for the VMware Ready™ logo, signifying to customers that it has met specific VMware interoperability standards and works effectively with VMware infrastructure, which can speed time to value within customer environments," said Howard Hall, senior director, Global Technology Partnering Organization, VMware.
The VMware Ready program is a co-branding benefit of the Technology Alliance Partner (TAP) program that makes it easy for customers to identify partner products certified to work with VMware infrastructure. Customers can use these products and solutions to help lower project risks and realize cost savings over custom built solutions. With thousands of members worldwide, the VMware TAP program includes best-of-breed technology partners with a shared commitment to bring the best expertise and business solutions to each unique customer need.
VeloCloud Cloud-Delivered SD-WAN can be found within the online VMware Solution Exchange (VSX).
VeloCloud Networks, Inc., the Cloud-Delivered SD-WAN™ company, Gartner Cool Vendor 2016 and a winner of Best Startup of Interop, simplifies branch WAN networking by automating deployment and improving performance over private, broadband Internet and LTE links for today's increasingly distributed enterprises. VeloCloud SD-WAN includes: a choice of public, private or hybrid cloud network for enterprise-grade connection to cloud and enterprise applications; branch office enterprise appliances and optional data center appliances; software-defined control and automation; and virtual services delivery. VeloCloud has received financing from investors including NEA, Venrock, March Capital Partners, Cisco Investments and The Fabric, and is headquartered in Mountain View, Calif. For more information, visit www.velocloud.com and follow the company on Twitter @VeloCloud.
VeloCloud is a registered trademark of VeloCloud Networks, Inc., in the United States and other countries. VMware, VMware Ready and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and other jurisdictions. All other brands, products, or service names are or may be trademarks or service marks of their respective owners. The use of the word "partner" or "partnership" does not imply a legal partnership relationship between VMware and any other company.
Increased Scope Includes Audit of up to 50 Domestic Data Centers of International Mobile Operator
JACKSONVILLE, Fla. – Duos Technologies Group, Inc. (OTCQB: DUOT), a provider of intelligent security analytical technology solutions, announced today that it has been awarded additional business by one of its strategic partners to provide data center audit services for an international mobile telecommunications operator. The project is expected to extend into the first quarter of 2017 and is expected to generate up to $1 million in revenue in 2016. The services will be delivered by Duos Technologies' IT Infrastructure Services Division utilizing its patented data center audit methodology and a module of its proprietary centraco™ system.
"Our partner informed us about this significant scope expansion of work for this contract, which it attributed to our efficiency in conducting the audits and the professionalism of our staff," said Joe Coschera, Duos Technologies Group Senior VP of IT Services. The data collection and analysis methodologies, in conjunction with well-proven software systems supporting the process, deliver measurable intelligence on large data center infrastructures. Mr. Coschera went on to say, "Our patented process and mobile audit software solution ensure timely completion and accurate deliverables to our clients, and I believe this is the reason we have been awarded a significant expansion in scope. We have successfully completed IT asset inventory audits for many Fortune 500 and Fortune 100 companies in scores of data centers with accuracy levels exceeding 99%."
Duos Technologies Group's IT infrastructure services division focuses on products and services that support the largest data centers with their Data Center Infrastructure Management ("DCIM") implementations. The division has delivered data collection and audit services via its partners for most of the major telco operators. Adrian Goldfarb, CFO of Duos Technologies Group, has corporate responsibility for the division. "I am delighted that our partner recognized the professionalism of our implementation teams and our ability, in conjunction with our technology, to deliver results that exceeded customer expectations. I look forward to expanding our relationship well into the future," he commented.
About Duos Technologies Group Inc.
Duos Technologies Group, Inc. (OTCQB: DUOT), based in Jacksonville, FL, through its wholly owned subsidiary, Duos Technologies Inc., provides intelligent security analytical technology solutions with a strong portfolio of intellectual property. Duos Technologies' core competencies include advanced intelligent technologies that are delivered through its proprietary integrated enterprise command and control platform, centraco™. The Company provides its broad range of technology solutions with an emphasis on mission critical security, inspection and operations within the rail, utilities, petrochemical, healthcare, and hospitality sectors. Duos Technologies Group also offers IT professional services.
For more information, visit: http://www.duostech.com.
Forward Looking Statements
This press release contains forward-looking statements that involve substantial uncertainties and risks. These forward-looking statements are based upon our current expectations, estimates and projections and reflect our beliefs and assumptions based upon information available to us at the date of this release. We caution readers that forward-looking statements are predictions based on our current expectations about future events. These forward-looking statements are not guarantees of future performance and are subject to risks, uncertainties and assumptions that are difficult to predict. Our actual results, performance or achievements could differ materially from those expressed or implied by the forward-looking statements as a result of a number of factors, including but not limited to: our ability to continue growing our IT asset inventory audit business generally; market-wide acceptance of the data center auditing methodologies used by us; acceptance of referrals for equipment remarketing by our customers; continued revenue generation from our partners, including up to $1 million in anticipated revenue in 2016 for this specific project, and ultimate profitability to allow further research & development of new solutions for the IT asset inventory audit business; our business environment and industry trends; the competitive environment; the sufficiency and availability of working capital; and general changes in economic conditions and other risks and uncertainties described in our filings with the Securities and Exchange Commission, including our Annual Report on Form 10-K for the year ended December 31, 2015. Any forward-looking statement made by us herein speaks only as of the date on which it is made. We undertake no obligation to revise or update any forward-looking statement for any reason.
Latest Offering Leverages Open Source and Integrated Analytics to Enable Rapid, High-Quality Software Delivery
PALO ALTO, Calif. – Hewlett Packard Enterprise (NYSE: HPE) today announced the availability of HPE ALM Octane, an Application Lifecycle Management (ALM) software offering designed to help customers accelerate their DevOps processes. This streamlined ALM solution leverages widely used developer toolsets like Jenkins and Git to bring continuous quality to lean, Agile, and DevOps-focused teams. HPE ALM Octane provides insights to developers and testers, helping them deliver applications quickly, without sacrificing quality or end-user experience.
To keep pace with a rapidly changing market landscape and deliver compelling customer experiences, organizations are building and enhancing software and mobile applications at lightning speed. This accelerating rate of change drives businesses to rethink how they optimize their internal software development processes. Yet application delivery management solutions have not evolved fast enough to ensure ongoing quality and scale at increased speed, or to support development methodologies like Agile, Lean, and DevOps.
"At the core of successful business today, you will find agile, high-quality, high-performing applications that continuously provide engaging and intuitive user experiences," said Raffi Margaliot (@raffima), Senior Vice President and General Manager, Application Delivery Management, Hewlett Packard Enterprise. "However, to rapidly deliver these remarkable applications, IT teams need to be equally agile and continuously deliver high-quality products. HPE ALM Octane is specifically designed for Agile and DevOps-ready teams, bringing a cloud-first approach that's accessible anytime and anywhere, bolstered by big data-style analytics to help deliver speed, quality, and scale across all modes of IT."
Guided by Analytics and an Open Architecture, HPE ALM Octane Drives Speed, Quality, and Scale
To improve product direction and execution, software development teams need products capable of integrating data from a wide array of developer and testing tools, seamlessly reporting on the state of quality across the application delivery pipeline, and enabling big data-style algorithms to leverage lifecycle data. HPE ALM Octane achieves this, helping software developers streamline production. Along with HPE ALM and Quality Center, HPE ALM Octane can bridge the gap for software developers and organizations moving existing projects from legacy development methods to modern practices such as DevOps, Lean, and Agile.
HPE ALM Octane is optimized for integration with widely adopted developer testing tools -- leveraging Swagger-documented REST APIs -- and an open platform architecture built on a layer of open source. While inherently open, HPE ALM Octane is designed to address the challenges associated with the scale required by enterprise software delivery. To accelerate application delivery across multiple teams, the solution provides visually guided and easily configured business rules and workflow.
HPE ALM Octane configures and manages automated testing within the context of a continuous integration pipeline, making it simple for developers to automatically view results and defects in context. In addition, through HPE ALM Octane's open architecture, organizations can leverage a wide array of test frameworks from both HPE and third parties via the incorporation of continuous integration tools such as Jenkins and TeamCity.
Key features offered by HPE ALM Octane include:
- Enhanced Open Source Dev/Test Toolchains - HPE ALM Octane is integrated with a core set of widely adopted tools focused on test automation, collaboration and application deployment, which increase test volume and shift testing left. Leveraging the strength of these tools, HPE ALM Octane adds cross-toolchain visibility and insight. The solution will support the following tools and frameworks:
- Jenkins and TeamCity integration to trigger continuous integration and testing activities, discover tests, execute test runs, and maintain relationships and report results -- including defects associated with each pipeline build.
- Git to provide version management for manual test scripts and to manage tests in source code.
- Behavior-Driven Development (BDD) via support of Gherkin to develop tests earlier in the design and development phases of the lifecycle; this helps teams move from manual processes to automated testing, as tests are easily converted to automated scripts.
- A wide array of test automation tools from HPE and open source, including HPE Unified Functional Testing, HPE LeanFT, HPE StormRunner Load, and Selenium, that are executed via CI integration; test activities, types and results are continuously reported and linked to HPE ALM Octane application modules, builds and defects.
- Swagger-documented REST APIs for straightforward third-party tool integration.
- Continuous Quality for DevOps Software Delivery - Utilizes the continuous integration process and associated activities to capture, analyze, and present intuitive, actionable data for defect management and tracking.
- Enhanced Collaboration with ChatOps - Proactively tracks rapidly evolving relationships between pipeline activity, application architecture and components, and the state of quality. In addition, HPE ALM Octane tracks changes across application components, backlog, builds, tests, and defects. Artifacts, status, and relationships are easily maintained through intelligent tagging, and with ChatOps, collaboration becomes context-rich and automatic.
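Swagger-documented REST APIs of the kind listed above are typically consumed with an ordinary HTTP client. The sketch below is a hypothetical illustration only: the endpoint path and field names are assumptions for illustration, not HPE's actual API, which a real integration would take from the product's Swagger documentation.

```python
import json
import urllib.request

def build_defect_request(base_url, defect):
    """Build (but do not send) a JSON POST for a hypothetical
    defect-creation endpoint. Real paths and field names would
    come from the vendor's Swagger/OpenAPI documentation."""
    body = json.dumps({"data": [defect]}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/defects",   # hypothetical path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A CI job might report a failing test as a defect like this:
req = build_defect_request(
    "https://alm.example.com",   # placeholder server
    {"name": "Login page 500s under load", "severity": "high"},
)
print(req.full_url)
```

The point of a documented REST surface is exactly this: any CI tool that can issue an HTTP request can push results into the lifecycle system without a bespoke plugin.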
"Hitachi Consulting provides leading digital enablement platforms to achieve our customer's goals by focusing on exceptional operational capabilities and performance," said Jonathon Wright, Director of Digital Engineering at Hitachi Consulting (@HIT_Consulting). "With Hewlett Packard Enterprise, and especially leveraging HPE ALM Octane, we look forward to bolstering our capacity to deliver digital services in dynamic application delivery development environments for the critical areas they're looking to address -- especially around DevOps processes."
"Accelerating development and deployment across form factors -- from cloud and native mobile to IoT including micro services -- demands agile coordination across the software lifecycle," said Melinda Ballou, Program Director of IDC's (@IDC) Application Lifecycle & Executive Strategies service. "Each of these areas brings new challenges for developers. The sheer velocity and complexity of deployment needs for business innovation is changing how teams work to address design, speed, scale, continuous quality and integration with continuous test and continuous release. Management across disparate processes and leverage of open source is key."
In addition to the launch of HPE ALM Octane, HPE is also announcing several new features to HPE functional and performance testing that support Agile, DevOps, and quality at scale. Please click here to read more.
Organizations can leverage HPE ALM Octane as a cloud-delivered service starting today. On-premises HPE ALM Octane installations will be available later in 2016.
Mobile Center updates will be introduced to help developers further increase mobile application quality and user experience optimization throughout the application lifecycle.
About Hewlett Packard Enterprise
Hewlett Packard Enterprise is an industry-leading technology company that enables customers to go further, faster. With the industry's most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.
Reduces Operational Complexity of Modern Infrastructure for Customers Including HubSpot, Stage 3 Systems and Zenefits
SAN MATEO, Calif. – SignalFx (signalfx.com), creator of the monitoring solution for modern infrastructure, today introduced significant new updates built on its SignalFlow™ real-time streaming analytics platform designed to make it easy for engineering and operations teams to create high-quality, real-time alerts. These new capabilities address the explosion of operational complexity from modern infrastructure and applications, empowering teams at companies including HubSpot, Stage 3 Systems and Zenefits to take an analytics-based approach to monitoring that dramatically reduces the time to find, alert and act on anomalies.
Teams building and operating software on modern infrastructure like Amazon Web Services, Docker, Elasticsearch and Kafka face a unique class of problems and challenges that homegrown and traditional monitoring tools cannot address. Existing monitoring solutions take the streaming mass of data from modern infrastructure platforms and technologies, turning it into alert storms that are neither relevant nor actionable. As companies look to create a competitive advantage in every sector through software like mobile apps and web services, the only way to understand what's happening is to use accessible, real-time analytics to find the needles in the haystack of all the data being generated.
"We rely on SignalFx to provide us with the monitoring infrastructure to run highly available production services to a global industrial customer base," said David Rivas, CEO of Stage 3 Systems. "SignalFx has found the sweet spot between turnkey access to industry best practices in monitoring and alerting that reduce the noise and burden on operations -- and a system we can fine tune to our specific unique needs."
From data to detection in seconds, for fewer but more meaningful alerts
SignalFx's new visualization and alerting capabilities -- Host Navigator, Outlier Detection, and Built-in Detectors -- enable customers to not only discover, but also create, signals that are impossible to see in raw data.
- Host Navigator provides a snapshot of infrastructure health. Host Navigator takes the unprecedented complexity of modern infrastructure and provides immediate clarity with visualizations familiar to every operator. Users can easily drill down by dimensions such as app, region, service, and cluster and immediately see the correct hosts, VMs, containers, processes, metrics, outliers and alerts for that grouping. This enables quick isolation of hot spots or outliers at any level of the stack.
- Outlier Detection proactively identifies abnormal performance. Outlier Detection enhances users' existing ability to build their own detectors using SignalFlow with pre-packaged analytics that identify and alert on outliers. These detectors are completely configurable and can be based on any metric to surface outliers from a population or historically over time. Outlier detection is built into Host Navigator and provided as detector templates, with a growing list of algorithms.
- Built-in Detectors make it easy to set up better alerts. Unlike existing products that either prescribe what to alert on or provide no guidance at all, SignalFx provides users with a starting point for good alert design -- embedding the complex statistical methods and adaptive thresholds needed to reduce alert noise. Built-in detectors provide pre-packaged and customizable alert configurations as templates for all the platforms and technologies supported by SignalFx. Relevant built-in and user-created detectors are surfaced in any view for easy tuning, activation and subscription.
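As a rough illustration of the population-based outlier detection described above, a common rule flags hosts whose metric deviates from the population mean by more than k standard deviations. The threshold, metric, and host names below are assumptions for illustration, not SignalFx's actual algorithm:

```python
import statistics

def find_outliers(metric_by_host, k=3.0):
    """Flag hosts whose metric is more than k standard deviations
    from the population mean -- one simple population-outlier rule."""
    values = list(metric_by_host.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []   # identical values: no outliers to report
    return [host for host, v in metric_by_host.items()
            if abs(v - mean) > k * stdev]

# Illustrative data: p99 latency (ms) per host; host-7 is misbehaving
latency = {f"host-{i}": 20.0 for i in range(1, 7)}
latency["host-7"] = 400.0
print(find_outliers(latency, k=2.0))  # ['host-7']
```

A rule like this alerts on the one anomalous host instead of paging on every host-level blip, which is the "fewer but more meaningful alerts" goal the section describes.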
"Every organization building applications using modern open source or cloud technologies can use SignalFx to operationalize their new environments," said Karthik Rau, CEO and cofounder of SignalFx. "Our technology provides fast-moving organizations with the safety net they desire to move quickly but catch problems proactively using our real-time analytics and high-quality alerting."
SignalFx provides the most advanced monitoring solution for cloud apps and modern infrastructure. The SignalFx solution aggregates metrics across distributed services with powerful streaming analytics to alert on service-wide issues and trends in real time, versus host-specific errors well after the fact, addressing critical application and infrastructure management challenges unanswered by traditional monitoring, APM and logging vendors. This capability is critical for engineering and operations teams responsible for apps that go beyond a single instance and are built on modern infrastructures like AWS or Google Cloud Platform, and platforms such as Docker or Elasticsearch.
SignalFx provides pre-built integrations with out-of-box charts, dashboards, analytics and alerting for AWS services and popular open-source technologies, in addition to metrics from other monitoring products. Organizations of all sizes, from small startups like Stormpath and Qubole to mid-sized companies like Viki and Stage 3 Systems to large-scale organizations like HubSpot and Zenefits, trust SignalFx to ensure applications are reliably performing.
MILPITAS, Calif. – ProphetStor Data Services, Inc. today announced that it has partnered with Cavium to co-develop and co-market a cost-effective all-solid-state storage array based on Cavium's ThunderX® ARM processors. The array is designed to meet the demands of budget-sensitive data centers and enterprises looking to lower both CAPEX and OPEX for their high-performance applications.
The application-aware storage array takes full advantage of ThunderX's large core count, memory capacity, integrated virtSoC™ technology and multiple built-in Ethernet ports to deliver ProphetStor's Federator® SDS technology, which offers intelligent storage management, automation, and a suite of advanced data services to support the most mission-critical applications such as databases, virtualization, cloud computing, and big data analytics.
"The fully integrated I/O that is designed into ThunderX, combined with leading ARMv8 server features such as dual socket support and 48 cores per socket, is a tremendous platform to deliver the full capability of Federator SDS to end users," said Larry Wikelius, Vice President Software Ecosystem and Solutions, Cavium. "Cavium continues to drive the server ecosystem with partners that can fully utilize key ThunderX features and deliver significant value and competitive advantage to our customers. With its innovative and complementary software-defined-storage technologies, ProphetStor is an extremely valuable partner of our expanding community of ecosystems."
"Feature-rich software for storage arrays does come with a cost due to its many CPU-intensive tasks such as inline deduplication and compression," said Eric Chen, ProphetStor CEO. "With the high throughput of modern all-flash arrays, computation power becomes the next bottleneck. Cavium's ThunderX processors offer a cost-effective and energy-efficient solution to fully realize the performance advantages of the all-flash arrays powered by our Federator SDS software. With Cavium, we are able to create arrays with higher capacity and higher performance while reducing their chassis footprint and carbon footprint at the same time."
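To illustrate why inline deduplication is CPU-intensive, as the quote above notes: every incoming block must be hashed and looked up against an index of previously seen blocks before it is written. A minimal content-addressed sketch follows; the block size and hash choice are illustrative, not ProphetStor's implementation:

```python
import hashlib

class DedupStore:
    """Minimal inline-dedup sketch: store each unique block once,
    keyed by its SHA-256 digest."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # digest -> block bytes

    def write(self, data: bytes):
        """Split data into fixed-size blocks and return the digests
        that reference them. Hashing every block on the write path
        is the CPU cost inline dedup pays for capacity savings."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store once
            refs.append(digest)
        return refs

store = DedupStore(block_size=4)
store.write(b"AAAABBBBAAAA")   # the repeated "AAAA" is stored once
print(len(store.blocks))       # 2 unique blocks
```

Since this hashing happens on every write at full array throughput, per-core performance and core count directly bound how fast the array can ingest data, which is the bottleneck the quote describes.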
About Cavium, Inc.
Cavium is a leading provider of highly integrated semiconductor products that enable intelligent processing in enterprise, data center, cloud and wired and wireless service provider applications. Cavium offers a broad portfolio of integrated, software compatible processors ranging in performance from 100 Mbps to 100 Gbps that enable secure, intelligent functionality in enterprise, data-center, broadband/consumer and access & service provider equipment. Cavium's processors are supported by ecosystem partners that provide operating systems, tool support, reference designs and other services. Cavium's principal offices are in San Jose, California with design team locations in California, Massachusetts, India, and China. For more information, please visit http://www.cavium.com.
About ProphetStor Data Services, Inc.
ProphetStor Data Services, Inc., a leader in Software-Defined Storage (SDS), provides federated storage and data services to enable both enterprises and cloud service providers to build an agile, automated, intelligent, and orchestrated storage infrastructure.
ProphetStor was founded in 2012 by seasoned storage experts with extensive experience in cloud computing platforms, software-based networked storage, data services, business continuity and disaster recovery.
Headquartered in Milpitas, California, ProphetStor has branch offices in the Asia-Pacific region to serve international customers. For more information, visit www.prophetstor.com.
ProphetStor Federator is a trademark of ProphetStor Data Services, Inc. in the US and other countries. All other company and product names contained herein may be trademarks of their respective holders.
Explosion in Ransomware Drives All-time High in Malicious Domain Creation
SANTA CLARA, Calif. – Infoblox Inc. (NYSE: BLOX), the network control company, today released the Infoblox DNS Threat Index for the first quarter of 2016, highlighting a 35-fold increase in newly observed ransomware domains from the fourth quarter of 2015. This dramatic uptick helped propel the overall threat index, which measures creation of malicious Domain Name System (DNS) infrastructure including malware, exploit kits, phishing, and other threats, to its highest level ever.
Ransomware is a relatively brazen attack where a malware infection is used to seize data by encrypting it, and then payment is demanded for the decryption key. According to Rod Rasmussen, vice president of cybersecurity at Infoblox, "There has been a seismic shift in the ransomware threat, expanding from a few actors pulling off limited, small-dollar heists targeting consumers to industrial-scale, big-money attacks on all sizes and manner of organizations, including major enterprises. The threat index shows cybercriminals rushing to take advantage of this opportunity."
The FBI recently revealed that ransomware victims in the United States reported costs of $209 million in the first quarter of 2016, compared to $24 million for all of 2015. High-profile Q1 ransomware incidents include the February 2016 attack on Hollywood Presbyterian Medical Center in Los Angeles and the March 2016 breach at MedStar Health in Washington D.C.
Record Number of New Malicious Domains
The Infoblox DNS Threat Index hit an all-time high of 137 in Q1 2016, rising 7 percent from an already elevated level of 128 in the prior quarter, and topping the previous record of 133 established in Q2 2015. The Infoblox DNS Threat Index tracks the creation of malicious DNS infrastructure, through both registration of new domains and hijacking of previously legitimate domains or hosts. The baseline for the index is 100, which is the average for creation of DNS-based threat infrastructure during the eight quarters of 2013 and 2014.
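The index arithmetic described above is simple to reproduce: a quarter's count of newly observed malicious domains is normalized against the 2013-2014 baseline average, which is pinned to 100. The sketch below uses illustrative quarterly counts, not Infoblox's actual data:

```python
def threat_index(quarter_count, baseline_counts):
    """Index a quarter's malicious-domain count against a baseline.

    The baseline (index = 100) is the mean count over the eight
    quarters of 2013 and 2014, as the report describes.
    """
    baseline_avg = sum(baseline_counts) / len(baseline_counts)
    return round(100 * quarter_count / baseline_avg)

# Illustrative numbers only: a baseline averaging 1,000 domains/quarter
baseline = [900, 950, 1000, 1050, 980, 1020, 1100, 1000]
print(threat_index(1370, baseline))  # a count 37% above baseline -> 137
```

On this scale, the Q1 2016 reading of 137 simply means 37 percent more malicious-domain creation than the 2013-2014 average.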
Five New Countries Top List of Those Hosting Malicious Domains
The United States continues to be the top host for newly created or exploited malicious domains, accounting for 41 percent of the observations, a significant drop from last quarter's 72 percent lion's share. Five other countries and regions saw major increases in activities:
- Portugal: 17 percent
- Russian Federation: 12 percent
- Netherlands: 10 percent
- United Kingdom: 8 percent
- Iceland: 6 percent
Germany, which last quarter accounted for almost 20 percent of newly observed malicious domains and related infrastructure, nearly dropped off the list at less than 2 percent.
"Cybercriminals are as likely as anyone else to take advantage of sophisticated infrastructure, and all of the countries in this quarter's list fit that description," said Lars Harvey, vice president of security strategy at Infoblox. "But the geographic spread shows that much like cockroaches that scurry from the light, cybercriminals are quick to shift to a more advantageous location as needed."
Exploit Kits Remain Top Threat
Exploit kits, toolkits for hire that make cybercrime easier by automating malware creation and delivery, remain the biggest threat, accounting for just over 50 percent of the overall index. As in past quarters, Angler remains the most-used exploit kit, but a new contender has emerged from far back in the pack: observations of Neutrino grew by 300 percent. Angler is notorious for pioneering the "domain shadowing" technique used to defeat reputation-based blocking strategies, and for infiltrating malicious URLs into legitimate ad networks, taking visitors to websites that install malware even if they don't click on the infected ads. Recent Neutrino campaigns have been observed infecting victims' systems with various versions of ransomware such as Locky, TeslaCrypt, CryptoLocker2, and Kovter.
About DNS and the Infoblox DNS Threat Index
DNS is the address book of the Internet, translating human-readable domain names such as www.google.com into the machine-readable Internet Protocol (IP) addresses computers use to connect. Because DNS is required for almost all Internet connections, cybercriminals are constantly creating new domains and subdomains to unleash a variety of threats including exploit kits, phishing, and distributed denial of service (DDoS) attacks.
For more details about the Infoblox DNS Threat Index methodology and to read the full report for the first quarter of 2016, go to www.infoblox.com/dns-threat-index.
Infoblox (NYSE: BLOX) delivers critical network services that protect Domain Name System (DNS) infrastructure, automate cloud deployments, and increase the reliability of enterprise and service provider networks around the world. As the industry leader in DNS, DHCP, and IP address management, the category known as DDI, Infoblox (www.infoblox.com) reduces the risk and complexity of networking.
Forward-looking and Cautionary Statements-Infoblox
Certain statements in this release are forward-looking statements, which involve a number of risks and uncertainties that could cause actual results to differ materially from those in such forward-looking statements. As such, this release is subject to the safe harbors created by U.S. Federal Securities Laws. The risks and uncertainties relating to these statements include, but are not limited to, risks that there may be design flaws in the company's products, shifts in customer demand and the IT services market in general, shifts in strategic relationships, delays in the ability to deliver products, or announcements by competitors. These and other risks may be detailed from time to time in Infoblox's periodic reports filed with the Securities and Exchange Commission, copies of which may be obtained from www.sec.gov. Infoblox is under no obligation to (and expressly disclaims any such obligation to) update or alter its forward-looking statements whether as a result of new information, future events, or otherwise.
NEW YORK, NY – SmartMetric, Inc. (OTCQB: SMME) -- After developing its leading-edge biometric, fingerprint-activated credit card for use with the global EMV chip card format, SmartMetric is now working to bring to market an advanced multi-function access control and identity card secured by biometric validation.
"We have spent over a decade in research and development to make a credit card with a fingerprint scanner built inside the card. Based on this R&D, we are now able to move relatively quickly to bring to market a world-leading biometric multi-function access control and identity card that is no thicker than a standard credit card," said SmartMetric President & CEO Chaya Hendrick.
The new biometric security card developed by SmartMetric has a built-in fingerprint scanner in a card no thicker than a credit card, so it easily fits into wallets and purses. Inside the card, RFID technology sends a signal to doorway locks for secure physical access control. On its surface, a smartcard chip is used for secure log-on to computer networks. The card also provides on-the-spot visual identity verification: lights shine green or red following an on-card fingerprint scan by the cardholder at a security desk or anywhere on a government or business site, inside or outside the building.
Delivering 100% identity verification through SmartMetric's miniature in-card biometric technology in a multi-function, credit-card-sized platform brings a brand-new, cutting-edge product to the world of security: securing perimeter entry points, on-the-spot identity checks across a campus, and physical building entry control, along with secure computer network log-on, all using one very smart biometric security card created by SmartMetric.
An IDC study sized the worldwide identity and access management market at $4.8 billion in revenue in 2013, up from $4.5 billion in 2012. "We anticipate that the overall market will increase to $7.1 billion in 2018," said Pete Lindstrom, research director for Identity and Access Management.
To view a video of the SmartMetric biometric chip card follow this link:
SmartMetric Biometric Payments Card -- https://youtu.be/zSX59uHoHqU
To view the company website: www.smartmetric.com.
Safe Harbor Statement: Certain of the above statements contained in this press release are forward-looking statements that involve a number of risks and uncertainties. Such forward-looking statements are within the meaning of that term in Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. Readers are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, and that actual results may differ materially from those indicated in the forward-looking statements as a result of various factors.
OSLO, Norway – NEXT Biometrics Group ASA (Oslo Bors: NEXT) today announced the introduction of its new generation of low-power, cost-efficient, robust, full-size fingerprint sensors.
Tore Etholm-Idsoe, CEO of NEXT Biometrics, said, "After 12 months of development we are very happy to announce that we have succeeded in creating a brand new core sensor design that consumes less power and going forward will be used in both our flexible and rigid sensor modules."
"This sensor is a key part of our strategy to seize the leader position in the booming smart card market," he said.
The company added that the new sensor designs further enhance NEXT's robustness against electrostatic discharge (ESD) to levels compliant with the most rigorous ESD standards, meeting the requirements of all the mass markets relevant for fingerprint sensors.
NEXT CEO Tore Etholm-Idsoe said, "This is an important milestone in our strategic road map. We thank our high-quality team of design, software, firmware and hardware experts, who have done an impressive job developing this new sensor generation."
About NEXT Biometrics:
Enabled by its patented NEXT Active Thermal principle, NEXT Biometrics (www.NextBiometrics.com) offers high-quality area fingerprint sensors at a fraction of the price of comparable competitors. NEXT targets a wide range of product formats, including smart cards, smartphones, tablets, PCs, doors, time-registration systems, wearables, payment terminals, flash drives, USB tokens, key fobs and many more.
NEXT BIOMETRICS GROUP ASA is a publicly listed company headquartered in Oslo, Norway, with sales, support and development subsidiaries in Seattle, Silicon Valley, Taipei, Prague and Shanghai. Media and investor contacts for NEXT Biometrics: Tore Etholm-Idsøe, CEO, Tore.Idsoe@NEXTbiometrics.com; Knut Stalen, CFO, Knut.Stalen@NEXTbiometrics.com.
73% of Organizations Report Business Initiatives Thwarted or Delayed, in Many Cases Because of Data Security Gaps
FREMONT, Calif. – Dataguise, a technology leader in secure business execution, today announced the findings of a new survey titled "Strategies for Securing Sensitive Data." In the survey, 100 senior IT decision makers, including CxOs, VPs, directors, and managers were questioned on the topic of sensitive data security, including technologies in use, impacts to businesses when failures occur, and accountability after such events. The survey participants represented firms from a wide variety of industries that were chosen for the intensity at which they consume data. Conducted between March and April of 2016, the survey uncovers several truths about sensitive data management, risks, and increasing budgets for improving IT security infrastructure.
In March of 2016, Dataguise commissioned Gatepoint Research to conduct an invitation-only survey of enterprise IT executives regarding strategies for securing sensitive data. Candidates were chosen from a wide range of industries, including financial services, healthcare, manufacturing, business services, consumer services, retail, media, and education. 54% of those that completed the survey work for Fortune 1000 organizations with revenues over $1.5 billion. 20% work for medium to large firms whose revenue is between $250 million and $1.5 billion. 26% are employed by small enterprises with less than $250 million in revenue.
Observations and conclusions of the 13 question survey included the following:
- Companies are transitioning toward big data frameworks, including cloud-based environments such as Microsoft Azure HDInsight. 28% of respondents report more than a year of experience with these big data repositories, and another 38% are in various stages of adoption.
- Data security challenges often have a negative impact on organizations with 73% reporting that data security concerns terminate or delay data-driven business initiatives.
- Companies use multiple security solutions to protect sensitive data, with 82% using network monitoring, 80% leveraging data encryption, 79% implementing access controls, 69% installing perimeter controls, 63% using volume and file encryption, and 43% implementing data masking.
Even with multiple layers of security in place, fewer than half of all respondents believed their data was secure: only 47% were confident that the sensitive data throughout their organization was safe. Furthermore, the survey revealed that sensitive data within organizations can be widely accessed by a large number of individuals. In addition to the 80% of respondents indicating that their IT teams had access to sensitive data, 40% said test and development teams also had access, and 29% indicated that end-users throughout the enterprise could view this information. Finally, while 62% of those surveyed said their firms passed security audits, 11% failed and 20% were unclear whether they had passed.
Identifying where the buck stops when unauthorized access to sensitive data occurs, the survey also asked who would be held accountable if the organization encountered a breach. 88% of respondents said that their IT security team (including the CISO/CIO) would face scrutiny. 47% said their CEO or board of directors would bear responsibility. 38% would point to the chief data officer (CDO), and 24% would fault the user or users who created the data. The takeaway is that IT security teams are at the greatest risk should a breach occur and must strengthen their data infrastructure to keep the danger of unauthorized access low.
"As we have experienced, many companies are throwing everything they have at IT security challenges. The problem is that even multiple point solutions still leave gaps that put these organizations at risk," said JT Sison, vice president of marketing and business development for Dataguise. "Addressing this at the data layer plugs the remaining gaps, regardless of its migration across systems and networks. Additionally, platform agnostic monitoring of this sensitive data provides precise intelligence to administrators, providing a much higher level of protection for greater levels of confidence."
A complete copy of the survey results is available for free download at: http://www.dataguise.com/strategies-for-securing-sensitive-data-survey-results/
Tweet This: Survey Reveals Information Security Issues Remain a Hindrance to Data-Driven Corporations - http://bit.ly/1PzF3FJ #bigdata @Dataguise
- Follow Dataguise on Twitter at: http://twitter.com/dataguise
- Follow Dataguise on LinkedIn at: http://www.linkedin.com/company/dataguise
- Follow Dataguise on Facebook at: http://www.facebook.com/dataguise
- Contact Dataguise directly at: http://www.dataguise.com/contact_us/
Dataguise is the leader in secure business execution, delivering data-centric security solutions that detect and protect an enterprise's sensitive data, no matter where it lives or who needs to leverage it. Dataguise solutions free the enterprise from traditional security constraints to support the data-driven organization and maximize the business value of information. DgSecure by Dataguise makes data security painless, delivering a powerful solution that provides the highest level of protection without the need for programming. The company is proud to secure the data of many Fortune 500 companies committed to responsible data stewardship. To learn more about how Dataguise is spearheading the secure data revolution, visit: www.dataguise.com
RAID Inc. to Deliver Next Level of Software Defined Storage Solutions
ANDOVER, Mass. – RAID Inc. was recently awarded a contract to provide Lawrence Livermore National Laboratory (LLNL) with a custom parallel file system solution for its unclassified computing environment. RAID will deliver a 17PB file system able to sustain up to 180 GB/s. These high-performance, cost-effective solutions are designed to meet LLNL's current and future demands for parallel-access data storage.
According to Mark Gary, Deputy Division Leader in Livermore Computing, this new file system infrastructure "will be deployed in support of cutting edge application development and large-scale scientific simulation in LLNL's unclassified environment."
LLNL has built a world-class high performance computing (HPC) ecosystem designed to address a range of complex computational challenges which can also be used to solve high-impact problems critical to national concerns.
"RAID's tested parallel file system solutions are designed to help accelerate LLNL's HPC leadership in building the next generation of open source production environments," said Robert Picardi, CEO of RAID Inc. "And there are many commercial applications which will benefit from highly scalable, cost efficient and high performance storage solutions."
- The parallel file system will run Lustre 2.8 with ZFS OSDs and multiple metadata servers.
- The Lustre file system contains 36 OSS nodes, each capable of 5 gigabytes per second of sustained data performance, and 16 metadata servers with 25TB of SSD storage capacity.
- The solution is anchored by enterprise-class 4U 84-bay 12G SAS JBODs, LSI/Avago 12G SAS (serial attached SCSI), Mellanox EDR InfiniBand, HGST 12G enterprise SAS disk drives, and Intel server technologies.
- The file system incorporates six scalable storage units, each containing six Lustre OSS nodes and six 4U 84-bay JBODs with 480 8TB SAS drives, employing ZFS on Linux with raidz2 parity protection. Resiliency is provided by multipath and high-availability failover connectivity, intended to eliminate single points of failure.
- An additional software feature was added to manipulate tunable features and settings on disks in the same way RAID controller manufacturers fine-tune disk firmware for their enclosures. It not only squeezes every bit of performance out of the drives but also provides extensive diagnostic reporting in order to catch and potentially fix problems long before they affect data flow and integrity.
- LLNL's HPC facility consists of numerous computer platforms and file systems spanning multiple buildings and operating at multiple security levels.
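The headline figures above can be sanity-checked with a little arithmetic. A hedged Python sketch, assuming 10-drive-wide raidz2 vdevs (8 data + 2 parity) and decimal terabytes, neither of which is stated in the release:

```python
# Cross-check the LLNL deployment numbers quoted in the release.

# Aggregate bandwidth: 36 OSS nodes at 5 GB/s each.
OSS_NODES = 36
GBPS_PER_OSS = 5
aggregate_gbps = OSS_NODES * GBPS_PER_OSS  # matches the stated 180 GB/s

# Raw capacity: 6 scalable storage units, 480 drives each, 8TB per drive.
SSUS = 6
DRIVES_PER_SSU = 480
DRIVE_TB = 8
raw_tb = SSUS * DRIVES_PER_SSU * DRIVE_TB  # 23,040 TB raw (~23 PB)

# Usable capacity under an ASSUMED 8+2 raidz2 layout: 8/10 of raw,
# before ZFS metadata and reserved-space overhead.
VDEV_WIDTH, PARITY = 10, 2
usable_tb = raw_tb * (VDEV_WIDTH - PARITY) / VDEV_WIDTH  # 18,432 TB

print(aggregate_gbps, raw_tb, usable_tb)
```

Under these assumptions the usable figure lands around 18.4 PB before filesystem overhead, which is consistent with the 17PB file system the release describes.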
About Lawrence Livermore National Lab
Founded in 1952, Lawrence Livermore National Laboratory (www.llnl.gov) provides solutions to our nation's most important national security challenges through innovative science, engineering and technology. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy's National Nuclear Security Administration.
About RAID Incorporated
Since 1994, RAID Inc. has developed custom end-to-end vendor-agnostic technical computing solutions to address high performance computing and big data storage challenges across many industries and vertical markets. The company has acquired far-reaching industry knowledge and relationships, which, combined with its team of experts who have extensive academic, research lab and commercial expertise, make RAID Inc. a trusted industry leader.