
(TNS) - Compared to this time last year, Santa Cruz Police Department’s calls for service are down by more than a quarter.

During a city Public Safety Committee meeting Monday, Police Chief Andy Mills unveiled early plans on how to further drive down those calls.

“Yesterday, I saw a person reporting that he responded to a call of a person dancing in the rain,” Mills told the committee, comprised of Mayor David Terrazas and Councilwomen Cynthia Chase and Richelle Noroyan. “That is why we’re allowing sergeants to screen out calls officers are responding to.”

The department saw a nearly 27 percent reduction in the first three months of the year, to 17,630 calls, in the wake of a consultant’s report showing that city officers’ workload-to-population ratio was much higher than that of many cities Santa Cruz’s size. Arrests, on the other hand, were up nearly 12 percent in the same period, department statistics show.

...

http://www.govtech.com/em/disaster/-Calling-911-for-Everything-Soon-to-be-a-No-Go-in-Santa-Cruz-as-Police-Look-to-Prioritize.html

The IRS spent Tax Day trying to resolve IT issues rather than processing last-minute returns.

When the news broke that the IRS’s Modernized e-File (MeF) system was down, along with the Direct Pay and Payment Plan pages on the IRS site, three possible scenarios that can take businesses down came to mind: a hack, overloaded systems, or pure coincidence.

How likely is each?

While an attacker could breach the perimeter and/or the systems and turn off the services that allow connections to the systems accepting tax returns, no one has claimed responsibility and, so far, there is no evidence that this was a malicious denial-of-service attack.

...

https://blog.sungardas.com/2018/04/irs-outage-on-tax-day-hack-coincidence-or-overloaded-systems/

Florida’s small P/C insurers have withstood losses from Hurricane Irma and a legal environment that’s dubbed a “judicial hellhole” by the American Tort Reform Association, a recent article in S&P Global Market Intelligence reports.

The financial ratings firm Demotech affirmed the financial strength of over 50 companies in late March, a decision that Barry Gilway, CEO of the state-run Citizens Property Insurance Corp., found “encouraging.”

Gilway said that Demotech’s March actions are evidence of the resilience smaller carriers showed during a year in which Hurricane Irma caused insured losses of about $8.61 billion, according to the latest Florida Office of Insurance Regulation tally.

...

http://www.iii.org/insuranceindustryblog/small-florida-insurers-survive-hurricanes-and-judicial-hellhole/

By Tim Crosby

PREFACE: This article was written before ‘Meltdown’ and ‘Spectre’ were announced – two new critical “zero-day” vulnerabilities that affect nearly every organization in the world. Given the sheer number of vulnerabilities identified in the last 12 months, one would think patch management would be a top priority for most organizations, but that is not the case. If the “EternalBlue” (MS17-010) and “Conficker” (MS08-067) vulnerabilities are any indication, I have little doubt that I will be finding the “Meltdown” and “Spectre” exploits in my audit initiatives for the next 18 months or longer. This article is intended to emphasize the importance of timely software updates.

“It Only Takes One” – one exploitable vulnerability, one easily guessable password, one careless click; one is all it takes. So, is all this focus on cyber security just a big waste of time? The answer is NO. A few simple steps or actions can make an enormous difference when that “one” event occurs.

The key step everyone knows, but most seem to forget, is keeping your software and firmware updated. Outdated software gives hackers the footholds they need to break into your network, escalate privileges, and move laterally. During a recent engagement, 2% of the targeted users clicked on a link with an embedded payload that provided us shell access into their network. A quick scan identified a system with an easily exploitable Solaris Telnet vulnerability that allowed us to establish a more secure position. The vulnerable Solaris system was a video projector to which no one gave a second thought, even though the firmware update had existed for years.

Our scan through this projector showed SMBv1 traffic, so we scanned for “EternalBlue,” targeting Windows Server 2008 systems because they were likely to have exceptions to the “Auto Logoff” policy and to be a good place to gather clear-text credentials for administrator or helpdesk/privileged accounts. Several of these servers were older HP servers with HP System Management Homepages, some were running Apache Tomcat with default credentials (which should ring a bell – the Equifax Argentina breach), a few were running JBoss/JMX, and one system was even vulnerable to MS09-050.
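
For readers who want to reproduce that kind of check on their own network (with written authorization), the short sketch below is a minimal example – not the exact tooling from the engagement above – that shells out to Nmap and flags hosts that still offer SMBv1 or that Nmap’s bundled smb-vuln-ms17-010 script reports as vulnerable to EternalBlue. It assumes nmap is installed and on your path.

    #!/usr/bin/env python3
    """Flag hosts that still speak SMBv1 and appear vulnerable to MS17-010.

    Minimal sketch: shells out to Nmap and filters its normal output.
    Assumes nmap is installed and you are authorized to scan the range."""
    import subprocess
    import sys

    def scan_ms17_010(target_range):
        # -p445: SMB. smb-protocols lists supported dialects (SMBv1 shows up as
        # "NT LM 0.12"); smb-vuln-ms17-010 checks the EternalBlue patch state.
        cmd = ["nmap", "-p445", "--open",
               "--script", "smb-protocols,smb-vuln-ms17-010", target_range]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "192.168.1.0/24"
        for line in scan_ms17_010(target).splitlines():
            # Keep only the host headers and the findings that matter for triage.
            if line.startswith("Nmap scan report") or "VULNERABLE" in line or "NT LM 0.12" in line:
                print(line)

The script names and the strings being matched reflect the Nmap releases I am familiar with; adjust the filter if your version’s output differs.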

The vulnerabilities that make the above scenario possible have published exploits readily available in the form of free, open-source software designed for penetration testing. We used the Metasploit Framework to exploit a few of the “EternalBlue”-vulnerable systems, followed the NotPetya playbook, and dumped clear-text credentials with Mimikatz. Before our scans had completed, we were on a domain controller with “System” privileges. The total time from “one careless click” to Enterprise Admin: less than two hours.

The key to our success? Not our keen code-writing ability, not a new zero-day vulnerability, not a network of supercomputers, not thousands of IoT devices working in unison – it wasn’t even a trove of payloads we purchased with Bitcoin on the Dark Web. The key was systems vulnerable to widely publicized exploits with widely available fixes in the form of updated software and/or patches. In short, outdated software. We used standard laptops running the Kali or Parrot Linux operating systems with widely available free and/or open-source software, most of which comes preloaded on those distributions.

A projector running Solaris is not uncommon; many office devices, including printers and copiers, have full Unix or Linux operating systems and internal hard drives. Most of these devices go unpatched and therefore make great pivoting opportunities. They also give an attacker a place to gather data (printed or scanned documents) and forward it to an external FTP site during off hours – a technique known as a store-and-forward platform. The patch/update for the system referenced above has been available since 2014. Many of these devices also ship with WiFi and/or Bluetooth interfaces enabled even when they are connected directly to the network via Ethernet, making them a target for bypassing your firewalls and WPA2 Enterprise security. Any device that connects to your network, no matter how small or innocuous, needs to be patched and/or have software updates applied on a regular basis, and needs to undergo rigorous system hardening, including disabling unused interfaces and changing default access settings. This one device with outdated software extended our attack long enough to identify other soft targets. Had it been updated and patched, our initial foothold could have vanished the first time auto logoff occurred.
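
A quick way to spot office devices that have skipped that hardening step is simply to check whether they still expose legacy services such as Telnet or FTP. The sketch below is a hypothetical example using only the Python standard library; the device addresses and port list are placeholders, not values from the engagement described above.

    #!/usr/bin/env python3
    """Flag embedded office devices (printers, copiers, projectors) that still
    expose legacy services hardening should have disabled. Standard library
    only; device addresses and ports below are illustrative placeholders."""
    import socket

    DEVICES = ["10.0.20.11", "10.0.20.12", "10.0.20.13"]   # hypothetical inventory
    LEGACY_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP admin page"}

    def open_ports(host, ports, timeout=1.0):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:   # 0 means the TCP connect succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        for device in DEVICES:
            exposed = open_ports(device, LEGACY_PORTS)
            if exposed:
                services = ", ".join(LEGACY_PORTS[p] for p in exposed)
                print(f"{device}: still exposing {services} - review hardening and patch level")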

Before you scoff or get judgmental, believing that only incompetent or lazy network administrators or managers could allow this to happen, slow down and think. Where do the patch management statistics for your organization come from? What data do you rely on? Most organizations gather and report patching statistics based on data taken directly from their patch management platform. Fact – systems fall out of patch management systems, or are never added, for many reasons: a failed GPO push, a switch outage during the process, or systems that fall outside the patch manager’s responsibility or knowledge (printers, network devices, video projectors, VoIP systems). Fact – your spam filter may be filtering critical patch-failure reports; this happens far more often than you might imagine.

A process outside of the patching system needs to verify that every device is in the patch management system and that the system is capable of pushing all patches to all devices. This process can be as simple and cost-effective as running and reviewing Nmap scripts, or as complex and automated as commercial products such as Tenable’s SecurityCenter or BeyondTrust’s Retina, which can be scheduled to run and report immediately after the scheduled patch updates. THIS IS CRITICAL! Unless you know every device connected to your network – wired, wireless, or virtual – and its patch/version health status, there are going to be holes in your security. At the end of this process, no matter what it looks like internally, the CISO/CIO/ISO should be able to answer the following (a minimal reconciliation sketch follows this checklist):

  • Did the patches actually get applied?

  • Did the patches undo a previous workaround or code fix?

  • Did ALL systems get patched?

  • Are there any NEW critical or high-risk vulnerabilities that need to be addressed?
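
As one possible shape for that verification step, here is a minimal sketch that reconciles a network discovery scan against an inventory exported from the patch management platform, flagging anything the patching system has never seen. The file name, CSV format, and target range are illustrative assumptions, not a reference to any particular product’s export.

    #!/usr/bin/env python3
    """Reconcile a network discovery scan against the patch-management inventory.

    Minimal sketch: the CSV format, file name, and target range are assumptions.
    Requires nmap and authorization to scan the range."""
    import csv
    import re
    import subprocess

    def discovered_hosts(target_range):
        # -sn: ping sweep only (no port scan); pull the IPv4 addresses out of the output.
        out = subprocess.run(["nmap", "-sn", target_range],
                             capture_output=True, text=True, check=True).stdout
        return set(re.findall(r"Nmap scan report for (?:\S+ \()?(\d+\.\d+\.\d+\.\d+)\)?", out))

    def managed_hosts(inventory_csv):
        # Hypothetical CSV export from the patch platform with an "ip" column.
        with open(inventory_csv, newline="") as fh:
            return {row["ip"].strip() for row in csv.DictReader(fh)}

    if __name__ == "__main__":
        on_network = discovered_hosts("10.0.0.0/24")            # placeholder range
        in_patch_system = managed_hosts("patch_inventory.csv")  # placeholder export

        unmanaged = sorted(on_network - in_patch_system)
        stale = sorted(in_patch_system - on_network)

        print(f"{len(unmanaged)} device(s) on the network but unknown to patch management:")
        for ip in unmanaged:
            print("   " + ip)
        print(f"{len(stale)} inventory entries not seen on the network (retired or offline?):")
        for ip in stale:
            print("   " + ip)

A reconciliation like this only answers the inventory question; confirming that patches actually applied, that they did not undo a previous workaround, and that no new critical vulnerabilities appeared still takes an authenticated vulnerability scan or the patch platform’s own reporting.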

There are probably going to be devices that need to be patched manually, and there is a very strong likelihood that some software applications are locked into vulnerable versions of Java, Flash, or even Windows XP/2003/2000. So, there are devices that will be patched less frequently or not at all. Many organizations simply say, “That’s just how it is until manpower or technology changes – we just accept the risk.”

That may be a reasonable response for your organization; it all depends on your risk tolerance. If you have a lower risk appetite, what about firewalls or VLANs with ACL restrictions for devices that can’t be patched or upgraded? Why not leverage virtualization to reduce the attack surface of that business-critical application that needs to run on an old version of Java or only works on 2003 or XP? Published-application technologies from Citrix, Microsoft, VMware, or Phantosys fence the vulnerabilities into a small, isolated window that can’t be accessed by the workstation OS. Properly implemented, the combination of VLANs/DMZs and application virtualization reduces the actual probability of exploit to nearly zero and creates an easy way to identify and log any attempts to access or compromise these vulnerable systems. Once again, these are mitigating countermeasures for when patching isn’t an option.

We will be making many recommendations to our clients, including multi-factor authentication for VLAN access, changes to password length and complexity, and additional VLAN segmentation. However, topping the list of suggestions will be patch management and regular internal vulnerability scanning, preferably as the verification step for the full patch management cycle. Keeping your systems patched ensures that when someone makes a mistake and lets the bad guy or malware in, they have nowhere to go and a limited time to get there.

As an ethical hacker or penetration tester, one of the most frustrating things I encounter is spending weeks of effort to identify and secure a foothold on a network only to find myself stuck; I can’t escalate privileges, I can’t make the session persistent, I can’t move laterally, ultimately rendering my attempts unsuccessful. Though frustrating for me, this is the optimal outcome for our clients as it means they are being proactive about their security controls.

Frequently, hackers are looking for soft targets and follow the path of least resistance. To protect yourself, patch your systems and isolate those you can’t. By doing so, you increase the difficulty, effort, and time required, giving a pretty good chance they will move on to someone else. There is an old joke about two guys running from a bear, and the punch line applies here as well: “I don’t need to be faster than the bear, just faster than you…”

Make sure ALL of your systems are patched, upgraded, or isolated with mitigating countermeasures – thus making you faster than the other guy who can’t outrun the bear.

About Tim Crosby:

Timothy Crosby is Senior Security Consultant for Spohn Security Solutions. He has over 30 years of experience in the areas of data and network security. His career began in the early 1980s securing data communications as a teletype and cryptographic support technician/engineer for the United States military, including numerous overseas deployments. Building on the skillsets he developed in these roles, he transitioned into network engineering, administration, and security for a combination of public- and private-sector organizations throughout the world, many of which required maintaining a security clearance. He holds industry-leading certifications in his field and has been involved in designing the requirements and testing protocols for other industry certifications. When not spending time in the world of cybersecurity, he is most likely found in the great outdoors with his wife, children, and grandchildren.