
It’s Time the Enterprise Strikes Back Against Cyber Threats

May 7, 2012

If 2011 was the year of hacktivists and APTs, then 2012 must be the year in which the enterprise strikes back. We’ve been talking about the need for a new approach to protecting large distributed networks for the last 18 months – and now it’s clear we’re not alone in that view. The Verizon Data Breach Investigations Report makes it clear that the current siloed approach – in which data is collected by point products and then correlated manually – does not work.

Last August, we proclaimed that SIEM, as an effective way to protect a large distributed network, is dead – and everything we’ve seen since then validates that view. Given that the majority of large enterprises already have the right tools in place, it’s worrisome that, according to the Verizon report, 97% of the breaches that occurred in 2011 could have been avoided with only basic or intermediate controls. The traditional approach clearly isn’t working.

With this in mind, we’ve evolved our SecureVue situational awareness platform to make it even more powerful. We’ve listened to our customers and to the market, and we’ve created something that nobody else has: a single platform that gives complete visibility into an organization’s security posture via a single console – a bit like having a pair of Google-style augmented-reality glasses to help you protect your environment.

In the new SecureVue, we have:

  • Re-architected the entire platform to deliver faster analysis and greater data granularity
  • Increased the number of traditional point security tools we can work with (including SIEM products), enabling more data to be fed into SecureVue’s powerful forensics engine
  • Developed more APIs for native data collection
  • Increased security data search speeds, delivering “Big Data” analysis through the fastest database in the information security industry
  • Improved the scalability of our platform, enabling billions of security records to be searched across thousands of devices in seconds
  • Completely redesigned the user interface
  • Introduced an auto-profiling feature that analyzes large volumes of security data in their native formats to help organizations quickly determine “what’s normal,” without having to establish complex rules and alerts (a sketch of the idea follows this list)
  • Improved security configuration auditing capabilities through native support for the industry-standard SCAP format
  • Increased the number of out-of-the-box compliance reporting templates, making it easy to demonstrate adherence to regulatory mandates
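As promised above, here is the kind of statistical baselining an auto-profiling feature might perform. This is a minimal, hypothetical sketch – the function names and metrics are invented for illustration and are not SecureVue’s actual implementation or API: learn what “normal” looks like from history, then flag large deviations without hand-written rules.

```python
# Minimal sketch of statistical auto-profiling (illustrative only;
# not SecureVue's actual API). Learn a per-metric baseline from
# history, then flag readings that deviate sharply from it.
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(samples):
    """samples: iterable of (metric_name, value) pairs, e.g.
    ("host1.logins_per_hour", 4). Returns {metric: (mean, stdev)}."""
    history = defaultdict(list)
    for metric, value in samples:
        history[metric].append(value)
    # stdev needs at least two observations per metric
    return {m: (mean(v), stdev(v)) for m, v in history.items() if len(v) > 1}

def is_anomalous(baseline, metric, value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    from the learned norm for that metric."""
    if metric not in baseline:
        return True  # never seen before: worth a look
    mu, sigma = baseline[metric]
    return sigma > 0 and abs(value - mu) > threshold * sigma
```

Fed a few weeks of history, a profile like this can flag oddities that nobody thought to write a rule for.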

Over the course of the next few weeks, we’ll be talking about how these features – and others included in SecureVue 3.6 – deliver on the promise of situational awareness in a way that no other technology on the market today can.

For a free demonstration, contact enterprisestrikesback@eIQnetworks.com.


Compliance Matters for Healthcare IT

May 2, 2012

If you’re in healthcare IT, then I’m probably correct in guessing that one of your top priorities is compliance – whether that’s compliance with HIPAA (and avoiding those nasty increased penalties for HIPAA non-compliance in the HITECH Act), PCI DSS or something else. Of course, if your organization participates in programs for providing healthcare services to civilian or military employees of the federal government – such as FEHBP or TRICARE – then you have another compliance worry: information security certification and accreditation (C&A).

The federal government is highly focused on protecting the healthcare data of its employees and military personnel, and simply complying with HIPAA won’t cut it; C&A programs such as DIACAP and FISMA require extensive information security controls, and among the most critical is ensuring that healthcare-related systems are configured in a secure manner, based on federally developed standards such as DISA’s Security Technical Implementation Guides (STIGs).

The problem, unfortunately, is that many healthcare IT organizations don’t have a clear understanding of whether their systems comply with these detailed DISA STIG standards, let alone the ability to continuously monitor these controls (which is critical for both compliance and security). And there’s no quick, cost-effective way to obtain this information – or is there?

eIQnetworks is hosting a webinar on Tuesday, May 8th designed to help healthcare IT professionals understand how they can quickly audit the configuration posture of their enterprise assets against DISA STIGs, identify potential vulnerabilities, and take steps to bring their environments in line with compulsory Federal mandates.

To register for the webinar on May 8th at 1pm EDT, click here.

“The RSA breach wasn’t advanced; what happened afterwards was…”

March 30, 2012

At our Washington DC Executive Briefing on Wednesday, we were lucky enough to be treated to a keynote by former White House CIO Theresa Payton, who talked about the issues facing Federal security professionals as they battle to protect the nation’s critical infrastructure and sensitive data. Over the coming days we’ll be sharing with you some of the points she made, as well as a full transcript of Theresa’s presentation.

One of the most interesting – and thought-provoking – things Theresa said was about APTs (Advanced Persistent Threats). She made the point that the term has been used and abused, and in many cases is misunderstood. She used the RSA breach of 2011 as an example, saying that the breach itself wasn’t all that advanced, but what happened after the environment had been compromised was.

The problem is that often all security analysts see is the breach itself (the point of entry into the environment) and the damage done to the intended target (often the removal of data). They don’t see the complexity of what happened between those two points, because their systems don’t allow them to. Why? Because traditional point tools each collect one kind of data and send it to a traditional SIEM tool, which turns it all into log and event data. All the analyst sees is a lot of logs and events – they never see, for example, a configuration change as the state-based data that it is, or network traffic as a data type fundamentally different from server logs. Without correlation between these different types of data in their native formats, even the best forensic analysis tools can only provide a low-resolution image of the true scale of an attack. Perhaps it’s easiest to think of SIEM as an old-fashioned television that can only show a black-and-white picture, when what security analysts really need is a full 1080p, high-definition color picture with Dolby surround sound.

Wouldn’t it be great if you could see your network in high definition?
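To illustrate the data-type point made above, here is a minimal, hypothetical sketch (the type names are invented for illustration and are not SecureVue’s, or any SIEM’s, actual schema) of what gets lost when state and traffic data are forced through an event-shaped mold:

```python
# Hypothetical illustration: three security data types in their native
# shapes. A log-centric SIEM flattens all of them into the first one.
from dataclasses import dataclass

@dataclass
class LogEvent:          # a point-in-time occurrence
    timestamp: float
    host: str
    message: str

@dataclass
class ConfigState:       # state-based data: what a system looks like *now*
    host: str
    setting: str         # e.g. "PasswordComplexity"
    value: str           # meaningful as a snapshot you can diff over time

@dataclass
class NetFlow:           # network traffic: a conversation, not an event
    src: str
    dst: str
    dst_port: int
    bytes_out: int       # volume and direction matter, not a log line

def flatten(record, now: float) -> LogEvent:
    """What "turning everything into log and event data" amounts to:
    the typed fields collapse into an opaque message string."""
    host = getattr(record, "host", getattr(record, "src", "unknown"))
    return LogEvent(timestamp=now, host=host, message=repr(record))
```

Once everything is a `LogEvent`, you can no longer diff configuration snapshots over time or reason about traffic volume and direction; all of that is buried in a string.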

Hindsight is a wonderful thing!

March 27, 2012

The 2012 edition of the Verizon Data Breach Investigations Report was published last week and makes interesting reading for anyone working in or associated with information security. A couple of statistics stand out: 97% of breaches in 2011 were avoidable through relatively simple security controls, while 85% of breaches took weeks or more to discover.

Looking at avoidable breaches first, the report’s authors feel it necessary to add the caveat “in hindsight”… surely everything is avoidable if you have time to reflect on it and understand what went wrong, isn’t it? I’m also not sure I agree with the statement. We’re increasingly hearing that information security has moved on from stopping a network breach from occurring to preventing significant damage from being done. Which takes us to the second issue – the time taken to discover breaches.

My gut instinct tells me that 85% is perhaps a little on the low side. And herein lies the major challenge for information security professionals. If it is almost inevitable that somebody who wants to breach a network perimeter can – as a number of hacker groups demonstrated in 2011 – then most large organizations are left unable to protect the data within their environment. By the time most have realized they have been breached, the data has been accessed, and often removed from the network. And given that the days when attacks could be caught by signatures are all but over, the chances of learning anything of value from an autopsy on one attack are minimal.

The 2012 Verizon Data Breach Report is further verification that the only way to protect sensitive data within a large distributed environment is to have the ability to spot an attack while it is in progress – not weeks after it has taken place. That requires collecting and correlating data from many types of devices as events unfold, and giving security analysts a real-time view of what’s taking place in the network. It’s further reinforcement that SIEM, as we know it, is dead – and that information security now depends on situationally aware security professionals.

Sitting ducks?

March 22, 2012

Malware that would enable attackers to breach Federal networks may already be on the country’s critical infrastructure, just waiting for the right moment to access sensitive data, or do damage to major systems. That was the theme of a report I was reading yesterday.  With millions of dollars spent every year on information security, you might be asking: why hasn’t it been detected and dealt with yet?  It’s a valid question.

The answer lies in the way most information security tools work. The majority of large networks – both federal and commercial – deploy security technologies such as IDS and SIEM that are designed to spot specific events on hosts or networks, rather than anomalies such as changes in system state or configuration. They’re designed to spot a DDoS attack, for example, or the penetration of the network at the perimeter – events that have all the hallmarks of an attack. These technologies identify abnormal events as potential attacks (or other security-related issues) and take a pre-defined action – triggering an alert, or quarantining a file, for example.
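In code terms, the behavior described above boils down to something like the following, a deliberately simplified sketch (the rule names, event fields, and hash value are invented for illustration; no specific product is being quoted): known pattern in, pre-defined action out.

```python
# Simplified sketch of an event-matching point tool: each incoming event
# is checked against known patterns, and a match triggers a canned action.
# Anything without a matching rule passes through unnoticed.
KNOWN_BAD_HASHES = {"da39a3ee5e6b4b0d3255bfef95601890afd80709"}  # placeholder

RULES = [
    # (rule name, predicate over a single event, pre-defined action)
    ("ssh-bruteforce",
     lambda e: e.get("type") == "auth_fail" and e.get("count", 0) > 100,
     "alert"),
    ("known-malware",
     lambda e: e.get("sha1") in KNOWN_BAD_HASHES,
     "quarantine"),
]

def handle(event: dict):
    """Return the first matching rule and its canned response."""
    for name, matches, action in RULES:
        if matches(event):
            return name, action   # e.g. trigger an alert, quarantine a file
    return None                   # no signature, no rule match: no detection
```

That model works for events that carry all the hallmarks of an attack, and for nothing else.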

The majority of information security technologies deployed in large networks, however, aren’t prepared for complex, multifaceted attacks – or for sophisticated ones that can’t be identified through pre-defined signatures or administrator-created rules and alerts that look for “known entities”. These traditional point security tools can’t identify, for example, that there is a connection between multiple failed logins on an account with access to a critical system, configuration changes on a different system, abnormally high volumes of network traffic on a device containing sensitive data (such as HIPAA or PCI systems), and anomalous network traffic that isn’t necessarily breaking policy but still isn’t normal (such as the use of ephemeral TCP or UDP ports).
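What’s missing, in other words, is correlation across those separate streams. Here is a minimal, hypothetical sketch of the idea (the field names and signal kinds are invented for illustration, not a product API): individually weak signals on related systems, landing inside one time window, add up to something worth investigating.

```python
# Hypothetical sketch of cross-source correlation: no single signal is
# damning on its own, but several different kinds in one window are.
WINDOW_SECONDS = 3600

def correlate(signals):
    """signals: dicts like {"t": epoch_seconds, "host": "db01",
    "kind": "failed_login" | "config_change" | "traffic_spike"
    | "ephemeral_ports"}, drawn from different point tools."""
    signals = sorted(signals, key=lambda s: s["t"])
    for i, first in enumerate(signals):
        window = [s for s in signals[i:] if s["t"] - first["t"] <= WINDOW_SECONDS]
        kinds = {s["kind"] for s in window}
        if len(kinds) >= 3:  # three distinct weak signals close together
            hosts = sorted({s["host"] for s in window})
            return f"possible multi-stage attack across {hosts}: {sorted(kinds)}"
    return None
```

Each of those signals alone would drown in the noise of a single tool’s console; together, and in context, they tell a story.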

If the reports are true – and something is already on Federal networks – then what’s required to detect these threats is not a new system, but the next evolutionary step in information security. That is perhaps the toughest challenge organizations face in protecting themselves against the new breed of threats that will attack each and every organization out there… regardless of whether they’re prepared for it or not.

We were right… SIEM is Dead!

March 7, 2012

At the end of last year we announced our belief that SIEM, as an effective tool for protecting large distributed networks against cyber or insider attack, was dead.  We cited the growing complexity of attacks, driven in part by:

–   An increased complexity in network architectures;

–   The inability of signature-based technologies to identify zero-day and advanced persistent threats;

–   The need to collect ALL security data (not just log and event data);

–   The effective end of perimeter security as a way of stopping breaches from happening; and

–   The need for a way to correlate increasing volumes of security data in its native format, to provide security analysts with the intelligence they need to proactively mitigate the damage done by an attack and protect critical data assets.

Sure, there’s still a place for point SIEM tools in the information security equation – but effectively only as data collectors, and only for event-based data. As we all know, system state data, network traffic, performance metrics, and other security data elements are not events… and SIEM platforms that try to treat them as such are of minimal use. So again we say that SIEM – as it was intended, and as it was sold to customers, a panacea for security threat detection – is indeed dead!

So what’s the next step?  Read more…

For SIEM 2.0, Read SIEM 1.0 [with some shiny new marketing]

March 6, 2012

Wandering the show floor at RSA, I was intrigued by a demo of what was billed as ‘SIEM 2.0’. Does SIEM have a new pretender to its crown?

Sadly, it appears that SIEM 2.0 is just a repackaging of the same tired old SIEM story. The presenter challenged observers to “answer three questions”, all three prefaced with “Would you know if…”:

–       A new account was created on your network, somebody logged in and then within 24 hours the account was deleted?

–       A privileged account was logged in from Kansas – and then again from Russia within a couple of hours?

–       An Adobe Reader process initiated outbound network activity upon opening a document?

All relevant questions, but all based on log and event data. We’d like to ask the vendor in question whether, with “SIEM 2.0”, they could answer the more critical and detailed questions that a real-world security analyst would actually want – and need – answered in order to mitigate a threat in real time:

–   In addition to telling me that someone logged into an account that was soon deleted, can you tell me about the context of the user? For example, were there any unauthorized configuration changes on the system on which the account was used in the prior days and weeks? Are the systems from which the account was used compliant with the organization’s security configuration baseline?

–   While geolocation data is nice, it doesn’t help if a legitimate user is on business travel or vacation and is accessing systems through VPNs or multiple cloud infrastructures; can you determine the user’s activity patterns, to see whether recent logons from multiple countries in a short period of time are “normal” based on historical activity? (A sketch of this idea follows the list.)

–   What user context is the Adobe Reader process running inside of? Have there been any updates or changes to the version of Adobe Reader, including file integrity changes on key executables and DLLs? What’s actually inside the payload of the network traffic the process is initiating?
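Taking the second question as an example, here is the kind of history-based check we have in mind, sketched in a few lines of illustrative Python (the names are hypothetical, not any product’s API): judge a logon location against the user’s own history rather than against a fixed geolocation rule.

```python
# Hypothetical sketch: is a logon from this country "normal" for this
# user, judged against their own historical activity?
from collections import Counter

def location_profile(logons):
    """logons: iterable of (user, country) pairs from historical auth data."""
    profile = {}
    for user, country in logons:
        profile.setdefault(user, Counter())[country] += 1
    return profile

def logon_is_suspicious(profile, user, country, min_seen=3):
    """Flag a country this user has rarely or never logged on from;
    a frequent traveler's usual countries won't trip the check."""
    return profile.get(user, Counter())[country] < min_seen
```

A rule like “alert on foreign logons” flags every business trip; a baseline like this only flags what is unusual for that particular user.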

All of these questions are critical to turning the visibility this so-called “SIEM 2.0” provides into something that’s actually actionable. Can a “SIEM 2.0” technology answer them? We’ll see if they respond.

SIEM 1.0, 2.0 and, I suspect, 3.0 won’t help you answer these questions, because they limit their visibility to event-based information; other pieces of critical data – system state, network activity, performance metrics, and user behavior – are ignored. SIEM 1.0 has been making the same promises of better visibility for more than 10 years, so perhaps it was time for a refresh; but while the marketing might have been given a polish, you’ll still be left with the same blind spots in your environment that SIEM promised to fix more than a decade ago. What you need is better awareness… Situational Awareness.