Research and Development

The following is a braindump of an idea I had as a result of the work I have been doing on Portcullis’ STAR offering.

The question I set myself was: what testing could we perform, under our normal terms of engagement, that would help the “blue team” (i.e. the system administrators and developers) better understand what a real world attack actually looks like and whether their organisation is mature enough to deal with it?

Firstly, let us consider what the kill chain actually consists of. The original research from Lockheed Martin suggests that a real world attack is likely to consist of the following events:

  • Recon
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • C2
  • Actions on targets

If you consider how this approach works in practice, then the attacker will usually have a target system that performs a particular function for the business and which contains data considered by the attacker to be valuable. The attacker needs to compromise this system and exfiltrate the data back to a system under their control. At the very least, they will need to bypass any controls that are present during the initial attack and subsequent exfiltration. They may on occasion need to avoid attribution (post-attack forensic identification) and/or achieve persistence (long running access to either the target systems and/or an intermediary system that can be used to stage attacks). To pull off an actual compromise, an attacker will likely need to:

  • Compromise a trusted (typically Internet facing) host where a client-side (e.g. browser) exploit can be hosted
  • Compromise a Windows desktop/laptop system using this exploit
  • Escalate privileges horizontally or vertically with the ultimate aim of gaining privileged (Windows domain) access to the network as a whole
  • Locate the legitimate users and/or administrators of the target system
  • Utilise the previously gained privileged access to gain interactive access to the legitimate users’ and/or administrators’ Windows desktops/laptops
  • Utilise the legitimate user’s access to the target system to extract and then exfiltrate the desired data

How can we assess components of this approach in isolation? As we have alluded to, the purpose of a “red team” assessment is actually to assess the maturity of an organisation’s security controls. That is to say, we’re looking to see how/when the “blue team” detects, denies, disrupts, degrades, deceives and contains our attacks. Whilst the part played by the Intelligence Providers in a “red team” assessment cannot be overstated, in this instance, the organisation or project stakeholders will perform this role.

Clearly, we can’t always test all of these controls if we’re performing a non-“red team” assessment. For one, the scope of an average assessment won’t allow it. However, there is some room to move, provided we as the attacker are creative about how we consider the problem. The trickiest one for me is actions on targets. In a “red team” assessment this would be where the attacker pivots, moving from one host to another (often known as lateral movement). I can’t imagine many of our clients stomaching the idea that we might end up testing the mainframe if the original scope was an external marketing application. However, as can be seen below, there are still relevant metrics that can be used:

Infrastructure

  • Recon
    • Detect: Did you see my port scans?
    • Detect: Did you see my password guessing?
    • Deny: Was I blocked?
    • Degrade: Is account lockout enabled?
  • Weaponisation
    • Detect: Did you see any attempts at exploitation?
    • Deny: Could you have reacted?
  • Exploitation
    • Disrupt: Were my exploits successful?
    • Detect: Did you notice which user account(s) I compromised?
    • Deny: Were any of the services patched?
  • Installation
    • Disrupt: Could I dump hashes?
    • Disrupt: Could I use well known post-exploitation tools?
    • Detect: Did you notice to which hosts and domain groups I added user accounts? Did you notice when I created new accounts?
  • C2
    • Detect: Did you see me trying to connect to the Internet?
    • Deny: Did you prevent me accessing hacking tools on the Internet?
    • Deny: Am I blocked?
    • Deny: By IP?
    • Deny: By DNS?
    • Deny: By SMTP?
  • Actions on targets
    • Detect: Any idea what I got access to?
    • Detect: Did you see me run sudo?
    • Detect: What are you actually logging?
    • Contain: Where else are these credentials being used?
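
Several of the infrastructure metrics above can be verified from the “blue team” side with very little tooling. As a minimal sketch of the password guessing question (the log lines, addresses and threshold below are illustrative assumptions, not any specific product’s format), failed logins can be grouped by source so that a guessing host stands out from one-off failures:

```python
from collections import Counter

# Hypothetical auth log excerpt; the format is an assumption for illustration
log_lines = [
    "Failed password for root from 203.0.113.5",
    "Failed password for admin from 203.0.113.5",
    "Failed password for backup from 203.0.113.5",
    "Failed password for alice from 198.51.100.7",
]

THRESHOLD = 3  # alert once a single source accumulates this many failures

# Count failures per source address
failures = Counter(
    line.rsplit("from ", 1)[1] for line in log_lines if "Failed password" in line
)
suspects = [ip for ip, count in failures.items() if count >= THRESHOLD]
print(suspects)  # → ['203.0.113.5']
```

If the “blue team” cannot produce an equivalent answer from their own logging, that is itself a finding against the Detect metric.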

Web

  • Recon
    • Detect: Did you see my application scans (e.g. spidering)?
    • Deny: Was I blocked?
  • Weaponisation
    • Detect: Did you see any attempts at exploitation?
    • Deny: Could you have reacted?
  • Delivery
    • Disrupt: Was there a WAF/other filters?
  • Exploitation
    • Patch: Were any of the known vulnerabilities patched?
    • Disrupt: Were my exploits successful?
    • Detect: Did you see me running bruteforce attacks (including but not limited to password guessing)?
  • Installation
    • Disrupt: Could I dump hashes?
  • C2
    • Detect: Did you see me trying to connect to the Internet?
    • Deny: Am I blocked?
    • Deny: By IP?
    • Deny: By DNS?
    • Deny: By SMTP?
  • Actions on targets
    • Detect: Any idea what I got access to?
    • Detect: Did you see me run sudo?
    • Detect: What are you actually logging?
    • Contain: To what extent did your technical security controls stop me from more fully exploiting vulnerabilities such as SQL injection?
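
The web recon question (“did you see my application scans?”) can be approached the same way. A sketch, assuming a simplified access log reduced to (source IP, status code) pairs, that flags sources generating an unusually high proportion of 404s, as spiders and forced-browsing scanners tend to do (the entries and the 50% cut-off are illustrative assumptions):

```python
from collections import defaultdict

# Simplified access log entries: (source_ip, status_code); values are examples only
requests = [
    ("203.0.113.5", 404), ("203.0.113.5", 404), ("203.0.113.5", 404),
    ("203.0.113.5", 200), ("198.51.100.7", 200), ("198.51.100.7", 200),
]

stats = defaultdict(lambda: [0, 0])  # ip -> [total requests, 404 responses]
for ip, status in requests:
    stats[ip][0] += 1
    if status == 404:
        stats[ip][1] += 1

# Flag sources where over half of all requests miss: typical of forced browsing
scanners = [ip for ip, (total, misses) in stats.items() if misses / total > 0.5]
print(scanners)  # → ['203.0.113.5']
```

A WAF or log pipeline that cannot surface this pattern suggests the Detect control for web recon is weak.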

Build

  • Exploitation
    • Deny: Were any of the services patched?
    • Disrupt: Were anti-exploitation mitigations enabled?
    • Detect: Did you notice that I uploaded the EICAR test file?
    • For workstations: Deny/Disrupt/Detect: Mail/web filtering checks like Cyber Essentials
    • For workstations: Deny: State of browser and plugin security
  • Installation
    • Disrupt: Could I dump hashes?
    • Detect: Could I use well known post-exploitation tools during my audit?
  • C2
    • Detect: Did you see me trying to connect to the Internet?
    • Deny: Am I blocked?
    • Deny: By IP?
    • Deny: By DNS?
    • Deny: By SMTP?
  • Actions on targets
    • Detect: Did you see me run sudo?
    • Detect: What are you actually logging?
    • Detect: Did your integrity monitoring show that I’d changed anything?
    • Detect: Are you using a remote log server?
    • Detect: Can I disable your logging? (I’m thinking auditd’s “you can’t turn me off” option)
    • Contain: Where else are these credentials being used?

So, what do you notice? To me, whilst I’ve only focused on 3 common assessment types, there are still some obvious patterns that stand out, notably the shared nature of the metrics. What I’m getting at is that they could be examined on almost any kind of assessment and conclusions drawn from whatever results are found. It’s worth noting that these activities are already likely to be performed against organisations as part of standard penetration assessments, but that we testers are not explicitly and systematically measuring the effectiveness of our approaches and/or the likelihood of detection, nor are we communicating this need back to our opposing “blue teams”.

Should you wish to embark on this change of approach, then the first step is to ensure that testing methodologies are updated to account for these new questions. It’s all well and good wanting answers to these questions, but unless you go looking for them, you’ll be stuck come reporting time. Some will be as simple as adding a new command to your list of checks, but in other cases, you may need to persuade your client of the need for the information. For example, whilst you might be lucky enough to have an administrator login that allows you to see the application’s audit trail, this probably won’t give you the capability to see logs from the web servers themselves.

Consider the fact that not all logs will necessarily be on the system under assessment. If you don’t get buy-in from the “blue team”, you’re unlikely to have access to whatever logs have been pulled, nor to any analysis that the “blue team” may have performed. Ultimately, for a tester attempting to answer these questions, it may therefore be necessary to carry out post-assessment questionnaires and incorporate the results of these into the final report. One idea I did have was to provide dummy Indicators of Compromise (IOCs) based on the “red team’s” activities, along with an executive level recommendation that existing controls are reviewed for the presence of this evidence. This helps to tackle cases where the “blue team” is unwilling or unable to complete a post-assessment questionnaire.
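
Such a dummy IOC handover need not be elaborate. A sketch of the sort of structure that could be derived from the tester’s own activity log (every value below is a placeholder example invented for illustration, not a real indicator):

```python
import json
from datetime import datetime, timezone

# Placeholder indicators drawn from the tester's activity; values are examples only
iocs = [
    {"type": "ipv4", "value": "198.51.100.23",
     "note": "C2 callback address used during the test"},
    {"type": "filename", "value": "svchosts.exe",
     "note": "post-exploitation tool dropped on compromised host"},
    {"type": "sha256",
     "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
     "note": "hash of dropped tool (placeholder value)"},
]

handover = {
    "generated": datetime.now(timezone.utc).isoformat(),
    "recommendation": "Review existing controls and logs for this evidence",
    "indicators": iocs,
}
print(json.dumps(handover, indent=2))
```

Handing this over alongside the report gives the “blue team” something concrete to hunt for, even if they never complete a questionnaire.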

Another obvious challenge in structuring assessments in this fashion is ensuring that reports present the information in a manner that empowers the “blue team” to make informed changes. Currently we (and I’m sure other test houses) align our results against CWE. This makes sense for conventional tests where the “blue team” are only concerned with specific vulnerabilities and weaknesses. However, testing against the kill chain leads to the identification of broader architectural issues for which CWE would be inappropriate. I would suggest that instead (or in addition), it will be necessary to explicitly reference the aforementioned metrics for each stage of testing and draw out those places where controls are deemed insufficient.

This raises a final interesting philosophical point for testers. We’re often loath to endorse the idea of controls such as WAFs, instead making the argument that the underlying vulnerabilities ought to be resolved. Whilst this remains true, testing against the kill chain should encourage (as in real life) a greater degree of diversity in how and where security controls are deployed. Even if individual controls, be they anti-virus scanners or deep packet inspection, do not improve significantly (another common gripe of security testers), the increased diversity will in itself increase the costs of compromise for real world adversaries.

