Portcullis Labs
https://labs.portcullis.co.uk
Research and Development

Secrets of the motherboard
https://labs.portcullis.co.uk/presentations/secrets-of-the-motherboard/
16 February 2018

Presentation on “interesting” features of the Intel x86[_64] platform (as given at 44CON 2017).

A lot of recent work has gone into the discovery, analysis, and (on occasion) marketing of hardware weaknesses in the Intel x86[_64] platform, particularly with respect to how it is often implemented as part of specific motherboard designs. Some issues, such as the recent speculative execution attacks, lie in the architecture itself. Others, however, affect individual implementations. This talk will take a wide-coverage “state of play” look at x86[_64] platform security, covering:

  • Architectural failings in hardware design
  • Identifying security issues with modern computer hardware (treat it just like IoT devices!)
  • Attempts at restoring privacy, ownership, and security
  • Code and data persistence
  • How secure hardware can be re-used
44CSOTM.pptx (5.7 MiB, 16 February 2018)
MD5: 912badf9570eef6597578674e52bbb9d

Enforcing a write-xor-execute memory policy from usermode
https://labs.portcullis.co.uk/blog/enforcing-a-write-xor-execute-memory-policy-from-usermode/
2 February 2018

If BuzzFeed ran an article titled “26 Security Features You Probably Shouldn’t Enforce From Usermode”, this one would almost certainly make the list. But, for whatever reason, I thought it would be a fun learning experience to try to enforce a W^X memory policy from usermode. Some of you are probably asking what the heck a W^X policy is in the first place, and I’m terrible at thinking of ways to start blog posts (case in point: this paragraph), so I guess we’ll start out there.

What’s a W^X policy, anyway?

W^X is an exploit mitigation tactic in which memory pages that are, or have ever been, marked as writable can never be marked as executable for the lifetime of the process. The old exploit tactic of putting your payload on the stack (or heap) and calling it directly was killed off by no-execute (NX, also known as hardware DEP on Windows) support, which made ret2libc/ROP approaches much more popular. ROP involves finding small pieces of existing executable code in the application and its libraries and chaining them together using the stack, usually with the goal of calling an API or two to allocate some executable memory into which the payload can be copied. On Windows this is usually done with a ROP chain to the VirtualAlloc() API, passing PAGE_EXECUTE_READWRITE in order to allow for both writing the data in and executing it afterwards.

Enforcing a W^X policy breaks this approach, as an exploit cannot allocate memory as RWX, or as RW and then later make it executable. Applications on Windows 8.1 and later can opt into a kernel-enforced W^X policy by calling the SetProcessMitigationPolicy() API with the ProcessDynamicCodePolicy argument. Of course, this is also the boring way (at least for this article).
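
For reference, the supported opt-in looks roughly like this. A minimal sketch, assuming Windows 8.1 or later; it shows the documented API rather than anything from the PoC:

#include <windows.h>

int main()
{
    // Kernel-enforced W^X: once ProhibitDynamicCode is set, attempts to
    // create executable memory, or to make existing memory executable,
    // fail for the remainder of the process lifetime.
    PROCESS_MITIGATION_DYNAMIC_CODE_POLICY policy = {};
    policy.ProhibitDynamicCode = 1;

    if (!SetProcessMitigationPolicy(ProcessDynamicCodePolicy,
                                    &policy, sizeof(policy))) {
        return 1; // GetLastError() describes the failure
    }

    // From here on, VirtualAlloc(..., PAGE_EXECUTE_READWRITE) and friends
    // are refused by the kernel.
    return 0;
}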

The small print

I’m not going to make you read a sixty-eight page EULA and sign your life away on the dotted line, but there are things you should know before you gallivant away with some source code and a dream of securing your applications:

  1. I am a terrible C++ programmer. You should absolutely not use my code in production
  2. This is a proof-of-concept, so you still absolutely should not use it in production, nor in any context other than “I want to learn how this works” or “I want to torture my eyes by reading wonky code”
  3. While some effort has been made to make the PoC thread-safe, some race conditions (probably security-critical ones) remain; I haven't fixed them, in order to keep the code fairly simple
  4. Only VirtualAlloc(), VirtualProtect(), and VirtualFree() are hooked. There are ways to get around this (e.g. calling `ntdll` functions directly, or using the `Ex` suffix variants) so, again, don't expect any concrete security from this
  5. In case you didn't already get the memo, implementing this kind of security feature from usermode is colossally silly, particularly when the OS offers a proper version that is enforced in the kernel. An attacker who expects this usermode “protection” can tailor their exploit to bypass it in most cases
  6. Just like the kernelmode version, this breaks any application that uses JIT compilation. That means all browsers, anything that uses Java, .NET, or a modern JavaScript engine, and anything that embeds a web frame

Caveat emptor, and all that.

How does this thing work?

The actual approach is fairly simple:

  1. Hook APIs
  2. Reject calls that would result in a page being writable and executable at the same time
  3. Track calls that result in a page being writable, and deny future calls that would make those pages executable

The grunt work involved with hooking APIs is fairly boring, so I enlisted the help of the mhook library by Marton Anka. This library provides a really intuitive way of hooking APIs:

Mhook_SetHook((PVOID*)&OriginalVirtualAlloc,   HookedVirtualAlloc);
Mhook_SetHook((PVOID*)&OriginalVirtualProtect, HookedVirtualProtect);
Mhook_SetHook((PVOID*)&OriginalVirtualFree,    HookedVirtualFree);

Each call to Mhook_SetHook() takes a pointer to a function pointer holding the original API's address as its first parameter, and the hooked version you want to replace it with as its second. After the call, the Original* pointer is updated so that it can still be used to reach the unhooked implementation.
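
For context, the Original* pointers in that snippet start life pointing at the real APIs. A sketch of the declarations behind it (the typedefs are mine, mirroring the Win32 prototypes; the variable names follow the snippet above):

#include <windows.h>

// Function pointer types mirroring the real Win32 prototypes.
typedef LPVOID (WINAPI *VirtualAlloc_t)(LPVOID, SIZE_T, DWORD, DWORD);
typedef BOOL   (WINAPI *VirtualProtect_t)(LPVOID, SIZE_T, DWORD, PDWORD);
typedef BOOL   (WINAPI *VirtualFree_t)(LPVOID, SIZE_T, DWORD);

// Initialised to the real APIs; after Mhook_SetHook() succeeds, each one
// points at a trampoline that reaches the original, unhooked code.
VirtualAlloc_t   OriginalVirtualAlloc   = VirtualAlloc;
VirtualProtect_t OriginalVirtualProtect = VirtualProtect;
VirtualFree_t    OriginalVirtualFree    = VirtualFree;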

VirtualAlloc hook

The VirtualAlloc hook checks whether flProtect is either PAGE_EXECUTE_READWRITE or PAGE_EXECUTE_WRITECOPY. The former is the general-case RWX protection, and the latter is used when the memory is backed by a memory-mapped file. If either of these protections is requested, the operation is failed with an access denied error.

Next, we perform the requested VirtualAlloc() call via OriginalVirtualAlloc. If this succeeds, we check to see if the requested allocation contained a writable flag (e.g. PAGE_READWRITE or PAGE_WRITECOPY) and, if so, add that allocation’s page address and allocation size to a tracking list. This allows us to later reject requests to make these pages executable, as they have been tainted with the writable mark.
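
Put together, the hook looks something like this. A simplified sketch rather than the PoC's exact code; TrackWritableRegion() is a hypothetical stand-in for the tracking-list update, and locking is elided:

// Hypothetical helper: records [base, base+size) as tainted-writable.
void TrackWritableRegion(LPVOID base, SIZE_T size);

LPVOID WINAPI HookedVirtualAlloc(LPVOID lpAddress, SIZE_T dwSize,
                                 DWORD flAllocationType, DWORD flProtect)
{
    // Refuse any allocation that would be writable and executable at once.
    if (flProtect == PAGE_EXECUTE_READWRITE ||
        flProtect == PAGE_EXECUTE_WRITECOPY) {
        SetLastError(ERROR_ACCESS_DENIED);
        return NULL;
    }

    // Perform the real allocation via the unhooked API.
    LPVOID result = OriginalVirtualAlloc(lpAddress, dwSize,
                                         flAllocationType, flProtect);

    // Remember writable allocations so later attempts to make them
    // executable can be refused.
    if (result != NULL &&
        (flProtect == PAGE_READWRITE || flProtect == PAGE_WRITECOPY)) {
        TrackWritableRegion(result, dwSize);
    }
    return result;
}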

VirtualProtect hook

The VirtualProtect hook is the most involved. As with VirtualAlloc, it first rejects RWX protections outright. It then checks whether the requested protection is executable and, if so, whether the page range falls within the boundary of a tracked writable allocation, i.e. whether it starts within one, ends within one, or starts before and ends after one. This prevents tricks like allocating a small chunk of writable memory inside a larger read-only block, then calling VirtualProtect() over the whole block to make it all executable.

In order to protect against abuse of writable memory that was pre-allocated by the loader (e.g. the .data section), the code also calls VirtualQuery() to test the existing protection status of the memory, just in case we aren't tracking it.

Another case we need to handle is similar to the VirtualAlloc() call. If the call is making memory writable, we need to track it. First we check if the exact allocation is already present in our tracked list, then if it isn’t we add it. It doesn’t matter if we have overlapping tracking metadata for writable allocations – we handle this case in our hooked VirtualFree(). Speaking of which…
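
Putting those checks together, a simplified sketch of the hook; the helpers are hypothetical names for the PoC's protection tests and tracking list, and locking is again elided:

// Hypothetical helpers:
bool IsExecutableProtection(DWORD protect);
bool OverlapsTrackedWritableRegion(LPVOID base, SIZE_T size);
void TrackWritableRegion(LPVOID base, SIZE_T size);

BOOL WINAPI HookedVirtualProtect(LPVOID lpAddress, SIZE_T dwSize,
                                 DWORD flNewProtect, PDWORD lpflOldProtect)
{
    // Reject write+execute outright, exactly as in the VirtualAlloc hook.
    if (flNewProtect == PAGE_EXECUTE_READWRITE ||
        flNewProtect == PAGE_EXECUTE_WRITECOPY) {
        SetLastError(ERROR_ACCESS_DENIED);
        return FALSE;
    }

    if (IsExecutableProtection(flNewProtect)) {
        // Deny if any part of the range was ever tracked as writable.
        if (OverlapsTrackedWritableRegion(lpAddress, dwSize)) {
            SetLastError(ERROR_ACCESS_DENIED);
            return FALSE;
        }
        // Also deny if the memory is currently writable but untracked,
        // e.g. loader-allocated regions such as the .data section.
        MEMORY_BASIC_INFORMATION mbi = {};
        if (VirtualQuery(lpAddress, &mbi, sizeof(mbi)) &&
            (mbi.Protect == PAGE_READWRITE || mbi.Protect == PAGE_WRITECOPY)) {
            SetLastError(ERROR_ACCESS_DENIED);
            return FALSE;
        }
    }

    BOOL ok = OriginalVirtualProtect(lpAddress, dwSize,
                                     flNewProtect, lpflOldProtect);

    // Making memory writable taints it for the rest of the process lifetime.
    if (ok &&
        (flNewProtect == PAGE_READWRITE || flNewProtect == PAGE_WRITECOPY)) {
        TrackWritableRegion(lpAddress, dwSize);
    }
    return ok;
}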

VirtualFree hook

This hook is fairly simple. We just iterate over every item in the tracked allocations and remove them if they cover the address being freed.
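
A sketch, with UntrackRegionsCovering() standing in for the removal walk over the tracking list:

// Hypothetical helper: removes every tracking entry covering 'base'.
void UntrackRegionsCovering(LPVOID base);

BOOL WINAPI HookedVirtualFree(LPVOID lpAddress, SIZE_T dwSize, DWORD dwFreeType)
{
    BOOL ok = OriginalVirtualFree(lpAddress, dwSize, dwFreeType);
    if (ok) {
        // Freed memory is no longer tainted; drop all entries covering it,
        // including any overlapping ones added by the other hooks.
        UntrackRegionsCovering(lpAddress);
    }
    return ok;
}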

Testing

The initial driver for writing this code, before I decided to implement a full W^X policy with it, was to test whether an application under test attempted to allocate RWX buffers, and whether it actually needed those buffers to be executable (i.e. swap RWX for RW and see if you get a crash). For fun, I injected this DLL into a bunch of different programs. Many (e.g. notepad, calc) just work without problems, as they don't rely on RWX memory at all. A number of others (e.g. Chrome, Spotify) crash due to JIT code that runs inside the process. It was quite fun to watch these allocations occur in real time via debug messages.

Bypasses

There are a number of ways to bypass the PoC as it stands. I thought about eliminating them, but I think it’s more fun to go through the code and identify the problems.

The first and most obvious way is to ROP to GetModuleHandle() and find the original APIs that way, totally bypassing the checks. It is possible to fix this to some extent by hooking GetModuleHandle() and similar APIs, but this mostly ends up as a cat-and-mouse game. This is why you should implement this stuff in kernelmode.

The second way is a race condition. In both the VirtualAlloc and VirtualProtect hooks we call the original function, then lock the tracking list and add the new allocation to it. If two threads call these functions concurrently, the original call and the tracking update can interleave, leaving the tracking list out of step with the real page protections. This can be fixed with a global allocation mutex.
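
A minimal sketch of that fix, shown for the VirtualAlloc hook only (policy checks elided; the same guard would wrap the other hooks):

#include <mutex>

static std::mutex g_allocationLock; // one lock shared by all three hooks

LPVOID WINAPI HookedVirtualAlloc(LPVOID lpAddress, SIZE_T dwSize,
                                 DWORD flAllocationType, DWORD flProtect)
{
    // Holding the lock across the original call and the tracking update
    // makes the pair atomic with respect to the other hooked functions.
    std::lock_guard<std::mutex> guard(g_allocationLock);

    LPVOID result = OriginalVirtualAlloc(lpAddress, dwSize,
                                         flAllocationType, flProtect);
    if (result != NULL &&
        (flProtect == PAGE_READWRITE || flProtect == PAGE_WRITECOPY)) {
        TrackWritableRegion(result, dwSize); // hypothetical helper, as before
    }
    return result;
}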

There’s also a potential TOCTOU race condition in our VirtualProtect hook, where we check the page protection using VirtualQuery() and later call the original VirtualProtect() based on the result. However, an attacker would have to get the application to call the unhooked VirtualProtect() in order to exploit this particular issue.

Finally there’s a really interesting case – marking a page as writable, filling it with data, freeing it, then re-allocating as read-execute and hoping that you get the same page back before it gets reset to zeroes by the OS. In fact, when I thought of this issue, I wondered whether I might have stumbled across a potential mitigation bypass in the real W^X implementation for Windows, and my eyes turned into dollar signs. Thankfully (or sadly) the clever folks at Microsoft thought of this already, and forced every released page to be reset.

Closing words

I hope that this ham-fisted approach to implementing W^X has been of some educational use, at least in terms of thinking about how the protection can be implemented in practice. If you’d like to follow along at home, the code can be found in the WXPolicyEnforcer project on the Portcullis Labs GitHub. It is released under MIT license.

SSL/TLS Hipsterism
https://labs.portcullis.co.uk/presentations/ssltls-hipsterism/
17 November 2017

Presentation on finding implementation* bugs outside the mainstream (as given at Securi-Tay 2017).

A lot of fantastic work has gone into the discovery, analysis, and (on occasion) marketing of SSL/TLS vulnerabilities. Some, such as BEAST and LUCKY13, are issues in the protocol itself. Other bugs, however, affect individual implementations of this complicated and nuanced protocol. This talk will discuss an approach for identifying security bugs in SSL/TLS server implementations, outside the mainstream well-publicised issues that we all know so well.

Tools referenced in this talk include:

STHST.pptx (1.0 MiB, 16 November 2017)
MD5: 503a77150111d59a0352c27a62195c4c

POODLE: Padding Oracle On Downgraded Legacy Encryption
https://labs.portcullis.co.uk/blog/poodle-padding-oracle-on-downgraded-legacy-encryption/
15 October 2014

Last night, researchers from Google released details of a new attack that they have called the Padding Oracle On Downgraded Legacy Encryption (POODLE) attack, which has been assigned CVE-2014-3566.

The summary is, essentially, that SSLv3 uses a MAC-then-encrypt construction in which the MAC is computed over the plaintext before padding and encryption are applied, so the padding itself is never authenticated. This gives rise to a padding oracle bug in SSLv3’s CBC cipher suites – the same family of CBC weaknesses that attacks like BEAST and Lucky 13 also exploited.

Block ciphers require plaintexts whose length is a multiple of a fixed block size, e.g. 128 bits in the case of AES. As normal messages can be arbitrary in size, we use padding to expand the message to fit that requirement. Padding usually adds a minimum of 1 byte and a maximum of one entire block, and its value is usually tied to the length of the padding; in TLS, for example, we would add 01 01 for two bytes of padding, 02 02 02 for three bytes, 03 03 03 03 for four bytes, etc. However, instead of checking that all the padding bytes match, the SSLv3 specification states that only the last byte (the padding length) is to be validated. In some cases, in fact, client libraries mistakenly set the earlier padding bytes improperly – Oracle’s implementation has, in the past, used zeroes for all bytes but the last.
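
To make the difference concrete, here is a rough sketch of the two validation styles (not taken from any real TLS stack; `block` is the final plaintext block after CBC decryption):

#include <cstdint>
#include <cstddef>

// TLS-style check: the last byte gives the padding length, and every
// padding byte must carry that same value.
bool tls_padding_valid(const uint8_t *block, size_t blockLen)
{
    uint8_t padLen = block[blockLen - 1];      // e.g. 01 01 -> padLen = 1
    if (padLen + 1u > blockLen) return false;  // too long (single-block case)
    for (size_t i = blockLen - 1 - padLen; i < blockLen; i++)
        if (block[i] != padLen) return false;  // all bytes are validated
    return true;
}

// SSLv3-style check: only the length byte is examined; the padding
// content is arbitrary. This is the laxness POODLE exploits.
bool sslv3_padding_valid(const uint8_t *block, size_t blockLen)
{
    uint8_t padLen = block[blockLen - 1];
    return padLen + 1u <= blockLen;            // nothing else is checked
}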

Another requirement that block ciphers don’t provide on their own is securely encrypting more than a single block with the same key. This is important, because simply transforming each block independently with the cipher (known as Electronic Codebook, or ECB, mode) results in equal plaintext blocks encrypting to equal ciphertext blocks, which is bad news because this leaks information! Instead, we use different constructions to ensure safety when encrypting multiple blocks – Cipher Block Chaining (CBC) is a very common one.

In CBC, we take each previous cipher-text block and xor it with the current plaintext block before encryption. So for block 4, you take the encrypted block 3, xor it with the block 4 plaintext, then encrypt that value. The first block (block 0) has no previous cipher-text block, so we use an Initialisation Vector (IV). The IV is important, because it allows us to securely send the same full messages with the same key without their entire resulting cipher-texts being equal, as long as we don’t ever re-use an IV with the same key. Anyway, that detail isn’t important for this bug.

To decrypt a CBC message, we decrypt a block, then xor the output with the previous block’s cipher-text, i.e.:

M_n = D_k(C_n) ⊕ C_{n-1}, where M is the message, D_k is block decryption with key k, C is the ciphertext, and ⊕ denotes xor.

It turns out that this construction is malleable, meaning that an attacker can modify the cipher-text in a way that causes meaningful things to happen to the plaintext when it gets decrypted. This has been known about for a long time, and underpins many modern SSL/TLS bugs, as well as bugs in other crypto-systems.

Essentially, if you modify a block by xor’ing it with some value, the next block’s plaintext gets xor’ed with that value at the cost of the tweaked block being completely garbled. So if we want to attack block 4, we’d xor C_3 with a value t, so that the decryption becomes:

D_k(C_4) ⊕ (C_3 ⊕ t) = M_4 ⊕ t

This is really useful if you know any of the values of M_4, because you can use xor to arbitrarily alter any value you know. This has a side-effect, though. Because C_3 is now C_3 ⊕ t, C_3 itself decrypts to garbage, destroying that block entirely. This is fine in some cases, for example if your target application doesn’t check block authenticity and doesn’t read (or doesn’t care about) the data in the garbled block. Another trick in this avenue is to change the IV instead, so that:

M_0 = D_k(C_0) ⊕ IV

becomes

D_k(C_0) ⊕ (IV ⊕ t) = M_0 ⊕ t

This doesn’t have the block-garbling side effect, but you can only do it on the first block. This is particularly useful in systems where cipher-text blocks are authenticated but the IV isn’t.

Anyway, you’re probably wondering what this has to do with POODLE. Remember all that padding stuff? Turns out that you can use that to work out the values of cookies.

An attacker gets the victim to visit a page they control. That page includes JavaScript that repeatedly makes requests to the target site whose cookies we want to steal. The request body will look something like this:

POST /url\r\nCookie: name=value\r\n ... \r\n\r\nbody

Which after padding and MAC looks like this:

POST /url\r\nCookie: name=value\r\n ... \r\n\r\nbody{20-byte MAC}{pad}

The attacker controls the URL and the request body. This allows them to force the cookie into a particular position in the request. Specifically, the attacker sends requests with different length URLs and body data until the observed encrypted message grows by one block, with an earlier block having one unknown byte of cookie as its last byte. This sounds hard, but it’s as simple as knowing that the Cookie header is in a certain position, tweaking the URL to block-align it, then sending a maximum of 16 requests with incrementing POST body sizes until the output grows by one block.

The attacker knows that the final byte of the padding block will have a decimal value of 15 (i.e. 0x0F), because of the known padding size. They can then use the CBC malleability issue discussed above to replace the padding block with the target block (the one containing one byte of cookie at its end). Most of the time this will be rejected as invalid padding, but 1 in 256 times (on average) it will be accepted, because the decrypted last byte happens to be 15. This probability arises because a different key and IV are used each time, so the ciphertext is different (effectively random) each time; when the substituted last block is decrypted, it is xor’ed with the previous ciphertext block and so has a chance of randomly producing 15 as the last byte.

That’s a bit of a challenge to follow, so let’s look at it specifically in terms of the decryption:

M_n = D_k(C_n) ⊕ C_{n-1}

where C_n, the final block (containing the padding), is replaced with C_i:

M_n = D_k(C_i) ⊕ C_{n-1}

Because C_i wasn’t intended to be xor’ed with C_{n-1}, the output is garbage. However, because C_{n-1} is random, in roughly 1/256 cases the last byte of output will be 15, leading the padding to be falsely accepted as valid. When this occurs, the attacker can deduce that:

D_k(C_i)[15] ⊕ C_{n-1}[15] = 15

Which, by doing a bit of bitwise algebra, gets us some plaintext:

M_i[15] = 15 ⊕ C_{n-1}[15] ⊕ C_{i-1}[15]

The above arises because C_i was xor’ed with C_{i-1} when the CBC encryption occurred.

The C_{n-1} and C_{i-1} values are known to the attacker, so they just decrypted a single byte of the message, without knowing the key!

The attack requirements are:

  • Attacker can get the client to send HTTPS requests (easy via JS)
  • Attacker is in a position to modify client traffic (“Man-In-The-Middle”)
  • Connection uses a block cipher suite from SSLv3

Now that last one is interesting. SSLv3 is still very widely supported by servers, but TLS is usually offered too and clients should prefer it. However, due to all sorts of client/server compatibility issues, if a connection fails to properly negotiate TLS, many clients will downgrade to SSLv3. All an attacker needs to do is sit in the middle and interfere with the handshake so that the client assumes there’s an incompatibility and renegotiates down to SSLv3.

There are two fixes. The first is to just turn off SSLv3, or at least disable CBC cipher suites in SSLv3 (but that leads to other problems, so it isn’t recommended). There is also a client-side fix for the downgrade, which is to send the TLS_FALLBACK_SCSV signalling cipher suite in the client hello when retrying a downgraded connection – this is recommended in the original Google article. However, organisations should not rely upon all clients having this fix, as some clients will still downgrade to SSLv3 for other reasons.

Our SSL Good Practice Guide has been recommending that SSLv3 be disabled for some time now, which prevents this attack. We’ve also updated the SSL cipher suite enum tool to include a check for POODLE, so you can test your configuration.

Windows System Objects and Sophos Endpoint Security
https://labs.portcullis.co.uk/blog/windows-system-objects-and-sophos-endpoint-security/
3 February 2014

Windows system objects are one of the interesting areas of binary application assessments that are often ignored or misunderstood. Many people don’t realise that abstract Windows application programming concepts such as mutexes, events, semaphores, shared memory sections, and jobs all come together under the purview of the Windows Object Manager. These objects, like those in the filesystem and registry namespaces, have all sorts of interesting security impacts when not properly managed.

This blog post relates to an advisory. See CVE-2014-1213: Denial of Service in Sophos Anti-Virus for the release.

One of the major differences between the system object namespace and the filesystem and registry namespaces is the concept of a default Discretionary Access Control List (DACL). These DACLs are the cornerstone of the Windows security model, and are used to describe which entities (users, groups, etc.) have specific types of access to an object. When you view the permissions on a file or directory, you’re looking at a direct representation of the DACL for that object. Each rule within a DACL is called an Access Control Entry (ACE). When an object in any namespace is created and the application does not explicitly provide a DACL, the system looks at the parent container to see if it has any ACEs within its DACL that are marked as inheritable. If it finds some, it applies them across into a new DACL for the newly created object. There are special rules around inheritance for containers, but we won’t get into that here. If there are no inheritable ACEs, it resorts to applying the default DACL for the namespace.

This is where things get interesting from a security perspective; the system object namespace, in contrast with the registry and filesystem namespaces, has no default DACL. In this situation, the object ends up with a null DACL, which allows everyone full access to it.

This is a corner-case that many developers fall foul of. Objects created in the local container (i.e. the system object container for the current session) inherit some ACEs from the session container, but the global container has no inheritable ACEs, and therefore objects created within it without an explicit DACL will end up with a null DACL. We can see this in action by viewing the DACLs applied to the global and session containers, using a tool such as WinObj:

[Screenshot: DACL applied to the session container in the Windows system object namespace.]

[Screenshot: DACL applied to the global container in the Windows system object namespace.]

Notice that all the ACEs in the global container are marked as “Inherit None”, meaning that child objects will not inherit them as part of their DACL. As such, if you create a system object such as a mutex or an event through the usual CreateMutex or CreateEvent API calls, and fail to explicitly provide a DACL, all users on the system will have unrestricted access to that object.
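
The fix is simply to supply an explicit DACL through SECURITY_ATTRIBUTES when creating the object. A minimal sketch using an SDDL string (the object name and the chosen grants are illustrative; here only SYSTEM and Administrators get access):

#include <windows.h>
#include <sddl.h>

HANDLE CreateLockedDownMutex(void)
{
    // SDDL: grant generic-all to SYSTEM (SY) and Administrators (BA) only.
    PSECURITY_DESCRIPTOR sd = NULL;
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
            L"D:(A;;GA;;;SY)(A;;GA;;;BA)", SDDL_REVISION_1, &sd, NULL)) {
        return NULL;
    }

    SECURITY_ATTRIBUTES sa = { sizeof(sa), sd, FALSE };

    // The explicit DACL stops the null default from being applied.
    HANDLE hMutex = CreateMutexW(&sa, FALSE, L"Global\\ExampleScanLock");
    LocalFree(sd);
    return hMutex;
}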

Whilst digging into security issues around this common mistake, I found a number of vulnerabilities in a range of products. In general the impacts of being able to mess with these were low, usually causing the affected application to lock up or stop working in some way. In Sophos Endpoint Security, however, the impact was more interesting. Most anti-malware software consists of three major sections: a user-facing GUI for controlling and monitoring the product, a high privilege user-mode service for performing various scanning features, and one or more kernel-mode modules (commonly referred to as drivers) that provide filesystem filters, notification of new threads and processes, low-level memory access, hook detection, and other kernel-level functionality. Communicating quickly and reliably between these components is a daunting task, especially when your messages have to traverse across the user-mode / kernel-mode barrier. Enter global system objects. Mutexes, events, semaphores, and shared memory sections in the global container of the system object namespace are all directly accessible from both user-mode and kernel-mode. When combined properly, these object types allow a developer to create an inter-process communications framework that is fast, reliable, and thread-safe.

One example of this might be a feature where a filesystem filter driver needs to notify the user-mode service that new data has been written to disk, so that it can scan it. Three named objects – an event, a mutex, and a shared memory section – are created within the global namespace, so that both components can access them. The event is used to signal that a write operation is pending, the mutex is used to ensure that the shared memory section is accessed by only one thread at a time, and the shared memory section is used to hold information about the event. The whole process is rather complex, and is best described in a diagram:

[Diagram: an example IPC mechanism between a user-mode AV service and a kernel-mode file system driver.]

As you can see, the user-mode service is responsible for checking the write operations before they are allowed. The decision is passed back to the driver, which either completes the write or rejects it, issuing an appropriate error code.
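
In code, creating the three shared objects from the diagram might look like this (the names are illustrative, not Sophos’s). Note that passing NULL security attributes in the global namespace is exactly the mistake discussed above:

#include <windows.h>

// NULL security attributes: in the global container these objects get a
// null DACL, so any local user can wait on, signal, or map them.
HANDLE hWritePending = CreateEventW(NULL, FALSE, FALSE,
                                    L"Global\\ExampleWritePending");
HANDLE hSectionLock  = CreateMutexW(NULL, FALSE,
                                    L"Global\\ExampleSectionLock");
HANDLE hSection      = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                          PAGE_READWRITE, 0, 4096,
                                          L"Global\\ExampleWriteInfo");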

Now, imagine you let a low-privilege user interact with these objects. For one, they may be able to wait on the event object themselves and modify the shared memory section via a race condition. This can be somewhat mitigated by various integrity checks, but isn’t outside the realms of possibility. Another issue is that all of these components modify their state, and in some cases block execution, when the event and mutex objects are waited upon or signalled. Imagine that a malicious local user acquires the mutex, then signals the event. The user-mode service continues execution (step 7) and attempts to acquire the mutex (step 8), but since the malicious user has already acquired it, the service thread is now blocked. From this point on, the driver’s calls to have write operations checked go unheeded. Although the architecture is not identical, this is precisely the mechanism by which Sophos Endpoint Security failed.

As the advisory describes, CVE-2014-1213 relates to a lack of DACLs applied to system objects. As we discussed above, failure to explicitly supply a DACL when creating system objects results in the object being created with the default DACL for the namespace, which in the global system object namespace means a null DACL. The impact is that a local low-privilege user can manipulate these objects as they wish. Since this can lead to disk IO requests being ignored, or at least heavily delayed, the system eventually cannot continue. In many cases it simply locks up and becomes unresponsive, as user-mode programs and subsystems (e.g. SMSS / CSRSS) cannot complete blocking disk operations. In some cases, the system will recognise the pattern of failures and forcefully halt itself with a bugcheck (BSoD) in order to reduce the potential for permanent damage to the system state. Of course, this isn’t particularly interesting from a security perspective if you only consider a desktop environment, but imagine the impact on a terminal services system with hundreds or thousands of users.

Sophos have now patched this issue in engine 3.50, which went live on the 21st of January. Portcullis have independently verified this fix as being effective after the update is applied and the system is rebooted.

Securi-Tay 3 wrap-up
https://labs.portcullis.co.uk/blog/securi-tay-3-wrap-up/
24 January 2014

Of all the conferences I’ve been to, Securi-Tay has always been a favourite. I don’t know whether it’s the mix of security professionals and students, the relaxed atmosphere, or the balance between technical and non-technical talks, but it’s always a great time. For those of you that aren’t familiar with it, Securi-Tay is a student-organised and student-led conference, held annually by the Abertay Ethical Hacking Society at the University of Abertay, Dundee. This year’s event, held on January 15th (last week, at time of writing), marked the third instance of the conference.

I spoke at Securi-Tay last year (video), before I joined Portcullis, on the security threats posed by common office and datacentre devices such as photocopiers, printers, and UPSs. This year I decided to tackle a much more foreboding and monolithic topic: cryptography. I feel that one of the major obstacles to learning about cryptography is the stigma around it – it is often seen as obscenely complex, studied only by mathematicians with foot-long beards and a slew of three-letter acronyms trailing their surnames, detailing their accomplishments. Another obstacle is the lack of reputable, high-quality, entry-level education in the subject outside of university courses and paid workshops. This aspect has improved somewhat, especially with the introduction of free online courses like Stanford’s, but free materials are certainly nowhere near the breadth and ubiquity we have come to expect in other areas of security education. As such, I felt that I should do my part in rectifying this.

My talk, entitled “Breaking crypto without breaking your brain”, aimed to give people a basic understanding of the common types of cryptography that I see in practical use as part of my day job, without needing an advanced degree in mathematics. In contrast to many introductions to the topic, I skipped the classic ciphers such as the Caesar cipher, simple alphabetic transposition and substitution ciphers, and other algorithms of that ilk, as I rarely find knowledge of them useful in the context of real security assessments. Instead, I focused upon one-time pads, modern stream and block ciphers, padding, block cipher modes, and demonstrated how seemingly strong encryption can often be broken trivially.

The talk was recorded and should be available on YouTube within the next few weeks, via the AbertayHackers YouTube account. In the meantime, you can download the slides and both demo applications:

Aside from my own, there were a number of very good quality talks from both industry professionals and students. I thought it would be nice to write a short description of each that I saw, with some take-away points for each talk.

Olly Whitehouse – “Real world threat modelling”

This talk covered a range of topics around the concept of assessing a black-box appliance, with a view to building a threat model. Olly proposed that by building a threat model, pentesters can obtain greater coverage of targets during tests, primarily by answering simple questions that help us understand the system. He highlighted the need to understand underlying technologies, especially the operating system, in order to minimise the risk of missing key issues. Finally, he went on to discuss ways of building threat models, visualising them, and utilising feedback from security tests to improve upon existing models.

Three take-away points:

  • Go for the simplest attacks to reach your goal – elaborate and sexy tricks aren’t important
  • Threat modelling can help both pentesters and the organisations that hire them
  • Threat model discussions with security engineers, developers, and other technical staff can lead to real improvements in security posture

Panagiotis Gkatziroulis – “Physical attacks: Walking past the egg shell perimeter”

This talk gave a broad overview of the technological, structural, procedural, and human challenges that are involved in physical security. First, a range of security measures were discussed that fall into the building design and security technology category, including entry choke points, placement of receptionists and guards, use of CCTV cameras and electronic doors, as well as the training of the personnel involved. Panagiotis noted that reception staff are often not trained to act as security guards, despite the fact that fooling them may get you the “keys to the kingdom”. He went on to discuss four avenues of exploitation when dealing with human opponents: misrepresentation, obligation, authority, and emergency. Five key reasons why humans fail to act were proposed: a sense of obligation to business goals (not wanting to hinder productivity), employees feeling that they aren’t targets, anxiety of punishment if they mistakenly report something, a lack of confidence in challenging people, and a belief that security is someone else’s problem.

Three take-away points:

  • Physical security is meaningless if humans can be subverted into giving you access
  • Often a lack of proactive security – changes are reactive, after the damage is done
  • Once you get a pass from reception, you win. These people should be trained in security procedures

Oren Benshabat – “DNS distress”

Oren’s talk focused on the problems of DNS cache poisoning and DNS amplification attacks from the ground up, explaining the subject in detail. He began with cache poisoning – an attack that tricks a system into accepting a spoofed DNS response. He described bailiwick checking, a series of checks designed to enforce correct responses, and how this process can be subverted by flooding the target with reply packets containing different query IDs until one is accepted. He went on to cover Dan Kaminsky’s BIND attack, which spoofs an entire domain rather than just one hostname, by spoofing the nameserver of a domain to point at an authoritative DNS server under the attacker’s control. He proposed several fixes, including BIND patches, randomised source ports, DNSSEC, and pinning of nameserver IP addresses. He went on to discuss DNS amplification DoS attacks, which involve tricking open DNS servers into sending large reply packets to a target system. A range of DNS features that increase response packet size were discussed, including EDNS0, DNSSEC, as well as proprietary and custom DNS extensions. Finally, countermeasures such as IP source validation (mainly at the ISP level) and restriction of open recursion were suggested, with the caveat that there is no bullet-proof solution against DNS amplification attacks at the current time.

Three take-away points:

  • There are multiple ways in which DNS can be abused, even when properly configured
  • Some DNS security features (e.g. DNSSEC) help to fix one problem but also contribute to other problems
  • There is no current practical solution to DNS amplification attacks

Dom Cashley – “Security in SCADA”

SCADA is a topic that I am personally interested in, but have very little knowledge of. As such, I was very happy to see someone approaching the subject at an entry level. Dom started the talk by explaining the traditional SCADA network design, including Human-Machine Interface (HMI), Master Terminal Unit (MTU), and Remote Terminal Unit (RTU) devices, and how they are usually deployed in production facilities. Whilst the original design was focused on an air-gapped network, Dom noted that many modern SCADA devices are in fact no longer air-gapped, and are commonly reachable from the internet for remote access purposes. He went on to explain how off-the-shelf hardware was often used to network these devices and link them to IP networks, making an attacker’s job easier. Additional security issues were also noted, such as a lack of security training for engineers deploying the equipment, general purpose computing platforms being used in field devices (including BYOD-style policies), and “SCADA in the cloud” management systems. Dom noted several problems with patching and updates, including long hardware life spans, a reluctance to take down critical devices for patching, and the use of old, insecure protocols. A demo was also shown, using a special demo board and an exploit that causes the MTU to alter pump motor parameters without alerting the HMI user.

Three take-away points:

  • SCADA is usually controlled by software on regular PCs, often running Windows
  • Patching bugs is paradoxically problematic due to the critical nature of systems
  • Systems are often not air-gapped and regularly appear on the internet, and may be found on SHODAN

Paco Hope & Ritesh Sinha – “The Colour of your box: The art and science of security testing”

Along with Paco, Ritesh, and the usual exuberance we’ve come to expect, the stage was occupied by two rollercoasters built from k’nex, wrapped almost entirely in large cardboard boxes, exposing only the starting and finishing sections of track. These contraptions formed a metaphor for the core concept of their talk: black-box vs. white-box testing. A toy car was placed into each, which could be heard traversing the track, but the car did not appear at the other side. Two teams were invited from the audience to diagnose the fault, each given access to one rollercoaster. This was described as the black-box test, where only inputs and outputs could be seen, functionality could be deduced, and some vision of the internal technologies was available. Each team reported that the system utilised k’nex, was meant to take a car in one side and send it through to the other, but did not work.

The teams were then allowed to open flaps on the sides of the boxes to see inside, and propose one single fix that would solve the problem – essentially a white-box test. The “one issue” restriction was designed to emulate a true penetration test, where the customer receives a report and cannot be guaranteed to fix anything but the highlighted issues. Paco noted that reports needed to be clear, concise, and free of unnecessary technical jargon, stating that far too many reports end up reading somewhere along the lines of “bla bla bla owned you bla bla bla root shell bla bla bla Cross-site Scripting bla bla bla disaster!”. Now that the teams had access to the internals, both began identifying potential problems with the design, and tried to come to a conclusion on what should be fixed. After some pointers from the presenters, the solution was eventually found: a piece was blocking the passage of the car and needed to be moved.

The conclusion given was that both black-box and white-box approaches are necessary, as they both promote different types of thinking and analysis. Black-box tests tend to focus on the external attack surface that most attackers would see, whereas white-box tests tend to focus more on implementation details that may lead to potential issues. By combining the two methods, the presenters propose that a strong coverage of the target can be obtained.

Three take-away points:

  • Black-box makes you focus on external attack surface, white-box makes you focus on implementation details
  • Reports need to be clear, concise, and explain the full spectrum of problems in order to facilitate improvements
  • Combining both types of testing helps increase coverage during assessments

The conclusion of Paco and Ritesh’s talk marked the end of the conference day, after which we all congregated in the student union bar for after-party drinks. I’d like to issue personal thanks to all involved with organising and running the conference, including all the speakers and sponsors.

As I noted above, all talks were recorded and should hopefully be available on YouTube within the next few weeks.
