Is your sign signed?
https://labs.portcullis.co.uk/blog/is-your-sign-signed/
Thu, 03 Aug 2017 16:30:01 +0000

Modern autonomous vehicles use a number of sensors to analyse their surroundings and act upon changes in their environment. A brilliant idea in theory, but how much of this sensory information can we actually trust? Cisco’s Security Advisory R&D team, a.k.a. Portcullis Labs, decided to investigate further.

Various researchers have documented attacks against vehicle sensors and cyber-physical systems, resulting in vehicles performing unwanted actions such as raising false alerts, malfunctioning, or even crashing. The very same sensors that are used to improve driver efficiency have been proven vulnerable to both spoofing and signal-jamming attacks. In this blog entry, we will focus on the reliability of a vehicle's underlying systems and its susceptibility to spoofing attacks, in particular the vulnerabilities in the front-facing camera, to ascertain how these problems may be addressed.

The problem

Multiple cameras can be found in today's vehicles, some of which provide a full 360° view of the vehicle's surroundings. One of the most common uses for these cameras is road traffic sign detection: the traffic sign is picked up by the vehicle's camera and displayed at eye level within the instrument cluster for the driver's convenience. This is designed to reduce the potential consequences of a driver failing to recognise a traffic sign.

Professors from Zhejiang University and the University of South Carolina recently presented a whitepaper detailing the countless attack scenarios against vehicle sensors and front-facing cameras. With regards to vehicle cameras, their experiment focused on blinding the camera using multiple easily obtained light sources, which proved to be successful.

Our experiment, on the other hand, looked into fooling the vehicle’s camera in order to present false information to the driver.

We started off by printing different highway speed signs on plain paper, some of which contained arbitrary values such as null bytes (%00) and letters. The print-outs were then held up by hand as our test vehicle drove closely past. As expected, the camera detected our improvised road signs and displayed the value to the driver. Spoofing speed values of up to 130 mph was possible, despite this being well beyond the national speed limit. Does this mean we can now exceed the speed limit? Naturally, abiding by the Highway Code still comes first, but it does raise the question of why something this farcical can still occur.

Sign Signed

Although one could argue that the camera has done its job by detecting what appears to be a valid road sign, no additional checks are performed to establish whether the detected sign is legitimate or even sensible.

Other scenarios to consider involve the intelligent speed limiters now present in some vehicles. The front-facing camera and built-in speed limiter are used together to cap your driving speed at the limit recognised by the camera, preventing you from exceeding it even if you floor the accelerator. In a car with that functionality, what would happen if a 20 mph sign was spoofed onto the camera while driving on a 70 mph motorway? We have yet to test this specific scenario, but a potentially dangerous outcome is easy to imagine.

What could be done to mitigate this problem?

We need some form of validation of sensory input. If we review the advancements made in securing biometrics, specifically fingerprint authentication devices, we can see that these devices are continually improved with new features such as "life detection", which measures the subtle conductivity a finger possesses and thus prevents spoofing and finger-cloning attacks. Could we implement a similar approach to securing vehicle sensors? Properly validating the authenticity of each detected road sign would prevent spoofing attacks from occurring, but of course that is easier said than done.

What about introducing boundary detection? UK drivers know that 70 mph is the absolute speed limit within the country, so the detection of any speed higher than this should be flagged as an error. A fixed boundary could, of course, prove unhelpful when driving in Europe, for example, where speed limits differ, but this is easily fixed using GPS data, or by giving the driver the ability to set the location manually rather than relying on a single global limit.
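A minimal sketch of such a boundary check, in Python. The function name, the set of valid limits and the threshold are illustrative assumptions on our part, not taken from any real vehicle system:

```python
# Hypothetical sanity check for camera-detected speed signs.
# The set of valid limits and the region maximum are illustrative only.

UK_MAX_LIMIT_MPH = 70
VALID_UK_LIMITS = {20, 30, 40, 50, 60, 70}

def validate_detected_limit(detected_mph, region_max_mph=UK_MAX_LIMIT_MPH):
    """Reject readings that cannot be a legitimate sign for the region."""
    if detected_mph > region_max_mph:
        return False  # above the national limit: flag as a detection error
    if detected_mph not in VALID_UK_LIMITS:
        return False  # 37 mph, null bytes, letters etc. are nonsense
    return True

print(validate_detected_limit(130))  # False: the spoofed value from our test
print(validate_detected_limit(50))   # True
```

Even a check this crude would have rejected our 130 mph print-out; swapping the hard-coded set for GPS-derived regional limits is the obvious refinement.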

Independent researchers have even suggested novel ways to improve road sign detection systems using neural networks in order to learn and distinguish properties of legitimate road signs.

Conclusion

We have demonstrated that front-facing vehicle cameras used for traffic sign detection can easily be fooled into recording a false speed limit. While cameras do have an essential place in autonomous vehicles, their integrity and availability properties present a great deal of room for improvement. Even simple features and configuration changes, such as boundary detection, could be applied to increase the accuracy and efficiency of these systems. Further research into securing vehicle cameras needs to be conducted to ensure that spoofing attacks cannot be carried out as trivially as is currently possible.

Biometrics: Forever the “next big thing”
https://labs.portcullis.co.uk/blog/biometrics-forever-the-next-big-thing/
Thu, 06 Jul 2017 10:08:30 +0000

It’s not every day we get to assess biometric systems from a security perspective; they are still somewhat esoteric, and testing them doesn’t quite fit with the usual slew of things that come along with being a security consultant. Recent engagements reminded us of just how interesting this facet of the industry can be, so we decided to write up a little something around biometrics. This article will cover some of the history and basics of biometrics, along with some of the biometric-centric attacks you may come across…

Biometrics aren’t new

They have been around for, well, as long as we have. There is evidence that cavemen used to sign paintings with a handprint as a way to confirm authorship. Traders used to keep ledgers with physical descriptions of trade partners. Police started keeping “biometric databases” of criminals hundreds of years ago.

Even digital biometrics have been around for decades. Digitised systems, especially for voice, writing and fingerprints, started coming into being in the 1970s and 1980s, largely funded by government and law enforcement agencies such as the FBI.

Somewhere around the 1990s is when biometrics as we know them today took shape: fully digitised and automated systems, automatic facial recognition in CCTV, biometric passports, etc. Since then it has largely been about miniaturisation, increasing sensor/template accuracy and finding new, novel things to measure, such as ear biometrics – which I’m going to go out on a limb and say nobody needs or wants.

Recently, biometrics have started to make their way directly into the lives of consumers on a larger scale, thanks to increasing adoption of fingerprint and facial/retina scanners amongst smartphone and laptop manufacturers.

But what happens when a user enrols their finger – or any other appendage – on a biometric device?

A pixelated finger (probably).

The biometric device makes an acquisition using whatever sensor is installed – for example a CCD or optical sensor as in a camera, a capacitance scanner, or even potentially an ultrasound scanner. This scan is then analysed for “interesting” features, or minutiae. These important parts of the biometric are isolated and saved in a binary template; the rest of the reading is generally discarded.

Of course, manufacturers have their own algorithms for creating templates and matching them. But in general, each template boils down to something akin to a set of coordinates. For template matching, a number of different comparison algorithms are used, with Hamming distances being the most common that I’ve seen. At a simple level, a Hamming distance measures the number of differences between two equal-length strings – in this case, the presented and stored templates.

To explain this a bit more clearly: when a user puts their finger on a fingerprint scanner, they don’t always put it in the exact same place or at the exact same angle. By using an algorithm such as a Hamming distance to calculate the difference, biometric devices can judge the presented biometric on a number of factors, such as the distances between each detected minutia and those of the stored templates.
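As an illustration of the idea – the template format, tolerance threshold and helper names below are our own assumptions, not any vendor’s scheme – matching two binary templates by Hamming distance might look like this:

```python
# Simplified sketch: comparing two fixed-length binary templates using a
# Hamming distance and a tolerance threshold. Real vendors use proprietary
# algorithms; the template format here is purely illustrative.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length templates."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def matches(stored: bytes, presented: bytes, max_fraction: float = 0.25) -> bool:
    """Accept if the fraction of differing bits is within tolerance."""
    total_bits = len(stored) * 8
    return hamming_distance(stored, presented) / total_bits <= max_fraction

stored = bytes([0b10110010, 0b01101100])
presented = bytes([0b10110011, 0b01101100])  # one bit off: same finger, new angle
print(matches(stored, presented))  # True
```

The tolerance is the crucial design choice: too tight and legitimate users are rejected, too loose and impostors slip through.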

But it’s not all about fingertips and eyeballs

A table showing common biometrics and their attributes.

The above table is by no means a complete list of biometrics; it merely covers the ones people hear of or encounter most. Nor is it 100% representative – it is meant as a general guideline.

Accuracy in the table reflects how unique to an individual a biometric trait – and therefore its scan – is. Hand geometry, for example, is not very accurate or unique: usually all that happens is that the acquisition device measures the key points of the hand (the tips and grooves of the fingers, and the width). Iris and retina are considered high accuracy because they are unique traits, even between identical twins, and relatively high-quality acquisitions can be made. Just to clarify: the iris is the colourful part at the front of the eye which controls the eye’s aperture, while the retina refers to the nerves near the back of the eye which collect that light – in biometrics, specifically the veins.

Security is how safe the biometric is in terms of its potential to be inadvertently “stolen”. Fingerprints, for example, aren’t very secure at all: people leave them all over the place, almost like leaving post-it notes with their passwords everywhere. The retina is the only trait on this list considered high security, because it is the only truly “internal” trait listed and so isn’t something that can be seen or copied easily.

The final column is usability: how easy the system is to actually use. Fingerprint scanners are easy – just place the finger on the acquisition sensor and away you go. Iris and face require the user to stand still in front of a camera, so are a bit more awkward. Retina is the most difficult because, being an internal trait, it is hard to scan: the user has to place their eye right up to the sensor and have a relatively unpleasant bright light shone into it.

Finger vein and palm vein scanning are two types of biometrics I haven’t listed here but are quite promising and gaining increased traction. They both offer a sensible alternative to fingerprints – they retain most of the usability of fingerprints while removing the weakness of using an external trait. I’d personally really like to see a smartphone with an IR-based palm vein reader on the back, but maybe I’m just a little bit crazy.

Attack vectors

Just as with any other system, biometrics expose a slew of network and local attack vectors: replaying old templates, modifying data in transit, modifying or stealing from the backend database, brute-force attacks, etc. The security industry knows these attacks all too well, and we also know how to defend against them. What we are more interested in are the attack vectors specific to biometrics: attacking the input device (sensor) and the templates themselves.

Over the years, a number of techniques for achieving a successful authentication illegitimately have come to light; we’ll cover a few of the more common ones below:

Reverse engineering

We’ll start with the templates themselves. Imagine that we have acquired a template somehow (i.e. we have compromised a database containing biometric templates) and now need to get past an actual biometric scanner.

At some point in time, it was thought that reverse engineering biometric templates back into a presentable appendage wasn’t possible. After all, templates are just a few bytes of data, which surely don’t contain enough information to reconstruct the original biometric from. That assumption hasn’t held up: the technique is essentially the biometric equivalent of “password cracking”.

As we already know, templates generally list the coordinates of the minutiae in a biometric. This means the key information is realistically already there; it just needs to be mapped onto a grid, with all the ‘uninteresting’ data added back in so that the result resembles an actual trait.

This is something that sounds easier in theory than it is in practice; I’ve only ever seen it achieved successfully in lab environments.

A specific case-study that comes to mind is around iris reverse engineering, found in the “Handbook of Iris Recognition”.

The team used an open source system developed by Libor Masek to create an initial group of reconstructed irises, which were then tested against the target system. The closest matches from the initial group were then combined, along with some new, randomly generated data. This was repeated until a match was found. In over 90% of cases the attack eventually succeeded.
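That iterative refine-and-retest loop can be caricatured as a toy genetic search against a similarity oracle. Everything below is illustrative – real iris templates are far richer than a 64-bit string, and the matcher’s internals here are stand-ins of our own invention:

```python
import random

random.seed(1)
BITS = 64
TARGET = [random.getrandbits(1) for _ in range(BITS)]  # the victim's template

def similarity(candidate):
    """Stand-in for the matcher's score: fraction of agreeing bits."""
    return sum(c == t for c, t in zip(candidate, TARGET)) / BITS

def combine(a, b):
    """Merge two of the closest matches, adding some new random data."""
    child = [random.choice((x, y)) for x, y in zip(a, b)]
    return [random.getrandbits(1) if random.random() < 0.02 else bit
            for bit in child]

# initial group of random "reconstructed" templates
population = [[random.getrandbits(1) for _ in range(BITS)] for _ in range(30)]
for _ in range(500):
    population.sort(key=similarity, reverse=True)
    best = population[0]
    if similarity(best) >= 0.95:  # the matcher's acceptance threshold
        break
    parents = population[:6]
    population = parents + [combine(random.choice(parents), random.choice(parents))
                            for _ in range(24)]

print(round(similarity(best), 2))
```

The point is not the specific algorithm but the feedback loop: as long as candidates can be scored against the system, the search converges far faster than blind guessing would.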

Hillclimbing attacks

This class of attack is similar to a reverse engineering attack, except that the attacker starts without a template to work from. Instead, they must rely on the biometric system doing something stupid, such as returning the match percentage for any authentication attempt. Most security-conscious systems today do not do this, but there are still some edge cases and older devices which do.

Against a system which does not reveal how close a match was, the attacker would simply have to resort to brute force. Much like password cracking, it is just a matter of trying a large number of templates [hashes] and comparing them against the real one. And just as with password brute-forcing, it’s much easier to do this with a stolen template than against a live system, which may have anti-automation features such as account lockouts, rate limiting, etc.
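When the matcher does leak its score – the careless behaviour described earlier – blind brute force collapses into simple hill climbing. A toy sketch, using an illustrative bit-string template rather than any real device’s format:

```python
import random

random.seed(7)
BITS = 64
SECRET = [random.getrandbits(1) for _ in range(BITS)]  # the enrolled template

def match_score(candidate):
    """A badly designed matcher: reveals how close the attempt was."""
    return sum(c == s for c, s in zip(candidate, SECRET)) / BITS

guess = [random.getrandbits(1) for _ in range(BITS)]
score = match_score(guess)
attempts = 1
while score < 1.0:
    trial = guess[:]
    trial[random.randrange(BITS)] ^= 1  # flip one bit at random
    trial_score = match_score(trial)
    attempts += 1
    if trial_score > score:  # keep the change only if the leak says "closer"
        guess, score = trial, trial_score

print(attempts)  # a few hundred tries, not the 2**64 a blind search would face
```

This is exactly why returning a match percentage to the caller is dangerous: each leaked score turns a random guess into a gradient the attacker can follow.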

Spoofed physical biometrics

Spoofed biometrics get a large amount of attention compared to other methods, especially when it comes to fingerprints and creating replicas. So how easy is it to take someone’s fingerprint and produce a working model from it?

The short answer is that it is relatively easy, given the right equipment and a good fingerprint to work from.

Possibly the most well-known and widely used method is the one known as “cyanoacrylate [superglue] fuming”.

Cyanoacrylate, as it evaporates, has a remarkable tendency in humid environments to be attracted to grease (i.e. the latent fingerprints left on things). Once it settles on the grease it re-solidifies, leaving a rigid and clearly marked fingerprint where before there was only grease. These prints are much more durable and defined, which makes them easier to extract and create a spoofed print from.

Superglue fuming is actually remarkably easy to do: all that is required is a container to put the thing you want to extract a fingerprint from in (such as a box or small fish tank), along with a small amount of superglue on some foil. Usually a heat source under the superglue (to help it evaporate quicker) and a small cup of water (to aid with humidity) are also added, for extra efficiency. Then simply wait a while.

After the print has settled nicely, it is simply a matter of extracting and inverting it. There are many ways to do this: dental mould, high-resolution scans, even high-quality clear tape. Most professionals will attempt to further enhance a print at various stages using things such as fine powders, but this post is meant as an overview, not an in-depth guide to extracting prints.

The image below shows all the basic materials required for fingerprint extraction and superglue fuming:

Superglue fuming.

In addition to extracting latent prints, back in 2014 a speaker at CCC in Germany demonstrated that it is possible to spoof a smartphone fingerprint scanner starting with just a high-enough-resolution photo of a person’s finger. To put this in a “worst-case” context: when you use fingerprints for authentication, not only are you potentially leaving copies of your unchangeable ‘password’ everywhere, you’re also carrying it around with you in plain sight.

Other biometrics

Voice-based biometrics are another area on the rise, especially as a way to ‘verify’ someone quickly and remotely (i.e. over the phone) – often touted as a way to reduce phone-support overheads and costs through automation.

The primary attack vector here is the one you would expect: replay attacks. Recording someone enrolling or authenticating and then replaying the recording later is surprisingly easy to execute, and most voice biometric systems appear to have only very limited or non-existent abilities to detect or prevent replays.

To put this in a more traditional ‘password’ context, it’s like saying your password out loud for everyone to hear every time you use it – and it doesn’t take an exceptional amount of skill to place a recording device. Voice distinction is also a limiting factor: imitating the speech pattern of others (mainly pitch, the inflection of phonemes, and cadence) is not hugely difficult with a bit of practice and thought.

Summary

The attacks described here are not all particularly mature, but they have not needed to be. Biometrics aren’t widely adopted and are therefore not a high-priority target. If there were real demand, we’d all keep biometric template cracking and reconstruction software on our machines.

Imagine a world where passwords were replaced by biometrics. Once a breach happens – and let’s be honest, sooner or later a breach always happens – you would spend the rest of your life wondering whether it is game over for all of the logins that use your finger (or whatever), and it would be out of your control. There is often a lot of grumbling about passwords, but at least passwords are easily changed should the worst happen. Get a password manager and the trouble of remembering them all largely goes away (I wish major OSes would start shipping with decent password managers, to get people into this habit).

Of course, there is a third major option amongst all this: “something you have” – access cards of varying types: RFID, NFC, even cards with PKI certificates. All have their pros and cons and are part of a larger debate which I won’t go into here. Ultimately, the industry has already decided that multi-factor authentication is the way to go where security is prioritised. Biometrics fit into this as part of the “multi” – use them alongside something else. And no, I don’t mean alongside a username/ID, which is not private information: an access token and/or a password.
