AJAX revolves around the XmlHttpRequest class, or object.
Webmail applications use it to quickly update the list of messages in your Inbox, while other applications use the technology to suggest search queries in real time. All this without reloading the main, sometimes image- and banner-ridden, page. (That said, it will most probably be used by some of those ads as well.)
Before we go into possible weaknesses and things to keep in mind when implementing an AJAX-enabled application, here is first a brief description of how this technology works.
With the XmlHttpRequest-object, however, this request can be done while the main page is still being shown.
Some web-enabled applications, such as for email, do have pretty destructive functionality that could possibly be abused. The question is — will the average AJAX-enabled web-application be able to tell the difference between a real and a faked XmlHttpRequest?
Do you know if your recently developed AJAX-enabled or enhanced application is able to do this? And if so — does it do this adequately?
Do you even check referrers or some trivial token such as the user-agent? Chances are you do not even know. Chances are that other people, by now, do.
To be sure that the system you have implemented — or one you are interested in using — is properly secured, thus trustworthy, one has to ‘sniff around’.
Incidentally, the first time I discovered such a thing was in a lame preview function on a lame ringtone site. Basically, the 'len' parameter in the XmlHttpRequest URI specified the length of the preview to generate, and it seemed to be loading the original file. By entering this URI in a browser (well, actually, curl) and specifying a very large value, one could easily grab all the files in full.
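The obvious server-side fix for that hole is to stop trusting the client-supplied length entirely. A minimal sketch, assuming a hypothetical preview endpoint and a site policy of 30-second previews (the function name and limit are mine, not from the actual site):

```python
# Hypothetical fix for the ringtone-preview hole: clamp the client-supplied
# 'len' parameter server-side instead of trusting it.
MAX_PREVIEW_SECONDS = 30  # assumed site policy

def clamp_preview_length(raw_len: str) -> int:
    """Parse the 'len' query parameter and cap it at the preview limit."""
    try:
        requested = int(raw_len)
    except ValueError:
        return MAX_PREVIEW_SECONDS  # garbage input falls back to the default
    if requested < 1:
        return 1
    return min(requested, MAX_PREVIEW_SECONDS)
```

With this in place, `len=999999` yields the same 30-second preview as any other oversized value, and the full file never leaves the server.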
Implementing an AJAX interface that accepts GET requests is a fatal mistake: GET requests are the easiest to fake. More on this later.
The question is: can we perform an action while somebody is logged in somewhere else? It is basically XSS/CSS (Cross Site Scripting), but then again, it isn't.
And as you may have already noted: if there is improper authentication on the location called by the XmlHttpRequest object, this leaves an opening for malicious use. This is exactly where we can expect weaknesses and holes to arise. There should be proper authentication in place. At all times.
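Concretely, "proper authentication in place at all times" means every AJAX endpoint re-checks the session server-side, rather than assuming the page that loaded the JavaScript already did. A minimal sketch, with an assumed in-memory session store and made-up handler name:

```python
# Minimal sketch (all names assumed): the AJAX endpoint authenticates on
# *every* request instead of trusting that the caller is the real page.
SESSIONS = {"a1b2c3": "alice"}  # hypothetical session store: id -> user

def handle_delete_message(session_id: str, message_id: int) -> str:
    user = SESSIONS.get(session_id)  # re-check the session each time
    if user is None:
        return "403 Forbidden"       # unauthenticated: refuse the operation
    return f"deleted message {message_id} for {user}"
```

A request arriving with an unknown or missing session ID gets a 403 instead of silently performing the destructive operation.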
As all these systems are built by humans, chances are this isn't done properly.
HTTP traffic analysis
Analysing HTTP traffic with tools like Ethereal (yeah, I like GUIs, so sue me) surely comes in handy to figure out whether applications you use are actually safe from exploitation. This application allows one to easily filter and follow TCP streams, so one can properly analyse what is happening there.
If you want to investigate your own application, a sniffer isn't even necessary, but I would suggest you let a colleague who hasn't implemented it play around with your app and a sniffer in an attempt to 'break' through it.
Cookies are our friend when it comes to exploiting, I mean researching any vulnerabilities in AJAX implementations.
If the XmlHttp-interface is merely protected by cookies, exploiting this is all the easier: the moment you get the browser to make a request to that website, your browser is happily sending any cookies along with it.
Back to my earlier remark about GET requests being a pretty lame implementation: from a developer's point of view, I can imagine temporarily accepting GET requests to be able to easily debug stuff without having to constantly enter irritating HTTP data using telnet. But when you are done with it, you really should disable it immediately!
I could shove a GET request hidden in an image link. Sure, the browser doesn't understand the returned data, which might not even be an image. But my browser does happily send any authenticating cookies along, and the web application on the other end will have performed some operation.
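This is exactly why a cookie-only check cannot tell a real XmlHttpRequest from an `<img src="...">` fetch: the browser attaches the session cookie either way. One mitigation (my suggestion here, not from the article) is to require POST plus a custom header, since an image tag always issues a plain GET and cannot add headers. A hedged sketch:

```python
# Sketch of a server-side check that a hidden-image GET cannot pass.
# The header name X-Requested-With is a common convention; treating its
# presence as "probably XHR" is an assumption, not a guarantee.
def is_probably_xhr(method: str, headers: dict) -> bool:
    # <img> tags always issue GET and cannot attach custom headers,
    # so requiring POST + a custom header filters them out.
    return method == "POST" and "X-Requested-With" in headers
```

This does not stop a determined attacker (who can forge any header from their own client), but it does defeat the drive-by image-link trick described above.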
Using GET is a major mistake to make. POST is a lot better, as it is harder to fake. The XmlHttpRequest object can easily do a POST. But I cannot get a script (I could have embedded one in this article, for instance) to do a POST request to another website, because of the earlier noted restriction: you can only make requests to the same website the web application is on.
One can modify one's own browser to make requests to other websites, but it would be hard to get the browser on somebody else's machine to do this.
Or would it?
If proper authentication, or rather credential verification, still sucks, I can simply set up a website that performs the exact POST request the AJAX interface expects. It will be accepted and the operation will be performed. Incidentally, I have found a popular site that, so far, does not seem to have proper checks in place. More on that one in another article.
Merely using cookies is again a bad idea.
One should also check the User-Agent and possibly the Referer (XmlHttpRequest nicely allows one to send additional headers, so you could just put some other token in the Referer field). Sure, these can still be faked, but it may fend off some investigating skiddiots.
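A hedged sketch of that extra check, assuming the per-session token and expected User-Agent were recorded at login (the function and field names are mine):

```python
# Sketch: compare a per-session token smuggled in the Referer field plus
# the User-Agent recorded at login. Both are forgeable by a determined
# attacker; this only raises the bar, as the article notes.
def headers_look_legit(headers: dict, expected_token: str, expected_ua: str) -> bool:
    referer = headers.get("Referer", "")
    return referer.endswith(expected_token) and headers.get("User-Agent") == expected_ua
```

A hidden-image or naive replay attack fails both tests; a skilled attacker who sniffed the traffic can still forge them, which is why the sequence-numbering scheme below goes further.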
Sequence Numbering, kinda…
A possible way of securing one’s application is using some form of ‘sequence-numbering’-like scheme.
Roughly, it boils down to this: the server's 'challenge string' should be as random as possible in order to make it unpredictable. If one could guess what the next sequence number will be, the interface is again wide open to abuse.
There are probably other ways of hardening interfaces like this, but they all basically come down to keeping some piece of information from the web server as far out of the end user's reach as possible.
You can implement this as complex as you want, but it can be done very simply as well.
For instance: when I, as a logged-in user of a web-enabled email application, get assigned a Session-ID and the like, the page my browser receives includes a variable iSeq which contains a non-predictable number. When I click "Delete This Message", this number is transmitted with the rest of the parameters. The server can then respond with new data and, hidden in the cookies or other HTTP header fields, pass along the next sequence number, the only one the web server will accept as a valid request.
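The scheme above can be sketched in a few lines. This is my own minimal interpretation (class and method names assumed): each response carries a fresh unpredictable token, and the next request is valid only if it echoes that exact token back.

```python
import hmac
import secrets

# Sketch of the 'sequence-numbering' scheme: one unpredictable token per
# session, rotated after every valid request. All names are assumed.
class SequenceGuard:
    def __init__(self):
        self._expected = secrets.token_hex(16)  # unpredictable challenge

    def current_token(self) -> str:
        """Token to embed in the page (e.g. the iSeq variable)."""
        return self._expected

    def verify_and_rotate(self, presented: str) -> bool:
        """Accept a request only with the current token, then rotate it."""
        ok = hmac.compare_digest(presented, self._expected)
        if ok:
            self._expected = secrets.token_hex(16)  # next request needs a new one
        return ok
```

Because the token rotates on every accepted request, a replayed or guessed value is rejected, which is exactly what makes the forged-POST attack described earlier fail.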
As far as I know, this seems the only way of securing it. It can still be abused if spyware sniffs HTTP communications, which spyware has recently started doing.
As these technologies keep developing, and lazy website developers fail to update their websites to keep up with the changes, expect more holes like these to surface.
If the cookies getting caught didn't bother you, the sudden deletion of random items and/or public embarrassment might be something to entice you to verify your code.