Darknet - The Darkside

Don't Learn to HACK - Hack to LEARN. That's our motto and we stick to it: we are all about Ethical Hacking, Penetration Testing & Computer Security. We share and comment on interesting infosec-related news, tools and more. Follow us on Twitter, Facebook or RSS for the latest updates.

03 April 2006 | 12,221 views

Slashdot Effect vs Digg Effect Traffic Report


As I’ve been Dugg about five times now, and somehow got Slashdotted (whilst I was sleeping) until my server crashed, my host started crying and my bandwidth ran out, I can give a reasonable comparison between Slashdot and Digg traffic.

From what I’ve seen, Digg traffic is between 4,000 and 20,000 hits on the first day, depending on what time the story hits the front page, what position it’s in and what the article is about. Of course the traffic keeps coming after that, but not as much as in the first few hours.

I can’t measure the Slashdot traffic totally accurately either, as my server died at around 40,000 unique visitors. When I woke up I did a 302 redirect to the Coral Cache version to take the load off my server.
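For the curious, a 302 redirect like that can be sketched in a few lines of Python (purely illustrative; the mirror URL is made up, and in practice you’d more likely do it with a rewrite rule on the web server):

```python
# Minimal sketch of offloading traffic with a 302 redirect, using only the
# Python standard library. The mirror URL below is hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

CACHE_MIRROR = "http://example.nyud.net:8080"  # hypothetical Coral Cache host

def redirect_target(path):
    """Map a local request path onto the cached mirror."""
    return CACHE_MIRROR + path

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)  # temporary redirect, easy to undo later
        self.send_header("Location", redirect_target(self.path))
        self.end_headers()

# To actually serve: HTTPServer(("", 8000), RedirectHandler).serve_forever()
```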

Here are the traffic spikes for the recent first Slashdotting, followed by the fourth Digg.

Slashdot vs Digg Traffic

As for RSS subscribers, Digg brought around 200 (20 to 200), Slashdot brought around 400 (180 to 540).

Slashdot vs Digg RSS

So from what I’ve seen Slashdot still seems to be doubling or tripling the traffic generated by Digg.

Still an amazing achievement for Digg, it being a new site in comparison to Slashdot.

Pretty interesting to see the traffic; getting Slashdotted is amazing.


01 April 2006 | 16,820 views

P*rn Database Hacked – Buyers Exposed!

Haha, well, serves them right; get out and get laid, guys.

Online payment company iBill on Thursday said a massive cache of stolen consumer data uncovered by security experts did not come from its database.

“I’m the first person that would have taken this to the FBI and the first person to have gone on 60 Minutes to say ‘we screwed up,’ if that were the case,” said iBill President Gary Spaniak Jr.

Two caches of stolen data were discovered separately by two security companies while conducting routine research into malicious software online. Both had file names that purportedly linked them to iBill.

Losers… but iBill seems to be off the hook anyway; it could be part of a massive phishing scam.

He says as long as iBill stays in business, it will try to repay those webmasters. “Over $20 million has been paid back, we have plans for paying back another $18 million.”

James says the actual source of the stolen data remains a mystery. An FBI spokeswoman says the bureau wouldn’t investigate the breach unless the source of the leak comes forward to make a complaint.

Source: Wired News

31 March 2006 | 14,764 views

Jacking Wifi is ‘OK’ says Ethics Expert

Honestly, I always thought it was OK…

Why not? If someone puts a seat in the middle of a public walkway I can sit on it, right? I don’t need to ask permission, nor fear that I am doing something wrong.

Likewise, if someone broadcasts an open wireless network into my house, my office or a public space, I should be able to use it, right?

It’s their responsibility to limit its signal or secure it if they don’t want people using it. For once… I agree with an expert!

I’m always on the lookout for open access points when I’m wandering around with my laptop; you never know when I might need to draft a new article for Darknet. When I get that inspiration, I just have to note it down… or I’ll completely forget it.

The Ethics Expert also points out that if you find an open connection, you should try to figure out who owns it to let them know it’s open — in case they want to cut it off. Of course, he leaves out the strongest argument for why there’s nothing wrong with using free WiFi, assuming you’re either on public property or your own property: those radio waves are no longer under the control of the access point owner once they drift off of his or her property.

I totally agree, and well so says the expert.

While I suppose that an argument could be made that you should never use what you don’t pay for, I don’t think this would apply here and I’m not even sure that I agree with the broad sentiment. Unless it is made clear to users tapping into wireless connections that they must agree to certain conditions before proceeding, they have not breached any ethical mandate by logging on in any way that they legally can.

The right thing would be for those who set up wireless connections and want to keep them private to take the time to do so. If you’re a piggybacking user and can identify the individual to whom the connection belongs, it would be courteous but not essential to let that person know that you and presumably others are able to enjoy their wireless largesse.

Source: Dispatch.com

30 March 2006 | 5,093 views

US Investigates Snort Sale as a Security Risk

Basically the Americans are saying that a lot of their sensitive governmental organisations are using Snort, and they don’t want the software to be controlled by an Israeli company; they see it as a threat.

The same Bush administration review panel that approved a ports deal involving the United Arab Emirates has notified a leading Israeli software company that it faces a rare, full-blown investigation over its plans to buy a smaller rival.

The objections by the FBI and Pentagon were partly over specialized intrusion detection software known as “Snort,” which guards some classified U.S. military and intelligence computers.

Snort’s author is a senior executive at Sourcefire Inc., which would be sold to publicly traded Check Point Software Technologies Ltd. in Ramat Gan, Israel. Sourcefire is based in Columbia, Md.

Check Point was told U.S. officials feared the transaction could endanger some of government’s most sensitive computer systems. The company announced it had agreed to acquire Sourcefire in October.

Is it really a threat?

I’m guessing from this, though, that the US government doesn’t use ANY Check Point devices or software in any of its organisations.

The ongoing 45-day investigation into the Israeli deal is only the 26th of its type conducted among 1,600 business transactions reviewed by the Committee on Foreign Investments in the United States. The panel, facing criticism by Congress about its scrutiny of the ports deal, judges the security risks of foreign companies buying or investing in American industry.

I wonder what the outcome is going to be.

Let’s hope the whole thing is dealt with properly.

Source: Redmond Mag – (Slashdot)

29 March 2006 | 8,875 views

My SQL 2005 Diary – Part 1

At the place I pretend to work, the time has come that most developers equally fear and love, upgrade time. We’ve been using MSSQL2000 for 90% of our work for about 4 years now, and it’s served us well, but when a change as big as 2005 server comes along, you have to make the leap and upgrade. I suppose a little background is in order, but I’ll have to keep it fairly general as we have some strict rules on what we talk about with people outside the development team.

What we do now

The company I work for is a travel company, one of the big ones, and as with most big travel companies we do a huge variety of things. We own resorts, broker our own insurance, sell for third parties, sell our own holidays, own/rent cruise ships, provide resort management for small hotels, and many other things, all of which is managed through 3 internal sites. We handle the telephone auto-diallers in the call centre, stock-management at our red-sea resort, the links to the main UK flight database, the payment system, our SMS marketing servers, basically, everything.
We have 3 main centres: our corporate headquarters in America, the headquarters in the UK and one huge sales centre, also in the UK. In addition to that we have either fixed-line or internet-linked terminals at all our resorts and most of the major airports, all of which connect to our headquarters in the UK (it’s an ex-cupboard upstairs). Because of the international nature of our business and the resort links, the sites must run with 100% uptime, 24/7, even though they are all internal.

The sites run on a variety of different platforms, but the vast majority run on old-style ASP and SQL Server 2000, with a heavy focus on SQL Server. To put the workload in perspective, our ASP apps use approximately 5% of our servers’ total resources, with SQL Server taking the other 95% and another magical 1% running Reporting Services (an excellent application if you’ve never used it). We have a multitude of databases, but we currently run on 4 SQL servers with the databases split as equally as we can get them, to avoid having to deal with load balancing. The databases range greatly in size, from a few MB for the HR database to over 50GB for the lead details database (call centre data).

Why we’re upgrading

Due to the size and complexity of the database, performance is extremely important, and we have our indexes and maintenance jobs tuned to absolute perfection or the entire thing would come crashing down around us, and we would have a lot of angry people looking to have our heads. But recently we have hit SQL Server 2000’s “roof”, which is one of the reasons MSSQL has never challenged Oracle in the big enterprise market, and it’s proving a big problem for us. SQL Server 7 was never meant to be an enterprise-level database server, and in typical MS style a lot of SQL Server 2000 has come from that original code, as have a lot of the problems, mainly its inability to handle truly massive databases. 2005 fixes this.

SQL Server 2000 was also limited in that it handled everything via transactions and locking, so if you want to retrieve data from the database in an editable format you have to basically lock that information so nobody else can access it. This can cause all kinds of problems, such as one user being told they can’t perform an action because they’re locking themselves (usually through bad coding), or a deadlock, where two sessions each wait on a lock the other holds. 2005 borrows from Oracle in that it uses a combination of locking and versioning, which takes a copy of the data, performs the action on it and then puts it back into the database. This presents its own problems, but it does mean users can always get to their data.
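To illustrate the versioning idea in isolation (this is a toy Python sketch of optimistic versioning in general, not how SQL Server implements it): readers take a snapshot tagged with a version number, and a write only succeeds if the row hasn’t changed since the snapshot was taken.

```python
# Toy illustration of row versioning: work on a versioned copy instead of
# holding a lock, and detect conflicting writes at commit time.

class Row:
    def __init__(self, value):
        self.value = value
        self.version = 0

def read(row):
    """Take a versioned snapshot instead of locking the row."""
    return {"value": row.value, "version": row.version}

def write(row, snapshot, new_value):
    """Apply the edit only if the row is unchanged since the snapshot."""
    if row.version != snapshot["version"]:
        return False          # someone else updated it; caller must retry
    row.value = new_value
    row.version += 1
    return True

price = Row(100)
snap_a = read(price)          # two sessions read the same row concurrently
snap_b = read(price)
write(price, snap_a, 120)     # first writer wins
ok = write(price, snap_b, 90) # second writer is told to retry
```

Nobody ever blocks waiting to read, which is the point of the 2005 change; the cost is that the second writer has to redo its work.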

There are also some significant coding changes, including some very cool stuff that is new to database servers as a whole. The ability to include code from other languages is one of the main talking points; basically it allows you to execute .NET code within your stored procs. This may not sound so great, but you have to consider how it changes the way a DBA will work. At the moment database code needs to be specific: because speed is always an issue, the server has to constantly optimize the way it works, and it can’t do this with vague and dynamic code. For example…

Select * from Invoice

Would bring back everything from the invoice table. But what if we just wanted a price field?

Select Invoice.Price From Invoice

That’s easy enough. But what if we wanted the gross price, for example, from insurance items, but the net price for everything else? We would do this (pseudo-code):

Select (if Invoice.Category = 'INSURANCE' then Invoice.Gross else Invoice.Net end if) from Invoice

Again, it looks simple enough, but unfortunately the real code to do this is very complicated and grossly inefficient at the moment, not to mention completely impossible in certain situations. In 2005 the method above would be perfectly legal, and using Microsoft’s CLR compiler to pre-compile the code, it’s considered adequate (it’s still not as good as plain SQL, but it’s good enough). This and the performance improvements in the new server would be enough to warrant an upgrade on their own.

What we’re doing next

We have set up two MSDN’d 2005 servers and mirrored our web server as a test bed for upgrading our code. Fortunately the vast majority of our code will still work, but to take advantage of the upgrades and new features we will have to re-write vast swathes of it. And all of our 500+ DTS packages and jobs will have to be completely re-written. And then comes the fun of learning an entirely new interpreter and compiler, and tuning it for maximum performance.

I’ll keep you updated.

28 March 2006 | 137,021 views

Ophcrack 2.2 Password Cracker Released

Ophcrack is a Windows password cracker based on a time-memory trade-off using rainbow tables. This is a new variant of Hellman’s original trade-off, with better performance. It recovers 99.9% of alphanumeric passwords in seconds.

We mentioned it in our RainbowCrack and Rainbow Tables article.
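The time-memory trade-off Ophcrack relies on is easy to illustrate: spend time up front hashing a candidate space, then recover passwords by lookup instead of brute force. Here’s a toy Python sketch (MD5 over a tiny alphabet stands in for the LM/NT hashes, and a plain dictionary stands in for the chain-compressed tables a real rainbow table uses):

```python
# Toy illustration of the time-memory trade-off: precompute hashes once,
# then recovering a password is just a table lookup.
import hashlib
from itertools import product

ALPHABET = "abc123"           # tiny alphabet so precomputation is instant

def h(pw):
    return hashlib.md5(pw.encode()).hexdigest()

# One-off precomputation over every 1-3 character candidate.
table = {h("".join(c)): "".join(c)
         for n in range(1, 4)
         for c in product(ALPHABET, repeat=n)}

def crack(digest):
    """Lookup instead of brute forcing each hash from scratch."""
    return table.get(digest)

recovered = crack(h("ab1"))   # found instantly in the precomputed table
```

Real rainbow tables compress the table into hash/reduce chains so that far larger keyspaces fit on disk, which is why Ophcrack can cover the full alphanumeric space.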


  • (feature) support of the new table set (alphanum + 33 special chars – WS-20k)
  • (feature) easier configuration for the table set (tables.cfg)
  • (feature) automatic definition of the number of tables to use at the same time (batch_tables) by querying the system for the size of the memory
  • (feature) speed-up in tables reading
  • (feature) cleaning of the memory to make room for table readahead (Linux version only)
  • (feature) improved installer for windows version
  • (fix) change of the default share for pwdump4 (ADMIN$)

Get it at http://sourceforge.net/projects/ophcrack


27 March 2006 | 6,894 views

Information about the Internet Explorer Exploit createTextRange Code Execution

Internet Storm Center’s always informative Diary has some good information.

At the urging of Handler Extraordinaire Kyle Haugsness, I tested the sploit on a box with software-based DEP and DropMyRights… here are the results:

Software-based DEP protecting core Windows programs: sploit worked
Software-based DEP protecting all programs: sploit worked
DropMyRights, config’ed to allow IE to run (weakest form of DropMyRights protection): sploit worked
Active Scripting Disabled: sploit failed

So, go with the last one, if you are concerned. By the way, you should be concerned.

It didn’t take long for the exploits to appear for that IE vulnerability. One has been making the rounds that pops the calculator up (no, I’m not going to point you to the PoC code, it is easy enough to find if you read any of the standard mailing lists), but it is a relatively trivial mod to turn that into something more destructive. For that reason, SANS is raising Infocon to yellow for the next 24 hours.

Microsoft recommends you turn Active Scripting OFF to protect against this vulnerability.

Source: ISC

Yah I know, yet another reason to dump Internet Explorer and grab Firefox; not that anyone reading this site would be using Internet Exploder…

The code is along the lines of:

<input type="checkbox" id="c">

You can find the Bleeding Snort rule for the IE Exploit here.

Microsoft has now confirmed this.

“We’re still investigating, but we have confirmed this vulnerability and I am writing a Microsoft Security Advisory on this,” writes Lennart Wistrand, security program manager with the Microsoft Security Response Center, in a blog posting. “We will address it in a security update.”

There is also a 3rd party fix for this from eEye.

27 March 2006 | 8,307 views

Sealing Wafter – Defend Against OS Fingerprinting for OpenBSD

One way to defend against OS fingerprinting from tools such as nmap, queso, p0f, xprobe etc is to change the metrics that they base their analysis on.

One way to do this with OpenBSD is to use Sealing Wafter.

Goals of Sealing Wafter:
1. To reduce OS detection based on well-known fingerprints of network stack behavior.
2. To have the ability to load custom rules into the stack.
3. To unload, modify, reload the kernel module with on the fly rules. (great feature at packet parties)
4. To learn how the magic of TCP/IP stacks works.

What Sealing Wafter currently provides:
1. Hide from Nmap Syn/Xmas/Null scans, as well as the specific fingerprinting packets.
2. Ability to see what your stack is receiving without the need to drop your network device into promisc mode.
3. Complete control over rules that you can load on the fly to deal with specific incoming packets.
4. Initial support for passive detection of several OSes has been added for SYNs.

Weaknesses in current Sealing Wafter:
1. Full connection scans, e.g. nmap -sT, will still find open ports. This is because I have yet to find anything that separates a real TCP connection from an nmap full connection (most likely there isn’t one).
2. Can be very verbose when under heavy load. I have run this on my heaviest web servers, and have not noticed any major overhead.
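The passive-detection idea in point 4 above is simple enough to sketch: match fields of an incoming SYN, such as the initial TTL and TCP window size, against a table of known stack signatures. A toy Python version (the signature values here are illustrative placeholders, not a real p0f database):

```python
# Sketch of p0f-style passive OS detection on SYN packets: compare the
# packet's inferred initial TTL and TCP window size against signatures.
# The signature table below is illustrative, not real fingerprint data.

SIGNATURES = {
    (64, 5840):   "Linux (illustrative)",
    (128, 65535): "Windows (illustrative)",
    (64, 16384):  "OpenBSD (illustrative)",
}

def guess_os(ttl, window):
    """Round the observed TTL up to the nearest common initial value
    (each hop decrements it), then look the pair up."""
    for initial in (32, 64, 128, 255):
        if ttl <= initial:
            return SIGNATURES.get((initial, window), "unknown")
    return "unknown"
```

Changing those metrics at the kernel level, which is what Sealing Wafter does, is exactly what makes this kind of lookup return the wrong answer.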

Download the C code for the LKM here: Sealing Wafter

25 March 2006 | 155,807 views

Download youtube.com videos?

Ever wanted to download those cool videos from youtube.com? (It’s an online video storage site, similar to imageshack.us for storing images.) Can’t, because those peeps made it difficult for you to just download them for offline use? Well now you can!

Go to fileleecher.com and follow the instructions on how to copy the youtube.com video link and download the video. Once you’ve downloaded the video you’ll have to rename it to .flv if it doesn’t already have the extension. Then you’ll need to download an encoder to convert the .flv file format into other formats. For that you’ll need the Riva FLV Encoder. The installation includes a player for FLV and the encoder for converting it to MPEG or AVI.
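For the renaming step, FLV files start with the three bytes “FLV”, so you can check the signature before adding the extension. A small Python helper (hypothetical, just to illustrate the check; the download and conversion still need the tools above):

```python
# Rename a downloaded file to .flv, but only if it really carries the
# FLV signature (the bytes "FLV" at the start of the file).
import os

def ensure_flv_extension(path):
    """Return the file's (possibly new) path, renaming it to .flv if needed."""
    with open(path, "rb") as f:
        if f.read(3) != b"FLV":
            return path                 # not an FLV file; leave it alone
    if path.lower().endswith(".flv"):
        return path                     # already correctly named
    new_path = path + ".flv"
    os.rename(path, new_path)
    return new_path
```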

After all that you can do whatever you want with the videos: put them on your video iPod or PSP, or even convert them to .3GP to put them on your mobile phone.

Many thanks to CYBERAXIS SG for this site.


25 March 2006 | 6,702 views

Spammer Gets 8 Years in Jail for Identity Theft

Good I say, nothing worse than a spammer.

A bulk e-mailer who looted more than a billion records with personal information from a data warehouse has been sentenced to eight years in prison, federal prosecutors said Wednesday.

Scott Levine, 46, was sentenced by a federal judge in Little Rock, Ark., after being found guilty of breaking into Acxiom’s servers and downloading gigabytes of data in what the U.S. Justice Department calls one of the largest data heists to date. Acxiom, based in Little Rock, says it operates the world’s largest repository of consumer data, and counts major banks, credit card companies and the U.S. government among its customers.

In August 2005, a jury convicted Levine, a native of Boca Raton, Fla., and former chief executive of a bulk e-mail company called Snipermail.com, of 120 counts of unauthorized access to a computer connected to the Internet. The U.S. government says, however, there was no evidence that Levine used the data for identity fraud.

Looks like for some reason the FTP server had access to the SAM file, or a copy of it, and this ‘hacker’ downloaded it and then brute forced the hashes.

I wonder if he used RainbowCrack and Rainbow Tables?

If he read this site he might have done ;)

According to court documents, Levine and others broke into an Acxiom server used for file transfers and downloaded an encrypted password file called ftpsam.txt in early 2003. Then they ran a cracking utility on the ftpsam.txt file, prosecutors said, discovered 40 percent of the passwords, and used those accounts to download even more sensitive information.

Source: News.com
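The cracking run described above boils down to hashing each wordlist candidate and comparing it against the stolen hashes. A toy Python sketch (SHA-1 and the wordlist are illustrative stand-ins; the actual hash format of ftpsam.txt isn’t public, and tools like RainbowCrack trade this per-hash work for precomputed tables):

```python
# Toy dictionary attack: hash each candidate word and check whether it
# matches any of the stolen password hashes. Hash choice is illustrative.
import hashlib

def crack_hashes(stolen_hashes, wordlist):
    """Return {hash: recovered_password} for every hash the wordlist hits."""
    recovered = {}
    for word in wordlist:
        digest = hashlib.sha1(word.encode()).hexdigest()
        if digest in stolen_hashes:
            recovered[digest] = word
    return recovered

words = ["letmein", "password", "darknet"]
stolen = {hashlib.sha1(w.encode()).hexdigest() for w in ["password", "s3cret"]}
hits = crack_hashes(stolen, words)   # only "password" is in the wordlist
```

A 40% hit rate, as in the Acxiom case, just means the wordlist (or brute-force keyspace) covered 40% of the passwords people actually chose.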