This is not the first time Apache.org has been hacked; it was compromised back in September 2009 using SSH keys.
This time another targeted attack against the site succeeded, allowing the attackers to capture the passwords of users logging into the bug-tracking service. It also exposed the entire password list which, although hashed, was salted with a static salt rather than a random one, so it's still vulnerable to brute-forcing.
I’d say a good set of Rainbow Tables would make short work of it.
Hackers penetrated the heavily-fortified servers for Apache.org in a “direct, targeted attack” that captured the passwords of anyone who used the website’s bug-tracking service over a three-day span last week.
The breach, the second to hit Apache.org in eight months, also exposed a much larger list of passwords belonging to people who accessed the site’s bug-tracking section. While the databases used a one-way hash to disguise the passwords, two of the lists are vulnerable to dictionary attacks because Atlassian, the maker of issue-tracking software used by Apache, failed to add “random salt” to them.
As a result, Apache officials said users who logged in to the bug section of the website from April 6 to April 9 “should consider the password as compromised, because the attackers changed the login form to log them.” They also warned that there’s a high risk of compromise to other users if they employed simple passwords based on dictionary words.
If you are a user of Apache.org, and the bug tracker in particular, and you logged in between April 6th and April 9th, you should consider your password compromised. That means change your password, and if you use the same password anywhere else, change those too.
Personally if I had a login there I’d change my password regardless, because given enough time and processing power most of the hashed passwords can be cracked.
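To illustrate the salting issue, here's a minimal Python sketch (the salt value and scheme are purely illustrative, not Atlassian's actual implementation). With a static salt every account shares the same salt, so one precomputed table or a single pass over a wordlist covers the whole database; with a per-user random salt, identical passwords produce different hashes and each one has to be attacked on its own:

```python
import hashlib
import os

STATIC_SALT = b"example-static-salt"  # hypothetical value, for illustration only

def hash_static(password: str) -> str:
    # Same salt for every user: one precomputed table (or one pass over a
    # wordlist) covers the entire database.
    return hashlib.sha512(STATIC_SALT + password.encode()).hexdigest()

def hash_per_user(password: str):
    # Unique random salt per user: identical passwords hash differently,
    # so each hash has to be attacked individually.
    salt = os.urandom(16)
    digest = hashlib.sha512(salt + password.encode()).hexdigest()
    return salt.hex(), digest

# Two users who picked the same weak password
print(hash_static("letmein") == hash_static("letmein"))      # True  - identical hashes
print(hash_per_user("letmein") == hash_per_user("letmein"))  # False - salts differ
```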
I think Apache.org should mandate a forced password change for all accounts in the system for security reasons; I don't think anyone would complain.
The intrusion began on April 5 when unknown attackers using a hacked server from Slicehost opened a new bug report on Apache.org. The post contained a shortened web link from tinyurl.com that exploited an XSS, or cross-site scripting, vulnerability on Apache’s support website.
The hole was the result of a bug in JIRA, the issue-tracking software made by a company called Atlassian. The exploit was designed to steal session cookies used to authenticate people logged in to Apache’s JIRA system. When several Apache administrators following the fraudulent bug report clicked on the malicious link, their JIRA administrator rights were compromised.
The attackers also carried out a brute-force attack that flooded the site with hundreds of thousands of password combinations. By April 6, one of the two methods allowed the attackers to gain full administrative rights on the JIRA system. For three days, the hackers used their powers to copy users’ home directories and files and to install a program that logged the passwords of anyone accessing the system.
The initial attack vector was an XSS against the admins of the bug-tracking software which enabled the attackers to compromise their accounts and get further access to the system.
The full postmortem from the Apache team is here:
apache.org incident report for 04/09/2010
The same virtual host also attacked Atlassian directly and compromised their customer accounts.
Source: The Register
Michael Coates says
Regarding your statement:
“I’d say a good set of Rainbow Tables would make short work of it.”
One quick note, although rainbow tables are incredibly powerful, a generic rainbow table can’t be used in this case. As was mentioned in the article, a static salt was used when hashing the password. The attackers will now need to generate a completely new rainbow table using this new salt before any password cracking can occur. This will take some time and should give ample opportunity for users to change their passwords.
I do agree that it would be better to use a per-user salt – which would essentially require an entire rainbow table to be generated for each password that was to be cracked.
-Michael
Michael Coates says
Perhaps I misread the text. I read the following statement:
“failed to add “random salt” to them.”
and somehow inferred that this meant a static salt was used. Now that I read the article again, it looks like no salt was used for some of the password hashes. This is bad and vulnerable to generic rainbow tables.
Darknet says
Yah it seems like some were not hashed at all and some used static salts.
And well, building Rainbow tables doesn’t take too long with the processing power available on home computers now; it’s more a question of storage space than anything else.
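As a rough back-of-envelope figure (illustrative only), even a naive full lookup table for 8-character lowercase passwords hashed with SHA-512 already runs to around 15 TB, which is exactly the storage cost rainbow tables trade away by storing chains instead of every plaintext/hash pair:

```python
# Rough storage estimate for a naive lookup table (plaintext -> SHA-512).
# Rainbow tables compress this into chains at the cost of extra computation.
keyspace = 26 ** 8                 # 8-character, lowercase-only passwords
bytes_per_entry = 8 + 64           # plaintext (8 bytes) + SHA-512 digest (64 bytes)
total_bytes = keyspace * bytes_per_entry
print(f"{keyspace:,} entries ~= {total_bytes / 1e12:.1f} TB")
# 208,827,064,576 entries ~= 15.0 TB
```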
Daniel Miller says
Storage would indeed be a problem, but you’re overlooking the fact that unless you already have a rainbow table for SHA-512 (the hash mentioned in the report), cracking the passwords with a dictionary or an intelligent brute-force (Markov chains, frequency tables, etc) will be faster than generating an exhaustive rainbow table. SHA-2 is not a particularly speedy algorithm. It would take a lot more time than generating NTLM tables (MD4 hash).
The advantages of rainbow tables are twofold: not needing to regenerate candidate hashes from one cracking session to the next, and using indexed lookups instead of comparing each candidate to each hash. The first advantage is nullified unless the attacker is planning on compromising more unsalted SHA-512 hashes at a later date. The second advantage is minimal, since the computational effort of comparison is so much smaller than that of the hash function. Additionally, indexing cannot be done until the table is complete, whereas individual comparisons could yield a cracked password early in the process, eliminating the need to crack any further passwords.
Rainbow tables really only make sense when the hash used is very common (LM, NTLM, MD5, etc). Even then, you should do a cost-benefit analysis of the time, storage, and energy you will invest in building the tables vs how often you will use the tables in the future.
Darknet says
Good and valid points Daniel, I agree – either way, as the software used is known and the hashing algorithm can be established, they should be able to make fairly short work of it.
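To put a rough sketch behind that, here's a minimal Python dictionary attack against unsalted SHA-512 hashes (the hashes and wordlist below are made up purely for illustration). Each candidate is hashed once and checked against the leaked set, with no table to build or store, and a hit can come early in the run:

```python
import hashlib

# Hypothetical leaked, unsalted SHA-512 hashes (illustrative values only)
leaked = {
    hashlib.sha512(b"password1").hexdigest(),
    hashlib.sha512(b"apache2010").hexdigest(),
}

# Tiny stand-in wordlist; a real attack would iterate millions of candidates
wordlist = ["123456", "password1", "letmein", "apache2010"]

for candidate in wordlist:
    digest = hashlib.sha512(candidate.encode()).hexdigest()
    if digest in leaked:
        # A hit can land long before the keyspace is exhausted - the
        # advantage Daniel notes over building a full table first.
        print(f"cracked: {candidate}")
```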