Archive for the ‘Exploits, Hacking and Security’ Category

Just found this one this morning: a phishing email arrived on a user’s machine claiming to be from UPS. It didn’t even look official, and it was all in plain text. The message is copied below; the supposed date on which the parcel was undeliverable varies from copy to copy.

Subject: UPS Delivery Problem NR.9618

Hello!

Unfortunately we failed to deliver the package which was sent on the 24th of June in time because the recipient’s address is wrong.

Please print out the invoice copy attached and collect the package at our office.

United Parcel Service.

The interesting thing was this: I saved the attachment to the C: drive, still inside its .zip archive, then scanned the .zip with AVG, and AVG reported no viruses found.

On the other hand, it could have been a program containing no code that harms the system itself, but which still sends off details, making it malicious in intent rather than in code. That’s only a guess, because I didn’t open it….! 😛

This was an official warning released on the UPS website:

Attention Virus Warning
Service Update

We have become aware there is a fraudulent email being sent that says it is coming from UPS and leads the reader to believe that a UPS shipment could not be delivered. The reader is advised to open an attachment reportedly containing a waybill for the shipment to be picked up.

This email attachment contains a virus. We recommend that you do not open the attachment, but delete the email immediately.

UPS may send official notification messages on occasion, but they rarely include attachments. If you receive a notification message that includes an attachment and are in doubt about its authenticity, please contact customerservice@ups.com.

Please note that UPS takes its customer relationships very seriously, but cannot take responsibility for the unauthorized actions of third parties.

Thank you for your attention.

The attachment contains malware, detected as Trj/Agent.JEN by Internet Security company PandaLabs, that can replace an important file on Windows computers and then download other malware to the infected computer. PandaLabs notes:

This malware copies itself into the system, replacing the Windows Userinit.exe (the file that runs explorer.exe, the system’s interface, and other important processes), and copies the legitimate file to userini.exe so that the computer can keep working properly.

Additionally, it establishes a connection with a Russian domain, which has been used on some occasions by banker Trojans. From this domain it redirects the request to a German domain in order to download a rootkit and a rogue antivirus, detected as Rootkit/Agent.JEP and Adware/AntivirusXP2008 respectively.

Apparently, the reported sender is now coming from DHL.

The way signal strength varies in a wireless network can reveal what’s going on behind closed doors.

It’s every schoolboy’s dream: an easy way of looking through walls to spy on neighbors, monitor siblings, and keep tabs on the sweet jar. And now a dream no longer…

Researchers at the University of Utah say that the way radio signals vary in a wireless network can reveal the movement of people behind closed doors. Joey Wilson and Neal Patwari have developed a technique called variance-based radio tomographic imaging that processes the signals to reveal signs of movement. They have even tested the idea with a 34-node wireless network using the IEEE 802.15.4 protocol, the personal-area-network protocol on which home-automation standards such as ZigBee are built.

The basic idea is straightforward. The signal strength at any point in a network is the sum of all the paths the radio waves can take to get to the receiver. Any change in the volume of space through which the signals pass, for example caused by the movement of a person, makes the signal strength vary. So by “interrogating” this volume of space with many signals, picked up by multiple receivers, it is possible to build up a picture of the movement within it.
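The idea lends itself to a short simulation. The sketch below is my own illustration, not the researchers’ code: given received-signal-strength (RSS) time series for the links between nodes at known positions, it computes each link’s short-term variance and back-projects it onto a pixel grid, so that pixels crossed by “jittery” links light up. The node layout, the ellipse width `lam`, and the normalization are all assumptions made up for the demo.

```python
import numpy as np

def link_variance(rss, window=20):
    """Short-term variance of each link's RSS time series.
    rss: (n_samples, n_links) array of received signal strength in dB."""
    recent = rss[-window:]
    return recent.var(axis=0)

def rti_image(node_xy, links, link_var, grid=32, lam=0.05):
    """Back-project per-link variance onto a pixel grid.
    A pixel contributes to a link if it lies within an ellipse of
    'excess path length' lam around the straight node-to-node line."""
    xs = np.linspace(0.0, 1.0, grid)
    px, py = np.meshgrid(xs, xs)
    image = np.zeros((grid, grid))
    for (i, j), v in zip(links, link_var):
        a, b = node_xy[i], node_xy[j]
        d = np.linalg.norm(b - a)
        # distance from each pixel to both endpoints; the ellipse test
        # keeps pixels whose detour over the direct path is below lam
        da = np.hypot(px - a[0], py - a[1])
        db = np.hypot(px - b[0], py - b[1])
        weight = (da + db < d + lam) / max(np.sqrt(d), 1e-6)
        image += weight * v
    return image

# Tiny demo: four nodes on a unit square, two links; only the top
# link sees a fluctuating signal, so only the top edge lights up.
node_xy = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
links = [(0, 3), (2, 1)]                  # bottom edge, top edge
rss = np.zeros((40, 2))
rss[:, 1] = 5 * np.sin(np.arange(40))     # movement only on link 1
img = rti_image(node_xy, links, link_variance(rss), grid=16)
```

Pixels along the top edge of `img` end up with nonzero values while the quiet bottom edge stays at zero, which is the whole trick: movement perturbs the links it crosses, and the overlap of many perturbed links localizes it.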

In tests with a 34-node network set up outside a standard living room, Wilson and Patwari say they were able to locate moving objects in the room to within a meter or so. That’s not bad, and the team says there is ample room to improve the accuracy while reducing the number of nodes.

The advantage of this technique over others is, first, its cost. The nodes in such a network are off-the-shelf and therefore cheap. Other through-wall viewing systems cost in excess of $100,000. The second advantage is the ease with which it can be set up. Wilson and Patwari say that adding a GPS receiver to each node allows it to work out its own location, which should dramatically speed up the imaging process. Other systems have to be “trained” to recognize the environment.

Wilson and Patwari have even worked out how their system might be used:

“We envision a building imaging scenario similar to the following. Emergency responders, military forces, or police arrive at a scene where entry into a building is potentially dangerous. They deploy radio sensors around (and potentially on top of) the building area, either by throwing or launching them, or dropping them while moving around the building. The nodes immediately form a network and self-localize, perhaps using information about the size and shape of the building from a database (eg Google maps) and some known-location coordinates (eg using GPS). Then, nodes begin to transmit, making signal strength measurements on links which cross the building or area of interest. The received signal strength measurements of each link are transmitted back to a base station and used to estimate the positions of moving people and objects within the building.”

That’s ambitious, but if they do get their system to the point where it can be used like this, it raises another problem: privacy.

How might such cheap and easy-to-configure monitoring networks be used if they become widely available? What’s to stop next door’s teenage brats from monitoring your every move, or house thieves choosing their targets on the basis that nobody is inside?

Of course, in the cat-and-mouse game of surveillance, it shouldn’t be too hard to build a device that disables such a monitoring network. But only if you know it’s there in the first place.

There are fun and games galore to be had with this idea.

Source

Updated A small army of security and privacy researchers has called on Google to automatically encrypt all data transmitted via its Gmail, Google Docs, and Google Calendar services.

Google already uses Hypertext Transfer Protocol Secure (https) encryption to mask login information on this trio of cloud-based web applications. And netizens have the option of turning on https for all transmissions. But full-fledged https protection isn’t flipped on by default.

“Google’s default settings put customers at risk unnecessarily,” reads a letter lobbed to Google CEO Eric Schmidt by 37 academics and researchers. “Google’s services protect customers’ usernames and passwords from interception and theft. However, when a user composes email, documents, spreadsheets, presentations and calendar plans, this potentially sensitive content is transferred to Google’s servers in the clear, allowing anyone with the right tools to steal that information.”

Signatories include Harvard-based Google watcher Benjamin Edelman; Chris Hoofnagle, the director of Information Privacy Programs at the Berkeley Center for Law & Technology; and Ronald L. Rivest, the R in RSA.

In the past, Google has said it doesn’t automatically enable https for performance reasons. “https can make your mail slower,” the company explained in a July 2008 blog post announcing Gmail’s https-session option. “Your computer has to do extra work to decrypt all that data, and encrypted data doesn’t travel across the internet as efficiently as unencrypted data. That’s why we leave the choice up to you.”

But the 37 researchers see things differently. “Once a user has loaded Google Mail or Docs in their browser, performance does not depend upon a low latency Internet connection,” they write. “The user’s interactions with Google’s applications typically do not depend on an immediate response from Google’s servers. This separation of the application from the Internet connection enables Google to offer ‘offline’ versions of its most popular Web applications.”

Even where low latency matters, they say, outfits such as Bank of America, American Express, and Adobe have protected their services via https without a heavy performance hit. Adobe, for instance, automatically encrypts Photoshop Express sessions.

Of course, another good example is…Google itself. The company does automatic encryption with Google Health, Google Voice, AdSense, and AdWords. “Google’s engineers have created a low-latency, enjoyable experience for users of Health, Voice, AdWords and AdSense – we are confident that these same skilled engineers can make any necessary tweaks to make Gmail, Docs, and Calendar work equally well in order to enable encryption by default,” the researchers write.

The problem, they say, is that everyday netizens don’t realize the importance of encryption – and that Google fails to properly protect them from their own ignorance. Gmail now includes a setting that lets you “always use https.” But the researchers complain that most users don’t know it’s there. And with Docs and Calendar, they point out, users can’t use session encryption unless they remember to type https into their browser address bar every time they use the services.

If Google refuses to turn on https by default, the researchers say, the company should at least make sure that users understand the risks of encryption-less transmissions. There are four things they suggest:

  • Place a link or checkbox on the login page for Gmail, Docs, and Calendar that causes that session to be conducted entirely over https. This is similar to the “remember me on this computer” option already listed on various Google login pages. As an example, the text next to the option could read “protect all my data using encryption.”
  • Increase visibility of the “always use https” configuration option in Gmail. It should not be the last option on the Settings page, and users should not need to scroll down to see it.
  • Rename this option to increase clarity, and expand the accompanying description so that its importance and functionality is understandable to the average user.
  • Make the “always use https” option universal, so that it applies to all of Google’s products. Gmail users who set this option should have their Docs and Calendar sessions equally protected.

We have asked Google for a response to the letter, and once it arrives, we’ll toss it your way. Odds are, it will be completely non-committal.

In defense of Google, the company does go farther than many other big-name web outfits. As the researchers point out in their letter, Microsoft Hotmail, Yahoo Mail, Facebook, and MySpace don’t even offer an https option. But the 37 hold Google to a higher standard. “Google has made important privacy promises to users, and users naturally and reasonably expect Google to follow through on those promises.” ®

Update

Google has responded with a blog post. “Free, always-on HTTPS is pretty unusual in the email business, particularly for a free email service, but we see it as another way to make the web safer and more useful. It’s something we’d like to see all major webmail services provide,” the company says. “In fact, we’re currently looking into whether it would make sense to turn on HTTPS as the default for all Gmail users.”

Google is planning a trial with a small number of Gmail users to test the effect of always-on https. “Does it load fast enough? Is it responsive enough? Are there particular regions, or networks, or computer setups that do particularly poorly on HTTPS?” the blog continues. “Unless there are negative effects on the user experience or it’s otherwise impractical, we intend to turn on HTTPS by default more broadly, hopefully for all Gmail users.”

The company is also considering how best to make automatic https work with Docs and Spreadsheets.

Correction

Google has also said that the researchers were in error in saying that a cookie from Docs or Calendar also gives access to Gmail without https. We have removed this error from our story as well.

Source

Encrypt now, for a better tomorrow….

Cyber cops want new laws to allow remote searches of seized hard drives in the hope they will help reduce long digital forensics backlogs – of up to two years for some forces.

It would mean specialised officers in London could access data held on hard drives in police evidence rooms nationally. How such information sharing would work technically hasn’t been decided.

The Association of Chief Police Officers (ACPO) is working with the Attorney General’s office on what changes to data law would be needed to allow the new Metropolitan Police Central e-Crime Unit (PCeU) to gather intelligence from around the country.

Detective Superintendent Charlie McMurdie, the head of PCeU, said at Infosec on Tuesday such powers would help the new unit get more up-to-date intelligence on online frauds. She said backlogs of unsearched seized hard drives were typically 18 to 24 months for the UK’s 43 police forces.

A spokesman for PCeU declined to provide further details of the ongoing legal work, which would require Parliamentary approval, saying it was too early to comment.

ACPO said: “ACPO e-crime committee is currently working with the Attorney-General’s Office on a range of issues; including whether changes to the law are required. As work is currently underway, we are unable to provide any further details at this time.”

At present, the proposed legislative changes don’t appear to be related to EU moves to step up hacking of PCs in homes and offices by police.

PCeU, which was formed six months ago, has 20 full-time network investigators who, it is hoped, would carry out remote intelligence work if new legislation were brought in. The unit was set up to fill the gap in e-crime policing left when the National Hi-Tech Crime Unit was absorbed into the Serious and Organised Crime Agency in 2006.

McMurdie also appealed yesterday for volunteer help from industry, citing limited resources. PCeU has £3.5m in funding from the Home Office over the next two and a half years.

Earlier in the day, former Home Secretary David Blunkett said he hoped PCeU would receive more funding.

Source

UNFINISHED ARTICLE

I was preparing to write up the solution in this article, but I was unable to forcibly uninstall the software!

I tried all of the ideas I could think of, plus the suggestions scattered about the Internet for this piece of software. The request to remove it genuinely came from a user: the K9 web filter was installed on an old system, and the user had forgotten the password, leaving them unable to browse the Internet.

I tried forcibly uninstalling the application with a tool called Portable Uninstall Tool, which can be found on another blog called FC Portables. I used the tool to forcibly remove the K9 web filter and also to scan for and remove all registry entries relating to the web filter. After that I tried more methods:

  1. Ran CCleaner on the system, removing all of the files it found, and thoroughly cleaned the registry. If you are also trying this, don’t forget to back up the registry when it asks.
  2. Manually searched through the registry, deleting all registry keys referencing K9 Webfilter or BlueCoat.

The illusion of security….

US prosecutors have charged a man with stealing data relating to 130 million credit and debit cards.

Officials say it is the biggest case of identity theft in American history.

They say Albert Gonzales, 28, and two unnamed Russian co-conspirators hacked into the payment systems of retailers, including the 7-Eleven chain.

Prosecutors say they aimed to sell the data on. If convicted, Mr Gonzales faces up to 20 years in jail for wire fraud and five years for conspiracy.

He would also have to pay a fine of $250,000 (£150,000) for each of the two charges.

Mr Gonzales used a complicated technique known as an “SQL injection attack” to penetrate networks’ firewalls and steal information, the US Department of Justice said.
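For readers unfamiliar with the term, SQL injection works by splicing attacker-controlled text into a query string so that it executes as SQL rather than being treated as data. Here is a minimal, hypothetical illustration using Python’s built-in sqlite3 module and a made-up `cards` table; nothing here reflects the actual systems named in the indictment.

```python
import sqlite3

# A toy database standing in for a payment back-end (entirely made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (number TEXT, owner TEXT)")
conn.execute("INSERT INTO cards VALUES ('4111-1111', 'alice')")

# VULNERABLE: user input spliced straight into the SQL string.
# The classic "' OR '1'='1" payload turns the WHERE clause into a
# tautology, so the query returns every row in the table.
user_input = "x' OR '1'='1"
query = "SELECT number FROM cards WHERE owner = '%s'" % user_input
leaked = conn.execute(query).fetchall()   # -> all card numbers

# SAFE: a parameterized query treats the input purely as data,
# so the payload matches no owner and nothing leaks.
safe = conn.execute(
    "SELECT number FROM cards WHERE owner = ?", (user_input,)
).fetchall()                              # -> empty result
```

The defense is the same in every language: never build SQL by string concatenation; use placeholders (`?` in sqlite3) so the driver keeps code and data separate.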

His corporate victims included Heartland Payment Systems – a card payment processor, convenience store 7-Eleven and Hannaford Brothers, a supermarket chain, the DOJ said.

According to the indictment, the group researched the credit and debit card systems used by their victims, attacked their networks and sent the data to computer servers they operated in California, Illinois, Latvia, the Netherlands and Ukraine.

The data could then be sold on, enabling others to make fraudulent purchases, it said.

Mr Gonzales is already in custody on separate charges of hacking into the computer system of a national restaurant chain.

This latest case will raise fresh concerns about the security of credit and debit cards used in the United States, the BBC’s Greg Wood reports.

Source

Here’s a very interesting one 😀

Computer keyboards are often used to transmit sensitive information such as username/password (e.g. to log into computers, to do e-banking money transfer, etc.). A vulnerability on these devices will definitely kill the security of any computer or ATM.

Wired and wireless keyboards emit electromagnetic waves, because they contain electronic components. This electromagnetic radiation could reveal sensitive information such as keystrokes. Although Kuhn already tagged keyboards as risky, we did not find any experiment or evidence proving or refuting the practical feasibility of remotely eavesdropping on keystrokes, especially on modern keyboards.

To determine whether wired and wireless keyboards generate compromising emanations, we measured the electromagnetic radiation emitted when keys are pressed. To analyze compromising radiation, one generally uses a receiver tuned to a specific frequency. However, this method may not be optimal: the signal does not contain the maximal entropy, since a significant amount of information is lost.

Our approach was to acquire the signal directly from the antenna and to work on the whole captured electromagnetic spectrum.
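To make the “whole spectrum” idea concrete, here is a toy digital sketch of my own (not the researchers’ RF measurement chain): a raw capture is cut into windows, each window is run through an FFT, and a keystroke-like burst shows up as a window with a dominant spectral peak. The sample rate, burst frequency, and all signal parameters below are invented for the demo.

```python
import numpy as np

def detect_bursts(signal, fs, win=1024):
    """Split a raw capture into fixed windows and return the dominant
    frequency (Hz) and peak spectral energy of each window via an FFT."""
    n = len(signal) // win
    freqs, energies = [], []
    for k in range(n):
        chunk = signal[k * win:(k + 1) * win]
        spectrum = np.abs(np.fft.rfft(chunk))
        peak = spectrum[1:].argmax() + 1          # skip the DC bin
        freqs.append(peak * fs / win)
        energies.append(spectrum[peak])
    return np.array(freqs), np.array(energies)

# Synthetic capture: one second of background noise with a single
# keystroke-like burst at 6 kHz buried in the middle.
fs = 48_000
rng = np.random.default_rng(0)
capture = 0.1 * rng.standard_normal(fs)
t = np.arange(1024) / fs
capture[10_000:11_024] += np.sin(2 * np.pi * 6_000 * t)

freqs, energies = detect_bursts(capture, fs)
burst_window = energies.argmax()   # window containing the keystroke
```

`freqs[burst_window]` lands near 6 kHz, which is the toy version of the point made above: working on the full captured spectrum lets you find where the keystroke energy actually is, instead of committing to one tuned frequency in advance.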

We found 4 different ways (including the Kuhn attack) to fully or partially recover keystrokes from wired keyboards at a distance up to 20 meters, even through walls. We tested 12 different wired and wireless keyboard models bought between 2001 and 2008 (PS/2, USB and laptop). They are all vulnerable to at least one of our 4 attacks.

We conclude that the wired and wireless computer keyboards sold in stores generate compromising emanations (mainly because of cost pressures in the design). Hence they are not safe for transmitting sensitive information. No doubt our attacks can be significantly improved, since we used relatively inexpensive equipment.

UPDATE: This paper has been accepted to the 18th USENIX Security Symposium 2009 and will be available in August.

Here are the two videos that are featured in this article.

Compromising Electromagnetic Emanations of Wired and Wireless Keyboards – Video 1

Compromising Electromagnetic Emanations of Wired and Wireless Keyboards – Video 2

Frequently Asked Questions

Q: Why did you disconnect the laptop’s power supply?

A: At the beginning of our experiments, we obtained very good results. We were able to capture the signal at an impressive distance. We discovered that the shared ground may act as an antenna and significantly improve the range of the attack. To avoid any physical support for compromising emanations, we disconnected every cable connected to the computer. Thus, the objective of this demo is to confirm that compromising emanations are not carried by the power supply wire.

Q: The keyboard is connected to a laptop. Do the attacks still work if the keyboard is connected to a regular computer (i.e. a PC tower)?

A: Yes, our attacks still work (and are generally better on a PC tower). Since a desktop computer (PC tower) has no battery, it must be connected to the electrical network. Thus, we cannot avoid the shared-ground effect (see the previous question).

Q: Why did you remove the LCD display? Is it because the LCD generates too much noise?

A: We removed the LCD display because it can emit compromising signals (see this paper) and could carry keyboard emanations. To avoid any support for compromising emanations, we disconnected the LCD display. The noise generated by the LCD display is insignificant, since it can easily be filtered out. Moreover, in the first video you can see two powered LCD displays in the same room during the measurements. They do not disrupt the experiment.

Q: You are typing so slowly! Why?

A: The filtering and decoding processes take time (about two seconds per pressed key). To make sure that we captured all keystrokes we typed very slowly. With hardware-based computation (i.e. FPGA) the filtering and decoding processes can obviously be instantaneous (e.g. less than the minimum time between two keystrokes), it’s just a matter of money.

Q: I found something odd. Your tool seems to capture 12 or 8 characters (depending on the video) and then decode them. How do you know that you will have exactly 12 (or 8) keys to recover?

A: If you look carefully at the videos, the filtering and decoding processes take more time than the capturing process. To avoid spending two seconds between each keystroke, we first capture a fixed number of keys and then recover the keystrokes. With some dedicated hardware-based FFT computation, we can avoid this delay (see the previous question). Thus, we fixed the number of characters for the demo, but we could use an infinite scanning loop as well.

Q: If there is more than one keyboard in the same room, are you able to distinguish them and to recover all keystrokes?

A: Yes, each keyboard can be distinguished even if the keyboards come from the same manufacturer and are the same model (more information in our paper).

Q: Are wireless keyboards vulnerable as well?

A: Yes, they are 🙂

Source