Why “Private” Browsing Really Isn’t
The idea behind a Web browser’s private browsing mode seems simple enough. Any activity that takes place in that browser while private mode is engaged is either erased at the end of the session or stored in such a way that it is inaccessible after the fact. Anyone who inspects the computer later will find no traces of the user’s activity. That’s the theory. As it turns out, it’s extremely difficult to implement all this in a practical way.

Many of the current worries about private browsing stem from a recently published research paper, “An Analysis of Private Browsing Modes in Modern Browsers,” presented at the 2010 USENIX Security Symposium. The authors (Gaurav Aggarwal, Elie Bursztein, Collin Jackson, and Dan Boneh) created a test suite for private browsing and found many flaws, both conceptual and practical, in the way private browsing works. Some of them are immediately fixable; some... less so.
Every browser leaves behind something from private mode. The sheer number of ways a browser interacts with the surrounding OS, the file system, and the user almost guarantees something will be left behind. Bookmarks, form auto-complete data, user-approved self-signed SSL certificates, and downloaded files are four common examples. Granted, not every browsing session involves these things, but it’s easy to leave traces without realizing it.
Part of the study’s test regimen consisted of restricting attacks to what the authors called “after-the-fact forensics,” that is, analyzing the system after the private browsing session was closed. They observed that not all state changes during private browsing should be erased—emphasis on should, because erasing some of them arguably falls outside the scope of the browser’s responsibilities. Take downloads, for instance: Would it be wise for the browser to automatically scrub the file system of any files downloaded during private browsing? Some might argue it’s not the browser’s job to clean up such things, but that only highlights just how complex the interactions are between the browser and the rest of the system.
Each browser has a slightly different version of private browsing. Exactly what gets sanitized at the end of the session varies enormously from browser to browser, as do the methods used to sanitize it. Something that might be dealt with cleanly in one browser may be handled poorly in another, as much for policy reasons on the part of the browser’s makers as for technical ones.
For example, Chrome forbids extensions from running in Incognito mode to prevent user data from being leaked to disk. The user has to elect to allow any individual extension to run in Incognito mode, because Chrome has no way to ensure that extensions themselves don’t write data. But Firefox (as of version 3.6.10) allows extensions to run in Private Browsing mode. Mozilla’s design documents for extension developers do describe how to detect private browsing and act accordingly, as sketched below, but it’s still a possible privacy hole.
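For extension developers, the check Mozilla documented looks roughly like the following. This is a minimal sketch of the Firefox 3.5/3.6-era XPCOM interface (nsIPrivateBrowsingService); the Components global exists only inside Firefox’s extension environment, so it’s declared here just to keep the snippet self-contained:

```ts
// Sketch of the Firefox 3.5/3.6-era private-browsing check described in
// Mozilla's extension docs. This runs only inside a Firefox extension of
// that vintage; Components is Firefox's XPCOM entry point, declared here
// so the snippet stands alone as TypeScript.
declare const Components: any;

function inPrivateBrowsing(): boolean {
  const pbs = Components.classes["@mozilla.org/privatebrowsing;1"]
    .getService(Components.interfaces.nsIPrivateBrowsingService);
  return pbs.privateBrowsingEnabled;
}

// A well-behaved extension consults this before persisting anything:
if (inPrivateBrowsing()) {
  // skip writing history, caches, or other user data to disk
}
```

The catch, of course, is that nothing forces an extension author to perform this check—which is precisely the privacy hole.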
Browser extensions create security loopholes in private mode. Each browser also handles third-party add-ons differently in private mode. To wit: Firefox permits add-ons to run unrestricted (a major loophole); Chrome is a little more secure by default, but you can activate extensions manually if desired (and plug-ins, such as QuickTime or Flash, always run); and IE disables browser helper objects (BHOs) but allows ActiveX add-ons to run by default. It’s a hodgepodge, in large part because every browser’s add-on architecture is different.
Server-side privacy is still a problem. Private mode offers no additional protection for data once it leaves your machine: the sites you visit still log your IP address, which is all but impossible to avoid unless you route your traffic through a proxy server.
Even putting this aside, there are other betrayals of anonymity. Even if you’re in private browsing mode, a browser can be uniquely fingerprinted using JavaScript to analyze things such as the screen size, available fonts, time zone, and many other bits of data. The Electronic Frontier Foundation’s Panopticlick project (panopticlick.eff.org) demonstrates how easy it is to harvest all this information. We tried the EFF’s test ourselves and found that just about every browser on the market can be unmasked regardless of whether you’ve enabled private browsing. Firefox 3.6.10 and 4 (beta), Internet Explorer 8 and 9 (beta), Chrome 8, Opera 10.62, and Safari 5 all returned “at least 20 bits of identifying information” from Panopticlick, even when we used each browser’s privacy mode.
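To get a sense of how little code such fingerprinting requires, here is a minimal sketch in the spirit of Panopticlick; the attribute list is illustrative, not the EFF’s actual test set. Each value alone is common, but together they quickly add up to those 20-plus bits of identifying information:

```ts
// Minimal browser-fingerprint sketch: collect a few high-entropy
// attributes that page JavaScript can read regardless of private mode.
function fingerprint(): string {
  const signals = [
    navigator.userAgent,                                      // browser + OS version
    navigator.language,                                       // UI language
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display geometry
    String(new Date().getTimezoneOffset()),                   // time zone
    String(navigator.plugins.length),                         // installed plug-ins (a strong 2010-era signal)
  ];
  // Concatenate the signals; a real tracker would hash this string and
  // add many more sources (fonts, canvas rendering, and so on).
  return signals.join("||");
}

console.log(fingerprint());
```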
Browsers are vulnerable to attacks directed at the system itself. Although the following wasn’t a chief concern of the aforementioned study, it bears mentioning: Because the browser has to interact with the rest of the system in some way, attacks aimed at the OS, such as scanning the system swap file or RAM, are far harder to thwart or detect. Security consultant Rob Fuller demonstrated how to use commonly available tools, such as Process Memory Dumper (bit.ly/dbfqkv), to extract all sorts of private information from a Firefox session. Although this kind of attack requires that the process in question still be running (as opposed to forensics after the fact), it’s one more example of how the browser can only protect you against so much.
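To see why a running process is so exposed, consider how little effort it takes to sift a memory dump once one has been captured. The sketch below (Node.js; “firefox.dmp” is a placeholder for a dump produced by a memory-dumping tool) extracts URL-shaped strings from the raw bytes:

```ts
// Sketch: pull URL-like strings out of a process memory dump.
// "firefox.dmp" is a placeholder; any raw dump of browser memory will do.
import { readFileSync } from "node:fs";

const dump = readFileSync("firefox.dmp");

// Treat the dump as latin1 so every byte maps to exactly one character,
// then regex for printable, URL-shaped runs. Fine as a sketch; a real
// tool would stream the file rather than load it whole.
const text = dump.toString("latin1");
const urls = new Set(text.match(/https?:\/\/[\x21-\x7e]{4,200}/g) ?? []);

for (const url of urls) console.log(url);
```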
Our Own Testing
On a virtual machine loaded with a fresh copy of Windows 7 Home Premium, we installed and ran the latest versions of the following Web browsers: Chrome 7, Firefox 3.6.10, Opera 10.62, Internet Explorer 8, and Safari 5.0.2. Our test was deliberately simple: After browsing to a specific page in both conventional and private mode, we used NirSoft’s SearchMyFiles (bit.ly/a1fiez) to look within files created or modified by each browser for telltale strings—the page’s URL, its title, text found on the page, etc.
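For readers who want to reproduce this kind of sweep, the following sketch performs a comparable search without any special tools (the profile path and search string below are examples, not our actual test values):

```ts
// Sketch of the test: walk a directory tree and report files whose raw
// bytes contain a telltale string (a URL, page title, page text, etc.).
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Example values; substitute the browser's profile/cache path and a
// string from the page you visited in private mode.
const ROOT = "C:/Users/test/AppData/Local";
const NEEDLE = Buffer.from("example.com/secret-page");

function scan(dir: string): void {
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    try {
      const st = statSync(path);
      if (st.isDirectory()) {
        scan(path);                           // recurse into subdirectories
      } else if (st.size < 64 * 1024 * 1024   // skip very large files
                 && readFileSync(path).includes(NEEDLE)) {
        console.log("hit:", path);
      }
    } catch {
      // locked or unreadable file; skip it
    }
  }
}

scan(ROOT);
```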
What we found further demonstrated how inconsistently private browsing is implemented from one browser to the next. Chrome, Firefox, and Opera all showed no traces of the pages in question during and after a private browsing session. But Safari’s Web site icon cache (WebpageIcons.db) leaked information about domains visited during private sessions, and data from Internet Explorer’s browsing sessions was visible while the browser was open (although it was cleared after the browser was closed). This suggests that the first three browsers store private browsing cache information more securely—e.g., by encrypting the files that are written to disk. However, don’t take for granted that those cache files are entirely secure—they’re just less likely to give up their contents on casual inspection.
Browsers that write private-session data to disk leave themselves exposed to attacks that involve reading data left behind by deleted files. We checked to see if we could undermine Internet Explorer’s private browsing in this way, and found that we could locate phrases encountered on Web pages during private browsing sessions in IE simply by searching the disk’s unused space. It seems that IE doesn’t securely erase the cached files in question but simply marks them as deleted. Also, the beta version of IE9 appeared to handle private data the same way IE8 does. Browsers that store private session information securely (e.g., Chrome) don’t seem to suffer from this issue.
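Recovering such “deleted” data doesn’t require dedicated forensics software, either. Assuming the volume’s free space has been captured to a raw image file (the filename below is a placeholder), a chunked scan that overlaps reads by the length of the search phrase, so a match straddling a chunk boundary isn’t missed, will turn up the same strings:

```ts
// Sketch: search a raw image of a disk's unallocated space for a phrase.
// "freespace.img" is a placeholder for an image produced by a disk-
// imaging tool. Reads overlap by (needle length - 1) bytes so matches
// that straddle a chunk boundary aren't missed.
import { openSync, readSync, closeSync } from "node:fs";

const NEEDLE = Buffer.from("phrase seen during private browsing");
const CHUNK = 16 * 1024 * 1024; // 16MB reads

const fd = openSync("freespace.img", "r");
const buf = Buffer.alloc(CHUNK + NEEDLE.length - 1);
let pos = 0;

for (;;) {
  const n = readSync(fd, buf, 0, buf.length, pos);
  if (n < NEEDLE.length) break;      // nothing left that could match
  const view = buf.subarray(0, n);   // only search the bytes actually read
  let i = view.indexOf(NEEDLE);
  while (i !== -1) {
    console.log("match at byte offset", pos + i);
    i = view.indexOf(NEEDLE, i + 1);
  }
  pos += n - (NEEDLE.length - 1);    // step back to re-check the seam
}
closeSync(fd);
```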
Possible Solutions
If private browsing by itself is imperfect, what about using it in conjunction with other security measures? That’s an available tactic, but adding more security manually creates drawbacks of its own, most of them matters of inconvenience.
Virtual machines. One way to isolate the browser from outside attacks is to run it in a virtual machine, which adds a layer of protection between the browser and the main OS. But this approach creates at least as many problems as it solves: it’s clunky to use casually, unfriendly to novice users, and considerably more memory-intensive than running a browser natively.
Sandboxing. A close cousin to using a VM is sandboxing, which involves running the browser through an application that intercepts all of the program’s disk operations and mops them up, leaving as few traces as possible. Sandboxie (www.sandboxie.com) is one such program, but its authors admit its protection only goes so far; it doesn’t guard against, for example, attacks on the swap file.
Use a standalone OS. More than one article has been written about booting a Linux live CD and using it to run a browsing session that leaves behind no traces. Although it does leave far fewer traces—only the host’s RAM is touched, and that is cleared once the machine is rebooted—it’s an impractical solution for daily use.
Encryption. Another way to thwart attempts to read the on-disk traces left by anonymous browsing is to encrypt the system’s disks. This isn’t as unworkable as it used to be, because you can use either native OS features (NTFS file-level encryption or Microsoft BitLocker) or third-party applications, such as TrueCrypt, to accomplish this. Its main drawback is that it only resists attacks that are executed when the encrypted volumes aren’t mounted. Any attack executed while the drive is unlocked (and, presumably, the browser is running) could bypass this entirely.
It’s also possible to encrypt only the portions of the system that are vulnerable to attack, such as the swap file. In Vista and Windows 7, the swap file can be encrypted automatically with the command fsutil behavior set encryptpagingfile 1. (See bit.ly/9Rn1nP for more details.)
Standalone browsing. This involves using a version of the browser that has been designed to run in its own directory, such as those found in the PortableApps.com collection. The PortableApps.com editions of Chrome and Firefox, for instance, don’t appear to leave anything on the host machine that could be used to analyze one’s browsing history. If a standalone browser were run from a removable drive protected by on-disk encryption, after-the-fact analysis would become more difficult still. This is probably the least impractical solution, because it doesn’t require a major change in browsing habits.
“Safe” Is A Variable, Not An Absolute
Truth be told, most people don’t risk much by using private browsing modes. They prevent casual invasions of privacy in the same way that locking your car prevents someone walking by from stealing what’s in your backseat.
But a determined thief will always find a way into your car. Likewise, the sheer amount of information left behind by any browsing session, “private” or not, provides plenty of clues for a determined forensic researcher. And once the process of searching for those clues is automated, the same information can end up in the hands of far more thieves as well.
The biggest reason why private browsing can only grant so much privacy is the behavior of the end user. Beyond the things users do casually that compromise their own privacy, developers face a hard constraint: users tend to shun any privacy tool that makes doing things online more difficult. The more privacy, the less functionality—and for most people, convenience trumps security.
It’s tough to predict how vulnerable people will be to attacks designed to exploit the few things not protected (or protectable) by private browsing. Worse, not all of these gaps in protection can be closed off. Some, such as file downloads, are a function of the browser being an application that interacts with the rest of the OS and not a self-contained unit.
For these reasons, we shouldn’t think of private browsing as an absolute. There are degrees of privacy, with accompanying degrees of inconvenience. Feel free to browse in private, but always keep in mind that “private” is a relative term.
Source: Computer Power User (CPU), December 2010