Adam Fields (weblog) » Privacy / Security – entertaining hundreds of millions of eyeball atoms every day – Mon, 08 Apr 2013 17:49:20 +0000

The Tragedy of the Selfish, again and again. Wed, 25 Jul 2012 03:15:09 +0000 adam I kept seeing this pattern emerge, and couldn’t find a good name for it (originally in reference to failures of the free market), so I came up with one. Simply put, the Tragedy of the Selfish is the situation that exists when each individual makes what is logically the best decision to maximize their own position, but the sum effect of everybody making their best decision is that everybody ends up worse off rather than better.

You buy an SUV, then other people do, because they want to be safer too. Except that if enough people make that same decision, you’ve collectively raised the chances that if you’re hit by a car, it’ll be an SUV, which will do much more damage than a smaller car would. Everyone would be better off if everyone backed off and drove smaller cars.

You buy a gun because other people have guns. Then other people do, because they want to be safer too. Then… you see where I’m going with this. Perhaps you’ve made yourself safer in some limited way, but you’ve decreased the overall safety of the system.

This is not safety, it’s mutually assured destruction.
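The dynamic described above is, in its two-player form, the classic prisoner’s dilemma, and it can be sketched in a few lines. The payoff numbers below are invented purely for illustration; only their ordering matters.

```python
# A toy payoff model of the pattern above: each driver chooses a small car
# ("back off") or an SUV. The numbers are made up; only their order matters.
PAYOFF = {
    # (my_choice, other_choice): my safety payoff
    ("small", "small"): 3,   # everyone in small cars: safest overall
    ("small", "suv"):   0,   # I'm in a small car among SUVs: worst for me
    ("suv",   "small"): 4,   # I'm the only SUV: best for me
    ("suv",   "suv"):   1,   # everyone in SUVs: worse than all-small
}

def best_response(other_choice):
    """The individually rational choice, whatever the other driver does."""
    return max(["small", "suv"], key=lambda mine: PAYOFF[(mine, other_choice)])

# Whatever the other driver picks, the SUV is the best response...
assert best_response("small") == "suv"
assert best_response("suv") == "suv"

# ...yet the equilibrium everyone is pushed toward pays less than mutual restraint.
assert PAYOFF[("suv", "suv")] < PAYOFF[("small", "small")]
```

The same structure fits the gun example: the individually dominant choice produces the collectively worst stable outcome.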

Why all this mucking about with irrevocable licenses? Sat, 09 Jul 2011 12:43:46 +0000 adam The Google+ Terms of Service include various provisions to give them license to display your content, and this has freaked out a bunch of professional photographers:

‘By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services.’

I don’t even understand why this is necessary. Why can’t this just be ‘you give us a license to display your content on the service until you delete it’?

The Google Chrome terms of service are hilarious Wed, 03 Sep 2008 20:02:28 +0000 adam I’ve been very busy lately, but this is just too much to not comment on.

There are other articles about how the Google Chrome terms of service give Google an irrevocable license to use any content you submit through “The Services” (a nice catchall term which includes all Google products and services), but the analysis really hasn’t gone far enough – that article glosses over the fact that this applies not only to content you submit, but also content you display. Of course, since this is a WEB BROWSER we’re talking about, that means every page you view with it.

In short, when you view a web page with Chrome, you affirm to Google that you have the right to grant Google an irrevocable license to use it to “display, distribute and promote the Services”, including making such content available to others. If you don’t have that legal authority over every web page you’ve visited, you’ve just fraudulently granted that license to Google and may yourself be liable to the actual copyright owner. (If you do, of course, you’ve just granted them that license for real.) I’m not a lawyer, but I suspect that Google has either committed mass inducement to fraud or the entire EULA (which lacks a severability clause) is impossible to obey and therefore void. [Update: there is a severability clause in the general terms, which I missed on the first reading. Does that mean that the entire content provisions would be removed, or just the parts that apply to the license you grant Google over the content you don't have copyright to? I don't know.]

Even more so than usual, these terms are, quite frankly, ridiculous and completely inappropriate for not only a web browser but an open source web browser.

Nice going guys.

Why don’t we have degrees of terrorism? Tue, 04 Mar 2008 14:38:31 +0000 adam We have different classifications for the crime of “killing a person”, and those classifications encompass whether it was an accident, whether it was premeditated, and how many people were killed – i.e., how serious a crime has actually been committed. But when we talk about terrorism, it’s always just “terrorism”. This results in the really sinister megacriminals being lumped in with the group of morons who can’t get it together to leave the house without forgetting to wear pants, let alone actually arrange to blow anything up.

Most “terrorists” are less dangerous than your average serial killer or bus accident, but we still lump them all together simply because they have an agenda.

Similar to murder, I think we need some sort of classification system for these crimes:

  1. Intent to commit terrorism: you “plotted” with someone who may or may not have been an undercover cop, but didn’t actually acquire passports or learn how to make liquid explosives
  2. Manfrightening: you committed some other crime, and along the way someone got scared and called you a terrorist, but you have no stated agenda.
  3. Terrorism in the third degree: You actually blew up something, but no one was hurt.
  4. Terrorism in the second degree: You actually blew up something and killed some people, but failed to garner any sympathy from the public.
  5. Terrorism in the first degree: You actually blew up something, lots of people were killed, and the US declared war on some country you were unaffiliated with.

Circo Hazardous Sock Packaging Tue, 01 May 2007 19:27:49 +0000 adam I happened to take my 6-month-old to Target this weekend, and we bought him some socks. He was playing with the package, put it in his mouth, and managed to get the little plastic hanger piece out. There’s certainly enough to say about parental responsibility, and not letting the baby get into dangerous things, but until this little plastic piece disappeared (it turns out he dropped it on the floor), we didn’t even give a second thought to the idea that a pair of socks for a 6-to-12-month-old might contain this kind of incredible choking hazard. I’m normally pretty paranoid about this. Didn’t these things used to go all the way across? Is this REALLY the place where Target wants to save a tenth of a cent of plastic? It seems like a lawsuit waiting to happen.

Be careful out there…

Circo Socks Hazardous Packaging

Google has just bought a lot of browsing history of the internet Sat, 14 Apr 2007 15:25:43 +0000 adam I pointed out that YouTube was a particularly valuable acquisition for Google because their videos are the most embedded in other pages of any of the online video services. When you embed your own content in someone else’s web page, you get the ability to track who visits that page and when, to the extent that you can identify them. This is how Google Analytics works – there’s a small piece of javascript loaded into the page which is served from one of Google’s servers, and every time someone hits that page, Google gets their IP address, the URL of the referring page, and whatever cookies are stored with the browser for the domain. As I’ve discussed before, this is often more than enough information to uniquely identify a person with pretty high accuracy.
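Mechanically, what the paragraph above describes is an ordinary third-party beacon. The sketch below is not Google’s actual code, just the generic mechanism: any server whose script or image a page embeds receives roughly this request data on every page view.

```python
# Minimal sketch of the server side of a tracking beacon. Any third party
# whose script or image is embedded in a page sees this on every hit.
from http.server import BaseHTTPRequestHandler, HTTPServer

class BeaconHandler(BaseHTTPRequestHandler):
    """Records what a third-party embed learns from each page view."""

    def do_GET(self):
        record = {
            "ip": self.client_address[0],          # visitor's IP address
            "page": self.headers.get("Referer"),   # the page that embedded us
            "cookie": self.headers.get("Cookie"),  # our cookie = a stable ID
            "ua": self.headers.get("User-Agent"),
        }
        print(record)  # a real tracker would append this to a profile store
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # stub of a 1x1 "web bug" image

    def log_message(self, *args):  # silence default request logging
        pass

# To run it locally (the port is arbitrary):
#   HTTPServer(("", 8080), BeaconHandler).serve_forever()
```

Tie the cookie to a login or a signup form anywhere in the network, and the per-hit records become a named browsing profile.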

DoubleClick has been doing this for a lot longer than Google has, and they have a lot of history there. In addition to their ad network, Google has also just acquired that entire browsing history, profiles of the browsing of a huge chunk of the web. Google’s privacy policy does not seem to apply to information acquired from sources other than, so they’re probably free to do whatever they want with this profile data.

[Update: In perusing their privacy policy, I noted this: If Google becomes involved in a merger, acquisition, or any form of sale of some or all of its assets, we will provide notice before personal information is transferred and becomes subject to a different privacy policy. This doesn't specify which end of the merger they're on, so maybe this does cover personal information they acquire. I wonder if they're planning on informing everyone included in the DoubleClick database.]

Remember when DoubleClick was pretty universally reviled and sued for privacy violations a few years back? Sat, 14 Apr 2007 14:49:46 +0000 adam Oh yeah.

ISPs apparently sell your clickstream data Fri, 16 Mar 2007 18:10:37 +0000 adam Apparently, “anonymized” clickstream data (the urls of which websites you visited and in what order) is available for sale directly from many ISPs. There is no way that this is sufficiently anonymized. It is readily obvious from reading my clickstream who I am – urls for MANY online services contain usernames, and anyone who uses any sort of online service is almost certainly visiting their own presence far more than anything else. All it takes is one of those usernames to be tied to a real name, and your entire clickstream becomes un-anonymized, irreversibly and forever.

I’ve talked about the dangers of breaking anonymization with leaking keys before:

Short answer: It is not enough to say that a piece of data is not “personally identifiable” if it is unique and exists with a piece of personally identifiable data somewhere else. More importantly, it doesn’t even have to be unique or completely personally identifiable – whether or not you can guess who a person is from a piece of data is not a black and white distinction, and simply being able to guess who a person might be can leak some information that might confirm their identity when combined with something else.

This is also completely setting aside the fact that you have very little direct control over much of your clickstream, since there are all sorts of ways for a site you visit to get your browser to load things – popups, javascript includes, and images being the most prevalent.

Preserving anonymity is hard. This is an egregious breach of privacy. Expect lawsuits if this is true.

Google to purge some data after 18-24 months Wed, 14 Mar 2007 23:33:02 +0000 adam Well, that’s a nice start. Good for them.

Privacy is about access, not secrecy Sun, 15 Oct 2006 14:02:04 +0000 adam There’s a very important point to be made here.

Privacy in the digital age is not necessarily about secrecy, it’s about access. The question is no longer whether someone can know a piece of information, but also how easy it is to find.

If you take a bunch of available information and aggregate it to make it easily accessible, that’s arguably a worse privacy violation than taking a secret piece of information and making it “public” but putting it where no one can find it (or where they have to go looking for it).

This is a very important distinction when you’re looking at corporate log gathering and data harvesting. Sure – your IP address or your phone number may be “public information”, but it’s still a privacy violation when it’s put in a big database with a bunch of other information about you and given to someone.
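A toy sketch of the aggregation point: each record below is “public” on its own, but a trivial join produces a profile that none of the sources ever published. All of the data is invented.

```python
# Each source is individually "public"; the join is the privacy violation.
phone_book = {"555-0142": "A. Reader"}                              # public listing
server_log = [{"ip": "198.51.100.7", "url": "/forum/health/thread/42"}]  # "just" an IP
signup_db  = [{"ip": "198.51.100.7", "phone": "555-0142"}]          # "just" contact info

profiles = {}
for row in signup_db:                     # tie the IP to a name via the phone number
    profiles[row["ip"]] = {"name": phone_book.get(row["phone"])}
for hit in server_log:                    # then attach browsing behavior to that name
    profiles.setdefault(hit["ip"], {}).setdefault("pages", []).append(hit["url"])

print(profiles)
# {'198.51.100.7': {'name': 'A. Reader', 'pages': ['/forum/health/thread/42']}}
```

No single source revealed that a named person reads a particular health forum; the aggregate does.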

Google has your logs (and all it took was a fart lighting video) Tue, 10 Oct 2006 19:06:26 +0000 adam The non-obvious side of Google’s purchase of YouTube: Google now has access to the hit logs of every page that a YouTube video appears on, including LOTS of pages that were probably previously inaccessible to them. MySpace pages were probably going to get Google ads anyway, because of the big deal that happened there, but many others weren’t.

Add this to AdSense, the Google Web Accelerator, Google Web Analytics, and Google Maps, and that’s a lot of data being collected about browsing habits, and the number of sites you can browse without sending some data to Google has just dropped significantly.


Amazon Unbox is a travesty Sun, 17 Sep 2006 15:13:45 +0000 adam I was going to write something about this, but Cory beat me to it.

Amazon Unbox has the worst terms of service I’ve seen in a long time. Like Cory, I’m a longtime Amazon supporter, and I think their customer service is outstanding, and this is a travesty. Way to fuck over the people who won’t actually read the terms because they just want to download a movie.

I only really have one thing to add with respect to the “if it has value then we have a right to charge money for it” proposition. Does the MPAA reserve the right to charge more retroactively if you enjoy a movie more than you expected to? That’s hidden value, right? This madness has to stop.

Mr. Bezos, you should be ashamed of yourself, and also whoever you put in charge of this.

Doing what the terrorists want Fri, 25 Aug 2006 21:24:07 +0000 adam I’ve often said that terrorism is an auto-immune disease afflicting civilization. Bruce Schneier has a great article up about how responding to terrorism by locking things down is, in fact, exactly what the terrorists want.

AOL releases “anonymized” search data for 500k users Mon, 07 Aug 2006 16:03:26 +0000 adam This is a serious breach of user privacy, and I can’t imagine there won’t be lawsuits over this.

Either they didn’t think this through, or this is the best way they could think of to raise a public outrage.

Thu, 03 Aug 2006 15:02:46 +0000 adam This is a great video of the ZDNet Executive Editor explaining what’s wrong with DRM.

Google Government search Fri, 16 Jun 2006 14:42:19 +0000 adam I think it’s simultaneously good that Google is turning a watchful eye on the government, and somewhat creepy that they’re putting themselves in the position of proxying people’s access to potentially sensitive information. I do NOT think that the Google privacy policy is sufficient to cover this situation.

As many have predicted, this is also likely to expose some interesting accidentally unprotected things at some point in the future.

The motivations of wiretapping Sun, 04 Jun 2006 18:33:43 +0000 adam Boingboing points out this Wired article about a reporter who crashed a conference of wiretapping providers, mentioning this quotation in particular:

‘He sneered again. “Do you think for a minute that Bush would let legal issues stop him from doing surveillance? He’s got to prevent a terrorist attack that everyone knows is coming. He’ll do absolutely anything he thinks is going to work. And so would you. So why are you bothering these guys?”‘

It’s an interesting read, but I fundamentally disagree with the above statement, and this is the problem.

It’s not the surveillance that bothers me, it’s the resistance to oversight, even after the fact.

If there was any confidence that what they were doing was a reasonable tradeoff, they wouldn’t have to a) lie or b) break the law to do it. Yet they’ve done both of these things.

If the law enforcement community said “well shit, we’re out of ideas about how to stop these people, and so we really need to have our computers read everyone’s email and tap everyone’s phones and we guarantee that this information won’t be used for anything else, and anyone we find doing something nefarious will be dealt with according to due process”, then we could, you know, engage in a meaningful discussion about this. And then we could move on to the fact that “terrorist” is not a useful designation for a criminal, and then maybe we could fire the people who thought up this brilliant idea and find someone who would practice actual security because wholesale surveillance and profiling have been widely debunked as largely useless for anything besides persecution, political attacks, and invasions of privacy.

But we won’t, because that’s not what this is about.

This opinion of a member of the Dutch National Police is particularly telling:

‘He said that in the Netherlands, communications intercept capabilities are advanced and well established, and yet, in practice, less problematic than in many other countries. “Our legal system is more transparent,” he said, “so we can do what we need to do without controversy. Transparency makes law enforcement easier, not more difficult.”

The technology exists, it’s not going away, and it’s really not the problem. The secrecy is the problem.

Privacy without hiding Fri, 19 May 2006 13:57:49 +0000 adam Excellent article from Bruce Schneier on why privacy is important, even if “you have nothing to hide”.

‘We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.’

Privacy is freedom. It is freedom from judgement, the freedom to stew in our own individual cognitive juices, the freedom to express and learn and argue.

New “security glitch” found in Diebold voting systems Thu, 11 May 2006 14:08:34 +0000 adam “Elections officials in several states are scrambling to understand and limit the risk from a “dangerous” security hole found in Diebold Election Systems Inc.’s ATM-like touch-screen voting machines.

The hole is considered more worrisome than most security problems discovered on modern voting machines, such as weak encryption, easily pickable locks and use of the same, weak password nationwide.”

Perhaps it’s time to acknowledge that the Diebold systems themselves ARE the security glitch.

US Mandatory Data Retention laws are coming Sun, 30 Apr 2006 14:35:48 +0000 adam Remember the privacy implications of the government asking Google for search data?

It’s going to get worse before it gets better. No online service considers your IP address to be private information, and now they will be required to maintain logs mapping your IP address to real contact information, for a period of at least one year after your account is closed.

The only way to prevent this information from being misused is to not keep it, and now there won’t be any choice.

I’ve discussed this before:

Watch out for the, uh, oven door scam Thu, 06 Apr 2006 17:42:45 +0000 adam Apparently, crooks have been breaking into vacation homes, stealing the >OVEN DOORS<, repackaging them in real flat screen TV boxes, and selling them to dupes on the street.

Words fail me.

MIT student told to drop out of school by the RIAA to pay settlement fines Thu, 06 Apr 2006 15:22:31 +0000 adam

Of course, this is nothing compared to the fact that the RIAA says you shouldn’t be allowed to break DRM even if it’s going to kill you if you don’t:

I’ve discussed this before:

Hidden dangers for consumers – Trojan Technologies Mon, 20 Mar 2006 16:55:33 +0000 adam I’ve been collecting examples of cases where there are hidden dangers facing consumers, cases where the information necessary to make an informed decision about a product isn’t obvious, or isn’t included in most of the dialogue about that product. Sometimes, this deals with hidden implications under the law, but sometimes it’s about non-obvious capabilities of technology.

We’re increasingly entering situations where most customers simply can’t decide whether a certain product makes sense without lots of background knowledge about copyright law, evidence law, network effects, and so on. Things are complicated.

So far, I have come up with these examples, which would seem to be unrelated, but there’s a common thread – they’re all bad for the end user in non-obvious ways. They all seem safe on the surface, and often, importantly, they seem just like other approaches that are actually better, but they’re carrying hidden payloads – call them “Trojan technologies”.

To put it clearly, what I’m talking about are the cases where there are two different approaches to a technology, where the two are functionally equivalent and indistinguishable to the end user, but with vastly different implications for the various kinds of backend users or uses. Sometimes, the differences may not be evident until much later. In many circumstances, the differences may not ever materialize. But that doesn’t mean that they aren’t there.

  • Remote data storage. I wrote a previous post about this, and Kevin Bankston of the EFF has some great comments on it. Essentially, the problem is this. To the end user, it doesn’t matter where you store your files, and the value proposition looks like a tradeoff between having remote access to your own files or not being able to get at them easily because they’re on your desktop. But to a lawyer asking for those files, it makes a gigantic difference in whether they’re under your direct control or not. On your home computer, a search warrant would be required to obtain them, but on a remote server, only a subpoena is needed.
  • The recent debit card exploit has shed some light on the obvious vulnerabilities in that system, and it’s basically the same case. To a consumer, using a debit card looks exactly the same as using a credit card. But the legal ramifications are very different, and their use is protected by different sets of laws. Credit card liability is typically geared in favor of the consumer – if your card is subject to fraud, there’s a maximum amount you’ll end up being liable for, and your account will be credited immediately, as you simply don’t owe the money you didn’t charge yourself. Using a debit card, the money is deducted from your account immediately, and you have to wait for the investigation to be completed before you get your refund. A lot of people recently discovered this the hard way. There’s a tremendous amount of good coverage of debit card fraud on the Consumerist blog.
  • The Goodmail system, being adopted by Yahoo and AOL, is a bit more innocuous on the surface, but it ties into the same question. On the face of it, it seems like not a terrible idea – charge senders for guaranteed delivery of email. But the very idea carries with it, outside of the normal dialogue, the implications of breaking network neutrality (the concept that all traffic gets equal treatment on the public internet), which extend into a huge debate raging in the confines of the networking community and the government, over such things as VoIP systems, Google traffic, and all kinds of other issues. I’m not sure if this really qualifies in the same league as my other examples, but I wanted to mention it here anyway. There’s a Goodmail/network neutrality overview discussion going on over on Brad Templeton’s blog.
  • DRM is sort of the most obvious. Consumers can’t tell what the hidden implications of DRM are. This is partly because those limitations are subject to change, and that in itself is a big part of the problem. The litany of complaints is long – DRM systems destroy fair use, they’re security risks, they make things complicated for the user. I’ve written a lot about DRM in the past year and a half.
  • 911 service on VoIP is my last big example, and one of the first ones that got me started down this path. This previous post, dealing with the differences between multiple kinds of services called “911 service” on different networks, is actually a good introduction to this whole problem. I ask again: ‘Does my grandmother really understand the distinction between a full-service 911 center and a “Public Safety Answering Point”? Should she have to, in order to get a phone where people will come when she dials 911?’

I don’t have a good solution to this, beyond more education. This facet must be part of the consumer debate over new technologies and services. These differences are important. We need to start being aware, and asking the right questions. Not “what are we getting out of this new technology?”, but “what are we giving up?”.

Claim your settlement from Sony Thu, 16 Mar 2006 00:11:48 +0000 adam If you bought an infected CD from Sony, you’re entitled to some benefits under the lawsuit settlement:

Zfone is simple encrypted voip telephony Wed, 15 Mar 2006 14:30:11 +0000 adam Phil Zimmermann, the guy who brought you PGP, has just released a public beta of his new open source encrypted VOIP software – Zfone. The beta is Mac/linux only, the Windows version will be out in a month or so.

It’s an encrypting proxy for SIP calls using pre-existing software. I don’t know enough about how the protocol works to say if this would work with things like Vonage or not.

“In the future, the Zfone protocol will be integrated into standalone secure VoIP clients, but today we have a software product that lets you turn your existing VoIP client into a secure phone. The current Zfone software runs in the Internet Protocol stack on any Windows XP, Mac OS X, or Linux PC, and intercepts and filters all the VoIP packets as they go in and out of the machine, and secures the call on the fly. You can use a variety of different software VoIP clients to make a VoIP call. The Zfone software detects when the call starts, and initiates a cryptographic key agreement between the two parties, and then proceeds to encrypt and decrypt the voice packets on the fly. It has its own little separate GUI, telling the user if the call is secure.”

Zfone has been tested with these VoIP clients and VoIP services:
VoIP clients: X-Lite, Gizmo, and SJphone.
VoIP service providers: Free World Dialup and SIPphone.

Google forced to release records by the court Tue, 14 Mar 2006 20:37:28 +0000 adam As predicted, U.S. Judge James Ware intends to force Google to hand over the requested data to the DoJ.

Massive fraud alert on Citibank ATMs Mon, 06 Mar 2006 21:17:45 +0000 adam Some kind of massive fuckup is going on with the international ATM network, possibly a class break of the interbank ATM network. Lots of conflicting information, but it’s pretty clear that things are not going well:

Outrage fatigue roundup 3/2/2006 Thu, 02 Mar 2006 17:26:59 +0000 adam The big news this week – video that Bush knew that Katrina would destroy New Orleans a day before the storm hit:

Asking for complaint forms in Florida police stations gets you harassed and threatened:

Greek cell phone taps of high officials were enabled by embedded surveillance tech:

Zogby poll shows 72% of troops want to get out of Iraq in the next year, but also that 85% of them think they’re there to retaliate for Saddam’s attacking us on 9/11. So, there’s that:

Human rights abuses in Iraq are worse than under Saddam (oops, Freudian slip – I typed Bush there first):

Daily Kos is mumbling something about State-initiated impeachment:

And, a kitten:

Greek wiretaps were enabled by embedded spy code Thu, 02 Mar 2006 14:47:34 +0000 adam Power, once given, will be abused. And not necessarily by those it’s given to.

Bruce Schneier has a blog entry about the Greek cell phone tapping scandal – about 100 cell phones of politicians and officials, including at the American embassy, have been tapped by an unknown party since the 2004 Olympics.

Bruce points out that the “malicious code” used to enable this was actually designed into the system as an eavesdropping mechanism for the police.

“There is an important security lesson here. I have long argued that when you build surveillance mechanisms into communication systems, you invite the bad guys to use those mechanisms for their own purposes. That’s exactly what happened here.”

This is what we mean by abuse of databases Wed, 22 Feb 2006 14:48:26 +0000 adam Okay, here it is, folks.

When someone asks “what’s wrong with companies compiling huge databases of personal information?”, this is part of the answer:

Someone signed up for a Miller Brewery contest using a throwaway email address, and they tracked her down and signed up her “real” email address. The second link above concludes that they did it by using information collected by Equifax’s direct mail division, Naviant (which was supposed to have been shut down years ago). They own the domain from which the email was sent.

When we talk about privacy, it can mean a number of things. But indisputably, one of the definitions is “the right to be free from unauthorized intrusions”.

Maybe this is a small thing, but it’s a terrible precedent.

This person obviously didn’t want to be permanently signed up for messages from Miller. Letting an address expire is probably the ultimate form of “opt-out”. Yet, Miller thought it was okay to use personal information gleaned from who-knows-what sources to tie her to another email address, and send her more spam. Would they do the same thing if you changed your phone number to avoid telemarketers? What else is fair game?

The Hurtt Prize Tue, 21 Feb 2006 17:24:11 +0000 adam Harold Hurtt, police chief of Houston, has advocated changing building permits to require cameras in public areas of malls and apartment complexes, to try to deter crime:

He’s quoted in the article, saying “I know a lot of people are concerned about Big Brother, but my response to that is, if you are not doing anything wrong, why should you worry about it?”

1) “Wrong” is always changing, and isn’t always correct.

2) Our society and legal system are neither constructed for nor capable of handling perfect law enforcement.

3) It’s not worth any price to catch all of the criminals. There are tradeoffs to be made.

The Hurtt Prize is a $1000-and-growing bounty offered for anyone who gets a video capture of Mr. Hurtt committing a crime.

Fun stuff you can find with Google Fri, 17 Feb 2006 23:01:47 +0000 adam Nothing really new, but some interesting examples.

The world’s information doesn’t want to be organized.

China loves the Patriot Act Wed, 15 Feb 2006 00:36:55 +0000 adam In an interview, a senior Chinese official responsible for policing the Internet defends China’s monitoring and filtering as no different from what other countries do to enforce their laws and keep the content on the internet “safe”. He points to the Patriot Act as evidence that the US is “doing a good job on this front”.

Storing your files on Google’s server is not a good idea Sun, 12 Feb 2006 19:29:11 +0000 adam I was going to write something long about this, but Kevin Bankston of the EFF has beaten me to it and put together pretty much everything I was going to say.

Here’s the original piece:

In response to a criticism on the IP list that this piece was too hard on Google, Kevin wrote the following, which I reproduce here verbatim with permission. I think that this does an excellent job of summing up how I feel about these privacy issues. I have nothing personally against Google, or any of the other companies that I often “pick on” in pointing out potential flaws. I do think that somewhere along the way in getting to where we are now, we have lost some important things in the areas of corporate responsibility and consumer protections, and technology has advanced to the point where it’s not even obvious what has been lost. The tough thing is that there are often tradeoffs with useful functionality, and it’s not clear what you’re giving up in order to make use of that potential new feature.

So, in this case – yeah, it’s great that you can search your files from more than one computer, but Google hasn’t warned you that doing so by their method, under the current law, exposes your private data to less rigorous protections from search by various parties than it would have if it were left on your own computer. To most people, it doesn’t make any difference where their files are stored. To a lawyer with a subpoena in hand, it does. These are important distinctions, and they’re not being made to the general public. I believe it is the responsibility of those who understand these risks to bring this dialogue to those who don’t. It’s a big part of why I write this blog.

Kevin’s response:

Thanks for your feedback. I’m sorry if you found our press release inappropriately hostile to Google, although I would say it was appropriately hostile–not to Google or its folk, but to the use of this product, which we do think poses a serious privacy risk.

Certainly, the ability to search across computers is a helpful thing, but considering that we are advocating against the use of this particular product for that purpose, I’m not sure why we would include such a (fairly obvious) proposition in the release. And as to tone, well, again, the goal was to warn people off of this product, and you’re not going to do that by using weak language. Certainly, we’re not out to personally or unfairly attack the people at Google. Indeed, we work with them on a variety of non-privacy issues (and sometimes privacy issues, too). But it’s our job to forcefully point out when they are marketing a product that we think is a danger to consumers’ privacy, and dropping in little caveats about how clever Google’s engineers are or how useful their products can be is unnecessary and counterproductive to that purpose.

I think it’s clear from the PR that our biggest problem here is with the law. But we are also very unhappy with companies–including but not limited to Google–that design and encourage consumers to use products that, in combination with the current state of the law, are bad for user privacy. Google could have developed a Search Across Computers product that addressed these problems, either by not storing the data on Google servers (there are and long have been similar remote access tools that do not rely on third party storage), or by storing the data in encrypted form such that only the user could retrieve it (it is encrypted on Google’s servers now, but Google has the key).

However, both of those design options would be inconsistent with one of Google’s most common goals: amassing user data as grist for the ad-targeting mill (otherwise known, by Google, as “delivering the best possible service to you”). As mentioned in the PR, Google says it is not scanning the files for that purpose yet, but has not ruled it out, and the current privacy policy on its face would seem to allow it. And although I for one have no problem with consensual ad-scanning per se, which technically is not much different than spam-filtering in its invasiveness, I do have a very big problem with a product that by design makes ad-scanning possible at the cost of user privacy. This is the same reason EFF objected to Gmail: not because of the ad-scanning itself, but the fact that Google was encouraging users, in its press and by the design of the product, to never delete their emails even though the legal protection for those stored communications is significantly reduced with time.

If Google wants to “not be evil” and continue to market products like this, which rely on or encourage storing masses of personal data with Google, it has a responsibility as an industry leader to publicly mobilize resources toward reforming the law and actively educating its users about the legal risks. Until the law is fixed, Google can and should be doing its best to design around the legal pitfalls, placing a premium on user privacy rather than on Google’s own access to user’s data. Unfortunately, rather than treating user privacy as a design priority and a lobbying goal, Google mostly seems to consider it a public relations issue. That being the case, it’s EFF’s job to counter their publicity, by forcefully warning the public of the risks and demanding that Google act as a responsible corporate citizen.

Once again, another reason why you should be donating money to the EFF. Do it now.

]]> 0
Detailed survey of verbatim answers from AOL, MS, Yahoo, and Google about what details they store Fri, 03 Feb 2006 16:42:32 +0000 adam Declan McCullagh has compiled responses from AOL, Microsoft, Yahoo and Google on the following questions (two of which are nearly verbatim from my previous query, uncredited):

So we’ve been working on a survey of search engines, and what data they keep and don’t keep. We asked Google, MSN, AOL, and Yahoo the same questions:

- What information do you record about searches? Do you store IP addresses linked to search terms and types of searches (image vs. Web)?
- Given a list of search terms, can you produce a list of people who searched for that term, identified by IP address and/or cookie value?
- Have you ever been asked by an attorney in a civil suit to produce such a list of people? A prosecutor in a criminal case?
- Given an IP address or cookie value, can you produce a list of the terms searched by the user of that IP address or cookie value?
- Have you ever been asked by an attorney in a civil suit to produce such a list of search terms? A prosecutor in a criminal case?
- Do you ever purge these data, or set an expiration date of for instance 2 years or 5 years?
- Do you ever anticipate offering search engine users a way to delete that data?

]]> 0
Blackmal.e warning Thu, 02 Feb 2006 22:07:37 +0000 adam If you run a Windows machine, it’s probably a good idea to make sure your virus scans are up to date. There’s a nasty virus going around that’s set to delete some files on Feb 3.

]]> 0
US-VISIT approximate costs: $15M per criminal Wed, 01 Feb 2006 22:48:08 +0000 adam The system has cost around $15 billion, and has caught about 1000 criminals. No terrorists, all immigration violations and common criminals.

This estimate doesn’t include lost tourism revenue, academic implications of detaining foreign students or professors, or a count of how many of those criminals might have been caught anyway.

]]> 7
Ruby script to fetch hosts file and turn it into a privoxy block list Wed, 01 Feb 2006 18:34:57 +0000 adam There are plenty of servers out there that could disappear from the internet without much bad happening. They include known ad-serving, spam, and spyware sites. The fine folks at maintain a good list, which is up to about 10,000 entries now. Since I couldn’t figure out how to get privoxy to honor the local hosts file when doing DNS lookups, I wrote a little ruby script to fetch that file, break it down, and output a privoxy block list.

I chose ruby, because I’ve been working with it lately, and I really really like it. I find it incredibly easy to write, read, and work with.

If you’re a ruby developer, improvements of all kinds are welcomed. Please feel free to comment and discuss ways I could have made this more ruby-ish. Also, I haven’t quite grokked what the right approach is for ruby error/exception handling. Opinions on where checks should go are welcomed. For example, the whole thing is wrapped in a conditional block of opening the file. Do I need to handle any exception conditions, or is that all just taken care of properly?


require 'open-uri'

hosts = []
header = true

open('') do |file|   # URL of the hosts file goes here
  file.each_line do |line|
    # skip lines until the end of the header
    header = false if line =~ /^#start/
    next if header

    # skip comments
    next if line =~ /^\s*#/

    # add the hostname (second field on the line) to the array
    hosts << line.split[1]
  end
end

# write the output file
open('privoxy_user_actions.txt', 'w') do |outfile|
  outfile.puts '{ +block }'
  hosts.each { |host| outfile.puts host }
end

]]> 3
More specific Google tracking questions Tue, 31 Jan 2006 03:00:31 +0000 adam I asked two very specific questions in a conversation with John Battelle, and he’s received unequivocal answers from Google:

1) “Given a list of search terms, can Google produce a list of people who searched for that term, identified by IP address and/or Google cookie value?”

2) “Given an IP address or Google cookie value, can Google produce a list of the terms searched by the user of that IP address or cookie value?”

The answer to both of them is “yes”.

]]> 4
Flickr pictures, web beacons, and a modest proposal Mon, 30 Jan 2006 16:46:23 +0000 adam As I noted in the comments of the previous post, I don’t have ads on the site, but I do have flickr pictures directly linked from my flickr account.

It is conceivable to me that flickr pictures could qualify as “web beacons” under the Yahoo privacy policy, and thus be used for tracking purposes. Presumably, this was not the original intention of the flickr developers, but it’s certainly a possibility now that they’re owned by Yahoo. Are the access logs for the static flickr pictures available to Yahoo? Probably. Are they correlated with other sorts of usage information? It’s not clear. Presumably, flickr pictures are linked in places where standard Yahoo web beacons can’t go, because they’re not invited (like on this site, for example).

I think my conclusion is that this is probably not a problem, but maybe it is. It and other sorts of distributed 3rd party tracking all have one thing in common:

It’s called HTTP_REFERER.

Here’s how it works. When you make a request for any old random web page that contains a 3rd party ad or an image or a javascript library or whatever, your browser fetches the embedded piece of content from the 3rd party. When it does that, it sends the URL of the page you visited as part of the request, in a field called the referer header (yes, it’s misspelled).

So, every time you visit a web page:

  • You send the URL to the owner of the page. So far so good.
  • You send your IP address to the owner of the page. Not terrible in itself.
  • You send the URL of the page you visited to the owner of the 3rd party content. And this is where it starts to degrade a little.
  • You send your IP address to the owner of the 3rd party content. The owner of the 3rd party content may be able to set a cookie identifying you. Modern browsers are set by default to refuse 3rd party cookies. However, if that 3rd party has ever set a cookie on your browser before (say, if you hit their site directly), they can still read it. In any case, you can be identified in some incremental way.
  • The next time you visit another site with content from the same 3rd party, they can probably identify you again.

That referer URL is a significant key that ties a lot of browsing habits together.
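To make that concrete, here’s a minimal Ruby sketch of what the 3rd party’s access log lets them reconstruct. All of the URLs and cookie values here are invented for illustration; the point is the correlation, not the specific data.

```ruby
# Hypothetical sketch: the same 3rd party tracker is embedded on two
# unrelated sites. Each request for its content arrives with a Referer
# header (the page you were reading) and the tracker's own cookie.
log = []

def record_hit(log, referer, cookie)
  # what the tracker's server writes down for each embedded-content request
  log << { referer: referer, cookie: cookie }
end

# You visit two different sites; both happen to embed the tracker.
record_hit(log, 'http://news.example.com/some-article', 'id-42')
record_hit(log, 'http://shop.example.com/some-product', 'id-42')

# Grouping hits by cookie reconstructs one browser's path across sites.
profile = log.select { |hit| hit[:cookie] == 'id-42' }.map { |hit| hit[:referer] }
puts profile.length   # two pages, now tied to a single browser
```

Note that neither site sent the tracker anything directly – the browser volunteered the referer URL with each request.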

There’s an important distinction to be made here. The referer header makes it possible for 3rd party sites to track your browsing, and it’s only one of many ways. Doing away with the referer header won’t prevent the sites running 3rd party tracking content from doing so. The owner of the site can always send the URL you’re looking at to the 3rd party as part of the request, even if your browser isn’t. However, what this does prevent is tracking without the consent of the owner of the site you’re looking at. Of all of the sites you’re looking at, actually. Judging from my admittedly limited conversations with site owners, there are a LOT of people out there who have no idea that their users can be tracked if they include 3rd party ads on their site, or flickr images, or whatever. (Again, not to say that their users are being tracked, but the possibility is there.)

Again, the site that includes the ad or image or whatever isn’t sending that information – your browser is, and this is a legacy of the early days of the web. Some browsers allow you to turn it off and not send any referer information. I’d argue that this should be off by default, because the disadvantages outweigh the benefits. I’m told that legitimate advertisers don’t rely on the referer header anyway, because it can be unreliable. If that’s true, that’s even less reason to keep it around.

Suggestion number one was “Tracking information that’s linked to personally identifiable information should also be considered personally identifiable“.

Perhaps suggestion two is “Let’s do away with the Referer header”. (Of course, this comes on the heels of a Google-employed Firefox developer adding more tracking features instead of taking them away.)

Arguments for or against? Are there any good uses for this that are worth the potential for abuse?

]]> 17
What’s the big fuss about IP addresses? Sun, 29 Jan 2006 20:33:56 +0000 adam Given the recent fuss about the government asking for search terms and what qualifies as personally identifiable information, I want to explain why IP address logging is a big deal. This explanation is somewhat simplified to make the cases easier to understand without going into complete detail of all of the possible configurations, of which there are many. I think I’ve kept the important stuff without dwelling on the boundary cases, and be aware that your setup may differ somewhat. If you feel I’ve glossed over something important, please leave a comment.

First, a brief discussion of what IP addresses are and how they work. Slightly simplified, every device that is connected to the Internet has a unique number that identifies it, and this number is called an IP address. Whenever you send any normal network traffic to any other computer on the network (request a web page, send an email, etc…), it is marked with your IP address.

There are three standard cases to worry about:

  1. If you use dialup, your analog modem has an IP address. Remote computers see this IP address. (This case also applies if you’re using a data aircard, or using your cell phone as a modem.)
  2. If you have a DSL or cable connection, your DSL/cable modem has an IP address when it’s connected, and your computer has a separate internal IP address that it uses to only communicate with the DSL or cable modem, typically mediated by a home router. Remote computers see the IP address of the DSL/cable modem. (This case also applies if you’re using a mobile wifi hotspot.)
  3. If you’re directly connected to the internet via a network adapter, your network adapter has an IP address. Remote computers see this IP address.

Sometimes, IP addresses are static, meaning they’re manually assigned and don’t change automatically unless someone changes them (typically, only for case #3). Often, they’re dynamic, which means they’re assigned automatically with a protocol called DHCP, which allows a new network connection to automatically pick up an IP address from an available pool. But just because they can change doesn’t mean they will change. Even dynamic IP addresses can remain the same for months or years at a time. (The servers you’re communicating with also have IP addresses, and they are typically static.)

In order to see how an IP address may be personally identifiable information, there’s a critical question to ask – “where do IP addresses come from, and what information can they be correlated with?”.

Depending on how you connect to the internet, your IP address may come from different places:

  • If you use dialup, your modem will get its IP address from the dialup ISP, with which you have an account. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and billing details are part of your account information. By recording the phone number you call from, they may be able to identify your physical location.
  • If you have a DSL or cable connection, your DSL/cable modem will get its IP address from the DSL/cable provider. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and physical location, and probably other information about you, are part of your account information.
  • If you’re using a public wifi access point, you’re probably using the IP address of the access point itself. If you had to log in to an account, your name and physical location, and probably other information about you, are part of your account information. If you’re using someone else’s open wifi point, you look like them to the rest of the internet. This case is an exception to the rest of the points outlined in this article.
  • If you’re directly connected to the internet via a network adapter, your network adapter will get its IP address from the network provider. In an office, this is typically the network administrator of the company. Your network administrator knows which computer has which IP address.

None of this information is secret in the traditional sense. It is probably confidential business information, but in all cases, someone knows it, and the only thing keeping it from being further revealed is the willingness or lack thereof of the company or person who knows it.

While an IP address may not be enough to identify you personally, there are strong correlations of various degrees, and in most cases, those correlations are only one step away. By itself, an IP address is just a number. But it’s trivial to find out who is responsible for that address, and thus who to ask if you want to know who it’s been given out to. In some cases, the logs will be kept indefinitely; in others, they’re destroyed on a regular basis – it’s entirely up to each individual organization.

Up until now, I’ve only discussed the implications of having an IP address. The situation gets much much worse when you start using it. Because every bit of network traffic you use is marked with your IP address, it can be used to link all of those disparate transactions together.

Despite these possible correlations, not one of the major search engines considers your IP address to be personally identifiable information. [Update: someone asked where I got this conclusion. It's from my reading of the Google, Yahoo, and MSN Search privacy policies. In all cases, they discuss server logs separately from the collection of personal information (although MSN Search does have it under the heading of "Collection of Your Personal Information", it's clearly a separate topic). If you have some reason to believe I've made a mistake, I'm all ears.] While this may technically be true if you take an IP address by itself, it is a highly disingenuous position to take when logs exist that link IP addresses with computers, physical locations, and account information… and from there with people. Not always, but often. The inability to link your IP address with you depends always on the relative secrecy of these logs, what information is gathered before you get access to your IP address, and what other information you give out while using it.

Let’s bring one more piece into the puzzle. It’s the idea of a key. A key is a piece of data in common between two disparate data sources. Let’s say there’s one log which records which websites you visit, and it stores a log that only contains the URL of the website and your IP address. No personal information, right? But there’s another log somewhere that records your account information and the IP address that you happened to be using. Now, the IP address is a key into your account information, and bringing the two logs together allows the website list to be associated with your account information.

  • Have you ever searched for your name? Your IP address is now a key to your name in a log somewhere.
  • Have you ever ordered a product on the internet and had it shipped to you? Your IP address is now a key to your home address in a log somewhere.
  • Have you ever viewed a web page with an ad in it served from an ad network? Both the operator of the web site and the operator of the ad network have your IP address in a log somewhere, as a key to the sites you visited.

The list goes on, and it’s not limited to IP addresses. Any piece of unique data – IP addresses, cookie values, email addresses – can be used as a key.
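A toy Ruby example shows how mechanical this joining is. Neither log below contains a name next to a search term – the shared IP address ties them together. All of the names, addresses, and queries are made up for illustration.

```ruby
# Two separate, individually "anonymous" logs, joined on a common key.
search_log  = [{ ip: '198.51.100.7', query: 'rare disease symptoms' }]
account_log = [{ ip: '198.51.100.7', name: 'J. Smith', address: '123 Main St' }]

joined = search_log.map do |entry|
  account = account_log.find { |a| a[:ip] == entry[:ip] }  # the IP is the key
  { name: account[:name], query: entry[:query] } if account
end.compact

puts joined.first  # a search term is now attached to a name
```

This is a three-line join; any analyst with access to both logs can do it.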

Data mining is the act of taking a whole bunch of separate logs, or databases, and looking for the keys to tie information together into a comprehensive profile representing the correlations. To say that this information is definitely being mined, used for anything, stored, or even ever viewed is certainly alarmist, and I don’t want to imply that it is. But the possibility is there, and in many cases, these logs are being kept. If they’re not being used that way now, the only thing really standing in the way is the inaction of those who have access to the pieces, or can get it.

If the information is recorded somewhere, it can be used. This is a big problem.

There are various ways to mask your IP address, but that’s not the whole scope of the problem, and it’s still very easy to leak personally identifiable information.

I’ll start with one suggestion for how to begin to address this problem:

Any key information associated with personally identifiable information must also be considered personally identifiable.

[Update: I've put up a followup post to this one with an additional suggestion.]

]]> 21
Some evidence that Google does keep personally identifiable logs Fri, 27 Jan 2006 06:00:48 +0000 adam This article from Internet Week has Alan Eustace, VP of Engineering for Google, on the record talking about the My Search feature.

“Anytime, you give up any information to anybody, you give up some privacy,” Eustace said.

With “My Search,” however, information stored internally with Google is no different than the search data gathered through its Google.com search engine, Eustace said.

“This product itself does not have a significant impact on the information that is available to legitimate law enforcement agencies doing their job,” Eustace said.

This seems pretty conclusive to me – signing up for saved searches doesn’t (or didn’t, in April 2005) change the way the search data is stored internally.


(This was pointed out to me by Ray Everett-Church in the comments of the previous post, and covered on his blog.)

]]> 0
Does Google keep logs of personal data? Thu, 26 Jan 2006 15:16:43 +0000 adam The question is this – is there any evidence that Google is keeping logs of personally identifiable search history for users who have not logged in and for logged-in users who have not signed up for search history? What about personal data collected from Gmail, and Google Groups, and Google Desktop? Aggregated with search? Kept personally identifiably? (Note: For the purposes of this conversation, even though Google does not consider your IP address to be personally identifiable, at least according to their privacy policy, I do.)

It is not arguable that they could keep those logs, but I think every analysis I’ve seen is simply repeating the assumption that they do, based on the fact that they could.

Has there ever been a hard assertion, by someone who’s in a position to know, that these logs do in fact exist?

I have a suspicion about one possible source of all this. Google’s privacy policy used to say (amended 7/2004):

“Google notes and saves [emphasis mine] information such as time of day, browser type, browser language, and IP address with each query.”

But the policy no longer says that. The current version reads: “When you use Google services, our servers automatically record information that your browser sends whenever you visit a website. These server logs may include information such as your web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.” Again, no information about what’s being done with that data or how long it’s kept.

Given the possibility that they don’t, I think it drastically changes the value proposition of those free subsidiary tools. Obviously, if you ask for your search history to be saved, they’re going to keep it. But maybe that decision is predicated on the assumption that they’re going to keep it anyway, and you might as well have access to it. If the answer is that they’re not keeping it, that’s a different question.

It’s critical to point out that these issues are not even close to limited to Google. Every search engine, every “free” service you give your data to, every hub of aggregated data on the web has the same problems.

Currently, there’s no way to make an informed decision, because privacy policies don’t include specific information about what data is kept, in what form, and for how long. With all of the disclosures in the past year of personal data lost, compromised, and requested, isn’t it time for us to know? In the beginning of the web, having a privacy policy at all was unheard of, but now everybody has one. I don’t think it’s too much to ask of the companies we do business with that the same be done with log retention policies.

I agree with the request to ask Google to delete those logs if they’re keeping them, but I haven’t seen any evidence that they are. Personally, I’d like to know.

]]> 2
More thoughts on Google Fri, 20 Jan 2006 15:55:13 +0000 adam Having examined the motion and letters, I see a different picture emerging.

I am not a lawyer, but from my reading of the motion, it appears that Google’s objections are thin. Really thin. Also, they seem to have been completely addressed by the scaling back of the DOJ requests. Of course, that’s not the complete story, but if the arguments in the motion are correct, it seems to me that Google will lose and be compelled to comply.

Based on the letters and other analysis, they’re also pulling the slippery slope defense – “we’re not going to comply with this because it will give you the expectation that we’re open for business and next time you can ask for personal information”. If that’s true, I think that’s the first good news I’ve heard out of them in years. Good luck with that.

Google’s own behavior is inconsistent with their privacy FAQ, which states: “Google does comply with valid legal process, such as search warrants, court orders, or subpoenas seeking personal information. These same processes apply to all law-abiding companies. As has always been the case, the primary protections you have against intrusions by the government are the laws that apply to where you live.” (Interestingly, this language is inconsistent with their full privacy policy, which states that Google only shares personal information … [when] “We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request.”)

I wonder if they intend to challenge the validity of the fishing expedition itself, which would be the real kicker (and probably invalidate the above paragraph). I also idly wonder if they expect to lose anyway and have simply refused to comply with bogus arguments in order to get the request entered into the public record.

Interesting stuff. A lot of my criticisms of Google are about their unwillingness to publicly state their intentions with respect to the data they get (and the extent to which they may or may not be retaining, aggregating, and correlating that data), and I don’t think this case is any different. I think Google’s interest here in not releasing records is aligned with the public good, and as such, I wish them well. It’s been asserted that Google has taken extraordinary steps to preserve the anonymity of its records, and that well may be true. It’s also kind of irrelevant. Beyond this specific case, of whether the government can request information about Google searches (let alone any of their more invasive services, or anyone’s more invasive services), is the issue of the ramifications of collecting, aggregating, and correlating this data in the first place.

There is no question that Google has access to a tremendous amount of data on everyone who interacts with its service. It is still troubling that its privacy policy is inadequate. It’s still troubling that Google (and Yahoo, and how many others) considers your IP address to be not personally identifiable information. It’s still troubling that Google (and Yahoo and how many others) do all of their transactions unencrypted and that search terms are included in the URL of the request. As this case has shown, Google’s actual behavior may not correlate to their stated intentions, of which there are few in the first place. By Google’s own slippery slope logic, this time it works for you – will it next time?

Perhaps it’s time to hold companies accountable for the records they keep.

]]> 0
Update on DOJ/Google Thu, 19 Jan 2006 23:52:09 +0000 adam This is a fascinating deconstruction of the court documents and letters available so far:

]]> 0
DOJ demands large chunk of Google data Thu, 19 Jan 2006 15:10:32 +0000 adam

The Bush administration on Wednesday asked a federal judge to order Google to turn over a broad range of material from its closely guarded databases.

The move is part of a government effort to revive an Internet child protection law struck down two years ago by the U.S. Supreme Court. The law was meant to punish online pornography sites that make their content accessible to minors. The government contends it needs the Google data to determine how often pornography shows up in online searches.

In court papers filed in U.S. District Court in San Jose, Justice Department lawyers revealed that Google has refused to comply with a subpoena issued last year for the records, which include a request for 1 million random Web addresses and records of all Google searches from any one-week period.

I’m sort of out of analysis about why this is bad, because I’ve said it all before.

See (particularly 4 and 5):


It really comes down to one thing.

If data is collected, it will be used.

It’s far past the time for us all to take an interest in who’s collecting what.

]]> 0
By the way, now’s probably a good time to update your hosts file Wed, 18 Jan 2006 16:54:01 +0000 adam The hosts file is a long list of known advertising and spyware domains. Using the hosts file makes these sites invisible to your computer.
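For those who haven’t seen one, a hosts file is just a plain text mapping of hostnames to IP addresses, and your machine consults it before asking any DNS server. Pointing a known ad or spyware domain at your own machine makes lookups for it dead-end. A hypothetical excerpt (the domains here are invented):

```
# map unwanted domains to the local machine so they never resolve
127.0.0.1  ads.example-tracker.com
127.0.0.1  beacon.example-spyware.net
```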

]]> 0
Sometimes it hurts to be right. Wed, 18 Jan 2006 16:37:01 +0000 adam ‘The Mozilla Team has quietly enabled a new feature in Firefox that parses ‘ping’ attributes to anchor tags in HTML. Now links can have a ‘ping’ attribute that contains a list of servers to notify when you click on a link. Although link tracking has been done using redirects and Javascript, this new “feature” allows notification of an unlimited and uncontrollable number of servers for every click, and it is not noticeable without examining the source code for a link before clicking it.’
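In markup, the feature being described looks something like this (a hypothetical example; the tracking URLs are invented). Note that nothing about the visible link hints at the extra requests:

```html
<a href="http://example.com/article"
   ping="http://tracker-one.example/log http://tracker-two.example/log">
  an innocent-looking link
</a>
```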

‘I’m sure this may raise some eye-brows among privacy conscious folks, but please know that this change is being considered with the utmost regard for user privacy. The point of this feature is to enable link tracking mechanisms commonly employed on the web to get out of the critical path and thereby reduce the time required for users to see the page they clicked on. Many websites will employ redirects to have all link clicks on their site first go back to them so they can know what you are doing and then redirect your browser to the site you thought you were going to. The net result is that you end up waiting for the redirect to occur before your browser even begins to load the site that you want to go to. This can have a significant impact on page load performance.’

Oh, well, that makes it all okay then. It’s for the user experience.

Where does Darin’s next paycheck come from? Oh, right. It’s Google. But I’m sure they have only our best interests at heart.

]]> 1
WMF official patch is out Sat, 07 Jan 2006 17:26:28 +0000 adam You should have the MS patch by now for the WMF exploit.

You can verify that the MS one is successfully installed by checking the box in Add or Remove Programs that says “show updates”. The proper one is KB912919.

Once this is installed, you should remove the unofficial patch, if you installed it.

]]> 0
WMF exploit unofficial patch Tue, 03 Jan 2006 16:55:39 +0000 adam This is pretty unbelievable. A major exploit was announced, diagnosed, and confirmed. While Microsoft has sat on their ass and said they won’t have a patch available FOR ANOTHER WEEK, someone has reverse engineered the binary and issued their own patch. The patch has been verified by a number of reliable sources as being trustworthy, effective, and reversible. Install it now, if you use Windows.

I’m not a lawyer, but this sounds like grounds for bringing a negligence lawsuit against Microsoft. It is completely unacceptable that the fix is simple enough that it can be done by someone without access to the source, there are known exploits in the wild, and it’s going to take another week for an official patch.

]]> 1