Adam Fields (weblog)

This blog is largely deprecated, but is being preserved here for historical interest. Check out my index page at for more up to date info. My main trade is technology strategy, process/project management, and performance optimization consulting, with a focus on enterprise and open source CMS and related technologies. More information. I write periodic long pieces here, shorter stuff goes on twitter or


The Tragedy of the Selfish, again and again.

I kept seeing this pattern emerge, and couldn’t find a good name for it (originally in reference to failures of the free market), so I came up with one. Simply put, The Tragedy of the Selfish is the situation that exists when an individual makes what is logically the best decision to maximize their own position, but the sum effect of everybody making their best decisions is that everybody ends up worse off rather than better.

You buy an SUV, then other people do, because they want to be safer too. Except that if enough people make that same decision, you’ve overall raised the chances that if you’re hit by a car, it’ll be an SUV, which will do much more damage than a smaller car. Everyone is better off if everyone else backs off and drives smaller cars.

You buy a gun because other people have guns. Then other people do, because they want to be safer too. Then… you see where I’m going with this. Perhaps you’ve made yourself safer in some limited way, but you’ve decreased the overall safety of the system.

This is not safety, it’s mutually assured destruction.


Why all this mucking about with irrevocable licenses?

The Google+ Terms of Service include various provisions to give them license to display your content, and this has freaked out a bunch of professional photographers:

‘By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services.’

I don’t even understand why this is necessary. Why can’t this just be ‘you give us a license to display your content on the service until you delete it’?


The Google Chrome terms of service are hilarious

I’ve been very busy lately, but this is just too much to not comment on.

There are other articles about how the Google Chrome terms of service give Google an irrevocable license to use any content you submit through “The Services” (a nice catchall term which includes all Google products and services), but the analysis really hasn’t gone far enough – those articles gloss over the fact that this applies not only to content you submit, but also to content you display. Of course, since this is a WEB BROWSER we’re talking about, that means every page you view with it.

In short, when you view a web page with Chrome, you affirm to Google that you have the right to grant Google an irrevocable license to use it to “display, distribute and promote the Services”, including making such content available to others. If you don’t have that legal authority over every web page you’ve visited, you’ve just fraudulently granted that license to Google and may yourself be liable to the actual copyright owner. (If you do, of course, you’ve just granted them that license for real.) I’m not a lawyer, but I suspect that Google has either committed mass inducement to fraud or the entire EULA (which lacks a severability clause) is impossible to obey and therefore void. [Update: there is a severability clause in the general terms, which I missed on the first reading. Does that mean that the entire content provisions would be removed, or just the parts that apply to the license you grant Google over the content you don't have copyright to? I don't know.]

Even more so than usual, these terms are, quite frankly, ridiculous and completely inappropriate for not only a web browser but an open source web browser.

Nice going guys.


Why don’t we have degrees of terrorism?

We have different classifications for the crime of “killing a person”, and those classifications encompass whether it was an accident or not, whether it was premeditated, and how many people were killed – i.e., how serious a crime has actually been committed. But when we talk about terrorism, it’s always just “terrorism”. This results in the really sinister megacriminals being lumped in with the group of morons who can’t get it together to leave the house without forgetting to wear pants, let alone actually arrange to blow anything up.

Most “terrorists” are less dangerous than your average serial killer or bus accident, but we still lump them all together simply because they have an agenda.

Similar to murder, I think we need some sort of classification system for these crimes:

  1. Intent to commit terrorism: you “plotted” with someone who may or may not have been an undercover cop, but didn’t actually acquire passports or learn how to make liquid explosives
  2. Manfrightening: you committed some other crime, and along the way someone got scared and called you a terrorist, but you have no stated agenda.
  3. Terrorism in the third degree: You actually blew up something, but no one was hurt.
  4. Terrorism in the second degree: You actually blew up something and killed some people, but failed to garner any sympathy from the public.
  5. Terrorism in the first degree: You actually blew up something, lots of people were killed, and the US declared war on some country you were unaffiliated with.



Circo Hazardous Sock Packaging

Filed under: — adam @ 2:27 pm

I happened to take my 6-month-old to Target this weekend, and we bought him some socks. He was playing with the package and put it in his mouth, and managed to get the little plastic hanger piece out. There’s certainly plenty to say about parental responsibility and not letting the baby get into dangerous things, but until this little plastic piece disappeared (it turns out he dropped it on the floor), we didn’t even give a second thought to the idea that a pair of socks for a 6-12 month old might contain this kind of choking hazard. I’m normally pretty paranoid about this. Didn’t these things used to go all the way across? Is this REALLY the place where Target wants to save a tenth of a cent of plastic? It seems like a lawsuit waiting to happen.

Be careful out there…

Circo Socks Hazardous Packaging




Google has just bought a lot of browsing history of the internet

I pointed out that YouTube was a particularly valuable acquisition to Google because their videos are the most embedded in other pages of any of the online video services. When you embed your own content in someone else’s web page, you get the ability to track who visits that page and when, to the extent that you can identify them. This is how Google Analytics works – there’s a small piece of javascript loaded into the page which is served from one of Google’s servers, and then every time someone hits that page, they get the IP address, the URL of the referring page, and whatever cookies are stored with the browser for the domain. As I’ve discussed before, this is often more than enough information to uniquely identify a person with pretty high accuracy.
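That collection step can be sketched concretely. This is an illustrative stand-in, not Google’s actual implementation; the names and fields are hypothetical, showing only the record a third-party beacon can assemble from a single page view:

```python
from dataclasses import dataclass

@dataclass
class PageView:
    ip: str        # who loaded the page
    referrer: str  # which page embedded the beacon (the page you were reading)
    cookies: str   # any identifier the tracker's domain previously set

def record_hit(headers: dict, client_ip: str) -> PageView:
    # Everything here arrives automatically with the request for the
    # embedded script or video -- no cooperation from the user needed.
    return PageView(
        ip=client_ip,
        referrer=headers.get("Referer", ""),
        cookies=headers.get("Cookie", ""),
    )
```

Combine a stable cookie with an IP address and a referrer trail, and the “anonymous” visitor is usually unique.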

DoubleClick has been doing this for a lot longer than Google has, and they have a lot of history there. In addition to their ad network, Google has also just acquired that entire browsing history, profiles of the browsing of a huge chunk of the web. Google’s privacy policy does not seem to apply to information acquired from sources other than, so they’re probably free to do whatever they want with this profile data.

[Update: In perusing their privacy policy, I noted this: If Google becomes involved in a merger, acquisition, or any form of sale of some or all of its assets, we will provide notice before personal information is transferred and becomes subject to a different privacy policy. This doesn't specify which end of the merger they're on, so maybe this does cover personal information they acquire. I wonder if they're planning on informing everyone included in the DoubleClick database.]


Remember when DoubleClick was pretty universally reviled and sued for privacy violations a few years back?

Oh yeah.


ISPs apparently sell your clickstream data

Apparently, “anonymized” clickstream data (the urls of which websites you visited and in what order) is available for sale directly from many ISPs. There is no way that this is sufficiently anonymized. It is readily obvious from reading my clickstream who I am – urls for MANY online services contain usernames, and anyone who uses any sort of online service is almost certainly visiting their own presence far more than anything else. All it takes is one of those usernames to be tied to a real name, and your entire clickstream becomes un-anonymized, irreversibly and forever.
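As a sketch of how little effort re-identification takes, here is a toy linking pass over a clickstream. The URL patterns are hypothetical examples of services that put usernames in their URLs:

```python
import re

# Hypothetical patterns for services that embed a username in the URL.
USER_PATTERNS = [
    re.compile(r"twitter\.com/(\w+)"),
    re.compile(r"flickr\.com/photos/([\w@-]+)"),
    re.compile(r"[?&]user=(\w+)"),
]

def likely_usernames(clickstream):
    """Collect usernames that leak out of an 'anonymized' list of URLs."""
    found = []
    for url in clickstream:
        for pattern in USER_PATTERNS:
            match = pattern.search(url)
            if match:
                found.append(match.group(1))
    return found
```

One hit tied to a real name, and every other URL in the stream is de-anonymized along with it.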

I’ve talked about the dangers of breaking anonymization with leaking keys before:

Short answer: It is not enough to say that a piece of data is not “personally identifiable” if it is unique and exists with a piece of personally identifiable data somewhere else. More importantly, it doesn’t even have to be unique or completely personally identifiable – whether or not you can guess who a person is from a piece of data is not a black and white distinction, and simply being able to guess who a person might be can leak some information that might confirm their identity when combined with something else.

This is also completely setting aside the fact that you have very little direct control over much of your clickstream, since there are all sorts of ways for a site you visit to get your browser to load things – popups, javascript includes, and images being the most prevalent.

Preserving anonymity is hard. This is an egregious breach of privacy. Expect lawsuits if this is true.



Google to purge some data after 18-24 months

Filed under: — adam @ 6:33 pm

Well, that’s a nice start. Good for them.



Privacy is about access, not secrecy

There’s a very important point to be made here.

Privacy in the digital age is not necessarily about secrecy, it’s about access. The question is no longer whether someone can know a piece of information, but also how easy it is to find.

If you take a bunch of available information and aggregate it to make it easily accessible, that’s arguably a worse privacy violation than taking a secret piece of information and making it “public” but putting it where no one can find it (or where they have to go looking for it).

This is a very important distinction when you’re looking at corporate log gathering and data harvesting. Sure – your IP address or your phone number may be “public information”, but it’s still a privacy violation when it’s put in a big database with a bunch of other information about you and given to someone.
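A toy example, with all names and data invented, makes the aggregation point concrete: each source alone looks harmless, but the join is the violation:

```python
# Invented data: each dataset is individually "public" or "anonymous".
phone_book = {"Jane Doe": "555-0142"}  # a public listing: name -> number
call_log = {"555-0142": ["clinic", "divorce lawyer", "recruiter"]}  # keyed by number

def build_profile(name: str) -> dict:
    """Join two 'public' sources into one easily accessed record."""
    number = phone_book.get(name)
    return {
        "name": name,
        "number": number,
        "calls": call_log.get(number, []),
    }
```

Neither dataset reveals much by itself; the merged record is what changes who can know what, and how easily.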



Google has your logs (and all it took was a fart lighting video)

The non-obvious side of Google’s purchase of YouTube: Google now has access to the hit logs of every page that a YouTube video appears on, including LOTS of pages that were probably previously inaccessible to them. MySpace pages were probably going to get Google ads anyway, because of the big deal that happened there, but many others weren’t.

Add this to AdSense, the Google Web Accelerator, Google Web Analytics, and Google Maps, and that’s a lot of data being collected about browsing habits, and the number of sites you can browse without sending some data to Google has just dropped significantly.




Amazon Unbox is a travesty

I was going to write something about this, but Cory beat me to it.

Amazon Unbox has the worst terms of service I’ve seen in a long time. Like Cory, I’m a longtime Amazon supporter, and I think their customer service is outstanding, but this is a travesty. Way to fuck over the people who won’t actually read the terms because they just want to download a movie.

I only really have one thing to add with respect to the “if it has value then we have a right to charge money for it” proposition. Does the MPAA reserve the right to charge more retroactively if you enjoy a movie more than you expected to? That’s hidden value, right? This madness has to stop.

Mr. Bezos, you should be ashamed of yourself, and also whoever you put in charge of this.



Doing what the terrorists want

I’ve often said that terrorism is an auto-immune disease afflicting civilization. Bruce Schneier has a great article up about how responding to terrorism by locking things down is, in fact, exactly what the terrorists want.



AOL releases “anonymized” search data for 500k users

This is a serious breach of user privacy, and I can’t imagine there won’t be lawsuits over this.

Either they didn’t think this through, or this is the best way they could think of to raise a public outrage.



This is a great video of the ZDNet Executive Editor explaining what’s wrong with DRM.



Google Government search

I think it’s simultaneously good that Google is turning a watchful eye on the government, but also somewhat creepy that they’re putting themselves in the position of proxying people’s access to potentially sensitive information. I do NOT think that the Google privacy policy is sufficient to cover this situation.

As many have predicted, this is also likely to expose some interesting accidentally unprotected things at some point in the future.



The motivations of wiretapping

Boingboing points out this Wired article about a reporter who crashed a conference of wiretapping providers, mentioning this quotation in particular:

‘He sneered again. “Do you think for a minute that Bush would let legal issues stop him from doing surveillance? He’s got to prevent a terrorist attack that everyone knows is coming. He’ll do absolutely anything he thinks is going to work. And so would you. So why are you bothering these guys?”‘

It’s an interesting read, but I fundamentally disagree with the above statement, and this is the problem.

It’s not the surveillance that bothers me, it’s the resistance to oversight, even after the fact.

If there was any confidence that what they were doing was a reasonable tradeoff, they wouldn’t have to a) lie or b) break the law to do it. Yet they’ve done both of these things.

If the law enforcement community said “well shit, we’re out of ideas about how to stop these people, and so we really need to have our computers read everyone’s email and tap everyone’s phones and we guarantee that this information won’t be used for anything else, and anyone we find doing something nefarious will be dealt with according to due process”, then we could, you know, engage in a meaningful discussion about this. And then we could move on to the fact that “terrorist” is not a useful designation for a criminal, and then maybe we could fire the people who thought up this brilliant idea and find someone who would practice actual security because wholesale surveillance and profiling have been widely debunked as largely useless for anything besides persecution, political attacks, and invasions of privacy.

But we won’t, because that’s not what this is about.

This opinion of a member of the Dutch National Police is particularly telling:

‘He said that in the Netherlands, communications intercept capabilities are advanced and well established, and yet, in practice, less problematic than in many other countries. “Our legal system is more transparent,” he said, “so we can do what we need to do without controversy. Transparency makes law enforcement easier, not more difficult.”’

The technology exists, it’s not going away, and it’s really not the problem. The secrecy is the problem.



Privacy without hiding

Filed under: — adam @ 8:57 am

Excellent article from Bruce Schneier on why privacy is important, even if “you have nothing to hide”.

‘We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.’

Privacy is freedom. It is freedom from judgement, the freedom to stew in our own individual cognitive juices, the freedom to express and learn and argue.



New “security glitch” found in Diebold voting systems

Filed under: — adam @ 9:08 am

“Elections officials in several states are scrambling to understand and limit the risk from a “dangerous” security hole found in Diebold Election Systems Inc.’s ATM-like touch-screen voting machines.

The hole is considered more worrisome than most security problems discovered on modern voting machines, such as weak encryption, easily pickable locks and use of the same, weak password nationwide.”

Perhaps it’s time to acknowledge that the Diebold systems themselves ARE the security glitch.



US Mandatory Data Retention laws are coming

Filed under: — adam @ 9:35 am

Remember the privacy implications of the government asking Google for search data?

It’s going to get worse before it gets better. No online service considers your IP address to be private information, and now they will be required to maintain logs mapping your IP address to real contact information, for a period of at least one year after your account is closed.

The only way to prevent this information from being misused is to not keep it, and now there won’t be any choice.

I’ve discussed this before:



Watch out for the, uh, oven door scam

Apparently, crooks have been breaking into vacation homes, stealing the OVEN DOORS, repackaging them in real flat-screen TV boxes, and selling them to dupes on the street.

Words fail me.


MIT student told to drop out of school by the RIAA to pay settlement fines

Of course, this is nothing compared to the fact that the RIAA says you shouldn’t be allowed to break DRM even if it’s going to kill you if you don’t:

I’ve discussed this before:



Hidden dangers for consumers – Trojan Technologies

I’ve been collecting examples of cases where there are hidden dangers facing consumers, cases where the information necessary to make an informed decision about a product isn’t obvious, or isn’t included in most of the dialogue about that product. Sometimes, this deals with hidden implications under the law, but sometimes it’s about non-obvious capabilities of technology.

We’re increasingly entering situations where most customers simply can’t decide whether a certain product makes sense without lots of background knowledge about copyright law, evidence law, network effects, and so on. Things are complicated.

So far, I have come up with these examples, which would seem to be unrelated, but there’s a common thread – they’re all bad for the end user in non-obvious ways. They all seem safe on the surface, and often, importantly, they seem just like other approaches that are actually better, but they’re carrying hidden payloads – call them “Trojan technologies”.

To put it clearly, what I’m talking about are the cases where there are two different approaches to a technology, where the two are functionally equivalent and indistinguishable to the end user, but with vastly different implications for the various kinds of backend users or uses. Sometimes, the differences may not be evident until much later. In many circumstances, the differences may not ever materialize. But that doesn’t mean that they aren’t there.

  • Remote data storage. I wrote a previous post about this, and Kevin Bankston of the EFF has some great comments on it. Essentially, the problem is this. To the end user, it doesn’t matter where you store your files, and the value proposition looks like a tradeoff between having remote access to your own files or not being able to get at them easily because they’re on your desktop. But to a lawyer asking for those files, it makes a gigantic difference in whether they’re under your direct control or not. On your home computer, a search warrant would be required to obtain them, but on a remote server, only a subpoena is needed.
  • The recent debit card exploit has shed some light on the obvious vulnerabilities in that system, and it’s basically the same case. To a consumer, using a debit card looks exactly the same as using a credit card. But the legal ramifications are very different, and their use is protected by different sets of laws. Credit card liability is typically geared in favor of the consumer – if your card is subject to fraud, there’s a maximum amount you’ll end up being liable for, and your account will be credited immediately, as you simply don’t owe the money you didn’t charge yourself. Using a debit card, the money is deducted from your account immediately, and you have to wait for the investigation to be completed before you get your refund. A lot of people recently discovered this the hard way. There’s a tremendous amount of good coverage of debit card fraud on the Consumerist blog.
  • The Goodmail system, being adopted by Yahoo and AOL, is a bit more innocuous on the surface, but it ties into the same question. On the face of it, it seems like not a terrible idea – charge senders for guaranteed delivery of email. But the very idea carries with it, outside of the normal dialogue, the implications of breaking network neutrality (the concept that all traffic gets equal treatment on the public internet) that extend into a huge debate being raged in the confines of the networking community and the government, over such things as VoIP systems, Google traffic, and all kinds of other issues. I’m not sure if this really qualifies in the same league as my other examples, but I wanted to mention it here anyway. There’s a goodmail/network neutrality overview discussion going on over on Brad Templeton’s blog.
  • DRM is sort of the most obvious. Consumers can’t tell what the hidden implications of DRM are. This is partly because those limitations are subject to change, and that in itself is a big part of the problem. The litany of complaints is long – DRM systems destroy fair use, they’re security risks, they make things complicated for the user. I’ve written a lot about DRM in the past year and a half.
  • 911 service on VoIP is my last big example, and one of the first ones that got me started down this path. This previous post, dealing with the differences between multiple kinds of services called “911 service” on different networks, is actually a good introduction to this whole problem. I ask again: ‘Does my grandmother really understand the distinction between a full-service 911 center and a “Public Safety Answering Point”? Should she have to, in order to get a phone where people will come when she dials 911?’

I don’t have a good solution to this, beyond more education. This facet must be part of the consumer debate over new technologies and services. These differences are important. We need to start being aware, and asking the right questions. Not “what are we getting out of this new technology?”, but “what are we giving up?”.



Claim your settlement from Sony

If you bought an infected CD from Sony, you’re entitled to some benefits under the lawsuit settlement:


Zfone is simple encrypted voip telephony

Filed under: — adam @ 9:30 am

Phil Zimmermann, the guy who brought you PGP, has just released a public beta of his new open source encrypted VoIP software – Zfone. The beta is Mac/Linux only; the Windows version will be out in a month or so.

It’s an encrypting proxy for SIP calls using pre-existing software. I don’t know enough about how the protocol works to say if this would work with things like Vonage or not.

“In the future, the Zfone protocol will be integrated into standalone secure VoIP clients, but today we have a software product that lets you turn your existing VoIP client into a secure phone. The current Zfone software runs in the Internet Protocol stack on any Windows XP, Mac OS X, or Linux PC, and intercepts and filters all the VoIP packets as they go in and out of the machine, and secures the call on the fly. You can use a variety of different software VoIP clients to make a VoIP call. The Zfone software detects when the call starts, and initiates a cryptographic key agreement between the two parties, and then proceeds to encrypt and decrypt the voice packets on the fly. It has its own little separate GUI, telling the user if the call is secure.”

Zfone has been tested with these VoIP clients and VoIP services:
VoIP clients: X-Lite, Gizmo, and SJphone.
VoIP service providers: Free World Dialup,, and SIPphone.
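The flow the quote describes (detect the call, agree on a key, then rewrite voice packets on the fly) can be caricatured in a few lines. This is a structural sketch only; the functions are made up, the “cipher” is a toy, and none of this is Zfone’s actual key agreement:

```python
import hashlib

def agree_key(local_nonce: bytes, remote_nonce: bytes) -> bytes:
    # Stand-in for the real cryptographic key agreement done at call start.
    # Sorting the nonces makes both endpoints derive the same key.
    first, second = sorted([local_nonce, remote_nonce])
    return hashlib.sha256(first + second).digest()

def transform(key: bytes, packet: bytes) -> bytes:
    # Toy XOR "cipher" (NOT secure): the same call encrypts and decrypts,
    # mirroring how the proxy rewrites each voice packet in place.
    stream = hashlib.sha256(key).digest() * (len(packet) // 32 + 1)
    return bytes(a ^ b for a, b in zip(packet, stream))
```

A real implementation also has to authenticate the key exchange against man-in-the-middle attacks, which the toy above does not.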



Google forced to release records by the court

As predicted, U.S. Judge James Ware intends to force Google to hand over the requested data to the DoJ.



Massive fraud alert on Citibank ATMs

Filed under: — adam @ 4:17 pm

Some kind of massive fuckup is going on with the international ATM network, possibly a class break of the interbank ATM network. Lots of conflicting information, but it’s pretty clear that things are not going well:



Outrage fatigue roundup 3/2/2006

The big news this week – video showing that Bush knew that Katrina would destroy New Orleans a day before the storm hit:

Asking for complaint forms in Florida police stations gets you harassed and threatened:

Greek cell phone taps of high officials were enabled by embedded surveillance tech:

Zogby poll shows 72% of troops want to get out of Iraq in the next year, but also that 85% of them think they’re there to retaliate for Saddam’s attacking us on 9/11. So, there’s that:

Human rights abuses in Iraq are worse than under Saddam (oops, Freudian slip – I typed Bush there first):

Daily Kos is mumbling something about State-initiated impeachment:

And, a kitten:


Greek wiretaps were enabled by embedded spy code

Power, once given, will be abused. And not necessarily by those it’s given to.

Bruce Schneier has a blog entry about the Greek cell phone tapping scandal – about 100 cell phones of politicians and officials, including at the American embassy, had been tapped by an unknown party since the 2004 Olympics.

Bruce points out that the “malicious code” used to enable this was actually designed into the system as an eavesdropping mechanism for the police.

“There is an important security lesson here. I have long argued that when you build surveillance mechanisms into communication systems, you invite the bad guys to use those mechanisms for their own purposes. That’s exactly what happened here.”



This is what we mean by abuse of databases

Okay, here it is, folks.

When someone asks “what’s wrong with companies compiling huge databases of personal information?”, this is part of the answer:

Someone signed up for a Miller Brewery contest using a throwaway email address, and they tracked her down and signed up her “real” email address. The second link above concludes that they did it by using information collected by Equifax’s direct mail division, Naviant (which was supposed to have been shut down years ago). They own the domain from which the email was sent.

When we talk about privacy, it can mean a number of things. But indisputably, one of the definitions is “the right to be free from unauthorized intrusions”.

Maybe this is a small thing, but it’s a terrible precedent.

This person obviously didn’t want to be permanently signed up for messages from Miller. Letting an address expire is probably the ultimate form of “opt-out”. Yet, Miller thought it was okay to use personal information gleaned from who-knows-what sources to tie her to another email address, and send her more spam. Would they do the same thing if you changed your phone number to avoid telemarketers? What else is fair game?



The Hurtt Prize

Harold Hurtt, police chief of Houston, has advocated changing building permits to require cameras in public areas of malls and apartment complexes, to try to deter crime:

He’s quoted in the article, saying “I know a lot of people are concerned about Big Brother, but my response to that is, if you are not doing anything wrong, why should you worry about it?”

1) “Wrong” is always changing, and isn’t always correct.

2) Our society and legal system are neither constructed for nor capable of handling perfect law enforcement.

3) It’s not worth any price to catch all of the criminals. There are tradeoffs to be made.

The Hurtt Prize is a $1000-and-growing bounty offered for anyone who gets a video capture of Mr. Hurtt committing a crime.



Fun stuff you can find with Google

Filed under: — adam @ 6:01 pm

Nothing really new, but some interesting examples.

The world’s information doesn’t want to be organized.



China loves the Patriot Act

Filed under: — adam @ 7:36 pm

In an interview, a senior Chinese official responsible for policing the Internet defends China’s monitoring and filtering as no different from what other countries do to enforce their laws and keep the content on the internet “safe”. He points to the Patriot Act as evidence that the US is “doing a good job on this front”.



Storing your files on Google’s server is not a good idea

Filed under: — adam @ 2:29 pm

I was going to write something long about this, but Kevin Bankston of the EFF has beaten me to it and put together pretty much everything I was going to say.

Here’s the original piece:

In response to a criticism on the IP list that this piece was too hard on Google, Kevin wrote the following, which I reproduce here verbatim with permission. I think that this does an excellent job of summing up how I feel about these privacy issues. I have nothing personally against Google, or any of the other companies that I often “pick on” in pointing out potential flaws. I do think that somewhere along the way in getting to where we are now, we have lost some important things in the areas of corporate responsibility and consumer protections, and technology has advanced to the point where it’s not even obvious what has been lost. The tough thing is that there are often tradeoffs with useful functionality, and it’s not clear what you’re giving up in order to make use of that potential new feature.

So, in this case – yeah, it’s great that you can search your files from more than one computer, but Google hasn’t warned you that doing so by their method, under the current law, exposes your private data to less rigorous protection from search by various parties than it would have if it were left on your own computer. To most people, it doesn’t make any difference where their files are stored. To a lawyer with a subpoena in hand, it does. These are important distinctions, and they’re not being made to the general public. I believe it is the responsibility of those who understand these risks to bring this dialogue to those who don’t. It’s a big part of why I write this blog.

Kevin’s response:

Thanks for your feedback. I’m sorry if you found our press release inappropriately hostile to Google, although I would say it was appropriately hostile–not to Google or its folk, but to the use of this product, which we do think poses a serious privacy risk.

Certainly, the ability to search across computers is a helpful thing, but considering that we are advocating against the use of this particular product for that purpose, I’m not sure why we would include such a (fairly obvious) proposition in the release. And as to tone, well, again, the goal was to warn people off of this product, and you’re not going to do that by using weak language. Certainly, we’re not out to personally or unfairly attack the people at Google. Indeed, we work with them on a variety of non-privacy issues (and sometimes privacy issues, too). But it’s our job to forcefully point out when they are marketing a product that we think is dangerous to consumers’ privacy, and dropping in little caveats about how clever Google’s engineers are or how useful their products can be is unnecessary and counterproductive to that purpose.

I think it’s clear from the PR that our biggest problem here is with the law. But we are also very unhappy with companies–including but not limited to Google–that design and encourage consumers to use products that, in combination with the current state of the law, are bad for user privacy. Google could have developed a Search Across Computers product that addressed these problems, either by not storing the data on Google servers (there are and long have been similar remote access tools that do not rely on third party storage), or by storing the data in encrypted form such that only the user could retrieve it (it is encrypted on Google’s servers now, but Google has the key).

However, both of those design options would be inconsistent with one of Google’s most common goals: amassing user data as grist for the ad-targeting mill (otherwise known, by Google, as “delivering the best possible service to you”). As mentioned in the PR, Google says it is not scanning the files for that purpose yet, but has not ruled it out, and the current privacy policy on its face would seem to allow it. And although I for one have no problem with consensual ad-scanning per se, which technically is not much different than spam-filtering in its invasiveness, I do have a very big problem with a product that by design makes ad-scanning possible at the cost of user privacy. This is the same reason EFF objected to Gmail: not because of the ad-scanning itself, but the fact that Google was encouraging users, in its press and by the design of the product, to never delete their emails even though the legal protection for those stored communications is significantly reduced with time.

If Google wants to “not be evil” and continue to market products like this, which rely on or encourage storing masses of personal data with Google, it has a responsibility as an industry leader to publicly mobilize resources toward reforming the law and actively educating its users about the legal risks. Until the law is fixed, Google can and should be doing its best to design around the legal pitfalls, placing a premium on user privacy rather than on Google’s own access to user’s data. Unfortunately, rather than treating user privacy as a design priority and a lobbying goal, Google mostly seems to consider it a public relations issue. That being the case, it’s EFF’s job to counter their publicity, by forcefully warning the public of the risks and demanding that Google act as a responsible corporate citizen.

Once again, another reason why you should be donating money to the EFF. Do it now.

Tags: , , , ,


Detailed survey of verbatim answers from AOL, MS, Yahoo, and Google about what details they store

Declan McCullagh has compiled responses from AOL, Microsoft, Yahoo and Google on the following questions (two of which are nearly verbatim from my previous query, uncredited):

So we’ve been working on a survey of search engines, and what data they keep and don’t keep. We asked Google, MSN, AOL, and Yahoo the same questions:

- What information do you record about searches? Do you store IP addresses linked to search terms and types of searches (image vs. Web)?
- Given a list of search terms, can you produce a list of people who searched for that term, identified by IP address and/or cookie value?
- Have you ever been asked by an attorney in a civil suit to produce such a list of people? A prosecutor in a criminal case?
- Given an IP address or cookie value, can you produce a list of the terms searched by the user of that IP address or cookie value?
- Have you ever been asked by an attorney in a civil suit to produce such a list of search terms? A prosecutor in a criminal case?
- Do you ever purge these data, or set an expiration period of, for instance, 2 years or 5 years?
- Do you ever anticipate offering search engine users a way to delete that data?

Tags: , , ,


Blackmal.e warning

Filed under: — adam @ 5:07 pm

If you run a Windows machine, it’s probably a good idea to make sure your virus scans are up to date. There’s a nasty virus going around that’s set to delete some files on Feb 3.


US-VISIT approximate costs: $15M per criminal

Filed under: — adam @ 5:48 pm

The system has cost around $15 billion, and has caught about 1000 criminals. No terrorists, all immigration violations and common criminals.

This estimate doesn’t include lost tourism revenue, academic implications of detaining foreign students or professors, or a count of how many of those criminals might have been caught anyway.

Tags: , , ,

Ruby script to fetch hosts file and turn it into a privoxy block list

Filed under: — adam @ 1:34 pm

There are plenty of servers out there that could just disappear from the internet without much bad happening. They include known ad server, spam, and spyware sites. The fine folks at maintain a good list, which is up to about 10,000 entries now. Since I couldn’t figure out how to get privoxy to honor the local hosts file when doing DNS lookups, I wrote a little ruby script to fetch that file, break it down, and output a privoxy block list.

I chose ruby, because I’ve been working with it lately, and I really really like it. I find it incredibly easy to write, read, and work with.

If you’re a ruby developer, improvements of all kinds are welcomed. Please feel free to comment and discuss ways I could have made this more ruby-ish. Also, I haven’t quite grokked what the right approach is for ruby error/exception handling. Opinions on where checks should go are welcomed. For example, the whole thing is wrapped in a conditional block of opening the file. Do I need to handle any exception conditions, or is that all just taken care of properly?


require 'open-uri'

hosts = []
header = 1

# (URL omitted in the original post)
open('') do |file|
  file.each_line do |line|
    # skip if still in header
    header = 0 if line =~ /^#start/
    next if header == 1
    # skip comments
    next if line =~ /^\s*#/

    # add the hostname to an array
    hosts << line.split[1]
  end
end

# write the output file
open('privoxy_user_actions.txt', 'w') do |outfile|
  outfile.puts "{ +block }"
  hosts.each do |host|
    outfile.puts host
  end
end
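
For reference, here’s the parsing logic run against a small made-up hosts-file fragment (a sketch, with invented hostnames; the real file is much longer):

```ruby
# A quick check of the parsing logic against a made-up hosts-file fragment.
sample = <<~HOSTS
  # some banner text
  #start
  127.0.0.1 ads.example.com
  # a comment
  127.0.0.1 tracker.example.net
HOSTS

header = 1
hosts = []
sample.each_line do |line|
  # skip everything before the "#start" marker, then skip comment lines
  header = 0 if line =~ /^#start/
  next if header == 1
  next if line =~ /^\s*#/
  # keep just the hostname (second whitespace-separated field)
  hosts << line.split[1]
end

puts hosts
```

Each hostname then becomes one line of the privoxy block list.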

Tags: , , ,


More specific Google tracking questions

I asked two very specific questions in a conversation with John Battelle, and he’s received unequivocal answers from Google:

1) “Given a list of search terms, can Google produce a list of people who searched for that term, identified by IP address and/or Google cookie value?”

2) “Given an IP address or Google cookie value, can Google produce a list of the terms searched by the user of that IP address or cookie value?”

The answer to both of them is “yes”.

Tags: , , ,

Flickr pictures, web beacons, and a modest proposal

As I noted in the comments of the previous post, I don’t have ads on the site, but I do have flickr pictures directly linked from my flickr account.

It is conceivable to me that flickr pictures could qualify as “web beacons” under the Yahoo privacy policy, and thus be used for tracking purposes. Presumably, this was not the original intention of the flickr developers, but it’s certainly a possibility now that they’re owned by Yahoo. Are the access logs for the static flickr pictures available to Yahoo? Probably. Are they correlated with other sorts of usage information? It’s not clear. Presumably, flickr pictures are linked in places where standard Yahoo web beacons can’t go, because they’re not invited (like on this site, for example).

I think my conclusion is that this is probably not a problem, but maybe it is. It and other sorts of distributed 3rd party tracking all have one thing in common:

It’s called HTTP_REFERER.

Here’s how it works. When you make a request for any old random web page that contains a 3rd party ad or an image or a javascript library or whatever, your browser fetches the embedded piece of content from the 3rd party. When it does that, it sends the URL of the page you visited as part of the request, in a field called the referer header (yes, it’s misspelled).

So, every time you visit a web page:

  • You send the URL to the owner of the page. So far so good.
  • You send your IP address to the owner of the page. Not terrible in itself.
  • You send the URL of the page you visited to the owner of the 3rd party content. And this is where it starts to degrade a little.
  • You send your IP address to the owner of the 3rd party content. The owner of the 3rd party content may be able to set a cookie identifying you. Modern browsers are set by default to refuse 3rd party cookies. However, if that 3rd party has ever set a cookie on your browser before (say, if you hit their site directly), they can still read it. In any case, you can be identified in some incremental way.
  • The next time you visit another site with content from the same 3rd party, they can probably identify you again.
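
To make the mechanism concrete, here’s a minimal Ruby sketch of the request your browser builds when it fetches a 3rd party image (the URLs are hypothetical, for illustration only):

```ruby
require 'net/http'

# Hypothetical page you are reading, which embeds 3rd party content.
page_url = 'http://example.com/some/article.html'

# Your browser builds a request to the 3rd party for the embedded content...
req = Net::HTTP::Get.new('/tracker/pixel.gif')

# ...and volunteers the page you were reading in the Referer header
# (yes, the header name really is misspelled).
req['Referer'] = page_url

# The 3rd party now knows both your IP address (from the connection)
# and exactly which page you were looking at.
puts req['Referer']
```

Collect that pair across many sites, and the 3rd party has a log of your browsing.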

That referer URL is a significant key that ties a lot of browsing habits together.

There’s an important distinction to be made here. The referer header makes it possible for 3rd party sites to track your content, and it’s only one of many ways. Doing away with the referer header won’t prevent the sites running 3rd party tracking content from doing so. The owner of the site can always send the URL you’re looking at to the 3rd party as part of the request, even if your browser isn’t. However, what this does prevent is tracking without the consent of the owner of the site you’re looking at. Of all of the sites you’re looking at, actually. Judging from my admittedly limited conversations with site owners, there are a LOT of people out there who have no idea that their users can be tracked if they include 3rd party ads on their site, or flickr images, or whatever. (Again, not to say that their users are being tracked, but the possibility is there.)

Again, the site that includes the ad or image or whatever isn’t sending that information – your browser is, and this is a legacy of the early days of the web. Some browsers allow you to turn it off and not send any referer information. I’d argue that this should be off by default, because the disadvantages outweigh the benefits. I’m told that legitimate advertisers don’t rely on the referer header anyway, because it can be unreliable. If that’s true, that’s even less reason to keep it around.

Suggestion number one was “Tracking information that’s linked to personally identifiable information should also be considered personally identifiable“.

Perhaps suggestion two is “Let’s do away with the Referer header”. (Of course, this comes on the heels of a Google-employed Firefox developer adding more tracking features instead of taking them away.)

Arguments for or against? Are there any good uses for this that are worth the potential for abuse?

Tags: , , , , , ,


What’s the big fuss about IP addresses?

Given the recent fuss about the government asking for search terms and what qualifies as personally identifiable information, I want to explain why IP address logging is a big deal. This explanation is somewhat simplified to make the cases easier to understand without going into complete detail of all of the possible configurations, of which there are many. I think I’ve kept the important stuff without dwelling on the boundary cases, but be aware that your setup may differ somewhat. If you feel I’ve glossed over something important, please leave a comment.

First, a brief discussion of what IP addresses are and how they work. Slightly simplified, every device that is connected to the Internet has a unique number that identifies it, and this number is called an IP address. Whenever you send any normal network traffic to any other computer on the network (request a web page, send an email, etc…), it is marked with your IP address.

There are three standard cases to worry about:

  1. If you use dialup, your analog modem has an IP address. Remote computers see this IP address. (This case also applies if you’re using a data aircard, or using your cell phone as a modem.)
  2. If you have a DSL or cable connection, your DSL/cable modem has an IP address when it’s connected, and your computer has a separate internal IP address that it uses to only communicate with the DSL or cable modem, typically mediated by a home router. Remote computers see the IP address of the DSL/cable modem. (This case also applies if you’re using a mobile wifi hotspot.)
  3. If you’re directly connected to the internet via a network adapter, your network adapter has an IP address. Remote computers see this IP address.

Sometimes, IP addresses are static, meaning they’re manually assigned and don’t change automatically unless someone changes them (typically, only for case #3). Often, they’re dynamic, which means they’re assigned automatically with a protocol called DHCP, which allows a new network connection to automatically pick up an IP address from an available pool. But just because they can change doesn’t mean they will change. Even dynamic IP addresses can remain the same for months or years at a time. (The servers you’re communicating with also have IP addresses, and they are typically static.)
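
As a quick illustration (a Ruby sketch, not part of the original discussion), you can list the addresses your own machine holds; on a typical home setup these are loopback or private addresses, distinct from the public address your modem presents to remote computers:

```ruby
require 'socket'

# List the IPv4 addresses assigned to this machine's interfaces.
# On a home DSL/cable setup these are usually loopback or private
# (RFC 1918) addresses; the address remote servers actually see
# belongs to your modem/router.
addrs = Socket.ip_address_list.select(&:ipv4?)
addrs.each do |addr|
  kind = if addr.ipv4_loopback? then 'loopback'
         elsif addr.ipv4_private? then 'private (internal)'
         else 'public'
         end
  puts "#{addr.ip_address} - #{kind}"
end
```

If everything printed is loopback or private, you’re in case #2 above: the provider’s logs, not your machine, hold the mapping from the public address to you.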

In order to see how an IP address may be personally identifiable information, there’s a critical question to ask – “where do IP addresses come from, and what information can they be correlated with?”.

Depending on how you connect to the internet, your IP address may come from different places:

  • If you use dialup, your modem will get its IP address from the dialup ISP, with which you have an account. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and billing details are part of your account information. By recording the phone number you call from, they may be able to identify your physical location.
  • If you have a DSL or cable connection, your DSL/cable modem will get its IP address from the DSL/cable provider. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and physical location, and probably other information about you, are part of your account information.
  • If you’re using a public wifi access point, you’re probably using the IP address of the access point itself. If you had to log in to an account, your name and physical location, and probably other information about you, are part of your account information. If you’re using someone else’s open wifi point, you look like them to the rest of the internet. This case is an exception to the rest of the points outlined in this article.
  • If you’re directly connected to the internet via a network adapter, your network adapter will get its IP address from the network provider. In an office, this is typically the network administrator of the company. Your network administrator knows which computer has which IP address.

None of this information is secret in the traditional sense. It is probably confidential business information, but in all cases, someone knows it, and the only thing keeping it from being further revealed is the willingness or lack thereof of the company or person who knows it.

While an IP address may not be enough to identify you personally, there are strong correlations of various degrees, and in most cases, those correlations are only one step away. By itself, an IP address is just a number. But it’s trivial to find out who is responsible for that address, and thus who to ask if you want to know who it’s been given out to. The logs may be kept indefinitely, or destroyed on a regular basis – it’s entirely up to each individual organization.

Up until now, I’ve only discussed the implications of having an IP address. The situation gets much much worse when you start using it. Because every bit of network traffic you use is marked with your IP address, it can be used to link all of those disparate transactions together.

Despite these possible correlations, not one of the major search engines considers your IP address to be personally identifiable information. [Update: someone asked where I got this conclusion. It's from my reading of the Google, Yahoo, and MSN Search privacy policies. In all cases, they discuss server logs separately from the collection of personal information (although MSN Search does have it under the heading of "Collection of Your Personal Information", it's clearly a separate topic). If you have some reason to believe I've made a mistake, I'm all ears.] While this may technically be true if you take an IP address by itself, it is a highly disingenuous position to take when logs exist that link IP addresses with computers, physical locations, and account information… and from there with people. Not always, but often. The inability to link your IP address with you depends always on the relative secrecy of these logs, what information is gathered before you get access to your IP address, and what other information you give out while using it.

Let’s bring one more piece into the puzzle. It’s the idea of a key. A key is a piece of data in common between two disparate data sources. Let’s say there’s one log which records which websites you visit, and it stores a log that only contains the URL of the website and your IP address. No personal information, right? But there’s another log somewhere that records your account information and the IP address that you happened to be using. Now, the IP address is a key into your account information, and bringing the two logs together allows the website list to be associated with your account information.

  • Have you ever searched for your name? Your IP address is now a key to your name in a log somewhere.
  • Have you ever ordered a product on the internet and had it shipped to you? Your IP address is now a key to your home address in a log somewhere.
  • Have you ever viewed a web page with an ad in it served from an ad network? Both the operator of the web site and the operator of the ad network have your IP address in a log somewhere, as a key to the sites you visited.

The list goes on, and it’s not limited to IP addresses. Any piece of unique data – IP addresses, cookie values, email addresses – can be used as a key.
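
The join described above can be sketched in a few lines of Ruby (entirely made-up data, invented names and addresses, for illustration only):

```ruby
# Two logs that look harmless on their own: one maps IPs to URLs
# visited, the other maps IPs to account information.
web_log = [
  { ip: '203.0.113.7',  url: 'http://example.com/search?q=my+name' },
  { ip: '198.51.100.9', url: 'http://example.com/other' },
]
account_log = [
  { ip: '203.0.113.7', account: 'Jane Doe, 123 Main St' },
]

# Index the account log by its key (the IP address)...
by_ip = account_log.group_by { |rec| rec[:ip] }

# ...then every web-log entry with a matching key is tied to a person.
matches = []
web_log.each do |visit|
  (by_ip[visit[:ip]] || []).each do |acct|
    matches << "#{acct[:account]} visited #{visit[:url]}"
  end
end
puts matches
```

Neither log contains a name next to a URL, but one key is all it takes to bring them together.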

Data mining is the act of taking a whole bunch of separate logs, or databases, and looking for the keys to tie information together into a comprehensive profile representing the correlations. To say that this information is definitely being mined, used for anything, stored, or even ever viewed is certainly alarmist, and I don’t want to imply that it is. But the possibility is there, and in many cases these logs are being kept. If they’re not being used in that way now, the only thing really standing in the way is the inaction of those who have access to the pieces, or can get it.

If the information is recorded somewhere, it can be used. This is a big problem.

There are various ways to mask your IP address, but that’s not the whole scope of the problem, and it’s still very easy to leak personally identifiable information.

I’ll start with one suggestion for how to begin to address this problem:

Any key information associated with personally identifiable information must also be considered personally identifiable.

[Update: I've put up a followup post to this one with an additional suggestion.]

Tags: , , , , ,


Some evidence that Google does keep personally identifiable logs

This article from Internet Week has Alan Eustace, VP of Engineering for Google, on the record talking about the My Search feature.

“Anytime, you give up any information to anybody, you give up some privacy,” Eustace said.

With “My Search,” however, information stored internally with Google is no different than the search data gathered through its Google .com search engine, Eustace said.

“This product itself does not have a significant impact on the information that is available to legitimate law enforcement agencies doing their job,” Eustace said.

This seems pretty conclusive to me – signing up for saved searches doesn’t (or didn’t, in April 2005) change the way the search data is stored internally.


(This was pointed out to me by Ray Everett-Church in the comments of the previous post, and covered on his blog.)

Tags: , , , , , ,


Does Google keep logs of personal data?

The question is this – is there any evidence that Google is keeping logs of personally identifiable search history for users who have not logged in and for logged-in users who have not signed up for search history? What about personal data collected from Gmail, and Google Groups, and Google Desktop? Aggregated with search? Kept personally identifiably? (Note: For the purposes of this conversation, even though Google does not consider your IP address to be personally identifiable, at least according to their privacy policy, I do.)

It is not arguable that they could keep those logs, but I think every analysis I’ve seen is simply repeating the assumption that they do, based on the fact that they could.

Has there ever been a hard assertion, by someone who’s in a position to know, that these logs do in fact exist?

I have a suspicion about one possible source of all this. Google’s privacy policy used to say (amended 7/2004):

“Google notes and saves [emphasis mine] information such as time of day, browser type, browser language, and IP address with each query.”

But the policy no longer says that. The current version reads: “When you use Google services, our servers automatically record information that your browser sends whenever you visit a website. These server logs may include information such as your web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.” Again, no information about what’s being done with that data or how long it’s kept.

Given the possibility that they don’t, I think it drastically changes the value proposition of those free subsidiary tools. Obviously, if you ask for your search history to be saved, they’re going to keep it. But maybe that decision is predicated on the assumption that they’re going to keep it anyway, and you might as well have access to it. If the answer is that they’re not keeping it, that’s a different question.

It’s critical to point out that these issues are not even close to limited to Google. Every search engine, every “free” service you give your data to, every hub of aggregated data on the web has the same problems.

Currently, there’s no way to make an informed decision, because privacy policies don’t include specific information about what data is kept, in what form, and for how long. With all of the disclosures in the past year of personal data lost, compromised, and requested, isn’t it time for us to know? In the beginning of the web, having a privacy policy at all was unheard of, but now everybody has one. I don’t think it’s too much to ask of the companies we do business with that the same be done with log retention policies.

I agree with the request to ask Google to delete those logs if they’re keeping them, but I haven’t seen any evidence that they are. Personally, I’d like to know.

Tags: , , , , , ,


More thoughts on Google

Having examined the motion and letters, I see a different picture emerging.

I am not a lawyer, but from my reading of the motion, it appears that Google’s objections are thin. Really thin.
Also, they seem to have been completely addressed by the scaling back of the DOJ requests. Of course, that’s not the complete story, but if the arguments in the motion are correct, it seems to me that Google will lose and be compelled to comply.

Based on the letters and other analysis, they’re also pulling the slippery slope defense – “we’re not going to comply with this because it will give you the expectation that we’re open for business and next time you can ask for personal information”. If that’s true, I think that’s the first good news I’ve heard out of them in years. Good luck with that.

Google’s own behavior is inconsistent with their privacy FAQ, which states: “Google does comply with valid legal process, such as search warrants, court orders, or subpoenas seeking personal information. These same processes apply to all law-abiding companies. As has always been the case, the primary protections you have against intrusions by the government are the laws that apply to where you live.” (Interestingly, this language is inconsistent with their full privacy policy, which states that Google only shares personal information “… [when] We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request.”)

I wonder if they intend to challenge the validity of the fishing expedition itself, which would be the real kicker (and probably invalidate the above paragraph). I also idly wonder if they expect to lose anyway and have simply refused to comply with bogus arguments in order to get the request entered into the public record.

Interesting stuff. A lot of my criticisms of Google are about their unwillingness to publicly state their intentions with respect to the data they get (and the extent to which they may or may not be retaining, aggregating, and correlating that data), and I don’t think this case is any different. I think Google’s interest here in not releasing records is aligned with the public good, and as such, I wish them well. It’s been asserted that Google has taken extraordinary steps to preserve the anonymity of its records, and that may well be true. It’s also kind of irrelevant. Beyond this specific case of whether the government can request information about Google searches (let alone any of their more invasive services, or anyone’s more invasive services) lies the issue of the ramifications of collecting, aggregating, and correlating this data in the first place.

There is no question that Google has access to a tremendous amount of data on everyone who interacts with its service. It is still troubling that its privacy policy is inadequate. It’s still troubling that Google (and Yahoo, and how many others) considers your IP address to be not personally identifiable information. It’s still troubling that Google (and Yahoo and how many others) do all of their transactions unencrypted and that search terms are included in the URL of the request. As this case has shown, Google’s actual behavior may not correlate to their stated intentions, of which there are few in the first place. By Google’s own slippery slope logic, this time it works for you – will it next time?

Perhaps it’s time to hold companies accountable for the records they keep.


Update on DOJ/Google

This is a fascinating deconstruction of the court documents and letters available so far:

DOJ demands large chunk of Google data

The Bush administration on Wednesday asked a federal judge to order Google to turn over a broad range of material from its closely guarded databases.

The move is part of a government effort to revive an Internet child protection law struck down two years ago by the U.S. Supreme Court. The law was meant to punish online pornography sites that make their content accessible to minors. The government contends it needs the Google data to determine how often pornography shows up in online searches.

In court papers filed in U.S. District Court in San Jose, Justice Department lawyers revealed that Google has refused to comply with a subpoena issued last year for the records, which include a request for 1 million random Web addresses and records of all Google searches from any one-week period.

I’m sort of out of analysis about why this is bad, because I’ve said it all before.

See (particularly 4 and 5):


It really comes down to one thing.

If data is collected, it will be used.

It’s far past the time for us all to take an interest in who’s collecting what.


By the way, now’s probably a good time to update your hosts file

Filed under: — adam @ 11:54 am

The hosts file is a long list of known advertising and spyware domains. Using the hosts file makes these sites invisible to your computer.

Sometimes it hurts to be right.

Filed under: — adam @ 11:37 am

‘The Mozilla Team has quietly enabled a new feature in Firefox that parses ‘ping’ attributes to anchor tags in HTML. Now links can have a ‘ping’ attribute that contains a list of servers to notify when you click on a link. Although link tracking has been done using redirects and Javascript, this new “feature” allows notification of an unlimited and uncontrollable number of servers for every click, and it is not noticeable without examining the source code for a link before clicking it.’

‘I’m sure this may raise some eye-brows among privacy conscious folks, but please know that this change is being considered with the utmost regard for user privacy. The point of this feature is to enable link tracking mechanisms commonly employed on the web to get out of the critical path and thereby reduce the time required for users to see the page they clicked on. Many websites will employ redirects to have all link clicks on their site first go back to them so they can know what you are doing and then redirect your browser to the site you thought you were going to. The net result is that you end up waiting for the redirect to occur before your browser even begins to load the site that you want to go to. This can have a significant impact on page load performance.’

Oh, well, that makes it all okay then. It’s for the user experience.

Where does Darin’s next paycheck come from? Oh, right. It’s Google. But I’m sure they have only our best interests at heart.


WMF official patch is out

Filed under: — adam @ 12:26 pm

You should have the MS patch by now for the WMF exploit.

You can verify that the MS one is successfully installed by checking the box in Add or Remove Programs that says “show updates”. The proper one is KB912919.

Once this is installed, you should remove the unofficial patch, if you installed it.


WMF exploit unofficial patch

Filed under: — adam @ 11:55 am

This is pretty unbelievable. A major exploit was announced, diagnosed, and confirmed. While Microsoft has sat on their ass and said they won’t have a patch available FOR ANOTHER WEEK, someone has reverse engineered the binary and issued their own patch. The patch has been verified by a number of reliable sources as being trustworthy, effective, and reversible. Install it now, if you use Windows.

I’m not a lawyer, but this sounds like grounds for bringing a negligence lawsuit against Microsoft. It is completely unacceptable that the fix is simple enough that it can be done by someone without access to the source, there are known exploits in the wild, and it’s going to take another week for an official patch.


Nasty MS Web Image Exploit

Filed under: — adam @ 10:01 am

There’s an exploit of the Windows code used to render WMF files (windows metafile – it’s an image format). There are multiple reports of sites in the wild exploiting this to drop trojans.

***All versions of IE are vulnerable to automatic infection.***
Earlier versions of Firefox (1.04) and all versions of Opera are still vulnerable, but they prompt you first. Firefox 1.5 is not vulnerable. Some email and IM programs may be vulnerable if they do previews or you click on a link that opens in a vulnerable browser or opens a vulnerable desktop program (Windows Picture and Fax Viewer).

Obviously, the best workaround for this is not to be using Windows.

If you can, disable all access to WMF files at the network level.

A temporary workaround (although it’s apparently still possible to get infected if you open a malicious file in mspaint):

A Microsoft spokesperson said the company is investigating, though no official word from them yet. A couple of security firms, including Verisign’s iDefense, have published workarounds that appear to mitigate the threat. According to iDefense, Windows users can disable the rendering of WMF files using the following hack:

1. Click on the Start button on the taskbar.
2. Click on Run…
3. Type “regsvr32 /u shimgvw.dll” to disable.
4. Click OK when the confirmation dialog appears.

iDefense notes that this workaround may interfere with certain thumbnail images loading correctly, though I have used the hack on my machine and haven’t had any problems yet. The company notes that once Microsoft issues a patch, the WMF feature may be enabled again by entering the command “regsvr32 shimgvw.dll” in step three above.

Now’s a good time to point out that VMware now has a free player that you can use to run pre-built machines, and also a “safe web browsing” machine that you can download that comes pre-configured with Firefox 1.5 running on Ubuntu. If you have enough memory, this is not a bad thing to do for general web browsing.


More Schneier on secret surveillance

Filed under: — adam @ 10:01 am

“This rationale was spelled out in a memo written by John Yoo, a White House attorney, less than two weeks after the attacks of 9/11. It’s a dense read and a terrifying piece of legal contortionism, but it basically says that the president has unlimited powers to fight terrorism. He can spy on anyone, arrest anyone, and kidnap anyone and ship him to another country … merely on the suspicion that he might be a terrorist. And according to the memo, this power lasts until there is no more terrorism in the world.”


Schneier on NSA surveillance in Salon

Filed under: — adam @ 9:08 am

Bruce Schneier has an excellent piece in Salon on the recent wiretap revelations:


Perry on felonious wiretaps

Filed under: — adam @ 11:36 am

This is an editorial that Perry sent to his cryptography mailing list.

I posted this earlier today to a mailing list for cryptographers that I run. Please feel free to send it to anyone you like.

To: cryptography
Subject: A small editorial about recent events.
From: “Perry E. Metzger”
Date: Sun, 18 Dec 2005 13:58:06 -0500

A small editorial from your moderator. I rarely use this list to express a strong political opinion — you will forgive me in this instance.

This mailing list is putatively about cryptography and cryptography politics, though we do tend to stray quite a bit into security issues of all sorts, and sometimes into the activities of the agency with the biggest crypto and sigint budget in the world, the NSA.

As you may all be aware, the New York Times has reported, and the administration has admitted, that the President of the United States ordered the NSA to conduct surveillance operations against US citizens without prior permission of the secret court known as the Foreign Intelligence Surveillance Court (the “FISC”). This is in clear contravention of 50 USC 1801 – 50 USC 1811, a portion of the US code that provides clear criminal penalties for violations. See:

The President claims he has the prerogative to order such surveillance. The law unambiguously disagrees with him.

There are minor exceptions in the law, but they clearly do not apply in this case. They cover only the 15 days after a declaration of war by congress, a period of 72 hours prior to seeking court authorization (which was never sought), and similar exceptions that clearly are not germane.

There is no room for doubt or question about whether the President has the prerogative to order surveillance without asking the FISC — even if the FISC is a toothless organization that never turns down requests, it is a federal crime, punishable by up to five years imprisonment, to conduct electronic surveillance against US citizens without court authorization.

The FISC may be worthless at defending civil liberties, but in its arrogant disregard for even the fig leaf of the FISC, the administration has actually crossed the line into a crystal clear felony. The government could have legally conducted such wiretaps at any time, but the President chose not to do it legally.

Ours is a government of laws, not of men. That means if the President disagrees with a law or feels that it is insufficient, he still must obey it. Ignoring the law is illegal, even for the President. The President may ask Congress to change the law, but meanwhile he must follow it.

Our President has chosen to declare himself above the law, a dangerous precedent that could do great harm to our country. However, without substantial effort on the part of you, and I mean you, every person reading this, nothing much is going to happen. The rule of law will continue to decay in our country. Future Presidents will claim even greater extralegal authority, and our nation will fall into despotism. I mean that sincerely. For the sake of yourself, your children and your children’s children, you cannot allow this to stand.

Call your Senators and your Congressman. Demand a full investigation, both by Congress and by a special prosecutor, of the actions of the Administration and the NSA. Say that the rule of law is all that stands between us and barbarism. Say that we live in a democracy, not a kingdom, and that our elected officials are not above the law. The President is not a King. Even the President cannot participate in a felony and get away with it. Demand that even the President must obey the law.

Tell your friends to do the same. Tell them to tell their friends to do the same. Then, call back next week and the week after and the week after that until something happens. Mark it in your calendar so you don’t forget about it. Politicians have short memories, and Congress is about to recess for Christmas, so you must not allow this to be forgotten. Keep at them until something happens.



New worms will chat with you via IM

Filed under: — adam @ 4:32 pm


Google really wants your logs

I wrote here about some of the privacy implications of Google’s data retention policies:

With the launch of Google Analytics, Google is now poised to collect that data not only from every Google visit, and every site that has Google ads on it, but also every site processed by Google for “analytical” purposes (although there’s probably a fair amount of overlap between the latter two).

Remember – Google does not consider your IP address to be personal information, so it’s exempt from most of the normal restrictions on how they use the data they collect. The terms of service for Google Analytics suspiciously do not mention whether Google is allowed to use any of the data they collect on your behalf. One must conclude that they assume they are, and consequently that they do. It’s unclear, but it’s probably the case that Google could, under the terms of these agreements, correlate search terms from your IP address with hits on other websites. I don’t see anything in there preventing them from doing so, because the two pieces of correlated data are obtained by different means.

Looky there, Google Web Accelerator is back

Filed under: — adam @ 12:59 pm

Google has apparently relaunched their controversial Web Accelerator.

I think I’ve already covered in detail all of the problems with this, and nothing seems to have changed except they’re just hoping people forgot about all of the reasons since last time, so just go read the previous articles if you missed them the first time around:

And especially this one:


Sony copy-protected CDs apparently contain rootkits

This article details the finding of an actual root kit (that is, a program designed to remain hidden from security software by cloaking itself and pretending to be part of the OS), that turned out to have been installed by a Sony copy-protected CD.

“I ran a scan on one of my systems and was shocked to see evidence of a rootkit. Rootkits are cloaking technologies that hide files, Registry keys, and other system objects from diagnostic and security software, and they are usually employed by malware attempting to keep their implementation hidden”

The EULA, also, apparently contained no mention of it.

This is probably illegal. I won’t be surprised in the least if Sony gets royally sued for this.

On World of Warcraft’s spyware

World of Warcraft was recently revealed to have a piece of spyware hidden in it called Warden, that tracks a large amount of information about other things running simultaneously on the machine, in order to prevent cheating.

There’s been some commentary on Dave Farber’s IP list that Warden was found by someone trying to hack the game, implying that that somehow justifies its existence.

I wrote the following in response to that:


The fact that this piece of spyware was found by someone trying to hack the game is totally irrelevant to what it is, and the fact that there are people in an arms race over hacking the game doesn’t justify Blizzard’s raising the bar on that race to trample the privacy of legitimate users who are probably unaware that this is even going on.

As has been previously stated, Blizzard’s assertion that it’s not doing anything with the information is little comfort. What if the next round of arms race escalation is to hack Warden and release all of that information? How long will it be before Blizzard can properly respond? How much data will get out, because of the infrastructure that Blizzard has constructed?

The fact that this is justified by text buried in a long EULA is deplorable. The fact is, few people read EULAs at all, and even fewer read them for >games< . There ought to be full disclosure right up front in large capital letters - "If you want to play this game, you have to agree to let us spy on you, because we assume everyone's a cheater. YOU'VE BEEN ADEQUATELY WARNED. To agree, and be allowed to play the game, type: 'I UNDERSTAND THAT BLIZZARD IS SPYING ON ME TO CATCH CHEATERS'." Let's have no more of this "Press OK to continue" crap.


“No 911” sticker for VOIP phones

Filed under: — adam @ 10:30 am

Some VOIP and computer phones don’t support 911 dialing in a way that’s equal to the conventional phone system. In an emergency, you probably don’t want to accidentally grab the wrong phone and use it to dial 911.

I sell a set of stickers that you can cut out and stick on phones that don’t support 911:

[Update: 50% of all profits from this will be donated to the EFF.]


On responses to threats

Filed under: — adam @ 1:26 pm

I love this comment on Bruce Schneier’s blog in reference to the recent NYC subway threat which turned out to be a hoax:

“Every time I read this kind of nonsense, I have a mental image of our government — from city level on up — as a strung-out derelict curled up in a fetal position in a corner, screaming about the spiders all over him as he clutches a bottle of cheap fortified wine cut with paint thinner.”


SMS Spam

Filed under: — adam @ 7:10 pm

I just got my first piece of SMS spam (verizon wireless). Anyone know who I should be reporting this to? The VZW customer support people don’t seem to know.


Schneier on lists for emergency tech

Filed under: — adam @ 8:23 am

Bruce Schneier has a link to the digital-er list, for discussion of technical solutions to emergency response and crisis situations. There are some other resources in the comments as well.


~50 banks exposed in ID keylogger spyware theft

Filed under: — adam @ 4:53 pm

The guys at Sunbelt have discovered a massive ID theft ring running a spyware pingback keylogger collecting lots of personal data.

“This is a very different type of trojan than others, because of how it transmits data back. To our knowledge, it’s the first of its kind. So get a software firewall in place that has outbound protection.”


Why I oppose DRM

As some of you know, on September 11, 2001, I lived one block north of Battery Park, at 21 West Street. (Ironic popup tag provided courtesy of Google Maps.) When I was forced to leave for thirteen days while the smoke cleared, I had little time to grab anything. I left without my computers, without my original installation discs, and without all of my Product ID stickers. I found myself suddenly without the mechanism to reinstall a number of legally purchased programs that I needed to use for work, and taking a lot of time that could have been better spent wallowing in my own PTSD calling around to various companies to get them to unlock things for me.

There were stories of rescue workers hampered by license management, and that’s when I knew.

The world is dangerous, and sometimes emergencies happen. While people can say “hey, maybe we should make an exception here, because there are extenuating circumstances”, computers just don’t care about that. We are backing ourselves into a restricted corner, and a dangerous one, where computers call the shots, even in the midst of crisis, even in the midst of rational exceptions. Granted, every case is not this extreme. Hopefully, the future will be without another like it in my immediate vicinity. But the trend to pre-emptively lock down everything by default scares me.

As we evolve towards tighter and tighter controls without any possibility for exception, what happens when those granting agencies stop granting? What happens when companies that issue DRM go bankrupt? What happens if they’re unreachable? What happens if they simply decide to stop supporting their framework?

As my high school calculus teacher used to say – “it’s always easier to ask forgiveness than to ask permission”. Security is many tradeoffs, and if you restrict legitimate uses in the name of preventing illegitimate ones, you’ve cut off part of the point of having security in the first place. If you restrict legitimate uses without even preventing the illegitimate ones, you’re wasting your customers’ time, and you’re part of the problem.

See more of my rants on DRM and security.

Blog-a-thon tag:


Excellent comments on NYC subway searches

Filed under: — adam @ 12:52 pm

Bruce Schneier’s blog has excellent comments on the NYC subway search stupidity.


NYC Police implement totally useless and invasive security measure

Filed under: — adam @ 4:48 pm

NYC Police are apparently going to start random bag searches of people entering the subway.

And if you refuse to have your bag searched? Why, you’ll have to leave the subway and try again later.

I fail to see the point of this huge waste of time, effort, and privacy.


Oh, by the way, uninstall greasemonkey now

Filed under: — adam @ 10:44 pm

Greasemonkey, not surprisingly, has some huge security flaws, and the author recommends you uninstall it. Who would have thought it would be a bad idea to download some code from random websites and run it in your browser?

ICE – not a bad idea

Filed under: — adam @ 9:20 am

Clever solution to a problem.

Put a contact in your phone that starts with ICE (In Case of Emergency). That way, if you’re ever incapacitated and someone finds your phone, they know which of the many entries to call.

This seems to be getting some press coverage, so I suppose there’s a chance that before too long emergency responders will actually know to check.


Secure RSS idea using greasemonkey

Filed under: — adam @ 3:47 pm

That’s the best idea for greasemonkey I’ve seen so far. Generate an encrypted feed, subscribe to it in a public feedreader, and have your browser decrypt it locally in realtime.
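As an illustrative sketch of the idea (not the actual greasemonkey script, and emphatically not production crypto), here is what per-item encryption and local decryption of feed entries might look like. The toy stream cipher below is SHA-256 in counter mode; a real implementation would use an authenticated cipher such as AES-GCM with a fresh nonce per item:

```python
import base64
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. NOT production
    crypto; a real setup would use an authenticated cipher and a
    fresh nonce per feed item."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_item(key: bytes, plaintext: str) -> str:
    """Encrypt one feed entry to a base64 token safe to publish."""
    data = plaintext.encode()
    ks = keystream(key, len(data))
    return base64.b64encode(bytes(a ^ b for a, b in zip(data, ks))).decode()

def decrypt_item(key: bytes, token: str) -> str:
    """Reverse of encrypt_item; this part runs locally in the browser."""
    data = base64.b64decode(token)
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks)).decode()

key = b"shared-secret"  # known only to the publisher and the reader
token = encrypt_item(key, "private feed entry")
print(decrypt_item(key, token))  # private feed entry
```

The public feedreader only ever sees the base64 tokens; the shared key never leaves the two endpoints.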


Via schneierblog:


You’re scared, so we’ll take this opportunity to waste your money and invade your privacy

Filed under: — adam @ 1:19 pm

Millimeter-wave machines are to be deployed to scan the passengers of the London Tube.

Because the 500,000 security cameras obviously weren’t enough to prevent the bombings, the answer must be more invasive surveillance.


Compelling argument against strong ID

Filed under: — adam @ 3:40 pm

Perry Metzger (on hiatus from blogging), moderator of the cryptography list, wrote the following in response to the question of why Americans are so afraid of ID cards. I reproduce it here verbatim with permission:

Perhaps I can explain why I am.

I do not trust governments. I’ve inherited this perspective. My grandfather sent his children abroad from Speyer in Germany just after the ascension of Adolf Hitler in the early 1930s — his neighbors thought he was crazy, but few of them survived the coming events. My father was sent to Alsace, but he stayed too long in France and ended up being stuck there after the occupation. If it were not for forged papers, he would have died. (He had a most amusing story of working as an electrician rewiring a hotel used as office space by the Gestapo in Strasbourg — his forged papers were apparently good enough that no one noticed.) Ultimately, he and other members of the family escaped France by “illegally” crossing the border into Switzerland. (I put “illegally” in quotes because I don’t believe one has any moral obligation to obey a “law” like that, especially since it would leave you dead if you obeyed.)

Anyway, if the governments of the time had actually had access to modern anti-forgery techniques, I might never have been born.

To you, ID cards are a nice way to keep things orderly. To me, they are a potential death sentence.

Most Europeans seem to see government as the friendly, nice set of people who keep the trains running on time and who watch out for your interests. A surprisingly large fraction of Americans are people or the descendants of people who experienced the institution of government as the thing that tortured their friends to death, or gassed them, or stole all their money and nearly starved them to death, etc. Hundreds of millions of people died at the hands of their own governments in the 20th century, and many of the people that escaped from such horrors moved here. They view things like ID cards and mandatory registry of residence with the local police as the way that the government rounded up their friends and relatives so they could be killed.

I do not wish to argue about which view is correct. Perhaps I am wrong and Government really is the large friendly group of people that are there to help you. Perhaps the cost/benefit analysis of ID cards and such makes us look silly. I’m not addressing the question of whether my view is right here — I’m just trying to explain the psychological mindset that would make someone think ID cards are a very bad idea.

So, the next time one of your friends in Germany asks why the crazy Americans think ID cards and such are a bad thing, remember my father, and remember all the people like him who fled to the US over the last couple hundred years and who left children that still remember such things, whether from China or North Korea or Germany or Spain or Russia or Yugoslavia or Chile or lots of other places.


Crappy new Freedom Tower panned by the NYTimes

Since when does “one tower” evoke “two towers”?

A blistering indictment of Mozilla’s security process

Filed under: — adam @ 8:43 am

“There is no doubt that Mozilla has walked into an agenda capture process. It specifically excluded one CA for what appears to be competitive reasons. Microsoft enters these things frequently for the purposes of a) knowing what people are up to, and b) controlling them. (Nothing wrong with that, unless you aren’t Microsoft.) At least one of the participants in the process is in the throes of selling a product to others, one that just happens to leave itself in control. The membership itself is secret, as are the minutes, etc etc.”

IRS chooses ChoicePoint for records access

Filed under: — adam @ 8:34 am

Ironically, the headline of this article is “IRS search for public records access ends with ChoicePoint”. Ha!

“The Internal Revenue Service has awarded ChoicePoint Government Services a contract worth as much as $20 million to serve as the agency’s public records provider for batch processing projects, according to the company.”


So, being the target of the data breach du jour is now your ticket to FAT FAT GOVERNMENT CONTRACTS.

Oh, but wait, that was four or five months ago. Everyone must have forgotten about it by now.



UK government considering selling ID card data to pay for ID system

What’s that slightly coppery smell? It’s irony mixed with incompetence.

So, let me get this straight. The UK government decides to implement a national ID system amid serious criticism of whether it will be effective at all, and cost-effective in particular, and is now considering selling the data to pay for spiraling costs.

I bet that idea is guaranteed to reduce identity theft and abuse of the system.


Criminals are using stolen social security numbers to file false unemployment claims

Filed under: — adam @ 2:32 pm

“File a false unemployment claim and you can receive $400 per week for 26 weeks. Do it for 100 Social Security numbers and you’ve made a quick $1.04 million. It’s tough to make crime pay much better than that.”
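The arithmetic in that quote checks out, as a trivial sanity check shows:

```python
# Sanity check on the quote's numbers: $400/week for 26 weeks,
# repeated across 100 stolen Social Security numbers.
weekly_benefit = 400
weeks = 26
stolen_ssns = 100

total = weekly_benefit * weeks * stolen_ssns
print(total)  # 1040000, i.e. $1.04 million
```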


“Beware the Google Threat”

Filed under: — adam @ 1:22 pm

Wired picks up on the fact that Google is growing fast, aggregating a LOT of data on its users (some of it incredibly personal) and not telling anyone what they’re doing with it behind the veil of vague privacy policies.

The article is a decent summary, but it also misses the point that a good deal (if not all) of the Google information is subject to revelation simply if “We conclude that we are required by law or have a good faith belief that access, preservation or disclosure of such information is reasonably necessary to protect the rights, property or safety of Google, its users or the public.”

That is NOT the same as a subpoena. I talked about this previously in a discussion about why Google’s needs come before your own.


DNA tagging of criminals on the spot

I can’t wait for the nanobots to start fighting over molecule level tagging. Presumably, the coding of the DNA strands is protected by all sorts of interesting copyright law, too.

Bruce Schneier points out this new technology:

The system, called Sentry, works by fitting a box containing a powder spray above a doorway which, once primed, goes into alert mode if the door is opened.

It then sprays the powder when there is movement in the doorway again.

The aim is to catch a burglar in the act as stolen items are being removed.

The intruder is covered in the bright red powder, which glows under ultraviolet (UV) light and can only be removed with heavy scrubbing.

However, the harmless synthetic DNA contained in the powder sinks into the skin and takes several days, depending on the person’s metabolism, to work its way out.


Mastercard data theft possibly exposes 40m cards

Filed under: — adam @ 4:52 pm

“MasterCard International reported today that it is notifying its member financial institutions of a breach of payment card data, which potentially exposed more than 40 million cards of all brands to fraud, of which approximately 13.9 million are MasterCard-branded cards.”


Citibank loses data on four million customers

Filed under: — adam @ 9:58 am

Barn door, meet horse-shaped vacuum.

Identifying data on 4 million Citigroup customers was “lost” when a UPS package containing unencrypted tapes went missing in early May.

CitiFinancial said in its statement that the data loss “occurred in spite of the enhanced security procedures we require of our couriers.”

It said there was little risk of the accounts being compromised because most customers already had received their loans and that no additional credit could be issued without the customers’ approval.

Debby Hopkins, chief operations and technology officer for Citigroup, said that the tapes were produced “in a sophisticated mainframe data center environment” and would be difficult to decode without the right equipment and special software.

Hopkins said most Citigroup units send data electronically in encrypted form and that CitiFinancial data will be sent that way starting in July.

Basically, what this tells me is that “secure” financial identification data on every American with a bank or brokerage account has been stolen or very likely will be in the next two years. There’s nothing that anyone is doing that can stop it. It’s time we turned our attention towards making that data useless for fraud. I propose a two-pronged attack:

1) The end of the instant credit era.
2) Flood the system with garbage data that looks like real data, but is meaningless.
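On the second prong: data that merely looks real is cheap to manufacture, because the format checks are public. Credit card numbers, for instance, are validated with the Luhn checksum, so syntactically plausible but meaningless numbers can be minted at will. A sketch (the prefix below is arbitrary, not a real account):

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn check: from the rightmost digit, double every
    second digit, subtract 9 from doubles over 9; the sum must be
    divisible by 10."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def luhn_check_digit(partial: str) -> str:
    """Compute the digit that makes partial + digit pass luhn_valid."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(partial)):
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

partial = "412345678901234"   # arbitrary 15-digit prefix
fake_card = partial + luhn_check_digit(partial)
print(luhn_valid(fake_card))  # True: real-looking, but meaningless
```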

[Update: I’ve been thinking about this. WHY DID THIS SET OF TAPES EVEN EXIST? Is there any possible good reason for a company to have all of this data in one place?]


Prediction: GTax

In a conversation this weekend, on a whim, I made the prediction that within 3 years, Google will offer electronic tax filing.


Encryption is not a crime

I’m not sure how I feel about this.

A Minnesota court has ruled that the presence of encryption software is valid evidence for determining criminal intent. On the one hand, it seems like a severe misunderstanding of how the modern world actually works, given that encryption is absolutely essential for many things we take for granted.

I guess I can see that if there’s other evidence, this might be used as evidence that you have something to hide, but I worry about the situation where there isn’t any other evidence of a crime, and the fact that there’s something to hide becomes the key determining factor.

Everyone has something to hide. It may be private, it may be secret (not the same thing), it may be evidence of a crime, or it may be evidence of something that someone else thinks is a crime but you don’t. For the latter two, that is, of course, why we have a legal system in the first place. For the former two, there are plenty of legal reasons to want to keep those things private or secret.


676,000 accounts stolen at multiple banks

Filed under: — adam @ 6:02 pm

Fancy that. Yet another ID data theft.

‘CNN is reporting that about 676,000 bank accounts in at least four banks (Bank of America, Wachovia, Commerce Bancorp, and PNC Financial Services) have had personal information “illegally sold”.’

Look folks, banks – remember banks? The paragons of financial security, right? THEY HOLD YOUR MONEY FOR YOU TO KEEP IT SAFE. Banks. CAN. NOT. keep. your. data. safe. They can’t, they won’t, and they aren’t.

If not them, then who?

The only answer I can come up with is that this kind of data must simply not be aggregated. Once it’s all in one place, it’s a target that can’t be protected.

Real ID Rebellion blog

Filed under: — adam @ 10:02 am

I’ve written before about why a National ID card, and particularly dependence on a National ID card, is actually likely to make us less safe, not more. This is a new blog collecting ways to fight it:


NYC subway photo ban plan aban…doned

Filed under: — adam @ 11:34 am

Good, because this wasn’t going to help.


They don’t necessarily know who you are

In the last post, I wrote a lot about what’s wrong with Google’s new services and terms of service. One important point bears repeating.

MANY of your important interactions with Google are unencrypted. As such, it is trivially easy to steal the value of someone’s Google cookie, and possibly pose as that person to Google. It’s possible that Google has taken precautions against this, but the risk is currently unknown. If this is possible, I think it throws a huge wrench into the use of this information by law enforcement.
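To see how little it takes: on an unencrypted connection, the cookie travels as a plain header that anyone on the network path can read. A minimal sketch, with a made-up request and cookie value:

```python
# A captured plaintext HTTP request (hypothetical values throughout).
raw_request = (
    "GET /search?q=private+query HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Cookie: PREF=ID=deadbeef1234:TM=1135000000\r\n"
    "\r\n"
)

def extract_cookie(request: str) -> str:
    """Pull the Cookie header value out of a raw HTTP request."""
    for line in request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            return line.split(":", 1)[1].strip()
    return ""

# The session identifier, readable by any hop between browser and server.
print(extract_cookie(raw_request))
```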

I remember early discussions when it was first revealed that Google was storing a persistent lifetime cookie. It was generally perceived to be “okay” only because the value was not to be tied to search history in any way. We predicted that someday it would be.

Sometimes the slippery slope is actually slippery.


Google wants your logs

I’ve been kicking this around for a while, given the release of Google’s ability to save searches.

Google just announced the Google Web Accelerator, and this has the same kinds of privacy issues surrounding it, so I’ll discuss them both here. For those not in the know, Google Search History is the feature that lets you access your past searches if you’re logged into Google. The Web Accelerator is a proxy that pushes all of your browsing through Google’s servers. Ostensibly, this is to make your browsing faster, but it also has the side effect that Google can (and presumably will) monitor both the URLs and contents of every web page you’re looking at. You make a request for a web page, and Google fetches it for you. I’d expect that they’re also doing various tricks with preloading and caching.

Google is poised to collect a lot of data on browsing habits, and every indication is that they plan to keep it around.

As a brief aside: while I don’t work for Google myself, I do have some friends who do. Every one of them has asserted, in past conversations about Google’s privacy concerns, that Google has (or had) no intention of keeping permanent searching / browsing logs, and has (or had) actually built complicated encryption / hashing mechanisms that allow aggregate data to be kept without individual search histories. That may have been true at one time, although I personally found it doubtful, given that if it were true, Google could only benefit by stating it publicly. They have never done so, and recent events have shown that assertion to be categorically false at present. Google does want to keep your individual search history. I think that’s a relevant point in the privacy debate.
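For reference, the kind of mechanism described in those conversations is not exotic. A minimal sketch of salted one-way hashing, which supports aggregate statistics (distinct users per query) without retaining the identifier itself; the salt handling here is hypothetical and drastically simplified:

```python
import hashlib
from collections import Counter

# Hypothetical secret salt; in a real system it would be rotated and
# destroyed so old pseudonyms could not be re-linked to IPs.
SALT = b"rotated-secret-salt"

def pseudonym(ip: str) -> str:
    """One-way, salted hash of an IP address: usable as a counting
    key, but the raw address itself is never stored."""
    return hashlib.sha256(SALT + ip.encode()).hexdigest()[:16]

# Raw log lines are reduced to (pseudonym, query) pairs on arrival.
log = [("1.2.3.4", "flu symptoms"),
       ("1.2.3.4", "flu symptoms"),
       ("5.6.7.8", "flu symptoms")]

seen = set()
distinct_users = Counter()
for ip, query in log:
    key = (pseudonym(ip), query)
    if key not in seen:
        seen.add(key)
        distinct_users[query] += 1

print(distinct_users["flu symptoms"])  # 2 distinct users, no IPs kept
```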

In reference to search history, I wrote, but never published, the following: “Search history is a sensitive area. Saving and aggregating search history is of dubious value to the end user – it’s maybe a minor convenience at best. If you care about that sort of thing, you’ll want to capture for yourself far more information than just search history, and do it locally across the board. There are several plugins for Firefox that will do exactly that for you, and not only watch your tracks, but save complete copies of everything you’re browsing.” In reference to the web accelerator, it’s evident that Google is heading towards collecting that information for themselves.

Set aside the fact that Google has now become an extremely juicy target for a one-stop shop for identity thieves. I’m sure they’ve got great security. But do you? Google’s lifetime cookie is, as always, a serious point of possible failure. One good cross-site scripting attack or IE exploit, or even a malicious extension, and the Google cookie can be easily exposed. What’s your liability for being associated with a search history, or now a browsing history, tied to a stolen Google cookie?

But here’s the real doozie.

The Google Privacy Policy states that Google may disclose personally identifiable information in the event that:

“We conclude that we are required by law or have a good faith belief that access, preservation or disclosure of such information is reasonably necessary to protect the rights, property or safety of Google, its users or the public.”

Welcome to Google, where the Third Law comes first.

This has serious implications. For logged-in users using all of Google’s services, this now includes the contents of your emails, your complete search AND browsing history, any geographical locations you’re interested in, what you’re shopping for, and probably plenty of things I haven’t thought of yet.

I posit that it would not significantly damage Google in any way for them to actually make use of this information, and that Google could withstand any public backlash resulting from it.

I think we’ve long passed the point at which we say “this is bad”.

This is bad.

In case you haven’t been paying attention, there’s a word for this.

It’s called “surveillance”.

I believe that Google should revise their privacy policy to reflect the actual intended usage of this information, and they should clarify under what circumstances this information will be released, and to whom. Will this information be used to catch terrorists? Errant cheating spouses? Tax evaders? Jaywalkers? Anarchists? Litterbugs? As a user, you have a right to demand to know. Of course, don’t expect Google to tell you, since they don’t actually get any of their money from you.



Building a lockpicking gun out of an old hard drive

Filed under: — adam @ 11:28 am

Now >that< is hacking.


Google adds “Prove Adam Right” button

Filed under: — adam @ 8:27 am

“Google Inc. is experimenting with a new feature that enables the users of its online search engine to see all of their past search requests and results”.

There is so much wrong with this that I don’t even know where to begin. I’m writing something up for a followup post.


Realtime physics accelerator

‘A San Jose startup is building a “physics accelerator” for PCs that will contain hardware optimized for calculating realistic simulations of real-world physics — they hope that this will bridge the gap between general-purpose PCs and the specialized game-graphics cards in consoles.’

Also, as they point out, it will be very useful for realtime weapons calculations, and I’m sure there have to be porn applications as well.


Freedom Keepsafe

Filed under: — adam @ 10:21 am

This seems like a good idea – a paired set of two cards, where you attach one to a device and put the other in your pocket. If they move more than 20 feet apart, yours beeps.

It would be more helpful if the remote one also beeped, so you could draw attention to the creep who’s walking off with your laptop, or find your dropped iPod or whatever.


Flash Player may store local data akin to cookies

Filed under: — adam @ 1:48 pm

Apparently, Flash Player allows remote sites to store data about you locally, and some sites are using this to get around your browser’s cookie settings and persist data even after you’ve cleared your cookies.

Here’s how to adjust your settings and see what’s stored locally:
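For the curious, Flash stores these “Local Shared Objects” as .sol files on disk. A quick sketch for listing which sites have stored data about you – the directory locations are assumptions that vary by platform and Flash version, so adjust as needed:

```python
import os

# Assumed default Local Shared Object locations (vary by OS and Flash version)
LSO_DIRS = [
    os.path.expanduser("~/.macromedia/Flash_Player/#SharedObjects"),  # Linux
    os.path.expanduser(
        "~/Library/Preferences/Macromedia/Flash Player/#SharedObjects"
    ),  # Mac OS X
]

def find_shared_objects(roots):
    """Walk the given directories and collect every .sol file found."""
    found = []
    for root in roots:
        # os.walk silently yields nothing if the directory doesn't exist
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(".sol"):
                    found.append(os.path.join(dirpath, name))
    return found

if __name__ == "__main__":
    for path in find_shared_objects(LSO_DIRS):
        print(path)  # each .sol file is data some site has stored about you
```

Each .sol file sits in a subdirectory named after the site that set it, so even the listing alone tells you who’s been writing to your disk.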


Bike Tree is secure city bike storage

Filed under: — adam @ 1:09 pm

That’s very cool. It locks up your bike and suspends it high above the ground, away from thieves and rain.

Of course, in NYC, that would probably just mean there would be a rash of clear-cutting…


ChoicePoint exploit is probably far worse than anyone knows

Filed under: — adam @ 12:33 pm

Bruce Schneier points out that the ChoicePoint exploit is probably far worse than anyone knows:

“Catch that? ChoicePoint actually has no idea if only 145,000 customers were affected by its recent security debacle. But it’s not doing any work to determine if more than 145,000 customers were affected — or if any customers before July 1, 2003 were affected — because there’s no law compelling it to do so.”


And the dominoes begin to tumble

Filed under: — adam @ 9:40 pm

LexisNexis loses data on 30,000 customers to data thieves. We apologize for the unfortunate… well, you know how it goes.

Even worse, it was pointed out to me that LexisNexis does not allow you to opt out of their data collection under most circumstances, and only at their discretion with proof of an exploit.

I wonder if the LexisNexis exploit counts.


Are people just stupid?

Filed under: — adam @ 8:01 pm

Would someone please explain to me why the very public exploitation of T-Mobile’s notably bad security has resulted in an increase in sales of the Sidekick II?

Do people go shopping for a cell phone and think “Hmmm… today, I’d like to be like that Paris Hilton. Maybe my data will end up on the internet and people will gather round and just throw piles of money at my head and I’ll get to have sex.” ???

They keep your data, they take poor care of it, and THEY’RE SELLING LIKE HOTCAKES.

I don’t get it. I’m completely at a loss. What’s the motivation here?


ACLU goes pizza on corporate database aggregation

Filed under: — adam @ 6:32 pm


Is “We deeply regret this unfortunate incident” the hot new corporate motto?

Following close on the heels of the ChoicePoint news comes another round of “I’m a large corporation, I just exposed the personal data of lots of Americans, and ho ho ho, I’m just going to apologize.”

“Bank of America Corp. has lost computer data tapes containing personal information on 1.2 million federal employees, including some members of the U.S. Senate.”


There are a whole bunch of problems here.

These companies face very little liability or regulation when aggregating personal data. So, Bank of America has financial information on 1.2 million customers. That’s to be expected – they are, after all, a bank, and need access to your financial information. But once you step past that – what else is legal for them to do with the data? Are they liable if they expose it by accident? They have a privacy policy that says they’re careful with your data, but what happens if they break it? Are there actually any consequences?

Bank of America actually has quite a detailed privacy policy, but what’s hidden here is important – it doesn’t say anything about the risks. “Remember that Bank of America does not sell or share any Customer Information with marketers outside Bank of America who may want to offer you their own products and services. No action is required for this benefit.” But also remember that Bank of America is a target, and your recourse is largely limited to “telling them your preferences”.

I’ve been reading “The Digital Person” by Daniel J. Solove, and it’s been an eye-opener about the problems associated with the construction, storage, and use of digital dossiers. It’s possible that I haven’t gotten to the main point yet, but even in the beginning, he makes some good observations – the problems we’re facing here aren’t necessarily malicious, but they are impersonal and uncaring. The fact that an individual piece of data doesn’t really matter if it’s revealed doesn’t mean that lots of pieces all revealed together aren’t a problem.

There are synergistic network effects at play here. It needs to be recognized that the “simple” collection and aggregation of large amounts of data has side effects in and of itself.


Chief Privacy Officer of Gator appointed to DHS privacy committee

