Adam Fields (weblog)

This blog is largely deprecated, but is being preserved here for historical interest. Check out my index page for more up-to-date info. My main trade is technology strategy, process/project management, and performance optimization consulting, with a focus on enterprise and open source CMS and related technologies. More information. I write periodic long pieces here; shorter stuff goes on twitter.


The Tragedy of the Selfish, again and again.

I kept seeing this pattern emerge and couldn’t find a good name for it (originally in reference to failures of the free market), so I came up with one. Simply put, The Tragedy of the Selfish is the situation in which each individual makes the logically best decision to maximize their own position, but the sum effect of everybody making their best decision is that everybody ends up worse off rather than better.

You buy an SUV to be safer, then other people do too, because they also want to be safer. Except that if enough people make that same decision, you’ve collectively raised the chances that if you’re hit by a car, it’ll be an SUV, which will do much more damage than a smaller car would. Everyone is better off if everyone backs off and drives smaller cars.

You buy a gun because other people have guns. Then other people do, because they want to be safer too. Then… you see where I’m going with this. Perhaps you’ve made yourself safer in some limited way, but you’ve decreased the overall safety of the system.

This is not safety, it’s mutually assured destruction.
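The dynamic is easy to see in a toy payoff model. This is only a sketch with made-up numbers (my own illustrative assumptions, not data from anywhere): each driver gets a small personal edge from choosing an SUV, but every SUV on the road imposes a larger shared cost on everyone.

```python
# Toy model of the SUV arms race: payoff numbers are purely illustrative.
# Each driver picks "small" or "SUV". An SUV gives its owner a personal
# safety edge, but every SUV on the road makes collisions worse for all.

def payoff(my_choice, suv_share):
    """Safety payoff for one driver, given the fraction of SUVs on the road."""
    edge = 2 if my_choice == "SUV" else 0   # personal advantage of the SUV
    shared_cost = 10 * suv_share            # more SUVs -> worse crashes for everyone
    return edge - shared_cost

everyone_small = payoff("small", suv_share=0.0)  # what we all get if nobody escalates
lone_defector  = payoff("SUV",   suv_share=0.0)  # defecting always looks better...
everyone_suv   = payoff("SUV",   suv_share=1.0)  # ...until everyone does it

# At any fixed SUV share, switching to an SUV gains you 2 points, so
# defection is always individually rational. Yet the all-SUV outcome is
# strictly worse for everyone than the all-small one.
print(everyone_small, lone_defector, everyone_suv)
```

In game-theory terms this is the same structure as a multiplayer prisoner’s dilemma: defection dominates at every step, but mutual defection is the worst collective outcome.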


My problem with the Netflix restructuring

I can accept that DVDs are a dying business with no future growth and _escalating_ costs. I can accept that Netflix wants to get out of that business and move forward, even if the streaming product is still nascent and not competitive yet. I like Netflix, and I’ve been a loyal customer since before they had unlimited plans (I was the first person I knew to get a DVD player).

I accept that all of this might be necessary and painful to grow the business. But the thing is – it’s not our burden as customers to carry those costs, and it’s disingenuous to ask us to do so. The fact is, while DVDs are limited in growth, they’re still the better product with far more selection, and the DVD business you’re jettisoning is still profitable. If you want us to switch to a worse product that may be better in the future, great. Lower your prices to compensate. All of this brouhaha could have been avoided if you’d announced that everyone’s plan was a dollar per month cheaper until the streaming selection got better.

We’ll bear with you to make a difficult transition. Asking us to do so while giving us a worse experience and making us pay more for the privilege feels like taking advantage.

It’s not too late to change your mind.


Guacamole Alfredo

Filed under: — adam @ 3:00 pm

I came up with this dish in response to a food question, and it sounded so good I had to make it the next night. It was promptly devoured by all present. It’s a great summer pasta dish.

Dice 2-3 heirloom tomatoes (or more), 3-4 avocados (or more), and 1 large onion.

Bring 5-6 quarts of salted water to a boil for the pasta.

Heat a generous amount of olive oil in a medium frypan. Add the onion and sweat over medium-low heat until almost soft, then add half of the tomatoes and cook for about 2-3 minutes over medium heat. Add a generous sprinkle of salt and pepper. Meanwhile, cook a pound of fresh fettuccine.


Add 2-3 cloves of minced garlic and half of the avocados, and heat through until softened.


Mince 3-4 tbsp. of cilantro, and add to a large bowl with the rest of the tomatoes and the avocados (not pictured).


When the pasta is done, reserve a half-cup of the water, then drain and return to the pot. Toss with the onion mixture and cook over low heat for 30 seconds until the pasta absorbs some of the oil. Put pasta into the bowl with the tomatoes and avocados, and toss. Sprinkle with 1-2 tsp lime juice. Add some of the reserved pasta water if it’s too thick (unlikely).


Top with parmesan and serve with a crusty semolina bread.

It’s vegan-friendly as is, but I think it would also be great with shrimp.


Influential food books of the past 10-20 years

Filed under: — adam @ 9:56 am

Someone on a list asked about influential food books of the past 10-20 years. Here’s my list:

Books that started the trend of deconstructing modern food chains:

The Omnivore’s Dilemma: A Natural History of Four Meals

Fast Food Nation: The Dark Side of the All-American Meal

This one is notable because it’s one of the first I’ve read that really balances health, environmental impact, and worker welfare, while also taking into account taste and the fact that people really like to eat foreign foods:

The Ethical Gourmet

Seminal books that explore the science of cooking:

On Food and Cooking: The Science and Lore of the Kitchen

Molecular Gastronomy: Exploring the Science of Flavor (Arts and Traditions of the Table: Perspectives on Culinary History)

The beginning of the restaurant insider expose:

Kitchen Confidential Updated Edition: Adventures in the Culinary Underbelly (P.S.)

The Making of a Chef: Mastering Heat at the Culinary Institute of America

And the evolution towards bringing restaurant and cooking school techniques to the home chef:

Think Like a Chef

Ratio: The Simple Codes Behind the Craft of Everyday Cooking

There are many excellent recent cookbooks, but I don’t know how many of them really qualify as influential.



Choose Real Food

Filed under: — adam @ 6:56 pm

Here’s an edit I did of the new “Food Plate” graphic. I think this is more accurate.



Sugar may be toxic, but that NYT article doesn’t demonstrate it.

The NYT magazine ran this article on how sucrose is probably a poison that causes cancer and a whole raft of other ailments.

Unfortunately, the article is so poorly written and presents so little actual evidence that I’m shocked at the number of otherwise rational people who are simply taking it at face value. John Gruber, whose analysis I usually respect, writes “It’s not often that a magazine article inspires me to change my life. This is one.“.

Here are a few specific comments:

  • The article still perpetuates the assumption that high fructose corn syrup is identical to sucrose because they’re both made up of fructose and glucose. Setting aside the obvious difference that a 50% split between fructose and glucose is not the same as 45% glucose and 55% fructose (oh, but right – it’s “nearly” the same), sucrose is a disaccharide and HFCS is a mixture. Sucrose does easily break into glucose and fructose in the presence of sucrase, but the fact that there’s an enzymatic reaction there means that the rate at which it happens can be regulated. Sucrose and HFCS are different things, in much the same way that a cup of water is different from a balloon filled with hydrogen and oxygen, or a pile of bricks is different from a house. Every subsequent opinion in the piece that sugar is bad is doubly applicable to HFCS.
  • The article doesn’t actually cite any concrete evidence to support its hypothesis that sugar is toxic. It doesn’t even cite any sketchy evidence to support its hypothesis. Meanwhile, here’s a bit of recent research that suggests the opposite: “Female mice [getting 25% of their calories from sugars] that had been reared on the unbound simple sugars [(fructose and glucose mixture)] experienced high rates of mortality, beginning 50 to 80 days after entering the enclosure. Their death rate was about triple that of sucrose-treated females”.
  • Lustig’s YouTube presentation on which the article is based is fairly interesting. As far as I can tell, all it does is make the case that fructose is a poison in large quantities, that excessive amounts of sugar are worse for you than excessive amounts of fat, and that juice, soda, and “low-fat” processed crap that substitutes sugar (but primarily in the form of HFCS) for fat are responsible for the obesity and diabetes epidemics. Most of which is completely reasonable, although I think he ignores the sucrase regulation pathway, which is probably the most critical factor. But nowhere does he say that the body can’t metabolize _any_ sugar safely, which is the main thrust of the NYT piece, based on exactly, as far as I can tell, zero evidence. Lustig’s conclusion is exactly what it’s stated as at the beginning of the article: “our excessive consumption of sugar is the primary reason that the numbers of obese and diabetic Americans have skyrocketed in the past 30 years. But his argument implies more than that. If Lustig is right, it would mean that sugar is also the likely dietary cause of several other chronic ailments widely considered to be diseases of Western lifestyles — heart disease, hypertension and many common cancers among them”. It’s a long leap from there to the position that any sugar consumption is bad, which his argument doesn’t actually imply. Drinking a few cups of water a day is good for you. Drinking a few gallons is probably not so good.
  • Here’s an example of the kind of “argument” in the article: “In animals, or at least in laboratory rats and mice, it’s clear that if the fructose hits the liver in sufficient quantity and with sufficient speed, the liver will convert much of it to fat. This apparently induces a condition known as insulin resistance, which is now considered the fundamental problem in obesity, and the underlying defect in heart disease and in the type of diabetes, type 2, that is common to obese and overweight individuals. It might also be the underlying defect in many cancers.” Of course, it completely ignores that the fructose does not hit the liver in sufficient quantity and with sufficient speed under normal circumstances, and it even flat out includes the counter-hypothesis that the liver is perfectly capable of metabolizing sugar up to a certain point with no detrimental effects.

Excess sugar is clearly bad. I accept that it’s probably even worse than excess fat. I don’t see even a small shred of evidence to accept the logical leap presented in this article that eating a cookie will increase your cancer risk in any meaningful way. Absolutely, we need to study this more. Concluding that sugar is toxic in normal quantities based on the available evidence is ridiculous. Despite the indecision in the article, it’s not hard to define “normal quantities”. I’m the first to agree that the current “sugar in everything” trend in packaged food is bad, and it’s important to check the nutritional labels. HFCS has no business being in bread. The brands you grew up with are not indicators of quality. In fact, I’d go so far as to say that if your food has a nutritional label at all, you’re already at a disadvantage.

Eat more whole foods. Stop taking your calories in liquid form. Cooking at home is different. Change your food chain.



Some thoughts on salad

Filed under: — adam @ 1:46 pm

A few years ago, I decided to eat a salad for lunch at least 5 days a week. It’s a great way to make sure you get a lot of vegetables, and if you do it right, it’s very satisfying. I didn’t want to do Bittman’s “vegan before 6pm” diet, but this is a similar approach. It also takes a lot of the guesswork out of what I’ll have for lunch on any given day. I usually make my own. If you’re on the go, Fit & Fresh makes a very convenient shaker container with a dressing compartment and a removable ice pack.

For me, a salad is at minimum: a green leafy vegetable (lettuce or spinach), cucumber, some sort of tomato, and dressing. Everything else is optional, but I try to mix in at least one ingredient from the following categories. The key is getting a range of interlocking textures and complementary flavors. Buy the best ingredients you can find.

Lettuce: I usually use a heartier crunchy lettuce (romaine) or a greens mix. Local greens are always preferable, and you can still find farmers who do greenhouse greens in the winter. People will tell you not to cut lettuce but to tear it up with your hands. I don’t find that it makes a difference either way. Always rinse lettuce a few times in a salad spinner and then soak it in very cold water for 5-10 minutes before drying and using it. Unwashed lettuce will usually last about a week in the fridge if it’s fresh. Washed lettuce may last 2-3 days – store it in an airtight container with a folded paper towel to absorb excess moisture. If there isn’t good lettuce to be found, baby spinach makes a nice alternative. You can wash grape or cherry tomatoes with the lettuce.

Cucumber: Local is always better. Generally, the smaller varieties will have more flavor and crunch – I usually use kirbys or small pickling cucumbers. Peel any cucumbers that have been shipped loose – they’re coated with a paraffin layer to protect them.

Tomato: If local tomatoes are in season, use the big ones, preferably heirlooms. They’ll dominate the salad, and when they’re in season that’s probably fine. Otherwise, tomatoes for salad should always be the smaller cherry or grape tomatoes. Out of peak season, these are the only ones that are tolerable, and they get less so as the winter wears on. I usually include them anyway for some color and texture.

Other vegetables: Depending on my mood, I’ll include some diced red, orange or yellow pepper (but almost never green – I’m not that fond of bitter flavors). Cooked beets add a nice sweetness if you like them. Shredded carrots can be nice, but I usually find their flavor too strong. To the detriment of my breath, I’ve been cursed with whatever Eastern European gene causes me to crave raw red onions, especially in the winter. In the summer, I like to use raw local corn.

Animal Protein: I often omit the animal proteins for side salads, but without them it doesn’t really feel like a meal, so I always include at least one in my lunch salads – a hard boiled egg (use eggs that are 2-4 weeks old, start in cold water, bring to a boil, cover, let sit for 8-9 mins, shock in ice water), crumbled bacon, diced leftover chicken or steak, or a few cooked shrimp (thaw frozen shrimp in water, then boil for 3 minutes and shock in ice water).

Fruit: In the summer, this will be a sliced peach or plum, in the fall it’ll be apple or pear. Dried fruits work well – raisins or cranberries. Raisins pair well with honey mustard dressing and bright vinaigrettes, cranberries go better with creamy dressings.

Cheese: I usually avoid cheese, but I have a periodic craving for the combination of blue cheese, roasted garlic vinaigrette, beets, and nuts. I think most cheese doesn’t mix well with dressing, but there are a few combinations that work.

Dressing: My favorite dressing of all time is Brianna’s Poppy Seed Dressing. It’s creamy and thick, and goes with just about everything, and is the exception to my belief that most bottled dressings aren’t very good. I also like some varieties of honey mustard, or I’ll make a vinaigrette. In the summer, I like to make a vinaigrette with a bold raspberry vinegar. Use whatever you like. I’d avoid lowfat dressings, because they generally don’t taste very good. You’re eating a salad instead of a chalupa! You can have a little fat.

Crunch: I like to include at least one crunchy element – croutons or slightly toasted (300F for 6-8 mins) walnuts or pecans. If you buy croutons instead of making your own, look for those without HFCS.

A few suggestions:

My standard winter salad: romaine lettuce, persian pickling cucumbers, grape tomatoes, sliced red onion, diced cold chicken/bacon/hardboiled egg, croutons, dried cranberries, poppy dressing.

My alternate salad: mixed green lettuce, cucumbers, tomatoes, red onion, shrimp, crumbled bacon, diced beets, croutons, plus either raisins and honey mustard dressing or crumbled blue cheese and roasted garlic vinaigrette.

My favorite summer salad: mixed green/red lettuce, cucumbers, heirloom tomatoes, sliced red onion, sliced cold skirt/flank steak, raw corn, sliced peaches, croutons, poppy dressing.

Suggest some of your favorites in the comments or on twitter.


Sous Vide Poached Egg

Filed under: — adam @ 9:38 am

Sous Vide eggs are tricky, because the yolk sets at a lower temperature than the white. If you cook a whole egg in the shell, you either get a properly cooked yolk with a runny and gelatinous white (some people like this, some think it’s like eating a ball of snot), or you get a properly set white with a really overcooked yolk. As far as I can tell, it’s not possible to get a perfect “poached egg” with the sous vide method, if you cook the egg whole in the shell.

To deal with this, I separated the white and yolk and cooked them individually at two different temperatures. You can cook the white first at a higher temperature to just set it (160-162F or so, depending on how firm you like it), and then lower the temperature (to 144F or so) and add the yolk. To make this a little more convenient, you can even do the white in advance, chill it, and keep it and the separated yolk in the fridge overnight, then add the yolk and cook them in the morning. The white takes about 60-90 minutes to set (depending on whether you start from fridge or room temperature), and then the yolk takes about another 60-90 minutes. Cooking the yolk will also bring the white back up to serving temperature without overcooking it. It’s not quite fast enough for a rushed morning, but that’s acceptable timing for a lazy morning.

I tried leaving them in the water overnight at 144F, and that didn’t work – the yolks got completely overcooked and gummy. There might be a lower temperature at which that would work. Even still, unlike with regular poached eggs, there’s very little fuss. This method takes longer, but it doesn’t require you to do anything while it’s cooking.
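The two-stage schedule boils down to a small table. Here’s a quick sketch (the temperatures and times are the ones from this post; your water oven and your eggs’ starting temperature will shift them):

```python
# The two-stage sous vide "poached egg" schedule described above.
# Values are from this post; treat them as starting points, not gospel.

STAGES = [
    # (step, temp_F, (min_minutes, max_minutes))
    ("set the separated white", 161, (60, 90)),  # 160-162F depending on firmness
    ("drop temp, add the yolk", 144, (60, 90)),  # also gently reheats the white
]

def total_time(stages):
    """Total low/high cooking time across all stages, in minutes."""
    low = sum(lo for _, _, (lo, _) in stages)
    high = sum(hi for _, _, (_, hi) in stages)
    return low, high

print(total_time(STAGES))  # somewhere between 2 and 3 hours end to end
```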

As I looked around for a proper vessel to cook them in, I found something I’d dabbled with but never really gotten good results with, which in retrospect is actually pretty perfect for sous vide cooking: an egg coddler.


Yes, you can use your hands to separate eggs, but I wanted to be extra careful not to break the yolk membrane. I put the yolks in a covered bowl in the fridge while the white cooked.


Here you can see the thin layer of undercooked white that was left over with the yolks, and the more properly cooked white layer underneath:


You can serve it directly out of the coddler, or turn it out into a bowl:


Here you can see how perfectly runny the yolk is, and the white is creamy and well set:


This is a great poached egg. I’m not sure it’s that much easier or more convenient than regular poached eggs in terms of timing, but it certainly requires less effort for excellent and tasty results. I think this is probably overkill if you’re just cooking for one (the above was an experiment and I didn’t want to ruin a lot of eggs if it didn’t work out), but it would work just as well with a dozen or more. Doing a single poached egg isn’t that much effort, but doing a lot of them can add up. I also got very good results using a small Le Creuset ceramic crock with a lid, though that can’t be submerged in the same way that the egg coddler can. If you want to use something like that, you’ll need a rack to keep it near the top of the water level.


I’ve found that this silicone rack is the perfect size for the SVS. You’ll need two of them for a short crock/ramekin.


Sous Vide French Toast

Filed under: — adam @ 9:27 am

After a great success with scrambled eggs, I wondered if it would be possible to make french toast in the SVS. I’ve had some good results with french toast the regular way, but it requires a lot of advance planning, and I find it difficult to ensure that the egg mixture gets absorbed and cooked all the way through without making it soggy in the middle.

I whipped up 8 eggs, added about 3/4 cup of milk, a splash of vanilla, and a pinch of salt. I added this to two slices of homemade challah in a ziploc vacuum bag and sealed it. I then shook the bag to distribute the liquid evenly and sucked the air out with the pump. These bags are much easier than the Foodsaver bags when you’re dealing with liquids, because you don’t have to worry about the seal getting gunked up.


I then cooked them in the SVS for an hour and a half at 147F, removed them from the bag, and put them in a hot skillet with a little butter to brown the outside on both sides.


They came out perfectly – slightly crispy on the outside, and evenly cooked throughout, with a wonderfully creamy yet substantial texture.




Making Sous Vide Custard

Filed under: — adam @ 11:35 am

Drawing on some inspiration in this post on creme brulee at SVKitchen, I decided to try a custard. Since I bought entirely too many blueberries this season, and the last of the bunch is rapidly aging in my fridge as I try to use them up before they go bad (I’ve already frozen as many as my freezer can handle), I decided to top this batch with a blueberry gel.

The SVKitchen folks used a set of fiberglass rods to elevate the tray to allow the custard cups to sit near the top of the oven while maintaining the proper water level, but it turns out that one of my cooking racks fit perfectly underneath the included one:



The normal way to make custard is to cook the cream and sugar together over moderate heat to mix them, and then add beaten eggs and cook in a water bath. I thought the SVS could make this easier, so I just mixed all of the ingredients together in the mixer until they were combined but not frothy (do the last little bit by hand for more control). I doubled Bittman’s standard custard recipe (I’ve pretty much given up on the book and use his $2 searchable iPhone app all the time) and substituted vanilla for the nutmeg and cinnamon, since I’m allergic to nutmeg and I like vanilla. This doubled recipe calls for 4 cups of cream, 1 cup of sugar, 4 whole eggs, 4 egg yolks, and a pinch of salt plus flavorings:


This recipe made enough to fill eight 8-oz ramekins all the way to the top, plus a little left over. I filled the cups through a mesh strainer to catch any last unmixed bits, covered them with plastic wrap, and cooked them in the SVS at 175F for an hour:


When they were almost done, I cooked about two cups of blueberries over moderate heat with a tablespoon or so of sugar and bloomed a packet of gelatin in a bowl of water. When the blueberries were cooked through and starting to burst (about 5-7 minutes), I stirred in the gelatin and let them cook for a few more minutes. Then I drizzled the hot syrup over the top of the cups:


After chilling in the fridge for about 4 hours, they were absolutely fantastic. The texture is flawless, the flavors are great, they’re perfectly cooked all the way through, and the whole thing only took about 15-20 minutes of actual effort.




Why I don’t eat High Fructose Corn Syrup (HFCS)

Filed under: — adam @ 10:11 am

The following is a catalog of my somewhat unscientific objections to High Fructose Corn Syrup (HFCS), across a number of different axes:

Health / Chemical

It’s not “Just like sugar”

Proponents of HFCS claim that it’s “just like sugar”, but that’s not strictly true. Even the form of HFCS that’s closest in chemical formulation to sucrose is 55% fructose and 45% glucose, which is a liquid at room temperature. Fructose metabolism behaves differently in the absence of glucose, and in practice that ratio seems like enough to tip the scales in that direction.

HFCS is a mixture

HFCS is a mixture, not a compound. In the case of sucrose, it’s a very weak bond between the fructose and the glucose, but there is a bond there that can be used to regulate the rate at which it’s metabolized (cleavage of the disaccharide into glucose and fructose happens in the small intestine). When you eat HFCS, you just dump a bunch of fructose and glucose on your metabolism all at once, to be absorbed as quickly as possible. I haven’t seen any research examining whether this is a problem or not, but it seems like it would be.

Research shows that it can be unhealthy

There is an increasing body of research pointing to excess fructose, or HFCS specifically, as responsible for weight gain, raised bad cholesterol levels (see the Personal Experience section at the end), and faster cancer growth.

Other thoughts on Fructose

I don’t know of any research examining whether the fructose in fruit or agave syrup has similar effects. My guess would be that the fructose in fruit is buffered by everything else in the fruit (see the coda on nutritionism at the end) and that agave syrup is probably not great for you either, but I have no evidence to support either of those assertions.



Taste

I don’t like the way HFCS tastes – I find that foods sweetened with it have a somewhat sickly flavor and a lingering unpleasant aftertaste.

HFCS is a marker for cheap ingredients.

Companies that put HFCS in their food do so because it’s cheaper than sugar, not because it’s better than sugar. A few cents extra per loaf of bread makes a huge difference when you’re selling a few million loaves, and it makes a lot less difference when you’re buying one loaf. I try to make as much of my food from ingredients I personally choose, but when I have to buy packaged food, I generally want it to be as good as it can be. In my experience, foods that avoid HFCS also tend to use better ingredients and have better overall quality. I’m disgusted by how difficult it is to find food in the supermarket that doesn’t contain it.


Political / Economic

HFCS is an industrial byproduct of corn subsidies. This is a very deep subject with a large number of complex interactions, but one thing is pretty clear — the incentives for many farmers line up to cause them to grow lots of corn (and soybeans) to the detriment of other products. Monocultures in farming are generally problematic, and I think we should be encouraging more biodiversity instead of less. Vastly simplified, the government makes it financially attractive for a large number of farmers to grow very few varieties of corn with the use of petrochemical fertilizers and pesticides. This increases our dependence on foreign oil, it weakens the basis of our foodstocks, and it gives us a large number of very cheap byproducts that make their way into everything. Michael Pollan has given this subject far more exploration than I could – I highly recommend reading the chapter on corn in The Omnivore’s Dilemma (or the article on which that chapter was based).

Personal Experience

Sometime over the summer of 2008, I made the personal decision to eradicate as much HFCS as possible from my diet. I would no longer voluntarily buy any processed food containing HFCS, and I would make conscious attempts to avoid it. I had my cholesterol checked in June, before I started this experiment, and again in December. During that time period, with no other lifestyle changes, my Triglyceride count dropped by 39 points and my LDL count dropped by 28 points. I attribute this change entirely to the direct and indirect effects of cutting out HFCS – cutting out HFCS itself, cutting out the other processed ingredients that often go along with it, and decreasing my consumption of processed food overall. In actuality, HFCS itself may be entirely benign (though I see little evidence of that), but I feel that removing it from my diet was an unqualified net good. Unfortunately, it’s been impossible to remove entirely, as most restaurants use it. As a result, I’ve been trying to cook at home more (with a little help), which has also been largely a net good.


(Coda on Nutritionism vs. Nutrition)

Michael Pollan makes a really good point about eating whole foods in In Defense of Food (and the essay on which it was based). The whole essay is worth reading, but this section stood out for me:

Also, people don’t eat nutrients, they eat foods, and foods can behave very differently than the nutrients they contain. Researchers have long believed, based on epidemiological comparisons of different populations, that a diet high in fruits and vegetables confers some protection against cancer. So naturally they ask, What nutrients in those plant foods are responsible for that effect? One hypothesis is that the antioxidants in fresh produce — compounds like beta carotene, lycopene, vitamin E, etc. — are the X factor. It makes good sense: these molecules (which plants produce to protect themselves from the highly reactive oxygen atoms produced in photosynthesis) vanquish the free radicals in our bodies, which can damage DNA and initiate cancers. At least that’s how it seems to work in the test tube. Yet as soon as you remove these useful molecules from the context of the whole foods they’re found in, as we’ve done in creating antioxidant supplements, they don’t work at all. Indeed, in the case of beta carotene ingested as a supplement, scientists have discovered that it actually increases the risk of certain cancers. Big oops.

What’s going on here? We don’t know. It could be the vagaries of human digestion. Maybe the fiber (or some other component) in a carrot protects the antioxidant molecules from destruction by stomach acids early in the digestive process. Or it could be that we isolated the wrong antioxidant. Beta is just one of a whole slew of carotenes found in common vegetables; maybe we focused on the wrong one. Or maybe beta carotene works as an antioxidant only in concert with some other plant chemical or process; under other circumstances, it may behave as a pro-oxidant.

Indeed, to look at the chemical composition of any common food plant is to realize just how much complexity lurks within it. Here’s a list of just the antioxidants that have been identified in garden-variety thyme: 4-Terpineol, alanine, anethole, apigenin, ascorbic acid, beta carotene, caffeic acid, camphene, carvacrol, chlorogenic acid, chrysoeriol, eriodictyol, eugenol, ferulic acid, gallic acid, gamma-terpinene, isochlorogenic acid, isoeugenol, isothymonin, kaempferol, labiatic acid, lauric acid, linalyl acetate, luteolin, methionine, myrcene, myristic acid, naringenin, oleanolic acid, p-coumoric acid, p-hydroxy-benzoic acid, palmitic acid, rosmarinic acid, selenium, tannin, thymol, tryptophan, ursolic acid, vanillic acid.

This is what you’re ingesting when you eat food flavored with thyme. Some of these chemicals are broken down by your digestion, but others are going on to do undetermined things to your body: turning some gene’s expression on or off, perhaps, or heading off a free radical before it disturbs a strand of DNA deep in some cell. It would be great to know how this all works, but in the meantime we can enjoy thyme in the knowledge that it probably doesn’t do any harm (since people have been eating it forever) and that it may actually do some good (since people have been eating it forever) and that even if it does nothing, we like the way it tastes.



I think that about covers it. I welcome comments.



Cooking at home is different

Filed under: — adam @ 3:07 pm

There’s a bit of a debate going on about whether the lack of cooking at home is responsible for people eating unhealthily. Matt Yglesias has a piece arguing that cooking at home isn’t fundamentally different from restaurant cooking, and “If someone – Jamie Oliver, for example – devised an appealing mass-market food product that was better than Taco Bell on the taste/price/convenience dimension but also healthier, well that would be an excellent thing for the world.” Well, it sure would. It would be nice if someone could make a car that drives like a BMW but doesn’t use any gas and costs less than $1000 too.

“Cooking yourself” is not the point. “Cooking at home” is. This is because home cooking is different from restaurant cooking, and yes, there is a fundamental difference between food you prepare for yourself and food prepared by other people, at least when the latter is in a commercial/restaurant context. Unless you have a private chef, food prepared for you by other people is food prepared for… whomever. This difference is largest at scale. Industrial food is the way it is because it’s designed to be prepared by people with no skill at cooking, for a clientele who may show up at any time and want what they want, and when you do that, you lose all kinds of properties of the food that go into making it healthier. You lose varietal selection. You lose focus on balance. You lose accounting for individual tastes. You lose someone insisting that you eat your vegetables (both because they’re good for you and also because whoever cooked them put a lot of effort into making them for you). You lose the incentive not to use cheaper ingredients (or at least you divorce yourself from that decision). You lose the incentive not to use flavor boosters that are unhealthy. You lose the ability to make food on demand, so there’s incentive to use ingredients that will store better. Fine cuisine doesn’t fare much better, because it’s optimized not for health, but for flavor and pleasure.

Healthy food has a lot of properties that are, I think, inherently unscalable. Saying that restaurants should offer cheap healthy options misunderstands the problem. Yes, cooking at home is a lot of work, and sometimes that takes away from the time you could be using to watch a movie or read a blog, but the benefits are immense, and they won’t realistically ever come out of a restaurant. Is that really such a bad thing?


Something interesting about scarcity

We used to have a 6-at-a-time Netflix plan. We’d get 6 movies, but then sometimes we’d go months before watching them, or even just deciding that enough was enough and sending them back. And frequently, even among those 6 movies, there would be nothing we wanted to watch on any given night. And then an interesting thing happened. As part of a general cost trimming in our house, we dropped down to a 3-at-a-time plan. And suddenly we started watching a lot more Netflix movies. With 6 movies to choose from, there was always “something else” to watch, and we didn’t have to worry about clearing out all of the cruft to make room for something we really wanted to watch. As a result, we didn’t think as carefully about whether we’d really want to watch a new movie, because just renting something wouldn’t really block something else that we wanted to see more if it came along. But when we introduced a little artificial scarcity into the mix, a slot became something worth protecting from something we didn’t really want to see, and we started thinking more about which movies to put to the top of the queue, and then actually being more aggressive about watching them and sending them back.

This seems like a strange effect to me – we’re paying less, we’re technically using less (3 out instead of 6 out), but we’re turning over more, so the net effect is probably that we’re heavier users now than at 6-out. Because the pricing is only on the number out instead of the turnover, we’re unarguably paying less and using more, even though we’re technically on a “lower usage” plan. At this point, even if we wanted to spend the extra money, I have no desire to go back to a 6-out plan, because I like the extra sense of urgency that comes from having the out slots be a scarce resource, and it makes me want to use the service more.

I don’t know if this makes me a better Netflix customer or not, from their perspective. Obviously I’m paying less money to them per month and using more in mailing fees, but I’m also holding onto premium movies for a lot less time than I used to, freeing them up to be sent to other customers.

Anyone notice the same thing?


Thoughts on the new Star Trek

Filed under: — adam @ 9:58 am

First, I loved it. I think it was the best movie I’ve seen in a long time. It treated the source material with respect, but established its own direction. The casting was basically flawless, and each of the major characters settled into their respective roles as gracefully as putting on a new pair of shoes from the same brand in the same size you were wearing before. The IMAX version is huge, but sit back more than you think you need to. We ended up being a little too close and it was sometimes hard to focus on the fast action scenes.

Spoilers ahead.

I loved the new bridge, and I was completely wowed at the omnipresent reflections and lens flares going on in the foreground. It really added the sense of being there and catching light bouncing off of some shiny panel, of which there are now many. Similarly, I loved the scope of the new engineering. Finally… it looks big.

All of the performances were grand and Zachary Quinto was impressive as a young Spock, but I think Karl Urban gets a special callout for really nailing the crotchety Bones. (And he was robbed of a special commendation for saving the entire universe by sneaking Kirk onto the Enterprise in the first place.)

I thought the time travel execution was very successful, and I very much liked that they didn’t take the standard “everything gets resolved at the end of the episode” tack and left things messy instead. The seamless shift into what otherwise would have been a reboot or a lifeless prequel… completely works for me. This is a different universe, most notably in the way that six billion Vulcans are now missing. As the Vulcans are the main ambassadors of the Federation for first contact, this has drastic implications for the influence and power of the Federation. But, as a mitigating factor, Spock is back from the future with 130 years of accumulated knowledge. Vulcans have essentially photographic memories and the ability to share thoughts widely without having to explain them, and Spock doesn’t seem shy about applying this where necessary to rebalance things. And it’s not just technological knowledge – he’s one of the most well-versed people ever in galactic politics. He knows where all of the hidden enemies and backstabbing and power grabs are going to come from, he knows which alliances are likely to form, and he knows which resources people are going to be fighting about. This is a unique position – he can prepare the Federation in advance to deal with all of these threats before they fully manifest. As time goes on and the timelines diverge, his knowledge will become less useful, but it should still provide the Federation with a significant advantage in the short run, enough to make up for the lost influence of most of the Vulcans (and they’ll still have the thousands who survived to carry on at least some of the legacy).

Going forward, this is a very different universe, and I very much look forward to seeing how it unfolds. I hope they consider returning to a serial format, though not necessarily TV. There’s way too much rich material to mine here now to only tease us with a single movie every few years, and I think it would be a waste of this potential to fully focus on the action stories which the movies tend to favor (which is not to say that they’re not a ton of fun). But for the first time in a long time, I need to go see it in the theater again.


Toys and Testing

BoingBoing reports that new rules on consumer safety threaten to put small producers out of business because the testing is too expensive.

I have a few thoughts on this.

This is a pretty common libertarian vs. nanny state disagreement – should consumers be allowed to make their own choices? – but I don’t think it’s that simple, for a few reasons. (Before you go on, I think it’s worth reading my previous piece on some failure modes of the market.)

Keeping toxic chemicals out of kids’ toys can’t really be the responsibility of the parents, because it’s not within their domain of control. You can be a responsible parent and only buy toys you “trust” (whatever that means), and your child will still be exposed to toys you didn’t have any say about. It’s unavoidable – other kids have toys, day care centers have toys, kids play with toys in the playground that other kids bring or leave behind. The only way to prevent these toys from coming into contact with kids is to keep them out of the marketplace to begin with. If you like, it’s society’s responsibility to keep poisons out of kids’ toys in general, because the incentives don’t line up for the individual actors.

After-the-fact deterrents are simply not effective. Lawsuits take years to resolve, are overly burdensome, and it may be impossible to even track down the responsible party (I’m told it’s nearly impossible to sue a foreign company). On top of that, even an expensive PR-nightmare lawsuit may not be a sufficient deterrent to a large corporation with a hefty legal budget. A few million-dollar settlements can seem very small in the face of a few hundred million in profits per year. Also, it’s worth noting that this is a reactive response which doesn’t actually fix the problem, but throws monetary compensation at it in an attempt to “make things better”. But that’s basically what we’re being asked to accept here with the free market solution – let us do what we want and if you don’t like it, sue us, because it’s “too expensive” to ensure that we make safe products. We have that prefrontal cortex for a reason – people are uniquely capable of making predictive decisions, and to allow reactive forces to handle problems we can plainly see are coming seems ridiculously primitive to me. One might argue that we don’t have the capacity to predict how our actions might affect these complex systems, but that’s exactly why we need to be able to adapt and tweak them as we go. I haven’t seen any evidence that the market makes better choices in these kinds of situations, and in fact the call for regulation is a response to the failure of market forces – these companies have already shown an inability to keep toxic ingredients out of their products, yet we still continue to have these problems. Public outrage and whatever lawsuits are currently in the pipeline haven’t served as an adequate deterrent. Why’s that? I don’t know.

This is similar to the conundrum faced by small food producers. See Joel Salatin’s Everything I Want To Do Is Illegal for a lot of good examples of this. The main thrust is that the rules that are meant for large corporations where the overhead gets absorbed by the scale are overly burdensome for small producers, who don’t have the resources for dedicated testing facilities but also have less capacity to do harm, both because they have fewer customers but also because some kinds of harm are caused by the steps needed to operate at scale in the first place. I like to buy local food from farmers that I’ve come to know and trust. This can work at a small scale – if I want to see their operation, I can go visit the farm. I have no similar way to verify that with a larger company.

I don’t think that broken regulation is a condemnation of the entire idea of regulation, but I think it’s obvious that the rules need to be different depending on the scale of the domain they apply to. It is not unreasonable for Hasbro and Mattel to have to follow different rules than the guy who’s carving wood figures in his garage and selling them on etsy. Scale matters – more is different, and bigger is different.


Possibly the perfect omelet pan

Filed under: — adam @ 9:13 pm

I’ve long been looking for a good replacement for teflon for making classical French omelets, and I’d pretty much given in to the idea that it needed to be teflon or nothing. Cast iron (enameled or not) gets a nice big hotspot in the middle from the gas flame, and anodized aluminum isn’t non-stick enough. Even teflon is substandard for that, because to do it right, you need to use high heat and a metal fork.

Enter this new item in Cuisinart’s “Green Gourmet” line, a ceramic alternative to teflon for non-stick pans, which is made with no PFOA or PTFE. It’s not too expensive, and has anodized aluminum on the bottom for good heat distribution. I did a Pepin-style omelet with a little butter and a metal fork in it this evening. It has nary a scratch and the omelet came together perfectly. The surface of the pan feels very slick and hard, and the handle is comfortable. Major bonus points for this phrase in the instructions: “Never use Cuisinart Green Gourmet cookware on high heat or food will burn”.

Credit to the estimable Mr. McGee for a) scientifically confirming my assertion that cast iron has terrible distribution properties and b) mentioning some new non-stick coatings I hadn’t heard of (but not the one above, which may or may not be Thermolon, but which seems to be higher quality than the one he covered).



Get out there tomorrow and do what you feel you need to. This country has gone astray, and we need to fix it. The next four, eight, twelve years are important, and what you do tomorrow will dictate the path for those years. We need strong leadership who will listen to the concerns of our citizenry.

On that note, the Columbia Journalism Review has reported on a new map of political blogs that my company, Morningside Analytics, recently produced for a study being conducted by Columbia’s Toni Stabile Center for Investigative Reporting and the Berkman Center at Harvard.

Political Clustermap
(Click the image to read our blog post about it.)

I find this map extremely compelling, and it speaks volumes about the respective approaches that will follow one of these two men to the White House tomorrow.

John Kelly, our chief scientist and founder, sums it up:

“There are some groups of pro-McCain and anti-Obama blogs that are well connected to each other but not densely linked with bloggers in the longstanding political blogosphere, even those on the conservative side [...]. If these were typical political bloggers, we would expect to see them better woven into the fabric of the network.”

Cogitate on that, sleep well, and vote proudly.


Why I eat what I eat.

Some number of years ago, I used to think that the ability to get any kind of fresh produce any time of the year was a mark of an advanced global civilization. We had conquered a small piece of space and time and weather to bring me blueberries in February. More recently, we lived for over a year in the shadow of the neighborhood that used to belong to the World Trade Center. I don’t want to talk about that right now, but it serves to highlight a personal revelation. When we moved, we moved to a new neighborhood, a new breath of fresh air. And a farmer’s market opened, literally, right outside my door.

After my first visit, I started making it a point to go every Friday morning, even in the dead of winter, just to see what new bounty would be there. It began with fruit and vegetables, and as I explored more, eggs, milk, breads, and eventually meat. Each new discovery reminded me of what potential could be held by a simple item of food. A peach — this is what a peach is supposed to taste like. The word “luscious” really does not fully convey the impact of biting into a local peach at the height of the season. Apples as tart as you like, strawberries with no white center to be seen, blueberries both sweet and tart at the same time, carrots you can eat without peeling them. This food was not only better for you, it was simply better, in every respect that mattered.

And then August came, and I got to the tomatoes. The tomatoes made me a lifelong convert – the drawn line between “there’s a market there” and “I need to go to the market”. A supermarket tomato is not even in the same vocabulary as a fresh, ripe, local market tomato. Flavor, texture, aroma – it’s just unfair to even do a comparison.

Of course, there’s a tradeoff here. Eating seasonally means you relish every bite until you can’t stand it anymore, because you know that it won’t last. Most crops have a few months, but some last only a few weeks. There are cycles for everything – they come in and they’re not quite ready yet, then the next week or two they’re perfect, then they’re gone until next year. Hopefully by that time you’ve been able to eat your fill to hold you until next year, but then there’s something else wonderful that takes its place. Peas move to berries move to tomatoes move to root vegetables.

The jury’s still out, but the evidence points to organic and sustainable food being healthier. It appears that plants are more nutritious when they have to defend themselves from pests. Garbage in, garbage out — I don’t want to eat vegetables that are made entirely of petrochemical fertilizers in the same way that I don’t want to eat meat that’s made entirely of corn. I don’t voluntarily buy anything that has high fructose corn syrup in it, and you won’t find any of that at the market.

And it’s not just about the food. Yes, it’s better, and everything I can buy at the market, I do. But it’s also about confidence, and community in one of the oldest senses of the word. I know these farmers. I have recently visited one of the farms and plan to go see more. They stand behind their food. I know, for the most part, which ones use pesticides and which ones don’t, and I can see the relative effects that has on the quality of their food. I’m not afraid to eat their eggs raw or undercook my burger.

Seasonal/local is not organic. That’s not to say that organic is bad, but they’re not the same thing. Organic doesn’t necessarily equate to sustainable, or even high quality. All other factors being equal, organic tends to be the better choice, but it’s not the whole answer. A local food may in fact not be the best choice, but at least if you have a question about it, you can often talk to the farmer directly and get whatever answers you’re looking for.

And so – my buying patterns: I always shop at the market first. If I can get something there, I do. The quality is always better, it is certainly healthier, and it has a lower carbon footprint when you factor in the petrochemicals they don’t use to fertilize, keep the pests away, and get it to you, and all of the farms at my markets are committed to sustainable farming practices. Plus, I like them personally and I want to give them my business. Shopping at the market isn’t always numerically competitive, but it is always value competitive – if something is 1.5x more at the market, it’s likely 5-10x better.

For the things I can’t find at the market, I do try to buy organic, and I try to ensure that they’re seasonal somewhere. For example, I don’t buy oranges from Florida in July. Not only is there no reason to given the abundance of other wonderful fruits here, they’re just not as good as the ones in January. Organic is usually preferable, because I think that food is healthier and better for the planet than “conventional” (whatever that really means).

I’m not a die-hard localist. I still buy coffee, and I eat imported Italian canned tomatoes when I can’t find good ones here. I love to cook, and shopping at the market simultaneously makes some decisions easier (I make what’s good that week) and improves my results. But what it really comes down to is that I’m committed to procuring for my family and friends the best food we can have while supporting people who love food as much as I do.

This is a healthy food chain. It’s good for the planet, it’s good for the farmers, it’s good for the plants and animals, and it’s good for us. Every little bit makes a difference.


Shifting the Debate

My company (Morningside Analytics) has just launched our Political Video Barometer, which tracks the movement of YouTube videos through conservative and liberal blogs:

The Barometer is updated 4 times a day and allows you to see which new videos are starting to break through within either the conservative or liberal blogs and which ones are breaking through to non-political audiences. We identify influential blogs through a unique, cutting-edge clustering approach – the underlying technology was also used earlier this year to produce this detailed report on Iran’s blogosphere for the Berkman Center at Harvard.

We are also running a blog which will examine interesting findings from the barometer.

It’s always fun to launch a new product. We worked very hard on this, and I’m proud of it.



Why the Mac is better.

Filed under: — adam @ 10:16 pm

This is a list I put together for my father. I thought you’d enjoy it. Got anything to add?

I’ve been putting together a more detailed response for you. There’s a reason why nearly every computer professional I work with has switched to the Mac in the past few years.

This is the short list:

1. It actually is more stable. It is very very difficult to crash the OS entirely. The only time I’ve done it is when running Windows in a virtual machine, because of the trickery needed to accomplish that in the first place. When you kill programs that aren’t responding, they almost always die and can be restarted. This may not be a huge problem for you, since you only use about 5 applications. I use well over a hundred, and on Windows, this was a disaster – anything misbehaving could lock up the entire system. It simply doesn’t happen that way on the Mac.

2. It is >far< easier to maintain. This is actually a few thousand things of various severities, but some highlights:

- You know how when you get a new Windows machine, you have to reinstall everything and search around all over the disk for where your files and preferences might be? On the Mac, you don't have to do any of that. Everything specific to you goes in your home directory, and your Applications go in your /Applications folder. 99% of everything will work if you just copy those two folders. When you install software, there's usually no installer to run, you just copy the application to the Applications folder. This clean split between application preferences and user preferences also means that having multiple users on the same machine just works, every time. No weird "some other user accidentally modified the global settings".

- Moreover, you don't have to actually do that to be up and running quickly, because you can just make a copy of an existing drive and boot off of that. This won't screw up any hidden settings the way it will on Windows. (It may sort of work, but it'll never be "quite right".) You can keep a running backup of your boot drive that automatically updates. If anything happens to your boot drive, you pop it out, pop the backup in, buy a replacement backup, and it's as if you never left. Someday, you're going to have a hard drive fail, and when that happens, you're going to suddenly realize that there's a lot of stuff you had strewn around your drive that you never found to back up, and it'll be gone forever (or you can spend a few thousand dollars for a slim chance to recover bits and pieces of it in probably mostly unusable forms from a drive recovery service).

- You don't have to worry about viruses or spyware. Nothing runs with administrator privileges by default, the system is very well locked down, and there's no need to run anti-virus or anti-spyware software, because no software can install itself without your permission. Granted, it's not perfect and there are some security holes, but no system is 100% secure and it's orders of magnitude better than Windows in this respect.

3. Laptop sleep works. I've never had a Windows laptop that came back from sleep reliably. The last one sometimes took up to 30 minutes to restore. Unacceptable.

4. Creative programmers are drawn to the Mac. As such, there's a vibrant community of small-developer software that's incredibly useful, well-written, innovative, and for the most part, follows the same set of UI guidelines, so they all behave the same way, have similar keyboard shortcuts, etc... There are a few exceptions, but most of it is this way. There's an actual ecosystem, and when developers make something that's useful, often other developers will make their applications work with it if you have it installed. Almost all of it is available in downloadable form with 30-day trials. I've installed maybe three programs off of CDs, so that means no CDs to lose on the off chance you need to reinstall something.

5. Hardware support is just better. It's simpler and easier. Most things don't require drivers. When you switch ports around, things continue to work just fine. You can add and remove external monitors at will, and the system just compensates - no rebooting.

6. Apple cares about getting all of the little details right. You must watch this:

7. The OS itself has MANY MANY little enhancements that make life easier in lots of little ways (many of which may be difficult to appreciate without using them, but once you do, you'll miss them when they're gone):

- In the Finder, the Mac's version of Explorer (though Finder came first), you can highlight a file and press the spacebar to bring up QuickLook, which is a floating window with a navigable preview of the document. Press the spacebar again to close it. This makes flipping through files extremely easy. Many file formats are supported (word docs, excel sheets, pdf, most kinds of images, most kinds of videos, etc...). The built-in Mail program also supports this, so you don't have to save mail attachments to view them. You can also very easily toggle the preview in and out of full screen view.

- It includes Time Machine, which is an automatic hourly backup which saves as many past versions of a file (in a compact changes-only format) as your disk will support. Accidentally write over a word file you needed? Bring up Time Machine and restore it from earlier in the day. Time Machine also supports Quick Look, so you can easily check whether a particular version of a file is the one you're looking for.

- Expose is the window manager for navigating many open windows. I can't really explain this, so watch the video:

- The desktop includes the Dashboard. Press a hotkey and the screen is overlaid with useful widgets - weather, dictionary lookup, etc...

8. Screen sharing is built in. Need some help? I can log in to your machine remotely and securely, and check it out for you. No more waiting until I can come over to fix something.

9. Built-in calendar and address book that most applications share.

10. iPhoto is MUCH better for managing your photos than the Nikon software you have.

11. You can dual boot Windows if you really want, or run it in a virtual machine with VMWare.

No, it's not perfect. But it is a hell of an improvement. I'd say I have 1% of the number of issues I had with Windows machines, if not less.


On libertarian/capitalist intent

For some time, I was a staunch Libertarian. That lasted until I started to examine the boundary cases where Libertarianism didn’t seem to offer a good answer. I still hold a lot of those principles dear, but I’m no longer convinced that complete Libertarianism can work in the real world. What follows are some of my recent thoughts on the free market.

The proponents of the free market often propose that private ownership gives people an incentive to make the most of resources, and that people with ownership incentive are likely to make the best decisions about the use of resources.

I tend to agree in many cases – the market does often work and find the best solution, but I’ve been mulling over some exceptions to that rule.

Some traps that individual decisions in the market can fall into:

1) Divergent interests: the interests of the owning party may not be the same as the general public.

2) Irrationality: people don’t always act rationally or in their best interest.

2a) Obscured Information: even in the face of good information, which is often not present, the right decision isn’t evident.

3) Vested Interest: ownership of a thing is not the same as stewardship of a thing, and if you don’t have a personal vested interest in the thing, your best use of it may be to divest yourself of it (i.e.: use it up, parcel it, consume it) in exchange for lots of short term money you can use to buy something you actually want.

4) Value dilution: the more stuff you own, the less you care about any given individual thing. Ownership of lots of things probably means that each individual one is less valuable, because the value of a thing must be measured not just against the external market value, but also its proportion to your total assets, difficulty to replace, your incentive to replace it if you lose it, sentimental value, subsidiary values (prestige from ownership, etc…), and a whole host of other things.

5) Lack of patience and susceptibility to fear: People in control of a thing may require immediate access to it (liquidity), and will sometimes act to preserve that liquidity at the expense of the health of the overall economy, and therefore at the expense of some value of the underlying asset. This happens even though the people controlling an asset may be able to see the writing on the wall – everyone will be fine if everyone sits tight, but if you wait and someone else moves first, you lose. I think this usually manifests as “private enterprise tends to seek short-term gains”, but it’s tightly tied to #6:

6) The Tragedy of the Selfish: this is a concept I’ve been toying with on and off for a few years. It’s not the Tragedy of the Commons, and it’s not quite the Tragedy of the Anticommons, though there are aspects of both in there, as well as an arms race component, and some Prisoner’s Dilemma. This is the situation that exists when an individual makes what is logically the best decision to maximize their own position, but the sum effect of everybody making their best decisions is that everybody ends up worse off rather than better. Libertarian capitalism hinges on the assumption that making everybody individually better off is the best way to maximize the happiness of the group, and it’s simply the case that there are situations where that assumption does not hold. The example I often use for this is buying an SUV to be safer on the road. You buy an SUV, then other people do, because they want to be safer too. Except that if enough people make that same decision, you’ve overall raised the chances that if you’re hit by a car, it’ll be an SUV, which will do much more damage than a smaller car. Everyone is better off if everyone else backs off and drives smaller cars. It’s a simplification, of course, but I hope that makes the point.
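The SUV example is essentially a multi-player Prisoner’s Dilemma, and the structure can be sketched in a few lines of code. The payoff numbers below are purely illustrative (my own toy values, not anything measured); the point is only the shape of the incentives: whatever everyone else drives, the individually rational choice is the SUV, yet the all-SUV outcome is worse for everyone than the all-small-car outcome.

```python
# Toy payoff model of the "Tragedy of the Selfish" SUV arms race.
# Numbers are illustrative assumptions, not data: higher = safer.
# payoff[my_car][others_car] = my safety given what everyone else drives.
payoff = {
    "suv":   {"small": 4, "suv": 1},  # an SUV helps me, but an all-SUV road is dangerous
    "small": {"small": 3, "suv": 0},  # a small car is fine unless everyone else drives SUVs
}

def best_response(others_car):
    """The individually rational choice, given what everyone else drives."""
    return max(("suv", "small"), key=lambda mine: payoff[mine][others_car])

# The SUV is the dominant strategy: it's the best response either way...
assert best_response("small") == "suv"
assert best_response("suv") == "suv"

# ...so everyone converges on (suv, suv), which is worse for each person
# than if everyone had stayed with small cars.
assert payoff["suv"]["suv"] < payoff["small"]["small"]
```

Any payoff table with this ordering produces the same trap, which is why no appeal to individual rationality can escape it: the only way out is a change to the payoffs themselves, or coordination from outside the game.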

That’s what I’ve come up with so far. I’m sure there are more. Of course these don’t always apply, but I think at least one of them does often enough to warrant a better justification than “the market will solve the problem”. They’re certainly things to watch out for when getting out of the way and letting the market work.

What do you think?


The Google Chrome terms of service are hilarious

I’ve been very busy lately, but this is just too much to not comment on.

There are other articles about how the Google Chrome terms of service give Google an irrevocable license to use any content you submit through “The Services” (a nice catchall term which includes all Google products and services), but the analysis really hasn’t gone far enough – that article glosses over the fact that this applies not only to content you submit, but also content you display. Of course, since this is a WEB BROWSER we’re talking about, that means every page you view with it.

In short, when you view a web page with Chrome, you affirm to Google that you have the right to grant Google an irrevocable license to use it to “display, distribute and promote the Services”, including making such content available to others. If you don’t have that legal authority over every web page you’ve visited, you’ve just fraudulently granted that license to Google and may yourself be liable to the actual copyright owner. (If you do, of course, you’ve just granted them that license for real.) I’m not a lawyer, but I suspect that Google has either committed mass inducement to fraud or the entire EULA (which lacks a severability clause) is impossible to obey and therefore void. [Update: there is a severability clause in the general terms, which I missed on the first reading. Does that mean that the entire content provisions would be removed, or just the parts that apply to the license you grant Google over the content you don't have copyright to? I don't know.]

Even more so than usual, these terms are, quite frankly, ridiculous and completely inappropriate not only for a web browser, but especially for an open source web browser.

Nice going guys.


On gazpacho

Filed under: — adam @ 9:42 am

Salmonella-tainted tomatoes aside, gazpacho is about the healthiest thing you can eat, and I look forward to having some decent vegetables to make it with every year.

It’s pretty good with tomato juice, but I really prefer to use fresh tomato puree. I’m really not a fan of spicy gazpacho, so I keep mine on the clean, vegetal side. It really highlights the late spring vegetables that start to show up at the market in early June.

4-6 large tomatoes, quartered
2-3 medium tomatoes, diced
2 spring onions, diced
2 cucumbers, diced (or 4 kirbies – sweeter)
~10 basil leaves, chiffonade
salt & pepper
good extra-virgin olive oil

I use the plastic dough blade of my food processor to beat the crap out of the tomatoes – just quartered; peeling, coring and seeding not required – then run them through the finest disk on my food mill to remove the seeds, cores, and any remaining skin. The plastic blade won’t nick the seeds, which can be bitter. I used to just do this in the food mill, but it took >forever<; using the food processor first is about 50 times faster.

Use about 2-3 cups of the puree for the soup, but it really depends on how liquidy you like it. I like lots of chunks. Whatever you don’t use will keep in the fridge for a few days. I’m sure it would freeze well, though I’ve never done that.

Add the diced vegetables and basil leaves, and salt and pepper lightly. Stir in a drizzle of olive oil (on the order of 1-2 tbsp) until it thickens slightly. Cover and refrigerate for a few hours. Stir, taste, add more salt and pepper. You’ll need more than you think because it has less impact when served cold.

Serve cold.


Coming to a Rational First Sale Doctrine for Digital Works

In reference to this Gizmodo piece analyzing the rights granted by the Kindle and Sony e-reader:

I think the analysis in that article is flawed. It doesn’t make any sense to be able to resell the reader with the books on it, because the license for the books is assigned to you, not to the reader. For example, if your Kindle breaks, you can move your books to another one. I’ve never heard anything other than the opinion that you can’t resell the digital copy – the assumption has always been that these sorts of transactions break the first sale doctrine. The problem then becomes “what are you buying?”, if there’s nothing you can resell.

The first sale doctrine has to apply to the license, not the bits themselves, because under the scenario in which it applies to the bits, arguably Amazon retains no rights whatsoever. They had no direct hand in arranging the bits of your copy the way they are – they merely sent instructions to your computer about how to arrange them in a certain pattern. The article asserts that you can’t “transfer” the bits, but in the same way, in downloading a copy, Amazon hasn’t actually “transferred” anything to you, either.

There’s no reason you shouldn’t be able to sell your Kindle, and the books don’t necessarily go with it, but if you want to sell the books separately, you can do that too. Legally, if you do that, you’d be obligated to destroy all of the copies you’ve made. Amazon’s inability to police that is as relevant as their inability to police the fact that you haven’t made a photocopy of the physical book you sold when you were done with it. There’s no weight to the argument that this will encourage rampant piracy, given that unencrypted cracked copies of all of these things are available to those who want them anyway, and always will be.

People comply with reasonable laws willingly because they’re honest, it’s the “right thing to do”, and they feel that the laws are an acceptable tradeoff for living in a civilized society where sometimes you have to make compromises and not just do whatever you want. People do not comply with one-sided laws where they feel like they’re being ripped off for no reason. A law which turns your sale into a non-sellable license is of the latter kind. It turns normal users into petty criminals who don’t care when they break the law, because the law is stupid. Once they’ve ignored some of the terms, it’s a shorter step to ignore others, or ignore similar terms for other products.

People like consistency, especially in legal treatments. I would argue that it’s in Amazon’s interest (and the others’) to not niggle on this point, because a reasonable license with terms that look like a sale makes for happier customers who aren’t interested in trampling on the license terms, and that’s better for everyone.

(Yes, I’m arguing that restrictive license “sales” are anti-civilization.)

The Kindle ToS not only prohibits selling the Kindle with your books on it, it prohibits anyone else from even looking at it. If someone reads over your shoulder on the train, you’re in violation.

This is, of course, ridiculous.

The right legal response here seems to me to be not to dicker about splitting hairs over whether you can sell your digital copies if they’re on a physical device and can’t if they’re not, but to declare that anything sufficiently close to a “right to view, use, and display [...] an unlimited number of times” de facto constitutes a sale, and with it comes certain buyer’s rights regardless of what kinds of outrageous restrictions the licensor tries to bundle into the ToS. The fact that this also seems to be the right business response reinforces my belief that this is the correct path. This kind of transaction is different from renting, which is by nature temporary.

It is the right thing for society to declare that if you’ve bought something that isn’t time or use limited, you’ve therefore also bought the right to resell it, whether it’s a physical object or a license.




Why don’t we have degrees of terrorism?

We have different classifications for the crime of “killing a person”, and those classifications encompass whether it was an accident, whether it was premeditated, and how many people were killed – in other words, how serious a crime has actually been committed. But when we talk about terrorism, it’s always just “terrorism”. This results in the really sinister megacriminals being lumped in with the morons who can’t get it together to leave the house without forgetting to wear pants, let alone actually arrange to blow anything up.

Most “terrorists” are less dangerous than your average serial killer or bus accident, but we still lump them all together simply because they have an agenda.

Similar to murder, I think we need some sort of classification system for these crimes:

  1. Intent to commit terrorism: you “plotted” with someone who may or may not have been an undercover cop, but didn’t actually acquire passports or learn how to make liquid explosives
  2. Manfrightening: you committed some other crime, and along the way someone got scared and called you a terrorist, but you have no stated agenda.
  3. Terrorism in the third degree: You actually blew up something, but no one was hurt.
  4. Terrorism in the second degree: You actually blew up something and killed some people, but failed to garner any sympathy from the public.
  5. Terrorism in the first degree: You actually blew up something, lots of people were killed, and the US declared war on some country you were unaffiliated with.



Numbers is a nice idea with some usability disasters

Filed under: — adam @ 9:35 pm

I’ve put up a screen cast made with the very easy Screenflow.

This is me trying to reorganize a large number of tables with attached comments in Numbers, such that there is no overlap and no tables cross a page break.

As should be evident even without narration, this is pretty much a usability disaster. Numbers is a nice idea, but it does not live up to my expectations for what a spreadsheet with page layout capability should be able to do. I hope they fix this.

Some notes:

1) It is extremely difficult to figure out where to click to consistently do a bunch of different things – move a whole table, resize a table, grab a comment handle. This behavior doesn’t seem to be the same every time, and varies depending on whether or not the white handles appear. For example, you can’t make a table smaller if there is content or a comment in a cell you’d remove. That makes sense, but there’s no visual indicator that that’s what’s preventing you from making the table smaller. Watch how often I can’t get the click right on the first try, all over the place.

2) Comment callouts do not move with their tables and are not selectable as a group! Also, they don’t scroll the page when dragged to the edge.

4) Distribute Vertically sort of works, if the tables have no comments, but with comments, all of the tables move and their comments don’t. There does not seem to be a standard way to add descriptions to tables without comment callouts.

5) When you shorten a table, everything below it moves up, and the space the table you shortened took up IS NOW GONE. This screws up the layout for everything below it on the page, and there does not seem to be any easy way to reclaim that space.

6) When you insert a table in the middle, there does not seem to be a good way to reconfigure the layout of everything else to accommodate the space you need for that insertion. This is basically the same problem as #3.


Fed up with food labeling

Filed under: — adam @ 10:59 am

Our food labeling standards are completely out of whack.

As an example, let’s take “100% fruit juice”. I’m pretty sure that at some point, “100% fruit juice” meant that what you got in the bottle was, prior to being put in the bottle, a piece of fruit that was crushed and maybe filtered. I’m 100% sure that that’s what most people still expect when they buy something that’s labeled “100% fruit juice”.

Except that’s not what you get anymore. Now, it’s reconstituted from concentrates, mixed from different kinds of fruit juice concentrates (which may have vastly different nutritional profiles), and blended into whatever they like, but it’s still the healthy choice, kids, because it’s 100% fruit juice!

Right off the labels:

Kedem concord grape juice (which, incidentally, is among the sweetest of the grapes):

The label says “100% fruit juice”.

Ingredients: Grape Juice, Potassium Metabisulfite Added To Enhance Freshness.

It has 150 calories per 8 oz.

Welch’s grape juice:

The label says “100% grape juice”.

Ingredients: Grape Juice From Concentrate (Water, Grape Juice Concentrate), Grape Juice, Ascorbic Acid (Vitamin C), No Artificial Flavors Or Colors Added.

It has 170 calories per 8 oz.


That’s 13% more calories per serving. They’re not using grapes that have 13% more sugar in them; they’re dickering with the proportions to make their juice sweeter.

This is just one particularly egregious example, but it’s all over the place – many “100% juices” are sweetened with cherry juice or other concentrates. It’s a complete sham. Even the Kedem is pushing it because it’s got preservatives, but at least the juice is actual juice. No way does that Welch’s bottle contain “100% juice”.

Our food labels don’t mean what they say anymore, they have very detailed technical specifications to go with them, and it’s impossible to know what they mean from common sense without understanding those specifications. This isn’t even about making dubious health claims – it’s about defining away the actual contents of the package.



The HD format war is lost by existing

[I've posted this as a comment on a few HD DVD vs. Blu-ray blog posts elsewhere, so I thought I'd put it up here as well.]

An HD format war is simply the height of stupidity, given the nice example of how quickly DVD was adopted by… everybody.

This happened for a few reasons, none of which are being replicated by the HD formats/players:

1) One alternative with no difficult competing choices.

2) Fit into existing home theater setups easily.

3) Clear, obvious quality advantages, even if you set it up incorrectly.

4) Significant convenience advantages – pause with no quality loss (anyone here remember VHS tracking?!), random access, extra features, multiple languages, etc…

5) More convenient and durable physical medium.

So – let’s look at what HD formats offer over DVD in these areas:

1) Multiple competing incompatible choices. Not just between HD DVD and Blu-ray, but also between different HD formats. 720p/1080i vs. 1080p, HDMI/HDCP vs. component. People aren’t adopting HD formats because they’re confusing.

2) Does not fit into existing home theater setups easily. If you had a DVD home theater, chances are you’re replacing most, if not all of your components to get to HD – you need a new TV/projector, you probably need some new switches, you need all new cabling, and you need at least three new players to do it right (HD DVD, Blu-ray, and an upscaling DVD player so your old DVDs look good). Not to mention a new programmable remote to control the now 7 or more components in your new setup (receiver, projector/tv, 3 players, HDMI switch, audio/component switch).

3) Clear, obvious quality advantages, but only if properly tuned and everything works properly together. I can easily tell the difference even between HD movies and upscaled DVD movies. Upscaled DVD movies look fantastic, but HD movies really pop off the screen. But if things aren’t properly configured or you’re using the wrong cabling, these advantages disappear.

4) No significant convenience advantages, with some disadvantages. Pretty much the same extras, but most discs now won’t let you resume playback from the same place if you press stop in the middle, and they make you watch the warnings and splash screens again.

5) Indistinguishable physical medium. Maybe the Blu-ray coating helps, but we’ll see about that.

I’ve gone the HD route, because I really care about very high video quality, and I love tinkering with this stuff. Most people don’t, and find it incredibly confusing and expensive.

Is it really any wonder that people are holding off?

The HD format war is already lost, by existing at all, and every day that both formats are available for sale just makes things worse. The only good way out of it is to erase the distinction between the two formats – dual format players that reach the killer price point and aren’t filled with bugs.



The first rule of community

I have a personal mailing list for my very close friends, to which I often send a few messages a day. If I stop for a day or two, it’s not a problem. If I stop for a long period of time (a week, a month) without telling someone, I have a strong belief that many of those people will check in to see what’s wrong. This is a major aspect of community for me, and it’s missing from every other piece of online interaction I’ve ever had, including this blog. Part of it has to do with the requirement that everyone on the mailing list is someone I’ve met in person and decided to include – I do not invite people whom I’ve never met physically, and I do not accept solicitations to join the list. But it’s a very strong driver for me, and it’s the reason I still maintain the list even in the presence of so many “better” ways to communicate.

There’s really only one rule for community as far as I’m concerned, and it’s this – in order to call some gathering of people a “community”, it is a requirement that if you’re a member of the community, and one day you stop showing up, people will come looking for you to see where you went.

Incidentally, this quality has been lacking from some real world organizations as well, and it’s become a very strong barometer for me to tell just how welcome I feel with any given group of people. If I left and didn’t come back, would anyone care enough to find out why? It’s a very visceral question, and perhaps a difficult one to ask. But I think it’s an important one, as we move into these so-called communities where all of our interaction is online, and fluid.

I quite enjoy my participation in a number of sites, flickr and ask metafilter among them. But I have no doubt that if I suddenly go away, not one other member will really care, with the probable exception of the people I know from offline. From time to time, they may wonder, “huh, haven’t seen Caviar in a while” (and the use of handles instead of names is probably a big contributor to this), but it’s unlikely that anyone will track me down to ask why, if they can even find out a way to reach me. They’ll probably just assume I found something better to do, or switched to a different site. And therein lies a big piece of the problem – the loose ties go both ways. That guy who disappeared may have just found something better to do, or switched to a different site, but maybe he died, or just didn’t feel welcome anymore. If we don’t have the presence to find out these reasons, or even the capacity to tell when such an event has occurred, are we really building a useful analogue to the binding offline communities that exist, or is it all just a convenient fiction?

I’ve blogged before about some of the problems with online communities, but I think this is a bigger point. That post focused more on how to get online communities to be more outward facing and less insular. This is more about how to get online communities to be more inclusive and meaningful. I must admit that I’m only at the beginning of an answer, but I welcome any ideas on the subject. I’ll avoid the temptation to suggest that we should probably meet for drinks to discuss it.



The Canon Pixma Pro 9000 is a great inkjet photo printer

Filed under: — adam @ 3:15 pm

I got a Canon Pixma Pro 9000 to replace my dead Epson Stylus 1280. Having not bought a new inkjet printer in about 7 years, I’m totally stunned by how far the technology has improved, even over the previous round which was pretty impressive.

First, it’s REALLY fast. While a letter size photo on the 1280 would take a good 5 minutes to print, the Pixma spit my first test print out in, oh, about 25 seconds. When it started to go, I did an actual double take – I was not really expecting that.

Second, the color is outstanding. With no adjustment at all, it got very close to my calibrated screen. Not exact, but close enough that you probably wouldn’t notice unless you held it up to the screen and looked at them side by side. On regular old Costco photo paper.

Third, the ink usage seems better designed. It has 8 separate ink cartridges, which are individually replaceable, instead of one combined cartridge.

Fourth, when you’re not using it, the paper path trays fold up and click into the case, which I expect will significantly reduce the amount of dust and stray hair that always seemed to get into the paper path on the old printer.

Fifth, it has more cleaning modes, to clean the print heads, deep clean the print heads, and also clean the bottom tray to prevent smudges. Also, the entire print head is replaceable if needed.

The only drawback I can see so far is that it’s gigantic. That’s kind of a side effect to being able to print on big paper, but even though it’s physically slightly bigger than the 1280 was, it seems more intelligently designed to take up as little space as it can and still do what it does.

I got it for $439 at Amazon, which is about $100 less than I paid for the 1280 originally.

Highly recommended.



Microsoft should release XP for free

Filed under: — adam @ 3:47 pm

It is well known that free products are used more widely than products that people have to pay for. If Vista is so much better, then people will still pay money for it, and having more installations of XP around to keep people using Windows apps instead of switching to Mac or Linux can only be a good thing for Microsoft, whose continued success depends not only on agreements with PC manufacturers, but also on the continued existence of Windows-only software that people need to run. This benefits Microsoft, and will result in more sales of Vista (and subsequent versions), as other software vendors evolve into the same “The XP version is free, but if you want the premium version, you need Vista” pattern. Essentially – XP becomes the shareware limited demo version of Windows, and you pay if you want the full version.

This obviously benefits the consumer, because free is good, and there are plenty of places (VMs, especially) where it would be useful to run XP, but where the current price is prohibitive. Making XP free would open up the Windows market to those potential customers.

Anyone who’s switching to Mac or Linux has already made the decision to do it, and isn’t turning back because they can’t run Windows in a VM… because they already can. This would just make everyone’s life easier, and generate a LOT more goodwill for Microsoft than they have now.

Microsoft, despite being ridiculously profitable, is in danger of losing relevance. This is one way to combat that.



Google has just bought a lot of browsing history of the internet

I pointed out that YouTube was a particularly valuable acquisition to Google because their videos are the most embedded in other pages of any of the online video services. When you embed your own content in someone else’s web page, you get the ability to track who visits that page and when, to the extent that you can identify them. This is how Google Analytics works – there’s a small piece of javascript loaded into the page which is served from one of Google’s servers, and then every time someone hits that page, they get the IP address, the URL of the referring page, and whatever cookies are stored in the browser for the domain. As I’ve discussed before, this is often more than enough information to uniquely identify a person with pretty high accuracy.
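The mechanics are easy to sketch. This hypothetical logger (the record format is invented, not Google’s actual code; the header names are standard HTTP) shows roughly what a third-party server learns every time your browser fetches an embedded resource from it:

```python
# Illustrative sketch: what a third-party server can log each time a page
# embeds a resource (script, image, video) served from its domain.
# The header names follow the HTTP standard; the record format is invented.

def log_embedded_hit(client_ip, headers):
    """Build a tracking record from one request for an embedded resource."""
    return {
        "ip": client_ip,                         # who (and roughly where) you are
        "page": headers.get("Referer", ""),      # the page you were viewing
        "cookie": headers.get("Cookie", ""),     # persistent ID across visits
        "agent": headers.get("User-Agent", ""),  # browser/OS fingerprint
    }

record = log_embedded_hit("198.51.100.7", {
    "Referer": "http://example.com/some-article",
    "Cookie": "uid=abc123",
    "User-Agent": "Mozilla/5.0",
})
assert record["page"] == "http://example.com/some-article"
assert record["cookie"] == "uid=abc123"
```

Nothing here requires the user to submit anything – simply loading a page with an embedded third-party resource is enough to produce a record like this.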

DoubleClick has been doing this for a lot longer than Google has, and they have a lot of history there. In addition to their ad network, Google has also just acquired that entire browsing history – profiles of the browsing of a huge chunk of the web. Google’s privacy policy does not seem to apply to information acquired from sources other than Google, so they’re probably free to do whatever they want with this profile data.

[Update: In perusing their privacy policy, I noted this: If Google becomes involved in a merger, acquisition, or any form of sale of some or all of its assets, we will provide notice before personal information is transferred and becomes subject to a different privacy policy. This doesn't specify which end of the merger they're on, so maybe this does cover personal information they acquire. I wonder if they're planning on informing everyone included in the DoubleClick database.]



ISPs apparently sell your clickstream data

Apparently, “anonymized” clickstream data (the URLs of which websites you visited and in what order) is available for sale directly from many ISPs. There is no way that this is sufficiently anonymized. It is readily obvious from reading my clickstream who I am – URLs for MANY online services contain usernames, and anyone who uses any sort of online service is almost certainly visiting their own presence far more than anything else. All it takes is one of those usernames to be tied to a real name, and your entire clickstream becomes un-anonymized, irreversibly and forever.
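A minimal sketch of why the un-anonymization is so easy – many services put the username right in the URL path (the patterns below are invented examples, not any particular site’s actual URL scheme):

```python
import re
from urllib.parse import urlparse

# Simplified illustration: usernames often sit directly in the URL path.
# These patterns are made-up examples, not an exhaustive or exact list.
USERNAME_PATTERNS = [
    re.compile(r"^/user/([^/]+)"),
    re.compile(r"^/people/([^/]+)"),
    re.compile(r"^/~([^/]+)"),
]

def usernames_in_clickstream(urls):
    """Collect candidate usernames exposed by an 'anonymized' clickstream."""
    found = set()
    for url in urls:
        path = urlparse(url).path
        for pattern in USERNAME_PATTERNS:
            match = pattern.match(path)
            if match:
                found.add(match.group(1))
    return found

clickstream = [
    "http://photos.example.com/user/jsmith/sets",
    "http://forum.example.com/people/jsmith",
    "http://news.example.com/story/12345",
]
# One repeated username is enough to start unmasking the whole stream.
assert usernames_in_clickstream(clickstream) == {"jsmith"}
```

A few lines of pattern matching over a day’s worth of URLs is all it takes; tie any one of those handles to a real name and the entire stream is identified.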

I’ve talked about the dangers of breaking anonymization with leaking keys before:

Short answer: It is not enough to say that a piece of data is not “personally identifiable” if it is unique and exists with a piece of personally identifiable data somewhere else. More importantly, it doesn’t even have to be unique or completely personally identifiable – whether or not you can guess who a person is from a piece of data is not a black and white distinction, and simply being able to guess who a person might be can leak some information that might confirm their identity when combined with something else.

This is also completely setting aside the fact that you have very little direct control over much of your clickstream, since there are all sorts of ways for a site you visit to get your browser to load things – popups, javascript includes, and images being the most prevalent.

Preserving anonymity is hard. This is an egregious breach of privacy. Expect lawsuits if this is true.



The Penny Gap is the difference between free and mostly free

Filed under: — adam @ 11:16 am

Interesting post about the Penny Gap. I think this is directly related to a similar concept which might be called the Unlimited Chasm.

The Penny Gap says that if your service is actually free, it will have a much greater uptake than one that is merely very very cheap. Rather than being a smooth curve up the value chain, there’s a quantum shift between “free” and “costs anything”. I think this is largely due to the implicit “cost” (in effort) of the transaction being factored into the price. If you could just wave your hand and pay a penny for something without getting out your credit card or typing in your password, it seems like this gap would largely disappear.

There’s a similar effect at play when dealing with “unlimited” services. If you have to pay for usage, it takes a lot of mental effort to add up everything you’re paying and make sure you’re not over a certain amount. If you have an unlimited plan, that mental effort goes away. Even if the unlimited service is more expensive than you’d pay with metered service, there’s less hesitation to use it because you never have to worry about keeping track of it. I feel like this effect is less prominent on services that give you constant feedback about how much you’ve used. Presumably the insurance of never going above a certain limit has some value as well.
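One way to make the intuition concrete is to model perceived cost as the price plus a fixed payment friction (all numbers invented for illustration):

```python
# Toy model of the Penny Gap: the perceived cost of a purchase is the
# price plus a fixed "friction" for the act of paying at all.
# The friction value is invented purely for illustration.

FRICTION = 0.50  # effort of getting out the card, in dollar-equivalents

def perceived_cost(price):
    """Free really is free; anything else carries the transaction friction."""
    return 0.0 if price == 0.0 else price + FRICTION

# The jump from free to one cent is far bigger than from one cent to a dime.
gap_free_to_penny = perceived_cost(0.01) - perceived_cost(0.0)   # 0.51
gap_penny_to_dime = perceived_cost(0.10) - perceived_cost(0.01)  # 0.09
assert gap_free_to_penny > gap_penny_to_dime
```

The fixed friction term is what makes the curve discontinuous at zero; frictionless payment would shrink that term toward nothing and smooth the curve out, which is the hand-waving-a-penny scenario above.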

Free and unlimited are obviously closely related, mentally and emotionally. I’ll have to think about this some more.


Followup commentary on Windows Vista

Filed under: — adam @ 12:29 am

Perry said “I think you held back too much. Tell us what you really think.”

Okay. I think Windows is rotten to the core and always has been. Between Windows 3.1 and XP, there were no serious contenders. With Win2K and XP, it’s at least had the benefits of:

1) it being reasonably possible to hammer it into sufficient shape to be usable and secure “enough”.

2) running on significantly cheaper hardware.

3) being reasonably open for a closed-source product, and at least focused towards providing a good user experience, and aimed at the needs of the end user.

4) providing a mostly effortless hardware compatibility experience. Most of the things I’ve plugged into my XP box have simply worked, without too much trouble. Sure, I’ve had to install the driver, but there are a number of things where you have to do that with OSX, too.

5) having software exclusives, and existing in the world where virtualization/emulation on other platforms was at the end-user performance level of “barely usable, if you really need it”.

All of that seems to change with Vista and the fun 2007 world it inhabits:

#1 might have been good enough with XP, but I fail to see why none of those lessons have been learned, and we have to do it all over again with a new OS, especially one which otherwise seems to provide marginal benefits.

#2 the hardware requirements for Vista seem like simply an excuse to sell more hardware for overly bloated and inefficient software, because…

#3 they’ve totally sold out to the content industry and everything has been reoriented towards content protection, all of which eats hardware resources and diminishes usability, because of which…

#4 they broke the unified driver model and so we have to start all over again with hardware compatibility, and…

#5 now there are cheaper, better alternatives for running the same software, which actually seem to work this time around.

We’ve known this all along – Unix in any flavor is superior to Windows. We’ve finally reached the complexity point in operating systems where that difference is unmistakable even if you don’t have advanced degrees in Computer Science.

I’ve been a Windows user and defender for a very long time, because of the list of five advantages above. My primary desktop still runs XP. I expect that to be the case until I need to replace it, at which point I’ll probably get a Mac, for the same five reasons. Obviously, I haven’t hit all of the reasons, but this is a big chunk of why I have little interest in Vista. It’s the same reason I got tired of manually assigning SCSI ids to all of my disks. Tinkering is fun. Sometimes, tinkering is fun even when it’s mandatory and things don’t work unless you tinker. But after a while, you just want things to work.



It is time for the distinction between Mac software and PC software to go away

Filed under: — adam @ 6:21 pm

I’ve been thinking about the issue of Mac software vs. PC software a lot lately, particularly with the cross-platform beta and coming production release of Adobe CS3.

I’ve only been a recent convert to the Mac, and the thing that was holding me back was that certain software that I absolutely needed was not yet available on the Mac. Until recently, things I needed to do my job wouldn’t run on OS X, or wouldn’t run well, or would run perfectly well under Windows and OS X but would require me to buy another license (and a full price non-upgrade license at that) to run what was essentially the same software as I was running under Windows.

But with the conversion of the Macs to Intel chips and the consequent advent of Parallels (and eventually VMWare Fusion, which is not yet ready for prime time in my limited tryout), this distinction essentially evaporated. I could run all of the great software I wanted natively for Mac, and anything else that wasn’t available or would cost extra for the Mac version I could run under XP on Parallels. Since then, I haven’t bought any new Windows machines. Virtualization technologies existed before, of course, but the difference this time around is that Parallels works.

And now, Adobe, I’m looking squarely at you. Your license permits me to run a copy of CS2 on my desktop (which is still Windows), and one on my laptop (which is OS X). I’m not going to buy another full $1000 copy of CS2 for the Mac, so the question now is this – the license permits me to run it on my laptop, so why are you making me run it under Parallels? You’re letting me preview the beta version of CS3 on the Mac, but now you’re just teasing me, since you’ve said that there won’t be a cross-platform license available for the full version. When CS3 comes out, I’ll have no option but to buy the Windows version. Notwithstanding the fact that I already own the Windows version, that’s the only option that will let me run it on both my desktop and my laptop, there being no way to run OS X in a virtual machine. But that’s a degraded user experience for me, for no gain for you.

So why are we still dealing with this inconvenient fiction?

Here’s my call to arms to all software developers: where you’re making a Mac and Windows version of the same software available and currently require two separate licenses, collapse and simplify. Don’t make me run the Windows version under Parallels. It just makes me love you less, and the extra love goes to Parallels instead. I want to love you more.



Gorillapod – yes!

Filed under: — adam @ 9:11 am

I’ve been continually unhappy with all of the ultraportable tripods I’ve bought. They’re too heavy, not flexible enough, take too long to set up, and the smaller ones won’t support my big camera. The gorillapod fixes all of that. It’s incredibly light, totally portable, and even sufficiently adjustable to wrap around small objects (benches, railings, bike frame, etc…). It is, in short, the best portable tripod I’ve ever seen.

It comes in three sizes:

  • Regular (digicams and flashes)
  • DSLR (no zoom)
  • DSLR-Zoom

I got the DSLR-Zoom, which supports up to 6 lbs., for my big camera, and the regular size for my little pocket cam (which is more portable). I’m a big fan of Canon’s wireless flash system, so this also seems like a great way to mount a remote flash in an inconspicuous location.




Dyson Root 6 is a bit of a marketing disaster

Filed under: — adam @ 11:21 am

So… wow.

I have a Dyson upright vacuum, and it is quite simply far far better than any other vacuum cleaner I’ve ever owned. I bought the newly released Dyson Root 6, the handheld model.

The only handheld that doesn’t lose suction… while it has charge.

It’s outstandingly good from a cleaning perspective – it does actually work very very well. But what they don’t tell you is that while the battery does charge faster than others (3.5 hours), it only lasts for 5 minutes on a charge. As a result, it’s really only good for spot cleaning, and not as a general purpose dusting vacuum, which means it misses an entire big use case of a handheld vacuum – carrying it around while cleaning the house to use for dusting shelves, surfaces, ledges, nooks, crannies, etc…. When I did this, I very quickly found that I had a completely dead battery, and I had to charge it again for 3.5 hours before being able to use it again.

What’s happened here is that, like Apple, Dyson has decided that they’re going to focus on one usage pattern (keep the vac in the charger and pull it out occasionally for spills and then put it right back in the charger) and optimize that pattern, completely ignoring any other possible uses that the customer might want to put the device to. Unfortunately, in this case, I think they’re going to be hard pressed to find many people willing to shell out $150 just for spot cleaning. Because of the real-world mechanics of lithium-ion batteries, the expected usage pattern of the vac (keep it in the charger most of the time so it’s always ready for short bursts) is at odds with the strategy for maximizing the life of the battery (drain the battery completely, then recharge fully before using again), and in a year, the effective run time will be 2.5 minutes, not 5. The value proposition would be a lot better if they included a spare battery or two that you could leave in the charger and swap out with the dead one, so you could at least rotate them and have some expectation of having a live one if you’re actually using the thing. Arguably, it has advantages over, say, a dustbuster, but at at least 3-5 times the cost for less than half of the usage pattern, I’m not sure it’s worth it.

I might have been more receptive to this idea if they’d said outright – “look, we made it work for 5 minutes, but for those 5 minutes, it’ll work much better than any other handheld vac”. But they didn’t. They completely glossed over this glaring design failure, and it’s kind of a surprise. Judging from the tone of voice of the customer service tech I called to find out if this was normal, they’ve been getting this question a lot, and it sounds like they’re a bit insulted that people would harp on something that they don’t consider to be a failure while overlooking the substantial advantages that they have produced. It’s almost a case study in misunderstanding the requirements of your audience. A 5 minute battery life is not an acceptable feature for a handheld vac, and if there’s a good reason why it should be, Dyson should have made some effort to educate people instead of just throwing it out there and letting people figure it out for themselves. I suspect that there isn’t, and this is just a design flaw that they haven’t been able to fix and one they’re trying to ignore. The users of the device, unfortunately, aren’t granted such a luxury, and the failings of it are far more evident than the successes.

That said, it’s certainly an open question about whether to return it or not, because those five minutes definitely suck as much as they should.



Ramblings of a Switcher

Filed under: — adam @ 12:24 pm

Having moved my music and my primary laptop over to Apple machines in the past six months, there’s a lot to like, but also a lot to hate.

There are certain pieces of software that are Mac-only that I really prefer to anything available on Windows. TextMate stands out for development – while it’s not perfect, I can’t imagine doing rails coding without it anymore. Delicious Library has proven to be immensely useful for keeping track of what storage boxes I put things in when they’re rotated out to the storage space, a function I didn’t even really realize was missing until I had it. Dashboard works FAR better than anything equivalent on Windows.

On the interface side, while there are some improvements, many things are different for no apparent reason, without actually being better. This doesn’t really bother me, but it did take a little getting used to.

But what really gets me is that there are a bunch of things that are just wrong, for no apparent reason. They’d be easy to fix, but someone made an active decision that the platform was going to behave this way, and yes, I think they’re outright wrong. Some of these are problems with Apple software, some of them just problems with the general paradigm encouraged by Apple, and some problems with the specific pieces of software I’ve chosen (but which seem to be very popular in the Mac community).

  1. There are a number of general interface oddities that make no sense. Why must windows only be resized from the bottom right corner? Why can’t I universally maximize windows? There’s that little green button on the interface. Who knows what it will do? Sometimes, it will maximize the current window to be full screen-ish, but just as often it does something completely useless. A particular failure of this function for which I blame Apple directly is what happens when you press this button when viewing PDF files in Preview. When reading a PDF file, I almost always want to, you know, be able to read the text on the page. The only way to do that is often to have the file fill the whole width of the screen, so the letters are large enough to be legible. There’s manual zoom in Preview, but no way to make the page fill the width of the screen. This makes reading documents in Preview unnecessarily frustrating. Hearing Apple apologists try to rationalize this away is amusing. “Oh, the Mac OS is based around the concept of having multiple windows open at once, so there’s no reason to maximize a window.” Uh, sure. Oh, I forgot, if Apple decides that it wasn’t important, I’m missing the point if I want it.
  2. There’s far too much clicking and insufficient use of keyboard shortcuts. Just about every piece of Mac software I’ve used suffers from this, but some are worse than others. For example, Omnigraffle – generally not a bad interface (although I have a list of other things that are specifically wrong with it), but there’s no way to edit the text of an item without double clicking on it. To add insult to injury, this function is even listed under the Keyboard Shortcuts section of the help.
  3. Don’t even get me started on the Finder.
  4. There’s plenty wrong with iTunes. Why is there no “currently playing” playlist? When you select an album and play it, then go look at another album, then jump to the next track, iTunes stops instead of playing the next song in the album you were listening to. There does not appear to be any way to play an entire album in the background without first making a playlist out of it. Which brings me to….
  5. iTunes management of external music folders is completely broken. There’s no way to synchronize the iTunes library with an external music source folder. If the folder is on a network drive and the network goes away for some reason, iTunes “loses” all of those tracks – they’re still listed, but they can’t be found until they’re individually played, one by one. Adding the external folder again causes all of these “missing” tracks to be doubled, and the only way to clear that out is to dump the entire library and re-add it, which also throws away all of the static playlists. iTunes, inexplicably, gives me the option to display duplicate tracks, but mysteriously no way to remove them automatically. That really helps when you’re dealing with thousands of tracks. Yes, I tried the Remove Duplicates AppleScript. No, it didn’t work.
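For what it’s worth, the cleanup I’m asking for isn’t hard. Here’s a rough sketch of what automatic duplicate removal could look like – the track fields are invented for illustration, not iTunes’ actual data model:

```python
# Hypothetical duplicate cleanup: keep the first copy of each
# (artist, album, name) triple and drop the re-added duplicates.
def dedupe_tracks(tracks):
    seen = set()
    kept = []
    for track in tracks:
        key = (track["artist"].lower(), track["album"].lower(), track["name"].lower())
        if key not in seen:
            seen.add(key)
            kept.append(track)
    return kept

library = [
    {"artist": "A", "album": "X", "name": "Song 1"},
    {"artist": "A", "album": "X", "name": "Song 1"},  # re-added after the network drive dropped
    {"artist": "A", "album": "X", "name": "Song 2"},
]
print(len(dedupe_tracks(library)))  # 2
```

Ten lines of logic, and it preserves playlist order by keeping the first occurrence. That this isn’t built in is the mystery.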

I complain, because I’d really like it to be better, and I’m surprised that it’s not. Don’t get me wrong – using the Mac is generally pretty pleasant. But these glaring flaws stick out like a sore thumb, and cast an avoidable and visceral pall over an otherwise happy experience.



Privacy is about access, not secrecy

There’s a very important point to be made here.

Privacy in the digital age is not necessarily about secrecy, it’s about access. The question is no longer whether someone can know a piece of information, but also how easy it is to find.

If you take a bunch of available information and aggregate it to make it easily accessible, that’s arguably a worse privacy violation than taking a secret piece of information and making it “public” but putting it where no one can find it (or where they have to go looking for it).

This is a very important distinction when you’re looking at corporate log gathering and data harvesting. Sure – your IP address or your phone number may be “public information”, but it’s still a privacy violation when it’s put in a big database with a bunch of other information about you and given to someone.
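To make the aggregation point concrete, here’s a toy sketch (all names and data invented): three datasets that are each individually “public” and harmless, joined on a shared key into a profile that none of the sources exposed on its own.

```python
# Each source alone is "public information"; the join is the violation.
phone_book = {"alice": "555-0100"}
server_logs = {"alice": "203.0.113.7"}
purchases = {"alice": ["umbrella", "train ticket"]}

def build_profile(person):
    # One cheap lookup turns three scattered facts into a dossier.
    return {
        "phone": phone_book.get(person),
        "ip": server_logs.get(person),
        "purchases": purchases.get(person, []),
    }

print(build_profile("alice"))
```

The cost of assembling that dossier used to be a trip to three different records offices. Now it’s one dictionary lookup, and that change in access is the privacy story.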



Google has your logs (and all it took was a fart lighting video)

The non-obvious side of Google’s purchase of YouTube: Google now has access to the hit logs of every page that a YouTube video appears on, including LOTS of pages that were probably previously inaccessible to them. MySpace pages were probably going to get Google ads anyway, because of the big deal that happened there, but many others weren’t.

Add this to AdSense, the Google Web Accelerator, Google Web Analytics, and Google Maps, and that’s a lot of data being collected about browsing habits, and the number of sites you can browse without sending some data to Google has just dropped significantly.
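The mechanism is mundane: when a browser fetches an embedded player, it typically sends a Referer header naming the page the player is embedded on, so the video host’s access logs map out third-party pages all over the web. Here’s a toy parse of an invented log line (the format mimics a common web-server log; none of the values are real):

```python
# Invented access-log line for an embedded video request. The quoted
# Referer field tells the video host which page the viewer was reading.
import re

log_line = ('10.0.0.1 - - [12/Oct/2006:10:00:00 +0000] '
            '"GET /player.swf?video=abc123 HTTP/1.1" 200 512 '
            '"http://example-blog.test/some-post" "Mozilla/5.0"')

match = re.search(r'"GET ([^ ]+)[^"]*" \d+ \d+ "([^"]*)"', log_line)
resource, referer = match.groups()
print(resource)  # /player.swf?video=abc123
print(referer)   # http://example-blog.test/some-post
```

Multiply that by every page with an embedded video, and the reach of the logs becomes clear.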




Informal comparison of organic ketchups

Filed under: — adam @ 3:33 pm

I don’t really enjoy the taste of high fructose corn syrup, which seems to have worked its way into all kinds of places. The only kinds of ketchup that I’ve been able to find that are made with sugar instead are all organic, and I’ve tasted a bunch of them.

Here’s an informal summary of my findings:

  • Heinz Organic ($2.49/15 oz = $.17/oz) : Tasty. Almost exactly like Heinz ketchup, but without the HFCS twang. But even at this reduced price from Amazon Grocery (it was about $1 more for the same size bottle at my local supermarket), it’s the most expensive of the choices. Not worth the extra money.
  • Tree of Life Organic ($4.69/36 oz = $.13/oz) : Very good, but a little fruitier than I like. Still full bodied, and a perfectly acceptable choice. Sort of like getting Hunts if you like Heinz.
  • 365 Organic – Whole Foods ($1.89/24 oz = $.08/oz) : This was my favorite of the four, and also the cheapest. Very well balanced, good acidity. Tastes like Heinz, for the most part, but with a brighter, more persistent flavor.
  • Annie’s Organic ($2.79/24 oz = $.12/oz) : Not good. Very reminiscent of tomato paste, and too thick.
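The per-ounce figures above are easy to recheck (prices as listed):

```python
# Unit prices for the four ketchups, cheapest first.
ketchups = {
    "Heinz Organic": (2.49, 15),
    "Tree of Life Organic": (4.69, 36),
    "365 Organic": (1.89, 24),
    "Annie's Organic": (2.79, 24),
}
for name, (price, ounces) in sorted(ketchups.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: ${price / ounces:.2f}/oz")
```

The sort confirms the happy coincidence: the best-tasting one (365 Organic) is also the cheapest per ounce.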




Putting Comments Out of Our Misery.

Dante: You hate people!
Randal: But, I love gatherings, isn’t it ironic?

I hate comments. But I love conversations. As I peruse the web, I find myself (as many of us do) drawn to leave comments across the pages that other people have written. But it’s an incomplete puzzle – a comment as it exists now is an endpoint. It may lead to something else, but it’s up to someone else to figure out what that thing may be, or even if that evolution will happen at all. Comments tend to follow one of two patterns, neither of them productive:

  1. The comment thread trails off as people lose interest, and nothing really comes of it.
  2. The comment thread gets so long that it’s impossible to follow, things get repeated, and the people commenting on the last page aren’t really talking to the people on the first page. Nothing really comes of it.

The process isn’t helping us out here. We haven’t even gotten into vanity comments, flame wars, or any of that stuff that’s detrimental.

Working on ORGware, we’re revamping comments. We’re starting with two major changes, and there will be others. The first big change is that every comment you leave on someone else’s post also gets posted on your own blog, and it will have to be positively rated before it appears anywhere else. If you want to blather on about whatever, you’re free to do that, but you won’t be allowed to join the discussion unless some threshold of other people think you have something useful to say. That’s a relatively minor one, but it’s important. It shifts the focus of the comment from the commenter to the discussion, and it makes it possible for the community to weed out (passively, by ignoring) the irrelevant wanderings.
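Mechanically, the gate is simple – something like this sketch (the threshold value and field names are placeholders, not ORGware’s actual design):

```python
# A comment publishes to the commenter's own blog immediately, but only
# joins the shared discussion once enough positive ratings accumulate.
THRESHOLD = 3  # assumed value for illustration

class Comment:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.positive_ratings = 0
        self.on_own_blog = True  # always published at home, right away

    def rate_up(self):
        self.positive_ratings += 1

    @property
    def in_discussion(self):
        return self.positive_ratings >= THRESHOLD

c = Comment("adam", "A useful point.")
print(c.in_discussion)  # False
for _ in range(3):
    c.rate_up()
print(c.in_discussion)  # True
```

The point of the design isn’t the bookkeeping, it’s the incentive shift: nothing is censored, but attention has to be earned.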

The second change is far more interesting, and it deals with how the comment thread metamorphoses into something else entirely – a discussion with usable output. Right now, you post, people comment, maybe people make followup posts on their own blogs… and if you want more than that, you have to do it yourself. We’re building in another step. Comments on their own, for any post that has an action output, are no longer an endpoint – they’re a stepping stone to writing that action output. Writing “good” comments (in the opinion of the original author and/or the community) gets you an invitation to help edit that output product, which can become a letter, or a fax, or an email, or even a followup post for more discussion. Britt has posted a good overview of the interface I designed for this, which we’re simply calling the comment editor now until we come up with a better term.

More to come…



I’m with Ebert

Filed under: — adam @ 1:44 pm

After that last debacle, we saw Superman Returns on Sunday, at a different theater (but also an AMC one, since they seem to have acquired almost all of the good Manhattan theaters), and our experience was ruined in an entirely different way. We went to the DLP showing, for ENHANCED PICTURE AND SOUND. The sound was great admittedly, but the projector was miscalibrated and about 2-3 stops too dark. Many scenes were missing shadow detail, and some were entirely black. When we complained, the people at the theater first said “there’s nothing wrong with it”, then “that’s how it’s supposed to be”, then “it can’t be calibrated on our end”, then finally “we’ve been complaining to the projector people and we have someone coming to look at it next week”.

WTF?!?! Why are you lying to me? Just come right out and say it’s broken, we fucked up, and give me my money back?!

Anyway, I now have six free tickets to AMC theaters. I’ll have to find something interesting to do with them, since I don’t envision wanting to go back to the theater anytime soon.

As for the movie itself, I was thoroughly underwhelmed. Mainly, I was pretty strongly appalled that they seemed to have not decided if this was a sequel or a reboot, and as a result many things about it were confused. If this is 5 years later, why does everyone appear 7 years younger? We’ve already done the “Lex Luthor does something diabolical to increase his real estate holdings” and the “Miss Tessmacher gets all upset when people are going to die and crosses Lex at the last minute” plot elements, and they simply feel repeated here without any significant evolution. Why is there no mention of the last time Superman simply disappeared for no apparent reason, in Superman II?
Other random comments:

  • I’m not going to comment on the physics, because that’s a losing battle.
  • Yes, once again, please read Man of Steel, Woman of Kleenex before making a movie like this.
  • On a DLP screen, you can see entirely too much of Brandon Routh’s makeup. In some closeup scenes, his face looks like it was added in after the fact with CGI.
  • Where’s all the rest of that great Kryptonian technology that Lex was going to use to defend his giant island?
  • Kate Bosworth was simply not the right choice for Lois Lane. James Marsden is not terribly compelling. The rest of the casting was pretty much on-target. Kevin Spacey was great, but should have toned down the tag lines a bit. Okay, a lot. Show me the money or something.
  • That kid should totally have had Batman Underoos.
  • My favorite scene was the one where the lights go back on and everyone else realizes that Lex has backed away from the pool.



Another nail in the theater experience coffin

Filed under: — adam @ 6:55 pm

I’ve just about had it with theaters.

We tried to go see the new Superman movie this evening. I bought tickets on Fandango a few weeks ago. We arrived at the theater about 45 minutes early, which would have been plenty of time, except that the machines for some reason couldn’t find my ticket. After being shunted around to three desks, I finally arrived at the Guest Services counter, where they told me I could just use my printed receipt (which I’d thoughtfully brought) as a ticket. Of course, by this time, it was only 25 minutes before the show, and the theater was already getting pretty packed.

There were plenty of empty seats, but they were all “saved”. Normally, I expect that a few seats will be saved. Maybe even half. But we’re talking several rows of more than 12 seats. Saved. I approached a manager who seemed to be guarding them, who simply told me that they were saved. He “informed” me that there were plenty of places where we could get two seats together, and he couldn’t release any of the seats. I asked him where, and he pointed out two of them. I went to check it out. Saved. I went back and told him that, and he pointed out two more. Saved.

Saved, saved, saved.

Sorry, AMC IMAX theater, but no. Just no. I came expecting some competition for seats, and I arrived early. But I didn’t expect to be denied seats by your staff for actually being there, and told that I was just shit out of luck. For as long as I’ve been going to the movies, there have always been rules about general seating. One of them is that you can’t save more than two seats, three tops. But twelve?

I got a refund and was given two free additional tickets, but I still feel shafted. After all of the complaining about how people aren’t going to the movies anymore, the theaters should be falling over themselves to welcome this kind of excitement.

I wanted to go to the movies to have some kind of shared experience, and instead I encountered a complete lack of any hospitality whatsoever. To be honest, I’m still kind of confused by the whole situation. I don’t know if I encountered some kind of special VIP situation, or just incompetence. But I do know that my time was wasted in going to the theater and going through all this, and the whole thing was pretty frustrating and unpleasant. I suppose it’s naive of me to expect them to recognize that their business lies in providing pleasant experiences.


Addressing the lamentations of the local

Filed under: — adam @ 9:27 am

Meg says it’s too expensive to shop locally:

I have some responses to this.

1) The Union Square greenmarket is, in my experience, significantly more expensive than the other satellite markets throughout the city. There are a few possible reasons for this – it might draw the more expensive farms which sell different but slightly more expensive varietals of the same produce, the thriving restaurant business in the area could be a factor, or it could just be fame. What I do know is that everyone I know who shops at the USGM says “hey, this stuff is really expensive” far more than the people who don’t. I’d suggest doing some comparison shopping at other markets.

2) There are regional variations in the growing season and only the most prime produce will be at better prices. The berry season has barely started here in NY, so they’re more expensive. But lettuce, greens, beans, and cucumbers are all MUCH cheaper at my green market than the supermarket, and much higher quality. You’ve got to pick your battles. One exception I’ve found to this has been tomatoes. Local tomatoes are outrageously expensive compared to shipped tomatoes. But on the other hand, they’re incomparable, because tomatoes were not meant to be shipped. They are completely different beasts. $3/lb for local tomatoes is an indulgence I’ll gladly pay to consume what I consider to be among the most pleasurable culinary experiences we have available to us. The depth of flavor and delicate texture in a local tomato is simply something you can’t get for any price nonlocally, because what it must go through to survive shipping destroys its unique characteristics. I feel the same way about Ronnybrook Farms milk. It’s pretty expensive compared to other milks, but that’s only if you assume that because they have the same name that they’re somehow the same product. They’re not.

3) There’s a lot to be said about the freshness and fridge life of fruits and vegetables purchased locally. If you’re actually going to eat it in a day or two, the quality will likely be unmatched by anything you can find even at Whole Foods. On top of that, a head of lettuce purchased at Whole Foods will last maybe 3-5 days in the fridge before it starts to wilt, but I’ve eaten lettuce purchased two weeks prior from the farmer’s market, and it’s always still crisp and green.



    Collected thoughts on the futility of online communities

    This is a long post collecting comments and thoughts from some emails and conversations with Britt Blaser, Doc Searls, and others. Some of this is from external impressions of the Dean campaign (I wasn’t involved, and I haven’t found a good postmortem), but also about my own participation in online communities and the lack of incentive that I often feel to do so.

    There is a huge untapped market for community software. There’s a lot of “community software” out there, and it all fails on the same key point – it’s all centered on the software itself (or more specifically, the website experience), and fundamentally, communities don’t happen in discussion groups or impersonal online participation. People come to a community like dailykos or metafilter or whatever, and they “join” the community, but those ties are fragile, and the experience of most participants is that they almost never extend to anything beyond participating in the online community itself. If you suddenly disappear, no one will come looking for you. This is not the same as an actual community.

    Reading isn’t participation in a community. Writing to the public isn’t participation in a community, and the fatal flaw of the existing approach is that the underlying assumption is that the collective act of reading and writing is equal to participation. This is especially misleading if the online community is supposed to be mirroring some sort of participation in the real world, like political involvement.

    The end result is exactly what we saw with the Dean campaign, as perceived by an outsider. Lots of “participation”, lots of “involvement”, but everybody sat around reading and writing and thinking that they were somehow involved, but when it came down to it, no one got up to vote.

    Now, actually, there’s a corollary problem here, which is that the online community itself, while very vocal, was also VERY bad at doing anything to engage anyone outside of the online community, because they spent all of their time reading and writing, and those activities, even as they fail to engage those inside the online community to action, COMPLETELY fail to engage anyone outside the online community.

    As I wrote the above, the universe graciously provided a perfect example to illustrate my point: an article about the futility of discussing things online, which has somehow accumulated an inordinate number of comments.

    I’ll pause for a moment while that sinks in.

    So, we have some problems to fix. Participation in the online community needs to have the following properties:

    1) It should be centered around activity that breaks out of the online community. This needn’t actually be physical meetings, although those are also good, but all actions must be classified as “inward” (aimed towards engaging with others in the online community) or “outward” (aimed towards engaging with others outside the online community). EVERY inward action must have a corresponding outward action. If it doesn’t, there’s already a name for this – it’s called “preaching to the choir”, and it’s the death of activism.

    2) It should allow and encourage those inside the online community to engage with each other temporarily to reinforce the commitments of those who are already involved, but all such actions should be considered subsidiary to engaging with others outside the online community. Think of this as the difference between vegetables (outward) and chocolate (inward). A little bit of the latter is very rewarding and tastes good, but if that’s all you eat, you get fat and die.

    3) It should allow those in the online community to evolve internally the mechanisms for accomplishing goals outside the online community. This may involve consensus building, electing representatives inside the online community, collaborative letter writing, legislation hashing, and so on.

    4) It must have a mechanism for elimination of cruft. Old ideas, bad ideas, unpopular ideas, and irrelevant ideas are all barriers to entry. The online community must be able to decide on what the salient points are, and delete the rest. I’ve had it with relativistic egalitarianism. There is such a thing as a bad idea, and they’re distracting and harmful. We need to create a marketplace where all ideas have an equal opportunity to flourish, but if they don’t, then let’s be done with them. Archive the discussion for posterity, and clear it out of the center of attention.

    It’s not enough to talk, communities must be a driver for action.



    The motivations of wiretapping

    Boingboing points out this Wired article about a reporter who crashed a conference of wiretapping providers, mentioning this quotation in particular:

    ‘He sneered again. “Do you think for a minute that Bush would let legal issues stop him from doing surveillance? He’s got to prevent a terrorist attack that everyone knows is coming. He’ll do absolutely anything he thinks is going to work. And so would you. So why are you bothering these guys?”‘

    It’s an interesting read, but I fundamentally disagree with the above statement, and this is the problem.

    It’s not the surveillance that bothers me, it’s the resistance to oversight, even after the fact.

    If there was any confidence that what they were doing was a reasonable tradeoff, they wouldn’t have to a) lie or b) break the law to do it. Yet they’ve done both of these things.

    If the law enforcement community said “well shit, we’re out of ideas about how to stop these people, and so we really need to have our computers read everyone’s email and tap everyone’s phones and we guarantee that this information won’t be used for anything else, and anyone we find doing something nefarious will be dealt with according to due process”, then we could, you know, engage in a meaningful discussion about this. And then we could move on to the fact that “terrorist” is not a useful designation for a criminal, and then maybe we could fire the people who thought up this brilliant idea and find someone who would practice actual security because wholesale surveillance and profiling have been widely debunked as largely useless for anything besides persecution, political attacks, and invasions of privacy.

    But we won’t, because that’s not what this is about.

    This opinion of a member of the Dutch National Police is particularly telling:

    ‘He said that in the Netherlands, communications intercept capabilities are advanced and well established, and yet, in practice, less problematic than in many other countries. “Our legal system is more transparent,” he said, “so we can do what we need to do without controversy. Transparency makes law enforcement easier, not more difficult.”’

    The technology exists, it’s not going away, and it’s really not the problem. The secrecy is the problem.



    Musings on Consumer Content Experience (or sometimes, maybe you need a brand)

    Filed under: — adam @ 3:00 pm

    Doc Searls gave an interesting closing keynote talk on the Live Web at the Syndicate conference yesterday. He started with search engines and how they index the static web, but they’re also branching off into indexing the live web via blog search and RSS (not sure I agree, but more on that later). From there, he drew further dichotomies between marketing and participation/demand, and publishing as a finished product and blogging as a provisional conversation. All of this centers around his assertion that the Live Web is (or will be) a dynamic expression of the demand side of the equation fulfilling its own needs. Instead of a value chain, you get a value constellation, where each star participates in the network, and in between is freedom. I like that metaphor, and it flowed right into his main point that the Live Web economy consists of two halves – the attention economy and the intention economy. In the Live Web, consumers not only command where they look (attention), but are also in control when they’re ready to buy (intention).

    The intention economy hasn’t really arrived. As a customer (no longer “consumer”), when you’ve decided what you’re going to buy, you still have to go find someplace to buy it. In the intention economy, you should be able to announce your intention to buy, and companies who are selling will come looking for you. We’re getting closer to that – shopping comparison sites help, but they’re still static snapshots. What’s needed is a dynamic marketplace around these ideas. Incidentally, that’s why I don’t necessarily think that blog search is a marker of the Live Web – RSS feeds aren’t interactive. They’re push, to be sure, so you get more updated static information, but like the shopping comparison sites, they’re still just static snapshots. On the other hand, getting people used to having some automated process working in the background is a step in the right direction.

    The existence of branding is tied very closely into this. In a certain sense, a brand exists primarily to help make products seem better than they are, by associating them with other things that are known to be good. If you already know what you want to buy, maybe you’re past this point, and it’s more honest to do without. As a counter example, consider these two products, which are made by the same company and basically identical. One’s a piece of foam sex furniture for adults, and one’s a piece of foam gaming furniture for kids. Esse vs. Zerk. Same product, two very different uses. Brands serve to make the distinction. Does the fact that the same product has two different names for two different audiences make a difference? I’ll have to think about that one some more. Incidentally, if you switch the marketing copy on those two pages, it’s really funny.

    (Who wants to help me come up with a brand for my spool-fed bacon-wrapped CPU cooling scheme? You have to refresh the bacon every once in a while, but on the plus side, it’s tasty.)

    Tags: , , , , , , , , , ,


    In which I go all Top Chef on Craftsteak

    Filed under: — adam @ 8:44 am

We had the pleasure of dining yesterday evening at the newest jewel in the Colicchio empire – Craftsteak, the newly opened Manhattan outpost of artisan meat. Conventional wisdom says to avoid new restaurants, but I have tremendously enjoyed every restaurant I’ve visited in its first month – in many cases, those visits have even been preferable to subsequent excursions. The staff may not have hit their stride yet, but there’s something undeniably fresh about a new restaurant, and that adds a lot to the dining experience for me. Think Like a Chef is the book that got me interested in pursuing serious fine cooking, so I feel a special connection to Chef Colicchio’s places.

    The decor is fabulous, of course. The layout of the space has a good flow, with the main dining room separated from the bar and raw bar by a characteristic walk-in transparent wine cellar. The dining room is very open and has exquisitely high ceilings. Even at full capacity, the sound level was pleasant.

    And, on to the food.

    We started with three appetizers for the four of us – roasted veal sweetbreads, roasted foie gras, and wagyu beef tartare. I’m a big fan of sweetbreads, and these were among the best I’ve ever had, and a generous portion for an appetizer course. The foie gras was outstanding in flavor, although it was not completely cleaned of veins (despite, as Mayur noted, explicit instructions to do this in Think Like a Chef). The wagyu beef tartare was served with a quail egg and toast, and it was tasty, if not terribly impressive. We all felt that the presentation was too much like traditional beef tartare, and would have preferred a coarser cut usually reserved for fish tartare, to really highlight the exceptional texture of this fine meat.

    And now, the steaks.

The selection is large and detailed, from a few varieties of corn-fed Hereford beef, both wet and dry aged, through grass-fed Hawaiian beef, to the premium grade Wagyu beef (which tempted all of us, but which budgets demanded we resist). Surprisingly, the waiter was pushing everyone to get medium rare, but couldn’t really explain why beyond “that’s what the chef recommends”. Despite our mostly ignoring that advice and asking for more on the rare side, one of the steaks did arrive fully medium rare, and had to be sent back. We had a similar problem with the rabbit. It was actually a beautiful presentation, with the various pieces separated – leg, a mini rib rack, some “pulled” rabbit meat, and a tenderloin. This would have worked well, but the tenderloin was slightly underdone. However, once we got past those two problems, everything was great. I opted for the grass-fed filet mignon, and it was one of the best steaks I’ve ever had, and outstandingly prepared. It was uniformly and perfectly rare all the way through (about 2.5 inches thick), and impressively tender and flavorful. The other two steaks on the table – a 42-day dry aged strip and a grass-fed ribeye – were also superlative. As with the main Craft, sides are ordered and prepared separately. We opted for the more seasonal choices – roasted ramps, sugar snap peas, and baby carrots, and a pea and morel risotto. All of them were up to the usual standards.

    We paired with a moderately priced Qupe syrah, which was intensely berry-oriented, and matched well with everything.

    The desserts (pineapple upside down cake, a warm chocolate tart, and monkey bread – a cinnamon and nut encrusted brioche) were all acceptable, but the balance was off a bit on everything. A little too sweet, too salty, or just not quite right. The espresso was sub-par, disappointing and bitter. This wasn’t enough to really ruin the meal, but it wasn’t an impressive close, and it’s obvious that the most attention has been paid to the meat.

    Overall, I had a thoroughly enjoyable and delicious meal that very much worked for me despite the nitpicking flaws above, and the very exceptional quality of the steak is really the standout here, the gem that puts the shine on the whole thing.

    I see great potential.

    Tags: , , , , , ,


    Hidden dangers for consumers – Trojan Technologies

    I’ve been collecting examples of cases where there are hidden dangers facing consumers, cases where the information necessary to make an informed decision about a product isn’t obvious, or isn’t included in most of the dialogue about that product. Sometimes, this deals with hidden implications under the law, but sometimes it’s about non-obvious capabilities of technology.

    We’re increasingly entering situations where most customers simply can’t decide whether a certain product makes sense without lots of background knowledge about copyright law, evidence law, network effects, and so on. Things are complicated.

So far, I have come up with the examples below. They might seem unrelated, but there’s a common thread – they’re all bad for the end user in non-obvious ways. They all seem safe on the surface, and often, importantly, they seem just like other approaches that are actually better, but they’re carrying hidden payloads – call them “Trojan technologies”.

    To put it clearly, what I’m talking about are the cases where there are two different approaches to a technology, where the two are functionally equivalent and indistinguishable to the end user, but with vastly different implications for the various kinds of backend users or uses. Sometimes, the differences may not be evident until much later. In many circumstances, the differences may not ever materialize. But that doesn’t mean that they aren’t there.

    • Remote data storage. I wrote a previous post about this, and Kevin Bankston of the EFF has some great comments on it. Essentially, the problem is this. To the end user, it doesn’t matter where you store your files, and the value proposition looks like a tradeoff between having remote access to your own files and not being able to get at them easily because they’re on your desktop. But to a lawyer asking for those files, it makes a gigantic difference whether they’re under your direct control or not. On your home computer, a search warrant would be required to obtain them, but on a remote server, only a subpoena is needed.
    • The recent debit card exploit has shed some light on the obvious vulnerabilities in that system, and it’s basically the same case. To a consumer, using a debit card looks exactly the same as using a credit card. But the legal ramifications are very different, and their use is protected by different sets of laws. Credit card liability is typically geared in favor of the consumer – if your card is subject to fraud, there’s a maximum amount you’ll end up being liable for, and your account will be credited immediately, as you simply don’t owe the money you didn’t charge yourself. Using a debit card, the money is deducted from your account immediately, and you have to wait for the investigation to be completed before you get your refund. A lot of people recently discovered this the hard way. There’s a tremendous amount of good coverage of debit card fraud on the Consumerist blog.
    • The Goodmail system, being adopted by Yahoo and AOL, is a bit more innocuous on the surface, but it ties into the same question. On the face of it, it seems like not a terrible idea – charge senders for guaranteed delivery of email. But the very idea carries with it, outside of the normal dialogue, the implications of breaking network neutrality (the concept that all traffic gets equal treatment on the public internet), which extend into a huge debate raging within the networking community and the government over such things as VoIP systems, Google traffic, and all kinds of other issues. I’m not sure if this really qualifies in the same league as my other examples, but I wanted to mention it here anyway. There’s a goodmail/network neutrality overview discussion going on over on Brad Templeton’s blog.
    • DRM is sort of the most obvious. Consumers can’t tell what the hidden implications of DRM are. This is partly because those limitations are subject to change, and that in itself is a big part of the problem. The litany of complaints is long – DRM systems destroy fair use, they’re security risks, they make things complicated for the user. I’ve written a lot about DRM in the past year and a half.
    • 911 service on VoIP is my last big example, and one of the first ones that got me started down this path. This previous post, dealing with the differences between multiple kinds of services called “911 service” on different networks, is actually a good introduction to this whole problem. I ask again: does my grandmother really understand the distinction between a full-service 911 center and a “Public Safety Answering Point”? Should she have to, in order to get a phone where people will come when she dials 911?

I don’t have a good solution to this, beyond more education. This facet must be part of the consumer debate over new technologies and services. These differences are important. We need to start being aware, and asking the right questions. Not “what are we getting out of this new technology?”, but “what are we giving up?”.

    Tags: , , , , , , , , , ,


    How to be the boss

    Filed under: — adam @ 4:06 pm

    I was thinking about using this to kick off a business and technology blog I’m planning, but I just haven’t had the time to do the work necessary to launch it, and this was too good to not share (and a corollary rule is that when you’re the boss, you need to realize early that things aren’t going to work out and make alternate arrangements).

This is from an exchange with a client who has a problem which is, in my experience, common among small business leaders – they’re the bottleneck.

    “I don’t like being the bottleneck but I am on most projects and I can’t seem to break the trend”.

    The answer I gave him, and the answer I give you, is this:

    Stop doing other people’s work for them. Stop being the customer. You’re the bottleneck because you have the vision. When someone does some work, they’ll reach a point where they have to stop and check it with you, because you have the vision for what it should be. If they had the vision, they’d know if their work was right or not, but they’re not sure. And when that happens, sometimes, maybe even often, instead of helping to transfer the vision, you get involved in their work more deeply because it worries you that they don’t have the vision and that means you need to do more oversight. That makes you busier, and takes away the time you have to approve the other things that are waiting for your approval of the vision. Maybe you’ll even take over some of those things, “because it will be faster if I just do it”, which sucks even more of your time, which makes you more of a bottleneck. As the boss, concentrate on transferring the vision instead of doing work that other people can and should be doing. You won’t always be able to, but wherever you can, it will help. Focus on giving people a template to check their work against, and you’ll have to do less of it.

    This is not to say that you shouldn’t be involved, but when people bring work to you for approval, it goes a lot faster if they’re already confident that it’s right.

    Tags: , , ,


    This is what we mean by abuse of databases

    Okay, here it is, folks.

    When someone asks “what’s wrong with companies compiling huge databases of personal information?”, this is part of the answer:

    Someone signed up for a Miller Brewery contest using a throwaway email address, and they tracked her down and signed up her “real” email address. The second link above concludes that they did it by using information collected by Equifax’s direct mail division, Naviant (which was supposed to have been shut down years ago). They own the domain from which the email was sent.

    When we talk about privacy, it can mean a number of things. But indisputably, one of the definitions is “the right to be free from unauthorized intrusions”.

    Maybe this is a small thing, but it’s a terrible precedent.

    This person obviously didn’t want to be permanently signed up for messages from Miller. Letting an address expire is probably the ultimate form of “opt-out”. Yet, Miller thought it was okay to use personal information gleaned from who-knows-what sources to tie her to another email address, and send her more spam. Would they do the same thing if you changed your phone number to avoid telemarketers? What else is fair game?

    Tags: , , , , , ,


    Storing your files on Google’s server is not a good idea

    Filed under: — adam @ 2:29 pm

    I was going to write something long about this, but Kevin Bankston of the EFF has beaten me to it and put together pretty much everything I was going to say.

    Here’s the original piece:

    In response to a criticism on the IP list that this piece was too hard on Google, Kevin wrote the following, which I reproduce here verbatim with permission. I think that this does an excellent job of summing up how I feel about these privacy issues. I have nothing personally against Google, or any of the other companies that I often “pick on” in pointing out potential flaws. I do think that somewhere along the way in getting to where we are now, we have lost some important things in the areas of corporate responsibility and consumer protections, and technology has advanced to the point where it’s not even obvious what has been lost. The tough thing is that there are often tradeoffs with useful functionality, and it’s not clear what you’re giving up in order to make use of that potential new feature.

So, in this case – yeah, it’s great that you can search your files from more than one computer, but Google hasn’t warned you that doing so by their method, under the current law, exposes your private data to less rigorous protections from search by various parties than it would be if it were left on your own computer. To most people, it doesn’t make any difference where their files are stored. To a lawyer with a subpoena in hand, it does. These are important distinctions, and they’re not being made to the general public. I believe it is the responsibility of those who understand these risks to bring this dialogue to those who don’t. It’s a big part of why I write this blog.

    Kevin’s response:

    Thanks for your feedback. I’m sorry if you found our press release inappropriately hostile to Google, although I would say it was appropriately hostile–not to Google or its folk, but to the use of this product, which we do think poses a serious privacy risk.

Certainly, the ability to search across computers is a helpful thing, but considering that we are advocating against the use of this particular product for that purpose, I’m not sure why we would include such a (fairly obvious) proposition in the release. And as to tone, well, again, the goal was to warn people off of this product, and you’re not going to do that by using weak language. Certainly, we’re not out to personally or unfairly attack the people at Google. Indeed, we work with them on a variety of non-privacy issues (and sometimes privacy issues, too). But it’s our job to forcefully point out when they are marketing a product that we think is dangerous to consumers’ privacy, and dropping in little caveats about how clever Google’s engineers are or how useful their products can be is unnecessary and counterproductive to that purpose.

I think it’s clear from the PR that our biggest problem here is with the law. But we are also very unhappy with companies–including but not limited to Google–that design and encourage consumers to use products that, in combination with the current state of the law, are bad for user privacy. Google could have developed a Search Across Computers product that addressed these problems, either by not storing the data on Google servers (there are, and long have been, similar remote access tools that do not rely on third party storage), or by storing the data in encrypted form such that only the user could retrieve it (it is encrypted on Google’s servers now, but Google has the key).

However, both of those design options would be inconsistent with one of Google’s most common goals: amassing user data as grist for the ad-targeting mill (otherwise known, by Google, as “delivering the best possible service to you”). As mentioned in the PR, Google says it is not scanning the files for that purpose yet, but has not ruled it out, and the current privacy policy on its face would seem to allow it. And although I for one have no problem with consensual ad-scanning per se, which technically is not much different than spam-filtering in its invasiveness, I do have a very big problem with a product that by design makes ad-scanning possible at the cost of user privacy. This is the same reason EFF objected to Gmail: not because of the ad-scanning itself, but the fact that Google was encouraging users, in its press and by the design of the product, to never delete their emails even though the legal protections for those stored communications are significantly reduced with time.

    If Google wants to “not be evil” and continue to market products like this, which rely on or encourage storing masses of personal data with Google, it has a responsibility as an industry leader to publicly mobilize resources toward reforming the law and actively educating its users about the legal risks. Until the law is fixed, Google can and should be doing its best to design around the legal pitfalls, placing a premium on user privacy rather than on Google’s own access to user’s data. Unfortunately, rather than treating user privacy as a design priority and a lobbying goal, Google mostly seems to consider it a public relations issue. That being the case, it’s EFF’s job to counter their publicity, by forcefully warning the public of the risks and demanding that Google act as a responsible corporate citizen.

    Once again, another reason why you should be donating money to the EFF. Do it now.

    Tags: , , , ,


    Detailed survey of verbatim answers from AOL, MS, Yahoo, and Google about what details they store

    Declan McCullagh has compiled responses from AOL, Microsoft, Yahoo and Google on the following questions (two of which are nearly verbatim from my previous query, uncredited):

    So we’ve been working on a survey of search engines, and what data they keep and don’t keep. We asked Google, MSN, AOL, and Yahoo the same questions:

    - What information do you record about searches? Do you store IP addresses linked to search terms and types of searches (image vs. Web)?
    - Given a list of search terms, can you produce a list of people who searched for that term, identified by IP address and/or cookie value?
    - Have you ever been asked by an attorney in a civil suit to produce such a list of people? A prosecutor in a criminal case?
    - Given an IP address or cookie value, can you produce a list of the terms searched by the user of that IP address or cookie value?
    - Have you ever been asked by an attorney in a civil suit to produce such a list of search terms? A prosecutor in a criminal case?
    - Do you ever purge these data, or set an expiration date of, for instance, 2 years or 5 years?
    - Do you ever anticipate offering search engine users a way to delete that data?

    Tags: , , ,


    More specific Google tracking questions

    I asked two very specific questions in a conversation with John Battelle, and he’s received unequivocal answers from Google:

    1) “Given a list of search terms, can Google produce a list of people who searched for that term, identified by IP address and/or Google cookie value?”

    2) “Given an IP address or Google cookie value, can Google produce a list of the terms searched by the user of that IP address or cookie value?”

    The answer to both of them is “yes”.
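It’s worth seeing just how simple both of those lookups are once the logs exist. Here’s a minimal sketch – the log format and field names are mine, purely for illustration, not Google’s actual schema:

```python
# Hypothetical search-log entries: (ip_address, cookie, search_term).
# Any engine that stores these three fields together can answer both
# questions with a trivial filter over the log.
log = [
    ("1.2.3.4", "cookieA", "knitting"),
    ("1.2.3.4", "cookieA", "tax evasion"),
    ("5.6.7.8", "cookieB", "knitting"),
]

def who_searched(term, log):
    """Question 1: which users (by IP and cookie) searched for this term?"""
    return sorted({(ip, cookie) for ip, cookie, t in log if t == term})

def terms_for(ip_address, log):
    """Question 2: what terms did the user of this IP address search for?"""
    return sorted({t for ip, _, t in log if ip == ip_address})
```

The point isn’t that this code is what runs at any search engine – it’s that nothing more sophisticated than this is required, which is why “can you produce such a list?” was always going to be answered “yes” if the fields are logged at all.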

    Tags: , , ,

    Flickr pictures, web beacons, and a modest proposal

    As I noted in the comments of the previous post, I don’t have ads on the site, but I do have flickr pictures directly linked from my flickr account.

    It is conceivable to me that flickr pictures could qualify as “web beacons” under the Yahoo privacy policy, and thus be used for tracking purposes. Presumably, this was not the original intention of the flickr developers, but it’s certainly a possibility now that they’re owned by Yahoo. Are the access logs for the static flickr pictures available to Yahoo? Probably. Are they correlated with other sorts of usage information? It’s not clear. Presumably, flickr pictures are linked in places where standard Yahoo web beacons can’t go, because they’re not invited (like on this site, for example).

    I think my conclusion is that this is probably not a problem, but maybe it is. It and other sorts of distributed 3rd party tracking all have one thing in common:

    It’s called HTTP_REFERER.

Here’s how it works. When you request any old random web page that contains a 3rd party ad or an image or a javascript library or whatever, your browser fetches the embedded piece of content from the 3rd party. When it does that, it sends the URL of the page you visited as part of the request, in a field called the referer header (yes, it’s misspelled).

    So, every time you visit a web page:

    • You send the URL to the owner of the page. So far so good.
    • You send your IP address to the owner of the page. Not terrible in itself.
    • You send the URL of the page you visited to the owner of the 3rd party content. And this is where it starts to degrade a little.
    • You send your IP address to the owner of the 3rd party content. The owner of the 3rd party content may be able to set a cookie identifying you. Modern browsers are set by default to refuse 3rd party cookies. However, if that 3rd party has ever set a cookie on your browser before (say, if you hit their site directly), they can still read it. In any case, you can be identified in some incremental way.
    • The next time you visit another site with content from the same 3rd party, they can probably identify you again.
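To make those steps concrete, here’s a toy model of what a 3rd party content server accumulates across requests. The class and its log format are hypothetical, invented for illustration – real ad servers differ – but the three pieces of data per request (cookie, referer URL, IP address) are exactly what the bullets above describe:

```python
# Toy model of a 3rd-party tracker. Each time a browser fetches embedded
# content, the request carries the embedding page's URL (the referer
# header) and the browser's IP address; a cookie set by the tracker
# ties separate visits together into one browsing history.
class Tracker:
    def __init__(self):
        self.log = []  # (cookie, referer_url, ip_address)

    def serve(self, cookie, referer, ip):
        """Record what one embedded-content request reveals."""
        self.log.append((cookie, referer, ip))

    def pages_seen_by(self, cookie):
        """Reconstruct one user's browsing across unrelated sites."""
        return [ref for c, ref, _ in self.log if c == cookie]

# Two visits to two unrelated sites, both embedding the same tracker:
tracker = Tracker()
tracker.serve("user-42", "http://site-a.example/article", "1.2.3.4")
tracker.serve("user-42", "http://site-b.example/forum", "1.2.3.4")
```

Neither site-a nor site-b can see the other’s traffic, but the tracker they both embed can now list both pages for the same user – that’s the whole mechanism.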

    That referer URL is a significant key that ties a lot of browsing habits together.

There’s an important distinction to be made here. The referer header makes it possible for 3rd party sites to track your browsing, and it’s only one of many ways. Doing away with the referer header won’t prevent the sites running 3rd party tracking content from doing so. The owner of the site can always send the URL you’re looking at to the 3rd party as part of the request, even if your browser isn’t. However, what this does prevent is tracking without the consent of the owner of the site you’re looking at. Of all of the sites you’re looking at, actually. Judging from my admittedly limited conversations with site owners, there are a LOT of people out there who have no idea that their users can be tracked if they include 3rd party ads on their site, or flickr images, or whatever. (Again, not to say that their users are being tracked, but the possibility is there.)

Again, the site that includes the ad or image or whatever isn’t sending that information – your browser is, and this is a legacy of the early days of the web. Some browsers allow you to turn it off and not send any referer information. I’d argue that this should be off by default, because the disadvantages outweigh the benefits. I’m told that legitimate advertisers don’t rely on the referer header anyway, because it can be unreliable. If that’s true, that’s even less reason to keep it around.

    Suggestion number one was “Tracking information that’s linked to personally identifiable information should also be considered personally identifiable“.

    Perhaps suggestion two is “Let’s do away with the Referer header”. (Of course, this comes on the heels of a Google-employed Firefox developer adding more tracking features instead of taking them away.)

    Arguments for or against? Are there any good uses for this that are worth the potential for abuse?

    Tags: , , , , , ,


    What’s the big fuss about IP addresses?

    Given the recent fuss about the government asking for search terms and what qualifies as personally identifiable information, I want to explain why IP address logging is a big deal. This explanation is somewhat simplified to make the cases easier to understand without going into complete detail of all of the possible configurations, of which there are many. I think I’ve kept the important stuff without dwelling on the boundary cases, and be aware that your setup may differ somewhat. If you feel I’ve glossed over something important, please leave a comment.

    First, a brief discussion of what IP addresses are and how they work. Slightly simplified, every device that is connected to the Internet has a unique number that identifies it, and this number is called an IP address. Whenever you send any normal network traffic to any other computer on the network (request a web page, send an email, etc…), it is marked with your IP address.

    There are three standard cases to worry about:

    1. If you use dialup, your analog modem has an IP address. Remote computers see this IP address. (This case also applies if you’re using a data aircard, or using your cell phone as a modem.)
    2. If you have a DSL or cable connection, your DSL/cable modem has an IP address when it’s connected, and your computer has a separate internal IP address that it uses to only communicate with the DSL or cable modem, typically mediated by a home router. Remote computers see the IP address of the DSL/cable modem. (This case also applies if you’re using a mobile wifi hotspot.)
    3. If you’re directly connected to the internet via a network adapter, your network adapter has an IP address. Remote computers see this IP address.

    Sometimes, IP addresses are static, meaning they’re manually assigned and don’t change automatically unless someone changes them (typically, only for case #3). Often, they’re dynamic, which means they’re assigned automatically with a protocol called DHCP, which allows a new network connection to automatically pick up an IP address from an available pool. But just because they can change doesn’t mean they will change. Even dynamic IP addresses can remain the same for months or years at a time. (The servers you’re communicating with also have IP addresses, and they are typically static.)

    In order to see how an IP address may be personally identifiable information, there’s a critical question to ask – “where do IP addresses come from, and what information can they be correlated with?”.

    Depending on how you connect to the internet, your IP address may come from different places:

    • If you use dialup, your modem will get its IP address from the dialup ISP, with which you have an account. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and billing details are part of your account information. By recording the phone number you call from, they may be able to identify your physical location.
    • If you have a DSL or cable connection, your DSL/cable modem will get its IP address from the DSL/cable provider. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and physical location, and probably other information about you, are part of your account information.
    • If you’re using a public wifi access point, you’re probably using the IP address of the access point itself. If you had to log in to an account, your name and physical location, and probably other information about you, are part of your account information. If you’re using someone else’s open wifi point, you look like them to the rest of the internet. This case is an exception to the rest of the points outlined in this article.
    • If you’re directly connected to the internet via a network adapter, your network adapter will get its IP address from the network provider. In an office, this is typically the network administrator of the company. Your network administrator knows which computer has which IP address.

    None of this information is secret in the traditional sense. It is probably confidential business information, but in all cases, someone knows it, and the only thing keeping it from being further revealed is the willingness or lack thereof of the company or person who knows it.

While an IP address may not be enough to identify you personally, there are strong correlations of various degrees, and in most cases, those correlations are only one step away. By itself, an IP address is just a number. But it’s trivial to find out who is responsible for that address, and thus who to ask if you want to know who it’s been given out to. In some cases the logs are kept indefinitely, in others they’re destroyed on a regular basis – it’s entirely up to each individual organization.

    Up until now, I’ve only discussed the implications of having an IP address. The situation gets much much worse when you start using it. Because every bit of network traffic you use is marked with your IP address, it can be used to link all of those disparate transactions together.

    Despite these possible correlations, not one of the major search engines considers your IP address to be personally identifiable information. [Update: someone asked where I got this conclusion. It's from my reading of the Google, Yahoo, and MSN Search privacy policies. In all cases, they discuss server logs separately from the collection of personal information (although MSN Search does have it under the heading of "Collection of Your Personal Information", it's clearly a separate topic). If you have some reason to believe I've made a mistake, I'm all ears.] While this may technically be true if you take an IP address by itself, it is a highly disingenuous position to take when logs exist that link IP addresses with computers, physical locations, and account information… and from there with people. Not always, but often. The inability to link your IP address with you depends always on the relative secrecy of these logs, what information is gathered before you get access to your IP address, and what other information you give out while using it.

    Let’s bring one more piece into the puzzle. It’s the idea of a key. A key is a piece of data in common between two disparate data sources. Let’s say there’s one log which records which websites you visit, and it stores a log that only contains the URL of the website and your IP address. No personal information, right? But there’s another log somewhere that records your account information and the IP address that you happened to be using. Now, the IP address is a key into your account information, and bringing the two logs together allows the website list to be associated with your account information.
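The join itself is trivial once both logs exist. A minimal sketch, with both logs and their contents entirely made up for illustration:

```python
# Log 1: "anonymous" web-server log -- just URLs and IP addresses.
web_log = [
    ("1.2.3.4", "http://example.com/medical-condition"),
    ("5.6.7.8", "http://example.com/news"),
]

# Log 2: an ISP or merchant's account log, mapping the same IP
# addresses to real identities. Harmless-looking on its own.
account_log = {"1.2.3.4": "Jane Doe, 123 Main St"}

def join_on_ip(web_log, account_log):
    """Use the IP address as the key to attach names to page views."""
    return [(account_log[ip], url) for ip, url in web_log if ip in account_log]
```

Each log alone contains “no personal information”; one dictionary lookup per row turns the browsing log into a named browsing history. Any shared unique value – a cookie, an email address – works as the key the same way.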

    • Have you ever searched for your name? Your IP address is now a key to your name in a log somewhere.
    • Have you ever ordered a product on the internet and had it shipped to you? Your IP address is now a key to your home address in a log somewhere.
    • Have you ever viewed a web page with an ad in it served from an ad network? Both the operator of the web site and the operator of the ad network have your IP address in a log somewhere, as a key to the sites you visited.

    The list goes on, and it’s not limited to IP addresses. Any piece of unique data – IP addresses, cookie values, email addresses – can be used as a key.

Data mining is the act of taking a whole bunch of separate logs, or databases, and looking for the keys that tie information together into a comprehensive profile representing the correlations. To say that this information is definitely being mined, used for anything, stored, or even ever viewed would certainly be alarmist, and I don’t want to imply that it is. But the possibility is there. In many cases, these logs are being kept, and even if they’re not being used in that way now, the only thing really standing in the way is the inaction of those who have access to the pieces, or can get them.

    If the information is recorded somewhere, it can be used. This is a big problem.

    There are various ways to mask your IP address, but that’s not the whole scope of the problem, and it’s still very easy to leak personally identifiable information.

    I’ll start with one suggestion for how to begin to address this problem:

    Any key information associated with personally identifiable information must also be considered personally identifiable.

    [Update: I've put up a followup post to this one with an additional suggestion.]



    Google does keep cookie- and IP-correlated logs

    I asked John Battelle the question about whether Google keeps personally identifiable search log information, particularly search logs correlated with IP address. He asked Google PR, who confirmed that they do.

    From my comment there, ultimately, this is bad for users. If the information is kept, it’s available for request, abuse, or theft.


    Some evidence that Google does keep personally identifiable logs

    This article from Internet Week has Alan Eustace, VP of Engineering for Google, on the record talking about the My Search feature.

    “Anytime, you give up any information to anybody, you give up some privacy,” Eustace said.

    With “My Search,” however, information stored internally with Google is no different than the search data gathered through its Google .com search engine, Eustace said.

    “This product itself does not have a significant impact on the information that is available to legitimate law enforcement agencies doing their job,” Eustace said.

    This seems pretty conclusive to me – signing up for saved searches doesn’t (or didn’t, in April 2005) change the way the search data is stored internally.


    (This was pointed out to me by Ray Everett-Church in the comments of the previous post, and covered on his blog.)



    Does Google keep logs of personal data?

    The question is this – is there any evidence that Google is keeping logs of personally identifiable search history for users who have not logged in and for logged-in users who have not signed up for search history? What about personal data collected from Gmail, and Google Groups, and Google Desktop? Aggregated with search? Kept personally identifiably? (Note: For the purposes of this conversation, even though Google does not consider your IP address to be personally identifiable, at least according to their privacy policy, I do.)

    It is not arguable that they could keep those logs, but I think every analysis I’ve seen is simply repeating the assumption that they do, based on the fact that they could.

    Has there ever been a hard assertion, by someone who’s in a position to know, that these logs do in fact exist?

    I have a suspicion about one possible source of all this. Google’s privacy policy used to say (amended 7/2004):

    “Google notes and saves [emphasis mine] information such as time of day, browser type, browser language, and IP address with each query.”

    But the policy no longer says that. The current version reads: “When you use Google services, our servers automatically record information that your browser sends whenever you visit a website. These server logs may include information such as your web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.” Again, no information about what’s being done with that data or how long it’s kept.
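    That policy language maps directly onto an ordinary web server access log. Here's a sketch, in Python, of pulling those same fields out of a single fabricated log line in the common Apache "combined" format (the IP, timestamp, and query are all invented):

```python
import re

# One fabricated line in Apache "combined" log format -- the kind of
# record the policy describes: IP address, timestamp, the full request
# (including any search terms), and the browser's user-agent string.
line = ('203.0.113.7 - - [19/Jan/2006:14:02:31 -0500] '
        '"GET /search?q=my+own+name HTTP/1.1" 200 5120 '
        '"-" "Mozilla/5.0 (Windows NT 5.1)"')

pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

record = pattern.match(line).groupdict()
# record["ip"] holds the IP address; record["request"] holds the full
# request line, search terms and all.
```

    Every field the privacy policy mentions is sitting right there in one line of a standard log file.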

    Given the possibility that they don’t, I think it drastically changes the value proposition of those free subsidiary tools. Obviously, if you ask for your search history to be saved, they’re going to keep it. But maybe that decision is predicated on the assumption that they’re going to keep it anyway, and you might as well have access to it. If the answer is that they’re not keeping it, that’s a different question.

    It’s critical to point out that these issues are not even close to limited to Google. Every search engine, every “free” service you give your data to, every hub of aggregated data on the web has the same problems.

    Currently, there’s no way to make an informed decision, because privacy policies don’t include specific information about what data is kept, in what form, and for how long. With all of the disclosures in the past year of personal data lost, compromised, and requested, isn’t it time for us to know? In the beginning of the web, having a privacy policy at all was unheard of, but now everybody has one. I don’t think it’s too much to ask of the companies we do business with that the same be done with log retention policies.

    I agree with the request to ask Google to delete those logs if they’re keeping them, but I haven’t seen any evidence that they are. Personally, I’d like to know.



    More thoughts on Google

    Having examined the motion and letters, I see a different picture emerging.

    I am not a lawyer, but from my reading of the motion, it appears that Google’s objections are thin. Really thin.
    Also, they seem to have been completely addressed by the scaling back of the DOJ requests. Of course, that’s not the complete story, but if the arguments in the motion are correct, it seems to me that Google will lose and be compelled to comply.

    Based on the letters and other analysis, they’re also pulling the slippery slope defense – “we’re not going to comply with this because it will give you the expectation that we’re open for business and next time you can ask for personal information”. If that’s true, I think that’s the first good news I’ve heard out of them in years. Good luck with that.

    Google’s own behavior is inconsistent with their privacy FAQ, which states: “Google does comply with valid legal process, such as search warrants, court orders, or subpoenas seeking personal information. These same processes apply to all law-abiding companies. As has always been the case, the primary protections you have against intrusions by the government are the laws that apply to where you live.” (Interestingly, this language is inconsistent with their full privacy policy, which states that Google only shares personal information “… [when] We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request.”)

    I wonder if they intend to challenge the validity of the fishing expedition itself, which would be the real kicker (and probably invalidate the above paragraph). I also idly wonder if they expect to lose anyway and have simply refused to comply with bogus arguments in order to get the request entered into the public record.

    Interesting stuff. A lot of my criticisms of Google are about their unwillingness to publicly state their intentions with respect to the data they get (and the extent to which they may or may not be retaining, aggregating, and correlating that data), and I don’t think this case is any different. I think Google’s interest here in not releasing records is aligned with the public good, and as such, I wish them well. It’s been asserted that Google has taken extraordinary steps to preserve the anonymity of its records, and that may well be true. It’s also kind of irrelevant. Beyond this specific case, of whether the government can request information about Google searches (let alone any of their more invasive services, or anyone’s more invasive services), lies the issue of the ramifications of collecting, aggregating, and correlating this data in the first place.

    There is no question that Google has access to a tremendous amount of data on everyone who interacts with its services. It is still troubling that its privacy policy is inadequate. It’s still troubling that Google (and Yahoo, and how many others) consider your IP address not to be personally identifiable information. It’s still troubling that Google (and Yahoo, and how many others) conduct all of their transactions unencrypted, with the search terms included in the URL of the request. As this case has shown, Google’s actual behavior may not match their stated intentions, of which there are few in the first place. By Google’s own slippery slope logic, this time it works for you – will it next time?
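    The point about search terms traveling in the URL is easy to demonstrate. Here's a small sketch using Python's standard library against an illustrative search URL (the query string is made up); anything that logs the URL has logged the search:

```python
from urllib.parse import urlparse, parse_qs

# An illustrative search URL of the kind described above: the query
# travels in plain sight as part of the request itself, so any log
# that records URLs records the search terms too.
url = "http://www.google.com/search?q=adam+fields&hl=en"

query = parse_qs(urlparse(url).query)
terms = query["q"][0]  # the decoded search terms: "adam fields"
```

    Because the request is unencrypted HTTP, the same terms are visible not just to the search engine but to every intermediary that can see the traffic.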

    Perhaps it’s time to hold companies accountable for the records they keep.


    DOJ demands large chunk of Google data

    The Bush administration on Wednesday asked a federal judge to order Google to turn over a broad range of material from its closely guarded databases.

    The move is part of a government effort to revive an Internet child protection law struck down two years ago by the U.S. Supreme Court. The law was meant to punish online pornography sites that make their content accessible to minors. The government contends it needs the Google data to determine how often pornography shows up in online searches.

    In court papers filed in U.S. District Court in San Jose, Justice Department lawyers revealed that Google has refused to comply with a subpoena issued last year for the records, which include a request for 1 million random Web addresses and records of all Google searches from any one-week period.

    I’m sort of out of analysis about why this is bad, because I’ve said it all before.

    See (particularly 4 and 5):


    It really comes down to one thing.

    If data is collected, it will be used.

    It’s far past the time for us all to take an interest in who’s collecting what.


    Google really wants your logs

    I wrote here about some of the privacy implications of Google’s data retention policies:

    With the launch of Google Analytics, Google is now poised to collect that data not only from every Google visit and every site that has Google ads on it, but also from every site processed by Google for “analytical” purposes (although there’s probably a fair amount of overlap between the latter two).

    Remember – Google does not consider your IP address to be personal information, and so it’s exempt from most of the normal restrictions on how they use the data they collect. The terms of service for Google Analytics conspicuously fail to say whether Google may use the data it collects on your behalf for its own purposes. One has to assume that they consider themselves allowed to, and consequently that they do. It’s unclear, but it’s probably the case that Google could, according to the terms of these agreements, correlate search terms from your IP address with hits on other websites. I don’t see anything in there preventing them from doing so, because the two pieces of correlated data are obtained by different means.


    What’s wrong with the Google Print argument

    Does this phrase sound familiar? “You may not send automated queries of any sort to Google’s system without express permission in advance from Google.” It’s from Google’s terms of service, and it’s just one of several aspects of that document that leave a bad taste in my mouth.

    Larry Lessig makes the point that “Google wants to index content. Never in the history of copyright law would anyone have thought that you needed permission from a publisher to index a book’s content.” But that’s not what Google wants to do. Google wants to index content and put their own for-pay ads next to it. Larry says ” It is the greatest gift to knowledge since, well, Google.”

    Don’t forget this for a second. Google is not a public service, Google is a business. Google isn’t doing this because it’s good for the world, Google is doing this because it represents a massive expansion in the number of pages they can serve ads next to. In order to do that, the index remains the property of Google, and no one else will be able to touch it except in ways that are sanctioned by Google. It’s not really about money, it’s about control. It’s against the terms of service to make copies of Google pages in order to build an index. Why should it be okay for them to make copies of other people’s pages in order to build their own? It’s not that they’re making money that bothers us, it’s the double standard. The same double standard that says that Disney can take characters and stories from the public domain, copyright them, and then lock them up and prevent other people from using them.

    Oh, but you hate that, don’t you, Larry? (And I think a lot of us do.) How is what Google is doing any different? Google is just extending the lockdown one step further, into their own pockets. There’s no share alike clause in the Google terms of service, and that is what’s wrong with it. They want privileges under the law that they’re not willing to grant to others with respect to their own content.

    The day Google steps forward and says “we’re building an index, and anyone can access it anonymously in any way they please”, then sure – I’m all with you.

    (Found at


    On sharing

    Filed under: — adam @ 12:29 pm

    There are two competing monetary questions in content ownership: “How can I get the maximum amount for what I’ve already done?” and “How can I get the maximum amount for what I’m going to do next?”.

    The former is seemingly answered by maximum control. Tight focused marketing, sell as many copies, wring every last dollar out of existing properties by making sure that people need to buy them more than once and can’t do anything interesting with them. In my opinion, this is a strategy for shooting the latter. It makes enemies, it makes people not care what else you have, and it makes people upset.

    Feeding the commons is about ongoing effort. Releasing your work to as many people as possible gets you attention for the next thing you do. It’s so simple. It’s not about selling any one thing anymore, it’s about selling your stream. My previous post, Preaching to the Esquire, is a link that contains the entire text of an article from Esquire. It’s blatantly copied. But if it hadn’t been, only existing subscribers would have read it. As it is, that article is getting forwarded around to lots of people, and it has at the bottom of it this:

    Wow. Not something I expected from “Esquire.”

    followed by this ringing endorsement:

    Esquire is a great magazine. Read it more often: there’s tons of articles on politics, science, current events…it’s, like, Maxim for intelligent people.

    Esquire probably had nothing to do with this, but in one stroke, Esquire has certainly grabbed more people for their stream. Many of them will buy an issue. Some of them will subscribe. It’s not about monetizing this article, it’s about getting people to pay attention to what you’re going to do next – the recurring and predictable revenue streams that keep ongoing operations… ongoing.

    Put your best work out there, let it speak for itself, and maybe someone will already be paying attention next time you have something interesting to say. Maybe they’ll even pay for the privilege. Locking it up where only people who are already interested can find it is a recipe for obscurity and irrelevance. Yes, TimesSelect, I’m looking at you.


    The ELF invades Sears

    All this talk about plays written in the dark ages made me reread the skit I wrote in high school with my friend Tristan.

    I give you The ELF invades Sears.

    It still makes me laugh.

    (It’s an homage. Read a book!)

    Republic Dogs


    My friend Nat’s screenplay “Republic Dogs”, a Plato/Tarantino mashup, is making the blog rounds:

    There seems to be some contention over when it was actually written. I can personally attest to being present around the time of the original writing and presentation, at or near Columbia’s Philolexian Society (Columbia’s oldest student organization, founded in 1802), sometime between 1992 and 1996. Nat says 1994, and I believe him.

    In fact, I made a poster for its theatrical (okay, in the basement of River) performance as part of a series of one-act plays, Onion Days and Starry Nights in the Zero-Sum Republic:

    Republic Dogs


    Shared lessons between programming and cooking

    Filed under: — adam @ 7:22 pm

    I originally wrote this a few years ago, but I thought it was worth restating. Here it is lightly edited:

    Fine programming and fine cooking are similar disciplines, each a mixture of a lot of craft with a good deal of art. In each, you can have just the craft without the art, or just the art without the craft, but the results are extremely likely to be disappointing without both. The balance between the two is a reflection on the practitioner’s technique, the personality of which is always highly evident in the end product. I have found that my development discipline has been adaptable to cooking, and that many of the things I’m learning about cooking have analogues in programming.

    For example, in cooking, good stock is critical. It adds flavors to other dishes, and can be layered to build complexity and texture. The more attention you pay to getting your stock right and correctly flavored, the better your end product will be. Stock requires upfront planning, dedication of resources, patience, and unit testing. Stock is a module. Like any module, you can make your own and it will be exactly what you need (or terrible, depending on your own skills), or you can buy someone else’s and it will either be good enough or terrible (depending on the skills of the stockmaker), and the quality of your final product will hinge heavily on which one it actually is.

    Some shared lessons:

    1. Perfection is the goal, but the product had better damn well go out when it needs to and be right when it does. Perfection is the standard by which you measure what you did wrong last time so you can try not to do it again.
    2. Taste, test, measure, know. If you don’t know what’s supposed to be happening, or you don’t know what is actually happening, you have no way to compare the two, and you certainly have no way to bring them together.
    3. Building and maintaining your toolkit, which includes both tools and ingredients, is of utmost importance. For development, this is your development environment and your past history of specs, diagrams, and old code to repurpose. For cooking, this is your knives and other tools, as well as your collection of stocks, scraps, and spices.
    4. Knowing what’s in your toolkit is important, but knowing where to find something you need if it’s not is even more so.
    5. It’s sometimes easier to buy components, but it can be less effort in the long run to start from scratch. It’s entirely likely that a component you build yourself will be better for you, but the trick lies in knowing the difference before you start. Sometimes you have no choice.
    6. Waste is the enemy. Time, materials, and resources all have costs. Usage is not necessarily waste. Not taking care to avoid waste is itself waste. Failing to properly maintain your tools is waste. Not using everything that can be used is waste. Doing unnecessary tasks is waste. Documenting what you did is not waste.


    Horse, barn door, something.

    I got the new Fiona Apple album today. It came with huge labels, both on the box and the disc, reading:

    “FBI Anti Piracy Warning: Unauthorized copying is punishable under federal law.”

    Bang up job, folks.


    Unthrilled with the Office 12 UI

    Screenshots for Office 12:

    Okay, they’ve cleaned up the interface a bit by grouping related things into similar boxes that are actually labeled, and I’m told that the interface elements are all vector-based so you can resize them arbitrarily. That’s nice.


    Over many years of designing custom content management interfaces for lots of people to use, it became crystal clear that there’s a huge difference between a “tool” and a “task”. A tool is a function that lets the user do something, but a task is a function that lets the user accomplish something.

    In my experience, most successful content management interfaces are primarily task-based. When the user sits down in front of the computer, the goal is to get something done, not just use some tools. Tasks are for most people (beginners and power users alike), but tools are for power users. If you know what you want to do, but it doesn’t fit nicely into the framework of getting something done, you need a tool. Tasks should be the default.

    This is why the new Office UI is still confusing – it’s full of tools.

    Let’s take Word as an example. The forefront example of tools vs. tasks is the question “why is there still a font box?”, and the corollary question “why do the font options still occupy a huge chunk of prime screen real estate?”. Changing the font is a “tool function”. When you change the font in a document, you haven’t really accomplished anything. Sure, you’ve made it look different, but “making it look different” probably wasn’t the goal. What you were really doing is the unspoken “drawing attention to this text” or “making it match the company colors” or any number of other things that aren’t just “making it look different”. With a tool, you can “make it look different”, but it requires a lot of input from the user in order to get the rationale right, and this is why expert users get frustrated when beginners change the fonts and their results don’t match their intent. The software shouldn’t make it easy to change the font without understanding why. There should be tasks centered around things you might want to do, and the software should guide you. Importantly, if you do understand why, and you have different intentions than the software does, it should get out of your way – but that comes around to letting you use tools to get around the limitations of pre-defined tasks.

    (An important note: a “wizard” is not a task-based interface. It’s a poor substitute that attempts to graft tasks onto what is primarily a tool-based interface.)

    This goes right to the heart of the debate of semantic content vs. formatting. A huge portion of the tech community has been trying very hard to get people to think in ways that are structured, for various reasons. It’s not always the best approach, but it’s by far the best default if you don’t know what you’re doing. If you go through your document and decide “this needs to be 14 point Helvetica and this needs to be italic and this needs to be 24 point Times”, the onus is on you to understand why you’ve chosen those particular settings. “It looks nice” isn’t good enough, if it doesn’t match your intent. You’ve lowered the chances of getting the right result, and you’ve made things more difficult for the next person to go through and standardize your settings when your one-page memo gets reformatted to be used in the company brochure. You’ve probably also made things more difficult for yourself. Instead of trying to decide what it should look like, you could have just told the machine “this is a heading, that’s a title, and this paragraph is a summary of findings”, and made your life easier.

    The UI appears to have some of this by grouping tools by tasks, but it doesn’t follow through — “Write”, “Insert”, “Page Layout”… but then, “References”? Nope. “Mailings” – maybe, but probably not. “Review” – we’re back. “Developer”? That’s a noun. Obviously there isn’t a consistent organizational structure here. Task-based interfaces are a radical shift from tool-based ones, and they require the UI designer to ask of every function put in front of the user: “Do I really want to give them this power? Am I making their life easier by doing so, or just giving them a shotgun to aim at their feet?”. It’s Microsoft Office, not Microsoft Fun with Fonts, Colors, and Margins. There’s a strong argument to be made that it shouldn’t be easier to use all of the features, because they’re a waste of time for most users.

    Microsoft should have taken this opportunity to put together a new interface that’s not only prettier, but also radically easier to use, more intuitive, and above all, more productive. Instead, they’ve produced what appears to be more of the same.


    Why I shoot photography.

    Filed under: — adam @ 12:13 am

    I shoot photos for the same reason I cook and program computers.

    I believe that humanity’s high calling and deep purpose is the neverending struggle against the varied forces of entropy. Tempered by the wisdom of allowing natural forms of order to co-exist and simultaneously be captured in time, we live to create in our environment a reflection of our own inner sense of order. Every meal prepared, every elegant algorithm, and every imperfect echo frozen by sheer force of will is one more piece of the pattern coalesced from the ethereal storm and notched on the spear of humanity’s collective soul.

    Take a handful, grab hold of the writhing chaos, keep your grip in the face of adversity, and shape it into something that can’t help but be beautiful until it hurts.

    We will eventually be forgotten, and remembered only for what we added or took away.

    I prefer to add.


    Kottke asks “what’s next for the internet?”

    Filed under: — adam @ 12:54 pm

    I’m waiting for the computers to get out of the way.

    Kottke says: ‘”Web 2.0″ arrived a year or two ago at least and we’re still talking about it like it’s just around the corner. What else is out there?’

    I think that all of this stuff is still too difficult to use, and it’s spread out across too many services that don’t sufficiently talk to each other, and it’s not sufficiently preserved as raw data. Where are the other services supporting Flickr’s API on the receiving side, so that tools built for Flickr can just work with those other services too? It’s not a standard if no one else implements it. Why can’t I download all of my pictures from Flickr without writing some code? Why can’t I see my favorites as a set of links? Why can’t I browse links as Flickr sets?

    What’s missing is the usability layer that makes it possible to use all of these services together without writing to their individual APIs.


    Why I oppose DRM

    As some of you know, on September 11, 2001, I lived one block north of Battery Park, at 21 West Street. (Ironic popup tag provided courtesy of Google Maps.) When I was forced to leave for thirteen days while the smoke cleared, I had little time to grab anything. I left without my computers, without my original installation discs, and without all of my Product ID stickers. I found myself suddenly without the mechanism to reinstall a number of legally purchased programs that I needed to use for work, and taking a lot of time that could have been better spent wallowing in my own PTSD calling around to various companies to get them to unlock things for me.

    There were stories of rescue workers hampered by license management, and that’s when I knew.

    The world is dangerous, and sometimes emergencies happen. While people can say “hey, maybe we should make an exception here, because there are extenuating circumstances”, computers just don’t care about that. We are backing ourselves into a restricted corner, and a dangerous one, where computers call the shots, even in the midst of crisis, even in the midst of rational exceptions. Granted, every case is not this extreme. Hopefully, the future will be without another like it in my immediate vicinity. But the trend to pre-emptively lock down everything by default scares me.

    As we evolve towards tighter and tighter controls without any possibility for exception, what happens when those granting agencies stop granting? What happens when companies that issue DRM go bankrupt? What happens if they’re unreachable? What happens if they simply decide to stop supporting their framework?

    As my high school calculus teacher used to say – “it’s always easier to ask forgiveness than to ask permission”. Security is many tradeoffs, and if you restrict legitimate uses in the name of preventing illegitimate ones, you’ve cut off part of the point of having security in the first place. If you restrict legitimate uses without even preventing the illegitimate ones, you’re wasting your customers’ time, and you’re part of the problem.

    See more of my rants on DRM and security.



    We’re just going to deal with the fact that people die from time to time

    Filed under: — adam @ 9:53 pm

    Kottke asks:

    As members of the human species, we’re used to dealing with the death of people we “know” in amounts in the low hundreds over the course of a lifetime. With higher life expectancies and the increased number of people known to each of us (particularly in the hypernetworked part of the world), how are we going to handle it when several thousand people we know die over the course of our lifetime?

    Interesting question. I think, like everything else, the lack of novelty will acclimate us to the experience and we’ll just get used to the fact that lots of people we know will come out of our lives as easily as they entered.


    Grokster is not like gun companies being sued for crimes committed with guns

    I’ve been hearing a lot that the Grokster decision is akin to the court saying that gun manufacturers should be liable for crimes committed with guns.

    I think a better analogy is this: if a gun manufacturer sells guns under a sign that says “Bank Robbing Projectile Launchers Here”, and then in the middle of a bank robbery helps its customers unjam their guns so they can fire at the cops again, then it might have some liability as an accessory to the bank robbery.


    Grokster is reasonable

    I’ve actually had time to read the entire decision, and I find this totally reasonable.

    This is the text of the decision, and I’m surprised that this was turned into such a landmark case to begin with. It’s meaningless – all it says is that if you promote a service meant to contribute to copyright infringement, you can’t hide behind the defense that your service also has other uses.

    Discovery revealed that billions of files are shared across peer-to-peer networks each month. Respondents are aware that users employ their software primarily to download copyrighted files, although the decentralized networks do not reveal which files are copied, and when. Respondents have sometimes learned about the infringement directly when users have e-mailed questions regarding copyrighted works, and respondents have replied with guidance. Respondents are not merely passive recipients of information about infringement. The record is replete with evidence that when they began to distribute their free software, each of them clearly voiced the objective that recipients use the software to download copyrighted works and took active steps to encourage infringement.

    This isn’t about freedom of technological expression, unless I’m missing something big here. These guys were running a service encouraging people to trade copyrighted content, giving them tech support on it, and then hiding behind the claim that other people were using the service for legitimate means.

    We can argue about whether that should be legal or not, but as far as I can tell, there isn’t a strong case for the argument that it actually is legal. This isn’t about protecting the rights of technologists to develop new ideas – this is about actual copyright infringement. There are two different cases here:

    1. You release a tool that enables people to infringe copyright and you have no control over that.
    2. You release a tool that enables people to infringe copyright and you advertise it as such, promote it with that goal, and help people out when they can’t download the latest Britney album.

    Sounds like #2 to me.

    UK government considering selling ID card data to pay for ID system

    What’s that slightly coppery smell? It’s irony mixed with incompetence.

    So, let me get this straight. The UK government decides to implement a national ID system amid serious criticism of whether it will be effective at all, and of its cost-effectiveness in particular, and is now considering selling the data to pay for spiraling costs.

    I bet that idea is guaranteed to reduce identity theft and abuse of the system.


    Open letter to Adobe

    Dear Adobe:

    Your activation system is a failure.

    I have been a loyal customer for more than ten years. I’ve dutifully paid pretty much whatever you’ve asked for upgrades over the years, and I’ve always been happy with your product.

    I understand that you don’t want people to steal your software. Never mind that Photoshop became the industry leader in image editing largely because it was mercilessly copied by everyone. Your product is good, and I like it.

    Let’s be clear about this. I’m not stealing your software.

    But you’re treating me like a criminal. Twice in the past few weeks, I’ve had to talk to one of your activation support reps because your online activation system is broken. More than once, it has simply decided that I’d activated too many times and flagged me as suspicious. Never mind that I was reinstalling on a brand new replacement computer. Never mind that on the first occasion this happened, there was no grace period, and the software simply would not run until I talked to one of your phone representatives, who, by the way, are ONLY AVAILABLE DURING WEST COAST BUSINESS HOURS.

    Thanks. You’ve given me reasons to think twice about giving you more money in the future and tarnished your spotless reputation.

    Bravo. I hope it was worth it.

    [Update: I've spoken to Adobe support after my fourth automated reactivation failure, and apparently, this is an issue with RAID devices: the activation system sees the array as a different computer configuration on subsequent checks. My previous comments stand. This is totally unacceptable. Worse than that. The system is not only broken, it's flagging valid and widely accepted disaster-prevention techniques as theft. So, my opinion is now this - Adobe has not only foisted misguided copy protection techniques on us, but, to add insult to injury, they're still beta. There is a patch available, so contact Adobe if you have this problem.]

    [Update #2: I installed the patch, and activation failed yet again. Holding for support... ]

    [Update #3: After activating again via phone, all seems to be working. For now. ]


    Three simple rules for good customer service

    Filed under: — adam @ 11:42 am

    There are three simple rules for customer service:

    1) Don’t ask me for the same information more than once.

    2) When I give you information, understand it. This goes double for when I tell you I’ve already tried certain steps to fix the problem.

    3) Help me solve the problem. If you can’t (see rule #2), reroute me to someone who can (see rule #1).


    P2P is a technique, not a thing

    Filed under: — adam @ 9:44 am

    This message to the IP list claims that Tarleton University will “shut down P2P”:

    P2P file sharing programs provide Internet users with the ability to share files on their computers with millions of other Internet users. Common P2P use includes song and movie file sharing, gaming and instant messaging. P2P file sharing software makes it possible for people to accidentally share personal files or sensitive data. These programs also allow easier access to computer systems for theft of sensitive documents and unauthorized use of network resources. There have been incidents where P2P programs have exposed sensitive federal government documents.

    P2P file sharing software potentially compromises computer systems. The use of this software creates vulnerabilities through which malicious code (viruses, worms, Trojans, bots) or other illegal material can be introduced.

    The use of P2P file sharing can result in network intrusions.

    Few, if any, university owned computers have an operational reason for running P2P file sharing software. These applications represent a network vulnerability that cannot be afforded without a strong justification.

    This followup message illustrates a number of ways to get around that restriction:

    As I’ve written before, P2P is not a technology, it’s a technique. You can’t just wish it away by eliminating programs or blocking ports. Bits are just bits. By themselves, they’re meaningless, until you apply a filter to them that explains what they are. Once again, all together now. Content is a pattern, not a thing.
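    The “bits are just bits” point is easy to demonstrate with a trivial sketch (the bytes here are made up purely for illustration): the very same byte string becomes entirely different “content” depending on which filter you apply to it.

    ```python
    # The same bytes, three different "filters" - the content lives in the
    # interpretation, not in the bits themselves.
    data = bytes([72, 105, 33, 0, 255])

    as_text = data[:3].decode("ascii")   # read the printable prefix as text
    as_numbers = list(data)              # read the bytes as sample values
    as_hex = data.hex()                  # read the bytes as an opaque identifier

    print(as_text)     # Hi!
    print(as_numbers)  # [72, 105, 33, 0, 255]
    print(as_hex)      # 48692100ff
    ```

    No firewall rule can tell you which of those three readings is the “real” one.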

    P2P isn’t going anywhere.


    I’m very confused about EliteTorrents

    The MPAA shut down EliteTorrents, which was supposed to be “one of the first peer to peer networks to post an illegal copy of Star Wars: Episode III – Revenge of the Sith before the movie officially opened in theaters last Thursday”, according to the MPAA press release.

    (Sorry, word format.)

    This kind of thing has a limited lifetime, because Bittorrent has gone trackerless. What this means is that once a full copy is out there somewhere, the network becomes very resistant to taking down any particular copy. I’ve written about the MPAA’s problems with this before, but I feel the need to reiterate: this is not something that you can just make go away. It’s not a technology, it’s a technique. The ability to reconstruct a whole from disparate parts without a central resource means that shutting down one, or even a few, sites doesn’t stop the flow – you have to eradicate every last copy out there. Frankly, I don’t see that happening, and even if it did happen, the means to get there could not possibly be worth the end product.
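    A minimal sketch of why this works (hypothetical data, and nothing like the actual BitTorrent wire protocol): all a downloader needs is the list of piece hashes. The pieces themselves can arrive from any peer, in any order, with no central resource involved at all.

    ```python
    import hashlib

    def split_pieces(data: bytes, size: int) -> list[bytes]:
        """Chunk a file into fixed-size pieces."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    # The publisher computes one hash per piece; this small list is all
    # that's needed to verify a download, wherever the pieces come from.
    original = b"A full copy of some file, once released, is out there."
    pieces = split_pieces(original, 8)
    piece_hashes = [hashlib.sha1(p).hexdigest() for p in pieces]

    # Pieces arrive out of order, from several unrelated peers...
    received = {i: p for i, p in reversed(list(enumerate(pieces)))}

    # ...and the downloader verifies each against its hash and reassembles.
    assert all(hashlib.sha1(received[i]).hexdigest() == h
               for i, h in enumerate(piece_hashes))
    reassembled = b"".join(received[i] for i in range(len(piece_hashes)))
    assert reassembled == original
    ```

    Shut down the peer that served piece 3 and any other peer holding piece 3 can take its place – which is exactly the whack-a-mole property described above.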

    So, assume that p2p file sharing is here to stay, and can’t be stopped.

    Now, this is very interesting, because although I can’t find a reference for it, I’m told that Revenge of the Sith made back its entire investment in merchandising tie-ins before a single ticket was sold. If that’s true, even setting aside the record ticket revenue on opening weekend, this is hardly a poster child for revenue lost to filesharing, but instead an argument that filesharing is, in fact, great for generating buzz and activating supplemental revenue streams.

    I’m not a marketer, I’m a technologist, but even this is obvious to me:

    1. People like to spend money.
    2. People don’t like to be treated like criminals.
    3. People like to spend money on those they consider friendly or part of their community, even if it’s not true (you know who you are).
    4. People share with their friends.

    The creative commons folks get it.

    I’m also confused about why EliteTorrents was hosting a copy of the movie, if in fact they were. With a trackerless torrent, if someone puts up a movie and then takes it down, but multiple other people have sucked it down and are sharing it, you’ve got a pretty big whack-a-mole problem. The original sharer has effectively complied with what a C&D would demand, but the problem still exists. This is bad, I think – it increases the incentive for copyright owners to try to make the penalties greater for smaller instances of filesharing, and I think that would be a counterproductive approach.


    Encryption is not a crime

    I’m not sure how I feel about this.

    A Minnesota court has ruled that the presence of encryption software is valid evidence for determining criminal intent. On the one hand, it seems like a severe misunderstanding of how the modern world actually works, given that encryption is absolutely essential for many things we take for granted.

    On the other hand, I can see that if there’s other evidence, this might be used to show that you have something to hide, but I worry about the situation where there isn’t any other evidence of a crime, and the fact that there’s something to hide becomes the key determining factor.

    Everyone has something to hide. It may be private, it may be secret (not the same thing), it may be evidence of a crime, or it may be evidence of something that someone else thinks is a crime but you don’t. For the latter two, that is, of course, why we have a legal system in the first place. For the former two, there are plenty of legal reasons to want to keep those things private or secret.


    Lessons learned from Revenge of the Sith

    Filed under: — adam @ 11:26 pm
    1. When the leader says “Everything’s fine, go wait on the LAVA PLANET”, be suspicious.
    2. The Dark Side of the Force is called “The Dark Side” for a reason. It’s not like “The Dark Side of the Moon”.
    3. Robots with cutesy voices are annoying, not adorable. That goes double for aliens with cutesy voices. Triple for robots with cutesy voices and smoker’s cough.
    4. For some reason, robots talk to each other in English, instead of using wifi or bluetooth or something.
    5. Coruscant OB/GYN technology leaves something to be desired. [Update: "Luke" and "Leia" are clearly the Naboo words for "Morphine" and "Epidural"]
    6. 20 years seems like nothing when you’re ruling the galaxy.
    7. Don’t forget what happened to your mother in the last movie, or there will be extra exposition.
    8. Darth Vader is not scarier with an artful allusion to Frankenstein.

    Great fight choreography, but man… what a piece of garbage.

    Star Wars theories redux

    Filed under: — adam @ 9:38 am

    Over beers after watching Attack of the Clones, I posited two theories that were not explicitly mentioned in the movie, but which make it much more interesting.

    1. Padme doesn’t love Anakin, but has instead been coerced into thinking that she does with Jedi/Sith mind tricks. Anakin as much as says this, and it explains a) her rapid change of heart, b) why she falls for Anakin in the absence of any redeeming qualities, and c) all of the bad dialogue.
    2. Yoda is complicit. Rather than being an idle participant or “the good guy”, he’s an integral part of the plot. There’s a fair amount of evidence for this. Someone high up in the Jedi order erased the existence of the cloner planet from the archives. Yoda thinks the Jedi are too set in their ways and crumbling as an institution, and need “balance” restored (which is not necessarily good). It’s not believable that he could stand in Palpatine’s presence and not pick up on something. He clearly lets Dooku get away in the fight at the end, feigning being “distracted” by some tiny falling beam. We know he survives the purge. All training leads back to Yoda (Anakin trained by Obi-Wan, trained by Qui-Gon, trained by Dooku, trained by Yoda. For that matter, big open question – did Yoda train Palpatine? If not, then who?)

    I’m curious to see if either of these is acknowledged, or at least not contradicted by the third movie (I’ve got my tickets for tonight).


    Per Se review

    Filed under: — adam @ 6:52 pm

    I was talking about the meal we had at Per Se a year ago, and I realized I’d never posted the review here. This originally appeared on my livejournal blog, but what’s a repost among friends…

    A year later, I can still taste everything on the menu.

    Here’s the original review I wrote:

    It’s not so much a restaurant as it is a very well oiled food perfection delivery machine. Not everything was 100% perfect, mind you, but the things that weren’t were mostly of no consequence (or wrong only out of convention and not in the sense of being, say, inferior in any way), and only served to add character to the things that were. More on that later.

    I can’t remember the last time going out to eat gave me the giggles.

    To say that the food was exquisite is missing the point – it’s just in a different class altogether. Every bite is full of both genius and playfulness. Keller’s lighthearted flavor fugue is all over the place, and it shows. For example:

    Bread. They start with a choice of three kinds of bread – 9-grain, “simple” country white, or a french bread roll, with two kinds of butter. All great. But then later, they bring out something else – “this is the only bread we make here”. It’s a “Parker House roll”, little quatrains of fleur de sel crusted puffy cubes. Imagine a pretzel crossed with a croissant, and you’re mostly there. But it doesn’t stop. At the end of the explanation of the bread, the service captain tells us “we’ll revisit this later”. The dessert course has a bunch of amazing simple things on the plate; one of them is a little puddle of cream. “Remember I said we’d come back to the Parker House rolls?” The cream is ‘”Pain au Lait” Coulis’, and it’s made out of the rolls. They pulverize them in a food processor, then cook them down in a process I don’t entirely understand. But it’s outstanding.

    Wine. The wine was reasonably priced. We had a bottle of Neyers 2002 Chardonnay ($50), which was great. The captain recommended individual glasses of sharper whites (which I don’t remember) for the second course; we took the suggestion, and it was the right decision. The bottle went with everything, one bottle lasted the meal, and it hit a perfect match with the lobster course. The wine list is a staggering book of much more expensive choices, but I think this was a fine selection.

    They have over 200 kinds of plates, most of which were custom designed by Chef Thomas with Limoges. This attention to detail is in every aspect of the meal.

    We each started with the Per Se cocktail – ciroc vodka with a white port, glasses washed with a fruity liquor, and garnished with two red grapes. Extremely refreshing, and smooth.

    A note on the service. About halfway through the meal, we got fairly confused about who was doing what and had to have it explained. There were no fewer than 6 people involved in various parts of our meal – the waiter, the sommelier, two or three servers, and also a service captain to top it all off. They were very well coordinated, and the service was exceptionally attentive and, for lack of a better word, bright. I felt like everyone was extremely proud of their job, and rightly so.

    Shortly after drinks, we ordered, and Chef Thomas’s signature amuse-bouche was presented to us – salmon tartare “ice cream cones”. A black sesame tuile filled with onion creme fraiche, topped with salmon tartare. Delightful and fresh.

    ** Course 1:

    “Oysters and Pearls”
    “Sabayon” of Pearl Tapioca with Island Creek Oysters and Iranian
    Osetra Caviar

    Fantastic! Thomas Keller talks a great deal about the texture of luxury in his cookbook. Strain strain strain. This is it. A sweetish custardy pudding with droplets of oceanic salty goodness.

    ** Course 2:

    Marinated Holland White Asparagus
    White Asparagus Terrine and Garden Mache

    “I feel like I’m eating Spring.”

    “Peach Melba”
    Moulard Duck “Foie Gras Au Torchon”
    Frog Hollow Farms Peach Jelly, Pickled White Peaches, Marinated Red Onion, and Crispy Carolina Rice

    “I feel like I’m eating a big fat duck liver.”

    In a sea of a meal of the best things I’ve ever tasted, this stands out. Wow. Foie gras and peaches. Perfectly smooth, fruity, creamy, and surrounded by crunchy crisp bits.

    Another note on the service here. Two of the aforementioned minor imperfections in the service were on this course. First, the server spilled some of the rice crispies on the table while spooning them into the bowl. Unforgivable. Second, they served this with three slices of melba toast, and they offered more only about 45 seconds after I thought “they really should have served this with more toast”. They were going for a surprise, but missed it. Terrible.
    As you can see, the service was less than outstanding. :)

    ** Course 3:
    “Pave” of South Florida Cobia “A La Plancha”
    Fava Beans, Chanterelle Mushrooms, and a Preserved Meyer Lemon Emulsion

    I wasn’t familiar with Cobia before, but I think this was the most well-balanced fish course I’ve ever had. The texture was great, perfect crust, a little citrus.

    ** Course 4:
    Sweet Butter Poached Maine Lobster
    “Cuit Sous Vide”
    Wilted Arrowleaf Spinach and a Saffron-Vanilla Sauce

    Yeah… It’s just indescribably good. I can’t even try.

    ** Course 5:
    Pan Roasted Cavendish Farms Quail
    “Puree” of Spring Onions, Apple Wood Smoked Bacon “Lardons” and Split English Peas

    This seemed a little out of place to me, seasonally. But it was still amazing.

    ** Course 6:
    Elysian Fields Farm “Carre D’Agneau Roti Entier”
    Grilled Swiss Chard Ribs “en Ravigote”, Roasted Sweet Peppers, and a Nicoise Olive Sauce

    I think this qualifies as a “main” course. Lamb is all good.

    ** Course 7:
    “La Tur”
    “Gelee de Pomme Verte”, Satur Farms Red Beets and English Walnut Short Bread

    Cheese course, a wedge of something creamy with tart apple gel and beets. Anne doesn’t like beets, but I found this very refreshing.

    ** Course 8:
    Napa Valley “Verjus” Sorbet
    Poached Cherries and Cream Cheese “Bavarois”

    Sorbet course. My palate was refreshed!

    ** Course 9:
    “Tentation Au Chocolat, Noisette Et Lait”
    Milk Chocolate “Cremeux”, Hazelnut “Streusel” with Condensed Milk Sorbet and “Sweetened Salty Hazelnuts” and “Pain au Lait” Coulis

    Formal dessert, basically a chocolate mousse with puddles of creamy things, and the Parker House bread pudding.

    ** “Mignardises 1″

    Creme Brulee

    Anne really liked this, but I found it, to my surprise, to be too smooth. It’s the texture of luxury, but I still think that Le Cirque has it beat. It was quite delicious, but it wasn’t right for me.

    Hazelnut Panna Cotta w/ Apricots

    This is Keller’s take on yogurt with fruit on the bottom. Yummy.

    ** “Mignardises 2″
    Assortment of cookies & chocolates
    Rosemary / Thyme chocolate

    Here, I had an espresso, and we both had white tea. I’m quite pleased that more restaurants seem to be offering high-end teas.

    The cookies were tasty and buttery, but the standout here was the filled chocolates, particularly one with a rosemary and thyme cream.

    So, that’s it. Afterwards, we got a tour of the kitchen, which is like some sort of serene temple.

    I had a fabulous time. Previously, I didn’t really feel up to the task of tackling any of the recipes in the French Laundry cookbook, but now I feel like I have some idea of where they’re supposed to go. This is unmistakably one of the standout meals in my appreciation for the art of cooking.


    Why you should urge your Senator to vote against REAL ID

    In short, the Real ID Act is a huge waste of money that will likely have the opposite of the stated effect, but will enable other kinds of tracking that are not worth the cost at best and totalitarian at worst, while leaving huge vulnerabilities for legitimate users of the system (i.e. MOST of the population).

    On Tuesday, it comes up for vote in the Senate. It’s already passed the House.

    Senator Durbin’s opposing viewpoint:

    Bruce Schneier has written extensively on why a National ID card is both a waste of money and likely to make us less safe.

    I’ll paraphrase here, but I urge you to read his versions:

    And particularly, his analysis of REAL ID:

    There are several key points:

    1) It’s a common fallacy that identification is security, and that putting a label on everybody will automatically mean you can identify the bad guys. This is simply not true, and it’s an excuse to get an ID card implemented for other things. It is not possible to make an unforgeable ID card, and spending money on that is money that could be better spent on other, more useful (from a security standpoint) things, like training border guards. This fallacy has been propagated for years by the airline industry – matching ID to the name on the ticket does nothing for security.

    2) A national ID card is a single point of very valuable failure for ID theft. With a one-stop card that’s good for everything, the incentive to forge that one card goes WAY up.

    3) There isn’t one database of every citizen, currently, although the IRS probably comes closest. There has been no discussion about the feasibility of merging a bunch of databases into one, or how access will be limited to that data, how it will be secured, etc… This is not a small problem, and it’s being swept under the rug as an afterthought.

    4) A very simple question – “is this a smart way to spend how much money for … what gain exactly?”.

    A few quotes from Bruce:

    “REAL ID is expensive. It’s an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I’ve seen estimates that the cost to the states of complying with REAL ID will be $120 million. That’s $120 million that can’t be spent on actual security.

    And the wackiest thing is that none of this is required. In October 2004, the Intelligence Reform and Terrorism Prevention Act of 2004 was signed into law. That law included stronger security measures for driver’s licenses, the security measures recommended by the 9/11 Commission Report. That’s already done. It’s already law.

    REAL ID goes way beyond that. It’s a huge power-grab by the federal government over the states’ systems for issuing driver’s licenses.”

    “Near as I can tell, this whole thing is being pushed by Wisconsin Rep. Sensenbrenner primarily as an anti-immigration measure. The huge insecurities this will cause to everyone else in the United States seem to be collateral damage.”

    A few observations of my own:

    - This comes on the tail of the realization that the TSA has spent 4.5 BILLION dollars in the past 3 years on useless “security” measures, some not insignificant chunk of which was spent on things relating to identification of passengers. It has been widely concluded that the airlines are no safer than they were in 2001.

    - This administration is seriously deluded about security measures in electronically readable identification (particularly RFID implementation), and was recently forced against their every protest to face the fact that bad guys don’t play by your rules, and you need to design security measures against the worst case, not the best case. I see nothing like that here.

    - Just the fact that it was slipped into a military appropriations bill and will pass with no debate is reason enough for me to be suspicious of it.


    Google is destroying the private

    Filed under: — adam @ 12:13 pm

    A year and a half ago, I read a great essay by Danny O’Brien (who now works at the EFF) illustrating the difference between public, private, and secret:

    Google has a history of disregarding the private-but-not-secret. The Google Toolbar causes pages that aren’t linked from anywhere to end up in the index anyway when they’re visited. Now, they’re dismantling this distinction even further.

    Some things aren’t linked, or they’re protected with plaintext passwords. THIS DOESN’T MEAN THEY ARE PUBLIC. By putting up a password but not encrypting, or not linking to pages, you’re saying “I know this isn’t really secret, but go away anyway. There’s nothing valuable to you here, and don’t make me work too hard to keep you out.” This is roughly equivalent to putting up a “no-trespassing” sign.

    The Web Accelerator ignores private-but-not-secret login functionality by returning pages generated with the cookies (i.e.: logins) of other Web Accelerator users.
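    The failure mode being described is the classic shared-cache mistake: keying cached responses on the URL alone, while ignoring the cookie that personalized them. A toy sketch (hypothetical functions, not Google's actual implementation) makes the bug and the fix obvious:

    ```python
    # A naive caching proxy keyed only on URL. A response generated for one
    # user's session gets replayed to everyone else who asks for that URL.
    cache: dict[str, str] = {}

    def render_page(url: str, cookie: str) -> str:
        # Stand-in for the origin server: output depends on who is logged in.
        return f"Private inbox for {cookie} at {url}"

    def naive_proxy(url: str, cookie: str) -> str:
        if url not in cache:                 # BUG: cookie ignored in the key
            cache[url] = render_page(url, cookie)
        return cache[url]

    alice = naive_proxy("http://example.com/inbox", "session=alice")
    bob = naive_proxy("http://example.com/inbox", "session=bob")
    assert "alice" in bob   # Bob is served Alice's logged-in page

    # The fix: the cache key must include whatever personalizes the response.
    fixed_cache: dict[tuple[str, str], str] = {}

    def fixed_proxy(url: str, cookie: str) -> str:
        key = (url, cookie)
        if key not in fixed_cache:
            fixed_cache[key] = render_page(url, cookie)
        return fixed_cache[key]

    assert "bob" in fixed_proxy("http://example.com/inbox", "session=bob")
    ```

    This is precisely why HTTP distinguishes shared caches from private ones: a response minted under one user’s login must never satisfy another user’s request.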

    This is Google coming by and taking down all of the no-trespassing signs on the web, and forcing everybody to put up fences to keep the poachers out. I can’t even begin to see how this is okay.

    Would Google be equally fine with the situation if some other company (Yahoo or Microsoft come to mind as the obvious candidates) were to release their own Web Accelerator that proxied Google pages and mangled all of the relationships between cookies and users?

    Just because this stuff isn’t secret doesn’t mean it’s public either. There’s a distinction here that should be maintained, and isn’t. Google, which doesn’t even use https for all of its own pages, should recognize this.


    Google wants your logs

    I’ve been kicking this around for a while, given the release of Google’s ability to save searches.

    Google just announced the Google Web Accelerator, and this has the same kinds of privacy issues surrounding it, so I’ll discuss them both here. For those not in the know, Google Search History is the feature that lets you access your past searches if you’re logged into Google. The Web Accelerator is a proxy that pushes all of your browsing through Google’s servers. Ostensibly, this is to make your browsing faster, but it also has the side effect that Google can (and presumably will) monitor both the URLs and contents of every web page you’re looking at. You make a request for a web page, and Google fetches it for you. I’d expect that they’re also doing various tricks with preloading and caching.

    Google is poised to collect a lot of data on browsing habits, and every indication is that they plan to keep it around.

    As a brief aside: while I don’t personally work for Google, I do have some friends who do. Every one of them has, in past conversations about Google’s privacy concerns, asserted that Google has (or had) no intention of keeping permanent search/browsing logs, and has (or had) actually built complicated encryption/hashing mechanisms that allow aggregate data to be kept without individual search histories. That may have been true at one time, although I personally found it doubtful, given that if it were true, Google could only benefit by stating it publicly. They have never done so, and recent events have shown that assertion to be categorically false. Google does want to keep your individual search history. I think that’s a relevant point to the privacy debate.

    In reference to search history, I wrote, but never published, the following: “Search history is a sensitive area. Saving and aggregating search history is of dubious value to the end user – it’s maybe a minor convenience at best. If you care about that sort of thing, you’ll want to capture for yourself far more information than just search history, and do it locally across the board. There are several plugins for Firefox that will do exactly that for you, and not only watch your tracks, but save complete copies of everything you’re browsing.” In reference to the web accelerator, it’s evident that Google is heading towards collecting that information for themselves.

    Set aside the fact that Google has now become an extremely juicy one-stop-shop target for identity thieves. I’m sure they’ve got great security. But do you? Google’s lifetime cookie is, as always, a serious point of possible failure. One good cross-site scripting attack or IE exploit, or even a malicious extension, and the Google cookie can be easily exposed. What’s your liability for being associated with a search history, or now a browsing history, tied to a stolen Google cookie?

    But here’s the real doozie.

    The Google Privacy Policy states that Google may disclose personally identifiable information in the event that:

    “We conclude that we are required by law or have a good faith belief that access, preservation or disclosure of such information is reasonably necessary to protect the rights, property or safety of Google, its users or the public.”

    Welcome to Google, where the Third Law comes first.

    This has serious implications. For logged-in users using all of Google’s services, this now includes the contents of your emails, your complete search AND browsing history, any geographical locations you’re interested in, what you’re shopping for, and probably plenty of things I haven’t thought of yet.

    I posit that it would not significantly damage Google in any way for them to actually make use of this information, and that Google could withstand any public backlash resulting from it.

    I think we’ve long passed the point at which we say “this is bad”.

    This is bad.

    In case you haven’t been paying attention, there’s a word for this.

    It’s called “surveillance”.

    I believe that Google should revise their privacy policy to reflect the actual intended usage of this information, and they should clarify under what circumstances this information will be released, and to whom. Will this information be used to catch terrorists? Errant cheating spouses? Tax evaders? Jaywalkers? Anarchists? Litterbugs? As a user, you have a right to demand to know. Of course, don’t expect Google to tell you, since they don’t actually get any of their money from you.



    Hitchhiker’s Guide movie was a waste of everyone’s time

    Filed under: — adam @ 11:20 pm

    It wasn’t bad, per se. It was certainly better than most of the crap Hollywood churns out. But – why? Why did they even make this movie?

    I understand that certain things need to be changed, sped up, adapted, cut, spliced, twisted, and generally modified in order to make a good book into a good movie. But they took an absolutely fantastic book, did all these things, and ended up with a wholly unremarkable movie.

    Some specific complaints:

    1) The comedic timing was off. It really felt like everyone, with the possible exception of Sam Rockwell, was just reading off of a script, rather than saying their lines as their characters.

    2) There weren’t very many new jokes! In fact, there weren’t very many jokes at all. There was some physical comedy, but much of what was funny in this movie was funny ONLY because it was funny in the book. As previously noted by others, a fair number of the jokes lack any context whatsoever.

    3) Douglas Adams imagined a galaxy full of wonder and absurdity. The movie is a galaxy full of tedious adherence to rules.

    I could go on, but that’s about all the energy I have for that.

    Again, it wasn’t actively bad, it just wasn’t good. Oh well.

    Ten out of ten for picking good source material, but minus several million for misinterpreting the Restaurant at the End of the Universe joke.


    New Superman pics

    Filed under: — adam @ 9:40 pm

    They released the first pictures of Brandon Routh in the Superman suit.

    Unless one of his new powers is absorbing color, that picture is really desaturated, and that’s not a good representation of what the suit is going to look like – is his face really that pale?

    I’ve fixed it up a bit below – the edits were applied across the board to the entire picture, and not in selective areas. I also took the liberty of sharpening it a bit, to compensate for the downsampling they did:

    Superman Color Corrected
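    For the curious, the kind of global (non-selective) fix I mean can be sketched without any image library at all. A saturation push is just moving each channel away from the pixel’s luma; the factor here is illustrative, not the exact value I used.

    ```python
    def boost_saturation(pixel, factor=1.5):
        """Push each channel away from the pixel's luma - a global,
        across-the-board saturation boost, nothing selective."""
        r, g, b = pixel
        luma = 0.299 * r + 0.587 * g + 0.114 * b   # standard Rec. 601 weights
        clamp = lambda v: max(0, min(255, round(v)))
        return tuple(clamp(luma + (c - luma) * factor) for c in (r, g, b))

    # A pale, washed-out skin tone gets some of its red back:
    print(boost_saturation((140, 120, 120)))  # (147, 117, 117)
    ```

    Note that a pure gray pixel (equal channels) passes through unchanged, which is what makes this a color correction rather than a brightness change.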

    Update: I found another edit, but this one is just the suit colors, and doesn’t take into account that the whole color cast is off.

    Also, there’s this analysis of new Superman (not so) buff vs. new Batman buff.


    Adobe is buying Macromedia

    Filed under: — adam @ 8:55 am

    This is a very interesting merger, and it totally makes sense.

    1. Adobe gets Dreamweaver, which is way more advanced than GoLive, in my limited experience.

    2. Adobe gets Fireworks to integrate into ImageReady. I’ve never really used Fireworks, but I’m told that it makes common web graphics a lot easier.

    3. Adobe now controls PDF _and_ Flash.

    And, possibly most importantly:

    4. Adobe has been doing a lot of dynamic PDF generation with J2EE. Macromedia has ColdFusion/JRun. I think, but I’m not sure, that I smell a really kickass CMS brewing here that integrates directly with Adobe’s image server. Moreover, it may be what I’ve been waiting for from Adobe for years – print and web output from the same system, and a web-based interactive system for editing content for print publications based on InDesign (or similar) templates.

    IBM has been working on techniques for combining PDF and XML documents via J2EE.

    I’ve put together a possible, pretty simple flow diagram for #4:


    Note to users of Earth

    Filed under: — adam @ 2:37 pm

    “Google’s success doesn’t automatically mean that every wacky idea is worthwhile.”


    Why photo editing is important

    Filed under: — adam @ 3:38 pm

    While looking at Kottke’s pictures from Paris, I immediately noticed that he’s got a good eye. But here’s why photography, in my mind, is not just about taking good pictures, but also about solid editing.

    Here’s one I particularly liked:

    Kottke Original

    I spent about 15 minutes with this, adding sharpening (big difference!), tightening up the color curves, and cropping a little. Granted, I’m making some assumptions about what the day looked like, but I didn’t get the sense that it was really hazy there.

    Adam Fields Edit

    In this particular case, the yellow hose here is such a defining factor that I might even go one step further and give it a real focus:

    Adam Fields Black and White Edit

    But maybe that’s too much.
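    For the curious, the kind of uniform, whole-image fix described above can be sketched as a per-pixel operation. This is a minimal illustration, not the actual edit; the saturation factor and levels endpoints here are made-up values for demonstration:

    ```python
    def adjust_pixel(rgb, saturation=1.4, black=10, white=245):
        """Boost saturation, then tighten levels -- a crude stand-in for
        'tightening up the color curves', applied identically to every pixel."""
        r, g, b = rgb
        # Rec. 601 luminance; saturating pushes each channel away from it.
        lum = 0.299 * r + 0.587 * g + 0.114 * b
        saturated = (lum + (c - lum) * saturation for c in (r, g, b))
        # Levels: remap [black, white] to [0, 255], clamping anything outside.
        scale = 255.0 / (white - black)
        return tuple(min(255, max(0, round((c - black) * scale))) for c in saturated)

    # A muted red becomes more vivid; neutral grays are left alone.
    print(adjust_pixel((200, 100, 100)))
    print(adjust_pixel((128, 128, 128)))
    ```

    Sharpening works differently (it's a neighborhood operation, typically an unsharp mask), but the same principle applies: one rule, applied everywhere, with no selective local edits.
    
    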

    These edited images are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike license.


    Sin City is pure beautiful genius

    Filed under: — adam @ 4:53 pm

    We saw Sin City on Friday, and I wanted to let it gel a little before writing it up. The more I think about it, the more I enjoyed it. It is brutal, ugly, violent, and unpleasant, and also one of the most interesting movies I’ve seen in a LONG time. It captured my interest from the very beginning, and didn’t let go. Unlike Sky Captain, the cinematography is varied and fresh, the pacing is good, and the characters are certainly not boring cookie-cutter templates without life.

    Obviously, it was very beautiful, and captured the revolutionary look of the books in a way that has never been done before. But, there’s been a lot of dismissal of the violence and the story as childish and simplistic, and I think it goes way beyond that.

    (Some spoilers inside.)

    I think that some of what I see here is represented heavily in Dwight’s characterization of Marv. This passage appears in A Dame to Kill For, one of the stories that didn’t make it into the movie, but was obviously important enough for them to use anyway.

    I’m no shrink and I’m not saying I’ve got Marv all figured out or anything, but “crazy” just doesn’t explain him. Not to me. Sometimes I think he’s retarded, a big brutal kid who never learned the ground rules about how people are supposed to act around each other. But that doesn’t have the right ring to it either. No, it’s more like there’s nothing wrong with Marv at all–except that he had the rotten luck of being born at the wrong time in history. He’d have been okay if he’d been born a couple thousand years ago. He’d be right at home on some ancient battlefield, swinging an ax into somebody’s face. Or in a Roman arena, taking a sword to other gladiators like him. They’d have tossed him girls like Nancy, back then.

    This is pre-pretty Dwight speaking, which was also largely dropped from the movie.

    This passage struck me as remarkably apt when I first read it, and again when I heard it delivered on screen.

    Sin City itself is, in fact, exactly the kind of world that best fits Marv. For some value of the word, he thrives there. He acquires a drive, in murder and revenge, and while he is ultimately done in by the forces that be, he goes willingly and defiantly to that end, having accomplished his goals of driving some greater evil than he from the world. But if you take the statement in the context of the real world, hopefully it’s true – Marv doesn’t fit here, in the kind of world we’d like to have. Maybe Sin City the story doesn’t either, and that’s okay.

    Dwight was my favorite character in the books, and he shines in the movie. Some have complained that he’s portrayed as sexist somehow, as the man that the women of Old Town need to save them from their own evil. I really don’t see that. He’s a man with a plan, yes. He’s a serious badass, yes. But there’s never a sense that Gail and the others can do any less than look out for themselves just fine. Sometimes you need to be saved, and sometimes you need to do the saving. If anything, this is a respectful relationship of equals. “Where to fight. It counts for a lot. But there’s nothing like having your friends show up with lots of guns.”

    That Yellow Bastard is just weird. I still have no idea how I feel about that. I do think the parallels between Marv and Hartigan are interesting – they’re both busted for taking out a Roark, they’re both tortured for confessions (which they both sign – and that’s a whole other analysis right there), and they both ultimately feel like their lives are worth trading for something larger than themselves – for Marv, it’s revenge in and of itself; for Hartigan, it’s an end to the chain. Cowardly? Maybe, but he’s also supposed to be MUCH older and in much more pain than he’s accurately portrayed in the movie. I can see where that sort of a decision might seem to make sense.

    There’s a lot of layered complexity in here, and I think there’s much more in there than credit is being given for.


    Illegal Tender

    If companies can insist on non-negotiable terms for every product sold as a service, why not terms for your money in return?


    ourmedia seems ready now

    I’ve been working on performance optimization for ourmedia for the past day and a half or so (for those not in the know – performance optimization is part of what I do). I’d submitted some photos and done a very small amount of template development a few weeks ago, but wasn’t involved in the launch, which, as you’ve probably heard, didn’t go so well.

    The site wasn’t able to stay up past about 300 concurrent users, let alone the 10,000 that slashdot brought. I did some emergency MySQL tuning on the current server to alleviate the load somewhat, but it was clear that the first priority needed to be migrating to a bigger dedicated server. This evening, we completed that move, and the site was brought back up.

    There’s still a bunch of tuning that needs to be done in short order, but it should hopefully be fairly stable from here on in.
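    For context, emergency tuning of this sort is mostly a matter of raising MySQL’s very conservative defaults in my.cnf. This is an illustrative sketch for a MySQL 4.x-era server – these are not the actual values used here, which depend on available RAM and the workload:

    ```ini
    [mysqld]
    # Allow more simultaneous connections than the default of 100.
    max_connections   = 500
    # Cache MyISAM index blocks (the stock default is tiny).
    key_buffer_size   = 256M
    # Keep frequently-reopened tables open instead of thrashing.
    table_cache       = 512
    # Reuse threads rather than spawning one per connection.
    thread_cache_size = 64
    # Serve repeated identical SELECTs straight from cache.
    query_cache_size  = 32M
    ```

    None of this substitutes for adequate hardware – hence the move to a bigger dedicated server – but it buys headroom while the migration happens.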

    I think this is huge, and I’m glad to have been a part of it so far. Congratulations to JD and Marc on their launch.


    The life of the converged device

    Filed under: — adam @ 10:34 am

    Chris Justus writes:

    “Hundreds of companies – big and small – are all racing to replace your cellphone / MP3 player / digital camera / video camera / PDA / PC / gaming platform / etc with a single device. I’ve been waiting years for this device to emerge, and it finally occurred to me that it is not ever going to happen. When did this epiphany occur? I was reading Wired magazine a month or two ago and somewhere in there, they recommended that small and/or home based businesses should go get combo fax/scanner/printers.”

    He goes on to explain why converged devices inevitably fail to be really good at any of the converged functions, particularly multifunction printers.

    I have to disagree. I recently bought a multi-function printer (Brother), largely because it seemed to be the cheapest way to get a sheetfed scanner for filing contracts and sending faxes (which I do on the computer anyway, not directly, as I have no land line). It’s no substitute for a good laser printer, but the ink jet performance is pretty impressive. Granted, I don’t ask much from it, but it works perfectly well.

    However, that’s not really my point. Getting convergence to work well is probably one of two kinds of problem:

    1. A shoehorn problem to stick together two things that don’t belong together (because they have different ideal form factors or different usage patterns or different failure rates)
    2. A design problem to work out the issues and present the proper functionality to the user

    I think the TV/VCR combo fits into the former. People look at these units and say “what happens when the VCR breaks?”. The shoehorn is really an insoluble issue – these devices don’t belong in one box together. I’d much rather the entertainment industry focus on standardizing the AV connections to make components easier to connect.

    The latter problem is a more interesting one. There are devices that go together, but there are “minor” issues to resolve. The multifunction printer/fax/scanner is actually, I think, a great example of three devices that should go together. A fax machine is just a scanner with a sheet feeder and a printer – so why not be able to use any of the functions independently, and make them better than you’d expect out of a normal fax machine? I think this is just a matter of getting the components cheap enough so that when you buy all three, it’s still cost-effective (and we’re just about there, now that the still-usable-but-still-pretty-crappy bottom of the ink jet and single-page scanner markets are around $30-$40). Complaining that the multifunction fax machine doesn’t give you adequate feedback about the fax is a design issue, not a convergence failure.

    I’m reasonably happy with my Kyocera Palm/Phone. Granted, there are some design issues, but a palm and a phone should go together. I want to have my calendar and contacts with me all the time, and the fewer devices I have to carry, the better. With a decent set of headphones, there’s no reason why this can’t be an MP3 player, too (although I have a standalone one from wayyyy back which I’ve never felt the need to replace). I’ve never been terribly interested in portable video, so TV on my phone seems like a bit much, but I think a standard KVM connector for portable devices would be really handy – so you can use the same machine through whatever kind of interface you have handy, but it all lives on the device you can take with you.

    Maybe it won’t be the next big thing, but there’s value in getting our devices to not just do as many things as possible, but also to do them all well. For many applications, I see no reason it isn’t possible.


    Why I don’t like Google Autolink

    Filed under: — adam @ 11:33 am

    Kottke thinks that the Google toolbar is a good idea. Here’s why I disagree.

    I have a strong visceral reaction to this because it disturbs the decentralized nature of the web. It’s the same reason people got upset about DoubleClick tracking visits from one site to another through a shared cookie. It’s because a lot of what makes the web the web is that there are disparate competing resources from LOTS of different sources, and Autolink gives that the finger.

    For me, the issue isn’t about modifying layout or even content, it’s about Google standing between the user and every other site and saying “you go here now”. Some things are bad just by being ubiquitous. In a sea of Amazon and Google Maps links, everything else will start to look out of place.

    As for reasonable intelligent adults being able to make their own decisions, as I’ve said before, I think that technology has gotten to be too pervasive for the non-technical to have enough information and perspective to make these decisions, and the informed experts need to take a stand against what we perceive to be detrimental trends being enforced without full and knowledgeable consent.

    Jason thinks this isn’t like DRM, but it is – it’s about centralized control. Don’t think for a second that this “puts the power in the hands of the user”.


    Is “We deeply regret this unfortunate incident” the hot new corporate motto?

    Following close on the heels of the news about ChoicePoint, another “I’m a large corporation and I just exposed the personal data of lots of Americans, and ho ho ho I’m just going to apologize.”

    “Bank of America Corp. has lost computer data tapes containing personal information on 1.2 million federal employees, including some members of the U.S. Senate.”


    There are a whole bunch of problems here.

    These companies face very little liability or regulation when it comes to aggregating personal data. So, Bank of America has financial information on 1.2 million customers. That’s to be expected. They are, after all, a bank and need access to your financial information. But once you step past that – what else is legal for them to do with the data? Are they liable if they expose it by accident? They have a privacy policy that says they’re careful with your data, but what happens if they break it? Are there actually any consequences?

    Bank of America actually has quite a detailed privacy policy, but what’s hidden here is important – it doesn’t say anything about the risks. “Remember that Bank of America does not sell or share any Customer Information with marketers outside Bank of America who may want to offer you their own products and services. No action is required for this benefit.” But also remember that Bank of America is a target, and your recourse is largely limited to “telling them your preferences”.

    I’ve been reading “The Digital Person” by Daniel J. Solove, and it’s been an eye-opener about the problems associated with the construction, storage, and use of digital dossiers. It’s possible that I haven’t gotten to the main point yet, but even in the beginning, he makes some good observations – the problems we’re facing here aren’t necessarily malicious, but they are impersonal and uncaring. The fact that an individual piece of data doesn’t really matter if it’s revealed doesn’t mean that lots of pieces all revealed together aren’t a problem.

    There are synergistic network effects at play here. It needs to be recognized that the “simple” collection and aggregation of large amounts of data has side effects in and of itself.
