Adam Fields (weblog)

This blog is largely deprecated, but is being preserved here for historical interest. Check out my index page for more up-to-date info. My main trade is technology strategy, process/project management, and performance optimization consulting, with a focus on enterprise and open source CMS and related technologies. More information. I write periodic long pieces here; shorter stuff goes on twitter.


The Tragedy of the Selfish, again and again.

I kept seeing this pattern emerge, and couldn’t find a good name for it (originally in reference to failures of the free market), so I came up with one. Simply put, The Tragedy of the Selfish is the situation that arises when each individual makes the logically best decision to maximize their own position, but the sum effect of everybody making their best decision is that everybody ends up worse off rather than better.

You buy an SUV, then other people do, because they want to be safer too. Except that if enough people make that same decision, you’ve raised the overall chance that if you’re hit by a car, it’ll be an SUV, which will do much more damage than a smaller car. Everyone is better off if everyone else backs off and drives smaller cars.

You buy a gun because other people have guns. Then other people do, because they want to be safer too. Then… you see where I’m going with this. Perhaps you’ve made yourself safer in some limited way, but you’ve decreased the overall safety of the system.

This is not safety, it’s mutually assured destruction.
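The dynamic above can be sketched as a tiny payoff-matrix game (it has the shape of a one-shot prisoner’s dilemma). The payoff numbers below are illustrative assumptions, not data: choosing the SUV is always individually better, yet mutual escalation leaves both players worse off than mutual restraint.

```python
# A minimal sketch of the SUV arms race as a two-player game.
# Payoff numbers are illustrative assumptions, not measurements.

PAYOFFS = {  # (my choice, other driver's choice) -> my payoff
    ("small", "small"): 3,   # mutual restraint
    ("suv",   "small"): 4,   # I'm safer, at the other's expense
    ("small", "suv"):   1,   # I'm the one getting hit by an SUV
    ("suv",   "suv"):   2,   # mutual escalation: worse for both than 3
}

def best_response(other_choice):
    """The logically best decision to maximize my own position."""
    return max(("small", "suv"), key=lambda mine: PAYOFFS[(mine, other_choice)])

# Whatever the other driver does, buying the SUV is my best move...
assert best_response("small") == "suv" and best_response("suv") == "suv"

# ...so everybody ends up in SUVs, and everybody is worse off.
everyone_selfish = PAYOFFS[("suv", "suv")]
everyone_restrained = PAYOFFS[("small", "small")]
assert everyone_selfish < everyone_restrained
```

Swap in guns for SUVs and the structure is identical: the dominant individual strategy produces the collectively worst stable outcome.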


My problem with the Netflix restructuring

I can accept that DVDs are a dying business with no future growth and _escalating_ costs. I can accept that Netflix wants to get out of that business and move forward, even if the streaming product is still nascent and not competitive yet. I like Netflix, and I’ve been a loyal customer since before they had unlimited plans (I was the first person I knew to get a DVD player).

I accept that all of this might be necessary and painful to grow the business. But the thing is – it’s not our burden as customers to carry those costs, and it’s disingenuous to ask us to do so. The fact is, while DVDs are limited in growth, they’re still the better product with far more selection, and the DVD business you’re jettisoning is still profitable. If you want us to switch to a worse product that may be better in the future, great. Lower your prices to compensate. All of this brouhaha could have been avoided if you’d announced that everyone’s plan was a dollar per month cheaper until the streaming selection got better.

We’ll bear with you to make a difficult transition. Asking us to do so while giving us a worse experience and making us pay more for the privilege feels like taking advantage.

It’s not too late to change your mind.


Guacamole Alfredo

Filed under: — adam @ 3:00 pm

I came up with this dish in response to a food question, and it sounded so good I had to make it the next night. It was promptly devoured by all present. It’s a great summer pasta dish.

Dice 2-3 heirloom tomatoes (or more), 3-4 avocados (or more), and 1 large onion.

Boil and salt 5-6 quarts of water for the pasta.

Heat a generous amount of olive oil in a medium frypan. Add the onion and sweat over medium-low heat until almost soft, then add half of the tomatoes and cook for about 2-3 minutes over medium heat. Add a generous sprinkle of salt and pepper. Meanwhile, cook a pound of fresh fettuccine.

Guacamole Alfredo

Add 2-3 cloves of minced garlic and half of the avocados, and heat through until softened.

Guacamole Alfredo

Mince 3-4 tbsp. of cilantro, and add to a large bowl with the rest of the tomatoes and the avocados (not pictured).

Guacamole Alfredo

When the pasta is done, reserve a half-cup of the water, then drain and return to the pot. Toss with the onion mixture and cook over low heat for 30 seconds until the pasta absorbs some of the oil. Put pasta into the bowl with the tomatoes and avocados, and toss. Sprinkle with 1-2 tsp lime juice. Add some of the reserved pasta water if it’s too thick (unlikely).

Guacamole Alfredo

Top with parmesan and serve with a crusty semolina bread.

It’s vegan-friendly as is, but I think it would also be great with shrimp.


Why all this mucking about with irrevocable licenses?

The Google+ Terms of Service include various provisions to give them license to display your content, and this has freaked out a bunch of professional photographers:

‘By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services.’

I don’t even understand why this is necessary. Why can’t this just be ‘you give us a license to display your content on the service until you delete it’?


Choose Real Food

Filed under: — adam @ 6:56 pm

Here’s an edit I did of the new “Food Plate” graphic. I think this is more accurate.



Sugar may be toxic, but that NYT article doesn’t demonstrate it.

The NYT magazine ran this article on how sucrose is probably a poison that causes cancer and a whole raft of other ailments.

Unfortunately, the article is so poorly written and presents so little actual evidence that I’m shocked at the number of otherwise rational people who are simply taking it at face value. John Gruber, whose analysis I usually respect, writes, “It’s not often that a magazine article inspires me to change my life. This is one.”

Here are a few specific comments:

  • The article still perpetuates the assumption that high fructose corn syrup is identical to sucrose because they’re both made up of fructose and glucose. Setting aside the obvious difference that a 50% split between fructose and glucose is not the same as 45% glucose and 55% fructose (oh, but right – it’s “nearly” the same), sucrose is a disaccharide and HFCS is a mixture. Sucrose does easily break into glucose and fructose in the presence of sucrase, but the fact that there’s an enzymatic reaction there means that the rate at which it happens can be regulated. Sucrose and HFCS are different things, in much the same way that a cup of water is different from a balloon filled with hydrogen and oxygen, or a pile of bricks is different from a house. Every subsequent opinion in the piece that sugar is bad is doubly applicable to HFCS.
  • The article doesn’t actually cite any concrete evidence to support its hypothesis that sugar is toxic. It doesn’t even cite any sketchy evidence to support its hypothesis. Meanwhile, here’s a bit of recent research that suggests the opposite: “Female mice [getting 25% of their calories from sugars] that had been reared on the unbound simple sugars [(fructose and glucose mixture)] experienced high rates of mortality, beginning 50 to 80 days after entering the enclosure. Their death rate was about triple that of sucrose-treated females”.
  • Lustig’s YouTube presentation on which the article is based is fairly interesting. As far as I can tell, all it does is make the case that fructose is a poison in large quantities, that excessive amounts of sugar are worse for you than excessive amounts of fat, and that juice, soda, and “low-fat” processed crap that substitutes sugar (but primarily in the form of HFCS) for fat are responsible for the obesity and diabetes epidemics. Most of which is completely reasonable, although I think he ignores the sucrase regulation pathway, which is probably the most critical factor. But nowhere does he say that the body can’t metabolize _any_ sugar safely, which is the main thrust of the NYT piece, based on exactly, as far as I can tell, zero evidence. Lustig’s conclusion is exactly what it’s stated as at the beginning of the article: “our excessive consumption of sugar is the primary reason that the numbers of obese and diabetic Americans have skyrocketed in the past 30 years. But his argument implies more than that. If Lustig is right, it would mean that sugar is also the likely dietary cause of several other chronic ailments widely considered to be diseases of Western lifestyles — heart disease, hypertension and many common cancers among them”. It’s a long leap from there to the position that any sugar consumption is bad, which his argument doesn’t actually imply. Drinking a few cups of water a day is good for you. Drinking a few gallons is probably not so good.
  • Here’s an example of the kind of “argument” in the article: “In animals, or at least in laboratory rats and mice, it’s clear that if the fructose hits the liver in sufficient quantity and with sufficient speed, the liver will convert much of it to fat. This apparently induces a condition known as insulin resistance, which is now considered the fundamental problem in obesity, and the underlying defect in heart disease and in the type of diabetes, type 2, that is common to obese and overweight individuals. It might also be the underlying defect in many cancers.” Of course, it completely ignores that the fructose does not hit the liver in sufficient quantity and with sufficient speed under normal circumstances, and it even flat out includes the counter-hypothesis that the liver is perfectly capable of metabolizing sugar up to a certain point with no detrimental effects.

Excess sugar is clearly bad. I accept that it’s probably even worse than excess fat. I don’t see even a small shred of evidence to accept the logical leap presented in this article that eating a cookie will increase your cancer risk in any meaningful way. Absolutely, we need to study this more. Concluding that sugar is toxic in normal quantities based on the available evidence is ridiculous. Despite the indecision in the article, it’s not hard to define “normal quantities”. I’m the first to agree that the current “sugar in everything” trend in packaged food is bad, and it’s important to check the nutritional labels. HFCS has no business being in bread. The brands you grew up with are not indicators of quality. In fact, I’d go so far as to say that if your food has a nutritional label at all, you’re already at a disadvantage.

Eat more whole foods. Stop taking your calories in liquid form. Cooking at home is different. Change your food chain.



Some thoughts on salad

Filed under: — adam @ 1:46 pm

A few years ago, I decided to eat a salad for lunch at least 5 days a week. It’s a great way to make sure you get a lot of vegetables, and if you do it right, it’s very satisfying. I didn’t want to do Bittman’s “vegan before 6pm” diet, but this is a similar approach. It also takes a lot of the guesswork out of what I’ll have for lunch on any given day. I usually make my own. If you’re on the go, Fit & Fresh makes a very convenient shaker container with a dressing compartment and a removable ice pack.

For me, a salad is at minimum: a green leafy vegetable (lettuce or spinach), cucumber, some sort of tomato, and dressing. Everything else is optional, but I try to mix in at least one ingredient from the following categories. The key is getting a range of interlocking textures and complementary flavors. Buy the best ingredients you can find.

Lettuce: I usually use a heartier crunchy lettuce (romaine) or a greens mix. Local greens are always preferable, and you can still find farmers who do greenhouse greens in the winter. People will tell you not to cut lettuce but to tear it up with your hands. I don’t find that it makes a difference either way. Always rinse lettuce a few times in a salad spinner and then soak it in very cold water for 5-10 minutes before drying and using it. Unwashed lettuce will usually last about a week in the fridge if it’s fresh. Washed lettuce may last 2-3 days – store it in an airtight container with a folded paper towel to absorb excess moisture. If there isn’t good lettuce to be found, baby spinach makes a nice alternative. You can wash grape or cherry tomatoes with the lettuce.

Cucumber: Local is always better. Generally, the smaller varieties will have more flavor and crunch – I usually use kirbys or small pickling cucumbers. Peel any cucumbers that have been shipped loose – they’re coated with a paraffin layer to protect them.

Tomato: If local tomatoes are in season, use the big ones, preferably heirlooms. They’ll dominate the salad, and when they’re in season that’s probably fine. Otherwise, tomatoes for salad should always be the smaller cherry or grape tomatoes. Out of peak season, these are the only ones that are tolerable, and they get less so as the winter wears on. I usually include them anyway for some color and texture.

Other vegetables: Depending on my mood, I’ll include some diced red, orange or yellow pepper (but almost never green – I’m not that fond of bitter flavors). Cooked beets add a nice sweetness if you like them. Shredded carrots can be nice, but I usually find their flavor too strong. To the detriment of my breath, I’ve been cursed with whatever Eastern European gene causes me to crave raw red onions, especially in the winter. In the summer, I like to use raw local corn.

Animal Protein: I often omit the animal proteins for side salads, but without them it doesn’t really feel like a meal, so I always include at least one in my lunch salads – a hard boiled egg (use eggs that are 2-4 weeks old, start in cold water, bring to a boil, cover, let sit for 8-9 mins, shock in ice water), crumbled bacon, diced leftover chicken or steak, or a few cooked shrimp (thaw frozen shrimp in water, then boil for 3 minutes and shock in ice water).

Fruit: In the summer, this will be a sliced peach or plum, in the fall it’ll be apple or pear. Dried fruits work well – raisins or cranberries. Raisins pair well with honey mustard dressing and bright vinaigrettes, cranberries go better with creamy dressings.

Cheese: I usually avoid cheese, but I have a periodic craving for the combination of blue cheese, roasted garlic vinaigrette, beets, and nuts. I think most cheese doesn’t mix well with dressing, but there are a few combinations that work.

Dressing: My favorite dressing of all time is Brianna’s Poppy Seed Dressing. It’s creamy and thick, goes with just about everything, and is the exception to my belief that most bottled dressings aren’t very good. I also like some varieties of honey mustard, or I’ll make a vinaigrette. In the summer, I like to make a vinaigrette with a bold raspberry vinegar. Use whatever you like. I’d avoid lowfat dressings, because they generally don’t taste very good. You’re eating a salad instead of a chalupa! You can have a little fat.

Crunch: I like to include at least one crunchy element – croutons or slightly toasted (300F for 6-8 mins) walnuts or pecans. If you buy croutons instead of making your own, look for those without HFCS.

A few suggestions:

My standard winter salad: romaine lettuce, persian pickling cucumbers, grape tomatoes, sliced red onion, diced cold chicken/bacon/hardboiled egg, croutons, dried cranberries, poppy dressing.

My alternate salad: mixed green lettuce, cucumbers, tomatoes, red onion, shrimp, crumbled bacon, diced beets, croutons, plus either raisins and honey mustard dressing or crumbled blue cheese and roasted garlic vinaigrette.

My favorite summer salad: mixed green/red lettuce, cucumbers, heirloom tomatoes, sliced red onion, sliced cold skirt/flank steak, raw corn, sliced peaches, croutons, poppy dressing.

Suggest some of your favorites in the comments or on twitter.


My Huck Finn edit

“Jim was monstrous proud about it, and he got so he wouldn’t hardly notice the other dudes. Dudes would come miles to hear Jim tell about it, and he was more looked up to than any dude in that country. Strange dudes would stand with their mouths open and look him all over, same as if he was a wonder. Dudes is always talking about witches in the dark by the kitchen fire; but whenever one was talking and letting on to know all about such things, Jim would happen in and say, ‘Hm! What you know ’bout witches?’ and that dude was corked up and had to take a back seat.”


Sous Vide Poached Egg

Filed under: — adam @ 9:38 am

Sous Vide eggs are tricky, because the yolk sets at a lower temperature than the white. If you cook a whole egg in the shell, you either get a properly cooked yolk with a runny and gelatinous white (some people like this, some think it’s like eating a ball of snot), or you get a properly set white with a really overcooked yolk. As far as I can tell, it’s not possible to get a perfect “poached egg” with the sous vide method, if you cook the egg whole in the shell.

To deal with this, I separated them and cooked them individually at two different temperatures. You can cook the white first at a higher temperature to just set it (160-162F or so, depending on how firm you like it), and then lower the temperature (to 144F or so) and add the yolk. To make this a little more convenient, you can even do the white in advance, chill it, and keep it and the separate yolk in the fridge overnight, then add the yolk and cook them in the morning. The white takes about 60-90 minutes to set (depending on whether you start from fridge or room temperature), and then the yolk takes about another 60-90 minutes. Cooking the yolk will also bring the white back up to serving temperature without overcooking it. It’s not quite fast enough for a rushed morning, but that’s acceptable timing for a lazy morning. 

I tried leaving them in the water overnight at 144F, and that didn’t work – the yolks got completely overcooked and gummy. There might be a lower temperature at which that would work. Even still, unlike with regular poached eggs, there’s very little fuss. This method takes longer, but it doesn’t require you to do anything while it’s cooking.

As I looked around for a proper vessel to cook them in, I found something I’d dabbled with but never really gotten good results with, which in retrospect is actually pretty perfect for sous vide cooking: an egg coddler.

Sous Vide Poached Egg

Yes, you can use your hands to separate eggs, but I wanted to be extra careful not to break the yolk membrane. I put the yolks in a covered bowl in the fridge while the white cooked.

Sous Vide Poached Egg

Here you can see the thin layer of undercooked white that was left over with the yolks, and the more properly cooked white layer underneath:

Sous Vide Poached Egg

You can serve it directly out of the coddler, or turn it out into a bowl:

Sous Vide Poached Egg

Here you can see how perfectly runny the yolk is, and the white is creamy and well set:

Sous Vide Poached Egg

This is a great poached egg. I’m not sure it’s that much easier or more convenient than regular poached eggs in terms of timing, but it certainly requires less effort for excellent and tasty results. I think this is probably overkill if you’re just cooking for one (the above was an experiment and I didn’t want to ruin a lot of eggs if it didn’t work out), but it would work just as well with a dozen or more. Doing a single poached egg isn’t that much effort, but doing a lot of them can add up. I also got very good results using a small Le Creuset ceramic crock with a lid, though that can’t be submerged in the same way that the egg coddler can. If you want to use something like that, you’ll need a rack to keep it near the top of the water level.


I’ve found that this silicone rack is the perfect size for the SVS. You’ll need two of them for a short crock/ramekin.


Sous Vide French Toast

Filed under: — adam @ 9:27 am

After a great success with scrambled eggs, I wondered if it would be possible to make french toast in the SVS. I’ve had some good results with french toast the regular way, but it requires a lot of advance planning, and I find it difficult to ensure that the egg mixture gets absorbed and cooked all the way through without making it soggy in the middle.

I whipped up 8 eggs, added about 3/4 cup of milk, a splash of vanilla, and a pinch of salt. I added this to two slices of homemade challah in a ziploc vacuum bag and sealed it. I then shook the bag to distribute the liquid evenly and sucked the air out with the pump. These bags are much easier than the Foodsaver bags when you’re dealing with liquids, because you don’t have to worry about the seal getting gunked up.

Sous Vide French Toast

I then cooked them in the SVS for an hour and a half at 147F, removed them from the bag, and put them in a hot skillet with a little butter to brown the outside on both sides.

Sous Vide French Toast

They came out perfectly – slightly crispy on the outside, and evenly cooked throughout, with a wonderfully creamy yet substantial texture.

Sous Vide French Toast



Making Sous Vide Custard

Filed under: — adam @ 11:35 am

Drawing on some inspiration in this post on creme brulee at SVKitchen, I decided to try a custard. Since I bought entirely too many blueberries this season, and the last of the bunch is rapidly aging in my fridge as I try to use them up before they go bad (I’ve already frozen as many as my freezer can handle), I decided to top this batch with a blueberry gel.

The SVKitchen folks used a set of fiberglass rods to elevate the tray to allow the custard cups to sit near the top of the oven while maintaining the proper water level, but it turns out that one of my cooking racks fit perfectly underneath the included one:

Making Sous Vide Custard

Making Sous Vide Custard

The normal way to make custard is to cook the cream and sugar at a moderate heat together to mix them, and then add beaten eggs and cook in a water bath. I thought the SVS could make this easier, so I just mixed all of the ingredients together in the mixer until they were combined but not frothy (do the last little bit by hand for more control). I doubled Bittman’s standard custard recipe (I’ve pretty much given up on the book and use his $2 searchable iPhone app all the time) and substituted vanilla for the nutmeg and cinnamon, since I’m allergic to nutmeg and I like vanilla. This doubled recipe calls for 4 cups of cream, 1 cup of sugar, 4 whole eggs, 4 egg yolks, and a pinch of salt plus flavorings:

Making Sous Vide Custard

This recipe made enough to fill eight 8-oz ramekins all the way to the top, plus a little left over. I filled the cups through a mesh strainer to get out any last unmixed bits, covered them with plastic wrap, and cooked them (covered) in the SVS at 175F for an hour:

Making Sous Vide Custard

When they were almost done, I cooked about two cups of blueberries over moderate heat with a tablespoon or so of sugar and bloomed a packet of gelatin in a bowl of water. When the blueberries were cooked through and starting to burst (about 5-7 minutes), I stirred in the gelatin and let them cook for a few more minutes. Then I drizzled the hot syrup over the top of the cups:

Making Sous Vide Custard

After chilling in the fridge for about 4 hours, they were absolutely fantastic. The texture is flawless, the flavors are great, they’re perfectly cooked all the way through, and the whole thing only took about 15-20 minutes of actual effort.

Sous Vide Custard



Why I don’t eat High Fructose Corn Syrup (HFCS)

Filed under: — adam @ 10:11 am

The following is a catalog of my somewhat unscientific objections to High Fructose Corn Syrup (HFCS), across a number of different axes:

Health / Chemical

It’s not “Just like sugar”

Proponents of HFCS claim that it’s “just like sugar”, but that’s not strictly true. Even the form of HFCS that’s closest in chemical formulation to sucrose is 55% fructose and 45% glucose, and it’s a liquid at room temperature. Fructose metabolism behaves differently in the absence of glucose, and in practice that ratio seems like enough to tip the scales in that direction.

HFCS is a mixture

HFCS is a mixture, not a compound. In the case of sucrose, it’s a very weak bond between the fructose and the glucose, but there is a bond there that can be used to regulate the rate at which it’s metabolized (cleavage of the disaccharide into glucose and fructose happens in the small intestine). When you eat HFCS, you just dump a bunch of fructose and glucose on your metabolism all at once, to be absorbed as quickly as possible. I haven’t seen any research examining whether this is a problem or not, but it seems like it would be.
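The regulation argument above can be sketched with the standard Michaelis–Menten rate law, under which an enzyme-mediated reaction saturates at a maximum rate no matter how much substrate arrives. The sucrase framing comes from the paragraph above; the `vmax` and `km` values here are illustrative placeholders, not measured constants for sucrase.

```python
# Michaelis-Menten kinetics: an enzyme-mediated reaction (like sucrase
# cleaving sucrose) is capped at vmax, so the rate is regulated no matter
# how much substrate you eat. Free fructose and glucose from HFCS need no
# such cleavage step. vmax and km are illustrative placeholders.

def mm_rate(substrate, vmax=10.0, km=2.0):
    """Reaction rate at a given substrate concentration."""
    return vmax * substrate / (km + substrate)

# Doubling an already-large substrate load barely moves the rate:
rate_at_20 = mm_rate(20.0)   # ~9.09, close to the vmax ceiling
rate_at_40 = mm_rate(40.0)   # ~9.52, still capped below vmax
```

The point of the sketch is only the shape of the curve: past a certain load, the enzymatic pathway throttles throughput, while a pre-split mixture has no corresponding bottleneck at that step.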

Research shows that it can be unhealthy

There is an increasing body of research pointing to excess fructose or HFCS specifically as being responsible for weight gain, raising bad cholesterol levels (see the Personal Experience section at the end) and causing cancers to grow faster.

Other thoughts on Fructose

I don’t know of any research examining whether the fructose in fruit or agave syrup has similar effects. My guess would be that the fructose in fruit is buffered by everything else in the fruit (see the coda on nutritionism at the end) and that agave syrup is probably not great for you either, but I have no evidence to support either of those assertions.



Taste

I don’t like the way HFCS tastes – I find that foods sweetened with it have a somewhat sickly flavor, and a lingering unpleasant aftertaste.

HFCS is a marker for cheap ingredients.

Companies that put HFCS in their food do so because it’s cheaper than sugar, not because it’s better than sugar. A few cents extra per loaf of bread makes a huge difference when you’re selling a few million loaves, and it makes a lot less difference when you’re buying one loaf. I try to make as much of my food from ingredients I personally choose, but when I have to buy packaged food, I generally want it to be as good as it can be. In my experience, foods that avoid HFCS also tend to use better ingredients and have better overall quality. I’m disgusted by how difficult it is to find food in the supermarket that doesn’t contain it.


Political / Agricultural

HFCS is an industrial byproduct of corn subsidies. This is a very deep subject with a large number of complex interactions, but one thing is pretty clear — the aggregation of incentives for many farmers line up to cause them to grow lots of corn (and soybeans) to the detriment of other products. Monocultures in farming are generally problematic, and I think we should be encouraging more biodiversity instead of less. Vastly simplified, the government makes it financially attractive for a large number of farmers to grow very few varieties of corn with the use of petrochemical fertilizers and pesticides. This increases our dependence on foreign oil, it weakens the basis of our foodstocks, and it gives us a large number of very cheap byproducts that make their way into everything. Michael Pollan has given this subject far more exploration than I could – I highly recommend reading the chapter on corn in The Omnivore’s Dilemma (or the article on which that chapter was based).

Personal Experience

Sometime over the summer of 2008, I made the personal decision to eradicate as much HFCS as possible from my diet. I would no longer voluntarily buy any processed food containing HFCS, and I would make conscious attempts to avoid it. I had my cholesterol checked in June, before I started this experiment, and again in December. During that time period, with no other lifestyle changes, my Triglyceride count dropped by 39 points and my LDL count dropped by 28 points. I attribute this change entirely to the direct and indirect effects of cutting out HFCS – cutting out HFCS itself, cutting out the other processed ingredients that often go along with it, and decreasing my consumption of processed food overall. In actuality, HFCS itself may be entirely benign (though I see little evidence of that), but I feel that removing it from my diet was an unqualified net good. Unfortunately, it’s been impossible to remove entirely, as most restaurants use it. As a result, I’ve been trying to cook at home more (with a little help), which has also been largely a net good.


(Coda on Nutritionism vs. Nutrition)

Michael Pollan makes a really good point about eating whole foods in In Defense of Food (and the essay on which it was based). The whole essay is worth reading, but this section stood out for me:

Also, people don’t eat nutrients, they eat foods, and foods can behave very differently than the nutrients they contain. Researchers have long believed, based on epidemiological comparisons of different populations, that a diet high in fruits and vegetables confers some protection against cancer. So naturally they ask, What nutrients in those plant foods are responsible for that effect? One hypothesis is that the antioxidants in fresh produce — compounds like beta carotene, lycopene, vitamin E, etc. — are the X factor. It makes good sense: these molecules (which plants produce to protect themselves from the highly reactive oxygen atoms produced in photosynthesis) vanquish the free radicals in our bodies, which can damage DNA and initiate cancers. At least that’s how it seems to work in the test tube. Yet as soon as you remove these useful molecules from the context of the whole foods they’re found in, as we’ve done in creating antioxidant supplements, they don’t work at all. Indeed, in the case of beta carotene ingested as a supplement, scientists have discovered that it actually increases the risk of certain cancers. Big oops.

What’s going on here? We don’t know. It could be the vagaries of human digestion. Maybe the fiber (or some other component) in a carrot protects the antioxidant molecules from destruction by stomach acids early in the digestive process. Or it could be that we isolated the wrong antioxidant. Beta is just one of a whole slew of carotenes found in common vegetables; maybe we focused on the wrong one. Or maybe beta carotene works as an antioxidant only in concert with some other plant chemical or process; under other circumstances, it may behave as a pro-oxidant.

Indeed, to look at the chemical composition of any common food plant is to realize just how much complexity lurks within it. Here’s a list of just the antioxidants that have been identified in garden-variety thyme: 4-Terpineol, alanine, anethole, apigenin, ascorbic acid, beta carotene, caffeic acid, camphene, carvacrol, chlorogenic acid, chrysoeriol, eriodictyol, eugenol, ferulic acid, gallic acid, gamma-terpinene, isochlorogenic acid, isoeugenol, isothymonin, kaempferol, labiatic acid, lauric acid, linalyl acetate, luteolin, methionine, myrcene, myristic acid, naringenin, oleanolic acid, p-coumoric acid, p-hydroxy-benzoic acid, palmitic acid, rosmarinic acid, selenium, tannin, thymol, tryptophan, ursolic acid, vanillic acid.

This is what you’re ingesting when you eat food flavored with thyme. Some of these chemicals are broken down by your digestion, but others are going on to do undetermined things to your body: turning some gene’s expression on or off, perhaps, or heading off a free radical before it disturbs a strand of DNA deep in some cell. It would be great to know how this all works, but in the meantime we can enjoy thyme in the knowledge that it probably doesn’t do any harm (since people have been eating it forever) and that it may actually do some good (since people have been eating it forever) and that even if it does nothing, we like the way it tastes.



I think that about covers it. I welcome comments.



Sous Vide Black Beans

Filed under: — adam @ 8:53 pm

I couldn’t find a recipe for making dried beans sous vide for my Sous Vide Supreme, so I winged it. It worked really well.

1 cup dried black beans, rinsed
1 diced medium red onion
Roughly the same volume of beans in ice cubes

Preheat SVS to 180F. Seal all ingredients in a vacuum bag.

Sous Vide Black Beans

Cook for 36 hours. After 24 hours, I squeezed the bag and it still felt a little firm, so I put them back in. The beans were completely tender all the way through but not squishy, and had a really pleasant texture. Salt to taste before serving.



Cooking at home is different

Filed under: — adam @ 3:07 pm

There’s a bit of a debate going on about whether the lack of cooking at home is responsible for people eating unhealthily. Matt Yglesias has a piece arguing that cooking at home isn’t fundamentally different from restaurant cooking, and “If someone – Jamie Oliver, for example – devised an appealing mass-market food product that was better than Taco Bell on the taste/price/convenience dimension but also healthier, well that would be an excellent thing for the world.” Well, it sure would. It would be nice if someone could make a car that drives like a BMW but doesn’t use any gas and costs less than $1000 too.

“Cooking yourself” is not the point. “Cooking at home” is. This is because home cooking is different from restaurant cooking, and yes, there is a fundamental difference between food you prepare for yourself and food prepared by other people, at least when the latter is in a commercial/restaurant context. Unless you have a private chef, food prepared for you by other people is food prepared for… whomever. This difference is largest at scale. Industrial food is the way it is because it’s designed to be prepared by people with no skill at cooking, for a clientele who may show up at any time and want what they want, and when you do that, you lose all kinds of properties of the food that go into making it healthier. You lose varietal selection. You lose focus on balance. You lose accounting for individual tastes. You lose someone insisting that you eat your vegetables (both because they’re good for you and also because whoever cooked them put a lot of effort into making them for you). You lose the incentive to not use cheaper ingredients (or at least you divorce yourself from that decision). You lose the incentive to not use flavor boosters that are unhealthy. You lose the ability to make food on demand, so there’s incentive to use ingredients that will store better. Fine cuisine doesn’t fare much better, because it’s optimized not for health, but for flavor and pleasure.

Healthy food has a lot of properties that are, I think, inherently unscalable. Saying that restaurants should offer cheap healthy options is not understanding the problem. Yes, cooking at home is a lot of work, and sometimes that takes away from the time you could be using to watch a movie or read a blog, but the benefits are immense, and they won’t realistically ever come out of a restaurant. Is that really such a bad thing?


Attn: Dyson, Inc.

Filed under: — adam @ 2:17 pm

“I have purchased a number of household appliances, more than a few of them made by the Dyson corporation, and never have I been so embarrassed about any of them as I am about this Dyson DC16. I am embarrassed for myself that I didn’t immediately return this unit after trying it out the first time, instead giving you the benefit of the doubt. I am embarrassed for you that you produced it and sold it as a functional appliance, sullying the reliability of your brand.

During the first year I had this handheld vac, never once did a full charge of the battery last more than 5 minutes. When it got down to less than 3 minutes, I spoke to your support representatives, who told me that the charger was probably defective, and sent me a new one. That helped for a short period of time, and I lived with it.

However, I’ve reconsidered. I no longer want to own this device. Its complete inadequacy at the task for which it was designed plagues me, and I am sufficiently disgusted with it that I cannot bear to pass it on to someone else. Nor am I willing to just throw it in the garbage – it is not worthy to even contribute to landfill somewhere.

I am therefore left with no choice but to return it to you. Please do as you see fit.”


Something interesting about scarcity

We used to have a 6-at-a-time Netflix plan. We’d get 6 movies, but then sometimes we’d go months before watching them, or even just deciding that enough was enough and sending them back. And frequently, even among those 6 movies, there would be nothing we wanted to watch on any given night. And then an interesting thing happened. As part of a general cost trimming in our house, we dropped down to a 3-at-a-time plan. And suddenly we started watching a lot more Netflix movies. With 6 movies to choose from, there was always “something else” to watch, and we didn’t have to worry about clearing out all of the cruft to make room for something we really wanted to watch. As a result, we didn’t think as carefully about whether we’d really want to watch a new movie, because just renting something wouldn’t really block something else that we wanted to see more if it came along. But when we introduced a little artificial scarcity into the mix, a slot became something worth protecting from something we didn’t really want to see, and we started thinking more about which movies to put to the top of the queue, and then actually being more aggressive about watching them and sending them back.

This seems like a strange effect to me – we’re paying less, we’re technically using less (3 out instead of 6 out), but we’re turning over more, so the net effect is probably that we’re heavier users now than at 6-out. Because the pricing is only on the number out instead of the turnover, we’re unarguably paying less and using more, even though we’re technically on a “lower usage” plan. At this point, even if we wanted to spend the extra money, I have no desire to go back to a 6-out plan, because I like the extra sense of urgency that comes from having the out slots be a scarce resource, and it makes me want to use the service more.

I don’t know if this makes me a better Netflix customer or not, from their perspective. Obviously I’m paying less money to them per month and using more in mailing fees, but I’m also holding onto premium movies for a lot less time than I used to, freeing them up to be sent to other customers.

Anyone notice the same thing?


Thoughts on the new Star Trek

Filed under: — adam @ 9:58 am

First, I loved it. I think it was the best movie I’ve seen in a long time. It treated the source material with respect, but established its own direction. The casting was basically flawless, and each of the major characters settled into their respective roles as gracefully as putting on a new pair of shoes from the same brand in the same size you were wearing before. The IMAX version is huge, but sit back more than you think you need to. We ended up being a little too close and it was sometimes hard to focus on the fast action scenes.

Spoilers ahead.

I loved the new bridge, and I was completely wowed at the omnipresent reflections and lens flares going on in the foreground. It really added the sense of being there and catching light bouncing off of some shiny panel, of which there are now many. Similarly, I loved the scope of the new engineering. Finally… it looks big.

All of the performances were grand and Zachary Quinto was impressive as a young Spock, but I think Karl Urban gets a special callout for really nailing the crotchety Bones. (And he got shafted out of a special commendation for saving the entire universe by sneaking Kirk onto the Enterprise in the first place.)

I thought the time travel execution was very successful, and I very much liked that they didn’t take the standard “everything gets resolved at the end of the episode” tack and left things messy instead. The seamless shift into what otherwise would have been a reboot or a lifeless prequel… completely works for me. This is a different universe, most notably in the way that six billion Vulcans are now missing. As the Vulcans are the main ambassadors of the Federation for first contact, this has drastic implications for the influence and power of the Federation. But, as a mitigating factor, Spock is back from the future with 130 years of accumulated knowledge. Vulcans have essentially photographic memories and the ability to share thoughts widely without having to explain them, and Spock doesn’t seem shy about applying this where necessary to rebalance things. And it’s not just technological knowledge – he’s one of the most well-versed people ever in galactic politics. He knows where all of the hidden enemies and backstabbing and power grabs are going to come from, he knows which alliances are likely to form, and he knows which resources people are going to be fighting over. This is a unique position – he can prepare the Federation in advance to deal with all of these threats before they fully manifest. As time goes on and the timelines diverge, his knowledge will become less useful, but it should still provide the Federation with a significant advantage in the short run, enough to make up for the lost influence of most of the Vulcans (and they’ll still have the thousands who survived to carry on at least some of the legacy).

Going forward, this is a very different universe, and I very much look forward to seeing how it unfolds. I hope they consider returning to a serial format, though not necessarily TV. There’s way too much rich material to mine here now to only tease us with a single movie every few years, and I think it would be a waste of this potential to fully focus on the action stories which the movies tend to favor (which is not to say that they’re not a ton of fun). But for the first time in a long time, I need to go see it in the theater again.


Why I’m not going to see Watchmen tonight

Filed under: — adam @ 12:24 pm

It just doesn’t look like a very good movie to me. I didn’t like 300 terribly much. It was visually faithful to the book, but I found it fairly boring for most of the way through. I’m tired of ILM demo reels masquerading as masterwork films. The actors, with the exception of Silk Spectre, all seem about 10-15 years too young, and far too shiny. Watchmen is not supposed to be a shiny movie, except in very specific parts. Also, by and large, it’s not an action movie, again except in very specific parts.

The goal of “I’m doing this so someone else won’t fuck it up worse” is laudable, but ultimately flawed.

I expect it’s going to be a lot like the Hitchhiker’s Guide movie – visually faithful, but stripped of everything great in the book except glancing references to it. I don’t really need a reminder to go read the book again.

One of the most striking moments of the book is when you realize that even the visual panel structure in issue 5 is symmetrical around the assassination attempt on Veidt and is flanked on both sides by about eighty thousand important plot elements that have been carefully arranged for you, by hand, in advance. That is when you realize that what you’re holding in your hands is really something special. It can’t be done as a movie, because it’s not something that can make its impact when it just flashes by. You have to sit there and stare at the page, and flip back and forth, and let it sink in, and sometimes take a few minutes to just absorb everything in one panel.

I’m guessing… not. Maybe I’ll see it eventually, but not tonight.


Toys and Testing

BoingBoing reports that new rules on consumer safety threaten to put small producers out of business because the testing is too expensive.

I have a few thoughts on this.

This is a pretty common libertarian vs. nanny state disagreement – should consumers be allowed to make their own choices? – but I don’t think it’s that simple, for a few reasons. (Before you go on, I think it’s worth reading my previous piece on some failure modes of the market.)

Keeping toxic chemicals out of kids toys can’t really be the responsibility of the parents, because it’s not within their domain of control. You can be a responsible parent, you can only buy toys you “trust” (whatever that means) and your child will still be exposed to toys you didn’t have any say about. It’s unavoidable – other kids have toys, day care centers have toys, kids play with toys in the playground that other kids bring or leave behind. The only way to prevent these toys from coming into contact with kids is to keep them out of the marketplace to begin with. If you like, it’s society’s responsibility to keep poisons out of kids’ toys in general, because the incentives don’t line up for the individual actors.

After-the-fact deterrents are simply not effective. Lawsuits take years to resolve, are overly burdensome, and it may be impossible to even track down the responsible party (I’m told it’s nearly impossible to sue a foreign company). On top of that, even an expensive PR-nightmare lawsuit may not be a sufficient deterrent to a large corporation with a hefty legal budget. A few million-dollar settlements can seem very small in the face of a few hundred million in profits per year. Also, it’s worth noting that this is a reactive response which doesn’t actually fix the problem, but throws monetary compensation at it in an attempt to “make things better”. But that’s basically what we’re being asked to accept here with the free market solution – let us do what we want and if you don’t like it, sue us, because it’s “too expensive” to ensure that we make safe products. We have that prefrontal cortex for a reason – people are uniquely capable of making predictive decisions, and to allow reactive forces to handle problems we can plainly see are coming seems ridiculously primitive to me. One might argue that we don’t have the capacity to predict how our actions might affect these complex systems, but that’s exactly why we need to be able to adapt and tweak them as we go. I haven’t seen any evidence that the market makes better choices in these kinds of situations, and in fact the call for regulation is a response to the failure of market forces – these companies have already shown an inability to keep toxic ingredients out of their products, yet we still continue to have these problems. Public outrage and whatever lawsuits are currently in the pipeline haven’t served as an adequate deterrent. Why’s that? I don’t know.

This is similar to the conundrum faced by small food producers. See Joel Salatin’s Everything I Want To Do Is Illegal for a lot of good examples of this. The main thrust is that rules meant for large corporations, where the overhead gets absorbed by the scale, are overly burdensome for small producers, who don’t have the resources for dedicated testing facilities but also have less capacity to do harm – both because they have fewer customers and because some kinds of harm are caused by the steps needed to operate at scale in the first place. I like to buy local food from farmers that I’ve come to know and trust. This can work at a small scale – if I want to see their operation, I can go visit the farm. I have no similar way to verify that with a larger company.

I don’t think that broken regulation is a condemnation of the entire idea of regulation, but I think it’s obvious that the rules need to be different depending on the scale of the domain they apply to. It is not unreasonable for Hasbro and Mattel to have to follow different rules than the guy who’s carving wood figures in his garage and selling them on etsy. Scale matters – more is different, and bigger is different.


Possibly the perfect omelet pan

Filed under: — adam @ 9:13 pm

I’ve long been looking for a good replacement for teflon for making classical french omelets, and I’d pretty much given in to the idea that it needed to be teflon or nothing. Cast iron (enameled or not) gets a nice big hotspot in the middle from the gas flame, and anodized aluminum isn’t non-stick enough. Even teflon is substandard for that, because to do it right, you need to use high heat and a metal fork.

Enter this new item in Cuisinart’s “Green Gourmet” line, a ceramic alternative to teflon for non-stick pans, which is made with no PFOA or PTFE. It’s not too expensive, and has anodized aluminum on the bottom for good heat distribution. I did a Pepin-style omelet with a little butter and a metal fork in it this evening. It has nary a scratch and the omelet came together perfectly. The surface of the pan feels very slick and hard, and the handle is comfortable. Major bonus points for this phrase in the instructions: “Never use Cuisinart Green Gourmet cookware on high heat or food will burn”.

Credit to the estimable Mr. McGee for a) scientifically confirming my assertion that cast iron has terrible distribution properties and b) mentioning some new non-stick coatings I hadn’t heard of (but not the one above, which may or may not be Thermolon, but which seems to be higher quality than the one he covered).


Why I eat what I eat.

Some number of years ago, I used to think that the ability to get any kind of fresh produce any time of the year was a mark of an advanced global civilization. We had conquered a small piece of space and time and weather to bring me blueberries in February. More recently, we lived for over a year in the shadow of the neighborhood that used to belong to the World Trade Center. I don’t want to talk about that right now, but it serves to highlight a personal revelation. When we moved, we moved to a new neighborhood, a new breath of fresh air. And a farmer’s market opened, literally, right outside my door.

After my first visit, I started making it a point to go every Friday morning, even in the dead of winter, just to see what new bounty would be there. It began with fruit and vegetables, and as I explored more, eggs, milk, breads, and eventually meat. Each new discovery reminded me of what potential could be held by a simple item of food. A peach — this is what a peach is supposed to taste like. The word “luscious” really does not fully convey the impact of biting into a local peach at the height of the season. Apples as tart as you like, strawberries with no white center to be seen, blueberries both sweet and tart at the same time, carrots you can eat without peeling them. This food was not only better for you, it was simply better, in every respect that mattered.

And then August came, and I got to the tomatoes. The tomatoes made me a lifelong convert – the line drawn between “there’s a market there” and “I need to go to the market”. A supermarket tomato is not even in the same vocabulary as a fresh, ripe, local market tomato. Flavor, texture, aroma – it’s just unfair to even do a comparison.

Of course, there’s a tradeoff here. Eating seasonally means you relish every bite until you can’t stand it anymore, because you know that it won’t last. Most crops have a few months, but some last only a few weeks. There are cycles for everything – they come in and they’re not quite ready yet, then the next week or two they’re perfect, then they’re gone until next year. Hopefully by that time you’ve been able to eat your fill to hold you until next year, but then there’s something else wonderful that takes its place. Peas move to berries move to tomatoes move to root vegetables.

The jury’s still out, but the evidence points to organic and sustainable food being healthier. It appears that plants are more nutritious when they have to defend themselves from pests. Garbage in, garbage out — I don’t want to eat vegetables that are made entirely of petrochemical fertilizers in the same way that I don’t want to eat meat that’s made entirely of corn. I don’t voluntarily buy anything that has high fructose corn syrup in it, and you won’t find any of that at the market.

And it’s not just about the food. Yes, it’s better, and everything I can buy at the market, I do. But it’s also about confidence, and community in one of the oldest senses of the word. I know these farmers. I have recently visited one of the farms and plan to go see more. They stand behind their food. I know, for the most part, which ones use pesticides and which ones don’t, and I can see the relative effects that has on the quality of their food. I’m not afraid to eat their eggs raw or undercook my burger.

Seasonal/local is not organic. That’s not to say that organic is bad, but they’re not the same thing. Organic doesn’t necessarily equate to sustainable, or even high quality. All other factors being equal, organic tends to be the better choice, but it’s not the whole answer. A local food may in fact not be the best choice, but at least if you have a question about it, you can often talk to the farmer directly and get whatever answers you’re looking for.

And so – my buying patterns: I always shop at the market first. If I can get something there, I do. The quality is always better, it is certainly healthier, it has a lower carbon footprint when you factor in the petrochemicals they don’t use to fertilize, keep the pests away, and get it to you, and all of the farms at my markets are committed to sustainable farming practices. Plus, I like them personally and I want to give them my business. Shopping at the market isn’t always numerically competitive, but it is always value competitive – if something is 1.5x more at the market, it’s likely 5-10x better.

For the things I can’t find at the market, I do try to buy organic, and I try to ensure that they’re seasonal somewhere. For example, I don’t buy oranges from Florida in July. Not only is there no reason to given the abundance of other wonderful fruits here, they’re just not as good as the ones in January. Organic is usually preferable, because I think that food is healthier and better for the planet than “conventional” (whatever that really means).

I’m not a die-hard localist. I still buy coffee, and I eat imported Italian canned tomatoes when I can’t find good ones here. I love to cook, and shopping at the market simultaneously makes some decisions easier (I make what’s good that week) and improves my results. But what it really comes down to is that I’m committed to procuring for my family and friends the best food we can have while supporting people who love food as much as I do.

This is a healthy food chain. It’s good for the planet, it’s good for the farmers, it’s good for the plants and animals, and it’s good for us. Every little bit makes a difference.


Why the Mac is better.

Filed under: — adam @ 10:16 pm

This is a list I put together for my father. I thought you’d enjoy it. Got anything to add?

I’ve been putting together a more detailed response for you. There’s a reason why nearly every computer professional I work with has switched to the Mac in the past few years.

This is the short list:

1. It actually is more stable. It is very very difficult to crash the OS entirely. The only time I’ve done it is when running Windows in a virtual machine, because of the trickery needed to accomplish that in the first place. When you kill programs that aren’t responding, they almost always die and can be restarted. This may not be a huge problem for you, since you only use about 5 applications. I use well over a hundred, and on Windows, this was a disaster – anything misbehaving could lock up the entire system. It simply doesn’t happen that way on the Mac.

2. It is >far< easier to maintain. This is actually a few thousand things of various severities, but some highlights:

- You know how when you get a new Windows machine, you have to reinstall everything and search around all over the disk for where your files and preferences might be? On the Mac, you don't have to do any of that. Everything specific to you goes in your home directory, and your Applications go in your /Applications folder. 99% of everything will work if you just copy those two folders. When you install software, there's usually no installer to run, you just copy the application to the Applications folder. This clean split between application preferences and user preferences also means that having multiple users on the same machine just works, every time. No weird "some other user accidentally modified the global settings".

- Moreover, you don't have to actually do that to be up and running quickly, because you can just make a copy of an existing drive and boot off of that. This won't screw up any hidden settings the way it will on Windows. (It may sort of work, but it'll never be "quite right".) You can keep a running backup of your boot drive that automatically updates. If anything happens to your boot drive, you pop it out, pop the backup in, buy a replacement backup, and it's as if you never left. Someday, you're going to have a hard drive fail, and when that happens, you're going to suddenly realize that there's a lot of stuff you had strewn around your drive that you never found to back up, and it'll be gone forever (or you can spend a few thousand dollars for a slim chance to recover bits and pieces of it in probably mostly unusable forms from a drive recovery service).

- You don't have to worry about viruses or spyware. Nothing runs with administrator privileges by default, the system is very well locked down, and there's no need to run anti-virus or anti-spyware software, because no software can install itself without your permission. Granted, it's not perfect and there are some security holes, but no system is 100% secure and it's orders of magnitude better than Windows in this respect.

3. Laptop sleep works. I've never had a Windows laptop that came back from sleep reliably. The last one sometimes took up to 30 minutes to restore. Unacceptable.

4. Creative programmers are drawn to the Mac. As such, there's a vibrant community of small-developer software that's incredibly useful, well-written, innovative, and for the most part, follows the same set of UI guidelines, so they all behave the same way, have similar keyboard shortcuts, etc... There are a few exceptions, but most of it is this way. There's an actual ecosystem, and when developers make something that's useful, often other developers will make their applications work with it if you have it installed. Almost all of it is available in downloadable form with 30-day trials. I've installed maybe three programs off of CDs, so that means no CDs to lose on the off chance you need to reinstall something.

5. Hardware support is just better. It's simpler and easier. Most things don't require drivers. When you switch ports around, things continue to work just fine. You can add and remove external monitors at will, and the system just compensates - no rebooting.

6. Apple cares about getting all of the little details right. You must watch this:

7. The OS itself has MANY MANY little enhancements that make life easier in lots of little ways (many of which may be difficult to appreciate without using them, but once you do, you'll miss them when they're gone):

- In the Finder, the Mac's version of Explorer (though Finder came first), you can highlight a file and press the spacebar to bring up QuickLook, which is a floating window with a navigable preview of the document. Press the spacebar again to close it. This makes flipping through files extremely easy. Many file formats are supported (word docs, excel sheets, pdf, most kinds of images, most kinds of videos, etc...). The built-in Mail program also supports this, so you don't have to save mail attachments to view them. You can also very easily toggle back and forth between full screen view.

- It includes Time Machine, which is an automatic hourly backup which saves as many past versions of a file (in a compact changes-only format) as your disk will support. Accidentally write over a word file you needed? Bring up Time Machine and restore it from earlier in the day. Time Machine also supports Quick Look, so you can easily check whether a particular version of a file is the one you're looking for.

- Expose is the window manager for navigating many open windows. I can't really explain this, so watch the video:

- The desktop includes the Dashboard. Press a hotkey and the screen is overlaid with useful widgets - weather, dictionary lookup, etc...

8. Screen sharing is built in. Need some help? I can log in to your machine remotely and securely, and check it out for you. No more waiting until I can come over to fix something.

9. Built-in calendar and address book that most applications share.

10. iPhoto is MUCH better for managing your photos than the Nikon software you have.

11. You can dual boot Windows if you really want, or run it in a virtual machine with VMWare.

No, it's not perfect. But it is a hell of an improvement. I'd say I have 1% of the number of issues I had with Windows machines, if not less.


On libertarian/capitalist intent

For some time, I was a staunch Libertarian. That lasted until I started to examine the boundary cases where Libertarianism didn’t seem to offer a good answer. I still hold a lot of those principles dear, but I’m no longer convinced that complete Libertarianism can work in the real world. What follows are some of my recent thoughts on the free market.

The proponents of the free market often propose that private ownership gives people an incentive to make the most of resources, and that people with ownership incentive are likely to make the best decisions about the use of resources.

I tend to agree in many cases – the market does often work and find the best solution, but I’ve been mulling over some exceptions to that rule.

Some traps that individual decisions in the market can fall into:

1) Divergent interests: the interests of the owning party may not be the same as the general public.

2) Irrationality: people don’t always act rationally or in their best interest.

2a) Obscured Information: even in the face of good information, which is often not present, the right decision isn’t evident.

3) Vested Interest: ownership of a thing is not the same as stewardship of a thing, and if you don’t have a personal vested interest in the thing, your best use of it may be to divest yourself of it (i.e.: use it up, parcel it, consume it) in exchange for lots of short term money you can use to buy something you actually want.

4) Value dilution: the more stuff you own, the less you care about any given individual thing. Ownership of lots of things probably means that each individual one is less valuable, because the value of a thing must be measured not just against the external market value, but also its proportion to your total assets, difficulty to replace, your incentive to replace it if you lose it, sentimental value, subsidiary values (prestige from ownership, etc…), and a whole host of other things.

5) Lack of patience and susceptibility to fear: People in control of a thing may require immediate access to it (liquidity), and will sometimes act to preserve that liquidity at the expense of the health of the overall economy, and therefore at the expense of some value of the underlying asset. This happens even though the people controlling an asset may be able to see the writing on the wall – everyone will be fine if everyone sits tight, but if you wait and someone else moves first, you lose. I think this usually manifests as “private enterprise tends to seek short-term gains”, but it’s tightly tied to #6:

6) The Tragedy of the Selfish: this is a concept I’ve been toying with on and off for a few years. It’s not the Tragedy of the Commons, and it’s not quite the Tragedy of the Anticommons, though there are aspects of both in there, as well as an arms race component, and some Prisoner’s Dilemma. This is the situation that exists when an individual makes what is logically the best decision to maximize their own position, but the sum effect of everybody making their best decisions is that everybody ends up worse off rather than better. Libertarian capitalism hinges on the assumption that making everybody individually better off is the best way to maximize the happiness of the group, and it’s simply the case that there are situations where that assumption does not hold. The example I often use for this is buying an SUV to be safer on the road. You buy an SUV, then other people do, because they want to be safer too. Except that if enough people make that same decision, you’ve overall raised the chances that if you’re hit by a car, it’ll be an SUV, which will do much more damage than a smaller car. Everyone is better off if everyone else backs off and drives smaller cars. It’s a simplification, of course, but I hope that makes the point.
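The SUV example is, structurally, a Prisoner’s Dilemma, and you can see the trap in a tiny payoff table. Here’s a minimal sketch – the payoff numbers are illustrative assumptions I’ve made up (higher means “safer”), not data:

```python
# The SUV arms race as a symmetric two-player game. Payoffs are
# made-up "safety" scores: (my choice, the other driver's choice).
payoffs = {
    ("small", "small"): 3,  # everyone drives small cars
    ("suv",   "small"): 4,  # I'm in the big vehicle: safest for me
    ("small", "suv"):   1,  # small car hit by an SUV: worst case
    ("suv",   "suv"):   2,  # big crashes all around
}

def best_response(their_choice):
    """The individually rational choice, given the other driver's."""
    return max(["small", "suv"], key=lambda mine: payoffs[(mine, their_choice)])

# Buying the SUV is the dominant strategy no matter what the other
# driver does...
assert best_response("small") == "suv"
assert best_response("suv") == "suv"

# ...so everyone ends up at (suv, suv), which is worse for each
# driver than if everyone had stayed with small cars.
print(payoffs[("suv", "suv")] < payoffs[("small", "small")])  # True
```

With any numbers shaped like these, each player’s best decision leads the group to the worse outcome – which is exactly the Tragedy of the Selfish.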

That’s what I’ve come up with so far. I’m sure there are more. Of course these don’t always apply, but I think at least one of them does often enough to warrant a better justification than “the market will solve the problem”. They’re certainly things to watch out for when getting out of the way and letting the market work.

What do you think?


Dear Senator McCain

Filed under: — adam @ 9:09 am

Dear Senator McCain,

Please remember that you are in America, and in America, we don’t suspend elections.

Have a nice day.


The Google Chrome terms of service are hilarious

I’ve been very busy lately, but this is just too much to not comment on.

There are other articles about how the Google Chrome terms of service give Google an irrevocable license to use any content you submit through “The Services” (a nice catchall term which includes all Google products and services), but the analysis really hasn’t gone far enough – that article glosses over the fact that this applies not only to content you submit, but also content you display. Of course, since this is a WEB BROWSER we’re talking about, that means every page you view with it.

In short, when you view a web page with Chrome, you affirm to Google that you have the right to grant Google an irrevocable license to use it to “display, distribute and promote the Services”, including making such content available to others. If you don’t have that legal authority over every web page you’ve visited, you’ve just fraudulently granted that license to Google and may yourself be liable to the actual copyright owner. (If you do, of course, you’ve just granted them that license for real.) I’m not a lawyer, but I suspect that Google has either committed mass inducement to fraud or the entire EULA (which lacks a severability clause) is impossible to obey and therefore void. [Update: there is a severability clause in the general terms, which I missed on the first reading. Does that mean that the entire content provisions would be removed, or just the parts that apply to the license you grant Google over the content you don't have copyright to? I don't know.]

Even more so than usual, these terms are, quite frankly, ridiculous and completely inappropriate for not only a web browser but an open source web browser.

Nice going guys.


On gazpacho

Filed under: — adam @ 9:42 am

Salmonella-tainted tomatoes aside, gazpacho is about the healthiest thing you can eat, and I look forward to having some decent vegetables to make it with every year.

It’s pretty good with tomato juice, but I really prefer to use fresh tomato puree. I’m really not a fan of spicy tomato, and I go on the clean vegetal side. It really highlights the late spring vegetables that start to show up at the market in early June.

4-6 large tomatoes, quartered
2-3 medium tomatoes, diced
2 spring onions, diced
2 cucumbers, diced (or 4 kirbies – sweeter)
~10 basil leaves, chiffonade
salt & pepper
good extra-virgin olive oil

I use the plastic dough blade of my food processor to beat the crap out of the tomatoes – just quartered; peeling, coring and seeding not required – then run them through the finest disk on my food mill to remove the seeds, cores, and any remaining skin. The plastic blade won’t nick the seeds, which can be bitter. I used to just do this in the food mill, but it took >forever< and is about 50 times faster using the food processor first.

Use about 2-3 cups of the puree for the soup, but it really depends on how liquidy you like it. I like lots of chunks. Whatever you don’t use will keep in the fridge for a few days. I’m sure it would freeze well, though I’ve never done that.

Add the diced vegetables and basil leaves, and salt and pepper lightly. Stir in a drizzle of olive oil (on the order of 1-2 tbsp) until it thickens slightly. Cover and refrigerate for a few hours. Stir, taste, add more salt and pepper. You’ll need more than you think because it has less impact when served cold.

Serve cold.


Coming to a Rational First Sale Doctrine for Digital Works

In reference to this Gizmodo piece analyzing the rights granted by the Kindle and Sony e-reader:

I think the analysis in that article is flawed. It doesn’t make any sense to be able to resell the reader with the books on it, because the license for the books is assigned to you, not to the reader. For example, if your Kindle breaks, you can move your books to another one. I’ve never heard anything other than the opinion that you can’t resell the digital copy – the assumption has always been that these sorts of transactions break the first sale doctrine. The problem then becomes “what are you buying?”, if there’s nothing you can resell.

The first sale doctrine has to apply to the license, not the bits themselves, because under the scenario in which it applies to the bits, arguably Amazon retains no rights whatsoever. They had no direct hand in arranging the bits of your copy the way they are – they merely sent instructions to your computer about how to arrange them in a certain pattern. The article asserts that you can’t “transfer” the bits, but in the same way, in downloading a copy, Amazon hasn’t actually “transferred” anything to you, either.

There’s no reason you shouldn’t be able to sell your Kindle, and the books don’t necessarily go with it, but if you want to sell the books separately, you should be able to do that too. Legally, if you do, you’d be obligated to destroy all of the copies you’ve made. Amazon’s inability to police that is exactly as relevant as their inability to police whether you photocopied a physical book before selling it. There’s no weight to the argument that this will encourage rampant piracy, given that unencrypted cracked copies of all of these things are already available to those who want them, and always will be.

People comply with reasonable laws willingly because they’re honest, because it’s the “right thing to do”, and because they feel that the laws are an acceptable tradeoff for living in a civilized society where sometimes you have to make compromises and not just do whatever you want. People do not comply with one-sided laws that make them feel ripped off for no reason. A law which turns your sale into a non-sellable license is of the latter kind. It turns normal users into petty criminals who don’t care when they break the law, because the law is stupid. Once they’ve ignored some of the terms, it’s a shorter step to ignoring others, or ignoring similar terms for other products.

People like consistency, especially in legal treatments. I would argue that it’s in Amazon’s interest (and everyone else’s) not to niggle on this point, because a reasonable license with terms that look like a sale makes for happier customers who aren’t interested in treading on the license terms, and that’s better for everyone.

(Yes, I’m arguing that restrictive license “sales” are anti-civilization.)

The Kindle ToS not only prohibits selling the Kindle with your books on it, it prohibits anyone else from even looking at it. If someone reads over your shoulder on the train, you’re in violation.

This is, of course, ridiculous.

The right legal response here seems to me to be not to dicker about splitting hairs over whether you can sell your digital copies if they’re on a physical device and can’t if they’re not, but to declare that anything sufficiently close to a “right to view, use, and display [...] an unlimited number of times” de facto constitutes a sale, and with it comes certain buyer’s rights regardless of what kinds of outrageous restrictions the licensor tries to bundle in the ToS. The fact that this also seems to be the right business response reinforces my belief that this is the correct path. This kind of transaction is different from renting, which is by nature temporary.

It is the right thing for society to declare that if you’ve bought something that isn’t time or use limited, you’ve therefore also bought the right to resell it, whether it’s a physical object or a license.




Why don’t we have degrees of terrorism?

We have different classifications for the crime of “killing a person”, and those classifications encompass whether it was an accident, whether it was premeditated, and how many people were killed – i.e., how serious a crime has actually been committed. But when we talk about terrorism, it’s always just “terrorism”. This results in the really sinister megacriminals being lumped in with the group of morons who can’t get it together to leave the house without forgetting to wear pants, let alone actually arrange to blow anything up.

Most “terrorists” are less dangerous than your average serial killer or bus accident, but we still lump them all together simply because they have an agenda.

Similar to murder, I think we need some sort of classification system for these crimes:

  1. Intent to commit terrorism: you “plotted” with someone who may or may not have been an undercover cop, but didn’t actually acquire passports or learn how to make liquid explosives
  2. Manfrightening: you committed some other crime, and along the way someone got scared and called you a terrorist, but you have no stated agenda.
  3. Terrorism in the third degree: You actually blew up something, but no one was hurt.
  4. Terrorism in the second degree: You actually blew up something and killed some people, but failed to garner any sympathy from the public.
  5. Terrorism in the first degree: You actually blew up something, lots of people were killed, and the US declared war on some country you were unaffiliated with.



Numbers is a nice idea with some usability disasters

Filed under: — adam @ 9:35 pm

I’ve put up a screencast made with the very easy Screenflow.

This is me trying to reorganize a large number of tables with attached comments in Numbers, such that there is no overlap and no tables cross a page break.

As should be evident even without narration, this is pretty much a usability disaster. Numbers is a nice idea, but it does not live up to my expectations for what a spreadsheet with page layout capability should be able to do. I hope they fix this.

Some notes:

1) It is extremely difficult to figure out where to click to consistently get a bunch of different behaviors – move a whole table, resize a table, grab a comment handle. The behavior doesn’t seem to be the same every time, and varies with whether or not the white handles appear. For example, you can’t make a table smaller if there is content or a comment in a cell you’d remove. That makes sense, but there’s no visual indicator that that’s what’s preventing you from making the table smaller. Watch how often I can’t get the click right on the first try, all over the place.

2) Comment callouts do not move with their tables and are not selectable as a group! Also, they don’t scroll the page when dragged to the edge.

4) Distribute Vertically sort of works, if the tables have no comments, but with comments, all of the tables move and their comments don’t. There does not seem to be a standard way to add descriptions to tables without comment callouts.

5) When you shorten a table, everything below it moves up, and the space that the table took up IS NOW GONE. This screws up the layout for everything below it on the page, and there does not seem to be any easy way to reclaim that space.

6) When you insert a table in the middle, there does not seem to be a good way to reconfigure the layout of everything else to accommodate the space you need for that insertion. This is basically the same problem as #3.


Fed up with food labeling

Filed under: — adam @ 10:59 am

Our food labeling standards are completely out of whack.

As an example, let’s take “100% fruit juice”. I’m pretty sure that at some point, “100% fruit juice” meant that what you got in the bottle was, prior to being put in the bottle, a piece of fruit that was crushed and maybe filtered. I’m 100% sure that that’s what most people still expect when they buy something that’s labeled “100% fruit juice”.

Except that’s not what you get anymore. Now it’s reconstituted from concentrates, mixed from different kinds of fruit juice concentrates (which may have vastly different nutritional profiles), and blended into whatever they like, but it’s still the healthy choice, kids, because it’s 100% fruit juice!

Right off the labels:

Kedem concord grape juice (which, incidentally, is among the sweetest of the grapes):

The label says “100% fruit juice”.

Ingredients: Grape Juice, Potassium Metabisulfite Added To Enhance Freshness.

It has 150 calories per 8oz.

Welch’s grape juice:

The label says “100% grape juice”.

Ingredients: Grape Juice From Concentrate (Water, Grape Juice Concentrate), Grape Juice, Ascorbic Acid (Vitamin C), No Artificial Flavors Or Colors Added.

It has 170 calories per 8 oz.


They’re not using grapes that have 13% more sugar in them; they’re dickering with the proportions to make their juice sweeter.
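The 13% figure follows directly from the calorie counts quoted off the labels above:

```python
# Sanity check of the label comparison: Welch's 170 calories vs
# Kedem's 150 calories per 8 oz is roughly 13% more.
kedem, welchs = 150, 170
pct_more = (welchs - kedem) / kedem * 100
assert round(pct_more) == 13
```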

This is just one particularly egregious example, but it’s all over the place – many “100% juices” are sweetened with cherry juice or other concentrates. It’s a complete sham. Even the Kedem is pushing it because it’s got preservatives, but at least the juice is actual juice. No way does that Welch’s bottle contain “100% juice”.

Our food labels don’t mean what they say anymore, they have very detailed technical specifications to go with them, and it’s impossible to know what they mean from common sense without understanding those specifications. This isn’t even about making dubious health claims – it’s about defining away the actual contents of the package.



What the Apple Keynote should have delivered

Filed under: — adam @ 10:08 am

Here’s the thing. The past few years have overwhelmingly delivered a whole class of Apple devices I simply want. I’ve bought a number of them. Not so for anything announced this year. Here’s what we got, and what I would have liked to see Apple announce instead:

We got: A new super slim but otherwise really limited laptop aimed at… who exactly? Not mobile creatives, executives, or cost-sensitive casual users, given the spec and upgrade limitations.

I wanted instead: Two new laptops – a super portable Macbook Mini, and a Macbook Pro upgrade (thinner, bigger drives/battery, more RAM, higher resolution screen in the same size package). Both thin and light. Touchscreen tablet versions would have been interesting, but even upgrades to the standard laptop package would have been good. The Macbook Mini would be roughly the size of three iPhones side by side (maybe 7.5″ x 5″ or so), running full Mac OS X.

We got: A $20 software bundle for the iPod, but only for the lucky customers who paid 15 or 20 times that already for the top of the line iPod only a few months ago.

I wanted instead: to be honest, I didn’t care much about this one, not owning an iPod Touch or an iPhone. Still, if I did, I’d probably be disappointed.

We got: A software upgrade to Time Machine masquerading as completely new hardware (Time Capsule).

I wanted instead: Time Machine support for something other than locally plugged-in external drives, particularly external drives attached to existing (again, only months-old) Airport Extremes.

We got: Overpriced limited “movie rentals” and a minor supporting upgrade to the miscast living room product that no one bought last year and which is still a hard sell because it lags behind its competitors in features and doesn’t make up for it with anything that’s great about Apple products.

I wanted instead: Remove whatever restriction is preventing Netflix from doing Watch Now on the Mac. Treat movie rentals like digital media instead of overpriced restricted analogues to going to the video store. Why the 24-hour limit?!? Give me 30 days for a video rental so I don’t feel like I’m being ripped off. Give me TV shows in HD for less than it costs to buy the disc. Let me watch whatever I want to watch on the set top box. In fact, forget the set top box and morph the Mac Mini into the set top box. Anyone watching movies on an HD screen also probably wants to do computing tasks on that screen too. That’s why I have a Mac Mini attached to my living room projector. For not too much more than the Apple TV, you could buy a used Mac Mini and get 100 times the functionality. What I want to see here is making it easier to watch more kinds of digital media on the Mac Mini in a living room setting – Front Row is just awful and limited.

Bonus: Where’s OpenDocument support in iWork?!? Come on man, don’t be like Microsoft on this one. There’s no possible way that .pages and .numbers are going to become the dominant interchangeable file formats that will make people have to buy iWork anytime this century. People buy iWork because they like your applications, not because they have to in order to read a file someone sent them. It doesn’t hurt you to support the open standards, and it helps the users.

[update: I was thoroughly shocked to discover that, of all things, reads .odt files. There's also Quick Look support for them.]

After all, ranting about this stuff is fun, and I enjoy picking it apart, but sometimes it helps to be productive too. So, those are my suggestions for things I’d actually hand over some cash to Apple for this year.



Disappointed in the Macworld Keynote

Filed under: — adam @ 4:54 pm

I’ve become a huge Apple fan over the past two years. I think they’ve done a number of wonderful things for desktop computing interfaces, and they’ve far surpassed Windows in usability, stability, and general pleasantness. I spend a lot of time in front of computers, and I try to make as much of it as possible in front of a Mac.

But I’m disappointed with a number of items in today’s keynote.

The Macbook Air is certainly pretty, but when you look at the limitations, who’s this really aimed at? No firewire, only 2GB ram, 4200rpm disk – this rules out mobile creatives. No replaceable battery – this rules out actual mobile executives. It seems to be an upgrade for the regular Macbook users – mobile browsing, email, writing, maybe a little video and music, but it’s far too expensive for that. So, I don’t get it – who’s this aimed at?

I can understand that new hardware sometimes makes old hardware obsolete. But a few of the “hardware upgrades” announced here are really software upgrades in disguise, but which nonetheless are forcing you to buy new hardware to take advantage of the new features. The Time Capsule looks good, but it’s really just an Airport Extreme with an internal disk. Why isn’t this feature available on existing Airport Extremes with external disks?

Note to Apple: your existing customers generally love you. They love you even more when you go the extra mile and suddenly update the stuff they’ve already bought with new capabilities. This makes them more likely to buy new stuff, not less, and even much more likely to recommend that to all of their friends. Are you really going to sell enough $299 Time Capsules to make up for the hate you just scored with everyone who uses an Airport Extreme with an external disk and wants to back up their laptop to it, who now think you’re being greedy and trying to force a $300 upgrade for no reason?

Same deal with the multitouch gestures on the Macbook Air – why aren’t they being backported to the existing Macbook line? The trackpads are multi-touch capable, and this is a software update, to applications that are already running on those machines.

And then there’s the actual software update for the iPod Touch. I’m the last person to say that all software should be free (free software should be free, but that’s a discussion for another day), but these are people who just a few months ago paid a premium to buy your top of the line product, and now you’re fleecing them for a bit of extra cash.

I don’t expect a world-shattering new product line every year, but these “announcements” look like the actions of a company that’s scrambling, not one that’s innovating.

[ Followup: Some suggestions for what I wanted to see instead. ]



Dear Netflix

Dear Netflix:

I would very much like your website to stop redirecting me to a page that tells me that I’m using an unsupported browser. I know I use Opera. I like it. I understand if you don’t want to support it, but at least set a cookie so I can just tell you once that I don’t care, instead of making me click through your tedious “only browsers we like are supported” splash page every time I want to check my queue.

Thanks. Have a wonderful day.



The HD format war is lost by existing

[I've posted this as a comment on a few HD DVD vs. Blu-ray blog posts elsewhere, so I thought I'd put it up here as well.]

An HD format war is simply the height of stupidity, given the nice example of how quickly DVD was adopted by… everybody.

This happened for a few reasons, none of which are being replicated by the HD formats/players:

1) One alternative with no difficult competing choices.

2) Fit into existing home theater setups easily.

3) Clear, obvious quality advantages, even if you set it up incorrectly.

4) Significant convenience advantages – pause with no quality loss (anyone here remember VHS tracking?!), random access, extra features, multiple languages, etc…

5) More convenient and durable physical medium.

So – let’s look at what HD formats offer over DVD in these areas:

1) Multiple competing incompatible choices. Not just between HD DVD and Blu-ray, but also between different HD formats. 720p/1080i vs. 1080p, HDMI/HDCP vs. component. People aren’t adopting HD formats because they’re confusing.

2) Does not fit into existing home theater setups easily. If you had a DVD home theater, chances are you’re replacing most, if not all of your components to get to HD – you need a new TV/projector, you probably need some new switches, you need all new cabling, and you need at least three new players to do it right (HD DVD, Blu-ray, and an upscaling DVD player so your old DVDs look good). Not to mention a new programmable remote to control the now 7 or more components in your new setup (receiver, projector/tv, 3 players, HDMI switch, audio/component switch).

3) Clear, obvious quality advantages, but only if properly tuned and all of them work properly together. I can easily tell the difference between even HD movies and upscaled DVD movies. Upscaled DVD movies look fantastic, but HD movies really pop off the screen. But if things aren’t properly configured or you’re using the wrong cabling, these advantages disappear.

4) No significant convenience advantages, with some disadvantages. Pretty much the same extras, but most discs now won’t let you resume playback from the same place if you press stop in the middle, and they make you watch the warnings and splash screens again.

5) Indistinguishable physical medium. Maybe the Blu-ray coating helps, but we’ll see about that.

I’ve gone the HD route, because I really care about very high video quality, and I love tinkering with this stuff. Most people don’t, and find it incredibly confusing and expensive.

Is it really any wonder that people are holding off?

The HD format war is already lost, by existing at all, and every day that both formats are available for sale just makes things worse. The only good way out of it is to erase the distinction between the two formats – dual format players that reach the killer price point and aren’t filled with bugs.



New Star Trek movie apparently reboots with an open time loop

Filed under: — adam @ 10:15 am

“Picture an incident that throws a group of Romulans back in time. Picture that group of Romulans figuring out where they are in the timeline, then deciding to take advantage of the accident to kill someone’s father, to erase them from the timeline before they exist, thereby changing all of the TREK universe as a result. Who would you erase? Whose erasure would leave the biggest hole in the TREK universe is the question you should be asking.

Who else, of course, but James T. Kirk?”

Although I don’t think it would work as a standalone movie, I’m still waiting for the followup series hinted at near the end of TNG – the continual use of warp drive is found to be definitively unraveling the fabric of space-time. How do you deal with that? What does that do to interplanetary politics? How do you develop alternate forms of travel that don’t use warp technology? How do you stop everyone from using warp drive, and how do you police that? How do you impose that restriction on hostile entities? Nothing like a good galactic environmental crisis to bring Star Trek back into relevance.

(Of course, in TNG, the answer obviously lies in Wesley Crusher’s newly acquired godlike Traveler capabilities, but I think there are a lot of people who would find that objectionable.)


Newer PS3s apparently use software emulation for PS2 games

Apparently, Sony dropped the PS2 hardware from the 80GB model, and the last version that includes it is the now-discontinued, recently price-cut $499 60GB model. If you care about playing older PS2 games and are thinking about getting a PS3, you probably want to get that one before it disappears. It should also be noted that the hard drive is user-replaceable, so there’s actually very little tradeoff there.

The new model includes a software emulator, but a fairly large number of the older games have at least some problems.

I’ve really been pretty blown away by how much fun the PS3 is, both for the newer games (which are huge and gorgeous) and for how much better it makes the PS2 experience – all games that support it can play in widescreen, everything’s faster, using the hard drive instead of memory cards is both more convenient and MUCH faster, and the analog sticks are more precise. I think dropping the hardware emulator is an unfortunate cost-saving move that will probably diminish the experience, if you care about that.

Also interesting – I found this list of current and upcoming PS3 exclusives, including PSN (downloadable) games:–a1079-p0.php

I think the PS3 has only shown a mere fraction of its power, and Sony didn’t do even a passable job of promoting it properly at launch, but the slate of games on the list for the next six months and beyond has me very excited.



Why am I writing about HD home theater frustrations?

Filed under: — adam @ 12:21 pm

The consumer electronics companies really have their collective head so far up their ass they’re wearing their tongue for a hat.

So to speak.

I made the jump to an HD projector, which I have nothing but good things to say about. It’s a Mitsubishi HD1000U. At this point, it’s a few years old, but that’s how you get a 720p projector at a sub-$1000 price instead of dropping a few grand. The picture quality is amazing, the contrast is strong, and it’s bright enough for me. We’re projecting onto a plain off-white wall instead of a screen, and the color is brilliant and rich. For the most part, we watch movies at night with the lights off, and I sometimes use it during the day with a computer for web browsing and email. For these purposes, it’s just fine. I’m very sensitive to picture artifacts, particularly the rainbow effect of DLP projectors (which this is), and while they’re still sometimes present, they’re MUCH less noticeable than on any other projector I’ve looked at. Big thumbs up to Mitsubishi here – this is a winner at this price point or cheaper. Two small notes on the setup:

  1. This projector has a weird throw angle which is noted in many reviews, so positioning is limited and they claim you’ll want to ceiling mount it or put it on a table in front of your seating. I put it on top of a high bookshelf behind the seating, angled down at about an 18-degree angle by putting it on top of a Roadtools Podium CoolPad at the maximum height. This is stable, allows plenty of air circulation under the projector, and is well within the 30-degrees of maximum tilt usually recommended for projectors.
  2. The native resolution for the projector is 1280×720, which my Mac Mini couldn’t do by default. It looked terrible at all of the choices, so I dropped a whopping $18.37 on SwitchResX, which let me set a native resolution of 1280×720, and which looks fabulous.

Set aside for the moment the fact that there’s an HD disc format war to begin with, which is the height of idiocy: DVD was the most successful consumer electronics uptake ever solely because there was one single format, and everyone could look at DVD next to VHS and say “oh, yeah, I’ll take that”.

It was the cheapest option and I might get a PS3 at some point in the future, so I picked up a Toshiba HD-A2 HD-DVD player to check out some HD content. I got rid of cable a while ago (but would probably go back if I could just buy Discovery HD and maybe cartoon network and scifi), and Netflix, sans tonguehat, kindly offered to send me a bunch of stuff that was already in my queue in HD-DVD instead of crappy old regular DVD.

They’ve reproduced a bunch of the usability problems in the first generation DVD player which I bought ten years ago (which, now that I think about it, may also have been a Toshiba). The machine itself is big (same form factor as my 6-disc DVD changer). The machine takes a long time to boot up. Backward compatibility is weird – regular DVDs play in a tiny portion of the screen unless you manually set the machine to 480p mode before starting. The first round of discs don’t seem to support the “resume from where I stopped when I press stop then play again” feature, so if you press stop for a minute, you have to watch the FBI warning again. Why is there even an FBI warning in the first place?! Isn’t the overly invasive “copy protection” they foisted on me supposed to prevent me from copying it, even if I wanted to? Oh wait, that’s right… it’s just there to irritate me and not prevent anyone from actually copying anything. The warning I have to stare at every time I switch discs does that.

Which brings me to inputs. I’m somewhat of an expert at setting up electronics, and I find this needlessly frustrating. The projector has HDMI and component inputs, but no output. Previously, I’d had everything wired through S-video and optical audio (TOSlink), using my receiver as a switcher. This worked pretty well. However, the receiver is older and has neither component nor HDMI in or out. I have a component switcher with TOSlink support which I’m using for all of the things that I used to use S-video for (DVD player and PS2), and the component video goes to the projector and the TOSlink goes to the receiver on a single input. But this totally breaks down with HDMI. They collapsed the audio and video streams into one cable to “simplify things”, but that doesn’t change the fact that the two streams need to go to different devices. There seems to be no standard way to deal with this. There are HDMI switchers that will split out the audio portion to a TOSlink audio cable automatically, but they’re prohibitively expensive (hundreds of dollars). The solution seems to be to use separate switchers for HDMI and TOSlink, and program a universal remote to switch them at the same time. Hardly fun for the average person. It’s doable, but what were they thinking?!? It makes no sense to put audio and video on the same cable unless all of the devices support that (they don’t) and you can freely move the signal around, which of course you can’t because the “copy protection” won’t let you.

On the other hand, the picture quality is quite stunning. DVD looks “really really good”. HD-DVD looks “better than film”.

A big thank you to Mitsubishi, Netflix, and the film crew on that BBC Planet Earth Documentary. The rest of you, please buy another hat.



Transformers was the most fun I’ve had in a movie theater in a while

Filed under: — adam @ 11:48 am

Since I live in the land of the future, we got to see Transformers last night. It was silly, the plot was thinner than the sheen of sweat on the hot girl mechanic’s belly, and it had faults. The pacing fell apart after the first 2/3 or so, and it was pretty clear that they went through a few different endings.

Who cares?!? Giant Fighting Robots!!

Where did they find all of those robots, and why aren’t they doing any other movies?

Short review – this was the most fun I’ve had in a movie theater in a long time.



The value of RAID0 for caching, paging, and temp

Filed under: — adam @ 9:50 am

I recently realized that I had a few extra drive bays in my desktop (with corresponding open SATA ports) and a few extra SATA drives lying around. So last night, I put them in and set them up as a RAID0 striped array.

I’d always avoided striping because of the instability concerns – if either drive goes bad, you lose the data on both of them. However, I’ve recently begun to feel the pinch in speed as my desktop has aged and I installed CS3. I maxed out the RAM a long time ago, and I’m not quite ready to replace it, although I certainly will in the next 6 months. So any little extra bit of speed I can get is welcome. A striped array has a significant speed advantage because the controller can read and write both disks simultaneously, roughly doubling your disk throughput. Also, you end up with one big disk that’s the size of the two put together.

While it is fragile if one of the drives goes, it performs much better. That makes it incredibly useful as a cache drive. I put the Windows paging file, all of my temp directories, and the CS3 cache and scratch files on it, as well as my browser caches. Even with only minimal testing, I noticed an immediate (and unsurprising) speed boost across the board, particularly in browsing directories with lots of photos in Bridge.
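The reason striping roughly doubles throughput (and why losing either drive loses everything) is simple round-robin chunking. A toy Python illustration of the layout – real controllers do this in firmware, typically with 64KB+ stripe sizes:

```python
# Toy illustration of RAID0 striping: data is dealt out in fixed-size
# chunks, round-robin, across the member disks. Sequential reads and
# writes can then hit both disks at once (hence ~2x throughput), but
# every file has half its chunks on each disk -- lose one member and
# the whole volume is gone.

STRIPE_SIZE = 4  # bytes per chunk; real stripe sizes are 64KB and up

def stripe(data, n_disks=2, chunk=STRIPE_SIZE):
    """Distribute data across n_disks in round-robin chunks."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks].extend(data[i:i + chunk])
    return disks

def unstripe(disks, total_len, chunk=STRIPE_SIZE):
    """Reassemble the original byte stream from the stripe members."""
    out = bytearray()
    offsets = [0] * len(disks)
    i = 0
    while len(out) < total_len:
        d = i % len(disks)
        out.extend(disks[d][offsets[d]:offsets[d] + chunk])
        offsets[d] += chunk
        i += 1
    return bytes(out)

data = b"0123456789abcdef"
disks = stripe(data)
assert unstripe(disks, len(data)) == data
print(disks[0], disks[1])  # each disk holds alternating chunks
```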

The setup was not very difficult, although there were some hiccups. I had to configure the BIOS to have the second SATA controller (integrated into the motherboard) work in RAID mode, which took some fiddling. Then I had to switch the boot ROM to it to boot into its firmware to actually configure the array, then switch the boot ROM back to the other controller so I could boot my pre-existing Windows install (which is on a RAID1 mirrored array). After that, it was just a matter of installing the Windows driver for the RAID controller, formatting the new drive, and moving everything appropriate over to it.

Disks are pretty cheap. I highly recommend this configuration.



The first rule of community

I have a personal mailing list for my very close friends, to which I often send a few messages a day. If I stop for a day or two, it’s not a problem. If I stop for a long period of time (a week, a month) without telling someone, I have a strong belief that many of those people will check in to see what’s wrong. This is a major aspect of community for me, and it’s missing from every other piece of online interaction I’ve ever had, including this blog. Part of it has to do with the requirement that everyone on the mailing list is someone I’ve met in person and decided to include – I do not invite people whom I’ve never met physically, and I do not accept solicitations to join the list. But it’s a very strong driver for me, and it’s the reason I still maintain the list even in the presence of so many “better” ways to communicate.

There’s really only one rule for community as far as I’m concerned, and it’s this – in order to call some gathering of people a “community”, it is a requirement that if you’re a member of the community, and one day you stop showing up, people will come looking for you to see where you went.

Incidentally, this quality has been lacking from some real world organizations as well, and it’s become a very strong barometer for me to tell just how welcome I feel with any given group of people. If I left and didn’t come back, would anyone care enough to find out why? It’s a very visceral question, and perhaps a difficult one to ask. But I think it’s an important one, as we move into these so-called communities where all of our interaction is online, and fluid.

I quite enjoy my participation in a number of sites, flickr and ask metafilter among them. But I have no doubt that if I suddenly go away, not one other member will really care, with the probable exception of the people I know from offline. From time to time, they may wonder, “huh, haven’t seen Caviar in a while” (and the use of handles instead of names is probably a big contributor to this), but it’s unlikely that anyone will track me down to ask why, if they can even find a way to reach me. They’ll probably just assume I found something better to do, or switched to a different site. And therein lies a big piece of the problem – the loose ties go both ways. That guy who disappeared may have just found something better to do, or switched to a different site, but maybe he died, or just didn’t feel welcome anymore. If we don’t have the presence to find out these reasons, or even the capacity to tell when such an event has occurred, are we really building a useful analogue to the binding offline communities that exist, or is it all just a convenient fiction?

I’ve blogged before about some of the problems with online communities, but I think this is a bigger point. That post focused more on how to get online communities to be more outward facing and less insular. This is more about how to get online communities to be more inclusive and meaningful. I must admit that I’m only at the beginning of an answer, but I welcome any ideas on the subject. I’ll avoid the temptation to suggest that we should probably meet for drinks to discuss it.



Circo Hazardous Sock Packaging

Filed under: — adam @ 2:27 pm

I happened to take my 6-month old to Target this weekend, and we bought him some socks. He was playing with the package and put them in his mouth, and managed to get the little plastic hanger piece out. There’s certainly enough to say about parental responsibility, and not letting the baby get into dangerous things, but until this little plastic piece disappeared (it turns out he dropped it on the floor), we didn’t even give a second thought to the idea that a pair of socks for a 6-12 month old might contain this kind of incredible choking hazard. I’m normally pretty paranoid about this. Didn’t these things used to go all the way across? Is this REALLY the place where Target wants to save a tenth of a cent of plastic? It seems like a lawsuit waiting to happen.

Be careful out there…

Circo Socks Hazardous Packaging




The Canon Pixma Pro 9000 is a great inkjet photo printer

Filed under: — adam @ 3:15 pm

I got a Canon Pixma Pro 9000 to replace my dead Epson Stylus 1280. Having not bought a new inkjet printer in about 7 years, I’m totally stunned by how far the technology has improved, even over the previous round which was pretty impressive.

First, it’s REALLY fast. While a letter size photo on the 1280 would take a good 5 minutes to print, the Pixma spit my first test print out in, oh, about 25 seconds. When it started to go, I did an actual doubletake – I was not really expecting that.

Second, the color is outstanding. With no adjustment at all, it got very close to my calibrated screen. Not exact, but close enough that you probably wouldn’t notice unless you held it up to the screen and looked at them side by side. On regular old Costco photo paper.

Third, the ink usage seems better designed. It has 8 separate ink carts, which are individually replaceable, instead of one.

Fourth, when you’re not using it, the paper path trays fold up and click into the case, which I expect will significantly reduce the amount of dust and stray hair that always seemed to get into the paper path on the old printer.

Fifth, it has more cleaning modes, to clean the print heads, deep clean the print heads, and also clean the bottom tray to prevent smudges. Also, the entire print head is replaceable if needed.

The only drawback I can see so far is that it’s gigantic. That’s kind of a side effect of being able to print on big paper, but even though it’s physically slightly bigger than the 1280 was, it seems more intelligently designed to take up as little space as it can and still do what it does.

I got it for $439 at Amazon, which is about $100 less than I paid for the 1280 originally:

Highly recommended.



Microsoft should release XP for free

Filed under: — adam @ 3:47 pm

It is well known that free products are used more widely than products that people have to pay for. If Vista is so much better, then people will still pay money for it, and having more installations of XP around to keep people using Windows apps instead of switching to Mac or Linux can only be a good thing for Microsoft, whose continued success depends not only on agreements with PC manufacturers, but also on the continued existence of Windows-only software that people need to run. This benefits Microsoft, and will result in more sales of Vista (and subsequent versions), as other software vendors evolve into the same “The XP version is free, but if you want the premium version, you need Vista” pattern. Essentially – XP becomes the shareware limited demo version of Windows, and you pay if you want the full version.

This obviously benefits the consumer, because free is good, and there are plenty of places (VMs, especially) where it would be useful to run XP but where the current price is prohibitive. Making XP free would open up the Windows market to those potential customers.

Anyone who’s switching to Mac or Linux has already made the decision to do it, and isn’t turning back because they can’t run Windows in a VM… because they already can. This would just make everyone’s life easier, and generate a LOT more goodwill for Microsoft than they have now.

Microsoft, despite being ridiculously profitable, is in danger of losing relevance. This is one way to combat that.



Google has just bought a lot of browsing history of the internet

I pointed out that YouTube was a particularly valuable acquisition to Google because their videos are the most embedded in other pages of any of the online video services. When you embed your own content in someone else’s web page, you get the ability to track who visits that page and when, to the extent that you can identify them. This is how Google Analytics works – there’s a small piece of JavaScript loaded into the page which is served from one of Google’s servers, and then every time someone hits that page, they get the IP address, the URL of the referring page, and whatever cookies are stored with the browser for the domain. As I’ve discussed before, this is often more than enough information to uniquely identify a person with pretty high accuracy.
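The mechanics are mundane: every request for that embedded script hands the serving host exactly those three fields. A minimal sketch of harvesting them from a web server log (the log line is invented, and the trailing quoted cookie field assumes a log format extended beyond the standard combined format):

```python
# Sketch: what a tracker can reconstruct from a single request for an
# embedded resource. The (hypothetical) log line carries the three
# fields mentioned above: client IP, referring page, and the cookie
# previously set for the tracker's domain.
import re

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)" "(?P<cookie>[^"]*)"'
)

def parse_hit(line):
    """Pull the identifying fields out of one access-log line."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

hit = parse_hit(
    '203.0.113.9 - - [10/May/2007:13:55:36 -0400] '
    '"GET /ga.js HTTP/1.1" 200 512 '
    '"http://example.com/article" "Mozilla/5.0" "uid=abc123"'
)
print(hit["ip"], hit["referer"], hit["cookie"])
```

Aggregate a few million of these per day and the cookie ties one browser's visits together across every page that embeds the script.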

DoubleClick has been doing this for a lot longer than Google has, and they have a lot of history there. In addition to their ad network, Google has also just acquired that entire browsing history, profiles of the browsing of a huge chunk of the web. Google’s privacy policy does not seem to apply to information acquired from sources other than, so they’re probably free to do whatever they want with this profile data.

[Update: In perusing their privacy policy, I noted this: If Google becomes involved in a merger, acquisition, or any form of sale of some or all of its assets, we will provide notice before personal information is transferred and becomes subject to a different privacy policy. This doesn't specify which end of the merger they're on, so maybe this does cover personal information they acquire. I wonder if they're planning on informing everyone included in the DoubleClick database.]



Does your old PS2 play dual-layer DVD games?

Filed under: — adam @ 5:21 pm

I have an old Playstation 2 (30001 series). It has never played dual-layer DVD movies – it plays the first layer, and then freezes. Everyone I know with this model has the same issue with it. It was never a problem, because all of the games on DVD that I had were single layer. But now they’ve started releasing games on dual-layer DVD, notably God of War 2. And, of course, it won’t play on my old player. The official word from Sony is that this is a problem isolated to my machine (which also, incidentally, has stopped playing the purple CD-ROM games too), and they want me to pay $45 for a refurbished machine of the same old model. Before I do that, I’d like to locate some corroborating opinions.

Do you have an older PS2? Can it play God of War 2?



Followup commentary on Windows Vista

Filed under: — adam @ 12:29 am

Perry said “I think you held back too much. Tell us what you really think.”

Okay. I think Windows is rotten to the core and always has been. Between Windows 3.1 and XP, there were no serious contenders. With Win2K and XP, it’s at least had the benefits of:

1) it being reasonably possible to hammer it into sufficient shape to be usable and secure “enough”.

2) running on significantly cheaper hardware.

3) being reasonably open for a closed-source product, and at least focused towards providing a good user experience, and aimed at the needs of the end user.

4) providing a mostly effortless hardware compatibility experience. Most of the things I’ve plugged into my XP box have simply worked, without too much trouble. Sure, I’ve had to install the driver, but there are a number of things where you have to do that with OSX, too.

5) having software exclusives, and existing in the world where virtualization/emulation on other platforms was at the end-user performance level of “barely usable, if you really need it”.

All of that seems to change with Vista and the fun 2007 world it inhabits:

#1 might have been good enough with XP, but I fail to see why none of those lessons have been learned, and we have to do it all over again with a new OS, especially one which otherwise seems to provide marginal benefits.

#2 the hardware requirements for Vista seem like simply an excuse to sell more hardware for overly bloated and inefficient software, because…

#3 they’ve totally sold out to the content industry and everything has been reoriented towards content protection, all of which eats hardware resources and diminishes usability, because of which…

#4 they broke the unified driver model and so we have to start all over again with hardware compatibility, and…

#5 now there are cheaper, better alternatives for running the same software, which actually seem to work this time around.

We’ve known this all along – Unix in any flavor is superior to Windows. We’ve finally reached the complexity point in operating systems where that difference is unmistakable even if you don’t have advanced degrees in Computer Science.

I’ve been a Windows user and defender for a very long time, because of the list of five advantages above. My primary desktop still runs XP. I expect that to be the case until I need to replace it, at which point I’ll probably get a Mac, for the same five reasons. Obviously, I haven’t hit all of the reasons, but this is a big chunk of why I have little interest in Vista. It’s the same reason I got tired of manually assigning SCSI ids to all of my disks. Tinkering is fun. Sometimes, tinkering is fun even when it’s mandatory and things don’t work unless you tinker. But after a while, you just want things to work.



Treo 700p Text Messaging Problems

Filed under: — adam @ 1:46 pm

My Treo 700p has many problems, but one of them is completely infuriating, so obviously the result of a bug, and so invasive that I can’t imagine that everyone with the same phone hasn’t seen it.

When Palm introduced the 700p, they replaced the SMS application that was used on the 600 and 650 with a new centralized messaging application. Setting aside the fact that it couldn’t import the SMS messages from the old application, it obviously suffers from some sort of indexing bug, because if I have more than a handful (maybe 20-30) of messages saved on the phone, EVERY time I send a text message, the phone freezes for some amount of time before it responds again. The more messages I have saved, the longer it hangs. I’ve timed it at over 2 minutes with a lot of messages. Purging all of the existing saved messages completely fixes the problem, until a sufficient number of messages accumulate again.

This is a real pain – I often refer back to old text messages, and I feel like the phone is robbing me of some of my history by forcing me to delete them.

And it can’t just be me – with Verizon support, I tried a brand new phone with none of my programs or data installed, and the problem recurred after sending and receiving a bunch of text messages. I can’t believe that Palm hasn’t fixed this already.

Do you have a Treo 700p? Does it exhibit this problem?



It is time for the distinction between Mac software and PC software to go away

Filed under: — adam @ 6:21 pm

I’ve been thinking about the issue of Mac software vs. PC software a lot lately, particularly with the cross-platform beta and coming production release of Adobe CS3.

I’ve only been a recent convert to the Mac, and the thing that was holding me back was that certain software that I absolutely needed was not yet available on the Mac. Until recently, things I needed to do my job wouldn’t run on OS X, or wouldn’t run well, or would run perfectly well under Windows and OS X but would require me to buy another license (and a full price non-upgrade license at that) to run what was essentially the same software as I was running under Windows.

But with the conversion of the Macs to Intel chips and the consequent advent of Parallels (and eventually VMWare Fusion, which is not yet ready for prime time in my limited tryout), this distinction essentially evaporated. I could run all of the great software I wanted natively for Mac, and anything else that wasn’t available or would cost extra for the Mac version I could run under XP on Parallels. Since then, I haven’t bought any new Windows machines. Virtualization technologies existed before, of course, but the difference this time around is that Parallels works.

And now, Adobe, I’m looking squarely at you. Your license permits me to run a copy of CS2 on my desktop (which is still Windows), and one on my laptop (which is OS X). I’m not going to buy another full $1000 copy of CS2 for the Mac, so the question now is this – the license permits me to run it on my laptop, so why are you making me run it under Parallels? You’re letting me preview the beta version of CS3 on the Mac, but now you’re just teasing me, since you’ve said that there won’t be a cross-platform license available for the full version. When CS3 comes out, I’ll have no option but to buy the Windows version. Notwithstanding the fact that I already own the Windows version, that’s the only option that will let me run it on both my desktop and my laptop, there being no way to run OS X in a virtual machine. But that’s a degraded user experience for me, for no gain for you.

So why are we still dealing with this inconvenient fiction?

Here’s my call to arms to all software developers: where you’re making a Mac and Windows version of the same software available and currently require two separate licenses, collapse and simplify. Don’t make me run the Windows version under Parallels. It just makes me love you less, and the extra love goes to Parallels instead. I want to love you more.



Confabb is a conference portal and social networking service

Filed under: — adam @ 2:13 pm

Over the past six months, in addition to all of the other things going on in my life, including several exciting other projects, I’ve been working as the lead architect for Confabb, a comprehensive conference portal and social networking service. It’s a testament to the magic of rails and modern business practices that we’ve been able to pull this together with an entirely distributed team, some of whom have never met each other, in our spare time, with an outlay of cash measured in hundreds of dollars. On that note, the incredible rails deployment team at EngineYard deserves our unqualified thanks.

Check it out!

If you’re at all interested in conferences, we should have something interesting for you. On top of the large conference database, we’ve got features to help you track conferences you’re interested in, review and rate conferences and speakers, plus some treats for speakers and conference organizers.

The application has been an interesting ride. It fills a real need, and provides solid, useful features. After 10 years of building CMS and intranet systems for clients, I’ve spent the past few years on viscerally owning the projects I’m working on. This is the first of those launches, but it’s not the last. Stay tuned in the next few months to see what else I’m working on.

Techcrunch covered our launch today:



Dyson Root 6 is a bit of a marketing disaster

Filed under: — adam @ 11:21 am

So… wow.

I have a Dyson upright vacuum, and it is quite simply far far better than any other vacuum cleaner I’ve ever owned. I bought the newly released Dyson Root 6, the handheld model.

The only handheld that doesn’t lose suction… while it has charge.

It’s outstandingly good from a cleaning perspective – it does actually work very very well. But what they don’t tell you is that while the battery does charge faster than others (3.5 hours), it only lasts for 5 minutes on a charge. As a result, it’s really only good for spot cleaning, and not as a general purpose dusting vacuum, which means it misses an entire big use case of a handheld vacuum – carrying it around while cleaning the house to use for dusting shelves, surfaces, ledges, nooks, crannies, etc…. When I did this, I very quickly found that I had a completely dead battery, and I had to charge it again for 3.5 hours before being able to use it again.

What’s happened here is that, like Apple, Dyson has decided that they’re going to focus on one usage pattern (keep the vac in the charger and pull it out occasionally for spills and then put it right back in the charger) and optimize that pattern, completely ignoring any other possible uses that the customer might want to put the device to. Unfortunately, in this case, I think they’re going to be hard pressed to find many people willing to shell out $150 just for spot cleaning.

Because of the real-world mechanics of lithium-ion batteries, the expected usage pattern of the vac (keep it in the charger most of the time so it’s always ready for short bursts) is at odds with the strategy for maximizing the life of the battery (drain the battery completely, then recharge fully before using again), and in a year, the effective run time will be 2.5 minutes, not 5.

The value proposition would be a lot better if they included a spare battery or two that you could leave in the charger and swap out with the dead one, so you could at least rotate them and have some expectation of having a live one if you’re actually using the thing. Arguably, it has advantages over, say, a dustbuster, but at at least 3-5 times the cost for less than half of the usage pattern, I’m not sure it’s worth it.

I might have been more receptive to this idea if they’d said outright – “look, we made it work for 5 minutes, but for those 5 minutes, it’ll work much better than any other handheld vac”. But they didn’t. They completely glossed over this glaring design failure, and it’s kind of a surprise. Judging from the tone of voice of the customer service tech I called to find out if this was normal, they’ve been getting this question a lot, and it sounds like they’re a bit insulted that people would harp on something that they don’t consider to be a failure while overlooking the substantial advantages that they have produced. It’s almost a case study in misunderstanding the requirements of your audience. A 5 minute battery life is not an acceptable feature for a handheld vac, and if there’s a good reason why it should be, Dyson should have made some effort to educate people instead of just throwing it out there and letting people figure it out for themselves. I suspect that there isn’t, and this is just a design flaw that they haven’t been able to fix and one they’re trying to ignore. The users of the device, unfortunately, aren’t granted such a luxury, and the failings of it are far more evident than the successes.

That said, it’s certainly an open question about whether to return it or not, because those five minutes definitely suck as much as they should.



Smack about the Finder

Filed under: — adam @ 10:58 am

Following on my Ramblings of a Switcher post, someone got me started on what’s wrong with the Finder. Here’s a short list just off the top of my head:

1) Why is there no option to display folders first? Descending into the file tree is a decidedly different cognitive action from looking at the files in the current directory. This is an extension of the “you shouldn’t care where your files live and I’m going to make it difficult for you if you do” problem. Sort by kind is not an option if I want the files in the directory sorted alphabetically.

2) I like the idea of the multi-column file browser, but why do I have to resize each column separately, and why is that control so obtuse? If I extend the last pane so it’s wider and I can see what’s in it, but then I descend another level, why do I need to resize again? Why does making the whole window bigger not automatically resize the last pane? Why is there no sort-by control for multi-column view?

3) Why can I not browse network shares directly from the Network tab? Why do I have to mount them first and then go back to the root and find the share I just mounted?

4) When I make an alias to a directory in the left hand quick links, why can it not have a different name from the directory itself? If I have clients/x/projects and clients/y/projects, and I drag them both to that bar, they both show up as projects, and renaming the alias renames the directories. Gah!

5) Why do I still get new windows open in icon view when I have the “Open new windows in column view” preference item checked?



Ramblings of a Switcher

Filed under: — adam @ 12:24 pm

Having moved my music and my primary laptop over to Apple machines in the past six months, there’s a lot to like, but also a lot to hate.

There are certain pieces of software that are Mac-only that I really prefer to anything available on Windows. TextMate stands out for development – while it’s not perfect, I can’t imagine doing rails coding without it anymore. Delicious Library has proven to be immensely useful for keeping track of what storage boxes I put things in when they’re rotated out to the storage space, a function I didn’t even really realize was missing until I had it. Dashboard works FAR better than anything equivalent on Windows.

On the interface side, while there are some improvements, many things are different for no apparent reason, without actually being better. This doesn’t really bother me, but it did take a little getting used to.

But what really gets me is that there are a bunch of things that are just wrong, for no apparent reason. They’d be easy to fix, but someone made an active decision that the platform was going to behave this way, and yes, I think they’re outright wrong. Some of these are problems with Apple software, some of them just problems with the general paradigm encouraged by Apple, and some problems with the specific pieces of software I’ve chosen (but which seem to be very popular in the Mac community).

  1. There are a number of general interface oddities that make no sense. Why must windows only be resized from the bottom right corner? Why can’t I universally maximize windows? There’s that little green button on the interface. Who knows what it will do? Sometimes, it will maximize the current window to be full screen-ish, but just as often it does something completely useless. A particular failure of this function for which I blame Apple directly is what happens when you press this button when viewing PDF files in Preview. When reading a PDF file, I almost always want to, you know, be able to read the text on the page. The only way to do that is often to have the file fill the whole width of the screen, so the letters are large enough to be legible. There’s manual zoom in Preview, but no way to make the page fill the width of the screen. This makes reading documents in Preview unnecessarily frustrating. Hearing Apple apologists try to rationalize this away is amusing. “Oh, the Mac OS is based around the concept of having multiple windows open at once, so there’s no reason to maximize a window.” Uh, sure. Oh, I forgot, if Apple decides that it wasn’t important, I’m missing the point if I want it.
  2. There’s far too much clicking and insufficient use of keyboard shortcuts. Just about every piece of Mac software I’ve used suffers from this, but some are worse than others. For example, Omnigraffle – generally not a bad interface (although I have a list of other things that are specifically wrong with it), but there’s no way to edit the text of an item without double clicking on it. To add insult to injury, this function is even listed under the Keyboard Shortcuts section of the help.
  3. Don’t even get me started on the Finder.
  4. There’s plenty wrong with iTunes. Why is there no “currently playing” playlist? When you select an album and play it, then go look at another album, then jump to the next track, iTunes stops instead of playing the next song in the album you were listening to. There does not appear to be any way to play an entire album in the background without first making a playlist out of it. Which brings me to….
  5. iTunes management of external music folders is completely broken. There’s no way to synchronize the iTunes library with an external music source folder. If the folder is on a network drive and the network goes away for some reason, iTunes “loses” all of those tracks – they’re still listed, but they can’t be found until they’re individually played, one by one. Adding the external folder again causes all of these “missing” tracks to be doubled, and the only way to clear that out is to dump the entire library and re-add it, which also throws away all of the static playlists. iTunes, inexplicably, gives me the option to display duplicate tracks, but mysteriously no way to remove them automatically. That really helps when you’re dealing with thousands of tracks. Yes, I tried the Remove Duplicates Applescript. No, it didn’t work.
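For what it’s worth, the duplicate-removal pass iTunes refuses to do is trivial to sketch, assuming you can get the library out as a list of records (the field names here are invented for illustration, not the actual iTunes XML schema):

```python
# Collapse duplicate tracks, keeping the first occurrence of each
# (artist, title) pair -- the pass iTunes will display but not perform.
def remove_duplicates(tracks):
    seen = set()
    kept = []
    for t in tracks:
        key = (t["artist"].lower(), t["title"].lower())
        if key not in seen:
            seen.add(key)
            kept.append(t)
    return kept

# Hypothetical library export: the same track listed from a local
# path and again from the re-added network folder.
library = [
    {"artist": "Orbital", "title": "Halcyon", "path": "/music/a.mp3"},
    {"artist": "Orbital", "title": "Halcyon", "path": "/net/music/a.mp3"},
    {"artist": "Orbital", "title": "Lush 3-1", "path": "/music/b.mp3"},
]
deduped = remove_duplicates(library)
print(len(deduped))  # → 2
```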

I complain, because I’d really like it to be better, and I’m surprised that it’s not. Don’t get me wrong – using the Mac is generally pretty pleasant. But these glaring flaws stick out like a sore thumb, and cast an avoidable and visceral pall over an otherwise happy experience.



Privacy is about access, not secrecy

There’s a very important point to be made here.

Privacy in the digital age is not necessarily about secrecy, it’s about access. The question is no longer whether someone can know a piece of information, but also how easy it is to find.

If you take a bunch of available information and aggregate it to make it easily accessible, that’s arguably a worse privacy violation than taking a secret piece of information and making it “public” but putting it where no one can find it (or where they have to go looking for it).

This is a very important distinction when you’re looking at corporate log gathering and data harvesting. Sure – your IP address or your phone number may be “public information”, but it’s still a privacy violation when it’s put in a big database with a bunch of other information about you and given to someone.
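To make the point concrete: each source below exposes one individually “public” fact, but a trivial join on a shared key produces a profile that no single source contained (all data here is invented):

```python
# Aggregation is the violation: merge every record matching one key
# (here, an IP address) from several "public" sources into one profile.
def build_profile(ip, *sources):
    """Collect all fields attached to `ip` across the given datasets."""
    profile = {"ip": ip}
    for source in sources:
        for record in source:
            if record.get("ip") == ip:
                profile.update(record)
    return profile

# Three hypothetical sources, each harmless in isolation:
ad_logs  = [{"ip": "203.0.113.9", "sites_visited": ["webmd.com", "jobs.example"]}]
geo_db   = [{"ip": "203.0.113.9", "city": "New York"}]
reg_dump = [{"ip": "203.0.113.9", "name": "J. Smith", "phone": "555-0100"}]

profile = build_profile("203.0.113.9", ad_logs, geo_db, reg_dump)
print(profile)  # name, city, phone, and browsing habits, all in one place
```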

Tags: , , ,


Informal comparison of organic ketchups

Filed under: — adam @ 3:33 pm

I don’t really enjoy the taste of high fructose corn syrup, which seems to have worked its way into all kinds of places. The only kinds of ketchup that I’ve been able to find that are made with sugar instead are all organic, and I’ve tasted a bunch of them.

Here’s an informal summary of my findings:

  • Heinz Organic ($2.49/15 oz = $.17/oz) : Tasty. Almost exactly like Heinz ketchup, but without the HFCS twang. But even at this reduced price from Amazon Grocery (it was about $1 more for the same size bottle at my local supermarket), it’s the most expensive of the choices. Not worth the extra money.
  • Tree of Life Organic ($4.69/36 oz = $.13/oz) : Very good, but a little fruitier than I like. Still full bodied, and a perfectly acceptable choice. Sort of like getting Hunts if you like Heinz.
  • 365 Organic – Whole Foods ($1.89/24 oz = $.08/oz) : This was my favorite of the four, and also the cheapest. Very well balanced, good acidity. Tastes like Heinz, for the most part, but with a brighter, more persistent flavor.
  • Annie’s Organic ($2.79/24 oz = $.12/oz) : Not good. Very reminiscent of tomato paste, and too thick.


Tags: , , , , ,


Advice for the Democrats

Filed under: — adam @ 9:36 am
  • Everyone expects you to win, and you win: well, everyone expected you to win. No one’s surprised.
  • Everyone expects you to lose, and you win: OMG! You overcame all of the odds and pulled it out!
  • Everyone expects you to win, and you lose: What?? You lost?!? How could that have happened?
  • Everyone expects you to lose, and you lose: Oh well, no one expected you to win. Try again next time.

Tags: , ,


Putting Comments Out of Our Misery.

Dante: You hate people!
Randal: But, I love gatherings, isn’t it ironic?

I hate comments. But I love conversations. As I peruse the web, I find myself (as many of us do) drawn to leave comments across the pages that other people have written. But it’s an incomplete puzzle – a comment as it exists now is an endpoint. It may lead to something else, but it’s up to someone else to figure out what that thing may be, or even if that evolution will happen at all. Comments tend to follow one of two patterns, neither of them productive:

  1. The comment thread trails off as people lose interest, and nothing really comes of it.
  2. The comment thread gets so long that it’s impossible to follow, things get repeated, and the people commenting on the last page aren’t really talking to the people on the first page. Nothing really comes of it.

The process isn’t helping us out here. We haven’t even gotten into vanity comments, flame wars, or any of that stuff that’s detrimental.

Working on ORGware, we’re revamping comments. We’re starting with two major changes, and there will be others. The first big change is that every comment you leave on someone else’s post also gets posted on your own blog, and it will have to be positively rated before it appears anywhere else. If you want to blather on about whatever, you’re free to do that, but you won’t be allowed to join the discussion unless some threshold of other people think you have something useful to say. That’s a relatively minor one, but it’s important. It shifts the focus of the comment from the commenter to the discussion, and it makes it possible for the community to weed out (passively, by ignoring) the irrelevant wanderings.

The second change is far more interesting, and it deals with how the comment thread metamorphoses into something else entirely – a discussion with usable output. Right now, you post, people comment, maybe people make followup posts on their own blogs… and if you want more than that, you have to do it yourself. We’re building in another step. For any post that has an action output, comments are no longer an endpoint – they’re a stepping stone to writing that action output. Writing “good” comments (in the opinion of the original author and/or the community) gets you an invitation to help edit that output product, which can become a letter, or a fax, or an email, or even a followup post for more discussion. Britt has posted a good overview of the interface I designed for this, which we’re simply calling the comment editor now until we come up with a better term.

More to come…

Tags: , , , , ,


I’m with Ebert

Filed under: — adam @ 1:44 pm

After that last debacle, we saw Superman Returns on Sunday, at a different theater (but also an AMC one, since they seem to have acquired almost all of the good Manhattan theaters), and our experience was ruined in an entirely different way. We went to the DLP showing, for ENHANCED PICTURE AND SOUND. The sound was great admittedly, but the projector was miscalibrated and about 2-3 stops too dark. Many scenes were missing shadow detail, and some were entirely black. When we complained, the people at the theater first said “there’s nothing wrong with it”, then “that’s how it’s supposed to be”, then “it can’t be calibrated on our end”, then finally “we’ve been complaining to the projector people and we have someone coming to look at it next week”.

WTF?!?! Why are you lying to me? Just come right out and say it’s broken, we fucked up, and give me my money back?!

Anyway, I now have six free tickets to AMC theaters. I’ll have to find something interesting to do with them, since I don’t envision wanting to go back to the theater anytime soon.

As for the movie itself, I was thoroughly underwhelmed. Mainly, I was pretty strongly appalled that they seemed to have not decided if this was a sequel or a reboot, and as a result many things about it were confused. If this is 5 years later, why does everyone appear 7 years younger? We’ve already done the “Lex Luthor does something diabolical to increase his real estate holdings” and the “Miss Tessmacher gets all upset when people are going to die and crosses Lex at the last minute” plot elements, and they simply feel repeated here without any significant evolution. Why is there no mention of the last time Superman simply disappeared for no apparent reason, in Superman II?
Other random comments:

  • I’m not going to comment on the physics, because that’s a losing battle.
  • Yes, once again, please read Man of Steel, Woman of Kleenex before making a movie like this.
  • On a DLP screen, you can see entirely too much of Brandon Routh’s makeup. In some closeup scenes, his face looks like it was added in after the fact with CGI.
  • Where’s all the rest of that great Kryptonian technology that Lex was going to use to defend his giant island?
  • Kate Bosworth was simply not the right choice for Lois Lane. James Marsden is not terribly compelling. The rest of the casting was pretty much on-target. Kevin Spacey was great, but should have toned down the tag lines a bit. Okay, a lot. Show me the money or something.
  • That kid should totally have had Batman Underoos.
  • My favorite scene was the one where the lights go back on and everyone else realizes that Lex has backed away from the pool.

Tags: , ,


On the integration of Web 2.0 apps

Filed under: — adam @ 9:48 am

Britt sent me this link lamenting the lack of interaction between Web 2.0 services:

This is an interesting and correct observation, but let’s look at an analogous situation – unix command line tools.

Unix is designed around the pipe – the ability to string long chains of commands together, each of which only does a small thing, to accomplish what you actually want to do. There are some places where this breaks down, but by and large, this method has been spectacularly successful.

Web 2.0 apps are much better positioned to emulate this than Web 1.0 apps, but they’re still not there yet.

What’s missing are the switches that enable those apps to play nice with other apps.

You’re probably familiar with ls, which lists files in a directory:

fields@server2:~$ ls /tmp
mysql-snapshot-20060621.tar.gz mysql-snapshot-20060621_master_status.txt

ls also has another mode, that outputs a long listing, which includes more detailed information about the files:

fields@server2:~$ ls -l /tmp
total 841520
-rw-r--r-- 1 root root 860863512 Jun 22 19:08 mysql-snapshot-20060621.tar.gz
-rw-r--r-- 1 root root 382 Jun 22 18:50 mysql-snapshot-20060621_master_status.txt

Once you have that, you can pass the list to other programs that may want to filter the list by one of those pieces of data. The default mode is useful for dealing with the files themselves, but less useful if you want to interact with their metadata. What if the -l flag were left out, and that behavior was restricted to maintain ls’s competitive advantage (in the hypothetical situation where it’s something provided by your filesystem vendor)? If the information you’re looking for isn’t returned at all, you may have no other way to get at it. Maybe you’d have to use the vendor’s lslong, which costs money. You may be just fine with that, or you may be compelled to look for a filesystem competitor that does what you want. I’d argue that ls is less useful without that ability. That’s the situation we’re looking at when a Web 2.0 API is lacking certain core features to interact with the data it represents.
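To make the composability concrete, here’s a small sketch (the scratch directory and file names are invented for the example) that uses the size column ls -l exposes to filter a listing with standard tools:

```shell
# Set up a scratch directory with two files of known size.
mkdir -p /tmp/ls-demo
head -c 2097152 /dev/zero > /tmp/ls-demo/big.dat    # 2 MB
head -c 1024    /dev/zero > /tmp/ls-demo/small.dat  # 1 KB

# Because ls -l exposes the size as column 5, any downstream tool can
# filter on it -- here, awk prints only files larger than 1 MB:
ls -l /tmp/ls-demo | awk '$5 > 1048576 { print $NF }'
```

On a typical GNU or BSD system this prints just big.dat. The point about withheld switches follows directly: if ls only ever returned its default short listing, no amount of piping downstream would recover the sizes.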

Is that an acceptable tradeoff? Maybe it is for a free service. It seems less so for a service you pay for, because fundamentally, you’re paying for the ability to manage your data, not for the ability to use the particular software – that’s the whole concept behind software as a service in the first place.

This is, of course, made more complicated by the fact that Web 2.0 isn’t just about data sharing, it’s also about more dynamic interfaces. In theory, the two are interconnected: the dynamic interfaces work better because they deal with small chunks of data in more standardized formats, and the data access mechanics are decoupled from the actual interaction semantics, which should make outside, non-gui access to your data easier with standard tools. In practice, that rarely seems to happen.

This is the only good rationale I’ve heard for using XML for gui/backend interchange.

These are good things to be thinking about when designing web applications. It’s not enough to think of them in a vacuum; we have to consider the implications of living in the ecosystem. It’s possible that that means opening up far more access to the underlying workings than we’re accustomed to. I would LOVE to see some applications that fully work if you take away the browser front-end, but still interact in exactly the same way via HTTP.
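As a minimal sketch of that last idea (the directory, file, and port here are all invented for the demonstration, and the “application” is just static files), here’s a service that works identically with or without a browser, because the interface is nothing but HTTP:

```shell
# Publish a piece of data behind an HTTP interface.
mkdir -p /tmp/web-demo
echo '{"status": "ok"}' > /tmp/web-demo/status.json

# Serve it with a throwaway server (port 8123 is an arbitrary choice)...
cd /tmp/web-demo
python3 -m http.server 8123 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# ...and fetch exactly what a browser front-end would render, with no GUI:
curl -s http://localhost:8123/status.json

kill "$SERVER_PID"
```

A real Web 2.0 service is obviously more than static files, but the test it suggests is the same one: take away the browser, and see whether curl can still do everything the interface can.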

[Update: More on this discussion from Phil Windley.]

Tags: , , ,


Testing different monitor calibration targets

Filed under: — adam @ 8:53 am

With the purchase of new monitors, I noticed that I was getting really muddy blacks, even though I had the contrast set properly. Through some trial and error, I discovered that the Spyder2Pro I was using to calibrate was wiping out whatever changes I made to the contrast and brightness settings, and flattening about the lower fifth of the gradient curve to black.

I discovered that I could alleviate this by calibrating to a different gamma/temperature target – I had been using the windows default of 2.2-6500K. Through some more trial and error, I found that the “right” balance seems to be 1.6-6400K – colors are still crisp, and I still get a good range of shadows. I now suspect that my old monitors didn’t actually have the limitations I’d attributed to them, and that the calibration was at fault rather than the hardware.

Have you experimented with different gamma/temperature targets? I know the mac defaults to 1.8-6500K, but when I tried that one, it was still way too dark in the shadows (testing on a 64-band gradient). 1.6-6400K looks great, but it seems like a weird number to end up at.
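For what it’s worth, the arithmetic backs up what I’m seeing: a display’s output luminance is roughly the input signal raised to the gamma, so a lower gamma target lifts the shadows. A quick check with awk (the 20% input level is just an arbitrary dark pixel chosen for illustration):

```shell
# Map a dark pixel (20% signal) through two display gamma curves.
# Output luminance is approximately signal ^ gamma.
awk 'BEGIN {
  v = 0.2
  printf "gamma 2.2 -> %.3f\n", v ^ 2.2  # about 0.029: crushed nearly to black
  printf "gamma 1.6 -> %.3f\n", v ^ 1.6  # about 0.076: visible shadow detail
}'
```

That’s roughly two and a half times the luminance for the same dark input, which is consistent with shadow bands that were solid black at 2.2 becoming distinguishable at 1.6.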

Tags: , , ,


Collected thoughts on the futility of online communities

This is a long post collecting comments and thoughts from some emails and conversations with Britt Blaser, Doc Searls, and others. Some of this is from external impressions of the Dean campaign (I wasn’t involved, and I haven’t found a good postmortem), but also about my own participation in online communities and the lack of incentive that I often feel to do so.

There is a huge untapped market for community software. There’s a lot of “community software” out there, and it all fails on the same key point – it’s all centered on the software itself (or more specifically, the website experience), and fundamentally, communities don’t happen in discussion groups or impersonal online participation. People come to a community like dailykos or metafilter or whatever, and they “join” the community, but those ties are fragile, and the experience of most participants is that they almost never extend to anything beyond participating in the online community itself. If you suddenly disappear, no one will come looking for you. This is not the same as an actual community.

Reading isn’t participation in a community. Writing to the public isn’t participation in a community, and the fatal flaw of the existing approach is that the underlying assumption is that the collective act of reading and writing is equal to participation. This is especially misleading if the online community is supposed to be mirroring some sort of participation in the real world, like political involvement.

The end result is exactly what we saw with the Dean campaign, as perceived by an outsider. Lots of “participation”, lots of “involvement”, but everybody sat around reading and writing and thinking that they were somehow involved, but when it came down to it, no one got up to vote.

Now, actually, there’s a corollary problem here, which is that the online community itself, while very vocal, was also VERY bad at doing anything to engage anyone outside of the online community, because they spent all of their time reading and writing, and those activities, even as they fail to engage those inside the online community to action, COMPLETELY fail to engage anyone outside the online community.

As I wrote the above, the universe graciously provided a perfect example to illustrate my point.

It’s an article about the futility of discussing things online, which has somehow accumulated an inordinate number of comments.

I’ll pause for a moment while that sinks in.

So, we have some problems to fix. Participation in the online community needs to have the following properties:

1) It should be centered around activity that breaks out of the online community. This needn’t actually be physical meetings, although those are also good, but all actions must be classified as “inward” (aimed towards engaging with others in the online community) or “outward” (aimed towards engaging with others outside the online community). EVERY inward action must have a corresponding outward action. If it doesn’t, there’s already a name for this – it’s called “preaching to the choir”, and it’s the death of activism.

2) It should allow and encourage those inside the online community to engage with each other temporarily to reinforce the commitments of those who are already involved, but all such actions should be considered subsidiary to engaging with others outside the online community. Think of this as the difference between vegetables (outward) and chocolate (inward). A little bit of the latter is very rewarding and tastes good, but if that’s all you eat, you get fat and die.

3) It should allow those in the online community to evolve internally the mechanisms for accomplishing goals outside the online community. This may involve consensus building, electing representatives inside the online community, collaborative letter writing, legislation hashing, and so on.

4) It must have a mechanism for elimination of cruft. Old ideas, bad ideas, unpopular ideas, and irrelevant ideas are all barriers to entry. The online community must be able to decide on what the salient points are, and delete the rest. I’ve had it with relativistic egalitarianism. There is such a thing as a bad idea, and they’re distracting and harmful. We need to create a marketplace where all ideas have an equal opportunity to flourish, but if they don’t, then let’s be done with them. Archive the discussion for posterity, and clear it out of the center of attention.

It’s not enough to talk, communities must be a driver for action.

Tags: , , , , , ,


Not terribly impressed with the flickr redesign

Filed under: — adam @ 9:53 am

Flickr got a big redesign this week. Some of the visual tweaks are good, but overall, my feeling is “really? that’s it?”.

The thumbnails are bigger, and there are more per page by default. Brilliant! Why didn’t I think of that?

I don’t understand why the sets moved from the left to the right, but there’s still a whole bunch of wasted white space on the page.

The new organizational structure doesn’t really seem to make navigating the site much easier, except that the archive page is easier to find. That’s good.

The new Organizr is AJAX instead of Flash, and it doesn’t work in Opera. Ditto for basically all of the other new dynamic elements on the page. Thanks for that, I guess. Everyone else seems to be able to make AJAX pages that work fine in Opera. Why can’t you?

Where’s the large version slideshow? Where’s the setting to view all pictures from your contacts? Where’s the ability to navigate your contacts’ pictures as if they were a set or a group?

Flickr’s still great, of course, but I’m thoroughly underwhelmed by the changes.

Tags: , ,

It’s like a heatsink for your head – in praise of the Chillow

Filed under: — adam @ 7:59 am

It sounds stupid.

Okay, it sounds really stupid.

But it works.

I’m a big fan of the Chillow. It’s a sealed foam pad that you fill with 10 cups of water. The pad then acts as a heat sink to draw heat away from your body and release it to the air. I was given one a few summers ago by a friend who ordered one and got a free one as part of a promotion, and I was immediately hooked. When I’m hot, I don’t sleep well. With a chillow, I sleep a lot more soundly, and often I fall asleep almost immediately upon laying my head down on it.

It’s not without its problems – it’s very dense, so if you don’t like the feeling of a big weight under your head, it may not be comfortable for you. It doesn’t bother me. It also may tend to bunch up if the pillow under it isn’t supportive enough to keep it in one place. I’ve never had a problem with leakage. The previous ones I’ve had didn’t last forever – they each developed a stale smell after about 6 months and I tossed them. I’m not sure if that’s the water or the plastic casing, but the newest one I’ve just gotten seems to be made out of a slightly different material, so we’ll see. The instructions do not say to periodically change the water, so I may also try that. Even still, at around $20 per six months, the recurring cost is easily worth the benefits to my sleep patterns and comfort.

It’s a stupid-sounding product that I probably would never have even known I needed if the decision had been left to me, but I’m a total convert.

Tags: , , , ,


Elections are not enough feedback

Filed under: — adam @ 9:59 am

Another idea that came out of the tired and somewhat inebriated tail end of last night’s gathering, that I didn’t want to forget.

Our system of representative democracy is predicated on the core idea that elected representatives are beholden to their constituents, because if they’re not, they’ll get elected out on the next cycle. But this is typically a four-year turnaround, and that’s plenty of time to do irreparable damage. I posit that this is not enough feedback, and we need to have a way to get citizen input taken more seriously, with direct consequences for representatives who fail to listen. This also probably goes along with increasing the number of representatives, and possibly giving up on the presumption that people who live near each other necessarily share the same views (or have views that are not directly contradictory and can be rationalized into a coherent position by one representative).

I have to think about this more.

Tags: , ,

Dinner with Britt and Doc

I had the rare and interesting pleasure of having dinner with Britt Blaser and Doc Searls last night, since Doc is in town for Syndicate (which I’ve never attended, but which does seem to attract fascinating conversations to my doorstep every year).

Doc and Britt

Asked to pick a restaurant for our gathering, I suggested D’Or Ahn, a newish Korean fusion place in west Chelsea. I’d eaten there a few times, and the food has always been first-rate. Unfortunately, the sushi chef was out for the evening (for reasons I didn’t entirely catch, but which seemed to involve some sort of surgery), so their wonderful raw bar was closed. However, the rest of their selection more than makes up for it. The menu is somewhat confusing, separated into “raw”, “cold”, “hot”, and “main” (which are also hot) sections, but the best advice is simply to ignore that, order for the table, and share everything. Flavor is the overriding component here, and everything is full of it, with rich but not overpowering sauces.

Scallops are outstanding now, so we opted for those, prepared a few ways, from a simple pan sear to encased in a crispy sesame leaf (the latter was delightful). The slightly seared duck breast with droplets of foie gras was, as expected, delicious (and it’s hard to go wrong with those ingredients). I’m a huge fan of braised meats in general, and their short rib preparation is beautiful, with a celeriac puree that’s ethereal mixed with slightly crunchy green onion slivers. Their take on the classic Korean dish bibimbop rounded out our selection of “appetizers”. I would have liked to have the rice a bit crunchier, but the flavor of the mushrooms mixed with a lightly soft cooked egg mixed into the rice leaves nothing to complain about. For the “main”, we split the lobster, which is literally a split lobster served spiced and grilled with a melon confit and a lobster claw chunk porridge. Lobster and melon is a combination I first discovered a few years ago in Maine, and I was instantly hooked. The sweet fruit complements every one of the notes in the sweet meat.

We paired everything with one of my favorite sakes – Otokoyama – served cold in boxes.

For dessert, we did an apple (a cake with sorbet) and cheese course (a Fourme d’Ambert “grilled cheese”), which were the two choices we wanted to try. Much as they did not go together in the least, both were still excellent. Their desserts tend to range from enjoyable to outstanding, and I’ve never been disappointed. A few glasses of port rounded out the libations.


But of course, the food was secondary to the conversation. With these two heavyweights across the table, the topics ranged across the board, from social networking, to how to handle spam and read email with mutt, to hacks for piloting a zero-g suspension flight (I’ve never had the honor), and of course to politics and the role of technology. Some portion of what was said can not or should not be replicated in a public forum, and so I won’t, but there was one great new idea (to me) mentioned in the course of a discussion about Doc’s new Santa Barbara community trying to get very high speed internet access and looking to bypass the traditional carriers who refuse to provide the kind of speeds they want. Britt mentioned Free Entry, a term which I’d never heard before. In a certain sense, this concept defines the growth of disruptive web services – if the current provider isn’t doing a good enough job, they should be replaced by someone who’s selling what people want to buy. This goes right to the heart of why lock-in legislation to protect antiquated business models is a bad bad bad idea. It doesn’t protect competition, it’s not an incentive to develop, it’s simply “protection” for companies to foist bad products on consumers who want something better. Disruptive business models work, because they’re good for the consumer.

It’s such a simple idea, yet so rarely practiced. If people don’t want to buy what you’re selling, sell something better. It’s almost the opposite of traditional advertising. It was a strong theme of the evening.

Tags: , , , , , , , , , , ,

(Larger photos)


In which I go all Top Chef on Craftsteak

Filed under: — adam @ 8:44 am

We had the pleasure of eating at the newly formed Manhattan outpost of artisan meat yesterday evening, the newest jewel in the Colicchio empire – Craftsteak. There’s a constant assertion that one should avoid new restaurants, but I have really tremendously enjoyed every experience I’ve had with visiting restaurants in their first month. In many cases, these have even been preferable to subsequent excursions. Even as the staff may not have hit their stride yet, there’s something undeniably fresh about a new restaurant, and that adds a lot to the dining experience for me. Think Like a Chef is really the book that got me interested in pursuing serious fine cooking, so I feel a special connection to Chef Colicchio’s places.

The decor is fabulous, of course. The layout of the space has a good flow, with the main dining room separated from the bar and raw bar by a characteristic walk-in transparent wine cellar. The dining room is very open and has exquisitely high ceilings. Even at full capacity, the sound level was pleasant.

And, on to the food.

We started with three appetizers for the four of us – roasted veal sweetbreads, roasted foie gras, and wagyu beef tartare. I’m a big fan of sweetbreads, and these were among the best I’ve ever had, and a generous portion for an appetizer course. The foie gras was outstanding in flavor, although it was not completely cleaned of veins (despite, as Mayur noted, explicit instructions to do this in Think Like a Chef). The wagyu beef tartare was served with a quail egg and toast, and it was tasty, if not terribly impressive. We all felt that the presentation was too much like traditional beef tartare, and would have preferred a coarser cut usually reserved for fish tartare, to really highlight the exceptional texture of this fine meat.

And now, the steaks.

The selection is large and detailed, from a few varieties of corn-fed Hereford beef, both wet and dry aged, through grass-fed Hawaiian beef, to the premium grade Wagyu beef (which tempted all of us, but which budgets demanded we resist). Surprisingly, the waiter was pushing everyone to get medium rare, but couldn’t really explain why beyond “that’s what the chef recommends”. Despite our mostly ignoring that advice and asking for more on the rare side, one of the steaks did arrive fully medium rare, and had to be sent back. We had a similar problem with the rabbit. It was actually a beautiful presentation, with the various pieces separated – leg, a mini rib rack, some “pulled” rabbit meat, and a tenderloin. This would have worked well, but the tenderloin was slightly underdone. However, once we got past those two problems, everything was great. I opted for the grass-fed filet mignon, and it was one of the best steaks I’ve ever had, and outstandingly prepared. It was uniformly and perfectly rare all the way through (about 2.5 inches thick), and impressively tender and flavorful. The other two steaks on the table – a 42-day dry aged strip and a grass-fed ribeye – were also superlative. As with the main Craft, sides are ordered and prepared separately. We opted for the more seasonal choices – roasted ramps, sugar snap peas, and baby carrots, and a pea and morel risotto. All of them were up to the usual standards.

We paired with a moderately priced Qupe syrah, which was intensely berry-oriented, and matched well with everything.

The desserts (pineapple upside down cake, a warm chocolate tart, and monkey bread – a cinnamon and nut encrusted brioche) were all acceptable, but the balance was off a bit on everything. A little too sweet, too salty, or just not quite right. The espresso was sub-par, disappointing and bitter. This wasn’t enough to really ruin the meal, but it wasn’t an impressive close, and it’s obvious that the most attention has been paid to the meat.

Overall, I had a thoroughly enjoyable and delicious meal that very much worked for me despite the nitpicking flaws above, and the very exceptional quality of the steak is really the standout here, the gem that puts the shine on the whole thing.

I see great potential.

Tags: , , , , , ,


Hidden dangers for consumers – Trojan Technologies

I’ve been collecting examples of cases where there are hidden dangers facing consumers, cases where the information necessary to make an informed decision about a product isn’t obvious, or isn’t included in most of the dialogue about that product. Sometimes, this deals with hidden implications under the law, but sometimes it’s about non-obvious capabilities of technology.

We’re increasingly entering situations where most customers simply can’t decide whether a certain product makes sense without lots of background knowledge about copyright law, evidence law, network effects, and so on. Things are complicated.

So far, I have come up with these examples, which would seem to be unrelated, but there’s a common thread – they’re all bad for the end user in non-obvious ways. They all seem safe on the surface, and often, importantly, they seem just like other approaches that are actually better, but they’re carrying hidden payloads – call them “Trojan technologies”.

To put it clearly, what I’m talking about are the cases where there are two different approaches to a technology, where the two are functionally equivalent and indistinguishable to the end user, but with vastly different implications for the various kinds of backend users or uses. Sometimes, the differences may not be evident until much later. In many circumstances, the differences may not ever materialize. But that doesn’t mean that they aren’t there.

  • Remote data storage. I wrote a previous post about this, and Kevin Bankston of the EFF has some great comments on it. Essentially, the problem is this. To the end user, it doesn’t matter where you store your files, and the value proposition looks like a tradeoff between having remote access to your own files or not being able to get at them easily because they’re on your desktop. But to a lawyer asking for those files, it makes a gigantic difference in whether they’re under your direct control or not. On your home computer, a search warrant would be required to obtain them, but on a remote server, only a subpoena is needed.
  • The recent debit card exploit has shed some light on the obvious vulnerabilities in that system, and it’s basically the same case. To a consumer, using a debit card looks exactly the same as using a credit card. But the legal ramifications are very different, and their use is protected by different sets of laws. Credit card liability is typically geared in favor of the consumer – if your card is subject to fraud, there’s a maximum amount you’ll end up being liable for, and your account will be credited immediately, as you simply don’t owe the money you didn’t charge yourself. Using a debit card, the money is deducted from your account immediately, and you have to wait for the investigation to be completed before you get your refund. A lot of people recently discovered this the hard way. There’s a tremendous amount of good coverage of debit card fraud on the Consumerist blog.
  • The Goodmail system, being adopted by Yahoo and AOL, is a bit more innocuous on the surface, but it ties into the same question. On the face of it, it seems like not a terrible idea – charge senders for guaranteed delivery of email. But the very idea carries with it, outside of the normal dialogue, the implications of breaking network neutrality (the concept that all traffic gets equal treatment on the public internet) that extend into a huge debate being raged in the confines of the networking community and the government, over such things as VoIP systems, Google traffic, and all kinds of other issues. I’m not sure if this really qualifies in the same league as my other examples, but I wanted to mention it here anyway. There’s a goodmail/network neutrality overview discussion going on over on Brad Templeton’s blog.
  • DRM is sort of the most obvious. Consumers can’t tell what the hidden implications of DRM are. This is partly because those limitations are subject to change, and that in itself is a big part of the problem. The litany of complaints is long – DRM systems destroy fair use, they’re security risks, they make things complicated for the user. I’ve written a lot about DRM in the past year and a half.
  • 911 service on VoIP is my last big example, and one of the first ones that got me started down this path. This previous post, dealing with the differences between multiple kinds of services called “911 service” on different networks, is actually a good introduction to this whole problem. I ask again: “Does my grandmother really understand the distinction between a full-service 911 center and a ‘Public Safety Answering Point’? Should she have to, in order to get a phone where people will come when she dials 911?”

I don’t have a good solution to this, beyond more education. This facet must be part of the consumer debate over new technologies and services. These differences are important. We need to start being aware, and asking the right questions. Not “what are we getting out of this new technology?”, but “what are we giving up?”.



How to be the boss

Filed under: — adam @ 4:06 pm

I was thinking about using this to kick off a business and technology blog I’m planning, but I just haven’t had the time to do the work necessary to launch it, and this was too good to not share (and a corollary rule is that when you’re the boss, you need to realize early that things aren’t going to work out and make alternate arrangements).

This is from an exchange with a client who has a problem which is, in my experience, not unique among small business leaders – they’re the bottleneck.

“I don’t like being the bottleneck but I am on most projects and I can’t seem to break the trend”.

The answer I gave him, and the answer I give you, is this:

Stop doing other people’s work for them. Stop being the customer. You’re the bottleneck because you have the vision. When someone does some work, they’ll reach a point where they have to stop and check it with you, because you have the vision for what it should be. If they had the vision, they’d know if their work was right or not, but they’re not sure. And when that happens, sometimes, maybe even often, instead of helping to transfer the vision, you get involved in their work more deeply because it worries you that they don’t have the vision and that means you need to do more oversight. That makes you busier, and takes away the time you have to approve the other things that are waiting for your approval of the vision. Maybe you’ll even take over some of those things, “because it will be faster if I just do it”, which sucks even more of your time, which makes you more of a bottleneck. As the boss, concentrate on transferring the vision instead of doing work that other people can and should be doing. You won’t always be able to, but wherever you can, it will help. Focus on giving people a template to check their work against, and you’ll have to do less of it.

This is not to say that you shouldn’t be involved, but when people bring work to you for approval, it goes a lot faster if they’re already confident that it’s right.



Taking advantage of the Commons

Filed under: — adam @ 10:21 am

I received this email in my flickr inbox this morning:

“I am writing to let you know that one of your photos with a creative commons license has been short-listed for inclusion in our Schmap Rome Guide, to be published late March 2006.”

It came with a link where I was given an opportunity to remove my photo from the queue or approve it for use in their guide. I responded to this before I had my coffee, so I didn’t capture the text from the page as I should have before clicking no. But it had a short blurb of text with something along the lines of “oh, even though some people may disagree, this isn’t really a commercial use, because it’s free to download and the ads support keeping it free”.

I might buy that if there were any sort of community sharing going on here. I don’t see the content of the site being released under a CC license; I see a big fat “All rights reserved” at the bottom of the homepage, and the terms of use (which also, incidentally, say you’re not allowed to use ad blocking software) contain this choice little gem:

The geographic data, photographs, diagrams, maps, points of interest, plans, aerial imagery, text, information, artwork, graphics, points of interest, video, audio, listings, pictures and other content contained on the Site (collectively, the “Materials”) are protected by copyright laws. You may only access and use the Materials for personal or educational purposes and not for resell or commercial purposes by You or any third parties. You may not modify or use the Materials for any other purpose without express written consent of Schmap (”Schmap”). You may not broadcast, reproduce, republish, post, transmit or distribute any Materials on the Site.

This is a gross perversion of what Creative Commons is about. Ad-supported “free” content is commercial (unless Google is “just trying to organize the world’s information and any money collected from selling ads is just helping keep that goal alive”). Taking CC-licensed media from other sources and roadblocking the license while claiming that the use is non-commercial is possibly deceptive.

[Update: there's more discussion on this Flickr Central thread.]



Love for gentoo

Filed under: — adam @ 12:41 pm

I’ve been using gentoo as my main linux distro for about six months now, and I still love it.

The best place to start is with the gentoo handbook:

Once you get everything set up, installing new software is as simple as:

$ sudo emerge rails

which will find rails, analyze the dependencies, fetch all of the source packages, and build it for you. Don’t have ruby installed? No problem – it’ll get and install that automatically because rails depends on it. There are plenty of other options as well.

Even better, every package you install is custom compiled for just your options, and targeted at your specific processor family instead of just the regular i386 or i686 or whatever generic binaries. As a result, gentoo tends to perform better than other distros on the same hardware.

Like apt, pretty much everything you need is downloadable with portage, except that it compiles from source for your specific system. Need ssl support? ruby support? You tell it. All of this is accomplished through the very rich USE environment variable, and all new compilations automatically pick up the dependencies you need.

Here’s the current list:

If you change the list, you can also dynamically recompile everything on your system to update the dependencies (or selectively just the things that are affected).
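As a concrete sketch (the flag names here are just examples, not my actual list), the USE flags live in /etc/make.conf, and a world update rebuilds whatever they affect:

```shell
# /etc/make.conf -- example USE flags; pick the features you want baked in
USE="ssl ruby -kde -gnome"

# Rebuild the packages affected by changed USE flags:
sudo emerge --update --deep --newuse world
```

The --newuse switch is what makes this selective: only packages whose USE flags actually changed get recompiled.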

Yes, compiling takes a long time (sometimes a long long time), but there are ways to distribute it if you have multiple machines, and hopefully, you don’t actually need to do this very often. Also, you can usually just let it run – it doesn’t need constant attention.

If you’re looking around for a new linux distro, check it out. The up-front work may seem intimidating, but it’s really a great foundation for long-term maintainability.



This is what we mean by abuse of databases

Okay, here it is, folks.

When someone asks “what’s wrong with companies compiling huge databases of personal information?”, this is part of the answer:

Someone signed up for a Miller Brewery contest using a throwaway email address, and they tracked her down and signed up her “real” email address. The second link above concludes that they did it by using information collected by Equifax’s direct mail division, Naviant (which was supposed to have been shut down years ago). They own the domain from which the email was sent.

When we talk about privacy, it can mean a number of things. But indisputably, one of the definitions is “the right to be free from unauthorized intrusions”.

Maybe this is a small thing, but it’s a terrible precedent.

This person obviously didn’t want to be permanently signed up for messages from Miller. Letting an address expire is probably the ultimate form of “opt-out”. Yet, Miller thought it was okay to use personal information gleaned from who-knows-what sources to tie her to another email address, and send her more spam. Would they do the same thing if you changed your phone number to avoid telemarketers? What else is fair game?



Ruby script to fetch hosts file and turn it into a privoxy block list

Filed under: — adam @ 1:34 pm

There are plenty of servers out there that, if they just disappear from the internet, not much bad happens. They include known ad server, spam, and spyware sites. The fine folks at maintain a good list, which is up to about 10,000 entries now. Since I couldn’t figure out how to get privoxy to honor the local hosts file when doing DNS lookups, I wrote a little ruby script to fetch that file, break it down, and output a privoxy block list.

I chose ruby, because I’ve been working with it lately, and I really really like it. I find it incredibly easy to write, read, and work with.

If you’re a ruby developer, improvements of all kinds are welcomed. Please feel free to comment and discuss ways I could have made this more ruby-ish. Also, I haven’t quite grokked what the right approach is for ruby error/exception handling. Opinions on where checks should go are welcomed. For example, the whole thing is wrapped in a conditional block of opening the file. Do I need to handle any exception conditions, or is that all just taken care of properly?


require 'open-uri'

hosts = []
header = true

# the hosts file URL goes here
open('') do |file|
  file.each_line do |line|
    # skip lines until the end of the header
    header = false if line =~ /^#start/
    next if header
    # skip comments
    next if line =~ /^\s*#/

    # add the hostname (the second column) to the array
    hosts << line.split[1]
  end
end

# write the output file
open('privoxy_user_actions.txt', 'w') do |outfile|
  outfile.puts '{ +block }'
  hosts.each do |host|
    outfile.puts host
  end
end
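
On the exception-handling question, here’s a minimal sketch of one way I might guard the fetch. The URL is a hypothetical placeholder, and note that on Ruby 3 and later, Kernel#open no longer fetches URLs, so URI.open is used:

```ruby
require 'open-uri'

# Hypothetical placeholder -- stands in for the real hosts-file location.
HOSTS_URL = 'http://example.invalid/hosts.txt'

# Fetch the hosts file and return the hostnames; on any network or HTTP
# failure, warn and return an empty list instead of crashing.
def fetch_hosts(url)
  hosts = []
  URI.open(url) do |file|
    file.each_line do |line|
      next if line =~ /^\s*#/ # skip comments
      hosts << line.split[1]
    end
  end
  hosts.compact
rescue OpenURI::HTTPError => e
  warn "server returned an error: #{e.message}"
  []
rescue SocketError, SystemCallError => e
  warn "could not reach the server: #{e.message}"
  []
end
```

Whether returning an empty list or re-raising is the right policy depends on whether you’d rather have privoxy run with no block list or have the script fail loudly.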



Flickr pictures, web beacons, and a modest proposal

As I noted in the comments of the previous post, I don’t have ads on the site, but I do have flickr pictures directly linked from my flickr account.

It is conceivable to me that flickr pictures could qualify as “web beacons” under the Yahoo privacy policy, and thus be used for tracking purposes. Presumably, this was not the original intention of the flickr developers, but it’s certainly a possibility now that they’re owned by Yahoo. Are the access logs for the static flickr pictures available to Yahoo? Probably. Are they correlated with other sorts of usage information? It’s not clear. Presumably, flickr pictures are linked in places where standard Yahoo web beacons can’t go, because they’re not invited (like on this site, for example).

I think my conclusion is that this is probably not a problem, but maybe it is. It and other sorts of distributed 3rd party tracking all have one thing in common:

It’s called HTTP_REFERER.

Here’s how it works. When you make a request for any old random web page that contains a 3rd party ad or an image or a javascript library or whatever, your browser fetches the embedded piece of content from the 3rd party. When it does that, it sends the URL of the page you visited as part of the request, in a field called the referer header (yes, it’s misspelled).

So, every time you visit a web page:

  • You send the URL to the owner of the page. So far so good.
  • You send your IP address to the owner of the page. Not terrible in itself.
  • You send the URL of the page you visited to the owner of the 3rd party content. And this is where it starts to degrade a little.
  • You send your IP address to the owner of the 3rd party content. The owner of the 3rd party content may be able to set a cookie identifying you. Modern browsers are set by default to refuse 3rd party cookies. However, if that 3rd party has ever set a cookie on your browser before (say, if you hit their site directly), they can still read it. In any case, you can be identified in some incremental way.
  • The next time you visit another site with content from the same 3rd party, they can probably identify you again.

That referer URL is a significant key that ties a lot of browsing habits together.
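To make that concrete, here’s a sketch (with invented log entries) of how a 3rd party could assemble browsing histories from nothing but the cookie and referer values its server logs already contain:

```ruby
# Requests as a 3rd party ad server might log them: each carries the
# visitor's cookie and the Referer header their browser sent.
ad_requests = [
  { cookie: 'u=42', referer: 'http://news.example.com/story1' },
  { cookie: 'u=42', referer: 'http://shop.example.com/cart' },
  { cookie: 'u=99', referer: 'http://news.example.com/story2' },
]

# Group by cookie: each visitor's cross-site browsing history falls out.
history = ad_requests.group_by { |r| r[:cookie] }
                     .transform_values { |rs| rs.map { |r| r[:referer] } }

puts history['u=42'].inspect
# => ["http://news.example.com/story1", "http://shop.example.com/cart"]
```

Three log lines, one group-by, and the ad server knows that visitor u=42 reads that news site and shops at that store.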

There’s an important distinction to be made here. The referer header makes it possible for 3rd party sites to track your browsing, and it’s only one of many ways. Doing away with the referer header won’t stop sites that deliberately embed 3rd party tracking content, because the owner of the site can always send the URL you’re looking at to the 3rd party as part of the request, even if your browser doesn’t. However, what this does prevent is tracking without the consent of the owner of the site you’re looking at. Of all of the sites you’re looking at, actually. Judging from my admittedly limited conversations with site owners, there are a LOT of people out there who have no idea that their users can be tracked if they include 3rd party ads on their site, or flickr images, or whatever. (Again, not to say that their users are being tracked, but the possibility is there.)

Again, the site that includes the ad or image or whatever isn’t sending that information – your browser is, and this is a legacy of the early days of the web. Some browsers allow you to turn it off and not send any referer information. I’d argue that this should be off by default, because the disadvantages outweigh the benefits. I’m told that legitimate advertisers don’t rely on the referer header anyway, because it can be unreliable. If that’s true, that’s even less reason to keep it around.

Suggestion number one was “Tracking information that’s linked to personally identifiable information should also be considered personally identifiable”.

Perhaps suggestion two is “Let’s do away with the Referer header”. (Of course, this comes on the heels of a Google-employed Firefox developer adding more tracking features instead of taking them away.)

Arguments for or against? Are there any good uses for this that are worth the potential for abuse?



What’s the big fuss about IP addresses?

Given the recent fuss about the government asking for search terms and what qualifies as personally identifiable information, I want to explain why IP address logging is a big deal. This explanation is somewhat simplified to make the cases easier to understand without going into complete detail of all of the possible configurations, of which there are many. I think I’ve kept the important stuff without dwelling on the boundary cases, and be aware that your setup may differ somewhat. If you feel I’ve glossed over something important, please leave a comment.

First, a brief discussion of what IP addresses are and how they work. Slightly simplified, every device that is connected to the Internet has a unique number that identifies it, and this number is called an IP address. Whenever you send any normal network traffic to any other computer on the network (request a web page, send an email, etc…), it is marked with your IP address.

There are three standard cases to worry about:

  1. If you use dialup, your analog modem has an IP address. Remote computers see this IP address. (This case also applies if you’re using a data aircard, or using your cell phone as a modem.)
  2. If you have a DSL or cable connection, your DSL/cable modem has an IP address when it’s connected, and your computer has a separate internal IP address that it uses to only communicate with the DSL or cable modem, typically mediated by a home router. Remote computers see the IP address of the DSL/cable modem. (This case also applies if you’re using a mobile wifi hotspot.)
  3. If you’re directly connected to the internet via a network adapter, your network adapter has an IP address. Remote computers see this IP address.

Sometimes, IP addresses are static, meaning they’re manually assigned and don’t change automatically unless someone changes them (typically, only for case #3). Often, they’re dynamic, which means they’re assigned automatically with a protocol called DHCP, which allows a new network connection to automatically pick up an IP address from an available pool. But just because they can change doesn’t mean they will change. Even dynamic IP addresses can remain the same for months or years at a time. (The servers you’re communicating with also have IP addresses, and they are typically static.)

In order to see how an IP address may be personally identifiable information, there’s a critical question to ask – “where do IP addresses come from, and what information can they be correlated with?”.

Depending on how you connect to the internet, your IP address may come from different places:

  • If you use dialup, your modem will get its IP address from the dialup ISP, with which you have an account. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and billing details are part of your account information. By recording the phone number you call from, they may be able to identify your physical location.
  • If you have a DSL or cable connection, your DSL/cable modem will get its IP address from the DSL/cable provider. The ISP knows who you are and can correlate the IP address they give you with your account. Your name and physical location, and probably other information about you, are part of your account information.
  • If you’re using a public wifi access point, you’re probably using the IP address of the access point itself. If you had to log in to an account, your name and physical location, and probably other information about you, are part of your account information. If you’re using someone else’s open wifi point, you look like them to the rest of the internet. This case is an exception to the rest of the points outlined in this article.
  • If you’re directly connected to the internet via a network adapter, your network adapter will get its IP address from the network provider. In an office, this is typically the network administrator of the company. Your network administrator knows which computer has which IP address.

None of this information is secret in the traditional sense. It is probably confidential business information, but in all cases, someone knows it, and the only thing keeping it from being further revealed is the willingness or lack thereof of the company or person who knows it.

While an IP address may not be enough to identify you personally, there are strong correlations of various degrees, and in most cases, those correlations are only one step away. By itself, an IP address is just a number. But it’s trivial to find out who is responsible for that address, and thus who to ask if you want to know who it’s been given out to. The logs may be kept indefinitely or destroyed on a regular basis – it’s entirely up to each individual organization.
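As a small illustration of how trivial that first step is: reverse DNS alone often names the organization behind an address, and a whois query on the same address does the rest. A sketch, using a well-known public address as the example:

```ruby
require 'resolv'

ip = '8.8.8.8' # a well-known public DNS server, used only as an example

# Ask the DNS for the PTR record; on any lookup failure, note it and move on.
name = begin
  Resolv.getname(ip)
rescue Resolv::ResolvError, SocketError, SystemCallError
  'no PTR record found'
end

puts "#{ip} reverses to #{name}"
```

That one lookup doesn’t identify a person, but it tells you exactly which organization to subpoena.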

Up until now, I’ve only discussed the implications of having an IP address. The situation gets much much worse when you start using it. Because every bit of network traffic you use is marked with your IP address, it can be used to link all of those disparate transactions together.

Despite these possible correlations, not one of the major search engines considers your IP address to be personally identifiable information. [Update: someone asked where I got this conclusion. It's from my reading of the Google, Yahoo, and MSN Search privacy policies. In all cases, they discuss server logs separately from the collection of personal information (although MSN Search does have it under the heading of "Collection of Your Personal Information", it's clearly a separate topic). If you have some reason to believe I've made a mistake, I'm all ears.] While this may technically be true if you take an IP address by itself, it is a highly disingenuous position to take when logs exist that link IP addresses with computers, physical locations, and account information… and from there with people. Not always, but often. The inability to link your IP address with you depends always on the relative secrecy of these logs, what information is gathered before you get access to your IP address, and what other information you give out while using it.

Let’s bring one more piece into the puzzle. It’s the idea of a key. A key is a piece of data in common between two disparate data sources. Let’s say there’s one log that records which websites you visit, containing only the URL of the website and your IP address. No personal information, right? But there’s another log somewhere that records your account information and the IP address that you happened to be using. Now, the IP address is a key into your account information, and bringing the two logs together allows the website list to be associated with your account information.

  • Have you ever searched for your name? Your IP address is now a key to your name in a log somewhere.
  • Have you ever ordered a product on the internet and had it shipped to you? Your IP address is now a key to your home address in a log somewhere.
  • Have you ever viewed a web page with an ad in it served from an ad network? Both the operator of the web site and the operator of the ad network have your IP address in a log somewhere, as a key to the sites you visited.

The list goes on, and it’s not limited to IP addresses. Any piece of unique data – IP addresses, cookie values, email addresses – can be used as a key.
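Here’s a sketch of the join described above, with invented log entries – neither log contains a name on its own, but the IP address keys them together:

```ruby
# A hypothetical web log: no personal information, just URLs and IPs.
web_log = [
  { ip: '203.0.113.7',  url: 'http://example.com/search?q=my+own+name' },
  { ip: '198.51.100.2', url: 'http://example.com/' },
]

# A hypothetical account log kept elsewhere: IPs tied to account holders.
account_log = [
  { ip: '203.0.113.7', account: 'jane.doe@example.com' },
]

# The IP address is the key: joining the logs attaches accounts to visits.
accounts_by_ip = account_log.group_by { |e| e[:ip] }
profile = web_log.map do |visit|
  owners = (accounts_by_ip[visit[:ip]] || []).map { |e| e[:account] }
  visit.merge(accounts: owners)
end

puts profile.first[:accounts].inspect
# => ["jane.doe@example.com"]
```

Each log is innocuous in isolation; the profile only exists once someone holds both.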

Data mining is the act of taking a whole bunch of separate logs, or databases, and looking for the keys to tie information together into a comprehensive profile representing the correlations. To say that this information is definitely being mined, used for anything, stored, or even ever viewed would be alarmist, and I don’t want to imply that it is. But the possibility is there, and in many cases these logs are being kept. If they’re not being used that way now, the only thing really standing in the way is the inaction of those who have access to the pieces, or can get them.

If the information is recorded somewhere, it can be used. This is a big problem.

There are various ways to mask your IP address, but that’s not the whole scope of the problem, and it’s still very easy to leak personally identifiable information.

I’ll start with one suggestion for how to begin to address this problem:

Any key information associated with personally identifiable information must also be considered personally identifiable.

[Update: I've put up a followup post to this one with an additional suggestion.]



Does Google keep logs of personal data?

The question is this – is there any evidence that Google is keeping logs of personally identifiable search history for users who have not logged in and for logged-in users who have not signed up for search history? What about personal data collected from Gmail, and Google Groups, and Google Desktop? Aggregated with search? Kept personally identifiably? (Note: For the purposes of this conversation, even though Google does not consider your IP address to be personally identifiable, at least according to their privacy policy, I do.)

There’s no question that they could keep those logs, but I think every analysis I’ve seen simply repeats the assumption that they do, based on the fact that they could.

Has there ever been a hard assertion, by someone who’s in a position to know, that these logs do in fact exist?

I have a suspicion about one possible source of all this. Google’s privacy policy used to say (amended 7/2004):

“Google notes and saves [emphasis mine] information such as time of day, browser type, browser language, and IP address with each query.”

But the policy no longer says that. The current version reads: “When you use Google services, our servers automatically record information that your browser sends whenever you visit a website. These server logs may include information such as your web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.” Again, no information about what’s being done with that data or how long it’s kept.

Given the possibility that they don’t, I think it drastically changes the value proposition of those free subsidiary tools. Obviously, if you ask for your search history to be saved, they’re going to keep it. But maybe that decision is predicated on the assumption that they’re going to keep it anyway, and you might as well have access to it. If the answer is that they’re not keeping it, that’s a different question.

It’s critical to point out that these issues are not even close to limited to Google. Every search engine, every “free” service you give your data to, every hub of aggregated data on the web has the same problems.

Currently, there’s no way to make an informed decision, because privacy policies don’t include specific information about what data is kept, in what form, and for how long. With all of the disclosures in the past year of personal data lost, compromised, and requested, isn’t it time for us to know? In the beginning of the web, having a privacy policy at all was unheard of, but now everybody has one. I don’t think it’s too much to ask of the companies we do business with that the same be done with log retention policies.

I agree with the request to ask Google to delete those logs if they’re keeping them, but I haven’t seen any evidence that they are. Personally, I’d like to know.



More thoughts on Google

Having examined the motion and letters, I see a different picture emerging.

I am not a lawyer, but from my reading of the motion, it appears that Google’s objections are thin. Really thin.
Also, they seem to have been completely addressed by the scaling back of the DOJ requests. Of course, that’s not the complete story, but if the arguments in the motion are correct, it seems to me that Google will lose and be compelled to comply.

Based on the letters and other analysis, they’re also pulling the slippery slope defense – “we’re not going to comply with this because it will give you the expectation that we’re open for business and next time you can ask for personal information”. If that’s true, I think that’s the first good news I’ve heard out of them in years. Good luck with that.

Google’s own behavior is inconsistent with their privacy FAQ, which states: “Google does comply with valid legal process, such as search warrants, court orders, or subpoenas seeking personal information. These same processes apply to all law-abiding companies. As has always been the case, the primary protections you have against intrusions by the government are the laws that apply to where you live.” (Interestingly, this language is inconsistent with their full privacy policy, which states that Google only shares personal information “… [when] We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request.”)

I wonder if they intend to challenge the validity of the fishing expedition itself, which would be the real kicker (and probably invalidate the above paragraph). I also idly wonder if they expect to lose anyway and have simply refused to comply with bogus arguments in order to get the request entered into the public record.

Interesting stuff. A lot of my criticisms of Google are about their unwillingness to publicly state their intentions with respect to the data they get (and the extent to which they may or may not be retaining, aggregating, and correlating that data), and I don’t think this case is any different. I think Google’s interest here in not releasing records is aligned with the public good, and as such, I wish them well. It’s been asserted that Google has taken extraordinary steps to preserve the anonymity of its records, and that may well be true. It’s also kind of irrelevant. Beyond this specific case, of whether the government can request information about Google searches (let alone any of their more invasive services, or anyone’s more invasive services), is the issue of the ramifications of collecting, aggregating, and correlating this data in the first place.

There is no question that Google has access to a tremendous amount of data on everyone who interacts with its service. It is still troubling that its privacy policy is inadequate. It’s still troubling that Google (and Yahoo, and how many others) considers your IP address to be not personally identifiable information. It’s still troubling that Google (and Yahoo and how many others) do all of their transactions unencrypted and that search terms are included in the URL of the request. As this case has shown, Google’s actual behavior may not correlate to their stated intentions, of which there are few in the first place. By Google’s own slippery slope logic, this time it works for you – will it next time?

Perhaps it’s time to hold companies accountable for the records they keep.


Rumours of Google acquisition of Opera

Filed under: — adam @ 12:19 pm


Dear Google: Please stop buying good companies/developers and ruining them with your consumer unfriendly terms of service and loose privacy policies. Thanks a bunch. – Earth.

And I quote from Opera’s privacy policy:

No personal information is collected or shared, and providing ad profile information in the browser is strictly optional. The Opera user’s Web usage is not tracked.

There’s nothing like this in any Google policy, because the very idea is antithetical to Google’s philosophy, which is to collect and know everything about you and use it to “improve the Google user experience”/stock price. This phrase in the Opera privacy policy is critical to what makes Opera any good at all. Let’s all gather round and keep an eye on that if this rumor turns out to be true.


Google really wants your logs

I wrote here about some of the privacy implications of Google’s data retention policies:

With the launch of Google Analytics, Google is now poised to collect that data not only from every Google visit and every site that has Google ads on it, but also from every site processed by Google for “analytical” purposes (although there’s probably a fair amount of overlap between the latter two).

Remember – Google does not consider your IP address to be personal information, and so it’s exempt from most of the normal restrictions on how they use the data they collect. The terms of service for Google Analytics suspiciously do not mention whether Google is allowed to utilize any of the data they collect on your behalf. One must conclude that they therefore assume that they are, and consequently that they do. It’s unclear, but it’s probably the case that Google could, according to the terms of these agreements, correlate search terms from your IP address with hits on other websites. I don’t see anything in there preventing them from doing so, because the two pieces of correlated data are obtained by different means.


What’s wrong with the Google Print argument

Does this phrase sound familiar? “You may not send automated queries of any sort to Google’s system without express permission in advance from Google.” It’s from Google’s terms of service, and it’s just one of several aspects of that document that leave a bad taste in my mouth.

Larry Lessig makes the point that “Google wants to index content. Never in the history of copyright law would anyone have thought that you needed permission from a publisher to index a book’s content.” But that’s not what Google wants to do. Google wants to index content and put their own for-pay ads next to it. Larry says “It is the greatest gift to knowledge since, well, Google.”

Don’t forget this for a second. Google is not a public service, Google is a business. Google isn’t doing this because it’s good for the world, Google is doing this because it represents a massive expansion in the number of pages they can serve ads next to. In order to do that, the index remains the property of Google, and no one else will be able to touch it except in ways that are sanctioned by Google. It’s not really about money, it’s about control. It’s against the terms of service to make copies of Google pages in order to build an index. Why should it be okay for them to make copies of other people’s pages in order to build their own? It’s not that they’re making money that bothers us, it’s the double standard. The same double standard that says that Disney can take characters and stories from the public domain, copyright them, and then lock them up and prevent other people from using them.

Oh, but you hate that, don’t you, Larry? (And I think a lot of us do.) How is what Google is doing any different? Google is just extending the lockdown one step further, into their own pockets. There’s no share alike clause in the Google terms of service, and that is what’s wrong with it. They want privileges under the law that they’re not willing to grant to others with respect to their own content.

The day Google steps forward and says “we’re building an index, and anyone can access it anonymously in any way they please”, then sure – I’m all with you.



On sharing

Filed under: — adam @ 12:29 pm

There are two competing monetary questions in content ownership: “How can I get the maximum amount for what I’ve already done?” and “How can I get the maximum amount for what I’m going to do next?”.

The former is seemingly answered by maximum control. Tight focused marketing, sell as many copies, wring every last dollar out of existing properties by making sure that people need to buy them more than once and can’t do anything interesting with them. In my opinion, this is a strategy for shooting the latter. It makes enemies, it makes people not care what else you have, and it makes people upset.

Feeding the commons is about ongoing effort. Releasing your work to as many people as possible gets you attention for the next thing you do. It’s so simple. It’s not about selling any one thing anymore, it’s about selling your stream. My previous post, Preaching to the Esquire, is a link that contains the entire text of an article from Esquire. It’s blatantly copied. But if it hadn’t been, only existing subscribers would have read it. As it is, that article is getting forwarded around to lots of people, and it has at the bottom of it this:

Wow. Not something I expected from “Esquire.”

followed by this ringing endorsement:

Esquire is a great magazine. Read it more often: there’s tons of articles on politics, science, current events…it’s, like, Maxim for intelligent people.

Esquire probably had nothing to do with this, but in one stroke, Esquire has certainly grabbed more people for their stream. Many of them will buy an issue. Some of them will subscribe. It’s not about monetizing this article, it’s about getting people to pay attention to what you’re going to do next – the recurring and predictable revenue streams that keep ongoing operations… ongoing.

Put your best work out there, let it speak for itself, and maybe someone will already be paying attention next time you have something interesting to say. Maybe they’ll even pay for the privilege. Locking it up where only people who are already interested can find it is a recipe for obscurity and irrelevance. Yes, TimesSelect, I’m looking at you.


On World of Warcraft’s spyware

World of Warcraft was recently revealed to have a piece of spyware hidden in it called Warden, that tracks a large amount of information about other things running simultaneously on the machine, in order to prevent cheating.

There’s been some commentary on Dave Farber’s IP list that Warden was found by someone trying to hack the game, implying that that somehow justifies its existence.

I wrote the following in response to that:


The fact that this piece of spyware was found by someone trying to hack the game is totally irrelevant to what it is, and the fact that there are people in an arms race over hacking the game doesn’t justify Blizzard’s raising the bar on that race to trample the privacy of legitimate users who are probably unaware that this is even going on.

As has been previously stated, Blizzard’s assertion that it’s not doing anything with the information is little comfort. What if the next round of arms race escalation is to hack Warden and release all of that information? How long will it be before Blizzard can properly respond? How much data will get out, because of the infrastructure that Blizzard has constructed?

The fact that this is justified by text buried in a long EULA is deplorable. The fact is, few people read EULAs at all, and even fewer read them for games. There ought to be full disclosure right up front in large capital letters – “If you want to play this game, you have to agree to let us spy on you, because we assume everyone’s a cheater. YOU’VE BEEN ADEQUATELY WARNED. To agree, and be allowed to play the game, type: ‘I UNDERSTAND THAT BLIZZARD IS SPYING ON ME TO CATCH CHEATERS’.” Let’s have no more of this “Press OK to continue” crap.


Shared lessons between programming and cooking

Filed under: — adam @ 7:22 pm

I originally wrote this a few years ago, but I thought it was worth restating. Here it is lightly edited:

Fine programming and fine cooking are similar disciplines, each a mixture of a lot of craft with a good deal of art. In each, you can have just the craft without the art, or just the art without the craft, but the results are extremely likely to be disappointing without both. The balance between the two is a reflection on the practitioner’s technique, the personality of which is always highly evident in the end product. I have found that my development discipline has been adaptable to cooking, and that many of the things I’m learning about cooking have analogues in programming.

For example, in cooking, good stock is critical. It adds flavors to other dishes, and can be layered to build complexity and texture. The more attention you pay to getting your stock right and correctly flavored, the better your end product will be. Stock requires upfront planning, dedication of resources, patience, and unit testing. Stock is a module. Like any module, you can make your own and it will be exactly what you need (or terrible, depending on your own skills), or you can buy someone else’s and it will either be good enough or terrible (depending on the skills of the stockmaker), and the quality of your final product will hinge heavily on which one it actually is.
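For the programmers in the audience, the analogy compresses into a few lines of Python. The recipes and the interface here are invented purely for illustration:

```python
# A loose sketch of "stock is a module". Everything here is made up to
# illustrate the analogy, not to model any real recipe.

def make_stock(bones, aromatics, hours=4):
    """The 'module': built up front, tested on its own, reused everywhere."""
    return f"stock({bones} + {', '.join(aromatics)}, simmered {hours}h)"

def risotto(stock):
    # Every dish built on the stock inherits its quality -- or its flaws.
    return f"risotto built on {stock}"

def pan_sauce(stock):
    return f"pan sauce reduced from {stock}"

# Make it once, then layer it into many end products.
base = make_stock("chicken", ["onion", "carrot", "celery"])
dishes = [risotto(base), pan_sauce(base)]
```

Swap in a store-bought `base` and everything downstream still works; whether it’s good enough is exactly the buy-vs-build question.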

Some shared lessons:

  1. Perfection is the goal, but the product had better damn well go out when it needs to and be right when it does. Perfection is the standard by which you measure what you did wrong last time so you can try not to do it again.
  2. Taste, test, measure, know. If you don’t know what’s supposed to be happening, or you don’t know what is actually happening, you have no way to compare the two, and you certainly have no way to bring them together.
  3. Building and maintaining your toolkit, which includes both tools and ingredients, is of utmost importance. For development, this is your development environment and your past history of specs, diagrams, and old code to repurpose. For cooking, this is your knives and other tools, as well as your collection of stocks, scraps, and spices.
  4. Knowing what’s in your toolkit is important, but knowing where to find something you need if it’s not is even more so.
  5. It’s sometimes easier to buy components, but it can be less effort in the long run to start from scratch. It’s entirely likely that a component you build yourself will be better for you, but the trick lies in knowing the difference before you start. Sometimes you have no choice.
  6. Waste is the enemy. Time, materials, and resources all have costs. Usage is not necessarily waste. Not taking care to avoid waste is itself waste. Failing to properly maintain your tools is waste. Not using everything that can be used is waste. Doing unnecessary tasks is waste. Documenting what you did is not waste.


Unthrilled with the Office 12 UI

Screenshots for Office 12:

Okay, they’ve cleaned up the interface a bit by grouping related things into similar boxes that are actually labeled, and I’m told that the interface elements are all vector-based so you can resize them arbitrarily. That’s nice.


Over many years of designing custom content management interfaces for lots of people to use, it became crystal clear that there’s a huge difference between a “tool” and a “task”. A tool is a function that lets the user do something, but a task is a function that lets the user accomplish something.

In my experience, most successful content management interfaces are primarily task-based. When the user sits down in front of the computer, the goal is to get something done, not just use some tools. Tasks are for most people (beginners and power users alike), but tools are for power users. If you know what you want to do, but it doesn’t fit nicely into the framework of getting something done, you need a tool. Tasks should be the default.

This is why the new Office UI is still confusing – it’s full of tools.

Let’s take Word as an example. The forefront example of tools vs. tasks is the question “why is there still a font box?”, and the corollary question “why do the font options still occupy a huge chunk of prime screen real estate?”. Changing the font is a “tool function”. When you change the font in a document, you haven’t really accomplished anything. Sure, you’ve made it look different, but “making it look different” probably wasn’t the goal. What you were really doing is the unspoken “drawing attention to this text” or “making it match the company colors” or any number of other things that aren’t just “making it look different”. With a tool, you can “make it look different”, but it requires a lot of input from the user in order to get the rationale right, and this is why expert users get frustrated when beginners change the fonts and the results don’t match their intent. The software shouldn’t make it easy to change the font without understanding why. There should be tasks centered around things you might want to do, and the software should guide you. Importantly, if you do understand why, and you have different intentions than the software does, it should get out of your way – but that comes around to letting you use tools to get around the limitations of pre-defined tasks.

(An important note: a “wizard” is not a task-based interface. It’s a poor substitute that attempts to graft tasks onto what is primarily a tool-based interface.)

This goes right to the heart of the debate of semantic content vs. formatting. A huge portion of the tech community has been trying very hard to get people to think in ways that are structured, for various reasons. It’s not always the best approach, but it’s by far the best default if you don’t know what you’re doing. If you go through your document and decide “this needs to be 14 point Helvetica and this needs to be italic and this needs to be 24 point Times”, the onus is on you to understand why you’ve chosen those particular settings. “It looks nice” isn’t good enough, if it doesn’t match your intent. You’ve lowered the chances of getting the right result, and you’ve made things more difficult for the next person to go through and standardize your settings when your one-page memo gets reformatted to be used in the company brochure. You’ve probably also made things more difficult for yourself. Instead of trying to decide what it should look like, you could have just told the machine “this is a heading, that’s a title, and this paragraph is a summary of findings”, and made your life easier.
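Here’s a toy sketch of that last point, in Python rather than Word: tag content with what it is, and derive the formatting from a single stylesheet. The role names and style values are invented for illustration:

```python
# Semantic markup vs. direct formatting, in miniature. Roles and styles
# are hypothetical; the point is that look is derived from meaning.

STYLESHEET = {
    "heading": {"font": "Helvetica", "size": 14, "bold": True},
    "title":   {"font": "Times",     "size": 24, "bold": True},
    "summary": {"font": "Helvetica", "size": 10, "italic": True},
}

# The document records what each piece of text *is*, not what it looks like.
document = [
    ("title",   "Q3 Findings"),
    ("heading", "Methodology"),
    ("summary", "Revenue grew; costs did not."),
]

def render(doc, stylesheet):
    # Restyling the whole document (say, for the company brochure) means
    # editing one dict -- not hunting down every run of hand-picked fonts.
    return [(text, stylesheet[role]) for role, text in doc]

rendered = render(document, STYLESHEET)
```

The person who hand-picks “24 point Times” has baked the brochure problem into every paragraph; the person who wrote “title” hasn’t.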

The UI appears to have some of this by grouping tools by tasks, but it doesn’t follow through — “Write”, “Insert”, “Page Layout”… but then, “References”? Nope. “Mailings” – maybe, but probably not. “Review” – we’re back. “Developer”? That’s a noun. Obviously there isn’t a consistent organizational structure here. Task-based interfaces are a radical shift from tool-based ones, and they require the UI designer to ask of every function put in front of the user: “Do I really want to give them this power? Am I making their life easier by doing so, or just giving them a shotgun to aim at their feet?”. It’s Microsoft Office, not Microsoft Fun with Fonts, Colors, and Margins. There’s a strong argument to be made that it shouldn’t be easier to use all of the features, because they’re a waste of time for most users.

Microsoft should have taken this opportunity to put together a new interface that’s not only prettier, but also radically easier to use, more intuitive, and above all, more productive. Instead, they’ve produced what appears to be more of the same.


Treo 700w

Filed under: — adam @ 4:33 pm

Can someone tell me what’s compelling in any way about a Treo running Windows CE (or whatever they’re calling it these days), especially one with only a 240×240 screen? Sure, it’s got EVDO and 64MB of RAM, but you probably need that extra memory just to run Windows, and besides, those are hardware enhancements.

Apparently, the Office integration was rewritten by Palm and it isn’t using the built-in windows mobile implementation (and one would assume they could just as easily have done so for PalmOS or Cobalt or whatever), so… what’s the upside here?


Please stop telling people to “google it” in public forum posts

Filed under: — adam @ 12:09 pm

I’ve noticed that with increasing frequency, I’ll search for something on one of the search engines and be directed to some forum post where the answer is “google for the answer”.

How do you think I found you in the first place?!?!

If it’s possible that your page will turn up in the search results for the thing you’re discussing, please include the answer (or at least, a specific URL where the answer can be found).


SMS Spam

Filed under: — adam @ 7:10 pm

I just got my first piece of SMS spam (verizon wireless). Anyone know who I should be reporting this to? The VZW customer support people don’t seem to know.


THANK YOU ADOBE – favorites in the open document window

Filed under: — adam @ 10:05 am

My most frequent complaint about the Windows interface, all flavors of it, is that the open file dialog has this little row of quick access icons, and they’re not programmable. My folder arrangement doesn’t match the default, so this makes me work just a little bit more than I should have to nearly every time I open a new file. I recently switched to using the Adobe open file dialog for CS2, and was stunned to find that it shares the defined favorite folder settings with Bridge.


Thank you!

(Activate this by pressing the “Use Adobe Dialog” button in the file open dialog – you can switch back anytime you like.)


Sploid on the Katrina response

Filed under: — adam @ 6:10 pm

Sploid has written a blistering critique of the federal government’s response to Katrina.

I still haven’t seen anyone come out and say “If you voted for Bush or you didn’t vote in the last presidential election, you virtually begged for this response, and we told you so.”. This, the complete and total failure to respond to an expected national disaster in a way that even approaches sanity, isn’t the fault of the administration – we knew they sucked. This is the fault of every single red dot on that election map, for letting them still be in charge (for whatever that’s worth) when another problem finally rolled around.

A rising tide lifts all boats, my ass. A rising tide strands and drowns those who can’t afford boats in the first place.


Why I shoot photography.

Filed under: — adam @ 12:13 am

I shoot photos for the same reason I cook and program computers.

I believe that humanity’s high calling and deep purpose is the neverending struggle against the varied forces of entropy. Tempered by the wisdom of allowing natural forms of order to co-exist and simultaneously be captured in time, we live to create in our environment a reflection of our own inner sense of order. Every meal prepared, every elegant algorithm, and every imperfect echo frozen by sheer force of will is one more piece of the pattern coalesced from the ethereal storm and notched on the spear of humanity’s collective soul.

Take a handful, grab hold of the writhing chaos, keep your grip in the face of adversity, and shape it into something that can’t help but be beautiful until it hurts.

We will eventually be forgotten, and remembered only for what we added or took away.

I prefer to add.


Adobe Camera Raw possibly doing something wrong with noise?

Filed under: — adam @ 12:15 pm

I’ve noticed that a lot of my photos have been more noticeably grainy recently. Like this onion stacking shot. (Whether you like this effect or not is not the point.)

At first, I just chalked it up to high ISO and/or exposure compensation. But then I did some informal tests and found that Breezebrowser Pro (which I switched away from in favor of Adobe Camera Raw 3, because ACR is easier to use with my Photoshop workflow) gave much cleaner and less noisy results on a few of the RAW files, even with noise reduction turned off.

In my limited testing, with the settings I used, images produced from ACR are definitely noisier and more posterized in the noisy areas than the same raw files processed with BB, with the same exposure comp, no sharpening, and noise reduction completely off. Indeed, even the +1.6 images from BB are less noisy than the +1 images from ACR.

I’m not certain that one of the other settings isn’t causing the problem, and I’m somewhat at a loss about how to go about doing an unbiased test. Suggestions are welcomed.

On a related note, turning noise reduction on in BB gives even better results than the BB baseline, whereas even cranking luminance smoothing and color noise reduction all the way up in ACR seems to have very little effect. Moreover, Photoshop’s reduce noise filter works noticeably better on BB-converted raw images, but seems to do very little on ACR images.

Needless to say, this result is pretty disturbing, and I hope I’ve just done something wrong.


What does an ID textbook look like?

Here’s what I don’t get. What would it even mean to teach intelligent design in schools?

Chapter 1: Some things are too complicated to have arisen by evolution, specifically people.
Chapter 2: …..?
(Chapter 3: Profit?)

As far as I can tell, there’s nothing to it. It’s the opposite of science.

“I don’t understand this, so there must be no possible answer”.

It says not just that we don’t know, but that we can’t know, so there’s really no point in trying to figure it out.


Why I oppose DRM

As some of you know, on September 11, 2001, I lived one block north of Battery Park, at 21 West Street. (Ironic popup tag provided courtesy of Google Maps.) When I was forced to leave for thirteen days while the smoke cleared, I had little time to grab anything. I left without my computers, without my original installation discs, and without all of my Product ID stickers. I found myself suddenly without the means to reinstall a number of legally purchased programs that I needed for work, and spent hours calling around to various companies to get them to unlock things for me – time that could have been better spent wallowing in my own PTSD.

There were stories of rescue workers hampered by license management, and that’s when I knew.

The world is dangerous, and sometimes emergencies happen. While people can say “hey, maybe we should make an exception here, because there are extenuating circumstances”, computers just don’t care about that. We are backing ourselves into a restricted corner, and a dangerous one, where computers call the shots, even in the midst of crisis, even in the midst of rational exceptions. Granted, every case is not this extreme. Hopefully, the future will be without another like it in my immediate vicinity. But the trend to pre-emptively lock down everything by default scares me.

As we evolve towards tighter and tighter controls without any possibility for exception, what happens when those granting agencies stop granting? What happens when companies that issue DRM go bankrupt? What happens if they’re unreachable? What happens if they simply decide to stop supporting their framework?

As my high school calculus teacher used to say – “it’s always easier to ask forgiveness than to ask permission”. Security is a series of tradeoffs, and if you restrict legitimate uses in the name of preventing illegitimate ones, you’ve cut off part of the point of having security in the first place. If you restrict legitimate uses without even preventing the illegitimate ones, you’re wasting your customers’ time, and you’re part of the problem.

See more of my rants on DRM and security.



Things I hate about Bridge

Filed under: — adam @ 5:32 pm

Adobe Bridge (bundled with CS2) is much much better than the File Browser in previous versions. It has some great features. It’s very fast, and has good support for previewing a large number of different file types.

But there’s still a lot to hate, mostly about things they seem to have left out (of course, it’s entirely possible that I’ve just missed them). I’d love to see these things in an incremental update and not have to wait for CS3, if in fact they are missing.

  1. There’s no place to paste in a location from another window! If you’re looking at an open folder in another browser or OS window, to get to that path in Bridge, you have to navigate to it. This is really basic missing functionality!
  2. Yes, you can drag and drop files to your email program, but I think this belies the Adobe workflow way of doing things. I’d like to see the ability to “Send files to” a location, including another program (email, batch uploader, etc…), with the ability to run a script or some actions (think Image Processor) automatically. And while I’m talking about Image Processor, why can’t I run an arbitrary number of actions there?
  3. Why no fullscreen view or very large preview?
  4. This is more of a Camera Raw issue, but it’s central to the centralized workflow that Bridge encourages – why can’t I store multiple different raw settings for a single image? (I haven’t been using Version Cue – is this doable that way?)

There’s probably more, but the point here is that Bridge is great. It’s fantastic for many things, and it does a lot of good that none of the other file browsers I have do. But it falls down on some of the basics, which makes it unsuitable for use as my only file browser.


Wonder Woman casting

Filed under: — adam @ 11:36 am

I watched Blade:Trinity recently, and I got my answer for Wonder Woman casting.

Jessica Biel.

She’s got the right look, she can do kickass fight scenes that are reasonably believable, and she’s actually a pretty good actress.

Tell me that’s not Wonder Woman:

[ Update: it seems I'm not alone, and the rumor mill has it that she's one of the candidates: ]


Some things I really want my next console to do

Filed under: — adam @ 9:29 am

I’ve been reading over the specs and thoughts for the new consoles, and I haven’t seen any discussion of some things I REALLY want out of the next generation of console games:

  1. NO LOAD SCREENS ONCE THE GAME STARTS. Just stop that right now.
  2. Profiles! Multiple people in my house play games. There’s no reason for single-player games to make it difficult to figure out who saved which game. Also, don’t make me sit through a load screen before making my choice. Different cards for each person aren’t a good answer for this.
  3. Varied controllers. The gamepad is okay for many kinds of games, but hands down, one of the best gaming experiences I’ve ever had was playing Tie Fighter two handed with an F-16 Combatstick and a separate throttle. I see no reason why I shouldn’t be able to get this kind of experience in my living room. Similarly, you can’t play RTS games with a gamepad. The controls just don’t work. Give me some innovation here.


Grokster is not like gun companies being sued for crimes committed with guns

I’ve been hearing a lot that the Grokster decision is akin to the court saying that gun manufacturers should be liable for crimes committed with guns.

I think a better analogy is this: if a gun manufacturer sells guns under a sign that says “Bank Robbing Projectile Launchers Here”, and then in the middle of a bank robbery helps its customers unjam their guns so they can fire at the cops again, then it might have some liability as an accessory to the bank robbery.


Open letter to Adobe

Dear Adobe:

Your activation system is a failure.

I have been a loyal customer for more than ten years. I’ve dutifully paid pretty much whatever you’ve asked for upgrades over the years, and I’ve always been happy with your product.

I understand that you don’t want people to steal your software. Never mind that Photoshop is largely the industry leader in image management because it was mercilessly copied by everyone. Your product is good, and I like it.

Let’s be clear about this. I’m not stealing your software.

But you’re treating me like a criminal. Twice in the past few weeks, I’ve had to talk to one of your activation support reps because your online activation system is broken. It has several times just decided that I’d activated enough, and was suspicious. Never mind that I was reinstalling on a brand new replacement computer. Never mind that on the first occasion this happened, there was no grace period, and the software simply would not run until I talked to a representative on the phone – representatives who, by the way, are ONLY AVAILABLE DURING WEST COAST BUSINESS HOURS.

Thanks. You’ve given me reasons to think twice about giving you more money in the future and tarnished your spotless reputation.

Bravo. I hope it was worth it.

[Update: I've spoken to Adobe support after my fourth automated reactivation failure, and apparently, this is an issue with RAID devices, where the activation system sees it as a different computer configuration on subsequent checks. My previous comments stand. This is totally unacceptable. Worse than that. The system is not only broken, it's returning false positives for theft masquerading as valid and accepted disaster prevention techniques. So, my opinion is now this - Adobe has not only foisted misguided copy protection techniques on us, but, to add insult to injury, they're still beta. There is a patch available, so contact Adobe if you have this problem.]

[Update #2: I installed the patch, and activation failed yet again. Holding for support... ]

[Update #3: After activating again via phone, all seems to be working. For now. ]


Dual cores for emulation?

Filed under: — adam @ 8:36 am

From what very little I know about the architectures, it seems to me that dual-core CPUs could be VERY well suited to emulation – use one core to do the translation, then pass the result to the other across the shared cache for actual execution.

Thoughts on this?
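Here’s a rough Python sketch of the pipeline I have in mind: one worker “translates” guest code blocks while the other executes them, with a queue standing in for the shared cache. Real dynamic binary translators are far more involved, and Python’s GIL means these threads won’t actually run on separate cores – the producer/consumer structure is the point:

```python
# Hypothetical producer/consumer split for a dynamic translator:
# core 1 translates guest blocks, core 2 executes the translated output.
# All names and "code" are stand-ins for illustration.

import threading
import queue

guest_blocks = ["blk0", "blk1", "blk2"]   # stand-ins for guest code blocks
translated = queue.Queue(maxsize=4)       # the shared cache between cores
results = []

def translator():
    for blk in guest_blocks:
        translated.put(f"host_code({blk})")   # "translate" each block
    translated.put(None)                      # sentinel: no more blocks

def executor():
    while True:
        code = translated.get()               # block until work arrives
        if code is None:
            break
        results.append(f"executed {code}")    # "run" the translated block

t1 = threading.Thread(target=translator)
t2 = threading.Thread(target=executor)
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue is what makes the two stages overlap: translation runs ahead of execution by at most a few blocks, which is roughly the win you’d hope for from two cores sharing a cache.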


More thoughts on Mac and Intel

Filed under: — adam @ 11:12 am

Having thought about this for a while, and heard all of the conspiracy theories about how this is Apple’s latest move to dominate the world, here’s what I think.

There is no master plan.

Steve Jobs got backed into a corner by IBM, because IBM wanted Apple to put more money into the PowerPC chip line, which IBM is not terribly interested in keeping alive because, frankly, Apple doesn’t sell that many computers and the technology’s a dead end. So Apple didn’t pay up, and IBM didn’t produce faster, cooler chips for the laptops that Apple wanted, and Apple was left with two choices: produce no more faster laptops, or move to another chip vendor. Intel is the obvious choice, as the largest-and-not-cheapest vendor. Intel’s a little pissed off at Microsoft, and they’ve been getting some heat from AMD (despite continually producing better but not cheaper technology, IMHO). Porting OSX to Intel or anything else is actually laughably easy, because all along, it’s been built on top of a Mach microkernel architecture which is designed specifically for this.

So Apple dumps IBM, who doesn’t really care because they’ve got Sony’s business for the new Cell processors and the PS3 is going to way outsell the Mac any day of the week. Apple will sell, what, 5 million Macs this year? Sony sells that many PS2s in two months. Intel gets a little coolness factor for selling even more of their incredibly popular chip line, and Apple gets their dual-core Pentium M laptops.

Microsoft continues to dominate whole industries through sheer obstinacy.


Thoughts on Mac on Intel

Filed under: — adam @ 9:48 am

So, you’ve probably heard by now that Apple is switching to Intel x86 chips for the Mac, from the PowerPC chips made by IBM.

The implications are pretty clear – not much changes. IBM couldn’t deliver fast enough chips on Apple’s schedule, and they got the boot. Apple can’t be happy about this, but they’re still going to control the rest of the hardware – OSX won’t run out of the box on your random home-built PC.

So, I think there’s very little interesting in this announcement, except for two things:

1) This is a major score for Intel.
2) Older software is going to be supported via emulation. We have FINALLY reached the point where hardware is fast enough, and we have something to burn all those extra cycles on – emulating other chipsets in a way that’s usable.

We’re seeing this in more places – Xbox 360 is rumored to support Xbox games via emulation. I think it’s an interesting trend to have chip design progress to the point where processors are optimized for specific purposes, but they’re fast enough to run everything else via emulation anyway. It really blurs the line at the application level about “what is a platform anyway?”.

For example, if you buy Photoshop for Windows, and you switch to Mac, you need to buy Photoshop for Mac. What if you buy Photoshop for Mac, and switch to Mac for Intel? (Same platform, different chip.) What if you then switch to Windows for Intel? (Same chip, different platform.)

I think there are going to be some confused consumers.


Three simple rules for good customer service

Filed under: — adam @ 11:42 am

There are three simple rules for customer service:

1) Don’t ask me for the same information more than once.

2) When I give you information, understand it. This goes double for when I tell you I’ve already tried certain steps to fix the problem.

3) Help me solve the problem. If you can’t (see rule #2), reroute me to someone who can (see rule #1).

Powered by WordPress