Britt sent me this link lamenting the lack of interaction between Web 2.0 services:
This is an interesting and correct observation, but let’s look at an analogous situation: Unix command-line tools.
Unix is designed around the pipe – the ability to string long chains of commands together, each of which only does a small thing, to accomplish what you actually want to do. There are some places where this breaks down, but by and large, this method has been spectacularly successful.
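A small illustration of that design (a standard example, not specific to any one system): a question none of these tools answers alone, answered by chaining them together.

```shell
# Count how many accounts use each login shell: cut extracts one field,
# sort groups the lines, uniq -c counts each group, and a final sort -rn
# ranks the results. Each tool does one small job.
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
```

Each stage reads the previous stage's output as plain text, which is exactly the kind of interoperability the rest of this post is about.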
Web2.0 apps are much better positioned to emulate this than Web1.0 apps, but they’re still not there yet.
What’s missing are the switches that let those apps play nicely with other apps.
You’re probably familiar with ls, which lists files in a directory:
fields@server2:~$ ls /tmp
ls also has another mode that outputs a long listing, including more detailed information about the files:
fields@server2:~$ ls -l /tmp
-rw-r--r-- 1 root root 860863512 Jun 22 19:08 mysql-snapshot-20060621.tar.gz
-rw-r--r-- 1 root root 382 Jun 22 18:50 mysql-snapshot-20060621_master_status.txt
Once you have that, you can pass the list to other programs that want to filter it by one of those pieces of data. The default mode is useful for dealing with the files themselves, but less useful if you want to interact with their metadata. What if the -l flag were left out, and that behavior were restricted to maintain ls’s competitive advantage (in the hypothetical situation where it’s something provided by your filesystem vendor)? If the information you’re looking for isn’t returned at all, you may have no other way to get at it. Maybe you’d have to use the vendor’s lslong, which costs money. You might be fine with that, or you might be compelled to look for a filesystem competitor that does what you want. I’d argue that ls is less useful without that ability. That’s the situation we’re looking at when a Web 2.0 API lacks core features for interacting with the data it represents.
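To make that concrete, here is a sketch of the kind of downstream filtering -l enables. The directory and files are made up for illustration, and parsing ls output is fragile in general, but it shows the idea:

```shell
# Set up a throwaway directory with one large and one small file.
dir=$(mktemp -d)
head -c 2000000 /dev/zero > "$dir/big"
echo "hello" > "$dir/small"

# The fifth column of ls -l is the size in bytes; awk filters on it,
# printing only the names of files larger than 1 MB.
ls -l "$dir" | awk '$5 > 1048576 { print $9 }'   # prints: big
```

None of this is possible with the default ls output, because the size simply isn’t there to filter on.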
Is that an acceptable tradeoff? Maybe it is for a free service. It seems less so for a service you pay for, because fundamentally, you’re paying for the ability to manage your data, not for the ability to use the particular software – that’s the whole concept behind software as a service in the first place.
This is, of course, complicated by the fact that Web 2.0 isn’t just about data sharing; it’s also about more dynamic interfaces. In theory, the two are interconnected: the dynamic interfaces work better because they deal with small chunks of data in more standardized formats, and the data access mechanics are decoupled from the actual interaction semantics, which should make non-GUI access to your data with standard tools easier. In practice, that rarely seems to happen.
This is the only good rationale I’ve heard for using XML for gui/backend interchange.
These are good things to be thinking about when designing web applications. It’s not enough to think of them in a vacuum; we have to consider the implications of living in the ecosystem. It’s possible that this means opening up far more access to the underlying workings than we’re accustomed to. I would LOVE to see some applications that fully work if you take away the browser front-end, but still interact in exactly the same way via HTTP.
[Update: More on this discussion from Phil Windley.]