The Gillmor Gang – Dan Farber, Jason Calacanis, Doc Searls, Robert W. Anderson, and Mike Vizard – welcome Google’s Mark Lucovsky to talk about cloud computing from Hailstorm to today’s Feed APIs and beyond. Recorded Friday, June 6, 2008.
Gillmor: Hi, this is Steve Gillmor. Welcome to the Gillmor Gang. We’ve got some old-timers here today. I think Doc Searls will join us in a minute, once he gets out of his debriefing ceremony at Berkman. And we’ve got Dan Farber. Hi, Dan.
Farber: Good morning.
Gillmor: Robert Anderson.
Gillmor: And others will join us shortly. Our special guest is Mark Lucovsky of Google. Hi, Mark, how are you?
Lucovsky: Good, how about you guys?
Gillmor: We’re sort of getting it together a little slowly here. Fridays always feel like Sundays.
Anderson: I never got to find out, Mark, if you enjoyed the “Flight of the Conchords” at Google I/O.
Lucovsky: [laughs] You know what I enjoyed about it? Telling my son that I was at the “Flight of the Conchords.”
Anderson: Oh, yeah.
Lucovsky: Yeah. That made an impression.
Anderson: Yeah, that’s good. So was it worth walking up to watch that guy talk for four minutes about wine?
Lucovsky: It was a good talk. It was a good walk. I enjoyed the walk up and the walk back and the five minutes that we spent at Borders. [laughs]
Gillmor: Well, thanks for that ambient stuff about Gary V., because we certainly know that he needs some more publicity.
Gillmor: Dan Farber, you were mentioning that Amazon is down?
Farber: Yes, Amazon’s been down for about half an hour this morning, it seems. No shopping. As well as the web services, it seems. It’s like a catastrophic failure for them.
Gillmor: So, Mark Lucovsky, it’s all your fault. Isn’t this the revenge of Hailstorm?
Lucovsky: Hailstorm is long dead. There’s nothing left, just some of the ideas are living on in what we’re seeing in RSS, but there’s no centralized company trying to run this any more.
Gillmor: Yeah, but the idea of huge in-memory databases with little micro-objects floating around in them, that’s what Hailstorm was, wasn’t it?
Lucovsky: Yeah. I guess, at a high enough level, that’s exactly what it was. Little tiny objects flying around tied to an identity, exposed using open protocols and web services. Which, if you look at some of the more interesting RSS feeds, that’s exactly what you have.
Just taking something simple like the media RSS grammar inside any given feed: it takes your standard canonical article or feed, gives it some media flavor in a structured way, and you get the data the same way you would have gotten it with a Hailstorm thing.
Gillmor: So what do you mean by a media RSS?
Lucovsky: Well, media RSS is the grammar that Yahoo came up with to describe media objects that are in a feed, whether that media object is a podcast or a video or an image. Just a very, very simple small set of tags, I don’t know, maybe a dozen tags.
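To make that concrete, here is a minimal sketch of pulling Media RSS tags out of a feed with Python's standard library. The feed content is invented for illustration; only the `media:` namespace URI is Yahoo's real one.

```python
import xml.etree.ElementTree as ET

# Media RSS extends plain RSS with a small "media:" namespace from Yahoo.
NS = {"media": "http://search.yahoo.com/mrss/"}

# A made-up feed carrying one media item, for illustration only.
FEED = """<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <item>
      <title>Sample clip</title>
      <media:content url="http://example.com/clip.mp4" type="video/mp4" duration="120"/>
      <media:thumbnail url="http://example.com/clip.jpg" width="160" height="120"/>
    </item>
  </channel>
</rss>"""

def media_items(xml_text):
    """Yield (title, media URL, MIME type) for each item carrying media:content."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        content = item.find("media:content", NS)
        if content is not None:
            yield (item.findtext("title"), content.get("url"), content.get("type"))

print(list(media_items(FEED)))
```

The point of the grammar is exactly what Lucovsky says: a dozen or so tags give structured meaning to media objects that would otherwise just be links in text.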
Gillmor: So these are..
Lucovsky: It changes everything.
Gillmor: This is the dreaded microformat?
Lucovsky: Well, I think that when people use “microformats” they really mean HTML markup — semantic markup with class tags that have meaning. So when you take regular HTML and you take a div tag, for instance, and you apply a well-known class to it, it becomes a microformat, where you can look at that element and say, “I know what this contains now.”
It’s not just text on a screen; it means something more. So I think microformats are very different than what we were talking about with Hailstorm or what you’re seeing in the RSS world. I think you can take those elements from RSS and transform them, render them into HTML using microformat techniques.
And that’s exactly what we’ve done in all of the Ajax services that my team at Google does. When we generate HTML for, say, a search result, if you look at the HTML that we’ll generate, it’s essentially a microformat. We’ll say that this block is a web search result or a video search result or a YouTube result. And then this property is the link, and this is a snippet, and this is a publish date.
So we’re using microformat techniques to describe search results, if you will.
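The technique Lucovsky describes can be sketched as a small renderer where class names, not tags, carry the semantics. The class names and field names below are illustrative, not Google's actual markup.

```python
import html

def render_result(result):
    """Render a search result as HTML whose class names carry the meaning,
    in the microformat style described above. Class names are invented
    for illustration."""
    return (
        '<div class="web-result">'
        f'<a class="result-url" href="{html.escape(result["url"])}">'
        f'{html.escape(result["title"])}</a>'
        f'<p class="result-snippet">{html.escape(result["snippet"])}</p>'
        f'<span class="result-published">{html.escape(result["published"])}</span>'
        "</div>"
    )

print(render_result({
    "url": "http://example.com/",
    "title": "Example",
    "snippet": "An example result.",
    "published": "2008-06-06",
}))
```

A consumer can then look at any `result-snippet` element and know what it contains without parsing the surrounding prose.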
Gillmor: Lucovsky’s spelled with a C. Correct?
Lucovsky: For what, first name or last name?
Lucovsky: Yeah. It’s L-u-c-o-v-s-k-y.
Gillmor: Yeah, I was just giving that to Jerry in Los Angeles for the video. We have a simultaneous video feed on ustream.tv/channel/techaura. Somebody is echoing now.
So, can you describe a little bit at a higher level what you have been doing at Google?
Lucovsky: So we make things like Dojo, jQuery, Prototype, Scriptaculous and MooTools globally available with perfect cache semantics, served by Google frontends all over the world.
Lucovsky: I suppose the Slashdot crowd and the conspiracy guys will always find something evil like that to associate with anything that we do. Nothing could be further from the truth.
My team is the only real team that has detailed access to the logs of what’s going on in a service. Our logs access and our privacy constraints at Google — we take those very seriously. While, yes, I do have access to the logs and I can see what’s going on, we only ever look at this stuff in the aggregate.
We look at that to say, “We launched this API — how well are we doing? Did we surprise ourselves? Is there another library that’s popular? Are our cache semantics working?” So the aggregate numbers that we get out of the logs are important for us to make sure that we’re going down the right path and that we’re doing the right thing.
I don’t think that there’s any reasonable way to look at that and say, oh, from that we can do a better job at targeting advertising.
Anderson: Mark, you just said something interesting. You said that looking at the logs– and I may have misunderstood you — but you can tell if there’s something else that’s popular.
Lucovsky: No, we can’t tell if there’s something else that’s popular, but we can tell if we were misled into, “Geez, all the buzz about such-and-such a library…” Let’s just take a library that we’re not hosting, foo.js. And let’s say the community says, “God, Google, you’ve got to host this. This is the best thing in the world. Everybody’s using it.”
So we go out and host it and we find that, wow, not many people are using this. Either it’s not very popular, or there’s something that we’re doing wrong. And it could be that the library is designed to be built-to-order or configured-to-order, and we only released one configuration of it, and the popular configuration is slightly different. So that’s an example of where we might learn from that.
Or we might find that, wow, everybody that loads Scriptaculous also loads Prototype. So we’re seeing those two in synch, for instance. It might be — what if we offered a combo library that was both? That would save a round trip for our customers and lead to, perhaps, better caching.
So those are the types of things that we can learn by looking at patterns. But, in truth, it’s a lot of work to do that learning; really we’re just looking at the aggregate, making sure that things are trending as expected and there are no surprises. That’s how we learn.
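The kind of aggregate pattern-spotting described here — which libraries tend to load together — can be sketched in a few lines. The log data below is invented; the point is that only per-request sets of libraries are examined, never anything per-user.

```python
from collections import Counter
from itertools import combinations

# Each entry: the set of hosted libraries one page load pulled in.
# Invented sample data — purely aggregate, as described above.
page_loads = [
    {"prototype", "scriptaculous"},
    {"jquery"},
    {"prototype", "scriptaculous"},
    {"prototype"},
]

# Count how often each pair of libraries is loaded by the same page.
pairs = Counter()
for libs in page_loads:
    for pair in combinations(sorted(libs), 2):
        pairs[pair] += 1

# A frequently co-loaded pair is a candidate for a combined download,
# saving a round trip.
print(pairs.most_common(1))
```

Here the Prototype/Scriptaculous pair surfaces as the co-loading pattern worth acting on.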
Gillmor: So what is the value proposition for hosting these libraries, from Google’s perspective?
Lucovsky: For Google? Nothing. Just goodwill, doing the right thing for customers. We get nothing out of this. This is really all about — we’ve done a lot of latency work over the last quarter, and a lot of the things we’ve learned along the way are obvious things that everybody knows, but a lot of websites don’t have the capability to do things right in terms of caching.
So we said, hey, if we can do this correctly, that’ll benefit anybody who visits sites that use these popular libraries.
Anderson: But this fits in with the bigger Google message, “Google makes the Web a better place.” It’s better for Google.
Lucovsky: Right. Yeah. But this one, this is really a better niche for Google. I look at this and say, “This is better for customers that visit these sites that are using popular libraries.” Prototype, jQuery, some of those low-level libraries are very, very popular. And because of the way those were released, every single site has their own version of those libraries, and browsers can’t really cache those that well.
So you have all these extra round trips that the end users are suffering, because they’re sitting in front of the browsers doing those round trips. We can tell a browser, “Here’s jQuery 1.2.6, and you can hold onto it forever. When you ask again for the latest, we’ll give you the next version when it’s available. But until then, don’t even come talk to us with an If-Modified-Since. You have the right version.”
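The cache policy Lucovsky is describing amounts to a simple rule: a fully versioned URL can be cached forever, while a "latest" URL must stay revalidatable. A sketch, with illustrative paths rather than the real hosted-libraries layout:

```python
def cache_headers(path):
    """Return HTTP caching headers for a hosted-library path.

    Sketch of the policy described above; the URL layout is invented.
    """
    if "/latest/" in path:
        # The client should come back and ask; a short max-age keeps it fresh.
        return {"Cache-Control": "public, max-age=3600"}
    # e.g. /libs/jquery/1.2.6/jquery.min.js — those exact bytes never change,
    # so the browser never needs another round trip, not even an
    # If-Modified-Since revalidation.
    return {"Cache-Control": "public, max-age=31536000, immutable"}

print(cache_headers("/libs/jquery/1.2.6/jquery.min.js"))
```

The one-year `max-age` (and the later-standardized `immutable` directive) is what lets every site referencing the same versioned URL share one cached copy in the browser.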
Gillmor: So it’s kind of like — this is a stretch in terms of an analogy — but it’s kind of like what you’re doing with Gmail where you host a lot of the libraries, if you will, that are the Gmail functionality. You host them in memory. It’s kind of like extending that memory base back to the servers. I would imagine doing some intelligent caching of the appropriate files on the machine. Or am I wrong about that?
Lucovsky: No, I think really the caching is happening in the browsers, is what we’re shooting for.
Anderson: But it’s more like unifying the name space for these shared libraries, right?
Lucovsky: Sure, the URL name space. Right. That’s how you get the caching.
Anderson: That makes the browser think, “Oh, this is all the same thing,” as it goes to 10 different things that happen to be the same, but they have different root URLs.
Gillmor: So let me abstract this up to a slightly higher level, and go back to the reason that I asked the question about the so-called “less than not evil” potential possibilities. Is that a nice way of putting it?
Gillmor: OK. So the reason that I asked this is, from my perspective, when you and Gates announced Hailstorm lo those many years ago, there was, I think, a very, very sincere attempt on your part — and I believe on Gates’ part — to suggest that you could use the capabilities with open source tools.
Gillmor: Command line interfaces.
Lucovsky: We actually did the Microsoft stack last.
Gillmor: Yeah, exactly. And of course the politics of that situation got sort of torn apart by the Passport relationship to the service, or at least the potential of that. I mean, it certainly was a requirement in those days. I think Microsoft learned a fundamental lesson which they’re still trying to back out of, which is: how do we provide services without lock-in?
I think we’re at a point where they’re going to have to do it in order to survive. So it doesn’t necessarily mean that, because someone’s in a monopoly position, as Microsoft was in those days, what they’re doing is bad.
Gillmor: And so do you see the potential for Google’s ubiquity and power creating a situation where, even though this is all being done for very, very appropriate reasons, it will be interpreted by some group or another as being an attack on our overall digital rights, because of the size of Google and what it implies?
Lucovsky: You know, I do. But I think that, at least in the APIs and the services that my group provides, we’re really providing value-added services where there’s plenty of choice. Let’s take our Feed API, for instance. We provide basically cross-domain access to RSS feeds to any browser client.
They’re very easy to use. You can get up and running and put feeds on your site, or do whatever you want with feeds, very easily. And you get to leverage Google’s crawling infrastructure if you want. Our feed reader is out there crawling the web, pulling in feeds, processing them, saving overall bandwidth for the feed providers, because we ping them once: one ping from our crawler gets to deliver that feed to potentially millions of users.
Gillmor: So this is like a Google Reader abstracted out at the API level.
Lucovsky: It’s exactly that. We’re using the Google Reader feed crawling and caching mechanism and exposing those cached feeds to the users of the API. Nobody has to use that. If you’re a site and you don’t want to get into the game and you want to go hit time.com and read their top headlines feeds directly, knock yourselves out.
You can go do that. There’s a lot of work that goes on in processing feeds correctly and robustly. But if you’re a good enough programmer and you want to do it, there’s nothing that prevents you from doing that on your own.
Google is just doing this as a service for those applications and sites that choose to use our infrastructure to access feeds. If you want freshness on a feed — what if everybody who wanted a feed pinged it every 15 minutes? The site serving that feed is going to get overwhelmed, and it’s going to start dropping requests.
If Google can broker that traffic for you, that’s good for the consumers of the feed and for the publisher of the feed. But nobody has to use it.
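The brokering Lucovsky describes is, at heart, a shared cache between publisher and consumers: one origin fetch serves many readers until the copy goes stale. A toy sketch, with invented names:

```python
import time

class FeedCache:
    """Toy version of the feed brokering described above: one fetch from
    the publisher serves many consumers until the cached entry expires."""

    def __init__(self, fetch, ttl_seconds=900):
        self.fetch = fetch          # callable: url -> feed bytes
        self.ttl = ttl_seconds      # freshness window (e.g. 15 minutes)
        self.entries = {}           # url -> (fetched_at, body)
        self.origin_hits = 0        # how often the publisher was actually hit

    def get(self, url):
        now = time.time()
        cached = self.entries.get(url)
        if cached and now - cached[0] < self.ttl:
            return cached[1]        # served from cache: no load on publisher
        self.origin_hits += 1
        body = self.fetch(url)
        self.entries[url] = (now, body)
        return body

# A thousand consumers, one origin fetch.
cache = FeedCache(fetch=lambda url: b"<rss/>")
for _ in range(1000):
    cache.get("http://example.com/feed")
print(cache.origin_hits)
```

That is the whole value proposition: the publisher sees one request where it would otherwise have seen a thousand.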
Gillmor: All right, so nobody has to use it. Why are they paying you to maintain this?
Lucovsky: Because it doesn’t cost a lot to maintain it.
Gillmor: So you’re a goodwill ambassador basically, and that’s…
Lucovsky: We enable things that weren’t possible before. Cross-domain access to a feed from a browser client used to mean that you actually had to host a proxy on your frontends.
So it is something. What we’ve done is make it possible to, say, use a feed in a context of a browser-only mashup, if you will. Or use a feed where instead of your server resources and your company’s server bandwidth having to go access the feed, we do it at the edge of the network and the browser.
So we’re using the client’s bandwidth instead of the server’s bandwidth. It’s a good API, it’s a good way to access this kind of stuff. Push the computing to the edge, which is the same message that Microsoft promotes at a high enough level.
Gillmor: Well, I want to get to Mesh in a little bit yet.
Lucovsky: I’m talking an even higher level than Mesh. I mean, I think Microsoft is really promoting the rich client, the rich processing power of the browser. And I think that when you look at Ajax apps and the state of the art in Ajax, where you are pushing that processing to the edge, we’re all saying the same thing. We’re just saying that there are lots of ways to push computation to the edge of the network.
In the case of our APIs, they’re perfect for use in the browser, where there are all of the extra compute cycles.
Gillmor: You came out of the NT group at Microsoft, right?
Lucovsky: Yeah. I started in the NT group back in ’88.
Gillmor: So this is a long, strange trip, isn’t it?
Lucovsky: Well, yeah. For the bulk of my career I did nothing but work on low-level operating systems. And then in 2000, when Microsoft did the shift, I got asked to move on to some of these services — kind of the future of where we were going.
And it’s exciting. It’s a lot of fun. Dave Cutler used to say he’s got a limited quota of device drivers — tape drivers in particular — and he said he’s run out. He’s never doing another tape driver again. He’s kind of run out of quota on drivers.
I felt that I spent 20, 25 years writing operating system kernels and operating system internals, and it’s great and it’s a lot of fun, but I’m done. I’m not going to do that any more.
Gillmor: So it’s kind of like from assembly to a higher level.
Lucovsky: Yeah. It’s amazing. I love these new languages.
Gillmor: Dan Farber, have you got a question?
Farber: Mark, you talk about the APIs and how it’s a great public service that you’re providing them and allowing people to access them. What are you doing in terms of this overall notion that was talked about before, which was to wire up Google as a social network by taking email and feeds and all the elements to build from the inside out instead of the outside in?
Lucovsky: I’m personally not involved in much of that effort at all. I’m more involved in the public side, the public search, public feeds, that end of it. For a lot of reasons. I’m not too into the OpenSocial side of things.
I think it’s a great idea. I’m excited to be a customer of that. I don’t know where it’s going though, to be honest with you. I think that everything that the community does in that space, there’s two sides to every story there, and we’ll just have to see where the journey takes us all. But it’s an interesting way to go.
Gillmor: Doc Searls, have you been able to hear some of this?
Searls: Yeah, I just came in. I was just going to send you an IM saying that I’m, in fact, on the case. [laughs] So I haven’t heard enough to have anything interesting to say.
Lucovsky: Let me just — the parallels to Hailstorm: if you squint at it, you can see that it was really all about assuming every identity has a very, very rich, extensible profile. And we extended the model of a profile to say that an identity can have a calendar as part of its profile. It can have a collection of photos. It can have a collection of friends.
Basically anything that you can dream up, you can say it attaches to an identity and therefore extends the base profile. A lot of the social networks — and a lot of what’s interesting in social networks — is exactly that concept.
When I go to Facebook, if I look at my Facebook page, the page represents me, or my identity, and hanging off that page I have my friends, my photos, a message stream. I have all these facets that we were talking about nearly 10 years ago. And by opening up API access to that rich, enhanced profile, it’s basically a replay of what we were talking about back in the day with Hailstorm.
Now when you throw in OpenSocial, and companies agreeing on standard protocols for accessing different facets of a profile, it’s the same sort of thing we were talking about 10 years ago. It’s the same sort of idea, it’s the same excitement. I just don’t know where it’ll take us.
Gillmor: Go ahead, Dan.
Farber: I wanted to ask you about Live Mesh, and get your impressions of that technology that Microsoft has been promoting as a kind of layer of plumbing — which would seem to be something that Google could do as well, for synchronization and all kinds of APIs for notifications and everything else. Is that competitive with Google, and with what others are doing? Or is it something you could take advantage of?
Lucovsky: I think that the problem with just looking at things from a plumbing perspective is all you get is plumbing. And that’s the biggest concern. I think that if Live Mesh had launched with some really, really concrete, compelling, out-of-the-box usable scenarios and applications, then I think everybody could really understand where it’s going and what the potential is.
I think without those concrete drivers, different companies will come up with the same end user experience, but possibly based on different plumbing. That’s my problem with plumbing in general: it doesn’t always take you where you want to go, unless you show people those scenarios at the same time that you release the plumbing. I don’t know if that makes sense.
So, in a nutshell, I think that even if Live Mesh is the best technology in the world for doing this kind of synchronization thing, without the compelling end user experience and application suite to go along with it, there’s still a lot of room for people to come up with competing plumbing and alternate ways to think about the plumbing.
Gillmor: Yeah, there may be alternate ways of doing that, but if they’re complementary to that platform, then all the better.
Lucovsky: Yeah, but the chances of them complementing it out of the box are small. There’s a greater chance of them being somewhat in conflict. Look at the various XML synch protocols out there. There are a million different ways people are going to synch in XML, and in most cases it’s really tied to a real application-level problem, and less about the underlying synch protocol.
Gillmor: So what you’re saying is there’s no opportunity for interoperability between them.
Lucovsky: No, I think there’s great opportunity. But everybody has to go in wanting to standardize and say, “OK, that technology is the layer that we’ve all agreed to and that we want to use.” So it has to be kind of voluntary at that level.
Gillmor: Yeah, I understand that. That goes back to what I was saying before about Hailstorm and Passport. I think Microsoft at this point has the motivation in many ways to be able to cooperate rather than to try and lock down formats or any other kind of stuff.
And clearly, Google is driving some of that with its Friend Connect strategy etcetera.
Lucovsky: Yeah, sort of. Although I think Friend Connect is fundamentally different than what is going on with Mesh. I mean, I will say that if there’s one guy that I would trust to come up with the right protocol for doing this kind of distributed synch, it would be Ray and his team.
Ray, personally, has been through this many, many times over the years. I’m sure he’s made many, many mistakes in his past, and he’s learned a lot from it. Mesh probably represents his best effort after working on this problem for 20, 25 years.
I think that’s exciting. I think Microsoft, with the various things they’ve done over the years between Outlook and Exchange — changing that synch protocol on every release, making it better and better, and understanding the problem very deeply — they’ve got a lot of history in this space, and there’s a great chance that they have the answer that very few other people have. So that’s exciting to me.
Gillmor: And it’s also exciting that their economic imperative is to join a space where Google has much more strength. Typically the way in for someone is to catch up through the use of standards. So it would seem that that would be probably the best chance that they have in order to be able to regain some momentum.
Lucovsky: It could be. Time will tell.
Gillmor: And I would also not necessarily characterize what Ray’s done in the past as mistakes. I mean, he had to invent an XML synch strategy with Groove, because none existed in the operating system.
Lucovsky: But what I mean is Ray understands tombstoning better than anyone else. He’s done it many times, he understands the failure paths. A kid fresh out of college, even if he went to Stanford..
Gillmor: What is “tombstoning”?
Lucovsky: When I delete a record, I’ve got to delete the record but leave a marker so other people can see it was actually deleted. And if you didn’t synch for a long time, and the delete happened a week ago and you’re just catching up now, making sure it deletes on your client correctly — those are some of the hard problems in synch.
Or multiple changes to the same record, detecting the conflicts and coming up with strategies for conflict resolution, those are all interesting problems. And Ray’s pretty good at that.
And what I don’t think — I don’t think a kid fresh out of Stanford with a BS in computer science necessarily has the same history and the same background. They haven’t made enough mistakes to know the gotchas along the way. So experience, I think, is important in some of these spaces.
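A minimal, self-contained sketch of the tombstoning idea described above: a delete leaves a dated marker, so a client that was offline for a week still learns the record is gone when it finally synchs. The store, logical clock, and method names are all illustrative.

```python
class SyncedStore:
    """Sketch of tombstoning: deletes leave markers rather than vanishing."""

    def __init__(self):
        self.clock = 0        # logical clock; real systems use timestamps/versions
        self.records = {}     # key -> (value, changed_at)
        self.tombstones = {}  # key -> deleted_at

    def _tick(self):
        self.clock += 1
        return self.clock

    def put(self, key, value):
        self.records[key] = (value, self._tick())
        self.tombstones.pop(key, None)   # a re-created key clears its tombstone

    def delete(self, key):
        self.records.pop(key, None)
        self.tombstones[key] = self._tick()  # the marker, not a silent removal

    def changes_since(self, t):
        """What a late-synching client asks for: updates AND deletes after t."""
        updates = {k: v for k, (v, at) in self.records.items() if at > t}
        deletes = sorted(k for k, at in self.tombstones.items() if at > t)
        return updates, deletes

store = SyncedStore()
store.put("a", 1)
store.put("b", 2)
checkpoint = store.clock   # the client last synched here
store.delete("a")          # happens while the client is offline
print(store.changes_since(checkpoint))
```

Without the tombstone, the catching-up client would see no trace of "a" and have no way to distinguish "deleted" from "never changed". Conflict detection — the other hard problem mentioned — would compare these per-record timestamps across replicas.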
Gillmor: So you see Mesh as primarily a synchronization technology, and that’s all it really is?
Lucovsky: No, I think that’s how it’s being spun right now, more than anything else. I think that until we get that next layer of applications that really demonstrate what it means to have your application sit on the edge of the network and share data through synch — rather than through kind of a hub-and-spoke, server-centric model — we won’t really know. I think that’s interesting.
Gillmor: I also think about what you were talking about before — just, in general, the sort of Hailstorm patterns that are emerging in a more open universe. It seems to me that that has an interesting kind of social media platform built into it. And interestingly, I think they’re all RSS feeds.
Lucovsky: Yeah, exactly.
Farber: I think what I just heard you describe was that every application becomes a service, and it’s sitting out on the edge of the network and to synchronize that creates all these different challenges. I think for every application to also become a service is going to take a fair amount of work. No?
Lucovsky: I don’t think it means that every application is necessarily a service, but I think it means that the bulk of the applications on the edge consume a service. So whether you act as a server and a client, or just a client of the service, is probably the big distinction.
I think the bulk of these applications will start out life as more of a consumer than a consumer and a producer.
Gillmor: And what does that imply?
Lucovsky: I think what it would mean is that if I’m a word processor sitting on the edge, I might share documents and content nuggets through synch as a consumer of this, but I might not necessarily republish the aggregate document through the same infrastructure.
Gillmor: Right, the deltas rather than the..
Lucovsky: So, I don’t know. We’ll see where it goes. I think that’s where things get interesting though, is when applications start participating both ways.
Farber: And not every application, or every variation of that application, is always going to be one thing or the other. At some point the user has to create a “setting” that says, in this particular instance, I want it to be this way or that way.
Lucovsky: Yeah. But I think that it’s really — I mean, take something non-controversial like iTunes, sitting on the edge of the network. Clearly a client-side application. There’s a lot of data nuggets in iTunes that would be really cool to share without really sharing. Seamless sharing through these synch protocols.
Things like playlists that can refer to music that you might not necessarily have. But if I could camp on somebody else’s playlist seamlessly, and if I have the music, great, and I don’t have the music, make it possible for me to purchase the music that fills out that playlist. I think that that would be a great example of client-side app consuming and producing these little nuggets of information all synchronized through some sort of Mesh-like system.
Gillmor: Yeah. It’s what I have called “gestures” or consuming attention data and then sending that information out in an anonymized form so that other people can take advantage of it without violating privacy.
Lucovsky: Yeah, anonymous or not.
Farber: And then in a corporate model, then could I not have data that was associated with a particular business process signal its existence to other data sets, so that it could be consumed or rolled up inside those data sets.
Lucovsky: Even if it’s an inventory control type of system. You have an RFID tag on everything. As merchandise flows through your warehouse or through the store, or the point of sale terminals and stuff, all these nodes on the edge are kind of running a partitioned synchronized database so you know what’s available and where things are.
Farber: How do you think that will change the way developers approach everything they do? Are we looking at a generational change here?
Lucovsky: Probably. They’ll get bigger headaches, I don’t know.
Lucovsky: I think the tooling that makes this easier on developers is going to be a big factor. I mean, the person that writes the tools that say, “this is how easy it is to use — you don’t have to think of this as a distributed, partitioned database where things come and go willy-nilly” — somebody that can abstract that in the right programming model is going to do well.
Gillmor: All right. So the announcements that Google made at Google I/O, any of those relate to that tooling idea?
Lucovsky: This is just me speaking for me on some of these tooling ideas. I think that Google made a number of announcements at I/O that all move the Web forward and all make it easier to write code. If you’re a Java developer and your boss has asked you to sprinkle some of that Ajax stuff on your application, then the Google Web Toolkit is an ideal environment for you to move into that model.
If you’ve got a brilliant idea and you don’t want to take on venture funding early, and you want to take your time to get your application together, App Engine is the perfect playground, where you get essentially free web hosting and a relatively powerful compute system behind you to build your site or your application for free.
If you’re doing mobile, Android — an open source stack for cell phones — is ideal. Where do we think some of these communication devices are going to be over the years? Why not a cell phone in your car? Why not Android in the dashboard, which may not have anything to do with making voice calls, but might have everything to do with being connected?
Calacanis: This is Jason here.
Gillmor: Speak up, Jason.
Calacanis: Oh. I push *4 to speak up?
Gillmor: That would help.
Calacanis: Is that better?
Gillmor: Yeah, go ahead.
Calacanis: Coming in OK, 5x5? I was in a meeting yesterday with some developers — our developers at Mahalo — and we were looking at... we’re moving out of our current hosting place, which was costing us $30,000 a year. We have offers for $15,000 or $20,000 a year for the same exact setup.
And we’re also looking at just putting up commodity servers, which would cost half of that, like $7,000 a month. You don’t even need to have that managed anymore. Just buy twice as many servers as you need and then let them die as you go through the rack. And then we’re also looking at moving everything to EC2 or Google App Engine. So we’re in discussions with both of those companies, talking about, “I wonder if the entire Mahalo application could be a MediaWiki, essentially, run off of EC2 or Google App Engine?”
And nobody really knows. But there are companies like SmugMug, which is a huge company with huge amounts of pages, and over 100,000 paid users, or something to that effect — that’s the rumor. It’s definitely tens of thousands, maybe over 100,000 paid users — and they’re running off of it.
So the idea that it’s a little startup that could run off of these services might not actually be accurate even today.
Lucovsky: I didn’t mean a little startup. I didn’t mean that these were limited to that. I mean that to get up and running, you need to have something. So the friction to get started, if you’re starting up and you’ve got a bright idea, we all have a place for you to go now.
And you’re not limited. Your growth isn’t capped. You don’t have to stay small on these things. It’s that you can get up and running really quickly.
Lucovsky: And this is the same dynamic — I remember interviewing this guy at Microsoft years ago, and he was telling me about his great system that he had put together. And he put it together on Linux. And I asked him why, because I’m at Microsoft, and he said, “Look, we had no money. We had no money. Linux was not the right answer for our application, but it was the right answer for our checkbook.”
Calacanis: [laughs] Well, it also happens to be massively more stable than Microsoft products. I mean, that’s why they use it.
Lucovsky: That wasn’t the issue in his mind.
Calacanis: I know. But that’s really the reason. Let’s be honest.
Lucovsky: The practical reality was that that was the real answer for them and their level of funding.
Lucovsky: So free, I think, really means a lot when you get up and do it.
Calacanis: This really goes back to what Chris Anderson’s doing with his new book, “Free,” which is that Moore’s Law, and the similar law for bandwidth — bandwidth, storage, and processing power are all going down to zero. It’s pretty much a given.
Calacanis: And when the software costs went down to zero, you saw Microsoft’s server business — at least for web stuff — go to zero. Then it was the hosting and the hardware and the bandwidth that were costing things. Now the bandwidth is becoming essentially free. And it’s all going to just move into the cloud. And does Microsoft have a cloud platform, like EC2 or like Google’s, now?
Searls: Can I challenge a little of that, Jason? What makes you think, or what makes Chris think, that bandwidth is going to zero? I think it should, by the way, but I don’t see how, as long as the telephone and cable companies are owning too many of the pipes, that that’s ever going to happen.
Calacanis: Oh, no. It just keeps going down. I mean, you’re talking about all these different backbone providers. I’m talking about Internet providers. I’m not talking about consumers. Consumers, who knows? I’m talking about people that are hosting servers. The cost of bandwidth, getting onto the Internet…
Searls: Oh, for them. OK. All right.
Calacanis: Cheaper. I’m not talking about end users. End users, they have a monopoly…
Searls: No, no. I’m talking about what happens between S3, for example, and any particular user. I’m lucky in the sense that I have Verizon FiOS where I live, and so I’ve got 20 megabits each way, but that’s a fairly rare condition. The funny thing is, when I hooked up with that service, they asked me if I was a gamer, and I said, “No, I’m an uploader, and I’m a businessperson, and I want to be able to use big-time web services.” And they hardly knew what I was talking about.
Calacanis: Right. [laughs]
Searls: But, anyway. I think it’s a good thesis, but I think in the practical reality, the bandwidth is going to be the last thing they’re going to sell.
Calacanis: Actually, the practical reality is like every two or three months, the people who we talk to two or three months before that are offering the same bandwidth at 20% less, 30% less.
And all the edge-casting places, like CDNs, content-delivery networks, which is how a lot of web-services people are doing stuff. You put your stuff at the edge with CacheFly or whatever, and they have massive peering agreements and massive server infrastructure across every data center. We use one called EdgeCast right now, which is freaking phenomenal.
Lucovsky: All this stuff is getting cheaper. It’s not really going to zero, though.
Calacanis: For a startup company, it’s essentially going to zero because, when you look at the cost, it becomes nominal. So, nominal to zero, in terms of for companies.
Lucovsky: True. Right.
Calacanis: It’s sort of like, I don’t know, the cost of software. It’s like it is free. Email is now free. Yes, there are some costs associated with it, some basic costs, like we buy Google Apps. We moved the whole company to Google Apps for $50. Yes, there is a cost to it, but $2,500 a year for a company is the same as zero. It’s essentially no-cost, especially when compared to buying a $400 office suite and a $4,000 server to run it.
It’s essentially zero when you compare it to that other alternative, the same way servers are essentially zero. The server software in the LAMP stack is zero. The hosting was the only cost. The hosting is now going to go to essentially zero. The rate card for EC2, the rate card for Google App Engine, is so ridiculously low that for any startup with any amount of Microsoft adCenter, Yahoo Publisher Network, or Google AdSense on it, it’s zero now. It’s just zero.
Vizard: I guess I wonder how anybody’s going to make money if it’s zero.
Searls: Well, you make money because of it. That’s all. You make money because of it, not with it. By the way, Amazon is down right now. For any of you who want to check that rare occasion, just dial in Amazon and…
Gillmor: Right. Somebody in the chat room, however, suggested that EC2 is up and running fine.
Searls: Yeah. EC2 is what?
Gillmor: EC2 is their…
Lucovsky: Elastic Compute Cloud.
Gillmor: Exactly. In other words, cycles for sale.
Vizard: Yeah, I mean, basically…
Calacanis: Is Microsoft coming out with something like that?
Gillmor: Mark is now at Google, so he’s probably not the person to ask that. But my opinion is that, yeah, that’s what Mesh is. Mesh is sort of an aggregation of all of those kinds of on-demand services.
Anderson: Well, and if you look at the slideware for what Mesh is, underneath that is a Microsoft compute cloud. They’re just not, at this point, selling services like EC2. But I expect that’s going to happen.
Calacanis: Yeah. Microsoft’s going to have an EC2…
Anderson: Well, the other day, Bill Gates said there’s going to be millions of Microsoft computers in the cloud, in the Microsoft cloud.
Vizard: Doesn’t this, at some point, have to move to some kind of conversation about interoperability between all these differing clouds, and how is that going to be accomplished?
Gillmor: Well, we were talking about that before. Mark, are you still there?
Gillmor: Weren’t you talking about that, and suggested that some of the infrastructure around sort of random collisions between different strategies is going to slow that down?
Lucovsky: I think there’s a lot of issues. When somebody talks about interoperability, there’s interoperability at the application level, like, “Can I, from Google App Engine, use a service at Amazon or at Yahoo or whatever?” And sure, these high-level APIs are incredibly open and incredibly web-friendly.
I think that the other facet that people worry about is, “What are my switching costs? If I chose EC2 as my system, what does it cost me to change my mind down the road?” So I think that those are some of the issues that people do worry about with these cloud systems.
Gillmor: But doesn’t that imply that there’s going to be a layer of connectors or stubs that allows people to move between them, basically the way the browser wars were sort of resolved by the one group?
Lucovsky: That’s one possible answer.
Calacanis: Talking about moving stuff off of EC2 and Google App Engine, you can move this stuff fairly easily from one to the other. That is not a big deal. And these services will be able to pretty easily connect to each other. I mean, SmugMug, as an example, is using EC2 for certain parts of their system and S3 for certain parts of their system. And you’re going to be able to mix and match these and pick which ones you like, and it’s going to be a totally non-issue.
The discussion we’re having right now would be like the discussion in the ’80s about like, “Did you get a Seagate hard drive, or did you get like an ATI graphics card?” And now, when people buy Macs, they buy it based on the color.
Anderson: Right. But we shouldn’t just muddle EC2 and Google App Engine in together, because they’re very different services. I would think of EC2 as more of a flexible, virtualized hosting system.
Lucovsky: That’s right.
Anderson: So you can do whatever you want there. You can add more nodes whenever you want. You can host any kind of a service there, do any kind of computation you want. So of course you can also host services that will serve the same data or information on the Google App Engine, but you can’t just necessarily, depending on what you’ve done, take whatever you did there and move it over to the Google App Engine easily.
Calacanis: Well, Google App Engine is Python, and so it’s…
Gillmor: Right. Isn’t it that there will logically be abstraction frameworks that will ease the difficulty of switching? Would you agree, Mark?
Lucovsky: Right. I think that there’s definitely APIs in these camps that, when you’re heavy users of those APIs, moving is a challenge sometimes.
Anderson: But the EC2 APIs have to do with provisioning servers, which is…
Lucovsky: Right. Right. And the App Engine APIs are at a much higher level…
Lucovsky: The level of accessing your data store and how you do your queries and how you partition your data and that sort of thing. So, EC2 switching is really all about, “OK, I don’t want to use a virtual host on EC2. I’m going to use real metal in this data center.”
Calacanis: The truth is, if Twitter was built today, it would be built on this. FriendFeed, a lot of these services — I think these are going to be scaled services. I think these are going to be the best solutions for scaled services. Google is going to allow everybody to piggyback their infrastructure, and then they’re going to look at their data and go, “OK. Twitter has the most engagement. [laughs] Let’s buy these companies.”
What they’re basically doing is subsidizing. It’s basically taking $10 million out of the five-year lifespan cost of a startup company. That’s why they’re doing this. They take $10 million out of these startup companies, they get them on their App Engine, and then, when they buy them, they don’t have to have an integration issue. They’ve taken $10 million out of the investment of companies, startups, and then they take $20 million out of [laughs] trying to convert them over and all the risks of them being successful.
Imagine if Flickr was already on the data centers at Yahoo when Yahoo bought them. Having done a lot of M&A in my career, it just takes out massive headaches. So it’s actually, as a startup, you should pick who you want to get bought by, and then build your company on top of that.
Vizard: Or throw this thing at research and development all over again, right?
Calacanis: I am so excited about this, I can’t even tell you. I was in a meeting yesterday. I basically stopped everything I’m doing, and I’m literally on the phone. I emailed Jeff Bezos yesterday immediately. I emailed my contact — I won’t say who — at Google immediately. I am on the phone with EC2 and the Google App people today. Massive conference calls. On the phone with SmugMug, who’s using EC2.
It is for scaled systems. If somebody figures out how to build a scaled system on this, you basically take out the $50,000 a month. Twitter’s probably taking $100,000 in brand damage every month with what it’s going through. And somebody will figure out how to get the Hadoop and MediaWiki platforms on it.
I mean, Wikipedia should be running on this. As a matter of fact, Wikipedia spends most of their budget on these stupid servers and admins and everything. They should just go to Google and get free hosting from them.
Vizard: Well, wasn’t that what all those Google developers were doing at that conference last week, checking that whole idea out?
Gillmor: This is what we’re talking about.
Calacanis: At Google I/O, I had two of my people there. This is the biggest change for startup companies since the Internet went commercial and images were in the browser.
Lucovsky: Yeah. I think that what’s happening is all of us are taking our best-of-breed, the things that we’re best at, and opening those up to the public. In Google’s case, it’s mass-scale infrastructure, mass-scale data centers, and putting a simple-to-use programming stack on top of that. In Amazon’s case, they’re really hot on their EC2 and their storage system.
So I can build a hybrid application that says, “OK, my front ends are going to be on Google because it offers the best environment for front-end development and some back-end development. And I might do my bulk processing or long-term storage in S3.” Who knows? Who knows what kind of hybrids people will build? But you now have an opportunity to pick and choose, and use providers that offer best-of-breed services in their space.
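The hybrid Lucovsky sketches out, with different concerns of one application handled by different providers, can be expressed as a small routing layer. This is a minimal sketch, not real API calls; the provider names and routing rules are illustrative assumptions:

```python
# Sketch of the "pick and choose best-of-breed" hybrid Lucovsky describes.
# Each concern of the hypothetical app is mapped to the provider assumed
# to be best of breed for it; nothing here talks to a real service.

ROUTES = {
    "frontend": "google-app-engine",   # request handling, UI
    "storage":  "amazon-s3",           # long-term bulk storage
    "search":   "google-search-api",   # public web search
    "social":   "facebook-graph",      # social-graph lookups
}

def provider_for(concern):
    """Return which cloud service this hypothetical app uses for a concern."""
    if concern not in ROUTES:
        raise ValueError("no provider configured for %r" % concern)
    return ROUTES[concern]

print(provider_for("storage"))  # -> amazon-s3
```

The point of the sketch is only that the mapping is an application-level choice: swapping one provider for another changes a table entry, not the architecture.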
Gillmor: Now, there was an announcement yesterday that Arrington and I were at Google about something called Gmail labs. Are you at all familiar with that, Mark?
Lucovsky: Yeah. It’s really cool. I mean, the thing that I like about it is it’s set up so that any Googler now can go hack in a feature to Gmail and test it out in the wild with real customers. It’s very, very cool.
Gillmor: Yeah, the thing that I was interested in — I mean, I agree with what you just said. I also think it’s creating this interesting feedback loop of behavior on the part of users. What this service allows you to do is go to this page and look at various offerings of these little special features. I just applied one that adds a picture to GTalk, or to Gchat, so that you can see who you’re talking to.
Gillmor: And these are fairly primitive right now, but as they start to get some mind share around these different experiments, it creates an interesting feedback loop that I think is going to eventually open up to third-party developers coming in and, essentially, hacking into the hack inside Gmail, and then starting to wire up outside applications, like, for example, Twitter, which is, of course, what I’m interested in.
Lucovsky: Yeah. I don’t know how long that’ll take to get to that level.
Gillmor: Well, a lot of these things are basically built on top of Greasemonkey experiments.
Lucovsky: Today they are, yeah.
Gillmor: Yeah. But what’s going to prevent Greasemonkey experiments using this capability and basically patching into it with outside services? Nothing.
Lucovsky: Well, I think that the thing with the Labs release, for me as a Google engineer, basically means that I can go in and add a feature to Google without having to Greasemonkey it in. So I can do a much nicer integration than what I could have done with Greasemonkey. So that’s a benefit to me as an engineer, and you as a customer get to see that work.
Gillmor: Right. But I think that, once you open the door to people thinking about what this means, it inevitably will lead to, first, user voting, which is what is enabled right now, but then, beyond that, the hacker community coming back in and basically running experiments up the flagpole and having some way…
I mean, what if somebody writes a little applet inside Google that essentially has some connectors that allow you to be able to plug in a variable and take data from a Twitter or from a Facebook, etc., etc.? That’s going to open the door to all the kinds of applications that people are going to be interested in doing with this stuff.
Gillmor: All right.
Lucovsky: It’s exciting. It’s a good move. It’s a great move.
Vizard: I kind of wish there was a Twitter lab to do that same idea with Twitter itself.
Gillmor: Yeah. Well, the idea would be…
Calacanis: Let’s have Twitter be stable first before we put any kind of a lab on top of it.
Gillmor: Well, seriously, Jason, I don’t think it needs to be stable in order to do this. That’s the whole idea of what I call Plan B, which is, if you’ve got a stable environment, like a Google or a Mesh or whatever, you can basically set up the infrastructure, and then if Twitter continues to fail, just start a stream of data that goes through these other clouds, and at some point, you drop the failed servers away.
Anderson: Right. But just don’t ask Twitter to have its own lab, because they can’t handle it.
Vizard: Not yet. Yeah.
Calacanis: There is a Twitter lab out there. In fairness, they have the most open API and probably one of the most successful APIs in the history of APIs, at least for consumer services. Look at all the Twitter clients out there, all the Twitter services, Twitter search services. I mean, part of what gave them such a headache was too many people using the API too inappropriately, like the instant messaging and stuff, and just they were getting crushed by API calls for a while. That was a big part of it.
Lucovsky: Yeah. That is definitely something to keep an eye on when you’re doing these APIs.
Gillmor: Well, the reason that I mentioned this is in the context of what Mark suggested, about how we’re not going to be able to anticipate, nor should be, how different services are going to be orchestrated together by people who are interested in mashing this stuff up.
Lucovsky: Yeah. I think that’s the most exciting thing that’s going on right now is that you do have so many APIs to pick and choose from and so many opportunities and all these places to host your ideas. So this next wave of innovation is going to be very interesting for us.
Gillmor: And given your evangelism role — and I’m not trying to pigeonhole it as that, but the idea that what’s good for the network is good for Google. The product manager for Gmail, Keith…
Lucovsky: Keith Coleman.
Gillmor: Yeah. He was specifically asked whether or not the engineers inside Google were going to be able to take advantage of outside, third-party APIs. And his response was “absolutely.” So this isn’t some sort of closed box that’s being enabled here.
Lucovsky: Oh, no. Yeah. Yeah, it’s very open.
Gillmor: All right. Well, we don’t have a lot of time with Mark today, so I’d like to go around the table and ask everybody to sort of ask one more question, and then we’ll wrap this up. Dan Farber, you still there?
Farber: Yes, I am. I’d like to get some insight as to where you see the APIs heading, in terms of new kinds of APIs that would further kind of unleash people to create new things and leverage the things that Google provides.
Lucovsky: Well, I think that if you look at how we’re doing our APIs right now, we’re basically opening up all of Google, bit by bit, programmatically. So we’ve opened up virtually all the search systems, where, two years ago, if you looked at Google and asked, “Do you have a search API?” the answer would be, “Well, we have the SOAP Search API. It’s limited to 1,000 queries a day, and it’s an XML web service.” That was it.
Now we have basically full API access to all Google search systems. We’ve done the same thing for feeds, and we just did the same thing for our machine-translation system. On the GData side, anywhere there is data stored in a Google application, there is read-write API access to it. So that means your calendar, your Picasa photo albums, your YouTube video channel.
So we basically have read-write access to all Google applications and read-only access to the Google-mediated public web. On top of that, what other things does Google do a lot with?
Well, we do a lot with web analytics and site monitoring. We have an API and a full system for that. About the only things that we haven’t opened up yet are our massive compute clusters for processing data out of band: our MapReduce and our large-scale file systems. I think you heard at Google I/O, in the App Engine talk, that the things on their radar screen are exactly those sorts of systems, our back-end processing systems.
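The open search access Lucovsky describes was exposed at the time through REST endpoints such as the AJAX Search API’s. A minimal sketch of building such a request follows; it only constructs the URL (no fetching), and the parameter names are recalled from memory rather than taken from the spec:

```python
from urllib.parse import urlencode

# Sketch of a request to the AJAX Search API's REST endpoint, the kind of
# programmatic search access Lucovsky describes. We only build the GET URL;
# issuing the request and parsing the JSON response are left out.
BASE = "http://ajax.googleapis.com/ajax/services/search/web"

def search_url(query, version="1.0"):
    """Build the GET URL for a web-search query against the REST endpoint."""
    return BASE + "?" + urlencode({"v": version, "q": query})

print(search_url("cloud computing"))
```

Because the interface is just HTTP plus URL-encoded parameters, any language with an HTTP client could call it, which is much of what made these APIs “incredibly web-friendly.”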
Calacanis: I have to say, web analytics is another one of the greatest products in the last two or three years. When I was at AOL, they forced Omniture on us. It is the worst product in the history of products. We are talking people who wrote Engadget, the largest blog in the world, with the most technically savvy people. They couldn’t figure out Omniture. It’s terrible.
Google Analytics is so stable and fast and beautiful, and they keep adding incredible features to it. It is the greatest analytics product out there and we have people calling us constantly. Omniture sales people call every other week. They don’t seem to know that — they don’t have a Salesforce account, so different people call every week.
They must be looking at their numbers, trying to figure out why anyone would pay for their product when Google Analytics is great, and they don’t really have an answer. Basically, their pitch has a gap.
Again, the friction of startup companies is going to go down to your idea and your execution of that idea. All the infrastructure stuff is going to go away. I think the economy in the United States is going to come roaring back, based upon this kind of technology and its impact on the workplace.
Us coming out of the recession, getting our debt back down, becoming cash-flow positive as a country, and getting out of the massive debt we ran up in Iraq is going to be based on these kinds of savings: cloud computing and all these free things.
I know this sounds insane.
Gillmor: Is it true that you are Obama’s technology advisor now, Jason?
Calacanis: I am not commenting on my relationship with Barack at this moment.
Gillmor: OK, very good.
Calacanis: We are going to have an announcement on Friday, and then we are going to have all my supporters on Saturday.
Gillmor: You better hurry up. Today is Friday.
Calacanis: Yes, do it tonight.
Gillmor: Just to keep you informed.
Calacanis: Who is on the call? Microsoft or Google?
Lucovsky: Ex-Microsoft and Google.
Calacanis: Who is this person?
Gillmor: This is Mark Lucovsky, who is the champion architect of Hailstorm, the late great version of what Google is now doing.
Lucovsky: Why don’t you call me the champion architect of Windows too? I spent way more time on that.
Gillmor: I [...] toward Dave Cutler and the NT team about half an hour ago. Now, stop complaining.
Calacanis: Wait, Mark, you are at Google now?
Calacanis: You left Microsoft for Google?
Lucovsky: Yeah, about four years ago.
Calacanis: And did the chair hit you on the head when Steve Ballmer threw it?
Lucovsky: He threw it at his conference room table.
Calacanis: Ah, that’s fine.
Gillmor: You destroyed my negotiations. I promised that we would not bring up the chair throwing.
Lucovsky: Oh, that’s right.
Calacanis: Are you serious? Are you the guy who had the chair thrown at him?
Calacanis: Oh, I didn’t think it was you! [laughs] I had no idea that was you!
Searls: He said we are not talking about that.
Lucovsky: It never happened.
Gillmor: We will edit that part out.
Calacanis: I love Ballmer. I love his passion, whatever. Two great companies, and I think Microsoft is going to do awesome in the near term. I think they are going to — nothing wakes up Microsoft like competition. It is the most competitive company out there.
It’s awesome for everybody in the ecosystem. Microsoft is going to get [...]
Gillmor: If they will. Somebody is breathing directly into the microphone.
Searls: I think it’s Steve Ballmer.
Gillmor: I think it is Doc. Doc Searls, do you have a question?
Searls: I had the mute button on. I guess the Darth effect was disabled.
Gillmor: Doc, you have a question?
Searls: Actually it is sort of a generalized question. On the one hand, I agree with Nick Carr that everything is becoming a utility. In many ways what I am hearing here is sort of the ghost of Nick talking about what he does in “The Big Switch,” that all these services move into the big cloud and more and more efficiencies happen in the cloud.
But I wonder about the dependencies. I wonder about whether or not there are some vulnerabilities in that, and that we don’t need to be even more distributed in some ways. I don’t know, I am just wondering how you guys think about that.
What are the vulnerabilities of dependency on big Google, big Yahoo!, big Microsoft backend?
Lucovsky: If I were starting out today, those big guys are exactly the guys that I would want to be dependent on to keep the services up and running, at speed, with reasonable terms, and that sort of thing. We can look at these big guys and say we are scared of them — they will screw us eventually.
Or we can look at them and say, hey, they have deep experiences in these large scale distributed systems, and I don’t have to worry about my app getting popular in, say, Asia or Germany or the Midwest or whatever. Whatever happens with my app, I’m confident that the APIs that I’m based on are there and can scale and can handle the load that I can generate.
So, I would be betting on the big guys as opposed to the upstart guys that might be in one location. If that data center goes down for a generator failure or a storm or whatever, I’m tossed. I want to bet on the big guys.
Gillmor: So you’re not worried?
Lucovsky: I’m not worried at all. In fact, the opposite. I love the fact that Amazon is out there, that Google is out there, that Yahoo is out there with APIs, and that Microsoft is out there with a good trajectory and a good track record. I think it’s absolutely the right way to go.
Gillmor: I think that more than two, or at least two, different competitors kind of eases that question.
Lucovsky: I don’t even think it’s that. I think best of breed in each particular area is really what I would be shooting for. For instance, what if the only API that Google did was search? And what if the only API that Facebook did was access to a social graph?
I would still look at that and say, hey, I’m going to build my app and it’s a hybrid. I am going to host it on EC2 because they have the virtualized free environment that I want. My storage is S3. I use Google search and I use Facebook social.
What’s wrong with that as an app architecture?
Gillmor: There is none as long as Facebook will let you do it.
Lucovsky: Well, I’m just saying: if those are the players, they are best of breed in certain spaces, and you build applications that leverage the unique strengths of each of these big, dominant companies that really have the systems in place to do these high-scale APIs.
I would bet on that rather than say, well, I’m going to bet on, say, the Microsoft social graph, where they might not have any experience in that space.
Gillmor: Certainly you have a point, but I also think that there is a certain amount of entropy that turns into momentum toward standardization around one or two players.
Lucovsky: Yes, and I think that happens almost naturally, sometimes.
Gillmor: Yeah, I agree, and I don’t think there’s anything wrong with it.
Vizard: If you follow that line of logic, what is the role of the client in any of this? Is there any requirement for any intelligence on the client? I mean, you built Windows and now you are working for Google. What’s going to be the role of the client in this cloud computing?
Lucovsky: Well, I believe that it’s certainly more than just host a browser. That’s me. I have a kind of warped way of thinking because of my Microsoft legacy. I think some of my co-workers at Google might say, no, all we need a client for is to run the browser.
But I think in reality — we are all developers, right? We write code. I write code using an editor that’s local to my client. If I’m a graphic designer, I produce content using an application that is on my edge system.
Pushing this compute to the edge, I think, is always going to be there, and the content people create might be distributed in the cloud. In some cases, you might actually do your content creation in the cloud. But I think that, realistically, we all use this edge computing way more than we think. Even my little iPod uses…
Gillmor: The point about the “way more than you think,” if the user thinks that they’re using it in the cloud, or if they don’t care where they’re using it, that becomes not only the perception but the actuality.
Gillmor: Like, for example, we’ve got this QuickCam, QUQIA. And it employs an algorithm — a pretty simple one — which is, if there’s not enough bandwidth currently to push the data to the server, it stores the whole file locally until it’s been uploaded for sure. And it will just basically do what Groove’s relay server does and what Mesh’s server system does, relay services. It just basically pushes it to the cloud behind the scenes.
So, from the user’s perspective, they may not think that they’re using a rich client, to your point, but, of course, they are.
Gillmor: And so, therefore, who’s making money on the client, I think, is another way of putting Mike Vizard’s question.
Vizard: No, that’s not my question. My question then becomes: what is the role of that client, and can I expect to see another level of intelligence on that client, where I can personalize it to each individual user, so that the client becomes this ultimate filter for what’s happening in the cloud?
Gillmor: Yeah. I’ll let Mark answer the question, but my perception is that all of those intelligence services can be in the cloud and then replicated down to the client as a caching mechanism so that it appears to be a richer experience. I don’t think that that’s a client application; I think it’s a server one. Mark?
Lucovsky: Yeah. I mean, I think the strength of the client, in a lot of ways, is really about presentation and interaction with the data or in original content creation. Like an email application is, in my mind, the canonical client application, that it works great as a web app, and I think it works awesome as a client app. Like, personally, I love Outlook. And I don’t share the same love for Gmail, but the thing that I love about Gmail is the accessibility. I can walk into an Apple Store on Fifth Avenue and check my mail if I didn’t have my iPhone with me, for instance.
So web-based apps are pervasive, and client-side apps have this difficulty of setup. If I could count on Outlook being there on every machine, and all I had to do to configure my Outlook was type in my URL, then I think that would be the ultimate. But that’s not how we tend to build that kind of client application. Outlook requires a profile, and it’s a pain in the ass to set up and configure, so it’s much easier to use a webmail client.
But if the Outlook skin was this client app, that all I had to do was type in the URL to my mail service, I think that that would represent a great hybrid client app.
Gillmor: And going back to Hailstorm, that’s what the APIs were, if you will: my mail, my inbox…
Gillmor: All of that stuff is what we see now in Gmail, not necessarily the richness of the synchronization experience, which is why I think that Mesh is a fairly good transitional strategy.
Lucovsky: Right. Right.
Gillmor: We’ve got one more question from Robert Anderson, and we’re done.
Anderson: Right. Mark, I wanted to ask you about OAuth, and about OpenID as well, although I don’t know if that directly fits in with the API work that you guys are doing. Is Google going to start supporting OAuth more generally?
Lucovsky: We do, yeah. I mean, that’s definitely our big direction. And it doesn’t really come up in the APIs that my team works on, but it comes up very heavily in our GData APIs, where we’re talking about authenticated read/write access to services. GData is almost a peer to the APIs that my team’s responsible for, but targeted against authenticated services that allow read/write. And OAuth is very heavy there.
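The OAuth signing that authenticated, read/write GData-style access relies on can be sketched briefly. This is a minimal, from-memory sketch of OAuth 1.0 HMAC-SHA1 signing, not Google’s actual client library; the keys and parameters are made-up example values, and a real request would also carry oauth_nonce, oauth_timestamp, and so on:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0-style HMAC-SHA1 signature (simplified sketch)."""
    enc = lambda s: quote(str(s), safe="")
    # Parameters are sorted and percent-encoded into one normalized string...
    norm = "&".join("%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
    # ...then joined with the HTTP method and URL into the signature base string.
    base_string = "&".join([method.upper(), enc(url), enc(norm)])
    # The signing key concatenates the consumer secret and token secret.
    key = "%s&%s" % (enc(consumer_secret), enc(token_secret))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign("GET", "http://example.com/feeds", {"oauth_consumer_key": "key"}, "secret")
print(sig)
```

The useful property for the kind of delegated access being discussed is that the service can verify the signature without the user ever handing their password to the third-party application.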
Anderson: OK. And what about OpenID? Anything to say about that?
Lucovsky: We’re definitely strong supporters of it. Blogger, I believe, supports OpenID on both sides. So yeah, I think that if we look back at some of the things that irked people on Hailstorm, it was the relationship to Passport. At the time, what were we talking about? Liberty Alliance? There wasn’t really a good distributed global authentication scheme. I think that moving towards that kind of system is definitely going to happen in our lifetimes, and maybe OpenID is the thing that emerges from that.
I think that what’s going to be crucial for OpenID, though, is for people to play both sides of it, not selfishly say, “Well, we’ll validate your credentials,” or “We’ll be a provider, but we’re not going to trust somebody else’s credentials.” So I think everybody has to get over that hump and say, “We don’t really care who the issuer is. We all play together, and we all play both sides.” I think that’s when it’s really going to take off.
Anderson: Well, I think the OpenID Foundation has to do some work there, because it’s not really clear why just some random service provider would know how to decide which of the different providers actually are providing credentials that have any value.
Lucovsky: Well, again, in the Hailstorm days, we had kind of invented that tiering, and we had this kind of confidence level and trust level: you’re a tier-A provider, or a tier-B provider, or a tier-C provider. I think that we have to get into that mode.
But if we look at the world today and said, “What if the only guys that were really providers were Microsoft, Yahoo, Amazon, AOL — the 25 largest, most trustworthy sites out there?” I mean, if that’s all we had, that would be infinitely better than where we’re at today.
And that would go a long, long way to say, “You can pick your provider of choice.” It could be your bank. It could be your credit card company. It could be one of these big Internet sites. But if we have that level of trust in the issuers, then I think we’re there. It’s basically game over — we have the global system that we need.
Lucovsky: It’s the same thing. When I go to North Carolina, for instance, to rent a car, the Avis over there trusts my California driver’s license, right? So I think that we need that level of certification, and we can do it at the bank, credit card, state level, or Internet provider level. But once we’re in that mode, I think we’re there.
Gillmor: All right. I’m going to leave it there. Mark, I want to, again, thank you very much for showing up and doing this, and I hope that you’ll come back in the near future as these technologies start to get more built up and more obviously deployed, as Jason was talking about.
Lucovsky: Sure, thanks.
Calacanis: And this is Jason. I apologize for making that comment about the thing that I wasn’t supposed to. I didn’t really know. Really, I’m sorry about that.
Lucovsky: That’s OK. [laughs] It’s not a big deal at all.
Gillmor: I never said anything at all about that. I was lying through my teeth.
Calacanis: Oh, OK. Well, anyway, I apologize for doing something I shouldn’t have, which is pretty much three times a show.
Gillmor: Well, you haven’t been on a lot, so you had more mistakes to make, relatively speaking.
Calacanis: Finally get a good guest, and I blow it.
Gillmor: All right. This is Steve Gillmor. I want to thank everybody who showed up, and especially those who didn’t. We’ll see you again on Monday. And there will be a News Gang Live broadcast today at 1:00. Thanks again, guys. Bye-bye.