



What Is Web 2.0

Design Patterns and Business Models for the Next Generation of Software

Tim O'Reilly

09/30/2005

[Note] A Chinese translation of this article appeared in 《互联网周刊》 (China Internet Weekly).


Oct. 2009: Tim O'Reilly and John Battelle answer the question of "What's next for Web 2.0?" in Web Squared: Web 2.0 Five Years On.


The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum's rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.


The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.


In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5 million citations in Google. But there's still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.


This article is an attempt to clarify just what we mean by Web 2.0.


In our initial brainstorming, we formulated our sense of Web 2.0 by example:

Web 1.0                          Web 2.0

DoubleClick                 -->  Google AdSense
Ofoto                       -->  Flickr
Akamai                      -->  BitTorrent
mp3.com                     -->  Napster
Britannica Online           -->  Wikipedia
personal websites           -->  blogging
evite                       -->  upcoming.org and EVDB
domain name speculation     -->  search engine optimization
page views                  -->  cost per click
screen scraping             -->  web services
publishing                  -->  participation
content management systems  -->  wikis
directories (taxonomy)      -->  tagging ("folksonomy")
stickiness                  -->  syndication

The list went on and on. But what was it that made us identify one application or approach as "Web 1.0" and another as "Web 2.0"? (The question is particularly urgent because the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means. The question is particularly difficult because many of those buzzword-addicted startups are definitely not Web 2.0, while some of the applications we identified as Web 2.0, like Napster and BitTorrent, are not even properly web applications!) We began trying to tease out the principles that are demonstrated in one way or another by the success stories of web 1.0 and by the most interesting of the new applications.

1. The Web As Platform

Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.




Figure 1 shows a "meme map" of Web 2.0 that was developed at a brainstorming session during FOO Camp, a conference at O'Reilly Media. It's very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core.


For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was "The web as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of it as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion.


Nonetheless, these pioneers provided useful contrasts because later entrants have taken their solution to the same problem even further, understanding something deeper about the nature of the new platform. Both DoubleClick and Akamai were Web 2.0 pioneers, yet we can also see how it's possible to realize more of the possibilities by embracing additional Web 2.0 design patterns.


Let's drill down for a moment into each of these three cases, teasing out some of the essential elements of difference.


Netscape vs. Google


If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. So let's start with a comparison of these two companies and their positioning.


Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.


In the end, both web browsers and web servers turned out to be commodities, and value moved "up the stack" to services delivered over the web platform.


Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see.


At bottom, Google requires a competency that Netscape never needed: database management. Google isn't just a collection of software tools, it's a specialized database. Without the data, the tools are useless; without the software, the data is unmanageable. Software licensing and control over APIs--the lever of power in the previous era--is irrelevant because the software never need be distributed but only performed, and also because without the ability to collect and manage the data, the software is of little use. In fact, the value of the software is proportional to the scale and dynamism of the data it helps to manage.


Google's service is not a server--though it is delivered by a massive collection of internet servers--nor a browser--though it is experienced by the user within the browser. Nor does its flagship search service even host the content that it enables users to find. Much like a phone call, which happens not just on the phones at either end of the call, but on the network in between, Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.


While both Netscape and Google could be described as software companies, it's clear that Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other companies that got their start in the 1980's software revolution, while Google's fellows are other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.

DoubleClick vs. Overture and AdSense


Like Google, DoubleClick is a true child of the internet era. It harnesses software as a service, has a core competency in data management, and, as noted above, was a pioneer in web services long before web services even had a name. However, DoubleClick was ultimately limited by its business model. It bought into the '90s notion that the web was about publishing, not participation; that advertisers, not consumers, ought to call the shots; that size mattered, and that the internet was increasingly being dominated by the top websites as measured by MediaMetrix and other web ad scoring companies.


As a result, DoubleClick proudly cites on its website "over 2000 successful implementations" of its software. Yahoo! Search Marketing (formerly Overture) and Google AdSense, by contrast, already serve hundreds of thousands of advertisers apiece.


Overture and Google's success came from an understanding of what Chris Anderson refers to as "the long tail," the collective power of the small sites that make up the bulk of the web's content. DoubleClick's offerings require a formal sales contract, limiting their market to the few thousand largest websites. Overture and Google figured out how to enable ad placement on virtually any web page. What's more, they eschewed publisher/ad-agency friendly advertising formats such as banner ads and popups in favor of minimally intrusive, context-sensitive, consumer-friendly text advertising.


The Web 2.0 lesson: leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

Not surprisingly, other web 2.0 success stories demonstrate this same behavior. eBay enables occasional transactions of only a few dollars between single individuals, acting as an automated intermediary. Napster (though shut down for legal reasons) built its network not by building a centralized song database, but by architecting a system in such a way that every downloader also became a server, and thus grew the network.


Akamai vs. BitTorrent

Like DoubleClick, Akamai is optimized to do business with the head, not the tail, with the center, not the edges. While it serves the benefit of the individuals at the edge of the web by smoothing their access to the high-demand sites at the center, it collects its revenue from those central sites.


BitTorrent, like other pioneers in the P2P movement, takes a radical approach to internet decentralization. Every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users. The more popular the file, in fact, the faster it can be served, as there are more users providing bandwidth and fragments of the complete file.


BitTorrent thus demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it. While Akamai must add servers to improve service, every BitTorrent consumer brings his own resources to the party. There's an implicit "architecture of participation", a built-in ethic of cooperation, in which the service acts primarily as an intelligent broker, connecting the edges to each other and harnessing the power of the users themselves.
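The contrast can be made concrete with a toy calculation (my own illustration, not BitTorrent's actual protocol, and with made-up bandwidth numbers): a central service has fixed capacity shared among its users, while a swarm's aggregate capacity grows with every new downloader.

```python
# Toy model: fixed server farm vs. a P2P swarm in which every
# downloader also uploads. Numbers are illustrative only.

SERVER_UPLOAD = 1000  # Mbps offered by a central, CDN-style service
PEER_UPLOAD = 1       # Mbps each swarm member contributes

def central_capacity(num_users: int) -> int:
    """Central servers: fixed capacity, however many users arrive."""
    return SERVER_UPLOAD

def swarm_capacity(num_users: int) -> int:
    """P2P swarm: every consumer brings upload bandwidth to the party."""
    return num_users * PEER_UPLOAD

for n in (100, 10_000):
    print(n, central_capacity(n), swarm_capacity(n))
```

Past the crossover point, each additional user makes the swarm serve faster, which is exactly the "gets better the more people use it" property.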

2. Harnessing Collective Intelligence

The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era appears to be this, that they have embraced the power of the web to harness collective intelligence:

  • Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.
  • Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net's users remains the core of its value.
  • Google's breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results.
  • eBay's product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company's role is as an enabler of a context in which that user activity can happen. What's more, eBay's competitive advantage comes almost entirely from the critical mass of buyers and sellers, which makes any new entrant offering similar services significantly less attractive.
  • Amazon sells the same products as competitors such as Barnesandnoble.com, and they receive the same product descriptions, cover images, and editorial content from their vendors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page--and even more importantly, they use user activity to produce better search results. While a Barnesandnoble.com search is likely to lead with the company's own products, or sponsored results, Amazon always leads with "most popular", a real-time computation based not only on sales but other factors that Amazon insiders call the "flow" around products. With an order of magnitude more user participation, it's no surprise that Amazon's sales also outpace competitors.
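The PageRank idea mentioned above can be sketched in a few lines (a simplified power-iteration version for illustration, not Google's production algorithm): a page's rank is fed by the ranks of the pages that link to it, so link structure, not document content, drives the ordering.

```python
# Minimal PageRank sketch: iterate until each page's rank reflects
# the ranks of the pages linking to it.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            # A page shares its rank equally among the pages it links to.
            for target in outlinks:
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
# "c" ends up ranked highest: it is linked from both "b" and "d".
```

The point of the toy graph is that "c" wins not because of anything on the page itself, but because of the collective linking activity around it.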

Now, innovative companies that pick up on this insight and perhaps extend it even further, are making their mark on the web:


  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond's dictum (originally coined in the context of open source software) that "with enough eyeballs, all bugs are shallow," to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
  • Sites like del.icio.us and Flickr, two companies that have received a great deal of attention of late, have pioneered a concept that some people call "folksonomy" (in contrast to taxonomy), a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags. Tagging allows for the kind of multiple, overlapping associations that the brain itself uses, rather than rigid categories. In the canonical example, a Flickr photo of a puppy might be tagged both "puppy" and "cute"--allowing for retrieval along natural axes generated by user activity.
  • Collaborative spam filtering products like Cloudmark aggregate the individual decisions of email users about what is and is not spam, outperforming systems that rely on analysis of the messages themselves.
  • It is a truism that the greatest internet success stories don't advertise their products. Their adoption is driven by "viral marketing"--that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn't Web 2.0.
  • Even much of the infrastructure of the web--including the Linux, Apache, MySQL, and Perl, PHP, or Python code involved in most web servers--relies on the peer-production methods of open source, in themselves an instance of collective, net-enabled intelligence. There are more than 100,000 open source software projects listed on SourceForge.net. Anyone can add a project, anyone can download and use the code, and new projects migrate from the edges to the center as a result of users putting them to work, an organic software adoption process relying almost entirely on viral marketing.
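The folksonomy model in the list above can be sketched as a tiny in-memory tag index (the function names here are my own, not any site's actual API): items carry any number of freely chosen, overlapping tags, and retrieval works along any combination of those axes.

```python
# Sketch of a folksonomy-style tag index: overlapping tags
# instead of a single rigid category per item.
from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of items carrying it

def tag(item, *tags):
    """Attach any number of user-chosen tags to an item."""
    for t in tags:
        tag_index[t.lower()].add(item)

def tagged(*tags):
    """Retrieve items matching every given tag."""
    return set.intersection(*(tag_index[t.lower()] for t in tags))

# The canonical example: a photo tagged both "puppy" and "cute".
tag("photo-123", "puppy", "cute")
tag("photo-456", "cute")
tagged("puppy", "cute")   # -> {"photo-123"}
tagged("cute")            # -> {"photo-123", "photo-456"}
```

Because an item can sit in many tag sets at once, no taxonomy decision is ever forced on the user, which is the contrast with directories the text draws.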

The lesson: Network effects from user contributions are the key to market dominance in the Web 2.0 era.


Blogging and the Wisdom of Crowds

One of the most highly touted features of the Web 2.0 era is the rise of blogging. Personal home pages have been around since the early days of the web, and the personal diary and daily opinion column around much longer than that, so just what is the fuss all about?


At its most basic, a blog is just a personal home page in diary format. But as Rich Skrenta notes, the chronological organization of a blog "seems like a trivial difference, but it drives an entirely different delivery, advertising and value chain."


One of the things that has made a difference is a technology called RSS. RSS is the most significant advance in the fundamental architecture of the web since early hackers realized that CGI could be used to create database-backed websites. RSS allows someone to link not just to a page, but to subscribe to it, with notification every time that page changes. Skrenta calls this "the incremental web." Others call it the "live web".


Now, of course, "dynamic websites" (i.e., database-backed sites with dynamically generated content) replaced static web pages well over ten years ago. What's dynamic about the live web are not just the pages, but the links. A link to a weblog is expected to point to a perennially changing page, with "permalinks" for any individual entry, and notification for each change. An RSS feed is thus a much stronger link than, say, a bookmark or a link to a single page.
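Mechanically, "subscribing" amounts to polling a feed and noticing which entries are new. A minimal sketch using Python's standard XML parser, with the feed XML inlined for illustration (a real aggregator would fetch the feed's URL on a schedule):

```python
# Poll a feed, remember which entry GUIDs have been seen,
# and report only entries that are new since the last poll.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Weblog</title>
  <item><guid>post-1</guid><title>First post</title></item>
  <item><guid>post-2</guid><title>Second post</title></item>
</channel></rss>"""

def new_entries(feed_xml, seen):
    """Return titles of items whose GUID has not been seen before."""
    fresh = []
    for item in ET.fromstring(feed_xml).iter("item"):
        guid = item.findtext("guid")
        if guid not in seen:
            seen.add(guid)
            fresh.append(item.findtext("title"))
    return fresh

seen = set()
new_entries(FEED, seen)  # -> ["First post", "Second post"]
new_entries(FEED, seen)  # -> [] (nothing changed since last poll)
```

The per-item `guid` is what makes the feed a "stronger link" than a bookmark: the subscriber can tell precisely which entries on the perennially changing page are new.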


RSS also means that the web browser is not the only means of viewing a web page. While some RSS aggregators, such as Bloglines, are web-based, others are desktop clients, and still others allow users of portable devices to subscribe to constantly updated content.


RSS is now being used to push not just notices of new blog entries, but also all kinds of data updates, including stock quotes, weather data, and photo availability. This use is actually a return to one of its roots: RSS was born in 1997 out of the confluence of Dave Winer's "Really Simple Syndication" technology, used to push out blog updates, and Netscape's "Rich Site Summary", which allowed users to create custom Netscape home pages with regularly updated data flows. Netscape lost interest, and the technology was carried forward by blogging pioneer Userland, Winer's company. In the current crop of applications, we see, though, the heritage of both parents.


But RSS is only part of what makes a weblog different from an ordinary web page. Tom Coates remarks on the significance of the permalink:


It may seem like a trivial piece of functionality now, but it was effectively the device that turned weblogs from an ease-of-publishing phenomenon into a conversational mess of overlapping communities. For the first time it became relatively easy to gesture directly at a highly specific post on someone else's site and talk about it. Discussion emerged. Chat emerged. And - as a result - friendships emerged or became more entrenched. The permalink was the first - and most successful - attempt to build bridges between weblogs.


In many ways, the combination of RSS and permalinks adds many of the features of NNTP, the Network News Protocol of the Usenet, onto HTTP, the web protocol. The "blogosphere" can be thought of as a new, peer-to-peer equivalent to Usenet and bulletin-boards, the conversational watering holes of the early internet. Not only can people subscribe to each others' sites, and easily link to individual comments on a page, but also, via a mechanism known as trackbacks, they can see when anyone else links to their pages, and can respond, either with reciprocal links, or by adding comments.


Interestingly, two-way links were the goal of early hypertext systems like Xanadu. Hypertext purists have celebrated trackbacks as a step towards two-way links. But note that trackbacks are not properly two-way--rather, they are really (potentially) symmetrical one-way links that create the effect of two-way links. The difference may seem subtle, but in practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn, which require acknowledgment by the recipient in order to establish a connection, lack the same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set watch lists--any user can subscribe to any other user's photostream via RSS. The object of attention is notified, but does not have to approve the connection.)


If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a kind of global brain, the blogosphere is the equivalent of constant mental chatter in the forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the brain, which is often unconscious, but is instead the equivalent of conscious thought. And as a reflection of conscious thought and attention, the blogosphere has begun to have a powerful effect.


First, because search engines use link structure to help predict useful pages, bloggers, as the most prolific and timely linkers, have a disproportionate role in shaping search engine results. Second, because the blogging community is so highly self-referential, bloggers paying attention to other bloggers magnifies their visibility and power. The "echo chamber" that critics decry is also an amplifier.


If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls "the wisdom of crowds" comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.


While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls "we, the media," a world in which "the former audience", not a few people in a back room, decides what's important.


3. Data is the Next Intel Inside


Every significant internet application to date has been backed by a specialized database: Google's web crawl, Yahoo!'s directory (and web crawl), Amazon's database of products, eBay's database of products and sellers, MapQuest's map databases, Napster's distributed song database. As Hal Varian remarked in a personal conversation last year, "SQL is the new HTML." Database management is a core competency of Web 2.0 companies, so much so that we have sometimes referred to these applications as "infoware" rather than merely software.


This fact leads to a key question: Who owns the data?


In the internet era, one can already see a number of cases where control over the database has led to market control and outsized financial returns. The monopoly on domain name registry initially granted by government fiat to Network Solutions (later purchased by Verisign) was one of the first great moneymakers of the internet. While we've argued that business advantage via controlling software APIs is much more difficult in the age of the internet, control of key data sources is not, especially if those data sources are expensive to create or amenable to increasing returns via network effects.


Look at the copyright notices at the base of every map served by MapQuest, maps.yahoo.com, maps.msn.com, or maps.google.com, and you'll see the line "Maps copyright NavTeq, TeleAtlas," or with the new satellite imagery services, "Images copyright Digital Globe." These companies made substantial investments in their databases (NavTeq alone reportedly invested $750 million to build their database of street addresses and directions. Digital Globe spent $500 million to launch their own satellite to improve on government-supplied imagery.) NavTeq has gone so far as to imitate Intel's familiar Intel Inside logo: Cars with navigation systems bear the imprint, "NavTeq Onboard." Data is indeed the Intel Inside of these applications, a sole source component in systems whose software infrastructure is largely open source or otherwise commodified.


The now hotly contested web mapping arena demonstrates how a failure to understand the importance of owning an application's core data will eventually undercut its competitive position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo!, and then Microsoft, and most recently Google, decided to enter the market, they were easily able to offer a competing application simply by licensing the same data.


Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon "embraced and extended" their data suppliers.
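The "embrace and extend" identifier move can be sketched in a few lines (my own construction for illustration; this is not Amazon's actual scheme): reuse the supplier's identifier where a product has one, and mint an identifier in an equivalent namespace where it doesn't.

```python
# Sketch of an ASIN-style identifier policy: the ISBN where present,
# otherwise a minted ID in an equivalent namespace. The "X" prefix
# and counter are hypothetical, chosen only for the example.
import itertools

_counter = itertools.count(1)

def assign_identifier(isbn=None):
    """Return the supplier's ISBN if the product has one, else mint one."""
    if isbn:
        return isbn
    return "X{:09d}".format(next(_counter))

assign_identifier("0596007973")  # a book: identifier equals its ISBN
assign_identifier(None)          # a non-book product gets its own ID
```

The effect is exactly the one described: the extended namespace stays compatible with the supplier's data, while the vendor, not the supplier, now controls the catalog's key.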


Imagine if MapQuest had done the same thing, harnessing their users to annotate maps and directions, adding layers of value. It would have been much more difficult for competitors to enter the market just by licensing the base data.


The recent introduction of Google Maps provides a living laboratory for the competition between application vendors and their data suppliers. Google's lightweight programming model has led to the creation of numerous value-added services in the form of mashups that link Google Maps with other internet-accessible data sources. Paul Rademacher's housingmaps.com, which combines Google Maps with Craigslist apartment rental and home purchase data to create an interactive housing search tool, is the pre-eminent example of such a mashup.


At present, these mashups are mostly innovative experiments, done by hackers. But entrepreneurial activity follows close behind. And already, one can see that for at least one class of developer, Google has taken the role of data source away from NavTeq and inserted themselves as a favored intermediary. We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.

The race is on to own certain classes of core data: location, identity, calendaring of public events, product identifiers and namespaces. In many cases, where there is significant cost to create the data, there may be an opportunity for an Intel Inside style play, with a single source for the data. In others, the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service.


For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users of communications systems, may all be legitimate contenders to build a network-wide identity database. (In this regard, Google's recent attempt to use cell phone numbers as an identifier for Gmail accounts may be a step towards embracing and extending the phone system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0 identity subsystem. In the area of calendaring, EVDB is an attempt to build the world's largest shared calendar via a wiki-style architecture of participation. While the jury's still out on the success of any particular startup or approach, it's clear that standards and solutions in these areas, effectively turning certain classes of data into reliable subsystems of the "internet operating system", will enable the next generation of applications.


A further point must be noted with regard to data, and that is user concerns about privacy and their rights to their own data. In many of the early web applications, copyright is only loosely enforced. For example, Amazon lays claim to any reviews submitted to the site, but in the absence of enforcement, people may repost the same review elsewhere. However, as companies begin to realize that control over data may be their chief source of competitive advantage, we may see heightened attempts at control.


Much as the rise of proprietary software led to the Free Software movement, we expect the rise of proprietary databases to result in a Free Data movement within the next decade. One can see early signs of this countervailing trend in open data projects such as Wikipedia, the Creative Commons, and in software projects like Greasemonkey, which allows users to take control of how data is displayed on their computer.

4. End of the Software Release Cycle

As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:


1. Operations must become a core competency. Google's or Yahoo!'s expertise in product development must be matched by an expertise in daily operations. So fundamental is the shift from software as artifact to software as service that the software will cease to perform unless it is maintained on a daily basis. Google must continuously crawl the web and update its indices, continuously filter out link spam and other attempts to influence its results, continuously and dynamically respond to hundreds of millions of asynchronous user queries, simultaneously matching them with context-appropriate advertisements.


It's no accident that Google's system administration, networking, and load balancing techniques are perhaps even more closely guarded secrets than their search algorithms. Google's success at automating these processes is a key part of their cost advantage over competitors.


It's also no accident that scripting languages such as Perl, Python, PHP, and now Ruby, play such a large role at web 2.0 companies. Perl was famously described by Hassan Schroeder, Sun's first webmaster, as "the duct tape of the internet." Dynamic languages (often called scripting languages and looked down on by the software engineers of the era of software artifacts) are the tool of choice for system and network administrators, as well as application developers building dynamic systems that require constant change.


2. Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often" in fact has morphed into an even more radical position, "the perpetual beta," in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time.


Real time monitoring of user behavior to see just which new features are used, and how they are used, thus becomes another required core competency. A web developer at a major online service remarked: "We put up two or three new features on some part of the site every day, and if users don't adopt them, we take them down. If they like them, we roll them out to the entire site."
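The rollout loop that developer describes can be sketched in a few lines. This is a minimal illustration, not any real service's system: the class, threshold, and feature names are all invented for the example.

```python
from collections import defaultdict

class FeatureMonitor:
    """Toy version of the loop above: expose a feature, count how many
    users actually use it, and keep or drop it based on adoption."""

    def __init__(self, keep_threshold=0.10):
        self.keep_threshold = keep_threshold  # minimum adoption rate to keep a feature
        self.exposures = defaultdict(int)     # users who saw the feature
        self.uses = defaultdict(int)          # users who actually used it

    def record_exposure(self, feature):
        self.exposures[feature] += 1

    def record_use(self, feature):
        self.uses[feature] += 1

    def adoption_rate(self, feature):
        seen = self.exposures[feature]
        return self.uses[feature] / seen if seen else 0.0

    def decision(self, feature):
        # "if users don't adopt them, we take them down"
        return "roll out" if self.adoption_rate(feature) >= self.keep_threshold else "take down"

monitor = FeatureMonitor()
for _ in range(100):
    monitor.record_exposure("tag-cloud")
for _ in range(25):
    monitor.record_use("tag-cloud")
print(monitor.decision("tag-cloud"))  # 25% adoption clears the 10% threshold
```

The point is not the arithmetic but the operational stance: the live site itself is the test harness, and measurement is built in from the start.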


Cal Henderson, the lead developer of Flickr, recently revealed that they deploy new builds up to every half hour. This is clearly a radically different development model! While not all web applications are developed in as extreme a style as Flickr, almost all web applications have a development cycle that is radically unlike anything from the PC or client-server era. It is for this reason that a recent ZDnet editorial concluded that Microsoft won't be able to beat Google: "Microsoft's business model depends on everyone upgrading their computing environment every two to three years. Google's depends on everyone exploring what's new in their computing environment every day."


While Microsoft has demonstrated enormous ability to learn from and ultimately best its competition, there's no question that this time, the competition will require Microsoft (and by extension, every other existing software company) to become a deeply different kind of company. Native Web 2.0 companies enjoy a natural advantage, as they don't have old patterns (and corresponding business models and revenue sources) to shed.

5. Lightweight Programming Models

Once the idea of web services became au courant, large companies jumped into the fray with a complex web services stack designed to create highly reliable programming environments for distributed applications.


But much as the web succeeded precisely because it overthrew much of hypertext theory, substituting a simple pragmatism for ideal design, RSS has become perhaps the single most widely deployed web service because of its simplicity, while the complex corporate web services stacks have yet to achieve wide deployment.


Similarly, Amazon.com's web services are provided in two forms: one adhering to the formalisms of the SOAP (Simple Object Access Protocol) web services stack, the other simply providing XML data over HTTP, in a lightweight approach sometimes referred to as REST (Representational State Transfer). While high value B2B connections (like those between Amazon and retail partners like ToysRUs) use the SOAP stack, Amazon reports that 95% of the usage is of the lightweight REST service.
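The lightweight approach really is this lightweight: a request is just a URL with query parameters, and the response is plain XML parsed with any standard library. The sketch below illustrates the pattern only; the endpoint, parameter names, and response shape are invented for the example, not Amazon's actual API.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# A REST-style call is just an HTTP GET with query parameters.
# Endpoint and parameters here are hypothetical.
params = {"Operation": "ItemSearch", "Keywords": "web 2.0"}
url = "http://example.com/onca/xml?" + urlencode(params)
print(url)

# The response is plain XML; no SOAP envelope, no WSDL, no toolkit.
sample_response = """
<ItemSearchResponse>
  <Item><Title>What Is Web 2.0</Title><Price>0.00</Price></Item>
  <Item><Title>Design Patterns</Title><Price>39.99</Price></Item>
</ItemSearchResponse>
"""
titles = [item.findtext("Title") for item in ET.fromstring(sample_response)]
print(titles)
```

Contrast this with a SOAP client, which typically requires a generated stub, a WSDL contract, and an envelope schema before the first call can be made. The asymmetry in ceremony explains the 95% figure.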


This same quest for simplicity can be seen in other "organic" web services. Google's recent release of Google Maps is a case in point. Google Maps' simple AJAX (Javascript and XML) interface was quickly decrypted by hackers, who then proceeded to remix the data into new services.


Mapping-related web services had been available for some time from GIS vendors such as ESRI as well as from MapQuest and Microsoft MapPoint. But Google Maps set the world on fire because of its simplicity. While experimenting with any of the formal vendor-supported web services required a formal contract between the parties, the way Google Maps was implemented left the data for the taking, and hackers soon found ways to creatively re-use that data.


There are several significant lessons here:


1. Support lightweight programming models that allow for loosely coupled systems. The complexity of the corporate-sponsored web services stack is designed to enable tight coupling. While this is necessary in many cases, many of the most interesting applications can indeed remain loosely coupled, and even fragile. The Web 2.0 mindset is very different from the traditional IT mindset!

2. Think syndication, not coordination. Simple web services, like RSS and REST-based web services, are about syndicating data outwards, not controlling what happens when it gets to the other end of the connection. This idea is fundamental to the internet itself, a reflection of what is known as the end-to-end principle.

3. Design for "hackability" and remixability. Systems like the original web, RSS, and AJAX all have this in common: the barriers to re-use are extremely low. Much of the useful software is actually open source, but even when it isn't, there is little in the way of intellectual property protection. The web browser's "View Source" option made it possible for any user to copy any other user's web page; RSS was designed to empower the user to view the content he or she wants, when it's wanted, not at the behest of the information provider; the most successful web services are those that have been easiest to take in new directions unimagined by their creators. The phrase "some rights reserved," which was popularized by the Creative Commons to contrast with the more typical "all rights reserved," is a useful guidepost.
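Syndication in its simplest form is just publishing structured data outwards and letting consumers do whatever they like with it. A minimal RSS 2.0 feed can be produced with a few lines of standard-library code; the channel details below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 feed: one <channel> with two <item> entries.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Weblog"
ET.SubElement(channel, "link").text = "http://example.com/"
ET.SubElement(channel, "description").text = "A syndicated feed"

for title, link in [("First post", "http://example.com/1"),
                    ("Second post", "http://example.com/2")]:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link

feed = ET.tostring(rss, encoding="unicode")
print(feed)
```

Once published, the publisher has no say over what happens next: the feed may be read in an aggregator, filtered, merged with other feeds, or remixed into something its author never imagined. That loss of control is the design, not a defect.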


Innovation in Assembly

Lightweight business models are a natural concomitant of lightweight programming and lightweight connections. The Web 2.0 mindset is good at re-use. A new service like housingmaps.com was built simply by snapping together two existing services. Housingmaps.com doesn't have a business model (yet)--but for many small-scale services, Google AdSense (or perhaps Amazon associates fees, or both) provides the snap-in equivalent of a revenue model.
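"Snapping together two existing services" often amounts to little more than a join on a shared key. The toy sketch below gestures at the housingmaps.com idea; both datasets are invented samples standing in for, say, a classifieds feed and a geocoding service.

```python
# Invented sample data standing in for two independent services.
listings = [  # e.g. from a classifieds feed
    {"id": "a1", "address": "123 Main St", "rent": 1500},
    {"id": "a2", "address": "456 Oak Ave", "rent": 1200},
]
geocodes = {  # e.g. from a mapping service
    "123 Main St": (37.77, -122.42),
    "456 Oak Ave": (37.80, -122.27),
}

# "Snapping together" the services is just a join on the address.
mashup = [dict(listing, coords=geocodes[listing["address"]])
          for listing in listings]
for entry in mashup:
    print(entry["address"], entry["coords"])
```

The value added is entirely in the combination: neither source on its own tells you where the affordable apartments are on a map.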


These examples provide an insight into another key web 2.0 principle, which we call "innovation in assembly." When commodity components are abundant, you can create value simply by assembling them in novel or effective ways. Much as the PC revolution provided many opportunities for innovation in assembly of commodity hardware, with companies like Dell making a science out of such assembly, thereby defeating companies whose business model required innovation in product development, we believe that Web 2.0 will provide opportunities for companies to beat the competition by getting better at harnessing and integrating services provided by others.


6. Software Above the Level of a Single Device

One other feature of Web 2.0 that deserves mention is the fact that it's no longer limited to the PC platform. In his parting advice to Microsoft, long time Microsoft developer Dave Stutz pointed out that "Useful software written above the level of the single device will command high margins for a long time to come."

Of course, any web application can be seen as software above the level of a single device. After all, even the simplest web application involves at least two computers: the one hosting the web server and the one hosting the browser. And as we've discussed, the development of the web as platform extends this idea to synthetic applications composed of services provided by multiple computers.


But as with many areas of Web 2.0, where the "2.0-ness" is not something new, but rather a fuller realization of the true potential of the web platform, this phrase gives us a key insight into how to design applications and services for the new platform.


To date, iTunes is the best exemplar of this principle. This application seamlessly reaches from the handheld device to a massive web back-end, with the PC acting as a local cache and control station. There have been many previous attempts to bring web content to portable devices, but the iPod/iTunes combination is one of the first such applications designed from the ground up to span multiple devices. TiVo is another good example.


iTunes and TiVo also demonstrate many of the other core principles of Web 2.0. They are not web applications per se, but they leverage the power of the web platform, making it a seamless, almost invisible part of their infrastructure. Data management is most clearly the heart of their offering. They are services, not packaged applications (although in the case of iTunes, it can be used as a packaged application, managing only the user's local data.) What's more, both TiVo and iTunes show some budding use of collective intelligence, although in each case, their experiments are at war with the IP lobby's. There's only a limited architecture of participation in iTunes, though the recent addition of podcasting changes that equation substantially.


This is one of the areas of Web 2.0 where we expect to see some of the greatest change, as more and more devices are connected to the new platform. What applications become possible when our phones and our cars are not consuming data but reporting it? Real time traffic monitoring, flash mobs, and citizen journalism are only a few of the early warning signs of the capabilities of the new platform.


7. Rich User Experiences

As early as Pei Wei's Viola browser in 1992, the web was being used to deliver "applets" and other kinds of active content within the web browser. Java's introduction in 1995 was framed around the delivery of such applets. JavaScript and then DHTML were introduced as lightweight ways to provide client side programmability and richer user experiences. Several years ago, Macromedia coined the term "Rich Internet Applications" (which has also been picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of Flash to deliver not just multimedia content but also GUI-style application experiences.


However, the potential of the web to deliver full scale applications didn't hit the mainstream till Google introduced Gmail, quickly followed by Google Maps, web based applications with rich user interfaces and PC-equivalent interactivity. The collection of technologies used by Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm Adaptive Path. He wrote:


"Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

  • standards-based presentation using XHTML and CSS;
  • dynamic display and interaction using the Document Object Model;
  • data interchange and manipulation using XML and XSLT;
  • asynchronous data retrieval using XMLHttpRequest;
  • and JavaScript binding everything together."

AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!, 37signals' applications basecamp and backpack, as well as other Google applications such as Gmail and Orkut. We're entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.


Interestingly, many of the capabilities now being explored have been around for many years. In the late '90s, both Microsoft and Netscape had a vision of the kind of capabilities that are now finally being realized, but their battle over the standards to be used made cross-browser applications difficult. It was only when Microsoft definitively won the browser wars, and there was a single de-facto browser standard to write to, that this kind of application became possible. And while Firefox has reintroduced competition to the browser market, at least so far we haven't seen the destructive competition over web standards that held back progress in the '90s.


We expect to see many new web applications over the next few years, both truly novel applications, and rich web reimplementations of PC applications. Every platform change to date has also created opportunities for a leadership change in the dominant applications of the previous platform.


Gmail has already provided some interesting innovations in email, combining the strengths of the web (accessible from anywhere, deep database competencies, searchability) with user interfaces that approach PC interfaces in usability. Meanwhile, other mail clients on the PC platform are nibbling away at the problem from the other end, adding IM and presence capabilities. How far are we from an integrated communications client combining the best of email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of web applications? The race is on.


It's easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address book would treat the local address book on the PC or phone merely as a cache of the contacts you've explicitly asked the system to remember. Meanwhile, a web-based synchronization agent, Gmail-style, would remember every message sent or received, every email address and every phone number used, and build social networking heuristics to decide which ones to offer up as alternatives when an answer wasn't found in the local cache. Lacking an answer there, the system would query the broader social network.
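The lookup chain described above (local cache, then synchronization agent, then the broader network) can be sketched as a simple fallback sequence. The class and sample data below are invented for illustration; a real agent would sync with a mail service rather than hold dictionaries in memory.

```python
class AddressBook:
    """Sketch of the cache-then-network lookup: each source is tried
    in order, from most local to most remote."""

    def __init__(self, local_cache, sync_agent, social_network):
        self.local_cache = local_cache          # contacts the user explicitly saved
        self.sync_agent = sync_agent            # addresses seen in mail traffic
        self.social_network = social_network    # the broader network, queried last

    def lookup(self, name):
        for source in (self.local_cache, self.sync_agent, self.social_network):
            if name in source:
                return source[name]
        return None  # not known anywhere

book = AddressBook(
    local_cache={"alice": "alice@example.com"},
    sync_agent={"bob": "bob@example.com"},        # seen in sent/received mail
    social_network={"carol": "carol@example.com"})

print(book.lookup("bob"))    # resolved by the sync agent, not the local cache
print(book.lookup("carol"))  # resolved only by the broader network
```

The interesting design choice is that the local store is demoted to a cache: the authoritative record of who you know lives in the observed data, not in what you typed in by hand.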


A Web 2.0 word processor would support wiki-style collaborative editing, not just standalone documents. But it would also support the rich formatting we've come to expect in PC-based word processors. Writely is a good example of such an application, although it hasn't yet gained wide traction.


Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates how the web can be used to deliver software as a service, in enterprise scale applications such as CRM.


The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0. Companies that succeed will create applications that learn from their users, using an architecture of participation to build a commanding advantage not just in the software interface, but in the richness of the shared data.


Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we've highlighted some of the principal features of Web 2.0. Each of the examples we've explored demonstrates one or more of those key principles, but may miss others. Let's close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models


The next time a company claims that it's "Web 2.0," test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.



Tim O'Reilly
O’Reilly Media, Inc., tim@oreilly.com
President and CEO
