This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Network Effects, Information Cascades, and the Primary

Recently we've been talking about information cascades and, more lately, network effects, mostly in the context of tangible consumer goods and services. A more intangible example of both phenomena is playing out right now, across the country. Although the first primary is more than eight months away, fund-raising has already started. This article in the Economist discusses the current state of the so-called 'money primary,' the race to amass funds for the upcoming campaign.

We can think of the money primary in terms of both network effects and information cascades. Someone who donates to a particular candidate’s campaign is putting some value on the election of that candidate. That is, they value the election of that candidate over another candidate that they did not donate money to. The value might be purely ideological, or it might be more crude–that isn’t particularly important here–but it’s clear that there is some value, or it wouldn’t be worth donating money. The game is a little trickier because donating money does not guarantee that your candidate gets elected. If your candidate loses, the money is lost, and essentially wasted. A savvy donor will therefore want to donate more to a candidate who is likely to win.

This is where network effects and information cascades both come into play. If we make the simplifying assumption that a candidate with more money is more likely to win (an assumption that may have serious flaws, as the Economist article mentions, but is probably broadly accurate), then there is an immediate network effect. If everyone else is donating money to candidate A, then it makes more sense for me to donate to A as well, even if I prefer B. Assuming that I attach a positive value to both candidates (for instance, if both A and B are from one party, and I want that party to win regardless of the candidate), then it makes sense to donate to A because A already has a lot of money. Donating to B is probably a lost cause, because B still won't have much money compared to A.
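To make that concrete, here is a tiny numerical sketch (all numbers invented, and the framing is mine rather than the article's): treat a donation as a bet that only pays off if the recipient wins, and the well-funded front-runner beats even a mildly preferred underdog.

# Two candidates from the party I support; I slightly prefer B, but A has the money.
# All numbers are made up for illustration.
p_win = {"A": 0.7, "B": 0.3}      # A's war chest makes A the likely winner
value = {"A": 400.0, "B": 500.0}  # my (subjective) value of each candidate winning
donation = 100.0

# Treat the donation as a bet that only "pays off" if that candidate wins.
for candidate in p_win:
    expected = p_win[candidate] * value[candidate] - donation
    print(candidate, expected)
# A: 0.7 * 400 - 100 = 180.0
# B: 0.3 * 500 - 100 =  50.0

Even though B is the candidate I prefer, donating to A has the higher expected payoff, which is exactly the self-reinforcing pull toward the front-runner described above.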

This is also a place where information cascades can be seen. If we think that other donors have some private information about which candidate is likely to win, then seeing a lot of money going to A will make us think that many donors believe A will win. This means that donating to A is probably a good bet. So the perception that a candidate is 'strong' can lead to it becoming true.
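Here is a rough sketch of the signal-counting argument behind that cascade, in the style of the urn-game calculation from class (my own illustration with made-up numbers, not anything from the Economist article): each earlier donation is read as a noisy private signal about which candidate will win, and a lopsided donation pattern quickly swamps any single donor's own information.

q = 0.6  # assumed chance that any one donor's private signal is correct

def posterior_A_wins(donations_to_A, donations_to_B, prior=0.5):
    """Probability that A wins, given the observed donations, if each donation
    is treated as an independent noisy signal of quality q."""
    like_A = q ** donations_to_A * (1 - q) ** donations_to_B
    like_B = (1 - q) ** donations_to_A * q ** donations_to_B
    return prior * like_A / (prior * like_A + (1 - prior) * like_B)

print(posterior_A_wins(5, 1))  # about 0.84: heavy giving to A makes A look strong
print(posterior_A_wins(1, 1))  # 0.5: balanced giving tells us nothing

Once the observed imbalance is large enough, my own private hunch about B barely moves this number, so I follow the crowd, and the perception of strength feeds itself.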

Last class we also discussed how network effects in a system can cause outcomes that are not socially optimal. I’m not touching that one.

Posted in Topics: General, social studies

View Comment (1) »

Google’s New TV Advertising System

http://adage.com/digital/article?article_id=115896

http://business.timesonline.co.uk/tol/business/industry_sectors/media/article1604806.ece

http://www.businessweek.com/technology/content/apr2007/tc20070404_033516.htm?chan=technology_technology+index+page

In class we talked about Google's methods of selling ads. Google sells ads based on the keywords users type into its search engine to find content they're interested in. Using this method, Google is able to show the most pertinent ads to the right audience, producing a higher click-through rate and generating more revenue for itself. Google has revolutionized the way ads are sold on the internet, and many rival companies, including Microsoft and Yahoo, are beginning to alter their own search engines to become more like Google's. However, the "$170 billion global television market still eclipses online advertising spending, which is expected to be worth $31 billion this year." Google hopes that its recent joint venture with EchoStar, the American satellite TV network, to create a new TV advertising system will tap into this massive market and generate an effect as big as the one it initially had in the online world.

The classical method of selling TV ads lets advertisers bid for a spot between shows, with slots during the most popular shows costing the most. In this method "a marketer is given the overall rating for the program in which his or her ads ran", and the advertisers have no idea if the audience "tuned out" for the commercial break, or whether their commercials reached the intended audience. In its new TV advertising system, Google will "report a rating for that specific ad. If a program generated 1 million viewers, but 50,000 tuned out before the commercial break commenced, Google would only report an audience of 950,000 for the ad." This is analogous to the pay-per-click model we talked about in class: the advertiser only pays for every click, or in this case for every commercial actually seen by a viewer, which increases the confidence of the advertiser and the revenue of the system. In addition, Google also offers the "first widely available second by second measurement of each commercial," letting advertisers know exactly when viewers "tuned away" from the commercial, and hence whether or not the commercial is effective.
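As a minimal sketch of the arithmetic behind that per-ad rating (my own illustration; the CPM rate and data fields are invented, not Google's or EchoStar's actual figures):

# Hypothetical set-top-box numbers for one program and one commercial break.
program_audience = 1_000_000   # households watching the program
tuned_out_before_ad = 50_000   # households that left before the break began

# The rating reported (and billed) for the ad covers only households that were
# still watching when the commercial actually aired.
ad_audience = program_audience - tuned_out_before_ad
print(ad_audience)               # 950000

# With a cost-per-thousand (CPM) price, the advertiser pays for the ad audience,
# not the program audience.  The $12 CPM here is an invented rate.
cpm = 12.00
print(ad_audience / 1000 * cpm)  # 11400.0

The second-by-second measurement works the same way, just with a finer-grained count of who is still watching at each moment of the commercial.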

Google also brings its second-price auction to this new system. Advertisers submit a closed, one-time bid for a particular time slot during the day on a specific channel, presumably a time and channel most pertinent to what they are trying to advertise. The marketers will "only know what the winning bid price was if they indeed won the auction, and all bidding will be based on household cost-per-thousand viewers." Each marketer will then have their own independent private value, and we can use the same methods we learned in class to find the market-clearing price for the advertisements. The prices are then set by the auction.
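For reference, here is a minimal sketch of a sealed-bid second-price auction for a single slot (a simplification: the actual Google/EchoStar mechanics are more involved, and the advertiser names and CPM bids below are invented):

def second_price_auction(bids):
    """Sealed-bid, single-slot second-price auction.
    bids maps advertiser -> CPM bid (dollars per thousand households).
    The highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Invented bids for one time slot on one channel:
bids = {"acme_soda": 9.50, "big_autos": 12.00, "local_gym": 7.25}
print(second_price_auction(bids))  # ('big_autos', 9.5)

One nice property we saw in class: in a single-item second-price auction, bidding your true private value is a dominant strategy, which is presumably part of the appeal for Google here.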

Skeptics are less sure this move by Google will succeed. One large reason for Google's success in the online world is its ability to efficiently target the individuals most interested in the ads it's selling. Offline, however, "Google lacks the same extensive targeting ability." The advent of devices such as the Apple TV "will help advertisers use information collected by Web users to serve more relevant ads both online and on TV." That is where Google's strengths lie, and what it should be pursuing in the future.

Only time will tell whether Google's new TV advertising system is a boom or a bust; until then, we can sit back, relax, and perhaps read a little Google News.

Posted in Topics: Education

No Comments

Network Externalities in Microsoft Antitrust Suit

Is Microsoft a Monopolist?

This article concerning the antitrust suit filed against Microsoft in the late '90s is essentially a summary of the various arguments that were made. A portion of the article discusses how some of Microsoft's early-adoption practices were perceived as monopolistic, especially the inclusion of the Internet Explorer web browser in a release of Windows 95.

This issue is interesting in relation to our study of network externalities because the US attorneys argued that Microsoft used its position as the leader in operating systems to essentially force early adoption of other programs such as Internet Explorer, thus pushing the use of those programs above the critical points necessary for them to become market leaders.

Since it is not really possible to tell whether a given product was adopted because it is actually the best product or simply because of network externalities, there will probably never be a satisfactory conclusion. Regardless, this lawsuit demonstrates some of the issues that come along with network externalities, as well as the problems with using these principles for legal purposes.

Posted in Topics: Education

No Comments

Value and Growth of Networked Information: Aggregation, Search in a Semantic Web, and HCI on Graphs

As pointed out by beefcake [link], finding relevant content often limits users on the internet more than a lack of content does. Of course, companies like Google have profited greatly from search: utilizing the link structure of the internet to help users navigate. Indeed, web growth these days depends heavily on the aggregation, search, and classification of cheap content. The age of "Content is King" on the web is gone.

Aggregation is the process of generating powerful hubs on the internet; hubs link to good content. Many Web 2.0 sites can attribute their rise to fame to their success in pointing to lots of good content: Digg, Flickr, and LiveJournal are examples. Each individual news post on Digg [alexa] has limited value (many are quite useless), but the sum total of all the posts can be very interesting. LiveJournal collects all the posts of a user's friends and displays (and links to) them in one place (making it easier than ever to stalk your high school buddies). Then there's Flickr for photos, iTunes for music, and YouTube for video. We can expect more aggregation to come!

What's more, good aggregators have made it easier than ever to find good content: aggregation promotes better search. Digg can let more popular news posts show up more often in searches because it tracks user behavior. YouTube can assume two videos are similar if they're posted by the same person, and thus put both videos up in relevant searches. Amazon's business model is based heavily on recommendations [collaborative filtering]; it suggests books that might be interesting to the user based on other users' browsing and buying patterns. Amazon also has information about which books have similar content.

In all these cases, web sites are adding value by including information about the relationships (links) between pages. Search engines such as Google face difficulty indexing and ranking the billions of sites on the internet: there's simply not enough information to tell which sites are good and relevant. Websites like Digg and YouTube can often do much better search (for their respective content): Digg can infer a link between two articles just from the fact that users who view one also view the other, e.g. "Apple Sells Product; Jobs h4x World" and "Microsoft Loses 50% Stock Share; Praise All Mighty SteveJ". Since computers cannot yet reliably understand language, these relationships are critical for better search on the internet.
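A toy version of that co-viewing inference might look like the following (this is my own sketch of the general idea, not Digg's or YouTube's actual algorithm, and the view logs are made up):

from collections import defaultdict
from itertools import combinations

# Made-up view logs: which articles each user looked at.
views = {
    "alice": {"apple_sells_product", "ms_stock_drop"},
    "bob":   {"apple_sells_product", "ms_stock_drop", "cute_cats"},
    "carol": {"apple_sells_product", "ms_stock_drop"},
    "dave":  {"cute_cats"},
}

# Count how often each pair of articles is viewed by the same user; a high
# co-view count acts as an implicit "link" between the two articles.
co_views = defaultdict(int)
for articles in views.values():
    for a, b in combinations(sorted(articles), 2):
        co_views[(a, b)] += 1

print(max(co_views, key=co_views.get))
# ('apple_sells_product', 'ms_stock_drop'): co-viewed three times, so a search
# or recommendation engine can treat the two as related without parsing English.

Amazon-style item-to-item collaborative filtering is essentially a scaled-up, normalized version of this kind of counting.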

Indeed, web inventor Tim Berners-Lee has been pushing for web sites to extensively 'describe' their content in machine-readable form. This so-called "Semantic Web" would allow search engines to know exactly what the content of a page was about without trying to parse, say, English. Although the web as a whole has been slow to adopt this concept, it's clear that individual web sites are doing it. News sites like Digg (or Slashdot) require users to categorize their news posts: a form of metadata [markup]. iTunes requires artists to send extensive information about their songs and tracks.

Thus, web pages now almost always link to many pages, are linked to by many pages, or otherwise have some relationship to other content (such as appearing as an Amazon.com recommended book). Still, there's plenty of opportunity for finding more links between content by examining user behavior. A site such as Digg could rank the user comments posted under an article, showing comments that suit the tastes of the user (or their personal threshold for shocked incredulity). Similarly, YouTube could target videos based on past browsing, in very much the way that Pandora.com finds music according to a user's past listening patterns.

Thus there's an opportunity to treat every user, page, and comment as a node, and every link, Facebook poke, and user page view as an edge (each with different weights, too!). With so much relational information and a seemingly boundless supply of web pages, we can expect to have more and more of the world at our fingertips, very soon.

Posted in Topics: General, Mathematics, Technology

No Comments

Facebook Rode Network Effects to the Top

Why are a handful of social networking sites so successful while most attract far fewer users? Right now, MySpace and Facebook firmly control the online social networking market, while all other networking sites combined attract less total traffic. According to Hitwise, MySpace has about 81% of social networking traffic while Facebook holds 10%. First-mover advantage was not the deciding factor for these shares, as predecessor Friendster was once on top but has since fallen to a level of marginal competition.

Most people would rather belong to the same social network as all their friends, so the payoff for joining and maintaining a profile on a particular social network increases as that network grows. Most people have a limited amount of time to spend browsing social networking sites, so it is reasonable that they would want to join the site with the largest payoff for time spent online. Users get one-stop-shopping for friends in the largest social networks, so these network effects predict that the most popular networks should balloon with users while others atrophy.

So, what influences put Facebook and MySpace at the top? Fred Stutzman, a Ph.D. student in Information Science at UNC, blogs about the possible causes of Facebook's early success in his article, "Facebook Critical Success Factors." Network size must be "jump-started" past a critical tipping point at which the payoff for joining the network is greater than the cost of joining (in time, effort, and resources). Since the value of joining would have been essentially zero for the first people on Facebook if nobody else joined, it seems that these early adopters predicted that many of their friends would soon join the service, and signed on.
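A stylized version of that tipping-point calculation (my own sketch with invented numbers, not Stutzman's model) might look like this: the benefit of joining grows with how many of your friends are already on the site, and joining only makes sense once that benefit exceeds the fixed cost of signing up and keeping a profile.

def payoff(fraction_of_friends_on_site, value_per_friend=10.0, num_friends=50):
    """Stylized benefit of joining: proportional to how many of your friends
    are already on the network (all numbers invented)."""
    return value_per_friend * num_friends * fraction_of_friends_on_site

cost_of_joining = 100.0  # time, effort, and attention spent on yet another site

for x in (0.05, 0.20, 0.50):
    decision = "join" if payoff(x) > cost_of_joining else "stay out"
    print(f"{x:.0%} of friends on site -> payoff {payoff(x):.0f} -> {decision}")
# 5%  of friends on site -> payoff 25  -> stay out
# 20% of friends on site -> payoff 100 -> stay out  (the tipping point is just past here)
# 50% of friends on site -> payoff 250 -> join

Below the tipping point nobody rational joins, which is why the early adopters' bet that their friends would follow mattered so much.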

Friendster reached its critical tipping point due to its first-mover advantage, but it was soon plagued with technical problems that disillusioned many users. More and more people became unwilling to load the site as load times increased. Friendster did a poor job of managing network load, so negative network externalities such as congestion plagued the site. Its user base dwindled, causing a void that was ready to be filled by Facebook.

A new group of college undergrads sprang up who had never heard of Friendster. Stutzman cites this untapped market as Facebook’s first critical success factor. Facebook became their first social networking experience, and they gravitated to the site because it appealed to their trusted college communities.

We’ve seen that general network effects increase the social utility for all users when someone new joins the network. Paradoxically, a sense of exclusion and entitlement can serve to make a network more desirable, even when adding more users increases the amount of social information available. People gain some satisfaction from smaller, private social groups, organizations, or services. In some cases, the more exclusive a group is, the more excluded people want to join the group.

There is some value to be gained from excluding others. In online communities, smaller social circles can foster a greater level of trust among users. In a network where anybody can join, many users feel that their personal information has a greater chance of falling into the hands of someone they do not trust.

Facebook managed this psychology well with its use of gated networks. You could still feel like you were joining a special club (your prestigious university’s network) while gaining the benefit of having friends from other networks join the service. Stutzman argues that this handling of privacy was another feature that vaulted Facebook to the top.

Not only did Facebook arrive on the market at the perfect time and appeal to the psychology of college undergrads, it also delivered speed and features that far surpassed Friendster's. Stutzman writes that Facebook's dual use as a social networking site and a college directory contributed directly to its success.

So, the popularity of Facebook skyrocketed among college students as network effects and information cascade took over. Even people who didn’t have an Instant Messaging screen name before began signing up for Facebook profiles. People without Facebook found themselves out of the social loop as communication about events and people switched from word-of-mouth to Facebook announcements.

As users create a complex web of friendships, event postings, communication, and photo uploads on Facebook, they effectively become locked in to the service. According to the Wikipedia article on network externalities, such de facto lock-in can result in “provider complacency” - since there are few major competitors, Facebook is free to make changes that benefit it at the expense of users.

In a competitive market, users could simply switch providers if an unpopular change occurred, but since Facebook has network lock-in, the usual result of change is a vocal complaint that gradually dies down. An example of such an unpopular change was the Facebook “news feed” feature that broadcasted updates to friends’ profiles. This feature greatly benefits Facebook, because it increases site addiction by encouraging users to log on frequently and check the feed. These frequent log-ons translate to advertisement revenue. However, users were angered because they felt that their information was being beamed out across the network. Soon after more advanced privacy measures were implemented, many more users began restricting the amount of information available to others.

Posted in Topics: Education

Comments (2) »

Digg.com as a game and manipulations of information cascades

Digg.com

“Digg is a community-based popularity website with an emphasis on technology and science articles, recently expanding to a broader range of categories such as politics and entertainment. It combines social bookmarking, blogging, and syndication with a form of non-hierarchical, democratic editorial control.

News stories and websites are submitted by users, and then promoted to the front page through a user-based ranking system. This differs from the hierarchical editorial system that many other news sites employ.” (The above is from the Wikipedia Definition of Digg and has been added to give a point of reference to those not familiar with Digg and its applications)

Digg As A Game

Digg functions democratically, allowing any and all users to vote on sites and attempt to "digg up" an article. Typically an article is brought to the top based on the quality of its content; however, the algorithm also takes into account how frequently the article is linked to from other "dugg" articles, as well as overall views.

Several interesting social phenomena occur in the Digg.com setting which place interesting twists on the analysis of information cascades that we have done in class. Primarily, the existence of "top diggers" drastically affects the principles learned thus far about information cascades. "Top diggers" are individuals who, through an alternate algorithm involving the popularity of articles they have "dugg" and their experience, have added weight to their votes. In this setting, if we view the sequence of votes as a network of information, votes [guesses] by several "top diggers" can cause a cascade that would not normally occur. Because of conformity and the resulting information cascade, it is very easy for "top diggers" to trigger cascades for sub-par articles through their persistence or the weight of their votes. The probabilities that the other voters see still favor voting over not voting, even when voting is not the preferred or "rational" action, as described by the author.

These probabilities, which we learned in class can be used to determine the subsequent guesses of individuals in the network, are heavily skewed by the extra weight of certain votes [guesses] over others. Unlike the situation we saw in class, the probability that an article is worth a vote will always be the greatest, conditioned on the votes [guesses] of the "top diggers." As a result, as stated before, it will always be in the best interest of the other voters to vote for the article, thereby causing the resulting cascade.
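To see how weighted votes can force such a cascade, here is a rough sketch in the spirit of the class calculation (the weights and signal quality are invented, and this is standard Bayesian bookkeeping, not Digg's actual algorithm): each vote is read as a noisy signal that the article is good, and a "top digger" vote counts like several ordinary ones.

from math import log

def log_odds_article_is_good(votes, q=0.6):
    """votes is a list of (direction, weight) pairs: direction is +1 for a
    'digg' and -1 for a skip/bury; a weight of w counts like w independent
    private signals, each correct with probability q.
    Returns the log-odds that the article is good, starting from a 50/50 prior."""
    return sum(direction * weight * log(q / (1 - q))
               for direction, weight in votes)

# Two "top diggers" (weight 3) digg a so-so article; three ordinary users'
# negative signals (weight 1) cannot outweigh them.
votes = [(+1, 3), (+1, 3), (-1, 1), (-1, 1), (-1, 1)]
print(log_odds_article_is_good(votes))  # positive, so the next rational voter diggs too

With unweighted votes the same tally (two for, three against) would come out negative, which is precisely the skew the author describes.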

This article is a very interesting analysis of information cascades in which some participants carry additional information and weight. It also touches upon other models of information cascades, specifically "The Urn Game," which is the basis for the Digg.com analysis.

Posted in Topics: Education

No Comments

Ad Exchange

NY Times Article

DoubleClick plans to announce that it will set up a NASDAQ-like exchange for the buying and selling of digital ads. Currently the company serves both buyers and sellers of digital ads, monitoring things such as the number of clicks on a particular ad and working out marketing strategies. This new type of exchange for buyers and sellers could revolutionize the way that online advertisements are sold. Buyers and sellers would have an exchange where they interact directly with one another to participate in auctions for ad space. Sellers could now see who bids on their spaces and at what prices, much in the way an auction on eBay works.

Traditionally, advertisements have been sold through some human intermediary, but this new service allows for the buying and selling of online ad space without the middleman. Obviously no one would host this service without charging a fee for using it, but DoubleClick would not directly take part in purchases made through its service. This could allow the ad-selling process to differ from what we discussed in class. Sellers could organize any sort of auction they want to sell their space. That alone could change how people think of ad-space auctions. Buyers would no longer be restricted to how a particular company wants to run its auction; they could go elsewhere to place their ads. Also, a company can know more specifically where its ads will appear because it interacts directly with particular sellers.

This sort of new ad exchange would also have a direct effect on the strategies companies use to buy ad locations. Depending on the type of auction being run, sellers would prefer different outcomes to maximize their total profit, and buyers would prefer certain bids to minimize their costs. A new game could evolve from this type of bargaining, and buyers and sellers would need to formulate their best strategies for playing it. One further thing to note is how sites would handle their keyword-based advertising when selling ads through an exchange such as this. Would advertisers continue to run a modified second-price auction, or would they prefer some other kind of auction? A lot of these questions will be answered when the exchange goes live in the third quarter of 2007 and companies begin to offer their advertising space.

Posted in Topics: Education

No Comments

Evolution of Web Advertising

Web advertising today is a vast network of pop-up ads, ads that play music and sound tracks, ads that swim across the screen, and countless other in-your-face advertising techniques. This article surveys the different forms of Web advertising in use today, as well as the economics driving them. Around 1997, commercialization found its way onto the Web and the "banner ad" was born. These were the 728×90-pixel ads that you see at the top of almost every Web site today. Sites such as Yahoo charged $30, $50, or $100 per thousand impressions to run banner ads on their pages in the late 90s. Where did these rates come from? These figures are what magazines typically charge for full-page color ads; Internet companies basically used the same model for banner ads. Eventually advertisers concluded that banner ads were not as effective as full-page magazine ads or TV commercials and rates began to plummet; thus newer, flashier, and more innovative methods were developed.

First, advertisers simply decided that bigger is better and introduced the "sidebar ad": a vertically oriented ad two to three times larger than a banner ad, which also cannot be scrolled off the screen since it stretches the entire height of the page. The sidebar ad proved to have a higher click-through rate, and advertisers typically pay $1-$2 per 1,000 impressions for it. The next innovation came in the form of varying the shapes and sizes of banner and sidebar ads. Although advertisers pay lower fees for these smaller ads, a site can now cram 10 of them onto a single page.

The final step in the evolution of Web advertising incorporates motion. From pop-ups and pop-unders to floating ads and moving banner/sidebar ads, the use of movement successfully captures the attention of any Web surfer. Pop-ups make ads unavoidable for the internet user, annoying many and prompting the creation of pop-up blockers. However, pop-ups receive 10 to 15 times more clicks on average than banner ads, and a pop-up ad will pay the Web site 4 to 10 times more as well. This is the reason for the overabundance of pop-ups today.
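A back-of-the-envelope comparison shows why publishers tolerate the annoyance (the page-view count and baseline rate below are assumptions; only the 4-to-10-times multiplier comes from the article):

impressions = 100_000   # monthly page views (invented)
banner_cpm = 1.50       # dollars per 1,000 impressions (assumed baseline rate)

banner_revenue = impressions / 1000 * banner_cpm
popup_revenue = banner_revenue * 7   # article: pop-ups pay roughly 4 to 10 times more

print(banner_revenue)   # 150.0
print(popup_revenue)    # 1050.0, from the same traffic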

For more on the future and economics of pop-ups, floating ads, unicast ads, and other variations, refer to the following link: http://computer.howstuffworks.com/web-advertising1.htm

Posted in Topics: Education

View Comment (1) »

Search Engine Optimization a Burden?

SEO Tips

With the growing popularity of online search engines, especially Google, more and more web developers have come to realize the importance of Search Engine Optimization (SEO). More and more people are becoming aware of the procedures search engines use to rank pages, and this motivates them to optimize their web pages for high rankings. Though search engines such as Google obviously do not publicize their ranking algorithms, they do give some idea of the systems they use to rank websites. For instance, Google admits to using PageRank as one of its many ways to sort pages. In turn, web developers try to attain the highest possible rank they can by changing around links and other small tricks. Many of these developers will exploit any errors or imperfections in the algorithms to maximize their ranks. All of this, in a way, ruins the goal of the search engines, which is to rank pages in order of relevance, not to place the pages with the best SEO schemes at the top.
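For reference, here is a toy version of the PageRank computation mentioned above (a basic power-iteration sketch of the published idea, not Google's actual implementation; the four-page "web" is invented):

def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for p, outlinks in links.items():
            targets = outlinks if outlinks else pages   # dangling page: spread evenly
            for q in targets:
                new_rank[q] += damping * rank[p] / len(targets)
        rank = new_rank
    return rank

# Tiny invented web: the heavily linked-to page C ends up with the highest rank.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))

SEO tricks of the sort described above amount to rearranging that links structure (link farms, reciprocal links, and so on) to inflate a page's score without changing its actual relevance.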

A large company can hire experts to handle the SEO on its websites. This may be one of the many reasons why sites such as Amazon and eBay are ubiquitous in search results. On the other hand, smaller websites that do not have access to as much money will obviously have inferior SEO. Whether a website has little or a lot of money is most likely not relevant to the user, and yet he or she will get results clearly influenced by the SEO techniques employed by larger websites. Small websites created by developers with little or no knowledge of or resources for SEO are not necessarily bad sources of information. All of this SEO further sways the results in favor of the larger websites, which probably already have high in-degrees and have paid the search engines to make their sites more visible. It would probably behoove the search engines to curb this practice a bit so that users can get more varied results.

Posted in Topics: Education

No Comments

SxSW tips Twitter

One of the most talked about new web applications today is a service called Twitter. A dead-simple social messaging application, Twitter has been the subject of recent controversy in the blogosphere. Some like it. Some don’t. Some just wonder how we will get anything done.

What is Twitter? From Medialoper:

If you’re not familiar with Twitter, it’s a web 2.0ish chat/SMS mashup that allows users to send quick messages to friends (or the world) from just about anywhere. Unlike traditional chat and SMS, Twitter seems to be more group based and messages have persistence. Your most recent twit becomes something of a short-term status for your entire life.

Twitter is interesting to study because, like any good social networking site, the service is only of value to you when your friends are on it. In the same way that Facebook without friends isn't worth using, Twittering by yourself wouldn't be much fun at all. Twitter, like those before it, had to reach its tipping point.

Recently, Twitter tipped, and it tipped dramatically. This year’s SxSW festival in March provided Twitter just the boost it needed to tip. Take a look at page views, noting specifically March 2007:

Twitter Alexa Graph

Another way to look at the data is in terms of the number of messages posted. Andy Baio took a stab at estimating the number of messages in the system using the IDs displayed in Twitter URLs. We see the same pattern in his Twitter analysis. I've taken the Excel file of ev's Twits and added the second half of March. The rate of new posts has stayed the same, even after SxSW:

EV’s Twitter messages by date
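Baio's trick of reading growth off the IDs in Twitter URLs can be sketched roughly like this (the IDs and dates below are invented for illustration, not his actual data): if status IDs are assigned sequentially, the difference between the highest IDs seen on two dates estimates how many messages were posted in between.

from datetime import date

# Invented observations: (date, highest Twitter status ID seen that day).
observations = [
    (date(2007, 2, 1), 5_200_000),
    (date(2007, 3, 1), 7_800_000),
    (date(2007, 3, 15), 12_300_000),   # SxSW happens in here
    (date(2007, 4, 1), 16_900_000),
]

for (d1, id1), (d2, id2) in zip(observations, observations[1:]):
    per_day = (id2 - id1) / (d2 - d1).days
    print(f"{d1} to {d2}: ~{per_day:,.0f} new messages per day")
# In this made-up series the per-day rate jumps around mid-March and does not
# fall back afterwards, which is the tipping pattern visible in the real data.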

SxSW tipped Twitter in a big way.

What makes such a simple service so appealing? Well, no one really knows. Kathy Sierra has speculated that it's an addiction:

it’s a near-perfect example of the psychological principle of intermittent variable reward, the key addictive element of slot machines.

Jason Kottke has a different take:

Maybe that’s when you know how you’ve got a winner: when people use it like mad but can’t fully explain the appeal of it to others. See also: weblogs, Flickr.

Whatever the reason, it’s clear that Twitter is on its way to the mainstream, just like weblogs, Facebook and Flickr before it. Usually the tipping point of a phenomenon is hidden behind the scenes, but it seems pretty clear that SxSW provided a very specific date and reason for Twitter’s tipping.

Oh, and yes, I Twitter.

Posted in Topics: Technology

No Comments