This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Information Cascades in Financial Markets

An article written by Andrea Devenow and Ivo Welch in 1996 still accurately describes the financial world more than ten years later.

Rational Herding in Financial Economics

The authors discuss herding and the Efficient Markets Hypothesis (EMH) by examining the stock market and the behavior of its many investors. The stock market is a place that spawns numerous information cascades on a daily basis, yet these cascades are also extremely fragile. Investors can observe each other’s behavior and thus imitate what others are doing (e.g., if a particular stock is selling really well, one might be more tempted to buy it). However, these investors also possess valuable private information that may persuade them to go against the cascade, which is very often ended through the actions of just a few individuals.
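The fragility the authors describe can be seen in a toy simulation (my own sketch, not from the article): sequential investors each receive a noisy private signal about the true state and observe all earlier choices. Once earlier choices lead by two or more, private information stops mattering and a cascade locks in.

```python
import random

def simulate_cascade(true_value, signal_accuracy, n_agents, seed=0):
    """Toy sequential-choice model: each investor receives a private
    binary signal (correct with probability signal_accuracy) and sees
    every earlier choice. Each follows the majority of prior choices
    plus their own signal, breaking ties with their own signal."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_value if rng.random() < signal_accuracy else 1 - true_value
        ones = sum(choices) + (1 if signal == 1 else 0)
        zeros = len(choices) - sum(choices) + (1 if signal == 0 else 0)
        # Once earlier choices lead by two or more, the private signal
        # can no longer swing the count: a cascade has locked in.
        choices.append(1 if ones > zeros else 0 if zeros > ones else signal)
    return choices

print(simulate_cascade(true_value=1, signal_accuracy=0.7, n_agents=20))
```

Running this repeatedly with different seeds shows both effects from the article: imitation quickly takes over, but the cascade rests on only a couple of early choices, which is exactly why a few contrarian individuals can end it.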

As the authors of the article discuss, there are several strong reasons for imitating the actions of others:

“Payoff externalities models show that the payoffs to an agent adopting an action increases in the number of other agents adopting the same action (example: the convention of driving on the right side of the road).”

This is fairly obvious: by adopting the actions of everyone else, one can minimize the chance of losing money, because in a failing situation everyone else would lose money as well, and from a relative perspective the investor would not lose anything. At the same time, this creates an incentive to break the cascade, as there is obviously the potential for a lot of gain. Consider the same situation with everyone making the wrong choice: the one person who makes the right choice will end up with a much greater payoff. Another example of this can be demonstrated in the following situation:

“Payoff externalities may also drive the decisions of agents for which stocks they acquire information. Under certain circumstances, agents find it worthwhile to acquire further information only if other agents do. Agents thus herd on information acquisition (or lack thereof).”

Since acquiring additional information can be costly, one might choose not to do so if other agents choose not to; their inaction can indicate that the extra information is unnecessary and would not bring a greater payoff. There is also the flip side of the issue, where spending a little more on extra information can in fact result in a greater payoff. Thus, it is easy to see how information cascades spawn in the financial world, but at the same time are extremely fragile, as there is always the potential for a greater payoff in choosing to break the cascade.

Posted in Topics: Education, General, social studies

No Comments

Crowdsourcing

Recently we’ve talked about the “wisdom of the crowds” in information cascades: in particular, how two consumers’ decision to go to restaurant A instead of B may cause all subsequent consumers to make the same choice. This scenario embodies the notion that a herd of people might make a better choice than a single person. Of course, this idea isn’t new; after all, democracy is a type of government based on this principle, and there are plenty who argue that the decisions it has produced, now or historically, aren’t wise at all. Crowdsourcing is a term coined by Jeff Howe that describes this notion, and is defined as

“the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.”

Catone’s blog article Crowdsourcing: A Million Heads is Better than One analyzes this new phenomenon and comments on its advantages and disadvantages.

Catone claims crowdsourcing takes three forms: 1) creation, 2) prediction, and 3) organization. Creation refers to designing new products or modifying existing ones based on the opinions of the people in the network, which usually is the World Wide Web. Two examples of creation are Cambrian House and Crowd Spirit. Prediction refers to using the crowd’s knowledge to make investments. One example is Marketocracy, which created a mutual fund called the Masters 100 Index based on the virtual investments of its top 100 most successful members. The fund is now worth $44 million and has beaten the S&P 500 Index, which is considered a good gauge of the U.S. equities market. Organization refers to using the crowd to classify articles, photos, web pages, etc. by interest and relevance. Digg is an excellent example, organizing news, videos, and podcasts by the interests of the crowd.

Catone concludes his post by noting that crowds are susceptible to “mania” (i.e., popularity may promote a terrible idea). To use crowdsourcing effectively, Catone believes, there first ought to be constraints. Second, a “core group” should make the final decisions on which products the company will manufacture or which investments to make. Third, contributors should try to avoid following the crowd in order to produce the best product. Finally, crowds are better voters than inventors (i.e., parts of a product ought to be created by individuals to promote creativity, but the design that goes into the final product ought to be decided by vote). To find more articles about crowdsourcing, you can check this blog devoted to it.

Posted in Topics: Education, Technology, social studies

No Comments

Why the Semantic Web Will Fail - True or False?

The Semantic Web has often been hailed as one of the biggest revolutions in networking: the creation of well-defined organizational links between media, information, and other networked resources. However, as promising as this technology has been, it has yet to truly take off. Stephen Downes’s inflammatory post on the Semantic Web, which gained attention through Slashdot, has raised many good questions. As he points out, business is crucial to allowing Web 3.0, or the Semantic Web, to succeed. Without business support, it is inevitable that a wave of proprietary formats will swamp the web and then fight it out until the user loses. This has been the case not only with new technologies now, but in the past as well: with Web 2.0 we saw the rise of Atom versus RSS, and with DRM schemes, Windows Media versus Apple’s FairPlay.

Now, with more integration on the way, what evidence do we see otherwise? True, Google is considered one of the world’s most advanced and interesting companies, yet even its front page does not conform to W3C standards for HTML. For an industry-wide standard to come into being, more widespread connections between corporate R&D groups are required. It is slightly fitting that defining the new wave of standards for links and edges between nodes will require the very same thing, though not necessarily through technology, but through interpersonal communication between policy makers and implementers.

And at the moment, the implementers are threatening to leave the negotiating table. As pointed out, large companies such as Yahoo! and MySpace are walking away from industry-wide standards on interconnection to move to more localized networks. While this could potentially offer large profits to proprietary technology licensees, it requires that they be in the centralized, so-called “large group”; otherwise they risk losing everything. Perhaps Mr. Downes is right: maybe the Semantic Web will fail, if this does not change.

Slashdot Discussion: http://developers.slashdot.org/article.pl?sid=07/03/21/0235208

Blog Post: http://halfanhour.blogspot.com/2007/03/why-semantic-web-will-fail.html

Posted in Topics: Education

No Comments

Google’s Pay-Per-Action Threatening Affiliate Marketing Networks

http://www.webpronews.com/blogtalk/2007/03/20/is-googles-pay-per-action-a-threat-to-affiliate-networks

In a post entitled “Is Google’s Pay-Per-Action a Threat to Affiliate Networks?”, Andy Beal raises the notion that the emergence of Google’s new Pay-Per-Action product might pose a challenge to affiliate marketing networks. As mentioned in previous blog posts, with Pay-Per-Action (PPA), instead of paying per click as in the Cost-Per-Click product discussed in class, advertisers will now be able to pay whenever a customer not only clicks, but also fills out a form on the advertiser’s web site or even buys a product from the company. In general, advertisers are now given the ability to pay whenever customers engage in some sort of action when visiting the advertiser’s website, as opposed to simply clicking on the advertisement.
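A rough, made-up comparison of the two pricing models (none of these numbers come from Google or from Beal’s post) shows why the distinction matters to advertisers: per-click payment makes the advertiser bear the risk that clicks never convert, while per-action payment ties cost directly to outcomes.

```python
# Hypothetical comparison: under cost-per-click the advertiser pays for
# every click, whether or not it leads to a sale; under pay-per-action
# the price attaches only to the completed action.
clicks_per_sale = 100      # assumed conversion rate: 1 sale per 100 clicks
cpc_price = 0.50           # assumed dollars paid per click
ppa_price = 20.00          # assumed dollars paid per completed sale

cpc_cost_per_sale = cpc_price * clicks_per_sale
ppa_cost_per_sale = ppa_price

print(cpc_cost_per_sale, ppa_cost_per_sale)  # 50.0 20.0
```

Under these invented numbers the per-click advertiser pays more per sale, and, more importantly, its cost swings with the conversion rate, which it cannot directly control.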

In addition, publishers now have the choice of displaying Google’s PPA ads, and with PPA they are given more options as well. For example, publishers may see an ad before it is displayed, and they may choose the number of ads to display. They also have the option of receiving a “text link ad,” which gives them the power to display ads as one line of text instead of as an ad block.

Affiliate marketing networks work in a similar fashion: an affiliate collects payment whenever the customer buys a product or a service from the website that the affiliate links back to. To Beal, Google seems to be clearing itself a space in the affiliate marketing industry, and by performing the same functions as these networks, Google’s PPA seems to threaten them. However, Beal reports that when he discussed this issue with Rob Kniaz, a product manager for Google’s advertising products, Kniaz responded that by offering more choices and control, PPA is different from the “traditional marketing industry.”

Posted in Topics: Education

No Comments

Self-correcting Information Cascades

The topic of this research paper (Princeton University) is information cascades. The concepts are the same as many of the principles we have gone over in class, but they are developed much further to explain phenomena observed in real situations. The experiments involve a simple scenario, with many decision makers, two choices, and some input regarding the choices. This is very similar to the “restaurants in a foreign country” example described in lecture. The results show that in real situations the cascade is often broken, contrary to the outcomes predicted by a Nash equilibrium. The method used to describe the behavior is a technique called “Quantal Response Equilibrium” (QRE). This is a much more complex method and has proven to yield much more accurate predictions. The main phenomenon on which the paper focuses is that of self-correcting cascades. Sometimes in a real situation an information cascade is broken, meaning that a decision maker makes a decision against the cascade. In some situations, however, this deviation is only temporary, and the cascade can re-establish itself. It is a very interesting paper, certainly worth reading.
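The paper’s full QRE machinery is beyond a blog post, but the core idea of the logit quantal response can be sketched in a few lines (a toy illustration with hypothetical payoffs, not the paper’s actual model): unlike a strict best response, every action gets positive probability, so “mistakes” that break a cascade do occur, just less often when they are more costly.

```python
from math import exp

def logit_choice(payoffs, rationality):
    """Logit quantal response: choose each action with probability
    proportional to exp(rationality * payoff). As rationality grows,
    the choice approaches a strict best response; for finite values,
    costly deviations still happen with small probability."""
    weights = [exp(rationality * u) for u in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# Following the cascade is worth 1.0, deviating 0.6 (hypothetical payoffs).
print(logit_choice([1.0, 0.6], rationality=2.0))
print(logit_choice([1.0, 0.6], rationality=10.0))  # closer to best response
```

This is exactly the mechanism that lets QRE predict broken and self-correcting cascades where a Nash equilibrium predicts none: occasional low-probability deviations occur, and if enough accumulate, the cascade flips or re-forms.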


http://www.princeton.edu/~tpalfrey/cascade_exp_030206.pdf

Posted in Topics: Education

No Comments

Tipping “Vote Different” into the Mainstream

The sudden and widespread attention given to the “Vote Different” video clip (also sometimes referred to as “Obama 1984” or “Hillary 1984”) is a dramatic example of the sort of tipping phenomenon that the course is beginning to cover. On YouTube, the commercial has now been viewed almost 3 million times, which doesn’t count the tens of millions who have watched it on nightly news programs and cable news channels.

For those unfamiliar with the video clip, the commercial was an edit of Apple’s 1984 commercial introducing the Macintosh computer***, in which the creator, Phil de Vellis, replaced Big Brother’s droning rhetoric with video of Hillary Clinton beginning her “conversation with America”. A woman wearing an Obama campaign logo throws the sledgehammer into the screen. The clip ends with an image that contains Barack Obama’s website address.

There appear to me to be two different surges of attention: one quick viral surge from politically-centered areas (immediately sending views into the hundreds of thousands), and a distinctly different second surge as the clip attracted the mainstream mass media (sending views into the millions). I focus on the second.

In a subsequent interview with YouTube, de Vellis mentions Adam Conner’s coverage of the spread of the video. Viewing the data afforded by the posts can reveal some hints as to who are - and aren’t - the major players - the Connectors and Mavens - in tipping the video into the consciousness of the mainstream news media and onward to the general masses.

The video was posted on March 5. By March 7, it had 100,000 views and had caught the attention of Micah L. Sifry at techPresident, who was already describing it as a viral video, at least in YouTube terms. Sifry revisited the video on March 19, when it had somewhat more than 300,000 views. Although he describes it as being “really hot on YouTube”, hindsight reveals how lethargic the growth was: in the 12 days between the articles, it received an average of about 17 thousand views a day, paling in comparison to the hundred thousand it received in the first two days.*

Sometime around the 19th, something made the video tip (again). Adam Conner pegs the view count at 400,000 at 2 PM on the 19th. By 11 PM, less than ten hours later, it had reached nearly 800,000.** The next day, March 20, CBS’s Harry Smith had written about it, marking its views at 900,000. It was broadcast on the CBS Evening News that same night, already up to “more than a million” views on YouTube. The video went even further mainstream by March 21, when the Associated Press ran a story, pegging the view count at 1.5 million.
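A quick back-of-envelope calculation over the approximate counts quoted above makes the two growth regimes vivid:

```python
# Approximate view counts quoted above.
views_mar_7 = 100_000     # two days after posting
views_mar_19 = 300_000    # just before the second surge
views_mar_21 = 1_500_000  # after the AP story

lull_rate = (views_mar_19 - views_mar_7) / 12   # the lethargic stretch
surge_rate = (views_mar_21 - views_mar_19) / 2  # the mainstream surge
print(round(lull_rate), round(surge_rate))      # roughly 17,000 vs 600,000 a day
```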

More than half a million views a day? What caused all this attention in the mainstream media?

As we have learned before, “weak links” are the most likely to occupy the niche of bridging. Adam Conner attributes the leap to a March 18 San Francisco Chronicle article by “Carla Marinucci, Chronicle Political Writer”. Conner lists two more distinctly political arenas, but we can note how closely they occupy bridging roles into the mainstream media: The Drudge Report, which is described by ABC News as “set[ting] the tone for national political coverage”, and Time’s The Sludge, where the link with Time Magazine is clear.

Further, we can definitely see how some sites are not the bridging points: those which publicized the video on March 5, the initial posting date. According to Garance of TAPPED (as quoted in Conner’s article), the blog appearances on March 5 were on “pro-Democratic sites” like MyDD and TalkingPointsMemo. Garance also traces the word-of-mouth chain from de Vellis (the creator) to TalkingPointsMemo: friend of a friend, only two strong connections. This clearly illustrates how bouncing around in a strongly-linked component may get everyone in the component to see the video, but a weaker link is needed to get outside.

Another hint of the transition may be the shift from coverage of the clip itself to coverage of its spread. While the CBS news clip and The Sludge both discuss the implications of the clip for how the 2008 elections will look, the earlier posts seem to deal much more with the mystery of the author. This again suggests how weaker relationships take topics farther: articles on the exact same topic are less likely to be passed on to new people.

For Micah L. Sifry’s two coverage pieces: http://www.techpresident.com/node/130 and http://www.techpresident.com/node/159

For Adam Conner’s analysis: http://www.mydd.com/story/2007/3/20/13319/9340

For the original video on YouTube: http://www.youtube.com/watch?v=6h3G-lMZxjo

* The article implies that there was a noticeable rate increase already by the time it was posted - factoring this in, the average view count between the two surges would be even lower.

** I assume that “11:56 PM today” refers to the night before the March 20, 2AM timestamp.

*** According to the aforementioned TAPPED post, the video is actually an edit of a 2004 remake, as can be seen by the iPod worn by the woman who throws the sledgehammer.

Posted in Topics: Education, General, Mathematics, Technology, social studies

No Comments

Information Cascades in Sunstein’s Infotopia

In this article, http://www.worldchanging.com/archives/005507.html, Ethan Zuckerman offers bloggers an overview and a review of Cass Sunstein’s book, Infotopia: How Many Minds Produce Knowledge. The review is fairly long and covers a large amount of information, including Sunstein’s argument about how today’s society aggregates information, for better or worse. While I cannot cover all of Sunstein’s points, I recommend that you browse through the review to read the author’s full argument, which incorporates interesting ideas ranging from Hayek’s theory of market aggregation and Habermas’s ideas about deliberative groups to prediction markets and open source software. Zuckerman’s analysis of Sunstein’s argument reveals some of the holes and lingering questions in Infotopia.

One of Sunstein’s most interesting conclusions reflects back on what we’ve learned this week about information cascades. Sunstein looks into group decision making through the Condorcet Jury Theorem. The Theorem states that if individual jury members each have a probability greater than one-half (.5) of making the correct decision on the verdict, then together, as a full jury, they will have an even better probability of choosing the correct verdict. But if the individual probabilities are less than one-half, the combined probability of a correct decision will be even worse. This jury shares some qualities with an information cascade, if we translate making the correct decision into having a good outcome, as we did in class. There is a parallel with p(G|H) > p, since Condorcet suggests that p(choosing the right verdict given signals from others) is greater than p, the individual probability of choosing the right verdict.
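The Theorem itself is just a statement about binomial majorities, and it is easy to check numerically (a quick sketch; the jury size and probabilities here are arbitrary choices of mine, not Sunstein’s):

```python
from math import comb

def majority_correct(p, n):
    """Probability that a majority of n independent jurors is correct,
    when each juror is independently correct with probability p
    (n is odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p above one-half the jury beats the individual; below, it does worse.
for p in (0.6, 0.4):
    print(p, round(majority_correct(p, n=11), 3))
```

For an 11-person jury, individual accuracy of 0.6 pushes the group well above 0.6, while individual accuracy of 0.4 drags the group well below 0.4, which is exactly the two-sided behavior the Theorem describes.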

To fix that negative trend of a higher probability of a wrong verdict, Sunstein proposes group deliberation as a way to break a cascade heading in the wrong direction. Yet, citing his own study that used politically-oriented focus groups in Colorado, he notes that deliberation in a group setting often fails to secure the “right” outcome. Sunstein suggests that deliberation fails for three reasons. First, individuals in groups support the opinion that is relayed with the most confidence. An impassioned juror who speaks first can cause a cascade, as we have learned in class. Also, if you consider the information gleaned from the media during the trial as a sort of better information, an individual member who has this information, as a “fashion leader,” can throw off the cascading jury vote, for better or worse. The fragility of this cascade demonstrates the importance of isolating jurors during high-profile trials. A second factor limiting deliberation is that individuals in like-minded groups repeat evidence that further supports the dominant opinion and shut out minority opinions. This “preaching to the choir” trend makes it less likely for a minority opinion, even if it is the correct one, to be heard. Third, individuals who support the minority opinion, which in this case is the “right” one, tend to avoid conflict and do not speak up in group settings. In the second and third reasons, Sunstein argues that cascades are not as fragile as previously believed. He suggests that it is rather difficult to change the course of a cascade through group deliberation unless an individual presents the most “strongly-stated opinion.” By building this argument, Sunstein seems to urge individuals with the “right” opinion to assert themselves in group situations in order to effectively steer the group’s cascading decision-making process.


Posted in Topics: Education

No Comments

Getting Rich in the Blogosphere

Blogs to Riches

http://nymag.com/news/media/15967/ 

(This article contains a few instances of language that some may find objectionable.)

This article looks at the growth of blogging as a source of income through advertising, and examines the factors that lead to the differential success (in terms of advertising revenue generated) of blogs. Blogging as a ‘job’ has very few barriers to entry: all you need is a computer connected to the internet, and hosting, etc., is basically free. As such, it would seem that anyone could get on the bandwagon and, with enough resourcefulness and effort, build a successful blog that got lots of hits and generated lots of revenue (the article cites one blogger charging advertisers on his site between $6 and $10 per 1,000 views). However, this doesn’t seem to be the case. While it’s possible to build a successful blog that might bring in a comfortable income, building a blog that will make you a millionaire is all but impossible, and many bloggers find that no matter how hard they work, they simply cannot break through this ‘glass ceiling’. Given the very democratic nature of this medium of communication, this phenomenon does seem surprising.

It turns out this ‘glass ceiling’ can be explained by network effects. The structure of the blog network is such that there is a very small number of hugely successful blogs (the A-list), a larger number of moderately (but still much less) successful blogs (the B-list), and a huge base of also-rans (the C-list). Essentially, the three lists follow a power law distribution. The article claims that “Internet studies have found that inbound links are an 80 percent–accurate predictor of traffic”. Therefore, the success of a blog, if page views translate directly to revenue, depends heavily on the number of inbound links. The article goes into a little detail, but the main point is this: “First-movers get a crucial leg up in this kind of power-law system.” From there, popularity breeds popularity, and the situation is similar to the ‘rich get richer’ phenomenon. According to the article, “this pattern is called “homeostasis”—the tendency of networked systems to become self-reinforcing.”

As a final passing thought, the ‘blogosphere’ seems to lend itself to the first question on homework 4. The blogs in this context are authorities, with many inbound links. It would be interesting to investigate the notion of mutually reinforcing links versus a high in-degree in this context. Do all the top blogs (that deal with the same subject matter) have mutually reinforcing links? Or, conversely, if we could determine that a collection of successful blogs simply had a high in-degree, would it then be possible to engineer a way to construct a set of mutually reinforcing nodes that might then sway the balance of the authority score, since the odds would be k² to n in our favor? The discussion in class and in the homework on valuing advertising space would also be applicable for successful blogs, especially since, at least for now, they charge per view instead of per click.
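The ‘rich get richer’ dynamic behind such power laws can be illustrated with a toy preferential-attachment simulation (my own sketch; the 10% “fresh discovery” rate and the one-link-per-blog rule are arbitrary assumptions, not figures from the article):

```python
import random

def preferential_attachment(n_blogs, seed=0):
    """Toy rich-get-richer model: each new blog links to one existing
    blog, usually chosen with probability proportional to the inbound
    links that blog already has (by copying a random existing link),
    and occasionally chosen uniformly at random."""
    rng = random.Random(seed)
    inbound = [1]    # start with one blog holding one link
    all_links = [0]  # flat list of link targets; sampling from it is
                     # proportional to current inbound-link counts
    for new in range(1, n_blogs):
        if rng.random() < 0.1:
            target = rng.randrange(new)     # occasional fresh discovery
        else:
            target = rng.choice(all_links)  # copy an existing link
        inbound.append(0)
        inbound[target] += 1
        all_links.append(target)
    return inbound

inbound = preferential_attachment(5000)
top_share = sum(sorted(inbound, reverse=True)[:50]) / sum(inbound)
print(f"top 1% of blogs hold {top_share:.0%} of inbound links")
```

Even in this crude model the earliest blogs accumulate a hugely disproportionate share of the links while the typical blog gets almost none, which is the first-mover ‘glass ceiling’ the article describes.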

Posted in Topics: Education

No Comments

Searching in a Small World

In the 1960s, Stanley Milgram performed an experiment in which subjects were given a letter and asked to send it, through a chain of personal acquaintances, to an individual unknown to them. The result, that these letters flowed through chains of weak personal acquaintances, was used by Mark Granovetter in “The Strength of Weak Ties” to support his observation that weak ties (acquaintances), and not strong ties (close friends), are the most important traffickers of information in a network.

The Free Network Project (Freenet) is the architect of a fully decentralized, anonymous network in which each user (node) stores many small, encrypted pieces of many different files, representing a small subset of the files published on the network as a whole. The users themselves do not know what small bits of files are stored by the network at any one time, and they do not explicitly know where to find all the pieces that constitute a file they want. The basic network design is described in Ian Clarke’s thesis, “Freenet: a Distributed Anonymous Information Storage and Retrieval System”.

Inherently, searching for information in this type of decentralized network cannot be done via a centralized file indexer: pages reside only temporarily in any one location, and the fact that users need to know what they are searching for before they can view results defeats any global indexer, which would need to analyze a sequence of queries across the network for every possible query string.

So, searching for information is relegated to a process in which a node asks other nodes it is aware of to forward a request for information to nodes that are likely to be close to the actual information. But, as Milgram showed with his experimental observations on letters, the number of steps before the request reaches an authority on that information is surprisingly small. This phenomenon is known as the Small-World Phenomenon. For a more intense algorithmic analysis of these networks, see Professor Kleinberg’s paper, “The Small-World Phenomenon”.
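A minimal sketch of greedy routing in a Kleinberg-style small world (my own simplification: one ring of nodes, and the long-range contact is resampled at each visit rather than fixed per node) shows how few hops such a forwarding search needs:

```python
import random

def greedy_route(n, source, target, rng):
    """Greedy routing on a ring of n nodes. Each node knows its two ring
    neighbours plus one long-range contact drawn with probability
    proportional to 1/distance (the efficiently-searchable regime in
    Kleinberg's model); each step forwards to the known node closest to
    the target."""
    def dist(a, b):
        d = abs(a - b)
        return min(d, n - d)
    cur, steps = source, 0
    while cur != target:
        others = [c for c in range(n) if c != cur]
        weights = [1.0 / dist(cur, c) for c in others]
        long_range = rng.choices(others, weights=weights)[0]
        known = [(cur - 1) % n, (cur + 1) % n, long_range]
        cur = min(known, key=lambda c: dist(c, target))
        steps += 1
    return steps

rng = random.Random(0)
hops = [greedy_route(1000, 0, 500, rng) for _ in range(20)]
print(sum(hops) / len(hops))  # far fewer than the 500 hops of pure ring-walking
```

Each hop moves at least one step closer along the ring, but the occasional long-range jump is what collapses the path length, mirroring how Freenet requests can reach far-away holders of a file in few forwards.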

Suppose one wanted to search this network by keyword and entered a query for information on a certain subject. For the search, one wants to find not only the needed files across the network but also the highest-valued authority on the subject. The Freenet routing system increases the number of copies of much-trafficked files and moves files closer to where they are being accessed. As it currently stands, you have to know the key of the file you are searching for, but these keys also include namespaces that describe general topics.

Suppose your first hit pointed to the key for a file, and as you accessed it you got returns from many other nodes saying that, as a group, they had all found a different authority you might want to look at. This could potentially be an example of a useful information cascade: any single piece of information one receives might be misleading, as there is no way to track back the goodness of each hub, but since the new information has a high in-degree it could potentially be useful.

In sum, searching over anonymous, decentralized networks proves to be much harder than standard search-engine indexing. In the shadow of these issues, we face multiple questions. How can an agent apply a social and algorithmic analysis that provides an effective method for traversing a graph? How could we ever anonymously sell these search results? Is an anonymous, decentralized network an efficient, better way to store information?

Posted in Topics: Mathematics, Science, Technology

No Comments

Google taking advertising method to radio and print

http://www.nytimes.com/2007/03/29/technology/29google.html

In class we have studied keyword-based advertising on the internet: a way to selectively place ads based on search engine keywords, to better streamline the advertising process and avoid ads hitting the wrong markets. Google has pioneered this method, and it has been very successful for their online business. Now, Google is looking to use this same method in newspapers and on the radio.
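The article does not spell out the auction mechanics, but a minimal sketch of a generalized second-price allocation, in the spirit of the keyword auctions discussed in class (the bids and clickthrough rates below are invented for illustration), looks like this:

```python
def gsp_allocate(bids, click_rates):
    """Generalized second-price sketch: ad slots, ordered by clickthrough
    rate, go to the highest bidders in order, and each winner pays the
    next-highest bid per click."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for slot, rate in enumerate(sorted(click_rates, reverse=True)):
        advertiser = ranked[slot][0]
        price = ranked[slot + 1][1] if slot + 1 < len(ranked) else 0.0
        results.append((advertiser, rate, price))
    return results

bids = {"A": 4.00, "B": 3.00, "C": 1.00}  # dollars per click
print(gsp_allocate(bids, [0.10, 0.05]))   # two ad positions
```

Here advertiser A takes the best slot but pays B’s bid, and B takes the second slot paying C’s bid; moving this style of auction into print and radio is essentially a matter of replacing clickthrough rates with some offline measure of exposure.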

The New York Times and the Chicago Sun Tribune have already hopped on board, along with most major newspaper chains. Radio stations have been somewhat hesitant to try out Google’s advertising, fearing they might lose relationships with advertisers that they have spent time and money building.

Nonetheless, from the small samples of data currently available, Google’s auction procedure has been successful in print and radio so far, and if the revenue Google has acquired from the online advertising business is any indicator, print and radio success shouldn’t be far off.

Posted in Topics: Education

No Comments