This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Procter & Gamble Launches Ads to Calm Consumers Over Contaminated Pet Food

While this article doesn’t directly discuss information cascades, it reports that Procter & Gamble became the first of the companies involved to launch an advertising campaign aiming to calm consumers over the recent contamination of pet food blamed for the deaths of thousands of pets across the United States. The company purchased full-page advertisements in 59 daily newspapers.

This situation, when examined on a deeper, more network-oriented level, certainly is an example of an information cascade. It is interesting to ponder exactly how many people needed to lose their pets for some sort of negative information cascade to form, convincing people to stop purchasing the brands of pet food that were the culprits. What, exactly, would be the “tipping point” associated with a huge decline in the sales of these foods (or brands of foods)? To reference Gladwell’s book, how many Connectors lost their pets and would feel strongly enough about it to do a good bit of damage to the responsible companies by spreading that information? Obviously, one or two Connectors wouldn’t do much damage by verbally spreading the word alone, but as more pets across the United States are lost to this contaminated food, the negative publicity only grows.

What would make an information cascade like this much, much more complicated than the linear and “open” restaurant example (where everyone could see where everyone else was eating) are exactly the factors described above. People would not be able to tell what pet foods everyone else was buying, and would obviously weigh the opinions of friends more heavily than those of some random person in front of them at the pet store, who may not even know about the contamination. Certain people (Connectors) will also make their opinions well known more easily (whereas in the restaurant example, everyone had equal power to spread their opinion).
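The restaurant-style sequential cascade can be made concrete with a small simulation. This is a hedged sketch of the standard sequential-decision model, not anything from the article: each buyer receives a noisy private signal about which brand is safe, observes everyone’s earlier public choices, and follows the apparent majority, breaking ties with their own signal.

```python
import random

def simulate_cascade(n_agents=20, signal_accuracy=0.7, true_choice="A", seed=1):
    """Sequential-decision cascade: each agent sees all earlier public
    choices plus one noisy private signal and follows the apparent
    majority, breaking ties with its own signal."""
    rng = random.Random(seed)
    other = "B" if true_choice == "A" else "A"
    choices = []
    for _ in range(n_agents):
        signal = true_choice if rng.random() < signal_accuracy else other
        a_votes = choices.count("A") + (1 if signal == "A" else 0)
        b_votes = choices.count("B") + (1 if signal == "B" else 0)
        if a_votes != b_votes:
            choices.append("A" if a_votes > b_votes else "B")
        else:
            choices.append(signal)
    return choices

print(simulate_cascade())
```

Once one option builds a lead of two, later agents’ private signals can no longer flip the majority count, so everyone imitates: a cascade has locked in, for better or worse.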

One more recent example of a similar situation is the E. coli scare at Taco Bell last December, which Tracy Samantha Schmidt discusses in Time magazine in the preceding link. In it, she discusses certain failures of Taco Bell as a company in dealing with the ensuing negative information cascade (she doesn’t call it an information cascade, but that’s what it was). Jonathan Bernstein, a crisis-management consultant, says, “The company’s response should be prompt, compassionate, honest, audience-appropriate and interactive.” These five qualities would presumably fight a negative information cascade. The article also mentions the 2005 incident at Wendy’s, when a woman claimed to have found a finger in her chili and thereby drove sales down more than 50%.

The author also brings up the issue of class-action lawsuits, which would make it much easier and cheaper (and thus more likely) for people to sue the company at fault. Participation in a class-action lawsuit would probably be classified more as a network effect rather than an information cascade, as the payoff would be direct and would directly affect people’s participation.

Posted in Topics: Education

No Comments

Alternative Energy Business

http://www.swampfox.ws/south-carolina-lured-away-the-director-of-the-university-of-connecticuts-global-fuel-cell-center/

This article looks into the potential use of alternative energy and hydrogen fuel cells. The future success of alternative energy seems to rely heavily on its publicity, public appeal, and public opinion. The article explains that Connecticut is the one state that has been seriously trying to spur the development, manufacture, and use of fuel cells. Its efforts, however, may not be fast or vigorous enough to recruit sufficient support.

The alternative energy issue seems to be a good example of a service that relies on network effects to gain support but has not been able to recruit enough users to make the service worthwhile. If there were a clear and marked trend toward hydrogen as the energy of the future, the demand for alternative energy would increase, since people would buy hydrogen-powered cars and need fuel. The more people who demand hydrogen fuel, the more convenient it is for each of those people to obtain it, since the supply and availability would increase. And the more people who use alternative energy, the greater the environmental, health, and economic benefits for everybody. The downside, though, is that during this potential transition to alternative energy, most people will pay more to participate than not to. So gradual efforts to convert energy forms won’t be successful, since those people will not see short-term benefits until the bulk of the population also follows.
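The adoption dynamic described above can be sketched numerically. This toy model is my own illustration, not from the article: each person has a threshold, the fraction of existing adopters needed before switching to hydrogen pays off for them, and adoption spreads only while each round of new adopters pushes the overall fraction past more people’s thresholds.

```python
def adoption_dynamics(initial_fraction, thresholds, steps=100):
    """Iterate z -> fraction of agents whose threshold is at most z:
    an agent adopts once enough others already have."""
    z = initial_fraction
    for _ in range(steps):
        z = sum(1 for t in thresholds if t <= z) / len(thresholds)
    return z

# 100 hypothetical consumers whose thresholds span 20%-80% adoption
thresholds = [0.2 + 0.6 * i / 99 for i in range(100)]
print(adoption_dynamics(0.10, thresholds))  # small seed: adoption collapses to 0.0
print(adoption_dynamics(0.60, thresholds))  # big initial push: adoption tips to 1.0
```

Below the lowest threshold nothing catches on, which is exactly the “no short-term benefit until the bulk of the population follows” problem; past the tipping point the same dynamics carry adoption to everyone.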

http://news-service.stanford.edu/news/2005/july13/hydrogen-071305.html 

   

Posted in Topics: Education

No Comments

Herd Mentality

http://waderoush.typepad.com/twr/2005/03/james_surowieck.html

The article written by Rhody about a talk given by Surowiecki brings up a few valid points about the benefits and problems brought about by the ability to share information between people. He begins by mentioning that information taken from many sources can provide a diverse set of ideas that will greatly aid in solving many problems we may come across. However, the herd mentality of humans will usually get in the way of this phenomenon. In all sorts of situations, humans tend to follow the crowd, which is precisely what constitutes an information cascade.

The problem with these information cascades, however, is that much of the time they will steer many people in the wrong direction. Surowiecki uses an example from a computer science study at the University of Michigan. Subjects competed at problem-solving tasks and were then divided into “dumb,” “intelligent,” and random groups. When tested as groups, the intelligent group slightly beat the dumb group, but the random group easily beat both. The conclusion was not that the random group’s members necessarily knew more, but that they had different ideas. When it comes to information cascades, blindly following the group will often cause harm to everyone.

Posted in Topics: Education

No Comments

Network Externalities for P2P Systems

When dealing with any large network of users, be it social, technical, or some amalgamation of both, a discussion of network externalities will often yield fruitful results. As peer-to-peer filesharing systems are often large in both scope and impact on modern computer users, one might expect that there would be a host of literature discussing the network effects created by users joining a peer-to-peer system. However, this seems not to be the case, although a few papers on the subject do exist.

One of the few papers available is An Empirical Analysis of Network Externalities in Peer-to-Peer Music-Sharing Networks by Asvanund, Clay, Krishnan, and Smith. This paper analyzes data collected in 2000-2001 from a number of OpenNap peer-to-peer networks of varying sizes in an attempt to determine the network externalities in that type of network, and to use those externalities to predict an optimal size for a filesharing network. The externalities that the paper identifies are both positive and negative. On the one hand, the more users that are logged into a network, the more files exist and the more redundant those files are, both of which are a benefit to users. However, because the OpenNap protocol uses a central server to catalog available materials, additional users create congestion for the server, which is a negative effect of a growing user base.

These two things combined won’t necessarily limit the optimal size of a network. However, the paper hypothesizes that, in fact, the positive effects of additional users decrease with the size of the network, while the negative effects increase. This would eventually lead to a situation where the negative effects of additional users outweighed the positive, at which point the network would have reached its optimal size. The researchers used several measures of file availability and congestion, and provided evidence that this was indeed occurring.
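A toy calculation illustrates the hypothesis. The functional forms and parameters here are illustrative assumptions, not the paper’s estimates: suppose per-user benefit from file availability grows logarithmically (diminishing positive externality) while congestion cost at the central server grows linearly with the user count.

```python
import math

def net_value(n, a=10.0, c=0.05):
    """Per-user utility of an n-user network: logarithmic gains in file
    availability minus linear central-server congestion (toy parameters)."""
    return a * math.log(n) - c * n

optimal_n = max(range(1, 2000), key=net_value)
print(optimal_n)  # marginal benefit a/n equals marginal cost c at n = a/c = 200
```

Below the optimum each extra user still adds net value; beyond it, congestion dominates, which is the finite-optimal-size situation the paper describes.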

However, this paper was written in 2002-2003, and peer-to-peer filesharing networks have changed since then. BitTorrent, which uses a more decentralized model, has gained ground, and other network protocols have come into existence. So while the analysis of OpenNap still stands, it would be useful to see an updated estimate of network effects in more modern systems.

Posted in Topics: Education

No Comments

European Digital Libraries vs. Major Search Engines

A recent article called “France Launches Digital Library” by Helena Spongenberg of BusinessWeek.com discusses Europeana, a digital library that France is introducing as its supplement to the European Digital Library. What struck me in reading this article was that the creation of Europeana was described as a substitute for US search engines. The article discusses how major search engines, including those of Google, Yahoo, and Microsoft, have all announced plans to digitize large collections of books and other documents, and how France wants to help European countries preserve and reinforce their unique cultures by preventing European information from being completely “Americanized”. According to the article, France’s new cyberlibrary is scheduled to exceed 6 million books, movies, and other digital content by 2010. (Spongenberg, 1)

I thought this article was very relevant to our class discussions on how new web-searching methods need to be introduced to accommodate the rapid growth of cyberspace. An important point that this article makes is that although major search engines like Google and Yahoo have the capability to index nearly everything, it may not be in everyone’s best interest for them to do so, even if it would make information retrieval more convenient. Perhaps the task of making knowledge widely available should not be left solely to American search engines, as that would make the cyber world one-dimensional and, to a degree, uninteresting. By making large public collections of documents available, countries like France are ensuring that American search engines don’t homogenize the web by consuming everything. At the very least, they are offering Europeans an attractive reprieve from the search engine giants.

Posted in Topics: Education

No Comments

Variations on Information Cascade Models

A Simple Model of Fads and Cascading Failures

Maximizing the Spread of Influence through a Social Network (click on “Full Text: PDF” to view article)

Our class discussion of information cascades has mostly centered around a scenario in which individuals receive information about a particular choice and make a decision based on their information as well as other individuals’ choices. While this approach allows for reasonable predictions regarding whether a cascade will occur, it makes the severely limiting assumption that the population is entirely homogeneous. Realistically, there is nearly always some variety among a population, and this variety can influence the possibility and structure of an information cascade.

Duncan J. Watts’ paper “A Simple Model of Fads and Cascading Failures” examines a few ways in which a diverse population might influence the probability of a cascade, while “Maximizing the Spread of Influence through a Social Network” by Kempe, Kleinberg, and Tardos explores various ways in which this diversity might be modeled in order to build analysis algorithms. (Please note: my decision to reference a paper written by Professor Kleinberg and two other Cornell professors was purely incidental; my search criteria did not include any reference to either of the course professors, nor did they include Cornell University.) Both papers analyze the population in terms of an undirected connected graph in which a node represents an individual and an edge connects any two nodes between which information is exchanged. Watts postulates that each node has a (randomly assigned) threshold such that the node does not become part of the cascade unless at least a certain percentage of its neighbors have done so already. Nodes with a relatively low threshold value are termed “vulnerable,” and Watts notes that the success of a cascade can depend largely on the prevalence of such nodes, since a large percentage of “vulnerable” nodes increases the likelihood of overcoming nodes with high threshold values. He also points out that the problem is significantly exacerbated by clustering individuals into groups in which there is frequent interaction, while inter-group interaction occurs with lower frequency. This creates a situation in which a trend will likely propagate quickly within a group, but inter-group propagation may be much slower.
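Watts’ threshold rule is easy to simulate on a small graph. A minimal sketch follows; the toy graph and threshold values are my own invention, not from the paper. A node activates once the fraction of its already-active neighbors reaches its threshold, and a cascade that spreads easily through “vulnerable” (low-threshold) nodes stalls at a resistant one.

```python
def watts_cascade(adj, thresholds, seeds):
    """Threshold cascade: a node activates once the fraction of its
    already-active neighbors reaches its threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in active or not nbrs:
                continue
            if sum(1 for v in nbrs if v in active) / len(nbrs) >= thresholds[node]:
                active.add(node)
                changed = True
    return active

# A small path of "vulnerable" nodes; node 3 is resistant, so the cascade
# never reaches node 4 even though node 4 itself has a low threshold.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
thresholds = {0: 0.1, 1: 0.5, 2: 0.5, 3: 0.9, 4: 0.2}
print(sorted(watts_cascade(adj, thresholds, seeds={0})))
```

The stalled node illustrates Watts’ point about clustering as well: a trend can saturate one tightly knit region of the graph and still fail to cross into another.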

The paper written by Kempe, Kleinberg, and Tardos also suggests the notion of node threshold values and suggests several other useful schemes. These include examining the “influence” of an initially affected set of nodes in terms of the number of affected nodes at the end of the cascade sequence, assigning a “weight” to each node in the initial set based on its effect on the final outcome, or assigning a probability value to each node in the graph and using this value to determine which neighbors of a particular node may be affected. The latter idea is particularly interesting. In the authors’ words, “When node v first becomes active, […] it is given a single chance to activate each currently inactive neighbor w; it succeeds with a probability [p …] independently of the history thus far” (2). This lends itself to a model in which some function could be created to assign values of p to each node based on the particular dynamics of the population in question. For example, one might propose a function where the value of p for a particular node is directly related to the number of that node’s neighbors, reasoning that a node with many connections is a naturally outgoing individual who is likely to influence others (perhaps a “Maven,” “Connector,” or “Salesman,” as described in Gladwell’s The Tipping Point). By deriving an appropriate function to assign the p-values over the network, one can create a model for an extremely large set of scenarios.
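The quoted activation rule defines the independent cascade model, which can be sketched directly. The graph, probabilities, and degree-based assignment of p below are illustrative assumptions of mine, not values from the paper: a newly active node gets a single chance to activate each inactive neighbor, and averaging final cascade sizes over many random runs estimates the “influence” of a seed set.

```python
import random

def independent_cascade(adj, p, seeds, rng):
    """Independent cascade: when a node first becomes active it gets a
    single chance to activate each currently inactive neighbor,
    succeeding with probability p[(node, neighbor)]."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for nbr in adj[node]:
                if nbr not in active and rng.random() < p[(node, nbr)]:
                    active.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return active

# Toy star graph; edge probabilities scale with the sender's degree,
# loosely echoing the "Connector" intuition from the text.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
p = {(u, v): min(1.0, 0.2 * len(adj[u])) for u in adj for v in adj[u]}
rng = random.Random(0)
sizes = [len(independent_cascade(adj, p, {0}, rng)) for _ in range(1000)]
print(sum(sizes) / len(sizes))  # estimated influence of seed set {0}
```

With the hub’s outgoing probability at 0.6, the expected cascade size from seeding the hub is 1 + 3 × 0.6 = 2.8, and the Monte Carlo average lands close to that.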

Posted in Topics: Education

No Comments

Cisco as a Social Networking Conglomerate

The Odd Couple of Social Networking

In early March, router giant Cisco purchased Utah Street Networks, the owner of the social networking site tribe.net. Cisco did not buy the actual social network community (which is composed mainly of those involved in Burning Man), but rather the underlying technology, so that it can focus “on infrastructure products to help digital media content owners improve the consumer experience.”

Marc Andreessen, the co-founder of Netscape, now runs his third startup, Ning, which allows social networking spaces to be designed and integrated into other social networking sites. He remarked that “the idea that Cisco is going to be a force in social networking is about as plausible as Ning being a force in optical switches.”

At first, I thought this was just friendly trash talking, because Cisco may be a direct competitor with Ning (both offer browser-based tools for developing social web apps), but Andreessen may have a point. What kind of crowd in the social networking scene actually likes large corporations? How can Cisco attract or create a social network? Not only are there plenty of social networking options on the internet already, but I also feel that anyone building a back-end has enough knowledge of PHP, ASP, and HTML to come up with all of this without paying Cisco.

Again, this comes down to profitability and the payoffs of the users. The product Cisco purchased may be on the order of a rounding error on its income statement, and may not have severely impacted its bottom line. However, if Cisco is unable to draw a crowd, which seems quite possible, there is no payoff for anyone to join. Also, everyone who wants a social networking space has most likely acquired one already and will be unwilling to switch. For now, we don’t know what Cisco plans to do with its recent acquisition. Maybe it will be geared toward the corporate environment, which, I believe, is the only market that will welcome the technology.

Posted in Topics: Technology, social studies

View Comment (1) »

Jaxtr: A Voice for Social Networking

On March 20, 2007, Jaxtr launched the public beta version of its free widget, which “allows users to connect their personal phone to their digital personality.” Users can put this widget on their favorite social networking sites and blogs such as Blogger, Craigslist, eBay, Facebook, Flickr, Friendster, LinkedIn, LiveJournal, MySpace, YouTube, and Wikipedia. During the three months it spent in private beta (December 14, 2006 - March 20, 2007), Jaxtr was tested by thousands of individuals “from more than 80 countries and included bloggers, real estate agents, lawyers, doctors, customer service agents, public service organizations and others.”

When a user clicks on the widget, they enter their mobile or land-line phone number. They then receive a call on that phone, and once they pick up, Jaxtr calls the phone number provided by the owner of the widget in order to join the two calls. Jaxtr stores voice mails as well, which can be accessed by simply calling into the service. Perhaps the most important feature of Jaxtr is that the phone numbers of all users, both those who initiate and those who accept calls, are kept private. When calls are made, unique numbers are assigned so that real phone numbers are never shared. And, for the time being, the service is free.
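Jaxtr hasn’t published its internals, so the following is purely a hypothetical sketch of how per-call proxy numbers could keep both parties’ real numbers private; every class name and number format here is invented for illustration.

```python
import itertools

class ProxyNumberDirectory:
    """Hypothetical sketch, NOT Jaxtr's actual design: each call pairing
    gets a unique proxy number, so neither party sees the other's real one."""
    def __init__(self):
        self._counter = itertools.count(1000)
        self._pair_by_proxy = {}

    def bridge(self, caller_real, owner_real):
        proxy = "+1-555-%d" % next(self._counter)
        self._pair_by_proxy[proxy] = (caller_real, owner_real)
        return proxy  # both call legs are joined via this number

    def resolve(self, proxy):
        return self._pair_by_proxy[proxy]

directory = ProxyNumberDirectory()
proxy = directory.bridge("607-555-0101", "607-555-0202")
print(proxy, directory.resolve(proxy))
```

Only the service’s directory ever maps a proxy number back to the two real numbers, which is what lets an anonymous online relationship carry over into a live voice call.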

In the past, social networking via the web has been very text-based. Over the last several years, however, talking over the web has become more mainstream and popular. When I think of talking over the web, though, I think of people speaking with individuals who they actually know. I feel like when people make new acquaintances on the internet, they prefer text-based communication, at least at first. Despite the increase in popularity of VoIP (Voice over Internet Protocol) applications such as Skype, I think it will take some time before, if at all, this form of communication overtakes text-based applications such as AOL Instant Messenger.

Unlike Skype, Jaxtr allows individuals to communicate using the physical phone of their choice, rather than being constrained by the internet’s speed or the sound quality it allows. Jaxtr presents the opportunity for new social networks to form and for weak ties to strengthen. Whereas before many users associated anonymous communication with text-based channels, Jaxtr allows people to communicate by voice while keeping complete anonymity. Furthermore, ties between weaker links could be strengthened by using Jaxtr, since it eliminates the need to be tied to your computer in order to speak with someone you would previously only chat with over the internet. Jaxtr preserves the anonymous relationship established online and carries it over to allow for live voice conversation.

Posted in Topics: Education, General

Comments (3) »

Google introduces Pay-Per-Action advertising

Google has launched a new AdSense/AdWords beta feature called “Pay-Per-Action” that lets advertisers pay for ads only after a user clicks on the ad and completes a certain “action”, such as buying a product or filling out an application. This seems like a natural progression from the newspaper-style cost-per-impression, through the internet’s pay-per-click model, to a system where advertisers pay for exactly what they want, which is for people to actually respond to their ads. This system is designed to give advertisers more control over their return on investment by allowing them to specify how much to pay when a user completes a desired action, as opposed to calculating their ROI using fluctuating conversion rates from clicks. It is now possible to simply state that you will pay $1 for every user who clicks an ad and signs up for your re-manufactured beanie baby store’s website, and $4 for users who actually buy one, which eliminates a lot of complications in setting ad prices. Another interesting aspect of this system is that conversions are tracked for 30 days, so that if someone clicks a pay-per-action ad and doesn’t complete an action, but comes back two weeks later and completes the defined action, the advertiser still pays. This seems like a very good feature, because internet shoppers will often search for alternative products and sellers before making a final purchasing decision, and a previously viewed advertisement can affect this decision.
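The pricing comparison can be made concrete. The click counts, conversion rates, and cost-per-click below are invented for illustration (the $1 sign-up and $4 purchase prices echo the beanie-baby example above):

```python
def ppc_cost(clicks, cost_per_click):
    """Pay-per-click: the advertiser pays for every click, converted or not."""
    return clicks * cost_per_click

def ppa_cost(clicks, actions):
    """Pay-per-action: pay only when a click leads to a defined action
    (converted any time within the 30-day window); actions is a list of
    (conversion_rate, price_per_action) pairs."""
    return clicks * sum(rate * price for rate, price in actions)

clicks = 10_000
print(ppc_cost(clicks, 0.25))                         # $2500 under pay-per-click
print(ppa_cost(clicks, [(0.05, 1.0), (0.01, 4.0)]))   # $900: 5% sign up at $1, 1% buy at $4
```

Under pay-per-action the advertiser’s spend tracks actual conversions directly, so no guessing about conversion rates is needed when setting prices.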

This relatively new advertising system brings up a host of questions about its motivation, usefulness, and possible pitfalls. One noted goal of the system is to cut down on click fraud, which can result from ad publishers fraudulently clicking the ads that they are hosting in order to get paid per click by the advertiser, which user t3khn3 recently wrote about. This problem can be mitigated if the advertiser only pays for actions that produce monetary benefit. For example, if you pay your ad publisher $1 every time a user clicks your ad and completes an order of $10 worth of beanie babies, then click fraud is non-existent, because any ad payment accompanies an actual product sale, which profits the advertiser. This is a major feature of the new advertising system, and it will be interesting to see whether this click-fraud reduction makes the method popular on the internet in the future.

Posted in Topics: Education

No Comments

Click Fraud: Easily and Frequently Done, Harmful to Both Search Engines and Advertisers

In our discussion of keyword-based advertising, we focused quite a bit on keyword auctions where search engines would sell ad slots to the highest bidding advertisers. In our model, the value of a slot to an advertiser was determined by the expected clickthrough rate multiplied by the value of each click to the advertiser. However, what happens if the value of each click to the advertiser is overestimated? The advertiser ends up paying for more than they get in return.
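The overestimation problem can be shown with a line of arithmetic. The numbers below are illustrative, not from the article: if a fraction of clicks is fraudulent and worth nothing, the true value of an average click, and hence of the slot, shrinks proportionally while the advertiser keeps bidding as if every click were honest.

```python
def slot_value(expected_clicks, value_per_click):
    """Value of an ad slot in the auction model: expected clickthrough
    volume times the value of each click to the advertiser."""
    return expected_clicks * value_per_click

def effective_value_per_click(value_per_honest_click, fraud_fraction):
    """If a fraction of clicks is fraudulent (worth nothing), the value
    of an average click shrinks proportionally."""
    return value_per_honest_click * (1.0 - fraud_fraction)

print(slot_value(100, effective_value_per_click(2.0, 0.0)))  # no fraud
print(slot_value(100, effective_value_per_click(2.0, 0.3)))  # 30% of clicks are fake
```

An advertiser who bids as if the slot were worth the no-fraud figure overpays by exactly the fraudulent fraction, which is the gap the lawsuits described below are about.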

As discussed in the Wired article, “How Click Fraud Could Swallow the Internet,” the pay-per-click model commonly used by search engines to generate revenue from advertising slots is easily and frequently exploited by click fraud. One big difference between our model and this one is that advertisers pay the search engine each time their ad is clicked. One type of click fraud involves a company’s competitor clicking on that company’s ads with the intention of using up its marketing budget. Another common variety involves a website that carries ads from a search engine and splits the pay-per-click revenue with it, where the website owner invites click fraud so he or she may profit from it. Fraudulent clicks are estimated to make up anywhere from 10-50% of all clicks, and some advertisers are not happy. A class-action lawsuit against Google, alleging that the internet advertising giant had overcharged advertisers without taking click fraud into account, was settled out of court for $90M.

The generic solutions for preventing the overcharging of advertisers include ignoring repeated clicks from the same IP source, discounting clicks that come in rapid succession, and applying probabilistic models. The more elegant methods are not discussed by search engines for security reasons. The techniques for generating click-fraud revenue are perhaps most interesting. Spammers can create spam blogs called splogs, which copy pieces of popular websites along with certain keywords, sign up as an affiliate of a search engine, and then link themselves to popular websites in order to be listed high among search results. Confused users end up on these sites and click on ads, which generates revenue for the site owners. Perhaps the most dangerous fraud technique is the zombie network, in which rogue hackers control multiple computers without the owners’ knowledge and can therefore create click fraud from different IP addresses and at random intervals. Such a network was recently discovered by Dutch police, in which 1.5 million computers were controlled by three men.
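The generic defenses mentioned (repeated-IP and rapid-succession filters) can be sketched naively; real engines use undisclosed, more sophisticated probabilistic models, so this is only an illustration with made-up click data.

```python
def filter_clicks(clicks, min_gap_seconds=5.0):
    """Naive billing filter: drop repeat clicks from the same IP source
    and clicks arriving in rapid succession; returns billable clicks
    as (timestamp, ip) pairs."""
    seen_ips = set()
    billable = []
    last_time = None
    for timestamp, ip in sorted(clicks):
        if ip in seen_ips:
            continue  # repeated click from the same IP source
        if last_time is not None and timestamp - last_time < min_gap_seconds:
            continue  # suspiciously rapid succession
        seen_ips.add(ip)
        billable.append((timestamp, ip))
        last_time = timestamp
    return billable

clicks = [(0.0, "1.2.3.4"), (0.5, "1.2.3.4"), (1.0, "5.6.7.8"), (20.0, "9.9.9.9")]
print(filter_clicks(clicks))  # only the first and last clicks are billed
```

A zombie network defeats exactly this kind of filter, since its clicks arrive from many different IP addresses at random intervals.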

The difficulty in both our auction model and pay-per-click is appropriately determining the value a click has to an advertiser. The big thing nowadays is the pay-per-action model, where an advertiser sets a value for a clicker performing some action on their site (such as filling out a form or buying a product) and then pays the search engine. This, however, requires more cooperation between the search engine and the advertiser. The fight between click spammers and search engines and advertisers is only likely to get bigger as internet advertising continues growing. As it grows, it will also become more interesting, as technological factions fight to break or protect the economic models that drive internet advertising.

Posted in Topics: Technology

No Comments