This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Networks and Sex

I found a very interesting paper by Fredrik Liljeros et al. titled “Sexual networks: implications for the transmission of sexually transmitted infections.” It explains how many epidemiological models that use ordinary differential equations (ODEs) to model the spread of sexually transmitted diseases ignore many important details in order to derive results analytically. However, many of these details are essential to really understand how a disease spreads through a population. For example, one of the assumptions these ODE models make is that sexual encounters among people are random; studies show that this is simply far from true.

One way to account for these important details is to construct a model by thinking of people as nodes and connecting people who have sex by edges. In other words, the idea is to work with a network. By constructing a sexual network based on real data, we can get a much better idea of what the dynamics of sexual relationships are like. For example, an important question that a network could answer is the following: studies show that within each community there is a small group of people who have much more sex than everyone else. Do the people in this group mostly have sex with people within the group, or mostly with people outside it? The answer to this question has been shown to make a big difference in how fast and how far a disease spreads. Clearly, understanding these dynamics is essential to understanding how a sexually transmitted disease spreads.

The authors of this paper make another very good point. If we lived in a completely monogamous world, it would be very hard for sexually transmitted diseases to spread, since they could not pass from one couple to another. Therefore, the most important aspect of a sexual network for this problem is how many nodes are connected to more than one other node. In other words, it would be better to study a network of concurrent sexual contacts. This new network is called a line graph, and it is derived from the original network by turning each edge between two people into a node and connecting two such nodes whenever the corresponding contacts share a person, that is, whenever one person has more than one partner.
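As a small sketch of that construction, here is a Python version using the networkx library; the set of contacts is invented for illustration:

```python
import networkx as nx

# A sketch of the line-graph construction described above, with an
# invented set of contacts. Each contact (edge) becomes a node, and two
# contacts are linked when they share a person.
contacts = nx.Graph()
contacts.add_edges_from([("Ann", "Bob"), ("Bob", "Carol"), ("Dan", "Eve")])

L = nx.line_graph(contacts)

# ("Ann", "Bob") and ("Bob", "Carol") are adjacent in L because Bob is in
# both contacts; ("Dan", "Eve") is isolated, so an infection in that
# couple cannot reach the rest of the network.
print(list(L.nodes()))
print(list(L.edges()))
```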

This paper was very interesting because it gave a direct application of networks and also discussed how it is useful to construct a more complex type of graph, i.e., the line graph, to study this specific problem. It can be found at http://amaral.chem-eng.northwestern.edu/.

Posted in Topics: Education

No Comments

Cascades and Pedestrians

While looking over some of the academic work done on information cascades, I came across a paper by Bikhchandani, Hirshleifer, and Welch which seems to have been the basis for, or at least an influence on, how Prof. Easley taught information cascades to us. The authors discuss High and Low signals, good and bad outcomes (here called desirable and undesirable), and Bayes’ rule. The paper itself is interesting if one wants to look further into some of the academic work done on this model.

http://www.jstor.org.proxy.library.cornell.edu/view/08953309/di014715/01p0058j/0?frame=noframe&userID=80549e6c@cornell.edu/01c0a8346c32ec118ee073e92&dpi=3&config=jstor

(It’s called “Learning from the Behavior of Others: Conformity, Fads, and Informational Cascades,” in Vol. 12, No. 3 of the Journal of Economic Perspectives.)
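Below is a rough Python simulation of the sequential model the paper describes, as I understand it: each person privately sees a High or Low signal that is correct with probability p, observes all earlier choices, and uses Bayes’ rule (here reduced to counting inferred signals). The accuracy p, the population size, and the tie-breaking rule, where an undecided person follows their own signal, are my own illustrative assumptions:

```python
import random

# Sketch of a Bikhchandani-Hirshleifer-Welch-style cascade. Parameters and
# the tie-breaking rule are illustrative assumptions, not from the paper.
p = 0.7               # probability a private signal matches the truth
N = 30                # number of people deciding in sequence
truth_is_good = True  # whether adopting is actually the better outcome

def simulate():
    inferred_high = 0  # High signals revealed by earlier informative choices
    inferred_low = 0
    choices = []
    for _ in range(N):
        got_high = (random.random() < p) == truth_is_good
        if inferred_high - inferred_low >= 2:
            # One private signal can no longer tip the balance: an "up"
            # cascade has started, so the person adopts regardless of
            # their signal, and their choice reveals nothing to others.
            choices.append("adopt")
        elif inferred_low - inferred_high >= 2:
            choices.append("reject")
        else:
            # No cascade yet: follow one's own signal (informative choice).
            if got_high:
                choices.append("adopt")
                inferred_high += 1
            else:
                choices.append("reject")
                inferred_low += 1
    return choices

print(simulate())  # typically ends in an unbroken run once a cascade locks in
```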

I want to suggest an application of this model. Cascades are interesting and fairly common. I remember seeing a few (or what I think could be considered cascades) just after lecture. If you head out after lecture from the ground level toward the Statler, there was a dirt path that is currently being turned into a concrete one (and which is cordoned off for now). For the past few months it has mostly been under snow and often muddy. Most people never took that path, preferring to walk the long way on the already-established concrete path.

Perhaps there were issues of conformity here. Maybe someone unconsciously followed everyone else. Maybe once one person in a group of friends went along the concrete path, everyone else in the group followed. Maybe during periods of snow the path’s shorter distance did not provide high enough utility in the face of the extra mud and snow that the walker would have to face. I think there were probably several factors at play in determining why a person chose a particular path, especially the longer, concrete one.

However, I can say that if a group of people was leaving (or heading to) the building and someone did decide to take the shorter path, whether it was snowy or muddy or what-have-you, that person was never alone. He or she would usually be followed by a few more individuals who would go across the path as well. In time there was enough of a trail that the administration decided to build a concrete pathway right over the shorter path.

Going back to the paper (and the lecture), it seems that the first intrepid souls who decided to trek over the unforgiving wastes made a decision based solely on the information they had (crummy path vs. cleared concrete path, and shorter time vs. longer time), but when those individuals decided to go either way, they created a signal to others about the potential utility of that path. This apparently led more people than before to follow the shorter but dirtier path.

Posted in Topics: General

View Comment (1) »

Decision Making and Information Cascades

http://tierneylab.blogs.nytimes.com/2007/10/09/how-the-low-fat-low-fact-cascade-just-keeps-rolling-along/

http://www.mnp.nl/images/EEM%20paper%20WJ_revised_tcm61-31259.pdf

In class we listed various examples of information cascades and also debated whether the herd mentality was a mainly irrational, blind-leading-the-blind phenomenon or whether it was based more on the unavoidable ‘binary mathematics’ behind informational cascades.

As irresistible as it is to ascribe much significance to the mathematics behind informational cascades, one must also consider the difference in cognitive effort devoted to different decisions. Wander Jager discusses this in his paper on consumer behavior, stating that “the less important a decision is to a consumer, the less cognitive effort he would expend on the decision.”

What does this imply? Depending on the consumer’s own focus, he might well make decisions based on simple heuristics rather than careful rational consideration if the decision is unimportant, like buying groceries or selecting which car wash to head for. On the other hand, if the decision is important, like buying a home entertainment system or a car, one would invest much more cognitive effort to understand the nuances among the choices and consider more signals and inputs from others before deciding.

Making decisions based on simple heuristics most often means just following the crowd, since “what everyone is doing can’t be wrong” or, alternatively, “since everyone is doing it, there is a baseline where if the product is bad, everyone loses.” Such heuristics are often oversimplistic, and therein lies the greatest criticism of the herd mentality: people often blindly and irrationally follow the crowd in their decision making.
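As a toy contrast between the two modes, here is a small Python sketch; the option names, weights, and scoring rule are invented for illustration and are not from Jager’s paper:

```python
# Low-effort vs. high-effort decision making, as a caricature. The 0.2
# weight given to the crowd in the deliberate case is an arbitrary choice.

def heuristic_choice(crowd_choices):
    # Low cognitive effort: just copy whatever most others are doing.
    return max(set(crowd_choices), key=crowd_choices.count)

def deliberate_choice(private_value, crowd_choices, options):
    # High cognitive effort: score each option by one's own assessment,
    # treating the crowd as only one weak input among others.
    scores = {opt: private_value.get(opt, 0.0)
              + 0.2 * crowd_choices.count(opt) / len(crowd_choices)
              for opt in options}
    return max(scores, key=scores.get)

crowd = ["brand A", "brand A", "brand B", "brand A"]
print(heuristic_choice(crowd))                              # brand A
print(deliberate_choice({"brand A": 0.3, "brand B": 0.9},
                        crowd, ["brand A", "brand B"]))     # brand B
```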

Information cascades do provide great insight into how rationally thinking people end up being part of a cascade, but it is wise to also consider that many people operate under simple heuristics when making unimportant decisions, and they ultimately contribute to the cascade and the herd mentality as well.

Posted in Topics: Education

No Comments

Information Cascades and Real Estate Bubbles

http://www.nytimes.com/2008/03/02/business/02view.html?_r=1&oref=slogin

Robert Shiller wrote the article above in the NY Times earlier this month, relating information cascades to the housing bubble in real estate investments. Shiller starts by criticizing economic experts and quotes Alan Greenspan as saying that he had “come to realize that we’d never be able to identify irrational exuberance with certainty, much less act on it, until after the fact.” This implies that market bubbles are made by “irrational exuberance,” when in fact they can be clearly linked to information cascades. Shiller makes this connection by pointing out that rational people also get caught up in these bubbles, which he then makes evident through definitions and examples from the scholarly paper “A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades” by Bikhchandani, Hirshleifer, and Welch of UCLA, which you can find at: http://www.jstor.org/view/00223808/di980598/98p00557/0

The emergence of any information cascade rests on the fact that every individual’s private information is incomplete and that each individual conveys their information only sequentially, through actions. The result of such situations is that rational people with information that is mostly accurate will still create an incorrect collective conclusion. Shiller ends by describing how a downward cascade, like a market bubble bursting, is likely in response to such an upsweep of real estate investment.

Many bloggers responded to this article, contesting Shiller’s claims about whether the housing bubble was in fact predicted, or whether it is really better to catch bubbles before they become too large. One points out that the recognition of a bubble is the very thing that causes the burst, and so it really should not be sought out. This response can be found at: http://www.crossingwallstreet.com/archives/2008/03/the_bubblephobe.html. Yet other blogs even argue against the existence of economic bubbles to begin with.

The Wall Street Journal today had an article similar to Shiller’s, discussing Alan Greenspan’s commentary in 1999 about the need, or lack thereof, to prevent impending bubbles. The article, “Wanted: A New Policy for Bubbles” by Jon Hilsenrath, seems to suggest that if the government is going to intervene after the bursting of bubbles, maybe it should take steps to prevent them in the first place.

Although there are many opinions on the status of market bubbles and what we should do about them, one thing is clearly uncontested: information cascades are the cause of bubbles, and it is an optimal choice for rational people to follow these trends. In order to prevent these bubbles, we would have to find a way to override rational behavior in these situations, and that would be no easy feat. In fact, some savvy investors may want the bubbles neither detected nor prevented, because they can exploit the cascade for economic gain.

Posted in Topics: General, social studies

No Comments

Information Cascades and Pop Culture

Information cascades can cause many bubbles in markets. However, they are also the reason why popular culture is so difficult to predict. This topic is discussed in an article in the NYT Magazine, “Is Justin Timberlake a Product of Cumulative Advantage?” by Duncan J. Watts.

It is a very common occurrence for a publishing company or a movie studio to reject a book or movie that later becomes extremely popular. Some prominent examples are “Star Wars” and “Harry Potter.” One may wonder why it is so difficult to predict whether a cultural product will become a hit. The practice of predicting success in cultural markets is based on anticipating the number of people who will have a preference for a certain product. Logical reasoning leads to the conclusion that if one were able to replicate a cultural success, then the replication should also become a hit. This reasoning, however, is based on a very important assumption: “when people make decisions about what they like, they do so independently from one another.” Watts claims that people hardly ever make decisions independently of others because, especially when it comes to culture, they want to share the experience with other people. For instance, you would most likely only decide to go see a movie if you had friends who wanted to see it with you. It is this type of human behavior that makes popular culture hits so unpredictable.

As a result of this social behavior, when one subject is more popular than another, it is said to have a “cumulative advantage,” also known as the “rich get richer” effect. When one subject happens to be more popular than another, the popularity of that subject increases even more. As a result, even minuscule early differences in popularity among several subjects will be blown up into large differences over time. It is just by chance that the currently popular subjects are hits today. If history were run over again, those subjects might start out just a little less popular and eventually end up ordinary rather than the stars they are today.
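Here is a minimal Python sketch of this “rich get richer” dynamic, with made-up numbers of songs and listeners: each listener picks a song with probability proportional to its current download count plus one, so tiny early leads compound, and re-running “history” crowns different winners.

```python
import random

# Cumulative advantage in miniature: song choice is proportional to
# downloads so far. All counts and sizes here are invented.

def run_history(n_songs=8, n_listeners=5000):
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        weights = [d + 1 for d in downloads]   # +1 so unplayed songs can win
        choice = random.choices(range(n_songs), weights=weights)[0]
        downloads[choice] += 1
    return downloads

# Each re-run of "history" tends to produce a different runaway hit.
for trial in range(3):
    print("history %d:" % trial, run_history())
```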

A web-based experiment was run to study this behavior. Participants were asked to listen to songs, rate them, and download them if they chose to do so. The music was always from bands they had never heard of. The participants were divided into two groups: one that saw only the names of the songs and bands, and another that also saw the number of times each song had been downloaded so far. This second group was used to examine the effect of knowing how other people decided on the songs. The experiment was run with each song starting at zero downloads.

One might reason that the most popular songs should reach the same magnitude of popularity in both arms of the experiment if people make decisions regardless of what other people like. One might also hypothesize that the same songs would become popular every time the experiment was run. However, the results showed that where social influence existed, the popular songs were much more popular than the popular songs in the other case. Also, different songs became popular in the two cases. The best-rated songs in the experiment had only a 50% success rate of making it to the top 5.

As can be seen from the results of this web-based experiment, it is nearly impossible to predict whether something will become a big hit or remain unpopular. Publishing companies may try to predict whether a book will become popular. However, as can be seen with “Harry Potter,” which was rejected by eight publishers before becoming as popular as it is today, information cascades make popularity far more unpredictable than it would be if people made their choices independently.

http://www.nytimes.com/2007/04/15/magazine/15wwlnidealab.t.html?_r=4&ref=magazine&pagewanted=all&oref=slogin&oref=slogin&oref=slogin&oref=slogin

Posted in Topics: General, social studies

No Comments

What’s in a price?

A study published earlier this year in the Proceedings of the National Academy of Sciences by Antonio Rangel of the California Institute of Technology indicates that humans have evolved subconscious mechanisms related to information cascades. Dr. Rangel conducted a study in which a group of volunteers was asked to drink and rate the taste of 5 different wines ranging in price from $10 to $90 a bottle. In reality, the volunteers tasted only 3 different wines: two of the five were repeated but listed with different prices.

In the past, similar studies have shown that, given a survey, humans will invariably rank (what they think to be) more expensive items as being of higher quality. What made Dr. Rangel’s study unique was that he used functional magnetic-resonance imaging to look at the volunteers’ medial orbitofrontal cortices. By measuring this brain activity, Dr. Rangel was able to see what his participants “thought” about the wine’s quality rather than just observing what they said in a survey. He was therefore able to measure the volunteers’ visceral, subconscious opinions.

The results showed a strong correlation between brain activity and the price the volunteers were told. This would indicate that people actually “enjoy” a wine more if they are told it is more expensive. Dr. Rangel repeated the experiment with the five supposedly different wines, but this time he didn’t tell the volunteers any prices. In this control group, the brain scanner showed differences between the three real wines but not between the repeated wines.

To answer the criticism that his volunteers were not wine experts (and to take a jab at a rival school), Dr. Rangel repeated the experiment with the Stanford University Wine Club. This experiment yielded results similar to those of the non-expert group: again, volunteers showed more brain activity when they were told a wine was expensive.

This study presents an interesting question: how can a wine expert’s brain chemistry change simply upon being told a phony price? Dr. Rangel believes the answer could lie in the evolutionary process: “The point of learning is to improve an individual’s chances of surviving and reproducing: if the experience and opinions of others can be harnessed to that end, so much the better.” Essentially, properly harnessing information cascades produces an evolutionary boost.

In many cultures (but especially in a capitalist economy), price is set by the market, the collective wisdom of the masses. By adjusting brain chemistry according to price, the mind is rationally inferring that a product deemed more valuable by society must, in fact, be more valuable.

This study shows that information cascades are rooted, at least in part, in our subconscious as an evolutionary construct. We now know that humans imitate on a more basic level than action: they imitate enjoyment. Our brains actually make an effort to imitate what “the crowd” feels.

Reference:

http://www.economist.com/science/displaystory.cfm?story_id=10530119

Posted in Topics: Education

No Comments

Choosiness and Cooperation in Human Behavior

http://www.nature.com/nature/journal/v451/n7175/full/nature06455.html

http://www.nature.com/nature/journal/v451/n7175/box/nature06455_BX1.html

“The coevolution of choosiness and cooperation” from Nature magazine

http://en.wikipedia.org/wiki/Evolutionarily_stable_strategy

Supplementary Wikipedia article: “Evolutionarily Stable Strategy”

The motivation for analyzing choosiness and cooperation between individuals is to seek a better understanding of biological systems and human societies. The article focuses specifically on the interactions that occur between non-relatives.

Choosiness and cooperation relate to the course through game theory, applied here to the “game of life,” described as a special case of the prisoner’s dilemma, which we explored in question 4 of problem set 2. The authors consider an infinite population, from which two individuals play several rounds of a game termed a “social dilemma.” The traits of a player are divided into two components: one for cooperativeness, x, the amount of effort used to create benefits for a co-player, and one for choosiness, y, the “minimum degree of cooperativeness that the focal individual is prepared to accept from its co-player.” In this model, the traits x and y are constant, unaffected by a co-player’s behavior. The model also gives players a certain payoff per round, W(x, x′), a function of each player’s effort, intentionally designed to create a conflict of interest in which each player wants the other to put in most of the effort. (The prime symbol refers to the other player.)

In receiving the payoff, each player becomes aware of the other player’s effort, or cooperativeness. This is the point in the round where the relationship either continues or ends. If the players are mutually acceptant of each other, i.e., x ≥ y′ and x′ ≥ y, then both players stay with each other into the next round, provided neither dies. However, if one of these two conditions fails, the pair breaks up and the survivors return to a pool of unpaired individuals. You can see that if, say, player 2 is not very cooperative and player 1 is very choosy (x′ < y), this implies a conflict of interest. New pairs are drawn randomly from this pool to participate in the next round. This model is much more complex than the ones discussed in class: it also considers reproduction, the cost of finding a new co-player, and distinctions between adult and juvenile mortality, all as part of a five-step population cycle model.

As in class, there is a Nash equilibrium solution corresponding to the expected behavior when individuals try to maximize their own payoffs. This can also be called a non-cooperative solution. In addition, there is a cooperative solution corresponding to the expected behavior when payoffs to pairs are being maximized, a solution not covered in class but which involves several concepts that we learned in the first few weeks of the course.

In analyzing this evolutionary game, we seek to identify an evolutionarily stable strategy (ESS), a sort of refinement of the Nash equilibrium such that once it is established in a population, natural selection alone is enough to bar alternative, or mutant, strategies from invading. The Nash equilibrium is modified in part to account for the effects of evolution, namely that the rationality associated with Nash equilibria is not appropriate in evolutionary settings. According to Wikipedia, “rational foresight cannot explain the outcomes of trial-and-error processes like evolution.” In this particular game, the ESS involves neither cooperativeness nor choosiness, as in the prisoner’s dilemma. This strategy is stable because being choosy has no payoff in a world where everyone is the same. As a result, there is no incentive to put in more effort than the Nash effort, because there is no risk of one’s co-player being unhappy with the match and deciding to dismiss the other.

What is interesting is that if a means of varying the characteristics of a population, such as a natural process like mutation, is taken into account, the result changes significantly and produces higher and higher levels of cooperativeness. The degree of cooperativeness depends greatly on the degree of variation. There are now advantages to dismissing uncooperative partners, because doing so increases the availability of cooperative ones. This relationship drives up the average level of cooperativeness in a society. One can argue that in response, the standards of the population rise, meaning that the degree of cooperativeness considered “cooperative enough” increases. This would cause more individuals to be dismissed for uncooperativeness, which would in turn further increase the levels of cooperativeness, and the process would continue in a positive feedback loop. If the payoff function is set just right, cooperativeness levels will reach the cooperative solution, which is equated to that of the continuous prisoner’s dilemma.
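Here is a crude Python sketch of the pairing-and-dismissal dynamic under stated assumptions: the payoff function W(x, x′) = 2x′ − x², the population size, the mutation size, and the reproduction rule are all my own illustrative choices, not the paper’s five-step life-cycle model. It only aims to show where variation enters the feedback loop described above.

```python
import random

# A crude sketch of the pairing-and-dismissal dynamic. The payoff
# function, parameters, and reproduction rule are illustrative
# assumptions, not the model from the Nature paper.

POP = 200          # population size
ROUNDS = 20        # pairing rounds per generation
GENERATIONS = 300
MUTATION = 0.02    # standard deviation of mutation noise

def payoff(x, x_other):
    # You benefit from your partner's effort and pay a quadratic cost
    # for your own: W(x, x') = 2x' - x^2.
    return 2.0 * x_other - x ** 2

def mean_cooperativeness(mutation):
    # Each agent is a pair [cooperativeness x, choosiness y].
    pop = [[0.0, 0.0] for _ in range(POP)]
    for _ in range(GENERATIONS):
        fitness = [0.0] * POP
        singles = list(range(POP))
        pairs = []
        for _ in range(ROUNDS):
            random.shuffle(singles)
            while len(singles) >= 2:       # pair up the unpaired at random
                pairs.append((singles.pop(), singles.pop()))
            surviving_pairs = []
            for i, j in pairs:
                (xi, yi), (xj, yj) = pop[i], pop[j]
                fitness[i] += payoff(xi, xj)
                fitness[j] += payoff(xj, xi)
                # Stay together only if each is cooperative enough for
                # the other's standards: x >= y' and x' >= y.
                if xi >= yj and xj >= yi:
                    surviving_pairs.append((i, j))
                else:
                    singles.extend([i, j])
            pairs = surviving_pairs
        # Reproduce in proportion to (shifted) fitness, with small
        # mutations in both traits.
        low = min(fitness)
        weights = [f - low + 1e-9 for f in fitness]
        parents = random.choices(range(POP), weights=weights, k=POP)
        pop = [[max(0.0, pop[p][0] + random.gauss(0, mutation)),
                max(0.0, pop[p][1] + random.gauss(0, mutation))]
               for p in parents]
    return sum(agent[0] for agent in pop) / POP

print("no variation:  ", round(mean_cooperativeness(0.0), 3))
print("with mutation: ", round(mean_cooperativeness(MUTATION), 3))
```

How strongly cooperativeness climbs in this toy depends heavily on the parameters; the point is only to show the mechanism, namely that without variation everyone stays at the Nash effort of zero, while with mutation choosiness can start to select for cooperators.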

Posted in Topics: Mathematics, Science

No Comments

Millions of Queries

Despite ongoing research and constant improvements to online search engines, no user is guaranteed the results he or she desires for any given search. Of course, we may sometimes believe that our computers can read our minds, what with advances such as AutoComplete and cookie-based recognition, but inevitably we find ourselves modifying search queries and/or scrolling through pages of result links to find exactly what we are looking for.

In 2006, researchers at the University of Michigan began compiling histories of individual queries from the AOL.com search engine (available at http://tangra.si.umich.edu/clair/clair/qla.html). For each of about 36 million queries, the following records were kept: an anonymous user ID, the user’s verbatim text query, the link that the user ended up choosing from the results page, and the rank of that link. Interestingly, the researchers began to notice that on occasion a link would appear at different ranks for the same query, or at the same rank for different queries. For the most part, however, they noted that the most popular links were typically ranked 1 or 2, and that these links could be reached from a variety of related textual queries.

It consequently appears that although we may believe computers can exactly interpret our needs as users, when dealing specifically with search engine queries, computers are programmed to display a certain list of results independent of the exact wishes of the user. The article notes, for example, that if I am trying to buy a car and simply query “car,” I am likely to find links pertaining to anything in the car world. Because the computer doesn’t know which type of car I would like to buy (or even that I would like to buy a car at all), I need to modify my search to, say, “Ford sedan” or “cars for sale.”

Finally, these “query sessions” are made into directed graphs, with one node representing the initial query and an edge from that node to the subsequent, modified query (and potentially another edge to a further modified query, and so on). Note that the size of a node is proportional to the number of times it appeared as a query. A tightly linked group of nodes is referred to as a cluster, and the graph as a whole is assigned a clustering coefficient based on the number and sizes of its clusters. When displayed as a directed graph G = (V, E), the results are rather interesting: clustering is readily visible, and the graph has a calculated clustering coefficient of about 0.35. However, when a computer randomly generated an equal number of queries (and a similar graph was made), the coefficient dropped to a mere 0.01, indicating that the likelihood of humans crafting their queries similarly is actually quite high.
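As a sketch of how such a graph might be built, here is a toy Python version using networkx; the sessions below are invented, and the study’s coefficients (0.35 vs. 0.01) come from its roughly 36 million logged AOL queries, not from this toy data:

```python
import networkx as nx

# A toy reconstruction of the query-reformulation graph described above.
sessions = [
    ["car", "cars for sale", "ford sedan"],
    ["car", "car insurance"],
    ["flowers", "flower delivery"],
    ["cars for sale", "used cars", "ford sedan"],
]

G = nx.DiGraph()
for session in sessions:
    for src, dst in zip(session, session[1:]):
        G.add_edge(src, dst)   # edge: query -> its reformulation

# Clustering is computed here on the undirected version of the graph.
print(nx.average_clustering(G.to_undirected()))
```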

I enjoyed reading this article and relating it to what we’ve recently studied in 204, because the ideas of PageRank and generalized second-price auctions (for slots) do little to address the needs of the user. This helps define the metaphorical line between actual and artificial intelligence, in that computers must be programmed to return results independent of users’ feelings or thoughts. That said, Michigan’s query logs show that the ranking method was often useful, in that many users chose links in position 1 or 2.

Also, I thought the transition from logs to a directed graph was quite unique (it prompted me to recall topics from earlier in the class). We see from this directed graph that many cycles exist, as users can begin and end at effectively any query. And although it seems that beginning with the query “car” would not likely lead a user to later querying, say, “flower,” this directed graph actually contains a global network, lending support to the small-world principle; the average distance between nodes in the graph is only about 2.5 edges. This helps solidify theories of a unified Web.

Posted in Topics: Technology

No Comments

Flaws in the PageRank Algorithm

Link: http://en.wikipedia.org/wiki/PageRank

I find the PageRank system that we discussed in class overly simple. Yes, it works for the simple networks we were looking at, but I really didn’t understand how it could be used on such a large scale. The Wikipedia page I found on the algorithm goes in depth on how it is put into practice, particularly the damping factor. I found it interesting that the damping factor is included to account for random clicks on links. But can it not be said that all the links you click on are random clicks? Even though the damping factor drives down the scores, I still feel it overcompensates for random clicking. Looking at the equation given, the probabilities are completely arbitrary as well. Does anyone realistically go through all the possible web pages for a certain subject and estimate the probability that a person clicked on each by mistake? So is the damping factor really doing anything that is not completely subjective? Google can only guess at these probabilities, and that fact alone gives the PageRank system some variability.
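For concreteness, here is a minimal power-iteration sketch of the PageRank formula from the Wikipedia page, run on a made-up four-page web; the damping factor d = 0.85 is the commonly cited value:

```python
# Minimal PageRank by power iteration, following the formula
#   PR(p) = (1 - d)/N + d * sum(PR(q) / outdegree(q) for q linking to p)
# The tiny four-page web below is invented for illustration.

links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

d = 0.85                       # damping factor
pages = list(links)
N = len(pages)
pr = {p: 1.0 / N for p in pages}

for _ in range(50):            # iterate until the values settle
    new = {}
    for p in pages:
        inbound = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / N + d * inbound
    pr = new

for p, score in sorted(pr.items(), key=lambda kv: -kv[1]):
    print(p, round(score, 4))
```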

The page also discusses the problem of false and manipulated PageRanks. The algorithm is flawed in that once someone figures out the system, there is no way to stop them from attaining high ranks. Ultimately, there can be topics for which the relevance of any page that comes up in the rankings is close to zero. An advertising firm could potentially take over a searched topic, blocking out any genuine links and replacing them with its own, by creating pages that link to each other. All it takes is the firm getting a link into one highly rated page and then linking to itself through as many pages as it is willing to create. I would not be surprised if there were a group of advertisers doing this successfully on topics that are not as popularly searched. Google has no way to check every single searched topic for spam, and advertisers know that. Although the algorithm seems to work, I feel that eventually every search will end up manipulated by advertisements, and no one will get the kind of relevant information they are actually looking for.

Posted in Topics: Education

No Comments

Online Advertising and the Monetization of Social Networks

In class last week we focused on the keyword-search-based advertisement auctions used by search engines such as Google, Yahoo!, and Microsoft. While the majority of these companies’ revenue comes through this type of advertising, all of the big players in the industry are looking for ways to monetize e-mail and social networks in a similar fashion. But the circumstances are different, making it far more difficult.

A recent article in this week’s Economist, “Online Social Networks: Everywhere and Nowhere” (http://www.economist.com/business/displaystory.cfm?story_id=10880936), addresses the issues involved in monetizing social networking. The big players in the search engine industry, such as Google, Yahoo!, and Microsoft, are constantly seeking smaller start-ups to acquire in order to increase their returns on online advertising. The rationale behind these acquisitions is that by owning more web pages, a company will have more advertising space to sell. However, advertising has proven to be more effective in some forums than in others.

Keyword-based searches have brought in the bulk of online advertising revenue, which makes sense, as they are the most efficient means of targeting Internet users. For keyword-based searches, most of the big players use the “generalized second-price” (GSP) auction to determine the order of the ads that appear when an Internet user enters a certain keyword.
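As a quick sketch of GSP’s mechanics, with invented bids and slot counts: slots are assigned in order of bid, and each winner pays the next-highest bid per click.

```python
# A minimal generalized second-price (GSP) keyword auction. The bids,
# slot count, and minimum price are made up for illustration.

bids = {"ad1": 4.00, "ad2": 3.00, "ad3": 1.00}   # per-click bids
slots = 2                                         # ad positions available

ranked = sorted(bids.items(), key=lambda kv: -kv[1])
for i in range(min(slots, len(ranked))):
    winner, _ = ranked[i]
    # Pay the bid of the bidder just below (or a floor price if none).
    price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.01
    print(f"slot {i + 1}: {winner} pays {price:.2f} per click")
```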

But a new trend of social networking has emerged, and online advertising firms are scrambling to take advantage of the large number of page views. Currently, on sites that are not search engines, industry players such as Google automatically determine the subject of a page and display ads from advertisers who have specified an interest in that subject. The ads show up in boxes resembling banner ads, with the designation “Ads By Gooooooooooogle.” This is the mechanism used on social networking sites, among others. (For more on Google’s advertising mechanisms, visit http://www.asiaing.com/the-maximum-effect-making-the-most-out-of-your-google-adwords-account.html.)

Based on recent transactions, it is clear that the Internet companies believe there are large revenues to be realized from advertising on the social networking sites. In 2006 Google engaged in a transaction with the number one online social network, MySpace, in which Google gained control of the placement of search-related ads on MySpace. Google also serves as the main MySpace search engine. Microsoft recently invested $240 million in Facebook, a transaction that values the company at $15 billion. The arrangement gives Microsoft control over the placement of banner ads on Facebook outside the U.S., where about 60% of Facebook’s 49 million active users reside (see article in Businessweek, http://www.businessweek.com/technology/content/oct2007/tc20071024_654439.htm?chan=search). This month AOL bought Bebo, a small but up-and-coming online social network, for $850 million.

The world of social networking has grown so large that it has led the big Internet companies to bid up valuations of social networking sites. However, unlike the keyword-based searches, online advertising in this area has not produced a working revenue model. Sergey Brin, Google’s co-founder, recently admitted that Google’s “social networking inventory as a whole” was proving problematic and that the “monetization work we were doing there didn’t pan out as well as we had hoped.”

Similarly, Microsoft has failed to realize gains on its investment in Facebook. Facebook’s new approach to social marketing, called Beacon, was an attempt to redefine the advertising industry, but it failed miserably. The idea was to inform users of their friends’ online activities, such as when a friend purchased an item from an online retailer. When the purchase was made, a small announcement ran inside the user’s “news feed.” Facebook thought this would be a new form of word-of-mouth marketing. However, users were upset by the violation of their privacy.

These giants have pursued these deals with the intention of profiting from online advertising, yet none of the deals has proven fruitful. This raises some questions:

• Have the Internet companies simply over-valued these social networking sites?

• If their valuations are correct, have they simply not yet found the means through which to monetize the networks? Although Beacon failed, might there be other ways of redefining online advertising, without violating users’ privacy, that have yet to be discovered?

• Or have the social networking sites not yet reached their growth potential, at which point revenues from online advertisements will be realized?

Clearly this method of advertising on social networks needs to be revised if revenues are to be realized. While it may not be possible for social networking to produce revenues equal to those of keyword-based searches, it still has a tremendous amount of utility. According to the article in the Economist, “Social networking has made explicit the connections between people, so that a thriving ecosystem of small programs can exploit this ‘social graph’ to enable friends to interact via games, greetings, video clips and so on.”

Websites and applications in which users play each other in Scrabulous, post their pictures, follow their friends’ travels, or play fantasy football have found a home on these social networks. The vast number of sites and applications linked to these social networks has made the networks huge hubs. Like the search engines, these social networks are becoming very powerful hubs with substantial authority. Though the Internet giants have yet to monetize these networks, their enormous valuations indicate that they see tremendous value and potential in them. It is conceivable that the revenues gained through them will be indirect: as they become large hubs, it will be the vast number of links through which the sites are monetized, using pay-per-click advertising. It will be interesting to see how these sites grow and whether the Internet giants can find new forms of online advertising that will let them meet their valuations’ expectations.

Posted in Topics: Education, Technology

No Comments