This is a supplemental blog for a course that covers how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Information Cascade Cause of Housing Bubble?

After reading about information cascades in the text, I searched for recent articles about this herd mentality and came across several that attribute the recent housing bubble to the phenomenon. In an article appearing in the New York Times, Robert Shiller, an economist at Yale University, attributes the failure to see the looming bubble to herd mentality. It was this inability to see the bubble coming that caused the recent destruction of financial markets in the United States and globally. Even Alan Greenspan, an expert on market conditions and movements, was swayed by the judgments and predictions of others rather than trusting his own information. Shiller writes, “Cascades can affect even perfectly rational people and cause bubble like phenomena - people sometimes need to rely on the judgment of others, and therein lies the problem.” Shiller contends that Greenspan and others suppressed their own opinions because incomplete information left them with little confidence in their own predictions. Information cascades ultimately begin with exactly this kind of incomplete information: without complete information, which is incredibly rare in real-world situations, one can never be sure of one's own decision, and at that point one becomes susceptible to falling into the herd-mentality trap.

Shiller further contends that houses reached unsustainably high values due to an information cascade among investors. Some home buyers ignored their own beliefs about the value of a house and instead deferred to the extraordinary values others placed on houses. This led to an escalation of home prices that produced the current bubble. As we have seen in other examples in class, the information passed down such a cascade is often incorrect, whether intentionally (information from the National Association of Realtors) or unintentionally. In fact, in the original paper presenting the idea of information cascades, Sushil Bikhchandani showed that the incorrect conclusion is reached 37% of the time in such a cascade.
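To get a feel for how often a herd can lock onto the wrong answer, here is a small Python sketch of the textbook sequential-decision model. It is a simplified illustration, not Bikhchandani et al.'s exact setup; the signal accuracy, the tie-breaking rule, and the number of agents are assumptions of mine.

```python
import random

def run_cascade(n_agents=100, signal_accuracy=0.75, true_state=1):
    """Simulate a simple sequential-decision cascade.

    Each agent receives a private signal that matches the true state with
    probability `signal_accuracy`, observes every earlier choice, and follows
    the crowd once the observed choices outweigh any single signal; otherwise
    the agent follows its own signal.
    """
    choices = []
    for _ in range(n_agents):
        signal = true_state if random.random() < signal_accuracy else 1 - true_state
        lead = sum(1 if c == 1 else -1 for c in choices)  # net support for state 1
        if lead >= 2:
            choice = 1          # up-cascade: the private signal is ignored
        elif lead <= -2:
            choice = 0          # down-cascade
        else:
            choice = signal     # no cascade yet: follow the private signal
        choices.append(choice)
    return choices[-1] == true_state    # did the herd settle on the correct action?

trials = 10_000
correct = sum(run_cascade() for _ in range(trials))
print(f"runs ending in an incorrect cascade: {1 - correct / trials:.1%}")
```

Even with fairly accurate private signals, a noticeable fraction of runs ends with everyone copying the wrong early choices, which is exactly the fragility Shiller is pointing at.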

While this may not be the sole reason for the housing bubble, or for the failure to see the bubble approaching, it is certainly an interesting and logical explanation. It will also be interesting to see how this effect alters the future of the housing market. As we know, there is a significant amount of fragility inherent in any cascade. One theory is that this same herd mentality will now rebound and drive home values below their true values.

http://www.nytimes.com/2008/03/02/business/02view.html?_r=2&ref=business&pagewanted=all&oref=slogin&oref=slogin

Posted in Topics: Education

Buying, Selling, and Trading in Azeroth

http://www.lewrockwell.com/orig7/villacampa1.html

With its sunny beaches, snow-covered mountains, lush forests, and unique deserts, there’s something for everyone in Azeroth. It wouldn’t be a bad place to visit either . . . that is, if it weren’t inhabited by orcs, elves, demons, undead, and the like. Yes, Azeroth exists in a game–in World of Warcraft, to be exact. However, just because it’s a game doesn’t mean the principles of the real world don’t apply to the virtual one. In fact, there is a thriving economy in WoW (granted, the economy is based on a bunch of 1’s and 0’s, but hey, so is ours with all the financial databases today). Now, there was already an earlier post about the auction house system in WoW (and how gold farming is actually a good thing). What I’m going to talk about now is another aspect of the WoW economy–the profession system, to be exact.

First off, in World of Warcraft, characters can choose to learn professions or tradeskills (such as mining, blacksmithing, jewelcrafting, tailoring, and even engineering). However, each character is limited to learning only two of the ten available professions, which means that by learning professions, characters are essentially specializing in something. Adding another layer of detail to the profession system is the fact that not everyone can make the same things (not every blacksmith can make that really big sword or that really cool helmet). The reason is that the patterns/schematics/recipes for craftable items have to be found, and naturally, some of them are harder to find than others.

So what does all this mean, and how is it related to economics? With thousands of players per server, each specializing in something, it doesn’t take much for trading networks to spring up between players in the game. The traders are the players with professions. The sellers are the players selling raw materials to the traders (such as iron ore to a blacksmith). The buyers are the players who wish to purchase the good that the trader can make (like a really big sword). Generally, the network involves a lot of sellers and buyers and very few traders. The abundance of sellers almost guarantees that the trader can purchase all the raw materials at a low price. And after crafting the good out of the raw materials, the number of buyers eagerly waiting to get their hands on that rare good (that really big and shiny sword) means the trader can choose to sell to the buyer with the highest value (and thus the one who would pay the most for it).

The means by which the trader goes about selling the item is also interesting to note. The trader can run an impromptu auction via the chat channels, having buyers message him with their values and selling to the buyer with the highest value (a first-price sealed-bid auction, if you will). Or the trader can put the item up for sale at the auction house. The auction house works basically like a first-price sealed-bid auction but with two twists: 1) there is a time limit, and 2) the player who posted the auction has the option of setting a buyout price (like the ‘buy it now’ feature on eBay). The more interesting of the two differences is the time limit. Because the auction ends at a set time and bids are effectively sealed, the time limit actually encourages bidding at the last second (or sniping).
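As a rough illustration of how such a sale resolves, here is a short Python sketch of a first-price sealed-bid auction with an optional buyout. The buyer names, gold amounts, and timings are invented for the example.

```python
def resolve_sale(bids, buyout_price=None):
    """Resolve a WoW-style trade-channel sale as a first-price sealed-bid auction.

    `bids` is a list of (buyer, gold_offered, seconds_elapsed) tuples.  If a
    buyout price is set and someone meets it, the earliest such offer wins
    immediately at the buyout price; otherwise the highest sealed bid at the
    deadline wins and pays its own bid.
    """
    if buyout_price is not None:
        buyouts = [b for b in bids if b[1] >= buyout_price]
        if buyouts:
            winner = min(buyouts, key=lambda b: b[2])   # earliest qualifying offer
            return winner[0], buyout_price
    winner = max(bids, key=lambda b: b[1])              # first price: pay your own bid
    return winner[0], winner[1]

# Made-up buyers, gold amounts, and timing; note the last-second bid.
bids = [("Thrall", 90, 5), ("Jaina", 120, 30), ("Arthas", 150, 58)]
print(resolve_sale(bids, buyout_price=140))             # ('Arthas', 140)
```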

So, despite the fact that World of Warcraft and other games exist in the virtual world, the fact that the games are played by humans means that economies and economic principles from the real world will inevitably spring up and be apparent within the virtual world.

Posted in Topics: General, Technology

AdWords and Demographic Bidding

Google recently released a new feature for their AdWords advertising – demographic bidding. This new feature allows advertisers to target their ads to users of a particular age group, by gender, or by combinations of these groups. The feature can be used in conjunction with cost-per-click (CPC: the highest amount you are willing to pay for a click on your ad) and cost-per-impression (CPM: the highest amount that you’re willing to pay for each 1000 impressions on your placement-targeted ad) bidding.

With demographic bidding, if an advertiser knows that primarily females aged 18 to 25 like to wear pink Nike shoes, the advertiser can target ads directly to that group by restricting the ads from other users. Demographic bidding gives the advertiser more control over the demographic group that sees their ad by letting the advertiser modify their bid and/or restrict their ad’s visibility based on their audience.

Bidding More For a Certain Demographic Group

To help their ad be seen by a particular demographic group, advertisers can boost their bid whenever their ad is eligible to be shown to a member of that group. This is done with the “Bid + %” system.

Google notes, “By entering a percentage of up to 500%, you’ll raise your bid by that amount for members of the group you choose. For instance: If your existing bid for a placement or keyword is $1.00 CPM, and you set your demographic bid multiplier for women to +25%, then your CPM bid for users identified as women would be $1.25. ($1.00 + $0.25 = $1.25.) In this way, you’ll be making a more competitive bid for the potential customers that mean the most to you.”
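The arithmetic Google describes is simple enough to capture in a few lines of Python. The function name and cap handling below are only my own illustration of the quoted example, not Google's code.

```python
def demographic_bid(base_bid, multiplier_pct):
    """Raise a CPC/CPM bid by a demographic multiplier, capped at +500%."""
    multiplier_pct = min(multiplier_pct, 500)            # Google caps the raise at 500%
    return round(base_bid * (1 + multiplier_pct / 100), 2)

print(demographic_bid(1.00, 25))   # the quoted example: $1.00 CPM + 25% -> 1.25
```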

As discussed in class, AdWords utilizes a variant of the second-price auction.
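For concreteness, here is a minimal Python sketch of that generalized second-price idea: rank the bids, and charge each slot winner the bid of the bidder ranked just below them. Google's real auction also folds in ad quality, so this is only an illustration, and the bids shown are made up.

```python
def gsp_allocate(bids, n_slots):
    """Generalized second-price sketch: rank bidders by bid, give slot i to the
    i-th highest bidder, and charge each winner the bid of the bidder below."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(n_slots, len(ranked))):
        bidder = ranked[i][0]
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((bidder, price))
    return results

print(gsp_allocate({"A": 2.50, "B": 1.75, "C": 0.90}, n_slots=2))
# [('A', 1.75), ('B', 0.9)]
```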

What Demographic Bidding Means

Google will be able to monetize contextual ads on participating sites such as MySpace and Friendster. In addition, demographic bidding steers Google in the direction of online profiling and lessens its dependence on contextual and search targeting. The recently acquired DoubleClick will also help provide online behavioral information to Google advertisers.

Posted in Topics: Technology

GoogleBot

In class we touched on how web pages get indexed and how search engines keep track of new pages. There were various sections on crawling, searching the Web, and ranking web pages. However, we did not quite cover how the most popular search engine does it today. Hence, I thought it would be a good idea to write an article on how Google is so successful at achieving this.

According to an article on the Internet from the University of California, Berkeley, millions of pages are added to the Web every day (7.3 million, to be more specific). So how is it possible to keep updating the search engine’s indexes to take the newly created pages into account? Google has a fairly common way of doing this, through what is called Googlebot. As the Wikipedia article on Googlebot describes, it is an automated script (also known as a web crawler) that browses the Web in search of newly created pages and pages that have been extended or updated. The frequency at which it does so varies between pages, reserving the highest frequencies for blogs, forums, and news articles, and the lowest for static web pages. Googlebot consists of two types of bots, one called Deepbot and the other Freshbot, which have related but distinct tasks. Deepbot is in charge of following links to discover pages, while Freshbot looks for extensions and updates to pages already indexed. Google achieves this by requesting and fetching pages through many computers at the same time. Once a new link/page is found, Google adds it to a queue, from which it retrieves pages and adds them to its index database, which is in turn organized alphabetically. The process then continues, since the pages still need to be ranked and matched to relevant queries; the ranking is done through PageRank, which is described in another article on this blog.
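To make the fetch-queue-index loop concrete, here is a toy breadth-first crawler in Python that uses only the standard library. It is obviously nothing like Google's production system (no politeness rules, no revisit scheduling, no ranking), and the seed URL in the comment is hypothetical.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href targets of anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def crawl(seed_url, max_pages=20):
    """Toy breadth-first crawler: fetch a page, queue its links, repeat."""
    queue, seen, index = deque([seed_url]), {seed_url}, {}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except Exception:
            continue                          # skip pages that fail to fetch
        index[url] = html                     # "index" the page (here: just store it)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index

# pages = crawl("http://example.com/")        # hypothetical seed URL
```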

However simple these processes may sound, Google actually encounters numerous problems while indexing new pages, since spammers take advantage of them to get pages full of advertisements indexed. The methods vary, ranging from plain Add URL spam (submitting multiple pages directed to propaganda) to cloaking, in which a page shows the crawler different content than it shows users so that it gets matched to queries it shouldn’t. Thus, Googlebot must be able to deal with such situations so that such pages are neither returned for a query nor ranked among the first results. The algorithm is quite intricate and must be updated continuously, since spammers come up with new ways of doing this on a daily basis (it is sometimes called a war, with spammers trying to defeat the system and search engines trying to deliver a good service).

This article covers some of the topics discussed in lecture while giving a more specific example of how web crawling happens. In lecture we discussed the general idea behind it; with this article I think I provide an idea of how it is actually done in one of the search engines, thus contributing to the explanation of web crawlers. Additionally, it touches on another type of network: that between the spammers and the search engine, a constant interaction that, although not a good one, provides an example (like those at the beginning of the course) of the interaction between two groups.

Posted in Topics: Education

Traffic Routing in the Cellphone/GPS Age

The equilibrium-based analysis of traffic routing that we studied earlier in the semester (in connection with Braess’ Paradox) assumes that the traffic delay functions are known in advance by all drivers. Drivers are assumed to be taking the same trip day after day, to experiment with alternative routes, and to develop an instinct about the associated delay functions.
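As a refresher on that equilibrium analysis, here is a small Python sketch of best-response route choice on the textbook-style Braess network. The delay functions and the 4000-driver figure follow the standard classroom example; the crude batch dynamics are my own simplification.

```python
def assign(routes, edge_delay, n_drivers, batch=10):
    """Crude best-response dynamics for route choice.

    `routes` is a list of routes (each a list of edge names) and `edge_delay`
    maps an edge name to a function from that edge's flow to its delay.
    Drivers are added in small batches, each batch taking whichever route is
    currently fastest given the flows already assigned.
    """
    edge_flow = {e: 0 for e in edge_delay}
    route_count = [0] * len(routes)

    def route_time(route):
        return sum(edge_delay[e](edge_flow[e]) for e in route)

    for _ in range(0, n_drivers, batch):
        best = min(range(len(routes)), key=lambda i: route_time(routes[i]))
        route_count[best] += batch
        for e in routes[best]:
            edge_flow[e] += batch
    return route_count, [round(route_time(r), 1) for r in routes]

# Textbook-style Braess network: 4000 drivers travel from A to B.
delays = {
    "A-C": lambda x: x / 100,   # congestible link
    "C-B": lambda x: 45,        # fixed 45-minute link
    "A-D": lambda x: 45,        # fixed 45-minute link
    "D-B": lambda x: x / 100,   # congestible link
    "C-D": lambda x: 0,         # the "free" shortcut
}
print(assign([["A-C", "C-B"], ["A-D", "D-B"]], delays, 4000))
# roughly a 2000/2000 split, about 65 minutes on either route
print(assign([["A-C", "C-B"], ["A-D", "D-B"], ["A-C", "C-D", "D-B"]], delays, 4000))
# once the shortcut exists, everyone crowds onto it and travel time rises to ~80 minutes
```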

In reality, drivers not only do not know the traffic delay functions but may not know all of the route alternatives available to them. The popularity of GPS devices is an indication that drivers are interested in better guidance. But what happens to traffic flows as this guidance becomes available?

An Intelligent Transportation System (ITS) is “a combination of electronics, telecommunications and information technology to the transportation sector for improving safety and travel times on the transportation system,” according to the Michigan Department of Transportation. While in the past such systems have consisted of speed detectors in the roadway, central computers, and electronic signage to tell drivers which way to go, the advent of cell phone-based GPS systems suggests a different, less expensive, and possibly more efficient infrastructure. Drivers can use the GPS feature to compute shortest routes, and in combination with cell-based communication, the impact of traffic on route times can be reported back so that the route computation takes these real-time traffic reports into account. However, traffic volumes can change as a route is traversed (partly as the result of multiple travelers getting the same local route directions), so a predictive mechanism is also required.

On a recent trip I experienced the current, limited state of the art. I needed to travel from a point in northern New Jersey to a northern suburb of Philadelphia. Google Maps, which I can access from my cell phone, showed the fastest route to be via the Garden State Parkway, the New Jersey Turnpike and the Pennsylvania Turnpike. However I knew from experience that at some times of the day various segments of this route become congested, which called into question the optimality of the route recommended by Google.

In fact, as my departure time neared, Google Maps reported increasing delays on the route it recommended, but was not able to recommend an alternative route. I knew of a “Shunpike” route via state highways and forced Google Maps to compute the travel time for this alternative (by specifying an intermediate point). By departure time, the Shunpike travel time was already shorter than the Turnpike route Google continued to recommend! Clearly Google Maps does not take traffic delays into account in making its recommendations. In this respect it is no better than a standalone GPS system - the communications capability and the central availability of data are not being fully exploited.

One might expect that Google will eventually integrate its route calculations with its traffic measurement capabilities. It probably gets its traffic measurement data today (through intermediaries) from real time monitoring by inductive loops in the road, but it seems increasingly likely that the best traffic delay information will eventually be deduced from real travel speeds recorded by individual GPS units and reported to a central computer (it is uncanny how reliably GPS units report current speed). Such reports would cover a much greater fraction of all roads and would be very current. This idea is currently being tested, by Nokia, not Google - see http://www.latimes.com/news/local/la-me-gpscars9feb09,0,4765729.story. The article reports that some transportation officials are skeptical, one claiming that people will find uncongested alternatives on their own.

Note that if individual GPS units are to compute traffic-based fastest routes, data on travel speeds for all pertinent links must also be downloaded or broadcast to the unit in the car. This suggests that route computations might better be carried out centrally, based on the most current traffic speed estimates garnered from GPS-equipped cellphones.

Even with current link speed data, however, the system described so far might not produce the best routes. In the example given above, I was able to confirm that delays continued to build up on the Turnpike route during the course of the day. Had I taken that route, I would have encountered delays even greater than those predicted at my starting time. And delays might have been building up on the Shunpike route as well. Clearly, a predictive mechanism is needed to estimate future travel times accurately and compute optimal routes. (Contrast this with the fact that my primitive standalone GPS is not able to learn and predict from my own speeds on the routes that I take, even in the absence of traffic - it greatly underestimates the typical speeds on non-Interstate roads, nearly always recommending that I take the Interstate when non-Interstates get me there much sooner.)

Link travel time prediction is the subject of a paper from the University of Michigan ITS Research Center of Excellence, published in IEEE Transactions on Intelligent Transportation Systems (see http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/6979/18819/00869017.pdf?temp=x via the Cornell library web site for the full paper). This paper proposes an approach which simulates traffic flows based on reported link travel times, computes optimal routes based on these travel times, updates the link travel times, and iterates to convergence, which the paper argues will occur. The approach includes a backdating of simulated link travel times that is claimed to overcome the inaccuracy resulting from the fact that link times are inherently out of date by the time they are used.
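The iterate-to-convergence idea can be sketched in a few lines. The Python toy below is my own simplification (two parallel links and a method-of-successive-averages update), not the paper's algorithm, but it shows how repeatedly re-routing on predicted times can settle into a consistent set of predictions.

```python
def predict_link_times(demand=1000, iters=200):
    """Toy fixed-point iteration for travel-time prediction.

    Two parallel links share the demand.  Repeatedly send all drivers onto the
    link currently predicted to be faster, then average that assignment into
    the running flows (the 'method of successive averages') so the predicted
    link times settle down instead of oscillating.
    """
    delay = [lambda f: 20 + f / 50, lambda f: 30 + f / 100]  # made-up delay functions
    flows = [demand / 2, demand / 2]                          # initial guess
    for k in range(1, iters + 1):
        times = [delay[i](flows[i]) for i in range(2)]
        target = [demand, 0] if times[0] <= times[1] else [0, demand]
        step = 1 / k
        flows = [(1 - step) * flows[i] + step * target[i] for i in range(2)]
    return [round(f) for f in flows], [round(delay[i](flows[i]), 2) for i in range(2)]

print(predict_link_times())
# the flows converge toward the point where both predicted travel times are equal
```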

The ITS literature, and the Michigan paper in particular, distinguishes between centralized and decentralized route guidance systems. A key problem that must be overcome with any intelligent routing system is the ‘all or nothing’ nature of routing advice. Think of a widely heeded helicopter traffic report that broadcasts delay reports on one Hudson River crossing into Manhattan, thereby directing traffic to another crossing that quickly becomes congested — while travel actually becomes faster on the crossing that was reported to be congested.

According to the University of Michigan article, “Allocation of routes under high market penetration is projected to be more stable under a centralized architecture because the route guidance service provider can control more precisely the number of vehicles routed onto a specific route.” For example, a system in which routes are calculated centrally can randomly direct different motorists to take different routes between the same pairs of locations, thereby coming closer to network equilibrium. The Michigan authors claim that broadcasting link delays based on their simulation would bring about equilibrium in a decentralized way, with individual GPS units computing optimal routes based on the predicted link delays (essentially replicating what the central simulation has accomplished in advance).

If a routing system that uses real time data compiled from cellphone-based GPS units to recommend routes becomes widely used, a centralized approach does have an additional advantage that I have not seen mentioned in the literature: if a large number of drivers request routes, the centralized system will have a basis for estimating intended travel between pairs of points, usually called origin-destination (O-D) information. Getting such O-D information is ordinarily quite difficult, because measured link traffic volumes may not be sufficient to accurately compute it (given O-D information, link volumes can be computed by matrix multiplication, but it may not be possible to invert the matrix to retrieve O-D information from link volumes). Although some drivers might end up cancelling or delaying their travel plans once they see the optimal time predictions, the raw data can probably be adjusted to yield good estimates of traffic loading when combined with the simulation.
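The parenthetical point about matrix multiplication is easy to see in a tiny made-up example: link volumes follow directly from the O-D demand and the routes, but two different demand vectors can produce identical link counts, so the inverse problem is ill-posed.

```python
import numpy as np

# Hypothetical network: 3 O-D pairs routed over 2 links.  A[i][j] = 1 when O-D
# pair j's route uses link i.  Link volumes follow from O-D demand by a matrix product...
A = np.array([[1, 0, 1],
              [0, 1, 1]])
od_demand = np.array([500, 300, 200])   # trips for each O-D pair (made-up numbers)
print(A @ od_demand)                    # link volumes: [700 500]

# ...but the reverse is ill-posed: a different demand vector gives the same link
# counts, so link volumes alone cannot pin down the O-D information.
other_demand = np.array([400, 200, 300])
print(A @ other_demand)                 # also [700 500]
```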

It remains to be seen whether drivers would be willing to report their travel plans, in the form of requests to Google Maps or the equivalent. Here again, the issue of privacy that permeates our course can be raised. However if enough drivers are willing to sacrifice privacy in return for excellent routing recommendations (and confidentiality promises from Google), traffic flow through existing networks and steps to eliminate bottlenecks might well be improved. Here in the 10 square miles of Ithaca we are fortunate that we do not need to worry much about traffic, but when driving in the surrounding reality, technology to beat traffic congestion would be highly welcome.

STOP PRESS: While this post is speculative and attempts to predict future developments, shortly after it was posted the Wall Street Journal published an article about a GPS system from Dash Express (http://www.dash.net/) that has many of the characteristics discussed here, though it may lack the predictive capability. The Wall Street Journal article is linked from the company’s web site.

Posted in Topics: Education, Mathematics, Technology

Information Cascade in Medicine: the fat error

http://www.nytimes.com/2007/10/09/science/09tier.html?_r=2&em&ex=1192248000&en=9f36687fe8aef756&ei=5087%0A&oref=slogin&oref=slogin

In yesterday’s lecture, we began discussing information cascades. The New York Times article referenced above highlights a very interesting cascade whose effects are still prevalent today. In the article, John Tierney discusses the book “Good Calories, Bad Calories” by Gary Taubes, which describes an inaccurate cascade about heart disease that originated in the 1950s. At that time a diet researcher named Ancel Keys asserted that more Americans were suffering from heart disease because of an increase in fat consumption.

Taubes’ account reveals an alarmingly powerful cascade. A committee that included Dr. Keys effectively reversed the American Heart Association’s report about the link between fat and heart disease. This in turn led to similar assertions in a Senate committee report, the U.S.D.A.’s “food pyramid”, and statements from the National Institutes of Health. Researchers who pointed out that the claim lacked sufficient evidence were rebuked and alienated.

In light of this information, I would like to point out the significance of network position with respect to the cascade. Dr. Keys’ position as a committee member in the American Heart Association undoubtedly affected the magnitude of the cascade. When this association, a prominent figure in U.S. health circles, put out the report, it easily led other prominent groups (e.g. the U.S. Senate, the U.S.D.A.) to propagate the cascade. It thus seems clear that having a central role in a network lends a person the power to initiate far-reaching information cascades.

In any case, Tierney asserts that cascades occur frequently in the field of medicine. We may all want to think twice next time we hear a health report, no matter how widespread it may be.

Posted in Topics: Education, Health

Online Ad Spend Continues to Grow

http://www.utalkmarketing.com/pages/Article.aspx?ArticleID=4789&Title=Recession_fears_impact_on_online_ad_spend

“Recession fears impact on online ad spend”

Despite the fact that the U.S. economy is gradually falling into recession, market research specialists still estimate that advertisers will increase their online spending by 23% over 2007, to $25.8 billion. There are several key factors behind this phenomenon, including better understanding of the audience, more effective ad placements, and easier purchasing for advertisers.

Take the four most popular online search engines, for example: Google, Yahoo!, MSN, and AOL are continuously improving on the key factors above in order to stay competitive, which drives up per-click prices as well. They are aiming to become one-stop shops for advertisers by integrating multiple functions, such as targeting and tracking, into their ad networks. This claim is further supported by Google’s recent purchase of DoubleClick, an ad serving and tracking company, and AOL’s 2007 purchase of Adtech, a German ad-serving company.

As the search engines are determined to provide better and more convenient services, and advertisers are always willing to compete for that top slot, it is not hard to see why internet advertising is such a promising business.

Posted in Topics: Education

YouTube and Network Ties

http://www.cnn.com/2008/TECH/ptech/03/21/youtube.awards.ap/index.html

This news article reports on the videos that won YouTube’s awards; users nominate six videos for each of the site’s twelve award categories: music, sports, comedy, instructional, short film, inspirational, commentary, creative, politics, series, eyewitness, and “adorable.” The awards have allowed unknown artists to gain fame and fortune; Tay Zonday, for example, saw his song “Chocolate Rain” get him booked on national TV shows. To many people, the YouTube awards are like the new Emmys.

This article relates to the network concepts discussed in class. Before the YouTube era, unknown artists such as Tay Zonday had limited means of publishing their work to the world. In fact, their only route was through record companies such as EMI or Sony BMG, which put them in a weak position to profit from their work. The record company has to like their work first, and then consumers have to decide whether the particular artist is any good. Even if both the company and the public love the work, the artist is still in a weak position, since the record company that puts out the work has a monopoly over it: the artist has only one tie, to one record company, while that company has ties to multiple successful artists and to the public. If the record company decides to cut off this particular artist, the artist won’t make any profit. Things are different now that YouTube exists. Artists can publish their work without record companies, and if people like what an artist has to offer, the artist gains popularity and multiple record companies compete to sign him or her. That gives the artist multiple ties to multiple record companies, each with the same ties to the general population, which in turn gives the artist more power to negotiate for more profit: if one company decides to break its tie with the artist by not signing a deal, the artist has other alternatives.

Posted in Topics: General

Facebook: The Ultimate Social Networker?

In a social network, there is a tendency for two people with a mutual friend to form some sort of link. This is known as triadic closure. The basic principle behind this idea is that new links are likely to form between people with one or more common friends due to a number of factors: proximity/opportunity, or homophily (friends having similar attributes). Whatever the case may be, Facebook, in a nutshell, offers a quick and easy guide to who a person is and who they affiliate themselves with in the real world. Although Facebook is one of the most popular social networking tools, there are two sides to this networking machine: it can build social ties in a positive or a negative manner. According to the article linked below, job candidates who maintain a profile on Facebook are beginning to understand that their personal images and the way they present themselves to their friends may hinder their chances at getting the job they desire.

The fact that Facebook offers an individual’s “personal” information to an array of different audiences has been shown to create both friends and enemies in a social network. The truths and details of personal information may leak to people who were never meant to see them. In this particular article, many individuals who were well qualified for jobs ended up not getting the positions based solely on the way they depicted themselves on Facebook. Many Facebook users “often don’t expect their personal information to be monitored by potential employers, and many consider their online profile information to be private.” It seems as though Facebook’s quick and easy approach to making new ties and social networks with different people may in fact be its true downfall. Too much private information has the ability to fall into the wrong person’s lap at the wrong time.

Facebook, therefore, can give rise to two different groups of friends with opposing feelings toward a member of one of the groups. In this example, both groups form complete graphs with balanced triangles: in the real world, one group would be the hiring committee and the other would be all of the friends of the Facebook user trying to get the job, and everyone in each group is friends with everyone else in it. The only way to balance the network across the two groups is for each member of one group to be an enemy of each member of the other. In turn, each member of the hiring committee is an enemy of the Facebook user trying to get the job, and of any of that user’s friends who portray themselves the same way on their own Facebook accounts (if they are also trying to get the job). If this scenario were to get even more extreme, the individual’s reputation might even follow him to other job interviews, creating more negative ties in the social network. In conclusion, Facebook is a great social networking tool, but its great power also allows social networking to get out of hand in some cases.
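A quick Python sketch makes the balance argument concrete: with two internally friendly groups and only negative ties between them, every triangle comes out balanced. The group members and signs below are invented to mirror the hiring scenario.

```python
from itertools import combinations

def balanced(s1, s2, s3):
    """A signed triangle is balanced when it has an odd number of '+' edges
    (three mutual friends, or two friends sharing a common enemy)."""
    return [s1, s2, s3].count("+") % 2 == 1

def sign(edges, a, b):
    return edges.get((a, b)) or edges.get((b, a))

def all_triangles_balanced(nodes, edges):
    return all(balanced(sign(edges, a, b), sign(edges, b, c), sign(edges, a, c))
               for a, b, c in combinations(nodes, 3))

# Invented mini-version of the scenario: a hiring committee and the applicant's
# circle.  Edges inside each group are '+'; every edge between groups is '-'.
committee, applicants = ["c1", "c2"], ["applicant", "friend"]
edges = {}
for group in (committee, applicants):
    for a, b in combinations(group, 2):
        edges[(a, b)] = "+"
for c in committee:
    for f in applicants:
        edges[(c, f)] = "-"
print(all_triangles_balanced(committee + applicants, edges))   # True
```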

Reference: http://www.msnbc.msn.com/id/20202935/

Posted in Topics: Education

Factors Changing the Auction World: Bid Timing and Software Agents

The second-price (sealed-bid) auction runs the same as the first-price auction, except that the winner pays the second-highest bid. Today, eBay runs one of the most popular second-price auction markets (along with Google, Amazon.com, AuctionWatch.com, Yahoo!, and others). In eBay’s version of the second-price auction, each buyer specifies a maximum bid, and the current price is then set to the second-highest bid plus a minimum bid increment. But how does the timing of bids favor or disfavor buyers in an ordinary eBay auction?
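Here is a minimal Python sketch of that pricing rule. The increment, starting price, and bidder values are made up (real eBay increments vary with the price level), so treat it as an illustration of the idea rather than eBay's actual system.

```python
def ebay_current_price(max_bids, increment=0.50, start_price=1.00):
    """Sketch of eBay-style proxy bidding: the highest maximum bid leads, and the
    displayed price is the second-highest maximum plus one bid increment, capped
    at the leader's own maximum."""
    if not max_bids:
        return None, start_price
    leader = max(max_bids, key=max_bids.get)
    amounts = sorted(max_bids.values(), reverse=True)
    if len(amounts) == 1:
        return leader, start_price
    return leader, min(amounts[0], amounts[1] + increment)

print(ebay_current_price({"alice": 25.00, "bob": 18.50, "carol": 12.00}))
# ('alice', 19.0): alice leads, paying the second-highest bid plus the increment
```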

According to the paper linked below, how well different types of software bidding agents perform depends greatly on the rules of the game. Since eBay has a fixed deadline, certain incentives may persuade bidders to bid late. In these cases, artificial last-minute bidding agents (e.g. esnipe.com) help bidders out-compete other bidders based on their submitted bids, though this depends on the behavior of the other bidders (strategy comes into play). This disrupts the goal of the second-price auction, in which a bidder is supposed to bid their own true value and not be swayed by other bidders. Is the dominant strategy of eBay’s second-price auction really to bid your own value for the object, so that you do not lose by over- or under-bidding? Or have these artificial agents changed the scheme of bidding in eBay auctions?

It is clear that the strategies and incentives of different market participants are being changed by the availability of bidding agents and by changes in bidder behavior. As time progresses and bidders gain experience in the market, the demand for software bidding agents is increasing dramatically. eBay’s fixed deadline allows market participants to gather information about the highest bids and the deadlines for the objects being sold, which is where sniping and other bidding agents come into play. They are changing the way eBay’s second-price auction works (by allowing participants to outbid others at the last minute). Although sniping allows people to spend more or less than their true value on an object for sale on eBay, the dominant strategy for a second-price auction remains bidding your true value: regardless of what other people do, bidding your true value is the only strategy that never makes you pay more than your own value or lose a potential gain.

Reference: http://kuznets.fas.harvard.edu/~aroth/papers/eBay.ai.pdf

Posted in Topics: Education
