This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Doping is a Dominant Strategy

Drugs: Sports’ Prisoner’s Dilemma

The use of performance-enhancing drugs is a major issue in professional sports. The fact is that drug testing seems to be lagging behind evasion methods: as soon as there is an improvement in drug-testing technology, a new undetectable performance-enhancing drug is developed and ready for use.

The problem is that doping is a dominant strategy for professional athletes. Consider the game represented below. Athletes 1 and 2 each have the choice to either dope (D) or not dope (N). If Athlete 2 dopes, then Athlete 1 should also dope in order to compete. If Athlete 2 does not dope, then Athlete 1 should still dope in order to gain a competitive advantage. Therefore, doping is a dominant strategy for Athlete 1, and the same logic applies to Athlete 2. However, both athletes would be better off if neither chose to dope, because neither would have an advantage over the other and there would be no possibility of testing positive for performance-enhancing drugs. Because of this game structure, athletes continue to use performance-enhancing drugs in order to remain competitive.
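To make the dominant-strategy argument concrete, here is a minimal sketch in Python with hypothetical payoff numbers (the actual payoff table is not reproduced here). It checks Athlete 1's best response to each of Athlete 2's possible choices.

```python
# Hypothetical payoffs for the doping game; higher is better.
# D = dope, N = not dope.  Entries are (Athlete 1's payoff, Athlete 2's payoff).
payoffs = {
    ("D", "D"): (1, 1),
    ("D", "N"): (4, 0),
    ("N", "D"): (0, 4),
    ("N", "N"): (3, 3),
}

def best_response(opponent_choice):
    """Athlete 1's best reply to a fixed choice by Athlete 2 (the game is symmetric)."""
    return max(["D", "N"], key=lambda mine: payoffs[(mine, opponent_choice)][0])

for other in ["D", "N"]:
    print(f"If Athlete 2 plays {other}, Athlete 1's best response is {best_response(other)}")
# Doping (D) is the best response either way, so it is a dominant strategy,
# even though (N, N) gives both athletes a higher payoff than the equilibrium (D, D).
```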

The way to stop athletes from doping is to store current blood and urine samples for future testing, when more advanced drug-detection techniques are available. Athletes know which drugs can be detected now and are able to avoid detection. However, they do not know which drugs will be detectable in the future. The only safe choice for athletes would be to avoid all performance-enhancing drugs.

Posted in Topics: Education

No Comments

Google’s Automatic Matching

Following up on naota’s post about how Google’s algorithms maximize revenue rather than social welfare, I will focus on the particular aspect of automatic matching. In late February of 2008, Google informed a group of advertisers that they were selected to participate in the automatic matching beta, in which Google uses an advertiser’s surplus budget on queries that it deems relevant but that are not currently on the advertiser’s keyword list. For example, if your keyword bid was on “running clothing”, the query “jogging pants” would display your ad. Similarly, a keyword bid on “craft for kids” would result in your ad being displayed for “things make”. However, the question is how relevant are “craft for kids” ads when a user is searching for “things make”? Clearly, advertisers are likely to suffer because they are potentially paying even though they are failing to reach their targeted audience.

Unsurprisingly, Google maintains secrecy over the exact mechanics and algorithms underlying its auction procedure for advertisements. However, the natural questions that follow are how much advertisers should pay if their advertisement is clicked solely because of automatic matching, and what slot the advertisement should be placed in. This adds a new twist to the auction procedure. If all the bidders for “jogging pants” have received their slots, where will the automatically matched advertisers from “running clothing” be placed? If the placement downgrades the slots of those who actually bid for “jogging pants”, the original advertisers would certainly be unhappy, since they would feel ripped off. The alternative is to place all the automatically matched advertisements at the bottom, but how much value is there in those positions? Also, how will Google determine the price to charge per click for such low positions? Charging the original price is certainly unjustified. A third option is to integrate the automatically matched advertisers with the bidders for that keyword into one larger auction using the VCG procedure. Again, the same question remains: how will Google determine the values (how much each advertiser values a slot) to enter for the automatically matched advertisers?
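For reference, here is how the VCG procedure from class would price ad slots if per-click values and slot click-through rates were known; the numbers below are made up, and the whole point of the post is that these values are exactly what Google cannot know for automatically matched advertisers.

```python
def vcg_slot_prices(values, click_rates):
    """VCG payments for ad slots (the course model).
    values[i]      : advertiser i's value per click
    click_rates[k] : expected clicks in slot k, sorted in decreasing order
    Returns each advertiser's total VCG payment."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    rates = list(click_rates) + [0.0] * len(values)   # unmatched advertisers get zero clicks
    payments = {}
    for pos, i in enumerate(order):
        # Harm advertiser i causes the others: if i were absent, every lower-ranked
        # advertiser would move up one slot and gain the difference in clicks.
        payments[i] = sum(values[order[j]] * (rates[j - 1] - rates[j])
                          for j in range(pos + 1, len(order)))
    return payments

# Three advertisers (per-click values) competing for two slots (expected clicks):
print(vcg_slot_prices(values=[10, 6, 4], click_rates=[100, 50]))
# -> {0: 500.0, 1: 200.0, 2: 0}  (advertiser 0 pays 500, advertiser 1 pays 200)
```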

It appears that Google has overstepped its boundaries. In an auction, the seller does not control the value that each bidder places on the item. Here, however, Google appears to be deciding how much advertisers value certain slots under certain keywords. It is entirely possible that an advertiser does not even agree that some of the matched keywords are relevant. Regardless of which of the above options Google chooses, it is essentially deciding the value that a buyer places on its advertising slots. The analogy is a seller in an auction deciding how much each buyer should bid. Of course, the seller benefits because it can set any arbitrary value to profit. This can be considered an unfair practice that Google is using to increase its revenue. Even worse, some advertisers may not even be aware that such a practice exists and thus could be cheated out of a significant sum of money by unknowingly participating in an auction. The socially optimal outcome is for each bidder to bid truthfully, but how does Google even know what those values are? Instead, it has its own interest in generating more revenue. Worse still, it knows how much each company is willing to spend in total, since each has a budget, so it can exploit that and use as much of it as possible.

One possible solution to the problem is to allow advertisers to review the automatically matched keywords before deciding whether or not to bid on them. However, Google does not appear to have those intentions, given that it intends to make automatic matching the default for every advertiser. Unlike the idea of using a perfect matching to maximize social welfare, Google seems to be more interested in increasing its revenue. Since this was only in beta in February, we shall see how this story unfolds.

Sources:

http://www.straightupsearch.com/archives/2008/02/googles_automat_1.html

http://www.altogetherdigital.com/20080227/google-beta-tests-automatic-matching/

http://www.channelregister.co.uk/2008/03/18/when_google_does_evil/

Posted in Topics: Education, Technology

No Comments

Power in the Information Network of Academia

In today’s world, we rank everything, from college basketball teams to cookies that go well with milk. A simple way of quantifying quality eases us through the maze of decisions in our daily lives. But certain areas prove quite difficult to rank simply. One such category is the influence a researcher has in the giant information network of academia. We all know of the few elite awards, such as the Nobel Prize and the Fields Medal, that recognize academic achievement, but can we easily quantify the influence all researchers have in the information network?

In 2005, J.E. Hirsch of the University of California at San Diego developed a system to measure the scientific output of a researcher. Hirsch writes, “A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np−h) papers have no more than h citations each.” Basically, Hirsch has developed a one-number index to quantify the power and influence an academic has had in the information network of scientific citations: the h-index is the largest number h such that the researcher has h papers with at least h citations each.
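Here is a short sketch of how the h-index could be computed from a researcher's citation counts; the sample numbers are invented for illustration.

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has
    h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

# A researcher whose papers are cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers have at least 4 citations, but there are not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```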

Hirsch claims that his system has advantages over many other potential systems. For example, one could rank by the total number of papers produced, but that disregards whether those papers are actually being read. Another way would be to rank by total citations, but that allows one outstanding paper, perhaps coauthored, to catapult a researcher very high in the rankings. Or the ranking could be based on citations per paper, but that would reward low productivity, according to Hirsch. Other, more complicated systems, such as requiring a minimum threshold of citations for a paper to count, eliminate these disadvantages but are clunkier than the simple and elegant h-index.

Hirsch has found a way to analyze the links (citations) between the nodes (papers) in the information network of scientific research to quantify power and influence in the network through a one-number index. Similar to the hub/authority update procedure, the Hirsch index uses only links to determine strength in the network. The Hirsch index provides an interesting and simple way to find the most powerful nodes in the information network of science.
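For comparison, here is a compact sketch of the hub/authority update procedure mentioned above, applied to a toy citation network; like the h-index, it uses nothing but the link structure.

```python
def hits(links, iterations=20):
    """Hub/authority updates on a directed graph given as (source, target) edges."""
    nodes = {n for edge in links for n in edge}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority update: sum the hub scores of the nodes that point to you.
        auth = {n: sum(hub[s] for s, t in links if t == n) for n in nodes}
        # Hub update: sum the (new) authority scores of the nodes you point to.
        hub = {n: sum(auth[t] for s, t in links if s == n) for n in nodes}
        # Normalize so the scores stay bounded.
        a_total, h_total = sum(auth.values()) or 1.0, sum(hub.values()) or 1.0
        auth = {n: v / a_total for n, v in auth.items()}
        hub = {n: v / h_total for n, v in hub.items()}
    return hub, auth

# Papers A and B both cite C; B also cites D.
hub, auth = hits([("A", "C"), ("B", "C"), ("B", "D")])
print(max(auth, key=auth.get))  # -> 'C', the most-cited (most authoritative) paper
```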

Here is a link to the original article:

“An index to quantify an individual’s scientific research output”

J.E. Hirsch

http://arxiv.org/abs/physics/0508025

Posted in Topics: Education

No Comments

Playing Nice Can Pay Off: An Iterated Prisoner’s Dilemma

After the second homework assignment, faurik commented on the question referring to an iterated prisoner’s dilemma. Specifically, faurik explained how the Nash Equilibrium of the one-shot game, in which both players confess, does not necessarily hold in this new game. The observation arises because players realize that the game repeats an unknown number of times, so working together might pay off. In this iterated version, then, does “playing nice,” contrary to the one-shot Nash Equilibrium, actually pay off?

The director of Harvard’s evolutionary dynamics lab, Martin Nowak, examined that question in his research project, published in tomorrow’s edition of Nature. Seth Borenstein provides his own take on the study in this article from The Boston Globe. 

For his study, Nowak enlisted 100 Boston-area college students to play a prisoner’s dilemma game repeatedly with dimes, following the payoff matrix below:

[Payoff matrix image: matrix2.JPG]

He then added a twist, a potential punishment for uncooperative players: any player could punish an opponent who chose to defect by docking the defector 40 cents, but enacting the punishment cost the punisher a dime. Interesting. What were the results? “The players who punished their opponents the least made the most money.” So, perhaps the common proverb that nice guys finish last isn’t quite correct. To see if these findings play out in real-life scenarios, Nowak plans to study the behavior of chief executives.
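Since the study's payoff matrix (matrix2.JPG) is not reproduced here, the sketch below uses made-up dime-scale payoffs purely to illustrate the mechanics of one round with optional punishment; note how punishing hurts both sides, which is at least consistent with the finding that the lightest punishers earned the most.

```python
# Hypothetical payoffs in cents: (player 1, player 2).  C = cooperate, D = defect.
PD_PAYOFFS = {
    ("C", "C"): (20, 20),
    ("C", "D"): (0, 30),
    ("D", "C"): (30, 0),
    ("D", "D"): (10, 10),
}
PUNISH_COST, PUNISH_HARM = 10, 40   # punishing costs a dime and docks the target 40 cents

def play_round(m1, m2, punish1, punish2):
    """One round: both players choose C or D, then each may punish a defecting opponent."""
    p1, p2 = PD_PAYOFFS[(m1, m2)]
    if punish1 and m2 == "D":           # player 1 punishes a defecting player 2
        p1, p2 = p1 - PUNISH_COST, p2 - PUNISH_HARM
    if punish2 and m1 == "D":           # player 2 punishes a defecting player 1
        p2, p1 = p2 - PUNISH_COST, p1 - PUNISH_HARM
    return p1, p2

print(play_round("C", "C", False, False))   # (20, 20): mutual cooperation
print(play_round("C", "D", False, False))   # (0, 30): defection pays if unpunished
print(play_round("C", "D", True,  False))   # (-10, -10): punishment leaves both worse off
```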

Posted in Topics: General

View Comment (1) »

eBay’s New SMI Policy Raises Concerns from Users

Popular online auction site eBay recently announced an expansion of its SMI (Safeguarding Member ID) system to all of its US-based live auction listings. The SMI system essentially hides bidder screen names from everyone but the seller. At the moment, if one logs onto eBay and looks at the list of bidders for any given US auction, he or she will immediately notice that although eBay displays the bid prices for the item, it does not show the actual screen names of the bidders. eBay cites the need to thwart fraudulent Second Chance Offers as the reason for the new policy. In the past, scammers have browsed the list of screen names of the under-bidders of an auction near its closing time and have been able to contact those under-bidders with fake offers to purchase similar goods at a reduced price. This is now impossible: because bidder identities are kept secret, scammers are no longer able to contact those bidders directly and issue fraudulent Second Chance Offers.

However, this recent policy change elicits worries from many longtime users. One major problem that SMI is likely to create is an increase in shill bidding. Shill bidding is a scenario in which the seller in an auction uses additional accounts to bid up his or her own item in order to incite a bidding war that will ultimately raise the final price. Because SMI hides the identities of all bidders, it is very hard for legitimate bidders to tell whether the seller is using additional accounts in this manner. Now, one might think that shill bidding should not affect final prices in eBay auctions, because in such second-price auctions the dominant strategy is never to bid above one’s private valuation. However, humans are not always rational bidders, and in the real world people do tend to overbid and call out prices that surpass their private valuations, often out of greed, competitive spirit, or other irrational emotions. There have even been many cases in which people have bid over $100 in an auction for a $100 bill (up to $465 in some cases!). Nevertheless, because shill bidding artificially drives up the final prices of items, many users are demanding that eBay instead eliminate Second Chance Offers altogether, calling SMI an “overreaction to a small problem.”
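Here is a minimal sketch of the second-price mechanics at issue; eBay's proxy-bidding system behaves roughly like a sealed-bid second-price auction, and the bid numbers below are invented.

```python
def second_price_outcome(bids):
    """bids: {bidder: bid}.  The highest bidder wins and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

honest = {"alice": 100, "bob": 80}              # truthful bids equal to private values
print(second_price_outcome(honest))             # ('alice', 80)

with_shill = dict(honest, shill_account=95)     # seller shills just under the top bid
print(second_price_outcome(with_shill))         # ('alice', 95): same winner, higher price

# The shill is gambling: a shill bid above 100 would "win back" the seller's own item.
# The overbidding described above gives shills even more room to push the price up.
```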

Referenced Articles:

http://blog.auctionbytes.com/cgi-bin/blog/blog.pl?/pl/2008/3/1204572683.html

http://collectdolls.about.com/od/auctions/a/ebayhiddenbid.htm

http://www.marketwatch.com/News/Story/Story.aspx?guid=%7B0F186388%2DDEDF%2D467B%2D8AB3%2DA13FF669FB87%7D

 

 

Posted in Topics: Education

No Comments

Google’s Money-Making Ad Machine

http://www.channelregister.co.uk/2008/03/18/when_google_does_evil/

(6 Pages - see links at bottom to progress)

This article discusses many details of Google’s ad auction service that do not get covered in Econ 204 lecture. Recently, advertisements on Google have been receiving fewer clicks, which Google claims is the result of an attempt to get rid of “unintentional” clicks and provide higher-quality ads. The article uses this as a jumping-off point for examining how Google’s secret ad machine works. In class, we talked about how bids for ads might work for specific keyword searches. However, Google also uses something called “automatic matching,” which tries to apply ads to related searches even when the advertiser has not bid on that search term. That way, if the terms the advertiser bid on do not generate as many clicks as Google expected, it can broaden the appearance of the ads to other searches and charge the advertiser for the clicks received with those other keywords. Google is currently treating this service as a beta and automatically applying it to advertisers unless they opt out, meaning many advertisers may be paying for ads they hadn’t bid on.

Google’s algorithm for running the advertising auctions is much more intricate than the algorithms given in class. In fact, only the company itself knows exactly how it works, and it changes over time. One added intricacy is that instead of bidding on an exact search, as discussed so far in class, advertisers can also bid on a “phrase match” (which consists of the exact search plus a few other words tagged on) or a “broad match.” With “broad match,” Google will place the ads in searches it deems relevant to the one the advertiser paid for. However, the article gives the example that a bid on “dog gifts” may result in an ad on the search “hot dog buns,” which is likely not what the advertiser wants. Previously, broad match meant applying the ads to searches containing any of the keywords in any order, but now ads might appear on searches containing none of the keywords supplied by the advertiser. Erick Herring of Adapt is quoted in the article saying, “If you’re Google, the way to expand revenue is to expand ‘coverage.’ Google can’t get paid if someone does a search and there’s not an ad in place that a person can click on.”

By using a hidden algorithm to determine relevant searches, Google may not perform the service that advertisers thought they were paying for, and companies may be charged for ads they didn’t want to place. One customer, for instance, found out after paying nearly $7500 a month that most of his ads appeared not on Google itself but on other “content sites” that pay Google for its ad service, some with nonsensical URLs like 1000jogos.com. The algorithm for matching advertisers to slots has hidden components as well. In class, we treated this as a second-price auction with slots valued by click-through rate and advertisers bidding on the value of a single click. However, Google also considers other factors, like the quality of the page placing the ad and the relevance of the ad text. It gives each advertiser a quality score, and the lower one’s score, the higher the minimum bid to place any ads at all, and often the lower the placement on the page. Hence big businesses get better treatment than small ones and can place ads with a lower minimum bid. As a business, Google’s algorithms act more to maximize the company’s own revenue than to implement the social-welfare-maximizing algorithms from class.
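The snippet below is a stylized model only; the article stresses that Google's real formula is secret, so the quality-score weighting and minimum-bid rule here are hypothetical, meant just to illustrate how a low quality score can mean a higher entry price and a worse slot.

```python
def rank_ads(ads, base_min_bid=0.05):
    """ads: list of dicts with 'name', 'bid' (dollars per click) and 'quality' in (0, 1].
    Hypothetical rule: low quality raises the minimum bid and lowers the ranking score."""
    eligible = []
    for ad in ads:
        min_bid = base_min_bid / ad["quality"]          # low quality -> higher entry price
        if ad["bid"] >= min_bid:
            eligible.append({**ad, "score": ad["bid"] * ad["quality"]})
    return sorted(eligible, key=lambda a: a["score"], reverse=True)

ads = [
    {"name": "big_brand",  "bid": 0.50, "quality": 0.9},
    {"name": "small_shop", "bid": 0.60, "quality": 0.3},   # bids more, ranks lower
]
for slot, ad in enumerate(rank_ads(ads)):
    print(slot, ad["name"], round(ad["score"], 2))
# 0 big_brand 0.45
# 1 small_shop 0.18
```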

Posted in Topics: Education

No Comments

Planning Google’s Downfall

While Google has been the number one search engine for the past few years, numerous individuals and companies have recently been looking to change the current search system and develop something more innovative and profitable. One individual who has become fed up with Google’s continuous success is Rich Skrenta. As discussed in the article “Planning the New Google,” Skrenta believes that Google’s techniques are out of date and that it is time for a new search engine to take its place. Skrenta is currently developing a new search site known as ‘Blekko,’ about which he has yet to reveal much.

What is it about Google that Skrenta so strongly contests? Clearly, millions of internet users rely on Google on a daily basis and are quite content with their results. I even used Google to find articles and information for this blog post, clearly indicating my satisfaction with and commitment to the site. Nevertheless, Google’s main weakness, in Skrenta’s view, is its once revolutionary yet now outdated search technique: PageRank. As discussed in class and in the Networks book draft, PageRank is a ‘type of “fluid” that circulates throughout a network and is passed from node to node across edges and is eventually pooled at the most important nodes’ (Networks 144). Google uses the scaled PageRank update rule to rank the various web pages it lists in a search. The update rule scales down each page’s rank by a common factor s in order to prevent the ‘wrong’ nodes from accumulating all the PageRank (as happens when a leak develops in the network). It is this technique that first enabled Google to stand out from all the other search engines, but it is also what has made Skrenta strive to improve internet search. Before Google, most search engines focused on keywords: they would note how often specific words appeared on specific sites, and the sites with the closest match would be listed first. However, this system was not very effective. Google changed it by looking at the links between sites and the origins of those links (sites with better incoming links appear higher in the search results). While PageRank was certainly ground-breaking, Skrenta believes it has ruined the quality of web searches. Secretive link-trading schemes have developed that attempt to increase the number of incoming links pointing to a website (and therefore increase its rank in a web search). Links used to demonstrate the value of a site, but since the rise of PageRank, web sites have become so focused on finding ways to game Google’s ranking system that PageRank has lost much of its meaning.
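As a refresher on the mechanism being criticized, here is a small sketch of the scaled PageRank update rule on a toy link graph; the scaling factor s and the handling of pages with no outgoing links (spreading their rank evenly) follow one common convention and are illustrative rather than Google's actual implementation.

```python
def scaled_pagerank(links, s=0.85, iterations=50):
    """Scaled PageRank update rule.  links: list of (source, target) directed edges.
    A fraction s of each page's rank flows along its out-links; the remaining
    (1 - s) is redistributed evenly over all pages."""
    nodes = {n for edge in links for n in edge}
    out_links = {n: [t for (src, t) in links if src == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - s) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out_links[n] or list(nodes)   # dangling page: spread evenly (a convention)
            share = s * rank[n] / len(targets)
            for t in targets:
                new_rank[t] += share
        rank = new_rank
    return rank

# A tiny web: A links to B and C, B links to C, C links back to A.
print(scaled_pagerank([("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")]))
# C ends up with the most rank, since both A and B point to it.
```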

Thus Skrenta and others are beginning to develop new ways to enhance search. Jimmy Wales, the founder of Wikipedia, has recently created Wikia Search, which strives to improve the relevance and precision of the search experience through a more human-mediated search engine. Mahalo is another example of human-powered search: the site pays people who specialize in specific fields to write reviews about their areas of expertise. It organizes the best links and strives to avoid scam sites, all in order to save searchers time.

While these new sites strive to add human judgment to search engines, the new frontier is computer algorithms that can unite a computer’s efficiency with a human’s shrewdness about page quality. While little is known about Blekko, it may end up pursuing exactly this goal. Nevertheless, Google has won over most consumers, and it may be rather difficult to convince so many people that a new search engine is better than the one they are already comfortable with. A new generation of search engines is on the horizon, and it will be interesting to see what comes next from Google and its competitors.

Posted in Topics: Technology

No Comments

Yahoo announces move to semantic search

The world’s second most-used search engine, Yahoo, is announcing a move to semantic search. Traditional search engines judge a web page’s relevance to the search terms largely by analyzing the links to and from the page. Semantic search, by contrast, attempts to capture the context of the data on a web page in order to understand the actual meaning behind it. It does so by accessing data in XML and RDF formats from semantic networks in order to make connections between the initial search query and everything on the internet related to that query.

Here is an example of a semantic network:

[Image: a semantic network centered on the musician Yo-Yo Ma]

Here we can see a semantic network pertaining to the musician Yo-Yo Ma. Semantic search embraces the information found in this sort of network. For example, if one were to search for “Yo-Yo Ma” in Yahoo’s proposed semantic search engine, the engine would gather data from all over the internet on everything closely related to the query “Yo-Yo Ma”. The engine would return results that include biographies, discographies, reviews of Yo-Yo Ma’s albums and performances, musicians Yo-Yo Ma has collaborated with, sites to purchase tickets for his upcoming performances, and so on. This system is predicted to yield more relevant results than Google’s PageRank system, which uses a link-based voting game to determine the relative importance of pages. Of course, Yahoo still has many challenges to overcome, including the difficulty of discerning the semantic information behind images and videos, especially untagged ones. And there is always the never-ending internet spam that clutters and muddles search results. But all in all, semantic search promises to deliver more comprehensive and relevant search results and to improve online productivity.
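Here is a toy sketch of the underlying idea: structured, RDF-style triples let the engine walk outward from the query entity instead of matching raw keywords. The triples and the helper function below are invented for illustration; real systems use RDF stores and query languages such as SPARQL.

```python
# Invented (subject, predicate, object) triples describing the Yo-Yo Ma example.
triples = [
    ("Yo-Yo Ma", "is_a", "cellist"),
    ("Yo-Yo Ma", "recorded", "Bach Cello Suites"),
    ("Yo-Yo Ma", "collaborated_with", "Emanuel Ax"),
    ("Bach Cello Suites", "composed_by", "J.S. Bach"),
]

def related(entity, depth=2):
    """Everything reachable from `entity` within `depth` hops of the semantic network."""
    frontier, seen = {entity}, {entity}
    for _ in range(depth):
        frontier = {obj for (subj, pred, obj) in triples if subj in frontier} - seen
        seen |= frontier
    return seen - {entity}

print(related("Yo-Yo Ma"))
# -> {'cellist', 'Bach Cello Suites', 'Emanuel Ax', 'J.S. Bach'}
# A semantic engine could then rank pages about any of these related entities,
# not just pages containing the literal string "Yo-Yo Ma".
```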

Referenced Articles:

http://news.bbc.co.uk/2/hi/technology/7296056.stm

http://www2003.org/cdrom/papers/refereed/p779/ess.html

Posted in Topics: Education

No Comments

Trader profits fall along with business costs

It is well known that the cost of products decreases over time. In most cases, products are quickly replaced by new ones; the new products are better and bump up the price long enough for the next product to come along. But some things are not replaced. Postal delivery as a service hasn’t fundamentally changed for thousands of years: people need to get their items from one location to another. Today this can be done faster and for less money than it could have been a hundred years ago, and in another hundred years it will undoubtedly cost even less for even faster delivery. Products and services such as transportation and communication don’t have replacement products to maintain their cost over time.

As a result, traders find themselves with more competition and smaller profit margins. Traders a century ago faced only regional competition for most products and could justify large margins because of the price of transportation and other costs. While the decreased costs could have meant larger profits for the traders, improved communication and transportation technologies forced traders from across the country into direct competition. So how does this affect the market networks discussed in class?

It forces both traders and sellers into direct competition that results in zero profits. In a recent Time magazine article*, the author suggests that the only way to be successful selling a product is to convince consumers to purchase your product even when it costs more. Starbucks and the Geek Squad are cited as examples of companies that have succeeded by convincing consumers to pay more for a “better” product. Starbucks sells food and drinks for more than its competitors and still manages to keep its stores busy. The Geek Squad has captured much of the tech-repair market by promising customers authentic “geeks,” who are surely the best tech repairmen. In today’s global economy, it seems that authenticity will be the only way to escape the profit drain of competition.

And what about Walmart and McDonald’s? Well, they are just damn good at cutting costs.

* http://www.time.com/time/specials/2007/article/0,28804,1720049_1720050_1722070,00.html

Posted in Topics: Education

No Comments

FCC Auction of 700 MHz band

As television broadcasters move to digital transmission systems, mandated by Congress through legislation signed into law on February 8, 2006 (the Digital Television Transition and Public Safety Act), frequency space that was previously used for television signals will be made available for other purposes. Analog broadcasters will have to stop transmitting their legacy signals by February 17, 2009. In addition to broadcasters, who are required to upgrade their infrastructure to meet these requirements, consumers will also be affected by this change. According to the site DTV Facts, approximately 60 percent of cable subscribers are paying for analog service and will have to upgrade to digital at their own expense. The intent of this legislation, at least in part, is to make more efficient use of the frequency spectrum and make room for additional wireless broadband services. This will occur in the 700 MHz band, previously used by UHF channels 52 through 69. This frequency space is especially attractive because of its propagation characteristics: signals in this band can travel through buildings as well as through poor weather.

As discussed in class, Google uses an auction method to price the ads it places alongside the search results it provides (AdWords). Google also participates in auctions for other purposes. For example, according to an article on Engadget (see link below), Google decided to participate in the FCC auction for the 700 MHz band mentioned previously. Interestingly, the FCC uses a Simultaneous Multiple-Round (SMR) auction in which licenses for the frequency space are made available. The auction is composed of individual rounds, between which participants may evaluate their strategies and adjust accordingly. During the time between rounds, the results of the previous round are made public, so bidders are aware of the bids of other participants. This lets participants see who places the highest value on the licenses and increases the probability that the licenses will go to those who value them most. The bidding continues in this fashion, through multiple rounds, until all bidding activity stops. Although not specifically stated, the information provided on the FCC website implies that the winning bidder will have the highest bid and will pay that price. Google has not stated specifically how it would use these frequencies, but in a press release it encouraged the FCC to adopt practices that serve customer interests, regardless of the auction winner. According to a recent Information Week article, $19.6 billion has already been bid for the 700 MHz band, and bidding is expected to conclude by next week. Ericsson has developed chips for broadband devices that would use this band and is likely to be one of the strong contenders in the auction process.
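To illustrate the round structure, here is a heavily simplified, single-license sketch of an ascending simultaneous multiple-round auction; the real FCC auction covers many licenses at once with activity and eligibility rules, and the bidder valuations below are made up.

```python
def smr_single_license(valuations, opening_bid=100, increment=100):
    """Simplified one-license SMR auction.  Each round, bidders who still value the
    license at or above the next required bid may top the standing high bid; the
    results of each round are public.  Bidding stops when no one will go higher."""
    high_bidder, high_bid = None, opening_bid - increment
    while True:
        next_bid = high_bid + increment
        willing = [b for b, v in valuations.items()
                   if v >= next_bid and b != high_bidder]
        if not willing:
            return high_bidder, high_bid
        # Assume the bidder who values the license most is the one who tops the bid.
        high_bidder = max(willing, key=lambda b: valuations[b])
        high_bid = next_bid

# Hypothetical valuations in millions of dollars.
values = {"bidder_a": 4720, "bidder_b": 4350, "bidder_c": 3100}
print(smr_single_license(values))
# -> ('bidder_a', 4300): the license goes to the bidder who values it most,
#    at a price near the runner-up's valuation.
```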

Sources:

DTV Facts

Engadget article

FCC Information on SMR auction

Google press release

Information Week article

Posted in Topics: Education

No Comments