This is a supplemental blog for a course covering how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Braess’ Paradox, Queuing Networks, and Nash Equilibrium

Braess’ paradox concerns networks in which users selfishly choose their best route and, under certain circumstances, adding extra capacity actually increases travel time. We also applied the concept of Nash equilibrium to analyze Braess’ paradox. Stanford professor Sunil Kumar explores the question of how to increase capacity and throughput without risking longer delays in his research paper Customer Choice and Routing in Queuing Networks. An article summarizing his work describes how additional choice can be added to queuing networks in order to make them more efficient.
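
To make the paradox concrete, here is a quick check of the standard textbook example (the classic 4,000-driver network, not a scenario from Kumar's paper): at the selfish routing equilibrium, adding a free shortcut makes every driver's commute longer.

```python
DRIVERS = 4000

def narrow(load):          # congestible road: travel time grows with traffic
    return load / 100      # minutes

WIDE = 45                  # fixed 45-minute road, regardless of traffic

# Without the shortcut, drivers split evenly between two symmetric routes,
# each consisting of one congestible road and one fixed road.
split = DRIVERS // 2
print("no shortcut:  ", narrow(split) + WIDE, "minutes")   # 20 + 45 = 65.0

# A zero-cost shortcut lets drivers chain the two congestible roads together,
# and at the Nash equilibrium every selfish driver does exactly that.
print("with shortcut:", narrow(DRIVERS) + 0 + narrow(DRIVERS), "minutes")  # 80.0
```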

The paper analyzes a simple two-station service system and asks what rules are needed to maximize its efficiency. An example of such a system is a Starbucks with a payment counter and a coffee counter, where customers choose which to visit first. In this situation, if the coffee counter became overwhelmed, the coffee server would give priority to those who have already paid (in order to encourage people to use the cashier first). If the cashier became overwhelmed, those who have already picked up their coffee would receive priority. Dynamically changing the rules in this way increases efficiency even after customers have made their decisions.
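
Below is a minimal slotted-time sketch of that priority rule. All arrival rates, thresholds, and the queue-choice behavior are invented for illustration; this is a sandbox for the idea, not a reproduction of Kumar's model.

```python
import random

def simulate(dynamic_priority, steps=50_000, arrival_prob=0.9, threshold=4, seed=1):
    """Two counters, each serving one customer per time slot.  Every customer
    must visit both; a queue entry is True once that customer has already
    visited the other counter.  With dynamic_priority=True, an overwhelmed
    counter (queue longer than `threshold`) serves those customers first."""
    rng = random.Random(seed)
    queues = {"pay": [], "coffee": []}
    waiting_total = 0
    for _ in range(steps):
        if rng.random() < arrival_prob:                      # new customer arrives
            first = min(queues, key=lambda name: len(queues[name]))
            queues[first].append(False)                      # joins the shorter line first
        for name in ("pay", "coffee"):
            q = queues[name]
            if not q:
                continue
            if dynamic_priority and len(q) > threshold and True in q:
                q.remove(True)                               # priority: already served elsewhere
                finished_both = True
            else:
                finished_both = q.pop(0)                     # plain first-come, first-served
            if not finished_both:
                other = "coffee" if name == "pay" else "pay"
                queues[other].append(True)                   # still needs the other counter
        waiting_total += len(queues["pay"]) + len(queues["coffee"])
    return waiting_total / steps

print("avg. customers in line, FIFO only:       ", round(simulate(False), 2))
print("avg. customers in line, dynamic priority:", round(simulate(True), 2))
```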

The cause of the suboptimal queues is that customers’ selfish decisions result in a Nash equilibrium that is neither Pareto optimal nor a social welfare maximizer. Simply stated, the “user-determined equilibria” deviate from the “system-optimal equilibria” (1). Kumar’s research deals with creating theoretical rules that would balance the load. His theories have applications in many situations where queues are used, such as communication networks, manufacturing, and even the DMV.

Posted in Topics: Education, Technology, social studies

No Comments

Where are Irrationality, Justice, and Morality in Game Theory?

http://www.gametheory.net/News/Items/108.html

http://www.cscs.umich.edu/~crshalizi/notebooks/ethics-biology.html

 


 

The entire field of game theory seems to be built upon the concept of rationality: in each situation, players will play the strategy that maximizes their individual payoff. Any experience with the real world, however, leads us to the conclusion that this assumption is inherently flawed and that there is another factor besides profit that drives us. Even in an example as simple as the ultimatum game, we can see that experimental results are far from the predicted equilibrium. The prediction tells us that the optimal strategy, and thus the one most often played, would be to offer the smallest increment possible. Actual experiments, however, contradict this and find that most people offer a fair split and that offers of less than 20% are often rejected. Rationality would say that any profit is better than no profit, so why would people ever reject offers? I believe the answer to this question, in a word, is justice.

 

When a player offers a highly unfair split he is being unjust, and the moral response is to punish this injustice, thus restoring justice, even at the cost of hurting oneself. Even though the decision to punish the offending player looks irrational from an economist’s standpoint, I believe that when this decision is applied consistently and repeatedly it is an optimal and rational strategy. To show this, let’s take the case of a “world” containing two just players and two unjust players. Now suppose just one of these players is given the opportunity to make offers to each of the other “residents” in turn. If he is unjust, he will offer a 99:1 split, which will be accepted by the other unjust player (since he is completely rational) but rejected by the two just players. This gives him a net payoff of 99. However, if he were a just player, he would offer each person a 50:50 split, which would be accepted by all three, giving him a net of 150. In this case, the just player, although he makes irrational decisions, comes out with a larger payoff than the perfectly rational game theorist.
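
A quick check of that arithmetic (the payoffs and accept/reject rules are exactly the ones assumed above):

```python
def proposer_payoff(proposer_type, responders):
    """Ultimatum offers made to each responder in turn: unjust proposers keep
    99 and offer 1 (only unjust responders accept); just proposers split 50:50
    (everyone accepts)."""
    total = 0
    for responder_type in responders:
        if proposer_type == "unjust":
            kept, accepted = 99, (responder_type == "unjust")
        else:
            kept, accepted = 50, True
        if accepted:
            total += kept
    return total

print(proposer_payoff("unjust", ["unjust", "just", "just"]))  # 99
print(proposer_payoff("just",   ["unjust", "just", "just"]))  # 150
```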

 

Although this situation is a bit contrived, I think our society today, and our world as a whole, gives us a bit of indirect evidence that justice (and irrationality) is optimal. If we make the assumption that a larger profit gives us a greater chance to survive and reproduce (which is a very believable assumption), then Darwin’s theory of natural selection, often paraphrased as “survival of the fittest,” leads us to believe in morality. After 200,000 years of evolution, natural selection must have reached, or at least approached, the Nash equilibrium of human behavior. Since we are all at least somewhat irrational, with morals and a sense of justice, those behaviors and ideas must somehow be “better” than the completely rational alternative. Therefore, when predicting equilibria from a payoff matrix, it seems somewhat silly to say with definitiveness that this is what people will do, because they are all rational and it is the best choice. I am not saying that completely irrational choices are better, but simply that the rationality of a choice cannot always be contained in a 2×2 payoff matrix.

Posted in Topics: Education, Mathematics, social studies

Comments (2) »

The Ultimatum Game in Business

Here’s a practical application of the bargaining networks that we’ve been discussing recently, and it touches back on the stock market lectures. This article in the New York Times, “When Unequals Try to Merge as Equals”, is about the upcoming (pending approval) merger of the Sirius and XM satellite radio companies. The crux of the article is that while the companies are merging as equals, on the stock market Sirius is actually worth almost $1 billion more than XM. As a result, Sirius is paying what amounts to a 30% premium for XM shares, so that the merged company will be owned equally by both groups of shareholders. This is somewhat confusing to outsiders, because it’s unclear why Sirius would be interested in such a deal. The author, Andrew Ross Sorkin, explains that the merger made sense for both companies for a few reasons.

Some of the reasons Mr. Sorkin cites are less related to our course than others. One reason is that a merger of equals is more likely to be approved by the government: because Sirius and XM are the only two satellite radio companies around, a merger might violate anti-trust law, and a takeover of one company by the other might look too much like a monopoly in the making. Another line of reasoning is that, while Sirius is valued more highly than XM, it is not really worth more: XM has more subscribers and more revenue, but Sirius’s stock price had been inflated by good press and big deals (such as signing Howard Stern).

The other reasons can be seen as a natural outcome of network exchange theory. As mentioned, Sirius and XM are the only two companies in satellite radio, so we can think of them as the only two nodes in a graph on these companies. More specifically, we can consider a graph where the nodes are companies, the edges represent possible mergers, and the values on the edges reflect the percentage stake each company would hold in the final, merged company. With this model it becomes clear that an equal merger is likely: while Sirius is the one making the offer, it has only one company it can merge with. While Sirius might be thought to have an advantage because it has more money to spend, in reality it needs XM just as much as XM needs it. As Mr. Sorkin puts it:

And then Mr. Parsons [XM’s chairman] played his ace: If Mr. Karmazin [Sirius’s CEO] wanted to create the enormous savings they both projected would result from a deal — worth more than $5 billion, more than the value of either company — they needed each other. And Mr. Parsons would not play unless his shareholders could capture half of those savings.

This is a classic example of someone playing the Ultimatum Game. Both Sirius and XM need the savings of the merger — they are both losing money — but even though XM is worse off, it refuses to be taken over. Mr. Parsons is apparently a particularly stubborn player of the Ultimatum Game, and in this case Mr. Karmazin was lenient, because he did not try to push his advantage, perhaps because of the other factors in play. Or maybe he just knew how to play the game.

Posted in Topics: Education

Comments (13) »

The Enron Network

The Enron scandal was unveiled in November 2001 and has undergone a large amount of examination ever since. Many questions arose asking how the public could have been so blindsided. While Enron’s stock continued to plummet from 2000 into 2001, Enron’s stockholders were encouraged to hold on to their stock and assured that the price would soon rebound. The stockholders were unaware of the network of chief executives, accountants, and insider traders that was benefiting from their losses. When the truth was finally brought to court, this network was a useful tool for determining who was strongly connected to whom.

Similar to the tactic used by Kossinets and Watts, email was used to track these connections. A detailed network of this information was compiled by Jeffrey Heer and can be found at http://jheer.org/enron/v1/. Heer restricted his email connections to just the senders and recipients, avoiding any other names listed within an email’s body, and he included only emails related to business or strategy. This created a refined set of concrete edges.

While the “betweenness” value could be measured to locate the powerful nodes, Heer examined the links with a more global approach: he was more interested in the communities within the network than in the single players. Thus, Heer used Newman’s algorithm for discovering the boundaries between different groups within the company. This algorithm compares the number of links inside communities against the number expected by chance, and the strength of this result is an indicator of the strength of the community structure. This type of analysis was of particular importance in the Enron situation, where the community of those guilty was of central interest. Of course, this study does not guarantee a capture of all guilty persons, but it certainly was successful in identifying the larger communities of concern.
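
For readers who want to try this style of analysis, here is a minimal sketch (not Heer's actual pipeline) of the two measurements mentioned above, run on a small invented stand-in for the Enron email graph: betweenness centrality to flag powerful individual nodes, and Newman-style modularity maximization to find communities.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy email graph: an edge means two people exchanged business-related email.
G = nx.Graph([
    ("lay", "skilling"), ("skilling", "fastow"), ("fastow", "trader1"),
    ("trader1", "trader2"), ("trader2", "trader3"), ("trader1", "trader3"),
    ("lay", "assistant"), ("assistant", "accountant"),
])

# Local view: who sits on the most shortest paths between other people?
centrality = nx.betweenness_centrality(G)
print(sorted(centrality, key=centrality.get, reverse=True)[:3])

# Global view: which groups are more densely linked internally than expected?
for community in greedy_modularity_communities(G):
    print(sorted(community))
```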

Posted in Topics: General

No Comments

Peer Innovation

This Business Week article, Peer Innovation and Production, discusses the value of open-source software and how large companies such as IBM, whose business models initially revolved around proprietary software, are now using the wisdom of crowds as first experimented with in Linux. Many non-software companies are looking at this idea of harnessing peer production as a way of achieving growth, innovation, and profit. Relating this to the network diagrams in class: instead of a top-down hierarchical structure, nodes far from the center become more powerful as they are allowed to edit and build upon the software, in effect better utilizing the knowledge of people who do not even work for IBM.

IBM, by helping to develop Linux instead of building its own software, has been able to save close to $900 million per year. By using “self-organizing webs of independent contributors,” IBM is in effect outsourcing some of its production costs to the community at large. The article lists www.marketocracy.com, www.zopa.com, and www.threadless.com as companies that revolve around the same use of the collective knowledge of a network. Even though Linux software is free, sales of complementary hardware, software, and services at IBM will reach $37 billion annually by 2008. In effect, IBM is utilizing the knowledge of weak ties to improve Linux. With a minimal investment and royalty-free access to its software patents, IBM has created a link to the Linux community and is gaining both complementary business and the collective knowledge of Linux developers.

Posted in Topics: Education

No Comments

A Brief Primer on Vehicle-to-Grid

As the debate over global warming rages on, there’s one thing we can all agree on. It’s stupid! It’s like wondering if the can of varnish you drank was half-full or half-empty. Pollution is bad. Period.

This leads to several conclusions I’m sure you’ve heard ad nauseam, so I’ll jump straight to the point: renewable energy, and sustainability in society as a whole, is paramount to our long-term survival as a country, and as a species.

Renewable energy has come a long way in the last half-century. Photovoltaic cells have become cheaper and more efficient; wind turbine design has improved power output, reduced noise, and expanded the range of acceptable wind speeds; and new sources of energy like geothermal show promise.

In general, renewable energy can be filed into two categories: that which generates at a fairly constant rate (hydroelectric, geothermal), and that which generates intermittently, subject to outside factors (wind, solar).

The electric grid in the United States is powered overwhelmingly by coal. The Regional Transmission Operators (RTOs), who have run the grid since the federal government deregulated the industry, like on-demand power, because the load on the grid is not constant by any means. It goes up during the day, down at night, has a higher average in the summer due to air conditioning, and so on. Being able to generate power as needed makes managing the grid simple.

Sadly, this same reasoning leads the RTOs to dislike most renewable sources—they’re harder to integrate into the grid. You can predict the average power output of a wind farm or solar site, but at any given moment they may be well above or well below this average. If they’re above, it’s no problem—just turn off some traditional generation. If they go below, though, you need to turn on generation, which takes time. A lot of time.

For this reason, Generation Companies (GenCos) typically have spinning reserve: generation that can be ramped up quickly to meet sudden grid deficits. This typically comes in the form of gas turbines or coal plants running at not-quite-maximum capacity. Paradoxically, adding renewable power to the grid increases the need for spinning reserve, and thus the need for central generation.

If there were a way to level the output of these clean power sources, it would be much simpler to add them to the grid. Once the grid constraints are overcome, the economics of renewables would make them a very attractive option to the GenCos. Wind power, for example, is very fast to install, easy to maintain, cheaper than nuclear and gas power, and priced competitively with coal.

Leveling of output requires energy storage. While there are some interesting and creative ways of doing this (Compressed air energy storage and pumped hydroelectric, just to name a couple), there is also one very obvious way: batteries. But simply building giant banks of batteries would require tremendous amounts of space and money. Is there some unused source of electric energy storage we’re missing?

There is, and it’s in the light vehicle fleet. There are approximately 191,000,000 cars in the USA. If each of these were outfitted with a 15-kilowatt plug and a high-capacity battery, the grid could draw 2,865 gigawatts of power from these batteries, over six times the average load on the national grid. The storage is there, so how do we use it?
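
A back-of-the-envelope check of those figures:

```python
cars = 191_000_000                     # light vehicles in the USA
plug_kw = 15                           # per-vehicle plug rating, in kilowatts
total_gw = cars * plug_kw / 1_000_000  # kilowatts -> gigawatts
print(total_gw)                        # 2865.0 GW of potential draw

# "over six times the average load" implies an average national load
# somewhere below this figure:
print(total_gw / 6)                    # 477.5 GW
```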

Unfortunately, the cost of retrofitting cars with the kind of batteries and control systems necessary to make this scheme work would be astronomical. Fortunately, we are on the edge of a new frontier in automotive propulsion: plug-in hybrid-electric vehicles, fuel cell cars, and of course electric cars are all ideally suited to Vehicle-to-Grid, or V2G, since they already have state-of-the-art Li-ion batteries with much better capacity-to-weight ratios than the traditional lead-acid batteries in most cars today.

It may be easier to implement large scale renewable power generation and low emissions vehicles at the same time, rather than separately. The grid storage afforded by V2G makes renewable power far more attractive, while the fees paid by RTOs to owners of these new vehicles for use of their batteries would help offset their otherwise higher price.

Now that the framework for V2G has been laid, we have to consider a vast array of implementation obstacles, all of which must be modeled with graph theory and game theory.

The graph theory is a consequence of geography: electric current cannot be transported an arbitrary distance due to line losses. RTOs must somehow weight storage according to its distance from load, and at some distance, the storage will simply become inaccessible.
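
Here is a toy sketch of that distance-weighting idea: storage at each node is discounted by an assumed per-kilometer loss rate along the shortest path to the load, and ignored entirely beyond a cutoff. The grid, the loss rate, and the cutoff are all invented for illustration.

```python
import networkx as nx

grid = nx.Graph()
grid.add_weighted_edges_from([             # edge weight = line length in km
    ("load", "subA", 20), ("subA", "lotA", 10),
    ("load", "subB", 80), ("subB", "lotB", 60),
])
storage_kw = {"lotA": 5000, "lotB": 5000}  # aggregate V2G capacity per parking lot
LOSS_PER_KM = 0.0005                       # assumed fractional line loss per km
CUTOFF_KM = 100                            # beyond this, treat storage as inaccessible

usable = 0.0
for lot, kw in storage_kw.items():
    dist = nx.shortest_path_length(grid, "load", lot, weight="weight")
    if dist <= CUTOFF_KM:
        usable += kw * (1 - LOSS_PER_KM * dist)   # discount for line losses
print(f"usable V2G power at the load: {usable:.0f} kW")
```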

The game theory arises from myriad concerns of both RTOs and car owners. RTOs need to make sure that the grid supply equals the grid load at all times. They can do this by drawing upon spinning reserve or the new V2G storage, each with their own costs.

There are rules about how they may draw on V2G, though. Cars are parked 23 hours a day on average, but you can’t draw power from a car while it’s driving. You also can’t draw batteries all the way down at any time, since this can cause lasting damage to the battery and leave the motorist stranded.

How low can the batteries be drawn? If you’re sure the motorist isn’t going anywhere while he’s at work, you could draw them almost all the way down and let them recharge while he’s on the highway heading home (in the case of HEVs, which run on gasoline when the engine is at its most efficient, i.e., highway driving). If this is the strategy, though, and the car is driven around during lunch, the vehicle will be forced to rely on the gas engine, which is less efficient at city driving than the electric one.

For electric vehicles, the question becomes more complicated. How much charge needs to be left in the battery at the end of the day depends on how far the user needs to drive to get home. Will he take the car out soon after reaching home? Will he be able to charge it overnight every night? And how will he weigh all of these concerns against the money paid to him by the RTOs?
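
One way to phrase those constraints is as a simple dispatch rule: never draw from a car that is on the road, and never take a battery below a floor set by how far its owner still expects to drive. The function below is a toy sketch of that rule, with every number invented.

```python
def drawable_kwh(parked, soc_kwh, remaining_trip_km,
                 kwh_per_km=0.2, reserve_kwh=2.0):
    """Energy the grid may draw right now from one vehicle.
    soc_kwh: current state of charge; remaining_trip_km: driving the owner
    still expects to do before the next chance to recharge."""
    if not parked:
        return 0.0                                   # never draw from a moving car
    floor = remaining_trip_km * kwh_per_km + reserve_kwh
    return max(0.0, soc_kwh - floor)                 # leave enough to get home

print(drawable_kwh(parked=True,  soc_kwh=40, remaining_trip_km=30))   # 32.0
print(drawable_kwh(parked=True,  soc_kwh=40, remaining_trip_km=180))  # 2.0
print(drawable_kwh(parked=False, soc_kwh=40, remaining_trip_km=30))   # 0.0
```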

Presumably, all of this information can be gathered from the car owners with known statistical deviation, such as how often they go out for lunch and on what days. These statistics, along with the rules discussed earlier, define a game played by the various motorists and transmission companies, and it’s played on a nation-wide graph. Given this vast body of data and the complexity of the grid network as is, how can we best use this newfound electrical resource?

I don’t know. I’m just an undergrad. But this is perhaps the most useful and broad application of graph theory in the world today.

Primary sources:

Kempton, W., and Dhanju, A., 2006, Electric Vehicles With V2G: Storage For Large Scale Wind Power, available at http://www.udel.edu/V2G/docs/KemptonDhanju06-V2G-Wind.pdf

and

IEEE Power and Energy Magazine, Vol. 3, Number 6, Working with Wind: Integrating Wind into the Power System

Posted in Topics: Science, Technology

No Comments

Viral Marketing

http://www.cs.washington.edu/homes/pedrod/papers/kdd01a.pdf

Often when companies go about mining for potential customers to market to, they estimate the expected value of a particular customer as the expected profit less the cost of marketing. What companies do not usually consider is the network value of the potential customer, that is, the expected revenue garnered as a result of other people in the social network of the marketed-to customer buying the product. The type of marketing that does exploit the network value of customers is called viral marketing, and according to the authors, it “is still a black art.” The stated intent of the authors’ paper is to advance the study of viral marketing.

In their introduction the authors state, “Ignoring network effects when deciding which customers to market to can lead to severely suboptimal decisions.” But they go on to say that it is very difficult to quantify the network value of customers: the network value of a customer does not depend solely on him, but potentially on the entire network. In the past, companies have taken losses due to the difficulties of accurately mapping customer networks. Now, however, with the vast supplies of information available via the internet, it has become easier to acquire the data necessary for such modeling.

The bulk of the paper attempts to construct such a model. The authors depict each customer as a Boolean variable (0 if he didn’t buy the product, 1 if he did), and each customer Xi has a set of neighbors Ni describing his social network. To describe their model they use mathematical concepts not discussed in class, but the main idea, using social networks to better market products, is very closely related to what we have studied thus far in class.
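
The paper itself works with Markov random fields over these variables; the sketch below illustrates the underlying notion of network value with a much simpler cascade-style simulation: the expected number of adoptions triggered by marketing to a single customer. The graph, the influence probability, and the trial count are all invented.

```python
import random

neighbors = {                     # Ni: each customer's social neighbors
    0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3],
}
P_INFLUENCE = 0.3                 # chance an adopter converts each neighbor

def expected_adoptions(seed_customer, trials=5000):
    rng = random.Random(seed_customer)        # reproducible per customer
    total = 0
    for _ in range(trials):
        adopted, frontier = {seed_customer}, [seed_customer]
        while frontier:
            current = frontier.pop()
            for nb in neighbors[current]:
                if nb not in adopted and rng.random() < P_INFLUENCE:
                    adopted.add(nb)
                    frontier.append(nb)
        total += len(adopted)
    return total / trials

# Customers in well-connected positions trigger more downstream purchases.
for customer in neighbors:
    print(customer, round(expected_adoptions(customer), 2))
```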

Posted in Topics: Education

No Comments

Types of Combinatorial Auctions

In class we used a preferred-seller graph to model an ascending-bid auction of multiple items, where the number of items and bidders is the same. Later, we adapted it to the auctioning of a single item by adding “fake” items for which each bidder’s valuation is zero. However, not all types of auctions are easy to model, in particular combinatorial auctions, which are discussed in Noussair’s Innovations in the Design of Bundled-Item Auctions.

Computational complexity is one of the obstacles to modeling combinatorial auctions. Consider the magnitude of the search space an algorithm would have to plow through to compute the prices and standing bids if each bidder could place 2^n - 1 package bids on n items. In fact, the underlying winner-determination (integer programming) problem is usually NP-complete. Banks and Porter’s Adaptive User Selective Mechanism (AUSM) sidesteps this problem by accepting a bid for a package P containing item i only if it is higher than the sum of the standing bids on the packages S that contain i. The example Noussair gives involves a scenario where the standing bid for items A and B is $500 and the standing bid for items C and D is $700. If a bidder wanted items B and C, he would have to bid $1,201.
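
A small sketch of the AUSM acceptance rule described above, with the numbers taken from Noussair's example:

```python
def min_acceptable_bid(new_package, standing_bids):
    """A new bid on `new_package` is admitted only if it exceeds the sum of the
    standing bids on packages that share any item with it.
    standing_bids: dict mapping frozenset of items -> current standing bid."""
    blocking = sum(price for pkg, price in standing_bids.items()
                   if pkg & new_package)       # packages sharing at least one item
    return blocking + 1                        # must strictly exceed that total

standing = {frozenset("AB"): 500, frozenset("CD"): 700}
print(min_acceptable_bid(frozenset("BC"), standing))   # 1201
```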

Efficiency and the revenue generated are often used to measure the performance of an auction. Efficiency is achieved when the matching maximizes total valuations. One of the faults of AUSM is that it might not be efficient. The example Noussair provides involves a situation where bidder i’s valuation of item A is $1,000 and bidder j’s valuation of item B is $800, while bidder k values the package of A and B together at $1,200 and each item separately at $0. Suppose bidder k bids $1,001 for the package: then there is nothing i or j can bid alone to win the items (because it would be beyond their valuations), unless they decide to work together to overbid k. However, bidder k can still find another bidder to outbid bidders i and j. This story illustrates the threshold problem.

A combinatorial clock auction addresses the threshold problem by having the seller raise the prices of the items until one bidder remains (much like the way the auction algorithm we saw in class runs). Each round, the bidders decide whether or not they still want to buy the items and packages at the current prices, and a bidder who wants only a subset of a package must be willing to pay for the whole package. Afterwards, if there are unclaimed packages, a program is run that searches for a configuration that maximizes the sum of the bids. This type of auction reduces the cognitive complexity of combinatorial auctions that Noussair notes in the article: each bidder doesn’t have to think about all the possible bids for all the items he wants; he just has to accept or reject the current price. Since the “clock” sets the prices, this type of auction also limits collusion among bidders.

For an introduction to combinatorial auctions, read dakru’s post.

Posted in Topics: Education

Comments (2) »

Network Theory and International Terrorism

Uncloaking Terrorist Networks

Modeling Terrorist Networks 

We have seen examples of networks and the application of network theory to topics like traffic patterns and auctions. Though these topics are interesting and important in their own right, it isn’t always apparent that the study of networks can have a much more serious and deadly application: the study of international terrorism. Recall the example of high school dating: the nodes that originally represented students now come to represent some of the most dangerous and violent terrorists in the world, and the links that used to represent relationships between teenagers now become channels for distributing explosives and intelligence between terrorist plotters.

Analysts have made a graph, similar to the one we saw in class representing high school dating, showing the network of the terrorists involved in the September 11th attacks. This graph shows the network produced, including the ties that resulted from studying the interactions between those involved (especially the pilots). It shows Mohammed Atta (widely believed to be the head of the suicide hijackers) to be a very well-connected node. However, further analysis has revealed interesting insight into the structure of command in terrorist networks. If weak ties (such as short phone calls and emails) are removed, Nawaf Alhazmi (one of the hijackers of American Flight 77) becomes the most central node. It is theorized by some that Alhazmi was responsible for planning the attacks and that Atta was more involved during the course of the actual attacks. In terrorist networks, properties like closeness and betweenness have very different implications: they help government agencies determine which suspects they should follow to learn more about the network, and which individuals are so heavily involved in network traffic that their arrest would damage the network the most.

This is one area that we have not learned about much in class: how the removal of nodes and links affects the network overall. Perhaps what makes terrorist networks so hard to stop is their resilience to the removal of nodes (the arrest of suspects). These networks are designed so that the flow of information and control cannot easily be interrupted by the removal of one or two individuals. Terrorist organizations are structured as “cells,” each of which is capable of operating independently of the others and can survive on its own. These cells appear as densely connected clusters in the terrorist network, with members separated by only one or two links and having many connections to others in the cell, while connections between cells, or to a small number of controlling individuals, are few. This sparseness makes it very hard for the authorities to dismantle these organizations. Further study of how terrorist organizations are formed and operated will hopefully help authorities investigate and eliminate some of the most dangerous networks in existence.
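
A toy illustration of that resilience, on an invented cell-structured graph (nothing here is real intelligence data): repeatedly remove the highest-betweenness node and watch how slowly the network falls apart.

```python
import networkx as nx

G = nx.Graph()
cells = [range(0, 5), range(5, 10), range(10, 15)]
for cell in cells:                       # each cell is densely connected internally
    G.add_edges_from((a, b) for a in cell for b in cell if a < b)
G.add_edges_from([(4, 5), (9, 10)])      # only sparse links between cells

for _ in range(6):                       # simulate six arrests
    centrality = nx.betweenness_centrality(G)
    target = max(centrality, key=centrality.get)
    G.remove_node(target)
    largest = max(len(c) for c in nx.connected_components(G))
    print(f"removed node {target}; largest remaining cluster has {largest} members")
```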

Posted in Topics: General, social studies

No Comments

Second-Price Auctions for Internet Advertising Keywords

Source: Internet Advertising and the Generalized Second-Price Auction: Selling Billions of Dollars Worth of Keywords by Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz

Pay-Per-Click advertising is the primary source of revenue for internet search companies such as Yahoo! and Google. When a user enters a search query, advertisements for sites that match the query are returned along with the search results. In class, Prof. Kleinberg mentioned that Google uses a second-price auction to sell the rights to have ads displayed beside certain search keywords. Generally, in second-price auctions the buyer should bid his true value. In reality, multiple advertisements can appear for a given keyword, so advertising providers such as Google and Yahoo! use a modified form of the second-price auction to place advertisements on the page. The key differences are that a limited number of ads can appear for each keyword and that some placements (at the top of the page) are better than others (at the bottom of the page). In this modified second-price auction, which the authors refer to as GSP, short for generalized second-price, advertisers bid on a keyword, and when a user searches for that keyword, the advertisements are shown in order of bid price: the highest bidder gets the most desirable placement. When a user clicks on an advertisement, the owner of that ad is charged the second-price amount for that click, that is, the bid price of the ad shown one position below the one the user clicked.
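
A minimal sketch of that allocation and pricing rule, ranking purely by bid as in the paper's stylized model:

```python
def gsp(bids, num_slots):
    """bids: advertiser -> bid per click.  Returns slot number -> (advertiser,
    price per click), where each winner pays the bid of the advertiser ranked
    one slot below."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    allocation = {}
    for slot in range(min(num_slots, len(ranked))):
        advertiser, _ = ranked[slot]
        price = ranked[slot + 1][1] if slot + 1 < len(ranked) else 0.0
        allocation[slot + 1] = (advertiser, price)
    return allocation

print(gsp({"a": 10, "b": 4, "c": 2}, num_slots=2))
# {1: ('a', 4), 2: ('b', 2)}
```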

The article shows that, given the same bid prices, GSP guarantees the seller a payoff at least as large as that of the second-price auctions we studied in class. Perhaps even more interesting is that the added twist of ad positioning means that bidding one’s true value is no longer a dominant strategy in GSP. The authors give the following counterexample showing why bidding truthfully is no longer dominant (a quick check of the arithmetic follows the list):

  • 3 bidders with values $10, $4, $2 vying for 2 positions.
  • Position 1 has 200 clicks per hour, Position 2 has 199 clicks per hour.
  • If bidder 1 bids his true value, his payoff will be (10-4)*200 = $1200
  • If instead bidder 1 bids $3, his payoff in Position 2 will be (10-2)*199 = $1592
  • $1592 > $1200, therefore all agents bidding truthfully is not a dominant strategy.
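
Checking that arithmetic directly, where payoff = (value - price per click) x clicks per hour and the price is the bid of the advertiser ranked one slot lower:

```python
values = {1: 10, 2: 4, 3: 2}     # per-click values of bidders 1, 2, 3
clicks = {1: 200, 2: 199}        # clicks per hour in positions 1 and 2

truthful = (values[1] - values[2]) * clicks[1]   # bidder 1 wins position 1, pays bidder 2's bid
shaded   = (values[1] - values[3]) * clicks[2]   # bidding $3 drops bidder 1 to position 2
print(truthful, shaded)                          # 1200 1592 -> shading beats truthful bidding
```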

Much as we did in class, the article also shows how GSP relates to a generalized English auction, where bidding starts with everyone in and the price at $0. As the price increases, bidders drop out; the last remaining bidder is given the first ad position at the price at which the second-to-last bidder dropped out, and so on. In this auction, a bidder will drop out just before the price reaches the point where moving down to the next position would no longer increase his payoff. This strategy is an equilibrium and yields the same payoffs as the English and second-price auctions we learned about in class, but only if all the other bidders follow the same strategy. The generalized English auction differs from the ordinary English auction in that it still has no dominant strategy.


Posted in Topics: Education, Technology

View Comment (1) »