This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Cold War Strategies Becoming Outdated by U.S. Nuclear Supremacy

The Cold War held a shaky peace through a strategy known as Mutual Assured Destruction (MAD). As a result of the nuclear arms race of the 1950s and 1960s, both the United States and the Soviet Union possessed enough thermonuclear weapons and ballistic missiles to theoretically destroy all human life on Earth several times over. Moreover, both sides developed early-warning satellite and radar systems, along with hardened silos and mobile platforms (such as bombers, submarines, and wheeled launchers). These allowed either side to detect and survive a first strike long enough to launch a full-scale retaliation against the attacker. A first strike would therefore be suicidal, which kept both the U.S. and the USSR from attacking. The payoff matrix for a MAD strategy would be:

MAD Payoff Matrix

The “-9” payoff assumes that the first strike manages to partially cripple the retaliatory capabilities of the victim (this assumption does not change the outcome). The Nash equilibrium is clearly “Not First Strike/Not First Strike”, which also happens to be Pareto optimal. This equilibrium prevented the Cold War from erupting into full-scale nuclear war.

With the Cold War over, the United States and Russia have taken radically different approaches to the maintenance and upgrade of their nuclear stockpiles. “The Rise of U.S. Nuclear Primacy”, an article in the journal Foreign Affairs, describes how the U.S. has become the sole dominant member of this nuclear club. While the U.S. keeps putting money into nuclear research and development and keeps its first-strike submarines on patrol, the Russians have let their mobile platforms deteriorate and their early-warning systems develop dangerous holes. As a result, a first strike by the U.S. has a high chance of eliminating most of Russia’s retaliatory capabilities, and a first strike by Russia would pack less of a punch. The payoff matrix has thus evolved to:

New Payoff Matrix

Although the Nash equilibrium is still the Pareto-optimal “Not First Strike/Not First Strike”, the U.S.’s “Not First Strike” strategy is no longer as dominant as it was before, since the “First Strike” U.S. payoff has increased dramatically. If the Russian retaliatory capability is entirely eliminated, MAD ceases to be an applicable strategy. At that point, there will no longer be a distinct Nash equilibrium. However, we can hope that the U.S. will not launch the first strike because doing so is not Pareto optimal (i.e. does not maximize social welfare for both the U.S. and Russia). As the game theory computer realizes in the 1983 film WarGames, “The only winning move is not to play.”
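The equilibrium reasoning above can be checked mechanically. Below is a minimal sketch in Python; the payoff numbers are illustrative guesses (the original matrix image is not reproduced here), chosen so that, as described above, a first strike only partially cripples retaliation and mutual striking is worst of all.

```python
# Hypothetical MAD payoff matrix (illustrative numbers, not the post's actual image).
# Strategies: 0 = Not First Strike, 1 = First Strike.
# payoffs[(a, b)] = (payoff to player A, payoff to player B).
payoffs = {
    (0, 0): (0, 0),      # uneasy peace
    (1, 0): (-9, -10),   # A strikes first; retaliation only partially crippled
    (0, 1): (-10, -9),
    (1, 1): (-11, -11),  # mutual first strikes: worst outcome for both
}

def nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria: profiles where neither
    player can improve by unilaterally deviating."""
    equilibria = []
    for a in (0, 1):
        for b in (0, 1):
            a_best = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in (0, 1))
            b_best = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in (0, 1))
            if a_best and b_best:
                equilibria.append((a, b))
    return equilibria

print(nash_equilibria(payoffs))  # [(0, 0)] -- Not First Strike / Not First Strike
```

With these numbers the only equilibrium is mutual restraint; raising the “First Strike” payoff toward zero, as the article suggests has happened, is what erodes that equilibrium.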

Posted in Topics: Education, social studies

No Comments

Toshiba Drops HD DVD as Blu-ray Gains Power

    Recently, Toshiba announced that it will stop developing HD DVD products and will drop out of the high-definition DVD market.  This was due to several recent changes in the market, mainly the decisions of several large retailers and media companies, such as Walmart and Netflix, to abandon HD DVD for Blu-ray.  This situation is analogous to a network in which two nodes (Toshiba’s HD DVD and Sony’s Blu-ray) carry the bulk of the power.  In an ideal situation, both companies start out with the same market available to them and thus the same number of customers.  However, as time goes on and the market develops, many customers begin to prefer one product over the other.  As one product gains more customers, the other has fewer potential customers left to sell to.

In this case, one of the reasons Toshiba lost the war is that it did not manage to develop the European and Asian markets well.  Because it did not have as many potential consumers, it played the price card in an effort to keep up with Sony.  Unfortunately, in many cases, this may have caused Toshiba to lose money rather than increase its profit.  Dropping out of the high-definition DVD market may cause serious setbacks for Toshiba down the road, as the high-definition format will quickly gain momentum and render the older DVD format outdated.

As can be seen from this recent development in the high-definition DVD market, developing connections with a wide spectrum of markets can be much more important than simply selling a product at a lower price.  A larger market of potential consumers means more power in the industry.

http://www.cnn.com/2008/BUSINESS/02/19/toshiba.hdd/

Posted in Topics: General, Technology

No Comments

Power In Corporate Networks

Power in networks is clearly visible in the financial world.  Investment banking, with the buying and selling of stocks on a stock exchange, is all about network exchange theory.  On a larger scale, corporations are constantly upgrading and improving their products to reach new consumers who would otherwise go to other companies.  An example prominent in the business news today is Microsoft’s bid for Yahoo!.  Microsoft has openly been pursuing Yahoo! for some time, but it made its official bid at the beginning of February this year.  Yahoo! refused, stating that the bid was too low.  Yahoo! is now looking for new bidders, either to drive up Microsoft’s offer or to dilute Microsoft’s power and prevent a hostile takeover.

Competitors of Yahoo!, specifically Google, are threatened by this potential deal and are looking to prevent the merger from happening because they would lose their dominance in the market. It is also worth mentioning that Google is unable to bid for Yahoo! itself because of antitrust regulations (which are designed to prevent monopolies).

http://business.timesonline.co.uk/tol/business/industry_sectors/technology/article3419923.ece This particular article from the Times of London describes Google’s appeal to regulators to intervene and investigate the merger, because Google does not want the deal to go through.

The web provides a structure for all kinds of networking, whether social, financial, or any other type. This deal is directly related to the way we obtain information and network on the Internet. If the merger were to go through, the new Microsoft-Yahoo! would dominate the web search, mail, and advertising markets.

This relates to the concept of network exchange theory, which considers buyers and sellers.  Yahoo!’s strategy is to find other buyers to provide some kind of competition for Microsoft and, in turn, create a “game”.  There aren’t many other companies that can afford to outbid Microsoft, but if there were, the resulting competition would force Microsoft to raise its bid, which would be more profitable for the seller.  We have learned in class that the more options a seller has, the more power it has.  In this specific case, the more options (buyers) Yahoo! has, the less likely it is to lose control of its own company to a hostile takeover by Microsoft.
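The bargaining logic above can be sketched with a stylized ascending auction: the sale price rises until only the highest-valuation buyer remains, i.e. to the second-highest valuation. With a lone buyer there is almost no upward pressure at all. The dollar figures below are purely hypothetical.

```python
def sale_price(valuations, reserve=0):
    """Price in a stylized ascending (English) auction: bidding stops at the
    second-highest buyer valuation, or at the seller's reserve price when
    there is only one buyer."""
    vals = sorted(valuations, reverse=True)
    if len(vals) == 1:
        return max(reserve, 0)  # a lone buyer only has to beat the reserve
    return max(vals[1], reserve)

# Hypothetical valuations (in $bn) for Yahoo!: one buyer vs. two competing buyers.
print(sale_price([45]))      # 0  -- a lone bidder pays only the reserve
print(sale_price([45, 42]))  # 42 -- a rival bidder drives the price up
```

This is why finding even one credible rival bidder transforms Yahoo!’s position: the seller’s power comes from its outside options, not from the highest bidder’s generosity.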

 http://business.timesonline.co.uk/tol/business/columnists/article3422406.ece 

Posted in Topics: Education

No Comments

The Internet’s Undersea World

How one clumsy ship cut off the web for 75 million people

Fiber optic cables are the backbone of the entire internet. Miles of these cables span continents, sending data as pulses of light from one destination to another. But what happens if the destination is somewhere across the ocean, or if data needs to travel between two islands? Believe it or not, there are thousands of miles of submarine fiber optic cables connecting continents and countries where land-based cables cannot. Often, a submarine cable is the only edge connecting two nodes: a bridge, in graph terms. If this bridge is ever severed, millions lose their connection to the rest of the global internet.

On January 30, 2008, two of the undersea fiber optic cables off the coast of Egypt were severed. The cause was initially thought to be a ship trying to anchor off the coast in severe weather conditions, but a later report determined that an abandoned anchor was to blame. This incident cut connections for 75 million people in India and the Middle East. Many believe this to be a “wake-up call” showing how vulnerable the network is to natural disasters or attack. Although most of the damage was felt in the Middle East and India, many American companies had trouble connecting to India-based support services. The accompanying image shows the undersea network of fiber optics and the detailed extent of the outage.

[Image: network2.jpg, showing the undersea fiber-optic network and the extent of the outage]
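Finding such critical edges is a standard graph problem. The sketch below locates every bridge in a toy undirected network with a single depth-first search; the country nodes and links are invented for illustration, not the real cable map.

```python
def find_bridges(graph):
    """Find all bridges (edges whose removal disconnects the graph) in an
    undirected graph given as {node: [neighbors]}, via one DFS pass
    (Tarjan's discovery-time / low-link technique)."""
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge: can reach an ancestor
                low[u] = min(low[u], disc[v])
            else:                              # tree edge: explore, then update
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # v's subtree cannot reach above u
                    bridges.append((u, v))

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return bridges

# Hypothetical mini-network: India hangs off Egypt by a single cable,
# while Egypt, Italy, and France form a redundant cycle.
graph = {
    "India": ["Egypt"],
    "Egypt": ["India", "Italy", "France"],
    "Italy": ["Egypt", "France"],
    "France": ["Egypt", "Italy"],
}
print(find_bridges(graph))  # [('India', 'Egypt')]
```

Cutting any edge on the Egypt-Italy-France cycle leaves everyone connected; cutting the one bridge strands India, which is exactly what the anchor incident did on a larger scale.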

Posted in Topics: Education

No Comments

Network Theory to Thwart Terrorists?

Networks Thwarting Terrorists

New York Times

By PATRICK RADDEN KEEFE

Published: March 12, 2006

A lot is being said about the NSA’s warrantless eavesdropping program, and many people feel their privacy is being invaded. Many of the critics who oppose the program argue that this terrorist surveillance actually does little to help catch terrorists, and that instead it is used to invade the lives of normal, everyday people like you and me. However, there are two sides to every argument. Many advocates of the program feel that this technique can be used very effectively to help thwart growing terror cells. An interesting point brought up by this article is that after Sept. 11, a Cleveland consultant by the name of Valdis Krebs decided to map the network of the 9/11 hijackers. He started with two of the plotters, Khalid al-Midhar and Nawaf Alhazmi, and, using press accounts, produced a chart of the interconnections — shared addresses, telephone numbers, even frequent-flier numbers — within the group. His results were quite interesting: all 19 hijackers were tied together by a small number of links, and a disproportionate number of those links converged on one man: their leader, Mohamed Atta. This correlates with something else mentioned in the article that we discussed in class: Milgram’s “six degrees of separation.” What if we had been able to detect this network forming, and what if we had noticed the large number of links that led to Atta? Could we have done something to prevent the devastating day that was 9/11?

Actually, there was a program at the time that did have Atta on its radar. Pre-9/11, an Army project by the name of Able Danger aimed to map the Al Qaeda terror networks by “identifying linkages and patterns in large volumes of data.” This project may actually have identified Atta as a key player even before the attacks played out. The use of this type of network-based analysis helped bring about the rise of the surveillance program. Although this kind of surveillance isn’t as in-depth as the kind the Foreign Intelligence Surveillance Court oversees, it implicates far more people, which has led to a number of problems for civilians. What basically goes on in this “surveillance” is that a computer monitors the metadata of your phone calls and emails to see if you talk to any terrorists. The problem is that we are all connected to a vast number of people within a few degrees of separation, so the NSA ends up with numerous false leads that often implicate innocent civilians tangled in the web.

The other problem with this approach is that there are just too many people involved in this web to keep track of everyone. There is simply not enough manpower to track all the names and connections associated with each individual node. For example, the NSA database of suspected terrorists contains approximately 325,000 names, and some of the Able Danger analysts produced charts of these individuals that spanned up to 20 feet and were covered in small print. Now we see why nothing was done about Atta even when these graphs were created: there is constant information overload, and the fact that we didn’t pursue him shows the weakness of this technique. Many people still feel that we just need to find the centralized nodes with a massive number of edges, but even when we capture those “hubs,” as we have many times since 9/11, the network continues to function, because it simply promotes another leader and the job still gets done.
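The “hub” idea above amounts to degree centrality: count each node’s links and look for the outlier. A minimal sketch, with made-up placeholder names rather than any real data set:

```python
from collections import Counter

# Hypothetical contact links (shared addresses, phone numbers, flights).
# Letters are placeholder people, not the actual 9/11 data.
links = [
    ("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
    ("B", "C"), ("D", "E"), ("E", "F"),
]

# Degree centrality: each endpoint of a link gains one unit of degree.
degree = Counter()
for u, v in links:
    degree[u] += 1
    degree[v] += 1

# The "hub" is the node holding a disproportionate share of the links.
hub, hub_degree = degree.most_common(1)[0]
print(hub, hub_degree)  # A 4
```

The catch, as the post notes, is scale: on a 325,000-name graph this computation is trivial, but interpreting the thousands of high-degree nodes it surfaces is not.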

In theory, the approach we are taking to catch terrorists is a good one. However, we need to find methods to make the process more effective. We need ways to pinpoint these terrorist cells, because all the false leads will continue to discourage the use of network theory. As the article shows, this method does help find leaders, but sometimes there are just too many leads, and when a true lead does come in, we can’t afford to miss it. We don’t want another event like 9/11 to occur on our soil again.

Posted in Topics: Education

No Comments

Optimizing Traffic Flow in Airplane Boarding

http://www.wired.com/science/discoveries/news/2008/02/boarding

In class, we examined how varying patterns affect the flow of traffic through a network, particularly in Braess’s Paradox, where self-interested drivers intent on finding their best strategies in fact produce a worse situation overall for everyone involved. This article from Wired.com focuses on how to reduce the severe inefficiency of airplane boarding and explains Jason Steffen’s novel method of improving the process, possibly making it twice as fast as the current back-to-front method. He argues that the current back-to-front method of boarding is not optimal because, although it eliminates the aisle clogging that would occur in a front-to-back boarding scenario, it doesn’t eliminate the clogging that occurs in each section of the plane as a large group of people attempt to situate themselves in one small space.

We saw in examples of Braess’s Paradox that procedures meant to help traffic flow can sometimes throw off the equilibrium and ultimately delay all users of the network. In the method proposed by Steffen, airlines would fill a plane by asking all passengers in alternating rows to board at the same time; passengers in seats 1A, 3A, 5A, 7A, etc., would board the plane together and would have enough room to get situated faster because their seats are staggered throughout the plane. They would be followed by another group of staggered passengers, and so on until the plane is filled. Although this model slows the flow of passengers onto the plane, each passenger has more time and space to get situated without slowing other passengers down; the outcome is a shorter boarding time for the entire aircraft. This model is more efficient than the back-to-front model because it results in a better equilibrium, in which each passenger takes the shortest amount of time to get situated. The back-to-front model creates delays for the first group of passengers entering the plane because it forces each passenger to wait for previous passengers to get situated before moving on, which in turn delays each subsequent passenger boarding the plane.
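The aisle-clogging effect is easy to see in a toy simulation. The model below is far simpler than Steffen’s (one passenger per row, a single aisle, fixed stowing time) and only contrasts front-to-back with strict back-to-front boarding, but it shows how boarding order alone determines whether luggage stowing happens serially or in parallel.

```python
def boarding_time(order, stow_time=3):
    """Toy single-aisle model. `order` lists each passenger's target row in
    boarding order; aisle positions are 1..max row. Each time step a passenger
    moves forward one position if it is free, or stands stowing luggage
    (blocking the aisle) once at their own row. Returns the step at which
    the last passenger sits down."""
    queue = list(order)          # rows still waiting to enter, in boarding order
    aisle = {}                   # aisle position -> [target_row, stow_remaining]
    t = 0
    while queue or aisle:
        t += 1
        for pos in sorted(aisle, reverse=True):   # front-most passengers act first
            target, stow = aisle[pos]
            if pos == target:
                if stow == 1:
                    del aisle[pos]                # finished stowing: seated
                else:
                    aisle[pos][1] -= 1            # still stowing, still blocking
            elif pos + 1 not in aisle:
                aisle[pos + 1] = aisle.pop(pos)   # walk one row forward
        if queue and 1 not in aisle:              # next passenger enters the door
            aisle[1] = [queue.pop(0), stow_time]
    return t

rows = list(range(1, 11))
front_to_back = boarding_time(rows)        # clogs: everyone stows at the door
back_to_front = boarding_time(rows[::-1])  # pipelines: stowing happens in parallel
print(front_to_back, back_to_front)
```

In this toy model strict back-to-front pipelines perfectly; Steffen’s improvement only shows up once a row holds several passengers who get in each other’s way, which is the within-section clogging the article describes.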

This new method of optimizing boarding traffic has proven efficient in computer models but has yet to be tested in real-life situations; such tests would demonstrate whether the model moves traffic flow closer to equilibrium or presents another Braess’s Paradox.

Posted in Topics: Education

No Comments

The Dollar Auction

 Say we have a situation in which a seller is auctioning off one dollar to two buyers. The rule is that both bidders must pay their bids, but only the highest bidder receives the dollar. We will assume that there is no collusion between the two buyers, to slightly simplify the model.

To start, the first bidder will bid $.01, looking for a $.99 profit. The next bidder will bid $.02, looking for a $.98 profit. Now the first bidder will look at his options: either bid $.03 and make a $.97 profit, or stay at his previous $.01 bid and lose that money. This reasoning drives up the bids of both bidders, even past the price of the prize. The first bidder will bid $.99, and the second bidder will have a choice between bidding $1.00 and breaking even, or losing $.98. The bids can escalate to such a point that the $1 originally considered becomes insignificant compared to the ever-rising bids. This is called an irrational escalation of commitment.
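The escalation can be sketched as a loop between two myopic bidders, each of whom raises whenever topping the current bid looks cheaper than writing off their sunk bid. An arbitrary cap stops the otherwise endless loop; everything here is a stylized model, not a claim about real auctions.

```python
def dollar_auction(prize=1.00, increment=0.01, max_bid=2.00):
    """Two myopic bidders alternate. Each raises by one increment whenever the
    worst case from raising (next_bid - prize) beats the sure loss from
    folding (their own sunk bid). Returns both bidders' final sunk bids."""
    bids = [0.0, 0.0]            # each bidder's standing (sunk) bid
    high = 0.0
    turn = 0
    while True:
        next_bid = round(high + increment, 2)
        if next_bid > max_bid:   # arbitrary cap: the loop never stops on its own
            break
        if next_bid - prize < bids[turn]:   # raising looks cheaper than folding
            bids[turn] = next_bid
            high = next_bid
            turn = 1 - turn
        else:
            break
    return bids

print(dollar_auction())
```

Note that the raise condition is satisfied at every step (losing an extra cent always looks better than abandoning the whole sunk bid), so both bidders sail past the $1 prize and only the external cap ends the game; that is the irrational escalation in miniature.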

It seems natural for human beings to take what is already lost and try to justify it. In this example, each bidder considers the amount that they bid previously and in an effort not to lose any more, bids higher. It doesn’t matter who ends up winning or losing this auction, both pay severe prices for their actions. This fallacy of judgment shows up in many real-world scenarios: manufacturers will keep unprofitable plants open at the expense of more profitable ones for the sole reason that it would be “a waste” to shut them down and lose the capital that went into them, researchers will continue to labor over projects that have already proven to be impractical and useless only because good money went into starting them, and wars will continue to be fought even though there is no resolution in sight.

There is no easy way to address this concern. At the end of the dollar auction, one buyer has to bow out and accept the larger loss in order to save both himself and the other bidder from losing any more to the ever-increasing cost of participating in a lost cause. This is not to say that the dollar auction can never be profitable to anyone, or that past commitments should never enter into considerations for the future, but we should take a sober look at what the dollar auction can become.

http://en.wikipedia.org/wiki/Irrational_escalation_of_commitment

Posted in Topics: Education

No Comments

PGP/GPG, The Web of Trust and the Strong Set

PGP (Pretty Good Privacy) and its free counterpart, the GNU Privacy Guard (GPG), are encryption suites that allow users to communicate securely over the internet. They can also be used for digital signing and trusted identification. To do this, they utilize a pair of keys, one private and one public. Information that is encrypted using the public key can only be decrypted by someone holding the private key, and the private key can be used to create a digital signature for some data so that anyone holding the public key can verify that the signature was made by the private key. Such a signature can only be created by the private key, and any change to the signed data will invalidate the signature.

While this may seem like a panacea for identifying people over the internet, there is a problem with key management and key trust. Specifically, while you can verify that a specific email was signed by key #C34AA484, how do you know that this key actually belongs to me (Benjamin Seidenberg)? The key does have my name on it, but the name was user-supplied at the time the key was generated. The same issue occurs with trusting SSL certificates for secure web browsing: yes, the connection to a website is encrypted, but how do you ensure that the website is actually what it claims to be?

There are two approaches to this problem. Websites use a central authority to verify certificates: your browser trusts several companies, such as Verisign, and when you visit a secure website, your browser sees that Verisign has verified the identity of the website for you and proceeds without incident. However, there is no similar infrastructure for GPG keys.

Instead, GPG/PGP keys use a model called the web of trust. Every key is capable of signing other keys. This signature is a cryptographic statement that means “I <key 1> (believe/know) that <key 2> actually belongs to the real-life person whose name is on it.” (The human event in which this takes place is called a key signing and usually involves checking a government-issued ID). These signatures allow us to construct a network of keys, using a directed graph. This network is called the web of trust. Every key is a node, and every signature is a directed edge. Paths from one key to another are called trust paths. If I have signed the key of person A, and they signed the key of person B who signed the key of person C, there is a trust path from me to C and I have reason to believe that this key actually belongs to him or her.

One interesting aspect of the web of trust is that a large, densely connected sub-graph known as “the strong set” has emerged, within which every key can reach every other key by some trust path. To enter the strong set, a key needs to sign, and be signed by, a key already within it. Because this network is easily parsed by computers, there are automatically generated reports with statistics about the set. One important statistic is the Mean Shortest Distance (MSD) of a key: the average length of the shortest path from every key in the set to the key in question. This shows how connected a key is. Keys with a low MSD tend to act like hubs or focal points and represent the more social individuals.
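The MSD is just an average of shortest-path lengths, which a breadth-first search computes directly. A minimal sketch over an invented four-key web of trust (real reports like the ones cited below run the same idea over the full key set):

```python
from collections import deque

def msd(graph, target):
    """Mean shortest distance to `target`: the average, over all other keys
    that can reach it, of the shortest directed signature-path length.
    `graph` maps each key to the keys it has signed."""
    # BFS backwards from the target over reversed edges, since we want
    # paths *from* every key *to* the target.
    reverse = {k: [] for k in graph}
    for signer, signed in graph.items():
        for key in signed:
            reverse[key].append(signer)
    dist = {target: 0}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for prev in reverse[node]:
            if prev not in dist:
                dist[prev] = dist[node] + 1
                queue.append(prev)
    others = [d for k, d in dist.items() if k != target]
    return sum(others) / len(others)

# Tiny hypothetical web of trust: an edge points from signer to signed key.
web = {
    "me": ["A"],
    "A":  ["B", "me"],
    "B":  ["C", "A"],
    "C":  ["me"],
}
print(msd(web, "me"))  # (1 + 2 + 1) / 3
```

A key that many others have signed directly, like a keysigning-party regular, pulls this average down, which is why the best-connected keys in the real reports belong to the most socially active participants.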

Based on the report from 2/17/08, an examination of 201,989 publicly available keys found a strong set of 110,957 keys (about 54% of the sample). In this strong set, the average MSD was 6.3449. The most connected key belongs to Peter Palfrader and has an MSD of 3.5588. One interesting phenomenon is that all of the top (most connected) 25 keys and all but 12 of the top 50 keys belong to people involved with the Debian project. The Debian project relies very heavily on GPG for authentication and demands that members be in the web of trust before they can join.

References:

http://dtype.org/keyanalyze/explanation.php

http://keyserver.kjsl.com/~jharris/ka/2008-02-17/

http://www.debian.org

See Also:

http://pgp.cs.uu.nl/plot/

Posted in Topics: Technology

No Comments

The Influence of Strangers

In class we have talked about the power that comes from occupying certain locations in a network. Nodes with a lot of edges, or well connected nodes, have a lot of power. Furthermore, we have talked about how edges form. They can form randomly but they are more likely to form between two nodes with a mutual neighbor. This concept is called triadic closure. In networks with positive and negative edges, the signs of the existing edges have a great influence on the sign of newly forming edges.

Our discussions in class regarding the formation of edges have been limited to direct, local influences. At first, it doesn’t even seem likely that the signs of a node’s existing edges would influence the sign of a newly formed edge between that node and a previously isolated node; after all, the isolated node didn’t know either node before. Despite this, nodes connected to only one of the two nodes have a very real effect on edge formation. Imagine a social network, for instance, that represents the positive and negative attitudes of high school students toward their peers, and look at how edges form between a new student and the existing students. The new student is far more likely to form a positive opinion of a given student if most of the other students already have a positive opinion of that student. This is because people assume that if a person is well liked, then that person is probably worth liking; it is human nature to emulate each other. However, it would be easy to attribute this particular scenario to factors outside the structure of the network: the charisma of the well-liked student was likely responsible for the new edge being positive.

Because of this, we must look at a different scenario. The Presidential elections are a good example. At the beginning of the primary season, Hillary Clinton was so far ahead in the polls in so many states that it looked like only a matter of time before her opponents bowed out of the race. As the primaries began, Barack Obama won a few states. This had very little meaning, since Obama had gained only a handful of delegates, except that something started happening to the polls. As Obama won primaries, the polls started shifting in his favor. Clinton won a few primaries toward the beginning, but once Obama had won enough states for newspaper articles to start emphasizing his growing popularity, the polls in upcoming states shifted drastically in his favor. Furthermore, in the Republican race, the early primaries were split between the three major candidates. As soon as John McCain started gaining delegates over the other candidates, however, the polls swung quickly in his favor.

Obviously there are a number of extraneous factors, but there seems to be a very direct influence of perceived opinions on the opinions of others, even when the other people are complete strangers. This is not necessarily confined to human networks. The popular Google search engine ranks pages in part by how many other pages link to them. In other words, the popular pages get returned.
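Google’s actual algorithm, PageRank, goes one step further than raw link counts: a link from an already-popular page counts for more than a link from an obscure one, which mirrors the "people like whom others like" effect above. A minimal power-iteration sketch over an invented four-page web:

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal PageRank by power iteration. `links` maps each page to the
    pages it links to. A page's score is fed by the scores of the pages
    linking to it, so popularity compounds."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            if not outs:                       # dangling page: spread rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:                              # split rank among outgoing links
                for out in outs:
                    new[out] += damping * rank[page] / len(outs)
        rank = new
    return rank

# Invented web: three pages all point at C, so C's popularity snowballs.
links = {"A": ["C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # C
```

Note that A outranks B and D purely because the popular page C links to it, even though all three have exactly one incoming link: perceived popularity propagates through the network just as it does among the strangers in the polls.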

Posted in Topics: social studies

No Comments

Passive Networking

In the NYTimes today, Michelle Slatalla wrote an article entitled “Building a Web of Influence.” In it, Slatalla discusses the effects and benefits of social networking sites, particularly LinkedIn, which allows one to form contacts mostly for the sake of professional advancement. Among other things, she discusses a couple of ideas that are relevant to the more formal treatment of social networks we’ve seen in class.

In LinkedIn, you have what are called “first-degree contacts”: people you know directly. The primary means of social advancement in LinkedIn is having your first-degree contacts introduce you to their first-degree contacts. This is essentially a form of Triadic Closure, and it is one of the huge benefits of services such as LinkedIn. Slatalla comments:

Before I knew it I had created a business network that included 99 connections (first-degree contacts), more than 10,000 friends of friends and more than 700,000 third-degree contacts.   

Slatalla has the potential to become friends with 10,000 more people, simply via immediate Triadic Closure, and an order of magnitude more than that if taken further.

However, what LinkedIn truly provides, as Slatalla mentions throughout the article, is the idea of passive networking. In this model, you do not necessarily reach out to other people to form friendships; rather, people can (and will, assuming your friendship seems worthwhile) come to you. Essentially this creates a culture where people seek out other contacts, and this, in combination with Triadic Closure, is what makes it such a successful service.

One can imagine a network with many nodes but no links. If everyone keeps to themselves, it is unlikely that any edges will form. However, if even just one person seeks out many contacts, that creates the possibility for triadic closure, and thus rapid network growth.
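That last scenario is easy to simulate: start with a single "connector" linked to everyone else and apply one round of triadic closure. The ten-node network below is invented for illustration; nine edges become a complete graph of forty-five.

```python
import itertools

def close_triads(edges, nodes):
    """One round of triadic closure: for every pair of nodes that share a
    mutual neighbor but lack an edge, add the missing edge."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    new_edges = set(edges)
    for a, b in itertools.combinations(nodes, 2):
        if b not in adj[a] and adj[a] & adj[b]:   # a mutual friend exists
            new_edges.add((a, b))
    return new_edges

nodes = list(range(10))
# A network of loners, except one "connector" (node 0) who reaches out to all.
edges = {(0, n) for n in nodes[1:]}
print(len(edges))                       # 9
print(len(close_triads(edges, nodes)))  # 45 -- one round completes the graph
```

One active networker is enough to give every other pair a mutual acquaintance, after which triadic closure can do the rest, which is exactly the growth pattern passive networking relies on.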

Posted in Topics: Education

No Comments