This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Symmetry Breaking and the History of the World

Wikipedia Synopsis of “Guns, Germs and Steel” 

In his award-winning book “Guns, Germs, and Steel”, author Jared Diamond attempts to understand and explain why certain cultures throughout the world (and throughout history) have dominated others with their culture and wealth: in his own words, why some have more and others have less. Diamond claims to be unbiased and tries to avoid any assertions that could be grouped with opinions of racial superiority. Instead, he views the resultant state of the world as a confluence of pre-conditions that brought about changes at different times across otherwise similarly positioned peoples. Diamond believes that agriculture, geography, and germs were extremely important differentiating factors that gave certain civilizations particular developmental advantages. For example, settled agriculture allowed for the development of non-production societal jobs, in which some members of a community could forego hunting and gathering. Specialization in turn bred many inventions in social structure and human endeavor.

Diamond’s methodical analysis of the confluence of agriculture, geography, and germs may be related to our discussion of symmetry breaking: otherwise equal groups of people with similar interests and abilities are eventually differentiated by fluctuations in their pre-conditions. Variations in agriculture, environment (geography), and germs around the world tip similar populations down very diverse paths. As in Schelling’s neighborhood example, even symmetric groups with identical and seemingly unobtrusive desires can become segregated simply by their environs (neighborhood, or agriculture/geography/germs). Also, as with the examples discussed in class, segregation theories can be very controversial: inferring causation from the resultant data can lead to extreme and incorrect conclusions about populations. Diamond has received heavy criticism from all sides, especially the charge that he may be providing a geographic and environmental justification for Eurocentric supremacy. It is, I think, incorrect to believe in Eurocentric supremacy, even with an environmental justification. My understanding is that Diamond’s point is not to say that Europeans or Africans or Asians (or their civilizations) are better or worse or ill-positioned, but that some differences pointed to as “defining” are really not inherently characteristic, and that these differences are the result of a kind of symmetry breaking.

Posted in Topics: Education

No Comments

Paradox of Groups

http://shirky.com/writings/group_enemy.html

Clay Shirky’s writings discuss the concept of a “critical mass” that Schelling describes in Micromotives and Macrobehavior. The idea is that there is some threshold - be it a percentage of membership, a number of members actively attending, or so forth - beyond which a group is guaranteed to succeed, and below which a group is guaranteed to die out. Of course, “die out” and “succeed” are subjective measures and depend extensively on group metrics, as does the critical mass itself; the values of these metrics for one group tell you nothing about those of another. For some groups, “die out” may signify that only a few core members remain: the people so loyal to the cause or idea behind the group that their attendance does not depend on how many other supporters are present. For others, “die out” may signify that the group has entirely dissipated; in this case, the reasons behind the group’s initial organization were probably not strong or significant enough to shift anyone into the category of “core member.” Likewise, “succeed” need not mean that a group becomes wildly popular and draws new members at incredible rates; it may simply mean that everyone who is interested in the group’s ideals, or some percentage of them, has attended. Neither outcome is significant because of the final attendance count it produces. Rather, both outcomes are significant because they are stable equilibria. Reduce the number of members below the threshold, and the group will “die out,” whatever that means for it, and it will remain in that state absent external influence. Bring the number of members above the threshold, and, similarly, the group will “succeed.”
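
This tipping-point behavior is easy to see in a toy threshold model. Below is a minimal sketch of my own construction, not from Shirky’s essay: each member has a personal attendance threshold and keeps coming only if current attendance meets it, and the member counts and thresholds are invented for illustration.

```python
import random

def simulate(thresholds, attendance, rounds=100):
    """Iterate attendance until a fixed point: member i keeps coming
    only if current attendance meets their personal threshold t_i."""
    for _ in range(rounds):
        new = sum(1 for t in thresholds if t <= attendance)
        if new == attendance:  # stable equilibrium reached
            return attendance
        attendance = new
    return attendance

# Five die-hard "core members" (threshold 0) plus 95 fence-sitters.
random.seed(0)
thresholds = [0] * 5 + [random.randint(30, 50) for _ in range(95)]

for start in (10, 30, 40, 50, 70):
    print(f"start at {start:2d} -> settles at {simulate(thresholds, start)}")
```

Starts below the critical mass collapse to the five core members (“die out”); starts above it climb to full attendance (“succeed”); nothing in between is stable.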

Clay mentions W. R. Bion’s notion of “social stickiness” - that is, the idea that people will often stay in a group in order to maintain the appearance of being socially adept and to uphold ideals of politeness and propriety, even when they dislike being there. People also want to avoid the guilt of abandoning their fellow group members. However, if enough people are united in their decision to leave, many more follow them - the same people who were contemplating leaving before, but refused in order to assuage their consciences. This “paradox of groups,” as Bion puts it, is also responsible for the phenomenon that occurs at the end of class practically every day. If the professors continue their lectures past 12:05 pm, a few students start shuffling bags and papers in an attempt to convince other students to do the same. A few individuals will cautiously start to stand up to see if others will follow their lead, and, if others do not, will quickly switch to reaching for their bags in an attempt to save face and not appear too eager to leave. However, if even a few others do follow suit, a large portion of the class will often begin to leave. Having acquired the moral support of others, as well as the sense of anonymity and deindividuation afforded by a large group, they feel less accountable for leaving before the professors have finished talking. This also ties into the lectures on information cascades: once enough people have decided to leave (i.e., received a High signal), most of the rest of the class will follow their precedent, despite wanting to hear the rest of the lecture.

Posted in Topics: Education

No Comments

Moral Judgement As A Social Phenomenon

The Article

Carey, Benedict. “When Death is on the Docket, Moral Compass Wavers.” The New York Times. 7 Feb 2006. 15 April 2007. <http://www.nytimes.com/2006/02/07/health/psychology/07exec.html?pagewante…>.

“It’s something we do whether we’re for it or against it, and we try to make the process as humane as possible,” says Burl Cain, warden of the Louisiana State Penitentiary. “The issue is coping, how we cope with it.”


We. This article from The New York Times touches upon the nature of morality: how we as a society create and follow laws based on the number of other people we believe are following them. Laws were created to help define the boundaries that allow society to run smoothly and cohesively. As in the models we have been studying in class, if an individual believes that most of his peers are following a set of rules, he too will follow them, on the instinct that if a good number of people are making a choice, it must be a good one. The difference here is that we’re no longer dealing with toaster ovens and fax machines; we’re dealing with the root of morality, and it’s scary to find out that it’s a much more democratic, Kafka-novel-like, intangible thing than we had all hoped. If a group of students collaborates on a take-home test or project, the students who aren’t in the group but are aware of the occurrence will fear for their own grades and will be under pressure to cheat themselves. They engage in “moral distancing,” seek solace in the fact that many others before them have made the same decision, and redefine their moral standards. Ethics seems to pop up more and more in academia, with increased pressure to get good scores and technology providing more and more ways to cheat.


Michael Osofsky and Philip Zimbardo of Stanford University did a study on prison guards which seems to point to the collectivity of moral judgment. It draws on what researchers call “diffusion of responsibility.” In the study, they gave a series of 19 statements, such as “Nowadays the death penalty is done in ways that minimize the suffering,” to prison guards who had and had not been part of an execution team. The results showed that guards involved in executions were much more likely to agree with these statements than guards who had never taken part in one. This is a result of what many psychologists call moral disengagement, whereby people justify immoral acts in order to cope with what they have done.


It takes a “team” of 15 men to perform an execution, not because it is difficult to control the prisoner, but for the sake of the executioners. It turns out that performing the act as a group takes much of the burden off the executioners, allowing them to share the blame with other guards. This is exactly the sort of idea that we have been talking about in class. They are able to rationalize the acts that they are committing simply because of the number of other people making the same decision. Many of them are also deeply religious and rely on the supernatural to justify their deeds. While it would be hard for any sane man to kill another, these men draw on each other and the belief that there are many people out there who believe in the death penalty.


The article really causes one to question the roots of “morality.” Do people make decisions because they are compelled internally or because they believe that it is a decision that the moral majority would agree with? Are feelings of guilt based on societal conditioning or natural sentiment? Should morality be a democratic process?

Posted in Topics: Education

No Comments

Network Effects of Programming Languages

“The Economics of Programming Languages”

“Programming Language Popularity”

In the two articles above, author David Welton analyzes the important and subtle economic characteristics of programming languages. He states that, though languages can be very useful regardless of the number of people using them, most of them gain strength through a large, reinforcing network of users. His claims also depict how languages that successfully establish a broad user base benefit from further increased growth, because inherent popularity effects drive certain languages out toward the popular tail of the user distribution.

Though programming languages essentially have a direct cost of zero (i.e. there is often no price to pay to write code in a given language), they have many indirect costs associated with them, and thus adhere to many of the same economic principles that govern more traditional products in the technology market.

Due to the rapid pace of change in the high tech sector, we often need to evaluate new technologies in order to decide whether to allocate time to learning and using new systems. Jump on the bandwagon too early, and you risk becoming involved with something that just heads downhill or doesn’t go anywhere. Wait too long, and you may find yourself behind the times with regards to “the latest thing”.

Selecting a language involves many factors, and certainly isn’t something that can be considered in a vacuum. Of course, it’s important to pick something that can do the job correctly and efficiently, but depending on what you need to accomplish, and who you have to work with, the availability of external libraries, people to help you out, or even to hire you or be hired by you can all be important factors to weigh.

The two greatest costs involved in choosing or switching programming languages are the time and resources to teach your employees the new grammar and train them to implement it successfully, and the time and resources to convert all of your previous code (written in a different language) into the new one. Because of these severe indirect costs, for a language to emerge above the myriad of other languages, it must have a few defining traits that help it diffuse more easily into and across a vast network. Essentially, a language must cut down on the indirect costs to the programmers and companies who use it, so the savings to “consumers” rely directly on the quality of the language, and less on the actual skill of the programmer. A language that wins adoption is typically one or more of the following (a toy sketch of the resulting lock-in follows the list):

Easier to write - expanding the number of people who can use the language, and thus its value.

More efficient - making for faster programs that take less time or system resources to accomplish what they need.

Higher quality - meaning that less programmer time is spent hunting bugs, and more on developing new features.

More productive - languages that let you do complex things easily mean that you can do more than a competitor using a language that’s slower to develop in.
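
The switching costs described above produce lock-in, sketched in the toy model below. This is my own assumption for illustration, not a model from Welton’s articles: each programmer’s payoff from a language is its intrinsic quality plus a network benefit proportional to its user share, and users defect only when the gain exceeds the switching cost.

```python
def step(share_a, quality_a, quality_b, network=1.0, switch_cost=0.3, churn=0.05):
    """One round: users defect only if the rival's payoff beats their
    current language's payoff by more than the switching cost."""
    payoff_a = quality_a + network * share_a
    payoff_b = quality_b + network * (1 - share_a)
    if payoff_b - payoff_a > switch_cost:
        share_a -= churn  # A's users drift to B
    elif payoff_a - payoff_b > switch_cost:
        share_a += churn  # B's users drift to A
    return min(max(share_a, 0.0), 1.0)

# Language B is intrinsically better, but A has the installed base.
share_a = 0.9
for _ in range(40):
    share_a = step(share_a, quality_a=1.0, quality_b=1.2)
print("A's final share:", share_a)  # A keeps its lead: network lock-in
```

Even though B has the higher quality, A’s installed base makes its total payoff higher, and the switching cost keeps anyone from moving first.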

The reason that less-used programming languages survive in the face of better, more popular competitors is that many languages hold defining traits that allow them to fill different niches in the complex world of computer science. Some languages are tailored for web applications, while others ease the maintenance of a database or the creation of an operating system. The fact that no language is an all-encompassing tool kit for the vastly differing uses of software means that most languages can successfully linger and not be “buried” under the high popularity of a competitor.

Interestingly enough, Welton uses aspects of Google’s search engine and ad pricing to build a general model of the popularity of an array of programming languages. Based on raw Google hits, he shows which languages are “visually” popular, with C and Java leading the way. By comparing the cost-per-click that bidders are willing to pay against the overall visibility of a language, Welton is able to distinguish which languages have more “commercial interest and potential” rather than just sheer visibility. To further depict which “lesser” languages hold strong value, he compares the current number of job offerings for a language against that language’s raw visibility, denoting which languages have the most practical applicability rather than just “overhype.” Overall, this data shows how languages are certainly capable of being driven to a dominant, “popular” position within the industry, but also how the underlying features of many programming languages do not allow the “leaders” to fully separate themselves from the rest.

Posted in Topics: Technology

No Comments

Physical vs. Online Networks

http://www.iht.com/articles/2007/04/12/news/networks.php

In Europe, some companies are taking an innovative approach to integrating physical social networks with online social networks. These companies help individuals find other people with whom they share something in common. For instance, the Xing social network has become a popular tool for European businessmen by providing details of “interests, experience and personal history.” The network has essentially become a tool for linking individuals.

Another such company is Imity, whose approach seems much more radical. Imity builds up a list of whom a given individual has come in contact with. Over time, the network can show which individuals share many friends yet have no link between them; in this sense, Imity is aiding triadic closure between these people. The network can also help individuals with common interests meet. Here, Imity would aid in a “focal closure” similar to the one described in the paper “The Strength of Weak Ties” by Mark S. Granovetter (except that in this case, common interests replace common classes). Imity also incorporates Bluetooth devices in its network: whenever someone who shares many friends or interests with you is near, the system can tell you so. Thus, Imity uses its online network to help create links in the physical (real-world) social network.
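
Here is a sketch of the kind of computation such a service might run (the contact data and function are hypothetical, purely for illustration): find pairs of people who share several contacts but have no link of their own, i.e. the open triangles that triadic closure predicts will close.

```python
from itertools import combinations

# Hypothetical contact graph: who has been near whom.
contacts = {
    "ana": {"ben", "cai", "dia"},
    "ben": {"ana", "cai", "dia"},
    "cai": {"ana", "ben", "eve"},
    "dia": {"ana", "ben", "eve"},
    "eve": {"cai", "dia"},
}

def closure_candidates(graph, min_mutual=2):
    """Yield pairs with at least min_mutual common neighbors but no edge."""
    for u, v in combinations(sorted(graph), 2):
        if v not in graph[u]:
            mutual = graph[u] & graph[v]
            if len(mutual) >= min_mutual:
                yield u, v, sorted(mutual)

for u, v, mutual in closure_candidates(contacts):
    print(f"suggest {u} <-> {v} (mutual contacts: {', '.join(mutual)})")
```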

Posted in Topics: Education

No Comments

Economic Power, Innovation, and Network Effect

In 2003, Wal-Mart, using its economic prowess, mandated that its top 100 suppliers begin using Radio Frequency Identification (RFID) tags by 2005.  This technology allows inventory to be monitored remotely, electronically, and automatically.  Its applications range from tracking how many items of a particular product are on the shelf in a store to how many pallets of a good are available at a distribution warehouse.  Wal-Mart is interested mostly in the large-scale application; it wants to track its supplies.  It is estimated that the company could save $8.35 billion by forcing its suppliers to adopt this technology.

The cost of developing and implementing the technology is not going to be incurred by Wal-Mart.  The suppliers have to upgrade their facilities and equipment, and they will have to develop RFID tags that overcome present limitations.  The reason the suppliers are willing to bear this cost is Wal-Mart’s size.  The company employs over one million people, earned over $300 billion in revenue (FY 2006), and has been atop the Fortune 500 list for several years.  (It was number two in FY 2006, largely because of record-high energy prices.)  For many suppliers, Wal-Mart accounts for 10-40% of sales; losing that would be detrimental to their business.

This mandate from Wal-Mart will likely result in a network effect.  In the initial stage, the market for RFID tags is below the critical n1 value (from the April 2nd handout) at which the network effect is large enough to sustain the technology.  In this situation, a technology normally fails to gain traction; suppliers and retailers would be reluctant to adopt it.  Wal-Mart’s mandate immediately moves n from below n1 to at least n1; in essence, the suppliers have no choice but to adopt the technology.  Further, Wal-Mart’s suppliers tend to be the biggest in their respective industries, meaning that n would most likely be sufficient to surpass n1.  Once n1 is reached, the benefits of the network effect can be reaped.  Suppliers tagging products for Wal-Mart will likely tag products for other retailers as well, since the marginal cost of doing so is minimal after the initial fixed cost of complying with Wal-Mart.  In addition, actual production costs will decrease as the market for RFID tags grows, allowing smaller suppliers to adopt the technology too.  If the technology delivers the cost savings it promises, RFID tags will spread throughout the retail industry supply chain.  Depending on n2, it could spread to other industries and to smaller-scale applications as well.
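
A small sketch of this threshold logic, assuming the textbook functional forms r(z) = 1 - z for willingness to pay and f(z) = z for the network benefit (the handout’s exact functions may differ): the equilibria are the roots of (1 - z)z = p, and the lower root plays the role of n1.

```python
p = 0.2  # price of adopting the technology (illustrative)

# Marginal-adopter condition: (1 - z) * z = p, a quadratic in z.
disc = (1 - 4 * p) ** 0.5
n1, n2 = (1 - disc) / 2, (1 + disc) / 2
print(f"n1 (unstable tipping point) = {n1:.3f}, n2 (stable) = {n2:.3f}")

def adoption_path(z, steps=200):
    """Best-response dynamics: adoption grows while the marginal
    consumer values the technology at more than its price."""
    for _ in range(steps):
        z = min(max(z + (1 - z) * z - p, 0.0), 1.0)
    return z

print("start just below n1:", round(adoption_path(n1 - 0.05), 3))  # decays to 0
print("start just above n1:", round(adoption_path(n1 + 0.05), 3))  # rises to n2
```

A mandate like Wal-Mart’s acts as the jump that starts adoption above n1 instead of below it.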

Posted in Topics: Education

No Comments

Power of a Single Node

http://chaos1.la.asu.edu/~yclai/papers/PRE_02_ML_3.pdf


In most networks, each single node has a minimal effect on the network as a whole; the removal of single nodes generally does not endanger the network. An example of this is the Internet: if a few sites are removed, the Internet stays whole and runs as smoothly as it did before. In “Cascade-based attacks on complex networks,” Adilson E. Motter and Ying-Cheng Lai focus on nodes that do have significant power. One example of such a network is a power transmission grid. According to Motter, when one node in this grid fails, the balance of the grid fails, and the network needs to redistribute the loads of the other nodes so that they can support the links that were broken by the removed node. This can cause a cascade of overload failures. The same holds true for routers: when one goes down, the others must increase the amount of information they transmit to cover for the downed router, which in turn causes congestion. These two cases are examples of what Motter calls a cascading failure, in which one failure leads to a redistribution of loads, which in turn causes more failures. The chance that the first failure causes such a cascade depends on how much load that one node carries: the more important a node is to a network, the greater the chance that a cascading failure will occur.
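
A simplified sketch of this cascade mechanism (my own simplification of the Motter-Lai setup, with betweenness centrality as the load measure and an assumed capacity of (1 + alpha) times the initial load; requires the networkx library):

```python
import networkx as nx

def cascade(G, alpha=0.2):
    """Remove the most loaded node, then repeatedly remove any node
    whose redistributed load exceeds its capacity."""
    G = G.copy()
    load = nx.betweenness_centrality(G)
    capacity = {v: (1 + alpha) * load[v] for v in G}
    G.remove_node(max(load, key=load.get))   # attack the biggest hub
    failed = 1
    while True:
        load = nx.betweenness_centrality(G)  # loads redistribute
        overloaded = [v for v in G if load[v] > capacity[v]]
        if not overloaded:
            return failed
        G.remove_nodes_from(overloaded)
        failed += len(overloaded)

# A scale-free test graph; a real grid or router map would differ.
G = nx.barabasi_albert_graph(200, 2, seed=1)
print("nodes lost after attacking one node:", cascade(G))
```

The larger the initial load of the attacked node (and the smaller the spare capacity alpha), the larger the cascade, matching Motter’s point that important nodes are the dangerous ones.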

This article relates closely to our discussion of diffusion in networks. We learned that when a certain number of nodes switch to a different product, a cascade can form, causing every node to switch. Here, when a node with a high load fails, a cascade of failures occurs, causing the nodes around it to fail as well.

Posted in Topics: Education

No Comments

World’s economies show similarities in economic inequality

http://www.physorg.com/news95074548.html

According to this research, the wealth of a population follows two separate distributions. Scientists from the Saha Institute of Nuclear Physics and the Institute of Mathematical Sciences have analyzed different sets of data showing that the poorer majority of the population follows a Gibbs/log-normal distribution, while the wealthiest follow a power-law distribution. The power law reflects the “rich get richer” effect. The data, income tax returns and net values of assets, were taken from multiple countries, including the U.S., the UK, Japan, and India, as well as from 19th-century Europe. Surprisingly, all of this data supported the same conclusion: the lower 90% of the population, in terms of income, follows a log-normal distribution. As income increases, there is an initial rapid rise in population followed by a rapid decline. The wealthiest 10% follow a Pareto power law. Interestingly, the Gibbs distribution that the poorest 90% of the population follows is also the distribution of energy in an ideal gas in equilibrium; this is because, at the global level, the scientists say, the exchange of assets and the scattering of molecules are both random. These models could ultimately help bring about a more equitable distribution of wealth.

The “rich get richer” phenomenon explains why only the top 10% of the wealthiest individuals follow a power law. Typically the largest increases in wealth accrue to the richer part of the population, widening the gap between rich and poor. The Gibbs distribution, however, applies only to the lower 90% of the population, where, as the article implies, the exchange of assets is effectively random. Neither of these facts is intuitive, and both are quite interesting. The explanation lies with the “rich get richer” theory that appears in many other places, such as networks or the popularity of books: the richest “nodes” grow the fastest, which is what produces the power-law distribution. The most popular person or object gains popularity the fastest because the most people know about him or it. Below the top 10%, however, the power law gives way, as the “randomness” of asset trading becomes more of a factor than the rich getting richer.
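
The ideal-gas analogy is easy to demonstrate with a minimal kinetic-exchange simulation (a generic sketch in the spirit of such models, not the researchers’ actual code): agents repeatedly pool and randomly split their money, just as colliding molecules exchange energy, and an exponential (Gibbs) distribution emerges.

```python
import random

random.seed(42)
N = 1000
money = [100.0] * N  # everyone starts equal

# Random pairwise exchanges; total money is conserved, like energy
# in molecular collisions.
for _ in range(200_000):
    i, j = random.sample(range(N), 2)
    pool = money[i] + money[j]
    cut = random.random()
    money[i], money[j] = cut * pool, (1 - cut) * pool

# Crude histogram: the share of agents decays exponentially with wealth.
for lo in range(0, 500, 100):
    share = sum(1 for m in money if lo <= m < lo + 100) / N
    print(f"{lo:3d}-{lo + 99:3d}: {share:.1%}")
```

Reproducing the power-law tail of the top 10% takes extra ingredients (for example, heterogeneous saving propensities), which is consistent with the article’s point that the two regimes follow different laws.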

Posted in Topics: Education

View Comment (1) »

Modeling Heresy as a Disease

http://arxiv.org/ftp/cond-mat/papers/0306/0306031.pdf

During the Medieval Inquisition, the Roman Catholic Church regarded heresy as a disease. This paper talks about heresy in 13th and 14th century France.

According to this paper, the Catholic Church noticed that heresy spread like a disease throughout Christendom. Leaders of the church likened heresy to rotting food, disease among livestock, and the plagues that cursed man. At first the church tried to wipe out heresy through intimidation; intimidation later turned into the mass killing of populations. The church took the attitude of “We’ll kill them. Let God sort them out.” But the church noticed that even though it was punishing massive numbers of people, the ones who escaped were spreading heresy even more. These individuals were highly connected, which allowed them both to evade capture and to spread heresy.

The church decided to shift its focus to these highly connected individuals. “Heresy hunters” would interrogate prisoners, asking what other heretics they associated with, in what sense they associated, and at what times they had known each other. From this information the church was able to target the most connected individuals. When these heretics were captured, the church would force them to publicly repent for not following Roman Catholic doctrine, hoping that this public display of repentance would slow the spread of heresy. Later, the church changed the punishment to a kind of solitary confinement so that heresy could not spread. This greatly disrupted heresy networks.

Heresy spread quickly across countries, much like a contagious disease. Even when heresy was believed to have been wiped out, it would come back again and again. Mass inoculation (mass murder) was unsuccessful in stopping this disease. It was only when data was gathered, a strategy was developed, and highly connected individuals were targeted that the church was able to slow the spread of heresy.
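
The church’s eventual strategy is essentially targeted immunization, and its advantage over indiscriminate punishment shows up even in a toy simulation (the network and all parameters are illustrative; requires networkx): remove k nodes at random versus the k best-connected nodes, then measure the outbreak.

```python
import random
import networkx as nx

def outbreak_size(G, removed, p_spread=0.3, seed=0):
    """Contagion sketch: each infected node infects each remaining
    neighbor independently with probability p_spread."""
    rng = random.Random(seed)
    alive = set(G) - set(removed)
    start = next(iter(alive))
    infected, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in G[v]:
            if u in alive and u not in infected and rng.random() < p_spread:
                infected.add(u)
                frontier.append(u)
    return len(infected)

G = nx.barabasi_albert_graph(500, 2, seed=3)
k = 25  # how many individuals can be "removed"
hubs = sorted(G, key=G.degree, reverse=True)[:k]
rand = random.Random(1).sample(sorted(G), k)
print("outbreak after random removals:", outbreak_size(G, rand))
print("outbreak after targeting hubs: ", outbreak_size(G, hubs))
```

Targeting the hubs fragments the contagion’s paths, which is exactly why interrogating prisoners for the best-connected heretics worked where mass punishment did not.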

Posted in Topics: Education

No Comments

Networks & the Phenomenon of “The Rich-Get-Richer”

Everyone has heard the phrase “the rich get richer”, as it is often used to describe inequality in economics. This theory can be applied not only to economics but also to understanding networks and their power laws. A node in a network usually gains new connections in proportion to how many it already has, resulting in rapidly growing hubs; in contrast, nodes with few connections usually gain fewer new connections, or lose connections over time. In class we discussed power laws and how they pertain to networks. We saw that the probability that page “l” experiences an increase in popularity is directly proportional to “l”’s current popularity: the more links a page has, and the more times it appears in searches, the more likely it is to gain more connections.

In an article entitled “Why the Rich Get Richer,” researchers from various universities investigate the underlying principles behind preferential attachment and how the “rich get richer” model applies to networks. They found that internet routers “could make trade offs between the network distance between nodes and the number of connections between them. By tweaking the conditions, they could make preferential attachment — a power-law distribution of the number of connections — stronger or weaker.” The full published write-up on the study can be found here.

[Figure: An example of “preferential attachment”]
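
A minimal sketch of preferential attachment itself (an illustrative construction of the class model, not the paper’s exact procedure): each new page links to an existing page with probability proportional to that page’s current number of links, and a power-law-like distribution emerges.

```python
import random
from collections import Counter

random.seed(7)
# One list entry per edge endpoint, so a uniform draw from `targets`
# picks a node with probability proportional to its degree.
targets = [0, 1]                 # start with a single link: 0 -- 1
degrees = Counter({0: 1, 1: 1})
for new_node in range(2, 10_000):
    chosen = random.choice(targets)   # the rich get richer
    degrees[chosen] += 1
    degrees[new_node] = 1
    targets += [chosen, new_node]

dist = Counter(degrees.values())
for k in (1, 2, 4, 8, 16, 32):
    print(f"{dist.get(k, 0):5d} pages with {k:2d} links")  # roughly a power law
```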

Posted in Topics: Education

View Comment (1) »