This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Websites as Graphs

Websites As Graphs provides a way of visually modeling the HTML tags throughout a website. The nodes are color-coded to represent different types of tags - links, images, DIVs, line breaks, etc. The model is created by a Java applet, which lets you watch the graph as it's built; the nodes and edges for larger sites take longer to "fall into place." There is a Flickr site where people can post images of their personal graphs. Comparing a site to its visual model reveals some interesting patterns. For example, the model for http://www.spinning-jennie.com, a personal blog, has a large, roughly symmetrical central cluster dominated by orange nodes (orange stands for line breaks and block quotes). The emphasis on orange is not surprising for a blog, which is essentially a lot of formatted text with optional links and images.

You can compare this to a less clustered graph, such as the one for the BBC's homepage. This model shows a site with many more distinct pages and a wide variety of content (images, tables, outside links, etc.). It seems that branches of the tree ending in mostly dead-end blue nodes represent pages consisting mostly of links to outside sites. In the upper right of the BBC graph, a cluster of blue nodes leading to dead-end purple nodes likely represents a page with a lot of links to individual BBC photos.

One can also make inferences about the structure of a site and the style of the person who coded it - the BBC site is laid out with a lot of tables (red nodes), as compared to another site which might use frames or CSS positioning.

I ran the class weblog through the modeler and got this. There is a large cluster of blue and grey nodes off to one side, then a sprawling tree of oranges and greens. The yellow on one side likely represents the input form for new entries. I'm not sure what the circle of blues represents - a page with ONLY links and no formatting would be unusual.
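
For readers curious how such a tag graph could be assembled, here is a minimal sketch of the idea (not the applet's actual code): it parses a snippet of HTML with Python's standard-library parser, links each tag to the tag that encloses it, and assigns colors by tag type. The color scheme and the sample markup are my own guesses for illustration, and the sketch assumes the networkx library is available.

```python
# Minimal sketch of the "websites as graphs" idea: each HTML tag becomes a
# node, edges follow the nesting structure, and colors are assigned by tag
# type.  The color scheme below is a guess at the applet's convention.
from html.parser import HTMLParser
import networkx as nx

COLORS = {"a": "blue", "img": "purple", "div": "green",
          "br": "orange", "blockquote": "orange", "table": "red"}
VOID_TAGS = {"br", "img", "hr", "input", "meta", "link"}  # never closed

class TagGraphBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.graph = nx.Graph()
        self.stack = []        # path of currently open tags
        self.counter = 0

    def handle_starttag(self, tag, attrs):
        self.counter += 1
        node = f"{tag}-{self.counter}"
        self.graph.add_node(node, color=COLORS.get(tag, "grey"))
        if self.stack:                   # connect to the enclosing tag
            self.graph.add_edge(self.stack[-1], node)
        if tag not in VOID_TAGS:         # void tags never get a closing tag
            self.stack.append(node)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

builder = TagGraphBuilder()
builder.feed("<div><blockquote>hi<br></blockquote>"
             "<a href='#'><img src='x'></a></div>")
print(builder.graph.nodes(data=True))
print(list(builder.graph.edges()))
```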

Posted in Topics: Technology

Comments (2) »

“Word Of Mouth” Marketing

The big guys in the advertising industry have long been aware of the power of social networks, and have tried in many different ways to capture the power of face-to-face, word-of-mouth communication in order to turn a profit. The general idea is that people respond more to information coming from those they know and trust than to mass media advertisements, which they often do not notice or, if they do, may not perceive as credible. This past summer I had the experience of my life working at a young advertising agency called "BzzAgent":

http://www.bzzagent.com

Instead of providing traditional media services, the company gives its clients access to its network of 200,000+ (and growing daily) "Bzz Agents". When a campaign for a new product is launched, agents who fit the chosen criteria (demographic, psychographic, geographic) are invited to sign up for the campaign. Their job is to try out the product (they receive a free sample in the mail) and, if they approve of it, to share it with their friends, neighbors, co-workers - anyone they think might be interested in the product and positively affected by it. Some of these campaigns have proven extremely successful. But who, you may wonder, is allowed to become a BzzAgent? Interestingly enough, the answer is anyone. By simply going to

http://www.bzzagent.com/signup/NewAgentSignup.do

one can register and join the network of agents immediately. Campaigns that fit the information one provides appear on the homepage every time the individual logs in. Reading "The Tipping Point" by Malcolm Gladwell as part of the assigned reading for Econ 204 made me ponder this seeming lack of selectivity. According to Gladwell, there are "sneezers" - people prominent in their social circles who have a knack for spreading a new product or idea like a virus. In his view, it is these "influentials", the mavens or "hubs" of social circles, who are the reason products catch on and attract a following. This philosophy is quite different from that of Dave Balter, the founder and CEO of BzzAgent, who thinks real social power resides in the honest word of mouth of everyday people. An interesting dichotomy, but I would like to point out that BzzAgent is to date a thriving business.

Posted in Topics: Education, social studies

View Comment (1) »

‘Intelligent’ local traffic routing to avoid Braess’ paradox

Internet traffic is as susceptible to Braess’ paradox as vehicle traffic is. Kagan Tumer and David H. Wolpert of NASA Ames Research Center present an improvement to greedy internet traffic routing algorithms in their paper “Avoiding Braess’ Paradox through Collective Intelligence”.

We have talked about how imposing rules from outside the system can improve performance in a Braess’ paradox situation. What Tumer and Wolpert’s work adds is the possibility that purely local rules can also alleviate the effect of Braess’ paradox on global performance.

Tumer and Wolpert first show that Shortest Path Algorithms (SPAs), which each router on the internet uses to try to minimize the latency of the packets it sends through the network, lead to Braess’ paradox situations when the load on the network is relatively high, just as individual drivers trying to minimize their travel time do in road networks. They then propose a routing algorithm based on the COIN (COllective INtelligence) concept that allows individual nodes to make independent routing decisions which nevertheless improve traffic flow over the entire network. Their algorithm operates under the restriction that the private utility of each node increases if and only if the world utility (analogous to social welfare in market networks) also increases [Tumer 10, 11]. In essence, the algorithm makes each node value its choice of routing paths according to how each path benefits the performance of the network as a whole, rather than just the short-run performance of the individual node. Each node ‘learns’ which routing paths are most beneficial to the global system using a fairly simple machine learning algorithm.
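
To see the kind of situation their algorithm is designed to avoid, here is a toy calculation of Braess’ paradox on the standard four-node example from class (my own illustration, not Tumer and Wolpert’s COIN algorithm): with selfish, greedy routing of 4,000 units of traffic, the equilibrium latency rises from 65 to 80 once a free shortcut is added.

```python
# Toy illustration of Braess' paradox on the classic four-node network, not
# the COIN algorithm itself.  The two "narrow" links have latency x/100 for
# x units of traffic; the two "wide" links take a constant 45.
DEMAND = 4000  # packets (or cars) per unit time

def narrow(x):     # congestible link
    return x / 100.0

WIDE = 45.0        # constant-latency link

# Without the shortcut the two routes are symmetric, so at equilibrium the
# demand splits evenly and both routes have equal latency.
per_route = DEMAND / 2
latency_before = narrow(per_route) + WIDE                 # 20 + 45 = 65

# With a zero-latency shortcut, the narrow->shortcut->narrow route is better
# for every individual user no matter what the others do, so all traffic
# piles onto it -- that is the new (and only) equilibrium.
latency_after = narrow(DEMAND) + 0.0 + narrow(DEMAND)     # 40 + 0 + 40 = 80

print(f"equilibrium latency without shortcut: {latency_before}")
print(f"equilibrium latency with shortcut:    {latency_after}")
```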

Tumer and Wolpert’s algorithm does not necessarily find the optimal routing configuration for a given network (a difficult problem in general). However, it does succeed in avoiding Braess’ paradox in situations where the greedy algorithm fails to do so–often with a significant increase in performance. In cases where adding an extra route causes latency to increase significantly under the greedy routing algorithm, the COIN algorithm often shows little or no increase in latency, and in some cases even a significant drop in latency–in effect, the algorithm has become smart enough not to fall into Braess’ paradox situations.

Whether to use top-down regulations or ‘intelligent’ local rules to avoid Braess’ paradoxes seems to depend on the nature of the network you are investigating. For internet traffic, local rules are much preferred because of the decentralized nature of the internet–there is no central authority (at least not yet) to tell internet servers how to route their traffic, and even if there were, its regulatory influence would be subject to the same problems as any other internet traffic. For road traffic, however, transportation authorities often have good control of the roadways and a bird’s-eye view of the network, and so can make decisions for traffic flows much more effectively than individual drivers, whose only thought is “I need to get to work!”

Posted in Topics: Mathematics, Science, Technology

No Comments

Rising Costs in Rush Hour Travel

In class we have addressed the idea of Braess’ Paradox and how it applies to transportation networks–we showed that, in one example, the addition of a new road will actually throw off the original Nash Equilibrium, increase everyone’s travel time, and make all travelers worse off as a result.

This idea–that new paths and roads can be harmful rather than beneficial for transportation–seems to go against the American way of approaching traffic congestion. In most cases, the solution to traffic jams has been expansion and construction: more roads and more lanes are the only way we know how to deal with increasing traffic and travelers. But there is a sign that our country’s way of approaching this problem may be changing, and it came from President Bush’s recent budget proposal, of all places.

The following link

http://www.washingtonpost.com/wp-dyn/content/article/2007/02/13/AR2007021301159.html

is to a Washington Post editorial that outlines the Administration’s proposal regarding how American cities should begin to deal with traffic problems. The White House proposes following the example set by London, in which “drivers who enter a city center during peak driving hours must pay a fee to use the roads.” The system has worked quite well for London–in the zone controlled by the charges, traffic has decreased 30%, while average journey time for travelers has decreased by 14%.

This idea of congestion charging is sure to upset many Americans, who are already frustrated by the road and tunnel tolls common around many city centers. But the principles behind the proposal are sound: by creating real, explicit economic costs for travelers who wish to use congested roads during rush hour, municipal governments would essentially be tinkering with a large-scale economic network like the one we examined in class. These charges will change the Nash equilibrium currently in place, and if American cities can duplicate London's success, average driving times and traffic will both decrease markedly.
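
To see how a charge can shift the equilibrium, here is a sketch built on the same toy Braess network used in class, not on London's actual road data. The congestion charge is modeled as an extra cost (in minute-equivalents) on the shortcut: a charge of zero leaves drivers stuck at the bad 80-minute equilibrium, while a large enough charge restores the 65-minute one.

```python
# Classroom Braess example: latency x/100 on the two congestible roads,
# 45 minutes on the two wide roads, 4000 drivers.  A congestion charge on
# the shortcut is an extra cost in minute-equivalents; the function returns
# the per-driver cost at the resulting Nash equilibrium.
def equilibrium_cost(toll, demand=4000):
    # Case 1: everyone uses the shortcut.  Stable while the zigzag cost
    # (2*demand/100 + toll) is no worse than a lone deviator's cost
    # (demand/100 + 45).
    if 2 * demand / 100 + toll <= demand / 100 + 45:
        return 2 * demand / 100 + toll
    # Case 2: nobody uses the shortcut and traffic splits evenly, costing
    # demand/200 + 45.  Stable while a lone deviator's zigzag cost
    # (demand/100 + toll) is no better.
    if demand / 100 + toll >= demand / 200 + 45:
        return demand / 200 + 45
    # Otherwise a mixed equilibrium: each congestible road carries
    # 100 * (45 - toll) drivers and every used route costs the same.
    return (45 - toll) + 45

for toll in (0, 15, 25, 40):
    print(f"toll of {toll:>2} minute-equivalents: "
          f"cost per driver at equilibrium = {equilibrium_cost(toll)}")
```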

On a related note, a controversy has been brewing in London regarding these charges–staff members of the U.S. and German embassies have been refusing to pay the fees, claiming diplomatic immunity! This link

http://news.bbc.co.uk/2/low/uk_news/england/london/4352520.stm

describes in detail both nations' reasoning for not having to pay the charge, while also outlining the British response to this affront.

In addition to being a funny story, this dispute also raises a serious question regarding busy roads and their costs, whether they be monetary or not: What happens if an individual, or a group of individuals, as in this case, chooses to ignore the costs altogether? What sort of effect will this have on the Nash equilibrium in the network? I ask these questions because the reaction of U.S. diplomats to this charge is, in my opinion, a harbinger of how the American people will respond to similar taxes in our major cities.

Posted in Topics: General

Comments (2) »

Where do good ideas come from?

The answer to the above question has been proffered by Ronald S. Burt, the Hobart W. Williams Professor of Sociology and Strategy at the University of Chicago Graduate School of Business. In his paper, http://faculty.chicagogsb.edu/ronald.burt/research/SHGI.pdf, Burt proposes that one does not have to come up with a brilliant innovation in order to be creative; one simply has to recognize the potential of an existing idea that can be reused. This existing idea comes from the networks the individual is a part of. The fundamental element of his account is the structural hole, which he defines as a gap between two individuals with complementary resources or information. When the two are connected through a third individual, the gap is filled, creating important advantages for the third person, who now has the "power to reuse (produce) a good idea". This suggests that creative people are often bridges between diverse networks: they take knowledge that is not valuable in one community and apply it elsewhere, where it is deemed exceptionally valuable. In this manner, they exploit their social capital to earn a competitive edge. As an example, the significance of these special nodes is even noted in discussions of recruitment for the student project teams at Cornell University:
http://www.engineering.cornell.edu/student-services/learning/student-project-teams/resources/testing/future-years.cfm

In addition, the notion of third parties filling structural holes can be compared to the weak ties that serve as bridges in social and corporate networks. These weak bridges are the primary channels for information diffusion among diverse groups. Since the nodes forming these bridges stand to earn most of the information and control benefits, Burt claims that social structures enable competition by creating specific opportunities for certain people and not for others.
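
Burt's broker advantage can also be quantified. The sketch below uses networkx's implementations of two of his structural-hole measures, constraint and effective size, on a small invented graph in which one node bridges two otherwise separate clusters; the node names and edges are made up purely for illustration.

```python
# A small sketch of Burt's idea using networkx's structural-hole measures.
# "broker" spans the hole between two clusters; "insider" sits inside one
# dense cluster of mutual friends.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    # a tight cluster of mutual friends
    ("insider", "a"), ("insider", "b"), ("insider", "c"),
    ("a", "b"), ("b", "c"), ("a", "c"),
    # a second cluster
    ("x", "y"), ("y", "z"), ("x", "z"),
    # the broker spans the structural hole between the two clusters
    ("broker", "a"), ("broker", "x"),
])

constraint = nx.constraint(G)        # low constraint  = many structural holes
effective = nx.effective_size(G)     # high effective size = non-redundant contacts

for node in ("broker", "insider"):
    print(f"{node}: constraint={constraint[node]:.2f}, "
          f"effective size={effective[node]:.2f}")
```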

An interesting extension of his theory is its application to gender inequalities. Burt's research shows that even though women are more comfortable in a small circle of supportive mutual friends while men are drawn to the rough and tumble of an entrepreneurial network, the sizes of their social networks are approximately the same. The gender inequality observed in senior ranks is, then, a result of how these networks are arranged rather than how large they are. Men tend to compartmentalize their networks, and in the process they create multiple structural holes that they themselves fill. They can have separate sets of 'poker buddies', 'golf buddies', 'beer buddies', etc., and these can be mutually exclusive groups. Women, on the other hand, tend to introduce their friends to each other, increasing the redundancy in their networks and reducing their chances of acting as the filler of a structural hole. His research finds that women end up borrowing social capital from their colleagues in order to rise in rank. Competitive advantage, therefore, depends not just on the extent of one's social network but also on access to the so-called structural holes that are either inherent in the social fabric or created artificially.

Posted in Topics: social studies

No Comments

Applying Social Network Analysis to Team Sports

This article is about power and influence in social networks and their application to team sports. The article, entitled "Game Plan - First Find the Leader" and published in BusinessWeek Online in August 2006, discusses how Head Coach Sasho Cirovski of the University of Maryland Terrapins men's soccer team discovered and applied these social networking ideas.

After years of success, the Terps failed to make the NCAA tournament in 2000 and ended their season in the basement of the Atlantic Coast Conference. As the team lost many of its players to professional teams and graduation, Coach Cirovski began to notice a lack of leadership on the field, despite having selected the team's two best players as co-captains. It became clear to him that he had been recruiting only talent, not leaders. The result was a total lack of team chemistry and a frustrated group of talented student-athletes.


Coach Cirovski consulted his brother Vancho, who suggested that the coach distribute a survey to the team similar to one Vancho had used for organizational development at his company. "The results, Vancho said, would identify off-the-radar leaders. Also called social network analysis, such surveys, the results of which are plotted as a web of interconnecting nodes and lines representing people and relationships, are increasingly popular among corporate managers who want to visualize their informal organizational charts." And identify the leaders they did.


Once the survey results were in, Coach Cirovski identified one of the quietest and least celebrated recruits as the player with the biggest influence in the entire team network. The player was immediately named the Terps' third co-captain. The team enjoyed instant success, rallying around its new, tremendously effective leader. In the following seasons, the Terps made four straight College Cup appearances (college soccer's Final Four) and won a national championship in 2005.


Coach Cirovski has since begun strategically strengthening ties between specific players in his team network. He has also fine-tuned his recruiting strategy to take team chemistry, on and off the field, into account.


The concept of power and influence in social networks in the context of team sports is very interesting. It is unlike some of the negotiation examples we have discussed in class in which specific dollar values may be assigned to a network and the payoffs are devised based on a node’s location in the graph. In the case of the soccer team, it is difficult to quantify the value that can be divided between individuals in the team network. Instead of deriving payoffs and determining the rules and logistics of bargaining between nodes, this example relies on concepts discussed in class such as dependence, exclusion, satiation, and betweenness to determine those team members who would be the most effective leaders of the squad. All of this is done by applying the basic principles above to the network graph constructed using the results of the team survey.
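
The article does not publish the team's actual survey network, but the sketch below shows how such an analysis might look in miniature: turn the survey responses into a graph and compute betweenness centrality, under which an otherwise unheralded player who connects separate cliques stands out. All names and edges are invented.

```python
# Hypothetical sketch: an edge means two players named each other on the
# survey (the article does not publish the real data).  Betweenness
# centrality is one way the quiet recruit could surface as the key connector.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    # the two celebrated co-captains sit inside one clique of starters
    ("captain1", "captain2"), ("captain1", "striker"), ("captain2", "striker"),
    ("captain1", "winger"), ("captain2", "winger"),
    # a separate group of reserves and underclassmen
    ("reserve1", "reserve2"), ("reserve2", "reserve3"), ("reserve1", "reserve3"),
    # the quiet recruit is the only player tied to both groups
    ("quiet_recruit", "winger"), ("quiet_recruit", "striker"),
    ("quiet_recruit", "reserve1"), ("quiet_recruit", "reserve2"),
])

scores = nx.betweenness_centrality(G)
for player, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{player:15s} {score:.3f}")
```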


The article discussed above can be found by clicking the link below:

http://www.businessweek.com/magazine/content/06_34/b3998437.htm


Posted in Topics: social studies

View Comment (1) »

Collaboration Networks

Collaboration Networks, networks in which nodes represent researchers and edges between two nodes indicate collaboration on a paper, give a way of modeling the flow of ideas in the academic world.  In the paper “Some Analyses of Erdos Collaboration Graph”, the authors Vladimir Batagelj and Andrej Mrvar apply techniques of analyzing large graphs to the connected component of the global Collaboration Network containing Paul Erdos, a prolific mathematician of the 20th century.  According to the paper, Paul Erdos wrote over 1500 papers, and was a strong supporter of mathematical collaboration.


Given a vertex v of the graph, define the Erdos Number of v to be the distance from v to Paul Erdos, with Erdos himself having Erdos Number 0.  The paper analyzes the subgraph of vertices having Erdos number two or less, called the Erdos Graph, because not much is known about collaboration among mathematicians of Erdos number two or greater.  It's clear that the Erdos Graph is connected, since every vertex is within distance two of Erdos, but if we remove Erdos, the paper says there are seventeen connected components.  As one expects with large social networks, one of these seventeen components is a giant component of 6045 vertices while all the others contain at most 12 vertices.  This is not all that surprising, since intuitively, as we add edges to a large graph, they tend to bring connected components together.
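
Since Erdos numbers are just graph distances from the Erdos vertex, they can be computed with a single breadth-first search. The sketch below does this on a tiny invented collaboration graph; the real Erdos graph data is, of course, far larger.

```python
# Erdos numbers are distances from the Erdos vertex, so a breadth-first
# search from that vertex computes all of them.  The graph is invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Erdos", "Coauthor A"), ("Erdos", "Coauthor B"),
    ("Coauthor A", "Mathematician C"), ("Coauthor B", "Mathematician C"),
    ("Mathematician C", "Mathematician D"),
])

erdos_numbers = dict(nx.single_source_shortest_path_length(G, "Erdos"))
for person, number in sorted(erdos_numbers.items(), key=lambda kv: kv[1]):
    print(f"{person}: Erdos number {number}")
```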


While the paper gives several methods for analyzing large graphs, I will discuss one which I found interesting.  Define the k-core of a graph to be the maximal subgraph in which every vertex has degree at least k (within that subgraph), and let the main core be the k-core with k maximal.  For each vertex v we can define core(v)=k where v is in a k-core but not a (k+1)-core, and we can define core(v)* to be the average of the core values of the neighbors of v.  High values of core(v)* seem to indicate that a mathematician is collaborating mostly with other important mathematicians, so the authors of the paper define the collaborativeness of a vertex v to be coll(v) = core(v)/core(v)*.  The authors propose that this gives a good measure of how open a mathematician is to collaborating with mathematicians who are less important.  While the paper never delves into whether this is a good metric or not, there is some intuition here: for large values of core(v)* we have that v is collaborating with a group of elitists, and for large values of core(v) we have that v is collaborating with a lot of people.
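
The core numbers themselves are easy to compute with networkx, and the paper's collaborativeness ratio follows directly from them. The toy collaboration graph below is invented purely to show the calculation.

```python
# Sketch of the collaborativeness measure: core_number gives core(v), and
# coll(v) = core(v) / core(v)*, where core(v)* is the average core number of
# v's neighbors.  The toy graph is invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    # a dense group of four "important" authors
    ("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D"),
    # "E" collaborates with the dense group and with peripheral authors
    ("E", "A"), ("E", "F"), ("E", "G"),
    # a peripheral author with a single collaboration
    ("H", "B"),
])

core = nx.core_number(G)   # core(v): largest k such that v is in the k-core

def collaborativeness(v):
    neighbor_cores = [core[u] for u in G[v]]
    core_star = sum(neighbor_cores) / len(neighbor_cores)   # core(v)*
    return core[v] / core_star

for v in sorted(G, key=collaborativeness, reverse=True):
    print(f"{v}: core={core[v]}, coll={collaborativeness(v):.2f}")
```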


As a final remark, I would like to suggest some possible directions for research on collaboration graphs, and the Erdos graph in particular.  First, each edge {v,w} in the graph corresponds to some collaboration between mathematicians v and w, but it may also be worthwhile to know how many times v and w collaborated.  Thus it may be useful to model the collaboration as a multigraph, where each edge represents a specific paper the researchers collaborated on.  Furthermore, it would be interesting to see whether the collaboration graph satisfies triadic closure, or at least how often it occurs.  If Jon collaborates with Eva and Eva collaborates with Dexter, is it likely that Jon and Dexter will take note of this and collaborate on future research?


It doesn’t seem likely that much insight can be gained from the entire collaboration network as it is far too large, but if social scientists continue to analyze subgraphs such as the Erdos graph, it is entirely possible that we may learn more about the larger collaboration graph.

Posted in Topics: social studies

No Comments

Google – An Internet “Democracy”

The Internet revolution and Google have completely restructured the flow of information, giving individuals more power to publish their ideas on the Internet. Blogs, forums, podcasts, groups, and personal pages have grown significantly, creating a plethora of individually written content. Our society has trusted Google as the industry leader in organizing all of the web's information. Google uses a technology called PageRank that analyzes the hyperlinks between pages. Google describes it as a democratic system in which each page 'votes' for another page by linking to it. If a page 'votes' for two or more other pages, its 'vote' is split equally amongst the pages it links to, treating each link as an equal fraction of a 'vote.' A page receiving many of these 'votes' earns a higher PageRank and thus has more 'votes' to pass on to the pages it links to. Google uses PageRank to sort its results, placing pages with higher PageRank above others. The technology also incorporates the text of links, assuming that a hyperlink's text is relevant to the page it points to. Ultimately PageRank rewards already popular pages by displaying them first on Google, making them even more popular while ignoring unpopular sites. Best-selling author Steven Johnson explains, “PageRank considers every link from one page to another… letting the amateurs have a vote. There are 40 million blogs out there… there has been an amazing shift toward a mass kind of media – a news democracy” (Link). Google has become a certain kind of democracy, allowing more people to express their work on the web. Unfortunately Google's democracy also offers the web a biased, corrupt, and illogical democratic system.
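
The 'voting' recurrence described above is simple enough to sketch directly. The following toy power iteration is my own illustration of the published PageRank idea, not Google's production system, which adds many refinements; it runs on a tiny made-up web of four pages.

```python
# Minimal power-iteration sketch of the PageRank "voting" described above,
# run on a tiny made-up web graph.
DAMPING = 0.85   # probability of following a link rather than jumping randomly

links = {               # page -> pages it links to ("votes" for)
    "home": ["blog", "about"],
    "blog": ["home", "news"],
    "about": ["home"],
    "news": ["home", "blog", "about"],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):     # iterate until the ranks settle
    new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)   # split the vote equally
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```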

Google’s PageRank system is susceptible to financial influences, allowing people to profit from rebalancing PageRank. Businesses have come to rely increasingly on the Internet to make money: most companies sell, market, and inform online, facing their brick-and-mortar competitors in this virtual arena. A good ranking on Google can make or break a website in a cutthroat virtual marketplace. Often when Google adjusts its sorting algorithms, companies report a loss in profits as their websites are bumped off the first page of results. Because of this reliance on Google, businesses turn to search engine optimization companies to increase their ratings on search engines. These optimization companies make a profit by shifting Google’s democracy and increasing their clients’ PageRanks, suggesting that money can buy power in Google’s government. Google itself exploits its own PageRank system for a profit. Google maintains a ‘Google Enterprise Solutions’ page listing useful enterprise companies; however, Google charges $10,000 per year to be listed on this page. More interestingly, the pages listed there all have very high PageRanks and thus increased business. Are the companies purchasing these links interested in being in a directory, or are they paying for a guaranteed PageRank boost that puts them at the top of search results? Google supports a standard that lets web page authors mark links for Google to ignore in its PageRank computations. Interestingly enough, Google opts not to use this standard on the Enterprise Directory, voluntarily giving these sites increased PageRank as an added incentive. The companies are most likely interested in the PageRank boost rather than in being on a listing. This reveals how Google exploits its own PageRank system, like so many others, to make a profit at democracy’s expense (Link).

In addition to financial corruption, controversial administration is also to be expected in a flawed democracy. PageRank has become so important that it is often the lifeblood of companies. When changes in Google’s algorithms take place, companies feel it, some experiencing devastating drops in search rankings and profits. These algorithm changes have caused many legal disputes, revealing companies’ heavy dependence on PageRank. SearchKing, a company selling text-based ads, sued Google for altering the PageRank technology and thus lowering SearchKing’s PageRank. Many other websites have complained or filed lawsuits against Google for lowering the PageRank of newly created sites, favoring already established web pages. These complaints reveal the power Google has and how it can affect the financial success of websites. Google often changes its technology for undisclosed reasons, either to make the search results better or to shift power by rewarding long-standing sites over new, fragile ones (Link).

Some web authors refuse to let Google toy with their livelihoods and have formed ‘rebellions’ to turn Google’s PageRank to their own advantage. Google and other search engines complain about websites that establish “link farms” - mutual linking networks designed to increase every participating page’s PageRank, disrupting the search engine’s technology. For example, a group of 100 participating web sites might all link to each other, creating a complete graph and sharing their PageRank ‘votes’ within this network, mutually increasing their PageRanks. These groups interfere with Google and lower its accuracy. However, it isn’t the link traders’ fault that Google ignores newly established and poorly linked material. Such rebellious behavior is only a reaction to Google’s unfair method of sorting pages, and it poses a threat to Google’s way of organizing information.
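
The arithmetic behind link farming is easy to reproduce on a toy example. The sketch below uses networkx's pagerank routine on an invented web of a hub and ten sites: the five sites that agree to link to one another raise their own PageRank at the honest sites' expense, and a real farm of 100 sites would simply amplify the effect.

```python
# Toy demonstration of the link-farm effect.  A hub page links to ten sites
# and each site links back to the hub.  When five of the sites also link to
# each other, their PageRank rises while the honest sites' rank falls.
import networkx as nx

def build_web(with_farm):
    G = nx.DiGraph()
    farm = [f"farm{i}" for i in range(5)]
    honest = [f"site{i}" for i in range(5)]
    for page in farm + honest:
        G.add_edge("hub", page)          # the hub links out to every site
        G.add_edge(page, "hub")          # every site links back to the hub
    if with_farm:
        for a in farm:                   # the farm members all link to each other
            for b in farm:
                if a != b:
                    G.add_edge(a, b)
    return G

before = nx.pagerank(build_web(with_farm=False))
after = nx.pagerank(build_web(with_farm=True))
print(f"farm member : {before['farm0']:.3f} -> {after['farm0']:.3f}")
print(f"honest site : {before['site0']:.3f} -> {after['site0']:.3f}")
```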

It seems as if Google represents almost every negative aspect of democracy, but the most important one has been overlooked: a lack of voter participation. Google claims to be democratic, and to be fair, it gives each web developer an equal vote by analyzing hyperlinks equally. However, an overwhelming majority of web users don’t contribute to the Internet; they simply read it. In fact, the people who do control the hyperlink structure are few in number and form a cultural niche of web developers; the contributors are not a cross-section of society, as a true democracy would require. Therefore the authors of the Internet control not only the content on their pages but also the hyperlink structure, determining what people will see. This system leaves out the opinions of Internet viewers, giving them no say in how PageRank works (Link).

Google’s democracy is nothing more than a way of letting web developers create and choose what the public should view, leaving ample room for financial corruption and manipulation of the PageRank system. The web has a long way to go in adapting to its viewers’ preferences, for example by allowing small, unknown websites to grow without needing thousands of hyperlinks from fellow web developers. It should be up to web viewers to determine what they like, not the web developers, as Google’s system currently has it. In a true Internet democracy both the readers and writers of content would freely control how they experience the Internet, something Google has yet to figure out.

-sps34

Posted in Topics: General, Technology

No Comments

The Clever System

“Mining the Web’s Link Structure”

Chakrabarti, S.; Dom, B.E.; Kumar, S.R.; Raghavan, P.; Rajagopalan, S.; Tomkins, A.; Gibson, D.; Kleinberg, J.
Computer, vol. 32, no. 8 (August 1999), pp. 60–67

http://ieeexplore.ieee.org/iel5/2/16967/00781636.pdf?tp=&arnumber=781636&isnumber=16967

In 1999, when this article was written, search engines were much less effective than they are today. Searches would often return thousands of sites, many of them not particularly relevant. The search engines of that time matched only words in the body text of pages, and so sometimes missed the sites with the most information related to the search keywords. One of the examples from the paper is that a search on “Japanese Automobile Manufacturers” wouldn’t necessarily find the homepages of Honda or Toyota, because those exact words aren’t necessarily in the body of those sites. Similarly, a search on “British Rock Bands” won’t bring up The Rolling Stones’ homepage. To find sites on such topics, the researchers who wrote this paper (including our own Jon Kleinberg!) developed the Clever system.

Clever is a search engine that works by analyzing hyperlinks to find authorities, which are pages that are frequently linked to on a topic, and hubs, which provide collections of links to different authorities. The system uses the HITS (Hyperlink-Induced Topic Search) algorithm. To explain the algorithm, it is best to think of the Web as a very large directed graph, where nodes are web pages and edges are hyperlinks. The Web had about 300 million pages at the time (compared to about 10 billion now), with each page linking to many others. The user enters the keywords of a search, which narrows the 300 million sites down to roughly 200 or so pages containing the keywords in their body text. This initial grouping may or may not contain the ideal results, so the program follows the hyperlinks on each of these initial pages to see what sites they link to. If enough of the initial pages link to the same result pages, Clever recognizes these heavily linked-to pages as authorities and places the strongest authorities at the top of the “search results” page. Analyzing directed graphs, very similar to the graphs of social networks that we studied in class, is really the fundamental driving force behind the Clever search engine and others like it, such as Google.
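
At the heart of HITS is a mutually reinforcing iteration: good hubs point to good authorities, and good authorities are pointed to by good hubs. The sketch below runs networkx's implementation of that iteration on a tiny invented link graph; the real Clever system builds its candidate set from search-engine results and adds many refinements beyond this.

```python
# Bare-bones sketch of the hub/authority scoring at the heart of HITS,
# run on a tiny invented link graph (the page names are made up).
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    # two "hub" pages that link out to the manufacturers' homepages
    ("car-links-page", "honda.example"), ("car-links-page", "toyota.example"),
    ("auto-directory", "honda.example"), ("auto-directory", "toyota.example"),
    ("auto-directory", "nissan.example"),
    # an unrelated page with a single stray link
    ("random-blog", "honda.example"),
])

hubs, authorities = nx.hits(G, normalized=True)
print("top authorities:",
      sorted(authorities, key=authorities.get, reverse=True)[:3])
print("top hubs:       ",
      sorted(hubs, key=hubs.get, reverse=True)[:3])
```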

Posted in Topics: Education

No Comments

Small World Phenomena in Social Networks, the Web, and the Food Web

The small world phenomenon in social networks is quite a surprising property when one first learns of it [wikipedia: small world phenomenon]. Six degrees of separation - the idea that each person is connected to every other person by a path of length 6 on average - is quite remarkable. Yet we observe the small world phenomenon in many different types of networks.

We discussed the web graph in class recently. [1] discusses the structure of the web graph using results from a web crawl of approximately 200 million pages and 1.5 billion links. It describes the graph as consisting of four parts: (1) SCC, a giant strongly connected component of pages that can reach each other; (2) IN, pages that can reach the SCC but cannot be reached from it; (3) OUT, pages that can be reached from the SCC but cannot reach it; and (4) TENDRILS, pages that can neither reach nor be reached from the SCC. Previous work reported that most pairs of webpages are separated by few (fewer than 20) links, which can be thought of as a small world phenomenon. [1], however, reports a more subtle picture, finding that over 75% of the time there is no directed path from a random start node to a random finish node.
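
The bow-tie picture can be explored on a small scale. The sketch below runs the same two measurements on a sparse random directed graph, which is only a stand-in for the crawl, not the real data: the size of the largest strongly connected component and the fraction of sampled ordered pairs joined by a directed path.

```python
# Small experiment in the spirit of [1], on a sparse random directed graph
# rather than the actual crawl: find the largest strongly connected component
# and estimate what fraction of random ordered pairs have a directed path.
import random
import networkx as nx

random.seed(0)
G = nx.gnp_random_graph(2000, 0.0008, seed=0, directed=True)   # toy "web"

giant_scc = max(nx.strongly_connected_components(G), key=len)
print(f"largest SCC: {len(giant_scc)} of {G.number_of_nodes()} nodes")

trials, reachable = 2000, 0
nodes = list(G)
for _ in range(trials):
    u, v = random.sample(nodes, 2)
    if nx.has_path(G, u, v):            # is there any directed path u -> v?
        reachable += 1
print(f"directed path exists for {reachable / trials:.0%} of sampled pairs")
```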

[2] discusses the notion of “two degrees of separation in complex food webs.” The food web can be thought of as a network where nodes are “trophic species” and species on either end of a link have a consumer-resource (e.g. predator-prey) relationship. The interdependence among species in the food web means that perturbations (such as population fluctuations) in one species can have significant effects on other species - the question is how much of the food web is affected by a perturbation to one species. Empirical evidence suggests that strong effects rarely propagate more than three links away from the initial perturbation. However, [2] finds that there are two degrees of separation even in large, complex food webs, i.e., the average shortest path distance between any two species is about two. This suggests that a local perturbation can affect almost the entire food web.
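
The "degrees of separation" figure is just an average shortest-path distance over the food web. The sketch below computes it for a small invented web, treating consumer-resource links as undirected for the distance calculation; it only illustrates how the measurement is made, not the paper's result.

```python
# How the "degrees of separation" measurement works: trophic species are
# nodes, consumer-resource links are edges (treated as undirected here), and
# we compute the average shortest-path distance.  The web below is invented.
import networkx as nx

food_web = nx.Graph()
food_web.add_edges_from([
    ("grass", "grasshopper"), ("grass", "rabbit"),
    ("algae", "snail"), ("algae", "small fish"),
    ("grasshopper", "frog"), ("snail", "frog"),
    ("rabbit", "fox"), ("frog", "heron"),
    ("small fish", "heron"), ("fox", "eagle"), ("heron", "eagle"),
])

print(f"average distance between species: "
      f"{nx.average_shortest_path_length(food_web):.2f}")
```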

References:
[1] Andrei Broder, Ravi Kumar, Farzin Maghoul, Prabhakar Raghavan, Sridhar Rajagopalan, Raymie Stata, Andrew Tomkins, Janet Wiener: Graph structure in the web.
[2] Richard J. Williams, Eric L. Berlow, Jennifer A. Dunne, Albert-László Barabási, Neo D. Martinez: Two degrees of separation in complex food webs (2002).
[3] D. Watts, S. Strogatz: Collective dynamics of ‘small-world’ networks; Nature 393, 440–442 (1998).
[4] J. Kleinberg: Navigation in a small world; Nature 406, 845 (2000).

Posted in Topics: Education, Science, Technology

Comments (3) »