This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Microsoft’s Zune: A Social Networking Failure?

Microsoft’s Zune aims to be social butterfly | CNET News.com


On November 14, 2006, Microsoft launched its highly anticipated alternative to Apple’s iPod: the Zune. The device, which has a 30GB hard drive, FM tuner, 3-inch screen, and USB 2.0 connectivity, was the “talk of the web” as technology enthusiasts awaited the announcement of its official release date in the months before launch. Of all its features, the one contributing most to the hype was its Wi-Fi capability, which allows Zune owners to share their music with other Zuners within a 30-foot radius.

In its first week of sales, the Zune accounted for 9% of the portable digital music player market share, landing in second place, behind Apple. It dropped to fifth place in its second week.

The opportunity to enjoy all the music your neighbor listens to via communicating portable music players is appealing; it adds another dimension to social networking. Music is a great conversation starter, in both internet and face-to-face social situations. Previously, internet services have allowed individuals to share lists of their favorite songs. Never before, however, have users had the opportunity to have instant access to someone’s music play list at one moment, and then walk over and start a conversation with them the next. This is a great way to meet new people at parties, etc.

Technology spreads quickly on college campuses. However, on a college campus of 20,000 students, I have yet to see a single Zune. This is not to say that no one owns one, but the Zune clearly has not saturated the student population, and it shows no signs of doing so. In Zune television advertisements, groups of people are pictured in a social scene, “beaming” music to one another. While such social networking was Microsoft’s intent, it has not played out as planned. Most Zune owners do not activate the Wi-Fi feature, because it drains too much battery power and, due to DRM protection, cannot actually transmit many of the songs people own. As a result, the Zune’s effectiveness as a social networking device is greatly diminished.

Despite these initial results, it seems Microsoft is dedicated to its digital music player venture. According to the CNET News article, Microsoft “said it expects the Zune effort to take years and cost hundreds of millions of dollars.” For now, however, the Zune’s potential to open a new phase of social networking has been undercut by its design flaws and by the entrenched social ecosystem surrounding Apple’s flagship iPod.

Posted in Topics: Technology

View Comment (1) »

19 Degrees of Separation

I came across an interesting article while searching for information related to Stanley Milgram’s “small-world” network. Those of you who have read “The Tipping Point” may remember Malcolm Gladwell’s story about Stanley Milgram, the psychologist who in 1967 gave 160 residents of Omaha, Nebraska a letter bearing the name of a stockbroker who lived in Cambridge, Massachusetts, and asked each of the Nebraskans to send the letter to a friend or acquaintance whom they believed would be able to get the letter closer to the stockbroker. Milgram found that the letters reached the stockbroker in six steps on average. In his paper about the experiment, “The Small-World Problem,” Milgram coined the phrase “six degrees of separation,” and theorized that any two people in the United States are connected through a short path of social acquaintances.

In 1998, Duncan Watts and Steven Strogatz (the same Watts from the “Empirical Analysis of an Evolving Social Network” article we read) defined a small-world network as a graph with a high clustering coefficient and a small mean shortest-path length in their paper, “Collective Dynamics of ‘Small-World’ Networks.” They also proposed a mathematical model of a small-world network: take a regular network (a completely structured, lattice-type network) and replace some of its structured connections with randomly assigned ones, creating a hybrid between a completely regular network and a completely random network. They showed that the neural network of the worm C. elegans, the power grid of the Western United States, and “the collaboration graph of film actors” could all be modeled as small-world networks. Since that paper, many others have been published applying this small-world model (or modified versions of it) to real-life networks.
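The Watts-Strogatz construction is simple enough to sketch in code. The snippet below is my own illustration in Python (invented parameter names, not the authors' code): it builds a ring lattice, rewires each edge with probability p, and measures the two quantities that define a small world, clustering and mean shortest-path length.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring: each node linked to its k nearest neighbors (k even)."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz step: move one end of each edge to a random node w.p. p."""
    n = len(adj)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for u, v in edges:
        if rng.random() < p:
            choices = [w for w in range(n) if w != u and w not in adj[u]]
            if choices:
                w = rng.choice(choices)
                adj[u].remove(v); adj[v].remove(u)
                adj[u].add(w); adj[w].add(u)
    return adj

def mean_path_length(adj):
    """Average shortest-path length over reachable pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Average fraction of a node's neighbor pairs that are themselves linked."""
    cs = []
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for i in range(d) for j in range(i + 1, d)
                    if nbrs[j] in adj[nbrs[i]])
        cs.append(2 * links / (d * (d - 1)))
    return sum(cs) / len(cs)

rng = random.Random(1)
regular = ring_lattice(100, 6)
L0, C0 = mean_path_length(regular), clustering(regular)
small_world = rewire(ring_lattice(100, 6), 0.1, rng)
L1, C1 = mean_path_length(small_world), clustering(small_world)
# Rewiring should shrink path length sharply while clustering stays high.
print(L0, C0, L1, C1)
```

Even a small rewiring probability collapses the path length while leaving clustering high, which is exactly the hybrid behavior Watts and Strogatz describe.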

The interesting article I came across, “The Diameter of the World Wide Web,” is by Reka Albert, Hawoong Jeong, and Albert-Laszlo Barabasi. It describes an experiment they conducted to characterize the topology of the web by modeling its local connectivity. They created “robots” that take a page on the web, search it for URLs, follow those URLs to related pages, search those pages for URLs, and so on, recording data all the while. They then used this data to estimate the probability that a page has k incoming URL links and the probability that a page has k outgoing URL links. They found that these probability functions did not fit those predicted for random graphs, but instead described a small-world network model. They also used these probability functions to define d, the average number of URL links separating any one page from any other (or, as they call it, the “diameter” of the web), as a function of the number of pages. Plugging in 800 million for the number of web pages (the article was written in 1999), they found the “diameter” of the web to be 18.59; from any page on the web, every other page is on average about 19 URL links away.

The researchers did mention that were the web to increase by 1000% (to 8.8 billion pages, which is around the values that have been thrown around in lecture), the diameter would only grow from about 19 to 21, due to the logarithmic dependence on N.
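That logarithmic dependence is easy to check numerically. The sketch below plugs both page counts into the fitted form the authors report, d ≈ 0.35 + 2.06·log₁₀ N (up to rounding in the published coefficients, which is why the result differs slightly from their quoted 18.59):

```python
import math

def web_diameter(n_pages):
    # Fitted scaling law from Albert, Jeong & Barabasi (1999): the average
    # shortest URL path grows with the logarithm of the number of pages.
    return 0.35 + 2.06 * math.log10(n_pages)

d_1999 = web_diameter(800e6)   # the web circa 1999
d_big  = web_diameter(8.8e9)   # after a 1000% increase
print(round(d_1999, 2), round(d_big, 2))  # prints: 18.69 20.84
```

An eleven-fold increase in pages adds only about two links to the average path, which is the point made in the paper.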

 There are some important results discussed in this paper which I think are very applicable to our discussion of the internet as a network in lecture. For one, this paper provided insight into the topology of the world wide web: it can be modeled as a small-world network, meaning it is clustered, yet has a small mean-shortest path length. Also, the mean-shortest path length is around 19 (or 21 by now), which means the web is highly connected, relative to its size.

 The URL for the “Diameter of the World Wide Web” article is below:

http://www.cs.cmu.edu/~eairoldi/nets/public/albe.jeon.bara.1999.pdf

Posted in Topics: Education

No Comments

TAC Competition - Theoretical analysis to practice

The Trading Agent Competition (TAC) is an annual competition built around a game of simultaneous auctions. A trading agent is a computer program that tries to satisfy the preferences of its client(s) by assembling travel packages through different types of auctions. The agent that best fulfills its clients’ preferences wins.

This creates several challenges for the program: it must track multiple auctions occurring at the same time, package everything it wins to meet its clients’ preferences, and contend with the strategies of other agents competing for the same goods.

A more complete explanation of the game and rules can be found here:
TAC Competition

In 2002 Ioannis Vetsikas, a Cornell student, won the competition. He also wrote a paper dealing with one of the sub-problems and how it relates to different types of auctions and Nash Equilibrium.

The paper can be found here:
Vetsikas Paper

This paper offers a very interesting analysis of Nash equilibrium. It links auctions and equilibrium to develop an optimal strategy for playing the game, and this analysis allows for the development of algorithms with better chances of winning TAC. It is a good example of taking the theory we learned in class (albeit in a much more advanced form) and applying it to a practical situation.

In the Trading Agent Competition, agents compete for airline tickets, hotel rooms, and event tickets. In his paper, Vetsikas analyzes the sub-problem of hotel room reservations.

There are 16 hotel rooms available each night at each of two hotels. The rooms for each night at each hotel are sold in a separate ascending, multi-unit, sixteenth-price auction. One randomly chosen auction closes every minute throughout the game. Agents place sealed bids between closing times; in each consecutive round, agents who did not win a unit must place a new sealed bid equal to or higher than their previous one. Bids cannot be retracted.

Vetsikas proposes a theorem stating a differential equation to which all equilibria are solutions. This guarantees that an equilibrium exists and is unique.

To expand on this theorem, he analyzes the Nash equilibrium of a single-unit, first-price sealed-bid auction with N agents, and then moves on to the multi-unit auction with N agents. He derives several differential equations needed to handle an auction that may close at any of several possible times, explicitly computes the solution for the last two rounds, and then recursively computes the solutions for the earlier rounds.
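The single-unit, first-price case he starts from has a well-known closed form: with N bidders whose values are independently uniform on [0, 1], the symmetric equilibrium bid is b(v) = v(N − 1)/N. The simulation below (my own illustration, not code from the paper) checks this shading against naive truthful bidding:

```python
import random

def equilibrium_bid(value, n_bidders):
    # Symmetric Nash equilibrium of a first-price sealed-bid auction
    # with i.i.d. uniform[0,1] values: shade your bid by a factor (N-1)/N.
    return value * (n_bidders - 1) / n_bidders

def expected_profit(strategy, n_bidders, trials, rng):
    """Average profit of bidder 0 using `strategy` against equilibrium rivals."""
    total = 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(n_bidders)]
        my_bid = strategy(values[0], n_bidders)
        rival_bids = [equilibrium_bid(v, n_bidders) for v in values[1:]]
        if my_bid > max(rival_bids):
            total += values[0] - my_bid   # winner pays own bid
    return total / trials

rng = random.Random(0)
n = 5
eq = expected_profit(equilibrium_bid, n, 100_000, rng)
truthful = expected_profit(lambda v, n: v, n, 100_000, rng)
print(eq, truthful)
```

Bidding truthfully wins more often but earns exactly nothing on a win, while equilibrium shading yields a positive expected profit (about 1/30 for N = 5).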

Posted in Topics: Education, Mathematics, Technology

No Comments

Letting Users Create Value in User Communities and Social Networks

In the February 2007 edition of its magazine, View, PricewaterhouseCoopers (PwC) addresses social networks from a business point of view. Three articles touch on social networks; the first concerns consumers and the following two discuss employee networks.

View calls this new type of consumer, the socially networked consumer, a powerful force in the marketplace that CEOs “simply cannot ignore”. Citing examples such as Sheraton Hotel’s Global Neighborhood and American Express’ OPEN, the article explores the idea of harnessing the economic power in user communities’ social networks. There are several macro manifestations of user-community-created value, including, but not limited to:

(a) Users explicitly contributing content (YouTube).

(b) Users providing services for a company’s product, such as troubleshooting (Skype’s community forums).

(c) Users’ social networks themselves as a source of economic value by way of advertising (MySpace).

The article continues by discussing how businesses should interact with these user communities: while direct or indirect interaction with the user base is frequently beneficial, new risks emerge. These risks apply not just to forums and communities managed by the company itself (like OPEN), but also to public forums (MySpace, etc.). Concerns have arisen over privacy as companies capture “rich data” from these social networks, and at times a company will feel the negative effects of trends and fads spreading through them. (For example, the Diet Coke / Mentos experiments in videos all over the net are somewhat resented by Coca-Cola.) Alternatively, some companies are handing power to their consumers’ networks, for example by running competitions in which users create advertising videos for them (Wal-Mart).

It is interesting to consider the impact of the network as a whole on a company or an entire industry, in contrast to our coursework, which has focused mostly on characteristics internal to the network. The articles on employee networks similarly offer novel approaches to looking at networks. We have studied (among other things) positions of power and structural balance in social networks; View interviews Rob Cross to better understand employee performance and employee networks.

Cross is the co-author of The Hidden Power of Social Networks: Understanding How Work Really Gets Done in Organizations. He has studied an “energy” layer across the organizational graph and discovered that performance is related to an employee’s position relative to energy centers: an employee who energizes others is also a higher performer. Being central in an energy layer is “four times the predictor of high performance than being a hub of information ties only”; this is not immediately obvious in a team-project-oriented environment.

    Posted in Topics: Education, Technology

    View Comment (1) »

    The Economics of Information Security

A recent paper by Ross Anderson and Tyler Moore, “The Economics of Information Security: A Survey and Open Questions,” brings together the seemingly disparate fields of security and economics to discuss the reasoning behind security decisions. They provide examples such as why individual PC owners choose to install anti-virus software and why large banks choose to protect their ATMs. The authors use economics as a way to frame the choices and tradeoffs that are inherent in security design. They note the increased reliance on game and graph theory as an indicator of a changing security paradigm: “game theory and microeconomic theory are becoming just as important to the security engineer as the mathematics of cryptography.” (Anderson and Moore)

Anderson and Moore propose that security can be viewed topologically: attackers and defenders can both be viewed as nodes and, as usual, their alliances or attacks as edges. As such, the attacker often tries to destroy the nodes and edges of the defender’s network, while the defender tries to make its system robust, flexible, and adaptive.

Here is an excerpt from their paper: “Network topology can strongly influence conflict dynamics. Often an attacker tries to disconnect a network or increase its diameter by destroying nodes or edges, while the defender counters using various resilience mechanisms. Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; a police force trying to decapitate a terrorist organisation; and a totalitarian government conducting surveillance on political activists.” (Anderson and Moore)

    Economics of Information Security: http://www.cl.cam.ac.uk/~rja14/Papers/toulouse-summary.pdf

    Posted in Topics: Mathematics, Science, Technology, social studies

    No Comments

    Relationship Models in Attention and Social Networks

    “Attention Networks vs. Social Networks”

    Article link: http://www.zephoria.org/thoughts/archives/2005/11/29/attention_netwo.html

This article discusses the difference between how attention networks and social networks model relationships. The author believes that social networks are not an ideal model because their undirected edges do not provide adequate information about relationships. For example, they don’t capture how long you’ve been friends, how strong the connection is, and so on. Attention networks, by contrast, show which people are the focus of other people’s attention, and do not require reciprocity. These relationships often go one way, as in a directed graph; for instance, Joe Shmoe may read a lot of articles about Britney Spears, but Britney doesn’t know who Joe is. Because many real-life relationships are unequal, attention networks better illustrate hierarchies and power structures.

This article relates to the class discussion of the definition of nodes and edges in social networks. Using the social networking model of relationships, edges are very simple (you’re either a friend or you’re not), but they don’t tell you exactly what kind of relationship it is. In an attention network, however, you can see more clearly the power structures within relationships, and thus see who is popular or a celebrity, who has the potential to be a “connector”, and who plays an important role in the network. Currently, Facebook attempts to create more realistic representations of one’s “friends” by adding details about how one knows everyone in his/her network. This option allows one to sort the people in his/her network into different categories/levels of “friends”. While this adds more information about the relationship, Facebook still requires reciprocation of those details, so the edges of the network remain undirected. While this may still be unrealistic compared to real-life relationships, it does avoid major problems with privacy, stalking, and awkward situations, and it is a step in the right direction.
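The modeling difference is easy to see in code. In this sketch (all names invented for illustration), friendships are symmetric pairs while attention is a directed edge, and in-degree in the attention graph immediately surfaces who the celebrities are:

```python
from collections import defaultdict

# Undirected friendship edges: both parties must reciprocate.
friendships = [("alice", "bob"), ("bob", "carol")]

# Directed attention edges: "x pays attention to y" needs no reciprocity.
attention = [("joe", "britney"), ("alice", "britney"),
             ("bob", "britney"), ("britney", "carol")]

def in_degree(edges):
    """How many people pay attention to each node."""
    deg = defaultdict(int)
    for _, target in edges:
        deg[target] += 1
    return dict(deg)

deg = in_degree(attention)
# Britney draws attention from three people but pays attention to only one;
# an undirected friendship graph could not represent this asymmetry.
print(deg)  # {'britney': 3, 'carol': 1}
```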

In the future, social networking sites may move closer to attention networks. As social networks get larger (since a large network tends to contain one main giant connected component), there is a more pressing need to categorize friends and differentiate close friends from casual acquaintances. Grouping people into different levels of friendship will let people focus on whom they want rather than on everybody at once. I think that eventually social networks will become less of an entertainment/procrastination tool and more of an informational tool. If they could be connected to other systems (e.g. email, phones, IM, recommendation and review services), attention networks could be used to tailor the order of phone lists, email messages, consumer opinions, etc., putting the people you focus the most attention on at the top of your list, giving you the most relevant and trustworthy information.

    Posted in Topics: Education

    No Comments

    Neural Networks

    http://www.codeproject.com/csharp/Neural_Network_OCR.asp
    (Note: Knowledge of C# / Java / C++ is recommended.) 

The beauty of neural networks is that they’re inherently generalized with respect to inputs and outputs. Simply put, there is no type of neural network dedicated to one particular task. While there are many implementations and algorithms that can be used to create a neural network, they can all be adapted to recognize text in images, to detect which samples of protein crystals are likely to hold up longest under X-ray diffraction (there was a project at BOOM involving this), and so forth. Best of all, you don’t have to figure out how to configure them to get the results you want. Simply provide a network with several sets of sample inputs and their expected outputs, run the network in “training mode” for a few thousand iterations, and the neural network “figures out” what aspects of a particular input make it unique, as well as how to categorize a wide assortment of additional input. Of course, the accuracy of the network depends on how many sample inputs you provide and how many iterations you run in training mode.

The article discusses the implementation of a C# application that trains a neural network to perform OCR (optical character recognition). The input for this application is a 5×6-pixel, black-and-white image of a letter. The output is a vector (array) of probabilities that the input matches each of the 26 possible letters, which can then be used to determine which ASCII (text) character the image most likely represents. It is important to note that neural networks can never produce perfectly accurate results, as their training is, in essence, based on learning by trial and error (with some heuristics thrown in to help accelerate the process). As such, the neural network used in the article will not classify an image of the letter K as definitively “K”. Rather, it will classify it as x% “K”, where x is some high percentage (assuming enough training has occurred). This is why most applications of neural networks operate with the concept of an accuracy threshold and accept results that are correct with some high probability, and it is why even professional/commercial OCR applications make mistakes.

    In this program, a neuron is essentially a set of code that models physical neurons in the brain.  Because this program uses a single-layer neural network, each neuron accepts an input vector, and is connected to an output vector.  Multiple-layer neural networks also exist, and are generally used to perform a sequence of increasingly concrete classifications when there are too many possibilities to create training data for all potential inputs, and the speed of classification is important.  (For example, a three-layer network might classify an image of a dog as an “animal”, then as “four-legged”, then as a “dog”.  The separate layers of classifications allow the network to reject impossible classifications early on.  For example, if the aforementioned three-layer network was then given an image of a plant, it could reject everything that is a subclass of “animal” early in the analysis, thereby increasing the speed at which it runs.)  An input vector is a set of data that describes the input; in this case, it’s a 30 element array of floating point values (one value for each pixel) that represent whether a pixel is “on” (black, = 0.5f) or “off” (white, = -0.5f) in the letter that the image represents.  The output vector in this case is a 26 element array of floating point values, with all elements initially set to -0.5f, with the ith element representing the probability that the input letter is the ith letter of the alphabet. 

To recognize a letter in an image, the program loops through every element in the input vector and sends it through the network. Each neuron “fires” (outputs 1) if its input is probably (as indicated by training) part of the letter it represents (e.g. the 3rd neuron represents “C”). The average of all neuron outputs then gives the probability that the image represents the letter being checked. Once all possible letters have been processed, the one with the highest probability of being correct is chosen as the proper interpretation of the image. Commercial OCR applications work in (practically) the same way; they also include proprietary heuristics that take into account the context of the word being recognized, as well as the word itself, to limit the set of possible outputs and increase recognition accuracy.
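The recognition loop can be sketched in a few lines. The toy below uses Python rather than the article's C#, made-up 3×3 "letters" instead of 5×6 images, and one-shot template weights instead of the article's iterative training, but it follows the same scheme: pixels encoded as ±0.5, one neuron (weight vector) per letter, and the strongest response wins.

```python
# Toy single-layer classifier: one "neuron" (weight vector) per letter.
# Pixels are encoded +0.5 (on) / -0.5 (off), as in the article.

def encode(bitmap):
    return [0.5 if px else -0.5 for row in bitmap for px in row]

# Hypothetical 3x3 training "letters" (the article uses 5x6 images).
PATTERNS = {
    "T": [[1, 1, 1],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}

# One-shot training: each neuron's weights are simply its letter's encoding.
weights = {letter: encode(bm) for letter, bm in PATTERNS.items()}

def classify(bitmap):
    """Return (best_letter, scores): each neuron's response is a dot product."""
    x = encode(bitmap)
    scores = {letter: sum(a * b for a, b in zip(w, x))
              for letter, w in weights.items()}
    return max(scores, key=scores.get), scores

# A noisy "T" (one pixel flipped) should still score highest on the T neuron.
noisy_t = [[1, 1, 1],
           [0, 1, 0],
           [0, 0, 0]]
letter, scores = classify(noisy_t)
print(letter, scores)
```

As in the article, the answer is not a certain "T" but a score for every letter, which is why real OCR systems apply an accuracy threshold to the winning response.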

The benefits of neural networks, however, are not limited to OCR. Because they are indifferent to the types of input they can process, and their output is determined by training, neural networks can be implemented for an astounding variety of tasks. Future applications include improved search (through analysis of potential matches, the context of queries, past queries, and so forth), improved identification and security systems, and new ways of interacting with computers (e.g. through analysis of brain waves, or “reading your mind”).

    Posted in Topics: Mathematics, Science, Technology

    No Comments

    Virtual Online Game Auctions

    Auctions: Theory and Practice

    Auctioneer Addon for WoW

Our lectures and readings discussed the theory of auctions. World of Warcraft (WoW) is an online game that was discussed earlier in another post. It has its own virtual economy, with an auction house system built on ascending first-price bidding. Items are placed in the auction at a starting price determined by the seller, with the option of setting a “buyout” value at which buyers can immediately purchase the item instead of waiting for the bidding to close. Auctions can be listed for eight, twelve, or twenty-four hours. During that time, potential buyers place bids on the item, raising its price until either the time runs out or it reaches the buyout value.
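The mechanism is straightforward to model. The sketch below is my own simplification (not Blizzard's implementation): bidders act in order, each raising the standing bid if the item is still below their value, and any bidder whose value meets the buyout ends the auction immediately.

```python
def run_auction(start_price, buyout, bidders, increment=1):
    """bidders: list of (name, max_value) pairs in the order they act."""
    price, leader = start_price, None
    for name, value in bidders:
        if buyout is not None and value >= buyout:
            return buyout, name, "buyout"        # instant purchase ends it
        next_bid = price if leader is None else price + increment
        if value >= next_bid and name != leader:
            price, leader = next_bid, name        # outbid the current leader
    # the listing duration expires; the current leader wins at the standing bid
    return (price, leader, "bid") if leader else (None, None, "unsold")

# Example: an item listed at 8 gold with a 15-gold buyout.
result = run_auction(8, 15, [("a", 9), ("b", 12), ("a", 9), ("c", 20)])
print(result)  # c's value meets the buyout -> (15, 'c', 'buyout')
```

Without a buyout, the same bidders simply push the price up until no one is willing to raise, and the last leader wins.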

While each buyer in the game may have his or her own value for an item, is it possible for the seller to intentionally change the perceived value to gain more profit? WoW has interesting add-ons, developed by individuals, that provide a variety of features that can make a difference when playing the virtual economy. One popular add-on is called Auctioneer. It allows a user to scan the auction house and display the average value of any item that has been scanned before. It is so popular that a large majority of players who use the auction house have become accustomed to consulting it to find out whether something is worth bidding on or buying out. This coincides with Klemperer’s theory that those with information advantages will gain more profit than those without.

I wanted to manipulate the auction house using the knowledge that others relied on this add-on to determine their own value of an item. For two weeks I scanned the auction house, and the add-on saved the values of all the items that were regularly put up for sale. I found that the item ‘essence of air’ was always on sale at an average bid and buyout of around 8 gold (gold is the virtual currency of WoW). There were usually a few of these on auction simultaneously. After purchasing all the listed items at their going rate of 8 gold, I relisted them at 15 gold. I announced the price of the item in the trade channel and was usually told by random people that it would not sell, because the item was not worth that much. Each day I scanned the auction house, bought out any listed cheaper, and replaced them with ones priced at 15 gold. For the first few days, nothing was bid on or purchased. After the third day, a few finally sold at 15 gold. I slowly raised the prices from 15 to 20 gold, then eventually to 25 gold, over the course of a week. Within that time, buyers started bidding on and purchasing the item at the inflated value. I then stopped listing the item to see whether values would drop back to their original low price, but instead other sellers now listed theirs at the same value. While the theory of multi-unit auctions is still in its infancy, this behavior is similar to what Klemperer described: buyers and bidders end up ‘colluding’ and coordinating their behavior, knowingly or not. It is interesting to see that the virtual world and economy of an online game still follow many of the same rules and theories.

    Posted in Topics: Education

    No Comments

    Network Theory to Identify Terrorist “Hubs”

    Link to Article
In the opening days of this course we discussed at length the different ways triadic closure arises among several nodes. The idea that information can flow just as easily through weak ties led us to analyze information flow across social networks, and to see how weak ties can provide vast amounts of information.

The defense agencies of the United States have taken to using network theory and graphs to help identify potential terrorists. The idea behind this approach is similar to that of the class-and-student graph derived during one of the homeworks. Metadata from emails and phone calls has allowed the mapping of social networks, and the idea is to find the “hub” of focus, as in the class-student example. These hubs help identify potential pairings that could lead to terrorist activity through triadic closure. The hub could also be connected to several networks through weak ties, and yet it is through these weak ties that opportunities for activity could evolve.

The biggest flaw with this approach, which the article notes, is that within three edges we are each connected to “hundreds of thousands of people,” which makes this type of graph very difficult to analyze. A possible modification is to assign betweenness values to these hubs based on the concept of proportional betweenness. This idea was discussed in a homework: betweenness credit can be given proportionally, based on where the between-node v lies on paths between s and t. This would show who the true hubs are through their ability to connect the many people they are close to. Identifying bridges between different network groups would also show who brokers the flow of information.
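The hub-scoring idea can be made concrete with betweenness centrality: for every pair (s, t), count the fraction of shortest s-t paths that pass through v. A brute-force sketch on a toy star network (just an illustration of the homework's idea, not the article's method):

```python
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest simple path from s to t (fine for tiny graphs)."""
    best, paths = None, []
    stack = [(s, [s])]
    while stack:
        node, path = stack.pop()
        if best is not None and len(path) > best:
            continue            # already longer than the best known path
        if node == t:
            if best is None or len(path) < best:
                best, paths = len(path), [path]
            elif len(path) == best:
                paths.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return paths

def betweenness(adj):
    """For each v: sum over pairs (s, t) of the fraction of shortest
    s-t paths that pass through v as an interior node."""
    score = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        paths = all_shortest_paths(adj, s, t)
        if not paths:
            continue
        for v in adj:
            if v in (s, t):
                continue
            through = sum(1 for p in paths if v in p)
            score[v] += through / len(paths)
    return score

# A star: the "hub" h sits on every shortest path between the leaves.
star = {"h": {"a", "b", "c"}, "a": {"h"}, "b": {"h"}, "c": {"h"}}
bt = betweenness(star)
print(bt)
```

On the star, the hub lies on the unique shortest path between every pair of leaves, so it scores 3.0 while every leaf scores 0, which is exactly the signature a hub-detection scheme looks for.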

    Posted in Topics: Education

    No Comments

    Auctioning Ad Time

    http://www.mediaweek.com/mw/news/recent_display.jsp?vnu_content_id=1003552272

     

    Popular auction site eBay is attempting to expand into the world of media by auctioning off television ad time.  With this new service, advertisers will be able to detail the target demographic, preferred time of day, and funding available.  Conversely, television networks will be able to make time available for willing bidders.  eBay is providing an important connection between buyers and sellers of ad time.  It also collects all the information in one convenient location.  There’s just one problem: no media outlets have signed up yet!

At this point, the auctions are one-sided, but eBay is working on selling the idea to cable networks, which are generally smaller companies and more open to taking a risk on a new technology.

From the point of view of the television networks (and anyone in Info 204!), eBay is providing what should be a vital new service. Essentially, it is adding many more links to the trading network for ad time. More links provide more options, and more options provide more power. In social networks, individuals with more friends have advantages in dependence, exclusion, satiation, and betweenness. The concepts of dependence and satiation carry over to this type of trading network quite nicely: a television network with more advertising options is not as dependent on any one advertiser to fill up airtime, and its available airtime will be satiated much more easily.
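The dependence argument can be made concrete with the bargaining model from class: each side receives its outside option plus half of whatever surplus remains. A toy calculation (all numbers invented) shows how an extra link shifts the split in the television network's favor:

```python
def split(total, outside_a, outside_b):
    """Nash bargaining split of `total`: each side gets its outside option
    plus half the surplus that remains after both options are covered."""
    surplus = total - outside_a - outside_b
    assert surplus >= 0, "no deal: outside options exceed the pie"
    return outside_a + surplus / 2, outside_b + surplus / 2

# An ad slot worth 10. With no alternatives, the network and the lone
# advertiser split the value evenly.
print(split(10, 0, 0))   # (5.0, 5.0)

# eBay's exchange adds links: the network now has a rival advertiser
# offering 6, so its outside option rises and so does its share.
print(split(10, 6, 0))   # (8.0, 2.0)
```

The network's payoff rises purely because it is less dependent on a single trading partner, which is the point made above.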

     Given all this information, I would not be surprised if television networks soon do begin signing up for this service, and I expect to hear more about this in the future.

    Posted in Topics: Education

    No Comments