This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


The Media and its Effect on Wall Street

According to this article, the market did not do well yesterday, and the primary reason was that indicators such as falling consumer confidence, declining housing prices, and inflation in other sectors induced people to stop spending, further injuring the economy and lowering the Dow Jones Industrial Average. On the surface this may seem to make perfect sense, but why?

In other economics courses I have taken, a widely held theory is that there is “no cash left on the table” - meaning that if there is information or an opportunity to be had (and timing is important), and others have found out about it first, the opportunity is already gone. When these reports come out and the everyday small-time stock trader reads about them, theoretically all of the surplus that information held is already gone, and these people end up perpetuating a trend whose utility has all but dried up. This is where you see the “cascade” occur - markets bubble and burst based on the positive or negative nature of the news, regardless of whether it is accurate.

In this way, reports, especially those given weight by the media, become (in my opinion) immediately outdated, since what is being observed changes the moment it is observed - an uncertainty principle, in a way. As a result of this dynamic, individuals react too late and usually create a self-fulfilling prophecy by reacting to what the media predicts will happen.

Posted in Topics: Education

No Comments

An Information Cascade: The Google Influence

In keeping with the topic of information cascades, a recent study analyzed how likely college students are to click a link from a Google search based on the relevance of its abstract and its position on the page (see http://jcmc.indiana.edu/vol12/issue3/pan.html).

The results indicate that college students are influenced both by the order in which links are presented on Google and by the relevance of the accompanying abstract. When the two are compared, however, the position of the link on the page is far more influential on the students’ decision than the relevance of the abstract.

These findings are characteristic of an information cascade. In this scenario, the searcher is given two pieces of information. First, they are given the rank of the link on the page: the first link is the most relevant according to Google, the second is second in relevance, and so on. Second, they are given the abstract, which provides direct insight into the content of the link they are about to click. The abstract, as we all know, shows the words we searched for in bold, along with a few surrounding words or a sentence on either side.

Intuitively, we might think that reading the abstract gives us our best personal insight into what the page contains - our own ‘research’ into what we are likely to find. But we can also treat the rank on the page as a measure of our trust in Google’s suggestion. This is the cascade factor that sometimes changes our decision away from what we would otherwise feel is right.

The study finds that students are more likely to click on a page based on its rank - more precisely, on the suggestion Google makes about its relevance to the query. Even when the results are scrambled so that more relevant abstracts appear at the bottom of the page, students are inclined to pick the link closest to the top. This strongly suggests that we follow outside suggestions instead of our own intuition, in keeping with an information cascade.
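To make the mechanism concrete, here is a small simulation sketch in Python (my own illustration, not part of the study), in which each searcher has a private signal - their own reading of the abstracts - but also sees how many earlier searchers clicked each of two links. Once the observed click gap outweighs a single private signal, searchers follow the crowd:

```python
# Hypothetical cascade simulation: link 0 is actually the better result, and each
# searcher's private reading of the abstracts is correct with probability q.
import random

def simulate_searchers(n_searchers=100, q=0.7, seed=1):
    random.seed(seed)
    better_link = 0                       # link 0 is truly the more relevant one
    clicks = [0, 0]                       # running tallies visible to later searchers
    for _ in range(n_searchers):
        signal = better_link if random.random() < q else 1 - better_link
        gap = clicks[0] - clicks[1]
        if abs(gap) >= 2:                 # public evidence outweighs one private signal...
            choice = 0 if gap > 0 else 1  # ...so the searcher ignores the abstract
        else:
            choice = signal               # otherwise, follow the private reading
        clicks[choice] += 1
    return clicks

# The tallies typically lock in on one link after the first few choices -
# sometimes the wrong one, if the earliest signals happened to mislead.
print(simulate_searchers())
```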

In short, we must ask ourselves - do we trust machines and their computing algorithms too much? Should we in the future depend on ourselves more to make a better-educated decision as to what we should choose in our sources of information? Perhaps these thoughts will run through our minds the next time we do some research on the internet.

see: http://jcmc.indiana.edu/vol12/issue3/pan.html

Posted in Topics: Education

No Comments

Searchme.com: a visual search engine

When Google was founded in 1998, the standard for Internet access was still dial-up – barely sufficient to support “media-rich” sites, let alone streaming video. Search engines were confined to plain text in the interest of speed. Today, with most Internet users connecting over broadband, high-bandwidth implementations of search engines can offer expanded abilities. One new search engine, Searchme (www.searchme.com), has attracted significant attention, including backing from Sequoia Capital, an early investor in Google.

The Searchme engine offers two innovations. First, it suggests categories for keywords as they are entered, in real time. The site’s official promotional video gives the example of “taurus”: Searchme offers results in either astrology or cars, aiding the user by narrowing the search space.

The second innovation is a visual preview of the pages, which can be likened to Cover Flow in iTunes:

[Image: searchme.jpg - Searchme’s Cover Flow-style visual preview of search results]

Since search engines are essentially the “gateway” into the Web, this has enormous implications for the way pages are created. Searchme claims that a user can browse through its search results more easily than through pure text listings. Thus, “optimization” of a page will depend not only on its position in the list but also on how visually attractive it is. A page will succeed through its designed aesthetic qualities and not solely through its degree of inter-linking.

If and when visual search engines overtake their traditional competitors, the greatest impact will be in advertising. The established method of selling “advertising slots” is tied to the limited flexibility of HTML. Searchme’s interface is built in Flash, which allows for slicker content but also boundless opportunities for advertising, much akin to banner ads. The article suggests that the ad slots of the future may be auctioned off in a manner analogous to today’s cable TV commercial slots.

The likely development of visual search into the de facto search method (whether by Searchme or by Google implementing the same ideas) will have benefits and drawbacks. Websites will aim to be more visually appealing and usable to attract clicks. On the other hand, users will have less control over how much advertising they see. One can imagine full-screen Flash banners for the query “Super Bowl” costing advertisers $2M a pop.

The Searchme engine is slated to open to the public within a month. Regardless of its long-term success, the practical advances Searchme offers remind us that even Google is prone to obsolescence.

Original Article

Posted in Topics: Technology

No Comments

Yahoo’s New Approach to Search

Section 6 of Chapter 10 discusses how link analysis is applied in modern web search. This online article is a good source for anyone who wants to know more about the direction in which major search engines are improving current link analysis. The article can be found at: http://searchenginewatch.com/showPage.html?page=3628767

In this article, Andrew Tomkins, chief scientist for search at Yahoo, says Yahoo is taking a new, task-based approach to search that aims to improve results by focusing on the stage of the user’s task. Yahoo researchers are making changes in both a top-down and a bottom-up fashion, with the main goal of making it easier for users to complete their tasks through search. According to Tomkins, Yahoo pictures the next generation of search as a service that comprehends a customer’s needs and adjusts the way the search engine looks for information in order to achieve the perceived goals. With more focus on productivity and on a conversation-like exchange between the search engine and the user, Yahoo’s new search places the user in control, driving the search the way he or she wants, instead of the search engine trying to take charge of identifying the user’s intent. As discussed in Chapter 10.6, traditional analysis of anchor text, links, and content has been the major source for determining relevance. Recent research has been examining other signals, such as user clicks and engagement, for this new, refined approach to search. Their findings are also likely to include new ways to rank content-based search, to understand queries, and to analyze the task a user is engaged in.
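Purely as an illustration of what blending traditional link-based relevance with behavioral signals such as clicks and engagement might look like (the weights and field names below are hypothetical and are not Yahoo’s actual ranking formula), a combined ranking score could be sketched like this:

```python
# Illustrative only: hypothetical weights and signal names, not Yahoo's formula.
WEIGHTS = {"link": 0.4, "text": 0.4, "clicks": 0.2}

def score(page):
    """Combine a link-analysis score, a text/anchor relevance score, and an
    observed click-through rate into a single ranking score."""
    return (WEIGHTS["link"] * page["link_score"]          # PageRank-style authority
            + WEIGHTS["text"] * page["text_relevance"]    # anchor text / content match
            + WEIGHTS["clicks"] * page["click_through"])  # user engagement signal

pages = [
    {"url": "a.example", "link_score": 0.9, "text_relevance": 0.5, "click_through": 0.1},
    {"url": "b.example", "link_score": 0.6, "text_relevance": 0.7, "click_through": 0.6},
]
for page in sorted(pages, key=score, reverse=True):
    print(page["url"], round(score(page), 2))
```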

As discussed in the text, to keep competing search engines from learning what Yahoo is doing and to limit the growth of SEO, the article does not describe in detail how the new search would actually be processed. However, it gives a good, simple picture of how search techniques are being used today and how they will be modified to meet users’ needs in the future. Since the Web is such a fast-paced environment, and since most people these days go online to retrieve the information they need, search engines are especially expected to keep up with changes in users’ interests. It is interesting to see that user clicks and engagement are the new clues researchers are using to refine search. Access to clicks and engagement helps a search engine provide the most relevant information, but it seems there also need to be some restrictions to protect users’ privacy. When search engines simply crawled and ranked pages to show in response to a query, those techniques did not require a user’s history; but as search evolves into a series of interactions between the engine and the user, effective search techniques tend to encroach on the user’s privacy. The direction suggested by Tomkins reflects the current trend on the Web well, but I also wonder how the issue of privacy will be handled in using these new clues.

Posted in Topics: Technology

No Comments

Economic and Political Implications of Informational Cascade Theory

The concept of informational cascades seems to have a large following in academic circles. It is such an attractive idea because it offers a rational explanation for what appears to be irrational “herd-like” behavior. Information cascade theory describes a situation in which an individual makes a decision based on the actions of other people rather than on personal but incomplete information. While the theory is gaining popularity in political science, economics, sociology, and other scholarly fields, there is some criticism of it and its implications. One such critic, a private equity professional and blogger who goes by the name Private Equity, argues in a post called “Cascades are a Poor Excuse for Stupidity” that scholars are taking cascade theory too far and rationalizing human stupidity. Private Equity writes:

It strikes me as quite tortured to label a market actor who possesses so little actual information about the contemplated transaction that he or she might be caught up in a cascade as “rational.” Particularly where, as here, the costs of an erroneous decision are extremely high and the decision is not forced.

When conducting research in the field of informational cascades, one has to be careful about how one defines rationality. More importantly, one cannot discount the agency of the individual in making his or her choice.

There are political implications of informational cascade theory that have not yet been explored in class, and I hope they will be. If we assume that individuals caught up in a cascade are rational, then they are not to blame for their losses in the event of a negative outcome. Private Equity argues that people will thus press for bailouts and other forms of government intervention to cover their mistakes. We see this logic in the wake of the housing market bubble: investors and homeowners are asking politicians, banks, and presidential hopefuls to bail them out of their poor investments. Informational cascade theory seems to provide us with a useful analytical tool, but according to classical economic theory there is nothing strange about agents in a market trying to exploit a new opportunity. Whether it is a short line at the grocery store’s check-out counter, a profitable business endeavor, or an investment in a specific stock, people will continue to rush in until an equilibrium is reached.

http://equityprivate.typepad.com/ep/2008/03/cascades-are-a.html

Posted in Topics: social studies

No Comments

Business Networks, “Clusters,” Serve as Incubators, Catalysts in Regional Economic Development

Earlier this semester, I enrolled in a course on regional economic development through the architecture school’s Department of City and Regional Planning.  From the outset, our professor challenged us to look at different aspects of economically successful areas and to choose the factor or indicator we believe is most important to creating a successful regional economic development strategy.  In thinking about it, and in doing the assigned readings and research for the course so far, I have come to believe that business “clusters,” as planning theorist and professor Michael Porter calls them, are the most important factor in regional development.  In several of his articles, especially “Locations, Clusters, and Company Strategy,” Porter argues that the most important feature of a region’s economic health and growth is the existence or creation of “clusters,” which are essentially local networks of businesses and companies in related fields and industries.  Clusters may form in an area for a variety of reasons: the presence of a major firm such as Kodak or Microsoft (members of the Rochester, NY, imaging cluster and the Silicon Valley technology cluster, respectively) that requires smaller firms to supply it with parts, service, or consultation; the presence of funding, especially from the federal government; or the presence of necessary resources. Once established, clusters help to stimulate local and regional economic stability and growth.

The “clustering” theory of development goes something like this: once a major firm or several smaller firms are in place, they immediately start the development of a business network, from the simplest levels all the way up.  For example, workers from offices or factories have to go out to eat or order in, just as they need their offices cleaned and maintained.  As a result, a number of low-skilled, specialized businesses pop up to meet those needs.  In addition, companies need parts, consulting, and research (if they do not have their own R&D division) for the actual production or provision of their goods or services, and so more businesses, and possibly even educational institutions, are founded or improved to meet those needs.  Larger firms may also have gaps in the types of services they provide or the products they develop, and so other businesses spring up to fill those gaps, some of them founded by former employees of the original companies.  The list goes on and on and, as you can see, creates a network of businesses and services that are related, interlinked, and often interdependent, relying on each other to remain productive and profitable.

The biggest finding of Porter’s and others’ research, however, is that while these networks may spread outside an area to some degree, they mostly tend to cluster within one immediate region.  Many people may find this surprising given the growth of globalization and communication, but it makes a lot of practical sense.  For one thing, it is cheaper and faster to transport parts, products, and people between related businesses if they are all in close proximity to each other, cutting costs and increasing efficiency.  Secondly, it allows many businesses to share services and resources, such as water, electricity, and roads, instead of having to find or build their own in a different area.  Thirdly, it creates a larger labor pool to draw from, with different companies’ employees locating in one area and local schools and universities turning out workers with degrees in related fields.  Finally, no matter how advanced technology gets, nothing truly compares with being able to meet and talk with someone in person.  People like to see and talk with the people they will be doing business with, and having related companies in one area makes this possible.

Although no one can ever be certain exactly what successful regions have that unsuccessful ones do not, I believe that the presence, or lack thereof, of clusters is a major difference between the two.  For a number of reasons, businesses tend to develop robust and intricate networks within a given region and help bring about economic vitality.  By creating a need for basic and specialized services and products, and by inspiring and paving the way for new firms, business clusters spur economic growth within regions and create a stream of revenue into the area.  Although there are other aspects of economic development, I think regional and city planners should consider the development and presence of clusters when writing their economic plans.

Michael Porter’s “Locations, Clusters, and Company Strategy” can be found at:

http://wf2dnvr3.webfeat.org/A8udJ1636/url=http://content.ebscohost.com/pdf19_22/pdf/2000/T60/01Jan00/25223100.pdf?T=P&P=AN&K=25223100&EbscoContent=dGJyMMvl7ESeqLI4y9fwOLCmrlCep7dSs6m4SLOWxWXSAAAA&ContentCustomer=&S=R&D=buh

Posted in Topics: Education

No Comments

Information Cascades: Mass Hysteria and Outbreaks

In The Tipping Point, Malcolm Gladwell discusses a 1999 outbreak of food poisoning in which over one hundred school children became ill after drinking contaminated Coca-Cola at several schools across Belgium (pp. 268-70). Gladwell reveals that, upon examination of the culprit Coca-Cola factory, the contaminants in the CO2 (which gave off a detectable odor, similar to rotten eggs) were found to be present at levels not nearly high enough to cause food poisoning - just enough for a detectable, unpleasant odor. We are left to assume that a handful of school children must have detected the odor in the Coca-Cola, probably once they had already drunk from it, and began to feel symptoms that started as merely psychological, not physical. Or possibly one child did actually get sick with some sort of food poisoning, probably unrelated to the Coca-Cola, spreading mass hysteria and psychological symptoms among hundreds of other school children in the area.

This leads me to the idea that information cascades can be genuine causes of sickness and hysteria, even in adults. Recently, two Cornell students were diagnosed with meningitis, a fairly rare and serious condition that can be fatal. Cornell University, as an institution, began an information cascade with a very serious March 14th e-mail sent out to all students. The e-mail stated:

A Cornell University student was hospitalized at Cayuga Medical Center (CMC) March 13 with presumed meningococcal meningitis and is in stable condition. He is the second Cornell student to develop the disease in the past week. On March 8, a 21-year-old student was hospitalized at CMC and is currently recovering in a hospital close to her home.

Anyone who attended parties at the following dates and locations, below,  should seek preventive antibiotic from Gannett Health Services or another health care provider, as soon as possible:

- March 6 at 124 Catherine St.

- March 6 at 118 Cook St.

- March 8 at 306 Highland Ave. (Tau Epsilon Phi)

All members of the Cornell community, regardless of attendance at these parties, should be on the alert for the signs and symptoms of meningococcal disease and aware of ways to reduce the risk of infection.

Meningococcal meningitis (not to be confused with viral meningitis) is a serious though rare infectious disease that can cause permanent health problems or death. People are advised to be aware of their health and watch for symptoms of the illness, which include fever, headache, nausea, vomiting, stiff neck, rash, irritability and mental confusion. If you have any of these symptoms, seek immediate medical care.

From my own experience, and using my best guesses, I’m sure there were many students who began to feel that heavy feeling in the chest that comes when you start looking for symptoms that aren’t really there. I would guess that a large number of students suffering from fever or nausea after too many hours in the Florida/Cancun/Mexican sun, or nursing a headache from too many drinks the night before, began to feel an increase in their symptoms after receiving that dread-filled e-mail. And similarly to the in-class example in which the number of people looking up at the sky correlates with the number of passers-by who decide to look up as well, I would guess that proximity to the aforementioned locations or people correlates with the level of anxiety and symptoms. For example, people who came in direct contact with the two infected students are at the highest risk not only physically but mentally as well, followed by students who were at the three parties/locations on March 6-8 but had not come into contact with the sick students. Students who were already sick with what they thought might be a cold, the flu, etc., but were not at those locations, were probably feeling very uneasy and also sicker than they had felt before.

The University’s mention of the three locations related to the outbreak was a definite fear factor. Although I can’t know for sure, I’d be willing to bet that the Brothers of Tau Epsilon Phi and the residents of Cook and Catherine Streets might still have that odd feeling in their stomachs, heads, etc. that comes with the knowledge that you could possibly be seriously sick. So far there has not been mass hysteria around this issue, but I’m sure Gannett has had significantly more calls than usual in the past week, despite the lack of any further meningitis diagnoses. It is easy to see how an information cascade like this one could lead to mass illness or hysteria. Information cascades can often lead to things (in this case, symptoms) occurring out of “thin air.”

Posted in Topics: Education

No Comments

Go Cross Campus

This article (http://www.nytimes.com/2008/03/21/technology/21ivygame.html?em&ex=1206504000&en=f0c1e25feefc393d&ei=5087%0A) introduces GoCrossCampus, or GXC, a massively multiplayer game built on social networks. GXC is a team-based, locally social online sport that revolves around your connections, location, and interests. “Any team of people can play, whether it’s rival dorms battling across a college campus, employees competing in an exciting corporate team-building tournament, universities duking it out in an inter-campus athletic conference championship, or even supporters of presidential candidates in a political bash across the country.” “It’s totally free, and takes just 2 minutes a day to play! Use your armies to attack your enemies and defend your territory. Everyone is part of the action, but your team as a whole must band together to coordinate strategy, elect leadership, and even root out spies! Join a strategy meeting…or start one! Or just play on the sidelines. How you participate is up to you.” “Conquer your campus, win your world.”

The article says no one is claiming this is the next Facebook, and at http://bits.blogs.nytimes.com/tag/gocrosscampus/, Gabe Smedresman, the game’s creator, also says there should be no Facebook-style controversy. Moreover, this game is different from other online games because it unites participants of real-world communities in a common online cause.

James Boyle, the institute’s director, says of GXC: “If there is any aspect of social networking that has not been as fully exploited up to now, GXC has found it.” The game is pretty impressive, combining online combat with offline strategy.

Finally, there is now an offshoot version called GoCrossPoliticalBash ’08. It even ‘triggers’ online battles among presidential candidates’ supporters over a US map. Because it is based on real-world communities, this virtual online game may ultimately take on real-world meaning.

I look forward to seeing how far GXC can go.

Posted in Topics: Education

No Comments

Market Theory in Higher Education

In the market for higher education, the buyers are students who can each buy only one unit of the good, a college education, but we change the rules so that sellers have multiple units of their good. A buyer’s valuation of each seller’s good can be modeled as how much that buyer could and would be willing to pay for a unit of it. In this example we disregard the probability of being accepted into a school, putting all buyers on the same level but with individual valuations of the goods. Each person belongs to this network, having at some time been in the position of the buyer.

In this article, Brandeis professor of social and economic policy Robert Reich writes about the flaws of higher education in America and how it is a market changing for the worse. In the past decade, the percentage of students from low-income families who attend college has been decreasing, and financial aid now covers a much smaller percentage of tuition costs than it did for earlier generations. Using the market system described above, this helps explain why buyers from lower-income families will have lower valuations of the goods: they cannot pay as much to go to a given university. Running an auction procedure on the higher-education market, it is clear that students from high-income families will receive admission to their schools of choice, which are often private, more renowned universities. Assuming there are enough schools that every student can potentially receive a higher education, there should be a matching in the bipartite graph and, theoretically, a set of market-clearing prices arising from a perfect matching in a preferred-seller graph. However, because of the wide differences in buyers’ valuations, which correlate with family income, it is likely that the payoffs for some students will become negative or that there will be an unreasonably large gap in the final prices. In the market-clearing auction procedure, only the prices of the sellers in the neighboring set of a constricted set of buyers are raised; in reality, however, prices are rising for all universities, private or state-funded. As a result, buyers from low-income families often cannot afford to go to college at all.
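To make the auction procedure referenced above concrete, here is a minimal sketch in Python of the textbook’s market-clearing mechanism, simplified to one unit per seller and using made-up valuations of my own (it is not taken from Reich’s article): prices start at zero, and whenever the preferred-seller graph has no perfect matching, the prices of the goods demanded by a constricted set of buyers are raised by one unit.

```python
def preferred_sellers(valuations, prices):
    """For each buyer i, the set of items j maximizing payoff v[i][j] - p[j]."""
    prefs = []
    for vals in valuations:
        payoffs = [v - p for v, p in zip(vals, prices)]
        best = max(payoffs)
        prefs.append({j for j, pay in enumerate(payoffs) if pay == best})
    return prefs

def max_matching(prefs, n):
    """Kuhn's augmenting-path algorithm; returns an item -> buyer matching."""
    match = {}
    def try_assign(i, seen):
        for j in prefs[i]:
            if j not in seen:
                seen.add(j)
                if j not in match or try_assign(match[j], seen):
                    match[j] = i
                    return True
        return False
    for i in range(n):
        try_assign(i, set())
    return match

def constricted_neighbors(prefs, match, n):
    """Items demanded by buyers reachable via alternating paths from an unmatched buyer."""
    unmatched = set(range(n)) - set(match.values())
    buyers, items = {next(iter(unmatched))}, set()
    frontier = list(buyers)
    while frontier:
        i = frontier.pop()
        for j in prefs[i]:
            if j not in items:
                items.add(j)
                owner = match.get(j)
                if owner is not None and owner not in buyers:
                    buyers.add(owner)
                    frontier.append(owner)
    return items

def market_clearing(valuations):
    n = len(valuations)
    prices = [0] * n
    while True:
        prefs = preferred_sellers(valuations, prices)
        match = max_matching(prefs, n)
        if len(match) == n:                      # perfect matching: prices clear the market
            return prices, {buyer: item for item, buyer in match.items()}
        for j in constricted_neighbors(prefs, match, n):
            prices[j] += 1                       # raise prices of the over-demanded goods
        if min(prices) > 0:                      # normalize so the cheapest good stays at 0
            m = min(prices)
            prices = [p - m for p in prices]

# Hypothetical example: three students (buyers) valuing three schools (sellers).
valuations = [[12, 4, 2], [8, 7, 6], [7, 5, 2]]
prices, assignment = market_clearing(valuations)
print("clearing prices:", prices)    # [3, 1, 0] for these valuations
print("assignment:", assignment)     # buyer -> school
```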

Reich also observes that the goal of higher education has shifted from the public good of society to the advancement of the individual’s ambitions. So there may well be social-welfare maximization with respect to valuations that directly correlate with income, but the valuations ought to be closer together. As it is now, the valuation a higher-income buyer places on a good they might not even want is often still higher than the valuation a low-income buyer places on their first choice. The way the system is set up makes it harder and harder for low-income buyers to obtain the goods they really want. Therefore, in higher education, there is rarely a baseline-maximizing perfect matching. Many low-income students are left worse off, and while higher education becomes an industry catering to specific individuals who are allowed to get more out of what society has to sell, others are made to settle for less.

Of course, these flaws in the higher education system extend well beyond what can be attributed to the changing market and the simple theory behind it. But perhaps in the future we will see changes in this market that promote social-welfare maximization on a more human level.

Posted in Topics: Education, General

No Comments

Public SpaceRank

I stumbled upon this architecture competition entry, titled “Quameleon,” by Ayssar Arida Associates: http://q-news.archsitestudio.com/files/AA_MEIAC_MIMESIS.pdf.  The project allows occupants to design their own temporary public spaces by using light and sound qualities to augment existing spaces.  Occupants may upload new sound or video files into a database, or choose from existing uploaded files and qualities.  As a networked system that compiles a community database of information, it is potentially very effective in creating a democratic, participatory design solution to the problem of designing public space by committee.  However, a search engine is not part of this system.  This lack of search capability limits not only users’ access to information as the database grows, but also the capacity of the system to build toward a consensus “design” of public space.  Applying concepts from our class discussion, there seems to be an opportunity for a PageRank-inspired search engine to identify the most popular light/sound qualities based on the moods or atmospheres they induce.  These moods/atmospheres could then be popularized by different communities with different agendas, taking the project a step further in its effort to create a democratic model for the development of public space.  How would this system, which I’m calling Public SpaceRank, work based on the PageRank model?

 

The system could use PageRank logic to rank the combinations of spatial qualities (light/sound) that are the most popular ways to induce desired moods/atmospheres, constantly adjusting and updating the ranking as new qualities are uploaded into the network through the addition of sound/video files.  Users would begin to assign keywords to specific combinations of qualities as they modify the public spaces. For example, Isaac Hayes music, red coloration, and low lighting levels might be assigned the keyword “romantic” or “sexy”.  Users could search this keyword database to identify desired atmospheres/moods.  Based on PageRank logic, two ranking systems would be needed to translate these keywords into specific combinations of qualities.  First, qualities would be ranked based on the number of in-links currently attributed to them by relevant keywords.  Second, keywords would also carry a score based on the number of links they have to relevant qualities.  The final ranking would consider both of these criteria, identifying keywords that tend to select more popular qualities and qualities that tend to be selected by a greater number of keywords, as sketched below.  A keyword search would then yield the qualities deemed communally most popular.  In this way, a search system based on PageRank logic could give a community powerful access to shared moods/atmospheres and thereby provide the means for the emergence of a unique community identity in the ongoing, evolving design of public space.
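Here is a rough sketch in Python of that two-sided scoring (my own illustration; “Public SpaceRank” is hypothetical, not an existing system). Keywords and quality-combinations reinforce each other in the spirit of PageRank/HITS-style link analysis: a quality scores highly if high-scoring keywords tag it, and a keyword scores highly if it tags high-scoring qualities.

```python
def public_spacerank(links, iterations=50):
    """links: dict mapping each keyword to the set of quality-combination ids
    that users have tagged with it."""
    qualities = {q for qs in links.values() for q in qs}
    kw_score = {k: 1.0 for k in links}
    q_score = {q: 1.0 for q in qualities}
    for _ in range(iterations):
        # a quality's score is the sum of the scores of keywords linking to it
        q_score = {q: sum(kw_score[k] for k, qs in links.items() if q in qs)
                   for q in qualities}
        # a keyword's score is the sum of the scores of the qualities it links to
        kw_score = {k: sum(q_score[q] for q in qs) for k, qs in links.items()}
        # normalize so scores stay comparable from one iteration to the next
        q_total = sum(q_score.values()) or 1.0
        kw_total = sum(kw_score.values()) or 1.0
        q_score = {q: s / q_total for q, s in q_score.items()}
        kw_score = {k: s / kw_total for k, s in kw_score.items()}
    return kw_score, q_score

# Hypothetical data: two keywords tagging overlapping light/sound combinations.
links = {"romantic": {"low-light+isaac-hayes", "red+jazz"},
         "sexy": {"low-light+isaac-hayes"}}
keyword_scores, quality_scores = public_spacerank(links)
print(sorted(quality_scores.items(), key=lambda kv: -kv[1]))
```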

Posted in Topics: Education

No Comments