This is a supplemental blog for a course which will cover how the social, technological, and natural worlds are connected, and how the study of networks sheds light on these connections.


Less is More for Google

http://www.webpronews.com/topnews/2007/03/22/googles-ads-less-is-more

 

In class, we began to discuss how Google runs its auctions for ad slots. One thing this article brings to attention is the fact that Google runs fewer ads than its competitors. In a linked blog, we find that when “San Francisco Sushi” is searched on Google, we get 2 ads on top and 3 ads on the side. When the same search is done on Yahoo, there are 2 ads on top, but on the side there are a whopping 8 ads! So why does Google seem to “limit” the revenue it could be making by offering only a few ads?

 

The answer lies in the research Google has done. Their research has discovered that fewer ads mean lower revenue in the short term. However, “in the long term the advertising revenue actually goes up. Why? They found their users started trusting the advertising more and were more likely to click on ads.” With fewer ads, Google can provide ads that are more relevant to the search. This ties into Google’s new “pay per action” advertising policy, as advertisers will have to pay for fewer accidental clicks. With more relevant ads and less clutter on its search results, Google feels that it can get potential clients to its advertisers’ websites.
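
To make the tradeoff concrete, here is a minimal sketch (with made-up numbers, not Google’s actual figures or model) of the idea that a page’s expected ad revenue is roughly the sum, over the ads shown, of click-through rate times price per click. A few well-targeted ads with higher click rates can out-earn a cluttered page of barely relevant ones.

```python
# Toy illustration, not Google's actual model: expected revenue per search is
# roughly the sum over the displayed ads of (click-through rate) x (price per click).
# All numbers below are invented purely to show how fewer, more relevant ads can win.

def expected_revenue(ads):
    """ads: list of (click_through_rate, price_per_click) pairs."""
    return sum(ctr * cpc for ctr, cpc in ads)

cluttered = [(0.005, 0.50)] * 8                         # 8 loosely relevant ads
relevant = [(0.03, 0.50), (0.02, 0.50), (0.015, 0.50)]  # 3 well-targeted ads

print(expected_revenue(cluttered))  # about 0.02 dollars per search
print(expected_revenue(relevant))   # about 0.0325 dollars per search
```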

Posted in Topics: Education

No Comments

Yahoo Unleashes New Weapon

Helft, Miguel. “A Long-Delayed Ad System Has Yahoo Crossing Its Fingers.” New York Times, 5 Feb. 2007.

Link 

In class we have been dealing with models in which the slot an advertiser gets is based solely on their valuation of that slot. In reality, the value of a slot to an advertiser is not the only criterion used to place them. This article talks about how big players in the keyword advertising business use a mix of bid price and ad relevance to a given search in order to maximize revenue. Basing slot assignment solely on the advertiser’s bid is not a good strategy. Click rate is not a static phenomenon; it depends on how relevant an advertiser is to a given search as well. If the highest-bidding advertiser for a keyword is not a well-known company or is not relevant to the word they are bidding on, the clicks per search go down and the search engine loses money.
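
As a rough sketch of the point above (the advertiser names, bids, and click estimates are all hypothetical), ranking by bid alone can put a barely relevant advertiser on top, while ranking by bid times estimated click-through rate, the mix the article describes, favors the advertiser who will actually earn the engine money:

```python
# Hypothetical advertisers bidding on "San Francisco Sushi":
# (name, bid per click in dollars, estimated clicks per 100 searches in the top slot)
advertisers = [
    ("ObscureCo", 2.00, 1),   # highest bid, barely relevant to the query
    ("SushiSF",   1.20, 8),   # lower bid, highly relevant
    ("FishMart",  1.00, 5),
]

by_bid = sorted(advertisers, key=lambda a: a[1], reverse=True)
by_expected_revenue = sorted(advertisers, key=lambda a: a[1] * a[2], reverse=True)

print([name for name, _, _ in by_bid])               # ['ObscureCo', 'SushiSF', 'FishMart']
print([name for name, _, _ in by_expected_revenue])  # ['SushiSF', 'FishMart', 'ObscureCo']
```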

 

The article focuses mainly on Yahoo and the release of “Project Panama,” its new ad system, which Yahoo projects will bring in much more revenue starting about half a year from now. The new system will allow advertisers to target certain geographical regions and test the expected effectiveness of their ads. It coaches advertisers on what and how much to bid based on their budget and the number of customers they want to bring in. The article is interesting because it goes behind the scenes of keyword advertising. It talks about the difficulty of changing a running system and the long hours spent getting “Project Panama” launched.

 

Project Panama is not expected by any means to eclipse competitors like Google or MSN. Rather, it is aimed at getting Yahoo back into the online advertising game after the company neglected to update its system sufficiently over the past year, during which Yahoo shares fell from $35 per share to $22. While Google makes 4.5 to 5 cents per search, Yahoo makes a mere 2.5 to 3 cents. These numbers, while small, multiply out to a difference of billions of dollars a year.
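
A quick back-of-the-envelope check of that last claim (the per-search figures come from the article; the yearly search volume is a round number made up purely to show the scale):

```python
google_per_search = 0.05     # dollars per search, the article's upper estimate
yahoo_per_search = 0.03      # dollars per search
searches_per_year = 100e9    # hypothetical volume, for illustration only

gap = (google_per_search - yahoo_per_search) * searches_per_year
print(gap)  # roughly 2e9, i.e. about $2 billion a year
```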

 

The article also mentions the strategy of displaying fewer advertisers that Google put into effect last Wednesday. The change makes the search experience better for users, as they see fewer ads with more relevance. Google is already seeing more clicks per search since the change.

Posted in Topics: Education

View Comment (1) »

Building a Network Theory of Social Capital

In his scholarly article “Building a Network Theory of Social Capital,” Nan Lin attempts to consider social capital as “assets in networks,” studying social networks in terms of social capital and the returns that investments in social capital can provide. Lin explains the notion of “capital” in terms of classical social and economic theories, though he mainly uses the term to refer to the process by which individuals can acquire and invest in capital in order to earn returns. Lin explains that the notion of social capital is straightforward: “investment in social relations with expected returns” (Lin 30).
Lin identifies three opportunities for “profit” through social networking. First, Lin argues that individuals usually find themselves in imperfect market conditions. As such, social ties can provide an individual with increased information about opportunities or choices that the individual would not have been privy to without investing in joining a social network. This theory loosely relates to theories about the strength of weak ties and network “bridges,” though here Lin seems to suggest that these weak ties are solely concerned with sharing information (while perhaps strong ties are limited to actual friendship). Second, Lin proposes that an individual’s place in a social network may exert influence on agents who make decisions (for example, hiring or promotion) regarding the individual. Lastly, Lin argues that an individual’s membership in a social network can convey social credentials about that individual. These social credentials may reflect the individual’s access to outside resources or simply convey some sort of status (if other individuals have allowed this person into their network, they must know or like something about that person that I don’t know).
Based upon these three “profit opportunities,” Lin attempts both to explain the existence of vast social networks (and the inequalities within them) and to account for the value that individuals receive from “investing in” and joining these networks. Lin’s article was published in 1999, well before the phenomenal explosion of social networking sites like myspace.com or facebook.com, though the author does have the foresight to mention that he thinks “cyber-villages” may provide an easily measurable tool for determining the value of social networks.

Posted in Topics: Education

View Comment (1) »

Win a Hummer H3 for $36.65!!!

In addition to the main types of auctions covered in class such as first- and second-price sealed bid auctions, a number of interesting buying and selling systems have been devised to suit specific needs. This article (http://www.slashphone.com/70/6770.html) discusses Verizon Wireless’s use of a unique bid auction to simultaneously generate publicity and sales. Subscribers to Verizon Wireless V CAST can participate in a game show called the Limbo Show in which they text message bids for fun prizes such as a new flat screen television or free vacations. However, in this auction, it isn’t the highest bidder who wins the prize but the one who places the lowest unique bid.
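
Here is a minimal sketch of how a lowest-unique-bid winner could be picked from the submitted bids (the Limbo Show’s exact rules and tie-breaking are not spelled out in the article, so treat this purely as an illustration):

```python
from collections import Counter

def lowest_unique_bid(bids):
    """Return the lowest bid placed by exactly one bidder, or None if every bid repeats."""
    counts = Counter(bids)
    unique = [bid for bid, n in counts.items() if n == 1]
    return min(unique) if unique else None

# 36.64 and 12.00 were each bid twice, so the lowest *unique* bid wins the Hummer.
print(lowest_unique_bid([36.64, 12.00, 12.00, 50.25, 36.64, 36.65]))  # 36.65
```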

The allure to buyers is obvious – they have a chance to win incredible deals. Imagine winning a Hummer for $36.65! I’m not sure if there is an optimal strategy other than dumb luck. Choosing the same bid amount for different auctions is probably not the best strategy, since there may be another person bidding their favorite number on every single auction, and that favorite number could be the same number you chose for everything.

Unfortunately, players still need to pay their bid even if they lose. My guess is that even if you win an item every now and then for a ridiculously cheap price, you probably spend so much money losing all the other auctions you participated in that it all evens out. So, the unique bid auction has elements of an auction and elements of a lottery. I’m guessing that participants will eventually get tired of losing, which in turn would lower the number of bids and make the auctions unprofitable. On the other hand, a large media company with lots of capital like Verizon seems ideally suited to generate enough interest in the game to make the unique bid auction work, even if it is short-lived.

Posted in Topics: Education

View Comment (1) »

Google Plans to Move Forward

http://publications.mediapost.com/index.cfm?fuseaction=Articles.showArticleHomePage&art_aid=57492

From lectures, we learn about the keyword-based advertising that Google and other search engines run. This kind of advertising turns out to generate $10-$20 billion per year, creating a massive market that companies fight over through an auction called the Generalized Second Price auction. This auction extends the idea of the second-price sealed-bid auction to multiple ad slots, with the first slot priced the highest. The ads are sold on a cost-per-click model, which means that an ad linking to a company’s Web site is shown every time a certain query is made, and the business pays the search engine when users click on that link. This works because the act of clicking ads on a search engine indicates intent on the user’s part, and intent means a potential sale for that company.
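
A bare-bones sketch of the Generalized Second Price allocation described above (a simplification, not Google’s production system, which also weights bids by ad quality): slots go to advertisers in order of bid, and each winner pays roughly the bid of the advertiser ranked just below them.

```python
def gsp(bids, num_slots, reserve=0.05):
    """bids: dict of advertiser -> bid per click.
    Returns (advertiser, price per click) for each slot, best slot first."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(num_slots, len(ranked))):
        winner = ranked[i][0]
        # Pay the next-highest bid (the real system adds a small increment),
        # or a made-up reserve price if nobody is ranked below.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else reserve
        results.append((winner, price))
    return results

print(gsp({"A": 10.0, "B": 8.0, "C": 7.0}, num_slots=2))  # [('A', 8.0), ('B', 7.0)]
```
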
There are some objections to the cost-per-click model, such as click fraud, where people may hire others to click on ads all day long with no intention of buying anything. But it seems that Google may move from the click model toward a Pay-Per-Action (PPA) model. Incorporating PPA may either flop horribly for Google or make the ad slots more valuable, since advertisers would pay only when users show real interest by completing purchases or other transactions on their sites. It would also reduce the chances of click fraud, making Google’s customers less reluctant to bid on ad slots.
Furthermore, Google plans to serve fewer ad slots. To some people this may sound crazy, but Google thinks it may raise the slots’ values and allow more relevant ads to show up, which, in the long run, might make users trust and click on the ads more.
All this will probably make Google an even stronger favorite among search engines, and the cost-per-click model we talked about in class may soon be ancient history in World Wide Web time.

Here’s a blog post on the same subject.

Posted in Topics: General, Technology

View Comment (1) »

The Google PageRank Algorithm - and how it works.

http://www.iprcom.com/papers/pagerank/

In class we have been talking a lot about PageRank and how it works. Well, this website takes the time to explain how Google PageRank works in detail.

PageRank is what Google uses to determine the relevance and importance of websites for its search results. PageRank actually says nothing about the content or size of a page, the language it’s written in, or the text used in the anchor of a link. Instead, it is determined by how many pages around the web link to the page in question. So it is something of a popularity contest, with links acting as votes.
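
For readers who want to see the mechanics, here is a stripped-down version of that link-based idea (a toy sketch, not Google’s actual implementation): every page starts with the same score, and at each step a page passes its score along its outgoing links, so pages with many incoming links from well-linked pages accumulate higher scores.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Returns page -> score. Dangling pages simply leak score in this toy version."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# C is linked to by both A and B, so it ends up with the highest score.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```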

The page goes on to explain the exact dynamics of how PageRank is determined in great detail, so if you are interested, please read it. :) This way of determining rank is much more reliable than using meta tags or page content, because people can manipulate those to make their site show up higher in search results. In fact, I remember trying to get my personal homepage higher in Yahoo search about 10 years ago when I was making it. :) Ah, those were the days.

Posted in Topics: Education

No Comments

Google’s Caching Feature - Spam Protection or Copyright Violation?

http://news.com.com/2100-1038_3-1024234.html

The above URL links to a 2003 article from CNET News about Google’s caching feature. When you run a query through Google, besides being able to click on the main link there is also a link labeled “cached,” which directs the user to a stored copy of the website they wish to visit. Google introduced caching in 1997, mainly to combat page-jacking and spam. Page-jacking occurs when someone copies a frequently accessed webpage, resubmits it to a search engine to recreate the high rankings the original earned, and then alters the content of the page to suit their interests (in other words, stealing surfers from the page they intended to visit and redirecting them to a copied webpage of altered, unwanted content). Caching can help to avoid this problem. In 1997, Google began archiving web pages by making direct copies of them and offering these copies to web surfers. By clicking on a cached page, the user has a greater chance of ending up with the original content of the page they wish to view. However, caching has raised a few questions concerning accuracy of information and copyright infringement. The article points out that web surfers can access “dead” or removed pages, archived New York Times articles that are supposed to be member-only, and out-of-date information, because Google’s web crawlers cannot re-copy every page the instant it is updated. The article also points out that the caching feature could potentially violate copyright law by directly copying pages without consent from the page operators and offering access to these out-of-date or member-only pages.

This topic relates to what we’ve been learning in class about how search engines operate in two ways. First, the article sheds some light on how page-jackers use the system of page ranking to their advantage by imitating pages with high authority scores. Also, the article mentions how Google differs from other search engines by archiving pages, whereas many of its counterparts use statistical records of page content for page ranking. In class we discussed the drawbacks of using only statistical methods to generate page rankings. This article shows how relying solely on statistics leaves ample room for spammers to imitate highly relevant pages. However, Google’s alternative may prove precarious.

Posted in Topics: General, Technology

No Comments

Why do people let themselves become too connected?

It’s a question that pops up from time to time in newspaper articles (e.g., this BBC article) and technology magazines: are we too connected? In other words, with the ever-present cellphone at our collective sides, are we too easy to reach? The worry is that as the number of networks we join increases, the pressure to devote equal and significant amounts of time to each becomes increasingly unmanageable. This situation seems reminiscent of the network exchange theory problems we discussed in class a few weeks ago. Assume that the time someone spends constructing and maintaining a connection to a network is the cost, or worth, of an edge to them, and that each node is a network (in the most abstract sense of the word, anything from a cellphone network to one’s immediate physical surroundings) that person is connected to, with one special “personal” node representing the individual. Then we can easily see the result of adding another node connected to the “personal” node: the individual must repartition their limited time yet again to account for the additional edge. Furthermore, these nodes represent networks; the networks (in most cases) can exist without the edge to a particular “personal” node, so each other node has an “outside offer” of much higher value than the current edge. This fact, prima facie, forces the person to act much like the middle node in our five-person exchange model, simultaneously competing with every outside offer so as not to get cut out of an exchange. It’s an unwinnable battle.

Then why do people try anyway?

The reason lies in human nature. The fundamental difference between a social and a technical network is that the former remembers past transgressions. Social networks take emotional investment and time to build; because of this, we are preprogrammed not to let any of our connections languish or die*. In fact, if a social tie does perish (not just languish or become temporarily cut), it is often harder to recreate than it was to build the first time. Unfortunately, we have not yet adapted to technical networks, which have no memory and will let you join, leave, and rejoin with much more flexibility. This leads to people joining more networks than we have adapted to handle (because they can, or because they feel they must), and treating each of those networks as equivalent to our real-life ones.

The solution, far from divesting oneself of all electronics for a day (as the BBC article touted), is to recognize that technical networks are intrinsically less important than social networks (it’s more acceptable to let a phone call roll to the answering service than it is to interrupt a conversation) and to actively enforce that rule in one’s daily life. Moreover, people always have power over the technical networks they are part of because they can dictate the terms of the exchange; it’s far more acceptable to walk away from an exchange with an electronic device than it is to walk away from a person. Those who have fully embraced this ideal have prioritized their lives so that the less important things, like work-related activities, happen solely over a technical network. These people (predictably, programmers and their ilk) were recently singled out in a Slashdot article that touted the benefits, particularly the freedom, of working wholly online. Perhaps the many virtual edges we are developing in our personal networks today will give us more physical freedom tomorrow.

*If you want neat empirical proof of this fact, try this trick. If you need to ask a quick question of someone - particularly someone like a receptionist who has a line of people waiting to talk to them - instead of joining the line, try calling the phone on their desk. Most of the time, the person will make the entire line wait while they answer the phone (and your question). This virtual queue-jumping technique works because the receptionist knows that the “connection” between them and the person on the other end of the phone will expire the soonest. In an attempt to save that “connection,” they will stop everything else and devote all their energy to the phone, even knowing that it’s probably just another unimportant caller.

Posted in Topics: Education, Technology

No Comments

Microsoft Searches For Better Methods

http://news.com.com/Microsoft+looks+for+better+way+to+search+the+Net/2100-1038_3-6165026.html

Going along with what we have been learning about search, this article discusses some of the future fixes that Microsoft has planned in its hopes of overtaking Google. Microsoft plans to look at how people refine unsuccessful searches - that is, searches that did not retrieve the wanted results - in order to provide better results. This method sounds very intuitive. Personalized Search, based on what users have on their hard drives, looks like a bigger leap, allowing for better matches based on individual information.

Both of these steps sound like natural improvements to the indexing heuristics that were discussed in lecture, since the first does the equivalent of learning from mistakes and is similar in spirit to fuzzy-logic matching. However, it would require a lot more data to be gathered in order to work successfully. It also may not take into account the authority that some sites have, as it is likely to help lesser-known sites that get overshadowed by those returned in rather broad search results. The second is, of course, easy to imagine thinking up, but it seems likely that people could have misgivings about the way it uses personal content for search. One method I think might be useful is to find a way to use personal information to create some sort of search analogue to the Music Genome Project, taking into account a variety of tastes and preferences.
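
As a purely illustrative sketch (the article does not describe Microsoft’s actual method, and the queries below are made up), the “learning from refined searches” idea could be as simple as counting which follow-up query users issue right after a search that apparently failed them:

```python
from collections import defaultdict, Counter

# For each query, count what users searched for next in the same session.
refinements = defaultdict(Counter)

def record_session(queries):
    """queries: one user's searches, in order. Each query that was immediately
    followed by another is treated, crudely, as an unsuccessful search."""
    for failed, retry in zip(queries, queries[1:]):
        refinements[failed][retry] += 1

record_session(["jaguar speed", "jaguar animal top speed"])
record_session(["jaguar speed", "jaguar car top speed"])
record_session(["jaguar speed", "jaguar animal top speed"])

# The most common refinement becomes a hint about what "jaguar speed" really meant.
print(refinements["jaguar speed"].most_common(1))  # [('jaguar animal top speed', 2)]
```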

Posted in Topics: Education

View Comment (1) »

Deviations from truthful bidding in Google’s ad auctions

http://www.gsb.stanford.edu/news/research/econ_ostrovsky_internetauction.shtml

Stanford economists say that under Google’s current auction system, inexperienced bidders often end up overpaying, while experienced ones funnel personnel and money that could be used for growing their companies into figuring out how to beat Google’s system instead.

The problem is that Google says it is conducting a second-price auction using ideas from the Vickrey auction model, which makes advertisers think they should bid their true values. However, the amount of money an advertiser pays is not actually the harm he does to the other bidders by winning his slot; instead, the advertiser pays $0.01 more than the next-highest bid. In many situations the advertiser is better off underbidding his true value, because the savings from paying less for a lower slot outweigh the revenue lost by advertising in that lower slot.

To illustrate this point, consider the following example offered by the economists.

Advertisers X, Y and Z truthfully bid $10, $8 and $7 per click for slots a, b and c, which receive 100, 70 and 50 clicks per hour. Under the Vickrey auction model, advertiser X would pay an amount equal to the harm he does to advertiser Y by pushing Y out of the first slot plus the harm he does to advertiser Z by pushing Z out of the second slot. This equals $8 × 30 + $7 × 20 = $380 per hour, so the Vickrey model says that advertiser X should pay $380/100 = $3.80 per click. In this system bidders have an incentive to state their true values, as illustrated in class.

Under Google’s system, however, advertiser X would be charged $8.01 per click. If he instead bids just under $7, he takes the third position and pays a very small price per click, a savings that would likely offset his drop in clicks from 100 to 50 and leave him with higher profits.
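
The economists’ example can be worked out directly in a few lines (a sketch of the two pricing rules as described above, not either company’s real system):

```python
bids = [10.0, 8.0, 7.0]   # advertisers X, Y, Z, dollars per click, highest first
clicks = [100, 70, 50]    # clicks per hour for slots a, b, c

# GSP: the top advertiser pays a penny more than the next-highest bid per click.
gsp_price_X = bids[1] + 0.01                       # 8.01

# Vickrey/VCG: X pays the harm his presence causes. Y loses (100 - 70) clicks at $8,
# Z loses (70 - 50) clicks at $7, for $380 per hour, or $3.80 per click.
vcg_total_X = bids[1] * (clicks[0] - clicks[1]) + bids[2] * (clicks[1] - clicks[2])
vcg_price_X = vcg_total_X / clicks[0]              # 3.8

print(gsp_price_X, vcg_price_X)  # 8.01 3.8
```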

The authors urge Google to apply a true Vickrey model to its auction. The current system is volatile; advertisers are constantly observing each other’s bids and adjusting theirs accordingly, and ad placement changes from one minute to the next. However, a change may be unlikely as Google’s current system gives the company a lot more money than the Vickrey auction would.

Another reason for lower bids

An Ad Upstart Challenges Google

Google does not give advertisers the opportunity to bid for space on specific websites, nor a list of the sites where their ads end up; as a result, most advertisers assume their ads will land on many second- and third-tier sites and lower their bids accordingly. Big-name websites in Google’s AdSense network thus receive lower bids from advertisers and get less revenue than they would if the advertisers had known the websites’ identities.

Quigo Technologies, a New York contextual ad company that has succeeded in luring away some Google customers (including ESPN), challenges Google’s lack of transparency on this front by offering its own customers the opportunity to bid on specific sites and information about where their ads end up. Google and Yahoo! argue that because advertisers pay per click, the additional information Quigo supplies is not relevant. Still, Google has announced that it plans to share with customers the information Quigo currently gives out.

Posted in Topics: Mathematics, Technology

View Comment (1) »