
Best Evaluation Practices for the Overworked and Understaffed

Room: South American A


Christine Walker

Chris Neuhaus

There is much recent writing on digital library evaluation. Many documented usability studies and current evaluation techniques require significant time, effort, and manpower. Must evaluation be cumbersome and time-consuming? What is a small project with very-nearly-overworked team members to do? Veteran evaluators and newbies (c'est moi) will discuss not just best evaluative practices, but best quick-and-dirty evaluative practices for digital libraries. Suggestions for simplifying or streamlining certain phases of evaluation will be shared. High-yield but high-effort practices, essential for digital library evaluation, will inevitably be discussed as well.


Notes - Best Evaluation Practices for the Overworked and Understaffed


Information

Birds of a Feather Session, facilitator Christine Walker: Best Evaluation Practices for the Overworked and Understaffed

Attending: Frank Settle, Lillian Cassel, Carol Terrizzi, Paul Craig, Ann Renninger (recorded session) and Thad Lurie.

Christine Walker: community college based, an ATE Center of Excellence; questions to redefine what it means to work in environmental technology; need to continue to learn about users

Frank S.: bibliographic site; questions how people use the Internet. Just at the point of evaluating web design/user interface, using a filtered log. Has a server filter; can look at sessions and hits on various objects in a session; can track how a user goes through a session. Found the majority of hits came from Google: referrals are over 60%, with much of the rest robots. The hits from Google show people come and get what they need. Direct, focused, and browse functions serve different sets of users. Has identified 6 major browse topics; none dominates. Server log analysis is tricky.
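
A minimal Python sketch of the kind of referral breakdown Frank describes, assuming an Apache-style "combined" log; the log path, the bot heuristic, and the Google test are illustrative assumptions, not his actual setup:

import re
from collections import Counter

# Assumed Apache "combined" layout: host, time, request, status, size,
# referrer, user agent. Adjust the pattern if the real log differs.
LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)
BOT_HINTS = ("bot", "crawler", "spider", "slurp")  # crude robot heuristic

counts = Counter()
with open("access.log") as f:  # hypothetical log file name
    for line in f:
        m = LOG_LINE.match(line)
        if not m:
            continue
        if any(hint in m.group("agent").lower() for hint in BOT_HINTS):
            counts["robots"] += 1
        elif "google." in m.group("referrer").lower():
            counts["google referrals"] += 1
        else:
            counts["other"] += 1

total = sum(counts.values()) or 1
for label, n in counts.most_common():
    print(f"{label}: {n} ({100 * n / total:.0f}%)")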

Having your site listed helps market the site. However, just because your site is registered on another site does not guarantee you are getting a lot of hits from them.

Frank: Need to go out and market sites to persons individually. Grass roots marketing approach is effective.

Other possibilities: focus groups, tracking users, numbers of user groups. Could do links to sites.

Boots Cassel, CITIDEL project: an XML-based log standard, in addition to standard logs. The benefit of an XML-based log standard is that it lets you know about user activity, not just raw hits. Just finishing the 2nd year; the goal is to provide anything and everything in computing education at any level. They have lots of material; they want people to know about it and to use it. CITIDEL can help people who want to use logs effectively in evaluation practices.
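
The notes don't give CITIDEL's actual schema, so the following is a hypothetical illustration of the point: an XML log record can carry the user's action (search, view), not just a raw hit line. The element names, session IDs, and resource identifier are invented for the example:

import xml.etree.ElementTree as ET

# Invented structure; CITIDEL's real log standard may differ.
SAMPLE = """
<log>
  <event session="s42" time="2003-10-13T09:15:02">
    <action>search</action>
    <query>operating systems</query>
  </event>
  <event session="s42" time="2003-10-13T09:15:40">
    <action>view</action>
    <resource>oai:citidel:1234</resource>
  </event>
</log>
"""

for event in ET.fromstring(SAMPLE):
    detail = event.findtext("query") or event.findtext("resource")
    print(event.get("session"), event.get("time"),
          event.findtext("action"), detail)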

Carol Terrizzi: raised the role of nsdl.org as a website; the community-level tab gives info on referrals. A vast amount of data has been amassed at NSDL. What do we do with it? What do people want to know? We need to work with the community to expose them to nsdl.org and its portals.

People know about Google, not necessarily about ENC or DLESE.
Lab view, leveraging technology: exposing the NSDL data repository takes NSDL out to build on other technology.

Paul Craig raises the question: how do we know what a good website is? Paul asks students to find 3 refereed publications; they often bring back Googled info. He would prefer that students could find quality, for example, with an NSDL brand. The NSDL brand on these sites could be a bookmark, which could feed numbers back to NSDL.

Thad Lurie, APT Communities for Digital Resources in Education: WebTrends.com is commercial software that creates a comprehensive log; it can track across platforms and does graphing. WebTrends is customizable and flexible to help with logs.

You have to have design standards before you look at numbers. Frank feels numbers are still going to be important to granting organizations; we need to provide numbers as well as some of the squishier evaluative results. Ann: need to blend methods. Is traffic going up or down? Numbers may be what we need to prove impact, augmented with anecdotal information.
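
One quick way to answer the "up or down?" question is to fit a least-squares trend line to monthly session counts; the sketch below uses made-up numbers, not anyone's real traffic:

monthly_sessions = [1210, 1345, 1290, 1480, 1520, 1610]  # hypothetical data

n = len(monthly_sessions)
mean_x = (n - 1) / 2  # mean of month indices 0..n-1
mean_y = sum(monthly_sessions) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in enumerate(monthly_sessions))
    / sum((x - mean_x) ** 2 for x in range(n))
)
print(f"trend: {'up' if slope > 0 else 'down'} ({slope:+.0f} sessions/month)")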

Carol: take some information and pull it into a picture (GUI); this helps understanding and presentation of data. Treemap (out of UMaryland).
Star map: a one-level look at the collection, how things look (see the sketch below).
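
As a rough illustration of the treemap idea (UMaryland's Treemap is its own application; this is only a minimal single-level slice-and-dice sketch, assuming matplotlib is installed and using invented topic counts):

import matplotlib.pyplot as plt
import matplotlib.patches as patches

def slice_and_dice(sizes, x, y, w, h):
    """Split rectangle (x, y, w, h) into vertical strips proportional to sizes."""
    total = float(sum(sizes))
    rects, offset = [], 0.0
    for s in sizes:
        frac = s / total
        rects.append((x + offset * w, y, frac * w, h))
        offset += frac
    return rects

topics = {"energy": 410, "water": 300, "waste": 180, "air": 150, "soil": 90}  # invented

fig, ax = plt.subplots(figsize=(6, 4))
for (rx, ry, rw, rh), name in zip(slice_and_dice(list(topics.values()), 0, 0, 1, 1), topics):
    ax.add_patch(patches.Rectangle((rx, ry), rw, rh, fill=False))
    ax.text(rx + rw / 2, ry + rh / 2, name, ha="center", va="center", rotation=90)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.axis("off")
plt.show()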

Scripted user testing is another analysis; it helps people consider how their experience was. Blandford's talk-aloud technique: a trained interviewer has the user talk aloud about their work.

Paul Craig, RIT: real things vs. things we think should work.
Out-of-the-box strategies: we don't know if there is another way to have impact.

Carol: How do you merge the three environments of social science, technology, and pure science? Assessment means different things to different cultures; how do you merge those cultures? Example: a cultural anthropologist, responsible for everything outside of Intel, studies technology all over the world. All of the computers were burning up in India because people didn't like the way the computers looked and covered them with shower curtains. Designers needed to think about how the computers looked.

Frank: the NSDL community overestimates its user base. It is hard to make a non-aggressive person aggressive, to take risks, etc.

Annotated evaluation references: www.uni.edu/neuhaus/eerlwork.html


Comments

Please enter any comments in the following format.
  • (commenters' initials) - month/day [comment date]
  • comment





NSDL thanks DLESE for hosting the swikis for the NSDL Annual Meeting 2003.
