Many thanks to those who participated in the Wednesday, 3/9 teleconference! Below are the minutes, which will also be posted on the web; we welcome your participation in the matters discussed below.

With best regards,

Anita Coleman, Laura Bartolo & Casey Jones

EIESC Telecon: March 9, 2005, 1 pm PT / 2 pm MT / 3 pm CT / 4 pm ET

Participants: Laura Bartolo, Bethany Carlson, Anita Coleman, Jim Dorward, Sarah Giersch, Ellen Iverson, Casey Jones, Reagan Moore, Judy Ridgway, Peter Shin

1. Call for EIESC Volunteers in Planning NSDL Annual Report:

If there are EIESC members who would like to participate in developing the Annual Report, please email Carol Minton Morris (clt6 at cornell.edu) or call 607-255-2702. The EIESC works closely with CI in preparing the NSDL's Annual Report. Teleconference planning meetings for this year's Annual Report will begin shortly and are held periodically during the spring and summer months. The AR is published in the fall, released at the NSDL Annual Meeting, and distributed at national meetings and conferences throughout the year. The 2005 Annual Meeting theme is "Examining NSDL's Impact" and the Annual Report will collect narrative and data that tell the story of how NSDL is creating context and meaning around the use of digital resources for learning in classrooms.

Anita, Sarah, and Laura discussed briefly how the AR worked last year: bi-weekly telecons in the spring, data gathering in the spring, and report writing in the summer with proofing in early fall.

2. Annual Meeting Planning Update:

The Annual Meeting Planning Committee meets weekly on a phone call. The EIESC representative this year is Chris Walker (cwalker at eicc.edu).

Please contact Chris if you have feedback or ideas about the meeting and EIESC's role in its sessions. The Planning Committee holds its face-to-face meeting on March 14; if you have any ideas about the meeting format, the session types, or possible foci for the call for participation (that is, hot community topics), please let Chris know, since many of those decisions will be made then.

Some initial thoughts and feedback for the AMPC to consider include:

  • The SIG format worked well, but larger rooms are needed.
  • Having no other meetings scheduled during the EIESC meeting worked well.
  • Standing committee meetings seem to revisit old issues too much; there is a preference for the EIESC meeting to be more of a business meeting where participants come prepared to get things done (possibly with an informal lunch meeting at the Annual Meeting, or a telecon the week before, to help people prepare for the business meeting).
  • An evaluation experts panel would be good, but funding is needed (possibly something that could be included in the NSDL Summer Institute, if that occurs and funding can be obtained).

3. Web Analytics Update:

Casey reported that the web analytics pilot with 7 sites is going well.

All sites have implemented the tracking code, and the project is ready to take the next step of deciding which variables to include in the tracking. The pilot will also be looking for volunteers to help with analyses and to suggest next steps and additional material to aid in analyses (usage and user models, for example). Please email Casey Jones (casey at ucar.edu) if you would like to volunteer.
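
To make the tracking discussion concrete, here is a minimal sketch in Python of the kind of aggregate the pilot might compute; the Apache-style combined log format and the file name access.log are assumptions for illustration, not details of the pilot's actual implementation.

    import re
    from collections import Counter

    # Assumed Apache "combined" log line, e.g.:
    # 203.0.113.5 - - [09/Mar/2005:14:00:00 -0500] "GET /resources/1 HTTP/1.1" 200 5120 "-" "Mozilla/4.0"
    LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

    def pageviews(log_path):
        """Count successful pageviews per URL path in one log file."""
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                match = LOG_LINE.search(line)
                if match and match.group("status") == "200":
                    counts[match.group("path")] += 1
        return counts

    if __name__ == "__main__":
        # Print the ten most-viewed paths (hypothetical log file name).
        for path, hits in pageviews("access.log").most_common(10):
            print(f"{hits:6d}  {path}")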

4. Impact Taskforce Update & an invitation to participate:

The Impact taskforce is working on two fronts:
1) getting a TREC-like evaluation of the NSDL going, and
2) building the evaluation materials clearinghouse using DLIST.

Reagan Moore and Peter Shin of SDSC discussed their interest in a TREC-like evaluation project with the EIESC. Peter and Anita have drafted a brief plan for how this could be pursued (attached as Appendix 1 below). Please review and provide feedback (pshin at sdsc.edu).

SDSC is developing a testbed of text mining services for NSDL. The SDSC approach to these services is probabilistic text categorization by scientific discipline, topic, and grade level. Currently, SDSC is using two datasets to develop a classification model: a set of 'golden' files and a metadata set (8,000 records) from ENC. Sarah Giersch and Ellen Iverson (eiverson@carleton.edu) made suggestions on how to obtain more datasets: Sarah offered a contact, Mike Luby of Columbia, for getting in touch with a textbook publisher, and Ellen offered vocabulary datasets from her project. Using textbooks to generate these models is a good approach; the goal of the study is to apply the learned categorization model to the whole NSDL collection. While collecting the dataset, SDSC has developed high-performance text processing pipelines and has various text mining engines installed on its supercomputers. In addition, SDSC is installing the Cheshire II system to evaluate its performance on the NSDL dataset. SDSC will publish all datasets (including processed data), various tools, and results to facilitate collaboration and to evaluate the efforts.
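
The minutes do not specify SDSC's actual algorithms, but as an illustrative sketch of probabilistic text categorization by grade level, the following uses a multinomial naive Bayes classifier over term counts; the scikit-learn pipeline and the tiny training set are assumptions for illustration only.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled snippets standing in for textbook excerpts.
    texts = [
        "plants need sunlight and water to grow",
        "photosynthesis converts light energy into chemical energy",
        "quantum states are described by wave functions",
    ]
    labels = ["Elementary (1-5)", "High (9-12)", "Undergraduate"]

    # Term counts feed a multinomial naive Bayes model: a simple
    # probabilistic categorizer of the kind described above.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    # Categorize an unseen document by its most probable grade band.
    print(model.predict(["light energy drives photosynthesis in plants"]))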

Last year, at the end of the content analysis of NSDL websites done for the AR, the need for an NSDL repository of project publications and presentations was identified. EIESC members have also long expressed the need to locate digital library evaluation materials more easily (see the discussion at the EIESC-JCDL 2004 meeting). Last summer we piloted an evaluation materials clearinghouse (details in the EIESC presentation slides at NSDL AM 2004). We are using DLIST, an open access archive that enables self-archiving, to serve as our clearinghouse. Thanks to Kaye Howe, a recent notable addition is a comprehensive summative evaluation by WGBH; EIESC members may want to browse through this report. A question was raised about the status of Fedora, the underlying concern being duplication of effort with DLIST. Since Fedora is still in testing and DLIST is a fully functional repository, there is no duplication; DLIST is OAI compliant and is harvested by others (including NSDL).

EIESC members who have evaluation materials (instruments, papers, presentations) are encouraged to self-register with DLIST and deposit them themselves, or to send their materials for deposit to DLIST at u.arizona.edu.

5. Policy Committee Update and Online Forums invitation:

Laura will attend the next PC meeting. Would the EIESC be interested in sponsoring an "event" or online forum in collaboration with the Sustainability SC through a new NSDL service/website called Ask the Community (somewhat like NSF's ESTEME week and Ask a Scientist)? These would be week-long events/forums to get community questions out in the open, online, and answered, or at least noted for further discussion in the committees. The event could be jumpstarted with a joint telecon. Possible outcomes would be increased online presence and interaction among the committee memberships. The event could be held in late spring and could also tie in to mini-workshops where evaluation experts are brought in to work hands-on with participants in tutorial or crit-lab settings. If you are interested, please send feedback to the list.

6. Reframing the EIESC questions:

At the 2004 NSDL Annual Meeting EIESC meeting, discussion was held concerning the four questions originally posed by EIESC:

  1. How are people using NSDL?
  2. How are NSDL collections growing?
  3. How are distributed library building and governance processes working?
  4. What processes are needed to answer these 3 questions?

Two new questions grew out of these discussions at the Annual Meeting:

  • Should we reframe our questions in light of all the changes that have happened within NSDL?
  • What are performance measures for NSDL that can be linked to educational impact, such as search, retrieval and relevance based on user tasks across disciplines and grade levels?

We would like to return to these two questions and invite EIESC members to post ideas and recommendations to the list. These postings will be a great way to begin discussions in preparation for the EIESC meeting at JCDL!

7. Attendance at JCDL:

Almost all the participants on the telecon plan to attend JCDL, so a face-to-face meeting of the EIESC at JCDL will be arranged as usual.


Appendix 1:

Tasks for a Preliminary TREC-like Evaluation (Focused Pilot)

Objective:

1. Identify the dataset by looking for the following characteristics:
a. K-16 textbook (preferably single publisher)
b. Single discipline, multiple disciplines, or interdisciplinary (Geography may be a good test discipline)
c. Electronic availability

Who: TBD
Duration/When: 2 months
Deliverable: E-corpus of texts (K-16; limited by discipline/subject and grade)

2. Preparatory tasks

Pre-process the raw electronic texts into a format that the algorithms can use (see the sketch below)

Who: Peter Shin (SDSC)
Duration/When: 1 month
Deliverable: Transformed data (term counts and indices)
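
A minimal sketch of this preprocessing step, assuming plain-text input; the crude tokenizer and the sample sentence are illustrative assumptions.

    import re
    from collections import Counter

    def preprocess(raw_text):
        """Turn raw text into term counts plus a vocabulary index."""
        tokens = re.findall(r"[a-z]+", raw_text.lower())  # crude tokenizer
        counts = Counter(tokens)                          # term -> count
        # Vocabulary index: term -> integer id, for building count vectors.
        index = {term: i for i, term in enumerate(sorted(counts))}
        return counts, index

    counts, index = preprocess("Plants need water. Plants need sunlight.")
    print(counts["plants"], index["plants"])  # count of "plants" and its id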

3. Train and test the model using the dataset (make decisions about challenges such as the level of granularity, and then classify unseen documents given the full new text; a scoring sketch follows below)

Who: Peter
Duration/When:
Deliverable: Model and expected performance measurement of the model
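
As a hedged sketch of the testing half of this task, the following scores a trained categorizer on held-out documents; the scikit-learn names and the placeholder training and test snippets are assumptions, and per-category precision/recall is just one plausible performance measurement.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics import classification_report
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Placeholder stand-ins for the textbook corpus from tasks 1 and 2.
    train_texts = ["plants need sunlight", "animals eat plants",
                   "cells divide by mitosis", "enzymes catalyze reactions"]
    train_labels = ["Elementary (1-5)", "Elementary (1-5)",
                    "High (9-12)", "High (9-12)"]
    test_texts = ["plants need water", "mitosis makes two cells"]
    test_labels = ["Elementary (1-5)", "High (9-12)"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)

    # Report per-category precision and recall on the held-out documents.
    print(classification_report(test_labels, model.predict(test_texts)))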

4. Apply the model to the top-level NSDL documents in collections with a Geosciences focus (henceforth called NSDL test documents)

Who: Peter
Deliverable: Audience label for NSDL test documents (Elementary (1-5), Middle (6-8), High (9-12), Undergraduate)

5. Evaluate how well the model did (ask two survey questions for each resource: is the grade-level classification appropriate, and if not, which category is appropriate; see the sketch below)

Who: End user (teachers & learners in the different audience levels)

How: EIESC & Peter will design an interface for collecting responses; EIESC will write this part up (two groups, one per audience level and one mixed/all; decide how many documents go to how many end users)

When/Duration: April/May/June
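
A minimal sketch of tallying such survey responses; the record layout (resource id, model label, whether the rater judged it appropriate, and the rater's preferred category if not) is an assumption about how the interface might store answers.

    from collections import Counter

    # (resource_id, model_label, appropriate?, suggested_label_if_not)
    responses = [
        ("res1", "Middle (6-8)", True,  None),
        ("res1", "Middle (6-8)", False, "High (9-12)"),
        ("res2", "Elementary (1-5)", True, None),
    ]

    # Question 1: how often did raters accept the model's label?
    agreed = sum(1 for _, _, ok, _ in responses if ok)
    print(f"agreement rate: {agreed / len(responses):.0%}")

    # Question 2: where raters disagreed, which categories did they prefer?
    print(Counter(fix for _, _, ok, fix in responses if not ok))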

The long-range goals (to discuss and refine at the end of the pilot) are:

  • To develop NSDL evaluation tracks (for example, using themes like evaluation by audience and evaluation by discipline)
  • To make datasets available (a textbook dataset; labeled metadata for top-level NSDL pages, with metadata for selected DC elements including audience and possibly subject (limited to discipline); etc.)
  • To define and develop user tasks matching at least the four pre-defined levels (Elementary (1-5), Middle (6-8), High (9-12), Undergraduate)
  • To develop measures of performance/assessment (NSDL impact, primarily educational impact)

Background information:

We want to see whether we can establish a mini NSDL evaluation much like TREC, so TREC is our analogy:

  • TREC has tracks (themes, such as evaluation by type of material)
  • TREC makes datasets available (e.g., a categorized news dataset)
  • TREC has well-defined user tasks with measures of performance
  • TREC has measures (for example, measures based on relevance; see the sketch after this list)
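
For instance, a TREC-style relevance measure such as precision at k could be computed as in this minimal sketch; the ranked result list and the relevance judgments are made-up placeholders.

    def precision_at_k(ranked_ids, relevant_ids, k):
        """Fraction of the top-k retrieved documents judged relevant."""
        return sum(1 for doc in ranked_ids[:k] if doc in relevant_ids) / k

    # Hypothetical ranked retrieval results and relevance judgments.
    ranked = ["d3", "d1", "d7", "d2", "d9"]
    relevant = {"d1", "d2", "d4"}
    print(precision_at_k(ranked, relevant, 3))  # 1 of top 3 relevant -> 0.33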

More information is available in the NSDL AM 2004 EIESC meeting minutes (click on the PDF or Flash file to see presentation and discussion details).