
Espresso Book Machine, Reports from the Field

Terri Geitgey is the Manager of Library Print Services at MPublishing. On April 4 & 5, she attended the Coalition for Networked Information spring meeting in San Diego, California.

Espresso Book Machine. Photo by Flickr user sukisuki, under a BY-NC-SA Creative Commons license.

I attended the Coalition for Networked Information spring meeting to do a co-presentation with staff from the University of Utah Library on our respective experiences with the Espresso Book Machine (EBM). Our session, The Espresso Book Machine in the Library: Case Studies from Two University Libraries, described the number and types of uses we’ve had, collaborations that have developed, what we’ve learned, and future directions and goals.

Based on the questions and discussion at our session, and on subsequent individual conversations, my sense is that interest in the machines is high and that a growing number of academic libraries are considering purchasing an EBM. Consequently, they are understandably eager to know what to expect should they do so.

While most libraries would probably be motivated by similar goals for installing an EBM—increasing access to content, encouraging the creation of new content, forging new partnerships and collaborations, and supporting the university’s teaching and research needs—those goals are expressed in different ways depending on the unique services, content, and philosophies at each library. I found it interesting to learn what others are doing with the machines, and appreciated the chance to find out more about the Marriott Library’s EBM activities. The ways that offerings and uses differ among locations help inspire new ideas for our services here at the University of Michigan.

As for the rest of the conference:

Dr. Christine Borgman of UCLA opened with a thought-provoking talk entitled Information, Infrastructure, and the Internet: Reflections on Three Decades in Internet Time, describing what she sees as four trends or shifts in cyberinfrastructure from the 1970s to today: from closed to open networks; from static to dynamic (“published” once meant “permanent”); from readers to authors (now a kind of DIY world of universal scholarship); and from publications to data (rather than a product delivered at the end of a project, publications are becoming something more like a series of snapshots in time). Looking ahead, she identified several challenges: “taking back” information retrieval from commercial development; engaging the full information lifecycle (research design, data, metadata, writing, curation, use and reuse, reproducibility); distributing the architecture (data scales fastest; select and filter; user surrogates for discovery; move computation to the data; share access and assets); and matching policies to incentives with regard to data curation, reuse, and management.

The closing plenary, given by Professor Todd Presner of UCLA, was entitled HyperCities: Using Social Media and GIS to Archive and Map Time Layers in Los Angeles, Berlin, Tehran, Rome, and Cairo. This innovative project describes itself as “bringing together social media, the analytic tools of GIS, and traditional methods of humanistic inquiry” to form “a digital research and educational platform for exploring, learning about, and interacting with the layered histories of city and global spaces.” It is difficult to describe (at least for me) but fascinating to explore, so I encourage you to check out the presentation and the HyperCities web site.

In between the two plenaries, sessions were offered on a wide range of topics, covering data management, digital humanities, open access, software and networking projects, and new ways of publishing and reusing data, to name just a few. Descriptions and links to all of the sessions can be found on the CNI website.