Tuesday, March 1st, 2011

 


Libraries and Streaming Media: How Libraries Can Provide Access to Streaming Media

Cyrus Ford, University of Nevada, Las Vegas
Tuesday March 1, 9:40-10:25, Room 204
reported by Jane Gruning

Cyrus Ford presented on his institution's experiences with providing streaming media to its users. Historically, problems with Internet speed, buffering, and bandwidth have made it difficult to offer streaming media. Now that this technology is much easier to provide, how can libraries use it?

OLAC offers training materials and guidelines (http://olacinc.org/drupal/?q=node/11).

The basic mechanics of streaming media are that temporary files are downloaded to the viewer's computer, and more files are downloaded as the media is viewed. The files are deleted when viewing is complete, so that less storage is used on the viewer's computer. This approach also lets playback begin much sooner than a method in which all of the content must be downloaded before viewing can start.
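As a rough illustration of that mechanism, the toy Python sketch below simulates a small playback buffer: chunks are fetched slightly ahead of playback and discarded once viewed, so playback can start almost immediately and only a few chunks are ever stored at a time. The chunk size and buffer depth are invented for illustration and do not correspond to any particular streaming product.

from collections import deque

CHUNK_SECONDS = 10   # hypothetical length of each temporary file
BUFFER_TARGET = 3    # keep roughly 30 seconds buffered ahead of playback

def stream(total_seconds):
    buffer = deque()
    played = 0
    while played < total_seconds:
        # Fetch ahead until the buffer target is met; playback can begin as
        # soon as the first chunk arrives, unlike a full download.
        while len(buffer) < BUFFER_TARGET and played + len(buffer) * CHUNK_SECONDS < total_seconds:
            buffer.append("chunk@%ds" % (played + len(buffer) * CHUNK_SECONDS))
        chunk = buffer.popleft()   # "play" the oldest chunk
        played += CHUNK_SECONDS
        del chunk                  # then discard it, keeping storage use low

stream(120)   # streams a two-minute video as twelve 10-second chunks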

There are two kinds of streaming: live and on-demand. Live streaming is exactly what it sounds like and is available one time, while the event is occurring. On-demand streaming is available multiple times, over an extended period. Ideally, the user experiences no pauses in media delivery. There are a variety of streaming protocols (including UDP and RTSP), and some of them can limit the number of users who may view an item at any one time. Many streaming media platforms do not require the user to download a media player.

Some of the benefits of streaming are that it can often be cheaper than buying the physical item, particularly when buying in bulk (although access may be available for only a limited time), and that access is anytime, anywhere. This last point gives distance education students the same access as on-campus students. In addition, if one library branch has rights to a digital streaming object, all branches may use it, unlike a physical object. One issue that must be decided is whether to fully catalog items that may only be available for a limited time. Some providers of streaming media will supply a basic catalog record, but a detailed one is generally not available.

 


Metrics-Based Journal Value Analysis

Chan Li, California Digital Library
Tuesday, 9:40-10:25, Salon DE
reported by Rebecca Rosenberg

Representing the California Digital Library (CDL), Chan Li detailed the research and development process behind a new metrics-based journal value analysis used in University of California libraries. The new system has been instituted to improve the quality and relevance of journals for their particular user base.

The research team utilized a variety of metrics such as utility, cost effectiveness, historical value, quality, size, and institutional research dissemination. All research was done on real journals, and data collection was based on the stated metrics. The research team compared the Journal Impact Factor (JIF) with the Source Normalized Impact per Paper (SNIP), the ratio of a journal's citation impact per paper to the citation potential in its subject field, and then combined the results into a weighted total called the CDL Overall Weighted Value Metric.

According to Li's definition, the Overall Weighted Value Metric measures a title's relative local value against the other locally licensed titles within the same broad subject category, from utility, cost effectiveness, and quality perspectives. The advantages of this value system include straightforward value categories of lowest, low, medium, and high, and the ability to obtain institution-specific results. Some noted advantages of using SNIP over the well-known Impact Factor were its superiority in accounting for subject differences, broader title coverage, and twice-yearly updates.

Ms. Li demonstrated the use of the algorithm, explaining the benchmark values determined for each metric by subject, the use of median values as benchmarks in order to reduce the skewed impact of outliers, and the comparison of a group of titles against those benchmarks, from which each title is assigned a score.
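As a rough sketch of how a benchmark-based, weighted value might be computed, the Python below compares a title's metrics to subject-level median benchmarks and combines the ratios with weights. The metric names, weights, benchmark numbers, and category thresholds are hypothetical illustrations, not CDL's actual algorithm or data.

# Hypothetical per-subject median benchmarks and metric weights.
benchmarks = {"chemistry": {"downloads": 1200, "cost_per_use": 4.50, "snip": 1.0}}
weights = {"downloads": 0.4, "cost_per_use": 0.3, "snip": 0.3}

def overall_weighted_value(title, subject):
    """Score a title against its subject benchmarks; higher is better."""
    bench = benchmarks[subject]
    score = weights["downloads"] * (title["downloads"] / bench["downloads"])
    # For cost per use, lower is better, so the ratio is inverted.
    score += weights["cost_per_use"] * (bench["cost_per_use"] / title["cost_per_use"])
    score += weights["snip"] * (title["snip"] / bench["snip"])
    return score

journal = {"downloads": 900, "cost_per_use": 6.00, "snip": 1.3}
value = overall_weighted_value(journal, "chemistry")
category = "high" if value >= 1.2 else "medium" if value >= 0.8 else "low"
print(round(value, 2), category)   # prints: 0.92 medium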

The results of this research are applied during the journal review process, where CDL campuses vote on whether or not to keep low-scoring titles. Webinars targeted toward the California Digital Library community helped to raise awareness of metrics-based analysis and attracted people to the voting process. Finally, she discussed future research questions and possible improvements to the current metrics and algorithm.

 


Does Every Cloud Really Have a Silver Lining? Moving from Local to Remote Host

Ronda Rowe and Jim Irwin, University of Texas Libraries
Tuesday, March 1st, Room 203, 9:40-10:25
reported by Andrea Ogier

In their session "Does every cloud have a silver lining?" Ronda Rowe and Jim Irwin discussed moving library applications into the cloud and how that move could affect libraries. The University of Texas at Austin (UT) is a large ARL library with very sophisticated datacenters, and its university-level move to Google email shows a willingness to work in a cloud environment.

But what is cloud computing? What does it mean to work in the cloud? Briefly, cloud computing has three different aspects: Software as a Service (SaaS), which involves running applications like email in a remote environment; Platform as a Service (PaaS), a more intentional move to the cloud where both hardware and software are run somewhere else; and Infrastructure as a Service (IaaS), where entire pieces of the organization are moved to the cloud. SaaS specifically involves an application, like Gmail, connecting through a front end or user interface to the cloud. In these instances, the user doesn't have to worry about maintaining software or hardware because everything is run remotely.

There are a number of library services that have moved or are in the process of moving to the cloud: subscription resources, OpenURL resolvers, federated search and web-scale discovery services, ERMs, and even an ILS. The presenters then described their library's decision to move its locally hosted SFX installation to Ex Libris hosting, including the issues and difficulties associated with the move. They found that the Information Security Office (ISO), whose responsibilities include the maintenance of a secure computing environment and the protection of IT resources on campus, had to be consulted early in the process. The library let the ISO shoulder the responsibility of identifying and fixing any system vulnerabilities. Additionally, they had a few problems with the vendor domain: UT Libraries wanted to use a UT domain rather than a vendor domain in order to retain ownership of the work already invested in customizing the pages and providing the data. Problems also existed with authentication, as the IT department was uncomfortable giving Ex Libris, an outside organization, root access to the LDAP servers, and with keeping URL and IP changes up to date.

 


Transcendental Metadata: A Collaborative Schema for eResource Description

Craig Harkema, Charlene Sorensen, and Karin Tharani, University of Saskatchewan Library
March 1, 2011, 10:45-11:30 AM, Room 204
reported by Kathryn Pierce

Craig Harkema, Charlene Sorensen, and Karin Tharani from the University of Saskatchewan Library presented a schema and tool to support librarian collaboration, facilitate accountability, and improve decision-making and assessment. They described the need for creating the schema, the research they conducted to inform development, and the tool creation and implementation. Harkema, Sorensen, and Tharani have taken a bottom-up approach to the project by responding to a scarcity of information for librarians. The project is situated within the university context, but the information and methodology could be applied anywhere. Harkema, Sorensen, and Tharani define e-resources as anything digital that can be accessed through any digital device, but their focus here was on e-books and journals. The primary problem they are addressing is that it is difficult for librarians who are not directly involved with acquisition and collection development of e-resources to feel informed. Their research and development principles included the appreciation of perspectives, development of discourse, and a desire to keep the solution efficient and flexible.

Harkema, Sorensen, and Tharani held consultations with library colleagues to help guide their tool development. They identified themes within the consultations and found that librarians want to feel prepared, need to be able to access more information themselves, and need to work with new types of resources. The consultations allowed librarians to voice their needs. Harkema, Sorensen, and Tharani’s methodology is focused on gathering and organizing information on librarian needs, then prioritizing and acting on those needs to develop a solution. They drew on the Web 2.0 mindset as a guide to work in a way that was fast, iterative, responsive and used the involvement and expertise of librarians as a foundation.

What they developed is a collaborative, descriptive framework that allows and encourages librarians to contribute metadata. The small, flexible, agile tool is a low-tech solution, but it is responsive to the needs of librarians. In line with Web 2.0, the tool is designed to take advantage of collective intelligence, be user-centered and interoperable, and allow for continuous feedback. There are standard fields, and the tool allows additional tags that are social, flexible, and task specific. Attributes discussed include source, bib #, title, subject, license, digitized, date acquired, liaison cluster, form/type, locally hosted, mobile compatible, and perpetual access. Fields can be used in conjunction with each other to provide a richer data set. One of the primary advantages is that the tool allows for the addition of specialized metadata. By developing a tool for librarians, based on their needs, the project team created an efficient way to add metadata that is flexible, builds on the collective intelligence of librarians, and addresses the information needs of librarians who do not work directly with e-resources. The project is still in its early stages, but feedback from librarians has been positive and the team is considering further development.
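To make the schema concrete, here is a hypothetical example of what a single record and a simple query against it might look like in Python. The field names follow the attributes mentioned in the talk, but the values, tags, and query are invented for illustration and are not the Saskatchewan tool itself.

record = {
    "source": "Vendor X",                    # invented vendor name
    "bib_number": "b12345678",
    "title": "Journal of Example Studies",
    "subject": "Education",
    "license": "campus-wide",
    "digitized": False,
    "date_acquired": "2010-09-01",
    "liaison_cluster": "Social Sciences",
    "form_type": "e-journal",
    "locally_hosted": False,
    "mobile_compatible": True,
    "perpetual_access": True,
    # Free-form, task-specific tags supplied by individual librarians.
    "tags": ["trial-2011", "accreditation-review"],
}

# Fields can be combined to answer liaison questions, e.g. "which of my
# cluster's e-journals lack perpetual access?"
def at_risk(records, cluster):
    return [r["title"] for r in records
            if r["liaison_cluster"] == cluster and not r["perpetual_access"]]

print(at_risk([record], "Social Sciences"))   # [] -- this record has perpetual access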

 


The Necessity, Opportunity, and Challenge of Managing Free Electronic Resources

George Stachokas of Indiana State University, and Stephanie Braunstein of Louisiana State University
Tuesday, 10:45-11:30, Salon DE
reported by Rebecca Rosenberg

George Stachokas of Indiana State University and Stephanie Braunstein of Louisiana State University presented their ideas and perspectives on managing, organizing, and providing access to the wealth of free information available online. Stachokas addressed problems sparked by the abundance of free e-information, such as how to organize, classify, and assimilate useful information for patrons. He developed a classification schema designed to help librarians track free electronic resources, determine workflows, and troubleshoot problems based on a resource's ratings.

The five criteria for classification are scholarship, persistence, entity, compatibility and convenience. The rankings in scholarship range from peer review to popular. The rankings in persistence range from highly persistent (> 10 years) to temporary. The rankings in entity range from Formal to Special. The rankings in compatibility range from highly compatible to not compatible, and the rankings in convenience range from very convenient for users to requires staff mediation. Compatibility and convenience are local and should be judged according to your library’s criteria.

For example, Open J-Gate (http://www.openjgate.com/Search/QuickSearch.aspx) is one of the largest free sources of peer-reviewed journals. Because of this source's high ranking, it will be fully tracked in the A-Z list and ERM, cataloged with MARC record(s), made available on the library's primary Web pages and through the link resolver, and given full tech support from library staff, while another source with a lower rating might receive different treatment in some or all areas.
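A rough sketch of how such ratings could drive treatment decisions follows. The five criteria come from the talk, but the numeric scales, the scoring, and the thresholds are hypothetical and are not the presenters' actual rules.

# Higher numbers mean a stronger rating on each criterion.
ratings = {
    "scholarship": 3,    # 3 = peer reviewed ... 0 = popular
    "persistence": 3,    # 3 = highly persistent (>10 years) ... 0 = temporary
    "entity": 2,         # 3 = formal ... 0 = special
    "compatibility": 3,  # local judgment: 3 = highly compatible ... 0 = not compatible
    "convenience": 2,    # local judgment: 3 = very convenient ... 0 = requires staff mediation
}

def treatment(ratings):
    """Map a resource's total rating to a hypothetical level of treatment."""
    total = sum(ratings.values())
    if total >= 12:
        return ("A-Z list + ERM", "full MARC cataloging",
                "link resolver + primary web pages", "full tech support")
    if total >= 7:
        return ("A-Z list only", "brief record", "link resolver", "limited support")
    return ("tracked in a spreadsheet", "no cataloging", "no linking", "no support")

print(treatment(ratings))   # a highly rated source like Open J-Gate gets the full treatment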

LSU government documents librarian Stephanie Braunstein labeled free information available through the Federal Depository Library Program (FDLP) a mixed blessing, as substantial tangible collections still require ongoing maintenance.

Since 1994, the portal service from the U.S. Government Printing Office (USGPO) has provided free e-access to a wealth of important official content produced by the Federal Government. USGPO is now introducing a new era of online government information with the Federal Digital System (FDsys). With more sophisticated search capabilities, we can now browse for documents and publications by collection, Congressional committee, or date, as well as access metadata and download documents in a variety of formats.

FDsys was created to address the problem of fugitive documents, those not submitted to the GPO by the obligated departments, agencies, and bureaus. Web crawling is one way of locating fugitive documents, and the GPO recently began encouraging federal depository libraries to crawl FDsys as well.

One highlighted source for collection development was the New Electronic Titles monthly archive. Its reports are generated from data retrieved from the Catalog of U.S. Government Publications (CGP) and contain only new, non-updated records from the CGP.

Another tool, MARCIVE Enhanced GPO Database Service, provides depository libraries a convenient and economical way to manage cataloging of government publications. MARCIVE’s GPO file contains transferable records for all GPO materials available since 1976. MARCIVE collaborated with librarians at Rice University, Louisiana State University and Texas A&M University to create this enhanced database.

The Documents Without Shelves service is a way to provide access to over 55,000 titles published by the GPO. A library does not have to be a depository to give its patrons this access, which includes full MARC records with URLs for thousands of e-documents, but the service does require a yearly subscription.

Challenges accompanying the availability of increasing amounts of information include providing continuity when switching from tangible to online versions of titles, comprehensive cataloging of all online titles, and ensuring the permanence of online documents and links. Additionally important is the willingness and ability to help patrons effectively use government e-resources, which are often specialized and can be difficult for novice users.

PURLs, or Persistent Uniform Resource Locators, are one way to help ensure the permanence of links to online documents. A PURL, however, which functions much like a permalink, is still susceptible to server shutdown.

 


HTML5: Will It Be Worth It?

Christine Peterson, Amigos Library Services
Tuesday 10:45–11:30am (Room 203)
reported by Jared L. Howland

The World Wide Web Consortium (W3C) is an international organization that creates Web standards. The next standard, scheduled to become a recommendation in 2012, is HTML5. Though not yet a complete standard, it is stable enough that it is being used by major commercial websites and, according to Christine Peterson, is complete enough that libraries can start using it (http://amigos.org/training/peterson/html5.pdf).

Before discussing HTML5 in depth, Christine gave a brief history of web standards (http://www.alistapart.com/articles/a-brief-history-of-markup/).

  • 1997: HTML 4 became a W3C Recommendation
  • 2000: XHTML 1 became a W3C Recommendation
  • XHTML 2: The W3C began working on XHTML 2 which, unlike XHTML 1, was not going to be backwards compatible with existing web content or with older versions of HTML
  • Mozilla, Opera and Apple were not happy with the direction XHTML 2 was going and formed the Web Hypertext Application Technology Working Group (WHATWG)
  • WHATWG created the Web Apps 1.0 and Web Forms 2.0 specifications
  • W3C abandoned XHTML 2 and took Web Apps 1.0 and Web Forms 2.0 and merged them into a single standard, HTML5

Accessibility vendors were asked to participate in the creation of the HTML5 standard but felt they were too busy to participate at the time. The standard's creators attempted to add accessibility features afterward, but the result is not as good as it would have been had the vendors been on board from the beginning.

The following sections highlight the major differences between HTML5 and previous versions of HTML and XHTML.

DOCTYPE and Other <head> Elements

Old doctypes were virtually impossible to memorize and had to be copied and pasted to get right, but the doctype has fortunately been greatly simplified in HTML5. This means that instead of this (the HTML 4.01 Strict doctype):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

You can now say:

<!DOCTYPE html> or <!doctype html>

Other elements within <head> that have changed are as follows:

  • <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> is now simply <meta charset="utf-8" />.
  • <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> is now <html lang="en">. The namespace is assumed to be HTML5 unless explicitly told otherwise.
  • <link rel="stylesheet" href="main.css" type="text/css" /> is now <link rel="stylesheet" href="main.css" />. The type attribute is no longer needed because there are no longer multiple types of stylesheets.

Sections

With previous versions of (X)HTML there was no semantically correct way to differentiate between parts of a website such as the header, footer, navigation, section, and article. Instead, these elements were created by using ids on divs, like so:

<div id="nav"></div>

HTML5 created new elements for these sections of a website (from http://diveintohtml5.org/semantics.html#new-elements):

  • <section> The section element represents a generic document or application section. A section, in this context, is a thematic grouping of content, typically with a heading. Examples of sections would be chapters, the tabbed pages in a tabbed dialog box, or the numbered sections of a thesis. A Web site’s home page could be split into sections for an introduction, news items, contact information.
  • <nav> The nav element represents a section of a page that links to other pages or to parts within the page: a section with navigation links. Not all groups of links on a page need to be in a nav element — only sections that consist of major navigation blocks are appropriate for the nav element. In particular, it is common for footers to have a short list of links to common pages of a site, such as the terms of service, the home page, and a copyright page. The footer element alone is sufficient for such cases, without a nav element.
  • <article> The article element represents a component of a page that consists of a self-contained composition in a document, page, application, or site and that is intended to be independently distributable or reusable, e.g. in syndication. This could be a forum post, a magazine or newspaper article, a Web log entry, a user-submitted comment, an interactive widget or gadget, or any other independent item of content.
  • <aside> The aside element represents a section of a page that consists of content that is tangentially related to the content around the aside element, and which could be considered separate from that content. Such sections are often represented as sidebars in printed typography. The element can be used for typographical effects like pull quotes or sidebars, for advertising, for groups of nav elements, and for other content that is considered separate from the main content of the page.
  • <hgroup> The hgroup element represents the heading of a section. The element is used to group a set of h1–h6 elements when the heading has multiple levels, such as subheadings, alternative titles, or taglines.
  • <header> The header element represents a group of introductory or navigational aids. A header element is intended to usually contain the section’s heading (an h1–h6 element or an hgroup element), but this is not required. The header element can also be used to wrap a section’s table of contents, a search form, or any relevant logos.
  • <footer> The footer element represents a footer for its nearest ancestor sectioning content or sectioning root element. A footer typically contains information about its section such as who wrote it, links to related documents, copyright data, and the like. Footers don’t necessarily have to appear at the end of a section, though they usually do. When the footer element contains entire sections, they represent appendices, indexes, long colophons, verbose license agreements, and other such content.
  • <time> The time element represents either a time on a 24 hour clock, or a precise date in the proleptic Gregorian calendar, optionally with a time and a time-zone offset.
  • <mark> The mark element represents a run of text in one document marked or highlighted for reference purposes.

The problem with these new elements is that some browsers, IE in particular, do not recognize them. Unrecognized elements are displayed inline instead of as block elements, which will mess up a website's layout. The solution to this problem is to create a CSS rule that tells the browser to display the elements as block elements:

article, aside, details, figcaption, figure, footer, header, hgroup, menu, nav, section { display: block; }

In any version of Internet Explorer below version 9, unknown elements do not get styled, so this new rule will not be applied. Unknown elements also end up as empty nodes with no children in the DOM when content is rendered in IE. The solution to this problem is to use JavaScript to create dummy elements just for IE by placing the following code in the head of the HTML5 document:

<!--[if lt IE 9]> <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script> <![endif]-->

Forms

HTML5 also allows for client-side form validation which does not interfere with any form validation you may already be doing server-side. This form validation also allows for new form fields as shown below:

  1. <input type="search"> for search boxes
  2. <input type="number"> for spinboxes
  3. <input type="range"> for sliders
  4. <input type="color"> for color pickers
  5. <input type="tel"> for telephone numbers
  6. <input type="url"> for web addresses
  7. <input type="email"> for email addresses
  8. <input type="date"> for calendar date pickers
  9. <input type="month"> for months
  10. <input type="week"> for weeks
  11. <input type="time"> for timestamps
  12. <input type="datetime"> for precise, absolute date+time stamps
  13. <input type="datetime-local"> for local dates and times

You are also able to include placeholder text in a form field to show users how the form should be filled out:

<input type="email" placeholder="Type email here" />

One advantage of HTML5 form fields is that an appropriate keyboard can be shown to users on mobile devices: if the field type is "tel", a telephone keypad shows up instead of just a QWERTY keyboard. Older browsers default to input type="text" if they don't understand the new field types, so there is nothing to lose by using these in all your current forms.

Figures

Adding a caption to a photo or figure on a web page used to be complicated. HTML5 simplifies this by including a <figcaption> element:

<figure> <img src="/ward.png" alt="Headshot of Amy Sample Ward"> <figcaption>Amy Sample Ward</figcaption> </figure>

Audio/video

Compared to video, audio is rather straightforward in HTML5:

<audio autoplay controls> <source src="lesson6.mp3" type="audio/mpeg"> <source src="lesson6.ogg" type="audio/ogg"> </audio>

Make sure to include the controls attribute in the opening audio tag so that users can control the audio on the site. Not all codecs are supported by all browsers, so you may need two or three different file formats to make sure the audio will play for all users. However, no plugin is required for this content because the appropriate codec should be built right into the user's browser.

Video is more complicated:

<video controls>
  <source src="video.mp4" type="video/mp4" />
  <source src="video.ogv" type="video/ogg" />
  <!-- FOR IE: must do this old school using the Flash plugin -->
  <object width="160" height="90" type="application/x-shockwave-flash" data="video.swf">
    <param name="movie" value="video.swf">
    <embed src="video.swf" width="160" height="90">
  </object>
</video>

The following video codecs are supported by the following browsers:

Ogg Theora/Ogg Vorbis

  • Chrome
  • Firefox
  • Opera

MP4 H.264

  • Safari
  • Internet Explorer 9

Helpful sites

The following should be helpful sites as you consider moving your content to HTML5:

 


Becoming a Leader in the Eyes of Your Manager

[View this presentation at Vimeo]

Elisabeth Leonard, MSLS, MBA: Associate Dean of Library Services, Western Carolina University
Tuesday, March 1, 10:45-11:30, Salon ABC
reported by Gina Bastone

Soft skills are important. Self-help books may be easy to laugh at, but they have good advice. That's what this session was all about.

It’s important to find a mentor, and if possible, find more than one. Look for people who have different perspectives. Find someone internally who can teach you how your organization works, and look for a mentor outside your department or even outside your organization so you get a fresh perspective. Ms. Leonard had mentors from the IT department and outside the library profession who gave her great alternative view points.

What does it mean to be a leader? First of all, leadership is not management – leadership can flourish at any level in an organization. See Gardner’s definition on slide 2. It says nothing about being in a position of authority. Instead, leadership is all about influence.

Next, find out who you are, and be able to articulate that to your boss. This is important for young librarians – figure out what you are passionate about as a librarian and learn to articulate that. Job descriptions for electronic resources librarians do not just include a list of skills – they have qualities and characteristics that relate to leadership and influence. Knowing vendors, ERMs and licensing negotiations are important skills and make you a good employee, but they don’t necessarily make you a good leader.

Find ways to make an impact in your organization. Make sure you're in the room for budget meetings so you can speak to how expensive digital resources are actually being used. For example, report on patron-driven acquisitions: PDA shows our patrons' points of view, and you can communicate that higher up. Look for ways to improve your department's workflow, processes, and services, and see how you can help your colleagues. Don't just monitor trends; report on them. Get others to talk about trends so that multiple people are looking for ways to implement them.

Don’t go to your manager just to complain. If you want to be a leader, look for ways to fix the problem, and come to meetings with ideas. The Geneen quote on slide 6 points out that “only performance is reality.” Ms. Leonard says, basically, talk is cheap.

Make sure you get the support you need, and evaluate what your manager knows about you. Do you ever talk about more than just your daily work? How well does your manager know you outside of the job? If possible, establish a personal relationship with your boss so you can get the support you need. Determine if you want to be a manager, and look at the leadership skills valued at your institution. Remember what unique skills you bring to the table. For example, if you have retail experience, you have valuable customer service and people skills. Highlight those things. Communicate to your boss what you are passionate about.

Also get to know your manager, and find out what makes her tick and what her goals are. Look at how she is evaluated, because it might influence the way she evaluates you. Is your boss a leader, a manager, or both? Try to figure out his or her management style and pay attention to whether he or she is threatened by a leader. Just as you want to communicate to your boss what you are passionate about, find out his or her passions in return. Listen as much as you talk, and see how your boss presents things to you.

Questions (paraphrased)

Q: How do we reverse the negative influence of leadership? For example, someone who has been in the institution for 30 years and doesn’t want to try new things. Or a gossip who always focuses on the worst case scenario.

A: Every organization has its naysayers, but don’t necessarily ignore them. They may have some wisdom, especially if they’ve been around a long time. Some people, though, are just “pit dwellers” and will never see the light. Acknowledge them but don’t let them stop the conversation.

Q: What do you do when there's no direct line of authority and peers manage each other?

A: That can be really tough but it can also be very dynamic. Your conversations with each other can be more honest and casual. The flipside is that you don’t control their time and can’t ask them to take on a big project. In some cases, you may need to bring in your bosses to negotiate time on projects.

Q: What do you do when the corporate culture of an institution states certain values, but in reality, things work differently from, or even in opposition to, those values? For example, decision makers say they want input, but they have already made their decision and the input doesn't matter.

A: That’s when you need to exercise the characteristics of a leader (rather than a set of job skills). Be courageous and stand up to the problem. Don’t be afraid to go up the line of authority, but be sure to let your manager know how you’re feeling. Your manager needs to know if there is a toxic environment, and sometimes, all that is needed is greater transparency in the decision-making process. It’s not always blatant dishonesty. Also, try to protect the people around you.

 


What am I really getting? Finding the unique content in your databases

Monica Moore, Illinois Wesleyan University
March 1, 2011, 11:40-12:25, Salon DE
reported by Kathryn Pierce

To assist librarians in making decisions about resource purchasing on limited budgets, Monica Moore developed a methodology for identifying title overlap in databases. She also created a step-by-step instruction key, included on the ERL flash drive, so it can be used as a resource even by those who didn't attend the talk. The project was in response to budgetary cuts at Illinois Wesleyan University, a small private liberal arts institution where Moore is an e-resources librarian. She found the methodology proved to be simple to carry out, effective at showing where there is overlap, and helpful in allowing librarians to cut costs by making informed purchasing decisions. It also supports the evaluation process by providing another tool for understanding the collections. Finally, Moore found that developing the tool has facilitated communication and education among librarians and administration, as it provides another way of sharing information about the value of resources.

Moore was looking at three databases in one subject area: Education Abstracts, ERIC, and Professional Development. Two of the three are free, which raised the possibility of eliminating the third to cut costs. But there was no simple way to compare the three products, as looking at Excel title lists proved to be complex and frustrating. Moore needed to find a way to look at relationships between the three databases. Using Microsoft Access, she created tables from the title lists in Excel, joined the tables through unique identifiers, then exported the results back to Excel and shared them with subject librarians to make decisions about what would be lost.
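The same comparison can be sketched in a few lines of Python. The talk used Excel title lists joined in Microsoft Access; the ISSN-style identifiers below are invented sample data, but the idea is the same: join on a unique identifier and see what the candidate database adds beyond the others.

# Title lists keyed by a unique identifier (invented ISSN-style strings).
eric = {"0000-0001", "0000-0002", "0000-0003"}
education_abstracts = {"0000-0001", "0000-0004"}
candidate = {"0000-0001", "0000-0004", "0000-0005"}   # the database being questioned

# Titles only the candidate database provides, i.e. what would be lost if it were cancelled.
unique_to_candidate = candidate - (eric | education_abstracts)
print(sorted(unique_to_candidate))   # ['0000-0005']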

Applying the methodology did not require special tools or an extensive skill set in databases. Moore did save the library over $1,000, which is promising for a pilot project. The study also resulted in added value in the form of new databases with more full-text options. The methodology can be used to save money, but it can also be applied to question the value of databases, compare options, and make informed decisions about library purchases.

 


When Two Become Three: Adding Additional Staff to Electronic Resource Management

Carolyn DeLuca, Dani Roach, and Kari Petryszyn, University of St. Thomas
Tuesday 11:40–12:25pm (Room 203)
reported by Jared L. Howland

Carolyn DeLuca, Dani Roach, and Kari Petryszyn, all from the University of St. Thomas, presented a case study of lobbying for, creating, and hiring for an electronic resources position within their library. The University of St. Thomas has an enrollment of 9,000 FTE and almost 40 FTE library staff, and is the largest liberal arts university in Minnesota.

From 2004 to 2010, the library at the University of St. Thomas decreased its print serials expenditures from around $640,000 to around $350,000, while its electronic serials and databases expenditures increased from around $350,000 to around $992,000. This led to a decrease in print serials subscriptions from around 4,000 to around 1,700. Even as electronic content grew, staffing for electronic resources remained the same. Like many libraries around the country, and probably the world, this led to a situation where a large portion of the staff was dedicated to print operations while nearly 70% of the budget was being spent on acquiring electronic resources.

When lobbying for more help with electronic resources from your administration, you must paint a clear picture of how the library’s resources are currently being allocated. This means providing reliable and meaningful statistics that tell a story and also telling actual stories about problems encountered with electronic resources due to a lack of personnel. For example, in the lifecycle of a print book, the person acquiring materials is not the one who catalogs it, or shelves it or repairs it when it needs repair years later. However, typically, those in charge of electronic resources do all of those things for electronic content. Drawing this analogy can be helpful in explaining to administrators the need for more staffing and further streamlining of electronic workflows.

Eventually, administrators at the University of St. Thomas agreed to new staff for electronic resources, and this led to the need to figure out new workflows. The staff charted out all current responsibilities required for maintaining electronic resources, along with who was performing the various tasks. After evaluating all the tasks that needed to be done, they tried to logically assign them to the appropriate people. They created a large spreadsheet outlining the tasks, which helped organize the workflows and made it easier to see who should be doing what (the spreadsheet is available on the flash drive for attendees).

Getting a third person to help with electronic resources helped them to more efficiently allocate collection development funds because they were able to more effectively evaluate usage and other information available for electronic resources. It also allowed them to free up staff to work on other “big picture” projects.

 


The Role of Collection Development Policies in Today’s Academic Library

Mary Ellen Pozzebon and Suzanne Mangrum, Middle Tennessee State University
Tuesday 2:40–3:25pm (Room 203)
reported by Jared L. Howland

Electronic resources have permanently changed business models by changing the amount of control librarians have over what content comes into the building, because so much content now arrives through licensed databases and journals. Librarians now have to be concerned about issues such as licensing and access (including perpetual access). These changes mean that collection development librarians must now be team focused, informed about contracts and content, collaborative, and on top of technical problems to ensure consistent access for users. Effective content delivery to users requires good execution of collection development.

By not keeping up with electronic access models, collection development policies failed to document and coordinate changes in workflows and policies. In other words, current collection development policies do not typically reflect current realities of collection development. Without teamwork and coordination, gaps in old collection development policies will be filled with assumption-based decisions instead of assessment-based decisions. When updated, collection development policies will lead to better decision making and will lead to balanced and relevant collections for our users.

Goals of collection development policies remain the same as they have been in the past:

  1. Identify institution’s audience or community
  2. Balance a library’s collection
  3. Relations with and documentation for library community
  4. Inform and direct library staff

Because of these changes, Mary Ellen Pozzebon and Suzanne Mangrum of Middle Tennessee State University undertook an environmental scan of library collection development policies to improve their own policies and to become informed about current best practices and trends within the profession at large. The environmental scan was largely a content analysis of the collection development policies of 23 peer institutions determined to be comparable to Middle Tennessee State University's library. The policies were gathered over a period of one month to avoid the possibility that policies would change during a longer collection period.

Each policy was thoroughly read and coded by the researchers and placed into the following categories determined by searching the literature and using their own collection development experience:

  • cost
  • consortia
  • responsible parties
  • content
  • access
  • usability
  • assessment
  • licensing (user perspective)
  • licensing (library management)

Each of these criteria had four levels of detail, and a policy was given a point for each level of detail found in the policy, for a possible total of 36 points.
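A sketch of that scoring appears below. The nine criteria come from the presentation, but the specific detail levels and the naive keyword matching are hypothetical, intended only to show how a 36-point total arises.

# Each of the nine criteria has four levels of detail; one point per detail found.
criteria_details = {
    "cost": ["price", "hidden costs", "justification of cost", "budget source"],
    "licensing (library management)": ["duration of license", "indemnification",
                                       "interlibrary loan", "archival rights"],
    # ... the remaining seven criteria, four details each, for 9 x 4 = 36 possible points ...
}

def score_policy(policy_text):
    """Count how many criterion details a policy mentions (very naive matching)."""
    text = policy_text.lower()
    return sum(detail in text for details in criteria_details.values()
               for detail in details)

print(score_policy("Our policy addresses price and interlibrary loan terms."))   # prints: 2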

Results

The following are the percentages of criteria in collection development policies of the peer institutions:

  • 85% content
  • 52% usability
  • 45% responsible parties
  • 38% costs
  • 36% access
  • 36% licensing (user perspective)
  • 25% consortia
  • 25% assessment
  • 22% licensing (library management)

Collection development policies are consistent in discussing current/authoritative content and the academic need, scope, and depth of content. Policies also consistently mention the cost of materials. On the other hand, policies are not currently good at addressing the following:

  • Termination rights (licensing – user perspective)
  • Duration of license (licensing – library management)
  • Who implements (responsible parties)
  • Justification of the cost (cost)
  • Indemnification (licensing – library management)
  • Interlibrary Loan (licensing – library management)
  • Trial period (assessment)
  • Consortia maintenance of electronic resources (consortia)
  • Hidden costs (cost)
  • MARC record availability (access)
  • Consortia cost negotiation (consortia)

Best practice dictates that electronic resource collecting should be integrated into the library's general collecting policy. An interesting finding of the environmental scan was that integrated policies typically did a poor job of addressing new electronic resource collecting realities. Libraries with separate collecting policies for the general collection and the electronic collections typically did much better at reflecting current realities. The authors' hypothesis is that libraries with integrated policies have not taken the time to fully update their policies and have simply added token language to current policies as a nod to the existence of electronic content.

 


The Copyright Outreach Librarian

Martin J. Brennan, MLS: Copyright and Licensing Librarian, UCLA Library
Tuesday, March 1, 4:40-5:25, Room 203
reported by Gina Bastone

Not every university has a copyright librarian, but it can be a valuable role for a number of reasons. A copyright librarian is responsible for educating faculty and graduate students in a way that campus counsel often cannot. Such a position reduces overall risk for the university, and in the event of a copyright suit, having a copyright librarian on staff can strengthen the university's argument in court. So this is a position worth advocating for.

If you take on the role of copyright librarian, get familiar with the Copyright Act of 1976 and understand your university's policies. See Stanford's site on fair use and look at fair use court cases. Particularly examine the gray areas in the law, such as the classroom and library exceptions. There are many good online classes you can take to learn more.

Determine your agenda and understand your university’s policies. Decide on advocacy positions, such as author’s rights, open access, and fair use, and prioritize what’s most important. You only have so much time in front of faculty and students. Build a network of contacts around campus. Your relationship with campus counsel is probably the most important. Make sure you know what’s going on in other library departments, especially electronic reserves, and reference and instruction colleagues can help you make contacts in academic departments. Faculty, other librarians, graduate students and relevant support staff are your priority, although educating undergraduates is important too.

Next, figure out how to present your message. Call and email faculty and important support staff, informing them of who you are and what you do. Look for easy, point-of-need methods, such as modules created in PowerPoint, instructional online videos, and podcasts. Go to campus resource fairs, graduate student orientations, seminars, and brown bags, and have a dynamic website where you can easily refer faculty and students. The most important method, however, is individual consultation, and it requires a substantial time commitment.

Whatever methods you use, always remember to include a disclaimer – you are not an attorney! You also are not the copyright cop. It’s up to individuals to follow the law. Often, faculty and students want a yes or no answer, but most situations are not black and white, especially with fair use.

Copyright librarian positions are a new phenomenon, and unfortunately, assessment is difficult. You can easily keep outreach and education statistics, such as numbers of sessions and numbers of people reached. However, qualitative impacts, like the reduced risk of litigation, are difficult to measure.