Last week we logged the 1,036,791st search on our Primo system. I think I’m most impressed by the nearly vertical line between the start of this semester (August 24) and early October.
I’m building an SQL database to help with assessment of library services. Today’s autodidactic activity involved counting the number of students by status (undergrad, grad or law) based on the declared home address in the registrar’s database.
Finally nailed the SQL syntax but as I looked more closely at the results, excitement waned…
I suppose there are other ways to mangle Washington, DC during data entry. Maybe if I were to just count DC in the “state” field–surely that would be a reasonable proxy for living in the District, right?
Ordinarily, yes, but that student whose record reads “Bangalore, India” with “DC” in the state field still leaves us with an off-by-one error.
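The two counting strategies above can be sketched with an in-memory SQLite database. The table and column names here are my own invention, not the registrar’s actual schema, and the rows are toy data:

```python
import sqlite3

# Toy stand-in for the registrar extract; schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (name TEXT, status TEXT, city TEXT, state TEXT)"
)
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?, ?)",
    [
        ("A", "undergrad", "Washington", "DC"),
        ("B", "grad", "Wash DC", "DC"),          # mangled during data entry
        ("C", "law", "Washignton", "DC"),        # another mangling
        ("D", "grad", "Bangalore, India", "DC"), # the off-by-one culprit
        ("E", "undergrad", "Arlington", "VA"),
    ],
)

# First attempt: match the city string directly -- misses the mangled rows.
naive = conn.execute(
    "SELECT status, COUNT(*) FROM students "
    "WHERE city = 'Washington' GROUP BY status"
).fetchall()

# Second attempt: use the state field as a proxy for the District --
# catches the misspelled variants but also scoops up the Bangalore record.
proxy = conn.execute(
    "SELECT status, COUNT(*) FROM students "
    "WHERE state = 'DC' GROUP BY status"
).fetchall()

print("city match:", naive)
print("state proxy:", proxy)
```

On this toy data, the city match finds only one of the four District residents, while the state-field proxy finds all four of its matches plus the stray Bangalore record.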
Seems librarians are more interested in assessment today than was the case just a few years ago. There are many reasons for the quickening of interest but I suspect most cluster around one or the other of these themes:
- a sense that libraries need to justify their existence, relevance, etc.
- everybody’s talking about data so you need to have some to be seen as serious
In this sort of environment, the prudent course is to prepare for a dramatic rise in “Can we get some numbers on this?” questions. If you haven’t heard them yet they’re surely coming.
One way to get ready is to build in ways to measure new services as you develop them. Another is to look at data you already have and see if it can be enhanced to deliver a more compelling usage metric.
For my library, one option worth investigating–and the only place where I know I can capture every bit (and byte) of what’s happening with e-resources for off-campus users–is our EZproxy server. Is there a new way to look at the activity logs on that system? Let’s see…
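A first pass at those logs might be a short script that tallies requests per authenticated user. I’m assuming an Apache-style common log layout here, which EZproxy can be configured to write; the sample lines and usernames are invented:

```python
import re
from collections import Counter

# Invented sample lines in an Apache-style common log layout.
SAMPLE_LOG = """\
10.0.0.5 - jdoe [01/Oct/2013:09:15:02 -0500] "GET http://example.com/article HTTP/1.1" 200 5120
10.0.0.5 - jdoe [01/Oct/2013:09:15:40 -0500] "GET http://example.com/fulltext.pdf HTTP/1.1" 200 90210
10.0.0.9 - asmith [01/Oct/2013:09:16:11 -0500] "GET http://example.com/search HTTP/1.1" 200 2048
"""

LINE_RE = re.compile(
    r'^(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def requests_per_user(log_text):
    """Count log lines per authenticated username."""
    counts = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if m:
            counts[m.group("user")] += 1
    return counts

counts = requests_per_user(SAMPLE_LOG)
print(counts)
```

The same pattern extends to tallying by requested host or by hour of day, which starts to turn raw proxy traffic into something resembling a usage metric.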
This graph charts the 690,000 searches conducted on our Primo system since January 1, 2013. I am happy to see that vertical jump during the final week tracked by the graph–the first week of this term.
We placed a search widget on our library’s home page once Primo went “live”–pretty standard fare for libraries that implement a discovery product. Our search widget looks like this:
You’ll notice we’ve put a lot of explanatory text (which, of course, no one reads) and a number of options. That little “locally-held collections” box was added so a user could limit search results to just our Voyager catalog and our DSpace and LUNA systems–reducing the noise that enters a result set when you include content from the Primo Central Index.
Thanks to some logging this widget performs, we know that since January 8, 2013, it has been used to launch 104,186 searches. For 4,180 of them, the “Limit to locally-held collections” box was also checked.
Which means our usage stats show that our little “limit” checkbox gets ignored 96% of the time. Should be easy to make the case that we should just remove it, but still…
- it is used in 4% of searches
- it likely performs a useful function for the 4% that select it
- it imposes no real penalty if you choose to ignore it
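The 96% figure above is just the complement of the checkbox’s share of logged searches:

```python
total_searches = 104186  # searches launched via the widget since Jan 8, 2013
limited = 4180           # searches with the "limit" box also checked

used_pct = limited / total_searches * 100
ignored_pct = 100 - used_pct
print(f"limit box used:    {used_pct:.1f}%")   # about 4%
print(f"limit box ignored: {ignored_pct:.1f}%")  # about 96%
```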
I understand why some lean toward a search box that offers no options and very little explanation–just enter something and see what you get. I also appreciate the fact that you can offer a user so many choices and options that all you’ve really done is increase the odds that they’ll choose the wrong thing.
What I haven’t quite figured out is when it is right to toss a useful feature that you know only a small percentage of people use.
Participated in a panel at ALA last Saturday:
“Hiding in Plain Cite: The Growing Importance of Content Neutrality in Library Discovery Services”
Roger Schonfeld served as moderator. Joining me on the panel were Lisa O’Hara, University of Manitoba; Todd Carpenter, Executive Director of NISO; and Amira Aaron, Northeastern University.
One of the questions I took the lead on was, “What does Content Neutrality Mean to You?” Here’s my response:
I’m sure most have heard the phrase “net neutrality”–a network model that says bandwidth providers should treat all data that moves across their network in the same way. It is certainly true that many ISPs are just in the bit-moving business (providing network access), but a smaller percentage also provide content. It’s that vertical integration (meaning significant parts of the supply chain fall under the same owner) that gives rise to trouble.
For example, in a net-neutral world:
- Comcast as an access provider shouldn’t shape traffic in such a way that Netflix video streams end up slower than content flowing from Comcast’s own Xfinity platform.
- Verizon shouldn’t count Amazon video streams against a user’s data allowance while exempting the same user from cap charges on a FIOS video stream.
“Content neutrality” is a similar idea. Our “access provider” in this instance is the discovery platform vendor. The analogs to traffic shaping or billing distortions occur instead around the metadata that’s being searched to “discover” relevant content. As with ISPs and net neutrality, there are some companies that just provide a discovery platform and others that are also in the content business. As before, vertical integration and perceptions of competitive advantage are problem incubators.
Eleven weeks in, two things are clear:
1) use of Primo is increasing
2) researchers begin winding down a week before Spring Break and appear to need another week to get back into the swing of things.
Seems we now average roughly 30,000 Primo searches a week. Over half of the 250+ public workstations in our libraries still default to our “classic catalog” (Voyager) so I suspect we haven’t yet hit Peak Primo.