In March 2015 we conducted usability tests on QuickSearch, NCSU Libraries' customized federated search tool with a bento-box results interface designed to connect users to a variety of library resources, services, and tools. Users showed no confusion over the bento-box layout and asked no questions about the arrangement of the search-results page. The most significant problem we found was that users overlooked "Best Bets" (high-demand resources) that lack a descriptive tagline.
QuickSearch Product Team members have confidence in the usability of the search-results screen based on years of search-log analysis, analytics, and a high clicks-per-serve ratio. Our tests addressed three questions:
- Would usability testing validate the team's confidence?
- What high-level usability problems could we find?
- Does our machine-learning prototype recommend the correct category of results?
What we found
Interface works well. The tests validated QuickSearch Product Team members' confidence in the basic usability of the interface. Users showed no confusion over the bento-box layout. They seemed to understand it and asked no questions about the layout or arrangement of the search-results page. They all understood that they could click through to see more than the three results shown for each category.
"Best Bets" could be improved. The most significant problem we found was that users did not see some categories of "Best Bets." ("Best Bets" are a QuickSearch feature that calls out our highest-demand resources.) Best Bets with taglines, such as this one:

[screenshot: Best Bet with a descriptive tagline]

were seen and clicked on by users, while those that lack descriptive taglines, such as this one:

[screenshot: Best Bet without a tagline]

were more often missed.
Machine-learning prototype needs more work. Tests of the machine-learning prototype yielded mixed results. The prototype attempts to identify the most appropriate results category (articles, books & media, journals, etc.) for a user's search and place that category at the top of the interface. The prototype recommended appropriate categories, but the top result from that category was often not relevant.
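For readers unfamiliar with this kind of prototype, a category recommender can be as simple as a text classifier trained on past queries and the categories users clicked. The sketch below is a minimal multinomial naive Bayes classifier in plain Python; the training examples and category labels are illustrative assumptions, not the team's actual prototype or data.

```python
# Minimal naive Bayes sketch: guess the best results category for a
# query. TRAINING pairs are hypothetical, not real QuickSearch logs.
import math
from collections import Counter, defaultdict

TRAINING = [
    ("machine learning articles", "articles"),
    ("peer reviewed articles on sleep", "articles"),
    ("introduction to statistics textbook", "books & media"),
    ("dvd documentary ocean", "books & media"),
    ("journal of chemical education", "journals"),
    ("nature journal impact factor", "journals"),
]

def train(examples):
    word_counts = defaultdict(Counter)   # category -> word frequencies
    cat_counts = Counter()               # category -> example count
    for query, cat in examples:
        cat_counts[cat] += 1
        word_counts[cat].update(query.lower().split())
    return word_counts, cat_counts

def predict(query, word_counts, cat_counts):
    total = sum(cat_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_cat, best_score = None, float("-inf")
    for cat, n in cat_counts.items():
        # log prior + log likelihood with add-one smoothing
        score = math.log(n / total)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for word in query.lower().split():
            score += math.log((word_counts[cat][word] + 1) / denom)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

word_counts, cat_counts = train(TRAINING)
print(predict("articles about machine learning", word_counts, cat_counts))
# → articles
```

Note that this only ranks categories; as our tests showed, picking the right category does not guarantee the top result within it is relevant.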
Some users’ query terms made it hard for them to find certain information:
- Multiple users added "ncsu," "library," and/or "dh hill" to their queries, even though they were searching within an NCSU Libraries website. We assume this technique is learned while doing Google searches and then applied in QuickSearch.
- Some users formed natural-language queries or formatted them as questions.
- Some users used terms not used by the library website, such as "renting" instead of "borrowing" or "lending."

Typeahead does not work everywhere. The typeahead feature does not work in all browsers.
Recommendations and Changes
Add taglines to all "Best Bets." This will increase consistency and make every "Best Bet" as findable as the ones users noticed in testing.
Continue working on the machine-learning prototype and test again in the future.
Consider possible solutions to improve results when users enter natural-language queries or unhelpful search terms such as "ncsu" and "renting." Seek a better match between our system and the real world.
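One low-cost way to pursue this recommendation is to normalize queries before searching: drop site-redundant tokens users carry over from Google habits and map everyday words to the library's vocabulary. The token and synonym lists below are illustrative assumptions, not QuickSearch's actual configuration.

```python
# Sketch of query normalization: strip noise tokens and map everyday
# words to library vocabulary. All word lists here are hypothetical.
REDUNDANT = {"ncsu", "library", "libraries", "dh", "hill"}
SYNONYMS = {"renting": "borrowing", "rent": "borrow"}
QUESTION_WORDS = {"how", "do", "i", "can", "where", "what"}

def normalize(query: str) -> str:
    tokens = query.lower().replace("?", "").split()
    kept = []
    for t in tokens:
        if t in REDUNDANT or t in QUESTION_WORDS:
            continue                     # drop noise tokens
        kept.append(SYNONYMS.get(t, t))  # map to library vocabulary
    return " ".join(kept)

print(normalize("renting laptops ncsu library"))
# → borrowing laptops
```

In practice this kind of mapping would be maintained from search-log analysis rather than hand-written, and applied alongside (not instead of) the user's original query so that exact matches still win.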
Investigate the typeahead feature's compatibility with commonly used browsers.
Graduate student employees facilitated the tests. They recorded the sessions using Morae. Two members of the QuickSearch Product Team, Kevin Beswick, Digital Technologies Development Librarian, and Josh Boyer, Head, User Experience, later watched the recordings.
Six participants took part: all undergraduates, men and women, from five different academic departments, incentivized with cash for participating.
Usability test tasks and detailed results are available upon request.