'Colleges' search result page, University of York website

Getting a handle on site search

Have you ever visited a website looking for a specific piece of information: maybe a swimming pool timetable, or a particular recipe?

Have you ever used the website's search function to do this?

If so, you have been a ‘user’ engaging with ‘site search’. 

We get a significant number of searches every month across york.ac.uk for information ranging from accommodation to fees, timetables to departments. 

Site search bar on york.ac.uk

We use Funnelback to power our site search. It stores our web content in collections and, through both manual and automated configuration, presents the most appropriate results for each search term.

Irrelevant results from these searches make for a negative user experience, something we’re keen to avoid.

So, why do we need to get a handle on this?

Over time, additions or changes to our collections result in ‘experience rot’: a slow deterioration of the user experience as our search function increasingly fails to meet users’ needs.

Most of the changes we make in Funnelback are reactive, responding to requests such as adding or modifying promoted results (best bets), or removing results that are no longer relevant.

The reactive nature of this work meant that we had no strategy or process in place to optimise our search function and improve results.

How can we optimise our site search?

Preventing the deterioration of results was at the heart of getting a handle on site search: shifting the focus away from problem solving and towards problem prevention.

The first step of this was to properly understand the current state of our site search.

Step 1 – Quantifying our search experience

Quantifying an experience is typically a challenging undertaking. Most of our site search feedback is anecdotal and emotive, with specific failures in search results causing user frustration that sometimes gets fed back to the relevant team.

However, successful searches are rarely commented on, and we had no process in place for assessing search performance outside of user experience case studies.

Utilising Louis Rosenfeld’s methodology for ‘measuring the unmeasurable’, as outlined in his book ‘Search Analytics for Your Site’, I undertook a process of quantifying and measuring site search performance.

By identifying objective ‘best results’ for our top search terms (as reported by Google Analytics), I was able to set benchmarks for our site search relevance and precision.

As this was a niche piece of work, it didn’t provide a metric by which we could compare ourselves with competitor institutions. Instead, we now had a better, more objective understanding of our site search performance and a means of measuring any improvements resulting from configuration changes.
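As a rough illustration of the benchmarking input, here’s a minimal Python sketch that pulls the top terms from a hypothetical Google Analytics site search export. The file name and column headings are assumptions, not the real report:

```python
import csv

TOP_N = 50

# Hypothetical export of the Google Analytics site search report;
# the file name and column headings are assumptions.
with open("ga_site_search_terms.csv", newline="") as f:
    rows = sorted(
        csv.DictReader(f),
        key=lambda r: int(r["totalUniqueSearches"]),
        reverse=True,
    )

# The top terms form the benchmark set; an objective 'best result'
# URL is then agreed manually for each one before any scoring happens.
benchmark_terms = [r["searchKeyword"] for r in rows[:TOP_N]]
print(benchmark_terms[:5])
```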

Step 2 – Carrying out configuration changes

I focused on the following configuration settings within Funnelback: 

Best bets:

Example of a best bet appearing when ‘politics’ is searched on york.ac.uk.

These are Funnelback’s promoted results and appear highlighted at the top of search results. 

Best bets can be created for important content that needs promoting in a timely manner. However, their relevance can expire, or their destination URL may change, and we had no process for determining which best bets needed updating or removing.

Importing a list of our best bet destination URLs into https://httpstatus.io/ meant I could identify those with redirects and update them accordingly. It also allowed for a sense check of live URLs, meaning outdated ones could be deleted.
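For anyone wanting to run this check locally rather than through httpstatus.io, a minimal sketch along these lines would surface redirects and dead links. The URL list here is hypothetical:

```python
import requests

# Hypothetical list of best bet destination URLs exported from Funnelback.
best_bet_urls = [
    "https://www.york.ac.uk/study/accommodation/",
    "https://www.york.ac.uk/about/departments/",
]

for url in best_bet_urls:
    # allow_redirects=False so 3xx responses are visible rather than followed.
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if 300 <= resp.status_code < 400:
        print(f"REDIRECT {url} -> {resp.headers.get('Location')}")
    elif resp.status_code >= 400:
        print(f"BROKEN   {url} ({resp.status_code})")
    else:
        print(f"OK       {url}")
```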

Search refinements:

This is a key site search metric reported in Google Analytics. It’s the rate at which a search term is amended and re-entered. It can indicate that users did not find what they were looking for in their initial search and so had to try again, which isn’t an ideal user experience. 

As search refinements are most commonly the result of a misspelling, the best solution is to input a synonym telling Funnelback to return the results that would have been shown, had the search been entered correctly. For example: 

Screenshot of ‘accommodation’ synonyms impacting search results for a misspelled query.

By adding ‘acomodation’ as a synonym for ‘accommodation’, the user experience is preserved and users can reach relevant information more quickly. We did this for all common misspellings appearing in our Google Analytics reports.
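One way to surface candidate misspellings from a search term export is a fuzzy match against known-good terms. This sketch uses Python’s difflib with purely illustrative data; the actual synonym entries are then made in Funnelback:

```python
import difflib

# Hypothetical data: canonical terms we have good results for, and raw
# search terms pulled from a Google Analytics export.
canonical_terms = ["accommodation", "timetable", "scholarships"]
raw_search_terms = ["acomodation", "accomodation", "timetabel", "open days"]

# Map likely misspellings onto their canonical term; each match becomes
# a candidate synonym entry to review before adding to Funnelback.
for term in raw_search_terms:
    match = difflib.get_close_matches(term, canonical_terms, n=1, cutoff=0.8)
    if match and match[0] != term:
        print(f"candidate synonym: '{term}' -> '{match[0]}'")
```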

Include and Exclude patterns:

The content we include in, or exclude from, our search results is managed via Include and Exclude patterns. 

I reviewed these lists to ensure any new domains were included in search where necessary, and to remove legacy exclusions for sites that no longer exist.

For example, we no longer needed to include www.capanina.org/, so this was removed from the Include list. I also added utm_campaign parameters to the Exclude patterns to keep campaign-tagged URLs out of search results.

This update helps to keep Funnelback up to date with relevant content as well as streamlining results by removing legacy domains. 
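The underlying logic is straightforward: a URL only makes it into the index if it matches an include pattern and no exclude pattern. This Python sketch illustrates the idea with made-up patterns; real Funnelback patterns live in the collection configuration and use Funnelback’s own syntax:

```python
import re

# Illustrative patterns only; not Funnelback's actual configuration.
include_patterns = [r"^https?://(www\.)?york\.ac\.uk/"]
exclude_patterns = [r"utm_campaign=", r"^https?://www\.capanina\.org/"]

def is_indexable(url: str) -> bool:
    # Keep a URL only if it matches an include pattern
    # and matches no exclude pattern.
    included = any(re.search(p, url) for p in include_patterns)
    excluded = any(re.search(p, url) for p in exclude_patterns)
    return included and not excluded

print(is_indexable("https://www.york.ac.uk/study/"))                  # True
print(is_indexable("https://www.york.ac.uk/?utm_campaign=opendays"))  # False
```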

It was important to record the number of documents crawled by Funnelback, so that any sudden increases, which may slow site search down, could be investigated. I also listed everything excluded, along with a brief description of its purpose, to future-proof our exclusion rules.
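As a sketch of that record-keeping, something like the following could log each crawl’s document count and flag suspicious jumps. The file name and threshold are assumptions:

```python
import csv
import os
from datetime import date

LOG_FILE = "crawl_counts.csv"   # hypothetical local log
ALERT_THRESHOLD = 0.20          # flag increases of more than 20%

def record_and_check(doc_count: int) -> None:
    history = []
    if os.path.exists(LOG_FILE):
        with open(LOG_FILE, newline="") as f:
            history = [int(row["documents"]) for row in csv.DictReader(f)]

    # A sudden jump in documents crawled can slow search down,
    # so flag it for investigation before users notice.
    if history and doc_count > history[-1] * (1 + ALERT_THRESHOLD):
        print(f"Investigate: crawl grew from {history[-1]:,} to {doc_count:,}")

    new_file = not os.path.exists(LOG_FILE)
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "documents"])
        writer.writerow([date.today().isoformat(), doc_count])

record_and_check(125_000)
```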

Tuning:

Funnelback tuning is “…a process that can be used to determine which attributes of a document are indicative of relevance and adjust the ranking algorithm to match these attributes…” (Tuning search ranking docs – Squiz)

Adjusting tuning configuration requires search terms to be added with their best result. Funnelback then uses this information to adjust search result rankings accordingly, providing more accurate search results. 

I took the top search terms with the best result as identified in the benchmarking stage (step 1) and entered them into our tuning configuration. Funnelback predicted that this would have a positive impact on our results so the config changes were published. 

Funnelback’s predicted improvement score after the tuning data update.
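To give a sense of the input, the tuning data boils down to query-and-best-result pairs like these. The mapping below is illustrative, and the exact upload format is defined by Funnelback, not this sketch:

```python
import csv

# Hypothetical query-to-best-result pairs taken from the benchmarking
# stage (step 1); both queries and URLs are invented examples.
best_results = {
    "accommodation": "https://www.york.ac.uk/study/accommodation/",
    "term dates": "https://www.york.ac.uk/about/term-dates/",
}

with open("tuning_pairs.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["query", "best_result_url"])
    for query, url in best_results.items():
        writer.writerow([query, url])
```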

Step 3 – Test that configuration changes have helped

Checking it’s worked!

After updating best bets, adding to our synonym collection, reviewing Include and Exclude patterns and tuning our results, I undertook another round of precision analysis to determine if these changes had led to any measurable improvements.

Our search precision increased across all three main definitions:

  • Strict = only very relevant results are deemed acceptable 
  • Loose = very relevant and relevant results are deemed acceptable 
  • Permissive = very relevant, relevant and nearly relevant results are deemed acceptable
                        Search precision
                     Strict   Loose   Permissive
Benchmark score        58%     76%      91%
Post config changes    66%     85%      95%

Comparison of precision scores before and after Funnelback configuration changes
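As an illustration of how those three scores can be derived, this sketch grades each result for a term and computes precision under each definition. The judgments here are invented, not our real data:

```python
# Hypothetical relevance judgments for the top results of each benchmark
# term, graded 3 = very relevant, 2 = relevant, 1 = nearly relevant, 0 = not.
judgments = {
    "accommodation": [3, 3, 2, 1, 0],
    "term dates": [3, 2, 2, 0, 0],
}

# Each definition sets the minimum grade a result needs to count.
definitions = {"strict": 3, "loose": 2, "permissive": 1}

for name, min_grade in definitions.items():
    per_term = [
        sum(g >= min_grade for g in grades) / len(grades)
        for grades in judgments.values()
    ]
    precision = sum(per_term) / len(per_term)
    print(f"{name}: {precision:.0%}")
```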

Step 4 – Document and put processes in place to maintain search quality

As the changes had a positive impact, I created processes for maintaining the improvements, adding recurring configuration-update tasks to our work management tool, Asana.

Some config changes (best bets and refinements) are done on an individual or term-by-term basis and can be time consuming. I scheduled these to repeat more frequently so that work doesn’t build up too much between updates.

Changes such as amendments to the Include and Exclude patterns, and the tuning runs, are scheduled less frequently as they have a longer-lasting impact on search results.

In our internal wiki documentation I included instructions on how to conduct each config review or update, to future-proof these processes and ensure consistency.

So, what are the outcomes?

We now have a benchmark from which we can measure improvements. Most importantly, we have already seen an improvement in search precision; one we hope to build on! 🎉

We have processes in place to prevent experience rot, and to preserve and improve site search quality. This is supported by relevant, up-to-date documentation to future-proof these processes.

Published by

Katie Shearston

Search, Analytics and Digital Advertising Specialist at the University of York. I get to dive into data analytics, SEO and our digital marketing campaigns.
