Reporting and metrics

Status: Draft

TODO

  • Define a timeframe for data retention (this should relate to testing, since we should be able to make decisions within that timeframe)

Objectives

Access

A greater breadth of our collection accessed

  • Decrease in % of catalogue with 0 views

  • Greater spread of works viewed across work types (see the sketch after this list)
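
A minimal sketch of how these two measures might be computed, assuming a catalogue snapshot with work_id and work_type columns and a log of work page views. The column names, example IDs, and the pandas approach are illustrative assumptions rather than the actual reporting pipeline:

```python
import pandas as pd

# Hypothetical inputs: a catalogue snapshot and a log of work page views.
catalogue = pd.DataFrame({
    "work_id": ["a224cz66", "b31dqm2w", "c9qrk3vd", "d7xp4nfh"],
    "work_type": ["Books", "Pictures", "Archives", "Books"],
})
views = pd.DataFrame({"work_id": ["a224cz66", "a224cz66", "c9qrk3vd"]})

# % of the catalogue with 0 views in the reporting window.
viewed_ids = set(views["work_id"])
pct_zero_views = 100 * (~catalogue["work_id"].isin(viewed_ids)).mean()

# Spread of views across work types, here as the share of views per work type.
views_by_type = (
    views.merge(catalogue, on="work_id")
    .groupby("work_type")["work_id"]
    .count()
    .pipe(lambda counts: counts / counts.sum())
)

print(f"{pct_zero_views:.1f}% of works had no views")
print(views_by_type)
```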

Precision

People with specific search intentions can have their expectations met

  • Clicks per search (CPS) is measured passively by tracking users' behaviour while they use the search function. It is a variant of a traditional click-through rate (CTR), calculated as the ratio of the number of items clicked to the number of distinct searches, for each anonymised session id.

  • Top n clicks per search (CPS-n) is calculated in the same way, but only counts clicks on works which appear in the top n results (a sketch of both calculations follows this list).
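
This sketch assumes an event log with one row per search or click, an anonymised session id, and a result position recorded for each click. The column names and the pandas approach are illustrative assumptions rather than the actual reporting pipeline:

```python
from typing import Optional

import pandas as pd

# Hypothetical event log: one row per search or click, per anonymised session.
events = pd.DataFrame({
    "anonymous_session_id": ["s1", "s1", "s1", "s2", "s2"],
    "event_type": ["search", "click", "click", "search", "click"],
    "result_position": [None, 2, 15, None, 1],
})

def clicks_per_search(events: pd.DataFrame, n: Optional[int] = None) -> pd.Series:
    """CPS per session: clicks divided by searches.
    If n is given, only clicks on results in the top n are counted (CPS-n)."""
    clicks = events[events["event_type"] == "click"]
    if n is not None:
        clicks = clicks[clicks["result_position"] <= n]
    searches = events[events["event_type"] == "search"]

    clicks_per_session = clicks.groupby("anonymous_session_id").size()
    searches_per_session = searches.groupby("anonymous_session_id").size()
    # Sessions with searches but no (qualifying) clicks score 0.
    return (clicks_per_session / searches_per_session).fillna(0)

print(clicks_per_search(events))        # CPS
print(clicks_per_search(events, n=10))  # CPS-10
```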

Metrics without a home

Session segmentation

We occasionally distinguish between 'discerning' and 'non-discerning' sessions where 'discerning' sessions are those which include searches beyond the first page of results. This follows an assumption that the users who work their way through a page of results and decide to keep going are more engaged than those who are satisfied with a single page. This is a very rough proxy for intent and we know that there will be some genuinely discerning users who never make it beyond the first page, either because the results are good enough to satisfy their needs already, or because they're so bad that they lose hope of finding the work(s) they're looking for. Nevertheless, this can be a useful way of splitting metrics to ensure that we're meeting the coarsest of user intentions.
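
A minimal sketch of this segmentation, assuming each search event records which page of results was requested. The column names are illustrative assumptions:

```python
import pandas as pd

# Hypothetical search log: one row per search, with the results page requested.
searches = pd.DataFrame({
    "anonymous_session_id": ["s1", "s1", "s2", "s3", "s3"],
    "page": [1, 2, 1, 1, 1],
})

# A session is 'discerning' if any of its searches went beyond page 1.
session_is_discerning = (
    searches.groupby("anonymous_session_id")["page"].max().gt(1)
)

# Other metrics can then be split by this flag; here we just count sessions.
print(
    session_is_discerning
    .map({True: "discerning", False: "non-discerning"})
    .value_counts()
)
```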
