Nowadays there are many repositories offering various ways of accessing representations of digital musical content, whether in symbolic or document format. Some (such as Spotify and SoundCloud) focus on audio and sound recordings. Others, such as the MusicBrainz and Discogs repositories, offer metadata providing information about musical documents. Lastly, there are thousands of libraries and archives that give their users (performers, musicologists, etc.) access to large quantities of scanned music in image formats. However, none of these repositories exposes the basic information needed, for example, to analyse all the scores of a given musical form, or to support the analysis and musicological interpretation of a particular musical style comprising thousands of works.
In the case of scanned scores, the musical information is represented as pixels, which prevents any computerised processing for searching or analysing their musical content. Moreover, the substantive information in these documents remains inaccessible to library users who lack sufficient musical knowledge but nevertheless want to listen to a particular piece of music.
At the same time, in conventional library contexts, users search for information by consulting automated catalogues, normally by title, author or subject. This kind of search, the one we perform when we want to know which documents a library holds, is carried out using data previously catalogued according to internationally established standards. Yet could a musician look for a score by keying in a melody, or even by whistling it? Might it be possible to listen to a digitised score at any tempo we choose?