GNAT 0.1 released
You can get it here.
What does it do?
As mentioned in my previous blog post, GNAT is a small piece of software that links your personal music collection to the Semantic Web. It finds dereferenceable identifiers, available somewhere on the web, for the tracks in your collection. Basically, GNAT crawls through your collection and tries, by several means, to find the corresponding Musicbrainz identifier for each track, which is then used to find the corresponding dereferenceable URI in Zitgist. Then, RDF/XML files are put in the corresponding folders:
/music
/music/Artist1
/music/Artist1/AlbumA/info_metadata.rdf
/music/Artist1/AlbumA/info_fingerprint.rdf
/music/Artist1/AlbumB/info_metadata.rdf
/music/Artist1/AlbumB/info_fingerprint.rdf
These files hold a number of <http://zitgist.com/music/track/...> mo:available_as <local file> statements. They can then be used by a tool such as the one that will be properly released next week, which swallows them, exposes a SPARQL end point, and provides some linked data crawling facilities (to gather more information about the artists in our collection, for example), therefore allowing you to use the links pictured here (yes, sorry, I didn't know how to properly introduce the new linking-open-data schema - it looks good, and keeps on growing! :-) ):
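To make the above concrete, here is a sketch of what one of these RDF/XML files might contain. The exact serialization GNAT emits may differ; the local file path is a placeholder, and the track URI is left elided as in the text above. The `mo:` namespace is the Music Ontology.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:mo="http://purl.org/ontology/mo/">
  <!-- one mo:available_as statement per identified track;
       the track URI and local path below are placeholders -->
  <rdf:Description rdf:about="http://zitgist.com/music/track/...">
    <mo:available_as rdf:resource="file:///music/Artist1/AlbumA/track01.mp3"/>
  </rdf:Description>
</rdf:RDF>
```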
Two identification strategies
GNAT can use two different identification strategies:
- Metadata lookup: in this mode, only the available tags are used to identify the track. We chose an identification algorithm which is slower (although if you identify, for example, a collection with lots of releases, you won't notice it too much, as only the first track of a release is slower to identify), but which works a bit better than Picard's metadata lookup. Basically, the algorithm used is the same as the one I used to link the Jamendo dataset to the Musicbrainz one.
- Fingerprinting: in this mode, the Music IP fingerprinting client is used to find a PUID for the track, which is then used to get back to the Musicbrainz ID. This mode is obviously better when the tags are crap :-)
- The two strategies can be run in parallel, and most of the time, the best identification results are obtained by combining the two...
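The way the two strategies could be combined can be sketched roughly as follows. This is only an illustration, not GNAT's actual code: the two lookup functions are stand-ins for the real Musicbrainz metadata query and the Music IP fingerprinting client, and the "trust them only when they agree" rule is an assumed combination heuristic.

```python
def metadata_lookup(track):
    """Hypothetical stand-in: resolve a Musicbrainz ID from the track's tags.
    Real code would query the Musicbrainz web service here."""
    return track.get("tags", {}).get("mb_id")

def fingerprint_lookup(track):
    """Hypothetical stand-in: resolve a PUID via the Music IP client,
    then map it back to a Musicbrainz ID."""
    return track.get("puid_mb_id")

def identify(track):
    """Combine both strategies (assumed heuristic): if both answer,
    accept only on agreement; otherwise fall back to whichever worked."""
    by_tags = metadata_lookup(track)
    by_puid = fingerprint_lookup(track)
    if by_tags and by_puid:
        return by_tags if by_tags == by_puid else None
    return by_tags or by_puid

# Example: a badly tagged track is still identified via its fingerprint.
track = {"tags": {}, "puid_mb_id": "some-musicbrainz-id"}
print(identify(track))
```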
- To perform a metadata lookup for the music collection available at /music:
./AudioCollection.py metadata /music
- To perform a fingerprint-based lookup for the same collection:
./AudioCollection.py fingerprint /music
- To clean all previously performed identifications:
./AudioCollection.py clean /music