
On Monday, 31 August, I presented the preliminary results of my work on sound representations at the weekly Artificial Intelligence meeting at the VU University Amsterdam. In this collaboration with Emiel van Miltenburg, we are building a sound corpus annotated with how people perceive the sounds. Sounds can often be interpreted in multiple ways, but the tags in existing sound corpora do not directly relate to the acoustic features of the sounds themselves. Because these tags only partially represent what can be heard in a sound, the ranking of search results is suboptimal. In this research, we use crowdsourcing to build an annotated corpus of sounds from freesound.org with meaningful, perceptually grounded representations. The slides are available below and on SlideShare.
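To see the gap between tags and acoustic content for yourself, you can query the public Freesound APIv2 text search endpoint and inspect the user-supplied tags on the results. The sketch below is a minimal illustration, not part of our pipeline; it assumes you have requested an API key from freesound.org, and `"bell"` is just an example query.

```python
import requests

# Placeholder: obtain a key at https://freesound.org/apiv2/apply/
API_KEY = "YOUR_API_KEY"

# Freesound APIv2 text search; restrict the returned fields to keep
# the response small.
response = requests.get(
    "https://freesound.org/apiv2/search/text/",
    params={
        "query": "bell",
        "fields": "id,name,tags",  # id, file name, and user tags only
        "token": API_KEY,
    },
)
response.raise_for_status()

# Print the tags per result; note how many describe the source event
# or recording context rather than acoustic properties such as pitch,
# timbre, or duration.
for sound in response.json()["results"]:
    print(sound["id"], sound["name"], sound["tags"])
```

Running a few such queries makes the motivation concrete: the tags are useful for finding a sound, but they rarely say anything about what the sound actually sounds like, which is exactly what our crowdsourced annotations aim to capture.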