EU Science Hub

Volunteered metadata, and metadata on VGI: Challenges and current practices

In the face of an exploding range of volunteered data initiatives, it is important to maintain good metadata and quality information in order to ensure the appropriate combination and re-use of the resulting datasets. At the same time, there is increasing evidence that validation and quality assessment of data (whether that data be volunteered or ‘official’) can sometimes be usefully crowdsourced, i.e. the required efforts can be distributed to a large number of people. However, as with VGI itself, maintaining the consistency, semantics and reliability of volunteered metadata presents a number of challenges. Initiatives which archive the history of features and tags (e.g. OpenStreetMap) lend themselves to some mapping of disputed features, but among citizen science projects in general there is often limited scope for users to comment on their own or others’ submissions in a consistent way which may be translated to any of the currently accepted geospatial metadata standards. At the same time, platforms which allow the publication of more ‘authoritative’ datasets (e.g. GeoNode and ArcGIS Online) have introduced the option of user comments and ratings. Volunteered metadata (on both authoritative and VG information) is potentially of huge value in assessing fitness-for-purpose, but some form of standardization is required in order to aggregate diverse ‘opinions’ on the content and quality of datasets, and extract the maximum value from this potentially vital resource. We discuss the major challenges and present a set of examples of current practices which may assist in this aggregation.
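As a rough illustration of the aggregation problem described above, the sketch below combines volunteered per-dataset ratings into a simple fitness-for-purpose summary. The data layout and the reviewer reputation weighting are purely hypothetical assumptions for this sketch; they are not drawn from OpenStreetMap, GeoNode, ArcGIS Online, or any metadata standard.

```python
from collections import defaultdict

def aggregate_ratings(reviews):
    """Aggregate volunteered ratings (1-5 stars) into per-dataset summaries.

    `reviews` is a list of (dataset_id, rating, reviewer_weight) tuples.
    The reviewer weight is a hypothetical reputation factor: a naive way
    to down-weight one-off contributors relative to trusted reviewers.
    """
    # Per dataset: [weighted rating sum, total weight, review count]
    totals = defaultdict(lambda: [0.0, 0.0, 0])
    for dataset_id, rating, weight in reviews:
        entry = totals[dataset_id]
        entry[0] += rating * weight
        entry[1] += weight
        entry[2] += 1
    # Report a weighted mean score plus the number of underlying reviews,
    # since a score based on one opinion means less than one based on many.
    return {
        ds: {"score": round(wsum / wtotal, 2), "n_reviews": n}
        for ds, (wsum, wtotal, n) in totals.items()
    }

# Hypothetical example input: dataset identifiers are invented.
reviews = [
    ("osm_buildings", 4, 1.0),
    ("osm_buildings", 5, 0.5),
    ("landcover_2023", 2, 1.0),
]
print(aggregate_ratings(reviews))
```

Even this toy version shows why standardization matters: the ratings can only be pooled because every platform in the example reports them on the same scale with comparable semantics, which is exactly what diverse real-world comment and rating systems currently lack.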