I participate by guiding the project with the core team, leading the Metadata Evaluation and Guidance Project, and contributing to other projects and discussions. I have also contributed blog posts to the project, including:
No.men.cla.ture - Proposing a set of terms to use when talking about metadata communities, recommendations, and dialects.
Can We Agree - A comparison of discovery metadata recommendations from several communities.
Many researchers create software as a critical part of their research. Whether for data cleaning, processing, modeling, or visualization, these software resources need to be integrated into the corpus of research objects. They also need to be described and referenced so that other members of the scholarly community can find them.
Software citation is an ongoing interest of mine. Several of my blog posts and papers describe my progress:
The Metadata Improvement and Guidance (MetaDIG) Project, funded by the U.S. National Science Foundation, combines evaluation of individual metadata records, using tests defined by the provider or repository, with quantitative evaluation and comparison of entire metadata collections.
We analyzed EML and CSDGM metadata records from DataONE to explore how community recommendations influence metadata completeness, measuring each record against the LTER recommendation for Completeness. The goal of the study was to quantify the completeness of metadata records and to determine whether metadata developed by LTER is more complete, with respect to that recommendation, than other EML and CSDGM collections. We conclude that the LTER records are broadly more complete than the other EML collections, but similar in completeness to the CSDGM collections.
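To make the measure concrete, here is a rough sketch of completeness scoring. The recommended-element set and the toy collections are hypothetical stand-ins; the actual LTER recommendation, and the parsing of EML and CSDGM XML records, are omitted.

```python
# Sketch of collection-level completeness scoring against a recommendation.
# RECOMMENDED stands in for the LTER Completeness recommendation; each toy
# record below is just the set of elements it contains.

RECOMMENDED = {
    "title", "abstract", "keywords", "creator",
    "temporal_coverage", "spatial_coverage", "methods", "distribution",
}

def completeness(record_fields):
    """Fraction of recommended elements present in a single record."""
    return len(RECOMMENDED & set(record_fields)) / len(RECOMMENDED)

def collection_completeness(records):
    """Mean completeness over a collection of records."""
    return sum(completeness(r) for r in records) / len(records)

lter = [
    {"title", "abstract", "keywords", "creator", "methods", "spatial_coverage"},
    {"title", "abstract", "keywords", "creator", "temporal_coverage", "distribution"},
]
other_eml = [
    {"title", "creator"},
    {"title", "abstract", "keywords"},
]

print(f"LTER:      {collection_completeness(lter):.2f}")       # 0.75
print(f"Other EML: {collection_completeness(other_eml):.2f}")  # 0.31
```

Comparing these per-collection averages is the kind of quantitative contrast the study draws between the LTER records and the other collections.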
The first implementation of the Metadata Quality Engine is at the NSF Arctic Data Center. When you search for data and select a dataset, a Quality Report button is available. Click it to see a report generated from a suite of tests designed specifically for the Arctic Data Center. This collection of glacier photos scored very well.
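The structure of such an engine can be sketched in a few lines. The check names, thresholds, and dictionary-based record below are hypothetical illustrations, not the Arctic Data Center's actual suite.

```python
# Minimal sketch of repository-specific record checks in the spirit of the
# Metadata Quality Engine. All names and thresholds here are hypothetical.

def has_title(record):
    return bool(record.get("title"))

def abstract_is_substantive(record):
    # Hypothetical threshold: require at least 100 characters of abstract.
    return len(record.get("abstract", "")) >= 100

def has_contact_email(record):
    return "@" in record.get("contact", "")

# Each repository registers its own suite of named checks.
ARCTIC_CHECKS = {
    "Title present": has_title,
    "Abstract is substantive": abstract_is_substantive,
    "Contact email present": has_contact_email,
}

def quality_report(record, checks):
    """Run every check against one metadata record; return pass/fail by name."""
    return {name: check(record) for name, check in checks.items()}

record = {
    "title": "Glacier photograph collection",
    "abstract": "Historic photographs documenting glacier change...",
    "contact": "curator@example.org",
}
print(quality_report(record, ARCTIC_CHECKS))
```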
Identifying providers of consistent affiliation information within the Crossref community and automating the identification of organizations are important steps toward the goal of assigning identifiers (e.g., RORs) to these organizations.
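One way to sketch that automation uses the ROR API's affiliation matching endpoint. The response fields I rely on (items, chosen, organization) reflect my reading of the v1 API and are worth verifying against current ROR documentation.

```python
# Sketch: map a raw affiliation string to a ROR identifier using the ROR
# API's affiliation matching endpoint. Error handling, rate limiting, and
# human review of weak matches are omitted.
import requests

def match_affiliation(affiliation):
    """Return (ror_id, name) for a confident match, or None."""
    resp = requests.get(
        "https://api.ror.org/organizations",
        params={"affiliation": affiliation},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("items", []):
        if item.get("chosen"):  # ROR flags at most one high-confidence match
            org = item["organization"]
            return org["id"], org["name"]
    return None  # no confident match; route to manual review

print(match_affiliation("Dept. of Geophysics, Colorado School of Mines"))
```

In practice, strings that come back without a confident match are exactly the cases where consistent affiliation metadata from providers matters most.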