FAIR Metrics

FAIR metrics are measures designed to assess the extent to which data and other digital objects adhere to the FAIR principles: Findable, Accessible, Interoperable, and Reusable. The FAIR principles provide a framework for enhancing the usability and value of digital assets, particularly in the context of scientific research and data sharing.

FAIR Metrics or Indicators

Wilkinson et al. have developed indicators that researchers can use to assess the FAIRness of resources such as datasets, code, workflows, and research objects. The initial FAIRness assessments were survey-based, relied on the first-generation (Gen1) Maturity Indicators, and could only be carried out manually. Using the survey results as a comprehensive overview of community FAIRness approaches, the authors subsequently developed second-generation (Gen2) Maturity Indicators (MIs), which are fully automated; the current iteration of the FAIR Evaluator employs these Gen2 MIs. At present, the Maturity Indicator system consists of 14 items that map closely to the FAIR principles and are both human- and machine-readable.
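To make the idea of an automated, machine-actionable test concrete, the following Python sketch emulates a Gen2-style check for principle F1 (globally unique identifiers). It is an illustration only, not the FAIR Evaluator's actual implementation: the GUID patterns, the resolution step, and the result structure are simplified assumptions.

```python
# Illustrative sketch of a Gen2-style Maturity Indicator test (NOT the
# official FAIR Evaluator code): does the identifier match a recognized
# GUID scheme, and does it resolve over HTTP?
import re
import requests

# Simplified, assumed patterns for common GUID schemes (checked in order).
GUID_PATTERNS = {
    "doi": re.compile(r"^10\.\d{4,9}/\S+$"),
    "handle": re.compile(r"^\d+(\.\d+)*/\S+$"),
    "url": re.compile(r"^https?://\S+$"),
}

def identifier_scheme(guid):
    """Return the first GUID scheme the identifier matches, if any."""
    for scheme, pattern in GUID_PATTERNS.items():
        if pattern.match(guid):
            return scheme
    return None

def test_f1_unique_identifier(guid):
    """Emulate a machine-actionable pass/fail test for principle F1."""
    scheme = identifier_scheme(guid)
    if scheme is None:
        return {"metric": "F1", "pass": False,
                "comment": "identifier matches no known GUID scheme"}
    if scheme == "doi":
        url = f"https://doi.org/{guid}"
    elif scheme == "handle":
        url = f"https://hdl.handle.net/{guid}"
    else:
        url = guid
    resp = requests.head(url, allow_redirects=True, timeout=10)
    return {"metric": "F1", "pass": resp.ok,
            "comment": f"scheme={scheme}, HTTP status={resp.status_code}"}

print(test_f1_unique_identifier("10.5281/zenodo.1234567"))  # example DOI
```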

The indicators are described using the template shown in the following table.

The template for creating FAIR Metrics [2]

FIELD | DESCRIPTION
--- | ---
Metric Identifier | FAIR Metrics should, themselves, be FAIR objects, and thus should have globally unique identifiers.
Metric Name | A human-readable name for the metric.
To which principle does it apply? | Metrics should address only one sub-principle, since each FAIR principle is particular to one feature of a digital resource; metrics that address multiple principles are likely to be measuring multiple features, and those should be separated whenever possible.
What is being measured? | A precise description of the aspect of the digital resource that is going to be evaluated.
Why should we measure it? | Describe why it is relevant to measure this aspect.
What must be provided? | What information is required to make this measurement?
How do we measure it? | In what way will that information be evaluated?
What is a valid result? | What outcome represents “success” versus “failure”?
For which digital resource(s) is this relevant? | If possible, a metric should apply to all digital resources; however, some metrics may be applicable only to a subset. In this case, it is necessary to specify the range of resources to which the metric is reasonably applicable.
Examples of their application across types of digital resource | Whenever possible, provide an existing example of success and an example of failure.
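Since the template requires that metrics themselves be FAIR objects, it lends itself to a machine-readable representation. The following Python sketch captures the template's fields as a structured record; the `FairMetric` class, the example identifier, and the sample values are hypothetical, loosely paraphrasing a Gen2-style indicator for principle F1.

```python
# Hypothetical sketch: the metric template as a machine-readable record.
# Field names mirror the table above; sample values are illustrative only.
from dataclasses import dataclass

@dataclass
class FairMetric:
    identifier: str       # globally unique identifier of the metric itself
    name: str             # human-readable name
    principle: str        # the single (sub-)principle addressed
    measured: str         # what is being measured
    rationale: str        # why it should be measured
    required_input: str   # what must be provided
    procedure: str        # how it is measured
    valid_result: str     # what counts as success versus failure
    applicability: str    # which digital resources it applies to
    examples: str         # examples of success and failure

metric_f1 = FairMetric(
    identifier="https://example.org/metrics/FM_F1",  # hypothetical IRI
    name="Identifier uniqueness",
    principle="F1",
    measured="Whether the resource uses a globally unique identifier scheme",
    rationale="Without a GUID, the resource cannot be unambiguously cited",
    required_input="The identifier assigned to the resource",
    procedure="Check the identifier against registered GUID schemes",
    valid_result="Pass if the scheme is registered; fail otherwise",
    applicability="All digital resources",
    examples="Pass: a DOI; fail: a local database key",
)
```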

RDA FAIR Data Maturity Model

In response to the diversity of FAIRness measurements arising from varying interpretations of the principles, the RDA Working Group “FAIR data maturity model,” formed in January 2019, set out to create a unified set of fundamental assessment criteria for FAIRness, intended to be adopted as an RDA Recommendation. Throughout 2019 and the first months of 2020, the working group formulated a set of indicators and corresponding maturity levels.

As a result of this work, a first set of guidelines and a checklist for implementing the indicators were produced, with the objective of further aligning the guidelines for evaluating FAIRness with the needs of the community. [3]

RDA assigns each indicator a priority (“● useful / ●● important / ●●● essential”) and provides suggestions for assessment (implementation).

Currently there are 41 indicators.
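As a sketch of how the priority levels might feed into an overall assessment, the snippet below rolls up per-indicator results: all “essential” indicators must pass, while the remainder contribute to a pass rate. The roll-up rule and the sample results are assumptions for illustration; the RDA recommendation defines maturity levels per indicator and leaves the exact evaluation method to implementers. The indicator IDs follow the RDA naming style (e.g., RDA-F1-01M), but the pass/fail values are invented.

```python
# Hypothetical roll-up of RDA-style indicator results by priority level.
# The aggregation rule is an assumption, not part of the RDA Recommendation.
ESSENTIAL, IMPORTANT, USEFUL = "essential", "important", "useful"

# Sample results: indicator ID -> (priority, passed). Values are invented.
indicators = {
    "RDA-F1-01M": (ESSENTIAL, True),
    "RDA-A1-02M": (ESSENTIAL, True),
    "RDA-I1-01M": (IMPORTANT, False),
    "RDA-R1-01M": (USEFUL, True),
}

def rda_rollup(results: dict[str, tuple[str, bool]]) -> dict:
    """All essential indicators must pass; others yield a pass rate."""
    essentials = [ok for prio, ok in results.values() if prio == ESSENTIAL]
    others = [ok for prio, ok in results.values() if prio != ESSENTIAL]
    return {
        "essential_all_pass": all(essentials),
        "other_pass_rate": sum(others) / len(others) if others else None,
    }

print(rda_rollup(indicators))
# -> {'essential_all_pass': True, 'other_pass_rate': 0.5}
```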

FAIRsFAIR

Version 0.5 of this specification introduces 17 minimum viable metrics designed to systematically gauge the FAIRness of research data objects. A research data object, encompassing data, metadata, and documentation (such as policies and procedures), plays a crucial role in shaping the FAIR assessment implementation: these components can serve either as resources for evaluation or as evidence supporting FAIR implementation. The metrics were developed incrementally and are rooted in the indicators proposed by the RDA FAIR Data Maturity Model Working Group, in previous contributions from project partners such as FAIRdat and FAIREnough, and in the WDS/RDA Assessment of Data Fitness for Use checklist. They have been continuously refined through focus groups, internal reviews, public feedback, and the use of F-UJI and FAIR-Aware, tools specifically designed to assist FAIR assessment in selected use cases. The automated FAIR assessment tool F-UJI has been applied to datasets from five CoreTrustSeal-certified repositories. The authors encourage the adaptation of these metrics and tools to support various FAIR assessment scenarios throughout the research data lifecycle.
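F-UJI exposes its assessment as a REST service, so a FAIRness check can be scripted. The sketch below posts an identifier to a locally running instance; the default port (1071), endpoint path, payload fields, and response fields are based on the F-UJI project's documented defaults, but they may differ across versions and deployments and should be verified against the tool's README.

```python
# Hedged sketch of calling a locally running F-UJI instance over REST.
# Endpoint, payload, and response fields are assumptions based on F-UJI's
# documented defaults; verify against your deployment before relying on them.
import requests

FUJI_URL = "http://localhost:1071/fuji/api/v1/evaluate"

payload = {
    "object_identifier": "https://doi.org/10.5281/zenodo.1234567",  # example PID
    "test_debug": True,
    "use_datacite": True,
}

# F-UJI protects the endpoint with HTTP basic auth; use the credentials
# configured for your local instance.
resp = requests.post(FUJI_URL, json=payload,
                     auth=("user", "password"), timeout=300)
resp.raise_for_status()
report = resp.json()

# Each entry in "results" is expected to correspond to one FAIRsFAIR
# metric (e.g., FsF-F1-01D) with a pass/fail test status.
for result in report.get("results", []):
    print(result.get("metric_identifier"), result.get("test_status"))
```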

The metrics adhere to the format outlined in the template below, adapted from Wilkinson et al. [1]. Each metric table provides a description and assessment details, along with the metric’s relation to the pertinent FAIR principle(s) and CoreTrustSeal requirement(s). Currently there are 17 items.

Modified Metric Template [4]

FIELD | DESCRIPTION
--- | ---
Metric Identifier | The local (FAIRsFAIR) identifier of the metric.
Metric Name | A human-readable name for the metric.
Description | The definition of the metric, including examples.
FAIR Principle | The FAIR principle most related to the metric.
CoreTrustSeal Alignment | The CoreTrustSeal requirement(s) most related to the metric.
Assessment | Requirements and methods to perform the assessment against the metric.
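For concreteness, here is how one metric might look when expressed in this template. The entry paraphrases FAIRsFAIR metric FsF-F1-01D (“Data is assigned a globally unique identifier”); the wording and the CoreTrustSeal mapping shown are approximations, not quotations from the specification.

```python
# Illustrative instance of the modified template, paraphrasing metric
# FsF-F1-01D. Values are approximations, not text from the specification.
fsf_f1_01d = {
    "Metric Identifier": "FsF-F1-01D",
    "Metric Name": "Data is assigned a globally unique identifier",
    "Description": "A data object is identified by an identifier that is "
                   "unique worldwide, e.g., an IRI or a DOI.",
    "FAIR Principle": "F1",
    "CoreTrustSeal Alignment": "Data discovery and identification",  # assumed mapping
    "Assessment": "Check whether the identifier follows a recognized "
                  "globally unique identifier scheme.",
}
```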

FAIR Metrics for EOSC

The EOSC FAIR Working Group (WG) has been assigned the responsibility of defining a collection of metrics for evaluating the FAIRness of datasets and other digital objects to be utilized within the European Open Science Cloud (EOSC). To carry out this task, the FAIR WG established a Metrics and Certification Task Force, working in tandem with related certification initiatives.

Literature