How our Sapia Labs team adapted a Google invention to raise the bar for Ai transparency in recruitment

Artificial Intelligence (mostly machine learning) is increasingly used for high-impact decision-making, so it is important to ensure these models are used fairly.

At Sapia, we recognise the impact these technologies have on candidates when used in screening. We are committed to ensuring fairness by making the evaluations more inclusive, valid, unbiased and explainable – this is the essence of our FAIR™ framework.  

The Fair Ai for Recruitment (FAIR™) framework presents a set of measures and guidelines to implement and maintain fairness in Ai-based candidate selection tools. It does not dictate how Ai algorithms must be built, as these are constantly evolving. Instead, it seeks to provide a set of measures that both Ai developers and users can adopt to ensure the resulting system has factored in fairness.

A key concern with machine-learning-based applications is the lack of transparency around training data and the behavioural characteristics of predictive models. In most instances, for example, there is no documentation of intended or unintended use cases, training data, performance, model behaviour, or bias testing.

Recognising this limitation, researchers from Google’s Ethical Artificial Intelligence team and the University of Toronto proposed Model Cards in their research paper “Model Cards for Model Reporting”. A Model Card is intended to be a standard template for reporting important information about a model, helping users make informed decisions about its suitability. The paper outlines the typical aspects a Model Card should cover, such as “how it was built, what assumptions were made during its development, what type of model behaviours could be experienced by different cultural, demographic, or phenotypic population groups, and an evaluation of how well the model performs with respect to those groups.”

Sapia Labs has adopted and customised the concept of a Model Card to communicate a broad range of important information about a model to relevant internal and external stakeholders. It acts as a model specification, and the single source for all model details.

Here are some of the topics covered in a Sapia Model Card:

  • Model Details: High-level information about the model, under subsections such as overview, version, owners, licence, references, model architecture, feature versions, and input and output formats. These details clearly establish responsibility for the model and document all the relevant information.
  • Considerations: Important factors in using the model, such as intended users and use cases, ensuring the model is used only as originally intended. This section also includes a colour-coded summary of adverse impact testing results (covered under quantitative analysis below).
  • Dataset: Sources and composition of the dataset and distribution charts of features used by the model.
  • Quantitative analysis:
    • Adverse impact testing: Statistics on sensitive attributes and groups, and a visual overview of adverse impact testing results in terms of effect sizes and the ratio of recommendation rates (the 4/5ths rule), followed by a detailed report on adverse impact at the individual-feature level.
    • Model dynamics: Distribution of the outcome score and the behaviour of the model, presented with partial dependency plots, which improve the explainability of the model.
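To make the two headline adverse-impact metrics concrete, here is a minimal sketch of how recommendation-rate ratios (the 4/5ths rule) and a standard effect size (Cohen's d) can be computed. The function names, thresholds applied, and sample data are illustrative assumptions, not Sapia's actual implementation.

```python
import numpy as np

def adverse_impact_ratio(selected, group):
    """Ratio of each group's recommendation rate to the highest group's rate.

    Under the conventional 4/5ths rule, a group whose ratio falls below
    0.8 is flagged for further review. `selected` is a 0/1 array of
    recommendations; `group` holds one group label per candidate.
    """
    selected = np.asarray(selected, dtype=float)
    group = np.asarray(group)
    rates = {g: selected[group == g].mean() for g in np.unique(group)}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def cohens_d(a, b):
    """Effect size between two groups' outcome scores, using pooled SD."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_sd = np.sqrt(
        ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
        / (len(a) + len(b) - 2)
    )
    return (a.mean() - b.mean()) / pooled_sd

# Illustrative example: 10 candidates across two groups.
selected = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratios = adverse_impact_ratio(selected, group)
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups below 4/5ths

scores_a = [0.7, 0.8, 0.6, 0.9, 0.5]
scores_b = [0.5, 0.6, 0.4, 0.7, 0.3]
d = cohens_d(scores_a, scores_b)  # positive means group A scores higher
```

In this toy data, group A is recommended at 3/5 and group B at 2/5, so B's ratio is about 0.67 and falls below the 0.8 threshold.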

The generation of the Model Card is automated and is an integral part of the model build process, ensuring a Model Card is available for every model.
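One way to automate card generation is to treat the card as structured data that the build pipeline populates and serialises alongside the trained model. The schema below is a minimal, hypothetical sketch; the field names are assumptions for illustration, not Sapia's actual Model Card format.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    """Illustrative Model Card schema, serialisable as a build artifact."""
    name: str
    version: str
    owners: list
    intended_use: str
    adverse_impact_summary: dict = field(default_factory=dict)

    def to_json(self) -> str:
        # Rendered card can be stored next to the model binary,
        # so every published model ships with its card.
        return json.dumps(asdict(self), indent=2)

# Populated by the build pipeline after training and testing complete.
card = ModelCard(
    name="screening-model",
    version="1.2.0",
    owners=["labs-team"],
    intended_use="Candidate screening only",
    adverse_impact_summary={"gender": "pass", "age": "pass"},
)
print(card.to_json())
```

Because the card is emitted as a build step rather than written by hand, it cannot drift out of sync with the model it describes.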

Having a standardised document for communicating a model specification has enabled faster and more effective decision making around models, especially on whether a model should go live. Integrating Model Cards is part of the continuous improvement process at Sapia Labs on the ethical use of ML/Ai. The contents continue to evolve based on the team’s ongoing research and requests from other stakeholders. As far as we know, this effort is a first for the employment assessment industry, and we are proud to be leading in this space.
