
Algorithmic Bias and Risk Assessments: Lessons from Practice

  • Original Paper
  • Published in Digital Society

Abstract

In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of assessments: an ethical risk assessment and a narrower, technical algorithmic bias assessment. We explain how the two assessments depend on each other, highlight the importance of situating the algorithm within its particular socio-technical context, and discuss a number of lessons and challenges for algorithm assessments and, potentially, for algorithm audits. The discussion builds on our team’s experience of advising and conducting ethical risk assessments for clients across different industries over the last four years. Our main goal is to reflect on the key factors that are potentially ethically relevant in the use of algorithms and to draw lessons for the nascent algorithm assessment and audit industry, in the hope of helping all parties minimize the risk of harm from the use of algorithms.


Data Availability

Data sharing is not applicable to this article, as no datasets are directly relevant to its content.

Notes

  1. See, e.g., Liao (2020, 3) and Russell and Norvig (2010, 2) for similar definitions.

  2. We use “ethical risk assessment” rather than “ethical impact assessment” because the latter naturally suggests the actual impact or consequences of the use of an algorithm, while the former covers all ethical risks, whether they come to be realized or not. We recognize, however, that “impact” is often used in the broader sense that covers risk.

  3. See, for example, the Canadian government’s (2021) algorithmic impact assessment tool (a rough illustration of questionnaire-based scoring appears after these notes): https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html

  4. An exception is a case study by Mökander and Floridi (2022). They provide a detailed case study based on an observation of an “ethics-based audit” of AstraZeneca conducted by a third party (not by the authors themselves).

  5. Carrier and Brown’s (2021) taxonomy also includes an “internal audit,” which is carried out by a group or unit working independently within, and in service of, the organization rather than society or some other party, but which otherwise has the characteristics of an audit.

  6. https://oecd.ai/en/ai-principles

  7. https://www.hhs.gov/sites/default/files/hhs-trustworthy-ai-playbook.pdf

  8. See, for example, the Ada Lovelace Institute (2020) report: https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/

  9. In some cases, one might be unfairly treated or discriminated against without being worse off than one otherwise would have been, and at least in this sense one might not be “harmed” by unfairness. We use “harm” in a broader sense that includes such cases of unfair or discriminatory treatment.

  10. For more on this, see Andrus and Villeneuve (2022) and Benjamin (2019).

  11. For example, see Wachter et al.’s (2021) “conditional demographic disparity,” and IBM Research’s list of metrics on the AI Fairness 360 site (an illustrative computation of two such metrics appears after these notes): https://aif360.mybluemix.net

  12. For some discussion of this point, see Dotan (2021) and Fazelpour and Danks (2021).

  13. See Brown et al. (2021) for a similar list. For more on transparency and its ethical relevance, see Basl et al. (2021).

  14. See, for example, Ben-Shahar and Schneider (2014). Thanks to an anonymous referee for raising this important point.
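
To make note 3 more concrete: questionnaire-style impact assessment tools typically score weighted answers and map the total to a risk tier. The following Python sketch illustrates that general mechanism only; the questions, weights, and level thresholds are hypothetical stand-ins, not the official content of the Canadian government’s tool.

# Hypothetical sketch of questionnaire-based impact scoring (see note 3).
# All questions, weights, and thresholds below are illustrative, not the
# official content of the Canadian government's AIA tool.

QUESTIONNAIRE = {
    # question id: (answer score, maximum possible score)
    "affects_legal_rights": (3, 4),
    "uses_personal_data": (2, 4),
    "human_in_the_loop": (0, 4),  # 0 = full human review before decisions take effect
    "reversibility_of_outcome": (1, 4),
}

def impact_level(answers):
    """Map the fraction of the maximum raw score to a risk tier."""
    raw = sum(score for score, _ in answers.values())
    max_raw = sum(maximum for _, maximum in answers.values())
    pct = raw / max_raw
    if pct <= 0.25:
        return "Level I (little to no impact)"
    if pct <= 0.50:
        return "Level II (moderate impact)"
    if pct <= 0.75:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

print(impact_level(QUESTIONNAIRE))  # 6/16 = 0.375 -> Level II (moderate impact)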
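
To make note 11 concrete, here is a minimal, self-contained Python sketch (on synthetic data) of two widely used group-fairness metrics of the kind catalogued on AI Fairness 360: the statistical parity difference and the disparate impact ratio. It illustrates the metric definitions only and is not the assessment methodology described in this paper.

import numpy as np

# Synthetic binary decisions and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])  # model's decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = privileged, 1 = unprivileged

rate_priv = y_pred[group == 0].mean()    # selection rate, privileged group
rate_unpriv = y_pred[group == 1].mean()  # selection rate, unprivileged group

# Statistical parity difference: 0 means equal selection rates.
spd = rate_unpriv - rate_priv

# Disparate impact ratio: values below ~0.8 are often flagged
# under the informal "four-fifths rule."
di = rate_unpriv / rate_priv

print(f"selection rates: {rate_priv:.2f} vs. {rate_unpriv:.2f}")  # 0.60 vs. 0.40
print(f"statistical parity difference: {spd:+.2f}")               # -0.20
print(f"disparate impact ratio: {di:.2f}")                        # 0.67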

References


Author information


Corresponding author

Correspondence to Ali Hasan.

Ethics declarations

Conflict of Interest

The article is based on the authors’ work for BABL AI, a consultancy that focuses on responsible AI governance, algorithmic risk and bias assessments, and corporate training on responsible AI.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hasan, A., Brown, S., Davidovic, J. et al. Algorithmic Bias and Risk Assessments: Lessons from Practice. DISO 1, 14 (2022). https://doi.org/10.1007/s44206-022-00017-z

