
Human-Centered Interactive Machine Learning


There is a responsibility crisis in the machine learning community. Practitioners are creating systems without empathizing with the humans involved at every step along the way.

Papers are often published using data from human subjects who have not provided informed consent. Data are collected by unfair and unethical means. Models are released (or withheld) without due consideration of potential impacts.

By acknowledging the crisis, we can advance scientific research in a fair, transparent, and accountable way. The first step in acknowledging the magnitude of the crisis is to appreciate that the human is central to all machine learning.

All machine learning is interactive.

Machine learning aims to augment humans’ ability to learn and make decisions over time through the development of semi-automated interactive decision-making systems. This interaction represents a collaboration between multiple intelligent systems—humans and machines.

A lack of appropriate consideration for the humans involved can lead to problematic system behaviour and to issues of fairness, accountability, and transparency.

An obligation of responsibility for public interaction means acting with integrity, honesty, and fairness, and abiding by applicable legal statutes. With these values and principles in mind, we as a research community can better achieve the collective goal of augmenting human ability.

I have developed a guide intended for artificial intelligence practitioners who incorporate human factors in their work. These practitioners are responsible for the health, safety, and well-being of the humans who interact with their systems.

This practical guide aims to support many of the responsible decisions necessary throughout iterative design, development, deployment, and dissemination of machine learning systems.

The guide is broken down as follows:

Human-Centered Design #

  • Step 1: Define the hypothesis
  • Step 2: Loop in humans
  • Step 3: Define the goal
  • Step 4: Define the data

To conclude this section, I include two useful design-thinking exercises: The Whiteboard Model and The Pre-mortem.

Develop, Analyze, Evaluate, and Iterate #

  • Step 5: Build model
  • Step 6: Evaluate model
  • Step 7: Analyze trade-offs
  • Step 8: Re-evaluate and iterate
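
As a toy illustration only (not code from the guide), here is a minimal Python sketch of Steps 5-8 on synthetic data: two candidate models are built and evaluated, and accuracy is weighed against a simple group-wise selection-rate gap, the kind of trade-off Step 7 asks practitioners to consider before iterating. The dataset, candidate models, and disparity metric are all hypothetical choices made for the sake of the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: `group` is a hypothetical sensitive attribute.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=n), group])
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

def selection_rate_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between groups
    (a crude stand-in for a fairness trade-off metric)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Steps 5-6: build and evaluate candidate models.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "shallow_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}
results = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    results[name] = {
        "accuracy": accuracy_score(y_test, y_pred),
        "selection_rate_gap": selection_rate_gap(y_pred, g_test),
    }

# Steps 7-8: compare accuracy against the disparity metric, then revisit the
# hypothesis, goal, or data definitions and iterate if the trade-off is poor.
for name, metrics in results.items():
    print(name, metrics)
```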

Disseminate #

  • Step 9: Deploy the system
  • Step 10: Communicate

The guide will be presented as a poster at the 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM) in Montreal, Canada, July 7-10, 2019, and is also posted on arXiv (PDF).