Table 2 Potential AI Risks as identified in our Software Engineering Review.
From: An example of governance for AI in health services from Aotearoa New Zealand
| Risk type | Description |
|---|---|
| AI development risk | Risk of the AI failing due to insufficient engineering due diligence during development of the algorithm. Examples include, but are not limited to: direct coding errors, poor feature engineering, the AI predicting on confounding features, absence of bias analysis, overfitting, and poor problem definition. |
| Data accuracy risk | Risks associated with the accuracy of the dataset. Examples include bias against data-poor regions, bias through historical discrimination, unit errors, errors in data-collection hardware, and data-labelling risks. |
| Software erosion risk | Risk of the AI eroding over time. This can be caused by classical software erosion, such as system upgrades; by AI-specific erosion, such as changes in the healthcare environment (for example, a pandemic); or by feedback-loop risk, where a successful AI may distort patient outcomes in future datasets. |
| Clinical use risk | Risk of the AI being used inappropriately, or in a context where the intended health outcome cannot be delivered. |