Research and analysis

Browse our database of relevant research published by the Hub and other sources.

NIST has released a plan for prioritizing federal agency engagement in the development of standards for artificial intelligence (AI). The plan recommends the federal government “commit to deeper, consistent, long-term engagement in AI standards development activities to help the United…
Organisation: National Institute of Standards and Technology
Last updated: 9 Aug 2019
The artificial intelligence (AI) revolution is upon us, with the promise of advances such as driverless cars, smart buildings, automated health diagnostics and improved security monitoring. Many current efforts aim to measure system trustworthiness through measurements of Accuracy, Reliability,…
Organisation: National Institute of Standards and Technology
Last updated: 2 Mar 2021
Research and analysis item

Towards auditable AI systems

Organisation: Federal Office for Information Security
Last updated: 1 May 2021
As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and…
Organisation: National Institute of Standards and Technology
Last updated: 1 Feb 2023
Organisation: ANSI
Last updated: 1 Feb 2023
In this paper, we make the case that interpretability and explainability are distinct requirements for machine learning systems. To make this case, we provide an overview of the literature in experimental psychology pertaining to interpretation (especially of numerical stimuli) and…
Organisation: National Institute of Standards and Technology
Last updated: 12 Apr 2021
We introduce four principles for explainable artificial intelligence (AI) that comprise fundamental properties for explainable AI systems. We propose that explainable AI systems deliver accompanying evidence or reasons for outcomes and processes; provide explanations that are understandable to individual users;…
Organisation: National Institute of Standards and Technology
Last updated: 1 Feb 2023
Research and analysis item

AI risk management framework

NIST is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST Artificial Intelligence Risk Management Framework (AI RMF or Framework) is intended for voluntary use and to improve the ability…
Organisation: National Institute of Standards and Technology
Last updated: 1 Feb 2023
This NIST Interagency/Internal Report (NISTIR) is intended as a step toward securing applications of Artificial Intelligence (AI), especially against adversarial manipulations of Machine Learning (ML), by developing a taxonomy and terminology of Adversarial Machine Learning (AML). Although AI also includes…
Organisation: National Institute of Standards and Technology
Last updated: 1 Oct 2019