Welcome to the safety, security and resilience discussion!
Activities for this topic will be facilitated by the BSI team, one of the strategic partners for the AI Standards Hub, and will include a mix of research, online discussion and events. Together with you, the members of this community, we’ll be looking at some of the key challenges in this space and how these could be addressed by existing or potential new standards. We’ll share more information about these activities through this thread over the coming weeks. In the meantime, we’d love to hear your thoughts on this topic.
Safety, security and resilience encompass a wide range of challenges, both cross-sector and use-case specific. Our initial research has already surfaced a few interesting points: understanding where liability falls for safety-critical applications such as automated vehicles and machines, ensuring the security of data and models, and building the resilience needed to resist manipulation and ensure the longevity of models.
Stay tuned to find out how you can get involved in the initial discussions and events on this topic. And if you already have ideas on issues to address or on relevant standards, please let us know by adding your comments to this thread.
For those who are interested, the Office for Product Safety and Standards recently published a study looking at the impact of AI on product safety (i.e. consumer products such as appliances, toys and robots). The study identifies a number of characteristics of AI with the potential to lead to technical challenges and harms. It also looks at regulatory and liability challenges, as well as opportunities for AI to enhance safety.
Of particular relevance to this hub, the study also includes an assessment of applicable standards and other initiatives to mitigate these risks. See below: