Responsible AI and AI Risk Management Frameworks
The number, types, capabilities, and uses of AI-enabled systems and applications are exploding. Unfortunately, the gap between the power of these systems and their governance is wide and growing wider by the day. This poses significant risks to organizations and to the employees who use these technologies.
Numerous Responsible AI (RAI) and AI Risk Management (RM) frameworks have been proposed to address this gap. Their number is growing rapidly, and their structures vary widely. The resulting confusion has left many organizations hesitant to implement the RAI programs needed to ensure the safe and ethical development, deployment, and use of AI systems.
The Center for Applied AI (C4AI) has proposed a solution that helps organizations get moving confidently and quickly. It includes a living, searchable database of RAI frameworks, along with a Framework Profiler that queries the database to quickly identify the elements of the most effective and efficient RAI framework for governing an organization's specific AI systems and applications.
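To make the Framework Profiler idea concrete, the sketch below shows one way such a query over a frameworks database might work. Everything here is hypothetical: the `Framework` record fields, the sample entries ("Framework A"/"Framework B"), and the `profile_query` function are illustrative assumptions, not the actual C4AI database schema or Profiler implementation.

```python
from dataclasses import dataclass

# Hypothetical record shape for one entry in the frameworks database.
@dataclass
class Framework:
    name: str
    sectors: frozenset        # sectors the framework targets; "general" = any
    lifecycle_stages: frozenset  # e.g. design, deployment, monitoring
    risk_areas: frozenset     # e.g. bias, privacy, security

# Purely illustrative sample entries, not real framework descriptions.
FRAMEWORKS = [
    Framework("Framework A", frozenset({"general"}),
              frozenset({"design", "deployment"}),
              frozenset({"bias", "privacy"})),
    Framework("Framework B", frozenset({"finance"}),
              frozenset({"deployment"}),
              frozenset({"security"})),
]

def profile_query(frameworks, sector=None, stage=None, risk=None):
    """Return frameworks matching an organization's profile criteria.

    Any criterion left as None is not used to filter.
    """
    results = []
    for fw in frameworks:
        if sector and sector not in fw.sectors and "general" not in fw.sectors:
            continue
        if stage and stage not in fw.lifecycle_stages:
            continue
        if risk and risk not in fw.risk_areas:
            continue
        results.append(fw)
    return results

# Example: a healthcare organization deploying a system with bias concerns
# would match only the general-purpose Framework A in this toy data.
matches = profile_query(FRAMEWORKS, sector="healthcare",
                        stage="deployment", risk="bias")
```

The point of the sketch is the query pattern: an organization describes its profile once, and the Profiler filters the database down to the framework elements that apply, rather than the organization reading every framework end to end.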
Please visit the following blog post for a description of our proposed solution: Survey of Responsible AI and AI Risk Management Frameworks, with Proposed Framework Profiler