Credo AI, the governance company operationalizing Responsible AI, announced the general availability of new assessment and reporting capabilities in its Responsible AI Governance Platform. These enhancements will enable enterprises to easily meet new regulatory requirements and customer demands for governance artifacts, reports and disclosures on their development and use of AI, with a focus on assessing and documenting Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy.
This release is the latest addition to Credo AI’s software that helps enterprises manage AI risk and compliance at scale. The new feature set allows organizations to standardize and automate reporting of Responsible AI issues across all of their AI/ML applications.
Credo AI’s Intelligent SaaS platform empowers enterprises to measure, monitor and manage AI-introduced risks at scale.
These features were developed in response to the growing call for transparency and documentation of AI systems from regulators, customers and consumers. Increasingly, the world is demanding to know how AI systems behave, particularly when it comes to issues like fairness and bias. Forthcoming regulations like New York City’s algorithmic hiring law and the EU AI Act will soon mandate that organizations building, buying and using AI conduct regular assessments or audits of their AI tools and publish reports for public consumption.
Recently, the White House also introduced its Blueprint for an AI Bill of Rights, which provides guidance on the design, use and deployment of AI. And last month, the House of Representatives Committee on Science, Space, and Technology held a hearing on managing the risks of AI, where tech leaders including Credo AI’s founder and CEO Navrina Singh discussed the need for context-focused governance and transparent reporting.
Credo AI enables customers to comply with upcoming regulations and to address their own customers’ questions and concerns about the AI systems they offer and implement. The platform is already in use at Fortune 100 enterprises in the financial services, insurance, high tech, and aerospace and defense sectors, which use it to generate governance artifacts and reports on the fairness, performance, and governance of their AI systems to share with customers and regulators.