This repository has been archived by the owner on Dec 18, 2023. It is now read-only.


Credo AI Lens



⚠️ DEPRECATION WARNING: This project is no longer maintained.

Lens by Credo AI - Responsible AI Assessment Framework

Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community. In short, Lens connects arbitrary AI models and datasets with Responsible AI tools throughout the ecosystem.
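To make the idea of "standardized model and data assessment" concrete, here is a minimal, self-contained sketch of one common fairness check (demographic parity difference) of the kind such a framework computes. This is illustrative only: the function name and shape are hypothetical and do not reflect Lens's actual API.

```python
# Hypothetical sketch: demographic parity difference, a simple fairness
# metric. It measures the gap between the highest and lowest positive-
# prediction rates across demographic groups (0.0 means perfect parity).
def demographic_parity_difference(y_pred, groups):
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" gets positive predictions 75% of the time,
# group "b" only 25% of the time.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

An assessment framework's value is in standardizing many such metrics behind one interface, so the same model and dataset can be run through them without per-metric glue code.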

Lens can be run in a notebook, a CI/CD pipeline, or anywhere else you do your ML analytics. It is extensible and easily customized to your organization's assessments if they are not supported by default.

Though it can be used alone, Lens shows its full value when connected to your organization's Credo AI App. Credo AI is an end-to-end AI Governance App that supports multi-stakeholder alignment, AI assessment (via Lens) and AI risk assessment.

Dependencies

  • Credo AI Lens supports Python 3.8+
  • Sphinx (optional for local docs site)
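Before installing, you can confirm your interpreter meets the Python 3.8+ requirement with a quick check (a generic sketch, not part of Lens itself):

```python
# Verify the Python 3.8+ requirement stated in the Dependencies section
# before installing credoai-lens.
import sys

assert sys.version_info >= (3, 8), "credoai-lens requires Python 3.8+"
print("Python version OK:", ".".join(map(str, sys.version_info[:3])))
```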

Installation

The latest stable release (and required dependencies) can be installed from PyPI.

pip install credoai-lens

Additional installation instructions can be found in our setup documentation.

Getting Started

To get started, see the quickstart demo.

If you are using the Credo AI Governance App, also check out the governance integration demo.

Documentation

Documentation is hosted by readthedocs.

For dev documentation, see latest.

AI Governance

As an assessment framework, Lens is an important component of your overall AI Governance strategy. But it's not the only component! Credo AI, the developer of Lens, also develops tools to satisfy your general AI Governance needs, which integrate easily with Lens.

To connect to Credo AI's Governance App, see the Governance tutorial on readthedocs.

For Lens developers

Running tests

Running a test

scripts/test.sh

Running tests with pytest-watch

ptw --runner "pytest -s"