Cisco Research

© 2026 Cisco Systems, Inc.

    Responsible AI
    What is RAI?

    RAI is an open-source project that helps AI developers with various aspects of responsible AI development. It consists of a core API and a corresponding web-based dashboard application. RAI can easily be integrated into AI development projects and measures a range of metrics during each phase of development, from data-quality assessment to model selection based on performance, fairness, and robustness criteria. In addition, it provides interactive tools and visualizations for understanding and explaining AI models, as well as a generic framework for performing various types of analysis, including adversarial robustness.
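    To make the metric-measurement idea concrete, here is a minimal sketch of the kind of performance and fairness metrics an RAI-style pipeline automates. The function names and metric choices here are illustrative assumptions, not RAI's actual API.

    ```python
    # Illustration only: two of the kinds of metrics an RAI-style
    # pipeline measures automatically. Names are hypothetical and
    # do not reflect RAI's real interface.

    def accuracy(y_true, y_pred):
        """Fraction of predictions that match the ground truth."""
        return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    def demographic_parity_difference(y_pred, groups):
        """Gap in positive-prediction rates between the best- and
        worst-treated groups (0.0 means perfectly balanced)."""
        rates = {}
        for g in set(groups):
            preds = [p for p, gg in zip(y_pred, groups) if gg == g]
            rates[g] = sum(preds) / len(preds)
        vals = sorted(rates.values())
        return vals[-1] - vals[0]

    # Tiny synthetic example: two demographic groups "a" and "b".
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 1, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    print(accuracy(y_true, y_pred))                       # 0.625
    print(demographic_parity_difference(y_pred, groups))  # 0.25
    ```

    A real run would compute many such metrics at once (RAI advertises more than 150) and report them to the dashboard alongside each metric's definition and expected range.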


    Why RAI?

    RAI can handle many types of data, such as text, images, and tabular data. In addition to providing a comprehensive set of quantitative metrics for studying model performance, bias, and other properties, RAI offers a flexible interactive design in which new types of analysis and explanations for data and models can be easily integrated as custom analyses. A rich set of out-of-the-box visualizations, such as Grad-CAM, can assist developers with the analysis and debugging of AI models. The main features of RAI are:

      • Automated measurement of more than 150 different metrics, with additional information on each metric’s definition, range, and reference
      • Automated testing of AI models via easy-to-use, logic-based test cases for certifying various qualities of an AI model
      • An easily extensible model-analysis tool that provides custom visualization and explainability for AI models
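    The "logic-based test case" idea above can be sketched as a declarative predicate over measured metrics: each test names a model quality and passes or fails based on a threshold. The structure and names below are assumptions for illustration, not RAI's actual test format.

    ```python
    # Hypothetical sketch of logic-based test cases for certifying
    # model qualities. Metric values are sample data, and the test
    # structure is illustrative, not RAI's real format.

    metrics = {
        "accuracy": 0.91,
        "demographic_parity_difference": 0.04,
    }

    # Each test case: (description, predicate over the metric dict).
    test_cases = [
        ("accuracy meets minimum",
         lambda m: m["accuracy"] >= 0.85),
        ("fairness gap within tolerance",
         lambda m: m["demographic_parity_difference"] <= 0.05),
    ]

    def run_tests(metrics, test_cases):
        """Evaluate every predicate and report pass/fail per test."""
        return {name: predicate(metrics) for name, predicate in test_cases}

    results = run_tests(metrics, test_cases)
    print(results)  # both tests pass for the sample metrics above
    ```

    Expressing certifications as data (name plus predicate) keeps them easy to extend: adding a new quality check is one more entry in the list, with no change to the runner.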

    Getting Started

    This document is a good starting point.

    Support

    We welcome feedback, questions, and issue reports via GitHub Issues.

    Contribution

    We welcome contributions. We use GitHub Issues to track public bugs. Report a bug by opening a new issue.