Security for Gen AI
Invitation Code: RFP-24-03

As AI transitions from the research environment to real-world applications, AI security is quickly becoming a critical issue. Without strong security guarantees, deployment of AI solutions is severely limited in scope. As we have learned from many years of Internet experience, any application from which attackers can gain some benefit will be attacked, and any vulnerabilities will eventually be found and exploited.

Cisco is soliciting proposals to address these issues. Topics of interest include (but are not limited to):

  • Adversarial attacks and defenses for machine learning models
      ◦ Data poisoning and backdoors
      ◦ Adversarial inputs
      ◦ Model hardening via adversarial training or similar methods

  • Formal verification and specification of machine learning model behavior
  • Machine learning-driven agent security
  • Security of retrieval-augmented generation (RAG) and similar systems
  • AI development pipeline security (data collection, training, deployment, and monitoring)
  • Model alignment security and robustness
  • Model data leakage and exfiltration (PII, confidential data, etc.)
  • Evaluation and benchmarking of model security and robustness

Proposal Submission:

After a preliminary review, we may ask you to revise and resubmit your proposal. RFPs may be withdrawn as research proposals are funded or as interest in a specific topic is satisfied, so researchers should plan to submit their proposals as soon as possible.

General Requirements for Consideration, Proposal Details, FAQs

This information is available at the bottom of the Research Gifts webpage. If your questions are not answered in the FAQs, please contact research@cisco.com.

Constraints and other information

IPR will remain with the university. Cisco expects customary scholarly dissemination of results and hopes that promising results will be made available to the community without restrictive licenses, royalties, or other encumbrances.