Request for Proposals (RFP): Enhancing Factuality and Reasoning in Generative AI Models

Invitation Code: RFP-24-05

Introduction

The rapid development of large language models (LLMs) and multimodal foundation models has transformed domains including education, healthcare, art, entertainment, and human-technology interaction. While these advancements offer tremendous opportunities, they also pose challenges, particularly regarding the accuracy and reliability of generated information. Ensuring the factuality and improving the reasoning capabilities of these models are critical for their responsible and effective deployment.

Objective

This RFP invites research proposals focused on enhancing the factuality and reasoning capabilities of generative AI models, including LLMs and multimodal foundation models. We seek innovative approaches to detect and mitigate hallucinations, improve factual accuracy, and advance the models' logical reasoning and problem-solving abilities.


Areas of Research

Proposals are encouraged to address one or more of the following areas but are not limited to these topics. We welcome any innovative or alternative ideas that align with the objective of enhancing the factuality and reasoning of generative AI models:


Hallucination Detection

  • Development of techniques and tools for detecting hallucinations in large language models and multimodal foundation models.
  • Evaluation and benchmarking of hallucination detection methods across different model architectures and applications.

Improving Factuality and Reasoning Capabilities

  • Formal approaches to enhance the factual accuracy of generative models.
  • Integration of logical reasoning frameworks to improve model decision-making processes.
  • Exploration of multi-agent problem-solving techniques to enhance collaborative reasoning in AI models.
  • Iterative problem-solving methods to refine model responses over multiple interactions.
  • Advanced planning techniques to improve the coherence and accuracy of long-term model outputs.
  • Innovations in prompt engineering and meta-prompting to guide models towards more accurate and relevant responses.
  • Data selection strategies for fine-tuning models to improve their factuality and reasoning abilities.
  • Instruction tuning methodologies to align model outputs with human expectations and factual correctness.

We seek innovative and impactful research proposals that will contribute to the responsible growth and utilization of generative AI technologies while maximizing their benefits and minimizing risks.

Proposal Submission

After a preliminary review, we may ask you to revise and resubmit your proposal. RFPs may be withdrawn as research proposals are funded or as interest in a specific topic is satisfied. Researchers should therefore plan to submit their proposals as soon as possible.

General Requirements for Consideration, Proposal Details, FAQs

This information is available at the bottom of the Research Gifts webpage. If your questions are not answered in the FAQs, please contact research@cisco.com.

Constraints and other information

Intellectual property rights (IPR) will remain with the university. Cisco expects customary scholarly dissemination of results and hopes that promising results will be made available to the community without limiting licenses, royalties, or other encumbrances.