Cisco Research

© 2026 Cisco Systems, Inc.


    Security for AI

    Code Hallucination

    • What is code hallucination: Generative models such as large language models (LLMs) are widely used as code copilots and for whole-program generation. However, the programs they generate and update may suffer from hallucination.
    • In this work, we present several types of code hallucination, with examples of each elicited manually from large language models.
    • We present HallTrigger, a technique that demonstrates efficient ways of triggering arbitrary code hallucination.
      • Our method leverages three dynamic attributes of LLMs to craft prompts that reliably trigger hallucinations, without requiring access to the model's architecture or parameters.
    • Results on popular black-box models suggest that HallTrigger is effective and that pervasive LLM hallucination has a substantial impact on software development.
    • Paper: "Code Hallucination", Mirza Rahman, Ashish Kundu, Ramana Kompella, Elisa Bertino, July 2024, https://arxiv.org/abs/2407.04831
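    One common class of code hallucination is a reference to a dependency that does not exist. As a minimal illustration (this is not HallTrigger itself, which is described in the paper above), the sketch below statically scans LLM-generated Python for imported top-level modules that cannot be resolved in the current environment; the package name `nonexistent_pkg_abc123` is a made-up stand-in for a hallucinated dependency.

    ```python
    import ast
    import importlib.util

    def find_unresolvable_imports(code: str) -> list[str]:
        """Return top-level module names imported by `code` that cannot be
        found in the current environment -- one simple signal that an LLM
        may have hallucinated a dependency."""
        tree = ast.parse(code)
        modules: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                modules.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                # Skip relative imports (node.level > 0); they resolve locally.
                modules.add(node.module.split(".")[0])
        return sorted(m for m in modules if importlib.util.find_spec(m) is None)

    # "nonexistent_pkg_abc123" is a hypothetical hallucinated package name.
    generated = "import json\nimport nonexistent_pkg_abc123\nfrom os import path\n"
    print(find_unresolvable_imports(generated))  # → ['nonexistent_pkg_abc123']
    ```

    A check like this only catches hallucinated package names, not hallucinated APIs inside real packages or subtly wrong logic, which is why prompt-level techniques for surfacing hallucinations matter.
    
    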