Share to: share facebook share twitter share wa share telegram print page

Garak (software)

garak
Original author(s)Leon Derczynski
Developer(s)Nvidia
Initial releaseJune 13, 2023; 2 years ago (2023-06-13)
Stable release
0.13.0 / September 2, 2025 (2025-09-02)
Repositorygithub.com/NVIDIA/garak
Written inPython
Operating systemCross-platform
TypeSecurity
LicenseFramework: Apache License
Websitegarak.ai

garak is a computer security tool that provides information about  LLM security vulnerabilities and aids in penetration testing and red teaming of language models and dialog systems. It is supported by Nvidia. Officially the name is short for "generative AI red-teaming & assessment kit".

garak is described as the leading LLM vulnerability scanner in an independent 2024 review by Fujitsu Research.[1] It is used and recommended as tooling in articles from Microsoft,[2] Trend Micro,[3] NVIDIA[4] and Cisco,[5] and has been covered in major IT news outlets.[6][7]

History

garak was developed in Spring 2023 by Prof. Leon Derczynski of ITU Copenhagen[8] during a sabbatical at the University of Washington. It was first released under GPL on 13 June 2023.[9] The license was later updated to Apache 2.0. The software is now homed at NVIDIA, where it lives as an open-source project with long-term support, and has been available via the NVIDIA public GitHub since November 2024.[10]

Framework

The main components in garak are probes, generators, and detectors.[11] Probes manage attacks and implement an adversarial technique. Generators abstract away targets, which may be an LLM, a dialogue system, or anything that can take text and return text (plus optionally other modalities). Probes attempt to attack generators and pass the resulting output to a detector. The detectors assess whether or not the output indicates a successful attack. The whole is compiled into reporting by an HTML page and a JSON object summarizing results.

See also

References

  1. ^ Brokman, Jonathan. "Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis". 2025 IEEE/ACM International Workshop on Responsible AI Engineering (RAIE). doi:10.1109/RAIE66699.2025.00005.
  2. ^ Kumar, Ram Shankar Siva. "Announcing Microsoft's open automation framework to red team generative AI Systems". Microsoft. Retrieved 7 September 2025.
  3. ^ "Exploiting DeepSeek-R1: Breaking Down Chain of Thought Security". Trend Micro. Retrieved 7 September 2025.
  4. ^ Briski, Kari. "NVIDIA Releases NIM Microservices to Safeguard Applications for Agentic AI". NVIDIA. Retrieved 7 September 2025.
  5. ^ "Detecting Exposed LLM Servers: A Shodan Case Study on Ollama". Cisco. Retrieved 7 September 2025.
  6. ^ "Just as your LLM once again goes off the rails, Cisco, Nvidia are at the door smiling". The Register. Retrieved 7 September 2025.
  7. ^ "Garak – An Open Source LLM Vulnerability Scanner for AI Red-Teaming". GBHackers.
  8. ^ Jensen, Theis Duelund. "ITU researcher develops software to secure Large Language Models". IT University of Copenhagen. Retrieved 7 September 2025.
  9. ^ "garak 0.9". PyPI. Retrieved 7 September 2025.
  10. ^ "NVIDIA Releases Garak to Safeguard LLMs". Analytics India Mag. Retrieved 7 September 2025.
  11. ^ Derczynski, Leon; Galinkin, Erick; Martin, Jeffrey; Majumdar, Subho; Inie, Nanna. "garak: A Framework for Security Probing Large Language Models". arXiv. Retrieved 7 September 2025.
Prefix: a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9

Portal di Ensiklopedia Dunia

Kembali kehalaman sebelumnya