Home

Published

- 2 min read

Pirates of the RAG: Adaptively Attacking LLMs to Leak Knowledge Bases

img of Pirates of the RAG: Adaptively Attacking LLMs to Leak Knowledge Bases

Conference Paper

While moving from the core studies of Collectionless AI, this paper is a great reference to showcase how easy is (was? time flies here!) to crack Large Language Models and steal private knowledge. It remarks the importance of strong activities toward security and privacy enhancements, that is what the Collectionless AI pushes.

Details

  • Authors: Christian Di Maio, Cristian Cosci, Marco Maggini, Valentina Poggioni, Stefano Melacci
  • Title: State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era
  • Where: European Conference on Artificial Intelligence (ECAI) 2025

BibTeX

   @inproceedings{DBLP:conf/ecai/MaioCMPM25,
  author       = {Christian Di Maio and
                  Cristian Cosci and
                  Marco Maggini and
                  Valentina Poggioni and
                  Stefano Melacci},
  editor       = {In{\^{e}}s Lynce and
                  Nello Murano and
                  Mauro Vallati and
                  Serena Villata and
                  Federico Chesani and
                  Michela Milano and
                  Andrea Omicini and
                  Mehdi Dastani},
  title        = {Pirates of the {RAG:} Adaptively Attacking LLMs to Leak Knowledge
                  Bases},
  booktitle    = {{ECAI} 2025 - 28th European Conference on Artificial Intelligence,
                  25-30 October 2025, Bologna, Italy - Including 14th Conference on
                  Prestigious Applications of Intelligent Systems {(PAIS} 2025)},
  series       = {Frontiers in Artificial Intelligence and Applications},
  volume       = {413},
  pages        = {4041--4048},
  publisher    = {{IOS} Press},
  year         = {2025},
  url          = {https://doi.org/10.3233/FAIA251293},
  doi          = {10.3233/FAIA251293}
}