Security issues of Generic Large Language Models
Speaker: Dipankar Dasgupta – Memphis, USA
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Generic Large Language Models (GLLMs) are continuously being released with increased size and capabilities, and are promoted as universal problem solvers. While the reliability of GLLM responses is questionable in many situations, these models are being augmented/retrofitted with external resources for different applications, including cybersecurity.
The talk will discuss major security concerns of these pre-trained language-based models. First, GLLMs are prone to adversarial manipulation such as model poisoning, reverse engineering, and side-channel cyberattacks. Second, LLM-generated code that draws on open-source libraries/codelets for software development can expose projects to software supply chain attacks. Moreover, the lack of disclosure of training data coverage and model architecture prohibits independent third-party evaluation, which may result in model manipulation, information disclosure, access to restricted resources, privilege escalation, and complete system takeover. Third, language-based generic models, positioned as universal problem solvers, are likely to hallucinate when given customized prompts with conflicting contexts and ambiguity.
This talk will also cover the benefits and risks of using GLLMs in cybersecurity, particularly in malware detection, log analysis, intrusion detection, etc. I will highlight the need for diverse AI approaches (non-LLM-based smaller models) trained with application-specific curated data and fine-tuned for well-tested security functionalities, which will be necessary for identifying and mitigating emerging cyber threats, including zero-day attacks.
References:
• Issues with Generic Large Language Models (GLLMs). D Dasgupta, A Roy – IEEE Conference on Artificial Intelligence for Business (AIxB), pp. 47-50, 2024.
• Pitfalls of Generic Large Language Models (GLLMs) from reliability and security perspectives. D Dasgupta, A Roy – 2024 IEEE 6th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA), pp. 412-419, 2024.
• A Review of Generative AI from Historical Perspectives. Dipankar Dasgupta, Deepak Venugopal, Kishor Datta Gupta. The University of Memphis, Tech Report, February 2023. TechRxiv. DOI: 10.36227/techrxiv.22097942.v1
• Why Language Models Hallucinate. Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, Edwin Zhang. OpenAI Tech Report, 2025. https://arxiv.org/pdf/2509.04664
About this Lecture
Number of Slides: 50
Duration: 60 minutes
Languages Available: English
Last Updated: 03/12/2025