TY - GEN
T1 - SafeToolBench
T2 - 30th Conference on Empirical Methods in Natural Language Processing, EMNLP 2025
AU - Xia, Hongfei
AU - Wang, Hongru
AU - Liu, Zeming
AU - Yu, Qian
AU - Guo, Yuhang
AU - Wang, Haifeng
N1 - Publisher Copyright:
© 2025 Association for Computational Linguistics.
PY - 2025
Y1 - 2025
N2 - Large Language Models (LLMs) have exhibited great performance in autonomously calling various tools in external environments, leading to better problem-solving and task-automation capabilities. However, these external tools also amplify potential risks, such as financial loss or privacy leakage, under ambiguous or malicious user instructions. Compared to previous studies, which mainly assess the safety awareness of LLMs after obtaining the tool execution results (i.e., retrospective evaluation), this paper focuses on prospective ways to assess the safety of LLM tool utilization, aiming to avoid irreversible harm caused by directly executing tools. To this end, we propose SafeToolBench, the first benchmark to comprehensively assess tool utilization security in a prospective manner, covering malicious user instructions and diverse practical toolsets. Additionally, we propose a novel framework, SafeInstructTool, which aims to enhance LLMs’ awareness of tool utilization security from three perspectives (i.e., User Instruction, Tool Itself, and Joint Instruction-Tool), leading to nine detailed dimensions in total. We experiment with four LLMs using different methods, revealing that existing approaches fail to capture all risks in tool utilization. In contrast, our framework significantly enhances LLMs’ self-awareness, enabling safer and more trustworthy tool utilization. Our code and data are publicly available at https://github.com/BITHLP/SafeToolBench.
AB - Large Language Models (LLMs) have exhibited great performance in autonomously calling various tools in external environments, leading to better problem-solving and task-automation capabilities. However, these external tools also amplify potential risks, such as financial loss or privacy leakage, under ambiguous or malicious user instructions. Compared to previous studies, which mainly assess the safety awareness of LLMs after obtaining the tool execution results (i.e., retrospective evaluation), this paper focuses on prospective ways to assess the safety of LLM tool utilization, aiming to avoid irreversible harm caused by directly executing tools. To this end, we propose SafeToolBench, the first benchmark to comprehensively assess tool utilization security in a prospective manner, covering malicious user instructions and diverse practical toolsets. Additionally, we propose a novel framework, SafeInstructTool, which aims to enhance LLMs’ awareness of tool utilization security from three perspectives (i.e., User Instruction, Tool Itself, and Joint Instruction-Tool), leading to nine detailed dimensions in total. We experiment with four LLMs using different methods, revealing that existing approaches fail to capture all risks in tool utilization. In contrast, our framework significantly enhances LLMs’ self-awareness, enabling safer and more trustworthy tool utilization. Our code and data are publicly available at https://github.com/BITHLP/SafeToolBench.
UR - https://www.scopus.com/pages/publications/105028975693
U2 - 10.18653/v1/2025.findings-emnlp.958
DO - 10.18653/v1/2025.findings-emnlp.958
M3 - Conference contribution
AN - SCOPUS:105028975693
T3 - EMNLP 2025 - 2025 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2025
SP - 17643
EP - 17660
BT - EMNLP 2025 - 2025 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2025
A2 - Christodoulopoulos, Christos
A2 - Chakraborty, Tanmoy
A2 - Rose, Carolyn
A2 - Peng, Violet
PB - Association for Computational Linguistics (ACL)
Y2 - 4 November 2025 through 9 November 2025
ER -