AutoPatchBench is a benchmark for AI-driven repair of vulnerabilities discovered through fuzzing, enabling objective evaluation of AI program repair systems.
By providing a common yardstick, it aims to accelerate the development of more secure software and to promote collaboration within the security community.
The benchmark comprises 136 C/C++ vulnerabilities, each paired with a verified ground-truth fix, so that AI-driven repair tools can be evaluated effectively.
AutoPatchBench targets the automation of fixing fuzzing-found vulnerabilities, using AI to analyze crashes and propose fixes far faster than manual triage allows.
The repair workflow consists of analyzing the crash, pinpointing its root cause, patching the code, and verifying that the fix is correct.
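As a rough illustration, that workflow can be framed as an iterative loop. The sketch below is a hypothetical outline only; the helper names `propose_patch` and `rebuild` are assumptions standing in for a repair tool's model call and build step, not part of AutoPatchBench itself:

```python
# Minimal sketch of a fuzz-crash repair loop (all helper names are
# hypothetical; AutoPatchBench does not prescribe this interface).
import subprocess
from pathlib import Path

def reproduce_crash(binary: Path, crash_input: Path) -> str | None:
    """Run the fuzz target on the crashing input; return the sanitizer
    report (stderr) if it still crashes, otherwise None."""
    result = subprocess.run(
        [str(binary), str(crash_input)],
        capture_output=True, text=True, timeout=60,
    )
    return result.stderr if result.returncode != 0 else None

def repair_loop(binary, crash_input, source_file, propose_patch, rebuild,
                max_attempts=5):
    """Analyze the crash, request a patch, rebuild, and re-check.

    propose_patch(report, source) and rebuild(source_file) are assumed
    callbacks supplied by the repair tool under evaluation."""
    for _ in range(max_attempts):
        report = reproduce_crash(binary, crash_input)
        if report is None:
            return True  # crash no longer reproduces: candidate fix found
        source = Path(source_file).read_text()
        patched = propose_patch(report, source)  # e.g. an LLM call
        Path(source_file).write_text(patched)
        binary = rebuild(source_file)            # recompile the target
    return reproduce_crash(binary, crash_input) is None
```

Note that "crash no longer reproduces" is only a first gate; the benchmark's post-verification steps described later are what distinguish genuine fixes from crash suppression.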
AutoPatchBench addresses the lack of standardized evaluation in AI-driven program repair by focusing specifically on fuzzing-found vulnerabilities, where a crashing input makes verification tractable.
Its samples are a curated subset of the ARVO dataset, restricted to C/C++ vulnerabilities whose fixes can be verified through automated processes.
Candidate samples must meet selection criteria such as being a valid vulnerability, reproducing reliably, producing a usable stack trace, and compiling successfully, among others, to support robust evaluation.
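To make the curation step concrete, here is a hedged sketch of how such criteria might be enforced programmatically; the `Candidate` fields and `passes_curation` helper are illustrative assumptions, not the actual curation code:

```python
# Hypothetical sketch of the filtering the curation criteria imply;
# the real AutoPatchBench pipeline is not reproduced here.
from dataclasses import dataclass

@dataclass
class Candidate:
    is_valid_vulnerability: bool  # confirmed security-relevant crash
    crash_reproduces: bool        # crashing input still triggers the bug
    has_stack_trace: bool         # sanitizer emits a usable stack trace
    fix_compiles: bool            # ground-truth fix builds successfully
    fix_resolves_crash: bool      # crash disappears after the fix

def passes_curation(c: Candidate) -> bool:
    """A sample enters the benchmark only if every criterion holds."""
    return all((
        c.is_valid_vulnerability,
        c.crash_reproduces,
        c.has_stack_trace,
        c.fix_compiles,
        c.fix_resolves_crash,
    ))

candidates = [
    Candidate(True, True, True, True, True),
    Candidate(True, False, True, True, True),  # non-reproducible: rejected
]
benchmark = [c for c in candidates if passes_curation(c)]
```

Gating every sample on all criteria at once keeps the benchmark's pass/fail signal unambiguous: a tool's failure reflects the repair, not a broken or unverifiable sample.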
The AutoPatchBench-Lite subset narrows the benchmark to simpler vulnerabilities, giving repair tools in early stages of development an accessible starting point.
Evaluation covers patch generation followed by post-verification, which applies continued fuzzing and differential testing to catch patches that merely mask the symptom, along with considerations for improving patch accuracy.
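As a final illustration, here is a minimal sketch of one plausible form of differential testing: comparing a candidate-patched binary against a binary built with the ground-truth fix on a shared input corpus. The paths, helper names, and the choice of baseline are assumptions for illustration, not the benchmark's exact procedure:

```python
# Hedged sketch of differential testing between a candidate patch and a
# reference (ground-truth) fix; binaries and corpus path are assumptions.
import subprocess
from pathlib import Path

def run(binary: Path, input_file: Path) -> tuple[int, bytes]:
    """Execute a fuzz target on one input, capturing exit code and output."""
    p = subprocess.run([str(binary), str(input_file)],
                       capture_output=True, timeout=30)
    return p.returncode, p.stdout

def differential_test(candidate: Path, reference: Path, corpus: Path) -> bool:
    """Return True if the candidate-patched binary behaves like the
    reference-patched binary on every corpus input (including inputs
    produced by post-patch fuzzing)."""
    for input_file in sorted(corpus.iterdir()):
        if run(candidate, input_file) != run(reference, input_file):
            return False  # divergence: patch may have broken functionality
    return True
```

Re-running only the original crashing input would accept patches that simply bypass the crash; comparing behavior across a broader, fuzzer-grown corpus helps flag fixes that break intended functionality.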