Recent advances in large language models (LLMs) have fueled a push toward building fully autonomous AI agents, but a position paper argues against this approach.
Autonomous AI systems still face challenges with reliability, transparency, and understanding human requirements.
The paper proposes LLM-based Human-Agent Systems (LLM-HAS), in which AI collaborates with humans instead of replacing them, to improve trustworthiness and adaptability.
By involving humans to provide guidance and retain control, these systems can be more reliable and better suited to high-stakes domains such as healthcare, finance, and software development.
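To make the idea concrete, the sketch below shows one common way such human oversight can be wired into an agent loop: the agent proposes an action, and a human reviewer approves, edits, or rejects it before anything is executed. This is an illustrative assumption, not the paper's specific design; the names `propose_action` and `execute_action` are hypothetical placeholders.

```python
# Minimal human-in-the-loop sketch (illustrative only, not from the paper).
# A human reviews the agent's proposed action and keeps final control.

from dataclasses import dataclass


@dataclass
class Proposal:
    action: str     # what the agent wants to do
    rationale: str  # the agent's explanation, shown to the human reviewer


def propose_action(task: str) -> Proposal:
    # Placeholder for an LLM call that drafts an action for the given task.
    return Proposal(
        action=f"draft response for: {task}",
        rationale="best guess based on available context",
    )


def execute_action(action: str) -> None:
    # Placeholder for the side-effecting step (sending a message, applying a change, ...).
    print(f"Executing: {action}")


def human_in_the_loop_step(task: str) -> None:
    proposal = propose_action(task)
    print(f"Agent proposes: {proposal.action}")
    print(f"Rationale: {proposal.rationale}")
    decision = input("Approve (a), edit (e), or reject (r)? ").strip().lower()
    if decision == "a":
        execute_action(proposal.action)
    elif decision == "e":
        execute_action(input("Enter revised action: "))
    else:
        print("Action rejected; control stays with the human.")


if __name__ == "__main__":
    human_in_the_loop_step("summarize today's patient intake notes")
```

The key design point this pattern illustrates is that the AI never acts unilaterally: every consequential step passes through a human checkpoint, which is what the paper credits for the added reliability in high-stakes settings.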
The paper presents examples from several industries showing that human-AI teamwork handles complex tasks more effectively than AI working in isolation.
It also discusses the challenges of building such collaborative systems and offers practical solutions to address them.
The authors argue that progress in AI should be measured by how effectively systems work with humans rather than by how independent they become.
The paper advocates for AI systems that enhance human capabilities through partnership rather than replacing human roles entirely.