Amazon OpenSearch Service has evolved since 2023, with improved performance, cost-effectiveness, and new features for hybrid search methods using dense and sparse vectors.
In 2024, builders shifted Retrieval Augmented Generation (RAG) applications and semantic search workloads into production to improve result relevance.
2025 brings support for OpenSearch 2.17, featuring enhancements focused on lowering costs, reducing latency, and improving search accuracy.
OpenSearch Service offers a vector database supporting the Faiss, NMSLIB, and Lucene engines for exact and approximate k-nearest-neighbor (k-NN) search with various distance metrics.
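As a minimal sketch, a k-NN index is declared by mapping a field as `knn_vector` and picking an engine and distance metric. The index field name `embedding`, the dimension (384), and the query vector below are illustrative assumptions, not values from the source.

```python
# Illustrative OpenSearch index mapping for approximate k-NN search.
# The field name "embedding" and dimension 384 are placeholder assumptions.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 384,
                "method": {
                    "name": "hnsw",        # approximate nearest-neighbor graph
                    "engine": "faiss",     # alternatives: "lucene", "nmslib"
                    "space_type": "l2",    # distance metric, e.g. l2 or cosinesimil
                },
            }
        }
    },
}

# A matching approximate k-NN query body for the same field:
knn_query = {
    "size": 5,
    "query": {"knn": {"embedding": {"vector": [0.1] * 384, "k": 5}}},
}
```

Either body would typically be sent with a client such as opensearch-py (`client.indices.create(...)`, `client.search(...)`); the dicts above are just the JSON payloads.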
Builders are adopting a hybrid search approach combining lexical and semantic retrieval methods to cater to diverse user queries effectively.
OpenSearch improved hybrid search in 2024 through conditional scoring logic, optimized data structures, and parallel query processing, which reduced latency, and added post-filtering for refining results.
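A hybrid request combines a lexical sub-query and a vector sub-query under one `hybrid` clause, with a search pipeline normalizing and combining the two score scales. A rough sketch, assuming illustrative field names, an arbitrary query vector, and min-max normalization with an arithmetic-mean combination:

```python
# Search pipeline body: normalize lexical (BM25) and vector scores onto a
# common scale, then average them. Technique choices here are assumptions.
search_pipeline = {
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {"technique": "arithmetic_mean"},
            }
        }
    ]
}

# Hybrid query body: one lexical and one semantic sub-query.
# Field names ("title", "embedding") and the vector are placeholders.
hybrid_query = {
    "query": {
        "hybrid": {
            "queries": [
                {"match": {"title": {"query": "wireless headphones"}}},    # lexical
                {"knn": {"embedding": {"vector": [0.2] * 384, "k": 10}}},  # semantic
            ]
        }
    }
}
```

The pipeline is registered once and then referenced per request (e.g. via a `search_pipeline` parameter), so the same index can serve both plain and hybrid queries.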
Sparse vector search simplifies combining lexical and semantic signals in one query, and 2024 releases reduced its query processing latency.
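A sparse search request can be expressed with a `neural_sparse` clause that expands the query text into weighted tokens at search time. This is a sketch under assumptions: the field name `content_sparse` is illustrative, and a deployed sparse-encoding model is presumed (its ID is left as a placeholder).

```python
# Illustrative neural sparse query. "content_sparse" is a placeholder field,
# and "<sparse-encoding-model-id>" stands in for a real deployed model ID.
sparse_query = {
    "query": {
        "neural_sparse": {
            "content_sparse": {
                "query_text": "how to renew a passport",
                "model_id": "<sparse-encoding-model-id>",
            }
        }
    }
}
```

Because the expanded terms land in an inverted index, the same query infrastructure that serves lexical search also serves sparse semantic search.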
OpenSearch introduced strategies in 2024 to reduce costs for production workloads, including scalar and binary quantization, in-memory handling optimizations, and support for JDK 21 and SIMD instruction sets.
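The quantization options surface as index-mapping settings. A hedged sketch of two variants, with field dimensions and compression level chosen as assumptions: a scalar-quantized Faiss HNSW field storing vectors as 16-bit floats, and a disk-optimized field that applies binary quantization for a much smaller memory footprint at some cost in recall.

```python
# Scalar quantization: Faiss HNSW with vectors encoded as fp16.
# The dimension (768) is an illustrative assumption.
fp16_mapping = {
    "type": "knn_vector",
    "dimension": 768,
    "method": {
        "name": "hnsw",
        "engine": "faiss",
        "parameters": {"encoder": {"name": "sq", "parameters": {"type": "fp16"}}},
    },
}

# Disk-optimized mode: binary quantization under the hood, trading recall
# for memory savings. The compression level shown is an assumption.
on_disk_mapping = {
    "type": "knn_vector",
    "dimension": 768,
    "mode": "on_disk",
    "compression_level": "32x",
}
```

Either mapping would replace the plain `knn_vector` field definition in an index body; the choice is a cost-versus-recall trade-off per workload.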
Innovations such as k-NN query updates, text chunking strategies, and techniques that reduce RAM consumption contributed to improved accuracy and efficiency in 2024.
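Chunking is typically applied at ingest time so that long documents are split into passages before embedding. A sketch of such an ingest pipeline, assuming illustrative field names and limits (`body` as the source field, 128-token chunks with 20% overlap):

```python
# Illustrative ingest pipeline using a text-chunking processor.
# Field names, token_limit, and overlap_rate are placeholder assumptions.
chunking_pipeline = {
    "description": "Split long body text into overlapping chunks",
    "processors": [
        {
            "text_chunking": {
                "algorithm": {
                    "fixed_token_length": {"token_limit": 128, "overlap_rate": 0.2}
                },
                "field_map": {"body": "body_chunks"},
            }
        }
    ],
}
```

Overlapping chunks help preserve context that would otherwise be cut at chunk boundaries, which is why retrieval accuracy tends to improve even though storage grows slightly.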
OpenSearch's focus on dense vector handling, quantization-based cost reduction, and support for AI-native pipelines highlights a commitment to advancing AI-powered search use cases and integrations.
Overall, OpenSearch continues to enhance its capabilities for semantic search and vector databases, offering builders powerful, scalable solutions for AI-driven applications.