Endor Labs has developed Endor Labs Scores for AI Models, which uses 50 out-of-the-box metrics to score models based on security, activity, quality and popularity.
The new platform will initially score more than 900k open-source AI models available on Hugging Face.
AI models are often built on top of one another, which creates problems for visibility and security.
Endor's platform scores models based on popularity and security, examining signals such as when a model was created and last updated.
Like open-source software, AI models are widely reused as foundational components, making it difficult for developers to verify that those foundations are trustworthy, secure, and reliable.
Security in AI models is complex because of the sheer number of vulnerabilities and risks involved, which makes visibility all the more important.
Endor will eventually expand beyond Hugging Face and will deploy LLMs that parse, organize and analyze data, automatically and continuously scanning for model updates or alterations.
Even basic testing is neither quick nor easy: the available data can be convoluted and fragmented, making it painful to read and understand.
Security and licensing obstacles must be overcome, while companies handling AI models must also be aware of intellectual-property and copyright requirements.
LLMs are larger and more complex to evaluate than traditional open-source dependencies.