IBM has released a new family of open-source models, Granite 3.0. IBM clearly discloses the datasets used for training, and the language models were trained on Blue Vela, a supercomputing cluster powered entirely by renewable energy.
Stanford’s Foundation Model Transparency Index recently reported that IBM’s Granite models achieved a 100% score on its transparency and openness criteria.
Big tech companies including Google, Apple, and OpenAI remain tight-lipped about the data used to train their LLMs.
IBM’s approach of making transparency the default can help businesses overcome the scepticism and the negative ‘black box’ connotation attached to LLMs.
By providing transparent models, IBM is likely to receive more recognition for playing the good guy in the world of AI.
Even as AI becomes deeply integrated into technology, people will still choose how, and whether, they want to use it.
Granite models aim to make open source the winner on the internet. Encountering such good samaritans may motivate people to contribute to open-source datasets themselves.
IBM’s generative AI book of business is up by more than $1 billion quarter over quarter.
IBM’s transparency can help businesses spend more time solving their problems rather than worrying about the reliability of the models they’re using.
IBM’s training datasets are mainly open source, which supports research and development and makes it easier to enhance these datasets.