In August 2024, Cosmos, an Australian science magazine, published AI-written articles that had been fact-checked to guard against misinformation. One article nonetheless contained errors, drawing criticism for devaluing science journalism and raising concerns about the misuse of AI. Although generative AI can produce novel content, it can also contribute to online misinformation and inaccuracy. Yet AI could benefit science communication in ways many do not realize, including broadening accessibility for those facing barriers to education and communication.
Generative AI, which can produce new interpretations of existing material, could spell trouble for science journalism; misinformation and copyright infringement are among the concerns scientists and researchers have raised.
Generative AI can do more than just create articles. Chatbots such as ChatGPT, which can interpret natural language, offer great potential for interactive, conversational discussions about science.
Generative AI could also prove beneficial for groups that face cultural and geographic barriers to educational resources, especially in helping such groups become more familiar with scientific concepts.
Generative AI is here to stay. However, information obtained from AI needs to be fact-checked, and providers of science communication and journalism must be transparent about how they use AI.
Although AI could broaden the range of voices and make physics more accessible, it could also worsen existing social inequalities and amplify biases in healthcare data. AI is not a substitute for critical thinking or for having something substantial to say.
Generative AI can be a helpful tool for science communication, even as it presents its own set of challenges and concerns. Incorporating AI into the scientific information ecosystem while evaluating it continuously is essential to maintaining the quality of science journalism and public trust in science.