Artificial intelligence-generated summaries of scientific papers make complex information more understandable to the general public than human-written summaries, according to my recent research published in PNAS Nexus. The AI-generated summaries not only improved public understanding of science but also improved people's perceptions of scientists.
I used a popular large language model, OpenAI's GPT-4, to create simple summaries of scientific papers; this kind of text is often called a significance statement. The summaries the AI created used simpler language — they were easier to read according to a readability index and used more common words, like “job” instead of “profession” — than summaries written by the researchers who had done the work.
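As a rough illustration of how a readability index scores text, here is a minimal Python sketch of the Flesch Reading Ease formula. This is one common index, used here purely as an example; it is not necessarily the exact measure from my study, and the syllable counter is a crude heuristic.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of vowels, subtracting a trailing silent "e".
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores mean easier-to-read text.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

# Simpler, more common words score higher (easier) than longer, rarer ones.
print(flesch_reading_ease("She found a new job."))             # higher score
print(flesch_reading_ease("She pursued a novel profession."))  # lower score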
In one experiment, I found that readers of the AI-generated statements understood the science better, and gave more detailed and accurate summaries of the content, than readers of the human-written statements.
I also examined what effect the simpler summaries might have on people's perceptions of the scientists who did the research. In this experiment, participants rated the scientists whose work was described in the simpler texts as more credible and trustworthy than the scientists whose work was described in the more complex texts.
In both experiments, participants did not know who had written each summary. The simpler texts were always generated by AI, and the complex texts were always written by humans. Ironically, when I asked participants who they believed wrote each summary, they thought the more complex ones were written by AI and the simpler ones by humans.
Why it matters
Have you ever read about a scientific discovery and felt like it was written in a foreign language? If you're like most Americans, new scientific information is probably hard to understand – especially if you try to tackle a science article in a research journal.
At a time when scientific knowledge is critical to informed decision-making, the ability to communicate and grasp complex ideas is more important than ever. Trust in science has been declining for years, and one contributing factor could be the challenge of understanding scientific jargon.
This research points to a possible solution: using AI to simplify science communication. By making scientific content more accessible, this work suggests, AI-generated summaries may help restore trust in scientists and, in turn, encourage greater public engagement with scientific topics. Trust is especially important because people often rely on science in their daily lives, from eating habits to medical decisions.
What isn’t yet known
As AI continues to develop, its role in science communication could expand, particularly if the use of generative AI becomes more common or is sanctioned by academic journals. Indeed, the field of academic publishing is still developing norms around the use of AI. By simplifying academic writing, AI could contribute to greater engagement with complex topics.
While the benefits of AI-generated science communication may be clear, ethical considerations must also be taken into account. There is some risk that using AI to simplify scientific content could strip away nuance, leading to misunderstandings or oversimplifications. There is also always the chance of errors if no one is paying close attention.
Furthermore, transparency is critical. Readers should be informed when AI is used to generate summaries, which helps guard against potential biases.
Simple descriptions of science are preferable to complex ones, and AI tools can help produce them. But scientists could achieve the same goals by working harder to minimize jargon and communicate clearly, no AI required.