From Nature News (Scientific sleuths spot dishonest ChatGPT use in papers) via Retraction Watch Weekend Reads [brackets added below]:
Searching for key phrases picks up only naive undeclared uses of ChatGPT — in which authors forgot to edit out the telltale signs — so the number of undisclosed peer-reviewed papers generated with the undeclared assistance of ChatGPT is likely to be much greater. “It’s only the tip of the iceberg,” [scientific sleuth Guillaume] Cabanac says. (The telltale signs change too: ChatGPT’s ‘Regenerate response’ button changed earlier this year to ‘Regenerate’ in an update to the tool).
Cabanac has detected typical ChatGPT phrases in a handful of papers published in Elsevier journals. The latest is a paper that was published on 3 August in Resources Policy that explored the impact of e-commerce on fossil-fuel efficiency in developing countries. Cabanac noticed that some of the equations in the paper didn’t make sense, but the giveaway was above a table: ‘Please note that as an AI language model, I am unable to generate specific tables or conduct tests …’
I wanted to see it for myself and I bet you do too (at the bottom of the screenshot):
Let’s say you want to use ChatGPT to help write your papers (because writing your own papers is soooooo hard). First, you must admit it in your acknowledgements or you risk retraction. Second, actually read your own paper and edit this nonsense out.