Japanese startup Sakana said that its AI generated the first peer-reviewed scientific publication. But while the claim isn't untrue, there are significant caveats to note.
The debate swirling around AI and its role in the scientific process grows fiercer by the day. Many researchers don't believe AI is quite ready to serve as a "co-scientist," while others think there's potential, but acknowledge it's early days.
Sakana falls into the latter camp.
The company said that it used an AI system called The AI Scientist-v2 to generate a paper that Sakana then submitted to a workshop at ICLR, a long-running and reputable AI conference. Sakana claims that the workshop's organizers, as well as ICLR's leadership, had agreed to work with the company to conduct an experiment to double-blind review AI-generated manuscripts.
Sakana said it collaborated with researchers at the University of British Columbia and the University of Oxford to submit three AI-generated papers to the aforementioned workshop for peer review. The AI Scientist-v2 generated the papers "end-to-end," Sakana claims, including the scientific hypotheses, experiments and experimental code, data analyses, visualizations, text, and titles.
"We generated research ideas by providing the workshop abstract and description to the AI," Robert Lange, a research scientist and founding member at Sakana, told TechCrunch via email. "This ensured that the generated papers were on topic and suitable submissions."
One of the three papers was accepted to the ICLR workshop: a paper that casts a critical lens on training techniques for AI models. Sakana said it immediately withdrew the paper before it could be published, in the interest of transparency and respect for ICLR conventions.
"The accepted paper both introduces a new, promising method for training neural networks and shows that there are remaining empirical challenges," Lange said. "It provides an interesting data point to spark further scientific investigation."
But the achievement isn't as impressive as it might seem at first glance.
In a blog post, Sakana admits that its AI occasionally made "embarrassing" citation errors, for example, incorrectly attributing a method to a 2016 paper instead of the original 1997 work.
Sakana's paper also didn't undergo as much scrutiny as some other peer-reviewed publications. Because the company withdrew it after the initial peer review, the paper didn't receive an additional "meta-review," during which the workshop organizers could in theory have rejected it.
Then there's the fact that acceptance rates for conference workshops tend to be higher than acceptance rates for the main "conference track," a fact Sakana candidly mentions in its blog post. The company said that none of its AI-generated studies passed its internal bar for ICLR conference track publication.
Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, called Sakana's results "a bit misleading."
"The Sakana folks selected the papers from some number of generated ones, meaning they were using human judgment in terms of picking outputs they thought might get in," he said via email. "What I think this shows is that humans plus AI can be effective, not that AI alone can create scientific progress."
Mike Cook, a research fellow at King's College London specializing in AI, questioned the rigor of the peer reviewers and workshop.
"New workshops, like this one, are often reviewed by more junior researchers," he told TechCrunch. "It's also worth noting that this workshop is about negative results and difficulties, which is great (I've run a similar workshop before), but it's arguably easier to get an AI to write about a failure convincingly."
Cook added that he wasn't surprised an AI can pass peer review, considering that AI excels at writing human-sounding prose. Partly-AI-generated papers passing journal review isn't even new, Cook pointed out.
AI's technical shortcomings, such as its tendency to hallucinate, make many scientists wary of endorsing it for serious work. Moreover, experts fear AI could simply end up generating noise in the scientific literature, not elevating progress.
"We need to ask ourselves whether [Sakana's] result is about how good AI is at designing and conducting experiments, or whether it's about how good it is at selling ideas to humans, which we know AI is great at already," Cook said. "There's a difference between passing peer review and contributing knowledge to a field."
Sakana, to its credit, makes no claim that its AI can produce groundbreaking, or even especially novel, scientific work. Rather, the goal of the experiment was to "study the quality of AI-generated research," the company said, and to highlight the urgent need for "norms regarding AI-generated science."
"[T]here are difficult questions about whether [AI-generated] science should be judged on its own merits first to avoid bias against it," the company wrote. "Going forward, we will continue to exchange opinions with the research community on the state of this technology to ensure that it does not develop into a situation in the future where its sole purpose is to pass peer review, thereby substantially undermining the meaning of the scientific peer review process."