Someone claiming to be one of the authors showed up in the comments saying that they couldn’t have done it without GPT… which just makes me think “skill issue”, honestly.
Even a true-blue sporadic success can’t outweigh the pervasive deskilling, the overstressing of the peer review process, the generation of peer reviews that simply can’t be trusted, and the fact that misinformation about physics can now be pumped interactively to the public at scale.
“The bus to the physics conference runs so much better on leaded gasoline!” “We accelerated our material-testing protocol by 22% and reduced equipment costs. Yes, they are technically blood diamonds, if you want to get all sensitive about it…”
From the preprint:
“Methodology: trust us, bro”
Edit: Having now spent as much time reading the paper as I am willing to, it looks like the first so-called great advance was what you’d get from Mathematica’s FullSimplify, souped up in a way that makes it unreliable. The second so-called great advance, going from the special cases in Eqs. (35)–(38) to conjecturing the general formula in Eq. (39), means conjecturing a formula that… well, the prefactor is the obvious guess, the number of binomials in the product is the obvious guess, and after staring at the subscripts I don’t see why the researchers would not have guessed Eq. (39) at least as an Ansatz.

All the claims about an “internal” model are unverifiable and tell us nothing about how much hand-holding the humans had to do. Writing them up in this manner is, in my opinion, unethical and a detriment to science. Frankly, anyone who works for an AI company and makes a claim about the amount of supervision they had to do should be assumed to be lying.