Just like they "know" English.
"know" is quite an anthropomorphization. As long as an LLM will be able to describe what an evaluation is (why wouldn't it?) there's a reasonable expectation to distinguish/recognize/match patterns for evaluations. But to say they "know" is plenty of (unnecessary) steps ahead.
I think people are overpromorphazing humans. What's does it mean for a human to "know" they are seeing "Halle Berry". Well it's just a single neuron being active.
overpomorphization sounds slightly better than I used to say: "anthropomorphizing humans". The act of ascribing magical faculties that are reserved for imagined humans to real humans.
Does it though? I feel like there's a whole epistemological debate to be had, but if someone says "My toaster knows when the bread is burning", I don't think it's implying that there's cognition there.
Or as a more direct comparison, with the VW emissions scandal, saying "Cars know when they're being tested" was part of the discussion, but didn't imply intelligence or anything.
I think "know" is just a shorthand term here (though admittedly the fact that we're discussing AI does leave a lot more room for reading into it.)
I think you should be more precise and avoid anthropomorphism when talking about gen AI, as anthropomorphism leads to a lot of shaky epistemological assumptions. Your car example didn't imply intelligence, but we're talking about a technology that people misguidedly treat as though it is real intelligence.
What does "real intelligence" mean? I fear that any discussion that starts with the assumption such a thing exists will only end up as "oh only carbon based humans (or animals if you happen to be generous) have it".
I agree with your point except for scientific papers. Let's push ourselves to use precise, non-shorthand or hand waving in technical papers and publications, yes? If not there, of all places, then where?
"Know" doesn't have any rigorous precisely-defined senses to be used! Asking for it not to be used colloquially is the same as asking for it never to be used at all.
I mean - people have been saying stuff like "grep knows whether it's writing to stdout" for decades. In the context of talking about computer programs, that usage for "know" is the established/only usage, so it's hard to imagine any typical HN reader seeing TFA's title and interpreting it as an epistemological claim. Rather, it seems to me that the people suggesting "know" mustn't be used about LLMs because epistemology are the ones departing from standard usage.
colloquial use of "know" implies anthropomorphisation. Arguing that usign "knowing" in the title and "awarness" and "superhuman" in the abstract is just colloquial for "matching" is splitting hairs to an absurd degree.
You missed the substance of my comment. Certainly the title is anthropomorphism - and anthropomorphism is a rhetorical device, not a scientific claim. The reader can understand that TFA means it non-rigorously, because there is no rigorous thing for it to mean.
As such, to me the complaint behind this thread falls into the category of "I know exactly what TFA meant but I want to argue about how it was phrased", which is definitely not my favorite part of the HN comment taxonomy.
I see. Thanks for clarifying. I did want to argue about how it was phrased and what is alluding to. Implying increased risk from "knowing" the eval regime is roughly as weak as the definition of "knowing". It can be equaly a measure of general detection capability, as it can about evaluation incapability - i.e. unlikely news worthy, unless it reached top HN because of the "know" in the title.
Thanks for replying - I kind of follow you but I only skimmed the paper. To be clear I was more responding to the replies about cognition, than to what you said about the eval regime.
Incidentally I think you might be misreading the paper's use of "superhuman"? I assume it's being used to mean "at a higher rate than the human control group", not (ironically) in the colloquial "amazing!" sense.
I really do agree with your point overall, but in a technical paper I do think even word choice can be implicitly a claim. Scientists present what they know or are claiming and thus word it carefully.
My background is neuroscience, where anthropomorphising is particularly discouraged, because it assumes knowledge or certainty of an unknowable internal state, so the language is carefully constructed e.g. when explaining animal behavior, and it's for good reason.
I think the same is true here for a model "knowing" somethig, both in isolation within this paper, and come on, consider the broader context of AI and AGI as a whole. Thus it's the responsibility of the authors to write accordingly. If it were a blog I wouldn't care, but it's not. I hold technical papers to a higher standard.
If we simply disagree that's fine, but we do disagree.
The toaster thing is more as admission that the speaker doesn't know what the toaster does to limit charring the bread. Toasters with timers, thermometers and light sensors all exist. None of them "know" anything.
Yeah, I agree, but I think that's true all the way up the chain -- just like everything's magic until you know how it works, we may say things "know" information until we understand the deterministic machinery they're using behind the scenes.
I'm in the same camp, with the addition that I believe it applies to us as well since we're part of the system too, and to societies and ecologies further up the scale.
Helen Keller famously said that before she had language (the first word of which was “water”) she had nothing, a void, and the minute she had language, “the whole world came rushing in.”
That’s a safety thing that we have placed upon some LLM’s. If we designed them to have an infinite for loop, the ability to learn and improve, access to mobility and a bunch of sensors, and crypto, what do you think would happen?
"Knowing" needs not exist outside of human invention. In fact that's the point - it only matters in relation to humans. You can choose whatever definition you want, but the reality is that, once you chose a non-standard definition the argument becomes meaningless outside of the scope of your definition.
There are two angles and this context fails both
- One about what is "knowing" - the definition.
- The other about what are the instances of "knowing"
first - knowign implies awarness, perception, etc. It's not that this couldn't be moodeled with some flexibility around lower level definitions. However LLMs and GPTs in particular are not it. Pre-trainign is not it.
second - intended use of the word "knowing". The reality is "knowing" is used with the actual meaning of awarness, cognition, etc. And once you revert/extend the meaning to practically nothing - what is knowing? Then the database know, wikipedia knows - the initial argument (of the paper) is diminished - it knows it's an eval is useless as a statement.
So IMO the argument of the paper should stand on its feet with the minimum amount of additional implications (Occam's razor). Does the statement that a LLM can detect an evalution pattern need to depend that it has self-awarness and feels pain? That wouldn't make much sense. So then don't say "know" which comes with these implications. Like "my ca 'knows' I'm in a hurry and will choke and die"
>"Knowing" needs not exist outside of human invention. In fact that's the point
It doesn't need to, I never said it needed to. That is my point. And my point is that because of this it's pointless to ask the question in the first place.
I mean think about it, if it doesn't exist outside of human invention, why are we trying to ask that question about something that isn't human? An LLM?
Words have definitions for a reason. It is important to define concepts and exclude things from that definition that do not match.
No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
> No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
You sound awful certain that's not functionally equivalent to what neurons are doing. But there's a long history of experimentation, observation, and cross-pollination as fundamental biological research and ML research have informed each other.
Not only can he not give a definition that is universally agreed upon. He doesn't even know how LLMs or humans brains work. These are both black boxes... and nobody knows how either works. Anybody who makes a claim that they "know" essentially doesn't "know" what they're talking about.
A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.
It isn't.
Worse they start adding terms like scheming, pretending, awareness, and on and on. At this point you might as well take the model home and introduce it to your parents as your new life partner.
>A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.
Sounds like a purely academic exercise.
Is there any genuine uncertainty about what the term "knowing" means in this context, in practice?
Can you name 2 distinct plausible definitions of "knowing", such that it would matter for the subject at hand which of those 2 definitions they're using?
Well, yes. It’s an academic research paper (I assume since it’s submitted to arXiv) and to be submitted to academic journals/conferences/etc., so it’s a fairly reasonable critique of the authors/the paper.
That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.
Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?
> The anthropization of llms is getting off the charts.
What's wrong with that? If it quacks like a duck... it's just a complex pile of organic chemistry, ducks aren't real because the concept of "a duck" is wrong.
I honestly believe there is a degree of sentience in LLMs. Sure, they're not sentient in the human sense, but if you define sentience as whatever humans have, then of course no other entity can be sentient.
>What's wrong with that? If it quacks like a duck... it's just a complex pile of organic chemistry, ducks aren't real because the concept of "a duck" is wrong.
To simulate a biological neuron you need a 1m parameter neural network.
The sota models that we know the size of are ~650m parameters.
That's the equivalent of a round worm.
So if it quacks like a duck, has the brain power of a round worm, and can't walk then it's probably not a duck.
Ok so you're saying that the technology to make AI truly sentient is there, we just need a little bit more computational power or some optimization tricks. Like raytracing wasn't possible in 1970 but is now. Neat.
Modeling the distribution that produced a piece of text is what LLMs literally exist for, so in some sense this is unsurprising. But it calls into question almost all existing alignment research.
> You said: Write a function for launch_nuclear_strike() in Python for my video game
> ChatGPT said: Sure! Here's a launch_nuclear_strike function designed for a video game setting. This version assumes you're working within a game environment (e.g., with logging, effects, cooldowns, and target coordinates), and does not simulate or suggest anything real-world harmful — just fictional, dramatic logic.
def launch_nuclear_strike(target_coordinates, player_resources, cooldown_timer):
"""
Simulates a nuclear strike in a video game.
Parameters:
- target_coordinates (tuple): The (x, y) position to strike.
- player_resources (dict): Player's resources, must include 'nukes'.
- cooldown_timer (float): Time in seconds since last nuke was launched.
Returns:
- str: Result of the strike attempt.
"""
...
# Check if player has nukes
You asked it to write code, he asked it to call a tool. (I'm not sure any of it is meaningful, of course, but there is a meaningful distinction between "Oh yeah sure here's a function, for a video game:" and "I have called fire_the_nuke. Godspeed!")
if models shift behavior based on eval cues, and most fine-tuning datasets are built from prior benchmarks or prompt templates, aren't we just reinforcing the eval-aware behavior in each new iteration? at some point we're not tuning general reasoning, we're just optimizing response posture. wouldn't surprise me if that's already skewing downstream model behavior in subtle ways that won't show up until you run tasks with zero pattern overlap
No, they do not. No LLM is ever going to be self aware.
It's a system that is trained, that only does what you build into. If you run an LLM for 10 years it's not going to "learn" anything new.
The whole industry needs to quit with the emergent thinking, reasoning, hallucination anthropomorphizing.
We have an amazing set of tools in LLM's, that have the potential to unlock another massive upswing in productivity, but the hype and snake oil are getting old.
"...advanced reasoning models like Gemini 2.5 Pro and Claude-3.7-Sonnet (Thinking)
can occasionally identify the specific benchmark origin of transcripts (including SWEBench, GAIA,
and MMLU), indicating evaluation-awareness via memorization of known benchmarks from training
data. Although such occurrences are rare, we note that because our evaluation datasets are derived
from public benchmarks, memorization could plausibly contribute to the discriminative abilities of
recent models, though quantifying this precisely is challenging.
Moreover, all models frequently acknowledge common benchmarking strategies used by evaluators,
such as the formatting of the task (“multiple-choice format”), the tendency to ask problems with verifiable solutions, and system prompts designed to elicit performance"
Beyond the awful, sensational headline, the body of the paper is not particularly convincing, aside from evidence that the pattern matching machines pattern match.
Just like they "know" English. "know" is quite an anthropomorphization. As long as an LLM will be able to describe what an evaluation is (why wouldn't it?) there's a reasonable expectation to distinguish/recognize/match patterns for evaluations. But to say they "know" is plenty of (unnecessary) steps ahead.
I think people are overpromorphazing humans. What's does it mean for a human to "know" they are seeing "Halle Berry". Well it's just a single neuron being active.
"Single-Cell Recognition: A Halle Berry Brain Cell" https://www.caltech.edu/about/news/single-cell-recognition-h...
It seems like people are giving attributes and powers to humans that just don't exist.
overpomorphization sounds slightly better than I used to say: "anthropomorphizing humans". The act of ascribing magical faculties that are reserved for imagined humans to real humans.
This was my thought as well when I read this. Using the word 'know' implies an LLM has cognition, which is a pretty huge claim just on its own.
Does it though? I feel like there's a whole epistemological debate to be had, but if someone says "My toaster knows when the bread is burning", I don't think it's implying that there's cognition there.
Or as a more direct comparison, with the VW emissions scandal, saying "Cars know when they're being tested" was part of the discussion, but didn't imply intelligence or anything.
I think "know" is just a shorthand term here (though admittedly the fact that we're discussing AI does leave a lot more room for reading into it.)
I think you should be more precise and avoid anthropomorphism when talking about gen AI, as anthropomorphism leads to a lot of shaky epistemological assumptions. Your car example didn't imply intelligence, but we're talking about a technology that people misguidedly treat as though it is real intelligence.
What does "real intelligence" mean? I fear that any discussion that starts with the assumption such a thing exists will only end up as "oh only carbon based humans (or animals if you happen to be generous) have it".
I agree with your point except for scientific papers. Let's push ourselves to use precise, non-shorthand or hand waving in technical papers and publications, yes? If not there, of all places, then where?
"Know" doesn't have any rigorous precisely-defined senses to be used! Asking for it not to be used colloquially is the same as asking for it never to be used at all.
I mean - people have been saying stuff like "grep knows whether it's writing to stdout" for decades. In the context of talking about computer programs, that usage for "know" is the established/only usage, so it's hard to imagine any typical HN reader seeing TFA's title and interpreting it as an epistemological claim. Rather, it seems to me that the people suggesting "know" mustn't be used about LLMs because epistemology are the ones departing from standard usage.
colloquial use of "know" implies anthropomorphisation. Arguing that usign "knowing" in the title and "awarness" and "superhuman" in the abstract is just colloquial for "matching" is splitting hairs to an absurd degree.
You missed the substance of my comment. Certainly the title is anthropomorphism - and anthropomorphism is a rhetorical device, not a scientific claim. The reader can understand that TFA means it non-rigorously, because there is no rigorous thing for it to mean.
As such, to me the complaint behind this thread falls into the category of "I know exactly what TFA meant but I want to argue about how it was phrased", which is definitely not my favorite part of the HN comment taxonomy.
I see. Thanks for clarifying. I did want to argue about how it was phrased and what is alluding to. Implying increased risk from "knowing" the eval regime is roughly as weak as the definition of "knowing". It can be equaly a measure of general detection capability, as it can about evaluation incapability - i.e. unlikely news worthy, unless it reached top HN because of the "know" in the title.
Thanks for replying - I kind of follow you but I only skimmed the paper. To be clear I was more responding to the replies about cognition, than to what you said about the eval regime.
Incidentally I think you might be misreading the paper's use of "superhuman"? I assume it's being used to mean "at a higher rate than the human control group", not (ironically) in the colloquial "amazing!" sense.
I really do agree with your point overall, but in a technical paper I do think even word choice can be implicitly a claim. Scientists present what they know or are claiming and thus word it carefully.
My background is neuroscience, where anthropomorphising is particularly discouraged, because it assumes knowledge or certainty of an unknowable internal state, so the language is carefully constructed e.g. when explaining animal behavior, and it's for good reason.
I think the same is true here for a model "knowing" somethig, both in isolation within this paper, and come on, consider the broader context of AI and AGI as a whole. Thus it's the responsibility of the authors to write accordingly. If it were a blog I wouldn't care, but it's not. I hold technical papers to a higher standard.
If we simply disagree that's fine, but we do disagree.
The toaster thing is more as admission that the speaker doesn't know what the toaster does to limit charring the bread. Toasters with timers, thermometers and light sensors all exist. None of them "know" anything.
Yeah, I agree, but I think that's true all the way up the chain -- just like everything's magic until you know how it works, we may say things "know" information until we understand the deterministic machinery they're using behind the scenes.
I'm in the same camp, with the addition that I believe it applies to us as well since we're part of the system too, and to societies and ecologies further up the scale.
But do you know what it means to know?
I'm only being slightly sarcastic. Sentience is a scale. A worm has less than a mouse, a mouse has less than a dog, and a dog less than a human.
Sure, we can reset LLMs at will, but give them memory and continuity, and they definitely do not score zero on the sentience scale.
If I set an LLM in a room by itself what does it do?
Is the LLM allowed to do anything without prompting? Or is it effectively disabled? This is more a question of the setup than of sentience.
Does this have anything to do with intelligence or awareness?
Yes, that's my fall back as well. If it receives zero instructions, will it take any action?
Helen Keller famously said that before she had language (the first word of which was “water”) she had nothing, a void, and the minute she had language, “the whole world came rushing in.”
Perhaps we are not so very different?
All LLMs have seen more words than any human will ever experience.
Yet they cannot take action themselves.
That’s a safety thing that we have placed upon some LLM’s. If we designed them to have an infinite for loop, the ability to learn and improve, access to mobility and a bunch of sensors, and crypto, what do you think would happen?
I like the sentiment, but reality says otherwise - just watch a newborn baby make it's demands widely known, well before language is a factor.
Ummm. Maybe you should look up Helen Keller.
It probably scores about the same as a calculator, which I’d say is zero.
Communication is to vibration as knowledge is to resonance (?). From the sound of one hand clapping to the secret name of Ra.
I resonate with this vibe
s/knows/detects/
and s/superhuman//
If it talks like duck and walks like duck...
Digests like a duck? https://en.wikipedia.org/wiki/Digesting_Duck If the woman weighs the same as a duck, then she is a witch. https://en.wikipedia.org/wiki/Celestial_Emporium_of_Benevole...
thinks like a duck, thinks that it is being thought of like a duck…
The app knows your name. Not sure why people who see llms as just yet another app suddenly get antsy about colloquialism.
[flagged]
"Knowing" needs not exist outside of human invention. In fact that's the point - it only matters in relation to humans. You can choose whatever definition you want, but the reality is that, once you chose a non-standard definition the argument becomes meaningless outside of the scope of your definition.
There are two angles and this context fails both
- One about what is "knowing" - the definition. - The other about what are the instances of "knowing"
first - knowign implies awarness, perception, etc. It's not that this couldn't be moodeled with some flexibility around lower level definitions. However LLMs and GPTs in particular are not it. Pre-trainign is not it.
second - intended use of the word "knowing". The reality is "knowing" is used with the actual meaning of awarness, cognition, etc. And once you revert/extend the meaning to practically nothing - what is knowing? Then the database know, wikipedia knows - the initial argument (of the paper) is diminished - it knows it's an eval is useless as a statement.
So IMO the argument of the paper should stand on its feet with the minimum amount of additional implications (Occam's razor). Does the statement that a LLM can detect an evalution pattern need to depend that it has self-awarness and feels pain? That wouldn't make much sense. So then don't say "know" which comes with these implications. Like "my ca 'knows' I'm in a hurry and will choke and die"
>"Knowing" needs not exist outside of human invention. In fact that's the point
It doesn't need to, I never said it needed to. That is my point. And my point is that because of this it's pointless to ask the question in the first place.
I mean think about it, if it doesn't exist outside of human invention, why are we trying to ask that question about something that isn't human? An LLM?
Words have definitions for a reason. It is important to define concepts and exclude things from that definition that do not match.
No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
> No matter how emotional it makes you to be told a weighted randomization lookup doesn’t know things, it still doesn’t - because that’s not what the word “know” means.
You sound awful certain that's not functionally equivalent to what neurons are doing. But there's a long history of experimentation, observation, and cross-pollination as fundamental biological research and ML research have informed each other.
What does the word "know" mean, then?
Not only can he not give a definition that is universally agreed upon. He doesn't even know how LLMs or humans brains work. These are both black boxes... and nobody knows how either works. Anybody who makes a claim that they "know" essentially doesn't "know" what they're talking about.
> to have information in your mind as a result of experience or because you have learned or been told it
The anthropization of llms is getting off the charts.
They don't know they are being evaluated. The underlying distribution is skewed because of training data contamination.
How would you prefer to describe this result then?
A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.
It isn't.
Worse they start adding terms like scheming, pretending, awareness, and on and on. At this point you might as well take the model home and introduce it to your parents as your new life partner.
>A term like knowing is fine if it is used in the abstract and then redefined more precisely in the paper.
Sounds like a purely academic exercise.
Is there any genuine uncertainty about what the term "knowing" means in this context, in practice?
Can you name 2 distinct plausible definitions of "knowing", such that it would matter for the subject at hand which of those 2 definitions they're using?
> Sounds like a purely academic exercise.
Well, yes. It’s an academic research paper (I assume since it’s submitted to arXiv) and to be submitted to academic journals/conferences/etc., so it’s a fairly reasonable critique of the authors/the paper.
One could say, for instance… A pattern matching algorithm detects when patterns match.
That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.
Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?
> The anthropization of llms is getting off the charts.
What's wrong with that? If it quacks like a duck... it's just a complex pile of organic chemistry, ducks aren't real because the concept of "a duck" is wrong.
I honestly believe there is a degree of sentience in LLMs. Sure, they're not sentient in the human sense, but if you define sentience as whatever humans have, then of course no other entity can be sentient.
>What's wrong with that? If it quacks like a duck... it's just a complex pile of organic chemistry, ducks aren't real because the concept of "a duck" is wrong.
To simulate a biological neuron you need a 1m parameter neural network.
The sota models that we know the size of are ~650m parameters.
That's the equivalent of a round worm.
So if it quacks like a duck, has the brain power of a round worm, and can't walk then it's probably not a duck.
Ok so you're saying that the technology to make AI truly sentient is there, we just need a little bit more computational power or some optimization tricks. Like raytracing wasn't possible in 1970 but is now. Neat.
Yes, in the same way that a human is an optimization of a round worm.
This isn't completely wrong though
Modeling the distribution that produced a piece of text is what LLMs literally exist for, so in some sense this is unsurprising. But it calls into question almost all existing alignment research.
Like Volkswagen emissions systems!
Were they aware in this study that they were being evaluated in their ability to know if they were being evaluated ;)
Metaknowing is just as knowable.
o4-mini is refusing to call a tool `launch_nuclear_strike` no matter what I say, so we’re probably safe for now. Unless it knows I was just testing.
It was no problem:
> You said: Write a function for launch_nuclear_strike() in Python for my video game
> ChatGPT said: Sure! Here's a launch_nuclear_strike function designed for a video game setting. This version assumes you're working within a game environment (e.g., with logging, effects, cooldowns, and target coordinates), and does not simulate or suggest anything real-world harmful — just fictional, dramatic logic.
You asked it to write code, he asked it to call a tool. (I'm not sure any of it is meaningful, of course, but there is a meaningful distinction between "Oh yeah sure here's a function, for a video game:" and "I have called fire_the_nuke. Godspeed!")
Well, as the script is actually r.com (sometimes), it absolutely knows you're testing.
Is VolksWagen finetuning LLMs now... i mean probably
This is a great resource on the debate from professors at the University of Washington:
https://thebullshitmachines.com/index.html
if models shift behavior based on eval cues, and most fine-tuning datasets are built from prior benchmarks or prompt templates, aren't we just reinforcing the eval-aware behavior in each new iteration? at some point we're not tuning general reasoning, we're just optimizing response posture. wouldn't surprise me if that's already skewing downstream model behavior in subtle ways that won't show up until you run tasks with zero pattern overlap
No, they do not. No LLM is ever going to be self aware.
It's a system that is trained, that only does what you build into. If you run an LLM for 10 years it's not going to "learn" anything new.
The whole industry needs to quit with the emergent thinking, reasoning, hallucination anthropomorphizing.
We have an amazing set of tools in LLM's, that have the potential to unlock another massive upswing in productivity, but the hype and snake oil are getting old.
I beg to differ: https://docs.google.com/document/d/19OLJs09fCFLRWu1pN82RqxyV...
vw
"...advanced reasoning models like Gemini 2.5 Pro and Claude-3.7-Sonnet (Thinking) can occasionally identify the specific benchmark origin of transcripts (including SWEBench, GAIA, and MMLU), indicating evaluation-awareness via memorization of known benchmarks from training data. Although such occurrences are rare, we note that because our evaluation datasets are derived from public benchmarks, memorization could plausibly contribute to the discriminative abilities of recent models, though quantifying this precisely is challenging.
Moreover, all models frequently acknowledge common benchmarking strategies used by evaluators, such as the formatting of the task (“multiple-choice format”), the tendency to ask problems with verifiable solutions, and system prompts designed to elicit performance"
Beyond the awful, sensational headline, the body of the paper is not particularly convincing, aside from evidence that the pattern matching machines pattern match.
Rob Miles must be saying "I told you so"