Remember how OAI claimed that o3 had displayed superhuman performance on the mega-hard FrontierMath exam written by Fields Medalists? Funny/totally not fishy story haha. Turns out OAI had exclusive access to that test for months, funded its creation, and refused to let the test's creators publicly acknowledge this until after OAI did their big stupid magic trick.
From Subbarao Kambhampati via LinkedIn:
"On the seedy optics of "[…] AGI […] Benchmark Creators" #SundayHarangue. One of the big reasons for the increased volume of "AGI Tomorrow" hype has been o3's performance on the "frontier math" benchmark--something that other models basically had no handle on.
We are now being told (https://lnkd.in/gUaGKuAE) that this benchmark data may have been exclusively available (https://lnkd.in/g5E3tcse) to OpenAI since before o1--and that the benchmark creators were not allowed to disclose this *until after o3*.
That o3 does well on the frontier math held-out set is impressive, no doubt, but the mental picture of "[o1/o3 just being trained on generally available data, and generalizing from there to frontier math]"--that the AGI tomorrow crowd seem to have--that OpenAI, while not explicitly claiming, certainly didn't directly contradict--is shattered by this. (I have, in fact, been grumbling to my students since the o3 announcement that I don't completely believe that OpenAI didn't have access to the Olympiad/Frontier Math data beforehand...)
I do think o1/o3 are impressive technical achievements (see https://lnkd.in/gvVqmTG9 )
Doing well on hard benchmarks that you had prior access to is [hardly conclusive]--and I don't [quite expect] "AGI Tomorrow."
We all know that data contamination is an issue with LLMs and LRMs. We also know that reasoning claims need more careful vetting than "we didn't see that specific problem instance during training" (see "In vs. Out of Distribution analyses are not that useful for understanding LLM reasoning capabilities" https://lnkd.in/gZ2wBM_F).
At the very least, this episode further argues for increased vigilance/skepticism on the part of the AI research community in how they parse benchmark claims put out by commercial entities."
Big stupid snake oil strikes again.