> Krapivin was not held back by the conventional wisdom for the simple reason that he was unaware of it. “I did this without knowing about Yao’s conjecture,” he said.
This idea resonates with me, especially in the context of modern physics. The extensive learning required to make advancements often means that one's thinking is heavily influenced by established theories and teachings, potentially stifling true innovation.
If I may be so bold, it resonates (with you, with me, with most people) because the idea that an outsider can make revolutionary discoveries that contradict the status quo is inherently attractive. It scratches many emotional itches: tweaking the nose of "the man"; suspicion of institutions; being an unknown hero (before the discovery); discovering one's inherent, but unrecognized, greatness; and more.
But the reality of the matter is that for every Krapivin, there are thousands of other people who tried and did not succeed. And more importantly, there are hundreds of others who did the work, gained the foundational knowledge, and THEN made a discovery that changed how we think about the world, even if only a tiny bit. While it's not as romantic, it's the way reality works. Mostly.
It sounds like Krapivin is continuing his education. And hopefully his hard work will hone his brilliance and help him to make even more discoveries in the future.
We can have both. I think we need both. Some problems are better suited to solutions coming from outsiders. Most are not. We don't know which are which.
Physics skunkworks where the inductees are purposefully shielded from status quo ideas, as a hedge against local maxima.
Sounds like a great idea for a science fiction story.
“The Gold at the Starbow’s End” by Frederik Pohl is in that direction.
https://en.wikipedia.org/wiki/The_Gold_at_the_Starbow%27s_En...
> because the idea that an outsider can make revolutionary discoveries that contradict the status quo is inherently attractive
This might be satisfying if you have a grievance with the "institution", but to OP's point it's also interesting because we often limit our areas of exploration to what is conventionally accepted. If you are ignorant of convention you are more likely to retread ground, but less likely to be bounded by it. As you say, conventional wisdom is conventional because it's pretty dang good, so this doesn't often pay off.
I find that thought reassuring. There is less of a gap between oneself and the greats than one might think.
Yeah, everybody wants to be the Karate Kid who binged karate studies in one montage and then won the tournament instead of training in the dojo for years and still having to cheat to come close to winning (and still failing) like the chumps at Cobra Kai.
>> Yeah, everybody wants to be the Karate Kid who binged karate studies in one montage and then won the tournament instead of training in the dojo for years and still having to cheat to come close to winning (and still failing) like the chumps at Cobra Kai.
Somehow I think you missed the lesson of that movie, if there is one. It's been a long time and I can't put it into words well, but I suspect it was about someone with character and humility versus brute force and cheating. Something like that, but far different from an underdog finding a shortcut to success.
I know it's beside the point of the film, but it's still worth thinking of what it would be like to train for years and years on end at an infamously brutal school to be handily defeated by a kid with a few months of one-on-one mentorship.
I think Karate Kid must win the prize for the most diegetic analysis of the original text of all time.
This exact thread was posted the last time this article was posted.
Is this site all bots? Hello? Any real humans?!?!
Just old farts repeating the same 4 perspectives back at each other, without remembering or caring about the last time they did it.
Hey! I resemble that remark! :)
That was the first thought I had, too! Glitch in the Matrix, anyone?
Been here for 15 years. There's been a slight eternal summer / "YCombinator is a Thing Now" effect, and a quite dramatic effect of agreeableness being rewarded (when people hear "agreeableness", they can misread it as kindness). On HN, a common manifestation of agreeableness is the not-even-wrong idea that physics is stuck.
> the not-even-wrong idea that physics is stuck
But not all that right either. The thing about physics is that most progress is being made in highly theoretical areas without direct obvious applications. That might look like stalling because it's been a long time since the last new insight a layman could understand.
I agree.
I live in Boston, and there are a ton of Gillette employees here. This is about 7 years ago. They're talking about their fight to innovate for the millennial consumer. I say something like, yeah, there hasn't been a push for anything new since that 4-blade razor. They're immediately able to name 2 or 3.
I guess what I'm saying is, it's not wrong, it's insidiously not even wrong, in a way that shades rather than illuminates.
second chance pool? https://news.ycombinator.com/item?id=26998308
Not in this case. taylodi's comment is timestamped "2025-03-17T14:17:19" (1742221039) and lines up with Algolia's index. casey2 is presumably referring to the similar-sounding, not identical, top comment in https://news.ycombinator.com/item?id=43002511
All of the advances of modern physics were made by people who were well trained and well acquainted with the state of the art.
There is a myth that Einstein was an outsider. He had a degree in physics. He was working as a patent clerk because he couldn't find a job as a high school teacher, not because he didn't know physics.
One of his earliest great works pointed out something wrong at the foundations of the theory: Maxwell's equations of electromagnetism are not invariant under Galilean transformations, so E&M and Galilean relativity are incompatible. The principle of symmetry is one of the foundational issues in physics. He also had the wisdom to understand the physics of the new transformation, the Lorentz transformation, which is what we know today as Special Relativity.
Yes, of course he was well-trained and had enough background, but the problem at the time was also the kind of problem that was actually solvable (i.e. not limited by the technology of the day) and that required a new framework with new understanding.
Potential digression here, but this is why I absolutely insist that high-performing software teams have at least one junior. You need someone asking questions like this.
Would it work in this case? Had he asked about this on a team, he would have been told about the existing approach and that would be it. I've been on both sides, as a junior questioning common practices and as a senior answering such questions, and it always resulted in a transfer of common knowledge, not some breakthrough.
I totally support searching for new truths, but we must not forget why the phrase "do not roll your own crypto" exists. It is OK, or maybe it even MUST be done, by students and researchers, but I am not so sure about juniors working on production systems. Still fine if you work in an R&D department.
Yes, just like evil geniuses should have their plans for world domination reviewed by a 5 year old
Of course not - that would be ridiculous - it's clearly a job for a Mini-Me! ;)
Unfortunately, I have a feeling that in the age of LLMs, this junior on the team will have no impetus to actually put in effort and _think_ about such a problem.
We survived the age of StackOverflow. I don't see why LLM's will be the death of critical thinking where all else has failed so far.
Because SO at least requires you to THINK a little about what you actually have a problem with.
With LLMs, you don't even need that. Just copy-paste the error and you get a response. Copy-paste the Jira ticket description and you get a response. This wasn't possible with SO. Yes, none of those will likely work straight away, but the point is that less thinking is required.
Hopefully, the junior's code will be reviewed before it gets merged
I can't imagine blindly committing what LLMs spit out will get you very far, much less into a job.
It's quite fashionable, people call it "vibe coding" with Cursor in "yolo mode"
I don't think LLMs are actually good enough for their code to work without any changes in all but the very simplest of cases.
Did we survive the age of StackOverflow though? The market (globally) is absolutely flooded with not-even-mediocre software devs who are effectively doing what an LLM does, i.e. finding the most plausible-looking answers on SO and somehow munging them together, without any real understanding of what they're doing or why it's working (or not working). The number of people charging contract rates while lacking an understanding of actual software design principles (largely language-agnostic) and having no idea how computers actually work is scary.
Totally, the future is nearly identical to the past.
I think a presence of even single senior can dampen the questioning part. So I prefer all junior team to make some big path breaking changes.
It’s nice to have someone around who doesn’t understand that I’ve given them hard work. Big, open ended tasks like “invent a photogrammetry pipeline”
Since people are getting a little squirrelly, I think it's important to point out that this discovery was only in contradiction with a conjecture, not something anyone pretended to prove. Conjectures exist to be falsified.
Somewhat similar to Planck's principle:
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it."
Or more simply: "Science progresses one funeral at a time."
This is just a phrase in an article, there is no reason to believe it reflects the reality in any way. For example, the guy faced a sceptical advisor's response to his proposal. You could say "Krapivin was held back by the conventional scepticism of his boss" (which is a real thing vs the mythical power of conventional wisdom). Yet he wasn't.
Martha Argerich has a similar anecdote when it came to learning a very difficult piano piece. Ravel's Gaspard de la nuit is notoriously difficult, even for accomplished pianists. She learned it (along with another tough one) in 5 days because her teacher told her to learn it. She claims that she was helped by the fact that she was unaware that it was supposed to be difficult.
One question I had in college was if learning inhibits our ability to learn new things. We can gain knowledge but is that the same as learning? I'm not sure where the line sits between questioning and obstinance.
Looking back at Theranos, lots of "experts" (at Stanford, etc.) told her her idea was impossible given the small amount of blood but she convinced people she found a way. It was all a fraud but is there a way to do blood testing at that scale? Do we just need people to keep failing and see if anyone ever gets there? Should academia stick to things that aren't known to be "impossible"? I don't have any answers but find it interesting to contemplate.
Picking Theranos as an example in favor of ignoring conventional wisdom is really a bold choice. I know there are better examples, based on actual results instead of wishful thinking.
Avoiding more recent (aka controversial) examples, the best search engines of the early Internet required users to trawl through pages and pages of search results, with the useful answers on page 8 or so. Conventional wisdom was this was as good as it was gonna get, and that's just how search is. That was, of course, until Google came around and totally changed the game.
Your example of a time where ignoring the "experts" was a good thing was a time when the experts were correct?
Do you have a better example for setting up the context to ask whether blood testing at that scale could ever become possible if enough people kept trying to crack that nut, contrary to the general consensus of infeasibility?
Why would you insist on framing a general discussion of the wisdom of defying conventional wisdom in terms of blood tests? Blood tests only come up in this context because of the one time someone lied about them.
You wouldn't. It has little to do with the general discussion. It was quite clearly posted as an aside.
It came up because the previous commenter was on a train of thought where the general topic led him to think about Theranos, which led him to think about blood testing in general, and thus questions about that arose.
That is how discussions occur. If that slide into a new topic doesn't interest you, you don't need to reply.
Speaking of "don't need to reply", there are many times when I start writing a comment, realize that in fact it's wholly off topic, and delete it. Not every "train of thought" needs to be aired out.
No train of thought ever needs to be aired out. But at the same time, if you don't see your questions asked, how are you ever to know? Surely we are not anti-education types around here?
You are right that if you find yourself starting to make off-topic statements, like some of the earlier comments with their authors wanting to arbitrarily assert that they don't know how to read, it is time to go outside instead.
Why would you need a better example? It's a very, very stupid question; asking it is not worthwhile.
This is the problem the experts pointed out:
If your sample is so small that a substance you're looking for isn't present at all, you will fail to detect that substance, even if it was present in the pool you drew your sample from.
Theranos claimed the reverse, that they could detect a substance whether it was present or not.
How much thought do you want to devote to the question of whether a different approach might realize that dream?
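To put rough numbers on that (a minimal sketch; the 0.5 copies-per-microliter concentration and the sample volumes are hypothetical, not Theranos specifics): if an analyte averages c copies per microliter and you draw v microliters, the copy count in your sample is roughly Poisson with mean c*v, so the chance your sample contains zero copies at all is exp(-c*v).

    import math

    def p_zero_copies(copies_per_uL: float, sample_uL: float) -> float:
        """Chance a sample contains zero copies of an analyte, modeling
        the copy count as Poisson with mean = concentration * volume."""
        return math.exp(-copies_per_uL * sample_uL)

    # Hypothetical rare analyte averaging 0.5 copies per microliter.
    for volume_uL in (1, 10, 100, 5000):   # finger-stick drop vs. venous-draw scale
        print(f"{volume_uL:>5} uL sample: P(zero copies) = "
              f"{p_zero_copies(0.5, volume_uL):.2e}")

No assay chemistry, however clever, can recover a molecule that never made it into the tube, which is the gap between the pitch and the physics.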
Do you have a personal example that you can point to regarding modern physics?
Do you have an example of a meaningful breakthrough in that field in the last 25 years?
What's more likely, that we are nearing completion of understanding, or that our approach is misguided and in need of a Newtonian revolution of method?
Physics is the poster child of a discipline that knows its foundations are wrong. Basically every physicist understands that our current theories are full of holes and a new way of thinking is needed. So I don't really buy the idea that physics in particular is stifled by a rigid adherence to the status quo.
The charitable version of this is that to reconcile all the holes, we in fact need radically new and different mathematical underpinnings that aren't currently on the horizon. I don't know how that could be true; certainly any new foundation would have to reduce to something very like the current theories under already-studied conditions. If it is, though, we might be on a really big local maximum, and the path off of it might look really weird and nonsensical for a long time (which is why I can't quite bring myself to fully dismiss Stephen Wolfram, for instance :D).
Maybe the trick is to forget math entirely! Accept we live in a universe where 2+2=, where pi will change before you're even halfway around the circle!
Cast off the shackles of rationality and embrace a universe where the only constant is change.
While you're at it you should smoke this shit, it's wild.
Username checks out.
Not 25, but about 30 years ago, Peter Shor showed that quantum systems could be arranged and evolved in a manner that lets us factorize large numbers exponentially faster than by any known classical means. That was a humongous expansion in what we thought were the possible physical evolutions in our universe. Related ideas have since been used to significantly expand our understanding of black holes, for example.
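For anyone wondering why finding a period helps with factoring, here's a toy sketch of the classical reduction wrapped around Shor's quantum step (the function names and the N=15, a=7 example are mine; the quantum circuit is only needed to find the period efficiently for large N, so it's brute-forced here):

    from math import gcd

    def order(a: int, n: int) -> int:
        """Smallest r > 0 with a**r == 1 (mod n), found by brute force.
        This is the step a quantum computer speeds up."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor_via_period(n: int, a: int):
        """If a is coprime to n, its order r is even, and a**(r/2) != -1 (mod n),
        then gcd(a**(r/2) - 1, n) is a nontrivial factor of n."""
        g = gcd(a, n)
        if g != 1:
            return g, n // g            # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            return None                 # unlucky choice of a; pick another
        f = gcd(pow(a, r // 2, n) - 1, n)
        return f, n // f

    print(factor_via_period(15, 7))     # -> (3, 5)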
25 years takes us back quite a bit. Meaningful breakthroughs in physics I can think of:
The Higgs Boson was a pretty big thing. We "saw" gravitational waves with LIGO. There was that picture of a black hole. Despite the unavailability of a useful quantum computer, there's been a ton of advancement in that area. Neutrino stuff won the 2015 Nobel Prize, and that stuff will require new physics.
Reminds me of (I think) the PostIt note inventors saying that if they'd looked at the research ahead of time, they never would have even tried.
That says more about the culture of giving up than of doing research.
Research can tell you immediately if something is possible, or that you need to do more research.
This intrigues me, can you share more about this?
?
https://www.post-it.com/3M/en_US/post-it/contact-us/about-us...
There’s some truth to that. But there are also a LOT of “self-taught” cranks out there who think they’ve discovered an amazing new theoretical framework… but if they’d done any background reading, they’d know it’s been tried and didn’t work.
So what? Let them crank on, it's no less productive than most career researchers, collecting data on the 10,000th subdegree of some nested abstraction. (It is good to have some number of people doing that!) To me the two seem to actually be extremely similar behaviors except one is blessed by the church and the other is heterodoxy.
Thou shalt not inquire without passing through the blessed gates of wisdom and drinking from the fountain of holy knowledge!
It's a heuristic for deciding whom you don't need to take seriously, given the limited time you have available to approach new concepts.
Credentialism should never be a fundamental basis for refuting new theories, but it might be a reason to not pay attention to them.
I would say a reason to limit the attention paid to it. If some crank came to me with a perpetual motion machine I might take 2 minutes to ask them, "So how does this account for the conservation of energy?" If their first response is "Energy conservation doesn't apply at the QUANTUM level." then I can quickly move on and ignore them. If their first response is "I think it is extracting energy from the Earth's rotation through interaction with its magnetic field." then it may be worth me investing another 5 minutes to see what the next level of reasoning says.
> Let them crank on
Spoken like someone without a physics department email address.
It matters because they waste the time of serious researchers.
How? It's not like those cranks give out grants that influence how researchers spend their time?
Sending them manuscripts, publishing nonsense on the arxiv, reducing the overall SNR of research.
The amount of spam and nonsense already exceeds many, many lifetimes of all researchers who ever lived combined, so unless you can demonstrate that the current generation, for some weird reason, reads a lot of extra published nonsense on the arXiv, the practical impact of the change in ratio is still zero.
Ok.
Look, I’m not saying the cranks should be dragged out back and shot, just that they are a net negative.
I think that researchers, educators, and others with power over institutions and culture publicly lampooning and attacking, as crackpots, anyone who even dares to ask a "dumb" question or hold a "dumb" theory without the proper rite of passage and standing has a larger negative effect than whatever positive effect these attacks have in dissuading (probably not!) the delusional and fraudulent from stuffing people's mailboxes with their theories and amassing followings.
Political thinking that probably predates the Egyptian pyramids: if you empower people with freedom, it will be abused in ways that produce some suffering for everyone, suffering that wouldn't exist had the freedom not been granted. There would definitely be tradeoffs to having a culture of free and open learning, research, and knowledge sharing, unguarded by institutions.
It resonates because of dopamine, and little else
https://youtu.be/11lPhMSulSU?si=Lqi274EhmzM7nTgB
I was hoping they'd have a discussion of the algorithm itself. Quanta is usually good about making these sorts of things approachable.
In any case, the full paper is here [1] if you don't want to scroll through the Wired article.
[1] https://arxiv.org/abs/2501.02305
Quanta did have an article about this discovery: https://www.quantamagazine.org/undergraduate-upends-a-40-yea...
Related HN discussion:
https://news.ycombinator.com/item?id=43002511
Right, the Wired article is just a 1:1 reprint of the Quanta article. Both lack a substantive description of the algorithms in question
If my reading of the paper is correct, this kind of hash table is incredibly complicated to resize, because resizing would invalidate all previously handed-out pointers unless you only do chaining?
The other challenge is that computing N hash functions is quite expensive, so in practice I think this ends up slower in real-world terms despite the big-O speedup?
Does this actually end up beating open-addressed hash tables? It seems more like a neat hypothetical CS result that wouldn't have any actual real-world application? That's cool in and of itself, but I'm curious if there's any real-world reason to do this.
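On the cost of "computing N hash functions": my guess (an assumption on my part, not something the paper commits to) is that an implementation would hash once and derive the per-level probe positions by cheap integer mixing. Here's a toy multi-level open-addressing sketch in that spirit; the class, constants, and mixing trick are made up for illustration and are NOT the paper's funnel-hashing construction or its guarantees:

    PROBES_PER_LEVEL = 4

    class MultiLevelTable:
        """Toy multi-level open addressing: geometrically shrinking levels,
        a bounded number of probes per level, fall through on failure."""

        def __init__(self, capacity: int):
            self.levels = []
            size = capacity
            while size >= PROBES_PER_LEVEL:
                self.levels.append([None] * size)
                size //= 2                # total slots stay under 2 * capacity

        def _probes(self, base_hash: int, level_idx: int, level_size: int):
            # Mix ONE base hash with the level index (cheap integer mixing)
            # instead of computing a fresh, expensive hash per level.
            h = (base_hash ^ (level_idx * 0x9E3779B97F4A7C15)) & 0xFFFFFFFFFFFFFFFF
            for i in range(PROBES_PER_LEVEL):
                yield (h + i) % level_size

        def insert(self, key, value) -> bool:
            base = hash(key)
            for li, level in enumerate(self.levels):
                for pos in self._probes(base, li, len(level)):
                    if level[pos] is None or level[pos][0] == key:
                        level[pos] = (key, value)
                        return True
            return False                  # every probed slot occupied; treat as full

        def get(self, key):
            base = hash(key)
            for li, level in enumerate(self.levels):
                for pos in self._probes(base, li, len(level)):
                    slot = level[pos]
                    if slot is None:
                        return None       # insert would have used this empty slot
                    if slot[0] == key:
                        return slot[1]
            return None

    t = MultiLevelTable(1 << 10)
    assert t.insert("answer", 42) and t.get("answer") == 42

Whether something like this beats a well-tuned single-array open-addressed table in wall-clock time is exactly the empirical question; as I read it, the paper's results are about probe counts, not cache behavior.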
Still worth it for situations where you can memoize hashes and reuse them. Basically, any situation where you want to intern strings.
Also worth it for cases where you already know the maximum size of the table.
Likely why the article said that it wouldn’t lead to any immediate applications.
Cause from the headline, one would get all excited about the applications
original discussion: https://news.ycombinator.com/item?id=43002511
Top comment there: https://news.ycombinator.com/item?id=43005231
Second-to-top comment here: https://news.ycombinator.com/item?id=43388872
Shows the wide range of ideas we produce here.
Based on the last two convos I saw on this, I feel like there’s a simpler algorithm here waiting to break out.
Underneath this is a sort of b-tree scented data structure that avoids the bimodal distribution on insertion times, which is important for a concurrent hash table.
This article strongly reminded me of Heckel's diff, which was first published almost 47 years ago. https://dl.acm.org/doi/10.1145/359460.359467
The performance consideration Paul Heckel identified concerned index access in arrays versus hash tables: hash table slots are probed randomly, or pseudo-randomly, until the desired entry is found, whereas indexes in an array are accessed in index order.
If it was his discovery, it would be nice if they'd given him first author on the paper's author list (Farach-Colton, Krapivin, Kuszmaul). Though I understand if the proofs were not done by him.
I believe alphabetical is the norm in computer science, is it not?
It is not, other than sometimes in the case of equal contribution. The first and sometimes second authors are the most important, and the last author is often the advisor/senior researcher supervising the work.
If you look at the papers of the third author [1], almost all of them seem to be alphabetical by last name.
[1] https://arxiv.org/search/cs?searchtype=author&query=Kuszmaul...
This is not accurate; it depends on the subfield. As a rule, the more theoretical the subfield, the more likely that alphabetical order is used. See e.g. papers from a theoretical conference like STOC vs. a systems conference like HotOS.
Interesting! I didn't realize it varied between sub-disciplines of CS, I guess.
Theoretical computer science and cryptography both typically do alphabetical. Maybe because of their adjacency to pure math?
I wonder if there is a memory consumption tradeoff for this new data structure? Based on a few initial implementations I see on GitHub, it looks like it may be significant? Still a nice discovery.
What makes the new memory consumption significant? From the paper, they break the initial array into log(n) arrays of size 1, 2, 4, 8...
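If the sub-arrays really do grow geometrically like that, the total slot count stays within a constant factor of a single array of size n, so any significant overhead in those GitHub implementations would have to come from per-level bookkeeping or low load factors rather than the layout itself. A quick sanity check under that assumption (n chosen arbitrarily):

    # log(n) sub-arrays of sizes 1, 2, 4, ..., n sum to just under 2n.
    n = 1 << 20
    sizes = [1 << k for k in range(n.bit_length())]   # 1, 2, 4, ..., 2**20
    print(len(sizes), "sub-arrays, total slots:", sum(sizes), "vs n =", n)
    # -> 21 sub-arrays, total slots: 2097151 vs n = 1048576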
Earlier: https://news.ycombinator.com/item?id=43002511