Crazy Science Idea: Tell People Something Is Possible When You Actually Have No Clue
Author: Étienne Fortier-Dubois
Étienne Fortier-Dubois is a writer and programmer in Montreal. This essay was originally posted on his blog, Atlas of Wonders and Monsters.
As I write this, an increasingly large corner of the internet is ablaze with speculation around LK-99 (note: essay first published 8/2/2023).
If you’re reading this from outside that corner, or from a disappointing future timeline where LK-99 turns out to be fake and quickly forgotten, you should know that LK-99 is a material that is supposed to be a room-temperature superconductor. That is, a material that conducts electricity with no resistance, and levitates over magnets, without any need to be cooled hundreds of degrees below freezing, as all other superconductors known so far do.
Room-temperature superconductors, if they’re real, could revolutionize everything, or so we’re told. The special magnetic properties of superconductors, as well as their capacity to conduct electricity without losses, could open the door to big advances in transport (more economically feasible maglev trains), medicine (better and cheaper MRI machines), energy (easier electricity transmission), and other fields. Not to mention many future applications we simply haven’t dreamt of yet, as always happens with new technology.
So, people are excited. Are they right to be?
As I write this, several teams around the world have undertaken replication attempts. The results (summary here) are partial and contradictory. LK-99 seems to be a finicky material — simple enough to synthesize in theory (someone in the ex-Soviet Union claims to have done it in her kitchen), but possibly highly dependent on specific conditions during the process. Also not helping is the way LK-99 came to international awareness. It was made by a small team of South Korean scientists who have been researching superconductors since at least 1999. Then one of them defected from the others and published an incomplete, error-laden paper on the arXiv preprint server. Now those scientists are feuding with each other, though they all maintain that LK-99 is real, as the world watches and tries to use their (incomplete? error-laden?) recipe to make more LK-99.
So, we don’t know yet. It might be fraud! It might be a mistake! It might be real! Fortunately, we should know pretty soon.
In the meantime, I want to use this story as an excuse to explore the following question: does science benefit from occasionally thinking that something is real, even if it isn’t?
Let’s consider the scenario in which LK-99 is a true room-temperature superconductor, but it’s not really commercially viable. Maybe it’s too difficult to produce reliably in large quantities, for instance.
That would still be a big deal, for the following reason: now we would know that room-temperature superconductors are a possibility! Therefore, many smart minds would turn their attention to the field. They’d try to come up with a new production method. They’d look for similar chemicals that yield similar properties. They’d explore the adjacent theoretical space of atomic configurations. They’d replace the copper with gold to see if that gave it better superconductive properties. Eventually, someone would find something, and then we’d have revolutionary, commercially viable room-temperature superconductors. Crucially, we’d have them far faster than if LK-99 hadn’t been discovered at all.
It seems pretty self-evident that it’s easier to make progress on a problem when you know for sure that a solution exists. When the very existence of a solution is uncertain, it’s easy to get discouraged: perhaps you’re wasting your time.
Imagine you’re a physicist working on the Manhattan Project in 1943 (or the German or Soviet or Japanese equivalents). Sure, you might think, an atomic bomb seems theoretically possible; but no one has done it yet, so who’s to say that we’re going to succeed? Maybe the theory is wrong. Maybe it’s correct, but it’ll take literal tons of uranium, a relatively scarce metal, making it impractical.1
And then the first bomb actually goes off at the Trinity test in New Mexico, and the second one over the city of Hiroshima, and the third above Nagasaki, and suddenly it’s very easy for the Soviets to go full-speed ahead on their nuclear program and develop a working bomb within a few years. (And eventually also the British and the French and the Chinese and so on, though not the Germans or the Japanese, for obvious reasons.)
But no need to limit ourselves to world-altering technologies like atomic bombs or superconductors. In the far less serious field of athletics, there is the well-known story of the 4-minute mile. For decades athletes had failed to run a mile in less than 4 minutes — until, in 1954, Roger Bannister did it. Almost immediately after (well, 46 days actually), another runner did it. Then, within a year, three runners broke the 4-minute mark in a single race. Once it was known that running a mile under 4 minutes was possible, it became a common achievement.
This sort of situation comes up all the time. Once it was known (thanks to Christopher Columbus) that you could reach new lands by sailing west, tons of explorers and European kings suddenly got interested. Once John von Neumann learned that a proof of a particular theorem existed, it suddenly became easy for him to find one himself.
Ultimately, the whole reason that we have things like intellectual property laws, and Nobel Prizes and other prestigious recognition for “innovators” and “visionaries,” is that finding a solution when one is not known to exist is so damn difficult — so it needs to be rewarded somehow.
This phenomenon is also why, I think, puzzles are so much more fun to solve within video games than in real life. Since games are designed by humans, we trust that the solution exists. So we keep looking.
Okay, that’s cool: things become easy once they’ve been done once. But that’s hardly a groundbreaking idea. The hard part is still to do it the first time. How can we make that easier?
Well, let’s consider another LK-99 scenario. Let’s imagine that it’s fake, and that the original paper by the Korean scientists was totally mistaken. Actually, scratch that — let’s pretend that it was fraud all along.
Even if LK-99 is fraud, we still have thousands of people paying attention, and dozens of labs trying to create a room-temperature superconductor using methods they had never thought of trying. It seems far less likely that any one of them will succeed if LK-99 is fake, compared to the scenario where it’s real but nonviable. But it’s possible!
Imagine living in a world, ten or twenty years from now, where there’s a ton of new technology thanks to superconductors — and it was all due to a case of fraud by a small team of South Korean scientists. That fraud could be ultimately responsible for an immense improvement in quality of life. Wouldn’t that be something.
Now, lest I be accused of encouraging scientific fraud, let me emphasize that:
1. I very much hope that LK-99 is not fraudulent. That’d be quite disappointing.
2. I fully acknowledge that the drawbacks of fraud are significant:
- Once the fraud is exposed, it might slow down research in the affected field.
- If the fraud is about something that’s actually not possible, it may lead researchers to waste much of their time and resources. (Cold fusion comes to mind.)
- It’s not a good look if a lot of public money gets spent on fraudulent research, and it would probably lead to less science overall, which would be bad.
- There have been several cases of fraud precisely involving superconductor claims, and those haven’t led to superconductor breakthroughs.2
So no, even though part of me wants to write the spicy take that we should let scientific fraud go unchecked, I don’t actually think that’s a good idea. At most, I’ll say that, as for most bad things, the optimal amount of fraud is probably not zero.
But at the same time, it seems worth asking: can we somehow channel the powerful deceptive energy of fraud for good?
It is sometimes said that great innovators — startup founders, genius artists, etc. — have to be somewhat self-delusional. Not too much, or they’ll just waste their time on impossible dreams. But they have to somehow believe in their own crazy idea to a degree that’s perhaps just slightly unreasonable. Normal people, i.e. you and me, would easily give up, because we’re not self-delusional. But then that’s why we’re not great innovators.
Sometimes great innovators purposefully… not quite lie, but distort reality a little bit. When they’re trying to raise money, for example.
A great example of this is Ferdinand Magellan. When he went to the Spanish court to explain his plan of reaching the Spice Islands by sailing beyond America, he convincingly told the king that he knew of a passage that no other sailor or geographer knew about. But Magellan had never been anywhere near America, and as it turns out, his secret passage didn’t exist. At best it was a mistake, a bad interpretation of the gulf of the Rio de la Plata in what is now Argentina and Uruguay; at worst it was a lie. But Magellan did manage to get funding from the king of Spain. And find a passage he did, except much farther south. Now that passage is called the Strait of Magellan, and everyone knows Magellan’s exploit: his expedition achieved the first circumnavigation of the world.3
It’s possible that progress, especially scientific progress, ultimately depends on exceptional, ballsy, self-delusional people like Magellan. The question, then, if we want to accelerate new discoveries, is twofold:
How can we not discourage those people from pursuing their crazy ideas? There’s a lot of thinking around this already, with people creating grants for moonshot projects, etc. Part of the answer seems to also be in reforming the way we do scientific funding (which may be biased against crazy ideas). However, all solutions in this area are inherently limited by the supply of exceptional, ballsy, self-delusional people.
How can we encourage the others to pursue crazy ideas? This more or less reduces to: how can we make crazy ideas sound less crazy?
My moderately spicy answer to that is: maybe, from time to time, authoritative scientific figures should distort the facts somewhat when they tell young PhD students what is possible. Tell them that a lab, somewhere in South Korea or some similarly faraway location, is rumored to have achieved what they’re working on. That lab isn’t sharing results for some complicated reason involving legal barriers and possible incompetence, and we can’t rule out that their results are mistaken or outright fraudulent — but we’re not going to take any chances, and we’re going to make a real, heroic effort at it.
And then, after the young researcher has made a real, heroic effort, you tell them that there were problems with the South Korean results and maybe they should switch research directions, and that you’re going to help them publish their negative results anyway. The goal isn’t to waste their time!
There are a thousand problems with this idea, of course. I don’t actually support deception any more than I support fraud. But I do wonder if the Great Stagnation isn’t simply the result of everyone being a bit too transparent about what works and what doesn’t, to the extent that we all forbid ourselves from dreaming crazy, world-altering ideas.
1. This is what the Germans thought, due to a math error. But after the bomb was demonstrated at Hiroshima and Nagasaki, Werner Heisenberg, who had been the head of the German nuclear weapons program, quickly figured out the error, since he knew there was a solution. See here (cmd-F for “uranium”) for some discussion.
2. Well, not counting potential second-order consequences, of course.
3. The anecdote about Magellan’s lie or distorted facts is sourced from Stefan Zweig’s excellent biography of Magellan.
Comments

Interesting position. A counterargument would be the phenomenon of reputation traps in scientific research. A good example is the Fleischmann and Pons cold fusion announcement. After the initial frenzy in the media, and the rush to replicate the results, there was an equally frenzied denouncement when the first few groups failed to replicate easily. By the time a few groups did report anomalies consistent with replication, the news cycle had moved on. And now, if a mainstream researcher decides to study “cold fusion”, or even the rebranded “low energy lattice confined nuclear reactions”, the knee-jerk response is that the person is a crank and all that was “debunked” decades ago. Meanwhile, the anomalous observations continue to accumulate in the margins, but the phenomenon gets only a tiny fraction of the energy it deserves (especially compared to huge hot fusion projects, which are going nowhere fast).
A positive example of the phenomenon is Mendel’s data on genetic inheritance. Subsequent statistical analysis of his data showed it was too perfect to be real. He most likely “hand polished” the data in his notes. At the time, this may have helped convince other people that the theory was real, though if it had been discovered that he was cherry-picking data, then who knows what would have happened. The same is true for Millikan’s measurement of the charge of the electron with his very fiddly apparatus that took years to refine (IIRC: I know one of these seminal particle-property experiments features the experimenter hand-picking experimental runs that showed the “correct” result and discarding the rest).
Unfortunately, most scientists are caged by the granting systems that provide funds. I completely agree that crazy ideas should at times be pursued and can lead to more innovation (or maybe just a cool learning experience), but when you have five years to ‘accomplish’ all the mundane experiments you proposed in your grant, you had best stay within the lines and get it done if you want another five years of funding - and all that comes with it, like grad students, money to publish in open-access journals, etc.