During the first of 3Blue1Brown's "lockdown math" livestreams, he threw out a fun poll question, asking the audience "What integer will most people enter into this box?"
He offered this explanation: "Did you know that 69 is the first number where, if you square it, and then you cube it, those two values, between them, encounter the numbers — or the digits — 0 through 9, once and only once? Yeah. It's the first number with that property, which I assume is why that was the second-most-popular answer."
We can confirm that: 69^2 = 4761 and 69^3 = 328509, and between them, as promised, every decimal digit is present with no repeats. Nice.
But 3Blue1Brown undersold its niceness. Not only is 69 the first number with this property, it's the only number with this property. I confirmed this with a computer program.
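Here's a minimal sketch of that kind of check in Python (not my original script, just the idea): concatenate the digits of n^2 and n^3 and ask whether they form a permutation of 0 through 9.

```python
def is_nice_base10(n):
    """True if the digits of n^2 and n^3 together use 0-9 exactly once."""
    digits = str(n ** 2) + str(n ** 3)
    return len(digits) == 10 and set(digits) == set("0123456789")

# Anything 100 or larger already has more than ten combined digits,
# so this range covers every possible candidate.
print([n for n in range(1, 100) if is_nice_base10(n)])  # -> [69]
```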
But even this undersells its niceness. I was discussing this fact with a friend recently, and they were curious about how many numbers with this property there were in other bases.
So I had my program check other bases, and it got up to about base 30 before becoming prohibitively slow.
Yet the program turned up only one number whose square and cube, written in that base, contain all the digits in that base with no repeats.
69, in base 10.
That was when my interest was truly piqued.
(By the way, I changed the transparency so the background rainbow image is only at 69%.)
I asked around online for a name for numbers like 69 (just in case there are more), and it was referred to as a "moral obligation" to call them nice numbers. (Other suggestions included "sixty-niners" and "square-cube pandigitals.") I don't really have a better name, so that's what I'm going with. Anyway, they definitely won't be getting a lump of coal this Christmas.
What Are The Odds?
Pretty low! If the ten digits were random, we could calculate the probability of niceness by dividing 10 factorial (the number of ways to arrange the digits 0 to 9) by 10^10 (the number of 10-digit strings). That's about a 1 in 2756 chance!
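In symbols, that works out to:

$$\frac{10!}{10^{10}} = \frac{3{,}628{,}800}{10{,}000{,}000{,}000} \approx \frac{1}{2756}$$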
In base 10, the range of possibilities — that is, the range of numbers whose square and cube combined have exactly ten digits in the first place — consists only of the 53 numbers from 47 to 99.
53 possibilities. 1 in 2756 chance for each. That's an expected value of about 0.02 examples if they were distributed randomly. And yet there is one!
The Two Mysteries
Since my program turned up no nice numbers in other bases, I wanted to make sure it was working properly. Therefore, I had it print numbers whose square and cube had the right length, but one or two repeats. Sure enough, it turned up plenty of near misses.
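For the curious, here's a sketch of that near-miss check (again, the idea rather than my exact code): write the square and the cube out in base b, and if they have the right total length, count how many digits of the base are missing.

```python
def to_digits(n, base):
    """Digits of n written in the given base (most significant first)."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1] or [0]

def missing_digits(n, base):
    """0 means nice, 1 or 2 means a near miss; None means the total length is wrong."""
    digits = to_digits(n ** 2, base) + to_digits(n ** 3, base)
    if len(digits) != base:
        return None
    return base - len(set(digits))

base = 12
print([n for n in range(1, base ** 3) if missing_digits(n, base) in (0, 1, 2)])
```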
It did seem like the near misses were thinning out a bit. Maybe 69 was the only one!
In any case, there were two interesting mysteries that I noticed. The first mystery: the actual numbers that were close seemed like they were always increasing as the bases increased. My code definitely started checking all the way from 0 every time, so this didn't have to be the case. (Although, after realizing this, I had my code not start from 0, hoping this would be an optimization — didn't help that much, though.)
The second mystery: something was weird about the bases with near misses. When we consider these bases mod 5 — in other words, taking the remainder when we divide by 5 — there seem to be more examples in bases that are 0 or 2 mod 5, fewer in 3 or 4 mod 5, and absolutely none in 1 mod 5.
Why was this? Was there some reason that squares and cubes in 1-mod-5 bases had to repeat digits?
Sort of. Turns out the lengths are impossible! In a base equivalent to 1 mod 5, a square-cube pair can never have a number of digits equal to the base.
The Impossibility of 1 Mod 5
Consider a base b equal to 1 mod 5. The beautiful math derivation below illustrates why the square and cube of b to the power k (a totally arbitrary and not intentionally selected example number, yes, for sure) together have 5k+2 digits in this base.
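In brief: write b = 5k + 1. A number N has ⌊log_b(N)⌋ + 1 digits in base b, so

$$(b^k)^2 = b^{2k} \text{ has } 2k+1 \text{ digits}, \qquad (b^k)^3 = b^{3k} \text{ has } 3k+1 \text{ digits},$$

for a combined total of (2k + 1) + (3k + 1) = 5k + 2 digits.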
As a reminder, b equals 5k+1. Can anything have b digits?
Well, no. 1000000 is the smallest number with seven digits. In general, a 1 with a bunch of zeroes after it is the smallest number with that number of digits. So the number right below b^k, (b^k)–1, will have a square with 2k digits and a cube with 3k digits. For a total of 5k.
So nothing will have 5k+1 digits! Bases with a remainder of 1 mod 5 can be ruled out.
Ranges
I used similar logic to determine the ranges for all bases:
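Sketched out, with n standing for the number whose square and cube we're checking, and k defined so that b is 5k plus its remainder mod 5, they come out roughly like this:

$$
\begin{aligned}
b \equiv 0 \pmod 5:&\quad \lceil b^{\,k-1/3} \rceil \le n \le b^{\,k} - 1\\
b \equiv 1 \pmod 5:&\quad \text{no valid } n \text{ at all}\\
b \equiv 2 \pmod 5:&\quad b^{\,k} \le n \le \lfloor b^{\,k+1/3} \rfloor\\
b \equiv 3 \pmod 5:&\quad \lceil b^{\,k+1/3} \rceil \le n \le \lfloor b^{\,k+1/2} \rfloor\\
b \equiv 4 \pmod 5:&\quad \lceil b^{\,k+1/2} \rceil \le n \le \lfloor b^{\,k+2/3} \rfloor
\end{aligned}
$$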
(Don't worry about the floors and ceilings: they're just there to keep the numbers whole, and I'll ignore them in the rest of the blog post.)
In any case, I'm not going to justify the ranges too rigorously ("an exercise for the reader"), but they can be proven in much the same way. We saw that any time the exponent is a whole number, both the squares and the cubes "roll over" in number of digits. However, when the exponent is an integer plus 1/3, the cubes will still roll over, but not the squares. Similarly, when the exponent is an integer plus 1/2, the squares will roll over, but not the cubes. Either way, that increases the number of total digits by one.
Anyway, this formula for our ranges actually answered my first mystery as well! I had been wondering why the near-miss numbers were strictly increasing, even moving into another base. But these ranges make it clear: there's simply no overlap between the ranges. Going from a base that's 4 mod 5 to one that's 0 mod 5 increases k by 1, so the old b^(k+2/3) is the new b^(k–1/3).
However, there's actually a bit of buffer in between, for the simple reason that we are increasing b. So the new b^(k+1/3), for example, is a bit bigger than the old b^(k+1/3) anyway.
So Are There More Nice Numbers?
Alright, we know the size of the ranges now! Let's use them.
Well, let's assume that any number whose square and cube have the right number of digits has a random chance of being nice. This is a big assumption — this does feel like the sort of problem where it wouldn't necessarily be pseudorandom — but based on the data I have, I don't actually doubt the assumption, because I couldn't spot any patterns.
Okay. So what is this random chance? Well, let's do what we did earlier in assessing the chances for base 10: the number of successful digit strings is b factorial, or b!, since that's the number of ways to arrange the digits we want to appear exactly once each. The number of possible digit strings is b to the power b, or b^b, since there are b choices for each of the b digits. Therefore, the chance is b!/(b^b).
Awesome. We can just multiply this by the size of the range, and we'll get an estimate for how many such numbers to expect!
To make the math cleaner, let's just look at bases that are multiples of 5. (The situation is similar for 2 mod 5, 3 mod 5, and 4 mod 5.)
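For a base b = 5k, the range runs from b^(k−1/3) up to b^k, so, substituting k = b/5, the expected count (call it E(b)) is roughly:

$$E(b) \approx \left(b^{\,b/5} - b^{\,b/5 - 1/3}\right)\cdot\frac{b!}{b^{\,b}}$$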
Let's plug this formula into Desmos and see the graph:
This was what I expected. The expected value peaks at around 0.14, and pretty soon it's tiny.
Basically, this chart shows that the expected number of nice numbers becomes vanishingly small. Kinda makes sense. We've got a b^(b/5) in the numerator and a b^b in the denominator. There is the factorial, but maybe its size doesn't matter enough to be significant.
Let's just check a few more values of b to confirm.
...what?
Suddenly, at around 120, the function shoots up, reaches a maximum value above 8, and...discontinuously jumps to 0?
I'm actually pretty sure that the discontinuous jump to 0 is some sort of error that happens when the numbers get too big for the computer. After all, if the function really did drop to 0, it would do so continuously, not in a sudden jump. Anyway, 150 factorial and 150 to the 150th power are pretty big, so I don't really blame the computer. (Did you know that any factorial above 170! is, as far as floating point is concerned, infinite?)
But the function shooting up is absolutely real. And it confused me. It implies that, once you get to around base 120, there is suddenly an abundance of nice numbers, and in fact there are an infinite number of them in higher and higher bases!
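If you want to see the function without tripping the overflow, one workaround is to compute it in log space. Here's a quick Python sketch (not my original code; it assumes, as above, a base that's a multiple of 5). Python's math.lgamma gives the log of the factorial directly, so nothing ever comes close to infinity:

```python
import math

def expected_nice(b):
    """Expected number of nice numbers in base b (a multiple of 5),
    computed in log space so the huge factorial never overflows."""
    k = b / 5
    log_range = k * math.log(b) + math.log1p(-b ** (-1 / 3))  # log of (b^k - b^(k - 1/3))
    log_chance = math.lgamma(b + 1) - b * math.log(b)         # log of (b! / b^b)
    return math.exp(log_range + log_chance)

for b in (10, 50, 100, 150, 200):
    print(b, expected_nice(b))
```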
What Is Going On With This Function?
Alright. The Google Slides images were fun, but analyzing this function requires some heavier algebra, so I'm switching to LaTeX.
We can rewrite our formula a bit.
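Factoring b^(b/5) out of the range, it looks something like this:

$$E(b) \approx b^{\,b/5}\left(1 - b^{-1/3}\right)\cdot\frac{b!}{b^{\,b}}$$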
How can we actually analyze this pesky factorial?
Well, it turns out there's a neat trick: Stirling's approximation. I currently do not know why it works, though probably I could learn. But it helps us a lot, because it approximates the factorial like this:
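It says the factorial behaves like an exponential with a square-root correction:

$$b! \approx \sqrt{2\pi b}\,\left(\frac{b}{e}\right)^{\!b}$$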
Now, this may look more complicated, but it involves simpler functions: exponents and radicals! We can handle those. And it's a pretty good approximation. In blue, the graph of the actual expected value function, and in red, the function when we Stirling-approximate it:
Oh, sorry. When I said the expected value function was in blue, I wasn't lying; it's just that the red function appears to lie entirely on top of it. Stirling's approximation isn't perfect, but it's incredibly good.
In any case, when we substitute in Stirling's approximation, the b^b terms will cancel.
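Substituting it in and cancelling, we get roughly:

$$E(b) \approx \sqrt{2\pi b}\cdot\frac{b^{\,b}}{e^{\,b}\,b^{\,b}}\cdot b^{\,b/5}\left(1 - b^{-1/3}\right) = \frac{b^{\,b/5}}{e^{\,b}}\left(1 - b^{-1/3}\right)\sqrt{2\pi b}$$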
(As it happens, when we remove the (b^b)/(b^b) term from the equation, Desmos no longer shows a discontinuous jump to 0, instead continuing upward, confirming the theory that it's a computational error.)
Anyway, let's use the fact that e^b is equal to (e^5)^(b/5) to rewrite our function:
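That gives:

$$E(b) \approx \left(\frac{b}{e^{5}}\right)^{\!b/5}\left(1 - b^{-1/3}\right)\sqrt{2\pi b}$$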
We now have three terms. The second term goes to 1, since b to the negative 1/3 power, aka 1 divided by the cube root of b, goes to zero. (Interestingly, there is a natural interpretation of this: the lower bound of our range might as well be 0 when compared to the upper bound. This actually makes sense, since the number of 10-digit strings in base 10 is much larger than the number of all shorter strings, and this discrepancy only gets bigger as the base gets larger.) The third term is also relatively small, growing only with the square root of b. It's the first term that drives the growth.
e^5 is a decent-size number, about 148. This explains why our graph was zero for so long — we're raising a tiny number like 20/148 to a power, so of course the result is tiny. But once we get to 148, b/(e^5) goes above 1, and then the expected value starts going to infinity. (The actual crossover point is really around 132, because the √2πb term does help out a bit: it's a factor of about 29, so the first term there only has to be 1/29.)
This formula also explains why, when the base is very small, the expected value isn't that low: when b is less than 5, b/e^5 is being raised to a power less than 1, which pulls the result back toward 1 even though b/e^5 itself is tiny.
In Conclusion
69 in base 10 has a nice property: its square and cube together contain every digit of its base exactly once. It's the only number with this property that my search turned up, and for a long stretch of higher bases the chance of another one appearing is exquisitely small.
But. Once we get to around base 120 or base 130, the expected quantity of nice numbers, assuming they're distributed pseudorandomly, skyrockets! So there should be more nice numbers out there in the world.
My computer program only got to around base 30, but if you're reading this, you might be able to write a better one! I used Python and didn't think too hard about optimization. I would love to hear if someone manages to discover another nice number.
Or maybe someone in the past has even looked at this problem! I couldn't find anything, but I did not look infinitely hard.
Either way, I made a prediction market about whether one is brought to my attention by the end of March 2023, so you would stand to win some fake money!
Quick Tangent About Prediction Markets
Most of my research into nice numbers happened a couple months ago, and I procrastinated finishing this blog post for a while. Ultimately, I made a prediction market on whether I'd publish it by the end of this week, with some fantastic narrative tension between the "keep it at 69%" crowd and the "but it's more likely than that!" crowd. It was also noted in the comments of said market that "thursday is 12/15/2022 and 12+15+20+22 = 69," so I meant to get it posted by then, but tragically it's two days late. (Though that comment did encourage me to procrastinate less, out of a desire to deliver the post on Thursday, and I did only do the Stirling's approximation stuff this week.) Anyway, something something, power of prediction markets.
Sorry for the long blog post drought. I hope to write more soon. Until next time!
—Jacob