Previously: Is 69 Unique? The Search for Nice Numbers
Poorly written definitions. Semi-nice numbers. Quasi-nice numbers. Multiply nice numbers. Last digit filtration. Residues mod b–1. Brute-force searching. The nerd-snipability of internet strangers, especially those who stand to win fake money on Manifold Markets, into making cool breakthroughs. And still —
alone, atop the niceness throne —
The prediction market on whether I'd learn of another nice number by end of March got 28 bettors, which is pretty high for Manifold! They were initially optimistic about the search, with their implied probability hovering around 69%. This made sense to me: I had really just scratched the surface of the search problem in my blog post, and the internet hadn't yet been unleashed to find insights, optimizations, or just better brute force.
And the internet did find insights, optimizations, and better brute force galore! They've just proved pitiful compared to the massive search space. Within two weeks, the market had dropped to 26% as no second nice number had been found.
Then the market resolved...N/A? What happened?
The Resolution Crisis of –69
December 27, 2022. Midday. I'm visiting the Hoover Dam. (Two dam facts: it was completed two years ahead of schedule; and it's on a time zone border, between Nevada and Arizona, allowing visitors like me to walk back in time.) Cadence, a mod on one of my Discord servers, has reemerged after taking a hiatus from the server. I reminisce about how she'd always bring up options I hadn't considered when I worded polls on Discord.
She says she's joined Manifold Markets — my full-time obsession these days, if you couldn't tell — so we chat a bit about it, and I mention the nice numbers market.
And that's when Cadence, in an epic narrative moment, asks if –69 counts as a nice number.
Oh no. How did I define nice numbers? "In general, a nice number, in some base, is one where the square and cube, written in that base, contain all the digits in that base with no repeats." And, negative sign notwithstanding, –69 meets that criterion.
This is quite embarrassing for me, because if I'd thought about negative numbers for a second, I'd have excluded them from the definition. –69 is...well, it's cute, but we all know it wasn't the goal. So I quickly issue a clarification to the market description saying that –69 doesn't count. However, a few minutes later, I rethink this decision: I wrote resolution criteria for a reason, and as much as they wound up insufficient here, –69 did clearly meet the conditions I had set. So I suspend betting on the market and publicly agonize over what the right decision is.
"In general, a nice number, in some base, is one where the square and cube, written in that base, contain all the digits in that base with no repeats." —Past-me, a fool who forgot about negative numbers
It's really a classic case of common-sense-spirit vs. resolution-criteria. Most people side with the common-sense spirit, but some, especially the most active Manifold Markets members, think I should stick with what I'd promised in the description, or use Manifold's option to resolve N/A, which cancels the markets and reverts all trades. I'm torn.
Ultimately, I decide to resolve N/A. My full reasoning, and some of the discussion, is in the market comments.
However, to prevent such future calamities, I add "General policy for my markets: In the rare event of a conflict between my resolution criteria and the agreed-upon common-sense spirit of the market, I may resolve it according to the market's spirit or N/A, probably after discussion." at the bottom of all my market descriptions. If I'd had that policy in place, I'd have been able to properly ignore –69.
(By the way, I'm very grateful that Manifold has an N/A option. Real-money crypto betting platform Polymarket did not have the option for their market "Will Volodymyr Zelenskyy be the 2022 TIME Person of the Year?", which specified in the description "this market will resolve to 'Yes' only if exclusively Volodymyr Zelenskyy is named as the TIME's Person of the Year. If, for example, 'Defenders of Kyiv' or 'People of Ukraine' or any other group is named as 2022 TIME's Person of the Year, this market will resolve to 'No'." Time selected as Person of the Year "Volodymyr Zelensky and the Spirit of Ukraine", which...that's horrible, the spirit of Ukraine isn't a person! So should this count as exclusively Zelenskyy? There was a lot of drama, and people I've asked have been almost evenly split, but the market was ultimately resolved 'Yes'. How would you have resolved it?)
In any case, I remake my market, excluding negative numbers (and non-integers) this time, and the probability gradually descends to 12%.
So let's get back to the math, shall we?
For fans of leading zeroes, Manifold user Yev found that 5 is what I deemed a "semi-nice number" in base 6 (jan Misali's beloved "seximal" — other bases are wannabes):
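A quick check of the claim (my sketch, not Yev's code): 5 squared is 25, which is 41 in base 6, and 5 cubed is 125, which is 325 in base 6 — together that's every base-6 digit except 0, which a leading zero can supply.

```python
def to_base(n, b):
    """Return the digits of n in base b, most significant first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

square, cube = to_base(5**2, 6), to_base(5**3, 6)
print(square, cube)           # [4, 1] [3, 2, 5]
print(sorted(square + cube))  # [1, 2, 3, 4, 5] — every base-6 digit but 0
```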
Here's what's extra impressive: I did a computer search through small bases for numbers whose square-cube concatenation (what's been dubbed its "sqube") is only missing one digit, whether or not that digit is 0 (so it can be written as a leading zero like this). As with 69, I was unable to find any others! 5 in base 6 is the only known semi-nice number.
5 in base 6 is the only known semi-nice number.
Yev also thought of a more useful generalization of the idea of nice numbers: namely, instead of looking at numbers whose squbes are pandigital, we can look at numbers where other powers are. I refer to such numbers as "quasi-nice." Considering third and fourth powers, there's again only one example in small bases — in our very own base 10! This was found by 2022 midterm prediction tournament runner-up Joshua Brower:
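A brute-force sketch (mine, not Joshua's code) recovers base 10's lone third-and-fourth-power example:

```python
def digits(n, b=10):
    """Digits of n in base b (order doesn't matter here)."""
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out

# Look for x where the digits of x^3 and x^4 together hit 0-9 exactly once.
hits = []
for x in range(2, 1000):
    d = digits(x**3) + digits(x**4)
    if len(d) == 10 and sorted(d) == list(range(10)):
        hits.append(x)
print(hits)  # [18]: 18^3 = 5832 and 18^4 = 104976 cover all ten digits
```

The digit-count constraint alone pins things down: x³ needs 4 digits and x⁴ needs 6, forcing x into the range 18 to 21, and only 18 works.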
Maybe we do have a pretty cool base on our hands.
Anyway, the more significant work on quasi-nice numbers, which Yev did, concerned first and second powers: so, numbers x where x and x^2 together have all the digits in the base. The great thing about these quasi-nice numbers is that Yev was able to find a plethora! This data supports the theory that more nice numbers do exist in very high bases. It also provides an opportunity to test our hypotheses about density.
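For concreteness, here's a tiny search of my own (not Yev's code) for these first-and-second-power quasi-nice numbers in base 6, using a hypothetical `quasi_nice_12` helper:

```python
def digits(n, b):
    """Digits of n in base b (order doesn't matter here)."""
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out

def quasi_nice_12(b):
    """Numbers x whose base-b digits, together with those of x^2,
    use every digit 0..b-1 exactly once."""
    found = []
    for x in range(2, b ** ((b + 1) // 2)):  # larger x have too many digits
        d = digits(x, b) + digits(x * x, b)
        if len(d) == b and sorted(d) == list(range(b)):
            found.append(x)
    return found

print(quasi_nice_12(6))  # [20]
```

Sure enough, 20 is 32 in base 6 and 20² = 400 is 1504 in base 6: the digits 3, 2, 1, 5, 0, 4 — all of base 6's digits, once each.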
And the results were a bit weird! The black dots in this Desmos graph show the actual, computer-verified quantity of quasi-nice numbers. The purple shows the heuristic approximation (the number of candidate numbers times the probability that each candidate works — based on the work on ranges from my earlier blog post, but adapted for quasi-nice), and the black dotted lines show the discrepancy.
(The red and blue functions just help illustrate where the purple comes from — the red is the approximation for multiples of 3, and blue is for 2 mod 3. Note that bases equal to 1 mod 3 will always have zero candidate numbers. The purple is just plucked from whichever is applicable.)
So we've got examples in bases 6, 8, and 9, and then nothing until base 17, which is a bit surprising but believable — then suddenly we're getting more than expected for bases 18, 20, and 21... and then back to none for base 23. What's up with that? And what does it portend about actual nice numbers?
I will actually explain this discrepancy later in the blog post. But not yet! This reflects the fact that I was also confused at this point in the story, so perhaps you should be too. Though, mild anachronism: Yev only ran the code up to base 22 at the time. Anyway, this probably builds suspense or something.
Smash Cut to Reddit and Mathstodon
Friend of the blog and unambiguous swiftie-uaena chuh08 wrote a post asking about nice numbers on the subreddit r/math. The post has 15 comments and 26 upvotes as I write this (so not, like, viral). Nonetheless, at least three people who'd read my blog post independently saw the reddit post while simply browsing r/math, including the person who first sent me down this whole rabbit hole by wondering about nice numbers in other bases. (Maybe this should be no surprise given the types of people I associate with.)
Anyway, the comments have some good insights: chickenwing727 says they wrote a program in C++ and "peaked around base 40", which is comparatively pretty good. Oscar_Cunningham links a pretty cool relevant post on Mastodon (specifically, the instance "mathstodon"). Meanwhile, ParadoxReboot cleverly suggests "They should start at base 420."
The Multiply Nice
Anyway, the mathstodon post generalizes niceness in yet another way. Robin Houston notes that if we want the squbes to contain every digit exactly twice, exactly three times, etc. (instead of exactly once), this always seems to be possible in base 10, and with an ever-broadening number of examples. Undisputed Pirate Waiter World Champion Joshua Brower and I have been calling such numbers "doubly nice", "triply nice", etc., and in general "multiply nice". (That's "multiply" the adverb form of "multiple".)
Furthermore, looking at the first multiply nice number of each order, which David Radcliffe from mathstodon did, reveals a sort of convergence of first digits! (Sorry for not including the super long squbes in the visual below.)
However, this actually makes a lot of sense. These numbers are extremely close to the beginning of their possible ranges (the cube of 4641588990290676, the first octuply nice number, is 100000010126569242519798896313482268435398035776 — very early in the space of numbers with that many digits). Since they're all written in base 10, the original numbers therefore have similar starting digits. This is what we'd expect if multiply nice numbers are abundant. So yay, multiply nice numbers are probably abundant!
Most-likely-to-be-deemed-trustworthy-ish Proofnik Joshua Brower did a comprehensive search of multiply nice numbers (from doubly to sextuply) up to base 32, and the results are in the comments of the market.
Zip It trouncee (and Boggle trouncer (modulo technical difficulties)) Joshua Brower has actually done a lot of work on nice numbers: y'all should check out his Manifold post, well-commented Colab code, and Overleaf writeup about prime bases vs. composite bases! He focused particularly on filtering by last digits, a technique which I'll talk about now.
Fun math fact: If you know the last digit of a number, you also know the last digit of that number raised to any power. For example, just as 6 squared, 36, ends in 6, the square of any number ending in 6 will end in 6. How can we be sure that, for example, 76 squared ends in 6? Hopefully this algebra should clarify that:
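One way to write out the algebra (my own rendering of the idea):

$$76^2 = (70 + 6)^2 = 70^2 + 2 \cdot 70 \cdot 6 + 6^2 = 10 \cdot (490 + 84 + 3) + 6$$

Every term except the final 6 is a multiple of 10, so the square ends in 6. The same expansion works for any number $10a + 6$: its square is $100a^2 + 120a + 36 = 10(10a^2 + 12a + 3) + 6$.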
(Other fun math fact that's not really related: 5776 is the only four-digit number where its last two digits are its square root! 625 is a three-digit number with this property.)
Hopefully, this result about last digits feels pretty intuitive, or at least believable. And we can use similar logic to show that all numbers ending in 6 have a cube ending in 6. This proves that a number ending in 6 cannot be nice in base 10.
So for every base, we could check all the last digits to see which ones instantly fail. We might expect this to remove an average of one last digit, since there are b digits in every base and each naively has a 1-in-b chance of failing (so b · (1/b) = 1). The reality is a bit better: notably, 0 and 1 will always fail. As it happens, in base 10, 5 and 6 fail as well. That's 40% of our search space, removed! And the remaining 60% are guaranteed not to fail on the last digit. However, we might expect the percentage removed to drop for higher bases.
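Here's a minimal version of that per-digit check (my sketch): a last digit instantly fails whenever the square and cube would end in the same digit, forcing a repeat in the sqube.

```python
def failing_last_digits(b):
    """Last digits d where d^2 and d^3 end in the same base-b digit,
    so any number ending in d repeats that digit in its sqube."""
    return [d for d in range(b) if (d * d) % b == (d ** 3) % b]

print(failing_last_digits(10))  # [0, 1, 5, 6] — 40% of the search space gone
```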
We can expand the method to looking at last two digits, last three digits, etc. This works similarly, because if you know the last n digits, you do know the last n digits of all powers.
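A sketch of the generalized filter (mine): check that the last n digits of the square and the last n digits of the cube are pairwise distinct, which is a necessary condition for niceness once the square and cube each have at least n digits.

```python
def surviving_fraction(b, n):
    """Fraction of residues mod b^n whose square and cube have
    pairwise-distinct last n digits in base b."""
    m = b ** n
    ok = 0
    for r in range(m):
        last = []
        for p in (r * r % m, r ** 3 % m):
            for _ in range(n):       # peel off n digits, low to high
                last.append(p % b)
                p //= b
        if len(set(last)) == 2 * n:  # all 2n digits distinct
            ok += 1
    return ok / m

print(surviving_fraction(10, 1))  # 0.6, matching the last-digit filter
print(surviving_fraction(10, 2))  # strictly smaller
```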
These graphs, courtesy of Jolly Rancher sculptor Joshua Brower's above-linked Colab code, show how much of the search space you can remove for each base when filtering by last digits — orange is the naive expectation, blue is actual:
You may notice that the blue lines go down. This means that, yes, the percentage removed drops for higher bases. But frankly, all that matters here is that the lines aren't at the very top. Since they're not, this won't even help by an order of magnitude, and we need many. The upshot is that, while this is a cool optimization, it only offers residual improvements.
"Residual improvements"? Get it? It's a weird adjective to use. That's because it's a bad pun segue.
The residue class of r mod m consists of all numbers with a remainder of r when you divide them by m. So the residue class of 7 mod 10 consists of 7, 17, 27, etc. Of course, these are all the numbers where 7 is the last digit. So filtering by last digit is equivalent to filtering by residue classes mod b.
But we can do better. Manifold user cat had the genius idea to filter by residue classes mod b–1. (So mod 9, in base 10.)
The Genius of Filtration by Residue Classes mod b–1
I know. It sounds arbitrary! (Or maybe you have better number theory intuition than me and can sense where this is going.) Why not mod b+1, or mod b–2, or mod 7?
But here's why it's awesome: in mod b–1, swapping digits does not affect residue class. This is the idea behind the divisibility tests for 3 and 9, where you add up all the digits and check their remainder mod 3 or 9. It's also how casting out nines works.
If you just want some quick intuition for why this claim is reasonable, 1 and 10 differ by 9, so moving the placement of digits will only change the sum by increments of 9. In any base b, 1 and 10 differ by b–1, so moving the placement of digits will only change the sum by increments of b–1.
For more intuition, here's a short algebraic proof of why 256 mod 9 is equivalent to 2+5+6 mod 9, which easily generalizes to all integers:
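Written out (my rendering):

$$256 = 2 \cdot 100 + 5 \cdot 10 + 6 = 2(99 + 1) + 5(9 + 1) + 6 = \underbrace{2 \cdot 99 + 5 \cdot 9}_{\text{multiple of } 9} + (2 + 5 + 6)$$

So 256 and 2 + 5 + 6 differ by a multiple of 9. In base b, the 99 and 9 become $b^2 - 1$ and $b - 1$, which are still multiples of b – 1, so the same argument goes through.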
But how does this help us?
Well, if we know a number is nice, we know all the digits in its sqube! Namely, those from 0 to b–1, exactly once. Since the order doesn't matter for mod b–1, we then also know the sqube's residue mod b–1.
To find the target residue, we take the digit sum. That's 0+1+2+...+(b–1), also known as the triangular number of (b–1). I've notated this with the triangle as an homage to the denizens of Triangland (where numbers are triangled instead of squared, and where old Chromatic Conflux posts are shamelessly plugged). To simplify, we can apply their Triangle Identity, which comes with a gorgeous proof without words.
We can actually simplify this even further mod b–1 depending on the parity of b: it's 0 when b is even and (b–1)/2 when b is odd. The algebra is "left as an exercise for the reader." But it doesn't matter computationally because the computer can calculate b(b–1)/2 mod b–1 very quickly anyway: it's not the bottleneck.
Okay, so remember how when we know the last digit of a number, we know the last digit of all its powers? This generalizes to all residue classes. If we know the residue class mod b–1 of a number, we know the residue class mod b–1 of its square and its cube. Therefore, for every residue class, we can add the residue classes of its square and cube together to see if they work. If they don't, we've eliminated that entire residue class from being nice.
Let's see this process in action for base 10:
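Here's the whole filter for base 10 in a few lines (my sketch):

```python
b = 10
m = b - 1                          # work mod 9
target = (b * (b - 1) // 2) % m    # digit sum 0+1+...+9 = 45, which is 0 mod 9
good = [r for r in range(m) if (r * r + r ** 3) % m == target]
print(good)    # [0, 3, 6, 8] — only 4 of the 9 residue classes survive
print(69 % m)  # 6, reassuringly one of the survivors
```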
That's some quality filtration!
So consider a number n whose residue we know is correct. How much likelier is n to be nice?
To answer this question, imagine we're calculating the sqube of n, and we have every digit except one. Well, we know what the digit sum mod b–1 is! This allows us to be sure that the missing digit is the one we want. Therefore, knowing that the residue class is right is effectively equivalent to knowing that one of the digits is right.
From a searching perspective, this means that we shouldn't despair when lots of residue classes work: even though the search space is ostensibly larger, it means more numbers than expected have passed our preliminary test, so we have a better shot of finding a nice number among them!
Here's how this pans out for 69 (which, being equal to 6 mod 9 (nice), is reassuringly in the correct residue class), if we knew all digits in its sqube except d:
Technically, we run into a minor issue if the missing residue mod b–1 is 0, because then the missing digit could be either 0 or b–1. This doesn't impact things much, but it means a correct residue multiplies our probability of niceness by b–1 instead of b.
Remember That Weird Quasi-Nice Number Graph? It's Not So Weird Anymore
We can revise our estimate for the quantity of nice numbers in a base by multiplying by the number of residues that work in that base (and a factor of x/(x-1)). I've displayed that in green on the graph. As you can see, this explains the apparent anomalies in the data quite well; it reflects the fact that there are 2 possible residues each in bases 18, 20, and 21, and 0 in base 23.
Although: it's not a coincidence that base 23 is devoid of valid residues. It turns out that all bases equivalent to 3 mod 4 are impossible.
Here's why: as you can see in the visual below, if x is nice, and the base is of the form 4k+3, then x^2 + x^3 mod b–1 is forced to be simultaneously even and odd.
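In symbols (my rendering of the argument): if b = 4k + 3, the modulus is b – 1 = 4k + 2 (even) and the target residue is (b – 1)/2 = 2k + 1 (odd). But

$$x^2 + x^3 = x^2(x + 1) \equiv 0 \pmod{2},$$

since one of x and x + 1 is always even. Reducing an even number mod the even modulus 4k + 2 preserves its parity, so x² + x³ can never hit the odd target.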
This argument works similarly for quasi-nice numbers, so it explains why base 23 has none.
There are lots of other patterns in the list of possible residues. For even bases (where the modulus, b–1, is odd, and the target residue is 0), 0 and b–2 (aka modulus–1, or more informally just –1) always work. The general rule is more complicated, but it depends on the prime factors of the modulus. This means that bases 1 above a highly divisible number or prime power tend to be pretty good. (10, being 3^2+1, falls into this category.)
Prime college applicant Joshua Brower realized that residues should be either 0 or –1 mod every prime factor of the modulus. If there are no squared prime factors of the modulus, the number of residues is 2^(number of prime factors), because for each factor, we can freely choose whether to go for 0 or –1 mod that factor. I don't remember if Joshua or I figured out the exact rule for when there are squared prime factors, or much of anything about odd bases (where the target residue is not 0 but (b-1)/2).
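One way to poke at that rule (a stdlib-only sketch of mine):

```python
def prime_factors(n):
    """Distinct prime factors of n."""
    ps, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            ps.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def valid_residues(b):
    """Residues r mod b-1 where r^2 + r^3 hits the target digit sum."""
    m = b - 1
    target = (b * (b - 1) // 2) % m
    return [r for r in range(m) if (r * r + r ** 3) % m == target]

for b in (10, 16, 18, 23):
    print(b, valid_residues(b))
# Base 16's modulus, 15 = 3 * 5, is squarefree: 2^2 = 4 valid residues.
# Base 10's modulus, 9 = 3^2, is not squarefree, and it gets 4 residues
# rather than 2^1 = 2 — the squared-prime case really is different.
```

Note that base 10's survivors [0, 3, 6, 8] are each 0 or –1 mod 3, as the rule predicts.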
Regardless, here's the chart of residues, sorted by base mod 4:
I checked OEIS for these residue sequences, and none are in it, although the sequence of residue counts if we wanted 0 for all numbers (not just evens) is. I also checked the sequence of valid residues for quasi-nice numbers with x and x^2 — not there — but all terms except the first were even, so I tried dividing the entire sequence by two. That sequence is there! It's called "Number of primitive Pythagorean triangles with leg n." I checked extra terms after it told me this, so I'm pretty confident this is equivalent! A proof would be neat.
Manifold user wasabipesto wrote a blog post himself about nice numbers, and it's written in a very breezy and readable style. (It's where I got the term "sqube" from.) It also has some cool ideas, including yet another generalization of nice numbers. As wasabipesto puts it:
A number's niceness is equal to the proportion of unique digits in the square/cube concatenation to the total number of possible digits (i.e. the base). A number with niceness equal to 1 is "nice".
Here's a histogram of niceness among numbers checked in base 42:
wasabipesto included histograms up to base 45, and they all look basically the same. Yev noted that the peak is at about 1–1/e, which is what we'd expect to see for a pseudorandom distribution.
(The original version of wasabipesto's blog post also had a minor misunderstanding, which I bring up not to pick on wasabipesto, but because I've had conversations with multiple other people who had this misunderstanding. To clear things up: higher bases are not "more fertile" — they're actually less fertile — but they have sufficiently more numbers that they have a higher chance overall.)
wasabipesto has also created a cool framework where you can lend your computational power to the search. He's put up fake-money bounties: currently M$100 for every 10^12 numbers searched, M$100 for every off-by-two, M$1000 for every off-by-one, and M$5000 for every actual nice number. (I've mostly just been giving out symbolic bounties of M$69 to people on Manifold who've made cool insights, so wasabipesto is giving out more!) Anyway, some of those smaller prizes could earn you fake money!
As for the grand prize...well, the residue filtration optimization functionally divides search space by the base, which is around two orders of magnitude here. Using my ranges and adjusting for residue filtration, I assembled this list of benchmarks for expected number counts. When low, the expected nice number count is about the same as the probability of finding another nice number. (It counts universes where multiple nice numbers are found multiple times, but this is even more unlikely.)
Currently, wasabipesto's site says that numbers are being processed at a rate of 10^7 numbers per second, or around 10^12 numbers per day. At the current pace, to get up to base 65, thereby achieving a 1% chance of finding a second nice number, would take about 10^6 days, or about 3000 years.
According to Ars Technica, the world's fastest supercomputer can perform 1.1 quintillion (10^18) operations per second. I don't know very much about computer hardware, so let's pretend that one operation is enough to check a number for niceness. Then, on this supercomputer, it would take about 20 minutes to get a 1% chance by reaching base 65.
What if we wanted a 3% chance at finding another nice number? Well, we'd have to go up to base 85. In our fantastical supercomputer one-operation-per-number scenario, that would take...10^13 seconds, or about 300,000 years. For a 10% chance (about what the market thinks right now), we'd need base 97. That'll be 10^19 seconds. Among WolframAlpha's helpful comparisons is that 10^19 seconds is the age of the sun...times 69.
Suffice to say that we're gonna need some more theoretical breakthroughs.
(I should also mention that Manifold user BoltonBailey has made markets about the success of two approaches: a SAT solver and a search over arithmetic progressions. I haven't thought super deeply about them, but I don't see how either overcomes the core challenge: the search space is far too big.)
When I published my original blog post, I didn't expect to be the first person on the internet to ponder nice numbers in other bases. I had looked, but I nevertheless thought it was pretty likely that there'd be some old math paper or forum post or book which rendered the search moot. So I tried to avoid saying things like "69 is the only known number with this property," because maybe someone else knew of one.
But now people have searched in sincerity — majority YES shareholder Joshua Brower said "at least a quarter of my break time waking activities have been caused by you," while wasabipesto said "The last time I accidentally worked this late, I was playing Factorio" — and no one's found anything. 69 is now meaningfully the only known nice number! Probably, it will be for the long haul.
Honestly, all the nicer for 69.
That's all from me for now! If you make any breakthroughs, I would love to hear about them. I'm also happy to answer any questions. Here's a link to my ugly not-cleaned-up spreadsheet I've been using, here's another link to the betting market, and here's a link to "The Conflux 2022", my suggested way to consume the Chromatic Conflux archive. My next blog post will contain an exciting personal announcement unrelated to nice numbers!