`So we've been turning up the difficulty in Space Alert lately.

I should probably clarify who "we" is -- me, Noelle, Andy, Nick, and Seth. That's basically our usual Truth House crew now; roles vary. Josh and Eric (first-year Eric, not Gamble or DeVries, who haven't lived here for a while) have played before, as has Sam, but these days it's usually the same five of us. On the one hand, it seems kind of disappointing that we're not getting more people involved, like we used to; on the other hand, hey, a consistent crew. (Although Seth stayed out for the first two of the four games tonight.)

And we definitely seem to be getting better! For quite a while here "standard difficulty" was all decks mixed, except serious internal, which would be white. But not too long ago we started mixing in the yellow serious internals; then removing the white ordinary threats; until today the final game we played was all decks yellow, except for serious internal, which was mixed.

Two of the games we played had all-yellow threats! We survived! In our final game the Seeker -- the dreaded Seeker, which I consider the scariest threat in the base set -- showed up! We survived! (Yeah, I realize other people consider the Executioner or the Nuclear Device scarier. To be fair, I've still never faced the Nuclear Device; it might be harder than I'm giving it credit for.)

Actually, on that topic -- I realized today that one thing that makes the Seeker easier is that it doesn't move on ties, ties are probably more likely than non-ties, and not moving is easier to handle than moving.

...we forgot to account for the Seeker knocking out the person who kills it, though, which was Seth. Oops. Seth actually got knocked out in both games he was in tonight. In the earlier one, he was knocked out by the Executioner; we knew he would get knocked out, since his move to the bridge, where the Executioner was headed, was locked in by then; but I (the captain) got mixed up and thought he'd be knocked out at the end of the turn rather than the beginning. Which meant he was knocked out before hitting the mouse rather than after. But we survived!

Maybe soon we'll turn it up to all yellow -- I mean, guaranteed all yellow, not just possibly all yellow. And maybe soon I should look into getting the expansion, for the few more months we're all here...

-Harry`

`Obviously, I haven't been posting much here lately. Here's another entry for the "problem dump" series -- two math problems I don't really intend to work on (at least not at the moment). I'll probably put this up on my website later.

Actually, I'm a little surprised to find that I haven't written anything about this here earlier. Oh well. Here we go.

Let's consider the game of Snake; except instead of playing it on a grid, we're going to play it on an arbitrary finite [simple] graph. We assume the snake starts at length 1 -- the game begins by the "computer" (who acts adversarially) picking a starting spot for the snake and a starting spot for the fruit. When the snake consumes the fruit (and thereby grows by 1), the computer picks a new spot for the fruit, which cannot be anywhere currently covered by the snake. We'll say the player wins if the snake covers the whole graph. If the game goes on infinitely, I guess that's a draw, but we'll give it to the computer.

So the general question then is: for which graphs can the player always win? (Hence why I said draws go to the computer.) Now this question is not necessarily that concrete; there might not be any nice characterization. I have a bunch of notes about necessary conditions and sufficient conditions for this, but I'm not going to list them all here. (Some obvious ones: if a spanning subgraph of a graph is winnable, then so is the whole thing; any cycle is winnable, so any graph with a Hamilton cycle is too; and in order to be winnable, a graph must have a Hamilton path. But like I said, I'm not going to put all my notes here.) I do, however, have two specific questions about this that I'd like to ask.

So -- you'll notice I didn't give a totally formal specification of the rules above, and there's a reason for that. When you formalize the rules, you realize there's the question: should a snake of length 2 be allowed to turn around? As in, move its head to the current location of the tail. If you write the rules in the obvious manner, this is entirely legal, but it seems kind of unintuitive. So we can have two variants of the game: The `
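Since the rules above are stated informally, here's one way the game might be sketched in code. This is only my own sketch -- all the names are invented, and the adversarial "computer" and the player are left as plug-in strategy callbacks:

```python
# A minimal sketch of Snake on an arbitrary finite simple graph,
# following the informal rules described above. All names here are
# my own; the computer and player are supplied as callbacks.
from collections import deque

def play(adj, start, place_fruit, choose_move, max_steps=10_000):
    """adj: dict mapping each vertex to its set of neighbors.
    start: initial vertex of the length-1 snake (computer's choice).
    place_fruit(snake): computer picks a fruit vertex not on the snake.
    choose_move(snake, fruit): player picks the next head vertex.
    Returns True iff the snake comes to cover the whole graph."""
    snake = deque([start])              # head at the left end
    fruit = place_fruit(snake)
    for _ in range(max_steps):
        head = choose_move(snake, fruit)
        assert head in adj[snake[0]]    # must move along an edge
        if head == fruit:
            snake.appendleft(head)      # grow by 1; tail stays put
            if len(snake) == len(adj):
                return True             # snake covers every vertex
            fruit = place_fruit(snake)
        else:
            snake.pop()                 # tail vacates its vertex first...
            assert head not in snake    # ...then head may not hit the body
            snake.appendleft(head)
    return False  # game went on too long: point to the computer
```

Note that in this sketch the tail vacates its vertex before the head moves, so a length-2 snake turning around is legal -- exactly the "write the rules in the obvious manner" behavior the entry describes.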

`So! I just learned from a discussion on Wikipedia that Jacobsthal multiplication was rediscovered in the 1980s by Conway. Possibly because they didn't `

`As discussed last entry... here it is! Again, hopefully I'm not embarrassing myself with this.

If you look at nothing else, look at the two tables. Especially the first table -- it's so fancy! :)

-Harry`

`So remember that Jacobsthal thing? (Here's the version on my UMich website; it's a bit easier for me to find than the LJ version. Also, I've updated the terminology there; see below.)

Well -- so, a while ago Jeff told me that yeah, I should write this up as an actual paper. (I may have said this here before.) So I did. I didn't post it on arXiv, however; I wasn't too sure whether it was actually original, and I didn't want to potentially embarrass myself by posting something unoriginal. I mean, I hadn't been able to turn it up in my searches, but I'm not a set theorist, and it's not like I did a super-thorough search. (Integer complexity is a pretty small area, and addition chains isn't a huge area either. But ordinal arithmetic seems rather larger.) Maybe for all I knew it had been done a hundred years ago!

So instead I figured I'd try to get it published in a journal first -- I'd know it was original if it got accepted! Unfortunately, the first place I submitted it didn't accept it, and after that I was really busy, so I haven't resubmitted it anywhere else yet.

(I also changed "semi-Jacobsthal" to "super-Jacobsthal". Jeff suggested I should really change "semi-Jacobsthal" to something else, and I'm glad he did, because "super-Jacobsthal" is much better and I wouldn't have thought of it if he hadn't suggested that.)

Point is -- Jeff, and also Dan Hathaway, recently pointed out to me this paper by Paolo Lipparini. And this is certainly not the same thing I'm doing, but it was close enough in flavor that I was basically just like "Oh crap, I should probably really put this up." So I'm doing that! I can't link to it yet, because it doesn't go up till Monday, but I'm doing it. I guess we'll see whether I end up embarrassing myself...

But if nothing else, at least my paper has two really nice tables in it! One of them is pretty fancy and took a decent amount of work in TeX. :)

(Meanwhile -- do you remember the problem of "ordinal multichoose"? Well, I figured out how to compute it in general! Both the "left" and "right" versions, and both choose and multichoose. I also came up with a third one, "natural" multichoose (you can guess the definition), but I can't compute that one except in very special cases, with lower bounds and conjectures for a few more cases. I can write more about this if people want; I haven't here partly because the rules for computing these are, unfortunately, really complicated. I'm probably not going to get around to writing this up formally for quite a while -- finally getting back to my integer complexity and addition chains work, after the constant not-having-time that was last semester, is higher priority, I'm pretty sure.)

-Harry`

`So! This past weekend was MIT Mystery Hunt, which means it's time for my annual Mystery Hunt roundup. Unfortunately the puzzles aren't up yet on the archive, nor are full stats, but limited stats are up in the form of the slideshow, and the solutions are at least up for those of us who participated. Hopefully everything's up before too long, but I'm just going to go ahead and do this anyway. EDIT: I've now put the links in. (Also, I had accidentally mislabeled "Dory" as "Nemo".)

I was once again on the Donner Party team this year. Not a very large team. We did OK, I guess? Solved a decent fraction of the puzzles. Ended up right about the middle of the pack. We didn't solve any of the metametas (certainly didn't make it to the runaround -- I don't know how possible the runaround would have been anyway, seeing as we're a mostly-remote team), and I don't think we even solved any of the metas until after the coin was found (and we only got two of those). So, not great. Apparently we solved a record number of puzzles for our team, but that was probably helped by the existence of the School of Fish round, which consisted of a large number of lower-difficulty puzzles (apparently designed to be about "something like one-third the difficulty of typical ocean puzzles").

Nobody else here really joined in, although Seth and Noelle each did very briefly, and so did Angus, who was here for the weekend; and actually Angus and Seth both ended up contributing substantially during that brief time! But oh well. Anyway, on to discussing particular puzzles.

( Cut for spoilers )

So, that's that. We'll see what next year brings. (Maybe I should join a different team -- I miss having a shot at actually winning...)

-Harry`

`By "irresponsible speculation", I mean "speculation without having done my homework first", i.e., without having even tried to look at the existing literature beyond this paper, or actually really tried anything at all.

So, Juan Arias de Reyna recently pointed out to me the following paper: Commuting Probabilities of Finite Groups, by Sean Eberhard.`

`Something clever I thought of today:

So I've got this enormous file on my hard drive -- about 35 GB. While it's enormous, it's not particularly important; I haven't deleted it only because there's no need to. But it's in a directory with lots of other important stuff; actually, it's in a subdirectory of a directory with lots of important stuff. So when I'm doing backups, it's a pain, because I have to go in and copy everything in this directory `

`Hey hey hey, here you go!

So yeah, obviously this was mostly written quite a while ago (it's just Chapter 4 of my thesis, after all), but I'm only putting it on arXiv now.

...I'd write more about it, but honestly, if you're still reading my LJ, chances are you've heard a lot of it before. (At least, I think I've written about it here before...) So I'll stop here. Well, unless someone goes in the comments and posts "What the hell is this?", in which case I'll probably explain. :P

-Harry`

`So I've moved from Truth House to Gregory House. (I guess that was a few weeks ago now.)

First house meeting was Wednesday. I managed to get myself elected treasurer (well, nobody else ran), so that's good. It only counts for 2 hours per week, though; I'm not getting out of chores entirely. (I also ran for secretary, which only counts for 1 hour, but didn't win.)

Unfortunately, for various reasons I haven't actually been hanging around Gregory very much so far (I've probably spent more time at Truth still), or when I have, it's largely been empty. People don't seem to stay up very late there. Oh well. Obviously part of why I moved to Gregory is that it would be quieter than Truth, being the no-drugs-no-drinking house, but so far it's been a bit too quiet.

Well, not in the literal sense -- the big reason I moved to Gregory was so that I could get a single! If I stayed in Truth another year I'd have to take a roommate. I wasn't planning to be in Ann Arbor another year at all, since I figured I'd have found a job of some sort, but here I am. But I didn't sign up for Gregory until really late, so while I did get a single, I got the last one; and Gregory has a few rooms that are different from the rest. My room is tiny, and right by the kitchen and common areas. Oh well. Still better than having a roommate, I think!

So, yeah, I can't say a lot at the moment. But we'll see how this goes. I expect it will go well. Once we get power back, anyway; there was quite a storm tonight...

-Harry`

`What would happen to homotopy theory if we used a more general notion of homotopy?

Let me make a formal definition: Given topological spaces X and Y and continuous maps f,g:X→Y, we'll say f and g are C-homotopic if there exists a connected space Z with points z`

`So apparently there's now a computer-verified proof of the Kepler Conjecture.

Obviously computer verification is now becoming a "thing", with homotopy type theory and Vladimir Voevodsky pushing it and all that. (I still have no idea why there is topology involved.) Not much of a thing, mind you; people still aren't doing it a whole lot. But they're talking about it like it's actually a thing that could happen, rather than something totally impractical, and some people are actually taking steps to make it possible. So writing proofs formally as a routine thing just might be a possible future.

But that's not what I mean to focus on right now. Right now, computer-verified proofs basically only seem to happen in two cases:

1. People are trying to push computer verification, and so are building up a library or showing off an example.
2. There is actually some amount of uncertainty about a proof.

And I mean, this latter is a bit funny, because it means that computer verification is to a large extent starting with the *hardest*, most complicated proofs!

And, like, for computer verification to ever really catch on, there are going to have to be libraries of formal theorems for use. The people writing these computer-verified proofs presumably don't yet have those to go on, except for the most basic things, and are instead writing them themselves.

So I wonder if this is how things start -- libraries getting written to do something complicated and horrible, and only *then* getting used to do the ordinary.

(This leaves me with visions of math having some of the problems programming currently has -- libraries with horrible interfaces that everyone uses anyway, because nobody wants to rewrite them or can't get anyone else to use a rewrite... I don't know; I think the nature of mathematics would serve to mitigate that effect.)

-Harry`

`So! I should really be working on writing up integer complexity stuff at the moment. But the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. I'm done thinking about this for now -- back to real work -- but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ; that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals. One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particular nice cases. In particular, the ↓ operations when β is finite. Say α=ω`
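For finite α and β all of these order types are finite, and their sizes reduce to ordinary choose and multichoose. A brute-force sanity check of that reduction (function names are mine):

```python
# For finite n and k, count weakly/strictly decreasing functions
# f: {0,...,k-1} -> {0,...,n-1} by brute force, and compare against
# ordinary multichoose C(n+k-1, k) and choose C(n, k).
from itertools import product
from math import comb

def weakly_decreasing(n, k):
    """Number of weakly decreasing functions from k to n."""
    return sum(1 for f in product(range(n), repeat=k)
               if all(f[i] >= f[i + 1] for i in range(k - 1)))

def strictly_decreasing(n, k):
    """Number of strictly decreasing functions from k to n."""
    return sum(1 for f in product(range(n), repeat=k)
               if all(f[i] > f[i + 1] for i in range(k - 1)))
```

For example, weakly_decreasing(3, 2) counts the six pairs (a,b) with 2 ≥ a ≥ b ≥ 0, matching ((3|2)) = C(4,2) = 6.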

`It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^{b}.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c) = p(a,b)⊗p(a,c)
3. p(a,b⊗c) = p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: a degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^{b}≤a. I.e., it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,

deg(a^{b}) = deg(a^{×b}) = deg(a) * b.

And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,

deg(a^{⊗b}) = deg(a) × b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b

And with this, it becomes impossible! Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f is a function from naturals to naturals which (A) is weakly increasing and (B) satisfies f(n^{k}) = k*f(n), and these together are sufficient to show that f(n)/f(m) = (log n)/(log m) -- contradiction.

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it; if I missed something in the literature and it's not new, I'll find out...)

-Harry`
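The final step -- that (A) and (B) force f(n)/f(m) = (log n)/(log m) -- is compressed in the entry above; here is my own reconstruction of the standard density argument, under the stated assumptions that f is weakly increasing, positive, and integer-valued on n ≥ 2:

```latex
% Sketch: f weakly increasing, f(n^k) = k f(n), f(n) a positive
% integer for n >= 2.
\begin{align*}
&\text{Fix } n, m \ge 2.\ \text{For each } k,\ \text{choose } j
  \text{ with } m^{j} \le n^{k} < m^{j+1}. \\
&\text{Monotonicity and (B) give }
  j\,f(m) = f(m^{j}) \le f(n^{k}) = k\,f(n) \le (j+1)\,f(m), \\
&\text{so } \frac{j}{k} \le \frac{f(n)}{f(m)} \le \frac{j+1}{k}.
  \ \text{As } k \to \infty,\ \frac{j}{k} \to \log_{m} n, \\
&\text{hence } \frac{f(n)}{f(m)} = \frac{\log n}{\log m},
  \ \text{which is irrational for } n = 2,\ m = 3 \\
&\text{-- impossible for a ratio of positive integers.}
\end{align*}
```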

`[I wrote this as a letter initially; figured I'd adapt it into an LJ entry. Some of this is a bit out of date, as it's originally from July 2nd. Oh well.]

So this year most of the ICPSR people are from China, and also most of them don't speak much English. Among these people is Chaoqun Mei (using Western name order) -- she says we can just call her by her last name, since it's easier for us to say -- who is here studying some sort of math or statistics thing? Communication is difficult. But the main reason I am pointing her out is not because of her as such but rather because, uh, she brought a kid with her.

"Peter", we call him in English. He's 8 years old (and doesn't speak much English). It's kind of a weird situation -- how do we have a contract with an 8-year-old? I mean, OK, his mom signed for him, and is paying for him, but...? And, like, are we going to make him do chores? He can't wash the dishes, he isn't tall enough! Not sure he can operate a vacuum cleaner either. I think the current plan is that Beatrix (who is work manager at the moment) will talk to Mei and try to work something out. (Though obviously Peter doesn't eat as much as we do!)

Anyway, Peter is adorable. Though unsurprisingly he is also kind of a pain in the ass. But I play with him a bunch and he likes me a lot. He likes to play pool (he cheats). Mostly though I've been playing with him outside -- often in that tiny backyard of ours. Often just tossing a frisbee or a softball back and forth. The other day we were playing a game where we would take turns tossing the softball onto the chair-swing and trying to get it to stay.

...then he threw the softball onto the roof. Guess he's not getting that back!

Also yesterday we were playing hide and seek. At one point I hid in a pretty obvious spot because he gave me very little time, but he missed me anyway... and then seemed to forget about it and started throwing gravel at a birds' nest instead.

Also yesterday: me being constantly worried I was going to get poison ivy. There was a bunch of poison ivy growing on the back of the house -- not nearly so much as there used to be, thankfully. Shane has been going and killing it by spraying soap-salt-vinegar on it. But I was constantly worried that, playing with Peter in the back, I was going to get poison ivy still somehow (you can still get it from dead plants, apparently). Especially when he went and hid in the wooded area behind Triangle (which is on their property, not ours -- Shane would have been doing nothing about that).

I mean, it takes several days to show up, so I could still have gotten it and not realized! Gods, what a terrible plant. I don't think I did; my worries are probably unfounded. But, ugh. I mean, it's pretty easy to recognize in vine form, because the hairy vines are a giveaway, but when it's just on the ground? Sure, sure, "leaves of three, let it be", but *so many* plants have leaves of three. Goddamn useless rhyme. I found it useless as a kid and I find it useless now. Well, Peter seems to be fine too.

[Since it's now over a week later, I can verify that neither of us ended up getting poison ivy.]

-Harry`

`So, do you remember this old entry? Well, Jeff is having me turn it into an actual paper. We'll see if it's new; I think it is, but I should actually, y'know, ask a set theorist.

(At this point you may be asking: Wait, why is Jeff relevant? Didn't you, y'know, finish your degree? Yes, but he's continuing to help me out for the time being. Ordinarily he'd have me forge ahead with complexity-related stuff, but I said I could get this done super-quick, so he's OK with it.)

Anyway, in my previous exploration of the subject, I mentioned that continuing past exponentiation into tetration and general hyper is pretty stupid for ordinals, but I never explained why. I thought I'd do that here.

I could actually go into quite a bit of detail on this, because I spent quite a bit of time thinking about it a few days ago, but I expect people would find it mind-numbing, so I'll keep this short.

(Note: I am not claiming anything in this entry is new, except the second-to-last parenthetical. And while that's new, it's also kind of stupid. :P )

So what's wrong with ordinal hyper?

Let's start with ordinal addition. This is built up from the successor operation. To compute a+b, you apply the successor operation b times to a. Note that the resulting operation is continuous in b, by definition.

OK, that was pretty simple. How about ordinal multiplication? To compute ab, you add a to itself b times. Now here we have a choice; when we say "add a to itself b times", what we really mean is "start with 0, then add a to it b times". But are we adding a on the left or on the right? It makes a difference!

The correct choice is to add a on the right. As long as b is finite, of course, this makes no difference. But addition, recall, is `
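A concrete illustration of why the left/right choice matters (my own example, with a = 2): the two iterations agree at every finite stage but diverge just past ω.

```latex
% "Add 2 on the right": iterate x -> x + 2 starting from 0.
%   Stages: 0, 2, 4, ..., so 2\cdot\omega = \sup_{n<\omega} 2n = \omega,
%   and the next stage is 2\cdot(\omega+1) = \omega + 2.
% "Add 2 on the left": iterate x -> 2 + x starting from 0.
%   The finite stages agree, but at stage \omega+1 we get
%   2 + \omega = \omega, so the iteration stalls at \omega forever.
\[
  2\cdot(\omega+1) \;=\; (2\cdot\omega) + 2 \;=\; \omega + 2
  \qquad\text{vs.}\qquad
  2 + (2\cdot\omega) \;=\; 2 + \omega \;=\; \omega .
\]
```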

`It seems to me that the common meaning of the phrase "no true Scotsman [fallacy]" has shifted quite a bit from the original meaning as I understand it.

Let's take Antony Flew's original example, which I've copied over from Wikipedia:`

Imagine Hamish McDonald, a Scotsman, sitting down with his Glasgow Morning Herald and seeing an article about how the "Brighton [(England)] Sex Maniac Strikes Again". Hamish is shocked and declares that "No Scotsman would do such a thing". The next day he sits down to read his Glasgow Morning Herald again; and, this time, finds an article about an Aberdeen [(Scotland)] man whose brutal actions make the Brighton sex maniac seem almost gentlemanly. This fact shows that Hamish was wrong in his opinion, but is he going to admit this? Not likely. This time he says, "No true Scotsman would do such a thing".

The honest thing to do when presented with a counterexample (that you agree with) is to openly fall back to a weaker position. Hamish is doing that, in a sense -- restricting his claim to

(Note, of course, that if you have to retreat to a weaker position sufficiently often, you do have to consider the possibility that your original position really was just totally wrong and you are making a mistake in trying to incrementally salvage it.)

But the way I see people using the phrase these days is something entirely different. Rather, it seems that to most people, the "no true Scotsman" fallacy is when you say someone isn't part of group X when they say they're a member of X.

Not only is this not the "no true Scotsman" mistake (in the original sense), it isn't even necessarily wrong. Some groups are essentially defined by self-identification, but not all are.

Now it's worth noting here that many groups are defined as empirical clusters -- they're defined extensionally, not intensionally. Suppose that I claim "No member of group X does Y", and someone else replies "I'm a member of group X and I do Y." And let's say I also happen to know this person, and I know that they have very little in common with the people usually denoted by X. Then I think my best reply would be, "I'm sorry, but you seem to have very little in common with the people usually denoted by X. I don't think most people, when they use the word X, are referring to a group that includes you. Seeing as X is a category that is defined extensionally, by empirical clusters of similarity, I don't really think it can be said that you are an X -- at least, not if the word X is used in the standard way. In particular, you differ regarding Y, Z, and W, all of which would generally be considered essential. Hence, I hope you don't mind if I continue to use the word X in this way -- meaning, people who can be described as some combination of Y, Z, and W, typical examples of which are A, B, and C -- rather than in a way that includes you. If you really object, I will say X' rather than X to denote the cluster that I am talking about, and say X'' to denote the cluster that you are talking about, but I hope you realize that, in my opinion, outsiders will probably read X as meaning X' rather than X''."

What I've done there now really does look a lot like No True Scotsman (in the original sense)! I've insisted on using a particular definition of a word, in a way that's not totally transparent. But, unlike the original "no true Scotsman" example:

1. I'm making entirely clear what I am doing.

2. I am doing so in a way that is in concordance with standard usage, rather than going against it and trying to sneak in connotations.

3. While my definition isn't totally transparent, I have tried to make it as clear as I can with some combination of defining features and typical examples. It's not totally transparent, but neither is it totally opaque and mutable.

4. I am ultimately offering unambiguous terminology, rather than getting into an argument over definitions. Remember, if an argument didn't start out being about definitions, don't let it become about definitions! (And if you did start an argument about definitions, hopefully it's only because you knew what you were doing and had a good reason for doing so.)

By the way -- let's note here that the above examples dealt in universal claims and counterexamples. But in arguments about the real world, universal claims are rarely appropriate. That was the original form, though, so I've left it that way -- and using more realistic claims would have made the examples more complicated.

Point is, claiming that someone is not an X when they claim to be an X is not necessarily incorrect, and bears little relation to the original meaning of the term "no true Scotsman", though they may often coincide.

-Harry

`So here's an interesting paper that really deserves to be better known: On On_{p}, by Joseph DiMuro.`

Here, On_{2} refers to the nimbers. Let me review those briefly.

One can put on the ordinals two unusual recursively-defined operations, known as nim-addition and nim-multiplication, that turn them into an algebraically closed field of characteristic 2. (Except, of course, that they're a proper class and not a set.) If one restricts to just the finite ordinals -- the whole numbers -- these form a subfield.

Nim addition is very easy to describe. On whole numbers, it's just bitwise xor, or binary addition without carries, and on general ordinals, it's much the same. (Write them in Cantor normal form and do bitwise xor on corresponding coefficients.) Nim multiplication is complicated and I find it quite confusing, but it's certainly computable on whole numbers, as is nim inversion.
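Both operations are computable on whole numbers directly from Conway's recursive definitions: nim-addition is xor, and the nim-product a⊗b is the mex (minimum excludant) of the values a'⊗b ⊕ a⊗b' ⊕ a'⊗b' over a' < a, b' < b. A small sketch in Python:

```python
# Nim-addition and nim-multiplication on whole numbers, computed
# straight from the recursive definitions (slow but simple).
from functools import lru_cache

def mex(s):
    """Minimum excludant: least nonnegative integer not in s."""
    m = 0
    while m in s:
        m += 1
    return m

def nim_add(a, b):
    """Nim-addition on whole numbers is just bitwise xor."""
    return a ^ b

@lru_cache(maxsize=None)
def nim_mul(a, b):
    """Conway's recursion: a*b = mex{ a'*b + a*b' + a'*b' },
    over a' < a, b' < b, where + is nim-addition (xor)."""
    return mex({nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
                for x in range(a) for y in range(b)})
```

For instance, {0,1,2,3} is closed under these operations and forms the field GF(4), with 2⊗2 = 3 and 2⊗3 = 1, so 2 and 3 are each other's inverses.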

One thing worth noting is that the nimbers provide a very concrete (if impractical) description of the algebraic closure of F_{2}; it consists of precisely the nimbers below ω^ω^ω. I mean, nim multiplication and inversion and square roots and such are all quite confusing, but they're certainly computable on ordinals that aren't too large (at least below ε_{0}, I should think). Maybe even finding roots of polynomials is computable? I'm less certain of this. This is really not something I'm an expert on.

Anyway, naturally there's the question of, can you come up with an analogue of the nimbers for other (positive) characteristics? Exactly what counts as an "analogue" is arguable, but few people seem to have really gotten anywhere close. S. Norton, back in the 70s, found a recursive characterization of ternary addition without carries, and here F. Laubie provides a way of doing the same thing for any prime p, but it's multiplication that's truly the hard part. (Side note: Laubie's characterization is really complicated. I'm pretty sure I have a much simpler one. Maybe it's publishable?)

Well, in "On On_{p}", DiMuro finally provides a characteristic p analogue of the nimbers, and it sure seems like he's done a pretty good job of it. Now, I've only skimmed the paper; I don't really understand nimbers, so actually reading it would be a bit difficult. Still, he's got a lot of the analogy down. He dispenses with recursive definitions for the operations, in favor of just making them work. He does though prove that addition in On_{p}, by his definition, turns out to be just base-p addition without carries (extended to the ordinals in the obvious way), so that at least can be handled by Laubie's recursion (or mine). But yeah, there's a lot of analogy there. In particular, ω^ω^ω ends up being the algebraic closure of F_{p}. And the operations are computable! So this gives a concrete description of the algebraic closure of F_{p}! He doesn't give any simple description of multiplication like there is for the nimbers (well, to the extent that that can be called "simple"), but it's still computable. He doesn't address the question of solving polynomials effectively; hopefully someone else will take this further and do that.

At the end he raises the suggestion of "On_{0}", which perhaps might give a concrete (if impractical) description of the algebraic closure of Q (though you'd need to go beyond ω^ω^ω). This is funny; I'd always thought of the nimbers as basically the characteristic 2 analogue of the surreals, but obviously that analogy is... well, yeah, I guess there really isn't much of an analogy there. So this is interesting. But he doesn't pursue it as it would be harder. (It's worth noting that, in the nimbers, if you want the algebraic closure of F_{2}(t), you have to go well beyond ε_{0}, and according to DiMuro finding the exact ordinal is still an open problem, though Lenstra's old paper on the matter offers an upper bound and conjecture.)

So, yeah, this is not a topic I intend to pursue or anything (though maybe I should write up that recursion). But -- how did I not know about DiMuro's paper? This really should be better-known.

EDIT: Perhaps I should note -- maybe part of the reason is because it's so hard to find. I only stumbled across it incidentally; it doesn't say "nim" or "nimber" anywhere in the abstract, so I couldn't turn it up in searches. If I had thought to search on "on_2", that would have turned it up, but...

-Harry

`You're all familiar with the disagreement hierarchy, right? Actually, I'm not sure how helpful it is most of the time, as I feel like a lot of the arguments I see (at least on the internet) consist of people arguing at cross-purposes rather than actually disagreeing with each other. Nonetheless, I would like to suggest two revisions to it.

Revision 1: Add level DH4.5: Nonconstructive refutation.

The archetypical example of refuting an argument is finding a hole in it -- "Your inference of P is unjustified given only what you've established so far." (Or, better yet, "Your inference of P is unjustified given only what you've established so far; indeed, here is an example where what you've established so far holds, but P does not.") But it's possible to show an argument wrong without actually finding a hole in it. The classic example is showing that an argument proves too much. If an argument proves too much, you can conclude that it's wrong -- but you still don't necessarily know exactly why it's wrong. It's still a form of refutation, and should rank above counterargument, but it's not as good as a constructive refutation.

Revision 2: Replace DH6, "Refuting the central point", with "Refutation and counterargument".

"Refuting the central point" doesn't really strike me as qualitatively different from "refutation". Honestly, to my mind, if you're refuting some peripheral thing, that hardly even counts. When I argue I like to spot the other person lots of points, because I want to get to the central disagreement as quickly as possible; arguing over peripheral stuff is mostly a waste of time. Of course, sometimes peripheral stuff becomes central later, but you can always un-spot a point.

Anyway, point is, what is qualitatively different is refuting `

`Instead, I'd just like to point out that this old entry is now unlocked.`
