Chronicles of Harry

2015 May 25

01:08:00 - An unplanned trip to Chicago

So this past weekend I ended up taking an unplanned trip to Chicago. The occasion for this was that Nadja posted on Facebook that Shuo was going to be in Chicago for the weekend and, well, I realized that I had little reason not to.

I mean, it's a funny thing -- I hadn't seen Shuo since before she disappeared 8 years ago. Of course, at this point she reappeared... well I don't know how long ago it was but it's been years, is the point. So why had I never gone to see her before? Well, basically, travel is annoying. It's not something I do often. And I mean yeah I guess strictly speaking I could have made time but it's so easy to forget, you know, when you're just constantly addressing whatever the most urgent task you have to do is. But hey -- here I am in Ann Arbor, nothing's particularly urgent at the moment, why not take an unplanned trip to Chicago?

So I did! I stayed with Colin and Aviva. Since I last saw them they've gotten a dog, named Gately. Gately is, um, extremely dog. He jumps on everyone and licks their faces and runs after thrown objects and chews on things and rolls over and lets you pet him (and then starts play-biting you) and eats anything that gets dropped and climbs on the furniture and plays with the cat (Prince Myshkin) and... I don't know, he's like some sort of dog archetype. He is maybe too much dog. I don't want a dog licking my face quite that much.

Friday night ended up kind of turning into a big Tufts House reunion, actually. You see, Colin and Aviva already had invited over Girl Alex and Antonia and Matt Woolf (not actually Tufts but kind of in that circle) for their weekly Shabbat dinner (and also Jan, their upstairs neighbor, who once also had a cat named Prince Myshkin). When Nadja and Shuo tried to coordinate us getting together, Colin suggested they just come over here as well. And then coincidentally it turned out that Grant was visiting his parents out in the Chicago suburbs and could get here in about an hour, and we got him to show up too. We took a big picture and sent it to Kate Harney, who reportedly replied "Whoa. That is a lot of Tufts." Jim unfortunately could not make it that night though Shuo and Nadja and I met up with him the next day.

We did not get a chance to play 64 Smash, nor did I get a chance to play Twilight Struggle against Colin (I didn't actually bring it, I would have had to bring a suitcase), but Shuo and Nadja and Colin and I did play Wiz-War (the Fantasy Flight reprinting, not the original). It was fun but kind of disappointing? I don't know, I expected more from it. I was wondering how it compares to the original but it's actually rated higher on BGG. Hm. It seems like the sort of game that might need the expansions.

(Also, Shuo tells me she still has that letter I sent her while she was disappeared because somehow I managed to get ahold of her physical address and I figured, why not, I can try to contact her that way. She never responded at the time, but now I learn it worked! :D )

I could probably go on about a whole bunch of things, but instead, two things about transport:

1. Man, the payment machines on the Chicago subway system are unfriendly to people who don't live there. I thought the New York ones were bad, but the Chicago ones are substantially worse. I mean, they don't give change!

2. Here's an unexpected delay for you: The Megabus I took back to Ann Arbor got pulled over by the police.

Anyway, I'm stopping here.

-Harry


2015 Apr 12

02:53:00 - Time T+4, serious internal threat. Repeat, time T+4, serious internal threat.

So we've been turning up the difficulty in Space Alert lately.

I should probably clarify who "we" is -- me, Noelle, Andy, Nick, and Seth. That's basically our usual Truth House crew now; roles vary. Josh and Eric (first-year Eric, not Gamble or DeVries, who haven't lived here for a while) have played before, as has Sam, but these days it's usually the same five of us. On the one hand, it seems kind of disappointing that we're not getting more people involved, like we used to; on the other hand, hey, a consistent crew. (Although Seth stayed out for the first two out of the four games tonight.)

And we definitely seem to be getting better! For quite a while here "standard difficulty" was all decks mixed, except serious internal, which would be white. But not too long ago we started mixing in the yellow serious internals; then removing the white ordinary threats; until today, when the final game we played was all decks yellow, except for serious internal, which was mixed.

Two of the games we played had all-yellow threats! We survived! In our final game the Seeker -- the dreaded Seeker, which I consider the scariest threat in the base set -- showed up! We survived! (Yeah, I realize other people consider the Executioner or the Nuclear Device scarier. To be fair, I've still never faced the Nuclear Device; it might be harder than I'm giving it credit for.)

Actually, on that topic -- I realized today, one thing that makes the Seeker easier is that it doesn't move on ties, and ties are probably more likely than non-ties, and not moving is easier to handle than moving.

...we forgot to account for the Seeker knocking out the person who kills it though, which was Seth. Oops. Seth actually got knocked out in both games he was in tonight. In the earlier one, he was knocked out by the Executioner; we knew he would get knocked out, since his move to the bridge, where the Executioner was headed, was locked in by then; but I (the captain) got mixed up and thought he'd be knocked out at the end of the turn rather than the beginning. Which meant he was knocked out before hitting the mouse rather than after. But we survived!

Maybe soon we'll turn it up to all yellow -- I mean, guaranteed all yellow, not just possibly all yellow. And maybe soon I should look into getting the expansion, for the few more months we're all here...

-Harry


2015 Mar 11

17:05:00 - Two questions about Snake

Obviously, I haven't been posting much here lately. Here's another entry for the "problem dump" series -- two math problems I don't really intend to work on (at least not at the moment). I'll probably put this up on my website later.

Actually, I'm a little surprised to find that I haven't written anything about this here earlier. Oh well. Here we go.

Let's consider the game of Snake; except instead of playing it on a grid, we're going to play it on an arbitrary finite [simple] graph. We assume the snake starts at length 1 -- the game begins by the "computer" (who acts adversarially) picking a starting spot for the snake and a starting spot for the fruit. When the snake consumes the fruit (and thereby grows by 1), the computer picks a new spot for the fruit, which cannot be somewhere currently covered by the snake. We'll say the player wins if the snake covers the whole graph. If the game goes on infinitely, I guess that's a draw, but we'll give it to the computer.

So the general question then is, for which graphs can the player always win? (Hence why I said draws go to the computer.) Now this question is not necessarily that concrete; there might not be any nice characterization. I have a bunch of notes about necessary conditions and sufficient conditions for this, but I'm not going to list them all here. I do, however, have two specific questions about this that I'd like to ask.

(Here are some obvious ones -- if a spanning subgraph of a graph is winnable, then so is the whole thing; any cycle is winnable, so any graph with a Hamilton cycle is too; and in order to be winnable, a graph must have a Hamilton path. But like I said, I'm not going to put all my notes here.)

So -- you'll notice I didn't give a totally formal specification of the rules above, and there's a reason for that. When you formalize the rules, you realize there's the question: Should a snake of length 2 be allowed to turn around? As in, move its head to the current location of the tail. If you write the rules in the obvious manner, this is entirely legal, but it seems kind of unintuitive. So we can have two variants of the game: The weak game, where this is allowed; and the strong game, where it is not. (Note that you could consider graphs with multiple edges, and say that then in the strong game you're allowed to turn around if there's a double edge; but I'm restricting to simple graphs for reasons you'll see shortly.)

There is at least one graph that is weakly winnable but not strongly winnable, namely, the path on 3 vertices, P3. So, question number one: Are there any other such graphs? I suspect the answer is no but have been unable to prove it. Hence the restriction to simple graphs -- if I'm right, then unless the "underlying simple graph" is P3, adding multiple edges simply won't affect anything. (And it's easy to see how it affects P3.)

Here's a second question. When I initially mentioned this problem to John, he suggested that whether a graph is winnable or not should depend only on its topology. But this is not so; I have examples of (weakly or strongly) winnable graphs that can be subdivided to be made non-winnable. However, I have been unable to find an example of a (weakly or strongly) non-winnable graph which can be subdivided to be made winnable. So, question number two: If a graph is non-winnable (weakly or strongly), is the same true of any subdivision of it? For this question I don't really have a good idea what the answer should be. My guess would be yes, but I wouldn't be all that surprised if someone found a counterexample.
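
(If you want to poke at small cases yourself, here's a minimal brute-force sketch in Python, under one plausible reading of the rules above -- in particular, I'm letting a longer snake move its head onto the square its tail is simultaneously vacating, and a snake with no legal moves counts as a loss for the player. It's exhaustive, so only tiny graphs are feasible.)

from functools import lru_cache

def snake_player_wins(vertices, edges, strong=False):
    # Can the player force covering the whole graph, with the computer choosing
    # the starting spot and every fruit placement adversarially, and infinite
    # play counting for the computer?
    vertices = tuple(vertices)
    n = len(vertices)
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def steps(snake, fruit):
        # Yield ('eat', new_snake) or ('move', new_snake) for each legal move; head is snake[0].
        for v in adj[snake[0]]:
            if v == fruit:
                yield 'eat', (v,) + snake                 # grow: the tail stays put
            elif v not in snake[:-1]:
                if v == snake[-1] and strong and len(snake) == 2:
                    continue                              # strong game: a length-2 snake may not turn around
                yield 'move', (v,) + snake[:-1]           # the tail cell vacates as the head arrives

    @lru_cache(maxsize=None)
    def computer_wins_for_player(snake):
        # The computer must now place a fruit (the snake has just grown, or the game just started).
        if len(snake) == n:
            return True                                   # snake covers the whole graph: the player has won
        return all(player_can_win(snake, f) for f in vertices if f not in snake)

    def player_can_win(snake, fruit):
        # Can the player, by moving around, eventually eat this fruit into a winning position?
        seen, stack = {snake}, [snake]
        while stack:
            s = stack.pop()
            for kind, t in steps(s, fruit):
                if kind == 'eat':
                    if computer_wins_for_player(t):
                        return True
                elif t not in seen:
                    seen.add(t)
                    stack.append(t)
        return False

    return all(computer_wins_for_player((v,)) for v in vertices)

# Sanity check: the path on 3 vertices should be weakly but not strongly winnable.
P3 = ([0, 1, 2], [(0, 1), (1, 2)])
print(snake_player_wins(*P3), snake_player_wins(*P3, strong=True))   # True False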

By the way, someone -- I forget who, unfortunately -- suggested that you could also play Snake on infinite graphs, with the win condition being that you grow arbitrarily large (now it's only a "draw" if the game goes on infinitely but your size stays bounded). But that seems like a pretty different problem, so I'm not going to consider that here.

-Harry


2015 Mar 7

22:18:00 - Two papers on Jacobsthal multiplication I didn't know about

So!

I just learned from a discussion on Wikipedia that Jacobsthal multiplication was rediscovered in the 1980s by Conway. Possibly because they didn't have Wikipedia back then, he didn't know it was originally due to Jacobsthal. As far as I can tell, this rediscovery resulted in only two papers: One by Harry Gonshor, and one by John Hickman, both of which explored the arithmetic of this multiplication.

I think I'm going to go edit references to these into my own paper on the subject now. The history deserves to be connected up!

-Harry


2015 Jan 26

02:45:00 - Jacobsthal thing on arXiv

As discussed last entry... here it is! Again, hopefully I'm not embarrassing myself with this.

If you look at nothing else, look at the two tables. Especially the first table, it's so fancy! :)

-Harry


2015 Jan 24

19:11:00 - Jacobsthal again

So remember that Jacobsthal thing? (Here's the version on my UMich website, it's a bit easier for me to find than the LJ version. Also, I've updated the terminology there; see below.)

Well -- so, a while ago Jeff told me that yeah I should write this up as an actual paper. (I may have said this here before.) So I did. I didn't post it on arXiv, however; I wasn't too sure whether it was actually original, and I didn't want to potentially embarrass myself by posting something unoriginal. I mean, I hadn't been able to turn it up in my searches, but I'm not a set theorist, and it's not like I did a super-thorough search. (Integer complexity is a pretty small area, and addition chains isn't a huge area either. But ordinal arithmetic seems rather larger.) Maybe for all I knew it had been done a hundred years ago!

So instead I figured I'd try to get it published in a journal first -- I'd know it was original if it got accepted! Unfortunately, the first place I submitted it didn't accept it, and after that I was really busy so I haven't resubmitted it anywhere else yet.

(I also changed "semi-Jacobsthal" to "super-Jacobsthal". Jeff suggested I should really change "semi-Jacobsthal" to something else, and I'm glad he did, because "super-Jacobsthal" is much better and I wouldn't have thought of it if he hadn't suggested that.)

Point is -- Jeff, and also Dan Hathaway, recently pointed out to me this paper by Paolo Lipparini. And this is certainly not the same thing I'm doing, but it was close enough in flavor that I was basically just like "Oh crap, I should probably really put this up." So I'm doing that! I can't link to it yet, because it doesn't go up till Monday, but I'm doing it. I guess we'll see whether I end up embarrassing myself...

But if nothing else, at least my paper has two really nice tables in it! One of them is pretty fancy and took a decent amount of work in TeX. :)

(Meanwhile -- do you remember the problem of "ordinal multichoose"? Well, I figured out how to compute it in general! Both the "left" and "right" versions, and both choose and multichoose. I also came up with a third one, "natural" multichoose (you can guess the definition), but I can't compute that one except in very special cases, with lower bounds and conjectures for a few more cases. I can write more about this if people want. I haven't here partly because the rules for computing these are, unfortunately, really complicated. I'm probably not going to get around to writing this up formally for quite a while -- finally getting back to my integer complexity and addition chains work, after the constant not-having-time that was last semester, is higher priority, I'm pretty sure.)

-Harry


2015 Jan 22

21:04:00 - Mystery Hunt roundup 2015

So! This past weekend was MIT Mystery Hunt, which means it's time for my annual Mystery Hunt roundup. Unfortunately the puzzles aren't up yet on the archive, or full stats, but limited stats are up in the form of the slideshow, and the solutions are at least up for those of us who participated. Hopefully everything's up before too long, but I'm just going to go ahead and do this anyway. EDIT: I've now put the links in. (Also, I had accidentally mislabeled "Dory" as "Nemo".)

I was once again on the Donner Party team this year. Not a very large team. We did OK, I guess? Solved a decent fraction of the puzzles. Ended up right about the middle of the pack. We didn't solve any of the metametas (certainly didn't make it to the runaround -- I don't know how possible the runaround would have been anyway, seeing as we're a mostly-remote team), and I don't think we even solved any of the metas until after the coin was found (and we only got two of those). So, not great. Apparently we solved a record number of puzzles for our team, but that was probably helped by the existence of the School of Fish round, which consisted of a large number of lower-difficulty puzzles (apparently designed to be about "something like one-third the difficulty of typical ocean puzzles").

Nobody else here really joined in, although Seth and Noelle each did very briefly, and so did Angus, who was here for the weekend, and actually Angus and Seth both ended up contributing substantially during that brief time! But oh well. Anyway, on to discussing particular puzzles.
( Cut for spoilers )
So, that's that. We'll see what next year brings. (Maybe I should join a different team -- I miss having a shot at actually winning...)

-Harry


2014 Nov 11

22:17:00 - A neat paper about finite groups, and some irresponsible speculation

By "irresponsible speculation", I mean "speculation without having done my homework first", i.e., without having even tried to look at the existing literature beyond this paper or actually really tried anything at all.

So, Juan Arias de Reyna recently pointed out to me the following paper: Commuting Probabilities of Finite Groups, by Sean Eberhard.

EDIT DEC 1: The paper has been updated; the following refers to the original version of the paper. See the comments for discussion of the updated version.

So: If you have a finite group, you can consider the probability that a randomly chosen ordered pair of elements from it commutes. You could then consider the set of all probabilities obtained this way; this is some set of rational numbers between 0 and 1.
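
(Just to make the quantity concrete -- here's a tiny Python sketch that computes it by brute force for a small permutation group. S3 gives 1/2, i.e. 3 conjugacy classes over 6 elements. Purely illustrative; it says nothing about the set of all such probabilities, which is what the paper is about.)

from itertools import permutations, product

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are tuples acting on range(n).
    return tuple(p[i] for i in q)

def commuting_probability(group):
    # Fraction of ordered pairs (g, h) with gh = hg.
    pairs = [(g, h) for g, h in product(group, repeat=2) if compose(g, h) == compose(h, g)]
    return len(pairs) / len(group) ** 2

S3 = list(permutations(range(3)))      # the symmetric group on 3 letters
print(commuting_probability(S3))       # 0.5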

In this paper, Eberhard shows that this set is in fact reverse well-ordered! Its order type is of the form ω^α where 2≤α≤ω². (Though Juan Arias de Reyna points out that it's not too hard to find examples that show that in fact α≥ω, so the order type is at least ω^ω.) He also shows that all the limit points of this set are rational. I think it should be pretty obvious why I'm interested in this! (Even though I have no plans to actually study this, my hands are full as it is.)

Now for the irresponsible speculation:

1. Instead of just doing finite groups, one could generalize to compact groups; one can consider probabilities there as well. How would this affect the set? It would be really nice if this just gave you the closure of the probability set for finite groups! Though Arias de Reyna tells me it's conjectured that the finite group commuting probability set is closed in (0,1], so that would be saying that using general compact groups only gives you 0 in addition. (I mean, it certainly does give you 0 in addition!)

2. I'm reminded of some work of John Wiltshire-Gordon and Gene Kopp. They considered probabilities of randomly chosen elements from a compact group satisfying some general word; the case of commuting probabilities is the case where the word w is aba^(-1)b^(-1). I wonder if the same phenomenon might be seen for other words.

Probably it would be best to first look at words in one variable. Obviously using w=a generates an ω if we stick to finite groups and an ω+1 if we allow general compact groups -- not ω^ω, but still well-ordered. As for a² or a³... well, I don't know, and I don't really intend to work on it as I said above, but it seems vaguely plausible and it's an interesting question!

So yeah that's basically all I have to say on the matter.

-Harry


2014 Oct 22

04:12:00 - ~/enormousfiles

Something clever I thought of today:

So I've got this enormous file on my hard drive -- about 35 GB. While it's enormous, it's not particularly important; I've kept it around only because there's been no particular need to delete it. But it's in a directory with lots of other important stuff; actually, it's in a subdirectory of a directory with lots of important stuff. So when I'm doing backups, it's a pain, because I have to go in and copy everything in this directory except this one file in a subdirectory -- the file isn't important enough to be worth slowing down my backups for.

Solution I thought of today: make a new directory, ~/enormousfiles, put the file in there, and put a symlink to it in the original directory. Yay!
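
(For concreteness, the same trick as a few lines of Python -- the paths here are made up, obviously; substitute your own.)

from pathlib import Path

big = Path.home() / "important-stuff" / "subdir" / "huge-file.dat"   # hypothetical path
dest_dir = Path.home() / "enormousfiles"
dest_dir.mkdir(exist_ok=True)
dest = dest_dir / big.name
big.rename(dest)        # move the file out of the backed-up tree (same filesystem assumed)
big.symlink_to(dest)    # leave a symlink behind at the original location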

-Harry


2014 Sep 8

20:20:00 - Addition chain defects on arXiv!

Hey hey hey here you go!

So yeah, obviously this was mostly written quite a while ago (it's just Chapter 4 of my thesis, after all), but I'm only putting it on arXiv now.

...I'd write more about it, but honestly, if you're still reading my LJ, chances are you've heard a lot of it before. (At least, I think I've written about it here before...) So I'll stop here. Well unless someone goes in the comments and posts "What the hell is this?", then probably I'll explain. :P

-Harry


2014 Sep 6

02:55:00 - Gregory House

So I've moved from Truth House to Gregory House. (I guess that was a few weeks ago now.)

First house meeting was Wednesday. I managed to get myself elected treasurer (well, nobody else ran), so that's good. It only counts for 2 hours per week, though; not getting out of chores entirely. (I also ran for secretary, which only counts for 1 hour, but didn't win.)

Unfortunately for various reasons I haven't actually been hanging around Gregory very much so far (probably spent more time at Truth still), or when I have, it's largely been empty. People don't seem to stay up very late there. Oh well. Obviously part of why I moved to Gregory is that it would be quieter than Truth, being the no-drugs-no-drinking house, but so far it's been a bit too quiet.

Well, not in the literal sense -- the big reason I moved to Gregory was so that I could get a single! If I stayed in Truth another year I'd have to take a roommate. I wasn't planning to be in Ann Arbor another year at all, since I figured I'd have found a job of some sort, but here I am. But I didn't sign up for Gregory until really late, so while I did get a single, I got the last one; and Gregory has a few rooms that are different from the rest. My room is tiny, and right by the kitchen and common areas. Oh well. Still better than having a roommate, I think!

So, yeah, I can't say a lot at the moment. But, we'll see how this goes. I expect it will go well. Once we get power back, anyway; there was quite a storm tonight...

-Harry


2014 Sep 5

02:12:00 - Idle question: What if we changed the definition of homotopy?

What would happen to homotopy theory if we used a more general notion of homotopy?

Let me make a formal definition: Given topological spaces X and Y and continuous maps f,g:X→Y, we'll say f and g are C-homotopic if there exists a connected space Z with points z_0 and z_1 and a continuous map h:X×Z→Y such that h(x,z_0)=f(x) and h(x,z_1)=g(x).

So, obviously, this is a more inclusive notion than our usual notion of homotopy. We can then talk about C-homotopy equivalence, C-contractibility, C-homotopy groups, etc. And certainly there are maps that are C-homotopic but not homotopic; let Y be connected but not path-connected, and consider two points in Y in different path components as maps from the one-point space.

But can we find less trivial examples of maps that are C-homotopic but not homotopic? What about examples that just straight up are *not* C-homotopic? What about examples of spaces that are C-homotopy equivalent, but not homotopy equivalent, as well as spaces that aren't C-homotopy equivalent at all? (Question I tried unsuccessfully to answer: Is the topologist's sine curve C-contractible?)

Do C-homotopy groups agree with the usual homotopy groups? Do our usual algebraic topology functors respect C-homotopy in addition to just homotopy? (I asked John about this, he suggested that cohomology at least probably should.)

I'm posting this here as idle speculation because really, I don't know topology very well; I don't know enough to try to answer this. (Maybe someone already has. John hadn't heard of such a thing, that much I can say.) I thought of asking MathOverflow... but I was afraid I wouldn't be able to understand any answer I got! So yeah, I'm posting this here.

-Harry


2014 Aug 11

16:56:00 - Quick thoughts on computer-verified proofs

So apparently there's now a computer verified proof of the Kepler Conjecture.

Obviously computer verification is now becoming a "thing", with homotopy type theory and Vladimir Voevodsky pushing it and all that. (I still have no idea why there is topology involved.) Not much of a thing, mind you; people still aren't doing it a whole lot. But they're talking about it like it's actually a thing that could happen, rather than something totally impractical, and some people are actually taking steps to make it possible. So writing proofs formally as a routine thing just might be a possible future.

But that's not what I mean to focus on right now. Right now, computer-verified proofs basically only seem to happen in two cases:
1. People who are trying to push computer verification, and so are building up a library or showing off an example
2. There is actually some amount of uncertainty about a proof.

And I mean, this latter is a bit funny, because it means that computer verification is to a large extent starting with the *hardest*, most complicated proofs!

And, like, for computer verification to ever really catch on, there are going to have to be libraries of formal theorems for use. And the people writing these computer-verified proofs to a large extent presumably don't yet have those to go on, except for the most basic things, instead writing them themselves.

So I wonder if this is how things start -- libraries getting written to do something complicated and horrible, and only *then* getting used to do the ordinary.

(This leaves me with visions of math having some of the problems programming currently has -- libraries with horrible interfaces that everyone uses anyway because nobody wants to rewrite it, or they can't get anyone else to use it... I don't know, I think the nature of mathematics would serve to mitigate that effect.)

-Harry


2014 Aug 10

00:35:00 - Back to ordinal multichoose for a moment

So!

I should really be working on writing up integer complexity stuff at the moment. But, the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. Done thinking about this for now, back to real work now, but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ, that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals. One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particular nice cases. In particular, the ↓ operations when β is finite.

Say α = ω^(α_0)·a_0 + ... + ω^(α_r)·a_r + a, where I'm writing α in Cantor normal form, and separating out the finite part "a" as worthy of special attention. Then we have, for k finite and nonzero,

((α↓k)) = ω^(α_0·k)·a_0 + ... + ω^(α_r·k)·a_r + ((a|k)).

Pretty nice, no? The choose version is the same, except the multichoose at the end becomes a choose. Unfortunately, once k becomes infinite, things become complicated fast.
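
(To make that concrete, mechanically plugging into the formula: for α = ω^2·3 + ω·2 + 5 and k = 2, it gives ((α↓2)) = ω^4·3 + ω^2·2 + ((5|2)) = ω^4·3 + ω^2·2 + 15.)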

Also last time I was trying to resolve the question, for k finite, does one always have ((α↑k)) ≤ ((α↓k))? (And the same for the choose versions.) I had thought this was true, and spent some time trying to prove it, but now I can report a counterexample: α=ω^(ω+1)+1, k=2. On the left hand side we get ω^(ω·2+1) + ω^(ω+1) + 1, and on the right hand side we get ω^(ω·2+1) + 1. At least, I'm pretty sure I calculated both of those correctly. It's also a counterexample for the choose versions; in that case, we get the same things but without the +1s on each.

So, there's that. But the big thing is... how did I not notice this before? There's a symmetry law! The two operations are very closely related!

With ordinary multichoose, we have ((n+1|k))=((k+1|n)), since both are equal to (n+k|n,k) (I write it that way, rather than (n+k|k), to emphasize the symmetry.) With these ordinal versions of multichoose, we get

((α+1↑β)) = ((β+1↓α))

The proof is pretty simple, too! As in, you can straight-up construct an order isomorphism between these. I feel a little silly for not noticing this, but, this is really cool!
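
(The finite identity, at least, is easy to sanity-check, for whatever that's worth -- a couple of lines of Python, using ((n|k)) = C(n+k-1, k):)

from math import comb

def multichoose(n, k):
    # ((n|k)) = C(n+k-1, k): the number of weakly decreasing k-tuples from an n-element chain.
    return comb(n + k - 1, k)

# ((n+1|k)) = ((k+1|n)), both equal to C(n+k, k):
assert all(multichoose(n + 1, k) == multichoose(k + 1, n) == comb(n + k, k)
           for n in range(10) for k in range(10))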

I feel like this also indicates that ((α↓β)) is somehow the "more fundamental" of the two operations. Because, see, ((α↑β)), well, if α=0, we know what it is; if α is a successor ordinal, we can apply the symmetry law to express it in terms of ↓; and if α is a limit ordinal, well, it's continuous in α, so it's a limit of things that can be expressed in terms of ↓. With ((α↓β)), if α=0 we know what it is, and if α is a successor we can switch it around, but if α is a limit ordinal, well, we don't have anything like that. (EDIT next day: Although, ((α↓β)) is pretty easy to compute in the case where α is a limit ordinal -- see below. The successor case is the only actually hard case.)

So, yeah. Now to put that away (for now, anyway) and get back to work...

EDIT next day: Actually, let me say a bit more about the computation of ((α↓β)) that I left out at first. Say we write α=α'+a, where α' is either 0 or a limit ordinal, and a is finite. Then in fact this breaks down into ((α'↓β))+((a↓β)). And the first of these is easy -- if β=0 we know it; if β is positive and finite it's given above (just multiply all the exponents in the Cantor normal form on the right by β); and if β is infinite, it's also easy (multiply by β+1 instead of β). Problem is the ((a↓β)) part! That seems to be complicated. Well, by the symmetry rule above, figuring out ((n↓β)) is equivalent to figuring out ((α↑k)), but, well, you'll notice I didn't give a formula for that -- that seems complicated in general. It might be doable, though. (Note that for any given β, since ((α↓β)) is strictly increasing in α, one has ((n↓β))<((ω↓β)) which does at least mean that when you compute ((α↓β)), the purely infinite part and the finite part do not interfere with one another.)

-Harry


2014 Jul 18

22:18:00 - Why there is no natural exponentiation (sort of)

It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^b.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c)=p(a,b)⊗p(a,c)
3. p(a,b⊗c)=p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: A degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^b ≤ a. I.e., it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,
deg(a^b) = deg(a^(×b)) = deg(a) * b.
And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,
deg(a^(⊗b)) = deg(a) × b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b

And with this, it becomes impossible! Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f(n) is a function from naturals to naturals which A. is weakly increasing, and B. satisfies f(n^k)=k*f(n), and these together are sufficient to show that f(n)/f(m) = (log n)/(log m), contradiction.
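
(In case that last step isn't clear, the usual squeeze argument does it -- here's a quick sketch in TeX, ignoring the degenerate case f(m)=0:)

% A weakly increasing f : N -> N with f(n^k) = k f(n) forces f(n)/f(m) = (log n)/(log m).
% Fix n, m >= 2 with f(m) > 0; for each b, let a = \lfloor b \log n / \log m \rfloor, so m^a \le n^b < m^{a+1}.
\[
  a\,f(m) = f(m^a) \le f(n^b) = b\,f(n) \le f(m^{a+1}) = (a+1)\,f(m),
  \qquad\text{hence}\qquad
  \frac{a}{b} \le \frac{f(n)}{f(m)} \le \frac{a+1}{b}.
\]
% Letting b go to infinity gives f(n)/f(m) = (log n)/(log m); but that's irrational for, say,
% n = 2 and m = 3, while f(n)/f(m) is a ratio of naturals -- contradiction.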

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it, and if I missed something in the literature and it's not new, I'll find out...)

-Harry


2014 Jul 12

18:36:00 - Peter, pool, and poison ivy

[I wrote this as a letter initially, figured I'd adapt this into an LJ entry. Some of this is a bit out of date as this is originally from July 2nd. Oh well.]

So this year most of the ICPSR people are from China and also most of them don't speak much English. Among these people is Chaoqun Mei (using Western name order) -- she says we can just call her by her last name since it's easier for us to say -- who is here studying some sort of math or statistics thing? Communication is difficult. But the main reason I am pointing her out is not because of her as such but rather because, uh, she brought a kid with her.

"Peter", we call him in English. He's 8 years old (and doesn't speak much English). It's kind of a weird situation -- how do we have a contract with an 8-year-old? I mean, OK, his mom signed for him, and is paying for him, but...? And, like, are we going to make him do chores? He can't wash the dishes, he isn't tall enough! Not sure he can operate a vacuum cleaner either. I think the current plan is that Beatrix (who is work manager at the moment) will talk to Mei and try to work something out. (Though obviously Peter doesn't eat as much as we do!)

Anyway, Peter is adorable. Though unsurprisingly he is also kind of a pain in the ass. But I play with him a bunch and he likes me a lot. He likes to play pool (he cheats). Mostly though I've been playing with him outside -- often in that tiny backyard of ours. Often just tossing a frisbee or a softball back and forth. The other day we were playing a game where we would take turns tossing the softball onto the chair-swing and trying to get it to stay.

...then he threw the softball onto the roof. Guess he's not getting that back!

Also yesterday we were playing hide and seek. At one point I hid in a pretty obvious spot because he gave me very little time, but he missed me anyway... and then seemed to forget about it and started throwing gravel at a birds' nest instead.

Also yesterday: Me being constantly worried I was going to get poison ivy. There was a bunch of poison ivy growing on the back of the house -- not nearly so much as there used to be, thankfully. Shane has been going and killing it by spraying soap-salt-vinegar on it. But I was constantly worried that playing with Peter in the back I was going to get poison ivy still somehow (you can still get it from dead plants, apparently). Especially when he went and hid in the wooded area behind Triangle (which is on their property, not ours -- Shane would have been doing nothing about that).

I mean it takes several days to show up so I could still have gotten it and not realize! Gods, what a terrible plant. I don't think I did; my worries are probably unfounded. But, ugh. I mean it's pretty easy to recognize in vine form, because the hairy vines are a giveaway, but when it's just on the ground? Sure, sure, "leaves of three, let it be", but *so many* plants have leaves of three. Goddamn useless rhyme. I found it useless as a kid and I find it useless now. Well, Peter seems to be fine too.

[Since it's now over a week later, I can verify that neither of us ended up getting poison ivy.]

-Harry


2014 Jun 17

01:39:00 - Why, briefly, ordinal hyper is stupid

So, do you remember this old entry? Well, Jeff is having me turn it into an actual paper. We'll see if it's new; I think it is, but I should actually, y'know, ask a set theorist.

(At this point you may be asking: Wait, why is Jeff relevant? Didn't you, y'know, finish your degree? Yes, but he's continuing to help me out for the time being. Ordinarily he'd have me forge ahead with complexity-related stuff, but I said I could get this done super-quick, so he's OK with it.)

Anyway, in my previous exploration of the subject, I mentioned that continuing past exponentiation into tetration and general hyper is pretty stupid for ordinals, but I never explained why. I thought I'd do that here.

I could actually go into quite a bit of detail on this, because I spent quite a bit of time thinking about it a few days ago, but I expect people would find it mind-numbing so I'll keep this short.

(Note: I am not claiming anything in this entry is new, except the second-to-last parenthetical. And while that's new, it's also kind of stupid. :P )

So what's wrong with ordinal hyper?

Let's start with ordinal addition. This is built up from the successor operation. To compute a+b, you apply the successor operation b times to a. Note that the resulting operation is continuous in b, by definition.

OK, that was pretty simple. How about ordinal multiplication? To compute ab, you add a to itself b times. Now here we have a choice; when we say "add a to itself b times", what we really mean is "start with 0, then add a to it b times". But are we adding a on the left or on the right? It makes a difference!

The correct choice is to add a on the right. As long as b is finite, of course, this makes no difference. But addition, recall, is continuous in the right summand. Which would mean that if we were to take aω under this weird modified multiplication, we would get a fixed point of left-addition by a. Multiplying by any higher ordinal would still get you aω. That isn't what we want at all.

Thus we have to add on the right. Similarly, when it comes to defining exponentiation, we have to multiply on the right. But what about tetration?

For natural numbers, we define tetration by doing our exponentiation on the left, and there's a good reason for this. (x^x)^x is the same as x^(x*x), which makes doing it on the right a bit silly. Addition and multiplication don't have this problem. They're associative, sure, but associativity of, say, multiplication, doesn't involve any operations simpler/smaller than multiplication. By contrast, this relation turns two exponentiations into an exponentiation and a multiplication, and in general (if you keep putting more exponents on the right) turns n exponentiations into 1 exponentiation and n-1 multiplications. So this is not very good.
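
(Concretely, for natural numbers -- a quick Python check of that collapse:)

x = 3
left  = x ** (x ** x)       # 3^(3^3) = 3^27: the usual tetration of height 3
right = (x ** x) ** x       # (3^3)^3: iterating on the other side instead
print(left, right, right == x ** (x * x))   # the right-iterated version collapses to 3^(3*3); last value is True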

Thus when we try to generalize to ordinals, we have a conflict of which way things need to go in order to be nonstupid. If we continue to do things on the right, we run into the same problem we do for finite numbers. If, on the other hand, we break that convention and switch to the left, we run into the problem of continuity and stabilization. (We can't have "been on the left all along", or we'd have run into that problem even sooner!)

Now the reader may point out here that "left" and "right" are just directions, without any particular meaning, but in fact they have been used here with a consistent meaning: Left is what's getting iterated, right is the number of iterations. So this is indeed a real problem.

And so -- assuming we do switch to the left, because we want finite things to work -- we run into the continuity problem and things become stupid pretty quickly. Tetration is pretty stupid; hyper-n for 5≤n<ω is very stupid; and for n≥ω is maximally stupid.

(There is also the problem that H_4(0,ω) is undefined, but oh well.)

(Note, by the way, that if you're defining hyper so that things are done on the right, you should define H_0(a,b)=Sa, not Sb.)

(Also of note is the fact that while "ordinary tetration", "Jacobsthal tetration", and "semi-Jacobsthal tetration" all remain distinct, once you go to hyper-5, they all become the same.)

(Tangentially -- the other day I had the idea: while trying to define "natural exponentiation" using surreal exponentials doesn't work... what if you just rounded up to the next ordinal? Turns out, this is a pretty bad idea. No algebraic relations there that I can see.)

-Harry


2014 Jun 14

00:00:00 - Some quick notes on "no true Scotsman"

It seems to me that the common meaning of the phrase "no true Scotsman [fallacy]" has shifted quite a bit from the original meaning as I understand it.

Let's take Antony Flew's original example, which I've copied over from Wikipedia:


Imagine Hamish McDonald, a Scotsman, sitting down with his Glasgow Morning Herald and seeing an article about how the "Brighton [(England)] Sex Maniac Strikes Again". Hamish is shocked and declares that "No Scotsman would do such a thing". The next day he sits down to read his Glasgow Morning Herald again; and, this time, finds an article about an Aberdeen [(Scotland)] man whose brutal actions make the Brighton sex maniac seem almost gentlemanly. This fact shows that Hamish was wrong in his opinion but is he going to admit this? Not likely. This time he says, "No true Scotsman would do such a thing".

So what is actually wrong with what Hamish is doing here? Let's assume that this is part of some larger argument.

The honest thing to do when presented with a counterexample (that you agree with) is to openly fall back to a weaker position. Hamish is doing that, in a sense -- restricting his claim to true Scotsmen rather than all Scotsmen -- but not in any useful way. The notion of "true" Scotsman is completely opaque; it's not a useful coherent position at all, just a way to make it look like he was essentially right all along. (See also: Moving the goalposts.) If you can question Hamish and perhaps get him to nail down just what he means by a "true" Scotsman, then perhaps the argument can continue in a productive fashion -- though you should really use a term other than "true Scotsman", as that's obviously loaded. But as long as it remains opaque it remains mutable.

(Note, of course, that if you have to retreat to a weaker position sufficiently often, you do have to consider the possibility that your original position really was just totally wrong and you are making a mistake in trying to incrementally salvage it.)

But the way I see people using the phrase these days is something entirely different. Rather, it seems that to most people, the "no true Scotsman" fallacy is when you say someone isn't part of group X when they say they're a member of X.

Not only is this not the "no true Scotsman" mistake (in the original sense), it isn't even necessarily wrong. Some groups are essentially defined by self-identification, but not all are.

Now it's worth noting here that many groups are defined as empirical clusters -- they're defined extensionally, not intensionally. Suppose that I claim "No member of group X does Y", and someone else replies "I'm a member of group X and I do Y." And let's say I also happen to know this person, and I know that they have very little in common with the people usually denoted by X. Then I think my best reply would be, "I'm sorry, but you seem to have very little in common with the people usually denoted by X. I don't think most people, when they would use the word X, are referring to a group that includes you. Seeing as X is a category that is defined extensionally, by empirical clusters of similarity, I don't really think it can be said that you are an X, at least, not if the word X is used in the standard way. In particular, you differ regarding Y, Z, and W, all of which would generally be considered essential. Hence, I hope you don't mind if I continue to use the word X in this way -- meaning, people who can be described as some combination of Y, Z, and W, typical examples of which are A, B, and C -- rather than in a way that includes you. If you really object, I will say X' rather than X to denote the cluster that I am talking about, and say X'' to denote the cluster that you are talking about, but I hope you realize that, in my opinion, outsiders will probably read X as meaning X' rather than X''."

What I've done there now really does look a lot like No True Scotsman (in the original sense)! I've insisted on using a particular definition of a word, in a way that's not totally transparent. But, unlike the original "no true Scotsman" example:
1. I'm making entirely clear what I am doing.
2. I am doing so in a way that is in concordance with standard usage, rather than going against it and trying to sneak in connotations.
3. While my definition isn't totally transparent, I have tried to make it as clear as I can with some combination of defining features and typical examples. It's not totally transparent, but neither is it totally opaque and mutable.
4. I am ultimately offering unambiguous terminology, rather than getting into an argument over definitions. Remember, if an argument didn't start out about being about definitions, don't let it become about definitions! (And if you did start an argument about definitions, hopefully it's only because you know what you're doing and had a good reason for doing so.)

By the way -- let's note here that the above examples dealt in universal claims and counterexamples. But in arguments about the real world, universal claims are rarely appropriate. That was the original form, though, so I've left it that way -- and using more realistic claims would have made the examples more complicated.

Point is, claiming that someone is not an X when they claim to be an X is not necessarily incorrect, and bears little relation to the original meaning of the term "no true Scotsman", though they may often coincide.

-Harry


2014 Jun 4

01:51:00 - This really deserves to be better known

So here's an interesting paper that really deserves to be better known: On On_p, by Joseph DiMuro.

Here, On_2 refers to the nimbers. Let me review those briefly.

One can put on the ordinals two unusual recursively-defined operations, known as nim-addition and nim-multiplication, that turn them into an algebraically closed field of characteristic 2. (Except of course that they're a proper class and not a set.) If one restricts to just the finite ordinals, the whole numbers, these form a subfield, if you want to just consider it on those.

Nim addition is very easy to describe. On whole numbers, it's just bitwise xor, or binary addition without carries, and on general ordinals, it's much the same. (Write them in Cantor normal form and do bitwise xor on corresponding coefficients.) Nim multiplication is complicated and I find it quite confusing, but it's certainly computable on whole numbers, as is nim inversion.
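
(For whole numbers you can compute both in a few lines of Python -- addition as xor, multiplication via the usual recursive mex definition. Horribly inefficient, but it makes the definition concrete; e.g. it gives 2 times 2 = 3 in the nimbers.)

from functools import lru_cache

def nim_add(a, b):
    # On whole numbers, nim-addition is bitwise xor (binary addition without carries).
    return a ^ b

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # Standard recursive definition: a*b = mex { a'*b xor a*b' xor a'*b' : a' < a, b' < b }.
    options = {nim_mul(ap, b) ^ nim_mul(a, bp) ^ nim_mul(ap, bp)
               for ap in range(a) for bp in range(b)}
    m = 0
    while m in options:    # mex: the least whole number not among the options
        m += 1
    return m

print(nim_mul(2, 2))   # 3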

One thing worth noting is that the nimbers provide a very concrete (if impractical) description of the algebraic closure of F_2; it consists of precisely the nimbers below ω^ω^ω. I mean, nim multiplication and inversion and square roots and such are all quite confusing, but they're certainly computable on ordinals that aren't too large (at least below ε_0, I should think). Maybe even finding roots of polynomials is computable? I'm less certain of this. This is really not something I'm an expert on.

Anyway, naturally there's the question of, can you come up with an analogue of the nimbers for other (positive) characteristics? Exactly what counts as an "analogue" is arguable, but few people seem to have really gotten anywhere close. S. Norton, back in the 70s, found a recursive characterization of ternary addition without carries, and here F. Laubie provides a way of doing the same thing for any prime p, but it's multiplication that's truly the hard part. (Side note: Laubie's characterization is really complicated. I'm pretty sure I have a much simpler one. Maybe it's publishable?)

Well, in "On Onp", DiMuro finally provides a characteristic p analogue of the nimbers, and it sure seems like he's done a pretty good job of it. Now, I've only skimmed the paper; I don't really understand nimbers, so actually reading it would be a bit difficult. Still, he's got a lot of the analogy down. He dispenses with recursive definitions for the operations, in favor of just making them work. He does though prove that addition in Onp, by his definition, turns out to be just base-p addition without carries (extended to the ordinals in the obvious way), so that at least can be handled by Laubie's recursion (or mine). But yeah, there's a lot of analogy there. In particular, ω^ω^ω ends up being the algebraic closure of Fp. And the operations are computable! So this gives a concrete description of the algebraic closure of Fp! He doesn't give any simple description of multiplication like there is for the nimbers (well, to the extent that that can be called "simple"), but it's still computable. He doesn't address the question of solving polynomials effectively; hopefully someone else will take this further and do that.

At the end he raises the suggestion of "On_0", which perhaps might give a concrete (if impractical) description of the algebraic closure of Q (though you'd need to go beyond ω^ω^ω). This is funny; I'd always thought of the nimbers as basically the characteristic 2 analogue of the surreals, but obviously that analogy is... well, yeah, I guess there really isn't much of an analogy there. So this is interesting. But he doesn't pursue it as it would be harder. (It's worth noting that, in the nimbers, if you want the algebraic closure of F_2(t), you have to go well beyond ε_0, and according to DiMuro finding the exact ordinal is still an open problem, though Lenstra's old paper on the matter offers an upper bound and conjecture.)

So, yeah, this is not a topic I intend to pursue or anything (though maybe I should write up that recursion). But -- how did I not know about DiMuro's paper? This really should be better-known.

EDIT: Perhaps I should note -- maybe part of the reason is because it's so hard to find. I only stumbled across it incidentally; it doesn't say "nim" or "nimber" anywhere in the abstract, so I couldn't turn it up in searches. If I had thought to search on "on_2", that would have turned it up, but...

-Harry


2014 May 30

02:28:00 - A suggested revision to the disagreement hierarchy

You're all familiar with the disagreement hierarchy, right? Actually, I'm not sure how helpful it is most of the time, as I feel like a lot of the arguments I see (at least on the internet) consist of people arguing at cross-purposes rather than actually disagreeing with each other. Nonetheless, I would like to suggest two revisions to it.

Revision 1: Add level DH4.5: Nonconstructive refutation.

The archetypical example of refuting an argument is finding a hole in it -- "Your inference of P is unjustified given only what you've established so far." (Or, better yet, "Your inference of P is unjustified given only what you've established so far; indeed, here is an example where what you've established so far holds, but P does not.") But it's possible to show an argument wrong without actually finding a hole in it. The classic example is showing that an argument proves too much. If an argument proves too much, you can conclude that it's wrong -- but you still don't necessarily know exactly why it's wrong. It's still a form of refutation and should be above counterargument, but it's not as good as a constructive refutation.

Revision 2: Replace DH6, "Refuting the central point", with "Refutation and counterargument".

"Refuting the central point" doesn't really strike me as qualitatively different from "refutation". Honestly to my mind, if you're refuting some peripheral thing, that hardly even counts. When I argue I like to spot the other person lots of points because I want to get to the central disagreement as quickly as possible; arguing over peripheral stuff is mostly a waste of time. Of course, sometimes peripheral stuff becomes central later, but you can always un-spot a point.

Anyway, point is, what is qualitatively different is refuting and counterarguing. If you only refute but you don't counterargue, all you've established is that the other person's argument is wrong -- not that your own position is right! Refutation does not automatically include counterargument, and I think this is worth singling out a separate higher level.

(Sometime, I really need to get around to writing "Harry's guide for how to have an argument well"...)

-Harry

