Chronicles of Harry

2017 Feb 1

22:32:00 - A late, short Mystery Hunt roundup for 2017

So, I didn't get to participate much in Mystery Hunt this year, unfortunately, due to my computer suddenly crapping out on Saturday. (Website with solutions for this year's hunt: http://solutions.monsters-et-manus.com/hunt/index.html )

So, yeah, I was on Donner Party again this year, at least until my computer failed. Seems I don't know anyone else in Madison who does Mystery Hunt so I was just doing it alone from my apartment.

This one didn't go very well for Donner Party. So -- we were a remote-only team this year; we had literally nobody in Cambridge. Turns out, this hunt was structured in such a way as to make that a real problem -- there were, as originally planned, no time-releases; you had to unlock puzzles by solving previous ones... and it was arranged in such a way that a remote-only team would inevitably get stuck. (Thankfully we did at least eventually get time-releases.) This (and other similar problems, see below) caused quite a bit of (justified, IMO) grumbling on the internal team chat. Basically it went:
A: This is so unfair, that they don't accommodate remote-only teams at all!
B: Well, we could have sent someone to Cambridge.
A: Sure, but we explicitly asked them if it was OK that we were a remote-only team, and they said yes. If it was going to cause a problem they should have warned us when we asked.

I'm in agreement with A. Although ultimately it became irrelevant to me when computer problems forced me to withdraw.

(I'm also writing this late because getting my computer fixed, uh, took a while.)

Also, is it just me, or did this year's metas make backsolving especially difficult? Or maybe that's only true of the rounds I actually saw.

So, this will be shorter than usual, but:

( Cut for puzzle spoilers )

And, that's all I have to say for this year. Hopefully things go better for me next year!

-Harry

2016 Dec 16

18:38:00 - Two types of libertarianism

OK, OK, I know, any number of people have written things about how there are multiple types of libertarianism. Like, the "it's just good policy" sort vs. the deontological sort. Or the "UBI sounds good, just don't go distorting any markets" sort vs. the "hands off my money!" sort. But, here's another distinction I want to talk about.

I'd always thought of libertarianism as, y'know, fundamentally individualistic. I mean, it's based in classical liberalism, right? Enlightenment stuff, John Stuart Mill... about protecting the individual weirdo from the group, which insists everything be done its way. Do your own thing, and if you're not harming anyone, you shouldn't be punished for it. Certainly don't punish people for building new things! (But neither should you have to be building new things.) I don't consider myself a libertarian, but, that's basically what I want to see out of liberalism in general, you know? Protecting the individual from the "village".

But I'm realizing now that there's this other sort of libertarianism out there that doesn't seem to be about the individual. It's especially visible in the Republican party's pseudo-libertarianism, where often it's specifically the federal government that's singled out as a problem. (Not a political philosophy that's very portable to other countries!) But there does seem to be a more principled form of this, that makes the issue fundamentally about local control.

The thing that really bugs me about this is -- well, firstly, calling it "libertarianism" at all seems odd, but they call themselves that, so whatever; but ignoring that, well, it's often local governments, county sheriffs and such, that do the most (among governments; usually government isn't the problem, seems to me) to harass and oppress the individual weirdo. Indeed, in the US, oftentimes it's the federal government that protects citizens from local governments! Now any libertarian worth their salt will point out that this structure is ridiculous, but the point remains -- favoring local control, without doing more to limit local governments as well, doesn't seem to be individualistic at all, but rather enabling the "village" in its oppression of the individual weirdo.

So, needless to say, one of these two sorts I am pretty sympathetic to; and the other sort, I'm really not.

2016 Oct 29

17:06:00 - In Minneapolis, out of context

Shuo: I have very little sympathy or compassion for children.
Me: Oh. I expected you to end that sentence earlier.

2016 Oct 26

03:17:00 - Was there really a "rockets can't work in space" myth?

I posted this on /r/AskHistorians over on Reddit but got no responses. I expect I'll get less of one here, but, may as well ask!

I remember as a kid always hearing about how in the early days of rocketry, when it was still primarily theoretical, most people believed rockets couldn't work in space because Newton's 3rd law doesn't work without an atmosphere to "react against".

Now that I know more physics, it seems pretty unlikely to me that any real physicist could make such a mistake, because, I mean, Newton's 3rd law and its equivalents (like, you know, conservation of momentum) are pretty tightly woven into the structure of both classical and quantum physics in ways that have absolutely nothing to do with the presence of any atmosphere. And indeed when I've gone and tried to find any reference to this myth, the most I can find is one particular New York Times editorial piece from 1920 that made this claim; I haven't been able to find anything about such an idea being more generally widespread.

So was there really a "rockets can't work in space" myth among some substantial segment of the population? Or if it wasn't really a preexisting idea, were people actually convinced by this editorial to any substantial and lasting extent (before the eventual practical demonstrations that yes, rockets work fine in space)? Or is the tale of this myth itself a myth, inflated from the mistake of a few newspaper writers? Does anyone know about this?

(My suspicion is that it's the last of these.)

-Harry

2016 Sep 18

18:56:00 - An inequality that doesn't work

So, here's an idea I had the other day: My inequality that o(Rev(α,B))≤o(Rev(α,o(B))) doesn't work if you replace o(B) by B', an extension of B. But what if you replace it by B', an extension of B satisfying o(B')=o(B)?

(We can also more generally replace α by A, a WPO, although then we have to make the additional assumption that everything involved is well-defined, and also in that case I haven't actually proven the original inequality in general)

I mean, it's a fairly natural generalization, right? And like obviously it works if A and B are both finite. And there's also some cases where, based on what I already know, one can see that they have to actually be equal (both being equal to o(Rev(α,o(B)))).

But in fact I'm pretty sure now that this generalization is false! One can take α=ω, B=N²+N³, and B'=N+N³. Since N² can be extended to N, B' is an extension of B; and o(B)=o(B')=ω³. And yet things don't work out quite the same way when you take weakly decreasing maps from ω into these partial orders. Because, and I haven't checked this totally thoroughly, but I'm pretty sure that (with the additional restriction that the functions must be eventually 0) the left hand side is equal to ω^(ω5), while the right hand side is ω^(ω4).

(Without that restriction, it's only slightly different; ω^(ω5+2)ω3+3 vs ω^(ω4+1)ω3+3.)

So, so much for that! That said, it can probably still be proven under more restrictive conditions. For instance, maybe if A is finite. And I'd definitely bet on it being true if A=α is a power of ω and B is finite; probably more generally if A=α is total and B is finite. (Probably also if o(A) is initial and B is finite.) Beyond that it's hard to say. One can probably get it to work with other smallness conditions -- I had to take o(B)=ω³ to get a counterexample, can one prove that's the smallest? -- but I don't know to what extent I'll bother...

EDIT: OK, here's a counterexample with B finite: Take α=ω once again, take B'=2×2, and take B that looks as follows:

*
|
*   *
 \ /
  *
Maybe it can still work if A is finite, though, at least!

EDIT: Nope, it fails then too! Take A=2, and take B=(1∪ω)+(1∪ω), where here by "union" I mean "disjoint union"; you can extend the lower copy of 1∪ω to an ω to get B'=ω+(1∪ω). Then o(Rev(2,B))=ω²3+ω+2, but o(Rev(2,B'))=ω²3+ω+1.

(I think this case should work also with A=ω, but I haven't done a proper check. I could be wrong there.)

So, basically, this one is unsalvageable. It works in the trivial case when A and B are both finite, and some other trivial or not-so-trivial cases I can prove, but not in any decent generality. I'm calling this one a wrap.

-Harry

2016 Aug 1

21:37:00 - More on that inequality about order-reversing functions between WPOs, and a correction to last entry

EDIT 8/3: One tiny correction, see struck out portion below. Also, pretty sure I really have proved this now.

So!

OK, so it turns out that despite my big "EDIT: This entry is wrong" warning on my last entry, almost everything in it is correct. Just, not quite for the reasons I thought.

My formula for "multidimensional natural multichoose", in the case of powers of ω, was indeed correct. What was not correct was the part I didn't write down -- what if the inputs are not powers of ω? In this entry I will describe the correct formula for this.

...OK, I say "correct" but in reality I haven't actually sat down and fully proved it. I'm pretty certain this has to be correct, though. If somehow I'm wrong, well, I'll put a big edit on this later. :P

So. Recall, we're given ordinals α_1 through α_n, and we want to compute the maximum extending ordinal of the poset of lower sets of α_1×...×α_n that are "strictly bounded in each dimension", i.e., none of the projections onto any of the α_i are surjective. (To ignore some stupid stuff, I'm going to assume all the α_i are nonzero. You can rephrase the below so it's still correct even if one of them is zero, but I don't want to have to phrase it that way here.)

Here is what we do. First, we write out each ordinal in Cantor normal form; we're going to use the "weak" Cantor normal form, where exponents can be repeated but there are no coefficients. Each ordinal can be broken down into "segments" corresponding to the Cantor normal form, each of length a power of ω. Now, we're looking at the product of these ordinals, so this breaks up the product into finitely many "boxes"; again, the dimensions of these boxes are all powers of ω.

For ease of talking about it, I am going to place these boxes on a grid. Suppose that α_i has r_i segments. We will associate each box to an element of {1,...,r_1}×...×{1,...,r_n}.

Now, as per ancient mathematical tradition, we are going to take a (natural) sum of (natural) products (I'll omit the "natural" from here on out). (Indeed, if we were in 2 dimensions -- the case I'd solved earlier -- it would even be a sum over paths, with the value of each path being a product based on what it passed through. A wonderful mathematical tradition! Unfortunately, things don't quite work out that way in the general case.) The set we are going to be summing over is the set of lower sets of {1,...,r_1-1}×...×{1,...,r_n-1}. That's right -- we only want lower sets that don't use the highest possible coordinate in any dimension. Not unlike the problem we're trying to solve.

So for each such lower set, we need to get a value. What we're going to do is take the "frontier" of the lower set; to define that, let me introduce some notation. I'll use ε to denote the vector (1,...,1). A point p will be on the frontier of S if it is not in S, but p-ε is; or if p is along one of the lower boundaries (p has 1 among its coordinates), so that p-ε is "out of bounds".

So, are we going to take the product of values associated to the boxes on the frontier of S? Not quite. There's an additional restriction. For each point p, I'll define ε(p) to be the vector which is 1 in each dimension where the corresponding box is finite (i.e. has length 1), and 0 in each dimension where it's infinite. The box associated to the point p will only be included in the product if both p and p-ε(p) are on the frontier. (Note that if the box is infinite in all dimensions -- if it's "full-dimensional" -- then p-ε(p) is just p again.)

So now we can associate a value to each box included in our product. How do we do that? Well, write down the box's dimensions; let's drop all of them that are finite (i.e. 1), so we'll be looking at a box, possibly lower-dimensional, all of whose dimensions are infinite.

If the box has dimension 0, its value is 1. If the box has dimension 1, its value is equal to its length.

If the box has dimension 2 or more, its value is given according to the formula I stated earlier; if its dimensions are ω^(β_1)×...×ω^(β_k), then its value is ω^(ω^g(β_1,...,β_k)), where g(β_1,...,β_k) is given as follows:
1. If at least two of the β_i are infinite, or at least one of them is of the form ε+k where ε is an ε-number and k is finite, then g(β_1,...,β_k) is the natural sum of the β_i.
2. Otherwise, it is the natural sum minus 1. (And it is guaranteed in this case that the natural sum will be a successor, so you will be able to subtract 1.)

(Note how different the rule is in dimension 2 or higher from in dimension 1! In dimension 2 or higher, the value of the box always has the form ω^(ω^γ); this is not true at all in dimension 1.)

(Isn't the rule about the value of a box of dimension 0 unnecessary, seeing as such a box can never be included in the product anyway? Yes... unless you're in dimension 0 in the first place. I'm a believer in including trivial cases. Yes. You could actually leave the value undefined; I decided not to, but whatever, it's arbitrary.)

So, to summarize: You sum over lower sets of the set of boxes, excluding the outer boundaries; for each lower set S you multiply over boxes p on the frontier of S such that p-ε(p) is also on the frontier of S; and the value of a box is given by the formula above. Not too bad, really. Much simpler than the general rule for "right-multichoose", certainly!
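
Since the box values are the only genuinely new ingredient here, here's a quick Python sketch of just that rule. It's only a sketch: ordinals are represented below ε_0 in an ad hoc Cantor-normal-form encoding (so the "ε-number plus finite" clause of rule 1 never fires and isn't handled), and the names are mine, not anything standard.

# Ordinals below epsilon_0 in Cantor normal form: () is 0, and ((e, c), ...) means
# w^e * c + ..., with exponents e (in the same encoding) strictly decreasing.
ZERO = ()

def cnf_key(a):
    """A sort key realizing the ordinal order on this encoding."""
    return tuple((cnf_key(e), c) for e, c in a)

def nat_sum(a, b):
    """Natural (Hessenberg) sum: collect like terms of the two normal forms."""
    terms = {}
    for e, c in a + b:
        terms[e] = terms.get(e, 0) + c
    return tuple(sorted(terms.items(), key=lambda t: cnf_key(t[0]), reverse=True))

def is_infinite(a):
    return any(e != ZERO for e, _ in a)          # some term w^e with e > 0

def minus_one(a):
    """a - 1, for a successor ordinal a."""
    assert a and a[-1][0] == ZERO, "expects a successor"
    e, c = a[-1]
    return a[:-1] if c == 1 else a[:-1] + ((ZERO, c - 1),)

def omega_power(e):
    return ((e, 1),)

def box_value(betas):
    """Value of a box whose side lengths are w^b for b in betas (each a power of w).

    Length-1 sides (b = 0) are dropped; then dimension 0 gives 1, dimension 1 gives
    the length itself, and dimension >= 2 gives w^(w^g), where g is the natural sum
    of the remaining b's, minus 1 unless at least two of them are infinite.
    """
    betas = [b for b in betas if b != ZERO]
    if not betas:
        return ((ZERO, 1),)                      # the ordinal 1
    if len(betas) == 1:
        return omega_power(betas[0])
    g = ZERO
    for b in betas:
        g = nat_sum(g, b)
    if sum(is_infinite(b) for b in betas) < 2:
        g = minus_one(g)                         # guaranteed to be a successor here
    return omega_power(omega_power(g))

ONE = ((ZERO, 1),)
print(box_value((ONE, ONE, ZERO)))               # a w x w x 1 box: g = 2 - 1 = 1, value w^(w^1) = w^w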



Now that that's out of the way, let's talk about inequalities. Specifically, the inequality o(Rev(A,B))≤o(Rev(A,o(B))), where A and B are WPOs, and we assume that Rev(A,B) and Rev(A,o(B)) are also WPOs.

Previously, I had proved this inequality when A is total, by mimicking the proof of the 2-dimensional case of the above. I then claimed (in the entry below) that by iterating this, one could prove it more generally when A is a product of ordinals, rather than just an ordinal.

That iterative proof does not work. Fortunately, now that I know how to handle the multidimensional case, indeed I can say that it is true when A is a product of ordinals, by mimicking the proof of the multidimensional case.

But I've been able to prove some other nice special cases of this inequality as well -- ones dependent only on the values of o(A) and o(B). Unfortunately, the methods used here are seriously unlikely to extend to other cases. Still, a month or two ago I would have been surprised to make basically any progress at all on the case where A might not be total.

So, first recall that, for trivial reasons, it works if A and B are both finite. Also, using the stuff above, it's not too hard to see that it works if A is finite, and o(B) is a power of ω (although somehow I apparently missed this earlier).

But I've also got it working now if we assume that B is finite... and that o(A) is an initial ordinal. Yeah, that's a pretty restrictive condition. Oh well. It'll have to do for now. (Note that a WPO A with o(A) initial is in some sense "close to total". Though maybe not too close; you can have A with o(A)=ω_1 but Rev(A,2) not a WPO.)

In fact I've also got it working if o(B)=ω and o(A) is an initial ordinal. Or if we reverse it -- if o(A)=ω and o(B) is an initial ordinal. Unfortunately, I have not been able to "complete the rectangle" and get it working when o(A) and o(B) are both initial ordinals (unless one of them is ω, obviously).

Still, considerable progress! Maybe this inequality really is true in general. Or maybe I just haven't been clever enough in trying to come up with a counterexample. Well, we'll see -- hopefully, anyway.

-Harry

2016 Jul 4

23:08:00 - Lower sets in products of more than two ordinals, or, "natural multichoose" dimensions beyond 2

EDIT 7/27: The statements in this entry may very well be FALSE! It turns out my formula is very wrong in more than two dimensions! Now, admittedly, the case I actually bothered to state here might well still be right; in fact, I'm guessing that more likely than not, it is. (OK, having gone over it again now actually I'm pretty sure it is.) But the more general formula they are a part of is false, and I will have to see about correcting this. More on this in an entry soon to come.

So I wrote this down in an email to John, and then realized I should probably post it here as well.

So let's consider "big multichoose". I want to consider the variant that I denoted ((α multichoose β))_0, where we consider all weakly decreasing functions from β to α that are eventually 0.

This can also be considered as the set of lower sets of α×β that are "bounded in each direction", in the sense that neither projection is surjective.

Now of course if α and β are finite -- say n and m -- then this is just counting the number of partitions that fit in an (n-1)x(m-1) box, i.e., it's (n+m-2 choose n-1). (Which is different from multichoose, obviously, but nicer for the purposes I want to consider here.)

So suppose we generalize from two ordinal factors in the product to multiple. What happens now? Can we get a formula similar to my formula for dimension 2?

Answer: We can! It's not hard, either, once you know the dimension 2 case (OK, I haven't sat down and fully proven it, but the proof should be similar).

The thing is that I'd considered this before but always dismissed it because, well -- counting partitions in a given box is easy; counting plane partitions in a given box is less easy; and in dimensions higher than 3, things apparently get pretty wild pretty fast. So I figured, well, if the finite case is intractable, what's the point of attempting the infinite case?

Of course, this is silly, because if you can reduce the infinite case to the finite one, that's still significant. Like really, in the dimension 2 case, if you plug in finite ordinals, my formula doesn't actually say "OK, it's (n+m-2 choose n-1)", rather it says, OK, count the partitions in this box. We just know what that is. So in higher dimensions it will just fail to reduce the problem in the finite case, but in the infinite case, at least now it is a finite description.

So anyway as before the main core of the problem is when all the factors are powers of ω (and not equal to 1). So let's say we're putting in ω^(α_1), ..., ω^(α_n). Also, say n≥2; if n=1 (or n=0) the problem A. is trivial and B. is *not* given by the formula below, so let's exclude that. (And yes there is a good reason that dimension≥2 turns out different from dimension 1, but I'm not going to go into that here.)

So what we get out is ω^(ω^g(α_1,...,α_n)), for some g. What's g? Well, it's basically the same as before:

If either:
1. one of the α_i is of the form ε+finite, or
2. at least two of the α_i are infinite
then g(α_1,...,α_n) is just the natural sum of the α_i.
But otherwise, it's the natural sum, minus 1.

(Before I did it out, I had assumed that condition (2) would require that all of the α_i be infinite, but in fact, only two of them need to be. This is the influence of dimension 2, the starting point for the pattern, being felt all the way up the chain.)

(Note again that in dimensions 1 and 0, the pattern is broken and there is no g; in dimension 0 it's always 1 and in dimension 1 it's the identity. We don't generally get something of the form ω^(ω^α).)

This got me wondering if I could prove some bound on o(Rev(α_1×...×α_n,X)) for any WPO X by similar means, like how earlier I proved a bound on o(Rev(α,X)), but then I realized I could actually just prove that by taking my earlier bound and iterating it; I don't need to get into anything fancy there (which is good, it's ugly enough in the two-dimensional case I did already need to do by hand).

EDIT 7/25: The above paragraph seems to be false! Yikes, I may have to do that case by hand as well!

So, one more addition to the lore of "big multichoose"...

-Harry

2016 Apr 30

23:55:00 - Natural multichoose and Hahn series over lambda-rings

It works! OK, so...

All ordinal arithmetic in this entry will be natural. All addition will be natural addition and all multiplication will be natural multiplication. Basically, for now, I am embedding the ordinals in the surreals. Also, all rings will be commutative.

So you'll recall I came up with notions of "natural multichoose" and "natural choose" (aka "big multichoose" and "big choose") for ordinals. These are defined order-theoretically, remember: ((α multichoose β)) is the largest ordinal extending the partial order of weakly decreasing functions from β to α, and (α choose β) is the same with strictly decreasing functions. (Obviously, this latter is zero unless β is finite.)

So I spent quite a bit of time figuring out how to compute these, but that's a problem I solved a while back. Also you'll recall I figured out some inequalities for them. But just a few weeks ago I realized something -- that while I was looking at inequalities, I had missed at least one straight-up identity!

While the description of how to compute ((α multichoose β)) in general is more complicated (though not that complicated!), the description of how to compute ((α multichoose k)), where k is finite, is much simpler. Here's one simple way of stating it, in the form of two rules that determine it completely: 1. We'll start with powers of ω; ((ω^α multichoose k))=ω^(αk). 2. Now extend by Vandermonde's convolution. (Vandermonde's identity, while usually stated for choose, is also valid for multichoose, with the same proof.)
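
(A little worked example, just to make the two rules concrete: since 1=ω^0, rule 1 gives ((1 multichoose j))=1 for every j, and so ((ω+1 multichoose 2)) = ((ω multichoose 2))·((1 multichoose 0)) + ((ω multichoose 1))·((1 multichoose 1)) + ((ω multichoose 0))·((1 multichoose 2)) = ω² + ω + 1.)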

Now I had never described it in that form before; and yet, that's plainly what it was. Somehow, until the other week, I never realized that, wait, natural multichoose satisfies Vandermonde's identity! (So, for that matter, does natural choose. The rule for computing natural choose is the same as the rule for computing natural multichoose, except that of course (1 choose k) is 0 instead of 1 for k>1.)

This got me wondering about the possibility of a version of Vandermonde's identity holding even when the thing on the bottom is infinite -- after all, we're using natural addition here, and a given ordinal has only finitely many natural summands. Unfortunately, the obvious analogue doesn't work. For choose it does... but only because if you have something infinite on the bottom, both sides are automatically 0. So that's not interesting. Let's forget about putting infinite things on the bottom; while it's a fruitful subject (that I've written about elsewhere!), for the remainder of this entry, I will only consider choosing or multichoosing whole numbers.

So this got me wondering -- what about further identities? There are various combinatorial identities we could try, and I could go over how I came up with ones to try, but, well, most of them didn't work, so I think I'll just skip that.

However, one thing did appear to work. Let's talk about λ-rings. A λ-ring is a ring equipped with operations λ^k for each whole number k, which must satisfy certain identities. The quantity λ^k(x) is meant to be analogous to (x choose k) -- or, more analogously, the k'th exterior power of x. An equivalent description is in terms of operations σ^k, analogous to ((x multichoose k)), or, more analogously, to the k'th symmetric power of x; these satisfy similar identities but not the same ones. The λ^k and σ^k are interdefinable via σ^k(x)=(-1)^k λ^k(-x).

In fact one simple way of constructing a λ-ring is binomial rings. If you have a ring R with no Z-torsion and where for any x, (x choose n) is in R -- that's (x choose n) in the usual polynomial sense -- then you get a λ-ring this way, with λ^k(x)=(x choose k) and σ^k(x)=((x multichoose k)). These are basically the simplest examples of λ-rings. We'll return to these later.

So, the one thing that appeared to work is that natural multichoose appeared to satisfy the identities of a λ-ring! The σ-identities that is, not the λ-identities. (Unfortunately, natural choose does *not* satisfy the identities of a λ-ring. You do of course get operations λ^k from taking σ^k to be natural multichoose, but while I can find an order-theoretic interpretation of these, it's rather artificial.) I haven't listed out what the identities are, but basically, aside from a few trivial ones, there's identities for λ^k(x+y), for λ^k(xy), and for λ^m(λ^n(x)). (And of course one can write down analogous identities for σ^k.) The addition identity is just Vandermonde, and works the same way for σ^k; the multiplication and iteration identities are more complicated, and are different for σ^k; I won't attempt to describe them here.

And actually there's a pretty simple proof that ordinal multichoose satisfies these σ-identities, though it took me a while to realize, and I spent quite a while just wondering. Basically it follows quite easily once you realize that if you have an ordinal α=ω^(α_1)+...+ω^(α_r) (written in "weak Cantor normal form", where there are no coefficients but the exponents are weakly decreasing), then a simple way of describing ((α multichoose k)) is as h_k(ω^(α_1),...,ω^(α_r)), where h_k is the complete homogeneous symmetric polynomial of degree k. (If you replace h_k here with e_k, the elementary symmetric polynomial, you get the λ^k corresponding to this σ^k. Which, remember, is not natural choose.)
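
That description is completely finite, so it's easy to just compute with. Here's a quick Python sketch (again just a sketch: ordinals below ε_0, in an ad hoc Cantor-normal-form encoding as nested tuples; nothing about the encoding is standard):

from itertools import combinations_with_replacement

# Ordinals below epsilon_0 in Cantor normal form: () is 0, and ((e, c), ...) means
# w^e * c + ..., with exponents e (in the same encoding) strictly decreasing.
ZERO = ()

def cnf_key(a):
    """A sort key realizing the ordinal order on this encoding."""
    return tuple((cnf_key(e), c) for e, c in a)

def nat_sum(a, b):
    """Natural (Hessenberg) sum: collect like terms of the two normal forms."""
    terms = {}
    for e, c in a + b:
        terms[e] = terms.get(e, 0) + c
    return tuple(sorted(terms.items(), key=lambda t: cnf_key(t[0]), reverse=True))

def multichoose(alpha, k):
    """((alpha multichoose k)), as h_k evaluated at the weak-CNF monomials of alpha.

    Each monomial of h_k is a product of k of the w^(a_i), with repetition; such a
    product is w^(natural sum of the chosen exponents), and the resulting powers of
    w are then combined by natural sums.
    """
    exponents = [e for e, c in alpha for _ in range(c)]   # weak CNF: repeat each exponent
    total = ZERO
    for combo in combinations_with_replacement(exponents, k):
        e = ZERO
        for a in combo:
            e = nat_sum(e, a)
        total = nat_sum(total, ((e, 1),))
    return total

ONE = ((ZERO, 1),)
OMEGA = ((ONE, 1),)
print(multichoose(OMEGA, 2))                  # the encoding of w^2
print(multichoose(nat_sum(OMEGA, ONE), 2))    # the encoding of w^2 + w + 1, i.e. ((w+1 multichoose 2))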

So this is pretty good -- natural multichoose satisfies the identities of a λ-ring. There's just one thing lacking, and that's that ordinals don't form a ring. So actually they're a "λ-semiring". Or a "σ-semiring", I guess; I'm a little doubtful that the equivalence works without negatives (even though in this case the λ^k do take ordinals to ordinals). (EDIT: OK actually it probably works. Whatever.)

So that raises the question -- can we extend these operations, these σ^k, to differences of ordinals in a sensible manner, and make an actual λ-ring? Or, hell, let's go all the way -- how about we extend them all the way to the surreals? Can we make the surreals into a λ-ring like this, so that when we restrict the σ^k down to the ordinals, we get back the order-theoretically meaningful natural multichoose?

Note of course that there's already a λ-ring structure on the surreals coming from the fact that, as a field of characteristic 0, they're certainly a binomial ring. But this would be a different λ-ring structure; as a binomial ring, we'd have σ²(ω)=ω²·½+ω·½, whereas under this one, we'd have σ²(ω)=ω².

Basically, with ordinals, if we had a "monomial", ω^α·a, then what we had was σ^k(ω^α·a)=ω^(αk)·((a multichoose k)), and then you can extend by Vandermonde. (For λ instead of σ, just use choose instead of multichoose.) Well, surreal numbers have something like Cantor normal form -- Conway normal form. The differences are that, firstly, the exponents can be arbitrary surreal numbers rather than just ordinals; secondly, the coefficients can be real numbers rather than just whole numbers; and thirdly, rather than being a finite sum, it can be an arbitrary well-ordered decreasing sum. So this gives us a way of extending it; define σ^k(ω^α·a)=ω^(αk)·((a multichoose k)), where now α can be surreal and a can be real, and then extend in the obvious fashion. (Or you can do the same with λ instead of σ.) Of course, the question remained: Does this form a λ-ring? Vandermonde (and the trivialities) are easy enough to prove, but the multiplication and iteration identities are nasty and hard to reason about.

Now, since Conway normal forms can be added and multiplied in the obvious way, this means that surreal numbers are isomorphic to the field of Hahn series over the real numbers with the value group being the surreal numbers themselves; the isomorphism is given by just replacing ω with T^(-1). (Yes, the value group is also the surreal numbers; that's the sort of weird thing that happens when you've got surreal numbers.) So the question now becomes -- if we have a λ-ring, and we take a ring of Hahn series over that λ-ring, is it also a λ-ring? Where here, in this more general case, we're defining λ^k(T^α·a) to be T^(αk)·λ^k(a). (Or the same thing but with σ instead of λ.) Right, in the case we care about, that of the surreals, the base ring is the reals, and we're considering it as a λ-ring by considering it as a binomial ring.

Well, I learned, from Darij Grinberg's text on λ-rings, that in fact this is well-known if, rather than Hahn series, we are just taking polynomials. So, could I extend it? To power series, to Laurent series, to Puiseux series -- to Hahn series?

Well, long story short, yes. At first I had some trouble because Hahn series, unlike power series or Laurent series or Puiseux series, aren't actually the limit of finite sums of their monomials; so when I tried to imitate the proofs Grinberg gives, which work by first doing it for monomials and then extending by Vandermonde, I ran into a problem. I wanted to just do the same thing but extending by Vandermonde and also continuity, and this worked for power series and Laurent series and Puiseux series, but not for Hahn series. I was just going to go ask MathOverflow, honestly. But today I made one last try -- I imitated Grinberg's proofs, but this time, instead of doing it for monomials like he does, I just plugged in the whole damn sum. And it worked! (Note that Grinberg's proof basically offloads the hard part to the verification that for a λ-ring A, Λ(A), which I'm not going to define here, is a λ-ring.) I was able to keep all the infinite sums and products to contexts where they truly made sense and could be manipulated sensibly without having to worry about any sort of convergence (or where such convergence is easily proved, like rings of formal power series).

So that's it then! Hahn series over a λ-ring form a λ-ring; in particular the surreal numbers, considered as Hahn series over the real numbers, form a λ-ring; and when you restrict the σ^k operations of this λ-ring down to the ordinals, you get natural multichoose, which is order-theoretically meaningful. And therefore (although we knew this already by easier means) the natural multichoose operations also satisfy the identities of the σ^k in a λ-ring, because they are the σ^k in a λ-ring.

So, that's one more thing to write up someday...

EDIT: Fixed interdefinition of λ^k and σ^k, oops.
EDIT: Found a reference for the equivalence, thanks to Darij Grinberg! Hazewinkel, Gubareni, and Kirichenko's "Algebra, Rings, and Modules" volume 3 goes over it. It uses σ rather than S, so I'm changing that in the entry.

-Harry

2016 Apr 11

19:43:00 - A way to handle the surcomplex exponential?

Here's an idea I had, I have no idea if this is standard or what. It's kind of trivial, but it seems worth mentioning at least.

EDIT: OK, here is somebody saying this previously; it may be standard, I'm not so sure.

So, background: Harry Gonshor defined the exponential function for the surreals. But what about for the surcomplex numbers? Say z=x+iy is surcomplex. It's clear what the magnitude of exp(z) should be (namely, exp(x)), but how is the direction determined? Now, if y is limited (i.e., not infinite), then cos(y) and sin(y) are defined and so one can define exp(z), and everything works out. But when y is infinite we have more of a problem; after you go around a circle infinitely many times, where do you end up? The oscillating nature of exp(iy) would seem to pose a problem for any attempt to sensibly define exp(iω).

So, here's my idea: Just chop off the infinite part of y. Define exp(iy)=exp(iy'), where y' is y with the infinite parts cut off.
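
Spelled out for a general surcomplex number, the proposed definition is just

exp(x + iy) = exp(x)·(cos(y') + i·sin(y')), where y' is y with its purely infinite part removed,

with exp(x) being Gonshor's surreal exponential and cos, sin applied to the limited surreal y'.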

And sure, it's a trivial thing to do, but it works pretty well. First off, we get the usual relation exp(z+w)=exp(z)exp(w); or at least we do assuming that sin and cos for limited surreals obey the usual relations, which they have to, right? (I don't have a text on surreals on hand at the moment.) Like I'm pretty sure that when you're proving exp(x+y)=exp(x)exp(y) for surreals, the hard part is if x and y may be infinite, not if they may be limited, because then you can use power series arguments adapted to the surreals (if I'm understanding/recalling correctly).

Secondly, let's look at the kernel of this map. For exponentiation on C, the kernel consists of numbers of the form τin (or 2πin, in the more usual notation), where n is an integer. Well, the commonly-used analogue in the surreals of the integers is the omnific integers -- those surreals with integral real part and no infinitesimal part. (That is to say, the infinite part has no effect on whether you're an omnific integer or not; all purely infinite surreals are omnific integers.) But that fits great here -- the kernel of this surcomplex exponentiation is precisely numbers of the form τin (or 2πin in the more usual notation) where n is an omnific integer. Really, we can just let all purely imaginary, purely infinite surcomplex numbers exponentiate to 1. Why not? Seems good to me!

And I mean seriously -- does anyone have a better idea? Well, maybe, but I haven't heard of it at least.

Does anyone know anything about this? Is this standard? Does this lead anywhere? It seems like it's too trivial to lead to anything seriously interesting beyond what having the surreal exponential (and sin and cos for limited surreals) already get you, but I mean just having a definition of the exponential for surcomplex numbers seems neat, so there you go.

-Harry

2016 Feb 24

21:13:00 - Big multichoose and WPOs: When can we ignore the details?

EDIT, 2/25, 7 PM: I'm no longer confident that the claims in this entry are correct. I've found a serious mistake in my proof. It seems like it should be fixable, but I haven't succeeded so far.

EDIT, 2/27, 1:30 AM: THIS ENTRY AS ORIGINALLY POSTED IS FALSE. I've edited to be true, but, uh, much less interesting. False statements are in strikethrough, added statements are in bold.

So I've been thinking more about big multichoose and WPOs recently. I'll use Rev(A,B) to denote the set of order-reversing maps from A to B.

So recall, I figured out how to compute o(Rev(α,β)) (where α and β are ordinals) and showed that for any WPO X, o(Rev(α,X))≤o(Rev(α,o(X))). Or to state it another way -- which is the way the proof actually goes -- I figured out how to compute an upper bound on o(Rev(α,X)) based on α and o(X), and showed that when X is total, this upper bound is attained.

The other day I noticed that in fact, if o(X) is an (infinite) initial ordinal, then this upper bound is also attained. (Because in this case, o(X) embeds in X.) In other words, in this case, the value of o(Rev(α,X)) depends only on α and o(X), and not on any further details about X.

Could there be other values of o(X) for which this is true, I wondered? More specifically, could it be true whenever o(X) is a power of ω? (It's certainly not true when o(X) isn't a power of ω.)

It took me quite a while to find a counterexample, but eventually I did: Take α=ω, and take X to be Rado's order. In this case, o(X)=ω², and o(Rev(ω,ω²))=ω^(ω²+2), while o(Rev(ω,X))=ω^(ω2+2).

It should not have taken me so long to find a counterexample. A simpler counterexample is α=ω, X=ω×ω. I computed o(Rev(ω,ω×ω)) incorrectly earlier, and got ω^(ω²+2); in fact it is ω^(ω2+2), the same as for Rado's order. What's more, this is really easy to see, since Rev(X,Y×Z) = Rev(X,Y) × Rev(X,Z), so it's just the natural product of two copies of omega multichoose omega, which is ω^(ω+1). Somehow I entirely failed to notice this.

So that's it -- problem solved, right? Well, not quite. Rado's order is a famous example of a WPO which is not a BPO. This leaves the question: Could the statement be true if we require that X be a BPO? Or what if instead of requiring that X be a BPO, we just insist on the weaker requirement that Rev(X,2) be a WPO, or something like that? So far, I have no counterexample to this. It's definitely not true, see above.

Unfortunately, I know very little about the theory of BPOs, so I think I am unlikely to resolve such a question (though perhaps a bit more likely if it turns out that only a weaker hypothesis is needed). But I thought this was worth putting out there.

Note, by the way, that if o(X) is a power of ω and, in addition, we have that α is finite, then the statement is true without any further hypothesis on X; hence why my counterexample above had α=ω. Note that o(X) is also as small as a counterexample can be, as o(X)=ω². (This remark is still true.)

By the way, I also tried proving the statement, by means analogous to how I proved that the bound is attained if X is total, and found that much of a proof went surprisingly well. There were three problems that arose, essentially:
1. How do we go from the case where α is a power of ω, to the case where it isn't? I was able to solve this problem.
2. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a limit ordinal? I was able to solve this problem (assuming an inductive hypothesis, of course).
3. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a successor ordinal? This turned out to be a real problem, even assuming an inductive hypothesis! And indeed, the counterexample I mentioned above has o(X)=ω². But the fact that it's just this one part of the proof that turned out to be a problem does incline me to think that it might indeed be possible to prove it with additional hypotheses on X.

Note: If X has a bottom element, everything I'm saying still works if we restrict to functions which are eventually 0. Pretty sure. And of course if α finite we can restrict to strictly decreasing functions.

EDIT: It totally works! Just using the hypothesis that Rev(X,2) is a WPO, too! This is amazing! Not only that, but there's a generalization that covers both this case and the case where X is total! (Though not also the case where o(X) is initial.) Namely, you require that Rev(X,2) be a WPO, and that X=X_1+...+X_r, where each o(X_i) is a power of ω. In the case where X is total you have Cantor normal form, and in the amazing case, r=1. And once again it works if we remove the requirement that Rev(X,2) be WPO but add the requirement that α be finite.

EDIT 2: Sorry, just noticed a mistake in my proof. I think this is fixable.

-Harry

2016 Feb 10

02:28:00 - A neat little result about well partial orders

I don't think this is known, but again, I'm not a logician.

So: Let's say you have a well partial order X. De Jongh and Parikh proved it has some largest extending ordinal, o(X). Moreover, they proved that o(X) is the smallest ordinal greater than any o(L), where L ranges over proper downwardly-closed subsets of X.

Question: Say X is a WPO, and you're given an ordinal α<o(X). Can you find a downward-closed subset L of X with o(L)=α?

It certainly seems like you should be able to! But you'll notice that this doesn't follow immediately from the above characterization, which merely guarantees that there's some downward-closed L with o(L)≥α. How do you get it exactly on the nose?

This problem stumped me for several days, but it turns out there's an easy answer: Iterate! Each time you iterate this, o(L) goes down; and since you're working with ordinals, well, it has to terminate. When it terminates, there's your L. I'm glad such a simple problem turned out to have such a clean solution!

(I guess I've left out a few steps here, but nothing that isn't easy to fill in. Also, since you're passing to initial segments, you don't need to phrase it in terms of o(L) going down, it's simpler without that, but that's how I originally thought of it.)
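
For the record, here's one way to fill those steps in (a quick sketch, using nothing beyond the de Jongh-Parikh characterization above):
1. Since α<o(X), there is some proper downward-closed subset L' of X with o(L')≥α; otherwise every such L' would have o(L')<α, making α an ordinal greater than all of them, which would force o(X)≤α.
2. By the same characterization, o(L')<o(X), so the value strictly drops.
3. If o(L')>α, repeat inside L'. A strictly decreasing sequence of ordinals has to be finite, so this stops, and it can only stop with o(L')=α. And since a downward-closed subset of a downward-closed subset of X is downward-closed in X, what you end up with is the desired L.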

-Harry

2016 Feb 5

01:13:00 - What do we mean by "impossible"?

So people throw around the word "impossible" a lot, but oftentimes they actually mean different things by it. (I'm assuming here we're talking about real-world discussions rather than mathematical discussions, where things are clearer.) I thought I'd create a list of different things that people mean by "impossible", in the hopes that it might clarify things. Note -- because we're talking about real-world things, time is going to play a role. (Yes, there's not really any universal clock. Whatever.)

I'm listing these as "levels of impossibility", going roughly from "most impossible" to "least impossible", even though they're not necessarily actually linearly ordered. Also, some of the distinctions between some of these may be fuzzy at times.

Level 0. Instantaneously inconsistent. The given description contains or logically implies a contradiction. It rules out all possible states at some point in time, in any universe. People often claim this one when they really mean level 2 or level 3.

Level 1. Instantaneously impossible (contingently). In the actual universe we live in, the given description is instantaneously impossible; it rules out all possible states at some point in time. This one... I'm not sure this one really comes up in political discussions, I can only really think of physical examples.

Level 2. Non-equilibrium. The described system fails to propagate itself forward in time; or, if a system extended in time is described, it contains an inconsistency. This is one that people often actually mean when they say something is "impossible".

Level 3. Unstable equilibrium or possible non-equilibrium. The described system is not resilient to noise; it will not propagate itself forward in time unless exceptional conditions hold continually. This is another one that people often really mean when they say something is "impossible".

Level 4. Unachievable. The described system is unreachable from our present state -- it may make sense on its own, it may not be inconsistent with the way the world evolves in time, but it's inconsistent with the initial conditions that hold in the real world. Yet another one that people often mean when they say "impossible".

Level 5. Not "stably achievable". The only path from the current state to the described state is not resilient to noise and requires exceptional conditions to hold, possibly for an extended period of time. We might also want to require that in addition that the failure modes of such a path leave us worse off than we were before or somehow prevent the same path from being used again (so that you can't just try over and over for free).

Are there any I'm missing? This seems like a pretty good breakdown to me.

-Harry

2016 Jan 21

02:32:00 - 2016 Mystery Hunt roundup

Solutions are up for this year's Mystery Hunt -- or mostly up; metas, metametas, and a few scattered others appear to be lacking them. Well, whatever.

EDIT: Quantum Minesweeper solution is now up, so I've updated this entry accordingly.
EDIT 1/23: I've added some more stuff now that a few of the meta solutions are up.
BIG EDIT 1/26: It turns out there's a good reason why the hunt software was lacking its usual capabilities here! devjoe, who was on Team Luck, goes into it here.

So this year I was on Donner Party (again). Seems a few of my other friends joined Donner Party this year; Kevin defected from Codex, and Nic was on it too.

Results don't seem to be up yet so I don't know just how behind we were. I assumed we must be really behind all the time; I think we reached every round on a timer. (We didn't solve any of the round metas or metametas, and only one or two of the sub-round metas.) On the other hand, we were already on the Endymion round by the time the coin was found (along with everybody else I guess), so maybe we weren't as behind as I thought. Still, I feel like one of these years I need to get back on a bigger team; Mystery Hunt is considerably more fun when you're actually in contention for the win, you know? Also when you stay up all night and there are actually a good number of other people staying up all night with you. (Due to my staying up all night, I ended up going to bed on Sunday shortly before 7 PM... and about 5 minutes before the coin was found. It was probably found while I was, like, brushing my teeth or something.)

I tried to get people from Baker House (where I now live; I guess I haven't mentioned that) to join in but none did. Melissa mentioned that her brother does Mystery Hunt but didn't join in herself for whatever reason. I did however recruit Adam Funk from Truth House for the Hunt; after the first day, when other Bakerites failed to join in, I spent much of my solving time in his room instead of in public areas of Baker like I'd intended. Unfortunately these past two weeks have been quite busy for me and so I wasn't able to prepare Adam, or others, as much as I should have, but he still contributed quite a bit, I think.

Before I talk about the individual puzzles, I want to talk about just how the Hunt was run this year, and in particular the hunt website/software. Simply put, I didn't like it. The Hunt for the past few years had used the same software for its website and it worked really well. This one... didn't. Not in the sense that it was buggy, but in the sense that it was just lacking the (even basic) features I'd come to expect from the past few years. It wouldn't show you which puzzles you'd already solved; you had to keep track of that yourself. (Because I started late, I initially assumed the check marks in the dog show round indicated the puzzles we'd solved. Nope.) It wouldn't show you what wrong answers you'd already submitted; you had to keep track of that yourself. There was nothing explaining how round advancement worked or anything like that, let alone showing your current score (were there scores? I don't know) or time left until next unlock. (When I said above we advanced to every round on a timer, that was an inference.) No list of all puzzles unlocked so far. Etc. Fancy 3D animations are not much of a replacement for actual functionality.

Not to mention the most obvious annoyance of all -- every time you made it to a new round, you had to go to a different website! Ones with crazy long unpronounceable strings of letters in them that you couldn't just easily tell someone. Part of what I want in the Hunt is to be able to easily recruit people during Hunt-time and bring them in without having to get them officially added to the team. Being able to just tell them the website, here's our username, here's our password, works really well for that. This doesn't.

Which brings me to a different point -- I'm not sure I like how Donner Party organized things this year. In the past I'd solved for Manic Sages and Codex, who each used (IIRC) a combination of IRC and a MediaWiki-based wiki. This worked really well; I could let someone else use my account for the wiki, and anyone can use IRC. More recently on Strange New Universe and Donner Party we'd used (IIRC again) IRC (still pretty straightforward) and Google Docs. The latter is a bit more of a pain, I can't just *tell* someone something, I have to use the software to share them on it.

This year we used Google Docs, Trello (acting like the "list of puzzles" page on a wiki), and Slack (substituting for IRC). This... oy. (It was especially a pain combined with the Hunt website address issue I've mentioned above.) Trello was a good addition organization-wise, but I had some trouble adding Adam. (Actually we were using a team account, but I forgot that. Oops.) Still, it eventually worked. Slack, I couldn't figure out how to add him to the chat at all.

Like I said, what I want is to be able to quickly and unofficially add people mid-Hunt. This means either any accounts used should be team ones and I can just give them the username and password, or should be throwaway ones (like on a specialized wiki, rather than on Trello as a whole) that I can also give away. The Trello + Google Docs system was good, about on par with using a wiki, but slightly more troublesome to add people. But Slack seems like a definite downgrade from IRC; having a channel history is a plus but not being able to add Adam is a problem.

(Also, getting back to the Hunt itself, is it just me or were there a lot of unnecessary PDFs this year?)

Anyway, enough about that; time to talk about individual puzzles!

( Cut for length and spoilers )

Anyway, that's it for this year. Hopefully next year is a bit better-run. And maybe I really should see about joining a larger team...

-Harry

2015 Dec 23

21:37:00 - Quick thought on copyright snarls and registration

So currently in the US copyrights are automatic and not required to be registered, due to the US having entered into the Berne Convention. This leads to terrible copyright snarls where one person wants to use something of someone else's but they can't even determine who holds the current copyright to talk to about it.

Obviously bringing back copyright registration, with proper records of copyright transfers and all, would solve this (if y'know you could somehow extricate the US from the Berne Convention). But there is a real advantage to not having to register copyrights; suppose you're just some person who's drawn something and posted it to the internet and now some big company is making money selling T-shirts of it. It'd be nice to have some legal recourse there without having to register every single thing you do.

So here's an idea for an intermediate policy, which I assume would still violate the Berne Convention, but just something worth considering if, y'know, IP reform ever actually happens. What if copyrights didn't have to be registered, but unregistered copyrights could not be transferred or exclusively licensed? Obviously they could be licensed, but you could not legally bind yourself not to license it in the future. Unregistered copyrights would also automatically expire with the author's death. This way, if the copyright is unregistered, tracking who you need to consult with is easy: You talk to the author. If the copyright is registered, it might have gone through all sorts of complex stuff, but that would all be in the registry and you could look it up.

(I am not a lawyer, there may easily be all sorts of nasty details here I'm missing. But thought I'd throw this out there.)

2015 Dec 20

03:24:00 - The Force Awakens was... OK?

So, I basically stopped really caring about Star Wars a while back, but Noelle organized an expedition to see The Force Awakens today, so I went and saw it. It was OK? There were substantial parts that didn't make a lot of sense, or at least weren't sufficiently clarified, so I'm basically going to complain about it here.

( Cut for spoilers )
...OK. I'm done complaining for now. There's more I could say but I think that would be picking nits. The original left a lot unexplained too, after all, and wasn't really the worse for it. These are just the things I think need explaining or fixing.

-Harry

2015 Nov 10

22:27:00 - A big "oops" on ordinal arithmetic, and some other updates

Well, it's time for me to declare a big "oops".

My paper Transfinitely iterated natural multiplication of ordinal numbers was recently rejected from the Notre Dame Journal of Formal Logic, and for good reason.

The mathematical assertions aren't wrong. But contrary to what I said, it's actually quite easy to prove Jacobsthal's distributivity law by transfinite induction, and I entirely missed this. My thanks to the anonymous referee for pointing this out. The referee also claimed this is true of my theorem that a^⊗(b⊕c)=(a^⊗b)⊗(a^⊗c), which I haven't yet checked, but is presumably also true.

EDIT Nov 12: OK, now I'm confused. I've tried to check it today, and... all I could get was an inequality. I'm not sure what to make of this.
EDIT Nov 13: OK, I figured out the mistake, disregard the above.

This cuts out a lot of what I said; I'll have to rewrite a lot! It also turns it into, well, more of an expository paper. I mean, I think it's some good exposition; I would still really like to see it published somewhere, because it summarizes and connects up some things that haven't been connected up elsewhere. But I'm going to need to find somewhere else to publish it, that's for sure.

Mind you, there still is the problem of finding an order-theoretic proof of these laws. Of course, that first requires an order-theoretic interpretation of a×b (Jacobsthal's multiplication) and a^⊗b (super-Jacobsthal exponentiation).

But the former of those now does in fact exist! Take a look at this paper: Some iterated natural sums, by Paolo Lipparini. It gives an order-theoretic interpretation of the infinitary natural sum, and therefore, implicitly, Jacobsthal multiplication.

This order-theoretic interpretation does not make it immediately obvious that a×(b⊕c)=(a×b)⊕(a×c), nor that a×(b×c)=(a×b)×c; but just having an interpretation at all is a huge start. I have written to Lipparini and he agrees the problem is interesting.

One thing worth noting about this order-theoretic characterization of a×b is that while it makes it obvious that ab≤a×b, it doesn't make it obvious that a×b≤a⊗b. Of course, what Lipparini did wasn't meant specifically as an interpretation of a×b at all, but the generic infinitary natural sum, where you can't assume that all the summands are the same. It's possible that for the specific case of a×b, one might be able to find a reformulation of his characterization that does make this obvious. Of course, proving that this reformulation is the same might be far from easy! Who knows.

So, that's the state of things. Guess I've got quite a bit of work to do... (Well, I guess that's true regardless. In fact I might want to put off fixing this for now to get more writing up of integer complexity done.)

-Harry

2015 Oct 23

02:40:00 - Some possibly-meaningless analyses of Super Smash Bros. matchup charts

So here's a question for you: How does one generate a tier list from a matchup chart?

A matchup chart, obviously, contains way more information than a tier list. If you know how literally every character matches up against every other character, you should certainly be able to say which characters are, on the whole, better than others. But how? If you have two characters you want to compare, and one dominates the other, then this is easy; but what if neither dominates?

(I'm assuming here that a "tier list" means just a list of all characters sorted by rank; I'm ignoring any further information, such as numerical ratings or actual divisions into tiers.)

This is the method I came up with: First, we treat the matchup chart as a (zero-sum, symmetric) game. (Since matchup charts are usually presented as being out of 10, you may have to subtract 5 from each entry before you can feed it into your standard tools.) Then, compute the optimal strategy for this game. (I'll assume this is unique, as it will be in all cases analyzed here; I'm not going to worry about what happens if it isn't.) Finally, rank each character by how well they do against this optimal strategy. (There might be ties; I'm just not going to worry about that for now.)
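
Here's the whole procedure as a quick Python sketch, using scipy's linear-programming routine. Just a sketch: the little 4-character chart at the bottom is made up for illustration, not any real matchup chart.

import numpy as np
from scipy.optimize import linprog

def tier_order(chart, names):
    """Rank characters by how well they do against the optimal mixed strategy.

    chart[i][j] is character i's matchup against character j, out of 10;
    subtracting 5 turns a symmetric chart into a zero-sum payoff matrix.
    """
    A = np.asarray(chart, dtype=float) - 5.0
    n = len(names)
    # LP variables (x_1, ..., x_n, v): maximize v subject to A^T x >= v, sum(x) = 1, x >= 0.
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # each row says -(A^T x)_j + v <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x = res.x[:n]                                  # the optimal mixed strategy
    scores = A @ x                                 # each character's payoff against it
    return [names[i] for i in np.argsort(-scores)], x

# Made-up 4-character chart (rows: how the row character does, out of 10):
names = ["A", "B", "C", "D"]
chart = [[5, 6, 7, 8],
         [4, 5, 6, 7],
         [3, 4, 5, 6],
         [2, 3, 4, 5]]
print(tier_order(chart, names))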

EDIT 6 August 2016: Welp, I've now actually encountered the case of the optimal strategy being nonunique; I decided to handle it by taking the centroid of the set of optimal strategies and using that. How meaningful that is, I don't know, but I couldn't think of anything better. (Note that for a two-player zero-sum game, the set of optimal strategies must be convex.)

Originally I had left out the final step, instead thinking that how often a character occurs in the optimal strategy should be used as the method of ranking. But later I realized this makes no sense -- suppose you have two characters, A and B, which are very similar, but A slightly dominates B. Then B should rank just a little bit below A, but this method would instead put B in last place. I don't think that's the intent of a tier list. We can, I suppose, use this as a tiebreaker for the actual method I suggest though.

Now obviously there are some problems with this. In real life, not everyone is actually capable of playing every character to its greatest extent. Still, I think it is an interesting model. What does it result in when applied to Super Smash Bros.? And do the results match the more usual tier lists?

Now, it's quite a bit more work to produce a matchup chart than it is to produce a tier list, so matchup charts are not always so up to date. As such, we'll compare each matchup chart not to the current tier list but to the one closest in time to when the matchup chart was produced.

SSBWiki lists four matchup charts: One for the original Super Smash Bros., one for Melee, one for Brawl, and one for the Japanese version of the original Super Smash Bros (various things were altered for the more well-known international version). (For obvious reasons, there is as yet no Super Smash Bros. 4 matchup chart; indeed, there isn't even yet a tier list. Since the game is still actively being patched, producing one would be premature. Also, without a Super Smash Bros. 4 Back Room, it's not clear who would produce an "authoritative" one.)

The SSB chart will be compared against the 3rd SSB tier list; the SSBM chart against the 10th SSBM tier list; and the SSBB chart against the 8th (current) SSBB tier list. (Again, this is based on the dates the matchup charts are from.) SSBWiki doesn't list a separate Japanese SSB tier list, but there seems to be one implicit in the Japanese SSB matchup charts, i.e., the ordering of the characters used in the chart, so that's what we'll compare it to.

Actually, before I continue, let me observe -- it's not obvious that my method should make sense for the SSBM and SSBB matchup charts listed here, since rather than stating how often a given character wins a given matchup, they just give abstract descriptions such as "slight advantage", "large advantage", "close to unloseable", etc. Nonetheless, as we'll see, this will prove not to be an issue.

The thing about the SSB, SSBM, and SSBB matchup charts is that all three are dominance-solvable; the optimal strategy, it turns out, isn't any sort of mixed strategy at all, but simply to always play the best character (i.e., Pikachu for SSB, Fox for SSBM, and Meta Knight for SSBB). This is why I said it doesn't matter that the SSBM and SSBB charts don't actually give matchup splits, since only qualitative comparisons are needed to establish this result.
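
If you want to check the dominance analysis below mechanically, here's a rough sketch of iterated elimination of dominated characters. (Again, the function name is mine, and I've left strict-vs.-weak dominance as a flag, since I'm glossing over which notion is the right one for charts full of even matchups.)

import numpy as np

def iterated_dominance(chart, names, strict=True):
    # Repeatedly delete any character that is dominated, according to the
    # chart, by some other remaining character; chart[i][j] = games i wins vs j.
    M = np.array(chart, dtype=float)
    alive = list(range(len(names)))
    changed = True
    while changed:
        changed = False
        for i in list(alive):
            for j in alive:
                if i == j:
                    continue
                ri, rj = M[i, alive], M[j, alive]
                if strict:
                    dominated = bool(np.all(rj > ri))
                else:
                    # Weak dominance: j does at least as well as i against
                    # every remaining character, and strictly better somewhere.
                    dominated = bool(np.all(rj >= ri) and np.any(rj > ri))
                if dominated:
                    alive.remove(i)
                    changed = True
                    break
            if changed:
                break
    return [names[k] for k in alive]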

In the SSB case, Pikachu dominates all characters but Kirby and Fox; but once we restrict to those three, Pikachu dominates.

In the SSBM case, the characters not dominated by Fox are Falco, Sheik, Marth, Jigglypuff, Peach, Captain Falcon, Ice Climbers, Samus, Ganondorf, Link, and Donkey Kong. Once we restrict to those, Fox dominates everyone but Sheik, Marth, Jigglypuff, Peach, Captain Falcon, Ganondorf, Link, and Donkey Kong. And once we restrict to those, Fox dominates everyone but Sheik and Jigglypuff. Once we restrict to those three, Fox dominates.

In the SSBB case, the characters not dominated by Meta Knight are Ice Climbers, Olimar, Diddy Kong, Marth, Falco, Pikachu, Wario, Lucario, King Dedede, Donkey Kong, and Sheik. Once we restrict to those, Meta Knight dominates everyone but Ice Climbers, Olimar, Diddy Kong, Marth, Pikachu, Lucario, and King Dedede. And once we restrict to those, Meta Knight dominates everyone but Ice Climbers, Olimar, Diddy Kong, Marth, and Lucario. Once we restrict to those six, Meta Knight dominates.

In all three cases, then, my method says that for these games, the tier list should be ordered purely based on how each character does against the unique best character (we'll ignore the question of how ties are broken). As you can easily check, this is not how they actually are ordered.

This leaves the case of DSB (Japanese SSB). Here the optimal strategy is in fact mixed! Let's start with a dominance analysis. Pikachu dominates everyone but Kirby, Captain Falcon, and Ness; and once we restrict to those 4, Ness is dominated by Kirby. But once we restrict attention to the remaining three, we find that none of them dominates. In fact, the unique optimal strategy is a mix of 1/4 Pikachu, 1/4 Kirby, and 1/2 Captain Falcon.

This is surprising because it suggests that in some sense Captain Falcon is the best character, rather than Pikachu! I'm not sure how believable that really is, but that's what it says.

This case is more interesting than the others, so let's do a full listing. I'm going to rank characters by how they do against this optimal strategy, as discussed above. I'll then break ties by how often they appear in this optimal strategy. Finally I'll break the remaining ties by their rank in the (inferred) Japanese tier list, i.e., I'll try to keep the resulting list as close to the usual tier list as possible within the bounds stated above.

If we do this, we get the following results:

Character       Games won  Games used  Usual rank  Resulting rank  Difference
Captain Falcon  5          5           3           1               +2
Pikachu         5          2.5         1           2               -1
Kirby           5          2.5         2           3               -1
Ness            4.75       0           5           4               +1
Fox             4.25       0           4           5               -1
Jigglypuff      4.25       0           8           6               +2
Mario           4          0           6           7               -1
Luigi           3.5        0           11          8               +3
Samus           3.25       0           7           9               -2
Link            3          0           9           10              -1
Yoshi           3          0           10          11              -1
Donkey Kong     1.75       0           12          12              0


Here "games won" is how many games out of 10 they win against the optimal strategy above, "games used" is how many games out of 10 they're used in the optimal strategy above, "usual rank" is their rank in the usual Japanese tier list (that I've inferred), "resulting rank" is their resulting rank in this new ordering, and "difference" is, well, the difference.

So, that's a little different, with Luigi jumping up 3 whole spots. Most of the rest isn't too different, but remember, I deliberately broke ties in a way designed to keep it similar.

Is any of this really meaningful? I have no idea. Captain Falcon getting the edge over Pikachu certainly seems wrong... but then, that was on a tiebreaker, by a different method. Still, I thought it was interesting to see how it would turn out. If anyone wants to try it on other games and report the results, I'd be interested to hear!

-Harry

2015 Oct 9

17:08:00 - Some more questions about big multichoose

EDIT Oct 12: Corrected a mistake in question 0, and also expanded it slightly. Also pretty sure I've proved question 1 and therefore 2 and 3; see note below.

So, OK, I know how to compute big multichoose now. But I was going through some old notes of mine and I found a few questions about it I still hadn't answered. Two of them, I suppose, could potentially be solved based on my computation rule, but that seems like it would be quite ugly. I still don't have a resolution to these questions; I'm going to state them here.

Notation: I'll write ((α multichoose β)) for my "big multichoose" operation. Given posets A and B, I'll write Rev(A,B) for the set of order-reversing maps from A to B, considered as a poset. Now, all orders considered here will be WPOs; in general, given WPOs A and B, the order Rev(A,B) is not necessarily a WPO, but I'll only consider cases where it is. In particular, for any ordinal α and a WPO B, Rev(α,B) is a WPO. Remember, ((α multichoose β)) is by definition o(Rev(β,α)).

So suppose A, B, and Rev(A,B) are WPOs; then it's easy to see that o(Rev(A,B))≥o(Rev(o(A),B)). Adding more relations to A just removes points from Rev(A,B) (while keeping the order on what remains), so passing from A to o(A) can only make the maximum extending ordinal go down.

However, it's less clear what happens if we take o(Rev(A,o(B))) instead of o(Rev(o(A),B)). Adding more relations to B adds points to Rev(A,B), which would appear to make o(Rev(A,B)) go up; however, it also adds relations to Rev(A,B), which would appear to make o(Rev(A,B)) go down.

So naively, it could go either up or down. However, I have never once seen it go down! I've only seen it go up or stay the same.

So, question 0: If A, B, Rev(A,B), and Rev(A,o(B)) are all WPOs, does one have o(Rev(A,B))≤o(Rev(A,o(B)))? (And if A, B, and Rev(A,o(B)) are all WPOs, does this imply that Rev(A,B) is too?)

And in case that's false, the more specific question 1: If α is an ordinal and B is a WPO, does one have o(Rev(α,B))≤o(Rev(α,o(B)))?

Note that if this is true, it cannot be simply because o(B) is an extension of B! One can find other extensions that make this false. For instance, suppose α=2, and B is the disjoint union of ω and 1; one may extend B to an order of type ω, but this will cause o(Rev(α,B)) to go down.

So far, I've been able to prove this in the case when α is finite, by making use of my computation rule. (I think attempting to prove this without making use of my computation rule will be impractical.)

The above two questions don't purely deal with big multichoose and thus can't be settled by my computation rules. The next two questions potentially could.

With Rev, one has the obvious isomorphisms Rev(A⨿B,C)=Rev(A,C)×Rev(B,C), Rev(A,B×C)=Rev(A,B)×Rev(A,C), and Rev(A,Rev(B,C))=Rev(A×B,C).

If one combines the first of these isomorphisms with the inequality o(Rev(A,B))≥o(Rev(o(A),B)) mentioned above, one obtains the inequality

((α multichoose β⊕γ)) ≤ ((α multichoose β)) ⊗ ((α multichoose γ)).
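
Spelling that out (using the standard facts that o(A⨿B) = o(A)⊕o(B) and o(A×B) = o(A)⊗o(B) for WPOs A and B):

((α multichoose β⊕γ)) = o(Rev(β⊕γ, α)) = o(Rev(o(β⨿γ), α)) ≤ o(Rev(β⨿γ, α))
= o(Rev(β,α) × Rev(γ,α)) = o(Rev(β,α)) ⊗ o(Rev(γ,α)) = ((α multichoose β)) ⊗ ((α multichoose γ)).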

However, attempting to turn the other two isomorphisms into inequalities would require the use of the inequality in question 1. So this gives rise to questions 2 and 3 -- keep in mind that a positive answer to question 1 (or, y'know, question 0) would settle both of these in the affirmative:

Question 2: Is ((α⊗β multichoose γ)) ≥ ((α multichoose γ)) ⊗ ((β multichoose γ))?

Question 3: Is ((α multichoose β⊗γ)) ≤ (( ((α multichoose β)) multichoose γ))?

Because I can currently prove question 1 in the case where α is finite, I can prove both of these in the case where γ is finite.

EDIT Oct 12: I'm pretty sure I have now proven question 1 in general, thereby proving questions 2 and 3. Unfortunately the proof is not simple. As for question 0... well, it's pretty obviously true when A and B are both finite. Other than that I haven't gotten very far, but I can't find a counterexample. If it is indeed false, perhaps it's worth asking with the assumption that A and B are both BPOs instead of merely WPOs. (We can call that question 1/2.)

Both of these could potentially be answered by purely computational means, though it would be somewhat unsatisfying. Right now, though, I have not even done that -- although I can at least verify that they're true when the inputs are powers of ω; that case, at least, is straightforward computationally...

-Harry

2015 Sep 5

02:22:00 - The AADL Smash tourney

So today there was a Smash 4 tournament at North Quad. I decided to enter, figuring I was unlikely to win but if nothing else I could probably beat a few people who didn't know the game very well. :P It was pretty small; only 16 people showed up.

It was... not very well-organized. Well -- it could have been a lot worse, considering it didn't seem to have been organized by actual Smash players. But here's how it worked:

Format -- first, everyone played 3 single-game matches against other people. (I'm not sure how those were determined.) Then top 8 went to single elimination; since there was more time left than expected, these matches were best of 3.

Rules -- OK, so, before anything else, at the beginning of the tournament, the organizer held a vote over whether the tournament would be free-for-all or one-on-one (!). There were a few votes for FFA, but not many.

Rules were 2 stock, 5 minutes... and all items on low. After some people complained about this, it was decided that items could be turned off if both players agreed. I didn't see any matches actually being played with items; I'm pretty sure there were at most one or two.

(Tangential note: A lot of people assume adding items makes things easier for new players, due to the extra variance. It might make things easier for OK-but-not-great players, like me... but in my experience, actually new players are just unfamiliar with and bewildered by the items, and it makes things harder for them.)

Stage selection was always random -- from among all the stages. Yes, seriously. But it was decided that if both players could agree on a stage, they could play there instead. (All my matches were played on Battlefield.)

Except actually not all the stages were available! I was worried that, say, not all the DLC would be there; but in fact not even all the unlockable stages were unlocked. Hence why Battlefield rather than Smashville. (All the DLC characters were there, but I don't think all the DLC stages were, IIRC.)

Let me move away from rules for a moment. So, I went 2-1 during the initial part. I played Ness, of course. First match was a close game against a Pit player, whose name I forget; Ray, I think? He had quite an advantage on me at first, but eventually I fought my way back. Second match was against Adrian, who played Dark Pit and would eventually win the tournament. I got destroyed, not even close. Third match was against BK, who played Lucario; that was an easy victory for me. So, wide range of skill levels on display!

After that was cut to Top 8 and single elimination. (Fortunately, no tiebreakers were needed.) I'm not sure how seeding was done among the many tied people. I ended up facing Adrian first and so was eliminated quickly.

Let me talk more about the rules here -- there were no particular rules about character selection, or the order in which characters and stages were picked. First game against Adrian was basically a reprise of last time; I picked Ness and he picked Dark Pit, and he beat me even more resoundingly than last time. For the second game, I figured I needed something different if I wanted to have a chance, so I picked Little Mac. And then he changed to Sheik! Wait, what? I lost, I should be counterpicking, not him! I mean, since there were no defined rules, and I wasn't likely to win anyway, I didn't bother to complain. I lost the second game, again by a lot, but not as badly, at least. (I don't think the organizer really thought about just how to do best-of-3 when it was a last-minute addition.)

Also, when we went to top 8, one player dropped out for actually being affiliated with the tournament. Blech. If someone is going to have to drop out, don't let them play! This is just dumb.

Anyway, despite the lack of organization, it was still a fun time. That's really all I guess.

-Harry

2015 Aug 29

01:41:00 - Ordinals ordinals ordinals!

So I have a number of neat things to report about ordinals! In order from least to most exciting:

#1: I can now compute big multichoose, in general

At least, I'm pretty sure I can prove it. More on this in a moment.

(Unfortunately, this means I have no "easy math" I can work on right now -- I need to get back to the hard math, y'know.)

Hm -- I just looked and saw I never defined "big multichoose" here. In short: If α and β are ordinals, we can consider the poset of all weakly decreasing functions from β to α; this is a WPO, so we can take the largest ordinal extending it. This is what I mean by "α multichoose β", which I will denote here as ((α|β)) because what else am I going to do.

Point is, I've been puzzling over it for a while, working out special cases, because for obvious reasons this is quite a bit harder to compute than left and right multichoose from earlier.

#2: There really, really, is no natural exponentiation

In appendix B of my ordinal arithmetic paper, I showed that there's no operation satisfying several desiderata that we'd expect of a "natural exponentiation". However, one of the desiderata was a bit less, uh, to be desired than the others. But I've now managed to eliminate the dependence on that hypothesis! Best of all, the new proof is only a slight modification of the existing one. The super short version is, instead of defining f(n) = deg e(n,ω), you define f(n) to be the "leading coefficient" of deg e(n,ω).

I'll update the paper shortly -- I'll also need to notify the journal I submitted it to about the update...

#3: Big multichoose is related to the surreal exponential!

Don't ask me why -- I don't actually understand the surreal exponential at all. But it totally is! The thing is, this doesn't actually depend on any of the new stuff in #1; what's related to the surreal exponential is the rule for computing ((ω|α)), or rather ((ω|α))0. (I'm using the subscript 0 to mean that I'm restricting to functions which are eventually 0.) It turns out for α a limit ordinal,

((ω|α))0 = exp(α)

where exp is the surreal exponential.

Again, I don't have any idea why this is so, I just noticed that the rules for computing them are the same. I didn't notice it before because I hadn't written the rule for computing the former in a way that made it obvious.

So let me go into more detail about #1 now -- the final piece of the puzzle was figuring out how to compute ((ω^α|ω^β))0 for α,β>0. What I eventually figured out was that this is equal to ω^ω^G(α,β) for a certain function G. Specifically, G(α,β) is equal to α⊕β if either α or β is of the form ε_γ+k for some ordinal γ and some finite k, or if both are infinite. Otherwise -- if one is finite and the other is not of the form ε_γ+k -- then instead you get (α⊕β)-1, which necessarily makes sense since one of them is finite and nonzero.

Now, Harry Gonshor, in his book "An Introduction to the Theory of Surreal Numbers" -- where he defined the surreal exponential in the first place -- showed that there's a function g such that for a surreal number α>0, exp(ω^α)=ω^ω^g(α), and showed how to compute g. Of course, we only care about the case where α is an ordinal, not just any surreal. And in that case, g(α) is equal to α+1 if α is of the form ε_γ+k, and is equal to α otherwise.

See the similarity? G(1,β) is exactly equal to g(β). And from this follows the equation above.
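
Just to make that concrete (this is nothing new, just the two rules above applied side by side): take β=ω. Then ω is infinite but not of the form ε_γ+k, while 1 is finite, so G(1,ω) = (1⊕ω)-1 = ω, which is exactly g(ω); hence ((ω|ω^ω))0 = ω^ω^ω = exp(ω^ω). Or take β=ε_0: both rules give ε_0+1, so ((ω|ω^ε_0))0 = ω^ω^(ε_0+1) = exp(ω^ε_0).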

Now as I said this only requires knowing how to compute G(1,β), i.e., how to compute ((ω|β))0, which I already knew how to do. But I didn't notice the similarity until I wrote it in this form depending on G. And I wouldn't have written it that way if I weren't trying to determine ((α|β)) more generally. So there is that bit of relation there. I determined the form of G, and then noticed that G(1,β) reminded me of the rule for g(β); and I ran to the library to look it up, and indeed it was the same.

So yay! That's finally solved. Now back to harder math, I guess...

-Harry
