
Chronicles of Harry

2016 Apr 30

23:55:00 - Natural multichoose and Hahn series over lambda-rings

It works! OK, so...

All ordinal arithmetic in this entry will be natural. All addition will be natural addition and all multiplication will be natural multiplication. Basically, for now, I am embedding the ordinals in the surreals. Also, all rings will be commutative.

So you'll recall I came up with notions of "natural multichoose" and "natural choose" (aka "big multichoose" and "big choose") for ordinals. These are defined order-theoretically, remember: ((α multichoose β)) is the largest ordinal extending the partial order of weakly decreasing functions from β to α, and (α choose β) is the same with strictly decreasing functions. (Obviously, this latter is zero unless β is finite.)

So I spent quite a bit of time figuring out how to compute these, but that's a problem I solved a while back. Also you'll recall I figured out some inequalities for them. But just a few weeks ago I realized something -- that while I was looking at inequalities, I had missed at least one straight-up identity!

While the description of how to compute ((α multichoose β)) in general is more complicated (though not that complicated!), the description of how to compute ((α multichoose k)), where k is finite, is much simpler. Here's one simple way of stating it, in the form of two rules that determine it completely: 1. We'll start with powers of ω: ((ω^α multichoose k)) = ω^(αk). 2. Now extend by Vandermonde's convolution. (Vandermonde's identity, while usually stated for choose, is also valid for multichoose, with the same proof.)
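To make those two rules concrete, here's a quick sketch in code -- my own, with made-up function names, and restricted to ordinals below ω^ω so that Cantor normal form exponents are ordinary natural numbers. An ordinal is represented as a descending tuple of its CNF exponents (so ω²+ω+ω is (2, 1, 1), the ordinal 1 is (0,), and 0 is ()); natural sum merges the exponent multisets, and natural product takes pairwise sums of exponents.

```python
# Sketch: natural multichoose ((alpha multichoose k)) for finite k, for
# ordinals below omega^omega.  An ordinal is a descending tuple of the
# exponents in its Cantor normal form: omega^2 + omega + omega is (2, 1, 1).

def nat_add(a, b):
    # natural (Hessenberg) sum: merge the two exponent multisets
    return tuple(sorted(a + b, reverse=True))

def nat_mul(a, b):
    # natural product: pairwise sums of exponents
    return tuple(sorted((e + f for e in a for f in b), reverse=True))

def multichoose(alpha, k):
    # Rule 1: ((omega^e multichoose k)) = omega^(e*k).
    # Rule 2: extend to sums of omega-powers by Vandermonde's convolution.
    if k == 0:
        return (0,)      # ((alpha multichoose 0)) = 1
    if not alpha:
        return ()        # ((0 multichoose k)) = 0 for k > 0
    e, rest = alpha[0], alpha[1:]
    total = ()
    for i in range(k + 1):
        # ((omega^e multichoose i)) = omega^(e*i), times the rest
        term = nat_mul((e * i,), multichoose(rest, k - i))
        total = nat_add(total, term)
    return total
```

For instance, multichoose((1, 0), 2) returns (2, 1, 0), i.e. ((ω+1 multichoose 2)) = ω²⊕ω⊕1.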

Now I had never described it in that form before; and yet, that's plainly what it was. Somehow, until the other week, I never realized that, wait, natural multichoose satisfies Vandermonde's identity! (So, for that matter, does natural choose. The rule for computing natural choose is the same as the rule for computing natural multichoose, except that of course (1 choose k) is 0 instead of 1 for k>1.)

This got me wondering about the possibility of a version of Vandermonde's identity holding even when the thing on the bottom is infinite -- after all, we're using natural addition here, and a given ordinal has only finitely many natural summands. Unfortunately, the obvious analogue doesn't work. For choose it does... but only because if you have something infinite on the bottom, both sides are automatically 0. So that's not interesting. Let's forget about putting infinite things on the bottom; while it's a fruitful subject (that I've written about elsewhere!), for the remainder of this entry, I will only consider choosing or multichoosing whole numbers.

So this got me wondering -- what about further identities? There are various combinatorial identities we could try, and I could go over how I came up with ones to try, but, well, most of them didn't work, so I think I'll just skip that.

However, one thing did appear to work. Let's talk about λ-rings. A λ-ring is a ring equipped with operations λ^k for each whole number k, which must satisfy certain identities. The quantity λ^k(x) is meant to be analogous to (x choose k) -- or, more analogously, the k'th exterior power of x. An equivalent description is in terms of operations σ^k, analogous to ((x multichoose k)), or, more analogously, to the k'th symmetric power of x; these satisfy similar identities but not the same ones. The λ^k and σ^k are interdefinable via σ^k(x) = (-1)^k λ^k(-x).

In fact, one simple source of λ-rings is binomial rings. If you have a ring R with no Z-torsion where, for any x in R and any whole number k, (x choose k) is in R -- that's (x choose k) in the usual polynomial sense -- then you get a λ-ring this way, with λ^k(x) = (x choose k) and σ^k(x) = ((x multichoose k)). These are basically the simplest examples of λ-rings. We'll return to these later.
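As a sanity check on the binomial-ring picture, here's a small numeric verification -- my own throwaway code, nothing from any source -- that polynomial choose and multichoose satisfy Vandermonde's identity, and that the interdefinition σ^k(x) = (-1)^k λ^k(-x) really does turn choose into multichoose.

```python
from fractions import Fraction
from math import factorial

def choose(x, k):
    # (x choose k) in the polynomial sense: x(x-1)...(x-k+1)/k!
    num = Fraction(1)
    for i in range(k):
        num *= x - i
    return num / factorial(k)

def multichoose(x, k):
    # ((x multichoose k)) = (x+k-1 choose k) = x(x+1)...(x+k-1)/k!
    num = Fraction(1)
    for i in range(k):
        num *= x + i
    return num / factorial(k)
```

Both satisfy Vandermonde's convolution, e.g. ((7 multichoose 5)) = Σ ((3 multichoose i))·((4 multichoose 5-i)), and multichoose(x, k) agrees with (-1)^k · choose(-x, k) term by term.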

So, the one thing that appeared to work is that natural multichoose appeared to satisfy the identities of a λ-ring! The σ-identities, that is, not the λ-identities. (Unfortunately, natural choose does *not* satisfy the identities of a λ-ring. You do of course get operations λ^k from taking σ^k to be natural multichoose, but while I can find an order-theoretic interpretation of these, it's rather artificial.) I haven't listed out what the identities are, but basically, aside from a few trivial ones, there are identities for λ^k(x+y), for λ^k(xy), and for λ^m(λ^n(x)). (And of course one can write down analogous identities for σ^k.) The addition identity is just Vandermonde, and works the same way for σ^k; the multiplication and iteration identities are more complicated, and are different for σ^k; I won't attempt to describe them here.

And actually there's a pretty simple proof that ordinal multichoose satisfies these σ-identities, though it took me a while to realize, and I spent quite a while just wondering. Basically it follows quite easily once you realize that if you have an ordinal α = ω^(α_1) + ... + ω^(α_r) (written in "weak Cantor normal form", where there are no coefficients but the exponents are weakly decreasing), then a simple way of describing ((α multichoose k)) is as h_k(ω^(α_1), ..., ω^(α_r)), where h_k is the complete homogeneous symmetric polynomial of degree k. (If you replace h_k here with e_k, the elementary symmetric polynomial, you get the λ^k corresponding to this σ^k. Which, remember, is not natural choose.)

So this is pretty good -- natural multichoose satisfies the identities of a λ-ring. There's just one thing lacking, and that's that ordinals don't form a ring. So actually they're a "λ-semiring". Or a "σ-semiring", I guess; I'm a little doubtful that the equivalence works without negatives (even though in this case the λ^k do take ordinals to ordinals). (EDIT: OK actually it probably works. Whatever.)

So that raises the question -- can we extend these operations, these σ^k, to differences of ordinals in a sensible manner, and make an actual λ-ring? Or, hell, let's go all the way -- how about we extend them all the way to the surreals? Can we make the surreals into a λ-ring like this, so that when we restrict the σ^k down to the ordinals, we get back the order-theoretically meaningful natural multichoose?

Note of course that there's already a λ-ring structure on the surreals coming from the fact that, as a field of characteristic 0, they're certainly a binomial ring. But this would be a different λ-ring structure; as a binomial ring, we'd have σ²(ω) = ω²/2 + ω/2, whereas under this one, we'd have σ²(ω) = ω².

Basically, with ordinals, if we had a "monomial" ω^α·a, then what we had was σ^k(ω^α·a) = ω^(αk)·((a multichoose k)), and then you can extend by Vandermonde. (For λ instead of σ, just use choose instead of multichoose.) Well, surreal numbers have something like Cantor normal form -- Conway normal form. The differences are that, firstly, the exponents can be arbitrary surreal numbers rather than just ordinals; secondly, the coefficients can be real numbers rather than just whole numbers; and thirdly, rather than being a finite sum, it can be an arbitrary well-ordered decreasing sum. So this gives us a way of extending it; define σ^k(ω^α·a) = ω^(αk)·((a multichoose k)), where now α can be surreal and a can be real, and then extend in the obvious fashion. (Or you can do the same with λ instead of σ.) Of course, the question remained: Does this form a λ-ring? Vandermonde (and the trivialities) are easy enough to prove, but the multiplication and iteration identities are nasty and hard to reason about.

Now, since Conway normal forms can be added and multiplied in the obvious way, this means that surreal numbers are isomorphic to the field of Hahn series over the real numbers with the value group being the surreal numbers themselves; the isomorphism is given by just replacing ω with T^(-1). (Yes, the value group is also the surreal numbers; that's the sort of weird thing that happens when you've got surreal numbers.) So the question now becomes -- if we have a λ-ring, and we take a ring of Hahn series over that λ-ring, is it also a λ-ring? Where here, in this more general case, we're defining λ^k(T^α·a) to be T^(αk)·λ^k(a). (Or the same thing but with σ instead of λ.) Right, in the case we care about, that of the surreals, the base ring is the reals, and we're considering it as a λ-ring by considering it as a binomial ring.
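Here's a toy model of that definition -- entirely my own sketch, with invented helper names, using only finite-support series over Q with rational exponents as a stand-in for genuine Hahn series, and Q carrying its binomial-ring σ-operations. σ^k of a monomial follows the rule above, extended to sums of monomials by Vandermonde; with ω = T^(-1), it gives σ²(ω) = ω² rather than the binomial ring's ω²/2 + ω/2.

```python
from fractions import Fraction
from math import factorial

def sigma_base(a, k):
    # sigma^k on the base ring Q via its binomial-ring structure:
    # sigma^k(a) = ((a multichoose k)) = a(a+1)...(a+k-1)/k!
    num = Fraction(1)
    for i in range(k):
        num *= a + i
    return num / factorial(k)

def sigma(f, k):
    # sigma^k on a finite-support series {exponent: coefficient}, using
    # sigma^k(T^alpha * a) = T^(alpha*k) * sigma^k(a) on monomials and
    # Vandermonde's convolution to extend to sums of monomials.
    if k == 0:
        return {Fraction(0): Fraction(1)}   # sigma^0 = 1
    if not f:
        return {}                           # sigma^k(0) = 0 for k > 0
    items = sorted(f.items())
    (alpha, a), rest = items[0], dict(items[1:])
    total = {}
    for i in range(k + 1):
        coeff = sigma_base(a, i)            # sigma^i of the first monomial
        for e2, c2 in sigma(rest, k - i).items():
            e = alpha * i + e2
            total[e] = total.get(e, Fraction(0)) + coeff * c2
    return {e: c for e, c in total.items() if c != 0}
```

With ω = T^(-1), σ² of the series for ω is the series for ω², and σ² of ω+1 comes out to ω²+ω+1, matching natural multichoose ((ω+1 multichoose 2)).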

Well, I learned, from Darij Grinberg's text on λ-rings, that in fact this is well-known if, rather than Hahn series, we are just taking polynomials. So, could I extend it? To power series, to Laurent series, to Puiseux series -- to Hahn series?

Well, long story short, yes. At first I had some trouble because Hahn series, unlike power series or Laurent series or Puiseux series, aren't actually the limit of finite sums of their monomials; so when I tried to imitate the proofs Grinberg gives, which work by first doing it for monomials and then extending by Vandermonde, I ran into a problem. I wanted to just do the same thing but extending by Vandermonde and also continuity, and this worked for power series and Laurent series and Puiseux series, but not for Hahn series. I was just going to go ask MathOverflow, honestly. But today I made one last try -- I imitated Grinberg's proofs, but this time, instead of doing it for monomials like he does, I just plugged in the whole damn sum. And it worked! (Note that Grinberg's proof basically offloads the hard part to the verification that for a λ-ring A, Λ(A), which I'm not going to define here, is a λ-ring.) I was able to keep all the infinite sums and products to contexts where they truly made sense and could be manipulated sensibly without having to worry about any sort of convergence (or where such convergence is easily proved, like rings of formal power series).

So that's it then! Hahn series over a λ-ring form a λ-ring; in particular the surreal numbers, considered as Hahn series over the real numbers, form a λ-ring; and when you restrict the σ^k operations of this λ-ring down to the ordinals, you get natural multichoose, which is order-theoretically meaningful. And therefore (although we knew this already by easier means) the natural multichoose operations also satisfy the identities of the σ^k in a λ-ring, because they are the σ^k in a λ-ring.

So, that's one more thing to write up someday...

EDIT: Fixed interdefinition of λk and σk, oops.
EDIT: Found a reference for the equivalence, thanks to Darij Grinberg! Hazewinkel, Gubareni, and Kirichenko's "Algebra, Rings, and Modules" volume 3 goes over it. It uses σ rather than S, so I'm changing that in the entry.


2016 Apr 11

19:43:00 - A way to handle the surcomplex exponential?

Here's an idea I had, I have no idea if this is standard or what. It's kind of trivial, but it seems worth mentioning at least.

EDIT: OK, here is somebody saying this previously; it may be standard, I'm not so sure.

So, background: Harry Gonshor defined the exponential function for the surreals. But what about for the surcomplex numbers? Say z=x+iy is surcomplex. It's clear what the magnitude of exp(z) should be (namely, exp(x)), but how is the direction determined? Now, if y is limited (i.e., not infinite), then cos(y) and sin(y) are defined and so one can define exp(z), and everything works out. But when y is infinite we have more of a problem; after you go around a circle infinitely many times, where do you end up? The oscillating nature of exp(iy) would seem to pose a problem for any attempt to sensibly define exp(iω).

So, here's my idea: Just chop off the infinite part of y. Define exp(iy)=exp(iy'), where y' is y with the infinite parts cut off.

And sure, it's a trivial thing to do, but it works pretty well. First off, we get the usual relation exp(z+w)=exp(z)exp(w); or at least we do assuming that sin and cos for limited surreals obey the usual relations, which they have to, right? (I don't have a text on surreals on hand at the moment.) Like I'm pretty sure that when you're proving exp(x+y)=exp(x)exp(y) for surreals, the hard part is if x and y may be infinite, not if they may be limited, because then you can use power series arguments adapted to the surreals (if I'm understanding/recalling correctly).
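As a toy check that this works out, here's a little float-based model -- my own sketch, with invented names, and with infinitesimal parts ignored entirely since floats can't carry them. A purely imaginary surcomplex number i·y is stored as its purely infinite part (a dict of Conway-normal-form terms with positive exponents) plus an ordinary float for the limited part, and exp just chops.

```python
# Toy model of "chop off the infinite part" -- floats only, infinitesimals
# dropped.  SurImag models i*y for a surreal y.
import cmath

class SurImag:
    def __init__(self, infinite_terms, limited):
        # purely infinite part: {exponent: coefficient} with exponent > 0,
        # e.g. {1: 3} for omega*3; limited part: an ordinary float
        self.infinite = dict(infinite_terms)
        self.limited = limited

    def __add__(self, other):
        # purely infinite parts add termwise; limited parts add as reals,
        # so chopping commutes with addition
        terms = dict(self.infinite)
        for e, c in other.infinite.items():
            terms[e] = terms.get(e, 0) + c
        return SurImag(terms, self.limited + other.limited)

def surexp(y):
    # exp(i*y) := exp(i*y'), where y' is y with the infinite part chopped off
    return cmath.exp(1j * y.limited)
```

Since chopping commutes with addition (the purely infinite terms add among themselves), exp(z+w) = exp(z)exp(w) comes down to the ordinary relation on the limited parts, which is what this model exhibits; and every purely imaginary, purely infinite number exponentiates to 1.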

Secondly, let's look at the kernel of this map. For exponentiation on C, the kernel consists of numbers of the form τin (or 2πin, in the more usual notation), where n is an integer. Well, the commonly-used analogue in the surreals of the integers is the omnific integers -- those surreals with integral real part and no infinitesimal part. (That is to say, the infinite part has no effect on whether you're an omnific integer or not; all purely infinite surreals are omnific integers.) But that fits great here -- the kernel of this surcomplex exponentiation is precisely the numbers of the form τin (or 2πin in the more usual notation) where n is an omnific integer. Really, we can just let all purely imaginary, purely infinite surcomplex numbers exponentiate to 1. Why not? Seems good to me!

And I mean seriously -- does anyone have a better idea? Well, maybe, but I haven't heard of it at least.

Does anyone know anything about this? Is this standard? Does this lead anywhere? It seems like it's too trivial to lead to anything seriously interesting beyond what having the surreal exponential (and sin and cos for limited surreals) already get you, but I mean just having a definition of the exponential for surcomplex numbers seems neat, so there you go.


2016 Feb 24

21:13:00 - Big multichoose and WPOs: When can we ignore the details?

EDIT, 2/25, 7 PM: I'm no longer confident that the claims in this entry are correct. I've found a serious mistake in my proof. It seems fixable, but I haven't succeeded so far.

EDIT, 2/27, 1:30 AM: THIS ENTRY AS ORIGINALLY POSTED IS FALSE. I've edited to be true, but, uh, much less interesting. False statements are in strikethrough, added statements are in bold.

So I've been thinking more about big multichoose and WPOs recently. I'll use Rev(A,B) to denote the set of order-reversing maps from A to B.

So recall, I figured out how to compute o(Rev(α,β)) (where α and β are ordinals) and showed that for any WPO X, o(Rev(α,X))≤o(Rev(α,o(X))). Or to state it another way -- which is the way the proof actually goes -- I figured out how to compute an upper bound on o(Rev(α,X)) based on α and o(X), and showed that when X is total, this upper bound is attained.

The other day I noticed that in fact, if o(X) is an (infinite) initial ordinal, then this upper bound is also attained. (Because in this case, o(X) embeds in X.) In other words, in this case, the value of o(Rev(α,X)) depends only on α and o(X), and not on any further details about X.

Could there be other values of o(X) for which this is true, I wondered? More specifically, could it be true whenever o(X) is a power of ω? (It's certainly not true when o(X) isn't a power of ω.)

It took me quite a while to find a counterexample, but eventually I did: Take α=ω, and take X to be Rado's order. In this case, o(X)=ω², and o(Rev(ω,ω²)) = ω^(ω²+2), while o(Rev(ω,X)) = ω^(ω·2+2).

It should not have taken me so long to find a counterexample. A simpler counterexample is α=ω, X=ω×ω. I computed o(Rev(ω,ω×ω)) incorrectly earlier, and got ω^(ω²+2); in fact it is ω^(ω·2+2), the same as for Rado's order. What's more, this is really easy to see, since Rev(X,Y×Z) = Rev(X,Y) × Rev(X,Z), so it's just the natural product of two copies of omega multichoose omega, which is ω^(ω+1). Somehow I entirely failed to notice this.
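That natural-product computation is easy to machine-check -- again my own sketch -- with natural operations on fully recursive Cantor normal forms: an ordinal is a descending tuple of exponents, each exponent itself an ordinal in the same form, so 0 = (), 1 = ((),), ω = (((),),). Conveniently, Python's built-in tuple comparison then agrees with ordinal comparison on these canonical forms.

```python
# Natural sum and product on recursive Cantor normal forms: an ordinal is
# a descending tuple of exponents, each exponent itself such a tuple.
ZERO = ()
ONE = (ZERO,)          # omega^0
OMEGA = (ONE,)         # omega^1

def nat_add(a, b):
    # natural sum: merge the exponent multisets, keeping descending order
    return tuple(sorted(a + b, reverse=True))

def nat_mul(a, b):
    # natural product: pairwise natural sums of exponents
    return tuple(sorted((nat_add(e, f) for e in a for f in b), reverse=True))

# omega multichoose omega is omega^(omega+1):
mc = (nat_add(OMEGA, ONE),)

# o(Rev(omega, omega x omega)) is its natural square, omega^(omega*2+2):
square = nat_mul(mc, mc)

# For comparison, omega^(omega^2+2), the value of o(Rev(omega, omega^2)):
TWO = nat_add(ONE, ONE)
OMEGA_SQ = (TWO,)
bound = (nat_add(OMEGA_SQ, TWO),)
```

The square comes out to ω^(ω·2+2), which is strictly below the ω^(ω²+2) that the total order ω² gives.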

So that's it -- problem solved, right? Well, not quite. Rado's order is a famous example of a WPO which is not a BPO. This leaves the question: Could the statement be true if we require that X be a BPO? Or what if instead of requiring that X be a BPO, we just insist on the weaker requirement that Rev(X,2) be a WPO, or something like that? So far, I have no counterexample to this. Scratch that -- it's definitely not true; see above.

Unfortunately, I know very little about the theory of BPOs, so I think I am unlikely to resolve such a question (though perhaps a bit more likely if it turns out that only a weaker hypothesis is needed). But I thought this was worth putting out there.

Note, by the way, that if o(X) is a power of ω and, in addition, we have that α is finite, then the statement is true without any further hypothesis on X; hence why my counterexample above had α=ω. Note that o(X) is also as small as a counterexample can be, as o(X)=ω². (This remark is still true.)

By the way, I also tried proving the statement, by means analogous to how I proved that the bound is attained if X is total, and found that much of a proof went surprisingly well. There were three problems that arose, essentially:
1. How do we go from the case where α is a power of ω, to the case where it isn't? I was able to solve this problem.
2. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a limit ordinal? I was able to solve this problem (assuming an inductive hypothesis, of course).
3. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a successor ordinal? This turned out to be a real problem, even assuming an inductive hypothesis! And indeed, the counterexample I mentioned above has o(X)=ω². But the fact that it's just this one part of the proof that turned out to be a problem does incline me to think that it might indeed be possible to prove it with additional hypotheses on X.

Note: If X has a bottom element, everything I'm saying still works if we restrict to functions which are eventually 0. Pretty sure. And of course if α finite we can restrict to strictly decreasing functions.

EDIT: It totally works! Just using the hypothesis that Rev(X,2) is a WPO, too! This is amazing! Not only that, but there's a generalization that covers both this case and the case where X is total! (Though not also the case where o(X) is initial.) Namely, you require that Rev(X,2) be a WPO, and that X = X_1 + ... + X_r, where each o(X_i) is a power of ω. In the case where X is total you have Cantor normal form, and in the amazing case, r=1. And once again it works if we remove the requirement that Rev(X,2) be a WPO but add the requirement that α be finite.

EDIT 2: Sorry, just noticed a mistake in my proof. I think this is fixable.


2016 Feb 10

02:28:00 - A neat little result about well partial orders

I don't think this is known, but again, I'm not a logician.

So: Let's say you have a well partial order X. De Jongh and Parikh proved it has some largest extending ordinal, o(X). Moreover, they proved that o(X) is the smallest ordinal greater than any o(L), where L ranges over proper downwardly-closed subsets of X.

Question: Say X is a WPO, and you're given an ordinal α<o(X). Can you find a downward-closed subset L of X with o(L)=α?

It certainly seems like you should be able to! But you'll notice that this doesn't follow immediately from the above characterization, which merely guarantees that there's some downward-closed L with o(L)≥α. How do you get it exactly on the nose?

This problem stumped me for several days, but it turns out there's an easy answer: Iterate! Since α < o(X), the characterization above gives you a proper downward-closed L within X with o(L) ≥ α; if o(L) > α, do the same within L, and so on. Each time you iterate this, o(L) goes down; and since you're working with ordinals, well, it has to terminate. When it terminates, there's your L. I'm glad such a simple problem turned out to have such a clean solution!

(I guess I've left out a few steps here, but nothing that isn't easy to fill in. Also, since you're passing to initial segments, you don't need to phrase it in terms of o(L) going down, it's simpler without that, but that's how I originally thought of it.)


2016 Feb 5

01:13:00 - What do we mean by "impossible"?

So people throw around the word "impossible" a lot, but oftentimes they actually mean different things by it. (I'm assuming here we're talking about real-world discussions rather than mathematical discussions, where things are clearer.) I thought I'd create a list of different things that people mean by "impossible", in the hopes that it might clarify things. Note -- because we're talking about real-world things, time is going to play a role. (Yes, there's not really any universal clock. Whatever.)

I'm listing these as "levels of impossibility", going roughly from "most impossible" to "least impossible", even though they're not necessarily actually linearly ordered. Also, some of the distinctions between some of these may be fuzzy at times.

Level 0. Instantaneously inconsistent. The given description contains or logically implies a contradiction. It rules out all possible states at some point in time, in any universe. People often claim this one when they really mean level 2 or level 3.

Level 1. Instantaneously impossible (contingently). In the actual universe we live in, the given description is instantaneously impossible; it rules out all possible states at some point in time. This one... I'm not sure this one really comes up in political discussions, I can only really think of physical examples.

Level 2. Non-equilibrium. The described system fails to propagate itself forward in time; or, if a system extended in time is described, it contains an inconsistency. This is one that people often actually mean when they say something is "impossible".

Level 3. Unstable equilibrium or possible non-equilibrium. The described system is not resilient to noise; it will not propagate itself forward in time unless exceptional conditions hold continually. This is another one that people often really mean when they say something is "impossible".

Level 4. Unachievable. The described system is unreachable from our present state -- it may make sense on its own, it may not be inconsistent with the way the world evolves in time, but it's inconsistent with the initial conditions that hold in the real world. Yet another one that people often mean when they say "impossible".

Level 5. Not "stably achievable". The only path from the current state to the described state is not resilient to noise and requires exceptional conditions to hold, possibly for an extended period of time. We might also want to require in addition that the failure modes of such a path leave us worse off than we were before or somehow prevent the same path from being used again (so that you can't just try over and over for free).

Are there any I'm missing? This seems like a pretty good breakdown to me.


2016 Jan 21

02:32:00 - 2016 Mystery Hunt roundup

Solutions are up for this year's Mystery Hunt -- or mostly up; metas, metametas, and a few scattered others appear to be lacking them. Well, whatever.

EDIT: Quantum Minesweeper solution is now up, so I've updated this entry accordingly.
EDIT 1/23: I've added some more stuff now that a few of the meta solutions are up.
BIG EDIT 1/26: It turns out there's a good reason why the hunt software was lacking its usual capabilities here! devjoe, who was on Team Luck, goes into it here.

So this year I was on Donner Party (again). Seems a few of my other friends joined Donner Party this year; Kevin defected from Codex, and Nic was on it too.

Results don't seem to be up yet so I don't know just how behind we were. I assumed we must be really behind all the time; I think we reached every round on a timer. (We didn't solve any of the round metas or metametas, and only one or two of the sub-round metas.) On the other hand, we were already on the Endymion round by the time the coin was found (along with everybody else I guess), so maybe we weren't as behind as I thought. Still, I feel like one of these years I need to get back on a bigger team; Mystery Hunt is considerably more fun when you're actually in contention for the win, you know? Also when you stay up all night and there are actually a good number of other people staying up all night with you. (Due to my staying up all night, I ended up going to bed on Sunday shortly before 7 PM... and about 5 minutes before the coin was found. It was probably found while I was, like, brushing my teeth or something.)

I tried to get people from Baker House (where I now live; I guess I haven't mentioned that) to join in but none did. Melissa mentioned that her brother does Mystery Hunt but didn't join in herself for whatever reason. I did however recruit Adam Funk from Truth House for the Hunt; after the first day, when other Bakerites failed to join in, I spent much of my solving time in his room instead of in public areas of Baker like I'd intended. Unfortunately these past two weeks have been quite busy for me and so I wasn't able to prepare Adam, or others, as much as I should have, but he still contributed quite a bit, I think.

Before I talk about the individual puzzles, I want to talk about just how the Hunt was run this year, and in particular the hunt website/software. Simply put, I didn't like it. The Hunt for the past few years had used the same software for its website and it worked really well. This one... didn't. Not in the sense that it was buggy, but in the sense that it was just lacking the (even basic) features I'd come to expect from the past few years. It wouldn't show you which puzzles you'd already solved; you had to keep track of that yourself. (Because I started late, I initially assumed the check marks in the dog show round indicated the puzzles we'd solved. Nope.) It wouldn't show you what wrong answers you'd already submitted; you had to keep track of that yourself. There was nothing explaining how round advancement worked or anything like that, let alone showing your current score (were there scores? I don't know) or time left until next unlock. (When I said above we advanced to every round on a timer, that was an inference.) No list of all puzzles unlocked so far. Etc. Fancy 3D animations are not much of a replacement for actual functionality.

Not to mention the most obvious annoyance of all -- every time you made it to a new round, you had to go to a different website! One with crazy long unpronounceable strings of letters that you couldn't just easily tell someone. Part of what I want in the Hunt is to be able to easily recruit people during Hunt-time and bring them in without having to get them officially added to the team. Being able to just tell them the website, here's our username, here's our password, works really well for that. This doesn't.

Which brings me to a different point -- I'm not sure I like how Donner Party organized things this year. In the past I'd solved for Manic Sages and Codex, who each used (IIRC) a combination of IRC and a MediaWiki-based wiki. This worked really well; I could let someone else use my account for the wiki, and anyone can use IRC. More recently on Strange New Universe and Donner Party we'd used (IIRC again) IRC (still pretty straightforward) and Google Docs. The latter is a bit more of a pain; I can't just *tell* someone something, I have to use the software to share it with them.

This year we used Google Docs, Trello (acting like the "list of puzzles" page on a wiki), and Slack (substituting for IRC). This... oy. (It was especially a pain combined with the Hunt website address issue I've mentioned above.) Trello was a good addition organization-wise, but I had some trouble adding Adam. (Actually we were using a team account, but I forgot that. Oops.) Still, it eventually worked. Slack, I couldn't figure out how to add him to the chat at all.

Like I said, what I want is to be able to quickly and unofficially add people mid-Hunt. This means either any accounts used should be team ones and I can just give them the username and password, or should be throwaway ones (like on a specialized wiki, rather than on Trello as a whole) that I can also give away. The Trello + Google Docs system was good, about on par with using a wiki, but slightly more troublesome to add people. But Slack seems like a definite downgrade from IRC; having a channel history is a plus but not being able to add Adam is a problem.

(Also, getting back to the Hunt itself, is it just me or were there a lot of unnecessary PDFs this year?)

Anyway, enough about that; time to talk about individual puzzles!

( Cut for length and spoilers )

Anyway, that's it for this year. Hopefully next year is a bit better-run. And maybe I really should see about joining a larger team...


2015 Dec 23

21:37:00 - Quick thought on copyright snarls and registration

So currently in the US copyrights are automatic and not required to be registered, due to the US having entered into the Berne Convention. This leads to terrible copyright snarls where one person wants to use something of someone else's but they can't even determine who holds the current copyright to talk to about it.

Obviously bringing back copyright registration, with proper records of copyright transfers and all, would solve this (if y'know you could somehow extricate the US from the Berne Convention). But there is a real advantage to not having to register copyrights; suppose you're just some person who's drawn something and posted it to the internet and now some big company is making money selling T-shirts of it. It'd be nice to have some legal recourse there without having to register every single thing you do.

So here's an idea for an intermediate policy, which I assume would still violate the Berne Convention, but just something worth considering if, y'know, IP reform ever actually happens. What if copyrights didn't have to be registered, but unregistered copyrights could not be transferred or exclusively licensed? Obviously they could be licensed, but you could not legally bind yourself not to license it in the future. Unregistered copyrights would also automatically expire with the author's death. This way, if the copyright is unregistered, tracking who you need to consult with is easy: You talk to the author. If the copyright is registered, it might have gone through all sorts of complex stuff, but that would all be in the registry and you could look it up.

(I am not a lawyer, there may easily be all sorts of nasty details here I'm missing. But thought I'd throw this out there.)

2015 Dec 20

03:24:00 - The Force Awakens was... OK?

So, I basically stopped really caring about Star Wars a while back, but Noelle organized an expedition to see The Force Awakens today, so I went and saw it. It was OK? There were substantial parts that didn't make a lot of sense, or at least weren't sufficiently clarified, so I'm basically going to complain about it here.

( Cut for spoilers )
...OK. I'm done complaining for now. There's more I could say but I think that would be picking nits. The original left a lot unexplained too, after all, and wasn't really the worse for it. These are just the things I think need explaining or fixing.


2015 Nov 10

22:27:00 - A big "oops" on ordinal arithmetic, and some other updates

Well, it's time for me to declare a big "oops".

My paper Transfinitely iterated natural multiplication of ordinal numbers was recently rejected from the Notre Dame Journal of Formal Logic, and for good reason.

The mathematical assertions aren't wrong. But contrary to what I said, it's actually quite easy to prove Jacobsthal's distributivity law by transfinite induction, and I entirely missed this. My thanks to the anonymous referee for pointing this out. The referee also claimed this is true of my theorem that a^⊗(b⊕c) = a^⊗b ⊗ a^⊗c, which I haven't yet checked, but is presumably also true.

EDIT Nov 12: OK, now I'm confused. I've tried to check it today, and... all I could get was an inequality. I'm not sure what to make of this.
EDIT Nov 13: OK, I figured out the mistake, disregard the above.

This cuts out a lot of what I said; I'll have to rewrite a lot! It also turns it into, well, more of an expository paper. I mean, I think it's some good exposition; I would still really like to see it published somewhere, because it summarizes and connects up some things that haven't been connected up elsewhere. But I'm going to need to find somewhere else to publish it, that's for sure.

Mind you, there still is the problem of finding an order-theoretic proof of these laws. Of course, that first requires an order-theoretic interpretation of a×b (Jacobsthal's multiplication) and a^⊗b (super-Jacobsthal exponentiation).

But the former of those now does in fact exist! Take a look at this paper: Some iterated natural sums, by Paolo Lipparini. It gives an order-theoretic interpretation of the infinitary natural sum, and therefore, implicitly, Jacobsthal multiplication.

This order-theoretic interpretation does not make it immediately obvious that a×(b⊕c)=(a×b)⊕(a×c), nor that a×(b×c)=(a×b)×c; but just having an interpretation at all is a huge start. I have written to Lipparini and he agrees the problem is interesting.

One thing worth noting about this order-theoretic characterization of a×b is that while it makes it obvious that ab≤a×b, it doesn't make it obvious that a×b≤a⊗b. Of course, what Lipparini did wasn't meant specifically as an interpretation of a×b at all, but the generic infinitary natural sum, where you can't assume that all the summands are the same. It's possible that for the specific case of a×b, one might be able to find a reformulation of his characterization that does make this obvious. Of course, proving that this reformulation is the same might be far from easy! Who knows.

So, that's the state of things. Guess I've got quite a bit of work to do... (Well, I guess that's true regardless. In fact I might want to put off fixing this for now to get more writing up of integer complexity done.)


2015 Oct 23

02:40:00 - Some possibly-meaningless analyses of Super Smash Bros. matchup charts

So here's a question for you: How does one generate a tier list from a matchup chart?

A matchup chart, obviously, contains way more information than a tier list. If you know how literally every character matches up against every other character, you should certainly be able to say which characters are, on the whole, better than others. But how? If you have two characters you want to compare, and one dominates the other, then this is easy; but what if neither dominates?

(I'm assuming here that a "tier list" means just a list of all characters sorted by rank; I'm ignoring any further information, such as numerical ratings or actual divisions into tiers.)

This is the method I came up with: First, we treat the matchup chart as a (zero-sum, symmetric) game. (Since matchup charts are usually presented as being out of 10, you may have to subtract 5 from each entry before you can feed it into your standard tools.) Then, compute the optimal strategy for this game. (I'll assume this is unique, as it will be in all cases analyzed here; I'm not going to worry about what happens if it isn't.) Finally, rank each character by how well they do against this optimal strategy. (There might be ties; I'm just not going to worry about that for now.)
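The method above can be sketched in code. This is just a sketch of the standard linear-programming computation of an optimal mixed strategy; the 3×3 example chart is made up for illustration (a rock-paper-scissors cycle), and I'm assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_strategy(chart):
    """Optimal mixed strategy for a matchup chart, treated as a
    zero-sum symmetric game.  chart[i][j] = games out of 10 that
    character i wins against character j."""
    M = np.asarray(chart, dtype=float) - 5.0      # center so the game is zero-sum
    n = M.shape[0]
    # Variables: x_0..x_{n-1} (the mix) and v (the game value); maximize v.
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    # For every pure counter-strategy j we need sum_i x_i * M[i, j] >= v,
    # rewritten as  -x @ M[:, j] + v <= 0.
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
    x = res.x[:n]
    return x, M @ x   # the mix, and each character's payoff against it

# Made-up 3-character chart with a rock-paper-scissors cycle:
chart = [[5, 6, 4],
         [4, 5, 6],
         [6, 4, 5]]
mix, payoffs = optimal_strategy(chart)
```

For this cycle the unique optimum is the uniform mix, and ranking by `payoffs` gives a three-way tie, so the tiebreakers discussed below would come into play.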

Originally I had left out the final step, instead thinking that how often a character occurs in the optimal strategy should be used as the method of ranking. But later I realized this makes no sense -- suppose you have two characters, A and B, which are very similar, but A slightly dominates B. Then B should rank just a little bit below A, but this method would instead put B in last place. I don't think that's the intent of a tier list. We can, I suppose, use this as a tiebreaker for the actual method I suggest though.

Now obviously there are some problems with this. In real life, not everyone is actually capable of playing every character to its greatest extent. Still, I think it is an interesting model. What does it result in when applied to Super Smash Bros.? And do the results match the more usual tier lists?

Now, it's quite a bit more work to produce a matchup chart than it is to produce a tier list, so matchup charts are not always so up to date. As such, we'll compare each matchup chart not to the current tier list but to the one closest in time to when the matchup chart was produced.

SSBWiki lists four matchup charts: One for the original Super Smash Bros., one for Melee, one for Brawl, and one for the Japanese version of the original Super Smash Bros (various things were altered for the more well-known international version). (For obvious reasons, there is as yet no Super Smash Bros. 4 matchup chart; indeed, there isn't even yet a tier list. Since the game is still actively being patched, producing one would be premature. Also, without a Super Smash Bros. 4 Back Room, it's not clear who would produce an "authoritative" one.)

The SSB chart will be compared against the 3rd SSB tier list; the SSBM chart against the 10th SSBM tier list; and the SSBB chart against the 8th (current) SSBB tier list. (Again, this is based on the dates the matchup charts are from.) SSBWiki doesn't list a separate Japanese SSB tier list, but there seems to be one implicit in the Japanese SSB matchup charts, i.e., the ordering of the characters used in the chart, so that's what we'll compare it to.

Actually, before I continue, let me observe -- it's not obvious that my method should make sense for the SSBM and SSBB matchup charts listed here, since rather than stating how often a given character wins a given matchup, they just give abstract descriptions such as "slight advantage", "large advantage", "close to unloseable", etc. Nonetheless, as we'll see, this will prove not to be an issue.

The thing about the SSB, SSBM, and SSBB matchup charts is that all three are dominance-solvable; the optimal strategy, it turns out, isn't any sort of mixed strategy at all, but simply to always play the best character (i.e., Pikachu for SSB, Fox for SSBM, and Meta Knight for SSBB). This is why I said it doesn't matter that the SSBM and SSBB charts don't actually give matchup splits, since only qualitative comparisons are needed to establish this result.
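The dominance computation used here is just iterated elimination; a minimal sketch (the 4-character chart below is made up for illustration, not the real data):

```python
def iterated_dominance(chart):
    """Survivors of iterated elimination of weakly dominated characters.
    chart[i][k] = games out of 10 that character i wins against k."""
    alive = set(range(len(chart)))
    changed = True
    while changed:
        changed = False
        for i in sorted(alive):
            # i is eliminated if some surviving j does at least as well
            # against every survivor and strictly better against at least one.
            dominator = next((j for j in alive if j != i
                              and all(chart[j][k] >= chart[i][k] for k in alive)
                              and any(chart[j][k] > chart[i][k] for k in alive)),
                             None)
            if dominator is not None:
                alive.discard(i)
                changed = True
    return sorted(alive)

# Made-up chart: characters 0, 1, 2 form a cycle; character 3 is dominated by 0.
chart = [[5, 6, 4, 7],
         [4, 5, 6, 7],
         [6, 4, 5, 7],
         [3, 3, 3, 5]]
```

Here `iterated_dominance(chart)` removes character 3 and then stops, leaving the cycle; when instead a single character survives as the best response to everything, the game is dominance-solvable in the sense described above.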

In the SSB case, Pikachu dominates all characters but Kirby and Fox; but once we restrict to those three, Pikachu dominates.

In the SSBM case, the characters not dominated by Fox are Falco, Sheik, Marth, Jigglypuff, Peach, Captain Falcon, Ice Climbers, Samus, Ganondorf, Link, and Donkey Kong. Once we restrict to those, Fox dominates everyone but Sheik, Marth, Jigglypuff, Peach, Captain Falcon, Ganondorf, Link, and Donkey Kong. And once we restrict to those, Fox dominates everyone but Sheik and Jigglypuff. Once we restrict to those three, Fox dominates.

In the SSBB case, the characters not dominated by Meta Knight are Ice Climbers, Olimar, Diddy Kong, Marth, Falco, Pikachu, Wario, Lucario, King Dedede, Donkey Kong, and Sheik. Once we restrict to those, Meta Knight dominates everyone but Ice Climbers, Olimar, Diddy Kong, Marth, Pikachu, Lucario, and King Dedede. And once we restrict to those, Meta Knight dominates everyone but Ice Climbers, Olimar, Diddy Kong, Marth, and Lucario. Once we restrict to those six, Meta Knight dominates.

In all three cases, then, my method says that for these games, the tier list should be ordered purely based on how each character does against the unique best character (we'll ignore the question of how ties are broken). As you can easily check, this is not how they actually are ordered.

This leaves the case of DSB (Japanese SSB). Here the optimal strategy is in fact mixed! Let's start with a dominance analysis. Pikachu dominates everyone but Kirby, Captain Falcon, and Ness; and once we restrict to those 4, Ness is dominated by Kirby. But once we restrict attention to the remaining three, we find that none of them dominates. In fact, the unique optimal strategy is a mix of 1/4 Pikachu, 1/4 Kirby, and 1/2 Captain Falcon.

This is surprising because it suggests that in some sense Captain Falcon is the best character, rather than Pikachu! I'm not sure how believable that really is, but that's what it says.

This case is more interesting than the others, so let's do a full listing. I'm going to rank characters by how they do against this optimal strategy, as discussed above. I'll then break ties by how often they appear in this optimal strategy. Finally I'll break the remaining ties by their rank in the (inferred) Japanese tier list, i.e., I'll try to keep the resulting list as close to the usual tier list as possible within the bounds stated above.

If we do this, we get the following results:

                 Games won  Games used  Usual rank  Resulting rank  Difference
Captain Falcon       5          5           3              1            +2
Pikachu              5          2.5         1              2            -1
Kirby                5          2.5         2              3            -1
Ness                 4.75       0           5              4            +1
Fox                  4.25       0           4              5            -1
Jigglypuff           4.25       0           8              6            +2
Mario                4          0           6              7            -1
Luigi                3.5        0          11              8            +3
Samus                3.25       0           7              9            -2
Link                 3          0           9             10            -1
Yoshi                3          0          10             11            -1
Donkey Kong          1.75       0          12             12             0

Here "games won" is how many games out of 10 they win against the optimal strategy above, "games used" is how many games out of 10 they're used in the optimal strategy above, "usual rank" is their rank in the usual Japanese tier list (that I've inferred), "resulting rank" is their resulting rank in this new ordering, and "difference" is, well, the difference.

So, that's a little different, with Luigi jumping up 3 whole spots. Most of the rest isn't too different, but remember, I deliberately broke ties in a way designed to keep it similar.

Is any of this really meaningful? I have no idea. Captain Falcon getting the edge over Pikachu certainly seems wrong... but then, that was on tiebreaker, by a different method. Still, I thought it was interesting to see how it would turn out. If anyone wants to try it on other games and report the results, I'd be interested to hear!


2015 Oct 9

17:08:00 - Some more questions about big multichoose

EDIT Oct 12: Corrected a mistake in question 0, and also expanded it slightly. Also pretty sure I've proved question 1 and therefore 2 and 3; see note below.

So, OK, I know how to compute big multichoose now. But I was going through some old notes of mine and I found a few questions about it I still hadn't answered. Two of them, I suppose, could potentially be solved based on my computation rule, but that seems like it would be quite ugly. I still don't have a resolution to these questions; I'm going to state them here.

Notation: I'll write ((α multichoose β)) for my "big multichoose" operation. Given posets A and B, I'll write Rev(A,B) for the set of order-reversing maps from A to B, considered as a poset. Now, all orders considered here will be WPOs; in general, given WPOs A and B, the order Rev(A,B) is not necessarily a WPO, but I'll only consider cases where it is. In particular, for any ordinal α and a WPO B, Rev(α,B) is a WPO. Remember, ((α multichoose β)) is by definition o(Rev(β,α)).

So suppose A, B, and Rev(A,B) are WPOs; then it's easy to see that o(Rev(A,B))≥o(Rev(o(A),B)). Adding more relations to A just removes points from Rev(A,B), so o(Rev(A,B)) goes down.

However, it's less clear what happens if we instead take o(Rev(A,o(B))) instead of o(Rev(o(A),B)). Adding more relations to B adds points to Rev(A,B), which would appear to make o(Rev(A,B)) go up; however, it also adds relations to Rev(A,B), which would appear to make o(Rev(A,B)) go down.

So naively, it could go either up or down. However, I have never once seen it go down! I've only seen it go up or stay the same.

So, question 0: If A, B, Rev(A,B), and Rev(A,o(B)) are all WPOs, does one have o(Rev(A,B))≤o(Rev(A,o(B)))? (And if A, B, and Rev(A,o(B)) are all WPOs, does this imply that Rev(A,B) is too?)
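For finite posets the maximal order type o(P) is just the cardinality of P, so question 0 can at least be spot-checked by brute force on small examples. A minimal sketch (the encoding is my own, with posets given by an element list and a comparison function):

```python
from itertools import product

def rev_maps(A, leq_A, B, leq_B):
    """All order-reversing maps from finite poset (A, leq_A) to (B, leq_B):
    x <= y in A must force f(y) <= f(x) in B."""
    found = []
    for values in product(B, repeat=len(A)):
        f = dict(zip(A, values))
        if all(leq_B(f[y], f[x]) for x in A for y in A if leq_A(x, y)):
            found.append(f)
    return found

# Two orders on the elements [0, 1]:
chain = lambda x, y: x <= y          # the 2-element chain
antichain = lambda x, y: x == y      # the 2-element antichain
A = [0, 1]

# The chain on [0, 1] is a linear extension of the antichain with the same
# cardinality, so for finite B it plays the role of o(B).
n_to_antichain = len(rev_maps(A, chain, A, antichain))   # 2 maps
n_to_chain = len(rev_maps(A, chain, A, chain))           # 3 maps
```

Here the count goes up when the codomain is linearized (2 vs 3), matching the pattern in question 0.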

And in case that's false, the more specific question 1: If α is an ordinal and B is a WPO, does one have o(Rev(α,B))≤o(Rev(α,o(B)))?

Note that if this is true, it cannot be simply because o(B) is an extension of B! One can find other extensions that make this false. For instance, suppose α=2, and B is the disjoint union of ω and 1; one may extend B to an order of type ω, but this will cause o(Rev(α,B)) to go down.

So far, I've been able to prove this in the case when α is finite, by making use of my computation rule. (I think attempting to prove this without making use of my computation rule will be impractical.)

The above two questions don't purely deal with big multichoose and thus can't be settled by my computation rules. The next two questions potentially could.

With Rev, one has the obvious isomorphisms Rev(A⨿B,C)=Rev(A,C)×Rev(B,C), Rev(A,B×C)=Rev(A,B)×Rev(A,C), and Rev(A,Rev(B,C))=Rev(A×B,C).

If one combines the first of these isomorphisms with the inequality o(Rev(A,B))≥o(Rev(o(A),B)) mentioned above, one obtains the inequality

((α multichoose β⊕γ)) ≤ ((α multichoose β)) ⊗ ((α multichoose γ)).
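Spelled out (using, if I recall correctly, the de Jongh–Parikh facts that β⊕γ = o(β⨿γ) and o(X×Y) = o(X)⊗o(Y)), the chain is:

```latex
\begin{aligned}
\left(\!\!\binom{\alpha}{\beta\oplus\gamma}\!\!\right)
  &= o(\mathrm{Rev}(o(\beta \amalg \gamma),\,\alpha))
   \le o(\mathrm{Rev}(\beta \amalg \gamma,\,\alpha)) \\
  &= o(\mathrm{Rev}(\beta,\alpha) \times \mathrm{Rev}(\gamma,\alpha))
   = o(\mathrm{Rev}(\beta,\alpha)) \otimes o(\mathrm{Rev}(\gamma,\alpha))
   = \left(\!\!\binom{\alpha}{\beta}\!\!\right) \otimes
     \left(\!\!\binom{\alpha}{\gamma}\!\!\right),
\end{aligned}
```

where the inequality in the first line is the o(Rev(A,B)) ≥ o(Rev(o(A),B)) fact above.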

However, attempting to turn the other two isomorphisms into inequalities would require the use of the inequality in question 1. So this gives rise to questions 2 and 3 -- keep in mind that a positive answer to question 1 (or, y'know, question 0) would settle both of these in the affirmative:

Question 2: Is ((α⊗β multichoose γ)) ≥ ((α multichoose γ)) ⊗ ((β multichoose γ))?

Question 3: Is ((α multichoose β⊗γ)) ≤ (( ((α multichoose β)) multichoose γ))?

Because I can currently prove question 1 in the case where α is finite, I can prove both of these in the case where γ is finite.

EDIT Oct 12: I'm pretty sure I have now proven question 1 in general, thereby proving questions 2 and 3. Unfortunately the proof is not simple. As for question 0... well, it's pretty obviously true when A and B are both finite. Other than that I haven't gotten very far, but I can't find a counterexample. If it is indeed false, perhaps it's worth asking with the assumptions that A and B are both BPOs instead of merely WPOs. (We can call that question 1/2.)

Both of these could potentially be answered by purely computational means, though that would be somewhat unsatisfying. Right now, though, I haven't even done that. I can at least verify that they're true when the inputs are powers of ω; that case, at least, is straightforward computationally...


2015 Sep 5

02:22:00 - The AADL Smash tourney

So today there was a Smash 4 tournament at North Quad. I decided to enter, figuring I was unlikely to win but if nothing else I could probably beat a few people who didn't know the game very well. :P It was pretty small; only 16 people showed up.

It was... not very well-organized. Well -- it could have been a lot worse, considering it didn't seem to have been organized by actual Smash players. But here's how it worked:

Format -- first, everyone played 3 single-game matches against other people. (I'm not sure how those were determined.) Then top 8 went to single elimination; since there was more time left than expected, these matches were best of 3.

Rules -- OK, so, before anything else, at the beginning of the tournament, the organizer held a vote over whether the tournament would be free-for-all or one-on-one (!). There were a few votes for FFA, but not many.

Rules were 2 stock, 5 minutes... and all items on low. After some people complained about this, it was decided that items could be turned off if both players agreed. I didn't see any matches actually being played with items; I'm pretty sure there were at most one or two.

(Tangential note: A lot of people assume adding items makes things easier for new players, due to the extra variance. It might make things easier for OK-but-not-great players, like me... but in my experience, actually new players are just unfamiliar with and bewildered by the items, and it makes things harder for them.)

Stage selection was always random -- from among all the stages. Yes, seriously. But it was decided that if both players could agree on a stage, they could play there instead. (All my matches were played on Battlefield.)

Except actually not all the stages were available! I was worried that, say, not all the DLC would be there; but in fact not even all the unlockable stages were unlocked. Hence why Battlefield rather than Smashville. (All the DLC characters were there, but I don't think all the DLC stages were, IIRC.)

Let me move away from rules for a moment. So, I went 2-1 during the initial part. I played Ness, of course. First match was a close game against a Pit player, whose name I forget; Ray, I think? He had quite an advantage on me at first but eventually I fought my way back. Second match was against Adrian, who played Dark Pit and would eventually win the tournament. I got destroyed, not even close. Third match was against BK, who played Lucario; that was an easy victory for me. So, wide range of skill levels on display!

After that was cut to Top 8 and single elimination. (Fortunately, no tiebreakers were needed.) I'm not sure how seeding was done among the many tied people. I ended up facing Adrian first and so was eliminated quickly.

Let me talk more about the rules here -- there were no particular rules about character selection, or the order in which characters and stages were picked. First game against Adrian was basically a reprise of last time; I picked Ness and he picked Dark Pit, and he beat me even more resoundingly than last time. For the second game, I figured I needed something different if I wanted to have a chance, so I picked Little Mac. And then he changed to Sheik! Wait, what? I lost, I should be counterpicking, not him! I mean, since there were no defined rules, and I wasn't likely to win anyway, I didn't bother to complain. I lost the second game, again by a lot, but not as badly, at least. (I don't think the organizer really thought about just how to do best-of-3 when it was a last-minute addition.)

Also when we went to top 8 one player dropped out for being actually affiliated with the tournament. Blech. If someone is going to have to drop out, don't let them play! This is just dumb.

Anyway, despite the lack of organization, it was still a fun time. That's really all I guess.


2015 Aug 29

01:41:00 - Ordinals ordinals ordinals!

So I have a number of neat things to report about ordinals! In order from least to most exciting:

#1: I can now compute big multichoose, in general

At least, I'm pretty sure I can prove it. More on this in a moment.

(Unfortunately, this means I have no "easy math" I can work on right now -- I need to get back to the hard math, y'know.)

Hm -- I just looked and saw I never defined "big multichoose" here. In short: If α and β are ordinals, we can consider the poset of all weakly decreasing functions from β to α; this is a WPO, so we can take the largest ordinal extending it. This is what I mean by "α multichoose β", which I will denote here as ((α|β)) because what else am I going to do.

Point is, I've been puzzling over it for a while, working out special cases, because for obvious reasons this is quite a bit harder to compute than left and right multichoose from earlier.

#2: There really, really, is no natural exponentiation

In appendix B of my ordinal arithmetic paper, I showed that there's no operation satisfying several desiderata that we'd expect of a "natural exponentiation". However, one of the desiderata left, uh, rather more to be desired than the others. But I've now managed to eliminate the dependence on that hypothesis! Best of all, the new proof is only a slight modification of the existing one. The super short version is, instead of defining f(n) = deg e(n,ω), you define f(n) to be the "leading coefficient" of deg e(n,ω).

I'll update the paper shortly -- I'll also need to notify the journal I submitted it to about the update...

#3: Big multichoose is related to the surreal exponential!

Don't ask me why -- I don't actually understand the surreal exponential at all. But it totally is! The thing is, this doesn't actually depend on any of the new stuff in #1; what's related to the surreal exponential is the rule for computing ((ω|α)), or rather ((ω|α))0. (I'm using the subscript 0 to mean that I'm restricting to functions which are eventually 0.) It turns out for α a limit ordinal,

((ω|α))0 = exp(α)

where exp is the surreal exponential.

Again, I don't have any idea why this is so, I just noticed that the rules for computing them are the same. I didn't notice it before because I hadn't written the rule for computing the former in a way that made it obvious.

So let me go into more detail about #1 now -- the final piece of the puzzle was figuring out how to compute ((ω^α|β))0 for α,β>0. What I eventually figured out was that this is equal to ω^(ω^G(α,β)) for a certain function G. Specifically, G(α,β) is equal to α⊕β if either α or β is of the form εγ+k for some ordinal γ and some finite k, or if both are infinite. Otherwise -- if one is finite and the other is not of the form εγ+k -- then instead you get (α⊕β)-1, which necessarily makes sense since one of them is finite and nonzero.

Now, Harry Gonshor, in his book "An Introduction to the Theory of Surreal Numbers" -- where he defined the surreal exponential in the first place -- showed that there's a function g such that for a surreal number α>0, exp(ω^α)=ω^(ω^g(α)), and showed how to compute g. Of course, we only care about the case where α is an ordinal, not just any surreal. And in that case, g(α) is equal to α+1 if α is of the form εγ+k, and is equal to α otherwise.

See the similarity? G(1,β) is exactly equal to g(β). And from this follows the equation above.

Now as I said this only requires knowing how to compute G(1,β), i.e., how to compute ((ω|β))0, which I already knew how to do. But I didn't notice the similarity until I wrote it in this form depending on G. And I wouldn't have written it that way if I weren't trying to determine ((α|β)) more generally. So there is that bit of relation there. I determined the form of G, and then noticed that G(1,β) reminded me of the rule for g(β); and I ran to the library to look it up, and indeed it was the same.

So yay! That's finally solved. Now back to harder math, I guess...


2015 Jul 2

14:54:00 - Video game review -- Pinball Hall of Fame: The Williams Collection (PS3)

Zero-word summary: See my userpic.

(This entry has been revised on July 8th and again on July 15th; revisions are marked.)

So let me begin with some background. I am, to be honest, really not much of a pinball player. I've always wondered how it is that someone is supposed to get into pinball in the first place; modern pinball games are quite complicated, but typically provide little in the way of explicit instruction (the little inserts in the lower left are really not enough) or implicit guidance. Meanwhile they have their whole own set of terminology and conventions that I don't know how one is supposed to learn in the first place -- one reason those instruction inserts are so unhelpful is because in addition to their brevity, they're also written in pinballese. For many years, "LOCK IS LIT" was my own personal pinball equivalent of "PC LOAD LETTER". (I know what it means now, thank you, you don't have to explain.)

As a kid I would play my grandparents' Star Explorer machine, a pinball machine made for the home, whenever we visited them. These days I don't think I'd find it very fun, but it held my attention at the time. It's definitely not great pinball, though. You can't even control the flippers separately. I also played Sonic Spinball on the Genesis, which really shouldn't count. (Mostly a fun game, but eventually I got good enough I could beat it, and the final boss fight takes so long and is so tedious that it counterbalances the rest of the game.) That game at least allows you to control the flippers separately, but you can win without ever doing so.

Eventually I started playing the famous "3D Pinball for Windows -- Space Cadet", which taught me some actual basic pinball skills. A complicated game, to be sure, but once I found the manual hidden away on the hard drive, it became comprehensible. (Also, even before then, the in-game messages do provide a fair bit of guidance, and there's also the flashing arrow pointing you towards whatever it is you need to do to begin or complete a mission.) Still, for all I've played, I don't think I can claim to be good at pinball. Better than a total newbie, I guess. (I'm also too weak to nudge the machine effectively, unless I'm somehow doing it wrong.) And more relevantly, I continue to find most pinball machines baffling.

But there is one pinball machine I love, and that is Medieval Madness. I encountered it by chance as a kid at an arcade I never had the opportunity to return to. Unlike with other pinball games, I immediately understood what I was supposed to do: Shoot the castle! OK, there's actually considerably more to the game than that, but the design of the table -- both the physical layout, and how the game responds when you do shoot the castle -- makes it clear that this is something you want to do, and even if you have no idea what else is going on, well, at least the game is guiding you to do that one thing. (And having now learned how the game actually works, it isn't that complicated on the whole. I think; maybe I'm just saying that because I understand it now.)

And on this, the pinball community seems to agree with me; Medieval Madness is a top-rated pinball machine. It's #4 on IPDB and #1 on Pinside. I'm guessing that they're judging it on different criteria than I am, but that reinforces the conclusion -- apparently it's a great pinball table for serious players and newbies alike.

(The writing is another matter. It's mostly funny, but some of it is seriously cringeworthy. Sir Psycho is already pretty bad, but the "sexy princess"... ugh.)

There's a Medieval Madness machine at Pinball Pete's here in Ann Arbor, and I play it a lot. Unfortunately, it isn't always in the greatest condition; a fact of life with pinball machines, I know. Also, it costs 50¢ per play, and that adds up over a while. But recently I learned that there exist pinball video games that license actual pinball tables -- as well as the free Visual Pinball and Visual PinMAME -- and I decided to give that a shot.

First, I bought Pinball Hall of Fame: The Williams Collection ($10 used on Amazon) for the Wii. Foolishly, I forgot when I was buying it that only the PS3 and Xbox 360 versions include Medieval Madness. Oops. So I tried running Visual Pinball and Visual PinMAME under WINE, me having long ago accidentally rendered my copy of Windows unbootable -- I've heard tell of a way to restore it, but I am not trying that out just for some pinball. I don't know whether it was due to WINE or what, but I couldn't get the part of Visual Pinball that allows you to actually play the games to work. So finally I decided to shell out the $30 for a used copy of the PS3 version of Pinball Hall of Fame: The Williams Collection. (There is also apparently a PS3 version of The Pinball Arcade, yet somehow I can't find anything but the PS4 version. Perhaps I'm missing something.)

And so we get to my list of complaints. Now, I spent much of last night glued to the game -- though the fact that my cell phone was charging and there are no clocks in the game room here contributed to that -- so evidently it's not all bad. And it's not! But I got it to play Medieval Madness, and that has some serious problems. (OK, those aren't the only problems.)

Let's start with the biggest, most obvious one: The bumpers don't work properly. Oh, the physics on them is fine -- the ball hits them and is sharply repelled. But the game part of the game frequently doesn't register the hit. Maybe you're still scoring points for them, I don't know. But it doesn't consistently display the "N jet hits for super jets" screen, and when it finally does, you can see that in fact, all those earlier hits didn't register. You'll hit the bumpers a large number of times, only to finally be told, "49 jet hits to activate super jets". Did they test this at all?

EDIT July 8th: The following two paragraphs seem to be largely wrong; see the note after them.

Meanwhile, the underlying engine seems to have not been quite prepared for how, well, 3-dimensional a game Medieval Madness is. Now it's not like the game engine can't handle the fact that pinballs move in three dimensions; otherwise, there'd be no way to jump the moat to lock a ball. (Or to pick a different table, otherwise there'd be no way to perform a skill shot in Tales of the Arabian Nights.) And I have even seen it happen that when you hit the castle, the ball jumps over the side of the drawbridge and lands in the moat, as it often does on a real machine. So that's good.

But the castle gate, and the trolls, are another story. When you destroy the castle gate, it doesn't seem to retract into the castle; instead it just disappears. Now that may not sound different, but I've seen it happen in this game that on destroying the castle gate, the ball actually continues into the castle a little rather than bouncing back down. Maybe that's somehow possible on a real machine? I doubt it. And then there's the trolls. Rather than popping out from under the playfield, the trolls just... appear. What would happen if a troll were to appear underneath a ball? On a real machine, the ball is launched into the air. I have no idea what would happen here. (I didn't see this happen while I was playing; it's not something that happens all that often.)

EDIT July 8th: OK, actually I don't think the problem is in the underlying engine at all -- it seems to have no problem handling vertically moving components in, say, Funhouse. I think the problem here is just a half-assed implementation of Medieval Madness, not any problem with the underlying engine. (The drawbridge doesn't seem to move quite right either, by the way.)

EDIT July 15th: Medieval Madness doesn't seem to be the only game with a half-assed implementation: Tales of the Arabian Nights seems to have significant problems as well. I've never played the real table, but some of the things in the left loop seem to work inconsistently, similar to the bumpers in Medieval Madness.

Let's now turn to problems that affect other tables as well. The physics is dodgy; the ball seems to sometimes do impossible things, and on one table (I forget which -- Pin-Bot, I think?) I'm pretty sure I was able to (more than once) get the ball to clip through the right outlane wall, returning it to the plunger. Meanwhile, the game's random number generator, or its use of it, seems to be broken. Firstly in the physics -- getting a skill shot in Medieval Madness is easy, yes, but in this game it requires literally zero skill; the ball will go down the left lane and give you the skill shot every time. Other things seem to be insufficiently randomized as well -- for instance, in every game, in the catapult minigame, it was the cat that was worth 500,000. (I also got the "sexy princess" every game (ugh), but based on my experience with the real machine, getting the same princess over and over is not too unexpected.) Now it's possible I'm wrong about this, and I don't know how I'd judge this for tables I'm less familiar with, but I'm pretty sure there's something wrong with the randomness in this game.

EDIT July 15th: I have definitely seen the ball clip through walls now, no question.

As for the tables other than Medieval Madness, I can't say how faithfully they're implemented; Medieval Madness is the only one I'm familiar with. I didn't notice anything like the gate or the trolls, but when it comes to something like the bumpers not working properly, well, I could have easily missed that were I unfamiliar with the table. (I suppose I could compare against the in-game instructions -- more on that below -- but I didn't actually read the instructions for most tables. Or I can look up instructions online, if I really care.)

Having (mostly) covered the actual play of the game, let's turn to the interface. Rather than a simple menu where you can select which pinball game to play, the developers made a 3D virtual arcade -- which takes a fair bit of time to load, and needs to be accessed every time you want to select a different machine (which then itself needs time to load). Fortunately, we are spared having to control some player avatar walking around this arcade; within it, you can simply use the left and right buttons to select a machine, making it just a harder-to-use (since you can't see where you're going) menu... except that there's a separate "upstairs" and "back room" you have to separately navigate to. I bet somebody got a really nice bonus for that.

Let's now talk about where the gameplay and interface intersect -- the camera! Rather than a fixed camera that gives you a good view of the overall table, you can choose between several different "Smart Cams". (I found Smart Cam 2 to be the best.) However, there's no menu option for this; you have to switch between them during the game with the circle button. And no, you can't do it before the game starts, because, you see, there's a separate "Plunger Cam" (when the ball is on the plunger), "Smart Cam" (most of the time), and "Multiball Cam" (for multiball), and you can only switch the one that's currently active. (I generally found Plunger Cam 3 to be the most useful; of course, that doesn't matter for Medieval Madness, since it has an autoplunger. I never really tried any of the other Multiball Cams; Multiball Cam 1 is fine.) Your camera settings are remembered between plays of the same table... until you leave that table. Then the next time you play, you're going to be going "Oh crap I'm in Smart Cam 1, let me hit the circle button while nothing important is happening" all over again.

EDIT July 8th: There actually is a way to get a full-table camera (of which there are several you can switch between); press the square button. But like every other camera switch, this must be done mid-game. That said, on the whole I found the Smart Cam better. (Although I'd like to revise my above statement -- I often find Smart Cam 3 better than Smart Cam 2.) Oh, here's another annoyance for you: You can only cycle through the cameras in one direction! And if you activate the left plunger in Funhouse, there's no good camera for that.

EDIT July 15th: Having played some more, I think Smart Cam 3 is generally the best normal camera, though there are some exceptions. Also, Multiball Cam 2 actually is better than Multiball Cam 1.

(By the way, in every menu in the game, triangle is the cancel button, in defiance of all convention.)

One good thing about the game is that it does provide detailed instructions for all the tables; I don't know if they're complete instructions, but they're pretty good. And it's not just a block of text; it actually shows you on the table what it's talking about. (You still may encounter problems with pinball jargon though -- I had to look up what a "Special" was. It means a free game.) Unfortunately, the interface gets in the way again; you have to page through these explanations a little bit at a time to find the part you care about, the pages can't be flipped very rapidly, and each page contains only a few lines of text. Even on most of the simpler tables, this is about 50 pages long; for Medieval Madness, it's about 80.

(Another side note -- when playing, if you pause, there's a menu option to reset the ball position. This is amusingly but unhelpfully labelled "Call Attendant".)

What I would like to exist, but does not, is some sort of table inspection mode, where you can just examine a table up-close without playing it. On some of the more complicated tables, I couldn't get a good sense of the physical layout of the tables without actually playing, or sometimes even after actually playing. Where do the ramps and habitrails lead, for instance? And what do all the lights and markings say? You'll never be able to read those during play (unless you use a terribly close-up camera). The instructions will tell you the significance of the lights, of course, but it's nice to be able to read them, as well as just more generally getting to examine the table art, some of which can be in places you'll hardly even be able to see during play.

They've also added five "table goals" (essentially achievements) to each table. Normally I'm opposed to achievement systems, but I'm pretty OK with it here. However, once again implementation problems get in the way. Completing all five table goals unlocks five further "wizard goals"... but you can't get the wizard goals to register as completed unless you've already unlocked them. If you completed one earlier, or even in the same game where you finally completed the fifth table goal? Too bad! You have to do it over. (This actually happened to me playing Taxi, with the "get 4,600,000 points" wizard goal.)

EDIT July 15th: This has now happened to me a lot. It was especially a problem in Sorcerer (which has one normal goal -- the extra ball -- that's harder than many of the wizard goals) and in Tales of the Arabian Nights (which has a similar situation with the Harem Multiball (ugh), though in this case the difficulty seems to be due primarily to the table's broken implementation mentioned above). At least now I've unlocked the wizard goals on all tables (Jive Time included) so I never have to worry about this again!

This brings me to my next complaint: The unlocking system. Why does this even exist? I paid for the game, I'd like to just be able to play the pinball tables in it. (Fortunately, Medieval Madness starts fully unlocked.) Instead, of the 13 tables, only 4 start fully unlocked for free play mode. Most of the rest are available, but require spending credits. You start with 20 of these and it costs 1 to play a game; things that give you free games instead give you 1 credit. However, completing table goals gives you a bunch, and completing wizard goals gives you even more. You can spend 100 to unlock a table for free play; you can also unlock a table for free play without spending credits by completing (for the first time) all five goals on any table. (I assume you can also do this with wizard goals, but unsurprisingly I haven't been able to do that.)

Still, it's not terrible; at least I was able to unlock all the tables after a single night of playing, right? (Granted, one that went much longer than I originally intended.) Well, not quite. One table, Jive Time, starts entirely locked. How do you unlock it? I don't know. You can't spend 100 credits on it, because you can't even access its menu. (Yes, you have to load a table before you can spend 100 credits to unlock it for free play. More loading time!) I didn't realize this till after I had collected 100 credits to unlock it, the final table. Fortunately, there was one table, Black Knight, on which I had completed 4 of the 5 table goals already, but not the fifth. I went back, got the fifth, and when asked which table I wanted to unlock, selected Jive Time. (Why do you even have the option of selecting tables you've already unlocked?) I then went back to the menu, back downstairs, to the back room, and Jive Time... was still locked. So, I don't know what's going on there. (There are also other unlockable options, such as mirror mode. I don't know how to unlock those at all.)

EDIT July 8th: Apparently, looking it up, Jive Time is unlocked by beating the Williams Challenge -- an additional mode I hadn't tried -- and the unlockable options are unlocked by completing all wizard goals on all tables. Yikes!

EDIT July 15th: Actually, the way it works is that completing all the wizard goals on a given table unlocks the unlockable options for that specific table. Seeing as the unlockable options are nothing too essential, I'm pretty OK with this.

Finally, before I conclude, a word on the selection of tables. As I've said above, I'm not really a pinball enthusiast; I basically got this for Medieval Madness. So if you are a pinball enthusiast -- who I assume this game is intended for -- I may not be the best judge. (Although if this game is for people who really care about pinball, why does it have so many implementation problems? Why don't the fricking bumpers even work right in Medieval Madness?) But, on the whole, I can't say I was too impressed with the selection of tables. Most of the tables were, uh, very old-school -- simple, yes, but, well, a bit too simple. I generally found them repetitive and tedious. Then there were the more modern incomprehensible tables. Yikes. (Of course, the game does have instructions. Maybe I should revisit these after reading those.) So aside from Medieval Madness, there just weren't a lot of tables I found to be that fun (and as for those, well, I get the impression I'd be better off playing them in another form). No Good Gofers I liked; I didn't really understand it, but it was at least somewhat comprehensible, and I think I might want to sit down and read the instructions. Taxi was also fun, but again a bit on the repetitive side. Tales of the Arabian Nights was interesting, and while still basically incomprehensible, at least it offered a fair amount of in-game guidance, in the form of explicitly telling you to shoot certain targets -- if only there were a table inspection mode so I could see what those targets were!

EDIT July 15th: Having played a bit more, and read more of the instructions, I have to say that the selection of tables has grown on me. A lot of the old-school ones I still don't like, but some of the more modern ones I called "incomprehensible" earlier really were worth revisiting after reading the instructions, in particular Whirlwind. Also Funhouse, if you're not too creeped out by Rudy. But definitely not Pin-Bot. I really do not like Pin-Bot.

In short: It is pretty unlikely that you should buy this.


PS: I found Medieval Madness substantially easier here than on a real machine. (I managed to light Royal Madness for the first time, though annoyingly I failed to then actually start it.) I don't know why this was. In particular the damsel ramp seemed to be easier to go up. Also, it seemed that I would less often miss the castle or the ball lock and hit a troll target instead, which normally is pretty common. Oddly, though, I had trouble hitting the peasant ramp. No idea what would cause all this. Maybe just luck.

EDIT July 15th: Well, I managed to activate Royal Madness now. :) Still haven't managed to win it though...

2015 Jun 30

03:22:00 - Another question about Snake

So do you remember a while ago I posted some questions about the game of Snake? Well, here's another one. Or perhaps I should really say, here's a rephrasing of one of them.

The broad question about Snake, which earlier I wrote might just be too broad to answer, was "Which graphs are winnable?" (Weakly or strongly.) But it is possible -- not likely, I don't think, but possible -- that this might have a not-too-complicated answer.

Let's reformulate the question first. If a graph has a spanning subgraph which is winnable, then the graph is winnable. Hence, rather than focusing on all winnable graphs, let's focus just on minimal winnables -- ones where deleting any edge renders the graph unwinnable. (Note: I'm going to assume the strong game unless I say otherwise.) Characterizing these would characterize all winnable graphs, as a graph is winnable iff it contains some minimal winnable.

So far I've only managed to find a few types of minimal winnable graphs! And, more generally, every winnable graph I know of contains one of these as a spanning subgraph. There are of course the trivial ones -- the zero-vertex graph, the one-vertex graph, the path P2. But the interesting ones are the following:
1. For n≥3, the cycle Cn. (In the weak game, we should restrict to n≥4, as there C3 is not minimal.)
2. For n≥3, two copies of Kn joined at a vertex. (In the weak game, we should instead say n≥2, as n=2 yields the path P3.)
3. For 1≤i≤j≤k, define Gi,j,k to be the graph consisting of two vertices and three paths between them, with i, j, and k vertices in the interior of the three paths (respectively). Then G1,1,k, G1,2,k, and G2,2,k are examples of minimal winnable graphs.
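To pin down the construction in family 3, here's a minimal Python sketch (the helper name and vertex labels are my own, purely for illustration): G_{i,j,k} is the "theta-like" graph with two endpoints joined by three internally disjoint paths having i, j, and k interior vertices.

```python
# Hypothetical helper (not from the post) building G_{i,j,k}: two endpoint
# vertices "a" and "b", joined by three internally disjoint paths with
# i, j, and k interior vertices respectively.

def theta_graph(i, j, k):
    """Return G_{i,j,k} as a set of frozenset edges on labeled vertices."""
    a, b = "a", "b"  # the two endpoints shared by all three paths
    edges = set()
    for name, n in (("p", i), ("q", j), ("r", k)):
        # One path: a, then n interior vertices unique to this path, then b.
        path = [a] + [f"{name}{t}" for t in range(n)] + [b]
        for u, v in zip(path, path[1:]):
            edges.add(frozenset((u, v)))
    return edges

# Sanity checks: G_{i,j,k} has 2 + i + j + k vertices and (i+1)+(j+1)+(k+1)
# = i + j + k + 3 edges.
g = theta_graph(1, 2, 4)
vertices = set().union(*g)
print(len(vertices))  # 2 + 1 + 2 + 4 = 9
print(len(g))         # 1 + 2 + 4 + 3 = 10
```

This doesn't check winnability, of course -- it just makes the graph family in item 3 concrete.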

So, the question: Could this possibly be all of them? It seems unlikely, but it might just be possible. I have in fact verified that these are all of them up to 8 vertices, so any counterexample will have to be at least 9 vertices, unless I've messed up.

EDIT 4 July 2015: I may indeed have messed up about the 8-vertex case. I'm certain about the 7-vertex case, though. Will update if I fully solve the 8-vertex case. (9-vertex is out of reach, I think.)

Note that a positive answer to this question would also imply that subdivisions of unwinnable graphs are unwinnable, answering that question from the previous post. (It would also answer the question of whether weak and strong winnability coincide aside from P3, but that's because this proposition essentially just includes that; it isn't a nontrivial implication.) I should say, if this is true, I don't expect it to be at all easy to prove; it's not actually something I expect at all to solve...

(I will update my website with all this later.)

(Also, "minimal winnable" is really fun to say. :) )


2015 Jun 20

11:32:00 - There might have been other misleading units too

The other night I had a dream that there was a unit of time known as a "trailer year" that was equal to 15 and a half years. In it I was thinking about how if I were famous I'd go and tell everyone to never use this unit because it is very misleading, and yes I understand the history of how it came into use and in those narrow purposes it is still useful but it should be called something else, it is primarily used to mislead people. Then I woke up and was relieved to realize this was not something I or anyone had to do.


2015 May 28

00:01:00 - A quick note on "Well-partial orderings and hierarchies"

In their paper "Well-partial orderings and hierarchies", D. H. J. De Jongh and Rohit Parikh showed that given any WPO X, we can define an ordinal o(X), which is the largest order type of any well-order extending X. (As in, yes, the maximum is achieved.) Then they proved some things about how to compute o(X) and computed some specific cases. In particular, o(X⨿Y)=o(X)⊕o(Y) and o(X×Y)=o(X)⊗o(Y), where ⊕ and ⊗ denote natural sum and natural product, respectively.

I'm surprised, however, that nowhere did they note the following rule:

o(X) is equal to the smallest ordinal greater than any o(L), where L is a lower set of X other than X itself.

Equivalently, if we use their notation l(x) to mean o({y: y≱x}), then one can state this as, o(X) is the smallest ordinal greater than any l(x).

I mean, this is kind of the rule you would expect if you know that o(X⨿Y)=o(X)⊕o(Y) and you know the recursion for natural addition. But it's surprising that they don't note this because it's basically just a rephrasing of what they did prove. You could say that it's an unnecessary rephrasing, but I think it's a helpful one, not to mention a clean one.

To be clear, this follows from the following facts that they proved:
1. If o(X) is a limit, it's the sup of all the l(x)
2. l(x) is always less than o(X)
3. o(X) is a successor iff X has a maximal element (call it x); in which case, o(X) is equal to o(X\{x})+1

Number 1 is just the statement above in the case where o(X) is a limit, and the case where it's a successor follows pretty easily from 2 and 3. There's also of course the case where o(X)=0, which is trivial.

So, that's the basic recursion. If you want to go computing o(X) for some WPOs but you don't know how to get an upper bound, it's an available tool (if possibly a hard to apply one).
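Just to illustrate the recursion, here's a small Python sketch of my own (not from the paper). It only handles finite posets -- where all the ordinals involved are finite and o(X) is just |X|, and natural sum and product reduce to ordinary sum and product -- but it follows the rule as stated: o(X) is the smallest ordinal greater than every l(x), where l(x) = o({y : y ≱ x}).

```python
# Finite-poset sanity check of the recursion (my own illustration).
# o(X) = smallest ordinal greater than every l(x), where
# l(x) = o({y : y is not >= x}).

from itertools import product

def o(elements, leq):
    """Compute o(X) by the recursion, for a finite poset given as a list of
    elements and a comparison function leq(x, y) meaning x <= y."""
    if not elements:
        return 0
    # l(x) = o of the sub-poset {y : not (y >= x)}; note x itself is excluded,
    # so the recursion terminates on finite posets.
    ls = [o([y for y in elements if not leq(x, y)], leq) for x in elements]
    return max(ls) + 1

# Example: X = product of a 2-chain and a 3-chain, ordered coordinatewise.
X = list(product(range(2), range(3)))
leq = lambda p, q: p[0] <= q[0] and p[1] <= q[1]
print(o(X, leq))  # 6 -- matching o(X) = |X| for finite X, and 2 ⊗ 3 = 6
```

In the finite case this always just returns |X|, consistent with o(X×Y)=o(X)⊗o(Y); the interesting uses of the recursion are of course the infinite ones, where you'd need an ordinal arithmetic implementation in place of plain integers.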

(Also, unrelatedly, remember I said that as best as I could tell, nobody had ever studied super-Jacobsthal exponentiation before? Well, they don't give a name to it, and they don't really "study" it, but somehow, even though I'd looked at this paper a number of times before, I never noticed that De Jongh and Parikh briefly make use of it. I'd better edit my paper to mention that before I resubmit it!)


2015 May 26

05:25:00 - Another dumb story about transportation

So Saturday I had the problem of getting back to Colin and Aviva's from the bar where Shuo and Nadja and Jim and I all met. I asked Nadja how to get home, and she said, take this bus north to Chicago Ave, then this bus west (or you can just walk west, it's not far) and then you'll be somewhere you can recognize and you can get back from there.

So I get on the bus, right, and at first I'm not really paying attention. Then I see areas that look maybe kind of familiar? And I'm thinking, oh crap Harry, you've missed your stop. But then I think, no, this sort of thing has happened before -- where you get on a bus you don't know, and you keep thinking you missed your stop. It basically never turns out that you actually missed your stop. Your sense of this is not reliable. Just chill out and sit tight until you get to Chicago Ave.

(I saw a blue line stop, and I knew how to get to Colin's from the blue line, and I thought about getting off there and just taking the subway; but instead I stayed on the bus.)

Well, the bus kept going, and going, and going... until finally it got to the end of the line. I had indeed missed my stop. And of course it stops for a while at the end of the line before it turns around, which was annoying as I really had to pee by this point.

But! The thing is? I was totally right that my sense of this is completely unreliable. Because while I did miss my stop, it wasn't until pretty late; it wasn't that far back from the end of the line. So early on when I was thinking "I missed my stop earlier when I wasn't paying attention", I was still wrong. (I don't know how I did eventually miss my stop; I thought I was paying attention. Apparently not well enough.)

So I had the right strategy, even though I did somehow ultimately miss my stop.


2015 May 25

01:08:00 - An unplanned trip to Chicago

So this past weekend I ended up taking an unplanned trip to Chicago. The occasion for this was that Nadja posted on Facebook that Shuo was going to be in Chicago for the weekend and, well, I realized that I had little reason not to.

I mean, it's a funny thing -- I hadn't seen Shuo since before she disappeared 8 years ago. Of course, at this point she reappeared... well I don't know how long ago it was but it's been years, is the point. So why had I never gone to see her before? Well, basically, travel is annoying. It's not something I do often. And I mean yeah I guess strictly speaking I could have made time but it's so easy to forget, you know, when you're just constantly addressing whatever the most urgent task you have to do is. But hey -- here I am in Ann Arbor, nothing's particularly urgent at the moment, why not take an unplanned trip to Chicago?

So I did! I stayed with Colin and Aviva. Since I last saw them they've gotten a dog, named Gately. Gately is, um, extremely dog. He jumps on everyone and licks their faces and runs after thrown objects and chews on things and rolls over and lets you pet him (and then starts play-biting you) and eats anything that gets dropped and climbs on the furniture and plays with the cat (Prince Myshkin) and... I don't know, he's like some sort of dog archetype. He is maybe too much dog. I don't want a dog licking my face quite that much.

Friday night ended up kind of turning into a big Tufts House reunion, actually. You see, Colin and Aviva already had invited over Girl Alex and Antonia and Matt Woolf (not actually Tufts but kind of in that circle) for their weekly Shabbat dinner (and also Jan, their upstairs neighbor, who once also had a cat named Prince Myshkin). When Nadja and Shuo tried to coordinate us getting together, Colin suggested they just come over here as well. And then coincidentally it turned out that Grant was visiting his parents out in the Chicago suburbs and could get here in about an hour, and we got him to show up too. We took a big picture and sent it to Kate Harney, who reportedly replied "Whoa. That is a lot of Tufts." Jim unfortunately could not make it that night though Shuo and Nadja and I met up with him the next day.

We did not get a chance to play 64 Smash, nor did I get a chance to play Twilight Struggle against Colin (I didn't actually bring it, I would have had to bring a suitcase), but Shuo and Nadja and Colin and I did play Wiz-War (the Fantasy Flight reprinting, not the original). It was fun but kind of disappointing? I don't know, I expected more from it. I was wondering how it compares to the original but it's actually rated higher on BGG. Hm. It seems like the sort of game that might need the expansions.

(Also, Shuo tells me she still has that letter I sent her while she was disappeared because somehow I managed to get ahold of her physical address and I figured, why not, I can try to contact her that way. She never responded at the time, but now I learn it worked! :D )

I could probably go on about a whole bunch of things, but instead, two things about transport:

1. Man, the payment machines on the Chicago subway system are unfriendly to people who don't live there. I thought the New York ones were bad, but the Chicago ones are substantially worse. I mean, they don't give change!

2. Here's an unexpected delay for you: The Megabus I took back to Ann Arbor got pulled over by the police.

Anyway, I'm stopping here.


2015 Apr 12

02:53:00 - Time T+4, serious internal threat. Repeat, time T+4, serious internal threat.

So we've been turning up the difficulty in Space Alert lately.

I should probably clarify who "we" is -- me, Noelle, Andy, Nick, and Seth. That's basically our usual Truth House crew now; roles vary. Josh and Eric (first-year Eric, not Gamble or DeVries, who haven't lived here for a while) have played before, as has Sam, but these days it's usually the same five of us. On the one hand, it seems kind of disappointing that we're not getting more people involved, like we used to; on the other hand, hey, a consistent crew. (Although Seth stayed out for the first two out of the four games tonight.)

And we definitely seem to be getting better! For quite a while here "standard difficulty" was all decks mixed, except serious internal, which would be white. But not too long ago we started mixing in the yellow serious internals; then removing the white ordinary threats; until today the final game we played was all decks yellow, except for serious internal, which was mixed.

Two of the games we played had all-yellow threats! We survived! In our final game the Seeker -- the dreaded Seeker, which I consider the scariest threat in the base set -- showed up! We survived! (Yeah, I realize other people consider the Executioner or the Nuclear Device scarier. To be fair, I've still never faced the Nuclear Device; it might be harder than I'm giving it credit for.)

Actually, on that topic -- I realized today, one thing that makes the Seeker easier is that it doesn't move on ties, and ties are probably more likely than non-ties, and not moving is easier to handle than moving.

...we forgot to account for the Seeker knocking out the person who kills it though, which was Seth. Oops. Seth actually got knocked out in both games he was in tonight. In the earlier one, he was knocked out by the Executioner; we knew he would get knocked out, since his move to the bridge, where the Executioner was headed, was locked in by then; but I (the captain) got mixed up and thought he'd be knocked out at the end of the turn rather than the beginning. Which meant he was knocked out before hitting the mouse rather than after. But we survived!

Maybe soon we'll turn it up to all yellow -- I mean, guaranteed all yellow, not just possibly all yellow. And maybe soon I should look into getting the expansion, for the few more months we're all here...

