Chronicles of Harry

2015 Jul 2

14:54:00 - Video game review -- Pinball Hall of Fame: The Williams Collection (PS3)

Zero-word summary: See my userpic.

(This entry has been revised on July 8th and again on July 15th; revisions are marked.)

So let me begin with some background. I am, to be honest, really not much of a pinball player. I've always wondered how it is that someone is supposed to get into pinball in the first place; modern pinball games are quite complicated, but typically provide little in the way of explicit instruction (the little inserts in the lower left are really not enough) or implicit guidance. Meanwhile they have their whole own set of terminology and conventions that I don't know how one is supposed to learn in the first place -- one reason those instruction inserts are so unhelpful is because in addition to their brevity, they're also written in pinballese. For many years, "LOCK IS LIT" was my own personal pinball equivalent of "PC LOAD LETTER". (I know what it means now, thank you, you don't have to explain.)

As a kid I would play my grandparents' Star Explorer machine, a pinball machine made for the home, whenever we visited them. These days I don't think I'd find it very fun, but it held my attention at the time. It's definitely not great pinball, though. You can't even control the flippers separately. I also played Sonic Spinball on the Genesis, which really shouldn't count. (Mostly a fun game, but eventually I got good enough I could beat it, and the final boss fight takes so long and is so tedious that it counterbalances the rest of the game.) That game at least allows you to control the flippers separately, but you can win without ever doing so.

Eventually I started playing the famous "3D Pinball for Windows -- Space Cadet", which taught me some actual basic pinball skills. A complicated game, to be sure, but once I found the manual hidden away on the hard drive, it became comprehensible. (Also, even before then, the in-game messages do provide a fair bit of guidance, and there's also the flashing arrow pointing you towards whatever it is you need to do to begin or complete a mission.) Still, for all I've played, I don't think I can claim to be good at pinball. Better than a total newbie, I guess. (I'm also too weak to nudge the machine effectively, unless I'm somehow doing it wrong.) And more relevantly, I continue to find most pinball machines baffling.

But there is one pinball machine I love, and that is Medieval Madness. I encountered it by chance as a kid at an arcade I never had the opportunity to return to. Unlike with other pinball games, I immediately understood what I was supposed to do: Shoot the castle! OK, there's actually considerably more to the game than that, but the design of the table -- both the physical layout, and how the game responds when you do shoot the castle -- makes it clear that this is something you want to do, and even if you have no idea what else is going on, well, at least the game is guiding you to do that one thing. (And having now learned how the game actually works, it isn't that complicated on the whole. I think; maybe I'm just saying that because I understand it now.)

And on this, the pinball community seems to agree with me; Medieval Madness is a top-rated pinball machine. It's #4 on IPDB and #1 on Pinside. I'm guessing that they're judging it on different criteria than I am, but that reinforces the conclusion -- apparently it's a great pinball table for serious players and newbies alike.

(The writing is another matter. It's mostly funny, but some of it is seriously cringeworthy. Sir Psycho is already pretty bad, but the "sexy princess"... ugh.)

There's a Medieval Madness machine at Pinball Pete's here in Ann Arbor, and I play it a lot. Unfortunately, it isn't always in the greatest condition; a fact of life with pinball machines, I know. Also, it costs 50¢ per play, and that adds up after a while. But recently I learned that there exist pinball video games that license actual pinball tables -- as well as the free Visual Pinball and Visual PinMAME -- and I decided to give that a shot.

First, I bought Pinball Hall of Fame: The Williams Collection ($10 used on Amazon) for the Wii. Foolishly, I forgot when I was buying it that only the PS3 and Xbox 360 versions include Medieval Madness. Oops. So I tried running Visual Pinball and Visual PinMAME under WINE, me having long ago accidentally rendered my copy of Windows unbootable -- I've heard tell of a way to restore it, but I am not trying that out just for some pinball. I don't know whether it was due to WINE or what, but I couldn't get the part of Visual Pinball that allows you to actually play the games to work. So finally I decided to shell out the $30 for a used copy of the PS3 version of Pinball Hall of Fame: The Williams Collection. (There is also apparently a PS3 version of The Pinball Arcade, yet somehow I can't find anything but the PS4 version. Perhaps I'm missing something.)

And so we get to my list of complaints. Now, I spent much of last night glued to the game -- though the fact that my cell phone was charging and there are no clocks in the game room here contributed to that -- so evidently it's not all bad. And it's not! But I got it to play Medieval Madness, and that has some serious problems. (OK, those aren't the only problems.)

Let's start with the biggest, most obvious one: The bumpers don't work properly. Oh, the physics on them is fine -- the ball hits them and is sharply repelled. But the game part of the game frequently doesn't register the hit. Maybe you're still scoring points for them, I don't know. But it doesn't consistently display the "N jet hits for super jets" screen, and when it finally does, you can see that in fact, all those earlier hits didn't register. You'll hit the bumpers a large number of times, only to finally be told, "49 jet hits to activate super jets". Did they test this at all?

EDIT July 8th: The following two paragraphs seem to be largely wrong; see the note after them.

Meanwhile, the underlying engine seems to have not been quite prepared for how, well, 3-dimensional a game Medieval Madness is. Now it's not like the game engine can't handle the fact that pinballs move in three dimensions; otherwise, there'd be no way to jump the moat to lock a ball. (Or to pick a different table, otherwise there'd be no way to perform a skill shot in Tales of the Arabian Nights.) And I have even seen it happen that when you hit the castle, the ball jumps over the side of the drawbridge and lands in the moat, as it often does on a real machine. So that's good.

But the castle gate, and the trolls, are another story. When you destroy the castle gate, it doesn't seem to retract into the castle; instead it just disappears. Now that may not sound different, but I've seen it happen in this game that on destroying the castle gate, the ball actually continues into the castle a little rather than bouncing back down. Maybe that's somehow possible on a real machine? I doubt it. And then there's the trolls. Rather than popping out from under the playfield, the trolls just... appear. What would happen if a troll were to appear underneath a ball? On a real machine, the ball is launched into the air. I have no idea what would happen here. (I didn't see this happen while I was playing; it's not something that happens all that often.)

EDIT July 8th: OK, actually I don't think the problem is in the underlying engine at all -- it seems to have no problem handling vertically moving components in, say, Funhouse. I think the problem here is just a half-assed implementation of Medieval Madness, not any problem with the underlying engine. (The drawbridge doesn't seem to move quite right either, by the way.)

EDIT July 15th: Medieval Madness doesn't seem to be the only game with a half-assed implementation: Tales of the Arabian Nights seems to have significant problems as well. I've never played the real table, but some of the things in the left loop seem to work inconsistently, similar to the bumpers in Medieval Madness.

Let's now turn to problems that affect other tables as well. The physics is dodgy; the ball seems to sometimes do impossible things, and on one table (I forget which -- Pin-Bot, I think?) I'm pretty sure I was able to (more than once) get the ball to clip through the right outlane wall, returning it to the plunger. Meanwhile, the game's random number generator, or its use of it, seems to be broken. Firstly in the physics -- getting a skill shot in Medieval Madness is easy, yes, but in this game it requires literally zero skill; the ball will go down the left lane and give you the skill shot every time. Other things seem to be insufficiently randomized as well -- for instance, in every game, in the catapult minigame, it was the cat that was worth 500,000. (I also got the "sexy princess" every game (ugh), but based on my experience with the real machine, getting the same princess over and over is not too unexpected.) Now it's possible I'm wrong about this, and I don't know how I'd judge this for tables I'm less familiar with, but I'm pretty sure there's something wrong with the randomness in this game.

EDIT July 15th: I have definitely seen the ball clip through walls now, no question.

As for the tables other than Medieval Madness, I can't say how faithfully they're implemented; Medieval Madness is the only one I'm familiar with. I didn't notice anything like the gate or the trolls, but when it comes to something like the bumpers not working properly, well, I could have easily missed that were I unfamiliar with the table. (I suppose I could compare against the in-game instructions -- more on that below -- but I didn't actually read the instructions for most tables. Or I can look up instructions online, if I really care.)

Having (mostly) covered the actual play of the game, let's turn to the interface. Rather than a simple menu where you can select which pinball game to play, the developers made a 3D virtual arcade -- which takes a fair bit of time to load, and needs to be accessed every time you want to select a different machine (which then itself needs time to load). Fortunately, we are spared having to control some player avatar walking around this arcade; within it, you can simply use the left and right buttons to select a machine, making it just a harder-to-use (since you can't see where you're going) menu... except that there's a separate "upstairs" and "back room" you have to separately navigate to. I bet somebody got a really nice bonus for that.

Let's now talk about where the gameplay and interface intersect -- the camera! Rather than a fixed camera that gives you a good view of the overall table, you can choose between several different "Smart Cams". (I found Smart Cam 2 to be the best.) However, there's no menu option for this; you have to switch between them during the game with the circle button. And no, you can't do it before the game starts, because, you see, there's a separate "Plunger Cam" (when the ball is on the plunger), "Smart Cam" (most of the time), and "Multiball Cam" (for multiball), and you can only switch the one that's currently active. (I generally found Plunger Cam 3 to be the most useful; of course, that doesn't matter for Medieval Madness, since it has an autoplunger. I never really tried any of the other Multiball Cams; Multiball Cam 1 is fine.) Your camera settings are remembered between plays of the same table... until you leave that table. Then the next time you play, you're going to be going "Oh crap I'm in Smart Cam 1, let me hit the circle button while nothing important is happening" all over again.

EDIT July 8th: There actually is a way to get a full-table camera (of which there are several you can switch between); press the square button. But like every other camera switch, this must be done mid-game. That said, on the whole I found the Smart Cam better. (Although I'd like to revise my above statement -- I often find Smart Cam 3 better than Smart Cam 2.) Oh, here's another annoyance for you: You can only cycle through the cameras in one direction! And if you activate the left plunger in Funhouse, there's no good camera for that.

EDIT July 15th: Having played some more, I think Smart Cam 3 is generally the best normal camera, though there are some exceptions. Also, Multiball Cam 2 actually is better than Multiball Cam 1.

(By the way, in every menu in the game, triangle is the cancel button, in defiance of all convention.)

One good thing about the game is that it does provide detailed instructions for all the tables; I don't know if they're complete instructions, but they're pretty good. And it's not just a block of text; it actually shows you on the table what it's talking about. (You still may encounter problems with pinball jargon though -- I had to look up what a "Special" was. It means a free game.) Unfortunately, the interface gets in the way again; you have to page through these explanations a little bit at a time to find the part you care about, the pages can't be flipped very rapidly, and each page contains only a few lines of text. Even on most of the simpler tables, this is about 50 pages long; for Medieval Madness, it's about 80.

(Another side note -- when playing, if you pause, there's a menu option to reset the ball position. This is amusingly but unhelpfully labelled "Call Attendant".)

What I would like to exist, but does not, is some sort of table inspection mode, where you can just examine a table up-close without playing it. On some of the more complicated tables, I couldn't get a good sense of the physical layout of the tables without actually playing, or sometimes even after actually playing. Where do the ramps and habitrails lead, for instance? And what do all the lights and markings say? You'll never be able to read those during play (unless you use a terribly close-up camera). The instructions will tell you the significance of the lights, of course, but it's nice to be able to read them, as well as just more generally getting to examine the table art, some of which can be in places you'll hardly even be able to see during play.

They've also added five "table goals" (essentially achievements) to each table. Normally I'm opposed to achievement systems, but I'm pretty OK with it here. However, once again implementation problems get in the way. Completing all five table goals unlocks five further "wizard goals"... but you can't get the wizard goals to register as completed unless you've already unlocked them. If you completed one earlier, or even in the same game where you finally completed the fifth table goal? Too bad! You have to do it over. (This actually happened to me playing Taxi, with the "get 4,600,000 points" wizard goal.)

EDIT July 15th: This has now happened to me a lot. It was especially a problem in Sorcerer (which has one normal goal -- the extra ball -- that's harder than many of the wizard goals) and in Tales of the Arabian Nights (which has a similar situation with the Harem Multiball (ugh), though in this case the difficulty seems to be due primarily to the table's broken implementation mentioned above). At least now I've unlocked the wizard goals on all tables (Jive Time included) so I never have to worry about this again!

This brings me to my next complaint: The unlocking system. Why does this even exist? I paid for the game, I'd like to just be able to play the pinball tables in it. (Fortunately, Medieval Madness starts fully unlocked.) Instead, of the 13 tables, only 4 start fully unlocked for free play mode. Most of the rest are available, but require spending credits. You start with 20 of these and it costs 1 to play a game; things that give you free games instead give you 1 credit. However, completing table goals gives you a bunch, and completing wizard goals gives you even more. You can spend 100 to unlock a table for free play; you can also unlock a table for free play without spending credits by completing (for the first time) all five goals on any table. (I assume you can also do this with wizard goals, but unsurprisingly I haven't been able to do that.)

Still, it's not terrible; at least I was able to unlock all the tables after a single night of playing, right? (Granted, one that went much longer than I originally intended.) Well, not quite. One table, Jive Time, starts entirely locked. How do you unlock it? I don't know. You can't spend 100 credits on it, because you can't even access its menu. (Yes, you have to load a table before you can spend 100 credits to unlock it for free play. More loading time!) I didn't realize this till after I had collected 100 credits to unlock it, the final table. Fortunately, there was one table, Black Knight, on which I had completed 4 of the 5 table goals already, but not the fifth. I went back, got the fifth, and when asked which table I wanted to unlock, selected Jive Time. (Why do you even have the option of selecting tables you've already unlocked?) I then went back to the menu, back downstairs, to the back room, and Jive Time... was still locked. So, I don't know what's going on there. (There are also other unlockable options, such as mirror mode. I don't know how to unlock those at all.)

EDIT July 8th: Apparently, looking it up, Jive Time is unlocked by beating the Williams Challenge -- an additional mode I hadn't tried -- and the unlockable options are unlocked by completing all wizard goals on all tables. Yikes!

EDIT July 15th: Actually, the way it works is that completing all the wizard goals on a given table unlocks the unlockable options for that specific table. Seeing as the unlockable options are nothing too essential, I'm pretty OK with this.

Finally, before I conclude, a word on the selection of tables. As I've said above, I'm not really a pinball enthusiast; I basically got this for Medieval Madness. So if you are a pinball enthusiast -- who I assume this game is intended for -- I may not be the best judge. (Although if this game is for people who really care about pinball, why does it have so many implementation problems? Why don't the fricking bumpers even work right in Medieval Madness?) But, on the whole, I can't say I was too impressed with the selection of tables. Most of the tables were, uh, very old-school -- simple, yes, but, well, a bit too simple. I generally found them repetitive and tedious. Then there were the more modern incomprehensible tables. Yikes. (Of course, the game does have instructions. Maybe I should revisit these after reading those.) So aside from Medieval Madness, there just weren't a lot of tables I found to be that fun (and as for those, well, I get the impression I'd be better off playing them in another form). No Good Gofers I liked; I didn't really understand it, but it was at least somewhat comprehensible, and I think I might want to sit down and read the instructions. Taxi was also fun, but again a bit on the repetitive side. Tales of the Arabian Nights was interesting, and while still basically incomprehensible, at least it offered a fair amount of in-game guidance, in the form of explicitly telling you to shoot certain targets -- if only there were a table inspection mode so I could see what those targets were!

EDIT July 15th: Having played a bit more, and read more of the instructions, I have to say that the selection of tables has grown on me. A lot of the old-school ones I still don't like, but some of the more modern ones I called "incomprehensible" earlier really were worth revisiting after reading the instructions, in particular Whirlwind. Also Funhouse, if you're not too creeped out by Rudy. But definitely not Pin-Bot. I really do not like Pin-Bot.

In short: It is pretty unlikely that you should buy this.

-Harry

PS: I found Medieval Madness substantially easier here than on a real machine. (I managed to light Royal Madness for the first time, though annoyingly I failed to then actually start it.) I don't know why this was. In particular the damsel ramp seemed to be easier to go up. Also, it seemed that I would less often miss the castle or the ball lock and hit a troll target instead, which normally is pretty common. Oddly, though, I had trouble hitting the peasant ramp. No idea what would cause all this. Maybe just luck.

EDIT July 15th: Well, I managed to activate Royal Madness now. :) Still haven't managed to win it though...

2015 Jun 30

03:22:00 - Another question about Snake

So do you remember a while ago I posted some questions about the game of Snake? Well, here's another one. Or perhaps I should really say, here's a rephrasing of one of them.

The broad question about Snake, which earlier I wrote might just be too broad to answer, was "Which graphs are winnable?" (Weakly or strongly.) But it is possible -- not likely, I don't think, but possible -- that this might have a not-too-complicated answer.

Let's reformulate the question first. If a graph has a spanning subgraph which is winnable, then the graph is winnable. Hence, rather than focusing on all winnable graphs, let's focus just on minimal winnables -- ones where deleting any edge renders the graph unwinnable. (Note: I'm going to assume the strong game unless I say otherwise.) Characterizing these would characterize all winnable graphs, as a graph is winnable iff it contains some minimal winnable graph as a spanning subgraph.

So far, I've only managed to find a few types of minimal winnable graphs! And, more generally, every winnable graph I know of contains one of these as a spanning subgraph. There are of course the trivial ones -- the zero-vertex graph, the one-vertex graph, the path P_2. But the interesting ones are the following:
1. For n≥3, the cycle C_n. (In the weak game, we should restrict to n≥4, as there C_3 is not minimal.)
2. For n≥3, two copies of K_n joined at a vertex. (In the weak game, we should instead say n≥2, as n=2 yields the path P_3.)
3. For 1≤i≤j≤k, define G_{i,j,k} to be the graph consisting of two vertices and three paths between them, with i, j, and k vertices in the interior of the three paths (respectively). Then G_{1,1,k}, G_{1,2,k}, and G_{2,2,k} are examples of minimal winnable graphs.

So, the question: Could this possibly be all of them? It seems unlikely, but it might just be possible. I have in fact verified that these are all of them up to 8 vertices, so any counterexample will have to be at least 9 vertices, unless I've messed up.
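
(In case anyone wants to poke at this computationally: here's a quick Python sketch, with my own made-up helper names, of how I'd build these three families as plain edge lists on vertices 0..n-1 -- suitable for feeding into a brute-force checker like the one sketched under the March 11 entry below. Nothing rigorous, just throwaway code.)

    from itertools import combinations

    def cycle(n):
        # The cycle C_n, for n >= 3.
        return n, [(i, (i + 1) % n) for i in range(n)]

    def two_cliques(n):
        # Two copies of K_n sharing the single vertex 0, for n >= 3.
        second = [0] + list(range(n, 2 * n - 1))
        edges = list(combinations(range(n), 2)) + list(combinations(second, 2))
        return 2 * n - 1, edges

    def theta(i, j, k):
        # G_{i,j,k}: vertices 0 and 1 joined by three internally disjoint paths
        # with i, j, and k interior vertices respectively (1 <= i <= j <= k).
        edges, nxt = [], 2
        for length in (i, j, k):
            prev = 0
            for _ in range(length):
                edges.append((prev, nxt))
                prev, nxt = nxt, nxt + 1
            edges.append((prev, 1))
        return 2 + i + j + k, edges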

EDIT 4 July 2015: I may indeed have messed up about the 8-vertex case. I'm certain about the 7-vertex case, though. Will update if I fully solve the 8-vertex case. (9-vertex is out of reach, I think.)

Note that a positive answer to this question would also imply that subdivisions of unwinnable graphs are unwinnable, answering that question from the previous post. (It would also answer the question of whether weak and strong winnability coincide aside from P_3, but that's because this proposition essentially just includes that; it isn't a nontrivial implication.) I should say, if this is true, I don't expect it to be at all easy to prove; it's not actually something I expect at all to solve...

(I will update my website with all this later.)

(Also, "minimal winnable" is really fun to say. :) )

-Harry

2015 Jun 20

11:32:00 - There might have been other misleading units too

The other night I had a dream that there was a unit of time known as a "trailer year" that was equal to 15 and a half years. In it I was thinking about how if I were famous I'd go and tell everyone to never use this unit because it is very misleading, and yes I understand the history of how it came into use and in those narrow purposes it is still useful but it should be called something else, it is primarily used to mislead people. Then I woke up and was relieved to realize this was not something I or anyone had to do.

-Harry

2015 May 28

00:01:00 - A quick note on "Well-partial orderings and hierarchies"

In their paper "Well-partial orderings and hierarchies", D. H. J. De Jongh and Rohit Parikh showed that given any WPO X, we can define an ordinal o(X), which is the largest order type of any well-order extending X. (As in, yes, the maximum is achieved.) Then they proved some things about how to compute o(X) and computed some specific cases. In particular, o(X⨿Y)=o(X)⊕o(Y) and o(X×Y)=o(X)⊗o(Y), where ⊕ and ⊗ denote natural sum and natural product, respectively.

I'm surprised, however, that nowhere did they note the following rule:

o(X) is equal to the smallest ordinal greater than any o(L), where L is a lower set of X other than X itself.

Equivalently, if we use their notation l(x) to mean o({y: y≱x}), then one can state this as, o(X) is the smallest ordinal greater than any l(x).

I mean, this is kind of the rule you would expect if you know that o(X⨿Y)=o(X)⊕o(Y) and you know the recursion for natural addition. But it's surprising that they don't note this because it's basically just a rephrasing of what they did prove. You could say that it's an unnecessary rephrasing, but I think it's a helpful one, not to mention a clean one.

To be clear, this follows from the following facts that they proved:
1. If o(X) is a limit, it's the sup of all the l(x)
2. l(x) is always less than o(X)
3. o(X) is a successor iff X has a maximal element (call it x); in which case, o(X) is equal to o(X\{x})+1

Number 1 just is the statement above in the case where o(X) is a limit, and the case where it's a successor follows pretty easily from 2 and 3. There's also of course the case where o(X)=0, which is trivial.

So, that's the basic recursion. If you want to go computing o(X) for some WPOs but you don't know how to get an upper bound, it's an available tool (if possibly a hard to apply one).
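
(A quick sanity check of my own, not an example from the paper: take X = ω ⨿ ω, two incomparable copies of ω. Its lower sets other than X itself are of the form m ⨿ ω, ω ⨿ n, or m ⨿ n with m, n finite, and these have o-values ω+m, ω+n, and m+n respectively. The smallest ordinal greater than all of those is ω·2, which is indeed o(ω)⊕o(ω) = ω⊕ω.)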

(Also, unrelatedly, remember I said that as best as I could tell, nobody had ever studied super-Jacobsthal exponentiation before? Well, they don't give a name to it, and they don't really "study" it, but somehow, even though I'd looked at this paper a number of times before, I never noticed that De Jongh and Parikh briefly make use of it. I'd better edit my paper to mention that before I resubmit it!)

-Harry

2015 May 26

05:25:00 - Another dumb story about transportation

So Saturday I had the problem of getting back to Colin and Aviva's from the bar where Shuo and Nadja and Jim and I all met. I asked Nadja how to get home, and she said, take this bus north to Chicago Ave, then this bus west (or you can just walk west, it's not far) and then you'll be somewhere you can recognize and you can get back from there.

So I get on the bus, right, and at first I'm not really paying attention. Then I see areas that look maybe kind of familiar? And I'm thinking, oh crap Harry, you've missed your stop. But then I think, no, this sort of thing has happened before -- where you get on a bus you don't know, and you keep thinking you missed your stop. It basically never turns out that you actually missed your stop. Your sense of this is not reliable. Just chill out and sit tight until you get to Chicago Ave.

(I saw a blue line stop, and I knew how to get to Colin's from the blue line, and I thought about getting off there and just taking the subway; but instead I stayed on the bus.)

Well, the bus kept going, and going, and going... until finally it got to the end of the line. I had indeed missed my stop. And of course it stops for a while at the end of the line before it turns around, which was annoying as I really had to pee by this point.

But! The thing is? I was totally right that my sense of this is completely unreliable. Because while I did miss my stop, it wasn't until pretty late; it wasn't that far back from the end of the line. So early on when I was thinking "I missed my stop earlier when I wasn't paying attention", I was still wrong. (I don't know how I did eventually miss my stop; I thought I was paying attention. Apparently not well enough.)

So I had the right strategy, even though I did somehow ultimately miss my stop.

-Harry

2015 May 25

01:08:00 - An unplanned trip to Chicago

So this past weekend I ended up taking an unplanned trip to Chicago. The occasion for this was that Nadja posted on Facebook that Shuo was going to be in Chicago for the weekend and, well, I realized that I had little reason not to.

I mean, it's a funny thing -- I hadn't seen Shuo since before she disappeared 8 years ago. Of course, at this point she reappeared... well I don't know how long ago it was but it's been years, is the point. So why had I never gone to see her before? Well, basically, travel is annoying. It's not something I do often. And I mean yeah I guess strictly speaking I could have made time but it's so easy to forget, you know, when you're just constantly addressing whatever the most urgent task you have to do is. But hey -- here I am in Ann Arbor, nothing's particularly urgent at the moment, why not take an unplanned trip to Chicago?

So I did! I stayed with Colin and Aviva. Since I last saw them they've gotten a dog, named Gately. Gately is, um, extremely dog. He jumps on everyone and licks their faces and runs after thrown objects and chews on things and rolls over and lets you pet him (and then starts play-biting you) and eats anything that gets dropped and climbs on the furniture and plays with the cat (Prince Myshkin) and... I don't know, he's like some sort of dog archetype. He is maybe too much dog. I don't want a dog licking my face quite that much.

Friday night ended up kind of turning into a big Tufts House reunion, actually. You see, Colin and Aviva already had invited over Girl Alex and Antonia and Matt Woolf (not actually Tufts but kind of in that circle) for their weekly Shabbat dinner (and also Jan, their upstairs neighbor, who once also had a cat named Prince Myshkin). When Nadja and Shuo tried to coordinate us getting together, Colin suggested they just come over here as well. And then coincidentally it turned out that Grant was visiting his parents out in the Chicago suburbs and could get here in about an hour, and we got him to show up too. We took a big picture and sent it to Kate Harney, who reportedly replied "Whoa. That is a lot of Tufts." Jim unfortunately could not make it that night though Shuo and Nadja and I met up with him the next day.

We did not get a chance to play 64 Smash, nor did I get a chance to play Twilight Struggle against Colin (I didn't actually bring it, I would have had to bring a suitcase), but Shuo and Nadja and Colin and I did play Wiz-War (the Fantasy Flight reprinting, not the original). It was fun but kind of disappointing? I don't know, I expected more from it. I was wondering how it compares to the original but it's actually rated higher on BGG. Hm. It seems like the sort of game that might need the expansions.

(Also, Shuo tells me she still has that letter I sent her while she was disappeared because somehow I managed to get ahold of her physical address and I figured, why not, I can try to contact her that way. She never responded at the time, but now I learn it worked! :D )

I could probably go on about a whole bunch of things, but instead, two things about transport:

1. Man, the payment machines on the Chicago subway system are unfriendly to people who don't live there. I thought the New York ones were bad, but the Chicago ones are substantially worse. I mean, they don't give change!

2. Here's an unexpected delay for you: The Megabus I took back to Ann Arbor got pulled over by the police.

Anyway, I'm stopping here.

-Harry

2015 Apr 12

02:53:00 - Time T+4, serious internal threat. Repeat, time T+4, serious internal threat.

So we've been turning up the difficulty in Space Alert lately.

I should probably clarify who "we" is -- me, Noelle, Andy, Nick, and Seth. That's basically our usual Truth House crew now; roles vary. Josh and Eric (first-year Eric, not Gamble or DeVries, who haven't lived here for a while) have played before, as has Sam, but these days it's usually the same five of us. On the one hand, it seems kind of disappointing that we're not getting more people involved, like we used to; on the other hand, hey, a consistent crew. (Although Seth stayed out for the first two out of the four games tonight.)

And we definitely seem to be getting better! For quite a while here "standard difficulty" was all decks mixed, except serious internal, which would be white. But not too long ago we started mixing in the yellow serious internals; then removing the white ordinary threats; until today the final game we played was all decks yellow, except for serious internal, which was mixed.

Two of the games we played had all-yellow threats! We survived! In our final game the Seeker -- the dreaded Seeker, which I consider the scariest threat in the base set -- showed up! We survived! (Yeah, I realize other people consider the Executioner or the Nuclear Device scarier. To be fair, I've still never faced the Nuclear Device; it might be harder than I'm giving it credit for.)

Actually, on that topic -- I realized today, one thing that makes the Seeker easier is that it doesn't move on ties, and ties are probably more likely than non-ties, and not moving is easier to handle than moving.

...we forgot to account for the Seeker knocking out the person who kills it though, which was Seth. Oops. Seth actually got knocked out in both games he was in tonight. In the earlier one, he was knocked out by the Executioner; we knew he would get knocked out, since his move to the bridge, where the Executioner was headed, was locked in by then; but I (the captain) got mixed up and thought he'd be knocked out at the end of the turn rather than the beginning. Which meant he was knocked out before hitting the mouse rather than after. But we survived!

Maybe soon we'll turn it up to all yellow -- I mean, guaranteed all yellow, not just possibly all yellow. And maybe soon I should look into getting the expansion, for the few more months we're all here...

-Harry

2015 Mar 11

17:05:00 - Two questions about Snake

Obviously, I haven't been posting much here lately. Here's another entry for the "problem dump" series -- two math problems I don't really intend to work on (at least not at the moment). I'll probably put this up on my website later.

Actually, I'm a little surprised to find that I haven't written anything about this here earlier. Oh well. Here we go.

Let's consider the game of Snake; except instead of playing it on a grid, we're going to play it on an arbitrary finite [simple] graph. We assume the snake starts at length 1 -- the game begins by the "computer" (who acts adversarially) picking a starting spot for the snake and a starting spot for the fruit. When the snake consumes the fruit (and thereby grows by 1), the computer picks a new spot for the fruit, which cannot be somewhere currently covered by the snake. We'll say the player wins if the snake covers the whole graph. If the game goes on infinitely, I guess that's a draw, but we'll give it to the computer.

(Here are some obvious ones -- if a spanning subgraph of a graph is winnable, then so is the whole thing; any cycle is winnable, so so is any graph with a Hamilton cycle; in order to be winnable, a graph must have a Hamilton path. But like I said, I'm not going to put all my notes here.)

So the general question then is, for which graphs can the player always win? (Hence why I said draws go to the computer.) Now this question is not necessarily that concrete; there might not be any nice characterization. I have a bunch of notes about necessary conditions and sufficient conditions for this, but I'm not going to list them all here. I do, however, have two specific questions about this that I'd like to ask.

So -- you'll notice I didn't give a totally formal specification of the rules above, and there's a reason for that. When you formalize the rules, you realize there's the question: Should a snake of length 2 be allowed to turn around? As in, move its head to the current location of the tail. If you write the rules in the obvious manner, this is entirely legal, but it seems kind of unintuitive. So we can have two variants of the game: The weak game, where this is allowed; and the strong game, where it is not. (Note that you could consider graphs with multiple edges, and say that then in the strong game you're allowed to turn around if there's a double edge; but I'm restricting to simple graphs for reasons you'll see shortly.)

There is at least one graph that is weakly winnable but not strongly winnable, namely, the path on 3 vertices, P_3. So, question number one: Are there any other such graphs? I suspect the answer is no but have been unable to prove it. Hence the restriction to simple graphs -- if I'm right, then unless the "underlying simple graph" is P_3, adding multiple edges simply won't affect anything. (And it's easy to see how it affects P_3.)
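
(Here, for what it's worth, is roughly the brute-force checker I'd use for small cases -- a Python sketch based on my reading of the rules above, so the usual caveats apply; the function name and the treatment of "stuck" positions are my own choices. It treats Snake as a reachability game: iterate to a fixed point to find the positions from which the player can force covering the whole graph in finitely many moves; everything else, including infinite play and getting stuck, goes to the computer. Hopelessly exponential, but fine for tiny graphs.)

    def winnable(n, edges, strong=True):
        # Can the player always cover the whole graph?  Vertices are 0..n-1;
        # edges is any iterable of pairs.  Assumes n >= 2.  strong=True forbids
        # a length-2 snake from turning around onto its own tail.
        adj = {v: set() for v in range(n)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)

        # Enumerate every possible snake: a path through the graph, head first.
        snakes = []
        def extend(path):
            snakes.append(tuple(path))
            for v in adj[path[-1]]:
                if v not in path:
                    extend(path + [v])
        for v in range(n):
            extend([v])

        def moves(snake, fruit):
            # All positions reachable by one player move; 'WIN' marks a win.
            out = []
            head, tail, body = snake[0], snake[-1], set(snake)
            for v in adj[head]:
                if v == fruit:                      # eat the fruit and grow
                    grown = (v,) + snake
                    out.append('WIN' if len(grown) == n else (grown, None))
                elif v not in body or (v == tail and len(snake) > 1
                                       and not (strong and len(snake) == 2)):
                    out.append(((v,) + snake[:-1], fruit))  # ordinary move
            return out

        # Positions: (snake, fruit) with the player to move, and (snake, None)
        # with the computer about to place a new fruit.
        player_pos = [(s, f) for s in snakes for f in range(n) if f not in s]
        computer_pos = [(s, None) for s in snakes if len(s) < n]

        winning, changed = set(), True
        while changed:                              # least fixed point
            changed = False
            for s, f in player_pos:
                if (s, f) not in winning and any(
                        m == 'WIN' or m in winning for m in moves(s, f)):
                    winning.add((s, f))
                    changed = True
            for s, _ in computer_pos:
                if (s, None) not in winning and all(
                        (s, f) in winning for f in range(n) if f not in s):
                    winning.add((s, None))
                    changed = True

        # The computer picks both the snake's and the fruit's starting spots.
        return all(((v,), f) in winning
                   for v in range(n) for f in range(n) if f != v)

    # P_3 should come out weakly winnable but not strongly winnable:
    #   winnable(3, [(0, 1), (1, 2)], strong=True)   -> False
    #   winnable(3, [(0, 1), (1, 2)], strong=False)  -> True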

Here's a second question. When I initially mentioned this problem to John, he suggested that whether a graph is winnable or not should depend only on its topology. But this is not so; I have examples of (weakly or strongly) winnable graphs that can be subdivided to be made non-winnable. However, I have been unable to find an example of a (weakly or strongly) non-winnable graph which can be subdivided to be made winnable. So, question number two: If a graph is non-winnable (weakly or strongly), is the same true of any subdivision of it? For this question I don't really have a good idea what the answer should be. My guess would be yes, but I wouldn't be all that surprised if someone found a counterexample.

By the way, someone -- I forget who, unfortunately -- suggested that you could also play Snake on infinite graphs, with the win condition being that you grow arbitrarily large (now it's only a "draw" if the game goes on infinitely but your size stays bounded). But that seems like a pretty different problem, so I'm not going to consider that here.

-Harry

2015 Mar 7

22:18:00 - Two papers on Jacobsthal multiplication I didn't know about

So!

I just learned from a discussion on Wikipedia that Jacobsthal multiplication was rediscovered in the 1980s by Conway. Possibly because they didn't have Wikipedia back then, he didn't know it was originally due to Jacobsthal. As far as I can tell, this rediscovery resulted in only two papers: One by Harry Gonshor, and one by John Hickman, both of which explored the arithmetic of this multiplication.

I think I'm going to go edit references to these into my own paper on the subject now. The history deserves to be connected up!

-Harry

2015 Jan 26

02:45:00 - Jacobsthal thing on arXiv

As discussed last entry... here it is! Again, hopefully I'm not embarrassing myself with this.

If you look at nothing else, look at the two tables. Especially the first table, it's so fancy! :)

-Harry

2015 Jan 24

19:11:00 - Jacobsthal again

So remember that Jacobsthal thing? (Here's the version on my UMich website, it's a bit easier for me to find than the LJ version. Also, I've updated the terminology there; see below.)

Well -- so, a while ago Jeff told me that yeah I should write this up as an actual paper. (I may have said this here before.) So I did. I didn't post it on arXiv, however; I wasn't too sure whether it was actually original, and I didn't want to potentially embarrass myself by posting something unoriginal. I mean, I hadn't been able to turn it up in my searches, but I'm not a set theorist, and it's not like I did a super-thorough search. (Integer complexity is a pretty small area, and addition chains isn't a huge area either. But ordinal arithmetic seems rather larger.) Maybe for all I knew it had been done a hundred years ago!

So instead I figured I'd try to get it published in a journal first -- I'd know it was original if it got accepted! Unfortunately, the first place I submitted it didn't accept it, and after that I was really busy so I haven't resubmitted it anywhere else yet.

(I also changed "semi-Jacobsthal" to "super-Jacobsthal". Jeff sugested I should really change "semi-Jacobsthal" to something else, and I'm glad he did, because "super-Jacobsthal" is much better and I wouldn't have thought of it if he hadn't suggested that.)

Point is -- Jeff, and also Dan Hathaway, recently pointed out to me this paper by Paolo Lippiari. And this is certainly not the same thing I'm doing, but it was close enough in flavor that I was basically just like "Oh crap, I should probably really put this up." So I'm doing that! I can't link to it yet, because it doesn't go up till Monday, but I'm doing it. I guess we'll see whether I end up embarrassing myself...

But if nothing else, at least my paper has two really nice tables in it! One of them is pretty fancy and took a decent amount of work in TeX. :)

(Meanwhile -- do you remember the problem of "ordinal multichoose"? Well, I figured out how to compute it in general! Both the "left" and "right" versions, and both choose and multichoose. I also came up with a third one, "natural" multichoose (you can guess the definition), but I can't compute that one except in very special cases, with lower bounds and conjectures for a few more cases. I can write more about this if people want. I haven't here partly because the rules for computing these are, unfortunately, really complicated. I'm probably not going to get around to writing this up formally for quite a while -- finally getting back to my integer complexity and addition chains work, after the constant not-having-time that was last semester, is higher priority, I'm pretty sure.)

-Harry

2015 Jan 22

21:04:00 - Mystery Hunt roundup 2015

So! This past weekend was MIT Mystery Hunt, which means it's time for my annual Mystery Hunt roundup. Unfortunately the puzzles aren't up yet on the archive, or full stats, but limited stats are up in the form of the slideshow, and the solutions are at least up for those of us who participated. Hopefully everything's up before too long, but I'm just going to go ahead and do this anyway. EDIT: I've now put the links in. (Also, I had accidentally mislabeled "Dory" as "Nemo".)

I was once again on the Donner Party team this year. Not a very large team. We did OK, I guess? Solved a decent fraction of the puzzles. Ended up right about the middle of the pack. We didn't solve any of the metametas (certainly didn't make it to the runaround -- I don't know how possible the runaround would have been anyway, seeing as we're a mostly-remote team), and I don't think we even solved any of the metas until after the coin was found (and we only got two of those). So, not great. Apparently we solved a record number of puzzles for our team, but that was probably helped by the existence of the School of Fish round, which consisted of a large number of lower-difficulty puzzles (apparently designed to be about "something like one-third the difficulty of typical ocean puzzles").

Nobody else here really joined in, although Seth and Noelle each did very briefly, and so did Angus, who was here for the weekend, and actually Angus and Seth both ended up contributing substantially during that brief time! But oh well. Anyway, on to discussing particular puzzles.
( Cut for spoilers )
So, that's that. We'll see what next year brings. (Maybe I should join a different team -- I miss having a shot at actually winning...)

-Harry

2014 Nov 11

22:17:00 - A neat paper about finite groups, and some irresponsible speculation

By "irresponsible speculation", I mean "speculation without having done my homework first", i.e., without having even tried to look at the existing literature beyond this paper or actually really tried anything at all.

So, Juan Arias de Reyna recently pointed out to me the following paper: Commuting Probabilities of Finite Groups, by Sean Eberhard.

EDIT DEC 1: The paper has been updated; the following refers to the original version of the paper. See the comments for discussion of the updated version.

So: If you have a finite group, you can consider the probability that a randomly chosen ordered pair of elements from it commutes. You could then consider the set of all probabilities obtained this way; this is some set of rational numbers between 0 and 1.

In this paper, Eberhard shows that this set is in fact reverse well-ordered! Its order type is of the form ω^α where 2≤α≤ω². (Though Juan Arias de Reyna points out that it's not too hard to find examples that show that in fact α≥ω, so the order type is at least ω^ω.) He also shows that all the limit points of this set are rational. I think it should be pretty obvious why I'm interested in this! (Even though I have no plans to actually study this, my hands are full as it is.)

Now for the irresponsible speculation:

1. Instead of just doing finite groups, one could generalize to compact groups; one can consider probabilities there as well. How would this affect the set? It would be really nice if this just gave you the closure of the probability set for finite groups! Though Arias de Reyna tells me it's conjectured that the finite group commuting probability set is closed in (0,1], so that would be saying that using general compact groups only gives you 0 in addition. (I mean, it certainly does give you 0 in addition!)

2. I'm reminded of some work of John Wiltshire-Gordon and Gene Kopp. They considered probabilities of randomly chosen elements from a compact group satisfying some general word; the case of commuting probabilities is the case where the word w is aba^(-1)b^(-1). I wonder if the same phenomenon might be seen for other words.

Probably it would be best to first look at words in one variable. Obviously using w=a generates an ω if we stick to finite groups and an ω+1 if we allow general compact groups -- not ω^ω, but still well-ordered. As for a² or a³... well, I don't know, and I don't really intend to work on it as I said above, but it seems vaguely plausible and it's an interesting question!
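
(To make this concrete for myself, here's a little brute-force Python sketch -- my own toy code, nothing to do with the paper -- that computes these probabilities for small symmetric groups. Elements of S_n are tuples p with p[i] the image of i; the function names are just mine.)

    from fractions import Fraction
    from itertools import permutations, product

    def compose(p, q):
        # (p o q)(i) = p(q(i)); permutations are tuples of images.
        return tuple(p[q[i]] for i in range(len(p)))

    def symmetric_group(n):
        return list(permutations(range(n)))

    def commuting_probability(G):
        # Probability that a random ordered pair of elements commutes.
        good = sum(1 for a, b in product(G, repeat=2)
                   if compose(a, b) == compose(b, a))
        return Fraction(good, len(G) ** 2)

    def square_probability(G):
        # Probability that a random element satisfies the word a^2 = identity.
        e = tuple(range(len(G[0])))
        return Fraction(sum(1 for a in G if compose(a, a) == e), len(G))

    print(commuting_probability(symmetric_group(3)))   # 1/2
    print(square_probability(symmetric_group(3)))      # 2/3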

So yeah that's basically all I have to say on the matter.

-Harry

2014 Oct 22

04:12:00 - ~/enormousfiles

Something clever I thought of today:

So I've got this enormous file on my hard drive -- about 35 GB. While it's enormous, it's not particularly important; I haven't deleted it only because there's no need to. But it's in a directory with lots of other important stuff; actually, it's in a subdirectory of a directory with lots of important stuff. So when I'm doing backups, it's a pain, because I have to go in and copy everything in this directory except this one file in a subdirectory -- the file isn't important enough to be worth slowing down my backups for.

Solution I thought of today: make a new directory, ~/enormousfiles, put the file in there, and put a symlink to it in the original directory. Yay!
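
(For my own future reference, the same move as a little Python sketch -- the paths here are made up, of course:)

    import os, shutil

    home = os.path.expanduser("~")
    store = os.path.join(home, "enormousfiles")
    os.makedirs(store, exist_ok=True)

    # Hypothetical example path; substitute the real offender.
    original = os.path.join(home, "important-stuff", "subdir", "huge-file.dat")
    new_home = os.path.join(store, os.path.basename(original))

    shutil.move(original, new_home)   # move the big file into ~/enormousfiles
    os.symlink(new_home, original)    # leave a symlink behind at the old spot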

-Harry

2014 Sep 8

20:20:00 - Addition chain defects on arXiv!

Hey hey hey here you go!

So yeah, obviously this was mostly written quite a while ago (it's just Chapter 4 of my thesis, after all), but I'm only putting it on arXiv now.

...I'd write more about it, but honestly, if you're still reading my LJ, chances are you've heard a lot of it before. (At least, I think I've written about it here before...) So I'll stop here. Well unless someone goes in the comments and posts "What the hell is this?", then probably I'll explain. :P

-Harry

2014 Sep 6

02:55:00 - Gregory House

So I've moved from Truth House to Gregory House. (I guess that was a few weeks ago now.)

First house meeting was Wednesday. I managed to get myself elected treasurer (well, nobody else ran), so that's good. It only counts for 2 hours per week, though; not getting out of chores entirely. (I also ran for secretary, which only counts for 1 hour, but didn't win.)

Unfortunately for various reasons I haven't actually been hanging around Gregory very much so far (probably spent more time at Truth still), and when I have, it's largely been empty. People don't seem to stay up very late there. Oh well. Obviously part of why I moved to Gregory is that it would be quieter than Truth, being the no-drugs-no-drinking house, but so far it's been a bit too quiet.

Well, not in the literal sense -- the big reason I moved to Gregory was so that I could get a single! If I stayed in Truth another year I'd have to take a roommate. I wasn't planning to be in Ann Arbor another year at all, since I figured I'd have found a job of some sort, but here I am. But I didn't sign up for Gregory until really late, so while I did get a single, I got the last one; and Gregory has a few rooms that are different from the rest. My room is tiny, and right by the kitchen and common areas. Oh well. Still better than having a roommate, I think!

So, yeah, I can't say a lot at the moment. But, we'll see how this goes. I expect it will go well. Once we get power back, anyway; there was quite a storm tonight...

-Harry

2014 Sep 5

02:12:00 - Idle question: What if we changed the definition of homotopy?

What would happen to homotopy theory if we used a more general notion of homotopy?

Let me make a formal definition: Given topological spaces X and Y and continuous maps f,g:X→Y, we'll say f and g are C-homotopic if there exists a connected space Z with points z_0 and z_1 and a continuous map h:X×Z→Y such that h(x,z_0)=f(x) and h(x,z_1)=g(x).

So, obviously, this is a more inclusive notion than our usual notion of homotopy. We can then talk about C-homotopy equivalence, C-contractibility, C-homotopy groups, etc. And certainly there are maps that are C-homotopic but not homotopic; let Y be connected but not path-connected, and consider two points in Y in different path components as maps from the one-point space.

But can we find less trivial examples of maps that are C-homotopic but not homotopic? What about examples that just straight up are *not* C-homotopic? What about examples of spaces that are C-homotopy equivalent, but not homotopy equivalent, as well as spaces that aren't C-homotopy equivalent at all? (Question I tried unsuccessfully to answer: Is the topologist's sine curve C-contractible?)

Do C-homotopy groups agree with the usual homotopy groups? Do our usual algebraic topology functors respect C-homotopy in addition to just homotopy? (I asked John about this, he suggested that cohomology at least probably should.)

I'm posting this here as idle speculation because really, I don't know topology very well; I don't know enough to try to answer this. (Maybe someone already has. John hadn't heard of such a thing, that much I can say.) I thought of asking MathOverflow... but I was afraid I wouldn't be able to understand any answer I got! So yeah, I'm posting this here.

-Harry

2014 Aug 11

16:56:00 - Quick thoughts on computer-verified proofs

So apparently there's now a computer verified proof of the Kepler Conjecture.

Obviously computer verification is now becoming a "thing", with homotopy type theory and Vladimir Voevodsky pushing it and all that. (I still have no idea why there is topology involved.) Not much of a thing, mind you; people still aren't doing it a whole lot. But they're talking about it like it's actually a thing that could happen, rather than something totally impractical, and some people are actually taking steps to make it possible. So writing proofs formally as a routine thing just might be a possible future.

But that's not what I mean to focus on right now. Right now, computer-verified proofs basically only seem to happen in two cases:
1. People who are trying to push computer verification, and so are building up a library or showing off an example
2. There is actually some amount of uncertainty about a proof.

And I mean, this latter is a bit funny, because it means that computer verification is to a large extent starting with the *hardest*, most complicated proofs!

And, like, for computer verification to ever really catch on, there are going to have to be libraries of formal theorems for use. And the people writing these computer-verified proofs to a large extent presumably don't yet have those to go on, except for the most basic things, instead writing them themselves.

So I wonder if this is how things start -- libraries getting written to do something complicated and horrible, and only *then* getting used to do the ordinary.

(This leaves me with visions of math having some of the problems programming currently has -- libraries with horrible interfaces that everyone uses anyway because nobody wants to rewrite it, or they can't get anyone else to use it... I don't know, I think the nature of mathematics would serve to mitigate that effect.)

-Harry

2014 Aug 10

00:35:00 - Back to ordinal multichoose for a moment

So!

I should really be working on writing up integer complexity stuff at the moment. But, the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. Done thinking about this for now, back to real work now, but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ, that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals. One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particular nice cases. In particular, the ↓ operations when β is finite.

Say α = ω^(α_0) a_0 + ... + ω^(α_r) a_r + a, where I'm writing α in Cantor normal form, and separating out the finite part "a" as worthy of special attention. Then we have, for k finite and nonzero,

((α↓k)) = ω^(α_0 k) a_0 + ... + ω^(α_r k) a_r + ((a|k)).

Pretty nice, no? The choose version is the same, except the multichoose at the end becomes a choose. Unfortunately, once k becomes infinite, things become complicated fast.
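
(A quick sanity check of my own, not something from my notes: take α=ω+2 and k=2. The formula gives ((ω+2↓2)) = ω^(1·2)·1 + ((2|2)) = ω²+3. Directly: order the weakly decreasing pairs (f(0),f(1)) by f(1) first, since the higher place is more significant; each of the ω finite values of f(1) contributes a block of order type ω+2 (the possible values of f(0)), and then f(1)=ω contributes 2 pairs and f(1)=ω+1 contributes 1, for a total of (ω+2)·ω + 3 = ω²+3. They match.)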

Also last time I was trying to resolve the question, for k finite, does one always have ((α↑k))≤((α↓k))? (And the same for the choose versions.) I had thought this was true, and spent some time trying to prove it, but now I can report a counterexample: α=ω^(ω+1)+1, k=2. On the left hand side we get ω^(ω2+1)+ω^(ω+1)+1, and on the right hand side we get ω^(ω2+1)+1. At least, I'm pretty sure I calculated both of those correctly. It's also a counterexample for the choose versions; in that case, we get the same things but without the +1s on each.

So, there's that. But the big thing is... how did I not notice this before? There's a symmetry law! The two operations are very closely related!

With ordinary multichoose, we have ((n+1|k))=((k+1|n)), since both are equal to (n+k|n,k) (I write it that way, rather than (n+k|k), to emphasize the symmetry.) With these ordinal versions of multichoose, we get

((α+1↑β)) = ((β+1↓α))

The proof is pretty simple, too! As in, you can straight-up construct an order isomorphism between these. I feel a little silly for not noticing this, but, this is really cool!
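
(A tiny example that convinced me: take α=1, β=ω, so the claim is ((2↑ω)) = ((ω+1↓1)). The right-hand side is just ω+1 -- a weakly decreasing function from 1 to ω+1 is simply an element of ω+1. On the left, a weakly decreasing function from ω to 2 is 1 on an initial segment of length γ≤ω and 0 afterwards, and lexicographically -- earliest place most significant -- these are ordered by γ, so the order type is again ω+1.)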

I feel like this also indicates that ((α↓β)) is somehow the "more fundamental" of the two operations. Because, see, ((α↑β)), well, if α=0, we know what it is; if α is a successor ordinal, we can apply the symmetry law to express it in terms of ↓ and if α is a limit ordinal, well, it's continuous in α, so it's a limit of things that can be expressed in terms of ↓. With ((α↓β)), if α=0 we know what it is, and if α is a successor we can switch it around, but if α is a limit ordinal, well, we don't have anything like that. (EDIT next day: Although, ((α↓β)) is pretty easy to compute in the case where α is a limit ordinal -- see below. The successor case is the only actually hard case.)

So, yeah. Now to put that away (for now, anyway) and get back to work...

EDIT next day: Actually, let me say a bit more about the computation of ((α↓β)) that I left out at first. Say we write α=α'+a, where α' is either 0 or a limit ordinal, and a is finite. Then in fact this breaks down into ((α'↓β))+((a↓β)). And the first of these is easy -- if β=0 we know it; if β is positive and finite it's given above (just multiply all the exponents in the Cantor normal form on the right by β); and if β is infinite, it's also easy (multiply by β+1 instead of β). Problem is the ((a↓β)) part! That seems to be complicated. Well, by the symmetry rule above, figuring out ((n↓β)) is equivalent to figuring out ((α↑k)), but, well, you'll notice I didn't give a formula for that -- that seems complicated in general. It might be doable, though. (Note that for any given β, since ((α↓β)) is strictly increasing in α, one has ((n↓β))<((ω↓β)) which does at least mean that when you compute ((α↓β)), the purely infinite part and the finite part do not interfere with one another.)

-Harry

2014 Jul 18

22:18:00 - Why there is no natural exponentiation (sort of)

It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^b.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c)=p(a,b)⊗p(a,c)
3. p(a,b⊗c)=p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: A degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^b≤a. I.e., it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,
deg(a^b) = deg(a^(×b)) = deg(a) * b.
And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,
deg(a^(⊗b)) = deg(a) × b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b

And with this, it becomes impossible! Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f(n) is a function from naturals to naturals which A. is weakly increasing, and B. satisfies f(n^k)=k*f(n), and these together are sufficient to show that f(n)/f(m) = (log n)/(log m), contradiction.
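
(For my own benefit, a sketch of how I'd see that last step -- hopefully I'm not garbling it: given m,n≥2 and any k≥1, pick j with m^j ≤ n^k < m^(j+1). Applying f, properties A and B give j·f(m) ≤ k·f(n) ≤ (j+1)·f(m), so, granting f(m)>0, f(n)/f(m) is within 1/k of j/k; but j/k is also within 1/k of (log n)/(log m), so letting k→∞ forces f(n)/f(m) = (log n)/(log m). In particular f(2)/f(3) would have to be (log 2)/(log 3), which is irrational -- no good for a ratio of natural numbers.)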

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it, and if I missed something in the literature and it's not new, I'll find out...)

-Harry
