By "irresponsible speculation", I mean "speculation without having done my homework first", i.e., without having even tried to look at the existing literature beyond this paper, or really having tried anything at all.

So, Juan Arias de Reyna recently pointed out to me the following paper: Commuting Probabilities of Finite Groups, by Sean Eberhard.

Something clever I thought of today:

So I've got this enormous file on my hard drive -- about 35 GB. While it's enormous, it's not particularly important; I haven't deleted it only because there's no need to. But it's in a directory with lots of other important stuff; actually, it's in a subdirectory of a directory with lots of important stuff. So when I'm doing backups, it's a pain, because I have to go in and copy everything in this directory

Hey hey hey here you go!

So yeah, obviously this was mostly written quite a while ago (it's just Chapter 4 of my thesis, after all), but I'm only putting it on arXiv now....

I'd write more about it, but honestly, if you're still reading my LJ, chances are you've heard a lot of it before. (At least, I think I've written about it here before...) So I'll stop here. Well, unless someone goes in the comments and posts "What the hell is this?", then probably I'll explain. :P

-Harry

So I've moved from Truth House to Gregory House. (I guess that was a few weeks ago now.)

First house meeting was Wednesday. I managed to get myself elected treasurer (well, nobody else ran), so that's good. It only counts for 2 hours per week, though; not getting out of chores entirely. (I also ran for secretary, which only counts for 1 hour, but didn't win.)

Unfortunately for various reasons I haven't actually been hanging around Gregory very much so far (probably spent more time at Truth still), or when I have it's largely been empty. People don't seem to stay up very late there. Oh well. Obviously part of why I moved to Gregory is that it would be quieter than Truth, being the no-drugs-no-drinking house, but so far it's been a bit too quiet.

Well, not in the literal sense -- the big reason I moved to Gregory was so that I could get a single! If I stayed in Truth another year I'd have to take a roommate. I wasn't planning to be in Ann Arbor another year at all, since I figured I'd have found a job of some sort, but here I am. But I didn't sign up for Gregory until really late, so while I did get a single, I got the last one; and Gregory has a few rooms that are different from the rest. My room is tiny, and right by the kitchen and common areas. Oh well. Still better than having a roommate, I think!

So, yeah, I can't say a lot at the moment. But, we'll see how this goes. I expect it will go well. Once we get power back, anyway; there was quite a storm tonight...

-Harry

What would happen to homotopy theory if we used a more general notion of homotopy?

Let me make a formal definition: Given topological spaces X and Y and continuous maps f,g:X→Y, we'll say f and g are C-homotopic if there exists a connected space Z with points z

So apparently there's now a computer-verified proof of the Kepler Conjecture.

Obviously computer verification is now becoming a "thing", with homotopy type theory and Vladimir Voevodsky pushing it and all that. (I still have no idea why there is topology involved.) Not much of a thing, mind you; people still aren't doing it a whole lot. But they're talking about it like it's actually a thing that could happen, rather than something totally impractical, and some people are actually taking steps to make it possible. So writing proofs formally as a routine thing just might be a possible future.

But that's not what I mean to focus on right now. Right now, computer-verified proofs basically only seem to happen in two cases:

1. People are trying to push computer verification, and so are building up a library or showing off an example.

2. There is actually some amount of uncertainty about a proof.

And I mean, the latter is a bit funny, because it means that computer verification is to a large extent starting with the *hardest*, most complicated proofs!

And, like, for computer verification to ever really catch on, there are going to have to be libraries of formal theorems for use. And the people writing these computer-verified proofs to a large extent presumably don't yet have those to go on, except for the most basic things, instead writing them themselves.

So I wonder if this is how things start -- libraries getting written to do something complicated and horrible, and only *then* getting used to do the ordinary.

(This leaves me with visions of math having some of the problems programming currently has -- libraries with horrible interfaces that everyone uses anyway because nobody wants to rewrite them, or can't get anyone else to use a replacement... I don't know, I think the nature of mathematics would serve to mitigate that effect.)

-Harry

So!

I should really be working on writing up integer complexity stuff at the moment. But, the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. I'm done thinking about this for now, back to real work, but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ, that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals.
One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particularly nice cases. In particular, the ↓ operations when β is finite.

Say α=ω
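As a sanity check on the finite case: for finite n and k, both ((n↑k)) and ((n↓k)) just count weakly decreasing functions from a k-element chain to an n-element chain, so their size should be ordinary multichoose ((n|k)) = (n+k-1 | k). A quick brute-force check (my own illustration, not part of the original notes):

```python
from itertools import product
from math import comb

def weakly_decreasing_count(n, k):
    # Count weakly decreasing functions from {0,...,k-1} to {0,...,n-1}
    # by brute-force enumeration.
    return sum(1 for f in product(range(n), repeat=k)
               if all(f[i] >= f[i + 1] for i in range(k - 1)))

# For finite n, k this matches ordinary multichoose ((n|k)) = C(n+k-1, k):
assert all(weakly_decreasing_count(n, k) == comb(n + k - 1, k)
           for n in range(1, 6) for k in range(5))
```

Restricting to strictly decreasing functions would similarly recover ordinary choose (n|k).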

It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^{b}.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c) = p(a,b)⊗p(a,c)
3. p(a,b⊗c) = p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: a degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^{b}≤a. I.e., it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,

deg(a^{b}) = deg(a^{×b}) = deg(a) * b.

And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,

deg(a^{⊗b}) = deg(a) × b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b

And with this, it becomes impossible!
Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f(n) is a function from naturals to naturals which A. is weakly increasing, and B. satisfies f(n^{k}) = k*f(n); and these together are sufficient to show that f(n)/f(m) = (log n)/(log m), contradiction.

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it, and if I missed something in the literature and it's not new, I'll find out...)

-Harry
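The step from A and B to the contradiction can be spelled out; here is a sketch (my reconstruction, with the extra assumption that f is not identically zero, which the desiderata should force):

```latex
% Assume f\colon \mathbb{N}\to\mathbb{N} is weakly increasing,
% f(n^k) = k\,f(n), and f(m) > 0 for m \ge 2.
\begin{align*}
&\text{Fix } m, n \ge 2. \text{ For each } k, \text{ choose } j
  \text{ with } m^j \le n^k < m^{j+1}. \text{ Monotonicity gives}\\
&\qquad j\,f(m) = f(m^j) \;\le\; f(n^k) = k\,f(n) \;\le\; f(m^{j+1}) = (j+1)\,f(m),\\
&\text{so } \frac{j}{k} \le \frac{f(n)}{f(m)} \le \frac{j+1}{k}.
  \text{ Since } j = \lfloor k \log_m n \rfloor, \text{ letting } k \to \infty \text{ yields}\\
&\qquad \frac{f(n)}{f(m)} = \frac{\log n}{\log m}.
\end{align*}
% Taking n = 3, m = 2: f(3)/f(2) = \log 3 / \log 2 is irrational,
% which is impossible for a ratio of natural numbers.
```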

[I wrote this as a letter initially, figured I'd adapt it into an LJ entry. Some of this is a bit out of date as it's originally from July 2nd. Oh well.]

So this year most of the ICPSR people are from China and also most of them don't speak much English. Among these people is Chaoqun Mei (using Western name order) -- she says we can just call her by her last name since it's easier for us to say -- who is here studying some sort of math or statistics thing? Communication is difficult. But the main reason I am pointing her out is not because of her as such but rather because, uh, she brought a kid with her.

"Peter", we call him in English. He's 8 years old (and doesn't speak much English). It's kind of a weird situation -- how do we have a contract with an 8-year-old? I mean, OK, his mom signed for him, and is paying for him, but...? And, like, are we going to make him do chores? He can't wash the dishes, he isn't tall enough! Not sure he can operate a vacuum cleaner either. I think the current plan is that Beatrix (who is work manager at the moment) will talk to Mei and try to work something out. (Though obviously Peter doesn't eat as much as we do!)

Anyway, Peter is adorable. Though unsurprisingly he is also kind of a pain in the ass. But I play with him a bunch and he likes me a lot. He likes to play pool (he cheats). Mostly though I've been playing with him outside -- often in that tiny backyard of ours. Often just tossing a frisbee or a softball back and forth. The other day we were playing a game where we would take turns tossing the softball onto the chair-swing and trying to get it to stay... then he threw the softball onto the roof. Guess he's not getting that back!

Also yesterday we were playing hide and seek. At one point I hid in a pretty obvious spot because he gave me very little time, but he missed me anyway...
and then seemed to forget about it and started throwing gravel at a birds' nest instead.

Also yesterday: me being constantly worried I was going to get poison ivy. There was a bunch of poison ivy growing on the back of the house -- not nearly so much as there used to be, thankfully. Shane has been going and killing it by spraying soap-salt-vinegar on it. But I was constantly worried that playing with Peter in the back I was going to get poison ivy still somehow (you can still get it from dead plants, apparently). Especially when he went and hid in the wooded area behind Triangle (which is on their property, not ours -- Shane would have been doing nothing about that).

I mean it takes several days to show up so I could still have gotten it and not realized! Gods, what a terrible plant. I don't think I did; my worries are probably unfounded. But, ugh. I mean it's pretty easy to recognize in vine form, because the hairy vines are a giveaway, but when it's just on the ground? Sure, sure, "leaves of three, let it be", but *so many* plants have leaves of three. Goddamn useless rhyme. I found it useless as a kid and I find it useless now. Well, Peter seems to be fine too.

[Since it's now over a week later, I can verify that neither of us ended up getting poison ivy.]

-Harry

So, do you remember this old entry? Well, Jeff is having me turn it into an actual paper. We'll see if it's new; I think it is, but I should actually, y'know, ask a set theorist.

(At this point you may be asking: Wait, why is Jeff relevant? Didn't you, y'know, finish your degree? Yes, but he's continuing to help me out for the time being. Ordinarily he'd have me forge ahead with complexity-related stuff, but I said I could get this done super-quick, so he's OK with it.)

Anyway, in my previous exploration of the subject, I mentioned that continuing past exponentiation into tetration and general hyper is pretty stupid for ordinals, but I never explained why. I thought I'd do that here.

I could actually go into quite a bit of detail on this, because I spent quite a bit of time thinking about it a few days ago, but I expect people would find it mind-numbing so I'll keep this short.

(Note: I am not claiming anything in this entry is new, except the second-to-last parenthetical. And while that's new, it's also kind of stupid. :P )

So what's wrong with ordinal hyper?

Let's start with ordinal addition. This is built up from the successor operation. To compute a+b, you apply the successor operation b times to a. Note that the resulting operation is continuous in b, by definition.

OK, that was pretty simple. How about ordinal multiplication? To compute ab, you add a to itself b times. Now here we have a choice; when we say "add a to itself b times", what we really mean is "start with 0, then add a to it b times". But are we adding a on the left or on the right? It makes a difference!

The correct choice is to add a on the right. As long as b is finite, of course, this makes no difference. But addition, recall, is
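The left/right asymmetry of ordinal addition is easy to see computationally. Here's a little sketch (my own illustration, not part of the original entry) of addition on Cantor normal forms, restricted to ordinals below ω^ω, showing that 1+ω = ω while ω+1 > ω:

```python
def cnf_add(a, b):
    """Add two ordinals below omega^omega given in Cantor normal form.

    An ordinal is a list of (exponent, coefficient) pairs with exponents
    strictly decreasing; e.g. omega*2 + 3 is [(1, 2), (0, 3)].
    Addition absorbs every term of a smaller than b's leading term.
    """
    if not b:
        return list(a)
    e, c = b[0]
    kept = [t for t in a if t[0] >= e]   # terms of a that survive
    if kept and kept[-1][0] == e:        # equal exponents: coefficients add
        return kept[:-1] + [(e, kept[-1][1] + c)] + list(b[1:])
    return kept + list(b)

one, omega = [(0, 1)], [(1, 1)]
assert cnf_add(one, omega) == omega                # 1 + omega = omega
assert cnf_add(omega, one) == [(1, 1), (0, 1)]     # omega + 1 > omega
```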

It seems to me that the common meaning of the phrase "no true Scotsman [fallacy]" has shifted quite a bit from the original meaning as I understand it.

Let's take Antony Flew's original example, which I've copied over from Wikipedia:

Imagine Hamish McDonald, a Scotsman, sitting down with his Glasgow Morning Herald and seeing an article about how the "Brighton [(England)] Sex Maniac Strikes Again". Hamish is shocked and declares that "No Scotsman would do such a thing". The next day he sits down to read his Glasgow Morning Herald again; and, this time, finds an article about an Aberdeen [(Scotland)] man whose brutal actions make the Brighton sex maniac seem almost gentlemanly. This fact shows that Hamish was wrong in his opinion but is he going to admit this? Not likely. This time he says, "No true Scotsman would do such a thing".

The honest thing to do when presented with a counterexample (that you agree with) is to openly fall back to a weaker position. Hamish is doing that, in a sense -- restricting his claim to

(Note, of course, that if you have to retreat to a weaker position sufficiently often, you do have to consider the possibility that your original position really was just totally wrong and you are making a mistake in trying to incrementally salvage it.)

But the way I see people using the phrase these days is something entirely different. Rather, it seems that to most people, the "no true Scotsman" fallacy is when you say someone isn't part of group X when they say they're a member of X.

Not only is this not the "no true Scotsman" mistake (in the original sense), it isn't even necessarily wrong. Some groups are essentially defined by self-identification, but not all are.

Now it's worth noting here that many groups are defined as empirical clusters -- they're defined extensionally, not intensionally. Suppose that I claim "No member of group X does Y", and someone else replies "I'm a member of group X and I do Y." And let's say I also happen to know this person, and I know that they have very little in common with the people usually denoted by X. Then I think my best reply would be, "I'm sorry, but you seem to have very little in common with the people usually denoted by X. I don't think most people, when they use the word X, are referring to a group that includes you. Seeing as X is a category that is defined extensionally, by empirical clusters of similarity, I don't really think it can be said that you are an X, at least, not if the word X is used in the standard way. In particular, you differ regarding Y, Z, and W, all of which would generally be considered essential. Hence, I hope you don't mind if I continue to use the word X in this way -- meaning, people who can be described as some combination of Y, Z, and W, typical examples of which are A, B, and C -- rather than in a way that includes you. If you really object, I will say X' rather than X to denote the cluster that I am talking about, and say X'' to denote the cluster that you are talking about, but I hope you realize that, in my opinion, outsiders will probably read X as meaning X' rather than X''."

What I've done there now really does look a lot like No True Scotsman (in the original sense)! I've insisted on using a particular definition of a word, in a way that's not totally transparent. But, unlike the original "no true Scotsman" example:

1. I'm making entirely clear what I am doing.

2. I am doing so in a way that is in concordance with standard usage, rather than going against it and trying to sneak in connotations.

3. While my definition isn't totally transparent, I have tried to make it as clear as I can with some combination of defining features and typical examples. It's not totally transparent, but neither is it totally opaque and mutable.

4. I am ultimately offering unambiguous terminology, rather than getting into an argument over definitions. Remember, if an argument didn't start out being about definitions, don't let it become about definitions! (And if you did start an argument about definitions, hopefully it's only because you know what you're doing and had a good reason for doing so.)

By the way -- let's note here that the above examples dealt in universal claims and counterexamples. But in arguments about the real world, universal claims are rarely appropriate. That was the original form, though, so I've left it that way -- and using more realistic claims would have made the examples more complicated.

Point is, claiming that someone is not an X when they claim to be an X is not necessarily incorrect, and bears little relation to the original meaning of the term "no true Scotsman", though they may often coincide.

-Harry

So here's an interesting paper that really deserves to be better known: On On_{p}, by Joseph DiMuro.

Here, On_{2} refers to the nimbers. Let me review those briefly.

One can put on the ordinals two unusual recursively-defined operations, known as nim-addition and nim-multiplication, that turn them into an algebraically closed field of characteristic 2. (Except of course that they're a proper class and not a set.) If one restricts to just the finite ordinals -- the whole numbers -- these form a subfield.

Nim addition is very easy to describe. On whole numbers, it's just bitwise xor, or binary addition without carries, and on general ordinals, it's much the same. (Write them in Cantor normal form and do bitwise xor on corresponding coefficients.) Nim multiplication is complicated and I find it quite confusing, but it's certainly computable on whole numbers, as is nim inversion.
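To make that concrete, here's a small sketch (mine, not from DiMuro's paper) of nim-addition and of Conway's recursive definition of nim-multiplication on whole numbers, a⊗b = mex{ a'⊗b ⊕ a⊗b' ⊕ a'⊗b' : a'&lt;a, b'&lt;b }:

```python
from functools import lru_cache

def nim_add(a, b):
    # Nim-addition on whole numbers is just bitwise xor, i.e. binary
    # addition without carries.
    return a ^ b

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # Conway's recursion: a*b is the minimal excludant (mex) of the set
    # { a'*b ^ a*b' ^ a'*b' : a' < a, b' < b }.
    options = {nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
               for x in range(a) for y in range(b)}
    m = 0
    while m in options:
        m += 1
    return m

# {0, 1, 2, 3} form the field F_4 under these operations; for instance,
# 2 generates the multiplicative group: 2*2 = 3 and 2*3 = 1.
assert nim_mul(2, 2) == 3 and nim_mul(2, 3) == 1
```

This brute-force recursion is hopelessly slow for large numbers, but it shows why the operations are at least computable.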

One thing worth noting is that the nimbers provide a very concrete (if impractical) description of the algebraic closure of F_{2}; it consists of precisely the nimbers below ω^ω^ω. I mean, nim multiplication and inversion and square roots and such are all quite confusing, but they're certainly computable on ordinals that aren't too large (at least below ε_{0}, I should think). Maybe even finding roots of polynomials is computable? I'm less certain of this. This is really not something I'm an expert on.

Anyway, naturally there's the question of, can you come up with an analogue of the nimbers for other (positive) characteristics? Exactly what counts as an "analogue" is arguable, but few people seem to have really gotten anywhere close. S. Norton, back in the 70s, found a recursive characterization of ternary addition without carries, and here F. Laubie provides a way of doing the same thing for any prime p, but it's multiplication that's truly the hard part. (Side note: Laubie's characterization is really complicated. I'm pretty sure I have a much simpler one. Maybe it's publishable?)

Well, in "On On_{p}", DiMuro finally provides a characteristic p analogue of the nimbers, and it sure seems like he's done a pretty good job of it. Now, I've only skimmed the paper; I don't really understand nimbers, so actually reading it would be a bit difficult. Still, he's got a lot of the analogy down. He dispenses with recursive definitions for the operations, in favor of just making them work. He does though prove that addition in On_{p}, by his definition, turns out to be just base-p addition without carries (extended to the ordinals in the obvious way), so that at least can be handled by Laubie's recursion (or mine). But yeah, there's a lot of analogy there. In particular, ω^ω^ω ends up being the algebraic closure of F_{p}. And the operations are computable! So this gives a concrete description of the algebraic closure of F_{p}! He doesn't give any simple description of multiplication like there is for the nimbers (well, to the extent that that can be called "simple"), but it's still computable. He doesn't address the question of solving polynomials effectively; hopefully someone else will take this further and do that.
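Base-p addition without carries, which DiMuro's On_{p} addition reduces to on whole numbers, is simple enough to write down directly (a quick sketch of my own):

```python
def onp_add(a, b, p):
    # Add whole numbers digit by digit in base p, discarding carries.
    # For p = 2 this is bitwise xor, i.e. nim-addition.
    total, place = 0, 1
    while a or b:
        total += ((a % p + b % p) % p) * place
        a, b, place = a // p, b // p, place * p
    return total

assert onp_add(5, 3, 2) == 5 ^ 3      # p = 2 recovers xor
assert onp_add(5, 5, 3) == 7          # 12_3 + 12_3 = 21_3, no carries
```

The hard part, as the entry says, is a recursive characterization in the style of Norton and Laubie, not the operation itself.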

At the end he raises the suggestion of "On_{0}", which perhaps might give a concrete (if impractical) description of the algebraic closure of Q (though you'd need to go beyond ω^ω^ω). This is funny; I'd always thought of the nimbers as basically the characteristic 2 analogue of the surreals, but obviously that analogy is... well, yeah, I guess there really isn't much of an analogy there. So this is interesting. But he doesn't pursue it as it would be harder. (It's worth noting that, in the nimbers, if you want the algebraic closure of F_{2}(t), you have to go well beyond ε_{0}, and according to DiMuro finding the exact ordinal is still an open problem, though Lenstra's old paper on the matter offers an upper bound and conjecture.)

So, yeah, this is not a topic I intend to pursue or anything (though maybe I should write up that recursion). But -- how did I not know about DiMuro's paper? This really should be better-known.

EDIT: Perhaps I should note -- maybe part of the reason is because it's so hard to find. I only stumbled across it incidentally; it doesn't say "nim" or "nimber" anywhere in the abstract, so I couldn't turn it up in searches. If I had thought to search on "on_2", that would have turned it up, but...

-Harry

You're all familiar with the disagreement hierarchy, right? Actually, I'm not sure how helpful it is most of the time, as I feel like a lot of the arguments I see (at least on the internet) consist of people arguing at cross-purposes rather than actually disagreeing with each other. Nonetheless, I would like to suggest two revisions to it.

Revision 1: Add level DH4.5: Nonconstructive refutation.

The archetypical example of refuting an argument is finding a hole in it -- "Your inference of P is unjustified given only what you've established so far." (Or, better yet, "Your inference of P is unjustified given only what you've established so far; indeed, here is an example where what you've established so far holds, but P does not.") But it's possible to show an argument wrong without actually finding a hole in it. The classic example is showing that an argument proves too much. If an argument proves too much, you can conclude that it's wrong -- but you still don't necessarily know exactly why it's wrong. It's still a form of refutation and should be above counterargument, but it's not as good as a constructive refutation.

Revision 2: Replace DH6, "Refuting the central point", with "Refutation and counterargument".

"Refuting the central point" doesn't really strike me as qualitatively different from "refutation". Honestly, to my mind, if you're refuting some peripheral thing, that hardly even counts. When I argue I like to spot the other person lots of points because I want to get to the central disagreement as quickly as possible; arguing over peripheral stuff is mostly a waste of time. Of course, sometimes peripheral stuff becomes central later, but you can always un-spot a point.

Anyway, point is, what is qualitatively different is refuting

Instead, I'd just like to point out that this old entry is now unlocked.

And so today I turned in the final version of my thesis and got my certificate of completion. If there are any more mistakes in there, I'm not fixing them. At least, not until I try to get Chapters 4 and 5 published (though Jeff tells me Chapter 5 will need to be split before it can be published).

It's going on ProQuest of course, but also I figure I may as well just upload it to my website at UMich? (Hm, that should really be updated.) There's no reason it shouldn't be freely available. I'll also finally put the code up, for those of you who want it. A warning -- the code is far from readable; I didn't really have time to go back and make it readable, and since it wasn't officially submitted to the school, I didn't really have to. I did at least take the time to document all the functions, but I didn't take the time to put them in a sensible order. (Juan Arias de Reyna, meanwhile, tells me that he may rewrite the whole thing in Python.) Well -- I'll post an update here once I've done that.

So, we got a letter from Comcast the other day. These are only ever one of a few things: bills, advertisements, or, as in this case, letters informing us that someone in the house has been caught torrenting copyrighted material and asking us not to do that.

Fortunately we don't get so many of these -- not enough that they're actually going to cut off our internet or anything. Maybe about 1 per semester. Used to be a much bigger problem before I was here, though; apparently it's the reason why, not too long before I moved in, we started using Comcast in the first place (we got in trouble with our previous ISP).

Anyway. Whenever one of these comes up, naturally there is the question: what was it? Usually it's nothing interesting, but the other year there was a case of Scooby Doo porn.

So what was it this time? Game of Thrones. Even though we have HBO -- on demand, even. (It's not that expensive when you're splitting the cost 50 ways.) Even though this should be well-known to anyone living here, seeing as so many people gather around the TV for it every Sunday.

I can't say that there's

Today was my defense. It went well. I "broke everybody's legs", as Heidi wished me yesterday. Jeff had me make slides for it a few days ago, so I mostly used the slides rather than the blackboard; fortunately we didn't have much trouble getting the projector set up.

Committee was Jeff, Andreas Blass, Alexander Barvinok, Martin Strauss, and Kevin Compton (from the CS department). Karen Smith showed up as well. Also Julian (my academic older brother) happened to be in town so he was there. And various other people; this wasn't supposed to be a complete list, so let me head off the possibility before it turns into one. And of course a number of my housemates showed up -- Beatrix, Angus, Seth, Noelle... perhaps one or two more I'm forgetting? Not sure that any of them understood it besides Beatrix.

Still, the topic obviously was pretty accessible to those with a bit of math background (though I did make a point of stopping to explain what ω

In a few minutes, I'll get back to work. But first...

So I only just learned today that -- like "The Star-Spangled Banner" -- "Hatikvah" is also a combination of a pre-existing poem and an unrelated old tune. Now, admittedly, the Star-Spangled Banner just directly copies both its sources, whereas Hatikvah modifies things a bit more. (The tune is a big difference if you compare directly to "La Mantovana"; less so if you compare to "Carul cu boi".)

Still, it struck me as an interesting coincidence. Is this a common thing among national anthems? I mean, I could easily believe that adapting them from preexisting poems is, but also using unrelated existing melodies? That would be a bit more surprising.

Well, if I had the time, I'd start going down Wikipedia's list of national anthems and counting. But I don't right now, so I'm just going to note this coincidence as one to follow up on later. Or let someone else count.

-Harry

So! Juan Arias de Reyna and Jan van de Lune recently posted to arXiv their paper "Algorithms for determining integer complexity". In it, they A. improve the known bounds on time complexity for computing ||n||, and B. get a new bound on ||n|| that works for almost all n.

(Hey, I'm blogging about things adjacent to my work again! Well, a little, anyway. More will have to wait.)

To review -- a big problem in integer complexity is determining the limsup of ||n||/(log n), which I'll denote C
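For anyone new to this: ||n|| is the least number of 1s needed to write n using only +, ×, and parentheses, and it's computable by a straightforward (if slow) dynamic program over sums and factorizations. A sketch of that naive approach (mine; the paper's point is precisely to do much better than this):

```python
def complexities(N):
    # c[n] = ||n||, the integer complexity of n, for 1 <= n <= N.
    # Recursion: ||1|| = 1, and ||n|| is the minimum of ||a|| + ||b||
    # over all splits n = a + b and all factorizations n = a * b.
    c = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

c = complexities(100)
assert c[6] == 5          # 6 = (1+1)*(1+1+1)
assert c[10] == 7         # 10 = (1+1)*(1+1+1+1+1)
```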

Stopping by here briefly to record this. (I think I may actually begin posting some more once my defense and everything is done. For now, though...)

SO! Yesterday was the Microsoft College Puzzle Challenge. (Apparently "college" includes grad students. And people who finished their PhDs a year ago.) It's a 10-hour puzzlehunt for teams of 4 people. (You have to be in one of several particular locations to do it.) They do it every year but I'd not done it before. Dave Putnins was running a team, Good Fibrations. (A team name with a history of doing well at CPC, winning UMichigan every year since 2010 and winning the whole thing in 2012.) The team was him, me, Jordan (hence the "and people who finished their PhDs a year ago" above), and Leda, who is not from the math department but instead from the solar car team (and the only actual undergrad on the team). There was another math department team, consisting of Ari, Chris, Alex, and Dave Prigge; I'm not sure what their team name was.

Long story short, we came in 6th overall and won UMichigan. (Second place UMichigan was 20th place overall.) We didn't solve the meta; if we had we'd have been 3rd. We were also damn close to 5th... more on that in a bit. I was kind of hoping for that Xbox One (which, yes, I would probably just use as an expensive Geometry Wars machine), but, a good showing nonetheless.

A note on the format: The format was apparently a little different than in previous years. Instead of the winner being the first to solve the meta, there was a scoring system. Solving a puzzle was worth 100,000 points ("dollars"; there was a Las Vegas theme this year, ugh), while solving the meta was worth 1,000,000; you could keep going after the meta for more points until time ran out at 10:00. Not sure how much the pre-event puzzles were worth.

Additionally, on the ordinary (non-meta) puzzles, there was a bonus for solving the puzzle early -- 10,000 to the first team to solve it, around 7,000 for the next team, and decreasing pretty quickly; you had to be very fast to get more than 4,000 on bonus. Much more important than the bonus, though, was the betting. On the ordinary (non-meta) puzzles, you could bet up to 10,000 points that your first answer was right (much larger than you were likely to get from the bonus). You could not bet on answers beyond the first. Also, if you bet on your first answer, but you got it wrong with one of the "we'll give you a hint" possibilities, the bet would just be cancelled (though you still couldn't bet on later answers).
