Chronicles of Harry

2014 Nov 11

22:17:00 - A neat paper about finite groups, and some irresponsible speculation

By "irresponsible speculation", I mean "speculation without having done my homework first", i.e., without having even tried to look at the existing literature beyond this paper or actually really tried anything at all.

So, Juan Arias de Reyna recently pointed out to me the following paper: Commuting Probabilities of Finite Groups, by Sean Eberhard.

EDIT DEC 1: The paper has been updated; the following refers to the original version of the paper. See the comments for discussion of the updated version.

So: If you have a finite group, you can consider the probability that a randomly chosen ordered pair of elements from it commutes. You could then consider the set of all probabilities obtained this way; this is some set of rational numbers between 0 and 1.
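
(Just to make the object concrete: here's a quick brute-force Python sketch -- mine, not anything from the paper -- computing the commuting probability of a group given as an explicit list of permutations.)

    from itertools import product

    def compose(p, q):
        # Compose permutations-as-tuples: (p o q)(i) = p(q(i)).
        return tuple(p[i] for i in q)

    def commuting_probability(group):
        # Fraction of ordered pairs (g, h) with gh = hg.
        commuting = sum(1 for g, h in product(group, repeat=2)
                        if compose(g, h) == compose(h, g))
        return commuting / len(group) ** 2

    # S_3, listed out explicitly; its commuting probability is 3/6 = 1/2
    # (number of conjugacy classes divided by the order of the group).
    s3 = [(0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0), (2, 0, 1)]
    print(commuting_probability(s3))  # 0.5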

In this paper, Eberhard shows that this set is in fact reverse well-ordered! Its order type is of the form ω^α, where 2≤α≤ω². (Though Juan Arias de Reyna points out that it's not too hard to find examples that show that in fact α≥ω, so the order type is at least ω^ω.) He also shows that all the limit points of this set are rational. I think it should be pretty obvious why I'm interested in this! (Even though I have no plans to actually study this; my hands are full as it is.)

Now for the irresponsible speculation:

1. Instead of just doing finite groups, one could generalize to compact groups; one can consider probabilities there as well. How would this affect the set? It would be really nice if this just gave you the closure of the probability set for finite groups! Though Arias de Reyna tells me it's conjectured that the finite group commuting probability set is closed in (0,1], so that would be saying that using general compact groups only gives you 0 in addition. (I mean, it certainly does give you 0 in addition!)

2. I'm reminded of some work of John Wiltshire-Gordon and Gene Kopp. They considered probabilities of randomly chosen elements from a compact group satisfying some general word; the case of commuting probabilities is the case where the word w is aba⁻¹b⁻¹. I wonder if the same phenomenon might be seen for other words.

Probably it would be best to first look at words in one variable. Obviously using w=a generates an ω if we stick to finite groups and an ω+1 if we allow general compact groups -- not ω^ω, but still well-ordered. As for a² or a³... well, I don't know, and I don't really intend to work on it as I said above, but it seems vaguely plausible and it's an interesting question!

So yeah that's basically all I have to say on the matter.

-Harry


2014 Oct 22

04:12:00 - ~/enormousfiles

Something clever I thought of today:

So I've got this enormous file on my hard drive -- about 35 GB. While it's enormous, it's not particularly important; I've only kept it because there's been no reason to delete it. But it's in a directory with lots of other important stuff; actually, it's in a subdirectory of a directory with lots of important stuff. So when I'm doing backups, it's a pain, because I have to go in and copy everything in this directory except this one file in a subdirectory -- the file isn't important enough to be worth slowing down my backups for.

Solution I thought of today: make a new directory, ~/enormousfiles, put the file in there, and put a symlink to it in the original directory. Yay!
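
(For anyone who wants the same trick spelled out, a Python sketch -- the paths here are made up, of course:)

    import os, shutil
    from pathlib import Path

    big = Path.home() / "important-stuff" / "subdir" / "enormous-file"
    dest_dir = Path.home() / "enormousfiles"
    dest_dir.mkdir(exist_ok=True)

    target = dest_dir / big.name
    shutil.move(str(big), str(target))  # move the file out of the backup tree
    os.symlink(target, big)             # leave a symlink behind in its old spot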

-Harry


2014 Sep 8

20:20:00 - Addition chain defects on arXiv!

Hey hey hey here you go!

So yeah, obviously this was mostly written quite a while ago (it's just Chapter 4 of my thesis, after all), but I'm only putting it on arXiv now.

...I'd write more about it, but honestly, if you're still reading my LJ, chances are you've heard a lot of it before. (At least, I think I've written about it here before...) So I'll stop here. Well unless someone goes in the comments and posts "What the hell is this?", then probably I'll explain. :P

-Harry


2014 Sep 6

02:55:00 - Gregory House

So I've moved from Truth House to Gregory House. (I guess that was a few weeks ago now.)

First house meeting was Wednesday. I managed to get myself elected treasurer (well, nobody else ran), so that's good. It only counts for 2 hours per week, though; I'm not getting out of chores entirely. (I also ran for secretary, which only counts for 1 hour, but didn't win.)

Unfortunately, for various reasons I haven't actually been hanging around Gregory very much so far (I've probably still spent more time at Truth), or when I have, it's largely been empty. People don't seem to stay up very late there. Oh well. Obviously part of why I moved to Gregory is that it would be quieter than Truth, being the no-drugs-no-drinking house, but so far it's been a bit too quiet.

Well, not in the literal sense -- the big reason I moved to Gregory was so that I could get a single! If I stayed in Truth another year I'd have to take a roommate. I wasn't planning to be in Ann Arbor another year at all, since I figured I'd have found a job of some sort, but here I am. But I didn't sign up for Gregory until really late, so while I did get a single, I got the last one; and Gregory has a few rooms that are different from the rest. My room is tiny, and right by the kitchen and common areas. Oh well. Still better than having a roommate, I think!

So, yeah, I can't say a lot at the moment. But, we'll see how this goes. I expect it will go well. Once we get power back, anyway; there was quite a storm tonight...

-Harry


2014 Sep 5

02:12:00 - Idle question: What if we changed the definition of homotopy?

What would happen to homotopy theory if we used a more general notion of homotopy?

Let me make a formal definition: Given topological spaces X and Y and continuous maps f,g:X→Y, we'll say f and g are C-homotopic if there exists a connected space Z with points z_0 and z_1 and a continuous map h:X×Z→Y such that h(x,z_0)=f(x) and h(x,z_1)=g(x).

So, obviously, this is a more inclusive notion than our usual notion of homotopy. We can then talk about C-homotopy equivalence, C-contractibility, C-homotopy groups, etc. And certainly there are maps that are C-homotopic but not homotopic; let Y be connected but not path-connected, and consider two points in Y in different path components as maps from the one-point space.

But can we find less trivial examples of maps that are C-homotopic but not homotopic? What about examples that just straight up are *not* C-homotopic? What about examples of spaces that are C-homotopy equivalent, but not homotopy equivalent, as well as spaces that aren't C-homotopy equivalent at all? (Question I tried unsuccessfully to answer: Is the topologist's sine curve C-contractible?)

Do C-homotopy groups agree with the usual homotopy groups? Do our usual algebraic topology functors respect C-homotopy in addition to just homotopy? (I asked John about this, he suggested that cohomology at least probably should.)

I'm posting this here as idle speculation because really, I don't know topology very well; I don't know enough to try to answer this. (Maybe someone already has. John hadn't heard of such a thing, that much I can say.) I thought of asking MathOverflow... but I was afraid I wouldn't be able to understand any answer I got! So yeah, I'm posting this here.

-Harry


2014 Aug 11

16:56:00 - Quick thoughts on computer-verified proofs

So apparently there's now a computer-verified proof of the Kepler Conjecture.

Obviously computer verification is now becoming a "thing", with homotopy type theory and Vladimir Voevodsky pushing it and all that. (I still have no idea why there is topology involved.) Not much of a thing, mind you; people still aren't doing it a whole lot. But they're talking about it like it's actually a thing that could happen, rather than something totally impractical, and some people are actually taking steps to make it possible. So writing proofs formally as a routine thing just might be a possible future.

But that's not what I mean to focus on right now. Right now, computer-verified proofs basically only seem to happen in two cases:
1. People are trying to push computer verification, and so are building up a library or showing off an example.
2. There is actually some amount of uncertainty about a proof.

And I mean, this latter is a bit funny, because it means that computer verification is to a large extent starting with the *hardest*, most complicated proofs!

And, like, for computer verification to ever really catch on, there are going to have to be libraries of formal theorems for use. And the people writing these computer-verified proofs to a large extent presumably don't yet have those to go on, except for the most basic things, instead writing them themselves.

So I wonder if this is how things start -- libraries getting written to do something complicated and horrible, and only *then* getting used to do the ordinary.

(This leaves me with visions of math having some of the problems programming currently has -- libraries with horrible interfaces that everyone uses anyway because nobody wants to rewrite it, or they can't get anyone else to use it... I don't know, I think the nature of mathematics would serve to mitigate that effect.)

-Harry


2014 Aug 10

00:35:00 - Back to ordinal multichoose for a moment

So!

I should really be working on writing up integer complexity stuff at the moment. But, the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. Done thinking about this for now, back to real work now, but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ, that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals. One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particular nice cases. In particular, the ↓ operations when β is finite.

Say α = ω^(α_0)·a_0 + ... + ω^(α_r)·a_r + a, where I'm writing α in Cantor normal form, and separating out the finite part "a" as worthy of special attention. Then we have, for k finite and nonzero,

((α↓k)) = ω^(α_0·k)·a_0 + ... + ω^(α_r·k)·a_r + ((a|k)).

Pretty nice, no? The choose version is the same, except the multichoose at the end becomes a choose. Unfortunately, once k becomes infinite, things become complicated fast.
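
(Since the finite-k formula is so mechanical, here's a little Python sketch of it -- my own, and restricted to α<ω^ω so that the exponents are plain natural numbers:)

    from math import comb

    def multichoose(n, k):
        # Ordinary multichoose ((n|k)) = C(n+k-1, k).
        return comb(n + k - 1, k)

    def down_multichoose(cnf, k):
        # ((α↓k)) for finite k≥1 and α<ω^ω, with α given in Cantor normal
        # form as [(e_0, a_0), ..., (0, a)], exponents strictly decreasing:
        # multiply each infinite exponent by k; the finite part becomes ((a|k)).
        assert k >= 1
        return [(e * k, c) if e > 0 else (0, multichoose(c, k))
                for e, c in cnf]

    # α = ω^2·3 + ω + 2, k = 2:
    # ((α↓2)) = ω^4·3 + ω^2 + ((2|2)) = ω^4·3 + ω^2 + 3.
    print(down_multichoose([(2, 3), (1, 1), (0, 2)], 2))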

Also last time I was trying to resolve the question, for k finite, does one always have ((α↑k))≤((α↓k))? (And the same for the choose versions.) I had thought this was true, and spent some time trying to prove it, but now I can report a counterexample: α=ω^(ω+1)+1, k=2. On the left hand side we get ω^(ω·2+1)+ω^(ω+1)+1, and on the right hand side we get ω^(ω·2+1)+1. At least, I'm pretty sure I calculated both of those correctly. It's also a counterexample for the choose versions; in that case, we get the same things but without the +1s on each.

So, there's that. But the big thing is... how did I not notice this before? There's a symmetry law! The two operations are very closely related!

With ordinary multichoose, we have ((n+1|k))=((k+1|n)), since both are equal to (n+k|n,k) (I write it that way, rather than (n+k|k), to emphasize the symmetry.) With these ordinal versions of multichoose, we get

((α+1↑β)) = ((β+1↓α))

The proof is pretty simple, too! As in, you can straight-up construct an order isomorphism between these. I feel a little silly for not noticing this, but, this is really cool!
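
(For finite ordinals, where order type is just cardinality, the symmetry law reduces to the classical identity, and a brute-force check is easy -- a quick sketch of mine, nothing more:)

    from itertools import product
    from math import comb

    def weakly_decreasing(alpha, beta):
        # All weakly decreasing functions from {0,...,beta-1} to {0,...,alpha-1}.
        return [f for f in product(range(alpha), repeat=beta)
                if all(f[i] >= f[i + 1] for i in range(beta - 1))]

    for n, k in product(range(5), repeat=2):
        assert (len(weakly_decreasing(n + 1, k))
                == len(weakly_decreasing(k + 1, n))
                == comb(n + k, k))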

I feel like this also indicates that ((α↓β)) is somehow the "more fundamental" of the two operations. Because, see, ((α↑β)), well, if α=0, we know what it is; if α is a successor ordinal, we can apply the symmetry law to express it in terms of ↓; and if α is a limit ordinal, well, it's continuous in α, so it's a limit of things that can be expressed in terms of ↓. With ((α↓β)), if α=0 we know what it is, and if α is a successor we can switch it around, but if α is a limit ordinal, well, we don't have anything like that. (EDIT next day: Although, ((α↓β)) is pretty easy to compute in the case where α is a limit ordinal -- see below. The successor case is the only actually hard case.)

So, yeah. Now to put that away (for now, anyway) and get back to work...

EDIT next day: Actually, let me say a bit more about the computation of ((α↓β)) that I left out at first. Say we write α=α'+a, where α' is either 0 or a limit ordinal, and a is finite. Then in fact this breaks down into ((α'↓β))+((a↓β)). And the first of these is easy -- if β=0 we know it; if β is positive and finite it's given above (just multiply all the exponents in the Cantor normal form on the right by β); and if β is infinite, it's also easy (multiply by β+1 instead of β). Problem is the ((a↓β)) part! That seems to be complicated. Well, by the symmetry rule above, figuring out ((n↓β)) is equivalent to figuring out ((α↑k)), but, well, you'll notice I didn't give a formula for that -- that seems complicated in general. It might be doable, though. (Note that for any given β, since ((α↓β)) is strictly increasing in α, one has ((n↓β))<((ω↓β)) which does at least mean that when you compute ((α↓β)), the purely infinite part and the finite part do not interfere with one another.)

-Harry


2014 Jul 18

22:18:00 - Why there is no natural exponentiation (sort of)

It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^b.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c)=p(a,b)⊗p(a,c)
3. p(a,b⊗c)=p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: A degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^b≤a. I.e., it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,
deg(a^b) = deg(a^(×b)) = deg(a)·b.
And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,
deg(a^(⊗b)) = deg(a)×b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b

And with this, it becomes impossible! Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f(n) is a function from naturals to naturals which A. is weakly increasing, and B. satisfies f(n^k)=k·f(n), and these together are sufficient to show that f(n)/f(m) = (log n)/(log m), contradiction.
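
(To spell out that last step -- this is my reconstruction of the obvious argument, so take it as such: given m,n≥2 and any k, pick j with m^j ≤ n^k < m^(j+1). Property A plus property B give j·f(m) ≤ k·f(n) ≤ (j+1)·f(m), so f(n)/f(m) is trapped between j/k and (j+1)/k; since j/k → (log n)/(log m) as k→∞, we get f(n)/f(m) = (log n)/(log m) exactly. Taking m=2, n=3 makes this irrational, while f(n)/f(m) is a ratio of naturals -- and f(2)≥1, since desiderata 2 and 5 give p(2,ω) ≥ p(2,n) = 2^n for every n, hence p(2,ω) ≥ ω.)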

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it, and if I missed something in the literature and it's not new, I'll find out...)

-Harry


2014 Jul 12

18:36:00 - Peter, pool, and poison ivy

[I wrote this as a letter initially, figured I'd adapt this into an LJ entry. Some of this is a bit out of date as this is originally from July 2nd. Oh well.]

So this year most of the ICPSR people are from China and also most of them don't speak much English. Among these people is Chaoqun Mei (using Western name order) -- she says we can just call her by her last name since it's easier for us to say -- who is here studying some sort of math or statistics thing? Communication is difficult. But the main reason I am pointing her out is not because of her as such but rather because, uh, she brought a kid with her.

"Peter", we call him in English. He's 8 years old (and doesn't speak much English). It's kind of a weird situation -- how do we have a contract with an 8-year-old? I mean, OK, his mom signed for him, and is paying for him, but...? And, like, are we going to make him do chores? He can't wash the dishes, he isn't tall enough! Not sure he can operate a vacuum cleaner either. I think the current plan is that Beatrix (who is work manager at the moment) will talk to Mei and try to work something out. (Though obviously Peter doesn't eat as much as we do!)

Anyway, Peter is adorable. Though unsurprisingly he is also kind of a pain in the ass. But I play with him a bunch and he likes me a lot. He likes to play pool (he cheats). Mostly though I've been playing with him outside -- often in that tiny backyard of ours. Often just tossing a frisbee or a softball back and forth. The other day we were playing a game where we would take turns tossing the softball onto the chair-swing and trying to get it to stay.

...then he threw the softball onto the roof. Guess he's not getting that back!

Also yesterday we were playing hide and seek. At one point I hid in a pretty obvious spot because he gave me very little time, but he missed me anyway... and then seemed to forget about it and started throwing gravel at a bird's nest instead.

Also yesterday: Me being constantly worried I was going to get poison ivy. There was a bunch of poison ivy growing on the back of the house -- not nearly so much as there used to be, thankfully. Shane has been going and killing it by spraying soap-salt-vinegar on it. But I was constantly worried that playing with Peter in the back I was going to get poison ivy still somehow (you can still get it from dead plants, apparently). Especially when he went and hid in the wooded area behind Triangle (which is on their property, not ours -- Shane would have been doing nothing about that).

I mean it takes several days to show up so I could still have gotten it and not realized! Gods, what a terrible plant. I don't think I did; my worries are probably unfounded. But, ugh. I mean it's pretty easy to recognize in vine form, because the hairy vines are a giveaway, but when it's just on the ground? Sure, sure, "leaves of three, let it be", but *so many* plants have leaves of three. Goddamn useless rhyme. I found it useless as a kid and I find it useless now. Well, Peter seems to be fine too.

[Since it's now over a week later, I can verify that neither of us ended up getting poison ivy.]

-Harry


2014 Jun 17

01:39:00 - Why, briefly, ordinal hyper is stupid

So, do you remember this old entry? Well, Jeff is having me turn it into an actual paper. We'll see if it's new; I think it is, but I should actually, y'know, ask a set theorist.

(At this point you may be asking: Wait, why is Jeff relevant? Didn't you, y'know, finish your degree? Yes, but he's continuing to help me out for the time being. Ordinarily he'd have me forge ahead with complexity-related stuff, but I said I could get this done super-quick, so he's OK with it.)

Anyway, in my previous exploration of the subject, I mentioned that continuing past exponentiation into tetration and general hyper is pretty stupid for ordinals, but I never explained why. I thought I'd do that here.

I could actually go into quite a bit of detail on this, because I spent quite a bit of time thinking about it a few days ago, but I expect people would find it mind-numbing so I'll keep this short.

(Note: I am not claiming anything in this entry is new, except the second-to-last parenthetical. And while that's new, it's also kind of stupid. :P )

So what's wrong with ordinal hyper?

Let's start with ordinal addition. This is built up from the successor operation. To compute a+b, you apply the successor operation b times to a. Note that the resulting operation is continuous in b, by definition.

OK, that was pretty simple. How about ordinal multiplication? To compute a·b, you add a to itself b times. Now here we have a choice; when we say "add a to itself b times", what we really mean is "start with 0, then add a to it b times". But are we adding a on the left or on the right? It makes a difference!

The correct choice is to add a on the right. As long as b is finite, of course, this makes no difference. But addition, recall, is continuous in the right summand. Which would mean that if we were to take a·ω under this weird modified multiplication, we would get a fixed point of left-addition by a. Multiplying by any higher ordinal would still get you a·ω. That isn't what we want at all.
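
(Spelled out, with ∘ as ad-hoc notation for this left-handed product: define a∘0=0, a∘(b+1)=a+(a∘b), and a∘λ=sup_{b<λ}(a∘b) at limits. Then a∘ω = sup_n(a·n) = a·ω, and since a+a·ω = a·(1+ω) = a·ω, we get a∘(ω+1) = a·ω -- and from there a∘b = a·ω for every b≥ω.)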

Thus we have to add on the right. Similarly, when it comes to defining exponentiation, we have to multiply on the right. But what about tetration?

For natural numbers, we define tetration by doing our exponentiation on the left, and there's a good reason for this. (x^x)^x is the same as x^(x·x), which makes doing it on the right a bit silly. Addition and multiplication don't have this problem. They're associative, sure, but associativity of, say, multiplication, doesn't involve any operations simpler/smaller than multiplication. By contrast, this relation turns two exponentiations into an exponentiation and a multiplication, and in general (if you keep putting more exponents on the right) turns n exponentiations into 1 exponentiation and n-1 multiplications. So this is not very good.

Thus when we try to generalize to ordinals, we have a conflict of which way things need to go in order to be nonstupid. If we continue to do things on the right, we run into the same problem we do for finite numbers. If, on the other hand, we break that convention and switch to the left, we run into the problem of continuity and stabilization. (We can't have "been on the left all along", or we'd have run into that problem even sooner!)

Now the reader may point out here that "left" and "right" are just directions, without any particular meaning, but in fact they have been used here with a consistent meaning: Left is what's getting iterated, right is the number of iterations. So this is indeed a real problem.

And so -- assuming we do switch to the left, because we want finite things to work -- we run into the continuity problem and things become stupid pretty quickly. Tetration is pretty stupid; hyper-n for 5≤n<ω is very stupid; and for n≥ω is maximally stupid.

(There is also the problem that H_4(0,ω) is undefined, but oh well.)

(Note, by the way, that if you're defining hyper so that things are done on the right, you should define H_0(a,b)=Sa, not Sb.)

(Also of note is the fact that while "ordinary tetration", "Jacobsthal tetration", and "semi-Jacobsthal tetration" all remain distinct, once you go to hyper-5, they all become the same.)

(Tangentially -- the other day I had the idea: while trying to define "natural exponentiation" using surreal exponentials doesn't work... what if you just rounded up to the next ordinal? Turns out, this is a pretty bad idea. No algebraic relations there that I can see.)

-Harry


2014 Jun 14

00:00:00 - Some quick notes on "no true Scotsman"

It seems to me that the common meaning of the phrase "no true Scotsman [fallacy]" has shifted quite a bit from the original meaning as I understand it.

Let's take Antony Flew's original example, which I've copied over from Wikipedia:


Imagine Hamish McDonald, a Scotsman, sitting down with his Glasgow Morning Herald and seeing an article about how the "Brighton [(England)] Sex Maniac Strikes Again". Hamish is shocked and declares that "No Scotsman would do such a thing". The next day he sits down to read his Glasgow Morning Herald again; and, this time, finds an article about an Aberdeen [(Scotland)] man whose brutal actions make the Brighton sex maniac seem almost gentlemanly. This fact shows that Hamish was wrong in his opinion but is he going to admit this? Not likely. This time he says, "No true Scotsman would do such a thing".

So what is actually wrong with what Hamish is doing here? Let's assume that this is part of some larger argument.

The honest thing to do when presented with a counterexample (that you agree with) is to openly fall back to a weaker position. Hamish is doing that, in a sense -- restricting his claim to true Scotsmen rather than all Scotsmen -- but not in any useful way. The notion of "true" Scotsman is completely opaque; it's not a useful coherent position at all, just a way to make it look like he was essentially right all along. (See also: Moving the goalposts.) If you can question Hamish and perhaps get him to nail down just what he means by a "true" Scotsman, then perhaps the argument can continue in a productive fashion -- though you should really use a term other than "true Scotsman", as that's obviously loaded. But as long as it remains opaque it remains mutable.

(Note, of course, that if you have to retreat to a weaker position sufficiently often, you do have to consider the possibility that your original position really was just totally wrong and you are making a mistake in trying to incrementally salvage it.)

But the way I see people using the phrase these days is something entirely different. Rather, it seems that to most people, the "no true Scotsman" fallacy is when you say someone isn't part of group X when they say they're a member of X.

Not only is this not the "no true Scotsman" mistake (in the original sense), it isn't even necessarily wrong. Some groups are essentially defined by self-identification, but not all are.

Now it's worth noting here that many groups are defined as empirical clusters -- they're defined extensionally, not intensionally. Suppose that I claim "No member of group X does Y", and someone else replies "I'm a member of group X and I do Y." And let's say I also happen to know this person, and I know that they have very little in common with the people usually denoted by X. Then I think my best reply would be, "I'm sorry, but you seem to have very little in common with the people usually denoted by X. I don't think most people, when they use the word X, are referring to a group that includes you. Seeing as X is a category that is defined extensionally, by empirical clusters of similarity, I don't really think it can be said that you are an X, at least, not if the word X is used in the standard way. In particular, you differ regarding Y, Z, and W, all of which would generally be considered essential. Hence, I hope you don't mind if I continue to use the word X in this way -- meaning, people who can be described as some combination of Y, Z, and W, typical examples of which are A, B, and C -- rather than in a way that includes you. If you really object, I will say X' rather than X to denote the cluster that I am talking about, and say X'' to denote the cluster that you are talking about, but I hope you realize that, in my opinion, outsiders will probably read X as meaning X' rather than X''."

What I've done there now really does look a lot like No True Scotsman (in the original sense)! I've insisted on using a particular definition of a word, in a way that's not totally transparent. But, unlike the original "no true Scotsman" example:
1. I'm making entirely clear what I am doing.
2. I am doing so in a way that is in concordance with standard usage, rather than going against it and trying to sneak in connotations.
3. While my definition isn't totally transparent, I have tried to make it as clear as I can with some combination of defining features and typical examples. It's not totally transparent, but neither is it totally opaque and mutable.
4. I am ultimately offering unambiguous terminology, rather than getting into an argument over definitions. Remember, if an argument didn't start out being about definitions, don't let it become about definitions! (And if you did start an argument about definitions, hopefully it's only because you know what you're doing and had a good reason for doing so.)

By the way -- let's note here that the above examples dealt in universal claims and counterexamples. But in arguments about the real world, universal claims are rarely appropriate. That was the original form, though, so I've left it that way -- and using more realistic claims would have made the examples more complicated.

Point is, claiming that someone is not an X when they claim to be an X is not necessarily incorrect, and bears little relation to the original meaning of the term "no true Scotsman", though they may often coincide.

-Harry


2014 Jun 4

01:51:00 - This really deserves to be better known

So here's an interesting paper that really deserves to be better known: On On_p, by Joseph DiMuro.

Here, On_2 refers to the nimbers. Let me review those briefly.

One can put on the ordinals two unusual recursively-defined operations, known as nim-addition and nim-multiplication, that turn them into an algebraically closed field of characteristic 2. (Except of course that they're a proper class and not a set.) If one restricts to just the finite ordinals -- the whole numbers -- these form a subfield.

Nim addition is very easy to describe. On whole numbers, it's just bitwise xor, or binary addition without carries, and on general ordinals, it's much the same. (Write them in Cantor normal form and do bitwise xor on corresponding coefficients.) Nim multiplication is complicated and I find it quite confusing, but it's certainly computable on whole numbers, as is nim inversion.
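
(To make that concrete, a tiny Python sketch -- these are the standard recursions on whole numbers, nothing from DiMuro's paper: nim-addition as xor, and nim-multiplication computed directly, if very slowly, from its mex definition.)

    from functools import lru_cache

    def nim_add(a, b):
        # Binary addition without carries.
        return a ^ b

    def mex(s):
        # Minimum excludant: the least whole number not in s.
        n = 0
        while n in s:
            n += 1
        return n

    @lru_cache(maxsize=None)
    def nim_mul(a, b):
        # a⊗b = mex{ a'⊗b xor a⊗b' xor a'⊗b' : a' < a, b' < b }.
        return mex({nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
                    for x in range(a) for y in range(b)})

    # The whole numbers below 4 form a field (GF(4)) under these operations:
    print([[nim_mul(a, b) for b in range(4)] for a in range(4)])
    # [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]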

One thing worth noting is that the nimbers provide a very concrete (if impractical) description of the algebraic closure of F_2; it consists of precisely the nimbers below ω^ω^ω. I mean, nim multiplication and inversion and square roots and such are all quite confusing, but they're certainly computable on ordinals that aren't too large (at least below ε_0, I should think). Maybe even finding roots of polynomials is computable? I'm less certain of this. This is really not something I'm an expert on.

Anyway, naturally there's the question of, can you come up with an analogue of the nimbers for other (positive) characteristics? Exactly what counts as an "analogue" is arguable, but few people seem to have really gotten anywhere close. S. Norton, back in the 70s, found a recursive characterization of ternary addition without carries, and here F. Laubie provides a way of doing the same thing for any prime p, but it's multiplication that's truly the hard part. (Side note: Laubie's characterization is really complicated. I'm pretty sure I have a much simpler one. Maybe it's publishable?)

Well, in "On Onp", DiMuro finally provides a characteristic p analogue of the nimbers, and it sure seems like he's done a pretty good job of it. Now, I've only skimmed the paper; I don't really understand nimbers, so actually reading it would be a bit difficult. Still, he's got a lot of the analogy down. He dispenses with recursive definitions for the operations, in favor of just making them work. He does though prove that addition in Onp, by his definition, turns out to be just base-p addition without carries (extended to the ordinals in the obvious way), so that at least can be handled by Laubie's recursion (or mine). But yeah, there's a lot of analogy there. In particular, ω^ω^ω ends up being the algebraic closure of Fp. And the operations are computable! So this gives a concrete description of the algebraic closure of Fp! He doesn't give any simple description of multiplication like there is for the nimbers (well, to the extent that that can be called "simple"), but it's still computable. He doesn't address the question of solving polynomials effectively; hopefully someone else will take this further and do that.

At the end he raises the suggestion of "On_0", which perhaps might give a concrete (if impractical) description of the algebraic closure of Q (though you'd need to go beyond ω^ω^ω). This is funny; I'd always thought of the nimbers as basically the characteristic 2 analogue of the surreals, but obviously that analogy is... well, yeah, I guess there really isn't much of an analogy there. So this is interesting. But he doesn't pursue it as it would be harder. (It's worth noting that, in the nimbers, if you want the algebraic closure of F_2(t), you have to go well beyond ε_0, and according to DiMuro finding the exact ordinal is still an open problem, though Lenstra's old paper on the matter offers an upper bound and conjecture.)

So, yeah, this is not a topic I intend to pursue or anything (though maybe I should write up that recursion). But -- how did I not know about DiMuro's paper? This really should be better-known.

EDIT: Perhaps I should note -- maybe part of the reason is because it's so hard to find. I only stumbled across it incidentally; it doesn't say "nim" or "nimber" anywhere in the abstract, so I couldn't turn it up in searches. If I had thought to search on "on_2", that would have turned it up, but...

-Harry


2014 May 30

02:28:00 - A suggested revision to the disagreement hierarchy

You're all familiar with the disagreement hierarchy, right? Actually, I'm not sure how helpful it is most of the time, as I feel like a lot of the arguments I see (at least on the internet) consist of people arguing at cross-purposes rather than actually disagreeing with each other. Nonetheless, I would like to suggest two revisions to it.

Revision 1: Add level DH4.5: Nonconstructive refutation.

The archetypical example of refuting an argument is finding a hole in it -- "Your inference of P is unjustified given only what you've established so far." (Or, better yet, "Your inference of P is unjustified given only what you've established so far; indeed, here is an example where what you've established so far holds, but P does not.") But it's possible to show an argument wrong without actually finding a hole in it. The classic example is showing that an argument proves too much. If an argument proves too much, you can conclude that it's wrong -- but you still don't necessarily know exactly why it's wrong. It's still a form of refutation and should be above counterargument, but it's not as good as a constructive refutation.

Revision 2: Replace DH6, "Refuting the central point", with "Refutation and counterargument".

"Refuting the central point" doesn't really strike me as qualitatively different from "refutation". Honestly to my mind, if you're refuting some peripheral thing, that hardly even counts. When I argue I like to spot the other person lots of points because I want to get to the central disagreement as quickly as possible; arguing over peripheral stuff is mostly a waste of time. Of course, sometimes peripheral stuff becomes central later, but you can always un-spot a point.

Anyway, point is, what is qualitatively different is refuting and counterarguing. If you only refute but don't counterargue, all you've established is that the other person's argument is wrong -- not that your own position is right! Refutation does not automatically include counterargument, and I think this is worth singling out as a separate, higher level.

(Sometime, I really need to get around to writing "Harry's guide for how to have an argument well"...)

-Harry


2014 May 29

03:38:00 - There was something I meant to post, but I forgot it

Instead, I'd just like to point out that this old entry is now unlocked.


2014 May 27

20:31:00 - It's done!

And so today I turned in the final version of my thesis and got my certificate of completion. If there are any more mistakes in there, I'm not fixing them. At least, not until I try to get Chapters 4 and 5 published (though Jeff tells me Chapter 5 will need to be split before it can be published).

It's going on ProQuest of course but also I figure I may as well just upload it to my website at UMich? (Hm, that should really be updated.) There's no reason it shouldn't be freely available. I'll also finally put the code up, for those of you who want it. A warning -- the code is far from readable; I didn't really have time to go back and make it readable, and since it wasn't officially submitted to the school, I didn't really have to. I did at least take the time to document all the functions, but I didn't take the time to put them in a sensible order. (Juan Arias de Reyna, meanwhile, tells me that he may rewrite the whole thing in Python.) Well -- I'll post an update here once I've done that.

EDIT: OK, website is updated, with the thesis and code. You wanted to see how awful my code is, Mickey? :)

Anyway, hooray! Now I actually have some free time for a bit. Will I actually write here more often? Uh, maybe...

-Harry


2014 May 24

13:48:00 - Someone in this house is a dumbass

So, we got a letter from Comcast the other day. These are only ever one of a few things: Bills, advertisements, or, as in this case, letters informing us that someone in the house has been caught torrenting copyrighted material and asking us not to do that.

Fortunately we don't get so many of these -- not enough that they're actually going to cut off our internet or anything. Maybe about 1 per semester. Used to be a much bigger problem before I was here, though; apparently it's the reason why, not too long before I moved in, we started using Comcast in the first place (we got in trouble with our previous ISP).

Anyway. Whenever one of these comes up, naturally there is the question, what was it? Usually it's nothing interesting, but the other year there was a case of Scooby Doo porn.

So what was it this time? Game of Thrones. Even though we have HBO -- on demand, even. (It's not that expensive when you're splitting the cost 50 ways.) Even though this should be well-known to anyone living here seeing as so many people gather around the TV for it every Sunday.

I can't say that there's no reason to do this, but, still: Dumbass move.

EDIT: Holy hell, today we got a second one. For the next goddamn episode. This is ridiculous. As they say: Once is happenstance; twice is you're a dumbass.

-Harry


2014 May 9

19:28:00 - The defense

Today was my defense. It went well. I "broke everybody's legs", as Heidi wished me yesterday. Jeff had me make slides for it a few days ago, so I mostly used the slides rather than the blackboard; fortunately we didn't have much trouble getting the projector set up.

Committee was Jeff, Andreas Blass, Alexander Barvinok, Martin Strauss, and Kevin Compton (from the CS department). Karen Smith showed up as well. Also Julian (my academic older brother) happened to be in town so he was there. And various other people; this wasn't supposed to be a complete list, so let me head off the possibility before it turns into one. And of course a number of my housemates showed up -- Beatrix, Angus, Seth, Noelle... perhaps one or two more I'm forgetting? Not sure that any of them understood it besides Beatrix.

Still, the topic obviously was pretty accessible to those with a bit of math background (though I did make a point of stopping to explain what ω^ω is; that's been a sticking point when I've talked about this before). Which meant that I could actually get into the proofs quite a bit. Most defenses I've gone to, the subject and the statement of the results take so long to explain that there's barely any time to go into how any of it was proved, but I actually got to do quite a bit of that. I was afraid I would go over time, and I did a little, but only by like 5 minutes. Which is probably about how much I started late anyway.

Kevin (er, by which I mean Kevin Carde, not Kevin Compton) mentioned to me afterward that he really liked the argument for why, when k is an integer, the order type of D∩[0,k) is exactly ω^k and not any larger. (Look at what happens for k-ε and take a limit.) That was a pleasant surprise.

Most unexpected question at the defense (I forget whether this was Compton or Strauss): "What if you allow addition, multiplication, and nim-addition?" My response was that, since 3^k⊕1=3^k-1, well-ordering should fail for the same reason as when you allow subtraction. But up till then, in all the variants of the problem I had considered, it had never occurred to me to consider nim-addition. (Or nim-multiplication, or nim-inversion.) Also Compton asked about ω^ω showing up in other contexts, which gave me a chance to say a bit about how it's possible it might show up with addition-multiplication chains as well... as well as another less speculative generalization which I then forgot to mention, but oh well.

Spent a bunch of time talking to Kevin and Julian and Karen afterward. Mentioned the fact that the closure of the defect set is the set of numbers of the form "a defect plus a whole number"; they were surprised that the latter was a closed set. Since the arguments for these facts were largely unloaded from memory at the time, all I could really say was, "You know, when you put it that way, it does seem weird."

In addition to the usual little math department celebration (Jeff got me sparkling grape juice, since I don't drink), when I got home I found Beatrix had gotten me a cake as well! When Noelle mentioned this via IM to Justine (who has moved out for the summer and is down in Texas), she said [something along the lines of] "Is it an ice cream cake? It should be an ice cream cake." I told Noelle she can tell Justine, I'm glad to hear she cares, but the cake is fine.

EDIT: OK, actually Justine just asked if it was an ice cream cake; she didn't say it should be. Since it was Justine, I imagined that was the implication, but perhaps that wasn't intended.

Anyway, that's done. I mean, aside from writing up the revisions to my thesis and getting those turned in...

-Harry


2014 Apr 16

02:24:00 - A thing to do when I actually have time: Look up sources of national anthems

In a few minutes, I'll get back to work. But first...

So I only just learned today that -- like "The Star-Spangled Banner" -- "Hatikvah" is also a combination of a pre-existing poem and an unrelated old tune. Now, admittedly, the Star-Spangled Banner just directly copies both its sources, whereas Hatikvah modifies things a bit more. (The tune is a big difference if you compare directly to "La Mantovana"; less so if you compare to "Carul cu boi".)

Still, it struck me as an interesting coincidence. Is this a common thing among national anthems? I mean, I could easily believe that adapting them from preexisting poems is, but also using unrelated existing melodies? That would be a bit more surprising.

Well, if I had the time, I'd start going down Wikipedia's list of national anthems and counting. But I don't right now, so I'm just going to note this coincidence as one to follow up on later. Or let someone else count.

-Harry


2014 Apr 13

18:09:00 - New integer-complexity related upper bounds!

So! Juan Arias de Reyna and Jan Van de Lune recently posted to arXiv their paper "Algorithms for determining integer complexity". In it, they A. improve the known bounds on time complexity for computing ||n||, and B. get a new bound on ||n|| that works for almost all n.

(Hey, I'm blogging about things adjacent to my work again! Well, a little, anyway. More will have to wait.)

To review -- a big problem in integer complexity is determining the limsup of ||n||/(log n), which I'll denote C_max. Certainly C_max≤3/(log 2), or about 4.328, since ||n||≤3·log_2(n) for all n>1; this bound follows from writing n in base 2. And Josh Zelinsky, in a paper he still needs to write, has lowered this to ||n||≤27·log_1260(n) for all n>1, so C_max≤27/(log 1260), or about 3.782. The highest known value of ||n||/(log n) occurs at n=1439, for a value of 26/(log 1439), or about 3.576. This seems to be a unique occurrence, so one ought to have C_max<26/(log 1439), but the current best upper bounds have not beaten this milestone.
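
(For anyone who wants to play along at home, ||n|| is easy, if slow, to compute by dynamic programming -- a quick sketch of mine, the naive method rather than any of the clever algorithms discussed below:)

    from math import log

    def complexities(limit):
        # c[n] = ||n||, via ||n|| = min over n = a+b and n = a*b of ||a||+||b||.
        c = [0] * (limit + 1)
        c[1] = 1
        for n in range(2, limit + 1):
            best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
            for d in range(2, int(n ** 0.5) + 1):
                if n % d == 0:
                    best = min(best, c[d] + c[n // d])
            c[n] = best
        return c

    c = complexities(1439)
    print(c[1439], c[1439] / log(1439))  # 26, about 3.576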

For comparison, if it is indeed the case that ||2^k||=2k for all k≥1, then one would have C_max≥2/(log 2), which is about 2.885; Josh has previously suggested that perhaps the limsup is exactly this number. (Though Iraids et al. suggested that perhaps it's even larger.) However, at present, nobody's even been able to prove that the limsup is any greater than the liminf, which is 3/(log 3), or about 2.731 (and which is also an absolute lower bound). And indeed, various people have suggested that perhaps the limsup simply is the liminf[0]. Josh and I attempted a while ago to show it was at least slightly larger, but that ended up not working out, though I'm of the opinion it's worth re-exploring.

Anyway. So the state of C_max is not good. But we can also define a number I'll call C_avg: We'll define C_avg to be the inf of all C such that ||n||≤C·log(n) for almost all n, i.e., on a set of density 1. (So certainly 3/(log 3)≤C_avg≤C_max.) And it's here that Arias de Reyna and Van de Lune have made an improvement. (But first a bit more history, if you don't mind.)

A number of people have noted that based on writing n in base 2, one can show C_avg≤5/(2·log 2), or about 3.607. (Already this is better than Josh's bound.) Richard Guy and John Isbell took this a step further and tried writing n in base 24, yielding a bound of C_avg≤265/(24·log 24), or about 3.474. (This is even better than 26/(log 1439), which the current C_max bounds are not!) Well, now, based on writing n in base 2^9·3^6 = 373248, they've shown that in fact

C_avg ≤ 15903451/(2^9·3^6·log(2^9·3^6))

which is about 3.321. Quite an improvement, in my opinion!

(As for the true value of C_avg, who knows. Maybe it and C_max are both equal to 2/(log 2). Well -- if C_max is at most 2/(log 2), then so is C_avg, and if C_max is equal to 3/(log 3), then so is C_avg; and if either is true, then even this new bound on C_avg is quite a ways away from the true value. Still. A substantial improvement.)

So what's going on here? How did they do this? Well, it's the same method as Richard Guy and John Isbell used, just applied with a much higher base with the help of a bit of automation. Let's go into a bit more detail.

Let's define D(b,r), as Arias de Reyna and Van de Lune do, to be the smallest number of ones needed to turn a number x into the number bx+r. That's not a formal definition, but it's not too hard to write one down; you can write down a recursion similar to that for integer complexity, allowing you to compute it algorithmically. For r=0, D(b,r) will simply be ||b||. (And for b=0, D(b,r) will simply be ||r||, though we won't care about that here.) We'll only be considering here D(b,r) for 0≤r<b, though one can consider it for r≥b as well. Note, by the way, that (excluding r=0 or b=0) one always has D(b,r)≥max(||b||,||r||). (Actually, so long as r>0, one has D(b,r)>||b||, and so long as b>0, one has D(b,r)>||r||.)

With this language, we can make some nice statements. The method of getting upper bounds on C_max by writing numbers in different bases simply becomes the statement that, for any b≥2,

C_max ≤ max_{0≤r<b} D(b,r)/(log b)

...or does it? It's possible I'm neglecting some subtleties with the initial digit here. Well -- it's ultimately irrelevant, because so far nobody's ever found a base that does better than base 2 for getting an upper bound on C_max! And in base 2 one does not have to worry about such things.

But what certainly is true, and much more useful, is that

C_avg ≤ avg_{0≤r<b} D(b,r)/(log b);

this is the method of writing things in base b to get upper bounds on C_avg. Well, Arias de Reyna and Van de Lune have done this with b = 2^9·3^6, and gotten a pretty nice bound.
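
(A worked toy case, to make the mechanism concrete -- the numbers here are my own quick check, not from the paper: for b=2 we have D(2,0)=||2||=2 and D(2,1)=3, since 2x+1=x·(1+1)+1 uses three ones and two clearly can't suffice; the average is 5/2, recovering the bound C_avg≤5/(2·log 2) mentioned above.)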

But -- and now we get to part A -- this isn't the only thing they've done with the values D(b,r)! They present also in their paper an algorithm for computing ||n||, which is apparently due to Martin Fuller. Now, we already knew that you could do better than O(n^2) in terms of the time required to compute ||n||; Srinivas and Shankar presented an algorithm that showed that the exponent 2 could be lowered to log_2(3), or about 1.585.

Arias de Reyna and Van de Lune present a time analysis of Fuller's algorithm, and show that it runs in time O(n^α), where α is about 1.246. Interestingly, their time analysis depends on the numbers D(b,r) -- once again, you pick a base b, do computations involving D(b,r) for 0≤r<b, and get out a bound. In this case, the computation involved is rather more complicated than just taking an average, and doesn't yield a number that can be written down in a nice short exact form, but it still is a computation based on D(b,r) for 0≤r<b. The best base they found for this problem was 66, which yielded the above value of α.

So, pretty neat! A little different from what I'm mostly working on, but, it's not like integer complexity is a widely-studied thing. And it shows that the numbers D(b,r) have more than one purpose, and may be worth further study.

-Harry

[0]Indeed, if instead of integer complexity one considers the addition chain length l(n), which is similar in a number of ways (which reminds me, I've never really posted here about any of my work regarding addition chains; I should probably get around to that sometime), the liminf and limsup of l(n)/(log n) are equal, both being 1/(log 2).


2014 Apr 6

17:12:00 - 6th place!

Stopping by here briefly to record this. (I think I may actually begin posting some more once my defense and everything is done. For now, though...)

SO! Yesterday was the Microsoft College Puzzle Challenge. (Apparently "college" includes grad students. And people who finished their PhDs a year ago.) It's a 10-hour puzzlehunt for teams of 4 people. (You have to be in one of several particular locations to do it.) They do it every year but I'd not done it before. Dave Putnins was running a team, Good Fibrations. (A team name with a history of doing well at CPC, winning UMichigan every year since 2010 and winning the whole thing in 2012.) The team was him, me, Jordan (hence the "and people who finished their PhDs a year ago" above), and Leda, who is not from the math department but instead from the solar car team (and the only actual undergrad on the team). There was another math department team, consisting of Ari, Chris, Alex, and Dave Prigge; I'm not sure what their team name was.

Long story short, we came in 6th overall and won UMichigan. (Second place UMichigan was 20th place overall.) We didn't solve the meta; if we had we'd have been 3rd. We were also damn close to 5th... more on that in a bit. I was kind of hoping for that Xbox One (which, yes, I would probably just use as an expensive Geometry Wars machine), but, a good showing nonetheless.

A note on the format: The format was apparently a little different than in previous years. Instead of the winner being the first to solve the meta, there was a scoring system. Solving a puzzle was worth 100,000 points ("dollars"; there was a Las Vegas theme this year, ugh), while solving the meta was worth 1,000,000; you could keep going after the meta for more points until time ran out at 10:00. Not sure how much the pre-event puzzles were worth.

Additionally, on the ordinary (non-meta) puzzles, there was a bonus for solving the puzzle early -- 10,000 to the first team to solve it, around 7,000 for the next team, and decreasing pretty quickly; you had to be very fast to get more than 4,000 on bonus. Much more important than the bonus, though, was the betting. On the ordinary (non-meta) puzzles, you could bet up to 10,000 points that your first answer was right (much larger than you were likely to get from the bonus). You could not bet on answers beyond the first. Also, if you bet on your first answer, but you got it wrong with one of the "we'll give you a hint" possibilities, the bet would just be cancelled (though you still couldn't bet on later answers).
( Puzzle roundup, cut for length and spoilers )
OK. That's all I've got to say. OK maybe not all but I'm not writing any more.

-Harry

