"Alters gene expression" is the new "alters the brain".
So I was in Chicago briefly the other week, staying with Mark and Kenneth, to pay last respects to Pierce Tower, not really going to write about that. But I thought I should mention a silly conversation that occurred with Kenneth and a friend of his named Dick:
If an ant and a lion were scaled to the same size, which would win in a fight?
Well, obviously, that depends on whether the ant was scaled up to the lion's size, or the lion was scaled down to the ant's size.
OK then, better question: At what size would the fight be fair?
I'm not sure that question is really answerable (we also haven't specified what *type* of ant). But I think it's safe to say, at whatever size would be fair, they'd both be really terrible.
OK, one thing about that: Apparently the old method for getting on the roof of Pierce Tower no longer works. Also, neither Hannah nor Sasha seemed to remember me, which was disappointing. And obviously Isaac didn't. But Evangeline seemed to like me.
Also, I've updated my website with blurry pictures of bugs.
(They're blurry because I took them with my phone. The grasshopper and the mantis *aren't* blurry because they're freaking huge.)
Except this took place over several days via text messaging. Pretty sure Linda didn't know who she was talking to at first.
It began with me sending Linda a copy of this picture.
Linda: Who? What?
Harry: Not who! Muffin is not person!
Linda: Then where is the muffin
Harry: The muffin is on the phone.
Linda: Who has bitten the muffin
Harry: Julian Rosen bit the muffin!
Linda: The mighty mathematician?
Harry: That seems a fair description.
Harry: Unrelatedly, the Julian has now transformed into a Dr. Julian!
Linda: From master Julian
Harry: Journeyman Julian?
Linda: Journeyed where? To alliteration alternatives?
Harry: Waterloo wanderings.
Linda: Ah, Waterloo, the great toilet!
Harry: It's not a closet.
Linda: Maybe you are in the closet but unawares?
Harry: Julian will be in the not closet; i will not be in the not closet.
Linda: Is the not-closet an open set? So that you can be arbitrarily close to being in the closet?
Harry: It's unclear.
Linda: Like the skies.
[Originally posted as a friends-locked entry on 3/30/2013 -- locked because I didn't want to post this before the game was done. We eventually did complete it the following week, though Gamble had to step in for Marquise. Short version of what happened: Gamble tried to fix Marquise's mistakes, turning and attacking Nick -- this seriously hurt Nick, but ultimately couldn't do enough; Marc behaved erratically and almost threw the game to Mike; Nick and I stopped fighting; I lost the West Summer Sea to Dan and my homelands got sacked; Mike almost won; and though I did start to reclaim my territory, I couldn't do enough and Dan won.]
So last night Dan organized a game of AGoT. Playing were him (Martell), me (Tyrell), Nick (Lannister), Marquise (Greyjoy), Mike Milligan (Stark), and Marc (Baratheon).
OK, I'm not going to do any sort of whole writeup. But I thought it was worth noting, this was a game that actually had people supporting each other a lot. Usually I find most fights in AGoT don't actually involve support from other players.
As often happens I spent much of the game sitting around not getting anywhere. Now usually this is because I'm holed up and being defensive, but this time I didn't have to spend a lot of the game defending. I was Tyrell, as I said, so there was the buffer zone of Searoad Marches between me and Nick, and Dan and I had a pretty defined border as well. I could have easily hit him at Starfall, but he couldn't have so easily hit me, because I controlled the West Summer Sea.
So why didn't I get anywhere? Well, Nick and Marquise had something of an alliance for much of the game, so going north would have been a huge pain. And as for going south, well, Doran Martell is quite the rattlesnake. Indeed I had the strength to attack Dan multiple times -- I just wanted someone else to eat the rattlesnake first.
Now the thing was, Clash of Kings came up on each of turns 2 through 4, and due to heavy use of consolidate power on my part, I came out ahead on the King's Court track (each time! Though I was only in second on turn 3.) And after that there weren't any (at least, as far as we played -- the game was tentatively left unfinished), so I had the raven for a lot of the game.
So I in particular did not want to get hit by Doran. So I start suggesting to Marc, hey, maybe we should take on Dan together. But to a large extent this was less because I needed Marc's help, and more because I wanted someone else to eat the rattlesnake! But the problem was that Marc didn't have good positions on any of the tracks, so there was no way that was going to happen to him.
Anyway the result of all this was that I expanded east into the Reach, so yay, a lot of fights with 3 people involved. Also, early on, I proposed to Nick that we take on Marquise together -- I'd get the Sunset Sea. Of course, Nick had an alliance with Marquise, so that didn't happen. (Which was probably for the best for Nick -- I'd have been a big threat to him had he let me do that.)
Finally Nick shows up in the Searoad Marches with a bunch of siege engines. Well, there's only one possible defense against that: Strike first! But my forces weren't *quite* enough; I just had to hope that maybe Nick wouldn't do everything he could to defend them. (Aside from putting a defense order there, which would defeat the point of the siege engines.) As it turned out, he did. OK -- maybe I can get Marquise to my side instead of Nick's. I suggest to Marquise that his alliance with Nick has only gained him the things he would have had anyway, and we could crush Nick between us instead.
Finally, Marquise decides to support... neither of us. A decision neither of us is happy with, unsurprisingly. Nick can still squeak out a victory against me without Marquise, so Marquise is still handing the battle to Nick. But now Nick's annoyed at him -- especially because this means Nick had to spend his high card in order to win. And without his high card, Nick decided not to attack Highgarden, going for the Reach instead.
I think it was shortly after this that basically all the alliances broke down and we just got to a point where it was "sure, I'll support you, because you're fighting someone who's also next to me". People would be largely honest, but, y'know, not loyal for more than a turn (or a march).
As for Dan and Doran -- ultimately I ended up losing the Raven (which Dan then got) and all my stars, after I got tired late in the game, forgot why I'd been avoiding attacking Dan, and attacked him in the Reach. Not even that great a prize...
Unrelated but interesting: There was a Stark/Baratheon fight early on, and Roose Bolton got Patchfaced. Stark sure has to play differently without that card, huh? Later though he got it back after a wildling attack.
After the beginning of turn 8 we were all really tired -- and Marc had just kind of disappeared (we assumed he bid 0 for the wildling attack that turn) -- but we weren't comfortable declaring Mike the winner (since we hadn't declared a last turn in advance, as we'd sometimes done in the past when a game went too long), so we recorded the state and packed it up. We'll see if we actually get back to it, though.
I've actually been trying to keep a consistent bedtime recently -- namely, about 2:30 -- but last night... yeah. It was about 3:00 when we finally packed it up. We'd started sometime between 8 and 9, I think.
Getting an email from your second cousin, who you haven't seen in years, asking if you could dig up some math papers by Ahmad Chalabi.
Here's another entry from the file.
It's an old idea that addition and multiplication of whole numbers don't get along very well. I attended a talk by my advisor recently on the subject. But one thing that struck me from the examples he used is that the idea of "addition and multiplication don't get along very well" seems to take two distinctly different flavors.
One of his examples was the abc conjecture, and some related ideas; another was Gödel's incompleteness theorem. But the first of these essentially says that addition predictably destroys multiplicative structure, while the second says that the interaction of addition and multiplication is so tangled as to be unpredictable.
(His third example was my own well-ordering result regarding the defects of integer complexity, but honestly I'm not sure it even fits into this category at all. The set of defects has some really nice structure! But that gets into stuff (due to a combination of Juan Arias de Reyna and myself) I haven't really talked about here and probably won't get to for a while.)
Anyway, I don't really know where I'm going with this. I think my point is just that "addition and multiplication don't get along" really seems to be two different ideas.
Hey, I actually have some free time once again! Time to write stuff down.
So a few weeks ago was spring break here and I went down to Rolla to visit Heidi and the rest of Spotty Thorpe. (Well, OK, the only people still in Spotty Thorpe from when I was last there are Heidi and Maggie.) And stuff happened and I didn't write any of it down till now and hopefully I can remember it.
( Cut for length )
I think that is all I have to say for now.
The house where she lives.
My second cousin, Jeffrey Beals. Some second cousins of mine live on a farm. I'm not too clear on the details -- I've never really gotten a good handle on the outlying branches of the family, to be honest...
Holy crap, no I haven't. In fact I never wrote about my and Mickey's trip down to Philadelphia (to see Nick) during winter break at all! Man, I've really let this thing go to rot, haven't I? Short version: I missed a bus, we got there anyway, it was fun, I was introduced to Wawa, we walked on an abandoned elevated train line and encountered someone who lives there, a downed (but deactivated) power line was used to help climb down, there was much Tzolk'in and Space Alert, and then the Russian inventor's device made the ship come alive, but it sank and everybody died! Hm, I think that last bit is missing some context. Oh well. This is not that entry, because that entry doesn't exist.
And there certainly hasn't been anything here about my work for quite a while, but there's a good reason for that. I intend to write more about that before too long, though...
Or something like that.
So a while ago -- honestly, I think I just won't bother to dig up the entry -- I made a list of mathematical terms where the "co-" prefix has its ordinary English meaning, rather than denoting complementation or dualization.
Yesterday I remembered one that wasn't on my list: "cobordism". (And the adjective form, "cobordant".)
I was telling this to Hunter, and he responded, "Isn't cobordism dual to bordism?" I was pretty sure it wasn't, but this isn't my area so I looked it up. And, it isn't. "Co-" has its ordinary English meaning in "cobordism"; "bordism" is pretty much just a shortened form of the word.
So, yay. Still a pretty short list though.
So. As you probably know, I'm currently writing a paper in which I prove that a certain subset of the real numbers is well-ordered with order type ω^ω. And yes it should have been done like a month ago (or probably earlier). Real life has introduced some delays.
Anyway. The point is, in order to do this, I need to cite various set theory/order theory/point set topology stuff, to handle ordinals embedded in larger totally ordered sets (which here is always the reals). I have a section of the paper where I introduce all the stuff I need from there. I'd held off on actually writing it until pretty recently though. I didn't actually know where I'd cite the stuff from; I figured it wouldn't be too hard to find.
So Jeff suggested I ask Andreas Blass where I might find citations for these things. His response on hearing just what I wanted was, yes, that's certainly easy, but I'm actually not certain you'll find it in the literature. Try the following really old books; they cared a lot about this sort of thing around the turn of the century.
So I got the books Andreas suggested out from the library, and, unfortunately, they mostly didn't help. So, I'm having to write most of the proofs myself.
Anyway. The point of this entry was a particular thing. Say we have an ordinal α, and we let β be the order type of the set of all limit ordinals less than α. How can we express β in terms of α? Sounds easy, right? I mean it's basically just "dividing by ω", right?
Well basically yes, but if you actually want the order type on the nose, well, it's surprisingly easy to screw up. Now actually all I really need is that if α < ω^{n+1} (n finite), then β < ω^n, so I don't need to prove a whole formula. (Actually, I guess if you just replaced n+1 with 1+n, this statement would still be true for n infinite.) But today I sat down and figured out just what the correct formula was (because you know I couldn't find this listed anywhere), so I thought I'd record it here for reference.
First off, if α is finite, there are no limit points. Otherwise...
Say α has Cantor normal form ω^{γ_k}·a_{γ_k} + ... + ω^{γ_0}·a_{γ_0}, and assume that γ_0 = 0 (we're going to allow the a_i to be 0). Define α/ω (this may have a standard meaning but I forget, oh well) to mean ω^{γ_k - 1}·a_{γ_k} + ... + ω^{γ_1 - 1}·a_{γ_1}. (When I write γ-1 for an ordinal γ, I mean subtracting the 1 off the beginning. So for a finite ordinal this is subtracting 1, and for an infinite ordinal it does nothing.)
Then β is (α/ω)-1 if a_0 = 0, and is ((α/ω)-1)+1 if a_0 > 0. (Yeah, that notation is kind of crappy -- it looks like (γ-1)+1 should just be γ (and often it is). But we're subtracting 1 off the beginning, not the end, so rather we have 1+(γ-1) = γ, while (γ-1)+1 is only γ if γ is finite. I guess what we need here is some additive equivalent of the \ that gets used for division-on-the-left in certain contexts. Oh well.)
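Since this is exactly the sort of thing that's easy to screw up, here's the formula in executable form -- a minimal sketch in Python, restricted to ordinals below ω^ω so that exponents are plain integers. The representation and the function names are my own inventions for illustration, not anything standard.

```python
# Ordinals below ω^ω in Cantor normal form: a list of (exponent, coeff)
# pairs, exponents strictly decreasing, coefficients positive.
# E.g. ω²·3 + ω·2 + 5  ->  [(2, 3), (1, 2), (0, 5)].

def left_minus_one(cnf):
    # γ-1 "off the beginning": ordinary subtraction for finite γ,
    # the identity for infinite γ (since 1+γ = γ there).
    if len(cnf) == 1 and cnf[0][0] == 0:            # γ is finite
        c = cnf[0][1]
        return [(0, c - 1)] if c > 1 else []
    return cnf

def limit_points_order_type(cnf):
    # β, the order type of the set of limit ordinals below α.
    if all(e == 0 for e, _ in cnf):
        return []                                   # α finite: no limit points
    a0 = dict(cnf).get(0, 0)
    quotient = [(e - 1, c) for (e, c) in cnf if e >= 1]   # this is α/ω
    beta = left_minus_one(quotient)                 # (α/ω)-1
    if a0 > 0:                                      # ...+1 when a_0 > 0
        if beta and beta[-1][0] == 0:
            beta = beta[:-1] + [(0, beta[-1][1] + 1)]
        else:
            beta = beta + [(0, 1)]
    return beta

# Sanity checks: no limit ordinals below ω; two (ω and ω·2) below ω·2+1;
# a sequence of type ω below ω².
assert limit_points_order_type([(1, 1)]) == []
assert limit_points_order_type([(1, 2), (0, 1)]) == [(0, 2)]
assert limit_points_order_type([(2, 1)]) == [(1, 1)]
```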
EDIT: No longer needed. Heidi found the paper. It's "How to Seem Telepathic: Enabling Mind Reading by Matching Construal" by Tal Eyal and Nicholas Epley.
So I'm trying to find a particular psychological study for Heidi. It's from the past few years.
I may be misremembering, but I'm pretty sure it was about people's evaluations of their own appearance -- or rather, their predictions of how other people would evaluate their appearance. It was a construal level theory thing; I forget the details, but the upshot was that people were more accurate when they were told to think about something several months or years in the future. IIRC, people's predictions of how others would judge their current appearance in the future (based on a photo, say) were a better predictor of how other people would actually judge their appearance now than their predictions of how other people would judge their appearance now.
My search ability is failing me. Help, anyone?
Or, here's some assorted math I've been doing with John lately. Have to make up for that dumb last entry. Skip to the bit after the horizontal rule if you want to know what the title's about.
EDIT next day: Added the last paragraph.
EDIT February 3: Added a bit more about the diamond lemma.
So not too long ago John was asking -- say we have a finite-dimensional vector space V and some subspace W of End(V). How can we tell whether, for all v in V, there is some w in W such that w fixes v? ("W can fix every vector.") Like, what are sufficient conditions, necessary conditions...? Obviously containing the identity is sufficient, but what else can we say?
Now in the original context John was considering, W was actually constructed some specific way, and in fact it was closed under multiplication. Well, adding that condition kind of changes everything, doesn't it? So these are really two separate problems.
Also: From here on I'm just going to assume V = F^n, where F is our base field. The added abstraction of not doing so doesn't really gain us anything here, and I want to talk about this in terms of matrices.
Let's consider the first one first. Here's an easy case: If W has dimension 1, it can only have this property by consisting of all scalar matrices. What if W has codimension 1?
Codimension 1 already exhibits several different sorts of behavior: Obviously, W could contain the identity, and thus fix every vector. Or W could fail to fix some vector: Suppose W consists of all matrices with a 0 in the upper left; then there's no way it can fix the vector (1,0,...,0). Or W could fix every vector without containing the identity; for instance, suppose W is the set of traceless matrices. Then given any v, you can pick a linear transformation that fixes v, and on some complement of span(v), has trace -1.
Anyway, in the codimension 1 case, we can describe W by a nonzero matrix A -- i.e., B is in W iff A·B=0, where by A·B I mean "dotting" A and B as if they were vectors, multiplying corresponding entries and adding them up. So can we describe whether or not W can fix every vector in terms of properties of A?
Indeed we can! Initially I did a bunch of tedious computations in the 2x2 case, and found that W could fix every vector iff A either had trace 0 (because this is equivalent to saying I∈W) or nonzero determinant (because...? I had no idea; that was just what I computed). But I had no idea if this generalized.
Anyway the other day I sat down and thought about it some more and concluded that it did generalize, but that that was the wrong generalization. The correct statement is, W can fix every vector iff A either has trace 0, or has rank greater than 1. For a proof, well, see the MathOverflow question. (Actually seeing that will tell you a lot of the other things I'm about to say, but oh well.) I expect there should be a better proof, but, eh, it's what I came up with.
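If you want to poke at the criterion numerically before believing it -- a sanity check, not a proof, and the helper names here are mine -- you can test solvability of the combined linear system with numpy. The idea: B ranges over all n×n matrices; we impose the one linear constraint A·B = 0 and the n linear constraints Bv = v, and ask whether the system is consistent.

```python
import numpy as np

def W_can_fix(A, v):
    # Is there some B with A·B = 0 (entrywise dot) and B @ v == v?
    n = A.shape[0]
    rows = [A.flatten()]                    # the single constraint A·B = 0
    for i in range(n):                      # the n constraints (Bv)_i = v_i
        row = np.zeros(n * n)
        row[i * n:(i + 1) * n] = v          # (Bv)_i = sum_j B_ij v_j
        rows.append(row)
    M, b = np.array(rows), np.concatenate([[0.0], v])
    B_flat = np.linalg.lstsq(M, b, rcond=None)[0]
    return np.allclose(M @ B_flat, b)       # consistent iff zero residual

n = 4
A = np.random.randn(n, n)                   # generic A: rank > 1
print(all(W_can_fix(A, np.random.randn(n)) for _ in range(50)))   # True
u, w = np.random.randn(n), np.random.randn(n)
A1 = np.outer(u, w)                         # rank 1, trace = u·w ≠ 0 (a.s.)
print(W_can_fix(A1, w))                     # False
```

The rank-1 failure is visible directly: with A = uwᵀ, the condition A·B = 0 says uᵀBw = 0, which is flatly incompatible with Bw = w once uᵀw ≠ 0.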
Anyway, let's consider the case where W is now assumed to be closed under multiplication. Julian quickly pointed out that if W contains any invertible element, then W contains the identity (by the Cayley-Hamilton theorem). So if W fails to fix some vector, it must consist entirely of singular matrices. This seemed like it ought to impose a dimension restriction -- it seemed like such a subspace ought to have dimension at most n²-n, but we couldn't prove it.
Fortunately, this sort of thing -- subspaces of matrices of bounded rank -- has been well studied. A bit of searching turned up this question on math.stackexchange, which also linked to this paper. So yes -- a subspace of matrices, each of rank at most r, does have dimension at most nr.
And from the paper, much more is true! Applying the results with r=n-1, we get that if dim(W)>n²-2n+2, and all elements of W are singular, then either A. there is some nonzero vector which is in the kernels of all the elements of W, or B. there is some proper subspace containing all the images of elements of W. Either way, W fails to fix some vector. Thus, if W is closed under multiplication and dim(W)>n²-2n+2, the only way it can fix every vector is if it contains the identity. Rather different from the case without multiplicative closure.
The paper also has results about if dim(W)=n²-2n+2, but I didn't see how to apply those here. In any case, one ought to be able to do much better -- we only used multiplicative closure to go from "contains an invertible element" to "contains the identity", when presumably it can be used for more than that.
So somehow I was showing John a particular open problem involving the combinatorics of words (specifically, problem 15 here -- there's some neat problems there, btw) and he said, this is an awful problem. This is the sort of awful thing Miklos would work on. He would work on awful stuff like the free idempotent monoid on n generators.
The free idempotent monoid on n generators, huh? So, like, squarefree words, I thought. So for n=2 this is finite, as there are only 7 squarefree words. But for n=3, there are arbitrarily long (indeed, infinite -- yes, that's the same by König's Lemma) squarefree words. So I said I would probably bet that for n=3, this monoid is infinite. John wasn't so sure.
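(Digression: the reason I was confident there are arbitrarily long ternary squarefree words, in executable form. The morphism below is the one I remember being Thue's -- treat the attribution as an assumption -- but regardless, the brute-force check verifies squarefreeness of whatever prefix you generate.)

```python
MORPH = {"a": "abc", "b": "ac", "c": "b"}

def ternary_word(iterations):
    w = "a"
    for _ in range(iterations):              # iterate the morphism
        w = "".join(MORPH[ch] for ch in w)
    return w

def is_squarefree(w):
    return not any(w[i:i + L] == w[i + L:i + 2 * L]
                   for L in range(1, len(w) // 2 + 1)
                   for i in range(len(w) - 2 * L + 1))

w = ternary_word(8)                # the word roughly doubles each iteration
print(len(w), is_squarefree(w))    # 384 True
```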
And so I sat down to prove it. I wanted to show that any word reduced to a unique squarefree word; obviously, this is a case for a diamond lemma argument. Of course there are several cases based on how the reduction sites overlap. Most cases worked, but two were problematic. One I couldn't prove worked, and the other, well, just clearly didn't work. (ADDENDUM February 3: For what little it's worth, the other case does in fact work.)
Specifically, consider the word ababcbabc. Reducing the duplicated ab, we get abcbabc. But reducing the duplicated babc, we get ababc, which then reduces to abc. So despite being distinct squarefree words, abc is equivalent to abcbabc; there are weird hidden relations you wouldn't have expected, and the diamond property just fails. (Also, note that 'a', 'b', and 'c' could have been replaced with arbitrary words here.)
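Here's that counterexample checked mechanically, with a tiny one-step reducer (the names are mine):

```python
def squares(w):
    # All (i, L) with w[i:i+L] == w[i+L:i+2L], i.e. collapsible squares xx.
    return [(i, L) for L in range(1, len(w) // 2 + 1)
                   for i in range(len(w) - 2 * L + 1)
                   if w[i:i + L] == w[i + L:i + 2 * L]]

def collapse(w, i, L):
    # Reduce the square starting at i with half-length L: xx -> x.
    return w[:i + L] + w[i + 2 * L:]

w = "ababcbabc"
print(squares(w))                          # [(0, 2), (1, 4)]: "ab" and "babc"
print(collapse(w, 0, 2))                   # 'abcbabc' -- already squarefree
print(collapse(collapse(w, 1, 4), 0, 2))   # 'ababc' -> 'abc'
```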
And so I sent John an email pointing out this fact, with the subject line: "Your monoid is disgusting, and here's why."
John replied by pointing me to the MathWorld entry on the subject, because, well, people have studied this monoid before. And apparently there are enough of these weird hidden relations that regardless of n, despite the infinitude of squarefree words, the monoid is finite. (So, I would have lost that bet.) And there's a formula! Why is there a formula? (Along with the link, John left just the comment, "Good heavens.") I guess there probably is still some normal form for it, which is a stricter condition than just being squarefree, and you can enumerate these? Well, I don't really intend on getting out the book to check...
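For what it's worth, here's the formula as I remember it from the literature on free bands (stated for the monoid, identity included) -- take the exact form with a grain of salt, but it does reproduce the small cases, including the 7 for n=2 matching the squarefree-word count above:

```python
from math import comb, prod

def free_idempotent_monoid_size(n):
    # sum over k of C(n,k) * prod_{i=1}^{k} (k-i+1)^(2^i)
    return sum(comb(n, k) * prod((k - i + 1) ** (2 ** i)
                                 for i in range(1, k + 1))
               for k in range(n + 1))

print([free_idempotent_monoid_size(n) for n in range(4)])   # [1, 2, 7, 160]
```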
Actually, when I was a fourth-year at Chicago, first semester, I remember there was some problem on an algebra problem set where I decided that the thing to do was to show that for any n, sufficiently long words can't be squarefree. Of course, this is false, so it didn't work. (I figured, it was true for 2, so...) Unfortunately, I don't remember what the actual problem was, so I have no idea what a correct approach might have been. I have to wonder, though, if maybe proving that the free idempotent monoid is finite would have done what I wanted! Whether or not that would actually be within reach is another question...
So, this may seem dumb, but it is really annoying and I would like to have some clue what is going on.
Sometime a while ago -- during the summer, maybe? Certainly no later than that -- I noticed a problem with my shoes. Well, my left shoe. More specifically, the insole of my left shoe. Somehow it had developed these creases, see, and whenever I walked, it would start sliding backwards and up along the back surface of the shoe, actually coming out of the shoe. The result was that every so often I would have to stop and reset the insole of my left shoe.
As you can imagine this was pretty annoying. How did it happen? No idea. And why would something go wrong specifically with the left insole? I'd never had this problem before, although the shoes I was wearing were by the same brand and largely identical to the past few pairs of shoes I'd gone through.
Now, I had noticed this problem no later than the summer, but I didn't actually bother doing anything about it until this month. Got a new pair of shoes. Pretty much the same, though a different brand. And the problem happened again, almost immediately. (Say, after a week or two. It was this month; not a lot of time has passed!) It started out minor but has quickly grown to be almost as bad as before.
And once again, it is specifically the left insole that has the problem. Different manufacturer!
Is there like something wrong with the way I'm walking that is causing this? (I guess I should ask my mom about this next time I'm in New Jersey...) Why just on the left? Why would it be asymmetric? (Scoliosis?) Why did it never happen before this past year or so? Or maybe I'm inferring too much from too little; maybe it really is just coincidence.
I don't want to have to keep buying new shoes. I would like to have one all-purpose pair of shoes that lasts for a few years. Or at least a year, dammit. Buying a new pair of shoes after a month is ridiculous. I had assumed a new pair of shoes would fix the problem, but evidently that's not so. I don't want to keep walking with my insole coming out (especially not in this cold), but until I understand the problem any action I take might be useless.
(Hm, OK, a quick Google suggests that I should try just gluing it in. Yeah, I never thought to actually search this before. Oops. I guess this entry may be pretty pointless. Oh well.)
I was kind of hoping there was some way I could just get a new left insole, but, yeah, seems that's more trouble than it's worth.
EDIT February 3: Added a bit about a certain clue in "Magic: The Tappening".
OK, so by this point you've probably all heard about how this year's Mystery Hunt was a bit of a fiasco. (Too many puzzles, too difficult, lack of editing, etc.) Well, I'm not going to go into all that here. I'm just going to go over some of the puzzles I worked on or wanted to comment on, what happened on them, etc.
Oh! I should say. So I was solving for A Strange New Universe this year. This is basically the team consisting of the people who split off from Manic Sages because they didn't want to be on the writing team. Well, OK -- actually it was basically the MathCamp team, because Manic Sages usually is, but this year they were writing. But whatever, I was on it, because I'm a friend of Youlian's, so OK.
I tried to get some other people from Truth House to join in. Beatrix expressed interest, but was sick; Ryan Tea also, though he didn't join in till the end. Some others -- well, more on this later. Anyway, didn't get many people for very long, but they were helpful.
Anyway! To the puzzles!
( Cut for spoilers )
I think that is all the puzzles I have anything to say about. Now I'm going to go to sleep.
Well, those of you who pay attention to Mystery Hunt, anyway.
What is a finite set?
Well, the standard definition is that a finite set is one whose cardinality is a whole number. More formally, the way this works is that we make this set ω, the whole numbers, consisting of 0, 1, 2, etc., and then we say that a set is finite if it's in bijection with some element of ω.
This... this is not so great. I mean, it works fine ordinarily, but the problem is that it relies on this external thing, ω, and sometimes we might be in a setting where we don't have ω. For instance, suppose you want to take ZF, remove the axiom of infinity, and instead add an axiom that states that all sets are finite. How would you do that? You couldn't do it using this definition of finiteness, because you can't talk about ω if you don't have the axiom of infinity! Indeed, having an ω to talk about would contradict the axiom you wanted to add, that all sets are finite.
So, a different definition is needed. Famously, there's Dedekind's notion of finiteness: A set is finite if it's not in bijection with any proper subset of itself. That's intrinsic -- no reference to any external ω. And this certainly is an important property that differentiates finite sets from infinite ones. But the problem is that while this is equivalent to finiteness in ZFC, without choice, it becomes weaker. In ZF without choice, it's consistent that there exist sets which are infinite, but Dedekind-finite. So this can't be the essence of finiteness after all; just one more useful property. (Note that equivalence doesn't require the full strength of AC -- countable choice will do.)
Anyway so you go to the Wikipedia entry for "finite set" and you find that there are a number of these definitions that are equivalent in ZF. But most of these, while they may be useful and equivalent to finiteness, don't really seem like they are really what finiteness is about. Like, if you dropped some axioms and these became inequivalent, would you still use them? No.
The one that stands out is Kuratowski's definition. Say you have a set S. Take its power set. Take the smallest subset of its power set that: A. contains the empty set and all singletons, and B. is closed under taking unions of two sets. If S itself is in this set, then we say S is finite.
In other words, this is saying that S can be built up one element at a time. And that, really, is what finiteness means. The ordinary definition of finiteness is really saying this too, of course, because whole numbers are things that we build up one element at a time; it just has the problem that it's not intrinsic. And this is also why Dedekind-finiteness seems like the right definition at first: If a set isn't in bijection with any of its proper subsets, then you can remove one element at a time until you get the empty set, then put them back, thereby building it up one at a time. The problem is making that line of reasoning rigorous requires some form of choice.
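If it helps to see the definition operationally, here it is as a toy computation with Python frozensets (where of course every set is finite, so the answer is always yes -- the point is just to spell out the closure process; the names are mine):

```python
from itertools import combinations

def kuratowski_finite(S):
    S = frozenset(S)
    # Start from the empty set and the singletons...
    family = {frozenset()} | {frozenset([x]) for x in S}
    changed = True
    while changed:
        changed = False
        # ...and close under unions of two sets.
        for a, b in combinations(list(family), 2):
            u = a | b
            if u not in family:
                family.add(u)
                changed = True
    return S in family    # was S built up one element at a time?

print(kuratowski_finite({1, 2, 3}))   # True
```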
So Kuratowski-finiteness is what I'd go with, then, in settings where you can't build ω. Although this leaves the question of what about settings where you can build ω, but there's not enough to make the two notions equivalent? Yikes. That sounds bad. Well, I'm not a set theorist or a logician, so, uh, for now I'm just going to hope I don't encounter such a thing. I assume people have actually thought about this before...
Actually, Kuratowski's definition has a problem too -- it relies on the notion of power set. But I think the axiom of power set is one of the ones that constructivists like to throw out and replace with weaker versions. So much for being good in that setting. Is the use of the power set really essential there, though? It's just there to provide the background; I wonder if there's some way to get around that...
Well, this is something I'm definitely not going to think about! In any case, Kuratowski's definition is pretty neat and useful to have available, even if it's not the One True Essence of Finiteness like I initially thought it was...
Edit Jan 6: Seems it's more predicativists than constructivists (yes, there's a large overlap) that object to that, and -- man, to hell with predicativity. I'm going to revert to my earlier position and say that in contexts I might possibly care about, this is the right definition.
Assuming consistency of ZF, of course.
So do you remember "Appendicitis: The Movie"? (Probably not. That's why there's a link.)
Well, unfortunately, it seems that YouTube has taken it down for a Terms of Service violation. What that might be, I have no idea. And the person who actually recorded and edited and uploaded it was not Ingrid or Linda but the guy I didn't know named Paul who apparently none of them are really in contact with anymore, so none of us actually have a copy of the video anymore.
Well, I'm not giving up quite yet -- it still might be possible to find him; but good chance the whole thing is just lost...
I haven't posted anything here in a week, so, why not. (Note: I really doubt that most of the entries in the file will ever get written. There's simply too many and the motivation to write them is lacking.)
One thing that often happens when I'm helping students in the MathLab is that -- well, they'll suggest something that makes no sense, often phrased in terms of manipulating the symbols on the page, and I'll respond with something to the effect of "Let's talk about the numbers, not the symbols on the page".
Of course, in a very real sense we are talking about symbols on a page! What the student did wrong was to suggest something that doesn't respect the tree structure of those symbols. We represent mathematical expressions as strings, but really, they are trees. Well, OK, we don't quite just represent them as strings -- some things, like superscripts, subscripts, radical signs, fraction bars, etc., help to display the tree structure a bit more directly.
My attempts to correct these errors often involve saying -- well, suppose there was a sin(x+3), and they talked about sin(x); I would say "There's no sin(x) here. There's an x+3, and a sin(x+3), but no sin(x)." (Yes, you could expand it out with the angle addition formula. That is not the point.) This seems to work pretty well for individual instances, but does it really help with the underlying error? That's not really something I can see!
Now anyone who works with mathematical expressions regularly understands that expressions are trees, but apparently a lot of people don't, and it makes me wonder -- is this something we should be teaching more explicitly, and earlier? Related is the confusion about "order of operations", and how lots of people think this is a rule of mathematics, rather than a convention about how to write things (how to represent trees as strings).
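Python's own parser makes the point concretely: the string "sin(x + 3)" is stored as a tree, and sin(x) is simply not a subtree of it.

```python
import ast

tree = ast.parse("sin(x + 3)", mode="eval")
print(ast.dump(tree.body))
# Roughly: Call(func=Name(id='sin'),
#          args=[BinOp(left=Name(id='x'), op=Add(), right=Constant(value=3))])
# There's an x+3 in there, and a sin(x+3), but no sin(x).
```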
Of course, by all accounts, lots of teachers don't understand this stuff either, but hey, we can try...
For a long time I didn't know what the phrase "double parking" referred to.
From the way people used it, obviously it was a way that people sometimes parked; one that was obstructive, illegal, wrong. But just what problematic parking practice were they referring to? The "double" suggested to me that maybe they meant parking so as to take up two spaces (say, by parking on top of a line instead of inbetween them). But further example uses of the term showed that that couldn't be what it was.
I went through various other hypotheses but none of them ever quite fit how I heard the term used (and most didn't fit the name). What on earth could it refer to?
When I actually found out what it meant, I was horrified.
(Why does this post get the golden apple userpic, of all things? I'll let you figure out why I think it's appropriate.)
There should really be a book -- it could be called, say, "Topological and Ordered Algebraic Structures" -- which covers, like, all the basic properties of topological STUFF and ordered STUFF (groups, abelian groups, rings, fields, modules, vector spaces...) and, in particular, what happens when you complete them (and what structure the completion has).
Seriously, you can get out a basic algebra book like Dummit & Foote or something and learn pretty much all the basics of how just the simple algebraic versions work, but where is there a unified source for the topological versions? Even if it just contained statements and references without proofs, that would be helpful. And it could, like, presume prior knowledge of uniform spaces, because hey you should probably know that first if you want to think about these things. And state when things are unknown.
So far I've only seen this stuff in scattered sources, which are often quite incomplete, and often aren't as general as they could be (I get the impression lots of people care about topological vector spaces, but I think mostly over R and C). Really I'm asking for something that doesn't focus on the serious math most people care about, but rather about the obvious very general abstract questions that just kind of bug you. And it could go into more serious stuff where results exist, but mostly I just want something that thoroughly covers all the basic questions you're going to think to ask first, especially about the nicest cases (like say when the topology comes from an ordering, or a "generalized absolute value", or something, and there's a lot of algebraic structure...).
Maybe such a book exists? I would like to know of it if it does, because these things are bugging me. No, this has nothing to do with any of my actual work (which maybe I might get back to actually writing about in, uh, a few months :P )...
This reminds me -- I meant to ask on MathOverflow, what's the most general setting for "continuity of roots"? The roots of a polynomial vary continuously in the coefficients, so long as the leading coefficient doesn't become 0, right? Of course the problem is finding a correct formal statement of that, and also, well, proving it -- the various proofs I've seen have all been for particular special cases; never any nice general statement. What's up with that? (Similarly with statements that "topological completions of things which are complete in some algebraic sense are again complete in that algebraic sense", which I think is typically proved via some sort of continuity of roots...)
Yeah, I should actually go ask that.
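In the meantime, the numerical version of the claim is easy to poke at with numpy (this is just an illustration, not the general statement I'm after): perturb the coefficients a little, keeping the leading coefficient away from 0, and the roots move only a little.

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = np.array([1.0, -6.0, 11.0, -6.0])     # (x-1)(x-2)(x-3)
base = np.sort_complex(np.roots(coeffs))
for eps in (1e-2, 1e-4, 1e-6):
    moved = np.sort_complex(np.roots(coeffs + eps * rng.standard_normal(4)))
    print(eps, np.max(np.abs(moved - base)))   # displacement shrinks with eps
```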
This isn't really an example of something being *hard*, but I still think it's a worthwhile example -- if you have a balanced group, proving that its completion is again a topological group is a bit of easy abstract nonsense. But if you have a topological *ring*, proving that its completion naturally has the structure of a topological ring actually requires real proof.
For general topological groups, the completion is not again a group! Of course, for general topological groups, you have to ask, "the completion with respect to which uniformity"...
Here's an example: The completion of a topological field is not necessarily again a field (though if it's a field, it's a topological field). As of, uh, 1988, all known examples of this have zero divisors; is it possible to get an integral domain that isn't a field? Well, like I said, as of 1988 this is unknown... I kind of dread trying to find a more up-to-date source on this... I most likely won't try because, do I care that much? :-/