Lemma gal_conjg K E x : 'Gal(E / K) :^ x = 'Gal(E / x @: K).
I worked on the Galois theory needed in the formalization of the Odd Order Theorem. I wanted to check to see if I had this lemma. It's been over a decade but I think I found it: https://github.com/math-comp/math-comp/blob/84cb21e7bb853041...
This says that for intermediate fields K and E (within some ambient separable, finite-dimensional field extension L / F), 'Gal(E / K), the group of automorphisms of E that fix K†, conjugated by x‡ (conjugation is written :^) is equal to 'Gal(E / x @: K), the group of automorphisms of E that fix x @: K, which denotes x(K).
The fields E, K and F in this proof correspond to the fields L, M and K from the article; 'Gal(E / X) corresponds to X* from the article; and of course x corresponds to τ from the article.
†) Technically, that fix E ∩ K, since our notation doesn't actually assume that K ≤ E.
‡) I believe by type inference x must be something we can conjugate the group by, and thus it must be a generic automorphism of E, i.e. an F-automorphism of E.
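To unpack why conjugation shows up at all, here is a rough sketch in ordinary notation (my own addition, and hedging on the exact left/right conjugation convention that :^ uses in MathComp):

    % For an F-automorphism x of E and any sigma in Gal(E / K):
    %   (x . sigma . x^{-1})(x(a)) = x(sigma(a)) = x(a)   for every a in K,
    % so x sigma x^{-1} fixes x(K) pointwise; conjugating back with x^{-1}
    % gives the reverse inclusion. Hence
    \[
      x \,\mathrm{Gal}(E/K)\, x^{-1} \;=\; \mathrm{Gal}\bigl(E / x(K)\bigr),
    \]
    % which, in the article's notation (M^* for the group of M-automorphisms
    % of L, and tau for x), reads
    \[
      \tau\, M^{*}\, \tau^{-1} \;=\; \bigl(\tau(M)\bigr)^{*}.
    \]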
Let me take this opportunity to post one of the best texts on Galois Theory I have read -- and I had to go through quite a few while preparing for a class.
https://pages.uoregon.edu/koch/Galois.pdf
The subject is developed very naturally and every idea is beautifully motivated. It begins with a quick one chapter intro of Arnold's proof of Abel-Ruffini.
Richard Koch's home page (https://pages.uoregon.edu/koch/) has other examples of his fantastic pedagogy.
> It begins with a quick one chapter intro of Arnold's proof of Abel-Ruffini.
The key to understanding/motivating Galois theory is Abel-Ruffini, which is a corollary of Galois theory. And the simplest way to understand that is Arnold's topological proof, which I learned about from this video:
https://www.youtube.com/watch?v=RhpVSV6iCko
Watching that video and rolling it around in my head completely demystified Galois theory for me, years after literally 2 semesters of algebra in undergrad. Everything about normal subgroups and commutators and splitting fields and blah blah blah immediately became tangible and obvious. It should be a crime not to teach this proof first.
The coverage in Koch's book looks good too - lots of pictures - and, funnily enough, it links to a different YouTube video.
Edit: copy-pasting notes I took from the video after watching.
------
The idea is to continuously perturb each of the coefficients of the polynomial along a loop (change each of them from their initial value such that they traverse a path that returns them to that initial value at the end of the path) and study what happens to the roots of the polynomial.
Note, once all coefficients have returned to their original values, the entire set of roots also returns to itself, but each individual root does not necessarily return to its original value. In general you get a permutation of the set of roots, and in this way we get a mapping between loops of the coefficients and permutations of the roots.
Also note, we can produce coefficient loops that map to any permutation of the roots by permuting the roots and “watching” the coefficients.
Hence, the way to prove Abel-Ruffini is to show that any expression involving the coefficients (i.e. a formula for the roots in terms of the coefficients) returns to itself after the coefficients traverse their loops, while the roots do not (and therefore the expression cannot capture all of the roots). For example, an immediate corollary of the construction of the mapping between loops of coefficients and roots is the fact that a general solution involving only -, +, ×, ÷ is not possible: -, +, ×, ÷ are all single-valued, and therefore no composition thereof could produce multiple roots.
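To make the single-valuedness point concrete, here is the standard toy example (my own addition, not from the video notes): take the quadratic z^2 = c and drive the coefficient c once around the unit circle.

    % Coefficient loop:
    \[
      c(t) = e^{2\pi i t}, \qquad t \in [0, 1], \qquad c(1) = c(0) = 1.
    \]
    % Tracking the two roots continuously along the loop:
    \[
      z_{\pm}(t) = \pm e^{\pi i t}, \qquad z_{+}(1) = -1 = z_{-}(0),
    \]
    % so this coefficient loop swaps the roots. Any formula built from c
    % using only +, -, *, / is a single-valued function of c, hence it
    % returns to its starting value along the loop and cannot follow a
    % root that ends up somewhere else.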
There is indeed a deep connection between what is going on behind Arnold's proof and classical Galois theory. But it needs quite a bit of sophistication to flesh out properly (not apparent in his famous lectures given to high school kids). There is a Galois theory for Riemann surfaces of algebraic functions where the coverings behave like fields do in the classical correspondence. If anyone is interested, check out chapter 3 of Khovanskii's Galois Theory, Coverings and Riemann Surfaces.
I mean, calling Arnold's proof actually topological is probably a stretch (the "deep connection" you're talking about). It doesn't really use any topological facts about either the loops or the embedding space. Continuity isn't really required, I don't think? I should've said contiguity instead. It's just a very, very nice model for the theory (in the sense of model theory) that lends itself to immediate visualization.
Galois theory was my favourite course in the final year of my maths undergrad.
I have no idea how this post is at the top of HN. I barely remember what the symbols mean.
As another math major who doesn't do math anymore, I feel this comment so deeply in my soul
It's a bit shocking to me: I still remember all the concepts quite clearly from when I studied Galois theory ~20 years ago, to the point where I can run through a lot of the proofs conceptually in my head, but the vocabulary is GONE. Like, completely a blank, I don't remember almost any of the abstract algebra terms that all of this is expressed in.
It reminds me of the truth of the advice that my category theory professor gave us, that the definitions are both the least important and the most important things, simultaneously. They're the least important in that they're just words that wrap up very simple concepts, and merely knowing the definitions doesn't actually mean you can work with the concepts. But they're the most important thing in that most of higher level math really boils down to picking out the exact right set of definitions to use, at which point proofs tend to pop out as trivial and obvious statements using those definitions. And at a more practical level, you won't be able to read any math if the definitions are not ingrained, so you might as well get a head start and just rote memorize them if you want to succeed.
But it's interesting that the language is far less sticky in memory than the underlying intuition. My guess is that because the intuition is so much harder to develop, it wires itself in much more deeply than the words themselves, which can be pretty easily learned in a few hours of flashcard work.
Me too! But I can't really remember why. The proof was impressive.
Galois theory is so useful that, besides its fundamental importance in algebra, it birthed the whole subject of Galois connections, which crop up all over the place, including in theoretical CS: https://en.wikipedia.org/wiki/Galois_connection .
Galois connections are super useful for static program analysis.
In particular, for abstract interpretation. A great intro book is [1].
[1] Program Analysis – An Appetizer. https://arxiv.org/pdf/2012.10086
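As a concrete illustration (a minimal sketch of my own, not taken from the book above): the textbook sign abstraction used in abstract interpretation forms a Galois connection between sets of integers and a small lattice of signs, with abstraction alpha and concretization gamma related by alpha(S) <= a iff S is a subset of gamma(a).

    # Minimal sketch of a Galois connection for abstract interpretation:
    # concrete domain = sets of integers, abstract domain = a small sign
    # lattice {BOT <= NEG, ZERO, POS <= TOP}. Names and domain are
    # illustrative, not from any particular analyzer.
    from itertools import product

    BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"

    # Partial order on abstract values: (a, b) in LEQ means a is at least
    # as precise as b, i.e. gamma(a) is a subset of gamma(b).
    LEQ = (
        {(BOT, a) for a in (BOT, NEG, ZERO, POS, TOP)}
        | {(a, a) for a in (NEG, ZERO, POS, TOP)}
        | {(a, TOP) for a in (NEG, ZERO, POS)}
    )

    def leq(a, b):
        return (a, b) in LEQ

    def alpha(s):
        """Abstraction: the most precise sign describing the set s."""
        if not s:
            return BOT
        if all(n < 0 for n in s):
            return NEG
        if all(n == 0 for n in s):
            return ZERO
        if all(n > 0 for n in s):
            return POS
        return TOP

    def gamma(a, universe):
        """Concretization of a sign, restricted to a finite test universe."""
        return {
            BOT: set(),
            NEG: {n for n in universe if n < 0},
            ZERO: {n for n in universe if n == 0},
            POS: {n for n in universe if n > 0},
            TOP: set(universe),
        }[a]

    if __name__ == "__main__":
        universe = list(range(-3, 4))
        # The defining adjunction of a Galois connection:
        #   alpha(S) <= a   if and only if   S is a subset of gamma(a).
        for bits in product([False, True], repeat=len(universe)):
            s = {n for n, keep in zip(universe, bits) if keep}
            for a in (BOT, NEG, ZERO, POS, TOP):
                assert leq(alpha(s), a) == s.issubset(gamma(a, universe))
        print("adjunction holds on every subset of", universe)

In a real analyzer gamma would be defined symbolically rather than over a finite universe; the finite universe here is only there so the adjunction can be checked exhaustively.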
Similarly, Galois theory was a penny dropping moment in my understanding of mathematics and has remained surprisingly useful late into my career, as software has become increasingly "mathy".
Would you mind sharing how it turned out useful?
Hello HN! Thank you for upvoting this post to the front page. I'm quite delighted to see this post about a relatively obscure lemma from Galois theory make it to the front page.
This post is primarily intended as notes on the subject. The theorems and proofs involved in Galois theory can often feel too abstract, so I find it helpful to work through them with concrete examples. This is one such example that I wanted to archive for future reference on my website. If you spot any errors, please let me know.
The html source is beautiful, did you write it by hand?
Thanks! Yes, I handcraft all my HTML and CSS. I'm glad you noticed the HTML and liked it. I find great joy in crafting my website by hand. It's like digital gardening. I grow all my HTML and CSS myself. It's all 100% organic and locally sourced!
I rarely ^U nowadays, but your site was so clean that I couldn't resist!
Just as a side note: when writing HTML5 by hand, you can use the full power of the language, most notably optional tags (no need to write html, body, etc.) and auto-closing tags (no need to close p, li, td, etc.). You may get something even crispier!
For a reference, see the Google HTML style guide: https://google.github.io/styleguide/htmlcssguide.html#Option...
And the official HTML5 reference for a complete list of optional tags: https://html.spec.whatwg.org/multipage/syntax.html#syntax-ta...
> Notice that, when writing html5 by hand, you can use the full power of the language, most notably optional tags (no need to write html, body, etc) and auto-closing tags (no need to close p, li, td, etc). You may get something even crispier!
Yes! In fact, sometime back I wrote a little demo page to show the minimal (but not code-golfed) HTML we can write such that it passes validation both with the Nu HTML Checker and HTML Tidy.
Here's the demo page: https://susam.net/code/web/minimal.html
Here's the Nu HTML Checker output: https://validator.w3.org/nu/?doc=https%3A%2F%2Fsusam.net%2Fc...
Here's the HTML Tidy (version 5.8.0) output:
Here's the HTML:
That said, when writing my own posts, I prefer keeping optional and closing tags intact. Since I use Emacs, I can insert and indent closing tags effortlessly with C-c /. It's a bit like how some people write:
10 PRINT"HELLO
But I've always preferred:
10 PRINT "HELLO"
I find the extra structure more aesthetically pleasing.
Thank you for sharing your notes—everyone needs a different push/viewpoint to make key concepts click, and, the more expositions are out there, the more likely someone will be to find the one that's right for them!
Is M^* Stewart's notation? I'm much more used to seeing it used for a dual space than a group of automorphisms. Conventions aside, it's a bit unfortunate in that it doesn't specify the field on which we're acting! I think that the notation Aut(L/M), or Gal(L/M) for a Galois extension, is more common.
Thank you! Yes, M^* is Stewart's notation. Indeed this notation is "lossy" in the sense that it loses information about what the extension field is. The extension field is usually introduced once for every example or theorem and then M^* is used as a shorthand notation. For example, in my post, the extension field is introduced like this:
"The notation M^* denotes the group of all M-automorphisms of L with composition as the group operation."
By the way, Stewart also uses the notation Γ(L/M) in several places, but in this specific lemma the notation M^* is used, and therefore my post uses the same notation.
I think the usual notation works better for this lemma.
It is quite fascinating to think that Galois lived only 20 years in the early 19th century, yet he conceived a theory whose profound significance and impact are still felt 200 years later. Imagine if he had lived 80 years!
I’m inclined to agree, but who knows… maybe he just happened to have his big insight early on, and if he’d lived longer he’d never have done anything quite as significant. Plenty of people do great things only once.
Is there such a thing as the graph of lemmas and theorems in mathematics, somewhere online?
I can't really understand much of mathematics, but I would appreciate seeing the graph of how all major proofs are constructed with help of other proofs.
The thing with mathematics is that there are often many different ways to prove things, so there is no universally agreed-upon graph of true mathematical statements in terms of which helps prove which! That being said, you might want to check out ProofWiki [1], an online collaborative project featuring mathematical proofs consistently linked together.
[1] https://proofwiki.org/wiki/Main_Page
Ah, just what I was asking for. Thanks
There is a large library of mathematics formalized in Lean called mathlib. There are graphs for the module dependencies, but I haven't seen any at the level of definitions (including lemmas and theorems).
You mean like a dependency tree? There isn’t a comprehensive one — it would be way too big and we don’t have all mathematical knowledge written down in one place anyway.
But if you’re interested in any given classical result, if you ask I’m sure a mathematician can give you a rough idea of what the path back down to first principles is. If you’re not literally interested in starting by assuming the existence of the empty set, say, then opening any introductory book on the topic will give you an idea. Just look at the proof of the result and follow its references back. It won’t go that deep.
I see. I was just thinking I would personally appreciate seeing a big graph of proof-dependencies. A bit like "Seeing the Forest from the Trees".
Every mathematician is working on their own TREE, I assume. But it might be useful if somebody was looking at how the FOREST is doing.