blog for drafts and other unsuitable stuff; no reblogs please
Text
Conjugate Group Elements are Naturally Isomorphic
Basic category theory singles out the partition of a group into its conjugacy classes as a particularly natural one: elements are conjugate iff they’re naturally isomorphic, in a sense spelled out below.
(Note: this is elementary, and has been well-known for a long time; it appears, for example, as exercise 1.3.30 in Leinster's excellent, and free, "Basic Category Theory".)
The key to this idea is finding a way to understand the elements of some group G as functors, and then asking which group elements are naturally isomorphic; not so strange, since natural isomorphism is just about the most reasonable equivalence relation on functors.
To see the elements of G as functors, we need two facts: 1) that elements of a group G are in bijection with homomorphisms from ℤ to G, where an element g corresponds to the homomorphism mapping 1 to g (i.e. n ↦ gⁿ), and 2) that groups are equivalent to one-object categories with all arrows invertible (i.e. their deloopings, denoted with a B in front of the group name), and that functors between these are exactly group homomorphisms.
Now that we can think of elements of G as functors from the one-object category Bℤ to the one-object category BG, we should ask when two such functors are naturally isomorphic. First, we'll characterize the natural transformations; they'll turn out to all be invertible.
A natural transformation between two parallel functors comprises a collection of arrows in the target, BG (i.e. a collection of elements of G), indexed by objects in the source, Bℤ; since Bℤ has just one object, a natural transformation is given by a single element of G.
Given two such functors A, B, the naturality condition tells us which (if any) elements of G are the one and only component of some natural transformation from A to B.
If g is the component of a natural transformation from A to B, the corresponding naturality squares (one square for every arrow in the source category, i.e. one for every integer) must commute. The square for the integer n is below; the asterisk denotes the one object of BG.
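        Aⁿ
   * ------> *
   |         |
 g |         | g
   v         v
   * ------> *
        Bⁿ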
(Reminder: A, B, and g are all elements of G, but we think of A and B as denoting functors from Bℤ to BG, and of g as the one and only component of some natural transformation from A to B.)
The commutativity of this nth square says that g . Aⁿ = Bⁿ . g, or in other words that Bⁿ = g . Aⁿ . g⁻¹. It's not too tricky to show (e.g. by induction) that the n=1 case is already equivalent to this statement for all integers n.
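(Sketch of the induction: if Bⁿ = g . Aⁿ . g⁻¹, then Bⁿ⁺¹ = Bⁿ . B = (g . Aⁿ . g⁻¹) . (g . A . g⁻¹) = g . Aⁿ⁺¹ . g⁻¹; the case of negative n follows by taking inverses.)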
So: The group element g is the component of a natural transformation from A to B iff B is the g-conjugate of A. This clearly has an inverse natural transformation, from B to A, whose component is g⁻¹, since A must also be the g⁻¹-conjugate of B.
Now we know that every natural transformation is a natural isomorphism (since it has an inverse), and furthermore two functors are naturally isomorphic exactly when the corresponding two group elements are conjugate.
This is just what we wanted: the natural isomorphism equivalence relation on functors specializes to the conjugacy equivalence relation on elements of G, singling out that relation (and the corresponding partition into conjugacy classes) as a structurally important one.
Text
plus c
The inverse image of a point in the range of a linear map T: V -> W is an affine subset of V, of the form v + null(T), and thus of dimension dim(null(T)). The derivative, viewed as a linear map on the space of differentiable functions on an interval, has a 1-dimensional nullspace: the subspace C of constant functions. So the inverse image of any function in its range is a 1-dimensional affine subset of the space of differentiable functions. The “+ c” in the usual treatment of antidifferentiation is exactly this “+ C”; in some sense you’re really adding an arbitrary constant function, not just an arbitrary constant.
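For example, the inverse image of x ↦ 2x under differentiation is {x ↦ x² + c : c ∈ ℝ} = (x ↦ x²) + C, a line in function space parallel to C.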
Text
R^n isn’t a field for n>2, 2nd draft
Here's a relatively simple argument I'm working on for why R^n doesn't have the structure of a field (compatible with the usual topology) for n>2.
The multiplicative group of a topological field has, as its underlying space, the underlying space of the field with a single puncture, because we have to remove 0 to get the multiplicative group. This group is abelian by the field axioms, and since the space is a manifold in this case, the multiplicative group must actually be an abelian Lie group (a topological group whose underlying space is a manifold is automatically a Lie group, by the solution to Hilbert's fifth problem).
An abelian Lie group must have every connected component homeomorphic to some product of a euclidean space and a torus (i.e. of lines and circles). For n=1, we can puncture the line R to get a disjoint union of two lines; for n=2, we can puncture the plane R^2 to get a cylinder (i.e. the product of a line and a circle). The claim here is that, for n>2, punctured R^n is not homeomorphic to any product of a euclidean space and a torus, and therefore it cannot be given the structure of an abelian Lie group.
To show that there is no such homeomorphism, we show that these two kinds of spaces are never even homotopy equivalent (a strictly weaker condition than homeomorphism), except in the cases mentioned above. Punctured R^n is homeomorphic to the product of R and the (n-1)-sphere S^(n-1), and is thus homotopy equivalent to S^(n-1). The product of j-dimensional euclidean space and the k-torus, R^j × T^k, is homotopy equivalent to T^k.
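(Concretely, x ↦ (x/|x|, log|x|) gives a homeomorphism from punctured R^n to S^(n-1) × R.)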
For k>0, the fundamental group of T^k is Z^k, and all of its higher homotopy groups are trivial. On the other hand, the (n-1)-st homotopy group of S^(n-1) is always Z. If k>1, then no homotopy group of T^k is Z, so there can be no homotopy equivalence, and a fortiori no homeomorphism.
So the only possibilities for homotopy equivalence are k=0 and k=1. These cases correspond to the real and complex fields, respectively. When k=0, T^k is just a single point, which isn't a sphere at all (although each of the two connected components of S^0 is homeomorphic to T^0). When k=1, we do actually find that T^1 = S^1.
Thus, the only euclidean spaces which have topologically-compatible field structures are R and R^2.
Text
i’m trying to write a post about a simple argument i thought of for why there’s no field structure on R^n (compatible with the usual topology) except for n=1,2.
there should be an abelian Lie group structure on the underlying topological space, for the additive group of the field, and then another abelian Lie group structure on the punctured version of the underlying space, for the multiplicative group of the field.
an abelian Lie group is a disjoint union of products of lines and circles. so a topological field structure on R^n is only possible if puncturing R^n yields a disjoint union of products of lines and circles. for n=1, you get a pair of lines, which works. for n=2, you get a cylinder (a line times a circle), which also works.
but since R^n minus 0 is homeomorphic to an (n-1)-sphere times a line, it would be enough to rule out the possibility of getting R^n to be a field if we could show that an (n-1)-sphere isn’t a product of lines and circles unless n=1,2. this should be easy to do with de Rham cohomology, and that’s the piece of the argument i’m still working out in detail.
so that’s the whole argument! a topologically compatible field structure on R^n would have to include an abelian Lie group structure on punctured R^n, and that’s not topologically possible when n>2.
Text
Thinking some thoughts about posets today. Looks to me like the Möbius function of a poset assigns to each element the Euler characteristic of its downset. I have only convinced myself of this in the special case of Boolean algebras. Some sources take Möbius functions to be binary; in that case I mean the unary function μ(p) = μ(p,0), where 0 is the minimum element, as in slide 106 here.
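(For example, in the Boolean algebra of subsets of a finite set, μ(S) = (−1)^|S|.)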
Text
The House of Pain Is Built on Sand
Above is the title of a post about Ainslie’s theory of pain (as presented in his book Breakdown of Will) that I hereby resolve to write. I’ll be reblogging this post from myself with drafts, additions, etc.
Text
Notes on the Einstein Fandom, first draft
I recently remembered something my old roommate once said to me, when we were undergrads, about how people’s perception of Einstein evolves as they study physics.
Practically everyone starts out thinking of Einstein as one of the greatest physicists of all time, if not the greatest. His name is an epithet for genius, the theories he’s best known for (special & general relativity) have stood the test of time, and he is typically portrayed as good-natured and humble. He is probably the physicist whose name lay people encounter most often.
As one studies more physics, it becomes clear that there have been many important and brilliant physicists, and Einstein’s relative importance among them seems to decline. Sure, he gets great PR, but plenty of other people deserve to be just as famous. He becomes “just another fairly important physicist” in one’s mind, and the list of “fairly important physicists” grows rapidly as one learns more.
But then, at some point in one’s physics education, it becomes increasingly obvious that Einstein really does stand out, even among the greatest physicists of the past! One learns a little about what physicists actually thought of their contemporaries; one reads that Wigner thought Einstein’s “mind was both more penetrating and more original than von Neumann's”, and many similar statements. One notices that most great physicists made a major contribution or two to a branch or two of the subject. Meanwhile, in a single year, at the beginning of his career, Einstein made three revolutionary contributions:
1) By taking the idea of energy quantization, as proposed by Planck, seriously (apparently Planck thought of it as “a purely formal assumption”), and applying it to electromagnetism and optics, Einstein invented the idea of photons, providing the first successful explanation for the photoelectric effect. This is what he won the Nobel for, and it is one of the most important chapters in the origin story of quantum mechanics.
2) By taking the atomic/molecular theory of matter seriously (which some physicists, e.g. Mach, considered a convenient fiction, as molecules were too small to observe directly), Einstein derived a novel description of Brownian motion, which connected the physical properties of matter’s molecular constituents to the dynamics of macroscopic objects. This is mainly what finally convinced the remaining skeptics that matter was made of microscopic discrete objects.
3) By taking the Lorentz transformations (as developed by Poincaré, Lorentz, FitzGerald, et al.) to indicate something fundamental about the geometry of spacetime, Einstein invented the special theory of relativity, explaining the relativistic effects known to earlier physicists without reference to a preferred rest frame, in a way that has informed all subsequent physical and philosophical ideas about space and time. As a consequence of special relativity, energy and mass turn out to be interconvertible.
But Einstein was just getting started! He continued to make pioneering contributions to quantum theory - including developing the first quantum theory of the heat capacity of solids, predicting (with Bose) the existence of Bose-Einstein condensates, and (with Podolsky & Rosen) pointing out issues related to hidden variables and locality. He argued for many years with Bohr about the foundations of the theory in a productive and insightful way; as Tim Maudlin puts it, “while Einstein won—and would continue to win—all the logical battles, Bohr was decisively winning the propaganda war”. It was Einstein’s insistence on focusing on the nonlocality of standard quantum theory (which Bohr apparently did not think was terribly important or interesting) that led to Bell developing his eponymous theorem, decades later; Bell presented his theorem explicitly as an extension of the EPR paper (which was actually written by Podolsky, but based on work he did with Einstein & Rosen).
And of course there’s Einstein’s own realization, almost as soon as he’d published his work on special relativity, that it was not compatible with Newton’s theory of gravity, and his subsequent development of general relativity, requiring him to learn the relatively cutting-edge mathematics of Riemannian geometry and tensor calculus to make his physical intuitions precise.
The complete list of Einstein’s publications is pretty impressive; I’ll just note here that, while he was making all these seminal contributions to statistical mechanics, special & general relativity, and quantum theory, he apparently found time to resolve the long-standing “tea leaf paradox” in fluid dynamics.
In conclusion, the people who are vocal about how awesome Einstein was are distributed bimodally with respect to their level of physics education.
Text
One’s Modus Ponens Is Another’s Modus Tollens, draft
The title phrase refers to an important point about arguments, which can be easily described with some simple logical terminology. It’s not new or obscure, but I recently found that many of my friends aren’t familiar with it, so I’ll try explaining it here.
The idea is that there are two different conclusions one could draw from a statement of the form “if X then Y”, depending on one’s prior beliefs about X and Y. Consider two people, each of whom becomes convinced that the inference or implication “if X then Y” is true.
Alice, who started out believing that X is true, will apply the rule known as modus ponens. This is the rule that lets us conclude, from “if X then Y” and “X is true”, that “Y is true”. So Alice ends up believing that Y is true.
Bob, who started out believing that Y is false, will apply the rule known as modus tollens. This lets us conclude, from “if X then Y” and “Y is false”, that “X is false” - for if X were instead true, we could use modus ponens and “if X then Y” to conclude “Y is true”, which contradicts Bob’s other belief. So Bob ends up believing that X is false.
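For a concrete instance, take X = “it rained overnight” and Y = “the grass is wet”, with the shared premise “if it rained overnight, then the grass is wet”. Alice, who believes it rained, concludes that the grass is wet; Bob, who can see that the grass is dry, concludes that it didn’t rain.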
Because Alice and Bob started out with different beliefs, they drew completely different conclusions when they learned the implication “if X then Y”, and this ambiguity of conclusion is what the phrase “one’s modus ponens is another’s modus tollens” points to.
Text
august book of the month, first draft
August's Book of the Month is Edouard Machery's "Doing Without Concepts" from 2009, which is primarily about the psychology (and secondarily the philosophy) of concepts. It develops and elaborates the argument first put forth by the author in his 2005 paper "Concepts are not a natural kind", published in Philosophy of Science.
The thesis is Machery's "heterogeneity hypothesis" about concepts, which has five components:
(1) For each category or class, we typically have several coreferential concepts. So we have not one but several concepts of "fruit", "metal", "birthday", and so on.
(2) These coreferential concepts are not just substantially different from one another, they have very few properties in common at all. They are heterogeneous kinds of concept.
(3) There are at least three distinct and heterogeneous kinds of concept whose existence is well-supported by experiments: exemplars, prototypes, and theories. There may well be others.
(4) These heterogeneous kinds of concept are used in distinct cognitive processes, so that to each kind of concept there corresponds a distinct kind of procedure for categorization, induction, and other things we do with concepts.
(5) Concepts are so heterogeneous that they do not form the type of natural kind about which it is useful to seek scientific generalizations, and thus the term "concept" should be eliminated from the theoretical vocabulary of psychology.
Before outlining the heterogeneity hypothesis, Machery takes a chapter to say what he takes psychologists to mean by "concept", and then devotes another chapter to what philosophers mean by the term. Briefly, psychologists are talking about bodies of knowledge stored in long-term memory that are used by default in the processes underlying higher cognitive competences. The phrase "by default" is meant to distinguish information that spontaneously comes to mind when considering a category from less-immediately accessible relevant information that we can dig up from memory if necessary. While there is not much agreement on how to characterize or distinguish the "higher" cognitive competences, it is typically the case that perceptual and motor competences are excluded from this class. In any case, there is widespread agreement that categorization (the cognitive competence most commonly discussed alongside the theory of concepts), deduction, induction, planning, and analogy-making are among the higher cognitive competences.
Philosophers, on the other hand, have something substantially different in mind when they talk about "concepts". In fact, philosophers are substantially less uniform than psychologists in their ideas about what concepts are, but Machery focuses on a relatively common view among philosophers who engage with psychologists about the nature of concepts. While a psychologist would say that we use our concept of "dog" to categorize something as a dog, a philosopher would say that our concept of "dog" is that which allows us to have propositional attitudes about dogs (or things which might be dogs) in the first place - so our concept of "dog" is precisely that which lets us formulate the predicate "is a dog" at all. So the goal of a philosophical theory of concepts is to explain how it is that we can have "contentful states" and "propositional attitudes".
Psychologists tend to take this capacity for granted. The goal of a typical psychological theory of concepts is to describe the nature of those bodies of knowledge that we use by default in the processes underlying higher cognitive competences: how they are formed, how they are stored, how they are used, their localization in the brain, and so on. The point here is that, contrary to what Machery says is commonly believed, psychological and philosophical theories of concepts are quite different; thus 1) the heterogeneity hypothesis addresses the former and 2) some philosophical attacks on psychological theories of concepts are undermined by their failure to recognize this distinction.
Having established what a psychological (as opposed to philosophical) theory of concepts is supposed to do, Machery sets the stage for his hypothesis by describing what he calls "the received view" on concepts, which is (sometimes explicitly, usually tacitly) shared by most psychologists. While being well-aware that concepts vary substantially among themselves, psychologists believe that many inductive generalizations about the class of concepts will be possible, by virtue of essential common properties shared by all kinds of concepts. Machery takes great care to show that he is not constructing a strawman by quoting a variety of widely-cited sources by well-known psychologists to support the existence of this view.
(1) The first tenet of the heterogeneity hypothesis is that we typically have several concepts for each category. This is distinct from the existing views of "scope pluralism" (the stance that there are several substantially different types of concept - so that animal concepts, substance concepts, event concepts, artifact concepts, etc. are all different) and "competence pluralism" (the stance that each higher cognitive competence is underwritten by a different class of concept - so that we have one concept of "dog" for categorizing, another concept of "dog" for induction, etc.). Machery contends that we have at least three different concepts of "dog" (to be spelled out below), and that we generically use all of them for each competence. Thus, we actually have several distinct categorization processes, several induction processes, and so on (one for each fundamental kind of concept).
(2) The second tenet is that coreferential concepts (e.g. the several concepts of "dog" we each presumably have) have very little in common. There is, therefore, no reason to expect the search for generalizations among them to be fruitful.

It is important at this point to distinguish the heterogeneity hypothesis from a "hybrid theory of concepts". The latter is a class of concept theories which says each concept is divided into several parts; these parts store distinct kinds of information about the corresponding category, and coordinate in the sense that they jointly produce a single output in an act of categorization (or induction, etc.). The crucial distinction between hybrid theories and the heterogeneity hypothesis is that the latter does not assume coreferential concepts are coordinated, so that one of our concepts of "fruit" might lead us to categorize an avocado as a fruit, while another of our concepts of "fruit" might not. This lack of coordination makes it more plausible that there are distinct kinds of bodies of knowledge in play here (a theory and a prototype respectively, as we'll see below), rather than something best described as two parts of a single concept of "fruit".
(3) Machery makes no attempt to exhaustively classify the heterogeneous "kinds of concept" known to psychology, but argues that we know of at least three: exemplars, prototypes, and theories. All three of these have their proponents among psychologists, who tend to argue not only that their preferred kind of concept exists, but that the others do not (i.e. they argue that only their preferred kind of concept explains various experimental data). This is a pointless conflict caused by the assumption that concepts are homogeneous (i.e. the received view), Machery argues, since the existence of all three kinds is well-supported. For historical / chronological reasons, he presents them in the order "prototypes, exemplars, and [then] theories", but I think it's more natural to put exemplars first, as "exemplars, prototypes, and theories" has a natural gradation from least to most abstract.
Exemplars are a kind of concept based on memories of, and properties believed to be possessed by, particular instances of some category (or, in some versions, particular encounters with instances). Thus, an exemplar theorist would say that our concept of "dog" consists of a set of dog exemplars - a body of knowledge consisting of memories of, and beliefs about, the particular dogs we've seen or otherwise encountered.
Categorization works by comparing putative category members to our set of exemplars, and computing a similarity measure. In contrast to prototype theories of concepts, exemplar theories typically propose a nonlinear similarity measure. This means that how much any one similarity between a putative category member and some exemplar contributes to the similarity measure depends on which other similarities are detected. For example, seals bark (like our exemplars of dogs), but that similarity to dogs is made unimportant by their lack of legs. Nonlinearity is useful for explaining experimental data wherein subjects were more likely to classify something in a category if it was extremely similar to a single known category member but only moderately so to others, than if it was moderately similar to most known members.
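To make this concrete, here is a toy sketch (in Python) of the kind of exponentially decaying similarity measure used in exemplar models like Nosofsky's generalized context model; the feature encoding and the sensitivity parameter c are invented for illustration, not taken from the book:

import math

def exemplar_similarity(probe, exemplars, c=2.0):
    # summed similarity to each stored exemplar, decaying exponentially
    # with city-block distance; since exp(-c * sum) multiplies across
    # features, one big mismatch can swamp many matches - this is the
    # nonlinearity at issue
    return sum(math.exp(-c * sum(abs(p - x) for p, x in zip(probe, ex)))
               for ex in exemplars)

# invented binary features: [barks, has legs, furry]
dog_exemplars = [[1, 1, 1], [1, 1, 1], [1, 1, 0]]
seal = [1, 0, 1]  # barks like our dog exemplars, but has no legs
print(exemplar_similarity(seal, dog_exemplars))  # low despite the barking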
Machery gives a history of the exemplar theory of concepts, points out some potential problems with it, gives a concrete example of a similarity measure, and ultimately cites convincing empirical evidence that exemplars exist (and are used by default in the processes underlying higher cognitive competences).
Prototypes are a more abstract, but still similarity-based, kind of concept. A prototype is a kind of statistical summary of the properties possessed by members of a category. Prototype theories are mostly either "featural" (referring to binary properties that can be either possessed by putative members or not) or "dimensional" (referring to properties that can be possessed to some degree).
Prototypes are usually taken to contain information about typicality, aka "category-validity", where a property P is typical of category C iff an object is likely to have property P when it is a member of category C. Prototypes are also usually taken to contain information about cue-validity, aka "diagnosticity", where a property P is cue-valid for category C iff an object is likely to be in category C when it has property P. The example Machery gives is that having legs is typical for dogs; barking is cue-valid for dogs. The crucial thing prototype theories have in common is the idea that concepts store statistical information about category members.
This statistical information is used as part of a similarity measure in categorization tasks, but, contrary to the case of exemplars, this is usually (but not always) taken to be a linear function of the individual feature similarities (so that features contribute to similarity independently of one another). Machery says that linear similarity measures tend to be used by prototype theorists to explain "typicality effects", wherein subjects are more likely to classify things into a category when they possess properties typical of category members.

Machery reviews the history of prototype theories and the evidence prototype theorists have accumulated, and concludes that prototypes also exist, are distinct from exemplars (in part because prototype theories explain some data that exemplar theory doesn't, and vice versa), and fit the usual psychological definition of "concept".
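For contrast, here is a toy sketch of a linear, prototype-style measure, again with invented features and weights; each feature's contribution is independent of the others:

def prototype_similarity(probe, prototype, weights):
    # weighted sum of per-feature matches: each feature contributes
    # independently of the others, so the measure is linear
    return sum(w * (1 - abs(p - q))
               for w, p, q in zip(weights, probe, prototype))

# invented statistical summary of dogs: typical values of [barks, has legs, furry]
dog_prototype = [0.9, 1.0, 0.9]
seal = [1, 0, 1]
print(prototype_similarity(seal, dog_prototype, weights=[0.2, 0.5, 0.3]))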
Theories, sometimes called "causal theories" when ambiguity is possible, are the most abstract of the three types of concept Machery defends. Theories, in the psychological sense of the word, contain causal, functional, nomological, and explanatory information. While a few influential theory theorists have emphasized the analogy between theory-concepts and scientific theories, Machery claims that theory theorists are more likely to describe theory-based reasoning in terms of traditional "folk" concepts of explanations, in part because philosophers of science often disagree about what kinds of knowledge constitute scientific theories.
Much of the evidence for theories comes from developmental psychology, where decades of experiments have shown how children steadily gain the ability to apply the similarity-insensitive tools of causal theorizing to construct explanations, categorize, make inductive generalizations, and perform other higher cognitive tasks. Theory theorists seem to have invested a great deal of their time, at least at first, in arguing that (contra exemplar and prototype theories) higher cognitive processes don't involve a similarity computation. In any case, they have accumulated a great deal of evidence for cognitive processes that cannot be explained by a similarity-based notion of concept.
Models of theory-based reasoning do not involve a similarity measure, because theories are insensitive to superficial similarities (unlike exemplars and prototypes). Instead, theory theorists argue that conceptual knowledge has the structure of a causal Bayes net, encoding the causal relationships between categories, whose topology and weights we learn with experience. Causal reasoning allows us to conclude (as we now know children gradually learn to do) that, for example, a dog made to look superficially like a cat can be expected to behave more like a dog than like a cat.
After arguing for the existence of exemplars, prototypes, and theories (though not without criticizing many aspects of the associated research programmes), Machery addresses some other theories of concepts, expressing skepticism about the "neo-empirical" theory wherein concepts are encoded in perceptual representations. He also tentatively endorses ideals as yet another kind of concept - ideals store information about the properties a category member should have - but notes that evidence for ideals as a distinct class is not as strong as for exemplars, prototypes, and theories.
We have now seen that exemplar sets contain knowledge about specific individuals of some class, prototypes contain statistical knowledge about the entire class, and theories contain knowledge about causal relationships, laws, and explanations relating members of a class. Furthermore, exemplar theorists hold that we compute a nonlinear similarity measure between a putative category member and a set of exemplars, prototype theories hold that we compute a linear similarity measure between a putative category member and a single prototype for the category, and theory theorists hold that we don't rely on similarity judgments at all, but use explanations, laws, and known causal properties to perform higher cognitive processes. So we can conclude that, as per the heterogeneity hypothesis, these three types of concept store distinct kinds of knowledge and are used in different ways.
(4) The next tenet of the heterogeneity hypothesis involves a discussion of what Machery calls "multi-process theories", which assume that each higher cognitive competence is executed by virtue of several different cognitive processes. A competence is defined by its function or goal; a process is a particular way of fulfilling that function or accomplishing that goal. Machery spells out a plausible principle for distinguishing and identifying competences; without such a principle, we could hardly argue that two or more processes underwrite the same competence.
A true multi-process theory of a cognitive competence must describe several processes underwriting that competence such that any one of them is sufficient on its own - there must be redundant mechanisms for performing the competence's function. Such a theory should be able to say under what conditions the hypothesized processes are activated, as well as what happens to the outputs of the redundant processes when two or more are triggered simultaneously.
Because the chapter on multi-process theories (Chapter 5) precedes two others devoted primarily to empirical evidence for the heterogeneity hypothesis, Machery takes care to specify what should count as evidence for a multi-process theory. He then gives several examples of successful multi-process theories, including the famous "System 1 / System 2" dual-process theory of cognition.
(5) The conclusion Machery comes to is that the term "concept", as used in psychology, refers to a sufficiently heterogeneous collection of distinct categories (exemplars, prototypes, theories, and perhaps others) that it should be eliminated from the psychological vocabulary. A universally-accepted example of this kind of eliminativism in science is the elimination of "protist" from the (formal) vocabulary of modern biology - as that Wikipedia article puts it, "protists do not necessarily have much in common". Certainly protists all satisfy the definition "unicellular eukaryote that is not a fungus, plant, or animal", but this isn't enough for protists to form what Machery calls a "causal natural kind" (more on this notion below). The use of the term "protist" in biology invited attempts to generalize about protists, but the modern (cladistic) approach to phylogenetics has demonstrated why this is unlikely to be fruitful.
To defend this proposal, Machery first distinguishes his eliminativist argument from several others, particularly the anti-representationalist argument against concepts and the argument from context-sensitivity. Rather than arguing that there is no real thing or class of things to which "concept" could refer - that the term fails to pick out any class of entities - Machery argues that "concept" picks out a heterogeneous collection of entities which do not form a causal natural kind, and that seeking scientific generalizations about this entire collection is a waste of time. In other words, "concept" picks out the entities which satisfy its definition, just like "protist" does, but is just as unhelpful a class to specify.
After reviewing various notions of "natural kind", Machery settles on what he calls the "causal notion of natural kind" as the appropriate version for his purposes. This type of natural kind is such that 1) many generalizations about its members are possible, as its members have many properties in common, 2) these generalizations are due to some common causal mechanism acting on the members, not some accident, and 3) there is no larger class of entities about which the same generalizations could be formulated. There may be properties satisfied by most members of the natural kind but not all, so that the relevant generalizations need not be law-like. This, then, is the sort of natural kind that empirical sciences seek to identify.
It's not uncommon for terms that were thought to refer to a natural kind to turn out otherwise. Two examples from cognitive science are "memory" and "emotion". We now know, thanks to data from psychological and neurological experiments, that it is more useful to speak of "working memory", "declarative memory", "episodic memory", and other notions that were once thought or assumed to be part of some monolithic "memory system". The term "emotion" is like "concept" in that few generalizations can be made about it (beyond those which simply follow from the definition), but many generalizations are true of various subclasses thereof. These examples are mentioned as part of an extensive discussion, in Chapter 8, of the history and structure of scientific eliminativism.
At this point in the book it's clear why Machery thinks concepts do not form a natural kind, and under what circumstances he thinks eliminativism is called for; it remains for him to apply the latter argument to the former conclusion. The pragmatic reason for eliminating the term "concept", beyond the theoretical issue that it does not refer to a natural kind, is that psychological research on concepts has suffered from a pointlessly polemical and adversarial tone, as exemplar theorists, prototype theorists, theory theorists, and others have argued not just for their own notion of concept, but against all the others. By eliminating the term "concept", we could eliminate the implication that evidence for one theory of concepts is necessarily evidence against the others - these theories are mutually consistent and can peacefully coexist. This would also discourage the search for commonalities among the bodies of knowledge used in higher cognition - a search which Machery contends is unlikely to be fruitful.
Once psychologists studying concepts drop their needlessly antagonistic attitudes, Machery says, they can get down to the business of addressing important questions, such as the implications of the differences among different exemplar theories (mutatis mutandis for prototype and theory theories), or the detailed multi-process structure of cognitive competences such as categorization: when are different categorization processes triggered simultaneously, and what is the fate of their outputs if they disagree? These questions are not being paid enough attention, Machery argues, because of the conflict between schools of thought arguing over what concepts are.
Overall, I found Machery's arguments to be carefully considered, amply supported by evidence, and convincing. Unsurprisingly, given how radical his eliminativist recommendation is, the heterogeneity hypothesis has sparked a lively debate among psychologists and philosophers - the book has 565 citations on Google Scholar as of this post - including some defenses of concepts [1] [2] [3] [4], a response by Machery to some of his critics [5], at least one interesting attempt to construct a unified model of concepts [6], and a nuanced discussion of the issue of natural kinds and eliminativism in general (focused on Machery's thesis in particular) [7].
As far as I can tell, the most recent thing Machery has written on this subject is his contribution to the 2014 collection "Advances in the Experimental Philosophy of Mind" (chapter 8), titled "Concepts: Investigating the Heterogeneity Hypothesis". This summarizes some of his work collecting empirical data to support the heterogeneity hypothesis.
Text
differential geometry as part of commutative algebra (second draft)
I’m really starting to appreciate the utility of thinking about differential geometry as a part of commutative algebra! By generalizing the kind of algebra we have on one side of this duality, we can generalize geometry (to supergeometry, synthetic differential geometry, noncommutative geometry, etc.). It might finally be time for me to finish reading Jet Nestruev’s book about this, Smooth Manifolds and Observables, a monograph devoted to this perspective. The 3 “magic algebraic properties of differential geometry” (as Urs Schreiber calls them; see e.g. Prop 1.15 here), which allow us to translate differential-geometric statements into commutative-algebraic ones, are as follows:
1) “embedding of smooth manifolds into formal duals of R-algebras”
Smooth (real scalar) functions on smooth manifolds exhibit the latter as formal duals of ℝ-algebras. In particular, we have a fully faithful contravariant “smooth functions on” functor, C∞(-): Smooth → ℝ-Alg^op. This maps a smooth manifold M to the ℝ-algebra of smooth functions on M, and a smooth function between manifolds f: M → N to precomposition with f, (-)∘f: C∞(N) → C∞(M). This lets us faithfully translate statements about smooth manifolds and smooth functions between them into statements about ℝ-algebras and ℝ-algebra homomorphisms (in the opposite direction) between them. This restricts to a contravariant equivalence of categories on the “smooth ℝ-algebras”, which Nestruev’s book spells out in detail.
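One concrete payoff: the ℝ-algebra homomorphisms C∞(M) → ℝ are exactly the evaluations f ↦ f(x) at points x of M, so the points of M can be recovered from the bare algebra C∞(M).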
Once we understand how to characterize these smooth ℝ-algebras that make up the essential image of our fully faithful functor, we can generalize our concept of geometry by broadening the class of algebras on the rhs of this (or a closely related) duality, and considering the formal duals of these algebras as geometric objects of a new kind. Supergeometry involves considering a kind of ℤ/2ℤ-graded algebra ("supercommutative algebras") on the rhs, formed by tensoring C∞(M) with a Grassmann algebra. Synthetic differential geometry proceeds by adjoining nilpotent elements, which are effectively infinitesimals, allowing us to think of tangent vectors and higher jets as functions from an “infinitesimally thickened point” (or “smol line”, if you will).
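As a small example of the synthetic picture: the ℝ-algebra homomorphisms C∞(M) → ℝ[ε]/(ε²) are exactly the maps f ↦ f(x) + ε·v(f) for a point x of M and a tangent vector v at x, so maps from the infinitesimally thickened point (the formal dual of ℝ[ε]/(ε²)) into M are exactly the tangent vectors of M.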
Noncommutative geometry is based on a similar principle, in the context of Gelfand duality (a related construction which has compact Hausdorff spaces on the lhs and commutative C*-algebras on the rhs), where one generalizes by dropping the commutativity requirement for the algebras. We then think of “supermanifolds”, “synthetic manifolds”, and “noncommutative spaces” as the formal duals to these categories of algebras, larger than the image of the original embedding of ordinary geometric objects.
(This idea of turning geometric objects into algebraic ones by considering the algebra of functions on the geometric objects is part of a pattern sometimes known as Isbell duality, which is vastly more general than this example with ℝ-algebras or Gelfand duality.)
2) “embedding of smooth vector bundles into formal duals of R-algebra modules” aka the smooth Serre-Swan theorem
Any smooth vector bundle p: E → M over a smooth manifold M has a set of sections Γ_M(p), which carries the structure of a module over C∞(M); sections can be added and scaled pointwise using the vector space structure of the fibers. This extends to a functor Γ_M: SmoothVB/M → C∞(M)-Mod from the category of smooth (finite-rank) vector bundles over M to the category of (finite-dimensional) modules over the function ring C∞(M). This functor is fully faithful, and its (essential) image consists of the finitely-generated projective C∞(M)-modules; the (essential) image of the subcategory of cartesian spaces consists of finitely-generated free modules.
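A standard example: the C∞(S²)-module of sections of the tangent bundle of S² is finitely-generated projective but not free, since a free module of rank 2 would amount to a global frame of two everywhere-linearly-independent vector fields on the sphere, which the hairy ball theorem rules out.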
So again, we have a way to turn statements about geometric objects (smooth vector bundles) into statements about algebraic objects (projective modules), and we can use the machinery of commutative algebra to generalize the objects on the algebraic side of this functor, leading to generalizations of smooth vector bundles. The details here are still obscure to me (there seems to be some connection to quasicoherent sheaves in algebraic geometry, which I don’t understand yet), but see the nLab page on the original Serre-Swan theorems for a little more on this.
3) "derivations of smooth functions are vector fields"
That vector fields can be used to take directional derivatives of functions on a manifold is one of the basic facts of differential geometry, and the fact that differentiation by a vector field is a derivation on the algebra of smooth functions (i.e. Dv(f⋅g) = Dv(f)⋅g + f⋅Dv(g) for any vector field v) follows from the product rule for derivatives. It is rather less obvious that every derivation on C∞(M) comes from a unique vector field via this construction. This means we have an isomorphism between the set of smooth vector fields on M (equivalently smooth sections of the tangent bundle) and the set of derivations on C∞(M) (see the above-linked nLab article for a complete proof).
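On M = ℝⁿ, for instance, this says that every derivation D on C∞(ℝⁿ) has the form D = Σᵢ vⁱ ∂/∂xⁱ, with smooth component functions recovered as vⁱ = D(xⁱ).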
In conclusion, all of the major concepts of differential geometry can be expressed in commutative-algebraic language, which allows us to do new kinds of differential geometry by expanding the class of algebraic objects to which we apply these constructions, and considering their formal duals as novel geometric objects. Supergeometry (with its anticommuting coordinates representing fermionic degrees of freedom) and synthetic differential geometry (with its nilpotent elements representing infinitesimals of all orders) are some of the most useful and studied examples.
Text
differential geometry as part of commutative algebra (draft)
I’m really starting to appreciate the utility of thinking about differential geometry as a part of commutative algebra! It might finally be time for me to finish reading Jet Nestruev’s book about this, Smooth Manifolds and Observables, a monograph devoted to this perspective. The 3 “magic algebraic properties of differential geometry” (as Urs Schreiber calls them; see e.g. Prop 1.15 here), which allow us to translate differential-geometric statements into commutative-algebraic ones, are as follows:
1) “embedding of smooth manifolds into formal duals of R-algebras”
Smooth (real scalar) functions on smooth manifolds make smooth manifolds formal duals of ℝ-algebras. In particular, the contravariant “smooth functions on” functor, C∞(-): Smooth → ℝ-Alg^op, which maps a smooth manifold M to the ℝ-algebra of smooth functions on M, and a smooth function between manifolds f: M → N to precomposition with f, is fully faithful. This means that we can faithfully translate statements about smooth manifolds and smooth functions between them into statements about ℝ-algebras and ℝ-algebra homomorphisms (in the opposite direction) between them. This restricts to a contravariant equivalence of categories on the “smooth ℝ-algebras”, which Nestruev’s book spells out in detail.
Thinking of smooth manifolds as formal duals to ℝ-algebras allows us to generalize our concept of geometry by broadening the class of algebras on the rhs of this (or a closely related) duality. Supergeometry involves considering ℤ/2ℤ-graded algebras on the rhs, synthetic differential geometry proceeds by adding nilpotent elements (which are effectively infinitesimals). Noncommutative geometry is based on a similar principle, in the context of Gelfand duality (a related construction which has compact Hausdorff spaces on the lhs and commutative C*-algebras on the rhs), where one generalizes by dropping the commutativity requirement for the algebras. We then think of “supermanifolds”, “synthetic manifolds”, and “noncommutative spaces” as the formal duals to these categories of algebras, larger than the image of the original embedding.
(This idea of turning geometric objects into algebraic ones by considering the algebra of functions on the geometric objects is part of a pattern sometimes known as Isbell duality, which is vastly more general than this example with ℝ-algebras or Gelfand duality.)
2) “embedding of smooth vector bundles into formal duals of R-algebra modules” aka the smooth Serre-Swan theorem
Any smooth vector bundle p: E → M over a smooth manifold M has a set of sections Γ_M(p), which carries the structure of a module over C∞(M) - sections can be added and scaled pointwise using the vector space structure of the fibers. This extends to a functor Γ_M: SmoothVB/M → C∞(M)-Mod from the category of smooth (finite-rank) vector bundles over M to the category of (finite-dimensional) modules over the function ring C∞(M). This functor is fully faithful, and its (essential) image consists of the finitely-generated projective C∞(M)-modules; the (essential) image of the subcategory of cartesian spaces consists of finitely-generated free modules.
So again, we have a way to turn statements about geometric objects into equivalent ones about algebraic objects, and we can use the machinery of commutative algebra to generalize the objects on the algebraic side of this functor, leading to generalizations of smooth vector bundles. The details here are still obscure to me, but see the nLab page on the original Serre-Swan theorems for a little more on this.
3) "derivations of smooth functions are vector fields"
That vector fields can be used to take directional derivatives of functions on a manifold is one of the basic facts of differential geometry, and the fact that differentiation by a vector field is a derivation on the algebra of smooth functions (i.e. Dv(f⋅g) = Dv(f)⋅g + f⋅Dv(g) for any vector field v) follows from the product rule for derivatives. It is rather less obvious that every derivation on C∞(M) comes from a unique vector field via this construction. This means we have an isomorphism between the set of smooth vector fields on M (equivalently smooth sections of the tangent bundle) and the set of derivations on C∞(M) (see the above-linked nLab article for a complete proof).
Text
the magic algebraic properties of differential geometry, part 2
2) “embedding of smooth vector bundles into formal duals of R-algebra modules” aka the smooth Serre-Swan theorem
Any smooth vector bundle p: E → M over a smooth manifold M has a set of sections Γ_M(p), which carries the structure of a module over C∞(M) - sections can be added and scaled pointwise using the vector space structure of the fibers. This extends to a functor Γ_M: SmoothVB_/M → C∞(M)-Mod from the category of smooth (finite-rank) vector bundles over M to the category of modules over the function ring C∞(M). This functor is fully faithful, and its (essential) image consists of the finitely-generated projective C∞(M)-modules; the (essential) image of the subcategory of cartesian spaces consists of finitely-generated free modules.
Text
the magic algebraic properties of differential geometry, part 1
Smooth manifolds have 3 algebraic properties that Urs Schreiber calls “magic” (see e.g. Prop 1.15 here), which allow for an algebraic formulation (and generalization) of differential geometry.
1) “embedding of smooth manifolds into formal duals of R-algebras”
The contravariant “smooth functions on” functor, C∞(-): Smooth → ℝ-Alg^op, which maps a smooth manifold M to the ℝ-algebra of smooth functions on M, and a smooth function between manifolds f: M → N to precomposition with f, is fully faithful. This means that we can faithfully translate statements about smooth manifolds and smooth functions between them into statements about ℝ-algebras and ℝ-algebra homomorphisms (in the opposite direction) between them.
Nestruev’s book “Smooth Manifolds and Observables” is a good place to read about the properties of this mapping. It’s crucial, for example, to the subject of supergeometry - a field of great relevance to particle physics, where it is necessary for describing the phase spaces of fermionic systems, but also of independent mathematical interest, as in spin geometry. The idea is to extend the algebra of smooth functions on a cartesian space ℝ^p by adjoining q anticommuting generators; this tensor product of C∞(ℝ^p) with the free Grassmann algebra on q generators is then thought of as the algebraic dual to the super cartesian space ℝ^(p|q). Supermanifolds can then be defined as locally representable sheaves on the category of super cartesian spaces, in the same way that smooth manifolds can be defined as locally representable sheaves on the category of cartesian spaces - that is, as objects that are locally super cartesian.
Photo
@ontologicalidiot these are some of the tags i use for organizing books & papers in my library
Text
this seems like a good baseline length - 10^-4 meters is 31 decades away from either extreme
4 decades to get up to human body size
4 more to match the tallest mountains
another 3 to get to the size of the Earth
just 2 more and you’re as big as the Sun
7 more (and change) to reach the nearest alien star system
then just 4 more to stretch all the way across the galaxy
and 6 more (and change) to get to the diameter of the observable universe
or
dive down 6 decades to get to atom size
5 more to the proton diameter, and about 3 beyond that to the smallest distances the LHC can probe
and then there are 17 rather obscure decades before you hit the Planck length
it’s very satisfying that the geometric mean of the diameter of the observable universe & the Planck length is about the diameter of a human egg
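(check: the observable universe is ~10^27 meters across, the Planck length is ~10^-35 meters, and (27 + (-35))/2 = -4)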
Link
In this paper, Graham Priest (everybody’s favorite dialetheist logician) points out how the “inclosure schema” provides a uniform framework for paradoxes of self-reference (like the liar) and paradoxes of gradation (like the heap) and proposes that we therefore adopt a uniform stance towards both classes.
Unsurprisingly, Priest’s preferred way of dealing with these paradoxes is to accept the reasoning that leads to them as sound, reject the principle of explosion, and live with the fact that some things really are both true and not-true.
In particular, the reasoning about the heap/sorites paradox given here suggests we should believe there’s a transitional region between “is a heap” and “is not a heap” wherein the conjunction of those contradictory predicates obtains.
But this seems to give us two cutoffs - the boundary between “just true” & “true and false”, and the boundary between “true and false” and “just false”, which is as awkward and unsatisfactory as the cutoff between “true” and “false” in a naive resolution of the sorites paradox. In fact, applying the same reasoning to the new cutoffs leads to an infinite hierarchy of “higher-order vague” predicates.
I don’t really understand the analogy Priest is constructing between sorites and self-referential paradoxes here, but this paper is supposed to be just part of this longer paper, which I’ll read in the hopes of understanding better.