turing-complete-eukaryote
Humans are the best mammals
1K posts
he/him. I like science. Formerly known as turing-complete-mammal. This is a side blog, and I follow from @turing-complete-eukaryote-2
Last active 2 hours ago
turing-complete-eukaryote · 22 hours ago
Text
there's a whole industry of trying to make good studies out of bad data. and i think it basically just can't be done. you can answer questions with ill-suited data, but it's unlikely you'll be able to answer the question you want to answer
43 notes
turing-complete-eukaryote · 22 hours ago
Text
I've said this before but I control the world through my powerful posts, disseminated through my powerful mutuals. trump and xi report to me although they dont even know it yet. anyway some new and powerful users have followed me today, welcome to the squad. you know umi yukaba? there should be a song like that but it's about reblogging my posts instead of dying for the emperor. welcome to the squad
23 notes
Text
wikiquote clearly the worst of the wikimedia projects. if someone said they edit wikipedia i'd be like tough work but it needs to be done. secretly id be thinking they could be one of the bad ones but i wouldn't say it. if someone said they edit wiktionary i'd be like ayyyyy. if someone said they edit wikibooks id be like not sure why ur doing this on a wiki but im sure whatever you're writing is interesting. if someone said they edit wikimedia commons id be like have fun categorizing enjoy your categories. if someone said they upload for wikimedia commons that would be different i would probably not think of them so much as a wiki person and more of a general public domain enthusiast type which is outside the scope of this post. if someone said they edit wikispecies id have a warm feeling towards them that i would not express in words; if i already found this person attractive that attraction would increase significantly. if someone said they edit wikifunctions honestly ive never heard of wikifunctions its some programming shit im sure its fine. probably not the best free code repository but i dont know much about the subject. if someone said they edit mediawiki i would be standing there as if i had met god himself id be scared tbh what if i piss them off and they destroy one of my favorite templates. if someone said they edit wikivoyage i would politely change the subject but it wouldn't change the way i think of them as a person. if someone said they edit wikinews i would be confused why do you do that when you could just have a blog no one reads. if someone said they edit metawiki i would ask them for more information on the subject before forming an opinion because i still dont understand what metawiki is and i've looked at it for like two minutes while researching for this post what exactly is it that you do over there. but if someone said they edit wikiquote i would laugh at them. straight up laugh in their face
very sorry to wikidata editors i couldnt come up with a joke for you. i respect it i just dont have any jokes
486 notes
Note
do the québécois get to be smug? They’re more like if France was American than truly Canadian
i think it does make sense for quebecois to be smug. but in the same way it makes sense for like, the basque to be smug. it's cool to be a large minority group that sort of has its own thing going on within a country
10 notes
Text
something that seems like... i mean it seems kind of obvious to me but i dont see other ppl say it....
ok, once we had like semantics as a subject in logic, where people construct mathematical objects that the laws of the logic apply to, and we had model theory....
didn't that in retrospect clarify what all these set theoretic constructions are doing in real analysis?
Like.... we construct a model to prove consistency right? Or... to... at least demonstrate that there is some mathematical object that satisfies these axioms, which satisfies us that we've formulated consistent axioms, instead of them being inconsistent and it thus being impossible for any object to satisfy them.
The rules used for calculus, I mean getting them consistent was a real historical struggle. So we have all these constructions of the reals.... in retrospect, we know that's not what the reals "really are", right, we know what that activity is, it's demonstrating consistency by exhibiting a model, we just need some construction, any construction, and then we know our list of axioms is possible to satisfy. Like now that we're past the 1930s this is a pretty well known, well established activity, so in retrospect we can recognize that's what the analysis foundations stuff with set theory was really good for, right?
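to make "some construction, any construction" concrete, here's one standard textbook construction (my example, any of the usual ones would do): a Dedekind cut is a set $A \subsetneq \mathbb{Q}$ with $A \neq \emptyset$, downward closed ($q < p \in A \Rightarrow q \in A$), and with no greatest element. then you take

$$\mathbb{R} := \{A : A \text{ is a cut}\}, \qquad A < B :\Leftrightarrow A \subsetneq B, \qquad A + B := \{a + b : a \in A,\ b \in B\}$$

and verify the complete-ordered-field axioms for this object. that's the whole move: relative to set theory, the axioms for the reals are satisfiable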
8 notes
Note
https://www.tumblr.com/max1461/765702755607347200/i-dont-agree-with-chomsky-about-ug-obviously?source=share
This one?
Dear Max,
Why do we represent syntax using trees? I'm taking an intro to ling course, and all of the syntax trees seem forced. What makes us think that words are organized in trees, and why is this a good model of language?
I'm not a syntactician but I do have a post on this under my #linguistics tag
8 notes
Text
I don't agree with Chomsky about UG, obviously, principles-and-parameters, the minimalist program, any of that. But people's critiques of "Chomskyan" syntax are far too sweeping (I've said this a bunch of times). Like... you need constituency, you need hierarchical grouping, you need a dependency relation (and a sensible notion of head vs. complement then emerges from the typological data). You need all that stuff to build a sensible description of natural language syntax. It's just empirically screaming at you. You need a syntax/semantics distinction. Maybe "colorless green ideas sleep furiously" is not a perfectly constructed example, but you need a syntax/semantics distinction. And I'll even go so far as to say that movement makes... a lot of sense. You don't need movement but like, I'm not sure how you make a reasonable analysis of German without positing movement by another name. I'm not a German speaker so I can't generate an example off the top of my head, but when you look at those sentences with a stack of head final AuxPs/VPs, and the single structurally highest Aux occurs at the front of the predicate, and then if you add another Aux the formerly-fronted one is now at the back and the new structurally highest one is at the front... what do you call that but movement! You can make up another mechanism, but it's still movement by another name. I guess you could get into debates about deep structure vs. surface structure but I don't really believe in that.
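For concreteness, a standard textbook illustration of the German pattern just described (supplied here, since some version of it appears in any introductory syntax text): Ich habe das Buch gelesen ("I have the book read"): the finite, structurally highest Aux habe sits at the front of the predicate, the participle at the end. Ich muss das Buch gelesen haben ("I must the book read have"): add a modal, and the formerly fronted auxiliary now surfaces clause-finally as haben, while the new structurally highest verb muss takes the front position. On a movement analysis the finite verb raises out of the head-final cluster to second position; whatever alternative mechanism you posit has to do the same work.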
24 notes
Text
Linguists deal with two kinds of theories or models.
First, you have grammars. A grammar, in this sense, is a model of an individual natural language: what sorts of utterances occur in that language? When are they used and what do they mean? Even assembling this sort of model in full is a Herculean task, but we are fairly successful at modeling sub-systems of individual languages: what sounds occur in the language, and how may they be ordered and combined?—this is phonology. What strings of words occur in the language, and what strings don't, irrespective of what they mean?—this is syntax. Characterizing these things, for a particular language, is largely tractable. A grammar (a model of the utterances of a single language) is falsified if it predicts utterances that do not occur, or fails to predict utterances that do occur. These situations are called "overgeneration" and "undergeneration", respectively. One of the advantages linguistics has as a science is that we have both massive corpora of observational data (text that people have written, databases of recorded phone calls), and access to cheap and easy experimental data (you can ask people to say things in the target language—you have to be a bit careful about how you do this—and see if what they say accords with your model). We have to make some spherical cow type assumptions, we have to "ignore friction" sometimes (friction is most often what the Chomskyans call "performance error", which you do not have to be a Chomskyan to believe in, but I digress). In any case, this lets us build robust, useful, highly predictive, and falsifiable, although necessarily incomplete, models of individual natural languages. These are called descriptive grammars.
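To make "overgeneration" and "undergeneration" concrete, here is a deliberately tiny sketch: a four-rule toy grammar for a scrap of English plus a CKY recognizer. The grammar, lexicon, and test sentences are all my own invention for illustration, not a serious analysis, but they show both failure modes at once.

# Toy CNF grammar: S -> NP VP, NP -> Det N, VP -> V NP, plus unary VP -> V.
# Everything here is an invented illustration, not a real grammar of English.
BINARY = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
LEXICON = {
    "the": {"Det"}, "a": {"Det"},
    "dog": {"N"}, "cat": {"N"},
    "chased": {"V"}, "sleeps": {"V"},
}

def recognizes(words):
    """CKY recognition: does the toy grammar derive this string as an S?"""
    n = len(words)
    # chart[i][j] holds the categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        cats = set(LEXICON.get(w, set()))
        if "V" in cats:
            cats.add("VP")  # unary rule VP -> V (intransitive verbs)
        chart[i][i + 1] = cats
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(i + 1, i + span):
                for b in chart[i][k]:
                    for c in chart[k][i + span]:
                        if (b, c) in BINARY:
                            chart[i][i + span].add(BINARY[(b, c)])
    return "S" in chart[0][n]

print(recognizes("the dog chased a cat".split()))  # True: attested and predicted
print(recognizes("the dog sleeps a cat".split()))  # True: overgeneration
print(recognizes("dogs chase cats".split()))       # False: undergeneration

Both failures are instructive: the toy rule VP -> V NP lets the intransitive "sleeps" take an object (overgeneration), and the missing plural morphology means perfectly good English goes unparsed (undergeneration). Real grammars are falsified in exactly this way, just at vastly larger scale.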
Descriptive grammars often have a strong formal component—Chomsky, for all his faults, recognized that both phonology and syntax could be well described by formal grammars in the sense of mathematics and computer science, and these tools have been tremendously productive since the 60s in producing good models of natural language. I believe Chomsky's program sensu stricto is a dead end, but the basic insight that human language can be thought about formally in this way has been extremely useful and has transformed the field for the better. Read any descriptive grammar, of a language from Europe or Papua or the Amazon, and you will see (in linguists' own idiosyncratic notation) a flurry of regexes and syntax trees (this is a bit unfair—the computer scientists stole syntax trees from us, also via Chomsky) and string rewrite rules and so on and so forth. Some of this preceded Chomsky but more than anyone else he gave it legs.
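For a flavor of what those string rewrite rules look like in practice, here is a minimal sketch of one classic phonological rule, German word-final obstruent devoicing, written as a regex substitution. The rule is real and well attested; the broad transcription scheme is my own simplification.

import re

# Final devoicing: a voiced obstruent surfaces voiceless at the end of a word,
# e.g. underlying /bund/ -> surface [bunt] "Bund", but /bunde/ keeps its [d].
DEVOICE = {"b": "p", "d": "t", "g": "k", "v": "f", "z": "s"}

def final_devoicing(form):
    # Rewrite rule, roughly: [-sonorant, +voice] -> [-voice] / __ #
    return re.sub(r"[bdgvz]$", lambda m: DEVOICE[m.group(0)], form)

print(final_devoicing("bund"))   # bunt  (rule applies word-finally)
print(final_devoicing("bunde"))  # bunde (non-final, rule does not apply)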
Anyway, linguists are also interested in another kind of model, which confusingly enough we call simply a "theory". So you have "grammars", which are theories of individual natural languages, and you have "theories", which are theories of grammars. A linguistic theory is a model which predicts what sorts of grammar are possible for a human language to have. This generally comes in the form of making claims about:
(1) the structure of the cognitive faculty for language, and its limitations
(2) the pathways by which language evolves over time, and the grammars that are therefore attractors and repellers in this dynamical system.
Both of these avenues of research have seen some limited success, but linguistics as a field is far worse at producing theories of this sort than it is at producing grammars.
Capital-G Generativism, Chomsky's program, is one such attempt to produce a theory of human language, and it has not worked very well at all. Chomsky's adherents will say it has worked very well—they are wrong and everybody else thinks they are very wrong, but Chomsky has more clout in linguistics than anyone else so they get to publish in serious journals and whatnot. For an analogy that will be familiar to physics people: Chomskyans are string theorists. And they have discovered some stuff! We know about wh-islands thanks to Generativism, and we probably would not have discovered them otherwise. Wh-islands are weird! It's a good thing the Chomskyans found wh-islands, and a few other bits and pieces like that. But Generativism as a program has, I believe, hit a dead end and will not be recovering.
Right, Generativism is sort of, kind of attempting to do (1), poorly. There are other people attempting to do (1) more robustly, but I don't know much about it. It's probably important. For my own part I think (2) has a lot of promise, because we already have a fairly detailed understanding of how language changes over time, at least as regards phonology. Some people are already working on this sort of program, and there's a lot of work left to be done, but I do think it's promising.
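As a cartoon of what (2) could look like formally: treat a grammar type as a state and pathways of change as transition probabilities, and "attractors" become the stationary distribution of the resulting Markov chain. The word-order states below are real typological categories, but every number is invented purely for illustration.

# States: basic word order types; P[i][j] = invented probability that a
# language drifts from state i to state j over some fixed time step.
STATES = ["SOV", "SVO", "VSO"]
P = [
    [0.90, 0.08, 0.02],
    [0.05, 0.90, 0.05],
    [0.05, 0.25, 0.70],
]

dist = [1.0, 0.0, 0.0]  # start the whole "population" of languages at SOV
for _ in range(1000):   # iterate; the chain converges to its stationary distribution
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

# The long-run frequencies are a property of the dynamics, not the start state.
print({s: round(p, 3) for s, p in zip(STATES, dist)})

Actual work in this vein fits transition rates like these to historical and comparative data; the point of the toy is only that "which grammars are common" can fall out of "which changes are likely".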
Someone said to me, recently-ish, that the success of LLMs spells doom for descriptive linguistics. "Look, that model does better than any of your grammars of English at producing English sentences! You've been thoroughly outclassed!". But I don't think this is true at all. Linguists aren't confused about which English sentences are valid—many of us are native English speakers, and could simply tell you ourselves without the help of an LLM. We're confused about why. We're trying to distill the patterns of English grammar, known implicitly to every English speaker, into explicit rules that tell us something explanatory about how English works. An LLM is basically just another English speaker we can query for data, except worse, because instead of a human mind speaking a human language (our object of study) it's a simulacrum of such.
Uh, for another physics analogy: suppose someone came along with a black box, and this black box had within it (by magic) a database of every possible history of the universe. You input a world-state, and it returns a list of all the future histories that could follow on from this world state. If the universe is deterministic, there should only be one of them; if not, maybe there are multiple. If the universe is probabilistic, suppose the machine also gives you a probability for each future history. If you input the state of a local patch of spacetime, the machine gives you all histories in which that local patch exists and how they evolve.
Now, given this machine, I've got a theory of everything for you. My theory is: whatever the machine says is going to happen at time t is what will happen at time t. Now, I don't doubt that that's a very useful thing! Most physicists would probably love to have this machine! But I do not think my theory of everything, despite being extremely predictive, is a very good one. Why? Because it doesn't tell you anything, it doesn't identify any patterns in the way the natural world works, it just says "ask the black box and then believe it". Well, sure. But then you might get curious and want to ask: are there patterns in the black box's answers? Are there human-comprehensible rules which seem to characterize its output? Can I figure out what those are? And then, presto, you're doing good old regular physics again, as if you didn't even have the black box. The black box is just a way to run experiments faster and cheaper, to get at what you really want to know.
General Relativity, even though it has singularities, and it's incompatible with Quantum Mechanics, is better as a theory of physics than my black box theory of everything, because it actually identifies patterns, it gives you some insight into how the natural world behaves, in a way that you, a human, can understand.
In linguistics, we're in a similar situation with LLMs, only LLMs are a lot worse than the black box I've described—they still mess up and give weird answers from time to time. And more importantly, we already have a linguistic black box, we have billions of them: they're called human native speakers, and you can find one in your local corner store or dry cleaner. Querying the black box and trying to find patterns is what linguistics already is, that's what linguists do, and having another, less accurate black box does very little for us.
Now, there is one advantage that LLMs have. You can do interpretability research on LLMs, and figure out how they are doing what they are doing. Linguists and ML researchers are kind of in a similar boat here. In linguistics, well, we already all know how to talk, we just don't know how we know how to talk. In ML, you have these models that are very successful, but you don't know why they work so well, how they're doing it. We have our own version of interpretability research, which is neuroscience and neurolinguistics. And ML researchers have interpretability research for LLMs, and it's very possible theirs progresses faster than ours! Now with the caveat that we can't expect LLMs to work just like the human brain, and we can't expect the internal grammar of a language inside an LLM to be identical to the one used implicitly by the human mind to produce native-speaker utterances, we still might get useful insights out of proper scrutiny of the innards of an LLM that speaks English very well. That's certainly possible!
But just having the LLM, does that make the work of descriptive linguistics obsolete? No, obviously not. To say so completely misunderstands what we are trying to do.
78 notes
turing-complete-eukaryote · 10 days ago
Text
Many people don’t know this but your yields in organic chemistry lab aren’t due to skill/technique. It’s actually dependent on your character and intentions
422 notes
turing-complete-eukaryote · 10 days ago
Text
the mitochondrion and the chloroplast: these endosymbioses were key moments in the history of life, but only in retrospect. with the Paulinella chromatophore, which seems to be an independent evolution of a chloroplast-like organelle, and the nitroplast, maybe we're finally starting to see the zoo of relatively unimportant endosymbionts from which those two distinguished themselves
21 notes
turing-complete-eukaryote · 10 days ago
Text
grandmasters must be great artists because they're always drawing
51 notes
turing-complete-eukaryote · 17 days ago
Text
My physics prof pronounces 'iron' as 'ion' and it always throws me off
1 note
turing-complete-eukaryote · 20 days ago
Text
They should make a fake tenth Muse for economics, like with the Nobel Prize
36 notes
turing-complete-eukaryote · 21 days ago
Text
Congratulations to the Eagles on escaping that cursed hotel or whatever
2 notes
turing-complete-eukaryote · 21 days ago
Note
Slavoj Zizian. How has no one said this yet
sorry, i don't think this is anything anon
8 notes
turing-complete-eukaryote · 24 days ago
Text
yeah i know all about "dialectical materialism," yknow, ceramic, glass, all that stuff. #ilovecapacitors
557 notes