# Bayesian Epistemology
bsahely · 1 year ago
Text
Distributed Science - The Scientific Process as Multi-Scale Active Inference
Reproduced from: OSF Preprints | Distributed Science – The Scientific Process as Multi-Scale Active Inference, and Distributed-Science-The-Scientific-Process-as-Multi-Scale-Active-Inference.pdf (researchgate.net). Authors: Francesco Balzan* ([email protected]), John Campbell, Karl Friston, Maxwell J. D.…
1 note · View note
omegaphilosophia · 1 month ago
Text
The Philosophy of Statistics
The philosophy of statistics explores the foundational, conceptual, and epistemological questions surrounding the practice of statistical reasoning, inference, and data interpretation. It deals with how we gather, analyze, and draw conclusions from data, and it addresses the assumptions and methods that underlie statistical procedures. Philosophers of statistics examine issues related to probability, uncertainty, and how statistical findings relate to knowledge and reality.
Key Concepts:
Probability and Statistics:
Frequentist Approach: In frequentist statistics, probability is interpreted as the long-run frequency of events. It is concerned with making predictions based on repeated trials and often uses hypothesis testing (e.g., p-values) to make inferences about populations from samples.
Bayesian Approach: Bayesian statistics, on the other hand, interprets probability as a measure of belief or degree of certainty in an event, which can be updated as new evidence is obtained. Bayesian inference incorporates prior knowledge or assumptions into the analysis and updates it with data.
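To make the contrast concrete, here is a minimal sketch (an assumed coin-flip example, not part of the original discussion) of how the two readings treat the same data:

```python
# Minimal sketch (assumed example): 7 heads in 10 flips of a coin with unknown bias theta.
heads, flips = 7, 10

# Frequentist reading: theta is a fixed unknown; probability lives in repeated sampling.
# The maximum-likelihood estimate is simply the observed frequency.
theta_mle = heads / flips  # 0.7

# Bayesian reading: theta gets a degree-of-belief distribution that the data update.
# With a Beta(a, b) prior, the posterior after the data is Beta(a + heads, b + tails).
a_prior, b_prior = 1, 1                            # flat prior belief over theta
a_post = a_prior + heads
b_post = b_prior + (flips - heads)
theta_posterior_mean = a_post / (a_post + b_post)  # (1 + 7) / (2 + 10) ~= 0.667

print(f"frequentist MLE:         {theta_mle:.3f}")
print(f"Bayesian posterior mean: {theta_posterior_mean:.3f}")
```

With a flat prior the two answers nearly coincide; a stronger prior pulls the Bayesian estimate further from the observed frequency, which is exactly the subjectivity discussed in the next section.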
Objectivity vs. Subjectivity:
Objective Statistics: Objectivity in statistics is the idea that statistical methods should produce results that are independent of the individual researcher’s beliefs or biases. Frequentist methods are often considered more objective because they rely on observed data without incorporating subjective priors.
Subjective Probability: In contrast, Bayesian statistics incorporates subjective elements through prior probabilities, meaning that different researchers can arrive at different conclusions depending on their prior beliefs. This raises questions about the role of subjectivity in science and how it affects the interpretation of statistical results.
Inference and Induction:
Statistical Inference: Philosophers of statistics examine how statistical methods allow us to draw inferences from data about broader populations or phenomena. The problem of induction, famously posed by David Hume, applies here: How can we justify making generalizations about the future or the unknown based on limited observations?
Hypothesis Testing: Frequentist methods of hypothesis testing (e.g., null hypothesis significance testing) raise philosophical questions about what it means to "reject" or "fail to reject" a hypothesis. Critics argue that p-values are often misunderstood and can lead to flawed inferences about the truth of scientific claims.
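As a concrete illustration (an assumed example, not from the original post), a p-value is a tail probability computed under the null hypothesis, not the probability that the null hypothesis is true:

```python
# Minimal sketch (assumed numbers): a one-sided exact binomial test.
# Null hypothesis: the coin is fair (p = 0.5). Observed: 8 heads in 10 flips.
from math import comb

n, observed_heads, p_null = 10, 8, 0.5

# p-value = probability, under the null, of a result at least this extreme.
p_value = sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
              for k in range(observed_heads, n + 1))
print(f"P(>= {observed_heads} heads | fair coin) = {p_value:.4f}")  # ~0.0547
```

Clearing or missing an arbitrary threshold such as 0.05 says nothing directly about how probable the null is, which is one source of the misunderstandings critics point to.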
Uncertainty and Risk:
Epistemic vs. Aleatory Uncertainty: Epistemic uncertainty refers to uncertainty due to lack of knowledge, while aleatory uncertainty refers to inherent randomness in the system. Philosophers of statistics explore how these different types of uncertainty influence decision-making and inference.
Risk and Decision Theory: Statistical analysis often informs decision-making under uncertainty, particularly in fields like economics, medicine, and public policy. Philosophical questions arise about how to weigh evidence, manage risk, and make decisions when outcomes are uncertain.
Causality vs. Correlation:
Causal Inference: One of the most important issues in the philosophy of statistics is the relationship between correlation and causality. While statistics can show correlations between variables, establishing a causal relationship often requires additional assumptions and methods, such as randomized controlled trials or causal models.
Causal Models and Counterfactuals: Philosophers like Judea Pearl have developed causal inference frameworks that use counterfactual reasoning to better understand causation in statistical data. These methods help to clarify when and how statistical models can imply causal relationships, moving beyond mere correlations.
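A small simulated sketch (an assumed setup, not from the original post) shows how a shared common cause can produce a strong observational correlation that disappears once the "treatment" is assigned at random, which is the intuition behind randomized controlled trials:

```python
# Minimal sketch (simulated data): correlation from a confounder, broken by randomization.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
confounder = rng.normal(size=n)          # shared background factor
x = confounder + rng.normal(size=n)      # x does NOT cause y
y = confounder + rng.normal(size=n)
print("observational corr(x, y):", round(np.corrcoef(x, y)[0, 1], 2))   # ~0.5

# "Randomized trial": x is now assigned independently of the confounder.
x_rand = rng.normal(size=n)
y_rand = confounder + rng.normal(size=n)  # y is unaffected by the intervention on x
print("randomized corr(x, y):   ", round(np.corrcoef(x_rand, y_rand)[0, 1], 2))  # ~0.0
```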
The Role of Models:
Modeling Assumptions: Statistical models, such as regression models or probability distributions, are based on assumptions about the data-generating process. Philosophers of statistics question the validity and reliability of these assumptions, particularly when they are idealized or simplified versions of real-world processes.
Overfitting and Generalization: Statistical models can sometimes "overfit" data, meaning they capture noise or random fluctuations rather than the underlying trend. Philosophical discussions around overfitting examine the balance between model complexity and generalizability, as well as the limits of statistical models in capturing reality.
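A minimal sketch of overfitting (an assumed toy example in the spirit of standard textbook demonstrations): as polynomial degree grows, the fit to the noisy training points improves while the fit to the underlying trend degrades.

```python
# Minimal sketch (assumed example): training error falls with model complexity,
# but error against the underlying trend eventually rises -- the model fits noise.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)  # noisy samples
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)                                       # the true trend

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, trend MSE {test_mse:.3f}")
```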
Data and Representation:
Data Interpretation: Data is often considered the cornerstone of statistical analysis, but philosophers of statistics explore the nature of data itself. How is data selected, processed, and represented? How do choices about measurement, sampling, and categorization affect the conclusions drawn from data?
Big Data and Ethics: The rise of big data has led to new ethical and philosophical challenges in statistics. Issues such as privacy, consent, bias in algorithms, and the use of data in decision-making are central to contemporary discussions about the limits and responsibilities of statistical analysis.
Statistical Significance:
p-Values and Significance: The interpretation of p-values and statistical significance has long been debated. Many argue that the overreliance on p-values can lead to misunderstandings about the strength of evidence, and the replication crisis in science has highlighted the limitations of using p-values as the sole measure of statistical validity.
Replication Crisis: The replication crisis in psychology and other sciences has raised concerns about the reliability of statistical methods. Philosophers of statistics are interested in how statistical significance and reproducibility relate to the notion of scientific truth and the accumulation of knowledge.
Philosophical Debates:
Frequentism vs. Bayesianism:
Frequentist and Bayesian approaches to statistics represent two fundamentally different views on the nature of probability. Philosophers debate which approach provides a better framework for understanding and interpreting statistical evidence. Frequentists argue for the objectivity of long-run frequencies, while Bayesians emphasize the flexibility and adaptability of probabilistic reasoning based on prior knowledge.
Realism and Anti-Realism in Statistics:
Is there a "true" probability or statistical model underlying real-world phenomena, or are statistical models simply useful tools for organizing our observations? Philosophers debate whether statistical models correspond to objective features of reality (realism) or are constructs that depend on human interpretation and conventions (anti-realism).
Probability and Rationality:
The relationship between probability and rational decision-making is a key issue in both statistics and philosophy. Bayesian decision theory, for instance, uses probabilities to model rational belief updating and decision-making under uncertainty. Philosophers explore how these formal models relate to human reasoning, especially when dealing with complex or ambiguous situations.
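A minimal sketch of the core recipe of Bayesian decision theory (made-up utilities and credence, purely illustrative): weight each outcome's utility by your degree of belief and choose the act with the highest expected utility.

```python
# Minimal sketch (made-up numbers): expected-utility choice under uncertainty.
p_rain = 0.3  # hypothetical credence that it will rain

utilities = {
    "take umbrella":  {"rain": 6, "no rain": 8},   # mild nuisance either way
    "leave umbrella": {"rain": 0, "no rain": 10},  # great if dry, bad if wet
}

def expected_utility(act: str) -> float:
    u = utilities[act]
    return p_rain * u["rain"] + (1 - p_rain) * u["no rain"]

for act in utilities:
    print(f"{act}: EU = {expected_utility(act):.2f}")
print("choose:", max(utilities, key=expected_utility))
```

The philosophical questions about how well this models actual human reasoning start exactly where the tidy numbers above come from: real agents rarely have precise credences or utilities to plug in.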
Philosophy of Machine Learning:
Machine learning and AI have introduced new statistical methods for pattern recognition and prediction. Philosophers of statistics are increasingly focused on the interpretability, reliability, and fairness of machine learning algorithms, as well as the role of statistical inference in automated decision-making systems.
The philosophy of statistics addresses fundamental questions about probability, uncertainty, inference, and the nature of data. It explores how statistical methods relate to broader epistemological issues, such as the nature of scientific knowledge, objectivity, and causality. Frequentist and Bayesian approaches offer contrasting perspectives on probability and inference, while debates about the role of models, data representation, and statistical significance continue to shape the field. The rise of big data and machine learning has introduced new challenges, prompting philosophical inquiry into the ethical and practical limits of statistical reasoning.
2 notes · View notes
rachelraygifs · 2 years ago
Text
I’m a frequentist* not a bayesian so maybe I’m already biased but literally anyone who talks about their “priors” in real life should be shot out of a cannon
3 notes · View notes
skluug · 2 days ago
Text
Just met a guy who was very into functional programming and Bayesian epistemology. He had never heard of lesswrong.com. We are failing our lost children
42 notes · View notes
Note
Hi Frank! What does 'big yud' mean? You tag a lot of posts with it :)
"Big Yud" is a name for Eliezer Yudkowsky, an essayist and author who's very popular in the Less Wrong community (which itself is kind of a subculture inside the larger community surrounding the Sequences, Yudkowsky's Yale physics baccalaureate thesis written up as an intro to Bayesian epistemology; he calls it "Bayesianism for Inquisitive People"). See this for his official personal site (which he calls his "official web presence").
He's basically the guy who founded the subculture. When I tag with "big yud," it means "this post relates to the sorts of things Eliezer Yudkowsky is especially interested in writing about."
13 notes · View notes
davidroesener · 8 months ago
Text
Algorithmic Bayesian Epistemology
0 notes
anomellee · 9 months ago
Text
can't sleep, reading about bayesian epistemology
0 notes
tielt · 2 years ago
Text
Spent some time this lovely morning chatting with ChatGPT. About Gödel's theorems and Russell's paradox and other things that are hard to talk about with professors because everything you want to ask sounds dumb. I remember my professor in philosophy of mind saying not to cheat on papers because philosophers have a specific way of communicating and they can tell when their words are used uncited. I think this is likely true for using GPT, same for math.
GPT-4 has a waitlist, but ChatGPT has a free mode off-hours. I can see it replacing Wikipedia; I like being able to ask certain questions without another human, particularly stumbling-type questions, if that makes sense. AI sandboxes aren't scary to me. Propaganda bot-farms, on the other hand, are going to cause real problems. Not sure why people keep saying LLaMA got leaked, you can get the torrent from the GitHub. Collect them all, mission completion. I'm guessing we'll see an "is this skin cancer or hives" chat-bot soon enough if GPT-4 doesn't have that already.
“Is queer theory pretentious”
“If humans are idiots and all Bayesian priors are created by humans, what can we infer of Bayesian epistemological knowledge.”
Is the Zen Koan, “to know what something is you must know what it isn’t,” pretentious?
0 notes
transgenderer · 3 months ago
Text
@raginrayguns said:
it showed up in insurance very early, like 17th century. Look up Halley's work on this. The gambling thing doesn't make sense if you're familiar with the work of anyone besides like pascal and de moivre. also bayesian epistemology is as old as bayes theorem, so like a century before mendel. see also condorcet jury theorem. "Beliefs are probabilities in the mathematical sense" was a pretty conventional 18th century belief actually. When you get to Mendel you have people like Fisher treating it as an established tradition that they are rebelling against. actually even Boole I've heard was presenting his work as an alternative to the established probability-based theory of Laplace
thanks! useful wikipedia article pascal...it seems like the first practical application was graunt doing proto-demography. unless you count arabic frequency analysis for breaking codes. The Noble Demographer
whenever i learn about ye olde epistemology it seems clear to me that they need the notion of probability, like....probabilistic reasoning just *IS* how humans parse the world, it's not formal probabilistic reasoning ofc but its *nature* is probabilistic. and the thing is, probabilistic reasoning has lots of interesting philosophical problems to wrestle with, it's not like it solves epistemology or anything, it's just engaging with what reasoning about practical reality *is*. ancient philosophers really wanted reasoning about the world to be like syllogisms. and its not like syllogisms never happen, theyre useful, but they will mix together reasoning about sense-perception and reasoning about math, and we just...dont approach those the same way! you can say we *should*, but then you have to approach one of them a weird counterintuitive way, for some reason we have at least two fundamentally different ways-of-knowing (id say 3, something like "probabilistic empiricism", "mathematical syllogism", "inference and argument").
anyway this is why the epistemology of math tugs at me, if you believe (as i do) that we dont generally reason via syllogism, that syllogism is a weird edge case, then math looks like a strange outlier among knowledge-kinds. my favorite conception is kind of like..."logical rules" as an underlying "scaffolding" of knowledge, like...the "pipes" knowledge moves through. and math is like what if you just did the scaffolding and none of the other stuff
99 notes · View notes
omegaphilosophia · 2 years ago
Text
Theories of Epistemology
Epistemology is the branch of philosophy concerned with the nature of knowledge, its justification, and the rationality of belief. It studies the nature of belief, the ways in which knowledge is acquired, and the criteria for evaluating evidence and justification. The study of epistemology aims to understand the limits of human knowledge and the conditions under which claims to knowledge can be considered justified.
Some contemporary theories or systems of epistemology include:
Coherentism: which holds that knowledge is justified when beliefs cohere with one another within a system of beliefs.
Foundationalism: which holds that knowledge is justified by basic beliefs that are self-evident or evident to the senses, and that non-basic beliefs are justified by their logical relationship to these basic beliefs.
Externalism: which holds that knowledge can depend on external factors such as cognitive processes, social and cultural context, and the reliability of sources of information.
Internalism: which holds that justification is solely dependent on factors internal to the mind, such as intuition and reason.
Infinitism: which holds that knowledge requires an infinite regress of justifications, each one providing a reason for accepting the next.
Reliabilism: which holds that knowledge is justified when it is produced by reliable cognitive processes, regardless of the individual's justification for belief.
Evidentialism: which holds that knowledge requires sufficient evidence or reasons for belief.
Contextualism: which holds that the standards for knowledge or justification are context-sensitive and can vary depending on the situation.
Virtue epistemology: which holds that knowledge is a result of intellectual virtues, such as intellectual curiosity, open-mindedness, and wisdom, rather than just the accumulation of true beliefs.
Naturalized epistemology: which holds that epistemology should be integrated with the empirical study of human cognition and the scientific method.
Social epistemology: which holds that knowledge is a social construct and that the norms and practices of a particular community play a central role in shaping what counts as knowledge.
Epistemic relativism: which holds that there is no objective truth or standards of justification and that knowledge claims are relative to the beliefs and practices of particular communities or cultures.
Experimental epistemology: which holds that the methods of natural science can and should be used to study knowledge, belief, and justification.
Bayesian epistemology: which holds that degrees of belief should be modeled using Bayesian probability theory.
Deontological epistemology: which holds that knowledge and justification are governed by moral and ethical rules and duties.
Factive epistemology: which holds that knowledge requires both truth and belief, and that belief alone is not sufficient for knowledge.
Constructivist epistemology: which holds that knowledge is a product of human construction and that there is no fixed and objective reality independent of our beliefs and practices.
Feminist epistemology: which holds that knowledge is deeply influenced by social, cultural, and historical factors, including gender, race, and class, and that these factors should be taken into account in evaluating claims to knowledge.
Modal epistemology: which studies the modal aspects of knowledge, such as its possibility, necessity, and apriority.
Explanatory coherence: which holds that knowledge and justification can be explained by the coherence of beliefs within a system of beliefs.
Epistemic instrumentalism: which holds that knowledge and justification serve as instruments for achieving other goals and purposes, rather than having value in themselves.
Abduction: which holds that justification and knowledge can be explained by a form of inference called abduction, which is also known as inference to the best explanation.
Evidence-based epistemology: which holds that knowledge and justification should be based on empirical evidence and the scientific method.
It is difficult to say that one theory is the best or most useful in epistemology, as each theory offers its own unique insights and perspectives on the nature of knowledge and justification. The choice of a particular theory often depends on the specific questions and problems one is interested in investigating. Additionally, many epistemologists subscribe to a combination of theories or integrate insights from multiple perspectives, rather than adopting a single theory.
6 notes · View notes
rlyehtaxidermist · 3 years ago
Note
Bayesian epistemology?
...it's a convenient model for axiomatisation of belief and decisions but like many mathematical models in economics and philosophy overlooks the complexities of human behaviour which arise outside a purely axiomatic context?
I'm not sure where this ask came from, honestly?
Personally I'm largely an empiricist, with the caveat that many of the underlying rules for legitimacy under modern empiricism are subject to biases resulting in incomplete or inaccurate conclusions which overly generalise the experiences of an elite class rather than reflecting the full breadth of reality.
also something something tarski blah blah godel i eat truth and shit math
6 notes · View notes
Text
@pantshkek replied to your post “follow up to my previous question: can you recreate a picture of the green sun meme?”
Why do you like Bayes Theorem so much?
Just the feeling of "wow, bayesian epistemology is cool"
The whole thing is cool, but that part is the part that's "you just know it is correct when a bunch of people on the internet say it" cool
2 notes · View notes
loving-n0t-heyting · 1 year ago
Text
Nothing that i have said is inconsistent with the basics of Bayesian epistemology, to which I am intermittently sympathetic
At each world, one has a credence function. And among the facts about which this function outputs values, is what function ones credence function is. One updates this function by conditionalising on the information obtained thru observation; among the kinds of information on which one might update, is information about what ones credence function is. Thus there is no conflict between having some determinate credence distribution, and having a distribution over what credence distributions one has (this is, indeed, simply one part of the original credence distribution), nor between either of these claims and the elementary commitments of Bayesian epistemology
Garnering accusations of witchcraft and heresy from my rationalist roommate after denying the conservation of expected evidence
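For reference, the conservation of expected evidence mentioned here is a small theorem of the probability calculus: the prior equals the probability-weighted average of the possible posteriors, so you cannot expect a piece of evidence to move your credence in a direction you can foresee. A minimal numerical check with made-up numbers:

```python
# Minimal sketch (made-up numbers): E[P(H | evidence)] = P(H).
p_H = 0.25                                # hypothetical prior credence in H
p_E_given_H, p_E_given_notH = 0.8, 0.3    # hypothetical likelihoods of evidence E

p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)
p_H_given_E = p_E_given_H * p_H / p_E
p_H_given_notE = (1 - p_E_given_H) * p_H / (1 - p_E)

expected_posterior = p_E * p_H_given_E + (1 - p_E) * p_H_given_notE
print(f"prior {p_H:.3f}, expected posterior {expected_posterior:.3f}")  # equal
```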
60 notes · View notes
canmom · 5 years ago
Text
phyg
(The title refers to the fact that during early discussions of whether LessWrong constituted a cult, they ‘tabooed’ the word cult by replacing it with rot13’d phyg. However, it’s not just about LessWrong, but about cults in general.)
I’ve kind of backed off from writing about the lesswrongers for a few reasons, mainly that I’d moved on in my own life. Though another strong reason is that the revelation of all the heinous sex abuse shit going on there (resulting in the suicide of members) meant that it was less ‘internet rabbit hole’ and more ‘some of these people are actively abusing people and many others are in the process of being victimised by them, and it feels very inappropriate to stand at the sidelines poking fun at Roko’s basilisk’
There’s a post going around about Jehovah’s Witnesses, and the way their ‘missionary work’ functions less to bring new members into the cult and more to give the existing members a perception of outsiders as being rude and hostile, thus drawing them back into the fold. So I worry a bit that taking a stance of making fun of lesswrongers helps fulfil a perception that non-members of the cult are a ‘sneer club’, and kindness can only be found inside.
It’s a fine line to walk because part of helping people escape must involve helping them see the flaws in ideas used to control and abuse them. Roko’s Basilisk was a rather crude example, but there’s many variants; certain LessWrong members seem very adept at manipulating feelings of guilt and obligation, and part of that often seems to involve trying to make people feel personally, individually responsible for very large-scale dynamics to which the person (and LessWrong in general) is the only remedy.
So you need to kind of make clear that no, what’s at stake isn’t the future of humanity, that all the stories they tell about AIs and so forth are science fiction.
My own history on the periphery of cults
I should also note that I kind of feel that the difference between ‘cults’, ‘religions’, ‘ideologies’, ‘movements’, arguably even ‘fandoms’ and ‘subcultures’ is often more a matter of degree (along various spectra) than kind. Different dynamics prevail at different scales. I’m going to outline the features that make me call something a ‘cult’ below.
(this gets fairly long...)
I grew up in Glastonbury, a town that’s home to a lot of the remnants of new-ageism and hippie-subculture in the UK. Undoubtedly there were a few cults around me. My parents were at first neopagans, and later just became the kind of generically nonreligious but maybe a little ‘spiritual’ people which form the majority of the UK.
At some point in my late teens, I found the ‘sceptical’/‘atheist’ movement online, which at the time prioritised deflating alternative medicine and criticising creationism (before its hard right wing turn). It was through them I found my way to LessWrong, which presented itself as a kind of the next level of scepticism: through ‘Bayesianism’ they would systematise the kind of thought prevailing in sceptical movements and turn it into a machine for ‘overcoming bias’ and believing the right thing.
And as it happened, Yudkowsky-the-expert [prophet] wrote, properly applied Bayesianism would lead you to perceive the vast threat of an ‘unfriendly AI’. Wouldn’t it be lucky if someone was fighting the good fight against that before it happened?
After maybe a year, I managed to bounce off getting too deep into LessWrong because of those early posts about Roko’s Basilisk and its culty behaviour, but its traces - a perfectionist belief in ‘utilitarian’ ethics, a kind of po-faced affect where everything must be evaluated ethically at all times, a belief in the great scientific mission of humanity, a weird obsession with testing my beliefs against the things I found most unlikely (in this case, fundamentalist Christians at university), interests in epistemology and aging research and (then very nascent) ‘effective altruism’... I was never really a member but it kind of did a number on me!
At much the same time, I got hit hard by the rise of what I might call ‘online social justice’ (proponents might say ‘intersectional feminism’, but that refers to a lot of things). In this case, I think many of the fundamental beliefs - that our society is structured by deep injustices, such as white supremacy, heterosexual patriarchy, disablism to name a few that were salient to this movement - are absolutely correct. But this doesn’t mean that some of the same dynamics weren’t prevalent.
Again this belief system functioned heavily on guilt: starting with the itemised privilege checklist, the only way to address your complicity in oppression was to obsess over it at all times, and in particular scrutinise your language for inadvertent double meanings. To encourage this, a mechanism for punishing people through public shaming developed; it took a while for people to recognise that the dynamics of who got targeted for dogpiles, and what happened to those targeted, were largely orthogonal to what particularly terrible thing someone may have done or not.
Because this movement heavily overlapped with fandom (as a product of LiveJournal at first - this was where racefail ‘09, the incident that drew me in, played out), a great focus of this movement was media criticism. If the corporate entertainment products we consume could be made to portray The Gays in the right light, then surely social change would follow? I think a lot of this was driven by a need to be doing something within the social spaces we were moving in, which were focused on consumption of fan media.
Unlike LessWrong, ‘online social justice’ had its celebrities (and public sacrifices) but did not have any central charismatic figure. Still, this belief system provided a lot of fertile ground for people to build themselves up as progressive, ‘indie’ alternatives to the corporate media order. Most were sincere in pursuing this, but the ‘winners’ ultimately cast down anyone as it suited them as they scrambled for positions in that same order.
Unlike LessWrong, ‘online social justice’ enjoyed a certain degree of mainstream success, seeing its language taken up by a few larger outlets; it also ended up provoking a big and very nasty right-wing backlash equally obsessed with the ‘social justice warriors’ who might threaten their power in whatever way. This backlash, though just as nasty and cultish in itself, picked up many of the criticisms of cultish ‘social justice’ dynamics, and so denying these dynamics were significant became itself a moral imperative and made it very difficult to actually assess what is happening.
So to be very clear: I am grateful that my participation in ‘online social justice’, however shallow my concerns seemed in retrospect, revealed a lot of places I was dangerously ignorant and I’m pretty sure in some ways made me a better, more caring person. However, it also gave me some very unhelpful self-destructive thought patterns, which made me pretty insufferable and sometimes quite nasty about things which really didn’t matter, and I hope I’m growing out of the worst of it.
Ironically, the SJ side of things helped me avoid getting sucked into LessWrong too bad, because it was obvious that those guys didn’t really give a shit even before I learned about their friendship with neoreactionaries. I never made a decisive break with ‘SJ’, but hopefully I’ve since developed some more robust and less easily manipulated thought forms - that can’t be taken up by someone’s personal campaign to dispose of their victim quite so easily.
Then, in more recent years, my cult flirtation of choice was a strain of Leninism/Maoism. I never got anywhere near joining an actual Leninist party (thank fuck), but I did spend a lot of time challenging myself to read MIM (prisons) and the like, despite it obviously being kinda off.
Much like LessWrongism in relation to ‘skepticism’, the Leninism was able to present itself as a kind of refinement of the belief system I already had, and my rudimentary understanding of capitalism, colonialism etc. They could say that obviously racism is real, but unlike those nerds obsessed with cartoons, we have the right way to analyse it and the right program to destroy it. (That is of course elaborately written out nationalist fantasies and cheerleading for whoever America is fighting at the moment. It’s working great, guys.)
I had one friend who was particularly deep into this worldview, and eventually denounced me and cut all ties because I wouldn’t join in a harassment campaign calling a certain then-popular trans musician (who I have never spoken to before or since) a pedophile. Outside of that, you can see this worldview’s traces in posts from that period: I started dropping words like ‘imperialism’ more, for example.
What really prevented me from getting sucked into this cult was, perhaps, the same obsessive scrupulousness that I’d developed while dealing with the lesswrongers. I spent a lot of time digging into the history literature regarding things like the gulags and the famines in the USSR and China during the periods that the Maoists celebrated, and concluded that the historical evidence they dismissed was pretty strong, and they were full of shit. I didn’t really want to be friends with people who were such huge fans of gigantic incarceration programs and that kind of deflated the whole thing.
I also was lucky enough to find things like the Neue-Marx-Lekture and communisation theory and other more anarchist-aligned approaches to Marx. However accurate they may be about Marx (ultimately irrelevant, ideas are useful or not regardless of who came up with them), they made it very clear how many ways there were to approach this history and the appealing parts of Marx’s work, so the ‘package deal’ presented by the Maoists and Leninists (Marx’s criticism of capitalism is insightful, so you must cheerlead Stalin with us) became obviously nonsensical.
Of course, I’m now much deeper into the leftist subcultural sphere than the average person. Most people do not have opinions on “self-abolition of the proletariat”, nor do they do any of the dubiously effective offline stuff. I like to think I have a healthily cautious approach to the various prevailing ideologies around me, and an actual sense of humour about all this nonsense (hard to emphasise how important that is!), but who knows what I’ll think in a year...
so what’s a cult anyhow
I’m not familiar in detail with the research literature on cults or ‘new religious movements’, but here are some salient features that seem to me to create the dangerous kind of pattern:
a claim to urgency: there is a great problem which nobody is taking seriously enough. this can be a real problem (gender and racism exist and people are suffering under them every day) or a made up problem (an AI might turn us all into paperclips); it will need to appeal to a specific milieu to be effective (programmers who read science fiction novels, people who have experienced homophobic abuse in their lives, followers of a ‘mainstream’ religion)
this is often presented as a world-ending catastrophe, but it can equally well be a minor injustice, or something blown way out of proportion.
a claim to legitimacy: the cult are uniquely equipped to face this problem, or possess a unique wisdom. perhaps they alone have the right theoretical tools, or perhaps they have a claim to a lineage. a very hardline distinction between ‘correct’ and ‘incorrect’ helps here (Mao’s idea of the ‘two-line struggle’ was a gift to cult-builders).
for religious cults (the most stereotypical kind), the cult leader is the recipient of a unique vision, or simultaneously descended from Jesus, Mohammed and the Buddha.
for a certain kind of leftist, the Party (all five members!) is the sole inheritor of the ‘red thread’ of history that begins with Lenin and the Bolsheviks, uniquely upholding the correct ideological line in the face of revisionism.
for (pseudo)scientists like LessWrong, the cult’s methods are truth-preserving in a way mainstream academia isn’t, or hidebound academics are too blinkered to investigate the phenomenon in question whereas we have the sense to see something there.
I’m sure you can think of more... a scrappy underdog speaking truth to power is always a handy one I guess?
a means to isolate members: this can be physical isolation (the cult lives in a remote location), social isolation (members encouraged to cut ties with non-members), or it can be something like opaque jargon or outlandish, difficult-to-explain beliefs so the only people who can discuss the cult’s worldview are other members.
I should emphasise that many people have unusual beliefs and come to no harm for it; it’s their role in a system of power and control that makes the beliefs dangerous, which can work almost regardless of the content of these beliefs.
a threat of punishment: particularly isolation. If leaving the cult means alienating your entire social circle, then it’s an almost insurmountable obstacle. But cults can also instil complexes of guilt (if you leave or disobey, you will be responsible for poverty or the failure of the revolution), or practice regular public shaming. I haven’t personally dealt with physical punishment but I’m sure it’s an element.
the LessWrong probabilistic mindset is ingenious here: you can make something ever so slightly more or less likely according to a subjective probability model, but on the level of intuition that still feels like you are responsible for the horrifying terrible thing!
the ritual of a public apology is another extremely powerful mechanism (and why I’m very wary of leftist notions of ‘self-crit’). even if you’re just doing it to get people off your back, making a definite declaration has an effect on your worldview (are all these people wrong?); and watching someone take the stance of apologetic failure/sinner has a big effect on observers as well in terms of illustrating the lines of power.
a feeling of constant scrutiny and unpredictable punishment is very effective. no matter how hard you try, your words might betray your secret error/sin, and you can never be sure of the underlying principles or reliably apply them, so you must obsessively self-scrutinise and research the ‘right ways’ to act, perhaps even pre-emptively apologise if you catch something you did wrong before. but the prevailing narrative is of course that, all of this is (or should be) simple and obvious! or else the effort of having scrupulously correct language helps demonstrate your personal virtue.
abuse: particularly, sexual. Many of these dynamics are abusive in themselves, but once you’ve got a cult running, it seems all but inevitable that someone will abuse the power they have over members in a more personal, direct way. All the cult or cult-like movements I’ve described, and many other cults I’ve brushed up against like just about every Leninist or Trotskyist party in the UK left, have their history of sex abuse scandals that could call the organisation into question, and cover-ups within the ranks. There’s probably a lot more that doesn’t get revealed.
disposability: given the power of ostracism outlined above, members must be prepared to learn that a friend has become suspect and that it’s dangerous (in a moral sense, or for their own personal social future) to continue to associate with them. This may happen after a long period of bullying, when it is no longer useful or convenient to keep that person around for the individuals wielding power, or it may happen seemingly at random as a constant threat to the remaining members.
The common feature running through all of these is that they ensure the ongoing reproduction (and perhaps even growth) of the cult. This doesn’t have to mean the same people: some cults keep a small core members for a long time, others (like your prototypical Trotskyist party) have a hard core who wield the power, and a continuous churn of peripheral members who are exploited.
Of course, many if not all of these traits can be recognised in ‘mainstream’ society, under the power of a state or company or in academia:
claim to legitimacy: democratic mandate, ‘rule of law’, the contrast with the Hobbesian ‘state of nature’ or rival ‘authoritarian’ state, a history of success in business, science vs. superstition, possession of scientific and other academic expertise, fame in itself...
isolation within a worldview bubble: this one may seem like a stretch since it’s not really ‘isolation’ if most of society shares it, but mechanisms like language differences, preferential media coverage and the routine incarceration of people deemed ‘insane’ can all help to keep people within a particular ‘Overton window’ within a particular society.
in smaller scales, e.g. a company or academic institution can provide a more specific ‘bubble’ effect, focusing attention and effort on the organisation’s concerns.
claim to urgency: this one’s more complicated since most societies are not organised on the basis of a single overarching threat. probably the closest thing we have is ‘reproductive futurism’, the requirement to reproduce the next generation and the threat of failure to do so. on smaller scales, a project may fail, a company might go bust, the economy might go into a recession and so we must work... and then there’s exceptional events like ‘terrorism’ and the present virus pandemic. (claims to urgency may be at least somewhat legitimate!)
threat of punishment: the police and threat of incarceration and other official punishments are the most obvious mechanisms, but there’s also a lot of punishment done under the guise of healthcare, such as holding people deemed ‘mentally ill’ in wards. medicine is practised to the primary end of ensuring the reproduction of society, and ‘mental health care’ is an indistinct blend of heavy coercion and things that might be genuinely useful in other circumstances.
Apart from that, you’ve got the wage mechanism: if you don’t work for someone with money, you don’t eat. Almost nobody has the option of producing their own food.
You also have people bound together in reproductive units, notably the family. You cannot leave if doing so would deprive you of your means of subsistence.
abuse: is blatantly prevalent anywhere there’s power. the heterosexual nuclear family and prisons deserve special acknowledgement here.
disposability: any ‘social safety net’ is designed to push people back towards work. If you end up homeless on the street, odds are pretty high you’ll die of pneumonia rather than find your way back into some form of stability. We are trained to walk past people who need help, knowing or deluding ourselves that we can do nothing for them, every day we go outside. And that’s not to go into dynamics within just about all specific ‘communities’ to guard the walls and expel problematic cases
Does this mean the label ‘cult’ is useless, since it can capture almost all groups at a stretch? As mentioned earlier, it’s a matter of degree, such as the particular intensity of the cult mechanisms. Thus I still think it’s helpful to talk about how these dynamics can manifest (often all the more intensely) in more marginal spaces.
On LessWrong
LessWrong seems like an unusual cult in a few ways. A lot of its internal discussions are not made private; rather, what keeps them closed is the opaque forest of jargon which can only be parsed given extensive familiarity with its writing.
It’s very conscious of itself in relation to academia in the hard sciences, which is both its aspirational model (providing much of its language and its obsessions) and its bugbear (they won’t take us seriously, we’re too many inferential steps away). They may appeal to a historical lineage to a degree (the Enlightenment!!), but they are also quite proud of making displays of novelty (we’ll propagandise through a Harry Potter fanfiction, look how modern and switched on we are), and enjoy the sense of being challenging and disruptive.
One quite nasty trick is that LessWrong sells itself as improving rationality, and illustrates this by drawing on genuine ‘cognitive bias’ research and pulling out a battery of common epistemological errors which they claim to inoculate against; thus as one gets drawn in they can easily believe that they’re becoming more cautious and sceptical, not more credulous. It also potentially gives a quick way to dismiss people who haven’t gotten in: they are too ridden with bias to be worth consideration.
Of course, LessWrong members do not reason like a theoretical Bayesian agent any more than any other human does. Performing a Bayesian update on all your beliefs in light of new evidence is impossibly computationally expensive, and as they well acknowledge, the majority of our reasoning and perception is better understood on the basis of heuristics and habits which work ‘well enough’ to get by. So far, I doubt they’d be disagreeing with me.
What goes wrong is when the ‘Bayesianism’ starts becoming a rhetorical performance: speaking in terms of ‘probabilities’ which have not been calculated and could not be, claiming to be ‘updating’ when one learns something new, appealing rhetorically to some mathematical property of Bayes’ theorem without actually ever doing a Bayesian calculation.
With these devices, LessWrong members can paint a picture of a careful, considered mathematical reasoner sharing their results in detail, while actually the appeal works on that performance: it uses the right jargon, it affects the right rhetorical style. This performance probably works on the speaker as much as anyone; it feels right to use.
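To make that contrast concrete, this is roughly what one honest Bayesian update on a single proposition involves (a minimal sketch with invented numbers, not anything drawn from LessWrong itself); doing it explicitly for every belief against every observation is the intractability described above.

```python
# Minimal sketch (invented numbers): one explicit update on one claim.
# H = "this project will ship on time"; E = "today's demo worked".
prior_H = 0.4
p_E_given_H = 0.9      # chance the demo works if the project really is on track
p_E_given_notH = 0.5   # chance it works anyway

p_E = p_E_given_H * prior_H + p_E_given_notH * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E
print(f"P(H) moves from {prior_H:.2f} to {posterior_H:.2f} on observing E")
# The hard part is defending the likelihoods -- which is exactly what the
# purely rhetorical use of "updating" never has to do.
```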
That said, much of LessWrong on this website has moved on from the dramatic performance of Bayesianism per se, but they still have a tendency to write in a particularly insular style drawing more on the rhetoric of the blog Slate Star Codex (which seems to have almost eclipsed Less Wrong itself in the milieu). Despite a few rationalists habitually picking fights with members of other cliques, they tend to fairly effectively repel non-LessWrongers.
Of course, there are many cliques in this website (no doubt I can be said to be in a few!); use of jargon and speaking to a specific group with shared concerns is not in itself automatically a problem. I would be remiss if I didn’t acknowledge that many of the concerns of LessWrong are quite interesting subjects to explore, and aren’t often explored elsewhere without a lot of money to go to university (and be subject to its own forms of brutal hazing). The problem is not that a group of people exist who share unusual beliefs, but the function of the cult-mechanisms outlined above, and the way they lead on to actions like SWATting and the sex abuse linked at the top.
So far, so familiar; and this may be the last thing I write about LessWrong, at least for a long while. I hope my words can be useful for people in that milieu to start considering, if not immediately getting out, at least making sure they are anchored in personal relationships outside the cult context, and treating it with the same degree of scrutiny it encourages you to apply to everything else.
For me, the instructive thing about LessWrong is to try and keep track of these dynamics, so I might be able to tell whether I’m falling into them in some other context.
When do ‘groups of people with unusual beliefs’ turn into cults?
This is really the crux of it, yet also the hardest part. Simply naming cults I’ve orbited is not very useful if I can’t recognise a pattern, or work out a way to intervene (at least inside my own head) which could have helped me before.
In many cases, (pseudo)cults may be a defensive formation. Certain actions - gender transition for example - automatically invite a great deal of hostility, which can really only be survived by banding together with other trans women. That doesn’t make us a cult, but it lays a seed.
By contrast, trans woman exterminationist feminists form a particularly blatant cult, which portrays our existence as a threat in the ways described above. They level the accusation of being a cult against us in ways that are for the most part quite hollow, since they’re predicated on gender transition being unthinkable as something one might actually desire for any reason beyond ‘brainwashing’. Yet I think it would be dangerous to us to completely deny the existence of some cult-like dynamics in the way we treat each other (if not in the illusory ‘trans community’ at large, then in specific cliques and subcommunities around you, transfem reader). Cult dynamics do not require that the threat be fake.
I can observe a few things that help resist this: we have incredibly varied means of analysing gender and naming ourselves, and many attempts to establish an orthodoxy ultimately fall rather flat. As much as I’d like my own understanding to be better known, I think this is a valuable trait. There are also groups of us who have seen these dynamics play out time and time again, and consciously attempt to defend each other.
But I don’t know if there’s any general recipe for anti-cultishness. Even a principle like ‘anti-disposability’ can be used to enforce cult-like behaviour: if you can be convinced that personally cutting ties with someone who’s treating you horribly amounts to disposing of them, as has happened to so many trans women, then you can be forced to put up with a whole lot of shit when actually the person would be just fine if you cut ties, or at least the effect they have on you is not worth bearing.
I often come back to something my friend @porpentine​ wrote (before I knew her), a followup snippet to her essay Hot Allostatic Load which described her experience of abuse and trashing by a particular cult-like group of indie game developers. Among what she wrote is this:
[image: the quoted passage from her followup]
Like she said, this ‘sustained application of will against entropy’, facing up to the social systems around us in all their harrowing complexity, is not something that can be reduced to an authoritative text, containing a list of rules or fundamental principles from which everything else neatly follows. That’s something which, unfortunately, can be internalised only through experience and practice, and you will make the wrong call sometimes. All I can hope is that the story I tell, and the models I’ve built, can end up being helpful to a few who read my writing.
So what I will say to finish is: there certainly are real catastrophic threats in the world (like climate change), and there are all sorts of daily miseries which endlessly claim more people (far too many to list). This does (and should!) inspire a sense of terrible urgency as you become aware of it. But, this world also has many systemic dynamics and groups which will (consciously or not) eagerly inculcate and exploit your feeling of urgency, guilt and desperation, and use it to control and abuse you, or use you to do that to others. Somehow, we have to act responsibly to find a worthwhile path in all this, to take it seriously but not obsess in a way that’s futile and harms ourselves and others.
It’s not easy, and I certainly haven’t solved it. But that’s ‘the work’...
80 notes · View notes
imatree-worldblog · 4 years ago
Text
EP03 - Buddhism is not a kind of religion; it is a core for understanding science as a being.
Studying quantum mechanics allows me to link together many things that happen in nature, whether tangible or intangible. There is no absolute correct answer in nature, and there is no absolute right or wrong in this statement either. Every single individual, as small as a quantum, as big as the cosmos, has a superposition of possibilities for existing in a particular way. All of this can be structured as a wave function, with the guidance of the Schrödinger equation to make predictions, but still, there is no absolute there. We cannot strangle the other possibilities because of the outcome we measured, because what the measurement gives is only a guide toward your focal point. This recalls a saying in ‘Being Peace’ by Thich Nhat Hanh: “Buddha’s teaching is only a raft to help you cross the river, a finger pointing to the moon. Don’t mistake the finger for the moon. The raft is not the shore. If we cling to the raft, if we cling to the finger, we miss everything”.
Frankly speaking, I would describe Buddhism as a wave function, because it is an epistemology; the result need not be, or cannot be, shown in a physical system; and its existence and ideology differ for every individual observer, much as in quantum mechanics, and the two resonate with each other. We are unable to apply a Heisenberg cut to perceive Buddhism as either microscopic or macroscopic; instead it is a nonduality, as advocated in Buddhist thought: no compartmentalization, with the microscopic and the macroscopic together representing a unity. And this core idea can spread out to be applied in different contexts. For instance, in keeping with the nature of quantum mechanics, the theory has made contributions in several areas, namely quantum computing, and purported applications such as quantum leadership, quantum psychology, quantum love, quantum happiness... These seem to relate to our real life without the effort of understanding them microscopically; we are them as a whole, and they are us as a whole. As with quantum mechanics, the objectives of Buddhism can also differ between countries because of their unique culture, history, and spirit. In America there is American Buddhism; in Vietnam there is Vietnamese Buddhism; in Japan there is Japanese Buddhism... Different languages, different cultures, but they all share the same ideology for understanding living beings and the world.
In Buddhism, we tend not to take sides between two positions; we understand each and reconcile them as a whole, because they are in nature a system with entanglement. To reconcile them, negotiation has to be carried out to make the scattered ideas collapse into a concentrated result, just like how the wave function behaves before and upon measurement.
Buddha’s teaching does not consist of assertion; its existence is not to stand above other principles, doctrines, or ideologies, and it neither dictates to nor rules the world. It is rather a compass for humans’ internal awakening and understanding, and so brings peace to every being, to have a sustainable life in this world: Buddha, Dharma, and Sangha.
Science, testifying to the nature of Buddhism, gives many credible explanations for most of the happenings on earth. Even as it constructs the world, it is also destroying it. In the concept of entanglement, we are in the superposition of the results; we are the result, and the result is us; our intention makes the result, and the result is our intention; the result appears at our observing position (Quantum Bayesianism, or QBism). Our world is a system of particles, and we are the particles, so why can’t we collapse our intention, like a position in the wave function, and bring our world to the positive world among the many worlds? Buddhas: the awakened one could be a Buddha. Each of us is a buddha, but it depends on the degree of awakening. Buddha won’t take sides on science, right? Even when science leads to constructive scientific products and destructive scientific products, they reconcile it, right?
Think about it. Bye la.
4 notes · View notes
Photo
This joke is a lot more than a joke about Bayesians. It's a joke about everything that Bayesians think they know.
[image: the joke, from the Facebook link below]
https://www.facebook.com/DeutschSeite/photos/a.330316637052655/4668308256586783/
72 notes · View notes