Weekly-ish Crap, 25 Sept 2018 (Tues): Some philosophy podcast episodes
Haven’t been here for a while after I realized how much time it takes to write these... Here’s some cumulative retrospective crap from the past couple of days.
For now I’ll focus on a few episodes I listened to from Conversations from the Pale Blue Dot (which I’m addicted to right now). Oftentimes I don’t understand much of an episode (since I know very little about philosophy, and especially when the philosopher has an Australian accent, which happens surprisingly often -- could be a philosophy of religion thing?), so the notes are kind of brief (and probably confused).
===================================================
008: Stephen Finlay – The Error in the Error Theory
Interesting paper by Finlay: “The Error in the Error Theory”
ABSTRACT: Moral error theory of the kind defended by J.L. Mackie and Richard Joyce is premised on two claims: (1) that moral judgements essentially presuppose that moral value has absolute authority, and (2) that this presupposition is false, because nothing has absolute authority. This paper accepts (2) but rejects (1). It is argued first that (1) is not the best explanation of the evidence from moral practice, and second that even if it were, the error theory would still be mistaken, because the assumption does not contaminate the meaning or truth-conditions of moral claims. These are determined by the essential application conditions for moral concepts, which are relational rather than absolute. An analogy is drawn between moral judgements and motion judgements.
Human motivations or normative facts, which comes first? -- This is a big divide in contemporary philosophy.
===================================================
009: Don Loeb – Moral Irrealism
Moral realism says moral facts reduce to empirical facts. But if people who are smart and know empirical facts think hard and still disagree on moral facts, e.g. deontologists and utilitarians, then maybe moral facts cannot reduce to empirical facts.
Do you think people talk about real things when they talk about moral facts? If so you are a moral realist. If not you are a moral anti-realist. Do moral anti-realists believe there are moral facts even if people don't talk about them when they are talking about moral facts?
Will introducing God help moral realism? No. We can’t say God is the source of morality. For if that were the case, then if God wills torturing small children for fun then it's moral to do so, which is absurd. Note that in this case you can’t say this is impossible because “God is good”. This is because if you say so, then you are presupposing morality before God. (This is brilliant.)
If you accept moral realism you also need to accept gastronomical realism which doesn’t make sense.
===================================================
010: Jessica Pierce – Animal Morality
(Animal) morality: a complex set of behaviors involving cognition and emotion, incorporating 3 clusters of behavior found in animals: cooperation, empathy, and fairness.
Implications for humans: animals have morality; our morality also comes from evolution. A lot of moral behavior comes from subconscious choices rather than from deliberation, as Kant thought. Humans are not uniquely capable of moral feeling or action, though some part of human morality does come from deliberation. (But are we as a species uniquely capable of reasoning? Then some of Kant’s arguments still hold? Or can’t we rule out that some animals are capable of reasoning too?)
===================================================
013: Richard Otte – Evil and Miracles
Draper’s indifference argument: given that evil exists, and that evil is more likely under the hypothesis of cosmic indifference than under the hypothesis of God, the existence of evil makes indifference more likely than God.
-- But what if Christianity is such that evil is more likely under God?
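As a side note to myself, here is a toy sketch (in Python, with completely made-up numbers) of the Bayesian structure I think the indifference argument has; the hypotheses, priors and likelihoods are all my own illustrative assumptions, not anything from the episode:
# Toy illustration of a likelihood-ratio argument; all numbers are made up.
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_indifference = 0.5         # assumed prior for "cosmic indifference"
prior_god = 0.5                  # assumed prior for "a loving, powerful God"
p_evil_given_indifference = 0.9  # evil is unsurprising if the universe is indifferent
p_evil_given_god = 0.3           # evil is (arguably) more surprising under God

likelihood_ratio = p_evil_given_indifference / p_evil_given_god
posterior_odds = (prior_indifference / prior_god) * likelihood_ratio
print(f"Odds of indifference vs. God, given evil: {posterior_odds:.1f} to 1")
# With these made-up numbers, observing evil shifts the odds to 3:1 in favor of
# indifference; Otte's question above amounts to asking whether
# p_evil_given_god is really that low once the full Christian story is included.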
===================================================
014: Brian Hayden – Prehistoric Religion
Prehistoric religion: expensive (in time and resources), yet persistent and widespread -- must have adaptive value, e.g. cooperation across tribes which was important for survival back then. (Similar idea as “collective storytelling” in “Sapiens”.)
We're drawn to rituals. It's like language that has become part of our genes. Or music.
Traditional vs book religions:
The former involves shamans etc. and lets individuals connect directly with the divine through dancing, drumming, etc.
The latter is mediated through books and intermediaries, is usually used by the state to control people’s behavior, touches more on morality, is more likely to be found in hierarchical societies, and does not encourage individual interpretation.
===================================================
015: Wes Morriston – God, Genocide, Craig, and Infinity
I don’t remember much, but after listening to so many interviews of both theist and atheist philosophers of religion, I just realized that there are some really smart people who have thought hard about this issue and yet come to vastly different conclusions, so maybe it is true (like one of them said) that there is no convincing way to argue for or against the existence of God. (Although some other atheist philosopher of religion was pretty adamant that believing in God was stupid...)
At least one thing is clear: there exists a rational and reflective version of theism (just like there are rational and reflective versions of atheism, and irrational and unreflective versions of theism / atheism... !).
===================================================
017: Matt McCormick – Theism and Double Standards
Argues that Christians have double standards when it comes to accepting the resurrection of Jesus, i.e. they require much weaker evidence than when it comes to other historical claims. (More arguments in this book draft.)
Internet Encyclopedia of Philosophy article: good intro to modern atheism.
Luke and the philosophers he interviews keep saying that New Atheists like Richard Dawkins and Sam Harris get things wrong (even though they make a good contribution by popularizing atheism and making it easier for atheists to “come out of the closet”). I’m not familiar with their arguments, but Luke says he writes about them on his website.
Daily Crap, 17 Sept 2018 (Mon): JS Mill & Utilitarianism; Expectations & moral evaluations; More cash debate
(Posted this on Slack a while back)
https://www.economist.com/schools-brief/2018/08/04/against-the-tyranny-of-the-majority
I love this article, which illustrates the richness of JS Mill’s ideas. It traces his journey from a “dyed-in-the-wool utilitarian” who followed his mentor Bentham “in seeing humans as mere calculating machines”, a child prodigy who grew up “in the absence of love and in the presence of fear”, through a breakdown in his early 20s, after which he “came to believe that there must be more to life than what Benthamites term the ‘felicific calculus’ — the accounting of pleasure and pain” and turned to qualify utilitarianism, demonstrating “a pragmatism that is one of Mill’s intellectual hallmarks”. In particular, his famous “harm principle” — “the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others” — and many other ideas he argued for aren’t compatible with Bentham’s version of utilitarianism. “An eclectic and transitional thinker whose writings cannot be expected to yield a coherent doctrine,” his story reminds us that the world is complex, and trying to fit one’s ethics into a logically coherent framework risks losing reasonableness (so maybe moral uncertainty is the way to go? Ambiguity feels bad, but until we figure it all out, pursuing coherence at all costs could be very dangerous). (And perhaps good for Effective Altruists to keep in mind, related to our discussion that some EAs take utilitarianism to the extreme.)
He turned to the poetry of William Wordsworth and Samuel Taylor Coleridge, which taught him about beauty, honour and loyalty. His new aesthetic sense pushed him away from gung-ho reformism and gently towards conservatism. If the societies of the past had produced such good art, he reasoned, they must have something to offer his age.
===================================================
https://psyarxiv.com/s7fu2/
Expectations Bias Moral Evaluations
People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.
Super interesting theory that seems to fit empirics: “People were more upset about events that were unexpected than events that were more expected. We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.”
Can explain why
People are more upset about terrorist attacks in Paris than in Beirut
People feel more compelled to help those affected by disaster events than those harmed daily by preventable tropical diseases, even if helping the latter is more cost-effective
(And maybe the trolley problem, somewhat?)
I know this is very anti EA / utilitarian, and generally not great, but wonder if there are arguments for it being justified? (In a normative sense; obviously it’s evolutionarily adaptive.)
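To make the information-theoretic claim concrete for myself, here is a tiny Python sketch of surprisal (the negative log probability of an event); the two probabilities are made-up numbers, just to show why the clothing-store robbery is the more “informative” event:
import math

# Surprisal in bits: rarer events carry more information.
# The probabilities below are invented purely for illustration.
p_robbery_convenience_store = 0.10   # assumed: relatively expected
p_robbery_clothing_store = 0.01      # assumed: relatively unexpected

def surprisal_bits(p):
    return -math.log2(p)

print(surprisal_bits(p_robbery_convenience_store))  # ~3.3 bits
print(surprisal_bits(p_robbery_clothing_store))     # ~6.6 bits
# The paper's claim is that moral outrage tracks something like this surprisal,
# even though the harm to the victim is the same in both cases.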
===================================================
Finally read the Vox article on Blattman et al’s Uganda cash grant RCT https://www.vox.com/2018/9/10/17827836/cash-basic-income-uganda-study-blattman-charity
Even for “ambitious young people”, not everyone started a business or grew it — Berk Ozler: “You have a third that never really start a business, a third that are disinvesting, and a third that are happy to be small businesses not really growing.”
On cash transfers, right now the following seem really important big open questions that we should understand better:
What do people do with the cash (which certainly depends on the size): increase consumption today, diversify diet and improve child nutrition etc. (like the Rwanda study found with the $500 transfer), or invest in things with future returns (e.g. education, a metal roof, productive assets, or even starting a business — and how far can that go without skills training)? Also, what do people feel is the best use of the money ex ante / what they thought they would do vs. what they end up doing by revealed preference (possibly also a result of information and behavioral problems)?
Spillovers: effect size and mechanisms (again depends on size of transfer; also present in any program like graduation program)
I think understanding these 2 problems in depth seems really important in research on cash transfers (maybe even going qualitative, not sure), rather than more RCTs simply looking at effect sizes; any new RCT on cash (including the ones USAID is trying to do) should really invest in understanding these 2. Things could go so many ways in theory, and right now we don’t have a clear understanding of the mechanisms (and different studies in different contexts show different results — it would be good to understand how context interacts with this too).
Also really interesting to think about different types of cash transfers:
NGOs like Give Directly may be better suited to giving one-time large cash transfers, and if that really brings people out of the poverty trap (at least out of extreme poverty) it’s pretty good (but the evidence on this is unclear)
Government cash transfer programs, e.g. the Zambia one, typically do frequent, longer-term transfers of smaller size each time, which the UNC paper shows increase consumption etc. If the former doesn’t work in raising consumption long term, the latter may be the way to go to eliminate extreme poverty (as a safety net, e.g. as discussed in this blog post).
There are many other considerations in deciding which one is better, apart from cost-effectiveness. E.g. political: maybe if something like Give Directly gets big, citizens of those countries will be annoyed about it compared to a government cash transfer program (giving $1000 at a time may seem controversial to them, and if it’s “Silicon Valley money” it could face further backlash). Also, even if Give Directly only goes to very poor villages, eventually as it gets big, many other people in Kenya may be annoyed that they are not getting the $1000? Whereas something integrated into government policy may be more politically feasible.
Daily Crap, 16 Sept 2018 (Sun): Pre-Raphaelite etc.; Long term outcomes etc.
Today I finally saw the exhibition “Truth and Beauty” on the Pre-Raphaelite Brotherhood at the Legion of Honour in SF (including artworks from the Northern Renaissance and Early Italian Art that inspired them). Very nicely put together, and probably the best art exhibition I’ve seen in the past year. I was surprised by the number of paintings that came from the SF art museums.
I visited Nourish Cafe for the first time, as well as Green Apple Bookstore, which is really cool. I also tried kava for the first time.
===================================================
Yesterday I was talking to someone about my work and ended up thinking about this: how problematic is it that most academic studies and our projects don’t measure long-term outcomes? (I’d like to understand “predictions with proxies” better at some point.)
And this person is working on motivating students to learn, which leads me to wonder again: what are the goals of this learning? How should learning be tailored based on the end goal? I’m so curious how they think about it.
Desire utilitarianism
16 Sept 2018, Sunday
Finally got a chance to write about desire utilitarianism!
You may know that I’ve been obsessed with this philosophy podcast lately.
One particularly interesting moral theory I learned about is desire utilitarianism proposed by Alonzo Fyfe, first introduced in episode 3, then episode 5, and they even made a stand-alone podcast “Morality in the real world” on it (which unfortunately seems to be incomplete).
Some of my notes from listening to these:
This is a moral theory that evaluates desires, not actions.
I was confused about its difference with preference utilitarianism, and I’m guessing this is a major one?
Actions: desires are reasons for actions; actions are good if they satisfy desires.
Desires: a desire is good (in the normative sense, according to this theory) if it satisfies other desires, and bad if it hinders other desires.
Question: how do we know which desires are stronger or more fundamental?
Answer: to know which desire is stronger, say between desires A and B, imagine the same person having both and ask which one they’d rather satisfy.
My question: not sure if that works; different people may hold desire A with different strengths, and I’m not sure it means anything to translate them onto the same person -- it again comes back to the question of interpersonal comparison (which may be addressed by WTP, but I think imperfectly; see the 3rd bullet point under “my questions” here).
(Malleable) desires can be shaped by reward and punishment.
My question: seems to need empirical evidence.
There are reasons to promote “good” desires (defined as above) and reward / praise is how we promote them.
Overall, don’t need to call this whole thing morality. Call it whatever you want.
Intrinsic values:
Desire fulfillment (by action) has no intrinsic value.
In fact, according to this theory nothing has intrinsic value. Intrinsic values are not necessary, hence we can do away with them; of course you can assume they exist, but there is no point since it doesn’t add anything.
There is no categorical imperative, only hypothetical.
Daily Crap, 14 Sept 2018 (Fri): Culture etc. and growth; Psychedelics; State capacity
(Retrospective crap)
An economist at WB presented his research agenda: Using large administrative datasets to answer big questions on growth, corruption and intergenerational mobility. Fascinating.
The future of development for developing countries seems to lie not in rural development but in migration and urbanization. (E.g. rural roads don’t do as much as we expected.)
What explains the empirical fact that sub-district-level variation accounts for most of the variation in economic growth in India?
Natural condition (e.g. geography)
“Culture”, norms, informal local institutions
Agglomeration
Local policies
National policies
Etc.
Culture: I’m totally on board.
Basically norms, beliefs etc. that people hold for whatever reason (typically passed on through intergenerational transmission and sustained as a Nash equilibrium, even if not apparently rational / reasonable in the current environment; highly path dependent).
If you believe, like I do, that most decisions made by most people are driven by System 1, then such beliefs and norms are huge in shaping behavior.
Many norms were formed for arbitrary reasons and were neither particularly good nor harmful back then, but now may happen to be conducive to, or a hindrance to, economic development.
How to change harmful norms? Sometimes (but not always) people don’t even care that much themselves, it’s just a matter of thinking everyone else cares / caring about people’s perception of you / signaling / “bad” Nash equilibrium (I’m lumping a few things together here). E.g. this paper on female genital cutting in Burkina Faso, and this paper on men’s status and female labor force participation.
Maybe one way is to have a charismatic activist from your own community to convince you...
(Side note: Liberia banned female genital cutting partly due to the work of one activist, which doesn’t mean the ban will be implemented perfectly, but at least it’s a start. I’m really curious to understand how lobbying for policy change and movement building work.)
===================================================
Psychedelics: someone told me that one thing they do is “flatten” your perception -- reducing the extent to which your priors shape how you perceive and filter information or ideas.
===================================================
Tax collection / state capacity of Al Shaabab: https://kenopalo.com/2018/07/17/is-somalias-al-shaabab-better-at-tax-collection-than-most-low-income-states/ (including a link to an economics paper explaining why it’s easier to collect taxes in economies with large firms: http://eprints.lse.ac.uk/66114/1/Kleven_Modern%20governments_2016.pdf). I would guess it’s a case of clever institutional design done from scratch and from the top down, which may be similar to the case of Singapore but not applicable in many places in the developing world today, where barriers are already ingrained. (But I don’t really know what the key to this design is!)
Another example of high state capacity: Rwanda http://blogs.worldbank.org/developmenttalk/using-satellite-imagery-revolutionize-creation-tax-maps
How to make sure the bureaucracy does what it’s supposed to do seems more complicated, since there are so many links in the chain. Maybe an organizational design problem? I wonder who studies that.
Related, this book looks so interesting https://bsc.cid.harvard.edu/building-state-capability-evidence-analysis-action
Slightly related: corruption among traffic police in DRC (similar to what’s described in “Who shall catch us as we fall” in the Kenyan context, and many many anecdotes on policing in developing countries) https://www.economist.com/middle-east-and-africa/2018/09/08/kinshasas-traffic-police-make-80-of-their-income-informally
Daily Crap, 13 Sept 2018 (Thurs): USAID using cash as benchmark; Local solutions; Evidence to action; Evidence to policy
USAID is doing RCTs to compare cash against their other programs, which is extremely cool.
(The RCT comparing a bundled nutritional program and cash transfers in Rwanda is here. The story isn’t as simple as cash dominates -- they have 2 sizes of cash transfers, the cost-equivalent one doesn’t obviously dominate the program, but the large $500 one does.)
Pretty good write-up on Vox by Dylan Matthews
===================================================
An example of infrastructure (footbridges) enabling mobility for labor and goods, which increases wages and farm profit. There are lots of studies on how infrastructure serves these functions. What strikes me about this case is that it’s an effective micro-level intervention that would be very hard to think of without knowing the local context. I’m really curious to see other success stories like this and how more such local solutions can be encouraged.
Also, a heuristic I’ve been learning recently in development is that interventions that give people more choice (cash transfers, enabling mobility, etc.) may work better than more prescriptive ones (e.g. providing specific things or jobs) -- but of course it depends on the context (e.g. public goods, people’s information or behavioral constraints).
===================================================
Today I learned about this really cool NGO that translates Pascaline Dupas’ sugar daddy study into a program in Botswana. They tested key assumptions first and then did an RCT, and are now thinking more carefully about program design and outcome measures (and doing another RCT after they figure them out). Soooooo cool!
(One lesson they shared with us, among many: even though academic RCT studies focus on causal effects, for scaling up an understanding of the mechanism / theory of change is crucial.)
We need more orgs like this in development, taking evidence to action, taking tested interventions to scale. Like Evidence Action, Charity Science Health, New Incentives, etc. The space seems both funding and talent constrained. Can there be an incubator?
Evidence Action Beta has one and recently received some funding from GiveWell to test and try to identify their next top charities.
===================================================
Related, how to take evidence to policy (by existing institutions, e.g. governments)?
Here is a GiveWell conversation with IPA on their work in taking evidence to policy.
Some related materials:
IPA case studies of how their evidence is incorporated into policy — I love seeing case studies like these! https://www.poverty-action.org/impact/case-studies
I think the JPAL equivalent is here https://www.povertyactionlab.org/scale-ups
Daily Crap, 12 Sept 2018 (Wed): “Theism and Explanation”; Rawls; Trolley
(Retrospective Crap)
Podcast: Conversations from the Pale Blue Dot (which I’ve been obsessed with lately), 007: Gregory Dawes – Theism and Explanation
My notes:
There is no reason to reject theistic or supernatural explanations of phenomena a priori
E.g. Can't reject them simply b/c they are weird; modern physics has plenty of weird stuff
Ultimately we reject such explanations because they don't help, don't predict or explain more things than explanations without god / supernatural forces
The most convincing argument that God doesn’t exist, he thinks, is Draper’s: a world with suffering fits better with cosmic indifference than with the assumption of a loving and powerful God (the latter is more of a stretch)
===================================================
I thought about Rawls, and I agree with some people that there is no veil of ignorance in the real world, which is one reason to argue that his theory of justice, which is based on it, is not relevant. However, I still find the veil of ignorance a very useful thought experiment when thinking about the “just” allocation of things, even if we don’t want to take that notion of justice super seriously (e.g. actually using it to determine the “morally right” allocation of things in society like Rawls did). The notion of “moral desert” also seems very useful -- though again I don’t know what to ultimately do with it (how seriously to take it?).
Interesting that both Rawls’ theory and Nozick’s are “ideal theories” that are about what an ideal society should look like, not where we should go from where we currently are (according to The Economist).
===================================================
Trolley problem:
Somehow I started thinking about it again. For the first time in my life I feel it’s obvious to switch to the track with one person. Even though it feels emotionally difficult, that’s not enough justification to let more people die. Another argument: if the problem were set up in a more neutral way (the switch is in the middle by default, and you have to choose one side), more people would choose to kill one, so if you don’t choose to kill one in the original set-up, your answer seems non-robust (and hence silly).
(Of course it’s assuming the one and five people are identical strangers; if they are of different ages, “productivity” in any sense, etc., normatively how we should value them is a separate and difficult question; similarly if you know them.)
So I guess maybe I’ve become more utilitarian than in the past when I used to find it so difficult to answer the trolley problem that I would refuse (though not extremely utilitarian according to this).
Though the trolley problem isn’t as strong a test of utilitarianism as this story, I think. If you are a true utilitarian, then as long as sufficiently many people benefit, it’s justified to torture the child (for it cannot be infinitely bad?). Here I’m inclined to use a heuristic like: it’s okay to inflict harm on some if it avoids more harm to others, but not to inflict harm on some to make already-okay people happier -- which is probably not something that can be derived from utilitarianism, for the reason discussed above. But I’m not sure. (I’m also not sure if I want to invoke deontology here, which according to Ajeya assigns a value of negative infinity to some actions -- even though it’s appealing here, it seems like it could also be too extreme.)
(Also not sure what I would do in the fat guy version of the trolley problem, except this body shaming seems bad. It has the same structure as the original one I guess...)
Daily Crap, 11 Sept 2018 (Tues): EAs in development, Evidence-based policy, Governance; Curiosity in AI; Empathy; Cause prioritization
1) Found 2 more EAs in development, yay!
One researched how to get policy makers to use evidence and wrote this piece.
One is looking into working on improving governance. We talked about the following:
There is good evidence on how to increase citizen participation (getting people to vote, giving them information, etc.)
But it’s hard to tie that, or better quality of officials, to improved outcomes in health, education, etc., which EAs care about (WHAT’S THE BEST EVIDENCE ON THIS?)
I think most of the benefit of improving governance is long term: better health system (not just specific interventions), infrastructure, economic growth etc. ... -- which means 1) hard to see immediate outcome, 2) hard to attribute causality
Reminds me of this blog post by Chris Blattman: https://chrisblattman.com/2015/07/20/the-problem-with-evidence-based-policy-change-is-we-dont-have-evidence-on-the-important-policies/
At least we need more research on this!
===================================================
2) Economist article on how building a preference for “curiosity” helps AI learn, fascinating!
===================================================
3) Article in National Geographic on the biological basis of empathy (some part of the brain), really interesting.
Those in whom this part of the brain is naturally more active feel more empathy (feel others’ distress more) and are “ultra altruistic”, e.g. willing to sacrifice themselves to save others (not like EAs donating money, but actually fighting bad people or donating a kidney, etc.), and those with naturally very low activity there are psychopaths (and the article says they are mostly men).
Nature vs. nurture: environment in which one grows up can also affect empathy on top of one’s natural tendency.
Empathy vs. compassion: empathy can be channelled into compassion but not always, and they are not the same thing. “Empathy and compassion use different networks in the brain, Singer and her colleagues found. Both can lead to positive social behavior, but the brain’s empathic response to seeing another person suffer can sometimes lead to empathic distress—a negative reaction that makes the onlooker want to turn away from the sufferer to preserve his or her own sense of well-being.” “Compassion ... combines awareness of another’s distress with the desire to alleviate it.”
Effect of “compassion training”: meditating, thinking of someone you love and care about, and trying to expand this feeling to incorporate other people -- this training is shown to increase compassion.
===================================================
4) Ajeya Cotra from Open Phil talks about implementing cause prioritization at EA Global 2017 London. Podcast here. My notes on some things I find interesting below:
How to use the moral uncertainty framework?
In principle, you have credences over different moral theories.
However, sometimes the compromise you find appealing isn’t necessarily a coherent moral theory.
Sometimes it’s a compromise between different moral theories (e.g. deontology vs. consequentialism), sometimes it’s between moral theories and “heuristics clusters” (e.g. advancing science is good, economic growth is good) which are not clean frameworks.
“Heuristics clusters”: e.g. a set of action-guiding heuristics that have worked in the past, or a set of qualms you have about some moral theories. (Deontology: assigns a value of negative infinity to some actions.)
You may have more confidence in these heuristics than in explicit moral theories.
Sometimes you feel like using different moral theories for different domains, e.g. using deontological views in everyday life and being more utilitarian in your work.
Once they figure out the normative uncertainty part (they are still working on it), i.e. how to assign weights to moral theories / heuristics, they will figure out how much good each cause area / intervention does under each moral theory, and then do variance normalization to figure out how to allocate the budget.
My questions:
On the normative uncertainty part (which is intuitively very appealing): it seems like you can just incorporate any intuition, so you may end up just defending everyday intuitions / conventional views. Of course, it’s probably good to put some weight on them, and not 100% weight on strictly “rational” things like utility maximization -- but how do you decide how much weight to put on each? Obviously a very hard question (Will MacAskill also discussed this in his interview with 80,000 Hours and says it’s a challenge).
Relatedly, if we give up on coherence as a criterion, how do we evaluate the plausibility of different moral theories / heuristics? (Or decide how much weight to assign?)
On variance normalization (which MacAskill also discussed on the 80,000 Hours podcast; a toy sketch of what I understand it to mean is below, after these notes), I am not sure it totally works. Just like people’s preferences may have different strengths -- e.g. some people may be really sensitive to everything (pain, tastes, etc.) while others are pretty much numb to any stimulation -- different moral theories may “feel” differently strongly about things, e.g. some may say doing certain things is extremely terrible while others say do whatever you want. Variance normalization doesn’t seem to give a “fair” comparison of different moral theories here. But I’m guessing it might be a compromise that’s better than the alternative, namely different moral theories having complete freedom to scale themselves arbitrarily in order to dominate the others? (The optimal solution may be to come up with a “scientific” way to decide the scale within each theory. However, this doesn’t seem possible, just as a utility function only describes an individual’s choices and interpersonal utility comparison is theoretically impossible -- it may be somewhat captured by willingness to pay, though imperfectly? Maybe there could be some sort of “WTP”-like currency with which different moral theories could “trade” with one another -- not sure.)
(Note: why I think WTP captures “strengths of preferences” imperfectly even if you assume away liquidity constraint -- imagine 2 people have the same utility function and always make the same choices, so to economists they are the same, but the first person is twice as stimulated by everything as the second person maybe under some neurological measure, the second person just doesn’t care much about food, clothes etc. Economists won’t say the first person’s utility is scaled up from the second person��s, but in some sense it is, no? Perhaps I shouldn’t talk about “strengths of preferences” since in the econ sense of preferences the two people are identical, but there is something there that’s different. Perhaps if you’re a hedonic utilitarian you’d care since magnitudes of pleasure or whatever are different for them, which doesn’t make sense to economists as “happiness” isn’t a real thing to them, but it is something?)
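Here is the toy sketch of variance normalization I mentioned above (in Python; the theories, options, scores and credences are all made up by me, just to show the mechanics of rescaling each theory’s choiceworthiness scores to a common spread before averaging by credence):
import numpy as np

# Made-up choiceworthiness scores for three options under two moral theories.
# The "loud" theory uses a much larger raw scale than the "quiet" one.
scores = {
    "loud_theory":  np.array([1000.0, 0.0, -1000.0]),
    "quiet_theory": np.array([1.0, 3.0, 2.0]),
}
credences = {"loud_theory": 0.5, "quiet_theory": 0.5}  # assumed weights

def variance_normalize(x):
    # Rescale to mean 0 and standard deviation 1, so that no theory dominates
    # just because of an arbitrary choice of units.
    return (x - x.mean()) / x.std()

normalized = {t: variance_normalize(s) for t, s in scores.items()}
aggregate = sum(credences[t] * normalized[t] for t in scores)
print(aggregate)  # credence-weighted scores for the three options
# My worry in the bullet above: this forces every theory to "care" with the
# same overall intensity, which may not be fair if some theories genuinely
# feel more strongly about everything.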
P.S. I have the impression that someone at Open Phil told me they no longer do such theoretical cause prioritization / research on normative uncertainty but I’m not sure. If they don’t, what do they do now? I want to ask them.
Daily Crap, 10 Sept 2018 (Mon): Education in Singapore; Podcasts; Hackers and Painters
Singapore’s new direction in education: https://www.economist.com/asia/2018/08/30/it-has-the-worlds-best-schools-but-singapore-wants-better?frsc=dg%7Ce
It makes a lot of sense that they are starting to emphasize critical thinking, social development, etc. after an early stage of pursuing academic performance only. The trade-off between “hard” and “soft” skills is often discussed in debates on education in the development context or for low-income populations (e.g. Bridge, KIPP). Perhaps there are optimal points for different stages of development -- without sacrificing any dimension too much, of course.
It seems like the main thing all countries should learn from Singapore is how it put so much emphasis on education early on and invested so many resources, with careful planning and long-term thinking. (The advantage of its political system is that it can get things done and pursue long-term agendas; however, other countries can still benefit from putting competent technocrats with long-term vision in key positions like education early on.)
===================================================
Some cool podcasts I listened to recently:
Zephyr Teachout on the Ezra Klein Show: talks about corruption in US politics, e.g. how campaign financing leads congressmen to cater to the interests of rich people (who may be outside their state) rather than their constituents, and her proposal for reform: public financing of elections, aiming to correct this distortion. She also mentioned suing Trump over business interests related to foreign governments, breaking up monopolies (to revive the vibrancy of the US economy; some conservatives like Luigi Zingales are on the same page), etc. -- she’s super cool, with deep theoretical and practical insights, and very reasonable. (Update: see this Economist article on competition in the American economy; this has been discussed quite a lot recently, including in this earlier Economist article.)
Chris Bailey on the Ezra Klein Show: procrastination happens because your brain decides that something else is more appealing to focus on, because it’s more pleasurable / threatening / novel than the important task you need to do. I like this explanation. I’m not yet sure how to get myself to focus, but I’ll try...
===================================================
(Re)reading “Hackers and Painters” by Paul Graham:
Chapter 1 is on nerds (a peculiar American middle/high school phenomenon). It argues nerds aren’t popular in school ultimately because they don’t care about popularity enough compared to intelligence; popularity is a “full-time job” and requires a lot of effort and motivation. It also argues that adults keep kids occupied with high school, where they don’t do anything real, and as a result they are miserable and cruel to each other. (I have heard arguments that kids that age don’t know about empathy, or at least a lot don’t -- so is empathy not part of innate human nature? Is it developed later? Is the fact that people become nicer later on due to the development of empathy, or simply due to understanding and obeying social norms? I’m curious, esp. about people who used to be mean kids and became nicer as they aged.)
Chapter 2, on hackers and painters, argues hackers are ultimately makers like painters, not computer scientists or mathematicians. Painters have high status now because of the cool work they did a few centuries ago; hackers don’t yet (though now in 2018 they somewhat do). Ultimately hackers are sad if they are made to be merely implementers (like software engineers at big companies) and not also designers of the product.
Daily Crap, 07 Sept 2018 (Fri): Stupid Privileged Tight-fisted Vegan EA Whining about “Self-denial”
(Retrospective Crap)
Friday night I decided to take a walk from my house. After passing by a few beautiful houses on Page St, I reached Hayes Valley which has many cute little restaurants. Through the big windows you can see cozy candlelit tables where diners -- couples, friends and families -- are sitting and enjoying a nice meal (most likely animal products). Somehow I started thinking: how come I’ve never been one of those people? (Okay, I have been in a nice restaurant, just not in one of these particular ones; the most recent times I’ve been in a nice restaurant like that, having wine with my dinner, was probably last year in Paris and Venice; okay I guess I did also go to Jardiniere recently for Impossible Burger, and Shizen for vegan sushi... I just don’t do it very often.)
Because I’m a tight-fisted EA (or at least one who’s trying to be tight-fisted), and vegan (or at least trying to be).
I don’t actually feel sad for missing out the food experience, for I now find the idea of myself consuming meat weird. And I’ve been to one of those restaurants and I know it’s nice inside but not all that special. Somehow though I do feel I’m missing out on something beautiful. Maybe I’ll go there with my friend one day, they can order whatever fancy meat dishes they like, and I’ll have a salad and a glass of wine just to feel that I’ve been there. Lol.
Is it really worth it, all this “self-denial”, instead of trying to satisfy every desire of one’s own, like many people do? Some people have always had frugal habits; for me these are real behavioral changes -- starting to think about money instead of satisfying random desires without much consideration of the cost (individually or cumulatively). So many people are just trying to enjoy life, going out to a fancy dinner on Friday night, buying stuff, etc. What am I trying to do?
What about Katherine? Julia Wise? Toby Ord? Ben Kuhn? And many more (e.g. this). Did they never have these desires, or have they learned to deny them? (Well, Buddha certainly fell in the latter category.)
To be fair, I don’t really *feel* much of “self-denial” (if at all, perhaps a tiny tiny bit); I’m just curiously examining this act that many (including my family) would see as such.
But shouldn’t I be ashamed of any such complaint or thought? What do I deserve, being merely born into privilege? Why me, but not the poor person in the street or in an African village? I certainly don’t deserve any of this fancy consumption, so denying myself these is only natural. (Julia Wise: “I see my money as belonging to whoever needs it most: every dollar I spend is a dollar out of the hands of someone who needs it more than me.”)
I think I’m going to continue acting as I do now, while being understanding and forgiving to myself when I have stray thoughts -- just like when I was trying to be vegetarian. (To be practical, realistic, and sustainable, like Will MacAskill said in the podcast with Sam Harris.)
-- The inner monologue of an aspiring tight-fisted vegan EA who sometimes fights with her desires that she got accustomed to during her past life.
(I do still go out to dinner nowadays but I try to limit it more than before.)
Daily Crap, 06 Sept 2018 (Thurs): Learning
(Retrospective crap)
Saw this, pretty interesting and weird Japanese anime movie https://en.wikipedia.org/wiki/Night_Is_Short,_Walk_On_Girl The little artistic creativity that I may once have possessed (if at all) has pretty much been killed by the mathy stuff I’ve been doing for the past decade. Perhaps the only legacy is that my brain still comes up with weird associations of things sometimes (that merely make people think I’m weird). When I see artistic associations of things done by actual artists though they really blow my mind.
===================================================
Talked to a friend about learning, e.g. going to university vs. taking MOOCs vs. following textbooks on one’s own. The university system is probably not optimal for a lot of people, but it has the merit of imposing a structure on learning, without which a lot of kids would just end up doing nothing (me included; I’m quite pessimistic about human nature, largely informed by observing my own rationality) -- of course this is assuming what’s taught is useful, which the signaling theory of education would dispute. One of those things where there is so much to improve, yet no one powerful enough to change things has enough incentive? (Another one we talked about recently is free access to academic journals.)
Now I’m thinking: to talk about the best way to learn we need to figure out the purpose of learning, and there could be different purposes for different kinds of learning. Some I can think of:
Obtain knowledge and skills that are useful for: 1) performing one’s job, 2) being a functional citizen of society (e.g. informed voter)
For fun, i.e. to satisfy intellectual curiosity
Signalling
Anything else? (One thing math did to me is make me want to come up with mutually exclusive, collectively exhaustive categories of things and be constantly paranoid about missing something.)
She also mentioned that for most scientists it’s not necessary to dig into the weeds of math, e.g. calculus / analysis; rather, grasping the intuition is sufficient -- one can dig deeper if one has the interest or need. And also how she likes reading a textbook and then jumping from one topic to a Wikipedia article and perhaps another textbook, eventually coming back to finish the original textbook a few months later. (I’ll think more about it when I have a chance to learn the stuff I want to learn...)
Daily Crap, 09 Sept 2018: Crazy Rich Asians; Kehinde Wiley
Note: When it comes to writing dates, I prefer DD/MM/YY (Europe etc.) or YY/MM/DD (China etc.). I hate the American way of MM/DD/YY. In order to avoid confusion I write out the month in English.
Interesting article on “Crazy Rich Asians” and the Chinese Chinese vs. Asian American experience: https://www.nytimes.com/2018/09/06/world/asia/crazy-rich-asians-china.html I guess it’s similar to how “Americanah” is about the African vs. African American experience (I haven’t read it, because I didn’t like “Half of a Yellow Sun”, but maybe I will try it).
Kehinde Wiley, who painted Obama’s presidential portrait, is so cool -- see description here https://www.instagram.com/p/BneJSryim7Q/; you should also check out his other work, super cool!
Introduction
In order to avoid spamming my friends too much with the random crap that I come across or that passes through my mind every day, I created this repository for it all. Hopefully it relieves my friends of some negative externalities.
The quality of writing here is probably not great, because I’m trying to quickly document my daily thoughts without putting in too much effort. Maybe one day some of these will turn into more serious pieces of writing... Maybe.
I try to write about topics of general interest, though some of it is purely personal crap. It’s a reasonable reflection of what occupies my mind each day (outside of work stuff).