Convention and Meaning: Derrida and Austin Jonathan Culler, 2008 / read July 20
for Saussure things got their meaning by differentiation, contrasts
but this can't give a complete account: if you say 'Could you lift that box?' it might be a request, an abstract question of capability, or a rhetorical question about how hopelessly heavy it is
so where does it get the meaning?
we risk going back to saying that the meaning resides in the consciousness of the speaker
but a structuralist would ask: what makes it possible for them to mean these several things at once?
so we account for the meaning of 'utterances', different from sentences, by analyzing a different system: that of Speech Acts
so Austin is thus repeating Saussure's move: describing the system that makes 'signifying events' (parole) possible
Austin won't let us locate meaning in the speaker's mind - there isn't an 'inner act of meaning' which goes on when you mean something
it gets its meaning through certain conventions -- if I say 'I promise to return this to you', indicating an item I will borrow, you understand that I am making a promise, but when I just wrote it you understood that this is not a promise bc it lacks the context
so Austin offers a structural explanation of meaning which avoids 'logocentric premises' -- but in his discussion of it he reintroduces the problems he had just overcome. This is what Derrida tries to deal with in Signature, Event, Context
in How To Do Things With Words, Austin wants to get over some narrow views of language his milieu had; to have a theory adequate to statements which had been discarded as meaningless or 'pseudo-statements' for not fitting their criteria [which were: either a description, or a statement of fact - and could be either true or false]
he distinguishes two types: constative statements (descriptions or statements of fact), and performative statements (which enact what they say)
there is a surprising conclusion here: if I say, 'I affirm that the cat is on the mat', I'm performing my affirmation. But a crucial aspect of performatives is that they can have the explicitly performative part removed: 'I will pay you tomorrow' is still a promise. But removing the 'I affirm...' gives us, 'the cat is on the mat' - I still affirm it, performatively - but the statement I make is also an emblematic constative statement
Culler notes that Austin's argument here is a 'splendid' instance of deconstructionist 'supplementarism', in its inversion of the old formula: what had been seen as merely secondary or inessential becomes the most primary -- rather than performatives being secondary to constatives, the constatives are a special case of the performative
"The conclusion that a constative is a performative from which one of various performative verbs has been deleted has since been adopted by numerous linguists." [how used is this in linguistics?]
this allows us to solve the problem of a single statement having multiple meanings: it's actually a performative statement from which the performative has been deleted. 'I ask you to lift the box', 'I inquire if you could lift the box', 'I despair at the box's weight'
Austin doesn't argue this and would be skeptical of it; he argues that illocutionary force (meaning) does not necessarily derive from grammatical structure
he instead proposes a distinction between locutionary and illocutionary acts
so when I say 'the chair is broken' I perform the 'locutionary' act of making an utterance, and the 'illocutionary' act of 'stating, warning, complaining...', whatever performance
linguistics accounts for the meaning of the locutionary act; speech act theory accounts for the meaning (or 'illocutionary force') of an utterance
explaining illocutionary force means explaining the conventions that make it possible
we might find out what these conventions are by looking at how these performatives can go wrong, might not actually enact the promised performance [I think eg. a bigamous marriage would prevent the 'I pronounce you man and wife' from really marrying the couple]
so Austin doesn't treat failure as something exterior to performatives, accidental, not part of how they really work, but an integral part of them - performances can go wrong -- something cannot BE a performative unless it CAN go wrong [continental philosophers like him for this reason: he really grasps the 'negative' (Culler puts it in these terms later)]
this accords with semiotics: a statement couldn't signify if it couldn't be said falsely
Austin argues that performing acts - like marrying or betting - must be described as something like 'saying certain words' rather than performing some inward action which the words reflect
...enter Derrida
Derrida argues that despite saying this, Austin reintroduces this inward action as the force of the performance
Austin, worrying about jokes etc., perhaps because it would involve a description of an inward act of meaning, says that only 'serious' speech acts can be analyzed, but doesn't argue for it. He actually puts 'serious' in scare quotes, as if the argument itself was a joke [Deconstructionists love that stuff...]
so after remarking that philosophers wrongly excluded utterances which weren't true or false, he excludes utterances which aren't serious. Instead of arguing for it as a 'rigorous move within philosophy', it's a customary exclusion 'on which philosophy relies'
later on he describes these 'unserious' uses as 'parasitic on' the normal use; so Austin introduces a new constitutive & supplementary distinction, after getting away from one
Searle defended this to Derrida saying that we ought not *start* our investigation by considering these parasitic discourses [we feel, and have perhaps been primed to feel by Culler, that this misses the point that Austin makes his intervention by uncovering the way these 'supplementary' excess cases are core to the working logic of speech acts, and this might be another such case - although we might not feel it to be necessarily the case that *all* supplementary things are likewise constitutive, although perhaps Derrida 1. argues that *this* supplement is constitutive, but also 2. that all supplements are constitutive of what they are supplemental to, as a matter of a thing being a thing, elsewhere]
actually Derrida's case is rather that setting aside these uses as secondary from the beginning is begging the question; the theory has to be able to account for them -- Austin deals with an 'ideal language' here, not the one really used (which includes uses by actors on a stage, in jokes... Derrida here appears as an ordinary language philosopher!)
So Searle argues that it's parasitic because it's not possible for an actor to make a promise in a play if we didn't make promises in real life; but Culler says, why see it this way around? Perhaps it is only possible to make a promise in real life if it could be made in a play. For Austin, an utterance is only possible because there are formulas and procedures that we can follow to do so - so for me to do it irl, there have to be iterable procedures that could be acted out...
so Derrida asks: could my performance succeed if it didn't conform to an iterable model? -- for it to succeed there needs to be a model, a representation, and the actor's representation of it is just such a thing
~footnote: some commentary on Searle's disagreement... he brings up a use/mention distinction - performatives use utterances, while actors just mention them. Derrida argues that this distinction requires us to go back to making use of intentionality & the inner act that meaning depends on, what we were trying to get away from: if I mention something instead of use it, it can only be because I intend to mention it...
Culler gives an example that is very ambiguous w/r/t use/mention - "His colleagues have said his work was 'boring' and 'pointless' " -- have I merely mentioned the words boring and pointless (since I'm just quoting others who have said it) or have I used them (since I do imply that his work is really boring and pointless)? To tell, you would have to decide which one I intended to express.~
so, to repeat Austin's move on the core/marginal distinction that Austin reintroduces: the so-called serious performance is actually a special case of the parasitic - its an instance or reenactment of this iterable representation
so imitation is a condition of possibility of signification
eg., for there to be a recognizable original 'Hemingway style', there must be some style which can be imitated, repeated, etc. [This seems very convincing to me]
so, the performative is from the outset structured by this possibility for iterability, citation, performance-of...
the reason that Austin reintroduces this flawed core/supplementary model is to solve a problem for speech act theory:
if you explicate all the conditions that make a particular performative possible (which is the goal of speech act theory), say-- 'I pronounce you married' is performative only if there is a marriage license, a licensed officiant, etc. - one can *always* imagine a further scenario that would cause the performative to fail (say, they're all actors in a play...)
Austin tries to resolve this by ruling out instances where the speaker is 'not serious' - but this requires us to appeal to the intentions, etc...
so to make performatives and 'performance' coextensive is to maintain a version of the theory that can really discard intention etc., but at the cost of being unable to explicate the conditions of possibility of a given performative - because it gets its meaning only via context, and the number of contexts is infinite
... [skipping a nice section that we don't really need to note]
for Austin, a signature is the equivalent in writing of a performative utterance, 'I hereby...'
on this idea, Derrida ends Signature, Event, Context, by writing his name twice, and indicating one is a counterfeit of the other. The joke being: is this counterfeit, citational second signature not a signature, because he wasn't being serious? or does it function as a signature, because a signature is signing your name?
the other implication: which of the two signatures is the 'real' one? you can't tell in writing -- 'the effects of the signature depend on iterability'
so contrary to Austin, who holds that the signature is an indication of some inner intention (assent to an agreement, etc), the signature can only function if it is repeatable, iterable... "The condition of possibility of [its] effects is simultaneously ... its condition of impossibility, or the impossibility of their rigorous function." [ie. to be possible, it must also be imitable, repeatable... there's a bit of what 'difference & repetition' is engaging w/ here --
interesting to compare w/ Deleuze here - for Derrida, something has to be repeatable in order to be at all because it's just a repeatable expression of conditions of possibility. This means its negative is prefactored into its conditions of possibility -- the price of having a signature is that the signature can be counterfeited.
Deleuze is somewhat allergic to 'conditions of possibility', and also wants to find a system where the negative doesn't exist. I'm not sure how he might argue w/ Derrida here. Perhaps he would feel that it is the difference between each signature which makes it repeatable... but that doesn't really make sense to me & is probably an overly literal reading. It's possible the two only disagree in terminology here - what Derrida would call the negative is just another form of difference for Deleuze. I'm not sure.]
Culler talks about how signatures can be made without the signatory's presence, in the case of machines signing checks automatically, so that wages are paid without being physically cashed in
he identifies 'logocentrism' as seeing these sorts of things as secondary to or parasitic on direct speech where the speaker's intentions are carried out
really, such cases could not occur if they didn't belong to the structure of the signing (etc.) already
so Derrida says that intention will not disappear from a good analysis, but it will no longer govern the entire 'system of utterance'... so while I intend to mean something and that's why I speak, the act of speaking itself introduces a gap between my intention and my words. My intention is the reason I structure things the way I do, why I make use of certain conventions, etc., but my intention is not accessible in the words I use (just as we might say, if I make a necklace, my intention for the necklace to be a gift for my niece is not a property of the necklace itself; the meaning/illocutionary force of a speech act is the necklace here - a speech act is given its meaning by the conventions it uses to generate a meaning, and I employ those conventions to try and say what I intend to say)
Culler introduces the unconscious here - often we say things and do things for reasons we are unconscious of, so intention is even a little more deflated. My reasons for saying something are not entirely conscious intentions which are transparent and accessible to reflection, but a 'structuring intentionality' that includes implications that never "entered my mind"
"Intentions are not a delimited content but open sets of discursive possibilities-what one will say in response to questions about an act." [nice idea]
"The example of the signature thus presents us with the same structure we encountered in the case of other speech acts: (1) the dependence of meaning on conventional and contextual factors, but (2) the impossibility of exhausting contextual possibilities so as to specify the limits of illocutionary force, and thus (3) the impossibility of controlling effects of signification or the force of discourse by a theory, whether it appeal to intentions of subjects or to codes and contexts." [a summary of the whole argument]
what this means is that meaning can never be *exhaustively* determined, but we are still left with tools to examine speech acts and how they work, etc.
Culler gives a nice defense, by comparison with Gödel's incompleteness theorem in mathematics, that meaning being indeterminable (or not precisely, finally, exhaustively determinable) does not mean that no analysis can or should be done: "the impossibility of constructing a theoretical system within which all true statements of number theory are theorems does not lead mathematicians to abandon their work"
Hallucinations as top-down effects on perception Powers et al, 2016, read 27-28.06.20
a review of the state of the literature of top-down effects on perception in neuroscience; and then applies it to hallucinations (we're reading for the first part, but we'll take the second too)
They say 'present-day cognitive scientists' argue cognition does *not* influence perception. They cite: Firestone C, Scholl BJ (2015): Cognition does not affect perception: Evaluating the evidence for 'top-down' effects [in what sense do they use 'perception'? perhaps more like Raftopoulos' restricted sense? also: this shouldn't be seen as an argument against the penetrability of 'perceptual belief' in Lyons 2011]
but 'work in computational neuroscience' challenges this view, & they also think hallucinations pose a challenge to 'strict, encapsulated modularity' [Fodor]; they'll illustrate it with 'phenomenology(!) and neuro-computational work'
MODULES OF THE MIND
Fodor shout out! they give a summary of his modular parsers - "for example, the early vision module takes in ambient light and outputs color representations" - which are cognitively penetrable only in a very strict, delineated way.
Fodor's modules are annoying for scientists because they're poorly defined, so difficult to falsify. Some 'ultra-cognitive neuropsychologists' even claim that the brain 'hardware' is irrelevant to the 'software' they're interested in & resist empirical evidence!
a strict modular approach requires 'functional segregation', with different parts of the brain doing different things inaccessibly to one another, but evidence supports an 'integrationism' of the brain [we saw this with the knots "Vetter & Newen" were tying themselves into]
the authors prefer "predictive coding" [like O'Callaghan et al], which they use to model the integrated mind via "functional and effective connectivity data" [a whole new language of buzzwords to learn!]
PREDICTIVE PERCEPTION IMPLIES COGNITIVE PENETRATION
while perception that corresponds to truth would be adaptive, perception that allows misbelief could also be adaptive if the misbeliefs are adaptive
we might, per Hume & Helmholtz, 'perceive what would need to be there for our sensations to make sense'
so the brain uses both bottom-up information and top-down inferences, as Helmholtz argued [fascinating - who was that guy?]
it uses the top-down inferences to 'compute precision-weighted prediction errors' to arrive at 'an optimal estimation' - they cite a bunch of 'predictive coding' & 'attention' papers
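[a toy sketch - mine, not the paper's - of what a 'precision-weighted prediction error' amounts to in the simplest Gaussian case; 'precision' is just inverse variance:]

```python
# Toy one-step predictive-coding update (Gaussian case, my own illustration).
# Precision = inverse variance; the posterior weights the prior and the
# sensory sample by their respective precisions.

def update(prior_mean, prior_prec, sample, sensory_prec):
    prediction_error = sample - prior_mean
    # Gain = relative precision of the sensory signal.
    gain = sensory_prec / (prior_prec + sensory_prec)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_prec = prior_prec + sensory_prec
    return posterior_mean, posterior_prec

# Precise sensory input: the estimate is pulled almost all the way to the sample.
print(update(0.0, 1.0, 10.0, 9.0))  # -> (9.0, 10.0)
# Precise prior, noisy input: the same sample barely moves the estimate.
print(update(0.0, 9.0, 10.0, 1.0))  # -> (1.0, 10.0)
```

[the asymmetry between the two calls is the mechanism everything below trades on: whichever side is more precise dominates the 'optimal estimation']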
top-down has a long history in neuroscience, from the 80s
Friston K (2005): A theory of cortical responses -- the origin of 'predictive coding'
contra Fodor, some studies claim that 'early visual processing' ('perception' in Raftopoulos) is influenced by 'non-perceptual information' ... "semantic priming increases speed and accuracy of detection by minimizing prediction error" ... "Word contexts result in ambiguous shapes being perceived as the missing letters that complete a word" ... a bunch of others
THE BURDEN OF PROOF: ESTABLISHING TOP-DOWN INFERENCES IN PERCEPTION
they go over Firestone & Scholl's criticisms of the 'new look' research
they say that they're plagued w/ problems that can be avoided by following these guidelines:
1. Disentangle perceptual from decisional processes 2. Dissociate reaction time effects from primary perceptual changes 3. Avoid demand characteristics 4. Ensure adequate low-level stimulus control 5. Guarantee equal attentional allocation across conditions.
these issues are inherent to tasks where perception guides a behaviour decision (so research would have to be done without that)
but a 'Bayesian formulation' doesn't permit this distinction; 'Signal Detection Theory' appears to, but it also allows cognition to influence perception.
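[my own gloss on why Signal Detection Theory 'appears to' permit the split: it factors responses into a sensitivity index d' and a criterion c, so a pure change of decision policy moves c while leaving d' fixed - a sketch using the standard equal-variance formulas:]

```python
# Standard equal-variance SDT indices, computed from hit and false-alarm rates.
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(false_alarm_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias
    return d_prime, criterion

# Two observers with identical sensitivity but opposite biases:
liberal = sdt_indices(0.9, 0.4)       # says 'yes' a lot
conservative = sdt_indices(0.6, 0.1)  # says 'yes' rarely
# d' comes out the same for both; only the criterion differs in sign.
```

[the paper's point, as I read it, is that even this tidy split doesn't quarantine perception: cognition can move d' too, not just c]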
"Top-down processes can even alter the mechanical properties of sensory organs by alteringthe signal-to-noise ratio" [wow]
they will argue that top-down influence is clearest 'when sensory input is completely absent'-- 'when experiences are hallucinated'
HALLUCINATIONS AS EXAMPLES OF TOP-DOWN PENETRATION
Hallucinations can be consistent w/ affective states; guilt & disease when depressed, etc.
hallucinations are fairly common in even 'non-clinical' cases; they occur in 28% of the population -- hallucinations may be 'an extreme of normal functioning', not a 'failure of modularity'
they give some support for hallucinations being top-down: "prior knowledge of a visual scene conferred an advantage in recognizing a degraded version of that image" & patients at risk for psychosis were 'particularly susceptible to this advantage'; similarly, patients who were taught to associate a difficult-to-detect noise w/ a visual stimulus began hearing it when shown the visual w/out the noise -- esp. patients 'who hallucinate'
experiences of uncertainty increase the influence of top-down
they feel that studying penetrability via hallucinatory experiences gets around the problems Firestone & Scholl identify; neuroimaging might do it too
now they'll try to integrate this understanding of halluciations as top-down w/ 'notions of neural modularity and connectivity'
BRAIN LESIONS, MODULARITY, CONNECTIVITY AND HALLUCINATIONS
They propose that "inter-regional effects" mediate top-down influence on perception
these are often discussed in terms of 'attention'; 'predictive coding' theory conceives of attention as 'the precision of priors' and 'prediction errors'
a bit of statistics jargon for modelling we don't care about, although they make the interesting equivalence between 'change over time' (uncertainty) and 'predictive relationship between states' (reliability) [difference & repetition baby!] -- the gist is that all this stuff is a promising, plausible explanation of some difficult areas of the data but ['precision weighting'] is still waiting on more empirical trials
so someone walking home after watching a scary movie might have 'precise' enough 'priors', ie. a strongly-weighted 'background theory' (in Fodor's terms), to actually see the shadows on the street as being darker than they are... & if they were precise enough, strong enough, they'd really hallucinate
their support: a single case where a lesion caused hallucinations; 'functional connectivity' between the lesion location and other regions; 'effective (directional) connectivity' in patients w/ Audio-Visual Hallucinations
they'll use these to argue that 'top-down priors' influence perception, contra strict encapsulation
1. lesion-induced hallucinosis
with 'graph theory', fMRIs of the brain are parsed into 'hubs' (sub-networks), with a subset of regions connecting those sub-networks ('connectors')
lesions are more likely in 'rich-club hubs', regions that mediate long-range connectivity between connected information processing hubs
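[a toy illustration - my construction, not the paper's data - of why lesions to connector hubs are globally disruptive: in a small modular graph, deleting the one node bridging two modules fragments the network, while deleting a within-module node does not:]

```python
# Two modules {a,b,c} and {d,e,f}, joined only through the connector hub 'h'.
graph = {
    "a": {"b", "c", "h"}, "b": {"a", "c"}, "c": {"a", "b"},
    "d": {"e", "f", "h"}, "e": {"d", "f"}, "f": {"d", "e"},
    "h": {"a", "d"},
}

def components_after_lesion(graph, lesioned):
    """Count connected components once `lesioned` is removed (DFS)."""
    nodes = set(graph) - {lesioned}
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(m for m in graph[n] if m in nodes)
    return count

print(components_after_lesion(graph, "b"))  # within-module lesion -> 1
print(components_after_lesion(graph, "h"))  # connector-hub lesion -> 2
```

[real 'rich-club' analyses are of course weighted and statistical, but the topology is the point: between-module nodes carry connectivity the modules can't replace]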
the limbic system is a rich-club hub & has been implicated in 'the global specification of' precision weighting
it is not, however, part of *early perception*; they'll instead show "regions like orbitofrontal cortex penetrate perceptual processing in primary sensory cortices giving rise to hallucinations"
~this part gets very heavy on the neuroscience & is beyond me - but the gist is that they're able to look at which hubs do what & how that gets disrupted by lesions. It appears that there are definitely such things as modules like Fodor's parser, responsible for different faculties, & which parts of the brain these are found in is well settled - it's just that these seem to be cognitively penetrable bc of how they behave with lesions. However, these are not 'proof' of it, just 'candidate' explanations for penetration
2. lesion effects on graph theory metrics
re: connectivity, lesions are more disruptive, & can be disruptive of the whole brain, when they occur in between-module connections (rich club hubs); & they alter connectivity in opposing, un-lesioned hemispheres [this is a challenge for 'cognitive neuropsychology' - the sophist-like 'cognitivists' from before]
"We suggest that the rich-club hubs that alter global network function ... are also the hubs involved in specifying global precision and therefore updating of inference in predictive coding" -- & thats how early perception is cognitively penetrated (ie. 'higher' priors re: precision are mediated by the same stuff that mediate 'predictive coding' in early perception) [note this is a 'suggestion', but they do give a study in support]
there *may* be a connection w/ schizophrenia and lesions in these areas, but it hasn't really been shown yet; but some neuropsychiatrists do work off of this
"In our predictive coding approach informational integration (between modules) is mediated via precision weighting of priors and prediction errors, perhaps through rich club hubs" -- but "the exact relationship between psychological modularity and modularity in functional connectivity remains an open empirical question."
ie. perception is cognitively penetrated because 'predictive coding' (used in early perception) is mediated by a 'precision weighting' of 'priors and prediction errors' via rich club hubs
3. directional effects
'Dynamic causal modeling' (DCM) is a way of looking for 'directional' connectivity in fMRI data
one study examining 'inner speech processing' found very little connectivity "from Wernicke’s to Broca’s areas" in schizophrenic patients w/ auditory hallucinations (vs. schizophrenic patients without them) -- suggesting 'precision of processing in Broca's was higher than in Wernicke's'
they say that this data is consistent w/ information from 'higher' regions penetrating lower regions
[Wernicke's area is involved in comprehension of written & spoken language, while Broca's area is involved in the production of language; the idea here is that the patients who experienced auditory hallucinations would also, when processing language, rely more on the higher level functions of Broca's area for precision weighting and much less so on the earlier perception of Wernicke's area]
'predictions' are top-down (ie. 'flow from less to more laminated cortices') while 'prediction errors' are bottom-up (the opposite) [what are 'prediction errors'? maybe like an 'error warning'?]
a lot of neuroscience stuff about the insula, priors, and lots of things I don't understand, which I don't need to note; the conclusion is that they speculate that rich club hubs are "well placed to implement changes in gain control as a function of the precision of predictions and prediction errors." [ie. rich club hubs are the 'court' and 'court of appeals' of the brain, 'hearing' predictions & prediction errors & itself 'sentencing' gain control changes]
another paragraph of studies showing similar things, this time with 'bi-stable perception', percepts that switch dominance 'on their own' (without a change in sensory input) -- this happens more in schizophrenics, but currently hasn't been looked at w/r/t hallucinations specifically
DISCUSSION & FUTURE DIRECTIONS
a summary of the above
their argument is that the data is inconsistent with 'an encapsulated modularity of mind'
w/r/t hallucinations, it looks like the top-down 'gain control mechanisms' ... 'sculpt' perceptions even in the absence of sensation
perception is cognitively penetrated insofar as it minimizes 'overall long-term' prediction error; so knowing how the Müller-Lyer illusion works doesn't act on my perception because 'the illusion is Bayes optimal' - seeing in this way is more *overall long term* precise
there is some contradiction about schizophrenics & their tendency to perceive illusions - sometimes it works less, sometimes more. They say that this cannot be generalized & ought to be treated case by case; there is a *hierarchy* of perceptual systems and 'information processing can be impaired at different levels of the hierarchy'
so illusions might fail at a lower level in the hierarchy while hallucinations are generated at a higher level
they discuss work they did w/ ketamine; it doesn't normally cause hallucinations, but they found that it did in the MRI scan, which is 'perceptually denuded (dark, still, rhythmically noisy)'
ketamine enhances 'bottom up noise'; they argued then that sensory deprivation induces hallucination via top-down priors. "This is similar to the paradoxical effect of hearing loss and vision loss on hallucinations."
higher level precision increases to compensate for lower level prediction errors
so the increased bottom-up feed of ketamine creates prediction errors when sensory deprived & this produces hallucinations -- the priors top-down predictively organizing the error-filled bottom-up feed
so in general, hallucinations are produced by 'the dynamic interaction between priors and prediction errors'
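[the deprivation story in toy Gaussian terms - my numbers, not theirs: if the percept is a precision-weighted blend of prior and input, then when sensory precision collapses, the precise prior is what gets 'seen':]

```python
# Percept as a precision-weighted blend of prior expectation and sensory sample.
def percept(prior_mean, prior_prec, sample, sensory_prec):
    w = prior_prec / (prior_prec + sensory_prec)  # weight on the prior
    return w * prior_mean + (1 - w) * sample

# Normal viewing: reliable input dominates, the prior contributes little.
print(percept(prior_mean=5.0, prior_prec=1.0, sample=0.0, sensory_prec=9.0))  # -> 0.5
# Sensory deprivation: the precise prior dominates despite a null signal.
print(percept(prior_mean=5.0, prior_prec=9.0, sample=0.0, sensory_prec=1.0))  # -> 4.5
```

[read prior_mean=5.0 as 'the expected voice or shape' and sample=0.0 as 'no signal': the second call is the hallucination-like case]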
they hint at some arguments that are strongly consonant with our own experiences of schizophrenia. First, that it is possible to 'conjure up' hallucinations at will. Secondly, that there are two types of hallucination - those 'with insight' (accompanied by a sense of unreality), and those 'without insight' (which feel as real as any other percept). We have always argued both of these things. [We have always argued that schizophrenia involves a kind of top-down *compulsion*, ie. I *have to* conjure this...]
worth noting: the picture of the process of perception I had received from the paper on Deleuze I wrote was as follows,
so there are a few relevant steps: noumena, then I have a sensation of noumena, then I make a mental representation of that sensation (and might synthesize similar sensations together), and I finally know this representation
this is precisely the view defended by Fodor, below - with the cautions that our background theories (le style européen, the Concept) infiltrate/s the perceptive process both on the level of the representation of the sensation (in a more limited way) and in my final evaluation of that representation (without restrictions) ie. I have a chance to modify it, and always do so, before I believe it - ergo, actually know it.
Observation Reconsidered Jerry Fodor, written 1984, read 9.06.20-18.06.20
Fodor's Granny hopes to counter claims that observation is theory-laden
she says: there are two roads to "the fixation of belief" (following Peirce?)
those based on observation, and those based on inference
there is a 'corresponding taxonomy of beliefs': observational beliefs and inferential beliefs
observation is more reliable than inference because there are fewer steps to fixing belief; hence, we try to rely on observations (eg. in the sciences)
but some things are inaccessible to observation, eg. ultraviolet light, so inference must be used
observation is important in settling disputes; if we disagree on matters of inference, we can only say we prefer one theory over another. If we disagree on observation, we can simply look and see for ourselves who is right
Fodor takes back the mic...
he wants to defend this view against 'widely endorsed' arguments against drawing a clean theory/observation distinction
so for fodor, observational beliefs are 'theory neutral'
there are three arguments against this view:
ordinary language arguments, meaning holism arguments, and de facto psychological arguments (a fourth 'ontological' argument is excluded as non-realist)
1. THE ORDINARY LANGUAGE ARGUMENT
Fodor says the main contention of the paper is that "there is a theory-neutral observation/inference distinction" & the boundary is set by "fixed, architectural features of an organism's sensory/perceptual psychology"
this is not however how scientists see it / talk about things
the distinction scientists make between observation & inference is 'relativized to the inquiry at hand'
what an experimenter calls an observation depends on the assumptions the experiment makes, eg. response times are taken by measuring a clock, so checking the clock is a way of observing the response times; but if the clock is broken...
this is a concession to an epistemic fact: not every element of the experiment can be tested, they rely on things which are assumed to work in a certain way, such as the clock- the tests of these assumptions have happened elsewhere
however, even if they use it this way, this doesn't 'settle the case against Granny'
she argues that while actual experimenting scientists' observations are theory-relative, there is still an observation/inference distinction outside of 'ordinary language' that no linguistic considerations can decide
ie. we have to do more than socio-linguistic analysis of working scientists [why? where does the theory-inference distinction ever come up outside of actual observations?]
2. ARGUMENTS FROM MEANING HOLISM
he outlines a complicated picture of a graph which represents a theory, with each node's entailments etc., that I don't need to reproduce
what happens is that if one presupposition is displaced, the whole graph distorts
so the 'minimal context' of the meaning of any theoretical postulate is the *whole theory*
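[the holist picture can be made concrete with a toy dependency graph - my illustration, not Fodor's: if meaning flows along the entailment links, then revising any one postulate reaches every node of a connected theory:]

```python
# Each postulate maps to the postulates whose meanings depend on it.
theory = {
    "P1": ["P2", "P3"],
    "P2": ["P4"],
    "P3": ["P4"],
    "P4": ["P1"],  # dependencies loop back: the theory is one connected web
}

def affected_by(theory, revised):
    """All postulates whose meaning shifts when `revised` is displaced."""
    seen, stack = set(), [revised]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(theory.get(node, []))
    return seen

print(sorted(affected_by(theory, "P2")))  # -> ['P1', 'P2', 'P3', 'P4']
```

[ie. displacing P2 'distorts the whole graph' - the minimal context of any node is the entire theory]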
he says this argument has done a lot of work for skeptical philosophers since Quine's two dogmas
it's possible to accept this holism (which 'Granny and I' do not) while defending an observation/inference distinction, eg. an epistemic rather than semantic distinction - every statement gets its meaning from its theoretical context, but some depend on more empirical confirmation than others; Quine takes this view
but the observations can never be theory neutral, because what your "observation sentences" mean depends on what your theory is
he gives Paul Churchland as one defender of this view: "[observation terms'] position in semantic space appears to be determined by the network of sentences containing them accepted by the speakers who use them" - sensation itself cant determine the meaning of an 'observation term', only 'networks of belief'
"as we saw, we are left with networks of belief as the bearers or determinants of understanding . . . (p. 13) . . . a child's initial (stimulus-response) use of, say, 'white' as a response to the familiar kind of sensation, provides that term with no semantic identity. It acquires a semantic identity as, and only as, it comes to figure in a network of beliefs and a correlative pattern of inferences. Depending on what that acquired network happens to be, that term could come to mean white or hot . . ., or an infinity of other things (p. 14)." (Churchland)
so an observation sentence could mean anything depending on theoretical context
Fodor claims this means that “anything might be an observation sentence depending on theoretical context or, material mode, that anything might be observed depending upon theoretical context."
this means you can change your observational capacities by changing your theories
for Fodor this already means that for meaning holism there cannot be a class of beliefs that must be inferential regardless of what theories the believer espouses, because they could always be given meaning by a theory which renders them observational statements [I'm not really sure about this line of argument; why can inference statements always become observations? or: isn't it simply that if I say 'white' and it means a colour, and someone says 'white' and it means a temperature, we have said different things? that the same words, mere words, might refer in one context to an observation and in another to an inference doesn't seem that interesting to me]
Fodor says that recent causal semantic theories may be indicating that holism is not true; not all meanings are dependent on their theoretical context
some of their semantic properties are dependent on their relationship to the world empirically [no footnote! excuse me??? where can I read more about this??]
so Churchland argues: if statements do not get their meanings from their connections with sensations, they must get them from a theoretical context. But neither is the case.
so 'white' doesnt refer to the colour of sensations but the colour of objects; but 'the referential roles of colour terms tend to be isomorphic' (uh...?) & dont have 'functional roles' (pls explain???) -- in fact, 'white' gets its meaning from its association 'with white things' [sounds like Saussure!]
this however doesnt show that there is a viable theory-neutral observation/inference distinction; only that meaning holism doesnt pose a challenge to it because meaning holism isnt true...
3. PSYCHOLOGICAL ARGUMENTS
the psychological argument is that there is no distinction to be made between perception and cognition
"perception involves a kind of problem-solving - a kind of intelligence" (Gregory 1970, p. 30) [the wiki article for 'perception' mentions this, ie. how we 'fill in the details' etc]
perception is therefore the process by which we assign probable causes to the vague stimulus we receive
every 'proximal stimulation' is compatible with a great number of 'distal causes' [ultimate causes], so there are many possible worlds where many different patterns would appear & be picked up as the same visual to an observer [Fodor consistently refers to the perceiver as 'an organism' which is a little crazy to me]
perception involves 'betting on the most probable interpretation of sensory data' (Gregory), ie. perception itself involves inferences - "what mediates perception is an inference from effects to causes"
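Gregory's 'betting' metaphor is often glossed in Bayesian terms. A toy sketch of that gloss (all numbers and names are my own invention for illustration, not from the article): one proximal stimulus is compatible with several distal causes, and perception 'bets' on the cause with the highest posterior, with background knowledge playing the role of the priors.

```python
# Toy Bayesian gloss on perception-as-inference (all numbers invented).
# A proximal stimulus underdetermines its distal cause; perception "bets"
# on the candidate cause with the highest posterior probability.

def posterior(priors, likelihoods):
    """P(cause | stimulus) for each candidate distal cause."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

# Two distal causes could produce the very same retinal image:
priors = {"3d_object": 0.9, "flat_drawing": 0.1}       # background knowledge
likelihoods = {"3d_object": 0.5, "flat_drawing": 0.5}  # stimulus is fully ambiguous

post = posterior(priors, likelihoods)
best = max(post, key=post.get)
print(best)  # the "bet": with an ambiguous stimulus, the priors decide
```

Since the stimulus is equally likely under both hypotheses, the posterior just reproduces the priors - which is the point: what the perceiver "sees" is settled by background knowledge, not by the sensory data alone.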
this seems to be true; so psychology provides a direct argument that observations are always inferential, ie. not theory neutral
so if the world is something like a Necker cube - all ambiguous stimulus, could always be interpreted differently - how can we ever say an effect has this univocal cause? (without inferring it) - that is, if perception involves problem solving, how are problems ever solved?
the answer is that we simply do infer it - from past information... [we're hitting on the problem Merleau-Ponty brings up here - use of memory, etc.]
so sensory information is interpreted according to the observer's 'background theories'
if perception is usually true of something that is because our theories about it are true
following a Holmes metaphor, "many projections, if you like, of possible criminals onto actual clues." ... "The clues underdetermine the criminal, but the clues plus background knowledge may be univocal up to a very high order of probability." The trick is having the right background information.
he mentions Kuhn & Goodman in connection w/ this argument
Fodor argues however that the evidence doesnt really say this
while perception is interpretive & contextual, it is also 'bullheaded & recalcitrant'
he explains the Müller-Lyer illusion: the <--> line is seen as shorter because its interpreted as closer, and >--< as larger because its interpreted as further away; we apply size constancy as though looking at objects at different distances & hence the lines seem to us different sizes (ie. we think we're looking at something 3 dimensional) - our interpretation misguides us; it is highly attested but less often in children, who have had less time to develop 3d perception
this is usually cited as an example of how background information affects our perception; the background information here is a complex understanding of 3d objects and their 2d projections
Fodor's challenge: everyone by now knows about the Müller-Lyer illusion. Why isnt our perception penetrated by THAT background theory? Why doesnt knowing the lines are the same length make it look like they are?
In fact, why doesnt knowing a drawing is 2d prevent us from perceiving it as a 3d projection in the first place?
This makes it seem that how the world looks to us is unaffected by cognition
Fodor notes that this is true of many things in this sort of experimental data, not just this illusion
[My thought here is that certain cognitions might be more immediate or work faster, eg. spatial reasoning intervenes before memory]
In a footnote Fodor mentions that Jerome Bruner brings up this point & these examples but writes it off quickly - too quickly for him. He considers the problem w/ 'New Look psychological theorizing' to be failing to distinguish 'how much of what you know actually does affect the way you see'
so there are two conflicting facts here: a known plasticity of perception & a known implasticity of perception. The question is, how can a theory of perception accommodate the existence of both?
[Fodor's argument from here on out will probably rest on the strength of this argument for implasticity. I wonder if there's been any more enlightenment on this from perception science in the last few decades?]
Fodor argues this: to get from a 'cognitivist' view of perception to an argument that observations are theory-dependent you have to argue not just that perception is problem solving engaging background information but that perception has access to ALL the background information (or "arbitrarily much" bg information)
So there are really two questions: whether perception is problem solving/inferential, and whether its "comprehensively penetrated by background beliefs" (ie. whether it can still be theory-neutral)
So Fodor wants to arrive at a via media between Granny and Bruner: observation IS inference, but that "there is a radical isolation of how things look from the effects of much of what one believes"
ie. observation is inference, but this inferential-observation IS THEORY NEUTRAL
so it doesn't follow that scientists who accept different theories observe phenomena differently [this argument makes sense to me & answers our hesitation about different types of reasoning; its 'theory neutral' in the sense of 'a scientific theory' or some such]
what might psychology of perception look like if perception is both inferential & theory neutral?
perceptual psychology makes a few distinctons:
between perception and cognition - which they treat as contiguous; and between perception and sensation - which they treat as strongly divided
sensations are the mere stuff that we actually lay our eyes on, which cognitive processes - the perceptual processes foremost - set about organizing ('assign probable distal causes to')
sensation is noninferential & responds exclusively to the Outside; perception is 'both inferential and responsive to the perceiver's background theories'
perception can be inferential because my background theories supply the premises for inference; sensation doesnt have access to my theories & cant be inferential
so if you want to 'split the difference between Granny & New Look' you need to postulate a 'tertium quid' -- a psychological mechanism 'which is both encapsulated (like sensation) and inferential (like cognition)': the contradiction between inference and encapsulation [which I take to mean 'access to the Outside'] is resolved by assuming our perception's access to background information is 'sharply delimited'
he says he develops this properly in Fodor 1983
but to briefly treat what these 'modular' perceptual mechanisms might be like:
perception might assign sentence tokens to sentence types
two reasons this is plausible: "its obviously true" (lol), and... in order to understand what someone says you need to know the 'form of words' uttered, and 'assigning an utterance to a form of words is to assign a token to a type' [I'll take your word for it Jerry]
lets assume theres a psychological mechanism for this -- a 'parser'
the parser takes 'sensory representations' of utterances and makes 'representations of sentence types' (so: acoustic representations in, 'linguistic structural descriptions' out)
the assumption that there are 'psychological mechanisms' and that if they do exist they are 'functions from one form of representation to another' is not given, but it reflects the current psychological theories that we've been dealing with
according to empirical evidence he 'wont bother us with' parsing has 'all the properties that make' psychologists say perception is inferential [this actually seems to be a transcription of a lecture, hence no citations]
the 'acoustic character' of an utterance underdetermines its 'structural description', ie. nothing about the sound of the word 'mankind' should make us anticipate that it is a noun, we can only infer that its a noun with reference to a background theory about language
so as a module... a parser for L contains 'a grammar of L' (eg. that 'mankind' is a noun) - it then infers from the acoustic properties of a token 'a characterization of the distal causes of the token', such as the speakers intention, ie. that the speaker uses it to name something...
so the information it can use in its inferences are: information about the acoustics of the token, and information about linguistic types in L 'and internally represented grammar' -- and nothing else. This 'closure condition' makes the parser modular
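The closure condition can be sketched as a toy contrast (my own hypothetical illustration, with invented names and a one-entry 'grammar' - not Fodor's formalism): the modular parser is a function whose only inputs are the acoustic token and its internally represented grammar, so the perceiver's other beliefs simply cannot enter its inference; the New Look parser takes the whole belief set as an input.

```python
# Toy contrast between a modular and a "New Look" parser (hypothetical
# sketch of the closure condition; names and grammar are invented).

TOY_GRAMMAR = {"mankind": "Noun", "runs": "Verb"}  # the internally represented grammar of L

def modular_parse(token, grammar=TOY_GRAMMAR):
    # encapsulated: acoustic token + grammar in, structural description out;
    # there is no parameter through which other beliefs could enter
    return grammar.get(token, "unknown")

def new_look_parse(token, grammar, background_beliefs):
    # unencapsulated: anything the perceiver believes can reshape what is 'heard'
    token = background_beliefs.get("expected_word", token)
    return grammar.get(token, "unknown")

print(modular_parse("mankind"))                                           # Noun
print(new_look_parse("mankind", TOY_GRAMMAR, {"expected_word": "runs"}))  # Verb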
the New Look parser, contrarily, can bring in any information that the perceiver has
[we arrive a little at my proposal earlier, that the spatial theory applies 'faster' than other types; Fodor instead believes that the parser's access to anything but (in that case) spatial theories is limited by something intrinsic to the parser itself. It occurs to me that we could decide on this empirically: if you could stop someone from accessing their spatial reasoning, via some kind of 'Russian Sleep Experiment', would they begin accessing other bg theories and update their vision to it, or would everything just appear flat?]
The New Look parser would therefore not see anything it doesnt expect to see; it could never tolerate new information, because it would always turn every sensation into something consistent with its background theories about the world.
what the Modular parser means is that people with very different theories can nonetheless see the world in the same way, as long as their parser doesnt have access to theories on which they disagree
secondly, it means that there can be theory neutral observation language ("much current opinion to the contrary notwithstanding") -- so there is a sense where some terms (like 'red') are observational while others (like 'proton') arent.
his gambit here -- and it is a gambit -- is that only the properties of an object that are explained by the theories that the parser accesses are observational, ie. when I look at something red, my colour parser can access a theory of redness and infer things about it, so I am observing its redness; but if I say that its reflecting(?) electromagnetic radiation, I'm not making an observation, I'm inferring from a theory, because my colour parser does not have access to information about electromagnetism & so cannot observe it
[an interesting manoeuvre, Fodor rescues a scientific realism from Kuhn etc by displacing science even further than them]
giving more cases: the parser has access to certain observations about an utterance, eg. 'an utterance of a sentence' ... 'an utterance of a sentence that contains a word that refers to trees', etc., - these are observations bc they are properties the parser can access. But it does not observe the property 'being uttered to deceive John' or 'being ill-advised in the context...'
one can distinguish between observable properties and sensory properties in this way: the 'sensory properties of utterances are plausibly all acoustic and almost all inaccessible to consciousness'
[this would seem to make all music inaudible]
his third point: the modular parser is 'synchronically' impenetrable by our bg knowledge, ie. in this instance. So just because I know about the Müller-Lyer when I look at it doesn't dispel the illusion. But might it be 'diachronically' penetrable, ie. could I learn how to incorporate more of my bg information into my parser?
it can - denying this is crazy, because it would mean a child could never actually learn a language, etc.
however this also doesnt have complete access, ie. not any kind of teaching influences my parser
the parser is only diachronically penetrable within strictly ("perhaps endogenously" ie. physiologically) defined limits; we still perceive the sun as moving despite our 'Copernican prejudices', and it may be that no educational paradigm could change this
"In this case, our agreement on the general character of the perceptual world might transcend the particularities of our training and go as deep as our common humanity"
So returning to epistemology, Granny will have to 'give a bit' and "distinguish between observation and the perceptual fixation of belief. It is only for the former that claims for theory neutrality have any plausibility."
While the Müller-Lyer doesnt LOOK different to us, we still dont BELIEVE that the first line is shorter; our background information is accessible to 'the mechanisms of belief fixation'
belief fixation, unlike observation ('the fixation of appearances'), is 'a conservative process', using *everything* you know.
think of it this way: the parser module is proposing perceptual *hypotheses* which are 'couched in a restricted (observational) vocabulary and are predicated on a restricted body of info' - then we compare it to the rest of our bg theory and our belief is 'consequent on' this background theory
what this means is that our perceptual beliefs are observational only in the 'first approximation', because afterward they have to be regarded with respect to our whole background theories. So the module's limits determine 'what you would believe about the appearances from the appearances alone', which by no means accounts for all of the perceptual beliefs you might fix [I have a hard time thinking of an example for this; maybe looking into a microscope shows me a bunch of shapes in the first approximation and, then, subsequently, I can believe that I saw a microbe; its possible that it doesnt even need to be so curious - do I see the smooth surface of carapace in the first approximation and then believe I saw a cockroach?]
so the theory-neutrality of observation is not 'infallible'
its important here that our perceptual access to observation doesnt necessarily bear on our fixation of beliefs. While it would seem we always know our own perception of things, it doesnt seem to be true. The explanation for the Müller-Lyer illusion [as of this article] hinges on us seeing them, in the first approximation, as 3d objects, which is not how any of us believe we see them; and this explanation was not accessed by introspection but by experiment etc.
An imaginary reader objects: what is this heavily attenuated version of observation good for? havent you given away everything the proponents of 'theory-ladenness' asked for?
he quotes Hanson, one of the main proponents of theory-ladenness, who objects to saying 'all normal observers see the same things in x, but interpret them differently' as unable to explain controversy in research science
rather, for Fodor, 'given the nondemonstrative character of empirical inference' its no puzzle that 'there should be scientific controversy'
the epistemological problem par excellence is to explain scientific *consensus*
how is it possible that given the degree of underdetermination of theory by data that scientists ever agree, and agree as much as they do?
for Fodor, the consensus is down to the theory-neutrality of observation
because we all see the world the same & 'independent of our theoretical attachments', we can see when our predictions arent working out
"We admit, Granny and I do, that working scientists indulge in every conceivable form of fudging, smoothing over, brow beating, false advertising, self-deception, and outright rat painting - all the intellectual ills that flesh is heir to ... Nevertheless, it is perfectly obviously true that scientific observations often turn up unexpected and unwelcome facts, that experiments often fail and are often seen to do so, in short that what scientists observe isn't determined solely, or even largely, by the theories that they endorse ... It's these facts that the theory neutrality of observation allows us to explain."
In the end Fodor endorses a sort of plain-spoken realism: science is true because it describes objective reality, and to say this realists will have to hold onto theory-neutral observation.
Text
Deleuze and Empiricism Bruce Baugh, written 1993, read 10/06/2020 - ??
a short article which characterizes Deleuze as an empiricist.
1. intro
Deleuze was an empiricist, but wanted to meet Hegel's challenge to empiricism - so rather than arguing all knowledge is generalized from experience, he wants to "search for real conditions of actual experience" - he does not provide foundations for knowledge claims [Hume: Empiricism & Subjectivity, and an article on Hume in Histoire de la philosophie (Paris: Hachette, 1972-73) - find this!]
Deleuze takes as his starting point that "there is a difference between real difference and conceptual difference”
this difference is in "the being of the sensible" [difference & repetition]
2. non-conceptual difference
the 'naive' statement:
the concept makes 'repeatable experiences' possible, experiences which are identical to each other
the sensible is 'the actuality of any given experience' - something sensible can never be repeated, so there is always difference between actualisations
the sensible 'as a specific actualization' always falls outside the concept
the concept 'determines the equivalency among actualizations', so they are all actualisations of the same concept, while the sensible grounds their difference
[this is a somewhat straightforward statement of particular vs abstract entities, and Deleuze seems to say that abstract entities, as generalities (every red thing is the same 'red', etc.) are never instantiated in particulars, at least not fully; ie. a nominalist view -- although perhaps what is considered significant here is the ordering of the world into the 'different' (each particular different from another) and the 'repeatable' (these particulars all instantiate the same thing) in the first place]
but...
if this were all, the sensible would just be a platform for actualizing the concept - our representations are just determined by the concept [as it is in Sellars, 'theory-laden observation', etc.] (so the sensible isn't noumena, its 'sense-perception'?)
in this case the sensible is 'explained by' the concept, ie. 'a priori conditions of experience', and therefore the a priori that constitutes knowledge
so whatever particularities of a representation aren't covered by the concept are just extrinsic & accidental, as are sensations themselves [don't quite understand this - wouldn't we require an a priori concept to grasp them in the first place?]
Baugh offers a justification in parentheses: 'since..' other qualitatively similar sensations can be 'synthesized into a representation' that would be equivalent 'from the standpoint of knowledge' [confusing to me - different things are synthesized into the same representation? are representations repeatable?; I might need to read more about 'representations']
brief aside~
Representation, in the Oxford Companion to Philosophy ‘T.C.’, written 1995, read 10/06/2020
I couldn’t find a good explanation online so I took the opportunity to wipe the dust off of a physical book I had upstairs, using it for the first time in ten years. Trying to read it and type on a screen almost made me sick - perhaps I should have looked harder for an online explanation...
everything that represents is a representation, so... words, sentences, thoughts and pictures are all representations
representations can represent something that doesn't exist (lets say the word 'unicorn') - but all representations nonetheless do represent *something* [this problem isnt resolved in entry]
so we might say: a pictorial representation represents something by resembling it - but this encounters problems. Resemblance is reflexive (everything resembles itself) and symmetric (identical twins resemble each other), but representation is neither.
Resemblance doesn't guarantee representation: this newspaper does not 'represent' all the other similar issues... [Nelson Goodman argues resemblance is not relevant to repr., Malcolm Budd claims he can defend some resemblance theory of pictorial representations...]
Words obv dont resemble the things they represent, but we might see words as representing by linking to mental pictures
but pictures do not represent intrinsically... Wittgenstein gives a fun example: a picture of a man walking uphill could equally be a picture of a man sliding downhill. Nothing about the picture itself tells us its a picture of the former or the latter.
So we have three choices:
the picture represents by virtue of being interpreted, so representations represent by being interpreted (not resembling something)
mental pictures 'self-interpret' - in this view representations are primitive & unexplainable
representations represent everything they resemble, so one representation represents countless different things - this too makes representations unexplainable
the 'mental pictures' theory also encounters problems, eg. what does a 'prime number' look like to my mind? how could 'we'll go to the beach next sunday' be a pictorial representation?
so there are many sorts of representation which each require their own explanation
recently representations have become very significant in philosophy of mind & there is hope that neuroscience & psychology could uncover a naturalistic explanation of them
back to Baugh~
so a representation isnt anything special - just anything that refers. Its most relevant to knowledge in the form of 'mental representations'... Deleuze seems to endorse a representation-centric theory of knowledge, where we only come to know things through our mental representations of them (I think this is quite common)
so if the naive account holds, similar particulars can be synthesized into a single representation, eg. several bluebells into one mental picture of 'the bluebell' (this being different from an abstract entity, eg. 'bluebells' as a class?)
so there are a few relevant steps: noumena, then I have a sensation of noumena, then I make a mental representation of that sensation (and might synthesize similar sensations together), and I finally know this representation
[I'm reminded of Ayer discussing Hume here, where impressions (ie. sensations?) must be 'brought under concepts' for us to recognize them by associating them with one another -- but this is a slightly different theory, ie. we have direct acquaintance with noumena and make concepts... This is perhaps really similar to some rationalist who stressed the a priori w/r/t sensation, who I'm not aware of - perhaps Kant! -- this would be why Baugh is careful to say 'a priori *conditions of experience*'] (reading this summary is probably much harder than reading Difference & Repetition)
so basically, a representation can be different from the concept (universal), and this is considered something accidental or extrinsic to it, ie. that this bluebell is shorter than the other is just accidental & its still a bluebell, I know it because it is a bluebell to me. The same operation plays out between sensation and representation (I don't really understand how) [its possible he actually means to explain the same thing in two ways, rather than describe two operations at different levels, ie. in order to create a representation (which 'leans on' the a priori concept) I have to discard the particulars of the actualized sensation and grasp only what is general to it, ie. I cannot know this bluebell, only what is 'the bluebell' in this bluebell]
Baugh describes this view as 'the Kantian challenge to empiricism' (nailed it); he says there is 'an even greater Hegelian challenge' lurking behind; for Hegel, the particularities of the sensible are not discarded as accidental, they are instead 'the self-articulation of the Idea', elaborating itself in particular form
for Hegel the concept already contains its particular empirical manifestation, that the two are together the way 'form' and 'content' are in a painting - the form is a 'synthetic organization' of the content
(so the concept is the 'content' and each particular its 'form' - just a particular way of organizing the concept)
Deleuze objects that even if the concept includes empirical content, it cannot already include this actuality (particular)
so for Kant, the empirical is 'what the concept determines would be in a representation if it occured' (so, the flowers of the bluebell would have to always be blue); for Deleuze, the empirical is this actuality itself (the bluebell before me itself), not 'the possibility of existence indicated by the concept' [Baugh writes: see pg 36 of 'Expressionism in Philosophy'; reading this page I dont really understand how its related... Perhaps "substance is once more reduced to the mere possibility of existence, with attributes being nothing but an indication, a sign, of such possible existence." - he's summarizing Spinoza's criticism of Descartes, but we might assume approvingly. Attributes are maybe the 'empirical', the particular - Spinoza argues against treating Substance as a 'genera' of which the attributes are 'species', [ie. where there are attributes 'of' substance(?)]; are we to take it that substance is a 'sum of attributes', ie. just empirical reality itself? if so, as Substance empirical reality is undivided, there is no distinction between things in it... (we're back to our point about the ontological equality of all divisions of noumena, ie. the tennis ball, half a tennis ball, etc.; in this case 'attributes' are proper to me, substance has no attributes because there are no distinct 'things' in it, its *just* substance, things are only distinct to me...) - but this seems to be the opposite point than Deleuze's, because for him everything empirical is different, and we make things the same by seeing the concept in them]
Against Hegel we argue that the difference between two performances of Beethoven's 7th Symphony cannot be included in the Idea, because the content (what is performed) is identical but the actual performances differ [is this a good argument? wouldn't the idea/content be 'a performance of the 7th Symphony', and the form be 'each particular performance'?]
for Deleuze the empirical is the difference between each actual performance; this difference makes the repetition of the same work possible
empirical actuality is therefore not possibility -- it is 'the effect of causes' ... 'which are immanent and wholly manifest in the effect through which they are experienced', as Spinoza's God (substance) 'is immanent in his attributes' [now the connection makes sense]
therefore, (here's the juice) "instead of being explicable through the concept ... empirical actuality, 'difference without concept'... [is] expressed in the power belonging to the existent, a stubbornness of the existent in intuition" [cites Difference & Repetition pg. 23]
difference is a property of empirical reality itself, ie. each particular/actualization is different from the others, & it is the concept that organizes them into things which are the same as each other, ie. repeatable entities. each bluebell is already different from the other bluebells, the concept organizes them and declares that they are all bluebells, ie. have some 'being-bluebell' which repeats in them. [I feel like this doesn't overcome our objection to empiricism, ie. what makes this particular the particular? ie., what makes the tennis ball a particular and not half the tennis ball, the ball + some air, etc.? More generally: does it overcome Sellars, 'theory-laden observation', etc.? ie. do we really get the non-foundationalist empiricism promised?]
actually, is Deleuze talking here about noumena or sensation? earlier Baugh says Deleuze "locates difference in the 'being of the sensible'." this might change how we see it, ie. if noumena is undifferentiable stuff - not different or similar in any way - which sensation picks out as 'different stuff', and which are organized into representations which assume similarities between the 'different stuffs'... this makes sense to me.
I think this is the case: "[difference] is first given in sensory consciousness, a receptivity which grasps what comes to thought from 'outside' (DR 74)"
so 'empirical actuality' does NOT = empirical reality/noumena; empirical actuality = the world as grasped by sensation; actualities =/= particulars just-so, but particulars as only existent in sensation
is this a 'third way' between foundationalism and 'theory-ladenness'? that noumena does not yet have particulars, but that my sense-perception organizes it into particulars, but this organization is *not* yet inscribed by the concept (theory, etc) - perhaps instead by my perceptive apparatus, the retina and so on? - the concept inscribes only the representation I make of this sensation. Sensation is a sort of passage between noumena and mental representation, perhaps the organization of the 'hailstones on the window' into associations, and their being 'brought under concepts' is their becoming representations, as Ayer says of Hume?
[calling this empiricism feels a little like splitting hairs by now, esp. if there isnt a foundationalist account of knowledge waiting - I'm not sure that Deleuze did call himself an empiricist, though]
Hegel all of a sudden makes our argument about the tennis ball! or something like it.
Hegel believes that the empirical ['pure actuality'] is 'empty' if it is not organized by the concept; every 'this' is as much a this as any other (ie. tennis ball, half a tennis ball...), so there is only 'indeterminacy'. but he takes this as a criticism of the point, ie. empirical reality cant exist without the concept because it would be empty, a 'negative universal', which cannot have being, is nothing. [This goes for both noumena & sensation; ofc Hegel feels that everything in nature is part of the Idea and so on]
This is where Deleuze disagrees with Hegel. Deleuze "rejects the epistemological model on which Hegel's argument is based", that "whatever does not make a difference to knowledge makes no difference" -- rather "the empirical must be thought even if it cannot be known, at least if knowledge is regarded as knowledge of phenomena" [does this line defeat my earlier conjecture about the empirical not being noumena, ie. the empirical is here not phenomena - but does that mean it is noumena, or simply not yet phenomena?]
for Deleuze concepts are possible because of empirical actuality, in two senses:
actualities are "the condition of the application of concepts over different cases & so for universality in general" (different actualities are a platform for universal concepts)
it is the "real condition of experience" (I'm guessing: what we really experience; whatever we can experience is empirical actualities)
page 4, btw
NOTE: update 15/06/20 I think the bluebell example I use here may have been uninstructive. For Deleuze the sensible that is difference-in-itself is not objects - it is things like ‘substance’, ‘matter’, ‘energy’ (in their scientific uses); MATTER is difference-in-itself, which we coordinate into repeatable objects via the concept
3. multiplicity and externality
taking a siesta...
Text
"Genocide differs from other murders in having a category for its object. [...] For genocide to be possible, personal differences must first be obliterated and faces must be melted into the uniform mass of the abstract category."
— Zygmunt Bauman, "The Duty to Remember – But What?"
Text
Plantation Futures
by Katherine McKittrick (x)
written ?? read April 20
An essay on ‘plantation theory’ per George Beckford, which seems like a useful material enunciation of what would come to be called ‘the afterlife of slavery’. Reading mostly as a note on methodology going forward, and for its impressive bibliography.
The article opens with a discussion of the competing interests involved in the discovery of a burial of about 10,000 African slaves in New York; while the black community wanted to memorialize them, scientists wanted to study them. The research was apparently conducted in a disrespectful way until it came to be headed by a black researcher. “The tension between ethically memorializing this history of death and learning from it” was justified by the research opportunities for uncovering evidence of antiblack violence, physical info abt slavery, uncovering African bloodlines... After the research the bodies were memorialized at a National Park whose website describes it as an effort to “return to the past to build the future.”
Plantation Time
McKittrick says that the burial brushes up against the concept of the city as a place where “new forms of life become possible” (she relates it to a Stevie Wonder lyric, “living just enough / just enough for the city”)
The burial indicates the material connection between “slavery, postslavery and black dispossession” and the production of the city as a space, it “brings into the production of space and the cityscape ... the remains of blackness”
She gives this concept the name plantation futures - the “time-space that tracks the plantation toward the prison and the impoverished and destroyed city sectors”, essentially the way the modern city is grounded in the historical plantation materially & conceptually
Plantation Context
Text
Desert’s section African Roads to Anarchy
written 2011 - read April 2020
I want to keep notes of this section of Desert because I’m going to be reading more stuff relevant to Africa soon. It might be useful to compare
Anarchic elements in everyday (peasant) life
The author of Desert writes that Africa's international image is far from reality.
Much of the conflict in Africa is not caused by resource scarcity but resource abundance; the presence of resources causes powerful actors to battle for control over their extraction - the "resource curse"
They quote a passage from an African anarchist called Sam Mbah who describes how some societies in Africa still "manifest an anarchic eloquence" carried over from precapitalist periods
Why are these carried over in some parts of Africa? They quote Jim Feast writing for Fifth Estate who writes that in much of Sub-Saharan Africa, apart from resource-rich colonies, "imperial powers had only limited goals" in those areas, and for a long time there was "little penetration of capitalist agricultural forms or government into the interior"; much of Sub-Saharan Africa is "only marginally affected by the market" as a result, with a society based mostly around subsistence homesteading
Peoples without governments
The author describes smaller social groups which have effectively remained on the margins of national powers, continuing an anarchic way of life which predates civilization
The reasons for their existence range from historical coincidence (ie. ‘uncontacted’ societies) to an active resistance to incorporation.
Sometimes groups prevent incorporation by collaborating with the state where it would be necessary (which sometimes only amounts to symbolic agreements; “We’ll pretend you’re governing us, you pretend to believe it”); in some cases resistance to incorporation involves “a complex set of tactics including providing key functions, retraditionalisation, regular movement and manipulating the balance of competing external powers”
[They mention ‘maroon societies’ here but don’t elaborate; looking it up, it means societies formed by fugitive slaves - follow up on this?]
The author cautions that most of these societies have problems that we generally consider unacceptable: “some level of sex and age stratified power relations, a division of labour and sometimes rely on animal slavery” - but also cautions that “any overview of possibilities for liberty would be foolish to ignore them.”
Commons resurgent as global trade retracts
The author describes how, during independence, many single-party dictatorships took over which maintained power via patronage systems
The money for those patronage systems dried up during the Structural Adjustment era; this was one of a few reasons [alongside eg. the collapse of the Soviet bloc] that they were mostly replaced with multi-party democratic states which, nonetheless, are incapable of providing essential services (’failed states’)
This means that the essential services have increasingly come to be performed by non-state actors. One trend here is towards “civil self-sufficiency” as well as “women’s groups, trade unions, farmers associations and other grassroots networks” taking care of such functions without government oversight, ie. a return of the commons & an everyday life organized by mutual aid
This is one of many different responses to this retraction of state power/functionality (they also mention international relief organizations and China’s increased interest in the region), but it may be where the seeds of the promised ‘African roads to anarchy’ can germinate
Outwitting the state
The author briefly treats anarchists in Africa, saying that they are unlikely to determine the future of the continent but may prove significant “in emergent movements and struggles”
They say that while, in Sam Mbah’s words, “the process of anarchist transformation in Africa might prove comparatively easy” (the author is a little more cautious), much of it is also true of other areas of the world where self-sufficiency exists, either as something emerging or as a result of resistance to the enclosure of the commons
They subtly tie together the commons with ‘the wilderness’ which I think will be important later
They restate the central point: that while the social condition of the world changes in different parts of the world at different times as a result of climate change, “people will continue to dig, sow, herd and live” - and in many areas the commons might be reclaimed
[It’s significant to note here the author’s positive opinion of small agriculture - as they say, “Land is liberty!”]
fin
The relationship between colonialism, independence & enclosure - in particular those parts which escaped, resisted or exited the whole process - is a thread I’d like to follow (ie. with Sam Mbah and I.E. Igariwey’s ‘African Anarchism’ which Desert cites; is there much academic writing on this history? It’s very possible that there isn’t, considering the total disorder of research into marginal African societies prior to very recently)
Also: as we start to read more independence-era literature, we should keep in mind - who/what is being negated, excluded or abrogated in this creation of an independent Africa?
Text
Metaphysics Further Reading
14/12/19-??
Notes on the chapters indicated in the Further Reading sections of Loux’s Metaphysics, an elementary textbook on the subject.
Aristotle’s Metaphysics
After the introduction Loux says to read the first two chapters of Metaphysics A [book I], the first two chapters of Metaphysics Γ [book IV] and the first chapter of Metaphysics E [book VI] for a discussion of the nature of metaphysics. We’re going to read Sachs’ unusual but apparently reputable translation alongside the more traditional translation of Apostle.
In the introduction, Sachs argues that there is one kind of being, like the being a leaf has, and another kind which ‘actively keeps itself in being’, such as the plant. “The plant’s relation to its own being is active and self-maintaining ... and to that extent it displays being as such, or being as being.” The Metaphysics is devoted to understanding this being-at-work (traditionally: actuality)
...
Text
Patriarcha
by Robert Filmer
published 1680 (written by 1640), read 15/09/19 - ???
Filmer was, by all accounts, the most popular and influential political theorist in England in the 17th century. The seminal works of many major contributors to the political theory of that century - particularly Locke - were responses to Patriarcha. But he is not read today, really by anyone. He was the principal theorist of a tendency which would, by the next century, no longer exist anywhere: absolutism, and in particular the doctrine that Kings ruled by divine right. Most courses of political science or political philosophy in universities do not even mention Filmer: the only reading list that I found him on was an infographic originating from /pol/ which was structured from most socially acceptable (things like Hayek and Burke) to least (things like Hitler and Kaczynski): under the section ‘Reactionary Right’, Patriarcha appears at the very bottom.
I began reading out of curiosity but it became clear that it was a relatively complex text, and one that is both downstream and upstream of things important to us: thinkers like Tacitus and Machiavelli, and the theory of Sovereignty, respectively. So, notes. I always say I’ll try to keep my notes brief and never do; how about this time I promise to be thorough?
Chapter I: That the First Kings were the Fathers of their Families
Filmer opens by talking about an idea which contemporary political theorists believed in, which is that humans are “naturally endowed and born” with “freedom from subjection”, and that forms of rule only have power over them because they give them that power.
Often Hobbes and Rousseau are contrasted on a certain point about human nature: Hobbes believed that civilization was a necessary imposition because of the disastrous anarchy of man’s natural condition, while Rousseau believed (something like) man’s natural condition being good and peaceful and civilization creating problems, although he still affirmed the necessity of civilization in some sense. Anyway, both of these thinkers were later than Filmer, and both take as their beginning the very point that Filmer notes here, which Rousseau makes when he writes that “man is born free, and everywhere he is in chains.”
Filmer says that this is a new idea, and not something originating from the bible or the early church fathers, and hints that it was devised by the Jesuits!
He gives a logical conclusion to the idea: that if the people gave the Prince his power, they can take it away. He considers this a dangerous idea.
In fact, Filmer rejects the very idea that Kings are subject to the laws of their country, and when other theorists (he names ‘Buchanan’ and ‘Parsons’ - two names I’ve never heard) criticize the sovereign for breaking the law he considers it an error.
Equality is mentioned (just like that!) in connection to natural liberty, when he mentions their position as “the natural liberty and equality of mankind.”
Anyway, he comes around to saying: it’s time someone takes this seditious idea of natural liberty to task! (An early appearance of ‘say what you’re going to say in the introduction’, by the way!)
Filmer enumerates a number of ‘cautions’ he’s giving himself for the discourse.
First he spends a paragraph going over how it isn’t for him, nor anyone else, to pry or meddle into the affairs of the state, “the profound secrets of government”, which he refers to as arcana imperii. “An implicite Faith is given to the meanest Artificer in his own Craft,” he writes - true enough! - and so even more faith ought be given to the sovereign, who is “hourly versed in managing Publique Affairs.”
Arcana imperii (literally ‘secrets of rule’, more semantically ‘state secrets’) is an expression from Tacitus which has gone on to have a certain currency in political theory (see here), apparently appearing as recently as Agamben, and having been appropriated earlier than Filmer, by “Botero and Clapmar” (who?). In Tacitus, arcana denotes secrets which ought to be kept secret.
The end of this paragraph is confusing to me, so I’ll note its location (here). The gist is that people ought to obey the sovereign, and he relates this to “render unto Caesar what is Caesar’s...”
In a sentence which goes “...knowledge of those points wherein a Sovereign may Command...”, he has a footnote - attached to the word may ! - which leads to a paragraph weighing rule and tyranny. For Filmer, a King who rules by his own laws becomes a tyrant, "yet where he sees the Laws Rigorous or Doubtful, he may mitigate and interpret.” I’m going to note the location of this footnote too (here), because it is actually a very clear and very early exposition of the Non-Derivative Power of sovereignty, and states precisely what Carl Schmitt means by “the leader keeps the law”.
His second caution is that he isn’t going to dispute the “laws or liberties”, only inquire whether they came from Natural Liberty or from “the Grace and bounty of Princes.” Obviously, Filmer will come down on the latter position: that any liberty one has is the benevolent gift of the Sovereign.
He says that the greatest liberty in the world is to live under a monarchy, and that anything else is Slavery, “a liberty only to destroy liberty” - although this whole paragraph is plainly an apology for writing a political text, which was surely somewhat dangerous back then. While this is the official ideology that everyone had to believe (even Rousseau makes the same gestures, framing his dialogues by saying ‘this is all what I would say if I didn’t live under a benevolent rulership...’), it’s actually clearly a bit more extreme than even Filmer is willing to commit to.
His third caution is that he isn’t disparaging the people he criticizes, simply adding on where there are gaps in their thought, and so on. “A Dwarf,” he writes, “sometimes sees what a Giant looks over.” He briefly summarises his idea about the cause of their error: that in order to ensure the authority of the Pope, they placed the People above the King. I’m not sure if that’s how Buchanan saw it! Anyway, this is how he explains that the two major factions at the time were the “Royalists” and the “Patriots” - the error, for Filmer, is that people had come to believe that one could be loyal to one’s country while traitorous to the King. (True enough - isn’t patriotism always a kind of category error?)
Cautions set aside, he begins the critique proper. He starts by quoting Cardinal Bellarmine (now a saint!), which we’ll reproduce:
Secular or Civil Power is instituted by Men; It is in the People, unless they bestow it on a Prince. This Power is immediately in the whole Multitude, as in the Subject of it; for this Power is in the Divine Law, but the Divine Law hath given this Power to no particular Man— If the Positive Law be taken away, there is left no Reason, why amongst a Multitude (who are Equal) one rather than another should bear Rule over the rest?— Power is given by the Multitude to one man, or to more by the same Law of Nature; for the Commonwealth cannot exercise this Power, therefore it is bound to bestow it upon some One Man, or some Few— It depends upon the Consent of the Multitude to ordain over themselves a King, or Consul, or other Magistrates; and if there be a lawful Cause, the Multitude may change the Kingdom into an Aristocracy or Democracy.
Filmer comments that this is the strongest defence of Natural Liberty that he’s ever seen, and that’s why he selects it for criticism: after all, as he said earlier, it’s usually never a position argued for but simply taken for granted. Filmer now begins a fairly fascinating sequence of deducing things ‘backwards’ from this quote and examining what it presupposes, in a way that very closely reflects the way I approach argument (this is the reason I decided to take notes on this text)
“First,” Filmer writes, “He saith, that by the law of God, Power is immediately in the People”, and therefore the political system that God gave the world is Democracy! because Democracy has no meaning but power belonging to the people. Therefore, not just Aristocracies, but also Monarchies are against God’s will, who rightly gave the people Democracy. (This is a sort of reductio ad absurdum, I think - today it seems quite a natural thing to say!)
We want to object to Filmer here by saying that Bellarmine does not necessarily refer to Democracy (of course, he explicitly refers to Democracy as something other than the ‘Power and Law of the Multitude’), but it’s not quite as easy to dismiss as one would think initially. Bellarmine does not argue for a kind of Hobbesian state of nature here, because in Hobbes’ anarchy there are surely no Powers, nor a Law. For Bellarmine, God gave men powers and laws. I would like to look more into what Bellarmine meant by this, that he perhaps thought of a prepolitical power, prelegal law... but there is surely some basis for Filmer equating it with Democracy. That said, it does not necessarily follow that investing those powers and laws in a form of government should be against God’s will.
Second, Filmer says, the only Power that men have in Democracy is to give their power to someone else, and therefore they really do not have any power. (Ho hum!)
“Thirdly,” Filmer writes, Bellarmine says “that if there be a lawful Cause, the Multitude may change the Kingdom.” Filmer asks: who will be the judge of whether something is lawful or not? It would be the Multitude. Filmer considers this “pestilent and dangerous.” (Again, surely quite natural today.)
Now Filmer quotes Bellarmine making what he feels is his only argument for the existence of Natural Liberty. Bellarmine writes: “That God hath given or ordained Power, is evident by Scripture; But God hath given it to no particular Person, because by nature all Men are Equal; therefore he hath given Power to the People or Multitude.”
Filmer now pulls out another quote from Bellarmine to refute the position just quoted, which he is proud as punch about, calling it out right before he does it and also including it in the chapter summary at the beginning (”Bellarmine’s Argument answered out of Bellarmine himself”).
The promised passage goes like this: “If many men had been together created out of the Earth, they all ought to have been Princes over their Posterity.”
Take that, shitlibs! Absolutists: 1 Republicans: 0! See you in hell Milton!
Anyway, Filmer takes this to be true: that Adam, and the succeeding patriarchs, had authority over their children: “by right of father-hood”, they had “royalty over the children”, in fact.
So children are subject to their parents, and parenthood is the “fountain of regal authority”, and this authority was bestowed by God himself. The argument promised in the chapter title begins to take shape: the first Kings were Fathers of their Families.
God also specifically assigned it to the eldest parents, which I think becomes important later.
He ‘saith’: Adam had dominion over the whole world, a Right granted him by God, and that Right was passed down to the Patriarchs. He gives what this Right is specifically, using biblical examples of authority: Dominion over Life and Death, the ability to make War, and to Conclude peace. (All of this is quite fundamental to later theories of sovereignty, especially critical ones: biopower! necropolitics! Indeed, Filmer refers to them as the “chiefest marks of Sovereignty”)
Although his history is Biblical and not the kind of historic epistemology we tend to use, as far as we’re concerned, Filmer’s argument is correct. At least for some parts of the world. I need to read more about stone & bronze age sovereignties globally, but my reading on ancient Greece absolutely confirms this: the first form of authority in that part of the world that we have record of was that exercised by a familial Patriarch who governed over small kinship villages, setting the law (which is spoken of in terms of having ‘power over life and death’), and declaring wars. Eventually there would emerge a ruler who was largely symbolic but who, for this or that reason (not even political reasons, but often reasons related to the development of the productive forces or of national security), would appropriate more and more power from the Patriarchs while the social groups based on kinship ties lost coherence.
Filmer’s argument here is not quite a naturalistic fallacy because he does not argue directly that it is right because it was so. Rather he uses history here to say that liberty is not natural to men, which he feels most Republican theories of government presuppose. Monarchy is argued to be good only indirectly, so the fallacy only happens ‘between the lines’ of the page.
Text
Metaphysics
by Michael J. Loux
an elementary textbook: notes are very brief
Introduction
He goes a little over the definition of metaphysics, which changes with time, so it’s also a bit of a history of metaphysics.
Aristotle called it just another ‘departmental discipline’ like (Loux actually doesn’t list any others - I’m assuming things like) geometry, politics... in this case, one that studies First Causes; central to it, God or the Unmoved Mover
He also calls metaphysics the study of being qua being, which means that it is not a departmental discipline but a study of everything in general
He notes that there might be confusion - that it is simultaneously a study of everything and a departmental discipline of a particular thing - but says that it is only a seeming contradiction: it’s a departmental discipline that simply finds being qua being part of its own department. Or something.
This definition of metaphysics persisted among the scholastics etc. up until the rationalists, whose metaphysics was quite different.
The rationalists (Spinoza, Leibniz...) accepted being qua being as the project of metaphysics (which they called general metaphysics) but included into it things like the nature of change/things that change (which they called cosmology), being “as it is found in rational beings” (which they called rational psychology), and being “as it is exhibited in the Divine” (which they called natural theology); all of these are considered special metaphysics.
These forms of metaphysics are today not really seen as part of metaphysics but part of other disciplines: natural theology belongs to the philosophy of religion, while rational psychology belongs to the philosophy of mind and the ‘theory of action’ (the latter of which covers free will). Loux doesn’t give a correspondence for cosmology
The book focuses only on the general metaphysics, accordingly.
Loux also says there is a split in modern metaphysics between Kantian metaphysics and Aristotelian metaphysics. Kant argued that we cannot actually describe the world as it is, so metaphysics is not possible; but we can describe our conceptual frameworks for thinking about the world, which is the real task of metaphysics. So Kantian metaphysics describes our mental/sensory structures and not the world, whereas Aristotelian metaphysics attempts to describe the world.
Loux says that the Kantian argument is not convincing (just like that!) because if we can’t describe the world, we can’t describe our frameworks and structures either. Loux is, ofc, a specialist in Aristotle - he discusses Kant only in a sarcastic tone throughout the entire book.
Anyway, this corresponds to the Realist [Aristotle] and Anti-Realist [Kant] debate
The Problem of Universals I: Metaphysical Realism
There are two positions on the problem of universals: realists and nominalists
Realists say that when two things “are similar or agree in attribute”, then there is “some one thing” that corresponds that the two have in common (eg. two things that are red are the same red, and redness is a thing that exists)
These things are called Universals, which “encompass the properties things possess, the relations into which they enter, and the kinds to which they belong”
Nominalists deny that universals exist
Loux says that for realists, “subject predicate discourse” and “abstract reference” support the existence of universals: without universals, we can’t explain the truth of any subject predicate sentence or of any abstract reference
Realism and Nominalism
Loux says that we classify virtually everything: by colour (red things and yellow things...), by shape (triangles, circles...), by kind (elephants, oak trees...)
“Although almost everyone will concede that some of our ways of classifying objects reflect our interests, goals, and values, few will deny that many of our ways of sorting things are fixed by the objects themselves.”
Loux says that things ‘come that way’: we dont call some things circular and some things square simply arbitrarily but because they really are circular or square, “and our language and thought reflect these antecedently given facts about them.“
So there are objective similarities between things prior to our classification of them: this is a “prephilosophical truism”, but which is the basis of some philosophical theorizing
Such as: say lots of things are yellow. Is there some more fundamental thing that, for lots of things to be yellow, has to also be true? That there is a “very general type” where an attribute agreement between two things obtains only because the general type obtains?
This is the ‘theory of forms’ per Plato’s Parmenides
“What is being proposed here is a general schema for explaining attribute agreement. The schema tells us that where a number of objects, a . . . n, agree in attribute, there is a thing, φ, and a relation, R, such that each of a . . . n bears R to φ, and the claim is that it is in virtue of standing in R to φ that a . . . n agree in attribute by being all beautiful or just or whatever.“
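The schema quoted above can be put in quantifier notation - this is my own paraphrase of Loux's sentence, not notation he uses:

```latex
% Realist schema for attribute agreement (my paraphrase of Loux's prose):
% if objects a_1 ... a_n agree in attribute, there exist a universal phi
% and a relation R such that each a_i bears R to phi.
\[
  \mathrm{Agree}(a_1,\dots,a_n)
  \;\rightarrow\;
  \exists \varphi \,\exists R \;\;
  \forall i \in \{1,\dots,n\} :\; a_i \mathrel{R} \varphi
\]
% plus the explanatory claim: the a_i agree in attribute
% *in virtue of* each standing in R to phi.
```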
Lots of philosophers since Plato have used this formula, but usually do not call them ‘forms’ per Plato
Instead of saying it ‘partakes of a form’ they say that something instantiates, exhibits or exemplifies a particular property.
Plato’s formula is generally what is meant by ‘realism’: nominalists generally accept Plato’s account of attributes but argue that Plato’s formula (of forms/exemplification) has deep conceptual problems and attributes have to be understood on different terms, or say that attributes are a basic fact and no further analysis is possible
This chapter treats the Realist arguments
The ontology of metaphysical realism
Realists make a basic distinction between particulars and universals. Particulars are “things”, specific things that occupy a specific spatio-temporal position, while universals are “repeatable objects” that can be exemplified by multiple particulars, so that two houses can be (for the realist) the same red or two cars can be the same shape. When two things agree in attribute, they both exemplify the same universal
The above universals are monadic, ie. a single particular can exemplify them, but some universals are polyadic, that is, the universal requires two or more particulars to be exemplified. This is called a relation: two things can be a mile apart, and many different things can all be a mile apart from each other, so a and b can have the same relation as the relation between c and d.
Those relations are symmetrical - both a and b bear the same relation to one another, ie. a is a mile from b and b is likewise a mile from a - but there can also be asymmetrical relations, ie. a is the father of b. Loux compares these to ‘ordered pairs’ in logic, ie. (a,b), an ordered pair of a and b in just that order.
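The ordered-pair comparison can be made explicit - my gloss, in set notation Loux doesn't use here:

```latex
% Symmetric relation (being a mile apart): holds in both orders.
\[
  (a,b) \in R_{\mathrm{mile}} \;\Leftrightarrow\; (b,a) \in R_{\mathrm{mile}}
\]
% Asymmetric relation (being the father of): if it holds of (a,b),
% it fails of (b,a) - order matters, hence the ordered pair.
\[
  (a,b) \in R_{\mathrm{father}} \;\Rightarrow\; (b,a) \notin R_{\mathrm{father}}
\]
```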
Some realists make a distinction between universals which are properties - something being red or two things being a mile apart - and universals which are kinds, ie. dogs. While something red possesses the quality red, something which is a dog belongs to the kind ‘dog’. Some realists do not accept kinds.
“kinds constitute their members as individuals distinct from other individuals of the same kind as well as from individuals of other kinds. Thus, everything that belongs to the kind human being is marked out as a discrete individual, as one human being countably distinct and separate both from other human beings and from things of other kinds.“
There are degrees of generality to attribute agreement, so that a cat and a dog are both alike in attribute by being mammals, but less alike in attribute than two dogs. “The more specific or determinate a shared universal, the closer is the resulting attribute agreement.“
Universals themselves can exemplify other universals, so that red, yellow and blue are all part of the kind colour, they all have the properties of tone and hue, and they might have relations like one being darker than the other, and degrees of generality such as red being closer to orange than blue.
“Thus”, Loux says, “the original insight that familiar particulars agree in attribute by virtue of jointly exemplifying a universal gives rise to a picture of considerable complexity.” He writes: “Particulars and n-tuples of particulars exemplify universals of different types: properties, kinds, and relations. Those universals, in turn, possess further properties, belong to further kinds, and enter into further relations”
This structure, realists claim, can explain a wide range of phenomena. Loux will consider just two arguments for this structure of universal realism: subject-predicate discourse and abstract reference.
Realism and predication
Loux gives three subject-predicate sentences:
1. Socrates is courageous 2. Plato is a human being 3. Socrates is the teacher of Plato
In 1, the word ‘Socrates’ refers to some real thing in the world, but ‘courageous’ only seems to modify what Socrates is. Realists say however that ‘courageous’ also refers to some real thing in the world--
because the truth of 1 depends on its correspondence to the real world, and both parts must correspond, ie. Socrates can only be courageous if there is a real Socrates who has this property, but Socrates can also only be courageous if there is a courageousness to have
so another sentence eg. ‘4. Plato is courageous’ is possibe, and the ‘courageous’ in 4 is the same predicate ‘courageous’ as in 1, and has the same relationship to its subject (’Plato’ or ‘Socrates’), so it must refer to the very same ‘courgeous’. This goes for all other similar sentences, so every subject and predicate are always referents
There are three types of predicate: the predicate (courageous) in 1 is a property, so that courage is a property of Socrates; the predicate (human being) in 2 is a kind, so that Plato belongs to the kind human being; the predicate (teacher) in 3 is a relation, so that Socrates and Plato are related to each other
Loux notes that it is tempting to say here that predicates are names, just as Socrates is a name because ‘Socrates’ refers to a real thing in the world. He says this is most persuasive in a sentence like ‘This is red’ - ‘this’ names some real thing, so ‘red’ must also name some real thing- the colour red.
It does not work in most cases, however: ie. in ‘Socrates is courageous’, courageous is not a name for a universal, because the name is actually ‘courage’.
While the above is about grammar, it rests on semantic roots. While names refer to one particular thing, predicates are general, so they “enter into a referential relation with each of the objects of which they can be predicated” - they are true of or satisfied by those objects
For realists, predicates, in addition to being satisfied by the objects they are predicated of, also express or connote a universal.
Predicates therefore express which objects belong to a set (ie. the set of things with the property courageous, or all things that belong to the kind dog), but they also identify the universal “by virtue of which” those objects belong to this set
The subject-predicate sentence can therefore be reorganized to contain two names, ie. ‘Socrates exemplifies courage.’ In general all ‘a is F’ sentences can be rephrased ‘a exemplifies F-ness’
The predicate therefore has a referential relationship to a universal that is weaker than naming but “parasitic on it”, called connotation or expression
Realists say that this account of subject-predicate sentences is natural, intuitive and satisfying because it “does what we want it to do” - it explains how these sentences can correspond to the real world - and also works the same way that attribute agreement does:
Predicates are general terms, and general terms also indicate cases of attribute agreement
Items that agree in attribute all exemplify a universal, and the general term that indicates attribute agreement connotes a universal.
The universal that the predicate connotes is also the universal that the subject exemplifies.
Realism and abstract reference
There are sentences which use ‘abstract singular terms’, such as ‘courage’ or ‘mankind’, ie. ‘Courage is a virtue’ or ‘Triangularity is a shape’ (what a thing to say!)
Intuitively, Loux says, the truth of a sentence like this depends on the existence of a universal that the ‘abstract singular term’ refers to, ie. in order for it to be true that ‘courage is a virtue’, there would have to be a real property for ‘courage’ to refer to which one can say is a virtue. This is the realist’s account of abstract singular terms. Abstract singular terms are the names of universals.
There are also sentences that don't involve abstract singular terms, such as ‘this tomato and this firetruck are the same colour’, which presuppose the existence of a universal, in this case the colour red. Or ‘some species are cross fertile’, which presupposes the existence of those species. These sentences can only be true if the universal being presupposed is real
Similarly a sentence like ‘this shape is exemplified by many things’ presupposes the existence of a repeatable being (the universal)
This account is independent of the realist's account of predicates, but the account of predicates presupposes the account of abstract reference. Predicates connote universals because they can be rephrased as ‘a exemplifies F-ness’, and the realist argues that such a sentence can only be true if there is a real F-ness to refer to.
If there is a satisfactory nominalist account of abstract singular terms then both of these arguments for realism are less convincing. Many such accounts have been attempted (which will presumably be covered in the section on nominalism)
Restrictions on realism - exemplification
Up until now Loux has been writing as if the realist account applies to everything: that every predicate has a corresponding universal, and so on. Many realists however place restrictions on the account. They do this for a few reasons:
An unrestricted application leads to a paradox: say there is a predicate, ‘does not exemplify itself’ (simplified: is non-selfexemplifying). It is true that many things do not exemplify themselves (ex. the Taj Mahal does not exemplify the Taj Mahal), and therefore do exemplify being non-selfexemplifying, and some things do exemplify themselves (ie. ‘is self-identical’ is self-identical), and so do not have this property.
However, when the property is applied to itself, it creates a paradox: ‘is non-selfexemplifying’ is not self-exemplifying, so it exemplifies ‘is non-selfexemplifying’, which then means that it does exemplify itself, so cannot be non-selfexemplifying, which means that it doesn't, which again means... [Loux notes this is Russell's Paradox, applied to properties rather than sets]
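The structure of the paradox can be made concrete by modelling predicates as functions that can take other predicates as arguments (a toy sketch of my own, not from Loux):

```python
# Toy model: a predicate is a function from things to True/False, and a
# predicate "exemplifies itself" when applying it to itself yields True.

def is_self_identical(x):
    return x == x  # true of everything, including of itself

assert is_self_identical(is_self_identical)  # self-exemplifying

# 'is non-selfexemplifying': N(P) is meant to be "not P(P)".
def non_self_exemplifying(p):
    return not p(p)

# Asking whether N exemplifies itself forces "not N(N)", whose value
# depends on "not N(N)" again, and so on - evaluation never bottoms
# out, and Python signals this with a RecursionError:
try:
    non_self_exemplifying(non_self_exemplifying)
except RecursionError:
    print("no consistent truth value - the paradox")
```

The RecursionError is of course just an artifact of the toy model, but it mirrors the real problem: there is no consistent truth value to assign.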
So, at least this one universal cannot be real [I don't totally understand why this should be the case: why can there not just be a universal that is constantly vacillating between exemplifications, or twisting itself into a Möbius strip-like shape? I need to read about Russell's Paradox ie. why it is a paradox]
Another issue that arises is that exemplification results in an infinite regress. If a exemplifies F-ness, then a also exemplifies exemplifying F-ness, and then also exemplifies exemplifying exemplifying F-ness, and so on forever.
The same goes for the account of predication: if a is F can be rephrased a exemplifies F-ness, then...
[After reading this I wrote: why is this necessarily a problem, can't there just be an infinite series of properties (which we might denote with an ellipsis, ‘F-ness...’, the mathematical symbol for ‘repeating’)? - but then...]
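The infinite series the regress generates can be pictured with a lazy generator, which produces the properties one at a time without ever finishing (a purely illustrative sketch; the naming scheme is my own):

```python
from itertools import islice

def exemplification_regress(base="F-ness"):
    """Yield the series of properties the regress generates:
    F-ness, exemplifying F-ness, exemplifying exemplifying F-ness, ..."""
    prop = base
    while True:
        yield prop
        prop = f"exemplifying {prop}"

# The series is infinite, but we only ever inspect finitely much of it:
first_three = list(islice(exemplification_regress(), 3))
print(first_three)
# -> ['F-ness', 'exemplifying F-ness', 'exemplifying exemplifying F-ness']
```

Nothing about the generator is vicious: every member of the series is well defined, which is roughly the realist's point that the regress need not be a problem.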
Loux says that while many realists have treated this as a problem and attempted to solve it, it does not need to be. The realist can simply say that there are an infinite number of properties of exemplification (he says while it is a cycle, it is not ‘vicious’!) [Fools seldom differ!]
Realists who want to avoid the regress might simply say that exemplification is not subject to the realist's account. They may also say that they are just giving a more “articulated” explanation of what's already going on rather than introducing a new object, and similarly, of predication, that ‘a exemplifies F-ness’ is semantically equivalent to ‘a is F’ and does not actually introduce a new exemplification that also exemplifies etc.
Another infinite regress appears in the realist account: we have said earlier that exemplification is a relation between a particular and a universal. Because relations are also universals, when we say that a exemplifies F-ness, we say that the two are related by exemplification, so we introduce another universal, the relationship of exemplification. Because a and F are related by exemplification, we need “a higher form of exemplification ... to ensure that a and F enter into a relation of exemplification”, and so on...
Loux doesn’t see this as any more of a problem than the other regressions, but he says he is in the minority. Bradley, who first made this argument, made it to say that relations do not exist. Most realists instead say that exemplification is not a relation or any universal at all, but a nexus or linkage which is nonrelational.
Loux says that this has the bonus of making the earlier restrictions look less like “desperate ad-hoc attempts at avoiding paradox”, because exemplification is simply not a universal.
Further restrictions on realism - defined and undefined predicates
Loux says that some realists feel there is a problem with a predicate like ‘bachelor’. Bachelor refers to an ‘unmarried’ ‘male’ ‘human’, and nothing besides. Since these three predicates are already universals, ‘bachelor’ seems to be redundant: is it really a universal of its own?
Similarly, ‘unmarried’ is only the negative of ‘married’ - do we need a negative of another universal, isn’t it enough to say that a particular lacks the property of being married?
Based on this, some realists have placed restrictions on what counts as a universal. They have separated ‘undefined’ predicates, which are totally primitive, not defined with reference to any other universal (ie. red just refers to red), from ‘defined’ predicates, like bachelor, which are defined with reference to universals, and which they say are not themselves universals.
Dividing up what is and what isn’t defined however is often just arbitrary, up to the metaphysician who’s doing it. To get around this realists in the first half of the 20th century tended to take an ‘empirical’ position: the only universals that exist are physical, objects of sense-perception, such as colours, shapes... Everything else is defined with reference to these physical universals.
This approach has fallen out of favour however, mostly because it was unable to grapple with certain nonphysical things: eg. “the theoretical predicates of science” and “moral or ethical predicates”. Loux writes that they were forced to develop “highly improbable accounts” of these things, for example that ethical predicates were just a way of venting our emotions about actions and persons...
Another problem with separating undefined and defined predicates is that there are predicates that are definitely not primitive but are also not reducible to any component universals the way ‘bachelor’ is. He gives a very beautiful quote from Wittgenstein about games here, where he argues that there is nothing common to all games, but all are nonetheless still games.
Loux says that for some realists (citing himself!) this is simply no problem: there are universals which aren’t any particular physical thing, but also aren’t reducible to any physical thing. They simply accept things like games as valid universals.
Others do want to restrict what predicates are universals, but in a more considered way. They generally fall into a ‘scientific realist’ camp, and they say that universals are only those which are discoverable with the apparatuses of scientific inquiry.
There are two types: the more moderate says that while there are physical and non-physical universals, the physical universals dictate everything about the non-physical ones: “what physical relations it enters into determines uniquely what nonphysical kinds, properties, and relations it exhibits”, so that when you have described the physical universals proper to a particular, you have already given everything necessary for understanding it. In the metaphysician's jargon, non-physical universals supervene on physical ones. [He points to Jaegwon Kim's ‘Concepts of Supervenience’ for help understanding this concept, which we should read!]
The more extreme are ‘eliminativists’ who believe that our language is a theory about the world, which like any other can be modified when it does not accord; they feel that the only things that exist are those outlined by the natural sciences, especially physics, and that our language should be brought into accord with that of physics, ie. when our language presumes something exists which is not explained by physics, we should think our language is inaccurate.
Are there unexemplified universals?
Realists are divided on this issue. Some realists (Plato among them, who Loux calls ‘Platonists’) say that there are unexemplified universals, and they divide them into two types: contingently unexemplified universals, eg. a shape that, simply, no particular has taken, etc., and necessarily unexemplified attributes, which cannot and never will be exemplified in the world, eg. being round and square at the same time.
Other realists (Aristotle among them, who Loux calls ‘Aristotelians’) only allow for exemplified universals, so that, per Aristotle, “if everything were healthy, disease would not exist.”
They have several objections to the Platonist’s account. They say that this view creates a kind of ‘two worlds’ ontology, where universals exist in some other world which we cannot access, and it is difficult to understand how the two worlds would ever be connected.
And they say that this world would not be epistemologically accessible because these beings, in the universal realm, are outside of space and time, and we can only experience things in space and time: the knowledge would have to be a priori, and Aristotelians generally don't accept a priori knowledge.
“As they see it, we grasp particulars only by grasping the kinds to which they belong, the properties they exhibit, and the relations they bear to each other; and we grasp the relevant kinds, properties, and relations, in turn, only by epistemic contact with the particulars that exemplify them.”
Aristotelians therefore do not think there can be unexemplified universals: the only universals that exist are those that can be found in concrete particulars.
Platonists counter that we should believe in unexemplified universals for the same reasons we believe in universals.
They say that if we believe that the predicates of true statements refer to real universals, we should also believe they refer to real universals in untrue statements. So when someone wrongly says that a exemplifies F-ness, and it is untrue because nothing does or can exemplify F-ness, our account of predication should still make us believe that F-ness exists whether or not it is exemplified: the same semantic argument for a predicate's reference to a real universal goes whether the statement is true or false.
Platonists believe that all universals are necessary beings, while particulars are contingent beings.
Platonists argue that Aristotelians turn the issue of universals and particulars on its head, making universals brought into existence by the particulars that exemplify them; Platonists say that universals must precede particulars, and that undermining this also undermines the reasons to accept a realist position on universals in the first place.
Some Platonists do simply endorse the ‘two worlds’ view (and this probably includes Plato), but some don't. They say that a nexus of exemplification ties universals and particulars together, which is a notion that both Aristotelians and Platonists are committed to.
Epistemically, they argue, while some universals are unexemplified, many are exemplified, and those we can access empirically. Anything else we know about universals is extrapolated from our knowledge about the exemplified universals we do have access to. And if we cannot have knowledge of unexemplified universals, this is just how we would expect it to go.
The Problem of Universals II: Nominalism
Nominalists reject universals: they say that only particulars exist. Nominalists find that the problems the realists seek to explain with universals, such as subject-predicate discourse and abstract reference, can be explained with particulars alone.
Loux says there are four main types of nominalist: the ‘austere nominalist’, who takes the most extreme position, that whatever seems to refer to a universal simply refers to something about a particular (he says that this view is not common because it encounters some problems); the ‘metalinguistic nominalist’, who says that some things that seem to refer to universals in fact refer to linguistic expressions; the ‘trope theorist’, who believes that there are such things as properties, but that properties are particulars (called tropes), so that the sentence ‘a is red’ indicates two particulars, a and its redness; and the ‘fictionalist’, who says that talk about universals consists of fictional stories we tell.
The motivation for nominalism
Loux begins by asking why anyone should be a nominalist. He says that nominalists have a number of objections to the realist's account, including:
That because universals can be exemplified by multiple things, believing in them forces us to believe certain illogical things, eg. that something can be in multiple spatiotemporal positions at once, which is impossible, or that we can say “redness is two meters from itself”, which is nonsensical, etc. [Loux says this argument is quite old, appearing in Parmenides]
That a universal’s identity cannot be defined in a way that isn’t circular, eg. we cannot say that U’s identity is all of the particulars that exemplify U, because another universal might be exemplified by the same set of particulars, eg. ‘mankind’ and ‘featherless biped’ are exemplified by identical particulars but are different universals. We can only indicate how these universals differ by utilizing more universals in our explanation, so we cannot provide a general identity account for universals.
That, as mentioned in the last chapter, believing in universals leads to a regress, which nominalists say is vicious, and that universals lead to epistemological problems, because we cannot access them as spatiotemporal beings.
However, Loux says, none of these should be quite enough to push someone to nominalism. He then answers each of the arguments against realism (being a realist, nach):
Many realists deny that the things we are made to believe in the first case are illogical. While such things are impossible for particulars, they are simply possible for universals. [Bertrand Russell gave the example of ‘being north of’, which relates Edinburgh to London, but “there is no place where we find the relation ‘north of’ ”, so that particulars can be somewhere but universals are not anywhere]
Some realists attempt to answer the second case, and find that some universals can be given an identity which is not circular, such as sets in mathematics: a and b are identical if a contains all of the same things as b... However, Loux says, most universals are not like sets. His argument is that this demand for an account of identity is unfounded, and that many things cannot be given a noncircular identity, ie. a material object. Giving an identity to a material object may mean, for example, saying that it occupies a particular spatiotemporal region, so that whatever is in that region is the object, but it is not possible to do this without reference to the object itself (because it is what is in that region)... [I guess?]
The last argument has ofc already been addressed.
Loux says that while the nominalist might not find these responses convincing, they are not enough to defeat realism, and should not be enough for anyone to become a nominalist alone. Furthermore they are all technical points, and nominalists are usually not nominalists for technical reasons. Why, then, is anyone a nominalist?
Loux writes that nominalists tend to see metaphysics as something similar to the natural sciences: able to give theories which account for phenomena. If two theories satisfy the phenomena equally, then one has to decide which theory to support based on something else. Nominalists usually prefer the nominalist theory because it is the simpler: it posits fewer entities. While realists have to believe in ‘two worlds’, the particular and the universal, nominalists only have to believe in particulars. This was, of course, the argument for nominalism given by the founder of nominalism, William of Ockham, with his famous razor.
Austere nominalism
What nominalists allow into their ontology varies: many nominalists view humans, plants etc. as particulars, while some (of the ‘eliminativist’ type) might only admit quarks, neutrons... [Loux really only deals with the former even though he says otherwise]
Austere nominalists claim that attributes are simply primitive facts about the world and do not need further analysis: particulars can just be that way. The sentence ‘Socrates is courageous’ is true simply if Socrates is that way.
This is how they account for subject-predicate discourse: while for the realist the predicate can only accurately describe the particular if it corresponds to a real universal the particular has, enters into or belongs to, for the austere nominalist the predicate just describes the way that the particular is.
If the objection is raised that they are saying something trivial (’Socrates is courageous’ is true if Socrates is courageous), they argue that this should be expected, and besides, the realist's account is also trivial, since when they say ‘Socrates is courageous’ is true if Socrates exemplifies courage, they have just reworded the sentence; it expresses the same thing. The sentence simply is trivial, there is no further explanation that can be given to it, it's just a primitive part of our ontology.
How do they deal with abstract reference, then? ie. ‘Courage is a virtue’. For Loux, this is where they run into problems.
While the realist says that abstract reference presupposes a real thing to refer to, ie. courage is a real universal, the nominalist says that all these sentences can be translated into some other sentence. ‘Triangularity is a shape’, for example, becomes ‘things which are triangular are shaped objects’, ‘red is a colour’ becomes ‘things which are red are coloured objects’...
However, a sentence like ‘courage is a virtue’ poses a problem: it cannot be translated to ‘courageous persons are virtuous persons’, because one might imagine a courageous person who is otherwise a bad one. The particulars that are courageous do not seem to be able to account for courage in this sentence.
Nominalists might, Loux suggests, make use of ceteris paribus - all other things being equal courageous persons are virtuous persons.
Loux gives an explanation for why ceteris paribus doesn't actually work here that I do not understand: he says that there is no guarantee that our language has enough predicates to enumerate all the ways everything else is equal (ie. we may not have words for all the other virtues that would have to be equal), so that nominalists have to just insist that the ceteris paribus cannot be analyzed further. The things which are equal cannot be anticipated. [It just seems kind of ridiculous - who cares if we don't know them all, aren't they all equal anyway when we say ceteris paribus?]
There are however some sentences using abstract reference that austere nominalists simply cannot account for, eg. ‘some species are cross-fertile’
At this point the austere nominalist might argue that our common sense expressions might be wrong, and anyway, platonic metaphysicians have been speaking that way for so long that some of it might have got in. Our language might just be inaccurate, and this shouldn’t trouble us. [Quine took this view]
This nominalist, Loux notes, has a different metaphilosophy than the one we’ve been arguing with up to now. For the other, the fact that nominalism contradicts our common sense would be troubling.
Regardless, Loux says, even if ways are found to account for sentences like these, and the problems are circumvented, the austere nominalist account does not really succeed in being simpler. The austere nominalist has to leave many things simply unanalysable and primitive, and sentences using abstract references are translated on an ad-hoc basis. The realist account can offer explanations for more things, and can provide a systematic translation of those sentences. Therefore while the nominalist account postulates fewer entities, it has a more cumbersome explanatory framework.
Metalinguistic nominalism
There are nominalists who want to find a nominalism that has both ontological and explanatory simplicity. Many of these nominalists do so by making a different account of abstract reference: that abstract reference refers to linguistic features rather than to universals.
This position is as old as the very first nominalist, the eleventh-century philosopher Roscelin, and it was built upon by Abelard and Occam. It did not however find its fullest expression until the second half of the 20th century.
Loux discusses Carnap's version here: for Carnap, abstract references that seem to refer to universals really refer to linguistic items like predicates, adjectives, etc. So ‘Courage is a virtue’ becomes ‘Courage is a virtue predicate’, or ‘Triangularity is a shape’ becomes ‘Triangular is an adjective’. They all refer to the way that the word is used in the language. So ‘man’ is not a universal but just a word, etc.
This version manages to be systematic and to translate sentences about abstract reference all in the same way. Contrary to the austere nominalist, the metalinguistic nominalist agrees with the realist about how such sentences can be true (ie. they correspond) and only differ in terms of what they correspond with (ie. with the way the words are used rather than the way the world is)
Loux raises two problems with Carnap's account, however: 1. sentences of this type cannot be translated into other languages under this theory: a sentence in English is only true of an English word, so a sentence referring to how ‘man’ is used does not also refer to how ‘hombre’ is used, etc. 2. Carnap actually still posits some universals:
Loux introduces here the notion of ‘tokens’ and ‘names’. Tokens are any particular utterance, ie. two people who say ‘lion’ say two different things taken individually; both utterances are ‘tokens’. But they both use the name ‘lion’. So ‘token’ refers to the particular and ‘name’ to the general case.
Carnap, then, seems to posit the universal ‘man’ as distinct from each individual utterance of ‘man’, ie. there is the name ‘man’ to which we can refer, which is separate from each instance of the word that is spoken. The word ‘man’ is then a repeatable entity, a multiply instantiable entity.
Sellars's theory, Loux says, solves both these problems. [Throughout this section Loux speaks of Sellars in tones of awe and admiration]
Sellars says that abstract reference, rather than being a reference to how a word is used, refers to the total of the tokens all taken together as particulars, in a way that does not refer to a universal. The example he gives here is ‘the lion is tawny’ (’the lion’ as in ‘the animal we call the lion’, not ‘that lion’): a universal (the kind ‘lion’) cannot be tawny, it is only the particular lions that can be tawny. But if all lions are tawny it is true to say ‘the lion is tawny’. Similarly, ‘the American citizen has rights’ obviously refers to every citizen in particular.
So abstract references are really distributed singular terms.
In this way, sentences with abstract reference can be translated, because they just refer to all the given tokens of a word. The word can be translated into its equivalent word in another language and statements about it will still be true about all the tokens.
Sellars demonstrates this by using special ‘dotted quotes’ to indicate a distributed singular term, which is the same in all languages. So -red- means red, rouge, etc. and it is possible to speak of -red-s, of -man-s, etc. (the way one might when one says ‘there are 24 mans in chapter 16’ to refer to instances of the word man)
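One way to picture these distributed singular terms is as collections of word-tokens grouped by the role they play in their own language, with no universal posited over and above the tokens (an illustrative sketch of my own; the role mapping is invented):

```python
from collections import defaultdict

# Each token is a particular utterance or inscription, tagged with its
# language. These particulars are the only entities in the picture.
tokens = [
    ("red", "English"), ("red", "English"),
    ("rouge", "French"), ("rot", "German"),
    ("man", "English"), ("hombre", "Spanish"),
]

# Hypothetical mapping from a word to the role it plays in its own
# language, written with dotted-quote-style labels:
role_of = {"red": "-red-", "rouge": "-red-", "rot": "-red-",
           "man": "-man-", "hombre": "-man-"}

by_role = defaultdict(list)
for word, lang in tokens:
    by_role[role_of[word]].append((word, lang))

# '-red- is a colour predicate' is then a claim distributed over all of
# these particular tokens at once, across languages:
print(len(by_role["-red-"]))  # 4 tokens play the red role
```

Because the grouping crosses languages, a statement about -man-s is true of ‘hombre’ tokens too, which is how the account escapes Carnap's translation problem.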
Loux says that this is a very complete and developed theory, perhaps the most complete, and discussing all of it is outside the scope of the book. There are however still some objections for realists (such as, ultimately, Loux himself), in particular:
That Sellars's position seems to commit him to another type of universal. If what makes all individual -F-s the same is that they play the same role in each language, “isn't Sellars committed to the existence of linguistic roles” ie. a kind, a universal, the same “linguistic role” that words exemplify in many languages?
Sellars's reply is that his talk of ”linguistic expressions” is just a paraphrase of what he really means, which is a longer and more complex analysis of languages [in which “there are no linguistic expressions, only individual speakers and inscribers”]. Realists doubt this analysis.
Trope nominalism
Trope nominalists differ from the others because they argue that attributes do exist, but that they are also particulars. If a truck is red, it has the property redness, but this redness is not the same redness that the tomato has. The tomato and the truck each have a numerically different particular attribute which is, fine, identical in every way, but nonetheless a different thing. These identical but numerically different attributes are called tropes.
The idea that attributes are particulars themselves appears in the work of Occam, Hume, Locke and “arguably” Aristotle, although the term ‘trope theory’ didn’t appear until the 20th century.
He gives a nice quote from D.C. Williams to demonstrate it:
The sense in which Heraplem and Boanerp [two lollipops] “have the same shape” and in which “the shape of one is identical with the shape of the other” is the sense in which two soldiers “wear the same uniform” or in which a son “has his father’s nose” or our candy man might say “I use the same identical stick, Ledbetter’s Triple-X, in all my lollipops.” They do not “have the same shape” in the sense in which two children “have the same father” or two streets have the same manhole in the middle of their intersections or two college students “wear the same tuxedo” (and so can’t go to dances together).
A strength of trope nominalism is that it can explain how we can look at, in Loux's example, the Taj Mahal, and focus on the colour of the Taj Mahal. We aren't thinking of the Taj Mahal in general, where the Taj Mahal just happens to be a coloured object; we are really thinking about its colour specifically. Trope theory accounts for this by allowing attributes in a way that other nominalisms can't.
Loux asks how the trope nominalist accounts for predication and abstract reference. He says that trope nominalists could use the same sort of eliminativist argument that the other nominalists used, and that Occam did this, but most trope nominalists do not.
Instead they argue that abstract reference is actually a name, and that it names sets of particular attributes. They can therefore talk about sets such as ‘wisdoms’ or ‘reds’, called sets of resembling tropes, that abstract references to courage and red refer to.
They argue that these sets are not universals because of the difference in their identity conditions: set a is identical to set b ‘just in case’ they have all the same members. So if all the members of set a are also in set b and vice versa, set a and set b are the same set. This is not the case for universals: if all dogs are good dogs, ‘dog’ and ‘good dog’ are still two different universals (my example, nach)
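The identity condition the trope theorist appeals to here is set-theoretic extensionality, which Python's own sets exhibit directly (a quick illustration of my own, not from Loux):

```python
# Extensionality: two sets are identical "just in case" they have
# exactly the same members, however the sets were originally described.
featherless_bipeds = frozenset({"Socrates", "Plato"})
humans = frozenset({"Plato", "Socrates"})

# Same members, so the very same set, despite different descriptions:
assert featherless_bipeds == humans

# Universals are not like this: on the realist's view, 'mankind' and
# 'featherless biped' remain distinct universals even when exemplified
# by exactly the same particulars.
```

This is why the trope theorist can insist their resemblance-sets are particulars rather than universals: their identity is fixed entirely by their members.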
[I need to find out what ‘just in case’ means, it comes up a lot but Loux doesn’t define it]
This is an advantage for the trope nominalist because the other forms of nominalism have to reject set theory and other parts of mathematics, which is problematic.
While we have “just scratched the surface” of trope theory, Loux indicates that it is a sophisticated theory that can be both systematic and simple, and accounts for more things than metalinguistic nominalism can. He then gives a couple of criticisms of it:
The first is a fairly complicated one. Some have argued that for the trope theorist, the set of attributes possessed by a fictional being must be empty, because there is no such being. This means that the set of attributes that a Unicorn has and the set of attributes that a Griffin has are both the same set, the null set, and this means that a Unicorn and a Griffin would be the same thing, which, of course, they aren’t.
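The objection trades on the uniqueness of the empty set: under extensionality there is exactly one set with no members, so all empty trope-sets collapse into it (a minimal sketch of my own):

```python
# There are no unicorns and no griffins, so on this proposal the
# trope-sets for both properties are empty:
unicorn_tropes = frozenset()
griffin_tropes = frozenset()

# Extensionality leaves only one empty set, so identifying each
# property with its trope-set makes 'being a unicorn' and 'being a
# griffin' literally the same thing:
assert unicorn_tropes == griffin_tropes
```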
Loux says the trope theorist can however just say that there simply isn't such a thing as being a Griffin or being a Unicorn, and therefore “the corresponding abstract singular term doesn’t name anything at all” - he says this is the same sort of argument that the ‘Aristotelian’ realist makes when they argue that there are no unexemplified attributes.
The next objection is that because a set necessarily has exactly the members it has, every set that exists now has its members necessarily, ie. the number of humans that there are right now would be a necessary fact; there could not be more or fewer humans. This, of course, isn't true, so it poses a problem for the trope theorist. To Loux's knowledge no trope theorist has responded to this argument.
In a footnote Loux gives a possible way around the problem: they could argue that, say, wisdom is not identified with the set of wisdoms in the actual world, but with a theoretical set of wisdoms in all possible worlds.
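The extensionality point driving both objections can be made concrete with a quick Python sketch (my own illustration, not Loux’s - Python’s sets, like mathematical sets, are identified purely by their members):

```python
# Sets are extensional: identity is fixed entirely by membership,
# so two sets listing the same members are one and the same set.
dogs = frozenset(["Fido", "Rex"])
good_dogs = frozenset(["Rex", "Fido"])
assert dogs == good_dogs  # same members, therefore the very same set

# The fictional-beings objection: if nothing is a unicorn and nothing
# is a griffin, both trope-sets are the null set...
unicorn_tropes = frozenset()
griffin_tropes = frozenset()
# ...so 'being a unicorn' and 'being a griffin' would collapse into
# one and the same property, which is absurd.
assert unicorn_tropes == griffin_tropes
```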
Fictionalism
This section is very short and Loux is quite dismissive of it, which I think is a little unjustified.
Fictionalists argue that statements about universals are fictional, and they can be true with respect to the fiction they’re part of. So ‘Socrates exemplifies courage’ can be true in the way that ‘Achilles slew Hector’ is true, because it’s true within a fictional context.
Loux doesn’t really address why they think that they’re fictional or what it means. He seems to interpret it as them saying metaphysics is ‘just make-believe’ and that mathematics is too. We should read about them on our own!
War as Paradox
by Youri Cormier
written 2016, read 01/04/19-??
A book which outlines “dialectical war theory”, examining Clausewitz and Hegel’s theories of war. In particular, there is a great deal of discussion about Clausewitz’s reading & the intellectual milieu he was contributing to, which makes it useful as an accompaniment to On War itself. The text is also much lighter. Our notes will probably be a lot less formal and systematic.
Introduction
Cormier addresses a number of criticisms of Clausewitz, as well as popular misrepresentations of him in the literature, in a way that echoes my own feelings so precisely that it gave me a feeling like doves being released into the air in my brain.
He promises to address in the first chapter the claim that Clausewitz’s theory creates ‘self fulfilling prophecies’, and the question of whether or not Clausewitz is still relevant today in the era of terrorism and drones, but mainly answers the question of whether or not Clausewitz was influenced by Kant and Hegel; he says it’s obvious that he was, even though he doesn’t mention them.
He frames Clausewitz as part of a lineage of thinkers that began with Kant, runs through Hegel, and extends on to Marx, Engels, Kropotkin and Bakunin; what Kant applied to reason, Hegel applied to the state, Clausewitz to war, and the revolutionaries to politics, sociology, economics...
Clausewitz concerns himself chiefly with strategy and tactics, specific questions about war, and has a theory of war itself, while Hegel has no perspective on strategy and is concerned almost entirely with the ethical dimension of war.
While both thinkers are dialectical, they are dialectical in different ways, and Cormier says that neither’s theory of war is completely dialectical. Clausewitz uses dialectical analysis in his overall analysis of war - the dialectic between ‘absolute’ and ‘real’ war, for example - but does not see the things that he analyses as dialectical in themselves; Hegel sets his discussion of war within a worldview made up entirely of dialectical processes, but has no dialectical theory of war of his own.
Taken together the two theorists complete each other, though not perfectly; they have radically different perspectives on the ethical dimensions of war, which complicates a holistic ‘dialectical war theory’.
Hegel treats war as right in and of itself: an essential component of the survival of a people, and essential to maintaining the state. Cormier uses two quotes from the Philosophy of Right: “Successful wars have checked domestic unrest and consolidated the power of the state at home” and “[...]corruption in nations would be the product of prolonged, let alone ‘perpetual’ peace”
Clausewitz, however, sees war as having no ethical dimension of its own, but entirely an instrument of political will, that “war is only part of political intercourse, therefore by no means an independent thing in itself.”
Cormier sees these two diverging ethical approaches to war as both manifesting in the revolutionary wars of the early 20th century; the Hegelian form with the anarchists, and the Clausewitzian with the marxists. (!!!!!!!!!)
How to Approach Claims that Hegel and Clausewitz Generate Self-Fulfilling Prophecies and Warmongering
The claims about Clausewitz are quite easily dismissed as distortions, having their origins in blatant misreadings by both his detractors and supporters.
We don’t need to cover this in too much detail but a few things to note:
In order to address some criticisms, Cormier discusses the role of the State in Clausewitz and Hegel, in order to counter claims that they saw war as only carried out between states, and not between ‘non state actors’.
He first of all points out that Clausewitz did, in fact, address non-state actors explicitly, and brings up “Small Wars”, the term he uses for insurgency conflicts.
On his concept of the state: in his era, when the state was in some ways only coming into being in the wake of enormous revolutions such as the French Revolution, the concept of the state was not static or unchanging but in flux, and one these thinkers had sort-of-utopian ideas about.
Clausewitz said that the nation “achieves independence and unity, only to disappear once again”
“Clausewitz’s notion of the state was built up on a classification of means and ends: a tool by which peoples achieve national and its cultural ends. It was not an end in itself.”
Cormier talks about how, while the king of Prussia had ceded territory to Napoleon and allied himself with him, Clausewitz joined the Russians and fought against the French; in doing so he saw himself as acting on the political interests of Prussia, himself carrying out a policy that was not coordinated with the state.
Cormier talks about how many non-state actors see themselves as ‘states in waiting’ and carry out policy.
Hegel regarded the state as “a process that creates itself, asserts itself, renews itself”, and “the actuality of concrete freedom” - Hegel regards the state as an ethical system, more than anything else. Cormier promises to return to this in a later chapter as it’s too complex to treat now.
He then addresses the self-fulfilling prophecy part, which again in the case of Clausewitz is based on distortion. One interesting distortion is the work of Ferdinand Foch, an architect of World War I, who seemed to intentionally misrepresent Clausewitz as a bloodthirsty warmonger and handmaiden to Napoleon, in order to present his own ideal of mechanized warfare.
Cormier says that dealing with claims that Hegel makes self-fulfilling prophecies about war is a bit more sticky, since Hegel did consider wars a historical necessity.
He says that Hegel views historical events as a spectator after the fact and only then declares them necessary, but that later interpreters utilized this to project it into the future and call certain things historically necessary, or as a kind of destiny; he refers here to Marx, and to marxists.
He concludes the chapter with this, that Hegel is only self-fulfilling when projected into the future, and that’s Marx’s fault. This is, to me, a clear misinterpretation of Marx, as well as surely overstating Marx as an inheritor of Hegel, and I’m not sure that what he says was the fault of Marx cannot in fact be found in Hegel, but alas...
Perfection and Certainty in Metaphysics and War Theory
Cormier talks about how, prior to Clausewitz, military theory had focused on making itself into a ‘scientific’ discipline, which could make precise predictions and uncover eternal truths about the world.
It therefore focused primarily on quantifiable data: geometric distances, manoeuvres, etc. The manoeuvre was in fact the only consideration for theory. Following the theory was supposed to always produce victory. These are the ‘positive theories of war’
This seemed to be confirmed by Frederick the Great’s application of those theories in the Seven Years War, but it was suddenly disproven by the Napoleonic Wars, where Napoleon’s enemies employed the theory and were defeated here, while Napoleon employed the theory and was defeated there. In particular, Napoleon took the Russian capital, but Russia did not surrender and in the end won the war, which went against all positive theories of war.
Cormier says that these theories were true only in the context of wars between feudal European states, where wars were generally not worth too much expenditure of force, and that this ceased to hold with the social upheaval of the French Revolution and the great distances and great efforts that Napoleon’s nonprofessional army would commit to.
It is at these positive theories that Clausewitz directs extremely stiff criticism, dismissing them as pure theory and contrasting them with reality, with the diversity of war in relation to its political object, and opposing to them his own dialectical methodology (without ever mentioning them explicitly - subtle!)
[Note: this is what Jomini gets pissed at him for, see here]
On War
by Carl von Clausewitz, trans. Jolles
written 1816-1830, read 03/19-???
We’ll keep notes fairly brief due to the length of the work, lest we never finish it for our rigorous notekeeping; it is necessary only to record the shape of Clausewitz’ thought, so we might follow it along at a glance.
Author’s Notes
We didn’t read much of these because there’s no real reason to read them first, despite them coming first in this edition (they were never meant to be read by anyone but Clausewitz); however, in the first few paragraphs, Clausewitz says that he wanted to stress that there are two kinds of war: wars which aim at the overthrow of our adversary, and wars which merely make some conquests on the frontier of his country; he also wants to stress everywhere that war is nothing but the continuation of state policy with other means.
On the Nature of War
War is like a wrestling match, in that the ultimate object of the war is to disarm the opponent - to put them in a position where the sacrifice we demand seems preferable to continuing the fight, so that they are willing to give it up rather than keep acting.
Clausewitz considers that well-meaning people might see the true art of war to be that of attaining victory by minimizing bloodshed, but Clausewitz says that this is not possible:
Because if one side applies more force than the other, they will gain the upper hand; so they force our hand to apply the same amount of force, lest we simply lose the conflict. My opponent’s force returns upon my own, and impacts the force that I bring - and vice versa. This tendency has theoretically no limit, so that in theory it is driven to the extreme. He calls this the first reciprocal action.
[This reminds me of a marxist line; that capitalists have to use exploitative and unethical business practices, like sweatshops etc, or else they’ll be driven out by competition who would]
Just as my goal is always to disarm my enemy, so my enemy also acts to disarm me. I force his hand just as he forces mine; he calls this the second reciprocal action.
If I want to overcome my opponent, I need to bring more forces (as in units, resources) than they do; but they too want to overcome me, and bring more forces than me. This again proceeds with no theoretical limit, until it reaches an extreme. He calls this the third reciprocal action.
However, no such wars actually take place in this theoretical abstract, and so they are not driven to the extreme. If all the above were true in real wars, the war would be over at the first blow.
The above things are not true in real wars because decisions are not made all at once, but over time; [re-read this point for a better understanding of why this was]
[something about: the number of forces my opponent controls can be verified, and that my opponent’s willpower can be guessed at from past encounters; something about probabilities]
The above things are also not true in real wars because not all of our forces can be brought out at once; all the movable forces can, such as the army, but not every fort, every river, every mountain, etc., ‘in other words, the whole country’, can be utilized in the same battle at the same time (he clarifies: ‘unless the country is so small as to be embraced by a single battle’!)
He introduces a schematic of the three types of resources in a war; the military forces proper, the country and the allies. The country creates the forces proper, (by providing the soldiers), but is also other things, like land and fortresses.
He says that the political cause of the war also exerts an influence on the war, and sometimes this is the most important factor.
In wars, a real object - such as a piece of captured territory - substitutes for the political object that is pursued.
Clausewitz now considers the problem: why is military action ever suspended?
In the abstract theory, you could never stop military action, because your opponent would not stop: both of you would have to be at the extreme the whole time. But of course, in most wars, not-fighting happens more than fighting!
He says that this question ‘gets to the heart of the matter’
He first says that abstract theory would allow suspension in war only if it is of benefit to one party to wait; if I have an advantage in four weeks, it’s better for me to attack in four weeks than to attack right now. However, there are other reasons that, in reality, one might not attack right now.
The first is to do with something he calls polarity, which he says will be discussed in depth later. But he says that, with polarity, either I have an advantage or my opponent has an advantage; only I am victorious or my opponent is victorious. We cannot both have an advantage, or both be victorious.
But this is only true if we consider both sides as only attacking: in reality, there are two kinds of war - attacking and defending, and defending is often the much stronger. This breaks polarity.
Just because my opponent would be at a disadvantage if he attacked me does not mean that I would be at an advantage if I attacked him. If my opponent will be at an advantage in four weeks time but at a disadvantage now, I want him to attack me now; it does not follow that I want to attack him now.
The second reason for suspension in war is the fact that I only have imperfect information of my enemy. I do not know exactly if they are at an advantage or not, if I am, or if they will be later. I cannot know all the relevant factors with certainty. So in war, there is a certain amount of chance. I have to take risks, deal with probabilities, and have good luck.
He says that in situations of danger, courage is the most important virtue; courage implies risk-taking, boldness, and even foolhardiness, and a theory of war must account for this.
He says that of all things, war is ‘the most like a game of cards.’
War “always arises from a political condition and is called forth by a political motive. It is, therefore, a political act.”
War is not a rupture of violence that suspends policy, but a continuation of policy with other means.
The nature of the policy, then, shapes the war: Clausewitz stresses this fact for an entire page, stating it and restating it in several ways, outlining the dimension that the political cause of the war shapes the war [pg. 279-280]
The stronger the political cause of the war, the more extreme it will be, the closer it will conform with the theoretical abstract; the more that the war is aimed at the complete destruction of the enemy, “the more closely the war and the political object coincide”, the more military and the less political war seems.
But in a war with a weak political motive (such as, presumably, the second type of war - merely conquering a few provinces on their frontier), the less it accords with the theoretical abstract, the less destructive it is, and the more influence policy plays, so that these wars seem more political.
He stresses again how war is an instrument of politics, not an independent thing; and this allows us to analyze and appreciate military history. We can see that, owing to the different political motivations for wars, wars too also differ.
The most important thing for the general, then, is understanding the political aspect of the war, and not mistaking it or trying to make it into a type of war that it isn’t.
In the final section of the chapter, titled result for theory, he says that war’s nature is not just “a veritable chameleon”, changing itself to the nature of the political cause, but also a strange trinity:
It is composed of “the original violence in its essence, the hate and enmity which are to be regarded as a blind, natural impulse; of the play of probabilities and chance, which make it a free activity of the emotions; and of the subordinate character of a political tool, through which it belongs to the province of pure intelligence.”
That is to say: a trinity of violence, chance, and politics.
Violence is the concern of “the people”, chance of “the commander and his army”, and politics of “the government.”
The task is to keep theory between these three things, without giving them some kind of arbitrary ratio or proportion, acknowledging how each changes and each becomes stressed or diminished at different times.
Means and Ends in War
In pure, abstract theory, the conduct of war would not be related to the political object of the war, because the object would always be the complete overthrow of the enemy. However, in reality, this is not usually the case.
We return again to a schematic of three things: the military forces, the country and the will of the enemy (as in the 11th point above), which Clausewitz says contains everything else.
To win, the enemy’s military forces must be destroyed; that is to say, they must be placed in a position where they cannot continue to fight.
The enemy’s country must also be conquered, so they cannot reinforce their military.
Even if these two things are true, however, the enemy must run out of will to fight and actually sign a peace treaty for the war to be over, otherwise attacks could begin from inside or the war could continue at a later date.
It is true that this could also happen after a peace treaty is signed; however, the signing of peace has so significant an effect on the people and on the political machinery as a whole that it rarely happens and, basically, the signing of the peace treaty must be treated as the end of the war in theory.
Usually it proceeds in this order: the military forces are defeated and then provinces are taken, but this isn’t always the case.
However, not all of this needs to happen in a real war to obtain the political object that the war is for, because not all wars are in reality aimed at the complete destruction of the enemy. There are even wars where this would be impossible, for example, in cases when the enemy is significantly stronger.
In pure theory, wars between opponents of unequal strength would be impossible. But reality is often far removed from pure theory.
Two things “take the place of the impossibility of further resistance” when it comes to motives for making peace: improbability of success, and an excessive price to pay for it.
Because decisions in war have to be made based on probabilities, the war often ends long before it is fully fought out.
One side may strive to create this probability/improbability instead of aiming at the complete overthrow of the enemy.
The value of the political object the war is for will determine how much one side is willing to fight for victory; if the war is more costly than the political object is worth, they might surrender.
Clausewitz introduces a discussion of positive and negative political objects (but doesn’t elaborate on them yet!)
He discusses how the probability of success can be influenced: by the same things that serve the overthrow of the enemy, “naturally” - the destruction of his forces & the conquest of his provinces; however, we would go about them somewhat differently for this purpose than for the other. We might strike a strong initial blow to show the enemy our strength, instead of attempting to completely rout them; or we might take weak or undefended provinces early, while this would be undesirable if we wanted to overthrow him.
Another means of influencing the probability of success is “enterprises which have an immediate bearing upon policy” - such as breaking up their alliances or making alliances for ourselves, or “stimulating political activities in our favour” (I so wish he elaborated here!)
Another means of influencing the probability of success is to increase our opponent’s “expenditure of strength” - as their expenditure of strength “lies in the wastage of their forces”
Two ways of increasing the enemy’s expenditure of strength are destroying their forces ourselves and conquering their provinces, but there are three other ways:
Invading - occupying a province without attempting to keep it but “in order to levy contributions upon it or even devastate it” - the object here is simply to “do damage in a general way”
Directing our enterprises to the points where we will do most harm - instead of to the points that would lead most easily to overthrowing the enemy, if overthrowing them is not possible.
By wearing out the enemy - “a gradual exhaustion of the physical powers and the will by the long continuance of action” - which he says is the most important
In order to wear the enemy out, we focus our energies on pure resistance, “combat without any positive intention”; this negative means cannot be carried to absolute passivity - resistance is an active thing, and the enemy’s forces are destroyed by it while ‘our means operate at their maximum’, until the enemy gives up their intention
While this negative action doesn’t have as much of an impact as a positive action would, it succeeds much more easily
This negative action is called defence
When defending, extending the duration of the combat is enough to win victory, as the opponent has to expend their forces by attacking.
Clausewitz gives a historical example here - FINALLY - in the case of Frederick the Great, who conducted the Seven Years War. Clausewitz says that he would have lost the war immediately had he carried it out offensively, but “after his skillful use of a wise economy of his forces” he showed his enemies over the course of seven years that they would have to expend a great deal more than they initially thought to defeat him.
He says “we see then that there are many ways to our object in war”, but they all come down to combat. There is therefore only one means in war: the combat, the use of military forces.
“All, therefore, that relates to the military forces, and, thus, all that appertains to their creation, maintenance and employment, belongs to warfare.”
“Combat in war is not a combat of individual against individual, but an organized whole made up of many parts.” There are two kinds of units: one determined by the subject and one by the object. This part is a bit confusing, but I think it basically means that there are units of armed men, and units of the combats which they engage in; the purpose of the combat makes it a unit.
To each combat, “we attach the name of engagement.”
The employment of armed forces - the only means in war - is the determining and arranging of engagements.
“The soldier is levied, clothed, armed, trained, sleeps, eats, drinks and marches merely to fight at the right place at the right time.”
In an engagement, all energy is directed towards the destruction of the enemy, that is, his ability to fight.
However, in reality, this is often not the actual object of a particular engagement: a single engagement may be, for example, to hold a bridge, rather than destroy the enemy’s forces.
But the bridge is in fact being occupied to bring a greater destruction to their forces. So, there is a diversity of engagements, each subordinated to another, with the destruction of the enemy’s forces as the ultimate aim.
Each engagement, he says, is “nothing but a trial of strength”, and has no value of itself except its result, “that is to say, its decision.”
We go over how fundamental the engagement is, and how fundamental the decision is.
“The decision by arms is, for all operations in war, great and small, what cash payment is in bill transactions. However remote these relations may be, however seldom the settlements may take place, they must eventually be fulfilled.”
Clausewitz talks about combinations here, in a way thats a little confusing to me; I think combinations are the diversity of engagements.
He says that “any important decision by arms - that is, destruction of the enemy’s forces - reacts upon all preceding it, because, like a fluid they tend to bring themselves to a level” and that this means that a fortunate decision by arms by our opponent can make one of our combinations impracticable. I have no idea what this means.
Because of this, the destruction of the enemy’s forces is the most important thing.
However the ends, and not the means, are the most important; a blind dash to destroy all the enemies forces is not always right.
The destruction of the enemy’s forces is not necessarily their physical force: their moral force is often the most important.
Destroying the enemy’s forces directly is very costly and risky for us, while other means are less risky but also less effective. However, our reduced risk assumes the enemy is meeting us with similar means; if they meet us with the aim of destroying our forces completely, then we will suffer. For this reason, the destruction of the enemy’s forces is the “first born son of war”
This is the meaning of positive and negative objects introduced earlier: the positive object is the destruction of our enemy’s forces, and the negative object is the preservation of our own forces.
The positive object calls the destruction into existence, while the negative object awaits it. This waiting is, however, not passive; he promises to discuss this more later when covering defence.
While negative effort can give us advantages, it can be exhausted, and it can be necessary to engage in positive effort.
He stresses, at the end, how we have to keep all of this in mind, not overstate any one part but see how all of the parts interact.
Theories are made only to die in the war of time
by Stevphen Shukaitis (x)
Published 2012, read March 2019
Introduction
Shukaitis talks about how Debord & the SI in general are approached as marxist theorists and as avant-garde artists, but that they were also strategists, and stressed this aspect of themselves. The article addresses this gap in the literature
Expanding the field of strategy
Shukaitis goes over what kind of strategists the SI were, comparing them against modern literature on strategy; he says that they’re classical strategists, per Clausewitz, but they make significant departures from classical strategy.
Classical strategy assumes a rational subject that persists through the conflict, capable of strategizing the entire thing; while the SI want to create the conditions for a strategizing subject to emerge.
The SI are ‘strategizing from below’ by creating those conditions, not passing down commands like a general; they are creating what Vaneigem calls a ‘federation of tacticians of everyday life’
Through their emphasis on everyday life, Shukaitis says that they create ‘symbolic universes’ by drawing lines between events, which renders things coherent for other strategists who share that ‘strategic world’
He talks about how the SI emphasize their own abolition in the course of enactment, rather than persisting as a central strategic vehicle; he mentions how Debord says that the formation and dissolution of the SI were both equally revolutionary acts in their time.
He compares the SI to conventional marxist approaches to strategy, which means outlining the conventional marxist approaches to strategy which communist parties such as the French Communist Party would have acted on at that time.
Conventional marxist strategy assumes a pre-existing rational actor, in line with classical strategy, which is the Proletariat; the proletariat has a clearly oriented place in the conflict, and is capable of acting on its own strategies. The role of the Communist Party is to make the Proletariat aware of its position, at which point it joins the party & begins strategizing.
The SI, in contrast, do not assume a predetermined rational actor, but have to create the conditions for such a rational actor to emerge. This section is worth quoting in full:
For orthodox Marxist politics (such as those held by the French Communist Party at the time), there is an already given rational subject, namely the proletariat. The proletariat emerges through a linearly defined historical teleology. Once the proletariat becomes aware of its position, it can then act to transform it. […] Debord and the SI depart from this argument on multiple levels; they are less certain that there is an already given antagonistic social position, such as a clearly delineated working class, that is the locus of strategy and social conflict. That does not mean that the SI is giving up the notion of class struggle or its potential to create revolutionary conditions. Rather the focus shifts […] on to exploring the spaces and the creation of situations for the emergence of a collective subject that would be adequate for and capable of the task of formulating and strategizing what is to be done.
Shukaitis spends some time going over a leftist tactician from the 80s called De Certeau, comparing him with Clausewitz and concluding that the SI are not similar to De Certeau. I’m not sure what the point of this section was - it doesn’t seem to illustrate anything relative to the SI - but perhaps the author felt that De Certeau was a sort of ‘elephant in the room’ that had to be addressed for his intended audience; maybe this section puts down a potential objection.
We talk about Agamben for a bit; Shukaitis discusses how Agamben noted that Debord utilizes the aesthetics of war in all of his work, that his work is deliberately cold and artificial in a way that is evocative of war, and that this is intentional even if his subject, and his strategizing, does not pertain to a literal armed conflict; Agamben says that Debord’s strategizing was ‘of pure intellect’
Agamben mentions a Clausewitz quote that Debord used multiple times in his writing: “In every strategical critique, the essential thing is to put oneself exactly in the position of the actors; it is true that this is often very difficult.” Shukaitis says that the goal of strategy, for Debord, is to put oneself in the place of the emerging collective subject, “which is to say a process of conceptualizing agency in a given situation”, and understanding “what underlies agency in a given situation.”
We spend a little time going over the Spectacle: how relations in capitalism become mediated by images; that this isn’t just the proliferation of images, technology and media, but the way that relations are mediated through them.
He talks about how their strategy is devised for a particular time, during particular conditions - the time of the spectacle - and is not meant to be true for all time, as with all theory; theories are made to be sacrificed at an appropriate time.
He mentions a 2005 work by the Retort Collective called Afflicted Powers which attempts to explore and expand on the role of counter-intelligence and communication technologies - theories esp. germane to our historical period - in Debord’s later writing, while criticising some marxist assumptions in his work (which we gotta read >.>)
A History of the Jewish People
by H. H. Ben-Sasson, et al.
Written 1969, read 12/01/18-????
Origins and the Formative Period
The region of Palestine and Syria in the Bronze Age was situated, for the whole period, between two or three enormous political powers: Egypt to the south, and, variously, Persia, Babylon, and Mesopotamia to the north
The region was rich in natural resources such as timber, which was scarce in the regions of the neighbouring great powers, and several species of crop; they also had extensive land trade connections, but few sea connections owing to a lack of natural bays appropriate for port cities.
For these reasons, and for its position as a land bridge between Africa, Europe, and Asia, Palestine and Syria was an important region for the neighbouring great powers and was usually under the hegemony of one or the other as they fought over its control.
The author, A. Malamat, describes this in terms of "a continuous chain of conquests and oppression directed by the various powers against the local population", and gives the motives of the empires as 'power status' and 'prestige'.
There were also a number of "severe disputes" between local powers: one led to a major break-up of the region into "diminutive kingdoms" in the second half of the Second Millennium, during the conflict between Egypt and the Mitanni (and later the Hittites); another came in the second quarter of the First Millennium, in the time of Israel, over whether to side with the northern powers of Assyria (and later Babylon) or with Egypt. The second dispute is referenced in the Hebrew Bible (Jeremiah 2:18)
Text
Value, Price and Profit
by Marx
written 1865 (published 1898), read 16/12/2016 - ????
Production and Wages
17 pages of fucking *dunking* on Citizen Weston
The context is that Marx is giving a speech to the International Working Men's Association relating to the strikes for higher wages in England. Weston has been making some arguments against the demand, which Marx describes as follows: if the workers are paid five shillings for four shillings' worth of work, the capitalists will be forced to raise the prices of the commodities accordingly, i.e. what the workers could buy for four shillings they'll now have to buy for five. The effect is that there would actually be no change for the workers, because while they'd have higher wages, everything would be more expensive.
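Weston's claim can be put as a toy arithmetic check - the numbers here are illustrative only, a sketch of the shape of his argument rather than Marx's or Weston's own figures:

```python
# Weston's zero-sum claim in toy numbers: wages rise from 4 to 5 shillings,
# and he assumes prices rise in the same proportion, so the 'real' wage
# (what the wage actually buys) is unchanged.
old_wage, new_wage = 4.0, 5.0
old_price = 1.0                                 # price of a basket of goods, in shillings
new_price = old_price * (new_wage / old_wage)   # Weston: prices scale with wages

baskets_before = old_wage / old_price           # what the old wage buys
baskets_after = new_wage / new_price            # what the new wage buys

print(baskets_before, baskets_after)            # 4.0 4.0 - no real change
```

The whole of Marx's reply, below, is aimed at the hidden assumption in line two: that prices must scale with wages at all.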
Marx criticizes this from a great number of angles, the first being this: that Weston's argument relies on the rate of national production being fixed, along with a number of other things being 'fixed' and not 'variable', while they're really variable. (Nowadays I think we would say that he 'assumes it's a zero-sum game'.)
[The study guide on Marxists.org notes this: “Marx refers to Weston's argument that the total portion of the national product accruing to wages must remain constant. The modern expression of this theory is “that wage rises cause inflation”, and that therefore any increase in wages will cause inflation and reduce real wages back to where they started.”]
However, even if we accept that these things are fixed and not variable, he says: okay, the capitalists' profits will fall as a result of the increased wages, and the prices of the commodities the workers buy may also rise to make up for it. But not all capitalists are in the business of making only what their workers buy (i.e., mostly necessities); the vast majority of production is in luxuries that workers cannot afford, or in things which are shipped overseas, and so on.
What will happen is this: the capitalists will have less money as a result of the falling profits, and will have to raise prices. However, they're also the ones who buy the luxuries, so they'll have less money to buy luxuries, which are now more expensive. Luxuries will, then, be less profitable, and capitalists will leave the production of luxuries and enter the production of necessities. Therefore, by supply and demand etc., with the increased production of necessities, the prices of necessities will fall and the market will equilibrate.
That is: the commodities the workers buy will be more expensive for a short period, and then will return to their usual price.
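Marx's counter can be sketched as a toy accounting identity (again, my own illustrative numbers, not Marx's): under Weston's own assumption of a fixed national product, a wage rise is a transfer from profit to wages, which shifts demand from luxuries toward necessities rather than raising all prices for good.

```python
# Toy version of the adjustment Marx describes (illustrative numbers).
product = 100.0              # total value produced - fixed, on Weston's own assumption
wages, profit = 40.0, 60.0   # initial division of the product

wage_rise = 10.0
wages += wage_rise           # workers are paid more...
profit -= wage_rise          # ...and the rise comes out of profit
assert wages + profit == product   # the total hasn't changed

# Demand shifts accordingly: workers (who buy necessities) can spend more,
# capitalists (who buy luxuries) can spend less. Capital then migrates out
# of luxury production into necessity production until prices settle again.
necessity_demand = wages     # 50.0, up from 40.0
luxury_demand = profit       # 50.0, down from 60.0
print(necessity_demand, luxury_demand)
```

The point of the sketch is just that nothing in the identity forces prices up: the same total product is produced, only its division and the composition of demand change.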
He then details a number of things which are related, like how the price of grain actually fell once when the wages of the farmers were raised, and so on.
He then summarizes another argument by Weston: that they would have to produce more currency to pay the increased wages. He describes the way banks work in England: the factory worker is paid wages, then makes purchases in shops, which pay the money into the bank, which pays the capitalist, who then gives it back to the factory worker - he includes figures of exactly how much money the working class is paid in total, and how much money actually pays it (a few hundred thousand pounds of wages are paid by a constantly cycling 90 thousand pounds)
He is able to quickly dismiss the argument that they'll have to produce new currency by saying that they won't: there's a lot of money sitting idle in the bank coffers which can simply be placed in circulation.
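The circulation point is a back-of-envelope velocity calculation. The £90,000 stock is the figure from the notes above; the number of turnovers per period is my own assumption, picked only to show how a small stock pays a much larger wage bill:

```python
# The same coins pay wages many times over as they cycle from capitalist
# to worker to shop to bank and back to the capitalist.
money_stock = 90_000    # pounds actually in circulation (figure from the text)
turnovers = 5           # assumed cycles per period - illustrative only
wages_paid = money_stock * turnovers
print(wages_paid)       # 450000 - several hundred thousand pounds of wages
```

So a wage rise needs either slightly faster circulation or some of the idle bank money put into motion - not newly issued currency.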
Finally, he criticizes Weston’s theory of value, by saying that his arguments show that he believes wages produce value, which he reduces to ‘value produces value’, and concludes that Weston doesn’t understand how value is created.