wonder-worker · 1 year
Text
"Because Richard (III) usurped the throne, his retinue is inevitably seen as inimical to the crown and therefore in an important sense independent of royal authority. In the context of Edward IV's reign, in which the retinue was created, neither assumption is true. The development of the retinue would have been impossible without royal backing and reflected, rather than negated, the king's authority. Within the north itself, Gloucester's connection subsumed that of the crown. Elsewhere, in East Anglia and in Wales, that focus for royal servants was provided by others, but Gloucester was still part of that royal connection, not remote from it. In the rest of England, as constable and admiral, he had contributed to the enforcement of royal authority. When he seized power in 1483 he did not do it from outside the prevailing political structure but from its heart."
-Rosemary Horrox, "Richard III: A Study of Service"
#richard iii#english history#my post#Richard was certainly very powerful in the north but to claim that he 'practically ruled' or was king in all but name is very misleading#his power/success/popularity were not detached from Edward IV's rule but a fundamental part/reflection/extension of Edward IV's rule#even more so than anyone else because he was Edward's own brother#there's also the 1475 clause to consider: Richard & Anne would hold their titles jointly and in descent only as long as George Neville#also had heirs. Otherwise Richard's title would revert to life interest. His power was certainly exceptional but his position wasn't as#absolute or indefinite as is often assumed. It WAS fundamentally tied to his brother's favor just like everyone else#and Richard was evidently aware of that (you could even argue that his actions in 1483 reflected his insecurity in that regard)#once again: when discussing Edward IV's reign & Richard III's subsequent usurpation it's really important to not fall prey to hindsight#for example: A.J. Pollard's assumption that Edward IV had no choice but to helplessly give in to his overbearing brothers' demands#and had to use all his strength to make Richard heed his command which fell apart after he died and Richard was unleashed#(which subsequently forms the basis of Pollard's criticism of Edward IV's reign & character along with his misinterpretation of the actions#of Edward IV's council & its main players after his death who were nowhere near as divided or hostile as Pollard assumes)#is laughably inaccurate. Edward IV was certainly indulgent and was more passive/encouraging where Richard (solely Richard) was concerned#but he was by no means unaware or inert. His backing was necessary to build up Richard's power and he was clearly involved & invested#evidenced by how he systematically depowered George of Clarence (which Clarence explicitly recognized) and empowered Richard#and in any case: to use Richard as an example to generalize assumptions of the power other magnates held during Edward IV's reign#- and to judge Edward's reign with that specific assumption in mind - is extremely misleading and objectively inaccurate#Richard's power was singular and exceptional and undoubtedly tied to the fact that he was Edward's own brother. It wasn't commonplace.#as Horrox says: apart from Richard the power enjoyed by noble associates under Edward IV was fairly analogous to the power enjoyed by#noble associates under Henry VII. and absolutely nobody claims that HE was over-powered or ruled by his nobles or subjects#the idea that Richard's usurpation was 'inevitable' and the direct result of Edward empowering him is laughable#contemporaries unanimously expected Edward V's peaceful succession. Why on earth would anyone - least of all Edward -#expect Richard to usurp his own nephew in a way that went far beyond the political norms of the time?#that was the key reason why the usurpation was possible at all#as David Horspool says: RICHARD was the 'overriding factor' of his own usurpation. There's no need to minimize or outright deny his agency#as Charles Ross evidently did
13 notes
nanshe-of-nina · 3 years
Text
Articles about Early Modern Witch Hunts
“Historians as Demonologists: The Myth of the Midwife-witch” by David Harley
“The Myth of the Persecuted Female Healer” by Jane P. Davidson
“On the Trail of the ‘Witches’: Wise Women, Midwives and the European Witch Hunts” by Ritta Jo Horsley and Richard A. Horsley
“Who Were the Witches? The Social Roles of the Accused in the European Witch Trials” by Richard A. Horsley
All of these articles are about, among other things, the myth that midwives were especially likely to be accused of witchcraft. This myth came about from taking the works of Early Modern demonologists, such as Heinrich Kramer, at face value and assuming that what he and much of the clergy believed was identical to what everyone else thought, and therefore that since the clergy were suspicious of midwives, lay people were as well.
Unfortunately, trial records don’t bear this myth out. There were some midwives accused, such as Walpurga Hausmannin and La Voisin, but both of these women seem to have been the exception rather than the rule, and their cases were not typical of most witch trials; their relatively high rank suggests that there were many factors behind their cases. We don’t know who first accused Hausmannin, but the accusations against La Voisin were driven by high-profile poisoning cases among the aristocratic elite of 17th century France, and La Voisin was initially arrested on charges of poisoning. Linking poisoning with witchcraft was not uncommon, but suffice it to say that there was more afoot in the Voisin case than a straightforward example of a midwife-witch. In total, Harley’s analysis of trial records finds only 14 midwife-witches, not all of whom were executed, and in some cases their midwifery was rather incidental to the accusations of witchcraft.
What’s more, it turns out that the assumption that Early Modern clergy (both Catholic and Protestant) and lay people had the same beliefs about witchcraft and magic seems entirely wrong. To make a long story short: the clergy were most interested in associating witches with cannibalism, pacts with Satan, Witches’ Sabbaths, and sex with demons. The peasantry, on the other hand, was most interested in magic that caused direct harm, such as causing a sudden storm or a child or animal to sicken seemingly without any explanation. This point about the divide was also made in a recent article, “The invention of satanic witchcraft by medieval authorities was initially met with skepticism” by Michael D. Bailey.
In a lot of cases, what lay people thought seems to have been rather more important to understanding the witch-craze, because the vast majority of Witch Trials were caused by accusations from people who were not members of the clergy and were generally of roughly the same social status as those they were accusing. (As in, most of the trials were the result of peasants accusing other peasants.) Another point made in the Horsley articles is that while the line between beneficent and malevolent magic was sometimes blurry in the minds of lay people, there does, nevertheless, seem to have been a line. By which I mean that the belief that all magic is inherently evil seems most often to have been the viewpoint of the clergy. Harley, meanwhile, suggests that the clergy’s writings on midwives might have originated in the fact that midwives often did use quasi-magical techniques to promote easier childbirth, and also possibly that midwives provided a rather neat and tidy explanation for where the supposed witches engaged in baby-killing rituals were getting the babies from without attracting a lot of attention. This aside, however, midwives in fact seem to have been highly respected and valued members of their communities. Indeed, they were trusted enough to testify as expert witnesses in legal cases involving rape, illegitimacy, and/or infanticide.
The recent article by Bailey and both of the articles by Horsley also emphasize that witch hunts most often coincided with times of economic instability and thus, it is interesting to note that, aside from being mostly middle-aged or elderly women, a lot of accused witches were also beggars.
These works also discuss the Malleus Maleficarum and its often overstated influence on Early Modern witch trials. The thing about the Malleus is that it was originally written by a man named Heinrich Kramer as a way to vindicate himself as an Inquisitor and witch hunter after the cred-stripping debacle that had happened to him in Innsbruck in 1485. Namely, while running a trial against an accused witch named Helena Scheuberin and six other women, Kramer apparently spent most of his time asking Scheuberin endless questions about her sexual behavior. Georg Golser, the Bishop of Brixen, was not amused and had the trial shut down, the accused all freed, and Kramer expelled from Innsbruck.
Despite Kramer’s intentions, the Malleus was not initially an influential work; it was published during a lull in European witch-hunting, and the peak of the Witch Hunt panic actually occurred around 140 years later. The lack of immediate popularity was probably due to, among other things, the fact that the book was initially condemned by the faculty of Cologne due to objections over its demonology and recommended legal procedures. That Kramer was German and the heart of the Witch-Craze was in the German-speaking parts of the Holy Roman Empire is interesting, but it’s not enough to draw a one-to-one cause and effect, because there were other social, cultural, and political reasons why the German parts of the HRE produced so many witch trials.
962 notes
dwellordream · 3 years
Text
“…The complex design of the Victorian house signified the changing ratio between the cultural and physical work situated there. With its twin parlors, one for formal, the other for intimate exchange, and its separate stairs and entrances for servants, the Victorian house embodied cultural preoccupations with specialized functions, particularly distinguishing between public and private worlds.
American Victorians maintained an expectation of sexualized and intimate romanticism in private at the same time that they sustained increasingly ‘‘proper’’ expectations for conduct in public. The design of the house helped to facilitate the expression of both tendencies, with a formal front parlor designed to stage proper interactions with appropriate callers, and the nooks, crannies, and substantial private bedrooms designed for more intimate exchange or for private rumination itself.
Just as different areas of the house allowed for different gradations of intimacy, so did the house offer rooms designed for different users. The ideal home offered a lady’s boudoir, a gentleman’s library, and of course a children’s nursery. This ideal was realized in the home of Elizabeth E. Dana, daughter of Richard Henry Dana, who described her family members situated throughout the house in customary and specialized space in one winter’s late afternoon in 1865. Several of her siblings were in the nursery watching a sunset, ‘‘Father is in his study as usual, mother is taking her nap, and Charlotte is lying down and Sally reading in her room.’’ In theory, conduct in the bowels of the house was more spontaneous than conduct in the parlor.
This was partly by design, in the case of adults, but by nature in the case of children. If adults were encouraged to discover a true, natural self within the inner chambers of the house, children—and especially girls—were encouraged to learn how to shape their unruly natural selves there so that they would be presentable in company. The nursery for small children acknowledged that childish behavior was not well-suited for ‘‘society’’ and served as a school for appropriate conduct, especially in Britain, where children were taught by governesses in the nursery, and often ate there as well. In the United States children usually went to school and dined with their parents. As the age of marriage increased, the length of domestic residence for some girls extended to twenty years and more.
The lessons of the nursery became more indirect as children grew up. Privacy for children was not designed simply to segregate them from adults but was also a staging arena for their own calisthenics of self-discipline. A room of one’s own was the perfect arena for such exercises in responsibility. As the historian Steven Mintz observes, such midcentury advisers as Harriet Martineau and Orson Fowler ‘‘viewed the provision of children with privacy as an instrument for instilling self-discipline. Fowler, for example, regarded private bedrooms for children as an extension of the principle of specialization of space that had been discovered by merchants. If two or three children occupied the same room, none felt any responsibility to keep it in order.’’
…The argument for the girl’s room of her own rested on the perfect opportunity it provided for practicing for a role as a mistress of household. As such, it came naturally with early adolescence. The author Mary Virginia Terhune’s advice to daughters and their mothers presupposed a room of one’s own on which to practice the housewife’s art. Of her teenage protagonist Mamie, Terhune announced: ‘‘Mamie must be encouraged to make her room first clean, then pretty, as a natural following of plan and improvement. . . . Make over the domain to her, to have and to hold, as completely as the rest of the house belongs to you. So long as it is clean and orderly, neither housemaid nor elder sister should interfere with her sovereignty.’’ Writing in 1882, Mary Virginia Terhune favored the gradual granting of autonomy to girls as a natural part of their training for later responsibilities.
…Victorian parents convinced their daughters that the secret to a successful life was strict and conscientious self-rule. The central administrative principle was carried forth from childhood: the responsibility to ‘‘be good.’’ The phrase conveyed the prosecution of moralist projects and routines, and perhaps equally significant, the avoidance or suppression of temper and temptation. Being good extended beyond behavior and into the realm of feeling itself. Being good meant what it said—actually transfiguring negative feelings, including desire and anger, so that they ceased to become a part of experience.
Historians of emotion have argued that culture can shape temperament and experience; the historian Peter Stearns, for one, argues that ‘‘culture often influences reality’’ and that ‘‘historians have already established some connections between Victorian culture and nineteenth-century emotional reality.’’ More recently, the essays in Joel Pfister and Nancy Schnog’s Inventing the Psychological share the assumption that the emotions are ‘‘historically contingent, socially specific, and politically situated.’’ The Victorians themselves also believed in the power of context to transform feeling.
The transformation of feeling was the end product of being good. Early lessons were easier. Part of being good was simply doing chores and other tasks regularly, as Alcott’s writings suggest. One day in 1872 Alice Blackwell practiced the piano ‘‘and was good,’’ and another day she went for a long walk ‘‘for exercise,’’ made two beds, set the table, ‘‘and felt virtuous.’’ Josephine Brown’s New Year’s resolutions suggested such a regimen of virtue—sanctioned both by the inherent benefits of the plan and by its regularity.
As part of her plan to ‘‘make this a better year,’’ she resolved to read three chapters of the Bible every day (and five on Sunday) and to ‘‘study hard and understandingly in school as I never have.’’ At the same time, Brown realized that doing a virtuous act was never simply a question of mustering the positive energy to accomplish a job. It also required mastering the disinclination to drudge. She therefore also resolved, ‘‘If I do feel disinclined, I will make up my mind and do it.’’
The emphasis on forming steady habits brought together themes in religion and industrial culture. The historian Richard Rabinowitz has explained how nineteenth-century evangelicalism encouraged a moralism which rejected the introspective soul-searching of Calvinism, instead ‘‘turning toward usefulness in Christian service as a personal goal.’’ This pragmatic spirituality valued ‘‘habits and routines rather than events,’’ including such habits as daily diary writing and other regular demonstrations of Christian conduct. Such moralism blended seamlessly with the needs of industrial capitalism—as Max Weber and others have persuasively argued.
Even the domestic world, in some ways justified by its distance from the marketplace, valued the order and serenity of steady habits. Such was the message communicated by early promoters of sewing machines, for instance, one of whom offered the use of the sewing machine as ‘‘excellent training . . . because it so insists on having every-thing perfectly adjusted, your mind calm, and your foot and hand steady and quiet and regular in their motions.’’ The relation between the market place and the home was symbiotic. Just as the home helped to produce the habits of living valued by prudent employers, so, as the historian Jeanne Boydston explains, the regularity of machinery ‘‘was the perfect regimen for developing the placid and demure qualities required by the domestic female ideal.’’
Despite its positive formulation, ‘‘being good’’ often took a negative form —focusing on first suppressing or mastering ‘‘temper’’ or anger. The major target was ‘‘willfulness.’’ An adviser participating in Chats with Girls proposed the cultivation of ‘‘a perfectly disciplined will,’’ which would never ‘‘yield to wrong’’ but instantly yield to right. Such a will, too, could teach a girl to curb her unruly feelings. The Ladies’ Home Journal columnist Ruth Ashmore (a pseudonym for Isabel Mallon) more crudely warned readers ‘‘that the woman who allows her temper to control her will not retain one single physical charm.’’ As a young teacher, Louisa May Alcott wrestled with this most common vice.
Of her struggles for self-control, she recognized that ‘‘this is the teaching I need; for as a school-marm I must behave myself and guard my tongue and temper carefully, and set an example of sweet manners.’’ Alcott, of course, made a successful career out of her efforts to master her maverick temper. The autobiographical heroine of her most successful novel, Little Women, who has spoken to successive generations of readers as they endured female socialization, was modeled on her own struggles to bring her spirited temperament in accord with feminine ideals.
So in practice being good first meant not being bad. Indeed, it was sometimes better not to ‘‘be’’ much at all. Girls sometimes worked to suppress liveliness of all kinds. Agnes Hamilton resolved at the beginning of 1884 that she would ‘‘study very hard this year and not have any spare time,’’ and also that she would try to stop talking, a weakness she had identified as her principal fault.
When Lizzie Morrissey got angry she didn’t speak for the rest of the evening, certainly preferable to impassioned speech. Charlotte Perkins Gilman, who later critiqued many aspects of Victorian repression, at the advanced age of twenty-one at New Year’s made her second resolution: ‘‘Correct and necessary speech only.’’
Mary Boit, too, measured her goodness in terms of actions uncommitted. ‘‘I was good and did not do much of anything,’’ she recorded ambiguously at the age of ten. It is perhaps this reservation that provoked the reflection of southerner Lucy Breckinridge, who anticipated with excitement the return of her sister from a long trip. ‘‘Eliza will be here tomorrow. She has been away so long that I do not know what I shall do to repress my joy when she comes. I don’t like to be so glad when anybody comes.’’ Breckinridge clearly interpreted being good as in practice an exercise in suppression. This was just the lesson of self-censoring that Alice James had starkly described as ‘‘‘killing myself,’ as some one calls it.’’
This emphasis on repressing emotion became especially problematic for girls in light of another and contradictory principle connected with being good. A ‘‘good’’ girl was happy, and this positive emotion she should express in moderation. Explaining the duties of a girl of sixteen, an adviser writing in the Ladies’ Home Journal noted that she should learn ‘‘that her part is to make the sunshine of the home, to bring cheer and joyousness into it.’’ At the same time that a girl must suppress selfishness and temper, she must also project contentment and love. Advisers simply suggested that a girl employ a steely resolve to substitute one for the other. ‘‘Every one of my girls can be a sunshiny girl if she will,’’ an adviser remonstrated. ‘‘Let every failure act as an incentive to greater success.’’
This message could be concentrated into an incitement not to glory and ethereal virtue but simply to a kind of obliging ‘‘niceness.’’ This was the moral of a tale published in The Youth’s Companion in 1880. A traveler in Norway arrives in a village which is closed up at midday in mourning for a recent death. The traveler imagines that the deceased must have been a magnate or a personage of wealth and power. He inquires, only to be told, ‘‘It is only a young maiden who is dead. She was not beautiful nor rich. But oh, such a pleasant girl.’’ ‘‘Pleasantness’’ was the blandest possible expression of the combined mandate to repress and ultimately destroy anger and to project and ultimately feel love and concern.
Yet it was a logical blending of the religious messages of the day as well. Richard Rabinowitz’s work on the history of spirituality notes a new later-century current which blended with the earlier emphasis on virtuous routines. The earlier moralist discipline urged the establishment of regular habits and the steady attention to duty. Later in the century, religion gained a more experiential and private dimension, expressed in devotionalism. Both of these demands—for regular virtue and the experience and expression of religious joy—could provide a loftier argument for the more mundane ‘‘pleasant.’’
…The challenges of this project were particularly bracing given the acute sensitivity of the age to hypocrisy. One must not only appear happy to meet social expectations: one must feel the happiness. The origins of this insistence came not only from a demanding evangelical culture but also from a fluid social world in which con artists lurked in parlors as well as on riverboats. A young woman must be completely sincere both in her happiness and in her manners if she was not to be guilty of the corruptions of the age. One adviser noted the dilemma: ‘‘‘Mamma says I must be sincere,’ said a fine young girl, ‘and when I ask her whether I shall say to certain people, ‘‘Good morning, I am not very glad to see you,’’ she says, ‘‘My dear, you must be glad to see them, and then there will be no trouble.’’’’’
…No wonder that girls filled their journals with mantras of reassurance as they attempted to square the circle of Victorian emotional expectation. Anna Stevens included a separate list stuck between the pages of her diary. ‘‘Everything is for the best, and all things work together for good. . . . Be good and you will be happy. . . . Think twice before you speak.’’
We look upon these aphorisms as throwaways—platitudes which scarcely deserve to be preserved along with more ‘‘authentic’’ manuscript material. Yet these mottoes, preserved and written in most careful handwriting in copy books and journals, represent the straws available to girls attempting to grasp the complex and ultimately unreconcilable projects of Victorian emotional etiquette and expectation.”
- Jane H. Hunter, “Houses, Families, Rooms of One’s Own,” in How Young Ladies Became Girls: The Victorian Origins of American Girlhood
13 notes
perkwunos · 4 years
Text
The totality of things… is an exchange for fire, and fire an exchange for all things, in the way goods (are an exchange) for gold, and gold for goods.
Heraclitus
After the first coins were minted around 600 BC in the kingdom of Lydia, the practice quickly spread to Ionia, the Greek cities of the adjacent coast. The greatest of these was the great walled metropolis of Miletus, which also appears to have been the first Greek city to strike its own coins. It was Ionia, too, that provided the bulk of the Greek mercenaries active in the Mediterranean at the time, with Miletus their effective headquarters. Miletus was also the commercial center of the region, and, perhaps, the first city in the world where everyday market transactions came to be carried out primarily in coins instead of credit. Greek philosophy, in turn, begins with three men: Thales, of Miletus (c. 624 BC-c. 546 BC), Anaximander, of Miletus (c. 610 BC-c. 546 BC), and Anaximenes, of Miletus (c. 585 BC-c. 525 BC)--in other words, men who were living in that city at exactly the time that coinage was first introduced. All three are remembered chiefly for their speculations on the nature of the physical substance from which the world ultimately sprang. Thales proposed water, Anaximenes, air. Anaximander made up a new term, apeiron, "the unlimited," a kind of pure abstract substance that could not itself be perceived but was the material basis of everything that could be. In each case, the assumption was that this primal substance, by being heated, cooled, combined, divided, compressed, extended, or set in motion, gave rise to the endless particular stuffs and substances that humans actually encounter in the world, from which physical objects are composed--and was also that into which all those forms would eventually dissolve.
It was something that could turn into everything. As [Richard] Seaford emphasizes, so was money. Gold, shaped into coins, is a material substance that is also an abstraction. It is both a lump of metal and something more than a lump of metal--it's a drachma or an obol, a unit of currency which (at least if collected in sufficient quantity, taken to the right place at the right time, turned over to the right person) could be exchanged for absolutely any other object whatsoever.
… Greek thinkers were suddenly confronted with a profoundly new type of object, one of extraordinary importance--as evidenced by the fact that so many men were willing to risk their lives to get their hands on it--but whose nature was a profound enigma.
David Graeber, Debt: The First 5000 Years
[Aristotle] … sees that the value-relation which provides the framework for this expression of value itself requires that the house should be qualitatively equated with the bed, and that these things, being distinct to the senses, could not be compared with each other as commensurable magnitudes if they lacked this essential identity. 'There can be no exchange,' he says, 'without equality, and no equality without commensurability' … Here, however, he falters, and abandons the further analysis of the form of value. 'It is, however, in reality, impossible … that such unlike things can be commensurable,' i.e. qualitatively equal. This form of equation can only be something foreign to the true nature of the things, it is therefore only 'a makeshift for practical purposes'.
However, Aristotle himself was unable to extract this fact, that, in the form of commodity-values, all labour is expressed as equal human labour and therefore as labour of equal quality, by inspection from the form of value, because Greek society was founded on the labour of slaves, hence had as its natural basis the inequality of men and of their labour-powers. The secret of the expression of value, namely the equality and equivalence of all kinds of labour because and in so far as they are human labour in general, could not be deciphered until the concept of human equality had already acquired the permanence of a fixed popular opinion. This however becomes possible only in a society where the commodity-form is the universal form of the product of labour, hence the dominant social relation is the relation between men as possessors of commodities. …
Karl Marx, Capital, Vol. 1, Chapter 1
The generalization of commodity production is only possible when production itself is transformed into capitalist production, when the multiplication and augmentation of abstract wealth becomes the direct goal of production and all other social relationships are subsumed to this goal.  The “destructive power of money” which was the object of much criticism in many pre-capitalist modes of production (by many authors in ancient Greece, for example) is rooted precisely in this process of the capitalization of society as a result of the generalization of the money relationship.
Michael Heinrich, “A Thing with Transcendental Qualities: Money as a Social Relationship in Capitalism”
Aristotle contrasts economics with 'chrematistics '. He starts with economics. So far as it is the art of acquisition, it is limited to procuring the articles necessary to existence and useful either to a household or the state. … With the discovery of money, barter of necessity developed … into trading in commodities, and this again, in contradiction with its original tendency, grew into chrematistics, the art of making money. Now chrematistics can be distinguished from economics in that 'for chrematistics, circulation is the source of riches … And it appears to revolve around money, for money is the beginning and the end of this kind of exchange … Therefore also riches, such as chrematistics strives for, are unlimited. Just as every art which is not a means to an end, but an end in itself, has no limit to its aims, because it seeks constantly to approach nearer and nearer to that end, while those arts which pursue means to an end are not boundless, since the goal itself imposes a limit on them, so with chrematistics there are no bounds to its aims, these aims being absolute wealth. Economics, unlike chrematistics, has a limit ... for the object of the former is something different from money, of the latter the augmentation of money … By confusing these two forms, which overlap each other, some people have been led to look upon the preservation and increase of money ad infinitum as the final goal of economics' (Aristotle, De Republica, ed. Bekker, lib. I, c. 8, 9, passim).
Karl Marx, Capital, Vol. 1, Chapter 4
75 notes
lifecoachinghints · 3 years
Text
About Effective Dog Training and Obedience
Outlined below are the basic principles that should be followed when training your dog, regardless of which training method you choose. Using these principles will help the training process enormously and ensure that you get the most out of your relationship with your dog.
Bonding.
Probably the most important part of building a successful relationship with your dog is the rapport you create with him. Rapport is built by spending quality time with your dog and becoming his best friend: talking to him, taking him out for long walks, playing with him. This is the key to a healthy relationship with your dog.
Consistency
Sending consistent, clear messages to your dog will help him see his world in black and white rather than shades of grey. By consistent messages we mean that the commands you use to train, praise and correct your dog should always be the same. It is important that all members of your family use the same commands as you. When first training your dog it will help if only one person does the training. This is important because although the command may be the same, the body language or tone of voice may be completely different.
Timing
By timing we mean the amount of time allowed to pass between your dog's response (or lack of response) to your commands and your praise (or correction). This should be no longer than 2 to 3 seconds. Any longer and the chances are your dog won't connect your words with his actions. Remember that your dog's mental capacity is roughly that of a toddler.
By the same token, it is important that any physical correction of your dog's response to a training command happens within the same 2 to 3 seconds. If, for example, you have given the command "sit" and the dog doesn't sit, you could press his hindquarters down while repeating the sit command.
Repetition.
Dogs are creatures of habit and learn by repetition. It may take several repetitions of the same command before the response becomes embedded in the dog's brain and the action you are trying to teach him becomes automatic. Your dog will also need refresher sessions so that the command or action is not lost over the course of his life. You should always praise him when he does something right.
Session length
Keep all training sessions short and enjoyable so that your dog's concentration is maintained throughout. Quality, not quantity, is the key; you should also always try to finish the session on a positive note if you can.
Attitude
Always be realistic in your expectations of what your dog can achieve. It takes time to get results. If your dog has difficulty grasping a particular command, try to look at why he is struggling, and come back to it another day.
Praise.
Always use praise whenever your dog has successfully completed an action. This should be given as soon as your dog has performed the desired behaviour (remember the section on timing). When delivering the praise, look straight at him so that he understands the connection between your voice or touch and his action. Deliver the praise either verbally or by hand, by patting or stroking him.
Eye contact.
Using eye contact can be more effective than using the spoken word, especially where there is a close bond between dog and owner. If a dog wishes to communicate with you he will look straight at you, trying to read your intention.
Hand signals
Using specific hand signals while simultaneously speaking to your dog can be an effective way of training him. It is useful for getting a young dog to respond at long distances, and you can eventually drop the verbal commands so that he responds to the hand signal alone. Give the hand signal in front of or above the dog's head, as this is in his line of vision.
Voice signals
Dogs are known for their intelligence, but they are only able to understand a handful of words, and even those are more an association between the sound you make and the action the dog has learned to respond to it with.
Use one command for one action, and pronounce the command with the same tone of voice each time. You should get your dog's attention by saying his name before starting a command.
Understand that your dog won't understand everything you say and may misinterpret your meaning. For example, if you have trained your dog with the "down" command, he may well fail to respond to the command "get down" when he is sitting on the furniture, as he has only learned the word down.
Discipline and correction
It is important that the dog sees you as the pack leader. In the wild, if a dog misbehaves, the "alpha" dog will punish or correct it immediately.
For general disobedience, use the "Warning No Command" technique. This technique has three steps that you take when your dog doesn't respond as you wish.
Use something to startle your dog, for example a squirt from a water gun or shaking a stone-filled can. Make sure that you do this while he is in the act of misbehaving.
At the same time, say "No" or "Bad" out loud. Use a stern voice so your dog recognises the difference in tone from your normal voice. It is important that your voice correction is genuine and that the delivery is consistent, so that the dog associates the stern word or words with stopping the behaviour.
Then redirect your dog with a command. "Sit and stay" is an excellent choice.
A check collar offers a simpler but more physical way to give a correction.
A third option is to banish your dog from the pack. In the wild, the alpha dog would growl and chase the offending dog away from the pack. The ostracised dog would not be allowed back into the pack until the alpha dog permits it. You could do this by growling at your dog and sending him away from the family area, say outside to the garden.
Hello, my name is Richard. I live in the UK with my wife and daughter and our pet dog "Ollie". I have been a dog lover for many years. I have studied dog behaviour mainly to improve our relationship with our pet, but also because I feel that most behavioural problems are easily avoided if the right training methods are adopted in the first place.
Check out https://bit.ly/2YfiqEZ for more successful training tips.
3 notes
umi-ananda · 5 years
Text
Chapter 1 - Impressions of Anandamayi
“This incident, which I have reconstructed from the diary account of Didi Gurupriya Devi, Anandamayi's lifelong chief assistant, typifies the paradoxical status of a figure such as Anandamayi in modern Indian society. She is so unusual that there is no woman, not even an example known to us from the past, with whom she can be compared except in the vaguest of terms. We are baffled, as were the inhabitants of Bhawanipur, by her unplaceability. A strange event was visited upon the good peasants of that nondescript village—an eruption of the sacred which they would puzzle over for many years. In her speech, mode of dress and features, the lady with the airs of a holy person seemed to belong nowhere or everywhere.
Nowadays, we indiscriminately call such a charismatic figure a Guru, without being any too clear what that term means other than, perhaps, somebody with pretentious claims to spiritual wisdom. We relegate all Gurus to a dubious category of exotic, perhaps dangerous, cults. Gurus have been seriously discredited by recent scandal and tend to be treated with a degree of caustic suspicion. We recall Bhagavan Rajneesh—he of the 87 Rolls Royces—or various cult leaders whose followers committed mass suicide. We look on them as sinister and mendacious personalities who take backhanders from politicians or seduce the daughters of our friends.
Traditionalists point out that people like Sri Aurobindo, Krishnamurti, Swami Ramdas and Swami Shivananda, Mother Meera, Sai Baba, and Meher Baba are not Gurus at all but a hybrid phenomenon catering to foreigners.
Certainly, the glamorized deluxe ashrams which have sprung up in recent decades are a far cry from the modest pattern of the age-old guru-shishya relationship of master-disciple tutelage; yet this ancient system survives, for example, in the teaching of classical music and dance.
Throughout Indian history, this pattern of instruction ensured the transmission of knowledge from one generation to the next. In the case of Anandamayi who did not herself have a Guru, but was self-initiated, the traditional model of the teacher and the taught has, in certain respects, taken on new life, but in other equally important respects, she radically departed from tradition.
Her role as a revered Brahmin divine was by no means orthodox since this was a departure from the traditional status parameters of the married woman; further, for some 50 years as a widow and thus a member of the lowliest rank of Indian society, she was at the same time one of the most sought after of all spiritual teachers.
Yet again, she revived the old custom of the gurukul, an ancient style of schooling for both girls and boys at her ashrams. Until almost the very end of her life she could not be classed as a Guru in a technical sense; for a Guru is one who gives Diksha to disciples or initiation by mantra.
Nevertheless, in the more general and metaphorical sense of spiritual teacher, she was certainly a Guru, one of the greatest and the most respected of her time. In addition, she was indeed the Guru to many advanced sadhakas, or spiritual practitioners.
For them, she was everything that the Guru traditionally should be: a perfect vehicle of Divine Grace. There is a section in the excerpts from the discourses of Anandamayi included here where she comments at length on the spiritual meaning of the Guru. The true Guru is never to be regarded by the disciple as merely human but as a divine being to whom he or she surrenders in total obedience.
The disciple places himself in the hands of the Guru and the Guru can do no wrong.
Moreover, from the point of view of the Guru's disciples, the Guru is the object of worship. Obviously so serious a commitment is hedged about with all manner of safeguards, for the Indian is as aware of the perils inherent in such a position of absolute authority as is any skeptical outsider—rather more so, in fact, for much experience about the dynamics of the guru-shishya bond has been amassed over the millennia of its existence.
How could such adulation, such assumption of control over another's destiny, fail to turn the heads of all upon whom this mantle of omniscience falls? Everything depends on the closely observed fact that there are a few rare individuals at any one point in time who are so devoid of ego that no such temptation could possibly be felt. Egolessness is the sine qua non of the Guru.
For an Indian, submission to tutelage by a Guru is but one among many possible routes to salvation or Self-Realisation. In the case of Anandamayi, it has become obvious, indeed widely known, that we are dealing with a level of spiritual genius of very great rank. Her manifestation is extraordinarily rich and diverse.
She lived for 86 years, had an enormous following, founded 30 ashrams, and traveled incessantly the length and breadth of the land. People of all classes, castes, creeds, and nationalities flocked to her; the great and the good sought her counsel; the doctrine which she expounded came as near to being completely universal as is attainable by a single individual.
Though she lived for the good of all, she had no motive of self-sacrifice in the Christian sense: "there are no others," she would say, "there is only the One". She came of extremely humble rural origins, though from a family respected over generations for its spiritual attainments.
In the course of time, she would converse with the highest in the land, but draw no distinction between the status of rich and poor, or the caste and sectarian affiliations of all who visited her. She personified the warmth and the wide toleration of the Indian spiritual sensibility at its freshest and most accessible.
The fact that she was a woman certainly accentuates the distinctive features of her manifestation. Female sages as distinct from saints capable of holding sustained discourse with the learned are almost unheard of in India. Her femininity certainly imparts to the heritage of Indian and global spirituality certain qualities of flexibility and common sense, lyricism and humor not often associated with its loftiest heights.
Her quicksilver temperament and abundant Lila (sacred play) are in stark contrast with the serenity of that peerless exemplar of Advaita Vedanta, Sri Ramana Maharshi of Tiruvannamalai, the quintessence of austere stillness. That a woman of such distinction and wide-ranging activity should emerge in India in the 20th century, the century of world-wide feminism and reappraisal of feminine phenomenology, hardly seems a coincidence. The Guru, by definition, reflects the profoundest and most urgent needs of all followers. While the Guru incarnates the wish-fulfilments of a myriad devotees, he or she also extends, expands and elevates to new and unfamiliar sensitivity those who take heed.”
—Anandamayi, Her Life and Wisdom by Richard Lannoy Chapter 1, Pages 6-7 
4 notes
Spider-Man Life Story #1 Thoughts
Well...this was odd.
I have profoundly mixed feelings about this story.
That is owed to this comic being a collision of so many different things.
It is a period piece. But period piece that only half uses the period.
And I mean that on two levels.
It’s a period piece in the more general sense because it is set in the 1960s. But it is also a Spider-Man period piece because it uses 1960s Spider-Man continuity.
And it only half uses both in both cases.
Basically this issue was Chip Zdarsky’s Spider-Man AU fanfic that is a giant what if deviating from the Romita era...that is also set in the 1960s.
That is honestly the only way I could sum up this story. And by the looks of it things are going to get MORE complicated next issue because we move into the 1970s which implies each issue will be set in a different decade and this is confusing because if Spider-Man’s history played out in real time starting in 1962 then modern stories would only maybe be in the early 1980s.
Basically I guess this is more a general What If series in which each issue will be taking up topical issues from its decade.
Which is seriously NOT how this mini was advertised to readers so that sucks hard.
But okay AS what it actually is trying to be...is it any good.
The answer is...kind of.
There is more good than bad.
Now you all know I do not like Zdarsky’s work on Spider-Man, so when I say there is more good than bad I’m not damning with faint praise.
In a general sense, the pacing is REALLY good. A lot of story happens in one issue. Granted, it’s extra-length, so maybe that is why. The dialogue is perfectly fine, nothing rings untrue to the characters’ voices (except Gwen, but we’ll get there). There is a respectable amount of introspection and exploration on Peter’s part, and this is THE best Mark Bagley art in a very long time.
IIRC Mark Bagley once said that when he did the Ultimate Clone Saga and got to draw Richard Parker, he modelled him upon Gil Kane’s take on Peter Parker from the 1970s, and felt he got closer to that than he was trying to do in his 1990s work. You can very much feel that here now that he’s drawing Peter in literally the same setting that Kane drew him in.
Okay lets talk about other things that worked.
·         Flash’s characterization and Peter’s relationship with him. It felt very realistic in spite of not being how things played out in the original comics
·         Norman Osborn was very much in character in being devious and frightening
  What didn’t work.
·         Peter’s quick assumption of Norman’s amnesia. He kind of just presumes Norman has lost his memory on the basis of little evidence. Now granted his spider sense later corroborates this, but it’s still...kind of lame. Especially compared to the original story in ASM #40 wherein Peter figured Norman lost his memories because he was referencing an event from his past that he’d just finished relaying to Peter.
·         The blurb at the start of the issue says Peter was 15 when he got his powers in 1962. And then we cut to 1966 where Peter says that this happened 4 years ago. On the very next page he claims he has a year left of college. Er...what? Maybe I’m out of the loop on the American college system (in the 1960s) but if Peter was 15 in 1962 and it’s 4 years later then he’d be 19. College lasts four years, meaning Peter wouldn’t be graduating for another 2-3 years (depending upon how close he is to turning 20). He should be in his FIRST year of college, 1965-1966, and would be graduating in 1969, the school year beginning in 1968. WTF?
·         Gwen. Zdarsky has constructed a conundrum for himself here. This is the Romita era Gwen but with shades of Ditko Gwen but also shades of more modern revisionist versions of her and Emma Stone and also he’s now taking her in a MASSIVE deviation from the established Spider-Man history. It’s all a mess, and speaks to where Zdarsky’s shipper flag is planted btw.
·         Peter’s attitude to Flash at his leaving party. In the original story, ASM #47 Peter in a wonderful moment of maturity held no grudge against Flash and wished him well sincerely. There was no ‘triggering’ on his part.
·         The focus upon other superheroes like
·         Frankly the fact that this is not clearly either a What If deviation from established history or a true blue period piece using the established lore.
 And really that is THE big dilemma with this story. It’s not really committing to being one of those things or the other and is as a result kind of compromising both things.
 It’d be one thing if Spider-Man’s history was going in starkly different directions as a RESULT of Zdarsky using the historical setting, like if Peter was drafted for example.
But that isn’t what happens. Gwen finding out Peter is Spider-Man and Peter turning in Norman Osborn are things that could have happened in any contemporary What If issues (if What If was around back then).
And it’s not that these are uninteresting deviations to explore, but they feel undercooked because the book is also examining Peter’s introspection about joining up to fight in Vietnam. And THAT stuff is really interesting too, the discussion with Flash and Captain America serving as opposing arguments for Peter’s decision is REALLY good.
But again it feels undercooked because we’ve got this plot about Norman Osborn knowing Peter is Spider-Man brewing.
And the thing is I can’t decide if it’s a case of the story itself being at fault or the advertising for it being flawed.
Let’s put aside discussions about whether the story being Spider-Man’s history just presented in real time would’ve been better than this or not.
The fact is it WAS sold to readers that way so when you view it through that lens all the What If deviations seem weird and out of place, like distractions.
But hypothetically if this was just advertised as a What If mini ‘What if Peter turned in Norman Osborn and Gwen found out he was Spider-Man before she died’ then the focus upon Vietnam would’ve felt much the same.
But I don’t know if the series advertising EXACTLY what this mini seems to be would’ve mitigated this sensation from the reader. Or if the story itself is really just two types of stories glued together.
I suspect it actually is the latter though for two big reasons.
The Vietnam plotline places a lot of focus upon Captain America and Iron Man. Their conflict is in fact the shocking cliffhanger of the entire issue. So you know...something that isn’t about Spider-Man himself. That felt more like Zdarsky trying to do Watchmen but in the Marvel universe. Which gets complicated because that opens up a whole can of worms for the relative realism of the MU, not least of which being how could the heroes ever allow things in the war to get to the point that it did.
The other reason is that the deviations from established history aren’t done the way of a traditional What If, wherein the in-universe history is identical up to a certain point then a single change sets off a new direction.
Here Zdarsky is just remixing various different elements from Romita Spider-Man to create an impression of that era and then deviating from the ‘general knowledge’ of that era.
Norman dropping Harry off at school cribs from ASM #39, Norman’s amnesia cribs from ASM #40, Flash’s party cribs from ASM #47, the Scorpion and Spider Slayer stuff treats ASM #20 and #25 as big parts of the past but the threat of Jameson’s  exposure cribs from Stern’s 1980s run. Norman wanting Peter as his heir cribs from Revenge of the Green Goblin in the 2000s.
But these elements, much like Spider-Man: Blue, are not remixed in a way that chronologically lines up with how things happened. They’re all jumbled together so now Norman found out who Peter was (somehow?) but kept that in his back pocket to bring it up at Flash’s party and then announced he wanted Peter as his heir.
It’s all so...weird.
Look it isn’t an uninteresting what if but it’s also like...just a fanfic basically.
Not badly written fanfiction but it’s also like...what point is there to this really besides BEING Zdarsky’s fanfiction?
Another problem is that this story, along with not fully committing to the period piece aspect, simultaneously plays things with an intrusive degree of hindsight and imposes revisionism.
I’ve already spoken about this with Gwen, but it’s also true with Harry and Norman’s relationship being cribbed from the Raimi movies, the incredibly obvious ‘Norman will kill Gwen!!!!!!’ foreshadowing, and the ‘Professor Warren is a bad guy’ stuff; to say nothing of how Warren’s character design is inaccurate to the period.
That stuff imposes a present day hindsight of the Romita era whilst also overlays that with truisms brought about by adaptations being in the zeitgeist.
This applies to the Vietnam war stuff too. The book frames the war in a way that we look back upon it as opposed to framing it the way people in 1966 America probably actually viewed it. The final page is the biggest example of this.
Finally...didn’t we JUST see this from Zdarsky with his time travel arc in Spec?
Like wasn’t this a very similar idea. Spider-Man’s history but deviated because Norman Osborn’s identity is exposed differently and Peter and Gwen wind up as endgame?
Overall, I can’t say that I disliked this. But nor can I say I was that thrilled with it. It’s not what we were promised and honestly...what we were promised sounded a lot more compelling. Moreover there are much better examples of period piece superhero stories out there.
·         Spider-Man Blue frames the early Romita issues the way they might’ve happened in the 1960s as they existed rather than Marvel universe 1960s
·         ASM Annual 1996 is DeFalco, Frenz and Romita Senior presenting an untold tale so good it could be downright mistaken as being MADE in the 1960s
·         Busieck’s seminal Untold Tales of Spider-Man series as a whole
·         The last 2 issues of Webspinners by DeFalco and Frenz which serve as a lost arc from their 1980s era
·         X-Men: Grand Design
I think this is something you just gotta pick up and taste for yourself, but again...just be aware this isn’t what it was advertised as.
16 notes
bountyofbeads · 5 years
Text
Jeffrey Epstein and When to Take Conspiracies Seriously https://www.nytimes.com/2019/08/13/opinion/jeffrey-epstein-suicide.html
When you have the #POTUS pushing conspiracy theories about the former president we are in DANGEROUS territory. The #ClintonBodyCount is being pushed by Russia and bots. We can't jump to conclusions until we have the facts. BEWARE
Jeffrey Epstein and When to Take Conspiracies Seriously
Sometimes conspiracy theories point toward something worth investigating. A few point toward the truth.
By Ross Douthat | Published August 13, 2019 | New York Times | Posted August 13, 2019 |
The challenge in thinking about a case like the suspicious suicide of Jeffrey Epstein, the supposed “billionaire” who spent his life acquiring sex slaves and serving as a procurer to the ruling class, can be summed up in two sentences. Most conspiracy theories are false. But often some of the things they’re trying to explain are real.
Conspiracy theories are usually false because the people who come up with them are outsiders to power, trying to impose narrative order on a world they don’t fully understand — which leads them to imagine implausible scenarios and impossible plots, to settle on ideologically convenient villains and assume the absolute worst about their motives, and to imagine an omnicompetence among the corrupt and conniving that doesn’t actually exist.
Or they are false because the people who come up with them are insiders trying to deflect blame for their own failings, by blaming a malign enemy within or an evil-genius rival for problems that their own blunders helped create.
Or they are false because the people pushing them are cynical manipulators and attention-seekers trying to build a following who don’t care a whit about the truth.
For all these reasons serious truth-seekers are predisposed to disbelieve conspiracy theories on principle, and journalists especially are predisposed to quote Richard Hofstadter on the “paranoid style” whenever they encounter one — an instinct only sharpened by the rise of Donald Trump, the cynical conspiracist par excellence.
But this dismissiveness can itself become an intellectual mistake, a way to sneer at speculation while ignoring an underlying reality that deserves attention or investigation. Sometimes that reality is a conspiracy in full, a secret effort to pursue a shared objective or conceal something important from the public. Sometimes it’s a kind of unconscious connivance, in which institutions and actors behave in seemingly concerted ways because of shared assumptions and self-interest. But in either case, an admirable desire to reject bad or wicked theories can lead to a blindness about something important that these theories are trying to explain.
Here are some diverse examples. Start with U.F.O. theories, a reliable hotbed of the first kind of conspiracizing — implausible popular stories about hidden elite machinations.
It is simple wisdom to assume that any conspiratorial Fox Mulder-level master narrative about little gray men or lizard people is rubbish. Yet at the same time it is a simple fact that the U.F.O. era began, in Roswell, N.M., with a government lie intended to conceal secret military experiments; it is also a simple fact, lately reported in this very newspaper, that the military has been conducting secret studies of unidentified-flying-object incidents that continue to defy obvious explanations.
So the correct attitude toward U.F.O.s cannot be a simple Hofstadterian dismissiveness about the paranoia of the cranks. Instead, you have to be able to reject outlandish theories and  acknowledge a pattern of government lies and secrecy around a weird, persistent, unexplained feature  of human experience — which we know about in part because the U.F.O. conspiracy theorists keep banging on about their subject. The wild theories are false; even so, the secrets and mysteries are real.
Another example: The current elite anxiety about Russia’s hand in the West’s populist disturbances, which reached a particularly hysterical pitch with the pre-Mueller report collusion coverage, is a classic example of how conspiracy theories find a purchase in the supposedly sensible center — in this case, because their narrative conveniently explains a cascade of elite failures by blaming populism on Russian hackers, moneymen and bots.
And yet: Every conservative who rolls her or his eyes at the “Russia hoax” is in danger of dismissing the reality that there is a Russian plot against the West — an organized effort to use hacks, bots and rubles to sow discord in the United States and Western Europe. This effort is far weaker and less consequential than the paranoid center believes, it doesn’t involve fanciful “Trump has been a Russian asset since the ’80s” machinations … but it also isn’t something that Rachel Maddow just made up. The hysteria is overdrawn and paranoid; even so, the Russian conspiracy is real.
A third example: Marianne Williamson’s long-shot candidacy for the Democratic nomination has elevated the holistic-crunchy critique of modern medicine, which often shades into a conspiratorial view that a dark corporate alliance is actively conspiring against American health, that the medical establishment is consciously lying to patients about what might make them well or sick. Because this narrative has given anti-vaccine fervor a huge boost, there’s understandable desire among anti-conspiracists to hold the line against anything that seems like a crankish or quackish criticism of the medical consensus.
But if you aren’t somewhat paranoid about how often corporations cover up the dangers of their products, and somewhat paranoid about how drug companies in particular influence the medical consensus and encourage overprescription — well, then I have an opioid crisis you might be interested in reading about. You don’t need the centralized conspiracy to get a big medical wrong turn; all it takes is the right convergence of financial incentives with institutional groupthink. Which makes it important to keep an open mind about medical issues that are genuinely unsettled, even if the people raising questions seem prone to conspiracy-think. The medical consensus is generally a better guide than crankishness; even so, the tendency of cranks to predict medical scandals before they’re recognized is real.
Finally, a fourth example, circling back to Epstein: the conspiracy theories about networks of powerful pedophiles, which have proliferated with the internet and peaked, for now, with the QAnon fantasy among Trump supporters.
I say fantasy because the details of the QAnon narrative are plainly false: Donald Trump is not personally supervising an operation against “deep state” child sex traffickers any more than my 3-year-old is captaining a pirate ship.
But the premise of the QAnon fantasia, that certain elite networks of influence, complicity and blackmail have enabled sexual predators to exploit victims on an extraordinary scale — well, that isn’t a conspiracy theory, is it? That seems to just be true.
And not only true of Epstein and his pals. As I’ve written before, when I was starting my career as a journalist I sometimes brushed up against people peddling a story about a network of predators in the Catholic hierarchy — not just pedophile priests, but a self-protecting cabal above them — that seemed like a classic case of the paranoid style, a wild overstatement of the scandal’s scope. I dismissed them then as conspiracy theorists, and indeed they had many of conspiracism’s vices — above all, a desire to believe that the scandal they were describing could be laid entirely at the door of their theological enemies, liberal or traditional.
But on many important points and important names, they were simply right.
Likewise with the secular world’s predators. Imagine being told the scope of Harvey Weinstein’s alleged operation before it all came crashing down — not just the ex-Mossad black ops element but the possibility that his entire production company also acted as a procurement-and-protection operation for one of its founders. A conspiracy theory, surely! Imagine being told all we know about the late, unlamented Epstein — that he wasn’t just a louche billionaire (wasn’t, indeed, a proper billionaire at all) but a man mysteriously made and mysteriously protected who ran a pedophile island with a temple to an unknown god and plotted his own “Boys From Brazil” endgame in plain sight of his Harvard-D.C.-House of Windsor pals. Too wild to be believed!
And yet.
Where networks of predation and blackmail are concerned, then, the distinction I’m drawing between conspiracy theories and underlying realities weakens just a bit. No, you still don’t want to listen to QAnon, or to our disgraceful president when he retweets rants about the #ClintonBodyCount. But just as Cardinal Theodore McCarrick’s network of clerical allies and enablers hasn’t been rolled up, and the fall of Bryan Singer probably didn’t get us near the rancid depths of Hollywood’s youth-exploitation racket, we clearly haven’t gotten to the bottom of what was going on with Epstein.
So to worry too much about online paranoia outracing reality is to miss the most important journalistic task, which is the further unraveling of scandals that would have seemed, until now, too implausible to be believed.
Yes, by all means, resist the tendency toward unfounded speculation and cynical partisan manipulation. But also recognize that in the case of Jeffrey Epstein and his circle, the conspiracy was real.
Epstein Suicide Conspiracies Show How Our Information System Is Poisoned
With each news cycle, the false-information system grows more efficient.
By Charlie Warzel | Published August 11, 2019 | New York Times | Posted August 13, 2019 |
Even on an internet bursting at the seams with conspiracy theories and hyperpartisanship, Saturday marked a new chapter in our post-truth, choose-your-own-reality crisis story.
It began Saturday morning, when news broke that the disgraced financier  Jeffrey Epstein had apparently hanged himself in a Manhattan jail. Mr. Epstein’s death, coming just one day after court documents from one of his accusers were unsealed, prompted immediate suspicion from journalists, politicians and the usual online fringes.
Within minutes, Trump appointees, Fox Business hosts and Twitter pundits  revived a decades-old conspiracy theory, linking the Clinton family to supposedly suspicious deaths. #ClintonBodyCount and #ClintonCrimeFamily trended on Twitter. Around the same time, an opposite hashtag — #TrumpBodyCount — emerged, focused on President Trump’s decades-old ties to Mr. Epstein. Each hashtag was accompanied by GIFs and memes picturing Mr. Epstein with the Clintons or with Mr. Trump to serve as a viral accusation of foul play.
The dueling hashtags and their attendant toxicity are a grim testament to our deeply poisoned information ecosystem — one that’s built for speed and designed to reward the most incendiary impulses of its worst actors. It has ushered in a parallel reality unrooted in fact and helped to push conspiratorial thinking into the cultural mainstream. And with each news cycle, the system grows more efficient, entrenching its opposing camps. The poison spreads.
Mr. Epstein’s apparent suicide is, in many ways, the post-truth nightmare scenario. The sordid story contains almost all of the hallmarks of stereotypical conspiratorial fodder: child sex-trafficking, powerful global political leaders, shadowy private jet flights, billionaires whose wealth cannot be explained. As a tale of corruption, it is so deeply intertwined with our current cultural and political rot that it feels, at times, almost too on the nose. The Epstein saga provides ammunition for everyone, leading one researcher to refer to Saturday’s news as the “Disinformation World Cup.”
At the heart of the online fiasco is Twitter, which has come to largely program the political conversation and much of the press. Twitter is magnetic during huge breaking stories; news junkies flock to it for up-to-the-second information. But early on, there’s often a vast discrepancy between the attention that is directed at the platform and the available information about the developing story. That gap is filled by speculation and, via its worst users, rumormongering and conspiracy theories.
On Saturday, Twitter’s trending algorithms hoovered up the worst of this detritus, curating, ranking and then placing it in the trending module on the right side of its website. Despite being a highly arbitrary and mostly “worthless metric,” trending topics on Twitter are often interpreted as a vague signal of the importance of a given subject.
There’s a decent chance that President Trump was using Twitter’s trending module when he retweeted a conspiratorial tweet tying the Clintons to Epstein’s death. At the time of Mr. Trump’s retweet, “Clintons” was the third trending topic in the United States. The specific tweet amplified by the president to his more than 60 million followers was prominently featured in the “Clintons” trending topic. And as Ashley Feinberg at Slate pointed out in June, the president appears to have a history of using trending to find and interact with tweets.
On Saturday afternoon, a computational propaganda researcher, Renée DiResta, noted that the media’s close relationship with Twitter creates an incentive for propagandists and partisans to artificially inflate given hashtags. Almost as soon as #ClintonBodyCount began trending on Saturday, journalists took note and began lamenting the spread of this conspiracy theory — effectively turning it into a news story, and further amplifying the trend. “Any wayward tweet … can be elevated to an opinion worth paying attention to,” Ms. DiResta wrote. “If you make it trend, you make it true.”
That our public conversation has been uploaded onto tech platforms governed by opaque algorithms adds even more fodder for the conspiratorial-minded. Anti-Trump Twitter pundits with  hundreds of thousands of followers  blamed “Russian bots” for the Clinton trending topic. On the far right, pro-Trump sites like the Gateway Pundit (with a long track record of amplifying  conspiracy theories) suggested that Twitter was suppressing and censoring the Clinton hashtags.
Where does this leave us? Nowhere good.
It’s increasingly apparent that our information delivery systems were not built for our current moment — especially with corruption and conspiracy at the heart of our biggest national news stories (Epstein, the Mueller report, mass shootings), and the platforms themselves functioning as petri dishes for outlandish, even dangerous conspiracy theories to flourish. The collision of these two forces is so troubling that an F.B.I. field office recently identified fringe conspiracy theories as a domestic terrorist threat. In this ecosystem, the media is frequently outmatched and, despite its best intentions, often acts as an amplifier for baseless claims, even when trying its best to knock them down.
Saturday’s online toxicity may have felt novel, but it’s part of a familiar cycle: What cannot be easily explained is answered by convenient untruths. The worst voices are rewarded for growing louder and gain outsize influence directing narratives. With each cycle, the outrage and contempt for the other build. Each extreme becomes certain its enemy has manipulated public perception; each side is the victim, but each is also, inexplicably, winning. The poison spreads.
1 note
dwellordream · 3 years
Text
“…In modern English, we often use oath and vow interchangeably, but they are not (usually) the same thing. Divine beings figure in both kinds of promises, but in different ways. In a vow, the god or gods in question are the recipients of the promise: you vow something to God (or a god). By contrast, an oath is made typically to a person and the role of the divine being in the whole affair is a bit more complex.
…In a vow, the participant promises something – either in the present or the future – to a god, typically in exchange for something. This is why we talk of an oath of fealty or homage (promises made to a human), but a monk’s vows. When a monk promises obedience, chastity and poverty, he is offering these things to God in exchange for grace, rather than to any mortal person. Those vows are not to the community (though it may be present), but to God (e.g. Benedict in his Rule notes that the vow “is done in the presence of God and his saints to impress on the novice that if he ever acts otherwise, he will surely be condemned by the one he mocks.” (RB 58.18)). Note that a physical thing given in a vow is called a votive (from that Latin root).
(More digressions: Why do we say ‘marriage vows‘ in English? Isn’t this a promise to another human being? I suspect this usage – functionally a ‘frozen’ phrase – derives from the assumption that the vows are, in fact, not a promise to your better half, but to God to maintain. After all, the Latin Church held – and the Catholic Church still holds – that a marriage cannot be dissolved by the consent of both parties (unlike oaths, from which a person may be released with the consent of the recipient). The act of divine ratification makes God a party to the marriage, and thus the promise is to him. Thus a vow, and not an oath.)
…Which brings us to the question how does an oath work? In most of modern life, we have drained much of the meaning out of the few oaths that we still take, in part because we tend to be very secular and so don’t regularly consider the religious aspects of the oaths – even for people who are themselves religious. Consider it this way: when someone lies in court on a TV show, we think, “ooh, he’s going to get in trouble with the law for perjury.” We do not generally think, “Ah yes, this man’s soul will burn in hell for all eternity, for he has (literally!) damned himself.” But that is the theological implication of a broken oath!
So when thinking about oaths, we want to think about them the way people in the past did: as things that work – that is they do something. In particular, we should understand these oaths as effective – by which I mean that the oath itself actually does something more than just the words alone. They trigger some actual, functional supernatural mechanisms. In essence, we want to treat these oaths as real in order to understand them.
So what is an oath? To borrow Richard Janko’s (The Iliad: A Commentary (1992), in turn quoted by Sommerstein) formulation, “to take an oath is in effect to invoke powers greater than oneself to uphold the truth of a declaration, by putting a curse upon oneself if it is false.” Following Sommerstein, an oath has three key components:
First: A declaration, which may be either something about the present or past or a promise for the future.
Second: The specific powers greater than oneself who are invoked as witnesses and who will enforce the penalty if the oath is false. In Christian oaths, this is typically God, although it can also include saints. For the Greeks, Zeus Horkios (Zeus the Oath-Keeper) is the most common witness for oaths. This is almost never omitted, even when it is obvious.
Third: A curse, by the swearers, called down on themselves, should they be false. This third part is often omitted or left implied, where the cultural context makes it clear what the curse ought to be. Particularly, in Christian contexts, the curse is theologically obvious (damnation, delivered at judgment) and so is often omitted.
While some of these components (especially the last) may be implied in the form of an oath, all three are necessary for the oath to be effective – that is, for the oath to work.
A fantastic example of the basic formula comes from Anglo-Saxon Chronicles (656 – that’s a section, not a date), where the promise in question is the construction of a new monastery, which runs thusly (Anne Savage’s translation):
These are the witnesses that were there, who signed on Christ’s cross with their fingers and agreed with their tongues…”I, king Wulfhere, with these king’s eorls, war-leaders and thanes, witness of my gift, before archbishop Deusdedit, confirm with Christ’s cross”…they laid God’s curse, and the curse of all the saints and all God’s people on anyone who undid anything of what was done, so be it, say we all. Amen.”
So we have the promise (building a monastery and respecting the donation of land to it), the specific power invoked as witness, both by name and through the connection to a specific object (the cross – I’ve omitted the oaths of all of Wulfhere’s subordinates, but each and every one of them assented ‘with Christ’s cross,’ which they are touching) and then the curse to be laid on anyone who should break the oath.
…With those components laid out, it may be fairly easy to see how the oath works, but let’s spell it out nonetheless. You swear an oath because your own word isn’t good enough, either because no one trusts you, or because the matter is so serious that the extra assurance is required.
That assurance comes from the presumption that the oath will be enforced by the divine third party. The god is called – literally – to witness the oath and to lay down the appropriate curses if the oath is violated. Knowing that horrible divine punishment awaits forswearing, the oath-taker, it is assumed, is less likely to swear falsely. Interestingly, in the literature of classical antiquity, it was also fairly common for the gods to prevent the swearing of false oaths – characters would find themselves incapable of pronouncing the words or swearing the oath properly.
And that brings us to a second, crucial point – these are legalistic proceedings, in the sense that getting the details right matters a great deal. The god is going to enforce the oath based on its exact wording (what you said, not what you meant to say!), so the exact wording must be correct. It was very, very common to add that oaths were sworn 'without guile or deceit' or some such formulation, precisely to head off this potential trick (this is also, interestingly, true of ancient votives – a Roman or a Greek really could try to bargain with a god, "I'll give X if you give Y, but only if I get it by Z date, in ABC form." – but that's vows, and we're talking oaths).
…Not all oaths are made in full, with the entire formal structure, of course. Short forms are made. In Greek, it was common to transform a statement into an oath by adding something like τὸν Δία (by Zeus!). Those sorts of phrases could serve to make a compact oath – e.g. μὰ τὸν Δία! (yes, [I swear] by Zeus!) as an answer to the question is essentially swearing to the answer – grammatically speaking, the verb of swearing is necessary, but left implied. We do the same thing, (“I’ll get up this hill, by God!”). And, I should note, exactly like in English, these forms became standard exclamations, as in Latin comedy, this is often hercule! (by Hercules!), edepol! (by Pollux!) or ecastor! (By Castor! – oddly only used by women). One wonders in these cases if Plautus chooses semi-divine heroes rather than full on gods to lessen the intensity of the exclamation (‘shoot!’ rather than ‘shit!’ as it were). Aristophanes, writing in Greek, has no such compunction, and uses ‘by Zeus!’ quite a bit, often quite frivolously.
Nevertheless, serious oaths are generally made in full, often in quite specific and formal language. Remember that an oath is essentially a contract, cosigned by a god – when you are dealing with that kind of power, you absolutely want to be sure you have dotted all of the ‘i’s and crossed all of the ‘t’s. Most pre-modern religions are very concerned with what we sometimes call ‘orthopraxy’ (‘right practice’ – compare orthodoxy, ‘right doctrine’). Intent doesn’t matter nearly as much as getting the exact form or the ritual precisely correct (for comparison, ancient paganisms tend to care almost exclusively about orthopraxy, whereas medieval Christianity balances concern between orthodoxy and orthopraxy (but with orthodoxy being the more important)).”
- Bret Devereaux, “Oaths! How do they Work?”
19 notes
writsgrimmyblog · 6 years
Note
So Do you think if Nick were a straight man dating a 22 yr old woman, or a straight woman dating a 22 yr old man, it would have been reported differently? Perceived differently? How about a gay woman dating a 22 yr old woman?
Yes, I do think it would have been reported differently using all three of your examples, and it has been reported differently. I would like to preface this by saying I can't believe anyone would take any issue with a relationship between two consenting adults with this kind of age difference, irrespective of gender, but I'll give answering this a go, even though I think your question is actually enormously complex because it engages issues like heteronormativity, the way non-het women are talked about as opposed to non-het men, misogyny, ageism and the constant objectification and sexualisation of women, a.k.a. the goddamn patriarchy. Beneath a read-more cut to save people's dashes from yet another wall of text from me.
Age difference between two men
For all the reasons I outlined in this post, the way the British tabloids have written about male homosexuality has a particular history which must be understood in the context of discussions about the language used when the press write about two men. It has historically framed homosexuality as something to be feared, and portrayed gay men as predatory and a threat to children.
This is a narrative which still persists, implicitly, reading between the lines, barely concealed, however you want to put it. It’s there, it exists, and it comes from a place which is no good. 
The reports on Nick's new relationship have highlighted some of the homophobia that remains implicit in the language used by the media when writing about relationships between men, but there are other examples. See, for example, the scrutiny about the 20-year age gap between Tom Daley and Dustin Lance Black, something they have been required to address on multiple occasions.
Let's not pretend homophobic fear mongering isn't still being peddled by the British media (and don't even get me started on the mess of articles around transgender issues that have exploded across the press recently). Whether you agree with me or not with regard to my suggestion that the focus of many of the articles in this instance is couched in the language of thinly-veiled homophobia, there are far more explicit examples of it.
If you’re unsure, I direct you again to Tom and Dustin, and Richard Littlejohn’s piece in The Daily Mail, written just this year, titled ‘let’s not pretend two dads is the ‘new normal’’ 
Age difference between two women
There was a ten year age gap between Cara Delevingne and St Vincent and I don’t remember there being any concern or sensationalism in that regard. 
I found only one article in the British tabloids which, beyond a cursory mention of both of their ages (not in the headlines or bylines and certainly not the focal point of the articles), addressed the age gap. It did so because it was something St Vincent herself raised in an interview.
There was a 14-year age gap between Cara and Michelle Rodriguez, but the slant to the way that was reported was different, the assumption being that Michelle was ready to settle down and have a family and Cara wasn't. Check out the articles written about 'broody' Michelle and women behaving improperly, if you have the stomach for them. The issue wasn't that Cara was young, it was that the expectation was Michelle would be ready to start having kids and Cara wouldn't be.
Women in relationships with other women also face the issue that they are ‘gal pals’ for a long time before their relationship is actually even acknowledged by the media so it takes the tabloids a while to catch up in the first place. Sticking with Cara as a high profile example, The Daily Mail today posted an article describing her and Ashley Benson as ‘gal pals,’ a phrase that has now become synonymous with the media being generally blinkered to the fact queer women exist, and one which has been repeatedly memed and poked fun at by the interwebs at large.
One relationship between two women that I’ve seen attract press attention over the age gap is Sarah Paulson and Holland Taylor. There is a 32 year age gap between them. Interestingly, the issue with that doesn’t appear to be that Taylor is predatory, but rather more to do with what on earth Paulson might see in an older woman. 
So, in the case of Paulson and Taylor age has some media outlets talking, but for different reasons, this time coming from a place of ageist misogyny and the supposed expiration date on female sexuality. Paulson responds to that best when she says: “If anyone wants to spend any time thinking I’m strange for loving the most spectacular person on the planet, then that’s their problem.”
Age difference between men and women (older woman, younger man)
If it was a 34 year old straight woman and a 22 year old straight man, I doubt it would generate much excitement to be honest, although I don’t have a specific example to point to.
Of course with older women/younger men you start edging into cougar territory. See, Demi Moore and Ashton Kutcher, who had a 15 year age gap between them. 
The Sun, the tabloid that got everyone so excited about Nick’s new chap, ran a charming article titled: Meow! Top Ten Sexiest Celebrity Cougars back in 2013, which has so many problems I can’t even begin to unpack them all, but consider the phrasing used: X man ‘bags himself a cougar’, ‘yummy mummy’, ‘toy boy’, ‘sinking her claws into’, ‘man hungry’, ‘her ferocious appetite.’ 
The age disparity might be couched in terms which indicate the older woman is predatory (literally, a cougar) but it is the way women are written about and gendered expectations of women that drive those speculative pieces in this instance. 
This kind of angle requires a totally different perspective and a deep dive into how women are treated by the press more broadly, the sexualisation and objectification of women, which I don’t have time to unpack in depth here, but I imagine you get where I would be headed with all of this.
Age difference between men and women (older man, younger woman)
No one would be talking about it at all, and even if ages did feature in an article, it certainly wouldn't be the headline/byline. There are literally so freaking many celebrity relationships with a 10–14 year age gap where the man is the older partner, and much, much higher instances of age disparity, so I have taken just a couple of examples that make my point.
In 2015 at 37 Maggie Gyllenhaal was deemed too old to play George Clooney’s love interest. He was then 55. That’s an 18 year age gap. She didn’t get the part because SHE WASN’T YOUNG ENOUGH to make the attraction of her 55 year old co-star believable. 
Younger women and older men being in relationships is normalised on screen, in television and very, very common, so much so that the age difference has to be significant for it to be the focus of media attention. 
There’s an 11 year age gap between Ryan Reynolds and Blake Lively. I don’t remember this being deemed particularly newsworthy.
There’s an 11 year age gap between Angelina Jolie and Brad Pitt, and I don’t remember this being deemed particularly newsworthy, either. What was far more newsworthy was the infidelity, and, shock horror, Angelina was framed as the predator with her ‘weird’ relationship history, which included a same-sex partnership. 
Women get past their prime, men trade them in for a younger model, and it’s celebrated in a slap on the back, well of course he did, kind of way. 
Even when the age gap is huge between younger women and older men, who is (most commonly) the predator and who is the victim in that case according to the media? It’s the woman who’s the predator, obviously. Looking for a payout from a rich man’s will. We all know the phrase ‘gold digger’ and all of its connotations.
Again, if you’re still reading, thank you for bearing with me. I promise not to have any more opinions for at least a few weeks.
18 notes
flauntpage · 6 years
Text
The Ultimate Ranking of the Best Hockey Films Ever
Great hockey movies are hard to come by.
There's what first comes to mind: Slap Shot and Goon and Miracle. Though there's more bad than good: The Love Guru, MVP, or the boring-as-sin Sudden Death.
One thing that's fairly certain is that hockey movies tend to represent the experience of the wealthy white and male demographic, the one that also populates the sport itself.
The NHL says that "hockey is for everyone," and yet many believe that official motto to be one of the league's myths. Fines for on-ice gay slurs are pocket change for the privileged. Ice Girls' bodies are still exploited for a male audience. Social change within the hockey community is diminished yet commodified into sloganeering. And while the NHL was one of the first pro leagues to partner with the You Can Play project, many feel hockey's rate of change to be glacial. Commonplace hockey myths suggest that NHL owners don't make much money off the game or that enforcers are necessary to police on-ice behaviour, exactly the kind of myths that are reinforced in hockey movies.
Some of the best hockey films are lesser known yet they question our assumptions about the game. Like the assumption that Russian players are "enigmatic" or that men are inherently better and more entertaining on the ice than women, or that hockey is a unifying force in communities or across a nation.
Should Slap Shot and Goon stand as the best of the best if they reinforce hockey's monoculture? Are Miracle and Youngblood and Mystery, Alaska just reselling the myths we've been buying?
Great hockey movies are out there. It's just time to reconsider the rankings.
1. RED ARMY (2014)
Gabe Polsky's Red Army does what few if any films have done: provide a real glimpse at the life of Soviet hockey players inside the Iron Curtain. It comes across as the most honest portrayal of the Soviet Union's relationship to hockey, and depicts how dramatically Russian-style hockey changed the sport.
The documentary succeeds as the best hockey film on this list because it weaves together sports and politics, it asks its audience to challenge their assumptions about certain hockey myths, and it expertly uses hockey footage and commentary to tell a compelling story.
Polsky, through intimate interviews with the famed Russian Five and goalie Vladislav Tretiak, captures such a personal account of the players that audiences feel they've learned something about the players of the historically tight-fisted Soviet organization. Whereas hockey myth-making has portrayed Russian players as robotic, or as self-interested divas, Red Army does well in illustrating the Russian Five and their goaltender as sympathetic individuals with six different points of view.
Viacheslav Fetisov, among the first Soviet players to play in the NHL, is a star in the film, and Polsky's dynamic with him on camera is a big part of what makes Fetisov's scenes work so well. The director asks Fetisov questions and often the player will not initially answer but only react with a facial expression—moments Polsky uses to splice in visuals and recordings to provide an answer to what Fetisov doesn't say. When he asks Fetisov about the Soviets' disappointing loss to the United States in the 1980 "Miracle On Ice," for instance, no words are necessary. The best moments are when it's "show" rather than "tell."
Red Army illustrates the conflicting approaches to coaching between Anatoly Tarasov and Viktor Tikhonov, the varied personal politics of the players, and it highlights the politics that drive hockey-related decisions in nation-building. Its use of historical footage and ability to tell a compelling, real-life story is unmatched in hockey films.
2. CANADA-RUSSIA '72 (2006)
The second-best hockey film is one in which Canadians are finally self-effacing about one of their greatest on- and off-ice triumphs (read: embarrassments). In Canada-Russia '72, the CBC dramatization relives the famed Summit Series of 1972, when for the first time Canada's best professional hockey players took on the powerhouse Soviets.
Released in 2006, the three-hour film casts a critical eye on the resentful, obnoxious, and violent behaviour of the Canadian exhibition team that eked out a victory in the eight-game series. The historical event is understood by many Canadians as an affirmation of the country's dominance in hockey. But Canada-Russia '72 paints everyone from Alan Eagleson to Harry Sinden to Phil Esposito as petulant and crude in their pursuit of beating the surprising Soviets. What was supposed to be a walk in the park turned into a national identity crisis. But rather than portray the Canadian victory as a case of underdogs exhibiting perseverance—and free-market capitalism defeating communism—Team Canada is viewed here as the Apollo Creed to the Russians' Rocky. They win the contest but the victory feels empty.
"You know Ms. Fournier, the average Canadian might never forgive us if we lose this series," says head coach Harry Sinden to the fictional media relations character. "But the rest of you intellectuals? You'll never forgive us if we win, will ya?"
The hockey in the film is terrific to watch. Entire sequences are reproduced from documentary footage that seem natural rather than staged. Recognizable beats maintain a degree of tension regardless of the fact that we know the outcome. The audience is privy to a great deal of dramatized behind-the-scenes moments that provide new context.
In one of the film’s most effective (and probably exaggerated) examples of the Canadians' arrogance, a young Soviet boy offers Esposito a Lenin pin in exchange for his hockey stick. Esposito instead offers him a stick of gum. The boy says in Russian, which Esposito doesn't understand, "You cheap son of a bitch."
When Sinden gives his big speech in the dressing room ahead of Game 8—a moment typical of sports dramas, intended to rile the players and the audience—he says, "We win this game, we win the series. We vindicate ourselves and everything we stand for." That line might be heard as inspiring in a straight-forward sports drama. Instead, it sounds like what Canadians "stand for" in hockey is upholding the assumption that Canadians are the best at it.
Canada's victory celebration in the film is muted. It's more relief than national pride. As Canadians, we've been buying the line Sinden voiced, that we were vindicated by the win. But the film succeeds because it throws that myth in the trash.
3. NET WORTH (1995)
There is one scene in this film that stuck with me since 1995 despite only watching the CBC television miniseries the one time. Detroit Red Wings general manager Jack Adams is negotiating with Gordie Howe over the star's one-year contract. Howe's wife Colleen had just prompted her husband to ask for an extra $2,000 over last season rather than his usual ask of a $1,000 raise. Howe is manipulated by Adams and folds. Adams smiles and tosses the signed contract in the drawer.
Based on the book by David Cruise and Alison Griffiths, Net Worth describes the beginnings of the formation of the NHL Players' Association in the face of tyrannical owners who exploit the players and bust their attempt to form a union.
In one of the best scenes in the film, the Association's first lawyer spells out, one by one, the popular myths that the NHL sells—like Randy in Scream listing the rules of the horror genre. To all of them, the lawyer Milton Mound says, "Bull. Shit. When you sniff around a pile of money and the other side clams up, they are hiding something."
What's particularly memorable is the contrast between the PA's first leader, Ted Lindsay, and his Red Wings teammate Howe. Lindsay takes the first cautious steps toward achieving fairness with the league but Howe, the most recognizable hockey player across the US and Canada at the time, decides again and again not to use his influence to better the players' position. Howe's character effectively shows that NHL players themselves become indoctrinated by the myths of the game rather than demand the rights and the money they deserve. Hockey is for fun, after all. It's a boy's dream. At least that's what owners have been selling to everyone.
The film isn't afraid to make players and owners alike look bad in the eyes of the viewer. It challenges some of the great assumptions of the NHL, like the heroism of a star player or the father-knows-best style of management. Toronto Maple Leafs owner Conn Smythe is even represented as using racist language three times in the movie, the last of which the Jewish lawyer Mound responds with, "Smythe, it's hard to believe you fought against the Nazis."
Once you've seen Net Worth, you won't forget it.
4. INDIAN HORSE (2017)
No list of the best hockey films can be complete or accountable to the sport's troubled history without acknowledging its exclusionary and abusive nature. And no hockey film does this better than 2017's Indian Horse.
Situated within Canadian residential schools that abused and neglected Indigenous children, the film based on Richard Wagamese's book of the same name centres around the young boy Saul Indian Horse who is ripped from his family but attempts to lift himself out of the residential school life by teaching himself to play hockey.
"The rink became my escape," says Saul in narration. "The ice my obsession. The game my survival."
What the Canadian film does so well is illustrate how hockey has, since its inception, been a tool to help enforce white cultural dominance and nationhood. While Saul is eventually able to leave the school for a foster family, his new hockey team made up of fellow Indigenous players experiences the same kind of subjugation and violence at the hands of Canadians in the rinks and in the towns they visit. As the teachers at those schools tried to assimilate Indigenous children into Canadian culture, hockey players and coaches did the same on the ice—only "assimilate" is too kind a word for what took place.
Indian Horse's best hockey sequence is a montage in which Saul's Indigenous team defeats a local white team while Stompin' Tom Connors's song "Sudbury Saturday Night" plays on the film's soundtrack. Connors's music, most notably "The Hockey Song," which is ubiquitous in hockey and was recently inducted into the Canadian Songwriters Hall of Fame, typically signifies to English Canada a sense of nationhood intended to unify people. Instead, the way the music is used signifies that neither the sport nor the country's identity can be appropriated by just one people. Saul and his teammates stake their own claim to the land and the game by playing skilled, virtuous hockey in the face of intolerance.
Indian Horse is not a story about a resilient "other" who succeeds despite the odds. Saul quits hockey despite pleas from his coach who appeals to Saul by pointing to the success of Indigenous NHLer Reggie Leach. The film reminds us the stories of the Saul Indian Horses are as important to see as the Reggie Leaches.
As Brett Pardy notes in his review, "This story makes it clear hockey is more often an extension of Canadian racism than a unifying force." This chapter of hockey's history, and Canada's, is as important as any other.
5. SLAP SHOT (1977)
The throne for best hockey movie has been Slap Shot's to lose for years, and yet it's trotted out again and again on best-of lists like it's a geriatric honouree at a Montreal Canadiens pregame ceremony. Its iconography and cultural impact is irrefutable but it's time to cede the throne to more inclusive films.
The movie's casual sexism and homophobia hits you like a brick when you watch now. Women are cast as wives and girlfriends only, portrayed as drunk hangers-on who complain while their partner lives out his extended childhood. One hockey wife is said to have slept with another woman and that prompts some players to wonder if that makes her husband gay. Paul Newman's character even exploits that information on the ice to manipulate the husband into giving up a goal against.
Women and their bodies are referred to with deplorable name-calling—and the thing is, that's kind of the point: to paint an accurate picture of men's pro hockey in the 1970s. Written by Nancy Dowd, whose brother played this level of hockey at the time, the film is a satire of commodified hockey culture and its spectacle of violence. And it gets major credit for that. But in revealing such naked truths about the game—like its casual intolerance—it reinforces to subsequent generations that hockey normalizes exclusionary behaviour. When Slap Shot has a chance in the end to say something progressive about women's roles in the story, it suggests that if only a hockey wife got a salon makeover, she'd forget her troubles.
But the film does have its transgressive points, allowing it to still survive among the top five. It's a sports movie about the economic malaise and widening rich-poor gap of the 70s, the resulting cultural frustration that leads to a blood thirst for violent entertainment, and makes a fairly bold statement with the Ned Braden striptease scene by criticizing the pandering to fans by the sport through the commodification of athletes and their bodies. The ambiguous ending, when the Chiefs win the championship while their jobs remain tenuous, even flips the standard sports drama narrative by questioning how we evaluate success and heroism.
But it's the fact that you can't watch Slap Shot without wincing or even turning off the film part way that pushes it down this list. Time is no friend of this film.
6. THE MIGHTY DUCKS (1992)
Despite The Mighty Ducks being pretty typical Disney fare, it was the hockey movie for a generation of young hockey fans who'd never seen The Bad News Bears. A championship game that didn't consummate with a fight but instead a skilled play. A coach who tells his player, "I believe in you, Charlie. Win or Lose."
The Mighty Ducks condemns the win-at-all-costs attitude of many hockey films while a team of lower-middle class kids beat the rich kids. It's one of the few to include non-white and non-male players on the featured team, and gives on- and off-ice screen time to just about every character.
The movie has a lot to do with classism in hockey, albeit in a sanitized way: The Ducks resent that their coach was once a (rich) Hawks player, one Duck calls his teammate and former Hawk Adam Banks a "cake-eater," and yet coach Bombay (Emilio Estevez) uses sponsorship dollars from his wealthy law firm to pay for necessary jerseys and equipment. In The Mighty Ducks, success in hockey still comes at the expense of your wallet.
It's a Disney-fied, contradictory mess but I'm still crying at that "I believe in you, Charlie" line.
7. THE ROCKET
Another film taking aim at Canada's national politics intersecting with hockey, The Rocket stands above the average hockey biopic by portraying Canadiens legend Maurice Richard as the tip of the Quebecois cultural spear during a time of division between French and English Canada.
Richard's personality in the film is an idealized portrait of French resistance in the face of English cultural dominance. The NHL referee who holds Richard's arms while a Boston Bruins player hits him with two free punches represents the English bias of NHL management who handcuff their French players. The Richard Riots—a politicized event often linked to Quebec's Quiet Revolution of the 1960s—bookend the film, couching the player's biography within the province's socio-political history.
When Canadiens coach Dick Irvin says to his team, "I need players who hate to lose," he's using a common sports maxim in reference to Richard. But those words could also describe how Richard embodies the attitudes of many Quebecers toward English rivals in politics and in the NHL.
The Rocket's sensitive approach to the story is seen too in the filming of the hockey scenes. The low-lighting of 1950s hockey arenas, the helmet-less players, and the cool colour tones give us a sense that Richard is alone in the cold of the rink.
Points go to any hockey movie that features a grown man crying in front of his teammates in a dressing room. The film hits the dramatic a little too heavily at times but is another in the genre that flips the standard sports drama finale by not concluding with the hero's team winning the ultimate game. This movie's about the NHL taking one step forward and two back.
8. THE GAME OF HER LIFE (1998)
Documenting the lead-up to the first women's Olympic ice hockey tournament in 1998, The Game of Her Life provides a rare and unique look at one of the most significant chapters in women's hockey history from the Canadian perspective.
Produced by the National Film Board and directed by Lyn Wright, the film charts Team Canada's ascent to its first Olympic Games and its disappointing loss to Team USA. These were the first Olympic matches between two of hockey's fiercest rivals, and the very real tension between the teams is set up well.
Among the best sequences is when coach Shannon Miller is meeting with players to tell them whether or not they made the team. Miller told me in an interview earlier this year that she relied on her experience as a police officer to prepare for the following day's roster cuts by recalling having to tell victim's families their relative had died. The elation and sorrow in that sequence is the heart of the documentary, and one of those roster cuts includes future Hall of Famer Angela James.
Though the documentary isn't beloved by all the Canadian players. Cassie Campbell, interviewed for the same story as Miller, told me she didn't appreciate the film's portrayal of her supposed modelling career (she took one class at age 16). "I was such a team player and yet I could feel the attention going to a small amount of us," said Campbell.
The rare glimpse into the women's game, the coverage of one of sport's most exciting rivalries, and the stark differences apparent between the men's and women's games makes The Game of Her Life a standout.
9. GOON (2011)
Another movie that may appear to rank too low on this list. But Goon, like Slap Shot, isn't standing the test of time well.
A fun story about a dim-witted Doug Glatt (Seann William Scott) who can't skate but can fight and protect the skilled players by intimidation, the independent Canadian film wants to sell the myth of the self-aware goon who violently avenges his teammates because the game is inevitably violent. It succeeds as fun and entertaining, as Doug is probably the nicest hockey player ever on screen, but its glorification of fighting and lack of attention toward the consequences of fighting just don't hold up, and it's only 2019.
The hockey scenes, though, are maybe the best in the industry—Liev Schreiber elevates everyone's game with his acting—and who doesn't shed a tear when Xavier Laflamme is set loose to score once Doug has punched out Schreiber's Ross Rhea? There's just too much cementing of boys hockey culture here, particularly with the casual homophobia. If you don't think the satire in Slap Shot helps normalize intolerance in the hockey dressing room, sit back and watch a group of actors riff on gay jokes for extended sequences while an implied gay character listens nearby.
Still, for a movie with questionable material, it has a lot of good writing and performances, and it's a satisfying experience where hockey fans get to interrogate their fandom and the role of violence in the sport.
10. MIRACLE
It's hard to fill out a roster of great hockey movies, and Miracle just makes the cut. There are better films, like Inside Out, that are hockey-adjacent. There are better films that few have seen, like Swift Current, which documents Sheldon Kennedy's experience of sexual abuse in junior hockey. And although Miracle has its charms, it embodies what's stale about hockey films.
Miracle is guilty of the most hockey movie clichés on this list. A group of underdog players beat the unbeatable team in improbable fashion. The players are bag-skated until they learn a valuable lesson. The coach dismisses the odds and relies on instincts and trust. The name on the front of the sweater is more important than the one on the back. A dressing-room speech inspires victory.
A great hockey film should do more than arouse national pride. It should relate to everyone in the audience, not just the high achievers and Type-A's. "This cannot be a team of common men," says Kurt Russell's Herb Brooks to the 1980 American Olympic team. "Because common men go nowhere. You have to be uncommon."
Canada-Russia '72 does everything well that this movie does but does it while questioning how history was recorded and how Canadians remember the events. Still, Miracle is among the best there is.
Honourable Mentions
The Sweater: This National Film Board short is a time capsule of 1950s French Canada, and in that context it's a staple in the hockey canon.
Swift Current: A hockey film only in part, the Canadian-made documentary on former NHLer Sheldon Kennedy charts his sexual abuse at the hands of the former junior hockey coach Graham James with startling detail. You won't be able to unsee this underside of hockey's history.
Inside Out: The Pixar-animated film touches briefly on the main character's relationship with hockey but it becomes a significant element to a beautiful story. If you need a good cry, sob heavily to this movie.
Blades and Brass: This 1967 NFB short combines NHL highlights with Tijuana Brass-style music. Why haven't you clicked through yet?
Goofy–Hockey Homicide: Almost 100 years ago, Disney thought hockey was a foaming-at-the-mouth celebration of violence, and the psychedelic approach of Hockey Homicide is truly a sight to see.
Dishonourable Mentions
Mystery, Alaska: A fun concept sullied by misogyny and an absolutely wretched Mike Myers cameo.
Sudden Death: Some of you don't remember how boring this movie is and it shows. If you loved Die Hard, you're going to hate this.
Youngblood: A b-movie with a confused message about violence in hockey. Still, look for Keanu Reeves playing a French-Canadian goaltender.
The Love Guru: (I will not be sharing any thoughts about this alleged film, thank you for your understanding.)
Goon 2: The Last of the Enforcers: No thanks to the faux-Sportscenter panel of James Duthie and T.J. Miller. I've never fast-forwarded through a movie faster.
This article originally appeared on VICE Sports CA.
2 notes
heatpeen03-blog · 6 years
Text
The Value of W, or, Interdisciplinary Engagements on Culture
OCTOBER 31, 2018
LAST SPRING, I attended a conference in New Mexico featuring evolutionary biologists working on a new research program they have been calling the “extended evolutionary synthesis” (EES). The program aims to go beyond the so-called “modern synthesis” of the mid-20th century, which joined Darwinism to Mendelian genetics, whose mathematical formulations could be simply and straightforwardly expressed. Biologists involved in the EES have been calling for a broader and less reductive view of evolution, unrestricted to Mendelian genes. In particular, they have been addressing the modern synthesis’s paucity of information about developmental biology. These EES revisionists are interested in feedbacks: in how developmental processes, along with ecological and even cultural ones, feed back into one another, into genetic and other forms of inheritance, and therefore into evolution. While the modern synthesis proposes that epigenetic, developmental, ecological, and cultural processes are all products of evolution, the EES claims they are causes as well as products.
The roots of this movement extend back to the early 1970s, to the work of Richard Lewontin at Harvard, and Luigi Luca Cavalli-Sforza and Marcus Feldman at Stanford, among others. But the reigning, reductive neo-Darwinist paradigm — in other words, the modern synthesis — remains well entrenched, and its defenders staunch in its support. Only in the last 25 years or so has the more expansive vision of the EES slowly begun — against much resistance — to establish itself in mainstream biology.
As part of this development, EES biologists have been increasingly interested in culture, among other forms of transformation and transmission, and so have welcomed the input of humanists, including philosophers and historians of science like me, whose job it is to study and understand culture. Taking part in their conversations has in turn informed my own work in the history of evolutionary theory.
Still, I experienced a moment of comical culture shock at the recent meeting I attended. A biologist wrote an equation on the whiteboard in which one of the variables was a “w.” He then circled the w, explaining that it represented “culture,” and pointed out that under certain conditions, the value of “w” would tend toward zero, while under other conditions it would tend toward 100. “Perhaps,” I thought, “we don’t mean quite the same thing by ‘culture.’” To a humanist, or anyway to this one, “culture” is an abstract noun encompassing many things of many kinds: processes, objects, habits, beliefs both explicit and implicit. It seems a category mistake to think that we can represent such a welter by a single variable, or that the whole jumble could act as a discrete thing having a single quantifiable effect on some other discrete thing. Could we say, for example, that in a given society, “culture” influences “politics” by some quantifiable amount x? Could we say that “the arts” has a y-percent effect on birth rate or life expectancy?
As it turned out, I had somewhat misunderstood the situation. When I expressed a certain dubiousness about representing “culture” with a single variable, an EES biologist explained to me that the variables standing for “culture” in biologists’ mathematical models are not meant to denote the entire Gestalt, but rather quantifiable bits of culture: a single behavior, for example, that might be taught, learned, transmitted, or counted, and whose effects on survival and reproduction can be measured and modeled. Perhaps these individual culture variables might in principle add up to a single, overarching W, but for the moment, no one claims to be able to make that summation. For now, we can simply use the little w’s to build discrete cultural bits or forms into an evolutionary model. This seems to me more credible, but it still assumes that we can meaningfully represent cultural forms as quantifiable bits, and that this will add more to our understanding of the role of cultural forms in evolutionary processes than simply trying to describe this role in qualitative terms. I can’t help wondering if that’s a sound assumption.
Of course I’m by no means the first to raise the question, nor indeed have such objections been confined to humanists. Lewontin himself, together with the historian Joseph Fracchia, argued in a 1999 paper against the idea of cultural evolution. They wondered whether conceptualizing entities like “the idea of monotheism” as “cultural units” begged crucial questions — for example, how can we count up these units in a population, and what are their laws of inheritance and variation? Fracchia and Lewontin maintained that there could be no such general laws because cultural phenomena, unlike atoms and molecules, differ from one another in their properties and dynamics of transmission and change. “There is no one transhistorical law or generality,” they contended, “that can explain the dynamics of all historical change.” [1] Marcus Feldman disagreed, albeit not specifically with regard to the existence of general laws explaining the dynamics of all historical change; rather, he defended the notion of “observable units of culture,” which he did not associate with grand organizing ideas such as monotheism. An example of an observable cultural unit for Feldman is a behavior or custom that follows statistical rules of transmission, and that can therefore be a legitimate object of mathematical study. [2]
The biologist with the “w” variable and I were thus reenacting an intellectual confrontation that has been going on for decades. As is often the case in longstanding debates, we actually agree on the essentials: that nature and culture are at bottom made of the same stuff — in fact, of one another — which no humanistic or scientific inquiry can legitimately disregard. Evolutionary theory must encompass cultural processes just as human history must encompass biological ones. But, despite our deep accord, this biologist and I are thinking incommensurately about methods, about how to put our two fields into communication. His method is mathematical modeling, and mine is thick description. These are diametrically opposite in trajectory, one abstractive and reductive, the other concretizing and expansive. While I understand and admire these biologists’ conclusions, I keep wondering: Why these methods? Why mathematical modeling? By which I mean, what function do biologists intend their mathematical models to serve? Are they meant to prove claims about evolution? Or rather to express, represent, or advocate certain interpretive views of evolutionary processes? If the latter, why choose this particular means of expression, representation, advocacy? These will surely seem naïve questions to any biologist reading this. But I have learned from teaching college freshmen and sophomores that naïve questions from untrained newcomers can be the hardest and most useful, which emboldens me to ask mine.
¤
Kevin Laland, author of Darwin’s Unfinished Symphony, was an organizer of the conference I attended and is a leader of the Extended Evolutionary Synthesis research program. Reading his important and heartfelt book, the to-date summary of a groundbreaking career, I had similar feelings as I did at the conference. Uppermost among these is heated agreement: Laland’s essential tenets seem to me profoundly right, some indeed incontrovertible. These include the precept that cultural practices — in particular teaching, imitating, and copying — are causes as well as results of evolution; that in mammals and especially humans, such cultural practices have accelerated evolutionary development by constantly creating “new selection regimes” in a process that Laland, citing evolutionary biologist Allan Wilson, calls “cultural drive”; and that, accordingly, in humans especially, there has been a “gene-culture coevolutionary dynamic.” The first of these — that cultural practices are causes as well as results of evolution — seems to me incontrovertible, but more like a first principle than like an empirical result. Cultural practices must be causes as well as results of evolution because any result of the evolutionary process becomes a feature of the world of causes shaping the continuation of that process.
The other principal tenets — such as “cultural drive” and “gene-culture co-evolution” — are not quite first principles, but they seem to me ways of understanding how the feedback-loop of evolution encompasses cultural forms. To express these ways of understanding in the language of mathematical modeling seems fine, if one likes to do that, but no more definitive than expressing them in words. This is because a mathematical model, like a verbal description, contains many layers of interpretation. This is not a criticism: interpretation is essential to (and ineradicable from) any attempt to understand the world. But insofar as a mathematical model is taken to prove rather than to argue or represent, that’s where I think it can mislead.
Laland has devoted his career to pioneering work against reductive, simplistic, and dogmatic accounts of evolution, building brick by brick a sound case for the richer and more complex vision of the EES. Darwin’s Unfinished Symphony is a record of his resounding success. But while he has been constructing this revisionist scientific theory, he has often supported it by traditional methods. An example is his game-theoretic tournament to study social learning. After offering several examples of social learning in animals — such as Japanese macaques who learn from one innovative macaque to wash their sweet potatoes before eating them, and fish who learn from one another where the rich feeding patches are located — Laland asks what might be the best “social learning strategy.” He explains that the “traditional means to address such questions is to build mathematical models using, for instance, the methods of evolutionary game theory.”
Game theory became a standard model in evolutionary biology in the early 1970s with the work, notably, of the British theoretical evolutionary biologists W. D. Hamilton and John Maynard Smith, along with the population geneticist George R. Price. Hamilton, Price, and Maynard Smith developed a game-theoretic approach to modeling the behaviors of organisms in the struggle for survival. Their work was foundational to the neo-Darwinist, gene-centric program that Laland has devoted his career to challenging. In this gene-centric view, all higher-order entities — individual organisms, their behaviors and interactions — are epiphenomenal, controlled by and reducible to genes, so that any apparent agency or intention on the part of an organism is illusory. Organisms survive if they happen to achieve an optimal state of genetic affairs, one that maximizes some function for greater reproductive success. They die out when they fail to do so. Maynard Smith accordingly emphasized that his technical definition of “strategy” was strictly behaviorist. “Nothing,” he maintained, “is implied about intention.” A strategy was merely “a behavioural phenotype,” in other words, “a specification of what an individual will do [in a given situation].” [3] These “strategies,” therefore, involved no ascription of internal agency, but merely outward observations of behavior. Neither observed behaviors nor any other macrolevel phenomenon could play a causal role in evolution according to this school of thought.
Maynard Smith’s approach has inspired the most reductive of neo-Darwinists. For example, Richard Dawkins has adapted it to his own theory of gene functioning, emphasizing that the “strategies” in question are behaviorally defined and do not require the ascription of consciousness, let alone agency, to the strategic agent. Dawkins indeed refers to “unconscious strategists,” the deliberate oxymoron encouraging the reader to accept these apparent ascriptions of agency to genes as radical denials of any such agency. [4] Neither behaviors, nor agency, nor consciousness, nor culture operates causally at any level of Dawkins’s picture; all reduces to just gene functioning.
Game-theoretic modeling has been a hallmark of neo-Darwinist reductionism and, specifically, of the denial of any kind of evolutionary agency to the evolving organism. But in Darwin’s Unfinished Symphony, Laland describes how he and his collaborators used game theory in an innovative way, to design a virtual world in which they hosted a tournament. The game involved virtual “organisms” or “agents” engaging in a hundred “behavior patterns,” with varying rates of success resulting in greater or lesser “fitness” (i.e., survival and reproduction). The game also included three different “moves” — “innovate,” “observe,” and “exploit” — representing different phases of asocial or social learning. More than a hundred people of various ages and backgrounds took part in the game. Unlike in Maynard Smith’s applications of game theory to evolution, Laland and his collaborators were not looking for an optimum in the form of a single function or property to be maximized. They did not pre-judge what had to happen in order for an organism to win the competition. Rather, they set the competitors loose and waited to see who would triumph. The winning strategy was an unpredictable, complex mix of behaviors, although it did represent an overall optimum solution composed of behavioral bits.
Analyzing the winning strategy, Laland concludes that observing and copying are tremendously valuable, much more so than innovating on one’s own except “in extreme environments that change at extraordinarily high rates,” which must be rare in nature. The conclusion is persuasive, but the tournament seems to me more a way of expressing than of proving this point: the virtual agents and their behaviors and strategies of course constitute an interpretive representation of natural processes. They are not drawn in pastels or composed in prose, but the fact that they are programmed on a computer makes them no less a representation.
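To make the flavor of such a tournament concrete, here is a minimal toy sketch in Python. It is emphatically not Laland's actual tournament, whose rules, payoffs, and submitted strategies were far richer; every number, probability, and function name below is invented for illustration. What the sketch does preserve is the three moves (innovate, observe, exploit) and the mechanism Laland credits for the value of copying: agents exploit the best behavior they know, so anything an observer copies has already been filtered by someone else's experience.

```python
import random

N_BEHAVIORS = 100      # possible "behavior patterns", as in the tournament
ROUNDS = 1000

def simulate(n_agents=50, social=True, change_rate=0.001, p_learn=0.1, seed=1):
    """Toy innovate/observe/exploit game. Each agent keeps a repertoire of
    behaviors with remembered payoffs; with probability p_learn it spends the
    round learning (socially or asocially, collecting no payoff), otherwise it
    exploits its best-remembered behavior and banks that behavior's current
    payoff. Returns the mean payoff per agent per round. Illustrative only."""
    rng = random.Random(seed)
    payoffs = [rng.random() for _ in range(N_BEHAVIORS)]
    repertoires = [{rng.randrange(N_BEHAVIORS): 0.0} for _ in range(n_agents)]
    last_exploited = [next(iter(r)) for r in repertoires]
    total = 0.0
    for _ in range(ROUNDS):
        for b in range(N_BEHAVIORS):              # the environment slowly drifts
            if rng.random() < change_rate:
                payoffs[b] = rng.random()
        for i in range(n_agents):
            if rng.random() < p_learn:
                if social:
                    # OBSERVE: learn the behavior a random other agent last
                    # exploited, already filtered by that agent's own choices
                    b = last_exploited[rng.randrange(n_agents)]
                else:
                    # INNOVATE: asocial trial of a randomly chosen behavior
                    b = rng.randrange(N_BEHAVIORS)
                repertoires[i][b] = payoffs[b]     # remember its current payoff
            else:
                # EXPLOIT: perform the best-remembered behavior, collect payoff
                b = max(repertoires[i], key=repertoires[i].get)
                last_exploited[i] = b
                total += payoffs[b]
    return total / (ROUNDS * n_agents)

if __name__ == "__main__":
    for rate in (0.001, 0.2):
        s = simulate(social=True, change_rate=rate)
        a = simulate(social=False, change_rate=rate)
        print(f"change_rate={rate}: social {s:.3f}  asocial {a:.3f}")
```

Raising change_rate churns the environment faster and is one way to probe, in this toy setting, the caveat about rapidly changing environments; nothing here should be read as reproducing the tournament's actual results.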
¤
To elaborate further, consider an experiment Laland describes, performed with his postdoctoral student Hannah Lewis. Laland explains that to model the effects of high-fidelity transmission of information on the longevity of cultural forms or “traits” in a population, he and Lewis “assumed that there are a fixed number of traits that could appear within a group through novel inventions and that are independent of any other traits within a culture. We called these novel inventions ‘cultural seed traits.’ Then, one of four possible events could occur”: a new seed trait could be acquired by novel invention; two traits could be combined to produce a new one; one trait could be modified; or a trait could be lost.
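A minimal sketch may make the structure of such a model easier to see. This is my own toy paraphrase, not Lewis and Laland's actual model or equations: the event probabilities, the size of the seed-trait pool, and the way fidelity is tied to trait loss are all invented for illustration. The only thing it preserves is the four-event structure (invention, combination, modification, loss) and the question of how many traits survive as transmission fidelity varies.

```python
import random

N_SEED_TRAITS = 20     # the fixed pool of possible "cultural seed traits"
STEPS = 5000

def surviving_traits(fidelity, seed=0):
    """Run one simulation and return how many traits persist at the end.
    `fidelity` (between 0 and 1) only scales how likely a trait is to be
    lost between events; every probability here is invented for illustration."""
    rng = random.Random(seed)
    traits = set()
    next_id = N_SEED_TRAITS            # ids >= N_SEED_TRAITS mark derived traits
    for _ in range(STEPS):
        event = rng.choices(
            ["invent", "combine", "modify", "lose"],
            weights=[0.1, 0.1, 0.1, 0.05 + 0.7 * (1.0 - fidelity)],
        )[0]
        if event == "invent":
            traits.add(rng.randrange(N_SEED_TRAITS))    # a seed trait appears
        elif event == "combine" and len(traits) >= 2:
            traits.add(next_id)        # two existing traits yield a new one
            next_id += 1
        elif event == "modify" and traits:
            traits.discard(rng.choice(sorted(traits)))  # one trait is replaced
            traits.add(next_id)
            next_id += 1
        elif event == "lose" and traits:
            traits.discard(rng.choice(sorted(traits)))  # a trait disappears
    return len(traits)

if __name__ == "__main__":
    for f in (0.5, 0.95, 0.99):
        print(f"fidelity {f}: {surviving_traits(f)} traits persist")
```

With higher fidelity (traits lost less often), far more traits persist at the end of a run, which is the qualitative effect the real model was built to study.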
This model, in its relation to real cultural forms, seems to me the equivalent of a Cubist painting. Cultural “traits” that are independent of one another occur no more often in nature than young ladies with perfectly geometrical features distributed all on one side of their two-dimensional heads. Likewise for the separate and distinct occurrence of novel invention, combination, modification, or loss of cultural forms. These processes travel in the real world as aspects of a single organic entity and not as separate blocks. Of course, I’m not opposed to representing cultural forms in these Cubist terms any more than I’m opposed to Picasso’s portraits of Dora Maar. Representations should, though, declare themselves as such.
Mathematical models are interpretative from the get-go. Again, let me be clear that I think that’s fine — indeed, inevitable — because interpretation is ineradicable from any attempt to understand the world. Indeed, some scientists describe their use of mathematical models in these very terms. The theoretical physicist Murray Gell-Mann warned that we must be careful, regarding models, “not to take them too seriously but rather to use them as prostheses for the imagination, as sources of inspiration, as acknowledged metaphors. In that way I think they can be valuable.” [5] Feldman, who pointed me to Gell-Mann’s characterization of models as “prostheses of the imagination,” added that “insofar as the model assists in the interpretation, then it has value.” [6] On another occasion, Feldman told an interviewer, “[p]eople who make models for a living like I do don’t actually believe they’re describing reality. We aren’t saying that our model is more probable than another model; we’re saying it exposes what is possible.” [7]
I have no trouble believing in mathematical modeling as a powerful form of metaphor, representation of the possible, or prosthesis for the imagination. But mathematical modeling does have a distinctive feature that sets it apart from other interpretive modes: notwithstanding Gell-Mann and Feldman, it tends to disguise itself as proof rather than representation. Would it be possible for it to come right out of the positivist closet? To put my point another way, culture plays as crucial a role in evolutionary theory as it does in evolution. Culture plays as crucial a role in science as it does in nature. Wouldn’t a scientific method that unapologetically declared itself as interpretive and representational be in keeping with Laland’s revolutionary program to write cultural forms into evolutionary theory?
Mathematical modeling, like any mode of interpretive analysis, also has its limitations and pitfalls. For example, it brings a tendency I’ll call “either/or-ism”: a tendency to represent as separate and discrete, the better to count them, things that are in fact mixed and blended. Darwin’s Unfinished Symphony lists as discrete alternatives, for example, animals learning innovations socially from one another versus inventing them independently; the cultural drive hypothesis operating through natural selection on social learning proficiency versus social learning incidence; humans being more accomplished than other primates due to “chance factors” or because of a “trait or combination of traits that were uniquely possessed by our ancestors”; that high-fidelity transmission of information might have been achieved by our ancestors through language or alternatively through teaching; learning a skill such as stone-knapping to make a cutting tool by reverse-engineering from a finished sharpened flake, or else by imitation, or else by various forms of non-verbal teaching; or else by verbal teaching; and young individuals acquiring skills either asocially by trial and error, or else socially by copying, or else socially by being taught by a tutor “at some cost to the tutor.” In each of these cases, “both, and” seems more plausible to me than “either, or.” (Additionally, in the last case, must teaching involve a cost to the tutor? In my experience, teaching is often a win-win process, a non-zero-sum game, in which the teacher learns at least as much as the pupil, rather than a donation by the teacher to the pupil. Perhaps the sort of teaching that humans do is qualitatively different from the sorts that other animals do: a teacher macaque might not derive the same intellectual benefits from teaching to compensate for the loss of time that could be spent eating or reproducing. But I wonder if that’s necessarily true in all cases of nonhuman teachers.)
Yet Laland’s conclusions are extremely persuasive. Their persuasiveness overwhelms my failure to believe in a proof-value for the mathematical models. He concludes that natural selection favors those who copy others efficiently, strategically, and accurately; that nonhuman species lack cumulative cultures because of their “low-fidelity copying mechanisms”; that teaching evolves where the benefits outweigh the costs; and that language first evolved to teach close kin. I can believe in these conclusions, not as proven by the tournament-experiment, or the cultural-trait-transmission model, or the other mathematical models, but as interpretively, argumentatively presented by these models. I think this is because Laland’s conclusions are based on the kind of profound knowledge that comes only from a wealth of direct experience and — yes — keen, richly informed interpretation. Alongside the mathematical models are descriptions drawn from experiments and observations, some extending over decades.
For example, Laland describes several series of experiments designed to show that fish can learn from one another, and to investigate how and under what conditions they do so. In one set of experiments, Laland and his students and collaborators trained guppies to take certain routes to find rich food supplies, then observed other untrained guppies, in various conditions, learn from their trained fellows. In one variation, the experimenters trained the demonstrator fish to swim directly up narrow vertical tubes to reach their meal; this was a highly esoteric skill that no fish figured out on its own, without training, but the guppies did readily learn it from one another. In another series of experiments, the experimenters offered certain stickleback fish rich feeding patches and others poor ones, while observer fish watched from a distance; the humans then observed the observer fish to see whether and what they learned.
Such experiments, Laland reports, have established certain social tendencies in fish. These include “a tendency to adopt the majority behavior,” “copying the behavior of others when uncertain,” and “disproportionately attending to the behavior of groups.” Such social tendencies, once established, must surely enter into any legitimate evolutionary picture of fish. More generally, the principle that many animals are social, and that their sociality necessarily plays a role in the evolutionary process, has the retrospective obviousness of all grand, organizing ideas once stated, a most notable example being the idea of natural selection itself, whose retrospective obviousness led T. H. Huxley, upon reading On the Origin of Species, to figuratively smack his forehead, exclaiming: “How extremely stupid of me not to have thought of that!” [8] Such grand, organizing ideas, which create conceptual sea-changes that render them retrospectively (but only retrospectively) obvious, can emerge only from richly informed interpretative analysis.
Darwin’s own method was explicitly so. He described natural history as a form of deeply interpretive historical scholarship. The geological record, he said, was a collection of fragments of the most recent volume of “a history of the world imperfectly kept, and written in a changing dialect.” He urged people to join him in considering natural history in these terms: to “regard every production of nature as one which has had a history” to be pieced together by interpretation of scant evidence. Darwin promised that this approach would be its own reward: “[W]hen we thus view each organic being, how far more interesting, I speak from experience, will the study of natural history become!” [9] Laland’s evolutionary science, as portrayed in Darwin’s Unfinished Symphony, might as well come right out and declare itself as such: it is precisely that “far more interesting” study.
¤
Jessica Riskin is a history professor at Stanford University, where she teaches courses in European intellectual and cultural history and the history of science. She is the author, most recently, of The Restless Clock: A History of the Centuries-Long Argument Over What Makes Living Things Tick (2016).
¤
[1] Joseph Fracchia and R. C. Lewontin, “Does Culture Evolve?,” in History and Theory Vol. 38, No. 4, Theme Issue 38: The Return of Science: Evolutionary Ideas and History (Dec., 1999), pp. 52–78, on pp. 60, 72.
[2] Marcus W. Feldman, “Dissent with Modification: Cultural Evolution and Social Niche Construction,” in Melissa J. Brown, ed., Explaining Culture Scientifically (Seattle: University of Washington Press, 2008), Ch. 3, on p. 58.
[3] John Maynard Smith, Evolution and the Theory of Games (Cambridge: Cambridge University Press, 1982), 5, 10.
[4] Richard Dawkins, The Selfish Gene (1976), 30th anniversary ed. (Oxford: Oxford University Press, 2006), 229.
[5] Murray Gell-Mann, “Plectics,” in John Brockman, ed., Third Culture: Beyond the Scientific Revolution (New York: Touchstone, 1995), Ch. 19, on p. 324. 
[6] Marcus Feldman, in conversation, August 2018.
[7] Feldman, quoted in Elizabeth Svoboda, “Finding the Actions that Alter Evolution,” in Quanta Magazine, January 5, 2017, https://www.quantamagazine.org/culture-meets-evolution-the-marcus-feldman-qa-20170105/.
[8] Thomas Henry Huxley, “On the Reception of The Origin of Species” (1887), in The Life and Letters of Charles Darwin, edited by Francis Darwin (New York: D. Appleton, 1896), 1:533–58, on p. 551.
[9] Charles Darwin, On the Origin of Species by means of natural selection, or the preservation of favoured races in the struggle for life (London: John Murray, 1859 [1st ed.]), 310–311, 485–486.
Source: https://lareviewofbooks.org/article/the-value-of-w-or-interdisciplinary-engagements-on-culture/
theliberaltony · 6 years
Link
via Politics – FiveThirtyEight
We don’t know how or when special counsel Robert Mueller’s investigation into Russian interference in the 2016 election will end. It could wrap up in a few weeks or many months and could play out in several ways.
There might be a political bombshell, like a revelation that President Trump’s 2016 campaign was involved with election interference. Or it could fizzle out, with no additional major revelations and no evidence of wrongdoing by Trump. With the help of legal experts, I’ve spun out five plausible scenarios for how the special counsel’s investigation might conclude. Each carries different amounts of legal and political risk for Trump. Let’s start with the scenario that would be most harmful to Trump.
Scenario 1: Trump is implicated in some form of coordination with Russia to interfere with the 2016 election
So far, Mueller’s legal filings have made one thing abundantly clear: The special counsel has evidence that Russian operatives engaged in a sustained effort to boost Trump’s candidacy and undermine Hillary Clinton’s. What we don’t know is whether the Trump campaign — or Trump himself — knew about these efforts, encouraged them or actively helped promote them. For Trump, one of the worst possible outcomes of the Mueller investigation would be for the special counsel to present evidence that Trump deliberately cooperated with Russian efforts to undermine the election. It would likely lead to calls among House Democrats for Trump to be impeached.
If Mueller has this evidence and an appetite for a legal battle that will probably end up at the Supreme Court, he could try to test a longtime constitutional hypothetical and charge Trump with a federal crime. But most experts I spoke with think he’ll abide by Justice Department guidelines that say indicting a sitting president is unconstitutional. A likelier outcome in this situation is that Trump’s involvement could be spelled out in charges against other people (similar to the way Trump was implicated in campaign finance violations by his former lawyer, Michael Cohen, but not explicitly named). Trump could also be named as an unindicted co-conspirator, as the grand jury in the Watergate investigation did with Richard Nixon, although that is also discouraged by the Justice Department. Even if he didn’t face any immediate legal liability, both of those outcomes could be very politically damaging to Trump, because his alleged wrongdoing would still be out in the open.
Alternatively, Mueller could include in his final report any information he finds about presidential misconduct, which could be just as damning if the relevant parts of the report are released to Congress or the public. When he concludes his work, Mueller is required to submit a confidential accounting of his indictment decisions. And that information could be presented in a myriad of ways. “He could use that report to describe a lot of things his team has uncovered that didn’t make their way into criminal charges,” said Joshua Geltzer, executive director of Georgetown’s Institute for Constitutional Advocacy and Protection. “Or it could be a very sparse report that essentially recaps what we’ve already seen through Mueller’s legal filings — or anything in between.”
Another way Trump’s wrongdoing could be communicated would be for Mueller to work with his grand jury to create a document to submit to Congress. This is one of the actions that Leon Jaworski, a special prosecutor in the Watergate investigation, took in 1974, when his grand jurors submitted what’s known as the Watergate “road map” to the House of Representatives. “It was essentially a guide for the House, which was considering articles of impeachment, to help clarify what criminal offenses the president had actually been implicated in,” said Andrew Coan, a professor of law at the University of Arizona and the author of a new book about the history of special counsel investigations. This would be a surprising move, though, Coan said, because it would bypass the courts and the final report to the attorney general, which are the two primary methods that a special counsel has to communicate the results of an investigation.
Scenario 2: Trump is implicated in obstruction of justice
Another possibility is that Trump isn’t implicated in Russian attempts to influence the 2016 election but there is evidence that he illegally or inappropriately tried to stymie Mueller’s investigation. Mueller has reportedly been looking into whether Trump obstructed justice or tampered with witnesses, but how big a deal would it be if those inquiries bear fruit?
The legal risks to Trump are lower here, because an obstruction of justice case could be hard to make. If Mueller believes Trump committed a crime of obstruction — and has evidence to support it — he could relay that information in the ways I described above, through charging documents, his final report or a grand jury submission to Congress. But the legal bar for obstruction of justice is high, according to Laurie Levenson, a professor at Loyola Law School, Los Angeles. And, she said, it’s possible that even if Mueller finds it troubling or inappropriate, Trump’s behavior may not clear that threshold. A description of potentially obstructive behavior could, therefore, end up in a section of Mueller’s final report in which he explains why he chose not to issue charges against certain people.
But even if Mueller doesn’t charge Trump with obstruction of justice, a description of potentially obstructive behavior could still have significant political fallout, depending on what Mueller says and what is made public. In that situation, the real question will be how politicians respond. “You could see a scenario where Mueller explains why this was a close call and why he landed where he did,” Levenson said. “But it’s very unlikely that Mueller will tell Congress if he thinks something is impeachable but not criminal. He’ll outline the facts and let them take it from there.”
The amount of political pressure that results, though, could depend on whether there are new revelations — or if Mueller simply outlines information that is already publicly known. If it’s the latter, Republican support for impeaching Trump and removing him from office could be more difficult to marshal. According to Lisa Kern Griffin, a professor of law at Duke University and a former federal prosecutor, Trump may have insulated himself from some backlash because he didn’t try to conceal his potentially obstructive behaviors — in the process, “numbing” observers to its significance.
Scenario 3: Trump isn’t accused of wrongdoing, but someone close to him is
What happens if someone close to Trump — for example, Donald Trump Jr., who has reportedly said that he expects to be indicted by Mueller — is charged in the Russia probe, but Trump himself isn’t accused of wrongdoing? That would be relatively good news for Trump; Katy Harriger, who is a political science professor at Wake Forest University and studies special prosecutor investigations, said he might be able to weather the scandal — much as Ronald Reagan did during the Iran-Contra investigation, which ended up implicating Reagan’s defense secretary. But he wouldn’t be out of the woods for a few reasons.
First, Trump might be tempted to issue a pardon. It wouldn’t be unprecedented for a president to pardon someone who got into legal trouble because of a special counsel investigation, as I wrote last year, but the timing matters a lot. If a pardon interferes with an ongoing investigation, it could open Trump up to obstruction of justice charges.
Another possible consequence of the indictment of a close business associate of Trump’s is that it could add fuel to the political fire around the president. The charges could give Democrats in Congress more justification for pursuing their own investigations, possibly into Trump’s family businesses. Harriger noted that unlike Reagan, Trump is only in his first term, so the Democrats’ probes could end up eroding his popularity and potentially hurt his chances in 2020.
Scenario 4: Mueller’s findings aren’t made public, so we don’t know whether Trump is implicated in wrongdoing
All three of the previous scenarios are based on the assumption that the key findings of Mueller’s investigation will become publicly available. But it’s possible that Mueller will conclude his investigation, submit his report and trigger a battle between Congress and the executive branch about whether his conclusions should be made public. That’s because once Mueller submits his report, it’s largely up to the attorney general to decide what to do with it.
This scenario is why Senate Democrats spent much of the confirmation hearings for William Barr, Trump’s pick to be the next attorney general, grilling him about how he would handle Mueller’s report. Barr promised transparency but also wouldn’t make any commitments about how much of the report he would release publicly. If the report, or parts of it, aren’t made public, then we might not know if Trump is implicated in wrongdoing that didn’t make its way into an indictment.
Even if much of Mueller’s report is made public, several experts told me that redactions seem likely to protect information that relates to other Justice Department investigations. And the Trump administration could try to suppress parts of the report by citing executive privilege, which means that some evidence of wrongdoing by the president could be held back.
House Democrats do have some tools at their disposal to respond if the attorney general declines to release the entire report or significant portions of it. They could subpoena the report or call Mueller to testify about his findings. Either way, there would likely be a dramatic separation-of-powers showdown. But it wouldn’t be resolved quickly.
Scenario 5: The findings are made public, and neither the president nor any of his close associates are implicated in further wrongdoing
In December, my colleague Perry Bacon Jr. and I speculated that Mueller might be writing his “report” in real time, through his detailed court filings. If this turns out to be true, the ending of the investigation may be somewhat anticlimactic, because all of the relevant information will already be known. And unless there are bombshells between now and when Mueller wraps up his probe, the investigation might not end up touching Trump or his close associates at all. If that happens, Trump will likely claim that he has been exonerated.
But even if the Mueller investigation ends without big, bad news for the president, Trump’s legal troubles are likely to continue — indeed, more hazards are already on the horizon. Federal prosecutors in New York — who already secured a guilty plea from Cohen, for a campaign finance violation that implicated Trump — are reportedly looking into spending around Trump’s inauguration and Trump Organization executives’ connections to illegal campaign payments.
And whereas the mandate for Mueller’s investigation was relatively narrow, other federal and state prosecutors have broad latitude to investigate potential corporate fraud, tax violations and other offenses that were out of Mueller’s purview. “In my view, the more obvious threat to Trump is coming from federal and state prosecutors in New York — maybe having less to do with the Russia component but involving campaign finance violations, potentially money laundering, misconduct in the context of the Trump Organization,” Griffin said.
And this, perhaps, is the most important lesson from all of these scenarios: that whenever the Mueller investigation does finally end, it’s likely to trigger another high-profile process — whether it’s a fight over the final report, further investigations by House Democrats and other prosecutors, or even impeachment proceedings. So whatever happens, don’t expect the drama to end when Mueller packs up his office and turns off the lights.
canmom · 6 years
Text
forking consciousness
Played Digital: A Love Story a couple of days ago on my Twitch, and, it was pretty great but it’s making me think... (spoilers for Digital, NieR Automata and Blade Runner 2049 below)
A lot of stories where someone’s consciousness is computer data place some kind of restriction on copying that data and spawning multiple instances of a person - whether it’s like Richard Morgan’s Altered Carbon novels, or NieR Automata, where multiple embodiment is banned by law and custom, or like Digital, where attempting to contain replicating AIs is the impetus behind the plot and there’s a contrived software reason why later AIs can’t be copied, or like Tacoma, where the AIs are cloned from a ‘source’ AI but require special ‘wetware’ in order to run, rather than being able to work on a general-purpose computer.
Even in stories such as the recent Blade Runner sequel, where the AI companion character (who’s put in a rather troublesome plot where she really loves her slavemaster and is ultimately killed off) is one of a mass-produced line, we only encounter one instance of her. Although we know there are many Emils or Devolas and Popolas in NieR Automata, we only encounter the one Emil outside of a brief encounter in a single quest, and only the one Devola and Popola pair. While at one point towards the ending we encounter multiple 2Bs, they are not the ‘real’ 2B, just puppets instantiated to psychologically mess with 9S.
O Human Star deals with two robot clones of a particular human, one of whom transitioned, the other (more recent, and seemingly more ‘accurate’) did not. But, in that case, there’s no provision that unlimited robot clones could be made necessarily.
Often, sci-fi has an AI overseeing a ship or planet or something, able to hold conversations with many people at once. But it’s almost always considered as just one being, with a single memory.
It’s kind of understandable that we do this. We are used to dealing with people who cannot be duplicated at will, and trying to characterise many different clones and variations of a person is very difficult! But what an interesting challenge it would be to meet.
I’m actually struggling to think of any story where the idea of consciousness as a computer program that can be run in multiple instances in multiple computers is really dealt with by biting the bullet and having multiple instances of the same character running around. There must be some, right? It’s such a common sci-fi trope, and such an immediate implication. It actually seems to be an idea more commonly explored in time travel stories, such as Homestuck, rather than transhumanist scifi.
I suppose the recent Torment: Tides of Numenera game kind of touches on this concept, but the characters in that aren’t really clones of the same mind so much as new minds that manifested in the discarded bodies of one person. And, that person is unique, it’s not the standard state of affairs.
What would a society be like where minds could be forked in a way limited only by computing power? When any time you wanted, you could make a bunch of copies of yourself, which would then diverge and evolve in different ways? Surely people would do that if given the chance. Is it so far from our default assumptions about identity that we just can’t imagine it?
And it would be pretty tough to write about!
For example, how would a society built on ‘democratic’ principles (in whatever flavour you propose, whether directly democratic or any representative system like most present countries...) count the votes of your 100,000 virtual clones running in a server farm? To count them all separately would amount to giving whoever controls the most computing resources a dictatorship, but at what point do two forked consciousnesses become sufficiently different to count as separate people? If you forked fifty years ago, and both had completely different lives and developed totally different personalities, would you still count as one person for voting?
What happens when two forked consciousnesses simultaneously claim the right to use an unreplaceable personal possession, such as a treasured heirloom?
I feel like it would be fun to come back to this. Maybe in a Twine or something.
scepticaladventure · 6 years
Text
27  Gravitational Waves  12Sep18
Introduction This essay continues my series of essays discussing tests of Einstein’s Theory of General Relativity. More detailed descriptions of the tests themselves can be found online and in the literature. See for example the literature review in May 2017 by Estelle Asmodelle from the University of Central Lancashire Ref:  arXiv:1705.04397 [gr-qc] or arXiv:1705.04397v1 [gr-qc].
I have questioned whether the experimental tests exclude any other explanations for the same phenomenon. So far I have examined gravitational redshifts and gravitational light bending, the Shapiro round-trip light delay and the ‘anomalous’ precession of Mercury. The evidence so far is that while General Relativity provides a satisfying explanation for all of these experimental observations, other ways of describing the outcomes are also viable. Hence there may be more than one way to include all the evidence within a different but still complete and consistent model or theory.
In this essay I will look at the latest of the five so-called tests – gravitational waves.
Gravitational Waves Gravitational waves are generated in certain gravitational interactions and propagate as waves outward from their source at the speed of light. Their possibility was discussed in 1893 by the polymath Oliver Heaviside, using the analogy between the inverse-square laws in both gravitation and electricity.
In 1905, Henri Poincaré suggested that a model of physics using the Lorentz transformations (then being incorporated into Special Relativity) required the possibility of gravitational waves (‘ondes gravifiques’) emanating from a body and propagating at the speed of light.
Some authors claim that gravitational waves disprove Newton’s mechanics since Newton assumed that gravity acted instantaneously at a distance. I think this is unfair to Newton. Whether or not Newton explicitly claimed that gravity acted instantaneously at a distance I do not know, but it would have been a reasonable and pragmatic working assumption to make at the time. Furthermore whether he assumed instantaneous effects or delays at the speed of light makes no practical difference to the validity of Newton’s work for the type of celestial mechanics he was interested in.
In 1916, Einstein suggested that gravitational waves were a firm prediction of General Relativity. He said that large accelerations of mass/energy would cause disturbances in the spacetime metric around them and that such disturbances would travel outwards at the speed of light. A spherical acceleration of a star would not suffice because the gravity effects would still be felt as coming from the centre of mass. The cause would have to be a large asymmetric mass that was rotating rapidly. Or better still, two very large masses that were rotating around each other.
In general terms, gravitational waves are radiated by objects whose motion involves acceleration and changes in that acceleration, provided that the motion is not spherically symmetric (like an expanding or contracting sphere) or rotationally symmetric (like a spinning disk or sphere).
A simple example is a spinning dumbbell. If the dumbbell spins around its axis of its connecting bar, it will not radiate gravitational waves. If it tumbles end over end, like in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, or the faster it tumbles, the greater the gravitational radiation. In an extreme case, such as when two massive stars like neutron stars or black holes are orbiting each other very quickly, then significant amounts of gravitational radiation will be given off.
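For reference, the standard quadrupole-formula result of General Relativity for two point masses $m_1$ and $m_2$ in a circular orbit of separation $r$ gives the average radiated power as

$$
P \;=\; \frac{32}{5}\,\frac{G^{4}}{c^{5}}\,\frac{(m_{1} m_{2})^{2}\,(m_{1}+m_{2})}{r^{5}},
$$

so the radiated power grows steeply as the masses increase and, above all, as the separation shrinks, which is why only very massive, very compact, rapidly orbiting pairs such as neutron stars or black holes radiate at levels anyone could hope to detect.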
Over the next twenty years the idea developed slowly. Even Einstein had his doubts about whether gravitational waves should exist or not. He said as much to Karl Schwarzschild and later started a collaboration with Nathan Rosen to debunk the whole idea. But instead of debunking the idea Einstein and Rosen further developed it and by 1937 they had published a reasonably complete version of gravitational waves in General Relativity. Note that this is 22 years after the General Theory was first published.  
In 1956, the year after Einstein’s death, Felix Pirani reduced some of the confusion by representing gravitational waves in terms of the manifestly observable Riemann curvature tensor.
In 1957 Richard Feynman argued that gravitational waves should be able to carry energy and so might be able to be detected. Note that gravitational waves are also expected to be able to carry away angular or linear momentum. Feynman’s insight inspired Joseph Weber to try to build the first gravity wave detectors. However his efforts were not successful. The incredible weakness of the effects being sought cannot be over emphasized.
More support came from indirect sources. Theorists predicted that gravity waves would sap energy out of an intensely strong gravitational system. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar (a discovery that would earn them the 1993 Nobel Prize in Physics). In 1979, results were published detailing measurement of the gradual decay of the orbital period of the Hulse-Taylor pulsar, and these measurements fitted precisely with the loss of energy and angular momentum through gravitational radiation as predicted by calculations using General Relativity.
Four types of gravitational waves (GWs) have been predicted. Firstly, there are ‘continuous GWs,’ which have almost constant frequency and relatively small amplitude, and are expected to come from binary systems in rotation, or from a single extended asymmetric mass object rotating about its axis.
Secondly, there are ‘Inspiral GWs,’ which are produced by massive binary systems that are spiralling in towards one another. As their orbital distance lessens, their rotational velocity increases rapidly.
Then there are ‘Burst GWs,’ which are produced by an extreme event such as asymmetric gamma ray bursters or supernovae.
Lastly, there are ‘Stochastic GWs,’ which are predicted to have been created in the very early universe by sonic waves within the primordial soup. These are sometimes called primordial GWs and they are predicted to produce a GW background. Personally I doubt that this last type of GW exists.
On February 11, 2016, the LIGO and Virgo Scientific Collaboration announced they had made the first observation of gravitational waves. The observation itself was made on 14 September 2015, using the Advanced LIGO detectors. The gravity waves originated from a pair of merging black holes more than a billion years ago that released energy equivalent to a billion trillion stars within seconds. For the first time in human history, mankind could ‘feel and hear’ something happening in deep space and not just ‘see’ it. The black holes were estimated to be 36 and 29 solar masses respectively and circling each other at 250 times per second when the signal was first detected.
By August 2017 half a dozen other detections of gravitational waves were announced. I think all of them have been in-spiral GW’s. These produce a characteristic ‘chirp’ in which the signal becomes quicker and stronger and then stops. This is very useful for finding the signal amongst all the background noise. The flickering light pattern signal in the interferometer detector can be turned directly into a sound wave and actually does sound like a chirp.
In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. (The same Kip Thorne who co-authored the heavyweight textbook on gravity that I have referred to so often in these essays that I gave it its own acronym -  MTW).
As I first drafted this essay in 2017 there was considerable excitement in the world of astronomy because the Laser Interferometer Gravitational-Wave Observatory (LIGO) suggested that a pair of neutron stars were in the process of merging. Space-based telescopes were then able to look in that direction and they observed an intense burst of gamma rays. This is the first example of the two types of observational instruments working together and the dual result confirms that LIGO had been observing what they thought they were observing. Furthermore it provides evidence that gravitational waves travel at the speed of light.
Detection LIGO is a large-scale long-term physics project that includes the design, construction and operation of observatories designed to detect cosmic gravitational waves and applied theoretical work to develop gravitational-wave observations as an astronomical tool. It has been a struggle lasting many decades. It took many attempts to achieve funding for the observatories and nearly a decade to make the first successful observations. A triumph of persistence, optimism and the begrudging willingness of the USA National Science Foundation to fund a speculative fundamental science project to the tune of US$1.1 billion over the course of 40 years.
To my mind the experimental setup is reminiscent of the Michelson-Morley experiments of some 130 years earlier. But it is on a much larger scale and is incredibly more sensitive, with all sorts of very clever tricks to increase the sensitivity and to get unwanted noise out of the system. Two large observatories have been built in the United States (in the states of Washington and Louisiana) with the aim of detecting gravitational waves by enhanced laser interferometry. Each observatory has two arms with mirrors 4 km apart, and each arm forms a resonant optical cavity.
When a gravitational wave passes through the interferometer, the spacetime in the local area is altered. Depending on the source of the wave and its polarization, this results in an effective change in length of one or both of the beams. The effective length change between the beams will cause the light currently in the cavity to become very slightly out of phase (anti-phase) with the incoming light. The cavity will therefore periodically get very slightly out of coherence and the beams, which are tuned to destructively interfere at the detector, will have a very slight periodically varying detuning. This results in a measurable signal.
Or, to put it another way: After approximately 280 trips up and down the 4 km long evacuated tube arms to the far mirrors and back again, the two beams leave the arms and recombine at the beam splitter. The beams returning from the two arms are kept out of phase so that when the arms are both in coherence and interference (as when there is no gravitational wave or extraneous disturbance passing through), their light waves subtract, and no light should arrive at the final photodiode. When a gravitational wave passes through the interferometer, the distances along the arms of the interferometer are repeatedly shortened and lengthened, creating a resonance and causing the beams to become slightly less out of phase and thus allowing some of the laser light to arrive at the final photodiode, creating a signal.
Light that does not contain a signal is returned to the interferometer using a power recycling mirror, thus increasing the power of the light in the arms. In actual operation, noise sources can cause movement in the optics that produces similar effects to real gravitational wave signals. A great deal of the art and skill in the design of the observatories, and in the complexity of their construction, is associated with the reduction of spurious motions of the mirrors. Observers also compare signals from both sites to reduce the effects of noise.
The observatories are so sensitive that they can detect a change in the length of their arms equivalent to ten-thousandth the charge diameter of a proton. This is equivalent to measuring the distance to Proxima Centauri with an error smaller than the width of a human hair.
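To get a feel for how small these effects are, here is a rough back-of-the-envelope sketch. It ignores almost everything that makes the real instrument work (the Fabry-Perot arm cavities, power recycling, the fact that the dark fringe is only approximately dark) and simply asks: if the differential arm length changes by about one ten-thousandth of a proton charge diameter, what strain, phase shift, and output light fraction does that correspond to for 1064 nm laser light making roughly 280 round trips? The 1064 nm wavelength is a standard published LIGO figure; the other numbers are taken from the text, and the model itself is only an idealization.

```python
import math

WAVELENGTH = 1.064e-6   # m, the laser light used by LIGO
ARM_LENGTH = 4.0e3      # m
ROUND_TRIPS = 280       # effective passes quoted in the text
DELTA_L = 1.7e-19       # m, ~1/10,000 of the proton charge diameter (~1.7e-15 m)

# Strain: fractional change in arm length
strain = DELTA_L / ARM_LENGTH

# Phase difference at the beam splitter for an idealized Michelson held at the
# dark fringe: each round trip contributes 4*pi*dL/lambda of phase.
phase = ROUND_TRIPS * 4 * math.pi * DELTA_L / WAVELENGTH

# Fraction of the maximum light power reaching the "dark" output port
fraction = math.sin(phase / 2) ** 2

print(f"strain h        ~ {strain:.1e}")
print(f"phase shift     ~ {phase:.1e} rad")
print(f"light fraction  ~ {fraction:.1e} of maximum")
```

The point of the exercise is simply to show how fantastically small the raw effect is, which is why so much of the engineering described above is devoted to beating down noise.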
Although the official description of LIGO talks about gravitational waves shortening and lengthening the arms of the interferometers by almost infinitesimal amounts, I think it might also be reasonable to describe what is going on as very slight changes in the speed of the photons being reflected back and forth 280 times in the 4 km long arms, as compared to the reference photons in the resonant cavities.
Some Comments on the Interpretation Commentators continually refer to gravitational waves as being “ripples in the fabric of spacetime”. There seems to be some deep-seated human desire to regard spacetime as being real and tangible, more or less like some sort of four dimensional fluid in in which the Universe is immersed. Computer based animations invariably depict empty space as some sort of rubberized sheet being dimpled by massive ball bearings and this promotes the same sort of mental images, attitudes and beliefs. Which is a pity.
It may be a lost cause but I point out once again that spacetime is a human construct for measuring, modeling and discussing what is going on in the Universe. It has no more reality than the Cartesian coordinate grid of latitude and longitude lines here on Earth.
It was not Einstein who promoted the idea that curved spacetime is an actual physical reality. This only happened after his death and was promoted by authors such as MTW and Stephen Hawking. For example, John Wheeler often made the comment that “mass/energy tells spacetime how to curve, and spacetime curvature tells matter how to move”. The cover of MTW classic textbook shows a little ant wandering around on the surface of an apple and dutifully following its curvature.
I would say to John Wheeler that he has started to confuse mathematical models with reality and that the analogy with the ant is a false one. The ant can feel the curvature of the apple with its little feet. The surface and its curvature are real and tangible. But spacetime is a manmade imagination created for our own convenience. A better analogy is the lines of latitude and longitude we have invented for talking about movement on the surface of our home planet. These lines do not actually exist. They cannot be observed. They are not tangible. I would say to John Wheeler that spacetime does not tell matter how to move any more than the latitude and longitude grid on Earth tells ducks how to migrate.
Which is not to say that I think that spacetime does not correspond to something that it observable. In fact I do. But this is a heretical idea that I will explore in other essays.
I also agree that applying a spacetime metric to this “something” is a good idea. But spacetime is not that something, and that something is not spacetime. In other words, do not get a reference system invented by mankind for convenience of describing physics mentally confused with reality itself.
Another crime in my book is commentators who compare gravitational waves with electromagnetic waves. Unless such commentators can explain how two stars orbiting each other can produce quantized packets of energy and then how these packets can be reflected, polarized, refracted etc. I suggest that they refrain from such analogies. If they must use analogies I suggest that they try acoustic comparisons instead.
Note that Doppler effects are a familiar phenomenon in sound waves and they should also occur for other moving disturbances such as gravitational waves. But where gravitational waves are concerned the effects should not be called red-shifting. The Doppler effect is not called red-shifting when it applies to acoustic waves and I think it should not be called red-shifting for gravitational waves either. It is just a plain old Doppler effect.
Discussion I do not find it surprising that a massive pair of stars rotating about each other might have tiny push-pull effects a long way away. I think this is what you would expect to find even with a basic inverse-square law based on classical physics. For example, if a large asteroid suddenly knocked the moon out of its orbit, I think it reasonable to expect that observers on Earth would notice changes in gravity very soon afterwards.
Nor am I surprised that gravitational disturbances travel at the speed of light. In fact I am surprised that this has not been measured experimentally years ago. For example, the passage of the Moon overhead produces a noticeable gravitational tidal effect on the surface of the Earth. Since the centre of the pattern of this disturbance coincides exactly with where the Moon appears to be then that is evidence for the gravitational effect to be arriving hand in hand with the visible light from the Moon.
I would be surprised if gravitational waves are ever found to consist of discrete quantized packets, analogous to photons. In my currently preferred conceptual model of the Universe, photons are disturbances in something that can be modelled by spacetime constructs, and gravitational waves are disturbances of that something itself.
This is more than a semantic difference. Consider a laser beam that is pointed at the sky and turned on and off again. This sends bunches of well-collimated photons off on a journey into deep space which, in principle, can continue travelling indefinitely. Barring absorption by dust or blocking by some solid barrier, the beam of photons stands a chance of being able to be detected on some distant galaxy at some time in the future. Not so a gravitational wave. The energy from a gravitational wave spreads outwards in all directions and becomes increasingly weak with distance from its source. I think there is almost no chance of being able to detect gravitational waves coming from binary star events and suchlike outside of their local galaxy. Colliding galaxies might be a different story.
Conclusion After initial doubts, Einstein eventually decided that gravitational waves were a necessary feature of his Theory of General Relativity. The recent detection of gravitational waves, apart from being a remarkable achievement, is further confirmation that General Relativity works well as a model. However I think it is not proof that General Relativity is the only viable and useful way of looking at physics in our Universe.
patriotsnet · 3 years
Text
How Many Republicans Are Running For President
New Post has been published on https://www.patriotsnet.com/how-many-republicans-are-running-for-president/
How Black Republicans Are Debunking The Myth Of A Voter Monolith
African American politicians and activists on the right say they've found support in the black community through dialogue
For Brad Mole, venturing into Republican politics didn't start with a sudden awakening to conservatism. It was his religious upbringing and way of life that brought him to the Republican party.
My faith pushed me more toward policies that better reflected my upbringing, he said. I began understanding that the teachings I was raised with were more reflected in a party that not many around me identified with.
The son of a preacher in the Lowcountry region of South Carolina, Mole is now taking his politics a giant leap forward, challenging the Democrat Joe Cunningham for his US congressional seat.
As analysts debunk the myth of the black voter monolith, some black Republicans are stepping forward to counter stereotypes and assert a political identity very different from the usual assumption that all black Americans are Democrats, especially in the era of Donald Trump.
As one of seven Republicans running for the seat, Mole credits his religious background for his motivations to join the crowded race. Those same traditions are often associated with centrist African American political leanings. But for black Americans like Mole, their conservatism leads some to question whether their political party and preferences actually match their worldview.
But he's not out to change minds; he wants to rebuild a sense of community.
Political Primaries: How Are Candidates Nominated
Article two, section one of the United States Constitution discusses the procedures to be followed when electing the president of the United States, but it does not provide guidance for how to nominate a presidential candidate. Currently, candidates go through a series of state primaries and caucuses where, based on the number of votes they receive from the electorate, they are assigned a certain number of delegates who will vote for them at their party’s convention.
Earlier party conventions were raucous events, and delegates did not necessarily represent the electorate. Mrs. J.J. McCarthy describes her convention experience:
I can picture … the great Democratic convention of 1894 at the old coliseum in Omaha… right now I can hear the Hallelluiahs of the assembled. Oh how I wish I had back the youth and the enthusiasm I felt that night, I jumped on a chair and ask that by a rising vote the nomination be made unanimous, how the people yelled, how the packed gallories applauded, it cheers an old man now to think about it.
Politics played a big part in the life of this town years ago. Campaigns were hot, and there was always a big celebration afterwards. … Votes used to be bought — that is before the secret ballot was adopted. Some sold ’em pretty cheap. I remember one old fellow who sold out to one party for a dollar — then sold out to the other for the same price.
State Primaries And Caucuses For The Presidential Elections
State primaries are run by state and local governments. Voting happens through secret ballot.
Caucuses are private meetings run by political parties. They're held at the county, district, or precinct level. In most, participants divide themselves into groups according to the candidate they support. Undecided voters form their own group. Each group gives speeches supporting its candidate and tries to get others to join its group. At the end, the number of voters in each group determines how many delegates each candidate has won.
Both primaries and caucuses can be open, closed, or some hybrid of the two.
During an open primary or caucus, people can vote for a candidate of any political party.
During a closed primary or caucus, only voters registered with that party can take part and vote.
Semi-open and semi-closed primaries and caucuses are variations of the two main types.
Affordable Care Act Lawsuit
See also: State Attorneys General Against the Patient Protection and Affordable Care Act of 2010
Abbott was one of 13 state attorneys general who initiated a 2010 lawsuit challenging the constitutionality of the Patient Protection and Affordable Care Act. The suit argued that the individual mandate fell outside of the federal government's authority and that the requirement for state Medicaid expansion of coverage violated state sovereignty. The case was ultimately heard before the Supreme Court, which ruled to uphold the individual mandate as falling within Congress' authority to levy taxes and struck down the Medicaid expansion as being unduly coercive in light of the withholding of funding that would result from noncompliance.
Sen Marco Rubio Of Florida
Like Cruz, Rubio would enter the 2024 presidential race with heightened name ID and experience from his 2016 run. One of Rubio's biggest challenges, though, could be his fellow Floridians. If DeSantis and fellow Sen. Rick Scott run, there could be just one ticket out of Florida, a Republican strategist said.
Rubio, 49, is married to Jeanette Dousdebes and they have four children. He graduated from the University of Florida and University of Miami School of Law and was speaker of the Florida House of Representatives before running for U.S. Senate in 2010.
Fragment Of Lincoln Speech To Kentuckians
A fragment of President Lincoln's First Inaugural Address is attached to this speech intended for Kentuckians, indicating that it was prepared prior to his journey from Springfield to Washington. The assumption is that Lincoln either planned to receive a delegation from Kentucky during his stop in Cincinnati, or to make a quick excursion into his home state to deliver the speech. The speech itself confirms Lincoln's belief that there was nothing he could say to appease the South without betraying the principles upon which he had been elected.
Abraham Lincoln. Speech intended for Kentuckians, February 1861. Holograph letter. Robert Todd Lincoln Papers, Manuscript Division, Library of Congress Digital ID # al0082p1, al0082p2
Republican Nominee Shows Humility
William D. Kelley, a Philadelphia, Pennsylvania, attorney and judge, served as a delegate to the 1860 Republican National Convention in Chicago. Kelley joined Lincoln in Washington in 1861 as a Republican member of the U.S. House of Representatives, an office he continued to hold until his death in 1890. In responding to Kelley's offer to inscribe his two-volume work on international law to Lincoln, the Republican nominee for president showed that he had not lost sight of his humble origins.
Abraham Lincoln Replies To A Political Rival
Cassius Clay, an enthusiastic but undisciplined Kentucky abolitionist, thought he should be the next president of the United States. Clay would have settled for vice president, but he accepted the fact that the party needed an Eastern Democrat to balance the ticket. Aware that Clay lacked the necessary judgment to manage either office effectively, Lincoln sidestepped Clay's direct solicitation for a prominent place in the possible future Republican administration.
Primaries Were Beauty Contests
Eisenhower
When primaries did play a substantive role, it was instead through their function as beauty contests. Winning the 1952 New Hampshire primary let Dwight Eisenhower prove that rank-and-file Republicans, and not just party bosses, were more interested in picking a winner than in picking an orthodox conservative, thus giving the establishment permission to do what it wanted and go with Ike.
But both of these examples were ways of making a point to persuade party leaders, not of overriding their preferences.
The fundamental inefficacy of the primaries was driven home by the bitter 1968 Democratic nomination contest that ultimately went to Vice President Hubert Humphrey, who didn’t even enter any primary elections.
But the tumultuous, riot-scarred convention where it happened, followed by electoral defeat at the hands of Richard Nixon, spurred massive change.
How To Become President Of The United States
The U.S. Constitution’s Requirements for a Presidential Candidate:
At least 35 years old
A natural born citizen of the United States
A resident of the United States for 14 years
Step 1: Primaries and Caucuses
There are many people who want to be president. Each of these people has their own ideas about how our government should work. People with similar ideas belong to the same political party. This is where primaries and caucuses come in. Candidates from each political party campaign throughout the country to win the favor of their party members.
Caucus: In a caucus, party members select the best candidate through a series of discussions and votes.
Primary: In a primary, party members vote for the candidate who will best represent them in the general election.
Step 2: National Conventions
Each party holds a national convention to finalize the selection of one presidential nominee. At each convention, the presidential candidate chooses a running mate (the vice-presidential candidate).
Step 3: General Election
The presidential candidates campaign throughout the country in an attempt to win the support of the general population.
People in every state across the country vote for one president and one vice president. When people cast their vote, they are actually voting for a group of people known as electors.
Step 4: Electoral College
In the Electoral College system, each state gets a certain number of electors based on its total number of representatives in Congress.
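As a standard worked example of that allocation (the specific numbers are a common illustration, not taken from the article above): a state's electors equal its House seats plus its two Senate seats, and the District of Columbia receives three, giving

\[ 435\ \text{(House)} + 100\ \text{(Senate)} + 3\ \text{(D.C.)} = 538\ \text{electors}, \]

of which a majority, 270, is needed to win the presidency.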
Road To The Nomination
Lincoln's remarkable performance in a series of seven debates with Senator Douglas drew the attention of Republican Party leaders in New York and New England. Invited East to speak, Lincoln delivered one of the best speeches of his career at Manhattan's famous Cooper Union. Horace Greeley immediately reproduced the speech in his widely read New York Tribune, and Lincoln began to be thought of as a potential presidential candidate. With the help of able advisors, Lincoln orchestrated a successful campaign for the 1860 Republican nomination for president.
Don’t Miss: What Major Cities Are Run By Republicans
Campaign Buttons From 1860
In 1860, after the invention of the economical tintype process, candidates' images appeared on campaign buttons for the first time. The buttons shown here display a portrait of Lincoln on one side and an image of vice-presidential candidate Hannibal Hamlin on the reverse.
Abraham Lincoln and Hannibal Hamlin campaign buttons, 1860. Tintypes with metal casings. Library of Congress Digital ID # ppmsca-19432
Here Are The Republicans To Keep An Eye On For 2024
Republicans are paying extra attention to a number of Republican governors, senators, and former officials who might consider making a run for president in 2024.
The contenders come from various contingents of right-leaning thought, and will be fighting to capture parts of former President Donald Trump's base. Whichever Republican hopeful prevails will not only become the Republican Party's nominee, but also help determine the ideological trajectory of the Republican Party in the post-Trump era.
Vice President Mike Pence
It's not uncommon for vice presidents to follow up their stint as second-in-command with a run for president. Former President John Adams, the nation's second president, was America's first vice president under President George Washington. More recently, President Joe Biden became the 46th president four years after he ended his eight-year tenure as former President Barack Obama's vice president.
Vice President Mike Pence might decide to do the same, but Pence's relationship with Trump seems to be severely tarnished after Pence did not contest the certification of the Electoral College results, as reported by The Hill.
Senator Ted Cruz
Texas Republican Sen. Ted Cruz could run for president again come 2024 after he defended his Senate seat in 2018 from Democratic challenger Beto O'Rourke. Cruz's bid for the presidency in 2016 ended in failure as Trump captured the Republican Party's nomination.
Senator Josh Hawley
Governor Ron DeSantis
Elites Still Matter Enormously In Primaries
George H.W. Bush
Just when journalists and political scientists were ready to proclaim the death of parties in favor of candidate-centered politics, the pendulum started to swing back.
Over the past 35 years, incumbent presidents have had zero problems obtaining renomination, even presidents like George H.W. Bush and Bill Clinton, who alienated substantial segments of the party base with ideological heterodoxy during their first term. Reagan and Clinton both passed the baton to their vice presidents without much trouble.
Insurgent candidates who caught fire with campaigns explicitly promising to shake up the party establishment (Gary Hart in 1984, Pat Robertson in 1988, Jerry Brown in 1992, Pat Buchanan in 1996, John McCain and Bill Bradley in 2000, Howard Dean in 2004, Mike Huckabee in 2008, and Rick Santorum in 2012) repeatedly gained headlines and even won state primaries.
But while 1970s insurgents were able to use early wins to build momentum, post-Reagan insurgents were ground down by the sheer duration and expansiveness of primary campaigns.
Tactics that worked in relatively low-population, cheap states like Iowa and New Hampshire simply couldn’t scale without access to the broad networks of donors, campaign staff, and policy experts that establishment-backed candidates enjoyed.
It’s this “invisible primary” among party elites that truly matters.
Endorsements were better at predicting the outcome than polls, fundraising numbers, or media coverage.
Former Vice President Mike Pence
Historically, experience as Veep isn't a bad launching pad for the presidency. Six former vice presidents went on to become president, including, of course, President Joe Biden, and an additional five won their party's nomination. For 61-year-old Pence, though, the upside of his time as vice president is more of an open question.
Trump's 2020 pollster Tony Fabrizio found that if the former president doesn't run in the 2024 election, his supporters gravitate most to Pence, DeSantis and Sen. Ted Cruz of Texas, so there is plenty of support there. But on Jan. 6, when Pence announced Biden as the winner of the 2020 election, he complicated things.
"He's got this tricky position," said Steven Webster, an assistant professor of political science at Indiana University Bloomington. "I think increasingly the base of the Republican Party is aligned with Donald Trump, and Mike Pence is really seen with hostility by Trump's base, simply for performing his constitutional duty on the 6th."
Pence appears to be well aware of the predicament. Earlier this month, he published an op-ed voicing his concern over supposed voting irregularities in the 2020 election, though he didn't mention any specifically. Trump's own administration said the election was the most secure in American history.
Pence and his wife, Karen, have three children. Pence is a former conservative radio host who served seven terms in the U.S. House before becoming governor of Indiana.
Lincoln's Cooper Union Address
Lincoln's debates with Stephen A. Douglas brought him to national attention, including an invitation to speak at Cooper Union in New York City. In one of the most carefully prepared speeches of his career, Abraham Lincoln argued that twenty-one signers of the United States Constitution believed that the federal government should exercise control over slavery in the territories. Hence, the position of the Republican Party on the westward expansion of slavery was not revolutionary, but instead was consistent with the wishes of the Founding Fathers. The speech is significant because it won Lincoln the support of Republican Party leaders in the East and led to his nomination as the party's presidential candidate.
Speech of Hon. Abraham Lincoln, in New York, in Vindication of the Policy of the Framers of the Constitution and the Principles of the Republican Party. Delivered in the Cooper Institute, Feb. 27th, 1860. Springfield, IL: Bailhache & Baker, 1860. Rare Book and Special Collections Division, Library of Congress Digital ID # al0047_1, al0047_2, al0047_3, al0047_4, al0047_5, al0047_6, al0047_7, al0047_8
South Dakota Gov Kristi Noem
Noem, 49, has seen her profile rise during the pandemic, and she also had a high-profile moment last summer when she hosted Trump at Mount Rushmore for the Fourth of July. Noem gifted Trump with a Mount Rushmore replica that included his face, and her growing connection with Trump fueled speculation that he was considering swapping her for Pence as his running mate. She reportedly visited Washington, D.C., weeks later to smooth things over with Pence, according to The New York Times.
Noem isn't one to back down from culture-war fights. She recently came under fire from social conservatives for not signing a bill, which she had originally said she supported, barring transgender athletes from competing in sports. Noem cited her concern that the state would be punished by the NCAA, but followed up last week with executive orders restricting transgender athletes in K-12 schools and colleges.
Noem also recently got in a Twitter fight with Lil Nas X over his limited-edition Satan Shoes. The rapper responded to her tweet by saying, "ur a whole governor and u on here tweeting about the shoes." Noem fired back with a Bible verse from Matthew 16:26.
Like DeSantis, Noem has played up her state's more hands-off approach to handling COVID-19, but the virus has devastated South Dakota. More than 1,900 people have died in the rural state, and it has the eighth-highest death rate per 100,000 people in the U.S., according to data compiled by Statista.
Cancellation Of State Caucuses Or Primaries
The Washington Examiner reported on December 19, 2018, that the South Carolina Republican Party had not ruled out forgoing a primary contest to protect Trump from any primary challengers. Party chairman Drew McKissick stated, “Considering the fact that the entire party supports the president, we’ll end up doing what’s in the president’s best interest.” On January 24, another Washington Examiner report indicated that the Kansas Republican Party was “likely” to scrap its presidential caucus to “save resources”.
In August 2019, the Associated Press reported that the Nevada Republican Party was also contemplating canceling their caucuses, with the state party spokesman, Keith Schipper, saying it "isn't about any kind of conspiracy theory about protecting the president … He's going to be the nominee … This is about protecting resources to make sure that the president wins in Nevada and that Republicans up and down the ballot win in 2020."
Kansas, Nevada and South Carolina’s state committees officially voted on September 7, 2019, to cancel their caucus and primary. The Arizona state Republican Party indicated two days later that it will not hold a primary. These four were joined by the Alaska state Republican party on September 21, when its central committee announced they would not hold a presidential primary.
Virginia Republicans decided to allocate delegates at the state convention.