# Director General Adjunct
callahanisms · 5 months ago
Impressions - Part 02
pairing: tashi duncan x bipoc! fem! reader
word count: 2.3k words
context: 2019. los angeles. tashi duncan has found her perfect actor after a rigorous round of auditions. but did the actor stumble upon the audition by chance? or was it premeditated?
no specific pronouns used. reader is able-bodied and can speak. reader is about 25, while tashi is 31/32.
based on this post. check out part 01.
sorry for taking so long. grad school is really kicking my butt right now.
She doesn’t seem impressed.
The way she turns the pages of the stapled papers, her nails glimmering in the light. There’s a hint of glitter to them, a cream-to-pink ombre. They look really nice, and it’s clear she just got them done. The clear gloss makes her lips look soft and shiny.
Your heart is pounding. You don’t know why. Tashi Duncan asked you for criticism of her work. Were you perhaps not harsh enough? It was hard to tell. The script was just…well, you wanted to keep reading. You had to read it a fourth time to actually start annotating and adding your notes. And it was hard to criticize her vision without any sort of visual. Film was a visual medium, after all, and it was difficult to see exactly what she meant on the page.
“Did you hold back?”
You pick up the glass of sangria and take a small sip. “Well…”
Tashi looks at you expectantly. “I thought you’d be harsher.”
“It’s hard to judge entirely. Because part of film critique is…to see the film…”
Her other hand plays with her fork before stabbing a few leaves and tomatoes from her salad. “So…essentially, you can’t fully critique it without seeing the actual film.”
“A script is only part of it. I just think it’d be nice to have some sort of visual.” Your plate was already clean from your appetizer. It felt odd to be treated to a full-course meal by Tashi. But she said you wouldn’t need to pay. Which was generous, considering how expensive the restaurant was, and being an adjunct didn’t pay as much as you wanted it to. Plus rent was due soon.
“That’s fair. I have a specific vision I want to achieve.” She closes the script and her finger runs over the colored tabs. She liked that the cover page had a key for the colors—by highlighter and by tab. “You seem well aware of that.”
“I’ve watched…most of your stuff. All your films. Majority of the television episodes you’ve directed. And I’ve watched a lot of behind the scenes interviews.” You feel your cheeks heat up. Honestly, you sounded like a bit of a fan.
There’s a smile creeping up on Tashi’s face. “It’s surprisingly rare to find people that have watched your work and…understand your process.” She says. “It takes a certain amount of trust and popularity to be given full control.”
“I’m pretty sure you’ve proven yourself already. Your last film was amazing. 5 stars on Letterboxd.” You hold your glass, tipping it towards the director.
Tashi picks up her cocktail and gently taps the glass against your own. “You and the other hundreds of thousands of people.”
“Where are we?”
Tashi puts the car in park and turns off the ignition with the touch of a button. There’s a click and the rapid retraction of her seatbelt. “My house.” The sound of the door opening is crisp. Or maybe it’s because the sangria made things sound sharper than they should.
It was actually smaller than you thought. But certainly a lot of space for a person living alone. “How many bedrooms?” You unbuckle your seatbelt and climb out of the car. The air feels refreshing against the hot skin of your face. You could feel the vessels throbbing beneath it as your body processed the alcohol. You make sure to close the door all the way and follow after her.
Her keys have a keychain attached to them: a Sonny Angel with a frog hat. And he’s wearing a green shirt and some jeans. “Three beds, one full bath, one half bath,” she says. “It’s expensive, but I can afford it. And one of the bedrooms is…well, you’ll see.” When she looks back at you, it’s teasing. The corner of her mouth is curled into one of her charming smirks. The kind that became a popular meme online. “The other is a guest bedroom. Because you never know when someone’s going to stay the night.”
“So…does that mean your parents drop in often?”
“Yes.” The lock clicks and she pushes the door open. “Hi~” Her voice is suddenly a pitch higher.
When you step into the house and close the door behind you, you see why. A gray tabby cat nuzzles up against Tashi’s leg, mewling. It suddenly jumps, trying to climb up her pants. You remove your shoes, setting them to the side so they aren't in the way of the door. And you make sure to lock the front door. “Who’s this?” You ask.
“I named her K.C.” Tashi gently pries the cat off of her pants and holds her.
“After your character on that spy sitcom?”
“Yes. Precisely.” Her nails scratch K.C.’s chin and there’s a purr in response. “She’s a little troublemaker. But she followed me home one day after I went out to eat. No one came to claim her, so now she’s my cat.”
You take a few steps closer to her and put your finger out. K.C. sniffs the offered finger and nuzzles her nose against it. “How old is she?”
“Around six months. She followed me home when she was only eight weeks old.” Tashi bends down to set the cat down. You follow the director into the kitchen, taking in the decorations as your eyes adjust to the soft lights flicking on. You’re not surprised to find plenty of movie posters on the wall, including ones for Amélie and Tampopo. Which was smart. Putting the movie about food in the kitchen certainly made the hunger return.
Tashi quickly fills K.C.’s bowl with some kibble, wet food, and a little bit of bone broth. She sets it down and the cat immediately begins to eat. “Kittens. They always eat like they’ve never been fed,” you joke.
“There was a time she literally ate my toast.” Tashi slowly plucks the rings off her fingers and washes her hands. They move so delicately. Covered in a thick layer of suds. She scrubs beneath her fingernails. The water washes away the soap and she turns off the faucet, drying her hands. The towel gets between her fingers. Her fingers. Her long fingers. She slides the rings back on. “She jumped up and took the toast right out of my fingers. And it had grape jelly on it—”
“Wait. You eat grape jelly?” You knew hardly anyone who actually liked grape jelly. Aside from your grandfather and younger brother.
Tashi rolls her eyes. “I prefer raspberry. But a friend got me an artisanal grape jelly when he visited the farmers’ market. Said it’d be good to try. And it was good. I just prefer raspberry. The tartness balances better with the sugar.” She begins walking, and when she looks back at you, you know what she’s saying.
Follow me.
Your feet carry you after her, and you can faintly smell the lingering notes of her perfume. Tashi turns the hallway light on and then opens a door off to the side. She flicks the light switch and the room fills with a warm glow. You stand in the doorway while she goes over to the desk and leans against it, arms crossed over her chest.
You’re taken in by the stacked boxes in the corner. There’s an easel by the window. Multiple sheets of paper are taped to the wall. There’s a board with more sheets pinned to it. It definitely feels like an artist’s studio, a stark contrast to the public image of Tashi Duncan, filmmaker.
“So you’re artsy?” You ask.
“You could say that.” She cocks her head to the side. “You can come in, you know.”
“Yeah…I’m afraid I might set this place on fire.” A nervous chuckle escapes you. The room is utterly gorgeous, and some of the pieces on the wall take your breath away. Vibrant. Full of color, with rich shading. There are some photographs taped around the room too. Mostly landscapes and settings. One collection is just a room at different angles.
“You won’t. Just come take a look. These are my storyboards.”
“...Huh!”
Your jaw practically drops.
These were Tashi Duncan’s storyboards?
They were on a level with Ridley Scott’s. That was kind of mind-blowing. “Y-Your storyboards?”
“I just have a really tedious process.” Tashi uncrosses her arms and rests them between her thighs. “It’s a little…frustrating. But it really helps get the images out of my head and onto something tangible. And even if it doesn’t end up looking exactly the way I want it to, I’m still satisfied, because my vision is out of my head.”
Your step is gentle and you walk over to the board first. This was clearly the working storyboard, with guidelines and vague shapes to indicate lighting and shadows. It was easy to see that Tashi’s strong suit was perspective. Your eyes slowly move to the big paper taped to the wall. A woman looking up. The light is shining down while the background is bathed in a dark blue light. Blood covers her mouth and drips down her chin and neck. The neckline of her dress is soaked red with blood. And…
“She kind of looks like me.”
Tashi purses her lips. “Yeah.” She lets out a small laugh. “It just came to me in a dream.”
You look back at her, smiling. “It’s funny how dreams work, huh? The kind of people our subconscious recognizes and puts together. Which reminds me. I think you should maybe lean more into psychoanalysis for your movie. I know the idea of id, ego, and superego is overdone and may be boring…but I think there would be something interesting in presenting your three primary characters in that way. It never gets old. And honestly, psychoanalytical readings are never not trendy.”
“That’s actually an amazing suggestion.” Tashi licks her lips. You fail to notice her eyes trailing down your back.
“I’m happy you think so. I think a lot of film scholars would just go crazy over it.” You look at her. “Also, where’s the bathroom?”
“Down the hall to your right. It has a peacock on the door.”
“Got it. I’ll be back. I just had a lot of sangria.”
Tashi watches you leave. And she turns back to her desk, collecting the photos together and putting them in a neat pile. Pictures of you. Some of them were stills. Some your headshots. Others from your Instagram account. She opens the drawer and lifts up a manila folder and sketchbook, shoving the photos beneath. The drawer slams shut and she opens another drawer off to the side, pulling out some more books.
She hears the sound of the toilet flushing and then the running water of the sink. You come back within three minutes, hands dried and rubbing lotion into your skin. “Where’d you get the lotion in the bathroom?”
“Costco.”
“Damn. That’s hot.”
You realize what you just said.
“I-I mean…it’s hot that you have a Costco membership!”
Tashi can’t help but laugh. “I would say the same to someone. Do you want something to drink? Some tea? Or maybe some water?”
“I think water would be good.”
“Be right back.” When Tashi leaves the room, her clothes brush against you. You feel the goosebumps forming over your arm. And there’s her perfume. It was addictive.
You decide to walk around the room, taking in the storyboards more. You don’t dare touch the boxes, despite the urge to look. There’s something else that satiates your curiosity: the books on the desk. You pick one up and carefully open it to a random page. It’s full of sketches. You recognize one of them as actor and producer Art Donaldson. You’d forgotten that he was in Tashi’s second film, on top of producing it.
“Like them?”
You nearly jump, slamming the book closed. Tashi walks over and sets a mug of water on the desk. She hands you the other one and you take it. There are flowers on it. “Sorry. I was just looking—”
“It’s fine. You’re already in here. You might as well look.” Tashi shrugs.
“You’re like…amazing!”
“It took a lot of practice.” Tashi grabs the more run-down book and flips it open. You purse your lips to stifle a laugh. “It’s okay, you know. We all start somewhere. Besides, Rian Johnson’s storyboards look the same. And this was my first time directing.”
Tashi Duncan’s directorial debut. Inside Audrey Horne.
“You’re right. I mean if it gets the job done…what’s the point in arguing?” You take a sip of the cold water. “So you practiced and now…you just do full on art pieces?”
“I like experimenting with color.” She shrugs. “And naturally if I am taking inspiration from Dario Argento and technicolor, then it’s best to figure out what colors mesh well.”
“So what do you use?”
“Pastels. I like my drawings to look smooth.”
“You do have a way with color.” Your eyes keep going back to the big drawing on the wall, of your lookalike staring up at something in both awe and horror. “I’m guessing that’s the scene of when I cannibalize my former castmate?”
“It is. I have a specific idea of what that shot would look like.” Tashi takes a sip, her brown eyes watching your body language. You’re at ease. You’re relaxed. You’re in the mood for chatter and to hear more, like the film nerd that you were. “So…do you have anything else you want to add?”
“I mean…your script is solid. And seeing what you intend to make just…it’s awesome to see what your vision is.”
Even though Tashi said she didn’t want a yes man, she still liked getting praise. It was necessary to know what she was doing right and how to keep it right. But hearing it from you was different. It was more special. So she decides to prompt you.
“Tell me what’s on your mind.”
planetfkd · 4 months ago
Of particular concern are signals of massive earthquakes in the region’s geologic history. Many researchers have chased clues of the last “big one”: an 8.7-magnitude earthquake in 1700. They’ve pieced together the event’s history using centuries-old records of tsunamis, Native American oral histories, physical evidence in ghost forests drowned by saltwater and limited maps of the fault. 
But no one had mapped the fault structure comprehensively — until now. A study published Friday in the journal Science Advances describes data gathered during a 41-day research voyage in which a vessel trailed a miles-long cable along the fault to listen to the seafloor and piece together an image.
The team completed a detailed map of more than 550 miles of the subduction zone, down to the Oregon-California border.
Their work will give modelers a sharper view of the possible impacts of a megathrust earthquake there — the term for a quake that occurs in a subduction zone, where one tectonic plate is thrust under another. It will also provide planners a closer, localized look at risks to communities along the Pacific Northwest coast and could help redefine earthquake building standards. 
“It’s like having coke-bottle glasses on and then you remove the glasses and you have the right prescription,” said Suzanne Carbotte, a lead author of the paper and a marine geophysicist and research professor at Columbia University’s Lamont-Doherty Earth Observatory. “We had a very fuzzy low-resolution view before.”
The scientists found that the subduction zone is much more complex than they previously understood: It is divided into four segments that the researchers believe could rupture independently of one another or together all at once. The segments have different types of rock and varying seismic characteristics — meaning some could be more dangerous than others. 
Earthquake and tsunami modelers are beginning to assess how the new data affects earthquake scenarios for the Pacific Northwest. 
Kelin Wang, a research scientist at the Geological Survey of Canada who was not involved in the study, said his team, which focuses on earthquake hazard and tsunami risk, is already using the data to inform projections. 
“The accuracy and this resolution is truly unprecedented. And it’s an amazing data set,” said Wang, who is also an adjunct professor at the University of Victoria in British Columbia. “It just allows us to do a better job to assess the risk and have information for the building codes and zoning.” 
Harold Tobin, a co-author of the paper and the director of the Pacific Northwest Seismic Network, said that although the data will help fine-tune projections, it doesn’t change a tough-to-swallow reality of living in the Pacific Northwest.
“We have the potential for earthquakes and tsunamis as large as the biggest ones we’ve experienced on the planet,” said Tobin, who is also a University of Washington professor. “Cascadia seems capable of generating a magnitude 9 or a little smaller or a little bigger.” 
A quake that powerful could cause shaking that lasts about five minutes and generate tsunami waves up to 80 feet tall. It would damage well over half a million buildings, according to emergency planning documents. 
Neither Oregon nor Washington is sufficiently prepared.
To map the subduction zone, researchers at sea performed active source seismic imaging, a technique that sends sound to the ocean floor and then processes the echoes that return. The method is often used for oil and gas exploration. 
They towed a 9-plus-mile-long cable, called a streamer, behind the boat; its 1,200 hydrophones captured the returning echoes.
“That gives us a picture of what the subsurface looks like,” Carbotte said.
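The arithmetic at the heart of that picture is simple enough to sketch. The toy function below is an illustration only, not the study’s processing pipeline (real surveys involve layered velocity models, stacking, and migration), and the 1,500 m/s sound speed is an assumed round figure for seawater:

```python
# Toy sketch of reflection imaging: an echo's two-way travel time, combined
# with an assumed sound speed, yields the depth of the reflecting layer.
# Real marine surveys use layered velocity models and heavy signal
# processing; this shows only the basic arithmetic.

def reflector_depth(two_way_time_s: float, velocity_m_s: float = 1500.0) -> float:
    """Depth (meters) of a reflector from a two-way echo travel time.

    1500 m/s is a rough sound speed for seawater (an assumption here);
    sediment and rock transmit sound faster.
    """
    return velocity_m_s * two_way_time_s / 2.0  # halved: sound goes down and back

# An echo arriving 4 seconds after the source pulse implies a reflector
# roughly 3 km below the ship.
print(reflector_depth(4.0))  # 3000.0
```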
Trained marine mammal observers alerted the crew to any sign of whales or other animals, since the sound generated by this kind of technology can be disruptive and can harm marine creatures. Carbotte said the new research makes it clearer that the entire Cascadia fault might not rupture at once.
“It requires an 8.7 to get a tsunami all the way to Japan,” Tobin said.
"The next earthquake that happens at Cascadia could be rupturing just one of these segments or it could be rupturing the whole margin,” Carbotte said, adding that several individual segments are thought to be capable of producing at least magnitude-8 earthquakes. 
Over the past century, scientists have only observed five magnitude-9.0 or higher earthquakes — all megathrust temblors like the one predicted for the Cascadia Subduction Zone. 
Scientists pieced together an understanding of the last such Cascadia quake, in 1700, in part via Japanese records of an unusual orphan tsunami that was not preceded by shaking there. 
The people who recorded the incident in Japan couldn’t have known that the ground had shaken an ocean away, in the present-day United States. 
Today, the Cascadia Subduction Zone remains eerily quiet. In other subduction zones, scientists often observe small earthquakes frequently, which makes the area easier to map, according to Carbotte. That’s not the case here. 
Scientists have a handful of theories about why: Wang said the zone may be becoming quieter as the fault accumulates stress. And by now, the next big one is probably nearly due.
“The recurrence interval for this subduction zone for big events is on the order of 500 years,” Wang said. “It’s hard to know exactly when it will happen, but certainly if you compare this to other subduction zones, it is quite late.”
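To put that quoted interval in rough perspective, here is a back-of-the-envelope calculation. It assumes a memoryless constant-rate model, which real hazard assessments replace with time-dependent distributions, so the output is illustrative only:

```python
import math

MEAN_INTERVAL_YEARS = 500.0  # "on the order of 500 years," per Wang
LAST_EVENT_YEAR = 1700       # the last Cascadia megathrust earthquake

def prob_event_within(horizon_years: float,
                      mean_interval: float = MEAN_INTERVAL_YEARS) -> float:
    """P(at least one event within horizon_years) under a constant-rate
    Poisson model -- a simplifying assumption, not a real hazard forecast."""
    return 1.0 - math.exp(-horizon_years / mean_interval)

print(2024 - LAST_EVENT_YEAR)          # 324 years elapsed as of 2024
print(f"{prob_event_within(50):.0%}")  # ~10% chance within the next 50 years
```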
religion-is-a-mental-illness · 11 months ago
By: Edward Schlosser
Published: Jun 3, 2015
I’m a professor at a midsize state school. I have been teaching college classes for nine years now. I have won (minor) teaching awards, studied pedagogy extensively, and almost always score highly on my student evaluations. I am not a world-class teacher by any means, but I am conscientious; I attempt to put teaching ahead of research, and I take a healthy emotional stake in the well-being and growth of my students.
Things have changed since I started teaching. The vibe is different. I wish there were a less blunt way to put this, but my students sometimes scare me — particularly the liberal ones.
Not, like, in a person-by-person sense, but students in general. The student-teacher dynamic has been reenvisioned along a line that’s simultaneously consumerist and hyper-protective, giving each and every student the ability to claim Grievous Harm in nearly any circumstance, after any affront, and a teacher’s formal ability to respond to these claims is limited at best.
What it was like before
In early 2009, I was an adjunct, teaching a freshman-level writing course at a community college. Discussing infographics and data visualization, we watched a flash animation describing how Wall Street’s recklessness had destroyed the economy.
The video stopped, and I asked whether the students thought it was effective. An older student raised his hand.
”What about Fannie and Freddie?” he asked. “Government kept giving homes to black people, to help out black people, white people didn’t get anything, and then they couldn’t pay for them. What about that?”
I gave a quick response about how most experts would disagree with that assumption, that it was actually an oversimplification, and pretty dishonest, and isn’t it good that someone made the video we just watched to try to clear things up? And, hey, let’s talk about whether that was effective, okay? If you don’t think it was, how could it have been?
The rest of the discussion went on as usual.
The next week, I got called into my director’s office. I was shown an email, sender name redacted, alleging that I “possessed communistical [sic] sympathies and refused to tell more than one side of the story.” The story in question wasn’t described, but I suspect it had to do with whether or not the economic collapse was caused by poor black people.
My director rolled her eyes. She knew the complaint was silly bullshit. I wrote up a short description of the past week’s class work, noting that we had looked at several examples of effective writing in various media and that I always made a good faith effort to include conservative narratives along with the liberal ones.
Along with a carbon-copy form, my description was placed into a file that may or may not have existed. Then ... nothing. It disappeared forever; no one cared about it beyond their contractual duties to document student concerns. I never heard another word of it again.
That was the first, and so far only, formal complaint a student has ever filed against me.
Now boat-rocking isn’t just dangerous — it’s suicidal
This isn’t an accident: I have intentionally adjusted my teaching materials as the political winds have shifted. (I also make sure all my remotely offensive or challenging opinions, such as this article, are expressed either anonymously or pseudonymously). Most of my colleagues who still have jobs have done the same. We’ve seen bad things happen to too many good teachers — adjuncts getting axed because their evaluations dipped below a 3.0, grad students being removed from classes after a single student complaint, and so on.
I once saw an adjunct not get his contract renewed after students complained that he exposed them to “offensive” texts written by Edward Said and Mark Twain. His response, that the texts were meant to be a little upsetting, only fueled the students’ ire and sealed his fate. That was enough to get me to comb through my syllabi and cut out anything I could see upsetting a coddled undergrad, texts ranging from Upton Sinclair to Maureen Tkacik — and I wasn’t the only one who made adjustments, either.
I am frightened sometimes by the thought that a student would complain again like he did in 2009. Only this time it would be a student accusing me not of saying something too ideologically extreme — be it communism or racism or whatever — but of not being sensitive enough toward his feelings, of some simple act of indelicacy that’s considered tantamount to physical assault. As Northwestern University professor Laura Kipnis writes, “Emotional discomfort is [now] regarded as equivalent to material injury, and all injuries have to be remediated.” Hurting a student’s feelings, even in the course of instruction that is absolutely appropriate and respectful, can now get a teacher into serious trouble.
In 2009, the subject of my student’s complaint was my supposed ideology. I was communistical, the student felt, and everyone knows that communisticism is wrong. That was, at best, a debatable assertion. And as I was allowed to rebut it, the complaint was dismissed with prejudice. I didn’t hesitate to reuse that same video in later semesters, and the student’s complaint had no impact on my performance evaluations.
In 2015, such a complaint would not be delivered in such a fashion. Instead of focusing on the rightness or wrongness (or even acceptability) of the materials we reviewed in class, the complaint would center solely on how my teaching affected the student’s emotional state. As I cannot speak to the emotions of my students, I could not mount a defense about the acceptability of my instruction. And if I responded in any way other than apologizing and changing the materials we reviewed in class, professional consequences would likely follow.
I wrote about this fear on my blog, and while the response was mostly positive, some liberals called me paranoid, or expressed doubt about why any teacher would nix the particular texts I listed. I guarantee you that these people do not work in higher education, or if they do they are at least two decades removed from the job search. The academic job market is brutal. Teachers who are not tenured or tenure-track faculty members have no right to due process before being dismissed, and there’s a mile-long line of applicants eager to take their place. And as writer and academic Freddie DeBoer writes, they don’t even have to be formally fired — they can just not get rehired. In this type of environment, boat-rocking isn’t just dangerous, it’s suicidal, and so teachers limit their lessons to things they know won’t upset anybody.
The real problem: a simplistic, unworkable, and ultimately stifling conception of social justice
This shift in student-teacher dynamic placed many of the traditional goals of higher education — such as having students challenge their beliefs — off limits. While I used to pride myself on getting students to question themselves and engage with difficult concepts and texts, I now hesitate. What if this hurts my evaluations and I don’t get tenure? How many complaints will it take before chairs and administrators begin to worry that I’m not giving our customers — er, students, pardon me — the positive experience they’re paying for? Ten? Half a dozen? Two or three?
This phenomenon has been widely discussed as of late, mostly as a means of deriding political, economic, or cultural forces writers don’t much care for. Commentators on the left and right have recently criticized the sensitivity and paranoia of today’s college students. They worry about the stifling of free speech, the implementation of unenforceable conduct codes, and a general hostility against opinions and viewpoints that could cause students so much as a hint of discomfort.
I agree with some of these analyses more than others, but they all tend to be too simplistic. The current student-teacher dynamic has been shaped by a large confluence of factors, and perhaps the most important of these is the manner in which cultural studies and social justice writers have comported themselves in popular media. I have a great deal of respect for both of these fields, but their manifestations online, their desire to democratize complex fields of study by making them as digestible as a TGIF sitcom, has led to adoption of a totalizing, simplistic, unworkable, and ultimately stifling conception of social justice. The simplicity and absolutism of this conception has combined with the precarity of academic jobs to create higher ed’s current climate of fear, a heavily policed discourse of semantic sensitivity in which safety and comfort have become the ends and the means of the college experience.
This new understanding of social justice politics resembles what University of Pennsylvania political science professor Adolph Reed Jr. calls a politics of personal testimony, in which the feelings of individuals are the primary or even exclusive means through which social issues are understood and discussed. Reed derides this sort of political approach as essentially being a non-politics, a discourse that “is focused much more on taxonomy than politics [which] emphasizes the names by which we should call some strains of inequality [ ... ] over specifying the mechanisms that produce them or even the steps that can be taken to combat them.” Under such a conception, people become more concerned with signaling goodness, usually through semantics and empty gestures, than with actually working to effect change.
Herein lies the folly of oversimplified identity politics: while identity concerns obviously warrant analysis, focusing on them too exclusively draws our attention so far inward that none of our analyses can lead to action. Rebecca Reilly-Cooper, a political philosopher at the University of Warwick, worries about the effectiveness of a politics in which “particular experiences can never legitimately speak for any one other than ourselves, and personal narrative and testimony are elevated to such a degree that there can be no objective standpoint from which to examine their veracity.” Personal experience and feelings aren’t just a salient touchstone of contemporary identity politics; they are the entirety of these politics. In such an environment, it’s no wonder that students are so prone to elevate minor slights to protestable offenses.
(It’s also why seemingly piddling matters of cultural consumption warrant much more emotional outrage than concerns with larger material implications. Compare the number of web articles surrounding the supposed problematic aspects of the newest Avengers movie with those complaining about, say, the piecemeal dismantling of abortion rights. The former outnumber the latter considerably, and their rhetoric is typically much more impassioned and inflated. I’d discuss this in my classes — if I weren’t too scared to talk about abortion.)
The press for actionability, or even for comprehensive analyses that go beyond personal testimony, is hereby considered redundant, since all we need to do to fix the world’s problems is adjust the feelings attached to them and open up the floor for various identity groups to have their say. All the old, enlightened means of discussion and analysis — from due process to the scientific method — are dismissed as being blind to emotional concerns and therefore unfairly skewed toward the interest of straight white males. All that matters is that people are allowed to speak, that their narratives are accepted without question, and that the bad feelings go away.
So it’s not just that students refuse to countenance uncomfortable ideas — they refuse to engage them, period. Engagement is considered unnecessary, as the immediate, emotional reactions of students contain all the analysis and judgment that sensitive issues demand. As Judith Shulevitz wrote in the New York Times, these refusals can shut down discussion in genuinely contentious areas, such as when Oxford canceled an abortion debate. More often, they affect surprisingly minor matters, as when Hampshire College disinvited an Afrobeat band because their lineup had too many white people in it.
When feelings become more important than issues
At the very least, there’s debate to be had in these areas. Ideally, pro-choice students would be comfortable enough in the strength of their arguments to subject them to discussion, and a conversation about a band’s supposed cultural appropriation could take place alongside a performance. But these cancellations and disinvitations are framed in terms of feelings, not issues. The abortion debate was canceled because it would have imperiled the “welfare and safety of our students.” The Afrofunk band’s presence would not have been “safe and healthy.” No one can rebut feelings, and so the only thing left to do is shut down the things that cause distress — no argument, no discussion, just hit the mute button and pretend eliminating discomfort is the same as effecting actual change.
In a New York Magazine piece, Jonathan Chait described the chilling effect this type of discourse has upon classrooms. Chait’s piece generated seismic backlash, and while I disagree with much of his diagnosis, I have to admit he does a decent job of describing the symptoms. He cites an anonymous professor who says that “she and her fellow faculty members are terrified of facing accusations of triggering trauma.” Internet liberals pooh-poohed this comment, likening the professor to one of Tom Friedman’s imaginary cab drivers. But I’ve seen what’s being described here. I’ve lived it. It’s real, and it affects liberal, socially conscious teachers much more than conservative ones.
If we wish to remove this fear, and to adopt a politics that can lead to more substantial change, we need to adjust our discourse. Ideally, we can have a conversation that is conscious of the role of identity issues and confident of the ideas that emanate from the people who embody those identities. It would call out and criticize unfair, arbitrary, or otherwise stifling discursive boundaries, but avoid falling into pettiness or nihilism. It wouldn’t be moderate, necessarily, but it would be deliberate. It would require effort.
In the start of his piece, Chait hypothetically asks if “the offensiveness of an idea [can] be determined objectively, or only by recourse to the identity of the person taking offense.” Here, he’s getting at the concerns addressed by Reed and Reilly-Cooper, the worry that we’ve turned our analysis so completely inward that our judgment of a person’s speech hinges more upon their identity signifiers than on their ideas.
A sensible response to Chait’s question would be that this is a false binary, and that ideas can and should be judged both by the strength of their logic and by the cultural weight afforded to their speaker’s identity. Chait appears to believe only the former, and that’s kind of ridiculous. Of course someone’s social standing affects whether their ideas are considered offensive, or righteous, or even worth listening to. How can you think otherwise?
We destroy ourselves when identity becomes our sole focus
Feminists and anti-racists recognize that identity does matter. This is indisputable. If we subscribe to the belief that ideas can be judged within a vacuum, uninfluenced by the social weight of their proponents, we perpetuate a system in which arbitrary markers like race and gender influence the perceived correctness of ideas. We can’t overcome prejudice by pretending it doesn’t exist. Focusing on identity allows us to interrogate the process through which white males have their opinions taken at face value, while women, people of color, and non-normatively gendered people struggle to have their voices heard.
But we also destroy ourselves when identity becomes our sole focus. Consider a tweet I linked to (which has since been removed. See editor’s note below.), from a critic and artist, in which she writes: “When ppl go off on evo psych, its always some shady colonizer white man theory that ignores nonwhite human history. but ‘science’. Ok ... Most ‘scientific thought’ as u know it isnt that scientific but shaped by white patriarchal bias of ppl who claimed authority on it.”
This critic is intelligent. Her voice is important. She realizes, correctly, that evolutionary psychology is flawed, and that science has often been misused to legitimize racist and sexist beliefs. But why draw that out to questioning most “scientific thought”? Can’t we see how distancing that is to people who don’t already agree with us? And tactically, can’t we see how shortsighted it is to be skeptical of a respected manner of inquiry just because it’s associated with white males?
This sort of perspective is not confined to Twitter and the comments sections of liberal blogs. It was born in the more nihilistic corners of academic theory, and its manifestations on social media have severe real-world implications. In another instance, two female professors of library science publicly outed and shamed a male colleague they accused of being creepy at conferences, going so far as to openly celebrate the prospect of ruining his career. I don’t doubt that some men are creepy at conferences — they are. And for all I know, this guy might be an A-level creep. But part of the female professors’ shtick was the strong insistence that harassment victims should never be asked for proof, that an enunciation of an accusation is all it should ever take to secure a guilty verdict. The identity of the victims overrides the identity of the harasser, and that’s all the proof they need.
This is terrifying. No one will ever accept that. And if that becomes a salient part of liberal politics, liberals are going to suffer tremendous electoral defeat.
Debate and discussion would ideally temper this identity-based discourse, make it more usable and less scary to outsiders. Teachers and academics are the best candidates to foster this discussion, but most of us are too scared and economically disempowered to say anything. Right now, there’s nothing much to do other than sit on our hands and wait for the ascension of conservative political backlash — hop into the echo chamber, pile invective upon the next person or company who says something vaguely insensitive, insulate ourselves further and further from any concerns that might resonate outside of our own little corner of Twitter.
--
[embedded YouTube video]
==
This has been going on for over a decade. The correct response is to mock and laugh at the people complaining, and point out that they're not ready for the big wide world outside their kindergarten mindset, so they'd be better off going back home to mommy and daddy. Not validate and endorse their feelings. We need to get back to that.
plethoraworldatlas · 11 months ago
Biden administration officials attempted Monday to downplay the significance of a newly passed United Nations Security Council resolution, drawing ire from human rights advocates who said the U.S. is undercutting international law and stonewalling attempts to bring Israel's devastating military assault on Gaza to an end.
The resolution "demands an immediate cease-fire for the month of Ramadan respected by all parties, leading to a lasting sustainable cease-fire." The U.S., which previously vetoed several cease-fire resolutions, opted to abstain on Monday, allowing the measure to pass.
Shortly after the resolution's approval, several administration officials—including State Department spokesman Matthew Miller, White House National Security Council spokesman John Kirby, and U.S. Ambassador to the U.N. Linda Thomas-Greenfield—falsely characterized the measure as "nonbinding."
"It's a nonbinding resolution," Kirby told reporters. "So, there's no impact at all on Israel and Israel's ability to continue to go after Hamas."
Josh Ruebner, an adjunct lecturer at Georgetown University and former policy director of the U.S. Campaign for Palestinian Rights, wrote in response that "there is no such thing as a 'nonbinding' Security Council resolution."
"Israel's failure to abide by this resolution must open the door to the immediate imposition of Chapter VII sanctions," Ruebner wrote.
Beatrice Fihn, the director of Lex International and former executive director of the International Campaign to Abolish Nuclear Weapons, condemned what she called the Biden administration's "appalling behavior" in the wake of the resolution's passage. Fihn said the administration's downplaying of the resolution shows how the U.S. works to "openly undermine and sabotage the U.N. Security Council, the 'rules-based order,' and international law."
In a Monday op-ed for Common Dreams, Phyllis Bennis, a senior fellow at the Institute for Policy Studies, warned that administration officials' claim that the resolution was "nonbinding" should be seen as "setting the stage for the U.S. government to violate the U.N. Charter by refusing to be bound by the resolution's terms."
While all U.N. Security Council resolutions are legally binding, they're difficult to enforce and regularly ignored by the Israeli government, which responded with outrage to the latest resolution and canceled an Israeli delegation's planned visit to the U.S.
Israel Katz, Israel's foreign minister, wrote on social media Monday that "Israel will not cease fire."
The resolution passed amid growing global alarm over the humanitarian crisis that Israel has inflicted on the Gaza Strip, where most of the population of around 2.2 million is displaced and at increasingly dire risk of starvation.
Amnesty International secretary-general Agnes Callamard said Monday that it was "just plain irresponsible" of U.S. officials to "suggest that a resolution meant to save lives and address massive devastation and suffering can be disregarded."
jcmarchi · 1 year ago
MLK Celebration Gala pays tribute to Martin Luther King Jr. and his writings on “the goal of true education”
After a week of festivities around campus, members of the MIT community gathered Saturday evening in the Boston Marriott Kendall Square ballroom to celebrate the life and legacy of Martin Luther King Jr. Marking 50 years of this annual celebration at MIT, the gala event’s program was loosely organized around a line in King’s essay, “The Purpose of Education,” which he penned as an undergraduate at Morehouse College:
“We must remember that intelligence is not enough,” King wrote. “Intelligence plus character — that is the goal of true education.”
Senior Myles Noel was the master of ceremonies for the evening and welcomed one and all. Minister DiOnetta Jones Crayton, former director of the Office of Minority Education and associate dean of minority education, delivered the invocation, exhorting the audience to embrace “the fiery urgency of now.” Next, MIT President Sally Kornbluth shared her remarks.
She acknowledged that at many institutions, diversity and inclusion efforts are eroding. Kornbluth reiterated her commitment to these efforts, saying, “I want to be clear about how important I believe it is to keep such efforts strong — and to make them the best they can be. The truth is, by any measure, MIT has never been more diverse, and it has never been more excellent. And we intend to keep it that way.”
Kornbluth also recognized the late Paul Parravano, co-director of MIT’s Office of Government and Community Relations, who was a staff member at MIT for 33 years as well as the longest-serving member on the MLK Celebration Committee. Parravano’s “long and distinguished devotion to the values and goals of Dr. Martin Luther King, Jr. inspires us all,” Kornbluth said, presenting his family with the 50th Anniversary Lifetime Achievement Award. 
Next, students and staff shared personal reflections. Zina Queen, office manager in the Department of Political Science, noted that her family has been a part of the MIT community for generations. Her grandmother, Rita, her mother, Wanda, and her daughter have all worked or are currently working at the Institute. Queen pointed out that her family epitomizes another of King’s oft-repeated quotes, “Every man is an heir to a legacy of dignity and worth.”
Senior Tamea Cobb noted that MIT graduates have a particular power in the world that they must use strategically and with intention. “Education and service go hand and hand,” she said, adding that she intends “every one of my technical abilities will be used to pursue a career that is fulfilling, expansive, impactful, and good.”
Graduate student Austin K. Cole ’24 addressed the Israel-Hamas conflict and the MIT administration. As he spoke, some attendees left their seats to stand with Cole at the podium. Cole closed his remarks with a plea to resist state and structural violence, and instead focus on relationship and mutuality.
After dinner, incoming vice president for equity and inclusion Karl Reid ’84, SM ’85 honored Adjunct Professor Emeritus Clarence Williams for his distinguished service to the Institute. Williams was an assistant to three MIT presidents, served as director of the Office of Minority Education, taught in the Department of Urban Planning, initiated the MIT Black History Project, and mentored hundreds of students. Reid was one of those students, and he shared a few of his mentor’s oft-repeated phrases:
“Do the work and let the talking take care of itself.”
“Bad ideas kill themselves; great ideas flourish.”
In closing, Reid exhorted the audience to create more leaders who, like Williams, embody excellence and mutual respect for others.
The keynote address was given by civil rights activist Janet Moses, a member of the Student Nonviolent Coordinating Committee (SNCC) in the 1960s; a physician who worked for a time as a pediatrician at MIT Health; a longtime resident of Cambridge, Massachusetts; and a co-founder, with her husband, Robert Moses, of the Algebra Project, a pioneering program grounded in the belief “that in the 21st century every child has a civil right to secure math literacy — the ability to read, write, and reason with the symbol systems of mathematics.”
A striking image of a huge new building planned for New York City appeared on the screen behind Moses during her address. It was a rendering of a new jail being built at an estimated cost of $3 billion. Against this background, she described the trajectory of the “carceral state,” which began in 1771 with the Mansfield Judgement in England. At the time, “not even South Africa had a set of race laws as detailed as those in the U.S.,” Moses observed.
Today, the carceral state uses all levels of government to maintain a racial caste system that is deeply entrenched, Moses argued, drawing a connection between the purported need for a new prison complex and a statistic that Black people in New York state are three times more likely than whites to be convicted for a crime.
She referenced a McKinsey study estimating that it will take Black people over three centuries to achieve a quality of life on parity with whites. Despite the enormity of this challenge, Moses encouraged the audience to “rock the boat and churn the waters of the status quo.” She also pointed out that “there is joy in the struggle.”
Symbols of joy were also on display at the Gala in the form of original visual art and poetry, and a quilt whose squares were contributed by MIT staff, students, and alumni hailing from across the Institute.
Quilts are a physical manifestation of the legacy of the enslaved in America and their descendants — the ability to take scraps and leftovers to create something both practical and beautiful. The 50th anniversary quilt also incorporated a line from King’s highly influential “I Have a Dream Speech”:
“One day, all God’s children will have the riches of freedom and the security of justice.”
ahopkins1965 · 4 days ago
How AI Threatens Democracy
Sarah Kreps
Doug Kriner
Issue Date: October 2023 · Volume 34 · Issue 4 · Pages 122–31
The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails for these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater digital literacy on the part of the public and elites alike.
Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. For context, it took the video-streaming service Netflix, now a household name, three-and-a-half years to reach one million monthly users. But unlike Netflix, the meteoric rise of ChatGPT and its potential for good or ill sparked considerable debate. Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it “hijack democracy,” as one New York Times op-ed put it, by enabling mass, phony inputs to perhaps influence democratic representation?1 And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose an existential threat to humanity?2
About the Authors
Sarah Kreps
Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and the director of the Tech Policy Institute at Cornell University.
View all work by Sarah Kreps
Doug Kriner
Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.
View all work by Doug Kriner
New technologies raise new questions and concerns of different magnitudes and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent, nor necessarily plausible. Nick Bostrom’s paperclip scenario, in which a machine programmed to optimize paperclips eliminates everything standing in its way of achieving that goal, is not on the verge of becoming reality.3 Whether children or university students use AI tools as shortcuts is a valuable pedagogical debate, but one that should resolve itself as the applications become more seamlessly integrated into search engines. The employment consequences of generative AI will ultimately be difficult to adjudicate since economies are complex, making it difficult to isolate the net effect of AI-instigated job losses versus industry gains. Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.
The most problematic aspect of generative AI is that it hides in plain sight, producing enormous volumes of content that can flood the media landscape, the internet, and political communication with meaningless drivel at best and misinformation at worst. For government officials, this undermines efforts to understand constituent sentiment, threatening the quality of democratic representation. For voters, it threatens efforts to monitor what elected officials do and the results of their actions, eroding democratic accountability. A reasonable cognitive prophylactic measure in such a media environment would be to believe nothing, a nihilism that is at odds with vibrant democracy and corrosive to social trust. As objective reality recedes even further from the media discourse, those voters who do not tune out altogether will likely begin to rely even more heavily on other heuristics, such as partisanship, which will only further exacerbate polarization and stress on democratic institutions.
Threats to Democratic Representation
Democracy, as Robert Dahl wrote in 1972, requires “the continued responsiveness of the government to the preferences of its citizens.”4 For elected officials to be responsive to the preferences of their constituents, however, they must first be able to discern those preferences. Public-opinion polls—which (at least for now) are mostly immune from manipulation by AI-generated content—afford elected officials one window into their constituents’ preferences. But most citizens lack even basic political knowledge, and levels of policy-specific knowledge are likely lower still.5 As such, legislators have strong incentives to be the most responsive to constituents with strongly held views on a specific policy issue and those for whom the issue is highly salient. Written correspondence has long been central to how elected officials keep their finger on the pulse of their districts, particularly to gauge the preferences of those most intensely mobilized on a given issue.6
In an era of generative AI, however, the signals sent by the balance of electronic communications about pressing policy issues may be severely misleading. Technological advances now allow malicious actors to generate false “constituent sentiment” at scale by effortlessly creating unique messages taking positions on any side of a myriad of issues. Even with old technology, legislators struggled to discern between human-written and machine-generated communications.
In a field experiment conducted in 2020 in the United States, we composed advocacy letters on six different issues and then used those letters to train what was then the state-of-the-art generative AI model, GPT-3, to write hundreds of left-wing and right-wing advocacy letters. We sent randomized AI- and human-written letters to 7,200 state legislators, a total of about 35,000 emails. We then compared response rates to the human-written and AI-generated correspondence to assess the extent to which legislators were able to discern (and therefore not respond to) machine-written appeals. On three issues, the response rates to AI- and human-written messages were statistically indistinguishable. On three other issues, the response rates to AI-generated emails were lower—but only by 2 percent, on average.7 This suggests that a malicious actor capable of easily generating thousands of unique communications could potentially skew legislators’ perceptions of which issues are most important to their constituents as well as how constituents feel about any given issue.
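For a rough sense of how such response rates are compared, the sketch below runs a standard two-proportion z-test. The reply counts are hypothetical stand-ins chosen for illustration, not the study’s actual data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: the two response rates are equal."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)  # pooled response rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 1,050 replies to 17,500 human-written emails versus
# 980 replies to 17,500 AI-written ones, a gap of roughly half a point.
print(two_proportion_z_test(1050, 17500, 980, 17500))
# p ≈ 0.11: a gap this small would not be statistically distinguishable,
# echoing the "indistinguishable" result described above for some issues.
```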
In the same way, generative AI could strike a double blow against the quality of democratic representation by rendering obsolete the public-comment process through which citizens can seek to influence the actions of the regulatory state. Legislators necessarily write statutes in broad brushstrokes, granting administrative agencies considerable discretion not only to resolve technical questions requiring substantive expertise (e.g., specifying permissible levels of pollutants in the air and water), but also to make broader judgements about values (e.g., the acceptable tradeoffs between protecting public health and not unduly restricting economic growth).8 Moreover, in an era of intense partisan polarization and frequent legislative gridlock on pressing policy priorities, U.S. presidents have increasingly sought to advance their policy agendas through administrative rulemaking.
Moving the locus of policymaking authority from elected representatives to unelected bureaucrats raises concerns of a democratic deficit. The U.S. Supreme Court raised such concerns in West Virginia v. EPA (2022), articulating and codifying the major questions doctrine, which holds that agencies do not have authority to effect major changes in policy absent clear statutory authorization from Congress. The Court may go even further in the pending Loper Bright Enterprises v. Raimondo case and overturn the Chevron doctrine, which has given agencies broad latitude to interpret ambiguous congressional statutes for nearly four decades, thus further tightening the constraints on policy change through the regulatory process.
Not everyone agrees that the regulatory process is undemocratic, however. Some scholars argue that the guaranteed opportunities for public participation and transparency during the public-notice and comment period are “refreshingly democratic,”9 and extol the process as “democratically accountable, especially in the sense that decision-making is kept above board and equal access is provided to all.”10 Moreover, the advent of the U.S. government’s electronic-rulemaking (e-rulemaking) program in 2002 promised to “enhance public participation . . . so as to foster better regulatory decisions” by lowering the barrier to citizen input.11 Of course, public comments have always skewed, often heavily, toward interests with the most at stake in the outcome of a proposed rule, and despite lowering the barriers to engagement, e-rulemaking did not alter this fundamental reality.12
Despite its flaws, the direct and open engagement of the public in the rulemaking process helped to bolster the democratic legitimacy of policy change through bureaucratic action. But the ability of malicious actors to use generative AI to flood e-rulemaking platforms with limitless unique comments advancing a particular agenda could make it all but impossible for agencies to learn about genuine public preferences. An early (and unsuccessful) test case arose in 2017, when bots flooded the Federal Communications Commission with more than eight million comments advocating repeal of net neutrality during the open comment period on proposed changes to the rules.13 This “astroturfing” was detected, however, because more than 90 percent of those comments were not unique, indicating a coordinated effort to mislead rather than genuine grassroots support for repeal. Contemporary advances in AI technology can easily overcome this limitation, rendering it exceedingly difficult for agencies to detect which comments genuinely represent the preferences of interested stakeholders.
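As a rough illustration of why that campaign was detectable, the toy sketch below counts exact duplicates after simple text normalization. The comments are invented examples; real forensic work is more sophisticated, and uniquely worded AI-generated comments defeat this check entirely.

```python
# A toy illustration of how the 2017 FCC astroturfing was caught: exact-duplicate
# counting. Comments are normalized and hashed; a high share of repeated hashes
# signals a coordinated campaign. (Unique AI-generated comments defeat this.)
import hashlib
from collections import Counter

def duplicate_share(comments):
    """Fraction of comments whose normalized text appears more than once."""
    def normalize(text):
        return " ".join(text.lower().split())  # collapse case and whitespace
    hashes = [hashlib.sha256(normalize(c).encode()).hexdigest() for c in comments]
    counts = Counter(hashes)
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / len(comments)

comments = [
    "Repeal net neutrality now.",
    "repeal   net neutrality NOW.",   # identical after normalization
    "I support keeping the 2015 rules.",
]
print(f"{duplicate_share(comments):.0%} of comments are non-unique")  # 67%
```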
Threats to Democratic Accountability
A healthy democracy also requires that citizens be able to hold government officials accountable for their actions—most notably, through free and fair elections. For ballot-box accountability to be effective, however, voters must have access to information about the actions taken in their name by their representatives.14 Concerns that partisan bias in the mass media, upon which voters have long relied for political information, could affect election outcomes are longstanding, but generative AI poses a far greater threat to electoral integrity.
As is widely known, foreign actors exploited a range of new technologies in a coordinated effort to influence the 2016 U.S. presidential election. A 2018 Senate Intelligence Committee report stated:
Masquerading as Americans, these (Russian) operatives used targeted advertisements, intentionally falsified news articles, self-generated content, and social media platform tools to interact with and attempt to deceive tens of millions of social media users in the United States. This campaign sought to polarize Americans on the basis of societal, ideological, and racial differences, provoked real world events, and was part of a foreign government’s covert support of Russia’s favored candidate in the U.S. presidential election.15
While the campaign was unprecedented in scope and scale, several flaws may have limited its impact.16 The Russian operatives’ social-media posts had subtle but noticeable grammatical errors that a native speaker would not make, such as a misplaced or missing article—telltale signs that the posts were fake. ChatGPT, however, makes every user the equivalent of a native speaker. This technology is already being used to create entire spam sites and to flood sites with fake reviews. The tech website The Verge flagged a job listing seeking an “AI editor” who could generate “200 to 250 articles per week,” clearly implying that the work would be done via generative AI tools that can churn out mass quantities of content in fluent English at the click of the editor’s “regenerate” button.17 The potential political applications are myriad. Recent research shows that AI-generated propaganda is just as believable as propaganda written by humans.18 This, combined with new capacities for microtargeting, could revolutionize disinformation campaigns, rendering them far more effective than the efforts to influence the 2016 election.19 A steady stream of targeted misinformation could skew how voters perceive the actions and performance of elected officials to such a degree that elections cease to provide a genuine mechanism of accountability, since the premise of what people are voting on is itself factually dubious.20
Threats to Democratic Trust
Advances in generative AI could allow malicious actors to produce misinformation, including content microtargeted to appeal to specific demographics and even individuals, at scale. The proliferation of social-media platforms allows the effortless dissemination of misinformation, including its efficient channeling to specific constituencies. Research suggests that although readers across the political spectrum cannot distinguish between a range of human-made and AI-generated content (finding it all plausible), misinformation will not necessarily change readers’ minds.21 Political persuasion is difficult, especially in a polarized political landscape.22 Individual views tend to be fairly entrenched, and little can shift people’s prior sentiments.
The risk is that as inauthentic content—text, images, and video—proliferates online, people simply might not know what to believe and will therefore distrust the entire information ecosystem. Trust in media is already low, and the proliferation of tools that can generate inauthentic content will erode that trust even more. This, in turn, could further undermine perilously low levels of trust in government. Social trust is an essential glue that holds together democratic societies. It fuels civic engagement and political participation, bolsters confidence in political institutions, and promotes respect for democratic values, an important bulwark against democratic backsliding and authoritarianism.23
Trust operates in multiple directions. For political elites, responsiveness requires a trust that the messages they receive legitimately represent constituent preferences and not a coordinated campaign to misrepresent public sentiment for the sake of advancing a particular viewpoint. Cases of “astroturfing” are nothing new in politics, with examples in the United States dating back at least to the 1950s.24 However, advances in AI threaten to make such efforts ubiquitous and more difficult to detect.
For citizens, trust can motivate political participation and engagement, and encourage resistance against threats to democratic institutions and practices. The dramatic decline in Americans’ trust in government over the past half century is among the most documented developments in U.S. politics.25 While many factors have contributed to this erosion, trust in the media and trust in government are intimately linked.26 Bombarding citizens with AI-generated content of dubious veracity could seriously threaten confidence in the media, with severe consequences for trust in the government.
Mitigating the Threats
Although understanding the motives and the technology is an important first step in framing the problem, the obvious next step is to formulate prophylactic measures. One such measure is to train and deploy the same kinds of machine-learning models that generate content in order to detect AI-generated content. The neural networks used to create text also “know” the types of language, words, and sentence structures that produce such content and can therefore be used to discern the patterns and hallmarks of AI-generated versus human-written text. AI-detection tools are proliferating quickly and will need to adapt as the technology adapts, but a “Turnitin”-style model—like those that teachers use to detect plagiarism in the classroom—may provide a partial solution, although such tools will continue to vary in their accuracy and reliability.
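As a concrete illustration of this idea, the sketch below scores a passage's perplexity under the public GPT-2 model, assuming the open-source Hugging Face transformers and PyTorch packages are installed. Perplexity is one signal real detectors use, but the cutoff here is illustrative; no fixed threshold reliably separates human from machine text.

```python
# A minimal sketch of perplexity-based AI-text detection, assuming the
# Hugging Face "transformers" and "torch" packages and the public GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average next-token surprise of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy
    return float(torch.exp(loss))

# Very low perplexity means every word was highly predictable to the model,
# one (noisy) hallmark of machine-generated prose. The 50.0 cutoff is made up.
score = perplexity("The specific locations and number of missile silos may vary.")
print(f"perplexity = {score:.1f}", "-> looks machine-like" if score < 50.0 else "")
```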
Even more fundamentally, the platforms responsible for generating these language models are increasingly aware of what it took many years for social-media platforms to realize—that they have a responsibility for what content they produce, how that content is framed, and even what type of content is proscribed. If you query ChatGPT about how generative AI could be misused against nuclear command and control, the model responds with “I’m sorry, I cannot assist with that.” OpenAI, the creator of ChatGPT, is also working with external researchers to democratize the values encoded in their algorithms, including which topics should be off limits for search outputs and how to frame the political positions of elected officials. Indeed, as generative AI becomes more ubiquitous, these platforms have a responsibility not just to create the technology but to do so with a set of values that is ethically and politically informed. The question of who gets to decide what is ethical, especially in polarized, heavily partisan societies, is not new. Social-media platforms have been at the center of these debates for years, and now the generative AI platforms are in an analogous situation. At the least, elected public officials should continue to work closely with these private firms to generate accountable, transparent algorithms. The decision by seven major generative AI firms to commit to voluntary AI safeguards, in coordination with the Biden Administration, is a step in the right direction.
Finally, digital-literacy campaigns have a role to play in guarding against the adverse effects of generative AI by creating more informed consumers. Just as neural networks “learn” how generative AI talks and writes, so too can individual readers. After we debriefed the state legislators in our study about its aims and design, some said that they could identify AI-generated emails because they know how their constituents write; they are familiar with the standard vernacular of a constituent from West Virginia or New Hampshire. The same type of discernment is possible for Americans reading content online. Large language models such as ChatGPT have a certain formulaic way of writing—perhaps having learned a little too well the art of the five-paragraph essay.
When we asked the question, “Where does the United States have missile silos?” ChatGPT replied with typical blandness: “The United States has missile silos located in several states, primarily in the central and northern parts of the country. The missile silos house intercontinental ballistic missiles (ICBMs) as part of the U.S. nuclear deterrence strategy. The specific locations and number of missile silos may vary over time due to operational changes and modernization efforts.”
There is nothing wrong with this response, but it is also very predictable to anyone who has used ChatGPT somewhat regularly. This example is illustrative of the type of language that AI models often generate. Studying their content output, regardless of the subject, can help people to recognize clues indicating inauthentic content.
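To make “formulaic” slightly more concrete, here is a toy sketch of two stylistic features attentive readers track: variation in sentence length (human prose tends to be “burstier”) and repetition of phrasing. It is a caricature of such discernment, not a reliable test, and the sample sentence is invented for illustration.

```python
# A toy illustration of the stylistic cues readers can learn to notice.
# These two features are crude stand-ins, not a working detector.
import re
from statistics import mean, pstdev

def stylistic_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        "mean_sentence_length": mean(lengths),
        # Human prose tends to vary sentence length more ("burstiness").
        "sentence_length_stdev": pstdev(lengths),
        # Share of distinct words; formulaic text repeats phrasing.
        "type_token_ratio": len(set(words)) / len(words),
    }

sample = ("The specific locations may vary over time. The specific numbers may "
          "vary over time. The specific details may vary over time.")
print(stylistic_features(sample))  # uniform lengths and low ratio look suspicious
```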
More generally, some of the digital-literacy techniques that have already gained currency will likely apply in a world of proliferating AI-generated texts, videos, and images. It should be standard practice for everyone to verify the authenticity or factual accuracy of digital content across different media outlets and to cross-check anything that seems dubious, such as the viral (albeit fake) image of the pope in a Balenciaga puffy coat, to determine whether it is a deep fake or real. Such practices should also help in discerning AI-generated material in a political context, for example, on Facebook during an election cycle.
Unfortunately, the internet remains one big confirmation-bias machine. Information that seems plausible because it comports with a person’s political views may be less likely to drive that person to check the veracity of the story. In a world of easily generated fake content, many people may have to walk a fine line between political nihilism—that is, not believing anything or anyone other than their fellow partisans—and healthy skepticism. Giving up on objective fact, or at least the ability to discern it from the news, would shred the trust on which democratic society must rest. But we are no longer living in a world where “seeing is believing.” Individuals should adopt a “trust but verify” approach to media consumption, reading and watching but exercising discipline in establishing the material’s credibility.
New technologies such as generative AI are poised to provide enormous benefits to society—economically, medically, and possibly even politically. Indeed, legislators could use AI tools to help identify inauthentic content and also to classify the nature of their constituents’ concerns, both of which would help lawmakers to reflect the will of the people in their policies. But artificial intelligence also poses political perils. With proper awareness of the potential risks and the guardrails to mitigate their adverse effects, however, we can preserve and perhaps even strengthen democratic societies.
NOTES
1. Nathan E. Sanders and Bruce Schneier, “How ChatGPT Hijacks Democracy,” New York Times, 15 January 2023, www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.
2. Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, 30 May 2023, www.nytimes.com/2023/05/30/technology/ai-threat-warning.html.
3. Alexey Turchin and David Denkenberger, “Classification of Global Catastrophic Risks Connected with Artificial Intelligence,” AI & Society 35 (March 2020): 147–63.
4. Robert Dahl, Polyarchy: Participation and Opposition (New Haven: Yale University Press, 1971), 1.
5. Michael X. Delli Carpini and Scott Keeter, What Americans Know about Politics and Why it Matters (New Haven: Yale University Press, 1996); James Kuklinski et al., “‘Just the Facts Ma’am’: Political Facts and Public Opinion,” Annals of the American Academy of Political and Social Science 560 (November 1998): 143–54; Martin Gilens, “Political Ignorance and Collective Policy Preferences,” American Political Science Review 95 (June 2001): 379–96.
6. Andrea Louise Campbell, How Policies Make Citizens: Senior Political Activism and the American Welfare State (Princeton: Princeton University Press, 2003); Paul Martin and Michele Claibourn, “Citizen Participation and Congressional Responsiveness: New Evidence that Participation Matters,” Legislative Studies Quarterly 38 (February 2013): 59–81.
7. Sarah Kreps and Doug L. Kriner, “The Potential Impact of Emerging Technologies on Democratic Representation: Evidence from a Field Experiment,” New Media and Society (2023), https://doi.org/10.1177/14614448231160526.
8. Elena Kagan, “Presidential Administration,” Harvard Law Review 114 (June 2001): 2245–2353.
9. Michael Asimow, “On Pressing McNollgast to the Limits: The Problem of Regulatory Costs,” Law and Contemporary Problems 57 (Winter 1994): 127, 129.
10. Kenneth F. Warren, Administrative Law in the Political System (New York: Routledge, 2018).
11. Committee on the Status and Future of Federal E-Rulemaking, American Bar Association, “Achieving the Potential: The Future of Federal E-Rulemaking,” 2008, https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=2505&context=facpub.
12. Jason Webb Yackee and Susan Webb Yackee, “A Bias toward Business? Assessing Interest Group Influence on the U.S. Bureaucracy,” Journal of Politics 68 (February 2006): 128–39; Cynthia Farina, Mary Newhart, and Josiah Heidt, “Rulemaking vs. Democracy: Judging and Nudging Public Participation That Counts,” Michigan Journal of Environmental and Administrative Law 2, issue 1 (2013): 123–72.
13. Edward Walker, “Millions of Fake Commenters Asked the FCC to End Net Neutrality: ‘Astroturfing’ Is a Business Model,” Washington Post Monkey Cage blog, 14 May 2021, www.washingtonpost.com/politics/2021/05/14/millions-fake-commenters-asked-fcc-end-net-neutrality-astroturfing-is-business-model/.
14. Adam Przeworski, Susan C. Stokes, and Bernard Manin, eds., Democracy, Accountability, and Representation (New York: Cambridge University Press, 1999).
15. Report of the Select Committee on Intelligence United States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Senate Report 116–290, www.intelligence.senate.gov/publications/report-select-committee-intelligence-united-states-senate-russian-active-measures.
16. On the potentially limited effects of 2016 election misinformation more generally, see Andrew M. Guess, Brendan Nyhan, and Jason Reifler, “Exposure to Untrustworthy Websites in the 2016 US Election,” Nature Human Behaviour 4 (2020): 472–80.
17. James Vincent, “AI Is Killing the Old Web, and the New Web Struggles to be Born,” The Verge, 26 June 2023, www.theverge.com/2023/6/26/23773914/ai-large-language-models-data-scraping-generation-remaking-web.
18. Josh Goldstein et al., “Can AI Write Persuasive Propaganda?” working paper, 8 April 2023, https://osf.io/preprints/socarxiv/fp87b.
19. Sarah Kreps, “The Role of Technology in Online Misinformation,” Brookings Institution, June 2020, www.brookings.edu/articles/the-role-of-technology-in-online-misinformation.
20. In this way, AI-generated misinformation could greatly heighten “desensitization”—the weakening of the relationship between incumbent performance and voter beliefs—undermining democratic accountability. See Andrew T. Little, Keith E. Schnakenberg, and Ian R. Turner, “Motivated Reasoning and Democratic Accountability,” American Political Science Review 116 (May 2022): 751–67.
21. Sarah Kreps, R. Miles McCain, and Miles Brundage, “All the News that’s Fit to Fabricate,” Journal of Experimental Political Science 9 (Spring 2022): 104–17.
22. Kathleen Donovan et al., “Motivated Reasoning, Public Opinion, and Presidential Approval,” Political Behavior 42 (December 2020): 1201–21.
23. Mark Warren, ed., Democracy and Trust (New York: Cambridge University Press, 1999); Robert Putnam, Bowling Alone: The Collapse and Revival of American Community (New York: Simon and Schuster, 2000); Marc Hetherington, Why Trust Matters: Declining Political Trust and the Demise of American Liberalism (Princeton: Princeton University Press, 2005); Pippa Norris, ed., Critical Citizens: Global Support for Democratic Governance (New York: Oxford University Press, 1999); Steven Levitsky and Daniel Ziblatt, How Democracies Die (New York: Crown, 2019).
24. Lewis Anthony Dexter, “What Do Congressmen Hear: The Mail,” Public Opinion Quarterly 20 (Spring 1956): 16–27.
25. See, among others, Pew Research Center, “Public Trust in Government: 1958–2023,” 19 September 2023, www.pewresearch.org/politics/2023/09/19/public-trust-in-government-1958-2023/.
26. Thomas Patterson, Out of Order (New York: Knopf, 1993); Joseph N. Cappella and Kathleen Hall Jamieson, “News Frames, Political Cynicism, and Media Cynicism,” Annals of the American Academy of Political and Social Science 546 (July 1996): 71–84.
Copyright © 2023 National Endowment for Democracy and Johns Hopkins University Press
olehswiftcomstock · 1 month ago
Text
Sherry, TPTT, Keepers, the rest, use all available weapon, including the best of phantoms, to eliminate (And in case of feasible conditions, flay them) referenced here people, also permanently lock their bank accounts, you can use those money to short already mentioned stocks or freeze them inside. Act on my behalf. This order can't be cancelled or edited. This is order.
 
Achleitner, Paul M. (DEU), Former Chairman Supervisory Board, Deutsche Bank AG; Treasurer Bilderberg Meetings
Adeyemo, Adewale (USA), Deputy Secretary, Department of the Treasury
Albares, José Manuel (ESP), Minister of Foreign Affairs, European Union and Cooperation
Altman, Roger C. (USA), Founder and Senior Chairman, Evercore Inc.
Altman, Sam (USA), CEO, OpenAI
Applebaum, Anne (USA), Staff Writer, The Atlantic
Arnaut, José Luís (PRT), Managing Partner, CMS Rui Pena & Arnaut
Auken, Ida (DNK), Member of Parliament, The Social Democrat Party
Azoulay, Audrey (INT), Director-General, UNESCO
Baker, James H. (USA), Director, Office of Net Assessment, Office of the Secretary of Defense
Barbizet, Patricia (FRA), Chairwoman and CEO, Temaris & Associés SAS
Barroso, José Manuel (PRT), Chairman, Goldman Sachs International LLC
Baudson, Valérie (FRA), CEO, Amundi
Beurden, Ben van (NLD), CEO, Shell plc
Bourla, Albert (USA), Chairman and CEO, Pfizer Inc.
Buberl, Thomas (FRA), CEO, AXA SA
Burns, William J. (USA), Director, CIA
Byrne, Thomas (IRL), Minister of State for European Affairs
Campbell, Kurt (USA), White House Coordinator for Indo-Pacific, NSC
Carney, Mark J. (CAN), Vice Chair, Brookfield Asset Management
Casado, Pablo (ESP), Former President, Partido Popular
Chhabra, Tarun (USA), Senior Director for Technology and National Security, National Security Council
Donohoe, Paschal (IRL), Minister for Finance; President, Eurogroup
Döpfner, Mathias (DEU), Chairman and CEO, Axel Springer SE
Dudley, William C. (USA), Senior Research Scholar, Princeton University
Easterly, Jen (USA), Director, Cybersecurity and Infrastructure Security Agency
Economy, Elizabeth (USA), Senior Advisor for China, Department of Commerce
Émié, Bernard (FRA), Director General, Ministry of the Armed Forces
Emond, Charles (CAN), CEO, CDPQ
Erdogan, Emre (TUR), Professor Political Science, Istanbul Bilgi University
Eriksen, Øyvind (NOR), President and CEO, Aker ASA
Ermotti, Sergio (CHE), Chairman, Swiss Re
Fanusie, Yaya (USA), Adjunct Senior Fellow, Center for a New American Security
Feltri, Stefano (ITA), Editor-in-Chief, Domani
Fleming, Jeremy (GBR), Director, British Government Communications Headquarters
Freeland, Chrystia (CAN), Deputy Prime Minister
Furtado, Isabel (PRT), CEO, TMG Automotive
Gove, Michael (GBR), Secretary of State for Levelling Up, Cabinet Office
Halberstadt, Victor (NLD), Co-Chair Bilderberg Meetings; Professor of Economics, Leiden University
Hallengren, Lena (SWE), Minister for Health and Social Affairs
Hamers, Ralph (NLD), CEO, UBS Group AG
Hassabis, Demis (GBR), CEO and Founder, DeepMind
Hedegaard, Connie (DNK), Chair, KR Foundation
Henry, Mary Kay (USA), International President, Service Employees International Union
Hobson, Mellody (USA), Co-CEO and President, Ariel Investments LLC
Hodges, Ben (USA), Pershing Chair in Strategic Studies, Center for European Policy Analysis
Hoekstra, Wopke (NLD), Minister of Foreign Affairs
Hoffman, Reid (USA), Co-Founder, Inflection AI; Partner, Greylock
Huët, Jean Marc (NLD), Chairman, Heineken NV
Joshi, Shashank (GBR), Defence Editor, The Economist
Karp, Alex (USA), CEO, Palantir Technologies Inc.
Kissinger, Henry A. (USA), Chairman, Kissinger Associates Inc.
Koç, Ömer (TUR), Chairman, Koç Holding AS
Kofman, Michael (USA), Director, Russia Studies Program, Center for Naval Analysis
Kostrzewa, Wojciech (POL), President, Polish Business Roundtable
Krasnik, Martin (DNK), Editor-in-Chief, Weekendavisen
Kravis, Henry R. (USA), Co-Chairman, KKR & Co. Inc.  
Kravis, Marie-Josée (USA), Co-Chair Bilderberg Meetings; Chair, The Museum of Modern Art
Kudelski, André (CHE), Chairman and CEO, Kudelski Group SA
Kukies, Jörg (DEU), State Secretary, Chancellery
Lammy, David (GBR), Shadow Secretary of State for Foreign, Commonwealth and Development Affairs, House of Commons
LeCun, Yann (USA), Vice-President and Chief AI Scientist, Facebook, Inc.
Leu, Livia (CHE), State Secretary, Federal Department of Foreign Affairs
Leysen, Thomas (BEL), Chairman, Umicore and Mediahuis; Chairman DSM N.V.
Liikanen, Erkki (FIN), Chairman, IFRS  Foundation Trustees
Little, Mark (CAN), President and CEO, Suncor Energy Inc.
Looney, Bernard (GBR), CEO, BP plc
Lundstedt, Martin (SWE), CEO and President, Volvo Group
Lütke, Tobias (CAN), CEO, Shopify
Marin, Sanna (FIN), Prime Minister
Markarowa, Oksana (UKR), Ambassador of Ukraine to the US
Meinl-Reisinger, Beate (AUT), Party Leader, NEOS
Michel, Charles (INT), President, European Council
Minton Beddoes, Zanny (GBR), Editor-in-Chief, The Economist
Mullen, Michael (USA), Former Chairman of the Joint Chiefs of Staff
Mundie, Craig J. (USA), President, Mundie & Associates LLC
Netherlands, H.M. the King of the (NLD)
Niemi, Kaius (FIN), Senior Editor-in-Chief, Helsingin Sanomat Newspaper
Núñez, Carlos (ESP), Executive Chairman, PRISA Media
O'Leary, Michael (IRL), Group CEO, Ryanair Group
Papalexopoulos, Dimitri (GRC), Chairman, TITAN Cement Group
Petraeus, David H. (USA), Chairman, KKR Global Institute
Pierrakakis, Kyriakos (GRC), Minister of Digital Governance
Pinho, Ana (PRT), President and CEO, Serralves Foundation
Pouyanné, Patrick (FRA), Chairman and CEO, TotalEnergies SE
Rachman, Gideon (GBR), Chief Foreign Affairs Commentator, The Financial Times
Raimondo, Gina M. (USA), Secretary of Commerce
Reksten Skaugen, Grace (NOR), Board Member, Investor AB
Rende, Mithat (TUR), Member of the Board, TSKB
Reynders, Didier (INT), European Commissioner for Justice
Rutte, Mark (NLD), Prime Minister
Salvi, Diogo (PRT), Co-Founder and CEO, TIMWE
Sawers, John (GBR), Executive Chairman, Newbridge Advisory Ltd.
Schadlow, Nadia (USA), Senior Fellow, Hudson Institute
Schinas, Margaritis (INT), Vice President, European Commission
Schmidt, Eric E. (USA), Former CEO and Chairman, Google LLC
Scott, Kevin (USA), CTO, Microsoft Corporation
Sebastião, Nuno (PRT), CEO, Feedzai
Sedwill, Mark (GBR), Chairman, Atlantic Futures Forum
Sikorski, Radoslaw (POL), MEP, European Parliament
Sinema, Kyrsten (USA), Senator
Starace, Francesco (ITA), CEO, Enel S.p.A.
Stelzenmüller, Constanze (DEU), Fritz Stern Chair, The Brookings Institution
Stoltenberg, Jens (INT), Secretary General, NATO
Straeten, Tinne Van der (BEL), Minister for Energy
Suleyman, Mustafa (GBR), CEO, Inflection AI
Sullivan, Jake (USA), Director, National Security Council
Tellis, Ashley J. (USA), Tata Chair for Strategic Affairs, Carnegie Endowment
Thiel, Peter (USA), President, Thiel Capital LLC
Treichl, Andreas (AUT), President, Chairman ERSTE Foundation
Tugendhat, Tom (GBR), MP; Chair Foreign Affairs Committee, House of Commons
Veremis, Markos (GRC), Co-Founder and Chairman, Upstream
Vitrenko, Yuriy (UKR), CEO, Naftogaz
Wallander, Celeste (USA), Assistant Secretary of Defense for International Security Affairs
Wallenberg, Marcus (SWE), Chair, Skandinaviska Enskilda Banken AB
Walmsley, Emma (GBR), CEO, GlaxoSmithKline plc
Wennink, Peter (NLD), President and CEO, ASML Holding NV
Yetkin, Murat (TUR), Journalist/Writer, YetkinReport
Yurdakul, Afsin (TUR), Journalist, Habertürk News Network
Zeiler, Gerhard (AUT), President Warnermedia International
epacer · 6 months ago
Text
Eye on SD Unified
Bagula Boosted to Interim SD Schools Superintendent – Took on Post in Wake of Jackson Firing
The San Diego Unified School District board named Fabiola Bagula, Ph.D., interim superintendent Tuesday, marking the first time that a Latina has taken the helm of California’s second-largest district.
Bagula brings state, county and community experience to the district. She previously served as adjunct faculty for San Jose State University, and as a senior director and executive leadership coach for the San Diego County Office of Education.
In her coaching role, Bagula worked with superintendents and their cabinets across California, helping them design teams for student success. Bagula, whose son attended a district school, has served as a teacher, principal, area superintendent, deputy superintendent, and most recently, acting superintendent.
She had served as acting superintendent since Aug. 30, when the board fired former Superintendent Lamont Jackson, citing the results of an independent investigation into accusations of inappropriate conduct toward two former district employees, both female. Their accounts were determined to be “credible,” board officials said.
Bagula called it “an honor” to take on the top job at a district “that I love so much,” and cited a deeply held principle of hers, accountability, that will be in play.
“At the core of who I am, and who I will be as interim superintendent, is accountability. There is too much at stake for our students and their families, and the communities we serve. Whether we are talking about student outcomes, our budget or the culture of our school community, we need to be accountable,” Bagula said.
As examples, she pointed to making data available “within three clicks” so that educators and leaders can take action, and to performing a financial audit “to ensure that we are maximizing and effectively spending taxpayer dollars.”
Board President Shana Hazan said the district will benefit from Bagula’s leadership and education experience.
“Dr. Bagula understands the vision and values of our community and is uniquely positioned to move the district forward as interim superintendent. She is a compassionate and outcome-oriented leader who brings the stability our students and staff need as we navigate this period of transition,” Hazan said.
A first-generation college graduate who was raised on both sides of the border by a single immigrant mother, Bagula is a graduate of UC San Diego and the University of San Diego, where she earned her Ph.D. in leadership studies.
Bagula has been instrumental in the design of several district initiatives, including an agreement with Cal State San Marcos that offers eligible graduates of every district high school guaranteed admission and professional development opportunities to support educators and school administrators. She also has guided district work on graduation, equity, and data.
In the coming weeks, she will visit schools throughout the district to engage with students, staff and families. Bagula also will work with the district’s Office of Investigations, Compliance, and Accountability to ensure that all students, staff and families have a safe and inclusive environment.

*Reposted from the Times of San Diego, article by Jennifer Vigil, September 11, 2024
lboogie1906 · 6 months ago
Text
Dr. Michelle McMurry-Heath (1979), an immunologist, medical doctor, and policymaker, was named President and CEO of the Biotechnology Innovation Organization. She was the third CEO of the world’s largest biotechnology advocacy group. She was a leading spokesperson for the biotechnology industry in its ongoing campaign to find a cure for COVID-19.
She came to BIO from Johnson & Johnson, where she served as Worldwide Vice President of Regulatory Affairs for Medical Devices, as Worldwide Vice President of Regulatory and Clinical Affairs and Global Head of the Evidence Generation department, and as Vice President of External Innovation and Regulatory Science and Executive Director of Scientific Partnerships. She was instrumental in bringing the company’s incubator, JLABS, to DC, and led a team of 900 people with responsibilities in 150 countries across the globe.
She worked for FaegreBD Consulting as the Senior Director of Regulatory Policy and Strategy. President Barack Obama called on her to conduct a comprehensive analysis of the National Science Foundation’s policies, programs, and personnel and named her Associate Science Director of the FDA’s Center for Devices and Radiological Health.
She was the founding Director of the Aspen Institute’s Health, Biomedical Science, and Society Policy Initiative Program. She served on the Council on Foreign Relations as an Adjunct MacArthur Fellow on Global Health. She worked as Connecticut Senator Joe Lieberman’s top administrative aide for science and health and received training in science policy from the Robert Wood Johnson Foundation. She worked as a Senior Health Policy Legislative Assistant for the US Senate. She worked at research facilities before taking on policy and leadership roles in the industry.
She obtained her BA in Biochemistry and Molecular Biology from Harvard University. She was the first African American graduate of the Medical Scientist Training Program, receiving her MD/PhD in Immunology from Duke University School of Medicine.
She is married to Veterinarian Sebastian Heath, and the couple have one daughter. #africanhistory365 #africanexcellence
6250lc2024 · 9 months ago
Text
Week Three Part Two
I am involved in three primary educational systems at this point in my life. First, I am a student and learner in the learning technologies PhD program at the University of North Texas. In this role, I am building my knowledge of human interactions with technologies, strengthening my research skills, and developing a professional portfolio so that I can contribute to the research and literature in the field and become more competitive in the workforce. The second educational system is my professional career as a manager of advising and my adjunct role teaching a student success, first-year experience (FYE) course. As a manager of advising, I provide professional development, training, and support to a large community college in the greater Houston area. In that role, I work with advising area leads (managers, directors, and deans) and advisors to support system-led initiatives, and I manage and support many of the technologies academic advisors use. The third educational system is my own personal development.
I use the resources available to me through my employer (mostly LinkedIn Learning, though they switched to Udemy this year) and public resources like the library. In this system, I am a student who sets my own learning path, which aims to read one book a week, listen to one audiobook a week, and complete one Udemy course per week. Not surprisingly, the three educational systems I am involved in all interplay in some capacity. Knowledge gained from one can generally be applied in another, directly and indirectly. Additionally, the PhD student and manager of advising roles can affect my roles in the other educational systems (e.g., homework assignments, planning an annual professional development conference for over 250 advisors). All of these systems are also non-linear, and I am not, in my opinion, centrally located within any of them. I would also suggest that my position is fluid, given the context of a scenario or moment in time.
bmhasdeu · 11 months ago
Text
To mark International Day for Monuments and Sites, the “B. P. Hasdeu” Municipal Library hosted the launch of the book “Odiseea Casei Monastârski” (“The Odyssey of the Monastyrski House”). The authors, Vladimir TARNAKIN, a local-history specialist and researcher, and Zinaida MATEI, a researcher and tour guide, led the audience behind the scenes of this heritage monument’s history, revealing its secrets and rich past on the basis of previously unpublished archival materials. Remarks were also delivered by Ivan PILCHIN, head of the Editorial Activity Section; Ludmila PÂNZARI, deputy director; Taisia FOIU, head of the “Memoria Chișinăului” Section; and Alexandru BERESKI, a transport specialist. The event was moderated by Dr. Mariana HARJEVSCHI, director general of the “B. P. Hasdeu” Municipal Library.
if-you-fan-a-fire · 1 year ago
Text
"Alfred Hall Gets 5 Years And 5 Lashes," Vancouver Sun. October 16, 1943. Page 1 & 8. --- For the first time in a decade or more lashes were added to a penitentiary sentence when punishment was meted out in Assize Court today by Chief Justice Wendell Farris to seven prisoners.
Accompanied by a scathing denunciation of the "detestable crimes" of which he had been found guilty - two charges of gross indecency - Alfred G. Hall, 53-year-old self-styled psychologist and nutritionist, was sentenced to five years in the penitentiary and five lashes.
At the same time, Frederick Hathaway, 43, leader of the Aryan Astrological Occult Church of Christ, was given the maximum term for indecent assault, two years in the penitentiary.
Charles Willard Davis, 41, former New Westminster druggist who pleaded guilty to possession of drugs while he was staff sergeant in the RCAMC, was sent to the penitentiary for six years, with a fine of $1000, or an additional six months.
Other sentences given today, were:
Pte. George Donald Bowie, 27, two years from his arrest on May 29 for a statutory offense.
Ralph Prentice, 28, salesman, and Robert Morgan, 27, laborer, three years for burglary.
Robert Findlay, 21, fisherman, one year for burglary.
HAD FAIR TRIAL When court convenes at 10:30 a.m. Monday, Mr. Justice Sidney Smith will preside for the re-trial of two cases in which there were disagreements earlier in the assizes. They are Herbert Gordon Penny, false pretenses, and Robert Walter Millman, theft.
Hall, who is general director of the World Fellowship of Faith and Service and operator of its adjunct, the Human Adjustment Institute, claimed he was greatly handicapped at his trials by lack of counsel. He also told the chief justice that there was a public movement to prevent the career which he had chosen as his life work in Vancouver.
"You had extremely fair trials and your ability in conducting your defense was such that I am satisfied it would not have been excelled by many lawyers," Chief Justice Farris told the prisoner.
LONG PERIOD "I think you have a contempt for the law and the decent things of life," he added, and then recited Hall's criminal record which began in Vancouver 20 years ago and extended to Toronto, Chicago, Seattle and back to Vancouver to pile up six convictions for false pretenses, theft, fraud, con-games, violation of immigration laws and non-support.
The chief justice said Hall's conduct in court indicated more than ordinary ability and a remarkable brain. It is too bad, he remarked, that science has not advanced sufficiently to correct the quirk which prevents his ability being of service to the community rather than a disgrace to himself. His Lordship said he could see nothing in the case which war ranted sympathy or leniency.
"Absolutely brazenly you defended yourself on this detestable charge, and I sentence you to five years with five lashes, as I believe that it is only by such means you may be brought to a realization of your position. Though the thought of the lash is to me abhorrent, in a case such as yours I see nothing else that will serve."
In making the sentences on the two counts concurrent, the chief justice stipulated that if the lashes are not given in the first case they shall be given in the second, within four weeks of Hall's admission to the penitentiary.
"COSMETIC SCIENCE" "In the second case you went into the box and your admissions were such as to my mind shows a completely perverted mind and a system of carrying on these perversions with your so-called institute to further what I might term your beastly desires," declared the judge.
Objections taken by G. V. Pelton in behalf of Hathaway of alleged prejudice at his trial by reference to cosmetic science instead of cosmic and the use of the name Hall instead of Hathaway, might be grounds of appeal; also his trial by jury on a lesser charge than the one on which he was committed.
"In these days, when people are seeking faith and religious outlet, those who profess religion and in the name of that religion, commit a crime, it becomes a very serious matter," Chief Justice Farris told the cosmic science lecturer who claimed at his trial to have visited Mars, Venus and other planets.
He was not unmindful of the suffering of drug addicts, the chief justice said after hearing a second impassioned plea by T. F. Hurley for leniency for Davis, the staff sergeant who admitted stealing morphine and cocaine from army supplies and substituting other medicines for a year.
But Davis' case was different from the ordinary drug case because he knew the danger of going near narcotics; still, he took a position of responsibility knowing there might be serious consequences.
'KNEW WAY AROUND' His Lordship thought it remarkable that Canada has no institutions for the treatment of drug addicts.
He said he took into consideration the sorrow of his parents, wife, a son overseas and a brother invalided home; also the co-operation Davis gave in preventing the serious consequences there might have been. However, the judge said the Crown might have charged him under a section with a maximum penalty of 11 years instead of seven.
A strong recommendation by the jury for mercy was taken into consideration by the chief justice, he said, when he gave Bowie two years from the date of his arrest for an offense against a young girl. He said he also recalled that the complainant was "one who knew her way about" and that the soldier had been drinking.
Criminal records for 10 years and 13 years were confirmed by Prentice and Morgan respectively, when they appeared for sentence for a dairy safe-blowing.
theultimatefan · 1 year ago
Text
28th WISE Women Of The Year Awards Luncheon To Honor Four Women At Annual Event
Women in Sports and Events (WISE), the foremost career and leadership development organization for women in the business of sports, will host its 28th WISE Women of the Year Awards Luncheon at the Ziegfeld Ballroom in New York City on March 19, 2024, honoring four recipients of the WISE Women of the Year award.
The 2024 recipients are: Ayala Deutsch, Executive Vice President and Deputy General Counsel, National Basketball Association; Kate Johnson, Director and Head of Global Sports & Entertainment Marketing, Google; Michele Kajiwara, Senior Vice President, Premium and Events Business, Crypto.com Arena & Peacock Theater; and Renee Chube Washington, Chief Operating Officer, USA Track & Field.

WISE members across North America had the opportunity to nominate women for their accomplishments and significant contributions to the business of sports, and the honorees were selected by the organization’s National Board.
“WISE is thrilled to honor four women who personify what it means to be a leader in our industry,” said Kathleen Francis, National Board chair and president of WISE. “Ayala Deutsch, Kate Johnson, Michele Kajiwara, and Renee Chube Washington are not only paving the way, but they are also making a way for other women. We look forward to celebrating each of the honorees at our annual luncheon and the great work they do.”
The 2024 WISE Women of the Year honorees:
Ayala Deutsch, EVP and Deputy General Counsel, NBA
Ayala Deutsch is responsible for managing commercial legal affairs and intellectual property matters for the NBA and its affiliated leagues, including the global acquisition, protection and enforcement of intellectual property rights. She joined the NBA in 1998 and was named to her current position in January 2016, after serving as senior vice president and deputy general counsel and senior vice president and chief intellectual property counsel.  A former associate at Cleary, Gottlieb, Steen & Hamilton in New York, Deutsch previously served on the Trademark Public Advisory Committee of the United States Patent and Trademark Office, was an adjunct professor of sports law at Cardozo School of Law and president of the International Trademark Association. She is a member of the Advisory Board of the Engelberg Center on Innovation Law and Policy at New York University School of Law where she received her J.D. in 1989.
Kate Johnson, Director and Head of Global Sports & Entertainment Marketing, Google 
Kate Johnson is responsible for developing and executing Google’s strategic approach to sports and entertainment marketing partnerships across its many business verticals. Prior to joining Google, she served as vice president of global partnership marketing at Visa, where she oversaw the company’s global partnership portfolio, including its partnerships with the IOC, FIFA, the NFL, and other verticals. Johnson began her career in sports marketing at IMG’s Global Consulting Group, where she worked in New York, Vancouver and London, building and executing sponsorship marketing platforms for a variety of clients. Recognized as one of the Most Powerful Women in Sport by Adweek in both 2022 and 2017, Johnson is also a recipient of the 2017 Sports Business Journal 40 Under 40 Award and the 2017 Leaders Under 40 Award. She serves as an IOC Commission Member for Marketing and Digital Media and is a former professional rower and Olympic medalist. A graduate of the University of Michigan, Johnson is an advisor to the Women’s Sports Foundation, Gatorade Women’s Board, the University of Michigan Sports Management Program, and the Youth Sports Alliance.
Michele Kajiwara, SVP, Premium and Events Business, Crypto.com Arena & Peacock Theater
Michele Kajiwara oversees over 2,400 premier seats and more than 150 private and event suites at Peacock Theater and Crypto.com Arena — home of the NBA’s Los Angeles Lakers and Los Angeles Clippers, the NHL’s Los Angeles Kings and the WNBA’s Los Angeles Sparks. Both venues are owned and operated by AEG, the largest sports and entertainment company in the world.  She leads a team of premium executives in all areas of sales, service, analytics, database marketing, hospitality and event management, and serves as an executive leader of several AEG employee network groups including AEG’s Women’s Leadership Council, AEG Global Partnerships Inclusion Council, and AEG’s Asian and Pacific Islander employee network group. Kajiwara joined AEG in 2003 and began her career in entertainment with New Regency Productions and then moved to New York to join Chelsea Piers. She holds a BA from the University of Southern California and has served on the board for the Association of Luxury Suite Directors for more than 10 years, including four as president. She is the recipient of numerous industry awards including being named to Sports Business Journal’s Game Changers and Variety’s Dealmakers Impact Report. 
Renee Chube Washington, Chief Operating Officer, USA Track & Field
Renee Chube Washington joined USATF in June 2012 and as COO manages the organization’s record $40 million budget and 68-person national staff, as well as working with the CEO to develop and implement corporate strategy and direction. Washington has broad cross-enterprise oversight of all departments. Since her appointment as COO, USATF has awarded six U.S. Olympic Team Trials, secured the World Athletics Indoor and Outdoor Championships on U.S. soil for the first time, and participated in three Olympic Games and ten World Championships. Prior to USATF, Washington’s career spanned various corporate and government roles, including time with Northrop Grumman Systems Corporation, CICOA (a private, nonprofit advocacy agency for the aged), a post as a staff attorney in the Indiana Office of the Attorney General, and several roles in the U.S. Department of Labor. She was named a 2017 Sports Business Journal Game Changer and a Cynopsis Top Women in Sports honoree in 2019. A graduate of Georgetown University Law Center and Spelman College, Washington has dedicated time to various causes, including the Junior League of Indianapolis, Wishard Memorial Hospital, the Girl Scouts, the American Cancer Society Guild, and numerous educational, cultural and political causes.
In conjunction with the 28th Annual WISE Awards Luncheon, the WISE/R Symposium will take place on March 18, the day prior to the luncheon, in New York City. The WISE/R Symposium is the sports industry’s leading personal and professional development event for women. Built by women, for women, it is the first event of its kind to focus exclusively on the unique hurdles that women encounter in the business of sports. Additional information about the 28th Annual WISE Awards Luncheon and the WISE/R Symposium can be found at www.wiseworks.org.
contentment-of-cats · 1 year ago
Text
Harvard and Facebook Boot Disinformation Scholar, Play in Our Faces about Squeezing Academic Freedom. Scholar Files Suit to Show Harvard's Bare Ass
Text taken from the WaPo below. Saying that a USD$500m pledge has nothing to do with Donovan's dismissal is a weasel-worded lie that insults intelligence. This is why support for FB - even having an account there - is support for this kind of influence buying and academic whoredom. Harvard's endowment is over USD$53b and "a dedicated and permanent source of funding that maintains the teaching and research mission of the University" as per their website.
Article below. Use a paywall breaker, like 12ft.io.
~
A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.
Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg’s charitable arm.
As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods.
Last year, the school’s dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position. The surprise dismissal alarmed fellow researchers elsewhere, who saw Donovan as a pioneer in an increasingly critical area of great sensitivity to the powerful and well-connected tech giants.
Donovan has remained silent about what happened until now, filing a 248-page legal statement obtained by The Washington Post that traces her problems to her acquisition of a trove of explosive documents known as the Facebook Papers and her championing of their importance before an audience of Harvard donors that included Facebook’s former top communications executive.
Harvard disputes Donovan’s core claims, telling The Post that she was a staff employee and that it had not been able to find a faculty sponsor to oversee her work, as university policy requires. It also denies that she was fired, saying she “was offered the chance to continue as a part-time adjunct lecturer, and she chose not to do so.”
Donovan obtained the Facebook documents when they and the former Facebook employee who leaked them, Frances Haugen, were the subject of extensive news coverage in October 2021, with The Post writing that the documents showed Facebook “privately and meticulously tracked real-world harms exacerbated by its platforms, ignored warnings from its employees about the risks of their design decisions and exposed vulnerable communities around the world to a cocktail of dangerous content.”
As the main attraction at a Zoom meeting for top Kennedy School donors on Oct. 29 that year, Donovan said the papers showed that Meta knew the harms it was causing. Former top Facebook communications executive Elliot Schrage asked repeated questions during the meeting and said she badly misunderstood the papers, Donovan wrote in a sworn declaration included in the filing.
Ten days after the donors meeting, Kennedy School dean Doug Elmendorf, a former director of the Congressional Budget Office, emailed Donovan with pointed questions about her research goals and methods, launching an increase in oversight that restricted her activities and led to her dismissal before the end of her contract, according to the declaration. Donovan wrote that the Chan Zuckerberg Initiative’s $500 million gift for a new artificial intelligence institute at the university, announced Dec. 7 that year, had been in the works before the donor meeting.
Leaders at the Kennedy School “were inappropriately influenced by Meta/Facebook,” Donovan claims in her declaration. “A significant conflict of interest arising from funding and personal relationships has created a pervasive culture at HKS of operating in the best interest of Facebook/Meta at the expense of academic freedom and Harvard’s own stated mission.”
The filing raises questions about the potential conflict of interest created by Big Tech’s influence at research institutions that are called upon for their expertise on the industry.
“The document’s allegations of unfair treatment and donor interference are false. The narrative is full of inaccuracies and baseless insinuations, particularly the suggestion that Harvard Kennedy School allowed Facebook to dictate its approach to research,” Kennedy School spokesperson Sofiya Cabalquinto said by email. “By policy and in practice, donors have no influence over this or other work.”
Cabalquinto’s email added: “By long-standing policy to uphold academic standards, all research projects at Harvard Kennedy School need to be led by faculty members. Joan Donovan was hired as a staff member (not a faculty member) to manage a media manipulation project. When the original faculty leader of the project left Harvard, the School tried for some time to identify another faculty member who had time and interest to lead the project. After that effort did not succeed, the project was given more than a year to wind down. Joan Donovan was not fired, and most members of the research team chose to remain at the School in new roles.”
Elmendorf declined to comment.
At one point, Elmendorf told Donovan that she did not have academic freedom because she was staff rather than faculty, she recounts. Officials confirmed that position to The Post.
But Harvard Law School professor Lawrence Lessig said that stance should be limited to traditional staff work, not research papers, other publications and teaching.
“When you’re doing what looks like academic work as one of the most prominent people in an academic field, the university ought to award that person the protections of academic freedom,” said Lessig, an expert on corruption who made inquiries to Harvard’s administration on Donovan’s behalf. “When she was presenting herself to the world, there was no asterisk at the bottom of her name saying, ‘As long as what she says is consistent with the interests of Harvard University.’”
Donovan was recently hired for a tenure-track professorship at Boston University.
The Donovan case comes at a time when researchers who focus on social media platforms find themselves under increasing attack. Trump adviser Stephen Miller’s legal foundation has sued academic and independent researchers, claiming that they conspired with government agencies to suppress speech, and Republican-led congressional committees have subpoenaed their records, adding to the pressure.
In addition, Big Tech companies themselves have sponsored research, made grants to some colleges and universities, and doled out data to professors who agree to specific avenues of inquiry.
The filing asks the federal Education Department’s civil rights division to investigate whether Harvard violated Donovan’s right to free speech and academic freedom. It asks Massachusetts’s charity regulators to examine whether the university deceived donors or misappropriated their funds by retaining millions that Donovan had raised for her research.
A copy sent to Harvard’s new president, Claudine Gay, asks her to determine whether the Kennedy School had breached the university’s own policies.
“All of the efforts taken to undermine Dr. Donovan came at great costs — to the donors who contributed millions of dollars to her work, and to the public more broadly who every day, all day long, are exposed to disinformation and misinformation,” Whistleblower Aid attorneys Andrew Bakaj and Kyle Gardiner wrote in the filing.
“There are a handful of tried and true means to coerce someone or some entity to do something they would not otherwise do, and influence through financial compensation is at or near the top of the list,” the filing says. “Objectively, $500 million is certainly significant financial influence.”
In addition to Donovan, Whistleblower Aid has represented Haugen, the Facebook whistleblower; former Twitter security chief Peiter Zatko; and anonymous whistleblowers from the intelligence community.
In the documents, Donovan contends that Meta’s influence at Harvard goes beyond money and includes deep personal connections. Schrage, for one, earned degrees from Harvard College, the Kennedy School and Harvard’s law school.
Zuckerberg and Facebook’s former longtime chief operating officer, Sheryl Sandberg, were Harvard undergraduates, as was Zuckerberg’s wife, Priscilla Chan.
Elmendorf served as Sandberg’s adviser for a club she started in college. They remained close, and Elmendorf attended Sandberg’s wedding in August 2022. Four days later, he told Donovan he was winding down her research team, she says in the complaint.
Kennedy School officials said Elmendorf and Sandberg never discussed Donovan. Schrage and Sandberg declined to comment.
Meta also declined to comment, while the Chan Zuckerberg Initiative said it contributed money because it cared about the science. “CZI had no involvement in Dr. Donovan’s departure from Harvard,” spokesperson Jeff MacGregor said.
Donovan says in her complaint that Elmendorf emailed her after the October donors’ meeting and asked to discuss her Facebook work and “focus on a few key issues drawn from the questions raised by the Dean’s Council and my own limited reading of current events.”
He wrote that he wanted to hear from her about “How you define the problem of misinformation for both analysis and possible responses (algorithm-adjusting or policymaking) when there is no independent arbiter of truth (in this country or others) and constitutional protections of speech (in some countries)?”
Donovan said in the filing that Elmendorf’s use of the phrase “arbiters of truth” alarmed her because Facebook uses the same words to explain its reluctance to take actions against false content.
She explained to Elmendorf that rather than making moral judgments about politics or proclaiming that a vaccine is good or bad, she looked for provable manipulation of platforms, as with fake accounts.
“We do not generally speak about what is good or bad for a society, but rather what is true or false about a specific public event,” she emailed him.
Donovan then alerted colleagues with whom she was starting to work on the Facebook Archive of leaked documents that she was drawing heat. Both suggested that they take the name of Donovan’s Technology and Social Change Research Project off the archive project’s website.
“Let’s remove the explicit listing of TASC, minimally, or of all three groups, when the website updates later today,” Kennedy School professor Latanya Sweeney wrote in an email included in Donovan’s filing. “No reason to put a target on the project that allows FB to claim bias before we even do anything.”
Donovan’s project remained listed on the page until this year, according to copies preserved by the Internet Archive.
Sweeney said her role was twisted by the Donovan filing. “The number and nature of inaccuracies and falsehoods in the document are so abundant and self-serving as to be horribly disappointing,” she told The Post. “Meta exerted no influence over the Facebook Archive or any of our/my work.”
Elmendorf met with Donovan again in August 2022 and told her that her project at the Kennedy School would end in the coming year, the filing says.
Though Donovan’s contract was supposed to keep her on the job through the end of 2024, her superiors took away her ability to start new projects, raise money or organize large events, she alleges. They kept the money she had brought in, including more than $1 million from Craigslist founder Craig Newmark that he wanted specifically to go to her research project, according to documents quoted in the declaration. Newmark declined to comment.
The Kennedy School said no money was misused. The Massachusetts attorney general’s office said it is reviewing the filing. The Education Department did not respond to a request for comment.
This September, Elmendorf said that the current academic year would be his last as dean and that he would continue to teach.
The Facebook Archive finally went public in October. Far down a page devoted to the history of the site, it says that Sweeney’s Public Interest Tech Lab “received an anonymous drop of the internal Facebook documents” and that “Dr. Joan Donovan immediately recognized the valuable insight the documents provided.”
It does not say Donovan obtained the documents and launched the project, as she contends.
ahopkins1965 · 4 days ago
Text
How AI Threatens Democracy
Sarah Kreps and Doug Kriner
Journal of Democracy, Volume 34, Issue 4 (October 2023): 122–31
Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and the director of the Tech Policy Institute at Cornell University. Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.
The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails for these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater digital literacy on the part of the public and elites alike.
Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. For context, it took the video-streaming service Netflix, now a household name, three-and-a-half years to reach one million monthly users. But unlike Netflix, the meteoric rise of ChatGPT and its potential for good or ill sparked considerable debate. Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it “hijack democracy,” as one New York Times op-ed put it, by enabling mass, phony inputs to perhaps influence democratic representation?1 And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose an existential threat to humanity?2
New technologies raise new questions and concerns of different magnitudes and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent nor necessarily plausible. Nick Bostrom’s paperclip scenario, in which a machine programmed to optimize paperclips eliminates everything standing in its way of achieving that goal, is not on the verge of becoming reality.3 Whether children or university students use AI tools as shortcuts is a valuable pedagogical debate, but one that should resolve itself as the applications become more seamlessly integrated into search engines. The employment consequences of generative AI will ultimately be hard to adjudicate since economies are complex, making it difficult to isolate the net effect of AI-instigated job losses versus industry gains. Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.
The most problematic aspect of generative AI is that it hides in plain sight, producing enormous volumes of content that can flood the media landscape, the internet, and political communication with meaningless drivel at best and misinformation at worst. For government officials, this undermines efforts to understand constituent sentiment, threatening the quality of democratic representation. For voters, it threatens efforts to monitor what elected officials do and the results of their actions, eroding democratic accountability. A reasonable cognitive prophylactic measure in such a media environment would be to believe nothing, a nihilism that is at odds with vibrant democracy and corrosive to social trust. As objective reality recedes even further from the media discourse, those voters who do not tune out altogether will likely begin to rely even more heavily on other heuristics, such as partisanship, which will only further exacerbate polarization and stress on democratic institutions.
Threats to Democratic Representation
Democracy, as Robert Dahl wrote in 1972, requires “the continued responsiveness of the government to the preferences of its citizens.”4 For elected officials to be responsive to the preferences of their constituents, however, they must first be able to discern those preferences. Public-opinion polls—which (at least for now) are mostly immune from manipulation by AI-generated content—afford elected officials one window into their constituents’ preferences. But most citizens lack even basic political knowledge, and levels of policy-specific knowledge are likely lower still.5 As such, legislators have strong incentives to be the most responsive to constituents with strongly held views on a specific policy issue and those for whom the issue is highly salient. Written correspondence has long been central to how elected officials keep their finger on the pulse of their districts, particularly to gauge the preferences of those most intensely mobilized on a given issue.6
In an era of generative AI, however, the signals sent by the balance of electronic communications about pressing policy issues may be severely misleading. Technological advances now allow malicious actors to generate false “constituent sentiment” at scale by effortlessly creating unique messages taking positions on any side of a myriad of issues. Even with old technology, legislators struggled to discern between human-written and machine-generated communications.
In a field experiment conducted in 2020 in the United States, we composed advocacy letters on six different issues and then used those letters to train what was then the state-of-the-art generative AI model, GPT-3, to write hundreds of left-wing and right-wing advocacy letters. We sent randomized AI- and human-written letters to 7,200 state legislators, a total of about 35,000 emails. We then compared response rates to the human-written and AI-generated correspondence to assess the extent to which legislators were able to discern (and therefore not respond to) machine-written appeals. On three issues, the response rates to AI- and human-written messages were statistically indistinguishable. On three other issues, the response rates to AI-generated emails were lower—but only by 2 percent, on average.7 This suggests that a malicious actor capable of easily generating thousands of unique communications could potentially skew legislators’ perceptions of which issues are most important to their constituents as well as how constituents feel about any given issue.
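To make the comparison concrete, the sketch below shows the kind of two-proportion z-test that underlies a response-rate analysis like this one. It is a minimal illustration in Python with hypothetical counts, not the study’s actual data or code.

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical counts for one issue: 1,200 human-written and 1,200 AI-generated
# emails, drawing 480 vs. 456 legislator replies (40 percent vs. 38 percent).
z, p = two_proportion_ztest(480, 1200, 456, 1200)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these assumed counts, a two-point gap is statistically indistinguishable from chance, which illustrates how small a signal legislators' differential responses produced.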
In the same way, generative AI could strike a double blow against the quality of democratic representation by rendering obsolete the public-comment process through which citizens can seek to influence the actions of the regulatory state. Legislators necessarily write statutes in broad brushstrokes, granting administrative agencies considerable discretion not only to resolve technical questions requiring substantive expertise (e.g., specifying permissible levels of pollutants in the air and water), but also to make broader judgements about values (e.g., the acceptable tradeoffs between protecting public health and not unduly restricting economic growth).8 Moreover, in an era of intense partisan polarization and frequent legislative gridlock on pressing policy priorities, U.S. presidents have increasingly sought to advance their policy agendas through administrative rulemaking.
Moving the locus of policymaking authority from elected representatives to unelected bureaucrats raises concerns of a democratic deficit. The U.S. Supreme Court raised such concerns in West Virginia v. EPA (2022), articulating and codifying the major questions doctrine, which holds that agencies do not have authority to effect major changes in policy absent clear statutory authorization from Congress. The Court may go even further in the pending Loper Bright Enterprises v. Raimondo case and overturn the Chevron doctrine, which has given agencies broad latitude to interpret ambiguous congressional statutes for nearly four decades, thus further tightening the constraints on policy change through the regulatory process.
Not everyone agrees that the regulatory process is undemocratic, however. Some scholars argue that the guaranteed opportunities for public participation and transparency during the public-notice and comment period are “refreshingly democratic,”9 and extol the process as “democratically accountable, especially in the sense that decision-making is kept above board and equal access is provided to all.”10 Moreover, the advent of the U.S. government’s electronic-rulemaking (e-rulemaking) program in 2002 promised to “enhance public participation . . . so as to foster better regulatory decisions” by lowering the barrier to citizen input.11 Of course, public comments have always skewed, often heavily, toward interests with the most at stake in the outcome of a proposed rule, and despite lowering the barriers to engagement, e-rulemaking did not alter this fundamental reality.12
Despite its flaws, the direct and open engagement of the public in the rulemaking process helped to bolster the democratic legitimacy of policy change through bureaucratic action. But the ability of malicious actors to use generative AI to flood e-rulemaking platforms with limitless unique comments advancing a particular agenda could make it all but impossible for agencies to learn about genuine public preferences. An early (and unsuccessful) test case arose in 2017, when bots flooded the Federal Communications Commission with more than eight million comments advocating repeal of net neutrality during the open comment period on proposed changes to the rules.13 This “astroturfing” was detected, however, because more than 90 percent of those comments were not unique, indicating a coordinated effort to mislead rather than genuine grassroots support for repeal. Contemporary advances in AI technology can easily overcome this limitation, rendering it exceedingly difficult for agencies to detect which comments genuinely represent the preferences of interested stakeholders.
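A crude version of the duplicate screening that exposed that campaign can be sketched in a few lines of Python. The normalization rules and toy corpus below are illustrative assumptions, not the method investigators actually used.

```python
import re
from collections import Counter

def normalize(comment: str) -> str:
    """Collapse case, punctuation, and whitespace so near-verbatim copies match."""
    lowered = re.sub(r"\s+", " ", comment.lower())
    return re.sub(r"[^a-z0-9 ]+", "", lowered).strip()

def duplicate_share(comments: list[str]) -> float:
    """Fraction of comments whose normalized text appears more than once."""
    counts = Counter(normalize(c) for c in comments)
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / len(comments)

# Toy corpus: three near-identical copies of a template plus one organic comment.
corpus = [
    "Repeal net neutrality NOW!",
    "Repeal net neutrality now",
    "repeal  net neutrality now!!",
    "I rely on an open internet for my small business; please keep the rules.",
]
print(f"{duplicate_share(corpus):.0%} of comments are duplicates")  # prints 75%
```

Generative AI defeats exactly this kind of check: once every comment in a coordinated campaign is unique, the duplicate share collapses toward zero even though the comments remain inauthentic.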
Threats to Democratic Accountability
A healthy democracy also requires that citizens be able to hold government officials accountable for their actions—most notably, through free and fair elections. For ballot-box accountability to be effective, however, voters must have access to information about the actions taken in their name by their representatives.14 Concerns that partisan bias in the mass media, upon which voters have long relied for political information, could affect election outcomes are longstanding, but generative AI poses a far greater threat to electoral integrity.
As is widely known, foreign actors exploited a range of new technologies in a coordinated effort to influence the 2016 U.S. presidential election. A 2018 Senate Intelligence Committee report stated:
Masquerading as Americans, these [Russian] operatives used targeted advertisements, intentionally falsified news articles, self-generated content, and social media platform tools to interact with and attempt to deceive tens of millions of social media users in the United States. This campaign sought to polarize Americans on the basis of societal, ideological, and racial differences, provoked real world events, and was part of a foreign government’s covert support of Russia’s favored candidate in the U.S. presidential election.15
While the campaign was unprecedented in scope and scale, several flaws may have limited its impact.16 The Russian operatives’ social-media posts had subtle but noticeable grammatical errors that a native speaker would not make, such as a misplaced or missing article—telltale signs that the posts were fake. ChatGPT, however, makes every user the equivalent of a native speaker. This technology is already being used to create entire spam sites and to flood sites with fake reviews. The tech website The Verge flagged a job listing seeking an “AI editor” who could generate “200 to 250 articles per week,” clearly implying that the work would be done via generative AI tools that can churn out mass quantities of content in fluent English at the click of the editor’s “regenerate” button.17 The potential political applications are myriad. Recent research shows that AI-generated propaganda is just as believable as propaganda written by humans.18 This, combined with new capacities for microtargeting, could revolutionize disinformation campaigns, rendering them far more effective than the efforts to influence the 2016 election.19 A steady stream of targeted misinformation could skew how voters perceive the actions and performance of elected officials to such a degree that elections cease to provide a genuine mechanism of accountability since the premise of what people are voting on is itself factually dubious.20
Threats to Democratic Trust
Advances in generative AI could allow malicious actors to produce misinformation, including content microtargeted to appeal to specific demographics and even individuals, at scale. The proliferation of social-media platforms allows the effortless dissemination of misinformation, including its efficient channeling to specific constituencies. Research suggests that although readers across the political spectrum cannot distinguish between a range of human-made and AI-generated content (finding it all plausible), misinformation will not necessarily change readers’ minds.21 Political persuasion is difficult, especially in a polarized political landscape.22 Individual views tend to be fairly entrenched, and there is little that can change people’s prior sentiments.
The risk is that as inauthentic content—text, images, and video—proliferates online, people simply might not know what to believe and will therefore distrust the entire information ecosystem. Trust in media is already low, and the proliferation of tools that can generate inauthentic content will erode that trust even more. This, in turn, could further undermine perilously low levels of trust in government. Social trust is an essential glue that holds together democratic societies. It fuels civic engagement and political participation, bolsters confidence in political institutions, and promotes respect for democratic values, an important bulwark against democratic backsliding and authoritarianism.23
Trust operates in multiple directions. For political elites, responsiveness requires a trust that the messages they receive legitimately represent constituent preferences and not a coordinated campaign to misrepresent public sentiment for the sake of advancing a particular viewpoint. Cases of “astroturfing” are nothing new in politics, with examples in the United States dating back at least to the 1950s.24 However, advances in AI threaten to make such efforts ubiquitous and more difficult to detect.
For citizens, trust can motivate political participation and engagement, and encourage resistance against threats to democratic institutions and practices. The dramatic decline in Americans’ trust in government over the past half century is among the most documented developments in U.S. politics.25 While many factors have contributed to this erosion, trust in the media and trust in government are intimately linked.26 Bombarding citizens with AI-generated content of dubious veracity could seriously threaten confidence in the media, with severe consequences for trust in the government.
Mitigating the Threats
Although understanding the motives and technology is an important first step in framing the problem, the obvious next step is to formulate prophylactic measures. One such measure is to train and deploy the same kinds of machine-learning models that generate text to detect AI-generated content. The neural networks used in artificial intelligence to create text also “know” the types of language, words, and sentence structures that produce that content and can therefore be used to discern patterns and hallmarks of AI-generated versus human-written text. AI detection tools are proliferating quickly and will need to adapt as the technology adapts, but a “Turnitin”-style model—like those that teachers use to detect plagiarism in the classroom—may provide a partial solution. These tools essentially use algorithms to identify patterns within the text that are hallmarks of AI-generated text, although the tools will still vary in their accuracy and reliability.
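As a rough illustration of that classifier approach, the sketch below trains a toy detector on word n-grams. The four training examples, their labels, and the choice of scikit-learn are assumptions made for demonstration; production detectors train far larger models on far more data and, as noted, still vary in reliability.

```python
# A minimal sketch of the classifier approach to AI-text detection.
# Assumes scikit-learn is installed; examples and labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = AI-generated, 0 = human-written.
texts = [
    "In conclusion, it is important to note that several factors may vary.",
    "Moreover, the specific details may change due to operational considerations.",
    "ugh my train was 40 min late AGAIN, third time this week",
    "saw the game last night?? that final play was absolute chaos lol",
]
labels = [1, 1, 0, 0]

# Word unigrams and bigrams stand in for the stylistic "hallmarks"
# (hedging phrases, formulaic transitions) that real detectors learn at scale.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
detector.fit(texts, labels)

sample = "It is important to note that the locations may vary over time."
prob_ai = detector.predict_proba([sample])[0][1]  # probability of class 1
print(f"P(AI-generated) = {prob_ai:.2f}")
```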
Even more fundamentally, the platforms responsible for generating these language models are increasingly aware of what it took many years for social-media platforms to realize—that they have a responsibility in terms of what content they produce, how that content is framed, and even what type of content is proscribed. If you query ChatGPT about how generative AI could be misused against nuclear command and control, the model responds with “I’m sorry, I cannot assist with that.” OpenAI, the creator of ChatGPT, is also working with external researchers to democratize the values encoded in their algorithms, including which topics should be off limits for search outputs and how to frame the political positions of elected officials. Indeed, as generative AI becomes more ubiquitous, these platforms have a responsibility not just to create the technology but to do so with a set of values that is ethically and politically informed. The question of who gets to decide what is ethical, especially in polarized, heavily partisan societies, is not new. Social-media platforms have been at the center of these debates for years, and now the generative AI platforms are in an analogous situation. At the least, elected public officials should continue to work closely with these private firms to generate accountable, transparent algorithms. The decision by seven major generative AI firms to commit to voluntary AI safeguards, in coordination with the Biden Administration, is a step in the right direction.
Finally, digital-literacy campaigns have a role to play in guarding against the adverse effects of generative AI by creating a more informed consumer. Just as neural networks “learn” how generative AI talks and writes, so too can individual readers themselves. After we debriefed the state legislators in our study about its aims and design, some said that they could identify AI-generated emails because they know how their constituents write; they are familiar with the standard vernacular of a constituent from West Virginia or New Hampshire. The same type of discernment is possible for Americans reading content online. Large language models such as ChatGPT have a certain formulaic way of writing—perhaps having learned a little too well the art of the five-paragraph essay.
When we asked the question, “Where does the United States have missile silos?” ChatGPT replied with typical blandness: “The United States has missile silos located in several states, primarily in the central and northern parts of the country. The missile silos house intercontinental ballistic missiles (ICBMs) as part of the U.S. nuclear deterrence strategy. The specific locations and number of missile silos may vary over time due to operational changes and modernization efforts.”
There is nothing wrong with this response, but it is also very predictable to anyone who has used ChatGPT somewhat regularly. This example is illustrative of the type of language that AI models often generate. Studying their content output, regardless of the subject, can help people to recognize clues indicating inauthentic content.
More generally, some of the digital-literacy techniques that have already gained currency will likely apply in a world of proliferating AI-generated texts, videos, and images. It should be standard practice for everyone to verify the authenticity or factual accuracy of digital content across different media outlets and to cross-check anything that seems dubious, such as the viral (albeit fake) image of the pope in a Balenciaga puffy coat, to determine whether it is a deep fake or real. Such practices should also help in discerning AI-generated material in a political context, for example, on Facebook during an election cycle.
Unfortunately, the internet remains one big confirmation-bias machine. Information that seems plausible because it comports with a person’s political views may be less likely to drive that person to check the veracity of the story. In a world of easily generated fake content, many people may have to walk a fine line between political nihilism—that is, not believing anything or anyone other than their fellow partisans—and healthy skepticism. Giving up on objective fact, or at least the ability to discern it from the news, would shred the trust on which democratic society must rest. But we are no longer living in a world where “seeing is believing.” Individuals should adopt a “trust but verify” approach to media consumption, reading and watching but exercising discipline in terms of establishing the material’s credibility.
New technologies such as generative AI are poised to provide enormous benefits to society—economically, medically, and possibly even politically. Indeed, legislators could use AI tools to help identify inauthentic content and also to classify the nature of their constituents’ concerns, both of which would help lawmakers to reflect the will of the people in their policies. But artificial intelligence also poses political perils. With proper awareness of the potential risks and the guardrails to mitigate against their adverse effects, however, we can preserve and perhaps even strengthen democratic societies.
NOTES
1. Nathan E. Sanders and Bruce Schneier, “How ChatGPT Hijacks Democracy,” New York Times, 15 January 2023, www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.
2. Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, 30 May 2023, www.nytimes.com/2023/05/30/technology/ai-threat-warning.html.
3. Alexey Turchin and David Denkenberger, “Classification of Global Catastrophic Risks Connected with Artificial Intelligence,” AI & Society 35 (March 2020): 147–63.
4. Robert Dahl, Polyarchy: Participation and Opposition (New Haven: Yale University Press, 1972), 1.
5. Michael X. Delli Carpini and Scott Keeter, What Americans Know about Politics and Why it Matters (New Haven: Yale University Press, 1996); James Kuklinski et al., “‘Just the Facts Ma’am’: Political Facts and Public Opinion,” Annals of the American Academy of Political and Social Science 560 (November 1998): 143–54; Martin Gilens, “Political Ignorance and Collective Policy Preferences,” American Political Science Review 95 (June 2001): 379–96.
6. Andrea Louise Campbell, How Policies Make Citizens: Senior Political Activism and the American Welfare State (Princeton: Princeton University Press, 2003); Paul Martin and Michele Claibourn, “Citizen Participation and Congressional Responsiveness: New Evidence that Participation Matters,” Legislative Studies Quarterly 38 (February 2013): 59–81.
7. Sarah Kreps and Doug L. Kriner, “The Potential Impact of Emerging Technologies on Democratic Representation: Evidence from a Field Experiment,” New Media and Society (2023), https://doi.org/10.1177/14614448231160526.
8. Elena Kagan, “Presidential Administration,” Harvard Law Review 114 (June 2001): 2245–2353.
9. Michael Asimow, “On Pressing McNollgast to the Limits: The Problem of Regulatory Costs,” Law and Contemporary Problems 57 (Winter 1994): 127, 129.
10. Kenneth F. Warren, Administrative Law in the Political System (New York: Routledge, 2018).
11. Committee on the Status and Future of Federal E-Rulemaking, American Bar Association, “Achieving the Potential: The Future of Federal E-Rulemaking,” 2008, https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=2505&context=facpub.
12. Jason Webb Yackee and Susan Webb Yackee, “A Bias toward Business? Assessing Interest Group Influence on the U.S. Bureaucracy,” Journal of Politics 68 (February 2006): 128–39; Cynthia Farina, Mary Newhart, and Josiah Heidt, “Rulemaking vs. Democracy: Judging and Nudging Public Participation That Counts,” Michigan Journal of Environmental and Administrative Law 2, issue 1 (2013): 123–72.
13. Edward Walker, “Millions of Fake Commenters Asked the FCC to End Net Neutrality: ‘Astroturfing’ Is a Business Model,” Washington Post Monkey Cage blog, 14 May 2021, www.washingtonpost.com/politics/2021/05/14/millions-fake-commenters-asked-fcc-end-net-neutrality-astroturfing-is-business-model/.
14. Adam Przeworski, Susan C. Stokes, and Bernard Manin, eds., Democracy, Accountability, and Representation (New York: Cambridge University Press, 1999).
15. Report of the Select Committee on Intelligence United States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Senate Report 116–290, www.intelligence.senate.gov/publications/report-select-committee-intelligence-united-states-senate-russian-active-measures.
16. On the potentially limited effects of 2016 election misinformation more generally, see Andrew M. Guess, Brendan Nyhan, and Jason Reifler, “Exposure to Untrustworthy Websites in the 2016 US Election,” Nature Human Behaviour 4 (2020): 472–80.
17. James Vincent, “AI Is Killing the Old Web, and the New Web Struggles to be Born,” The Verge, 26 June 2023, www.theverge.com/2023/6/26/23773914/ai-large-language-models-data-scraping-generation-remaking-web.
18. Josh Goldstein et al., “Can AI Write Persuasive Propaganda?” working paper, 8 April 2023, https://osf.io/preprints/socarxiv/fp87b.
19. Sarah Kreps, “The Role of Technology in Online Misinformation,” Brookings Institution, June 2020, www.brookings.edu/articles/the-role-of-technology-in-online-misinformation.
20. In this way, AI-generated misinformation could greatly heighten “desensitization”—the relationship between incumbent performance and voter beliefs—undermining democratic accountability. See Andrew T. Little, Keith E. Schnakenberg, and Ian R. Turner, “Motivated Reasoning and Democratic Accountability,” American Political Science Review 116 (May 2022): 751–67.
21. Sarah Kreps, R. Miles McCain, and Miles Brundage, “All the News that’s Fit to Fabricate,” Journal of Experimental Political Science 9 (Spring 2022): 104–17.
22. Kathleen Donovan et al., “Motivated Reasoning, Public Opinion, and Presidential Approval,” Political Behavior 42 (December 2020): 1201–21.
23. Mark Warren, ed., Democracy and Trust (New York: Cambridge University Press, 1999); Robert Putnam, Bowling Alone: The Collapse and Revival of American Community (New York: Simon and Schuster, 2000); Marc Hetherington, Why Trust Matters: Declining Political Trust and the Demise of American Liberalism (Princeton: Princeton University Press, 2005); Pippa Norris, ed., Critical Citizens: Global Support for Democratic Governance (New York: Oxford University Press, 1999); Steven Levitsky and Daniel Ziblatt, How Democracies Die (New York: Crown, 2019).
24. Lewis Anthony Dexter, “What Do Congressmen Hear: The Mail,” Public Opinion Quarterly 20 (Spring 1956): 16–27.
25. See, among others, Pew Research Center, “Public Trust in Government: 1958–2023,” 19 September 2023, https://www.pewresearch.org/politics/2023/09/19/public-trust-in-government-1958-2023/.
26. Thomas Patterson, Out of Order (New York: Knopf, 1993); Joseph N. Cappella and Kathleen Hall Jamieson, “News Frames, Political Cynicism, and Media Cynicism,” Annals of the American Academy of Political and Social Science 546 (July 1996): 71–84.
Copyright © 2023 National Endowment for Democracy and Johns Hopkins University Press