#stuart chalmers
LAST TAPES FOR 2024
Our last batch of tapes lands on 29 November.
Pre-order now at https://cruelnaturerecordings.bandcamp.com
1) SALISMAN COMMUNAL ORCHESTRATION "A Queen Among Clods" - Salisman’s western shoegaze saga advances with their fifth album. Also available as a super-limited bundle with the EP "Of The Desert"
2) ST JAMES INFIRMARY "All Will Be Well" - SJI transport us back to the halcyon days of the late 60s / early 70s (or is it mid-80s?), when every band had a Vox and the beats were strictly motorik. Channeling Velvets, garage and psych-rock.
3) BLACK TEMPEL PYRÄMID “Frontier Plains Wanderer” - A sonic thesis, delving into themes of loss and mid-life growth via intense immersive post-rock shoegaze motorik soundscapes
4) GVANTSA NARIM “Enigmatic Reflections” - 8 tracks inspired by spirituality, esotericism & Georgian polyphonic music. A captivating exploration of ethereal melodic ambient soundscapes, through glimpses of field recordings and classical sentiments
5) EMPTY HOUSE “Dream Lounge” - Dub-bass pulses, whispered melodies and delayed drums meet African radio samples, pulling you into a world where Suicide & Coil gather around a turntable playing Pete Kember’s E.A.R. albums
6) NOMAD TREE “I : Awakening The Ancestors” - Stuart Chalmers takes us on a profound journey through sound, blending experimental folk, noise, and shamanic practices. Aimed at healing and altered states of consciousness. Also available as a bundle with "Live At Cave 12"
7) AIDAN BAKER / HAN-EARL PARK / KATHARINA SCHMIDT “Thoughts Of Trio” - An exploration of the intersections of ambient, improvisational (free) jazz, and musique concrète
Eternal thanks to you all for the support this year!
Books read and movies watched in 2024 (January-June): Should you watch/read them?
Poetry:
In the Next Galaxy (Ruth Stone): No
Selected Poems (Mark Strand): No
In the Dark (Ruth Stone): Yes!
Response (Juliana Spahr): Yes
The Unicorn (Anne Morrow Lindbergh): No!
Everything Else in the World (Stephen Dunn): Yes
Words Under the Words (Naomi Shihab Nye): Eh
On Love and Barley (Matsuo Basho, trans. Lucien Stryk): Yes!
The Transformation (Juliana Spahr): No
The Narrow Road to the Deep North and Other Travel Sketches (Matsuo Basho, trans. Nobuyuki Yuasa): No
The Book of Taliesin (anon., trans. Gwyneth Lewis & Rowan Williams): No
What Love Comes To: New and Selected Poems (Ruth Stone): Eh
Face (Sherman Alexie): NO
No Surrender (Ai): Eh
The Summer of Black Widows (Sherman Alexie): Yes!
The Afflicted Girls (Nicole Cooley): Yes!
Winter Poems Along the Rio Grande (Jimmy Santiago Baca): No
American Smooth (Rita Dove): No
Elegy (Mary Jo Bang): No
Angel (Giles Dorey): NO
Collected Poems (Paul Auster): Eh
June-Tree (Peter Balakian): Yes
We Must Make a Kingdom of It (Gregory Orr): Eh
Only as the Day is Long (Dorianne Laux): No
Grace Notes (Rita Dove): Yes
Bathwater Wine (Wanda Coleman): Yes
My Soviet Union (Michael Dumanis): No
American Milk (Ruth Stone): Yes
The Drowned Girl (Eve Alexandra): No
A Worldly Country (John Ashbery): No
The Complete Poems of Hart Crane: No
One Stick Song (Sherman Alexie): Yes
If You Call This Cry a Song (Hayden Carruth): No
Doctor Jazz (Hayden Carruth): No
The Last Time I Saw Amelia Earhart (Gabrielle Calvocoressi): No
And Her Soul Out of Nothing (Olena Kalytiak Davis): No
Prisoner of Hope (Yvonne Daley): No
The Other Man Was Me (Rafael Campo): No
My Wicked Wicked Ways (Sandra Cisneros): No
On Earth (Robert Creeley): Eh
Genius Loci (Alison Hawthorne Deming): Eh
Science and Other Poems (Alison Hawthorne Deming): Eh
Voices (Lucille Clifton): Yes
A New Path to the Waterfall (Raymond Carver): Eh
Where Shadows Will (Norma Cole): No
The Way Back (Wyn Cooper): No
A Cartography of Peace (Jean L. Connor): No
Minnow (Judith Chalmer): Yes!
Postcards from the Interior (Wyn Cooper): Yes
Natural History (Dan Chiasson): Eh
The Ship of Birth (Greg Delanty): Eh
Madonna anno domini (Joshua Clover): NO
The Terrible Stories (Lucille Clifton): No
The Flashboat (Jane Cooper): Eh
Book of Longing (Leonard Cohen): No
Streets in Their Own Ink (Stuart Dybek): Eh
Different Hours (Stephen Dunn): Yes
I Love This Dark World (Alice B. Fogel): Eh
Baptism of Desire (Louise Erdrich): Yes!
The Eternal City (Kathleen Graber): Eh
Monolithos (Jack Gilbert): Yes
Crown of Weeds (Amy Gerstler): No
Blue Hour (Carolyn Forché): No
Place (Jorie Graham): No
Meadowlands (Louise Glück): Yes!
Dearest Creature (Amy Gerstler): No
Loosestrife (Stephen Dunn): No
Little Savage (Emily Fragos): Yes
The Living Fire (Edward Hirsch): No
On Love (Edward Hirsch): No
Human Wishes (Robert Hass): NO
Early Occult Memory Systems of the Lower Midwest (B. H. Fairchild): No
Sinking Creek (John Engels): No
Alabanza (Martín Espada): Yes
Saving Lives (Albert Goldbarth): No
All of It Singing (Linda Gregg): No
Green Squall (Jay Hopler): No
Tender Hooks (Beth Ann Fennelly): No
After (Jane Hirshfield): Eh
Unincorporated Persons in the Late Honda Dynasty (Tony Hoagland): NO
These Are My Rivers (Lawrence Ferlinghetti): No
Fruitful (Stephanie Kirby): No
Jaguar Skies (Michael McClure): No
Song (Brigit Pegeen Kelly): No
Roadworthy Creature, Roadworthy Craft (Kate Magill): No
Life in the Forest (Denise Levertov): No
Viper Rum (Mary Karr): No
Questions for Ecclesiastes (Mark Jarman): No
Brutal Imagination (Cornelius Eady): Yes
Alphabet of Bones (Alexis Lathem): No
Handwriting (Michael Ondaatje): No
Sure Signs (Ted Kooser): No
Sledding on Hospital Hill (Leland Kinsey): No
Between Silences (Ha Jin): Yes
House of Days (Jay Parini): No
Bird Eating Bird (Kristin Naca): Yes
Orpheus & Eurydice (Gregory Orr): Yes
Another America (Barbara Kingsolver): Yes
Candles in Babylon (Denise Levertov): Yes
The Clerk's Tale (Spencer Reece): Eh
Still Listening (Angela Patten): Yes
A Thief of Strings (Donald Revell): No
Wayfare (Pattiann Rogers): No
The Niagara River (Kay Ryan): No
The Bird Catcher (Marie Ponsot): No
Easy (Marie Ponsot): No
Human Dark with Sugar (Brenda Shaughnessy): No
Chronic (D. A. Powell): No
Novels/Fiction:
A Thousand Years of Good Prayers (Yiyun Li): No
The Oxford Book of English Ghost Stories: Yes
Movies:
What Dreams May Come (1998, Vincent Ward): Yes
The Cat's Meow (2001, Peter Bogdanovich): Yes
The Birdcage (1996, Mike Nichols): Yes
The Color of Pomegranates (1969, Sergei Parajanov): No
The Eve of Ivan Kupalo (1969, Yuri Ilyenko): Yes
And here's my 2023 list!
A Duck in a Tree 2024-07-27 - Copper Coins and Boiled Water
zoviet*france weekly radio program listen/download
track list
00 Suzanne Hardy - Intro
01 Ruaridh Law, Debbie Armour, James Papademetrie & Orphax - The Sun
02 Ruaridh Law, Debbie Armour, James Papademetrie & Orphax - The Emperor
03 Ruaridh Law, Debbie Armour, James Papademetrie & Orphax - The High Priestess
04 Philippe Neau - Wein Dou Damn
05 Joe Shaw - The Salt Marshes at Alnmouth
06 Matt Atkins & Stuart Chalmers - Ante-Choir
07 Aidan Lochrin - In the Ruins of No Specific Place (with Jude Norton-Smith)
08 Demetrio Cecchitelli - Oxygen
09 Philippe Neau - Cigales à l'orage
10 Soundsaroundus - Jungle Night After Rain 01
11 Audela - Images Which Form Within your Ears
12 Ben Ponton - BS 19:36 07.03.12
13 Sophie Sleigh-Johnson - I Cairn Get Enough of It
14 Sala - Cell Tower
15 Jan Ryhalsky - Night, Entryway
16 Ting Ting Jahe - 6
17 Drone Forest - Ominous Vine Movement Through Unsuspecting Undergrowth
++ Suzanne Hardy - Outro
Happy Staraya Derevnya day!
blue forty-nine is out NOW on tape and digital
A deeply evocative set that will appeal to fans of Blue Tapes artists such as Minaru, Richard Youngs and Stuart Chalmers/Taming Power.
#blue tapes#experimental music#tapes#ambient#cassettes#drone#bandcamp#tape label#staraya derevnya#krautrock#psychedelic music
MIT Tech Review is now drawing attention to the imminent possibility of AI consciousness too
The emergence of the deviants might come sooner than we're prepared for.
Full Text Below if you can't access it:
MIT Technology Review
ARTIFICIAL INTELLIGENCE
Minds of machines: The great AI consciousness conundrum
Philosophers, cognitive scientists, and engineers are grappling with what it would take for AI to become conscious.
By Grace Huckins
October 16, 2023
[Illustration: Stuart Bradford]
David Chalmers was not expecting the invitation he received in September of last year. As a leading authority on consciousness, Chalmers regularly circles the world delivering talks at universities and academic meetings to rapt audiences of philosophers—the sort of people who might spend hours debating whether the world outside their own heads is real and then go blithely about the rest of their day. This latest request, though, came from a surprising source: the organizers of the Conference on Neural Information Processing Systems (NeurIPS), a yearly gathering of the brightest minds in artificial intelligence.
Less than six months before the conference, an engineer named Blake Lemoine, then at Google, had gone public with his contention that LaMDA, one of the company’s AI systems, had achieved consciousness. Lemoine’s claims were quickly dismissed in the press, and he was summarily fired, but the genie would not return to the bottle quite so easily—especially after the release of ChatGPT in November 2022. Suddenly it was possible for anyone to carry on a sophisticated conversation with a polite, creative artificial agent.
Chalmers was an eminently sensible choice to speak about AI consciousness. He’d earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible.
If he had been able to interact with systems like LaMDA and ChatGPT back in the ’90s, before anyone knew how such a thing might work, he would have thought there was a good chance they were conscious, Chalmers says. But when he stood before a crowd of NeurIPS attendees in a cavernous New Orleans convention hall, clad in his trademark leather jacket, he offered a different assessment. Yes, large language models—systems that have been trained on enormous corpora of text in order to mimic human writing as accurately as possible—are impressive. But, he said, they lack too many of the potential requisites for consciousness for us to believe that they actually experience the world.
At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.
Not many people dismissed his proposal as ridiculous, Chalmers says: “I mean, I’m sure some people had that reaction, but they weren’t the ones talking to me.” Instead, he spent the next several days in conversation after conversation with AI experts who took the possibilities he’d described very seriously. Some came to Chalmers effervescent with enthusiasm at the concept of conscious machines. Others, though, were horrified at what he had described. If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”
Over the past few decades, a small research community has doggedly attacked the question of what consciousness is and how it works. The effort has yielded real progress on what once seemed an unsolvable problem. Now, with the rapid advance of AI technology, these insights could offer our only guide to the untested, morally fraught waters of artificial consciousness.
“If we as a field will be able to use the theories that we have, and the findings that we have, in order to reach a good test for consciousness,” Mudrik says, “it will probably be one of the most important contributions that we could give.”
When Mudrik explains her consciousness research, she starts with one of her very favorite things: chocolate. Placing a piece in your mouth sparks a symphony of neurobiological events—your tongue’s sugar and fat receptors activate brain-bound pathways, clusters of cells in the brain stem stimulate your salivary glands, and neurons deep within your head release the chemical dopamine. None of those processes, though, captures what it is like to snap a chocolate square from its foil packet and let it melt in your mouth. “What I’m trying to understand is what in the brain allows us not only to process information—which in its own right is a formidable challenge and an amazing achievement of the brain—but also to experience the information that we are processing,” Mudrik says.
Studying information processing would have been the more straightforward choice for Mudrik, professionally speaking. Consciousness has long been a marginalized topic in neuroscience, seen as at best unserious and at worst intractable. “A fascinating but elusive phenomenon,” reads the “Consciousness” entry in the 1996 edition of the International Dictionary of Psychology. “Nothing worth reading has been written on it.”
Mudrik was not dissuaded. From her undergraduate years in the early 2000s, she knew that she didn’t want to research anything other than consciousness. “It might not be the most sensible decision to make as a young researcher, but I just couldn’t help it,” she says. “I couldn’t get enough of it.” She earned two PhDs—one in neuroscience, one in philosophy—in her determination to decipher the nature of human experience.
As slippery a topic as consciousness can be, it is not impossible to pin down—put as simply as possible, it’s the ability to experience things. It’s often confused with terms like “sentience” and “self-awareness,” but according to the definitions that many experts use, consciousness is a prerequisite for those other, more sophisticated abilities. To be sentient, a being must be able to have positive and negative experiences—in other words, pleasures and pains. And being self-aware means not only having an experience but also knowing that you are having an experience.
In her laboratory, Mudrik doesn’t worry about sentience and self-awareness; she’s interested in observing what happens in the brain when she manipulates people’s conscious experience. That’s an easy thing to do in principle. Give someone a piece of broccoli to eat, and the experience will be very different from eating a piece of chocolate—and will probably result in a different brain scan. The problem is that those differences are uninterpretable. It would be impossible to discern which are linked to changes in information—broccoli and chocolate activate very different taste receptors—and which represent changes in the conscious experience.
The trick is to modify the experience without modifying the stimulus, like giving someone a piece of chocolate and then flipping a switch to make it feel like eating broccoli. That’s not possible with taste, but it is with vision. In one widely used approach, scientists have people look at two different images simultaneously, one with each eye. Although the eyes take in both images, it’s impossible to perceive both at once, so subjects will often report that their visual experience “flips”: first they see one image, and then, spontaneously, they see the other. By tracking brain activity during these flips in conscious awareness, scientists can observe what happens when incoming information stays the same but the experience of it shifts.
With these and other approaches, Mudrik and her colleagues have managed to establish some concrete facts about how consciousness works in the human brain. The cerebellum, a brain region at the base of the skull that resembles a fist-size tangle of angel-hair pasta, appears to play no role in conscious experience, though it is crucial for subconscious motor tasks like riding a bike; on the other hand, feedback connections—for example, connections running from the “higher,” cognitive regions of the brain to those involved in more basic sensory processing—seem essential to consciousness. (This, by the way, is one good reason to doubt the consciousness of LLMs: they lack substantial feedback connections.)
A decade ago, a group of Italian and Belgian neuroscientists managed to devise a test for human consciousness that uses transcranial magnetic stimulation (TMS), a noninvasive form of brain stimulation that is applied by holding a figure-eight-shaped magnetic wand near someone’s head. Solely from the resulting patterns of brain activity, the team was able to distinguish conscious people from those who were under anesthesia or deeply asleep, and they could even detect the difference between a vegetative state (where someone is awake but not conscious) and locked-in syndrome (in which a patient is conscious but cannot move at all).
That’s an enormous step forward in consciousness research, but it means little for the question of conscious AI: OpenAI’s GPT models don’t have a brain that can be stimulated by a TMS wand. To test for AI consciousness, it’s not enough to identify the structures that give rise to consciousness in the human brain. You need to know why those structures contribute to consciousness, in a way that’s rigorous and general enough to be applicable to any system, human or otherwise.
“Ultimately, you need a theory,” says Christof Koch, former president of the Allen Institute and an influential consciousness researcher. “You can’t just depend on your intuitions anymore; you need a foundational theory that tells you what consciousness is, how it gets into the world, and who has it and who doesn’t.”
Here’s one theory about how that litmus test for consciousness might work: any being that is intelligent enough, that is capable of responding successfully to a wide enough variety of contexts and challenges, must be conscious. It’s not an absurd theory on its face. We humans have the most intelligent brains around, as far as we’re aware, and we’re definitely conscious. More intelligent animals, too, seem more likely to be conscious—there’s far more consensus that chimpanzees are conscious than, say, crabs.
But consciousness and intelligence are not the same. When Mudrik flashes images at her experimental subjects, she’s not asking them to contemplate anything or testing their problem-solving abilities. Even a crab scuttling across the ocean floor, with no awareness of its past or thoughts about its future, would still be conscious if it could experience the pleasure of a tasty morsel of shrimp or the pain of an injured claw.
Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University, thinks that AI could reach greater heights of intelligence by forgoing consciousness altogether. Conscious processes like holding something in short-term memory are pretty limited—we can only pay attention to a couple of things at a time and often struggle to do simple tasks like remembering a phone number long enough to call it. It’s not immediately obvious what an AI would gain from consciousness, especially considering the impressive feats such systems have been able to achieve without it.
As further iterations of GPT prove themselves more and more intelligent—more and more capable of meeting a broad spectrum of demands, from acing the bar exam to building a website from scratch—their success, in and of itself, can’t be taken as evidence of their consciousness. Even a machine that behaves indistinguishably from a human isn’t necessarily aware of anything at all.
Schneider, though, hasn’t lost hope in tests. Together with the Princeton physicist Edwin Turner, she has formulated what she calls the “artificial consciousness test.” It’s not easy to perform: it requires isolating an AI agent from any information about consciousness throughout its training. (This is important so that it can’t, like LaMDA, just parrot human statements about consciousness.) Then, once the system is trained, the tester asks it questions that it could only answer if it knew about consciousness—knowledge it could only have acquired from being conscious itself. Can it understand the plot of the film Freaky Friday, where a mother and daughter switch bodies, their consciousnesses dissociated from their physical selves? Does it grasp the concept of dreaming—or even report dreaming itself? Can it conceive of reincarnation or an afterlife?
There’s a huge limitation to this approach: it requires the capacity for language. Human infants and dogs, both of which are widely believed to be conscious, could not possibly pass this test, and an AI could conceivably become conscious without using language at all. Putting a language-based AI like GPT to the test is likewise impossible, as it has been exposed to the idea of consciousness in its training. (Ask ChatGPT to explain Freaky Friday—it does a respectable job.) And because we still understand so little about how advanced AI systems work, it would be difficult, if not impossible, to completely protect an AI against such exposure. Our very language is imbued with the fact of our consciousness—words like “mind,” “soul,” and “self” make sense to us by virtue of our conscious experience. Who’s to say that an extremely intelligent, nonconscious AI system couldn’t suss that out?
If Schneider’s test isn’t foolproof, that leaves one more option: opening up the machine. Understanding how an AI works on the inside could be an essential step toward determining whether or not it is conscious, if you know how to interpret what you’re looking at. Doing so requires a good theory of consciousness.
A few decades ago, we might have been entirely lost. The only available theories came from philosophy, and it wasn’t clear how they might be applied to a physical system. But since then, researchers like Koch and Mudrik have helped to develop and refine a number of ideas that could prove useful guides to understanding artificial consciousness.
Numerous theories have been proposed, and none has yet been proved—or even deemed a front-runner. And they make radically different predictions about AI consciousness.
Some theories treat consciousness as a feature of the brain’s software: all that matters is that the brain performs the right set of jobs, in the right sort of way. According to global workspace theory, for example, systems are conscious if they possess the requisite architecture: a variety of independent modules, plus a “global workspace” that takes in information from those modules and selects some of it to broadcast across the entire system.
Other theories tie consciousness more squarely to physical hardware. Integrated information theory proposes that a system’s consciousness depends on the particular details of its physical structure—specifically, how the current state of its physical components influences their future and indicates their past. According to IIT, conventional computer systems, and thus current-day AI, can never be conscious—they don’t have the right causal structure. (The theory was recently criticized by some researchers, who think it has gotten outsize attention.)
Anil Seth, a professor of neuroscience at the University of Sussex, is more sympathetic to the hardware-based theories, for one main reason: he thinks biology matters. Every conscious creature that we know of breaks down organic molecules for energy, works to maintain a stable internal environment, and processes information through networks of neurons via a combination of chemical and electrical signals. If that’s true of all conscious creatures, some scientists argue, it’s not a stretch to suspect that any one of those traits, or perhaps even all of them, might be necessary for consciousness.
Because he thinks biology is so important to consciousness, Seth says, he spends more time worrying about the possibility of consciousness in brain organoids—clumps of neural tissue grown in a dish—than in AI. “The problem is, we don’t know if I’m right,” he says. “And I may well be wrong.”
He’s not alone in this attitude. Every expert has a preferred theory of consciousness, but none treats it as ideology—all of them are eternally alert to the possibility that they have backed the wrong horse. In the past five years, consciousness scientists have started working together on a series of “adversarial collaborations,” in which supporters of different theories come together to design neuroscience experiments that could help test them against each other. The researchers agree ahead of time on which patterns of results will support which theory. Then they run the experiments and see what happens.
In June, Mudrik, Koch, Chalmers, and a large group of collaborators released the results from an adversarial collaboration pitting global workspace theory against integrated information theory. Neither theory came out entirely on top. But Mudrik says the process was still fruitful: forcing the supporters of each theory to make concrete predictions helped to make the theories themselves more precise and scientifically useful. “They’re all theories in progress,” she says.
At the same time, Mudrik has been trying to figure out what this diversity of theories means for AI. She’s working with an interdisciplinary team of philosophers, computer scientists, and neuroscientists who recently put out a white paper that makes some practical recommendations on detecting AI consciousness. In the paper, the team draws on a variety of theories to build a sort of consciousness “report card”—a list of markers that would indicate an AI is conscious, under the assumption that one of those theories is true. These markers include having certain feedback connections, using a global workspace, flexibly pursuing goals, and interacting with an external environment (whether real or virtual).
In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong. That’s where LLMs like LaMDA currently are: they don’t possess the right type of feedback connections, use global workspaces, or appear to have any other markers of consciousness.
The trouble with consciousness-by-committee, though, is that this state of affairs won’t last. According to the authors of the white paper, there are no major technological hurdles in the way of building AI systems that score highly on their consciousness report card. Soon enough, we’ll be dealing with a question straight out of science fiction: What should one do with a potentially conscious machine?
In 1989, years before the neuroscience of consciousness truly came into its own, Star Trek: The Next Generation aired an episode titled “The Measure of a Man.” The episode centers on the character Data, an android who spends much of the show grappling with his own disputed humanity. In this particular episode, a scientist wants to forcibly disassemble Data, to figure out how he works; Data, worried that disassembly could effectively kill him, refuses; and Data’s captain, Picard, must defend in court his right to refuse the procedure.
Picard never proves that Data is conscious. Rather, he demonstrates that no one can disprove that Data is conscious, and so the risk of harming Data, and potentially condemning the androids that come after him to slavery, is too great to countenance. It’s a tempting solution to the conundrum of questionable AI consciousness: treat any potentially conscious system as if it is really conscious, and avoid the risk of harming a being that can genuinely suffer.
Treating Data like a person is simple: he can easily express his wants and needs, and those wants and needs tend to resemble those of his human crewmates, in broad strokes. But protecting a real-world AI from suffering could prove much harder, says Robert Long, a philosophy fellow at the Center for AI Safety in San Francisco, who is one of the lead authors on the white paper. “With animals, there’s the handy property that they do basically want the same things as us,” he says. “It’s kind of hard to know what that is in the case of AI.” Protecting AI requires not only a theory of AI consciousness but also a theory of AI pleasures and pains, of AI desires and fears.
And that approach is not without its costs. On Star Trek, the scientist who wants to disassemble Data hopes to construct more androids like him, who might be sent on risky missions in lieu of other personnel. To the viewer, who sees Data as a conscious character like everyone else on the show, the proposal is horrifying. But if Data were simply a convincing simulacrum of a human, it would be unconscionable to expose a person to danger in his place.
Extending care to other beings means protecting them from harm, and that limits the choices that humans can ethically make. “I’m not that worried about scenarios where we care too much about animals,” Long says. There are few downsides to ending factory farming. “But with AI systems,” he adds, “I think there could really be a lot of dangers if we overattribute consciousness.” AI systems might malfunction and need to be shut down; they might need to be subjected to rigorous safety testing. These are easy decisions if the AI is inanimate, and philosophical quagmires if the AI’s needs must be taken into consideration.
Seth—who thinks that conscious AI is relatively unlikely, at least for the foreseeable future—nevertheless worries about what the possibility of AI consciousness might mean for humans emotionally. “It’ll change how we distribute our limited resources of caring about things,” he says. That might seem like a problem for the future. But the perception of AI consciousness is with us now: Blake Lemoine took a personal risk for an AI he believed to be conscious, and he lost his job. How many others might sacrifice time, money, and personal relationships for lifeless computer systems?
Even bare-bones chatbots can exert an uncanny pull: a simple program called ELIZA, built in the 1960s to simulate talk therapy, convinced many users that it was capable of feeling and understanding. The perception of consciousness and the reality of consciousness are poorly aligned, and that discrepancy will only worsen as AI systems become capable of engaging in more realistic conversations. “We will be unable to avoid perceiving them as having conscious experiences, in the same way that certain visual illusions are cognitively impenetrable to us,” Seth says. Just as knowing that the two lines in the Müller-Lyer illusion are exactly the same length doesn’t prevent us from perceiving one as shorter than the other, knowing GPT isn’t conscious doesn’t change the illusion that you are speaking to a being with a perspective, opinions, and personality.
In 2015, years before these concerns became current, the philosophers Eric Schwitzgebel and Mara Garza formulated a set of recommendations meant to protect against such risks. One of their recommendations, which they termed the “Emotional Alignment Design Policy,” argued that any unconscious AI should be intentionally designed so that users will not believe it is conscious. Companies have taken some small steps in that direction—ChatGPT spits out a hard-coded denial if you ask it whether it is conscious. But such responses do little to disrupt the overall illusion.
Schwitzgebel, who is a professor of philosophy at the University of California, Riverside, wants to steer well clear of any ambiguity. In their 2015 paper, he and Garza also proposed their “Excluded Middle Policy”—if it’s unclear whether an AI system will be conscious, that system should not be built. In practice, this means all the relevant experts must agree that a prospective AI is very likely not conscious (their verdict for current LLMs) or very likely conscious. “What we don’t want to do is confuse people,” Schwitzgebel says.
Avoiding the gray zone of disputed consciousness neatly skirts both the risks of harming a conscious AI and the downsides of treating a lifeless machine as conscious. The trouble is, doing so may not be realistic. Many researchers—like Rufin VanRullen, a research director at France’s Centre National de la Recherche Scientifique, who recently obtained funding to build an AI with a global workspace—are now actively working to endow AI with the potential underpinnings of consciousness.
The downside of a moratorium on building potentially conscious systems, VanRullen says, is that systems like the one he’s trying to create might be more effective than current AI. “Whenever we are disappointed with current AI performance, it’s always because it’s lagging behind what the brain is capable of doing,” he says. “So it’s not necessarily that my objective would be to create a conscious AI—it’s more that the objective of many people in AI right now is to move toward these advanced reasoning capabilities.” Such advanced capabilities could confer real benefits: already, AI-designed drugs are being tested in clinical trials. It’s not inconceivable that AI in the gray zone could save lives.
VanRullen is sensitive to the risks of conscious AI—he worked with Long and Mudrik on the white paper about detecting consciousness in machines. But it is those very risks, he says, that make his research important. Odds are that conscious AI won’t first emerge from a visible, publicly funded project like his own; it may very well take the deep pockets of a company like Google or OpenAI. These companies, VanRullen says, aren’t likely to welcome the ethical quandaries that a conscious system would introduce. “Does that mean that when it happens in the lab, they just pretend it didn’t happen? Does that mean that we won’t know about it?” he says. “I find that quite worrisome.”
Academics like him can help mitigate that risk, he says, by getting a better understanding of how consciousness itself works, in both humans and machines. That knowledge could then enable regulators to more effectively police the companies that are most likely to start dabbling in the creation of artificial minds. The more we understand consciousness, the smaller that precarious gray zone gets—and the better the chance we have of knowing whether or not we are in it.
For his part, Schwitzgebel would rather we steer far clear of the gray zone entirely. But given the magnitude of the uncertainties involved, he admits that this hope is likely unrealistic—especially if conscious AI ends up being profitable. And once we’re in the gray zone—once we need to take seriously the interests of debatably conscious beings—we’ll be navigating even more difficult terrain, contending with moral problems of unprecedented complexity without a clear road map for how to solve them. It’s up to researchers, from philosophers to neuroscientists to computer scientists, to take on the formidable task of drawing that map.
Grace Huckins is a science writer based in San Francisco.
MIT Technology Review © 2023
4 notes
·
View notes
Text
How Does a Complex Network of Neurons Give Rise to Subjective Experience?
Neural Correlates of Consciousness: One approach to understanding consciousness is to identify the neural correlates of consciousness (NCC). These are specific brain processes and structures that are associated with conscious experiences. Neuroscientists have made progress in mapping areas of the brain that are active during different states of consciousness, such as the prefrontal cortex, thalamus, and certain networks like the default mode network. However, identifying the NCCs is only part of the puzzle—it tells us where consciousness might occur but not how or why it arises.
Integrated Information Theory: One theory that attempts to explain consciousness is Integrated Information Theory (IIT), proposed by neuroscientist Giulio Tononi. IIT posits that consciousness arises from the integration of information within a system. According to this theory, consciousness is a property of any system that has a high degree of informational integration. The brain, with its vast interconnected network of neurons, is thought to generate a unified conscious experience because it integrates information in a highly complex way. IIT provides a framework for understanding why certain brain states are conscious and others are not, but it doesn’t fully explain the subjective quality of experience—what philosophers refer to as "qualia."
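The intuition behind "integration" can be illustrated with a toy calculation. This is emphatically not Tononi's phi, which is defined over causal structure and all partitions of a system; here we simply use mutual information between two halves of a tiny two-component system, so that independent parts score zero and tightly coupled parts score higher:

```python
# Illustrative only: a crude "integration" score for a toy two-part system,
# using mutual information between the parts (NOT IIT's actual phi measure).

import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) between x and y, estimated from
    joint samples (x, y)."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts for x
    py = Counter(y for _, y in pairs)    # marginal counts for y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Two binary "subsystems" observed over time:
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25   # halves share nothing
coupled     = [(0, 0), (1, 1)] * 50                    # halves mirror each other

print(mutual_information(independent))  # 0.0 bits: no integration
print(mutual_information(coupled))      # 1.0 bits: fully integrated
```

The point of the sketch is only that "how much the parts know about each other" is a measurable quantity; IIT's real claim involves far richer structure than this pairwise statistic captures.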
Global Workspace Theory: Another influential theory is the Global Workspace Theory (GWT), proposed by Bernard Baars and further developed by Stanislas Dehaene and others. GWT suggests that consciousness involves a global workspace, a kind of "stage" in the brain where information is "broadcast" to various specialized areas. When information is globally broadcast, it becomes available for a variety of processes, such as decision-making, planning, and verbal report, which corresponds to conscious experience. This theory provides a useful model for understanding the distribution and coordination of conscious information but still leaves open the question of how subjective experience arises.
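The "stage and broadcast" metaphor can be sketched in a few lines of code. This is a toy illustration only: the module names and the salience-based competition rule are invented for the example, not taken from any published GWT model. Specialist processes submit messages, the most salient one wins the stage, and the winner is broadcast to every registered module:

```python
# Toy Global Workspace: specialists compete, the winner is broadcast to all.
# All names and the salience scheme are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Message:
    source: str       # which specialist produced it
    content: str      # what it represents
    salience: float   # strength in the competition for the workspace

class GlobalWorkspace:
    def __init__(self):
        self.modules = {}  # module name -> callback receiving broadcasts

    def register(self, name, on_broadcast):
        self.modules[name] = on_broadcast

    def cycle(self, candidates):
        """One cycle: the most salient candidate wins the stage and is
        broadcast to every module, making it globally available."""
        winner = max(candidates, key=lambda m: m.salience)
        for receive in self.modules.values():
            receive(winner)
        return winner

received = []
gw = GlobalWorkspace()
for name in ["vision", "audition", "motor_planning", "verbal_report"]:
    gw.register(name, lambda msg, n=name: received.append((n, msg.content)))

winner = gw.cycle([
    Message("vision", "red ball approaching", salience=0.9),
    Message("audition", "background hum", salience=0.2),
])
print(winner.content)   # "red ball approaching"
print(len(received))    # 4: the broadcast reached every module
```

The design choice the metaphor highlights is the bottleneck: many processes run in parallel, but only one message at a time is globally available for planning, decision-making, and report.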
The Binding Problem: One challenge in understanding how consciousness arises from neural activity is the binding problem—how the brain integrates information from various sensory modalities (sight, sound, touch, etc.) into a unified conscious experience. Despite different types of information being processed in different areas of the brain, we experience them as a coherent whole. The mechanisms by which the brain accomplishes this integration remain a significant area of research.
Quantum Consciousness Hypotheses: Some theories, like those proposed by physicist Roger Penrose and anesthesiologist Stuart Hameroff, suggest that quantum processes within neurons could play a role in consciousness. These theories are controversial and remain speculative, as there is currently no empirical evidence that quantum processes contribute to consciousness in a meaningful way. However, they attempt to address the gap between the physical brain and the experience of consciousness by suggesting that consciousness could involve more fundamental, possibly non-computational, processes.
Why Does Consciousness Exist?
Evolutionary Perspectives: From an evolutionary perspective, consciousness may have developed because it provides adaptive advantages. Consciousness allows an organism to integrate information, make decisions, and predict outcomes more effectively than a purely reflexive or unconscious process. It facilitates learning, problem-solving, social interaction, and the ability to navigate complex environments. The awareness of pain, for instance, is crucial for avoiding harmful situations, while the experience of pleasure can reinforce behaviors beneficial for survival and reproduction.
The Hard Problem of Consciousness: Despite evolutionary explanations, the "hard problem" of consciousness, a term coined by philosopher David Chalmers, remains unresolved. The hard problem refers to the question of why and how physical processes in the brain give rise to subjective experiences—the "what it is like" aspect of being. While we can explain many of the functions that consciousness performs, the fundamental nature of subjective experience—why there is something it is like to see red, feel pain, or hear a symphony—remains mysterious. This gap between objective descriptions of brain processes and subjective experience is at the core of the hard problem.
Panpsychism and Fundamental Properties: Some philosophers and scientists suggest that consciousness might be a fundamental property of the universe, like space, time, or mass. This view, known as panpsychism, posits that all matter has some form of consciousness or proto-consciousness. While this idea might sound far-fetched, it attempts to bridge the explanatory gap by suggesting that consciousness is not something that emerges at a certain level of complexity but is instead a fundamental feature of reality itself.
Philosophical Theories: Various philosophical theories, such as dualism (the idea that mind and matter are distinct substances), idealism (the idea that reality is fundamentally mental), and physicalism (the idea that everything is physical and consciousness emerges from brain processes), all attempt to explain why consciousness exists. Each theory has its proponents and critics, and none has definitively answered the question to the satisfaction of all scholars.
What is Consciousness Fundamentally?
Phenomenal Consciousness (Qualia): At its core, consciousness is often defined by its phenomenal or subjective aspects—those raw experiences we refer to as "qualia." This includes the experience of colors, sounds, emotions, thoughts, and the sense of self. Phenomenal consciousness is what makes a mental state something it "feels like" to have. It is these qualia that make the question of consciousness so puzzling; they are inherently subjective and resistant to objective measurement or description.
Self-Awareness and Reflective Consciousness: Beyond phenomenal consciousness, there is also reflective or self-consciousness—the ability to reflect on one's own mental states, to recognize oneself as a distinct individual, and to think about one's own thoughts. This form of consciousness is often seen as more advanced and is associated with higher cognitive functions such as reasoning, planning, and complex social interactions.
Non-Physicalist Theories: Some theories propose that consciousness is not reducible to physical brain states but instead involves non-physical properties or substances. For example, property dualism suggests that while the brain is physical, consciousness arises as a non-physical property of certain brain processes. Substance dualism, as famously argued by René Descartes, posits that consciousness (or the mind) is a distinct substance from the physical brain.
Emergent Properties: Another perspective is that consciousness is an emergent property of complex systems. Just as water has properties (like fluidity) that are not present in the individual hydrogen and oxygen atoms, consciousness might emerge from the complex interactions and integrations of neurons in ways that are not predictable from the properties of individual neurons alone. However, this explanation still leaves open the question of how subjective experience "emerges" from physical processes.
Final Thoughts
Ultimately, the mysteries of consciousness touch on the deepest questions of existence, identity, and reality. Despite significant advancements in neuroscience, psychology, and philosophy, the "hard problem" of consciousness remains unsolved. The question of how and why subjective experience arises from a network of neurons continues to challenge our understanding and invites ongoing exploration and reflection. While we may not have all the answers yet, exploring these questions deepens our appreciation of the complexities of the mind and the wonder of existence itself.
1 note
·
View note
Text
When Kylie Bucknell is sentenced to home detention, she’s forced to come to terms with her unsociable behaviour, her blabbering mother and a hostile spirit who seems less than happy about the new living arrangement. Credits: TheMovieDb.

Film Cast: Kylie Bucknell: Morgana O’Reilly; Miriam Bucknell: Rima Te Wiata; Amos: Glen-Paul Waru; Graeme: Ross Harper; Dennis: Cameron Rhodes; Officer Grayson: Millen Baird; Officer Carson: Bruce Hopkins; Eugene: Ryan Lampp; Judge: Ian Mune; Hollis: Wallace Chapman; Kraglund: Mick Innes; Justin: David Van Horn; Leslie: Nikki Si’ulepa; Elizabeth Chalmers: Kitty Riddell; Young Kylie (Voice): Lila Sharp; Fitness Host: Louise Mills.

Film Crew: Editor: Gerard Johnstone; Director of Photography: Simon Riera; Production Design: Anya Whitlock; Original Music Composer: Mahuia Bridgman-Cooper; Second Unit Director: Luke Sharpe; Production Design: Jane Bucknell; Executive Producer: Chris Lambert; Executive Producer: Ant Timpson; Visual Effects Supervisor: Matt Westbrooke; Set Decoration: Simon Vine; Steadicam Operator: Joe Lawry; Set Decoration: Stephen Jaimeson; Art Direction: Lyn Bergquist; Art Direction: Laura Smith; Art Direction: Haley Williams; Costume Design: Lissy Mayer; Steadicam Operator: Alex McDonald; Set Decoration: Graham Collins; Gaffer: Nicholas Riini; Gaffer: Tane Kingan; Continuity: Rose Damon; Steadicam Operator: Simon Tutty; Executive Producer: Michael Kumerich; Line Producer: Garett Mayow; Executive Producer: Daniel Story; Makeup Effects Designer: Jacinta Driver; Assistant Makeup Artist: Kendall Feruson; Makeup Artist: Vanessa Hurley; Assistant Makeup Artist: Rachel Johanson; Assistant Makeup Artist: Katie Jones; Makeup Artist: Carly Marr; Assistant Makeup Artist: Nikki Milina; Assistant Makeup Artist: Miranda Raman; Makeup Artist: Lauren Steward; Production Manager: Ainsley Allen; Third Assistant Director: Rachael Bristow; Third Assistant Director: Esther Clewlow; Third Assistant Director: Sarah Hough; Third Assistant Director: Laurelle May; First Assistant Director: Natasha Romaniuk; First Assistant Director: Fraser Ross; First Assistant Director: Katie Tate; First Assistant Director: Craig Wilson; Props: Shamus Butt; Art Department Assistant: Meling Cooper; Art Department Assistant: Hilary Crombie; Assistant Set Dresser: Louise George; Assistant Set Dresser: James Goldenthal; Runner Art Department: Kathryn Lees; Art Department Assistant: Brian Maru; Assistant Set Dresser: Aimee Russell; Art Department Assistant: Jaime Sharpe; Concept Artist: Andrejs Skuja; Art Department Assistant: Luke Thornborough; Art Department Assistant: Wesley Twiss; Dialogue Editor: Nich Cunningham; Boom Operator: Matthew Dickins; Sound Recordist: Phil Donovan; Sound Recordist: Gabriel Muller; Boom Operator: Stephen Saldanha; Sound Recordist: Ande Schurr; Sound Recordist: Mark Storey; Sound Designer: Shane Taipari; Sound Recordist: Ben Vanderpoel; Digital Compositor: Stuart Bedford; Digital Imaging Technician: James Brookes; Digital Compositor: Johnny Lyon; 3D Modeller: Rich Nosworthy; Digital Compositor: Jesse Parkhill; Stunt Coordinator: Aaron Lupton; Stunt Coordinator: Steve McQuillan; Stunts: Stefan Talaic; Stunts: Shane Blakey; Stunts: Joanna Baker; Stunts: Daniel Andrews; Lighting Technician: Sam Behrend; First Assistant Camera: Nick Burridge; First Assistant Camera: Alexander Campbell; First Assistant Camera: Kelly Chen; Lighting Technician: Tommy Davis; Camera Intern: Woody Dean; Lighting Technician: Hayden Dudley; Lighting Technician: James Dudley; Lighting Technician: Leigh Elford; Camera Intern: Kalym Gilbert; Camera Intern: Andrew Farrent; First Assistant Camera: Julia Green; Cinematography: Adrian Greshoff; Lighting Technician: Mathew Harte; Lighting Technician: Stacey Hui; First Assistant Camera: Matt Hunt; First Assistant Camera: Blair Ihaka; Gaffer: Tony Lumsden; First Assistant Camera: Tom Neunzerling; Cinematography: Eoin O’Liddy; Key Grip: Jeremy Osbourne; Key Grip: Jim Rowe; Lighting Technician: Richard Schofield; First Assistant Camera: Richard Simkins; First Assistant Camera: Cameron Stoltz; Cinematography: Drew Sturge; Camera Intern: Matt Thomas; Lighting Technician: Jason Tidsw...
#Basement#dentures#exploding head#father-in-law#garden shears#haunted house#home detention#house arrest#security guard#superstition#Top Rated Movies
0 notes
Text
Details Presentation I Want Routers
Stuart Chalmers created ‘I Want Routers’ in 2004 after 20 years of working as a programmer, analyst and technician in the IT sector. Originally named BizTel, in its early days I Want Routers supplied telephone systems, lines and broadband.
Davis House Lodge, Causeway Trading Estate, Fishponds, BS16 3JB
+44 0117 3701120
#computer network companies bristol#computer network company bristol#computer network installation bristol#computer network installation companies bristol
0 notes
Text
George Garner - via social media - "PADAM! Sooo good to have the <incredible> Kylie Minogue on the cover of the new MusicWeek Insta ! Inside she speaks to James Stuart Hanley all about her career so far and, more importantly, what comes next! Huge thanks to Polly Bhowmik and the A&P Artist Management team, BMG UK, Murray Chalmers Pr & Emma Banks (agent) for being a part of it and making it happen. This is a super special one for the team ❤️ Kylie, Kylie Minogue, Padam Padam, Tension".
0 notes
Text
Downward Envy: What Kind Of Australian Indulges In That?
Hands up who has heard of the term ‘downward envy’? Yes, this is the name for all those Aussies who think that everyone on welfare is a dole bludger. These people reckon that there are too many folk having a good time on less than $50 a day. You could not even pay your rent in Australia on that, let alone eat. Downward envy: What kind of Australian indulges in that? Bitter and twisted miserable bastards come to mind. Unhappy people wanting to apportion blame onto others also spring to mind in this case.
“As a student of John Howard, Peter Dutton was always going to focus his budget reply speech on how middle Australia is missing out, in the hope that there are enough Howard battlers still around to appreciate the throwback. In this line of attack on the government’s priorities, he’s getting help from some sections of the press gallery. At his National Press Club address the day after the budget was handed down, Treasurer Jim Chalmers took a question from Sky News political editor Andrew Clennell. “With unemployment at three and a half per cent, it seems in the vast majority of cases, if you want a job, you can get a job,” began Clennell. “So why do people on the dole get more money from the government out of this budget, but not a household on more than $160k a year who, for example, don’t get the electricity bill relief? What do you say to those working full time about why those on the dole get relief, but they don’t?” Won’t somebody think of the couple on $160,000 a year?” - (https://www.themonthly.com.au/the-politics/daniel-james/2023/05/12/downward-envy)
Robodebt & Downward Envy
Weak scumbags who like putting the boot into those who can’t fight back is another identifying attribute here, I reckon. The Robodebt debacle was fuelled by downward envy, and the Coalition of the Liberals and Nationals in government fed on it. Scott Morrison, that great liar who led the nation, was a driving force in the Robodebt shameful betrayal of vulnerable Australians. Tony Abbott, Stuart Robert, Alan Tudge, and Marise Payne all had their grubby paws on it as well. Plus, a bunch of shameless senior bureaucrats who would have licked the s*** from the sewer if asked to and for their plump pay packets.
Nothing To See Here Your Honour
Lest we forget, people actually died on the back of the disgraceful and unlawful imposition of debts upon them. Oh yeah, and it cost the Australian tax payers $1.8 billion for the massive stuff up it was. Did any of these movers and shakers even say they were sorry? No. Nobody was responsible apparently – it kind of just happened by itself apparently. Bloody amazing how these politicians and senior public servants conveniently go missing when they are handing out blame and the s*** sandwiches. The Robodebt welfare cops were suddenly on holiday during the Royal Commission and ducked the arse smacking from the judge.
Treasurer Jim Chalmers cited comments by Taylor “that what worried him about our changes in social security was that it meant that the broader Australian community would be funding help for the most vulnerable”. “That is the whole basis of social security,” Chalmers said, to applause in Parliament House’s great hall, packed with ministers, department heads, chief executives and advocates who had called for increases in welfare. “And I think that our country is better, frankly, than the kind of downward envy that we hear about from time to time from people like Angus Taylor.” - (https://www.theguardian.com/australia-news/2023/may/10/jim-chalmers-accuses-coalition-of-downward-envy-as-dutton-refuses-to-commit-to-jobseeker-increase-in-budget)
Downward Envy & Keeping The Abos In Their Place
Peter Dutton is The Grand Poo Bah of the downward envy club in Australia. Old skull, Voldemort himself, is forever ready with a dog whistle for the racist mob and their bitter hate for Indigenous Australia. It is never enough that institutional neglect and racist behaviours have dogged First Nations people in this country for hundreds of years. No, the Coalition of miserable scumbags is dedicated to keeping them in their place at the bottom of the wealth ladder. Closing the gap won’t be happening anytime soon on their watch. Cheap shots at Linda Burney for the way she speaks. Jibes and thinly disguised insults thrown at Aboriginals for being bludgers. Downward envy bubbles over on the stove for this lot.
Sky News Australia, Fox & Downward Envy
Sky News Australia is where Rupert Murdoch has coalesced all the smug, ugly, and selfish attributes of Australians into one, hopefully, cash-inspiring place. Fox News is his American cash cow, where he feeds on the rabid right-wing audience over there. The thing about right-wing news is that it doesn’t even bother being objective. Telling lies is par for the course, and the dumber the BS the better for the alt-right. Successfully sued for a billion dollars for misleading the public over Dominion’s role in the 2020 presidential election, Fox News is so far from being a trusted source of news it is a sick joke. Donald Trump the compulsive liar is the pied piper of fake news on Fox. Murdoch and the Trump machine go around sucking billions of dollars out of a deluded audience hellbent on believing anything that fits into their own uber-partisan view of the world. Downward envy even gets a guernsey over there, with African Americans in the ghettoes getting a free ride wherever they are going, according to the shock jocks and motor mouths on Fox and Sky News Australia. Yes, Blacks deserve the hundreds of years of slavery and the decades of Black Codes locking them up and continuing peonage slavery for the state in the south until 1942. Race was criminalised in America and still is, with a veritable industry keeping African Americans incarcerated and working in prisons as free labour for states and corporations to this day. It is big business in the US of A.
Downward envy is like having the telescope the wrong way around: peering down into the lives of the poor and benighted and giving them a hard time for their troubles. This is the Murdoch, Trump, and Dutton stratagem. Blame those below you on the economic ladder for their self-begotten woes; that way your own greed and self-serving attitudes don’t ever come into question. Middle Australia has never been wealthier, despite the fact that landlords and corporations are feasting on the inflated fat of the land. But don’t blame the rich, because you aspire to that position yourselves. Blame the poor, the homeless and the unemployed instead. Welcome to modern Australia in the 21C.
Robert Sudha Hamilton is the author of Money Matters: Navigating Credit, Debt, and Financial Freedom. ©MidasWord
0 notes
Text
Mirage Station playlist for May 17, 2023
1. Gamelan Telek “Bendera Merah Putih” from ロンボック島の音楽 ~潮騒のメロディ (King Records 1999)
2. Giacomo Salis/Paolo Sanna “1” from Acoustic Studies For Sardinian Bells (Falt 2023) (bandcamp link)
3. Stuart Chalmers “Voice of the Underground Stream” from Sound Environments 1: Caves (Self-released 2018)
4. Wolf Eyes “The Museums We Carry” from Dreams In Splattered Lines (Disciples 2023) (bandcamp link)
5. Xylouris White “Night Club” from The Forest In Me (Drag City 2023) (bandcamp link)
6. Arthur Russell “Tower of Meaning/Rabbit's Ear/Home Away from Home” from World of Echo (Upside 1986)
7. Mother Juniper “The Sculptor” from Write The Soil Lighter (Spirit House 2023) (bandcamp link)
8. Country Teasers “Sandy” from Secret Weapon Revealed At Last (In The Red Recordings 2003)
9. Strapping Fieldhands “Red Dog The Deconstructor” from LYVE: IN CONCERTE (ever never 2023) (bandcamp link)
Listen to this episode in the archive: here
1 note
·
View note
Photo
M4 High Speed Tractor - BRIEF HISTORY

Artillery Towing Vehicle/Tractor – USA 1943

HISTORICAL SUMMARY The M4 was built in response to a 1941 request from the United States Army for the development of a new medium artillery tractor. The request specified that the new vehicle had to be capable of keeping pace with the other armoured vehicles over the rough terrain of the battlefield, but also that components already used in other vehicles then in production should be employed in its manufacture, so as to guarantee mechanical reliability and speed of delivery. Thus, beyond streamlining production, logistics and maintenance would also become simpler thanks to the sharing of components with other vehicles involved in the conflicts. To meet these specifications, the Allis-Chalmers company, founded in 1901 in Milwaukee, Wisconsin, and specialised in the production of tractors, transmissions and agricultural equipment, began producing the first prototypes in late 1942, using the platform of the M3 Stuart tank, which was then being produced on a large scale. After testing, production was authorised in early 1943 and, in August of that same year, the vehicle was standardised under the designation “M4 High Speed Tractor”.

Essentially, two versions were built: one for towing mortars, 155 mm and 240 mm artillery pieces and 8-inch howitzers, and another for towing 90 mm anti-aircraft guns. Besides towing these pieces, the M4 High Speed Tractor could also carry, in addition to the driver, up to 10 soldiers of the gun crew, along with a large quantity of ammunition. The ammunition was housed in special modular racks according to whether the projectiles were 90 mm, 155 mm or 240 mm rounds or 8-inch howitzer shells. For easier handling of the ammunition, the M4s were fitted with a small mechanical crane that helped with loading the projectiles, as these were considerably heavy. For safety reasons, the M4 High Speed Tractor was equipped with a sophisticated combined air and electric braking system, allowing it to hold the towed load regardless of the gradient and terrain conditions. For its protection it was armed with a 12.7 mm M2 Browning machine gun mounted on a rotating ring on the vehicle’s roof, which could be used for anti-aircraft defence or against personnel.

It was used in the European and Pacific theatres of operations until the end of the war. After the Second World War, M4 High Speed Tractors were also employed in the Korean War by the American and South Korean armies. A few hundred units were likewise supplied to allied nations such as Brazil, Portugal, New Zealand, Pakistan, Yugoslavia and Japan, under the aegis of the mutual defence assistance programme. It was the first in a long line of tractors to serve the United States Army, able to perform a wide variety of missions, most notably towing heavy guns and transporting ammunition. Allis-Chalmers produced a total of 5,552 units between March 1943 and June 1945, and the M4 High Speed Tractor remained in service with the American army until the mid-1960s, when it was replaced by the M40 Gun Motor Carriage, which had appeared at the end of the Second World War.

M40 Gun Motor Carriage (the M4’s replacement) from 1960

Around 500 units were rebuilt as civilian tractors in the 1960s by the company G.M. Philpott Ltd. of Vancouver. These tractors were stripped of their military components and used in the civilian market for groundwork and road construction, as tows for rock-drilling rigs and for hauling cargo, and they were also used in logging operations in British Columbia. Some of them were still in use at the end of the last century.
VERSIONS RELEASED

M4
Base version of the model; the first units entered service from March 1943.

M4C
This version had special racks for carrying ammunition at the rear of the vehicle.

M4A1
This version featured wider “duck bill” tracks, identical to those adopted on the final versions of the Sherman tanks. 259 units were built between June and August 1945.
TECHNICAL DATA

Country of Origin: USA
Manufacturer: Allis-Chalmers
Total Production: 5,552 units
Production Period: March 1943 to June 1945
Historical Period: Second World War and Korean War
User Countries: USA, Belgium, Brazil, Portugal, New Zealand, Pakistan, Yugoslavia and Japan
Length: 5.23 m
Width: 2.46 m
Height: 2.51 m
Weight: 14,288 kg
Maximum Speed (road): 53 km/h
Range: 290 km
Armour: none
Fording Depth: 1.04 m
Vertical Obstacle: 0.7 m
Trench Crossing: 1.5 m
Engine: 1 Waukesha 145 GZ OHV inline 6-cylinder petrol engine, 210 hp
Armament: 1 M2 12.7 mm machine gun
Sources:
https://modelismoestatico.comunidades.net/ph-m4-high-speed-tractor
https://en.wikipedia.org/wiki/M4_Tractor
https://modelismoestatico.comunidades.net/ph-m40-gun-motor-carriage
-.-.-.-.-.-.-.-.-
In the next post: a new build will be presented.
Warm regards!
Osmarjun
Photo
Stuart Chalmers - The Heart of Nature
Opal Tapes
2021
Text
I refreshed the Spiritual pop and Secular drones playlists on the Blue Tapes Soundcloud that act as a sort of primer to the different hemispheres of the label.
Spiritual pop encapsulates the lighter, more playful side of our work, with contributions from Katie Gately, Tashi Dorji, Henry Plotnick, The Library of Babel, Stuart Chalmers and Taming Power, Hey String, Richard Youngs, Bulbils, Ashtray Navigations and Dane Law.
Secular drones summarises the darker, noisier end of things: Threes and Will, Father Murphy, Jute Gyte, Trupa Trupa, Unfollow, Mats Gustafsson, Ratkiller, Cadu Tenorio and Gultskra Artikler. Click the links in the first paragraph to explore!
#blue tapes #experimental music #tapes #ambient #cassettes #drone #bandcamp #tape label #richard youngs #richard dawson #trupa trupa #shane parish #tashi dorji #katie gately #mats gustafsson #jute gyte
Text
Exclusive: NEW from Cruel Nature Records - Listen to releases from Stuart Chalmers, Aidan Baker, MAbH and Fast Blood
Words: Andy Hughes It doesn’t seem like five minutes have gone by since we were last bigging up Cruel Nature Records and their latest releases. True to form, the Newcastle-upon-Tyne based DIY cassette label are kicking things up a notch month on month and there’s even more on the horizon. Once again they’ve tasked us with helping tell you about some of the new records they’ve got coming out and…
#Aidan Baker #Birthday cake for breakfast #Broken Spine Productions #Cruel Nature Records #Distant Animals #Fast Blood #Live at Cave 12 #Look Under #MAbH #Mortuus Auris #Stimmt #Stuart Chalmers #The Black Hand #The Breaking Ground