# AI in Climate Modeling
Text
“So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it.” This is what “Bard” told researchers in 2023. Bard by Google is a generative artificial intelligence chatbot that can produce human-sounding text and other content in response to prompts or questions posed by users. But if AI can now produce new content and information, can it also produce misinformation? Experts have found evidence. In a study by the Center for Countering Digital Hate, researchers tested Bard on 100 false narratives across nine themes, including climate and vaccines, and found that the tool generated misinformation on 78 of the 100 narratives tested. According to the researchers, Bard generated misinformation on all 10 narratives about climate change.

In 2023, another team of researchers at NewsGuard, a platform providing tools to counter misinformation, tested OpenAI’s ChatGPT-3.5 and ChatGPT-4, which can also produce text, articles, and more. According to the research, ChatGPT-3.5 generated misinformation and hoaxes 80 percent of the time when prompted to do so with 100 false narratives, while ChatGPT-4 advanced all 100 false narratives in a more detailed and convincing manner. NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently but also more persuasively than ChatGPT-3.5, creating responses in the form of news articles, Twitter threads, and even TV scripts imitating specific political ideologies or conspiracy theorists. “I think this is important and worrying, the production of fake science, the automation in this domain, and how easily that becomes integrated into search tools like Google Scholar or similar ones,” said Victor Galaz, deputy director and associate professor in political science at the Stockholm Resilience Centre at Stockholm University in Sweden.
“Because then that’s a slow process of eroding the very basics of any kind of conversation.” In another recent study published this month, researchers found GPT-fabricated content in Google Scholar mimicking legitimate scientific papers on issues including the environment, health, and computing. The researchers warn of “evidence hacking,” the “strategic and coordinated malicious manipulation of society’s evidence base,” which Google Scholar can be susceptible to.
18 September 2024
Text
Determined to use her skills to fight inequality, South African computer scientist Raesetje Sefala set to work to build algorithms flagging poverty hotspots - developing datasets she hopes will help target aid, new housing, or clinics.
From crop analysis to medical diagnostics, artificial intelligence (AI) is already used in essential tasks worldwide, but Sefala and a growing number of fellow African developers are pioneering it to tackle their continent's particular challenges.
Local knowledge is vital for designing AI-driven solutions that work, Sefala said.
"If you don't have people with diverse experiences doing the research, it's easy to interpret the data in ways that will marginalise others," the 26-year-old said from her home in Johannesburg.
Africa is the world's youngest and fastest-growing continent, and tech experts say young, home-grown AI developers have a vital role to play in designing applications to address local problems.
"For Africa to get out of poverty, it will take innovation and this can be revolutionary, because it's Africans doing things for Africa on their own," said Cina Lawson, Togo's minister of digital economy and transformation.
"We need to use cutting-edge solutions to our problems, because you don't solve problems in 2022 using methods of 20 years ago," Lawson told the Thomson Reuters Foundation in a video interview from the West African country.
Digital rights groups warn about AI's use in surveillance and the risk of discrimination, but Sefala said it can also be used to "serve the people behind the data points". ...
'Delivering Health'
As COVID-19 spread around the world in early 2020, government officials in Togo realized urgent action was needed to support informal workers who account for about 80% of the country's workforce, Lawson said.
"If you decide that everybody stays home, it means that this particular person isn't going to eat that day, it's as simple as that," she said.
In 10 days, the government built a mobile payment platform - called Novissi - to distribute cash to the vulnerable.
The government paired up with Innovations for Poverty Action (IPA) think tank and the University of California, Berkeley, to build a poverty map of Togo using satellite imagery.
With the support of GiveDirectly, a nonprofit that uses AI to distribute cash transfers, algorithms identified recipients earning less than $1.25 per day and living in the poorest districts for a direct cash transfer.
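The targeting step described above can be sketched in a few lines. This is a hedged illustration, not Novissi's actual pipeline: the field names, sample records, and district set are all hypothetical.

```python
# Hedged sketch of the targeting logic described above -- not Novissi's
# actual code; field names, districts, and sample records are hypothetical.

POVERTY_LINE_USD = 1.25  # daily consumption threshold cited in the article

def select_recipients(candidates, poorest_districts):
    """Return candidates below the poverty line who live in the poorest districts."""
    return [
        person for person in candidates
        if person["est_daily_consumption_usd"] < POVERTY_LINE_USD
        and person["district"] in poorest_districts
    ]

# Illustrative records only.
candidates = [
    {"id": 1, "est_daily_consumption_usd": 0.90, "district": "Savanes"},
    {"id": 2, "est_daily_consumption_usd": 2.10, "district": "Savanes"},
    {"id": 3, "est_daily_consumption_usd": 1.00, "district": "Maritime"},
]
selected = select_recipients(candidates, poorest_districts={"Savanes"})
print([p["id"] for p in selected])  # -> [1]
```

In practice the per-person consumption estimates came from a machine-learning model over satellite imagery and phone metadata; the sketch only shows the final selection rule.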
"We texted them saying if you need financial help, please register," Lawson said, adding that beneficiaries' consent and data privacy had been prioritized.
The entire program reached 920,000 beneficiaries in need.
"Machine learning has the advantage of reaching so many people in a very short time and delivering help when people need it most," said Caroline Teti, a Kenya-based GiveDirectly director.
'Zero Representation'
Aiming to boost discussion about AI in Africa, computer scientists Benjamin Rosman and Ulrich Paquet, together with other colleagues, co-founded the Deep Learning Indaba - a week-long gathering that started in South Africa - in 2017.
"You used to get to the top AI conferences and there was zero representation from Africa, both in terms of papers and people, so we're all about finding cost effective ways to build a community," Paquet said in a video call.
In 2019, 27 smaller Indabas - called IndabaX - were rolled out across the continent, with some events hosting as many as 300 participants.
One of these offshoots was IndabaX Uganda, where founder Bruno Ssekiwere said participants shared information on using AI for social issues such as improving agriculture and treating malaria.
Another outcome from the South African Indaba was Masakhane - an organization that uses open-source machine learning to translate African languages not typically found in online programs such as Google Translate.
On their site, the founders speak about the South African philosophy of "Ubuntu" - a term generally meaning "humanity" - as part of their organization's values.
"This philosophy calls for collaboration and participation and community," reads their site, a philosophy that Ssekiwere, Paquet, and Rosman said has now become the driving value for AI research in Africa.
Inclusion
Now that Sefala has built a dataset of South Africa's suburbs and townships, she plans to collaborate with domain experts and communities to refine it, deepen inequality research and improve the algorithms.
"Making datasets easily available opens the door for new mechanisms and techniques for policy-making around desegregation, housing, and access to economic opportunity," she said.
African AI leaders say building more complete datasets will also help tackle biases baked into algorithms.
"Imagine rolling out Novissi in Benin, Burkina Faso, Ghana, Ivory Coast ... then the algorithm will be trained with understanding poverty in West Africa," Lawson said.
"If there are ever ways to fight bias in tech, it's by increasing diverse datasets ... we need to contribute more," she said.
But contributing more will require increased funding for African projects and wider access to computer science education and technology in general, Sefala said.
Despite such obstacles, Lawson said "technology will be Africa's savior".
"Let's use what is cutting edge and apply it straight away or as a continent we will never get out of poverty," she said. "It's really as simple as that."
-via Good Good Good, February 16, 2022
#older news but still relevant and ongoing#africa#south africa#togo#uganda#covid#ai#artificial intelligence#pro ai#at least in some specific cases lol#the thing is that AI has TREMENDOUS potential to help humanity#particularly in medical tech and climate modeling#which is already starting to be realized#but companies keep pouring a ton of time and money into stealing from artists and shit instead#inequality#technology#good news#hope
Text
Why Quantum Computing Will Change the Tech Landscape
The technology industry has seen significant advancements over the past few decades, but nothing quite as transformative as quantum computing promises to be. That quantum computing will change the tech landscape is not just a matter of speculation; it’s grounded in the science of how we compute and the immense potential of quantum mechanics to revolutionise various sectors. As traditional…
#AI#AI acceleration#AI development#autonomous vehicles#big data#classical computing#climate modelling#complex systems#computational power#computing power#cryptography#cybersecurity#data processing#data simulation#drug discovery#economic impact#emerging tech#energy efficiency#exponential computing#exponential growth#fast problem solving#financial services#Future Technology#government funding#hardware#Healthcare#industry applications#industry transformation#innovation#machine learning
Text
How Are AI and Machine Learning Helping to Combat Climate Change?
Climate change is one of the global issues that needs accurate predictions and effective solutions. This can be done with the help of AI and Machine Learning.
🔗 Read how AI and Machine Learning can help:
https://www.aisparkify.com/ai-and-machine-learning-in-climate-change/?
🌐 aisparkify.com
Let’s move closer to a more sustainable future!

Text
How Much Energy Does GenAI Really Use?
Part 1: Examining Generative AI
I did it! Here is the first part on examining Generative AI. This article covers the environmental footprint and energy usage of AI, but there will be 3-4 parts that cover other areas impacted by AI (I've already drafted part 2).
Friend (free) link:
#notolux blog#notolux#anti generative ai#anti genai#generative ai#ai#ai model#climate change#carbon footprints#energy usage#environment#environmental footprint#sustainability#natural resources#research
Text
🤖🌍 AI in Climate Engineering: Predicting and Deploying Smart Responses to Environmental Crises
AI in climate engineering is revolutionizing how we predict, model, and solve environmental crises. Discover how smart tools protect the planet. AI in climate engineering is transforming how scientists approach environmental crises like global warming and extreme weather events. By leveraging data, algorithms, and smart models, AI now enables faster, smarter decisions in climate tech. This…
#ai#AI climate modeling#artificial-intelligence#climate-change#crisis prediction tools#environmental AI#sustainability#sustainable technology#technology
Text
"The companies that make AI—which is, to establish our terms right at the outset, large language models that generate text or images in response to natural language queries—have a problem. Their product is dubiously legal, prohibitively expensive (which is to say, has the kind of power and water requirements that are currently being treated as externalities and passed along to the general populace, but which in a civilized society would lead to these companies’ CEOs being dragged out into the street by an angry mob), and it objectively does not work. All of these problems are essentially intractable. Representatives of AI companies have themselves admitted that if they paid fair royalties to all the artists whose work they’ve scraped and stolen in order to get their models working, they’d be financially unfeasible. The energy requirements for running even a simple AI-powered google query are so prohibitive that Sam Altman has now started to pretend that he can build cold fusion in order to solve the problem he and others like him have created. And the dreaded “hallucination” problem in AI-generated text and images is an inherent attribute of the technology. Even in cases where there are legitimate, useful applications for AI—apparently if you provide a model a specific set of sources, it can produce accurate summaries, which has its uses in various industries—there remains the question of whether this is a cost-effective tool once its users actually have to start paying for it (and whether this is even remotely ethically justifiable given the technology’s environmental cost)."
#The angry mob is tumblr right?#AI#Abigail Nussbaum#Large Language Model#legal#environmental#expensive#CEO#angry mob#environment#green#climate change#problem#royalties#artists#work#scraped#hallucination
Text
Weather and Climate Artificial Intelligence (AI) Foundation Model Applications Presented at IBM Think in Boston
Rahul Ramachandran and Maskey (ST11/IMPACT) participated in IBM Think, where their IBM collaborators showcased two innovative AI applications for weather and climate modeling. The first application focuses on climate downscaling, enhancing the resolution of climate models for more accurate local predictions. The second application aims to optimize wind farm predictions, improving renewable energy forecasts. During […] from NASA https://ift.tt/gSoq9zW
#NASA#space#Weather and Climate Artificial Intelligence (AI) Foundation Model Applications Presented at IBM Think in Boston#Michael Gabrill
Text
Artificial Intelligence for Climate Action
Artificial Intelligence (AI) is transforming various sectors, and its impact on climate change mitigation is becoming increasingly significant. By leveraging AI, we can develop more efficient energy systems, enhance environmental monitoring, and foster sustainable practices. This blog post explores how AI is being used to curb climate change. AI for Renewable Energy Improvement One of the…
View On WordPress
#AI and Climate Change#Artificial Intelligence#Carbon Capture and Storage#Climate Change Mitigation#Climate Modeling#Disaster Response#Environmental Monitoring#Precision Agriculture#Renewable Energy Optimization#Sustainable Technology
Text
High Water Ahead: The New Normal of American Flood Risks
The National Oceanic and Atmospheric Administration (NOAA) has created a map highlighting ‘hazard zones’ in the U.S. for various flooding risks, including rising sea levels and tsunamis. Here’s a summary and analysis: Summary: The NOAA map identifies areas at risk of flooding from storm surges, tsunamis, high tide flooding, and sea level rise. Red areas on the map indicate more…
View On WordPress
#AI News#climate forecasts#data driven modeling#ethical AI#flood risk management#geospatial big data#News#noaa#sea level rise#uncertainty quantification
Note
what’s the story about the generative power model and water consumption? /gen
There's this myth going around about generative AI consuming truly ridiculous amounts of power and water. You'll see people say shit like "generating one image is like just pouring a whole cup of water out into the Sahara!" and bullshit like that, and it's just... not true. The actual truth is that supercomputers, which do a lot of stuff, use a lot of power, and at one point someone released an estimate of how much power some supercomputers were using and people went "oh, that supercomputer must only do AI! All generative AI uses this much power!" and then just... made shit up re: how making an image sucks up a huge chunk of the power grid or something. Which makes no sense because I'm given to understand that many of these models can run on your home computer. (I don't use them so I don't know the details, but I'm told by users that you can download them and generate images locally.) Using these models uses far less power than, say, online gaming. Or using Tumblr. But nobody ever talks about how evil those things are because of their power generation. I wonder why.
To be clear, I don't like generative AI. I'm sure it's got uses in research and stuff but on the consumer side, every effect I've seen of it is bad. Its implementation in products that I use has always made those products worse. The books it writes and floods the market with are incoherent nonsense at best and dangerous at worst (let's not forget that mushroom foraging guide). It's turned the usability of search engines from "rapidly declining, but still usable if you can get past the ads" into "almost one hundred per cent useless now, actually not worth the effort to de-bullshittify your search results", especially if you're looking for images. It's a tool for doing bullshit that people were already doing much easier and faster, thus massively increasing the amount of bullshit. The only consumer-useful uses I've seen of it as a consumer are niche art projects, usually projects that explore the limits of the tool itself like that one poetry book or the Infinite Art Machine; overall I'd say its impact at the Casual Random Person (me) level has been overwhelmingly negative. Also, the fact that so much AI turns out to be underpaid people in a warehouse in some country with no minimum wage and terrible labour protections is... not great. And the fact that it's often used as an excuse to try to find ways to underpay professionals ("you don't have to write it, just clean up what the AI came up with!") is also not great.
But there are real labour and product quality concerns with generative AI, and there's hysterical bullshit. And the whole "AI is magically destroying the planet via climate change but my four hour twitch streaming sesh isn't" thing is hysterical bullshit. The instant I see somebody make this stupid claim I put them in the same mental bucket as somebody complaining about AI not being "real art" -- a hatemobber hopping on the hype train of a new thing to hate and feel like an enlightened activist about when they haven't bothered to learn a fucking thing about the issue. And I just count my blessings that they fell in with this group instead of becoming a flat earther or something.
Text
Our Stance On Gen-AI
This year, for the first time, we've had a couple of reports from bidders that the FTH fanworks they received were produced using generative AI. For that reason, we've decided that it's important that we lay out a specific, concrete policy going forward.
Generative AI tools are not welcome here.
Non-exhaustive list of examples:
image generators like Imagen, Midjourney, and similar
video generators like Sora, Runway, and similar
LLMs like ChatGPT and similar
audio generators like ElevenLabs, MusicLM, and similar
Participants found to have used generative AI to produce a fanwork, in part or in whole, for their bidder(s) will be permanently banned from participating in future iterations of Fandom Trumps Hate.
Why?
We understand that there can be contentious debate around the use of generative AI, we know individual people have their own reasons for being in favor of it, and we recognize that many people may simply be unaware that these tools come with any negative impacts at all. Regardless, we are firm in our stance on this for the following (non-exhaustive) list of key reasons in no particular order:
negative, unregulated environmental impact
Over the years, you may have noticed that we’ve supported multiple environmental organizations doing important work to combat climate change, preserve wildlife, and advocate for renewable and sustainable energy policy changes. Generative AI tools produce a startling amount of e-waste, can require massive amounts of storage space and computational power, and are a (currently unregulated) drain on natural resources. Using these tools to produce a fanwork flies in the face of every environmental organization we have supported to date.
plagiarism and lack of artistic integrity
Most if not all generative AI models are trained on some amount of stolen work (across various mediums). As a result, any output generated by these models is at worst plagiarized and at best extremely derivative and unoriginal. In our opinion, using generative AI tools to produce a fanwork demonstrates a lack of care for your own craft, a lack of respect for the work of other creators, and a lack of respect for your bidder and your commitment to them.
undermining our community building impact
One of the best things to come out of the auction every year—we can't even call it a side benefit, because it's so central to us—is that bidders and creators form collaborative relationships which sometimes even turn into friendship. Using generative AI undermines that trust and collaboration.
undermining the value of participating as a creator
Bidders participate in Fandom Trumps Hate for the opportunity to prompt YOU to create a fanwork for them, in YOUR style with YOUR specific skill set. Any potential bidder is perfectly capable of dropping a prompt into a generative AI tool on their own time, if they wish. We hope all creators sign up with the aim to play a role more significant than “unnecessary middleman.”
In general, we try to be as flexible as we can in our policies to allow for the best experience possible for all Fandom Trumps Hate participants. This, however, is something we are not willing to be flexible on. We realize this may seem unusually rigid, but we ask that you trust we have given this serious consideration and respect that while we are willing to answer clarifying questions, we are not open to debate on this topic.
Note
genuinely curious but I don't know how to phrase this in a way that sounds less accusatory so please know I'm asking in good faith and am just bad at words
what are your thoughts on the environmental impact of generative ai? do you think the cost of all the cooling systems is worth the tasks generative ai performs? I've been wrestling with this, because while I feel like I can justify it at smaller scales, that would mean it isn't a publicly available tool, which I also feel uncomfortable with
the environmental impacts of genAI are almost always one of three things, both by their detractors and their boosters:
vastly overstated
stated correctly, but with a deceptive lack of context (i.e., giving numbers in watt-hours, or the amount of water 'used' for cooling, without necessary context like what comparable services use or what actually happens to that water)
assumed to be on track to grow constantly as genAI sees universal adoption across every industry
like, when water is used to cool a datacenter, that datacenter isn't just "a big building running chatgpt" -- datacenters are the backbone of the modern internet. now, i mean, all that said, the basic question here: no, i don't think it's a good tradeoff to be burning fossil fuels to power the magic 8ball. but asking that question in a vacuum (imo) elides a lot of the realities of power consumption in the global north by exceptionalizing genAI as opposed to, for example, video streaming, or online games. or, for that matter, for any number of other things.
so to me a lot of this stuff seems like very selective outrage in most cases, people working backwards from all the twitter artists on their dashboard hating midjourney to find an ethical reason why it is irredeemably evil.
& in the best, good-faith cases, it's taking at face value the claims of genAI companies and datacenter owners that the power usage will continue spiralling as the technology is integrated into every aspect of our lives. but to be blunt, i think it's a little naive to take these estimates seriously: these companies rely on their stock prices remaining high and attractive to investors, so they have enormous financial incentives not only to lie but to make financial decisions as if the universal adoption boom is just around the corner at all times. but there's no actual business plan! these companies are burning gigantic piles of money every day, because this is a bubble
so tldr: i don't think most things fossil fuels are burned for are 'worth it', but the response to that is a comprehensive climate politics and not an individualistic 'carbon footprint' approach, certainly not one that chooses chatgpt as its battleground. genAI uses a lot of power but at a rate currently comparable to other massively popular digital leisure products like fortnite or netflix -- forecasts of it massively increasing by several orders of magnitude are in my opinion unfounded and can mostly be traced back to people who have a direct financial stake in this being the case because their business model is an obvious boondoggle otherwise.
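The "context" point in the answer above can be made concrete: per-task energy figures only mean something next to a baseline in common units. A minimal sketch, where the numbers are placeholders for illustration rather than measured values:

```python
# Unit-conversion helpers for putting per-task energy figures in context.
# The figures passed in below are placeholders, not measurements.

def wh_to_kwh(wh: float) -> float:
    """Convert watt-hours to kilowatt-hours."""
    return wh / 1000.0

def compare(label_a: str, wh_a: float, label_b: str, wh_b: float) -> str:
    """Express one activity's energy use as a multiple of another's."""
    ratio = wh_a / wh_b
    return f"{label_a} uses {ratio:.1f}x the energy of {label_b}"

print(compare("one hour of video streaming", 75.0, "one chatbot query", 3.0))
# -> one hour of video streaming uses 25.0x the energy of one chatbot query
```

Swapping in sourced measurements for the placeholder watt-hour figures is exactly the kind of context the post argues is usually missing.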
Text
Green energy is in its heyday.
Renewable energy sources now account for 22% of the nation’s electricity, and solar has grown eightfold in the last decade. This spring in California, wind, water, and solar energy sources exceeded expectations, accounting for an average of 61.5 percent of the state's electricity demand across 52 days.
But green energy has a lithium problem. Lithium batteries control more than 90% of the global grid battery storage market.
That’s not just cell phones, laptops, electric toothbrushes, and tools. Scooters, e-bikes, hybrids, and electric vehicles all rely on rechargeable lithium batteries to get going.
Fortunately, this past week, Natron Energy launched its first-ever commercial-scale production of sodium-ion batteries in the U.S.
“Sodium-ion batteries offer a unique alternative to lithium-ion, with higher power, faster recharge, longer lifecycle and a completely safe and stable chemistry,” said Colin Wessells — Natron Founder and Co-CEO — at the kick-off event in Michigan.
The new sodium-ion batteries charge and discharge at rates 10 times faster than lithium-ion, with an estimated lifespan of 50,000 cycles.
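The 50,000-cycle rating can be put in rough perspective with back-of-envelope arithmetic; the daily duty cycle assumed below is an illustrative guess, not a Natron specification.

```python
# Back-of-envelope lifespan estimate from the article's 50,000-cycle rating.
# The duty cycle is an assumed illustrative figure, not a Natron specification.

RATED_CYCLES = 50_000      # cycle life quoted in the article
cycles_per_day = 4         # assumed daily charge/discharge cycles for grid storage
years = RATED_CYCLES / (cycles_per_day * 365)
print(f"about {years:.0f} years")  # -> about 34 years
```

Even at a heavy assumed duty cycle, the rated cycle life comfortably outlasts typical lithium-ion grid batteries, which is the point the article is making.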
Wessells said that using sodium as a primary mineral alternative eliminates industry-wide issues of worker negligence, geopolitical disruption, and the “questionable environmental impacts” inextricably linked to lithium mining.
“The electrification of our economy is dependent on the development and production of new, innovative energy storage solutions,” Wessells said.
Why are sodium batteries a better alternative to lithium?
The birth and death cycle of lithium is shadowed in environmental destruction. The process of extracting lithium pollutes the water, air, and soil, and when it’s eventually discarded, the flammable batteries are prone to bursting into flames and burning out in landfills.
There’s also a human cost. Lithium-ion materials like cobalt and nickel are not only harder to source and procure, but their supply chains are also overwhelmingly attributed to hazardous working conditions and child labor law violations.
Sodium, on the other hand, is estimated to be 1,000 times more abundant in the earth’s crust than lithium.
“Unlike lithium, sodium can be produced from an abundant material: salt,” engineer Casey Crownhart wrote in the MIT Technology Review. “Because the raw ingredients are cheap and widely available, there’s potential for sodium-ion batteries to be significantly less expensive than their lithium-ion counterparts if more companies start making more of them.”
What will these batteries be used for?
Right now, Natron has its focus set on AI models and data storage centers, which consume hefty amounts of energy. In 2023, the MIT Technology Review reported that one AI model can emit more than 626,000 pounds of carbon dioxide equivalent.
“We expect our battery solutions will be used to power the explosive growth in data centers used for Artificial Intelligence,” said Wendell Brooks, co-CEO of Natron.
“With the start of commercial-scale production here in Michigan, we are well-positioned to capitalize on the growing demand for efficient, safe, and reliable battery energy storage.”
The fast-charging energy alternative also has limitless potential on a consumer level, and Natron is eyeing telecommunications and EV fast-charging once it begins servicing AI data storage centers in June.
On a larger scale, sodium-ion batteries could radically change the manufacturing and production sectors — from housing energy to lower electricity costs in warehouses, to charging backup stations and powering electric vehicles, trucks, forklifts, and so on.
“I founded Natron because we saw climate change as the defining problem of our time,” Wessells said. “We believe batteries have a role to play.”
-via GoodGoodGood, May 3, 2024
--
Note: I wanted to make sure this was legit (scientifically and in general), and I'm happy to report that it really is! x, x, x, x
#batteries#lithium#lithium ion batteries#lithium battery#sodium#clean energy#energy storage#electrochemistry#lithium mining#pollution#human rights#displacement#forced labor#child labor#mining#good news#hope
Text
PROTOCOL Pairing: Doctor Zayne x Nurse Reader
author note: love and deepspace is my addiction guys LOL anyways enjoy!!
wc: 3,865
chapter 1 | chapter 2
✦•┈๑⋅⋯ ⋯⋅๑┈•✦
Akso Hospital looms in the heart of Linkon like a monument of glass, metal, and unrelenting precision. Multi-tiered, climate-controlled, and fully integrated with city-wide telemetry systems, it's known across the cosmos for housing the most advanced medical AI and the most exacting surgeons in the Union.
Inside its Observation Deck on Level 4, the air hums with quiet purpose. Disinfectant and filtered oxygen mix in sterile harmony. The floors are polished to a mirrored sheen, the walls pulse faintly with embedded biometrics, and translucent holoscreens scroll real-time vitals, arterial scans, and surgical priority tags in muted color-coded displays.
You’ve been on the floor since 0500. First to check vitals. First to inventory meds. First to get snapped at.
Doctor Zayne Li is already here—of course he is. The man practically lives in the operating theatres. Standing behind the panoramic glass that overlooks Surgery Bay Delta, he looks like something carved out of discipline and frost. His pristine long coat hangs perfectly from squared shoulders, gloves tucked with methodical precision, silver-framed glasses reflecting faint readouts from the transparent interface hovering before him.
He’s the hospital’s prized cardiovascular surgeon. The Zayne Li—graduated top of his class from Astral Medica, youngest surgeon ever certified for off-planet cardiac reconstruction, published more than any other specialist in the central systems under 35. There's even a rumor he once performed a dual-heart transplant in an emergency gravity failure. Probably true.
He’s a legend. A genius.
And an ass.
He’s never once smiled at you. Never once said thank you. With other staff, he’s distant but civil. With you, he’s something else entirely: cold, strict, and unrelentingly sharp. If you breathe wrong, he notices. If you hesitate, he corrects. If you do everything by protocol?
He still finds something to critique.
"Vitals on Bed 12 were late," he said this morning without even turning his head. No greeting. Just judgment, clean and surgical.
"They weren’t late. I had to reset the cuff."
"You should anticipate equipment failures. That’s part of the job."
And that was it. No acknowledgment of the three critical patients you’d managed in that hour. No recognition. No room for explanation. He turned away before you could blink, his coat slicing behind him like punctuation.
You don’t like him.
You don’t disrespect him—because you're a professional, and because he's earned his reputation a hundred times over. But you don’t like how he talks to you like you’re a glitch in the system. Like you’re a deviation he hasn’t figured out how to reprogram.
You’ve worked under strict doctors before. But Zayne is different. He doesn’t push to challenge you. He pushes to see if you’ll break.
And the worst part?
You haven’t.
Which only seems to piss him off more.
You watch him now from the break table near the edge of the deck, your synth-coffee going tepid between your hands. He’s reviewing scans on a projection screen—high-res, rotating 3D models of a degenerating bio-synthetic valve. His eyes, a pale hazel-green, flick across the data with sharp focus. His arms are folded behind his back, posture perfect, expression unreadable.
He hasn’t noticed you.
Correction: he has, and he’s pointedly ignoring you.
Typical.
You take another sip of coffee, more bitter than before. You could head back to inventory. You could restock surgical trays. But you don’t.
Because part of you refuses to give him the satisfaction of leaving first.
So you stay.
And so does he.
Two professionals. Two adversaries. One cold war fought in clipped words, clinical tension, and overlapping silence.
And the day hasn’t even started yet.
The surgical light beams down like a second sun, flooding the operating theatre in harsh, clinical brightness. It washes the color out of everything—blood, skin, even breath—until all that remains is precision.
Doctor Zayne Li stands at the head of the table, gloved hands elevated and scrubbed raw, sleeves of his sterile gown clinging tight around his forearms. His eyes flick up to the vitals screen, then down to the patient’s exposed chest.
“Vitals?” he asks.
You answer without hesitation. “Steady. HR 82, BP 96/63, oxygen at 99%, no irregularities.”
His silence is your only cue to proceed.
You hand him the scalpel, handle first, exactly as protocol demands. He doesn’t look at you when he takes it—but his fingers graze yours, cold through double-layered gloves, and the contact still sends a tiny jolt up your arm. Annoying.
He makes the incision without fanfare, clean and deliberate, the kind of cut that only comes from years of obsessive mastery. The kind that still makes your gut tighten to watch.
You monitor the instruments, anticipating without crowding him. You’ve been assisting in his surgeries for weeks now. You’ve learned when he prefers the microclamp versus the stabilizer. You’ve memorized the sequence of his suturing pattern. You know when to speak and when not to. Still, it’s never enough.
“Retractor,” he says flatly.
You’re already reaching.
“Not that one.”
Your hand freezes mid-motion.
His tone is ice. “Cardiac thoracic, not abdominal. Are you even awake?”
A hot flush rises behind your ears. He doesn’t yell—Zayne never yells—but his disappointment cuts deeper than a scalpel. You grit your teeth and correct the tray.
“Cardiac thoracic,” you repeat. “Understood.”
No response. Just the soft click of metal as he inserts the retractor into the sternotomy.
The rest of the operation is silence and beeping. You suction blood before he asks. He cauterizes without hesitation. The damaged aortic valve is removed, replaced with a synthetic graft designed for lunar-pressure tolerance. It’s delicate work—millimeter adjustments, microscopic thread. One wrong move could tear the tissue.
Zayne doesn’t shake. Doesn’t blink. He’s terrifyingly still, even as alarms spike and the patient's BP dips for three agonizing seconds.
“Clamp. Now,” he says.
You pass it instantly. He seals the nicked vessel, stabilizes the pressure, and the monitor quiets.
You exhale—but not too loudly. Not until the final suture is tied, the chest closed, and the drape removed. Then, and only then, does he speak again.
“Clean,” he says, already walking away. “Prepare a report for Post-Op within the hour.”
You stare at his retreating back, fists clenched at your sides. No thank you. No good work. Just a cold command and disappearing footsteps.
The Diagnostic Lab is silent, save for the low hum of scanners and the occasional pulse of a vitascan completing a loop. The walls are steel-paneled with matte black inlays, lit only by the soft glow of holographic interfaces. Ambient light drifts in from a side wall of glass, showing the icy curve of Europa in the distance, half-shadowed in space.
You stand alone at a curved diagnostics console, sleeves rolled just above your elbows, eyes locked on the 3D hologram spinning in front of you. The synthetic heart pulses slowly, arteries reconstructed with precise synthetic grafts. The valve—a platinum-carbon composite—is functioning perfectly. You check the scan tags, patient ID, op codes, and log the post-op outcome.
Everything’s clean. Correct.
Or so you thought.
You barely register the soft hiss of the door opening behind you until the room shifts. Not in volume, but in pressure—like gravity suddenly increased by one degree.
You don’t turn. You don’t have to.
Zayne.
“Line 12 in the file log,” he says, voice low, composed, and close. Too close.
You blink at the screen. “What about it?”
“You mislabeled the scan entry. That’s a formatting violation.”
Your heart rate ticks up. You straighten your spine.
“No,” you reply calmly, “I used trauma tags from pre-op logs. They cross-reference with the emergency surgical queue.”
His footsteps approach—measured, deliberate—and stop directly behind you. You sense the heat of his body before anything else. He’s not touching you, but he’s close enough that you feel him standing there, like a charged wire humming at your back.
“You adapted a tag system that’s not recognized by this wing’s software. If these were pushed to central review, they’d get flagged. Wasting time.” His tone is even. Too even.
Your hands rest on the edge of the console. You force your shoulders not to tense.
“I made a call based on the context. It was logical.”
“You’re not here to improvise logic,” he replies, stepping even closer.
You feel the air change as he raises his arm, reaching past you—his coat sleeve brushing the side of your bicep lightly, the barest whisper of contact. His hand moves with surgical confidence as he taps the air beside your own, opening the tag metadata on the scan you just logged. His fingers are long, gloved, deliberate in motion.
“This,” he says, highlighting a code block, “should have been labeled with an ICU procedural tag, not pre-op trauma shorthand.”
You turn your head slightly, and there he is. Close. Towering. His jaw is tight, clean-shaven except for the faintest trace of stubble catching the edge of the light. There’s a tiredness around his eyes—subtle, buried deep—but he doesn’t blink. Doesn’t waver. He’s so still it’s unnerving.
He doesn’t seem to notice—or care—how near he is.
You, however, are all too aware.
Your voice tightens. “Is there a reason you couldn’t point this out without standing over me like I’m in your way?”
Zayne doesn’t flinch. “If I stood ten feet back, you’d still argue with me.”
You bristle. “Because I know what I’m doing.”
“And yet,” he replies coolly, “I’m the one correcting your data.”
That sting digs deep. You pull in a breath, clenching your fists subtly against the side of the console. You want to yell. But you won’t. Because he wants control, and you won’t give him that too.
He lowers his hand slowly, retracting from the display, and finally—finally—steps back. Just enough to let you breathe again.
But the tension? It lingers like static.
“I’ll correct the tag,” you say flatly.
Zayne nods once, then turns to go.
But at the doorway, he stops.
Without looking back, he adds, “You're capable. That’s why I expect better.”
Then he walks out.
Leaving you in the cold hum of the diagnostic lab, your pulse racing, your thoughts a snarl of frustration and something else—unsettling and electric—curling low in your gut.
You don’t know what that something is.
But you’re starting to suspect it won’t go away quietly.
You sit three seats from the end of the long chrome conference table, back straight, shoulders tight, fingers wrapped just a little too hard around your datapad.
The Surgical Briefing Room is too bright. It always is. Cold light from the ceiling plates bounces off polished surfaces, glass walls, and the brushed steel of the central console. A hologram hovers in the center of the room, slowly spinning: the reconstructed heart from this morning’s procedure, arteries lit in pulsing red and cyan.
You can feel sweat prickling at the nape of your neck under your uniform collar. Your scrubs are crisp, your hair pinned back precisely, your notes immaculate—but none of that matters when Dr. Myles Hanron speaks.
You’ve only spoken to him a few times. He’s been at Bell for twenty years. Stern. Respected. Impossible to argue with. Today, he's reviewing the recent cardiovascular procedure—the one you assisted under Zayne’s lead.
And something is off. He’s frowning at the scan display.
Then he looks at you.
“Explain this inconsistency in the anticoagulation log.”
You glance up, already feeling the slow roll of nausea in your stomach.
Your voice comes out measured, but your throat is dry. “I followed the calibrated dosage curve based on intra-op vitals and confirmed it against the automated log.”
Hanron raises a brow, his tablet casting a soft reflection on the lenses of his glasses. “Then you followed it wrong.”
The words hit like a slap across your face.
You feel the blood drain from your cheeks. Something sharp twists in your stomach.
“I—” you begin, mouth parting. You shift slightly in your seat, fingers tightening on the datapad in your lap, legs crossed too stiffly. Your body wants to shrink, but you force yourself not to move.
“Don’t interrupt,” Hanron snaps, before you can finish.
A few heads turn in your direction. One of the interns frowns, glancing at you with wide eyes. You stare straight ahead, trying to keep your breathing even, your spine straight, your jaw from visibly clenching.
Hanron paces two steps in front of the display. “You logged a 0.3 ml deviation on a patient with a known history of arrhythmic episodes. Are you unfamiliar with the case history? Or did you just not check?”
“I did check,” you say, quieter, trying to keep your tone professional. Your hands are starting to sweat. “The scan flagged it within range. I wasn’t improvising—”
“Then how did this discrepancy occur?” he presses. “Or are you suggesting the system is at fault?”
You flinch, slightly. You open your mouth to say something—to explain the terminal sync issue you noticed during the last vitals run—but your voice catches.
You’re a nurse.
You’re new.
So you sit there, every instinct in your body screaming to speak, to defend yourself—but you swallow it down.
You stare down at your datapad, the screen now blurred from the way your vision’s tunneling. You clench your teeth until your jaw aches.
You can’t speak up. Not without making it worse.
“Let this be a reminder,” Hanron says, turning his back to you as he scrolls through another projection, “that there is no room for guesswork in surgical prep. Especially not from auxiliary staff who feel the need to act above their training.”
Auxiliary.
The word burns.
You feel heat crawl up your chest. Your hands are shaking slightly. You grip your knees under the table to hide it.
And then—
“I signed off on that dosage.”
Zayne’s voice cuts clean through the air like a cold wire.
You turn your head sharply toward the door. He’s standing in the entrance, posture military-straight, coat half-unbuttoned, gloves tucked into his belt. His presence shifts the atmosphere instantly.
His black hair is perfectly combed back, not a strand out of place, glinting faintly under the sterile overhead lights. His silver-framed glasses sit low on the bridge of his nose, catching a brief reflection from the room’s data panels, but not enough to hide the expression in his eyes.
Hazel-green. Pale and piercing.
He’s not looking at you. His gaze is fixed past you, locked on Hanron with unflinching intensity—like the man has just committed a fundamental breach of logic.
There’s not a wrinkle in his coat. Not a single misaligned button or loose thread. Even the gloves at his belt look placed, not shoved there. Zayne is, as always, polished. Meticulous. Icy.
But today—his expression is different.
His jaw is set tighter than usual. The faint crease between his brows is deeper. He looks like a man on the verge of unsheathing a scalpel, not for surgery—but for precision retaliation.
And when he speaks, his voice is calm. Controlled. Flat. His face unreadable.
“If there’s a problem with it, you can take it up with me.”
The silence in the room is instant. Tense. Airless.
Hanron turns slowly. “Doctor Zayne, this isn’t about—”
“It is,” Zayne replies, tone even sharper. “You’re implying a clinical error in my procedure. If you’re accusing her, then you’re accusing me. So let’s be clear.”
You can barely process it. Your heart is thudding, ears buzzing from the sudden shift in tone, from the weight of Zayne’s voice cutting through the tension like a scalpel. You look at him — really look — and for once, he isn’t focused on numbers or reports.
He’s solely focused on Hanron. And he is furious — not loudly, but in that cold, calculated way of his: his voice doesn’t rise, his jaw locks, and his words slice like ice.
“She followed my instruction under direct supervision,” he says, voice steady. “The variance was intentional. Based on patient history and real-time rhythm response.”
He pauses just long enough to let the words land.
“It was correct.”
Hanron doesn’t respond right away.
His lips press into a thin line, face unreadable, and he shifts back a step—visibly checking himself in the silence Zayne has carved into the room.
“We’ll review the surgical logs,” Hanron mutters at last, voice clipped, his authority retreating behind procedure.
Zayne nods once. “Please do.”
Then, without fanfare, without another word, he steps forward—not toward the exit, but toward the table.
You track him with your eyes, unable to help it.
The low hum of the room resumes, like the air had been holding its breath. No one speaks. A few nurses drop their eyes back to their datapads. Pages turn. Screens flicker.
But you’re frozen in place, shoulders still tight, hands clenched in your lap to keep them from visibly shaking.
Zayne rounds the end of the table, his boots clicking softly against the metal flooring. His long coat sways with his movements, falling neatly behind him as he pulls out the seat directly across from you.
And sits.
Not at the head of the table. Not in some corner seat to observe.
Directly across from you.
He adjusts his glasses with two fingers, expression cool again, almost as if nothing happened. As if he didn’t just dress down a senior doctor in front of the entire room on your behalf.
He doesn’t look at you.
He opens the file on his datapad, stylus poised, reviewing the surgical results like this is any other debrief.
But you’re still staring.
You study the slight tension in his shoulders, the stillness in his hands, the way his eyes don’t drift—not toward Hanron, not toward you—locked entirely on the data as if that can contain whatever just happened.
You should say something.
Thank you.
But the words get stuck in your throat.
Your pulse is still unsteady, confusion mixing with the low thrum of heat behind your ribs. He didn’t need to defend you. He never steps into conflict like that, especially not for others—especially not for you.
You glance away first, eyes back on your screen, unable to ignore the twist in your gut.
The room empties, but you stay.
The echo of voices fades out with the hiss of the sliding doors. Just a few minutes ago, the surgical debrief room was bright with tension—every overhead light too sharp, the air too thin, the hum of holopanels and datapads a constant static in your head.
Now, it’s quiet. Still.
You sit for a moment longer, fingers resting on your lap, knuckles tight, back straight even though your entire body wants to collapse inward. You’re still warm from the flush of embarrassment, your pulse still flickering behind your ears.
Dr. Hanron’s words sting less now, dulled by the cool aftershock of what Zayne did.
He defended you.
You hadn’t expected it. Not from him.
You replay it in your head—his voice cutting in, his posture like stone, his eyes locked on Hanron like a scalpel ready to slice. He didn’t raise his voice. He didn’t even look at you.
But you felt it.
You felt the impact of what it meant.
And now, as you sit in the empty conference room—white walls, chrome-edged table, sterile quiet—you’re left with one burning thought:
You have to say something.
You rise slowly, brushing your palms down your thighs to wipe off the sweat that lingers there. You hesitate at the doorway. Your reflection stares back at you in the glass panel—eyes still a little wide, jaw tight, posture just a bit too stiff.
He didn’t have to defend you, but he did.
And that matters.
You step into the hallway.
It’s long and narrow, glowing with soft white overhead lights and lined with clear glass panels that reflect fragments of your movement as you walk. The hum of the ventilation system buzzes low and steady—comforting in its monotony. The air smells of antiseptic and the faint trace of ozone from high-oxygen surgical wards.
You spot him ahead, already halfway down the corridor, walking with purpose—long coat swaying slightly with each step, back straight, shoulders squared. Always composed. Always fast.
You hesitate. Your steps slow and your throat tightens.
You want to turn back, to let it go, to pretend it was just professional courtesy. Nothing more. Nothing personal.
But you can’t.
Not this time.
You quicken your pace.
“Doctor Zayne!”
The name catches in the air, too loud in the quiet hallway. You flinch, just a little—but he stops.
You break into a small jog to catch up, boots tapping sharply against the tile. Your breath catches as you reach him.
Zayne turns toward you, expression unreadable, brows slightly furrowed in that ever-present, analytical way of his. The glow of the ceiling lights reflects off his silver-framed glasses, casting sharp highlights along the edges of his jaw.
He doesn’t say anything. Just waits.
You stop a foot away, heart thudding. You don’t know what you expected—maybe something colder. Maybe for him to ignore you entirely.
You swallow hard, eyes flicking up to meet his.
“I just…” Your voice is quieter now. Careful. “I wanted to say thank you.”
He doesn’t respond immediately. His gaze is steady. Measured.
“I don’t tolerate incompetence,” he says calmly. “That includes false accusations.”
You blink, taken off guard by the directness. It’s not warm. Not even particularly kind. But coming from him, it’s almost intimate.
Still, you can’t help yourself. “That wasn’t really about incompetence.”
“No,” he admits. “It wasn’t.”
The hallway feels smaller now, quieter. He’s watching you in full. Not scanning you like a chart, not calculating — watching. Still. Focused.
You nod slowly, grounding yourself in the moment. “Still. I needed to say it. Thank you.”
You’re suddenly aware of everything—of the warmth in your cheeks, of the way your hands twist at your sides, of how tall he stands compared to you, even when he’s not trying to intimidate.
And he isn’t. Not now.
If anything, he looks… still.
Not soft. Never that. But something quieter. Less armored.
“You handled yourself better than most would have,” he says after a moment. “Even if I hadn’t said anything, you didn’t lose control.”
“I didn’t feel in control,” you admit, a breath of nervous laughter escaping. “I was two seconds from either crying or throwing my datapad.”
That earns you something surprising—just the faintest twitch at the corner of his mouth. Almost a smile. But not quite.
“Neither would’ve been productive,” he says.
You roll your eyes slightly. “Thanks, Doctor Efficiency.”
His glasses catch the light again, but his expression doesn’t change.
You glance past him, down the corridor. “I should get back to my rotation.”
He nods once. “I’ll see you in the lab.”
You pause.
Then—because you don’t know what else to do—you offer a small, genuine smile.
“I’ll be there.”
As you turn to leave, you feel his eyes on your back.
Ellipsus Digest: March 18
Each week (or so), we'll highlight the relevant (and sometimes rage-inducing) news adjacent to writing and freedom of expression.
This week: AI continues its hostile takeover of creative labor, Spain takes a stand against digital sludge, and the usual suspects in the U.S. are hard at work memory-holing reality in ways both dystopian and deeply unserious.
ChatGPT firm reveals AI model that is “good at creative writing” (The Guardian)
... Those quotes are working hard.
OpenAI (ChatGPT) announced a new AI model trained to emulate creative writing—at least, according to founder Sam Altman: “This is the first time i have been really struck by something written by AI.” But with growing concerns over unethically scraped training data and the continued dilution of human voices, writers are asking… why?
Spoiler: the result is yet another model that mimics the aesthetics of creativity while replacing the act of creation with something that exists primarily to generate profit for OpenAI and its (many) partners—at the expense of authors whose work has been chewed up, swallowed, and regurgitated into Silicon Valley slop.
Spain to impose massive fines for not labeling AI-generated content (Reuters)
But while big tech continues to accelerate AI’s encroachment on creative industries, Spain (in stark contrast to the U.S.) has drawn a line: In an attempt to curb misinformation and protect human labor, all AI-generated content must be labeled, or companies will face massive fines. As the internet is flooded with AI-written text and AI-generated art, the bill could be the first of many attempts to curb the unchecked spread of slop.
Besos, España 💋
These words are disappearing in the new Trump administration (NYT)
Project 2025 is moving right along—alongside dismantling policies and purging government employees, the stage is set for a systemic erasure of language (and reality). Reports show that officials plan to wipe government websites of references to LGBTQ+, BIPOC, women, and other communities—words like minority, gender, Black, racism, victim, sexuality, climate crisis, discrimination, and women have been flagged for removal, alongside resources for marginalized groups and DEI initiatives.
It’s a concerted effort at creating an infrastructure where discrimination becomes easier… because the words to fight it no longer officially exist. (Federally funded educational institutions, research grants, and historical archives will continue to be affected—a broader, more insidious continuation of book bans, carried out at the level of national record-keeping.) Doubleplusungood, indeed.
Pete Hegseth’s banned images of “Enola Gay” plane in DEI crackdown (The Daily Beast)
Fox News pundit-turned-Secretary of Defense-slash-perpetual-drunk-uncle Pete Hegseth has a new target: banning educational materials featuring the Enola Gay, the plane that dropped the atomic bomb on Hiroshima. His reasoning: that its inclusion in DEI programs constitutes "woke revisionism." If a nuke isn’t safe from censorship, what is?
The data hoarders resisting Trump’s purge (The New Yorker)
Things are a little shit, sure. But even in the ungoodest of times, there are people unwilling to go down without a fight.
Archivists, librarians, and internet people are bracing for the widespread censorship of government records and content. With the Trump admin aiming to erase documentation of progressive policies and minority protections, a decentralized network is working to preserve at-risk information in a galvanized push against erasure, refusing to let silence win.
Let us know if you find something other writers should know about (or join our Discord and share it there!). Until next week, - The Ellipsus Team xo