#artificial intelligence regulation
trendynewsnow · 22 days ago
Text
EU Governments' Fragility Threatens Unity, Warns European Parliament President Roberta Metsola
Roberta Metsola, the President of the European Parliament, expressed grave concerns on Thursday regarding the fragility of EU governments, which she believes is undermining the bloc’s unity as it strives to enhance its economic competitiveness relative to the United States. Metsola, a member of the centre-right…
nationallawreview · 2 months ago
Text
California Poised to Further Regulate Artificial Intelligence by Focusing on Safety
Looking to cement the state’s place at the forefront of artificial intelligence (AI) regulation in the United States, the California State Assembly passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), also referred to as the AI Safety Act, on August 28, 2024. The measure awaits the signature of Governor Gavin Newsom. This development comes effectively on…
the-irreverend · 2 years ago
Text
What better way to vent about low-effort AI-generated artwork than with a low-effort shitpost?
frank-olivier · 21 days ago
Text
The Future of Justice: Navigating the Intersection of AI, Judges, and Human Oversight
One of the main benefits of AI in the justice system is its ability to analyze vast amounts of data and identify patterns that human judges may not notice. For example, AI-powered tools deployed in the U.S. justice system have been credited with reducing misjudgments by identifying potential biases in the data and making more accurate recommendations.
However, the use of AI in the justice system also raises significant concerns about the role of human judges and the need for oversight. As AI takes on an increasingly important role in decision-making, judges must find the balance between trusting AI and exercising their own judgement. This requires a deep understanding of the technology and its limitations, as well as the ability to critically evaluate the recommendations provided by AI.
The European Union's approach to AI in justice provides a valuable framework for other countries to follow. The EU's framework emphasizes the need for human oversight and accountability and recognizes that AI is a tool that should support judges, not replace them. This approach is reflected in the EU's General Data Protection Regulation (GDPR), whose rules on automated decision-making effectively require AI systems to be transparent, explainable, and accountable.
The use of AI in the justice system also comes with its pitfalls. One of the biggest concerns is the possibility of bias in AI-generated recommendations. When AI is trained with skewed data, it can perpetuate and even reinforce existing biases, leading to unfair outcomes. For example, a study by the American Civil Liberties Union found that AI-powered facial recognition systems are more likely to misidentify people of color than white people.
To address these concerns, it is essential to develop and implement robust oversight mechanisms to ensure that AI systems are transparent, explainable and accountable. This includes conducting regular audits and testing of AI systems and providing clear guidelines and regulations for the use of AI in the justice system.
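What such an audit might look like in practice can be sketched in a few lines. The following is a minimal illustration, not any jurisdiction's actual protocol: the decision log, its column names (group, predicted_high_risk, reoffended), and the 0.05 tolerance are all hypothetical assumptions.

```python
import pandas as pd

# Hypothetical log of a risk-assessment tool's past recommendations.
# Columns (all assumed): "group" - demographic group, "predicted_high_risk" -
# the tool's output (0/1), "reoffended" - the observed outcome (0/1).
df = pd.read_csv("risk_tool_decisions.csv")

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of people who did not reoffend but were still flagged high-risk."""
    negatives = sub[sub["reoffended"] == 0]
    return (negatives["predicted_high_risk"] == 1).mean()

# Compare each group's false-positive rate against the overall rate.
overall = false_positive_rate(df)
fpr_by_group = {g: false_positive_rate(sub) for g, sub in df.groupby("group")}

for group, fpr in fpr_by_group.items():
    print(f"{group}: FPR={fpr:.3f} (overall {overall:.3f})")

# Flag the system for review if any group diverges beyond a chosen tolerance.
if max(abs(fpr - overall) for fpr in fpr_by_group.values()) > 0.05:
    print("Audit flag: disparate false-positive rates across groups.")
```

A disparity flag like this is only a starting point; it tells auditors where to look, not what caused the skew.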
In addition to oversight mechanisms, it is also important to develop and implement education and training programs for judges and other justice professionals. This will enable them to understand the capabilities and limitations of AI, as well as the potential risks and challenges associated with its use. By providing judges with the necessary skills and knowledge, we can ensure that AI is used in a way that supports judges and enhances the fairness and accountability of the justice system.
Human Centric AI - Ethics, Regulation, and Safety (YouTube, Vilnius University Faculty of Law, October 2024)
Friday, November 1, 2024
cbirt · 1 year ago
Link
A team of researchers from the University of Chinese Academy of Sciences, China, and collaborators developed GeneCompass, one of the first foundation models of its kind to encompass knowledge across a diverse array of species, having been trained on over 120 million single-cell transcriptomes derived from the genomes of mice and humans. It is a self-supervised model: during pre-training, it retrieves information from four types of biological datasets in the form of ‘prior knowledge’ and integrates it. It has outperformed several state-of-the-art models in single-species studies and can also open new avenues for studies across combinations of species beyond humans and mice. This model can potentially contribute to discovering key regulators that determine cell fate and to identifying promising target candidates in drug discovery and development.
Decoding the universal regulatory mechanisms that dictate gene expression across a diverse set of organisms is essential for accelerating clinical research and expanding our knowledge of basic and crucial life processes. Traditional research methodologies and existing deep-learning models have considered individual model organisms separately, resulting in a dearth of integrated knowledge of features observed across different cell types in a variety of species. The development of GeneCompass was made possible by combining recent advances in deep-learning (DL) methods and single-cell sequencing methods.
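The post gives no implementation details, but the core recipe of self-supervised pretraining with injected prior knowledge can be sketched. Below is a toy PyTorch illustration: the masked-expression objective, the embedding scheme, the dimensions, and every name here are assumptions for exposition, not GeneCompass's actual architecture.

```python
import torch
import torch.nn as nn

class PriorKnowledgeCellEncoder(nn.Module):
    """Toy sketch: embed each gene together with its expression value plus
    several 'prior knowledge' embeddings, then train a transformer to
    reconstruct masked expression values (self-supervised)."""

    def __init__(self, n_genes=20000, d_model=256, n_knowledge_sources=4):
        super().__init__()
        self.gene_emb = nn.Embedding(n_genes, d_model)
        self.expr_proj = nn.Linear(1, d_model)  # continuous expression value
        # One learned table per prior-knowledge source (four assumed here).
        self.knowledge_embs = nn.ModuleList(
            [nn.Embedding(n_genes, d_model) for _ in range(n_knowledge_sources)]
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, 1)  # predict expression per gene

    def forward(self, gene_ids, expr, mask):
        # Hide expression at masked positions; the model must infer it from
        # context and from the injected prior knowledge.
        expr_in = expr.masked_fill(mask, 0.0)
        x = self.gene_emb(gene_ids) + self.expr_proj(expr_in.unsqueeze(-1))
        for emb in self.knowledge_embs:
            x = x + emb(gene_ids)  # inject one prior-knowledge source
        return self.head(self.encoder(x)).squeeze(-1)

# One self-supervised pretraining step on a tiny random batch of cells.
model = PriorKnowledgeCellEncoder()
gene_ids = torch.randint(0, 20000, (2, 128))   # 2 cells, 128 genes each
expr = torch.rand(2, 128)                      # normalized expression
mask = torch.rand(2, 128) < 0.15               # mask 15% of positions
pred = model(gene_ids, expr, mask)
loss = ((pred - expr)[mask] ** 2).mean()       # MSE on masked positions only
loss.backward()
```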
Continue Reading
fashionlandscapeblog · 2 years ago
Text
Just listened to Kurt Cobain singing Soundgarden's Black Hole Sun, generated by A.I., on Rick Beato's channel, and I cried, because it sounded 100% like him. There's something so morally wrong about this: the massive negative media implications, including fake news that can stir up society; big record labels using A.I. to generate the voices of dead and living musicians without paying them a cent; or even big streaming services like Spotify (which already hold a vile monopoly) creating A.I. music without properly compensating artists (they barely pay them now). A.I. is not a negative thing per se, but new technology needs laws and regulations that protect people, and those always arrive too late, after the technology has already had catastrophic consequences for the population. Why? Because lawmakers are always a bunch of ancient dinosaurs who do not understand technology and its effects. We need young people making laws, and urgently.
continuations · 1 year ago
Text
AI Safety Between Scylla and Charybdis and an Unpopular Way Forward
I am unabashedly a technology optimist. For me, however, that means making choices for how we will get the best out of technology for the good of humanity, while limiting its negative effects. With technology becoming ever more powerful there is a huge premium on getting this right as the downsides now include existential risk.
Let me state upfront that I am super excited about progress in AI and what it can eventually do for humanity if we get this right. We could be building the capacity to turn Earth into a kind of garden of Eden, where we get out of the current low energy trap and live in a World After Capital.
At the same time there are serious ways of getting this wrong, which led me to write a few posts about AI risks earlier this year. Since then the AI safety debate has become more heated with a fair bit of low-rung tribalism thrown into the mix. To get a glimpse of this one merely needs to look at the wide range of reactions to the White House Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. This post is my attempt to point out what I consider to be serious flaws in the thinking of two major camps on AI safety and to mention an unpopular way forward.
First, let’s talk about the “AI safety is for wimps” camp, which comes in two forms. One is the happy-go-lucky view represented by Marc Andreessen’s “Techno-Optimist Manifesto” and also his preceding Tweet thread. This view dismisses critics who dare to ask social or safety questions as luddites and shills.
So what’s the problem with this view? Dismissing AI risks doesn’t actually make them go away. And it is extremely clear that at this moment in time we are not really set up to deal with the problems. On the structural risk side we are already at super extended income and wealth inequality. And the recent AI advances have already been shown to further accelerate this discrepancy.
On the existential risk side, there is recent work by Kevin Esvelt et al. showing how LLMs can broaden access to pandemic agents, and by Jeffrey Ladish et al. demonstrating how cheap it is to remove safety training from an open source model with published weights. This type of research clearly points out that as open source models rapidly become more powerful, they can be leveraged for very bad things, and that it continues to be super easy to strip away the safeguards that people claim can be built into open source models.
This is a real problem. And people like myself, who have strongly favored permissionless innovation, would do well to acknowledge it and figure out how to deal with it. I have a proposal for how to do that below.
But there is one intellectually consistent way to continue full steam ahead that is worth mentioning. Marc Andreessen cites Nick Land as an inspiration for his views. Land in Meltdown wrote the memorable line “Nothing human makes it out of the near-future”. Embracing AI as a path to a post-human future is the view embraced by the e/acc movement. Here AI risks aren’t so much dismissed as simply accepted as the cost of progress. My misgiving with this view is that I love humanity and believe we should do our utmost to preserve it (my next book which I have started to work on will have a lot more to say about this).
Second, let’s consider the “We need AI safety regulation now” camp, which again has two subtypes. One is “let regulated companies carry on” and the other is “stop everything now.” Again both of these have deep problems.
The idea that we can simply let companies carry on with some relatively mild regulation suffers from three major deficiencies. First, this has the risk of leading us down the path toward highly concentrated market power and we have seen the problems of this in tech again and again (it has been a long standing topic on my blog). For AI market power will be particularly pernicious because this technology will eventually power everything around us and so handing control to a few corporations is a bad idea. Second, the incentives of for-profit companies aren’t easily aligned with safety (and yes, I include OpenAI here even though it has in theory capped investor returns but also keeps raising money at ever higher valuations, so what’s the point?).
But there is an even deeper third deficiency of this approach, and it is best illustrated by the second subtype, which essentially wants to stop all progress. At its most extreme this is a Ted Kaczynski anti-technology vision. The problem with this, of course, is that it requires equipping governments with extraordinary power to prevent open source / broadly accessible technology from being developed. And this is an incredible, largely unacknowledged implication of much of the current pro-regulation camp.
Let me just give a couple of examples. It has long been argued that code is speech and hence protected by First Amendment rights. We can of course go back and revisit what protections should be applicable to “code as speech,” but the proponents of the “let regulated companies go ahead with closed source AI” position don’t seem to acknowledge that they are effectively asking governments to suppress what can be published as open source (otherwise, why bother at all?). Over time government would have to regulate technology development ever harder to sustain this type of regulated approach. Faster chips? Government says who can buy them. New algorithms? Government says who can access them. And so on. Sure, we have done this in some areas before, such as nuclear bomb research, but those were narrow fields, whereas AI is a general purpose technology that affects all of computation.
So this is the conundrum. Dismissing AI safety (Scylla) only makes sense if you go full on post humanist because the risks are real. Calling for AI safety through oversight (Charybdis) doesn’t acknowledge that way too much government power is required to sustain this approach.
Is there an alternative option? Yes but it is highly unpopular and also hard to get to from here. In fact I believe we can only get there if we make lots of other changes, which together could take us from the Industrial Age to what I call the Knowledge Age. For more on that you can read my book The World After Capital.
For several years now I have argued that technological progress and privacy are incompatible. The reason for this is entropy, which means that our ability to destroy will always grow faster than our ability to (re)build. I gave a talk about it at the Stacks conference in Berlin in 2018 (funny side note: I spoke right after Edward Snowden gave a full-throated argument for privacy) and you can read a fuller version of the argument in my book.
The only solution other than draconian government is to embrace a post privacy world. A world in which it can easily be discovered that you are building a super dangerous bio weapon in your basement before you have succeeded in releasing it. In this kind of world we can have technological progress but also safeguard humanity – in part by using aligned super intelligences to detect what is happening. And yes, I believe it is possible to create versions of AGI that have deep inner alignment with humanity that cannot easily be removed. Extremely hard yes, but possible (more on this in upcoming posts on an initiative in this direction).
Now you might argue that a post privacy world also requires extraordinary state power but that's not really the case. I grew up in a small community where if you didn't come out of your house for a day, the neighbors would check in to make sure you were OK. Observability does not require state power per se. Much of this can happen simply if more information is default public. And so regulation ought to aim at increased disclosure.
We are of course a long way away from a world where most information about us could be default public. It will require massive changes from where we are today to better protect people from the consequences of disclosure. And those changes would eventually have to happen everywhere that people can freely have access to powerful technology (with other places opting for draconian government control instead). 
Given that the transition which I propose is hard and will take time, what do I believe we should do in the short run? I believe that a great starting point would be disclosure requirements covering training inputs, cost of training runs, and “powered by” relationships (i.e., if you launch, say, a therapy service that uses AI, you need to disclose which models it runs on). That, along with mandatory API access, could start to put some checks on market power. As for open source models, I believe a temporary voluntary moratorium on massively larger, more capable models is vastly preferable to any government ban. This has a chance of success because there are relatively few organizations in the world that have the resources to train the next generation of potentially open source models.
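To make the disclosure idea concrete, here is one way such a record could be structured. This is purely a sketch: the schema, field names, and example values are invented for illustration and are not part of any existing standard or regulation.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Hypothetical disclosure record for a foundation model."""
    model_name: str
    training_data_sources: list[str]  # disclosure of training inputs
    training_cost_usd: float          # disclosure of cost of training runs
    api_available: bool               # whether mandatory API access is offered

@dataclass
class ServiceDisclosure:
    """Hypothetical 'powered by' disclosure for a downstream service."""
    service_name: str
    models_used: list[str] = field(default_factory=list)

# An AI therapy service, as in the post's example, would then have to
# publish which models it runs on:
therapy_service = ServiceDisclosure(
    service_name="example-therapy-service",
    models_used=["example-model-v2"],
)
print(therapy_service)
```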
Most of all though we need to have a more intellectually honest conversation about risks and how to mitigate them without introducing even bigger problems. We cannot keep suggesting that these are simple questions and that people must pick a side and get on with it.
glasshomewrecker · 1 year ago
Text
In a silicon valley, throw rocks. Welcome to my tech blog.
Antiterf antifascist (which apparently needs stating). This sideblog is open to minors.
Liberation does not come at the expense of autonomy.
* I'm taking a break from tumblr for a while. Feel free to leave me asks or messages for when I return.
sixstringphonic · 1 year ago
Text
OpenAI Fears Get Brushed Aside
(A follow-up to this story from May 16th 2023.) Big Tech dismissed board’s worries, along with the idea profit wouldn’t rule usage. (Reported by Brian Merchant, The Los Angeles Times, 11/21/23)

It’s not every day that the most talked-about company in the world sets itself on fire. Yet that seems to be what happened Friday, when OpenAI’s board announced that it had terminated its chief executive, Sam Altman, because he had not been “consistently candid in his communications with the board.” In corporate-speak, those are fighting words about as barbed as they come: They insinuated that Altman had been lying.

The sacking set in motion a dizzying sequence of events that kept the tech industry glued to its social feeds all weekend: First, it wiped $48 billion off the valuation of Microsoft, OpenAI’s biggest partner. Speculation about malfeasance swirled, but employees, Silicon Valley stalwarts and investors rallied around Altman, and the next day talks were being held to bring him back. Instead of some fiery scandal, reporting indicated that this was at core a dispute over whether Altman was building and selling AI responsibly. By Monday, talks had failed, a majority of OpenAI employees were threatening to resign, and Altman announced he was joining Microsoft.

All the while, something else went up in flames: the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed. Concerns about “AI safety” are going to be steamrolled by the tech giants itching to tap into a new revenue stream every time.
It’s hard to overstate how wild this whole saga is. In a year when artificial intelligence has towered over the business world, OpenAI, with its ubiquitous ChatGPT and Dall-E products, has been the center of the universe. And Altman was its world-beating spokesman. In fact, he’s been the most prominent spokesperson for AI, period. For a highflying company’s own board to dump a CEO of such stature on a random Friday, with no warning or previous sign that anything serious was amiss — Altman had just taken center stage to announce the launch of OpenAI’s app store in a much-watched conference — is almost unheard of. (Many have compared the events to Apple’s famous 1985 canning of Steve Jobs, but even that was after the Lisa and the Macintosh failed to live up to sales expectations, not, like, during the peak success of the Apple II.)
So what on earth is going on?
Well, the first thing that’s important to know is that OpenAI’s board is, by design, differently constituted than that of most corporations — it’s a nonprofit organization structured to safeguard the development of AI as opposed to maximizing profitability. Most boards are tasked with ensuring their CEOs are best serving the financial interests of the company; OpenAI’s board is tasked with ensuring their CEO is not being reckless with the development of artificial intelligence and is acting in the best interests of “humanity.” This nonprofit board controls the for-profit company OpenAI.
Got it?
As Jeremy Kahn put it at Fortune, “OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI) … while at the same time preventing capitalist forces, and in particular a single tech giant, from controlling AGI.” And yet, Kahn notes, as soon as Altman inked a $1-billion deal with Microsoft in 2019, “the structure was basically a time bomb.” The ticking got louder when Microsoft sank $10 billion more into OpenAI in January of this year.
We still don’t know what exactly the board meant by saying Altman wasn’t “consistently candid in his communications.” But the reporting has focused on the growing schism between the science arm of the company, led by co-founder, chief scientist and board member Ilya Sutskever, and the commercial arm, led by Altman. We do know that Altman has been in expansion mode lately, seeking billions in new investment from Middle Eastern sovereign wealth funds to start a chip company to rival AI chipmaker Nvidia, and a billion more from SoftBank for a venture with former Apple design chief Jony Ive to develop AI-focused hardware. And that’s on top of launching the aforementioned OpenAI app store to third-party developers, which would allow anyone to build custom AIs and sell them on the company’s marketplace.
The working narrative now seems to be that Altman’s expansionist mind-set and his drive to commercialize AI — and perhaps there’s more we don’t know yet on this score — clashed with the Sutskever faction, who had become concerned that the company they co-founded was moving too fast. At least two of the board’s members are aligned with the so-called effective altruism movement, which sees AI as a potentially catastrophic force that could destroy humanity.
The board decided that Altman’s behavior violated the board’s mandate. But they also (somehow, wildly) seem to have failed to anticipate how much blowback they would get for firing Altman. And that blowback has come at gale-force strength; OpenAI employees and Silicon Valley power players such as Airbnb’s Brian Chesky and Eric Schmidt spent the weekend “I am Spartacus”-ing Altman. It’s not hard to see why. OpenAI had been in talks to sell shares to investors at an $86-billion valuation. Microsoft, which has invested more than $11 billion in OpenAI and now uses OpenAI’s tech on its platforms, was apparently informed of the board’s decision to fire Altman five minutes before the wider world. Its leadership was furious and seemingly led the effort to have Altman reinstated. But beyond all that lurked the question of whether there should really be any safeguards to the AI development model favored by Silicon Valley’s prime movers; whether a board should be able to remove a founder they believe is not acting in the interest of humanity — which, again, is their stated mission — or whether it should seek relentless expansion and scale.
See, even though the OpenAI board has quickly become the de facto villain in this story, as the venture capital analyst Eric Newcomer pointed out, we should maybe take its decision seriously. Firing Altman was probably not a call they made lightly, and just because they’re scrambling now because it turns out that call was an existential financial threat to the company does not mean their concerns were baseless. Far from it.
In fact, however this plays out, it has already succeeded in underlining how aggressively Altman has been pursuing business interests. For most tech titans, this would be a “well, duh” situation, but Altman has fastidiously cultivated an aura of a burdened guru warning the world of great disruptive changes. Recall those sheepdog eyes in the congressional hearings a few months back where he begged for the industry to be regulated, lest it become too powerful? Altman’s whole shtick is that he’s a weary messenger seeking to prepare the ground for responsible uses of AI that benefit humanity — yet he’s circling the globe lining up investors wherever he can, doing all he seemingly can to capitalize on this moment of intense AI interest.
To those who’ve been watching closely, this has always been something of an act — weeks after those hearings, after all, Altman fought real-world regulations that the European Union was seeking to impose on AI deployment. And we forget that OpenAI was originally founded as a nonprofit that claimed to be bent on operating with the utmost transparency — before Altman steered it into a for-profit company that keeps its models secret. Now, I don’t believe for a second that AI is on the cusp of becoming powerful enough to destroy mankind — I think that’s some in Silicon Valley (including OpenAI’s new interim CEO, Emmett Shear) getting carried away with a science fictional sense of self-importance, and a uniquely canny marketing tactic — but I do think there is a litany of harms and dangers that can be caused by AI in the shorter term. And AI safety concerns getting so thoroughly rolled at the snap of the Valley’s fingers is not something to cheer.
You’d like to believe that executives at AI-building companies who think there’s significant risk of global catastrophe here couldn’t be sidelined simply because Microsoft lost some stock value. But that’s where we are.
Sam Altman is first and foremost a pitchman for the year’s biggest tech products. No one’s quite sure how useful or interesting most of those products will be in the long run, and they’re not making a lot of money at the moment — so most of the value is bound up in the pitchman himself. Investors, OpenAI employees and partners such as Microsoft need Altman traveling the world telling everyone how AI is going to eclipse human intelligence any day now much more than they need, say, a high-functioning chatbot.
Which is why, more than anything, this winds up being a coup for Microsoft. Now it has got Altman in-house, where he can cheerlead for AI and make deals to his heart’s content. They still have OpenAI’s tech licensed, and OpenAI will need Microsoft more than ever. Now, it may yet turn out to be that this was nothing but a power struggle among board members, and it was a coup that went wrong. But if it turns out that the board had real worries and articulated them to Altman to no avail, no matter how you feel about the AI safety issue, we should be concerned about this outcome: a further consolidation of power of one of the biggest tech companies and less accountability for the product than ever.
If anyone still believes a company can steward the development of a product like AI without taking marching orders from Big Tech, I hope they’re disabused of this fiction by the Altman debacle. The reality is, no matter whatever other input may be offered to the company behind ChatGPT, the output will be the same: Money talks.
cancmbyn · 1 year ago
Text
Christopher Nolan Warns of 'Terrifying Possibilities' as AI Reaches 'Oppenheimer Moment': 'We Have to Hold People Accountable'
By Kim J. Murphy
"I hope so," Nolan stated. "When I talk to the leading researchers in the field of AI right now, for example, they literally refer to this -- right now -- as their Oppenheimer moment. They're looking to history to say, 'What are the responsibilities for scientists developing new technologies that may have unintended consequences?'"
"Do you think Silicon Valley is thinking that right now?" Todd interjected. "Do you think they say that this is an Oppenheimer moment?"
"They say that they do," Nolan said after a pause and then chuckled. "It's helpful that that's in the conversation and I hope that that thought process will continue. I am not saying Oppenheimer's story offers any easy answers to those questions, but it at least can show where some of those responsibilities lie and how people take a breath and think, 'Okay, what is the accountability?'"
Accountability needs to come from external sources; if you look to Silicon Valley, it will never come wholly from within. There is no incentive to do so when the profit-making incentive is the only one that really matters.
aifyit · 2 years ago
Text
The DARK SIDE of AI: Can We Trust Self-Driving Cars?
Read our new blog and follow/subscribe for more such informative content. #artificialintelligence #ai #selfdrivingcars #tesla
Self-driving cars have been hailed as the future of transportation, promising safer roads and more efficient travel. They use artificial intelligence (AI) to navigate roads and make decisions on behalf of the driver, leading many to believe that they will revolutionize the way we commute. However, as with any technology, there is a dark side to AI-powered self-driving cars that must be…
carolinemillerbooks · 1 year ago
Text
New Post has been published on https://www.booksbycarolinemiller.com/musings/the-revolution-of-the-species/
The Revolution Of The Species
Senator John Fetterman (D) recently shared this observation with the public: America is not sending its best and brightest to Washington, D.C. Congressional in-fighting and scandals among the elected elite support the senator's view. Bureaucrats add to the confusion. As specialists in their fields, they can run circles around the people's representatives. For example, while Congress squabbles about sending money to support Ukraine's war, the Secretary of the Treasury, Janet Yellen, proposes that President Joe Biden bypass the government's legislative branch and redirect Russia's frozen assets to Volodymyr Zelenskyy.

Adding to the fog is technology, an industry politicians little know or understand. As a result, innovators in Silicon Valley have pursued Artificial Intelligence (AI) unfettered, to a degree that it has become as great a danger to us as the atomic bomb. In 2018, the Brookings Institution issued a report on the benefits and dangers of AI and provided recommendations to ensure the technology did no harm. It collected dust like most reports. But now, five years later, tech giants have come running to Congress seeking regulations, fearing they have released an evil genie from its bottle and hoping to spread the blame.

While the tech world seeks legal protection from the potential damage their invention can do, the rest of us should consider what human traits these innovators have passed along to their powerful machines. Given our current capacity to blow up the planet's resources, including polluting its air, what could go wrong? The advent of AI will alter our lives, no doubt, but it won't create a blank slate upon which to build our utopian dream. As historian Timothy Snyder warns, we can't avoid dragging the debris of the past into our new world. Economic inequality will be one such piece of debris, and should social mobility die, the scholar predicts, democracy [will] give way to oligarchy, opening the door to tyranny.

Donald Trump has given us a glimpse of that future: a society where citizens are encouraged to sleepwalk through their existence, obeying their leaders without question. What these sheep mustn't see, says Snyder, is that most of those who held power in the past will continue to hold it in the future, making changes wrought by insurrections or revolutions largely an illusion. True, the technological revolution has brought a world of information to our fingertips, but the price has been the loss of our privacy: data that the oligarchs of AI gather and sell for their immense profit.

Elon Musk is one of these. Having accumulated much of the world's capital, he imagines he owns the rest of us and dares to wade into international politics, changing the course of our lives without the authority of a single vote cast at the ballot box. Such hubris leaves us to ponder the legacy of these innovators. They have given us convenience and access to endless information, but they are the purveyors of disinformation and deepfakes too. By these means, society finds itself not merely divided but fractured, to a degree that makes determining the public good seem impossible.

Will their invention, AI, come to sense the frailty of our species? As repositories of all that we know, will these machines see how we have dehumanized ourselves by our obsession with money, pleasure, and the pursuit of war? If so, will these lungless servants become our masters, caring nothing about us and our environment? I doubt they will miss the meadowlark's song.
Forgive these dystopian questions, but it's time to consider our status as naked apes. The universe takes little notice of us. And Nature appears to be turning its back on our species. Or perhaps we were the first to turn away, preferring to focus on ourselves and the petty differences in our religions, the color of our skin, and our varying lifestyles. When inconsequential variations like these become matters of life and death, are we worthy of respect even from our miraculous machines? More likely, they will judge us against other creatures on the planet and find we are not the best and brightest.

"I must say that I have rarely seen a community come together in order to meet a common need in a manner as beautiful as that of a handful of birds at a feeder." (Craig D. Lounsbrough)
bopinion · 1 year ago
Text
2023 / 22
Aperçu of the Week:
"The greatest enemy of knowledge is not the ignorance, it is the illusion of knowledge."
(Stephen Hawking - British theoretical physicist and astrophysicist at Cambridge University)
Bad News of the Week:
With the Manhattan Project, mankind already opened Pandora's box once, for there will be no way back to a time before nuclear weapons. Despite knowing better, we will never put the lid back on the box, because there will always be people who see an advantage in it: personal preservation of power, deterrence against real or imagined threats, displays of national strength, and other superficial egoisms. We will never get rid of this curse. And exactly the same thing is happening again now, with artificial intelligence. So says Warren Buffett, too.
The scientists can't be blamed for this. It is in their nature to test the limits of what is possible. And if the goal of their research and development is also economically attractive, there will always be someone to fund their work. It started with shopping recommendations in online stores. It continued with the analysis of movement profiles. And today, AI at insurance companies is already deciding who gets which rate on what terms. All based on bare numbers, so 100% objective.
In a way, the great advantage of human intelligence is the equally human retarding moment. It is called conscience. Doubts are good, because they let humans think again, risk a second look, weigh things up based on personal experience. Artificial intelligence does not have this control mechanism. It decides purely on the basis of facts, coldly, ruthlessly. Example: how would artificial intelligence decide if the power failed in a hospital and the emergency generator only had enough electricity for one system? What would it shut down: itself, or the life-support systems of patients in palliative care who were doomed anyway? Exactly.
The statements from critics - and there are many among them who have been or are in AI development themselves, such as Sam Altman, the head of ChatGPT creator OpenAI - calling on policymakers to act are serious. Once again, technical progress is much faster than regulatory requirements. Even now, for example, the handling of fake news and hate speech on social media lags far behind. But this time there is (even) more at stake: the control of the human over the machine.
Joachim Weickert, professor of mathematics and computer science at Saarland University, lists four areas of risk: upheaval in the labor market, even for highly skilled professions; destabilization of societies through disinformation; loss of control, opacity and one-sidedness; and finally, AI's own damaging independence - simply taking command itself, fully aware of its own superiority. Almost 40 years ago, the movies introduced us to the central machine instance Skynet. Let's hope it's not "I am back!" one day in reality.
Good News of the Week:
I am a child of the Cold War. Germany and Europe were divided. In school we learned what to do in the event of an atomic bomb explosion, and subway stations doubled as bunkers. The world seemed clearly divided into good and evil. Nevertheless, I took to the streets against the stationing of Pershing missiles and found the "nuclear sharing" frightening - to this day, we Germans do not know where the U.S. forces keep how many nuclear weapons in our country. Nor do we know about Great Britain and France. Creepy.
Then came the turning point. The Warsaw Pact and the Soviet Union collapsed, and the war of systems seemed to have a clear winner. Nuclear weapons were left to rust away uselessly, serving only as a fetish for Arab and East Asian rulers. But with Vladimir Putin and Xi Jinping, there are now two men in power in undemocratic states for whom nuclear weapons are a perfectly normal instrument of geopolitical interests.
And the United States is not very squeamish about its words either. In the future, the United States should be able, "for the first time in its history, to deter two roughly equal nuclear powers," says national security adviser Jake Sullivan. And, "one of our greatest nonproliferation successes in the age of nuclear weapons has been extended nuclear deterrence, which gives many of our allies the assurance that they don't have to develop their own nuclear weapons." In short, living with the bomb is again (or still) quite normal.
At this point in Sullivan's speech on Friday in the White House press room, I would have preferred to tune out, bracing for unpleasant dreams the following night. But then I was surprised: In light of the New START nuclear arms control treaty, which expires in 2026 and which Russia suspended four months ago anyway, Sullivan called for talks "on how to deal with nuclear risks beyond 2026" so that no new conflicts would arise.
And then came a double whammy: first, the U.S. called for talks "without preconditions," and second, it directed that call to Russia - and China - thereby acknowledging, for the first time, an equal footing. Therefore, the talks will happen. I am not naive; there will be no large-scale renunciation with reciprocal controls that everyone then abides by. But whoever coined the saying "Where there is talk, there is no shooting" was almost always right.
Personal happy moment of the week:
My son returned yesterday from a vacation in Italy with his mother and sister, where he was not only willing to risk a glimpse of nature and culture but also went swimming for two whole hours every day. And today he left with my father for a week-long bike tour from Koblenz along the Moselle to Luxembourg. He has already declared that he will also make a detour to a church or castle worth seeing. On top of that, he not only tanned his skin in Italy but also overtook my wife in height. So in every sense: he is getting big.
I couldn't care less...
...about the further rapprochement of the regional powers Saudi Arabia and Iran, this time in the form of the establishment of a naval alliance. Officially it is said that this is the only way to bring security to the region. Iranian naval commander Shahram Irani declares: "Then we will witness our region being liberated from unauthorized forces." This can only mean the U.S. naval base in Bahrain. A common adversary is apparently enough to bridge fundamental differences - in this case, the Shiite and Sunni branches of Islam. Unfortunately, this will do nothing for democracy or even human rights. On the contrary: the oppression of women, for example, will be cemented even more firmly.
As I write this...
...a mixture of full moon, everyday worries and Monday dread keeps me from sleeping. Well, at least I'll get my blog done, which I didn't get around to finishing yesterday / Sunday.
Post Scriptum
On Saturday was organ donation day - a topic that urgently needs more attention, because about 8,500 Germans are currently waiting for an organ transplant; for kidneys, for example, the wait is about eight years, too long for many. And in 2022, only 900 people donated an organ. In theory, people are much more willing to donate, but bureaucracy is the main obstacle: many relatives don't even know where the deceased person stood on the subject, and there is often no valid donor card. The so-called "objection solution" would put an end to this, as the objection would then have to be actively and centrally documented. But there is currently no majority in parliament for this. And at least one person dies every day in Germany - avoidably.
datamined · 2 years ago
Text
Updated my BYF page to include that I do not support AI art.
lillyanne4writes · 7 months ago
Text
In 2022, 2.2 billion people didn't have access to safely managed drinking water (source). You'd think doing something about that would be our priority, but no.
(Btw, here's the full article by The Standard shown in the screenshot. It's worth checking out for more details, including a few measures that tech companies could or are planning to take to tackle the problem.)
[Screenshot: The Standard article]