#the ethics of artificial intelligence
reallytoosublime · 1 year ago
Text
AI technology continues to advance and become more integrated into various aspects of society, making ethical considerations increasingly important. As AI becomes more capable and ubiquitous, concerns are being raised about its impact on society. In this video, we'll explore the ethics behind AI and discuss how we can ensure fairness and privacy in the age of AI.
#theethicsbehindai #ensuringfairnessprivacyandbias #limitlesstech #ai #artificialintelligence #aiethics #machinelearning #aitechnology #ethicsofaitechnology #ethicalartificialintelligence #aisystem
The Ethics Behind AI: Ensuring Fairness, Privacy, and Bias
0 notes
youtubemarketing1234 · 1 year ago
Text
Artificial Intelligence has emerged as a transformative technology, revolutionizing industries and societies across the globe. From personalized recommendations to autonomous vehicles, AI systems are becoming deeply integrated into our daily lives. However, this rapid advancement also brings forth a host of ethical concerns that demand careful consideration. Among these concerns, ensuring fairness, privacy, and mitigating bias are paramount.
Fairness in AI systems cannot be an afterthought. AI algorithms, often trained on large datasets, have the potential to perpetuate and even exacerbate existing social biases. This can manifest in various ways, from biased hiring decisions in AI-driven recruitment tools to discriminatory loan approvals in automated financial systems. Achieving fairness involves developing algorithms that are not only technically proficient but also ethically sound. It requires recognizing biases, both subtle and overt, in data and implementing measures to mitigate them.
AI systems often require access to vast amounts of personal data to function effectively. This raises profound privacy concerns, as the misuse or mishandling of such data can lead to surveillance, identity theft, and unauthorized access to sensitive information. Striking a balance between data collection for AI improvement and safeguarding individual privacy is a significant ethical challenge. The implementation of robust data anonymization techniques, data encryption, and the principle of data minimization are vital in ensuring that individuals' privacy rights are respected in the age of AI.
The ethical underpinnings of AI demand transparency and accountability from developers and organizations. AI systems must provide clear explanations for their decisions, especially when they impact individuals' lives. The concept of the "black box" AI, where decisions are made without understandable reasons, raises concerns about the potential for unchecked power and biased outcomes. Implementing mechanisms such as interpretable AI and model explainability can help in building trust and ensuring accountability.
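The notion of interpretable AI mentioned above can be made concrete with a toy sketch. All feature names and numbers below are hypothetical, and a linear scorer is far simpler than any real credit model, but it shows the sense in which a simple model is "explainable": each feature's contribution to a decision can be read off directly as weight times value.

```python
# Hypothetical loan-scoring example: a linear model is interpretable
# because its decision decomposes into per-feature contributions.
feature_names = ["income", "debt_ratio", "years_employed"]
weights = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}
applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.4}

def explain(applicant):
    # Each contribution is weight * feature value; they sum to the score.
    contributions = {f: weights[f] * applicant[f] for f in feature_names}
    score = sum(contributions.values())
    decision = "approve" if score > 0 else "deny"
    return decision, contributions

decision, contributions = explain(applicant)
print(decision)  # "deny" for this applicant
for f in feature_names:
    # The explanation: exactly which feature pushed the decision where.
    print(f, round(contributions[f], 2))
```

A "black box" model offers no such decomposition, which is why explainability research tries to approximate one after the fact.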
The Ethics Behind AI: Ensuring Fairness, Privacy, and Bias
0 notes
beyondlimitss1 · 2 years ago
Text
The ethics of AI (Artificial Intelligence)
Explore the ethics of AI: the pros and cons of Artificial Intelligence. Learn about the best use of Artificial Intelligence (AI) in your life.
The ethics of AI (Artificial Intelligence) refers to the principles and values that guide the development, deployment, and use of AI systems. As AI becomes more sophisticated and ubiquitous, it raises a variety of ethical concerns and challenges.
Here are some of the key ethical issues in AI:
Bias and discrimination: AI systems can reflect and amplify the biases and prejudices of their creators and the data they are trained on. This can lead to unfair and discriminatory outcomes, particularly for marginalized communities.
Privacy and surveillance: AI systems can collect, store, and analyze vast amounts of personal data, raising concerns about privacy and surveillance. This is especially true for facial recognition technology and other forms of biometric data collection.
Accountability and transparency: It can be difficult to understand how AI systems make decisions and to hold them accountable for their actions. This lack of transparency and accountability can lead to mistrust and uncertainty.
Safety and reliability: AI systems can have unintended consequences and cause harm, particularly in critical domains such as healthcare and transportation. Ensuring the safety and reliability of AI systems is therefore crucial.
Employment and automation: AI systems can automate jobs and displace workers, leading to economic disruption and inequality. It is important to consider the ethical implications of these changes and to ensure that workers are protected and supported.
To address these ethical challenges, there are several frameworks and guidelines for responsible AI development and use. These include principles such as fairness, transparency, accountability, and human-centered design, as well as specific policies and regulations. Ultimately, the goal is to create AI systems that benefit society while minimizing harm and ensuring ethical use.
AI (Artificial Intelligence) has the potential to bring many benefits to society, but it also poses some challenges and risks. Here are some of the pros and cons of AI:
Pros:
Efficiency and productivity: AI can automate routine and repetitive tasks, freeing up time and resources for more creative and complex work. This can lead to increased efficiency and productivity.
Improved decision-making: AI can analyze vast amounts of data and provide insights that humans may not be able to identify. This can improve decision-making in fields such as healthcare, finance, and business.
Personalization: AI can analyze user data and provide personalized recommendations and experiences, such as in the case of personalized advertising or content recommendations.
Safety and security: AI can enhance safety and security in areas such as transportation, defense, and cybersecurity. For example, self-driving cars can reduce the risk of accidents caused by human error.
Cons:
Bias and discrimination: AI can perpetuate and amplify biases and discrimination, particularly if the data it is trained on is biased. This can lead to unfair and discriminatory outcomes, particularly for marginalized communities.
Job displacement: AI can automate jobs and displace workers, leading to economic disruption and inequality. This can create social and economic challenges, particularly if these workers do not have the skills or resources to adapt to new roles.
Privacy and security: AI can collect and analyze vast amounts of personal data, raising concerns about privacy and security. This can also increase the risk of cyber attacks and other forms of hacking.
Lack of transparency: AI can be opaque and difficult to understand, leading to a lack of transparency and accountability. This can create mistrust and uncertainty, particularly if the decisions made by AI systems are consequential.
In conclusion, AI has the potential to bring many benefits to society, but it is important to carefully consider its implications and address its challenges and risks. This includes ensuring that AI systems are transparent, accountable, and ethical, and that their benefits are fairly distributed across society.
Visit Us - https://beyondlimitss.com/the-ethics-of-ai-artificial-intelligence/
0 notes
incognitopolls · 1 month ago
Text
This is a real thing that happened and got a Google engineer fired in 2022.
We ask your questions so you don’t have to! Submit your questions to have them posted anonymously as polls.
448 notes · View notes
cirilee · 10 months ago
Text
grumpy monitor robot creature belongs to @neoncl0ckwork heheh, mindless human repairman isidor tichy belongs to me ovo
256 notes · View notes
savagechickens · 1 year ago
Text
The Latest Technology.
And more technology.
365 notes · View notes
reasonsforhope · 1 year ago
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the “better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
199 notes · View notes
fibrielsolaer · 2 years ago
Text
"Ethical AI" activists are making artwork AI-proof
Hello dreamers!
Art thieves have been infamously claiming that AI illustration "thinks just like a human" and that an AI copying an artist's image is as noble and righteous as a human artist taking inspiration.
It turns out this is - surprise! - factually and provably not true. In fact, some people who have experience working with AI models are developing a technology that can make AI art theft no longer possible by exploiting a fatal, and unfixable, flaw in their algorithms.
They have published an early version of this technology called Glaze.
https://glaze.cs.uchicago.edu
Glaze works by altering an image so that it looks only a little different to the human eye but very different to an AI. This produces what is called an adversarial example. Adversarial examples are a known vulnerability of all current AI models that have been written on extensively since 2014, and it isn't possible to "fix" it without inventing a whole new AI technology, because it's a consequence of the basic way that modern AIs work.
This "glaze" will persist through screenshotting, cropping, rotating, and any other mundane transformation to an image that keeps it the same image from the human perspective.
The web site gives a hypothetical example of the consequences - poisoned with enough adversarial examples, AIs asked to copy an artist's style will end up combining several different art styles together. Perhaps they might even stop being able to tell hands from mouths or otherwise devolve into eldritch slops of colors and shapes.
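The adversarial-example principle described above can be sketched at miniature scale. This is purely illustrative and is not Glaze's actual algorithm: a tiny linear classifier is flipped by a small per-feature nudge, the same basic idea behind perturbations that look minor to humans but change a model's output.

```python
# Toy adversarial example against a linear classifier sign(w . x).
# Weights and inputs are made up for illustration.
w = [1.0, -1.0]   # hypothetical trained weights
x = [0.3, 0.1]    # "image" with two features in [0, 1]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1 if v > 0 else -1

def classify(x):
    return sign(dot(w, x))

orig = classify(x)  # dot(w, x) = 0.2  -> class 1
eps = 0.15          # small per-feature perturbation budget

# Nudge each feature in the direction that pushes the score
# toward the opposite class (a one-step, FGSM-style perturbation).
x_adv = [xi - eps * sign(wi) * orig for xi, wi in zip(x, w)]
adv = classify(x_adv)  # dot(w, x_adv) = -0.1 -> class -1
print(orig, adv)       # prediction flips: 1 -1
```

Real image models are nonlinear and far higher-dimensional, but the vulnerability is the same in kind, which is why adversarial examples have resisted a general fix since they were described in 2014.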
Techbros are attempting to discourage people from using this by lying and claiming that it can be bypassed, or is only a temporary solution, or most desperately that they already have all the data they need so it wouldn’t matter. However, if this glaze technology works, using it will retroactively damage their existing data unless they completely stop automatically scraping images.
Give it a try and see if it works. Can't hurt, right?
595 notes · View notes
tumbler-polls · 9 months ago
Text
68 notes · View notes
Text
My New Article at WIRED
So, you may have heard about the whole Zoom “AI” Terms of Service clause public relations debacle going on this past week, in which Zoom decided that it wasn’t going to let users opt out of them feeding our faces and conversations into their LLMs. In 10.1, Zoom defines “Customer Content” as whatever data users provide or generate (“Customer Input”) and whatever else Zoom generates from our uses of Zoom. Then 10.4 says what they’ll use “Customer Content” for, including “…machine learning, artificial intelligence.”
And then on cue they dropped an “oh god oh fuck oh shit we fucked up” blog where they pinky promised not to do the thing they left actually-legally-binding ToS language saying they could do.
Like, Section 10.4 of the ToS now contains the line “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent,” but again it still seems a) that the “customer” in question is the Enterprise, not the User, and b) that “consent” means “clicking yes and using Zoom.” So it’s Still Not Good.
Well anyway, I wrote about all of this for WIRED, including what zoom might need to do to gain back customer and user trust, and what other tech creators and corporations need to understand about where people are, right now.
And frankly the fact that I have a byline in WIRED is kind of blowing my mind, in and of itself, but anyway…
Also, today, Zoom backtracked Hard. And while I appreciate that, it really feels like Zoom decided to take their ball and go home rather than offer meaningful consent and user control options. That’s… not exactly better, and doesn’t tell me what, if anything, they’ve learned from the experience. If you want to see what I think they should’ve done, then, well… check the article.
Until Next Time.
Read the rest of My New Article at WIRED at A Future Worth Thinking About
124 notes · View notes
ai-innova7ions · 2 months ago
Text
Neturbiz Enterprises - AI Innov7ions
Our mission is to provide details about AI-powered platforms across different technologies, each of which offers a unique set of features. The AI industry encompasses a broad range of technologies designed to simulate human intelligence, including machine learning, natural language processing, robotics, computer vision, and more. Companies and research institutions are continuously advancing AI capabilities, from creating sophisticated algorithms to developing powerful hardware. The AI industry, characterized by the development and deployment of artificial intelligence technologies, has a profound impact on our daily lives, reshaping how we live, work, and interact.
16 notes · View notes
omegaphilosophia · 14 days ago
Text
The Philosophy of Sapience
Sapience refers to wisdom, deep insight, or the ability to think and act with judgment, often contrasted with sentience (the capacity for sensation and feeling). In philosophy, sapience explores what it means to be capable of higher-order thinking, reflective self-awareness, and the pursuit of knowledge and understanding.
1. Definition of Sapience
Sapience is typically defined as the ability to reason, think abstractly, and apply knowledge wisely. It encompasses the intellectual faculties that allow beings to reflect, solve complex problems, and engage in self-directed learning.
It is often associated with wisdom, foresight, and a moral dimension, involving not only intellectual capacity but also ethical judgment.
2. Sapience vs. Sentience
Sentience refers to the capacity to have subjective experiences (such as pleasure or pain), while sapience is linked to the higher cognitive abilities that include reasoning, planning, and understanding abstract concepts.
Sapient beings are not only aware of their experiences but are capable of reflecting on those experiences, making decisions based on reason, and exercising judgment about complex matters. Humans are typically considered sapient, while many non-human animals are seen as sentient but not sapient.
3. Sapience and the Human Condition
Sapience is often seen as a key trait that distinguishes humans from other animals. It involves self-awareness and the ability to ask philosophical questions, reflect on one’s existence, and make moral judgments.
The ancient Greeks, especially Aristotle, viewed sapience as a fundamental characteristic of humans. Aristotle argued that humans are "rational animals" whose ability to reason sets them apart from other creatures and allows them to achieve eudaimonia (flourishing or happiness) through the exercise of virtue.
Wisdom and Practical Reasoning: Sapience is also closely related to the philosophical concept of phronesis, or practical wisdom, which refers to the ability to make good judgments in everyday life. This kind of wisdom, according to Aristotle, requires not only knowledge but also experience and moral insight.
4. Sapience and Knowledge
Epistemology, or the philosophy of knowledge, is closely related to the concept of sapience. To be sapient is not just to have knowledge, but to understand how to apply that knowledge wisely in different contexts.
Philosophers like Plato and Socrates viewed sapience as the highest form of knowledge. For Plato, wisdom was a form of insight into the eternal truths of the universe, such as the Forms, and the philosopher was the one who could access this deep knowledge.
Socratic Wisdom: Socrates famously said that true wisdom comes from knowing that one knows nothing. This humility and self-awareness are seen as core aspects of sapience—the ability to reflect critically on one’s own limitations and to pursue knowledge without assuming one already has it.
5. Sapience and Artificial Intelligence
As artificial intelligence continues to develop, the question of whether machines could ever achieve sapience arises. While many AI systems demonstrate remarkable abilities to process information and solve problems (which might mimic aspects of sapience), philosophers debate whether machines can truly possess wisdom, self-awareness, or moral judgment.
Strong AI vs. Weak AI: Weak AI refers to systems that can perform specific tasks but do not have genuine understanding or wisdom. Strong AI theorizes that machines could one day develop true sapience, becoming not just tools for human use but entities capable of reflective thought and ethical decision-making.
Ethical Implications: If machines were to become sapient, this would raise profound ethical questions about their rights, responsibilities, and their place in human society. Would sapient machines deserve the same moral consideration as humans?
6. Sapience and Moral Responsibility
Moral Agency: A key philosophical question related to sapience is whether sapience is required for moral responsibility. Beings with the capacity for reflective thought, self-awareness, and moral reasoning are often seen as responsible for their actions, as they can make choices based on reasoning and judgment.
Free Will and Sapience: The relationship between sapience and free will is another important topic. For some philosophers, sapience involves the ability to act freely, based on reasoned decisions rather than instinct or compulsion.
7. Sapience in Non-Human Animals
Philosophers and scientists debate whether certain non-human animals (such as dolphins, elephants, or great apes) might possess degrees of sapience. These animals have demonstrated behaviors that suggest problem-solving, self-awareness, and even moral behavior, leading to discussions about extending moral consideration to them.
Degrees of Sapience: Some argue that sapience exists on a continuum, with humans representing the highest degree of sapience, but other species potentially exhibiting lesser forms of wisdom and self-reflection.
8. Sapience and Existentialism
Existentialist philosophers like Jean-Paul Sartre view sapience as central to the human experience. Sartre argued that humans are unique in their ability to reflect on their own existence and to make free choices in the face of an indifferent or even absurd universe.
This capacity for self-reflection and choice is both a source of freedom and a burden, as humans must create meaning and purpose in their lives without relying on external or predetermined systems of value. For existentialists, sapience is both the source of human dignity and the cause of existential anxiety.
9. Sapience and the Future
As humans develop new technologies and continue to explore the boundaries of knowledge, the concept of sapience is evolving. Philosophers consider what it means to be wise in an era of rapid technological change, where access to vast amounts of information may not always lead to wisdom or good judgment.
Transhumanism: Some thinkers speculate about the possibility of enhancing human sapience through technology. Transhumanism advocates for using science and technology to improve human intellectual and moral capacities, potentially leading to a future where humans achieve a higher form of sapience.
The philosophy of sapience examines the nature of wisdom, reflective thought, and higher-order reasoning. It encompasses questions about what distinguishes humans from other animals, the relationship between knowledge and judgment, and the moral implications of sapience. It also raises ethical concerns about the development of artificial sapience in machines and the potential for enhancing human intellectual capacities.
7 notes · View notes
incognitopolls · 7 months ago
Text
For the purposes of this poll, research is defined as reading multiple non-opinion articles from different credible sources, a class on the matter, etc. – do not include reading social media or pure opinion pieces.
Fun topics to research:
Can AI images be copyrighted in your country? If yes, what criteria does it need to meet?
Which companies are using AI in your country? In what kinds of projects? How big are the companies?
What is considered fair use of copyrighted images in your country? What is considered a transformative work? (Important for fandom blogs!)
What legislation is being proposed to ‘combat AI’ in your country? Who does it benefit? How does it affect non-AI art, if at all?
How much data do generators store? Divide the model size by the number of images in the training set. How much information is that per image, proportionally? How many pixels is that?
What ways are there to remove yourself from AI datasets if you want to opt out? Which of these are effective (ie, are there workarounds in AI communities to circumvent dataset poisoning, are the test sample sizes realistic, which generators allow opting out or respect the no-ai tag, etc)
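The storage question in the list above makes a good back-of-envelope exercise. The figures below are approximate public numbers, treated here as assumptions: Stable Diffusion v1 weights are on the order of 4 GB, and LAION-2B contains roughly 2.3 billion images.

```python
# Rough arithmetic: how much model data per training image?
model_bytes = 4e9          # ~4 GB of weights (assumed, fp32-scale)
training_images = 2.3e9    # ~2.3 billion images (assumed, LAION-2B)

bytes_per_image = model_bytes / training_images
print(round(bytes_per_image, 2))  # ~1.74 bytes per training image

# One uncompressed RGB pixel is 3 bytes, so the model holds less
# than a single pixel's worth of data per image it was trained on.
print(bytes_per_image < 3)  # True
```

Whatever conclusion you draw from that ratio, it is the kind of concrete number this sort of research turns up.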
461 notes · View notes
dxxprs · 1 year ago
Text
Consciousness vs. Intelligence: Ethical Implications of Decision-Making
The distinction between consciousness in humans and artificial intelligence (AI) revolves around the fundamental nature of subjective experience and self-awareness. While both possess intelligence, the essence of consciousness introduces a profound divergence. Here we delve into the disparities between human consciousness and AI intelligence, and how this contrast underpins the ethical complexities of using AI for decision-making. Specifically, we will examine the possibility of taking emotion out of the equation in decision-making processes, and take a good look at the ethical implications this would have.
Consciousness is the foundational block of human experience, encapsulating self-awareness, subjective feelings, and the ability to perceive the world in a deeply personal manner. It engenders a profound sense of identity and moral agency, enabling individuals to discern right from wrong, and to form intrinsic values and beliefs. Humans possess qualia, the ineffable and subjective aspects of experience, such as the sensation of pain or the taste of sweetness. This subjective dimension distinguishes human consciousness from AI. Consciousness grants individuals the capacity for moral agency, allowing them to make ethical judgments and to assume responsibility for their actions.
AI, on the other hand, operates on algorithms and data processing, exhibiting intelligence that is devoid of subjective experience. It excels in tasks requiring logic, pattern recognition, and processing vast amounts of information at speeds beyond human capabilities. It also operates on algorithmic logic, executing tasks based on predetermined rules and patterns. It lacks the capacity for intuitive leaps and subjective interpretation, at least for now. AI processes information devoid of emotional biases or subjective inclinations, leading to decisions based solely on objective criteria. Now, is this useful or could it lead to a catastrophe?
The prospect of eradicating emotion from decision-making is a contentious issue with far-reaching ethical consequences. Eliminating emotion risks reducing decision-making to cold rationality, potentially disregarding the nuanced ethical considerations that underlie human values and compassion. The absence of emotion in decision-making raises questions about moral responsibility. If decisions lack emotional considerations, who assumes responsibility for potential negative outcomes? Emotions, particularly empathy, play a crucial role in ethical judgments. Eradicating them may lead to decisions that lack empathy, potentially resulting in morally questionable outcomes. Emotions contribute to cultural and contextual sensitivity in decision-making. AI, lacking emotional understanding, may struggle to navigate diverse ethical landscapes.
Concluding, the distinction between human consciousness and AI forms the crux of ethical considerations in decision-making. While AI excels in rationality and objective processing, it lacks the depth of subjective experience and moral agency inherent in human consciousness. The endeavor to eradicate emotion from decision-making raises profound ethical questions, encompassing issues of morality, responsibility, empathy, and cultural sensitivity. Striking a balance between the strengths of AI and the irreplaceable facets of human consciousness is imperative for navigating the ethical landscape of decision-making in the age of artificial intelligence.
49 notes · View notes
buzzdixonwriter · 2 months ago
Text
Hedonism Makes You Smarter
Every value we hold dear, every fact we consider indisputable, every thread of irreducible logic we base our reality on can be traced back to something the earliest one-celled microbes realized without even possessing a brain to process it: 
Life = Good
Death = Bad
At some point in the unimaginably distant past, some microbe mutated to the point where a certain type of stimuli prompted it to either move away or move closer.
And thus ethics / logic / morality / philosophy / theology was born.
The microbes capable of moving away from threats and towards nutriment stood a far better chance of surviving and reproducing than those that did not.
Very quickly, this rudimentary value system became permanently embedded in all life on the planet.
Any organism not embracing this principle quickly gets consumed by other life or wiped out by natural forces.
As we evolved into multi-cellular organisms, some of those cells developed to specialize and capitalize on the “flee death / find food” paradigm.  Every new mutation got weighed against this relentless evolutionary razor.  Any mutation that didn’t help tended to get eradicated ASAP, while those that helped got reinforced.
Sure, some mutations appear useless, but they persisted so long as they didn’t impair pro-survival traits.
Eventually some of these specialized cells specialized even further into organs we now call brains.
And within these brains arose some sort of…abstract (for lack of a better word) consciousness…
Consciousness is oft referred to by philosophers and scientists as “the hard problem.”
And not in the least because – as with pornography – everybody knows it when they see it, but no one can adequately define it.
Some call it the spirit, some call it the soul, some call it psyche, some call it mind, some call it being, some call it identity.
Some claim body and mind are one, yet it is absolutely possible to destroy most of a human’s brain -- and with it, who they actually are -- while keeping the body alive and healthy.
Others claim body and mind are separate and that in some yet to come golden age we can transfer our minds from these rotting flesh carcasses to perfect, immutable silicon bodies…
…only they not only lack any mechanism for doing so, they can’t adequately define what it is they’ll be transferring.
This is not a trivial matter!
This is of vital importance Right Now to all of us, especially those who choose not to think of it at all.  If we are nothing but a batch of data points in a meat computer, then our whole sense of unique and discrete individual identity evaporates.  Any transfer of data points does nothing for the original organism…or its accompanying soul / identity / mind / consciousness.
This is why I think AI will never acquire bona fide self-awareness and consciousness.  Whatever grants us possession of such an abstract concept does not exist without feeling.
And these feelings came from the first protozoa to flee death and embrace life.
What we feel in emotions originates in what we feel physically.
We feel pain, we seek to avoid it.  We feel hunger, we seek food.  These basic sensations steer us to live, and not just live but to live abundantly, to avoid being prey to predators, to avoid conditions that would physically impair us, to seek out what prolongs and enriches our existence (again, not in a monetary sense).
We can see even plants doing this without benefit of anything recognizable as a brain or identity.  They grow towards beneficial stimuli and away from harmful ones.
Once brains arrived on the scene, organisms could develop more nuanced means of assessing threat / benefit ratios.  Already wildly successful on the most basic levels found in tardigrades and worms, when brains obtained the most rudimentary means of symbolizing the external world and passing that information along to other brains / minds, the race for genuine consciousness kicked into high gear.
At some point this symbolic version of the world began to reside full time in an abstract realm we refer to as consciousness. 
Within the physical confines of our brains we conjure up literally an infinite number of symbolic realizations of what appears to be our “real” world – or at least our interpretations of the real world.
I hold this phenomenon to be something vastly different from AI’s generative process where it admittedly guesses what the next numbers / letters / words / pixels in a sequence should be.  AI is nothing but a flow chart -- a sophisticated, intricate, and blindingly fast flow chart, but a flow chart nonetheless.
Human consciousness is far more organic -- in every sense of the word.  It places values on symbolic items representing the (supposedly) external world around us, values that derive from emotions, not a predetermined logic chain.
You see, in order to create the ethical systems we live by, in order to create the cultures we inhabit, in order to experience genuine consciousness and self-awareness, we must feel first.
This is anathema both to the “fuck your feelings” crowd and to materialists who insist we need to process everything we experience purely rationally like…well…AI programs.
Nothing could be further from the truth, of course.
To differentiate between life and death, being and non-being, all higher functioning brains develop an emotional bond towards life and a healthy antipathy towards death.
The mix may vary from culture to culture -- one certainly doesn’t expect a 15th century samurai to share the exact values as a 21st century valley girl -- but they share a common set of core values that can be related to and understood by each other regardless of their respective background.
Because they feel emotionally --  both stoic samurai and histrionic teen -- they create a consciousness that can experience the external world and relate that experience both to themselves and others.
By comparison, when AI correctly predicts “the sun will rise tomorrow,” does it actually understand what those terms mean, or only that when they appear they usually do so in a certain sequence?  This is the Chinese room paradox: would somebody with a set of pictograms, but no way of knowing what those pictograms actually symbolized, actually understand Chinese if they figured out by trial and error that certain patterns of pictograms preceded another pattern?
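The "certain sequence" kind of prediction the essay describes can be sketched as a toy bigram counter. This is a deliberate oversimplification (real language models are neural networks, not lookup tables), but it shows how a system can emit "the sun will rise" continuations from pattern frequency alone, with no notion of what a sun is.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "predict"
# the next word purely from those counts -- sequence statistics
# with no understanding attached.
corpus = "the sun will rise tomorrow and the sun will set tonight".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word):
    # Return the most frequently observed follower of this word.
    return follows[word].most_common(1)[0][0]

print(predict_next("sun"))   # "will" -- from frequency alone
print(predict_next("will"))  # "rise" or "set", whichever the counts favor
```

Whether scaling this idea up by a trillion parameters produces understanding is exactly the question the Chinese room argument poses.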
 © Buzz Dixon
5 notes · View notes