north-of-annwn · 6 years ago
Note
Ok so I'm not going to do this anonymously because I don't fear getting chastised for my own ignorance, but there are other alternatives to abortion, aren't there? I mean, I honestly don't understand this bill nonsense, but it mostly, at least to me, sounds like it's just to keep children alive. I mean, there are adoption centers and people who will actually pay women who are pregnant to act as surrogates. Why is anti-abortion so bad? And how is this a woman's autonomy problem? Please educate me.
First, I want to thank you for acknowledging that your perspective on this may be informed by the systemic oppression of AFAB people, and for seeking out information. When people add on to this post with the purpose of educating, I implore you all to remember this person is seeking information. Please avoid shaming or ridiculing them.
Let’s first address your questions:
1) “There are other alternatives to abortion aren’t there?”
Let’s first define abortion. “In medicine, an abortion is the premature exit of the products of conception (the fetus, fetal membranes, and placenta) from the uterus. It is the loss of a pregnancy and does not refer to why that pregnancy was lost. A spontaneous abortion is the same as a miscarriage. The miscarriage of three or more consecutive pregnancies is termed habitual abortion or recurrent pregnancy loss”  (Shiel MD, MedicineNet).
One in four womxn will have an abortion in their lifetime. Abortion is a medical procedure that can be requested or required for a lot of different reasons:
The pregnant person may not be able to carry an embryo to term safely.
The pregnant person may not have the financial support to pay for the medical bills that pregnancy costs in the US (prenatal and delivery alone can cost around $18k).  
I also want to add that people in this country are not given any kind of financial support for the time taken off for prenatal or postnatal care. Being out of work for this time could mean entering extreme poverty.
The pregnant person may not have the financial support or stability of lifestyle to support a child.
The pregnant person may not be physically up to the task of carrying a child to term and delivering. Not all womb-having people are up to what childbirth does to the body. Childbirth is one of the most dangerous things a body can be put through. In the US, the maternal mortality rate is just under 20 deaths per 100,000 births, the highest in the developed world; some developing countries have better statistics than we do.
Abortion may be required as an emergency life-saving procedure for the pregnant person. And waiting for approval by a committee could mean the death of that person.
Medical intervention can also be needed if the embryo has already been determined unviable (basically, it will never have life), because dead tissue remaining in the womb can kill the person. Wombs don't always do what they're supposed to, and often they will still act as if the pregnancy is going along normally even after the embryo has stopped growing and forming.
Abortion as a medical procedure is part of basic reproductive healthcare. Denying it is like denying the use of a c-section or blood transfusions.
I also want to add that many of these GOP states are seeking to classify any and all contraceptives as “abortion” as well. This isn’t included in this bill specifically but it’s been named as part of their agendas.
2) “I mean I honestly don’t understand this bill nonsense but it mostly at least to me sounds like it’s just to keep children alive.”

According to the CDC, 91.1% of abortions are performed at ≤13 weeks’ gestation. At this time, this is an embryo and fetal tissue. It’s not a child. Pro-life people are placing the eventual *possible* life of a being that isn’t even formed yet above the autonomy and rights of a living human being (the pregnant person). A zygote without a brain or the ability to survive outside the womb is not a person, and therefore not a child. We have determined that something without brain activity is not alive. People with wombs are not incubators. This is not the sum of our existence.
Right now you cannot force a person to give blood or organs in life-saving situations. Why should it be okay to force a person to donate their entire body as an incubator against their will, with all the health complications and long-lasting effects that pregnancy has on the body? We even afford humans that are DEAD more rights than womb-having people in this country. It is illegal to take organs or tissue from dead bodies with no brain activity without consent, but it’s legal to force a living person to act as an incubator for tissue and chromosomes that haven’t even formed into a person yet?
Also, this bill contains much more nuanced support for the oppression of women than just keeping “children” alive. It affords the state the right to investigate any suspicion of “intentional abortion.” This means that if a person miscarries, they may be subject to invasive investigation and murder charges on top of grieving for their loss and recovering medically. This bill also, in effect, treats all womb-having people in the state as property of the state by allowing people to be extradited and charged if they have a LEGAL abortion procedure in another state.
3) “I mean there are adoption centers and people who will actually pay women who are pregnant to act as surrogates. Why is anti-abortion so bad?”

We currently have 108,000 foster children up for adoption right this second in the US. This doesn’t even include unwanted pregnancies given to private adoption agencies. Adopt one if you want to save a child, but forcing people to enter crippling debt, put their bodies through the trauma of childbirth, and face possible poverty because of a lack of childcare or compensation for missed work isn’t okay.
Additionally, anti-abortion advocacy really only seems to be concerned with one thing - popping out children. There is ZERO concern for the health, wellbeing, or survival of that child OR the parent afterward. This is an oppressive expectation of forced childbirth. And again, it reduces womb-having people to nothing more than a means to an end. Their life and wellbeing aren’t considered - they’re incubators.
4) “How is this a woman’s autonomy problem?”

All of the above. The entire idea of denying women normal reproductive medical procedures, or criminalizing a natural thing that our bodies DO, is inherently oppressive. Deciding that a womb-having person is just supposed to do their best to carry an embryo to term regardless of danger to their life, medical needs, inability to care for the child, inability to pay medical bills, or the abuse that childbirth puts on the body… and possibly condemning them to death, poverty, or life-long debt removes the ability for a person to choose what is done with or TO their body. It’s inherently oppressive.
Make no mistake, these bills have very little to do with saving the lives of children, and everything to do with keeping women impoverished, oppressed, and without any control over their own bodies and lives. These bills are also written and signed without ANY input or oversight from the people they primarily affect. This is not a choice that womb-having people made… these are oppressive laws being forced upon them.
Some final personal notes from me: I am currently in a place where I would suffer greatly from these laws if they were implemented in my state. First of all, if I were to get pregnant, mine would be a high-risk pregnancy. It is likely that I could lose the pregnancy anytime within the first two trimesters, which would require an abortive procedure to remove the remaining tissue. If I’m to get pregnant, I need to know that the modern medical procedures agreed to be the most effective best practices would be available to me from a doctor, without the threat of criminalization or debate over whether they’re necessary/legal. This affects all people who may ever become pregnant. This is a clear and present fear for us. It’s not just anti-abortion. If that’s all it was… the answer would be simple: don’t have one. Even if you need one to save your life, you can still choose to say no. But it’s not. This is about controlling womxn and denying us healthcare, and we are afraid. We are all desperately terrified of this becoming the new normal across our country. ONE in FOUR pregnancies ends in the need for abortion. And if you need one, you get one. This is about whether or not we have access to SAFE and MEDICALLY sanctioned abortions.
I really encourage you to do some additional research and reading from educational sites. Be wary of FOX News, CNN, other major news networks, and any journalists with a religious agenda.

Further reading:
https://prochoice.org/education-and-advocacy/downloads-resources/
https://iwhc.org/2018/09/abortion-normal-and-vital/
https://www.prochoiceamerica.org/issue/abortion-access/

I’d really appreciate it if any followers could tack on additional resources, statistics, and personal stories. This is SO important.
14K notes
jccamus · 6 years ago
Text
Here are the 7 requirements for building ethical AI, according to the EU commission
In October, Amazon had to discontinue an artificial intelligence–powered recruiting tool after it discovered the system was biased against female applicants. In 2016, a ProPublica investigation revealed a recidivism assessment tool that used machine learning was biased against black defendants. More recently, the US Department of Housing and Urban Development sued Facebook because its ad-serving algorithms enabled advertisers to discriminate based on characteristics like gender and race. And Google refrained from renewing its AI contract with the Department of Defense after employees raised ethical concerns.
Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in the past few years. AI research has a six-decade history, but recent advances in machine learning and neural networks have pushed artificial intelligence into sensitive domains such as hiring, criminal justice, and health care.
In tandem with advances in artificial intelligence, there’s growing interest in establishing criteria and standards to weigh the robustness and trustworthiness of the AI algorithms that are helping or replacing humans in making important and critical decisions.
With the field being nascent, there’s little consensus over the definition of ethical and trustworthy AI, and the topic has become the focus of many organizations, tech companies and government institutions.
In a recently published document titled “Ethics Guidelines for Trustworthy AI,” the European Commission has laid out seven essential requirements for developing ethical and trustworthy artificial intelligence. While we still have a lot to learn as AI takes a more prominent role in our daily lives, EC’s guidelines, unpacked below, provide a nice roundup of the kind of issues the AI industry faces today.
Human agency and oversight
“AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user’s agency and foster fundamental rights, and allow for human oversight,” the EC document states.
Human agency means that users should have a choice not to become subject to an automated decision “when this produces legal effects on users or similarly significantly affects them,” according to the guidelines.
AI systems can invisibly threaten the autonomy of humans who interact with them by influencing their behavior. One of the best-known examples in this regard is Facebook’s Cambridge Analytica scandal, in which a political consulting firm used the social media giant’s advertising platform to send personalized content to millions of users with the aim of affecting their vote in the 2016 U.S. presidential election.
The challenge of this requirement is that we already interact with hundreds of AI systems every day, including the content in our social media feeds, when we view trends on Twitter, when we Google a term, when we search for videos on YouTube, and more.
The companies that run these systems provide very few controls over the AI algorithms. In some cases, such as Google’s search engine, companies explicitly refrain from publishing the inner workings of their AI algorithms to prevent manipulation and gaming. Meanwhile, various studies have shown that search results can have a dramatic influence on the behavior of users.
Human oversight means that no AI system should be able to perform its functions without some level of control by humans. This means that humans should either be directly involved in the decision-making process or have the option to review and override decisions made by an AI model.
In 2016, Facebook had to shut down the AI that ran its “Trending Topics” section because it pushed out false stories and obscene material. It then put humans back in the loop to review and validate the content the module surfaced as trending topics.
Technical robustness and safety
The EC experts state that AI systems must “reliably behave as intended while minimizing unintentional and unexpected harm, and preventing unacceptable harm” to humans and their environment.
One of the greatest concerns of current artificial intelligence technologies is the threat of adversarial examples. Adversarial examples manipulate the behavior of AI systems by making small changes to their input data that are mostly invisible to humans. This happens mainly because AI algorithms work in ways that are fundamentally different from the human brain.
Adversarial examples can happen by accident, such as an AI system that mistakes sand dunes for nudes. But they can also be weaponized into harmful adversarial attacks against critical AI systems. For instance, a malicious actor can change the coloring and appearance of a stop sign in a way that will go unnoticed to a human but will cause a self-driving car to ignore it and cause a safety threat.
Adversarial attacks are especially a concern with deep learning, a popular branch of AI that develops its behavior by examining thousands or millions of examples.
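To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known recipes for crafting adversarial examples. The model, image, and label here are stand-in assumptions; this illustrates the general technique, not any specific attack mentioned above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged to raise the model's loss (a sketch)."""
    # Work on a detached copy so gradients are computed w.r.t. the input.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel slightly in the direction that most increases the loss.
    # A small epsilon keeps the change nearly invisible to a human viewer.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The unsettling part is how little machinery this takes: a single gradient step against the input, rather than the weights, is often enough to flip a deep network’s prediction.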
There have already been several efforts to build robust AI systems that are resilient to adversarial attacks. AutoZOOM, a method developed by researchers at the MIT-IBM Watson AI Lab, helps detect adversarial vulnerabilities in AI systems.
The EC document also recommends that AI systems should be able to fall back from machine learning to rule-based systems, or ask for a human to intervene.
Since machine learning models are based on statistics, it should be clear how accurate a system is. “When occasional inaccurate predictions cannot be avoided, it is important that the system can indicate how likely these errors are,” the EC’s ethical guidelines state. This means that the end user should know about the confidence level and the general reliability of the AI system they’re using.
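As an illustration only (not something the EC document prescribes), here is a minimal sketch of one simple way a system can surface that confidence: reporting a classifier’s softmax probability alongside its prediction. The function and model names are hypothetical.

```python
import torch
import torch.nn.functional as F

def predict_with_confidence(model, x):
    """Return the predicted class and a rough confidence score (a sketch)."""
    with torch.no_grad():
        # Softmax turns raw logits into probabilities; the top probability
        # serves as a crude confidence estimate to show the end user.
        probs = F.softmax(model(x), dim=-1)
    confidence, predicted = probs.max(dim=-1)
    return predicted, confidence
```

Raw softmax scores are often overconfident, so a real deployment would calibrate them, but even a rough score lets a user or a human reviewer know when to double-check the system.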
Privacy and data governance
“AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system,” according to the EC document.
Machine learning systems are data-hungry. The more quality data they have, the more accurate they become. That’s why companies have a tendency to collect more and more data from their users. Companies like Facebook and Google have built economic empires by building and monetizing comprehensive digital profiles of their users. They use this data to train their AI models to provide personalized content and ads, keeping users glued to their apps to maximize profit.
But how responsible are these companies in maintaining the security and privacy of this data? Not very. They’re also not very explicit about the amount of data they collect and the ways they use it.
In recent years, general awareness about privacy and new rules such as the European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA) are forcing organizations to be more transparent about their data collection and processing practices. In the past year, many companies have offered users the option to download their data or to ask the company to delete it from its servers.
However, more needs to be done. Many companies share sensitive user information with their employees or third-party contractors to label data and train their AI algorithms. In many cases, users don’t know that human operators review their information and they falsely believe that only algorithms process their data.
Recently, Bloomberg revealed that thousands of Amazon employees across the world listen to voice recordings from users of its Echo smart speakers to help improve Alexa, the company’s AI-powered digital assistant. The practice does not sit well with users, who expect to enjoy privacy in their homes.
Transparency
The European Commission experts define AI transparency in three components: traceability, explainability and communication.
AI systems based on machine learning and deep learning are highly complex. They develop their behavior based on correlations and patterns found in thousands or millions of training examples. Often, the creators of these algorithms don’t know the logical steps behind the decisions their AI models make. This makes it very hard to find the reasons behind the errors these algorithms make.
The EC specifically recommends that developers of AI systems document the development process and the data they use to train their algorithms, and that they explain their automated decisions in ways that are understandable to humans.
Explainable AI has become the focus of several initiatives in the private and public sector. This includes a widespread effort by the Defense Advanced Research Projects Agency (DARPA) to create AI models that are open to investigation and methods that can explain AI decisions.
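As a toy illustration of what “explaining” a single decision can look like (a sketch under assumptions, not DARPA’s or the EC’s method), input-gradient saliency ranks input features by how strongly the model’s output depends on them:

```python
import torch

def input_saliency(model, x):
    """Rank input features by their influence on the top prediction (a sketch)."""
    x = x.clone().detach().requires_grad_(True)
    # Gradient of the winning class score w.r.t. the input: features with
    # larger gradient magnitude had more influence on this decision.
    model(x).max().backward()
    return x.grad.abs()
```

Techniques like this only approximate a model’s reasoning, which is exactly why documenting the development process and training data remains part of the EC’s recommendation.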
Another important point raised in the EC document is communication. “AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system,” the document reads.
Last year, Google introduced Duplex, an AI service that could place calls on behalf of users and make restaurant and salon reservations. Controversy ensued because the assistant refrained from presenting itself as an AI agent and duped its interlocutors into thinking they were speaking to a real human. The company later updated the service to present itself as Google Assistant.
Diversity, non-discrimination and fairness
Algorithmic bias is one of the well-known controversies of contemporary AI technology. For a long time, we believed that AI would make decisions free of human bias. But machine learning algorithms develop their behavior from their training data, and they reflect and amplify any bias contained in those data sets.
There have been numerous examples of algorithmic bias rearing its ugly head, such as the examples listed at the beginning of this article. Other cases include a study that showed popular AI-based facial analysis services being more accurate on men with light skin and making more errors on women with dark skin.
To prevent unfair bias against certain groups, EC’s guidelines recommend that AI developers make sure their AI systems’ data sets are inclusive.
The problem is, AI models often train on data that is publicly available, and this data often contains hidden biases that already exist in society.
For instance, a group of researchers at Boston University discovered that word embedding algorithms (AI models used in tasks such as machine translation and online text search) trained on online articles had developed hidden biases, such as associating programming with men and homemaker with women. Likewise, if a company trains its AI-based hiring tools with the profiles of its current employees, it might be unintentionally pushing its AI toward replicating the hidden biases and preferences of its current recruiters.
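The word-embedding finding is easy to probe at home. Here is a hedged sketch using pretrained Google News vectors via the gensim library; the exact completions depend on the vectors used, but stereotyped associations like the ones the researchers describe tend to surface:

```python
import gensim.downloader as api

# Pretrained word2vec vectors trained on Google News text (a large download).
vectors = api.load("word2vec-google-news-300")

# "man is to programmer as woman is to ...?" -- vector arithmetic over biased
# training text tends to return stereotyped completions for queries like this.
print(vectors.most_similar(positive=["programmer", "woman"],
                           negative=["man"], topn=3))
```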
To counter hidden biases, the EC recommends that companies developing AI systems hire people from diverse backgrounds, cultures, and disciplines.
One consideration to note, however, is that fairness and discrimination often depend on the domain. In hiring, for instance, organizations must make sure that their AI systems don’t base decisions on characteristics like gender and ethnicity. But in a field like health care, parameters like gender and ethnicity must be factored in when diagnosing patients.
Societal and environmental well-being
“[The] broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system’s life cycle,” EC’s guidelines state.
The social aspect of AI has been deeply studied. A notable example is social media companies, which use AI to study the behavior of their users and provide them with personalized content. This makes social media applications addictive and profitable, but it also has a negative impact on users, making them less social, less happy, and less tolerant of opposing views and opinions.
Some companies have started to acknowledge this and correct the situation. In 2018, Facebook declared that it would make changes to its News Feed algorithm to show users more posts from friends and family and fewer from brands and publishers. The move was aimed at making the experience more social.
The environmental impact of AI is less discussed, but is equally important. Training and running AI systems in the cloud consumes a lot of electricity and leaves a huge carbon footprint. This is a problem that will grow worse as more and more companies use AI algorithms in their applications.
One of the solutions is to use lightweight edge AI solutions that require very little power and  run on renewable energy. Another solution is to use AI itself to help improve the environment. For instance, machine learning algorithms can help manage traffic and public transport to reduce congestion and carbon emissions.
Accountability
Finally, EC calls for mechanisms “to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.” Basically, this means there should be legal safeguards to make sure companies keep their AI systems conformant with ethical principles.
U.S. lawmakers recently introduced the Algorithmic Accountability Act which, if passed, would require companies to have their AI algorithms evaluated by the Federal Trade Commission for known problems such as algorithmic bias as well as privacy and security concerns.
Other countries, including the UK, France, and Australia, have passed similar legislation to hold tech companies accountable for the behavior of their AI models.
In most cases, ethical guidelines are not in line with the business model and interests of tech companies. That’s why there should be oversight and accountability. “When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress. Knowing that redress is possible when things go wrong is key to ensure trust,” the EC document states.
This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
Published on The Next Web http://bit.ly/2J8JDzA via IFTTT
0 notes