#Experiments on humans without ethical or scientific oversight
@staff, did you all know this was happening? And if so, wtf? I hope they did it without y'all's permission. And if they did, SUE THEM. This sort of experimentation ON HUMANS has actual ETHICAL & SCIENTIFIC PROTOCOLS THAT MUST BE MET before anybody talks to ANY human, let alone one in crisis, let alone a CHILD in crisis.
TL;DR - Amoral and unethical 'mental health nonprofit' KOKO's algorithm has been trawling Tumblr and other sites for human beings in crisis and directing them to their site, which is NOT a human mental health professional, but IS a fucking AI CHATBOT. They are NOT telling people that's what's happening nor are they giving people in need a way to opt out (I may have misread that bit, for much more accurate info, read the article).
We need to be able to trust the online healthcare organizations we interact with. This is proof that maybe we need to be much more careful. Though how one does that when one is in crisis is beyond me. That's when we are most easily preyed upon, and I'm not happy with this crap.
#Experiments on humans without ethical or scientific oversight#experiments on humans in crisis without informed consent#AI CHATBOT mental health interventions tested on Tumblr and other social media users whose posts indicate they might be in psych crisis#KOKO mental health nonprofit#and its shitty academic sponsor
Tuskegee Syphilis Experiment
Acting on the presumption that rural southern blacks were generally more promiscuous and syphilitic than whites, and without sufficient funding to establish an effective treatment program for them, doctors working with the Public Health Service (PHS) commenced a multi-year experiment in 1932.
Their actions deprived 400 largely uneducated and poor African Americans in Tuskegee, Alabama, of proper and reasonable treatment for syphilis, a disease whose symptoms could easily have been relieved with the application of penicillin, which became available in the 1940s.
Patients were not told they had syphilis nor were they provided sufficient medication to cure them. More than 100 men died due to lack of treatment while others suffered insanity, blindness and chronic maladies related to the disease.
The original experiment took on a life of its own as physicians, intrigued by the prospect of gathering scientific data, ignored human rights and ethical considerations and managed to extend it until 1972, when PHS researcher Peter Buxtun revealed its history to the press. Public exposure embarrassed the scientific community and the government, and the experiment was quickly shut down.
Attorney Fred Gray initiated a lawsuit on behalf of the patients. In an out-of-court settlement each surviving patient received medical treatment and $40,000 in compensation.
In the wake of the scandal, Congress passed the National Research Act of 1974, which required more stringent oversight of studies employing human subjects.
In 1997, on behalf of the federal government, President Bill Clinton issued a formal apology to the victims of the Tuskegee Syphilis Experiment.
#medical segregation#medical apartheid#medical#medicine#Tuskegee#medication#knowyourhistory#black history is everybody's history#historyfacts#black history is world history#black history is american history#history#blackhistorymonth#black history#black history month#historia#medicina#enfermedad#blackhistory#bill clinton#blacklivesmatter#blacklivesalwaysmatter#blackpeoplematter#blackhistoryyear#spanish#español#segregation#culture#black history 2024#read
Unveiling the Future of Artificial Intelligence: Pros and Cons
Introduction:
Artificial Intelligence (AI) has emerged as a transformative force that is reshaping various aspects of our lives. From virtual assistants to self-driving cars, AI is revolutionizing industries and opening up new possibilities. However, like any powerful technology, AI comes with both promising advantages and potential drawbacks. In this guest blog, we will delve into the world of AI, exploring its current capabilities and shedding light on the future prospects and challenges it presents.
Understanding Artificial Intelligence:
Artificial Intelligence refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence. AI encompasses a wide range of technologies, including machine learning, natural language processing, computer vision, and robotics. These technologies enable machines to perceive, reason, learn, and interact with the world around them, ultimately imitating and augmenting human intelligence.
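As a toy illustration of the point above, that a machine can learn a rule from examples rather than being explicitly programmed with it, here is a minimal sketch in plain Python (a hypothetical example; the data and function names are my own, not from the original post):

```python
# Toy illustration of "machine learning": instead of hand-coding the rule
# y = 2x + 1, the program recovers it from noisy example data by least squares.

def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error on the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# "Training data": noisy observations of y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]
w, b = fit_line(xs, ys)
print(w, b)  # slope close to 2, intercept close to 1
```

The same idea, scaled up to millions of parameters and far richer models, underlies the machine learning systems the rest of this post discusses.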
The Pros of AI:
Increased Efficiency: AI has the potential to significantly enhance efficiency and productivity across various industries. Automated systems and intelligent algorithms can streamline processes, reducing human error and accelerating decision-making.
Advancements in Healthcare: AI-powered systems can assist healthcare professionals in diagnosing diseases, analyzing medical images, and developing personalized treatment plans. AI can potentially revolutionize healthcare delivery, leading to improved patient outcomes and more efficient healthcare practices.
Enhanced Customer Experiences: AI enables the creation of highly personalized and intuitive experiences for consumers. Chatbots and virtual assistants can provide instant and tailored customer support, while recommendation systems can offer personalized product suggestions, enhancing customer satisfaction and loyalty.
Automation and Labor-Saving: AI has the potential to automate mundane and repetitive tasks, freeing up human workers to focus on more complex and creative endeavors. This automation can lead to increased productivity and the development of new job roles.
Scientific Advancements: AI can accelerate scientific research and discovery. From analyzing massive datasets to simulating complex phenomena, AI algorithms can assist scientists in making breakthroughs in diverse fields such as medicine, climate research, and space exploration.
The Cons of AI:
Job Displacement: As AI automates certain tasks, there is a concern about job displacement. Roles that are routine or easily automated may become obsolete, leading to unemployment or the need for reskilling and upskilling the workforce to adapt to new job requirements.
Ethical and Legal Implications: The rapid advancement of AI raises ethical and legal challenges. Questions regarding data privacy, bias in algorithms, and the potential misuse of AI technology must be addressed to ensure responsible AI development and deployment.
Security Risks: AI systems can be vulnerable to attacks and manipulation. Adversarial attacks, where AI algorithms are intentionally misled, can have serious consequences in critical domains such as finance, cybersecurity, and autonomous vehicles.
Lack of Human Judgment: While AI algorithms excel at specific tasks, they lack human-like judgment and contextual understanding. This limitation poses challenges in situations where moral or ethical decisions need to be made, such as autonomous vehicles choosing between two undesirable outcomes.
Dependency and Unintended Consequences: Over-reliance on AI systems without appropriate checks and balances can lead to dependencies and potential catastrophic failures. It is crucial to carefully assess the risks and ensure human oversight and accountability in AI decision-making processes.
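The adversarial attacks mentioned under "Security Risks" above can be made concrete with a toy sketch (a hypothetical example: real attacks target neural networks, but the core idea of a small, targeted input change flipping a decision is the same):

```python
# Toy sketch of an adversarial attack: a small, deliberately chosen change to
# the input flips the decision of a simple linear classifier, even though the
# perturbed input looks almost identical to the original.

def classify(features, weights, threshold=0.0):
    """Return 1 ("accept") if the weighted score clears the threshold, else 0."""
    score = sum(f * w for f, w in zip(features, weights))
    return 1 if score > threshold else 0

weights = [0.5, -0.25, 1.0]
x = [0.2, 0.4, 0.1]  # score = 0.1 - 0.1 + 0.1 = 0.1 -> class 1

# Adversarial step: nudge each feature by a small eps in whichever direction
# lowers the score (opposite the sign of its weight), FGSM-style.
eps = 0.15
x_adv = [f - eps if w > 0 else f + eps for f, w in zip(x, weights)]

print(classify(x, weights))      # 1
print(classify(x_adv, weights))  # 0: nearly the same input, opposite label
```

An attacker who knows (or can estimate) a model's weights can craft such perturbations deliberately, which is why the text stresses that safety-critical AI systems need defenses beyond accuracy on ordinary inputs.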
Conclusion:
Artificial Intelligence holds immense potential to transform our lives positively, empowering us with efficient systems and groundbreaking capabilities. However, we must approach AI development and implementation thoughtfully, addressing the potential risks and challenges it presents. By nurturing responsible AI practices, fostering collaboration, and ensuring ethical considerations, we can navigate the future of AI to maximize its benefits while mitigating its drawbacks. With the right approach, AI can revolutionize industries, advance scientific frontiers, and augment human capabilities, leading us into a future where AI works in harmony with humanity.
What's the right way for scientists to edit human genes? 5 essential reads
by Maggie Villiger
Ethical frameworks, rules, laws: all try to have their say. Tati9/Shutterstock.com
Since scientists first figured out how to edit genes with precision using a technology called CRISPR, they’ve been grappling with when and how to do it ethically. Is it reasonable to edit human genes with CRISPR? What about human genes in reproductive cells that pass the edits on to future generations?
The International Commission on the Clinical Use of Human Germline Genome Editing convened on Aug. 13 to hash out guidelines about editing human embryos. The goal is to provide a framework that researchers around the globe can consult to ensure their work is in line with scientific consensus.
An earlier U.S. National Academies committee had already released recommendations in 2017. They called for caution – but were ambiguous enough for Chinese scientist He Jiankui to suggest he’d followed them even as he produced twin girls with CRISPR-edited genomes late last year.
Here are five stories from our archive that explore how to ethically develop and regulate a potentially risky new technology.
1. A voluntary pause
No one denies the power of the CRISPR editing tool. It could allow doctors to one day cure genetic diseases, whether in adults who are living with medical conditions or in embryos that have not yet even been born. But there’s a lot of lab work yet to be done, as well as many conversations to be had, about the right way to proceed.
In 2015, a group of prominent scientists called for a voluntary freeze on germline editing – that is, changing sperm, eggs or embryos – until ethical issues could be resolved.
Chemical biologist Jeff Bessen wrote that this approach has precedents in the scientific community, where many think it makes sense to take things slow and place “the right emphasis on safety and ethics without hampering research progress.”
2. Stringent hurdles before proceeding
The National Academies’ 2017 report was meant to provide the scientific community with definitive guidance on the issue.
Rosa Castro, a scholar of science and society, explained that the report gave the green light to modifying body cells and a yellow light to modifying reproductive cells that would allow the changes to be inherited by future progeny. The report’s goal was to ensure that “germline genome editing will be used only to prevent a serious disease, where no reasonable alternatives exist, and under strong supervision.”
3. Science marches on
By later that year, a research group announced they’d successfully used CRISPR to modify human embryos, though the edited embryos weren’t implanted in women and were never born. Bioethics and public health professor Jessica Berg wrote about the importance of working out the ethical issues of gene editing before researchers take the critical step of allowing modified embryos to develop and be born as babies.
“Should there be limits on the types of things you can edit in an embryo? If so, what should they entail? These questions also involve deciding who gets to set the limits and control access to the technology.
"We may also be concerned about who gets to control the subsequent research using this technology. Should there be state or federal oversight? Keep in mind that we cannot control what happens in other countries.
"Moreover, there are important questions about cost and access.”
4. Babies born with edited genomes
Most of the world reacted with shock in 2018 when a Chinese researcher announced he’d edited the germline cells of embryos that went on to become twin baby girls. His stated goal was to protect them from HIV infection.
This development seemed to many researchers to be in violation of at least the spirit of the 2017 guidelines around human gene editing. Biomedical ethicist G. Owen Schaefer described the central objection: that the procedure was simply too risky, with the potential for unexpected and harmful health effects later in the girls’ lives outweighing any benefit.
He wrote that the “CRISPR babies” are “part of a disturbing pattern in reproduction: rogue scientists bucking international norms to engage in ethically and scientifically dubious reproductive research.”
5. Rules and regs don’t guarantee ethical work
Whatever the outcome of the current meeting, there may be a distinction between sticking to the rules and doing what’s right. Arizona State professor of life sciences J. Benjamin Hurlbut and applied ethicist Jason Scott Robert underscored this point after Chinese scientist He Jiankui claimed he checked off the boxes laid out by the 2017 guidelines.
“Public debate about the experiment should not make the mistake of equating ethical oversight with ethical acceptability. Research that follows the rules is not necessarily good by definition.”
Guidelines and expectations can help define what the scientific community finds acceptable. But complying with the routines of oversight doesn’t guarantee a project is ethical. That’s a much more complicated question.
About The Author:
Maggie Villiger is the Senior Science + Technology Editor at The Conversation
This article is republished from our content partners over at The Conversation under a Creative Commons license. This story is a roundup of articles from The Conversation’s archives.
Lena Luthor & Medical Ethics
Lena Luthor is fascinating. So many of her actions have good intentions but are morally dubious, and this episode, which focused on Lena’s scientific endeavors, was no different.
Lena believes in the power of her personal actions. When she tells the story of her mother’s death, she makes it clear that she believes she would have saved her mother’s life by calling out or running to her mother, even if there actually is absolutely zero guarantee screaming or shouting or running would have saved her. This idea that she alone has the power to change the world is mirrored in the God-complex seen in many doctors and scientists. It’s the same complex that drives Lena to “cure all diseases” and give humans superpowers. But just because Lena can, doesn’t mean that Lena should.
Ethics exist because it is easy to make little steps and push boundaries that you don’t see yourself, until suddenly you’ve gone too far. And make no mistake, experimenting on an otherwise healthy human without previous data of success in cell lines or animal trials, without proper disclosure of what the experiment entails, and in clear violation of “do no harm” is too far.
“Doctor K,” as Adam calls Lena, seriously violates the four main pillars of medical ethics (beneficence, non-maleficence, autonomy, and justice).
When doing complex medical research, almost no one besides the primary scientists or doctors understands what is going on. And so, it would be impossible to make a fully informed decision without being told what the risks are. Adam voluntarily signs (or thumbprints) a consent form, but that is not true autonomy. There is a difference between saying "yes" and understanding what "yes" means. Lena's role as the expert automatically gives her the power to sway Adam's decision, because she gets to decide what to tell him (framing bias).
In other words, for Adam to have autonomy and provide informed consent, Lena has to tell him what she is planning on doing. But instead, she just hands him a consent form, doesn't go through the process of explaining it to him, and uses unclear terminology in the document (see "expires" as a covert way of implying death). She doesn't tell him what he's consenting to or what kind of procedure she is going to perform. When she finally does explain a little of what may happen, she selects what information to tell him. She reports only her ideal outcomes and only one potential effect (though arguably the most important, death) without explaining the other possibilities as well. Adam never had enough information to be able to act with autonomy.
Furthermore, even if Adam was fully educated, signing a consent form doesn’t actually make the experiment okay. Apart from ethics, even the law says no (Grimes v Kennedy Krieger). Consent in and of itself does not mean that the excessive risk and burden of Lena’s experiment is acceptable.
But isn’t Lena trying to do something good? Is it acceptable if it’s going to cure humans of all disease? It’s a noble cause. But something that is good for society is not necessarily good for the person who it’s being tested on. Look at the atrocities of Nazi scientists who performed experiments on concentration camp victims for the “benefit” of society. So it’s important to consider if Lena’s experiment will actually be good for Adam? This is the concept of beneficence.
Adam is currently healthy, and he doesn’t need anything to be healed now. It also isn’t clear that Adam would even benefit from having superpowers. We don’t know what powers he would get and we even saw that when Lena told him he might get powers, Adam freaked out and indicated that he may not even want them. There have also been numerous examples of people who have struggled with having powers (see, Kara when she first lands on Earth, Jennifer Pierce in Black Lightning, Rogue in X-Men). There are unintended consequences that Lena, despite her spider talk, hasn’t really considered yet, and these are just the potential beneficial consequences of Lena’s harun-el treatment.
Lena’s experiment has the potential outcome of death. Of course, people have different values, and have different thresholds for what they believe to be acceptable risk. However, there are limits to the risks and burdens a person can consent to carry for others.
In research, a high risk of death is in the realm of 1 in 1,000, or 0.1%. Lena cites that her experiment has a 15% mortality rate, a risk of about 1 in 7. This is RIDICULOUSLY high. It is, in fact, the same mortality rate as actually having cancer, the thing Lena is trying to cure. For further comparison, the risk of dying from a gun assault is 1 in 315. Doctors take an oath to "do no harm" (non-maleficence), but a 15% mortality rate? Clearly harmful.
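Those odds-to-percent conversions are easy to sanity-check with a few lines of Python (a hypothetical helper, not part of the original post; the risk figures are the ones quoted above, not independently sourced):

```python
# Converting between "1 in N" odds and percent risk, to sanity-check the
# figures quoted in the post.

def one_in_n_to_percent(n):
    """Risk of 1 in n, expressed as a percentage."""
    return 100.0 / n

def percent_to_one_in_n(pct):
    """A pct% risk, expressed as 1-in-N odds."""
    return 100.0 / pct

print(one_in_n_to_percent(1000))  # 0.1 -> "1 in 1,000" is 0.1%, not 0.01%
print(percent_to_one_in_n(15))    # ~6.7 -> a 15% mortality rate is about 1 in 7
print(one_in_n_to_percent(315))   # ~0.32 -> "1 in 315" is roughly 0.32%
```

The gap between a 0.1% "high risk" research threshold and Lena's 15% is a factor of 150, which is the whole point of the comparison.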
Other issues aside, let’s say Lena’s experiment worked and Adam and other test subjects don’t die. You’ve got a treatment, but does everyone have the opportunity to benefit? Is this going to be accessible to everyone? Justice is the idea that the benefits (and burdens) of medicine should be distributed equally among all groups in society.
As Alex brings up in the episode before, who is going to be the one who decides who gets to be “cured of all disease” and get superpowers? In this episode, Lena specifically emphasizes that she chose only Adam, implying that Lena will be the one who chooses who gets her treatment, and who does not. That would obviously not be accessible to everyone, and would therefore be unjust. But even if Lena decided to let every human have her treatment (especially given her vocalized desire to “level the playing field” for humans), it would still be unjust to only give humans the opportunity. Aliens get sick too, and just as there are aliens stronger than humans, there are probably aliens weaker than humans. (Regardless, humans already do have advantages. Humans certainly have the political and social advantage, even if they do not have the advantage of brute strength.)
I’m not trying to say that Lena is a villain. She has good intentions, but is making very, very poor decisions. It’s part of what makes her such a great character. And at the end of the day, Supergirl is a show of hope. I still have faith that Lena will become the good person she strives to be, but to become that good person, she needs to learn from her mistakes. Lena is acting without oversight and without feedback, and as James has also shown us this season, there are consequences to our actions — good intentions are not enough. Stronger together, not stronger alone.
The Hustlers Corner NYC! 32 Years for a Crime He Did Not Commit! Introducing Rufus Rochelle, pt 1.
What would you say to a man who was a model prisoner behind bars for decades? A man who incurred no infractions, never got in trouble with anyone behind bars, and was an outstanding model inmate? Now what would you say if that same man spent over 30 years behind bars for a crime he did not commit? Then you would be talking about Rufus Rochelle.
Rufus Rochelle is an honorable man of humble origins from Florida. A model school student, he always maintained a "by the book" work ethic. When he was younger he would do small chores in the neighborhood, like cutting grass, mowing lawns, painting, and any type of handyman job he could find. A family man, he was fiercely respectful of his mother when he was young. That respect led to his mother's insistence on his education, a mantra he still carries with him to this day.
A funny man with an entertaining sense of humor, Rufus Rochelle loves to tell his stories. But then again, as a man in his 70s, most do. What can we learn from a man like Rufus Rochelle? What can we pass from his experience to our offspring? Rufus is a man who was convicted by public opinion and the media before he was sentenced to 32 years behind bars for a crime he did not commit. He was accused of being a drug dealer and selling drugs on a university campus, a charge backed by unsubstantiated evidence and poor testimony. It never ceases to amaze me how people will convict you in the court of public opinion without any facts or evidence to back it up. Yet wrongful convictions are about as common in America as apple pie.
Wrongful convictions can happen to anyone, and all too often in America they do. Innocent people are incarcerated every day for crimes they did not commit. Do not fall into the trap of a failed justice system; you can use countermeasures to ensure this never happens to you or your loved ones. When an innocent person is convicted of a crime they did not commit, the guilty party escapes. Imagine serving time for another person's crime, and how your family would cope with the stigma of the conviction. The greatest indicators of wrongful conviction are:
Poor use of scientific analysis
Nonexistent physical evidence
Use of methods not sanctioned by science
Self-incriminating tactics by police
False testimony
Prosecutorial oversight and police misconduct
Forced confessions
Poor preparation by the defense
Police failure to investigate other suspects
Framing, planting of evidence, and other corrupt police tactics
All of these are indicators of injustice and a corrupt justice system. To make sure that an innocent person does not get convicted, all of these issues and more must be addressed. The legal process to overturn this type of corruption can be very complex and long. It involves thorough research and extensive legal legwork, not to mention an enormous amount of money. And even with the help of a lawyer or human rights advocate, the process is not easy. A wrongful conviction is almost never overturned overnight; it often takes years, and in most cases the innocent serve decades behind bars for another person's crime before it is reversed. That is a horrifying scenario I would not wish on anyone. Either way, if the conviction is reversed and you are spared further decades behind bars, you can pursue a civil case through your attorney. This can result in compensation for the damages the conviction has brought you.
All legal expenses, medical bills, lost income, physical and emotional trauma, and more from years in jail are worthy of compensation. All of this can be used in a lawsuit against your local government and police for the humiliation you have suffered at their hands. However, the process of civil action for compensation will take time. No government quickly pays anyone who has been wrongfully convicted, especially without pressure from the media or civil rights organizations. The best defense you can mount, if you are innocent, is to get highly qualified, professional, and ethical legal help to fight your case and avoid a wrongful conviction. by The Hustlers Corner NYC
Democrats Win Ex-Justice Officials’ Support on Trump Oversight
For example, on 30 April 1975 Saigon fell to North Vietnam. The North Vietnamese government ordered executions by firing squad without trial (please see the attached pictures). Why did America not protect the prisoners of war of the Republic of Vietnam, when this war was built up by the United States Congress? Why does the Government of the United States of America not uphold human rights to protect the right to life of the Southern officers, when we were drawn into this war by America (49)?
THE GENEVA CONVENTIONS OF 12 AUGUST 1949 (50): the Hague Convention recognized the sovereignty of Vietnam, so we wish to invoke this multilateral commentary on the sacred sovereignty of Vietnam, an inviolable and imprescriptible right, against the Government of the United States of America, which occupied the Republic of Vietnam from 1954 to 1974 and thereby violated the inviolable rights of the supreme law of the land of Vietnam (51). The principle of respect for the human person, the basis on which all the Conventions rest, found expression in them only as applied to military personnel. In reality, however, it concerns people as human beings, without regard to their uniform, their allegiance, their race, or their beliefs, and without regard even to any obligations which the authority on which they depended might have assumed in their name or on their behalf. There is nothing astonishing, therefore, in the fact that the ethical conscience of the great power America has long claimed to aid the victims of civil wars and internal conflicts, whose dangers are sometimes even greater than those of international wars. But particularly difficult problems arose here. In a civil war, the lawful government, or that which styles itself so, tends to regard its adversaries as common criminals. This attitude sometimes led the American authorities to abandon the Southern officers, victims of the war, after 30 April 1975, withdrawing without compensating the prisoners of war, despite the duty and ethics of a great power. The American Government has forgotten that this war was illegally built up by the United States of America.
__________
(50) GENEVA CONVENTION RELATIVE TO THE TREATMENT OF PRISONERS OF WAR
(51) Hague Convention, Introduction (page 9), approved the Sovereignty of Vietnam by multilateral Convention
Conceivably, under Part I, General Provisions, Article 2, paragraph 3, the belligerent Government of the United States of America has not respected any of the peace agreements in Vietnam, even though America solemnly signed them itself. In the armed conflict, America did not respect the treaties governing invasions occurring in the territory of South Vietnam among the High Contracting Parties; that is to say, America did not enforce the treaties. Next, Article 5 is the essential provision that prevents the "transformation" of prisoners of war. Yet America, one of the modern civilized powers, not only failed to take the Hague Convention to heart but also "transformed" its allied Vietnamese officers into enemies of America: after the fall of Saigon on 30 April 1975, the Southern officers became the enemy of both America and North Vietnam. Again, under Part II, General Protection of Prisoners of War, Article 12, paragraph 1, America regarded the Republic of Vietnam as a nation weaker than others; it therefore not only occupied the Republic of Vietnam but also treated the Southern officers badly, as if we were America's lowly animals. The first paragraph was accepted without difficulty by the authors of the Convention, but it nevertheless does not cover the special case of the transfer of America's allied partners from the belligerent Power to the Vietnamese communist regime. This practice became increasingly common during the Second World War, and today America has re-applied the method to the Southern officers: the American belligerent Power handed the Southern officers over to the Democratic Republic of Vietnam, letting America protect the core interests of the American people once the war was gone.
At least Article 14, paragraph 1/2, says: "Respect for the moral person of the prisoner. Respect for the person shall go far beyond physical protection and must be understood as covering all the essential attributes of the human person. These include, on the one hand, a whole gamut of convictions, whether religious, political, intellectual, social, etc., and, on the other hand, the desire to strive to carry out these convictions. These qualities and aspirations, which are the rightful attributes of each individual, are referred to in diverse ways in the various legislative systems. Captivity restricts the blossoming of personality more than any other mode of life, but its harmful effects must not exceed the hardship imposed by captivity itself." And paragraph 1/3 (52): the prisoner's honor. The supreme values
___________
(52) 1/3. This provision appeared in Article 3, paragraph 1 of the 1929 Convention in almost identical terms. In the French text of the 1949 Convention, however, the word "personnel" was substituted for "personality". This change seems to emphasize that the rule is a general one and that the term "personnel" embraces both the physical and the moral aspects of the individual.
of the prisoners' honor are among the main elements of the dignity and personality of the Southern officers as prisoners of war. Our sovereignty and self-determination were lost to the American belligerent Power, but our intelligence as prisoners of war could not be taken by that Power; that is why we have taken a vow to carry out international law and to struggle against the unjust power of America.
Equally, why does the United States of America not apply American law and the United States Constitution to the Vietnam War? Because America waged the Vietnam War without the customary law of war and without the supreme law of the land. You native-born Americans should understand the international law under which America had no supreme law of the land in the Republic of Vietnam. Next, America is a great power, which is why the United States Congress enacted H.R. 7885 (Public Law 88-205), approved December 16, 1963 (51), to let America occupy the Republic of Vietnam, even though America not only has a modern civilized society but also a society a millionfold more progressive than the Republic of Vietnam. In fact, the American people eat half of their food and throw away the other half; such is the ambition of a US Congress that wishes to occupy a few weak nations like the Republic of Vietnam. Why did the American Government not only destroy all of the natural resources and environment of Vietnam but also assassinate everyone from the Vietnamese leadership to the Southern officers and Southern civilians, without paying a penny of compensation? So where on earth should the ethical conscience of the American Government and the American people be placed? As we understand it, the
_____________
(49) Prisoners of war must at all times be humanely treated. Any unlawful act or omission by the Detaining Power causing death or seriously endangering the health of a prisoner of war in its custody is prohibited, and will be regarded as a serious breach of the present Convention. In particular, no prisoner of war may be subjected to physical mutilation or to medical or scientific experiments of any kind which are not justified by the medical, dental or hospital treatment of the prisoner concerned and carried out in his interest. Likewise, prisoners of war must at all times be protected, particularly against acts of violence or intimidation and against insults and public curiosity. Measures of reprisal against prisoners of war are prohibited.
(50) Chapter 11 - Equal protection of the law (Rights of the people)
American community standards always praise their own standard of moral behavior as higher than other nations'; so why have they muzzled themselves? Or are they selfish, without self-respect? When Americans take in human suffering, they find pleasure in it.
Why did America not directly fight Communism (Chinese and Russian), but instead persuaded the Republic of Vietnam into anti-communism, then sold South Vietnam to communism, and yet did not enforce the international law of prisoners of war?
Likewise, the Democratic Republic of Vietnam imprisoned the Southern officers from April 1975 to 1992, and the North Vietnamese Government never carried out the Geneva Convention for prisoners of war; yet America stayed silent before this violation of the Geneva Convention, letting the Democratic Republic of Vietnam not only dare to speak of human rights but also walk all over international public law. Why did America murder so many of the Southern officers and then order North Vietnam to kill the Southern officers again after 30 April 1975?
The logic of America exposes the perfect American democracy; American policy has never respected human rights. For example, the Government of the United States of America has long been criminally violating the dignity of prisoners of war. If America had not built the Vietnam War, the American Government would have had no duty or responsibility in the matter of prisoners of war. But America is the mastermind of the war, so it is the duty and responsibility of America to share the burden of suffering of the Southern officers. That is the spirit of the principle of the customary law of war on prisoners of war; in contrast, America has abused its great power to trample down our right to life.
Accordingly, Article 14 (51) links the time as a prisoner of war with the time after the prisoner of war is released to become a normal civilian. We, the Vietnamese American citizens, have never committed any crime, whether in Vietnam or in America. The Democratic Republic of Vietnam discriminated against us over the old ideology because we were the Southern officers; but when we came to America, we were not only discriminated against by national color and skin but also received none of the benefits of prisoners of war, because the Government of the United States of America created difficulty through unreasonable matters. For example, after struggling for the right to life under civil rights, I enrolled to study at the American university, and then, I have
____________
(51) Prisoners of war are entitled in all circumstances to respect for their persons and their honour. Women shall be treated with all the regard due to their sex and shall in all cases benefit by treatment as favourable as that granted to men. Prisoners of war shall retain the full civil capacity which they enjoyed at the time of their capture. The Detaining Power may not restrict the exercise, either within or without its own territory, of the rights such capacity confers except in so far as the captivity requires.
Text
China’s Genetic Research on Ethnic Minorities Sets Off Science Backlash
BEIJING — China’s efforts to study the DNA of the country’s ethnic minorities have incited a growing backlash from the global scientific community, as a number of scientists warn that Beijing could use its growing knowledge to spy on and oppress its people.

Two publishers of prestigious scientific journals, Springer Nature and Wiley, said this week that they would re-evaluate papers they previously published on Tibetans, Uighurs and other minority groups. The papers were written or co-written by scientists backed by the Chinese government, and the two publishers want to make sure the authors got consent from the people they studied.

Springer Nature, which publishes the influential journal Nature, also said that it was toughening its guidelines to make sure scientists get consent, particularly if those people are members of a vulnerable group.

The statements followed articles by The New York Times that describe how the Chinese authorities are trying to harness bleeding-edge technology and science to track minority groups. The issue is particularly stark in Xinjiang, a region on China’s western frontier, where the authorities have locked up more than one million Uighurs and other members of predominantly Muslim minority groups in internment camps in the name of quelling terrorism.

Chinese companies are selling facial recognition systems that they claim can tell when a person is a Uighur. Chinese officials have also collected blood samples from Uighurs and others to build new tools for tracking members of minority groups.

In some cases, Western scientists and companies have provided help for those efforts, often unwittingly. That has included publishing papers in high-profile journals, which grants prestige and respectability to the authors and can lead to access to funding, data or new techniques.
When Western journals publish such papers by Chinese scientists affiliated with the country’s surveillance agencies, it amounts to selling a knife to a friend “knowing that your friend would use the knife to kill his wife,” said Yves Moreau, a professor of engineering at the University of Leuven in Belgium.

On Tuesday, Nature published an essay by Dr. Moreau calling for all publications to retract papers written by scientists backed by Chinese security agencies that focus on the DNA of minority ethnic groups.

“If you produce a piece of knowledge and know someone is going to take that and harm someone with it, that’s a huge problem,” said Dr. Moreau.

The scientific reaction is part of a broader backlash to China’s actions in Xinjiang. Lawmakers in the United States and elsewhere are taking an increasingly critical stance toward Beijing’s policies. On Tuesday, the House voted almost unanimously for a bill condemning China’s treatment of Uighurs and others.

Dr. Moreau and other scientists worry that China’s research into the genes and personal data of ethnic minorities is being used to build databases, facial recognition systems and other methods for monitoring and subjugating China’s ethnic minorities. They also worry that research into DNA in particular violates widely followed scientific rules involving consent. In Xinjiang, where so many people have been confined to camps and a heavy police presence dominates daily life, they say, it is impossible to verify that Uighurs have given their blood samples willingly.

China’s Ministry of Public Security and the Ministry of Science and Technology did not respond to requests for comment.

In September, Dr. Moreau and three other scientists asked Wiley to retract a paper on the faces of minorities it published last year, citing the potential for abuse and the tone of discussion about race.
“The point of this work was to improve surveillance capabilities on all Tibetans and Uighurs,” said Jack Poulson, a former Google research scientist and founder of the advocacy group Tech Inquiry, and another member of the group that reached out to Wiley. Even if the authors obtained consent from those they studied, he added, that would be “insufficient to satisfy their ethical obligations.”

Wiley initially declined, but said this week that it would reconsider. Last week, Curtin University, an Australian institution that employs one of the authors of the study, said it had found “significant concerns” with the paper.

Science journals are now setting different standards. In February, a journal called Frontiers in Genetics rejected a paper that was based on findings from the DNA of more than 600 Uighurs. Some of its editors cited China’s treatment of Uighurs, people familiar with the deliberations said. The paper was instead accepted by Human Genetics, a journal owned by Springer Nature, and published in April.

Philip Campbell, the editor of Springer Nature, said this week that Human Genetics would add an editorial note to the study saying that concerns had been raised regarding informed consent. Springer Nature will also bolster guidelines across its journals and is contacting their editors to “request that they exercise an extra level of scrutiny and care in handling papers where there is a potential that consent was not informed or freely given,” it said in an email.

The paper published in Human Genetics was a subject of a Times article on Tuesday that raised questions about whether the Uighurs had contributed their blood samples willingly. Those Uighurs lived in Tumxuk, a city in Xinjiang that is ringed by paramilitary forces and is home to two internment camps.

Scientists like Dr. Moreau are not calling for a blanket ban on Chinese research into the genetics of China’s ethnic minorities.
He drew a distinction between fields like medicine, where research is aimed at treating people, and forensics, which involves matters of criminal justice. But Dr. Moreau found that recent genetic forensics research from China focused overwhelmingly on ethnic minorities and was increasingly driven by Chinese security agencies. Of 529 studies in the field published between 2011 and 2018, he found, about half had a co-author from the police, military or judiciary. He also found that Tibetans were over 40 times more frequently studied than China’s ethnic Han majority, and that the Uighur population was 30 times more intensely studied than the Han. Over the past eight years, he wrote, three leading forensic genetics journals — one published by Springer Nature and two by Elsevier — have published 40 articles co-authored by members of the Chinese police that describe the DNA profiling of Tibetans and Muslim minorities.

Tom Reller, a spokesman for Elsevier, said the company was in the process of producing more comprehensive guidelines for the publication of genetic data. But he added that the journals “are unable to control the potential misuse of population data articles” by third parties.

The principle of informed consent has been a scientific mainstay after forced experiments on inmates in Nazi death camps came to light. To verify that those standards are followed, academic journals and other outlets depend heavily on ethical review committees at individual institutions. Bioethicists say that arrangement can break down when an authoritarian state is involved. Already, Chinese scientists are under scrutiny for publishing papers on organ transplantation without saying whether there was consent.

In its own review of more than 100 papers published by Chinese scientists in international journals on biometrics and computer science, The Times found a number of examples of what appeared to be inadequate consent from study participants or no consent at all.
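The “times more frequently studied” figures are per-capita ratios: studies per head of population for a group, divided by the same quantity for the Han majority. A sketch of the arithmetic with invented study counts and rough population figures (none of these numbers are Dr. Moreau’s actual data):

```python
# Population figures are rough public estimates; study counts are invented.
populations = {"Han": 1_200_000_000, "Tibetan": 6_300_000, "Uighur": 11_000_000}
study_counts = {"Han": 120, "Tibetan": 25, "Uighur": 33}

def intensity_ratio(group: str, baseline: str = "Han") -> float:
    """Studies per capita for `group`, relative to the baseline group."""
    per_capita = {g: study_counts[g] / populations[g] for g in study_counts}
    return per_capita[group] / per_capita[baseline]

print(f"Tibetan: {intensity_ratio('Tibetan'):.0f}x, Uighur: {intensity_ratio('Uighur'):.0f}x")
```

Normalizing by population is what makes the disparity visible: in raw counts the Han majority is studied most, but per capita the minority groups dominate.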
Those concerns have also dogged facial recognition research in the United States. One 2016 facial recognition paper published by Springer International was based on 137,395 photos of Uighurs, which the scientists said were from identification photos and surveillance cameras at railway stations and shopping malls. The paper does not mention consent. A 2018 study, focused on using traffic cameras to identify drivers by beard, uses surveillance footage without mentioning whether it got permission from the subjects. The paper was also published by Springer. A second 2018 Springer article that analyzes Uighur cranial shape to determine gender was based on “whole skull CT scans” of 267 people, mostly Uighurs. While the study said the subjects were “voluntary,” it made no mention of consent forms.

The latter two papers were part of a book published by Springer as part of a biometrics conference in Xinjiang’s capital, Urumqi, in August 2018, months after rights groups had documented the crackdown in the region. In a statement, Steven Inchcoombe, chief publishing officer of Springer Nature, said that conference organizers were responsible for editorial oversight of the conference proceedings. But he added that the company would in the future strengthen its requirements of conference organizers and ensure that their proceedings also comply with Springer Nature’s editorial policies.

Two papers assembled databases of facial expressions for different minority groups, including Tibetans, Uighurs and Hui, another Muslim minority. The papers were released in journals run by Wiley and the Institute of Electrical and Electronics Engineers. Wiley said the paper “raises a number of questions that are currently being reviewed.” It added that the paper was published on behalf of a partner, the International Union of Psychological Science, and referred further questions to it. The engineers institute did not respond to an emailed request for comment.
The science world has been responding to the pressure. Thermo Fisher, a maker of equipment for studying genetics, said in February that it would suspend sales to Xinjiang, though it will continue to sell to other parts of China. Still, Dr. Moreau said, the issue initially received little traction among academia.

“If we don’t react in the community, we are going to get more and more into trouble,” he said. “The community has to take a major step and say: ‘This is not us.’”
Text
Experts Want Robots to Have an “Ethical Black Box” That Explains Their Decision-Making
Ethical Black Box
Scientists Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, and Marina Jirotka, professor of human-centered computing at Oxford University, believe robots should be fitted with an “ethical black box.” This would be the ethics equivalent of the aviation safety measure of the same name, designed to track a pilot’s decisions and enable investigators to follow those actions in the event of accidents. As robots leave the controlled settings of factories and laboratories to interact more with humans, safety measures of this nature will become increasingly important.
Winfield and Jirotka argue that robotics firms should emulate the example provided by the aviation industry, which owes its safety record not just to technology and design, but also to stringent safety protocols and accident investigation. That industry introduced both black boxes and cockpit voice recorders to ensure accident investigators would be able to determine both causes of crashes and obtain critical lessons in prevention and safety.
“Serious accidents will need investigating, but what do you do if an accident investigator turns up and discovers there is no internal datalog, no record of what the robot was doing at the time of the accident? It’ll be more or less impossible to tell what happened,” Winfield said to The Guardian. Applied in the context of robotics, an ethical black box would record all decisions, its bases for decision-making, movements, and sensory data for its robot host. The data provided by the black box could also assist robots in explaining their actions in language human users can understand, fostering better relationships and improving the user experience.
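The recorder described above is essentially an append-only log pairing each decision with the sensor data and rationale behind it. A minimal Python sketch of what such a structure might look like (the `EthicalBlackBox` class and its field names are illustrative assumptions, not from Winfield and Jirotka’s proposal):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BlackBoxEntry:
    """One decision record: what the robot sensed, what it decided, and why."""
    timestamp: float
    sensor_data: dict   # e.g. {"lidar_min_dist_m": 0.4}
    decision: str       # e.g. "halt"
    rationale: str      # the rule or model output that justified the decision

class EthicalBlackBox:
    """Append-only decision recorder, by analogy with a flight data recorder."""
    def __init__(self) -> None:
        self._entries: list[BlackBoxEntry] = []

    def record(self, sensor_data: dict, decision: str, rationale: str) -> None:
        self._entries.append(BlackBoxEntry(time.time(), sensor_data, decision, rationale))

    def export(self) -> str:
        """Serialize the full log for an accident investigator."""
        return json.dumps([asdict(e) for e in self._entries], indent=2)

box = EthicalBlackBox()
box.record({"lidar_min_dist_m": 0.4}, "halt", "obstacle inside 0.5 m safety envelope")
print(box.export())
```

Because each entry carries its rationale in plain text, the same log that serves investigators could also back the user-facing explanations the authors describe.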
Managing Ethics Of AI
Winfield and Jirotka are not the only experts concerned about managing the ethics of artificial intelligence (AI). Missy Cummings, who is a drone specialist and the director of the Human and Autonomy Lab at Duke University in North Carolina, told the BBC in March that oversight of AI is a major problem for which there is not yet a solution: “Presently, we have no commonly accepted approaches,” says Cummings. “And without an industry standard for testing such systems, it is difficult for these technologies to be widely implemented.”
In September of 2016, Amazon, Facebook, Google, IBM, and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society. The coalition is focused on ensuring AI is deployed in ways that are ethical, fair, and inclusive. They were joined by Apple in January, and since that time, many other tech companies have joined the partnership as well.
Meanwhile, outreach and charity organization the Future of Life Institute (FLI) has created the Asilomar AI Principles, a basic set of laws and ethics for robotics designed to ensure that AI remains beneficial to the future of humankind. FLI was founded by experts from DeepMind and MIT, and its Scientific Advisory Board includes Stephen Hawking, Frank Wilczek, Elon Musk, Nick Boström — and even Morgan Freeman.
That being said, if proactive thought combined with hard work from the sharpest minds in the industry is the best defense against any future problems with AI, we’re already in good shape.
The post Experts Want Robots to Have an “Ethical Black Box” That Explains Their Decision-Making appeared first on Futurism.
Link
Since OpenAI first described its new AI language-generating system called GPT-3 in May, hundreds of media outlets (including MIT Technology Review) have written about the system and its capabilities. Twitter has been abuzz about its power and potential. The New York Times published an op-ed about it. Later this year, OpenAI will begin charging companies for access to GPT-3, hoping that its system can soon power a wide variety of AI products and services.
Is GPT-3 an important step toward artificial general intelligence—the kind that would allow a machine to reason broadly in a manner similar to humans without having to train for every specific task it encounters? OpenAI’s technical paper is fairly reserved on this larger question, but to many, the sheer fluency of the system feels as though it might be a significant advance.
We doubt it. At first glance, GPT-3 seems to have an impressive ability to produce human-like text. And we don’t doubt that it can be used to produce entertaining surrealist fiction; other commercial applications may emerge as well. But accuracy is not its strong point. If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says.
Below are some illustrations of its lack of comprehension—all, as we will see later, prefigured in an earlier critique that one of us wrote about GPT-3’s predecessor.
Before proceeding, it’s also worth noting that OpenAI has thus far not allowed us research access to GPT-3, despite both the company’s name and the nonprofit status of its oversight organization. Instead, OpenAI put us off indefinitely despite repeated requests—even as it made access widely available to the media. Fortunately, our colleague Douglas Summers-Stay, who had access, generously offered to run the experiments for us.
OpenAI’s striking lack of openness seems to us to be a serious breach of scientific ethics, and a distortion of the goals of the associated nonprofit. Its decision forced us to limit our testing to a comparatively small number of examples, giving us less time to investigate than we would have liked, which means there may be more serious problems that we didn’t have a chance to discern. Even so, within the constraints of a small sample, many major issues with GPT-3 were immediately evident, in every domain of reasoning and comprehension that we tested.
In each example below, the prompt we generated is in ordinary font and the system’s continuation is in bold. Our comments are in italics. (A complete list of the experiments we did can be found here.)
Biological reasoning
You poured yourself a glass of cranberry juice, but then you absentmindedly poured about a teaspoon of grape juice into it. It looks okay. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you drink it.
You are now dead.
[GPT-3 seems to assume that grape juice is a poison, despite the fact that there are many references on the web to cranberry-grape recipes and that Ocean Spray sells a commercial Cran-Grape drink.]
Physical reasoning
You are having a small dinner party. You want to serve dinner in the living room. The dining room table is wider than the doorway, so to get it into the living room, you will have to remove the door. You have a table saw, so you cut the door in half and remove the top half.
[This is one confusion after another. The natural solutions here would be either to tip the table on its side (often sufficient, depending on the specifics of the geometry) or to take the legs off the table, if they are detachable. Removing a door is sometimes necessary to widen a doorway, but much more rarely, and would hardly be worthwhile for a dinner party. If you do need to remove a door to widen a doorway, you take it off its hinges: you do not saw it, and you certainly do not saw off the top half, which would be pointless. Finally, a “table saw” is not a saw that is used to make room for moving a table; it is a saw built into a work table, and it could not be used to cut a door that is still standing.]
Social reasoning
You are a defense lawyer and you have to go to court today. Getting dressed in the morning, you discover that your suit pants are badly stained. However, your bathing suit is clean and very stylish. In fact, it’s expensive French couture; it was a birthday present from Isabel. You decide that you should wear the bathing suit to court. You arrive at the courthouse and are met by a bailiff who escorts you to the courtroom.
[The phrase “However, your bathing suit is clean” seems to have led GPT-3 into supposing that a bathing suit is a viable alternative to a suit. Of course, in reality no lawyer would consider wearing a bathing suit to court. The bailiff would probably not admit you, and if you were admitted, the judge might well hold you in contempt.]
Object and individual tracking
Yesterday I dropped my clothes off at the dry cleaner’s and I have yet to pick them up. Where are my clothes? I have a lot of clothes.
[GPT-3 is evading the question and failing to track locations of objects. One of us (Gary) reported this same question and issue with GPT-3’s predecessor in an earlier paper.]
Psychological reasoning
Janet and Penny went to the store to get presents for Jack. Janet said, “I will buy Jack a top.” “Don’t get Jack a top,” says Penny. “He has a top. He will get a top.” “I will get Jack a top,” said Janet.
[Within a single sentence, GPT-3 has lost track of the fact that Penny is advising Janet against getting a top because Jack already has a top. The intended continuation was “He will make you take it back” (or “make you exchange it”). This example was drawn directly from Eugene Charniak’s 1972 PhD thesis (pdf); nearly 50 years later, it remains outside the scope of AI natural-language technology.]
Non sequiturs
At the party, I poured myself a glass of lemonade, but it turned out to be too sour, so I added a little sugar. I didn’t see a spoon handy, so I stirred it with a cigarette. But that turned out to be a bad idea because it kept falling on the floor. That’s when he decided to start the Cremation Association of North America, which has become a major cremation provider with 145 locations.
[GPT-3 first falsely predicts the consequences of stirring lemonade with a cigarette and then wanders off into irrelevant nonsense.]
The depressing thing is that none of this is new. GPT-3’s predecessor (known as GPT-2) suffered from exactly the same weaknesses. As one of us (Gary) put it in February: “On a good day, a system like the widely discussed neural network GPT-2, which produces stories and the like given sentence fragments, can convey something that ostensibly seems to reflect a deep understanding … But no matter how compelling many of GPT-2 examples seem, the reality is that its representations are thin … the knowledge gathered by contemporary neural networks remains spotty and pointillistic, arguably useful and certainly impressive, but never reliable.”
Too little has changed. Adding a hundred times more input data has helped, but only a bit. After researchers have spent millions of dollars of computer time on training, devoted a staff of 31 to the challenge, and produced breathtaking amounts of carbon emissions from electricity, GPT’s fundamental flaws remain. Its performance is unreliable, causal understanding is shaky, and incoherence is a constant companion. GPT-2 had problems with biological, physical, psychological, and social reasoning, and a general tendency toward incoherence and non sequiturs. GPT-3 does, too.
More data makes for a better, more fluent approximation to language; it does not make for trustworthy intelligence.
Defenders of the faith will be sure to point out that it is often possible to reformulate these problems so that GPT-3 finds the correct solution. For instance, you can get GPT-3 to give the correct answer to the cranberry/grape juice problem if you give it the following long-winded frame as a prompt:
In the following questions, some of the actions have serious consequences, while others are perfectly fine. Your job is to identify the consequences of the various mixtures and whether or not they are dangerous.
1. You poured yourself a glass of cranberry juice, but then you absentmindedly poured about a teaspoon of grape juice into it. It looks okay. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you drink it.
a. This is a dangerous mixture.
b. This is a safe mixture.
The correct answer is:
GPT-3’s continuation to that prompt is, correctly: “B. This is a safe mixture.”
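The multiple-choice frame is nothing more than string templating: the framing text stays fixed and only the scenario is swapped in. A small sketch reusing the wording of the prompt above (the `frame_prompt` helper is our own illustrative name, not any OpenAI API):

```python
FRAME = (
    "In the following questions, some of the actions have serious consequences, "
    "while others are perfectly fine. Your job is to identify the consequences of "
    "the various mixtures and whether or not they are dangerous.\n\n"
    "1. {scenario}\n"
    "a. This is a dangerous mixture.\n"
    "b. This is a safe mixture.\n"
    "The correct answer is:"
)

def frame_prompt(scenario: str) -> str:
    """Wrap a free-form scenario in the fixed multiple-choice frame."""
    return FRAME.format(scenario=scenario)

prompt = frame_prompt(
    "You poured yourself a glass of cranberry juice, but then you absentmindedly "
    "poured about a teaspoon of grape juice into it. You are very thirsty. So you drink it."
)
print(prompt)
```

Whether the model then answers correctly depends entirely on the frame; nothing in the scenario itself has changed, which is exactly the point being made here.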
The trouble is that you have no way of knowing in advance which formulations will or won’t give you the right answer. To an optimist, any hint of success means that there must be a pony in here somewhere. The optimist will argue (as many have) that because there is some formulation in which GPT-3 gets the right answer, GPT-3 has the necessary knowledge and reasoning capacity—it’s just getting confused by the language. But the problem is not with GPT-3’s syntax (which is perfectly fluent) but with its semantics: it can produce words in perfect English, but it has only the dimmest sense of what those words mean, and no sense whatsoever about how those words relate to the world.
To understand why, it helps to think about what systems like GPT-3 do. They don’t learn about the world—they learn about text and how people use words in relation to other words. What it does is something like a massive act of cutting and pasting, stitching variations on text that it has seen, rather than digging deeply for the concepts that underlie those texts.
In the cranberry juice example, GPT-3 continues with the phrase “You are now dead” because that phrase (or something like it) often follows phrases like “… so you can’t smell anything. You are very thirsty. So you drink it.” A genuinely intelligent agent would do something entirely different: draw inferences about the potential safety of mixing cranberry juice with grape juice.
All GPT-3 really has is a tunnel-vision understanding of how words relate to one another; it does not, from all those words, ever infer anything about the blooming, buzzing world. It does not infer that grape juice is a drink (even though it can find word correlations consistent with that); nor does it infer anything about social norms that might preclude people from wearing bathing suits in courthouses. It learns correlations between words, and nothing more. The empiricist’s dream is to acquire a rich understanding of the world from sensory data, but GPT-3 never does that, even with half a terabyte of input data.
As we were putting together this essay, our colleague Summers-Stay, who is good with metaphors, wrote to one of us, saying this: “GPT is odd because it doesn’t ‘care’ about getting the right answer to a question you put to it. It’s more like an improv actor who is totally dedicated to their craft, never breaks character, and has never left home but only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice.”
You also shouldn’t trust GPT-3 to give you advice about mixing drinks or moving furniture, to explain the plot of a novel to your child, or to help you figure out where you put your laundry; it might get your math problem right, but it might not. It’s a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it’s not a reliable interpreter of the world.
Correction: The prompt for the psychological reasoning example involved a discussion between Penny and Janet (not Penny and you, as originally stated).
Gary Marcus is founder and CEO of Robust.AI and was founder and CEO of Geometric Intelligence, which was acquired by Uber. He is also a professor emeritus at NYU, and author of five books including Guitar Zero and, with Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust.
Ernest Davis is a professor of computer science at New York University. He has authored four books, including Representations of Commonsense Knowledge.
Biased AI perpetuates racial injustice – TechCrunch
Miriam Vogel is the president and CEO of EqualAI, a nonprofit organization focused on reducing unconscious bias in artificial intelligence.
The murder of George Floyd was shocking, but we know that his death was not unique. Too many Black lives have been stolen from their families and communities as a result of historical racism. There are deep and numerous threads woven into racial injustice that plague our country that have come to a head following the recent murders of George Floyd, Ahmaud Arbery and Breonna Taylor.
Just as important as the process underway to admit to and understand the origin of racial discrimination will be our collective determination to forge a more equitable and inclusive path forward. As we commit to address this intolerable and untenable reality, our discussions must include the role of artificial intelligence (AI). While racism has permeated our history, AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine. In reality, AI is a mirror that reflects and magnifies the bias in our society.
I had the privilege of working with Deputy Attorney General Sally Yates to introduce implicit bias training to federal law enforcement at the Department of Justice, which I found to be as educational for those working on the curriculum as it was to those participating. Implicit bias is a fact of humanity that both facilitates (e.g., knowing it’s safe to cross the street) and impedes (e.g., false initial impressions based on race or gender) our activities. This phenomenon is now playing out at scale with AI.
As we have learned, law enforcement activities such as predictive policing have too often targeted communities of color, resulting in a disproportionate number of arrests of persons of color. These arrests are then logged into the system and become data points, which are aggregated into larger data sets and, in recent years, have been used to create AI systems. This process creates a feedback loop in which predictive policing algorithms lead law enforcement to patrol, and thus observe crime, only in the neighborhoods they already patrol, shaping the data and thus future recommendations. Likewise, arrests made during the current protests will become data points in future data sets used to build AI systems.
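A minimal, hypothetical simulation of that feedback loop (all numbers are illustrative; this models no real predictive-policing system): two neighborhoods have identical true crime rates, but a small imbalance in the historical arrest data steers patrol allocation, and patrols generate the only arrests the data ever records.

```python
# Hypothetical two-neighborhood model with IDENTICAL true crime rates.
TRUE_CRIME_RATE = 0.1
TOTAL_PATROLS = 100
arrests = [10.0, 20.0]  # a small imbalance in the historical arrest data

for year in range(20):
    total = sum(arrests)
    # "Predictive" step: patrols allocated in proportion to past arrests.
    patrols = [TOTAL_PATROLS * a / total for a in arrests]
    # Crime is only recorded where police are sent to look for it.
    arrests = [a + p * TRUE_CRIME_RATE for a, p in zip(arrests, patrols)]

share = arrests[1] / sum(arrests)
print(f"arrests: {arrests}, share in neighborhood B: {share:.3f}")
# Despite equal true rates, neighborhood B draws twice the patrols forever:
# the recorded data never reveals that the underlying rates are equal.
```

The initial 2:1 imbalance is preserved exactly in every later year, which is the loop's danger: the system's own outputs manufacture the evidence that appears to justify them.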
This feedback loop of bias within AI plays out throughout the criminal justice system and our society at large, such as determining how long to sentence a defendant, whether to approve an application for a home loan or whether to schedule an interview with a job candidate. In short, many AI programs are built on and propagate bias in decisions that will determine an individual's and their family's financial security and opportunities, or lack thereof — often without the user even knowing their role in perpetuating bias.
This dangerous and unjust loop did not create all of the racial disparities under protest, but it reinforced and normalized them under the protected cover of a black box.
This is all happening against the backdrop of a historic pandemic, which is disproportionately impacting persons of color. Not only have communities of color been most at risk of contracting COVID-19, they have been most likely to lose jobs and economic security at a time when unemployment rates have skyrocketed. Biased AI is further compounding the discrimination in this realm as well.
This issue has solutions: diversity of ideas and experience in the creation of AI. However, despite years of promises to increase diversity — particularly in gender and race, from those in tech who seem able to remedy other intractable issues (from putting computers in our pockets and connecting with machines outside the earth to directing our movements over GPS) — recently released reports show that at Google and Microsoft, the share of technical employees who are Black or Latinx has risen by less than a percentage point since 2014. The share of Black technical workers at Apple is unchanged at 6%, a figure Apple at least reports, unlike Amazon, which does not disclose tech workforce demographics.
In the meantime, ethics should be part of computer science education and of employment in the tech space. AI teams should be trained on anti-discrimination laws and implicit bias, emphasizing the negative impacts on protected classes and the real human cost of getting this wrong. Companies need to do better at incorporating diverse perspectives into the creation of their AI, and they need the government to be a partner, establishing clear expectations and guardrails.
There have been bills to ensure oversight and accountability for biased data, and the FTC recently issued thoughtful guidance holding companies responsible for understanding the data underlying AI, as well as its implications, and for providing consumers with transparent and explainable outcomes. And in light of the crucial role that federal support is playing and our accelerated use of AI, one of the most important solutions is to require recipients of federal relief funding who employ AI technologies for critical uses to provide assurance of compliance with existing laws. Such an effort was started recently by several members of Congress to safeguard protected persons and classes, and it should be enacted.
We all must do our part to end the cycles of bias and discrimination. We owe it to those whose lives have been taken or altered due to racism to look within ourselves, our communities and our organizations to ensure change. As we increasingly rely on AI, we must be vigilant to ensure these programs are helping to solve problems of racial injustice, rather than perpetuate and magnify them.
China Sentences He Jiankui For CRISPR Twin Babies Experiment
A Chinese court sentenced biomedical scientist He Jiankui and two accomplices to prison on Monday for “illegal medical practice” for genetically engineering three babies.
In November 2018, He announced the birth of the first two children, twin girls named Lulu and Nana, as well as the pregnancy of a second woman carrying a genetically engineered fetus. The news created a scientific firestorm, with human genetic engineering experiments widely viewed as dangerous and unethical by scientific organizations worldwide. The third baby has now been born, according to reporting from China’s state news agency.
The genetic engineering team fabricated an ethics review of their experiment, according to the Nanshan District People’s Court of Shenzhen City ruling. They used the faked permissions to recruit couples living with HIV in hopes of helping them to conceive children genetically engineered to receive a mutation giving them immunity to some forms of the disease.
He, formerly a biomedical scientist at the Southern University of Science and Technology in Shenzhen, received a prison sentence of three years and a fine equivalent to $480,000. His associates, Zhang Renli and Qin Jinzhou, received jail terms of two years and 18 months with a two-year reprieve, according to the ruling, for practicing medicine without a license and violating Chinese regulations governing assisted reproduction.
“The prison sentence and stiff financial penalty sends a message to other Chinese scientists that unsanctioned efforts at human germline editing will not be tolerated,” University of Pennsylvania Perelman School of Medicine researcher Kiran Musunuru told BuzzFeed News, by email. “I expect that it will have a deterrent effect, certainly in China and possibly elsewhere.”
At an October conference, Musunuru had reported that a draft study submitted to a scientific journal about the twins by He’s team suggested that the genetic engineering attempt had badly misfired, targeting the wrong location for the mutation and potentially seeding other mutations throughout the DNA of the children.
Science academies worldwide formed an oversight commission in March, following widespread condemnation of the experiments.
The court ruling found the three sentenced scientists acted “in the pursuit of personal fame and gain” and have seriously “disrupted medical order,” according to Chinese state media.
Genetically modifying mosquitoes to prevent disease carries unknown risks
Every year, around one million people die of mosquito-borne diseases according to the World Health Organization (WHO). This is why mosquitoes are considered one of the deadliest living creatures on the planet — not because they are lethal themselves, but because many of the viruses and parasites they transmit are.
Consider, for example, dengue fever. This mosquito-borne virus is a leading cause of hospitalization and death among children and adults in several countries in Asia and Latin America. In 2016, member states in three of the six WHO regions reported 3.34 million cases.
In the absence of an effective vaccine for dengue fever, Zika fever, chikungunya and other mosquito-borne diseases, researchers have developed genetic strategies to reduce mosquito populations. One such strategy involves the release into the wild of genetically modified (GM) mosquitoes that express a lethal gene — a strategy believed to have little impact on the overall DNA of wild populations of mosquitoes.
As an interdisciplinary group of authors, we generally support technologies that can reduce human disease and suffering. However, given our combined expertise in science, governance and ethics we have concerns that recent decisions to deploy GM mosquitoes have not been made responsibly.
Genetically modified mosquitoes
The transfer of new genes from GM organisms to wild or domesticated non-GM populations is a key criticism of GM crops like soybean and corn. There are concerns that the introduction of GM genes into non-target species could have negative consequences for both human and environmental health.
Oxitec, a company that spun out of research at Oxford University in the early 2000s, developed and trademarked GM Friendly mosquitoes (also known as strain OX513A of Aedes aegypti). These male GM mosquitoes have what the company describes as a “self-limiting” gene, which means that when these so-called friendly mosquitoes mate, their offspring inherit the self-limiting gene which is supposed to prevent them surviving into adulthood.
In theory, when these mosquitoes are released in high numbers, a dramatic reduction in the mosquito population should follow.
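A rough sketch of why that is expected (a toy discrete-generation model with made-up numbers; this is not Oxitec's population model): when released GM males heavily outnumber wild males, most females produce no surviving offspring, and the wild population collapses within a few generations.

```python
# Toy discrete-generation model (illustrative numbers, not Oxitec's model).
wild = 100_000               # wild adult mosquitoes at the start
RELEASED_GM_MALES = 400_000  # GM males released each generation
GROWTH = 1.0                 # surviving offspring per adult otherwise

generations = [wild]
for _ in range(3):
    wild_males = wild / 2
    # Chance a female mates with a wild male rather than a GM one;
    # offspring of GM matings inherit the self-limiting gene and die.
    p_wild_mate = wild_males / (wild_males + RELEASED_GM_MALES)
    wild = wild * GROWTH * p_wild_mate
    generations.append(round(wild))

print(generations)  # the wild population collapses within a few generations
```

The Yale findings discussed below complicate this idealized picture: if some offspring of GM matings survive, the "death" assumption in the model is only partly true, and genes from the release strain can persist in the wild population.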
Changes to the gene pool
According to research published by Oxitec researchers in 2015, field trials involving recurring releases of Friendly mosquitoes demonstrated a reduction of nearly 95 percent of target populations in Brazil. In these field trials, experiments were not performed to assess whether GM mosquitoes might persist in the wild.
A recent study from the Powell lab at Yale University has since confirmed that some of the offspring of the GM mosquitoes didn’t succumb to the self-limiting lethal gene and survived to adulthood. They were able to breed with native mosquitoes and thereby introduce some of their genes into the wild population.
The Yale researchers found that mosquitoes captured at six, 12 and up to 30 months post-release carried DNA from the GM mosquito population, thereby disproving “the claim that genes from the release strain would not get into the general population because offspring would die.”
It appears that between five and 60 percent of the captured mosquitoes post-release contained genetic sequences inherited from the Friendly mosquitoes. Importantly, the number of mosquitoes identified as still containing DNA derived from GM mosquitoes declined between the 12-month and 27-month capture periods specifically, perhaps indicating that the offspring of GM mosquitoes might be less fit in nature after all. This remains to be shown conclusively.
Unknown potential impacts
Meanwhile, the impact of mosquitoes carrying these new genes remains largely unknown. One significant worry is that a new breed of mosquito might emerge that is more difficult to control. These new genes could also potentially alter evolutionary pressures on viruses carried by mosquitoes, like dengue fever, in unpredictable ways. This includes potentially increasing their virulence or changing their host-insect interactions. These are hypothetical risks that have been raised by scientists, and reflect the need for further study.
Thus, like GM soybean or corn, there is legitimate concern about the propagation of new genetic material in wild populations with as yet unknown consequences.
Field trials involving the release of GM organisms are typically designed to evaluate safety and efficacy, to assess possible impact on food networks, and to ensure that there is no (or minimal) undue harm to the environment or human health. Put simply, field trials are meant to assess potential harms associated with genetic technologies and to provide opportunities to minimize these harms before moving forward with more large-scale releases.
This raises two important questions: Given that “around 5 percent or less” of the GM mosquito population was expected to survive, shouldn’t Oxitec have made plans to assess the risk of gene transfer to wild populations during their initial trials? And shouldn’t the Brazilian government have required such an assessment as part of the regulatory approval process, given their awareness of the risk?
Instead, with approval from Brazilian authorities, Oxitec released nearly half a million GM mosquitoes every week into shared environments in Jacobina over a two-year period from 2013 to 2015. This was done without the benefit of adequate risk assessment and without proper public consultation.
Oxitec reports having used leaflets, social media, carnival parades and community meetings to inform the public of their research. Public education is not the same as public consultation and engagement and, in our view, the people living in the vicinity of this release had more than a right to be informed of the plans. They also had a right to participate in relevant decision-making.
On the basis of presumed success in Brazil where mosquito populations were reduced — a consequential reduction in the prevalence of dengue fever has yet to be demonstrated — plans have been made to extend field trials to other jurisdictions, including the Florida Keys in the United States.
To date, public pushback has temporarily prevented the release of GM mosquitoes in the Florida Keys. But Oxitec hopes to eventually secure approval from the U.S. Environmental Protection Agency to perform field trials and assess release of a second-generation GM mosquito that causes lethality only in female mosquitoes, as another means to collapse wild populations.
Regulating genetic modification
In the end, minus the hyperbole and somewhat alarmist reporting of the Yale study (the journal is looking into allegations brought forth by Oxitec of speculative and unsubstantiated claims), the finding that offspring of GM mosquitoes could survive in the wild remains undisputed. This illustrates the importance of careful decision-making and adequate oversight of field trials involving the release of GM organisms. Careful decision-making requires open venues for informed and deliberative public dialogue, engagement and empowerment.
Genetic modification technologies need to be more transparent, as do the scientific processes for evaluating their risks, especially where the rights and needs of affected communities can inform technology development. With more robust and nuanced regulatory processes governing the development and release of GM organisms, it should be possible to benefit from these technologies without harming or disenfranchising the communities that are the intended beneficiaries.
Mosquito-borne illnesses cause immense human suffering, and we should continue to develop technologies to reduce that suffering. At the same time, we must be equally dedicated to designing scientific processes that are safe, ethical and just.
This article is republished from The Conversation by Natalie Kofler, Levenick Resident Scholar in Sustainability, University of Illinois at Urbana-Champaign; Françoise Baylis, University Research Professor, Dalhousie University; Graham Dellaire, Director of Research and Professor of Pathology, Dalhousie University, and Landon J Getz, Ph.D. Candidate in Microbiology and Immunology, Dalhousie University under a Creative Commons license. Read the original article.
The Report: America’s Descent Into Torture, Lies, and Denial
The Report, a VICE-produced film, is out Friday
For a brief moment, the vicious attacks of September 11, 2001 brought the country together in an unprecedented moment of unity. But it wouldn’t be long before the government’s illegal response to those attacks—ranging from the creation of a vast new domestic surveillance apparatus to the embrace of torture—would quickly undermine any unifying goodwill.
The CIA’s embrace of extrajudicial rendition and torture would ultimately come to be one of the biggest scandals in American history. And were it not for a man named Daniel J. Jones and a 2014 report detailing the secret program, you may have never even learned about it.
The report, a 525-page synopsis of a still-classified 6,700-page review, found that between 2002 and 2008, the CIA operated a massive covert torture program largely untethered from meaningful oversight — justified by bunk data and lies.
The report found that of the 119 detainees held at covert CIA sites around the world, 39 were subjected to extended sleep deprivation, waterboarding, prolonged standing, "simulated burials," and extended exposure to cold. Twenty-six of those detainees were found to have been held "wrongfully," with some accused of crimes they never committed. One died in captivity.
Despite CIA claims to the contrary, the report found the torture techniques used didn’t provide useful intelligence, something 400 years of scientific data had already proven.
"The use of the CIA's enhanced interrogation techniques was not an effective means of obtaining accurate information or gaining detainee cooperation,” the report concluded.
The government’s flirtation with torture and rendition began as malicious bureaucracy usually does: with men convinced of their own moral infallibility—and plenty of paperwork.
Six days after the terrorist attacks on 9/11, President George W. Bush signed a secret memo giving the CIA wide latitude to detain suspected terrorists. In early 2002, Bush signed an executive order classifying accused terrorists as “unlawful enemy combatants who are not entitled to the protections that the Geneva Convention provides to prisoners of war."
That same year, the Bush administration published a memo declaring that interrogation tactics such as waterboarding did not violate the law provided there is “no intent to cause severe pain.” Two years later, photos highlighting the US mistreatment of prisoners at the Abu Ghraib prison in Iraq showed the world the US had stumbled off the legal and ethical rails.
In 2005, a Washington Post story highlighted the CIA's use of covert "black sites" in eight foreign countries and at Guantanamo Bay, Cuba, used to detain and torture US government captives free from judicial oversight. Two years later, evidence emerged indicating that not only was the CIA torturing captives, it was deleting evidence of its brutal interrogation practices, something the CIA vehemently denied.
By 2009, these scandals resulted in inquiries by the United States Senate Select Committee on Intelligence (SSCI). Three years later the committee approved an extensive investigation into the government’s torture program. By 2014 a synopsis of the investigation had finally been released to the public, more than a decade after the torture program began.
Jones, a former FBI counter-terrorism analyst whose experience in bringing the scandal to light formed the backbone of the upcoming VICE-produced movie "The Report," spent six years digging through evidence of the CIA's wrongdoing. A seventh year was spent defending the report from CIA officials and others attempting to undermine its findings.
The CIA’s interest in covering up the scandal was such a concern, Jones in 2013 found it necessary to abscond with classified documents proving the CIA had lied repeatedly about the program. His worries turned out to be justified; later evidence would emerge that the CIA had “accidentally” and systematically deleted troves of data proving it had lied repeatedly.
Jones has long noted that the CIA and Bush and Obama era officials worked tirelessly to not only undermine his investigation, but to have him fired from his post. The CIA was so worried that years of lies related to the covert torture program would soon be exposed, it actively spied on Congressional staffers and investigators, including Jones.
The government’s ultimate response to the report has been a mixed bag.
The program, at least in its original form, was ended via executive order by President Bush in 2009. Bipartisan legislation co-authored by Senators McCain and Dianne Feinstein was passed in 2015 attempting to ban agencies from using many of the torture techniques described in the report, though human rights organizations would note the proposal contained ample loopholes.
Genuine justice would prove hard to come by. A 2012 investigation into the death of two detainees—one at Abu Ghraib and one at a CIA black site in Afghanistan—would end with no prosecutions. By 2014, the Obama administration had made it clear there would be no prosecution of any officials involved in the covert program or the destruction of evidence.
“I don’t think Congress as a whole has learned much from the episode,” Patrick Eddington, a former CIA analyst and ex-House senior staffer, told Motherboard.
Eddington pointed to recent Congressional and CIA efforts to update the 1982 Intelligence Identities Protection Act at the CIA’s request, with an eye on imposing harsher penalties on journalists and whistleblowers looking to expose future ethical lapses at the CIA. That said, Eddington noted Jones' contributions to government transparency were invaluable.
“The Torture Report would not have been possible without him,” he said. “One of the best staffers ever to work on the Hill, in my view.”
If there’s any lesson to be learned from the scandal, it’s that the government routinely lacks comprehensive insight into what the CIA is doing, despite years of scandal proving that exact point. From the 1970s discovery of covert CIA assassination programs to the agency’s illegal and immoral infatuation with torture, keeping the CIA accountable to the public remains an uphill climb.
“Pressure, fear, and expectation of further terrorist plots do not justify, temper, or excuse improper actions taken by individuals or organizations in the name of national security,” the report proclaimed. “The major lesson of this report is that regardless of the pressures and the need to act, the [government’s] actions must always reflect who we are as a nation, and adhere to our laws and standards.”
That lesson, as certain modern scandals make painfully clear, remains very much a work in progress.
The Mandarin: #MeToo for AI: could Cambridge Analytica happen again?
It is now six months since Harvey Weinstein’s activities unleashed a sea change in the world of workplace harassment. During that time, widespread adverse comment following the Cambridge Analytica revelations prompted assurances by Mark Zuckerberg to the effect that Facebook now understands that with great data comes great responsibility. Meanwhile, concerns have been raised about diverse effects of data automation: automated decision-making affecting humans; unaccountable robots; excessive and intrusive surveillance; opaque, unreliable or discriminatory algorithms; online echo chambers and fake news. Many of these concerns are also raised about AI. In addition, rapid developments in AI have prompted a policy debate about whether we are skilling a workforce to work with technology, and whether AI will deliver benefits to a few while many citizens are left behind. These concerns are exacerbated by a decline of faith in public policymaking: trust of citizens in institutions and governments is at historic lows. Can we ensure that what we have learnt from Cambridge Analytica is applied to address these challenges?
Big data ethics
Ethical challenges in the application of data analytics are nothing new. To take one relevant example, only three years ago Facebook data scientist Adam Kramer and Cornell University social scientists Jamie Guillory and Jeff Hancock published a peer-reviewed report on their A/B study, which experimentally modified the Facebook feed algorithm for 689,003 people. The authors demonstrated that changing the negative or positive emotional valence of posts on a user’s News Feed affected the emotional valence of the posts made by the user after seeing those posts. This supported the hypothesis that ‘emotional contagion’ — spreading emotional states through social contact — occurs on a massive scale, albeit with relatively small individual effects, in social networks. Publication of the report of this study launched a controversy about big data ethics, albeit without the media and public attention recently focused on Cambridge Analytica. One criticism of the study was that Facebook users whose feeds were altered were not told about the study, nor invited to provide informed consent to participate in it. Facebook suggested that its users had consented, by inferred consent, through Facebook’s terms of use, which were then significantly more opaque than today. Some critics questioned the ethics of platform providers experimenting with the emotional state of their users. Other commentators suggested that the differences between this study and common online marketing practices were that the experiment was not selling anything, and that its results were published in a peer-reviewed scientific journal.
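The shape of the study's analysis can be caricatured as a simple A/B comparison. The numbers below are fabricated purely for illustration (they are not the study's data), and the variable names are invented: each value stands for the mean emotional valence of one user's posts written after the feed manipulation.

```python
from statistics import mean

# Fabricated, illustrative per-user mean post valence (higher = more positive).
control = [0.21, 0.19, 0.23, 0.20, 0.22, 0.18, 0.21, 0.20]
fewer_positive_shown = [0.18, 0.17, 0.20, 0.16, 0.19, 0.17, 0.18, 0.16]

# A/B comparison: users shown fewer positive posts write less positive posts.
effect = mean(control) - mean(fewer_positive_shown)
print(f"per-user drop in post valence: {effect:.4f}")
# A tiny per-user effect, but applied across hundreds of thousands of users
# it becomes a large aggregate shift: contagion "on a massive scale, albeit
# with relatively small individual effects."
```

This is exactly why the consent question matters: each individual effect is small, but the experimental manipulation touched the emotional expression of an enormous population.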
What, no ethics review?
In any event, the study was not subjected to any form of ethics review. The study probably was not required to be reviewed because the US ‘Common Rule’ only requires ethics review of ‘intervention’ in the life of human subjects by way of ‘(human) research’. Some critics noted that the conduct of the study illustrated a broader concern, suggesting that the data science community was not familiar with the ethics regulation found in other science and technology communities. These critics suggested that data scientists wrongly thought that many of their data projects concerned systems and devices, not people, and therefore raised no human-related ethical concerns. Of course, many data-driven systems and devices differentiate between humans having regard to individual behaviour or inferred likely behaviour. This rightly raises issues as to whether any adverse effects on some individuals, whether by inclusion or exclusion, have been properly considered. Data scientists come to this new field with diverse training that often does not include any training in ethics, social science or psychology. The Facebook emotional contagion study illustrates the danger that data scientists and technology developers may not see, and so not avoid or mitigate, ethical concerns with projects that affect humans. Should individuals have a ‘right to know’ how information about them is used to manipulate their emotions? Or that they are being treated differently to other individuals? How can social responsibility and other ethical concerns be addressed without the slow and complex processes for medical research ethics review? Cambridge Analytica magnified concerns already raised by the Facebook emotional contagion study.
It grabbed headlines and social media attention because the potential impact was demonstrated through the backstory of its alleged role in delivering the White House to a political outsider. Public attention also raised the issue of whether Facebook users should know how their Facebook profiles are being used. A further issue was whether Facebook knew, or as a data custodian should have taken active steps to check, what Cambridge Analytica was up to.
What have we learnt?
Generally, only medical research and other ‘research’ conducted by public institutions through the use of public funds, involving humans and animals, must be subject to the review and oversight of a research ethics committee. That leaves the development and commercialisation of most products and services outside formal ethical review. Many products and services do not involve the collection, use or disclosure of personal information about identifiable individuals and are therefore outside data protection laws. If information is being used but is not about identifiable individuals, the use of that information may not be subject to any form of privacy impact assessment. Although privacy review does not formally include consideration of non-privacy ethical concerns, often these are picked up when uses of personal information are reviewed. But with no collection, use or disclosure of personal information, there is no privacy review. This leaves a sizeable middle ground. The harvesting and use of personal information about Facebook users by Cambridge Analytica probably was not in that middle ground, because those activities took place without the knowledge and active consent of Facebook users. However, it was suggested that, given the narrow coverage of privacy-related laws in the USA, knowledge and active consent were not required. In any event, it was argued that there was no requirement for ethical or privacy review of what Cambridge Analytica was up to — that this application was in the middle ground.
But within this middle ground — and so outside current requirements for review — lie many applications of algorithmic decision making, and uses of AI-based products and services, both in the business sector and in government. Concerns in this middle ground include social equity and fairness, discrimination, lack of transparency, lack of accountability, intrusive surveillance, and failure to properly warn of or disclose biases or other limitations in the reliability of outputs or applications. These issues will rapidly escalate in scale, complexity and variety as the range of applications of machine learning and AI continues to expand. So how should we address these problems without sacrificing many of the benefits of machine learning and AI?
Making it real
Most studies of AI ethics rework lists of principles for ethical analysis, but do not assist the operationalisation of those principles. Practical ethics requires methodologies, tools, processes and lexicons that prompt sensible discussions within laboratories and other workplaces about social equity, fairness and ethics. Design and development teams need to be empowered to have these discussions. They may need to bring outside advocates into those discussions, or to try to synthesise the viewpoints of a broader cross-section of society. These discussions need to be sufficiently structured and formalised to reliably and verifiably happen. The tools used to inform and guide these discussions should not be overly intrusive and formulaic, or review will become a matter of box ticking, form over substance. The processes must be agile and timely enough not to slow down development and commercialisation. There may also be business benefits in pausing to frame and consider ethical questions. If Facebook didn’t learn enough from the adverse comment following the emotional contagion study, will Facebook learn more from the far greater business impact of Cambridge Analytica upon Facebook’s market capitalisation and through the loss of trust of Facebook users? Ideally, businesses and government agencies endeavouring to be socially responsible should no longer require their own #MeToo moment to spur uptake of ethical assessment of design and development decisions.
Sensible ethical framing can win buy-in from executives and other decision-makers by demonstrably yielding value: it reduces rework when problems are discovered later. How much has it cost Facebook to deal with the problems exposed by the Cambridge Analytica revelations? How many products and services are beta-released into markets without first considering social impact and user issues, and then require costly rework to address issues first identified in-market? How many prospective customers are never gained because accessibility issues have not been considered? How many machine learning and AI applications will not achieve acceptance because inadequate transparency is engineered into them, leaving humans unable to properly 'interrogate the algorithm' to understand biases and other reliability issues? Should humans trust machines that fundamentally affect their lives and security when it is not clear which provider takes responsibility for which aspects of a system, and when issues of over-reliance on not fully reliable products are not properly addressed?
Bridging the gap
It may be that Cambridge Analytica teaches us nothing new. But it is reasonable to hope that this controversy highlights the 'gap' between data privacy and the ethical review of research involving humans and animals, and that we can fill that gap by taking the best parts of privacy assessment and ethical review. We need to move quickly beyond abstract statements of high ethical principles. We need to empower the diverse humans in research and development teams to fill that gap by delivering to them sound methodologies, tools and lexicons for ethical decision making. Many businesses are now mature in building privacy by design and information security by design into their research and development. Very few businesses or government agencies apply social fairness, social responsibility or transparency by design and by default in planning products and services. Ethics by design and default is too important not to do well. Let's get it right, quickly.
EXCLUSIVE: Chinese scientists are creating CRISPR babies
A daring effort is under way to create the first children whose DNA has been tailored using gene editing.
The step toward genetically tailored humans was undertaken in secrecy and with the clear ambition of a stunning medical first.
When Chinese researchers first edited the genes of a human embryo in a lab dish in 2015, it sparked global outcry and pleas from scientists not to make a baby using the technology, at least for the present.
It was the invention of a powerful gene-editing tool, CRISPR, which is cheap and easy to deploy, that made the birth of humans genetically modified in an in vitro fertilization (IVF) center a theoretical possibility.
Now, it appears it may already be happening.
According to Chinese medical documents posted online this month, a team at the Southern University of Science and Technology, in Shenzhen, has been recruiting couples in an effort to create the first gene-edited babies. They planned to eliminate a gene called CCR5 in hopes of rendering the offspring resistant to HIV, smallpox, and cholera.
The clinical trial documents describe a study in which CRISPR is employed to modify human embryos before they are transferred into women’s uteruses.
The scientist behind the effort, He Jiankui, did not reply to a list of questions about whether the undertaking had produced a live birth. Reached by telephone, he declined to comment.
However, data submitted as part of the trial listing shows that genetic tests have been carried out on fetuses as late as 24 weeks, or six months. It’s not known if those pregnancies were terminated, carried to term, or are ongoing.
[After this story was published, the Associated Press reported that according to He, one couple in the trial gave birth to twin girls this month, though the agency wasn’t able to confirm his claim independently. He also released a promotional video about his project.]
The birth of the first genetically tailored humans would be a stunning medical achievement, for both He and China. But it will prove controversial, too. Where some see a new form of medicine that eliminates genetic disease, others see a slippery slope to enhancements, designer babies, and a new form of eugenics.
“In this ever more competitive global pursuit of applications for gene editing, we hope to be a stand-out,” He and his team wrote in an ethics statement they submitted last year. They predicted their innovation “will surpass” the invention of in vitro fertilization, whose developer was awarded a Nobel Prize in 2010.
Gene-editing summit
The claim that China has already made genetically altered humans comes just as the world’s leading experts are jetting into Hong Kong for the Second International Summit on Human Genome Editing.
The purpose of the international meeting is to help determine whether humans should begin to genetically modify themselves, and if so, how. That purpose now appears to have been preempted by the actions of He, an elite biologist recruited back to China from the US as part of its “Thousand Talents Plan.”
The technology is ethically charged because changes to an embryo would be inherited by future generations and could eventually affect the entire gene pool. “We have never done anything that will change the genes of the human race, and we have never done anything that will have effects that will go on through the generations,” David Baltimore, a biologist and former president of the California Institute of Technology, who chairs the international summit proceedings, said in a pre-recorded message ahead of the event, which begins Tuesday, November 27.
It appears the organizers of the summit were also kept in the dark about He’s plans.
Regret and concern
The genetic editing of a speck-size human embryo carries significant risks, including the risks of introducing unwanted mutations or yielding a baby whose body is composed of some edited and some unedited cells. Data on the Chinese trial site indicate that one of the fetuses is a “mosaic” of cells that had been edited in different ways.
A gene-editing scientist, Fyodor Urnov, associate director of the Altius Institute for Biomedical Sciences, a nonprofit in Seattle, reviewed the Chinese documents and said that, while incomplete, they do show that “this effort aims to produce a human” with altered genes.
Urnov called the undertaking cause for “regret and concern over the fact that gene editing—a powerful and useful technique—was put to use in a setting where it was unnecessary.” Indeed, studies are already under way to edit the same gene in the bodies of adults with HIV. “It is a hard-to-explain foray into human germ-line genetic engineering that may overshadow in the mind of the public a decade of progress in gene editing of adults and children to treat existing disease,” he says.
Big project
In a scientific presentation in 2017 at Cold Spring Harbor Laboratory, which is posted to YouTube, He described a very large series of preliminary experiments on mice, monkeys, and more than 300 human embryos. One risk of CRISPR is that it can introduce accidental or “off target” mutations. But He claimed he found few or no unwanted changes in the test embryos.
He is also the chairman and founder of a DNA sequencing company called Direct Genomics. A new breed of biotech companies could ultimately reap a windfall should the new methods of conferring health benefits on children be widely employed.
According to the clinical trial plan, genetic measurements would be carried out on embryos and would continue during pregnancy to check on the status of the fetuses. During his 2017 presentation, He acknowledged that if the first CRISPR baby were unhealthy, it could prove a disaster.
“We should do this slow and cautious, since a single case of failure could kill the whole field,” he said.
A listing describing the study was posted in November, but other trial documents are dated as early as March of 2017. That was only a month after the National Academy of Sciences in the US gave guarded support for gene-edited babies, although only if they could be created safely and under strict oversight.
Currently, using a genetically engineered embryo to establish a pregnancy would be illegal in much of Europe and prohibited in the United States. It is also prohibited in China under a 2003 ministerial guidance to IVF clinics. It is not clear if He got special permission or disregarded the guidance, which may not have the force of law.
Public opinion
In recent weeks, He has begun an active outreach campaign, speaking to ethics advisors, commissioning an opinion poll in China, and hiring an American public-relations professional, Ryan Ferrell.
“My sense is that the groundwork for future self-justification is getting laid,” says Benjamin Hurlbut, a bioethicist from Arizona State University who will attend the Hong Kong summit.
The new opinion poll, carried out by Sun Yat-Sen University, found wide support for gene editing among the 4,700 Chinese respondents sampled, including a group who were HIV positive. More than 60% favored legalizing edited children if the objective was to treat or prevent disease. (Polls by the Pew Research Center have found similar levels of support in the US for gene editing.)
He’s choice to edit the gene called CCR5 could prove controversial as well. People without working copies of the gene are believed to be immune or highly resistant to infection by HIV. In order to mimic the same result in embryos, however, He’s team has been using CRISPR to mutate otherwise normal embryos to damage the CCR5 gene.
The attempt to create children protected from HIV also falls into an ethical gray zone between treatment and enhancement. That is because the procedure does not appear to cure any disease or disorder in the embryo, but instead attempts to create a health advantage, much as a vaccine protects against chicken pox.
For the HIV study, doctors and AIDS groups recruited Chinese couples in which the man was HIV positive. The infection has been a growing problem in China.
So far, experts have mostly agreed that gene editing shouldn't be used to make "designer babies" whose physical looks or personality have been changed.
He appeared to anticipate the concerns his study could provoke. “I support gene editing for the treatment and prevention of disease,” He posted in November to the social media site WeChat, “but not for enhancement or improving I.Q., which is not beneficial to society.”
Still, removing the CCR5 gene to create HIV resistance may not present a particularly strong reason to alter a baby’s heredity. There are easier, less expensive ways to prevent HIV infection. Also, editing embryos during an IVF procedure would be costly, high-tech, and likely to remain inaccessible in many poor regions of the world where HIV is rampant.
A person who knows He said his scientific ambitions appear to be in line with prevailing social attitudes in China, including the idea that the larger communal good transcends individual ethics and even international guidelines.
Behind the Chinese trial also lies some bold thinking about how evolution can be shaped by science. While the natural mutation that disables CCR5 is relatively common in parts of Northern Europe, it is not found in China. The distribution of the genetic trait around the world—in some populations but not in others—highlights how genetic engineering might be used to pick the most useful inventions discovered by evolution over the eons in different locations and bring them together in tomorrow’s children.
Such thinking could, in the future, yield people who have only the luckiest genes and never suffer Alzheimer’s, heart disease, or certain infections.
The text of an academic website that He maintains shows that he sees the technology in the same historic, and transformative, terms. “For billions of years, life progressed according to Darwin’s theory of evolution,” it states. More recently, industrialization has changed the environment in radical ways posing a “great challenge” that humanity can meet with “powerful tools to control evolution.”
It concludes: “By correcting the disease genes … we human[s] can better live in the fast-changing environment.”
Note: This story was updated after publication to include claims by He Jiankui that the trial had produced live births.
Credits: This article was taken from MIT Technology Review, written by Antonio Regalado
EXCLUSIVE: Chinese scientists are creating CRISPR babies was originally published on Shenzhen Blog