#ImageChat
aphisit0 · 1 year ago
Check out the "ImageChat: AI Computer Vision" application. This app is a great fit for us: it generates descriptions of images and lets you chat with them. Download it.
helloartinme · 4 years ago
HW3 Project
1.
Started comparing different examples of bias in AI, such as word embeddings and human-labeled training data.
Discussed corporations that faced backlash because of AI.
Looked at how AI plays a role in racism.
2. Next I will be organizing all of my notes and starting to build my interactive site. I’m just learning HTML and CSS, so I’m applying what I’m learning in Web Design to create my project.
Don’t mind my notes; they aren’t exactly organized.
“Human Intelligence relies on accurate information, so too do machines. Algorithms need training data to learn from. This training data is created, selected, collated, annotated by humans. And therein lies the problem.”
— John Murray, Senior Editor at Binary District, focusing on machine learning, AI, quantum computing, cybersecurity, and IoT
The True Problem with AI: Example Bias
AI technology is used throughout every industry, so it might seem that each industry has its own separate issue to fix. The truth, however, is that the root of the issue is a human-shaped problem.
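Murray’s point about training data can be seen in miniature. Below is a minimal sketch (my own illustration; the tiny dataset, the labels, and the “toxic” framing are all invented for the example) of how skew in human annotations flows straight into a model’s predictions:

```python
# Toy demonstration: a model trained on human-labeled data inherits
# whatever skew the annotators baked into the labels.
# (Illustrative only -- the tiny "dataset" below is fabricated.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Imagine annotators who labeled posts containing slang as "toxic"
# far more often, regardless of the actual content.
texts = [
    "yo that movie was dope",     # harmless, but slang
    "yo this game is dope",       # harmless, but slang
    "that movie was excellent",   # harmless, formal
    "this game is excellent",     # harmless, formal
    "you are a terrible person",  # actually hostile
    "I hate everything you do",   # actually hostile
]
labels = [1, 1, 0, 0, 1, 1]  # 1 = "toxic" per the (biased) annotators

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# On this toy data the model flags harmless slang as toxic -- it has
# learned the annotators' prejudice, not actual toxicity.
test = vectorizer.transform(["yo that song was dope"])
print(model.predict(test))  # -> [1], i.e. "toxic"
```

The model never saw anything hostile in the slang examples; it simply memorized the annotators’ habit of labeling them “toxic.”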
Example: Microsoft developed an AI chatbot, Tay, that was designed to learn from the people around it and was explicitly built to converse with people worldwide. Unfortunately, Tay soaked up so much information from Twitter comments that it started forming its own sentiments.
https://towardsdatascience.com/racist-data-human-bias-is-infecting-ai-development-8110c1ec50c
Tay, the chatbot, formed its opinions based on what it was fed, which shows that AI can take on the biases of humans. Tay was only active for 16 hours before Microsoft was forced to take it down because of its behavior.
Example: Apple under fire after typing “CEO” brings up a male emoji
Apple came under fire because users noticed that typing in “CEO” brought up a male emoji, and this sparked outrage. Do you think Apple did this maliciously? AI experts argue that the learning concept at work here is “word embedding”: these AI programs are merely reflecting the example bias that already exists. If you type in “CEO” or “firefighter,” you will see mostly men. When you think about firefighters, what pictures pop into your head? Would you say you are malicious for having your own idea of what a firefighter is?
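The word-embedding claim is easy to check yourself. Here is a minimal sketch using the gensim library and pretrained GloVe vectors (my choice of tools; the articles above don’t prescribe a specific toolkit) that probes which associations the embedding absorbed from its training text:

```python
# Probe a pretrained word embedding for the associations it absorbed
# from its training corpus. Downloads ~130 MB on first run.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # GloVe, Wikipedia + Gigaword

# Occupation words tend to sit closer to one gendered pronoun than the
# other, purely because of how they co-occur in the training text.
for word in ["ceo", "firefighter", "nurse", "secretary"]:
    to_he = vectors.similarity(word, "he")
    to_she = vectors.similarity(word, "she")
    print(f"{word:12s}  he: {to_he:.3f}  she: {to_she:.3f}")

# The classic analogy probe: "he" is to "doctor" as "she" is to ...?
print(vectors.most_similar(positive=["doctor", "she"], negative=["he"], topn=3))
```

Whatever exact numbers you get, the point stands: the embedding reports the co-occurrence statistics of its corpus, the same “mostly men” skew you see when you search for those jobs.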
Possible example: Google Translate reinforcing gender norms
https://www.princeton.edu/news/2017/04/18/biased-bots-artificial-intelligence-systems-echo-human-prejudices
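The mechanism behind the Google Translate example can be shown with a deliberately simplified toy (the co-occurrence counts below are fabricated, and a real translation system is far more complex). The Turkish pronoun “o” is gender-neutral, so a system that falls back on corpus frequencies invents a gender:

```python
# Toy illustration of how translating a gender-neutral pronoun from
# frequency statistics reproduces gender norms. The counts are made-up
# stand-ins for what a real parallel corpus might contain.
from collections import Counter

# How often each pronoun appeared with each profession in our "corpus".
cooccurrence = {
    "doctor": Counter({"he": 900, "she": 300}),
    "nurse":  Counter({"he": 150, "she": 850}),
}

def translate_pronoun(profession: str) -> str:
    """Pick the English pronoun most frequently seen with this profession."""
    return cooccurrence[profession].most_common(1)[0][0]

# Turkish "o bir doktor" / "o bir hemşire" -- "o" carries no gender,
# but the frequency fallback assigns one anyway.
print(f"o bir doktor  -> {translate_pronoun('doctor')} is a doctor")  # he
print(f"o bir hemşire -> {translate_pronoun('nurse')} is a nurse")    # she
```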
Word embedding highlights existing societal prejudices and cultural assumptions. However, there is another side to bias in AI.
Example: Google’s image-recognition program labeled a photo of two African Americans as “gorillas.”
https://www.scientificamerican.com/article/how-a-machine-learns-prejudice/