#breazeale
rootsinthefuture · 4 months ago
Text
“In 2035, in the living room of a typical Western family, a daily scene unfolds that seems straight out of a science fiction novel: among the toys scattered on the carpet, a humanoid-looking robot sits next to Tommaso, a four-year-old boy. The robot, named HERA (Home Empathetic Robotic Assistant), is a psychodroid, programmed not only to assist with household chores but also to interact with family members in an empathetic and intuitive manner. HERA's presence in the family's daily life has become as normal as that once attributed to televisions or smartphones. However, unlike the latter, HERA has the ability to actively participate in education and play, becoming both a babysitter and a friend to Tommaso.”
Robots like HERA do not yet exist, but the interaction between children and robots is a subject of study in various fields of pedagogy and social robotics. According to research conducted by Breazeal, Harris, DeSteno, and Kory (1), children treat anthropomorphic robots as genuine sources of information, much as they do human interlocutors. From as young as three years old, children not only receive and retain information imparted by robots, but also actively seek them out as informants. This phenomenon is particularly evident in robots that exhibit a rich range of non-verbal cues, such as glances, gestures, and facial expressions, indicating responsiveness and interactivity.
(Electronic Mentors: Pedagogy in the Age of Empathetic Robotics)
https://www.amazon.com/dp/B0D9SVDK4B
mariacallous · 7 months ago
Text
Look at almost any recent major news story from Russia, and you’ll find the Federal Security Service, better known as the FSB. Having failed to prevent the Crocus City Hall terrorist attack in Moscow last month, the agency has played a major role in arresting and apparently torturing the suspected perpetrators. It was FSB agents who arrested Wall Street Journal reporter Evan Gershkovich on espionage charges just over a year ago. And the FSB has been heavily involved in enforcing Russia’s crackdowns on dissent and LGBTQ+ rights.
At the same time, the FSB is inextricably linked to Moscow’s war against Ukraine. After years of carrying out subversive activities there, it provided Putin with key (though apparently misleading) intel that led him to launch his full-scale invasion in 2022. Since then, its agents have facilitated the deportation of Ukrainian children, tortured an untold number of Ukrainian civilians in so-called “torture chambers,” and tried to plant former ISIS members in Ukrainian battalions.
And let’s not forget that Putin himself was shaped by his career in the FSB’s predecessor agency, the Soviet-era KGB. Putin’s rise to power was defined by his image as a strong man who could ensure security and stability. Since assuming the presidency, he’s given himself direct authority over the FSB and steadily expanded its ability to surveil and repress Russian citizens.
To learn about the Russian FSB’s evolution over the last three decades, its operations in Russia and beyond, and its possible future after Putin, Meduza in English senior news editor Sam Breazeale spoke to Dr. Kevin Riehle, an expert in foreign intelligence services and the author of The Russian FSB: A Concise History of the Federal Security Service.
Timestamps for this episode:
(3:13) Decoding the FSB: Structure, mission, and operations
(5:58) The evolution of Russian national security: From KGB to FSB
(14:36) Corruption and ideology: The FSB’s internal struggle
(23:31) The FSB’s foreign reach and domestic repression
(38:49) The agency’s post-Putin future
mit · 1 year ago
Text
MIT roboticist Cynthia Breazeal had worked with designers at Mattel to ensure that Robotics Engineer Barbie was both realistic and inspirational. #Barbie has been a STEM career role model for many decades; among other things, she’s been an astronaut a number of times, starting in 1965!
party-slug · 2 years ago
Note
If Wilder and Usyk were to fight how do you see it going?
short answer: usyk by very wide decision, or 11th/12th round tko.
sounds a little crazy at first, but if you think about it there are two ways to beat wilder. there is of course the fury way which works great if you are big and can handle his right hand/dont mind taking some punishment to dish it out. obviously that isnt what usyk is gonna try if they fight, and he doesnt need to because there exists a lesser known second option. if you go back to wilders olympic days, you will notice he lost to a fella named clemente russo in 2008 (also worth pointing out that usyk actually beat him at the 2012 olympics to win the gold at HW). i would say its also worth noting that russo is smaller than usyk but was able to use his footwork to beat wilder up on the inside and force him to fight off the back foot, which prevented him from throwing the right, which is basically the only thing wilder knows how to do. this is the exact kind of boxer wilder has been protected from his entire pro career. even if you discount his opponents' exceptionally low rankings and look strictly at their styles, they are almost without exception flat footed sluggers who only know how to come forward. it sounds harsh, but you would have a tough time convincing me wilder has exhibited any real growth as a boxer since his loss to russo. he kinda jabbed against (i think) breazeale, and it looked like he learned some rudimentary footwork in wilder/fury 3, but that went out the window after the first round. his footwork is still extremely limited, hes never really learned how to string punch combinations together, and historically speaking his ability to use the jab is pretty much non existent. he has an abundance of heart, but unless he can seriously touch usyk (which i doubt), it wont be enough to get it done.
sunaleisocial · 3 months ago
Text
First AI + Education Summit is an international push for “AI fluency”
New Post has been published on https://sunalei.org/news/first-ai-education-summit-is-an-international-push-for-ai-fluency/
This summer, 350 participants came to MIT to dive into a question that is, so far, outpacing answers: How can education still create opportunities for all when digital literacy is no longer enough — in a world in which students now need AI fluency?
The AI + Education Summit was hosted by the MIT RAISE Initiative (Responsible AI for Social Empowerment and Education) in Cambridge, Massachusetts, with speakers from the App Inventor Foundation, the Mayor’s Office of the City of Boston, the Hong Kong Jockey Club Charities Trust, and more. Highlights included an onsite “Hack the Climate” hackathon, where teams of beginner and experienced MIT App Inventor users had a single day to develop an app for fighting climate change.
In opening remarks, RAISE principal investigators Eric Klopfer, Hal Abelson, and Cynthia Breazeal emphasized what new goals for AI fluency look like. “Education is not just about learning facts,” Klopfer said. “Education is a whole developmental process. And we need to think about how we support teachers in being more effective. Teachers must be part of the AI conversation.” Abelson highlighted the empowerment aspect of computational action, namely its immediate impact, that “what’s different than in the decades of people teaching about computers [is] what kids can do right now.” And Breazeal, director of the RAISE Initiative, touched upon AI-supported learning, including the imperative to use technology like classroom robot companions as something supplementary to what students and teachers can do together, not as a replacement for one another. Or as Breazeal underlined in her talk: “We really want people to understand, in an appropriate way, how AI works and how to design it responsibly. We want to make sure that people have an informed voice of how AI should be integrated into society. And we want to empower all kinds of people around the world to be able to use AI, harness AI, to solve the important problems of their communities.”
Video: MIT AI + Education Summit 2024: Welcome Remarks by MIT RAISE Leaders Abelson, Breazeal, and Klopfer (MIT Open Learning)
The summit featured the invited winners of the Global AI Hackathon. Prizes were awarded for apps in two tracks: climate and sustainability, and health and wellness. Winning projects addressed issues like sign-language-to-audio translation, moving object detection for the vision impaired, empathy practice using interactions with AI characters, and personal health checks using tongue images. Attendees also participated in hands-on demos for MIT App Inventor, a “playground” for the Personal Robots Group’s social robots, and an educator professional development session on responsible AI.
By convening people of so many ages, professional backgrounds, and geographies, organizers were able to foreground a unique mix of ideas for participants to take back home. Conference papers included real-world case studies of implementing AI in school settings, such as extracurricular clubs, considerations for student data security, and large-scale experiments in the United Arab Emirates and India. And plenary speakers tackled funding AI in education, state government’s role in supporting its adoption, and — in the summit’s keynote speech by Microsoft’s principal director of AI and machine learning engineering Francesca Lazzeri — the opportunities and challenges of the use of generative AI in education. Lazzeri discussed the development of tool kits that enact safeguards around principles like fairness, security, and transparency. “I truly believe that learning generative AI is not just about computer science students,” Lazzeri said. “It’s about all of us.”
Trailblazing AI education from MIT
Critical to early AI education has been the Hong Kong Jockey Club Charities Trust, a longtime collaborator that helped MIT deploy computational action and project-based learning years before AI was even a widespread pedagogical challenge. A summit panel discussed the history of its CoolThink project, which brought such learning to grades 4-6 in 32 Hong Kong schools in an initial pilot and then met the ambitious goal of bringing it to over 200 Hong Kong schools. On the panel, CoolThink director Daniel Lai said that the trust, MIT, Education University of Hong Kong, and the City University of Hong Kong did not want to add a burden to teachers and students of another curriculum outside of school. Instead, they wanted “to mainstream it into our educational system so that every child would have equal opportunity to access these skills and knowledge.”
MIT worked as a collaborator from CoolThink’s start in 2016. Professor and App Inventor founder Hal Abelson helped Lai get the project off the ground. Several summit attendees and former MIT research staff members were leaders in the project development. Educational technologist Josh Sheldon directed the MIT team’s work on the CoolThink curriculum and teacher professional development. Karen Lang, then App Inventor’s education and business development manager, was the main curriculum developer for the initial phase of CoolThink, writing the lessons and accompanying tutorials and worksheets for the three levels in the curriculum, with editing assistance from the Hong Kong education team. And Mike Tissenbaum, now a professor at the University of Illinois at Urbana-Champaign, led the development of the project’s research design and theoretical grounding. Among other key tasks, they ran the initial teacher training for the first two cohorts of Hong Kong teachers, consisting of sessions totaling 40 hours with about 40 teachers each.
The ethical demands of today’s AI “funhouse mirror”
Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, delivered the closing keynote. He described the current state of AI as a “funhouse mirror” that “distorts the world around us” and framed it as yet another technology that has presented humans with ethical demands to find its positive, empowering uses that complement our intelligence but also to mitigate its risks. 
“One of the areas I’m most excited about personally,” Huttenlocher said, “is people learning from AI,” with AI discovering solutions that people had not yet come upon on their own. As so much of the summit demonstrated, AI and education is something that must happen in collaboration. “[AI] is not human intellect. This is not human judgment. This is something different.”
mirandamckenni1 · 5 months ago
Text
AI is Making Us Less Human (YouTube video)
Ground News Holiday Sale: Compare news coverage. Spot media bias. Join Ground News today to get 40% off unlimited access: https://ift.tt/Arvugn4. Sale ends December 31.
From police robots to AI avatars, the tech industry uses human empathy as a tool for evil.
Filmed & edited by me :) Set design & camera by Vic Mongiovi. AI & robotics consultation from David Marino & Charlie Gauthier.
Voiceover recordings by: Stushi: https://www.youtube.com/@stushi; Carmilla Morrell: https://ift.tt/4cbmYSF; Lola Sebastian: https://ift.tt/y1BpnOm; Foreign Man in a Foreign Land: https://ift.tt/8DBHYZo
Support the channel on Patreon: https://ift.tt/Y9SJx14 | Twitter: https://twitter.com/lily_lxndr | Instagram: https://ift.tt/ab746TU | Letterboxd: https://ift.tt/TxLNSDs
Sources:
- Tech fair robot scandal: https://ift.tt/pEfeuGK
- Kamil Mamak, "Should Violence Against Robots be Banned?"
- Kate Darling, "'Who's Johnny?': Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy"
- Mads Bering Christiansen, Ahmad Rafsanjani, and Jonas Jørgensen, "'It Brings the Good Vibes': Exploring Biomorphic Aesthetics in the Design of Soft Personal Robots"
- Kate Darling, Palash Nandy, and Cynthia Breazeal, "Empathic concern and the effect of stories in human-robot interaction"
- Dr. Cherie Lacey and Dr. Catherine Caudwell, "Cuteness as a Dark Pattern in Home Robots"
- Adobe Podcast: https://ift.tt/WaQO3Ng
- Anat Perry, "AI will never convey the essence of human empathy"
- Luisa Damiano, Paul Dumouchel, and Hagen Lehmann, "Artificial Empathy: An Interdisciplinary Investigation"
- Adrienn Ujhelyi, Flora Almosdi, and Alexandra Fodor, "Would You Pass the Turing Test? Influencing Factors of the Turing Decision"
- Jonas Ivarsson and Oskar Lindwall, "Suspicious Minds: the Problem of Trust and Conversational Agents"
- HeyGen: https://heygen.com
- Affective computing market (U.S. only; unsure how large the global market is): https://ift.tt/MF4xSyz
- Ted Chiang, "Liking What You See: A Documentary" (from the short story collection "Stories of Your Life and Others")
- Daniel Immerwahr, "What the Doomsayers Get Wrong About Deepfakes": https://ift.tt/ZDnb0aO
Media used: The Iron Giant (1999), Wall-E (2008), Her (2013), Blade Runner (1982), Ex Machina (2014), Air Doll (2009). Links to all YouTube videos used here: https://ift.tt/YAv3XeR
via YouTube: https://www.youtube.com/watch?v=vvGOA34z22E
thatswhatshedoes · 7 years ago
Text
Tumblr media
Meet These Incredible Women Advancing A.I. Research
Forbes' Mariya Yao introduces over 20 leading women behind #AI research -- featuring Coursera's Daphne Koller, Jibo's Cynthia Breazeal, Harvard professor Latanya Sweeney, and more: "We all have a responsibility to make sure everyone - including companies, governments and researchers - develop AI with diversity in mind.”
rootsinthefuture · 3 months ago
Text
“In 2035, in the living room of a typical Western family, a daily scene unfolds that seems straight out of a science fiction novel: among the toys scattered on the carpet, a humanoid-looking robot sits next to Tommaso, a four-year-old boy. The robot, named HERA (Home Empathetic Robotic Assistant), is a psychodroid, programmed not only to assist with household chores but also to interact with family members in an empathetic and intuitive manner. HERA's presence in the family's daily life has become as normal as that once attributed to televisions or smartphones. However, unlike the latter, HERA has the ability to actively participate in education and play, becoming both a babysitter and a friend to Tommaso.”
Robots like HERA do not yet exist, but the interaction between children and robots is a subject of study in various fields of pedagogy and social robotics. According to research conducted by Breazeal, Harris, DeSteno, and Kory (1), children treat anthropomorphic robots as genuine sources of information, much as they do human interlocutors. From as young as three years old, children not only receive and retain information imparted by robots, but also actively seek them out as informants. This phenomenon is particularly evident in robots that exhibit a rich range of non-verbal cues, such as glances, gestures, and facial expressions, indicating responsiveness and interactivity.
HERA's ability to display empathy is no accident; it is the result of sophisticated programming aimed at emulating human non-verbal contingency. This aspect is crucial because children, as suggested by Breazeal and colleagues' research, prefer interactions with those who show a greater ability to respond appropriately and promptly to their communicative signals. HERA's artificial empathy allows it not only to understand and respond to Tommaso's emotions but also to anticipate his needs, learning from his daily behaviors.

HERA's role extends beyond mere supervision. It is an educational tool that stimulates Tommaso's curiosity, proposing educational games and activities that encourage learning through play. Its presence encourages the child to question how things work, promoting a type of active and participatory learning that was less accessible with previous technological means.
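The "non-verbal contingency" mentioned here can be made concrete with a small illustration. The sketch below is purely hypothetical: HERA is fictional, and the event names, the response window, and the scoring rule are assumptions invented for this example. It only illustrates the underlying idea, which is that a contingent robot answers a child's communicative signal with an appropriate response within a short time window, and that the fraction of signals answered that way gives a rough contingency score.

```python
from dataclasses import dataclass

# Hypothetical child signals and the robot responses judged "appropriate" for each.
# These pairings are illustrative assumptions, not part of any published system.
APPROPRIATE_RESPONSE = {
    "gaze_at_robot": "make_eye_contact",
    "point_at_object": "look_at_object_and_name_it",
    "question": "answer_verbally",
    "smile": "smile_back",
}

CONTINGENCY_WINDOW_S = 1.5  # assumed upper bound for a "prompt" response

@dataclass
class Event:
    t: float    # seconds since session start
    actor: str  # "child" or "robot"
    kind: str   # e.g., "point_at_object" or "look_at_object_and_name_it"

def contingency_score(events: list[Event]) -> float:
    """Fraction of child signals that received an appropriate robot
    response within CONTINGENCY_WINDOW_S seconds."""
    child_signals = [e for e in events
                     if e.actor == "child" and e.kind in APPROPRIATE_RESPONSE]
    if not child_signals:
        return 0.0
    answered = 0
    for sig in child_signals:
        expected = APPROPRIATE_RESPONSE[sig.kind]
        if any(e.actor == "robot" and e.kind == expected
               and 0 <= e.t - sig.t <= CONTINGENCY_WINDOW_S
               for e in events):
            answered += 1
    return answered / len(child_signals)

# Toy session log (made-up values): the robot answers two of three signals promptly.
log = [
    Event(0.0, "child", "gaze_at_robot"),   Event(0.4, "robot", "make_eye_contact"),
    Event(3.0, "child", "point_at_object"), Event(3.9, "robot", "look_at_object_and_name_it"),
    Event(7.0, "child", "question"),        Event(9.5, "robot", "answer_verbally"),  # too slow
]
print(f"contingency score: {contingency_score(log):.2f}")  # -> 0.67
```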
As technology advances rapidly towards the imaginary reality of HERA(2), numerous advantages emerge, but also serious ethical issues(3).
The dependence on robotic assistants for the companionship and education of children could have negative effects on the development of their social skills and make them excessively reliant on robots for various aspects of their lives. Spending too much time with a robot like HERA could limit fundamental human interactions essential for their emotional and interpersonal development, leading to potential difficulties in forming real relationships and handling complex social situations. Therefore, it is essential that educators and technologists collaborate to create guidelines that balance the beneficial use of such technologies with the necessary interpersonal and emotional development of children, thus preventing the risks associated with excessive dependence on robots.
A crucial aspect to consider is the physical safety of children. Robots designed to interact with young children must be equipped with rigorous safety protocols to prevent physical harm. This includes implementing advanced sensors to avoid collisions and sophisticated algorithms to detect potentially dangerous situations. For example, according to the research of Tanaka et al. (2007), the inclusion of proximity sensors and the ability to quickly recognize and react to sudden movements are essential to ensure that robots can operate safely in home environments where children play freely. Additionally, the design of robots must take into account the physical characteristics of children, meaning avoiding sharp corners, toxic materials, and easily detachable components that could pose a choking hazard.
A study conducted by Sharkey et al. (2010) emphasizes the importance of rigorous and continuous testing of robots in real environments to ensure that any potential risks are identified and mitigated.
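As a purely illustrative sketch of the kind of safeguard described above, the snippet below shows a per-cycle proximity and motion check that slows or stops a hypothetical robot when a person is close or moving suddenly toward it. The sensor fields, thresholds, and action names are all assumptions made up for this example; they are not taken from Tanaka et al., Sharkey et al., or any real product.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()        # normal operation
    SLOW_DOWN = auto()       # reduce speed while a person is near
    EMERGENCY_STOP = auto()  # halt all motion immediately

@dataclass
class SensorFrame:
    nearest_person_m: float   # distance to the closest detected person, metres
    person_speed_mps: float   # estimated speed of that person toward the robot
    robot_speed_mps: float    # robot's own commanded speed

# Illustrative thresholds; real values would have to come from safety testing.
STOP_DISTANCE_M = 0.30
SLOW_DISTANCE_M = 1.00
SUDDEN_MOTION_MPS = 1.2
MAX_SPEED_NEAR_PERSON_MPS = 0.15

def safety_check(frame: SensorFrame) -> Action:
    """Decide, once per control cycle, whether the robot may keep moving."""
    if frame.nearest_person_m <= STOP_DISTANCE_M:
        return Action.EMERGENCY_STOP      # child within contact range
    if frame.person_speed_mps >= SUDDEN_MOTION_MPS:
        return Action.EMERGENCY_STOP      # sudden movement toward the robot
    if (frame.nearest_person_m <= SLOW_DISTANCE_M
            and frame.robot_speed_mps > MAX_SPEED_NEAR_PERSON_MPS):
        return Action.SLOW_DOWN           # cap speed while a person is close
    return Action.CONTINUE

# Example frames (made-up values).
for f in [SensorFrame(2.0, 0.1, 0.5), SensorFrame(0.8, 0.2, 0.5), SensorFrame(0.25, 0.0, 0.1)]:
    print(f, "->", safety_check(f).name)
```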
Psychological safety is another critical aspect. Robots must be designed to support, not overshadow, children's autonomy. This means that robots should be programmed to encourage children to explore and learn independently rather than becoming a unique and constant point of reference. As psychologist Goleman (2006) suggests, it is crucial for children to develop the ability to self-regulate and manage their own emotions without overly relying on constant external support.
Stuart J. Russell's research on AI alignment(4) is particularly relevant in this context. Russell emphasizes the importance of developing artificial intelligences that understand and respect human objectives, avoiding behaviors that could be harmful or unintended. In the case of robots like HERA, this translates into the need to program robots so that they not only respond to children's immediate needs but also promote their long-term development in a safe and healthy manner.
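Russell's point can be loosely illustrated in this setting with a toy "scaffolding" policy: instead of always answering outright, a hypothetical tutor robot escalates from encouragement to a hint to a full answer only as the child's own attempts fail. This is not Russell's formal framework, just a deliberately simple sketch in its spirit; the ladder of support levels and the escalation rule are invented for the example.

```python
# A toy scaffolding policy, in the spirit of "support, not overshadow":
# the hypothetical tutor gives progressively more help only as the child's
# own attempts fail, instead of answering outright on the first request.

HELP_LADDER = [
    "encourage",       # "Give it one more try on your own."
    "hint",            # point at the relevant idea without solving it
    "worked_example",  # show a similar, already-solved problem
    "full_answer",     # only after repeated unsuccessful attempts
]

def choose_support(failed_attempts: int, asked_for_answer: bool) -> str:
    """Pick a support level from the ladder based on how much the child
    has already tried; asking directly does not skip the ladder."""
    level = min(failed_attempts, len(HELP_LADDER) - 1)
    if asked_for_answer and failed_attempts == 0:
        # Immediate request with no attempt yet: respond, but keep the child active.
        return HELP_LADDER[0]
    return HELP_LADDER[level]

for attempts in range(4):
    print(attempts, "failed attempts ->", choose_support(attempts, asked_for_answer=True))
```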
In previous chapters, we explored a near future where the convergence of technological acceleration and consequent economic accessibility could lead to the widespread adoption of humanoid robots in our homes. This transformation, far from being a mere futuristic hypothesis, is shaping up as an increasingly tangible trajectory that will radically redefine the concepts of "home" and "family."
Humanoid robots, initially conceived as simple domestic aids, could quickly evolve into complex entities integrated into the family fabric.
These active and interactive presences could soon occupy a central place in our lives, especially in those of our children. The idea that intelligent robots could become a constant presence in children's lives raises a myriad of ethical, psychological, and educational issues that we cannot afford to ignore.
This technological evolution fits into an already complex and problematic social context regarding parenting. On the one hand, the introduction of domestic robots could potentially free up valuable time for parents, offering them greater opportunities to interact meaningfully with their children. On the other hand, we must consider current trends in parenting, which paint a worrying picture. Recent studies(5) highlight growing neglect of children's needs by parents who are often overwhelmed by work and sometimes immature. This neglect manifests in various ways, including the excessive use of electronic devices in the presence of children. According to a survey reported by Psychology Today, which involved six thousand children between the ages of eight and thirteen, 32% reported feeling "unimportant" when their parents used cell phones, and over half said their parents spent too much time on devices. This behavior can significantly damage children's social and emotional development, depriving them of important face-to-face interactions and the necessary parental attention.
Moreover, the phenomenon of parental burnout is emerging as an increasingly widespread issue. Characterized by physical and emotional exhaustion, burnout often leads to emotional distancing and a loss of satisfaction in the parental role. This condition, exacerbated by factors such as work-family conflict, financial insecurity, and lack of social support, not only affects the mental and physical health of parents but also impacts children, leading to increased anger, neglect, and in the most severe cases, violence from parents.
In this complex context, the introduction of intelligent robots into families presents both opportunities and challenges. On the one hand, these robots could lighten parents' domestic workload, potentially freeing up time and energy for more meaningful interaction with their children.
From: Electronic Mentors: Pedagogy in the Age of Empathetic Robotics
mariacallous · 6 months ago
Text
For the past two months, millions of Kazakhstanis have been glued to their screens, witnessing a landmark moment in the nation’s history: a murder trial live-streamed on YouTube. This was the trial of Kuandyk Bishimbayev, Kazakhstan’s former economic minister, who was convicted of torturing and killing his wife, Saltanat Nukenova, on November 9, 2023. The brutal CCTV footage of the incident went viral, not just within Kazakhstan but internationally as well. This trial not only highlighted Saltanat Nukenova’s tragic case but also shined a glaring spotlight on Kazakhstan’s chronic issues with domestic violence.
To learn more about the case and its wider significance, Meduza intern Ekaterina Rahr-Bohr and Meduza senior news editor Sam Breazeale spoke to Century College political scientist Dr. Colleen Wood, human rights activist and NeMolchiKZ founder Dinara Smailova, and The Village Kazakhstan editor-in-chief Aleksandra Akanaeva.
Timestamps for this episode:
(6:33) The role of social media and public sentiment
(9:51) The impact of “Saltanat’s law”
(16:41) Broader issues of domestic violence in Kazakhstan
(26:00) The role of NGOs and activists
(28:55) Dinara Smailova’s personal stories
(36:34) The need for systemic changes
snapdragonhemp · 1 year ago
Photo
We had a lot of fun at Boogie on the Bridge 2023! It was a totally excellent time with lots of great music, great food, and great friends! The motorcycle rally was amazing and there was never a dull moment at this wonderful event! Proceeds of this event went to amazing causes of Maryville/Alcoa Animal Shelter and The Jeff Breazeale Foundation. Thank you so much for the wonderful time!
rpnewspaperblog · 2 years ago
Text
Hires at Breazeale, Sachse & Wilson, BioInnovation Center | Business News
New Orleans Muieen Cader has joined the New Orleans BioInnovation Center as program director. Cader has a background in venture capital, having served as a senior associate with Garden District Ventures, where he was responsible for investment memo preparation, due diligence and adding value for prospective portfolio companies. — The Ehrhardt Group has added three new employees. Taylor Morris has…
osamu-jinguji · 2 years ago
Photo
My favorite books in Feb-2023 - #23
Architects of Intelligence: The truth about AI from the people building it – November 23, 2018, by Martin Ford (Author)

Bestselling author Martin Ford talks to a hall-of-fame list of the world's top AI experts, delving into the future of AI, its impact on society and the issues we should be genuinely concerned about as the field advances. This is the hardcover edition of the book.

Key Features:
- Interviews with AI leaders and practitioners
- A snapshot of the current state of AI, where it is headed and how it will impact society
- Voices from across AI and the scientific community

Book Description:
How will AI evolve and what major innovations are on the horizon? What will its impact be on the job market, economy, and society? What is the path toward human-level machine intelligence? What should we be concerned about as artificial intelligence advances? Architects of Intelligence contains a series of in-depth, one-to-one interviews where New York Times bestselling author, Martin Ford, uncovers the truth behind these questions from some of the brightest minds in the artificial intelligence community. Martin has wide-ranging conversations with twenty-three of the world's foremost researchers and entrepreneurs working in AI and robotics: Demis Hassabis (DeepMind), Ray Kurzweil (Google), Geoffrey Hinton (Univ. of Toronto and Google), Rodney Brooks (Rethink Robotics), Yann LeCun (Facebook), Fei-Fei Li (Stanford and Google), Yoshua Bengio (Univ. of Montreal), Andrew Ng (AI Fund), Daphne Koller (Stanford), Stuart Russell (UC Berkeley), Nick Bostrom (Univ. of Oxford), Barbara Grosz (Harvard), David Ferrucci (Elemental Cognition), James Manyika (McKinsey), Judea Pearl (UCLA), Josh Tenenbaum (MIT), Rana el Kaliouby (Affectiva), Daniela Rus (MIT), Jeff Dean (Google), Cynthia Breazeal (MIT), Oren Etzioni (Allen Institute for AI), Gary Marcus (NYU), and Bryan Johnson (Kernel). Martin Ford is a prominent futurist, and author of Financial Times Business Book of the Year, Rise of the Robots. He speaks at conferences and companies around the world on what AI and automation might mean for the future.

What You Will Learn:
- The state of modern AI
- How AI will evolve and the breakthroughs we can expect
- Insights into the minds of AI founders and leaders
- How and when we will achieve human-level AI
- The impact and risks associated with AI and its impact on society and the economy

Who this book is for:
Anybody with an interest in artificial intelligence and the role it will play in the future of human life and work will find this a fascinating read. The discussions here are not only of interest to scientists and technologists, but to the wider reading public.

About the Author:
Martin Ford is a futurist and the author of two books: The New York Times Bestselling Rise of the Robots: Technology and the Threat of a Jobless Future (winner of the 2015 Financial Times/McKinsey Business Book of the Year Award and translated into more than 20 languages) and The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, as well as the founder of a Silicon Valley-based software development firm. His TED Talk on the impact of AI and robotics on the economy and society, given on the main stage at the 2017 TED Conference, has been viewed more than 2 million times.
Martin is also the consulting artificial intelligence expert for the new "Rise of the Robots Index" from Societe Generale, underlying the Lyxor Robotics & AI ETF, which is focused specifically on investing in companies that will be significant participants in the AI and robotics revolution. He holds a computer engineering degree from the University of Michigan, Ann Arbor and a graduate business degree from the University of California, Los Angeles. He has written about future technology and its implications for publications including The New York Times, Fortune, Forbes, The Atlantic, The Washington Post, Harvard Business Review, The Guardian, and The Financial Times. He has also appeared on numerous radio and television shows, including NPR, CNBC, CNN, MSNBC and PBS. Martin is a frequent keynote speaker on the subject of accelerating progress in robotics and artificial intelligence-and what these advances mean for the economy, job market and society of the future. Martin continues to focus on entrepreneurship and is actively engaged as a board member and investor at Genesis Systems, a startup company that has developed a revolutionary atmospheric water generation (AWG) technology. Genesis will soon deploy automated, self-powered systems that will generate water directly from the air at industrial scale in the world's most arid regions.
sunaleisocial · 4 months ago
Text
“They can see themselves shaping the world they live in”
New Post has been published on https://sunalei.org/news/they-can-see-themselves-shaping-the-world-they-live-in/
During the journey from the suburbs to the city, the tree canopy often dwindles down as skyscrapers rise up. A group of New England Innovation Academy students wondered why that is.
“Our friend Victoria noticed that where we live in Marlborough there are lots of trees in our own backyards. But if you drive just 30 minutes to Boston, there are almost no trees,” said high school junior Ileana Fournier. “We were struck by that duality.”
This inspired Fournier and her classmates Victoria Leeth and Jessie Magenyi to prototype a mobile app that illustrates Massachusetts deforestation trends for Day of AI, a free, hands-on curriculum developed by the MIT Responsible AI for Social Empowerment and Education (RAISE) initiative, headquartered in the MIT Media Lab and in collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. They were among a group of 20 students from New England Innovation Academy who shared their projects during the 2024 Day of AI global celebration hosted with the Museum of Science.
The Day of AI curriculum introduces K-12 students to artificial intelligence. Now in its third year, Day of AI enables students to improve their communities and collaborate on larger global challenges using AI. Fournier, Leeth, and Magenyi’s TreeSavers app falls under the Telling Climate Stories with Data module, one of four new climate-change-focused lessons.
“We want you to be able to express yourselves creatively to use AI to solve problems with critical-thinking skills,” Cynthia Breazeal, director of MIT RAISE, dean for digital learning at MIT Open Learning, and professor of media arts and sciences, said during this year’s Day of AI global celebration at the Museum of Science. “We want you to have an ethical and responsible way to think about this really powerful, cool, and exciting technology.”
Moving from understanding to action
Day of AI invites students to examine the intersection of AI and various disciplines, such as history, civics, computer science, math, and climate change. With the curriculum available year-round, more than 10,000 educators across 114 countries have brought Day of AI activities to their classrooms and homes.
The curriculum gives students the agency to evaluate local issues and invent meaningful solutions. “We’re thinking about how to create tools that will allow kids to have direct access to data and have a personal connection that intersects with their lived experiences,” Robert Parks, curriculum developer at MIT RAISE, said at the Day of AI global celebration.
First-year student Jeremie Kwampo said that, before this year, he knew very little about AI. “I was very intrigued,” he said. “I started to experiment with ChatGPT to see how it reacts. How close can I get this to human emotion? What is AI’s knowledge compared to a human’s knowledge?”
In addition to helping students spark an interest in AI literacy, teachers around the world have told MIT RAISE that they want to use data science lessons to engage students in conversations about climate change. Therefore, Day of AI’s new hands-on projects use weather and climate change to show students why it’s important to develop a critical understanding of dataset design and collection when observing the world around them.
“There is a lag between cause and effect in everyday lives,” said Parks. “Our goal is to demystify that, and allow kids to access data so they can see a long view of things.”
Tools like MIT App Inventor — which allows anyone to create a mobile application — help students make sense of what they can learn from data. Fournier, Leeth, and Magenyi programmed TreeSavers in App Inventor to chart regional deforestation rates across Massachusetts, identify ongoing trends through statistical models, and predict environmental impact. The students put that “long view” of climate change into practice when developing TreeSavers’ interactive maps. Users can toggle between Massachusetts’s current tree cover, historical data, and future high-risk areas.
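The article does not describe TreeSavers' internals, and MIT App Inventor apps are built from visual blocks rather than text code, so the sketch below is only a hypothetical Python illustration of the kind of statistical model such an app might use: fit a simple least-squares trend to yearly tree-cover figures and extrapolate it to flag high-risk areas. The place names, percentages, and risk threshold are invented placeholders, not real Massachusetts data.

```python
# Hypothetical yearly tree-cover percentages for two areas (placeholder numbers).
TREE_COVER = {
    "Suburb A": {2001: 62.0, 2006: 61.0, 2011: 60.5, 2016: 59.8, 2021: 59.0},
    "City B":   {2001: 27.0, 2006: 24.5, 2011: 22.0, 2016: 20.0, 2021: 18.0},
}
RISK_THRESHOLD_PCT = 15.0  # assumed cutoff for flagging a "high-risk" area

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def project(area: str, year: int) -> float:
    """Extrapolate the fitted trend for one area to a future year."""
    data = TREE_COVER[area]
    slope, intercept = linear_fit(list(data), list(data.values()))
    return slope * year + intercept

for area in TREE_COVER:
    projected = project(area, 2035)
    flag = "HIGH RISK" if projected < RISK_THRESHOLD_PCT else "ok"
    print(f"{area}: projected {projected:.1f}% tree cover in 2035 [{flag}]")
```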
Although AI provides fast answers, it doesn’t necessarily offer equitable solutions, said David Sittenfeld, director of the Center for the Environment at the Museum of Science. The Day of AI curriculum asks students to make decisions on sourcing data, ensuring unbiased data, and thinking responsibly about how findings could be used.
“There’s an ethical concern about tracking people’s data,” said Ethan Jorda, a New England Innovation Academy student. His group used open-source data to program an app that helps users track and reduce their carbon footprint.
Christine Cunningham, senior vice president of STEM Learning at the Museum of Science, believes students are prepared to use AI responsibly to make the world a better place. “They can see themselves shaping the world they live in,” said Cunningham. “Moving through from understanding to action, kids will never look at a bridge or a piece of plastic lying on the ground in the same way again.”
Deepening collaboration on earth and beyond
The 2024 Day of AI speakers emphasized collaborative problem solving at the local, national, and global levels.
“Through different ideas and different perspectives, we’re going to get better solutions,” said Cunningham. “How do we start young enough that every child has a chance to both understand the world around them but also to move toward shaping the future?”
Presenters from MIT, the Museum of Science, and NASA approached this question with a common goal — expanding STEM education to learners of all ages and backgrounds.
“We have been delighted to collaborate with the MIT RAISE team to bring this year’s Day of AI celebration to the Museum of Science,” says Meg Rosenburg, manager of operations at the Museum of Science Centers for Public Science Learning. “This opportunity to highlight the new climate modules for the curriculum not only perfectly aligns with the museum’s goals to focus on climate and active hope throughout our Year of the Earthshot initiative, but it has also allowed us to bring our teams together and grow a relationship that we are very excited to build upon in the future.”
Rachel Connolly, systems integration and analysis lead for NASA’s Science Activation Program, showed the power of collaboration with the example of how human comprehension of Saturn’s appearance has evolved. From Galileo’s early telescope to the Cassini space probe, modern imaging of Saturn represents 400 years of science, technology, and math working together to further knowledge.
“Technologies, and the engineers who built them, advance the questions we’re able to ask and therefore what we’re able to understand,” said Connolly, research scientist at MIT Media Lab.
New England Innovation Academy students saw an opportunity for collaboration a little closer to home. Emmett Buck-Thompson, Jeff Cheng, and Max Hunt envisioned a social media app to connect volunteers with local charities. Their project was inspired by Buck-Thompson’s father’s difficulties finding volunteering opportunities, Hunt’s role as the president of the school’s Community Impact Club, and Cheng’s aspiration to reduce screen time for social media users. Using MIT App Inventor, their combined ideas led to a prototype with the potential to make a real-world impact in their community.
The Day of AI curriculum teaches the mechanics of AI, ethical considerations and responsible uses, and interdisciplinary applications for different fields. It also empowers students to become creative problem solvers and engaged citizens in their communities and online. From supporting volunteer efforts to encouraging action for the state’s forests to tackling the global challenge of climate change, today’s students are becoming tomorrow’s leaders with Day of AI.
“We want to empower you to know that this is a tool you can use to make your community better, to help people around you with this technology,” said Breazeal.
Other Day of AI speakers included Tim Ritchie, president of the Museum of Science; Michael Lawrence Evans, program director of the Boston Mayor’s Office of New Urban Mechanics; Dava Newman, director of the MIT Media Lab; and Natalie Lao, executive director of the App Inventor Foundation.