saidatascience · 1 month
Sai Data Science | Data Science and Kids Courses in Edmonton, Canada
Sai Data Science offers comprehensive data science courses and engaging kids' programs in Edmonton, Canada. Learn cutting-edge skills in data science or introduce your child to the world of technology with our expert-led courses.
ok-orange-8774 · 1 year
Data science is a field of study that works with enormous amounts of data, using contemporary technologies and methodologies to uncover hidden patterns, extract valuable information, and support business decisions. Datamites provides online data science training in Canada.
jcmarchi · 4 months
Elaine Liu: Charging ahead
New Post has been published on https://thedigitalinsider.com/elaine-liu-charging-ahead/
MIT senior Elaine Siyu Liu doesn’t own an electric car, or any car. But she sees the impact of electric vehicles (EVs) and renewables on the grid as two pieces of an energy puzzle she wants to solve.
The U.S. Department of Energy reports that the number of public and private EV charging ports nearly doubled in the past three years, and many more are in the works. Users expect to plug in at their convenience, charge up, and drive away. But what if the grid can’t handle it?
Electricity demand, long stagnant in the United States, has spiked due to EVs, data centers that drive artificial intelligence, and industry. Grid planners forecast an increase of 2.6 percent to 4.7 percent in electricity demand over the next five years, according to data reported to federal regulators. Everyone from EV charging-station operators to utility-system operators needs help navigating a system in flux.
That’s where Liu’s work comes in.
Liu, who is studying mathematics and electrical engineering and computer science (EECS), is interested in distribution — how to get electricity from a centralized location to consumers. “I see power systems as a good venue for theoretical research as an application tool,” she says. “I’m interested in it because I’m familiar with the optimization and probability techniques used to map this level of problem.”
Liu grew up in Beijing, then after middle school moved with her parents to Canada and enrolled in a prep school in Oakville, Ontario, 30 miles outside Toronto.
Liu stumbled upon an opportunity to take part in a regional math competition and eventually started a math club, but at the time, the school’s culture surrounding math surprised her. Being exposed to what seemed to be some students’ aversion to math, she says, “I don’t think my feelings about math changed. I think my feelings about how people feel about math changed.”
Liu brought her passion for math to MIT. The summer after her sophomore year, she took on the first of the two Undergraduate Research Opportunity Program projects she completed with electric power system expert Marija Ilić, a joint adjunct professor in EECS and a senior research scientist at the MIT Laboratory for Information and Decision Systems.
Predicting the grid
Since 2022, with the help of funding from the MIT Energy Initiative (MITEI), Liu has been working with Ilić on identifying ways in which the grid is challenged.
One factor is the addition of renewables to the energy pipeline. A gap in wind or sun might cause a lag in power generation. If this lag occurs during peak demand, it could mean trouble for a grid already taxed by extreme weather and other unforeseen events.
If you think of the grid as a network of dozens of interconnected parts, once an element in the network fails — say, a tree downs a transmission line — the electricity that used to go through that line needs to be rerouted. This may overload other lines, creating what’s known as a cascade failure.
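The rerouting dynamic described above can be sketched in a few lines of code. This is a deliberately crude toy model, not Liu's actual tool: the network, the line capacities, and the even-split rerouting rule are all hypothetical simplifications of real power-flow physics.

```python
# Toy sketch of a cascade failure: remove one line, reroute its flow
# evenly over the survivors, and trip any line pushed past its capacity,
# repeating until the network stabilizes (or nothing is left).

def cascade(flows, capacities, failed_line):
    """Return the set of lines that end up failed after the cascade."""
    failed = {failed_line}
    while True:
        live = [l for l in flows if l not in failed]
        if not live:
            return failed  # total blackout: every line tripped
        # Flow from failed lines is split evenly among surviving lines
        # (a crude stand-in for how power actually redistributes).
        rerouted = sum(flows[l] for l in failed) / len(live)
        newly_tripped = {l for l in live if flows[l] + rerouted > capacities[l]}
        if not newly_tripped:
            return failed  # the remaining lines can absorb the load
        failed |= newly_tripped

flows = {"A": 60, "B": 55, "C": 40}        # MW carried by each line
capacities = {"A": 100, "B": 80, "C": 70}  # thermal limits per line
print(sorted(cascade(flows, capacities, "A")))  # ['A', 'B', 'C']
```

Even in this tiny three-line network, losing the most heavily loaded line overloads a neighbor, and that second failure overloads the last survivor: the cascade consumes the whole grid.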
“This all happens really quickly and has very large downstream effects,” Liu says. “Millions of people will have instant blackouts.”
Even if the system can handle a single downed line, Liu notes that “the nuance is that there are now a lot of renewables, and renewables are less predictable. You can’t predict a gap in wind or sun. When such things happen, there’s suddenly not enough generation and too much demand. So the same kind of failure would happen, but on a larger and more uncontrollable scale.”
Renewables’ varying output has the added complication of causing voltage fluctuations. “We plug in our devices expecting a voltage of 110, but because of oscillations, you will never get exactly 110,” Liu says. “So even when you can deliver enough electricity, if you can’t deliver it at the specific voltage level that is required, that’s a problem.”
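The voltage-quality point can be made concrete with a tolerance-band check. The ±5 percent band below is a common rule of thumb for service voltage, not a figure from the article, and the sample measurements are invented.

```python
# Minimal sketch: delivery only counts as acceptable when the measured
# voltage stays inside a tolerance band around the nominal level.

NOMINAL_V = 110.0
TOLERANCE = 0.05  # ±5 percent, a typical rule of thumb (assumption)

def within_band(voltage, nominal=NOMINAL_V, tol=TOLERANCE):
    """True if the voltage is within the allowed band around nominal."""
    return abs(voltage - nominal) <= nominal * tol

samples = [109.2, 111.8, 104.1, 116.4]  # hypothetical measurements
for v in samples:
    print(v, "OK" if within_band(v) else "out of band")
```

The first two oscillating readings are close enough to 110 to pass; the last two would count as a delivery problem even though electricity is still flowing.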
Liu and Ilić are building a model to predict how and when the grid might fail. Lacking access to privatized data, Liu runs her models with European industry data and test cases made available to universities. “I have a fake power grid that I run my experiments on,” she says. “You can take the same tool and run it on the real power grid.”
Liu’s model predicts cascade failures as they evolve. Supply from a wind generator, for example, might drop precipitously over the course of an hour. The model analyzes which substations and which households will be affected. “After we know we need to do something, this prediction tool can enable system operators to strategically intervene ahead of time,” Liu says.
Dictating price and power
Last year, Liu turned her attention to EVs, which provide a different kind of challenge than renewables.
In 2022, S&P Global reported that lawmakers argued that the U.S. Federal Energy Regulatory Commission’s (FERC) wholesale power rate structure was unfair for EV charging station operators.
In addition to paying by the kilowatt-hour, some operators also pay more for electricity during peak demand hours. Even a few EVs charging during those hours can drive up the operator's costs, even when its overall energy use is low.
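A small calculation shows how time-of-use billing produces this effect. The rates, peak window, and charging sessions below are all invented for illustration; real tariffs also involve demand charges and other components not modeled here.

```python
# Hedged illustration: two operators use the same total energy (90 kWh),
# but the one with a couple of peak-hour sessions pays three times as much.
# All numbers are hypothetical. Costs are in integer cents to avoid
# floating-point rounding.

OFF_PEAK_RATE = 10  # cents per kWh (assumption)
PEAK_RATE = 40      # cents per kWh (assumption)
PEAK_HOURS = range(17, 21)  # 5-9 p.m. (assumption)

def operator_cost_cents(sessions):
    """sessions: list of (hour_of_day, kwh) charging sessions."""
    return sum(
        kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
        for hour, kwh in sessions
    )

off_peak_only = [(2, 30), (3, 30), (11, 30)]  # 90 kWh, all off-peak
few_at_peak = [(2, 30), (18, 30), (19, 30)]   # same 90 kWh, two peak sessions
print(operator_cost_cents(off_peak_only))  # 900 cents ($9.00)
print(operator_cost_cents(few_at_peak))    # 2700 cents ($27.00)
```

This is the asymmetry the incentive schemes discussed later in the article try to smooth out: shifting when people charge matters as much as how much they charge.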
Anticipating how much power EVs will need is more complex than predicting energy needed for, say, heating and cooling. Unlike buildings, EVs move around, making it difficult to predict energy consumption at any given time. “If users don’t like the price at one charging station or how long the line is, they’ll go somewhere else,” Liu says. “Where to allocate EV chargers is a problem that a lot of people are dealing with right now.”
One approach would be for FERC to dictate to EV users when and where to charge and what price they’ll pay. To Liu, this isn’t an attractive option. “No one likes to be told what to do,” she says.
Liu is looking at optimizing a market-based solution that would be acceptable to top-level energy producers — wind and solar farms and nuclear plants — all the way down to the municipal aggregators that secure electricity at competitive rates and oversee distribution to the consumer.
Analyzing the location, movement, and behavior patterns of all the EVs driven daily in Boston and other major energy hubs, she notes, could help demand aggregators determine where to place EV chargers and how much to charge consumers, akin to Walmart deciding how much to mark up wholesale eggs in different markets.
Last year, Liu presented the work at MITEI’s annual research conference. This spring, Liu and Ilić are submitting a paper on the market optimization analysis to a journal of the Institute of Electrical and Electronics Engineers.
Liu has come to terms with her early introduction to attitudes toward STEM that struck her as markedly different from those in China. She says, “I think the (prep) school had a very strong ‘math is for nerds’ vibe, especially for girls. There was a ‘why are you giving yourself more work?’ kind of mentality. But over time, I just learned to disregard that.”
After graduation, Liu, the only undergraduate researcher in Ilić’s MIT Electric Energy Systems Group, plans to apply to fellowships and graduate programs in EECS, applied math, and operations research.
Based on her analysis, Liu says that the market could effectively determine the price and availability of charging stations. Offering incentives for EV owners to charge during the day instead of at night when demand is high could help avoid grid overload and prevent extra costs to operators. “People would still retain the ability to go to a different charging station if they chose to,” she says. “I’m arguing that this works.”
suhasini123 · 1 year
An artificial intelligence (AI) course is an educational program that teaches the principles, techniques, and applications of artificial intelligence. Datamites is an organization that offers courses and training programs in data science, artificial intelligence, and machine learning.
antiporn-activist · 6 months
I thought y'all should read this
I have a free trial to News+ so I copy-pasted it for you here. I don't think Jonathan Haidt would object to more people having this info.
Tumblr wouldn't let me post it until I removed all the links to Haidt's sources. You'll have to take my word that everything is sourced.
End the Phone-Based Childhood Now
The environment in which kids grow up today is hostile to human development.
By Jonathan Haidt
Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.
The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.
The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.
As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.
Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.
Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.
Tumblr media
What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.
I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.
As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.
But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.
The intrusion of smartphones and social media are not the only changes that have deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.
My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.
1. The Decline of Play and Independence 
Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.
Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.
Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as tag and sharks and minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.
One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.
Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.
And then we changed childhood.
The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.
In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had enjoyed joyous and unsupervised outdoor play by the age of 7 or 8.
2. The Virtual World Arrives in Two Waves
The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.
The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).
The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2003), Myspace (2003), and Facebook (2004).
Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.
It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.
3. Techno-optimism and the Birth of the Phone-Based Childhood
The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?
In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.
You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.
Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.
It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law to 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.
We had no idea what we were doing.
4. The High Cost of a Phone-Based Childhood
In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.
The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.
In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.
The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.
But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.
You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?
Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.
First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.
Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.
Third, real-world interactions primarily involve one‐to‐one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.
Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.
These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.
Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.
A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.
5. So Many Harms
The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.
Fragmented Attention, Disrupted Learning
Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents in front of their laptop trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.
It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.
Addiction and Social Withdrawal
The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.
Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?
The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnosis manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.
Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.
I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?
The Decay of Wisdom and the Loss of Meaning 
During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.
This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.
All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.
When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today. 
Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often feels meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives felt “meaningless” increased by about 70 percent, to more than one in five.
6. Young People Don’t Like Their Phone-Based Lives
How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.
An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.
Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:
Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.
Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,
"The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier."
A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:
I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.
Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”
7. Collective-Action Problems
Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.
Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.
A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.
Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.
This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.
Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.
8. Four Norms to Break Four Traps
Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.
No smartphones before high school  
The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.
No social media before 16
The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.
Phone‐free schools 
Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.
More independence, free play, and responsibility in the real world
Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.
It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.
The main reason why the phone-based childhood is so harmful is because it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.
9. What Are We Waiting For?
An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.
In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.
There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).
Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.
The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone free, would catalyze collective action and reset the community’s norms.
We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.
This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.
xenosagaepisodeone · 1 year
Note
the thing is though that these checklists don’t mean if you have BPD it is not allowed that you have nightmares / if you have CPTSD you are legally obligated to never experience impulsiveness etc etc; it’s not just “making stuff up” — though ig in the strictest sense yeah, first you make stuff up but then you test it and see if your hypotheses align with the population. Basically chances are if you meet 8/9 BPD criteria and some for CPTSD but not enough to meet the diagnostic standard (which afaik isn’t recognized just yet but i think they’re trying to get it recognized in the diagnostic manuals but correct me if i’m wrong) then it’s pretty likely you’re going to respond better to BPD treatment and ALSO if your practitioner completely ignores one diagnosis in favor of the other they’re probably not that good at their job. Psychology doesn’t speak in “rules” and absolutes, it speaks in trends and likelihoods and everyone trying to sell you a 100% true and immovable psychology fact is a sham
as someone who unfortunately has a degree in psychology (and whose undergrad began right as the infamous replication crisis became more widely acknowledged in the field), yes, historically a lot of this field is bias and hegemony imbued with some metric. when homosexuality was still classified as a mental disorder, the conversion therapy program by masters and johnson (who were like, some of the earliest pioneers of research into human sexual responses lmao) would often boast high success rates due to participants merely adopting signifiers of heterosexuality. the modern day pop psychology movement (and its subfields, new ageism, self help books, uhhh Market Christianity) also cannot be disentangled from academic psychology, which further bends the way in which people understand and interact with psychological phenomena. this of course does not mean that all data is junk data, or that methods of measurement are without some rigor, or that therapy is completely useless, but it's just patently incorrect to insist that this field is even predominantly an apolitical force attempting to further our understanding of human beings. it's bizarre that you acknowledge that credentialized individuals in the field can be flawed while also being uncritical of psychological categorization for mental illness.
It's not that I don't get what you're saying, but it's not reflective of reality. yes, I know that practitioners are supposed to help you feel out your symptoms and see what treatment works for you, but that isn't just what they're doing (assuming it's even being done with care and competence). it's inaccurate to insist that psychology doesn't speak in absolutes- I know that we are taught not to do this, but for any social science related field this is the equivalent of going "stop hitting yourself". in any practical real-world setting where accredited institutional psychology is present, there are rules. in a clinical setting, there are rules, and you can be inpatiented against your will for breaking those rules (or recently here in canada, randomly stripped of your driver's license). in neuromarketing (<- yes this is a real discipline.), which is intensively oriented towards results due to the profit incentive, there are rules. the conditions of release for many offenders necessitates staying on court-mandated medication or participating in specific programs. when H.B. Phrenology from The Heritage Foundation wheels out his thousandth manicured study on crime and race (and when a different journal publishes a study indirectly debunking it), that is him tacitly acknowledging that there are rules.
anyway did I ever tell you guys that in my first year at University of Toronto (UTSC campus baybee) they brought in a guest speaker to my abnormal psych course who gave us a lengthy talk on how autogynephilia theory is objectively true. this was like 2013ish maybe 2014 btw.
rjzimmerman · 2 months
Text
Extreme heat is wilting and burning forests, making it harder to curb climate change. (Washington Post)
Earth’s land lost much of its ability to absorb the carbon dioxide humans pumped into the air last year, according to a new study that is causing concern among climate scientists that a crucial damper on climate change underwent an unprecedented deterioration.
Temperatures in 2023 were so high — and the droughts and wildfires that came with them were so severe — that forests in various parts of the world wilted and burned enough to have degraded the ability of the land to lock away carbon dioxide and act as a check on global warming, the study said.
The scientists behind the research, which focuses on 2023, caution that their findings are preliminary. But the work represents a disturbing data point — one that, if it turns into a trend, spells trouble for the planet and the people on it.
“We have to be, of course, careful because it’s just one year,” said Philippe Ciais, a scientist at France’s Laboratory of Climate and Environmental Sciences who co-authored the new research.
Earth’s continents act as what is known as a carbon sink. The carbon dioxide that humans emit through activities such as burning fossil fuels and making cement encourages the growth of plants, which in turn absorb a portion of those greenhouse gases and lock them in wood and soil. Without this help from forests, climate change would be worse than what is already occurring.
“This is a significant issue, because we are benefiting from the uptake of carbon,” said Robert Rohde, chief scientist for Berkeley Earth, who was not involved in the research. “Otherwise, carbon dioxide levels in the atmosphere would rise even faster and drive up temperatures even faster.”
Ciais and his colleagues saw that the concentration of CO2 measured at an observatory on Mauna Loa in Hawaii and elsewhere spiked in 2023, even though global fossil fuel emissions increased only modestly last year in comparison. That mismatch suggests that there was an “unprecedented weakening” in the Earth’s ability to absorb carbon, the researchers wrote.
The scientists then used satellite data and models for vegetative growth to try to pinpoint where the carbon sink was weakening. The team spotted abnormal losses of carbon in the drought-stricken Amazon and Southeast Asia as well as in the boreal forests of Canada, where record-breaking wildfires burned through tens of millions of acres.
saidatascience · 2 months
Text
Summer Camp Booking Steps: Simplify Your Enrollment Process
Discover the simple procedures for booking your summer camp at Sai Data Science. Streamline enrollment and reserve your child's space now!
Tumblr media
ok-orange-8774 · 2 years
Text
Data science is a field of study that works with enormous amounts of data utilizing contemporary technologies and methodologies to uncover hidden patterns, obtain valuable information, and make business decisions. Datamites provides data science courses in Canada, along with artificial intelligence, Python, data analytics, machine learning, and more.
insanityclause · 2 years
Note
I think Tucker is nuts, and no, the US is not going to invade Canada, but I know a lot of ex-Canadians, and none are going back. A lot of what that person posted about their country is not true, but look at the population of both countries. Of course there is more crime in the US, but better education? Hardly. I do know they are more lenient about who they let in, because my extended Iranian family couldn't get into the US, so they had to go to Canada.
Hi. Canadian here. @doctortwhohiddles' country IS my country, so I know exactly which parts of that post are true. Pretty much every single word.
Greater life expectancy? True. (In part due to universal health care and gun control.)
[chart: life expectancy comparison]
And since Tucker was using last year's hostage-taking 'Freedom Convoy' as the basis of his talking points, let's look at this data:
[chart: Freedom Convoy data]
Crime??
[charts: crime statistics]
Education?
[chart: education statistics]
PISA scores for Reading, Math & Science (2018)
[charts: PISA score comparisons]
Costs for post-secondary are far lower here, too, and funding is pretty good.
University of Toronto - $4700 USD/year (non-Ontario resident)
McGill - $6500 USD/year (non-Quebec resident)
University of British Columbia - $4200 USD/year
Comparable universities in the US:
Johns Hopkins - $60,000/yr
University of Michigan - $52,000/yr ($26,000 if you're a resident)
Northwestern - $60,000/yr
UK universities just for fun:
$11,000 USD/yr
And the public elementary/secondary system is excellent, compared to what you get in publicly funded education in the US or UK. The 'posh public school' argument literally does not exist here. No one cares where you went to high school.
Plus, safe schools. We have had 8 school shootings since 1975. Total. Including universities. Last one was 2016. The US had ~500 in the same time frame, and 57 in 2019 alone. Even accounting for the population difference, you're on the losing side in that one.
[chart: school shootings comparison]
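The "even accounting for the population difference" point above holds up under quick per-capita arithmetic. A minimal sketch, using the shooting counts cited in this post and approximate present-day populations (~38M for Canada, ~331M for the US):

```python
# Per-capita comparison of the school-shooting counts cited above.
# Counts are the post's figures since 1975; populations are rough
# present-day estimates in millions.

canada = {"shootings": 8, "population_m": 38}
usa = {"shootings": 500, "population_m": 331}

def per_million(country: dict) -> float:
    """Shootings per million residents."""
    return country["shootings"] / country["population_m"]

ratio = per_million(usa) / per_million(canada)
print(f"Canada: {per_million(canada):.2f} per million")  # ~0.21
print(f"US:     {per_million(usa):.2f} per million")     # ~1.51
print(f"US rate is roughly {ratio:.0f}x Canada's")       # ~7x
```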
Not sure why our immigration system is your concern; it's worked really well for us. A mosaic rather than a melting pot.
jcmarchi · 5 months
Text
MIT Emerging Talent opens pathways for underserved global learners
New Post has been published on https://thedigitalinsider.com/mit-emerging-talent-opens-pathways-for-underserved-global-learners/
Two ambitions drive Eric Tuyizere: advancing his technological skills and following his passion for entrepreneurship. In July 2023, when he discovered that MIT’s Emerging Talent program was launching the fifth cohort of its Certificate in Computer and Data Science, he applied right away. Seven months in, he says he has found even more than he dreamed of: community and support. This unexpected benefit has turned into a key motivation for Tuyizere as he combines work on the challenging curriculum with the demands of daily life. 
“Apart from being my colleagues on the Emerging Talent program, we are friends,” says Tuyizere, a learner from Rwanda. “I really like the community.”
Tuyizere is one of 100 individuals in Emerging Talent’s current cohort, which launched in September 2023. Selected from more than 2,000 applicants, 85 percent of these learners are refugees, migrants, or have been impacted by forced displacement. They join the ranks of the more than 160 individuals who have already completed the program.
The program is the brainchild of Admir Masic, who became a teenage refugee in Croatia in 1992 after escaping from the horrors of war that was devastating his homeland in Bosnia and Herzegovina. Today, Masic is an associate professor of civil and environmental engineering and a faculty fellow in archaeological materials at MIT. 
“I am overwhelmed with gratitude at having made it to MIT, a place that values innovation, science, and excellence, but also with a sense of responsibility,” Masic says. “There are millions of people forcibly displaced every year — for political, economic, social, or, more recently, climate change-related reasons. How can I do my part to support those who have come after me?” 
Inspired by his life experience and conviction, Masic founded the MIT Refugee Action Hub (ReACT) in 2017, with the goal of developing global education programs for refugees and displaced communities. To date, ReACT has offered its Certificate in Computer and Data Science to five cohorts of talented learners across the globe, helping them grow academically, advance their skills, leverage their expertise, and access a professional career in the tech field. Together, the certificate and ReACT are now MIT Emerging Talent, a program that extends the reach and impact of MIT's pioneering efforts to reach the most talented underserved learners. Part of the Abdul Latif Jameel World Education Lab at MIT Open Learning, Emerging Talent is expanding ReACT's proven model of upskilling refugees to other underrepresented communities around the world, including migrants, first-generation and low-income students, and historically excluded groups.
Hidden realities
According to the U.N. High Commission on Refugees, more than 110 million people were forcibly displaced worldwide as of May 2023. This number is equivalent to the population of the four largest states in the United States: California, Texas, Florida, and New York. It also marks the largest ever single-year increase propelled by ongoing wars, political instability, and civil conflicts. Learners in this year’s cohort come from 24 different countries, and are experiencing situations like war in Ukraine and Sudan, military persecution in Myanmar, dictatorship in Eritrea, and oppression by the Taliban in Afghanistan. Conflict-impacted learners from Ethiopia, Democratic Republic of Congo, Rwanda, and many other countries may each have their own unique story, but their shared experience of displacement drives their desire to build their skills and education in order to improve their situation. 
“It’s like a cultural exchange, we share things like songs and dances — everything which is interesting to our own culture helps us to be more interactive,” says Tuyizere, citing in particular a dance taught to him by one of his peers from Ukraine. 
Along with MIT’s trademark rigor and relevance, a key design principle for the program is adaptation to meet the unique needs of underrepresented talent and make them feel welcomed and part of a safe learning community. For Emerging Talent’s learners, adaptation is essential for enabling peer learning, capitalizing on multicultural perspectives to benefit all, and permitting appropriate flexibility for students who come from other education systems. 
“Education has always been a challenge for women in Afghanistan,” says Somaia Zabihi, who joined the Emerging Talent team in 2023 as a computer science instructor. “Going to college for a girl used to be as strange as planning a trip to the moon. In past years, especially in big cities, some progress had been made, and girls could think about their dreams instead of being forced into marriage. Unfortunately, with the Taliban in power, things have gone backwards, taking us back even further.” 
Zabihi previously worked as the dean of computer science faculty at the University of Herat in Afghanistan, but relocated with her family to Canada because of the ongoing situation in her home country. She is currently designing custom workshops on foundational skills, delivering recitation sessions, and holding office hours for the latest cohort of Emerging Talent learners. 
Fostering opportunities
The Emerging Talent program exemplifies MIT Open Learning’s Agile Continuous Education (ACE) model. Advanced by leading educators and researchers at MIT, the ACE model is focused on providing education in a flexible, cost-effective, and time-efficient manner by combining rigorous online learning with at-work application of knowledge. In the case of the Certificate in Computer and Data Science, learners complete MIT courses on edX, and apply learned skills and gain real-life experiences through capstone projects or internships. This allows them to customize their path based on personal preferences. To augment these skills, Emerging Talent works with organizations such as Paper Airplanes for English training; the Global Mentorship Initiative and MENTEE for mentoring opportunities; Close the Gap, Give Internet, and Unconnected for device access; and Na’amal for employability skills training. 
“Now that the learners have completed the required academic classes, they are honing their skills and interests through elective courses and group project work,” Megan Mitchell, associate director for Pathways for Talent, says of the current Emerging Talent cohort. “They will be actively pursuing job opportunities that will allow them to put to practice what they have learned and bring extensive value to the companies they join.” 
From high school graduates to advanced degree seekers, Emerging Talent learners apply to the Certificate in Computer and Data Science in search of opportunity. Over 70 percent of accepted learners have university degrees, yet 60 percent are unemployed, with forced geographic relocation, ongoing wars, overwhelming family responsibilities, and restrictive labor regulations to blame. The majority of those who are working are underemployed. Despite their varied situations, the program's diverse learners soon discover a shared desire to transform their careers by acquiring new skills and experience to enhance their professional competencies and adaptability. All are looking for a way to develop their technical capabilities and contribute to society. As Kaung Hein Htet expressed in his application to Emerging Talent: "Because of the current political crisis in Myanmar, I cannot accomplish my passion and do my favorite things. I want to become a data scientist who can help people around the world."
By looking beyond learners’ immediate circumstances, Emerging Talent ensures that every learner is given an equal opportunity to participate and benefit from being part of the community.
“I was seen for who I am, without proof or requirement to show my hard copy diploma evaluated by some other agency,” says Pavel Illin, an asylee from Russia currently living in the United States who completed the program in 2021. After graduating, Pavel began working at the New York City Mayor’s Office as a software engineer. “And the fact that I’ve been seen for just being there gives me hope that not everything is lost. It’s possible to succeed.” 
The Emerging Talent team is sourcing experiential learning opportunities for its current cohort. If you want to help support or engage a learner, email [email protected]
mariacallous · 1 year
Text
In the first four months of the Covid-19 pandemic, government leaders paid $100 million for management consultants at McKinsey to model the spread of the coronavirus and build online dashboards to project hospital capacity.
It's unsurprising that leaders turned to McKinsey for help, given the notorious backwardness of government technology. Our everyday experience with online shopping and search only highlights the stark contrast between user-friendly interfaces and the frustrating inefficiencies of government websites—or worse yet, the ongoing need to visit a government office to submit forms in person. The 2016 animated movie Zootopia depicts literal sloths running the DMV, a scene that was guaranteed to get laughs given our low expectations of government responsiveness.
More seriously, these doubts are reflected in the plummeting levels of public trust in government. From early Healthcare.gov failures to the more recent implosions of state unemployment websites, policymaking without attention to the technology that puts the policy into practice has led to disastrous consequences.
The root of the problem is that the government, the largest employer in the US, does not keep its employees up-to-date on the latest tools and technologies. When I served in the Obama White House as the nation’s first deputy chief technology officer, I had to learn constitutional basics and watch annual training videos on sexual harassment and cybersecurity. But I was never required to take a course on how to use technology to serve citizens and solve problems. In fact, the last significant legislation about what public professionals need to know was the Government Employee Training Act, from 1958, well before the internet was invented.
In the United States, public sector awareness of how to use data or human-centered design is very low. Out of 400-plus public servants surveyed in 2020, less than 25 percent received training in these more tech-enabled ways of working, though 70 percent said they wanted such training. 
But knowing how to use new technology does not have to be an afterthought, and in some places it no longer is. In Singapore, the Civil Service Training College requires technology and digital-skills training for its 145,000 civilian public servants. Canada’s “Busrides” training platform gives its quarter-million public servants short podcasts on topics like data science, AI, and machine learning to listen to during their commutes. In Argentina, career advancement and salary raises are tied to the completion of training in human-centered design and data-analytical thinking. When public professionals possess these skills—learning how to use technology to work in more agile ways, getting smarter from both data and community engagement—we all benefit.
Today I serve as chief innovation officer for the state of New Jersey, working to improve state websites that deliver crucial information and services. When New Jersey’s aging mainframe strained under the load of Covid jobless claims, for example, we wrote forms in plain language, simplified and eliminated questions, revamped the design, and made the site mobile-friendly. Small fixes that came from sitting down and listening to claimants translated into 48 minutes saved per person per application. New Jersey also created a Covid-19 website in three days so that the public had the information they wanted in one place. We made more than 134,000 updates as the pandemic wore on, so that residents benefited from frequent improvements.
Now with the explosion of interest in artificial intelligence, Congress is turning its attention to ensuring that those who work in government learn more about the technology. US senators Gary Peters (D-Michigan) and Mike Braun (R-Indiana) are calling for universal leadership training in AI with the AI Leadership Training Act, which is moving forward to the full Senate for consideration. The bill directs the Office of Personnel Management (OPM), the federal government's human resources department, to train federal leadership in AI basics and risks. However, it does not yet mandate the teaching of how to use AI to improve how the government works.
The AI Leadership Training Act is an important step in the right direction, but it needs to go beyond mandating basic AI training. It should require that the OPM teach public servants how to use AI technologies to enhance public service by making government services more accessible, providing constant access to city services, helping analyze data to understand citizen needs, and creating new opportunities for the public to participate in democratic decisionmaking.
For instance, cities are already experimenting with AI-based image generation for participatory urban planning, while San Francisco’s PAIGE AI chatbot is helping to answer business owners' questions about how to sell to the city. Helsinki, Finland, uses an AI-powered decisionmaking tool to analyze data and provide recommendations on city policies. In Dubai, leaders are not just learning AI in general, but learning how to use ChatGPT specifically. The legislation, too, should mandate that the OPM not just teach what AI is, but how to use it to serve citizens.
In keeping with the practice in every other country, the legislation should require that training to be free. This is already the case for the military. On the civilian side, however, the OPM is required to charge a fee for its training programs. A course titled Enabling 21st-Century Leaders, for example, costs $2,200 per person. Even if the individual applies to their organization for reimbursement, too often programs do not have budgets set aside for up-skilling.
If we want public servants to understand AI, we cannot charge them for it. There is no need to do so, either. Building on a program created in New Jersey, six states are now collaborating with each other in a project called InnovateUS to develop free live and self-paced learning in digital, data, and innovation skills. Because the content is all openly licensed and designed specifically for public servants, it can easily be shared across states and with the federal government as well.
The Act should also demand that the training be easy to find. Even if Congress mandates the training, public professionals will have a hard time finding it without the physical infrastructure to ensure that public servants can take and track their learning about tech and data. In Germany, the federal government’s Digital Academy offers a single site for digital up-skilling to ensure widespread participation. By contrast, in the United States, every federal agency has its own (and sometimes more than one) website where employees can look for training opportunities, and the OPM does not advertise its training across the federal government. While the Department of Defense has started building USALearning.gov so that all employees could eventually have access to the same content, this project needs to be accelerated.
The Act should also require that data on the outcomes of AI training be collected and published. The current absence of data on federal employee training prevents managers, researchers, and taxpayers from properly evaluating these training initiatives. More comprehensive information about our public workforce, beyond just demographics and job titles, could be used to measure the impact of AI training on cost savings, innovation, and performance improvements in serving the American public.
Unlike other political reforms that could take generations to achieve in our highly partisan and divisive political climate, investing in people—teaching public professionals how to use AI and the latest technology to work in more agile, evidence-based, and participatory ways to solve problems—is something we can do right now to create institutions that are more responsive, reliable, and deserving of our trust.
I understand the hesitance to talk about training people in government. When I worked for the Obama White House, the communications team was reluctant to make any public pronouncements about investing in government lest we be labeled “Big Government” advocates. Since the Reagan years, Republicans have promoted a “small government” narrative. But what matters to most Americans is not big or small but that we have a better government.
suhasini123 · 1 year
Text
Data science is an interdisciplinary field that combines various techniques, tools, and methodologies to extract valuable insights and knowledge from data. Datamites is an organization that offers various courses and training programs in the field of data science, artificial intelligence, and machine learning.