#Police AI monitoring system
bharatbriefs · 1 year ago
Text
AI is watching you! Ahmedabad becomes India's 1st city to get AI-linked surveillance system
In a groundbreaking development, Ahmedabad has emerged as the first city in India to employ artificial intelligence (AI) for comprehensive monitoring by the municipal corporation and police across the entire city. The city’s expansive Paldi area is now home to a state-of-the-art, AI-enabled command and control centre, featuring a remarkable 9-by-3-metre screen that oversees an…
1 note · View note
globalnewscollective · 2 months ago
Text
AI and Donald Trump Are Watching You—And It Could Cost You Everything
Imagine this: You post your thoughts online. Or you express support for human rights. Or you attend a peaceful protest. Months later, you find yourself denied a visa, placed on a watchlist, or even under investigation—all because an algorithm flagged you as a ‘threat.’ This isn’t a dystopian novel. It’s happening right now in the U.S.
How AI Is Being Weaponized Against Protesters and Online Speech
The Trump administration has rolled out AI-driven surveillance to monitor and target individuals based on their political beliefs and activities. According to reports, these systems analyze massive amounts of online data, including social media posts, protest attendance, and affiliations.
The goal? To identify and suppress dissent before it even happens.
Here’s what this means:
Attending a Protest Could Put You on a Government Watchlist – AI systems are being trained to scan for ‘suspicious behavior’ based on location data and social media activity.
Your Social Media History Can Be Used Against You – The government is using algorithms to flag people who express opinions that don’t align with Trump’s agenda.
Expressing Your Opinion Online Can Have Consequences – It’s not just about attending protests anymore. Simply posting criticism of the government, sharing articles, or even liking the ‘wrong’ post could get you flagged.
Dissenters Could Face Harsh Consequences – In some cases, simply supporting the wrong cause online could lead to visa denials, surveillance, or worse.
AI and Student Visa Bans: A Dangerous Precedent
Recently, AI was used to screen visa applicants for supposed ‘Hamas support,’ leading to students being denied entry to the U.S. without due process. This is alarming for several reasons:
False Positives Will Ruin Lives – AI systems are not perfect. Innocent people will be flagged, denied entry, or even deported based on misinterpretations of their online activity.
This Can Be Expanded to Anyone – Today, it’s foreign students. Tomorrow, it could be U.S. citizens denied jobs, housing, or government services for expressing their political views.
It Sets a Dangerous Global Example – If the U.S. normalizes AI-driven political suppression, other governments will follow.
Marco Rubio’s ‘Catch and Revoke’ Plan: A New Threat
Secretary of State Marco Rubio has introduced the ‘Catch and Revoke’ initiative, which would allow the U.S. government to scan immigrants’ social media with AI and strip them of their visas if they are deemed a ‘threat.’ This raises serious concerns about surveillance overreach and algorithm-driven repression, where immigrants could be punished for harmless or misinterpreted online activity. This policy could lead to:
Mass Deportations Based on AI Errors – Algorithms are prone to bias and mistakes, and immigrants may have no recourse to challenge these decisions.
Fear-Driven Self-Censorship – Many may feel forced to silence themselves online to avoid government scrutiny.
A Precedent for Broader Use – What starts with immigrants could easily be expanded to citizens, targeting dissenters and activists.
What’s at Stake?
The ability to speak freely, protest, and express opinions without fear of government retaliation is a fundamental right. If AI surveillance continues unchecked, America will become a place where thought crimes are punished, and digital footprints determine who is free and who is not.
The Bigger Picture
Technology that was meant to make life easier is now being turned against us. Today, it’s AI scanning protest footage. Tomorrow, it could be predictive policing, social credit systems, or AI-driven arrest warrants.
What Can You Do?
Be Mindful of Digital Footprints – Understand that what you post and where you go could be tracked.
Support Digital Rights Organizations – Groups like the ACLU and EFF are fighting against mass surveillance.
Demand Transparency – Governments must be held accountable for how they use AI and surveillance.
Freedom dies when people stop fighting for it. We must push back before AI turns democracy into an illusion.
Source:
https://www.fastcompany.com/91295390/how-the-trump-administration-plans-to-use-algorithms-to-target-protesters
67 notes · View notes
mariacallous · 3 months ago
Text
A young technologist known online as “Big Balls,” who works for Elon Musk's so-called Department of Government Efficiency (DOGE), has access to sensitive US government systems. But his professional and online history calls into question whether he would pass the background check typically required to obtain security clearances, security experts tell WIRED.
Edward Coristine, a 19-year-old high school graduate, established at least five different companies in the last four years, with entities registered in Connecticut, Delaware, and the United Kingdom, most of which were not listed on his now-deleted LinkedIn profile. Coristine also briefly worked in 2022 at Path Network, a network monitoring firm known for hiring reformed black-hat hackers. Someone using a Telegram handle tied to Coristine also solicited a cyberattack-for-hire service later that year.
Coristine did not respond to multiple requests for comment.
One of the companies Coristine founded, Tesla.Sexy LLC, was set up in 2021, when he would have been around 16 years old. Coristine is listed as the founder and CEO of the company, according to business records reviewed by WIRED.
Tesla.Sexy LLC controls dozens of web domains, including at least two Russian-registered domains. One of those domains, which is still active, offers a service called Helfie, an AI bot for Discord servers targeting the Russian market. While the operation of a Russian website would not violate US sanctions preventing Americans from doing business with Russian companies, it could potentially be a factor in a security clearance review.
"Foreign connections, whether it's foreign contacts with friends or domain names registered in foreign countries, would be flagged by any agency during the security investigation process," Joseph Shelzi, a former US Army intelligence officer who held security clearance for a decade and managed the security clearance of other units under his command, tells WIRED.
A longtime former US intelligence analyst, who requested anonymity to speak on sensitive topics, agrees. “There's little chance that he could have passed a background check for privileged access to government systems,” they allege.
Another domain under Coristine’s control is faster.pw. The website is currently inactive, but an archived version from October 25, 2022, shows content in Chinese stating that the service helped provide “multiple encrypted cross-border networks.”
Prior to joining DOGE, Coristine worked for several months in 2024 at Elon Musk’s Neuralink brain implant startup, and, as WIRED previously reported, is now listed in Office of Personnel Management records as an “expert” at that agency, which oversees personnel matters for the federal government. Employees of the General Services Administration say he also joined calls where they were made to justify their jobs and review code they had written.
Other elements of Coristine’s personal record reviewed by WIRED, government security experts say, would also raise questions about obtaining security clearances necessary to access privileged government data. These same experts further wonder about the vetting process for DOGE staff—and, given Coristine’s history, whether he underwent any such background check.
The White House did not immediately respond to questions about what level of clearance, if any, Coristine has, and how it was granted.
At Path Network, Coristine worked as a systems engineer from April to June of 2022, according to his now-deleted LinkedIn resume. Path has at times listed as employees Eric Taylor, also known as Cosmo the God, a well-known former cybercriminal and member of the hacker group UGNazis, as well as Matthew Flannery, an Australian convicted hacker who police allege was a member of the hacker group LulzSec. It’s unclear whether Coristine worked at Path concurrently with those hackers, and WIRED found no evidence that either Coristine or other Path employees engaged in illegal activity while at the company.
“If I was doing the background investigation on him, I would probably have recommended against hiring him for the work he’s doing,” says EJ Hilbert, a former FBI agent who also briefly served as the CEO of Path Network prior to Coristine’s employment there. “I’m not opposed to the idea of cleaning up the government. But I am questioning the people that are doing it.”
Potential concerns about Coristine extend beyond his work history. Archived Telegram messages shared with WIRED show that, in November 2022, a person using the handle “JoeyCrafter” posted to a Telegram channel focused on so-called distributed denial of service, or DDoS, cyberattacks that bombard victim sites with junk traffic to knock them offline. In his messages, JoeyCrafter—which records from Discord, Telegram, and the networking protocol BGP indicate was a handle used by Coristine—writes that he’s “looking for a capable, powerful and reliable L7” that accepts Bitcoin payments. That line, in the context of a DDoS-for-hire Telegram channel, suggests he was looking for someone who could carry out a layer 7 attack, a certain form of DDoS attack. A DDoS-for-hire service with the name Dstat.cc was seized in a multinational law enforcement operation last year.
The JoeyCrafter Telegram account had previously used the name “Rivage,” a name linked to Coristine on Discord and at Path, according to Path internal communications shared with WIRED. Both the Rivage Discord and Telegram accounts at times promoted Coristine’s DiamondCDN startup. It’s not clear whether the JoeyCrafter message was followed by an actual DDoS attack. (In the internal messages among Path staff, a question is asked about Rivage, at which point an individual clarifies they are speaking about “Edward.”)
"It does depend on which government agency is sponsoring your security clearance request, but everything that you've just mentioned would absolutely raise red flags during the investigative process," Shelzi, the former US Army intelligence officer says. He adds that a secret security clearance could be completed in as little as 50 days while a top secret security clearance could take anywhere from 90 days to a year to complete.
Coristine’s online history, including a LinkedIn account where he calls himself Big Balls, has recently disappeared. He also previously used an account on X with the username @edwardbigballer. The account had a bio that read: “Technology. Arsenal. Golden State Warriors. Space Travel.”
Prior to using the @edwardbigballer username, Coristine was linked to an account with the screen name “Steven French,” featuring a picture of what appears to be Humpty Dumpty smoking a cigar. In multiple posts from 2020 and 2021, the account can be seen responding to posts from Musk. Coristine’s X account is currently set to private.
Davi Ottenheimer, a longtime security operations and compliance manager, says many factors about Coristine’s employment history and online footprint could raise questions about his ability to obtain security clearance.
“Limited real work experience is a risk,” says Ottenheimer, as an example. “Plus his handle is literally Big Balls.”
27 notes · View notes
orphiclovers · 14 days ago
Text
genuinely shoutout to exile for writing a dystopia not about technology, AI, or politics, but about how fucked the criminal justice system is.
in our opening paragraph, we learn that our protagonist is a 'first degree felon'. we never hear the details of his crime or even what it was, since it doesn't matter, that's not the point. the author gives us no excuses, no justifications, no "what did he do to deserve it?", no opportunity to victim blame.
when we witness the way 329 is treated by the system and society, how he struggles, how he is abused, disenfranchised and discriminated against, it evokes only uncomplicated horror and sympathy, pity and righteous anger. it's barbaric, it's cruel, it's unreasonable, it's clearly not justice no matter what he's done in the past that we don't even know.
and the kicker is. the restrictions 329 suffers under are EXACTLY the same as real life criminals'. the dystopian scifi authoritarian dictatorship isn't any more cruel or unusual in its punishment of criminals than we are!! FUCK.
Being an Exile is an EXACT analog to being on probation.
329 is not allowed to leave the district he's stationed in or talk to other Exiles. He must get a check-up every 3 months to confirm he's not harming himself - otherwise he'll get sent to a "welfare center" - he must pay massive fines every month, he must wear a monitoring collar, and he must not get in any trouble with the police.
In various cases all of these apply to people on probation too. not allowed to leave the city, not allowed to talk to other co-defendants, must regularly check in with an officer, must not do drugs, must pay restitution, must wear an ankle monitor, must not get in trouble with the police.
Anyway, that is only a small fraction of 329's problems. ordinary citizens despise him too, see him as the scum of the earth, as not being punished enough - some brave souls take it into their own hands to enact justice. his home address is on a freely searchable government website (another thing that's also reality) so anyone can come and vandalize his living space with no repercussions.
"Exiles can, of course, go to the police, tell a Supervisor that they’ve been mugged, robbed, beaten, raped. This would serve no other purpose than wasting time. If the Supervisor was in a good mood, they’d send you off with some form response. If you encountered one who really hated the new Exile Laws (“you scum of society deserve to die Outside!”), the outcomes get a lot worse."
He can't go to the police either, since they won't help someone like him or might even do something worse. convicts got no rights or protection. so basically it's the exact same as our world.
well, there is one way that their punishment is more advanced than ours. two years ago, the scientists and doctors figured out a perfect, flawless way to prevent recidivism and repeat offenses, a way to permanently reform a criminal - something that's still a huge problem for us. it's just um. fucking horrifying.
And before putting on the collar, a small procedure is performed on the felons. Some people are born bad; the surgery can fix them. Some people were simply led astray by others; the surgery can wipe out those memories, restoring a contaminated soul to a pristine state.
It’s nothing like the lobotomies of old. Patients don’t become drooling idiots; they don’t even get the sort of major memory discontinuities that would affect daily life. They just lose the bad stuff. They’re fixed, saved. The surgery destroys the person who committed the crime; crime won’t even cross the mind of the person that remains.
and make no mistake, it DOES work, perfectly, 100% of the time, on everyone, and is totally irreversible. there is no catch. the problem isn't the fact it's not effective enough, it's the fact this is INSANE AND EVIL.
it's presented in this positive way, as "saving" and "fixing" a "contaminated soul" - and only performed on criminals who "deserve" it.
there are plenty of people in our real world, people I've talked to, who love the idea of torturing criminals. who think punishments for crimes should be harsher and crueler. in the name of justice of course. they would swallow this bullshit hook, line and sinker.
many people I know would think it's okay to hurt and violate a human like this, as long as it's a criminal, a "bad" person. even if they can see it's bad, it's worth it for the sake of the welfare of society at large. a small price to pay to win the war on crime. not considering even once that it could happen to them, that the government should not have the power to do this to anyone, because they will start with the undesirables. 329's crime was a political one, after all.
just. exile is really well-written social commentary. it accomplishes the goals of dystopic fiction - I sure am thinking about the state of our world now.
12 notes · View notes
thesilliestrovingalive · 7 months ago
Text
Updated: March 14, 2025
Reworked Group #4: S.P.A.R.R.O.W.S.
Overview
Tequila and Red Eye successfully dismantled a rogue military organisation engaged in illicit human trafficking and arms dealing, which had also planned to launch a global bioterrorist attack in collaboration with the Pipovulaj. The plot involved spreading a plague to control the population, transforming numerous innocent civilians into violent Man Eaters as a means to create a twisted form of super soldier. Impressed by Tequila and Red Eye's exceptional performance as highly capable spies, the Intelligence Agency and the Regular Army jointly established a covert operations branch, S.P.A.R.R.O.W.S., through a mutual agreement.
The S.P.A.R.R.O.W.S. is responsible for gathering intelligence and managing information to prevent public panic and global hysteria. They provide their members with specialised training in high-risk covert operations beyond the scope of regular Intelligence Agency agents; all such missions are conducted with utmost discretion and situational awareness. Some of these special covert operations involve precision targeting of high-priority threats and strategic disruption of complex criminal schemes.
Insignia
It features a cerulean square Iberian shield, rimmed with a spiky teal vine that’s outlined in bronze. Above the shield, the words "S.P.A.R.R.O.W.S." are inscribed in bluish-white, surmounting a stylized pair of bronze eyes with a yellowish-white star at their centre. The shield is flanked by a stylized peregrine falcon holding a gilded blade on the right side and a male house sparrow clutching an olive branch on the left side.
S.P.A.R.R.O.W.S. Base
The Intelligence Division is tactically positioned adjacent to the Joint Military Police Headquarters, deeply entrenched within a dense and remote forest in Northern Russia. The rectangular military compound features a forest-inspired camouflage colour scheme, a secure warehouse for military vehicles, multiple surveillance cameras, and several elevators leading to a subterranean base. They have a rooftop array of parabolic antennas that enables real-time surveillance, threat detection, and situational awareness, preventing surprise attacks and informing strategic decision-making. The base features comprehensive protection through an advanced security system and a defensive magnetic field, which automatically activates in response to potential threats, safeguarding against enemy attacks.
The base features a state-of-the-art command and surveillance centre, equipped with cutting-edge technological systems to orchestrate and execute operations. Additional facilities include:
An armoury housing the group’s most cutting-edge, high-clearance weaponry and specialised ordnance.
A high-tech meeting room with a high-resolution, encrypted display screen and multi-axis, AI-enhanced holographic projection system.
A state-of-the-art gymnasium for maintaining elite physical readiness, featuring biometric monitoring systems and AI-driven training programs.
A fully equipped, high-tech medical bay with regenerative treatment capabilities and telemedicine connectivity for remote expert consultation.
A secure dining area serving optimised, nutrient-rich rations for peak performance.
A high-security quarters with biometrically locked storage for personal gear and AI-monitored, secure communication arrays.
A Combat Academy, led by Margaret Southwood, featuring a heavily fortified training area with advanced combat simulation zones, tactical obstacle courses, stealth and surveillance training areas, and high-tech weapons testing ranges.
Extra Information
S.P.A.R.R.O.W.S. stands for Special Pursuit Agents and Rapid Response Operations Worldwide Strikeforce.
Members of the S.P.A.R.R.O.W.S. are commonly known as "Sparrowers" or "Following Falconers", reflecting their affiliation with the unit and their close relationship with the P.F. Squad.
Despite being part of an elite covert operations branch, Sparrowers face a significant pay disparity: males earn a quarter of the average government agent's salary, while females earn about a third. Additionally, underperforming Sparrowers, both male and female, experience further financial hardship due to delayed salary payments, often waiting between one to two months to receive their overdue compensation.
The S.P.A.R.R.O.W.S. conduct their covert operations in collaboration with the Peregrine Falcons Squad, which provides primary firepower and protection for their agents.
The handguns carried by Sparrowers are the Murder Model-1915 .38 Mk.1Am, or Classic Murder .38 for short. It’s a double-action revolver featuring a 6-round cylinder. Originally designed in 1915 to enhance the Enfield No.2 .38 Caliber revolver, the Murder Model retained only the frame and grip from the original; all other components were replaced with newer parts in later years.
11 notes · View notes
probablyasocialecologist · 2 years ago
Text
In case you missed it: artificial intelligence (AI) will make teachers redundant, become sentient, and soon, wipe out humanity as we know it. From Elon Musk, to the godfather of AI, Geoffrey Hinton, to Rishi Sunak’s AI advisor, industry leaders and experts everywhere are warning about AI’s mortal threat to our existence as a species.
They are right about one thing: AI can be harmful. Facial recognition systems are already being used to prevent possible protestors from exercising fundamental rights. Automated fraud detectors are falsely cutting off thousands of people from much-needed welfare payments, and surveillance tools are being used in the workplace to monitor workers’ productivity.
Many of us might be shielded from the worst harms of AI. Wealth, social privilege or proximity to whiteness and capital mean that many are less likely to fall prey to tools of societal control and surveillance. As Virginia Eubanks puts it, ‘many of us in the professional middle class only brush against [the invisible spider web of AI] briefly… We may have to pause a moment to extricate ourselves from its gummy grasp, but its impacts don’t linger.’
By contrast, it is well established that the worst harms of government decisions already fall hardest on those most marginalised. Take the example of drugs policing and its disproportionate impact on communities of colour. Though the evidence shows that Black people use drugs no more, and possibly less, than white people, the police direct efforts to identify drug-related crimes towards communities of colour. As a consequence, the data then shows that communities of colour are more likely to be ‘hotspots’ for drugs. In this way, policing efforts to ‘identify’ the problem create a problem in the eyes of the system, and the cycle of overpolicing continues. When you automate such processes, as with predictive policing tools based on racist and classist criminal justice data, these biases are further entrenched.
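To see why, consider a toy simulation of the loop just described. This is a minimal sketch under invented assumptions (two districts with identical true offence rates, and patrols sent wherever past arrests are highest), not a model of any real deployment:

```python
import random

# Toy simulation of the over-policing feedback loop described above.
# Both districts have the SAME underlying offence rate; only the
# recorded arrest history differs at the start. All numbers invented.

TRUE_OFFENCE_RATE = 0.05                          # identical everywhere
recorded = {"district_a": 12, "district_b": 10}   # small historical skew

random.seed(0)
for year in range(10):
    # "Predictive" allocation: send patrols to whichever district the
    # data says is the hotspot, i.e. the one with more past arrests.
    hotspot = max(recorded, key=recorded.get)
    # Offences can only be recorded where patrols are sent, so only
    # the hotspot's numbers grow, "confirming" the prediction.
    new_arrests = sum(random.random() < TRUE_OFFENCE_RATE
                      for _ in range(500))
    recorded[hotspot] += new_arrests

print(recorded)
# Roughly {'district_a': 260, 'district_b': 10}: identical behaviour,
# wildly different data.
```

Identical behaviour produces wildly different records, and the recorded data then appears to justify the original allocation, which is exactly the cycle the excerpt describes.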
85 notes · View notes
marsisfried · 2 months ago
Text
Psycho Pass Analysis Continued (Eps 14-17)
As discussed in my previous post, there's a strong connection between panopticism and this anime, whose foundation is the idea of surveillance and order. The Sibyl System functions as an omnipresent surveillance entity that continuously monitors citizens' mental states to maintain societal order. This parallels Foucault's description of the Panopticon, a system where individuals internalize discipline due to the constant possibility of being watched. In Psycho-Pass, people live in fear of their crime coefficients rising, leading them to self-regulate their behavior, which mirrors how Foucault's idea enforces social order without the direct need for coercion.
However, there comes a certain point where there's no means of dealing with certain crimes. When the helmet murderer picks the random girl (well, she seemed random in the moment) to kill, she sees the hammer in his hand and doesn't even react. Even the people around him, seeing her get struck by the hammer multiple times, have straight faces and no emotion, simply watching. Fujii Hiroko even gets assaulted and the AI simply says "I have detected that you are experiencing a great deal of stress" and recommends she seek mental care, while people just record and watch. Someone even says "This is good. real brutal stuff."
The society maintains its order on the assumption that everyone on the streets is a good citizen because, if they weren't, the Psycho-Pass system would catch them. Once people find a way to avoid the system (the helmets), there's nothing left to stop them. Kougami even questions what the world defines crime as, given that these citizens have never experienced a crime. The witness testimonies said they couldn't understand what was going on. This may be because the Sibyl System has never let them experience or see crime. All these people have lived up until now without considering that something like murder could even happen. No one reported the incident, no one in the crowd reacted, not even the victim. It was almost as if she didn't understand a crime was being committed against her.
It's almost ironic: the judicial system had been disbanded because people thought crimes simply could not be committed, since the Sibyl System would prevent them. Makishima himself says people have been misled by the Sibyl System to the point where they no longer recognize danger, even if it's staring them in the face. Even in the media, things are being said such as "There's no such thing as a murder here, right? lol" and "Does this mean it's dangerous no matter where you are?" This further proves the point that constant surveillance can be dangerous, because people stop recognizing what is in front of them. Even in the anime, there was essentially a black market for helmets to disrupt society. This mirrors how, when things are banned in the real world, people find a way around the ban regardless; this can be expected with most systems. However, because of the brutality shown by Yuuji, people now seem to react when someone is being killed, because they can recognize what a murder looks like.
"If their crime coefficient can't be measured there's no point in calling the police" This quote instills doubt in the whole police and safety system that the Sibyl system swore hard to protect. The whole system was based on the Chief's judgment who e learned to just be a few brains combined, even Makishima finds this idea foolish. There is a point where you can't surveillance a society because people have to police themselves. But to a degree where when they see violence or crime they know how to process and adresss these things. I actually really enjoyed this anime:).
4 notes · View notes
ghibligrrrl · 3 months ago
Text
👤Psycho-Pass👤
Ep. 1, 3, 4, & 5
Psycho-Pass is an anime that touches on many themes relevant to our current social climate and digital landscape. The story, which centers on law enforcement in a society of hyper-surveillance, explores ideas of privacy, dehumanization, isolation, parasocial relationships, and simulation. In conjunction with this anime, we were asked to read Foucault's "Panopticism" and Drew Harwell's 2019 Washington Post article "Colleges are turning students’ phones into surveillance machines, tracking the locations of hundreds of thousands." I think these choices expanded my understanding of the show and were extremely eye-opening when applied to our current culture.
Using the language of Foucault, the Sibyl System acts as a constant "supervisor," monitoring the emotional state of every citizen through a psycho-pass: a biometric reading of an individual's brain that reveals a specific hue and crime score, relaying how likely a person is to commit a crime or act violently. The brain, formerly the one place safe from surveillance, is now on display 24/7, creating a true panoptic effect. In this future dystopian Japan, criminals are dehumanized, and some, called enforcers, are used as tools to apprehend other criminals. They are constantly compared to dogs, and inspectors are warned not to get too emotionally invested in or close to them to avoid increasing their own crime scores. The show constantly presents criminals as lost causes, and even victims are cruelly given up on if the stress of the crimes against them raises their own crime scores too much. This concept is shown in episode 1, and I think it is meant to present Sibyl as an inherently flawed system from the start.
I think that the Washington Post article was extremely relevant to this anime, and even to my own life as a college student. Harwell writes that oftentimes monitoring begins with good intentions like preventing crime (as in Psycho-Pass) or identifying mental health issues. Universities across the US have started implementing mobile tracking software to monitor where students are, what areas they frequent, and whether or not they come to class. The developer of this software stated that algorithms can generate a risk score based on student location data to flag students who may be struggling with mental health issues. While this sounds helpful in theory, I can't help but notice how eerily similar this software is to the Sibyl System. Even high school students are sounding alarm bells after being subjected to increased surveillance in the interest of safety. In another of Harwell's articles published the same year, "Parkland school turns to experimental surveillance software that can flag students as threats," a student raised concerns about the technology's potential for abuse by law enforcement, stating, "my fear is that this will become targeted." After beginning Psycho-Pass, I honestly couldn't agree more. Supporters of AI surveillance systems argue that it's just another tool for law enforcement and that it's ultimately up to humans to make the right call, but in ep. 1 of Psycho-Pass, we saw just how easy it was for law enforcement to consider taking an innocent woman's life just because the algorithm determined that her crime score had risen past the acceptable threshold. And there are plenty of real-world examples of law enforcement making the wrong decisions in high-stress situations. AI has the potential to make more people the targets of police violence, either through technical error or built-in bias. As former Purdue University president Mitch Daniels stated in his op-ed "Someone is watching you," we have to ask ourselves "whether our good intentions are carrying us past boundaries where privacy and individual autonomy should still prevail."
I'm interested to see what the next episodes have in store. This is a series that I will probably continue watching outside of class. Finally some good f-ing food.
3 notes · View notes
darkmaga-returns · 4 months ago
Text
It’s bad enough that students are monitored for every computer keystroke or Internet page that they view, but the height of stupidity is to turn AI loose to assess their mental health and then send the police after them. One police chief told the NYT, “There are a lot of false alerts, but if we can save one kid, it’s worth a lot of false alerts.” This Technocrat mindset with students is guaranteed to find its way into the adult population. Big Brother is watching you.
[Embedded YouTube video]
This video is from the company GoGuardian Beacon. Find out if your local schools have bought this dystopian lunacy. ⁃ Patrick Wood, Editor.
“It was one of the worst experiences of her life.”
Schools are employing dubious AI-powered software to accuse teenagers of wanting to harm themselves and sending the cops to their homes as a result — with often chaotic and traumatic results.
As the New York Times reports, software being installed on high school students’ school-issued devices tracks every word they type. An algorithm then analyzes the language for evidence of teenagers wanting to harm themselves.
Unsurprisingly, the software can get it wrong by woefully misinterpreting what the students are actually trying to say. A 17-year-old in Neosho, Missouri, for instance, was woken up by the police in the middle of the night.
As it turns out, a poem she had written years ago triggered an alert from software called GoGuardian Beacon, which its maker describes as a way to “safeguard students from physical harm.”
“It was one of the worst experiences of her life,” the teen’s mother told the NYT.
Wellness Check
Internet safety software employed by educational tech companies took off during the COVID-19 shutdowns, leading to widespread surveillance of students in their own homes.
Many of these systems are designed to flag keywords or phrases to figure out if a teen is planning to hurt themselves.
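As a rough illustration of why this approach misfires, here is a toy keyword-matching sketch; the word list, sample messages, and logic are all invented for illustration, since vendors do not publish how their products actually work:

```python
# Naive keyword flagging of the kind described above. The keyword
# list and sample texts are invented; no vendor's real model is shown.

FLAG_WORDS = {"die", "hurt", "end it", "kill"}

def flag(text: str) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in FLAG_WORDS)

samples = [
    "I want to hurt myself",                           # genuine risk signal
    "the old year must die so spring can return",      # line from a poem
    "this level is so hard I want to end it all lol",  # gaming hyperbole
]

for s in samples:
    print(flag(s), "->", s)
# All three are flagged: plain keyword matching cannot tell a poem
# or a joke from a genuine cry for help, hence the false alerts.
```

A poem and a joke trip the same wire as a genuine cry for help, which is exactly the kind of false alert described in this article.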
But as the NYT reports, we have no idea if they’re at all effective or accurate, since the companies have yet to release any data.
Besides the false alarms, schools have reported that the systems have, at least some of the time, allowed them to intervene before students were at imminent risk.
However, the software remains highly invasive and could represent a massive invasion of privacy. Civil rights groups have criticized the tech, arguing that in most cases, law enforcement shouldn’t be involved, according to the NYT.
In short, is this really the best weapon against teen suicides, which have emerged as the second leading cause of death among individuals aged five to 24 in the US?
“There are a lot of false alerts,” Ryan West, chief of the police department in charge of the school of the 17-year-old, told the NYT. “But if we can save one kid, it’s worth a lot of false alerts.”
Others, however, tend to disagree with that assessment.
“Given the total lack of information on outcomes, it’s not really possible for me to evaluate the system’s usage,” Baltimore city councilman Ryan Dorsey, who has criticized these systems in the past, told the newspaper. “I think it’s terribly misguided to send police — especially knowing what I know and believe of school police in general — to children’s homes.”
Read full story here…
5 notes · View notes
endcriminalgangstalking · 2 months ago
Text
🛑 THE DIGITAL PLANTATION: HOW MODERN SLAVERY HAS BEEN REINVENTED 🛑
🚨 What if I told you slavery never ended? It just evolved. 🚨
The same banks that financed slavery now fund the prison-industrial complex.
The same intelligence agencies that ran COINTELPRO now run predictive policing.
The same military that occupied Afghanistan now arms local police with war machines.
🔻 Electronic monitoring is the new shackles.
🔻 Parole is the new 'freedom papers.'
🔻 Gang-stalking & AI policing are the new overseers.
🛑 Who they target:
✔️ Journalists & Whistleblowers 🗣️
✔️ Veterans & Activists 🏴
✔️ The Poor & Working Class 📉
✔️ Religious & Independent Thinkers 🙏
💡 HOW THEY CONTROL COMMUNITIES:
🔸 AI-driven predictive policing (real-life Minority Report) 🤖
🔸 Fusion centers tracking your every move 📡
🔸 Electronic monitoring replacing prison bars 🚨
🔸 Banking system keeping you enslaved through debt & surveillance 💳
🔸 Gang-stalking networks to isolate & destroy targets 🕵️‍♂️
They don’t need to arrest you if they can make you unemployable.
They don’t need to imprison you if they can make you unbankable.
They don’t need to execute you if they can make you disappear.
🔥 Read the full exposé and discover how the U.S. has turned into a modern-day digital plantation. 🔥
🔗 READ HERE: https://usagsa.substack.com/p/the-digital-plantation?r=1ez7y3
📢 Expose it. Share it. Fight back.
#SurveillanceState #MassIncarceration #AIpolicing #PrisonIndustrialComplex #MilitaryPolice #DigitalSlavery #COINTELPRO #MKUltra #GangStalking #PredictivePolicing #FBIControl #PoliceSurveillance #USAGSA #ExposingCorruption #DystopianReality #SocialCredit #FacialRecognition #BankingTyranny #DeepState #HumanRightsAbuse #PoliceMilitarization
2 notes · View notes
ethanswgstblog · 3 months ago
Text
Blog #2 due 2/6
What role does the digital economy play in shaping cyberfeminist practices?
The digital economy plays a crucial role in shaping cyberfeminist practices by both creating opportunities for empowerment and reinforcing existing inequalities. These opportunities allowed women to slowly become more aware of and familiar with online media. As more women joined online platforms, Daniels agreed that the internet was “a crucial medium for movement toward gender equity.” These technological advancements were not only for women in the US but also for women around the world.
How does the concept of “identity tourism” function in cyberfeminist forums, and what are its limitations?
In cyberfeminist discussions, Lisa Nakamura defines identity tourism as the process by which users "try on" identities of marginalized groups, which can lead to the appropriation and distortion of those identities rather than meaningful engagement (Daniels, 2009). While early cyberfeminists saw the internet as a space for identity fluidity, identity tourism exposes its limitations, allowing privileged users to adopt marginalized identities without facing real-world oppression. Rather than fostering genuine understanding, this often reinforces stereotypes and power imbalances, prompting cyberfeminists to advocate for ethical engagement over superficial appropriation.
What alternative approaches could be implemented to ensure that technology is used to empower rather than police vulnerable populations?
To ensure that technology empowers rather than polices vulnerable populations, several key approaches must be implemented, including increased transparency, community involvement, a shift from surveillance to support, and stronger legal protections. As Eubanks highlights, automated decision-making systems often lack public oversight, making it crucial to clarify how algorithms function, who they impact, and the rationale behind their decisions. Additionally, rather than allowing policymakers and private companies to dictate digital systems, participatory design should involve those most affected, such as welfare recipients and low-income families, in shaping these technologies. Technology should also be used to improve access to essential services rather than to predict fraud or police marginalized groups, streamlining benefits enrollment and reducing barriers to aid instead of reinforcing punitive measures. Furthermore, given that many automated systems disproportionately target vulnerable populations, policy reforms are necessary to establish ethical guidelines for AI and machine learning in public service programs. By implementing these approaches, technology can shift from a tool of control to one of empowerment.
In what ways do automated fraud detection systems disproportionately target marginalized communities?
As Eubanks explains, low-income individuals are more frequently subjected to digital monitoring and fraud detection due to systemic biases, government policies aimed at reducing welfare fraud, and the increasing use of automated decision-making systems that disproportionately scrutinize marginalized populations. She mentions that her untraditional family was denied access to their insurance coverage due to some missing digits, and she believed it to be a computer or AI problem (Eubanks). Another way automated fraud detection systems target these communities is through historical biases in data collection: some AI models rely on past data, which can encode racial and economic inequalities and lead the system to target those same communities.
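To make the "missing digits" failure concrete, here is a hypothetical sketch of a brittle exact-match eligibility check; the records, field names, and matching rule are all invented for illustration:

```python
# Hypothetical sketch of brittle exact-match record checking, the
# kind of failure behind an automated denial over missing digits.
# Records, field names, and the rule itself are invented.

enrolled = {("EUBANKS", "123456789")}  # (surname, full member ID)

def covered(surname: str, member_id: str) -> bool:
    # Exact match only: no fuzzy matching, no human review step.
    return (surname.upper(), member_id) in enrolled

print(covered("Eubanks", "123456789"))  # True: the record matches
print(covered("Eubanks", "1234567"))    # False: two digits missing,
# so the claim is automatically denied with no person in the loop
# to notice it is obviously the same family.
```

The point is not the code but the design: a system built to catch fraud treats every mismatch as a red flag, so ordinary data-entry errors fall hardest on the people least able to appeal them.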
Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. WSQ: Women’s Studies Quarterly, 37(1–2), 101–124. https://doi.org/10.1353/wsq.0.0158
Eubanks, V. (2018). Red Flags. In Automating inequality: How high-tech tools profile, police, and punish the poor (pp. 9–28). Tantor Media.
5 notes · View notes
tech-sphere · 3 months ago
Text
Tarun Wig: Co-Founder of Innefu Labs and Leader in Cybersecurity Innovation
Tarun Wig is the co-founder of Innefu Labs, an Indian cybersecurity company specializing in Artificial Intelligence (AI) solutions for national and cyber security. With over a decade of experience, Tarun is recognized as a leader in India's cybersecurity industry, known for pioneering innovative AI-driven security solutions.
Early Life and Education of Tarun Wig
Born and raised in a middle-class joint family in Delhi, Tarun developed a curiosity for technology and learning at an early age. His family, especially his elder sisters and brothers-in-law, played a key role in shaping his intellectual development. Tarun attended Montfort Senior Secondary School, where he excelled academically and developed a strong passion for technology.
He pursued Electronics and Communication Engineering at Bharati Vidyapeeth University, Pune. It was here that his interest in cybersecurity took root, as he explored the intersection of technology and security. During his time at university, he co-founded Appin Group, a venture that would lay the groundwork for his future entrepreneurial success.
Tarun Wig’s Entrepreneurial Journey: Appin Group to Innefu Labs
Tarun's first major venture, Appin Group, began as an educational initiative focused on training engineering students in emerging technologies. However, the company soon pivoted to providing cybersecurity services to government and corporate clients. Under his leadership, Appin grew rapidly, achieving a valuation of INR 14 crore within four years and securing its first round of funding.
Despite Appin’s success, Tarun felt the need to focus on developing homegrown cybersecurity products. This led him to sell his stake in Appin and co-found Innefu Labs in 2011, a company dedicated to building cutting-edge, AI-powered security solutions tailored to India's needs.
Innefu Labs: Revolutionizing Cybersecurity in India
At Innefu Labs, Tarun focused on creating innovative cybersecurity products rather than offering services. One of the company’s biggest achievements was the deployment of India’s first state-wide Internet monitoring system, a milestone that positioned Innefu Labs as a leader in national cybersecurity.
Innefu also developed a link analysis engine for the Delhi Police, which has been instrumental in solving complex criminal cases. The company has since partnered with major government agencies, including the Ministry of Defense, DRDO, and state police departments, helping to secure India’s digital infrastructure.
Global Recognition: Tarun Wig’s Impact on Cybersecurity
Tarun Wig’s expertise has earned him recognition as one of India’s Top 100 Cybersecurity Influencers. He is frequently invited to consult for intelligence agencies, law enforcement, and government organizations across the country. Tarun’s contributions to cybersecurity policy, AI-driven security solutions, and national security have earned him widespread respect within the industry.
Through his leadership, Innefu Labs has successfully competed with global cybersecurity firms, including those from the United States and Israel, by focusing on innovative solutions tailored to the Indian market.
The Vision for the Future: Scaling India’s Cybersecurity Products
Looking ahead, Tarun is focused on expanding Innefu Labs to dominate India’s cybersecurity product market. He believes that India’s IT and tech sector can reach its full potential only if more homegrown technology products are developed. Innefu Labs is at the forefront of this movement, creating high-quality AI-powered products that can compete globally.
Tarun's ultimate goal is to position Innefu Labs as a leader in India’s cybersecurity and AI ecosystem, offering products that protect national security and help businesses secure their digital infrastructure.
Conclusion: Tarun Wig’s Legacy in Cybersecurity Innovation
Tarun Wig has transformed the Indian cybersecurity landscape through his innovative approach and dedication to developing indigenous solutions. As the co-founder of Innefu Labs, he has led the company to become a trailblazer in AI-powered security technologies.
With a focus on cybersecurity products, national security, and AI innovation, Tarun’s work continues to have a lasting impact on India’s digital future. As Innefu Labs grows and evolves, Tarun remains committed to positioning India as a global leader in cybersecurity and technology product innovation.
By pioneering homegrown solutions and leveraging the power of AI, Tarun Wig is shaping the future of India’s cybersecurity industry. His vision and leadership are paving the way for a safer, more secure digital world.
2 notes · View notes
jcmarchi · 3 months ago
Text
Are AI-Powered Traffic Cameras Watching You Drive?
New Post has been published on https://thedigitalinsider.com/are-ai-powered-traffic-cameras-watching-you-drive/
Artificial intelligence (AI) is everywhere today. While that’s an exciting prospect to some, it’s an uncomfortable thought for others. Applications like AI-powered traffic cameras are particularly controversial. As their name suggests, they analyze footage of vehicles on the road with machine vision.
They’re typically a law enforcement measure — police may use them to catch distracted drivers or other violations, like a car with no passengers using a carpool lane. However, they can also simply monitor traffic patterns to inform broader smart city operations. In all cases, though, they raise possibilities and questions about ethics in equal measure.
How Common Are AI Traffic Cameras Today?
While the idea of an AI-powered traffic camera is still relatively new, they’re already in use in several places. Nearly half of U.K. police forces have implemented them to enforce seatbelt and texting-while-driving regulations. U.S. law enforcement is starting to follow suit, with North Carolina catching nine times as many phone violations after installing AI cameras.
Fixed cameras aren’t the only use case in action today, either. Some transportation departments have begun experimenting with machine vision systems inside public vehicles like buses. At least four cities in the U.S. have implemented such a solution to detect cars illegally parked in bus lanes.
With so many local governments using this technology, it’s safe to say it will likely grow in the future. Machine learning will become increasingly reliable over time, and early tests could lead to further adoption if they show meaningful improvements.
Rising smart city investments could also drive further expansion. Governments across the globe are betting hard on this technology. China aims to build 500 smart cities, and India plans to test these technologies in at least 100 cities. As that happens, more drivers may encounter AI cameras on their daily commutes.
Benefits of Using AI in Traffic Cameras
AI traffic cameras are growing for a reason. The innovation offers a few critical advantages for public agencies and private citizens.
Safety Improvements
The most obvious upside to these cameras is they can make roads safer. Distracted driving is dangerous — it led to the deaths of 3,308 people in 2022 alone — but it’s hard to catch. Algorithms can recognize drivers on their phones more easily than highway patrol officers can, helping enforce laws prohibiting these reckless behaviors.
Early signs are promising. The U.K. and U.S. police forces that have started using such cameras have seen massive upticks in tickets given to distracted drivers or those not wearing seatbelts. As law enforcement cracks down on such actions, it’ll incentivize people to drive safer to avoid the penalties.
AI can also work faster than other methods, like red light cameras. Because it automates the analysis and ticketing process, it avoids lengthy manual workflows. As a result, the penalty arrives soon after the violation, which makes it a more effective deterrent than a delayed reaction. Automation also means areas with smaller police forces can still enjoy such benefits.
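To make that automation concrete, the pipeline could look something like the sketch below. Every name and threshold here is an assumption for illustration, not any vendor's actual system:

```python
from dataclasses import dataclass

# Hedged sketch of an automated enforcement pipeline of the kind
# described above. All names and thresholds are assumptions; no
# vendor's actual API or model is being reproduced.

@dataclass
class Detection:
    plate: str         # from a plate-reading model
    violation: str     # e.g. "phone_use", "no_seatbelt"
    confidence: float  # the model's score for the violation

CONFIDENCE_THRESHOLD = 0.90  # below this, a human should review

def issue_ticket(plate: str, violation: str) -> None:
    print(f"ticket: {plate} for {violation}")  # penalty arrives quickly

def queue_for_human_review(d: Detection) -> None:
    print(f"review: {d.plate} ({d.violation}, score {d.confidence:.2f})")

def process(frame_detections: list[Detection]) -> None:
    for d in frame_detections:
        if d.confidence >= CONFIDENCE_THRESHOLD:
            issue_ticket(d.plate, d.violation)
        else:
            queue_for_human_review(d)          # ambiguous cases

process([Detection("AB12CDE", "phone_use", 0.97),
         Detection("XY34ZFG", "phone_use", 0.62)])
```

Where the confidence threshold sits, and whether a human actually reviews the ambiguous queue, is exactly the trade-off the next section turns to.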
Streamlined Traffic
AI-powered traffic cameras can minimize congestion on busy roads. The areas using them to catch illegally parked cars are a prime example. Enforcing bus lane regulations ensures public vehicles can stop where they should, avoiding delays or disruptions to traffic in other lanes.
Automating tickets for seatbelt and distracted driving violations has a similar effect. Pulling someone over can disrupt other cars on the road, especially in a busy area. By taking a picture of license plates and sending the driver a bill instead, police departments can ensure safer streets without adding to the chaos of everyday traffic.
Non-law-enforcement cameras could take this advantage further. Machine vision systems throughout a city could recognize congestion and update map services accordingly, rerouting people around busy areas to prevent lengthy delays. Considering how the average U.S. driver spent 42 hours in traffic in 2023, any such improvement is a welcome change.
Downsides of AI Traffic Monitoring
While the benefits of AI traffic cameras are worth noting, they’re not a perfect solution. The technology also carries some substantial potential downsides.
False Positives and Errors
The correctness of AI may raise some concerns. While it tends to be more accurate than people in repetitive, data-heavy tasks, it can still make mistakes. Consequently, removing human oversight from the equation could lead to innocent people receiving fines.
A software bug could cause machine vision algorithms to misidentify images. Cybercriminals could make such instances more likely through data poisoning attacks. While people could likely dispute their tickets and clear their name, it would take a long, difficult process to do so, counteracting some of the technology’s efficiency benefits.
False positives are a related concern. Algorithms can produce high false positive rates, leading to more charges against innocent people, which carries racial implications in many contexts. Because data biases can remain hidden until it’s too late, AI in government applications can exacerbate problems with racial or gender discrimination in the legal system.
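A quick worked example shows why false positive rates matter so much when violations are rare; the rates below are invented purely to illustrate the base-rate effect:

```python
# Base-rate arithmetic with invented, illustrative numbers only.
drivers = 100_000
violation_rate = 0.01        # 1% of drivers are actually on their phones
sensitivity = 0.95           # 95% of real violations get caught
false_positive_rate = 0.02   # 2% of innocent drivers get wrongly flagged

true_positives = drivers * violation_rate * sensitivity                 # 950
false_positives = drivers * (1 - violation_rate) * false_positive_rate  # 1980

share_wrong = false_positives / (true_positives + false_positives)
print(f"{false_positives:.0f} of {true_positives + false_positives:.0f} "
      f"flags are innocent drivers ({share_wrong:.0%})")
# With rare violations, most flagged drivers are innocent, even though
# the camera is right about 95-98% of individual judgements.
```

Under these invented rates, roughly two-thirds of flagged drivers would be innocent, which is why false positive rates deserve as much scrutiny as headline accuracy.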
Privacy Issues
The biggest controversy around AI-powered traffic cameras is a familiar one — privacy. As more cities install these systems, they record pictures of a larger number of drivers. So much data in one place raises big questions about surveillance and the security of sensitive details like license plate numbers and drivers’ faces.
Many AI camera solutions don’t save images unless they determine it’s an instance of a violation. Even so, their operation would mean the solutions could store hundreds — if not thousands — of images of people on the road. Concerns about government surveillance aside, all that information is a tempting target for cybercriminals.
U.S. government agencies suffered 32,211 cybersecurity incidents in 2023 alone. Cybercriminals are already targeting public organizations and critical infrastructure, so it’s understandable why some people may be concerned that such groups would gather even more data on citizens. A data breach in a single AI camera system could affect many who wouldn’t have otherwise consented to giving away their data.
What the Future Could Hold
Given the controversy, it may take a while for automated traffic cameras to become a global standard. Stories of false positives and concerns over cybersecurity issues may delay some projects. Ultimately, though, that’s a good thing — attention to these challenges will lead to necessary development and regulation to ensure the rollout does more good than harm.
Strict data access policies and cybersecurity monitoring will be crucial to justify widespread adoption. Similarly, government organizations using these tools should verify the development of their machine-learning models to check for and prevent problems like bias. Regulations like the recent EU Artificial Intelligence Act have already provided a legislative precedent for such qualifications.
AI Traffic Cameras Bring Both Promise and Controversy
AI-powered traffic cameras may still be new, but they deserve attention. Both the promises and pitfalls of the technology need greater attention as more governments seek to implement them. Higher awareness of the possibilities and challenges surrounding this innovation can foster safer development for a secure and efficient road network in the future.
5 notes · View notes
mariacallous · 11 months ago
Text
VSquare SPICY SCOOPS
BUDAPEST–BEIJING SECURITY PACT COVERTLY INCLUDES CHINESE SURVEILLANCE TECHNOLOGY
Fresh details regarding Xi Jinping’s May visit to Budapest have begun to surface. As was widely reported, a new security pact between Hungary and the People's Republic of China (PRC) allows Chinese law enforcement officers to conduct patrols within Hungary—which is to say, within a European Union member state. Chinese dissidents living in the EU fear that the PRC may abuse this agreement: Chinese policemen “can even go to European countries to perform secret missions and arbitrarily arrest dissidents,” as I reported in a previous Goulash newsletter.
However, there's an additional, as-yet-undisclosed aspect of this security arrangement. According to reliable sources familiar with recent Chinese-Hungarian negotiations, a provision permits the PRC to deploy surveillance cameras equipped with advanced AI capabilities, such as facial recognition software, on Hungarian territory. The Orbán government already maintains a significant surveillance infrastructure, including CCTV systems, and there are indications that, besides the Pegasus spyware, it may have acquired Israeli-developed facial recognition technology as well. Nevertheless, allowing the PRC to establish its own surveillance apparatus within Hungary raises distinct concerns. Even if purportedly intended to monitor Chinese investments, institutions, and personnel, the potential involvement of Chinese technology firms, some of which have ties to the People’s Liberation Army or Chinese intelligence and are subject to Western sanctions, could complicate Hungary's relations with its NATO allies.
The Hungarian government, when approached for comment, redirected inquiries to the Hungarian police, who claimed that Chinese policemen won’t be authorized to investigate or take any kind of action on their own. My questions on surveillance cameras and AI technology remained unanswered.
CHINA FURTHER SPLITS THE VISEGRÁD GROUP
One of the factors enabling Hungarian Prime Minister Viktor Orbán's maneuvers is the deep-seated division among his official allies, particularly evident within the Visegrád Group, regarding China. While Slovakia largely aligns with Hungary’s amicable stance towards both China and Russia, Poland adopts a more nuanced position, vehemently opposing the Kremlin while maintaining a softer approach towards China, as previously discussed in this newsletter. Conversely, the Czech Republic takes a hawkish stance towards both China and Russia. During a recent off-the-record discussion with journalists in Prague, a senior Czech official specializing in foreign policy candidly expressed skepticism about the efficacy of the V4 platform. “At this moment, it’s not possible to have a V4 common stance on China. I thought we already learned our lesson with the pandemic and how our supply chains [too dependent on China] were disrupted,” the Czech official said, adding, “I don’t know what needs to happen” for countries to realize the dangers of relying too heavily on China. The Czech official said Xi Jinping’s recent diplomatic visits to Paris, Belgrade, and Budapest were proof that China is using the "divide and conquer" tactic. The Czech official felt that it isn’t only Hungary and Slovakia that are neglecting the national security risks associated with Beijing, noting that “France doesn’t want to discuss China in NATO,” underscoring a broader reluctance among European nations to confront the challenges posed by China's growing influence.
CZECHS REMAIN STEADFAST IN SUPPORT OF TAIWAN, OTHERS MAY JOIN THEIR RANKS
In discussions with government officials and China experts both in Prague and Taipei, the Czech Republic and Lithuania emerged as the only countries openly supportive of Taiwan. This is partly attributed to the currently limited presence of Chinese investments and trade in these nations, affording them the freedom to adopt a more assertive stance. Tomáš Kopečný, the Czech government’s envoy for the reconstruction of Ukraine, emphasized in a conversation with journalists in Prague that regardless of which parties are in power, the Czech Republic’s policy toward China and Taiwan is unlikely to waver. When queried about the stance of the Czech opposition, Kopečný replied, “You could not have heard much anti-Taiwanese stance. Courting [China] was done by the Social Democrats, but not by the [strongest opposition party] ANO party. I don’t see a major player in Czech politics having pro-Chinese policies. It’s not a major domestic political issue.” This suggests that even in the event of an Andrej Babiš-led coalition, a shift in allegiance is improbable. In Taipei, both a Western security expert and a senior legislator from the ruling Democratic Progressive Party (DPP) asserted that numerous Western countries covertly provide support to Taiwan to avoid antagonizing China. The DPP legislator hinted that the training of a Taiwanese air force officer at the NATO Defence College in Rome is “just the tip of the iceberg.” The legislator quickly added with a smile, “the media reported it already, so I can say that.” Delving deeper, the Western expert disclosed that since Russia's aggression in Ukraine, there has been increased communication between Taiwan and EU countries, particularly those closely monitoring Russia, including on military matters. “There is a lot going on behind the scenes,” the expert noted, with the caveat that certain specifics remain confidential. When asked which Western countries might follow the lead of the Czechs and Lithuanians in openly supporting Taiwan, the expert suggested that most Central and Eastern European nations might be open to such alliances.
MCCONNELL’S CRITICISM OF ORBÁN PRECEDED BY KEY AIDE’S VISIT
In a significant setback to the Orbán government’s lobbying efforts aimed at US Republicans, Senate Minority Leader Mitch McConnell condemned Orbán's government for its close ties with China, Russia, and Iran during a recent Senate floor speech. “Orban’s government has cultivated the PRC as its top trading partner outside the EU. It’s given Beijing sweeping law enforcement authorities to hunt dissidents on Hungarian soil. It was the first European country to join Beijing’s Belt-and-Road Initiative, which other European governments – like Prime Minister Meloni’s in Italy – have wisely decided to leave,” McConnell stated. The speech appeared to come out of the blue, as there had been no prior indications of McConnell’s interest in Hungary. In reality, however, McConnell’s key aide on national security, Robert Karem, made an official trip to Budapest last October and held multiple meetings, according to a source familiar with the visit. Before working for McConnell, Karem served as an advisor to former Vice President Dick Cheney and as Assistant Secretary of Defense for International Security Affairs under the Trump administration. Multiple sources closely following US-Hungarian relations suggest that McConnell’s outspoken criticism of Orbán, despite the Hungarian Prime Minister’s recent visit to Donald Trump in Florida, is the clearest indication yet that Orbán may have crossed a red line by courting nearly all of the main adversaries of the US.
RUSSIAN PRESENCE AT HUNGARY'S PAKS PROJECT TO EXCEED 1,000 BY 2025
Russia’s nuclear industry is not yet under EU sanctions, and as a result, Rosatom’s Hungarian nuclear power plant project, Paks II, is still moving forward. While construction of the plant faces numerous regulatory hurdles, significant Russian involvement is anticipated in the city of Paks. A source directly engaged in the project revealed that the current contingent of Rosatom personnel and other Russian “experts” working on Paks II is projected to double or even triple in the coming year. “Presently, approximately 400 Russians are engaged in the Paks project, with expectations for this figure to surpass 1,000 by 2025,” the source disclosed. This disclosure is particularly noteworthy given the lack of precise public data on the exact number of Russians in Paks. Previous estimates, reportedly from the security apparatus of a certain Central European country, suggested a figure around 700 – a number that appears somewhat inflated to me. However, that number is anticipated to escalate rapidly. Notably, the staunchly anti-immigration Orbán government recently granted exemptions for “migrant workers” involved in both the Russian Paks II and the Chinese Belt and Road projects, such as the Budapest-Belgrade railway reconstruction, allowing them to obtain 5-year residency permits more easily. Central European security experts I’ve asked view the anticipated influx of Russian – and Chinese – workers into Hungary as a security concern for the entire region. Specifically, there are fears that Russia might deploy numerous new undercover intelligence operatives to the Paks II project, who could subsequently traverse other Schengen zone countries with ease. These concerns are not unfounded, as Russia has a history of leveraging state-owned enterprises like Rosatom to cloak its intelligence activities, according to Péter Buda, a former senior Hungarian counterintelligence officer. We reached out for comment, but the Hungarian government has yet to respond to inquiries regarding this matter. (For further insights into the Orbán government's involvement in the Rosatom project, read "How Orbán saved Russia’s Hungarian nuclear power plant project" by my esteemed Direkt36 colleagues.)
6 notes · View notes
facelessoldgargoyle · 10 months ago
Text
I really enjoyed Philosophy Bear’s latest post, “Let's delve into exploring the rich and dynamic tapestry of AI plagiarism or: You're not an AI detector.”
In one section, he points out that there are basically four ways to prevent the use of ChatGPT, and the only one that won’t be defeated in time is having solely in-class assignments, where students can be monitored. And that sucks!
Unless something drastic emerges, eventually all assessments will have to be in-class exams or quizzes. This is terrible, I won’t pretend otherwise, students will never learn to structure a proper essay and the richness of the world will be greatly impoverished by this feature of AI- not least of all because writing is one of the best ways to think about something. However, pretending you have a magic nose for AI that you most likely don’t have won’t fix this situation.
Justice, always and everywhere, no matter the level, means accepting the possibility that some (?most) wrongdoers who think before they act will get the better of the system and prove impossible to discover and convict. The root of so much injustice in sanctions and punishment is here, in an overestimation of our own ability to sniff it out, in turn, born of a fear of someone ‘getting the better of us’. But the bad guys often do get the better of us, there’s nothing more to be said.
This gets directly to the root of the issue! The damage done by trying to figure out whether a student has used AI is worse than the consequences of students getting away with using AI. I remember learning about the idea that false negatives are preferable in a justice system back in high school, and it’s undergirded my thinking about police, courts, and the enforcement of rules in general ever since. I’m always surprised to encounter people who either arrived at different conclusions or haven’t thought about it at all.
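To put illustrative numbers on that intuition, here's a minimal Python sketch of the base-rate problem behind AI detection. All the figures (share of students using AI, detector hit rate, false positive rate) are made-up assumptions for the example, not measurements of any real detector.

```python
# Minimal base-rate sketch: what fraction of essays *flagged* as AI
# are actually honest, human-written work? (All numbers illustrative.)

def false_accusation_share(base_rate: float, sensitivity: float,
                           false_positive_rate: float) -> float:
    """Probability that a flagged essay is human-written (Bayes' rule)."""
    flagged_ai = base_rate * sensitivity                    # true positives
    flagged_human = (1 - base_rate) * false_positive_rate   # false positives
    return flagged_human / (flagged_ai + flagged_human)

# Assume 20% of students use AI; the detector catches 90% of them
# but also flags 10% of honest essays.
share = false_accusation_share(0.20, 0.90, 0.10)
print(f"{share:.0%} of flagged essays are false accusations")  # ~31%
```

The exact numbers don't matter; the shape does. As long as most students are honest, even a small false positive rate produces a steady stream of wrongful accusations, which is exactly why pretending you have a magic nose for AI fails.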
Following from this position, I try to practice deliberate gullibility in my relationships. I believe that my loved ones have the right to privacy from me, and I trust that if they lie to me about something, it’s for a good reason. I wouldn’t make a blanket recommendation to do this—part of the reason this works for me is that I am good at setting boundaries and select for friends who act in good faith. However, I do think that people should be less tolerant of their loved ones “checking up” on them. Things like going through emails and texts, sharing phone/computer passwords, sharing locations, or asking a friend to test your partner’s loyalty are patently awful to me. The damage caused by treating people with ongoing suspicion is worse than just accepting that sometimes you will be hurt and betrayed by people.
#op
6 notes · View notes
female-malice · 1 year ago
Text
Why disinformation experts say the Israel-Hamas war is a nightmare to investigate
A combination of irresponsibility on the part of social media platforms and the emergence of AI tech makes the job of policing fake news harder than ever.
BY CHRIS STOKEL-WALKER
The Israel-Hamas conflict has been a minefield of confusing counter-arguments and controversies, creating an information environment that experts investigating mis- and disinformation say is among the worst they’ve ever experienced.
In the time since Hamas launched its terror attack against Israel last month—and Israel has responded with a weekslong counterattack—social media has been full of comments, pictures, and video from both sides of the conflict putting forward their case. But alongside real images of the battles going on in the region, plenty of disinformation has been sown by bad actors.
“What is new this time, especially with Twitter, is the clutter of information that the platform has created, or has given a space for people to create, with the way verification is handled,” says Pooja Chaudhuri, a researcher and trainer at Bellingcat, which has been working to verify or debunk claims from both the Israeli and Palestinian sides of the conflict, from confirming that the Israel Defense Forces struck the Jabalia refugee camp in northern Gaza to debunking the idea that the IDF has blown up some of Gaza’s most sacred sites.
Bellingcat has found plenty of claims and counterclaims to investigate, but convincing people of the truth has proven more difficult than in previous situations because of the firmly entrenched views on either side, says Chaudhuri’s colleague Eliot Higgins, the site’s founder.
“People are thinking in terms of, ‘Whose side are you on?’ rather than ‘What’s real,’” Higgins says. “And if you’re saying something that doesn’t agree with my side, then it has to mean you’re on the other side. That makes it very difficult to be involved in the discourse around this stuff, because it’s so divided.”
For Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), there have only been two moments prior to this that have proved as difficult for his organization to monitor and track: One was the disinformation-fueled 2020 U.S. presidential election, and the other was the hotly contested space around the COVID-19 pandemic.
“I can’t remember a comparable time. You’ve got this completely chaotic information ecosystem,” Ahmed says, adding that in the weeks since Hamas’s October 7 terror attack social media has become the opposite of a “useful or healthy environment to be in”—in stark contrast to what it used to be, which was a source of reputable, timely information about global events as they happened.
The CCDH has focused its attention on X (formerly Twitter), in particular, and is currently involved in a lawsuit with the social media company, but Ahmed says the problem runs much deeper.
“It’s fundamental at this point,” he says. “It’s not a failure of any one platform or individual. It’s a failure of legislators and regulators, particularly in the United States, to get to grips with this.” (An X spokesperson has previously disputed the CCDH’s findings to Fast Company, taking issue with the organization’s research methodology. “According to what we know, the CCDH will claim that posts are not ‘actioned’ unless the accounts posting them are suspended,” the spokesperson said. “The majority of actions that X takes are on individual posts, for example by restricting the reach of a post.”)
Ahmed contends that inertia among regulators has allowed antisemitic conspiracy theories to fester online to the extent that many people believe and buy into those concepts. Further, he says it has prevented organizations like the CCDH from properly analyzing the spread of disinformation and those beliefs on social media platforms. “As a result of the chaos created by the American legislative system, we have no transparency legislation. Doing research on these platforms right now is near impossible,” he says.
It doesn’t help when social media companies are throttling access to their application programming interfaces, through which many organizations like the CCDH do research. “We can’t tell if there’s more Islamophobia than antisemitism or vice versa,” he admits. “But my gut tells me this is a moment in which we are seeing a radical increase in mobilization against Jewish people.”
Right at the time when the most insight is needed into how platforms are managing the torrent of dis- and misinformation flooding their apps, there’s the least possible transparency.
The issue isn’t limited to private organizations. Governments are also struggling to get a handle on how disinformation, misinformation, hate speech, and conspiracy theories are spreading on social media. Some have reached out to the CCDH to try and get clarity.
“In the last few days and weeks, I’ve briefed governments all around the world,” says Ahmed, who declines to name those governments—though Fast Company understands that they may include the U.K. and European Union representatives. Advertisers, too, have been calling on the CCDH to get information about which platforms are safest for them to advertise on.
Deeply divided viewpoints are exacerbated not only by platforms tamping down on their transparency but also by technological advances that make it easier than ever to produce convincing content that can be passed off as authentic. “The use of AI images has been used to show support,” Chaudhuri says. This isn’t necessarily a problem for trained open-source investigators like those working for Bellingcat, but it is for rank-and-file users who can be hoodwinked into believing generative-AI-created content is real.
And even if those AI-generated images don’t sway minds, they can offer another weapon in the armory of those supporting one side or the other—a slur, similar to the use of “fake news” to describe factual claims that don’t chime with your beliefs, that can be deployed to discredit legitimate images or video of events.
“What is most interesting is anything that you don’t agree with, you can just say that it’s AI and try to discredit information that may also be genuine,” Chaudhuri says, pointing to users who have claimed an image of a dead baby shared by Israel’s account on X was AI—when in fact it was real—as an example of weaponizing claims of AI tampering. “The use of AI in this case,” she says, “has been quite problematic.”
5 notes · View notes