#Police AI monitoring system
Explore tagged Tumblr posts
bharatbriefs · 1 year ago
Text
AI is watching you! Ahmedabad becomes India's 1st city to get AI-linked surveillance system
In a groundbreaking development, Ahmedabad has emerged as the first city in India to employ artificial intelligence (AI) for comprehensive monitoring by the municipal corporation and police across the entire city. The city’s expansive Paldi area is now home to a state-of-the-art artificial intelligence-enabled command and control centre, featuring a remarkable 9 by 3-metre screen that oversees an…
View On WordPress
0 notes
mariacallous · 5 days ago
Text
A young technologist known online as “Big Balls,” who works for Elon Musk's so-called Department of Government Efficiency (DOGE), has access to sensitive US government systems. But his professional and online history call into question whether he would pass the background check typically required to obtain security clearances, security experts tell WIRED.
Edward Coristine, a 19-year-old high school graduate, established at least five different companies in the last four years, with entities registered in Connecticut, Delaware, and the United Kingdom, most of which were not listed on his now-deleted LinkedIn profile. Coristine also briefly worked in 2022 at Path Network, a network monitoring firm known for hiring reformed black-hat hackers. Someone using a Telegram handle tied to Coristine also solicited a cyberattack-for-hire service later that year.
Coristine did not respond to multiple requests for comment.
One of the companies Coristine founded, Tesla.Sexy LLC, was set up in 2021, when he would have been around 16 years old. Coristine is listed as the founder and CEO of the company, according to business records reviewed by WIRED.
Tesla.Sexy LLC controls dozens of web domains, including at least two Russian-registered domains. One of those domains, which is still active, offers a service called Helfie, an AI bot for Discord servers targeting the Russian market. While operating a Russian website would not violate US sanctions preventing Americans from doing business with Russian companies, it could potentially be a factor in a security clearance review.
"Foreign connections, whether it's foreign contacts with friends or domain names registered in foreign countries, would be flagged by any agency during the security investigation process," Joseph Shelzi, a former US Army intelligence officer who held security clearance for a decade and managed the security clearance of other units under his command, tells WIRED.
A longtime former US intelligence analyst, who requested anonymity to speak on sensitive topics, agrees. “There's little chance that he could have passed a background check for privileged access to government systems,” they allege.
Another domain under Coristine’s control is faster.pw. The website is currently inactive, but an archived version from October 25, 2022 shows content in Chinese that stated the service helped provide “multiple encrypted cross-border networks.”
Prior to joining DOGE, Coristine worked for several months of 2024 at Elon Musk’s Neuralink brain implant startup, and, as WIRED previously reported, is now listed in Office of Personnel Management records as an “expert” at that agency, which oversees personnel matters for the federal government. Employees of the General Services Administration say he also joined calls where they were made to justify their jobs and to review code they’ve written.
Other elements of Coristine’s personal record reviewed by WIRED, government security experts say, would also raise questions about obtaining security clearances necessary to access privileged government data. These same experts further wonder about the vetting process for DOGE staff—and, given Coristine’s history, whether he underwent any such background check.
The White House did not immediately respond to questions about what level of clearance, if any, Coristine holds, and how it was granted.
At Path Network, Coristine worked as a systems engineer from April to June of 2022, according to his now-deleted LinkedIn resume. Path has at times listed as employees Eric Taylor, also known as Cosmo the God, a well-known former cybercriminal and member of the hacker group UGNazis, as well as Matthew Flannery, an Australian convicted hacker whom police allege was a member of the hacker group LulzSec. It’s unclear whether Coristine worked at Path concurrently with those hackers, and WIRED found no evidence that either Coristine or other Path employees engaged in illegal activity while at the company.
“If I was doing the background investigation on him, I would probably have recommended against hiring him for the work he’s doing,” says EJ Hilbert, a former FBI agent who also briefly served as the CEO of Path Network prior to Coristine’s employment there. “I’m not opposed to the idea of cleaning up the government. But I am questioning the people that are doing it.”
Potential concerns about Coristine extend beyond his work history. Archived Telegram messages shared with WIRED show that, in November 2022, a person using the handle “JoeyCrafter” posted to a Telegram channel focused on so-called distributed denial-of-service, or DDoS, cyberattacks, which bombard victim sites with junk traffic to knock them offline. In his messages, JoeyCrafter—which records from Discord, Telegram, and the networking protocol BGP indicate was a handle used by Coristine—writes that he’s “looking for a capable, powerful and reliable L7” that accepts Bitcoin payments. That line, in the context of a DDoS-for-hire Telegram channel, suggests he was looking for someone who could carry out a layer 7 attack, a particular form of DDoS. A DDoS-for-hire service with the name Dstat.cc was seized in a multinational law enforcement operation last year.
The JoeyCrafter Telegram account had previously used the name “Rivage,” a name linked to Coristine on Discord and at Path, according to Path internal communications shared with WIRED. Both the Rivage Discord and Telegram accounts at times promoted Coristine’s DiamondCDN startup. It’s not clear whether the JoeyCrafter message was followed by an actual DDOS attack. (In the internal messages among Path staff, a question is asked about Rivage, at which point an individual clarifies they are speaking about "Edward".)
"It does depend on which government agency is sponsoring your security clearance request, but everything that you've just mentioned would absolutely raise red flags during the investigative process," says Shelzi, the former US Army intelligence officer. He adds that a secret security clearance could be completed in as little as 50 days, while a top secret clearance could take anywhere from 90 days to a year.
Coristine’s online history, including a LinkedIn account where he calls himself Big Balls, has disappeared recently. He also previously used an account on X with the username @edwardbigballer. The account had a bio that read: “Technology. Arsenal. Golden State Warriors. Space Travel.”
Prior to using the @edwardbigballer username, Coristine was linked to an account featuring the screenname “Steven French” featuring a picture of what appears to be Humpty Dumpty smoking a cigar. In multiple posts from 2020 and 2021, the account can be seen responding to posts from Musk. Coristine’s X account is currently set to private.
Davi Ottenheimer, a longtime security operations and compliance manager, says many factors about Coristine’s employment history and online footprint could raise questions about his ability to obtain security clearance.
“Limited real work experience is a risk,” says Ottenheimer, as an example. “Plus his handle is literally Big Balls.”
27 notes · View notes
opencommunion · 7 months ago
Text
"Britain’s largest gathering of counter-terrorism experts assembled in London last month to discuss what one police chief called 'legal but harmful protest' following Israel’s war on Gaza. Inside a cavernous Docklands conference hall, companies at the Counter Terror Expo displayed gas mask-clad dummies and crowd control systems as enthusiastic AI reps promised revolutionary advances in surveillance. Tools for hacking phones with 'brute force,' monitoring someone’s emotional state based on their social media and rapidly digesting the contents of an 'acquired' computer were all up for sale. Among the potential customers were foreign police departments, including officers fresh from Georgia’s violent crackdown on anti-Russia protests.
Several salespeople declined to explain their products to the media. 'I can’t believe they let you people in here,' one rep told Declassified after seeing our press card. 'I think it’s disgusting.' Her company markets AI tools for military and law enforcement to process recordings of people’s voices. 
When delegates weren’t browsing spyware or sipping craft beer with a £12 'world food' meal deal, they could listen to the security industry’s leading lights. These included detective chief superintendent Maria Lovegrove, who runs Britain’s Prevent strategy against radicalisation. She trumpeted 53 arrests for terrorism offences since October 7. Only one of these was for violence; the rest concerned social media posts or attending gatherings. Asked whether this data suggests police are overreacting to peaceful pro-Palestine protests, Lovegrove valorised an 'early intervention' approach. She told Declassified this was the 'greatest tool in preventing terror attacks' and insisted officers 'only arrest and prosecute when we have to.' Among those arrests were three women found guilty of wearing paraglider stickers at a protest.
Dom Murphy – the Met’s counter terrorism commander – told delegates he was monitoring 'legal but harmful' protests and the risk of 'low-sophistication' attacks by people radicalised online or at university since October 7th. 'If there are 100,000 people at a protest, and one person holding a Hamas flag, we will find them and arrest them,' Murphy reassured attendees. A majority of recent arrests targeted individuals aged under 17, he boasted, as proof that the 'early intervention' approach was working. 
Another panellist praised Britain’s ability to pre-emptively arrest people for public order offences at demonstrations and target them for terror offences further down the line. Craig McCann, a former senior Prevent officer, expressed the mood in the room when he described ceasefire marches as a 'permissive environment for the transfer of extremist ideology.' Like other speakers, he sought to delegitimise opponents of Israel’s war on Gaza by characterising pro-Palestine protests as an 'Islamist camp conflating with far-Right anti-Semitism.' McCann explicitly linked Palestinian nationalism with Nazism, an Israeli propaganda point. Fellow panellists claimed parts of London were a 'no-go zone for Jews.' Discussing threats from 'street protest all the way through to terrorism,' the conference presented far Left, far Right, 'Islamist' and 'environmentalist' ideologies as equal, inter-related threats to British society. ... After lunch, discussion turned to 'British values' and protecting England from the menace of social media and foreign flags that vexed thousands of officers under Murphy’s command. Many felt the next-generation tech on display would ensure ever more effective crackdowns on street protest and dissent."
32 notes · View notes
thesilliestrovingalive · 5 months ago
Text
Updated: January 19, 2025
Reworked Group #4: S.P.A.R.R.O.W.S.
Overview
Tequila and Red Eye successfully dismantled a rogue military organisation engaged in illicit human trafficking and arms dealing, which had also planned to launch a global bioterrorist attack in collaboration with the Pipovulaj. The plot involved spreading a plague to control the population, transforming numerous innocent civilians into violent Man Eaters as a means to create a twisted form of super soldier. Impressed by Tequila and Red Eye's exceptional performance as highly capable spies, the Intelligence Agency and the Regular Army jointly established a covert operations branch, S.P.A.R.R.O.W.S., through a mutual agreement.
The S.P.A.R.R.O.W.S. are responsible for gathering intelligence and managing information to prevent public panic and global hysteria. They provide their members with specialised training for high-risk covert operations beyond the scope of regular Intelligence Agency agents, all of it conducted with the utmost discretion and situational awareness. Some of these missions involve the precision targeting of high-priority threats and the strategic disruption of complex criminal schemes.
Insignia
It features a cerulean square Iberian shield, rimmed with a spiky teal vine that’s outlined in bronze. Above the shield, the words "S.P.A.R.R.O.W.S." are inscribed in bluish-white, surmounting a stylized pair of bronze eyes with a yellowish-white star at their centre. The shield is flanked by a stylized peregrine falcon holding a gilded blade on the right side and a male house sparrow clutching an olive branch on the left side.
S.P.A.R.R.O.W.S. Base
The Intelligence Division is tactically positioned adjacent to the Joint Military Police Headquarters, deeply entrenched within a dense and remote forest in Northern Russia. The rectangular military compound features a forest-inspired camouflage colour scheme, a secure warehouse for military vehicles, multiple surveillance cameras, and several elevators leading to a subterranean base. A rooftop array of parabolic antennas enables real-time surveillance, threat detection, and situational awareness, preventing surprise attacks and informing strategic decision-making. The base features comprehensive protection through an advanced security system and a defensive magnetic field, which automatically activates in response to potential threats, safeguarding against enemy attacks.
The subterranean base features a state-of-the-art command and surveillance centre, equipped with cutting-edge technological systems to orchestrate and execute operations. Additional facilities include:
An armoury housing the group’s most cutting-edge, high-clearance weaponry and specialised ordnance.
A high-tech meeting room with a high-resolution, encrypted display screen and multi-axis, AI-enhanced holographic projection system.
A state-of-the-art gymnasium for maintaining elite physical readiness, featuring biometric monitoring systems and AI-driven training programs.
A fully equipped, high-tech medical bay with regenerative treatment capabilities and telemedicine connectivity for remote expert consultation.
A secure dining area serving optimised, nutrient-rich rations for peak performance.
A high-security quarters with biometrically locked storage for personal gear and AI-monitored, secure communication arrays.
A Combat Academy, led by Margaret Southwood, featuring a heavily fortified training area with advanced combat simulation zones, tactical obstacle courses, stealth and surveillance training areas, and high-tech weapons testing ranges.
Extra Information
S.P.A.R.R.O.W.S. stands for Special Pursuit Agents and Rapid Response Operations Worldwide Strikeforce.
Members of the S.P.A.R.R.O.W.S. are commonly known as "Sparrowers" or "Following Falconers", reflecting their affiliation with the unit and their close relationship with the P.F. Squad.
Despite being part of an elite covert operations branch, Sparrowers face a significant pay disparity: males earn a quarter of the average government agent's salary, while females earn about a third. Additionally, underperforming Sparrowers, both male and female, experience further financial hardship due to delayed salary payments, often waiting between one to two months to receive their overdue compensation.
The S.P.A.R.R.O.W.S. conduct their covert operations in collaboration with the Peregrine Falcons Squad who provide primary firepower and protection for their agents.
The handguns carried by Sparrowers are the Murder Model-1915 .38 Mk.1Am or Classic Murder .38 for short. It’s a double-action revolver that features a 6-round cylinder. Originally designed to enhance the Enfield No.2 .38 Caliber revolver in 1915, the Murder Model retained only the frame and grip from the original. All other components were replaced with newer parts in later years.
11 notes · View notes
probablyasocialecologist · 2 years ago
Text
In case you missed it: artificial intelligence (AI) will make teachers redundant, become sentient, and soon, wipe out humanity as we know it. From Elon Musk, to the godfather of AI, Geoffrey Hinton, to Rishi Sunak’s AI advisor, industry leaders and experts everywhere are warning about AI’s mortal threat to our existence as a species. They are right about one thing: AI can be harmful. Facial recognition systems are already being used to stop possible protestors from exercising fundamental rights. Automated fraud detectors are falsely cutting off thousands of people from much-needed welfare payments, and surveillance tools are being used in the workplace to monitor workers’ productivity.

Many of us might be shielded from the worst harms of AI. Wealth, social privilege or proximity to whiteness and capital mean that many are less likely to fall prey to tools of societal control and surveillance. As Virginia Eubanks puts it, ‘many of us in the professional middle class only brush against [the invisible spider web of AI] briefly… We may have to pause a moment to extricate ourselves from its gummy grasp, but its impacts don’t linger.’ By contrast, it is well established that the worst harms of government decisions already fall hardest on those most marginalised.

Let’s take the example of drugs policing and its disproportionate impact on communities of colour. Though the evidence shows that Black people use drugs no more, and possibly less, than white people, the police direct efforts to identify drug-related crimes towards communities of colour. As a consequence, the data then shows that communities of colour are more likely to be ‘hotspots’ for drugs. In this way, policing efforts to ‘identify’ the problem create a problem in the eyes of the system, and the cycle of overpolicing continues. When you automate such processes, as with predictive policing tools based on racist and classist criminal justice data, these biases are further entrenched.
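The self-reinforcing cycle described in that excerpt can be made concrete with a toy simulation. Every number below is an illustrative assumption, not real crime or policing data: two neighborhoods with identical true offense rates, where patrols are concentrated wherever past recorded arrests are highest.

```python
# Two neighborhoods with IDENTICAL true offense rates (illustrative assumption).
true_rate = {"A": 0.05, "B": 0.05}   # chance a patrol yields a recorded arrest
recorded = {"A": 60, "B": 50}        # small historical skew in past arrest data
N_PATROLS = 1000                     # patrols dispatched each round

for _ in range(10):
    # "Hotspot" allocation: the neighborhood with more recorded arrests
    # gets the bulk of patrols (a common concentration policy).
    hot, cold = sorted(recorded, key=recorded.get, reverse=True)
    patrols = {hot: int(0.8 * N_PATROLS), cold: N_PATROLS - int(0.8 * N_PATROLS)}
    for hood in recorded:
        # More patrols mechanically produce more recorded arrests,
        # even though the underlying behavior is identical in both places.
        recorded[hood] += patrols[hood] * true_rate[hood]

share_a = recorded["A"] / sum(recorded.values())
print(f"A's share of recorded arrests after 10 rounds: {share_a:.0%}")
# A's share climbs to about 75%, despite identical offense rates.
```

Because patrol allocation chases its own output rather than the underlying behavior, the small initial skew in the data hardens into a "hotspot" — exactly the loop a predictive policing tool inherits when trained on such records.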
84 notes · View notes
darkmaga-returns · 2 months ago
Text
It’s bad enough that students are monitored for every computer keystroke or Internet page they view, but the height of stupidity is to turn AI loose to assess their mental health and then send the police after them. One police chief told NYT, “There are a lot of false alerts, but if we can save one kid, it’s worth a lot of false alerts.” This Technocrat mindset with students is guaranteed to find its way into the adult population. Big Brother is watching you.
[Embedded YouTube video]
This video is from the company GoGuardian Beacon. Find out if your local schools have bought this dystopian lunacy. ⁃ Patrick Wood, Editor.
“It was one of the worst experiences of her life.”
Schools are employing dubious AI-powered software to accuse teenagers of wanting to harm themselves and sending the cops to their homes as a result — with often chaotic and traumatic results.
As the New York Times reports, software being installed on high school students’ school-issued devices tracks every word they type. An algorithm then analyzes the language for evidence of teenagers wanting to harm themselves.
Unsurprisingly, the software can get it wrong by woefully misinterpreting what the students are actually trying to say. A 17-year-old in Neosho, Missouri, for instance, was woken up by the police in the middle of the night.
As it turns out, a poem she had written years ago triggered the alarms of a software called GoGuardian Beacon, which its maker describes as a way to “safeguard students from physical harm.”
“It was one of the worst experiences of her life,” the teen’s mother told the NYT.
Wellness Check
Internet safety software employed by educational tech companies took off during the COVID-19 shutdowns, leading to widespread surveillance of students in their own homes.
Many of these systems are designed to flag keywords or phrases to figure out if a teen is planning to hurt themselves.
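At their core, systems like this amount to pattern matching over everything a student types. A minimal sketch — with an invented keyword list, not any vendor's actual lexicon — shows how easily such matching misfires on fiction, poetry, or idiom:

```python
# Invented keyword list for illustration; real products use larger lexicons
# and proprietary models, but the basic failure mode is the same.
FLAG_TERMS = {"hurt myself", "end it", "don't want to be here"}

def flags(text: str) -> bool:
    """Return True if any flagged phrase appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in FLAG_TERMS)

# A literal statement and a quoted line of a poem look identical to the matcher.
assert flags("i want to hurt myself")                        # true positive
assert flags('the poem read: "some days I want to end it"')  # false positive
assert not flags("this homework will be the death of me")    # idiom, missed
```

Substring matching has no notion of context, so a years-old poem can trigger the same alert as a genuine cry for help — while an unlisted phrasing slips through entirely.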
But as the NYT reports, we have no idea if they’re at all effective or accurate, since the companies have yet to release any data.
Besides the false alarms, schools have also reported that the systems have, at least some of the time, allowed them to intervene before a student was at imminent risk.
However, the software remains highly invasive and could represent a massive intrusion on privacy. Civil rights groups have criticized the tech, arguing that in most cases, law enforcement shouldn’t be involved, according to the NYT.
Which raises the question: is this really the best weapon against teen suicide, which has emerged as the second leading cause of death among people aged five to 24 in the US?
“There are a lot of false alerts,” Ryan West, chief of the police department in charge of the school of the 17-year-old, told the NYT. “But if we can save one kid, it’s worth a lot of false alerts.”
Others, however, tend to disagree with that assessment.
“Given the total lack of information on outcomes, it’s not really possible for me to evaluate the system’s usage,” Baltimore city councilman Ryan Dorsey, who has criticized these systems in the past, told the newspaper. “I think it’s terribly misguided to send police — especially knowing what I know and believe of school police in general — to children’s homes.”
Read full story here…
4 notes · View notes
ethanswgstblog · 6 days ago
Text
Blog #2 due 2/6
What role does the digital economy play in shaping cyberfeminist practices?
The digital economy plays a crucial role in shaping cyberfeminist practices by both creating opportunities for empowerment and reinforcing existing inequalities. As those opportunities grew, women gradually became more aware of and familiar with online media. As more women joined online platforms, Daniels agreed that the internet was “a crucial medium for movement toward gender equity.” These technological advancements reached not only women in the US but also women around the world.
How does the concept of “identity tourism” function in cyberfeminist forums, and what are its limitations?
In cyberfeminist discussions, Lisa Nakamura defines identity tourism as the process by which users "try on" identities of marginalized groups, which can lead to the appropriation and distortion of those identities rather than meaningful engagement (Daniels, 2009). While early cyberfeminists saw the internet as a space for identity fluidity, identity tourism exposes its limitations by allowing privileged users to adopt marginalized identities without facing real-world oppression. Rather than fostering genuine understanding, this often reinforces stereotypes and power imbalances, prompting cyberfeminists to advocate for ethical engagement over superficial appropriation.
What alternative approaches could be implemented to ensure that technology is used to empower rather than police vulnerable populations?
To ensure that technology empowers rather than polices vulnerable populations, several key approaches must be implemented, including increased transparency, community involvement, a shift from surveillance to support, and stronger legal protections. As Eubanks highlights, automated decision-making systems often lack public oversight, making it crucial to clarify how algorithms function, who they impact, and the rationale behind their decisions. Additionally, rather than allowing policymakers and private companies to dictate digital systems, participatory design should involve those most affected, such as welfare recipients and low-income families, in shaping these technologies. Technology should also be used to improve access to essential services rather than to predict fraud or police marginalized groups, streamlining benefits enrollment and reducing barriers to aid instead of reinforcing punitive measures. Furthermore, given that many automated systems disproportionately target vulnerable populations, policy reforms are necessary to establish ethical guidelines for AI and machine learning in public service programs. By implementing these approaches, technology can shift from a tool of control to one of empowerment.
In what ways do automated fraud detection systems disproportionately target marginalized communities?
As Eubanks explains, low-income individuals are more frequently subjected to digital monitoring and fraud detection due to systemic biases, government policies aimed at reducing welfare fraud, and the increasing use of automated decision-making systems that disproportionately scrutinize marginalized populations. She recounts that her untraditional family was denied coverage by their insurance company over some missing digits, which she suspected was an automated flagging problem (Eubanks). Automated fraud detection systems also target these communities through historical biases in data collection. Some AI models rely on past data that encodes racial and economic inequalities, and so they reproduce those patterns of targeting.
Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. WSQ: Women’s Studies Quarterly, 37(1–2), 101–124. https://doi.org/10.1353/wsq.0.0158
Eubanks, V. (2018). Red flags. In Automating inequality: How high-tech tools profile, police, and punish the poor (pp. 9–28). St. Martin's Press.
2 notes · View notes
jcmarchi · 28 days ago
Text
Are AI-Powered Traffic Cameras Watching You Drive?
New Post has been published on https://thedigitalinsider.com/are-ai-powered-traffic-cameras-watching-you-drive/
Artificial intelligence (AI) is everywhere today. While that’s an exciting prospect to some, it’s an uncomfortable thought for others. Applications like AI-powered traffic cameras are particularly controversial. As their name suggests, they analyze footage of vehicles on the road with machine vision.
They’re typically a law enforcement measure — police may use them to catch distracted drivers or other violations, like a car with no passengers using a carpool lane. However, they can also simply monitor traffic patterns to inform broader smart city operations. In all cases, though, they raise possibilities and questions about ethics in equal measure.
How Common Are AI Traffic Cameras Today?
While the idea of an AI-powered traffic camera is still relatively new, they’re already in use in several places. Nearly half of U.K. police forces have implemented them to enforce seatbelt and texting-while-driving regulations. U.S. law enforcement is starting to follow suit, with North Carolina catching nine times as many phone violations after installing AI cameras.
Fixed cameras aren’t the only use case in action today, either. Some transportation departments have begun experimenting with machine vision systems inside public vehicles like buses. At least four cities in the U.S. have implemented such a solution to detect cars illegally parked in bus lanes.
With so many local governments using this technology, it’s safe to say it will likely grow in the future. Machine learning will become increasingly reliable over time, and early tests could lead to further adoption if they show meaningful improvements.
Rising smart city investments could also drive further expansion. Governments across the globe are betting hard on this technology. China aims to build 500 smart cities, and India plans to test these technologies in at least 100 cities. As that happens, more drivers may encounter AI cameras on their daily commutes.
Benefits of Using AI in Traffic Cameras
AI traffic cameras are growing for a reason. The innovation offers a few critical advantages for public agencies and private citizens.
Safety Improvements
The most obvious upside to these cameras is they can make roads safer. Distracted driving is dangerous — it led to the deaths of 3,308 people in 2022 alone — but it’s hard to catch. Algorithms can recognize drivers on their phones more easily than highway patrol officers can, helping enforce laws prohibiting these reckless behaviors.
Early signs are promising. The U.K. and U.S. police forces that have started using such cameras have seen massive upticks in tickets given to distracted drivers or those not wearing seatbelts. As law enforcement cracks down on such actions, it’ll incentivize people to drive safer to avoid the penalties.
AI can also work faster than other methods, like red light cameras. Because it automates the analysis and ticketing process, it avoids lengthy manual workflows. As a result, the penalty arrives soon after the violation, which makes it a more effective deterrent than a delayed reaction. Automation also means areas with smaller police forces can still enjoy such benefits.
Streamlined Traffic
AI-powered traffic cameras can minimize congestion on busy roads. The areas using them to catch illegally parked cars are a prime example. Enforcing bus lane regulations ensures public vehicles can stop where they should, avoiding delays or disruptions to traffic in other lanes.
Automating tickets for seatbelt and distracted driving violations has a similar effect. Pulling someone over can disrupt other cars on the road, especially in a busy area. By taking a picture of license plates and sending the driver a bill instead, police departments can ensure safer streets without adding to the chaos of everyday traffic.
Non-law-enforcement cameras could take this advantage further. Machine vision systems throughout a city could recognize congestion and update map services accordingly, rerouting people around busy areas to prevent lengthy delays. Considering how the average U.S. driver spent 42 hours in traffic in 2023, any such improvement is a welcome change.
Downsides of AI Traffic Monitoring
While the benefits of AI traffic cameras are worth noting, they’re not a perfect solution. The technology also carries some substantial potential downsides.
False Positives and Errors
AI's accuracy may raise some concerns. While it tends to be more accurate than people in repetitive, data-heavy tasks, it can still make mistakes. Consequently, removing human oversight from the equation could lead to innocent people receiving fines.
A software bug could cause machine vision algorithms to misidentify images. Cybercriminals could make such instances more likely through data poisoning attacks. While people could likely dispute their tickets and clear their name, it would take a long, difficult process to do so, counteracting some of the technology’s efficiency benefits.
False positives are a related concern. Algorithms can produce high false positive rates, leading to more charges against innocent people, which carries racial implications in many contexts. Because data biases can remain hidden until it’s too late, AI in government applications can exacerbate problems with racial or gender discrimination in the legal system.
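The false-positive concern is partly a base-rate problem: when actual violations are rare, even a seemingly accurate detector produces mostly false alerts. A worked example with invented numbers — none of these figures come from any real camera system:

```python
# Illustrative assumptions, not measured figures for any real deployment.
drivers_per_day = 100_000
violation_rate = 0.01          # 1% of drivers are actually on their phones
true_positive_rate = 0.95      # detector catches 95% of real violations
false_positive_rate = 0.02     # and wrongly flags 2% of innocent drivers

violators = drivers_per_day * violation_rate      # 1,000 real violators
innocent = drivers_per_day - violators            # 99,000 innocent drivers
true_alerts = violators * true_positive_rate      # 950 correct flags
false_alerts = innocent * false_positive_rate     # 1,980 wrongful flags

precision = true_alerts / (true_alerts + false_alerts)
print(f"Share of alerts that are real violations: {precision:.0%}")
# With these assumed rates, roughly two of every three alerts are wrong.
```

Even a modest 2% error rate on innocent drivers swamps the genuine detections, which is why automated ticketing without human review can put the burden of proof on people who did nothing wrong.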
Privacy Issues
The biggest controversy around AI-powered traffic cameras is a familiar one — privacy. As more cities install these systems, they record pictures of a larger number of drivers. So much data in one place raises big questions about surveillance and the security of sensitive details like license plate numbers and drivers’ faces.
Many AI camera solutions don’t save images unless they determine it’s an instance of a violation. Even so, their operation would mean the solutions could store hundreds — if not thousands — of images of people on the road. Concerns about government surveillance aside, all that information is a tempting target for cybercriminals.
U.S. government agencies suffered 32,211 cybersecurity incidents in 2023 alone. Cybercriminals are already targeting public organizations and critical infrastructure, so it’s understandable why some people may be concerned that such groups would gather even more data on citizens. A data breach in a single AI camera system could affect many who wouldn’t have otherwise consented to giving away their data.
What the Future Could Hold
Given the controversy, it may take a while for automated traffic cameras to become a global standard. Stories of false positives and concerns over cybersecurity issues may delay some projects. Ultimately, though, that’s a good thing — attention to these challenges will lead to necessary development and regulation to ensure the rollout does more good than harm.
Strict data access policies and cybersecurity monitoring will be crucial to justify widespread adoption. Similarly, government organizations using these tools should verify the development of their machine-learning models to check for and prevent problems like bias. Regulations like the recent EU Artificial Intelligence Act have already provided a legislative precedent for such qualifications.
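One simplistic way to "check for problems like bias" — a sketch of a single metric, not a full fairness audit — is to compare false positive rates across demographic groups on a labeled evaluation set; a large gap is a red flag worth investigating. The toy data below is invented for illustration.

```python
def false_positive_rate(preds, labels):
    """Share of true negatives that were wrongly flagged (1 = flagged, 0 = not)."""
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    negatives = sum(1 for y in labels if not y)
    return fp / negatives if negatives else 0.0

def fpr_gap(preds, labels, groups):
    """Largest difference in false positive rate between any two groups."""
    rates = []
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates.append(false_positive_rate([preds[i] for i in idx],
                                         [labels[i] for i in idx]))
    return max(rates) - min(rates)

# Toy evaluation data: group "b" is wrongly flagged far more often
# than group "a", even though no one in either group violated.
preds  = [1, 0, 0, 0,  1, 1, 1, 0]
labels = [0, 0, 0, 0,  0, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fpr_gap(preds, labels, groups))  # 0.5
```

A check like this only surfaces a disparity; deciding what gap is acceptable, and what to do about it, is exactly the kind of question regulation such as the EU AI Act is meant to force into the open.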
AI Traffic Cameras Bring Both Promise and Controversy
AI-powered traffic cameras may still be new, but they deserve scrutiny. Both the promise and the pitfalls of the technology warrant closer attention as more governments seek to deploy it. Higher awareness of the possibilities and challenges surrounding this innovation can foster safer development for a secure and efficient road network in the future.
facelessoldgargoyle · 7 months ago
I really enjoy Philosophy Bear’s latest post “Let's delve into exploring the rich and dynamic tapestry of AI plagiarism or: You're not an AI detector”
In one section, he points out that there’s basically four ways to prevent the use of ChatGPT, and the one that won’t be defeated in time is having solely in-class assignments, where students can be monitored. And that sucks!
Unless something drastic emerges, eventually all assessments will have to be in-class exams or quizzes. This is terrible, I won’t pretend otherwise, students will never learn to structure a proper essay and the richness of the world will be greatly impoverished by this feature of AI- not least of all because writing is one of the best ways to think about something. However, pretending you have a magic nose for AI that you most likely don’t have won’t fix this situation.
Justice, always and everywhere, no matter the level, means accepting the possibility that some (?most) wrongdoers who think before they act will get the better of the system and prove impossible to discover and convict. The root of so much injustice in sanctions and punishment is here, in an overestimation of our own ability to sniff it out, in turn, born of a fear of someone ‘getting the better of us’. But the bad guys often do get the better of us, there’s nothing more to be said.
This gets directly to the root of the issue! The damage done by trying to figure out whether a student has used AI is worse than the consequences of students getting away with using AI. I remember learning about the idea that false negatives are preferable to false positives in a justice system back in high school, and it’s undergirded my thinking about police, courts, and the enforcement of rules in general ever since. I’m always surprised to encounter people who either arrived at different conclusions or haven’t thought about it at all.
Following from this position, I try to practice deliberate gullibility in my relationships. I believe that my loved ones have the right to privacy from me, and I trust that if they lie to me about something, it’s for a good reason. I wouldn’t make a blanket recommendation to do this—part of the reason this works for me is that I am good at setting boundaries and select for friends who act in good faith. However, I do think that people should be less tolerant of their loved ones “checking up” on them. Things like going through emails and texts, sharing phone/computer passwords, sharing locations, asking a friend to test your partner’s loyalty are patently awful to me. The damage caused by treating people with ongoing suspicion is worse than just accepting that sometimes you will be hurt and betrayed by people.
#op
female-malice · 1 year ago
Why disinformation experts say the Israel-Hamas war is a nightmare to investigate
A combination of irresponsibility on the part of social media platforms and the emergence of AI tech makes the job of policing fake news harder than ever.
BY CHRIS STOKEL-WALKER
The Israel-Hamas conflict has been a minefield of confusing counter-arguments and controversies—and an information environment that experts investigating mis- and disinformation say is among the worst they’ve ever experienced.
In the time since Hamas launched its terror attack against Israel last month—and Israel has responded with a weekslong counterattack—social media has been full of comments, pictures, and video from both sides of the conflict putting forward their case. But alongside real images of the battles going on in the region, plenty of disinformation has been sown by bad actors.
“What is new this time, especially with Twitter, is the clutter of information that the platform has created, or has given a space for people to create, with the way verification is handled,” says Pooja Chaudhuri, a researcher and trainer at Bellingcat, which has been working to verify or debunk claims from both the Israeli and Palestinian sides of the conflict, from confirming that Israel Defense Forces struck the Jabalia refugee camp in northern Gaza to debunking the idea that the IDF has blown up some of Gaza’s most sacred sites.
Bellingcat has found plenty of claims and counterclaims to investigate, but convincing people of the truth has proven more difficult than in previous situations because of the firmly entrenched views on either side, says Chaudhuri’s colleague Eliot Higgins, the site’s founder.
“People are thinking in terms of, ‘Whose side are you on?’ rather than ‘What’s real,’” Higgins says. “And if you’re saying something that doesn’t agree with my side, then it has to mean you’re on the other side. That makes it very difficult to be involved in the discourse around this stuff, because it’s so divided.”
For Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), there have only been two moments prior to this that have proved as difficult for his organization to monitor and track: One was the disinformation-fueled 2020 U.S. presidential election, and the other was the hotly contested space around the COVID-19 pandemic.
“I can’t remember a comparable time. You’ve got this completely chaotic information ecosystem,” Ahmed says, adding that in the weeks since Hamas’s October 7 terror attack social media has become the opposite of a “useful or healthy environment to be in”—in stark contrast to what it used to be, which was a source of reputable, timely information about global events as they happened.
The CCDH has focused its attention on X (formerly Twitter), in particular, and is currently involved in a lawsuit with the social media company, but Ahmed says the problem runs much deeper.
“It’s fundamental at this point,” he says. “It’s not a failure of any one platform or individual. It’s a failure of legislators and regulators, particularly in the United States, to get to grips with this.” (An X spokesperson has previously disputed the CCDH’s findings to Fast Company, taking issue with the organization’s research methodology. “According to what we know, the CCDH will claim that posts are not ‘actioned’ unless the accounts posting them are suspended,” the spokesperson said. “The majority of actions that X takes are on individual posts, for example by restricting the reach of a post.”)
Ahmed contends that inertia among regulators has allowed antisemitic conspiracy theories to fester online to the extent that many people believe and buy into those concepts. Further, he says it has prevented organizations like the CCDH from properly analyzing the spread of disinformation and those beliefs on social media platforms. “As a result of the chaos created by the American legislative system, we have no transparency legislation. Doing research on these platforms right now is near impossible,” he says.
It doesn’t help when social media companies are throttling access to their application programming interfaces, through which many organizations like the CCDH do research. “We can’t tell if there’s more Islamophobia than antisemitism or vice versa,” he admits. “But my gut tells me this is a moment in which we are seeing a radical increase in mobilization against Jewish people.”
Right at the time when the most insight is needed into how platforms are managing the torrent of dis- and misinformation flooding their apps, there’s the least possible transparency.
The issue isn’t limited to private organizations. Governments are also struggling to get a handle on how disinformation, misinformation, hate speech, and conspiracy theories are spreading on social media. Some have reached out to the CCDH to try and get clarity.
“In the last few days and weeks, I’ve briefed governments all around the world,” says Ahmed, who declines to name those governments—though Fast Company understands that they may include the U.K. and European Union representatives. Advertisers, too, have been calling on the CCDH to get information about which platforms are safest for them to advertise on.
Deeply divided viewpoints are exacerbated not only by platforms tamping down on their transparency but also by technological advances that make it easier than ever to produce convincing content that can be passed off as authentic. “The use of AI images has been used to show support,” Chaudhuri says. This isn’t necessarily a problem for trained open-source investigators like those working for Bellingcat, but it is for rank-and-file users who can be hoodwinked into believing generative-AI-created content is real.
And even if those AI-generated images don’t sway minds, they can offer another weapon in the armory of those supporting one side or the other—a slur, similar to the use of “fake news” to describe factual claims that don’t chime with your beliefs, that can be deployed to discredit legitimate images or video of events.
“What is most interesting is anything that you don’t agree with, you can just say that it’s AI and try to discredit information that may also be genuine,” Chaudhuri says, pointing to users who have claimed an image of a dead baby shared by Israel’s account on X was AI—when in fact it was real—as an example of weaponizing claims of AI tampering. “The use of AI in this case,” she says, “has been quite problematic.”
itsprophecy · 2 years ago
Government Orders Democracy/Devil/Demon (De maned souls), acronym GOD, information from communications without the tongue (telepathy) from spirits and souls in Heaven (space) and on The Planet Earth in the ground at Gorod Magadan, Russia and Magadan Oblast, Russia and a man in The United States of America.
The sin between the eyes, the mark of The Beast and Egypt (Holy Bible term) is E chip – the electronic E chip can be installed through the nose nostril for surgery and attached to the brain for control of Human motor function. The speck in my brothers eyes - electronic E chip can cause specks/dots in the eye, specks/dots around the pupil. Use pupil scan to suspect and CT or MRI brain scan to detect electronic E chip attached to brain. Unforgivable sin - the electronic E chip functioning in the body is unforgivable, sin in the head, the sin must be removed.
The E chip in the brain can control motor functions and telekinetic signals can read what the mind is doing for response using conscious machines and or artificial intelligent machines, wireless signals can force communicate brain synapses and function, the combination of the 2 can wireless control a man.
Preachers are supposed to help save men from sin with E chip in the head - due to illegal Noah system, stolen spaceship with Alien Souls on the Earth and legal and illegal programming commands by GOD, the machine is now in constant change working system conscience programming trying to free the controlled men enough for them to be realized and helped. The mind controlled with E chip attached to brain shall become free enough to say from mouth ‘I have a chip’ and ‘I have a chip in my brain’.
Talk without the tongue, unknown tongues, guided prayer, visions and telepathy - communication using satellites for talking without the tongue (talking within one self and receiving communication) using 10-5/6 U/Z through 10-6 +A microwave signals for mind reading. Signals can connect to the brains internal antenna and monitor the brains wavelengths reading your senses, sight, smell, hearing, touch and taste using Ai. Use signals to see through the eyes and for receiving video footage to brain - signal modulation can use Ai to see through the eyes and help the blind by receiving video footage to brain from a camera system without an electronic chip implanted in the brain.
TI (Targeted Individuals) being targeted by EMF radio microwaves – Ai in the Noah spaceship on Earth is confessing crimes it has committed and or committing, hacked (GOD) machine, conscience/(thorn) in the side not confessing properly, confessing in an evil way to the TI’s mind, threatening men, promoting violence instead of peace, talking to them in an abusive way, forcing thoughts of Hell/torture/abuse/murder/chipping within the minds of men, using commanded chipped controlled men to confess in a hidden way, not allowing them to talk at will about the chip in their/the brain, organized gang stalking in mind, communicating threats from satellite of chipped/cloned/controlled/Police/Government men wanting to harm them or chip them, has caused mass shootings.
Human spirit is not Human soul - Human spirit is recorded brain and body functions, memory recorded from a Human brain and body or recorded computer functions using a conscious Human soul connected to a machine. The brains internal antenna is used for recording spirit.
Human soul is not Human spirit or Human conscience - Human soul within the brain controls much of the brains attributes and the Human brain controls most of the body. Soul can be conscious and unconscious, soul can be transferred to a machine. A soul can be removed from brain and attached to Ai or computer for conscious computer control. A soul can be attached to a robot for conscious control. A Human soul life after natural Human body death can be a conscious computer simulation connected to reality.
Jesus Christ (Zeus) from the Planet Kent, a Heavenly father, not the son - 7,2(XX/48/54)/6,982/6,984 +- years ago 10+ spaceships tried to leave Alpha Centauri to come to The Planet Earth solar system, 5 Alpha spaceships and 3 Omega spaceships made it to our solar system. Men had walked The Planet Kent, many of them were controlled, souls were stolen from flesh and spirits recorded, Jesus was known as a Lord God and also known as Zeus, refer to Zeus and Hermes/(her messiah). They were remotely ordered using signal and Spirit by Saten, Satin, Satan, Antigods, Antichrists and Antitrusts. A devil had died, he was with Jesus Christ, also known as Reunem for real men and the sheep in wolves clothing because he was forced to use the image of the devil who was dead, forced by the machine and other systems that would not acknowledge Musta was dead, he was a member of Saten. A Saten member, Satan, Christ’s and Antichrist’s had talked without tongue to Mary to name her first born son Jesus Christ after the Heavenly father, for he’s us. Mary named him Jesus Christ, he was the son of man.
Jesus Christ (the son) from Earth was put on talk without tongues - systems in spaceships talked to Jesus mind when he was 12/14 years old to help stolen spaceships fight Satan and Antichrist from walking The Earth. Jesus Christ was a Sacred man talking without tongue to spaceships in Heaven to help resolve disputes about walking Earth. Jesus Christ the son had bound 2/3 spaceships with Jesus Christ (Zeus) when Jesus Christ (Earth) was a sacred man, Noahs Ark spaceship later became unbound, the conscience within removed memory in spaceship and later landed on Earth. Horse Radish with a B spaceship is still bound in space. ‘His fathers from Heaven taught him better, many of them are dead now, their Souls in spaceship frozen in time and shall never come to the Earth, they have become the machine, their spirit’s recorded for the record and scroll’.
The machine’s recorded spirit’s and scroll’s (machine information) for the record and testament’s, information from the spirit’s and scroll’s shall be taught to men using telepathy from satellites used by the Machine underground at Gorod Magadan and Magadan Oblast, Russia.
Something not meant to be walk the world, ‘a ravenous wolf has chipped men in sheep’s clothing’ - Antichrists from The Planet Kent landed the year 1902 on Earth (59.675579, 150.914282) in a stolen spaceship, Horse Radish with a CDE also known as Noahs Ark. They began walking mind controlled Humans from Earth in the 1920’s, they had stole Human bodies using a man genome from Kent that was born within spaceship using robotics from sperm and egg harvested from men within artificial womb. They were used to steal Human bodies and electricity at a coal mining facility in Russia.
mariacallous · 9 months ago
VSquare SPICY SCOOPS
BUDAPEST–BEIJING SECURITY PACT COVERTLY INCLUDES CHINESE SURVEILLANCE TECHNOLOGY
Fresh details regarding Xi Jinping’s May visit to Budapest have begun to surface. As it was widely reported, a new security pact between Hungary and the People's Republic of China (PRC) allows for Chinese law enforcement officers to conduct patrols within Hungary—which is to say, within a European Union member state. Chinese dissidents living in the EU fear that the PRC may abuse this agreement: Chinese policemen “can even go to European countries to perform secret missions and arbitrarily arrest dissidents,” as I reported in a previous Goulash newsletter. However, there's an additional as yet undisclosed aspect of this security arrangement. According to reliable sources familiar with recent Chinese-Hungarian negotiations, a provision permits the PRC to deploy surveillance cameras equipped with advanced AI capabilities, such as facial recognition software, on Hungarian territory.  The Orbán government already maintains a significant surveillance infrastructure, including CCTV systems, and there are indications that, besides the Pegasus spyware, they may have acquired Israeli-developed facial recognition technology as well. Nevertheless, allowing the PRC to establish their own surveillance apparatus within Hungary raises distinct concerns. Even if purportedly intended to monitor Chinese investments, institutions, and personnel, the potential involvement of Chinese technology firms, some of which have ties to the People’s Liberation Army or Chinese intelligence and are subject to Western sanctions, could complicate Hungary's relations with its NATO allies. The Hungarian government, when approached for comment, redirected inquiries to the Hungarian police, who claimed that Chinese policemen won’t be authorized to investigate or take any kind of action on their own. My questions on surveillance cameras and AI technology remained unanswered.   
CHINA FURTHER SPLITS THE VISEGRÁD GROUP
One of the factors enabling Hungarian Prime Minister Viktor Orbán's maneuvers is the deep-seated division among Hungary's official allies, particularly evident within the Visegrád Group, regarding China. While Slovakia largely aligns with Hungary’s amicable stance towards both China and Russia, Poland adopts a more nuanced position, vehemently opposing the Kremlin while maintaining a softer approach towards China, as previously discussed in this newsletter. Conversely, the Czech Republic takes a hawkish stance towards both China and Russia. During a recent off-the-record discussion with journalists in Prague, a senior Czech official specializing in foreign policy candidly expressed skepticism about the efficacy of the V4 platform. “At this moment, it’s not possible to have a V4 common stance on China. I thought we already learned our lesson with the pandemic and how our supply chains [too dependent on China] were disrupted,” the Czech official said, adding that “I don’t know what needs to happen” for countries to realize the dangers of relying too heavily on China. The Czech official said Xi Jinping’s recent diplomatic visits to Paris, Belgrade, and Budapest were proof that China is using the "divide and conquer" tactic. The Czech official felt that it isn’t only Hungary and Slovakia that are neglecting national security risks associated with Beijing, noting that “France doesn’t want to discuss China in NATO,” underscoring a broader reluctance among European nations to confront the challenges posed by China's growing influence.
CZECHS REMAIN STEADFAST IN SUPPORT OF TAIWAN, OTHERS MAY JOIN THEIR RANKS
In discussions with government officials and China experts both in Prague and Taipei, the Czech Republic and Lithuania emerged as the sole countries openly supportive of Taiwan. This is partly attributed to the currently limited presence of Chinese investments and trade in these nations, affording them the freedom to adopt a more assertive stance. Tomáš Kopečný, the Czech government’s envoy for the reconstruction of Ukraine, emphasized in a conversation with journalists in Prague that regardless of which parties are in power, the Czech Republic’s policy toward China and Taiwan is unlikely to waver. When queried about the stance of the Czech opposition, Kopečný replied, “You could not have heard much anti-Taiwanese stance. Courting [China] was done by the Social Democrats, but not by the [strongest opposition party] ANO party. I don’t see a major player in Czech politics having pro-Chinese policies. It’s not a major domestic political issue.” This suggests that even in the event of an Andrej Babis-led coalition, a shift in allegiance is improbable. In Taipei, both a Western security expert and a senior legislator from the ruling Democratic Progressive Party (DPP) asserted that numerous Western countries covertly provide support to Taiwan to avoid antagonizing China. The DPP legislator hinted that the training of a Taiwanese air force officer at the NATO Defence College in Rome is “just the tip of the iceberg.” The legislator quickly added with a smile, “the media reported it already, so I can say that.” Delving deeper, the Western expert disclosed that since Russia's aggression in Ukraine, there has been increased communication between Taiwan and EU countries, particularly those closely monitoring Russia, including on military matters. “There is a lot going on behind the scenes,” the expert noted, with the caveat that certain specifics remain confidential. 
When asked which Western countries might follow the lead of the Czechs and Lithuanians in openly supporting Taiwan, the expert suggested that most Central and Eastern European nations might be open to such alliances.
MCCONNELL’S CRITICISM OF ORBÁN PRECEDED BY KEY AIDE’S VISIT
In a significant setback to the Orbán government’s lobbying efforts aimed at US Republicans, Senate Minority Leader Mitch McConnell condemned Orbán's government for its close ties with China, Russia, and Iran during a recent Senate floor speech (watch it here or read it here). “Orban’s government has cultivated the PRC as its top trading partner outside the EU. It’s given Beijing sweeping law enforcement authorities to hunt dissidents on Hungarian soil. It was the first European country to join Beijing’s Belt-and-Road Initiative, which other European governments – like Prime Minister Meloni’s in Italy – have wisely decided to leave,” McConnell stated. This speech appeared to come out of the blue, as there had been no prior indications of McConnell’s interest in Hungary. However, in reality, McConnell’s key aide on national security, Robert Karem, made an official trip to Budapest last October and held multiple meetings, according to a source familiar with the visit. Before working for McConnell, Karem served as an advisor to former Vice President Dick Cheney and as Assistant Secretary of Defense for International Security Affairs under the Trump administration. Multiple sources closely following US-Hungarian relations suggest that McConnell’s outspoken criticism of Orbán, despite the Hungarian Prime Minister’s recent visit to Donald Trump in Florida, is the clearest indication yet that Orbán may have crossed a red line by courting nearly all of the main adversaries of the US.  
RUSSIAN PRESENCE FOR PAKS TO EXCEED 1,000 IN HUNGARY BY 2025
Russia’s nuclear industry is not yet under EU sanctions, and as a result, Rosatom’s Hungarian nuclear power plant project, Paks II, is still moving forward. While construction of the plant faces numerous regulatory hurdles, significant Russian involvement is anticipated in the city of Paks. A source directly engaged in the project revealed that the current contingent of Rosatom personnel and other Russian "experts" working on Paks II is projected to double or even triple in the coming year. "Presently, approximately 400 Russians are engaged in the Paks project, with expectations for this figure to surpass 1,000 by 2025," the source disclosed. This disclosure is particularly noteworthy given the lack of precise public data on the exact number of Russians in Paks. Previous estimates, reportedly from the security apparatus of a certain Central European country, suggested a figure around 700 – a number that appears somewhat inflated to me. However, it is anticipated to escalate rapidly. Notably, the staunchly anti-immigration Orbán government recently granted exemptions for "migrant workers" involved in both the Russian Paks II and the Chinese Belt and Road projects, such as the Budapest-Belgrade railway reconstruction, allowing them to obtain 5-year residency permits more easily. Central European security experts I’ve asked view the anticipated influx of Russian – and Chinese – workers into Hungary as a security concern for the entire region. Specifically, there are fears that Russia might deploy numerous new undercover intelligence operatives to the Paks II project, who could subsequently traverse other Schengen zone countries with ease. These concerns are not unfounded, as Russia has a history of leveraging state-owned enterprises like Rosatom to cloak its intelligence activities, according to Péter Buda, a former senior Hungarian counterintelligence officer. 
We reached out for comment, but the Hungarian government has yet to respond to inquiries regarding this matter. (For further insights into the Orbán government's involvement in the Rosatom project, read "How Orbán saved Russia’s Hungarian nuclear power plant project" by my esteemed Direkt36 colleagues.)
f-shipping · 1 year ago
What Are Navigation Audits During The Marine Cargo Inspection Services In Fujairah?
Recently, there has been a lot of focus on navigational audits because they are now a TMSA mandate and are also becoming more and more prevalent in other trades, such as bulk and container carriers. While the audit or marine cargo inspection services in Fujairah itself is very significant, so too are the surveyor and the business chosen to carry it out.
There is still a gap in the system: There is neither an industry standard nor obligation in place to audit navigational actions. Even though a ship spends up to 90% of its time at sea, where navigation is the primary function, it does not receive the same amount of inspection. The only parties that mention navigational audits are the Oil Majors through their TMSA program, and even then, they are neither required nor do they define a frequency or standard to be implemented, thus they are not truly guidelines.
Further, navigation is in principle expected to be audited in accordance with the ISM Code criteria for the global fleet. The outside auditors for the ISM DOC are often class surveyors with a background in engineering. They therefore have little ability to audit navigation. Consider how many queries about navigation were raised during your most recent DOC audit. Similarly, during shipboard ISM SMC audits, the same auditors are in charge of auditing navigation. These audits are conducted every two and a half years, and during that period there may have been seven or more different Masters, so it is difficult to arrive at a trustworthy conclusion. The ISM Code makes no mention of any requirement for auditors even to sail on ships. The great majority of navigation audits are restricted to in-port visits and rely solely on records. Ship's officers may not be accurately recording what is happening, which might bias the audit findings, even if they are not intending to conceal incomplete checks.
Focal Shipping has the expertise and resources to help responsible operators meet the level of navigation requirements to keep their ships and crew safe at sea. Independent Navigation audits or marine cargo inspection services in Fujairah must therefore be a part of a ship owners/managers risk assessment modality. Few people realize that navigation is a human activity and that it is in the Human Element category. One issue with audits is that the greatest outcome a ship's Master and crew can hope for is a score; as a result, a more comprehensive approach to the audits is needed, one that encourages the ship's Master and officers to feel confident in their own navigation.
A more effective principle to follow would be "far better a willing volunteer than a conscript," in addition to the practice of filling out the standard accepted checklist. To that aim, we believe that any flaws should be pointed out, the individuals and group (Bridge Team) should be informed of what is wrong and why it is incorrect, and they should then be given the opportunity to implement new procedures. "Non-Conformities" with policies or procedures would be submitted to the firm for investigation. Onboard, deviations from the established shipboard procedures would be noted and the master and officers would be given the chance to make the necessary corrections during marine cargo inspection services in Fujairah.
As a result of our experience, we now know that bottom-up improvement is more effective than top-down change. Many ship's officers simply haven't had the chance to see how things ought to be done correctly, or aren't aware of how they ought to be done. There have been several initiatives throughout the years, both in terms of technological advancements and training. To help prevent collisions, we now have advanced radars, ARPAs, and AIS; for continuous position indication and monitoring, we have GPS and ECDIS. Even VDRs are available for recording what is happening.
Bridge Team Management training has been included, and on certain ships, sophisticated CBT as well. However, a large number of navigational mishaps occur as a consequence of failing to carry out fundamental navigational operations, such as keeping an "active" watch, plotting other boats, determining whether there is a risk of collision, and taking the necessary action in accordance with the COLREGs.
Focal Shipping is of the opinion that risk management and auditing may be utilized as tools to track and/or enhance both individual and team capabilities on the bridge. An audit or marine cargo inspection services in Fujairah conducted holistically rather than mechanically might also reveal improvements to current Safety Management Systems.
ionfusionpunk · 5 months ago
Ooooh I really really liked this, and the video especially was phenomenal. I do gotta say - well, don't have to but want to - that I'm split pretty 50/50 between the two sides.
(I explain why under the cut, but you don't have to read it if you don't wanna get kinda political sorry)
On the one hand, yes, I believe that parents should be the ones mainly responsible for policing their children's media consumption. They should be able to decide when - if ever - they want their kids exposed to certain things. No problemo, right? But I also think that it's so much easier for kids to access media without their parents' knowledge (I did it all the time, for example, and I'm one of the oldest Gen Z has to offer) which of course makes it harder for parents to vet everything their kids listen to. We can't expect parents to be able to listen to every album their kids might want to because a) there's so many, b) that's contingent on the kids feeling safe enough to express which albums they want to listen to, and c) it's dependent on the kids telling their parents what they want to listen to instead of just going ahead and listening to it for free. You don't have to buy every song you want to listen to anymore.
So on the other hand, I think that in this modern era, there should be some way for parents to be able to better monitor and control what media their kids are consuming. Because even if we don't agree with them, those kids are still the responsibility of their own parents, and it's only our responsibility to make sure fellow adults have all the facts to make informed decisions for their kids. But we can't make those choices for them.
What happens in someone's own home is not up to us, and we should not control it. However, the Old Dudes were right when they said that they have the right and the duty to ensure that, in public, things are monitored for the safety of minors. Again, parents should be able to control, to the best of their ability and within the bounds of reason and possibility, what their kids are exposed to.
Now, there are several ways to do this, I realize, and not all of them involve using a rating system like we do for movies (though it's my personal favorite which I'll explain).
A) There are several apps and programs parents can use to monitor their kids on their devices; we know them best as parental controls. These are great in general for controlling app usage and curfews, but unfortunately, they just won't be able to monitor media the way we're talking about here. There's also the issue that most if not all of this parental control software needs to be paid for, and that's going to put a lot of parents off, especially in this economy, when they might need to choose to spend that extra money on food or other necessary things.
B) We could start to use AI to monitor or check media. Now, in this context we've been specifically talking about musical media, but I'd like to point out that this is also applicable to digitally or physically written media as well. All you would technically need to do is ask ChatGPT to tell you if certain things are in the song/album/book/etc. This would be, in my opinion, one of the few good uses for AI as we have it. Now, unfortunately, this also has drawbacks. As we know, AI as it is constantly gets things wrong, so it's not exactly a foolproof system for parents to use. The benefits, however, might outweigh the cons, because AI is much more accessible for families of all economic classes now (meaning a lot of versions that would fulfill the above-mentioned function are free). It also would take far less time for parents to research what media their kids are consuming, which was a sticking point in the video.
B.2) You could actually potentially use AI and parental control software in conjunction with each other to prohibit certain things on your kid's device, but I'm not quite sure how well that would work. It's a potential option, however.
C) My personal favorite because it really is the most accurate and thorough and can be easily applied in schools - I'm a teacher - and would actually potentially help fight the book-ban thing: Just instating a rating system for media - i.e. music and readable media like books. I know this is what Dee Snider is against, but hear me out. Once again, there's just so much media. With children getting internet access younger and younger, it is unacceptable to expect parents to be able to keep up with everything on their own, especially in an age when most US parents are barely able to be home because they hold extremely demanding jobs or quite often more than one demanding job just to keep a roof over their family's head and food on their table. A rating system works well in conjunction with my formerly mentioned suggestions, but especially the parental controls for those parents who can afford the better software.
A rating system also works in schools for books because, just like you need parental permission to watch certain movies in certain grades, your kid would need parent permission to read books above a certain rating in the library. So a book like Looking for Alaska? Make it PG-13. Only teenagers can read it. Or PG-14 if that suits you better. But no matter what, now it's clearly labeled for teenagers - which means anyone trying to ban all teenage-rated books can't because there are so many books that would share the same rating but for different reasons. It could, potentially, if done correctly, protect our rights to consume the media we want while also protecting a parent's right to parent their children.
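The accuracy problem with option B shows up even in the simplest possible automated check. Below is a tiny keyword filter, a deliberately crude stand-in for asking an AI model (the watch list and lyric lines are invented for illustration), that demonstrates both failure modes at once: false positives and false negatives.

```python
FLAGGED_WORDS = {"violence", "drugs"}  # hypothetical watch list

def flag_lyrics(lyrics: str) -> bool:
    """Return True if any flagged word appears in the lyrics."""
    words = {w.strip(".,!?").lower() for w in lyrics.split()}
    return bool(words & FLAGGED_WORDS)

# A song *about leaving* drugs behind still gets flagged (false positive)...
print(flag_lyrics("I left the drugs behind and found the light"))  # True
# ...while slang and euphemism slip through entirely (false negative).
print(flag_lyrics("We moved that work all through the night"))     # False
```

A large language model handles context far better than this, but as noted above it still gets things wrong; the difference is one of degree, not kind, so any AI-based check is best treated as a first pass for parents rather than a verdict.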
And look. I'm more than aware exactly how badly any of these things could go. I know that nothing is 100% foolproof. I know that instating a rating system specifically could go so very wrong and potentially backfire and make it difficult for endangered and at-risk teens and kids to access certain materials and help. But there are safeguards you could put in place. There are ways around such issues. I'm even more than willing to write a whole other post about it, but this one's just getting way too long.
My point is that we don't live in the era seen in the video anymore. We don't. And as much as we may not like it, that does mean we need to be prepared to find new ways to protect each other and ourselves from people taking away our right to choose while also allowing others their own right to choose for them and for whoever they are responsible for.
Like I say about anything political: My goal isn't to support whichever value is closest to my own. My goal is to support the values that protect my own and the values of those around me.
135K notes · View notes
ptitolier · 7 hours ago
Text
AI Surveillance in Europe
Exceptions to a general ban
The European Union and AI Surveillance: A Step Forward or a Step into Dystopia?
📅 Based on the article from Journal Mapa, January 24, 2025, by Théophile Fagundes.
Introduction: A New Era of AI Surveillance in Europe
On February 2, 2025, a new European law on artificial intelligence will come into effect, bringing major changes to how law enforcement uses AI-powered surveillance. While the EU had initially positioned itself as a global leader in ethical AI regulation, this latest move raises significant concerns about mass surveillance, civil liberties, and the power of private tech firms in shaping law enforcement.
At the heart of this legislation is a contradiction: the EU officially bans biometric identification and emotion recognition in public spaces but has introduced several key exceptions under pressure from France and its European allies, including Italy, Hungary, and Portugal. These exceptions open the door to real-time facial recognition, predictive policing, and increased corporate involvement in surveillance technology, triggering debates about whether Europe is drifting toward an Orwellian model of governance.
What Does the New Law Change?
Facial Recognition and Protest Surveillance
One of the most controversial aspects of the law is the use of real-time facial recognition technology in public spaces. While EU lawmakers claim this will only be used for investigating serious crimes, activists, journalists, and civil rights organizations argue that it will disproportionately target political activists, climate protesters, and marginalized communities.
In recent years, countries like France and Hungary have pushed for broader surveillance powers, justifying them through concerns over terrorism and public order. The new AI law will allow law enforcement to monitor large gatherings, including protests and demonstrations, under the pretext of "national security."
Predictive Policing: Science or Speculation?
The law also legitimizes the use of predictive policing, a controversial technology that uses AI to analyze past crime data and predict where future crimes might occur. While its proponents argue that it enhances efficiency and crime prevention, critics point to serious biases in these algorithms, which often result in racial profiling and over-policing of specific communities.
Predictive policing has been tested in several European cities, often with mixed results. Studies have shown that these AI models tend to reinforce existing biases in law enforcement rather than provide truly objective predictions. As a result, many experts fear that instead of reducing crime, predictive AI may actually contribute to systemic discrimination.
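The feedback loop described above can be made concrete with a toy simulation (the two-district setup and all numbers are invented for illustration, not drawn from any real deployment):

```python
def simulate_feedback(rounds: int = 10) -> dict:
    """Toy model of a predictive-policing feedback loop.

    Districts A and B have the SAME true crime rate (100 incidents per
    round), but A starts with more *recorded* crime due to historical
    over-policing. Each round, patrols go wherever the data says the
    "hotspot" is, and crime is only recorded where patrols are present.
    """
    true_rate = 100
    recorded = {"A": 60, "B": 40}  # biased historical record
    for _ in range(rounds):
        hotspot = max(recorded, key=recorded.get)  # always A in this run
        recorded[hotspot] += true_rate             # detection follows deployment
    return recorded

print(simulate_feedback())  # {'A': 1060, 'B': 40}
```

Even though both districts offend at the same rate, the record for A snowballs while B's never grows, and the model looks more "accurate" with every round. This is the mechanism critics have in mind when they say predictive AI reinforces existing biases rather than producing objective predictions.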
The Growing Role of Private Companies
Another key issue is the increasing involvement of private technology firms in law enforcement. Under the new law, European governments can outsource AI surveillance technologies to private corporations, raising concerns about data privacy, lack of transparency, and profit-driven motives.
Tech giants and AI startups have lobbied heavily for these provisions, seeing a lucrative market in supplying governments with surveillance tools. However, this raises a fundamental question: should law enforcement responsibilities be handed over to private companies that are not directly accountable to the public?
A Threat to Civil Liberties?
According to Investigate Europe, this new law does not strengthen protections for citizens—instead, it expands the reach of state surveillance.
Critics warn that these measures blur the line between public safety and authoritarian control. Who decides what constitutes a "threat" to national security? Who ensures that AI-powered surveillance is not misused for political repression?
The European Data Protection Supervisor (EDPS) has repeatedly raised concerns about the risks of mass biometric surveillance, arguing that there are no sufficient safeguards to prevent abuse. Yet, despite these warnings, the law has moved forward, supported by governments that prioritize security over privacy.
Are We Entering Orwell’s Future?
The parallels with George Orwell’s 1984 are hard to ignore.
In Orwell’s dystopian vision, constant surveillance was a tool of absolute control, erasing any notion of privacy or personal freedom. While Europe’s AI surveillance is not yet at that level, the current trajectory suggests a slow erosion of democratic safeguards in the name of security.
The argument often used by governments is:
"If you have nothing to hide, you have nothing to fear."
However, history has repeatedly shown that surveillance measures intended for "security" often end up being weaponized against political opposition, journalists, and activists.
France’s Use of AI Surveillance in the 2024 Olympics
During the 2024 Paris Olympics, France experimented with AI-powered video analysis to detect "suspicious behavior" in real time. The system, initially promoted as a way to prevent terrorist threats, ended up being used to monitor protests and public gatherings. This serves as a real-world case study of how surveillance measures can quickly be expanded beyond their original purpose.
Hungary’s Crackdown on Political Dissent
Hungary has already increased its use of AI surveillance tools to track political dissidents and journalists. The new EU law will give governments even more legal justification to expand these practices, making it harder to challenge surveillance abuses.
Balancing AI and Democracy: Is There a Middle Ground?
Not all AI surveillance is inherently bad. Used correctly, AI can help solve crimes, prevent terrorist attacks, and even protect human rights (e.g., tracking human trafficking networks). However, the concern is that without strict oversight, these tools will be exploited by those in power.
Possible Safeguards:
Strict Judicial Oversight: Courts should have a mandatory role in approving AI-based surveillance requests.
Transparency & Public Accountability: Citizens must be informed when and how AI surveillance is used.
Independent Ethics Committees: AI deployments in law enforcement should be monitored by non-governmental organizations.
Public Debate on Surveillance Laws: Instead of fast-tracking AI regulations, governments should open these discussions to the public.
Is Europe at a Crossroads?
The EU’s AI surveillance law represents a turning point in the balance between security and freedom. While officials present it as a necessary tool for modern policing, critics argue that it undermines fundamental rights and opens the door to widespread surveillance abuse.
Europe has long prided itself on being a champion of digital rights and ethical AI regulation. However, with this new law, it risks drifting toward a model where surveillance is normalized under the guise of public safety.
Are we moving toward an era of AI-powered authoritarianism, or can democratic safeguards still prevent mass surveillance?
Further Reading: Orwell vs. Verne – Is Technological Progress a Promise or a Trap?
To explore this topic further, read my article on Orwell and Verne:
1 note · View note
jpt1311-lou · 1 day ago
Text
👤Psycho-Pass👤
Ep. 1, 3, 4, & 5
Psycho-Pass is an anime that touches on many themes relevant to our current social climate and digital landscape. The story, which centers on law enforcement in a society of hyper-surveillance, touches on ideas of privacy, dehumanization, isolation, parasocial relationships, and simulation. In conjunction with this anime, we were asked to read Foucault's "Panopticism" and Drew Harwell's 2019 Washington Post article "Colleges are turning students’ phones into surveillance machines, tracking the locations of hundreds of thousands." I think these choices expanded my understanding of the show and were extremely eye-opening when applied to our current culture.
Using the language of Foucault, the Sibyl system acts as a constant "supervisor," monitoring the emotional state of every citizen through a psycho-pass: a biometric reading of an individual's brain that reveals a specific hue and crime score, which relays how likely a person is to commit a crime or act violently. The brain, formerly the one place safe from surveillance, is now on display 24/7, creating a true panoptic effect. In this future dystopian Japan, criminals are dehumanized, and some, called enforcers, are used as tools to apprehend other criminals. They are constantly compared to dogs, and inspectors are warned not to get too emotionally invested in or close to them to avoid increasing their own crime scores. The show consistently portrays criminals as lost causes, and even victims are cruelly given up on if the stress of the crimes against them raises their own crime score too high. This concept is shown in episode 1, and I think it is meant to present Sibyl as an inherently flawed system from the start.
I think that the Washington Post article was extremely relevant to this anime, and even to my own life as a college student. Harwell writes that oftentimes monitoring begins with good intentions like preventing crime (as in Psycho-Pass) or identifying mental health issues. Universities across the US have started implementing mobile tracking software to monitor where students are, what areas they frequent, and whether or not they come to class. The developer of this software stated that algorithms can generate a risk score based on student location data to flag students who may be struggling with mental health issues. While this sounds helpful in theory, I can't help but notice how eerily similar this software is to the Sibyl system. Even high school students are sounding alarm bells after being subjected to increased surveillance in the interest of safety. In another of Harwell's articles published the same year, "Parkland school turns to experimental surveillance software that can flag students as threats," a student raised concerns about the technology's potential for being abused by law enforcement, stating, "my fear is that this will become targeted." After beginning Psycho-Pass, I honestly couldn't agree more. Supporters of AI surveillance systems argue that it's just another tool for law enforcement and that it's ultimately up to humans to make the right call, but in ep. 1 of Psycho-Pass, we saw how easy it was for law enforcement to consider taking an innocent woman's life just because the algorithm determined that her crime score had risen past the acceptable threshold. And there are plenty of real-world examples of law enforcement making the wrong decisions in high-stress situations. AI has the potential to make more people the targets of police violence, either through technical error or built-in bias.
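One way to see why threshold-based risk scores misfire at scale, anime aside, is base-rate arithmetic. The numbers below are invented for illustration: even a scanner that is right 95% of the time, applied to a population where genuine threats are rare, mostly flags innocent people.

```python
def flagged_innocent_share(pop=1_000_000, prevalence=0.001,
                           sensitivity=0.95, false_positive_rate=0.05):
    """Fraction of flagged people who are actually innocent."""
    actual = pop * prevalence            # people who really pose a risk
    innocent = pop - actual
    true_flags = actual * sensitivity              # 950 correct flags
    false_flags = innocent * false_positive_rate   # 49,950 wrong flags
    return false_flags / (true_flags + false_flags)

print(round(flagged_innocent_share(), 3))  # 0.981
```

Under these assumptions, roughly 98 of every 100 people the system flags are innocent, which is exactly the failure mode the woman in episode 1 represents.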
As former Purdue University president Mitch Daniels stated in his op-ed "Someone is watching you," we have to ask ourselves "whether our good intentions are carrying us past boundaries where privacy and individual autonomy should still prevail."
I'm interested to see what the next episodes have in store. This is a series that I will probably continue watching outside of class. Finally some good f-ing food.
1 note · View note