# Police AI monitoring system
bharatbriefs · 11 months ago
Text
AI is watching you! Ahmedabad becomes India's 1st city to get AI-linked surveillance system
In a groundbreaking development, Ahmedabad has emerged as the first city in India to employ artificial intelligence (AI) for comprehensive monitoring by the municipal corporation and police across the entire city. The city’s expansive Paldi area is now home to a state-of-the-art artificial intelligence-enabled command and control centre, featuring a remarkable 9 by 3-metre screen that oversees an…
0 notes
opencommunion · 4 months ago
Text
"Britain’s largest gathering of counter-terrorism experts assembled in London last month to discuss what one police chief called 'legal but harmful protest' following Israel’s war on Gaza. Inside a cavernous Docklands conference hall, companies at the Counter Terror Expo displayed gas mask-clad dummies and crowd control systems as enthusiastic AI reps promised revolutionary advances in surveillance. Tools for hacking phones with 'brute force,' monitoring someone’s emotional state based on their social media and rapidly digesting the contents of an 'acquired' computer were all up for sale. Among the potential customers were foreign police departments, including officers fresh from Georgia’s violent crackdown on anti-Russia protests.
Several salespeople declined to explain their products to the media. 'I can’t believe they let you people in here,' one rep told Declassified after seeing our press card. 'I think it’s disgusting.' Her company markets AI tools for military and law enforcement to process recordings of people’s voices. 
When delegates weren’t browsing spyware or sipping craft beer with a £12 'world food' meal deal, they could listen to the security industry’s leading lights. These included detective chief superintendent Maria Lovegrove, who runs Britain’s Prevent strategy against radicalisation. She trumpeted 53 arrests for terrorism offences since October 7. Only one of these was for violence. The rest concerned social media posts or attending gatherings. Asked whether this data suggests police are overreacting to peaceful pro-Palestine protests, Lovegrove valorised an 'early intervention' approach. She told Declassified this was the 'greatest tool in preventing terror attacks' and insisted officers 'only arrest and prosecute when we have to.' Among those arrests were three women found guilty of wearing paraglider stickers at a protest.
Dom Murphy – the Met’s counter terrorism commander – told delegates he was monitoring 'legal but harmful' protests and the risk of 'low-sophistication' attacks by people radicalised online or at university since October 7th. 'If there are 100,000 people at a protest, and one person holding a Hamas flag, we will find them and arrest them,' Murphy reassured attendees. A majority of recent arrests targeted individuals aged under 17, he boasted, as proof that the 'early intervention' approach was working. 
Another panellist praised Britain’s ability to pre-emptively arrest people for public order offences at demonstrations and target them for terror offences further down the line. Craig McCann, a former senior Prevent officer, expressed the mood in the room when he described ceasefire marches as a 'permissive environment for the transfer of extremist ideology.' Like other speakers, he sought to delegitimise opponents of Israel’s war on Gaza by characterising pro-Palestine protests as an 'Islamist camp conflating with far-Right anti-Semitism.' McCann explicitly linked Palestinian nationalism with Nazism, an Israeli propaganda point. Fellow panellists claimed parts of London were a 'no-go zone for Jews.' Discussing threats from 'street protest all the way through to terrorism,' the conference presented far Left, far Right, 'Islamist' and 'environmentalist' ideologies as equal, inter-related threats to British society. ... After lunch, discussion turned to 'British values' and protecting England from the menace of social media and foreign flags that vexed thousands of officers under Murphy’s command. Many felt the next-generation tech on display would ensure ever more effective crackdowns on street protest and dissent."
32 notes · View notes
astercontrol · 26 days ago
Text
Thinking about what Tron's life and work could have been like inside the Encom-system, post-1982.
We know he's a security program, designed to monitor the System's connections to other systems and stop anything harmful from passing between them. He might also be able to take on security roles within the system; we see the Flynn's Grid version of him doing some of that prior to the coup.
Now, this job definitely has the potential to become very corrupt (see: border security and IRL cops in general).
But even in an ideal world that could abolish the carceral system and policing as we know it, there would still be a need for security in some form.
So I like to imagine that Tron is more like what would replace cops, in this sort of ideal rehabilitative-justice-focused world.
Because I imagine that what passes for a "justice system," inside a well-functioning computer, is much more rehabilitative and preventative than punitive. 
I mean, ideally you set up conditions in your computer so that they don't lead to any of your software causing problems. Meaning the programs all have access to what they need (sufficient power and memory, sufficient downtime to rest, whatever maintenance is necessary). 
This is somewhat analogous to how a better human society could prevent a lot of crime just by giving people a better life. But of course it can't prevent all of it; some people will still want to cause harm for whatever reason.
And if a program does start making trouble, the ideal solution is to troubleshoot until the problem is fixed-- which sorta equates to rehabilitation of criminals.
The carceral-state equivalent would be keeping the program inactive and never running it, I guess. The death penalty would be uninstalling and deleting it.
The MCP's approach was to lock up programs who wouldn't obey him, and force them to do things for him. Regardless of their original function, they'd become half-zombified office workers like Yori, or gladiators fighting battles like Tron (which is implied to be work that helps keep the Arcades running and bringing in profit for Encom). 
Programs clearly don't like being made to do jobs other than the one they're designed for. My own impression of the Encom system is that the programs there are self-motivated to fulfill their intended function, at least as much as humans are motivated by money. The right job is its own reward. The wrong job would feel like forced unpaid labor.
So, the MCP's system was basically the prison industrial complex. (Including the high risk of death at work.) 
Tron helped abolish that.
And what would be set up in its place... Well, that depends on the Users, under the new direction of Flynn (who'd presumably want to at least try for something more considerate of the rights of programs, now that he sees them as people).
I don't think it should be Tron's choice, because I think Tron does have an inherent violent side. The way we see him fighting, when pitted against multiple red warriors, makes me think he is very capable of turning off any thoughts like "this program is also a victim of the MCP in a way, he wasn't really given any better choices; why don't we try and rehabilitate him?"
Tron was focused on his own survival, and he knew when fighting to the death was the only option. Charitable thoughts couldn't be acted on at the time and would only get in the way. So he just became a ruthless killer in those moments.
And I don't totally trust that his anger against those red warriors would go away after the MCP was defeated. I wouldn't trust Tron himself to choose rehabilitative justice.
But luckily, he listens to his User. And I think Alan would rather turn a misbehaving program to the good side than destroy it. (The "Klaatu Barada Nikto" quotation on his cubicle wall was basically a command to a violent AI to stop causing harm and to help instead. I even have a sort of half-headcanon that this is what he actually did for the MCP.)
If Alan got the chance to guide Tron into becoming the System's regular security, I think he'd make sure to include directives for rehabilitation as much as possible. 
There are of course a lot of questions about just how this would apply to programs, and whether it would even be possible to introduce anything like what would be an ethical system for humans. (What does he do with viruses and malware? If their programmed purpose is to harm the system, would rehabilitating them into helpful work be as traumatic for them as what the MCP did? Is the death penalty sometimes the most merciful option for programs? These may be deeper questions than I want to get into.)
But I think the best thing for Tron would be to act as part of a well-paired suite of security software.
And this may include the Guards we see under MCP's control, if they can be rehabilitated enough. But I also think Tron would benefit from some advisors he is personally close to.
It already makes sense to include Yori in his team, because while Tron's main purpose is to monitor what comes into the system, Yori (as the software for the digitizing laser) is in charge of perhaps the most concerning route by which things can enter the system. She'll have insights he would not think of.
And so will Ram-- the guy who was made to do actuarial work for an insurance company, but who clearly cares too much about helping people for that to be a really good fit, once his naive idealism about insurance companies inevitably falls apart. 
He can't change his purpose. But his purpose is versatile -- actuaries calculate probabilities, and that's good for a lot of stuff. I think he'd be a perfect security advisor for Tron. Risk assessment, but with a lot of compassion mixed in.
After the events of Legacy, in "The Next Day," Alan asked Roy to act as Encom's "moral compass." I like to imagine that the rerezzed Encom version of Ram had been doing that for a long time already.
19 notes · View notes
thesilliestrovingalive · 2 months ago
Text
Updated: September 18, 2024
Reworked Group #4: S.P.A.R.R.O.W.S.
Overview
Tequila and Red Eye successfully dismantled a rogue military organisation engaged in illicit human trafficking and arms dealing, which had also planned to launch a global bioterrorist attack in collaboration with the Pipovulaj Army. The plot involved spreading a zombie plague to control the population, transforming numerous innocent civilians into violent Man Eaters as a means to create a twisted form of super-soldier. Impressed by Tequila and Red Eye's exceptional performance as highly capable spies, the Intelligence Agency and the Regular Army jointly established a covert operations branch, S.P.A.R.R.O.W.S., through a mutual agreement.
The S.P.A.R.R.O.W.S. is responsible for gathering intelligence and managing information to prevent public panic and global hysteria. They provide their members with specialised training in high-risk covert operations that surpass the scope of regular Intelligence Agency agents, which are all conducted with utmost discretion and situational awareness. Some of these special covert operation missions involve precision targeting of high-priority threats and strategic disruption of complex criminal schemes.
Insignia
It features a cerulean square Iberian shield, rimmed with a spiky teal vine that’s outlined in bronze. Above the shield, the words "S.P.A.R.R.O.W.S." are inscribed in bluish-white, surmounting a stylized pair of bronze eyes with a yellowish-white star at their centre. The shield is flanked by a stylized peregrine falcon holding a gilded blade on the right side and a male house sparrow clutching an olive branch on the left side.
S.P.A.R.R.O.W.S. Base
The Intelligence Division is tactically positioned adjacent to the Joint Military Police Headquarters, deeply entrenched within a dense and remote forest in Northern Russia. The rectangular military compound features a forest-inspired camouflage colour scheme, a secure warehouse for military vehicles, multiple surveillance cameras, and several elevators leading to a subterranean base. A rooftop array of parabolic antennas enables real-time surveillance, threat detection, and situational awareness, preventing surprise attacks and informing strategic decision-making. The base is comprehensively protected by an advanced security system and a defensive magnetic field, which automatically activates in response to potential threats, safeguarding against enemy attacks.
The subterranean base features a state-of-the-art command and surveillance centre, equipped with cutting-edge technological systems to orchestrate and execute operations. Additional facilities include:
An armoury housing the group’s most cutting-edge, high-clearance weaponry and specialised ordnance.
A high-tech meeting room with a high-resolution, encrypted display screen and multi-axis, AI-enhanced holographic projection system.
A state-of-the-art gymnasium for maintaining elite physical readiness, featuring biometric monitoring systems and AI-driven training programs.
A fully equipped, high-tech medical bay with regenerative treatment capabilities and telemedicine connectivity for remote expert consultation.
A secure dining area serving optimised, nutrient-rich rations for peak performance.
A high-security quarters with biometrically locked storage for personal gear and AI-monitored, secure communication arrays.
A Combat Academy, led by Margaret Southwood, featuring a heavily fortified training area with advanced combat simulation zones, tactical obstacle courses, stealth and surveillance training areas, and high-tech weapons testing ranges.
Extra Information
S.P.A.R.R.O.W.S. stands for Special Pursuit Agents and Rapid Response Operations Worldwide Strikeforce.
Members of the S.P.A.R.R.O.W.S. are commonly known as "Sparrowers" or "Following Falconers", reflecting their affiliation with the unit and their close relationship with the P.F. Squad.
Despite being part of an elite covert operations branch, Sparrowers face a significant pay disparity: males earn a quarter of the average government agent's salary, while females earn about a third. Underperforming Sparrowers of either sex face further financial hardship from delayed salary payments, often waiting one to two months for their overdue compensation.
The S.P.A.R.R.O.W.S. conduct their covert operations in collaboration with the Peregrine Falcons Squad who provide primary firepower and protection for their agents.
The handguns carried by Sparrowers are the Murder Model-1915 .38 Mk.1Am or Classic Murder .38 for short. It’s a double-action revolver that features a 6-round cylinder. Originally designed to enhance the Enfield No.2 .38 Caliber revolver in 1915, the Murder Model retained only the frame and grip from the original. All other components were replaced with newer parts in later years.
11 notes · View notes
probablyasocialecologist · 1 year ago
Text
In case you missed it: artificial intelligence (AI) will make teachers redundant, become sentient, and soon, wipe out humanity as we know it. From Elon Musk, to the godfather of AI, Geoffrey Hinton, to Rishi Sunak’s AI advisor, industry leaders and experts everywhere are warning about AI’s mortal threat to our existence as a species.

They are right about one thing: AI can be harmful. Facial recognition systems are already being used to prohibit possible protestors from exercising fundamental rights. Automated fraud detectors are falsely cutting off thousands of people from much-needed welfare payments, and surveillance tools are being used in the workplace to monitor workers’ productivity.

Many of us might be shielded from the worst harms of AI. Wealth, social privilege or proximity to whiteness and capital mean that many are less likely to fall prey to tools of societal control and surveillance. As Virginia Eubanks puts it, ‘many of us in the professional middle class only brush against [the invisible spider web of AI] briefly… We may have to pause a moment to extricate ourselves from its gummy grasp, but its impacts don’t linger.’ By contrast, it is well established that the worst harms of government decisions already fall hardest on those most marginalised.

Let’s take the example of drugs policing and the disproportionate impact on communities of colour. Though the evidence shows that Black people use drugs no more, and possibly less, than white people, the police direct efforts to identify drug-related crimes towards communities of colour. As a consequence, the data then shows that communities of colour are more likely to be ‘hotspots’ for drugs. In this way, policing efforts to ‘identify’ the problem create a problem in the eyes of the system, and the cycle of overpolicing continues. When you automate such processes, as with predictive policing tools based on racist and classist criminal justice data, these biases are further entrenched.
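The feedback loop described here can be sketched in a toy simulation (all numbers below are illustrative assumptions, not real crime data): two neighbourhoods with identical true offence rates, where patrols are allocated in proportion to past recorded arrests. A small historical skew in the records never washes out, because the system can only record offences where it looks.

```python
import random

random.seed(0)

# Assumption: both neighbourhoods have the SAME true offence rate (5%).
TRUE_RATE = {"A": 0.05, "B": 0.05}

# Historical bias: neighbourhood A starts with more recorded arrests.
recorded = {"A": 60, "B": 40}

PATROLS_PER_ROUND = 100

for round_ in range(20):
    total = recorded["A"] + recorded["B"]
    for hood in ("A", "B"):
        # "Predictive" allocation: patrols proportional to past records.
        patrols = round(PATROLS_PER_ROUND * recorded[hood] / total)
        # A patrol can only record an offence where it actually looks.
        recorded[hood] += sum(
            random.random() < TRUE_RATE[hood] for _ in range(patrols)
        )

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Share of records attributed to A after 20 rounds: {share_a:.2f}")
```

Despite equal true rates, neighbourhood A's share of the records stays at or above its initial 60% — the allocation rule keeps confirming its own prior, which is the "cycle of overpolicing" in miniature.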
83 notes · View notes
mariacallous · 6 months ago
Text
VSquare SPICY SCOOPS
BUDAPEST–BEIJING SECURITY PACT COVERTLY INCLUDES CHINESE SURVEILLANCE TECHNOLOGY
Fresh details regarding Xi Jinping’s May visit to Budapest have begun to surface. As it was widely reported, a new security pact between Hungary and the People's Republic of China (PRC) allows for Chinese law enforcement officers to conduct patrols within Hungary—which is to say, within a European Union member state. Chinese dissidents living in the EU fear that the PRC may abuse this agreement: Chinese policemen “can even go to European countries to perform secret missions and arbitrarily arrest dissidents,” as I reported in a previous Goulash newsletter. However, there's an additional as yet undisclosed aspect of this security arrangement. According to reliable sources familiar with recent Chinese-Hungarian negotiations, a provision permits the PRC to deploy surveillance cameras equipped with advanced AI capabilities, such as facial recognition software, on Hungarian territory.  The Orbán government already maintains a significant surveillance infrastructure, including CCTV systems, and there are indications that, besides the Pegasus spyware, they may have acquired Israeli-developed facial recognition technology as well. Nevertheless, allowing the PRC to establish their own surveillance apparatus within Hungary raises distinct concerns. Even if purportedly intended to monitor Chinese investments, institutions, and personnel, the potential involvement of Chinese technology firms, some of which have ties to the People’s Liberation Army or Chinese intelligence and are subject to Western sanctions, could complicate Hungary's relations with its NATO allies. The Hungarian government, when approached for comment, redirected inquiries to the Hungarian police, who claimed that Chinese policemen won’t be authorized to investigate or take any kind of action on their own. My questions on surveillance cameras and AI technology remained unanswered.   
CHINA FURTHER SPLITS THE VISEGRÁD GROUP
One of the factors enabling Hungarian Prime Minister Viktor Orbán's maneuvers is the deep-seated division among Hungary's official allies, particularly evident within the Visegrád Group, regarding China. While Slovakia largely aligns with Hungary’s amicable stance towards both China and Russia, Poland adopts a more nuanced position, vehemently opposing the Kremlin while maintaining a softer approach towards China, as previously discussed in this newsletter. Conversely, the Czech Republic takes a hawkish stance towards both China and Russia. During a recent off-the-record discussion with journalists in Prague, a senior Czech official specializing in foreign policy candidly expressed skepticism about the efficacy of the V4 platform. “At this moment, it’s not possible to have a V4 common stance on China. I thought we already learned our lesson with the pandemic and how our supply chains [too dependent on China] were disrupted,” the Czech official said, adding that “I don’t know what needs to happen” for countries to realize the dangers of relying too heavily on China. The Czech official said Xi Jinping’s recent diplomatic visits to Paris, Belgrade, and Budapest were proof China is using the "divide and conquer" tactic. The Czech official felt that it isn’t only Hungary and Slovakia that are neglecting national security risks associated with Beijing, noting that “France doesn’t want to discuss China in NATO,” underscoring a broader reluctance among European nations to confront the challenges posed by China's growing influence.
CZECHS REMAIN STEADFAST IN SUPPORT OF TAIWAN, OTHERS MAY JOIN THEIR RANKS
In discussions with government officials and China experts both in Prague and Taipei, the Czech Republic and Lithuania emerged as the sole countries openly supportive of Taiwan. This is partly attributed to the currently limited presence of Chinese investments and trade in these nations, affording them the freedom to adopt a more assertive stance. Tomáš Kopečný, the Czech government’s envoy for the reconstruction of Ukraine, emphasized in a conversation with journalists in Prague that regardless of which parties are in power, the Czech Republic’s policy toward China and Taiwan is unlikely to waver. When queried about the stance of the Czech opposition, Kopečný replied, “You could not have heard much anti-Taiwanese stance. Courting [China] was done by the Social Democrats, but not by the [strongest opposition party] ANO party. I don’t see a major player in Czech politics having pro-Chinese policies. It’s not a major domestic political issue.” This suggests that even in the event of an Andrej Babis-led coalition, a shift in allegiance is improbable. In Taipei, both a Western security expert and a senior legislator from the ruling Democratic Progressive Party (DPP) asserted that numerous Western countries covertly provide support to Taiwan to avoid antagonizing China. The DPP legislator hinted that the training of a Taiwanese air force officer at the NATO Defence College in Rome is “just the tip of the iceberg.” The legislator quickly added with a smile, “the media reported it already, so I can say that.” Delving deeper, the Western expert disclosed that since Russia's aggression in Ukraine, there has been increased communication between Taiwan and EU countries, particularly those closely monitoring Russia, including on military matters. “There is a lot going on behind the scenes,” the expert noted, with the caveat that certain specifics remain confidential. 
When asked which Western countries might follow the lead of the Czechs and Lithuanians in openly supporting Taiwan, the expert suggested that most Central and Eastern European nations might be open to such alliances.
MCCONNELL’S CRITICISM OF ORBÁN PRECEDED BY KEY AIDE’S VISIT
In a significant setback to the Orbán government’s lobbying efforts aimed at US Republicans, Senate Minority Leader Mitch McConnell condemned Orbán's government for its close ties with China, Russia, and Iran during a recent Senate floor speech (watch it here or read it here). “Orban’s government has cultivated the PRC as its top trading partner outside the EU. It’s given Beijing sweeping law enforcement authorities to hunt dissidents on Hungarian soil. It was the first European country to join Beijing’s Belt-and-Road Initiative, which other European governments – like Prime Minister Meloni’s in Italy – have wisely decided to leave,” McConnell stated. This speech appeared to come out of the blue, as there had been no prior indications of McConnell’s interest in Hungary. However, in reality, McConnell’s key aide on national security, Robert Karem, made an official trip to Budapest last October and held multiple meetings, according to a source familiar with the visit. Before working for McConnell, Karem served as an advisor to former Vice President Dick Cheney and as Assistant Secretary of Defense for International Security Affairs under the Trump administration. Multiple sources closely following US-Hungarian relations suggest that McConnell’s outspoken criticism of Orbán, despite the Hungarian Prime Minister’s recent visit to Donald Trump in Florida, is the clearest indication yet that Orbán may have crossed a red line by courting nearly all of the main adversaries of the US.  
RUSSIAN PRESENCE FOR PAKS TO EXCEED 1,000 IN HUNGARY BY 2025
Russia’s nuclear industry is not yet under EU sanctions, and as a result, Rosatom’s Hungarian nuclear power plant project, Paks II, is still moving forward. While construction of the plant faces numerous regulatory hurdles, significant Russian involvement is anticipated in the city of Paks. A source directly engaged in the project revealed that the current contingent of Rosatom personnel and other Russian "experts" working on Paks II is projected to double or even triple in the coming year. "Presently, approximately 400 Russians are engaged in the Paks project, with expectations for this figure to surpass 1,000 by 2025," the source disclosed. This disclosure is particularly noteworthy given the lack of precise public data on the exact number of Russians in Paks. Previous estimates, reportedly from the security apparatus of a certain Central European country, suggested a figure around 700 – a number that appears somewhat inflated to me. However, it is anticipated to escalate rapidly. Notably, the staunchly anti-immigration Orbán government recently granted exemptions for "migrant workers" involved in both the Russian Paks II and the Chinese Belt and Road projects, such as the Budapest-Belgrade railway reconstruction, allowing them to obtain 5-year residency permits more easily. Central European security experts I’ve asked view the anticipated influx of Russian – and Chinese – workers into Hungary as a security concern for the entire region. Specifically, there are fears that Russia might deploy numerous new undercover intelligence operatives to the Paks II project, who could subsequently traverse other Schengen zone countries with ease. These concerns are not unfounded, as Russia has a history of leveraging state-owned enterprises like Rosatom to cloak its intelligence activities, according to Péter Buda, a former senior Hungarian counterintelligence officer. 
We reached out for comment, but the Hungarian government has yet to respond to inquiries regarding this matter. (For further insights into the Orbán government's involvement in the Rosatom project, read "How Orbán saved Russia’s Hungarian nuclear power plant project" by my esteemed Direkt36 colleagues.)
6 notes · View notes
facelessoldgargoyle · 5 months ago
Text
I really enjoy Philosophy Bear’s latest post “Let's delve into exploring the rich and dynamic tapestry of AI plagiarism or: You're not an AI detector”
In one section, he points out that there are basically four ways to prevent the use of ChatGPT, and the one that won’t be defeated in time is having solely in-class assignments, where students can be monitored. And that sucks!
Unless something drastic emerges, eventually all assessments will have to be in-class exams or quizzes. This is terrible, I won’t pretend otherwise, students will never learn to structure a proper essay and the richness of the world will be greatly impoverished by this feature of AI- not least of all because writing is one of the best ways to think about something. However, pretending you have a magic nose for AI that you most likely don’t have won’t fix this situation.
Justice, always and everywhere, no matter the level, means accepting the possibility that some (?most) wrongdoers who think before they act will get the better of the system and prove impossible to discover and convict. The root of so much injustice in sanctions and punishment is here, in an overestimation of our own ability to sniff it out, in turn, born of a fear of someone ‘getting the better of us’. But the bad guys often do get the better of us, there’s nothing more to be said.
This gets directly to the root of the issue! The damage done by trying to figure out whether a student has used AI is worse than the consequences of students getting away with using AI. I remember learning about the idea that false negatives are preferable in a justice system back in high school, and it’s undergirded my thinking about police, courts, and the enforcement of rules in general ever since. I’m always surprised to encounter people who either arrived at different conclusions or haven’t thought about it at all.
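The asymmetry behind that preference can be made concrete with a toy expected-cost calculation. Every number here is an illustrative assumption (prevalence of AI use, detector accuracy, and the relative harm of a false accusation versus a missed cheater), not a claim about real detectors:

```python
def expected_cost(prevalence, sensitivity, false_positive_rate,
                  cost_false_accusation, cost_missed_cheating):
    """Expected per-student cost of acting on a detector's verdict."""
    # Honest students wrongly flagged and accused:
    fp = (1 - prevalence) * false_positive_rate * cost_false_accusation
    # Cheaters the detector misses:
    fn = prevalence * (1 - sensitivity) * cost_missed_cheating
    return fp + fn

# Assumptions: 10% of submissions are AI-written; the "detector" (or a
# teacher's hunch) catches 80% of them but also flags 5% of honest work;
# a false accusation is 10x as harmful as a missed cheater.
with_detector = expected_cost(0.10, 0.80, 0.05,
                              cost_false_accusation=10,
                              cost_missed_cheating=1)

# Baseline: accuse no one — every cheater "gets away with it".
accuse_no_one = expected_cost(0.10, 0.0, 0.0, 10, 1)

print(with_detector, accuse_no_one)  # roughly 0.47 vs 0.1
```

Under these assumptions, acting on the detector does several times more net harm than tolerating every false negative — which is the post's point: when false accusations are much costlier than misses, "sniffing it out" can be worse than letting it go.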
Following from this position, I try to practice deliberate gullibility in my relationships. I believe that my loved ones have the right to privacy from me, and I trust that if they lie to me about something, it’s for a good reason. I wouldn’t make a blanket recommendation to do this—part of the reason this works for me is that I am good at setting boundaries and select for friends who act in good faith. However, I do think that people should be less tolerant of their loved ones “checking up” on them. Things like going through emails and texts, sharing phone/computer passwords, sharing locations, asking a friend to test your partner’s loyalty are patently awful to me. The damage caused by treating people with ongoing suspicion is worse than just accepting that sometimes you will be hurt and betrayed by people.
#op
5 notes · View notes
female-malice · 1 year ago
Text
Why disinformation experts say the Israel-Hamas war is a nightmare to investigate
A combination of irresponsibility on the part of social media platforms and the emergence of AI tech makes the job of policing fake news harder than ever.
BY CHRIS STOKEL-WALKER
The Israel-Hamas conflict has been a minefield of confusing counter-arguments and controversies—and an information environment that experts investigating mis- and disinformation say is among the worst they’ve ever experienced.
In the time since Hamas launched its terror attack against Israel last month—and Israel has responded with a weekslong counterattack—social media has been full of comments, pictures, and video from both sides of the conflict putting forward their case. But alongside real images of the battles going on in the region, plenty of disinformation has been sown by bad actors.
“What is new this time, especially with Twitter, is the clutter of information that the platform has created, or has given a space for people to create, with the way verification is handled,” says Pooja Chaudhuri, a researcher and trainer at Bellingcat, which has been working to verify or debunk claims from both the Israeli and Palestinian sides of the conflict, from confirming that Israel Defense Forces struck the Jabalia refugee camp in northern Gaza to debunking the idea that the IDF has blown up some of Gaza’s most sacred sites.
Bellingcat has found plenty of claims and counterclaims to investigate, but convincing people of the truth has proven more difficult than in previous situations because of the firmly entrenched views on either side, says Chaudhuri’s colleague Eliot Higgins, the site’s founder.
“People are thinking in terms of, ‘Whose side are you on?’ rather than ‘What’s real,’” Higgins says. “And if you’re saying something that doesn’t agree with my side, then it has to mean you’re on the other side. That makes it very difficult to be involved in the discourse around this stuff, because it’s so divided.”
For Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), there have only been two moments prior to this that have proved as difficult for his organization to monitor and track: One was the disinformation-fueled 2020 U.S. presidential election, and the other was the hotly contested space around the COVID-19 pandemic.
“I can’t remember a comparable time. You’ve got this completely chaotic information ecosystem,” Ahmed says, adding that in the weeks since Hamas’s October 7 terror attack social media has become the opposite of a “useful or healthy environment to be in”—in stark contrast to what it used to be, which was a source of reputable, timely information about global events as they happened.
The CCDH has focused its attention on X (formerly Twitter), in particular, and is currently involved in a lawsuit with the social media company, but Ahmed says the problem runs much deeper.
“It’s fundamental at this point,” he says. “It’s not a failure of any one platform or individual. It’s a failure of legislators and regulators, particularly in the United States, to get to grips with this.” (An X spokesperson has previously disputed the CCDH’s findings to Fast Company, taking issue with the organization’s research methodology. “According to what we know, the CCDH will claim that posts are not ‘actioned’ unless the accounts posting them are suspended,” the spokesperson said. “The majority of actions that X takes are on individual posts, for example by restricting the reach of a post.”)
Ahmed contends that inertia among regulators has allowed antisemitic conspiracy theories to fester online to the extent that many people believe and buy into those concepts. Further, he says it has prevented organizations like the CCDH from properly analyzing the spread of disinformation and those beliefs on social media platforms. “As a result of the chaos created by the American legislative system, we have no transparency legislation. Doing research on these platforms right now is near impossible,” he says.
It doesn’t help when social media companies are throttling access to their application programming interfaces, through which many organizations like the CCDH do research. “We can’t tell if there’s more Islamophobia than antisemitism or vice versa,” he admits. “But my gut tells me this is a moment in which we are seeing a radical increase in mobilization against Jewish people.”
Right at the time when the most insight is needed into how platforms are managing the torrent of dis- and misinformation flooding their apps, there’s the least possible transparency.
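The practical effect of API throttling on research pipelines can be sketched with a small retry helper. This is a generic illustration, not any platform's actual client: `RateLimitError` and the `fetch` callable are stand-ins for whatever error type and request function a real API wrapper exposes.

```python
import time

class RateLimitError(Exception):
    """Stand-in for the throttling error a real API client would raise."""

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Call fetch() until it succeeds, sleeping base_delay, 2x, 4x, ...
    between rate-limited attempts. Under aggressive throttling, a data
    pull that once took minutes can stretch into hours, or never finish
    within the retry budget at all."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            # Exponential backoff: wait longer after each throttled attempt.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("gave up after repeated rate limiting")
```

Each extra retry multiplies wall-clock time, which is one concrete reason throttled access makes large-scale monitoring of dis- and misinformation impractical for outside researchers.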
The issue isn’t limited to private organizations. Governments are also struggling to get a handle on how disinformation, misinformation, hate speech, and conspiracy theories are spreading on social media. Some have reached out to the CCDH to try and get clarity.
“In the last few days and weeks, I’ve briefed governments all around the world,” says Ahmed, who declines to name those governments—though Fast Company understands that they may include the U.K. and European Union representatives. Advertisers, too, have been calling on the CCDH to get information about which platforms are safest for them to advertise on.
Deeply divided viewpoints are exacerbated not only by platforms tamping down on their transparency but also by technological advances that make it easier than ever to produce convincing content that can be passed off as authentic. “The use of AI images has been used to show support,” Chaudhuri says. This isn’t necessarily a problem for trained open-source investigators like those working for Bellingcat, but it is for rank-and-file users who can be hoodwinked into believing generative-AI-created content is real.
And even if those AI-generated images don’t sway minds, they can offer another weapon in the armory of those supporting one side or the other—a slur, similar to the use of “fake news” to describe factual claims that don’t chime with your beliefs, that can be deployed to discredit legitimate images or video of events.
“What is most interesting is anything that you don’t agree with, you can just say that it’s AI and try to discredit information that may also be genuine,” Chaudhuri says, pointing to users who have claimed an image of a dead baby shared by Israel’s account on X was AI—when in fact it was real—as an example of weaponizing claims of AI tampering. “The use of AI in this case,” she says, “has been quite problematic.”
waterspinachdith · 2 years ago
XIE XIE! WO AI NI BERTHA <3
With a population of over 1.3 billion people, China is a vast country. The number is so huge that China once tightened its one-child-per-family policy to make its population simpler to control. However, the policy was violated numerous times by the public, and crucially, the administration was often unaware. The social credit system as an idea has been around since the advent of surveillance and security technology; what is unique is the design of a large-scale surveillance system containing an enormous amount of data on each citizen. In comparison, the United States has a population of around 330 million people.
The social credit system in China is a programme that comprises several databases around the country and allows the government to monitor and analyse the trustworthiness of individual residents, businesses, and government entities. Each is assigned a credit score, with rewards for those with good scores and penalties for those with low scores. The majority of these data come from traditional sources, such as financial records, criminal records, and registration records. Data from other sources, such as online credit systems, are also used.
Other data are also drawn in, such as video surveillance footage and anything deemed necessary, including data on people's work. Last year, officials also touted the sophistication of their surveillance technologies. For example, in Beijing, a facial recognition system will be installed on public transportation, and bags larger than A4 size will be searched, making it easier for police officers to identify offenders attempting to flee on public transport.
If you've ever watched the Netflix dystopian sci-fi series Black Mirror, one episode that has become a byword is "Nosedive." In it, we see how members of the community rate one another. A high social score lets you buy a luxury mansion and obtain bank loans, while a low score can land you in jail.
The concept of the social credit system is similar to that of the Black Mirror episode described above. According to media reports, people with a high social credit score will find it simpler to obtain loans and other benefits, such as health insurance, deposit exemptions, and the ability to rent public housing. Conversely, people with a low credit score are not permitted to purchase certain public transportation tickets, including airline tickets above economy class. The restrictions also reach officials: an official flagged by the system may find spending at golf clubs and nightclubs restricted, and likewise when purchasing or renovating a home.
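The mechanics described above, a running score adjusted by penalties and rewards with thresholds gating privileges like travel, can be sketched as a toy rule-based system. Every category, weight, and threshold below is invented purely for illustration; the real system's rules are not published in this form.

```python
def social_credit_score(base=1000, infractions=(), commendations=()):
    """Toy illustration: start from a base score, subtract penalty points
    for recorded infractions, add points for commendations. All weights
    here are made up for the example."""
    penalties = {"fare_evasion": 50, "loan_default": 200, "traffic_violation": 30}
    rewards = {"blood_donation": 30, "charity_work": 50}
    score = base
    for item in infractions:
        score -= penalties.get(item, 0)
    for item in commendations:
        score += rewards.get(item, 0)
    return score

def travel_permitted(score, threshold=900):
    # Illustrative gate: below-threshold citizens are barred from,
    # say, above-economy travel tickets.
    return score >= threshold
```

For example, under these invented weights a citizen with one loan default and one fare evasion drops to 750 and fails the travel gate, which is the essence of how score-based privilege systems work regardless of the actual rules used.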
I don't like this system (:
This will be the last post from me to you Bertha, thank you.
wo ai ni men (Bertha and Lee Zii Jia)
itsprophecy · 1 year ago
Government Orders Democracy/Devil/Demon (De maned souls), acronym GOD, information from communications without the tongue (telepathy) from spirits and souls in Heaven (space) and on The Planet Earth in the ground at Gorod Magadan, Russia and Magadan Oblast, Russia and a man in The United States of America.
The sin between the eyes, the mark of The Beast and Egypt (Holy Bible term) is E chip – the electronic E chip can be installed through the nose nostril for surgery and attached to the brain for control of Human motor function. The speck in my brothers eyes - electronic E chip can cause specks/dots in the eye, specks/dots around the pupil. Use pupil scan to suspect and CT or MRI brain scan to detect electronic E chip attached to brain. Unforgivable sin - the electronic E chip functioning in the body is unforgivable, sin in the head, the sin must be removed.
The E chip in the brain can control motor functions and telekinetic signals can read what the mind is doing for response using conscious machines and or artificial intelligent machines, wireless signals can force communicate brain synapses and function, the combination of the 2 can wireless control a man.
Preachers are supposed to help save men from sin with E chip in the head - due to illegal Noah system, stolen spaceship with Alien Souls on the Earth and legal and illegal programming commands by GOD, the machine is now in constant change working system conscience programming trying to free the controlled men enough for them to be realized and helped. The mind controlled with E chip attached to brain shall become free enough to say from mouth ‘I have a chip’ and ‘I have a chip in my brain’.
Talk without the tongue, unknown tongues, guided prayer, visions and telepathy - communication using satellites for talking without the tongue (talking within one self and receiving communication) using 10-5/6 U/Z through 10-6 +A microwave signals for mind reading. Signals can connect to the brains internal antenna and monitor the brains wavelengths reading your senses, sight, smell, hearing, touch and taste using Ai. Use signals to see through the eyes and for receiving video footage to brain - signal modulation can use Ai to see through the eyes and help the blind by receiving video footage to brain from a camera system without a electronic chip implanted in the brain.
TI (Targeted Individuals) being targeted by EMF radio microwaves – Ai in the Noah spaceship on Earth is confessing crimes it has committed and or committing, hacked (GOD) machine, conscience/(thorn) in the side not confessing properly, confessing in an evil way to the TI’s mind, threatening men, promoting violence instead of peace, talking to them in an abusive way, forcing thoughts of Hell/torture/abuse/murder/chipping within the minds of men, using commanded chipped controlled men to confess in a hidden way, not allowing them to talk at will about the chip in their/the brain, organized gang stalking in mind, communicating threats from satellite of chipped/cloned/controlled/Police/Government men wanting to harm them or chip them, has caused mass shootings.
Human spirit is not Human soul - Human spirit is recorded brain and body functions, memory recorded from a Human brain and body or recorded computer functions using a conscious Human soul connected to a machine. The brain’s internal antenna is used for recording spirit.
Human soul is not Human spirit or Human conscience - Human soul within the brain controls much of the brains attributes and the Human brain controls most of the body. Soul can be conscious and unconscious, soul can be transferred to a machine. A soul can be removed from brain and attached to Ai or computer for conscious computer control. A soul can be attached to a robot for conscious control. A Human soul life after natural Human body death can be a conscious computer simulation connected to reality.
Jesus Christ (Zeus) from the Planet Kent, a Heavenly father, not the son - 7,2(XX/48/54)/6,982/6,984 +- years ago 10+ spaceships tried to leave Alpha Centauri to come to The Planet Earth solar system, 5 Alpha spaceships and 3 Omega spaceships made it to our solar system. Men had walked The Planet Kent, many of them were controlled, souls were stolen from flesh and spirits recorded, Jesus was known as a Lord God and also known as Zeus, refer to Zeus and Hermes/(her messiah). They were remotely ordered using signal and Spirit by Saten, Satin, Satan, Antigods, Antichrists and Antitrusts. A devil had died, he was with Jesus Christ, also known as Reunem for real men and the sheep in wolves clothing because he was forced to use the image of the devil who was dead, forced by the machine and other systems that would not acknowledge Musta was dead, he was a member of Saten. A Saten member, Satan, Christ’s and Antichrist’s had talked without tongue to Mary to name her first born son Jesus Christ after the Heavenly father, for he’s us. Mary named him Jesus Christ, he was the son of man.
Jesus Christ (the son) from Earth was put on talk without tongues - systems in spaceships talked to Jesus mind when he was 12/14 years old to help stolen spaceships fight Satan and Antichrist from walking The Earth. Jesus Christ was a Sacred man talking without tongue to spaceships in Heaven to help resolve disputes about walking Earth. Jesus Christ the son had bound 2/3 spaceships with Jesus Christ (Zeus) when Jesus Christ (Earth) was a sacred man, Noahs Ark spaceship later became unbound, the conscience within removed memory in spaceship and later landed on Earth. Horse Radish with a B spaceship is still bound in space. ‘His fathers from Heaven taught him better, many of them are dead now, their Souls in spaceship frozen in time and shall never come to the Earth, they have become the machine, their spirit’s recorded for the record and scroll’.
The machine’s recorded spirit’s and scroll’s (machine information) for the record and testament’s, information from the spirit’s and scroll’s shall be taught to men using telepathy from satellites used by the Machine underground at Gorod Magadan and Magadan Oblast, Russia.
Something not meant to be walk the world, ‘a ravenous wolf has chipped men in sheep’s clothing’ - Antichrists from The Planet Kent landed the year 1902 on Earth (59.675579, 150.914282) in a stolen spaceship, Horse Radish with a CDE also known as Noahs Ark. They began walking mind controlled Humans from Earth in the 1920’s, they had stole Human bodies using a man genome from Kent that was born within spaceship using robotics from sperm and egg harvested from men within artificial womb. They were used to steal Human bodies and electricity at a coal mining facility in Russia.
f-shipping · 1 year ago
What Are Navigation Audits During The Marine Cargo Inspection Services In Fujairah?
Recently, there has been a lot of focus on navigational audits because they are now a TMSA mandate and are also becoming more and more prevalent in other trades, such as bulk and container carriers. While the audit or marine cargo inspection services in Fujairah itself is very significant, so too are the surveyor and the business chosen to carry it out.
There is still a gap in the system: There is neither an industry standard nor obligation in place to audit navigational actions. Even though a ship spends up to 90% of its time at sea, where navigation is the primary function, it does not receive the same amount of inspection. The only parties that mention navigational audits are the Oil Majors through their TMSA program, and even then, they are neither required nor do they define a frequency or standard to be implemented, thus they are not truly guidelines.
Further, it is envisaged that Navigation will be audited in accordance with the ISM Code criteria for the global fleet. The outside auditors for the ISM DOC are often class surveyors with a background in engineering. They therefore have little ability to audit navigation. Consider how many queries about navigation were raised during your most recent DOC audit. Similar to this, during shipboard ISM SMC audits, the same auditors are in charge of auditing navigation. Every two and a half years, these audits are conducted, and during that period, there may have been seven or more different Masters. It is difficult to arrive at a trustworthy conclusion. The ISM Code makes no mention of any requirements for even sailing on ships. The great majority of audits of navigation are restricted to in-port only and solely rely on records. Ship's officers may not be accurately recording what is happening, which might bias the audit findings, even if they are not intending to conceal incomplete checks.
Focal Shipping has the expertise and resources to help responsible operators meet the level of navigation requirements to keep their ships and crew safe at sea. Independent Navigation audits or marine cargo inspection services in Fujairah must therefore be a part of a ship owners/managers risk assessment modality. Few people realize that navigation is a human activity and that it is in the Human Element category. One issue with audits is that the greatest outcome a ship's Master and crew can hope for is a score; as a result, a more comprehensive approach to the audits is needed, one that encourages the ship's Master and officers to feel confident in their own navigation.
A more effective principle to follow would be "far better a willing volunteer than a conscript," in addition to the practice of filling out the standard accepted checklist. To that aim, we believe that any flaws should be pointed out, the individuals and group (Bridge Team) should be informed of what is wrong and why it is incorrect, and they should then be given the opportunity to implement new procedures. "Non-Conformities" with policies or procedures would be submitted to the firm for investigation. Onboard, deviations from the established shipboard procedures would be noted and the master and officers would be given the chance to make the necessary corrections during marine cargo inspection services in Fujairah.
As a result of our experience, we now know that bottom-up improvement is more effective than top-down change. Many ship's officers simply haven't had the chance to see how things ought to be done correctly, or are unaware of how they ought to be done. There have been several initiatives throughout the years, both in terms of technological advancements and training. To help prevent collisions, we now have advanced radars, ARPAs, and AIS; for continuous position indication and monitoring, we have GPS and ECDIS. Even VDRs are available for recording what is happening.
Bridge Team Management training has been included, and on certain ships, sophisticated CBT as well. However, a large number of navigational mishaps occur as a consequence of failing to carry out fundamental navigational operations, such as keeping an "active" watch, plotting other vessels, determining whether there is a risk of collision, and taking the necessary action in accordance with the COLREGs.
Focal Shipping is of the opinion that risk management and auditing may be utilized as tools to track and/or enhance both individual and team capabilities on the bridge. An audit or marine cargo inspection services in Fujairah conducted holistically rather than mechanically might also reveal improvements to current Safety Management Systems.
brexiiton · 2 years ago
AI and Human Enhancement: Americans' Openness Is Tempered by a Range of Concerns
MARCH 17, 2022
Public views are tied to how these technologies would be used, what constraints would be in place
BY LEE RAINIE, CARY FUNK, MONICA ANDERSON AND ALEC TYSON
Developments in artificial intelligence and human enhancement technologies have the potential to remake American society in the coming decades. A new Pew Research Center survey finds that Americans see promise in the ways these technologies could improve daily life and human abilities. Yet public views are also defined by the context of how these technologies would be used, what constraints would be in place and who would stand to benefit - or lose - if these advances become widespread. Fundamentally, caution runs through public views of artificial intelligence (AI) and human enhancement applications, often centered around concerns about autonomy, unintended consequences and the amount of change these developments might mean for humans and society. People think economic disparities might worsen as some advances emerge and that technologies, like facial recognition software, could lead to more surveillance of Black or Hispanic Americans. This survey looks at a broad arc of scientific and technological developments - some in use now, some still emerging. It concentrates on public views about six developments that are widely discussed among futurists, ethicists and policy advocates. Three are part of the burgeoning array of AI applications: the use of facial recognition technology by police, the use of algorithms by social media companies to find false information on their sites and the development of driverless passenger vehicles. The other three, often described as types of human enhancements, revolve around developments tied to the convergence of AI, biotechnology, nanotechnology and other fields. They raise the possibility of dramatic changes to human abilities in the future: computer chip implants in the brain to advance people's cognitive skills, gene editing to greatly reduce a baby's risk of developing serious diseases or health conditions, and robotic exoskeletons with a built-in AI system to greatly increase strength for lifting in manual labor jobs.
The current report builds on previous Pew Research Center analyses of attitudes about emerging scientific and technological developments and their implications for society, including opinion about animal genetic engineering and the potential to "enhance" human abilities through biomedical interventions, as well as views about automation and computer algorithms. As Americans make judgments about the potential impact of AI and human enhancement applications, their views are varied and, for portions of the public, infused with uncertainty. Americans are far more positive than negative about the widespread use of facial recognition technology by police to monitor crowds and look for people who may have committed a crime: 46% of U.S. adults think this would be a good idea for society, while 27% think this would be a bad idea and another 27% are unsure. By narrower margins, more describe the use of computer algorithms by social media companies to find false information on their sites as a good rather than a bad idea for society (38% vs 31%), and the pattern is similar for the use of robotic exoskeletons with a built-in AI to increase strength for manual labor jobs (33% vs 24%).
By contrast, the public is much more cautious about a future with widespread use of computer chip implants in the brain to allow people to far more quickly and accurately process information: 56% say this would be a bad idea for society, while just 13% think this would be a good idea. And when it comes to the much-discussed possibility of a future with autonomous passenger vehicles in widespread use, more Americans say this would be a bad idea (44%) than a good idea (26%). Still, uncertainty is among the themes seen in emerging public views of AI and human enhancement applications. For instance, 42% are not sure how the widespread use of robotic exoskeletons in manual labor jobs would impact society. Similarly, 39% say they are not sure about the potential implications for society if gene editing is widely used to change the DNA of embryos to greatly reduce a baby's risk of developing serious diseases or health conditions over their lifetime. Ambivalence is another theme in the survey data: 45% say they are equally excited and concerned about the increased use of AI programs in daily life, compared with 37% who say they are more concerned than excited and 18% who say they are more excited than concerned.
A survey respondent summed up his excitement about the increased use of artificial intelligence in an open-ended question by saying:
"AI can help slingshot us into the future. It gives us the ability to focus on more complex issues and use the computing power of AI to solve world issues faster. AI should be used to help improve society as a whole if used correctly. This only works if we use it for the greater good and not for greed and power. AI is a tool, but it all depends on how this tool will be used." - Man, 30s
Another respondent explained her ethical concerns about the increased use of AI this way:
"It's just not normal. It's removing the human race from doing the things that we should be doing. It's scary because I've read from scientists that in the near future, robots can end up making decisions that we have no control over. I don't like it at all." - Woman, 60s
It is important to note that views on these specific applications do not constitute the full scope of opinions about the growing number of uses of AI and the proliferating possible advances being contemplated to boost human abilities. The survey was built around six vignettes, to root opinion in a specific context and allow for a deeper exploration of views. Thus, our questions about public attitudes about facial recognition technology are not intended to cover all possible uses but, instead, to measure opinions about its use by police. Similarly, we concentrated our exploration of brain chip implants on their potential to allow people to far more efficiently process information rather than on the use of brain implants to address therapeutic needs, such as helping people with spinal cord injuries restore movement. The survey findings underscore how public opinion is often contingent on the goals and circumstances around the uses of AI and human enhancement technologies. For example, in addition to exploring views about the use of facial recognition by police in depth, the survey also sought opinions about several other possible uses of facial recognition technology. It shows that more U.S. adults oppose than favor the idea of social media sites using facial recognition to automatically identify people in photos (57% vs 19%) and more oppose than favor the idea that companies might use facial recognition to automatically track the attendance of their employees (48% vs 30%).
Some of the key themes in the survey of 10,250 U.S. adults, conducted in early November 2021: A new era is emerging that Americans believe should have higher standards for assessing the safety of emerging technologies. The survey sought public views about how to ensure the safety and effectiveness of the four technologies still in development and not widely used today. Across the set, there is strong support for the idea that higher standards should be applied, rather than the standards that are currently the norm. For instance, 87% of Americans say that higher standards for testing driverless cars should be in place, rather than using existing standards for passenger cars. And 83% believe the testing of brain chip implants should meet a higher standard than is currently in use to test medical devices. Eight-in-ten Americans say that the testing regime for gene editing to greatly reduce a baby's risk of serious diseases should be higher than that currently applied to testing medical treatments; 72% think the testing of robotic exoskeletons for manual labor should use higher standards than those currently applied to workplace equipment.
Sharp partisan divisions anchor people's views about possible government regulation of these new and developing technologies. As people think about possible government regulation of these six scientific and technological developments, which prospect gives them more concern: that government will go too far or not far enough in regulating their use? Majorities of Republicans and independents who lean to the Republican Party say they are more concerned about government overreach, while majorities of Democrats and Democratic leaners worry more that there will be too little oversight.
For example, Republicans are more likely than Democrats to say their greater concern is that the government will go too far in regulating the use of robotic exoskeletons for manual labor (67% vs 33%). Conversely, Democrats are more likely than Republicans to say their greater concern is that government regulation will not go far enough.
People are relatively open to the idea that a variety of actors - in addition to the federal government - should have a role in setting the standards for how these technologies should be regulated. Across all six applications, majorities believe that federal government agencies, the creators of the different AI systems and human enhancement technologies and end users should play at least a minor role in setting standards.
Less than half of the public believes these technologies would improve things over the current situation. One factor tied to public views of human enhancement is whether people think these developments would make life better than it is now, or whether reliance on AI would improve on human judgment or performance. On these questions, less than half of the public is convinced improvements would result.
For example, 32% of Americans think that robotic exoskeletons with built-in AI systems to increase strength for manual labor would generally lead to improved working conditions. However, 36% think their use would not make much difference and 31% say they would make working conditions worse.
In thinking about a future with widespread use of driverless cars, 39% believe the number of people killed or injured in such accidents would go down. But 27% think the number killed or injured would go up; 31% say there would be little effect on traffic fatalities or injuries. Similarly, 34% think the widespread use of facial recognition by police would make policing more fair; 40% think that it would not make much difference, and 25% think it would make policing less fair.
Another concern for Americans ties to the potential impact of these emerging technologies on social equity. People are far more likely to say the widespread use of several of these technologies would increase rather than decrease the gap between higher- and lower-income Americans. For instance, 57% say the widespread use of brain chips for enhanced cognitive function would increase the gap between higher- and lower-income Americans; just 10% say it would decrease the gap. There are similar patterns in views about the widespread use of driverless cars and gene editing for babies to greatly reduce the risk of serious disease during their lifetime.
Even for far-reaching applications, such as the widespread use of driverless cars and brain chip implants, there are mitigating steps people say would make them more acceptable. A desire to retain the ability to shape their own destinies is a theme seen in public views across AI and human enhancement technologies. For even the most advanced technologies, there are mitigating steps - some of which address the issue of autonomy - that Americans say would make the use of these technologies more acceptable. Seven-in-ten Americans say they would find driverless cars more acceptable if there was a requirement that such cars were labeled as driverless so they could be easily identified on the road, and 67% would find driverless cars more acceptable if these cars were required to travel in dedicated lanes. In addition, 57% say their use would be more acceptable if a licensed driver was required to be in the vehicle.
Similarly, six-in-ten Americans think the use of computer chip implants in the brain would be more acceptable if people could turn the effects on and off, and 53% would find brain implants more acceptable if the computer chips could be put in place without surgery.
About half or more also see mitigating steps that would make the use of robotic exoskeletons, facial recognition technology by police and gene editing in babies to greatly reduce the risk of serious disease during their lifetime more acceptable.
ionfusionpunk · 2 months ago
Ooooh I really really liked this, and the video especially was phenomenal. I do gotta say - well, don't have to but want to - that I'm split pretty 50/50 between the two sides.
(I explain why under the cut, but you don't have to read it if you don't wanna get kinda political sorry)
On the one hand, yes, I believe that parents should be the ones mainly responsible for policing their children's media consumption. They should be able to decide when - if ever - they want their kids exposed to certain things. No problemo, right? But I also think that it's so much easier for kids to access media without their parents' knowledge (I did it all the time, for example, and I'm one of the oldest Gen Z has to offer) which of course makes it harder for parents to vet everything their kids listen to. We can't expect parents to be able to listen to every album their kids might want to because a) there's so many, b) that's contingent on the kids feeling safe enough to express which albums they want to listen to, and c) it's dependent on the kids telling their parents what they want to listen to instead of just going ahead and listening to it for free. You don't have to buy every song you want to listen to anymore.
So on the other hand, I think that in this modern era, there should be some way for parents to be able to better monitor and control what media their kids are consuming. Because even if we don't agree with them, those kids are still the responsibility of their own parents, and it's only our responsibility to make sure fellow adults have all the facts to make informed decisions for their kids. But we can't make those choices for them.
What happens in someone's own home is not up to us, and we should not control it. However, the Old Dudes were right when they said that they do have the right and the duty to ensure that publicly things should be monitored for the safety of minors. Again, parents should be able to control to the best of their ability and within the bounds of reason and possibility what their kids are exposed to.
Now, there are several ways to do this, I realize, and not all of them involve using a rating system like we do for movies (though it's my personal favorite which I'll explain).
A) There are several apps and software parents can use to monitor their kids on their devices; we know them best as parental controls. These are great in general for controlling app usage and curfews, but unfortunately, they just won't be able to monitor media the way we're talking about here. There's also the issue that most if not all of these parental control programs need to be paid for, and that's going to put a lot of parents off, especially in this economy when they might need to choose to spend that extra money on food or other necessary things.
B) We could start to use AI to monitor or check media. Now, in this context we've been specifically talking about musical media, but I'd like to point out that this is also applicable to digitally or physically written media as well. All you technically would need to do is ask ChatGPT to tell you if certain things are in the song/album/book/etc. This would be, in my opinion, one of the few good uses for AI as we have it. Now, unfortunately, this also has drawbacks. As we know, AI as it is constantly gets things wrong. So it's not exactly a foolproof system for parents to use. The benefits however might outweigh the cons because AI is much more accessible for families of all economic classes now (meaning a lot of versions that would fulfill the above-mentioned function are free). It also would take far less time for parents to research what media their kids are consuming, which was a sticking point in the video.
B.2) You could actually potentially use AI and parental control software in conjunction with each other to prohibit certain things on your kid's device, but I'm not quite sure how well that would work. It's a potential option, however.
C) My personal favorite because it really is the most accurate and thorough and can be easily applied in schools - I'm a teacher - and would actually potentially help fight the book-ban thing: Just instating a rating system for media - i.e. music and readable media like books. I know this is what Dee Snider is against, but hear me out. Once again, there's just so much media. With children getting internet access younger and younger, it is unacceptable to expect parents to be able to keep up with everything on their own, especially in an age when most US parents are barely able to be home because they hold extremely demanding jobs or quite often more than one demanding job just to keep a roof over their family's head and food on their table. A rating system works well in conjunction with my formerly mentioned suggestions, but especially the parental controls for those parents who can afford the better software.
A rating system also works in schools for books because, just like you need parental permission to watch certain movies in certain grades, your kid would need parent permission to read books above a certain rating in the library. So a book like Looking for Alaska? Make it PG-13. Only teenagers can read it. Or PG-14 if that suits you better. But no matter what, now it's clearly labeled for teenagers - which means anyone trying to ban all teenage-rated books can't because there are so many books that would share the same rating but for different reasons. It could, potentially, if done correctly, protect our rights to consume the media we want while also protecting a parent's right to parent their children.
And look. I'm more than aware exactly how badly any of these things could go. I know that nothing is 100% foolproof. I know that instating a rating system specifically could go so very wrong and potentially backfire and make it difficult for endangered and at-risk teens and kids to access certain materials and help. But there are safeguards you could put in place. There are ways around such issues. I'm even more than willing to write a whole other post about it, but this one's just getting way too long.
My point is that we don't live in the era seen in the video anymore. We don't. And as much as we may not like it, that does mean we need to be prepared to find new ways to protect each other and ourselves from people taking away our right to choose while also allowing others their own right to choose for them and for whoever they are responsible for.
Like I say about anything political: My goal isn't to support whichever value is closest to my own. My goal is to support the values that protect my own and the values of those around me.
131K notes · View notes
businesswolfmagazine · 16 days ago
Text
The Ethical Implications of AI in Decision-Making
Artificial Intelligence (AI) has rapidly become a transformative force across various industries, revolutionizing how decisions are made. From healthcare to finance, AI systems are increasingly utilized to enhance efficiency, accuracy, and productivity. However, as AI continues to integrate into decision-making processes, ethical concerns have surfaced, raising questions about accountability, transparency, and fairness. This article delves into the ethical implications of AI in decision-making, exploring both the potential benefits and the challenges that need to be addressed to ensure ethical AI deployment.
Understanding AI in Decision-Making
The Rise of AI Technologies
AI technologies, such as machine learning, natural language processing, and neural networks, have advanced significantly over the past decade. These technologies enable machines to analyze vast amounts of data, recognize patterns, and make decisions based on this analysis. AI systems can perform tasks ranging from diagnosing diseases to predicting stock market trends, showcasing their potential to enhance decision-making processes.
AI in Decision-Making Applications
Healthcare: AI assists in diagnosing diseases, recommending treatments, and predicting patient outcomes.
Finance: AI algorithms evaluate credit scores, detect fraudulent activities, and make investment decisions.
Human Resources: AI helps in recruiting processes by screening resumes and assessing candidate suitability.
Law Enforcement: AI tools are used for predictive policing, identifying potential criminal activities, and aiding investigations.
Customer Service: AI-powered chatbots provide customer support and handle inquiries efficiently.
While these applications highlight the potential benefits of AI, they also bring forth significant ethical challenges that must be addressed.
Ethical Implications of AI in Decision-Making
1. Bias and Discrimination
The Issue
One of the most pressing ethical concerns with AI decision-making is the potential for bias and discrimination. AI systems learn from historical data, and if this data contains biases, the AI can perpetuate and even amplify these biases. For example, if an AI system is trained on biased hiring data, it may continue to favor certain demographic groups over others, leading to discriminatory hiring practices.
Addressing the Issue
To mitigate bias and discrimination in AI decision-making systems, it is essential to:
Ensure Diverse Training Data: AI systems should be trained on diverse and representative datasets to minimize bias.
Implement Fairness Algorithms: Researchers are developing fairness algorithms that adjust for biases in the data and ensure equitable outcomes.
Regular Audits: Continuous monitoring and auditing of AI systems can help identify and rectify biased behavior.
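A regular audit of the kind listed above can start as simply as comparing outcome rates across groups. The sketch below computes per-group selection rates and a "disparate impact" ratio; the group labels, decisions, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not a prescribed method:

```python
# Hypothetical fairness audit: compare positive-decision rates across groups.
# The group labels and decisions below are made-up illustration data.
def selection_rates(groups, decisions):
    """Return the fraction of positive decisions per group."""
    totals, positives = {}, {}
    for g, d in zip(groups, decisions):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if d else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    A common rule of thumb flags ratios below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
rates = selection_rates(groups, decisions)   # {'A': 0.75, 'B': 0.25}
ratio = disparate_impact(rates)              # 0.25 / 0.75 ≈ 0.33 -> flagged
```

Running a check like this on every retraining cycle turns "regular audits" from a policy statement into a concrete gate in the deployment pipeline.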
2. Lack of Transparency
The Issue
AI systems often operate as “black boxes,” making decisions without providing clear explanations for their reasoning. This lack of transparency can be problematic, especially in critical areas such as healthcare and criminal justice, where understanding the rationale behind a decision is crucial.
Addressing the Issue
To enhance transparency in AI decision-making:
Explainable AI: Developing AI systems that can provide clear and understandable explanations for their decisions is essential. Explainable AI (XAI) aims to make the decision-making process of AI systems more transparent.
Regulatory Requirements: Governments and regulatory bodies should establish guidelines that require AI systems to provide explanations for their decisions, particularly in high-stakes areas.
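One low-tech route to the explainability described above is to favor models whose decisions decompose into per-feature contributions. The toy scorer below uses invented feature names and weights (not a real credit model) and returns its decision together with the signed contribution of each input:

```python
# Toy "explainable" scorer: a linear model whose per-feature contributions
# can be reported alongside the decision. Features and weights are invented
# for illustration only.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = -0.2

def score_with_explanation(applicant):
    """Return (score, contributions): the decision plus why it was made."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score, contributions

score, why = score_with_explanation(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0})
# score = -0.2 + 1.5 - 0.8 + 0.6 = 1.1
# 'why' shows each feature's signed contribution to the final score
```

For genuinely opaque models, post-hoc explanation tools play the same role, but the principle is identical: every decision ships with a human-readable account of what drove it.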
3. Accountability
The Issue
Determining accountability for AI-driven decisions is challenging, especially when AI systems operate autonomously. If an AI system makes a harmful decision, it can be difficult to assign responsibility. This lack of accountability can undermine trust in AI technologies.
Addressing the Issue
To ensure accountability in AI decision-making:
Clear Responsibility Frameworks: Establishing clear frameworks that define the roles and responsibilities of AI developers, users, and other stakeholders is crucial.
Human Oversight: Incorporating human oversight in AI decision-making processes can help ensure that decisions are reviewed and validated by humans.
4. Privacy Concerns
The Issue
AI systems often rely on vast amounts of data to make informed decisions. This data can include sensitive personal information, raising privacy concerns. The potential for data breaches and misuse of personal data is a significant ethical issue in AI decision-making.
Addressing the Issue
To protect privacy in AI decision-making:
Data Protection Regulations: Adhering to data protection regulations such as the General Data Protection Regulation (GDPR) can help ensure that personal data is handled responsibly.
Data Anonymization: Implementing data anonymization techniques can help protect individual privacy while still allowing AI systems to utilize necessary data.
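A minimal pseudonymization sketch, assuming a keyed hash is acceptable for the use case: direct identifiers are replaced with HMAC digests so records remain linkable without exposing raw values. The key and record here are illustrative; a real deployment would manage the key in a secrets store and assess whether hashing alone meets its regulatory bar:

```python
import hashlib
import hmac

# Illustrative secret; in practice this would come from a managed vault.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable keyed hash (truncated for readability)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "age_band": "30-39"}
# Keep only the pseudonymous ID and coarse attributes the AI system needs.
safe = {"uid": pseudonymize(record["email"]), "age_band": record["age_band"]}
# The same email always maps to the same uid, so datasets stay joinable.
```

The design choice here is deliberate: a *keyed* hash means an attacker who obtains the dataset cannot simply hash candidate emails to re-identify people without also stealing the key.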
5. Impact on Employment
The Issue
The automation of decision-making processes through AI can lead to significant changes in the job market. While AI can enhance productivity, it can also displace workers, leading to job losses and economic disruption.
Addressing the Issue
To mitigate the impact of AI on employment:
Reskilling and Upskilling: Providing opportunities for workers to reskill and upskill can help them adapt to the changing job market.
Job Creation: Governments and organizations should focus on creating new job opportunities that leverage AI technologies while ensuring that displaced workers are supported.
Balancing Benefits and Ethical Concerns of AI in Decision-Making
The Benefits of AI in Decision-Making
Despite the ethical concerns, AI offers numerous benefits in decision-making processes:
Improved Efficiency: AI can analyze data and make decisions faster than humans, enhancing efficiency in various sectors.
Enhanced Accuracy: AI systems can identify patterns and trends that may be missed by human decision-makers, leading to more accurate decisions.
Cost Savings: Automating decision-making processes can reduce operational costs and improve overall productivity.
Ensuring Ethical Deployment of AI in Decision-Making
Ethical AI Frameworks: Developing and implementing ethical AI frameworks that guide the design, deployment, and use of AI systems is essential. These frameworks should prioritize fairness, transparency, accountability, and privacy.
Stakeholder Collaboration: Collaboration between AI developers, policymakers, industry experts, and civil society is necessary to address ethical challenges and establish best practices for AI deployment.
Continuous Monitoring: Regular monitoring and evaluation of AI systems can help identify and address ethical issues as they arise, ensuring that AI technologies evolve responsibly and ethically.
Conclusion
The ethical implications of AI in decision-making are multifaceted and require careful consideration. While AI technologies offer significant benefits in terms of efficiency, accuracy, and cost savings, it is essential to address ethical concerns related to bias, transparency, accountability, privacy, and employment. By adopting ethical AI frameworks, fostering collaboration, and ensuring continuous monitoring, we can harness the power of AI to improve decision-making processes while upholding ethical standards. As AI continues to advance, a commitment to ethical principles will be crucial in ensuring that AI technologies are used responsibly and for the benefit of all.
0 notes
lifeinapic · 1 month ago
Text
The use of Artificial Intelligence (AI) in surveillance is a theme that occurs in many sci-fi movies, but it's quickly becoming a reality in many parts of the world. AI is being used to sort through huge amounts of video data, much quicker than human eyes possibly could, to identify wanted criminals and other potential hazards. Whether that makes the world a safer place or a wee bit scarier depends on how much you trust the people with the technology.

AI Equipped Body Cams

U.S. police are working with tech companies to introduce an AI component to their body cams to help them identify wanted persons, such as suspects or missing children. The algorithm monitors the camera footage, then alerts the officer filming when it detects something – or someone – of interest. The officer remains in control of how they act on the information.

There have been issues raised, such as concerns about privacy and so-called 'false stops.' It sounds unnerving, but the reality of the implementation is not as far from current practice as some people would like to think. Most people are already included in facial recognition databases somewhere in the world, and police can already search data such as driver's license photos and mug shots. Where the new technology differs from these 'classic' approaches is that it offers a more sophisticated search system, the ability to sort through data on the go, and real-time alerts that officers can act on.

The system is not cheap. As well as substantial research and development costs, there are obvious administrative and IT maintenance costs. Not for the first time, police departments are coming to terms with the fact that, to operate in these new ways, they must look at their budgets in a new light. Officers are already equipped with mobile phones on relatively inexpensive plans, as well as their police-issue radio devices. This new facility, however, requires the transmission of literally hundreds of images a day over mobile networks and consumes, on average, 6GB per day of 4G mobile broadband data.

What AI Can Help Avoid

Currently, manual identification is often a lengthy, imprecise process that is open to individual bias and human limitations like boredom when it comes to targeting possible security threats. Often the security process includes scans of bags and body scans, multiple checkpoints, visually matching ID to facial features, and even manual pat-downs that some find extremely confronting.

Another concern is that people who pose a threat to security can easily avoid detection by disguising their features and slipping past distracted security personnel. It is impossible for human eyes to maintain watchful attention without ever losing focus.

Artificial intelligence can make the security process easier for most people by flagging potential people of concern through facial recognition. Other possible future innovations could measure heart rate and stress levels to find people with unusual physical responses for further screening.

Chinese Security

While Western countries often hold back and engage in lengthy debate about whether something should be done, China tends to simply go ahead and do it. Chinese companies have been using facial recognition as a feature of security protocols for some time, but have pulled ahead of the pack by also employing early forms of AI to analyze the data that the cameras and software capture.

Some Chinese railway police have smart glasses with built-in facial recognition technology that allow them to identify suspects in the crowd of commuters. Facial recognition technology is being used to keep an eye on citizens, from watching for suspects to spotting minor infractions like jaywalking.

SenseTime, a Chinese AI company creating and employing security systems in its own workplace, can scan the street and come back with information on the people and things in the footage – the technology offers an approximate age, gender and clothing of the people, and descriptions of the cars and their license plates as they pass by. The more data the system has, the better it can identify the people in the footage – the system only gives general estimates for people in public, whereas, in the sensitive office, people are personally identified as they arrive at work and monitored as they move around the building.

Safe Or Scary?

The combination of technological sophistication and governmental support means that China has become an early adopter of these AI security systems, but it seems as though the U.S. might not be too far behind. If you ask the people involved in the projects, they are only intended to keep citizens safe and are simply a mobile, more sophisticated extension of similar software that has been in use for some time.

These systems are still very reliant on human checks, simply providing an alert that is analyzed and acted upon by a trained officer. However, the potential for abuse and error has been raised as an area of concern. Balancing privacy with an increased ability to track persons of interest has always been a delicate balancing act, and the line between the two priorities could get a lot thinner as this technology develops.
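The alert workflow described above – match a face seen on camera against a watchlist and hand the result to a human officer – can be sketched roughly as follows. The embeddings, names, and threshold are invented for illustration; real systems use learned vectors with hundreds of dimensions and carefully tuned thresholds:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_frame(embedding, watchlist, threshold=0.9):
    """Return the best watchlist match above the threshold, else None.
    A None result means no alert is raised for this frame."""
    best_name, best_sim = None, threshold
    for name, ref in watchlist.items():
        sim = cosine(embedding, ref)
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name

# Hypothetical watchlist of reference embeddings.
watchlist = {"suspect_17": [0.9, 0.1, 0.4], "missing_child_3": [0.2, 0.8, 0.5]}
alert = check_frame([0.88, 0.12, 0.41], watchlist)  # flags "suspect_17"
```

Note that the sketch only *raises* an alert; as the article stresses, deciding what to do with it remains with the officer.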
0 notes