#Fourth Amendment
Text
Someone finally got it right...
52 notes
·
View notes
Text
An internet and privacy watchdog has a warning: Your car is tracking you, and it’s collecting far more information than it needs just to get you where you’re going.
Mozilla, the nonprofit that develops the Firefox browser, released a report Wednesday detailing how the policies of more than two dozen car manufacturers allow for the collection, storage and sale of a wide range of sensitive information about auto owners.
Researchers behind the report said that cars now routinely collect data on par with tech companies, offer few details on how that data is stored and used, and don’t give drivers any meaningful way to opt out.
“Cars are a humongous privacy nightmare that nobody’s seemingly paying attention to,” said Jen Caltrider, who directs Privacy Not Included, a consumer privacy guide run by Mozilla. “And they’re getting away with it. It really needs to change because it’s only going to get worse as cars get more and more connected.”
Unlike Europe, the U.S. has few meaningful regulations on how companies trade and store personal data. That's led to a bustling industry of companies that buy and sell people's information, often without their knowledge.
Carmakers have a long list of personal information they say they may track, including employment and purchasing history, education, internet browsing history, location data, music and podcast listening habits, immigration status, religious and philosophical beliefs and health information.
(continue reading)
#politics#smart cars#privacy rights#data mining#spyware#capitalism#surveillance state#connected cars#4th amendment#fourth amendment#4th amendment violations
119 notes
·
View notes
Link
“Correctional facilities across the country have a variety of rationales they use to justify this, but largely it boils down to the fact that scanned electronic mail is easier to surveil than physical mail. NYC’s plan is ostensibly in response to a spike in overdoses in NYC’s jail system.”
“Council members also had privacy concerns should mail be recorded off-site by a private contractor. [...] Such data can be retained far into the future and be used against people even if they have never been charged with a crime, have been released from jail, or have had charges dismissed...” “Previous attempts to digitize mail have resulted in first amendment lawsuits.”
#vice#civil rights#right to privacy#prison reform#us prisons#mail#fourth amendment#4th amendment#prisoners rights#prison industrial complex#human rights#first amendment#1st amendment
145 notes
·
View notes
Text
isn't it silly that police don't protect you from other people. Other people protect you from police.
#fuck cops#all cops are bastards#Pigs have helicopters#cops are pigs#police brutality#Protect yourself from the police#fourth amendment
15 notes
·
View notes
Text
The first time Karl Ricanek was stopped by police for “driving while Black” was in the summer of 1995. He was twenty-five and had just qualified as an engineer and started work at the US Department of Defense’s Naval Undersea Warfare Center in Newport, Rhode Island, a wealthy town known for its spectacular cliff walks and millionaires’ mansions. That summer, he had bought his first nice car—a two-year-old dark green Infiniti J30T that cost him roughly $30,000 (US).
One evening, on his way back to the place he rented in First Beach, a police car pulled him over. Karl was polite, distant, knowing not to seem combative or aggressive. He knew, too, to keep his hands in visible places and what could happen if he didn’t. It was something he’d been trained to do from a young age.
The cop asked Karl his name, which he told him, even though he didn’t have to. He was well aware that if he wanted to get out of this thing, he had to cooperate. He felt at that moment he had been stripped of any rights, but he knew this was what he—and thousands of others like him—had to live with. This is a nice car, the cop told Karl. How do you afford a fancy car like this?
What do you mean? Karl thought furiously. None of your business how I afford this car. Instead, he said, “Well, I’m an engineer. I work over at the research centre. I bought the car with my wages.”
That wasn’t the last time Karl was pulled over by a cop. In fact, it wasn’t even the last time in Newport. And when friends and colleagues shrugged, telling him that getting stopped and being asked some questions didn’t sound like a big deal, he let it lie. But they had never been stopped simply for “driving while white”; they hadn’t been subjected to the humiliation of being questioned as law-abiding adults, purely based on their visual identity; they didn’t have to justify their presence and their choices to strangers and be afraid for their lives if they resisted.
Karl had never broken the law. He’d worked as hard as anybody else, doing all the things that bright young people were supposed to do in America. So why, he thought, can’t I just be left alone?
Karl grew up with four older siblings in Deanwood, a primarily Black neighbourhood in the northeastern corner of Washington, DC, with a white German father and a Black mother. When he left Washington, DC, at eighteen for college, he had a scholarship to study at North Carolina A&T State University, which graduates the largest numbers of Black engineers in the US. It was where Karl learned to address problems with technical solutions, rather than social ones. He taught himself to emphasize his academic credentials and underplay his background so he would be taken more seriously amongst peers.
After working in Newport, Karl went into academia, at the University of North Carolina, Wilmington. In particular, he was interested in teaching computers to identify faces even better than humans do. His goal seemed simple: first, unpick how humans see faces, and then teach computers how to do it more efficiently.
When he started out back in the ’80s and ’90s, Karl was developing AI technology to help the US Navy’s submarine fleet navigate autonomously. At the time, computer vision was a slow-moving field, in which machines were merely taught to recognize objects rather than people’s identities. The technology was nascent—and pretty terrible. The algorithms he designed were trying to get the machine to say: that’s a bottle, these are glasses, this is a table, these are humans. Each year, they made incremental, single-digit improvements in precision.
Then, a new type of AI known as deep learning emerged—the same discipline that allowed miscreants to generate sexually deviant deepfakes of Helen Mort and Noelle Martin, and the model that underpins ChatGPT. The cutting-edge technology was helped along by an embarrassment of data riches—in this case, millions of photos uploaded to the web that could be used to train new image recognition algorithms.
Deep learning catapulted the small gains Karl was seeing into real progress. All of a sudden, what used to be a 1 percent improvement was now 10 percent each year. It meant software could now be used not just to classify objects but to recognize unique faces.
When Karl first started working on the problem of facial recognition, it wasn’t supposed to be used live on protesters or pedestrians or ordinary people. It was supposed to be a photo analysis tool. From its inception in the ’90s, researchers knew there were biases and inaccuracies in how the algorithms worked. But they hadn’t quite figured out why.
The biometrics community viewed the problems as academic—an interesting computer-vision challenge affecting a prototype still in its infancy. They broadly agreed that the technology wasn’t ready for prime-time use, and they had no plans to profit from it.
As the technology steadily improved, Karl began to develop experimental AI analytics models to spot physical signs of illnesses like cardiovascular disease, Alzheimer’s, or Parkinson’s from a person’s face. For instance, a common symptom of Parkinson’s is frozen or stiff facial expressions, brought on by changes in the face’s muscles. AI technology could be used to analyse these micro muscular changes and detect the onset of disease early. He told me he imagined inventing a mirror that you could look at each morning that would tell you (or notify a trusted person) if you were developing symptoms of degenerative neurological disease. He founded a for-profit company, Lapetus Solutions, which predicted life expectancy through facial analytics, for the insurance market.
His systems were used by law enforcement to identify trafficked children and notorious criminal gangsters such as Whitey Bulger. He even looked into identifying faces of those who had changed genders, by testing his systems on videos of transsexual people undergoing hormonal transitions, an extremely controversial use of the technology. He became fixated on the mysteries locked up in the human face, regardless of any harms or negative consequences.
In the US, it was 9/11 that, quite literally overnight, ramped up the administration’s urgent need for surveillance technologies like face recognition, supercharging investment in and development of these systems. The issue was no longer merely academic, and within a few years, the US government had built vast databases containing the faces and other biometric data of millions of Iraqis, Afghans, and US tourists from around the world. They invested heavily in commercializing biometric research like Karl’s; he received military funding to improve facial recognition algorithms, working on systems to recognize obscured and masked faces, young faces, and faces as they aged. American domestic law enforcement adapted counterterrorism technology, including facial recognition, to police street crime, gang violence, and even civil rights protests.
It became harder for Karl to ignore what AI facial analytics was now being developed for. Yet, during those years, he resisted critique of the social impacts of the powerful technology he was helping create. He rarely sat on ethics or standards boards at his university, because he thought they were bureaucratic and time consuming. He described critics of facial recognition as “social justice warriors” who didn’t have practical experience of building this technology themselves. As far as he was concerned, he was creating tools to help save children and find terrorists, and everything else was just noise.
But it wasn’t that straightforward. Technology companies, both large and small, had access to far more face data and had a commercial imperative to push forward facial recognition. Corporate giants such as Meta and Chinese-owned TikTok, and start-ups like New York–based Clearview AI and Russia’s NTech Labs, own even larger databases of faces than many governments do—and certainly more than researchers like Karl do. And they’re all driven by the same incentive: making money.
These private actors soon uprooted systems from academic institutions like Karl’s and started selling immature facial recognition solutions to law enforcement, intelligence agencies, governments, and private entities around the world. In January 2020, the New York Times published a story about how Clearview AI had taken billions of photos from the web, including sites like LinkedIn and Instagram, to build powerful facial recognition capabilities bought by several police forces around the world.
The technology was being unleashed from Argentina to Alabama with a life of its own, blowing wild like gleeful dandelion seeds taking root at will. In Uganda, Hong Kong, and India, it has been used to stifle political opposition and civil protest. In the US, it was used to track Black Lives Matter protests and Capitol rioters during the uprising in January 2021, and in London to monitor revellers at the annual Afro-Caribbean carnival in Notting Hill.
And it's not just a law enforcement tool: facial recognition is being used to catch pickpockets and petty thieves. It is deployed at the famous Gordon's Wine Bar in London, scanning for known troublemakers. It's even been used to identify dead Russian soldiers in Ukraine. The question of whether it is ready for prime-time use has taken on urgency as it impacts the lives of billions around the world.
Karl knew the technology was not ready for widespread rollout in this way. Indeed, in 2018, Joy Buolamwini, Timnit Gebru, and Deborah Raji—three Black female researchers at Microsoft—had published a study, alongside collaborators, comparing the accuracy of face recognition systems built by IBM, Face++, and Microsoft. They found the error rates for light-skinned men hovered at less than 1 percent, while that figure touched 35 percent for darker-skinned women. Karl knew that New Jersey resident Nijer Parks spent ten days in jail in 2019 and paid several thousand dollars to defend himself against accusations of shoplifting and assault of a police officer in Woodbridge, New Jersey.
The thirty-three-year-old Black man had been misidentified by a facial recognition system used by the Woodbridge police. The case was dismissed a year later for lack of evidence, and Parks later sued the police for violation of his civil rights.
A year after that, Robert Julian-Borchak Williams, a Detroit resident and father of two, was arrested for a shoplifting crime he did not commit, due to another faulty facial recognition match. The arrest took place in his front garden, in front of his family.
Facial recognition technology also led to the incorrect identification of American-born Amara Majeed as a terrorist involved in Sri Lanka’s Easter Day bombings in 2019. Majeed, a college student at the time, said the misidentification caused her and her family humiliation and pain after her relatives in Sri Lanka saw her face, unexpectedly, amongst a line-up of the accused terrorists on the evening news.
As his worlds started to collide, Karl was forced to reckon with the implications of AI-enabled surveillance—and to question his own role in it, acknowledging it could curtail the freedoms of individuals and communities going about their normal lives. “I think I used to believe that I create technology,” he told me, “and other smart people deal with policy issues. Now I have to ponder and think much deeper about what it is that I’m doing.”
And what he had thought of as technical glitches, such as algorithms working much better on Caucasian and male faces while struggling to correctly identify darker skin tones and female faces, he came to see as much more than that.
“It’s a complicated feeling. As an engineer, as a scientist, I want to build technology to do good,” he told me. “But as a human being and as a Black man, I know people are going to use technology inappropriately. I know my technology might be used against me in some manner or fashion.”
In my decade of covering the technology industry, Karl was one of the only computer scientists to ever express their moral doubts out loud to me. Through him, I glimpsed the fraught relationship that engineers can have with their own creations and the ethical ambiguities they grapple with when their personal and professional instincts collide.
He was also one of the few technologists who comprehended the implicit threats of facial recognition, particularly in policing, in a visceral way.
“The problem that we have is not the algorithms but the humans,” he insisted. When you hear about facial recognition in law enforcement going terribly wrong, it’s because of human errors, he said, referring to the over-policing of African American males and other minorities and the use of unprovoked violence by police officers against Black people like Philando Castile, George Floyd, and Breonna Taylor.
He knew the technology was rife with false positives and that humans suffered from confirmation bias. So if a police officer believed someone to be guilty of a crime and the AI system confirmed it, they were likely to target innocents. “And if that person is Black, who cares?” he said.
He admitted to worrying that the inevitable false matches would result in unnecessary gun violence. He was afraid that these problems would compound the social malaise of racial or other types of profiling. Together, humans and AI could end up creating a policing system far more malignant than the one citizens have today.
“It’s the same problem that came out of the Jim Crow era of the ’60s; it was supposed to be separate but equal, which it never was; it was just separate . . . fundamentally, people don’t treat everybody the same. People make laws, and people use algorithms. At the end of the day, the computer doesn’t care.”
Excerpted from Code Dependent: Living in the Shadow of AI by Madhumita Murgia. Published by Henry Holt and Company. Copyright © 2024 by Madhumita Murgia. All rights reserved.
#When Facial Recognition Helps Police Target Black Faces#AI#Racial Profiling#poor facial recognition software training#intentional racial training#2024 Jim Crow#policing#racism in america#electronic mistakes#policing with privilege#end qualified immunity#Fourth Amendment#american racism#systemic racism in policing
2 notes
·
View notes
Link
over the ensuing decades, judges granted more and more powers to the police to stop and search vehicles. In particular, they were given the authority to do so on the mere pretext of suspecting criminal activity – in what is now known as a pretextual traffic stop. But what constitutes a “reasonable” pretext is still a legal gray area. The fourth amendment is supposed to protect us against searches and seizures that are “unreasonable”. The problem is that when fourth amendment cases are brought against police, courts and juries routinely defer to the officer’s testimony.
This judicial tilt in favor of discretionary authority inevitably led to abridgments of civil liberties, and worse.
#tyre nichols#us policing#us news#article#the guardian#traffic stop#fourth amendment#police violence#police brutality#police abolition#due process
4 notes
·
View notes
Text
Using the Threads app gives it access to all of your data, including sensitive medical information, which is a violation of HIPAA and your fourth amendment right.
1 note
·
View note
Text
"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
These words used to mean something, and that meaning used to matter.
hell world!!!! hell world!!!! hell world!!!!
#us constitution#fourth amendment#reproductive rights#im so fucking sick of this because its not like they actually care about the lives of children
38K notes
·
View notes
Text
City is using these cameras for crime prevention.
0 notes
Text
The recent reissuance of Department of Defense (DoD) Directive 5240.01 approved at the highest levels of the Pentagon and signed into effect by the Secretary of Defense, represents a significant challenge to the core Constitutional protections that we hold dear. Here’s how it threatens these freedoms:
Violation of the Posse Comitatus Act: This Act limits the powers of the federal government in using military personnel for domestic law enforcement. The new DoD directive, by permitting the use of lethal force through military assistance in civilian law enforcement, effectively overrides these limitations.
Erosion of the First Amendment: Natural health advocates, and others exercising their First Amendment rights, such as questioning the government's response to COVID-19 or the integrity of elections, are now being labeled as potential domestic extremists. This directive expands those classifications into lethal force interventions, potentially silencing voices under the guise of national security.
Fourth Amendment Infringement: This directive also allows intelligence sharing between military and law enforcement under emergency conditions, raising concerns about the right to privacy and unlawful surveillance.
Due Process Violations (Fifth Amendment): The possibility of military use of lethal force in domestic scenarios introduces concerns about bypassing due process protections before potentially life-ending decisions are made.
Were it not for being personally targeted by this very administration during the COVID-19 era, I might not even be aware or feel compelled to report on a topic that seems out of range for natural health advocacy or health freedom. Yet, this strikes at the heart of what it means to be both healthy and free: for the government to have the authority to use lethal force in emergency situations without adequate transparency or debate is something that should not go unnoticed.
As of today, I can find no alternative or mainstream media coverage of this directive, nor any official government announcement about it.
Why is that?
We need to ask these questions. The timing of this change, right before the election and while the South is under incredible stress and pain from these storms (which many believe may have been artificially energized and directed), makes this stealth move by the DoD even more concerning.
Given the extenuating circumstances, I feel it is my duty to warn you all. Please take the time to read the full article and share it with others.
https://mail.google.com/mail/u/0/#inbox/FMfcgzQXJZrFqmBSmHwqcNBCKCtRcVKk
Let’s do our best to stay strong, stay informed, and support one another. Remember, self-care and connection with loved ones is vital in these trying times.
Together, we will get through this, stronger and more resilient than ever.
Warm regards, Sayer Ji Founder, GreenMedInfo.com
0 notes
Text
#cell phones#smart phone#customs agents#warrant to search phone#fourth amendment#immigrants#united states
0 notes
Text
FCC’s New Notice of Inquiry – Is This Big Brother’s Origin Story?
The FCC's recent Notice of Proposed Rulemaking and Notice of Inquiry was released on August 8, 2024. While the proposed Rule is, deservedly, getting the most press, it's important to pay attention to the Notice of Inquiry. The part that concerns me is the FCC's interest in the "development and availability of technologies on either the device or network level that can: 1) detect incoming…
#AI#Artificial Intelligence#fcc#Federal Communications Commission#Fourth Amendment#fraud#Google#notice of proposed rulemaking#pre-recorded calls#privacy#Robocalls#security#voice call content
0 notes
Text
The Fourth Amendment still applies at the border, despite the feds' insistence that it doesn't.
For years, courts have ruled that the government has the right to conduct routine, warrantless searches for contraband at the border. Customs and Border Protection (CBP) has taken advantage of that loophole in the Fourth Amendment's protection against unreasonable searches and seizures to force travelers to hand over data from their phones and laptops.
But on Wednesday, Judge Nina Morrison in the Eastern District of New York ruled that cellphone searches are a "nonroutine" search, more akin to a strip search than scanning a suitcase or passing a traveler through a metal detector.
Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared.
0 notes
Text
A Missouri Police Officer Shot a Blind and Deaf Dog. Now He's Being Sued.
A man has filed a lawsuit against the town of Sturgeon, Missouri, a little more than a week after a police officer shot and killed his small, blind, and deaf dog. In a federal lawsuit filed in the U.S. District Court for the Western District of Missouri, Nicholas Hunter alleges that Officer Myron Woodson and […]
See full article at https://petn.ws/RiD71 #DogNews #Dogs, #FourthAmendment, #Missouri, #Pets, #Puppycide
0 notes
Text
A recent vote in the House of Representatives gave civil liberties advocates cause for hope: the "Fourth Amendment Is Not For Sale Act" passed the Republican-controlled chamber of Congress.
In the democracy ranking published by The Economist, the US has for years been listed as a flawed democracy. The reasons the economically liberal magazine gives for this downgrade may be questionable, but the diagnosis itself is hard to dispute: a surveillance state can hardly pass as flawless by democratic standards. Yet on April 17 of this year, against expectations and over the opposition of the Biden administration, the bill for the Fourth Amendment Is Not For Sale Act passed the House of Representatives.
An initiative launched back in 2021 to defend the Fourth Amendment to the Constitution of the United States had thereby won a stage victory. That amendment guarantees protection against arbitrary search, seizure, and arrest: such measures may be carried out only on the basis of court or magistrate warrants supported by probable cause.
If only that were true. In 2013, Edward Snowden exposed what is probably the largest and most successful public-private partnership of all time: a machinery of mass surveillance in which the gears of intelligence agencies and IT corporations mesh to great effect. Not all of the private-sector actors were enthusiastic participants, however; Yahoo, then still a company of some significance, is even documented to have fiercely resisted the demand to systematically spy on its own customers.
Such operational disruptions have likely carried little weight in recent years, because since Snowden's revelations much has improved: for the watchers, not for the citizens of the United States. The US government has long since shifted to buying datasets wholesale from data brokers rather than extracting them from companies with secret orders. And once again the market proves its superiority, in this case at circumventing the Fourth Amendment.
Location data, credit card information, health data, indications of political views, and more have been acquired in bulk by government agencies; police departments, the IRS, various military organizations, the FBI, and the NSA were among the buyers. The calculation was that outsourcing surveillance put the state in the clear, since the Fourth Amendment only prohibits arbitrary searches of citizens by the state, not by private companies that then act as suppliers to the state.
The Fourth Amendment Is Not For Sale Act is meant to put a stop to this practice, which openly mocks any democratic understanding. Concretely, it amounts to the following restrictions:
– Government agencies may obtain data held by data brokers only with a court order, as existing legislation already requires for telephone companies and internet service providers.
– Police and intelligence agencies are barred from purchasing the data of persons in the US and of US citizens abroad if that data originates from a user account or a personal device, or was obtained through deception, hacking, breach of contract, or violations of privacy notices or terms of service.
– Clearview AI, a company that has scraped billions of portrait photos from the web and currently does facial recognition business with numerous law enforcement agencies in the US (and beyond), is explicitly excluded as a data supplier for government agencies.
– Existing privacy laws are extended to companies that operate cable networks and mobile communications infrastructure.
– Loopholes are closed that allow intelligence agencies to buy or otherwise acquire metadata on American citizens' cross-border communications, and likewise data generated by visits to foreign websites.
Service providers and other third parties lose the immunity, previously guaranteed by the Justice Department, for assisting surveillance that is not required or permitted by law. Taken together, this adds up to quite a substantial package of privacy protections, assembled by politicians from the Democratic Party as well as Republicans. Among the latter, the libertarian senator from Kentucky, Rand Paul, is probably the most prominent figure; among the Democratic initiators, Ron Wyden, senator from Oregon and a longtime fighter against state surveillance, deserves particular mention.
Since 2021, they and the bill's other supporters have met stubborn resistance from politicians, law enforcement agencies, and intelligence services. The Justice Department let it be known that the Fourth Amendment Is Not For Sale Act would block the purchase of personal location data and thereby limit the ability to find missing children, hunt fugitive convicts, or investigate organized crime. The National Sheriffs' Association summarily filed the bill under "empowering the drug cartels."
And of course, in these times of geopolitical front-building, the objection could not go unmade that it would be a crying injustice if the American state were cut off from the domestic data trade while Russia and China could still stock up at data brokers for a fee. Evidently even this argument failed to deter the majority in the House of Representatives: on April 17, 219 members voted for the Fourth Amendment Is Not For Sale Act, 199 against. That, however, merely sets the stage for the Senate to decide on the bill.
A nail-biter lies ahead. In that chamber of Congress the Democrats hold a slim majority, which is anything but a guarantee of democratic conduct: on April 20, 2024, the Senate approved a two-year extension of Section 702 of the Foreign Intelligence Surveillance Act (FISA), a mainstay of US mass surveillance.
To put it positively: the pending decision on the Fourth Amendment Is Not For Sale Act can be awaited with bated breath.
0 notes