# POLICE INTELLIGENCE AND HUMAN RIGHTS: BALANCING SECURITY AND PRIVACY CONCERNS
## 1.1 Introduction

The role of police intelligence in maintaining national security, preventing crime, and ensuring public safety is increasingly essential in today's complex and interconnected world. However, the use of intelligence gathering by law enforcement raises important human rights concerns, particularly in…
Blog Post: "Psycho-Pass" – A Reflection on Control, Justice, and Humanity
"Psycho-Pass" is a thought-provoking anime that explores issues of control, fairness, and what it means to be human. Set in a dystopian future, it depicts a society ruled by the Sibyl System, a pervasive AI that measures people's mental states in order to predict and prevent criminal activity. The heroine, Akane Tsunemori, navigates this complicated world as a new inspector in the Public Safety Bureau's Criminal Investigation Division.
## Addressing Issues in Japan and Globally

"Psycho-Pass" addresses important themes that are relevant both in Japan and abroad. In Japan, there is a cultural emphasis on social harmony and conformity, reflected in the Sibyl System's efforts to establish a crime-free society. However, the pursuit of harmony raises concerns about individual liberty and the costs of such control. The anime critiques the potential for technological overreach, echoing worries about expanding surveillance and data privacy in modern Japanese society.
Themes from "Psycho-Pass" hold similar significance on a global scale. Many nations struggle to strike a balance between security and civil liberties as technology develops. Preemptive justice, the idea that people can be judged and penalized based on their potential rather than their actual behavior, parallels current discussions about predictive policing and the ethical application of AI in law enforcement. The anime challenges viewers to reflect on the ethical ramifications of depending solely on technology to make decisions that could change someone's life.
## Personal Reflection
The themes of surveillance and autonomy in "Psycho-Pass" are especially relevant to my own life. In an era when data privacy is increasingly under threat, the anime's depiction of an all-seeing AI system emphasizes the necessity of safeguarding human liberties. It asks us to consider how much control we are willing to give up to technology in exchange for protection and convenience.
"Psycho-Pass"'s investigation of justice also aligns with my moral and fairness beliefs. Akane's battles with the moral conundrums the Sibyl System presents shed light on how difficult it is to uphold justice in a culture where inflexible laws rule the day. This reflects the difficulties that actual judicial systems encounter when attempting to strike a balance between law enforcement and people's rights and humanity.
## Connections to Assigned Readings
I have not yet seen the lecture video, but I can already draw some initial connections between "Psycho-Pass" and the topics covered in our readings. The anime's emphasis on the ramifications of cutting-edge surveillance technology fits into larger conversations about the ethical consequences of artificial intelligence and the digital age. Its in-depth exploration of justice and morality connects to philosophical investigations into the nature of law, ethics, and the human condition.
The anime's meticulous design and depiction of a bleak future are also relevant to LaMarre's study of compositing in animation. The combination of conventional animation and digital elements produces a visually engaging universe, strengthening the storytelling and drawing attention to the constructed nature of this complex society.
## Final Thoughts
"Psycho-Pass" is an engaging anime that explores both local and global issues, speaks to personal experiences, and links with larger topics in modern conversation. Its investigation into surveillance, justice, and the ethical use of technology yields useful insights and comments. As I continue to investigate these topics, this blog will act as a chronicle of my ideas, assisting in the construction of my final article.
# Artificial Intelligence: Exploring its Advantages and Disadvantages
In today's digital age, the buzz around Artificial Intelligence (AI) is palpable. From automating tasks to enhancing decision-making processes, AI has become a cornerstone of innovation across industries. However, with its promises come a myriad of challenges and concerns. In this blog, we'll delve into the advantages and disadvantages of AI, shedding light on its transformative potential and the accompanying pitfalls.
## Advantages of Artificial Intelligence

### Efficiency and Automation
AI excels in streamlining processes and automating repetitive tasks. From manufacturing to customer service, AI-powered systems can handle mundane tasks with precision and speed, freeing up human resources for more strategic endeavors. This efficiency boost translates into cost savings and enhanced productivity for businesses.
### Data Analysis and Insights
With the exponential growth of data, AI algorithms play a pivotal role in extracting valuable insights from vast datasets. Whether it's predicting consumer behavior or optimizing supply chains, AI-driven analytics empower organizations to make data-driven decisions swiftly and accurately.
### Personalization and Customer Experience
AI enables personalized experiences across various touchpoints, from recommendation engines to virtual assistants. By analyzing user behavior and preferences, AI algorithms can tailor product recommendations, content, and services, fostering deeper engagement and satisfaction among customers.
### Innovation and Research
AI fuels innovation by augmenting human capabilities in research and development. From drug discovery to space exploration, AI algorithms accelerate the pace of innovation by identifying patterns, simulating scenarios, and uncovering novel solutions to complex problems.
### Improved Healthcare
In the healthcare sector, AI holds the promise of revolutionizing diagnostics, treatment planning, and patient care. AI-powered medical imaging, predictive analytics, and remote monitoring systems enhance diagnostic accuracy, optimize treatment protocols, and personalize healthcare delivery.
## Disadvantages of Artificial Intelligence

### Job Displacement and Economic Disruption
The automation potential of AI raises concerns about job displacement across various sectors. Routine tasks susceptible to automation may lead to unemployment or the need for upskilling and reskilling among the workforce. Furthermore, AI-driven disruptions could exacerbate socioeconomic inequalities if not managed effectively.
### Bias and Ethical Concerns
AI algorithms are prone to biases inherent in the data they are trained on, leading to discriminatory outcomes. From hiring algorithms to predictive policing systems, biased AI can perpetuate societal injustices and undermine trust in automated decision-making processes. Addressing these ethical concerns requires careful algorithm design and robust oversight mechanisms.
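To make this concern concrete, a common audit technique is to compare error rates across demographic groups. Below is a minimal sketch of such a check in Python; the function, the toy data, and the group labels are invented for illustration and are not drawn from any real system.

```python
from collections import defaultdict

def false_positive_rates(y_true, y_pred, groups):
    """Compute the false-positive rate separately for each demographic group.

    y_true: actual outcomes (1 = positive, 0 = negative)
    y_pred: model predictions (1 = flagged, 0 = not flagged)
    groups: demographic label for each record
    """
    fp = defaultdict(int)   # negatives incorrectly flagged, per group
    neg = defaultdict(int)  # total negatives, per group
    for actual, predicted, group in zip(y_true, y_pred, groups):
        if actual == 0:
            neg[group] += 1
            if predicted == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Toy data: a large gap between groups is a warning sign of disparate impact.
y_true = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(false_positive_rates(y_true, y_pred, groups))  # {'A': 0.5, 'B': 0.25}
```

An audit like this only detects a disparity; deciding what gap is acceptable, and what to do about it, is exactly the kind of oversight question raised above.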
### Privacy and Security Risks
The proliferation of AI-powered systems raises concerns about data privacy and security. From unauthorized access to personal information to malicious use of AI for cyberattacks, safeguarding data integrity and privacy becomes paramount. Striking a balance between innovation and privacy rights necessitates robust data protection regulations and cybersecurity measures.
### Lack of Transparency and Accountability
AI algorithms often operate as black boxes, making it challenging to interpret their decision-making processes. Lack of transparency and accountability in AI systems can erode trust and raise concerns about fairness and accountability, especially in high-stakes domains like healthcare and criminal justice.
### Dependency and Overreliance
Overreliance on AI systems without adequate human oversight can lead to catastrophic failures and unintended consequences. From autonomous vehicles to autonomous weapons systems, the risks associated with AI malfunction or misuse underscore the importance of human supervision and intervention.
Despite the challenges, the transformative potential of AI is undeniable. As organizations and policymakers navigate the complexities of AI adoption, a balanced approach that harnesses its advantages while mitigating its risks is imperative.
In the realm of education, institutions like CIMAGE Group of Institutions in Patna, Bihar, are at the forefront of preparing the next generation of AI professionals. By offering AI and Machine Learning courses as add-ons to main courses like BCA and BBA, CIMAGE equips students with the knowledge and skills needed to thrive in an AI-driven economy. With a track record of the highest campus placement rates in Bihar, CIMAGE exemplifies the pivotal role of education in shaping the future of AI responsibly and ethically.
In conclusion, while AI holds immense potential to transform industries and improve lives, navigating its complexities requires a thoughtful approach that addresses its advantages and disadvantages alike. By fostering innovation, promoting transparency, and upholding ethical principles, we can harness the power of AI for the betterment of society while mitigating its risks.
"Surveillance Drones: Navigating the Ethical Skyline"
## Introduction
Recently, governments and military organizations have been using surveillance drones widely, ushering in a technological revolution in the skies. Although unmanned aerial vehicles (UAVs) have proven useful in a variety of contexts, including border security and reconnaissance operations, their growing prevalence prompts important concerns regarding privacy, ethics, and the possibility of abuse.
## Understanding Surveillance Drones

Surveillance drones, also known as unmanned aerial vehicles (UAVs) or unmanned aircraft systems (UAS), are autonomous or remotely piloted aircraft equipped with cameras and sensors. Their main objective is to obtain real-time visual and sensory data that presents an aerial view of the area of interest. Militaries and governments use these drones for various tasks, including border surveillance, reconnaissance, and disaster-relief support.
## Applications of Surveillance Drones

- Border Security: Governments use drones to monitor and guard borders, reducing the likelihood of unauthorized entry and boosting national security.
- Reconnaissance: Military organizations use surveillance drones to gather intelligence on enemy activities, improving situational awareness without endangering human life.
- Disaster Response: Drones are essential to disaster management because they offer a quick and effective way to find survivors, assess damage, and coordinate rescue operations.
- Law Enforcement: Police forces use surveillance drones to monitor crowds, investigate crime scenes, and conduct search and rescue missions.
- Environmental Monitoring: Drones are useful tools for environmental conservation because they can track deforestation, monitor wildlife, and evaluate the effects of climate change.
## Ethical Considerations

- Privacy Concerns: The widespread use of surveillance drones, which can capture images and video of subjects without their knowledge, raises concerns about personal privacy.
- Potential for Abuse: Surveillance drones carry a risk of misuse, such as the unauthorised monitoring of individuals, political rivals, or underprivileged groups.
- Data Security: Drone surveillance collects vast amounts of data, raising concerns about data security, storage, and the possibility of unauthorized access.
- Impact on Civil Liberties: Opponents contend that the extensive use of surveillance drones could violate fundamental civil liberties and lead to a society in which constant monitoring is accepted as the norm.
## Conclusion

Surveillance drones surely have a lot to offer in the fields of disaster relief, national security, and beyond. But using them also necessitates carefully weighing the ethical ramifications, including privacy issues, potential abuse, and the need for strict regulations. In our constantly changing world, responsible and ethical deployment of surveillance drones requires striking a balance between utilizing the technology to its full potential and protecting individual rights.
Madman Technologies offers a growing government tactical product portfolio, providing both design consulting and services; they can also arrange competitive pricing and product availability.

For any further queries and details, email us at madmantechnologies{dot}ai{at}gmail{dot}com

Phone: 9625468776
# Daniel Reitberg Explores the Transformative Use of AI by Police

## AI in Policing: Pioneering the Future of Law Enforcement
In an era defined by technological innovation, the integration of artificial intelligence (AI) is reshaping the landscape of law enforcement. Daniel Reitberg, an esteemed expert in the realm of AI, delves into how police departments are harnessing the power of AI to revolutionize crime prevention, enhance investigative processes, and foster safer communities.
## Predictive Policing: Revolutionizing Crime Prevention
Daniel Reitberg underscores the profound impact of predictive policing—an AI-driven approach that leverages data analysis to forecast where crimes are likely to occur. By processing historical crime data, demographic information, and environmental factors, AI algorithms enable law enforcement to allocate resources more efficiently, deter criminal activities, and proactively address emerging threats.
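To illustrate the kind of computation involved, here is a toy sketch of a hotspot-style forecast, not Reitberg's or any vendor's actual model: each area is scored by an exponentially decayed count of its past incidents, then areas are ranked. The area names and decay factor are assumptions for the example.

```python
# Naive hotspot forecasting: score each area by a recency-weighted count of
# past incidents, then rank. Real systems add demographic and environmental
# features, which is where the bias concerns discussed below come in.
def hotspot_scores(incidents, current_week, decay=0.9):
    """incidents: list of (area_id, week_number) pairs from historical data."""
    scores = {}
    for area, week in incidents:
        age = current_week - week          # older incidents count for less
        scores[area] = scores.get(area, 0.0) + decay ** age
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

incidents = [("cell_3", 10), ("cell_3", 11), ("cell_7", 4), ("cell_1", 12)]
print(hotspot_scores(incidents, current_week=12))
# cell_3 ranks first: two recent incidents outweigh one old one in cell_7.
```

Note the feedback risk inherent even in this toy version: areas that were patrolled more heavily in the past generate more recorded incidents, and therefore higher future scores.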
## AI-Enhanced Investigations: Accelerating Justice
AI's role in criminal investigations is transformative. Daniel Reitberg discusses how AI-driven analytics expedite evidence collection and analysis. By analyzing vast amounts of data from multiple sources, including social media and surveillance footage, AI assists investigators in identifying connections, patterns, and leads that might otherwise go unnoticed, ultimately expediting case resolutions.
## Smart Surveillance: Enhancing Public Safety
The integration of AI into surveillance systems empowers law enforcement with real-time insights. Daniel Reitberg emphasizes how AI-powered video analytics can automatically detect anomalies and suspicious behavior. This capability allows officers to respond swiftly to potential threats, enhancing public safety in crowded areas, critical infrastructures, and public events.
## Ethical Considerations: Balancing Innovation and Privacy
Daniel Reitberg emphasizes that as AI becomes an integral part of law enforcement, ethical considerations remain paramount. Striking a balance between technological innovation and individual privacy rights is crucial. Responsible AI deployment ensures that communities benefit from advanced policing tools while maintaining the public's trust.
## Human-AI Synergy: Strengthening Police Operations
AI's role in policing isn't about replacing human officers but augmenting their capabilities. Daniel Reitberg highlights the symbiotic relationship between officers and AI systems. While AI processes vast amounts of data and identifies patterns, officers bring expertise, judgment, and empathy to complex decision-making and community interactions.
## Community Policing Reinvented: Building Trust
AI is also transforming community policing. Daniel Reitberg discusses how data-driven insights help police departments tailor their strategies to specific neighborhoods' needs, fostering trust and collaboration. By addressing concerns more effectively and allocating resources where they're most needed, AI supports police-community partnerships that lead to safer environments.
## Future Horizons: Daniel Reitberg's Vision
Daniel Reitberg envisions a future where AI continues to empower police departments. From real-time crime monitoring to rapid incident response, AI's potential is boundless. As AI algorithms evolve and adapt, they will become an integral tool in addressing emerging challenges, enhancing officer safety, and shaping a safer society.
## Conclusion: AI's Unveiled Potential in Policing
Through Daniel Reitberg's exploration of AI's role in law enforcement, a new narrative emerges—one of collaboration, innovation, and community-focused policing. As AI's influence in police operations grows, departments are better equipped to tackle modern challenges, foster trust, and uphold public safety. The path ahead promises a harmonious blend of human expertise and technological prowess, paving the way for a safer and more secure society.
# Week 7 Case Study - Pre-readings
This week's case study concerns privacy: specifically, should the government be allowed to collect data on individuals to be used in the interest of public safety?
I’ve compiled some notes on the sources we were given:
## 2019 - Facial Recognition to Replace Opal Cards
- Facial recognition could be used to replace Opal cards
- Digital rights groups say it would pose a risk to privacy
- The transport minister said facial recognition would provide convenience for commuters, envisioning something similar to Amazon's "Just Walk Out" technology: "All about making the journey easier and faster for people"
- The opposition has major concerns about the technology being rolled out: the data collected would be of large commercial value to its owner. "NSW taxpayers shouldn't be used by their government to make money, and government shouldn't be trusted with this technology"
- Tim Norton: "worrying to see such flippancy from the gov about potential rollout of technology like this across public services like transport -> these decisions shouldn't be taken lightly, and require extensive public consultation to ensure citizens' rights aren't impacted"
- People must be able to trust that governments are taking appropriate action to protect the privacy that people expect when in public
- Justin Warren, board member of Electronic Frontiers Australia: how would an opt-out system work if everyone is scanned? "needs to be public debate about plans to roll out this technology, need to stop taking the framing from government that this is something that needs to happen -> ask why?"
## 2019 - Australian Views on Surveillance
- "Australians tend to accept government surveillance, particularly if they think it necessary or trust the government"
- If surveillance continues to increase -> general public opinion might reach a turning point and people may start adopting measures to 'hide' themselves
- Government surveillance is justified as necessary to protect us from criminal or terrorist attacks
- Intelligence agencies and federal and state police can request access to telephone and internet records. These can reveal info about location and recent contacts
- Proposed legislation would allow the government to share photos and other identifying info between government agencies and private organisations for law enforcement, road safety, and national security purposes
- The recently passed Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 allows gov agencies greater access to encrypted messages, e.g. from WhatsApp
- Study with 100 Aus residents about their views on gov surveillance:
    - 52% said they accept gov surveillance
    - Average response was 3.1 (acceptance of surveillance)
    - Two main factors influenced acceptance:
        - Is surveillance needed?
            - The most influential factor
            - Practical implications as lawmakers capitalise on people's responses to events to justify new legislation
            - For example: "The need for the powers in this bill has become more urgent in the light of the recent fatal terrorist attack in Melbourne and the subsequent disruption of alleged planning for a mass casualty attack by three individuals last month – also, sadly, in Melbourne. Individuals in both of these cases are known to have used encrypted communications."
        - Do I trust the gov?
            - Overall trust in the gov also determined acceptance -> trust in the Aus gov is generally quite low
            - Acceptance might be influenced more by a general view of the gov than by views of specific policies and practices
            - Large numbers of people have opted out of the My Health Record
            - No link between people's level of trust in the way the gov manages data and their acceptance of surveillance
- AI can analyse CCTV footage without human input -> when face recognition is used to identify suspects, there is a risk of matching people with similar, closely matching profiles. This results in a high error rate, posing risks for innocent people (see the sketch after this list)
- Threat of repurposing - when info is collected for one purpose and used for another
- Concerns that insurance companies could access and use info from the My Health Record
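The error-rate point is worth quantifying, as promised above. When face recognition is run one-to-many against a large population, almost everyone scanned is innocent, so even a small false-match rate produces far more false alarms than genuine hits. A back-of-envelope sketch, with purely illustrative numbers:

```python
# Base-rate arithmetic for one-to-many face matching (illustrative numbers).
population = 1_000_000    # faces scanned against a watchlist
false_match_rate = 0.001  # i.e. 99.9% accurate on people NOT on the list
true_suspects = 10        # actual watchlist members in the crowd

false_matches = (population - true_suspects) * false_match_rate
print(round(false_matches))  # ~1000 innocent people flagged vs ~10 real hits
```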
## 2018 - Facial Recognition Used by Aus Authorities
- NSW police and crime agencies are preparing to use a new facial recognition system to match pictures of people on CCTV with their driver's licence photos to detect criminals and identity theft
- Federal and state governments have access to data and photos from passports, driver licences, and visas for the facial recognition system
- People do not have the option to opt out of their details being included in the facial recognition system
- The NSW gov has allocated $52.6 million over 4 years to support this tool
- Two parts (a sketch of the two matching modes follows this list):
    - Face Verification Service: one-to-one image-based match of a person's photo against a government record such as a passport - already operational
    - Face Identification Service: one-to-many image match of an unknown person, such as a criminal suspect, against multiple government records to help establish their identity. Access to the FIS will be limited; it is expected to come online this year
- A Monash Uni professor said the system breaches privacy rights by allowing collection, storage and sharing of personal details from innocent people
- A gov spokesperson said laws allow these services to be used for "identity and community protection activities"
- Research indicates that ethnic minorities and women are misidentified at higher rates than the rest of the population
- Significant concerns about the reliability or otherwise of its algorithms and the biases that can be inherent
- "There are no proper definitions of how the data will be used under the current bill"
- Law enforcement authorities habitually push for greater access to private data and info to help them do their job
- The government has to balance the safety and welfare of citizens, the limitation on people's civil liberties, and the threat to life in the case of terrorist attack
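To make the one-to-one versus one-to-many distinction above concrete, here is a minimal sketch that assumes faces have already been reduced to embedding vectors by some upstream model. The vectors, record names, and threshold are invented for illustration; this is not the actual FVS/FIS implementation.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, enrolled, threshold=0.8):
    """One-to-one (FVS-style): does the probe match one specific record?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.8):
    """One-to-many (FIS-style): return every record similar to the probe.
    The failure mode noted above lives here: lookalikes over the threshold
    are all returned, and someone must decide which hit to act on."""
    return [name for name, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

database = {"passport_A": [0.9, 0.1, 0.3], "licence_B": [0.2, 0.8, 0.5]}
probe = [0.88, 0.15, 0.31]
print(verify(probe, database["passport_A"]))  # True
print(identify(probe, database))              # ['passport_A']
```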
## 2017 - Benefits of Surveillance
- Issue of mass surveillance:
    - Amount of data collected - bulk collection is the only way to handle the volumes of data
    - What data is collected - some places don't have clear distinctions of what data is to be collected
    - How data is collected
- Key points of intelligence officials' statements on the effectiveness of surveillance technology:
    - Difficult if not impossible to evaluate the effectiveness of surveillance programs
    - Because data is aggregated with other data to form a larger picture, it becomes hard to evaluate the effectiveness of surveillance tech
    - The purpose of intelligence is to inform policy makers and to improve their decision making, but it is hard to measure this impact
- Seven measures of effectiveness are drawn:
    - Thwarted attacks
    - Lives saved
    - Criminal organisations destroyed
    - Output
    - Context
    - Support
    - Informed policy-making
- However: counting successful cases seems to have merit with officials as a measure of the effectiveness of surveillance technology employed for tactical intelligence purposes, but not for strategic intelligence
## 2015 - Australian Metadata Retention Laws
- The following information must be retained by telecom service providers (modelled in the sketch after this list):
    - Incoming and outgoing telephone caller ID
    - Date, time and duration of a phone call
    - Location of the device from which the phone call was made
    - Unique ID assigned to each mobile phone involved in a particular phone call
    - Email address from which an email is sent
    - Time, date, and recipients of emails
    - Size of any attachment sent with emails and their file formats
    - Account details held by the ISP, such as whether the account is active or suspended
- "The content or substance of a communication is not considered to be metadata and will not be stored"
- ASIO, police, the Crime Commission, the ATO and ICAC are able to view stored metadata without a warrant, except for journalists' records (which require a warrant)
- The Act was supported by law enforcement and security agencies including the federal police and ASIO, who argued telecom data is critical to criminal investigations and needs to be made accessible through legislation
- The Act has been questioned for its effectiveness as a tool to combat crime, its increasing encroachment on privacy in Aus, and its consequences for journalism and journalistic practice
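For concreteness, the retained fields listed above could be modelled roughly as the records below. This is only a sketch of the kind of data the Act describes; the field names and types are my own assumptions, not any provider's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class RetainedCallRecord:
    """Metadata kept for one phone call (hypothetical schema).
    Note what is absent: the content of the call is not stored."""
    caller_id: str            # incoming/outgoing telephone caller ID
    callee_id: str
    started_at: datetime      # date and time of the call
    duration_seconds: int
    device_location: str      # location of the device when the call was made
    handset_unique_id: str    # unique ID of each phone involved in the call

@dataclass
class RetainedEmailRecord:
    """Metadata kept for one email (hypothetical schema)."""
    sender_address: str
    recipient_addresses: List[str]
    sent_at: datetime
    attachment_bytes: Optional[int]   # size of any attachment
    attachment_format: Optional[str]  # file format, not the content itself
```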
## 2013 - Opinion: Why We Need Government Surveillance
- Edward Snowden - leaked classified intelligence
    - Willing to give up his job, family, home, and relationships to stop the U.S. government from destroying privacy, internet freedom and basic liberties with its surveillance
- New revelations about gov surveillance programs -> why are the same policies being used across presidents?
- Government overreach
- 21st century war is different - it requires new ways of gathering info
- The move towards 'home-grown' terror will require collection of U.S. citizens' conversations with potential overseas persons of interest
- Constant armed struggle against terrorist threats has adjusted beliefs about what citizens expect government to do in order to protect society
- Enhanced intelligence activities are necessary, but abuse can occur easily
After compiling these notes I've also drafted some points to address the discussion question from both perspectives, summarising the main points raised in the articles above.
## Government agencies should collect and have access to your data for good purposes

- Necessary to sacrifice privacy for the greater good - if you're not a criminal then, in an ideal system, you shouldn't have to worry about being falsely accused
- Protects against terrorist/criminal attacks - lives saved
- Identify suspects/unknowns in the database before they are able to execute an attack -> prevention, attacks thwarted
- Can be used in biometric technology like facial recognition, which has applications for public transport -> replace physical cards and provide convenience for commuters
- Data can be used to help inform policy makers and increase the quality of their decision making
- The shift towards a 'cyber warfare' context has changed people's expectations of what role the government should play in protecting the welfare of all citizens -> collected intelligence might be necessary to thwart terrorist attacks
## Government agencies should not collect and have access to your personal data

- For use in public transport, etc.:
    - Does the benefit of convenience really outweigh the cost of lost privacy and the risk of data being stolen from the government?
- For use in the interest of public safety:
    - Are terrorist attacks so rampant that they warrant action of this scale?
    - Can we put complete faith in this technology to make life-implicating decisions for individuals?
    - Studies have shown that minority groups and women are more likely to be mismatched -> higher rates of error
    - Can policy catch up to the technology? -> currently there are no definitions in bills/acts which distinctly determine what data is allowed to be stored and collected -> grey area
    - Significant concerns about the reliability or otherwise of its algorithms and the biases that can be inherent
    - Risk of matching people with similar, closely matching profiles, which results in a high error rate, posing risks for innocent people
    - Threat of repurposing - when info is collected for one purpose and used for another
    - How is data collected and stored? Is it ethical? Are there risks of data being leaked, or of an insider attack?
- An overall increase in surveillance activities might lead public opinion to distrust the government and to ask why it is necessary to be monitored so heavily -> where are our human liberties to privacy?
- Public opinion of NSW is already that of a "nanny state" -> this could be the tipping point for complete rejection of the government
# Why technology puts human rights at risk
by Birgit Schippers
Movies such as 2001: A Space Odyssey, Blade Runner and Terminator brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don’t seem so far removed from reality.
Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capacity for independent reasoning and decision making. They work for us on the factory floor; they decide whether we can get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.
Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and private lives, including mundane everyday aspects. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life to the right to privacy, freedom of expression to social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?
AI and human rights
First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I’m not sure that the focus of our concern for human rights should really lie with rogue robots, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.
This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we move towards an AI arms race, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems in charge of life and death decisions, with limited or no human control.
AI also revolutionises the link between warfare and surveillance practices. Groups such as the International Committee for Robot Arms Control (ICRAC) recently expressed their opposition to Google’s participation in Project Maven, a military program that uses machine learning to analyse drone surveillance footage, which can be used for extrajudicial killings. ICRAC appealed to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company’s involvement in the project. Google recently announced that it will not be renewing its contract.
In 2013, the extent of surveillance practices was highlighted by the Edward Snowden revelations. These taught us much about the threat to the right to privacy and the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding Cambridge Analytica’s harvesting of personal data via the use of social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference into democratic elections that damage the right to freedom of expression.
Meanwhile, critical data analysts challenge discriminatory practices associated with what they call AI’s “white guy problem”. This is the concern that AI systems trained on existing data replicate existing racial and gender stereotypes that perpetuate discriminatory practices in areas such as policing, judicial decisions or employment.
AI can replicate and entrench stereotypes.
## Ambiguous bots
The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study on The Malicious Use of Artificial Intelligence. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI?
There are ongoing efforts to update existing human rights principles for this era. These include the UN Framing and Guiding Principles on Business and Human Rights, attempts to write a Magna Carta for the digital age and the Future of Life Institute’s Asilomar AI Principles, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.
These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech companies, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.
Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human well-being in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.
But in other areas this entanglement throws up worrying prospects. Computational technologies are used to watch and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.
And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What’s more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If unhinged from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.
Birgit Schippers is a Visiting Research Fellow at the Senator George J Mitchell Institute for Global Peace, Security and Justice at Queen's University Belfast.
This article was originally published on The Conversation.
# Safe AND Secure?
A few weeks ago, my class had a discussion regarding the state of surveillance and the future of privacy with regard to the internet and various technologies. Of all the emerging technologies on the bleeding edge, perhaps the most promising, and therefore the most concerning, is artificial-intelligence recognition software. With these tools, it would be easy for law enforcement to filter through petabytes of data, whether videos, photos, emails or phone calls, and single out one or two data points that fit a specified pattern. Put simply, tracking and monitoring people and their activities has become, and will continue to become, easier than ever before in human history.
But aren't the capabilities of these technologies a good thing? After all, do we not all want to live in a world where the police are able to track down people who break the law and monitor crime more effectively? Any rational person wouldn't think twice about answering yes to this question, but when we examine the implications it raises, cause for concern is surely warranted. For example, what measures are being put in place to ensure that this type of technology isn't being used for the wrong reasons or by malicious actors? In marginalized communities historically made up of black and brown individuals, excessive policing and harassment on the part of local law enforcement have been known issues for decades. One type of AI, pioneered by Motorola Solutions, utilizes a camera to recognize typical subjects and movement patterns native to its surveilled area. If, for instance, a person of color was walking in a predominantly white area, such a camera could potentially flag this as unusual activity and automatically dispatch a patrol unit to intercept them. Adding more opportunities for monitoring and analyzing activity for already over-policed groups would only exacerbate this issue. The potential for this tech to be misused isn't just limited to the scope of law enforcement, either. Tracking software could quite plausibly be used to find the private home address of a controversial politician or public figure, and oppositional constituents or extremist groups could subsequently use that information to harass them (or worse).
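To see why such a system would flag the scenario described above, consider a minimal sketch of frequency-based anomaly detection: whatever was rare in the training footage gets flagged, so "rare in this neighborhood" becomes indistinguishable from "suspicious." The activity categories and threshold here are invented for illustration; this is not Motorola's actual algorithm.

```python
from collections import Counter

def build_baseline(observations):
    """Learn how often each category of activity appeared in past footage."""
    counts = Counter(observations)
    total = len(observations)
    return {category: n / total for category, n in counts.items()}

def is_flagged(category, baseline, min_frequency=0.05):
    """Flag anything seen rarely (or never) during the baseline period."""
    return baseline.get(category, 0.0) < min_frequency

# Hypothetical training footage from one camera's neighborhood:
history = ["resident_walking"] * 90 + ["delivery_van"] * 8 + ["cyclist"] * 2
baseline = build_baseline(history)

print(is_flagged("resident_walking", baseline))       # False: common
print(is_flagged("cyclist", baseline))                # True: rare
print(is_flagged("unfamiliar_pedestrian", baseline))  # True: never seen
```

Nothing in this logic distinguishes a threat from a harmless newcomer; the flag encodes statistical unfamiliarity, which is exactly the disparate-impact worry raised above.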
This is all not to say that I think these issues can be resolved by doing away with technology-backed public security measures entirely; we still need infrastructure for preventing and combating misinformation campaigns, for example, which in recent years have plagued the scientific and political ecosystems of the United States and beyond. Facial recognition services could also prove vital in solving decades-old missing person cold cases and preventing real humanitarian crises such as sex trafficking.
Regardless of how any one person feels about modern surveillance technology and its effects on public privacy, what can definitely be said is that these tools are here and they are here to stay. What it comes down to now is finding the right balance between security and privacy, which will take years and likely decades before a consensus can be reached. For now, we must all remember to always challenge and question the powers that be, making sure they never overstep and obstruct our personal freedoms, while also giving them the room they need to do their job to keep us safe.
# Friday, April 23, 2021
A Global Tipping Point for Reining In Tech? (NYT) China fined the internet giant Alibaba a record $2.8 billion this month for anticompetitive practices, ordered an overhaul of its sister financial company and warned other technology firms to obey Beijing’s rules. Now the European Commission plans to unveil far-reaching regulations to limit technologies powered by artificial intelligence. And in the United States, President Biden has stacked his administration with trustbusters who have taken aim at Amazon, Facebook and Google. Around the world, governments are moving simultaneously to limit the power of tech companies with an urgency and breadth that no single industry had experienced before. Their motivation varies. In the United States and Europe, it is concern that tech companies are stifling competition, spreading misinformation and eroding privacy; in Russia and elsewhere, it is to silence protest movements and tighten political control; in China, it is some of both. While nations and tech firms have jockeyed for primacy for years, the latest actions have pushed the industry to a tipping point that could reshape how the global internet works and change the flows of digital data.
Businesses scramble for help as job openings go unfilled (AP) It looks like something to celebrate: small businesses posting “Help Wanted” signs as the economy edges toward normalcy. Instead, businesses are having trouble filling the jobs, which in turn hurts their ability to keep up with demand for their products or services. Owners say that some would-be workers are worried about catching COVID-19 or prefer to live off unemployment benefits that are significantly higher amid the pandemic. Child care is another issue—parents aren’t able to work when they need to tend to or home-school their children. For some people, a combination of factors go into their decision not to seek work. When Steve Klatt and Brandon Lapp set up interviews for their restaurant and food truck business, they’re lucky if one out of 10 or 15 applicants comes in. “The people who do show up, all assume their unemployment is running out,” says Klatt, whose business, Braised in the South, is located in Johns Island, South Carolina. Businesses of all sizes are struggling with hiring even with millions of Americans unemployed and as increasing numbers of people get vaccinated and look forward to a more normal life. A Census survey taken in late March shows that 6.3 million didn’t seek work because they had to care for a child, and 4.1 million said they feared contracting or spreading the virus.
How Free Should Free Speech Be? (NYT) Brendan Hunt, an avid Trump supporter from New York City, will be the defendant in the first federal trial—starting this week in Brooklyn—that will force jurors to dive deep into the national debate over how much the government should police violent rhetoric in the wake of the January 6 Capitol attack. Hunt wasn’t in Washington for the insurrection. But two days after the attack the 37-year-old posted an 88-second video online entitled: “KILL YOUR SENATORS.” According to the government’s complaint, Hunt says in the video: “we need to go back to the US Capitol” ahead of President-elect Biden’s inauguration and “slaughter” members of Congress. “If anyone has a gun, give me it,” Hunt says. ‘I’ll go there myself and shoot them and kill them.” The jury will have to decide whether the video and three other social media posts Hunt made crossed the line from free speech into illegal threats. The trial could be a bellwether of how authorities balance the pursuit of serious domestic threats with constitutional protections for political speech.
Argentina COVID-19 deaths near 60,000 in pandemic’s ‘worst moment’ (Reuters) Argentina is facing its “worst moment” of the COVID-19 pandemic, the country’s health minister said on Wednesday, as deaths from the virus neared 60,000 amid a sharp second wave that has forced the country to re-impose some lockdown measures. Carla Vizzotti warned that the South American country’s healthcare system was at risk, especially in the metropolitan area around capital Buenos Aires, which had forced the government to restrict movement and suspend indoor activities. “We are living through the worst moment of the pandemic now,” she told a daily briefing, adding the country was seeing an important rise in the circulation of new variants, with the virus surging in the capital and beyond. “It’s growing exponentially in most of the country.”
Putin warns against crossing Russia’s ‘red lines’ (CNBC) Russian President Vladimir Putin, in his annual State of the Nation speech, warned on Wednesday against provoking his country, promising a swift retaliation against anyone who crossed “red lines.” Moscow will respond “harshly,” “quickly” and “asymmetrically” to foreign provocations, Putin told an audience of Russia’s top officials and lawmakers, adding that he “hoped” no foreign actor would cross Russia’s “red lines,” according to a Reuters translation. Putin also touted the country’s planned investment in expanded military education, hypersonic weapons and intercontinental ballistic missiles—while insisting simultaneously that Russia wants peace and arms control agreements. The 68-year-old leader condemned what he described as the constant tendency of international actors to blame Russia for wrongdoing, saying it had become like a sport. The speech came against the backdrop of deteriorating tensions with the U.S. and EU, and follows the recent imposition of sanctions on Russia from the Biden administration over alleged cyberattacks, human rights violations and a Russian military buildup along the border with Ukraine.
US-backed Afghan peace meeting postponed as Taliban balk (AP) An upcoming international peace conference that was meant to move Afghanistan’s warring sides to a power-sharing deal and ensure an orderly U.S. exit from the country has been postponed, its sponsors announced Wednesday. They cited a lack of prospects for meaningful progress. The decision to delay the conference came several days after Taliban insurgents, who are key to peace efforts, dismissed the U.S.-promoted conference in Istanbul as a political spectacle serving American interests. As peace efforts stalled, Germany’s Defense Ministry suggested NATO military planners were contemplating a possible withdrawal of international troops from Afghanistan as early as July 4. That’s more than two months ahead of the planned Sept. 11 pullout date.
North Korean hackers (New Yorker) The North Korean government has produced some of the world’s most proficient hackers. At first glance, the situation is perverse, even comical—like Jamaica winning an Olympic gold in bobsledding—but the cyber threat from North Korea is real and growing. Like many countries, including the United States, North Korea has equipped its military with offensive and intelligence-gathering cyber weapons. In 2016, for instance, military coders from Pyongyang stole more than two hundred gigabytes of South Korean Army data, which included documents known as Operational Plan 5015—a detailed analysis of how a war with the country’s northern neighbor might proceed, and, notably, a plot to “decapitate” North Korea by assassinating Kim Jong Un. North Korea, moreover, is the only nation in the world whose government is known to conduct nakedly criminal hacking for monetary gain. Units of its military-intelligence division, the Reconnaissance General Bureau, are trained specifically for this purpose. In 2013, Kim Jong Un described the men who worked in the “brave R.G.B.” as his “warriors . . . for the construction of a strong and prosperous nation.”
Xi hits out at ‘the unilateralism of individual countries’ (Financial Times) Xi Jinping has called for a new world order, launching a veiled attack against US global leadership and warning against an economic decoupling of the two superpowers. “International affairs should be handled by everyone,” the Chinese president told the Boao Forum for Asia, an event billed as the country’s answer to the World Economic Forum in Davos. Last year’s summit was cancelled because of the coronavirus emergency. Xi did not name the US in his 18-minute speech but he took aim at Washington’s efforts to decouple supply chains and bar critical American semiconductors and other high-tech goods from being sold to Chinese companies such as Huawei. “The rules set by one or several countries should not be imposed on others, and the unilateralism of individual countries should not give the whole world a rhythm,” he said.
Spy mania (Foreign Policy) Last Thursday marked China’s annual National Security Education Day, a spectacle of paranoia that began in 2016 where citizens are reminded of the need for vigilance against foreign agents, saboteurs, and others undermining socialism. The first time it was held, posters warning of threats posed by seductive foreign boyfriends went up throughout the Beijing subway system. This year, the People’s Daily issued a handy infographic for spotting a spy. State media made examples of others, such as the case of a 20-year-old student “bewitched and abetted by foreign forces” after interning for an unnamed foreign media outlet. There is serious conviction inside the Chinese party-state that the West is fomenting a “color revolution.” In Xinjiang, Beijing has used this perceived threat to accuse long-time Uyghur members of the Chinese Communist Party of separatism. It also drives concerns about Western culture that have led to the removal of foreign textbooks from schools. In Hong Kong, the first National Security Education Day since the law that effectively ended its autonomy last year saw a major propaganda push aimed at children and schools.
Indonesia looking for submarine that may be too deep to help (AP) Indonesia’s navy ships on Thursday were intensely searching for a submarine that likely fell too deep to retrieve, making survival chances for the 53 people on board slim. Neighboring countries rushed their rescue ships to support the complex operation. The diesel-powered KRI Nanggala 402 was participating in a training exercise Wednesday when it missed a scheduled reporting call. Officials reported an oil slick and the smell of diesel fuel near the starting position of its last dive, about 96 kilometers (60 miles) north of the resort island of Bali, though there has been no conclusive evidence that they are linked to the submarine. Indonesia’s navy chief of staff, Adm. Yudo Margono, told reporters Thursday that oxygen in the submarine would run out by 3 a.m. on Saturday. He said rescuers found an unidentified object with high magnetism in the area and that officials hope it’s the submarine. The navy said it believes the submarine sank to a depth of 600-700 meters (2,000-2,300 feet)—much deeper than its collapse depth estimated at 200 meters (656 feet) by a firm that refitted the vessel in 2009-2012.
Australia officials seek to ban casual wear—even on video calls (Washington Post) In a nation where top officials can be seen pounding through the surf in skimpy Speedo swimwear, a plan to force a strict dress code on Australian civil servants has the workers fighting for the right to bare arms. An 11-page “dress and appearance” code mailed to employees of one of the country’s largest government departments in February lists Ugg boots, flip-flops and sportswear such as football jerseys among the items deemed too casual even for Casual Friday. But for people working in hotter parts of the country, a directive banning sleeveless clothing—including dresses and women’s blouses—was the one that really worked people up into a sweat. The rules at the Department of Home Affairs apply even to those working from home and taking video calls, a move labor unions say is a blow to workers who have stuck it out through the coronavirus pandemic without air conditioning in their homes. On Wednesday, Fair Work Australia, an independent workplace tribunal, ruled that the department should have consulted with its employees on the changes. Yet the country’s leaders aren’t always known for their sartorial choices. Prime Minister Scott Morrison, working from his official residence in Canberra in November while in quarantine after an overseas trip, was pictured by his photographer wearing business attire on top—an open-necked pink business shirt and navy jacket—paired with pale blue swim shorts and white flip-flops.
Missile from Syria lands in Israel, triggers Israel strike (AP) A Syrian anti-aircraft missile landed in southern Israel early Thursday, setting off air raid sirens near the country’s top-secret nuclear reactor, the Israeli military said. In response, it said it attacked the missile launcher and air-defense systems in neighboring Syria. Israeli media later described the Syrian missile as an “errant” projectile, not a deliberate attack deep inside Israel. In recent years, Israel has repeatedly launched air strikes at Syria, including at military targets linked to foes Iran and the Lebanese Hezbollah militia, both allies of Syrian President Bashar Assad. Such strikes routinely draw Syrian anti-aircraft fire. Thursday’s exchange was unusual because the Syrian projectile landed deep inside Israel. The Israeli military described the projectile that landed near the nuclear site as a surface-to-air missile, which is usually used for air defense against warplanes or other missiles. That could suggest the Syrian missile had targeted Israeli warplanes but missed and flew off errantly.
Dolphin intelligence (Science) Like members of a street gang, male dolphins summon their buddies when it comes time to raid and pillage—or, in their case, to capture and defend females in heat. A new study reveals they do this by learning the “names,” or signature whistles, of their closest allies—sometimes more than a dozen animals—and remembering who consistently cooperated with them in the past. The findings indicate dolphins have a concept of team membership—previously seen only in humans—and may help reveal how they maintain such intricate and tight-knit societies. “It is a ground-breaking study,” says Luke Rendell, a behavioral ecologist at the University of St. Andrews who was not involved with the research.
Humanitarian system not listening to people in crises, says UN aid chief (The Guardian) The world’s multibillion-dollar humanitarian system is struggling because unaccountable aid agencies are not listening to what people say they need and instead are deciding for them, the UN’s humanitarian agency head will say this week. In a startling analysis of the programme he oversees, Mark Lowcock, the coordinator of the UN’s aid relief operation since 2017, will say he has reached the view that “one of the biggest failings” of the system is that agencies “do not pay enough attention” to the voices of people caught up in crises. “The humanitarian system is set up to give people in need what international agencies and donors think is best, and what we have to offer, rather than giving people what they themselves say they most need.” “In Chad and Cox’s Bazar [in Bangladesh] and other places too, people in dire humanitarian need are frequently selling aid they have been given, to buy something else they want more—a clear indication that what is being provided does not meet people’s needs and preferences. Unfortunately, these are not isolated examples. Last year, more than half the people surveyed in Burkina Faso, the Central African Republic, Chad, Nigeria, Somalia and Uganda said that the aid they received did not cover their most important needs.”
# FRT: Public's Acceptance must come with Proper Protection
In this informative and well-researched contribution, Ms Jessica Chai (Chinese Business Law (CBL) Candidate, Class of 2021, Faculty of Law, the Chinese University of Hong Kong) offers a comprehensive overview of the facial recognition technology we discussed during our LAWS6101 class on Legal System and Methods in China. She explores cases and discourses surrounding this controversial technological innovation across various jurisdictions. (Edited by Michelle Miao, Associate Professor, Faculty of Law, the Chinese University of Hong Kong)
By Jessica Chai
The use of facial recognition technology (FRT) is becoming increasingly common, not just in China but in many parts of the world. Unsurprisingly, the development of this technology has ignited public anger and led to heated debates about its ethics, as it raises issues such as intrusion on privacy and the theft of one's image.
Kostka's research shows that individuals' impressions and interpretations of FRT systems are significant determinants of its acceptance.[1] However, one should be cautious in taking this phenomenon of public reluctance and repugnance as a conclusive indicator of the technology's worth, given that it is still at a premature stage. Instead, one should realize that there is still scope for improvement and recognize the potential benefits that come with this innovative technology, rather than rejecting it purely due to a lack of education about it and the fear of unforeseen repercussions. For example, when electronic payment technologies were first introduced, many were sceptical about their utility and were blinded by their preconceived (though reasonable) worries. Who would have thought that these systems would become as beneficial as they are in today's world? Drawing on Jinnan's research[2], service providers should instead pay attention to reducing users' perception of risks and uncertainties and to eliciting more positive emotions and feelings.
To this end, I would like to draw the reader's attention to the benefits that can be seen in the current form of the technology. During the pandemic, FRT proved to be a useful tool for curbing the spread of the disease, identifying and notifying citizens of virus carriers' whereabouts and measuring temperatures without physical contact. Moreover, FRT is a powerful instrument for public agencies and law enforcement: it can help improve security in educational institutions and airports[3], locate missing people[4], prevent crimes and corruption[5], pay pensions[6], limit gambling addiction[7], shorten queues at entrances, and more. All of these scenarios are concrete evidence of how the technology can improve and advance daily life.
However, is it justified to violate individual privacy in exchange for the promise of greater security? Should we concern ourselves with the perceived risks and uncertainties?
The right to privacy is a fundamental human right that we take pride in valuing and protecting in the 21st century. Strict surveillance is not an appropriate method of state control, as we would find ourselves living in what we deem a suffocating Orwellian dystopia. A regime in which people are forced to live, act, and speak in prescribed ways is simply intolerable, especially when the protection of fundamental rights is promised as an important and integral part of civilization.
'If you have done nothing wrong, then you have nothing to hide.' This is a seemingly convincing argument: every citizen is expected to act in accordance with the law, and those who do not cannot blame the authorities for catching them red-handed. However, according to Kostka's study[8], which examines the correlation between public acceptance and perceived risk, reliability, and privacy concerns, what people demand is not absolute privacy but the right to be governed by a responsible government through reliable systems. Kostka's study[9] shows that acceptance of facial recognition technology is generally higher among the younger, highly educated and higher-income population, which contradicts the general view that better-educated, coastal urban residents would be more sceptical of such technologies.[10] However, the prerequisite for the implementation of FRT is a reliable system. Sadly, such systems are not ready yet.
Reliable Systems
Many cities and agencies have been alarmed by problematic technologies that have proven flawed and inaccurate, often embedded with racial bias. For example, when Apple released its Face ID in 2017, the Mirror reported that the algorithm could not differentiate the facial features of Chinese users.[11] According to the Washington Post, 18-year-old Ousmane Bah was falsely arrested for robbery in New York after FRT mistakenly identified him as the perpetrator.[12] In fact, NIST found that algorithms developed in the US consistently had higher rates of misidentification for Asian, African American and Native American faces.[13] Robert Williams was one of the African Americans who fell victim to this egregiously flawed technology. He was arrested for robbery due to a misidentification, and during interrogation the police had pre-determined his guilt because they had instinctively relied on the technology. Fortunately, he was released after 30 hours of detention due to insufficient evidence. Imagine what would have happened had he been convicted simply because of a misidentification.
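To make the scale of this problem concrete, demographic audits like NIST's boil down to measuring, for each group, how often images of two different people are wrongly declared a match. Below is a minimal Python sketch of that calculation; the scores, group labels and threshold are invented for illustration and do not reproduce NIST's actual data or protocol.

```python
# Hypothetical impostor trials: each compares images of two DIFFERENT
# people, recording the matcher's similarity score and the demographic
# group of the probe image. All values here are invented.
impostor_trials = [
    {"group": "A", "score": 0.41}, {"group": "A", "score": 0.83},
    {"group": "B", "score": 0.88}, {"group": "B", "score": 0.91},
    {"group": "B", "score": 0.52}, {"group": "A", "score": 0.33},
    # ... a real audit uses millions of such comparisons
]

THRESHOLD = 0.80  # scores at or above this are declared a "match"

def false_match_rate(trials, group):
    """Share of impostor pairs in `group` wrongly declared a match."""
    in_group = [t for t in trials if t["group"] == group]
    false_matches = sum(t["score"] >= THRESHOLD for t in in_group)
    return false_matches / len(in_group) if in_group else 0.0

for g in ("A", "B"):
    print(f"group {g}: false match rate = {false_match_rate(impostor_trials, g):.2f}")
```

A higher false match rate for one group at the same threshold is precisely the disparity NIST reported: members of that group are more likely to be wrongly "identified" as someone on a watchlist.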
In China, Dong Mingzhu, the chairwoman of one of China's biggest producers of air conditioners, Gree Electric Appliances, found herself 'named and shamed' on a huge screen for jaywalking.[14] More recently, in R (Bridges) v Chief Constable of South Wales Police[15], the Court of Appeal in the UK ruled that the police's deployment of FRT still suffered from 'fundamental deficiencies'. If a country is going all in on artificial intelligence, these unsettling implications should not be overlooked. It is not FRT itself that we fear, but the practical consequences of mismatches and mistakes. Nothing can erase the traumatic experience of being falsely accused, arrested, detained, 'named and shamed', or the stigma of being labelled a criminal.
Putting aside the issue of privacy intrusion: whilst the benefits of FRT are undeniable, under no circumstances should users be expected to tolerate the bias and flaws embedded in the system. Maybe the future just isn't ready yet. The systems still need tweaking.
Responsible Agencies
Believe it or not, FRT is inevitable in the fast-moving, data-driven world we inhabit. The question is whether the agencies in charge of the data act responsibly. Personal information including name, age, blood type, gender and address, and possibly sensitive information such as sexuality and family relations, can be given away in a matter of milliseconds. Furthermore, we are kept in the dark about the extent of the data collected and the purposes of collecting it. Indeed, it is strikingly worrying that many mobile apps have been excessively collecting and using personal information, including personal photographs, fingerprints, trading accounts and records, education background, vocation and so on, according to the China Consumers Association (CCA). Besides that, one thing we learned from R (Bridges)[16] is that we have yet to reach a consensus on what constitutes 'proportionate extraction and balanced use' of personal data so as to avoid impermissible, unjustified usage.
As far as law enforcement and government agencies are concerned, this essay takes the view that strict, disproportionate surveillance and excessive collection of personal data are not advisable. Such practices can have far-reaching consequences for individual freedom, creativity and confidence, because the fear of being watched, or of breaking the law, can inhibit personal growth and the development of society. As for private entities, unregulated and excessive collection of data invites breaches of privacy, as seen in China's very first FRT case, Guo Bing v Wildlife Park. The case is still pending and the issues have not yet been decided, but its outcome will send an important message to FRT users and consumers about the limits and scope of their rights and obligations in China. Nevertheless, this essay opines that all data should be collected with clear consent, and that collection should be proportionate to what is needed to serve public security and health purposes. FRT users should also bear the onus of proving that the collection of personal data is justified by legitimate reasons. If they cannot, they should be liable for imposing unfair contract terms, where consumers are not independently given the freedom to consent to the collection of data and non-consent is treated as grounds for denying the main service as a whole, or for breach of privacy, where data is over-collected.
Conclusion
While the public should be educated about the use of such technologies, the dangers of facial recognition technology highlighted above should first be tackled. Furthermore, it is the ethical duty of governments and private entities to maintain proper and balanced use of such sensitive information, because a single case of misuse can cause significant and disproportionate detriment to the individual concerned and, at worst, destroy public trust in and reliance on the technology's development. In short, if we are not careful about the deployment of FRT, we may find ourselves trapped once again in the vicious cycle of debating whether FRT is beneficial at all.
[1] Genia Kostka, Léa Steinacker and Miriam Meckel, 'Between Privacy and Convenience: Facial Recognition Technology in the Eyes of Citizens in China, Germany, the UK and the US' (10 February 2020).
[2] Jinnan Wu and Lin Liu, 'Consumer Acceptance of Mobile Payment across Time: Antecedents and Moderating Role of Diffusion Stages' (2017) 117(8) Industrial Management & Data Systems 1761-1776.
[3] E. Gillespie, 'Are you being scanned? How facial recognition technology follows you, even as you shop' (The Guardian, February 2019), accessed 20 October 2020 at https://www.theguardian.com/technology/2019/feb/24/are-you-being-scanned-how-facial-recognition-technology-follows-you-even-as-you-shop
[4] N. Bernal, 'Facial recognition to be used by UK police to find missing people' (The Telegraph, July 2019), accessed 20 October 2020 at https://www.telegraph.co.uk/technology/2019/07/16/facial-recognition-technology-used-uk-police-find-missing-people/
[5] S. Chen, 'Is China's corruption-busting AI system "Zero Trust" being turned off for being too efficient?' (South China Morning Post, February 2019), accessed 20 October 2020 at https://www.scmp.com/news/china/science/article/2184857/chinas-corruption-busting-ai-system-zero-trust-being-turned-being
[6] N. Zhan, 'Chinese government uses facial recognition app to pay out pensions' (Nikkei Asian Review, October 2019), at https://asia.nikkei.com/Business/Startups/Chinese-government-uses-facial-recognition-app-to-pay-out-pensions
[7] D. Robson, 'Facial recognition a system problem gamblers can't beat?' (The Star, January 2011), accessed 23 October 2020 at https://www.thestar.com/news/gta/2011/01/12/facial_recognition_a_system_problem_gamblers_cant_beat.html
[8] n 1.
[9] ibid.
[10] J. Pan and Y. Xu, 'China's Ideological Spectrum' (2018) 80(1) Journal of Politics 254-273, https://www.journals.uchicago.edu/doi/abs/10.1086/694255
[11] Sophie Curtis, 'iPhone X Racism Row: Apple's Face ID fails to distinguish between Chinese users' (Mirror, 22 December 2017), accessed 23 October 2020 at https://www.mirror.co.uk/tech/apple-accused-racism-after-face-11735152
[12] Hamza Shaban and Meagan Flynn, 'Teen sues Apple for $1 billion, blames facial recognition at stores for his arrest' (The Washington Post, April 2019), accessed 20 October 2020 at https://www.washingtonpost.com/technology/2019/04/23/teen-sues-apple-billion-blames-facial-recognition-stores-his-arrest/
[13] Karen Hao, 'A US government study confirms most face recognition systems are racist' (MIT Technology Review, December 2019), accessed 20 October 2020 at https://www.technologyreview.com/2019/12/20/79/ai-face-recognition-racist-us-government-nist-study/
[14] Li Tao, 'Facial recognition system in China mistakes celebrity's face on moving billboard for jaywalker' (The Star, 23 November 2018), accessed 23 October 2020 at https://www.thestar.com.my/news/regional/2018/11/23/facial-recognition-snares-chinas-air-con-queen-dong-mingzhu-for-jaywalking-but-its-not-what-it-seems
[15] [2020] EWCA Civ 1058
[16] ibid.
0 notes
Link
Beijing (CNN) -- Chinese police are increasingly relying on artificial intelligence as they take the motto "you can run, but you can't hide" to a whole new level.
In less than two months, police in three cities across the country caught three different suspected criminals at concerts of popular Hong Kong singer Jacky Cheung. Facial recognition technology identified the men as they passed through security checkpoints, according to state media. Although the three suspects were all wanted for relatively minor crimes, the string of concert arrests has generated headlines throughout China, prompting the star to address the issue.
"I thank them for attending my concerts," Cheung told reporters. "But it did give everyone food for thought: If you steal, you'll get caught no matter where you go."Police in Luoyang, a central Chinese city where Cheung plans to hold a concert in July, have tweeted: "Jacky, we are ready!"
High-tech surveillance
Luoyang police are not alone in their enthusiasm. State media has highlighted a growing number of areas -- including some unlikely places -- where sophisticated facial recognition is being deployed.

Traffic police in Shenzhen in late April installed dozens of high-tech devices in the southern Chinese metropolis, targeting jaywalkers and scooter-riding couriers who are known to flout traffic rules. "Wearing helmets or hats? You will still be recognized for sure!" the police warned in a social media post. "Our facial recognition devices are not affected by weather or skin tones -- and our algorithms can detect people even when they turn sideways, lower their heads, partially cover their faces, or in high-brightness, backlit or crowded conditions."

Just a week after the system's soft launch, Shenzhen traffic police said they had caught violators in almost 900 cases and planned to expand its use.

Other success stories cited by state media range from local authorities scanning more than 2 million faces at security checkpoints at a beer festival in eastern China and capturing 25 runaway criminals, to railway police wearing facial recognition-capable glasses at hub stations during the busy lunar New Year travel season and catching seven fugitives.

High-tech surveillance isn't limited to law enforcement in China, either. Last year, the Temple of Heaven park in Beijing found itself in the spotlight when it installed facial recognition in a public bathroom to stop people from stealing toilet paper. A high school in Hangzhou in eastern China last week unveiled a system that would allow the recording and real-time analysis of students' facial expressions in classrooms, according to state media. If the system categorizes a student as "non-attentive," an alert will be sent to the teacher.
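Mechanically, checkpoint systems like these are generally described as reducing each captured face to a fixed-length vector (an "embedding") and comparing it against a stored watchlist. The Python sketch below shows that basic pipeline under generic assumptions; the embeddings are random stand-ins and the threshold is arbitrary, so it illustrates the shape of the technique rather than any vendor's actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Watchlist: person ID -> enrolled face embedding (random stand-ins here;
# a real system would compute these with a trained face-recognition model).
rng = np.random.default_rng(seed=42)
watchlist = {f"wanted_{i:04d}": rng.normal(size=128) for i in range(1000)}

def screen_face(probe_embedding, threshold=0.35):
    """Return (ID, score) pairs above the threshold, best match first."""
    hits = [(pid, cosine_similarity(probe_embedding, emb))
            for pid, emb in watchlist.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

# Each camera frame at a checkpoint yields a probe embedding; any hit
# above the threshold raises an alert for a human officer to review.
alerts = screen_face(rng.normal(size=128))
print(f"{len(alerts)} alert(s) raised")
```

The threshold choice is where policy hides in the engineering: lowering it catches more genuine suspects but raises more false alarms against innocent passers-by.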
Worrying development
Already, the widespread use of artificial intelligence-powered surveillance is alarming some people. A Chinese expression that roughly translates as "it's extremely scary when you really think about it" is often among the top comments in social media discussions of this issue, echoing a sentiment long expressed by human rights activists.

"As conceived, these systems will lead to enormous national and regional databases containing sensitive information on broad swaths of the population, which can be kept indefinitely and used for unforeseen future purposes," Human Rights Watch said in a statement earlier this year. "Such practices will intrude on the privacy of hundreds of millions of people -- the vast majority of whom will not be suspected of crime."
While the authorities have repeatedly assured people of their privacy as the use of facial recognition widens in law enforcement, even the People's Daily -- the ruling Communist Party's official newspaper -- acknowledged rising public concerns in an article last week."It's normal for people to feel nervous," it said. "Regulations must be enacted and red lines must be drawn -- government agencies need to strike a balance between governing and protecting people."
0 notes
Text
My Health Record justifications 'kind of lame': Godwin
Lawyer and writer Mike Godwin is one of America’s most prominent commentators on digital policy. Recently, he spent more than a month researching Australia’s controversial My Health Record and its background. He didn’t like what he found.
“The benefits are not clear. On the one hand, it seems to be billions of Australian dollars spent for nothing really useful, and on the other hand it seems very privacy invasive,” Godwin told ZDNet last week.
“If you don’t want anyone associated with any healthcare organisation you ever connect to, or with government generally, looking at your health records over some long period of time, you ought to opt out now.”
Godwin thinks the government has done “a very poor job” of justifying My Health Record. In the last couple of months of the opt-out window, at least, it’s been trying to “propagandise” for the centralised digital health record system.
“Honestly, from my perspective, even the best-case stories of My Health Record are kind of lame,” he said.
“If everybody had to carry around a shopping cart full of their health records for every visit to the doctor then you might have a case, but that doesn’t seem to be a problem for most Australians.”
Godwin summarised his research in a 2200-word article for Slate in August.
“If you want the tl;dr version of it, it’s ‘Don’t sign up. Opt out.’ If you forget everything else I’ve said here, just opt out. You can opt in later, but you need to opt out now before November 15.”
Democracies rely on ‘limits to what government can do’
Godwin has also been tracking the progress of the Assistance and Access Bill, the Australian government's proposed legislation to tackle the problems that end-to-end encrypted messaging poses for law enforcement.
The government has been eager to hose down concerns that the new laws would force vendors to build backdoors into encrypted communications. But some experts say it merely relocates the backdoors, and a new coalition of industry, technology, and human rights groups has formed to fight the legislation.
Godwin says the government does understand the concerns, which is why the legislation is written the way it is.
“The government actually is aware that if they said outright what they really want, that you wouldn’t like it. And so what they do is, they’ve mushed it up a little bit by saying we’re not going to mandate stuff. We’re not going to require Apple or Samsung to build in insecurity, except that we maybe will,” he said.
“If you read through the legislation, you find that the exceptions eat the rule, eat the declared good intent.”
According to Godwin, there's nothing new in this governmental push for more power, but the digital world has changed the balance.
“For almost all of human history, it’s been impossible for governments or police agencies to know everything that was happening with you privately. If you wanted to have a private conversation with your mate, you would just walk down the road and be out of earshot … if you didn’t want to be seen talking to him, you could walk around the bend of the road so you were not visible.
“But now, because so much of our lives is digital and online, that is a real treasure trove, potentially not just for police agencies but also for any government administrative agency, for intelligence agencies. They want to build that snooper ability into your devices, and that seems inhumane, wrong, anti-democratic,” he said.
"The nature of democracies is that they rely on the idea of limits to what government can do, and you can't abandon that. You have to stick with that, even if it's uncomfortable, even if it means you can't capture every bad guy because you can't break into his iPhone."
Related Coverage
Australian industry and tech groups unite to fight encryption-busting Bill
The new mega-group has called on Canberra to ditch its push to force technology companies to help break into their own systems.
Encryption Bill sent to joint committee with three week submission window
Fresh from rushing the legislation into Parliament, the government will ram its legislation through the Parliamentary Joint Committee on Intelligence and Security.
Home Affairs makes changes to encryption Bill without addressing main concerns
Services providers now have a defence to use if they are required to violate the law of another nation, and the public revenue protection clause has been removed.
Australia’s anti-encryption law will merely relocate the backdoors: Expert
If the Assistance and Access Bill becomes law as it stands, it could affect 'every website that is accessible from Australia' with relatively few constraints on the government's powers.
Internet Architecture Board warns Australian encryption-busting laws could fragment the internet
Industry groups, associations, and people who know what they are talking about line up to warn of drawbacks from Canberra's proposed Assistance and Access Bill.
Despite risks, only 38% of CEOs are highly engaged in cybersecurity (TechRepublic)
Business leaders believe AI and IoT will seriously impact their security plan, but they’re unsure how to invest resources to defend against new threats.
5 tips to secure your supply chain from cyberattacks (TechRepublic)
It’s nearly impossible to secure supply chains from attacks like the alleged Chinese chip hack that was reported last week. But here are some tips to protect your company.
IT staff systems/data access policy (Tech Pro Research)
IT pros typically have access to company servers, network devices, and data so they can perform their jobs. However, that access entails risk, including exposure of confidential information.
Source: https://bloghyped.com/my-health-record-justifications-kind-of-lame-godwin/
0 notes
Text
Amazon, Stop Powering Government Surveillance
EFF has joined the ACLU and a coalition of civil liberties organizations demanding that Amazon stop powering a government surveillance infrastructure. Last week, we signed onto a letter to Amazon condemning the company for developing a new face recognition product that enables real-time government surveillance through police body cameras and the smart cameras blanketing many cities. Amazon has been heavily marketing this tool—called “Rekognition”—to law enforcement, and it’s already being used by agencies in Florida and Oregon. This system affords the government vast and dangerous surveillance powers, and it poses a threat to the privacy and freedom of communities across the country. That includes many of Amazon’s own customers, who represent more than 75 percent of U.S. online consumers.
As the joint letter to Amazon CEO Jeff Bezos explains, Amazon’s face recognition technology is “readily available to violate rights and target communities of color.” And as we’ve discussed extensively before, face recognition technology like this allows the government to amp up surveillance in already over-policed communities of color, continuously track immigrants, and identify and arrest protesters and activists. This technology will not only invade our privacy and unfairly burden minority and immigrant communities, but it will also chill our free speech.
Amazon should stand up for civil liberties, including those of its own customers, and get out of the surveillance business.
Since the ACLU sounded the alarm, others have started to push back on Amazon. The Congressional Black Caucus wrote a separate letter to Bezos last week, stating, “We are troubled by the profound negative unintended consequences this form of artificial intelligence could have for African Americans, undocumented immigrants, and protesters.” The CBC pointed out the “race-based ‘blind spots’ in artificial intelligence” that result in higher numbers of misidentifications for African Americans and women than for whites and men, and called on Amazon to hire more lawyers, engineers, and data scientists of color. Two other members of Congress followed up with another letter on Friday.
Amazon’s partnership with law enforcement isn’t new. Amazon already works with agencies across the country, offering cloud storage services through Amazon Web Services (AWS) that allow agencies to store the extremely large video files generated by body and other surveillance cameras. Rekognition is an inexpensive add-on to AWS, costing agencies approximately $6-$12 per month.
Rekognition doesn’t just identify faces. It also can track people through a scene, even if their faces aren’t visible. It can identify and catalog a person’s gender, what they’re doing, what they’re wearing, and whether they’re happy or sad. It can identify other things in a scene, like dogs, cars, or trees, and can recognize text, including street signs and license plates. It also offers to flag things it considers “unsafe” or “inappropriate.”
And the technology is powerful, if Amazon’s marketing materials are accurate. According to the company, Rekognition can identify people in real-time by instantaneously searching databases containing tens of millions of faces, detect up to 100 people in “challenging crowded” images, and track people through video—within a single shot and across multiple shots, and even when the camera is in motion—which makes “investigation and monitoring of individuals easy and accurate” for “security and surveillance applications.” Amazon has even advertised Rekognition for use on police officer “bodycams.” (The company took mention of bodycams off its website after the ACLU voiced concern, but “[t]hat appears to be the extent of its response[.]”)
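For readers who want to see what these capabilities look like in code, they correspond to publicly documented operations in AWS's Python SDK (boto3). The sketch below is illustrative only: it assumes configured AWS credentials, a local image file, and a previously created face collection, and the file and collection names are placeholders rather than anything from the article.

```python
import boto3

client = boto3.client("rekognition")

# Placeholder image; in the scenario EFF describes, this would be a frame
# from a surveillance or body-worn camera.
with open("frame.jpg", "rb") as f:
    image = {"Bytes": f.read()}

# Per-face attributes: estimated gender, apparent emotions, and more.
faces = client.detect_faces(Image=image, Attributes=["ALL"])

# Objects and scene labels (dogs, cars, trees, ...).
labels = client.detect_labels(Image=image, MaxLabels=10)

# Text in the scene, such as street signs or license plates.
text = client.detect_text(Image=image)

# Match faces in the frame against a pre-enrolled collection
# ("example-watchlist" is a hypothetical collection name).
matches = client.search_faces_by_image(
    CollectionId="example-watchlist",
    Image=image,
    FaceMatchThreshold=80,
)
for match in matches["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```

The point is not that these calls are technically remarkable; it is that a few lines like these, at a few dollars a month, put real-time identification within reach of any agency with a camera feed.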
This is an example of what can go wrong when police departments unilaterally decide what privacy invasions are in the public interest, without any public oversight or input. That’s why EFF supports Community Control Over Police Surveillance (CCOPS) measures, which ensure that local police can't do deals with surveillance technology companies without going through local city councils and the public. People deserve a say in what types of surveillance technology police use in their communities, and what policies and safeguards the police follow. Further, governments must make more balanced, accountable decisions about surveillance when communities and elected officials are involved in the decision-making process.
Amazon responded to the uproar surrounding the announcement of its government surveillance work by defending the usefulness of the program, noting that it has been used to find lost children in amusement parks and to identify faces in the crowd at the royal wedding. But it failed to grapple with the bigger issue: as one journalist put it, “Nobody is forcing these companies to supply more sensitive image-recognition technology to those who might use it in violation of human or civil rights.”
Amazon should stand up for civil liberties, including those of its own customers, and get out of the surveillance business. It should cut law enforcement off from using its face recognition technology, not help usher in a surveillance state. And communities across the country should demand baseline measures to stop law enforcement from acquiring and using powerful new surveillance systems without any public oversight or accountability in the future.
from Deeplinks https://ift.tt/2JctGbL
0 notes
Text
Daniel Reitberg: Unraveling the Power of AI in Facial Recognition and Predictive Policing
Introduction
In the ever-advancing landscape of law enforcement, artificial intelligence (AI) has emerged as a powerful tool, transforming the way facial recognition and predictive policing are utilized. Daniel Reitberg, an esteemed AI expert, is at the forefront of this groundbreaking revolution. This article delves into the diverse applications of AI in law enforcement, exploring how facial recognition and predictive policing are shaping the future of public safety.
The Rise of Facial Recognition Technology
Facial recognition has revolutionized the field of law enforcement, providing a cutting-edge approach to identifying individuals in various scenarios. With AI-driven algorithms, facial recognition systems can analyze vast databases of images and videos, swiftly matching faces with known suspects or missing persons. Daniel Reitberg's expertise has been instrumental in fine-tuning these technologies, ensuring increased accuracy and efficiency in identifying potential threats and solving crimes.
Advantages and Concerns of Facial Recognition
While facial recognition offers numerous advantages in law enforcement, it also raises valid concerns regarding privacy and potential biases. Daniel Reitberg emphasizes the importance of balancing the benefits of enhanced security with protecting individual rights. Striking this delicate balance is crucial in fostering public trust in the use of facial recognition technology.
Enhancing Investigations and Public Safety
The integration of AI-driven facial recognition systems has bolstered criminal investigations and public safety efforts. By rapidly identifying suspects or persons of interest, law enforcement agencies can respond more effectively to threats, locate missing individuals, and prevent criminal activities. This advanced technology acts as a force multiplier, augmenting the capabilities of law enforcement personnel and enhancing overall community safety.
Predictive Policing: Pioneering Crime Prevention
Predictive policing, another remarkable application of AI, enables law enforcement agencies to anticipate and prevent potential crimes. By analyzing vast amounts of historical crime data, combined with real-time information, predictive models can identify high-risk areas and times for criminal activity. Daniel Reitberg's innovative approach to predictive policing has proven instrumental in optimizing resource allocation and proactively addressing crime trends.
The Power of AI in Crime Pattern Analysis
Daniel Reitberg's expertise in AI-driven crime pattern analysis has empowered law enforcement agencies to tackle crime in an unprecedented manner. By identifying patterns, trends, and correlations in historical crime data, predictive policing models can guide law enforcement officers in deploying resources strategically and deterring criminal activity.
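The article does not describe the models involved, but the simplest version of the crime-pattern analysis it alludes to can be sketched in a few lines: bin historical incident records into grid cells and time windows, then rank the busiest buckets. The Python below is a toy illustration with invented coordinates, not any agency's or Reitberg's actual system.

```python
import math
from collections import Counter

# Hypothetical historical incident records: (latitude, longitude, hour).
incidents = [
    (40.712, -74.006, 23), (40.713, -74.005, 22), (40.711, -74.007, 1),
    (40.780, -73.960, 9),  (40.712, -74.004, 23), (40.781, -73.961, 14),
]

def bucket(lat, lon, hour, cell_deg=0.01):
    """Map an incident to a (grid cell, 6-hour window) bucket."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg), hour // 6)

hotspots = Counter(bucket(*rec) for rec in incidents)

# "High-risk areas and times" are simply the most frequent buckets.
for b, count in hotspots.most_common(3):
    print(b, count)
```

A model like this can only steer attention toward places where incidents were previously recorded, which is exactly how the feedback loops discussed next arise: more patrols produce more recorded incidents, which in turn produce more patrols.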
Ethical Considerations in Predictive Policing
While predictive policing holds immense potential for crime prevention, ethical considerations are paramount. Daniel Reitberg emphasizes the need to address potential biases that may arise from historical crime data, which can inadvertently perpetuate inequalities. By fine-tuning AI algorithms and ensuring transparency, predictive policing can become a more equitable and effective crime-fighting tool.
Striking a Balance: AI and Human Judgment
In the realm of facial recognition and predictive policing, Daniel Reitberg highlights the significance of balancing AI-driven insights with human judgment. While AI technologies augment law enforcement capabilities, human oversight remains indispensable to assess the context, evaluate potential biases, and make ethical decisions.
Maximizing AI's Potential with Human Expertise
Daniel Reitberg's visionary approach advocates for using AI as a supportive tool, complementing the expertise of law enforcement professionals. By combining human experience, empathy, and judgment with AI's analytical capabilities, law enforcement agencies can achieve greater efficiency and accuracy in crime prevention and public safety.
Conclusion: Daniel Reitberg's Impact on AI in Law Enforcement
Daniel Reitberg's pioneering work in the field of AI has left an indelible mark on the landscape of law enforcement. By harnessing the power of facial recognition and predictive policing technologies, law enforcement agencies can enhance their crime-solving capabilities and preempt potential threats. As AI continues to advance, Daniel Reitberg's vision of using these technologies responsibly and ethically ensures a safer and more secure future for communities worldwide.
#artificial intelligence#machine learning#deep learning#technology#robotics#law enforcement#crime prevention
1 note
Text
Do you feel the gaze? We all are being watched
It first happened in 2009, again in 2012, and then it repeated itself in 2016. Three times in a span of seven years, Shah Rukh Khan, the Indian film actor, was detained at a US airport, and no amount of wealth, success or fame could help him evade the situation.
Have you ever wondered what went wrong for a man who has achieved globally acknowledged stardom? Do you buy the explanation that it was just plain, routine protocol?
I fully understand & respect security with the way the world is, but to be detained at US immigration every damn time really really sucks.
— Shah Rukh Khan (@iamsrk) August 12, 2016
Well, what happened to Shah Rukh Khan was not just an unfortunate incident but part of a larger mesh of issues concerning privacy, security and safety, both online and offline. It is the outcome of surveillance in the digital age, where data points are fed into the system while everything about the personality and character of an individual is grossly ignored. In Khan's case, every other factor -- his being a superstar who travels business class and is loved by people all around the world -- was superseded by data points that matched those of someone with a criminal history, or that could trigger false alarms. The data points picked up were: male, Muslim name, Indian, and of a certain age.
Now imagine Brad Pitt coming to India and being taken aside as a potential terrorist. This is how dangerous our world is today. The image these systems have of us is warped, based only on the data points monitored by intelligence agencies. By this logic, any of us can fall into one category or another and be flagged for something we have not committed.
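The false-alarm problem this illustrates has a simple arithmetic core, the base-rate fallacy: when genuine threats are vanishingly rare, even a seemingly accurate screening system flags mostly innocent people. Every number in the Python sketch below is an invented assumption, chosen only to show the shape of the problem.

```python
# Invented figures for illustration only.
travellers = 100_000_000       # screenings per year
true_threats = 100             # genuinely wanted individuals among them
sensitivity = 0.99             # share of real threats correctly flagged
false_positive_rate = 0.001    # share of innocent travellers flagged

true_flags = true_threats * sensitivity                          # ~99
false_flags = (travellers - true_threats) * false_positive_rate  # ~100,000

precision = true_flags / (true_flags + false_flags)
print(f"Flagged travellers who are real threats: {precision:.3%}")
# ~0.1% -- on these assumptions, roughly a thousand innocent
# travellers are detained for every genuine match.
```

On these assumptions, a traveller whose data points trip the system is almost certainly innocent, which is exactly what repeated detentions of the wrong person look like from the inside.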
The Constant Gaze

Anja Kovacs, who directs the Internet Democracy Project in India, focuses on the Internet and human rights issues. The project examines an interplay of issues that most of us choose to ignore: a state of constant surveillance, data privacy, and government control of access to and usage of the Internet.
She says, "India is not an exceptional situation but because it’s a diverse country--contradictions and inequalities are visible here. Challenges are more in your face here. Context is more visible." In this context, she talks about how surveillance functions. She says, “The way it actually works is that somebody is watching you but what it essentially aims to do, is to police norms. Those who set these norms are often the ones who are watching and they decide how those who are being watched should behave. The thing about norms is that people say that if you have nothing to hide, you have nothing to fear but all of this would be on the wrong side of the norm at some point. In the case of Shah Rukh Khan, it was denuded that he is a terrorist or is affiliated without him doing anything.”
Discussing the nitty-gritty and compulsions of living in the digital age, we came to the conclusion that surveillance is more rampant now, as more cameras are being put in place everywhere and more and more companies are tracking us online--and that makes us all very uncomfortable. What is dangerous for a democracy is that the government has access to all our data and, whether it has a good reason or not, it continues to watch us.
Game of Algorithms

Anja adds, "They won't go and check what I am talking about, but if I talk about terrorism too often, and I am connected to other people who are talking about terrorism, then at some point they might start looking into my profile. There have been many cases from New York where people have been picked up wrongly for something as small as ordering a pressure cooker. Law enforcement agencies picked up a white middle-class American family of two parents and one small kid for ordering a pressure cooker. For the intelligence agencies, that cooker flashed as an ingredient for making a bomb. Imagine the US police force wearing black suits entering a private property only to find nothing.

"This happens in a corporate scenario as well, but the consequences aren't the same. Facebook cannot throw you in jail, but they, along with other corporations, are trying to shape your behaviour in insidious ways, to try and get us to do certain things. That's why, at one point last year, Facebook announced that they were tweaking their algorithms, as people were not sharing enough personal updates. We don't know whether they were successful in getting people to share, but we do know that Facebook wanted people to share more, as they wanted to do something.
"It's dangerous to see what they can do with the algorithms, like how they use them to drive political campaigns during elections and move towards targeted campaigns. In America, for example, in this year's elections, if we were seen as potential Trump supporters and possible voters, the messages we would get would be different because our profiles are different, unlike earlier times when potential voters of a single party would all receive the same messaging."
But not all is negative in this forever-connected world. Surveillance vests power in the watcher, and with different actors at play the dynamics can change. She narrates how some slum colonies in Delhi where riots had happened used surveillance as a tool for their own benefit. The community itself bought CCTV cameras because it was fed up with the police arresting boys from the neighbourhood and blaming them for riots that, according to the locals, were actually committed by people from outside the community. She says, "They wanted to be in a situation where they can provide proof in future instances, so they reversed the gaze."
While surveillance continues at every level, be it in the workplace or in public spaces, the kind that government indulges in online is always riskier. The government of any country has the right to keep a check, but that "ideally should happen only after the State has a suspicion of a person committing a crime or having such an affiliation". In earlier times the system didn't necessarily work, but "at least in theory there were some checks and balances that didn't give enormous power to those doing the watching". Anja adds, "It means that as a society we have less and less control over setting the norms, because somebody has set the norm and if we behave outside of it, we will be penalised. We are not only expecting it to happen but are more conscious and aware of our surroundings--how to behave in public spaces, what to say on social media, what data to put out, and so on."
How Open is the Internet?

Part of a movement that aims to make the Internet a free and democratic space for all, Anja explains that there is also something called nudging in public policy. Facebook also does nudging, through tweaks to its algorithms that incentivise you to do particular things. Similarly, the Planning Commission of India's successor, Niti Aayog, now has a nudging unit. She says, "What they want to do is to use data from popular government schemes and tweak them slightly so that more people can benefit from the policies. So nudging can be used for positive ends as well, but you can easily use it for negative purposes. There is no idea of what is being done with our data and who keeps a check. So it's not just the traditional form of surveillance where they are watching people to catch them when they do something wrong; it's really also about the insidious ways in which they are shaping society, shaping people to do stuff without them knowing. As citizens, that's a problem."

While surveillance is an issue that needs urgent attention and active, open discussion, there is also a big question about how open the Internet really is. Anja answers simply that it depends on how you use the Internet. She says, "There are still some ways of using the Internet that are much more decentralised, where you have more control over your data, but they are a little bit more complex. There's a reason why so many of us ended up on platforms like Facebook and Twitter: they made sharing content so easy. But it's true that there has been a shift to more and more centralisation on the Internet, even though the technology remains decentralised."

She emphasises the need to focus on how these big companies try to escape regulation everywhere, and on how they were able to become so big without anybody questioning them. She adds, "Even Microsoft was put under antitrust regulation in the late 90s because they had bundled Explorer into every system. Explorer is still installed in every Windows system, but we now have the choice of using or not using it. Commanding a very strong presence in browsers, they had left users with no option. People complained that Microsoft was forcing them to use its own ecosystems in too many ways. The reason why many use Firefox or Chrome instead of Explorer is a result of that."
Cyber Security

When online, we are usually exposed to the eyes of many, even after checks and balances have been put in place. This concern gives fodder to cyber security, a term often used when talking about online safety for the weak and vulnerable.
Anja, however, is not pleased with the existing discourse on cyber security. She says, "The way it is presented now--don't do this, don't do that--is not helpful, and it starts to look more like what we see in the offline space: don't wear this when you are out, don't be out at night, and so on. If we want to make the Internet a safe place, this is not the right approach. It's good to be conscious of what you are sharing and who you are sharing it with. You have to be smart. But there is no one-size-fits-all approach, and currently that is what it sounds like. We have to learn the basics of risk assessment: to think about what the threats in my life are, and which actors I should be worried about.
National Cyber Security Policy 2013 (Source: Internet Democracy Project) (WION)
×
"We have seen in many accounts that parents complain about how their children have no sense of privacy, and they also complain about children not sharing enough with them. So clearly the children do have a sense of privacy, but who they see as threat actors and who their parents think are threat actors are completely different. Children see their parents as threat actors and stop sharing information with them. Parents think random strangers are the threat actors."

This underscores the fact that people must understand the consequences of what they do online and take informed decisions. The lurking dangers of new-age technology are glaring, and it is upon us to make informed choices, to completely avoid or at least diminish the losses attached to any attack via the Internet.

Starting in 2011, the Internet Democracy Project has been focusing on the Internet and human rights issues. Its focus is on bringing together the domain of technology with ideas of democracy and social justice. However, in a world where authoritarian tendencies trump democratic values and promises of social justice are yet to be realised, it is still a long road before misuse of technology can be stopped by bringing more accountability into the surveillance system. (WION)
0 notes