# Big data analytics in research
latest-info · 7 months ago
The Role of Technology in Unveiling Knowledge Horizons
Introduction: In today’s rapidly evolving world, technology plays a pivotal role in reshaping the horizons of knowledge. The unprecedented pace at which technology advances enables us to access, analyze, and disseminate information like never before. This article delves into how technology is unveiling new knowledge horizons, transforming education, research, communication, and societal…
jcmarchi · 30 days ago
AI’s Life-Changing, Measurable Impact on Cancer
New Post has been published on https://thedigitalinsider.com/ais-life-changing-measurable-impact-on-cancer/
Leveraging Big Data to Enhance AI in Cancer Detection and Treatment
Integrating AI into the healthcare decision-making process is helping to revolutionize the field, leading to more accurate and consistent treatment decisions thanks to AI's ability to identify patterns too complex for humans to see.
The field of oncology generates enormous data sets, from unstructured clinical histories to imaging and genomic sequencing data, at various stages of the patient journey. AI can “intelligently” analyze large-scale data batches at faster speeds than traditional methods, which is critical for training the machine learning algorithms that are foundational for advanced cancer testing and monitoring tools. AI also has tremendous inherent pattern recognition capabilities for efficiently modeling data set complexities. This is important because it enables deeper, multi-layered understandings of the impact of nuanced molecular signatures in cancer genomics and tumor microenvironments. Discovering a pattern between genes only found in a certain subset of cancer cases or cancer progression patterns can lead to a more tailored, patient-specific approach to treatment.
What is the ultimate goal? AI-powered cancer tests that support clinical decision-making for doctors and their patients at every step of the cancer journey – from screening and detection, to identifying the right treatment, to monitoring patients’ response to interventions and predicting recurrence.
Data Quality and Quantity: The Key to AI Success
Ultimately, an AI algorithm will only be as good as the quality of the data that trains it. Poor, incomplete, or improperly labeled data can hamstring AI’s ability to find the best patterns (garbage in, garbage out). This is especially true for cancer care, where predictive modeling relies on impeccable precision: one gene modification out of thousands, for example, could signal tumor development and inform early detection. Ensuring that level of quality is time-consuming and costly, but it yields better data and, in turn, better testing accuracy.

However, developing a useful goldmine of data comes with significant challenges. Collecting large-scale genomic and molecular data, which can involve millions of data points, is a complex task. It begins with the highest-quality assays, ones that measure these characteristics of cancer with impeccable precision and resolution. The molecular data collected must also be as diverse in geography and patient representation as possible to expand the predictive capacity of the training models. Building long-term, multi-disciplinary collaborations and partnerships also helps gather and process raw data for analysis. Finally, codifying strict ethics standards in data handling is of paramount importance for healthcare information, as is adhering to strict patient privacy regulations, which can sometimes complicate data collection.
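To make the garbage-in, garbage-out point concrete, here is a minimal, hypothetical sketch of the kind of automated completeness and label checks a team might run before genomic training data reaches a model. The file name, column names, label set, and coverage threshold are illustrative assumptions, not a description of any specific pipeline.

```python
import pandas as pd

# Hypothetical training table: sample_id, gene_variant, label, sequencing_depth
df = pd.read_csv("genomic_training_data.csv")

# 1. Completeness: flag records missing a label or a variant call.
missing = df[df["label"].isna() | df["gene_variant"].isna()]

# 2. Label sanity: only the expected classes should appear.
valid_labels = {"benign", "pathogenic", "uncertain"}
bad_labels = df[~df["label"].isin(valid_labels)]

# 3. Measurement quality: drop low-coverage samples that add noise.
low_quality = df[df["sequencing_depth"] < 30]

print(f"{len(missing)} incomplete, {len(bad_labels)} mislabeled, "
      f"{len(low_quality)} low-coverage records out of {len(df)}")

# Only the clean subset is passed on for model training.
bad_index = missing.index.union(bad_labels.index).union(low_quality.index)
clean = df.drop(index=bad_index)
```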
An abundance of accurate, detailed data will not only produce testing capabilities that find patterns quickly and give physicians the best opportunity to address their patients' unmet needs; it will also improve and advance every aspect of clinical research, especially the urgent search for better medicines and biomarkers for cancer.
AI Is Already Showing Promise in Cancer Care and Treatment
More effective ways to train AI are already being implemented. My colleagues and I are training algorithms from a comprehensive array of data, including imaging results, biopsy tissue data, multiple forms of genomic sequencing, and protein biomarkers, among other analyses – all of which add up to massive quantities of training data. Our ability to generate data on the scale of quadrillions rather than billions has allowed us to build some of the first truly accurate predictive analytics in clinical use, such as tumor identification for advanced cancers of unknown primary origin or predictive chemotherapy treatment pathways involving subtle genetic variations.
At Caris Life Sciences, we’ve proven that extensive validation and testing of algorithms are necessary, with comparisons to real-world evidence playing a key role. For example, our algorithms trained to detect specific cancers benefit from validation against laboratory histology data, while AI predictions for treatment regimens can be cross-compared with real-world clinical survival outcomes.
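As an illustration of that kind of cross-comparison (a minimal sketch with toy numbers, not Caris's actual validation protocol), predicted risk scores can be checked against observed real-world survival with a concordance index: the fraction of comparable patient pairs in which the patient predicted to be at higher risk actually experienced the earlier event.

```python
import numpy as np

def concordance_index(event_times, risk_scores, event_observed):
    """Fraction of comparable patient pairs ranked correctly by predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(event_times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had an observed event before time j.
            if event_observed[i] and event_times[i] < event_times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: survival months, predicted risk scores, and whether death was observed.
times  = np.array([5.0, 12.0, 20.0, 31.0])
risks  = np.array([0.9, 0.7, 0.4, 0.2])   # higher = predicted worse outcome
events = np.array([1, 1, 0, 1])

print(concordance_index(times, risks, events))  # 1.0: predictions rank patients correctly
```

A value near 0.5 would mean the model ranks patients no better than chance, which is exactly the kind of signal that prompts further algorithm refinement.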
Given the rapid advancements in cancer research, experience suggests that continuous learning and algorithm refinement is an integral part of a successful AI strategy. As new treatments are developed and our understanding of the biological pathways driving cancer evolves, updating models with the most up-to-date information offers deeper insights and enhances detection sensitivity.
This ongoing learning process highlights the importance of broad collaboration between AI developers and the clinical and research communities. We’ve found that developing new tools to analyze data more rapidly and with greater sensitivity, coupled with feedback from oncologists, is essential. Bottom line: the true measure of an AI algorithm’s success is how accurately it equips oncologists with the reliable, predictive insights they need and how adaptable the AI strategy is to ever-changing treatment paradigms.
Real-World Applications of AI Are Already Increasing Survival Rates and Improving Cancer Management
Advances in data scale and quality have already had measurable impacts by expanding the physician decision-making toolkit, which has had real-world positive results on patient care and survival outcomes. The first clinically validated AI tool for navigating chemotherapy treatment choices for a difficult-to-treat metastatic cancer can potentially extend patient survival by 17.5 months, compared to standard treatment decisions made without predictive algorithms [1]. A different AI tool can predict with over 94% accuracy the tumor of origin for dozens of metastatic cancers [2], which is critical to creating an effective treatment plan. AI algorithms are also predicting how well a tumor will respond to immunotherapy based on each person’s unique tumor genetics. In each of these cases, AI toolkits empower clinical decision-making that improves patient outcomes compared with current standards of care.
Expect An AI Revolution in Cancer
AI is already changing how early we can detect cancer and how we treat it along the way. Cancer management will soon have physicians working side-by-side with integrated AI in real time to treat and monitor patients and stay one step ahead of cancer’s attempts to outwit medicines with mutations. In addition to ever-improving predictive models for detecting cancer earlier and providing more effective personalized treatment paradigms, physicians, researchers, and biotech companies are hard at work today to leverage data and AI analyses to drive new therapeutic discoveries and molecular biomarkers for tomorrow.
In the not-too-distant future, these once-impossible advances in AI will reach far beyond cancer care to all disease states, ending an era of uncertainty and making medicine more accurate, more personalized, and more effective.
dhallblogs · 5 months ago
Kerala establishes seven advanced research centers of excellence.
Thiruvananthapuram: The Kerala government has approved the establishment of seven Centers of Excellence, which will operate as independent institutions and focus on various fields of advanced research and training. They will be staffed by top teachers, researchers, and students, and equipped with cutting-edge facilities.
Read more: https://apacnewsnetwork.com/2024/07/kerala-establishes-seven-advanced-research-centers-of-excellence/
labtroncc · 6 months ago
Desktop Trace Drug Detector
The Labtron Desktop Trace Drug Detector offers rapid, accurate detection of trace amounts of narcotics, with a sensitivity limit of 100 ng (for TNT) and an 8-second analysis time. Features include an audio and visual alert system and advanced ion mobility spectrometry technology, providing real-time results and reliable identification of a wide range of drugs.
techtoio · 6 months ago
How Big Data Analytics is Changing Scientific Discoveries
Introduction
In today's world of rapidly advancing science and technology, big data analytics has become a powerful driver of scientific discovery. At Techtovio, we explore how this approach is reshaping research methodologies, enabling better data interpretation and accelerating the path to new insights. Continue reading
ambeone-marketwaysarabia · 11 months ago
The vast amount of data available to retailers today is helping them drive better, more tailored segmentation for customers' different needs and preferences.
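A hedged sketch of what such segmentation often looks like in practice: clustering customers on a few behavioral features with k-means. The synthetic data, the feature choices, and the four-segment count below are illustrative assumptions, not Marketways Arabia's actual methodology.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioral features per customer: annual spend, visit frequency, basket size.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) * [400.0, 5.0, 2.0] + [1200.0, 12.0, 6.0]

# Scale features so no single one dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# Group customers into four segments; in practice the count would be chosen
# with a validation metric such as the silhouette score.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

print(np.bincount(segments))  # number of customers in each segment
```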
Contact Information:
Address: PO Box: 127239, Business Bay, Dubai, UAE
Ph: +971 (04)4431578
Website: www.marketwaysarabia.com
Big Data Analysis Company in Kolkata
Introduction
In the dynamic landscape of technology, big data has emerged as a game-changer for businesses worldwide. As organizations in Kolkata increasingly recognize the importance of harnessing data for strategic decision-making, the role of big data analysis companies has become pivotal.
Tumblr media
The Rise of Big Data in Kolkata
Kolkata, known for its rich cultural heritage, is also witnessing remarkable growth in the realm of big data. Over the years, the city has transitioned from traditional methods to advanced data analytics, keeping pace with global trends.
Key Players in Kolkata’s Big Data Scene
Prominent among the contributors to this transformation are the leading big data analysis companies in Kolkata. Companies like DataSolve and AnalytixPro have carved a niche for themselves, offering cutting-edge solutions to businesses across various sectors.
Services Offered by Big Data Companies
These companies provide a range of services, including data analytics solutions, machine learning applications, and customized big data solutions tailored to meet the unique needs of their clients.
Impact on Business Decision-Making
The impact of big data on business decision-making cannot be overstated. By analyzing vast datasets, companies can gain valuable insights that inform strategic decisions, leading to increased efficiency and competitiveness.
Challenges and Solutions
However, the journey toward effective big data implementation is not without challenges. Big data companies in Kolkata face issues like data security and integration complexities. Innovative solutions, such as advanced encryption algorithms and seamless integration platforms, are being developed to address these challenges.
Future Prospects
Looking ahead, the future of big data in Kolkata appears promising. The integration of artificial intelligence and the Internet of Things is expected to open new avenues for data analysis, presenting exciting possibilities for businesses in the city.
Case Study: Successful Big Data Implementation
A closer look at a successful big data implementation in Kolkata reveals how a major e-commerce player utilized data analytics to enhance customer experience and optimize supply chain management.
Training and Skill Development
To keep up with the evolving landscape, there is a growing emphasis on training and skill development in the big data industry. Institutes in Kolkata offer comprehensive programs to equip professionals with the necessary skills.
Big Data and Small Businesses
Contrary to popular belief, big data is not exclusive to large enterprises. Big data companies in Kolkata are tailoring their services to suit the needs of small businesses, making data analytics accessible and affordable.
Ethical Considerations in Big Data
As the volume of data being processed increases, ethical considerations become paramount. Big data companies in Kolkata are taking steps to ensure data privacy and uphold ethical standards in their practices.
Expert Insights
Leading experts in the big data industry in Kolkata share their insights on current trends and future developments. Their perspectives shed light on the evolving nature of the industry.
Success Stories
Success stories from businesses in Kolkata highlight the transformative power of big data. From healthcare to finance, these stories underscore the positive impact that data analysis can have on diverse sectors.
Tips for Choosing a Big Data Analysis Company
For businesses considering a partnership with a big data company, careful consideration of factors such as experience, scalability, and data security is crucial. Avoiding common pitfalls in the selection process is key to a successful partnership.
Conclusion
In conclusion, the journey of big data analysis companies in Kolkata reflects a broader global trend. As businesses increasingly recognize the value of data, the role of big data analysis companies becomes indispensable. The future promises even greater advancements, making it an exciting time for both businesses and big data professionals in Kolkata.
marketxcel · 1 year ago
The Future of Market Research: Unveiling the Top 10 Emerging Trends
The landscape of market research is undergoing a transformative shift, driven by the convergence of technology, consumer behavior, and data-driven insights. Embracing these emerging trends empowers businesses to connect with their target audiences on a deeper level, adapt to changing market dynamics, and make informed decisions that drive success.
jcmarchi · 2 months ago
Sub-Quadratic Systems: Accelerating AI Efficiency and Sustainability
New Post has been published on https://thedigitalinsider.com/sub-quadratic-systems-accelerating-ai-efficiency-and-sustainability/
Artificial Intelligence (AI) is changing our world in remarkable ways, influencing industries like healthcare, finance, and retail. From recommending products online to diagnosing medical conditions, AI is everywhere. However, there is a growing efficiency problem that researchers and developers are working hard to solve. As AI models become more complex, they demand more computational power, putting a strain on hardware and driving up costs. For example, as model parameters increase, computational demands can increase by a factor of 100 or more. This need for more intelligent, efficient AI systems has led to the development of sub-quadratic systems.
Sub-quadratic systems offer an innovative solution to this problem. By breaking past the computational limits that traditional AI models often face, these systems enable faster calculations and use significantly less energy. Traditional AI models struggle with high computational complexity, particularly quadratic scaling, which can slow down even the most powerful hardware. Sub-quadratic systems, however, overcome these challenges, allowing AI models to train and run much more efficiently. This efficiency brings new possibilities for AI, making it accessible and sustainable in ways not seen before.
Understanding Computational Complexity in AI
The performance of AI models depends heavily on computational complexity. This term refers to how much time, memory, or processing power an algorithm requires as the size of the input grows. In AI, particularly in deep learning, this often means dealing with a rapidly increasing number of computations as models grow in size and handle larger datasets. We use Big O notation to describe this growth, and quadratic complexity O(n²) is a common challenge in many AI tasks. Put simply, if we double the input size, the computational needs can increase fourfold.
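A quick way to see quadratic growth is to count the operations of a naive all-pairs loop as the input doubles. The toy snippet below only tallies iterations and does not represent any particular AI workload.

```python
def pairwise_ops(n):
    """Count the operations of a naive all-pairs computation, which scales as n**2."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

for n in (500, 1_000, 2_000):
    print(n, pairwise_ops(n))
# 500 -> 250,000; 1,000 -> 1,000,000; 2,000 -> 4,000,000:
# each doubling of the input quadruples the work.
```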
AI models like neural networks, used in applications like Natural Language Processing (NLP) and computer vision, are notorious for their high computational demands. Models like GPT and BERT involve millions to billions of parameters, leading to significant processing time and energy consumption during training and inference.
According to research from OpenAI, training large-scale models like GPT-3 requires approximately 1,287 MWh of energy, equivalent to the emissions produced by five cars over their lifetimes. This high complexity can limit real-time applications and require immense computational resources, making it challenging to scale AI efficiently. This is where sub-quadratic systems step in, offering a way to handle these limitations by reducing computational demands and making AI more viable in various environments.
What are Sub-Quadratic Systems?
Sub-quadratic systems are designed to handle increasing input sizes more smoothly than traditional methods. Unlike quadratic systems with a complexity of O(n²), sub-quadratic systems require less time and fewer resources as inputs grow. Essentially, they are all about improving efficiency and speeding up AI processes.
Many AI computations, especially in deep learning, involve matrix operations. For example, multiplying two matrices usually has an O(n³) time complexity. However, innovative techniques like sparse matrix multiplication and structured matrices like Monarch matrices have been developed to reduce this complexity. Sparse matrix multiplication focuses on the most essential elements and ignores the rest, significantly reducing the number of calculations needed. These systems enable faster model training and inference, providing a framework for building AI models that can handle larger datasets and more complex tasks without requiring excessive computational resources.
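As a rough illustration of the sparse-matrix idea (a general sketch, not the kernels used in any production model), SciPy's compressed sparse row format stores and multiplies only the nonzero entries, so the work tracks the number of nonzeros rather than the full n² entries:

```python
import scipy.sparse as sp

n = 2_000
density = 0.01  # only 1% of entries are nonzero

# Two large random sparse matrices in CSR format.
A = sp.random(n, n, density=density, format="csr", random_state=0)
B = sp.random(n, n, density=density, format="csr", random_state=1)

C = A @ B  # work is proportional to the nonzeros involved, not n**3

print(f"nonzeros in A: {A.nnz:,} of {n * n:,} entries")
print(f"nonzeros in A @ B: {C.nnz:,}")
```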
The Shift Towards Efficient AI: From Quadratic to Sub-Quadratic Systems
AI has come a long way since the days of simple rule-based systems and basic statistical models. As researchers developed more advanced models, computational complexity quickly became a significant concern. Initially, many AI algorithms operated within manageable complexity limits. However, the computational demands escalated with the rise of deep learning in the 2010s.
Training neural networks, especially deep architectures like Convolutional Neural Networks (CNNs) and transformers, requires processing vast amounts of data and parameters, leading to high computational costs. This growing concern led researchers to explore sub-quadratic systems. They started looking for new algorithms, hardware solutions, and software optimizations to overcome the limitations of quadratic scaling. Specialized hardware like GPUs and TPUs enabled parallel processing, significantly speeding up computations that would have been too slow on standard CPUs. However, the real advances come from algorithmic innovations that efficiently use this hardware.
In practice, sub-quadratic systems are already showing promise in various AI applications. Natural language processing models, especially transformer-based architectures, have benefited from optimized algorithms that reduce the complexity of self-attention mechanisms. Computer vision tasks rely heavily on matrix operations and have also used sub-quadratic techniques to streamline convolutional processes. These advancements point to a future where computational resources are no longer the primary constraint, making AI more accessible to everyone.
Benefits of Sub-Quadratic Systems in AI
Sub-quadratic systems bring several vital benefits. First and foremost, they significantly enhance processing speed by reducing the time complexity of core operations. This improvement is particularly impactful for real-time applications like autonomous vehicles, where split-second decision-making is essential. Faster computations also mean researchers can iterate on model designs more quickly, accelerating AI innovation.
In addition to speed, sub-quadratic systems are more energy-efficient. Traditional AI models, particularly large-scale deep learning architectures, consume vast amounts of energy, raising concerns about their environmental impact. By minimizing the computations required, sub-quadratic systems directly reduce energy consumption, lowering operational costs and supporting sustainable technology practices. This is increasingly valuable as data centres worldwide struggle with rising energy demands. By adopting sub-quadratic techniques, companies can reduce their carbon footprint from AI operations by an estimated 20%.
Financially, sub-quadratic systems make AI more accessible. Running advanced AI models can be expensive, especially for small businesses and research institutions. By reducing computational demands, these systems allow for cost-effective scaling, particularly in cloud computing environments where resource usage translates directly into costs.
Most importantly, sub-quadratic systems provide a framework for scalability. They allow AI models to handle ever-larger datasets and more complex tasks without hitting the usual computational ceiling. This scalability opens up new possibilities in fields like big data analytics, where processing massive volumes of information efficiently can be a game-changer.
Challenges in Implementing Sub-Quadratic Systems
While sub-quadratic systems offer many benefits, they also bring several challenges. One of the primary difficulties is in designing these algorithms. They often require complex mathematical formulations and careful optimization to ensure they operate within the desired complexity bounds. This level of design demands a deep understanding of AI principles and advanced computational techniques, making it a specialized area within AI research.
Another challenge lies in balancing computational efficiency with model quality. In some cases, achieving sub-quadratic scaling involves approximations or simplifications that could affect the model’s accuracy. Researchers must carefully evaluate these trade-offs to ensure that the gains in speed do not come at the cost of prediction quality.
Hardware constraints also play a significant role. Despite advancements in specialized hardware like GPUs and TPUs, not all devices can efficiently run sub-quadratic algorithms. Some techniques require specific hardware capabilities to realize their full potential, which can limit accessibility, particularly in environments with limited computational resources.
Integrating these systems into existing AI frameworks like TensorFlow or PyTorch can be challenging, as it often involves modifying core components to support sub-quadratic operations.
Monarch Mixer: A Case Study in Sub-Quadratic Efficiency
One of the most exciting examples of sub-quadratic systems in action is the Monarch Mixer (M2) architecture. This innovative design uses Monarch matrices to achieve sub-quadratic scaling in neural networks, exhibiting the practical benefits of structured sparsity. Monarch matrices focus on the most critical elements in matrix operations while discarding less relevant components. This selective approach significantly reduces the computational load without compromising performance.
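The NumPy sketch below illustrates the general idea behind such structured matrices: replacing a single dense n-by-n multiply with block-diagonal factors and a permutation whose combined cost grows roughly as n^1.5 rather than n². It is a simplified illustration of the concept, with block counts chosen for convenience, not the Monarch Mixer's actual implementation.

```python
import numpy as np

n = 1024                 # input dimension; a perfect square keeps the sketch simple
b = int(np.sqrt(n))      # sqrt(n) blocks, each of size sqrt(n) x sqrt(n)

rng = np.random.default_rng(0)
# Two block-diagonal factors, each stored as b blocks of shape (b, b).
L_blocks = rng.standard_normal((b, b, b))
R_blocks = rng.standard_normal((b, b, b))

def block_diag_apply(blocks, x):
    # Each block multiplies its own slice of x: b matvecs of size b, about n**1.5 work.
    return np.einsum("ijk,ik->ij", blocks, x.reshape(b, b)).reshape(n)

def structured_apply(x):
    # block-diagonal -> interleaving permutation -> block-diagonal
    y = block_diag_apply(R_blocks, x)
    y = y.reshape(b, b).T.reshape(n)
    return block_diag_apply(L_blocks, y)

x = rng.standard_normal(n)
y = structured_apply(x)   # roughly 2 * n**1.5 multiply-adds vs n**2 for a dense matrix
print(y.shape)
```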
In practice, the Monarch Mixer architecture has demonstrated remarkable improvements in speed. For instance, it has been shown to accelerate both the training and inference phases of neural networks, making it a promising approach for future AI models. This speed enhancement is particularly valuable for applications that require real-time processing, such as autonomous vehicles and interactive AI systems. By lowering energy consumption, the Monarch Mixer reduces costs and helps minimize the environmental impact of large-scale AI models, aligning with the industry’s growing focus on sustainability.
The Bottom Line
Sub-quadratic systems are changing how we think about AI. They provide a much-needed solution to the growing demands of complex models by making AI faster, more efficient, and more sustainable. Implementing these systems comes with its own set of challenges, but the benefits are hard to ignore.
Innovations like the Monarch Mixer show us how focusing on efficiency can lead to exciting new possibilities in AI, from real-time processing to handling massive datasets. As AI develops, adopting sub-quadratic techniques will be necessary for advancing smarter, greener, and more user-friendly AI applications.
vivekguptahal · 2 years ago
A Data-Driven Approach to Healthcare - Brain Injury and Disease Research
Cloud adoption & data pipeline automation in healthcare
Traditionally, healthcare advancements have progressed slowly due to siloed research and delayed results. However, with cloud application modernization, all that is changing for good. A unique collaboration between life sciences organizations and digital solution providers is offering an unprecedented level of insight into managing conditions and achieving optimal patient outcomes. Cloud modernization is expanding healthcare organizations’ ability to use data pipeline automation to effectively diagnose patients.
A prime example of cloud adoption is a nonprofit research organization dedicated to biomedical research and technology. The organization has been instrumental in facilitating advances in brain injury and disease research through its launch of the first cloud-based, interactive platform that supports information and idea exchange to further progress in neuroscience research. It uses big data to promote computational innovation and discovery in brain diseases.
Co-created by Hitachi and other partners, this platform is a trusted portal where clinical researchers, physicians, and organizations can collaborate on research and the validation of emerging therapeutics.
The context of merging human and artificial intelligence for analyzing health data
Medical research data is becoming increasingly siloed, diverse, and complex. Managing this complexity requires a robust IT infrastructure capable of aggregating data across multiple studies and harnessing patient data to improve the healthcare system.
The organization needed an interactive, scalable platform capable of integrating diverse cohorts and investigators, and equipped with the high computing speed essential for machine learning and artificial intelligence applications.
These new capabilities would empower users to gain a comprehensive understanding of signature patterns within existing and emerging large-scale datasets and to foster collaboration to promote the efficient use of the research community’s collective knowledge of brain injuries and diseases.
With time, the organization recognized that meeting these challenges would require the expertise of specialists in data pipeline automation and healthcare data solutions who could satisfy the steep requirements of the healthcare industry. Having heard of Hitachi, the organization turned to us for our Cloud Managed Services.
Leveraging healthcare data analytics solutions to build more sophisticated infrastructure
The organization wanted to collaborate with Hitachi to upgrade the user interface, improve the platform experience for researchers, and strengthen the virtual analytical environment to ensure secure data management.
Hitachi delivered an integrated solution that encompassed each component of the build-out. This streamlined project management made the process more efficient, and data-driven healthcare innovation helped further modernize, streamline, and simplify the health diagnostic system for the research organization.
Infrastructure that enables innovation
Cloud modernization was central to helping the organization maximize value in its transformation journey and improve people's lives. While the organization began with a vision, advances in cloud-based data management, storage, and security brought that vision to fruition. The interactive platform now allows the organization to leverage best practices and tap into the power of data pipeline automation.
Hitachi’s commitment to social innovation 
For Hitachi, this project has particular resonance because it is aligned with its commitment to social innovation. To have played a role in accelerating this process and in bringing life-changing drugs and therapies to patients more quickly is always rewarding.
Discover how Hitachi is unlocking value for society with Social Innovation and Digital Transformation in Healthcare:
mostlysignssomeportents · 1 year ago
The surveillance advertising to financial fraud pipeline
Monday (October 2), I'll be in Boise to host an event with VE Schwab. On October 7–8, I'm in Milan to keynote Wired Nextfest.
Being watched sucks. Of all the parenting mistakes I've made, none haunt me more than the times my daughter caught me watching her while she was learning to do something, discovered she was being observed in a vulnerable moment, and abandoned her attempt:
https://www.theguardian.com/technology/blog/2014/may/09/cybersecurity-begins-with-integrity-not-surveillance
It's hard to be your authentic self while you're under surveillance. For that reason alone, the rise and rise of the surveillance industry – an unholy public-private partnership between cops, spooks, and ad-tech scum – is a plague on humanity and a scourge on the Earth:
https://pluralistic.net/2023/08/16/the-second-best-time-is-now/#the-point-of-a-system-is-what-it-does
But beyond the psychic damage surveillance metes out, there are immediate, concrete ways in which surveillance brings us to harm. Ad-tech follows us into abortion clinics and then sells the info to the cops back home in the forced birth states run by Handmaid's Tale LARPers:
https://pluralistic.net/2022/06/29/no-i-in-uter-us/#egged-on
And even if you have the good fortune to live in a state whose motto isn't "There's no 'I" in uter-US," ad-tech also lets anti-abortion propagandists trick you into visiting fake "clinics" who defraud you into giving birth by running out the clock on terminating your pregnancy:
https://pluralistic.net/2023/06/15/paid-medical-disinformation/#crisis-pregnancy-centers
The commercial surveillance industry fuels SWATting, where sociopaths who don't like your internet opinions or are steamed because you beat them at Call of Duty trick the cops into thinking that there's an "active shooter" at your house, provoking the kind of American policing autoimmune reaction that can get you killed:
https://www.cnn.com/2019/09/14/us/swatting-sentence-casey-viner/index.html
There's just a lot of ways that compiling deep, nonconsensual, population-scale surveillance dossiers can bring safety and financial harm to the unwilling subjects of our experiment in digital spying. The wave of "business email compromises" (the infosec term for impersonating your boss to you and tricking you into cleaning out the company bank accounts)? They start with spear phishing, a phishing attack that uses personal information – bought from commercial sources or ganked from leaks – to craft a virtual Big Store con:
https://www.fbi.gov/how-we-can-help-you/safety-resources/scams-and-safety/common-scams-and-crimes/business-email-compromise
It's not just spear-phishers. There are plenty of financial predators who run petty grifts – stock swindles, identity theft, and other petty cons. These scams depend on commercial surveillance, both to target victims (e.g. buying Facebook ads targeting people struggling with medical debt and worried about losing their homes) and to run the con itself (by getting the information needed to pull of a successful identity theft).
In "Consumer Surveillance and Financial Fraud," a new National Bureau of Economic Research paper, a trio of business-school profs – Bo Bian (UBC), Michaela Pagel (WUSTL) and Huan Tang (Wharton) – quantify the commercial surveillance industry's relationship to finance crimes:
https://www.nber.org/papers/w31692
The authors take advantage of a time-series of ZIP-code-accurate fraud complaint data from the Consumer Financial Protection Bureau, supplemented by complaints from the FTC, along with Apple's rollout of App Tracking Transparency, a change to app-based tracking on Apple mobile devices that turned off third-party commercial surveillance unless users explicitly opted into being spied on. More than 96% of Apple users blocked spying:
https://arstechnica.com/gadgets/2021/05/96-of-us-users-opt-out-of-app-tracking-in-ios-14-5-analytics-find/
In other words, they were able to see, neighborhood by neighborhood, what happened to financial fraud when users were able to block commercial surveillance.
What happened is, fraud plunged. Deprived of the raw material for committing fraud, criminals were substantially hampered in their ability to steal from internet users.
While this is something that security professionals have understood for years, this study puts some empirical spine into the large corpus of qualitative accounts of the surveillance-to-fraud pipeline.
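To picture how such an analysis might be set up, here is a hedged, simplified sketch of a difference-in-differences style regression on ZIP-by-month data; the file name, column names, and the iOS-share exposure measure are assumptions for illustration, not the paper's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per ZIP code (zip_code) per month, with
#   fraud_complaints: CFPB/FTC complaint count
#   ios_share: pre-period share of Apple devices in the ZIP (exposure to the change)
#   post_att: 1 for months after App Tracking Transparency rolled out
df = pd.read_csv("zip_month_fraud_panel.csv")

# ZIPs with more Apple users lost more third-party tracking after ATT, so the
# interaction term captures the effect of reduced surveillance on fraud.
# (A fuller specification would add ZIP and month fixed effects.)
model = smf.ols("fraud_complaints ~ post_att * ios_share", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["zip_code"]}
)

print(model.params["post_att:ios_share"])  # negative if less tracking means less fraud
```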
As the authors note in their conclusion, this analysis is timely. Google has just rolled out a new surveillance system, the deceptively named "Privacy Sandbox," that every Chrome user is being opted in to unless they find and untick three separate preference tickboxes. You should find and untick these boxes:
https://www.eff.org/deeplinks/2023/09/how-turn-googles-privacy-sandbox-ad-tracking-and-why-you-should
Google has spun, lied and bullied Privacy Sandbox into existence; whenever this program draws enough fire, they rename it (it used to be called FLoC). But as the Apple example showed, no one wants to be spied on – that's why Google makes you find and untick three boxes to opt out of this new form of surveillance.
There is no consensual basis for mass commercial surveillance. The story that "people don't mind ads so long as they're relevant" is a lie. But even if it was true, it wouldn't be enough, because beyond the harms to being our authentic selves that come from the knowledge that we're being observed, surveillance data is a crucial ingredient for all kinds of crime, harassment, and deception.
We can't rely on companies to spy on us responsibly. Apple may have blocked third-party app spying, but they effect nonconsensual, continuous surveillance of every Apple mobile device user, and lie about it:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
That's why we should ban commercial surveillance. We should outlaw surveillance advertising. Period:
https://www.eff.org/deeplinks/2022/03/ban-online-behavioral-advertising
Contrary to the claims of surveillance profiteers, this wouldn't reduce the income to ad-supported news and other media – it would increase their revenues, by letting them place ads without relying on the surveillance troves assembled by the Google/Meta ad-tech duopoly, who take the majority of ad-revenue:
https://www.eff.org/deeplinks/2023/05/save-news-we-must-ban-surveillance-advertising
We're 30 years into the commercial surveillance pandemic and Congress still hasn't passed a federal privacy law with a private right of action. But other agencies aren't waiting for Congress. The FTC and DoJ Antitrust Division have proposed new merger guidelines that allow regulators to consider privacy harms when companies merge:
https://www.regulations.gov/comment/FTC-2023-0043-1569
Think here of how Google devoured Fitbit and claimed massive troves of extremely personal data, much of which was collected because employers required workers to wear biometric trackers to get the best deal on health care:
https://www.eff.org/deeplinks/2020/04/google-fitbit-merger-would-cement-googles-data-empire
Companies can't be trusted to collect, retain or use our personal data wisely. The right "balance" here is to simply ban that collection, without an explicit opt-in. The way this should work is that companies can't collect private data unless users hunt down and untick three "don't spy on me" boxes. After all, that's the standard that Google has set.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/29/ban-surveillance-ads/#sucker-funnel
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
puckpocketed · 8 months ago
[voice of an anthropologist] after careful research and data gathering (5 mins of dicking around on the pages of a bunch of kings replyguys/beat reporters/pundits) eye believe i may have cracked the code : u can tell how frothing mad someone on kingstwt is by what naming convention they use to refer to a player.
Non-exhaustive List:
nicknames them (i.e. juice, Q/QB, kopi, arvie, real deal akil, big save dave/BSD): good bet they’re pretty happy with the player, usually followed up by a clip of said player popping off or some reportage of a stat that makes the player look good.
last name: they’re in Analysis mode and want to seem objective — they aren’t. they never will be. yeah twitter user clarke for norris, you definitely have no biases here babe!!! (they’re just like me fr CALL CLARKIE UP TO THE NHL RN IM SO SERIOUS JIM HILLER)
initials+player number: they’re a tumblr sleeper agent and this is their dogwhistle? (<- working theory)
SPECIAL subcategory!!! Pierre-Luc Dubois Derangement: they never call him dubie (that’s reserved for the actual la kings players and the apologists girlies [gn]) but they will call him PL, PLD, Dubois, 80 — and no matter what, without fail, they will find a way to point out his contract.
using NUMBER ONLY: they’re killing this player/players to death with rocks and want to seem objective but likeee… it comes off as MAJOR overcompensating 2 me <3
common/key phrases:
engaged: vibes-based barometer of how hard they think my disasterwife PLD is trying during the game, varies from person to person but generally stays within the same neighbourhood of agreeing with each other
intangibles: ok i wasn’t present for this one when it happened but jim hiller/kings management is obsessed with Andreas Englund “having intangibles” , which means Clarkie can’t come up from the AHL and everybody disliked that to the point “intangibles” is a meme.
sidebar — things i know about englund: he’s a swedish guy who looks like he churns butter in an apron while living in a cottage, but is actually the kings’ playoff goon (???) he’s STAPLED to jordan spence, who is a much better dman analytics wise and also eye test wise (funniest shit ever is how well spence does away from englund, even funnier is how often kingstwt brings it up)
the 1-3-1: the la kings’ hockey system. 1 guy out the front, 3 guys clogging up entry lanes through the neutral-zone/their own d-zone, 1 guy hanging back. no1 on kingstwt likes it and has wanted it gone for years — still, when the discourse comes around they immediately close ranks to become the biggest 1-3-1 proponent EVER. they will protect the sanctity of their hockey god-given right to play whatever the fuck system they want to!!! even if it’s incredibly annoying <3
uboat53 · 3 months ago
Well, we're about a month out from the presidential election, so it's time to look at the state of the race. And the state of the race is… yeah, there is no real state of the race.
Look, there's enough evidence out there to make a solid case that Trump has the best shot of winning and there's enough evidence to make a solid case that Harris has the best shot of winning. Given the quality (or lack thereof) of the data that we have, it's possible that it's a tie that will come down to a few dozen votes or that one candidate is already running away with it.
That said, it doesn't really matter much. The only thing you, as a voter, can do is vote, and hopefully you were already going to do that anyways. If not, just remember that only those who vote get the right to complain.
Make sure you're registered to vote and, if you feel like you want to do more, get in touch with other people and make sure they're registered as well. You can confirm your registration here if you need to.
After that, just make sure you get out and vote. Many states require employers to give workers time off to vote (in some cases paid), so check your state's rules and take advantage of that if you need to. Also, if your state allows absentee voting, you might take advantage of that as well.
As for the big picture, get used to that being fuzzy until all the votes are counted. The data we have on this election is uniquely poor quality because there are so many moving variables and demographics across the country are changing at light speed. To make it even worse, many states controlled by 2020 election deniers have put in place odd requirements such as hand-counting ballots that make it unlikely that we'll know the results in those states until at least several days after voting is over.
All you can do is what you were hopefully going to do anyways. The data analytics can make it sometimes seem as if the outcome is pre-determined and it doesn't really matter if you vote, but nothing is for certain until all the results are counted and certified. Think about how many elections in our lifetimes have come down to the wire, even 2016 which we were assured was a lock for Clinton.
At the end of the day, forget about all the analysis, all the gamesmanship, and all the data. Just do your part - register to vote, research the issues and candidates, and vote - and the rest will take care of itself. That's all any of us can really do.