#Big Data Security Market
Explore tagged Tumblr posts
Text
Unveiling the Machine Learning Market Dynamics: The AI Evolution
Artificial Intelligence (AI) is an emerging technology transforming how businesses and people operate. By powering new digital products and services and optimizing supply chains, it has reshaped the consumer experience. Machine Learning (ML), one of the core AI approaches, is gaining considerable momentum in the industry thanks to its rapid progress, which is fueling the expansion of the machine learning market globally. While some startups concentrate on solutions for specialized domains, many established technology firms are investing in the area to build AI platforms.
Significant growth in the information technology (IT) sector is driving the global market. Alongside this, the rise in cyberattacks and data theft worldwide has prompted many businesses to invest heavily in effective security systems, further boosting demand. The widespread integration of machine learning (ML) and artificial intelligence (AI) technologies with big data security solutions is also having a positive impact, and continuous technological advancements are creating a favorable market outlook.
ML-powered recommendation engines and personalized services enhance customer experience and engagement, driving demand in various consumer-centric industries. ML technologies enable automation of processes, predictive analytics, and improved decision-making, enhancing operational efficiency and reducing costs for businesses.
Players have pursued partnerships, joint ventures, agreements, and expansions. They are releasing new products with faster speeds and improved features to broaden their portfolios and maintain dominant market positions. Market players are also focusing on hybrid models that combine different ML techniques, and on federated learning approaches that enable training models across distributed networks without centralizing data.
1 note
·
View note
Text
The big data security market is forecast to be worth US$ 20,418.4 million in 2023 and is projected to increase to US$ 72,652.6 million by 2033. Sales of big data security solutions are anticipated to grow substantially, at a Compound Annual Growth Rate (CAGR) of 13.5% throughout the forecast period.
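The stated CAGR is easy to sanity-check from the two market values. Here's a minimal Python sketch, using only the figures quoted above:

```python
# CAGR sanity check: US$ 20,418.4M (2023) growing to US$ 72,652.6M (2033).
start, end, years = 20_418.4, 72_652.6, 10

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 13.5%
```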
0 notes
Text
The big data security market is projected to be valued at US$ 20,418.4 million in 2023 and is expected to rise to US$ 72,652.6 million by 2033. Sales of big data security solutions are expected to record a significant CAGR of 13.5% during the forecast period.
Storing personal data securely in this fast-paced world has become one of the most daunting tasks facing large organizations today. As technology advances, cyberattacks are growing more sophisticated by the day, eluding traditional security tools and driving demand for advanced protection techniques such as big data security.
0 notes
Text
#Big Data Security Market#Big Data Security Market size#Big Data Security Market share#Big Data Security Market trends#Big Data Security Market analysis#Big Data Security Market forecast#Big Data Security Market outlook
0 notes
Text
The Future of Real Estate in Jamaica: AI, Big Data, and Cybersecurity Shaping Tomorrow’s Market
#AI Algorithms#AI Real Estate Assistants#AI-Powered Chatbots#Artificial Intelligence#Automated Valuation Models#Big Data Analytics#Blockchain in Real Estate#Business Intelligence#cloud computing#Compliance Regulations#Cyber Attacks Prevention#Cybersecurity#Data encryption#Data Privacy#Data Security#data-driven decision making#Digital Property Listings#Digital Transactions#Digital Transformation#Fraud Prevention#Identity Verification#Internet of Things (IoT)#Machine Learning#Network Security#predictive analytics#Privacy Protection#Property Management Software#Property Technology#Real Estate Market Trends#real estate technology
0 notes
Text
China Telecom trains AI model with 1 trillion parameters on domestic chips
New Post has been published on https://thedigitalinsider.com/china-telecom-trains-ai-model-with-1-trillion-parameters-on-domestic-chips/
China Telecom, one of the country’s state-owned telecom giants, has created two LLMs that were trained solely on domestically-produced chips.
This breakthrough represents a significant step in China’s ongoing efforts to become self-reliant in AI technology, especially in light of escalating US restrictions designed to limit China’s access to advanced semiconductors.
According to the company’s Institute of AI, one of the models, TeleChat2-115B, and a second, unnamed model were trained on tens of thousands of Chinese-made chips. This achievement is especially noteworthy given the tighter US export rules that have limited China’s ability to purchase high-end processors from Nvidia and other foreign companies. In a statement shared on WeChat, the AI institute claimed that this accomplishment demonstrated China’s capability to independently train LLMs and signalled a new era of innovation and self-reliance in AI technology.
The scale of these models is remarkable. China Telecom stated that the unnamed LLM has one trillion parameters. In AI terminology, parameters are the variables a model learns during training; broadly speaking, the more parameters there are, the more complex and capable the model becomes.
Chinese companies are striving to keep pace with global AI leaders based outside the country. Washington’s export restrictions on Nvidia’s latest AI chips, such as the A100 and H100, have compelled China to seek alternatives. As a result, Chinese companies have developed their own processors to reduce reliance on Western technologies. The TeleChat2-115B model, for instance, has roughly 115 billion parameters, as its name suggests, and the company claims it performs on par with mainstream platforms.
China Telecom did not specify which company supplied the domestically-designed chips used to train its models. However, as previously discussed on these pages, Huawei’s Ascend chips play a key part in the country’s AI plans.
Huawei, which has faced US penalties in recent years, is also increasing its efforts in the artificial intelligence field. The company has recently started testing its latest AI processor, the Ascend 910C, with potential clients in the domestic market. Large Chinese server companies, as well as internet giants that have previously used Nvidia chips, are apparently testing the new chip’s performance. Huawei’s Ascend processors, as one of the few viable alternatives to Nvidia hardware, are viewed as a key component of China’s strategy to lessen its reliance on foreign technology.
In addition to Huawei, China Telecom is collaborating with other domestic chipmakers such as Cambricon, a Chinese start-up specialising in AI processors. The partnerships reflect a broader tendency in China’s tech industry to build a homegrown ecosystem of AI solutions, further shielding the country from the effects of US export controls.
By developing its own AI chips and technology, China is gradually reducing its dependence on foreign-made hardware, especially Nvidia’s highly sought-after and therefore expensive GPUs. While US sanctions make it difficult for Chinese companies to obtain the latest Nvidia hardware, a black market for foreign chips has emerged. Rather than risk operating in the grey market, many Chinese companies prefer to purchase lower-powered alternatives such as previous-gen models to maintain access to Nvidia’s official support and services.
China’s achievement reflects a broader shift in its approach to AI and semiconductor technology, emphasising self-sufficiency and resilience in an increasingly competitive global economy and in the face of American protectionist trade policies.
(Photo by Mark Kuiper)
See also: Has Huawei outsmarted Apple in the AI race?
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: artificial intelligence, chip, huawei, llm, Nvidia
#ai#ai & big data expo#AI chips#ai model#AI Race#American#amp#apple#applications#approach#artificial#Artificial Intelligence#automation#background#Big Data#billion#black market#california#China#chip#chips#Cloud#cloud computing#Companies#comprehensive#computing#conference#content#cyber#cyber security
0 notes
Text
If anyone wants to know why every tech company in the world right now is clamoring for AI like drowned rats scrabbling to board a ship, I decided to make a post to explain what's happening.
(Disclaimer to start: I'm a software engineer who's been employed full time since 2018. I am not a historian nor an overconfident Youtube essayist, so this post is my working knowledge of what I see around me and the logical bridges between pieces.)
Okay anyway. The explanation starts further back than what's going on now. I'm gonna start with the year 2000. The Dot Com Bubble just spectacularly burst. The model of "we get the users first, we learn how to profit off them later" went out in a no-money-having bang (remember this, it will be relevant later). A lot of money was lost. A lot of people ended up out of a job. A lot of startup companies went under. Investors left with a sour taste in their mouth and, in general, investment in the internet stayed pretty cooled for that decade. This was, in my opinion, very good for the internet as it was an era not suffocating under the grip of mega-corporation oligarchs and was, instead, filled with Club Penguin and I Can Haz Cheezburger websites.
Then around the 2010-2012 years, a few things happened. Interest rates got low, and then lower. Facebook got huge. The iPhone took off. And suddenly there was a huge new potential market of internet users and phone-havers, and the cheap money was available to start backing new tech startup companies trying to hop on this opportunity. Companies like Uber, Netflix, and Amazon either started in this time, or hit their ramp-up in these years by shifting focus to the internet and apps.
Now, every start-up tech company dreaming of being the next big thing has one thing in common: they need to start off by getting themselves massively in debt. Because before you can turn a profit you need to first spend money on employees and spend money on equipment and spend money on data centers and spend money on advertising and spend money on scale and and and
But also, everyone wants to be on the ship for The Next Big Thing that takes off to the moon.
So there is a mutual interest between new tech companies, and venture capitalists who are willing to invest $$$ into said new tech companies. Because if the venture capitalists can identify a prize pig and get in early, that money could come back to them 100-fold or 1,000-fold. In fact it hardly matters if they invest in 10 or 20 total bust projects along the way to find that unicorn.
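The portfolio math is worth spelling out. Here's a toy Python sketch with made-up numbers: fund 20 startups at $1M each, let 19 go to zero, and one 100x unicorn still quintuples the fund.

```python
# Toy VC portfolio math (all numbers invented for illustration).
investments = 20          # startups funded
cost_each = 1_000_000     # dollars per startup
returns = [0] * 19 + [100 * cost_each]  # 19 busts, one 100x unicorn

total_in = investments * cost_each
total_out = sum(returns)
print(f"invested ${total_in:,}, returned ${total_out:,} ({total_out/total_in:.0f}x)")
# -> invested $20,000,000, returned $100,000,000 (5x)
```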
But also, becoming profitable takes time. And that might mean being in debt for a long long time before that rocket ship takes off to make everyone onboard a gazzilionaire.
But luckily, for tech startup bros and venture capitalists, being in debt in the 2010's was cheap, and it only got cheaper between 2010 and 2020. If people could secure loans for ~3% or 4% annual interest, well then a $100,000 loan only really costs $3,000 of interest a year to keep afloat. And if inflation is higher than that or at least similar, you're still beating the system.
So from 2010 through early 2022, times were good for tech companies. Startups could take off with massive growth, showing massive potential for something, and venture capitalists would throw infinite money at them in the hopes of pegging just one winner who will take off. And supporting the struggling investments or the long-haulers remained pretty cheap to keep funding.
You hear constantly about "Such and such app has 10-bazillion users gained over the last 10 years and has never once been profitable", yet the thing keeps chugging along because the investors backing it aren't stressed about the immediate future, and are still banking on that "eventually" when it learns how to really monetize its users and turn that profit.
The pandemic in 2020 took a magnifying-glass-in-the-sun effect to this, as EVERYTHING was forcibly turned online which pumped a ton of money and workers into tech investment. Simultaneously, money got really REALLY cheap, bottoming out with historic lows for interest rates.
Then the tide changed with the massive inflation that struck late 2021. Because this all-gas no-brakes state of things was also contributing to off-the-rails inflation (along with your standard-fare greedflation and price gouging, given the extremely convenient excuses of pandemic hardships and supply chain issues). The federal reserve whipped out interest rate hikes to try to curb this huge inflation, which is like a fire extinguisher dousing and suffocating your really-cool, actively-on-fire party where everyone else is burning but you're in the pool. And then they did this more, and then more. And the financial climate followed suit. And suddenly money was not cheap anymore, and new loans became expensive, because loans that used to compound at 2% a year are now compounding at 7 or 8% which, in the language of compounding, is a HUGE difference. A $100,000 loan at a 2% interest rate, if not repaid a single cent in 10 years, accrues to $121,899. A $100,000 loan at an 8% interest rate, if not repaid a single cent in 10 years, more than doubles to $215,892.
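For anyone who wants to verify that loan arithmetic, here's a quick Python sketch of annual compounding with zero repayments, using the same numbers:

```python
# A $100,000 loan, never repaid, compounding annually for 10 years.
def loan_balance(principal: float, annual_rate: float, years: int) -> float:
    """Balance after `years` of annual compounding with no repayments."""
    return principal * (1 + annual_rate) ** years

for rate in (0.02, 0.08):
    print(f"{rate:.0%}: ${loan_balance(100_000, rate, 10):,.0f}")
# 2%: $121,899
# 8%: $215,892
```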
Now it is scary and risky to throw money at "could eventually be profitable" tech companies. Now investors are watching companies burn through their current funding and, when the companies come back asking for more, investors are tightening their coin purses instead. The bill is coming due. The free money is drying up and companies are under compounding pressure to produce a profit for their waiting investors who are now done waiting.
You get enshittification. You get quality going down and price going up. You get "now that you're a captive audience here, we're forcing ads or we're forcing subscriptions on you." Don't get me wrong, the plan was ALWAYS to monetize the users. It's just that it's come earlier than expected, with way more feet-to-the-fire than these companies were expecting. ESPECIALLY with Wall Street as the other factor in funding (public) companies, where Wall Street exhibits roughly the same temperament as a baby screaming crying upset that it's soiled its own diaper (maybe that's too mean a comparison to babies), and now companies are being put through the wringer for anything LESS than infinite growth that Wall Street demands of them.
Internal to the tech industry, you get MASSIVE wide-spread layoffs. You get an industry that used to be easy to land multiple job offers shriveling up and leaving recent graduates in a desperately awful situation where no company is hiring and the market is flooded with laid-off workers trying to get back on their feet.
Because those coin-purse-clutching investors DO love virtue-signaling efforts from companies that say "See! We're not being frivolous with your money! We only spend on the essentials." And this is true even for MASSIVE, PROFITABLE companies, because those companies' value is based on the Rich Person Feeling Graph (their stock) rather than the literal profit money. A company making a genuine gazillion dollars a year still tears through layoffs and freezes hiring and removes the free batteries from the printer room (totally not speaking from experience, surely) because the investors LOVE when you cut costs and take away employee perks. The "beer on tap, ping pong table in the common area" era of tech is drying up. And we're still unionless.
Never mind that last part.
And then in early 2023, AI (more specifically, Chat-GPT which is OpenAI's Large Language Model creation) tears its way into the tech scene with a meteor's amount of momentum. Here's Microsoft's prize pig, which it invested heavily in and is gallivanting around the pig-show with, to the desperate jealousy and rapture of every other tech company and investor wishing it had that pig. And for the first time since the interest rate hikes, investors have dollar signs in their eyes, both venture capital and Wall Street alike. They're willing to restart the hose of money (even with the new risk) because this feels big enough for them to take the risk.
Now all these companies, who were in varying stages of sweating as their bill came due, or wringing their hands as their stock prices tanked, see a single glorious gold-plated rocket up out of here, the likes of which haven't been seen since the free money days. It's their ticket to buy time, and buy investors, and say "see THIS is what will wring money forth, finally, we promise, just let us show you."
To be clear, AI is NOT profitable yet. It's a money-sink. Perhaps a money-black-hole. But everyone in the space is so wowed by it that there is a wide-spread and powerful conviction that it will become profitable and earn its keep. (Let's be real, half of that profit "potential" is the promise of automating away jobs of pesky employees who peskily cost money.) It's a tech-space industrial revolution that will automate away skilled jobs, and getting in on the ground floor is the absolute best thing you can do to get your pie slice's worth.
It's the thing that will win investors back. It's the thing that will get the investment money coming in again (or, get it second-hand if the company can be the PROVIDER of something needed for AI, which other companies with venture backing will pay handsomely for). It's the thing companies are terrified of missing out on, lest it leave them utterly irrelevant in a future where not having AI-integration is like not having a mobile phone app for your company or not having a website.
So I guess to reiterate on my earlier point:
Drowned rats. Swimming to the one ship in sight.
35K notes
·
View notes
Text
https://www.htfmarketintelligence.com/report/global-big-data-security-market
0 notes
Text
Bossware is unfair (in the legal sense, too)
You can get into a lot of trouble by assuming that rich people know what they're doing. For example, you might assume that ad-tech works – bypassing peoples' critical faculties, reaching inside their minds and brainwashing them with Big Data insights, because if that's not what's happening, then why would rich people pour billions into those ads?
https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#adtech-bubble
You might assume that private equity looters make their investors rich, because otherwise, why would rich people hand over trillions for them to play with?
https://thenextrecession.wordpress.com/2024/11/19/private-equity-vampire-capital/
The truth is, rich people are suckers like the rest of us. If anything, succeeding once or twice makes you an even bigger mark, with a sense of your own infallibility that inflates to fill the bubble your yes-men seal you inside of.
Rich people fall for scams just like you and me. Anyone can be a mark. I was:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
But though rich people can fall for scams the same way you and I do, the way those scams play out is very different when the marks are wealthy. As Keynes had it, "The market can remain irrational longer than you can remain solvent." When the marks are rich (or worse, super-rich), they can be played for much longer before they go bust, creating the appearance of solidity.
Noted Keynesian John Kenneth Galbraith had his own thoughts on this. Galbraith coined the term "bezzle" to describe "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." In that magic interval, everyone feels better off: the mark thinks he's up, and the con artist knows he's up.
Rich marks have looong bezzles. Empirically incorrect ideas grounded in the most outrageous superstition and junk science can take over whole sections of your life, simply because a rich person – or rich people – are convinced that they're good for you.
Take "scientific management." In the early 20th century, the con artist Frederick Taylor convinced rich industrialists that he could increase their workers' productivity through a kind of caliper-and-stopwatch driven choreographry:
https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust
Taylor and his army of labcoated sadists perched at the elbows of factory workers (whom Taylor referred to as "stupid," "mentally sluggish," and as "an ox") and scripted their motions to a fare-thee-well, transforming their work into a kind of kabuki of obedience. They weren't more efficient, but they looked smart, like obedient robots, and this made their bosses happy. The bosses shelled out fortunes for Taylor's services, even though the workers who followed his prescriptions were less efficient and generated fewer profits. Bosses were so dazzled by the spectacle of a factory floor of crisply moving people interfacing with crisply working machines that they failed to understand that they were losing money on the whole business.
To the extent they noticed that their revenues were declining after implementing Taylorism, they assumed that this was because they needed more scientific management. Taylor had a sweet con: the worse his advice performed, the more reasons there were to pay him for more advice.
Taylorism is a perfect con to run on the wealthy and powerful. It feeds into their prejudice and mistrust of their workers, and into their misplaced confidence in their own ability to understand their workers' jobs better than their workers do. There's always a long dollar to be made playing the "scientific management" con.
Today, there's an app for that. "Bossware" is a class of technology that monitors and disciplines workers, and it was supercharged by the pandemic and the rise of work-from-home. Combine bossware with work-from-home and your boss gets to control your life even in your own home – "work from home" becomes "live at work":
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
Gig workers are at the white-hot center of bossware. Gig work promises "be your own boss," but bossware puts a Taylorist caliper wielder into your phone, monitoring and disciplining you as you drive your own car around delivering parcels or picking up passengers.
In automation terms, a worker hitched to an app this way is a "reverse centaur." Automation theorists call a human augmented by a machine a "centaur" – a human head supported by a machine's tireless and strong body. A "reverse centaur" is a machine augmented by a human – like the Amazon delivery driver whose app goads them to make inhuman delivery quotas while punishing them for looking in the "wrong" direction or even singing along with the radio:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
Bossware pre-dates the current AI bubble, but AI mania has supercharged it. AI pumpers insist that AI can do things it positively cannot do – rolling out an "autonomous robot" that turns out to be a guy in a robot suit, say – and rich people are groomed to buy the services of "AI-powered" bossware:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
For an AI scammer like Elon Musk or Sam Altman, the fact that an AI can't do your job is irrelevant. From a business perspective, the only thing that matters is whether a salesperson can convince your boss that an AI can do your job – whether or not that's true:
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
The fact that AI can't do your job, but that your boss can be convinced to fire you and replace you with the AI that can't do your job, is the central fact of the 21st century labor market. AI has created a world of "algorithmic management" where humans are demoted to reverse centaurs, monitored and bossed about by an app.
The techbro's overwhelming conceit is that nothing is a crime, so long as you do it with an app. Just as fintech is designed to be a bank that's exempt from banking regulations, the gig economy is meant to be a workplace that's exempt from labor law. But this wheeze is transparent, and easily pierced by enforcers, so long as those enforcers want to do their jobs. One such enforcer is Alvaro Bedoya, an FTC commissioner with a keen interest in antitrust's relationship to labor protection.
Bedoya understands that antitrust has a checkered history when it comes to labor. As he's written, the history of antitrust is a series of incidents in which Congress revised the law to make it clear that forming a union was not the same thing as forming a cartel, only to be ignored by boss-friendly judges:
https://pluralistic.net/2023/04/14/aiming-at-dollars/#not-men
Bedoya is no mere historian. He's an FTC Commissioner, one of the most powerful regulators in the world, and he's profoundly interested in using that power to help workers, especially gig workers, whose misery starts with systemic, wide-scale misclassification as contractors:
https://pluralistic.net/2024/02/02/upward-redistribution/
In a new speech to NYU's Wagner School of Public Service, Bedoya argues that the FTC's existing authority allows it to crack down on algorithmic management – that is, algorithmic management is illegal, even if you break the law with an app:
https://www.ftc.gov/system/files/ftc_gov/pdf/bedoya-remarks-unfairness-in-workplace-surveillance-and-automated-management.pdf
Bedoya starts with a delightful analogy to The Hawtch-Hawtch, a mythical town from a Dr Seuss poem. The Hawtch-Hawtch economy is based on beekeeping, and the Hawtchers develop an overwhelming obsession with their bee's laziness, and determine to wring more work (and more honey) out of him. So they appoint a "bee-watcher." But the bee doesn't produce any more honey, which leads the Hawtchers to suspect their bee-watcher might be sleeping on the job, so they hire a bee-watcher-watcher. When that doesn't work, they hire a bee-watcher-watcher-watcher, and so on and on.
For gig workers, it's bee-watchers all the way down. Call center workers are subjected to "AI" video monitoring, and "AI" voice monitoring that purports to measure their empathy. Another AI times their calls. Two more AIs analyze the "sentiment" of the calls and the success of workers in meeting arbitrary metrics. On average, a call-center worker is subjected to five forms of bossware, which stand at their shoulders, marking them down and brooking no debate.
For example, when an experienced call center operator fielded a call from a customer with a flooded house who wanted to know why no one from her boss's repair plan system had come out to address the flooding, the operator was punished by the AI for failing to try to sell the customer a repair plan. There was no way for the operator to protest that the customer had a repair plan already, and had called to complain about it.
Workers report being sickened by this kind of surveillance, literally – stressed to the point of nausea and insomnia. Ironically, among the most pervasive sources of automation-driven sickness are the "AI wellness" apps that AI hucksters sell to bosses:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
The FTC has broad authority to block "unfair trade practices," and Bedoya builds the case that this is an unfair trade practice. Proving an unfair trade practice is a three-part test: a practice is unfair if it causes "substantial injury," can't be "reasonably avoided," and isn't outweighed by a "countervailing benefit." In his speech, Bedoya makes the case that algorithmic management satisfies all three steps and is thus illegal.
On the question of "substantial injury," Bedoya describes the workday of warehouse workers working for ecommerce sites. He describes one worker who is monitored by an AI that requires him to pick and drop an object off a moving belt every 10 seconds, for ten hours per day. The worker's performance is tracked by a leaderboard, and supervisors punish and scold workers who don't make quota, and the algorithm auto-fires if you fail to meet it.
Under those conditions, it was only a matter of time until the worker experienced injuries to two of his discs and was permanently disabled, with the company being found 100% responsible for this injury. OSHA found a "direct connection" between the algorithm and the injury. No wonder warehouses sport vending machines that sell painkillers rather than sodas. It's clear that algorithmic management leads to "substantial injury."
What about "reasonably avoidable?" Can workers avoid the harms of algorithmic management? Bedoya describes the experience of NYC rideshare drivers who attended a round-table with him. The drivers describe logging tens of thousands of successful rides for the apps they work for, on promise of "being their own boss." But then the apps start randomly suspending them, telling them they aren't eligible to book a ride for hours at a time, sending them across town to serve an underserved area and still suspending them. Drivers who stop for coffee or a pee are locked out of the apps for hours as punishment, and so drive 12-hour shifts without a single break, in hopes of pleasing the inscrutable, high-handed app.
All this, as drivers' pay is falling and their credit card debts are mounting. No one will explain to drivers how their pay is determined, though the legal scholar Veena Dubal's work on "algorithmic wage discrimination" reveals that rideshare apps temporarily increase the pay of drivers who refuse rides, only to lower it again once they're back behind the wheel:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
This is like the pit boss who gives a losing gambler some freebies to lure them back to the table, over and over, until they're broke. No wonder they call this a "casino mechanic." There's only two major rideshare apps, and they both use the same high-handed tactics. For Bedoya, this satisfies the second test for an "unfair practice" – it can't be reasonably avoided. If you drive rideshare, you're trapped by the harmful conduct.
The final prong of the "unfair practice" test is whether the conduct has "countervailing value" that makes up for this harm.
To address this, Bedoya goes back to the call center, where operators' performance is assessed by "Speech Emotion Recognition" algorithms, a pseudoscientific hoax that purports to be able to determine your emotions from your voice. These SERs don't work – for example, they might interpret a customer's laughter as anger. But they fail differently for different kinds of workers: workers with accents – from the American south, or the Philippines – attract more disapprobation from the AI. Half of all call center workers are monitored by SERs, and a quarter of workers have SERs scoring them "constantly."
Bossware AIs also produce transcripts of these workers' calls, but workers with accents find them "riddled with errors." These are consequential errors, since their bosses assess their performance based on the transcripts, and yet another AI produces automated work scores based on them.
In other words, algorithmic management is a procession of bee-watchers, bee-watcher-watchers, and bee-watcher-watcher-watchers, stretching to infinity. It's junk science. It's not producing better call center workers. It's producing arbitrary punishments, often against the best workers in the call center.
There is no "countervailing benefit" to offset the unavoidable substantial injury of life under algorithmic management. In other words, algorithmic management fails all three prongs of the "unfair practice" test, and it's illegal.
What should we do about it? Bedoya builds the case for the FTC acting on workers' behalf under its "unfair practice" authority, but he also points out that the lack of worker privacy is at the root of this hellscape of algorithmic management.
He's right. The last major update Congress made to US privacy law was in 1988, when they banned video-store clerks from telling the newspapers which VHS cassettes you rented. The US is long overdue for a new privacy regime, and workers under algorithmic management are part of a broad coalition that's closer than ever to making that happen:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
Workers should have the right to know which of their data is being collected, who it's being shared with, and how it's being used. We all should have that right. That's what the actors' strike was partly motivated by: actors who were being ordered to wear mocap suits to produce data that could be used to produce a digital double of them, "training their replacement," but the replacement was a deepfake.
With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like Jan 6 rioters whose location was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters who experienced this far more often, and with far worse consequences, than the insurrectionists.
The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery.
The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#alvaro bedoya#ftc#workers#algorithmic management#veena dubal#bossware#taylorism#neotaylorism#snake oil#dr seuss#ai#sentiment analysis#digital phrenology#speech emotion recognition#shitty technology adoption curve
2K notes
·
View notes
Text
"The Netherlands is pulling even further ahead of its peers in the shift to a recycling-driven circular economy, new data shows.
According to the European Commission’s statistics office, 27.5% of the material resources used in the country come from recycled waste.
For context, Belgium is a distant second, with a “circularity rate” of 22.2%, while the EU average is 11.5% – a mere 0.8 percentage point increase from 2010.
“We are a frontrunner, but we have a very long way to go still, and we’re fully aware of that,” Martijn Tak, a policy advisor in the Dutch ministry of infrastructure and water management, tells The Progress Playbook.
The Netherlands aims to halve the use of primary abiotic raw materials by 2030 and run the economy entirely on recycled materials by 2050. Amsterdam, a pioneer of the “doughnut economics” concept, is behind much of the progress.
Why it matters
The world produces some 2 billion tonnes of municipal solid waste each year, and this could rise to 3.4 billion tonnes annually by 2050, according to the World Bank.
Landfills are already a major contributor to planet-heating greenhouse gases, and discarded trash takes a heavy toll on both biodiversity and human health.
“A circular economy is not the goal itself,” Tak says. “It’s a solution for societal issues like climate change, biodiversity loss, environmental pollution, and resource-security for the country.”
A fresh approach
While the Netherlands initially focused primarily on waste management, “we realised years ago that’s not good enough for a circular economy.”
In 2017, the state signed a “raw materials agreement” with municipalities, manufacturers, trade unions and environmental organisations to collaborate more closely on circular economy projects.
It followed that up with a national implementation programme, and in early 2023, published a roadmap to 2030, which includes specific targets for product groups like furniture and textiles. An English version was produced so that policymakers in other markets could learn from the Netherlands’ experiences, Tak says.
The programme is focused on reducing the volume of materials used throughout the economy, partly by enhancing efficiencies, substituting bio-based and recycled materials for primary raw ones, extending the lifetimes of products wherever possible, and recycling.
It also aims to factor environmental damage into product prices, require a certain percentage of second-hand materials in the manufacturing process, and promote design methods that extend the lifetimes of products by making them easier to repair.
There’s also an element of subsidisation, including funding for “circular craft centres and repair cafés”.
This idea is already in play. In Amsterdam, a repair centre run by refugees, and backed by the city and outdoor clothing brand Patagonia, is helping big brands breathe new life into old clothes.
Meanwhile, government ministries aim to aid progress by prioritising the procurement of recycled or recyclable electrical equipment and construction materials, for instance.
State support is critical to levelling the playing field, analysts say...
Long Road Ahead
The government also wants manufacturers – including clothing and beverages companies – to take full responsibility for products discarded by consumers.
“Producer responsibility for textiles is already in place, but it’s work in progress to fully implement it,” Tak says.
And the household waste collection process remains a challenge considering that small city apartments aren’t conducive to having multiple bins, and sparsely populated rural areas are tougher to service.
“Getting the collection system right is a challenge, but again, it’s work in progress.”
...Nevertheless, Tak says wealthy countries should be leading the way towards a fully circular economy as they’re historically the biggest consumers of natural resources."
-via The Progress Playbook, December 13, 2023
#netherlands#dutch#circular economy#waste management#sustainable#recycle#environment#climate action#pollution#plastic pollution#landfill#good news#hope
521 notes
·
View notes
Text
So NFTgate has now hit tumblr - I made a thread about it on my twitter, but I'll talk a bit more about it here as well in slightly more detail. It'll be a long one, sorry! Using my degree for something here. This is not intended to sway you in one way or the other - merely to inform so you can make your own decision and so that you're aware of this, because it will happen again, with many other artists you know.
Let's start at the basics: NFT stands for 'non fungible token', which you should read as 'passcode you can't replicate'. These codes are stored in blocks in what is essentially a huge ledger of records, all chained together - a blockchain. Blockchain is encoded in such a way that you can't edit one block without editing the whole chain, meaning that when the data is validated it comes back 'negative' if it has been tampered with. This makes it a really, really safe method of storing data, and managing access to said data. For example, verifying that a bank account belongs to the person that says that is their bank account.
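To make that tamper-evidence concrete, here's a toy Python sketch of a hash chain - a deliberately simplified model, not a real blockchain. Each block's hash covers the previous block's hash, so editing any block breaks validation for everything after it:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash covers the previous hash, chaining the blocks together.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny three-block chain.
chain, prev = [], "0" * 64  # all-zero "genesis" hash
for data in ["alice pays bob", "bob pays carol", "carol pays dan"]:
    prev = block_hash(prev, data)
    chain.append((data, prev))

# Tamper with the first block, then re-validate the whole chain:
tampered = [("alice pays mallory", chain[0][1])] + chain[1:]
prev = "0" * 64
for data, stored in tampered:
    prev = block_hash(prev, data)
    print(data, "->", "ok" if prev == stored else "INVALID")
# The edited block AND every block after it fail validation.
```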
For most people, the association with NFT's is bitcoin and Bored Ape, and that's honestly fair. The way that used to work - and why it was such a scam - is that you essentially purchased a receipt that said you owned digital space - not the digital space itself. That receipt was the NFT. So, in reality, you did not own any goods, that receipt had no legal grounds, and its value was completely made up and not based on anything. On top of that, these NFTs were purchased almost exclusively with cryptocurrency, which at the time used a verification method called proof of work, which is terrible for the environment because it requires insane amounts of electricity and computing power to verify. The carbon footprint for NFTs and coins at this time was absolutely insane.
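As an aside, here's a toy Python sketch of why proof of work is so power-hungry: "mining" means brute-forcing a nonce until a hash meets a difficulty target, so the work is wasted computation by design. The difficulty value here is made up for illustration.

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Brute-force a nonce until the hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # finding this is slow; verifying it is one hash
        nonce += 1

# Each extra leading zero multiplies the expected work by 16 -
# real networks tune difficulty so blocks take minutes even on huge farms.
print("found nonce:", mine("some block data", difficulty=5))
```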
In short, Bored Apes were just a huge tech fad with the intention to make a huge profit regardless of the cost, which resulted in the large market crash late last year. NFTs in this form are without value.
However, NFTs are just tech by itself more than they are some company that uses them. NFTs do have real-life, useful applications, particularly in data storage and verification. Research is being done to see if we can use blockchain to safely store patient data, or use it for bank wire transfers of extremely large amounts. That's cool stuff!
So what exactly is Käärijä doing? Kä is not selling NFTs in the traditional way you might have become familiar with. In this use-case, the NFT is in essence a software key that gives you access to a digital space. For the raffle, the NFT was basically your ticket number. This is a very secure way of doing so, assuring individuality, but also that no one can replicate that code and win through a false method. You are paying for a legitimate product - the NFT is your access to that product.
What about the environmental impact in this case? We've thankfully made leaps and bounds in advancing the tech to reduce the carbon footprint as well as general mitigations to avoid expanding it over time. One big thing is shifting from proof of work verification to proof of space or proof of stake verifications, both of which require much less power in order to work. It seems that Kollekt is partnered with Polygon, a company that offers blockchain technology with the intention to become climate positive as soon as possible. Numbers on their site are very promising, they appear to be using proof of stake verification, and all-around appear more interested in the tech than the profits it could offer.
But most importantly: Kollekt does not allow for purchases made with cryptocurrency, and that is the real pisser from an environmental perspective. Cryptocurrency purchases require the most active verification across systems in order to go through - this is what bitcoin mining is, essentially. The fact that this website does not use it means good things in terms of carbon footprint.
But why not use something like Patreon? I can't tell you. My guess is that Patreon is a monthly recurring service and they wanted something one-time. Kollekt is based in Helsinki, and word is that Mikke (who is running this) is friends with folks on the team. These are all contributing factors, I would assume, but that's entirely an assumption and you can't take it as fact.
Is this a good thing/bad thing? That I also can't tell you - you have to decide that for yourself. It's not a scam, it's not crypto, just a service that sits on the blockchain. But it does have higher carbon output than a lot of other services do, and its exact nature is not publicly disclosed. This isn't intended to sway you to say one or the other, but merely to give you the proper understanding of what NFTs are as a whole and what they are in this particular case so you can make that decision for yourself.
95 notes
·
View notes
Note
I know you love Scrivener, but do you know anything about Ellipsus? It's meant to be an alternative to google docs for collaborative writing.
I heard about them when they dropped nanowrimo as a sponsor over their inclusion of AI bullshit, which seemed promising. And digging around on their homepage I saw mentions of beta reading and ao3, and apparently they're trying to promote themselves on Tumblr now.
So it really sounds like we're the target audience, which could be great, but I don't know enough to be able to tell if there's an obvious catch somewhere?
--
This is the first I've heard of them. A quick scroll through their website seems promising.
As usual, the basic questions are:
How much does this product cost to develop?
Do they have a business plan that makes sense with that cost?
This kind of software can, theoretically, be made by a few friends dicking around, not a huge programmer team all of whom have it as their primary job, so it isn't the pile of massive red flags that all attempts at social media are.
From the site:
"Today we are a small, close-knit team of seven, located across the post-capitalist landscapes of Berlin, Bologna, Buenos Aires, and Szczecin. (So much for our alliteration-based hiring strategy.) True to our mission, we're a progressive, remote-friendly company that prioritizes creativity, community, and creative exchange."
Jobs are listed as: Co-founder and CEO, Co-founder and community, Product and marketing, Design, and Engineering x3.
That seems like a reasonable breakdown and a size of team that could possibly be paid for with some non-insane business model.
The types of red flags we're looking for are
"We want to be the next instagram!"
Many idea people with nebulous skills, few programmers
Thinking you can run tumblr with three programmers
Thinking you can pay for 100 programmers with a cheapass subscription model
Programmers are random, cheap contract workers the founders don't know
Venture capital from sources that will want a big payout rather than support from people who share the goals/values of the team
Extremely overcrowded field with tons of products that do exactly this already
Unclear nature of product or a product that doesn't seem to actually have a market
etc.
What they say about money is in the FAQ:
Will Ellipsus have a paid plan? In order to grow the team and fund ongoing feature development, we will need to charge for a version of Ellipsus at some point. A paid version would be targeting users with specific needs related to advanced security, data syncing, and collaboration. But there will always be a free version of Ellipsus, and we want to be as generous as possible in what's included on that free plan (e.g., unlimited docs and drafts, for starters). It takes time to build a great freemium experience (not to mention a premium product people will happily pay for), which is why we won't roll that out in 2024. While the features that will be included in our paid plan aren't final-final, we can share that everything in the product today will be included in our free plan.
This sounds reasonable. It just remains to be seen whether they keep at it or go belly up (taking your data with them). I guess you'd have to know more about the specific people building this to decide whether they'll be reliable.
The biggest potential issues I see are it being difficult to get people to ditch google docs despite its issues, this taking off big time and the owners deciding to sell it for $$$$$$ to someone who will then ruin it, or the team just not being competent.
But since I don't know any of them, I have no idea how good they are at business.
70 notes
·
View notes
Text
crowdstrike: hot take 1
It's too early in the news cycle to say anything truly smart, but to sum things up, what I know so far:
there was no "hack" or cyberattack or data breach*
a private IT security company called CrowdStrike released a faulty update which practically disabled all its desktop (?) Windows workstations (laptops too, but maybe not servers? not sure)
the cause has been found and a fix is on the way
as it stands now, the fix will have to be manually applied (in person) to each affected workstation - in practice maybe 5, maybe 30 minutes of work per machine. The number of affected computers is also unknown, but it could very well be tens (or hundreds) of thousands across thousands of large, multinational enterprises.
(The fix can be applied manually if you have a-bit-more-than-basic knowledge of computers)
Things that are currently safe to assume:
this wasn't a fault of any single individual, but of a process (workflow on the side of CrowdStrike) that didn't detect the fault ahead of time
[most likely] it's not that someone was incompetent or stupid - but we don't have the root cause analysis available yet
deploying bugfixes on Fridays is a bad idea
*The obligatory warning part:
Just because this wasn't a cyberattack, doesn't mean there won't be related security breaches of all kinds in all industries. The chaos, panic, uncertainty, and very soon also exhaustion of people dealing with the fallout of the issue will create a perfect storm for actually malicious actors that will try to exploit any possible vulnerability in companies' vulnerable state.
The analysis / speculation part:
globalization bad lol
OK, more seriously: I had not even heard of CrowdStrike until today, and I'm not a security engineer. I'm a developer with mild to moderate (outsider) understanding of vulnerabilities.
OK some background / basics first
It's very common for companies of any size to have more to protect their digital assets than just an antivirus and a firewall. Large companies (Delta Airlines) can afford to pay other large companies to provide security solutions for them (CrowdStrike). These days, to avoid bad software of any kind - malware - you need a complex suite of software that protects you from all sides:
desktop/laptop: antivirus, firewall, secure DNS, avoiding insecure WiFi, browser exploits, system patches, email scanner, phishing on web, phishing via email, physical access, USB thumb drive, motherboard/BIOS/UEFI vulnerabilities or built-in exploits made by the manufacturers or the Chinese government,
person/phone: phishing via SMS, phishing via calls, iOS/Android OS vulnerabilities, mobile app vulnerabilities, mobile apps that masquerade as useful while harvesting your data, vulnerabilities in things like WhatsApp where a glitched JPG picture sent to you can expose your data, ...
servers: mostly same as above, except that servers often have to deal with millions of requests per day, most of them valid, and at least some of the servers need to be connected to the internet 24/7
CDN and cloud services: fundamentally, an average big company today relies on dozens or hundreds of other big internet companies (AWS / Azure / GCP / Apple / Google) which in turn rely on hundreds of other companies to outsource a lot of tasks (like harvesting your data and sending you marketing emails)
infrastructure - routers... modems... your Alexa is spying on you... i'm tired... etc.
Anyway if you drifted to sleep in the previous paragraph I don't blame you. I'm genuinely just scratching the surface. Cybersecurity is insanely important today, and it's insanely complex too.
The reason why the incident blue-screened the machines is that to avoid malware, a lot of the anti-malware has to run in a more "privileged" mode, meaning they exist very close to the "heart" of Windows (or any other OS - the heart is called kernel). However, on this level, a bug can crash the system a lot more easily. And it did.
OK OK the actual hot lukewarm take finally
I didn't expect to get hit by y2k bug in the middle of 2024, but here we are.
As bad as it was, this only affected a small portion of all computers - in the ballpark of ~0.001% or even 0.0001% - but already caused disruptions to flights and hospitals in a big chunk of the world.
maybe-FAQ:
"Oh but this would be avoided if they weren't using the Crowdwhatever software" - true. However, this kind of mistake is not exclusive to them.
"Haha windows sucks, Linux 4eva" - I mean. Yeah? But no. Conceptually there is nothing that would prevent this from happening on Linux, if only there was anyone actually using it (on desktop).
"But really, Windows should have a better protection" - yes? no? This is a very difficult, technical question, because for kernel drivers the whole point is that 1. you trust them, and 2. they need the super-powerful-unrestrained access to work as intended, and 3. you _need_ them to be blazing fast, so babysitting them from the Windows perspective is counterproductive. It's a technical issue with no easy answers on this level.
"But there was some issue with Microsoft stuff too." - yes, but it's unknown if they are related, and at this point I have not seen any solid info about it.
The point is, in a deeply interconnected world, it's sort of a miracle that this isn't happening more often, and on a wider scale. Both bugfixes and new bugs are deployed every minute to some software somewhere in the world, because we're all in a rush to make money and pay rent and meet deadlines.
Increased monoculture in IT is bad for everyone. Whichever OS, whichever brand, whichever security solution provider - the more popular they are, the better visible their mistakes will be.
As much as it would be fun to make jokes like "CrowdStroke", I'm not even particularly mad at the company (at this point - that might change when I hear about their QA process). And no, I'm not even mad at Windows, as explained in the pseudo-FAQ.
The ultimate hot take? If at all possible, don't rely on anything related to computers. Technical problems are caused by technical solutions.
#crowdstrike#cybersecurity#anyway i'm microdosing today so it's probably too boring to read#but hopefully it at least mostly made sense#to be honest I wanted to have more of a hot take#but the truth is mundane
73 notes
·
View notes
Note
tell me about your defense contract please
Oh boy!
To be fair, it's nothing grandiose, like, it wasn't about "a new missile blueprint" or whatever, but, just thinking about what it could have become? yeesh.
So, let's go.
For context, this is taking place in the early 2010s, where I was working as a dev and manager for a company that mostly did space stuff, but they had some defence and security contracts too.
One day we got a new contract though, which was... a weird one. It was state-auctioned, meaning that this was basically a homeland contract, but the main sponsor was Philip Morris. Yeah. The American cigarette company.
Why? Because the contract was essentially a crackdown on "illegal cigarette sales", but it was sold as a more general "war on drugs" contract.
For those unaware (because chances are, like me, you are a non-smoker), cigarette contraband is very much a thing. At the time, ~15% of cigarettes were sold illegally here (read: they were smuggled in and sold on the street).
And Philip Morris wanted to stop that. After all, they're only a small company worth uhhh... oh JFC. Just a paltry 150 billion dollars. They need those extra dollars, you understand?
Anyway. So they sponsored a contract to the state, promising that "the technology used for this can be used to stop drug deals too". Also that "the state would benefit from the cigarettes part as well because smaller black market means more official sales means a higher tax revenue" (that has actually been proven true during the 2020 quarantine).
Anyway, here was the plan:
Phase 1 was to train a neural network and plug it in directly to the city's video-surveillance system, in order to detect illegal transactions as soon as they occur. Big brother who?
Phase 2 was to then track the people involved in said transaction throughout the city, based on their appearance and gait. You ever seen the Plainsight sheep counting video? Imagine something like this but with people. That data would then be relayed to police officers in the area.
So yeah, an automated CCTV-based tracking system. Because that's not setting a scary precedent.
So what do you do when you're in that position? Let me tell you. If you're thrust unknowingly, or against your will, into a project like this,
Note. The following is not a legal advice. In fact it's not even good advice. Do not attempt any of this unless you know you can't get caught, or that even if you are caught, the consequences are acceptable. Above all else, always have a backup plan if and when it backfires. Also don't do anything that can get you sued. Be reasonable.
Let me introduce you to the world of Corporate Sabotage! It's a funny form of striking, very effective in office environments.
Here's what I did:
First of all was the training data. We had extensive footage, but it needed to be marked manually for the training. Basically, just cropping the clips around the "transaction" and drawing some boxes on top of the "criminals". I was in charge of several batches of those. It helped that I was fast at it since I had video editing experience already. Well, let's just say that a good deal of those markings were... not very accurate.
Also, did you know that some video encodings are very slow to process by OpenCV, to the point of sometimes crashing? I'm sure the software is better at it nowadays though. So I did that to another portion of the data.
Unfortunately the training model itself was handled by a different company, so I couldn't do more about this.
Or could I?
I was the main person communicating with them, after all.
Enter: Miscommunication Master
In short (because this is already way too long), I became the most rigid person in the project. Like insisting on sharing the training data only on our own secure shared drive, which they didn't have access to yet. Or tracking down every single bug in the program and making weekly reports on those, which bogged down progress. Or asking for things to be done but without pointing at anyone in particular, so that no one actually did the thing. You know, classic manager incompetence. Except I couldn't be faulted, because after all, I was just "really serious about the security aspect of this project. And you don't want the state to learn that we've mishandled the data security of the project, do you, Jeff?"
A thousand little jabs like this, to slow down and delay the project.
At the end of it, after a full year on this project, we had.... a neural network full of false positives and a semi-working visualizer.
They said the project needed to be wrapped up in the next three months.
I said "damn, good luck with that! By the way my contract is up next month and I'm not renewing."
Last I heard, that city still doesn't have anything installed on their CCTV.
tl;dr: I used corporate sabotage to prevent automated surveillance from being implemented in a city--
hey hold on
wait
what
HEY ACTUALLY I DID SOME EXTRA RESEARCH TO SEE IF PHILIP MORRIS TRIED THIS SHIT WITH ANOTHER COMPANY SINCE THEN AND WHAT THE FUCK
HUH??????
well what the fuck was all that even about then if they already own most of the black market???
#i'm sorry this got sidetracked in the end#i'm speechless#anyway yeah!#sometimes activism is sitting in an office and wasting everyone's time in a very polite manner#i learned that one from the CIA actually
160 notes
·
View notes