#AI Model Security
Explore tagged Tumblr posts
Text
#AI Factory #AI Cost Optimize #Responsible AI #AI Security #AI in Security #AI Integration Services #AI Proof of Concept #AI Pilot Deployment #AI Production Solutions #AI Innovation Services #AI Implementation Strategy #AI Workflow Automation #AI Operational Efficiency #AI Business Growth Solutions #AI Compliance Services #AI Governance Tools #Ethical AI Implementation #AI Risk Management #AI Regulatory Compliance #AI Model Security #AI Data Privacy #AI Threat Detection #AI Vulnerability Assessment #AI proof of concept tools #End-to-end AI use case platform #AI solution architecture platform #AI POC for medical imaging #AI POC for demand forecasting #Generative AI in product design #AI in construction safety monitoring
0 notes
Text
Simplify Transactions and Boost Efficiency with Our Cash Collection Application
Manual cash collection can lead to inefficiencies and increased risks for businesses. Our cash collection application provides a streamlined solution, tailored to support all business sizes in managing cash effortlessly. Key features include automated invoicing, multi-channel payment options, and comprehensive analytics, all of which simplify the payment process and enhance transparency. The application is designed with a focus on usability and security, ensuring that every transaction is traceable and error-free. With real-time insights and customizable settings, you can adapt the application to align with your business needs. Its robust reporting functions give you a bird’s eye view of financial performance, helping you make data-driven decisions. Move beyond traditional, error-prone cash handling methods and step into the future with a digital approach. With our cash collection application, optimize cash flow and enjoy better financial control at every level of your organization.
#seo agency #seo company #seo marketing #digital marketing #seo services #azure cloud services #amazon web services #ai powered application #android app development #augmented reality solutions #augmented reality in education #augmented reality (ar) #augmented reality agency #augmented reality development services #cash collection application #cloud security services #iot applications #iot #iotsolutions #iot development services #iot platform #digitaltransformation #innovation #techinnovation #iot app development services #large language model services #artificial intelligence #llm #generative ai #ai
4 notes
Text
#AI Arms Race #AI Security #Artificial Intelligence #Cybersecurity Threats #DeepSeek #facts #Geopolitical Tensions in AI #life #Malicious Attacks #OpenAI Rival #Podcast #R1 Reasoning Model #serious #straight forward #truth #U.S.-China Tech Competition #upfront #website
0 notes
Text
What is WormGPT?
Artificial intelligence (AI) tools are expected to transform the workplace by automating everyday tasks, increasing productivity for everyone. However, AI can also be misused for illegal activities, as highlighted by the new WormGPT system. What is WormGPT? WormGPT is a harmful AI tool designed for cybercriminal activities. It is based on GPT-J, an open-source language model developed by EleutherAI, and was…
#AI risks #AI security #Business Email Compromise #Cybersecurity #GPTJ language model #malicious AI tools #malware prevention #online safety #phishing attacks #WormGPT
0 notes
Text
If anyone wants to know why every tech company in the world right now is clamoring for AI like drowned rats scrabbling to board a ship, I decided to make a post to explain what's happening.
(Disclaimer to start: I'm a software engineer who's been employed full time since 2018. I am not a historian nor an overconfident Youtube essayist, so this post is my working knowledge of what I see around me and the logical bridges between pieces.)
Okay anyway. The explanation starts further back than what's going on now. I'm gonna start with the year 2000. The Dot Com Bubble just spectacularly burst. The model of "we get the users first, we learn how to profit off them later" went out in a no-money-having bang (remember this, it will be relevant later). A lot of money was lost. A lot of people ended up out of a job. A lot of startup companies went under. Investors left with a sour taste in their mouth and, in general, investment in the internet stayed pretty cooled for that decade. This was, in my opinion, very good for the internet as it was an era not suffocating under the grip of mega-corporation oligarchs and was, instead, filled with Club Penguin and I Can Haz Cheezburger websites.
Then around the 2010-2012 years, a few things happened. Interest rates got low, and then lower. Facebook got huge. The iPhone took off. And suddenly there was a huge new potential market of internet users and phone-havers, and the cheap money was available to start backing new tech startup companies trying to hop on this opportunity. Companies like Uber, Netflix, and Amazon either started in this time, or hit their ramp-up in these years by shifting focus to the internet and apps.
Now, every start-up tech company dreaming of being the next big thing has one thing in common: they need to start off by getting themselves massively in debt. Because before you can turn a profit you need to first spend money on employees and spend money on equipment and spend money on data centers and spend money on advertising and spend money on scale and and and
But also, everyone wants to be on the ship for The Next Big Thing that takes off to the moon.
So there is a mutual interest between new tech companies, and venture capitalists who are willing to invest $$$ into said new tech companies. Because if the venture capitalists can identify a prize pig and get in early, that money could come back to them 100-fold or 1,000-fold. In fact it hardly matters if they invest in 10 or 20 total bust projects along the way to find that unicorn.
But also, becoming profitable takes time. And that might mean being in debt for a long long time before that rocket ship takes off to make everyone onboard a gazillionaire.
But luckily, for tech startup bros and venture capitalists, being in debt in the 2010's was cheap, and it only got cheaper between 2010 and 2020. If people could secure loans for ~3% or 4% annual interest, well then a $100,000 loan only really costs $3,000 of interest a year to keep afloat. And if inflation is higher than that or at least similar, you're still beating the system.
So from 2010 through early 2022, times were good for tech companies. Startups could take off with massive growth, showing massive potential for something, and venture capitalists would throw infinite money at them in the hopes of pegging just one winner who will take off. And supporting the struggling investments or the long-haulers remained pretty cheap to keep funding.
You hear constantly about "Such and such app has 10-bazillion users gained over the last 10 years and has never once been profitable", yet the thing keeps chugging along because the investors backing it aren't stressed about the immediate future, and are still banking on that "eventually" when it learns how to really monetize its users and turn that profit.
The pandemic in 2020 took a magnifying-glass-in-the-sun effect to this, as EVERYTHING was forcibly turned online which pumped a ton of money and workers into tech investment. Simultaneously, money got really REALLY cheap, bottoming out with historic lows for interest rates.
Then the tide changed with the massive inflation that struck late 2021. Because this all-gas no-brakes state of things was also contributing to off-the-rails inflation (along with your standard-fare greedflation and price gouging, given the extremely convenient excuses of pandemic hardships and supply chain issues). The federal reserve whipped out interest rate hikes to try to curb this huge inflation, which is like a fire extinguisher dousing and suffocating your really-cool, actively-on-fire party where everyone else is burning but you're in the pool. And then they did this more, and then more. And the financial climate followed suit. And suddenly money was not cheap anymore, and new loans became expensive, because loans that used to compound at 2% a year are now compounding at 7 or 8% which, in the language of compounding, is a HUGE difference. A $100,000 loan at a 2% interest rate, if not repaid a single cent in 10 years, accrues to $121,899. A $100,000 loan at an 8% interest rate, if not repaid a single cent in 10 years, more than doubles to $215,892.
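The loan arithmetic in that paragraph is easy to sanity-check yourself. A minimal sketch reproducing the post's figures (annual compounding, no repayments):

```python
# Compound growth on an unpaid loan: why 2% vs 8% matters so much.
# Reproduces the post's example of $100,000 left unpaid for 10 years.

def accrued(principal: float, annual_rate: float, years: int) -> float:
    """Balance after compounding annually with no repayments."""
    return principal * (1 + annual_rate) ** years

low = accrued(100_000, 0.02, 10)   # cheap-money era
high = accrued(100_000, 0.08, 10)  # post-hike era

print(f"2% for 10 years: ${low:,.0f}")   # ~$121,899
print(f"8% for 10 years: ${high:,.0f}")  # ~$215,892
```

Same principal, same decade of patience, but the carrying cost nearly doubles — which is exactly why "wait indefinitely for profitability" stopped being a viable investor strategy.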
Now it is scary and risky to throw money at "could eventually be profitable" tech companies. Now investors are watching companies burn through their current funding and, when the companies come back asking for more, investors are tightening their coin purses instead. The bill is coming due. The free money is drying up and companies are under compounding pressure to produce a profit for their waiting investors who are now done waiting.
You get enshittification. You get quality going down and price going up. You get "now that you're a captive audience here, we're forcing ads or we're forcing subscriptions on you." Don't get me wrong, the plan was ALWAYS to monetize the users. It's just that it's come earlier than expected, with way more feet-to-the-fire than these companies were expecting. ESPECIALLY with Wall Street as the other factor in funding (public) companies, where Wall Street exhibits roughly the same temperament as a baby screaming crying upset that it's soiled its own diaper (maybe that's too mean a comparison to babies), and now companies are being put through the wringer for anything LESS than infinite growth that Wall Street demands of them.
Internal to the tech industry, you get MASSIVE widespread layoffs. You get an industry where it used to be easy to land multiple job offers shriveling up and leaving recent graduates in a desperately awful situation where no company is hiring and the market is flooded with laid-off workers trying to get back on their feet.
Because those coin-purse-clutching investors DO love virtue-signaling efforts from companies that say "See! We're not being frivolous with your money! We only spend on the essentials." And this is true even for MASSIVE, PROFITABLE companies, because those companies' value is based on the Rich Person Feeling Graph (their stock) rather than the literal profit money. A company making a genuine gazillion dollars a year still tears through layoffs and freezes hiring and removes the free batteries from the printer room (totally not speaking from experience, surely) because the investors LOVE when you cut costs and take away employee perks. The "beer on tap, ping pong table in the common area" era of tech is drying up. And we're still unionless.
Never mind that last part.
And then in early 2023, AI (more specifically, ChatGPT, which is OpenAI's Large Language Model creation) tears its way into the tech scene with a meteor's amount of momentum. Here's Microsoft's prize pig, which it invested heavily in and is gallivanting around the pig-show with, to the desperate jealousy and rapture of every other tech company and investor wishing it had that pig. And for the first time since the interest rate hikes, investors have dollar signs in their eyes, both venture capital and Wall Street alike. They're willing to restart the hose of money (even with the new risk) because this feels big enough for them to take the risk.
Now all these companies, who were in varying stages of sweating as their bill came due, or wringing their hands as their stock prices tanked, see a single glorious gold-plated rocket up out of here, the likes of which haven't been seen since the free money days. It's their ticket to buy time, and buy investors, and say "see THIS is what will wring money forth, finally, we promise, just let us show you."
To be clear, AI is NOT profitable yet. It's a money-sink. Perhaps a money-black-hole. But everyone in the space is so wowed by it that there is a wide-spread and powerful conviction that it will become profitable and earn its keep. (Let's be real, half of that profit "potential" is the promise of automating away jobs of pesky employees who peskily cost money.) It's a tech-space industrial revolution that will automate away skilled jobs, and getting in on the ground floor is the absolute best thing you can do to get your pie slice's worth.
It's the thing that will win investors back. It's the thing that will get the investment money coming in again (or, get it second-hand if the company can be the PROVIDER of something needed for AI, which other venture-backed companies will pay handsomely for). It's the thing companies are terrified of missing out on, lest it leave them utterly irrelevant in a future where not having AI integration is like not having a mobile phone app for your company or not having a website.
So I guess to reiterate on my earlier point:
Drowned rats. Swimming to the one ship in sight.
36K notes
Text
House Votes to Advance Bill That Could Ban TikTok in the U.S.
Legislation Passed: The House voted in favor of a bill that could lead to a ban on TikTok in the US unless ByteDance sells it to an American company.
Senate Expectations: The bill, now heading to the Senate, is expected to pass there as well.
Security Concerns: US politicians have security concerns over TikTok’s data sharing with the Chinese government, given ByteDance’s obligations.
Potential…

#access control #AI News #automated incident response #bytedance #cayman #cybersecurity #data collection #data security #douyin #ethical AI #house bill #identity management #model security #national security #News #red teaming #threat detection #tiktok #vulnerability assessment
0 notes
Text
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix.
AI companies such as OpenAI, Meta, Google, and Stability AI are facing a slew of lawsuits from artists who claim that their copyrighted material and personal information was scraped without consent or compensation. Ben Zhao, a professor at the University of Chicago, who led the team that created Nightshade, says the hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists’ copyright and intellectual property. Meta, Google, Stability AI, and OpenAI did not respond to MIT Technology Review’s request for comment on how they might respond.
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
#Ben Zhao and his team are absolute heroes #artificial intelligence #plagiarism software #more rambles #glaze #nightshade #ai theft #art theft #gleeful dancing
22K notes
Text
Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved testing generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open source model Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations, references to more regulation, and to include the page references and context. Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts.

Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%. Human summaries ran up the score by significantly outperforming on identifying references to ASIC documents in the long document, a type of task that the report notes is a “notoriously hard task” for this type of AI. But humans still beat the technology across the board.

Reviewers told the report’s authors that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information.
Three of the five reviewers said they guessed that they were reviewing AI content. The reviewers’ overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to original submissions which communicated the message better and more concisely.
3 September 2024
5K notes
Text
Exciting developments in MLOps await in 2024! 🚀 DevOps-MLOps integration, AutoML acceleration, Edge Computing rise – shaping a dynamic future. Stay ahead of the curve! #MLOps #TechTrends2024 🤖✨
#MLOps #Machine Learning Operations #DevOps #AutoML #Automated Pipelines #Explainable AI #Edge Computing #Model Monitoring #Governance #Hybrid Cloud #Multi-Cloud Deployments #Security #Forecast #2024
0 notes
Text
Recall is designed to use local AI models to screenshot everything you see or do on your computer and then give you the ability to search and retrieve anything in seconds. There’s even an explorable timeline you can scroll through. Everything in Recall is designed to remain local and private on-device, so no data is used to train Microsoft’s AI models. Despite Microsoft’s promises of a secure and encrypted Recall experience, cybersecurity expert Kevin Beaumont has found that the AI-powered feature has some potential security flaws. Beaumont, who briefly worked at Microsoft in 2020, has been testing out Recall over the past week and discovered that the feature stores data in a database in plain text.
Holy cats, this is way worse than we were told.
Microsoft said that Recall stored its zillions of screenshots in an encrypted database hidden in a system folder. Turns out, they're using SQLite, a free (public domain) database to store unencrypted plain text in the user's home folder. Which is definitely NOT secure.
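To make the severity concrete: an unencrypted SQLite file readable by the user is readable by *any* process running as that user, no exploit required. A minimal sketch (the path and schema below are hypothetical stand-ins, not Recall's actual layout):

```python
# Sketch: dumping every text value from an unencrypted SQLite database.
# Any malware running with ordinary user privileges could do exactly this.
# NOTE: the schema and any example path are hypothetical, not Recall's real ones.
import sqlite3

def dump_all_text(db_path: str) -> list[str]:
    """Return every string value stored in every table of the database."""
    rows = []
    with sqlite3.connect(db_path) as conn:
        # sqlite_master lists all tables in the file — no schema knowledge needed.
        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        for table in tables:
            for row in conn.execute(f'SELECT * FROM "{table}"'):
                rows.extend(str(v) for v in row if isinstance(v, str))
    return rows

# e.g. dump_all_text("/home/user/hypothetical/recall.db")
```

That's the whole attack: a dozen lines of stdlib Python against a file sitting in the home folder.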
Further, Microsoft refers to Recall as an optional experience. But it's turned on by default, and turning it off is a chore. They buried it in a control panel setting.
They say certain URLs and websites can be blacklisted from Recall, but only if you're using Microsoft's Edge browser! But don't worry: DRM protected films & music will never get recorded. Ho ho ho.
This whole debacle feels like an Onion article but it's not.
Luckily(?) Recall is currently only available on Windows 11, but I fully expect Microsoft to try and shove this terrible thing onto unsuspecting Win10 users via Update.
Stay tuned...
3K notes
Text

The guild’s insistence on achieving a minimum guaranteed staff level for episodic TV was considered an extreme long-shot when the contract discussions began in March.
They achieved a new-model streaming residual formula that should help fellow striking union SAG-AFTRA in its quest to achieve a revenue-based residual. The WGA’s formula amounts to a bonus system based on pre-determined, high-bar performance benchmarks for individual titles. But it’s nonetheless more than industry dealmakers predicted the guild would secure when the first round of WGA-AMPTP talks began in earnest last spring.
The nitty-gritty details of language around the use of generative AI in content production was one of the last items that the sides worked on before closing the pact.
They did it.
Solidarity works.
Unions are our only tool against capitalistic greed.
[x]
Fact: To be completely precise, the agreement is tentative and the strike will end when the members vote. The guild has allowed the writers to go back to work from Wednesday 27th 12:01. All members still need to vote for the new contract to be final but the strike is officially over. [x]
Opinion: However, it’s very unlikely there will be any hurdles, because the same people who called for the strike are calling for it to be over.
Hence: wga strike ENDS and not ENDED.
#wga strike #wga strike end #destiel news #destiel #capitalism #union solidarity #unions #sag aftra #sag aftra strike
11K notes
Text
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to sort of carefully selected "archetypal" images, ideally ones that were already generated using generative AI using a prompt for the generic concept to be attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
While the intent of Nightshade is to have maximum impact with minimal data poisoning, in order to attack a large model there would have to be many thousands of samples in the training data. Obviously if you have a webpage that you created specifically to host a massive gallery of poisoned images, that can be fairly easily blacklisted, so you'd have to have a lot of patience and resources in order to hide these enough that they proliferate into the training datasets of major models.
The main use case for this as suggested by the authors is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large scale use case for me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running through another resource heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like small fry in comparison to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists, for whom resource use is a talking point, hyping this up so much.
In terms of actual attack effectiveness; like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
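To make "feature space" concrete, here's a hand-rolled toy (emphatically NOT Nightshade's actual method): stand in for a feature extractor with a fixed random linear map, then nudge an input so its features land on a different concept's features while the input itself changes only modestly.

```python
# Toy illustration of a feature-space attack — a hand-rolled sketch,
# not Nightshade. The "model" here is just a fixed random linear map
# standing in for a real feature extractor.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 32))  # stand-in extractor: inputs in R^32 -> features in R^8

def features(x: np.ndarray) -> np.ndarray:
    return W @ x

def poison(x: np.ndarray, target_feat: np.ndarray,
           step: float = 0.01, iters: int = 500) -> np.ndarray:
    """Gradient descent on 0.5 * ||W x - target||^2: nudge x until its
    features sit near the target concept's features."""
    x = x.copy()
    for _ in range(iters):
        x -= step * (W.T @ (features(x) - target_feat))
    return x

dog = rng.normal(size=32)  # stand-in "dog" image (just a vector here)
cat = rng.normal(size=32)  # stand-in "cat" image

# A modest change to the input drags its *features* onto the wrong concept —
# the same mechanism Glaze and Nightshade exploit against real extractors.
poisoned = poison(dog, features(cat))
```

The point of the sketch is the generalisation question above: the attack optimises against one extractor's feature space, so its transferability depends on how similar other models' feature spaces are.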
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open source generation projects than the massive corporate monoliths who are probably working to secure proprietary data sets, like I believe Adobe Firefly did. I don't like how these attacks concentrate the power up.
The generalisation of the attack doesn't mean that this can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade poison in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. And with public data scraping, I suppose that sort of thing is fair game.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI but to make it infeasible for models to be created and trained without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this focuses power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to help counter the economic displacement without worker protection that is the real issue with AI systems deployment, but will exacerbate the problem of those systems' benefits being constrained to said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).
1K notes
Text
OPTIVISER - GOLD

Welcome to Optiviser.com, your ultimate guide to navigating the complex world of electronics in 2024. As technology continues to evolve at a rapid pace, finding the right devices that suit your needs can be overwhelming. In this blog post, we’ll harness the power of AI to help you make informed choices with our comprehensive electronics comparison. We’ll take a closer look at the top smart home devices that are revolutionizing how we live and work, providing convenience and efficiency like never before. Additionally, we’ll offer expert laptop recommendations tailored to various lifestyles and budgets, ensuring you find the perfect match for your daily tasks.
AI-powered Electronics Comparison
In today's fast-paced technological landscape, making informed choices about electronics can be overwhelming. An AI-powered Electronics Comparison tool can help streamline this process by providing insights that cater to specific user needs. These advanced tools utilize algorithms that analyze product features, specifications, and user reviews, resulting in a tailored recommendation for buyers.
As we delve into the world of consumer technology, it's important to highlight the Top Smart Home Devices 2024. From smart thermostats to security cameras, these devices are becoming essential for modern households. They not only enhance convenience but also significantly improve energy efficiency and home safety.
For those looking for a new computer to enhance productivity or gaming experiences, consider checking out the latest Laptop Recommendations. Many platforms, including Optiviser.com, provide comprehensive comparisons and insights that can help consumers choose the best laptop suited to their needs, whether it’s for work, study, or leisure.
Top Smart Home Devices 2024
As we move into 2024, the landscape of home automation is evolving rapidly, showcasing an array of innovative gadgets designed to enhance comfort and convenience. In this era of AI-powered Electronics Comparison, selecting the right devices can be overwhelming, but we've highlighted some of the best Top Smart Home Devices 2024 that stand out for their functionality and user experience.
One of the most impressive innovations for this year is the latest AI-powered home assistant. These devices not only respond to voice commands but also learn your preferences over time, allowing them to offer personalized suggestions and perform tasks proactively. Imagine a device that can monitor your schedule and automatically adjust your home's temperature and lighting accordingly!
Moreover, security remains a top priority in smart homes. The Top Smart Home Devices 2024 include state-of-the-art security cameras and smart locks that provide robust protection while ensuring ease of access. With features like remote monitoring through your smartphone or integration with smart doorbells, keeping your home safe has never been easier. For more details on the comparisons and recommendations of these devices, you can check out Optiviser.com.
Laptop Recommendation
In today's fast-paced world, choosing the right laptop can be a daunting task. With numerous options available in the market, it's essential to consider various factors such as performance, portability, and price. At Optiviser.com, we provide an insightful guide to help you navigate through the vast array of choices. To streamline your decision-making process, we have developed an AI-powered Electronics Comparison tool that allows you to compare specifications and features of different laptops side by side.
This year, we have seen a surge in innovative laptops that cater to diverse needs. Whether for gaming, business, or everyday use, our top recommendations include models that excel in battery life, processing power, and display quality. For instance, consider the latest models from top brands, which have integrated the best features of Top Smart Home Devices 2024 trends, ensuring seamless connectivity and advanced functionalities.
Additionally, if you're looking for a laptop that can handle multitasking effortlessly, we suggest models equipped with the latest processors and ample RAM. Our detailed Laptop Recommendation section on Optiviser.com includes expert reviews and user feedback to help you choose a laptop that not only fits your budget but also meets your specific requirements.
674 notes