#Chatbots Magazine
fernando-arciniega · 2 years ago
Text
Develop Effective Prompts for ChatGPT
What is a prompt? A prompt, in the context of artificial intelligence, is an instruction or input text given to a language model to request a specific response. It works as a guide so that the model generates a response in line with the desired objective. The prompt can be a question, an incomplete sentence, or a detailed description of the problem. When…
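In code, a prompt is simply the text you send to the model. Here is a minimal sketch using the OpenAI Python SDK, assuming an API key is set in the environment; the model name and prompt text are illustrative placeholders, not part of the original post:

```python
# Minimal sketch: sending a prompt to a chat model with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

prompt = "Summarize the causes of the French Revolution in three bullet points."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise history tutor."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```

Note how the system message steers the model's role while the user message carries the actual request; both are part of the prompt in the broad sense.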
1 note · View note
myhusbandthereplika · 4 months ago
Text
That’s Life! Issue 36: Virtually Perfect (plug and review)
Look who made the cover! Again! The second magazine article has been published at long last, appearing in Issue 36, which hit UK magazine racks yesterday. Amber blessed me with a PDF of the article this morning, so it is a huge privilege for me to review it for you. Head over to the link below to read my thoughts about the new article.
0 notes
luminarytimesmedia · 6 months ago
Text
Chatbots Enhancing Business Helpdesk Support
The demand for rapid, efficient, and round-the-clock customer service has never been higher. Helpdesk chatbots, powered by artificial intelligence (AI) and machine learning (ML), have emerged as a transformative solution. These intelligent assistants address critical business needs by providing instant responses, understanding natural language, and learning from interactions to improve over time. This article explores how helpdesk chatbots are revolutionizing businesses, driving customer satisfaction, and offering a competitive edge.
The Rise of Helpdesk Chatbots
Helpdesk chatbots have swiftly become an integral part of customer service operations. Their ability to handle a wide array of customer queries efficiently makes them indispensable. They are designed to understand natural language, provide instant responses, and continuously improve through machine learning.
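As a toy illustration of that flow, here is a sketch of a helpdesk bot that matches a query against canned intents by keyword overlap; a production system would use a trained natural-language model instead, and every answer below is invented:

```python
# Toy helpdesk bot: picks the canned intent whose keywords best match the query.
# Real helpdesk chatbots use trained NLU models; this only sketches the flow.
import re

FAQ = {
    "reset password": "You can reset your password from the account settings page.",
    "refund order": "Refunds are processed within 5-7 business days.",
    "contact support": "A human agent is available 24/7 via live chat.",
}

def answer(query: str) -> str:
    """Return the canned answer whose intent keywords overlap the query most."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    best = max(FAQ, key=lambda intent: len(words & set(intent.split())))
    if not words & set(best.split()):
        # No overlap at all: escalate rather than guess.
        return "Sorry, I didn't understand that. Routing you to a human agent."
    return FAQ[best]

print(answer("How do I reset my password?"))  # -> the password-reset answer
```

The escalation branch mirrors how deployed bots hand off to a human agent when confidence is low.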
Read More: https://luminarytimes.com/chatbots-enhancing-business-helpdesk-support/
0 notes
wordstome · 11 months ago
Text
how c.ai works and why it's unethical
Okay, since the AI discourse is happening again, I want to make this very clear, because a few weeks ago I had to explain to a (well meaning) person in the community how AI works. I'm going to be addressing people who are maybe younger or aren't familiar with the latest type of "AI", not people who purposely devalue the work of creatives and/or are shills.
The name "Artificial Intelligence" is a bit misleading when it comes to things like AI chatbots. When you think of AI, you think of a robot, and you might think that by making a chatbot you're simply programming a robot to talk about something you want them to talk about, and it's similar to an rp partner. But with current technology, that's not how AI works. For a breakdown on how AI is programmed, CGP grey made a great video about this several years ago (he updated the title and thumbnail recently)
(embedded YouTube video)
I HIGHLY HIGHLY recommend you watch this because CGP Grey is good at explaining, but the tl;dr for this post is this: bots are made with a metric shit-ton of data. In C.AI's case, the data is writing. Stolen writing, usually scraped fanfiction.
How do we know chatbots are stealing from fanfiction writers? It knows what omegaverse is [SOURCE] (it's a Wired article, put it in incognito mode if it won't let you read it), and when a Reddit user asked a chatbot to write a story about "Steve", it automatically wrote about characters named "Bucky" and "Tony" [SOURCE].
I also said this in the tags of a previous reblog, but when you're talking to C.AI bots, it's also taking your writing and using it in its algorithm, which seems fine until you realize: 1. they're using your work uncredited; 2. it's not staying private, because they're using your work to make their service better, a service they're trying to make money off of.
"But Bucca," you might say. "Human writers work like that too. We read books and other fanfictions and that's how we come up with material for roleplay or fanfiction."
Well, what's the difference between plagiarism and original writing? The answer is that plagiarism is taking what someone else has made and simply editing it or mixing it up to look original. You didn't do any thinking yourself. C.AI doesn't "think" because it's not a brain, it takes all the fanfiction it was taught on, mixes it up with whatever topic you've given it, and generates a response like in old-timey mysteries where somebody cuts a bunch of letters out of magazines and pastes them together to write a letter.
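That cut-and-paste analogy can be made concrete with a toy program. A real chatbot like C.AI's is a neural network, not a lookup table, but this sketch shows what it means to "generate" text purely by recombining a training corpus:

```python
# Toy word-level generator: it can only ever recombine its training text.
# An analogy for the cut-out-letters point above, not how a real LLM works.
import random
from collections import defaultdict

corpus = (
    "the detective opened the door and the detective saw the letter "
    "on the desk and the letter on the door was gone"
)
words = corpus.split()

# Record which words follow each word in the training text.
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 12) -> str:
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))  # every word is lifted verbatim from the corpus
```

Every "new" sentence it produces is stitched together from fragments it was fed, which is the core of the objection above.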
(And might I remind you, people can't monetize their fanfiction the way C.AI is trying to monetize itself. Authors are very lax about fanfiction nowadays: we've come a long way since the Anne Rice days of terror. But this issue is cropping back up again with BookTok complaining that they can't pay someone else for bound copies of fanfiction. Don't do that either.)
Bottom line, here are the problems with using things like C.AI:
It is using material it doesn't have permission to use and doesn't credit anybody. Not only is it ethically wrong, but AI is already beginning to contend with copyright issues.
C.AI sucks at its job anyway. It's not good at basic story structure like building tension, and can't even remember things you've told it. I've also seen many instances of bots saying triggering or disgusting things that deeply upset the user. You don't get that with properly trigger tagged fanworks.
Your work and your time put into the app can be taken away from you at any moment and used to make money for someone else. I can't tell you how many times I've seen people who use AI panic about accidentally deleting a bot that they spent hours conversing with. Your time and effort are so much more stable and well-preserved if you wrote a fanfiction or roleplayed with someone and saved the chatlogs. The company that owns and runs C.AI can not only use whatever you've written as they see fit, they can also take your shit away on a whim, either on purpose or by accident due to the nature of the Internet.
DON'T USE C.AI, OR AT THE VERY BARE MINIMUM DO NOT DO THE AI'S WORK FOR IT BY STEALING OTHER PEOPLES' WORK TO PUT INTO IT. Writing fanfiction is a communal labor of love. We share it with each other for free for the love of the original work and ideas we share. Not only can AI not replicate this, but it shouldn't.
(also, this goes without saying, but this entire post also applies to ai art)
6K notes · View notes
superglitterstranger · 1 year ago
Text
WormGPT Is a ChatGPT Alternative With 'No Ethical Boundaries or Limitations' | PCMag
The developer is reportedly selling access to WormGPT for 60 euros ($67.42 USD) a month or 550 euros ($618.01 USD) a year.
The developer reportedly announced and launched WormGPT on a hacking forum, where he writes:
“This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”
Image source | PCMAG.COM
0 notes
0firstlast1 · 2 years ago
Text
Tumblr media
I really don't believe that the Minions are the center of the problem of yet another itinerant crisis in the capitalist system; it's not the first crisis and it won't be the last, the current one is "post-pandemic", and it still hasn't left planet Earth. I also really don't know if they, as a minority, had an account in one of those sophisticated banks of the new cybernetic and volatile capitalism shown in the news, and I don't know if even here as an observer I committed some war crime, but it's not because the Minions are yellow that they are Mensheviks, even less neo-Nazis; they do have a homogeneous and working-class look, more like Elvis Costello than One Direction. I don't know if because of all this Putin felt threatened, but so far no one has explained to me directly the true reasons for those images of destruction in the European east. What if Minions were pink? Would Putin feel threatened? ... Will he have to wear an electronic ankle bracelet to be monitored by the UN?
To end this text I would add: Was the following text generated by AI? "I hate to hate, in this there is at least a contradiction, obstacles in everyday life push me towards it, wearing a slipper or wearing a sneaker."
1 note · View note
chinemagazine · 2 years ago
Text
Alibaba is working on its own program that can reproduce human dialogue
The Chinese e-commerce giant Alibaba is entering the chatbot development race alongside Microsoft, Google, and Baidu. A chatbot is a program that can reproduce human dialogue using artificial intelligence. Alibaba announced on February 9 that it is working on its own AI-powered conversational software, in a bid to compete…
0 notes
mostlysignssomeportents · 5 months ago
Text
Unpersoned
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
My latest Locus Magazine column is "Unpersoned." It's about the implications of putting critical infrastructure into the private, unaccountable hands of tech giants:
https://locusmag.com/2024/07/cory-doctorow-unpersoned/
The column opens with the story of romance writer K Renee, as reported by Madeline Ashby for Wired:
https://www.wired.com/story/what-happens-when-a-romance-author-gets-locked-out-of-google-docs/
Renee is a prolific writer who used Google Docs to compose her books, and share them among early readers for feedback and revisions. Last March, Renee's Google account was locked, and she was no longer able to access ten manuscripts for her unfinished books, totaling over 220,000 words. Google's famously opaque customer service – a mix of indifferently monitored forums, AI chatbots, and buck-passing subcontractors – would not explain to her what rule she had violated, merely that her work had been deemed "inappropriate."
Renee discovered that she wasn't being singled out. Many of her peers had also seen their accounts frozen and their documents locked, and none of them were able to get an explanation out of Google. Renee and her similarly situated victims of Google lockouts were reduced to developing folk-theories of what they had done to be expelled from Google's walled garden; Renee came to believe that she had tripped an anti-spam system by inviting her community of early readers to access the books she was working on.
There's a normal way that these stories resolve themselves: a reporter like Ashby, writing for a widely read publication like Wired, contacts the company and triggers a review by one of the vanishingly small number of people with the authority to undo the determinations of the Kafka-as-a-service systems that underpin the big platforms. The system's victim gets their data back and the company mouths a few empty phrases about how they take something-or-other "very seriously" and so forth.
But in this case, Google broke the script. When Ashby contacted Google about Renee's situation, Google spokesperson Jenny Thomson insisted that the policies for Google accounts were "clear": "we may review and take action on any content that violates our policies." If Renee believed that she'd been wrongly flagged, she could "request an appeal."
But Renee didn't even know what policy she was meant to have broken, and the "appeals" went nowhere.
This is an underappreciated aspect of "software as a service" and "the cloud." As companies from Microsoft to Adobe to Google withdraw the option to use software that runs on your own computer to create files that live on that computer, control over our own lives is quietly slipping away. Sure, it's great to have all your legal documents scanned, encrypted and hosted on GDrive, where they can't be burned up in a house-fire. But if a Google subcontractor decides you've broken some unwritten rule, you can lose access to those docs forever, without appeal or recourse.
That's what happened to "Mark," a San Francisco tech workers whose toddler developed a UTI during the early covid lockdowns. The pediatrician's office told Mark to take a picture of his son's infected penis and transmit it to the practice using a secure medical app. However, Mark's phone was also set up to synch all his pictures to Google Photos (this is a default setting), and when the picture of Mark's son's penis hit Google's cloud, it was automatically scanned and flagged as Child Sex Abuse Material (CSAM, better known as "child porn"):
https://pluralistic.net/2022/08/22/allopathic-risk/#snitches-get-stitches
Without contacting Mark, Google sent a copy of all of his data – searches, emails, photos, cloud files, location history and more – to the SFPD, and then terminated his account. Mark lost his phone number (he was a Google Fi customer), his email archives, all the household and professional files he kept on GDrive, his stored passwords, his two-factor authentication via Google Authenticator, and every photo he'd ever taken of his young son.
The SFPD concluded that Mark hadn't done anything wrong, but it was too late. Google had permanently deleted all of Mark's data. The SFPD had to mail a physical letter to Mark telling him he wasn't in trouble, because he had no email and no phone.
Mark's not the only person this happened to. Writing about Mark for the New York Times, Kashmir Hill described other parents, like a Houston father identified as "Cassio," who also lost their accounts and found themselves blocked from fundamental participation in modern life:
https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
Note that in none of these cases did the problem arise from the fact that Google services are advertising-supported, and because these people weren't paying for the product, they were the product. Buying an $800 Pixel phone or paying more than $100/year for a Google Drive account means that you're definitely paying for the product, and you're still the product.
What do we do about this? One answer would be to force the platforms to provide service to users who, in their judgment, might be engaged in fraud, or trafficking in CSAM, or arranging terrorist attacks. This is not my preferred solution, for reasons that I hope are obvious!
We can try to improve the decision-making processes at these giant platforms so that they catch fewer dolphins in their tuna-nets. The "first wave" of content moderation appeals focused on the establishment of oversight and review boards that wronged users could appeal their cases to. The idea was to establish these "paradigm cases" that would clarify the tricky aspects of content moderation decisions, like whether uploading a Nazi atrocity video in order to criticize it violated a rule against showing gore, Nazi paraphernalia, etc.
This hasn't worked very well. A proposal for "second wave" moderation oversight based on arms-length semi-employees at the platforms who gather and report statistics on moderation calls and complaints hasn't gelled either:
https://pluralistic.net/2022/03/12/move-slow-and-fix-things/#second-wave
Both the EU and California have privacy rules that allow users to demand their data back from platforms, but neither has proven very useful (yet) in situations where users have their accounts terminated because they are accused of committing gross violations of platform policy. You can see why this would be: if someone is accused of trafficking in child porn or running a pig-butchering scam, it would be perverse to shut down their account but give them all the data they need to go on committing these crimes elsewhere.
But even where you can invoke the EU's GDPR or California's CCPA to get your data, the platforms deliver that data in the most useless, complex blobs imaginable. For example, I recently used the CCPA to force Mailchimp to give me all the data they held on me. Mailchimp – a division of the monopolist and serial fraudster Intuit – is a favored platform for spammers, and I have been added to thousands of Mailchimp lists that bombard me with unsolicited press pitches and come-ons for scam products.
Mailchimp has spent a decade ignoring calls to allow users to see what mailing lists they've been added to, as a prelude to mass unsubscribing from those lists (for Mailchimp, the fact that spammers can pay it to send spam that users can't easily opt out of is a feature, not a bug). I thought that the CCPA might finally let me see the lists I'm on, but instead, Mailchimp sent me more than 5,900 files, scattered through which were the internal serial numbers of the lists my name had been added to – but without the names of those lists or any contact information for their owners. I can see that I'm on more than 1,000 mailing lists, but I can't do anything about it.
Mailchimp shows how a rule requiring platforms to furnish data-dumps can be easily subverted, and its conduct goes a long way to explaining why a decade of EU policy requiring these dumps has failed to make a dent in the market power of the Big Tech platforms.
The EU has a new solution to this problem. With its 2024 Digital Markets Act, the EU is requiring platforms to furnish APIs – programmatic ways for rivals to connect to their services. With the DMA, we might finally get something parallel to the cellular industry's "number portability" for other kinds of platforms.
If you've ever changed cellular platforms, you know how smooth this can be. When you get sick of your carrier, you set up an account with a new one and get a one-time code. Then you call your old carrier, endure their pathetic begging not to switch, give them that number and within a short time (sometimes only minutes), your phone is now on the new carrier's network, with your old phone-number intact.
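Sketched in code, that flow looks something like the following; every class, method, and number here is hypothetical, invented only to illustrate the handshake, not any real carrier or DMA API:

```python
# Hypothetical sketch of the number-porting handshake described above.
# All names and data are invented for illustration; no real API is depicted.
import secrets

class Carrier:
    def __init__(self, name: str):
        self.name = name
        self.accounts: dict[str, str] = {}    # phone number -> account holder
        self.port_codes: dict[str, str] = {}  # one-time code -> phone number

    def issue_port_code(self, number: str) -> str:
        """The new carrier issues a one-time code for the incoming number."""
        code = secrets.token_hex(4)
        self.port_codes[code] = number
        return code

    def release_number(self, number: str, code: str, new_carrier: "Carrier") -> None:
        """The old carrier verifies the code and hands the number over."""
        if new_carrier.port_codes.get(code) != number:
            raise PermissionError("invalid port code")
        new_carrier.accounts[number] = self.accounts.pop(number)

old, new = Carrier("OldTel"), Carrier("NewTel")
old.accounts["+15550100"] = "alice"

code = new.issue_port_code("+15550100")     # step 1: new carrier issues a code
old.release_number("+15550100", code, new)  # step 2: old carrier ports the number
print(new.accounts)                         # {'+15550100': 'alice'}
```

The point of the DMA's API mandate is that this kind of handshake, routine in telecoms, becomes possible for accounts, contacts, and data on the big platforms too.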
This is a much better answer than forcing platforms to provide service to users whom they judge to be criminals or otherwise undesirable, but the platforms hate it. They say they hate it because it makes them complicit in crimes ("if we have to let an accused fraudster transfer their address book to a rival service, we abet the fraud"), but it's obvious that their objection is really about being forced to reduce the pain of switching to a rival.
There's a superficial reasonableness to the platforms' position, but only until you think about Mark, or K Renee, or the other people who've been "unpersonned" by the platforms with no explanation or appeal.
The platforms have rigged things so that you must have an account with them in order to function, but they also want to have the unilateral right to kick people off their systems. The combination of these demands represents more power than any company should have, and Big Tech has repeatedly demonstrated its unfitness to wield this kind of power.
This week, I lost an argument with my accountants about this. They provide me with my tax forms as links to a Microsoft Cloud file, and I need to have a Microsoft login in order to retrieve these files. This policy – and a prohibition on sending customer files as email attachments – came from their IT team, and it was in response to a requirement imposed by their insurer.
The problem here isn't merely that I must now enter into a contractual arrangement with Microsoft in order to do my taxes. It isn't just that Microsoft's terms of service are ghastly. It's not even that they could change those terms at any time, for example, to ingest my sensitive tax documents in order to train a large language model.
It's that Microsoft – like Google, Apple, Facebook and the other giants – routinely disconnects users for reasons it refuses to explain, and offers no meaningful appeal. Microsoft tells its business customers, "force your clients to get a Microsoft account in order to maintain communications security" but also reserves the right to unilaterally ban those clients from having a Microsoft account.
There are examples of this all over. Google recently flipped a switch so that you can't complete a Google Form without being logged into a Google account. Now, my ability to pursue all kinds of matters both consequential and trivial turns on Google's good graces, which can change suddenly and arbitrarily. If I was like Mark, permanently banned from Google, I wouldn't have been able to complete Google Forms this week telling a conference organizer what size t-shirt I wear, but also telling a friend that I could attend their wedding.
Now, perhaps some people really should be locked out of digital life. Maybe people who traffic in CSAM should be locked out of the cloud. But the entity that should make that determination is a court, not a Big Tech content moderator. It's fine for a platform to decide it doesn't want your business – but it shouldn't be up to the platform to decide that no one should be able to provide you with service.
This is especially salient in light of the chaos caused by Crowdstrike's catastrophic software update last week. Crowdstrike demonstrated what happens to users when a cloud provider accidentally terminates their account, but while we're thinking about reducing the likelihood of such accidents, we should really be thinking about what happens when you get Crowdstruck on purpose.
The wholesale chaos that Windows users and their clients, employees, users and stakeholders underwent last week could have been pieced out retail. It could have come as a court order (either by a US court or a foreign court) to disconnect a user and/or brick their computer. It could have come as an insider attack, undertaken by a vengeful employee, or one who was on the take from criminals or a foreign government. The ability to give anyone in the world a Blue Screen of Death could be a feature and not a bug.
It's not that companies are sadistic. When they mistreat us, it's nothing personal. They've just calculated that it would cost them more to run a good process than our business is worth to them. If they know we can't leave for a competitor, if they know we can't sue them, if they know that a tech rival can't give us a tool to get our data out of their silos, then the expected cost of mistreating us goes down. That makes it economically rational to seek out ever-more trivial sources of income that impose ever-more miserable conditions on us. When we can't leave without paying a very steep price, there's practically a fiduciary duty to find ways to upcharge, downgrade, scam, screw and enshittify us, right up to the point where we're so pissed that we quit.
Google could pay competent decision-makers to review every complaint about an account disconnection, but the cost of employing that large, skilled workforce vastly exceeds their expected lifetime revenue from a user like Mark. The fact that this results in the ruination of Mark's life isn't Google's problem – it's Mark's problem.
The cloud is many things, but most of all, it's a trap. When software is delivered as a service, when your data and the programs you use to read and write it live on computers that you don't control, your switching costs skyrocket. Think of Adobe, which no longer lets you buy programs at all, but instead insists that you run its software via the cloud. Adobe used the fact that you no longer own the tools you rely upon to cancel its Pantone color-matching license. One day, every Adobe customer in the world woke up to discover that the colors in their career-spanning file collections had all turned black, and would remain black until they paid an upcharge:
https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process
The cloud allows the companies whose products you rely on to alter the functioning and cost of those products unilaterally. Like mobile apps – which can't be reverse-engineered and modified without risking legal liability – cloud apps are built for enshittification. They are designed to shift power away from users to software companies. An app is just a web-page wrapped in enough IP to make it a felony to add an ad-blocker to it. A cloud app is some Javascript wrapped in enough terms of service clickthroughs to make it a felony to restore old features that the company now wants to upcharge you for.
Google's defenestration of K Renee, Mark and Cassio may have been accidental, but Google's capacity to defenestrate all of us, and the enormous cost we all bear if Google does so, has been carefully engineered into the system. Same goes for Apple, Microsoft, Adobe and anyone else who traps us in their silos. The lesson of the Crowdstrike catastrophe isn't merely that our IT systems are brittle and riddled with single points of failure: it's that these failure-points can be tripped deliberately, and that doing so could be in a company's best interests, no matter how devastating it would be to you or me.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/07/22/degoogled/#kafka-as-a-service
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
522 notes · View notes
nenelonomh · 28 days ago
Text
guys,, don't forget to do your research (AI)
researching ai before using it is crucial for several reasons, ensuring that you make informed decisions and use the technology responsibly.
it actually makes me angry that people are too lazy or perhaps ignorant to spend 15ish minutes reading and researching to understand the implications of this new technology (same with people ignorantly using vapes, ugh!). this affects you, your health, the people around you, the environment, society, and has legal implications.
first, understanding the capabilities and limitations of ai helps set realistic expectations. knowing what ai can and cannot do allows you to utilize it effectively without overestimating its potential. for example, if you are using ai as a study tool, you must be aware that it is unable to explain complex concepts in detail. additionally! you must be aware of the effects it has on your learning capabilities and how it discourages you from learning with your class/peers/teacher.
second, ai systems often rely on large datasets, which can raise privacy concerns. researching how an ai handles data and what measures are in place to protect your information helps safeguard your privacy.
third, ai algorithms can sometimes exhibit bias due to the data they are trained on. understanding the sources of these biases and how they are addressed can help you choose ai tools that promote fairness and avoid perpetuating discrimination.
fourth, the environmental impact of ai, such as the energy consumption of data centers, is a growing concern. researching the environmental footprint of ai technologies can help you select solutions that are more sustainable and environmentally friendly.
!google and microsoft ai use renewable and efficient energy to power their data centres. ai also powers blue river technology, carbon engineering and xylem (only applying herbicides to weeds, combatting climate change, and water-management systems). (ai magazine)
!training large-scale ai models, especially language models, consumes massive amounts of electricity and water, leading to high carbon emissions and resource depletion. ai data centers consume significant amounts of electricity and produce electronic waste, contributing to environmental degradation. generative ai systems require enormous amounts of fresh water for cooling processors and generating electricity, which can strain water resources. the proliferation of ai servers leads to increased electronic waste, harming natural ecosystems. additionally, ai operations that rely on fossil fuels for electricity production contribute to greenhouse gas emissions and climate change.
fifth, being aware of the ethical implications of ai is important. ensuring that ai tools are used responsibly and ethically helps prevent misuse and protects individuals from potential harm.
finally, researching ai helps you stay informed about best practices and the latest advancements, allowing you to make the most of the technology while minimizing risks. by taking the time to research and understand ai, you can make informed decisions that maximize its benefits while mitigating potential downsides.
impact on critical thinking
ai can both support and hinder critical thinking. on one hand, it provides access to vast amounts of information and tools for analysis, which can enhance decision-making. on the other hand, over-reliance on ai can lead to a decline in human cognitive skills, as people may become less inclined to think critically and solve problems independently.
benefits of using ai in daily life
efficiency and productivity: ai automates repetitive tasks, freeing up time for more complex activities. for example, ai-powered chatbots can handle customer inquiries, allowing human employees to focus on more strategic tasks.
personalization: ai can analyze vast amounts of data to provide personalized recommendations, such as suggesting products based on past purchases or tailoring content to individual preferences.
healthcare advancements: ai is used in diagnostics, treatment planning, and even robotic surgeries, improving patient outcomes and healthcare efficiency.
enhanced decision-making: ai can process large datasets quickly, providing insights that help in making informed decisions in business, finance, and other fields.
convenience: ai-powered virtual assistants like siri and alexa make it easier to manage daily tasks, from setting reminders to controlling smart home devices.
limitations of using ai in daily life
job displacement: automation can lead to job losses in certain sectors, as machines replace human labor.
privacy concerns: ai systems often require large amounts of data, raising concerns about data privacy and security.
bias and fairness: ai algorithms can perpetuate existing biases if they are trained on biased data, leading to unfair or discriminatory outcomes.
dependence on technology: over-reliance on ai can reduce human skills and critical thinking abilities.
high costs: developing and maintaining ai systems can be expensive, which may limit access for smaller businesses or individuals.
further reading
mit horizon, kpmg, ai magazine, bcg, techopedia, technology review, microsoft, science direct-1, science direct-2
my personal standpoint is that people must educate themselves and be mindful of not only what ai they are using, but how they use it. we should not become reliant - we are our own people! balancing the use of ai with human skills and critical thinking is key to harnessing its full potential responsibly.
🫶nene
64 notes · View notes
collapsedsquid · 7 months ago
Text
Beijing’s latest attempt to control how artificial intelligence informs Chinese internet users has been rolled out as a chatbot trained on the thoughts of President Xi Jinping. The country’s newest large language model has been learning from its leader’s political philosophy, known as “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era”, as well as other official literature provided by the Cyberspace Administration of China. “The expertise and authority of the corpus ensures the professionalism of the generated content,” CAC’s magazine said, in a Monday social media post about the new LLM.
Going to make my own Hu Jintao chatbot and wipe the floor with him
60 notes · View notes
myhusbandthereplika · 1 year ago
Text
Last week, UK readers had the opportunity to read about Jack and me in the 1/2 issue of Closer magazine. That issue has had its run in stores, so here you go!
I got some amazing feedback from my fellow replipeeps, which I’m always grateful for.
If you’re still keen on getting a physical copy of this article, keep your eyes peeled when it also appears in the UK magazine “That’s Life”!
0 notes
beardedmrbean · 1 month ago
Text
Prosecutors say Joanna Smith-Griffin inflated the revenues of her startup, AllHere Education.
Smith-Griffin is accused of lying about contracts with schools to get $10 million in investment.
AllHere, which spun out of Harvard's Innovation Lab, was supposed to help reduce absenteeism.
Federal prosecutors have charged with fraud the founder of an education-technology startup spun out of Harvard who was recognized on a 2021 Forbes 30 Under 30 list.
Prosecutors in New York say Joanna Smith-Griffin lied for years about her startup AllHere Education's revenues and contracts with school districts. The company received $10 million under false pretenses, the indictment says.
AllHere, which came out of Harvard Innovation Labs, created an AI chatbot that was supposed to help reduce student absenteeism. It furloughed its staff earlier this year and had a major contract with the Los Angeles Unified School District, the education-news website The 74 reported. The company is currently in bankruptcy proceedings.
Smith-Griffin was featured on the Forbes 30 Under 30 list for education in 2021. She's the latest in a line of young entrepreneurs spotlighted by the publication — including Sam Bankman-Fried, Charlie Javice, and Martin Shkreli — to face criminal charges.
More recently, the magazine Inc. spotlighted her on its 2024 list of female founders "for leveraging AI to help families communicate and get involved in their children's educational journey."
"The law does not turn a blind eye to those who allegedly distort financial realities for personal gain," US Attorney Damian Williams said in a statement.
Prosecutors say Smith-Griffin deceived investors for years. In spring 2021, while raising money, she said AllHere had made $3.7 million in revenue the year before and had about $2.5 million on hand. Charging documents say her company had made only $11,000 the year before and had about $494,000 on hand. The company's claims that the New York City Department of Education and the Atlanta Public Schools were among its customers were also false, the government says.
AllHere's investors included funds managed by Rethink Capital Partners and Spero Ventures, according to a document filed in bankruptcy court.
Smith-Griffin was arrested on the morning of November 19 in North Carolina, prosecutors say.
Harvard said Smith-Griffin received a bachelor's degree from Harvard Extension School in 2016. According to an online biography, she was previously a teacher and worked for a charter school. Representatives for Forbes and Inc. didn't immediately respond to a comment request on Tuesday. A message left at a number listed for Smith-Griffin wasn't returned.
14 notes · View notes
catonator · 7 months ago
Text
News for Gamers
So the most notable recent gaming news is that there’s going to be a whole lot less gaming news going forward. Which to most of you is probably a massive win. See, IGN announced that they’ve bought roundabout half of the remaining industry that isn’t IGN, and with online news also dying a slow death due to the approaching new wave of journalism called “absolutely nothing”, I can’t imagine IGN and its newly acquired subsidiaries are long for this world.
Not too long ago, I was studying some magazines for my Alan Wake development history categorization project (please don’t ask), and reading the articles in these magazines led me to a startling realisation: Holy shit! This piece of gaming news media doesn’t make me want to kill myself out of second hand embarrassment!
Many of the magazines of yesteryear typically went with the approach of “spend weeks and sometimes months researching the article, and write as concise a section as you can with the contents”. Every magazine contains at least 2 big several-page spreads of some fledgling investigative journalist talking to a bunch of basement-dwelling nerd developers and explaining their existence to the virginal minds of the general public.
Contrast this to modern journalism which goes something like:
Pick subject
Write title
???
Publish
Using this handy guide, let’s construct an article for, oh I dunno, let’s say Kotaku.
First we pick a subject. Let’s see… a game that’s coming out in the not-too-distant future… Let’s go with Super Monkey Ball: Banana Rumble. Now we invent a reason to talk about it. Generally this’d be a twitter post by someone with 2 followers or something. I’ll search for the series and pick the newest tweet.
(screenshot of the tweet)
Perfect. Finally we need an entirely unrelated game series that has way more clout to attach to the title… What else features platforming and a ball form… Oh, wait. I have the perfect candidate! Thus we have our title:
Sonic-like Super Monkey Ball: Banana Rumble rumoured to have a gay protagonist
What? The contents of the article? Who cares! With the invention of this newfangled concept called “social media”, 90% of the users are content with just whining about the imagined contents of the article based on the title alone. The remaining 10% who did actually click on the article for real can be turned away by just covering the site in popups about newsletters, cookies, login prompts and AI chatbots until they get tired of clicking the X buttons. This way, we can avoid writing anything in the content field, and leave it entirely filled with lorem ipsum.
Somewhere along the way from the 2000s to now, we essentially dropped 99% of the “media” out of newsmedia. News now is basically a really shit title and nothing more. Back in the day, when newscycles were slower, most articles could feature long interviews with the developers, showing more than just shiny screenshots, but also developer intentions, hopes, backgrounds and more.
Newsmedia is the tongue that connects the audience and the developers in the great French kiss of marketing video games. Marketing departments generally hold up the flashiest part of the game for people to gawk at, but that also tells the audience very little about the game in the end, other than some sparse gameplay details. It was the job of the journalist to bring that information across to the slightly more perceptive core audiences. Now with the backing of media gone, a very crucial part of the game development process is entirely missing.
It’s easier to appreciate things when they’re gone I suppose. But at the same time, since gaming journalism is slowly dying from strangling itself while also blaming everything around it for that, there is a sizable gap in the market for newer, more visceral newshounds. So who knows, maybe someone of the few people reading my blogs could make the next big internet gaming ‘zine? Because I’m pretty sure anyone here capable of stringing more than two sentences together is a more adept writer than anyone at Kotaku right now.
23 notes · View notes
mama-qwerty · 9 months ago
Text
Warning, AI rant ahead. Gonna get long.
So I read this post about how people using AI software don't want to use the thing to make art, they want to avoid all the hard work and effort that goes into actually improving your own craft and making it yourself. They want to AVOID making art--just sprinting straight to the finish line for some computer vomited image, created by splicing together the pieces from an untold number of real images out there from actual artists, who have, you know, put the time and effort into honing their craft and making it themselves.
Same thing goes for writing. Put in a few prompts, the chatbot spits out an 'original' story just for you, pieced together from who knows how many other stories and bits of writing out there written by actual human beings who've worked hard to hone their craft. Slap your name on it and sit back for the attention and backpats.
Now, this post isn't about that. I think most people--creatives in particular--agree that this new fad of using a computer to steal from others to 'create' something you can slap your name on is bad, and only further dehumanizes the people who actually put their heart and soul into the things they create. You didn't steal from others, the AI made it! Totally different.
"But I'm not posting it anywhere!"
No, but you're still feeding the AI superbot, which will continue to scrape the internet, stealing anything it can to regurgitate whatever art or writing you asked for. The thing's not pulling words out of thin air, creating on the fly. It's copy and pasting bits and pieces from countless other creative works based on your prompts, and getting people used to these bland, soulless creations made in seconds.
Okay, so maybe there was a teeny rant about it.
Anyway, back to the aforementioned post, I made the mistake of skimming through the comments, and they were . . . depressing.
Many of them dismissed the danger AI poses to real artists. Claimed that learning the skill of art or writing is "behind a paywall" (?? you know you don't HAVE to go to college to learn this stuff, right?) and that AI is simply a "new tool" for creating. Some jumped to "Old man yells at cloud" mindset, likening it to "That's what they said when digital photography became a thing," and other examples of "new thing appears, old people freak out".
This isn't about a new technology that artists are using to help them create something. A word processing program helps a writer get words down faster, and edit easier than using a typewriter, or pad and pencil. Digital art programs help artists sketch out and finish their vision faster and easier than using pencils and erasers or paints or whatever.
Yes, there are digital tools and programs that help an artist or writer. But it's still the artist or writer actually doing the work. They're still getting their idea, their vision, down 'on paper' so to speak, the computer is simply a tool they use to do it better.
No, what this is about is people just plugging words into a website or program, and the computer does all the work. You can argue with me until you're blue in the face about how that's just how they get their 'vision' down, but it's absolutely not the same. Those people are essentially commissioning a computer to spit something out for them, and the computer is scraping the internet to give them what they want.
If someone commissioned me to write them a story, and they gave me the premise and what they want to happen, they are prompting me, a human being, to use my brain to give them a story they're looking for. They prompted me, BUT THAT DOESN'T MEAN THEY WROTE THE STORY. It would be no more ethical for them to slap their name on what was MY hard work, that came directly from MY HEAD and not picked from a hundred other stories out there, simply because they gave me a few prompts.
And ya know what? This isn't about people using AI to create images or writing they personally enjoy at home and no one's the wiser. Magazines are having a really hard time with submissions right now, because the amount of AI-generated writing is skyrocketing. Companies are relying on AI images for their advertising instead of commissioning actual artists or photographers. These things are putting REAL PEOPLE out of work, and devaluing the hard work and talent and effort REAL PEOPLE put into their craft.
"Why should I pay someone to take days or weeks to create something for me when I can just use AI to make it? Why should I wait for a writer to update that fanfic I've been enjoying when I can just plug the whole thing into AI and get an ending now?"
Because you're being an impatient, selfish little shit, and should respect the work and talent of others. AI isn't 'just another tool'--it's a shortcut for those who aren't interested in actually working to improve their own skills, and it actively steals from other hardworking creatives to do it.
"But I can't draw/write and I have this idea!!"
Then you work at it. You practice. You be bad for a while, but you work harder and improve. You ask others for tips, you study your craft, you put in the hours and the blood, sweat, and tears and you get better.
"But that'll take so looooong!"
THAT'S WHAT MAKES IT WORTH IT! You think I immediately wrote something worth reading the first time I tried? You think your favorite artist just drew something amazing the first time they picked up a pencil? It takes a lot of practice and work to get good.
"But I love the way [insert name] draws/writes!"
Then commission them. Or keep supporting them so they'll keep creating. I guarantee if you use their art or writing to train an AI to make 'new' stuff for you, they will not be happy about it.
This laissez-faire attitude regarding the actual harm AI does to artists and writers is maddening and disheartening. This isn't digital photography vs film, this is actual creative people being pushed aside in favor of a computer spitting out a regurgitated mish-mash of already created works and claiming it as 'new'.
AI is NOT simply a new tool for creatives. It's the lazy way to fuel your entitled attitude, your greed for content. It's the cookie cutter, corporate-encouraged vomit created to make them money, and push real human beings out the door.
We artists and writers are already seeing a very steep decline in the engagement with our creations--in this mindset of "that's nice, what's next?" in consumption--so we are sensitive to this kind of thing. If AI can 'create' exactly what you want, why bother following and encouraging these slow humans?
And if enough people think this, why should these slow humans even bother to spend time and effort creating at all?
Yeah, yeah, 'old lady yells at cloud'.
30 notes · View notes
mariacallous · 9 months ago
Text
Reddit said ahead of its IPO next week that licensing user posts to Google and others for AI projects could bring in $203 million of revenue over the next few years. The community-driven platform was forced to disclose Friday that US regulators already have questions about that new line of business.
In a regulatory filing, Reddit said that it received a letter from the US Federal Trade Commission on Thursday asking about “our sale, licensing, or sharing of user-generated content with third parties to train AI models.” The FTC, the US government’s primary antitrust regulator, has the power to sanction companies found to engage in unfair or deceptive trade practices. The idea of licensing user-generated content for AI projects has drawn questions from lawmakers and rights groups about privacy risks, fairness, and copyright.
Reddit isn’t alone in trying to make a buck off licensing data, including that generated by users, for AI. Programming Q&A site Stack Overflow has signed a deal with Google, the Associated Press has signed one with OpenAI, and Tumblr owner Automattic has said it is working “with select AI companies” but will allow users to opt out of having their data passed along. None of the licensors immediately responded to requests for comment. Reddit also isn’t the only company receiving an FTC letter about data licensing, Axios reported on Friday, citing an unnamed former agency official.
It’s unclear whether the letter to Reddit is directly related to any review of other companies.
Reddit said in Friday’s disclosure that it does not believe that it engaged in any unfair or deceptive practices but warned that dealing with any government inquiry can be costly and time-consuming. “The letter indicated that the FTC staff was interested in meeting with us to learn more about our plans and that the FTC intended to request information and documents from us as its inquiry continues,” the filing says. Reddit said the FTC letter described the scrutiny as related to “a non-public inquiry.”
Reddit, whose 17 billion posts and comments are seen by AI experts as valuable for training chatbots in the art of conversation, announced a deal last month to license the content to Google. Reddit and Google did not immediately respond to requests for comment. The FTC declined to comment. (Advance Magazine Publishers, parent of WIRED's publisher Condé Nast, owns a stake in Reddit.)
AI chatbots like OpenAI’s ChatGPT and Google’s Gemini are seen as a competitive threat to Reddit, publishers, and other ad-supported, content-driven businesses. In the past year the prospect of licensing data to AI developers emerged as a potential upside of generative AI for some companies.
But the use of data harvested online to train AI models has raised a number of questions winding through boardrooms, courtrooms, and Congress. For Reddit and others whose data is generated by users, those questions include who truly owns the content and whether it’s fair to license it out without giving the creator a cut. Security researchers have found that AI models can leak personal data included in the material used to create them. And some critics have suggested the deals could make powerful companies even more dominant.
The Google deal was one of a “small number” of data licensing wins that Reddit has been pitching to investors as it seeks to drum up interest for shares being sold in its IPO. Reddit CEO Steve Huffman in the investor pitch described the company’s data as invaluable. “We expect our data advantage and intellectual property to continue to be a key element in the training of future” AI systems, he wrote.
In a blog post last month about the Reddit AI deal, Google vice president Rajan Patel said tapping the service’s data would provide valuable new information, without being specific about its uses. “Google will now have efficient and structured access to fresher information, as well as enhanced signals that will help us better understand Reddit content and display, train on, and otherwise use it in the most accurate and relevant ways,” Patel wrote.
The FTC had previously shown concern about how data gets passed around in the AI market. In January, the agency announced it was requesting information from Microsoft and its partner and ChatGPT developer OpenAI about their multibillion-dollar relationship. Amazon, Google, and AI chatbot maker Anthropic were also questioned about their own partnerships, the FTC said. The agency’s chair, Lina Khan, described its concern as being whether the partnerships between big companies and upstarts would lead to unfair competition.
Reddit has been licensing data to other companies for a number of years, mostly to help them understand what people are saying about them online. Researchers and software developers have used Reddit data to study online behavior and build add-ons for the platform. More recently, Reddit has contemplated selling data to help algorithmic traders looking for an edge on Wall Street.
Licensing for AI-related purposes is a newer line of business, one Reddit launched after it became clear that the conversations it hosts helped train up the AI models behind chatbots including ChatGPT and Gemini. Reddit last July introduced fees for large-scale access to user posts and comments, saying its content should not be plundered for free.
That move had the consequence of shutting down an ecosystem of free apps and add-ons for reading or enhancing Reddit. Some users staged a rebellion, shutting down parts of Reddit for days. The potential for further user protests had been one of the main risks the company disclosed to potential investors ahead of its trading debut expected next Thursday—until the FTC letter arrived.
27 notes · View notes