#Hire ChatGPT Developers
creolestudios · 1 year
Choose Creole Studios for ChatGPT Development: Find Creole Studios on DevelopersForHire.com for expert ChatGPT development, and elevate your projects with our renowned app development expertise.
sudhirsingh-123 · 1 year
Leveraging Blockchain Development Services for Enhanced Data Privacy
1. Decentralization and Security:
One of the key benefits of Blockchain development is decentralization. Traditional systems rely on a central authority, making them vulnerable to single points of failure and security breaches. Blockchain, however, distributes data across a network of nodes, greatly reducing the risk of data tampering and unauthorized access. Webllisto's expertise in Blockchain technology ensures that your applications and platforms are fortified with robust security measures, safeguarding your users' sensitive information and bolstering their trust in your services.
2. Transparency and Immutability:
Blockchain's transparent nature allows all participants in the network to access the same information, promoting trust and accountability. Every transaction recorded on the Blockchain is immutable, meaning it cannot be altered or deleted, ensuring a reliable and auditable system. Webllisto's proficiency in Blockchain development ensures that your applications embrace transparency and maintain an immutable record, fostering a sense of trust among your users and stakeholders.
Also Read: Top 10 Blockchain Development Companies in 2022-2023
3. Smart Contracts and Automation:
Smart Contracts are self-executing agreements with predefined conditions that automatically trigger actions when those conditions are met. They eliminate the need for intermediaries, reducing costs and streamlining processes. Webllisto's expertise in Blockchain development empowers you to leverage the full potential of Smart Contracts, creating efficient, secure, and automated workflows for your business operations.
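To make the self-executing idea concrete, here is a minimal, hedged sketch in plain Python. This is a toy model only: production smart contracts run on a blockchain virtual machine (for example, written in Solidity for Ethereum), and the names and amounts below are invented for illustration.

```python
# Toy "smart contract" escrow: once the predefined condition (delivery)
# is met, payment triggers automatically -- no intermediary involved.
# Illustrative only; not how an on-chain contract is actually deployed.

class Wallet:
    def __init__(self, balance):
        self.balance = balance

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self):
        # Predefined condition: delivery confirmed -> execute automatically.
        self.delivered = True
        self._execute()

    def _execute(self):
        # Self-executing clause: moves funds exactly once, with no
        # third party deciding whether to release the payment.
        if self.delivered and not self.paid:
            self.buyer.balance -= self.amount
            self.seller.balance += self.amount
            self.paid = True

buyer, seller = Wallet(100), Wallet(0)
contract = EscrowContract(buyer, seller, amount=40)
contract.confirm_delivery()
print(buyer.balance, seller.balance)  # 60 40
```

The point of the sketch is the control flow: the "agreement" is code, and meeting the condition is what triggers the transfer.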
4. NFT Marketplace Development Services:
As the Non-Fungible Token (NFT) market continues to flourish, businesses are keen to capitalize on this trend. Webllisto specializes in NFT Marketplace Development, providing you with a cutting-edge platform to trade digital assets, art, collectibles, and more. By utilizing Blockchain technology, your NFT marketplace will ensure the authenticity and ownership of digital assets, attracting a wider audience of collectors and investors.
5. Play-To-Earn Game Development:
With the rise of blockchain-based games, the Play-to-Earn model has gained popularity, enabling players to earn valuable digital assets by participating in gameplay. Webllisto excels in Play-to-Earn Game Development, creating immersive and rewarding gaming experiences that captivate users while embracing the decentralized nature of Blockchain technology.
6. NFT Game Development:
NFTs have transformed the gaming industry by allowing players to own and trade in-game assets. Webllisto's expertise in NFT Game Development enables you to build captivating games that integrate NFTs, providing players with true ownership of their virtual items and fostering a vibrant in-game economy.
Conclusion:
Blockchain Development Services offer a plethora of benefits, ranging from enhanced security and transparency to automation and decentralized applications. Embrace the power of Blockchain with Webllisto's expertise in NFT Marketplace Development Services, Play-to-Earn Game Development, and NFT Game Development. Stay ahead of the competition, attract more users, and revolutionize your business with Blockchain technology. Get in touch with Webllisto today to embark on a transformative journey toward a decentralized and secure future.
scalacode · 1 year
Explore the essential insights and cost considerations of developing a chatbot app like ChatGPT with ScalaCode. Our informative blog breaks down the complexities of development, implementation, and scalability, helping you make informed decisions for your business. Stay ahead in the chatbot revolution with ScalaCode's expert guidance. Start exploring our blog today. Read more: How Much Does It Cost To Develop a Chatbot App Like ChatGPT?
sapphiresoftware · 8 months
Hire ChatGPT Developers in USA | ChatGPT Development Company
Looking to hire ChatGPT developers in the USA? Find dedicated ChatGPT developers from Sapphire who excel in natural language processing and build successful AI-driven solutions. Connect with us to take your AI development to the next level.
Read More:
itpathsolutions · 2 years
Here is how a ChatGPT and Microsoft Bing collaboration could transform the internet. Learn how users will benefit.
leidensygdom · 4 months
Fighting AI and learning how to speak with your wallet
So, if you're a creative of any kind, chances are that you've been directly affected by the development of AI. If you aren't a creative but engage with art in any way, you may also be plenty aware of the harm caused by AI. And right now, it's more important than ever that you learn how to fight against it.
The situation is this: After a few years of stagnation, with nothing new worth investing in, AI came out. Techbros, people with far too much money trying to find the next big thing to invest in, cryptobros, all these people flocked to it immediately. A lot of people are putting money into what they think will be the next breakthrough- And AI is, at its core, all about the money. You will get ads shoved in your face about "invest in AI now!" in every place. You will get ads telling you to try subscription services for AI related stuff. Companies are trying to gauge how much they can depend on AI in order to fire their creatives. AI is opening the gates towards the biggest data laundering scheme there's been in ages. It is also used in order to justify taking all your personal information- Bypassing existing laws.
Many of them are currently bleeding investors' money, though. Be it through server costs, through trying to buy the rights to scrape content from social media (incredibly illegal, btw), amidst many other things. A lot of the tech giants have also been investing in AI-related infrastructure (Microsoft, for example), and are desperate to justify these expenses. They're going over their budgets, they're ignoring their emissions plans (because AI is very toxic to the environment), and they're scrambling to justify why they're using it. Surely, it will be worth it.
Now, here's where you can act: Speak with your wallet. They're going through a delicate moment (despite how much they try to pretend they aren't), and it's now your moment to act. A company used AI in any manner? Don't buy their products. Speak against them on social media. Make noise. It doesn't matter how small or how big. A videogame used AI voices? Don't buy the game. Try to get a refund if you did. Social media is scraping content for AI? Don't buy ads, don't buy their stupid blue checks, put adblock on, don't give them a cent. A film generated their poster with AI? Don't watch it. Don't engage with it. Your favourite creator has made AI music for their YT channel? Unsub, bring it up on social media, tell them directly WHY you aren't supporting them. Your favourite browser is now integrating AI in your searches? Change browsers.
Let them know that the costs they cut through the use of AI don't justify how many customers they'd lose. Wizards of the Coast has been repeatedly trying to see how much they can get away with in their use of AI- It's only through consumer boycotts and massive social media noise that they've been forced to go back and hire actual artists to do that work.
The thing with AI- It doesn't benefit the consumer in any way. It's capitalism at its prime: Cut costs, no matter how much it impacts quality, no matter how inhumane it is, no matter how much it pollutes. AI searches are directly feeding you misinformation. ChatGPT is using your input to feed itself. Instead of asking a chatbot, find a Discord server to talk with others about writing. Try starting art yourself, find other artists, join a community. If you can't, use the money you may be saving from boycotting AI shills to support a fellow creative- They need your help more than ever.
We're in a bit of a nebulous moment. Laws against AI are probably around the corner: A lot of AI companies are completely aware that they're going to crash if they're legally obliged to disclose the content they used to train their machines, because THEY KNOW it is stolen. Copyright is inherent to human created art: You don't need to even register it anywhere for it to be copyrighted. The moment YOU created it, YOU have the copyright to it. They can't just scrape social media because Meta or Twitter or whatever made a deal with OpenAI and others, because these companies DON'T own your work, they DON'T get to bypass your copyright.
And to make sure these laws get passed, it's important to keep up the fight against AI. AI isn't offering you anything of use. It's just for the benefit of companies. Let it be known it isn't useful, and that people's work and livelihoods are far more important than letting tech giants save a few cents. Meanwhile, these companies are trying to gauge how MUCH they can get away with. They know it goes against European GDPR laws, but they're going to try to stretch what these mean and steal as much data up until a clear ruling comes out.
The wonder of boycotts is that they don't even need you to do anything. In fact, it's about not doing some stuff. You don't need money to boycott- Just to be aware of where you put it. Changing habits is hard- People can't stop eating at Chick-fil-A no matter how much it uses the money against the LGBTQ community, but people NEED to learn how to do it. Now is the perfect time to cancel a subscription, find an alternate plan to watching that one film, and maybe join a creative community yourself.
mariacallous · 4 months
Last week OpenAI revealed a new conversational interface for ChatGPT with an expressive, synthetic voice strikingly similar to that of the AI assistant played by Scarlett Johansson in the sci-fi movie Her—only to suddenly disable the new voice over the weekend.
On Monday, Johansson issued a statement claiming to have forced that reversal, after her lawyers demanded OpenAI clarify how the new voice was created.
Johansson’s statement, relayed to WIRED by her publicist, claims that OpenAI CEO Sam Altman asked her last September to provide ChatGPT’s new voice but that she declined. She describes being astounded to see the company demo a new voice for ChatGPT last week that sounded like her anyway.
“When I heard the release demo I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the statement reads. It notes that Altman appeared to encourage the world to connect the demo with Johansson’s performance by tweeting out “her,” in reference to the movie, on May 13.
Johansson’s statement says her agent was contacted by Altman two days before last week’s demo asking that she reconsider her decision not to work with OpenAI. After seeing the demo, she says she hired legal counsel to write to OpenAI asking for details of how it made the new voice.
The statement claims that this led to OpenAI’s announcement Sunday in a post on X that it had decided to “pause the use of Sky,” the company’s name for the synthetic voice. The company also posted a blog post outlining the process used to create the voice. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the post said.
Sky is one of several synthetic voices that OpenAI gave ChatGPT last September, but at last week’s event it displayed a much more lifelike intonation with emotional cues. The demo saw a version of ChatGPT powered by a new AI model called GPT-4o appear to flirt with an OpenAI engineer in a way that many viewers found reminiscent of Johansson’s performance in Her.
“The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers,” Sam Altman said in a statement provided by OpenAI. He claimed the voice actor behind Sky's voice was hired before the company contacted Johansson. “Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
The conflict with Johansson adds to OpenAI’s existing battles with artists, writers, and other creatives. The company is already defending a number of lawsuits alleging it inappropriately used copyrighted content to train its algorithms, including suits from The New York Times and authors including George R.R. Martin.
Generative AI has made it much easier to create realistic synthetic voices, creating new opportunities and threats. In January, voters in New Hampshire were bombarded with robocalls featuring a deepfaked voice message from Joe Biden. In March, OpenAI said that it had developed a technology that could clone someone’s voice from a 15-second clip, but the company said it would not release the technology because of how it might be misused.
destielmemenews · 10 months
OpenAI is the developer of DALL-E 2 and ChatGPT. 505 of the approximately 700 employees are threatening to resign and follow Altman to Microsoft, where he was just hired.
source 1
source 2
schraubd · 1 year
How To Hack The Law
Do you ever idly puzzle through various ideas for a "perfect crime"? It's awkward to talk about -- you don't actually want to do them, you don't actually want to give anyone a bright idea, but they're still so interesting to think through.
The legal community is abuzz with the story of a lawyer who relied on ChatGPT to do his research and submitted a brief filled with entirely invented cases. ChatGPT just made them up out of thin air -- complete with names, citations and quotes -- and the lawyer dutifully added them to the brief. When opposing counsel tried to read the cases for themselves, they were baffled because they couldn't find any trace of them. The presiding judge went so far as to contact the clerks of the courts where the cases were allegedly filed, confirming their non-existence. Now the lawyer is facing sanctions; he is begging for mercy on the grounds that he had no idea ChatGPT would lie to him like that.
I know of very few lawyers who have sympathy for this lawyer. But imagine a slightly different case. Let's say that LexisNexis developed a glitch where it invented a case. If you typed in the (invented) citation to the case, it would pop up on Lexis the same as any other case -- name, judge panel, court, reasoning, everything. But the case isn't real; it was a complete invention. If a lawyer came across such a "hallucinated" decision on Lexis, I think we'd be very forgiving if she ended up being deceived and relied on the case in her briefs. Indeed, I actually wonder, in a situation like this, how long it would take the legal community to figure out that the case wasn't real.
For example: the last case contained in volume 500 of the Federal Reporter (3d) is Jacobsen v. DOJ, 500 F.3d 1376 (Fed. Cir. 2007). That case ends on page 1381. Suppose an enterprising criminal hacks the Westlaw and Lexis database* and adds another case, call it Smith v. Jones, cited to 500 F.3d 1382. To further cover her tracks, the criminal "assigns" the case to a panel of judges who are no longer active on the court, to make it less likely one of them will see it and be like "I don't remember that decision." Smith v. Jones, of course, can be about and say whatever the criminal (or the unscrupulous lawyer who hired her) wants it to. Need a precedent that appears to decisively resolve a contested point of law in your favor? Voila -- the new case of Smith v. Jones is there to meet your needs. Indeed, the diligent criminal could add one or two new precedents per volume on a range of topics, providing bespoke "new" precedent to shift the legal terrain on an array of different issues.
If this happened, again I ask: how long would it take for the legal community to figure it out? If the initial hack was undetected, could one get away with doing this? Certainly, there would still be ways to confirm the cases are not real. If one traced the cases back to the clerk's office, one would discover they're vapor -- but realistically, that almost never happens. We take Lexis and Westlaw as proof enough; I'm not sure I can imagine a circumstance where I would try to confirm the veracity of a case I saw on Westlaw or Lexis by contacting the clerk's office. There probably would be some other hints that the cases were suspect -- the lack of citations from other cases would be a significant hint that something is shady -- but I can imagine a crime like this slipping by us for some time. And the longer it goes unnoticed, the more these cases have the opportunity to subtly adjust the overall trajectory of law in a new direction.
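That citation hint could even be mechanized. Here's a purely hypothetical sketch -- the case names, citation links, and volume numbers below are all invented for the example -- of flagging database entries that no other case ever cites, while giving recent volumes a grace period (new decisions are naturally uncited at first):

```python
# Hypothetical toy citation graph: each key is a case in the database,
# each value is the list of cases it cites. All entries are invented.
citations = {
    "Jacobsen v. DOJ, 500 F.3d 1376": [],
    "Real v. Case, 501 F.3d 200": ["Jacobsen v. DOJ, 500 F.3d 1376"],
    "Another v. Real, 502 F.3d 50": ["Real v. Case, 501 F.3d 200"],
    "Newest v. Cite, 503 F.3d 90": ["Another v. Real, 502 F.3d 50"],
    # The planted fake: it cites real cases, but nothing cites it back.
    "Smith v. Jones, 500 F.3d 1382": ["Jacobsen v. DOJ, 500 F.3d 1376"],
}

def flag_suspicious(graph, newest_volume, grace=2):
    """Return cases never cited by any other case, old enough that the
    silence is odd. A hint, not proof -- exactly the tell noted above."""
    cited = {c for cites in graph.values() for c in cites}
    flagged = []
    for case in graph:
        # Parse the reporter volume, e.g. "500 F.3d 1382" -> 500.
        volume = int(case.split(", ")[1].split(" ")[0])
        if case not in cited and volume <= newest_volume - grace:
            flagged.append(case)
    return flagged

print(flag_suspicious(citations, newest_volume=503))
# ['Smith v. Jones, 500 F.3d 1382']
```

A genuinely recent, uncited case survives the grace period, while an old case that nothing ever points to stands out -- which is also why a patient criminal would plant cases in recent volumes.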
It's a scary thought, no? We're very reliant on the robustness and reliability of online databases. If they start to falter, we run into serious trouble very quickly.
* Note: I assume -- and desperately hope -- that this is difficult-to-impossible to do.
via The Debate Link https://ift.tt/hLkYFA1
creolestudios · 1 year
Hire dedicated ChatGPT developers for seamless AI integration. Elevate your projects with our skilled team of AI integration specialists.
anarchycox · 4 months
I think I saw a post of yours that talked about students using AI and chatgpt to do their schoolwork? I can’t find it now, but I want to try to convince my nephew that he needs to stop trying to use it for his English assignments. I think he’s using it more for edits than for making something more foundational, but I don’t want him to use it at all. Writing and essay writing has always been a struggle in our family, but I don’t want him to basically cheat either. Do you have any advice?
Absolutely happy to help!
There are lots of things I tell my students.
That I will never care about a few typos or mistakes in a paper so long as I can see personal effort. I don't care about perfection. Writing is about the journey as much as the destination.
AI writing frankly just reads as boring. When I read an AI paper versus one from a student who cared and tried, the difference is palpable. I am more engaged when it is authentic writing vs generative writing.
AI is still making up sources. If you tell it to generate references, it will find real journals but make up the articles, or everything is written by Smith, Doe, Brown, Johnson, etc., which is a huge red flag and how I bust about 70% of the students.
Is the gamble worth it? Maybe you don't get caught, but generally speaking an AI paper will get a C at most just because the language is too generic and broad. Cs get degrees, right? But what if you are the person caught - now it is an F on the assignment, maybe an F in the course, maybe suspension (I have a student facing suspension right now for AI usage).
It is easy - but what, as an individual, do you actually gain from it? Does it provide growth and satisfaction to have generated that work? Does a B on a fake assignment enrich you the same way a B on something you worked hard on does?
Why don't you want to gain the knowledge and skills that actually working on this assignment provides? Being able to research and assess and analyze are life long skills transferable to pretty much every job out there. A future employer wants to hire you for the perspectives you bring to the market, and if you haven't developed those skills - what good are you?
Trying is scary, it is so scary, I know that, because what if you fail? But the alternative is never trying and never fulfilling the possibility that is you. When you are bold enough to try, you are bold enough to become someone and that is so important. Be curious enough to try in this world.
harrowing-of-hell · 9 months
i think the big thing is that there just isn't any way to ethically create ai generated content, at least with the way training ai models currently works
for the sake of conciseness, let's just focus on the amount of labor required to produce the images text-to-image ai models are trained on
theoretically, you could hire a bunch of artists whose jobs are to create art to feed into an algorithm and train it. there's no wage theft here.
in theory, that could work.
reality is, these models are trained on so many images that there is no way to do this ethically.
for example, DALL-E, which is a text-to-image model developed by OpenAI, was trained on 250 million images
to pay the labor force that would be required to even produce that many images... ignoring the amount of time that would take even with thousands of artists, there's just literally no fucking way.
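just to put rough numbers on that point -- treating the $50 commission rate and 30-minute estimate below as pure assumptions for illustration, not real figures:

```python
# back-of-the-envelope only: the rate and time-per-image are assumed
# values for illustration, not actual commission data.
images = 250_000_000          # images DALL-E was reportedly trained on
rate_per_image = 50           # assumed flat commission rate (USD)
minutes_per_image = 30        # assumed time to produce one image

cost = images * rate_per_image
hours = images * minutes_per_image / 60
person_years = hours / 2000   # ~2000 working hours per year

print(f"${cost:,}")                          # $12,500,000,000
print(f"{person_years:,.0f} person-years")   # 62,500 person-years
```

even at these lowball assumptions, that's billions of dollars and tens of thousands of artist-careers' worth of labor for a single model's training set.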
this is precisely why these ai models, both text-to-image ai art generators similar to DALL-E and LLMs like ChatGPT, resort to scraping the internet for data to train their models on. they have no other option besides just... not making the model to begin with.
the only way to realistically create a good ai model meant to function like these two examples is to resort to unethical methods
and again, this is ignoring all the other ethical concerns with ai generated content! this is the reality of JUST training these models to begin with!
scifigeneration · 9 months
AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024
by Anjana Susarla, Professor of Information Systems at Michigan State University; Casey Fiesler, Associate Professor of Information Science at the University of Colorado Boulder; and Kentaro Toyama, Professor of Community Information at the University of Michigan
2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.
We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.
Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder
2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.
One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.
However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.
So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
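For contrast, ELIZA's entire mechanism really can be shown in a few lines. The following is a rough sketch in the spirit of Weizenbaum's program, not his original DOCTOR script; the rules are invented examples of the pattern-matching-and-substitution approach.

```python
import re

# A tiny ELIZA-style rule list: match a regex pattern, substitute the
# captured text into a canned response. This is the whole "magic" --
# no learning, no statistics, no model of meaning.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when no rule matches

print(respond("I need a holiday"))  # Why do you need a holiday?
print(respond("I am tired"))        # How long have you been tired?
```

Once the trick is laid out this plainly, the magic crumbles; the difficulty with large language models is that no comparably short and plain description of their inner workings exists.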
I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.
Kentaro Toyama, Professor of Community Information, University of Michigan
In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With the singularity – the moment artificial intelligence matches and begins to exceed human intelligence – not quite here yet, it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.
Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.
The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.
Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.
Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.
Anjana Susarla, Professor of Information Systems, Michigan State University
In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.
Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.
These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.
The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.
The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.
A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.
airasilver · 10 months
Opinion: Here’s who should have won Time’s ‘Person of the Year’
Updated 10:07 AM EST December 8, 2023
Editor’s Note: Holly Thomas is a writer and editor based in London. She is morning editor at Katie Couric Media. She tweets @HolstaT. The opinions expressed in this commentary are solely those of the author. View more opinion on CNN.
Taylor Swift is Time’s 2023 “Person of the Year,” and apparently, I’m the only millennial woman on Earth who doesn’t feel seen.
OK, that’s an exaggeration. But since the announcement, it’s felt like a specific corner of Spotify Wrapped got bitten by a radioactive spider and attained superhuman powers.
I’m happy for her, I guess. I’ve nothing against a seemingly pleasant person having a lovely time, and there’s no denying she’s had a stellar year. As Time’s feature details, Swift’s now made more No. 1 albums than any other woman in history, has world leaders begging her to tour their nations and has reportedly become a billionaire. “Swift is the rare person who is both the writer and hero of her own story,” says Time. That’s great. I just don’t find that story especially compelling.
Ugh, I feel so mean. I’m well aware this will upset people, and I’d never want to rob anyone else of their joy. We’ve all had conversations with people who simply don’t “get” the music or TV we’re into. Typically, my response to such complaints is, “That’s OK, it wasn’t made for you.” But part of what’s making me so squirmy is the sense that Swift, and the stories she tells through her music, are basically aimed at me. If you lined me up alongside everyone I know who’s currently rhapsodizing over her success, I’d be indistinguishable. But I’m not biting. That’s not because I think there’s anything wrong with her. If anything, my choice for Time “Person of the Year” would be more problematic.
Historically, the title’s recipient has often been a provocateur. The idea isn’t necessarily that the “best” person wins — though that’s certainly been the case at times — it’s that the person who’s had the most influence, for “good or ill” over the previous 12 months, is recognized. Previous winners have included Adolf Hitler, Joseph Stalin, Greta Thunberg, Martin Luther King Jr. and Elon Musk. This year’s shortlist included the Hollywood strikers, Chinese President Xi Jinping, Barbie, Federal Reserve Chair Jerome Powell, Russian President Vladimir Putin, the Trump prosecutors, King Charles III and OpenAI CEO Sam Altman. Time ultimately named Altman CEO of the year. I think he should have taken the top title.
In case he hasn’t yet crossed your radar, Altman is the 38-year-old chief executive of OpenAI, the tech startup responsible for creating ChatGPT. ChatGPT is a revolutionary generative artificial intelligence chatbot that was launched in November 2022. It’s since astounded observers by passing exams at law and business schools, writing effective job applications and computer code and composing part of a political speech for Israel’s president.
The implications of that tech alone are both miraculous and terrifying, particularly given the potential for disinformation campaigns to influence the presidential election in 2024. Many companies besides OpenAI are vying for a bite of the lucrative AI market, competing to develop newer, evermore sophisticated systems. Though the Biden administration recently introduced legislation to regulate the exploding industry, the pace of development is so rapid that it’s often difficult for governments to keep up.
The mysteriousness and speed of the AI race were evidenced in November, when, less than a year after ChatGPT’s launch, Altman was fired suddenly by his company’s board. Just days later, Microsoft, OpenAI’s biggest stakeholder, announced it was hiring Altman to head up a new AI team. This prompted a mass revolt among OpenAI’s staff, almost all of whom threatened to quit unless Altman was rehired. Within days, he was, and the board that’d fired him was replaced.
The circumstances around both Altman’s dismissal and rehiring were remarkably murky. In their statement announcing his sacking, the original board accused Altman of “being not consistently candid in his communications,” but didn’t elaborate on what that meant. Even more worrying, Altman’s return and the restructuring of OpenAI have been characterized as a victory for AI “accelerationists” — those who believe that the tech should be developed as fast as possible, unconstrained by safety concerns. The episode proved that Altman wasn’t just capable of spearheading potentially the most significant invention of the 21st century so far. He was able to upend the ecosystem that created it within days.
This, I think, is what’s lacking in Swift as Time “Person of the Year.” Her predominance in the entertainment industry is undeniable, but her story is essentially one of becoming mega-successful within an existing framework. As she told Time, we live in a patriarchal society fueled by money, so “feminine ideas becoming lucrative means that more female art will get made.” It’s not a million miles from, “If you can’t beat ‘em, join ‘em.”
The impression that no one’s anticipating any controversy from Swift anytime soon was reinforced in November when Gannett, America’s biggest newspaper chain, hired the first-ever Swift correspondent. The journalist in question, 35-year-old Bryan West, is a self-avowed fan. Odd though some might find it to hire someone with such an obvious bias, West has argued that it’s no different than “being a sports journalist who’s a fan of the home team.” Whether you agree with that comparison or not, it’s undeniably in his professional interests for Swift to remain popular and relevant — and it seems unlikely that the appetite for stories about her will wane anytime soon.
This is why Altman, not Swift, ought to have been Time’s “Person of the Year.” His impact on the world could be exponentially more consequential, but not nearly enough people are aware of him or the implications of his technology. Every move Swift makes, however incidental, is the subject of feverish intrigue and speculation. Over in San Francisco, Altman is making moves that could change the fate of the world. And until a month ago, most of us were unaware he even existed.
© 2023 Cable News Network. A Warner Bros. Discovery Company. All Rights Reserved. 
I don’t know about Sam Altman but I agree, it shouldn’t have been Taylor. She’s just a musician who is everywhere and in everything.
At least I’ve seen both good and bad on the AI front, and good and bad from Hollywood, etc. Taylor? Just everyone praising her? For what? Her singing? Her tours? (Where people died, yet while male artists get blamed for things out of their control, Taylor gets praised for hers… it doesn’t make sense to me.) Her making us spend money we then complain about?
She’s not that good of a singer. I don’t like her anymore. She’s the same as any other singer out there.
i4technolab · 2 years
The integration of ChatGPT with .NET and Angular is a game-changer in the world of conversational AI, providing businesses with the tools they need to stay ahead of the curve and provide their customers with the best possible experience.
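The post above stays at the pitch level, so here is a hedged illustration of what wiring ChatGPT into an Angular + .NET stack typically involves: a minimal TypeScript sketch of the request a front end would hand to its backend. The payload shape follows OpenAI's public chat completions API, but every name in the snippet (the function, the model choice, the routing through a .NET proxy) is an illustrative assumption, not code from i4technolab.

```typescript
// A minimal sketch of the request-building step such an integration needs.
// The payload shape follows OpenAI's public /v1/chat/completions API;
// function and variable names are illustrative assumptions.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
}

// Append the user's new input to the running conversation history.
function buildChatRequest(history: ChatMessage[], userInput: string): ChatRequest {
  return {
    model: "gpt-3.5-turbo",
    messages: [...history, { role: "user", content: userInput }],
  };
}

// In a real Angular app, this body would be POSTed by a service to a
// .NET backend, which adds the API key server-side (never in the browser)
// and forwards the call to OpenAI.
const body = buildChatRequest(
  [{ role: "system", content: "You are a helpful assistant." }],
  "Hello!"
);
```

The key design point the sketch assumes: the Angular client never holds the OpenAI key; the .NET layer does, which is the usual reason for putting a backend between the browser and the API.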
multi-muse-transect · 7 months
Hazbin OC and New Muse Of 2024!
Name: Pix (formerly Charley)
Date And Place Of Birth: Tillamook Bay, Oregon; August 21, 1998.
Date Of Death: 2023
Affiliations: VoxTech
Height: 6’5
Occupation: Video game developer and rising overlord.
Orientation: Straight.
Relationships:
Verosika Mayday (girlfriend)
Vox (grandfather)
Goals: Wants to become the greatest and nicest overlord in all of Hell.
Likes:
Video games
Developing video games
Steak
Martinis
Waffles
Peanut butter waffles
Peanut butter and jelly waffles
Peanut butter and chocolate waffles
Verosika Mayday (girlfriend)
Hideo Kojima
Treating workers well.
Sex
Older women
Award shows
Charlie Morningstar
Lucifer Morningstar
Angel Dust
Husker
Vaggie
Executing sexual deviants and people who storm on stage to interrupt him.
Dislikes:
Crunch
Mistreating workers.
Valentino.
People who try to deflect blame.
Sexual deviants.
Firing workers.
Studio interference.
Canceling projects.
Sex during work unless it’s with Verosika and it’s during break.
Over-hiring.
Coconuts.
Mammon
Pastors and reporters that blame video games on problems
AI
Horrible video game adaptations
Russia
ChatGPT
Voice Claim: Zach Aguilar, Steven Yeun or Jason Mantzoukas.
Description: In a previous life, Pix was a college graduate named Charley who studied video game development. He soon moved to Russia to work for a small video game company that was developing an open-world MMO RPG survival game. Thing is, the company, run by two brothers, treated him and several other workers horribly: locking them in rooms, making them work 18-hour days, changing the game every time the heads played whatever triple-A game was on the craze, and fining them for any idea the heads didn't like. The heads then advertised the game as an open-world survival MMO RPG when it was really a horrible extraction shooter. The game came out to extreme backlash, and Charley’s career as a developer was basically over, since his name was attached to it.
What made matters worse, the company shut down and left him jobless, and the brothers who ran it made a cope post (written via ChatGPT) blaming the game’s shutdown on bloggers and influencers while dismissing the allegations, doubling down by saying they would continue to make games and that the only thing that could stop them was being killed. Charley bought some weapons from a military stockpile, broke into their office, let the workers who were clearing out leave, then shot both brothers before getting arrested. He was a divisive figure in the media: no one else was hurt in his spree, and he had good intentions, making his former bosses pay and making sure no young, up-and-coming talent like him would get abused. Charley had no regrets and died in prison choking on a steak, after which he went to Hell. Now, as Pix, Charley has a second lease on life; he became a famous game developer after signing with Vox and a rising overlord with thousands of souls under his contract.
During his stint, Pix met Verosika Mayday at a VO session when Vox wanted celebrity talent in one of his games. Despite Pix not being a fan of celebrity voice actors, the two quickly became friends, as they had gone through similar situations. After that, they decided to start dating.
Personality: Pix is a rather interesting sinner, as he was simply pushed too far by his environment and had good intentions before committing his sin. He is very much a kind person who is loyal to his workers and will lend a hand during development. His loyalty is exemplified by the food hall in his building and by how he gives his employees enough time to refine his games, with a mandatory four-hour break and living spaces in case they want to work overtime.
He is very passionate about his craft, which is why his games are known to be groundbreaking: the effort he and his team put into them shows. But he's also very demanding and prefers to hire very experienced developers over bringing new talent into the fold. This demanding attitude made him one of the more elite developers of Hell itself. Pix also greatly dislikes people who mistreat their workers, as well as sexual harassers like Valentino, whom he threatened to kill when he saw him harass one of his devs.
His kindly demeanor also makes him one of the most wanted overlords in Hell to work for, as his contracts are less about owning one’s soul and more business-based. In fact, those who take the contract get benefits like pay, sick days, and housing. Under Pix, nothing much changes for the contracted, except that he demands proper behavior and a solid work ethic.
Pix is also known to be very disgusted and angered by people who blame video games for problems. He sees it as power-hungry assholes trying to pin the blame on something to gain more influence over others.
But despite his kindly attitude, Pix can be a perfectionist. This has resulted in his games sometimes having long development cycles, as he’s known to nitpick. On top of that, his passion for developing is so strong that he often finds himself getting little to no sleep. In a way, his passion became a Hell of its own.
There is also a darker side: he will literally execute any workers who are being deviants, or anyone being rambunctious. This led to him killing a worker for sexually harassing a coworker, and to shooting someone in the head on live television at an award show when they ran up on stage and interrupted his speech.
Pix also had greater ambitions and wanted to become an overlord like his grandfather Vox, whom he works for. He scouts for talent who are down on their luck and need help, which earns him great admiration from his workers. Pix sought to change the system of Hell by leading by example, as his kindness and eye for programming talent helped him become an up-and-coming overlord.
At the same time, though, he is known to be very power-hungry. This stemmed from the feeling of hopelessness he had as an overworked game developer. Because of it, he always wants to be in charge and will do anything to stay on top, a trait similar to his grandfather Vox.
His relationship with Verosika Mayday got off to a rocky start, as he wasn’t a fan of celebrity voice talent. But the two began to hang out more and more, eventually “practicing lines” from his script for the game. When they officially began dating, the relationship raised some eyebrows, since he’s a sinner and she’s a hellborn, and Vox wasn’t exactly a fan of it either. All in all, Pix tries his best not to let Verosika down.
Abilities
Demonic Transformation: Much like Alastor, Pix can turn into a being made of pure pixels.
Digital Materialization: He can manipulate artificial light and project it as hard-light constructs, letting him create weapons, holograms, and even his own minions.
Skillset
Marksmanship: Pix is known to be a great marksman, thanks to learning from a relative, and used his skills to murder his former bosses.
Sense of detail: He looks over all of his games before release, discovering bugs others missed and adding details to make them more immersive.
Skilled Developer: In life and in death, Pix was a very talented video game developer who aimed high most of the time. He knows the ins and outs of programming.