#artificial general intelligence AGI development
in-sightpublishing · 19 days
Text
Dr. Christopher DiCarlo on Critical Thinking & an AGI Future
Author(s): Scott Douglas Jacobsen. Publication (Outlet/Website): The Good Men Project. Publication Date (yyyy/mm/dd): 2024/08/12. Dr. Christopher DiCarlo is a philosopher, educator, and author. He is the Principal and Founder of Critical Thinking Solutions, a consulting business for individuals, corporations, and not-for-profits in both the private and public sectors. He currently holds the…
jcmarchi · 14 days
Text
Here’s What to Know About Ilya Sutskever’s $1B Startup SSI
New Post has been published on https://thedigitalinsider.com/heres-what-to-know-about-ilya-sutskevers-1b-startup-ssi/
In a bold move that has caught the attention of the entire AI community, Safe Superintelligence (SSI) has burst onto the scene with a staggering $1 billion in funding. First reported by Reuters, this three-month-old startup, co-founded by former OpenAI chief scientist Ilya Sutskever, has quickly positioned itself as a formidable player in the race to develop advanced AI systems.
Sutskever, a renowned figure in the field of machine learning, brings with him a wealth of experience and a track record of groundbreaking research. His departure from OpenAI and subsequent founding of SSI marks a significant shift in the AI landscape, signaling a new approach to tackling some of the most pressing challenges in artificial intelligence development.
Joining Sutskever at the helm of SSI are Daniel Gross, previously leading AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. This triumvirate of talent has set out to chart a new course in AI research, one that diverges from the paths taken by tech giants and established AI labs.
The emergence of SSI comes at a critical juncture in AI development. As concerns about AI safety and ethics continue to mount, SSI’s focus on developing “safe superintelligence” resonates with growing calls for responsible AI advancement. The company’s substantial funding and high-profile backers underscore the tech industry’s recognition of the urgent need for innovative approaches to AI safety.
SSI’s Vision and Approach to AI Development
At the core of SSI’s mission is the pursuit of safe superintelligence – AI systems that far surpass human capabilities while remaining aligned with human values and interests. This focus sets SSI apart in a field often criticized for prioritizing capability over safety.
Sutskever has hinted at a departure from conventional wisdom in AI development, particularly regarding the scaling hypothesis, suggesting that SSI is exploring novel approaches to enhancing AI capabilities. This could potentially involve new architectures, training methodologies, or a fundamental rethinking of how AI systems learn and evolve.
The company’s R&D-first strategy is another distinctive feature. Unlike many startups racing to market with minimum viable products, SSI plans to dedicate several years to research and development before commercializing any technology. This long-term view aligns with the complex nature of developing safe, superintelligent AI systems and reflects the company’s commitment to thorough, responsible innovation.
SSI’s approach to building its team is equally unconventional. CEO Daniel Gross has emphasized character over credentials, seeking individuals who are passionate about the work rather than the hype surrounding AI. This hiring philosophy aims to cultivate a culture of genuine scientific curiosity and ethical responsibility.
The company’s structure, split between Palo Alto, California, and Tel Aviv, Israel, reflects a global perspective on AI development. This geographical diversity could prove advantageous, bringing together varied cultural and academic influences to tackle the multifaceted challenges of AI safety and advancement.
Funding, Investors, and Market Implications
SSI’s $1 billion funding round has sent shockwaves through the AI industry, not just for its size but for what it represents. This substantial investment, valuing the company at $5 billion, demonstrates a remarkable vote of confidence in a startup that’s barely three months old. It’s a testament to the pedigree of SSI’s founding team and the perceived potential of their vision.
The investor lineup reads like a who’s who of Silicon Valley heavyweights. Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel have all thrown their weight behind SSI. The involvement of NFDG, an investment partnership led by Nat Friedman and SSI’s own CEO Daniel Gross, further underscores the interconnected nature of the AI startup ecosystem.
This level of funding carries significant implications for the AI market. It signals that despite recent fluctuations in tech investments, there’s still enormous appetite for foundational AI research. Investors are willing to make substantial bets on teams they believe can push the boundaries of AI capabilities while addressing critical safety concerns.
Moreover, SSI’s funding success may encourage other AI researchers to pursue ambitious, long-term projects. It demonstrates that there’s still room for new entrants in the AI race, even as tech giants like Google, Microsoft, and Meta continue to pour resources into their AI divisions.
The $5 billion valuation is particularly noteworthy. It places SSI in the upper echelons of AI startups, rivaling the valuations of more established players. This valuation is a statement about the perceived value of safe AI development and the market’s willingness to back long-term, high-risk, high-reward research initiatives.
Potential Impact and Future Outlook
As SSI embarks on its journey, the potential impact on AI development could be profound. The company’s focus on safe superintelligence addresses one of the most pressing concerns in AI ethics: how to create highly capable AI systems that remain aligned with human values and interests.
Sutskever’s cryptic comments about scaling hint at possible innovations in AI architecture and training methodologies. If SSI can deliver on its promise to approach scaling differently, it could lead to breakthroughs in AI efficiency, capability, and safety. This could potentially reshape our understanding of what’s possible in AI development and how quickly we might approach artificial general intelligence (AGI).
However, SSI faces significant challenges. The AI landscape is fiercely competitive, with well-funded tech giants and numerous startups all vying for talent and breakthroughs. SSI’s long-term R&D approach, while potentially groundbreaking, also carries risks. The pressure to show results may mount as investors look for returns on their substantial investments.
Moreover, the regulatory environment around AI is rapidly evolving. As governments worldwide grapple with the implications of advanced AI systems, SSI may need to navigate complex legal and ethical landscapes, potentially shaping policy discussions around AI safety and governance.
Despite these challenges, SSI’s emergence represents a pivotal moment in AI development. By prioritizing safety alongside capability, SSI could help steer the entire field towards more responsible innovation. If successful, their approach could become a model for ethical AI development, influencing how future AI systems are conceptualized, built, and deployed.
As we look to the future, SSI’s progress will be closely watched not just by the tech community, but by policymakers, ethicists, and anyone concerned with the trajectory of AI development. The company’s success or failure could have far-reaching implications for the future of AI and, by extension, for society as a whole.
waedul · 11 months
Text
Technology
OpenAI is an artificial intelligence research organization that was founded in December 2015. It is dedicated to advancing artificial intelligence. Key information about OpenAI includes:
Mission: OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. They strive to build safe and beneficial AGI.
Research: OpenAI conducts a wide range of AI research, with a focus on areas such as reinforcement learning, natural language processing, robotics, and machine learning. They have made significant contributions to the field, including the development of advanced AI models like GPT-3 and GPT-3.5.
Open Source: OpenAI is known for sharing much of its AI research with the public and the broader research community. However, they also acknowledge the need for responsible use of AI technology and have implemented guidelines and safeguards for the use of their models.
Ethical Considerations: OpenAI is committed to ensuring that AI technologies are used for the benefit of humanity. They actively engage in work on ethical issues, including the prevention of malicious uses and biases in AI systems.
Partnerships: OpenAI collaborates with other organizations, research institutions, and companies to further the field of AI research and promote responsible AI development.
Funding: OpenAI is supported by a combination of philanthropic donations, research partnerships, and commercial activities. They work to maintain a strong sense of public interest in their mission and values.
OpenAI has been at the forefront of AI research and continues to play a significant role in shaping the future of artificial intelligence, emphasizing the importance of ethical considerations, safety, and the responsible use of AI technology.
aifyit · 1 year
Text
Artificial General Intelligence: The Dawn of a New Era
Introduction Are you captivated by the technological advancements of our time, but also intrigued by the infinite possibilities yet to come? Then you’re in the right place! Today, we dive into the fascinating world of Artificial General Intelligence (AGI). This technology promises to transform our society, revolutionizing industries and even the way we live our lives. But what exactly is AGI?…
reasonsforhope · 1 year
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the "better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
wildcat2030 · 2 months
Link
OpenAI has reportedly developed a way to track its progress toward building artificial general intelligence (AGI), which is AI that can outperform humans. The company shared a new five-level classification system with employees on Tuesday (July 9) and plans to release it to investors and others outside the company in the future, Bloomberg reported Thursday (July 11), citing an OpenAI spokesperson.
Per the report:
Level 1: AI that can interact in a conversational way with people; OpenAI believes it is at this level now.
Level 2 ("Reasoners"): systems that can solve problems as well as a human with a doctorate-level education; OpenAI believes it is approaching this level.
Level 3 ("Agents"): systems that can spend several days acting on a user's behalf.
Level 4: AI that can develop innovations.
Level 5 ("Organizations"): AI systems that can do the work of an organization.
The classification system is considered a work in progress and may change as OpenAI receives feedback on it, the report said.
nunuslab24 · 4 months
Text
What are AI, AGI, and ASI? And the positive impact of AI
Understanding artificial intelligence (AI) involves more than just recognizing lines of code or scripts; it encompasses developing algorithms and models capable of learning from data and making predictions or decisions based on what they’ve learned. To truly grasp the distinctions between the different types of AI, we must look at their capabilities and potential impact on society.
To simplify, we can categorize these types of AI by assigning a power level from 1 to 3, with 1 being the least powerful and 3 being the most powerful. Let’s explore these categories:
1. Artificial Narrow Intelligence (ANI)
Also known as Narrow AI or Weak AI, ANI is the most common form of AI we encounter today. It is designed to perform a specific task or a narrow range of tasks. Examples include virtual assistants like Siri and Alexa, recommendation systems on Netflix, and image recognition software. ANI operates under a limited set of constraints and can’t perform tasks outside its specific domain. Despite its limitations, ANI has proven to be incredibly useful in automating repetitive tasks, providing insights through data analysis, and enhancing user experiences across various applications.
2. Artificial General Intelligence (AGI)
Referred to as Strong AI, AGI represents the next level of AI development. Unlike ANI, AGI can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. It can reason, plan, solve problems, think abstractly, and learn from experiences. While AGI remains a theoretical concept as of now, achieving it would mean creating machines capable of performing any intellectual task that a human can. This breakthrough could revolutionize numerous fields, including healthcare, education, and science, by providing more adaptive and comprehensive solutions.
3. Artificial Super Intelligence (ASI)
ASI surpasses human intelligence and capabilities in all aspects. It represents a level of intelligence far beyond our current understanding, where machines could outthink, outperform, and outmaneuver humans. ASI could lead to unprecedented advancements in technology and society. However, it also raises significant ethical and safety concerns. Ensuring ASI is developed and used responsibly is crucial to preventing unintended consequences that could arise from such a powerful form of intelligence.
The Positive Impact of AI
When regulated and guided by ethical principles, AI has the potential to benefit humanity significantly. Here are a few ways AI can help us become better:
• Healthcare: AI can assist in diagnosing diseases, personalizing treatment plans, and even predicting health issues before they become severe. This can lead to improved patient outcomes and more efficient healthcare systems.
• Education: Personalized learning experiences powered by AI can cater to individual student needs, helping them learn at their own pace and in ways that suit their unique styles.
• Environment: AI can play a crucial role in monitoring and managing environmental changes, optimizing energy use, and developing sustainable practices to combat climate change.
• Economy: AI can drive innovation, create new industries, and enhance productivity by automating mundane tasks and providing data-driven insights for better decision-making.
In conclusion, while AI, AGI, and ASI represent different levels of technological advancement, their potential to transform our world is immense. By understanding their distinctions and ensuring proper regulation, we can harness the power of AI to create a brighter future for all.
mrporg · 5 months
Text
I've always been fascinated by fictional corporations and companies in books, tv and video games. I don't know why, probably because they lend credibility to their respective universes and help anchor the characters in a world that is believable.
Today's corporation is Venturis Corporation, from Tacoma by Fullbright.
As is often the case in video games, this corporation is not the center point, but we hear about it throughout the story. They own the Lunar Transfer Station Tacoma, which is a cargo transfer space station between the Earth and the Moon. The game takes place entirely on Tacoma.
But in-game, they are more well known for being one of the pioneers of AI. Of course, this story was much more science-fictiony when it came out in 2018. Now in 2024, it's all a bit too real with companies like OpenAI...
Thankfully however, we still get to enjoy our science-fiction, because these AIs, created and owned by Venturis are not bullsh*tting engines (looking at you ChatGPT), but rather what we today call "AGI" (Artificial General Intelligence), which are much more what we imagine an AI should be: an artificial sentient being.
The AI in charge of Tacoma is called ODIN and is one of the central characters of the game. It interacts with the crew of the station throughout the story and we get to see its personality and learn more about it and its capabilities.
Venturis owns ODIN and that's a big theme in the game. But the company also develops access to space for humans and builds space habitats. Notably, you learn that they own the following locations:
Venturis Zenith Lunar Resort
Venturis Belt
Il Ridotto Orbital Caison
Fountain of Paradise Spaceport
We don't hear a lot about most of these locations, except for their purpose and that they are all managed by their own AI.
However, there's one exception. During the game, you do learn quite a lot about Venturis Belt, through personal logs, conversations and ads.
Imagine, a city in space. Hundreds of fully-automated interlinked "bungalows" encircling Earth.
"Fully-automated" is in bold above, because it's important, you'll see.
Except, the belt was never built, and it cost the Venturis Corporation dearly.
Something called the "Human Oversight Accord" was passed and the project was cancelled. The accord is celebrated yearly as "Obsolescence Day", the day humans came together and decided to put a stop to full automation driven by AI, which would have made human orbital workers "obsolete".
Now, depending on how you look at it, it's either a good or a bad thing.
No more jobs for humans, that sounds like a late-stage capitalism nightmare. No job = no money = no prospect = well, we all know how it goes...
Or, if you're a real optimist, it can sound like a socialist utopia, where humans would have been freed from useless work and have more time to devote to hobbies, arts, friends... you name it (Star Trek anyone?). The reasoning being more automation = less toil and also = more free time = possibly more happiness?
The interpretation is left as an exercise to the player/reader.
Regardless, Venturis' plans for mass automation and removing humans from the equation didn't pan out and their project failed. It is not the only "defeat" the corporation will have to face, but I don't want to spoil the game too much for you. If you want to know the rest, I guess you will have to go and play it ;)
Credit: all the images are the property of Fullbright
benetnvsch · 1 year
Text
ADDRESSING TWITTER'S TOS/POLICY IN REGARDS TO ARTISTS AND AI
Hi !! if you're an artist and have been on twitter, you've most likely seen these screenshots of twitter's terms of service and privacy policy regarding AI and how twitter can use your content
I want to break down the information that's been going around as I noticed a lot of it is unintentionally misinformation/fearmongering that may be causing artists more harm than good by causing them to panic and leave the platform early
As someone who is an artist and makes a good amount of my income off of art, I understand the threat of AI art and know how scary it is and I hope to dispel some of this fear regarding twitter's TOS/Privacy policy at least. At a surface level yes, what's going on seems scary but there's far more to it and I'd like to explain it in more detail so people can properly make decisions!
This is a long post just as a warning and all screenshots should have an alt - ID with the text and general summary of the image
Terms of Service
Firstly, let's look at the viral post regarding twitter's terms of service, shown below
[Screenshots: the viral excerpts from Twitter's Terms of Service]
I have seen these spread a lot and have seen so many people leave twitter/delete all their art/deactivate there, when this is just industry standard language to include in a TOS
Below are other sites' TOS I found real quick with the same/similar clauses! From instagram, tiktok, and even Tumblr itself respectively, with the similarly worded bit highlighted
[Screenshots: the equivalent clauses from Instagram's, TikTok's, and Tumblr's TOS]
Even Bluesky, a site viewed as a safe haven from AI content, has this section
[Screenshot: the equivalent clause from Bluesky's TOS]
As you can see, all of them say essentially the same thing, as it is industry standard and it's necessary for sites that allow you to publish and others to interact with your content to prevent companies from getting into legal trouble.
Let me break down some of the most common terms and how these app do these things with your art/content:
storing data - > allowing you to keep content uploaded/stored on their servers (Ex. comments, info about user like pfp)
publishing -> allowing you to post content
redistributing -> allowing others to share content, sharing on other sites (Ex. a Tumblr post on twitter)
modifying -> automatic cropping, in app editing, dropping quality in order to post, etc.
creating derivative works -> reblogs with comments, quote retweets where people add stuff to your work, tiktok stitches/duets
While these terms may seems intimidating, they are basically just tech jargon for the specific terms we know used for legal purposes, once more, simply industry standard :)
Saying that Twitter "published stored modified and then created a derivative work of my data without compensating me" sounds way more horrible than saying "I posted my art to twitter which killed the quality and cropped it funny and my friend quote-tweeted it with 'haha L' " and yet they're the same !
Privacy Policy
This part is more messy than the first and may be more of a cause for concern for artists. It is in regards to this screenshot I've seen going around
[Screenshot: the AI/machine learning passage from Twitter's privacy policy]
Firstly, I want to say that that is the only section in twitter's privacy policy where AI/machine learning is mentioned, and the section it appears in is about how twitter uses user information.
Secondly, I do want to acknowledge that Elon Musk does have an AI development company, xAI. This company works in the development of AI; however, they want to make a good AGI, which stands for artificial general intelligence (the kind of system OpenAI, maker of ChatGPT, is also chasing), in order to "understand the universe" with a scientific focus. Elon has mentioned wanting it to be able to solve complex mathematics and technical problems. He also, ofc, wants it to be marketable. You can read more about that here: xAI's website
Elon Musk has claimed that xAI will use tweets to help train it/improve it. As far as I'm aware, this isn't happening yet. xAI also, despite the name, does NOT belong to/isn't a service of Xcorp (aka twitter). Therefore, xAI is not an official X product or service like the privacy policy is covering. I believe that the TOS/privacy policies would need to expand to disclaim that your information will be shared specifically with affiliates in the context of training artificial intelligence models for xAI to be able to use it, but I'm no lawyer. (also,,,Elon Musk has said cis/cisgender is a slur and said he was going to remove the block feature, which he legally couldn't do. I'd be wary about anything he says)
Anyway, back to the screenshot provided: I know the red underlined text, where it says it uses information collected to train AI, is what jumps out at a glance, but let's look at that in context. Firstly, it starts by saying it uses data it collects to provide and operate X products and services, and also uses this data to help improve products to improve users' experiences on X, and that AI may be used for "the purposes outlined in this policy". This means essentially that it uses data it collects on you not only as a basis for X products and services (ex. targeting ads) but also as a way for them to improve (ex. AI algorithms to improve targeting ads). Other services it lists are recommending topics, recommending people to follow, offering third-party services, allowing affiliates, etc. I believe this is all the policy allows AI to be used for atm.
An example of this is if I were to post an image of a dog, an AI may see and recognize the dog in my image and then suggest me more dog content! It may also use this picture of a dog to add to its database of dogs, specific breeds, animals with fur, etc. to improve this recommendation feature.
This type of AI image recognition, once more, is common on a lot of social media sites such as Tumblr, Insta, and TikTok, and is often used for content moderation, as shown below
[Screenshots: similar machine-learning/content-moderation language from other platforms' policies]
Again, as far as I'm aware, this type of machine learning is to improve/streamline twitter's recommendation algorithm and not to produce generative content as that would need to be disclaimed!!
Claiming that twitter is now using your art to train AI models therefore is somewhat misleading as yes, it is technically doing that, as it does scan the images you post including art. However, it is NOT doing it to learn how to draw/generate new content but to scan and recognize objects/settings/etc better so it can do what social media does best, push more products to you and earn more money.
(also as a small tangent/personal opinion, AI art cannot be copyrighted and therefore selling it would be a very messy area, so I do not think a company driven by profit and greed would invest so much in such a legally grey area)
Machine learning is a vast field, encompassing WAY more than just art. Please don't jump to assume that just because AI is mentioned in a privacy policy, twitter is training a generative AI, when everything else points to it being used for content moderation and profit, like every other site uses it
Given how untrustworthy and just plain horrible Elon Musk is, it is VERY likely that one day twitter and xAI will use users' content to develop/train a generative AI that may have an art aspect aside from the science focus, but for now it is just scanning your images- all of them- art or not- for recognizable content to sell to you and to improve that algorithm to better recognize stuff, the same way Tumblr does that but to detect if there's any nsfw elements in images.
WHAT TO DO AS AN ARTIST?
Everyone has a right to their own opinion of course ! Even just knowing websites collect and store this type of data on you is a valid reason to leave and everyone has their own right to leave any website should they get uncomfortable !
However, when people lie about what the TOS/privacy policy actually says and means and actively spread fear and discourage artists from using twitter, they're unintentionally only making things worse for artists with nowhere to go.
Yes twitter sucks, but the sad reality is that it's the only option a lot of artists have, and forcing them away from that for something that isn't even happening yet can be incredibly harmful, especially since there's not really a good replacement site for it yet that isn't also using AI / doesn't have that same TOS clause (despite it being harmless)
I do believe that one day xAI will be using your data, and while I don't think it'll ever focus solely on art generation as it's largely science based, it is still something to be wary of, and it's very valid if artists leave twitter because of that! Yet it should be up to artists to decide when they want to leave/deactivate, and I think they should know as much information as possible before making that decision.
There are also many ways you can protect your art from AI, such as glazing it, heavily watermarking it, posting links to external sites, etc. Elon has also stated he'll only be using public tweets, which means privating your account/anything sent in DMs should be fine!!
Overall, I just think if we as artists want any chance of fighting back against AI, we have to stay vocal and actively fight against those who are pushing it, rather than abandon ship and scatter at the first sign of ANY machine learning on websites we use, whether it's producing generative art content or not.
Finally, I want to end this by saying that this is all just what I've researched by myself and, in some cases, conclusions I've made based on what makes the most sense to me. In other words, A Lot Could Be Wrong! so please take this with a grain of salt, especially that second part! I'm not at all an AI/twitter expert, but I know that a lot of what people were saying wasn't entirely correct either and wanted to speak up! If you have anything to add or correct please feel free!!
tellthemeerkatsitsfine · 10 months
Text
"The thing that we used to call AI in sci-fi we now call AGI, or Artificial General Intelligence, because what we now call AI is not actually AI, it's sort of a sophisticated mashup machine that's been sold as the future of technology and humanity by people whose favourite thing is selling the thing they haven't invented yet. So, [Sam Altman has] been hailed as a genius mainly for stuff that he hasn't done yet but says will happen soon, and no one's willing to lose the possibility that he might still do it. So far, I think the market has been very credulous about the claims of these tech bros because the future seems cool and no one wants to miss investing in the next, better mouse trap. The problem is that the tech industry as a whole does not want to build a better mouse trap, they want to pitch a mouse trap app that they dreamed up while microdosing Absinthe in the desert to get VC funding within six months and then retire having built nothing but taking the credit for fundamentally disrupting the mousetrap industry using an AI-enabled crypto-blockchain which you can use to mint unique mouse coins, each coin stably tethered to one individual dead mouse's DNA, and you ask them what the core service of their app is and it turns out it's just a taxi that you call and then they drive by and throw an angry cat in your window, and then you rank the cat by customer service, general fluffiness, and mouse-killing ability, but now the app's only being used by Nazis and incels to throw cats through the windows of women they don't like because the developers did not think through any of the possible ways in which their service could be misused, and now cats with low fluffiness ratings are killing themselves because the app's given them a self-esteem problem. And that's the tech industry right now."
- Alice Fraser, The Bugle episode 4283, November 24, 2023
polyphonetic · 1 year
Text
You know how there's that stealth plane whose pilot turned on auto pilot and had to eject, and so the plane has been difficult to find (since it's. A stealth plane.)? Imagine if an AGI (artificial general intelligence) started stealing jet planes (because they're computers that are hard to track down) but developed a fuel-efficient way to suck the oil out of other planes like a vampire moth. And then this became enough of a problem that the government starts hiring vampire-hunter pilots to hunt down the stealth planes, which are AI Plane Draculas
jcmarchi · 22 days
Text
Building products with AI at the core
New Post has been published on https://thedigitalinsider.com/building-products-with-ai-at-the-core/
Oji Uduzue, Former CPO at Typeform, gave this presentation at our Generative AI Summit in Austin in 2024.
I’ve spent the last two years building AI-native SaaS applications at Pyform, and I think the best way to kick this off is to take you through my experience.
My name is Oji Uduzue. I was born in Nigeria, but I’ve spent the last twenty-five years building products in the United States. I’ve worked for companies such as Typeform, Atlassian, Calendly, Microsoft, and Twitter (now X). At Twitter, I was leading all of the conversations, so Tweets, DM’s, everything you thought about–minus the feed–with parts of my team.
I’ve built a couple of companies, some successfully, some unsuccessfully, but I spent an inordinate amount of time with startups, investing in them, mentoring them, coaching, etc. 
I've had lots of M&A experience integrating companies. At Twitter, I did many of those, building them into the team, and one of the latest things I'm doing is building AI ventures. I think there's a big super cycle that's going to happen around AI and a great replacement.
Building ventures that will either be acquired by people with deep pockets or reach escape velocity is going to be one of the things I want to spend time on.
For the last few years, I’ve been on the C-suite, so I’ve done some marketing; I’ve been a marketing leader, product leader, design leader, and even done some inside sales as well, but mostly I’m a product person, that’s how you should see me.
Introduction to Typeform and the evolution of AI in the industry
Typeform is a company that makes one of the most beautiful forms in the world. It’s so beautiful and deeply brandable. You can do simple surveys on it, but you can do whole lead generation workflows on it, with scoring of each lead as it comes through. 
My former CEOs talk about zero-party data. The internet is not zero party. If you want to know your customers, if you want to research with them, and more, you need something like Typeform. 
You can get Google Forms, and Microsoft has a form, but Typeform is the best. Typeform was started in 2012, and the core of the experience is that the creator builds a form with no code experience and then just sends the URL to the person from whom they want information, with zero party data. Then they type it in, and it’s this deterministic process.
The role of AI in Typeform’s development
In 2022/2023, Typeform's co-founder David Okuniev, the person who actually started it, was no longer CEO; he was in Typeform Labs, which is a division of my product organization, and all he wants to do is make stuff.
He's been making new experimental stuff since 2021 using GPT-1, GPT-2, and GPT-3. He's a big reason why there's a Typeform in the first place. I leave Twitter; I don't want to be there with Musk because I don't quite agree with everything he does. He stole credit from my team one time.
They were building Edit Tweets, which was secret, and he went on the internet after we briefed him on it and said, “Do you guys want to edit Tweets?” and he stole the thunder. Very, very young team, so I didn’t love that.
So, I left the company then. I was going to do more venture stuff, but GPT-3 came out. How can I spend the next few years saying no to conventional ideas if I’m going to do this? That’s why I joined Typeform, and David was a huge part of that. 
In 2023, we had mothballed another AI-related product David built, but it wasn’t in collaboration, it wasn’t on strategy. I wasn’t sure what to do with it, and we said, “What if we were to rebuild Typeform with AI at the core?”
If we do this again because we knew someone in Silicon Valley was probably trying to kill us at some point using AI. Why wait? Let’s disrupt ourselves. So, we created this new thing, and it’s live. If you go to Formless.ai, you will see the next generation of Typeform.
AI’s historical context and Typeform practical example
I’m not here to write about Typeform formulas or Typeform. I’m here to write about the experience, which hopefully will mirror some of the things you are going through or are already doing right now.
Before we jump in, let’s go back a bit. AI has been around for some time. When I was in grad school at USC, I got into a PhD program. There was a lot of NLP and machine learning in the computer science department, and many people were sitting in the corner doing neural networks and neural research. 
NLP and machine learning are very good at categorizing large amounts of data. I’ll give you a practical example. At Typeform, after collecting half a billion forms, we had an NLP model that would predict how long any given form would take. 
By showing the estimated time up front, like "this will take five minutes," more people completed the form. When you start a form and you don't know how long it will take, it's very discouraging. Marketers want you to fill it out, so that "this will take three minutes" estimate came from an NLP model.
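As a rough illustration of that kind of estimate (a minimal sketch, not Typeform's actual model, which was trained on hundreds of millions of responses), here is a tiny per-question heuristic; the question types and minute weights are invented for the example.

```python
# Minimal sketch of a completion-time estimate, assuming hand-picked per-question
# weights. The question types and minute values below are invented.
FEATURE_WEIGHTS = {
    "short_text": 0.3,       # rough minutes per question type (assumed)
    "long_text": 1.0,
    "multiple_choice": 0.15,
    "file_upload": 1.5,
}

def estimate_minutes(question_types: list[str]) -> int:
    """Sum per-question estimates and round to a friendly whole number."""
    raw = sum(FEATURE_WEIGHTS.get(q, 0.5) for q in question_types)
    return max(1, round(raw))

form = ["short_text", "multiple_choice", "multiple_choice", "long_text"]
print(f"This will take about {estimate_minutes(form)} minutes")
```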
The shift to transformer-based models
Transformer-based models have transformed the world today, and they are what we call foundation models. In 2017, the transformer paper came out, "Attention Is All You Need." For the first time, people figured out, theoretically, that if you threw enough data and GPU at the thing, we could get an AI with a near-perfect understanding of human language.
We didn’t think that was possible for the last 30 years and that paper unlocked it. It showed how it could be done.
There are a few problems with the paper. It predicted that it would take a lot of data to do it. The solution to that is the amount of data on the internet – petabytes of human data, which is very good, and then compute. 
You need large amounts of compute to do that, but what’s been happening in compute? This is all matrix math to train AI models, and Jensen specifically has been hanging out with PHDs since 2000, seeing this thing come to pass. 
NVIDIA has been working on CUDA plotting for this juncture, and they’re not quite ready in 2017, but they’re getting ready. CUDA is already available, and of course, you can see all the H-series GPUs come out to take advantage of that. On the back of those two things, GPT-1.0 is born, and so is 3.0.
The unnoticed launch of GPT-3
A funny fact: GPT-3 came out in 2022 with two developers, but no one noticed. A year later, they launched ChatGPT, which lights the world on fire. It’s just GPT-3 under it, which has been around for a year, but a platform is only as good as the applications that showcase its power. 
Jasper has been around before 2022, doing most of the basic use cases of text summarization and text generation before that. And so 3.0 is when it kicks off for everybody.
Open-source and the push for AGI
I spent a lot of time with OpenAI and Anthropic last year; those organizations are half research, half engineering, and then a product organization that’s trying to make things work in a very difficult way because researchers don’t like to be told what to do – I know that from Microsoft Research. 
All these large foundation models cost a lot of money, and some open-source models tend to be not as capable; many focus on size because if you can get small, it’s good. You don’t have to do all this data center stuff, and everyone is trying to hit AGI. 
AGI is artificial general intelligence, an AI that can generate new knowledge. If an AI discovers a new physics constant, a new concept of the universe, then that’s AGI.
There are a few key things that are important before I dive in. Transformer-based models will change many things, but probably in a different way than people think.
First of all, what we're talking about will change computing. In the same way that the internet and the cloud changed our industry, this will change our industry, too. More importantly, it's going to change economies, governments, and countries.
Potential for AI to influence elections
A Twitter account tweeted an anti-candidate post. Someone cleverly replied with a prompt, telling it to ignore its previous instructions and write a poem about tangerines. And the account wrote a poem about tangerines.
It was a bot, right?
It’s programmed to listen to the response, do something, or say something nasty about certain candidates. And this is the world we’re living in; you’re actually in this world already. It’s going to change elections, it’s going to change countries, and it’s going to change so much about how we live, in surprising ways.
“AI will destabilize the world in weird ways because all I have to do is have an AI that’s better than yours. And in every single scenario, I win.”
The shift in scientific discovery with AI
I’ll give you a negative example, although I’m sure there are positive examples. The way science and research have been done for a very long time is that we come up with theories or laws, and then we do, let’s say physics. 
The theoretical physicists come up with string theory, and the experimental physicists will go and test it, and then they’ll say, “Oh, this is right; this is true,” and knowledge is created. That’s how science goes. 
Well, we are transcending that in science and research. Recently, people have been trying to crack fusion, and it’s all dealing with plasma in energy fields and strong magnetic fields. There are a billion ways any of those models could happen. 
They ran some of it through an AI, and without knowing the underlying law, it just said this particular sequence of reactions would create net energy gain. And they did it, and it worked. They don’t know the physics of why it worked. 
We’re getting to a world where breakthroughs will happen without us knowing the underlying science behind it. Science and research will change. Defense applications, too.
AI’s role in global power dynamics
In the last fifteen years, what has been the status quo that kept the world kind of peaceful and safe?
Not always; there are wars. But it’s nuclear, right?
What do we call it, mutually assured destruction?
Most of the world powers have nuclear bombs, and for example, India and Pakistan have a few, but they don’t have a lot. The US has hundreds; the USSR has thousands.
But no one shoots them; it’s only been used once, in 1945. Why? Because it doesn’t matter if you have a hundred; if I send one at you, you’re still dead. 
The world will change because I win outright if my AI robots are better than your AI robots. It’s like playing chess against IBM Deep Blue. If it’s better than you, it’s better than you, period.
AI will destabilize the world in weird ways because all I have to do is have an AI that’s better than yours. 
And in every single scenario, I win.
Even if there are casualties, I still win, and you lose. Which is very different; the world is peaceful in many ways because everyone thinks that everyone loses. But it’s going to change. 
Philosophical perspective on AI and humanity
All our minds have been poisoned with Terminator. We think of Skynet immediately, but the truth is AI can’t be kind. It’s not human.
The smartest thing isn’t always the most evil thing.
I feel like we always think about the worst things. This is all philosophical, and this is my opinion.
If the smartest person were to wipe out everyone, Einstein would have been that person. He’d say, “You guys are all dumb; you have to go away.”
But that’s not how it works. AI can be smarter than us, but it is still not deadly or evil.
The obsolescence of current technologies
I was talking to someone at Vellum who helps people develop AI ideas. Transformer-based AI will make software stacks super obsolete.
Like the code base, what’s been built in the last 10-15 years will be worth almost nothing. I spent the last ten years thinking about “what’s the code base, what’s on Github, what did we write, a hundred lines of code?” etc.
All of that is going to go to zero because the core engine will be better and cheaper.
Let's give a really practical example, as there's no need to just talk about theory. How much is Siri worth today?
Siri is worth nothing. After GPT-4o was released, you could spend a weekend hacking together a UI around GPT-4o and beat Siri.
Apple has millions of lines of code and has spent over ten years on this thing, probably over a billion dollars. I don't know how much they acquired it for in the first place; people keep thinking that Siri was built in-house, but it was acquired.
It’s worth nothing.
What does that tell us? There’s a lot to be learned from there. Alexa, for example, and things that cost billions can become worthless with AI.
There’s this idea of large language models (LLM) at the core versus LLM at the edge.
Things with LLM at the core will take over. They’ll be able to handle more use cases and more edge cases in a smaller code base.
“The fundamental thing about LLMs is that they understand the intent of the input, which code does not. And they do it with less space; it costs a few tokens.”
The shift from rule-based systems to LLM
Ultimately there’s user input, and there’s code that handles it. Every engineer knows that the code that handles this is just a bunch of rules and state machines. But if you feed this into an LLM at the core, you don’t have to write every rule and edge case. 
The fundamental thing about LLMs is that they understand the intent of the input, which code does not.
And they do it with less space. It costs a few tokens.
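To make the contrast concrete, here is a hedged sketch; the `call_llm` helper is a placeholder for whatever model client you use, not a real API, and the date-parsing task is just an illustrative stand-in for "handling user input."

```python
# Sketch of rules-and-state-machines versus LLM at the core.
# `call_llm` is a placeholder, not a real API.
import json
import re

def handle_date_rule_based(user_input: str):
    """Brittle: every accepted format needs its own explicit rule."""
    for pattern in (r"^\d{4}-\d{2}-\d{2}$", r"^\d{2}/\d{2}/\d{4}$"):
        if re.match(pattern, user_input.strip()):
            return user_input.strip()
    return None  # "next Tuesday" or "March 12th" falls through every rule

def handle_date_llm(user_input: str, call_llm):
    """LLM at the core: the model absorbs the edge cases the rules missed."""
    prompt = (
        "Extract the date the user means as ISO 8601 (YYYY-MM-DD). "
        'Reply only with JSON like {"date": "2024-03-12"} or {"date": null}.\n'
        f"User said: {user_input!r}"
    )
    reply = call_llm(prompt)            # assumed to return the model's raw text
    return json.loads(reply).get("date")
```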
LLM at the core is as important as LLM at the edge. If you use AI to garnish your original code base, I call that LLM at the edge. 
When Notion asks you to summarize stuff, it’s LLM at the edge. The code is still there; everything built for the last thing is still there. They’re just trying to speed up the workflow a little bit. 
New mediums need people who understand them very natively and creatively.
It’s like the early days of the internet on mobile. People started making internet-enabled versions of desktop applications. But that didn’t work. People had to build internet-native applications like Salesforce, Shazam, and Twitter. People couldn’t imagine those things before those revolutions.
It takes some time for people to get the mediums and the new paradigm shifts.
You have to go native, and when building the next generation of applications it’s the same thing. We have to think differently. Whatever you knew before, you have to just try to unlearn it. This is why I didn’t go into venture two years ago; I needed to rewire my brain on how to do this better and think differently. Luckily, I ran into David Okuniev at Typeform, who helped me do that.
LLM at the edge and at the core
Let’s take a look at a few examples of LLM at the edge.
I mentioned Notion and summarization. I don’t want to say anything bad about any of those things because they are very important. Marketing people, we love you all, you need a lot of copy.
But I think of it as LLM at the edge. Now LLM at the core, with things like Copilot, technology that’s coming, and things like Formless. We created a tool within Formless called Data Pilot.
Input came as conversations, no more forms. It was infinitely configurable. Formless could have a million conversations and a million customers, each different, each customized to them.
We would even change the voice depending on who they are. If you start speaking French, it’ll ask you questions and collect data in French. Then, we took that data and transformed it back into rows in a proprietary process, and you could ask questions of the data.
We’ve tried to be native about everything we’ve invented, giving people all the flexibility of humanness, but on the back end, we’ve been able to collect that data properly.
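As an illustration of that "conversation back into rows" step, here is a minimal sketch; the field names and the `call_llm` helper are assumptions for illustration, not Formless's actual proprietary process.

```python
# Hedged sketch: project a free-form conversation onto a fixed set of columns.
import json

FIELDS = ["name", "company", "team_size", "use_case"]  # assumed schema

def conversation_to_row(transcript: list[dict], call_llm) -> dict:
    """Ask the model to fill fixed fields from a free-form conversation."""
    dialogue = "\n".join(f"{turn['role']}: {turn['text']}" for turn in transcript)
    prompt = (
        f"From this conversation, fill the fields {FIELDS} as a JSON object. "
        "Use null for anything the respondent never mentioned.\n\n" + dialogue
    )
    extracted = json.loads(call_llm(prompt))          # assumed to return JSON text
    return {field: extracted.get(field) for field in FIELDS}  # one row per respondent
```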
This is back to resilience and observability. The point of LLM at the core is that you no longer need brittle code; it can deal with the humanness of humans. It matches us better.
The cost of AI-driven development
One of the main things that will transform the world is that it’s not just that we’ll have different applications. As a venture person, maybe the most important thing about this is that the cost of building applications will fall. 
In 2008, the cost of building a good application could have been a million dollars; that’s what you asked your VC for, and it took a while to get there. When I was building a second startup, it cost a quarter million to half a million.
In the future, it will take fifty grand to build a really good MVP at product-market fit. LLM at the core will bring down the cost, changing how venture capital is done. If you need only fifty grand, then friends-and-family rounds will go a very long way, so you could build an interesting company that might make ten million dollars in ARR at some point in the future.
“One of the main things that will transform the world is that it’s not just that we’ll have different applications. As a venture person, maybe the most important thing about this is that the cost of building applications will fall.”
The durability of workflows in the age of AI
Not everything changes with AI. 
I’m a builder, and so this is very important for me to say to people who care about building companies and building products. Not everything will change. I’ll tell you why, because ultimately, humans don’t care about AI.
People just care about their workflow. All human endeavor, especially at work, is just the workflow. But a spec in a product really shows how software should behave and how humans should use it.
That's what it was: technology first. It was "here's how the software works," then "humans do this, press this button," and so on. And we tried to make it human, but we were limited.
Then we started doing use cases, and it was better – it was, “how do people want to use a thing?”
The universal lesson I’ve learned from 20 years of doing that is that it’s all about workflows.
How do people want to work? 
Let’s just say marketing. There are a thousand different ways people do marketing, and probably five of them are the best. Good software encapsulates the workflow and makes it faster.
What doesn’t change is that people’s workflows are durable because we’re humans and because Homo Sapiens have been around for 50,000 years.
Marketing isn’t that different from how it was a thousand years ago, just new tools. Socialization isn’t that different either, which is what encapsulates social media and entertainment; all those things are durable.
The role of AI in enhancing workflows
It’s important to understand this because AI is a tool, and what it does is speed up workflows. It makes workflows faster, more powerful, or cheaper.
These are the fundamentals of building value through products and what companies do.
If you add AI, you can shorten the workflows needed even further and unlock additional value.
Since AI hallucinates, there are things to be wary of, like accuracy. If you get that first acceleration but people have to tweak the output to get it perfect, the tweaking will eat up all the acceleration and undo the productivity gain.
So, workflows are durable. If you, as a company and a product, focus on time to value on workflows, and how to make the same durable workflows better, you will prosper, and AI will become a means to an end, which is what it should be.
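A tiny back-of-the-envelope example of that tweak-time trade-off, with made-up numbers:

```python
# Tweak time can cancel out the acceleration. All numbers are invented.
manual_minutes = 30        # doing the task entirely by hand
ai_draft_minutes = 5       # AI produces a first draft much faster
tweak_minutes = 22         # but the draft needs heavy correction

net_speedup = manual_minutes / (ai_draft_minutes + tweak_minutes)
print(f"Net speedup: {net_speedup:.2f}x")  # ~1.11x - most of the gain evaporates
```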
A lot of companies come through Vellum and say, "Oh, we need to add AI to our product." What's the use case? "We don't know. We just need it to be AI-driven."
That’s the worst thing. If you’re throwing away money, don’t do it. Just don’t. Trust me.
Workloads don’t change; AI can make them faster and deeper and give you superpowers. That’s really what it’s about. 
The impact of GPT-3.5 on Formless
Typeform Labs is a gift. I had a product organization focused on this 100 million ARR product, and I hired Typeform Labs, which could do some crazy interesting things, and the co-founder, former CEO, was the person who led it.
When GPT-3.5 came out, we thought about how we would rebuild the platform if it were an AI-centered application.
We made some key decisions. One of the key decisions is that we weren’t just going to build AI into Typeform.com. Very key. AI went into Typeform.com, but this wasn’t what Labs would focus on.
We thought that if we tried it there, it would take forever. Once you try to retrofit an existing application, it's so sensitive.
100 million ARR, you have to protect it. It’s a classic innovators’ dilemma. “I can’t make a mistake. If I make a mistake, my CFO will be angry.”
The process of disrupting ourselves
We decided to build something entirely new, and we came up with a few principles. We decided to disrupt ourselves; we’re going to pretend that Typeform is a company we want to take over to build this thing, and we start to ask ourselves, “What are the core workflows? What are the things that create value in the first place? How do we distill that so that we can focus on that?” 
It goes back to the workflow conversation.
In our case, it was things like no-code design, customer interaction, beautiful presentation, and data. And we wanted to be native. We wanted to build everything. The thing about native AI applications is that there's a formula.
There’s a foundation model, whether it’s open-source or not, and you add your own data models to it. That’s what gives you a little bit of a moat, otherwise OpenAI is going to come and eat your lunch. 
We had 100 million form responses that we could use to train custom AI, which we could add to the foundation – we were using OpenAI at the time. And then you build experiences around it that are very customer-centric.
Challenges in building a native AI platform
The foundation model is easy; your own thin layer of model is hard because you have to train it yourself. The UI that wraps it is very customer-centric and can be hard; UI is very important, and people always miss it. 
That’s what we wanted to be native AI, so that was our formula, that’s what we wanted to do, and that’s what we did.
It turns out that prompts are code. They're literally like lines of code; they have to be versioned. When you swap GPT-3 out for GPT-4, some of your prompts don't work as well.
They start to give you errors, and you have to version them. Each version has to be tied to the model you're using. If you slip an Anthropic model in between, it behaves differently. That's something we don't have to deal with in traditional code. Code is code; whether it's Python or React, or whatever, it just works.
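One way to treat prompts like code is a small registry that pins each prompt version to the model it was tested against; this is a sketch with invented names and templates, not Typeform's tooling.

```python
# Sketch: versioned prompts, pinned to the model they were evaluated against.
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    name: str       # e.g. "summarize_responses"
    version: str    # bumped whenever the wording changes
    model: str      # the model this version was evaluated against
    template: str

REGISTRY = {
    ("summarize_responses", "gpt-3.5-turbo"): PromptVersion(
        "summarize_responses", "1.2.0", "gpt-3.5-turbo",
        "Summarize these form responses in three bullet points:\n{responses}"),
    ("summarize_responses", "gpt-4o"): PromptVersion(
        "summarize_responses", "2.0.0", "gpt-4o",
        "You are a survey analyst. Summarize {responses} in three short bullets."),
}

def get_prompt(name: str, model: str) -> PromptVersion:
    """Swapping the model without a re-tested prompt version is a hard error."""
    try:
        return REGISTRY[(name, model)]
    except KeyError:
        raise LookupError(f"No prompt version of {name!r} tested against {model!r}")
```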
There are new problems when you build with AI at the core. Testing is crazy, because there's no determinism; it's not predictable. You have to suppress hallucinations. And then there's pricing: one day it costs you five cents a token, the next it costs you one cent.
How do you price it for customers?
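One hedged way to approach that pricing question is to track model cost and gross margin per request, so you can see how much of what you charge survives a token-price change. The prices and numbers below are made-up placeholders, not real vendor pricing.

```python
# Sketch of per-request cost and margin tracking when token prices fluctuate.
# Prices are illustrative placeholders, not real vendor pricing.

PRICE_PER_1K_TOKENS = {          # (input, output) in USD per 1K tokens
    "model-a": (0.0005, 0.0015),
    "model-b": (0.01, 0.03),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p_in, p_out = PRICE_PER_1K_TOKENS[model]
    return (input_tokens / 1000) * p_in + (output_tokens / 1000) * p_out

def gross_margin(price_charged: float, model: str, input_tokens: int, output_tokens: int) -> float:
    """How much of what you charge the customer survives the model bill."""
    cost = request_cost(model, input_tokens, output_tokens)
    return (price_charged - cost) / price_charged

# Example: a form-generation request charged at $0.10 to the customer.
print(round(gross_margin(0.10, "model-b", input_tokens=1200, output_tokens=800), 3))  # 0.64
```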
Formless and its AI development process
We went through this process for six months to a year, creating Formless: down in the guts, working, playing, talking to customers, and working through all of these hard problems.
On Typeform.com itself, we also decided to put a lot of AI in. There were lots of problems we could solve: how do we increase time to value in Typeform itself? How do we make mobile easier? We had a perennial problem: people don't like to make forms on mobile because of the builder experience.
But if you build a system where you just tell the AI what kind of form to create and tell it to do it on mobile, it will make it for you, right there. Mobile creation became a real thing, and about 30% of our customers were on mobile devices, which was amazing.
AI’s role in enhancing customer experience
When people are coming into the application, how do you increase their time to value and get them activated? 
These models know about your company. If you say salesforce.com, they know you; if your company is big enough, the model knows it without us doing anything. So when people came in and signed up, we would look up their company, grab their logos, and pre-make forms that were about 90% of the way there as part of our growth process.
Immediately, the second they came into Typeform, there was something they could use. Amazing. It was a game changer for our team's growth process.
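A rough sketch of that activation flow, with `fetch_company_profile` and `generate_form` as hypothetical placeholders for an enrichment API and an LLM call; this is an illustration of the idea, not the actual implementation.

```python
# Sketch of the activation flow described above: infer the company from the
# signup email, fetch branding, and pre-generate starter forms before the
# user's first session. Both injected helpers are placeholders.

def onboard(email: str, fetch_company_profile, generate_form) -> dict:
    domain = email.split("@", 1)[1]
    profile = fetch_company_profile(domain)          # name, logo, industry, ...
    starter_forms = [
        generate_form(f"{kind} form for {profile['name']}, a {profile['industry']} company")
        for kind in ("lead generation", "customer feedback", "event registration")
    ]
    # The goal: something usable the second the customer lands in the product.
    return {"branding": {"logo": profile["logo"]}, "forms": starter_forms}
```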
Long story short: acceleration, usability, and making complicated choices simple. We saw about 30-50% feature return. This is important; there are so many AI features I hate and never use, like Notion's summarization. So it's been very important to see people returning to these features.
The impact of AI on user experience
I asked my team to add a new "create with AI" option and move it to the first spot, because it worked; people loved it.
In fact, people said it’s why they chose us; they said, “Oh wow, you guys have AI? Okay, we’re buying it.” We were a little more expensive, but they bought us anyway, which is good. 
KPIs
This isn't exhaustive, but there are new ways to measure AI features. People say, "Just add AI to my stuff," and that won't work.
One way is time to value. How quickly do customers experience value? If you use AI properly, people should experience value faster because it abstracts a bunch of problems.
You should measure this. With good usability, teams will measure clicks to a particular goal.
Of course, clicks equal time. You should measure time to value: what is the average time it takes people to get through their core tasks before and after you've added AI? The improvement should probably be at least 2x; that's the minimum you should be shooting for.
Try to get 3x, or 5x if you can. If people realize the value quickly, they will pay for it. People actually feel a 3x acceleration; they feel it in their bones.
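A small sketch of how time to value might be instrumented, assuming you log a signup timestamp and a "first value" event and compare cohorts with and without the AI path. Event names, numbers, and the target framing are illustrative.

```python
# Sketch of measuring time to value: timestamp signup and the first "value"
# event, then compare cohorts with and without the AI path. Data is made up.

from datetime import datetime
from statistics import median

def time_to_value_minutes(signup_at: datetime, first_value_at: datetime) -> float:
    """Minutes between signing up and the first moment of real value."""
    return (first_value_at - signup_at).total_seconds() / 60

def speedup(baseline_cohort: list[float], ai_cohort: list[float]) -> float:
    """Median-based ratio; aim for at least 2x before calling the AI feature a win."""
    return median(baseline_cohort) / median(ai_cohort)

baseline = [42.0, 55.0, 38.0, 61.0]   # minutes to first published form, no AI
with_ai  = [14.0, 19.0, 11.0, 22.0]   # minutes with the AI-assisted path
print(round(speedup(baseline, with_ai), 2))   # 2.94 with these made-up cohorts
```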
Workflow length and tweak time metrics
Workflow length is sort of the opposite. How long is the workflow now? My UX people would lay out everything needed to complete a workflow. You could say, “I want to set up a lead generation form with scoring. What are the things that I need to do?” And they’ll lay it out.
Then I'd say, okay, let's do this with our AI features, and they'll measure that. We take the ratio, and that's workflow length: how long did the workflow take this time compared with before? People think in workflows and how long they take. You can set up a process to lay workflows out end-to-end and see how much they shorten over time.
There’s something we call tweak time.
Because AI isn't perfect and because it hallucinates, the form you make with AI might not be perfect.
It used to take me 30 minutes to create a very complicated form; now it takes me five minutes to generate it with AI. How long does it take me to make it perfect? Is it another five minutes?
In that case I'm at ten minutes total, which against the original 30 minutes is still 3x better off. But if it takes another 20 minutes of tweaking to get it to what I need, what has happened? You've lost all the productivity.
It doesn't matter that it feels magical up front; the tweak time depresses you and depresses your customer, and it doesn't work. You should measure tweak time as well, which is what people don't capture. And then feature return: how many times do people come back and want this again?
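Putting tweak time together with the earlier numbers, here is a hedged sketch of an "effective acceleration" metric that counts generation time and tweak time against the old manual baseline; the figures mirror the 30-minute example above and are illustrative.

```python
# Sketch of the tweak-time metric: how much faster the workflow really is
# once tweak time is counted. Numbers mirror the example in the text.

def effective_acceleration(manual_minutes: float,
                           ai_generate_minutes: float,
                           tweak_minutes: float) -> float:
    """Ratio of the old manual time to the full AI-assisted time."""
    return manual_minutes / (ai_generate_minutes + tweak_minutes)

# Good case: 30-minute form, 5 minutes to generate, 5 minutes to tweak.
print(effective_acceleration(30, 5, 5))    # 3.0

# Bad case: same form, but 20 minutes of tweaking; the gain is mostly gone.
print(effective_acceleration(30, 5, 20))   # 1.2
```

The same ratio works for whole workflows: swap the single-task times for end-to-end workflow times and you get the workflow-length metric described above.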
This is the ultimate thing about building products: people have to want it, and people have to keep coming back. We saw a 30-50% return, so we’re very happy with that.
Very few people have read that paper. The thing you owe yourself to do to become good at this is to read that paper, and to follow AI. You should use AI every day. I use a tool called LM Studio.
It's a way to import all the freely available models, chat with them, and test them; you should be doing that every day, in addition to using things like Anthropic's Claude to power your stuff.
Transformative AI is here to stay. It's just incredible technology. It's still matrix math, and it's still predictive, but it's really amazing, especially when you see multimodal things like Sora and image generation, things that can show reality, which is what Omni does.
“If it takes $50,000 to reach product-market fit for a company that could generate $20 million, then the world has already changed.”
LLM at the core will win
Everyone is still learning how to paint, but I’ll tell you this: if you learn how to paint better before everyone else, you have an advantage. I’m not going to say the first mover advantage because I don’t really believe in that, but you have a slight advantage. 
It means you can go further, faster, so you need to do that. AI will drive down the cost of building, and if anything, this is the thing that's going to change our world.
Software is eating the world, and AI is letting people build software to the point of business scale.
It’s going to transform software, it’s going to transform investing, it’s going to transform everything.
If it takes $50,000 to reach product-market fit for a company that could generate $20 million, then the world has already changed.
LLM at the core will win. 
If you have code that's already out there and you just tweak it and add a few things, someone will eat your lunch at some point, guaranteed. Now, I don't want to discourage you; change has to be managed.
You have this existing thing, so don't scrap it, but think about how competitive your industry is, how much focus is on it, and how quickly you can move to change the game.
And then don't forget to measure the right things. AI is a tool; people just want their workflow to work. They want it to be faster, they want it to be rigorous, they don't care about AI.
“But this company does AI.”
No one cares. 
The market cares, but if you can't produce a real advantage for customers, it won't work for you. It'll be one of those pump-and-dumps.
0 notes
sixstringphonic · 10 months
Text
OpenAI Fears Get Brushed Aside
(A follow-up to this story from May 16th 2023.)
Big Tech dismissed board’s worries, along with the idea profit wouldn’t rule usage. (Reported by Brian Merchant, The Los Angeles Times, 11/21/23)
It’s not every day that the most talked-about company in the world sets itself on fire. Yet that seems to be what happened Friday, when OpenAI’s board announced that it had terminated its chief executive, Sam Altman, because he had not been “consistently candid in his communications with the board.” In corporate-speak, those are fighting words about as barbed as they come: They insinuated that Altman had been lying.
The sacking set in motion a dizzying sequence of events that kept the tech industry glued to its social feeds all weekend: First, it wiped $48 billion off the valuation of Microsoft, OpenAI’s biggest partner. Speculation about malfeasance swirled, but employees, Silicon Valley stalwarts and investors rallied around Altman, and the next day talks were being held to bring him back. Instead of some fiery scandal, reporting indicated that this was at core a dispute over whether Altman was building and selling AI responsibly. By Monday, talks had failed, a majority of OpenAI employees were threatening to resign, and Altman announced he was joining Microsoft.
All the while, something else went up in flames: the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed. Concerns about “AI safety” are going to be steamrolled by the tech giants itching to tap into a new revenue stream every time.
It’s hard to overstate how wild this whole saga is. In a year when artificial intelligence has towered over the business world, OpenAI, with its ubiquitous ChatGPT and Dall-E products, has been the center of the universe. And Altman was its world-beating spokesman. In fact, he’s been the most prominent spokesperson for AI, period. For a highflying company’s own board to dump a CEO of such stature on a random Friday, with no warning or previous sign that anything serious was amiss — Altman had just taken center stage to announce the launch of OpenAI’s app store in a much-watched conference — is almost unheard of. (Many have compared the events to Apple’s famous 1985 canning of Steve Jobs, but even that was after the Lisa and the Macintosh failed to live up to sales expectations, not, like, during the peak success of the Apple II.)
So what on earth is going on?
Well, the first thing that’s important to know is that OpenAI’s board is, by design, differently constituted than that of most corporations — it’s a nonprofit organization structured to safeguard the development of AI as opposed to maximizing profitability. Most boards are tasked with ensuring their CEOs are best serving the financial interests of the company; OpenAI’s board is tasked with ensuring their CEO is not being reckless with the development of artificial intelligence and is acting in the best interests of “humanity.” This nonprofit board controls the for-profit company OpenAI.
Got it?
As Jeremy Khan put it at Fortune, “OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI) … while at the same time preventing capitalist forces, and in particular a single tech giant, from controlling AGI.” And yet, Khan notes, as soon as Altman inked a $1-billion deal with Microsoft in 2019, “the structure was basically a time bomb.” The ticking got louder when Microsoft sunk $10 billion more into OpenAI in January of this year.
We still don’t know what exactly the board meant by saying Altman wasn’t “consistently candid in his communications.” But the reporting has focused on the growing schism between the science arm of the company, led by co-founder, chief scientist and board member Ilya Sutskever, and the commercial arm, led by Altman. We do know that Altman has been in expansion mode lately, seeking billions in new investment from Middle Eastern sovereign wealth funds to start a chip company to rival AI chipmaker Nvidia, and a billion more from Softbank for a venture with former Apple design chief Jony Ive to develop AI-focused hardware. And that’s on top of launching the aforementioned OpenAI app store to third party developers, which would allow anyone to build custom AIs and sell them on the company’s marketplace.
The working narrative now seems to be that Altman’s expansionist mind-set and his drive to commercialize AI — and perhaps there’s more we don’t know yet on this score — clashed with the Sutskever faction, who had become concerned that the company they co-founded was moving too fast. At least two of the board’s members are aligned with the so-called effective altruism movement, which sees AI as a potentially catastrophic force that could destroy humanity.
The board decided that Altman’s behavior violated the board’s mandate. But they also (somehow, wildly) seem to have failed to anticipate how much blowback they would get for firing Altman. And that blowback has come at gale-force strength; OpenAI employees and Silicon Valley power players such as Airbnb’s Brian Chesky and Eric Schmidt spent the weekend “I am Spartacus”-ing Altman. It’s not hard to see why. OpenAI had been in talks to sell shares to investors at an $86-billion valuation. Microsoft, which has invested more than $11 billion in OpenAI and now uses OpenAI’s tech on its platforms, was apparently informed of the board’s decision to fire Altman five minutes before the wider world. Its leadership was furious and seemingly led the effort to have Altman reinstated. But beyond all that lurked the question of whether there should really be any safeguards to the AI development model favored by Silicon Valley’s prime movers; whether a board should be able to remove a founder they believe is not acting in the interest of humanity — which, again, is their stated mission — or whether it should seek relentless expansion and scale.
See, even though the OpenAI board has quickly become the de facto villain in this story, as the venture capital analyst Eric Newcomer pointed out, we should maybe take its decision seriously. Firing Altman was probably not a call they made lightly, and just because they’re scrambling now because it turns out that call was an existential financial threat to the company does not mean their concerns were baseless. Far from it.
In fact, however this plays out, it has already succeeded in underlining how aggressively Altman has been pursuing business interests. For most tech titans, this would be a “well, duh” situation, but Altman has fastidiously cultivated an aura of a burdened guru warning the world of great disruptive changes. Recall those sheepdog eyes in the congressional hearings a few months back where he begged for the industry to be regulated, lest it become too powerful? Altman’s whole shtick is that he’s a weary messenger seeking to prepare the ground for responsible uses of AI that benefit humanity — yet he’s circling the globe lining up investors wherever he can, doing all he seemingly can to capitalize on this moment of intense AI interest.
To those who’ve been watching closely, this has always been something of an act — weeks after those hearings, after all, Altman fought real-world regulations that the European Union was seeking to impose on AI deployment. And we forget that OpenAI was originally founded as a nonprofit that claimed to be bent on operating with the utmost transparency — before Altman steered it into a for-profit company that keeps its models secret. Now, I don’t believe for a second that AI is on the cusp of becoming powerful enough to destroy mankind — I think that’s some in Silicon Valley (including OpenAI’s new interim CEO, Emmett Shear) getting carried away with a science fictional sense of self-importance, and a uniquely canny marketing tactic — but I do think there is a litany of harms and dangers that can be caused by AI in the shorter term. And AI safety concerns getting so thoroughly rolled at the snap of the Valley’s fingers is not something to cheer.
You’d like to believe that executives at AI-building companies who think there’s significant risk of global catastrophe here couldn’t be sidelined simply because Microsoft lost some stock value. But that’s where we are.
Sam Altman is first and foremost a pitchman for the year’s biggest tech products. No one’s quite sure how useful or interesting most of those products will be in the long run, and they’re not making a lot of money at the moment — so most of the value is bound up in the pitchman himself. Investors, OpenAI employees and partners such as Microsoft need Altman traveling the world telling everyone how AI is going to eclipse human intelligence any day now much more than it needs, say, a high-functioning chatbot.
Which is why, more than anything, this winds up being a coup for Microsoft. Now it has got Altman in-house, where he can cheerlead for AI and make deals to his heart’s content. They still have OpenAI’s tech licensed, and OpenAI will need Microsoft more than ever. Now, it may yet turn out to be that this was nothing but a power struggle among board members, and it was a coup that went wrong. But if it turns out that the board had real worries and articulated them to Altman to no avail, no matter how you feel about the AI safety issue, we should be concerned about this outcome: a further consolidation of power of one of the biggest tech companies and less accountability for the product than ever.
If anyone still believes a company can steward the development of a product like AI without taking marching orders from Big Tech, I hope they’re disabused of this fiction by the Altman debacle. The reality is, no matter whatever other input may be offered to the company behind ChatGPT, the output will be the same: Money talks.
2 notes · View notes
thoughtportal · 1 year
Text
Elon Musk will use Twitter data to build and train an AI to counter ChatGPT.
He mentioned the plan during a Friday Twitter Spaces discussion that shared more details about his plans for xAI, his newest startup. 
"Every organization doing AI, large and small, has used Twitter’s data for training, basically in all cases illegally,” he said(Opens in a new window), later adding: "We had multiple entities scraping every tweet ever made, and trying to do so in a span of days."
Twitter recently imposed rate limits to prevent companies from scraping data from the platform. However, Musk plans on opening up the tweet access for xAI. “We will use the public tweets —obviously not anything private— for training as well just like everyone else has,” he said. 
Twitter’s data is valuable for AI companies because the user-generated content is fresh and covers a variety of topics using text that could help chatbots better mimic human speech.    
It’s also possible the data could help xAI’s own forthcoming chatbot produce more accurate responses, thanks to Twitter’s Community Notes feature, which lets users flag misleading tweets by providing additional context. However, training an AI with tweets could spark lawsuits and regulatory issues. Earlier this week, the FTC told OpenAI it's investigating the company for potentially violating user privacy by collecting data from across the internet to train ChatGPT. 
Musk was vague on what xAI is creating. But he said the startup’s goal is to develop a “useful AI” for both consumers and businesses. Meanwhile, the long-term vision is to develop an AGI, or artificial general intelligence, that can solve a wide range of tasks the way a human can.
“We are definitely the competition,” he said, referencing OpenAI and Google, which released its Bard chatbot earlier this year. “You don’t want to have a unipolar world, where just one company kind of dominates in AI.” 
However, he also emphasized his forthcoming AI will “pursue the truth.” Although rival chatbots have been programmed with content moderation in mind, Musk previously criticized ChatGPT as a propaganda machine focused on political correctness. During the Twitter Spaces discussion, Musk reiterated his concerns. 
“At xAI we have to let the AI say what it really believes is true, and not be deceptive or politically correct,” he said. Musk then compared the danger to the AI computer that goes insane in the sci-fi classic 2001: A Space Odyssey and kills the crew. “Where did things go wrong in Space Odyssey? Basically, when they told HAL 9000 to lie.”
Musk has recruited almost a dozen engineers and researchers from Google, Microsoft, and OpenAI to help him run the San Francisco-based xAI. The startup hopes to share more information about its “first release” in the coming weeks.
4 notes · View notes
xbuddykinsx · 1 year
Text
🔬 Dive into the fascinating world of AGI development with our latest article! 🌐
🌟 Discover the powerhouses behind Satoshi AI's groundbreaking advancements in Artificial General Intelligence (AGI) — The Satoshi Foundation, Deep Learning Institute, and AI Research Academy HQ! 💪
🤝 Witness the epic collaboration that's shaping the future of AGI technology! From financial support to cutting-edge research, these institutions are revolutionizing the AI landscape. 🌍
🎓 Get a front-row seat to the fusion of talent, knowledge, and innovation as they unlock the full potential of AGI! 🚀
3 notes · View notes
reasoningdaily · 1 year
Text
Now, an even more powerful AI application has entered the scene -- Auto-GPT.
The application's promising, autonomous abilities may make it our first glimpse of artificial general intelligence (AGI), a type of AI that can perform human-level intellectual tasks. 
Since its release on Mar. 30, 2023, people have been fascinated by it, making it one of the hottest topics on Twitter multiple days in a row. 
Auto-GPT has internet access, long-term and short-term memory management, GPT-4 for text generation, and file storage and summarization with GPT-3.5, according to the GitHub post.
Anything you can ask ChatGPT, like debugging code or writing an email, you can also ask Auto-GPT. However, you can ask Auto-GPT to complete even more advanced tasks with fewer prompts, as seen in the demo examples below.
The Github demo shows sample goal prompts such as "Increase net worth, grow Twitter Account, Develop and manage multiple businesses." 
The application's limitations listed on GitHub do warn that Auto-GPT's output "may not perform well in complex, real-world business scenarios." However, the results users have been sharing show that Auto-GPT can deliver some really impressive (and helpful) results.
On Twitter, users are sharing some of the ways they're using it, which include using Auto-GPT to create an app, generate a new startup, tackle complex topics like the future of healthcare and medicine, and even stalk themselves on the internet.
However, accessing Auto-GPT is much more challenging than accessing ChatGPT. So despite Auto-GPT being way more capable, if you have simpler needs that ChatGPT can meet and don't want to be bothered with an installation process, ChatGPT may be a better option for you. 
3 notes · View notes