#Emad Mostaque
otherworldlyinfo · 1 year
Text
The Demise of Programmers: AI's Ascendancy in the Tech World
AI’s Growing Dominance in Coding · Stability AI: Powering Innovative Projects · Building a “Society OS” · The Revolution in Conversational AI · Democratizing AI: Personalized Models · Transparency and Access to AI · Embracing AI’s Potential: Enhancing Human Potential

Emad Mostaque, the visionary founder and CEO of Stability AI, challenges the conventional notion of the programming landscape. In his…
masonpelt · 1 year
Text
Calling For A Moratorium On Calling For A Moratorium On AI
The AI industry is heavily capitalized and reliant on smoke and mirrors. Calls for regulation and open letters asking for a pause in AI research from leaders of AI companies make headlines every few months. These are not circulating out of genuine fear; they are self-aggrandizing and, honestly, monopolistic. In under a year, artificial intelligence has transformed from a once-geeky sci-fi…
crafantale · 1 year
Text
"The tech founder behind Stable Diffusion"
...
In reality, while Stable Diffusion was the main reason for his own startup Stability AI’s ascent to prominence, its source code was written by a different group of researchers.
...
He has no formal experience in the field of artificial intelligence. 
...
To build buzz around Stability, Mostaque made an elaborate gambit supported by exaggerated claims and promises, overstating his role in several major AI projects and embellishing a quotidian transaction with the notoriously uncompromising Amazon into a “strategic partnership” with an 80% discount.
When pressed, Stability spokesperson Motez Bishara admitted to Forbes that Stability had no special deal with Amazon.
...
Inside the company, wages and payroll taxes have been repeatedly delayed or unpaid, according to eight former employees, and last year the UK tax agency threatened to seize company assets.
While staffers said they stressed over being paid last summer, tens of thousands of British pounds moved from Stability’s corporate account to the personal account of Qureshi, Mostaque’s wife, per screenshots of financial transactions obtained by Forbes.
...
In addition, Mostaque directed his team to list groups like UNESCO, OECD, WHO and World Bank as partners in pitch decks, even though they were not involved in the company’s later evolution, according to four former employees. 
...
Stability is also facing a pair of lawsuits which accuse it of violating copyright law to train its technology.
reachartwork · 3 months
Note
What are your thoughts on SD3's scorched earth porn filter?
your name is emad mostaque and you have shot yourself in the foot for the third time this week and its really starting to hurt and look weird and green
stablediffusion · 2 years
Photo
Emad Mostaque, CEO and founder of Stability AI, confirms that talks are already under way with Hollywood studios and video game developers about using Stable Diffusion in their businesses. "All of Hollywood and Video Games will use this Tech" [Source: Discord]
ht SteinLaw
jcmarchi · 1 month
Text
The Sequence Chat: Emad Mostaque -Stability AI, Schelling AI- About Open and Decentralized AI
The co-founder and former CEO of Stability AI discusses his new vision for decentralized AI and his new project.
Bio:
Emad Mostaque is widely recognized as one of the leaders in the open-source generative AI movement. He is the former CEO of Stability AI, the company behind Stable Diffusion and numerous open-source generative AI models across different modalities. Stability AI attracted a community of hundreds of thousands of AI researchers and is actively pushing the boundaries of innovation in the field. Emad holds both a BA and an MA in mathematics and computer science from the University of Oxford, followed by a successful career as a hedge fund manager.
After leaving Stability AI, Emad decided to focus on the potential of decentralization. His new project, Schelling AI, combines generative AI and Web3 to enable transparency and trust in the world of foundation models.
🛠 ML Work  
You recently stepped down as CEO of Stability AI and coined the phrase, “You can’t beat centralized AI with more centralized AI.” I often think of AI as an increasingly centralising force. Why is decentralized AI so important, and what are the key areas where it can contribute to the development of generative AI?
It was clear when setting Stability AI up that, absent a catalyst, the foundation model space would be dominated by a few very large players. In scaling the company we adopted a similar corporate and structural model to these large players, and we can see the arms race that ensues, with giant rounds, pressure on hiring, involvement with global regulators and more. 
The landscape has changed over the last few years and it’s clear that distributed and decentralised AI has its place, for privacy, security, governance and ensuring the benefits of this technology are widely shared.
Distributed AI is important because by centralising AI and moving towards AGI, we create black boxes which are understood only in terms of inputs and outputs, with no understanding of the inner workings or the reasoning behind the models. 
Open-source models, by contrast, mean the code is transparent and publicly available. A framework where everyone checks on everyone promotes accountability, trust and agency; everyone has a voice in AI’s creation and evolution.
A large area of importance for decentralisation is in facilitating and distributing the benefits of research. Transparency in research builds trust and ensures security in the open. 
Imagine the healthcare industries of every nation, each with open models and datasets. By allowing everyone to access all medical literature, it would enable the development of unique models that are representative of each community and nationality. For example, a model specifically tailored to Bulgarian cancer research would have a far greater impact on the Bulgarian healthcare system than a generalised American cancer model. Open source means specialisation.
This could be said for every industry, not just healthcare, from finance to education.
Your new project, Schelling AI, is deeply rooted in Web3 and crypto economics. Can you share the vision for Schelling AI and your thoughts on the intersection of AI and Web3?
While I think web3/crypto perhaps deservedly have a bad reputation, much of the advances made in these areas will be directly applicable to our augmented intelligence future.
If we look at Bitcoin, it was the original incentive mechanism for large amounts of highly specialised compute – the total energy usage of the Bitcoin network is 160 TWh versus around 350 TWh for all the global data centres to give you an idea of scale.
This provides an example of an incentive and coordination system that did a job and could potentially be used to provide the compute needed for open, global models for all, owned and governed as widely as possible.
Bitcoin is sometimes also noted as a Schelling point, a focal point in game-theory terms that enables coordination without communication. I think that our future AI systems, comprising billions of robots and trillions of agents, will need coordination systems that build from this, from payments (agents are unlikely to have bank accounts) to verification of inputs and outputs and more.
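As a toy illustration of the focal-point idea mentioned above (the option names and probabilities here are illustrative assumptions, not anything from the interview), a short simulation shows how shared salience lets two agents coordinate without communicating:

```python
import random

# Two agents must independently pick the same option to coordinate,
# with no communication allowed. If one option is "focal" (salient to
# both), coordination succeeds far more often than pure chance.

OPTIONS = ["A", "B", "C", "D"]
FOCAL = "A"  # the salient option both agents are drawn toward

def pick(focal_bias: float) -> str:
    """Pick an option; with probability `focal_bias`, choose the focal one."""
    if random.random() < focal_bias:
        return FOCAL
    return random.choice(OPTIONS)

def coordination_rate(focal_bias: float, trials: int = 10_000) -> float:
    """Fraction of trials in which two independent agents pick the same option."""
    hits = sum(pick(focal_bias) == pick(focal_bias) for _ in range(trials))
    return hits / trials

random.seed(0)
no_focal = coordination_rate(0.0)    # pure chance: roughly 1/4
with_focal = coordination_rate(0.8)  # strong shared salience: much higher
print(f"no focal point: {no_focal:.2f}, with focal point: {with_focal:.2f}")
```

The same structure, scaled up, is the intuition behind using focal points as a coordination mechanism among agents that cannot negotiate directly.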
These features echo the capabilities being built into second and third generation distributed ledgers, but I don’t think any of these systems are up to the task of coordinating and supporting AI in health, education, government or any important and regulated sector.
I think we have a real opportunity to design and build an open, distributed and resilient AI system for all, incorporating learnings from across the board. If we can do this in a way that is verifiable and trustworthy, then not only will we have solved many issues that plague web3, where the key is trust, not decentralisation for decentralisation’s sake, but more importantly we will perhaps solve many of the issues that plague our existing systems, first by integrating with them and then by reimagining them. 
Decentralized AI is not a new concept, yet it has never achieved significant adoption. One could argue that the massive scale of frontier models makes decentralized AI even more challenging now. Why do you believe this time will be different? 
The first wave of AI models was based around scale, with relatively poor data being eaten by giant supercomputers that papered over the data’s low quality and achieved remarkable results.
We are now seeing the importance of the data put into models, with high-quality models beating larger ones on a fraction of the data, and splits in performance based on data.
Decentralised AI training of full models will always lag behind centralised clusters due to communication overhead. 
However, if base models are trained on these massive clusters, say as LLaMA was, then these base models can be taken and customised and improved on a fraction of the compute. We have seen this with the explosion of fine tunes and their combination and recombination from the community to outperform the base models.
Decentralisation is also highly suitable for data augmentation (particularly asynchronous), model-tuning optimisation and many other areas.
However, the mission isn’t really to decentralise, but to distribute this technology to drive genuinely useful use cases.
I think what it will eventually be is a few large players providing base models as infrastructure, with swarms of people, and then agents, optimising the models and underlying data rather than training from scratch in swarms.
How do you view the balance between the race towards massively large models, trillion-dollar GPU clusters, and the need for small, sovereign, decentralized models? 
I think this is similar to the difference between highly specialised experts (the models the large organisations are trying to build) and the team of talented juniors you bring in, who resemble these smaller models being run locally and on the edge.
There is a concept called satisficing, where you reach a level that is good enough, and I think small language models have achieved that for many use cases, outperforming giant models from a generation or two ago while running on a smartphone or laptop.
We have seen from Gemma 27b and other models that you can also use large models to instruct and improve smaller models, something done by Meta for the smaller LLaMA models too.
I think the final space will incorporate all of these variations and there won’t be just one type of model out there.
The release of Stable Diffusion completely changed my perspective on the open-source generative AI space. Since then, a lot has happened in the field. How do you see the long-term balance between open and closed models? Can open source really compete on a massive scale?
Closed models will always outperform open models as you can just take an open model and add private data to it (!).
Open has an advantage in spread, optimisation and similar areas over directed centralised models.
I think ultimately they are complementary though, equivalent to hiring your own graduates versus bringing in consultants. It is likely that open will end up being most used if it can keep up in performance terms. Even if it lags somewhat, models are rapidly becoming “good enough” to build around and the next leg of growth is likely to be on products and services to provide and implement this technology as a result.
One of the biggest challenges in open-source AI is the lack of funding, an area where crypto excels. How can crypto’s capital formation and token economics help develop open-source generative AI? 
I think we perhaps catalysed large amounts of funding to go into open source AI at Stability AI (!)
While I think exponential compute is likely not needed for the next generation of models, lessons from crypto capital formation from a funding and distribution perspective are instructive. 
As noted in question 2 above, Bitcoin has been a spectacular success in attracting and rewarding specialised compute and energy provision. It has many other issues, but it has become institutional and provides some insight into how incentive systems may be employed to provide the compute and funding we need to create genuine public AI infrastructure.
Our healthcare, education, government and similar AI systems should not run on black boxes and should not be controlled by an unelected few. Creating a mechanism to provide the compute and capital needed to build and maintain this infrastructure, which government initiatives are unlikely to be able to keep up with, is imperative. It is difficult to see how to do this outside of building a new type of organisation based on prior lessons.
The intersection of Web3 and AI is intellectually fascinating but filled with technical and cultural gaps compared to the mainstream AI space, leading to the creation of many projects without real use cases. Which aspects of the lifecycle of generative AI applications can genuinely be decentralised with today’s Web3 stacks? 
Digging into this area, I have moved away from decentralised AI towards thinking that distributed AI is where it is at, particularly for the implementation of AI technology in important areas of society like health and education.
I am somewhat disillusioned by web3/crypto projects forgetting that the core mission is to build systems that can coordinate in a trust minimised fashion for real world use cases versus decentralisation for the sake of it and relying on speculation.
If we look to the future as outlined in our how to think about AI piece, it is clear that generative AI has a role to play in the future of many areas of the public and private sector. 
While this needs to be built on new coordination and alignment infrastructure, it is unclear whether any of the systems we have today are suitable for this.
Where projects today are good is in supply aggregation (e.g. DePIN for distributed compute); research on governance, with DAOs having made all the mistakes of democracy and more; and payments, which will be essential as the number of agents and robots increases.
When I think about decentralized AI, I gravitate towards trends such as small foundation models, decentralized inference, and other areas that are still in a relatively nascent state. What technical or research milestones should be achieved to unlock the potential of decentralised AI?
I think you are already seeing models good enough for a range of tasks on the edge, and innovative architectures to enable this.
I think firming up a baseline of model quality people can build around, much as they continue to build around the original stable diffusion, is very important as this opens up a range of potential mechanism design. This includes distributed tuning and model/data optimisation and ablation capability.
I think this is somewhat recursive as well: better base models that are predictable and improving can continually support data location and improvement that then, in turn, can make the models better.
What data and knowledge should go into a model, pre-training and post-training, is probably the most important outstanding research question.
If we can figure this out then we can pull on not only the compute and support of the masses, but their expertise to increase the quality and diversity of the data that feeds our models and their ability to help us all.
💥 Miscellaneous – a set of rapid-fire questions 
What is your favorite area of research outside of generative AI?
I have a particular interest in neurochemistry, from my ASD research and functional medicine, which I think will be completely transformed by AI.
How far can the LLM scaling laws take us? All the way to AGI? 
I think AGI from scaling LLMs is unlikely. What we are seeing now is similar to cooking a poor quality steak (massive datasets) for longer. They get tender and nice and the system exhibits increasing capability, but not necessarily generalised knowledge as an individual model nor capability. When put together in a broader system this does, of course, become more difficult to predict.
It could be that humans plus sufficiently advanced generative AI systems are the real ASI. Especially when we get BCI kicking off.
Describe the crypto-AI + Schelling AI world in five years. 
An open, distributed, AI system that offers universal basic intelligence to everyone, is communally owned and governed and constantly improving with the objective function of human flourishing.
Who are your favorite mathematicians and computer scientists, and why?
I have a particular soft spot for Claude Shannon, whose wonderful work laid the foundation for these massive advances we have seen. Herb Simon is another favourite, bridging multiple disciplines, and has been an inspiration for the design of Schelling AI in particular.
blackladisdestrcz · 5 months
Text
2 Ex-AI CEOs Debate the Future of AI w/ Emad Mostaque & Nat Friedman | E...
fastdot · 6 months
Text
A.I.’s Messy Moment + Listeners Respond to Jonathan Haidt + Shrimp Jesus
Listen to and follow ‘Hard Fork’: Apple | Spotify | Amazon | YouTube. Hosted by Kevin Roose; produced by Davis Land; original music by Dan Powell and Marion Lozano. This week, we look at a mess of corporate drama in artificial intelligence. Stability AI has announced that its founder and C.E.O., Emad Mostaque, is leaving the company. Meanwhile, Microsoft hired away two of the co-founders and much of the staff of Inflection,…
ramtracking · 6 months
Text
Stability AI CEO Emad Mostaque Resignation: What Happened? [ Chief Executive Officer ]
[News Summary] Stability AI Chief Executive Officer Emad Mostaque announced his resignation to staff late Friday night with a strikingly optimistic note. He is stepping down as CEO of Stability AI, the startup that helped bring Stable Diffusion to life. In a press… Robin Rombach and a group of key researchers…
techrookies · 6 months
Text
Stability AI CEO resigns because you’re ‘not going to beat centralized AI with more centralized AI’
Stability AI founder and chief executive Emad Mostaque has stepped down from the top role and the unicorn startup‘s board, the buzzy firm said Friday night, making it the second hot AI startup to go through major changes this week. Stability AI, which has been backed by investors including Lightspeed Venture Partners and Coatue Management, […] © 2024 TechCrunch. All rights reserved. For personal…
View On WordPress
tumnikkeimatome · 6 months
Text
Stability AI founder's resignation drama: A pivot to decentralized AI? A clash with investors? Concerns over the future of open-sourcing image-generation AI
A shocking resignation by the founder and CEO: Emad Mostaque, founder and CEO of Stability AI, abruptly announced his resignation on March 23, 2024. Mostaque explained that he is stepping down as CEO and from the board of directors to focus on pursuing decentralized AI. He will be succeeded by Chief Operating Officer Shan Shan Wong and Chief Technology Officer Christian…
revotalk · 6 months
Link
Stability AI founder and chief executive Emad Mostaque has stepped down from the top role and the unicorn startup‘s board, the buzzy firm said Friday night, making it the second hot AI startup to go through major changes this week. Stability AI, which has been backed by investors including Lightspeed Venture Partners and Coatue Management, […] © 2024 TechCrunch. All rights reserved. For personal use only.
hackernewsrobot · 6 months
Text
Stability AI: Emad Mostaque resigned from his role as CEO
https://stability.ai/news/stabilityai-announcement
tamarovjo4 · 6 months
Text
Sources: Stability AI CEO Emad Mostaque told staff last week that Robin Rombach and other researchers, the key creators of Stable Diffusion, have resigned (Forbes)
http://dlvr.it/T4M5r3
govindhtech · 9 months
Text
ChatGPT’s Role in 2023’s Mainstream Rise of AI
ChatGPT’s Impact on AI Going Mainstream
2023 was the year of AI. After launching in November 2022, ChatGPT became one of the fastest-growing apps ever, acquiring 100 million monthly users in two months. After Bill Gates stated in January that AI would be the hot topic of the year, a number of firms launched AI tools that could generate synthetic speech and video. Since ChatGPT was floated as a potential replacement for Google search at the start of the year, AI has advanced rapidly.
“My focus is on thinking beyond search…Do we do something absolutely different and cooler?'” OpenAI CEO Sam Altman told Forbes in January.
Venture capitalists invested billions in the field due to rapid technology improvements. Microsoft’s $10 billion investment in OpenAI, now valued at $80 billion, led the way. In June, high-profile AI firm Inflection launched its AI chatbot Pi and raised $1.3 billion at a $4 billion valuation. Hugging Face, which hosts thousands of open-source AI models, was valued at $4 billion a month later. Amazon stated in September that it would invest $4 billion in OpenAI rival Anthropic, which launched its conversational chatbot Claude 2.0 in July and is now worth $25 billion.
Not all AI founders have had easy fundraising. After raising $1 billion in September 2022 for its popular text-to-image AI technology Stable Diffusion, Stability AI has struggled to raise since. In June, Forbes reported that CEO Emad Mostaque had misled investors about his credentials and partnerships. Stanford researchers detected child sexual abuse material in Stable Diffusion’s training dataset in December.
The AI gold rush also created unicorns like Adept, which builds AI assistants that can browse the internet and operate software, and Character.AI, which 20 million people use to create and converse with AI chatbot characters modeled on figures like Taylor Swift and Elon Musk. Typeface, Writer, and Jasper, enterprise-focused generative AI businesses that automate email writing and document summarization, have also raised funding. Google was caught off guard by the AI tool race; the internet giant unveiled its conversational AI chatbot Bard and AI model Gemini late this year.
In the past year, AI has permeated almost every aspect of life. Some of the largest U.S. school districts banned ChatGPT because teachers believed pupils would use it to cheat on schoolwork. Doctors and hospitals used generative AI for notetaking, grunt work, and patient diagnosis. Some politicians employed AI to communicate with voters, while others used generative AI to construct deepfakes of their opponents.
AI-generated content swamped the internet, raising fears about harmful content made with freely available AI tools. AI-generated fake news stories went viral on TikTok and YouTube, and nonconsensual AI-generated porn spread on Reddit and Etsy. As low-quality AI-generated content flooded the web, freelancers worried they would lose their gigs to buzzy new AI software that could produce content faster and cheaper than people.
Companies employed AI chatbots to evaluate, interview, and recruit personnel, prompting concerns about biases and other hazards. ChatGPT helped cybercriminals write malware code and monitor social media. To address these issues, Microsoft and Google engaged red teams to jailbreak and secure their AI models.
“There are still a lot of unsolved questions,” said MIT CSAIL electrical engineering and computer science professor Regina Barzilay. “We need tools to discover issues and biases in these datasets and meta AI technologies to regulate AI and make us much safer with AI.”
In 2023, artists, writers, and coders sued OpenAI, Stability AI, and Anthropic for copyright infringement, claiming these technologies used their original content without license or payment. Legal analyst Edward Klaris expects these class action cases will allow the U.S. Copyright Office to adopt new nuanced AI fair use regulations in 2024.
AI transactions abound in the legal sector. Klaris, CEO and managing partner at IP rights legal company KlarisIP, said some people are outraged that their work was scraped to create training data and want to license their content to AI businesses and get paid.
After the EU AI Act sought to regulate the technology, the Biden administration issued an executive order requiring startups building large AI models that could threaten national security to notify the government. Tech firms backed the executive order, but entrepreneurs worried it would slow innovation.
“The executive order formulated principles, which is good to articulate, but it doesn’t really translate to how do we take these principles and translate them into some technology or guardrail that helps us ensure that the tool we’re using is really safe,” Barzilay said.
Artificial intelligence leaders split in 2023 about whether big businesses like Google, OpenAI, and Anthropic should develop AI technologies openly or privately. Since anyone could misuse open-sourced AI models, safety concerns have been raised. Others, like Meta AI’s Chief Scientist Yann LeCun, who oversaw Llama 2’s development, support open-source AI stress testing.
Hugging Face CEO Clement Delangue told a press event that open-source large language models will match closed-source models by 2024.
In late November, OpenAI’s board of directors fired CEO Sam Altman, saying he had not been “consistently candid” in his communications with the board, revealing an internal disagreement. After employees threatened to leave if Altman did not return, he was reinstated as CEO a few days later, and Bret Taylor and Larry Summers joined the board as new directors.
Delangue said the economics of AI, particularly how AI businesses will earn money and what their profit margins will be, will be the biggest question of 2024. Since most AI models are trained on massive volumes of data using GPUs from Nvidia and AMD, they are becoming more expensive and carbon-intensive. Delangue predicted that by 2024, most companies will understand that smaller, cheaper, more specialized models are better for 99% of AI use cases.
Read more on Govindhtech.com