#autogpt
mlearningai · 2 years ago
jcmarchi · 3 months ago
Can AI automate computational reproducibility?
New Post has been published on https://thedigitalinsider.com/can-ai-automate-computational-reproducibility/
Can AI automate computational reproducibility?
Last month, Sakana AI released an “AI scientist”, which the company called “the first comprehensive system for fully automatic scientific discovery”. It was touted as being able to accelerate science without suffering from human limitations. 
Unfortunately, the “AI Scientist” has many shortcomings. It has no checks for novelty, so generated papers could rehash earlier work. And Sakana did not perform any human review (let alone expert “peer” review) of the generated papers—so it is unclear if the papers are any good (apparently they are not). While these flaws are particularly flagrant in Sakana’s case, the lack of good evaluation affects most AI agents, making it hard to measure their real-world impact.
Today, we introduce a new benchmark for measuring how well AI can reproduce existing computational research. We also share how this project has changed our thinking about “general intelligence” and the potential economic impact of AI. Read the paper.
Visions of AI automating science are enticing, but aren’t within reach, and lead to flawed science. In contrast, using AI for well-scoped tasks such as verifying computational reproducibility can save a lot of time and redirect effort towards more productive scientific activity. AI could also help find relevant literature, write code to rapidly test ideas, and perform other computational tasks.
In a new paper, we introduce CORE-Bench (Computational Reproducibility Agent Benchmark), a benchmark for measuring how well AI can automate computational reproducibility, that is, reproducing a paper’s findings when the code and data are available. The authors are Zachary S. Siegel, Sayash Kapoor, Nitya Nadgir, Benedikt Stroebl, and Arvind Narayanan. CORE-Bench is a first step in a larger project to rigorously evaluate progress in automating research tasks of increasing difficulty.
Computationally reproducing a study is a far more limited task than replication, which requires re-running experiments that might involve human subjects. Even the limited reproducibility task is hard: In the 2022 Machine Learning Reproducibility Challenge, over a third of the papers could not be reproduced even when experts reproducing the papers had the code and data. 
If AI could automate this mundane yet important task, researchers could automate the implementation of baselines, reviewers could more easily assess if a paper has flaws, and journals and conferences could more easily verify if submitted and published papers are reproducible.
We created CORE-Bench using scientific papers and their accompanying code and data repositories. We used Code Ocean to source papers that were likely to be reproducible. We manually reproduced 90 papers from computer science, medicine, and social science, and curated a set of questions for each paper to be able to verify the answers. 
We release CORE-Bench with three difficulty levels. Tasks in all three levels require the use of both language and vision capabilities. The hardest version closely resembles real-world reproduction attempts, and we expect that improvements on the benchmark will translate to agents that are actually useful to scientists.
To implement baselines, we tested the generalist AutoGPT agent and also implemented a task-specific modification to AutoGPT, which we call CORE-Agent. While the task-specific version improved accuracy significantly, there is still massive room for improvement: the best agent (CORE-Agent with GPT-4o) has an accuracy of 22% on CORE-Bench-Hard.
Computational reproducibility requires setting up the code environment correctly, running the code, and seeing if it produces the same results as reported in the paper. Using the shell and other tools correctly is still tricky for LLMs. When we evaluated generalist agents like AutoGPT, we weren’t surprised by their poor accuracy (less than 10% on CORE-Bench-Hard). 
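The check described above can be sketched in a few lines. This is a hypothetical illustration, not part of CORE-Bench or CORE-Agent: given a command that regenerates a paper's headline number and the value the paper reports, run the command and compare the two within a tolerance (the function name and the 1% tolerance are our own choices).

```python
import math
import subprocess
import sys

def reproduces(command: list[str], reported_value: float, rel_tol: float = 0.01) -> bool:
    """Run a repository's analysis command and check whether its numeric
    stdout matches the value reported in the paper within rel_tol."""
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    produced_value = float(result.stdout.strip())
    return math.isclose(produced_value, reported_value, rel_tol=rel_tol)

# Toy usage: a stand-in "analysis script" that just prints its headline number.
cmd = [sys.executable, "-c", "print(0.847)"]
print(reproduces(cmd, reported_value=0.85))  # True: within the 1% tolerance
```

The hard part for an agent is everything before this comparison: installing dependencies, locating the right script, and extracting the reported value from the paper.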
Yet, with a few person-days of effort, we were able to build CORE-Agent by modifying AutoGPT, which more than doubled accuracy on the hardest level. We also built a task-specific agent from scratch, but modifying AutoGPT was far less time consuming while also resulting in a stronger agent. We are cautiously optimistic that this approach can be pushed to yield agents that perform well enough to be useful in practice. 
Simple task-specific modifications allow CORE-Agent to outperform AutoGPT. 
If this pattern of being able to easily adapt a generalist agent to produce a task-specific agent holds in other areas, it should make us rethink generality. Generality roughly translates to being able to use the same model or agent without modification to perform a variety of tasks. This notion of generality underpins how Artificial General Intelligence (or AGI) is usually understood and the hopes and fears that accompany it. 
But at least from the point of view of economic impacts, generality might be a red herring. For a task such as computational reproducibility on which expert humans collectively spend millions of hours every year, being able to automate it would be hugely impactful — regardless of whether the AI system did so out of the box, or after a few person days (or even a person year) of programmer effort. 
In the AI Snake Oil book, we define generality as the inverse of task-specificity, and analyze how the history of AI (and computing) can be seen as the pursuit of gradually increasing generality. Increasing generality means decreasing the human effort it takes to build an AI system to perform a given task. From this perspective, systems like AutoGPT may be more general than most people (including us) gave them credit for.
Yet, definitions of AGI typically insist that a single system be able to do everything out of the box. There is no systematic effort to track how the human effort needed to build task-specific AI is changing over time. Just as we’ve argued against flawed conceptions of generality that overestimate AI progress, we should avoid flawed conceptions of generality that underestimate it. 
Read the CORE-Bench paper here.
In our recent paper, AI Agents That Matter, we identified several shortcomings in AI agent evaluations. Those shortcomings informed the design of CORE-Bench.
We recently organized an online workshop on useful and reliable AI agents where leading experts shared their views on better agent design and evaluation. The workshop videos are available online.
Ben Bogin et al. released the SUPER benchmark to evaluate if AI agents can set up and execute tasks from repositories accompanying research papers. It is another interesting benchmark for measuring AI agents’ capability to automate research tasks. It differs from CORE-Bench in many ways: 
CORE-Bench consists of tasks across scientific disciplines (computer science, medicine, social science) whereas SUPER consists of tasks from AI.
CORE-Bench requires the use of both vision-language and language models, and consists of multiple languages (Python and R) as opposed to SUPER (language models, Python).
Tasks in SUPER require access to a Jupyter notebook. In contrast, tasks in CORE-Bench require shell access and allow the agent to modify the sandbox arbitrarily.
0xalzzy · 1 year ago
LLMs as Operating Systems
Last month the Microsoft Research team shared insights into AutoGen, a generative application framework that casts a spotlight on many intriguing possibilities in AI. Large Language Models (LLMs) like GPT-4 can step up beyond simple tasks
For people familiar with ChatGPT, GPT-3, and advancements in machine learning, generative AI like Midjourney, or those who've used AI features on Snapchat, Khan Academy, or Duolingo's personalized tutors. Last month…
linuxiarzepl · 1 year ago
AutoGPT
AutoGPT: a step toward fighting cybercriminals with artificial intelligence https://linuxiarze.pl/autogpt-krok-w-strone-walki-z-cyberprzestepcami-za-pomoca-sztucznej-inteligencji/
greedskill · 1 year ago
What is Auto-GPT? Why is it important?
In essence, Auto-GPT harnesses the versatility of OpenAI's most recent AI models to communicate with online services and software, which allows it to "autonomously" perform tasks like X and Y.
stefaanmeeuws · 2 years ago
Instant AI Translator
I want to make an instant AI translator. It would use a microphone, speech-to-text recognition software, a processing unit to match the input language to the output language, a tiny speaker, and a horizontal (O)LED display that can be attached to clothing. There are two uses for the device: the first is where the owner of the device speaks in his or her native language, and the speech is translated into…
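The pipeline behind such a device could be wired roughly like this. The stage functions below are placeholder stubs (a real build would plug in actual speech-recognition and translation models); only the chaining of microphone input to translated output is the point.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TranslatorDevice:
    """Wires the stages of the proposed device: microphone audio -> text ->
    translated text -> speaker/display. Each stage is a pluggable callable."""
    speech_to_text: Callable[[bytes], str]
    translate: Callable[[str, str, str], str]

    def process(self, audio: bytes, src: str, dst: str) -> str:
        text = self.speech_to_text(audio)            # voice-to-text stage
        translated = self.translate(text, src, dst)  # language-matching stage
        # In hardware, this string would be spoken by the tiny speaker (via
        # text-to-speech) and scrolled across the (O)LED display.
        return translated

# Stub stages so the wiring can be exercised without real models.
device = TranslatorDevice(
    speech_to_text=lambda audio: audio.decode(),
    translate=lambda text, src, dst: f"[{src}->{dst}] {text}",
)
print(device.process(b"hello", "en", "nl"))  # prints "[en->nl] hello"
```

Swapping in real components only means replacing the two lambdas; the rest of the device logic is unchanged.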
pebblegalaxy · 2 years ago
AutoGPT: The Next Evolution in AI Technology
The world of technology is moving at an unprecedented pace, with innovations and advancements happening at an astonishing rate. One such innovation that has been causing a stir is the AutoGPT architecture. Described as ChatGPT on steroids, this architecture has the potential to revolutionize the way we work and carry out tasks. AutoGPT works by using the GPT-4 API, which carries out tasks…
seasiainfotechusa · 2 years ago
A Comprehensive Guide to Autogpt
While ChatGPT is still enjoying the attention it has garnered in recent months, AI agents born from the tool are already giving it stiff competition. AutoGPT, a self-generating AI agent that runs independently, is the newest player to join this league. Check out this blog for a comprehensive guide to AutoGPT and its relevance for the modern world.
scienza-magia · 2 years ago
Degree programs in artificial intelligence in Italy
Studying artificial intelligence at Italian universities. There are 45 AI degree programs in Italy, spanning engineering, mathematics, and physics as well as the humanities, across the 53 universities surveyed by the sector association AIxIA.

Artificial intelligence? It has long been a subject of study in Italy. All Italian universities have offered courses and study plans dedicated to the field for years. "In Italy, AI has become a pervasive discipline, taught across the whole country, from north to south," says Gianluigi Greco, full professor of Computer Science at the University of Calabria, director of its Department of Mathematics and Computer Science since 2018, and president of AIxIA, the Italian Association for Artificial Intelligence, founded in the 1980s.

Across the 53 universities that took part in the association's survey (Italy has 96 universities, distributed across all regions and present in practically every province), there are 45 AI degree programs in engineering, mathematics, and physics, but also in humanities faculties, from economics to literature, for a total of 7,635 course credits delivered across the surveyed faculties. 70% are master's degrees, 27% are bachelor's degrees, and 3% are single-cycle master's degrees. Practically every university offers courses at the first or second level. "All the relevant topics are covered," says Greco. From machine learning to data mining, from computer vision to ethics and privacy, through knowledge representation and robotics, nothing is missing from the country's curricula.
How to find the programs. The survey counted, by region, 26 master's and degree programs in the North-West, 57 in the North-East, 43 in the Centre, 16 on the Islands, and 21 in the South. Those who want to pursue AI-based studies should check whether the university of interest offers programs in the right degree classes:
- LM32: Master's Degrees in Computer Engineering;
- LM18: Master's Degrees in Computer Science;
- L31: Bachelor's Degrees in Computer Science and Technology;
- L08: Bachelor's Degrees in Information Engineering;
- LM40: Master's Degrees in Mathematics.
However, Italy has at least 64 further degree classes with AI-centred courses, from the program in Philosophy and Artificial Intelligence at Sapienza University of Rome to the Law and Artificial Intelligence program at the Mediterranea University of Reggio Calabria.

The five leading Italian universities for bachelor's and master's degrees, as well as for master's programs, are the two Polytechnics of Milan and Turin, the University of Milan and Bocconi, the University of Pisa, and the University of Genoa, with the University of Florence close behind. These host the largest research centres, though degree programs and master's courses are available in practically all 96 Italian universities. There is, however, no independent ranking of AI-specialised degree programs.

The Bocconi case. "At Bocconi University," says Luca Trevisan, director of the university's AI master's program after twenty years teaching at Berkeley and Stanford, "we have been reflecting on the importance of computer science in general, and of AI in particular, for at least five years."

Bocconi has long offered courses in mathematics, statistics, and computer science, but it is only more recently that it has focused on computer science and AI: "Three years ago we launched a bachelor's degree in artificial intelligence, last year the computer science department was created (directed by Riccardo Zecchina), and next year the master's degree in AI will be launched, completing this path."

The leap forward in international research came in 2010: that was the turning point for deep learning, which showed that, compared with earlier techniques, unprecedented things were possible. Over the course of a decade the changes accumulated, and today the field is booming, which matters greatly even for those pursuing legal, economic, and political-science studies. This is why the degree programs are found not only in science faculties but have also spread into the humanities. "We want our graduates to learn to be leaders in business while also learning mathematics, statistics, computer science, and therefore artificial intelligence," says Luca Trevisan, director of the AI master's program at Bocconi University in Milan.

Italy has taught computer science since the 1970s, among the first countries in the world to do so (shortly after the courses opened in the United States at Stanford and MIT). AI teaching and research is especially developed in the United States, partly because it requires substantial investment in computing, available there thanks to funding from companies such as Amazon, Facebook, and Google. Europe in general, and Italy in particular, is trying to close the gap with the wealthier American universities, and Italy has centres of excellence for the study of AI at several universities: Genoa, Pisa, and the Polytechnics of Milan and Turin.

"AI," says Trevisan, "is studied in computer science and engineering but has great interdisciplinary potential. Bocconi and Humanitas collaborate on medical applications, and there are other areas, such as natural language processing, that are of great interest and have an extremely broad field of applicability in research. There is a new development in the field that is bringing encounters with scholars from other areas."

Bocconi has joined ELLIS (the European Laboratory for Learning and Intelligent Systems) and the PNRR projects dedicated to AI. The system is ready to meet market demand: in Italy, according to research by the Osservatorio of the Polytechnic of Milan, the AI market reached a value of 500 million euros in 2022, growing 32% in a single year. The most active segments are intelligent data processing (34%), language interpretation (28%), recommendation systems (19%), computer vision (10%), and intelligent robotic process automation (9%). All subjects that have long been taught in Italian universities.
sharprank · 2 years ago
AgentGPT, BabyGPT and AutoGPT - what is the difference?
These are semi-autonomous "agents" that can be given high-level goals ("make a website for selling books online"), figure out the constituent tasks, such as front-end HTML development, then the backend database, and so on, and execute each of the tasks and subtasks. At a high level they are all the same, using recursive mechanisms to help GPT create prompts for GPT (so meta). Which means…
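That shared loop can be sketched as follows. The `llm()` stub stands in for a real model call, and the PLAN/EXECUTE prompt format is invented for illustration; none of it is the actual API of AgentGPT, BabyGPT, or AutoGPT.

```python
# A minimal sketch of the plan-and-execute loop these agents share.
def llm(prompt: str) -> str:
    """Stub model call, hardcoded so the loop is runnable without an API key."""
    if prompt.startswith("PLAN:"):
        # Planning call: decompose the goal into subtasks, one per line.
        return "front-end HTML site\nbackend database"
    # Execution call: pretend to carry out the subtask.
    return "done: " + prompt

def run_agent(goal: str) -> list[str]:
    # GPT writes prompts for GPT: the plan produced by one model call
    # becomes the input of the next round of model calls.
    tasks = llm(f"PLAN: {goal}").splitlines()
    return [llm(f"EXECUTE: {task}") for task in tasks]

print(run_agent("make a website for selling books online"))
```

Real agents add memory, tool use, and a check after each step for whether the goal is met, but the recursive prompt-generating structure is the same.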
mlearningai · 2 years ago
Are you ready to use multiple AI agents with one click?
t00l-xyz-ai-news · 1 month ago
I'm a GitHub Expert and I'm Shocked by These Trending AI Projects! #93
👉 Try FREE Aiarty Image Enhancer (enhance and upscale images up to 32K): https://www.aiarty.com/midjourney-prompts/?ttref=2410-ytb-aia-aigcmj-mgg-text-3l

🚀 Explore the latest trending open-source GitHub projects focusing on AI frameworks, large language models (LLMs), and AI-powered investment research tools. Whether you're a developer looking for new AI frameworks, an investor seeking advanced research tools, or an AI enthusiast exploring LLM innovations, this video has you covered! Join us as we delve into groundbreaking projects like RouteLLM for efficient LLM routing, OpenBB for AI-driven investment research, and much more.

🔗 Get FREE AI project updates: https://manuagi.beehiiv.com/subscribe

📌 Important AI tools (affiliate):
- Build your AI startup: https://shipfa.st/?via=autogpt
- AI voice: https://try.elevenlabs.io/0wgaz29csuo5
- Try NordVPN: https://nordvpn.sjv.io/autogpt
- Next.js directory: https://nextjsdirectory.com?aff=j1Dej

📌 Timestamps:
- 00:00 - Intro
- 00:28 - o1-engineer: https://github.com/Doriandarko/o1-engineer
- 02:27 - BaseAI: https://github.com/LangbaseInc/BaseAI
- 04:39 - Crawl4AI: https://github.com/unclecode/crawl4ai
- 07:01 - ChatMLX: https://github.com/maiqingqiang/ChatMLX
- 09:23 - RouteLLM: https://github.com/lm-sys/RouteLLM
- 11:39 - OpenBB: https://github.com/OpenBB-finance/OpenBB
- 13:54 - EXO: https://github.com/exo-explore/exo
- 15:56 - Netdata: https://github.com/netdata/netdata
- 18:10 - Fragments by E2B: https://github.com/e2b-dev/fragments
- 20:25 - GenAI Agents: https://github.com/NirDiamant/GenAI_Agents

👍 Enjoyed the breakdown? Like, share, and subscribe for more AI tutorials, tips, and industry insights, and drop your thoughts or project suggestions for our next review in the comments!
nexgitspvtltd · 2 years ago
Ready to take your productivity to the next level? ⚡️ With AutoGPT, you can finally conquer your to-do list and achieve your goals faster. From automating repetitive tasks to delegating to team members, this tool has everything you need to stay ahead of the game.
luanthainews · 2 years ago
What is Auto GPT? Is it an upgrade to ChatGPT?
Auto GPT is an open-source AI project built on ChatGPT's Generative Pre-trained Transformer (GPT). At its core, Auto GPT gives GPT the ability to act autonomously, without requiring a human agent to prompt its every action.
It is not a new technology, and it is not a new large language model. And it is certainly not a new AI chatbot; we have had enough of those this year.
There are many technical differences between ChatGPT and Auto GPT, but one of the key ones is autonomy: Auto GPT replaces the "human agent" with an "AI agent".
What is Auto GPT?
What is the difference between ChatGPT and Auto GPT?
How Auto GPT works
See more: https://topchuyengia.vn/tin-tuc-hot/auto-gpt-la-gi
Source: https://topchuyengia.vn
dstarr · 2 years ago
mysocial8onetech · 2 years ago
Auto GPT is a remarkable open-source Python tool that enables AI to operate independently, without any user intervention. It leverages the advanced capabilities of GPT-4 and is adept at various tasks such as conducting web searches, formulating plans, coding and debugging autonomously, and even functioning as an independent Twitter bot. For further insight into this cutting-edge technology, be sure to check out my latest blog post.