#No-code GPT tool
makergpt · 10 months
Text
A Deep Dive into Generative AI Development Brilliance!
Dive into the depths of innovation with "A Deep Dive into Generative AI Development Brilliance!" This guide is your gateway to profound insights, offering a meticulous exploration of coding excellence in the world of Generative AI Development. Master the intricacies and emerge with brilliance, shaping the future of AI with every line of code! You can visit our website for more information.
god-of-prompt · 8 months
Text
As an entrepreneur constantly on the lookout for cutting-edge tools, I was thrilled to discover the Custom GPT Toolkit from God of Prompt. This toolkit isn't just another AI chatbot software; it's a powerhouse for business growth and digital marketing strategy. With its no-code chatbot creation feature, I've been able to deploy Custom ChatGPT bots that engage my audience, enhance customer service, and automate key marketing tasks.
The toolkit's integration with OpenAI's GPTs technology means I'm leveraging the latest in machine learning for my business communications. The AI Assistant feature has been a game-changer for lead generation, helping me tap into new markets with precision targeting. It's impressive how it simplifies complex tasks like SEO content creation, making my website more visible and driving organic traffic.
Moreover, the toolkit aids in brand identity development and streamlines ad copywriting. It's like having an in-house AI-powered marketing agency! The insights I've gained have been invaluable in crafting effective marketing strategies and planning for long-term business success.
For anyone in digital marketing, e-commerce, or managing a startup, the Custom GPT Toolkit is a goldmine. It boosts workflow efficiency, ensures high-quality content creation, and opens up new avenues for revenue generation. I highly recommend it for anyone looking to elevate their brand's online presence.
#customgpt #customgpttoolkit #Gpt4turbo #ChatGPTPlus #chatgpt4 #artificialintelligence #gptstore #openAI #godofprompt #AI #GPTBuilder #GPT #gpt35
nnctales · 1 year
Text
Can I Use ChatGPT as a Construction Assistant?
The construction industry, like many others, has started to embrace the tremendous potential of artificial intelligence (AI). As part of this shift, industry professionals are increasingly asking: “Can I use AI like OpenAI’s ChatGPT as a construction assistant?” The answer is not only a resounding ‘yes’ but also that this technology can offer significant benefits. First, let’s understand what…
nostalgebraist · 1 year
Text
Honestly I'm pretty tired of supporting nostalgebraist-autoresponder. Going to wind down the project some time before the end of this year.
Posting this mainly to get the idea out there, I guess.
This project has taken an immense amount of effort from me over the years, and still does, even when it's just in maintenance mode.
Today some mysterious system update (or something) made the model no longer fit on the GPU I normally use for it, despite all the same code and settings on my end.
This exact kind of thing happened once before this year, and I eventually figured it out, but I haven't figured this one out yet. This problem consumed several hours of what was meant to be a relaxing Sunday. Based on past experience, getting to the bottom of the issue would take many more hours.
My options in the short term are to
A. spend (even) more money per unit time, by renting a more powerful GPU to do the same damn thing I know the less powerful one can do (it was doing it this morning!), or
B. silently reduce the context window length by a large amount (and thus the "smartness" of the output, to some degree) to allow the model to fit on the old GPU.
Things like this happen all the time, behind the scenes.
I don't want to be doing this for another year, much less several years. I don't want to be doing it at all.
----
In 2019 and 2020, it was fun to make a GPT-2 autoresponder bot.
[EDIT: I've seen several people misread the previous line and infer that nostalgebraist-autoresponder is still using GPT-2. She isn't, and hasn't been for a long time. Her latest model is a finetuned LLaMA-13B.]
Hardly anyone else was doing anything like it. I wasn't the most qualified person in the world to do it, and I didn't do the best possible job, but who cares? I learned a lot, and the really competent tech bros of 2019 were off doing something else.
And it was fun to watch the bot "pretend to be me" while interacting (mostly) with my actual group of tumblr mutuals.
In 2023, everyone and their grandmother is making some kind of "gen AI" app. They are helped along by a dizzying array of tools, cranked out by hyper-competent tech bros with apparently infinite reserves of free time.
There are so many of these tools and demos. Every week it seems like there are a hundred more; it feels like every day I wake up and am expected to be familiar with a hundred more vaguely nostalgebraist-autoresponder-shaped things.
And every one of them is vastly better-engineered than my own hacky efforts. They build on each other, and reap the accelerating returns.
I've tended to do everything first, ahead of the curve, in my own way. This is what I like doing. Going out into unexplored wilderness, not really knowing what I'm doing, without any maps.
Later, hundreds of others will go to the same place. They'll make maps, and share them. They'll go there again and again, learning to make the expeditions systematically. They'll make an optimized industrial process of it. Meanwhile, I'll be locked into my own cottage-industry mode of production.
Being the first to do something means you end up eventually being the worst.
----
I had a GPT chatbot in 2019, before GPT-3 existed. I don't think Huggingface Transformers existed, either. I used the primitive tools that were available at the time, and built on them in my own way. These days, it is almost trivial to do the things I did, much better, with standardized tools.
I had a denoising diffusion image generator in 2021, before DALLE-2 or Stable Diffusion or Huggingface Diffusers. I used the primitive tools that were available at the time, and built on them in my own way. These days, it is almost trivial to do the things I did, much better, with standardized tools.
Earlier this year, I was (probably) one of the first people to finetune LLaMA. I manually strapped LoRA and 8-bit quantization onto the original codebase, figuring out everything the hard way. It was fun.
Just a few months later, and your grandmother is probably running LLaMA on her toaster as we speak. My homegrown methods look hopelessly antiquated. I think everyone's doing 4-bit quantization now?
(Are they? I can't keep track anymore -- the hyper-competent tech bros are too damn fast. A few months from now the thing will probably be quantized to -1 bits, somehow. It'll be running in your phone's browser. And it'll be using RLHF, except no, it'll be using some successor to RLHF that everyone's hyping up at the time...)
"You have a GPT chatbot?" someone will ask me. "I assume you're using AutoLangGPTLayerPrompt?"
No, no, I'm not. I'm trying to debug obscure CUDA issues on a Sunday so my bot can carry on talking to a thousand strangers, every one of whom is asking it something like "PENIS PENIS PENIS."
Only I am capable of unplugging the blockage and giving the "PENIS PENIS PENIS" askers the responses they crave. ("Which is ... what, exactly?", one might justly wonder.) No one else would fully understand the nature of the bug. It is special to my own bizarre, antiquated, homegrown system.
I must have one of the longest-running GPT chatbots in existence, by now. Possibly the longest-running one?
I like doing new things. I like hacking through uncharted wilderness. The world of GPT chatbots has long since ceased to provide this kind of value to me.
I want to cede this ground to the LLaMA techbros and the prompt engineers. It is not my wilderness anymore.
I miss wilderness. Maybe I will find a new patch of it, in some new place, that no one cares about yet.
----
Even in 2023, there isn't really anything else out there quite like Frank. But there could be.
If you want to develop some sort of Frank-like thing, there has never been a better time than now. Everyone and their grandmother is doing it.
"But -- but how, exactly?"
Don't ask me. I don't know. This isn't my area anymore.
There has never been a better time to make a GPT chatbot -- for everyone except me, that is.
Ask the techbros, the prompt engineers, the grandmas running OpenChatGPT on their ironing boards. They are doing what I did, faster and easier and better, in their sleep. Ask them.
Note
Many artists are wary of AI. I've been using ChatGPT and come to the realization that with my kids, school, and other life obligations, I need to use AI in my writing process. No, I'm NOT using the AI's writing or passing it off as mine. It's for plotting, character arcs, and to help me get over writer's block. I'm conflicted. I just don't have the time to write that I wish I did. :( If I use AI, am I not a real writer?
Using AI as a Writer
I want to preface this by saying that we are quite literally in the Wild West when it comes to AI and how it fits into the creative world. The technology is still evolving. The legalities of use are still evolving. Public sentiment and codes of ethics are still evolving... the only definitive answer I can give on the topic of using AI in writing is that you should never use it to write all or part of your story.
Using AI as a tool to help you think through plot problems, flesh out character arcs, and move past story road blocks in and of itself is something many writers and writer organizations have embraced. You're still doing the actual writing, plotting, character arc design, etc. yourself.
However, if you're using AI to generate your entire plot, characters, character arc, etc., it gets into a bit of a gray area for me. Yes, you're still doing the actual writing, but are you a landscape painter if you only do paint by number? I'm not sure. Yes, you're technically moving the brush and making the strokes, but your brain didn't imagine the imagery and your skills didn't know what colors to use or where to put the shadows. So I'm not sure where that leaves writing. And my bigger concern is that you're not putting in the work to improve your craft, so you may get stories or books out there, but you don't have the writing skills to back them up.
What I can tell you is this: people still write when they have kids, and jobs, and school, and various other responsibilities and dependents. They may only write for twenty minutes a day, or once a week for an hour, but they find the time. And some don't... some put off writing until the kids are older or the other obligations let up... either route is fine. I guess what I'm saying is I don't think not having time or energy is a good excuse for using AI or over-relying on it. But, I also don't think you should feel bad if you're simply using it as a tool to help flesh out your own thoughts and ideas. ♥
•••••••••••••••••••••••••••••••••
I’ve been writing seriously for over 30 years and love to share what I’ve learned. Have a writing question? My inbox is always open!
LEARN MORE about WQA
SEE MY ask policies
VISIT MY Master List of Top Posts
COFFEE & FEEDBACK COMMISSIONS ko-fi.com/wqa
Text
A group of hackers that says it believes “AI-generated artwork is detrimental to the creative industry and should be discouraged” is hacking people who try to use a popular interface for the AI image generation software Stable Diffusion, via a malicious extension shared on Github.

ComfyUI is an extremely popular graphical user interface for Stable Diffusion that’s shared freely on Github, making it easier for users to generate images and modify their image generation models. ComfyUI_LLMVISION, the extension that was compromised to hack users, allowed users to integrate the large language models GPT-4 and Claude 3 into the same interface.

The ComfyUI_LLMVISION Github page is currently down, but a Wayback Machine archive of it from June 9 states that it was “COMPROMISED BY NULLBULGE GROUP.”

“Maybe check us out, and maybe think twice about releasing ai tools on such a weakly secured account,” the same archived Github page says.

The page said that it was a legitimate extension until it was compromised, and an archive of its Github page from May 25 shows that it was somewhat active, with 42 stars, four forks, and 12 commits. On its website, the hackers claim they had control of the extension for “many months” and had taken it over before its creator ever posted it, indicating that it may have contained malicious code the entire time it’s been up on Github.
11 June 2024
Text
On the object level, I do find myself just very frustrated with any AI singularity discourse topics nowadays. Part of this is just age - I think everyone over time runs out of patience with the sort of high-concept, "really makes you think" discussions as their own interests crystallize and you have enough of them that their circularity and lack of purpose becomes clear. Your brain can just "skip to the end" so cleanly because the intermediate steps are all cleverness, no utility, and you know that now.
Part of it is how ridiculous the proposals are - "6 months of AI research pause" like 'AI research' isn't a thing, that is not a category of human activity one can pause. The idea of building out a regulatory framework that defines what that means is itself at minimum a 6 month process. These orgs have legal rights! Does the federal government even have jurisdiction to tell a company what code it can run on servers it owns because of 'x-risk'? Again, your brain can skip to the end; a legal pause on AI research is not happening in the current environment, at all, 0% odds.
And the other part, which I admit with some humility, not being an AI researcher, is that I still am not convinced any of these new tools are at all progress towards actual general intelligence, because they are completely without agency. It seems very obviously so, and the fact that ChatGPT is not ever going to prompt itself to do things without human input is just handwaved away. The idea that it will ship itself nanomachine recipes and hack Boston Dynamics bots to build them is a fantasy; that isn't how any of this works. There is always some new version of the AI that can do this, but it's vaporware, no one is really building it.
It's not shade on the real AI X-risk people, I have heard their arguments, I get it and they are smart and thorough. It's just a topic where every one of their arguments I ever engage with leaves me with the feeling that I am reading something not based in reality. It's a feeling that is hard to push through for me.
63 notes · View notes
shituationist · 9 months
Text
assuaging my anxieties about machine learning over the last week, I learn that despite there being about ten years of doom-saying about the full automation of radiology, there's actually a shortage of radiologists now (and, also, the machine learning algorithms that are supposed to be able to detect cancers better than human doctors are very often giving overconfident predictions). truck driving was supposed to be completely automated by now, but my grampa is still truckin' and will probably get to retire as a trucker. companies like GM are now throwing decreasing amounts of money at autonomous vehicle research after throwing billions at cars that can just barely ferry people around san francisco (and sometimes still fail), the most mapped and trained-upon set of roads in the world. (imagine the cost to train these things for a city with dilapidated infrastructure, where the lines in the road have faded away, like, say, Shreveport, LA.)
we now have transformer-based models that are able to provide contextually relevant responses, but the responses are often wrong, and often in subtle ways that require expertise to needle out. the possibility of giving a wrong response is always there - it's a stochastic next-word prediction algorithm based on statistical inferences gleaned from the training data, with no innate understanding of the symbols it's producing. image generators are questionably legal (at least the way they were trained and how that affects the output of essentially copyrighted material). graphic designers, rather than being replaced by them, are already using them as a tool, and I've already seen local designers do this (which I find cheap and ugly - one taco place hired a local designer to make a graphic for them - the tacos looked like taco bell's, not the actual restaurant's, and you could see artefacts from the generation process everywhere). for the most part, what they produce is visually ugly and requires extensive touchups - if the model even gives you an output you can edit. the role of the designer as designer is still there - they are still the arbiter of good taste, and the value of a graphic designer is still based on whether or not they have a well-developed aesthetic taste themself.
for the most part, everything is in tech demo phase, and this is after getting trained on nearly the sum total of available human-produced data, which is already a problem for generalized performance. while a lot of these systems perform well on older, flawed benchmarks, newer benchmarks show that these systems (including GPT-4 with plugins) consistently fail to compete with humans equipped with everyday knowledge.
there is also a huge problem with the benchmarks typically used to measure progress in machine learning that impact their real-world use (and tell us we should probably be more cautious, because the human use of these tools is bound to be reckless given the hype they've received). back to radiology: some machine learning models barely generalize at all, and only perform slightly better than chance at identifying pneumonia in pediatric cases when they're exposed to external datasets (external to the hospital where the data they were trained on came from). other issues, like data leakage, often make popular benchmarks an overoptimistic measure of success.
very few researchers in machine learning are recognizing these limits. that probably has to do with the academic and commercial incentives towards publishing overconfident results. many papers are not even in principle reproducible, because the code, training data, etc., is simply not provided. "publish or perish", the bias journals have towards positive results, and the desire of tech companies to get continued funding while "AI" is the hot buzzword, all combined this year for the perfect storm of techno-hype.
which is not to say that machine learning is useless. their use as glorified statistical methods has been a boon for scientists, when those scientists understand what's going on under the hood. in a medical context, tempered use of machine learning has definitely saved lives already. some programmers swear that copilot has made them marginally more productive, by autocompleting sometimes tedious boilerplate code (although, hey, we've had code generators doing this for several decades). it's probably marginally faster to ask a service "how do I reverse a string" than to look through the docs (although, if you had read the docs to begin with would you even need to take the risk of the service getting it wrong?) people have a lot of fun with the image generators, because one-off memes don't require high quality aesthetics to get a chuckle before the user scrolls away (only psychopaths like me look at these images for artefacts). doctors will continue to use statistical tools in the wider machine learning tool set to augment their provision of care, if these were designed and implemented carefully, with a mind to their limitations.
anyway, i hope posting this will assuage my anxieties for another quarter at least.
Text
My New Article at American Scientist
As of this week, I have a new article in the July-August 2023 Special Issue of American Scientist Magazine. It’s called “Bias Optimizers,” and it’s all about the problems and potential remedies of and for GPT-type tools and other “A.I.”
This article picks up and expands on thoughts started in “The ‘P’ Stands for Pre-Trained” and in a few threads on the socials, as well as touching on some of my comments quoted here, about the use of chatbots and “A.I.” in medicine.
I’m particularly proud of the two intro grafs:
Recently, I learned that men can sometimes be nurses and secretaries, but women can never be doctors or presidents. I also learned that Black people are more likely to owe money than to have it owed to them. And I learned that if you need disability assistance, you’ll get more of it if you live in a facility than if you receive care at home.
At least, that is what I would believe if I accepted the sexist, racist, and misleading ableist pronouncements from today’s new artificial intelligence systems. It has been less than a year since OpenAI released ChatGPT, and mere months since its GPT-4 update and Google’s release of a competing AI chatbot, Bard. The creators of these systems promise they will make our lives easier, removing drudge work such as writing emails, filling out forms, and even writing code. But the bias programmed into these systems threatens to spread more prejudice into the world. AI-facilitated biases can affect who gets hired for what jobs, who gets believed as an expert in their field, and who is more likely to be targeted and prosecuted by police.
As you probably well know, I’ve been thinking about the ethical, epistemological, and social implications of GPT-type tools and “A.I.” in general for quite a while now, and I’m so grateful to the team at American Scientist for the opportunity to discuss all of those things with such a broad and frankly crucial audience.
I hope you enjoy it.
naryrising · 1 year
Note
Hi Nary! First off, thank you for your positive and impactful presence in fandom and on AO3. Both are better because you’re here with us.
Quick question: I’ve seen a lot of posts urging AO3 authors to lock their accounts for members-only to prevent AI scraping for things like sudowrite. But isn’t that like closing the barn door after the horse is already out? From your knowledge, does locking down an account starting today provide any benefit from AI issues?
Thank you for everything you do!
Ok, well, I wrote the AO3 news post that went out about that topic, and in it I did suggest that locking works is a way to potentially help avoid scraping. But I can expand on that somewhat, because it's really quite a bit more complicated. (And as always, I'm not speaking in an official capacity here, just my own personal outlook).
Will locking your work stop AO3's data from being used in things like ChatGPT, Sudowrite, etc.? No. Those tools are all based on the CommonCrawl dataset, which began collecting in 2011 and continues to this day. Specifically, as far as I understand, Sudowrite and ChatGPT and others were built on GPT-3, a model released in 2020 and trained on a snapshot of that dataset (and therefore, based on data collected earlier than 2020). Therefore, if that is your primary concern, yes, the horse is very much out of the barn - this data was collected many years ago at this point, and any prospect of removing it is going to probably involve legal challenges about how such data can be used. This is very much uncharted territory as far as the law is concerned, so it may take years for courts to sort out what rights authors have in this situation. (For instance, can you request the removal of your copyrighted texts? Who knows!)
What about scraping in the future, though? When AO3 became aware that this data was being used to train AI text generators, it blocked the CommonCrawl bot. Assuming CommonCrawl behaves ethically, it will respect that block and not scrape further data from the site. So locking your works today will make no difference if what you're trying to avoid is being scraped by CommonCrawl, as AO3 already took measures to prevent that going forward.
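(For context, the standard mechanism for that kind of block is a robots.txt rule aimed at CommonCrawl's crawler, whose user-agent is CCBot. A generic sketch of the idea, not AO3's actual file:

```
# robots.txt - ask CommonCrawl's crawler not to collect anything from the site
User-agent: CCBot
Disallow: /
```

Note that this only works against crawlers that choose to honor robots.txt.)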
What about other types of data scraping? Great question, and that is the murky area. There are many other people and companies out there who are not CommonCrawl and may have other goals and motives. Some of that could range from a dedicated fan wanting to scrape a copy of their entire fandom's contents on a certain date to keep as a private backup, to academic researchers working on entirely above-board projects in linguistics or literature or media studies, to companies wanting to build their own dataset for training some other future kind of AI, or something none of us are currently able to guess.

If that's your concern, then locking your work might provide some degree of protection. It will, for instance, probably protect against fairly crude large-scale mass scraping. (AO3's coding team has also stated that it will block these types of mass scrapers if and when it becomes aware of them, and has already for some time taken measures such as rate limiting to make the scraping process harder.)

But - people, including people who want to scrape data, can make accounts on AO3. It's free, anyone can join, it typically takes about a week to get an invite. They can log in, see the works that are only visible to logged-in users, and scrape them, just with a bit more effort. Now, these are currently, I suspect, more likely to be the kind of scraping projects like "I just want a personal copy of every work in my fandom" or "I'm an academic doing research on fanfiction and I'm collecting data about how fic writers use tags", which some people might be okay with. But it could also be someone with less ethical motives. It's hard to stop one without also stopping the other, from AO3's side.

From users' side, locking your works is probably protective against large-scale data scraping, but less so against this type of smaller-scale data scraping. But also, I can't predict the future, and maybe there's some project happening right now to figure out a way around this! I don't know!
In short, if you don't want your data scraped, never put anything online anywhere ever, or support legal changes that will allow for stronger data protection. Right now, nothing is completely safe. Locking your works might make them slightly safer, but is not a total guarantee of protection.
makergpt · 10 months
Text
"Your Words, Your Way: Empower Creativity with a Custom GPT Tool! 🚀🖋️"
Unlock a world of personalized language innovation with "Custom GPT Tool" – where your words take center stage! 🌐💬
🎨 Tailor your narrative and elevate your storytelling with a tool that allows you to customize the language model to your unique style. "Custom GPT Tool" is not just about coding; it's about infusing your creative fingerprint into every word, turning your ideas into linguistic masterpieces. ✨💻
Craft brilliance effortlessly, whether you're a seasoned developer or a writing enthusiast. This tool empowers you to shape the future of communication, putting your words at the forefront of language innovation. 🌟📖
Join the revolution of personalized language models, and let your creativity flow in a digital realm where every sentence reflects your individuality. Your words, your way – the journey with "Custom GPT Tool" promises to redefine how we communicate in the ever-evolving landscape of artificial intelligence. 🤖🔍 #CustomGPT #LanguageInnovation #CreativeCoding
jcmarchi · 6 months
Text
AI GPTs for PostgreSQL Database: Can They Work?
Artificial intelligence is a key point of debate right now. ChatGPT reached 100 million active users in just its first two months, which has increased focus on AI's capabilities, especially in database management. The introduction of ChatGPT is considered a major milestone in the Artificial Intelligence (AI) and tech space, raising questions about the potential applications of generative AI, like AI GPTs for PostgreSQL databases. This generative AI tool is considered a significant discovery because it can execute complex tasks, including writing programming code efficiently.
For example, Greg Brockman from OpenAI made a whole website using an image he drew on a napkin and GPT-4. Feats like this show why people want to blend AI GPTs and database systems such as PostgreSQL. This blog will discuss the answer to the question: Can AI GPTs optimize PostgreSQL databases?
Understanding AI GPTs
Researchers train AI GPTs on large amounts of text data. The main goal of these AI systems is to produce content that reads as if it were human-written. These models identify intricate patterns in their training data, allowing them to provide relevant and accurate text outputs. They are not Artificial General Intelligence (AGI) systems but specialized models created for language processing tasks.
PostgreSQL: A Brief Overview
PostgreSQL, also known as Postgres, is a widely used open-source object-relational database management system. Postgres gained a solid reputation among database management systems due to its reliability, extensive features, and performance. Companies can use Postgres for all kinds of applications – from small projects to handling the big data needs of major tech corporations.
G2 ratings rank Postgres as the third easiest-to-use relational database software, showing it is a user-friendly option for developers and organizations seeking a dependable database solution.
Can AI GPTs be effectively used with PostgreSQL?
Imagine having human-like conversations with a database, where GPTs translate our everyday language into SQL queries or summarize complex Postgres data. Using AI GPTs for PostgreSQL databases opens up new exciting opportunities.
Here are some ways this integration could come to life:
Query Generation
AI GPTs simplify database queries by turning natural language prompts into SQL, making data accessible to non-technical users. They can bridge the gap between non-technical users and Postgres databases, allowing people to query and analyze data effectively even if they don't know how to write database queries.
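A hedged sketch of what that could look like in practice. The model name, example schema, and prompt below are illustrative assumptions, not any particular product's implementation; the only real dependency is the standard OpenAI Python client:

```python
# Hypothetical sketch: translating a plain-English question into SQL with
# an LLM. Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the
# environment; the schema and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SCHEMA = """CREATE TABLE orders (
    id serial PRIMARY KEY,
    customer_name text,
    total numeric,
    created_at timestamptz
);"""

def question_to_sql(question: str) -> str:
    """Ask the model for a single PostgreSQL query answering the question."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system", "content": (
                "Translate the user's question into one PostgreSQL query "
                "for this schema. Reply with SQL only.\n" + SCHEMA)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(question_to_sql("What were our ten biggest orders last month?"))
```

Generated SQL should still be reviewed, or run against a read-only replica, before it touches production data.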
Postgresql Data Management with AI GPTs
Integrating AI GPTs with PostgreSQL databases, especially on the Microsoft Azure cloud platform, introduces a new world of possibilities for data management. With the pgvector extension support in Postgres, ChatGPT can access, store, search, and update knowledge directly in these databases. This improves data retrieval efficiency and enables real-time interactions with systems and data.
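A minimal sketch of that storage-and-search pattern, assuming a Postgres server where the pgvector extension can be created and the psycopg2 driver is installed; the table name, embedding dimension, and placeholder vector are illustrative assumptions:

```python
# Hypothetical sketch: storing and searching embeddings in Postgres with
# the pgvector extension. Assumes `pip install psycopg2-binary`; names,
# connection details, and dimensions are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=docs user=postgres")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""CREATE TABLE IF NOT EXISTS knowledge (
    id bigserial PRIMARY KEY,
    content text,
    embedding vector(1536)
);""")

# In practice the vector comes from an embedding model; a placeholder here.
embedding = "[" + ",".join(["0.1"] * 1536) + "]"

cur.execute(
    "INSERT INTO knowledge (content, embedding) VALUES (%s, %s)",
    ("Postgres is an open-source object-relational database.", embedding),
)

# Retrieve the stored rows closest to a query embedding, using pgvector's
# cosine-distance operator (<=>).
cur.execute(
    "SELECT content FROM knowledge ORDER BY embedding <=> %s LIMIT 3",
    (embedding,),
)
print(cur.fetchall())
conn.commit()
```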
Data Analysis and Reporting
Data scientists can use AI GPTs to analyze natural language data in PostgreSQL databases. These AI systems can create reports, summaries, and analyses from complex data, providing useful information in a format that is easy for people to understand. This also enables non-technical stakeholders to effortlessly gain meaningful insights from Postgres data.
Schema Design and Database Documentation
AI agents with GPTs can potentially streamline database management for data scientists. These advanced AI tools can design database schemas that meet specific data needs and automatically produce detailed documentation for Postgres database structures.
Query Optimization
GPTs have the potential to interpret and analyze SQL queries and recommend optimizations. They can identify redundancies, inefficient joins, or overlooked indexing opportunities, improving database performance and lowering query execution times.
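A hedged sketch of that idea: hand the model a slow query along with its execution plan and ask for suggestions. The query, plan text, and model name are placeholders, and anything the model proposes would still need to be verified with EXPLAIN ANALYZE:

```python
# Hypothetical sketch: asking an LLM to suggest indexes or rewrites for a
# slow PostgreSQL query. Assumes the `openai` package (v1+); the query and
# plan below are illustrative stand-ins for real EXPLAIN output.
from openai import OpenAI

client = OpenAI()

slow_query = (
    "SELECT * FROM orders o JOIN customers c "
    "ON o.customer_id = c.id WHERE c.region = 'EU';"
)
plan = "Hash Join ... Seq Scan on orders ... Seq Scan on customers ..."

advice = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[{
        "role": "user",
        "content": ("Suggest indexes or query rewrites for this PostgreSQL "
                    f"query.\nQuery:\n{slow_query}\n\nPlan:\n{plan}"),
    }],
).choices[0].message.content

print(advice)  # treat the output as hints to verify, not commands to run
```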
Data Validation and Integrity Checks
AI GPTs can check data for quality, consistency, and integrity before it’s inserted or updated in Postgres databases. These models can identify unusual, irregular, or inconsistent entries in stored structured data. This capability helps in proactive data cleaning and maintaining high-quality data in databases.
AI GPTs for PostgreSQL Database: Challenges and Limitations
Although the potential use cases of AI GPTs for PostgreSQL are intriguing, the implementation comes with a unique set of challenges and limitations:
Accuracy and Safety
AI GPTs might produce inaccurate or potentially harmful outputs when used alongside Postgres. Strong safeguards and verification processes are important to counteract this risk and ensure data is stored reliably.
Domain Knowledge and Contextual Understanding
AI GPTs lack the domain knowledge to grasp complex database structures. They also struggle to understand the business logic related to PostgreSQL. This highlights the need for specialized training and fine-tuning of these AI GPTs. By using Retrieval-Augmented Generation (RAG) systems, we can potentially equip them with technical Postgres knowledge.
Integration and Scalability
Integrating AI GPTs with PostgreSQL databases carefully while ensuring compatibility is crucial for smooth operation. Training and deploying large language models also requires organizations to employ skilled cloud architects to manage the extensive computational resources involved.
Trust and Adoption
Database professionals might show resistance or skepticism toward incorporating AI agents into Postgres databases. Overcoming this challenge requires industrial engineers to perform thorough testing and demonstrate AI GPTs’ benefits to foster trust.
Data Privacy and Security
Robust measures must secure data privacy when AI GPTs are used with PostgreSQL databases, preventing sensitive data from being accidentally exposed or misused during training or inference.
Finding the Sweet Spot: AI GPTs for PostgreSQL
Integrating AI GPTs into PostgreSQL database management presents considerable challenges alongside its potential benefits. Effective integration of these AI systems requires detailed testing, targeted training, and advanced security to ensure data safety. As AI evolves, applying AI GPTs to database management could become more practical. Ultimately, the goal is to improve database environments for tasks like time-series data processing.
Visit unite.ai today to stay updated with the latest AI and machine learning developments, including in-depth analyses and news.
professorspork · 1 year
Note
Hey, a little while ago, you reblogged that post about AI learning when people insert fics into AI text generators, and I wanted to offer good news and bad news: the good news is that AI learning models mostly don’t work like this. The publicly accessible text generator isn’t the whole learning model, it’s a single machine that the learning model generated. It won’t get fed directly back into the AI.
The BAD news is that there’s not really anything stopping them from saving that information separately to use later, and (much worse) anything that’s publicly available has probably already been scraped and saved. The good-in-this-context-but-depressing-overall news is that these models operate on the scale of billions of words, so, like. Idk. Individual fics ending up in a database mostly isn’t going to matter. That’s part of why the data-scraping isn’t something devs think about, ethically.

This info is a paraphrase of another post I’ve seen going around saying the same thing, but I can personally corroborate it; before AI was a “crypto people hate when artists can earn a living” thing, I took some college courses on it and followed blogs about AI stuff for years. The last year or two of AI news has been really shitty :P It’s been really cool to me for a long time, but it is now clear that it’s even-more-vulnerable-than-usual to “capitalism uses every tool for oppression first.”

Knowing how it works is exhausting because anti-AI people are sometimes not all that much more accurate about how it actually works than the fervently pro-AI “I think ChatGPT is a person and human-generated art is dead” people, and then both of them skip talking about the more concrete problems like the “ChatGPT is propped up by slave labor” stuff.
I really appreciated this series of asks and wanted to make it available for all!
I think what we run into here is where like. A rhetorical device to invoke a sense of stakes and a bit of a guilt trip ("this is plagiarism because it feeds the AI" and its many permutations) can run up against misinformation (it's not literally becoming part of the AI's knowledge base, though as you noted it certainly COULD.) Because like
Where that post was coming from was someone being like "but why shouldn't I do this?" and the answerer resorting to "because it takes my work away from me" and this is still true in like, the rules of community and creativity if not necessarily in the hard lines of code. it's harder to articulate "this makes me uncomfortable because it's violated my ineffable sense of mutual belonging with and ownership of my own work, which I already felt on shaky ground on because it's fanwork but still FEEL with my WHOLE HEART" than it is to say "this concretely makes my words fuel for the machine" which I think people grok as a more sort of understandable breach of that social contract.
Which is why I like this post a lot because it gets at the WHY of why this is so perturbing and violating and isolating
Fandom was never meant to be a solo endeavor! when I write fic and put it out into the world, it's like echolocation. the words I put out are only half of what gives it shape and meaning to me-- the other half is the sound of it reverberating back to me as it bounces off the people it hits by way of comments, tags in reblogs, and DMs and they tell me their reactions and interpretations. that's what makes it a complete picture and not just screaming into the void.
to be removed from that process at all is a heartbreak to me; to have my words taken without my consent is insulting and misses the point and just. ultimately makes all of us that much more alone. which is to say: it's factually correct that individual fics ending up in a database won't matter, since everything has probably already been scraped anyway - that's true for the AI and for the data. but individual fics DO matter insofar as like, these are choices people are making about what this hobby is and means and why they like it and what they think it's for and how they enjoy it, on a communal and social level, and THAT matters to me a great deal, in the same way that like, people now might end up getting videoed for a tiktok without their consent or whatever. it's about the erosion of privacy and respect.
but also yeah ChatGPT also runs thanks to exploited and underpaid workers, consumes horrific amounts of water in a time of increasing drought crisis and emits tons of carbon to boot.
womaneng · 4 months
Text
Top 10 generative AI tools for software developers ✨
Generative AI can help developers find solutions, code widgets, fix bugs, and learn along the way. Generative AI is considered a cutting-edge field in AI research due to its potential to create high-quality, innovative outputs that can be indistinguishable from human-generated content. 👩🏻‍💻
1. ChatGPT
2. Google Gemini
3. OpenAI Codex
4. AlphaCode
5. GPT-4
6. GitHub Copilot
7. Amazon CodeWhisperer
8. Tabnine
9. CodeWP
mariacallous · 4 months
Text
ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.
Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.
Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.
The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI—are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.
ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.
“Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.
OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation is in refining the network used to peer inside the system of interest by identifying concepts, to make it more efficient.
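The additional model in question is a sparse autoencoder. A minimal sketch of the general idea follows, with illustrative dimensions and random stand-in data rather than OpenAI's released code:

```python
# Illustrative sketch: a sparse autoencoder trained on a model's internal
# activations, so that individual learned features line up with concepts.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse concept codes
        return self.decoder(features), features

sae = SparseAutoencoder(d_model=768, d_features=16384)
acts = torch.randn(64, 768)  # stand-in for activations captured from an LLM

recon, feats = sae(acts)
# Reconstruct the activations while pushing most feature values toward zero,
# so each surviving feature tends to capture one recognizable concept.
loss = nn.functional.mse_loss(recon, acts) + 1e-3 * feats.abs().mean()
loss.backward()
```

Once such a model is trained, inspecting which inputs most strongly activate a given feature is what lets researchers label that feature with a concept.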
OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
Even though LLMs defy easy interrogation, a growing body of research suggests they can be poked and prodded in ways that reveal useful information. Anthropic, an OpenAI competitor backed by Amazon and Google, published similar work on AI interpretability last month. To demonstrate how the behavior of AI systems might be tuned, the company's researchers created a chatbot obsessed with San Francisco's Golden Gate Bridge. And simply asking an LLM to explain its reasoning can sometimes yield insights.
“It’s exciting progress,” says David Bau, a professor at Northeastern University who works on AI explainability, of the new OpenAI research. “As a field, we need to be learning how to understand and scrutinize these large models much better.”
Bau says the OpenAI team’s main innovation is in showing a more efficient way to configure a small neural network that can be used to understand the components of a larger one. But he also notes that the technique needs to be refined to make it more reliable. “There’s still a lot of work ahead in using these methods to create fully understandable explanations,” Bau says.
Bau is part of a US government-funded effort called the National Deep Inference Fabric, which will make cloud computing resources available to academic researchers so that they too can probe especially powerful AI models. “We need to figure out how we can enable scientists to do this work even if they are not working at these large companies,” he says.
OpenAI’s researchers acknowledge in their paper that further work needs to be done to improve their method, but also say they hope it will lead to practical ways to control AI models. “We hope that one day, interpretability can provide us with new ways to reason about model safety and robustness, and significantly increase our trust in powerful AI models by giving strong assurances about their behavior,” they write.
rajibperfection · 26 days
Text
A Review of the Merlin Lifetime Deal
It’s hard to believe AI tools help you work smarter when you’re still stuck switching between tabs to get things done. (“Just call me an AI assistant juggler.”)
With so many AI models and features on the market, you’re using way too much tech to research and generate different types of content.
What if there was a Chrome extension packed with all the AI models you need to speed up your research and content creation process?
Overview:
Merlin is a Chrome browser extension and web app that gives you access to popular AI models to research, summarize, and write content.
Best for: Educators, Marketers, Small businesses
Alternatives to: Copy.ai, Grammarly, Jasper
Integrations: Facebook, Gmail, LinkedIn, Outlook, Twitter
Main features: GDPR-compliant, AI
Chat with leading AI models, from one browser
With Merlin, you’ll receive access to prominent AI models, like GPT-4, Claude-3, Gemini 1.5, Leonardo, and others—all from your Chrome web browser.
No more moving between browser tabs! Use Merlin's AI chatbot on every website you visit.
Use complex image-generation models to develop captivating brand storylines.
Plans & features
Deal terms & conditions
Lifetime access to Merlin
All future Pro Plan updates
If Plan name changes, deal will be mapped to the new Plan name with all accompanying updates
No codes, no stacking—just choose the plan that’s right for you
You must activate your license within 60 days of purchase
Ability to upgrade between 3 license tiers while the deal is available
Ability to downgrade between 3 license tiers within 60 days of purchase
GDPR compliant
Available for new Merlin users and returning AppSumo purchasers
Previous AppSumo customers who purchased Merlin can upgrade their license to increase their feature limits
1 Merlin query = 1 Chat GPT 3.5 query
Find all other AI model Query Standards here
All purchasers subject to Merlin’s Terms & Conditions
60 day money-back guarantee. Try it out for 2 months to make sure it’s right for you!
Features included in all plans
Chat with documents
Image generation
Chatbots
Chat with web pages
YouTube summarization
Blog summarization
Twitter, Gmail, Outlook, and LinkedIn FAB bars
LinkedIn Pro connect
Create from YouTube
Post in YouTube comments
AI personas