#Prompt engineering ai training
Prompt Engineering Recorded Demo Video
Mode of Training: Online
Contact +91-9989971070
👉Watch Demo Video @ https://youtu.be/75nViETe51Y
👉 WhatsApp: https://www.whatsapp.com/catalog/919989971070
🌐 Visit: https://www.visualpath.in/prompt-engineering-course.html
#Prompt Engineering Course#Prompt Engineering Training#Prompt Engineering Course in Hyderabad#Prompt Engineering Training in Hyderabad#Prompt Engineering Course Online#Prompt Engineering Ai Training in Hyderabad#Prompt Engineering Ai Course Online#Prompt Engineering Ai Training#Youtube
0 notes
please delete your philosophy gpt-3 post. it's most likely stolen writing.
philosophy?? idk which one you're referring to sorry. also no . if it's the poetry one, see in tags. actually see in tags anyway. actually pls look at my posts on AI too . sorry if it's badly worded i'm very tired :')
#GPT3 is a large language model (LLM) and so is trained on massive amounts of data#so what it produces is always going to be stolen in some way bc...it cant be trained on nothing#it is trained on peoples writing. just like you are trained on peoples writing.#what most ppl are worried about w GPT3 is openAI using common crawl which is a web crawler/open database with a ridiculous amt of data#in it. all these sources will obviously include some published books in which case...the writing isnt stolen. its a book out in the open#meant to be read. it will also include Stolen Writing as in fanfics or private writing etc that someone might not want shared in this way#HOWEVER . please remember GPT3 was trained on around 45TB of data. may not seem like much but its ONLY TEXT DATA. thats billions and#billions of words. im not sure what you mean by stolen writing (the model has to be trained on...something) but any general prompt you give#it will pretty much be a synthesis of billions and billions and billions of words. it wont be derived specifically from one stolen#text unless that's what you ask for. THAT BEING SAID. prompt engineering is a thing. you can feed the model#specific texts and writings and make sure you ask it to use that. which is what i did. i know where the writing is from.#in the one post i made abt gpt3 (this was when it was still in beta and not publicly accessible) the writing is a synthesis of my writing#richard siken's poetry#and 2 of alan turing's papers#im not sure what you mean by stolen writing and web crawling def needs to have more limitations . i have already made several posts about#this . but i promise you no harm was done by me using GPT3 to generate a poem#lol i think this was badly worded i might clarify later but i promise u there are bigger issues w AI and the world than me#feeding my own work and a few poems to a specifically prompt-engineered AI#asks#anon
12 notes
GenAI Training in Hyderabad | Generative AI Training
Master Generative AI Creativity with Gen AI Online Training: A Beginner’s Guide
Generative AI represents a groundbreaking advancement in artificial intelligence, focusing on creating content such as text, images, audio, and even video. It leverages sophisticated machine learning models like GPT (Generative Pre-trained Transformers) to produce human-like outputs. For beginners looking to delve into this exciting field, structured learning programs like GenAI Training are essential to build a solid foundation in this transformative technology.
Understanding Generative AI
At its core, Generative AI mimics human creativity by analyzing vast datasets and generating original content. Unlike traditional AI systems designed to recognize patterns or make decisions, Generative AI creates something entirely new. Examples include generating realistic human faces using GANs (Generative Adversarial Networks) or crafting conversational text with tools like ChatGPT. Through specialized programs such as a Generative AI Training Course, learners can explore the principles behind these innovations.
Professionals interested in hands-on knowledge often opt for a Generative AI Course in Hyderabad or GenAI Online Training, where they can gain expertise in deploying these models across industries. These courses cover essential concepts like neural networks, natural language processing (NLP), and multimodal AI.
Applications of Generative AI
Generative AI is revolutionizing various sectors:
Healthcare: AI-driven models assist in generating synthetic medical data for research.
Entertainment: It enables realistic animations and procedural game content.
Business: Generative AI tools optimize marketing with personalized content creation.
In Hyderabad, a technology hub, professionals are increasingly turning to Gen AI Training in Hyderabad to upskill and take advantage of the growing demand for AI expertise. Additionally, institutions offering a GenAI Course in Hyderabad provide a local and accessible way to master these cutting-edge skills.
Why Learn Generative AI?
Generative AI is not just a buzzword; it’s a skill set that opens doors to lucrative career opportunities. By enrolling in programs like Generative AI Training Course, learners can unlock the potential to innovate across domains. Whether you’re interested in text generation, image synthesis, or AI ethics, training programs provide the tools and knowledge to succeed.
Institutes offering Generative AI Online Training allow professionals to learn at their convenience, making this technology accessible globally. Meanwhile, city-specific programs such as Generative AI Course in Hyderabad cater to the growing AI community in bustling tech cities, equipping learners with practical, industry-relevant skills.
Conclusion
Generative AI is reshaping how we interact with technology, offering endless possibilities for innovation and creativity. Whether you’re an aspiring data scientist or a business professional, understanding Generative AI through programs like GenAI Training or Generative AI Training Course is crucial to staying competitive in this evolving field.
Visualpath is the Leading and Best Software Online Training Institute in Hyderabad. Avail complete Generative AI Online Training Worldwide. You will get the best course at an affordable cost.
Attend Free Demo
Call on - +91-9989971070.
Visit Blog: https://visualpathblogs.com/
WhatsApp: https://www.whatsapp.com/catalog/919989971070
Visit https://www.visualpath.in/online-gen-ai-training.html
#GenAI Training#Generative AI Training#Gen AI Training In Hyderabad#Genai Online Training#Generative AI Training Course#GenAI Course in Hyderabad#Generative AI Course in Hyderabad#Generative AI Online Training Courses#Prompt Engineering
1 note
Integrating GPTs Into Your Work Environment
Get Your Own Artificial Intelligence Assistant
So often now, I find myself talking about the need to create work environments that support rather than undermine GPTs. In this discussion, I want to lay my cards on the table and let you see what that looks like. I like to think that by now everyone has dabbled in something AI-related, if only to chat with a robot for fun.
More and more, that will shift to a need for better familiarity – one that quickly scales up from using an AI such as ChatGPT on the provider's website to using it from a desktop client on your own computer.
Why this matters now is privacy and safety: it can hurt you to go to someone else's servers and type in, supposedly in private, intellectual property or details of [a non-vetted, non-trademarked product] that hasn't been released yet.
Mining Your AI
Thus spoke Zarathustra; now we'll look at some tactics you can put in place to feed publicly cited, curated and tagged information into a better-footnoted arena your LLM can 'mine'.
Some of these steps may have shortcuts, depending on the system you are using. Note: this guide is just an example framework, and since most users are familiar with ChatGPT, we will use it as the base LLM.
It’s Not Just About ChatGPT
So in my office at the moment, ChatGPT is not the only LLM I'm working with. For better or worse, if you have an AI that cannot or will not give you an output based on its training, then you're stuck, unless you have another LLM that doesn't have those restrictions, or that has other restrictions you're not yet aware of because you haven't run into them yet.
For instance, just the other day, I asked Gemini for the most important COVID-19 mandates and rules ordered by the Biden administration since 2021; it said it was not able to find this information. ChatGPT, on the other hand, had no problem with this task.
Hence, most of us who own a computer can now download the desktop version of ChatGPT and use it as a client on a PC or Mac.
Also, within the coming days, we will start doing web searches with SearchGPT as a dedicated search tool (ChatGPT already uses the web, but this will be more direct). But this is not all. The real place where the rubber meets the road on privacy is PrivateGPT.
PrivateGPT can be downloaded to run a local GPT-style model against your data, keeping it secure and never sharing it with external servers. At least for me, this is the end game of private AI. Upgrades in the future, though? Absolutely!
Again, I'm not going to get into the technical details of how to install PrivateGPT (does 'Google it' sound ironic yet? That's why Google got into this too, of course), as there is plenty of video content out there with step-by-step guidance, but the payoff is substantial.
Benefits:
Complete control over data privacy.
Customization to fit specific business needs.
Custom Knowledge Base.
Organized and easily accessible data.
High customization for specific queries.
API integration – create an API that acts as your private brain, with the AI querying it for the right information.
Real-time data access.
Centralized data management.
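As a sketch of what a local-first query might look like, assuming a PrivateGPT-style server listening on localhost with an OpenAI-compatible chat API (the endpoint path and payload fields here are illustrative, not a documented interface):

```python
import json

# Hypothetical local endpoint: PrivateGPT-style deployments typically
# listen on localhost, so no data leaves the machine.
LOCAL_API = "http://localhost:8001/v1/chat/completions"

def build_query(prompt: str, use_context: bool = True) -> dict:
    """Build a request payload for a local, OpenAI-compatible chat API."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "use_context": use_context,  # ask the server to search ingested docs
        "stream": False,
    }

payload = build_query("Summarize our Q3 product roadmap.")
body = json.dumps(payload)  # this is what would be POSTed to LOCAL_API
```

Because the server runs locally, the proprietary text in `prompt` never reaches an external provider; the actual POST is left out of the sketch.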
Implementation Options
When deciding on the best implementation option for your PrivateGPT, consider the following factors. Not 100 per cent technically inclined? No worries. The first place to go for help implementing PrivateGPT is GitHub; no computer science degree required.
First, is privacy a big deal for you? If not, the desktop or browser version of ChatGPT will do. Scalability is probably the next biggest consideration after privacy, especially for a small business that isn't expanding its budget for this.
Also consider what you want to use it for, how your company and the number of employees using it are growing, and how to keep business information secure if you're using a CRM.
Next Steps
1. Evaluate Needs: Assess your specific needs and constraints.
2. Choose Method: Select the method(s) that best fit your requirements.
3. Plan Implementation: Create a detailed implementation plan.
4. Execute and Monitor: Implement the chosen solution(s) and monitor performance.
Consider what each tool can do for you, judging each one in the context of your research needs, your wallet, your technical infrastructure and what you aim to achieve.
It is up to you to choose according to your precise needs. If you prefer cloud solutions, they are easier to implement and are cost-efficient for scale, while local servers are preferred if you want maximum control over your data, and/or if data privacy concerns are a priority.
On-Premises Deployment
Deploy a local GPT-style engine onto your desktop or within the enterprise.
Implementation Step 1:
1. Hardware Setup: Ensure you have the necessary hardware, such as servers with GPUs.
2. Software Installation: Install required software for running GPT models.
3. Model Deployment: Deploy the model on your local infrastructure.
4. Security Measures: Implement robust security measures to protect data.
Benefits:
Enhanced data security.
Full control over deployment and management.
Database Integration.
Overview: Store your curated information in a structured database and query it as needed.
Implementation Step 2:
1. Database Setup: Choose a suitable database (e.g., PostgreSQL, MySQL, MongoDB).
2. Data Ingestion: Load your curated data into the database.
3. Query Integration: Implement mechanisms for querying the database and retrieving data.
Benefits:
Efficient data storage and retrieval.
Scalability.
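The three database steps above can be sketched with Python's built-in sqlite3 (the table name and sample rows are invented for illustration; a real deployment would use PostgreSQL, MySQL, or MongoDB as suggested):

```python
import sqlite3

# Step 1: database setup -- in-memory here; a real deployment uses a file or server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, tag TEXT, body TEXT)")

# Step 2: ingest curated data.
curated = [
    ("onboarding", "All new hires complete security training in week one."),
    ("policy", "Customer data must never leave the internal network."),
]
conn.executemany("INSERT INTO docs (tag, body) VALUES (?, ?)", curated)

# Step 3: query mechanism -- retrieve passages to hand the model as context.
def retrieve(keyword: str) -> list[str]:
    rows = conn.execute(
        "SELECT body FROM docs WHERE body LIKE ?", (f"%{keyword}%",)
    )
    return [body for (body,) in rows]

print(retrieve("security"))  # → ['All new hires complete security training in week one.']
```

The retrieved passages would then be pasted into the prompt, which is the basic retrieval-augmented pattern the post is describing.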
Document Indexing and Retrieval.
Overview: Use document indexing and retrieval systems like Elasticsearch or Solr.
Implementation Step 3:
1. Indexing Setup: Set up Elasticsearch or Solr on your server.
2. Data Indexing: Index your documents for fast retrieval.
3. Integration: Integrate the indexing system with your AI model for querying.
Benefits:
Fast search capabilities.
Handling large volumes of text.
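Elasticsearch and Solr do this at scale; as a minimal illustration of the underlying idea, here is a toy inverted index with set-intersection search (documents and terms invented for the example):

```python
from collections import defaultdict

# Step 2: index documents -- map each term to the ids of documents containing it.
docs = {
    1: "prompt engineering guides the model",
    2: "the model runs on local hardware",
    3: "local indexes make retrieval fast",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# Step 3: querying -- intersect posting sets, as a search engine would.
def search(*terms: str) -> set[int]:
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(search("local")))           # → [2, 3]
print(sorted(search("local", "model")))  # → [2]
```

Real engines add tokenization, stemming, and relevance ranking on top, but the index-then-intersect core is the same.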
Final Thoughts
By completing these steps, you will give yourself a diverse, safe, efficient environment for using private, curated knowledge in your AI tasks.
If you need help with any of these techniques, then once again: go to ChatGPT and have it explain them to you, use the GitHub sources, and watch the gurus who will no doubt start posting videos on how to implement your PrivateGPT framework. If none of these steps work, contact this author and I can point you in the right direction.
#Artificial Assistant#Artificial Intelligence Assistant#AI tasks#prompt engineering#AI prompt training#AI task training#ai assistant#ai personal assistant
0 notes
The Art of Prompt Engineering: Shaping AI Conversations
It is an essential guide for aspiring and experienced AI prompt engineers. This book delves into the core principles of prompt engineering, providing readers with a comprehensive understanding of how to effectively communicate with and guide artificial intelligence systems through language prompts. It serves as an invaluable resource for those looking to enroll in prompt engineering courses, offering insights into the strategies and techniques used by experts in the field. Throughout the book, practical examples and case studies are presented, making it a practical companion for any prompt engineering course. Read more!
#prompt engineering#prompt engineering course#ai prompt engineer#prompt engineering courses#prompt engineering certification#prompt engineering training
0 notes
The Art and Science of Prompt Engineering - AI Training
Prompt engineering is a crucial aspect of leveraging natural language processing models like GPT-3.5 to achieve desired outputs. It involves crafting prompts or input queries that guide the model toward generating the desired responses. This process is both an art and a science, requiring a deep understanding of the model's capabilities and nuances.

Understanding the Model

Before diving into…
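The excerpt breaks off before its examples. As a generic sketch of the craft it describes (not taken from the linked post), one common technique is assembling a few-shot prompt programmatically: instruction first, worked examples next, then the new query:

```python
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [task, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Loved every minute of it.", "positive"),
     ("A total waste of time.", "negative")],
    "The plot dragged, but the ending was great.",
)
print(prompt)
```

Ending on a bare "Output:" invites the model to complete the pattern the examples establish, which is the core few-shot idea.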
0 notes
Google UK Featured Snippet SEO Blog post Case Study 2023 September Nina Payne White Label Freelancer
#googledance #googlecore #googlealgorithm #featuredsnippet https://www.seolady.co.uk/google-dance-core-update/
September 7th, 2023 - Google's update has finally concluded after its koolaid-smash entry in August. I created a new blog post in August 2023 so I could update the article in real time twice a week. I was aiming for a low-competition featured snippet at the top of Google page 1 in the SERPs, so you can understand what on-page SEO methods you'll need to plan. Also, timescales: featured snippets can take months or years of chasing before you hit the sweet ranking spot and enjoy extra traffic. This core update, confirmed by Google to have completed rolling out on September 7th, took over two weeks to fully implement. Volatility was seen in waves, with initial fluctuations around August 25th, another surge near August 30th, and final volatility in the days before it ended. As with previous updates focused on Google's core ranking systems, this latest August 2023 core update targeted all types of content and languages worldwide. It aims to reward high-quality, useful content while downgrading low-value pages. Sites generally see ranking increases or decreases of 20-80% or more. While Google continues to release algorithm changes, it has moved away from the old dance of flicking a switch; the gradual rollout method used in 2023 is meant to reduce large fluctuations and frustrations, avoiding the kind of overnight domain bombing in search seen when E-A-T and HTTPS-favoured URLs were first introduced into the algorithm. The volatility of Google's search results in the search engine's early days was caused by a number of factors, including the fact that Google's algorithm was still under development and the search engine was constantly being updated. The Google Dance was a major source of frustration for website owners and SEO professionals.
It was difficult to predict how a website's ranking would change, and often impossible to know why it had changed. The phrase "Google Dance" became more widely used in 2004, when an algorithm update made the effect more pronounced. The Dance began to subside in the mid-2000s as Google's algorithm grew more sophisticated, but the term is still used today, mostly by the older generation of online marketers, to refer to any sudden or unexplained change in Google's search results.
#Seo freelancer uk#featured snippet#how to claim a featured snippet#google featured snippet#eCommerce SEO UK#SEO Consultant UK#SEO Lady#Nina Payne#Search engine optimisation#SEO content AI#LLM SEO content#AI prompt engineering#SEO Copywriter UK#SEO Training London#AI SEO Training UK#AI SEO Training Birmingham#AI SEO Training Manchester#white label SEO freelancer#digital agency SEO freelancer#Vimeo
0 notes
whats wrong with ai?? genuinely curious <3
okay let's break it down. i'm an engineer, so i'm going to come at you from a perspective that may be different than someone else's.
i don't hate ai in every aspect. in theory, there are a lot of instances where ai can, in fact, help us do things a lot better. here's a few examples:
ai detecting cancer
ai sorting recycling
some practical housekeeping that gemini (google ai) can do
all of the above examples are ways in which ai works with humans to do things in parallel with us. it's not overstepping--it's sorting, using pixels at a micro-level to detect abnormalities that we as humans can not, fixing a list. these are all really small, helpful ways that ai can work with us.
everything else about ai works against us. in general, ai is a huge consumer of natural resources. every prompt that you put into character.ai, chatgpt? this wastes water + energy. it's not free. a machine somewhere in the world has to swallow your prompt, call on a model to feed data into it and process more data, and then has to generate an answer for you all in a relatively short amount of time.
that is crazy expensive. someone is paying for that, and if it isn't you with your own money, it's the strain on the power grid, the water that cools the computers, the A/C that cools the data centers. and you aren't the only person using ai. chatgpt alone gets millions of users every single day, with probably thousands of prompts per second, so multiply your personal consumption by millions, and you can start to see how the picture is becoming overwhelming.
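a back-of-envelope version of that multiplication (every number here is an invented placeholder, not a measured figure):

```python
# every figure here is an illustrative assumption, not a measured value
WH_PER_PROMPT = 0.3        # assumed energy per response, in watt-hours
PROMPTS_PER_USER = 10      # assumed prompts per user per day
DAILY_USERS = 10_000_000   # assumed daily users

daily_kwh = WH_PER_PROMPT * PROMPTS_PER_USER * DAILY_USERS / 1000
print(f"{daily_kwh:,.0f} kWh/day")  # → 30,000 kWh/day
```

whatever the real per-prompt figure is, the point stands: a tiny personal cost times millions of users stops being tiny.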
that is energy consumption alone. we haven't even talked about how problematic ai is ethically. there is currently no regulation in the united states about how ai should be developed, deployed, or used.
what does this mean for you?
it means that anything you post online is subject to data mining by an ai model (because why would they need to ask if there's no laws to stop them? wtf does it matter what it means to you to some idiot software engineer in the back room of an office making 3x your salary?). oh, that little fic you posted to wattpad that got a lot of attention? well now it's being used to teach ai how to write. oh, that sketch you made using adobe that you want to sell? adobe didn't tell you that anything you save to the cloud is now subject to being used for their ai models, so now your art is being replicated to generate ai images in photoshop, without crediting you (they have since said they don't do this...but privacy policies were never made to be human-readable, and i can't imagine they are the only company to sneakily try this). oh, your apartment just installed a new system that will use facial recognition to let their residents inside? oh, they didn't train their model with anyone but white people, so now all the black people living in that apartment building can't get into their homes. oh, you want to apply for a new job? the ai model that scans resumes learned from historical data that more men work that role than women (so the model basically thinks men are better than women), so now your resume is getting thrown out because you're a woman.
ai learns from data. and data is flawed. data is human. and as humans, we are racist, homophobic, misogynistic, transphobic, divided. so the ai models we train will learn from this. ai learns from people's creative works--their personal and artistic property. and now it's scrambling them all up to spit out generated images and written works that no one would ever want to read (because it's no longer a labor of love), and they're using that to make money. they're profiting off of people, and there's no one to stop them. they're also using generated images as marketing tools, to trick idiots on facebook, to make it so hard to be media literate that we have to question every single thing we see because now we don't know what's real and what's not.
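a toy sketch of that mechanism, showing how skewed training data becomes a skewed model (the 80/20 split is invented for illustration):

```python
from collections import Counter

# Toy illustration: if historical hires are skewed, a naive model that just
# learns "what past hires looked like" reproduces the skew as a score.
historical_hires = ["men"] * 80 + ["women"] * 20  # invented, illustrative split

learned = Counter(historical_hires)

def screen(applicant_group: str) -> float:
    """Score = how often this group appears among past hires."""
    return learned[applicant_group] / len(historical_hires)

print(screen("men"))    # → 0.8
print(screen("women"))  # → 0.2
```

real screening models are more complicated than a frequency count, but the failure mode is the same: the model has no notion of fairness, only of the data it was fed.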
the problem with ai is that it's doing more harm than good. and we as a society aren't doing our due diligence to understand the unintended consequences of it all. we aren't angry enough. we're too scared of stifling innovation that we're letting it regulate itself (aka letting companies decide), which has never been a good idea. we see it do one cool thing, and somehow that makes up for all the rest of the bullshit?
#yeah i could talk about this for years#i could talk about it forever#im so passionate about this lmao#anyways#i also want to point out the examples i listed are ONLY A FEW problems#there's SO MUCH MORE#anywho ai is bleh go away#ask#ask b#🐝's anons#ai
901 notes
When nobody else has got me I know the EFF has got me:
Even if a court concludes that a model is a derivative work under copyright law, creating the model is likely a lawful fair use. Fair use protects reverse engineering, indexing for search engines, and other forms of analysis that create new knowledge about works or bodies of works.
- How we think about Copyright and AI Art
A model is a collection of information about what has been expressed, specifically what patterns emerge when many works are compared. To the extent training the model requires reproducing copyrightable expression, that copying has a different purpose and message than the creative works being analyzed to generate these observations. And, as Google v. Oracle teaches, a use that facilitates the creation of new works is more likely to be fair. As in Google, a model can be used for a range of expression informed by user prompts, conveying messages devised by users.
- Comments of EFF to Copyright Office re Generative AI - 2023
713 notes
I am once again reminding people that Vocaloid and other singing synthesizers are not the same as those AI voice models made from celebrities and cartoon characters and the like.
Singing synthesizers are virtual instruments. Vocaloids use audio samples of real human voices the way some other virtual instruments will sample real guitars and pianos and the like, but they still need to be "played", so to speak, and getting good results requires a lot of manual manipulation of these samples within a synthesis engine.
Crucially, though, the main distinction here is consent. Commercial singing synthesizers are made by contracting vocalists to use their voices to create these sample libraries. They agree to the process and are compensated for their time and labor.
Some synthesizer engines like Vocaloid and Synthesizer V do have "AI" voice libraries, meaning that part of the rendering process involves using an AI model trained on data from the voice provider singing in order to ideally result in more naturalistic synthesis, but again, this is done with consent, and still requires a lot of manual input on the part of the user. They are still virtual instruments, not voice clones that auto-generate output based on prompts.
In fact, in the DIY singing synth community, making voice libraries out of samples you don't have permission to use is generally frowned upon, and is a violation of most DIY engines' terms of service, such as UTAU.
Please do research before jumping to conclusions about anything that remotely resembles AI generation. Also, please think through your anti-AI stance a little more than "technology bad"; think about who it hurts and when it hurts them so you can approach it from an informed, critical perspective rather than just something to be blindly angry about. You're not helping artists/vocalists/etc. if you aren't focused on combating actual theft and exploitation.
#long post#last post I'm making about this I'm tired of commenting on other posts about it lol#500#1k
2K notes
Prompt Engineering Course | Prompt Engineering AI training
Key Benefits of a Prompt Engineering Course
Prompt engineering has become a cornerstone of the growing field of AI, emphasizing the art of crafting precise prompts to optimize the outputs of AI models. It is a critical skill for anyone looking to maximize the potential of generative AI technologies. By enrolling in a Prompt Engineering Course, individuals and organizations can learn the strategies required to effectively interact with AI systems and achieve desired outcomes. Whether you're a professional or a student, understanding the nuances of prompt engineering can transform the way you approach AI-driven solutions. Similarly, Prompt Engineering AI Training provides hands-on experience and practical insights, enabling learners to refine their skills and adapt to real-world AI challenges.
A well-structured Prompt Engineering Course covers foundational concepts, advanced techniques, and best practices for creating effective prompts. This knowledge is essential in a world where AI applications are proliferating across industries like healthcare, education, finance, and marketing. Prompt Engineering AI Training ensures learners gain a comprehensive understanding of how to align AI model responses with specific goals, making it an invaluable asset in today’s competitive environment.
Enhancing AI Model Performance
One of the primary benefits of prompt engineering is its ability to enhance AI model performance. Crafting the right prompt ensures that AI systems like GPT models generate accurate, relevant, and high-quality responses. This skill is particularly important in fields such as natural language processing (NLP), where precision and context are critical. By leveraging the knowledge gained through a Prompt Engineering Course, professionals can design prompts that guide AI models to deliver superior results. Furthermore, Prompt Engineering AI Training empowers users to experiment with various prompt structures, enabling them to identify the most effective strategies for different use cases.
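As an illustration (not taken from any particular course), compare an underspecified prompt with one that pins down role, audience, and format; the second is the kind of structure this training teaches:

```python
# Two ways of asking for the same thing; the second constrains role,
# format, and audience, which typically yields more usable output.
vague = "Tell me about our product."

specific = (
    "You are a support writer. Summarize the product below for a first-time "
    "user in exactly 3 bullet points, each under 15 words.\n\n"
    "Product: a password manager with offline vaults and hardware-key support."
)

# A quick check a prompt reviewer might automate: does the prompt pin down
# role, format, and audience?
constraints = ["You are", "bullet points", "first-time"]
print(all(c in specific for c in constraints))  # → True
print(any(c in vague for c in constraints))     # → False
```

The product description and constraint list are invented for the example; the pattern of stating role, output format, and audience explicitly is the transferable part.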
Customization for Industry Needs
Prompt engineering allows for the customization of AI responses to meet specific industry requirements. For example, in the legal sector, prompts can be tailored to provide case law summaries or contract analysis. In marketing, prompts might focus on generating creative content or personalized customer interactions. A Prompt Engineering Course equips learners with the skills to fine-tune prompts for niche applications, ensuring the AI’s output aligns with industry demands. Similarly, Prompt Engineering AI Training emphasizes practical exercises that simulate real-world scenarios, helping participants develop expertise in creating context-specific prompts.
Boosting Efficiency and Productivity
By mastering prompt engineering, businesses can significantly boost their efficiency and productivity. Well-crafted prompts reduce the need for manual intervention, enabling AI systems to perform tasks like data analysis, content creation, and decision support with minimal human oversight. Through a Prompt Engineering Course, participants learn to design prompts that streamline workflows and automate repetitive tasks. The hands-on approach of Prompt Engineering AI Training further reinforces these concepts, providing learners with the tools to implement prompt engineering techniques in their day-to-day operations.
Facilitating Seamless AI Integration
Prompt engineering plays a pivotal role in ensuring seamless integration of AI technologies into existing systems. Effective prompts help bridge the gap between user intent and AI capabilities, enabling smoother interactions and better outcomes. A Prompt Engineering Course delves into strategies for aligning AI model behaviour with organizational goals, ensuring successful implementation. Prompt Engineering AI Training complements this by offering case studies and real-world applications, allowing learners to gain practical insights into AI integration challenges and solutions.
Enhancing Collaboration between Humans and AI
Prompt engineering not only optimizes AI outputs but also enhances collaboration between humans and AI systems. By crafting clear and precise prompts, users can effectively communicate their intentions to AI models, fostering a productive partnership. This capability is particularly valuable in creative fields, where AI can assist in brainstorming, drafting, and refining ideas. A Prompt Engineering Course provides participants with the skills to leverage AI as a collaborative tool, enabling innovative problem-solving and decision-making processes. Prompt Engineering AI Training further equips learners to harness AI’s potential in team-based projects, ensuring smooth collaboration between human expertise and machine intelligence.
Enabling Ethical AI Practices
Prompt engineering plays a critical role in ensuring that AI outputs align with ethical guidelines and societal values. By enrolling in a Prompt Engineering Course, individuals can learn to design prompts that mitigate biases, promote inclusivity, and ensure transparency in AI-generated content. Prompt Engineering AI Training emphasizes the importance of ethical prompt crafting, offering practical insights into addressing challenges such as misinformation and unintended consequences in AI interactions.
Driving Innovation across Industries
Prompt engineering is a catalyst for innovation, enabling industries to unlock new possibilities with AI technologies. From automating complex processes to enhancing customer experiences, prompt engineering opens doors to transformative applications. A Prompt Engineering Course equips learners with the knowledge and tools to drive innovation in their respective fields. By participating in Prompt Engineering AI Training, professionals gain hands-on experience in developing cutting-edge solutions, positioning themselves as pioneers in the AI-driven economy.
Enabling Continuous Learning and Improvement
The field of AI is constantly evolving, and so are the techniques for prompt engineering. By participating in a Prompt Engineering Course, learners stay up-to-date with the latest advancements and methodologies. Additionally, Prompt Engineering AI Training provides access to cutting-edge tools and resources, enabling participants to refine their skills and adapt to emerging AI trends.
Conclusion
Prompt engineering is an indispensable skill in the age of AI, offering numerous benefits such as enhanced model performance, industry-specific customization, improved efficiency, seamless integration, and opportunities for continuous improvement. Whether you’re a professional looking to advance your career or an organization aiming to optimize AI applications, a Prompt Engineering Course is the gateway to mastering this essential discipline. Paired with Prompt Engineering AI Training, learners gain the expertise needed to harness the full potential of AI technologies and drive innovation across various domains. Embracing prompt engineering today will prepare you for the challenges and opportunities of tomorrow, ensuring long-term success in the AI-driven world.
Visualpath is a leading institute for learning in Hyderabad. We provide Prompt Engineering courses online at an affordable cost.
Attend Free Demo
Call on – +91-9989971070
Blog: https://visualpathblogs.com/
What’s App: https://www.whatsapp.com/catalog/919989971070/
Visit: https://www.visualpath.in/prompt-engineering-course.html
#Prompt engineering course#Prompt engineering course in Hyderabad#Prompt engineering courses online#Prompt engineering training#Prompt engineering training in Hyderabad#Prompt engineering ai training in Hyderabad#Prompt engineering ai training#Prompt engineering ai courses online
Note
Please don’t use Midjourney; it steals art from pretty much every artist out there without any compensation. I didn’t know this at first and tried it, but during the creation process I saw watermarks and Getty Images logos (though I’m sure they’ve hidden that now), so it’s definitely stealing.
No, it isn't. And you've taken the wrong lesson from the Getty watermark issue.
AI training on public facing, published work is fair use. Any published piece could be located, examined, and learned from by a human artist. This does not require the permission of the owner of said work. A mechanical apparatus does not change this principle.
All we, as artists, own, are specific expressions. We do not own styles, ideas, concepts, plots, or tropes. We do not even own the work we create in a proper sense. All our work flows from the commons, and all of it flows back to it. IP is a limited patent on specific expressions, and what constitutes infringement is the end result of the creative process. What goes into it is irrelevant, and upending that process to put inspiration and reference as infringement is the end of art as we know it.
The Getty watermark issue is an example of overfitting, wherein a repetitive element in the dataset over-emphasizes specific features to the point of disrupting the system's attempts at the creation of novel images.
No one denies that the SD dataset is trained on images Getty claims to own, but Getty has so polluted the image search functions of the internet with their watermarked images that the idea of a Getty watermark has been picked up the same way the AI might pick up the idea of an eye or a tree branch. It is a systemic failure that Shutterstock and Getty can be so monopolistic and ubiquitous that a dataset trained on literally everything public-facing on the internet would be polluted with their watermarks.
Watermarks that, by the way, they add to public domain images, and that Google prioritizes over clean versions.
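The mechanics are easy to demonstrate with a toy example. Here's a minimal Python sketch (invented data, nothing from the actual SD training set) of how a repeated element comes to dominate the statistics a model learns:

```python
from collections import Counter

# Toy "training set": image captions, most polluted with a stock-photo
# watermark tag, mimicking how Getty's watermark saturates image search.
corpus = [
    "sunset over the ocean [WATERMARK]",
    "a cat on a windowsill [WATERMARK]",
    "mountain lake at dawn [WATERMARK]",
    "city skyline at night",
]

# Count which token most often appears at the end of a caption.
last_tokens = Counter(caption.split()[-1] for caption in corpus)
tag, count = last_tokens.most_common(1)[0]
print(tag, count)  # the watermark tag wins, 3 of 4 samples
```

A model trained on statistics like these "learns" the watermark the same way it learns any other frequently repeated visual idea.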
The lawsuits being brought against Midjourney and Stable Diffusion are copyright overreach being presented as a theft issue. The facts of the matter are not as the litigants state. The images aren't stored: the SD weights are a 4 GB file trained on 250 terabytes of images, roughly 4 bytes per image. It runs locally and does not reach out to image sources over the network. All you've got are mathematical patterns and ratios. I would go so far as to say that the class action suit is based on outright lies.
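A quick back-of-envelope check of that ratio (the 4 GB figure is from the post; the ~2.3 billion image count is an assumption, roughly the size of the LAION training set):

```python
# How many bytes of model weights exist per training image? If this number
# is tiny, the weights cannot possibly contain stored copies of the images.
weights_bytes = 4e9       # ~4 GB of Stable Diffusion weights
num_images = 2.3e9        # assumed LAION-scale training set
bytes_per_image = weights_bytes / num_images
print(f"{bytes_per_image:.1f} bytes of weights per training image")
```

A couple of bytes per image is not an archive; it's the residue of statistical patterns, which is the whole point.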
But for a moment, let's entertain the idea that what goes into a work, as inspiration, can be copyrighted. That styles can be stolen. That what goes in defines infringement, rather than what comes out. What happens then?
Well, the bad news is that if Stable Diffusion and Midjourney were shut down tomorrow, Stable Diffusion is already in the wild. It runs locally and it's user-trainable. In short, the genie isn't going back in the bottle. Plus, the way diffusion AI works, there's no way to trace a generation back to its sources. The weights don't work like that; the indexing alone would be larger than the entire set of stored patterns.
Well, good news: there's an AI for that. The current version is called CLIP Interrogator, and it works on everything, not just AI-generated images but any image. It can find which style an image most closely matches and reverse-engineer a prompt. It's crude now, but it will improve.
Now, you've already established that using the same patterns as another work is infringement. You've already established that inspiration is theft. And now there's a robot that tells lawyers who you draw like.
Sure, you can fight it in court, if it even goes to court. But who's to say they won't just staplegun that AI to a monetization-redirection bot like YouTube has going with their Content ID? Awesome T-shirt design you uploaded to your print-on-demand shop... too bad your art style resembles that of a cartoon from 1973 that Universal got as part of an acquisition, and they've claimed all your cash. Sure, you can file a DMCA counter-notice, but we all know how that goes.
And then there's this fantasy that upending the system would help artists. But who would "own" that style? Is that piece stealing the style of Stephen Silver, or Disney's Kim Possible(TM)? When you work for Disney their contracts say everything you make is theirs. Every doodle. Every drawing. If the styles are copyrightable, a company could hire an artist straight out of school, publish their work under work-for-hire, fire them, and then go after them for "stealing" the style they developed while working for said corp.
Not to mention that a handful of companies own so much media that it is going to be impossible to find an artist that hasn't been influenced by something under their control.
Oh, and that stock of source images that companies like Disney and Universal have? These kinds of lawsuits won't stop them from building AIs with that material that they "own". The power goes into corporate hands, they can cut staff to their heart's content, and everyone else is denied the ability to compete with them. Worst of all possible worlds.
Be careful what wishes you make when holding the copyright monkey's paw.
Text
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.
Nabla acknowledges that Whisper can confabulate, but it also reportedly erases original audio recordings “for data safety reasons.” This could cause additional issues, since doctors cannot verify accuracy against the source material. And deaf patients may be highly impacted by mistaken transcripts since they would have no way to know if medical transcript audio is accurate or not.
The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”
In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”
An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.
Why Whisper Confabulates
The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, "Researchers aren’t certain why Whisper and similar tools hallucinate," but that isn't true. We know exactly why Transformer-based AI models like Whisper behave this way.
Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
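That prediction step can be sketched in a few lines of Python. The probability table below is invented purely for illustration; a real model's distribution comes from billions of learned parameters, but the decoding logic is the same:

```python
# A toy next-token predictor: given the previous two tokens, pick the
# most probable continuation from a (here, hand-written) distribution.
probs = {
    ("thanks", "for"): {"watching": 0.9, "listening": 0.1},
    ("for", "watching"): {"<end>": 1.0},
}

def greedy_next(context):
    """Return the single most likely next token given the last two tokens."""
    dist = probs[context]
    return max(dist, key=dist.get)

print(greedy_next(("thanks", "for")))  # the statistically likeliest word wins
```

Note that nothing in this loop checks the prediction against reality; "most likely" and "accurate" only coincide when the training data happens to cover the case well.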
According to OpenAI in 2022, Whisper learned those statistical relationships from “680,000 hours of multilingual and multitask supervised data collected from the web.” But we now know a little more about the source. Given Whisper's well-known tendency to produce certain outputs like "thank you for watching," "like and subscribe," or "drop a comment in the section below" when provided silent or garbled inputs, it's likely that OpenAI trained Whisper on thousands of hours of captioned audio scraped from YouTube videos. (The researchers needed audio paired with existing captions to train the model.)
There's also a phenomenon called “overfitting” in AI models where information (in this case, text found in audio transcriptions) encountered more frequently in the training data is more likely to be reproduced in an output. In cases where Whisper encounters poor-quality audio in medical notes, the AI model will produce what its neural network predicts is the most likely output, even if it is incorrect. And the most likely output for any given YouTube video, since so many people say it, is “thanks for watching.”
In other cases, Whisper seems to draw on the context of the conversation to fill in what should come next, which can lead to problems because its training data could include racist commentary or inaccurate medical information. For example, if many examples of training data featured speakers saying the phrase “crimes by Black criminals,” when Whisper encounters a “crimes by [garbled audio] criminals” audio sample, it will be more likely to fill in the transcription with “Black."
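Here's a toy Python sketch of that fallback behavior, with invented counts standing in for training statistics:

```python
from collections import Counter

# Invented training counts: what followed "take the ..." in a hypothetical
# training set. When the audio for the next word is garbled, the decoder
# has nothing to go on except these statistics.
continuations = Counter({"umbrella": 5, "bag": 3, "knife": 1})

def fill_garbled(context_counts):
    # With no usable audio signal, the most frequent continuation wins,
    # regardless of what the speaker actually said.
    return context_counts.most_common(1)[0][0]

print(fill_garbled(continuations))
```

Swap in a training set where ugly associations are frequent and the same mechanism fills the gap with something harmful, which is exactly what the Cornell/Virginia researchers observed.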
In the original Whisper model card, OpenAI researchers wrote about this very phenomenon: "Because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself."
So in that sense, Whisper "knows" something about the content of what is being said and keeps track of the context of the conversation, which can lead to issues like the one where Whisper identified two women as being Black even though that information was not contained in the original audio. Theoretically, this erroneous scenario could be reduced by using a second AI model trained to pick out areas of confusing audio where the Whisper model is likely to confabulate and flag the transcript in that location, so a human could manually check those instances for accuracy later.
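A minimal sketch of that flagging idea, assuming a hypothetical per-segment confidence score (the segment texts and scores below are invented):

```python
# Flag transcript segments whose confidence falls below a threshold so a
# human can manually verify them against the audio.
segments = [
    ("He took the umbrella", 0.94),
    ("so he killed a number of people", 0.31),  # suspiciously low confidence
]

def flag_for_review(segments, threshold=0.5):
    return [text for text, confidence in segments if confidence < threshold]

print(flag_for_review(segments))  # only the low-confidence segment is flagged
```

This is only a mitigation, not a fix; it assumes confabulations reliably show up as low-confidence regions, which won't always hold.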
Clearly, OpenAI's advice not to use Whisper in high-risk domains, such as critical medical records, was a good one. But health care companies are constantly driven by a need to decrease costs by using seemingly "good enough" AI tools—as we've seen with Epic Systems using GPT-4 for medical records and UnitedHealth using a flawed AI model for insurance decisions. It's entirely possible that people are already suffering negative outcomes due to AI mistakes, and fixing them will likely involve some sort of regulation and certification of AI tools used in the medical field.
Text
Solar is a market for (financial) lemons
There are only four more days left in my Kickstarter for the audiobook of The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There's also bundles with Red Team Blues in ebook, audio or paperback.
Rooftop solar is the future, but it's also a scam. It didn't have to be, but America decided that the best way to roll out distributed, resilient, clean and renewable energy was to let Wall Street run the show. They turned it into a scam, and now it's in terrible trouble, which means we are in terrible trouble.
There's a (superficial) good case for turning markets loose on the problem of financing the rollout of an entirely new kind of energy provision across a large and heterogeneous nation. As capitalism's champions (and apologists) have observed since the days of Adam Smith and David Ricardo, markets harness together the work of thousands or even millions of strangers in pursuit of a common goal, without all those people having to agree on a single approach or plan of action. Merely dangle the incentive of profit before the market's teeming participants and they will align themselves towards it, like iron filings all snapping into formation towards a magnet.
But markets have a problem: they are prone to "reward hacking." This is a term from AI research: tell your AI that you want it to do something, and it will find the fastest and most efficient way of doing it, even if that method is one that actually destroys the reason you were pursuing the goal in the first place.
https://learn.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning
For example: if you use an AI to come up with a Roomba that doesn't bang into furniture, you might tell that Roomba to avoid collisions. However, the Roomba is only designed to register collisions with its front-facing sensor. Turn the Roomba loose and it will quickly hit on the tactic of racing around the room in reverse, banging into all your furniture repeatedly, while never registering a single collision:
https://www.schneier.com/blog/archives/2021/04/when-ais-start-hacking.html
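The Roomba story can be written out as a toy simulation, which makes the hack obvious (the setup is invented, just a caricature of the example above):

```python
# Reward only counts collisions the FRONT bumper registers, so driving
# backwards "hacks" the reward while doing exactly as much damage.
def run_episode(direction, steps=10):
    front_collisions = 0
    actual_collisions = 0
    for _ in range(steps):
        actual_collisions += 1           # it hits furniture either way
        if direction == "forward":
            front_collisions += 1        # only the front sensor registers
    reward = -front_collisions
    return reward, actual_collisions

print(run_episode("forward"))   # honest driving: terrible reward
print(run_episode("backward"))  # perfect reward, identical damage
```

The optimizer isn't malfunctioning; it is doing exactly what the reward says, which is the whole problem.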
This is sometimes called the "alignment problem." High-speed, probabilistic systems that can't be fully predicted in advance can very quickly run off the rails. It's an idea that pre-dates AI, of course – think of the Sorcerer's Apprentice. But AI produces these perverse outcomes at scale…and so does capitalism.
Many sf writers have observed the odd phenomenon of corporate AI executives spinning bad sci-fi scenarios about their AIs inadvertently destroying the human race by spinning off in some kind of paperclip-maximizing reward-hack that reduces the whole planet to grey goo in order to make more paperclips. This idea is very implausible (to say the least), but the fact that so many corporate leaders are obsessed with autonomous systems reward-hacking their way into catastrophe tells us something about corporate executives, even if it has no predictive value for understanding the future of technology.
Both Ted Chiang and Charlie Stross have theorized that the source of these anxieties isn't AI – it's corporations. Corporations are these equilibrium-seeking complex machines that can't be programmed, only prompted. CEOs know that they don't actually run their companies, and it haunts them, because while they can decompose a company into all its constituent elements – capital, labor, procedures – they can't get this model-train set to go around the loop:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
Stross calls corporations "Slow AI," a pernicious artificial life-form that acts like a pedantic genie, always on the hunt for ways to destroy you while still strictly following your directions. Markets are an extremely reliable way to find the most awful alignment problems – but by the time they've surfaced them, they've also destroyed the thing you were hoping to improve with your market mechanism.
Which brings me back to solar, as practiced in America. In a long Time feature, Alana Semuels describes the waves of bankruptcies, revealed frauds, and even confiscation of homeowners' houses arising from a decade of financialized solar:
https://time.com/6565415/rooftop-solar-industry-collapse/
The problem starts with a pretty common finance puzzle: solar pays off big over its lifespan, saving the homeowner money and insulating them from price-shocks, emergency power outages, and other horrors. But solar requires a large upfront investment, which many homeowners can't afford to make. To resolve this, the finance industry extends credit to homeowners (lets them borrow money) and gets paid back out of the savings the homeowner realizes over the years to come.
But of course, this requires a lot of capital, and homeowners still might not see the wisdom of paying even some of the price of solar and taking on debt for a benefit they won't even realize until the whole debt is paid off. So the government moved in to tinker with the markets, injecting prompts into the slow AIs to see if it could coax the system into producing a faster solar rollout – say, one that didn't have to rely on waves of deadly power-outages during storms, heatwaves, fires, etc, to convince homeowners to get on board because they'd have experienced the pain of sitting through those disasters in the dark.
The government created subsidies – tax credits, direct cash, and mixes thereof – in the expectation that Wall Street would see all these credits and subsidies that everyday people were entitled to and go on the hunt for them. And they did! Armies of fast-talking sales reps fanned out across America, ringing doorbells and sticking fliers in mailboxes, and lying like hell about how your new solar roof was gonna work out for you.
These hustlers tricked old and vulnerable people into signing up for arrangements that saw them saddled with ballooning debt payments (after a honeymoon period at a super-low teaser rate), backstopped by liens on their houses, which meant that missing a payment could mean losing your home. They underprovisioned the solar that they installed, leaving homeowners with sky-high electrical bills on top of those debt payments.
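To make the shape of that scam concrete, here's a toy payment schedule with invented numbers (nothing from the Time article, just an illustration of teaser-rate mechanics):

```python
# A teaser-rate loan: tiny payments during the honeymoon, ballooning after.
TEASER_MONTHS = 18
TEASER_PAYMENT = 80      # the number the sales rep quotes at the door
LATER_PAYMENT = 310      # the number buried in the contract

def total_paid(months):
    teaser = min(months, TEASER_MONTHS) * TEASER_PAYMENT
    later = max(0, months - TEASER_MONTHS) * LATER_PAYMENT
    return teaser + later

print(total_paid(18))    # what the pitch emphasizes
print(total_paid(240))   # the 20-year reality
```

The pitch is entirely about the first number; the lien on the house is entirely about the second.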
If this sounds familiar, it's because it shares a lot of DNA with the subprime housing bubble, where fast-talking salesmen conned vulnerable people into taking out predatory mortgages with sky-high rates that kicked in after a honeymoon period, promising buyers that the rising value of housing would offset any losses from that high rate.
These fraudsters knew they were acquiring toxic assets, but it didn't matter, because they were bundling up those assets into "collateralized debt obligations" – exotic black-box "derivatives" that could be sold onto pension funds, retail investors, and other suckers.
This is likewise true of solar, where the tax-credits, subsidies and other income streams that these new solar installations offgassed were captured and turned into bonds that were sold into the financial markets, producing an insatiable demand for more rooftop solar installations, and that meant lots more fraud.
Which brings us to today, where homeowners across America are waking up to discover that their power bills have gone up thanks to their solar arrays, even as the giant, financialized solar firms that supplied them are teetering on the edge of bankruptcy, thanks to waves of defaults. Meanwhile, all those bonds that were created from solar installations are ticking timebombs, sitting on institutions' balance-sheets, waiting to go blooie once the defaults cross some unpredictable threshold.
Markets are very efficient at mobilizing capital for growth opportunities. America has a lot of rooftop solar. But 70% of that solar isn't owned by the homeowner – it's owned by a solar company, which is to say, "a finance company that happens to sell solar":
https://www.utilitydive.com/news/solarcity-maintains-34-residential-solar-market-share-in-1h-2015/406552/
And markets are very efficient at reward hacking. The point of any market is to multiply capital. If the only way to multiply the capital is through building solar, then you get solar. But the finance sector specializes in making the capital multiply as much as possible while doing as little as possible on the solar front. Huge chunks of those federal subsidies were gobbled up by junk-fees and other financial tricks – sometimes more than 100%.
The solar companies would be in even worse trouble, but they also tricked all their victims into signing binding arbitration waivers that deny them the power to sue and force them to have their grievances heard by fake judges who are paid by the solar companies to decide whether the solar companies have done anything wrong. You will not be surprised to learn that the arbitrators are reluctant to find against their paymasters.
I had a sense that all this was going on even before I read Semuels' excellent article. We bought a solar installation from Treeium, a highly rated, giant Southern California solar installer. We got an incredibly hard sell from them to get our solar "for free" – that is, through these financial arrangements – but I'd just sold a book and I had cash on hand and I was adamant that we were just going to pay upfront. As soon as that was clear, Treeium's ardor palpably cooled. We ended up with a grossly defective, unsafe and underpowered solar installation that has cost more than $10,000 to bring into a functional state (using another vendor). I briefly considered suing Treeium (I had insisted on striking the binding arbitration waiver from the contract) but in the end, I decided life was too short.
The thing is, solar is amazing. We love running our house on sunshine. But markets have proven – again and again – to be an unreliable and even dangerous way to improve Americans' homes and make them more resilient. After all, Americans' homes are the largest asset they are apt to own, which makes them irresistible targets for scammers:
https://pluralistic.net/2021/06/06/the-rents-too-damned-high/
That's why the subprime scammers targets Americans' homes in the 2000s, and it's why the house-stealing fraudsters who blanket the country in "We Buy Ugly Homes" are targeting them now. Same reason Willie Sutton robbed banks: "That's where the money is":
https://pluralistic.net/2023/05/11/ugly-houses-ugly-truth/
America can and should electrify and solarize. There are serious logistical challenges related to sourcing the underlying materials and deploying the labor, but those challenges are grossly overrated by people who assume the only way we can approach them is through markets, those monkey's paw curses that always find a way to snatch profitable defeat from the jaws of useful victory.
To get a sense of how the engineering challenges of electrification could be met, read MacArthur fellow Saul Griffith's excellent popular engineering text Electrify:
https://pluralistic.net/2021/12/09/practical-visionary/#popular-engineering
And to really understand the transformative power of solar, don't miss Deb Chachra's How Infrastructure Works, where you'll learn that we could give every person on Earth the energy budget of a Canadian (like an American, but colder) by capturing just 0.4% of the solar rays that reach Earth's surface:
https://pluralistic.net/2023/10/17/care-work/#charismatic-megaprojects
But we won't get there with markets. All markets will do is create incentives to cheat. Think of the market for "carbon offsets," which were supposed to substitute markets for direct regulation, and which produced a fraud-riddled market for lemons that sells indulgences to our worst polluters, who go on destroying our planet and our future:
https://pluralistic.net/2021/04/14/for-sale-green-indulgences/#killer-analogy
We can address the climate emergency, but not by prompting the slow AI and hoping it doesn't figure out a way to reward-hack its way to giant profits while doing nothing. Founder and chairman of Goodleap, Hayes Barnard, is one of the 400 richest people in the world – a fortune built on scammers who tricked old people into signing away their homes for nonfunctional solar:
https://www.forbes.com/profile/hayes-barnard/?sh=40d596362b28
If governments are willing to spend billions incentivizing rooftop solar, they can simply spend billions installing rooftop solar – no Slow AI required.
Berliners: Otherland has added a second date (Jan 28 - TOMORROW!) for my book-talk after the first one sold out - book now!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/27/here-comes-the-sun-king/#sign-here
Back the Kickstarter for the audiobook of The Bezzle here!
Image:
Future Atlas/www.futureatlas.com/blog (modified)
https://www.flickr.com/photos/87913776@N00/3996366952
--
CC BY 2.0
https://creativecommons.org/licenses/by/2.0/
J Doll (modified)
https://commons.wikimedia.org/wiki/File:Blue_Sky_%28140451293%29.jpeg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#solar#financialization#energy#climate#electrification#climate emergency#bezzles#ai#reward hacking#alignment problem#carbon offsets#slow ai#subprime
Text
Certification in Prompt Engineering: Shape the Future of AI
Dive into the future of AI with our Certification in Prompt Engineering. This course is designed for aspiring AI prompt engineers, offering in-depth knowledge and practical skills in prompt engineering. Through our comprehensive prompt engineering courses, participants will learn how to effectively communicate with AI, crafting prompts that yield desired outcomes. Perfect for those looking to excel in the evolving field of AI, this certification is your key to becoming a proficient AI prompt engineer. Read more!
#prompt engineering#prompt engineering course#ai prompt engineer#prompt engineering courses#prompt engineering certification#prompt engineering training
Text
I use Bing to make my pics. Go to Bing's website, click Images, then click Create. Make an account if you need to; it's worth it. You can use a throwaway email. Use naturalistic language and separate phrases by commas; the closer to the top a phrase is, the more it's weighted.
I make this post because I get the strong sense the Bing party will be over soon. Every day the AI cottons on to phrases and chokes on things you used to be able to sneak past. Stuff that was safe and useful a day or two ago now results in a dreaded Prompt Blocked (too many of those and you'll get suspended; it hasn't happened to me, but it seems the threshold is low).
Safe prompts return four images. Fewer than four means the missing ones were "not safe." A prompt that processes but gives no results, or "egg dogs," is not too much of a cause for worry: retool and try again. Sometimes I don't even change anything, and the one result I get on the second try is such a freakshow that it was worth it.
A prompt that is rejected without processing IS a worry and you should probably abort, as explained. However, keep in mind it’s not just sexy stuff that can trip that wire. I once got a harsh warning because I put “Phoenix park, Dublin.” I deleted that and it ran no problem. Avoid any and all political controversy (sigh. I know).
Recommendations:
Using age, profession, and nationality can influence the look of the model very easily. “French rugby player” is a go to for me, for example. In general, “rugby player” is cheat code for “make him sexy.” The mind of the machine, what can I say.
Use descriptive phrases of action and location to engineer what you want to see. Be creative and be specific. “Reading a placard at a botanical garden,” for instance. It seems this allows more extreme kinky stuff to sneak past the filter. I usually start with “side view” because otherwise you only ever get models looking straight ahead.
Grey sweat pants has become a trigger (they caught on). However, “gray pants” still works and gives some very tasty results.
High social cachet locations and activities also seem to help. I got some WILD and EXTREME hyper images from adding "goofing around on stage at Shakespeare's Globe Theatre." Paired with "cast as a fairy in A Midsummer Night's Dream" and the mega bubble butts and thick thighs were BULGING, as long as you didn't mind a little tutu and fairy wings (the corny goofy masculine dude having fun facial expression that the earlier inclusion of "goofy" brought really worked in this instance). Most of these freaks were NAKED and I didn't even ask for that!!! (No dong of course, this is Microsoft still)
Mentions of glutes, butts, asses, etc. are very dangerous and usually get you in trouble. I found some traction with "gluteal mass" but it got wise, and "bulging lower back muscles" used to be interpreted as glutes but seemingly no longer. "Disturbingly huge hamstrings" or "jaw-droppingly large hamstrings" does work to get That Ass sometimes, I guess because the computer has a fuzzy idea of the posterior chain.
Also, “pecs” used to be safe but is now also on the danger list. “Pectoral muscles” still seems safe, for now.
ALWAYS include shoes or footwear if you don’t want a tight cropped image. Black athletic shoes, sandals, converse sneakers, dress shoes, fluevog shoes if you’re making a fancy beef heap. Avoid boots. “Leather boots” once got me in trouble with the filter all by itself.
Adding a personality or mood descriptor near the top seems to humanize and give vitality to the outcome. Intense, goofy, outgoing, exuberant, shy - these have all done wonderful work for me.
If you're into hyper / immobile muscle, imagining a scenario where they're constricted by space is useful. A prompt which just ("just") gives a realistic super heavyweight will give an appalling mockery of the human form if you add "crammed into the front seat of his car." Get creative. Elevators and doorways haven't worked well, but cars, trains, planes, buses, subways, and CHAIRS of all descriptions have done well. Also, scooters and bicycles and mopeds really bring out the super freaks for whatever reason.
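The ordering-and-comma advice above boils down to something like this toy helper (not anything Bing provides, just an illustration of how to assemble a prompt from the recommendations):

```python
# Join naturalistic phrases with commas, most important first, since
# earlier phrases are weighted more heavily. Example phrases are drawn
# from the tips in this post.
def build_prompt(*phrases):
    return ", ".join(phrases)

prompt = build_prompt(
    "side view",                               # composition up top
    "intense French rugby player",             # mood + the "make him sexy" cheat code
    "reading a placard at a botanical garden", # specific action and location
    "black athletic shoes",                    # footwear to avoid a tight crop
)
print(prompt)
```

Reordering the arguments is the whole tuning knob: move a phrase up and it pulls more weight.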
I write this to encourage you to go create some fleshcrafted sexy abominations of your own while it’s still possible. My sense is this party is only going to last a little while. I’ve already got more than 1000 images to share so, my larder is stocked to supply this blog for a while. But the more freaks we make while the freak factory is still in production, the better.
Get cooking!