#Limiting
funkyglitch · 1 month ago
Text
u ever jus wonder why you cant do shit right?
ya me too. what will it take for me to gain the ability to fly 😞😞😞 without airplanes or flying vehicles
2 notes
kai-ninjago · 2 years ago
Text
Amity: how did your first T shot go, Luz?
Luz, still shaking like a leaf after breaking a blood vessel: I’m a pro at it!
19 notes
howifeltabouthim · 1 year ago
Text
These were impositions, defining categories that failed to recognize the muddle that is us, human beings.
Siri Hustvedt, from The Blazing World
3 notes
lavideenrose · 2 years ago
Quote
It's so limiting to say what something is. It becomes nothing more than that.
David Lynch
3 notes
mykl · 2 years ago
Photo
Tumblr media
The best way I know to complete a to-do list is limiting it to three monosyllabic tasks.
3 notes
spitblaze · 5 months ago
Text
[guy who doesnt watch shows voice] yeah ive been meaning to watch that show
59K notes
oidheadh-con-culainn · 7 months ago
Text
as an aroace person with limited sexual experience, no interest in watching porn, and poor sex ed as a teen, there IS something simultaneously funny and vaguely tragic about being 28 adult years old and realising how extremely tiny your frame of reference is for genitalia and deciding you should expand this to better understand bodies (yours and others). and then you're just there like "okay so what the fuck do I even google right now, anyway"
53K notes
fly-chicken · 21 days ago
Text
A pragmatic and surprisingly comforting perspective on the second Trump presidency, from the ACLU
***Apologies if this is how you found out the 2024 election results***
Blacked out part is my name.
Tumblr media
I’m not going to let this make me give up. It’s disheartening, and today I will wallow, probably tomorrow too
AND
I will continue to do my part in my community to spread the activism and promote change for the world I want to live in. I want to change the world AND help with the dishes.
And I won’t let an orange pit stain be what stops me from trying to be better.
A link to donate to the ACLU, if you’re able and inclined. I know I am.
26K notes
william-snekspeare · 5 months ago
Text
Tumblr media
alternate take on my other steve comic.
help me afford new socks
31K notes
bored-boring-and-tired · 6 months ago
Text
i propose that instead of pride month, we have queer year (queer people are treated like actual people all year long)
edit: @ilackhumanqualities wins best addition to this post
Tumblr media
34K notes
killertimes · 9 days ago
Text
Tumblr media
Dual Action WIDOWSPEAK, breiter audio (April 2022)
0 notes
beetlejuce · 3 months ago
Text
Tumblr media
CHAPPELL ROAN winning Best New Artist at the 2024 MTV Video Music Awards
14K notes
spacecolonie · 2 months ago
Text
Tumblr media
live and learn !
11K notes
ckarlova · 1 month ago
Text
Tumblr media
It’s time for a new champion to rise…
10K notes
jcmarchi · 4 months ago
Text
📽 [Webinar] Beat GPT-4 with a Small Model and 10 Rows of Data*
Small language models (SLMs) are increasingly rivaling the performance of large foundation models like GPT-4. However, the need for high-quality datasets for fine-tuning these models presents a persistent challenge. 
On August 8th, Predibase will showcase an approach for fine-tuning an SLM to outperform GPT-4, using synthetic data generated from only 10 real-world examples.
The Data Dilemma
While fine-tuning SLMs with high-quality datasets can consistently produce task-specific models that outperform large foundation models, many teams face a significant barrier: assembling sufficient training data. This challenge has often been a bottleneck in AI development, limiting the ability of teams to develop production-ready models quickly and cost-effectively.
Synthetic Data Through Distillation
Our upcoming webinar introduces an innovative solution to this persistent challenge. By leveraging the capabilities of large language models such as GPT-4 and Llama-3.1-405B, we’ve developed techniques to generate high-quality synthetic data for fine-tuning task-specific SLMs. This approach enables teams to achieve GPT-4 level results with as few as 10 real-world examples, dramatically reducing the data collection burden and accelerating the path to production.
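To make the distillation idea concrete, here is a minimal sketch of the generation step: feed a large teacher model the handful of real seed rows and ask it for new labeled examples in the same format. The task (support-ticket classification), prompt wording, parsing, and model name below are illustrative assumptions, not Predibase's actual pipeline.

```python
# Hypothetical sketch of distillation-style synthetic data generation.
# Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment.
import json

from openai import OpenAI

client = OpenAI()

# Ten real seed rows (hypothetical task: support-ticket classification).
seed_examples = [
    {"text": "My card was charged twice for one order.", "label": "billing"},
    {"text": "The app crashes every time I open settings.", "label": "bug"},
    # ... eight more real rows ...
]

def generate_synthetic_rows(seeds, n_rows=200, model="gpt-4"):
    """Ask the teacher model for n_rows new examples in the seeds' JSON format."""
    prompt = (
        "Here are labeled examples of a text-classification task:\n"
        + "\n".join(json.dumps(s) for s in seeds)
        + f"\n\nGenerate {n_rows} new, diverse examples in the same "
        "JSON-lines format. Vary wording and topics; keep labels consistent."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    rows = []
    for line in response.choices[0].message.content.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            rows.append(json.loads(line))  # keep only well-formed rows
        except json.JSONDecodeError:
            continue  # teacher output is noisy; skip unparseable lines
    return rows

synthetic_dataset = generate_synthetic_rows(seed_examples)
```

In practice you would also deduplicate the generated rows and spot-check label quality before training on them.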
In this comprehensive session, we’ll delve into the following key areas:
The Data Insufficiency Challenge: We’ll explore the persistent issue of insufficient training data in AI development, discussing the limitations it imposes on teams working with SLMs.
Synthetic Data Generation Techniques: Our ML team will demonstrate methods for generating high-quality synthetic data based on as few as 10 data rows using Llama-3.1-405B and GPT-4. 
Achieving GPT-4 Level Performance: We’ll show how SLMs fine-tuned with synthetic data can match or exceed the performance of GPT-4 across various tasks. Attendees will gain insights into the fine-tuning process, hyperparameter optimization, and performance evaluation metrics (a minimal fine-tuning sketch follows this list).
Streamlining the Development Process: We’ll discuss strategies for significantly reducing data collection efforts and accelerating the journey from concept to production. This includes techniques for identifying key seed examples, automating the synthetic data generation pipeline, and optimizing the fine-tuning workflow.
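Continuing from the rows generated in the sketch above, here is a correspondingly minimal fine-tuning step. It uses the Hugging Face transformers and datasets libraries as a stand-in; the model choice and hyperparameters are assumptions, not the workflow the webinar presents.

```python
# Hypothetical fine-tuning sketch: train a small classifier on the synthetic rows.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

labels = ["billing", "bug"]  # label set from the hypothetical seed examples
label2id = {name: i for i, name in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(labels)
)

def tokenize(batch):
    # Convert raw text to token ids and map string labels to integer ids.
    encoded = tokenizer(batch["text"], truncation=True, padding="max_length")
    encoded["labels"] = [label2id[name] for name in batch["label"]]
    return encoded

train_ds = Dataset.from_list(synthetic_dataset).map(
    tokenize, batched=True, remove_columns=["text", "label"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finetune", num_train_epochs=3),
    train_dataset=train_ds,
)
trainer.train()
```

The held-out evaluation against GPT-4 is the part the webinar itself covers; nothing in this sketch guarantees those results.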
Join us on August 8th
Whether you’re an AI practitioner, startup founder, or enterprise decision-maker, this session will equip you with the knowledge to use synthetic data and SLMs effectively. Join us to explore how synthetic data and fine-tuned SLMs can unblock your AI initiatives. Register today.
*This post was written by Will Van Eaton from Predibase. We thank Predibase for their insights and ongoing support of TheSequence.
0 notes