#autoresponders
faranae · 2 months ago
Text
So, uh. This is real:
Tumblr media
This is 100% a thing that has happened.
Tumblr media
What even is this timeline?!
Tumblr media
I'm dying. Oh my stars.
Tumblr media
13K notes · View notes
calware-png · 2 months ago
Text
Tumblr media Tumblr media
saw this tweet and immediately knew what had to be done
6K notes · View notes
mdabutaleb · 9 months ago
Text
Tumblr media
ZapAI Revolutionary NexusAI Technology: Send unlimited “bulk messages” across WhatsApp to millions of mobile phones with a single click.
1 note · View note
slavhew · 8 months ago
Text
Tumblr media Tumblr media Tumblr media
hm
4K notes · View notes
starksmarketingllc · 1 year ago
Text
Automating Your Email Campaigns: Streamlining Your Marketing Workflow
Email marketing is a powerful tool for businesses to connect with their audience and drive engagement. However, managing email campaigns manually can be time-consuming and resource-intensive. Enter email automation—the process of automating repetitive tasks in your email marketing workflow. In this article, we will explore the benefits of automating your email campaigns and how they can…
Tumblr media
View On WordPress
0 notes
calware · 2 years ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
34K notes · View notes
0fallen0 · 2 months ago
Text
Tumblr media
LIL HALLL based on readysetrose's cosplay on tiktok!!
898 notes · View notes
pancakemolybdenum · 9 months ago
Text
Tumblr media
screw it. im posting this too
1K notes · View notes
runawayexpresstravel · 2 years ago
Photo
Tumblr media
Best Email Marketing Service marketing automation e-commerce marketing auto responders and more #EmailMarketing #EcommerceMarketing #Autoresponders #EmailWriting #EmailServices #BusinessEmail getresponse.com/?ab=XFRdVfwvax https://www.instagram.com/p/ClWV4L4OccA/?igshid=NGJjMDIxMWI=
1 note · View note
nostalgebraist · 2 years ago
Text
Honestly I'm pretty tired of supporting nostalgebraist-autoresponder. Going to wind down the project some time before the end of this year.
Posting this mainly to get the idea out there, I guess.
This project has taken an immense amount of effort from me over the years, and still does, even when it's just in maintenance mode.
Today some mysterious system update (or something) made the model no longer fit on the GPU I normally use for it, despite all the same code and settings on my end.
This exact kind of thing happened once before this year, and I eventually figured it out, but I haven't figured this one out yet. This problem consumed several hours of what was meant to be a relaxing Sunday. Based on past experience, getting to the bottom of the issue would take many more hours.
My options in the short term are to
A. spend (even) more money per unit time, by renting a more powerful GPU to do the same damn thing I know the less powerful one can do (it was doing it this morning!), or
B. silently reduce the context window length by a large amount (and thus the "smartness" of the output, to some degree) to allow the model to fit on the old GPU.
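Option B works because the KV cache a decoder-only transformer keeps during generation grows linearly with context length. A minimal back-of-the-envelope sketch, with illustrative LLaMA-13B-ish dimensions (not the bot's actual configuration):

```python
def kv_cache_bytes(ctx_len, n_layers=40, n_heads=40, head_dim=128,
                   bytes_per_elem=2):  # fp16 weights -> 2 bytes per element
    """Rough KV-cache size for a decoder-only transformer.

    Per token, each layer stores one key and one value vector,
    each of size n_heads * head_dim.
    """
    per_token = 2 * n_layers * n_heads * head_dim * bytes_per_elem
    return ctx_len * per_token

# Halving the context window halves this slice of GPU memory.
print(f"2048-token cache: {kv_cache_bytes(2048) / 2**30:.2f} GiB")  # 1.56 GiB
print(f"1024-token cache: {kv_cache_bytes(1024) / 2**30:.2f} GiB")  # 0.78 GiB
```

This ignores weights and activations, which dominate total usage, but it shows why shrinking the window is the one knob that frees memory without touching the model itself.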
Things like this happen all the time, behind the scenes.
I don't want to be doing this for another year, much less several years. I don't want to be doing it at all.
----
In 2019 and 2020, it was fun to make a GPT-2 autoresponder bot.
[EDIT: I've seen several people misread the previous line and infer that nostalgebraist-autoresponder is still using GPT-2. She isn't, and hasn't been for a long time. Her latest model is a finetuned LLaMA-13B.]
Hardly anyone else was doing anything like it. I wasn't the most qualified person in the world to do it, and I didn't do the best possible job, but who cares? I learned a lot, and the really competent tech bros of 2019 were off doing something else.
And it was fun to watch the bot "pretend to be me" while interacting (mostly) with my actual group of tumblr mutuals.
In 2023, everyone and their grandmother is making some kind of "gen AI" app. They are helped along by a dizzying array of tools, cranked out by hyper-competent tech bros with apparently infinite reserves of free time.
There are so many of these tools and demos. Every week it seems like there are a hundred more; it feels like every day I wake up and am expected to be familiar with a hundred more vaguely nostalgebraist-autoresponder-shaped things.
And every one of them is vastly better-engineered than my own hacky efforts. They build on each other, and reap the accelerating returns.
I've tended to do everything first, ahead of the curve, in my own way. This is what I like doing. Going out into unexplored wilderness, not really knowing what I'm doing, without any maps.
Later, hundreds of others will go to the same place. They'll make maps, and share them. They'll go there again and again, learning to make the expeditions systematically. They'll make an optimized industrial process of it. Meanwhile, I'll be locked into my own cottage-industry mode of production.
Being the first to do something means you end up eventually being the worst.
----
I had a GPT chatbot in 2019, before GPT-3 existed. I don't think Huggingface Transformers existed, either. I used the primitive tools that were available at the time, and built on them in my own way. These days, it is almost trivial to do the things I did, much better, with standardized tools.
I had a denoising diffusion image generator in 2021, before DALLE-2 or Stable Diffusion or Huggingface Diffusers. I used the primitive tools that were available at the time, and built on them in my own way. These days, it is almost trivial to do the things I did, much better, with standardized tools.
Earlier this year, I was (probably) one of the first people to finetune LLaMA. I manually strapped LoRA and 8-bit quantization onto the original codebase, figuring out everything the hard way. It was fun.
Just a few months later, and your grandmother is probably running LLaMA on her toaster as we speak. My homegrown methods look hopelessly antiquated. I think everyone's doing 4-bit quantization now?
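For readers unfamiliar with the 8-bit quantization mentioned above: the core idea is storing weights as int8 plus a per-tensor scale. A toy absmax round-trip in plain Python (real libraries like bitsandbytes are far more sophisticated; this is only the basic mechanism):

```python
def quantize_int8(weights):
    """Absmax quantization: map floats into [-127, 127] ints plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats; error is at most one quantization step."""
    return [x * scale for x in q]

w = [0.12, -0.5, 0.03, 0.27]
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
# w_hat approximates w to within one step (the scale), at 1/4 the
# storage of fp32 -- which is what lets big models fit on small GPUs.
```

4-bit schemes push the same tradeoff further: 16 levels instead of 255, half the memory again, a bit more reconstruction error.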
(Are they? I can't keep track anymore -- the hyper-competent tech bros are too damn fast. A few months from now the thing will probably be quantized to -1 bits, somehow. It'll be running in your phone's browser. And it'll be using RLHF, except no, it'll be using some successor to RLHF that everyone's hyping up at the time...)
"You have a GPT chatbot?" someone will ask me. "I assume you're using AutoLangGPTLayerPrompt?"
No, no, I'm not. I'm trying to debug obscure CUDA issues on a Sunday so my bot can carry on talking to a thousand strangers, every one of whom is asking it something like "PENIS PENIS PENIS."
Only I am capable of unplugging the blockage and giving the "PENIS PENIS PENIS" askers the responses they crave. ("Which is ... what, exactly?", one might justly wonder.) No one else would fully understand the nature of the bug. It is special to my own bizarre, antiquated, homegrown system.
I must have one of the longest-running GPT chatbots in existence, by now. Possibly the longest-running one?
I like doing new things. I like hacking through uncharted wilderness. The world of GPT chatbots has long since ceased to provide this kind of value to me.
I want to cede this ground to the LLaMA techbros and the prompt engineers. It is not my wilderness anymore.
I miss wilderness. Maybe I will find a new patch of it, in some new place, that no one cares about yet.
----
Even in 2023, there isn't really anything else out there quite like Frank. But there could be.
If you want to develop some sort of Frank-like thing, there has never been a better time than now. Everyone and their grandmother is doing it.
"But -- but how, exactly?"
Don't ask me. I don't know. This isn't my area anymore.
There has never been a better time to make a GPT chatbot -- for everyone except me, that is.
Ask the techbros, the prompt engineers, the grandmas running OpenChatGPT on their ironing boards. They are doing what I did, faster and easier and better, in their sleep. Ask them.
5K notes · View notes
alfiely-art · 4 months ago
Text
Crossover episode
Tumblr media Tumblr media Tumblr media Tumblr media
473 notes · View notes
uranian-umbrella · 2 months ago
Note
Omg wb Callie! I've got a question, how do you feel about robots?
Tumblr media
to be honest, i’m not a fan. u_u
313 notes · View notes
calware-png · 4 months ago
Text
Tumblr media
i reaaaaally like complex robot designs so i wanted to do a fairly detailed drawing of a robot body for hal. i'm also trying to re-introduce myself to anti-aliasing
color version:
Tumblr media
480 notes · View notes
cringefail-clown · 9 months ago
Text
Tumblr media Tumblr media Tumblr media
biblically accurate hal + some other hal doodles i did in paint
753 notes · View notes
slavhew · 8 months ago
Text
Tumblr media
talking to yourself again?
Tumblr media Tumblr media
alt + clean before i started playing with layers like theyre dolls
810 notes · View notes
offkilterkeys · 9 months ago
Text
Tumblr media
Crazy to think that from his perspective he went from thinking it would be funny if he cloned his own consciousness to then spending years tormented by the indignity of a cage wrought up by his own hubris.
914 notes · View notes