Due to recent experiences, I am feeling an urge to make an anti-drug-style PSA except it's warning impressionable machine-learning-curious teens to never, ever try a thing called "Huggingface transformers Trainer"
Not. Even. Once.
#and don't even get me started on “unsloth”
#this week i learned what “unsloth” actually does when you import it and... man.
#i thought i'd seen the worst of “hacky brittle 'it-just-works' (by doing the most cursed shit imaginable) ML python code” but no.
#no. unsloth was Worse
#and huggingface Trainer is bad enough by itself
#did you know it has 131 (one hundred and thirty-one!) config arguments and yet it cannot log *more than one loss number at once*
#(for like multitask training or whatever)
#i don't just mean it's hard to do - i mean its logging mechanism is built from the ground up on the assumption you would never do this.
#you'd have to rewrite a bunch of internals to get it working - i.e. basically write a new nontrivial feature on HF's behalf
#and just writing your own damn training loop is easier than that lol
#it's not that hard kids. take it from me. dataset + dataloader + model(*args) + loss.backward() + opt.step() + opt.zero_grad(). that's it (sketch below)
#it'll take you 30 minutes and save you a billion hours down the road
#i do not understand computers
#(is a category tag)
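
For the record, here's roughly what that recipe looks like in PyTorch. This is a minimal sketch, not anyone's production code: the toy model, the toy data, and the two task losses are invented here purely to show the shape of the loop, including logging two loss numbers at once (the thing the tags say Trainer can't do).

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# toy stand-ins: 64 samples, 10 features, one regression target and one binary target
xs = torch.randn(64, 10)
ys_reg = torch.randn(64, 1)
ys_cls = torch.randint(0, 2, (64,)).float()

dataset = TensorDataset(xs, ys_reg, ys_cls)               # dataset
loader = DataLoader(dataset, batch_size=8, shuffle=True)  # dataloader

# toy two-headed model: first output unit for regression, second for classification
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for epoch in range(3):
    for x, y_reg, y_cls in loader:
        out = model(x)                                     # model(*args)
        loss_reg = F.mse_loss(out[:, :1], y_reg)
        loss_cls = F.binary_cross_entropy_with_logits(out[:, 1], y_cls)
        loss = loss_reg + loss_cls
        loss.backward()                                    # loss.backward()
        opt.step()                                         # opt.step()
        opt.zero_grad()                                    # opt.zero_grad()
        # logging as many loss numbers as you like is one print statement away
        print(f"epoch {epoch}: loss_reg {loss_reg.item():.4f}, loss_cls {loss_cls.item():.4f}")

Swap the print for tensorboard, wandb, or whatever you use; the point is that every number in the loop is just a Python variable you can do anything with, no internals to rewrite.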