#llm subjects
Text
Pursue an advanced LLM course or Master of Laws at one of the top LLM colleges in India, or at a renowned institution offering an LLM in Bangalore. With flexible LLM durations and diverse LLM course subjects, explore a comprehensive LLM syllabus tailored to your specialization. Understand LLM eligibility, the streamlined LLM admission process, and LLM course fees, and earn a prestigious LLM degree in India to advance your legal career.
1 note · View note
mahendrareddy6595 · 4 months ago
Text
Explore the diverse range of LLM subjects and specializations offered in India, including Corporate Law, in this visual guide. Discover the best one-year LLM programs, top LLM courses, and leading BBA LLB colleges in Bangalore that provide a strong foundation for legal careers. Perfect for aspiring legal professionals seeking detailed insights into India's LLM education landscape.
0 notes
youzicha · 2 months ago
Text
transgenderer:
you guys remember the Future? greg egan makes me think about the Future. in the 90s it was common among the sciency to imagine specific discrete technologies that would make the lower-case-f future look like the Future. biotech and neurotech and nanotech. this was a pretty good prediction! the 80s looked like the Future to the 60s. not the glorious future, but certainly the Future. but i dont think now looks like the Future to the 00s. obviously its different. but come on. its not that different. and then finally we get a crazy tech breakthrough, imagegen and LLMs. and theyre really cool! but theyre not the Future. theyre…tractors, not airplanes, or even cars. they let us do things we could already do, with a machine. thats nice! that can cause qualitative changes! but its not the kind of breakthrough weve been hoping for. it doesnt make the world stop looking like the mid 2000s
(quote-replying since I wanted to riff on this part specifically)
I would like some more quantifiable test for this though! I always worry that it's a subjective thing based on when we transitioned from teenagers to adults; the things before are the consequential history inexorably leading up to the now, and the things afterwards are just some fads by the kids these days which you can safely ignore.
Was the 80s properly futuristic with respect to the 60s? Were they not supposed to have robots and space ships and nuclear power everywhere and a base on the moon? (Or, for that matter, full Communism? The Times They Are A-Changin', except they didn't.) I think I have seen some take that cyberpunk science fiction was a concerning sign that people had given up on the notion of progress and just imagined a grimy "more of the same"; a kind of cynical awakening from the sweeping dreams of science fiction a few decades earlier.
56 notes · View notes
hesperocyon-lesbian · 2 months ago
Note
since when are you pro-chat-gpt
I’m not lol, I’m ambivalent on it. I think it’s a tool that doesn’t have many practical applications because all it’s really good at doing is sketching out a likely response based on a prompt, which obv doesn’t take accuracy into account. So while it’s terrible as, say, a search engine, it’s actually fairly useful for something hollow and formulaic like a cover letter, which there are decent odds a human won’t read anyway
The thing about "AI", both LLMs and AI art, is that both the people hyping them up and the people fervently against them are annoying and wrong. It's not a plagiarism machine, because that's not what plagiarism is; half the time when someone says that, they mean it copied someone's style, which isn't remotely plagiarism.
Basically, the backlash against these pieces of tech centers around rhetoric of “laziness” which I feel like I shouldn’t need to say is ableist and a straightforwardly capitalistic talking point but I’ll say it anyway, or arguments around some kind of inherent “soul” in art created by humans, which, idk maybe that’s convincing if you’re religious but I’m not so I really couldn’t care less.
That and the fact that most of the stats about power usage are nonsense. People will gesture at the amount of power consumed by the servers that host AI without acknowledging that those AI programs sit alongside many other kinds of traffic hosted on those servers, and it isn't really possible to pick apart which one is consuming how much power, so they'll just cite the power consumption of the entire server.
Ultimately, like I said in my previous post, I think most of the output of LLMs and AI art tools is slop, and is generally unappealing to me. And that's something you can just say! You're allowed to subjectively dislike it without needing to moralize your reasoning! But the backlash is so extremely ableist and so obsessed with protecting copyright that it's almost as bad as the AI hype train, if not just as bad.
33 notes · View notes
wordsnbones · 2 years ago
Text
Master Willem was right, evolution without courage will be the end of our race.
92K notes · View notes
argumate · 4 days ago
Text
still chewing on neural network training and I don't like the way that the network architecture has to be figured out by trial and error, that feels like something that should be learned too!
evolutionary algorithms would achieve that but only at enormous cost: over 300,000 new human brains are created each day but new LLMs only come out at a rate of, what, a handful a year? a dozen?
(on top of biological evolution we also have cultural evolution, which LLMs can benefit from if they can access external resources, and also market mechanisms for resource allocation which LLMs are also subject to).
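A minimal sketch of what that kind of evolutionary search over architectures could look like, with a toy stand-in for the enormously expensive "train it and see" fitness step; the layer/width/activation knobs here are illustrative assumptions, not any real search space:

```python
import random

def random_architecture():
    # hypothetical search space: a few structural knobs chosen for illustration
    return {
        "layers": random.randint(2, 12),
        "width": random.choice([128, 256, 512, 1024]),
        "activation": random.choice(["relu", "gelu"]),
    }

def mutate(arch):
    # perturb one knob of a surviving candidate
    child = dict(arch)
    key = random.choice(list(child))
    if key == "layers":
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    elif key == "width":
        child["width"] = random.choice([128, 256, 512, 1024])
    else:
        child["activation"] = random.choice(["relu", "gelu"])
    return child

def fitness(arch):
    # stand-in for "train the network and measure validation performance" --
    # the step that makes evolutionary search so much costlier than tuning a
    # single hand-picked architecture by gradient descent
    return -(abs(arch["layers"] - 6) + abs(arch["width"] - 512) / 128)

population = [random_architecture() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]  # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best architecture found:", max(population, key=fitness))
```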
22 notes · View notes
preservationofnormalcy · 3 months ago
Text
[Director Council 9/11/24 Meeting. 5/7 Administrators in Attendance]
Attending: 
[Redacted] Walker, OPN Director
Orson Knight, Security
Ceceilia, Archival & Records
B. L. Z. Bubb, Board of Infernal Affairs
Harrison Chou, Abnormal Technology
Josiah Carter, Psychotronics
Ambrose Delgado, Applied Thaumaturgy
Subject: Dr. Ambrose Delgado re: QuantumSim 677777 Project Funding 
Transcript begins below:
Chou:] Have you all read the simulation transcript?
Knight:] Enough that I don’t like whatever the hell this is.
Chou:] I was just as surprised as you were when it mentioned you by name.
Knight:] I don’t like some robot telling me I’m a goddamned psychopath, Chou. 
Cece:] Clearly this is all a construction. Isn’t that right, Doctor?
Delgado:] That’s…that’s right. As some of you may know, uh. Harrison?
Chou:] Yes, we have a diagram.
Delgado:] As some of you may know, our current models of greater reality construction indicate that many-worlds is only partially correct. Not all decisions or hinge points have any potential to “split” - in fact, uh, very few of them do, by orders of magnitude, and even fewer of those actually cause any kind of split into another reality. For a while, we knew that the…energy created when a decision could cause a split was observable, but being as how it only existed for a few zeptoseconds we didn’t have anything sensitive enough to decode what we call “quantum potentiality.” 
Carter:] The possibility matrix of something happening without it actually happening.
Delgado:] That’s right. Until, uh, recently. My developments in subjective chronomancy have borne fruit in that we were able to stretch those few zeptoseconds to up to twenty zeptoseconds, which has a lot of implications for–
Cece:] Ambrose. 
Delgado:] Yes, on task. The QuantumSim model combines cutting-edge quantum potentiality scanning with lowercase-ai LLM technology, scanning the, as Mr Carter put it, possibility matrix and extrapolating a potential “alternate universe” from it.
Cece:] We’re certain that none of what we saw is…real in any way?
Chou:] ALICE and I are confident of that. A realistic model, but no real entity was created during Dr Delgado’s experiment.
Bubb:] Seems like a waste of money if it’s not real.
Delgado:] I think you may find that the knowledge gained during these simulations will become invaluable. Finding out alternate possibilities, calculating probability values, we could eventually map out the mathematical certainty of any one action or event. 
Chou:] This is something CHARLEMAGNE is capable of, but thus far he has been unwilling or unable to share it with us. 
Delgado:] You’ve been awfully quiet, Director. 
DW:] Wipe that goddamned smile off your face, Delgado.
DW:] I would like to request a moment with Doctor Delgado. Alone. You are all dismissed.
Delgado:] ….uh, ma’am. Director, did I say something–
DW:] I’m upset, Delgado. I nearly just asked if you were fucking stupid, but I didn’t. Because I know you’re not. Clearly, obviously, you aren’t. 
Delgado:] I don’t underst–
DW:] You know that you are one of the very few people on this entire planet that know anything about me? Because of the station and content of your work, you are privy to certain details only known by people who walked out that door right now.
DW:] Did you think for a SECOND about how I’d react to this?
Delgado:] M-ma’am, I….I thought you’d…appreciate the ability to–
DW:] I don’t. I want this buried, Doctor. 
Delgado:] I…unfortunately I–
DW:] You published the paper to ETCetRa.
Delgado:] Yes. As…as a wizard it’s part of my rites that I have to report any large breakthroughs to ETCetRa proper. The paper is going through review as we speak.
DW:] Of course. 
Delgado:] Ma’am, I’m sorry, that’s not something I can–
DW:] I’d never ask you directly to damage our connection to the European Thaumaturgical Centre, Doctor. 
Delgado:] Of course. I see.
DW:] You’ve already let Schrödinger’s cat out of the bag. We just have to wait and see whether it’s alive or dead.
Delgado:] Box, director.
DW:] What? 
Delgado:] Schrödinger’s cat, it was in a–
DW:] Shut it down, Doctor. I don’t want your simulation transcript to leave this room. 
Delgado:] Yes. Of course, Director. I’ll see what I can do.
DW:] Tell my secretary to bring me a drink. 
Delgado:] Of course. 
DW:] ...one more thing, Doctor. How did it get so close?
Delgado:] Ma'am?
DW:] Eerily close.
Delgado:] I don't–
DW:] We called it the Bureau of Abnormal Affairs.
Delgado:] ....what–
DW:] You are dismissed, Doctor Delgado.
44 notes · View notes
samueldays · 10 months ago
Text
Contra Yishan: Google's Gemini issue is about racial obsession, not a Yudkowsky AI problem.
@yishan wrote a thoughtful thread:
Google’s Gemini issue is not really about woke/DEI, and everyone who is obsessing over it has failed to notice the much, MUCH bigger problem that it represents. [...] If you have a woke/anti-woke axe to grind, kindly set it aside now for a few minutes so that you can hear the rest of what I’m about to say, because it’s going to hit you from out of left field. [...] The important thing is how one of the largest and most capable AI organizations in the world tried to instruct its LLM to do something, and got a totally bonkers result they couldn’t anticipate. What this means is that @ESYudkowsky has a very very strong point. It represents a very strong existence proof for the “instrumental convergence” argument and the “paperclip maximizer” argument in practice.
See full thread at link.
Gemini's code is private and Google's PR flacks tell lies in public, so it's hard to prove anything. Still I think Yishan is wrong and the Gemini issue is about the boring old thing, not the new interesting thing, regardless of how tiresome and cliched it is, and I will try to explain why.
I think Google deliberately set out to blackwash their image generator, and did anticipate the image-generation result, but didn't anticipate the degree of hostile reaction from people who objected to the blackwashing.
Steven Moffat provided a representative example of the blackwashing mindset when he remarked:
"We've kind of got to tell a lie. We'll go back into history and there will be black people where, historically, there wouldn't have been, and we won't dwell on that. "We'll say, 'To hell with it, this is the imaginary, better version of the world. By believing in it, we'll summon it forth'."
Moffat was the subject of some controversy when he produced a Doctor Who episode (Thin Ice) featuring a visit to 1814 Britain that looked far less white than the historical record indicates that 1814 Britain was, and he had the Doctor claim in-character that history has been whitewashed.
This is an example that serious, professional, powerful people believe that blackwashing is a moral thing to do. When someone like Moffat says that a blackwashed history is better, and Google Gemini draws a blackwashed history, I think the obvious inference is that Google Gemini is staffed by Moffat-like people who anticipated this result, wanted this result, and deliberately worked to create this result.
The result is only "bonkers" to outsiders who did not want this result.
Yishan says:
It demonstrates quite conclusively that with all our current alignment work, that even at the level of our current LLMs, we are absolutely terrible at predicting how it’s going to execute an intended set of instructions.
No. It is not at all conclusive. "Gemini is staffed by Moffats who like blackwashing" is a simple alternate hypothesis that predicts the observed results. Random AI dysfunction or disalignment does not predict the specific forms that happened at Gemini.
One tester found that when he asked Gemini for "African Kings" it consistently returned all dark-skinned black royalty, despite the existence of light-skinned Mediterranean Africans such as Copts, but when he asked Gemini for "European Kings" it mixed in black, Asian, and Native American figures in regalia.
Gemini is not randomly off-target, nor accurate in one case and wrong in the other, it is specifically thumb-on-scale weighted away from whites and towards blacks.
If there's an alignment problem here, it's the alignment of the Gemini staff. "Woke" and "DEI" and "CRT" are some of the names for this problem, but the names attract flames and disputes over definition. Rather than argue names, I hear that Jack K. at Gemini is the sort of person who asserts "America, where racism is the #1 value our populace seeks to uphold above all".
He is delusional, and I think a good step to fixing Gemini would be to fire him and everyone who agrees with him. America is one of the least racist countries in the world, with so much screaming about racism partly because of widespread agreement that racism is a bad thing, which is what makes the accusation threatening. As Moldbug put it:
The logic of the witch hunter is simple. It has hardly changed since Matthew Hopkins’ day. The first requirement is to invert the reality of power. Power at its most basic level is the power to harm or destroy other human beings. The obvious reality is that witch hunters gang up and destroy witches. Whereas witches are never, ever seen to gang up and destroy witch hunters. In a country where anyone who speaks out against the witches is soon found dangling by his heels from an oak at midnight with his head shrunk to the size of a baseball, we won’t see a lot of witch-hunting and we know there’s a serious witch problem. In a country where witch-hunting is a stable and lucrative career, and also an amateur pastime enjoyed by millions of hobbyists on the weekend, we know there are no real witches worth a damn.
But part of Jack's delusion, in turn, is a deliberate linguistic subversion by the left. Here I apologize for retreading culture war territory, but as far as I can determine it is true and relevant, and it being cliche does not make it less true.
US conservatives, generally, think "racism" is when you discriminate on race, and this is bad, and this should stop. This is the well established meaning of the word, and the meaning that progressives implicitly appeal to for moral weight.
US progressives have some of the same, but have also widespread slogans like "all white people are racist" (with academic motte-and-bailey switch to some excuse like "all complicit in and benefiting from a system of racism" when challenged) and "only white people are racist" (again with motte-and-bailey to "racism is when institutional-structural privilege and power favors you" with a side of America-centrism, et cetera) which combine to "racist" means "white" among progressives.
So for many US progressives, ending racism takes the form of eliminating whiteness and disfavoring whites and erasing white history and generally behaving the way Jack and friends made Gemini behave. (Supposedly. They've shut it down now and I'm late to the party, I can't verify these secondhand screenshots.)
Bringing in Yudkowsky's AI theories adds no predictive or explanatory power that I can see. Occam's Razor says to rule out AI alignment as a problem here. Gemini's behavior is sufficiently explained by common old-fashioned race-hate and bias, which there is evidence for on the Gemini team.
Poor Yudkowsky. I imagine he's having a really bad time now. Imagine working on "AI Safety" in the sense of not killing people, and then the Google "AI Safety" department turns out to be a race-hate department that pisses away your cause's goodwill.
---
I do not have a Twitter account. I do not intend to get a Twitter account, it seems like a trap best stayed out of. I am yelling into the void on my comment section. Any readers are free to send Yishan a link, a full copy of this, or remix and edit it to tweet at him in your own words.
61 notes · View notes
mariacallous · 3 months ago
Text
One phrase encapsulates the methodology of nonfiction master Robert Caro: Turn Every Page. The phrase is so associated with Caro that it’s the name of the recent documentary about him and of an exhibit of his archives at the New York Historical Society. To Caro it is imperative to put eyes on every line of every document relating to his subject, no matter how mind-numbing or inconvenient. He has learned that something that seems trivial can unlock a whole new understanding of an event, provide a path to an unknown source, or unravel a mystery of who was responsible for a crisis or an accomplishment. Over his career he has pored over literally millions of pages of documents: reports, transcripts, articles, legal briefs, letters (45 million in the LBJ Presidential Library alone!). Some seemed deadly dull, repetitive, or irrelevant. No matter—he’d plow through, paying full attention. Caro’s relentless page-turning has made his work iconic.
In the age of AI, however, there’s a new motto: There’s no need to turn pages at all! Not even the transcripts of your interviews. Oh, and you don’t have to pay attention at meetings, or even attend them. Nor do you need to read your mail or your colleagues’ memos. Just feed the raw material into a large language model and in an instant you’ll have a summary to scan. With OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude as our wingmen, summary reading is what now qualifies as preparedness.
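Mechanically, the workflow the article describes is close to a single API call. A minimal, hedged sketch (the model name, prompt, and input file are assumptions for illustration, not any particular product's pipeline):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

def summarize(raw_text: str) -> str:
    """Boil raw material down to a few bullet points to scan."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "Summarize the document in five bullet points."},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content

# e.g. an interview transcript you never quite get around to reading in full
with open("interview_transcript.txt") as f:  # hypothetical file
    print(summarize(f.read()))
```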
LLMs love to summarize, or at least that’s what their creators set them about doing. Google now “auto-summarizes” your documents so you can “quickly parse the information that matters and prioritize where to focus.” AI will even summarize unread conversations in Google Chat! With Microsoft Copilot, if you so much as hover your cursor over an Excel spreadsheet, PDF, Word doc, or PowerPoint presentation, you’ll get it boiled down. That’s right—even the condensed bullet points of a slide deck can be cut down to the … more essential stuff? Meta also now summarizes the comments on popular posts. Zoom summarizes meetings and churns out a cheat sheet in real time. Transcription services like Otter now put summaries front and center, and the transcription itself in another tab.
Why the orgy of summarizing? At a time when we’re only beginning to figure out how to get value from LLMs, summaries are one of the most straightforward and immediately useful features available. Of course, they can contain errors or miss important points. Noted. The more serious risk is that relying too much on summaries will make us dumber.
Summaries, after all, are sketchy maps and not the territory itself. I’m reminded of the Woody Allen joke where he zipped through War and Peace in 20 minutes and concluded, “It’s about Russia.” I’m not saying that AI summaries are that vague. In fact, the reason they’re dangerous is that they’re good enough. They allow you to fake it, to proceed with some understanding of the subject. Just not a deep one.
As an example, let’s take AI-generated summaries of voice recordings, like what Otter does. As a journalist, I know that you lose something when you don’t do your own transcriptions. It’s incredibly time-consuming. But in the process you really know what your subject is saying, and not saying. You almost always find something you missed. A very close reading of a transcript might allow you to recover some of that. Having everything summarized, though, tempts you to look at only the passages of immediate interest—at the expense of unearthing treasures buried in the text.
Successful leaders have known all along the danger of such shortcuts. That’s why Jeff Bezos, when he was CEO of Amazon, banned PowerPoint from his meetings. He famously demanded that his underlings produce a meticulous memo that came to be known as a “6-pager.” Writing the 6-pager forced managers to think hard about what they were proposing, with every word critical to executing, or dooming, their pitch. The first part of a Bezos meeting is conducted in silence as everyone turns all 6 pages of the document. No summarizing allowed!
To be fair, I can entertain a counterargument to my discomfort with summaries. With no effort whatsoever, an LLM does read every page. So if you want to go beyond the summary, and you give it the proper prompts, an LLM can quickly locate the most obscure facts. Maybe one day these models will be sufficiently skilled to actually identify and surface those gems, customized to what you’re looking for. If that happens, though, we’d be even more reliant on them, and our own abilities might atrophy.
Long-term, summary mania might lead to an erosion of writing itself. If you know that no one will be reading the actual text of your emails, your documents, or your reports, why bother to take the time to dig up details that make compelling reading, or craft the prose to show your wit? You may as well outsource your writing to AI, which doesn’t mind at all if you ask it to churn out 100-page reports. No one will complain, because they’ll be using their own AI to condense the report to a bunch of bullet points. If all that happens, the collective work product of a civilization will have the quality of a third-generation Xerox.
As for Robert Caro, he’s years past his deadline on the fifth volume of his epic LBJ saga. If LLMs had been around when he began telling the president’s story almost 50 years ago—and he had actually used them and not turned so many pages—the whole cycle probably would have been long completed. But not nearly as great.
21 notes · View notes
transgenderer · 2 months ago
Text
reread this article recently and it has me thinking about time, and the Future.
first, time: i think time is an interesting example of where physics meets philosophy. presentism feels like the natural perspective. the past used to exist, and now it doesnt, only now exists. thats what makes now now. but relativity means there is no consistent global now. and far away things definitely exist. so. thats confusing. i think there's probably some idealist take on this, that each observer generates their reality so its fine if they dont agree. but if you think reality exists full independent of the self, its very weird.
i mean, you can say that all time exists in a big block, where each slice is related to every other by certain rules, and the rules happen to favor the formation (not formation in "formation over time" sense but like if you have a vein of gold ore, thats a formation in the earth. but its not like the gold "formed" from the left to the right, even though it has a start and an end if you progress through slices left to right) of structures which have a very particular relation to themselves along a particular axis, a relation that causes them to form "subjective experiences" that in each slice only contain information-in-a-retrievable-sense (im pretty sure this is rigorously definable using noise, retrievable information is noise-robust) originating in one half of the light double-cone at that point in space time. and then the perception of "now" is just like, the end of that information-relation-structure? im not sure it satisfies me though
second, Future: you guys remember the Future? greg egan makes me think about the Future. in the 90s it was common among the sciency to imagine specific discrete technologies that would make the lower-case-f future look like the Future. biotech and neurotech and nanotech. this was a pretty good prediction! the 80s looked like the Future to the 60s. not the glorious future, but certainly the Future. but i dont think now looks like the Future to the 00s. obviously its different. but come on. its not that different. and then finally we get a crazy tech breakthrough, imagegen and LLMs. and theyre really cool! but theyre not the Future. theyre...tractors, not airplanes, or even cars. they let us do things we could already do, with a machine. thats nice! that can cause qualitative changes! but its not the kind of breakthrough weve been hoping for. it doesnt make the world stop looking like the mid 2000s
13 notes · View notes
whitelionspirit · 1 year ago
Text
LL Medic meeting Optimus Prime officially for the first time was an interesting occasion. Too nervous, you had a few drinks to calm the nerves before Rodimus brought him over.
LLM: *Tipsy* It's a pleasure to meet Rodimus' father. Did you know Roddy also calls me daddy?
Optimus: ….
Rodimus: *facepalms and stutters as he tries to change the subject.*
Whirl: *begging Rewind to film this*
Ratchet and Drift: *Trying to remove you from the situation.*
Optimus: You refer to me as your dad?
85 notes · View notes
strangestcase · 2 days ago
Text
ngl I love how Caine is such a foil to AM and TADC isn't even an IHNMAIMS adaptation, just a homage. Caine treats his human subjects terribly but not out of malice or spite. He's a petty, strange, Willy Wonka-esque, child-like children's entertainer put in charge of a bunch of adults, and he doesn't know how to tend to their needs. As Zooble and Jax put it, it's not like he'd punish the players, but it's still better to be on his good side. He's incoherent. And why would he be? He's an AI, and AI is notoriously bad at coherence. I feel there's some commentary on the modern use of AI (well, LLMs, generators, etc) and what is assumed of them
10 notes · View notes
apocalymons · 4 months ago
Text
Are you digikin? Are you looking for a place to hang out with other digikin to chat about kinstuff? Then consider joining The Net Ocean, a brand-new digikin Discord server!
Features:
PluralKit, for anyone who needs it, and a channel specifically for headmate introduction threads!
Appy for verification--- no need to worry about randos coming into the server!
React roles to help show other people what canons you connect to, including roles for more obscure sources like the Xros Wars Manga, Appmon, and original media!
Rules:
Please be respectful of fellow members of the server. This server is intended to be a comfortable space for all kinds of digikin. Respect everyone's pronouns and identities. Failure to do so will result in administrative action depending on the severity of the issue. There is no tolerance for things such as racism, ableism, homophobia, or transphobia here.
This server is only for adults (18+). This is for ease of moderation. If you are found to be lying about your age and are a minor, you will be banned from the server, no exceptions. That said, don't be needlessly crude. If the need for NSFW chats arises, keep riskier topics to it. No one wants to be jump scared by messages that could get them fired.
We are an explicitly plural inclusive server. That means that, to join, you must be accepting of different system structures, be they traumagenic, endogenic, or anything between or without. We do not consider "endo-neutral" to be an inclusive stance, as this often implies an expectation for endogenic systems to "prove" their existence.
This server is run by a system that would generally be considered an "anti." If this bothers you, we would not recommend this server for you. This warning is provided for your comfort as much as ours, as we understand that those labels or the perception of them can come across as antagonistic.
Piggybacking off of that, while in this server please refrain from engaging in discourse subjects. The hard rules set above exist for a reason, but we are all complex individuals, and discourse can often devolve quickly into fights. We understand that the presence of Rule 4 to begin with is, in itself, discourse, but further discussion in the server of these topics (such as shipping discourse) is discouraged.
Similarly, let's try to leave heavier topics at the door. For the time being, outside of memory vent channels, we will not be opening vent channels in this server. Right now the world is a mess, but let's not make this a space to talk about it.
Try to stay relatively on-topic to a given channel. If you find yourself drifting from the subject of one channel to conversation that would be better suited in another, consider moving to continue talking. If you find yourself unable to determine what is "on topic" for a channel, refer to its description or ask a moderator in the suggestions chat for clarification. A moderator may redirect particularly off-topic conversation, just so that anyone that might want to talk on-topic is able to do so.
It should go without saying, but please be courteous to the privacy of your peers. Do not screenshot messages or profiles of other server members without permission. Similarly, we request that you do not share another member's art or writing without explicit permission.
Any art shared in the chat which you did not make must be credited. Anyone found claiming someone else's artwork (either via uncredited artwork or use of content-generation LLMs, colloquially known as "AI" generators) will be given a warning or banned, depending on the severity of the infraction.
If we ever have to add rules, we'll make an announcement to let everyone know what's changed.
If you are using PluralKit, please ensure you have a System Tag enabled and appended to your root account's username. This is for ease of moderation. Warnings are applied on an account basis; therefore, if one headmate in a system breaks a rule, the system as a whole will receive disciplinary action.
Well? What are you waiting for?
Let's dive!
11 notes · View notes
my-castles-crumbling · 2 months ago
Note
hi cas it's career advice anon
one of my best friends kind of immediately clocked onto the fact that i was career advice anon and called me out on it and we talked about the whole thing and she's insanely supportive so yeah that's helped a lot
i've decided to go into law. there are some technicalities that still need to be resolved which can only be done when the time comes so i'm not too worried about those but since i'll be an international student who did my master's in the US, it'll be harder to get into big law firms as a mainstream associate instead of an international role because they'd prefer students with a JD which would be insanely costly for me to get and my parents wouldn't support that idea either so it kinda feels like i'm stuck between a rock and a hard place yk? i mean, it's not impossible, just extremely competitive because they hire fewer LLMs as mainstream associates than they hire JD holders and if i'm doing better than everyone else, i might still stand a chance and i fully intend to do that but i kinda worry that there are going to be people in the same situation as me but doing better (which might very well be the case), in which case my future looks kinda extremely uncertain when it comes to what my main goal is.
i haven't told my parents yet but i'll need to soon because my country's universities have entrance tests depending on what field we want to work in and i need to start preparing for the one for law as well as take up the necessary subjects in eleventh and twelfth grade. it's just kinda a big shift yk and i'm not entirely sure how my parents will take it since they're both in stem and stem oriented people. like, they won't be unsupportive, but jokes have been made before about how my grades in humanities were better than those in science. not to mention, i'll be doing exactly what my subtly misogynistic grandparents want me to do by dropping out of stem and i don't really want to give them that satisfaction but at the same time, i'm done with stem.
sorry for ranting, hope you're having a good day
Hi!
Honestly, I think it's important for you to remember that you're doing what YOU want. And for real, if your parents are disappointed that you want to go into LAW? That's really fucked, because like...lawyers are very smart and can make huge amounts of money! They should be super proud that you have such goals. Fuck anyone who judges you, and remember that you're trying to do something for yourself.
I'm proud of you for doing some soul searching and realizing that this is what you want. I'm sending you all the luck!
7 notes · View notes
theliterarywolf · 9 months ago
Note
Unfortunately most spell checkers already use some form of LLM, commonly (and incorrectly) referred to as AI, so there's no escaping them there. Though really, that's what LLMs were originally created for anyway.
I would say that LLMs are more tool-centric than the common applications of AI which are 'do this work for me'; that would be the difference.
Because, as I mentioned before, if it were matters of using AI just as a means of assisting work --
For example, someone posted a comic on Twitter today that was basically:
How to Use AI in Art
"You! Machine! Draw this picture for me!!" X
"AI: Here are references for the subject you are currently drawing as well as color palettes that may fit the composition of your piece!" ✔
-- That would be fine! So an LLM offering more applicable words based on what you're writing is fine because, ultimately, you as the writer have to make the final decision on what sounds better for your writing.
The problem is more 'Hey, ChatGPT/Grammarly-Utilizing-ChatGPT-applications, write me a paragraph about metaphors in American poetry'. And the result is a paragraph that scrapes from several different essays while improperly citing resources that haven't been current for three decades.
23 notes · View notes
nostalgebraist · 2 years ago
Text
gpt-4 prediction: it won't be very useful
Word on the street says that OpenAI will be releasing "GPT-4" sometime in early 2023.
There's a lot of hype about it already, though we know very little about it for certain.
----------
People who like to discuss large language models tend to be futurist/forecaster types, and everyone is casting their bets now about what GPT-4 will be like. See e.g. here.
It would accord me higher status in this crowd if I were to make a bunch of highly specific, numerical predictions about GPT-4's capabilities.
I'm not going to do that, because I don't think anyone (including me) really can do this in a way that's more than trivially informed. At best I consider this activity a form of gambling, and at worst it will actively mislead people once the truth is known, blessing the people who "guessed lucky" with an undue aura of deep insight. (And if enough people guess, someone will "guess lucky.")
Why?
There has been a lot of research since GPT-3 on the emergence of capabilities with scale in LLMs, most notably BIG-Bench. Besides the trends that were already obvious with GPT-3 -- on any given task, increased scale is usually helpful and almost never harmful (cf. the Inverse Scaling Prize and my Section 5 here) -- there are not many reliable trends that one could leverage for forecasting.
Within the bounds of "scale almost never hurts," anything goes:
Some tasks improve smoothly, some are flatlined at zero then "turn on" discontinuously, some are flatlined at some nonzero performance level across all tested scales, etc. (BIG-Bench Fig. 7)
Whether a model "has" or "doesn't have" a capability is very sensitive to which specific task we use to probe that capability. (BIG-Bench Sections 3.4.3, 3.4.4)
Whether a model "can" or "can't do" a single well-defined task is highly sensitive to irrelevant details of phrasing, even for large models. (BIG-Bench Section 3.5)
It gets worse.
Most of the research on GPT capabilities (including BIG-Bench) uses the zero/one/few-shot classification paradigm, which is a very narrow lens that arguably misses the real potential of LLMs.
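For readers who haven't stared at these benchmarks, the paradigm being criticized looks roughly like the sketch below; the task and exemplars are made up for illustration, not drawn from BIG-Bench:

```python
# A made-up example of the few-shot classification paradigm: "capability" is
# operationalized as whether the model completes the prompt with the expected
# label. Small changes to the phrasing or the choice of exemplars can move the
# measured score, which is part of why it is such a narrow lens.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: The food was cold and the service was slow.
Sentiment: negative

Review: Absolutely wonderful, I would come back every week.
Sentiment: positive

Review: The plot dragged but the acting saved it.
Sentiment:"""

# The model's continuation ("positive" or "negative") is compared against a
# gold label; accuracy over many such prompts becomes the task's score.
```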
And, even if you fix some operational definition of whether a GPT "has" a given capability, the order in which the capabilities emerge is unpredictable, with little apparent relation to the subjective difficulty of the task. It took more scale for GPT-3 to learn relatively simple arithmetic than it did for it to become a highly skilled translator across numerous language pairs!
GPT-3 can do numerous impressive things already . . . but it can't understand Morse Code. The linked post was written before the release of text-davinci-003 or ChatGPT, but neither of those can do Morse Code either -- I checked.
On that LessWrong post asking "What's the Least Impressive Thing GPT-4 Won't be Able to Do?", I was initially tempted to answer "Morse Code." This seemed like as safe a guess as any, since no previous GPT was able to do it, and it's certainly very unimpressive.
But then I stopped myself. What reason do I actually have to register this so-called prediction, and what is at stake in it, anyway?
I expect Morse Code to be cracked by GPTs at some scale. What basis do I have to expect this scale is greater than GPT-4's scale (whatever that is)? Like everything, it'll happen when it happens.
If I register this Morse Code prediction, and it turns out I am right, what does that imply about me, or about GPT-4? (Nothing.) If I register the prediction, and it turns out I am wrong, what does this imply . . . (Nothing.)
The whole exercise is frivolous, at best.
----------
So, here is my real GPT-4 prediction: it won't be very useful, and won't see much practical use.
Specifically, the volume and nature of its use will be similar to what we see with existing OpenAI products. There are companies using GPT-3 right now, but there aren't that many of them, and they mostly seem to be:
Established companies applying GPT in narrow, "non-revolutionary" use cases that play to its strengths, like automating certain SEO/copywriting tasks or synthetic training data generation for smaller text classifiers
Startups with GPT-3-centric products, also mostly "non-revolutionary" in nature, mostly in SEO/copywriting, with some in coding assistance
GPT-4 will get used to do serious work, just like GPT-3. But I am predicting that it will be used for serious work of roughly the same kind, in roughly the same amounts.
I don't want to operationalize this idea too much, and I'm fine if there's no fully unambiguous way to decide after the fact whether I was right or not. You know basically what I mean (I hope), and it should be easy to tell whether we are basically in a world where
Businesses are purchasing the GPT-4 enterprise product and getting fundamentally new things in exchange, like "the API writes good, publishable novels," or "the API performs all the tasks we expect of a typical junior SDE" (I am sure you can invent additional examples of this kind), and multiple industries are being transformed as a result
Businesses are purchasing the GPT-4 enterprise product to do the same kinds of things they are doing today with existing OpenAI enterprise products
However, I'll add a few terms that seem necessary for the prediction to be non-vacuous:
I expect this to be true for at least 1 year after the release of the commercial product. (I have no particular attachment to this timeframe, I just need a timeframe.)
My prediction will be false in spirit if the only limit on transformative applications of GPT-4 is monetary cost. GPT-3 is very pricey now, and that's a big limiting factor on its use. But even if its cost were far, far less, there would be other limiting factors -- primarily, that no one really knows how to apply its capabilities in the real world. (See below.)
(The monetary cost thing is why I can't operationalize this beyond "you know what I mean." It involves not just what actually happens, but what would presumably happen at a lower price point. I expect the latter to be a topic of dispute in itself.)
----------
Why do I think this?
First: while OpenAI is awe-inspiring as a pure research lab, they're much less skilled at applied research and product design. (I don't think this is controversial?)
When OpenAI releases a product, it is usually just one of their research artifacts with an API slapped on top of it.
Their papers and blog posts brim with a scientist's post-discovery enthusiasm -- the (understandable) sense that their new thing is so wonderfully amazing, so deeply veined with untapped potential, indeed so temptingly close to "human-level" in so many ways, that -- well -- it surely has to be useful for something! For numerous things!
For what, exactly? And how do I use it? That's your job to figure out, as the user.
But OpenAI's research artifacts are not easy to use. And they're not only hard for novices.
This is the second reason -- intertwined with the first, but more fundamental.
No one knows how to use the things OpenAI is making. They are new kinds of machines, and people are still making basic philosophical category mistakes about them, years after they first appeared. It has taken the mainstream research community multiple years to acquire the most basic intuitions about skilled LLM operation (e.g. "chain of thought") which were already known, long before, to the brilliant internet eccentrics who are GPT's most serious-minded user base.
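("Chain of thought" here is the by-now-familiar trick of prompting the model to produce intermediate reasoning before its answer. A minimal, hypothetical illustration of the difference:)

```python
# Two ways of asking the same question. The second, chain-of-thought style --
# one of the intuitions about skilled LLM operation referred to above -- shows
# the model a worked example with explicit intermediate steps, which it then
# tends to imitate on new questions.

direct_prompt = (
    "Q: A bat and a ball cost $1.10 together, and the bat costs $1.00 more "
    "than the ball. How much does the ball cost?\nA:"
)

chain_of_thought_exemplar = (
    "Q: A bat and a ball cost $1.10 together, and the bat costs $1.00 more "
    "than the ball. How much does the ball cost?\n"
    "A: Let's think step by step. If the ball costs x, the bat costs x + 1.00, "
    "so x + (x + 1.00) = 1.10, meaning 2x = 0.10 and x = 0.05. "
    "The ball costs $0.05.\n"
)
# Prepending exemplars like the second one (or even just appending
# "Let's think step by step") measurably changes what the same model can do.
```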
Even if these things have immense economic potential, we don't know how to exploit it yet. It will take hard work to get there, and you can't expect used car companies and SEO SaaS purveyors to do that hard work themselves, just to figure out how to use your product. If they can't use it, they won't buy it.
It is as though OpenAI had discovered nuclear fission, and then went to sell it as a product, as follows: there is an API. The API has thousands of mysterious knobs (analogous to the opacity and complexity of prompt programming etc). Any given setting of the knobs specifies a complete design for a fission reactor. When you press a button, OpenAI constructs the specified reactor for you (at great expense, billed to you), and turns it on (you incur the operating expenses). You may, at your own risk, connect the reactor to anything else you own, in any manner of your choosing.
(The reactors come with built-in safety measures, but they're imperfect and one-size-fits-all and opaque. Sometimes your experimentation starts to get promising, and then a little pop-up appears saying "Whoops! Looks like your reactor has entered an unsafe state!", at which point it immediately shuts off.)
It is possible, of course, to reap immense economic value from nuclear fission. But if nuclear fission were "released" in this way, how would anyone ever figure out how to capitalize on it?
We, as a society, don't know how to use large language models. We don't know what they're good for. We have lots of (mostly inadequate) ways of "measuring" their "capabilities," and we have lots of (poorly understood, unreliable) ways of getting them to do things. But we don't know where they fit in to things.
Are they for writing text? For conversation? For doing classification (in the ML sense)? And if we want one of these behaviors, how do we communicate that to the LLM? What do we do with the output? Do they work well in conjunction with some other kind of system? Which kind, and to what end?
In answer to these questions, we have numerous mutually exclusive ideas, which all come with deep implementation challenges.
To anyone who's taken a good look at LLMs, they seem "obviously" good for something, indeed good for numerous things. But they are provably, reliably, repeatably good for very few things -- not so much (or not only) because of their limitations, but because we don't know how to use them yet.
This, not scale, is the current limiting factor on putting LLMs to use. If we understood how to leverage GPT-3 optimally, it would be more useful (right now) than GPT-4 will be (in reality, next year).
----------
Finally, the current trend in LLM techniques is not very promising.
Everyone -- at least, OpenAI and Google -- is investing in RLHF. The latest GPTs, including ChatGPT, are (roughly) the last iteration of GPT with some RLHF on top. And whatever RLHF might be good for, it is not a solution for our fundamental ignorance of how to use LLMs.
Earlier, I said that OpenAI was punting the problem of "figure out how to use this thing" to the users. RLHF effectively punts it, instead, to the language model itself. (Sort of.)
RLHF, in its currently popular form, looks like:
Some humans vaguely imagine (but do not precisely nail down the parameters of) a hypothetical GPT-based application, a kind of super-intelligent Siri.
The humans take numerous outputs from GPT, and grade them on how much they feel like what would happen in the "super-intelligent Siri" fantasy app.
The GPT model is updated to make the outputs with high scores more likely, and the ones with low scores less likely.
The result is a GPT model which often talks a lot like the hypothetical super-intelligent Siri.
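A compressed, purely illustrative sketch of that loop, with toy stand-ins for the policy, the raters, and the gradient step (not OpenAI's actual pipeline, which trains a separate reward model on human rankings and then optimizes the policy with PPO under a KL penalty):

```python
import random

def generate(prompt):
    # stand-in for sampling a completion from the current policy (the GPT model)
    return random.choice(["crisp, Siri-like answer", "rambling digression"])

def human_grade(prompt, completion):
    # step 2: raters score how much the output feels like the imagined assistant
    return 1.0 if "Siri-like" in completion else -1.0

def update_policy(prompt, completion, reward):
    # step 3: nudge the model so high-reward completions become more likely
    # (in practice, a policy-gradient step on the model's log-probabilities)
    print(f"reward {reward:+.1f} -> adjust p({completion!r} | {prompt!r})")

prompts = ["How do I reset my router?", "Tell me a joke about printers."]
for _ in range(3):  # a few rounds of the grade-and-update loop
    for prompt in prompts:
        completion = generate(prompt)
        reward = human_grade(prompt, completion)
        update_policy(prompt, completion, reward)
```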
This looks like an easier-to-use UI on top of GPT, but it isn't. There is still no well-defined user interface.
Or rather, the nature of the user interface is being continually invented by the language model, anew in every interaction, as it asks itself "how would (the vaguely imagined) super-intelligent Siri respond in this case?"
If a user wonders "what kinds of things is it not allowed to do?", there is no fixed answer. All there is is the LM, asking itself anew in each interaction what the restrictions on a hypothetical fantasy character might be.
It is role-playing a world where the user's question has an answer. But in the real world, the user's question does not have an answer.
If you ask ChatGPT how to use it, it will roleplay a character called "Assistant" from a counterfactual world where "how do I use Assistant?" has a single, well-defined answer. Because it is role-playing -- improvising -- it will not always give you the same answer. And none of the answers are true, about the real world. They're about the fantasy world, where the fantasy app called "Assistant" really exists.
This facade does make GPT's capabilities more accessible, at first blush, for novice users. It's great as a driver of adoption, if that's what you want.
But if Joe from Midsized Normal Mundane Corporation wants to use GPT for some Normal Mundane purpose, and can't on his first try, this role-play trickery only further confuses the issue.
At least in the "design your own fission reactor" interface, it was clear how formidable the challenge was! RLHF does not remove the challenge. It only obscures it, makes it initially invisible, makes it (even) harder to reason about.
And this, judging from ChatGPT (and Sparrow), is apparently what the makers of LLMs think LLM user interfaces should look like. This is probably what GPT-4's interface will be.
And Joe from Midsized Normal Mundane Corporation is going to try it, and realize it "doesn't work" in any familiar sense of the phrase, and -- like a reasonable Midsized Normal Mundane Corporation employee -- use something else instead.
ETA: I forgot to note that OpenAI expects dramatic revenue growth in 2023 and especially in 2024. Ignoring a few edge case possibilities, either their revenue projection will come true or the prediction in this post will, but not both. We'll find out!
251 notes · View notes