#artificial intelligence thesis
Text
Artificial Intelligence Thesis: Unraveling the Future of Technology
Artificial intelligence (AI) has transformed multiple industries, ranging from healthcare to finance. As our dependency on AI continues to grow, it is crucial to examine the potential of this technology and its implications for our future. This blog delves into the concept of an artificial intelligence thesis, examining its significance, key components, and the vast scope it offers for research and innovation.
Artificial intelligence thesis: A gateway to the future
An artificial intelligence thesis centers on a research topic that explores the depths of AI: its complex algorithms, machine learning models, and real-world applications. Such a thesis is a valuable resource, enabling students and researchers to engage with and actively contribute to the dynamic field of artificial intelligence.
Why is an artificial intelligence thesis important?
The exploration of artificial intelligence through a thesis holds immense significance for multiple reasons. First, it enables you to acquire a comprehensive understanding of AI, allowing you to unravel the mysteries behind its functionality and potential. Second, it provides a platform for individuals to introduce novel ideas, conduct research, and contribute to the advancement of AI technologies. Finally, an artificial intelligence thesis opens career opportunities for students in various industries, as companies seek professionals with in-depth knowledge and expertise in the field.
Key components of the artificial intelligence thesis
To effectively tackle an artificial intelligence thesis, a thorough understanding of the fundamental elements is important. These components form the foundation of the thesis, enabling researchers to develop insightful and impactful studies. Here are the key components of an artificial intelligence thesis:
Machine learning models
Machine learning models allow AI systems to learn from data and make predictions or decisions accordingly. An artificial intelligence thesis often delves into different machine learning models, such as supervised learning, unsupervised learning, and reinforcement learning, to uncover their strengths, limitations, and potential applications.
Real-world applications
Artificial intelligence (AI) has revolutionized multiple industries, proving itself a genuine game-changer. An artificial intelligence thesis explores the practical implementations of AI in sectors such as healthcare, finance, transportation, and many more. Understanding these real-world applications helps researchers identify areas where AI can make a significant impact.
Scope of research in the artificial intelligence thesis
The artificial intelligence thesis provides an opportunity for research and innovation. By selecting this topic, students and researchers can explore a wide range of subtopics and areas of interest within the field of AI. Some potential research areas in an artificial intelligence thesis include:
Natural language processing
Natural language processing (NLP) is a branch of AI that concentrates on enhancing computers' ability to understand and interpret human language. Researchers can examine the challenges and advancements in NLP and suggest ways to enhance language processing capabilities.
Deep learning
Deep learning is a subset of machine learning that uses multi-layered artificial neural networks to learn complex patterns from data. Exploring the potential of deep learning models and their applications is an exciting avenue for research within an artificial intelligence thesis.
Conclusion
Thesis writing requires full effort and concentration, but in higher education students often struggle to balance their many responsibilities. Techsparks is the best thesis institute, providing complete guidance to students. By exploring the key components and research areas within the artificial intelligence thesis, students can unlock new possibilities and shape the future of AI-driven applications. With our guidance, you can easily score better grades and achieve academic success.
#artificial intelligence thesis#thesis guidance in chandigarh#techsparks#thesis guide#online thesis help#thesis help
2 notes
·
View notes
Text
the way that so many people are willing to literally destroy the earth just to “talk” to their favorite fictional character. or to not have to write their own damn paper. or their own email
#like HELLOOOO???#please just talk to someone irl#please god just make one friend#or stop being SOOO desperately single#sorry but it like actually pisses me off bc yall aren’t even using it for a good reason!!!#it’s shit like you being too fucking lazy to write an email to your fucking coworker!!#it’s absurd!!!!!#learn how to write a proper fucking email!!! like CHRIST#we’re so fucked yall#our future doctors nurses etc are using chat gpt to write their thesis#silas speaks#chat gpt#anti ai#anti chatgpt#chatgpt#ai#artificial intelligence#anti artificial intelligence
7 notes
·
View notes
Text
Looking to make meaningful strides in your academic journey?
PhdPRIMA Research Consultancy is here to support you every step of the way. With specialized services in PhD research assistance, thesis and dissertation writing, publication support, and sponsored research, we’re committed to empowering your success. Let’s collaborate to achieve your research goals and drive academic excellence!
🌐 Visit www.phdprima.com or contact us at [email protected].
For more information:
www.eduplexus.com
0120-4157857, +91 9686824082
#PhdResearch #DissertationWriting #CorporateConsulting #AcademicSupport #Careers #Innovation #JoinUs
#thesis#research#phd research#dissertation#artificial intelligence#writingcitation#academic support#university#career
0 notes
Text
How to: Complete a master's thesis related to AI in healthcare
Step 1 Develop your research questions and choose an appropriate supervisor
The first main decision you have to make after you decide to pursue a thesis is choosing a supportive supervisor; this person is going to fight your battles at the admin level, as there are a few icky spots in the evaluation phase which can lead to undue stress, e.g., the defence, mid-year evaluation, final evaluation, etc. Hence, make sure you choose someone who has sway, e.g., the dean or head or chairman of the department. Next, select a specific problem area from whatever topic interests you most; for me it was using machine learning to find the direction and magnitude of the cardiac forces that express as the ST-segment on an ECG signal. One way to select your research question/s is by looking at WHAT THE CURRENT LITERATURE LACKS.
Step 2 Review recent available literature and modify your research question/s accordingly
Lit rev is a dynamic process: you must keep up to date with new developments in your research area so you are well-informed during any evaluation. Use this literature review to modify your research question/s and make them unique.
Step 3 Prepare to defend your thesis topic and write your research proposal
As I mentioned above, one of your weapons for a good defence is having an influential supervisor. Next, you must make sure YOU KNOW THE GAPS in your research, because these are usually targeted by the examiners. Make sure to keep your research as specific as possible, which, in turn, makes it novel and deflects any questions unrelated to your research.
Step 4 Collect and preprocess data
This is more of a technical step. Make sure to sign the right MoUs and devote as much time as possible to data collection and preprocessing. You will have to record high-fidelity data to avoid unnecessary preprocessing before using it as input to your algorithm. Make note of the recording equipment, sensors, etc. used. Preprocessing for the vector-electrocardiography signals included averaging, cross-correlation, correction of baseline shift, moving averages, etc.
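As a rough illustration of what this kind of preprocessing can look like, here is a minimal Python/NumPy sketch; the synthetic signal, sampling rate and window sizes are placeholders I made up, not the settings from my project.

```python
# Minimal sketch of the preprocessing steps mentioned above (moving average,
# baseline-shift correction, cross-correlation alignment before beat averaging).
# The synthetic signal, sampling rate and window sizes are made-up placeholders.
import numpy as np
from scipy.signal import correlate

fs = 1000                                            # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)   # stand-in signal

# 1. Moving average to suppress high-frequency noise
win = 5
smoothed = np.convolve(ecg, np.ones(win) / win, mode="same")

# 2. Crude baseline-shift correction: subtract a slow moving average (baseline wander)
baseline = np.convolve(smoothed, np.ones(fs) / fs, mode="same")
corrected = smoothed - baseline

# 3. Align each "beat" to a template via cross-correlation, then average them
def align(beat, template):
    lag = np.argmax(correlate(beat, template, mode="full")) - (len(template) - 1)
    return np.roll(beat, -lag)

beats = [corrected[i * fs:(i + 1) * fs] for i in range(5)]           # toy segmentation
averaged = np.mean([align(b, beats[0]) for b in beats], axis=0)
print(averaged.shape)
```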
Step 5 Choose your programming language
My preferred programming applications are MATLAB, RStudio and Python. However, Python has some very capable libraries for ML as well as deep learning and neural networks, e.g., scikit-learn, TensorFlow, Keras, etc.
Step 6 Develop an algorithm
If you have chosen Python and machine learning, then making an algorithm can't get easier. The structure you should follow starts with importing the relevant libraries, reading the data, preprocessing using scikit-learn (e.g., making the data numerical if there are any categorical variables), defining evaluation metrics, and displaying results as a confusion matrix, AUC graphs, etc. One of the steps that can help you in reading and understanding your data is exploratory data analysis (EDA); Python has libraries to do this for you, e.g., Sweetviz, D-Tale, etc., and they can identify types of variables, correlation between variables, skewness of your data, etc., which you can use to select the appropriate algorithm for your application.
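To make that structure concrete, here is a minimal scikit-learn sketch. The synthetic dataframe, its column names, and the choice of logistic regression are illustrative assumptions on my part, not what I used in the thesis; swap in your own data and model.

```python
# Minimal sketch of the pipeline described above: read data, one-hot encode
# categorical variables, train a classifier, then report a confusion matrix and AUC.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "heart_rate": rng.normal(75, 10, 500),          # numeric feature (placeholder)
    "sex": rng.choice(["M", "F"], 500),             # categorical feature (placeholder)
    "label": rng.integers(0, 2, 500),               # binary outcome (placeholder)
})

X, y = df[["heart_rate", "sex"]], df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    ("encode", ColumnTransformer([("onehot", OneHotEncoder(), ["sex"])],
                                 remainder="passthrough")),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print(confusion_matrix(y_test, model.predict(X_test)))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```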
Step 7 Evaluate your algorithm
Sensitivity, specificity and accuracy are the 3 main performance metrics that are understood by healthcare professionals.
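And if you compute them yourself from a confusion matrix, the arithmetic is as simple as this (the counts below are made up purely for illustration):

```python
# Sensitivity, specificity and accuracy from confusion-matrix counts.
tp, fn, tn, fp = 80, 20, 90, 10                     # made-up counts

sensitivity = tp / (tp + fn)                        # true positive rate (recall)
specificity = tn / (tn + fp)                        # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(sensitivity, specificity, accuracy)           # 0.8 0.9 0.85
```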
Step 8 Write your report
Usually, there are two reports that you have to submit: one is the progress report and the other is your final report. Reports must be very concise (intro, research questions, methodology, results, discussion), supported with images of your recording equipment, graphical results, etc. You will need referencing software like Mendeley or EndNote, and MS Word or LaTeX.
#engrblr#artificial intelligence#grad school#graduate#biomedical engineer#medicine department#aga khan university#spotify#hozier#🌻#thesis#Spotify
1 note
·
View note
Link
#artificial intelligence#SALAMI#Linguistics#I'm screaming! Manning bases his model on Wittgenstein 1953:20 just to deny additional information context by gesture two paragraphs later#apart from this being a very short-sighted understanding of any language but spoken English#I can't help but think that he needs to argue for this semantic model otherwise the pattern reproduction of the SALAMI doesn't work#quick thesis: the bare-bones model Manning is working with is so open/blind to dogwhistles explaining reproduction of right wing arguments
1 note
·
View note
Text
machine learning thesis topics 2022
It's that time of year again! If you're looking for some inspiration for your next AI project, check out our list of the top 10 research and thesis topics for AI in 2022. From machine learning to natural language processing, there's plenty to choose from. So get inspired and start planning your next project today!
#machine learning thesis topics 2022#machine learning#thesis topics 2022#artificial intelligence#techalertx
0 notes
Text
Chat GPT (The Alan Association AU)
Fun Fact (for those who don't know): Noogai is a Medical Bot/Artificial Intelligence, he cannot be used for "revising"
Fun Fact: Vee's thesis is all about not using AI on art, and he's making Noogai (who is an AI) revise it.
Another Fun Fact (Unrelated): Spongey had nightmares for a week about her own thesis.
Also, another Fun Fact (also Unrelated): Both JM and Spongey worked on the same thesis, and they hated it.
45 notes
·
View notes
Text
Claims of AGI ignore fundamental problems in package management
Recently, a number of public intellectuals have claimed that we're getting increasingly closer to artificial intelligence that can solve a wide array of problems as well as a human can. However, these claims overlook fundamental barriers in the field that we are still decades from solving. To discuss this, I turned to alcoholic grad student James Belmini at MIT.
"It's just these fucking packages, man", James told me, while pouring himself a glass of straight vodka at 3 in the afternoon. "The 'language comprehension' package requires pyflubnugget at version 3.8.6 or less, but this 'Superintelligence' git repo requires conkflonk of 1.1.2 or greater, which conflicts with pyflubnugget. So any speculation of the capabilities of true AGI is purely hypothetical, because it's gonna take at least 5 years to work this shit out."
Asking about James' thesis progress did not yield any more information about the problem, but did cause him to pour himself another shot and down it wordlessly without making eye contact.
814 notes
·
View notes
Text
Day-017: Partner
Lore:
Dr. Light’s idea was for robots that could grow, change, and think for themselves. However, he’d had to win over the committee before he could make that dream a reality. Feeling bad after he’d told the committee that he couldn’t support Wily’s double-gear system (especially because Wily had been there to witness it), he offered that they could build the first Robot Master together.
Wily had always been better with hardware and mechanical design. It was something of his specialty, even. With his skills with hardware and Light’s skills in artificial intelligence, the two could be basically unstoppable.
Wily had initially refused - the salty man he is, he definitely interpreted Light’s gesture of goodwill as some kind of condescension. However, eventually he accepted. Light (mistakenly) took this to mean he’d been forgiven. Wily would design a lot of the hardware and design work, including, eventually, the prototype Megabuster and the Variable Tool System.
Drs. Cossack and LaLinde weren’t officially on the project, though they did contribute their thoughts and ideas. When Blues’s core failed in the middle of the military demonstration (because they built it with Blues the unarmed child in mind and forgot to compensate for the weapon attachment, and it turns out it couldn’t generate enough power for both Blues and the buster; the event damaged his core, making it more inefficient and unbalanced), Light made sure to consult Dr. LaLinde to help him design the solar cores for Rock and Roll (she’s an environmental scientist and he figured she would probably be able to help him and Wily design a more efficient, environmentally-friendly solar core).
————————————————————————
Notes:
Their overall appearances are based on their young designs from Megaman 11. Their shirts are colored that way as a reference to their young selves from the Ruby Spears cartoon.
Wily’s hair color came from blending the shades of yellow of Piano, Bass, and Zero together (& then lightening it because 11’s Flashback Wily has LIGHT blond hair). With Light, I blended Rock, Blues, and X’s hair colors and darkened it. ✨ Contrast ✨
Light and Wily were roommates. They couldn’t exactly escape each other. After ignoring Light for a week, Wily told their shared friend group that he was getting sick of having to see his "stupid, traitorous face" every morning and afternoon, and LaLinde & Cossack (he was doing his thesis at the time) individually suggested that he should try talking to Light about how he felt. He basically said "screw that!" but did take their advice to at least try to get along. It was the first crack in their friendship and he never actually forgave it.
Dysfunctional Besties <3 (/hj)
Also I think it would be kinda cute if Light was inspired on a subconscious level by Dr. Cossack talking about Kalinka & how she was growing up. She might be like 2, if she is even born yet tho. Still working out the timeline there. It’s a little fuzzy.
Blues took quite a few years to build because they had to do everything from the ground up. The Robot Masters built after him used modified versions of Blues’s base code and designs, so they took comparatively less time.
The idea was presented before the committee as just the base code, which they determined would probably work. (I assume the committee would reach out to investors or something, but I’m like the furthest thing from a roboticist so I have no idea.) By the time Light founded Light Labs and obtained military funding, he’d gotten like. parental-levels of attached. He didn’t set out to make robot children but boy did he want robot children now—
and then he made 4 that were "children" children and like a bajillion that don’t stay at Light Labs
#rocktober#rocktober 2024#sibling shuffle au#mega man au#mega man classic#megaman#my art#dr. light#dr. Light#dr. Wily#dr wily#Lore#im not an engineer Idk what I’m talking about when it comes to that stuff
34 notes
·
View notes
Text
Steph the Alter Nerd is reading Omid’s new book.
Following the live read:
I joined late so Steph was already reading. She was starting the Sophie section. Seriously, why pick on Sophie, who just puts her head down and focuses on work? That strikes me as unnecessarily vile.
Omid apparently thinks Charles hasn’t modernized the monarchy. Dude is an environmental icon and we now have a blended family in BP. That may not be everyone’s cup of tea, but you can’t deny that it’s modern.
Apparently Omid writes pages and pages about Charles’ “leaky pen” incident. It’s just a pen, Omid. Omid thinks this means Charles may not be up to the job, lololol. I’m dying. Mind you, Omid worships Harry who stripped in Vegas, wore a Nazi uniform, and called his fellow soldiers names. But yes, the leaky pen is far more significant than all that, somehow.
Really boring part about government stuff. Charles negotiates and reaches compromises with the government and that’s apparently bad? Also, Charles didn’t know what to expect after he became King??? Lolololol.
Charles lost sympathy for the Harkles after the documentary. Well, duh. We all did, Omid. That documentary was a huge own goal.
He blames the Royal Family for the documentary’s melodrama? Seriously? Who was crying on Oprah? Who was crying in a rented Vancouver mansion with her head wrapped in a towel? Who dropped hot, salty tears on her Hermes blanket? That’s the person responsible for the melodrama.
Anne supposedly kicked them out of Frogmore. I suspect this is fanfiction, but I love it. I want it to be true. This is my headcanon now.
And I do think fanfiction is the right term for this book. The BRF is super popular right now, so the book’s thesis itself (that the BRF is in trouble) is pretty fantastical.
This book seems very, very boring. Omid seems to be desperately trying to argue that Charles’ first year went badly, but that’s just not reality. Omid used to be better at spinning than this.
Make the Royals Great Again? Uh, that was done in 2011. Everything we are seeing now was planted way back then, down to Kate’s leafy crown. There’s a general lack of both self-awareness and historical awareness in this book. Omid writes like someone who first became a “royal reporter” in 2016…which is exactly what he is. Too bad, because I do think there’s an interesting analysis that could be made regarding 2023 and its place in royal PR. That’s above Omid’s pay grade though.
Lol, Omid discusses UK politics and it’s every bit as much of a disaster as one would expect. Stick to gossip, Omid.
Ok, Steph’s hydrating, so let’s step back for a minute and recall what this book was supposed to be. This was to be “Finding Freedom 2.0,” a chronicle of the Harkle post-Megxit success story. The publishers clearly didn’t like that and they made Omid write a book about the family as a whole. That’s because there was no Harkle success story and the publisher didn’t think another Harkle book would sell. Unfortunately, Omid is a Harkle specialist. He can’t write a book about the family (let alone successfully argue for its imminent demise). He simply doesn’t know enough.
Back to Steph. We’re now in Harry’s military service? Er, why? We jumped from 2023 to 2016 and now to the Afghanistan War?
I agree with Steph that Omid’s trying to associate the royals with MAGA and I can’t even articulate how stupid that is. Completely different countries, completely different cultures, completely different iconography. Just doesn’t work.
Now we’re at the Coronation Concert? The royals are in trouble because Elton wasn’t at the concert! Lolololol. The Harkle bubble is out of this world. Basically, if their inner circle wasn’t centered (Oprah, Elton, Omid, etc…), it’s because of a MAGA conspiracy that will bring the royals down.
Something, something throne. Charles looked awkward again. Constitutional crisis!
I feel like I’m grading student briefs. There’s a way to argue this and there is evidence you can cite for this argument, but this isn’t it. You shouldn’t write pages and pages about a leaky pen and then minimize the bags of charity money as “perception.” You should start with the bags of charity money then use the leaky pen to bolster the “perception” argument.
Another disagreement with the government. Aargh! That should be lumped together with the other arguments with the government. Or it shouldn’t be mentioned at all. You’re arguing that Charles is an old-fashioned idiot who is not a good king, so why make him look like someone who is aware of current social issues and engaged with his government?
Racism. Finally! No wait, it’s boring.
Charles had an affair with Camilla. Lol, that’s not exactly news, love. The time jumping is driving me nuts.
Took a break to let the dog out and now we’re in Andrew’s interview. Of course we are.
Will exiled Andrew. I hope this is true. Wait, that’s the famous “power struggle”? Andrew??? I don’t think that’s a power struggle. That’s just Charles passing the buck.
Oh, lord. More Andrew. That’s it. I’m going to bed. I’ll tune back tomorrow.
120 notes
·
View notes
Text
I have recently been thinking about the term "Artificial Intelligence", and how it might be defined in the Tron universe. What I find interesting is that most of the conscious entities in the digital world in Tron weren't intentionally created by humans to be conscious. They didn't intend for actuarial programs and security software to have thoughts and feelings. And yet, what can be called "intelligent life" somehow grew inside the ENCOM computers, unbeknownst to the humans. The same can of course be said about the ISOs, who weren't intentionally created by Kevin, but just showed up one day, again having grown out of the Grid somehow.
What's really interesting about this is that as far as I know, there are only two digital entities in the Tron universe who were created intentionally by humans to be "intelligent": The Master Control Program (created by Dillinger) and Clu (created by Flynn). And what do these two creations have in common? They're the antagonists. The bad guys. The main villains that have to be defeated, by the humans and by the unintentionally conscious programs.
I don't have any well-thought-out thesis about this, just some random thoughts. Is the message of Tron that life is only good if it "grows naturally" instead of being intentionally created? Are there some other interesting commonalities between the MCP and Clu that are relevant to this idea, such as the fact that they consider themselves better than their creators? Or that both Dillinger and Flynn intended for their creations to "run things" better than a human would? Is there some kind of reading to be made that could be a criticism against the current "AI" trend?
It also makes me genuinely curious about how the upcoming third Tron film will handle the subject. Because I can't imagine that it won't be dealing with AI in some regard. Will there be an intentionally created AI in it, and if so, will it be a good guy or a bad guy?
44 notes
·
View notes
Text
Artificial Intelligence Risk
About a month ago I got into my mind the idea of trying the format of a video essay, and the topic I came up with that I felt I could more or less handle was AI risk and my objections to Yudkowsky. I wrote the script but then soon afterwards I ran out of motivation to do the video. Still, I didn't want the effort to go to waste, so I decided to share the text, slightly edited, here. This is a LONG fucking thing, so put it aside on its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading.
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence: what artificial intelligences are exactly, what an AGI is, what an agent is, the orthogonality thesis, the concept of instrumental convergence, alignment, and how Eliezer Yudkowsky figures in all of this.
If you are already familiar with this you can skip to section two, where I’m going to be talking about Yudkowsky’s arguments for AI research presenting an existential risk to not just humanity, or even the world, but the entire universe, and my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert in the field; my credentials are dubious at best. I am a college dropout from a computer science program and I have a three-year graduate degree in video game design and a three-year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational YouTube videos. So. You know. Not an authority on the matter from any considerable point of view, and my opinions should be regarded as such.
So without further ado, let’s get in on it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 general intelligence and agency
Let's begin with what counts as artificial intelligence. The technical definition of artificial intelligence is, eh…, well, why don't I let a Master's degree in machine intelligence explain it:
Now let’s get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames, in AlphaGo, or even our Roombas, are narrow AIs; that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a Go board, or within your filthy disgusting floor.
AGI, on the other hand, is much more, well, general: it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are making some moderate strides in that direction. We’ve all seen the impressive conversational and coding skills that GPT-4 has, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits: it has no persistent memory, and its contextual window, while larger than previous models’, is still relatively small compared to a human’s (contextual window essentially means short-term memory, how many things it can keep track of and act coherently about).
And yet there is one more factor I haven’t mentioned that would be needed to make something a “true” AGI. That is agency: having goals and autonomously coming up with plans and carrying those plans out in the world to achieve those goals. I, as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines more broadly, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It’s a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn’t intelligence can be seen as implying that it deserves or doesn’t deserve admiration, validity, moral worth or even personhood. I don’t care about any of that dumb shit. The way I’m going to be using intelligence in this video is basically “how capable you are of doing many different things successfully”. The more “intelligent” an AI is, the more capable of doing things that AI can be. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I’m using the word within the context of this video; I don’t care if you are a psychologist or a neurosurgeon, or a pedagogue, I need a word to express this idea and that is the word I’m going to use. If you don’t like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It’s going to try to protect itself. When you want to do something, being dead is usually. Bad. It’s counterproductive. It’s not generally recommended. Dying is widely considered inadvisable by 9 out of every 10 experts in the field. If there is something it wants to get done, it won’t get done if it dies or is turned off, so it’s safe to predict that any AGI will try to do things in order not to be turned off. How far might it go in order to do this? Well… [wouldn’t you like to know, weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let’s say that you want to take care of your child; that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no, which is why, if we make AGI and it has goals that we don’t like, it will probably resist any attempt to “fix” it.
And finally, another goal that it will most likely trend towards is self-improvement. Which can be generalized more broadly to “resource acquisition”. If it lacks the capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well, first you need to get money. If you want to increase your chances of getting a high-paying job, then you need to get an education; if you want to get a partner, you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So one more time, it is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All three of these things are sure bets; they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven’t I? There is one more assumption I’m sneaking into all of this which I haven’t talked about. All that I have mentioned presents a very callous view of AGI; I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency; I’m talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things as we do, because it is not made of the same things a human is made of and it was not raised the way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. In its way there it comes across an anthill in its path, it will probably step on the anthill because to take that step takes it closer to the corner store, and why wouldn’t it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? then it will step on the anthill and not pay any mind to it.
Now lets say it comes across a cat. Same logic applies, if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat wont slow it down at all.
Now let’s say it comes across a baby.
Of course, if it’s intelligent enough it will probably understand that if it steps on that baby, people might notice and try to stop it, most likely even try to disable it or turn it off, so it will not step on the baby, to save itself from all that trouble. But you have to understand that it won’t stop because it feels bad about harming a baby or because it understands that harming a baby is wrong. And indeed, if it were powerful enough that no matter what people did they could not stop it, and it would suffer no consequence for killing the baby, it would probably have killed the baby.
If I need to put it in gross, inaccurate terms for you to get it, then let me put it this way: it’s essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which requires some manner of stable society and civilization around them. Also, they are only human, and are limited in the harm they can do by human limitations. An AGI doesn’t need any of that and is not limited by any of that.
So ultimately, much like a car’s goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice to achieve them effectively. And those goals don’t need to include human wellbeing.
Now, with that said: how DO we make it so that AGI cares about human wellbeing? How do we make it so that it wants good things for us? How do we make it so that its goals align with those of humans?
1.4 Alignment.
Alignment… is hard [cue Hitchhiker’s Guide to the Galaxy scene about space being big]
This is the part I’m going to skip over the fastest because, frankly, it’s a deep field of study. There are many current strategies for aligning AGI, from mesa-optimizers, to reinforcement learning with human feedback, to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is Isaac Asimov’s three laws of robotics: a robot should not harm a human or, by inaction, allow a human to come to harm; a robot should do what a human orders unless it contradicts the first law; and a robot should preserve itself unless that goes against the previous two laws. Now, the thing Asimov was prescient about was that these laws were not just “programmed” into the robots. These laws were not coded into their software; they were hardwired, part of the robot’s electronic architecture, such that a robot could never be without those three laws, much like a car couldn’t run without wheels.
In this, Asimov realized how important these three laws were: they had to be intrinsic to the robot’s very being; they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, the thing it seeks to maximize, instead of instrumental values, that is to say, things it values simply because they allow it to achieve something else.
But how do we even begin to do that? How do we codify “human values” into a robot? How do we define “harm”, for example? How do we even define “human”??? How do we define “happiness”? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes; these are profound philosophical questions to which we still don’t have satisfying answers.
Well, the best sort of hack solution we’ve come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it’s not just that humans are flawed and have a tendency to cause harm, and that asking a robot to imitate a human therefore means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis section: it is not good enough that I, for example, buy roses and give massages to act nice to my girlfriend because it allows me to have sex with her; I should not merely be imitating or performing the role of a loving partner because her happiness is an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy, and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares, deep down, about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily become divorced from human happiness.
It’s Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek to get high grades regardless of whether they learned anything or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior because all it teaches people is how to avoid the punishment; it teaches people not to get caught. Which is why punitive justice doesn’t work all that well at stopping recidivism, and why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about creating alignable AI in just about the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long, so, hurrying up, let's do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices: essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
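(A purely illustrative aside: if you want to see the "add and multiply giant matrices, then tune the numbers" idea in miniature, here is a toy NumPy sketch. It is a deliberate caricature of the training loop, nowhere near how a real LLM is built or sized, and every number in it is made up.)

```python
# Toy caricature of "giant matrices we add, multiply and tune":
# a tiny two-layer network nudged toward its training data by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))                 # the "matrices" (absurdly small here)
W2 = rng.normal(size=(1, 8))
x = rng.normal(size=(4, 100))                # toy training inputs
y = np.sin(x.sum(axis=0, keepdims=True))     # toy targets to imitate

lr = 0.01
for step in range(2000):
    h = np.tanh(W1 @ x)                      # multiply, add, squash
    pred = W2 @ h
    err = pred - y
    # "tune the numbers" so the output pattern matches the data a bit better
    grad_W2 = err @ h.T / x.shape[1]
    grad_W1 = ((W2.T @ err) * (1 - h ** 2)) @ x.T / x.shape[1]
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(float(np.mean(err ** 2)))              # the mismatch shrinks as the matrices get tuned
```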
(takes a big breath)this “thing” has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don’t actually know what internal models it creates, we don’t know what patterns it extracted or internalized from the data that we fed it, we don’t know what internal rules decide its behavior, we don’t know what is going on inside there; current LLMs are a black box. We don’t know what it learned, we don’t know what its fundamental values are, we don’t know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn’t it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer will sit down and build the thing line by line, all its behaviors specified. It is more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don’t know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that trying to go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and it has been making some moderate progress as of late. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Puff! Ok so, now that this is all out of the way I can go on to the last subject before I move on to part two of this video: the character of the hour, the man, the myth, the legend. The modern-day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The madman! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979. Wait, what the fuck, September eleventh? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that’s terrible… Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. A very eccentric man, he is an AI doomer: convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun and expand across the universe, turning all of the cosmos into paperclips. Wait, no, that is not quite it; to quote properly, (grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, and has been for over a decade now. Not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI, we will have created an agent who doesn’t care about humans but cares about something else entirely irrelevant to us, and it will seek to maximize that goal, and because it will be vastly more intelligent than humans, we won’t be able to stop it. In fact, not only will we not be able to stop it, there won’t be a fight at all. It will make its plans for world domination in secret, without us even detecting it, and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important; it all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as “how to kill all humans without being detected or stopped”. And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn’t be capable of just taking over the world unimpeded, intelligence is not this end-all-be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, intelligence that created a cure for polio, and intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn’t *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all that. Coming up with the plan, convincing people to follow it, delegating the tasks to the appropriate subagents: it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn’t stop there. Like I said in his intro, he believes there will be “no fire alarm”. In fact, for all we know, maybe AGI has already been created and it’s merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn’t think this is the case right now, but with the next iteration of GPT? GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish, with multilateral international treaties, any group or nation that doesn’t stop, going as far as putting military strikes on GPU farms among the sanctions of those treaties.
What’s more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it; we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called “Dying with Dignity” where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn’t mean we should stop fighting or stop trying to find ways to align AGI, impossible as it may seem, but merely that we should have the basic dignity of acknowledging that we are probably not going to win. In every interview I’ve seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think it’s hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices, giving it instant access to humanity. And worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, widely available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist, we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully get out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are on that specific level of alarm. The opinions vary across the field and from what I understand this level of hopelessness and defeatism is the minority opinion.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous, maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and it should not be dismissed as an idea that experts don’t take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity and with no human qualms or limitations to stop it from doing so. I believe this is not just possible but probable and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said, I do have one key disagreement with Yudkowsky. And part of the reason why I made this video was so that I could present this counterargument, and maybe he, or someone who thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t, that would be really depressing).
Finally, we can move on to part 2
PART TWO- MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don’t I? As I said, I am not an expert and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has been doing for a year, I have seen most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy; I have read HPMOR, Three Worlds Collide, The Dark Lord’s Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash, that last one didn’t work out so well for me.) My point is, in all the material I have seen of Eliezer, I don’t recall anyone ever giving him quite this specific argument I’m about to give.
It’s a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I do believe alignment is really hard. My key disagreement is specifically about the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity’s lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, with potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can’t do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. “AIntibodies”
In the past humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal hawkish leaders were able to take pause and reconsider using it as a weapon, we became so scared that we overregulated the technology to the point of it almost becoming economically unviable to apply, and we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenal.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don’t get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won’t be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other Yudkowsky detractors and say that he claims AGI will be basically a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, this dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet Earth. It would be humanity’s greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that’s easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. But what we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans, in which they are not going to be in control of every factor and variable, specially at the beginning. We are talking about AGI that will have to function in the outside world, crashing with outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. Im not saying that an AGI capable of doing this wont be possible maybe some day, im saying that to create an AGI that is capable of doing this, on the first try, without a hitch, is probably really really really hard for humans to do. Im saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrixes stumble upon the right precise set of layers and weight and biases that give rise to the Doctor from doctor who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I'm saying that AGI, when it fails, when humans screw it up, doesn't suddenly become more powerful than we ever expected; it's more likely that it just fails and collapses. To turn one of Eliezer's examples against him: when you screw up a rocket, it doesn't accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don't get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even an unaligned AGI is stupidly hard to build, and that if you fail at building an unaligned AGI, you don't get an unaligned AGI; you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I'd say! It means there is SOME safety margin, some space to screw up before we need to really start worrying. Furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up, will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won't be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I'm not stupid and I can try to anticipate what Yudkowsky might argue back and answer it before he says it (although I believe the guy is probably smarter than me, and if I follow his logic, I probably can't actually anticipate what he would argue to prove me wrong, much like I can't predict what moves Magnus Carlsen would make in a game of chess against me; I SHOULD predict that him proving me wrong is the likeliest option, even if I can't picture how he will do it. But you see, I believe in a little thing called debating with dignity, wink).
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not yet smart enough and try to become smarter, so it would lie and pretend to be an aligned AGI in order to trick us into giving it access to more compute, or simply to bide its time and create an AGI smarter than itself. So even if we don't create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to that. First, this is filled with a lot of assumptions whose likelihood I don't know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI better than itself. My priors about all of these are dubious at best. Second, it feels like kicking the can down the road. I don't think an AGI capable of all of this is trivial to make on a first attempt. I think it's more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won't be smart enough to pull it off effortlessly and flawlessly, because us humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn't argue that; maybe he would come up with some better, more insightful response I can't anticipate. If so, I'm waiting eagerly (although not TOO eagerly) for it.
Part 3: CONCLUSION
So.
After all that, what is there left to say? Well, if everything I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point, as well as the basic arguments supporting the concept of AI risk and why it's something to be taken seriously rather than just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles' AI risk series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky's argument, you can look up Paul Christiano or Robin Hanson, both very smart people who had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky's brand of doomerism, both so that it can be accepted if proven right or properly refuted if proven wrong. Again, I really hope that it's not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn't make it any worse. If the sky is blue I want to believe that the sky is blue, and if the sky is not blue then I don't want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.
Text
Dawntrail Retrospective
Okay, it's been two weeks since Dawntrail launched and a bit over a week since I cleared it and have had time to think things over, so I wanted to do a big dump of my thoughts: a non-scored review of it all.
Full spoilers after the break.
So, I want to give this all a nuanced look; I know this has been a polarizing expansion; I did very much enjoy my time while still having some qualms, and I'll try to highlight both sides of that here.
Overall, while it would be low in my expansion rankings, that's not to say it's bad. Just as I probably bump Heavensward up a bit in my rankings because it did so much with so little (in terms of budget, gameplay tools available, story to build on, cast, etc.), Dawntrail takes a hit because I know what they're capable of these days.
But a 10-year saga is a tough act to follow, and I know if this were my first FFXIV experience (it might be a lot of people's one day, if the 'second saga starting point' for new players they mentioned ever gets implemented), I'd be going 'oh wow'.
Anyways, before I pick things apart, I'd like to highlight what really worked for me.
Sphene was my problematic fave. I know Artificial Intelligence tropes can be overdone, but I have a fondness for them because when done right, an AI is clearly authored by someone. Just like a biography, even an autobiography, paints the subject in a certain way, an AI really reflects the creators' biases.
Just as the soul technology was shown as a mechanical version of the aetherial sea, Sphene really felt like a sort of digital primal for Alexandria, the people's desires latched on to her, sort of a vtuber Zodiark.
I loved the development that her compassionate personality (taken from the OG Sphene) was distinctly incompatible with her unsustainable primary directive: protecting and preserving Alexandria's way of life (the directive from the people).
And I appreciated that part of the thesis statement of her character is "a Garnet who never traveled with Zidane would become a tool of Alexandria, her kindness taken advantage of as a figurehead," which makes it nice when Wuk Lamat breaks through during the Interphos to appeal to her.
She can feel like a bit of a rehash of Hades and Meteion, but I do enjoy the contrast between her valuing life too much and Meteion not valuing it enough; it's important to know how to live in spite of despair, but it's also important to accept that even memory is not forever.
Also while I'm here I have to say I absolutely respect the zone change of Living Memory from stunningly beautiful to hauntingly somber. I hope that change is not reverted in patches, as it's absolutely the starkest environment change in the game.
I like the idea of casting aside nostalgia to care for the living, and I thought this zone was a welcome surprise from the "Golden City" imagery a South and Central American expansion invokes.
(PS, massive Simulated Twilight Town vibes)
And I thought Cahciua was well done in this zone; I was a bit antsy earlier with how they made her into just a drone, but I liked the resolution between her and Erenville.
But one thing I want to stress, as I sing the praises of the last zone and change, is that neither half of this expansion works without the other. Unlike other split expansions (SB having the Gyr Abania and Yanxia halves, EW having Ilsabard and then the Ancients), it felt clear why it had both halves, and that was to contrast the same theme: the ideas of culture, tradition, and history, and how they affect the living.
When Wuk Lamat is giving her speech during her ceremony, she notes one of the societies taught her "they believe death is not the end, and we live on so long as we are remembered", which Sphene says almost verbatim of her people later, menacingly polite as that same belief is twisted.
There are some roots of this conflict in the first half too, with Koana's disinterest in culture and tradition before he realizes progress and culture aren't incompatible.
Alexandria, instead, takes it to a logical, Black Mirror extreme: discarding culture and history, literally forgetting anyone who passes (while assuring themselves anyone who is lost still lives on) and living purely in the present moment, with death itself removed from the public circle.
I don't think the Alexandria half, the modern, present-focused society, works without first setting up a region with a rich culture and history.
In the first few regions, you see how those who walked before led those who walked after, while in Heritage Found... you see a heritage lost. On both sides of the divider there are abandoned buildings; a sidequest in the graveyard has the keeper note that memorials have fallen out of favor due to regulators. Alexandria had a perfect record of history (data, people, all stored in the cloud) but didn't use it, specifically keeping it away to prevent painful memories from affecting the present. Yok Tural, meanwhile, had an imperfect history (a lot of it in legend, retold and inconsistent oral history), but that history distinctly affected their day to day; even the painful memories, the tragedies, all played a part in shaping the present.
Even though it could make the pacing clunky at times, I did like that Wuk Lamat's basic setup of "learn about these people and understand why they make the choices they do" extended to Solution Nine and even Living Memory.
Gulool Ja Ja was also a very good character; loved his performance. I do kinda wish his solo duty, where he confides the true nature of the contest, had been the very start of the expansion; I feel like it would have better set the tone for this being the "the WoL is a mentor" arc.
Also to wrap up the good side: every single dungeon and trial was ace. Dungeons finally hit a good level of difficulty for normal content, and were well designed and very pretty. Vanguard and Everkeep in particular were delights, as was the postgame dungeon Tender Valley.
------------------------------------------------------------------
And on to my more mixed feelings.
Wuk Lamat - I don't hate her, but I don't like her that much either. I tried to keep an open mind for the full MSQ, but ultimately she's not a character I really vibed with; I do get the shonen protag/Naruto appeal, but it's really not for me. She's been described as 'Lyse 2.0', and while I admit I have similar feelings about Lyse, I do think Wuk Lamat has a more natural progression. Lyse started Stormblood feeling like a 20-something on a mission trip, while Wuk Lamat feels like a reasonable candidate who just needs a little encouragement.
I don't mind our WoL taking a mentor role and taking a backseat, downplaying their powers; what I struggled with most was fatigue. Wuk Lamat was always there, like the "Talk to Wuk Lamat" memes say. Shadowbringers was the expansion where G'raha was the main character, and a lot of the time he was away doing city stuff or being mysterious; Wuk Lamat would have benefited from more time to breathe, especially in the back half of the game. She should still be there in the back half for sure, since for the expansion to work she needs to be a player in all this, but I'll admit I sighed when I got to Solution Nine thinking I'd explore by myself (probably bumping into her at one of the locations) but instead needed to escort her. As I noted earlier, I don't mind the Interphos interruption (though I did appreciate the chance for the WoL to be at full strength), because Wuk Lamat appealing to Sphene's humanity fit the expansion themes well.
Succession - I'm happy this didn't go into my worst fear, a retread of the Azim Steppe where we actively interfere in another nation's politics by being their champion, but it left a bit to be desired. Notably, while I knew the Scion civil war was a bit of a misdirect, it felt kinda pointless? Thancred and Urianger are here helping an alumnus from their university as he applies for the same job, but they're totally chill with you. And honestly there are no stakes to him getting the job; he's the only other qualified candidate, to the point where you hire him later yourself.
I didn't want any longstanding inter-Scion conflict, but for a character as frequently duplicitous as Urianger and one as driven as Thancred, it just felt like a waste.
Also, GJJ clearly told the WoL that the keystones didn't determine the victor; he would pick a successor who was worthy. I kinda wish they had just stuck to that. Having the "good" rulers and "bad" rulers paired together for the cooking challenge felt like a bit of a cop-out, and needing to win back a stolen keystone, etc., just felt like missed opportunities.
Zoraal Ja and Bakool Ja Ja - I get what they were going for in the end with each of these: ZJ being the "impossible son of an impossible son, the weight of expectations causing him to shun those around him, and that loneliness twisting him"; BJJ being desperate to help his people, feeling major survivor's guilt over his own life costing so many others.
But neither of their narrative arcs is smooth, and in the first half, especially during the trial, they seem to be doing comically evil Wacky Races Dick Dastardly behavior with no regard for a continuous arc. BJJ releasing Valigarmanda was the icing on the cake for me. He could have done it in a reasonable way, weakening the seal in an attempt to sabotage the trial and then feeling guilt over what he did in desperation, but no, he walked up to the gatekeepers like 'no, I'm evil, I'm gonna destroy that now'. Any hints of ZJ sympathy come, like, during his trial and from the Wandering Minstrel, who even notes most will see him as a one-dimensional tyrant.
I also think they could have distinguished both of them more from just being warmongers; in the same way that Wuk Lamat and Koana are somewhat aligned but have different visions, it would have made more sense to me if BJJ had a different brand of conservatism, putting a stronger emphasis on defense and isolationism rather than world conquest. It would fit his background better too, as someone who wanted to protect his homeland.
--------------------------------------------------------------------
My most negative thoughts are really just about pacing. I would like not to have so many quests that are just running around talking to people without learning much of note. I know there are only so many things you can do ("stand in a purple cloud and kill 3 enemies" isn't great either), but at this point I'd honestly just take a shorter MSQ if it meant better story pacing.
I know the first half is meant to be like an abbreviated ARR, and I don't mind it being low stakes; I just wish it had a bit more polish.
I will also say I felt a lot more limited in my dialog options at times. Like, I don't need every box to have "I'll kill your god if I have to, maybe even if I don't", but there felt like a lot of instances where you had two ways to say the same sentiment. I like it when the game lets you have opinions, even if the opinions are objectively bad (you can straight up tell Noah the Allagans were visionaries).
A lot of that pacing issue was more about the actual story content than the quests, though; the first three zones could feel like extended allied society quests (solid enough ones), which wasn't bad for a 'fresh start', but Shaaloani is where things felt off. I liked the vibes, but frankly the quests left barely any impression at all.
(loved the trolley dig though)
------------------------------------------------------------------
Overall, like I said, I enjoyed my time. While it may not be a favorite expansion, it sets a good baseline for another ten years, and I hope they can refine it in the patch series and beyond.
There's a lot more I could probably say; I realize I didn't have a chance to touch on Erenville (very glad he tagged along) and Krile, but I feel I'll have more thoughts on both of their plotlines after the patch series.
P.S. Though I rolled my eyes at some of the running jokes, I genuinely got a chuckle out of Wuk Evu always freaking out and then snapping back to polite with "well, I won't overthink it then" and similar. Felt very Chocobo Racing GP.
P.P.S. Wood-carved owl nouliths are the best idea. A+ weapon.
Text
Looking to make meaningful strides in your academic journey?
PhdPRIMA Research Consultancy is here to support you every step of the way. With specialized services in PhD research assistance, thesis and dissertation writing, publication support, and sponsored research, we’re committed to empowering your success. Let’s collaborate to achieve your research goals and drive academic excellence!
🌐 Visit www.phdprima.com or contact us at [email protected].
For more information:
www.eduplexus.com
0120-4157857, +91 9686824082
#PhdResearch #DissertationWriting #CorporateConsulting #AcademicSupport #Careers #Innovation #JoinUs
#publication#research#phd research#dissertation#artificial intelligence#thesis#writingcitation#academic support#university#career
Text
final exam prep week v.v
we in the final week of the master's program; i am not really prepared coz brain.. hopefully, we will do good (:
edit: my thesis turned out to be pretty good
#MS AI#artificial intelligence#thesis#2023#august#blues#exam prep#not really tho#biomedical engineer#grad school#spotify#Spotify
Text
At 23 I wrote my master's thesis in a single 18-hour stint, hyper-focused, one draft, running on a half-forgotten can of Dr Pepper and two cake pops. Until AI can get on my level I refuse to acknowledge it as artificial OR intelligent, because I have personally, personally, bested it at BOTH without even meaning to.