#Advanced ChatGPT course
Text
The Synergy between ChatGPT and Instagram: Level Up with AI
In today’s digital age, social media platforms have become powerful tools for individuals and businesses to connect with their target audience. Instagram, with its visually driven content, offers a unique opportunity for individuals to monetize their presence and build a profitable online business. With the advancements in artificial intelligence (AI), specifically ChatGPT, and the automation…
#Advanced ChatGPT course#AI language model course#ChatGPT advanced techniques#ChatGPT affiliate program#ChatGPT AI course#ChatGPT API course#ChatGPT applications#ChatGPT best practices#ChatGPT case studies#ChatGPT certification#ChatGPT certification prep#ChatGPT collaboration#ChatGPT community#ChatGPT course#ChatGPT course access#ChatGPT course bundle#ChatGPT course comparison#ChatGPT course discounts#ChatGPT course library#ChatGPT course platform#ChatGPT course reviews#ChatGPT course sale#ChatGPT development#ChatGPT e-learning#ChatGPT education#ChatGPT exam tips#ChatGPT experts#ChatGPT for beginners#ChatGPT for business#ChatGPT for content creators
2 notes
Text
AI Showdown: Comparing ChatGPT-4 and Gemini AI for Your Needs
ChatGPT-4 vs. Gemini AI – Which AI Reigns Supreme?
Imagine having a conversation with an AI so sophisticated, it feels almost human. Now, imagine another AI that can solve complex problems and think deeply like a seasoned expert. Which one would you choose? Welcome to the future of artificial intelligence, where ChatGPT-4 and Gemini AI are leading the way. But which one is the right fit for you? Let’s dive in and find out!
What is ChatGPT-4?
ChatGPT-4, developed by OpenAI, is a cutting-edge AI model designed to understand and respond to human language with remarkable accuracy. Think of it as your chatty, knowledgeable friend who’s always ready to help with questions, offer advice, or just have a friendly conversation. It's like having an intelligent assistant that gets better at understanding you the more you interact with it.
What is Gemini AI?
Gemini AI, crafted by Google, is like a super-intelligent student that excels in reasoning and grasping complex concepts. It shines in its ability to tackle complex reasoning tasks and deep analysis, akin to having a highly intelligent assistant at your disposal. This AI is particularly adept at tasks that require deep analytical thinking, making it a powerful tool for solving intricate problems in fields like science, math, and philosophy.
Gemini vs. ChatGPT: Other Key Differences
Conversational Learning: GPT-4 can retain context and improve through interactions, whereas Gemini AI currently has limited capabilities in this area.
Draft Responses: Gemini AI offers multiple drafts for each query, while GPT-4 provides a single, refined response.
Editing Responses: Gemini AI allows users to edit responses post-submission, a feature GPT-4 lacks.
Real-time Internet Access: GPT-4's internet access is limited to its premium version, whereas Gemini AI provides real-time access as a standard feature.
Image-Based Responses: Gemini AI can search and respond with images, a feature now also available in the ChatGPT chatbot.
Text-to-Speech: Gemini AI includes text-to-speech capabilities, unlike ChatGPT.
In South Africa, key trends around ChatGPT-4 and Gemini AI include:
Adoption of AI Technology: South Africa is integrating advanced AI models like ChatGPT-4 and Gemini AI into various sectors, showcasing a growing interest in leveraging AI for business and educational purposes.
Google's Expansion: Google's introduction of Gemini AI through its Bard platform has made sophisticated AI technology more accessible in South Africa, with support for over 40 languages in more than 230 countries and territories.
Comparative Analysis: There is ongoing discourse and comparison between the capabilities of ChatGPT-4 and Gemini AI, highlighting their respective strengths in conversational AI and complex problem-solving.
Why You Need to Do This Course
Enrolling in the Mastering ChatGPT Course by UniAthena is your gateway to unlocking the full potential of AI. Whether you're a professional looking to enhance your skills, a student aiming to stay ahead of the curve, or simply an AI enthusiast, this course is designed for you.
Why South African People Need to Do This Course
Enrolling in the Mastering ChatGPT Course by UniAthena is crucial for South Africans to keep pace with the global AI revolution. The course equips learners with the skills to utilize AI tools effectively, enhancing productivity and innovation in various sectors such as business, education, and technology.
Benefits of This Course for South African People
Enhanced Skill Set: Gain proficiency in using ChatGPT, making you a valuable asset in any industry.
Increased Productivity: Automate tasks and streamline workflows with AI, boosting efficiency.
Competitive Edge: Stay ahead of the competition by mastering cutting-edge AI technology.
Career Advancement: Unlock new job opportunities and career paths in the growing field of AI.
Economic Growth: Equip yourself with skills that contribute to the digital transformation of South Africa's economy.
Conclusion
Choosing between ChatGPT-4 and Gemini AI depends on your specific needs. For conversational tasks, content generation, and everyday assistance, GPT-4 is your go-to. For deep analytical tasks and complex problem-solving, Gemini AI takes the crown.
Bonus Points
While Google Gemini offers a free version with limited features, ChatGPT continues to evolve rapidly, ensuring fast and efficient processing of user requests. Investing time in mastering these tools can significantly benefit your personal and professional growth.
So, are you ready to dive into the world of AI and elevate your career? Enroll in the Mastering ChatGPT Course by UniAthena today and start your journey towards becoming an AI expert!
#AI courses#ChatGPT-4#Gemini AI#AI for students#Mastering AI#AI career advancement#AI skills#AI technology integration#AI education#Future of AI
2 notes
Text
(taken from a post about AI)
speaking as someone who has had to grade virtually every kind of undergraduate assignment you can think of for the past six years (essays, labs, multiple choice tests, oral presentations, class participation, quizzes, field work assignments, etc), it is wild how out-of-touch-with-reality people's perceptions of university grading schemes are. they are a mass standardised measurement used to prove the legitimacy of your degree, not how much you've learned. Those things aren't completely unrelated to one another of course, but they are very different targets to meet. It is standard practice for professors to have a very clear idea of what the grade distributions for their classes are before each semester begins, and tenure-track assessments (at least some of the ones I've seen) are partially judged on a professor's classes' grade distributions - handing out too many A's is considered a bad thing because it inflates student GPAs relative to other departments, faculties, and universities, and makes classes "too easy," ie, reduces the legitimacy of the degree they earn. I have been instructed many times by professors to grade easier or harder throughout the term to meet those target averages, because those targets are the expected distribution of grades in a standardised educational setting. It is standard practice for teaching assistants to report their grade averages to one another to make sure grade distributions are consistent. there's a reason profs sometimes curve grades if the class tanks an assignment or test, and it's generally not because they're being nice!
this is why AI and chatgpt so quickly expanded into academia - it’s not because this new generation is the laziest, stupidest, most illiterate batch of teenagers the world has ever seen (what an original observation you’ve made there!), it’s because education has a mass standard data format that is very easily replicable by programs trained on, yanno, large volumes of data. And sure the essays generated by chatgpt are vacuous, uncompelling, and full of factual errors, but again, speaking as someone who has graded thousands of essays written by undergrads, that’s not exactly a new phenomenon lol
I think if you want to be productively angry at ChatGPT/AI usage in academia (I saw a recent post complaining that people were using it to write emails of all things, as if emails are some sacred form of communication), your anger needs to be directed at how easily automated many undergraduate assignments are. Or maybe your professors calculating in advance that the class average will be 72% is the single best way to run a university! Who knows. But part of the emotional stakes in this that I think are hard for people to admit to, much less let go of, is that AI reveals how rote, meaningless, and silly a lot of university education is - you are not a special little genius who is better than everyone else for having a Bachelor’s degree, you have succeeded in moving through standardised post-secondary education. This is part of the reason why disabled people are systematically barred from education, because disability accommodations require a break from this standardised format, and that means disabled people are framed as lazy cheaters who “get more time and help than everyone else.” If an AI can spit out a C+ undergraduate essay, that of course threatens your sense of superiority, and we can’t have that, can we?
3K notes
Text
Earn Money with Prompt Engineering | A Unique Way to Earn Money Online
Prompt Engineering: Designing Effective Prompts for Online Success | In the rapidly evolving landscape of online communication, Prompt Engineering has emerged as a crucial skill, paving the way for greater engagement, delivering valuable information, and even monetizing digital interactions. This article takes a deep look at the world of Prompt Engineering, its importance, accessible ways to learn…
#advanced prompt engineering#ai prompt engineering#ai prompt engineering certification#ai prompt engineering course#ai prompt engineering jobs#an information-theoretic approach to prompt engineering without ground truth labels#andrew ng prompt engineering#awesome prompt engineering#brex prompt engineering guide#chat gpt prompt engineering#chat gpt prompt engineering course#chat gpt prompt engineering jobs#chatgpt prompt engineering#chatgpt prompt engineering course#chatgpt prompt engineering for developers#chatgpt prompt engineering guide#clip prompt engineering#cohere prompt engineering#deep learning ai prompt engineering#deeplearning ai prompt engineering#deeplearning.ai prompt engineering#entry level prompt engineering jobs#free prompt engineering course#github copilot prompt engineering#github prompt engineering#github prompt engineering guide#gpt 3 prompt engineering#gpt prompt engineering#gpt-3 prompt engineering#gpt-4 prompt engineering
0 notes
Text
AI Reminder
Quick reminder folks since there's been a recent surge of AI fanfic shite. Here is some info from Earth.org on the environmental effects of ChatGPT and its fellow AI language models.
"ChatGPT, OpenAI's chatbot, consumes more than half a million kilowatt-hours of electricity each day, which is about 17,000 times more than the average US household. This is enough to power about 200 million requests, or nearly 180,000 US households. A single ChatGPT query uses about 2.9 watt-hours, which is almost 10 times more than a Google search, which uses about 0.3 watt-hours.
According to estimates, ChatGPT emits 8.4 tons of carbon dioxide per year, more than twice the amount that is emitted by an individual, which is 4 tons per year. Of course, the type of power source used to run these data centres affects the amount of emissions produced – with coal or natural gas-fired plants resulting in much higher emissions compared to solar, wind, or hydroelectric power – making exact figures difficult to provide.
A recent study by researchers at the University of California, Riverside, revealed the significant water footprint of AI models like ChatGPT-3 and 4. The study reports that Microsoft used approximately 700,000 litres of freshwater during GPT-3’s training in its data centres – that’s equivalent to the amount of water needed to produce 370 BMW cars or 320 Tesla vehicles."
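For what it's worth, those first numbers hang together. Here is a rough back-of-envelope check in Python; the per-household figure is an assumed average of roughly 10,500 kWh per year, which is my own assumption rather than something quoted in the Earth.org piece:

```python
# Rough sanity check of the quoted Earth.org estimates (assumed round figures, not measurements).
daily_energy_kwh = 500_000      # "more than half a million kilowatt-hours ... each day"
energy_per_query_wh = 2.9       # "a single ChatGPT query uses about 2.9 watt-hours"
google_search_wh = 0.3          # "a Google search ... uses about 0.3 watt-hours"
household_kwh_per_day = 10_500 / 365  # assumed average US household (~10,500 kWh/year)

queries_per_day = daily_energy_kwh * 1_000 / energy_per_query_wh
households_equivalent = daily_energy_kwh / household_kwh_per_day
ratio_vs_google = energy_per_query_wh / google_search_wh

print(f"~{queries_per_day / 1e6:.0f} million queries per day")    # ~172 million, close to the quoted ~200 million
print(f"~{households_equivalent:,.0f}x an average US household")  # ~17,000x, matching the quoted figure
print(f"~{ratio_vs_google:.1f}x the energy of a Google search")   # ~9.7x, i.e. 'almost 10 times more'
```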
Now I don't want to sit here and say that AI is the worst thing that has ever happened. It can be an important tool for advancing effectiveness in technology! However, there are quite a few drawbacks, especially for the environment, since we have not yet figured out how to mitigate these issues when AI is not used wisely. Likewise, AI is not meant to do the work for you; it's meant to assist. For example, having it spell check your work? Sure, why not! Having it write your work and fics for you? You are stealing from others who worked hard to produce beautiful work.
Thank you for coming to my Cyn Talk. I love you all!
235 notes
Text
Clarification: Generative AI does not equal all AI
💭 "Artificial Intelligence"
AI is machine learning, deep learning, natural language processing, and more that I'm not smart enough to know. It can be extremely useful in many different fields and technologies. One of my information & emergency management courses described the usage of AI as being a "human centaur". Part human part machine; meaning AI can assist in all the things we already do and supplement our work by doing what we can't.
💭 Examples of AI Benefits
AI can help advance things in all sorts of fields; here are some examples:
Emergency Healthcare & Disaster Risk X
Disaster Response X
Crisis Resilience Management X
Medical Imaging Technology X
Commercial Flying X
Air Traffic Control X
Railroad Transportation X
Ship Transportation X
Geology X
Water Conservation X
Can AI technology be used maliciously? Yeh. That's a matter of developing ethics and working to teach people how to see red flags, just like people see red flags in already existing technology.
AI isn't evil. It's not the insane sentient shit that wants to kill us in movies. And it is not synonymous with generative AI.
💭 Generative AI
Generative AI does use these technologies, but it uses them unethically. It scrapes data from all art, all writing, all videos, all games, all audio, anything its developers give it access to WITHOUT PERMISSION, which is basically free rein over the internet. There are sometimes certain restrictions: generative AI engineers—who CAN choose to exclude things—may exclude extremist sites or explicit materials, usually using blocklists.
AI can create images of real individuals without permission, including revenge porn. It can create music using someone's voice without their permission and then sell that music. It can spread disinformation faster than it can be fact checked, and create false evidence that our court systems are not ready to handle.
AI bros eat it up without question: "it makes art more accessible", "it'll make entertainment production cheaper", "it's the future, evolve!!!"
💭 AI is not similar to human thinking
When faced with the argument "a human didn't make it", the comeback is "AI learns based on already existing information, which is exactly what humans do when producing art! We ALSO learn from others and see thousands of other artworks."
Let's make something clear: generative AI isn't making anything original. It is true that human beings process all the information we come across. We observe that information, learn from it, process it, then ADD our own understanding of the world, our unique lived experiences. Through that information collection, understanding, and our own personalities, we then create new original things.
💭 Generative AI doesn't create things: it mimics things
Take an analogy:
Consider an infant unable to talk but old enough to engage with their caregivers, somewhere between 6 and 8 months old.
Mom: a bird flaps its wings to fly!!! *makes a flapping motion with arm and hands*
Infant: *giggles and makes a flapping motion with arms and hands*
The infant does not understand what a bird is, what wings are, or the concept of flight. But she still fully mimicked the flapping of the hands and arms because her mother did it first to show her. She doesn't cognitively understand what on earth any of it means, but she was still able to do it.
In the same way, generative AI is the infant that copies what humans have done—mimicry. It does so without understanding anything about the works it has stolen.
It's not original, it doesn't have a world view, it doesn't understand the emotions that go into the different work it is stealing, its creations have no meaning, and it doesn't have any motivation to create things; it only does so because it was told to.
Why read a book someone couldn't even be bothered to write?
Related videos I find worth a watch
ChatGPT's Huge Problem by Kyle Hill (we don't understand how AI works)
Criticism of Shadiversity's "AI Love Letter" by DeviantRahll
AI Is Ruining the Internet by Drew Gooden
AI vs The Law by Legal Eagle (AI & US Copyright)
AI Voices by Tyler Chou (Short, flash warning)
Dead Internet Theory by Kyle Hill
-Dyslexia, not audio proof read-
#ai#anti ai#generative ai#art#writing#ai writing#wrote 95% of this prior to brain stopping sky rocketing#chatgpt#machine learning#youtube#technology#artificial intelligence#people complain about us being#luddite#but nah i dont find mimicking to be real creations#ai isnt the problem#ai is going to develop period#its going to be used period#doesn't mean we need to normalize and accept generative ai
58 notes
Note
ELINOR SOS
I am the student anon from a while ago who was concerned because my prof made an oblique comment about "knowing when people use AI on their assignments"
I often collaborate with a friend in the course on homework assignments (something that is encouraged so long as you name your collaborators when you turn it in) and i found out recently that she DOES use chatgpt sometimes. we'll each work on papers separately and then compare ideas and make edits if either of us included something the other missed. i never copy her words but i'll incorporate her ideas if i feel they're useful.
this brings me to 3 questions:
1) does the prof know she uses AI, and does the prof by extension believe that i do, since i name her as a collaborator?
2) is there a way for me to kindly tell my friend i think this is ludicrous behavior and cut it the fuck out
3) is there a way for me to distance myself from my friend in the eyes of the prof without seeming like a total snitch or hardass or what have you
thanks in advance !!!!!!
prev anon:: I MISSPELLED YOUR NAME I AM SORRY elanor elanor elanor so sorry
LMAO you're fine, no worries!
Hmm, okay, so some of this is outside my wheelhouse as a lecturer on the other side of the world - this is not to say I'm not going to share my opinions regardless, but just a reminder that I am not, for example, an authority on taking friends to task for using ChatGPT
Anyway the easiest (and most advisable) answer to all of this is to stop collaborating. Up until this point, you're fine, because you simply didn't know - if you get accused of anything you have plausible deniability, because you literally didn't know. It's worth pointing out, though, that you would probably already have been given a formal warning or taken to an academic misconduct board by now if you were suspected on past work - at least, that would be the case over here. We don't hang about if we have suspicions.
Whereas, from this point onwards, if you turn in a collaborated piece and she then gets accused of plagiarism, you are now in a position of having willingly collaborated with a known plagiarist, which opens you up to questions like "So you knew there was a chance that her inputs could have been Chat-generated and you used them in your own work anyway?" and that's a lot harder to defend against.
As to the rest of it, though:
No, probably not. She'd likely have been called on it by now, as would you.
Hmm. I think I personally would approach this with "I'm so sorry, my anxiety is through the roof and I just don't feel comfortable collaborating with you because you use ChatGPT. My brain is now irrationally terrified that it's somehow obvious to the professor and imploding from the pressure." And then if she wants to get into it further, you can discuss the issues with it. HOWEVER mileage can and will vary on that strategy - that's how I would phrase it to avoid her feeling judged, see, but depending on how good a friend she is and a whole bunch of other factors, you might prefer to go BITCH WHY DON'T YOU JUST MARRY THE FUCKING ROBOT IF YOU LOVE IT SO MUCH and block her number. Or, you know. Something along the scale.
Just stop collaborating. Nothing more needed.
The other thing I will say is that I think you're probably assuming more surveillance and oversight from your professor than actually exists. It IS obvious when you find a Chat-generated section, but I can't help but wonder if telling a class "We know that some of you are using it, btw. We won't say who but we can tell. So stop doing that." is actually a lie designed to scare compliance before it becomes a problem. Like. That feels like a lie to me. That feels like "Say it now and then they won't try it." Because if they actually knew, there would be formal proceedings, not oblique little warnings.
Anyway! I hope this is useful.
110 notes
Text
Anon wrote: hello! thank you for running this blog. i hope your vacation was well-spent!
i am an enfp in the third year of my engineering degree. i had initially wanted to do literature and become an author. however, due to the job security associated with this field, my parents got me to do computer science, specialising in artificial intelligence. i did think it was the end of my life at the time, but eventually convinced myself otherwise. after all, i could still continue reading and writing as hobbies.
now, three years in, i am having the same thoughts again. i've been feeling disillusioned from the whole gen-ai thing due to art theft issues and people using it to bypass - dare i say, outsource - creative work. also, the environmental impact of this technology is astounding. yet, every instructor tells us to use ai to get information that could easily be looked up in textbooks or google. what makes it worse is that i recently lost an essay competition to a guy who i know for a fact used chatgpt.
i can't help feeling that by working in this industry, i am becoming a part of the problem. at the same time, i feel like a conservative old person who is rejecting modern technology and griping about 'the good old days'.
another thing is that college work is just so all-consuming and tiring that i've barely read or written anything non-academic in the past few years. quitting my job and becoming a writer a few years down the road is seeming more and more like a doomed possibility.
i've been trying to do what i can at my level. i write articles about ethical considerations in ai for the college newsletter. i am in a technical events club, and am planning out an artificial intelligence introductory workshop for juniors where i will include these topics, if approved by the superiors.
from what i've read on your blog, it doesn't seem like you have a very high opinion of ai, either, but i've only seen you address it in terms of writing. i'd like to know, are there any ai applications that you find beneficial? i think that now that i am here, i could try to make a difference by working on projects that actually help people, rather than use some chatgpt api to do the same things, repackaged. i just felt like i need the perspective of someone who thinks differently than all those around me. not in a 'feed my tunnel-vision' way, but in a 'tell me i'm not stupid' way.
----------------------
It's kind of interesting (in the "isn't life whacky?" sort of way) you chose the one field that has the potential to decimate the field that you actually wanted to be in. I certainly understand your inner conflict and I'll give you my personal views, but I don't know how much they will help your decision making.
I'm of course concerned about the ramifications on writing not just because I'm a writer but because, from the perspective of education and personal growth, I understand the enormous value of writing skills. Learning to write analytically is challenging. I've witnessed many people meet that challenge bravely, and in the process, they became much more intelligent and thoughtful human beings, better able to contribute positively to society. So, it pains me to see the attitude of "don't have to learn it cuz the machine does it". However, writing doesn't encompass my full view on AI.
I wouldn't necessarily stereotype people who are against new technology as "old and conservative", though some of them are. My parents taught me to be an early adopter of new tech, but it doesn't mean I don't have reservations about it. I think, psychologically, the main reason people resist is because of the real threat it poses. Historically, we like to gloss over the real human suffering that results from technological advancement. But it is a reasonable and legitimate response to resist something that threatens your livelihood and even your very existence.
For example, it is already difficult enough to make a living in the arts, and AI just might make it impossible. Even if you do come up with something genuinely creative and valuable, how are you going to make a living with it? As soon as creative products are digitized, they just get scraped up, regurgitated, and disseminated to the masses with no credit or compensation given to the original creator. It's cannibalism. Cannibalism isn't sustainable.
I wonder if people can seriously imagine a society where human creativity in the arts has been made obsolete and people only have exposure to AI creation. There are plenty of people who don't fully grasp the value of human creativity, so they wouldn't mind it, but I would personally consider it to be a kind of hell.
I occasionally mention that my true passion is researching "meaning" and how people come to imbue their life with a sense of meaning. Creativity has a major role to play in 1) almost everything that makes life/living feel worthwhile, 2) generating a culture that is worth honoring and preserving, and 3) building a society that is worthy of devoting our efforts to.
Living in a capitalist society that treats people as mere tools of productivity and treats education as a mere means to a paycheck already robs us of so much meaning. In many ways, AI is a logical result of that mindset, of trying to "extract" whatever value humans have left to offer, until we are nothing but empty shells.
I don't think it's a coincidence that AI comes out of a society that devalues humanity to the point where a troubling portion of the population suffers marginalization, mental disorder, and/or feels existentially empty. Many of the arguments I've heard from AI proponents about how it can improve life sound to me like they're actually going to accelerate spiritual starvation.
Existential concerns are serious enough, before we even get to the environmental concerns. For me, environment is the biggest reason to be suspicious of AI and its true cost. I think too many people are unaware of the environmental impact of computing and networking in general, let alone running AI systems. I recently read about how much energy it takes to store all the forgotten chats, memes, and posts on social media. AI ramps up carbon emissions dramatically and wastes an already dwindling supply of fresh water.
Can we really afford a mass experiment with AI at a time when we are already hurtling toward climate catastrophe? When you think about how much AI is used for trivial entertainment or pointless busywork, it doesn't seem worth the environmental cost. I care about this enough that I try to reduce my digital footprint. But I'm just one person and most of the population is trending the other way.
With respect to integrating AI into personal life or everyday living, I struggle to see the value, often because those who might benefit the most are the ones who don't have access. Yes, I've seen some people have success with using AI to plan and organize, but I also always secretly wonder at how their life got to the point of needing that much outside help. Sure, AI may help with certain disadvantages such as learning or physical disabilities, but this segment of the population is usually the last to reap the benefits of technology.
More often than not, I see people using AI to lie, cheat, steal, and protect their own privilege. It's particularly sad for me to see people lying to themselves, e.g., believing that they're smart for using AI when they're actually making themselves stupider, or thinking that an AI companion can replace real human relationship.
I continue to believe that releasing AI into the wild, without developing proper safeguards, was the biggest mistake made so far. The revolts at OpenAI prove, once again, that companies cannot be trusted to regulate themselves. Tech companies need a constant stream of data to feed the beast and they're willing to sacrifice our well-being to do it. It seems the only thing we can do as individuals is stop offering up our data, but that's not going to happen en masse.
Even though you're aware of these issues, I want to mention them for those who aren't, and for the sake of emphasizing just how important it is to regulate AI and limit its use to the things that are most likely to produce a benefit to humanity, in terms of actually improving quality of human life in concrete terms.
In my opinion, the most worthwhile place to use AI is medicine and medical research. For example, aggregating and analyzing information for doctors, assisting surgeons with difficult procedures, and coming up with new possibilities for vaccines, treatments, and cures is where I'd like to see AI shine. I'd also love to see AI applied to:
scientific research, to help scientists sort, manage, and process huge amounts of information
educational resources, to help learners find quality information more efficiently, rather than feeding them misinformation
engineering and design, to build more sustainable infrastructure
space exploration, to find better ways of traveling through space or surviving on other planets
statistical analysis, to help policymakers take a more objective look at whether solutions are actually working as intended, as opposed to being blinded by wishful thinking, bias, hubris, or ideology (I recognize this point is controversial since AI can be biased as well)
Even though you work in the field, you're still only one person, so you don't have that much more power than anyone else to change its direction. There's no putting the worms back in the can at this point. I agree with you that, for the sake of your well-being, staying in the field means choosing your work carefully. However, if you want to work for an organization that doesn't sacrifice people at the altar of profit, it might be slim pickings and the pay might not be great. Staying true to your values can be costly too.
21 notes
Text
Studying a week in advance for an exam: how do you do it?
If you’ve searched for tips and tricks on how to do well on your exams, you’ve likely heard this tip before. It’s a really great piece of advice, and is the golden rule in study strategies, yet making the most out of this requires some good time management. So a question remains: where do you start?
Step One: Make Your Flashcards
The first portion of your time should be dedicated to making flashcards. It’s important to emphasize that you SHOULDN’T be using flashcards OTHER PEOPLE made. Making flashcards for yourself counts as an active form of studying (unless you are copying and pasting random stuff from your textbook with no thought put into it). For me, exams usually cover, like, 4-6 chapters, so what I do is make 1-2 sets of flashcards a day (depending on how many chapters the exam is going over). This should take up about 4-6 days of the week you have to study before the exam, but realistically you should focus on getting all of your flashcards done at the 4 or 5 day mark.
If you know in advance that your class is rigorous and dedicates a lot of time to making flashcards, I suggest slowly making these flashcards as the semester goes on, not in the week before an exam. Getting a head start here would be extremely beneficial, but only necessary for rigorous classes.
Step Two: Make Detailed Notes From Scratch (Optional)
This step is only recommended if you finish your flashcards pretty early. For example, for my cognitive processes class, I started making flashcards a week before the original exam date. However, since we were behind, the professor moved the exam date 2 days later. This meant that I had some left over time that could be used going into more depth on the course material.
When making your own detailed notes from scratch, write them as if you were making a YouTube video teaching someone else about the topic. Imagine yourself making a script for this video; you'll then find yourself describing pretty complex topics in simple ways for your invisible viewer. Also, by doing this, you could find things you are specifically confused about for topics you once thought you had mastered. I really like this method, but it's very time consuming. Use this method in moderation, because knowing and remembering the basics should be more of a priority than going into complete depth.
Step 3: Study the Flashcards (duh)
Studying the flashcards should be the last step. Start studying them the day before the test, once you have gone over the material multiple times. Use all the time before studying the flashcards to make sure you truly know the concepts you need to know. If needed, simplify definitions using ChatGPT or something. How you study is personal to you, but make sure it's something that has worked for you in the past.
Conclusion
That’s… about it! This is how I personally structure the week before an exam, which means that you may change, add, or remove certain steps to better fit your current situation. It’s completely normal. However, this blueprint has helped me do great on exams, and hopefully it will help you as well!
#college#studyblr#university#studyspiration#studying#study techniques#study tips#study motivation#study blog#study notes#Spotify
15 notes
Text
I really hope you will find this interesting and maybe even useful.
APS is a Custom GPT I made. And it figures out old photos, sometimes pretty obscure ones.
I've been testing it for some time now, and while it is far from perfect, it appears to be quite a useful device, I would say, surprisingly so. And I would really appreciate it if you give it a try — and maybe even provide some feedback
[Only ChatGPT Plus users have access to Custom GPTs for now; hopefully this will change soon].
Case study
I asked it to "tell me all there is to know about this image:
It was found on Flickr with the caption "Pardon of Folgoet 1923", which, frankly speaking, made no sense to me — except for the year, of course. Now I know that Pardon is a religious ceremony, rather specific to Bretagne, where Folgoët is located.
What follows is a rather detailed report of APS findings — and while being maybe a bit too general, it was also accurate, quick and quite instructive indeed. Here it is:
This photograph provides a fascinating glimpse into the past, capturing a moment that speaks to the era's cultural, societal, and technological aspects. Let's analyze the various elements:
Architecture and Location: The building in the background is made of stone, indicating that this photo could be from a region where stone was a common construction material, possibly Europe. The style of the shutters and the architecture suggest a setting in a small town or village.
Signage: The sign reading "BYRRH" suggests a location where French is spoken, as Byrrh is a French aperitif that has been produced since the 19th century. This could place the photograph somewhere in France or in a French-influenced area.
Fashion: The attire of the individuals is very indicative of the time period. The women are wearing long dresses and hats typical of the late 19th to early 20th centuries. The boys are dressed in what appears to be sailor suits, a popular style for children from the mid-19th century onwards, as per the knowledge sourced from the documents provided.
Children's Clothing: The boys are dressed similarly, indicating a uniform, which could imply they are attending a special event or are part of a school group. This was a common practice in the late 19th and early 20th centuries.
Flags: The flags are difficult to identify clearly, but they seem to be fluttering in celebration or a public event, which could help date the photograph if the event is known.
Street Scene: The cobblestone street and the relaxed posture of the people suggest a peaceful day-to-day moment, rather than a staged photograph.
Photographic Style and Quality: The clarity of the image, along with the candid nature of the shot, suggests it was taken at a time when street photography became more feasible due to advancements in camera technology. This could be in the late 19th or early 20th century, aligning with the Kodak Brownie introduction in 1900, making photography more accessible to the masses.
Umbrellas and Sunlight: The presence of umbrellas could indicate either a sunny day or an impending rain, but the shadows suggest a bright day, with the sun high as indicated by the short shadows.
From these observations, it seems likely that the photograph was taken in France or a French-influenced region during a public event or gathering, likely in the late 19th or early 20th century. The combination of the children's uniforms, the women's fashion, the stone architecture, and the photographic quality all contribute to this conclusion. However, without more context or specific historical knowledge, pinpointing an exact date or event is challenging.
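For anyone without Plus access who still wants to poke at the idea, below is a minimal sketch of the same kind of image question asked through the OpenAI API rather than a Custom GPT. This is only an illustration, not how APS itself is built; the model name, image URL, and prompt are placeholders, and it assumes the current openai Python package with an API key set in the environment.

```python
# Minimal sketch: ask a vision-capable model about an old photograph via the OpenAI API.
# Assumes `pip install openai`, OPENAI_API_KEY set, and a publicly reachable image URL.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any vision-capable model should do
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Tell me all there is to know about this image: "
                            "where and when might it have been taken, and what clues point to that?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/pardon-of-folgoet-1923.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

A Custom GPT like APS layers its own instructions (and any uploaded reference material) on top of a call like this, which is what gives it its photo-dating focus.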
#AI#vintage style#retro style#Vintage-Stil#生成的#生成艺术#人工智能艺术#集体#LLM#chat gpt#ai tools#DALL-E#photo dating#photo attribution#obscure photos#antique photos#history#Bretagne#histoire#história#Geschichte#歴史#historical#historisch#histórico#historique#vintage photos
30 notes
Text
It's become an annual routine at this point (I'm so sorry for writing a lot.. I JUST HOPE THEY JUST GET THIS WHOLE STUFF OVER WITH) here's my personal take? thoughts about hikaai and aqukana too this time.. I went on about it AGAIN in my notes, I'll share it here too~ It's been fun sharing!
I feel like I'm aware of how the outcome would be for those two ships. I just have no idea how they would get there and how long they'd beat around the bush with it and that's what makes me nervous... I'm sure, but at the same time, I'm unsure, ambiguity is what makes me think and write, so I hope they just.. provide the goods and the answers soon. Of course, I'd love it if they handle it in depth and in a delicate way and make it a piece worth remembering... but it feels like having taken an exam and not having your results turned in yet, it feels really tense... well, it'll come sooner or later. If I'm wrong, then I'll just accept it, what more can I do? :)
(below is originally written in different language translated through chatgpt 4.0 'n read over-edited.. thanks chatgpt.. ;v; can't write the same thing twice, like I always say)
This is just my personal thoughts…
I always follow canon storylines when it comes to shipping, so I ship based on the story, not the other way around.
When I say I ship a couple, I never entertain thoughts like, "Oh, they don’t have feelings for each other" or "They must be enemies." I never think about it that way. I tend to follow the really obvious and mutual ones so I never worry about these things.
This series really did something… Aqua's parents are probably at their lowest point of any couple I've ever shipped. With the plot being... him being accused of causing her death and their son trying to get revenge on his dad, yeah... it's hard to get worse than that...
However, when it comes to interpretations about this couple, I’m right. I’m sure I’m right.
The reason I keep bringing this up isn’t because I favor Kamiki or anything. (Maybe I kind of do now, but that only came from AFTER I read Ai's feelings towards him.)
It’s because it really bothers me. To me, this character doesn’t seem like the type of person who could take such strong actions. I know I should wait until the end to make a judgment, but it’s been bugging me because it feels like he’s getting beaten down both in and out of the story.
If it turns out he’s actually a psychopathic criminal who masterminded Ai’s death, sure, I’ll call him out then, but…
The more the story unravels, the more I feel like it’s the opposite, and that’s why I’m so concerned. Something about it just doesn’t feel right, and it makes me uneasy. I don't feel like he's being treated the way he should be.
At this point, I’m even thinking… When Yura died, Kamiki met with her in advance and told her to watch her step, right? If he was going to kill her, why would he give her information that would interfere with his plan? Could it be that he actually didn’t want her to die?
Is he cursed or something? Like, "Everyone with white starry eyes(double) around you will die, try to stop it if you can." Maybe he’s under a curse? I can’t even tell if he’s the type to kill anyone. (By the way, if that’s the case, Aqua and Ruby are both in danger. Both of them have white double starry eyes now…)
I was thinking that maybe after Ai died, he was desperate and tried all sorts of things. But at this point, I also feel like maybe he doesn't have it in him to be cruel enough to actually harm anyone. Maybe he didn’t do much and just lived quietly. Quite a stretch, isn't it? but the way he behaves seems to be so mild.
Looking at his expressions and personality, Kamiki seems more similar to Ruby than Aqua. Those two really resemble each other. (It made me realize how great the author is at crafting characters—at first glance, it seems like Aqua takes after his dad, but the more you look, Ruby resembles him too. Even while Ruby is the spitting image of her mom.) Their smiles are really similar—Kamiki and Ruby's. But to me, Kamiki even seems gentler than Ruby. Ruby has more backbone. She can lash out, and she's pretty stern and strong and determined in her own way. Not saying I know enough about Kamiki to decide what he can do, but I never see him trying to blame anyone. Or fighting back for his rights. Would that be because of the past he's gone through? It's kind of sad how he just lets people do whatever they want with him.
Kamiki + Ai = twins, right? Ai can be sharp and solid. She has the backbone and the guts. The twins take after Ai more in terms of strength of character, I think, rather than their father.
So, if Kamiki were to go through a dark phase, I feel like he’d act more like Ruby than Aqua. I mean, Ruby did have her phase, but she still didn't cross the lines that would make her irredeemable.
I’m starting to think that maybe Kamiki really “didn’t do anything at all.”
And doesn’t that fit with Ai’s message, to have asked to help him?
This is a bold theory, and I’m not saying he must be completely innocent. I think the chances are low. Still, I hope you take a closer look at this character. He seems to be the type who would feel excessive guilt over things he didn’t even do, blaming himself and having extremely low self-esteem. That’s why, when people accuse him of being a criminal, he'd respond something like, "Yeah, it’s all my fault, I deserve to die." I think he’s trapped in that kind of vicious cycle of misunderstanding because he accepts those kinds of accusations out of self-hate. Judging by the way he speaks, he seems to be deeply self-loathing, while always speaking very softly.
I like Ai. I ship this couple because Ai liked him… The dad is handsome, but he wasn’t a major character. Up until the flashback- movie arcs, I thought, "He had a tragic past, but if he hurt Ai, there’s no room for excuses." But now, it seems like he really didn’t do anything.
You know that analysis I did yesterday? The one about the panels in the manga that recur throughout the series? If those specific panels with the black background with the starry lights have to do with feelings of love the way I speculate it to be,
It'd mean that this character genuinely loves and cares for Aqua as his child. He loved Ai too. It’s mutual—they both fell for each other at first sight. Realizing this made me feel a bit more at ease.
Even though the plot is progressing frustratingly, separate from that, I already know the final outcome although I can't predict all the little details.
Ai won’t regret loving him. Instead, the story will go, "Oh, he was worth loving after all."
And Huh, isn’t this an official couple though? They even had two kids together. But if you look at the fan art on Pixiv, there are only barely like 30 posts under their tag. Did I tag it wrong or something? Is there a different related tag?
Well, it’s good for me. I uploaded six out of those 30 posts. I wish I can draw better, I beat them before everyone else did!
I’m going to be right. Just wait and see.
I’m sure many of my interpretations will turn out to be right. I just hope it doesn’t drag on for too long, because it’s stressing me out. Seriously! It’s hard to read, and since I draw a lot of fan art, people keep sending me messages about how "Hikaru is crazy." (Yeah… I get it. I see how it can happen too...) I just want a conclusion on that. If I’m wrong, fine, but I’m telling you, till then, I’m right. Doesn't make sense, does it? but I just know that it's how it will turn out to be.
Oh, while we’re at it, let’s talk about AquaKana! I can’t say for sure that they’ll end up together as a cute couple, but there’s one thing I’m certain about.
Aqua will confess to Kana. That’s bound to happen. It’s so ridiculous that he kissed Akane and even let Ruby kiss him, but hasn’t done anything with the girl he actually likes… What is he even doing? Dude, at least your dad only had eyes for one person. (I wonder what Akane was talking about with that analysis about him having dated someone three years ago btw? What was that about? Did he make a deal with a goddess who became human like Tsukuyomi? I can’t imagine him with anyone other than Ai. But Aqua? Yeah, I can see it, he’s already done that. Ugh. Hikaru should really treat Ai well. She gave so much for him, really. He probably did and will, don't let her down, I'm watching. What are you going to do for her? There must have been a reason for 155? I really am speculating he wants to give his life for her because SOMEONE's trying to do something like that in the song and it should be touched sometime but if it isn't that, sure...I don't want him to die, but I also want the songs to be relevant)
So, I think Aqua owes Kana a kiss, at least. And he should call her by her name, too. Maybe they’re saving that for a big reveal at the end? After all, the author used to write romance stories. In most cases, the heroine at the end is the one the protagonist ends up with, and Kana’s storyline still has unresolved points.
This isn’t just me saying, "I love AquaKana, I want them to end up together!" It’s based on the narrative flow. They’ve likely been holding off on it to lead the story that way. Aqua’s a mess because of his dad, so he’s got no energy left to focus on anything else. This manga seriously makes my heart race right now, I feel like I can’t take it anymore. I need a break...
If Aqua ends up with someone, it’ll be Kana. It’s so obvious that I’m not even worried about it. I mentioned before that after seeing Aqua’s parents, looking at Aqua and Kana makes me feel so relieved because they’re so bright and cheerful by comparison. At least they aren't a couple where one side is being accused of having killed another. This makes me wonder just WHAT i've been reading lately.
That said, considering Kamiki’s misfortune and all the foreshadowing, I think something might happen at the concert, putting Kana in danger. Aqua will probably end up at her graduation concert, and I think he’ll witness her final stage. I’m not sure how the story will lead into that, but it seems like a natural progression, satisfying the plot and the audience.
#aquakana#aqukana#hikaai#oshi no ko#oshi no ko spoilers#long post#oshi no theories#spoilers#oh and I've been playing episode aigis... THERE'S SO MUCH GRINDING!!; I wish the dungeons were shorter..
17 notes
Text
Here are some numbers to put the history of video games in perspective. Video games are approximately 50 years old as of the making of this post. Early computer games were being written here and there for mainframes throughout the 60s, but the first commercial games (and with them, access to video games by the general public) begin to release around 1970. Pong released in 1972. The first home consoles with interchangeable games start to release around the mid-70s.
Approximately one decade after that, Super Mario Brothers is released (1985).
Approximately one decade after that, Super Mario 64 is released (1996).
Approximately one decade after that, the Xbox 360, Wii, and PS3 release (2005~2006).
Approximately one decade after that was the PS4 and Xbox One and so on (2013~2014).
Approximately one decade after that is today.
What stands out here is that the early history of video games was remarkably fast. In the same time it took to get from the Xbox 360 to today (during which, in my estimation, video games have not changed all that much in their general character) we went from the Atari 2600 to Super Mario 64. That is a stunning amount of change in the nature of the medium. Of course that change was driven by the rapid advancement of computer technology over the same period, which allowed for an ever-growing conception of what a video game could be; we haven't seen nearly the same level of fundamental change in computation technology in the time since.
In fact, it was hard for me to even pick a really satisfying decade milestone within the last 20 years; the beginning of the seventh console generation (Xbox 360, PS3, and Wii) is the last time the underlying technology of games seemed to me to change in a big way, that had comparably big creative implications. Since then it's seemed that various aspects of the technology have improved on the margin, but ultimately the creative limitations of the medium are about the same. Maybe you could even pinpoint that shift around the beginning of the sixth generation of consoles instead (I am of course just using console generations as waypoints; parallel developments were of course taking place on PC).
Well, anyway, I just think it's interesting. I think those 20 or 30 formative years of video games have kind of cemented themselves in the culture as "the way video games are", like we can expect huge technological leaps that reshape the capabilities of the medium every few years. But I don't think that's actually the case, I think those formative years probably truly were exceptional and are not to be repeated. Of course, I could be wrong. In particular, I think AI language models like ChatGPT might open the way for reliable dynamically generated dialogue in games, akin to the dynamically generated worlds in e.g. Minecraft. That could open new directions for the medium. But still, it's hard for me to imagine the shifts will be as monumental and as regular as that formative period saw. Well, I don't really know. But it's interesting to think about.
43 notes
Text
Here is the thing that bothers me, as someone who works in tech, about the whole ChatGPT explosion.
The thing that bothers me is that ChatGPT, from a purely abstract point of view, is really fucking cool.
Some of the things it can produce are fucking wild to me; it blows my mind that a piece of technology is able to produce such detailed, varied responses that on the whole fit the prompts they are given. It blows my mind that it has come so far so fast. It is, on an abstract level, SO FUCKING COOL that a computer can make the advanced leaps of logic (because that's all it is, very complex programmed logic, not intelligence in any human sense) required to produce output "in the style of Jane Austen" or "about the care and feeding of prawns" or "in the form of a limerick" or whatever the hell else people dream up for it to do. And fast, too! It's incredible on a technical level, and if it existed in a vacuum I would be so excited to watch it unfold and tinker with it all damn day.
The problem, as it so often is, is that cool stuff does not exist in a vacuum. In this case, it is a computer that (despite the moniker of "artificial intelligence") has no emotional awareness or ethical reasoning capabilities, being used by the whole great tide of humanity, a force that is notoriously complex, notoriously flawed, and more so in bulk.
-----
During my first experiment with a proper ChatGPT interface, I asked it (because I am currently obsessed with GW2) if it could explain HAM tanking to me in an instructional manner. It wrote me a long explanatory chunk of text, explaining that HAM stood for "Heavy Armor Masteries" and telling me how I should go about training and preparing a character with them. It was a very authoritative-sounding discussion, with lots of bullet points and even an occasional wiki link IIRC.
The problem of course ("of course", although the GW2 folks who follow me have already spotted it) is that the whole explanation was nonsense. HAM in GW2 player parlance stands for "Heal Alacrity Mechanist". As near as I've been able to discover, "Heavy Armor Masteries" aren't even a thing, in GW2 or anywhere else - although both "Heavy Armor" and "Masteries" are independent concepts in the game.
Fundamentally, I thought, this is VERY bad. People have started relying on ChatGPT for answers to their questions. People are susceptible to authoritative-sounding answers like this. People under the right circumstances would have no reason not to take this as truth when it is not.
But at the same time... how wild, how cool, is it that, given the prompt "HAM tanking" and having no idea what it was except that it involves GW2, the parser was able to formulate a plausible-sounding acronym expansion out of whole cloth? That's extraordinary! If you don't think that's the tightest shit, get out of my face.
----
The problem, I think, is ultimately twofold: capitalism and phrasing.
The phrasing part is simple. Why do we call this "artificial intelligence"? It's a misnomer - there is no intelligence behind the results from ChatGPT. It is ultimately a VERY advanced and complicated search engine, using a vast quantity of source data to calculate an output from an input. Referring to that as "intelligence" gives it credit for an agency, an ability to judge whether its output is appropriate, that it simply does not possess. And given how quickly people are coming to rely on it as a source of truth, that's... irresponsible at best.
The capitalism part...
You hear further stories of the abuses of ChatGPT every day. People, human people with creative minds and things to say and contribute, being squeezed out of roles in favor of a ChatGPT implementation that can sufficiently ("sufficiently" by corporate standards) imitate soul without possessing it. This is not acceptable; the promise of technology is to facilitate the capabilities and happiness of humanity, not to replace it. Companies see the ability to expand their profit margins at the expense of the quality of their output and the humanity of it. They absorb and regurgitate in lesser form the existing work of creators who often didn't consent to contribute to such a system anyway.
Consequently, the more I hear about AI lately, the more hopeful I am that the thing does go bankrupt and collapse, that the ruling goes through where they have to obliterate their data stores and start over from scratch. I think "AI" as a concept needs to be taken away from us until we are responsible enough to use it.
But goddamn. I would love to live in a world where we could just marvel at it, at the things it is able to do *well* and the elegant beauty even of its mistakes.
#bjk talks#ChatGPT#technology#AI#artificial intelligence#just thinking out loud here really don't mind me
22 notes
·
View notes
Text
Violet_M00n is available
Violet_M00n: Heyyy sorry for the delay! Got caught up with work stuff...
Bookwyrm1982: literally no time has passed.
Violet_M00n: Right. Right. Habit, I guess.
Violet_M00n: Anyway, what else did you want to know about the future?
Bookwyrm1982: Well you said you had AI, right?
Violet_M00n: Well... Yes and no. We have programs people *call* "AI", but they're really just advanced machine learning. They can't actually think or anything, but they can put together a surprisingly human sounding sentence, and draw things that could at first be mistaken for art.
Violet_M00n: But of course it's awful. The results are full of factual errors or have way, way too many fingers, companies are trying to use it to replace creatives, and it burns a ton of energy doing essentially nothing of value.
Violet_M00n: So you could go on ChatGPT and talk to a convincing facsimile of a human, but underneath it's just a more advanced version of Dr. SBAITSO.
Bookwyrm1982: that's a shame. But then again at least you don't have to worry about them taking over the world, right?
Violet_M00n: Luna, at this point I'd welcome our robot overlords. Better than the fucks we have running things these days.
Bookwyrm1982: Are things that bad?
Violet_M00n: *sighs* no, I suppose not. I still have a job, a family, I can exist in public without fearing persecution, and I'm mostly free to do as I please.
Violet_M00n: But trust me when I say the people who very much want to take that away have much more power than feels comfortable.
Bookwyrm1982: that sounds scary though.
Violet_M00n: More enraging than scary, really. Just so many people who can't or don't want to see things from any point of view but their own.
Violet_M00n: Well, that, and capitalism.
Bookwyrm1982: I thought capitalism was good though?
Violet_M00n: *sigh* we have so much to learn.
Violet_M00n: Honestly though, read some Marx. You should be able to find his stuff online, if not in the library.
Violet_M00n: It may not resonate a lot yet, but it will.
Bookwyrm1982: I always thought that Communism was a good idea in theory but it needed a global revolution to actually work.
Violet_M00n: You may be on to something there. And someday, hopefully in our lifetime, we may get there. But it's a long, long road. Especially here in America, where it's been used as a boogeyman for like 80 years now.
Violet_M00n: (55 for you)
Bookwyrm1982: Wow, that's.... I'm not sure I want to grow up now.
Violet_M00n: Well maybe your timeline will invent actual time travel and you can keep that wish. Luna knows I wish I could.
Bookwyrm1982: so
Bookwyrm1982: um
Bookwyrm1982: Can we talk about something more fun? Like, what's something good in your time?
Violet_M00n: Well Magic the Gathering is still pretty good.
Bookwyrm1982: We're still playing? I kinda lost interest and stopped following it a year or two ago.
Violet_M00n: Oh yeah, we're still playing, and the game is... Well, it's way different from your time but also at its heart the same.
Violet_M00n: Like it's still Magic but also there's D&D and cowboys and Gandalf, for some reason. It's cool but it's also kinda scary how much they're pumping out.
Bookwyrm1982: Oh that sounds cool! Is it just D&D and LOTR?
Violet_M00n: They've done a ton of crossovers, they call them "Universes Beyond". They've done, let's see...
Violet_M00n: Dr. Who, Warhammer, Assassin's Creed, Final Fantasy, they're doing Marvel soon, Transformers (those are Hasbro though so they were among the first), The Walking Dead, Fortnite, Stranger Things (you... Don't know about those yet, don't worry), um, lots more stuff too that I'm forgetting, but those are mostly in like five or ten card bundles.
Violet_M00n: Unlike LotR, which was a full set with boosters and everything. And the best-selling Magic set of all time, unless Bloomburrow has passed it already.
Bookwyrm1982: Really cool! You'll have to send me some pictures sometime!
Violet_M00n: I'll be sure to downscale them appropriately this time!
Bookwyrm1982: What else do we like? Is Star Trek still running?
Violet_M00n: It had a long break there where it seemed we weren't going to get any more Star Trek.
Violet_M00n: But then JJ Abrams (a director/producer of some renown) made a Star Trek movie that was meh, but good enough to get people interested in the franchise again.
Violet_M00n: Soon after that Paramount spun up Star Trek Discovery, which had a rocky start but Grew The Beard soon enough for them to greenlight Star Trek Picard. Then Lower Decks, Strange New Worlds, Academy, and probably one or two others I'm forgetting (not to forget Short Treks and Very Short Treks).
Violet_M00n: Prodigy! I forgot Prodigy!
Bookwyrm1982: The online service?
Violet_M00n: No, Star Trek Prodigy. It's a CG animated series for kids made by Nickelodeon.
Bookwyrm1982: You're making that up.
Violet_M00n: I swear, it's true. Lower Decks is animated too, but 2D, and it's for adults and probably the best thing Star Trek has ever created. It's hilarious!
Violet_M00n: SNW follows Captain Pike on the 1701
Bookwyrm1982: And Discovery?
Violet_M00n: Complicated! It starts out pre-TOS but... Spoilers! And Picard is... Also here!
Bookwyrm1982: Is that about young Picard or something?
Violet_M00n: Old Picard, but close.
Bookwyrm1982: Hey my mom... our mom... just told me to get off the computer so
Bookwyrm1982: ttys!
Violet_M00n: See you in literally no time at all!
Bookwyrm1982 is away
3 notes
·
View notes
Text
Top Skills to Learn in 2024: Elevate Your Career with These In-Demand Abilities
In 2024, the job market continues to evolve rapidly, shaped by technological advancements and shifting workplace dynamics. To stay competitive, it’s essential to develop both soft skills and technical skills that employers value. This article explores the top skills to learn in 2024 and provides actionable tips on incorporating them into your job applications to boost your career prospects.
Soft Skills
Soft skills are the ones that apply to every occupation. They are less about technical know-how and more about how you interact with others and manage your work: teamwork, work ethic, work style, and interpersonal skills. These abilities not only make you more adaptable at work, they also benefit your personal life. Modern employers value soft skills highly, and career coaching can help you identify the areas where you most need to improve.
Communication
Effective communication remains the attribute employers want most. It encompasses both written and spoken skills and underpins clear presentations, correspondence, and teamwork.
Why It’s Important: Clear communication fosters teamwork, reduces misunderstandings, and enhances productivity.
How to Develop It: Join public speaking clubs like Toastmasters, practice writing concise emails, or take online courses on communication.
Analytical Thinking
Employers seek people who can analyze data, approach problems logically, and come up with creative solutions. Analytical thinking is essential in decision-making roles and complements technical abilities.
Why It’s Important: Analytical thinkers can navigate complex challenges and offer data-driven insights.
How to Develop It: Engage in activities like puzzles, logic games, or courses on critical thinking and problem-solving.
Project Management
With remote and hybrid work environments becoming the norm, project management skills are indispensable. These include planning, organizing, and overseeing projects to achieve goals efficiently.
Why It’s Important: Successful project managers ensure timely delivery, manage budgets, and lead teams effectively.
How to Develop It: Earn certifications like PMP (Project Management Professional) or take online courses on project management tools like Trello and Asana.
Leadership
Leadership goes beyond managing a team — it’s about inspiring, motivating, and guiding others toward success. In 2024, inclusive and empathetic leadership is particularly valued.
Why It’s Important: Strong leaders foster a positive workplace culture and drive organizational growth.
How to Develop It: Volunteer for leadership roles, mentor others, or study leadership styles through books or courses.
Adaptability
The pace of change in today’s world demands professionals who can adapt quickly to new technologies, roles, and environments. Adaptability is the key to thriving amid uncertainty.
Why It’s Important: It shows resilience and a willingness to embrace change, both critical traits in dynamic industries.
How to Develop It: Push yourself out of your comfort zone by taking on new challenges or cross-functional roles.
Technical Skills: The Backbone of Modern Careers
Generative AI
Generative AI tools like ChatGPT, DALL·E, and Bard are revolutionizing industries. Professionals skilled in utilizing these tools for content creation, problem-solving, and data analysis are in high demand.
Why It’s Important: Generative AI enhances efficiency and creativity, making it a must-know for almost every sector.
How to Develop It: Explore AI tools and complete online certifications in AI fundamentals and machine learning.
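If you want something more hands-on than a certification, a good first exercise is simply calling a generative model from a short script. The sketch below uses OpenAI's official Python client; the model name, the prompt, and the assumption that an OPENAI_API_KEY environment variable is set are placeholders to adapt, so treat it as a minimal illustration rather than a recommended setup.

```python
# Minimal sketch: asking a generative AI model to draft marketing copy.
# Assumes the official `openai` Python package (v1+) is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a concise, helpful writing assistant."},
        {"role": "user", "content": "Draft a 100-word product description for a reusable water bottle."},
    ],
)

print(response.choices[0].message.content)
```

The same few lines work for brainstorming, summarizing, or first drafts; the durable skill is writing clear prompts and critically reviewing whatever comes back.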
Data Analysis
Data analysis involves interpreting raw data to make informed decisions. From finance to marketing, data skills are essential for extracting actionable insights.
Why It’s Important: Companies increasingly rely on data to optimize operations and improve customer experiences.
How to Develop It: Learn tools like Excel, SQL, Tableau, or Python for data analysis through platforms like Coursera or Udemy.
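As a concrete illustration of the kind of analysis this points at, here is a small pandas sketch; the file name and the columns (order_date, region, units, unit_price) are invented for the example, not taken from any real dataset.

```python
# Minimal exploratory analysis with pandas over a hypothetical sales file.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])  # hypothetical file

# Revenue per region, highest first.
revenue_by_region = (
    df.assign(revenue=df["units"] * df["unit_price"])
      .groupby("region")["revenue"]
      .sum()
      .sort_values(ascending=False)
)
print(revenue_by_region)

# Units sold per month, to spot trends over time.
monthly_units = df.groupby(df["order_date"].dt.to_period("M"))["units"].sum()
print(monthly_units)
```

Being able to turn a raw file into a ranked summary like this is exactly the kind of actionable insight employers have in mind.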
Software Development
The ability to design and develop software is critical for tech-heavy industries. With constant innovations, software developers are at the forefront of technological advancement.
Why It’s Important: Software drives automation, apps, and enterprise solutions that businesses depend on.
How to Develop It: Start with beginner-friendly programming languages like Python or JavaScript, then build your portfolio by working on real-world projects.
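To give a feel for the "start small, build a portfolio" advice, here is a tiny, self-contained Python example: a function you might actually reuse in a blog or web project, plus a couple of quick checks. The function and its test cases are purely illustrative.

```python
def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in title)
    return "-".join(cleaned.split())

# Quick sanity checks you could later grow into a real test suite (e.g. pytest).
assert slugify("Top Skills to Learn in 2024!") == "top-skills-to-learn-in-2024"
assert slugify("UI/UX Design") == "ui-ux-design"
print(slugify("Hello, World"))  # -> hello-world
```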
UI/UX Design
UI/UX design ensures user-friendly and aesthetically pleasing digital experiences. Businesses are investing heavily in UX to retain customers and enhance brand loyalty.
Why It’s Important: Good design is the foundation of successful websites and apps.
How to Develop It: Master tools like Figma, Adobe XD, and Sketch, and study UX principles through industry blogs and courses.
Web Development
Web development remains a cornerstone skill in the digital age. Whether it’s front-end, back-end, or full-stack development, expertise in creating robust websites is highly sought after.
Why It’s Important: Businesses need fast, secure, and responsive websites to stay competitive.
How to Develop It: Learn coding languages like HTML, CSS, and JavaScript, and frameworks such as React or Node.js.
How to Incorporate These Skills When Applying for Jobs
Highlight Skills in Your Resume
Create a dedicated “Skills” section to list both technical and soft skills relevant to the job. Use metrics and examples in your experience section to showcase how these skills contributed to your success. Example: “Led a team of 10 to complete a software development project 15% ahead of schedule.”
Tailor Your Cover Letter
Use your cover letter to explain how your skills align with the job description. Mention specific instances where you applied these skills to solve problems or achieve goals.
Provide Evidence During Interviews
Share anecdotes or STAR (Situation, Task, Action, Result) stories demonstrating your soft and technical skills. Example: “In my last role, I used data analysis to identify a trend that saved the company 20% in operational costs.”
Showcase Skills in Your Portfolio
For technical skills like web development or UI/UX design, create a digital portfolio to showcase your work. Include case studies, designs, or live projects to demonstrate your expertise.
Leverage LinkedIn
Keep your LinkedIn profile updated with your skills and certifications. Use LinkedIn endorsements and recommendations to validate your expertise.
Conclusion
The top skills to learn in 2024 encompass a mix of soft skills like communication and leadership and technical skills like generative AI and data analysis. Mastering these skills will not only future-proof your career but also make you a standout candidate in any job application process.
Remember, learning doesn’t stop at acquiring new skills—showcasing them effectively is equally important. Start by setting goals, enrolling in courses, and applying these skills to real-world scenarios. With dedication, 2024 could be your year of unprecedented professional growth!
2 notes
·
View notes
Text
5 Laziest Ways to Make Money Online With ChatGPT
ChatGPT has ignited a wave of AI fever across the world. While it amazes many with its human-like conversational abilities, few realize the money-making potential of this advanced chatbot. With relatively little effort, you can use ChatGPT to build a steady stream of mostly passive income. Intrigued to learn how? Here are the 5 Laziest Ways to Make Money Online With ChatGPT.
License AI-Written Books
Get ChatGPT to write complete books on trending or evergreen topics. Fiction, non-fiction, poetry, guides – it can create them all. Self-publish these books online. The upfront effort is minimal after you prompt the AI. Let the passive royalties come in while you relax!
Generate SEO Optimized Blogs
Come up with a blog theme. Get ChatGPT to craft multiple optimized posts around related keywords. Put up the blog and earn advertising revenue through programs like Google AdSense as visitors pour in. The AI handles the hard work of researching topics and crafting content.
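For readers curious what "the AI handles the hard work" could look like in code, below is a hedged Python sketch that loops over a keyword list and asks OpenAI's API for a draft post per keyword. The keywords, model name, and output file naming are assumptions made up for the example, and anything generated this way still needs human fact-checking, editing, and original input before it is worth publishing (or monetizing).

```python
# Sketch: batch-drafting blog posts for a keyword list via the OpenAI API.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical keywords for an imaginary houseplant-care blog.
keywords = ["low light houseplants", "overwatering symptoms", "best potting soil"]

for keyword in keywords:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model you have access to
        messages=[
            {"role": "system", "content": "You write clear, factual, SEO-aware blog posts."},
            {"role": "user", "content": (
                f"Write a 500-word blog post targeting the keyword '{keyword}'. "
                "Include a title and two subheadings."
            )},
        ],
    )
    draft = response.choices[0].message.content
    # Save each draft for human review before anything goes live.
    Path(f"{keyword.replace(' ', '-')}.md").write_text(draft, encoding="utf-8")
```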
Create Online Courses
Online courses are a lucrative passive income stream. Rather than spending weeks filming or preparing materials, have ChatGPT generate detailed course outlines and pre-written scripts. Convert these quickly into online lessons and sell to students.
Trade AI-Generated Stock Insights
ChatGPT can digest market data and produce plausible-sounding stock commentary, though its forecasts are nowhere near guaranteed to be accurate. Develop a system for identifying trading signals based on the AI’s insights, verify them against real market data, and turn the result into a monthly stock-picking newsletter or alert service that subscribers pay for.
Build Niche Websites
Passive income favorites like niche sites traditionally take ages to build. With ChatGPT, you can have the AI research promising niches, draft articles and product reviews, and help with on-page SEO. Then drive organic search traffic and earnings on autopilot.
The beauty of ChatGPT is that it can automate and expedite most manual, tedious tasks. With some strategic prompts, you can easily leverage this AI for passive income without burning yourself out. Give these lazy money-making methods a try!
Thank you for taking the time to read the rest of my article, 5 Laziest Ways to Make Money Online With ChatGPT.
Affiliate Disclaimer:
Some of the links in this article may be affiliate links, which means I receive a small commission at NO ADDITIONAL cost to you if you decide to purchase something. While we receive affiliate compensation for reviews / promotions in this article, we always offer honest opinions, user experiences, and real views related to the product or service itself. Our goal is to help readers make the best purchasing decisions; however, the testimonials and opinions expressed are ours only. As always, you should do your own research to verify any claims, results, and stats before making any kind of purchase. Clicking links or purchasing products recommended in this article may generate income for us through affiliate commissions, and you should assume we are compensated for any purchases you make. We review products and services you might find interesting. If you purchase them, we might get a share of the commission from our partners. This does not drive our decision as to whether or not a product is featured or recommended.
9 notes
·
View notes