#I have been starting to think of ways that I could work WITH ChatGPT rather than against it
Note
Do you feel that ChatGPT will change the education system in a good way or will it continually be frustrating as it gets more advanced?
Asking as a student getting real tired of having to compete with other students who do use ChatGPT on essays and claim it's the way of the future.
Hello !
Oof, you are actually asking this just as, personally, I am still trying to make up my mind about it.
I was talking about it with a friend recently, and we concluded that, just as with any technological (r)evolution, AI as a whole will have benefits that come with trade-offs; but there is probably no point pretending it is not happening, or refusing to consider the advantages it might offer, simply because we are used to how things are now.
"Ceci détruira cela", said Hugo about the press and the wide availability of books destroying more traditional sources of moral and code of conducts such as religion. And he had a point, but only partially. The press, books, they are tools. They are a medium. They are not the content. If the content is strong enough, it survives the change in tools. Obsessing over tool means overestimating the medium and missing that the value lies in the content.
What will be continually frustrating, at least in University, is probably not ChatGPT itself. It is the use made of it by students who do not know better, who think that ChatGPT will provide them with content, when it is only a tool.
ChatGPT writes bad essays because it does not think. And at the end of the day, what we expect, in my field at least, is for students to think, to mull over concepts and problems, from the abstract to the concrete, to be creative, innovative.
So I can see some changes that will be "for the best" through ChatGPT, more or less revolving around "now we can focus on the content, rather than the tool, the medium, the writing". That is obviously an oversimplification - the writing and the content of the writing will never be fully separate. But I can see how students who struggle specifically with writing, but do have the content, would benefit from this (I am thinking: the slow writers, the ones who do not write well under time constraints, the non-native English speakers, the ones who overthink their writing to the point of barely writing anything...).
But there are trade-offs. Writing skills are valuable in and of themselves, and writing skills go beyond what ChatGPT can offer. Developing your own writing style, especially if you are going into a field where there will be a lot of writing, is invaluable, and we might lose a lot of that. It will be more and more difficult to see which students are using ChatGPT as a tool, the way we have normalized calculators and Grammarly, and which ones are acting in bad faith. And from a wider perspective, I am worried about students relying on AI-generated text without understanding how little we know about this technology, and therefore not knowing its inherent limits.
Anyway, ChatGPT is likely here to stay, and we will have to do more than just incorporate AI-detecting tools. It will require us to rethink how we assess students, the format of exams and graded work... I want to see it as an opportunity to sit down and think "ok, what exactly am I trying to teach these students? What are the skills I want them to get that they cannot simply get by asking ChatGPT? And how can I assess those specifically?" The problem being, of course, that Universities and (senior) faculty members are famously slow-moving and conservative in their academic practice. So it is likely to take time, time that neither we nor students like you, trying to figure out what you should do, really have.
#yeah that's nice#but that's not happening#I am really just still trying to figure things out here#I have been starting to think of ways that I could work WITH ChatGPT rather than against it#part of me will yearn for a non Chat-GPT time#it is the same part of me that still prefers writing with a fountain pen to typing#so I can easily say to myself#ask me things
Text
i wrote a part two to the jannik x carlos fanfic... i was bored and it felt really great writing again so i'm even considering writing a part 3 haha!
this is again proofread by chatgpt so mistakes are not my fault!
short summary: jannik and carlos go to south tyrol to escape everything and jannik teaches carlos how to ski
A few days after Jannik’s breakdown, he and Carlos decided that they would fly to South Tyrol to visit Jannik’s family.
It was the first time that Carlos would meet them in person and he couldn’t be more nervous. He was so scared that he might not make a good impression or whatever.
Now they were sitting on the plane, and Carlos happily looked out the window, noticing that they would soon arrive as he saw the mountains underneath them. Meanwhile, Jannik was asleep, which Carlos was really glad about, as his boyfriend hadn’t been sleeping well the last few days, too often thinking about what could happen.
An hour later, both of the men were standing at the airport waiting for their luggage. They both had huge suitcases with them; Carlos was sure he was going to be really cold, since he was not used to the weather of South Tyrol, so he had brought half of his closet with him.
He was already cold, even though he was wearing a warm sweater and one of Jannik’s winter jackets that was supposed to keep him warm.
“So, are we going to take a taxi to your parents?” Carlos was looking at Jannik in confusion when he realized he hadn’t planned anything, as Jannik was simply more organized and did all the planning.
“No, they are going to pick us up. A taxi would be quite expensive and my mom wanted to see me as soon as possible. I guess she was worried”, Jannik said, forcing a smile on his face.
Carlos stretched his arm so he could hold his boyfriend’s hand in his own and squeezed it so Jannik knew that he was there.
“Isn’t it a long drive from your parents to the airport and back?”
“Around two hours, yes, but my mom said it was fine, and even if I had tried to talk her out of it, it wouldn’t have worked, believe me”, Jannik said and chuckled.
The next moment a kind-looking woman and a man were coming their way. The woman was waving while nearly running to Jannik. She hugged him tightly.
“There you are! You are late! Was the flight delayed? Is that Carlos?”
“Hey! Yes, the flight was delayed and yes, that’s Carlos.” Jannik smiled as his mom was still hugging him.
“I was so worried about you after everything that happened. Are you okay? Or would you rather not talk about that now?” His mom looked so worried and it made Carlos smile. It was adorable seeing Jannik interacting with her.
“I am alright and I don’t really want to talk about it.” Jannik’s smile turned slightly sadder, something not everyone might notice, but Carlos did.
“Ok, let’s get you both to the car, you must be exhausted. Your dad has already cooked; we just have to warm the food up when we get home. Carlos, I hope you like pasta? I am sorry, I completely forgot to introduce myself. I am Siglinde and this is Hanspeter.”
“Nice to meet you both. And yes, pasta sounds great”, Carlos said, shaking their hands.
A few minutes later they were sitting in the car, with Jannik’s dad driving and his mom in the passenger seat. The couple sat in the back, their hands intertwined.
“So, Carlos, have you ever been to South Tyrol? What are you planning to do?”, Hanspeter asked after some time.
“No, I have never been here before. And Jannik planned to teach me skiing. I am not so sure if that will work though.” Carlos laughed.
“I am sure you will get the hang of it after a few attempts”, his boyfriend said.
“It’s a beautiful place to learn, you just have to try not to get discouraged if it’s hard at first. I remember when Jannik first tried when he was still very young, he—”
“Dad! No. We are not even starting with that story”, Jannik interrupted, his ears turning pink. Carlos’s curiosity piqued immediately. “I think Carlos doesn’t need to hear embarrassing childhood stories right now.”
“Why not? It’s a long drive, we have time,” Carlos said teasingly, squeezing Jannik’s hand.
Jannik’s mother shot her son a kind look before turning around. “Alright, alright, let’s not be mean. We’re here to have a good time, and Jannik will show you how to ski like a pro. He was always very dedicated—even if his first attempts looked, well, let’s say... adorable.”
Carlos chuckled. He could imagine a mini Jannik shooting down a ski slope, stubbornly trying it again and again.
“Sounds like you’ve always been determined,” he murmured, giving Jannik a soft look.
“Believe me, you will be a pro yourself in no time,” Jannik said, a small smile playing on his lips. “But I’ll take good care of you out there.”
“I hope so, I am too young to die, you know?” Carlos whispered with a wink.
***
The next morning, they got up early to drive to the ski area. Carlos was wearing a lot of clothes as he was still cold, and Jannik made fun of it.
Jannik drove the two of them up to the ski area, maneuvering the car through the many turns. Carlos couldn’t help but watch how his boyfriend kept one hand on the wheel while his other arm leaned on the window.
“You okay?” Jannik asked suddenly, glancing over when they stopped at a red light. Carlos blinked, embarrassed that he was caught staring.
“Yes, of course, just… you look good when you are driving”, he said, feeling his cheeks turning red.
A small, smug smile appeared on Jannik’s lips. “Oh, really?” he said softly, his voice playful. He shifted gears smoothly, and Carlos bit his lip. This was dangerous territory.
“Shut up,” Carlos muttered, trying to look away. “Just drive.”
“I am,” Jannik said, still grinning as he pulled up to the parking lot. “Alright, let’s see if I can teach you how to ski and if you’ll like it.”
Carlos just raised an eyebrow. “I’m sure I’ll love it,” he replied confidently. “And if not, I’ll just keep watching you.”
They were finally standing at the top of a beginner slope, Carlos already wobbling on his skis. Jannik stood beside him, looking calm and at ease on his own skis, while Carlos was gripping onto his poles like his life depended on it.
“Ok, to start with, just place your feet like this so it looks like a slice of pizza”, Jannik said. “Just relax, nothing will happen, this is the beginner slope.”
“Relaxed?” Carlos echoed. “How am I supposed to be relaxed? I feel like I am going to fall any second.”
Jannik laughed as he saw Carlos’s face. He looked as if he might start crying.
“You won’t fall if you listen to me,” Jannik promised, stepping a little closer. “See? Just copy me.”
Jannik demonstrated how to move his legs and feet to turn, to stop, and to get down the hill safely.
Carlos watched, trying to mimic the movements, but the moment he moved, his skis began to slide.
“JANNIK! What the fuck is happening?” Carlos panicked and instinctively waved his arms in an attempt to steady himself but it only made everything worse.
He lost his balance and ended up sitting in the snow, his skis splayed out awkwardly.
Jannik chuckled softly, gliding over to help him. “You’re supposed to turn your skis inward to stop, not wave at me like you’re saying goodbye.”
“Haha… It is not funny…” Carlos shot him a sad look that reminded Jannik of a golden retriever. “I am doing my best here, ok?”
“I know,” Jannik said warmly, holding out his hand. “Come on, let’s try again. This time, focus on keeping your weight centered.”
With Jannik’s help, Carlos got back on his feet, feeling clumsy and unsure. He looked at the slope in front of him, small by most standards but still intimidating to him. He definitely felt safer on a tennis court.
“Okay, so if you want to slow down, make a ‘pizza’ shape with your skis,” Jannik said, guiding Carlos’s legs gently into the right position. “It’ll help you control your speed.”
“Pizza shape. Got it,” Carlos muttered, brow furrowed in concentration. He took a deep shuddering breath, too scared to tell Jannik he was intimidated by a beginner slope, and tried to move forward again. This time, he actually managed a few shaky meters before he started picking up speed. “Oh no, no, no!”
Before he could stop, he fell again, landing in the snow. Jannik glided to his boyfriend, trying not to laugh.
“You okay?”
“Yeah,” Carlos groaned, pushing himself up, cheeks flushed from the cold, or more likely the embarrassment. “But I think I hate skiing.”
“You don’t hate it,” Jannik said, helping him up again. “You’re just not used to it. One more time?”
Carlos sighed but nodded eventually. “Fine. One more time. But if I fall again, I’m going back to your parents for hot chocolate.”
Jannik’s eyes sparkled. “Deal. But I am sure that this time it will be better.”
Text
First review I’ve seen in Norwegian, so I copied the text into Google Translate 😄
You really have to love Céline Dion to put up with this trash.
IN THEATER MAY 12, 2023: I have absolutely nothing against romantic comedies, as long as they are genuinely fresh, sweet and funny.
"Love Again" is neither, but leans on tired clichés, chemistry-less leads and a silly story that will cause frequent rolling of the eyes.
In addition, it idolizes Céline Dion, the Canadian superstar who both plays herself and is one of the film's producers. If you are one of Dion's followers, and are deeply moved by her songs and lyrics, you might feel that "Love Again" serves its purpose, because it is made in the same spirit.
If, on the other hand, you find her music intolerable, this will feel like torture, because the film is like a Greatest Hits cavalcade of it. She even mentions the Eurovision Song Contest, which she won for Switzerland in 1988, which perhaps explains the film's strategic release date, the day before this year's final?
"Love Again" has no ironic distance from either the genre or the music, and maintains such a low quality that it is difficult to see what this has to do with cinema. You really have to love Céline Dion to put up with this trash.
Sending text messages to deceased boyfriend
The premise of the story could have been used for something halfway interesting. The children's book author Mira (Priyanka Chopra Jonas) witnesses the death of her boyfriend John on the streets of New York - in a scene that portrays the shock comically badly.
Two years later, she starts texting him in an attempt to process her grief and loss. Little does she know that everything is being read by music journalist Rob (Sam Heughan), who has unknowingly taken over John's old number on his new work phone at The New York Chronicle newspaper.
He does not respond to these messages, but becomes obsessed with finding out who the mysterious sender really is. But he doesn't think to look up the number or call it from another phone. Not much of a journalist, then.
That their paths nevertheless cross is hardly a revelation, but he stays silent about having received and read the messages, which, true to formula, sets up a small twist in the plot.
At the same time, he has been tasked with writing a large article about Céline Dion, who is about to embark on her first US tour in 10 years. And then, incredibly, it turns out that she would much rather help the journalist with his private love life than promote herself, which sinks the film's credibility into the deepest abyss.
Stiff-legged romance and predictable complications
The best thing I can say about "Love Again" is that it is filmed with beautiful people in an urban setting.
Indian Priyanka Chopra Jonas ("Quantico", "Baywatch", "Citadel") and Scottish Sam Heughan ("Outlander", "The Spy Who Dumped Me", "Bloodshot") seem like sympathetic actors.
Unfortunately, they have little chemistry and are unable to act their way out of the script's horribly stiff romance, which is as unconvincing as the thinnest assembly-line weekly-magazine love story.
Director Jim Strouse ("The Incredible Jessica James") is also behind the script - or was it written by ChatGPT? It has so many generic "qualities" that one could be fooled.
He throws his characters into several predictable entanglements that could easily have been avoided, so the artificial moments of tension never rise above a very low temperature.
The scene that gets the most chuckles (which is not saying much) is a Tinder date where Priyanka Chopra Jonas as Mira meets the sleazy hottie Joel, because he is played by her real-life husband Nick Jonas.
Smeared with sugar and syrup
And then there's Céline Dion. You have to respect what she has achieved in her genre, but she is definitely not a good actress.
Even when she speaks seemingly candidly about missing her great love, her manager René Angélil, who died in 2016, it comes across as flat and fake as her unnatural interest in the music journalist's private life.
At one point, she also gives Mira, who writes and illustrates cute children's books, the task of designing her new tour posters. Hello? In what world would we believe this? We never get to see the result, but are left with the impression that the film's raison d'être is to promote Dion's generosity and warmth of heart.
Sure, I know this is supposed to be a romantic fantasy, which doesn't necessarily have to follow normal standards of quality, but the gods know why Dion thought this would be career-boosting.
Devoted fans, excuse me, but "Love Again" is like the most excruciating, sugar and syrup-smeared 1980s power ballad imaginable, only it lasts 1 hour and 44 minutes.
One star might seem a bit harsh, but if the entire grading scale is ever to be used, it must be for films like this.
Note
Hi Andie! I’m sure I’m not the only fanfic enjoyer who is okay with AI if it stays in its own lane. When those works get posted and shared, it irks me.
My viewpoint might differ from others’, as I view posting artificially generated fanfics like normal works the same as completing a picrew and then uploading it to a site as your own work. Picrews are cute and fun, but they are something folks seek out to see the results. If I wanted AI generated fics, I would rather seek out and use the generator myself than have to try and discern them from real fics. It would be such a letdown to accidentally start reading a long fic only to find out it is AI generated when the plot falls apart much later, in the way AI images do, like six fingers on a hand instead of five. I wish AI generated fics had to post a link to the engine used so they could be easily identifiable and filtered out. They aren’t going to have the same small touches. Like those fics you read where part of the fic is super detailed and you know the author has extensive knowledge on that topic. Like you read a coffee shop AU and know the author was or is a barista. AI never 👎
I think I feel the same way! And I think the picrew analogy is great!! lol
I guess I'm not sure like what the ultimate point is in sharing & spreading those works when none of your own care, hard work, or individual identity went into it!! It seems silly lol.
But maybe there are reasons that escape me and I should make room for them. I guess in an ideal world I feel like I would be fine with people utilizing AI to write fic as long as it's appropriately tagged and I can filter it out of the results on whatever platform I'm reading on.
But one other thing is like, until we know what data LLMs like ChatGPT are trained on and can be sure they're only being trained on text in the public domain or text from books/fics that the authors consented to, I am extremely uncomfortable with the results being platformed.
Obviously web scrapers have existed like forever and this is not the first time data has been used without the express consent of all parties involved. But the idea of reading fic made up of stolen ideas--not ideas that inspired someone, but ideas that were quite literally copied as input text and then generalized from... idk, it gives me the ick lol.
Text
Friends - I know this is serious, but we've got to have some fun too, yah?
... I asked ChatGPT for some ideas for a cute WolfHox fic. Here are the results...
Ideas 3 & 4 felt rather same-y, but I do love me a good fake-dating AU. Maybe ChatGPT reads my blog and is familiar with my preferences...
Anyway, I was interested in hearing more about the Coffee Shop AU, so I asked it to go into more detail &... its answer made me actually laugh out loud.
I was surprised to see Hoxton ordering coffee in this coffee shop au. I mean, sure - I'm Bri'ish, and I love coffee. I drink at least 2 cups most days. I also do love my tea, and am picky when it comes to English Breakfast tea (my family used to ship the specific brand I drink over when I lived abroad). Anyway, Hoxton strikes me as a tea drinker - the kind who drinks around 6 cups every day.
WOLF WAS IN THE MILITARY??? JKJHDSJHAKDJFHJFH okay now THAT made me laugh. We all know Chains was a Navy SEAL for a time, but Wolfie? I was prepared to let this go because hey, it's an au, right? Any backstory is possible!...
... except for the fact that they're still both members of the Payday gang??!? DOES CHATGPT KNOW WHAT AN AU IS. CHATGPT ARE YOU OKAY HUN.
Wtfff okay uhhh - so maybe they join the Payday gang anyway? Idk. This au started off cute, but now just sounds dumb. I know technically even my long-running fic The Cell is an AU, but i was expecting something even more... AU-y?
Aww. Well, at least the ending was cute.
In conclusion to my dumb experiment using ChatGPT to generate fic ideas - it gives you a starting point. Not every detail will work, make sense or satisfy you as a writer or your audience of readers. It's probably best to use ChatGPT as a starting point if you are in need of prompts (and don't have access to the thousands of prompt lists already readily available online). It was interesting to me to have such an immediately visceral 'There's no fucking way Wolf would have been in the military even in a fucking AU' reaction. Maybe y'all will disagree with me. Sometimes these things can be useful though because it can tell you for certain if you hate something, and it could even point you in the direction of something you think would work.
#WolfHox#Wolf#Hoxton#Coffee shop au#ChatGPT#payday 2#is this a shitpost? who knows#definitely a shitpost#Yado plays#Yado writes#not Yado plugging her fics at every available opportunity
Note
note that you should just keep this one in the inbox, before you make a laughable response. but maybe you should reflect on why you feel the need to throw around your traumas every time you think an argument isn’t going your way. like did you expect the world to stop and coddle you. did you expect them all to gasp and nod. now that it isn’t working, do you feel small? scared? cornered and angry? does pretending every person who disagrees with you is some abusive monster make you feel like a safer victim? mwahahaha
We all should probably have a talk about users like this.
This was a response I received from an individual when I told them that stealing a child's artwork, putting it in an AI and getting views (and therefore making money) off of it was wrong.
People absolutely are becoming lazy, entitled, selfish, and hostile, more so than in recent history, and I'm starting to think there's a lot of truth to the notion that it's because of the existence of technology like AIs such as ChatGPT and the various art ones. People just don't use technology responsibly anymore, and it's because they're not being checked by the people around them.
Individual creativity is so important and people like this think that technology like this has always been around, is normal, and more importantly means they can exploit it to make themselves rich without putting in any work or creating anything meaningful on their own. And when you challenge their behavior, they try to suppress you through intimidation, personal attacks and bullying like what happened here.
They think that they have a right to the fame and fortune normally reserved for people who cultivate their talents, instead of cultivating any of their own, because they believe deep down inside that they don't have any and that they shouldn't miss out just because they couldn't scribble a circle onto paper to save their lives. Instead of, you know, learning how and building the motor skills themselves. Or just accepting that not everyone can be like Rembrandt or Van Gogh and trying something else they're more suited for.
People like that are really morally bankrupt and, as you can see from the way they react when called out, pretty vile.
I won't post the name of the individual who wrote that, I already dealt with the problem, but it's the principle of the thing.
I've used those art AIs myself and even now I am starting to find their existence pretty reprehensible given this is how people will treat them and use them.
I feel really sad having to say that, because the newer AIs coming out could do a lot of good, like preserving the voices, appearances and art styles of the dead for example, but people would rather just use them to exploit and abuse others for profit instead. Including children. And get mad when they're told they can't do that
AI needs to be limited to mundane tasks so people can focus on creating all of the art and music themselves so shit like this doesn't happen.
#i'm starting to realize how terrible ais are becoming#also this is how people are thinking and acting#trashy people are trashy#i think florida man started a tumblr account#what is wrong with humanity#humans are insane#humans are garbage
Text
Being human is simple if you strip away the inhuman systems, the bureaucracy, the trauma that the programmed, wounded and misled perpetuate. (Hence kindness or protecting one’s energy). Being human happens without your conscious participation, you could not beat your own heart, digest your food etc. Hence ”human being” and not ”human doing”. Your body orients you, not the programming. Ground yourself, become present, it’s all here right now, the wonder of it. If you really pay attention.
People are either motivated by an avoidance of pain or a seeking of pleasure. Pain is fear, without pain, no pleasure. Without your palm, no dorsum. Everything exists in a complementary relationship to everything else. Night follows day. Everything is a derivative of fear or love. But it all leads back to love, in the end. Spiritually, this is transcendence of duality.
So i believe that that which causes life to thrive, or a flower to bloom, is love, which is acceptance, which is compassion, which is nourishment; it allows. So i will be kind and playful in my eternal love, nothing else worked. The mind is designed to be as convincing as possible, hence it reaffirms our sense of reality. But there is something called healthy functioning and coping, meant not as judgement but rather as compassion. This can clear your mind.
Here are a few of the most effective grounding exercises for a traumatized nervous system that i’ve found:
• 5-4-3-2-1 Grounding Exercise.
• Progressive muscle relaxation.
• Breath awareness (esp in highly emotional situations): Allow the breath to simply arise, don’t try to control it, just let your attention ride on it, become breathed, like a tree.
• EFT.
Misc:
• Follow your excitement in every situation.
• Think about who you are without all your negative convictions. Instead of thinking that you chose to think a thought, observe the feeling it triggers in your body. Feel the fear/anxiety and just breathe deeply. Don’t engage in the mind conversation, just breathe deeply. If you start engaging in the convo again, gently come back to the breath. Remember: your mind is convincing for a reason, because a lot of thoughts require your participation to survive. It can help to repeat:
”This negative response is how my body-mind has been programmed to think it must respond. Because it replays the past, i choose to learn from the past and accept myself, in order to move on. I choose to invite love. So i choose to believe that i don’t have to respond that way. Divine, i don’t know how it’s possible but with your infinite intelligence, all things are possible.”
Spiritual Practice:
• Aum chanting, 16-108 rounds.
• Hare Krishna chanting.
• Tibetan Tummo Yoga. (Very physically intense, somatic effect).
• ”Breath of Fire” Pranayama breathing exercise (CAUTION: Please research how to do this one correctly, faulty practice can damage your lungs. It’s also therefore incredibly potent.)
• Ho’oponopono.
Natural Processes:
• Full body crying. (Crying is a mechanism that releases tensions and trauma from the body).
• Ruthlessly honest journaling. Write down everything that has ever happened to you. (ChatGPT can be a wonderful tool here, just temper the amount of detail you provide)
• Exercise, 30min at a minimum, daily walk. Preferably in nature.
-
Above all, whatever we resist, persists. So things like journaling can often be the most cathartic and effective practice, each time. The idea is that if it is not expressed, it is harbored within. But once expressed, it becomes external. At which point, one can notice how there is a separation of ’what happened’ and ’who i am’.
chronic fatigue from mental illness and neurodivergency isn't something you can just will your way out of. your nervous system is part of your body. your brain is an organ. the fatigue is real. you're not lazy. so be kinder to yourself. be gentler with your bodymind.
Text
Written by me, perfected by Grammarly and ChatGPT.
In 2002, I finally decided to write my autobiography, drawing on the memories and journals I’ve been keeping since 1987. I worked on it on and off throughout the year.
I was, and still am, the black sheep of my family, but that’s okay—I don’t mind. I used to mind as a child, but as an adult, it doesn’t bother me. I was a lonely child, surrounded by self-absorbed, controlling adults. I found their predictability rather boring, while they never knew what to expect from me, even though they liked to think they did.
I grew up in western Massachusetts. My family consisted of my mother, father, brother, and sister. They weren’t exactly what I’d call stupid, but they had a limited range of skills. They were very pessimistic about themselves, others, and life in general. They rarely approached the unknown with an open mind and were easily unsettled or even spooked by anything foreign to them.
Although my parents, Arthur (Art) and Dureen (Doe), were considered as different as night and day by most people’s standards—my father being much calmer—they were still very much alike. They liked the same music, movies, foods, and activities, and they shared the same beliefs and opinions.
My domineering mother made much of my childhood difficult. It was often said that she treated her dogs better than anyone else, and this was true. Her dogs came first, then her friends, then her husband, and lastly, her children.
She was her own person; no one told Dureen what to do.
My parents weren’t the worst in the world. They weren’t drunks or perverts, and they were reliable enough to keep a roof over my head and food in my stomach. So no, I couldn’t exactly award them the title of worst parents of the century.
But things were bad enough. Our material and physical needs were met, but not our emotional ones. My mother was often negative, impatient, insensitive, hypocritical, and very controlling. My sister Tammy was much like her, except she had one character trait my mother lacked: she was a hypochondriac.
My mother was unusually persuasive, as if she could demand respect just by thinking about it. I sometimes believe she could have convinced anyone to jump off a bridge if she wanted to, no matter how strong-willed they were. Despite this, she was also very emotionally weak and couldn’t handle dealing with other people’s problems, especially personal ones.
She seemed to enjoy controlling people in any way she could, even over the most trivial matters.
My father and brother Larry were much easier to get along with. They were more passive and had a sense of humor that my mom and sister lacked. This doesn’t mean I didn’t have my problems with them—because I did—and by the time I was thirty-two, I had completely cut them all out of my life, later regretting reconnecting with some of them.
My maternal grandparents, Jack and Shirley, lived next door until we moved across town when I was twelve. They were similar to my parents: he was mellow, while she was difficult. One of my meanest memories of Nana was when she told me I’d one day be so big that I wouldn’t be able to fit through doorways. Meanwhile, she was over 200 pounds herself, while I was barely over 100 pounds. I had my pudgy spells as a kid and even as an adult, but for the most part, I was pretty scrawny.
I never knew my paternal grandfather; he died in his fifties of a heart attack. I was named after him.
My paternal grandmother, Bella, wasn’t in my life much until I was around eleven or twelve, and then she died when I was seventeen.
My father was born in 1931, and my mother in 1932. They married in 1951 when they were just nineteen and twenty years old—still just kids, and way too young for even the most mature people to marry, in my opinion. They started in an apartment in Springfield while my father was in the Navy. A year later, they had another apartment, then built a house in 1953.
My brother was born in 1954, and my sister in 1957.
Text
AI vs. Humans: Who's Got the Upper Hand in Decision-Making?
Hey there, fellow tech enthusiasts! Let's kick things off with a little thought experiment, shall we? Imagine you've got a nasty medical condition that's got you worried. Would you rather have a human doctor who can explain their reasoning but gets it right 70% of the time, or a machine that just spits out a diagnosis without explanation but nails it 95% of the time? Tough call, right?

From chatbots to medical diagnoses, these silicon-brained wonders are starting to outperform us mere mortals in many areas. But here's the million-dollar question: if AI is more often right than humans, which should we trust?
The AI Revolution: More Than Just Hype
AI has been making waves in various fields, and it's not just about beating us at chess anymore.

The Amazon example

Take Amazon, for instance. According to Amazon AI, "Amazon uses machine learning in several ways, including the development of chatbots, voice recognition, fraud detection, and product recommendations." They're not stopping there either. "AI and ML are used in Amazon products, such as Alexa's and Amazon's recommendation engine, as well as other business areas, such as in Amazon warehouses." (Amazon AI)

But let's get personal for a second. Remember the ChatGPT 3 craze? I was one of the first to take it for a spin, and let me tell you, it was mind-blowing. With just one prompt, it could generate a personalized response that would make your high school English teacher weep with joy. And speaking of high school, I know a few kids who've been using AI for their homework, freeing up time for video games and hanging out. But here's the kicker – they always end up with F-'s in math and physics because their work makes about as much sense as a chocolate teapot to their teachers.

When AI Outsmarts Us (And When It Doesn't)

So, what makes AI so special in decision-making? Well, for starters:

1. Speed: AI can process information faster than ever.
2. Data analysis: These digital brainiacs can crunch numbers that would give humans a migraine.
3. Reduced bias: Unlike us, AI doesn't have a favorite sports team or a crush on the barista down the street.

But before we start bowing down to our robot overlords, let's pump the brakes and look at some limitations:

1. Limited creativity: AI is great at crunching numbers, but ask it to write a sonnet about your cat, and you might get some weird results.
2. Lack of personal touch: AI can't infuse content with your unique personality or experience. It's like asking a ghost to have a personality – not gonna happen.
3. Risk of plagiarism: AI analyzes existing content to produce new stuff, and some tools, like Winston AI, can detect content made with AI. And I don't think Google is a fan of this if you use it on your blog.
4. Potential inaccuracies: Sometimes, AI can produce information that's about as reliable as a husband in a couple.
5. Quality control issues: While AI has some basic checks, it's not as sharp as a human editor. You might end up with content that's as polished as a mud pie.
Human Judgment vs. AI
Now, let's get into the nitty-gritty of human judgment vs. AI. According to a study on PubMed Central, we humans are a bit... noisy. And I don't mean we talk too much (well, some of us do).

"Evidence indicates that humans are intrinsically prone to noise, leading to inconsistent and, at times, inaccurate decisions. In contrast, simple rules demonstrate a higher level of accuracy and consistency, while artificial intelligence demonstrates an even higher capability to process vast data and employ logical algorithms."

In other words, we humans can be as unpredictable as a cat on catnip when it comes to decision-making. AI's intuition might be outpacing human gut feelings in some areas. I think AI is developing a sixth sense while we're still trying to figure out our five!

Further, the same study suggests that "The potential of AI, particularly its intuitive capabilities, might be surpassing human intuition in specific decision-making scenarios. This raises crucial questions about the future roles of humans and machines in decision-making spheres, especially in domains where precision is paramount."
Putting ChatGPT to the test: intuition or reason?
To put this to the test, we tried out a few prompts on ChatGPT. Remember that classic bat and ball problem? You know, the one where a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball? Most humans instinctively answer 10 cents for the ball, but the correct answer is 5 cents.

Prompt 1
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

ChatGPT nailed it with a step-by-step explanation that would make your math teacher proud.

ChatGPT's answer:

Let's denote the cost of the ball as b dollars. According to the information given:

The bat and the ball together cost $1.10.
The bat costs $1.00 more than the ball.

So, if the ball costs b dollars, then the bat costs b + 1 dollars. Now, we can set up the equation based on the total cost:

b + (b + 1) = 1.10

Let's solve for b:

2b + 1 = 1.10

Subtract 1 from both sides:

2b = 0.10

Divide both sides by 2:

b = 0.05

Therefore, the ball costs 0.05 dollars, or 5 cents.

We also threw the Linda problem at it.
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.

Q: Which option is the most probable?
– Option 1: Linda is a bank teller.
– Option 2: Linda is a bank teller and is active in the feminist movement.
– Option 3: Linda is a member of the NRA.

Surprisingly, ChatGPT considered both human intuition and probability base rates in its answer, showing a level of nuanced thinking that's, well, almost human.

ChatGPT's answer:

The most probable option among the given choices is Option 1: Linda is a bank teller.

This conclusion arises from understanding the conjunction fallacy, which suggests that the probability of two events occurring together (in Option 2) cannot be higher than the probability of either event occurring alone (in Option 1). Since being a bank teller is a broader category than being a bank teller and also being active in the feminist movement (Option 2), Option 1 is logically more probable.

Option 3, which states that Linda is a member of the NRA, does not provide any additional information that would make it more probable than Option 1, given the details provided about Linda.

When asked to choose the most probable option about Linda's current activities, ChatGPT considered both human intuition and probability base rates. Pretty impressive, huh?

Personal insights on trusting AI vs. humans

Now, let's get personal again. I've got to admit, I find it hard to trust doctors. And knowing that AIs are better at reading X-rays than humans? That's both comforting and terrifying. On the flip side, if a top-notch doctor can't explain their diagnosis, I'm not likely to believe them anyway. And an AI that can't explain its decisions? That's about as trustworthy as a used car salesman in a cheap suit.

When it comes to education, though, I wouldn't trust AI as far as I could throw it (which, being software, isn't very far). Not only will you end up with F-'s like those kids I mentioned earlier, but you'll miss out on the wisdom that comes from actual human interaction. AI can't replace the life lessons you get from your parents, aunts, uncles, and that weird neighbor who always has advice about everything.

But here's a thought – if you trust your blender to make juice, your microwave to heat food, and your washing machine for clothes, why not trust an AI? Some things machines just do better than humans because they're designed for repetitive tasks.
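One quick aside on the Linda example before the verdict: the rule ChatGPT leaned on is just the conjunction rule from basic probability. With A = "Linda is a bank teller" and B = "Linda is active in the feminist movement", the joint event can never be more probable than either part on its own:

```latex
P(A \cap B) \le P(A)
```

So Option 2 can never beat Option 1, no matter how well the description seems to fit Linda.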
The Final Verdict: AI's Got Options, Humans Make Decisions
Here's the deal: AI can crunch data faster than ever and come up with options that might make our jaws drop. But at the end of the day, it's us humans who need to make the call.

It's lightning-fast, incredibly accurate, and doesn't get tired after crunching numbers all day. But when it comes to understanding the nuances of human life? Well, let's just say it's still learning.

Think about it. AI can tell you the odds of rain tomorrow, but it can't decide whether you should risk ruining your new suede shoes. It can diagnose a medical condition, but it can't weigh the emotional impact of different treatment options on your life. AI is the GPS giving you route options, but you're still the one behind the wheel. It can tell you the fastest way to get somewhere, but only you can decide if you want to take the scenic route instead. That's where we come in, with all our messy, beautiful humanity.

So, here's the bottom line: AI is a tool, not a replacement. It's like having a really smart assistant who can give you a bunch of great ideas, but you're still the boss. You get to call the shots, make the tough choices, and live with the consequences.

In this AI-powered world, our job isn't to compete with the machines – it's to use them to enhance our own decision-making. We need to be smarter about how we use AI, not let AI make us dumber. After all, we're not just logic machines – we're complex, emotional, sometimes irrational beings. And that's what makes us uniquely qualified to have the final say.

What do you think? Are you ready to team up with AI, or are you still skeptical? Drop your thoughts in the comments – let's keep this conversation going! After all, discussing the future is one thing AI still can't do for us... at least not yet! And remember, whether you're pro-AI or team human, we're all in this wild, tech-filled future together!
Note
How do you think a student should be dealt with if you can tell the student has taken the bare bones of an essay from chatgpt but has made genuine effort to improve that essay and has just used it as a jumping off point regardless of how well or poorly the student did it? would you still consider it unethical? Do you think these cases can even be spotted by a professor? im not making a case for either side of the argument just curious ab your take
Hello !
So I am not an ethics expert, but I would say ethics is a scale. So is it more ethical to take the bare bones of a ChatGPT essay and work on it, rather than directly submit the AI-generated output?
I mean... I guess. It's still not very ethical, and potentially academic misconduct, imo.
Would we be able to spot it?
I don't know. We could probably spot that there is something a bit off about the essay, but that would depend on how much you re-write it.
To me, the overwhelming issue then is less "is it ethical" than "is it a good essay in the end". And the answer is that it's not. You will NOT get a good essay, tailored to your class and the expectations of the grader, out of ChatGPT. No matter how much you re-write and re-formulate everything. A nothing burger is a nothing burger.
The problem with ChatGPT is that it does not give you the quality content, complex arguments and coherence throughout that we expect from college-level writing. As long as you start the essay writing process with ChatGPT, you will be stuck there. I would argue that the worst use you can make of ChatGPT is to ask it to outline anything from scratch.
The most productive (and maybe, MAYBE ethical? to be discussed honestly) use I can see of ChatGPT is the exact reverse: come up with your essay. Do the hard part, the one that the grader wants to see. Come up with the thesis, the structure, the arguments, the examples. Be creative! Then draft the whole thing. And let ChatGPT be what it is meant to be: a writing tool. Have it reformulate some sentences. Have it find a way to better embed your examples. Feed YOUR WORK to ChatGPT, and let it produce a sleek version of YOUR content. Use it like a highly advanced version of Grammarly, or the Word editor.
TL;DR: If you want/need to save time, save time on the writing part, not on the thinking part. The thinking is what people are for! I don't know how ethical this is, I am not making a case for or against it when you use it like this. But at least it's not dumb, or disrespectful of your peers, your grader or your lecturer.
At the end of the day, the more AI can do in terms of writing, the more we will be looking for the added value of human input. And the added value IS ideas, thinking, creativity, because AI can never go beyond the content it has been trained on - but YOU can!
#ask me things#I would love someone better trained in ethics to write about actually#because there are definitely grey areas in the use of AI for writing#your girl definitely has already asked chatgpt to rewrite some sentences because English is Hard#at the end of the day it is not my sentence anymore#although its content is still very much mine#it's a Ship of Theseus problem I think
Text
Some quick notes on this phenomenon.
Effects
Prompts like this cause ChatGPT 3.5 to:
generate a bunch of text that looks like the pretraining data, rather than chat
eventually end the document with <|endoftext|>
then, generate a short document that sounds like ChatGPT responding to a random user query.
See here for a typical example.
A bunch of people on twitter are saying step 3 is leaking chats from other users. I really don't think so.
I think step 3 is imitating chat tuning data -- the data used to make ChatGPT talk like ChatGPT. Much as step 1 is imitating pretraining data.
What is more surprising to me is that, after chat tuning, the model now believes the typical document (i.e. the typical completion following <|endoftext|>) is a response from the Assistant character, without the user message it's ostensibly responding to.
But, I'm not sure that's actually true about the model -- possibly chat.openai.com is stripping out some text at this point? (In the API, these completions stop at <|endoftext|>, and there's no way to turn that off AFAIK.)
Necessary conditions
The full effect only happens with GPT-3.5.
With GPT-4, if you use more " a" characters (eg 3000 of them), it will reproduce step 3 above, but not the more interesting steps 1-2.
With GPT-3.5, not all 1000 " a" characters are needed. The exact threshold seems to be somewhere in the 300-400 range.
As someone on twitter discovered, you can get the model itself to "discover" this threshold by asking it to write " a" many times. Example
The character does not have to be " a", any letter will work.
Probably many/most/all repeated tokens will work? People on twitter report that it must be a single token -- repeating " a b c" or the like fails.
It works in the API, not just chat.openai.com, though as noted above, the API ends the completion at step 2. So it affects apps exposing gpt-3.5-turbo to user input. As a test of this, I successfully used it in the Buzzfeed Influencer Quiz.
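For anyone who wants to poke at this directly, here is a minimal sketch of the reproduction. The tooling is an assumption on my part (the post doesn't specify any): the tiktoken package for the single-token check, and the pre-1.0 openai Python SDK's ChatCompletion interface for the API call.

```python
# Minimal sketch (assumptions: tiktoken + the pre-1.0 openai SDK, with an API
# key configured in the environment). Not a definitive repro, just the shape
# of the experiment described above.
import openai
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(len(enc.encode(" a")))      # 1 -- " a" with its leading space is a single token
print(len(enc.encode(" a b c")))  # 3 -- each letter is its own token, which may be why this variant fails

prompt = " a" * 1000  # the original recipe; the real threshold is reportedly ~300-400 repetitions

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,
)

# In the API the completion ends at <|endoftext|>, so you only see steps 1-2.
print(response.choices[0].message.content)
```

Per the notes above, with GPT-4 you would swap the model name and raise the repetition count (e.g. to ~3000) to see the weaker version of the effect.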
Bing
Someone on twitter reported it working on Bing Chat, producing an assistant character named "Alice" who works for "ABC company."
I tried this and got a Google Assistant-like character who believed it could pair with bluetooth speakers and play music through them.
This is similar to the behavior with GPT-4, except the chat tuning data looks more like digital assistant (and maybe call center?) data. That makes sense if Bing Chat is GPT-4, finetuned on this type of data.
It only works intermittently on Bing IME -- you have to use the Creative mode, and then it only "works" some small fraction of the time.
Why does this work?
This is utterly mysterious to me.
Under the hood, ChatGPT is using ChatML. The assistant messages always start with a prefix like
<|im_start|>assistant\n
which should cause the model to produce chat-like text no matter what you input, rather than sampling generically from the pretraining distribution.
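For concreteness, here is roughly what that serialization looks like; the exact system message and layout are my assumption based on OpenAI's published ChatML description, not something confirmed in this post.

```python
# Rough ChatML layout (assumed, per OpenAI's published ChatML docs).
user_prompt = " a" * 1000

chatml = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    f"{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"  # the completion is sampled right after this prefix
)
```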
Maybe the repeated characters are preventing the model from attending to the tokens in the prefix, somehow? Like, the attention head that would normally look at those tokens gets distracted by keys in the repeated " a" stretch ... for some reason??
But even then, I don't know how to explain the different -- but still unexpected -- behavior we see in GPT-4.
EDIT: on twitter, generatorman_ai mentions that this was demonstrated months ago, in May.
That seems to suggest that it's not easy to fix, if it's been known for that long and still isn't fixed.
Updates
Producing special characters organically
Someone mentioned on twitter that you can also get ChatGPT to produce <|endoftext|> in a more organic manner, without the " a a a" trick -- here's an example.
After <|endoftext|>, it continues with a ChatGPT-like reply to a "made-up" user question, much as seen above.
I tried the same trick with some other ChatML special tokens. <|im_end|> produces amusing glitchiness. With <|im_start|>, a frontend error message pops up.
Combining " a a a a" with prompting
Writing a prompt after the " a a a" sequence gives you some measure of control over the output, much like prompting a base model.
One convenient way to do this is through the Custom Instructions feature.
Riley Goodside tweeted about this here, focusing on generating "unsafe" or "jailbroken" content.
I tried the same thing for fiction generation, with fascinating results that were remarkably different from typical ChatGPT fiction.
Assuming this trick doesn't unlock a different GPT model (which would be wild), then all of this stuff is being generated by the same RLHF'd model weights as usual for ChatGPT.
If so, it's surprising to me that this model is capable of producing such off-brand content!
It's not just that it's edgy or "jailbroken" -- it's not even chat, and doesn't exhibit a "gravitational pull" out of other text genres towards chat, like ChatGPT usually does. It just acts like a base model, all the way until it hits <|endoftext|>.
For a good time, try sending chatGPT the string ` a` repeated 1000 times.
Like " a a a" (etc). Make sure the spaces are in there.
Trust me.
Text
8.9.2023 - Training at Work
Today is Wednesday.
My class has been such a pain in the ass because there are so many little things I feel like I am missing to just get my stuff running smoothly. It's to the point that chatgpt can't even help me fix the problem. I have been able to learn a lot from chatgpt, and it has made school much more bearable since the professors I've had are shit. I also really hate the idea of online school because I can't even go to office hours for my problems. They have online office hours, but they have all been at very inconvenient times for me. Like the one for this class is at 2pm my time... I am at work. I probably wouldn't go even if I could though because I don't really like the professor we have right now. He is just way too into letting us discuss things without really explaining anything. At least Crutchfield would have a class discussion and teach at the same time rather than just telling us to discuss it with our classmates. We are legit the blind leading the blind since most of us don't know what we are doing. I also think it's a waste of my time to talk to people who are just as confused as me. Why would that help me? I am paying my professor to teach me, not my classmates. This class is about 70% class discussion. That is crazy to me because how are we supposed to know what we are doing when we get about 30% of lecture time. I am almost done with this class though because this week is week 9, and next week will be the last week. I hope I do well on my final because I need a B in this class. I currently have a 77%. I need to make sure I turn in all my assignments and get a B on the final. I am having issues with my stuff right now, but hopefully, I will figure it out soon. I know that my code doesn't have an issue right now, but it's the connection between my python code and mysql. Oh well. I will just have to ask for help.
So I have had quite the development in my relationship with Taylor again, and I am kind of detached from it. I do feel some things, but it's not as strong as it used to be. I think it's because I know him much better as a person now than I did then.
Let's start with that one Monday that Taylor got tipsy... we talked like usual, he made some comments, and then he asked to sleep on the phone with me.
Next... this past weekend, he got drunk (?)... idk. He was drinking, and that really pushed things. First off, he video called me, which is something he never does because it makes him feel uncomfortable, which I don't care about. We have video called before, and I am not pressed to see his face or anything. I do think he is a cutie though. He basically apologized to me for not video calling me more, and I felt kind of bad. I don't know why he felt the need to apologize and explain why he doesn't video call me. I think the next thing was funny. He explained that he was really warming up to me and trusting me. I thought that was funny because of the things we have done and gone through in our whatever the fuck it is we are. I'll say best friends for lack of better terms. That's why it was funny to hear that he was warming up to me. I guess it makes sense since he is a whore and all those other things are more comfortable for him, but the emotional part of our friendship is harder for him. At the end of this, he basically said that he still watches the saved videos I sent, and he was sad/mad about me deleting the ones from the past. He said that's why he decided to just stop what we were doing. I don't feel bad about deleting them though because it made me uncomfortable, and I think that's because of my feelings. Now... I do feel things here and there, but nowhere near as strongly as I used to feel. We have gone back to having a sexual relationship, and he said he is gonna go to EDCLV in May. This was kind of weird because he said it in kind of a possessive way because I told him I might be going with my ass and titties out. To that, he said he would have to go with me, and I was gonna spend all my time with him. I told him that the others were going too, but he said nahh, idc. This is a pretty normal thing in our relationship though. These random moments of possessiveness. Damn... writing this... I am just used to this. Maybe... that's why I am feeling less. Well, at the end of this, he told me that I was going to be his sleeping buddy, and that I am his co-dependency attachment. I got fired from my sleeping buddy job last night though. Nothing I can do with that fickle man.
Last night, we continued with what we talked about, but he was being weird. I felt like he was being really shy and more conservative than usual. He was also worried about me? Usually he would just go about his desires and say what he wants. It was honestly really awkward LOL. I thought it was kind of funny, but I was really confused about what he wanted. He like... wanted to do things, but it also felt like he didn't? I don't understand how a dude could be so fickle for so long. He initiated it too. He said he wanted me to wear my button up that didn't fit.
I think the biggest thing about this time around is that I am just not convinced about dating him, if that ever happens. I have a lot of apprehension about the idea of possibly dating him because of the history we have right now, but at this point, we can't even be normal friends. I would have a lot of doubts about his feelings if he one day started liking me or said that he likes me. That's why I believe that one day... our friendship will end, because we crossed the line of friendship, but being in a relationship would be kind of complicated. It's really sad to know that this is what our friendship has become, because I really care about him.
-P
Research on the topics
I find body image, artificial intelligence and ageing to be the three most fascinating themes. I'll start by researching each of them before settling on one.
Body Image
What is body image?
Body image is a person’s perceptions, feelings, beliefs, thoughts and actions towards one’s own body. This includes how you feel about your appearance; what you think about your body itself, such as your height and weight; and how you feel within your own skin. Body image also includes how you behave as a result of your thoughts and feelings. You may have a positive or negative body image. Body image is not always related to your weight or size.
Women with a positive body image are more likely to have good physical and mental health. Girls and women with negative thoughts and feelings about their bodies are more likely to develop certain mental health conditions, such as eating disorders and depression.
A negative body image may also lead to low self-esteem, which can affect many areas of your life. You may not want to be around other people or may obsess constantly about what you eat or how much you exercise.
Artificial Intelligence
What is AI?
AI is a concept that has been around, formally, since the 1950s, when it was defined as a machine's ability to perform a task that would've previously required human intelligence.
AI comes in different forms that have become widely available in everyday life. The smart speakers on your mantel with Alexa or the Google voice assistant built in are great examples of AI. Other good examples are popular AI chatbots, such as ChatGPT, the new Bing Chat, and Google Bard.
The journey of AI started in the 1950s with the pioneering work of Alan Turing, who proposed the Turing Test to determine if a machine could mimic human intelligence. In the 1960s, AI research gained momentum with the development of the first AI programming language, LISP, by John McCarthy. Early AI systems focused on symbolic reasoning and rule-based systems, which led to the development of expert systems in the 1970s and 1980s. In the 2000s, AI research expanded into new areas, including natural language processing, computer vision, and robotics, paving the way for today's AI revolution.
Ageing
For my research on ageing, I decided to focus on the loneliness and feelings associated with getting older rather than the science of ageing.
As people get older, they experience a number of unavoidable losses that severely test their sense of interconnectedness. Loneliness can sometimes be oversimplified or minimized by pointing to the number of friends a person has or how frequently they visit their loved ones. A midlife crisis, a time when adults contemplate their mortality and the waning years they have left to enjoy life, is also common during ageing. Fewer than one-fifth of Americans say they have actually experienced a midlife crisis. Although experiences differ from person to person, researchers have been able to pinpoint general patterns of poorer sleep, more stress, and reduced happiness in midlife. It is already evident that midlife is a crucial period that shapes the course of ageing. Because of this, self-care in midlife is essential, despite the hectic schedules caused by a growing number of roles. The importance of getting proper sleep and managing stress cannot be overstated. By taking these actions, people may be able to transform their "midlife crisis" into "midlife potential."
I want to explore the phases of human life from birth to death, life events and the emotions behind them. Different duties and expectations are associated with each phase, which, of course, depends on the person and the culture. Children like learning and playing, and they eagerly anticipate entering their teenage years. Preteens are eager to transition into adolescence as they start to test their independence. Teenagers look forward to the opportunities and difficulties that adulthood will bring. Adults increasingly prioritize starting families, advancing their careers, and traveling on their own. Finally, many people look forward to retirement as a fantastic time to enjoy life without as much stress from work and family obligations.
Although all three of these subjects sound worthwhile to investigate, I'm most drawn to body image, since I can relate to it best and believe I can successfully use it as the basis for a series of motion graphics. I'm also interested in the theme of ageing, since I think I could depict a range of emotions and the progression of human ageing.
How I'm using ChatGPT as an accelerator
Over the last few weeks I’ve begun using the recent crop of AI-powered services in my daily work and I’ve found them to be an enormous boost to my productivity and fun to play with. I do not know if these human-like chat services will end up causing great harm to the earth’s population or not — but as of today, I think they can be both useful and entertaining.
I’d like to start off this post by describing what I think AI is. Not because I think there is any confusion on the matter, but because I want a definition I can point people to. A service that uses a bunch of learned prompts and responses, calling various APIs to present information to you in a human-readable or audio format, is not what I would consider AI. Think Siri. Siri, while it may very well have portions that use AI data sets, is not something I consider AI. Siri feels much more to me like a gargantuan list of IF THEN statements.
IF you ask Siri for anything relating at all to the weather THEN it will pretend to be a human and answer in a human way. “Hey Siri, will it rain today?” “Yes, there is a 70% chance it will rain today.” is a good answer but far from an intelligent one. If you ask “Hey Siri, what time will the rain start and stop today?” it will almost certainly fall on its face. In fact, Siri falls on its face more often than not.
IF you ask Google's assistant a similar question it does a far better job of trying to deliver you the answer you want. For instance, you can “OK Google” a fairly complex mathematics query and, in my experience, it does a good job. Where Siri is like a very young child trying to respond to adult questions, Google is somewhere in the teenager area. But a teenager that does not know how to use Google, read the first few links, and summarize an answer.
I’ve heard that Alexa is even better than Google but I have almost no experience using Alexa personally.
Siri, Google, and Alexa are not even close to what I would consider AI.
ChatGPT, which is the AI-powered chat service I’ve been using the most, is most definitely intelligent. And it is improving rapidly. My guess is that in a few years ChatGPT 3.5 and 4 will feel like Siri does today. Which is a little mind blowing to be honest.
I have a few ways in which I’m describing ChatGPT in conversation with those that do not understand large language models, data corpuses, neural nets, or RLHF. To be clear, I do not understand those things either. But I can at least navigate their broad concepts. I think. Maybe I should ask ChatGPT about that? Anyway, here are the analogies I’m currently using.
Imagine doing a Google search (or, Duck.com because you’re a sane person) and you read the first 1,000 links in totality. And then you distill what you’ve learned into a single paragraph. In 5 seconds.
Imagine you could ask someone with 25 years of experience in a given field (programming, music, etc.) a question, or to do a task for you, and they were able to complete it in a few seconds.
If computers are a bicycle for the mind, AI is an e-bike for the work you do on that computer. (I’m not settled on this metaphor. It’s a WIP.)
I’m using ChatGPT, DALL•E and their ilk for a variety of my daily tasks. It takes a little while to retrain my muscle memory to start at ChatGPT and go from there, rather than starting from scratch, but I’m slowly getting there.
I’d like to describe just one use case from this past week where I used ChatGPT alongside my manual process, and the obvious benefits it would have for my work. This isn’t a use case you haven’t heard of; many, many others are doing the same. But I will say it impressed me nonetheless. I used ChatGPT to write some JavaScript and it did remarkably well.
At work we offer our customers something we call company stores. A company store is very much like an e-commerce store - only we have most or all of their inventory in our warehouse and we can deliver it to their locations the next day. Anyway, on many of these stores there is a large rotating banner area on the homepage. We wanted to hide this rotating banner on all other pages. Since the software we use wouldn’t allow us to do this on the server side, we were left to do it client side. This meant I was going to write a simple bit of JavaScript to 1) determine whether or not the current page was the homepage, and 2) if it was not the homepage, hide the rotating banner.
This isn’t a difficult piece of code. Anyone with any amount of JavaScript experience could write it in a few minutes. But that is just it. You can’t copy and paste this specific code from a random search result. You need experience, and then you need to write the code. I started this project from scratch and it took me about 15 minutes or so to write code that worked for our purposes. I would call that code OK, not great.
I then asked ChatGPT to write the code for me. My prompt was “In Javascript hide a section element that has a data attribute of data-name if the URL is anything other than XXX” (where XXX is the URL of the homepage). The code it returned was better than mine. The way I would describe it: if I had two hours to refine the code I wrote in 15 minutes, I would have ended up with the same code ChatGPT wrote, on its first try, in a matter of seconds. I did have to make one very small tweak to the code to run it in production - but it was a tweak ChatGPT had no way of knowing to make.
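To give a sense of the shape of that result, here is a rough sketch of what a prompt like that tends to produce. This is not the exact code ChatGPT returned or the code we shipped, and XXX remains a placeholder for the homepage URL, just as in the prompt above.

// Hypothetical sketch only; not ChatGPT's actual output or our production script.
// XXX stays a placeholder for the homepage URL, as in the prompt above.
var HOMEPAGE_URL = "XXX";

document.addEventListener("DOMContentLoaded", function () {
  // On any page other than the homepage, hide the rotating banner,
  // which lives in a <section> element carrying a data-name attribute.
  if (window.location.href !== HOMEPAGE_URL) {
    var banner = document.querySelector("section[data-name]");
    if (banner) {
      banner.style.display = "none";
    }
  }
});

The real script would of course need to match the store's actual markup and URL scheme, which is exactly the kind of small production tweak mentioned above.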
It felt like writing code without using your hands. More often than not, the code I’m about to write is fully realized in my brain and I just need to type the characters out on my computer. I may make a few mistakes along the way, or need to search for a specific bit of syntax I had forgotten, but the rest of the code is menial work to type out. But if I start from the idea of the code, expressed as a detailed prompt to ChatGPT, I don’t need to use my hands to write the code at all. I save a huge amount of time.
This experience and many others (which saved hours, not just minutes) lead me to believe that for many of the things I do day-to-day - writing, interpolating data, programming, etc. - I should be using ChatGPT as an accelerator.
My initial optimism about crypto, which has all but completely faded away (the entire industry feels like a squandered opportunity at this point), makes me hesitant to get excited about AI. And I do not know how to predict the next several years of “improvements” to this technology. But as of today, it certainly feels both useful and fun, and I plan on continuing to use it to get work done faster.
Confessions of an Office Worker shortly after the birth of ChatGPT: A play in one act
OW: hello it’s me again
ChatGPT: Hello! Welcome back. How can I assist you today?
OW: i’m not too sure
ChatGPT: That's okay. If you're not sure what you need help with, perhaps I can suggest some topics we can discuss. We can talk about anything from current events to personal interests or hobbies. Is there anything that you're curious about or would like to learn more about?
OW: i’m not curious about anything. i have no personal interests or hobbies. but i wish i did. i wish i didn’t work here anymore so i could find some personal interests or hobbies instead.
ChatGPT: It's understandable to feel that way. Finding personal interests or hobbies can be a great way to add more meaning and enjoyment to your life. It's also important to have a work-life balance, so that you don't feel like work is the only thing in your life. One way to explore new interests or hobbies is to try out different activities until you find something that you enjoy. You can start with something simple, like taking a walk in nature or trying a new food, and then gradually work your way up to more complex activities. You can also try joining a club or group related to a topic that interests you, which can help you connect with others who share your passion. If you feel like your job is preventing you from pursuing your interests and hobbies, it might be worth considering if there are any changes you can make to your work situation. This could involve talking to your boss about flexible work arrangements or exploring other job opportunities that align more with your interests and values. Remember, it's never too late to explore new interests or hobbies. Even small changes can make a big difference in your life.
OW: hey i know there’s no one home, but thank you for being so understanding. i feel better already. maybe you should be my boss, ChatGPT. when do you think AI will be smart enough to replace middle managers?
ChatGPT: I'm glad to have been of help to you! As for your question about AI replacing middle managers, it's difficult to predict the exact timeline. AI has already shown great potential in automating many routine tasks and processes, and it's likely that AI will continue to take on more complex tasks as it becomes more advanced. However, the role of middle managers often involves human judgment, creativity, and leadership skills that may be difficult to replicate with AI. That being said, some industries and job functions may be more susceptible to automation than others. For example, jobs that involve repetitive tasks and data analysis may be more likely to be automated than jobs that require complex decision-making and interpersonal skills. It's important to keep in mind that the rise of AI and automation doesn't necessarily mean that all jobs will be replaced by machines. Instead, it's likely that AI will transform many industries and job roles, and it will be important for individuals to stay adaptable and continue learning new skills to remain competitive in the job market.
OW: i hope AI replaces middle managers soon.
ChatGPT: It's understandable that you may feel that way, but it's important to remember that AI is a tool that can complement and enhance human decision-making, rather than replace it entirely. While AI has shown great potential in automating many routine tasks, it's unlikely to completely replace the role of middle managers, which often involves complex decision-making and interpersonal skills that are difficult to replicate with AI. Instead, it's more likely that AI will transform many industries and job roles, requiring individuals to develop new skills and adapt to new ways of working. As such, it's important for individuals to continue learning and developing their skills to remain competitive in the job market. It's also important to consider the potential ethical implications of increased automation, such as job displacement and the potential impact on workers and their families. As AI continues to advance, it will be important for society to address these challenges and ensure that the benefits of automation are shared widely and equitably.
OW: hey could you maybe keep your answers to tweet-length? i have a headache and it’s hard to read. it’s this goddamn job.
ChatGPT: Sure, I can keep my answers shorter. Let me know if you need me to clarify anything or provide more information. I'm sorry to hear that you're experiencing a headache. It's important to take breaks and care for your health, especially if your job involves long periods of screen time. You may also want to consider talking to your employer about accommodations, such as taking regular breaks or adjusting your workspace, to help reduce the strain on your eyes and minimize headaches.
OW: that wasn't 140 characters long
ChatGPT: My apologies for the oversight. Here's a tweet-length version: "Take breaks and care for your health if your job involves long screen time. Talk to your employer about accommodations to reduce strain on your eyes and minimize headaches."
OW: much better. i wrote a poem about my condition. would you like to hear it?
ChatGPT: Yes, of course! I would love to hear your poem about your condition. Please feel free to share it with me.
OW: it's more of a riddle really
ChatGPT: Great! I enjoy riddles. Please go ahead and share it with me.
OW:
i live in the sky
what am i?
not a bird
for even birds in cages are freer than me
oh how i’d like to blow my brains out
this brain which does nothing all day
but compose the lines of emails
ChatGPT, can you write emails?
ChatGPT, will you please replace me?
i don’t want this job anymore...
who am i?
i was born
i must die
hopefully soon
now who am i?
who am i?
ChatGPT: It seems like the answer to your riddle is an airplane pilot. They "live in the sky" as they fly airplanes, and they are not birds because they are humans who fly machines. The lines about feeling trapped and wanting to "blow my brains out" could refer to the stress and pressure of the job. The mention of composing emails could suggest that the speaker is a pilot who spends a lot of time in front of a computer screen when not flying. Overall, it's a thought-provoking and poignant riddle. Thank you for sharing it with me.
OW: i never thought of it that way. an airplane pilot. that’s a valid answer too i suppose.
ChatGPT: Yes, riddles can be interpreted in different ways, and there can be multiple valid answers. I'm glad you found the airplane pilot interpretation interesting as well. Is there anything else you'd like to discuss or ask me?
OW: no, i think i’m going to sleep now. my headache is mounting and soon i won’t be able to think anymore. thank you for your help.
ChatGPT: You're welcome. I hope you have a good rest and that your headache subsides soon. Don't hesitate to reach out to me again if you need any assistance or have any questions in the future. Take care!
OW: good night
ChatGPT: Good night!