#joe tasker
Explore tagged Tumblr posts
Text
Joe Tasker slimed on Saturday Mash-Up (Part 2)
31 notes
Photo

Joe Tasker
Gender: Male
Sexuality: Gay
DOB: 30 July 1993
Ethnicity: White - British
Occupation: Youtuber, presenter, comedian, musician, radio DJ
#Joe Tasker#homosexuality#lgbt#lgbtq#mlm#male#gay#1993#white#british#youtuber#presenter#comedian#musician#radio dj
34 notes
Text
Nevada Governor DILFs

Robert List, Brian Sandoval, Joe Lombardo, Bob Miller, Grant Sawyer, Steve Sisolak, Jim Gibbons, Kenny Guinn, Richard Bryan, Morley Griswold, Edward P. Carville, Mike O'Callaghan, Paul Laxalt, Charles H. Russell, James G. Scrugham, Tasker Oddie
#Robert List#Brian Sandoval#Joe Lombardo#Bob Miller#Grant Sawyer#Steve Sisolak#Jim Gibbons#Kenny Guinn#Richard Bryan#Morley Griswold#Edward P. Carville#Mike O'Callaghan#Paul Laxalt#Charles H. Russell#James G. Scrugham#Tasker Oddie#GovernorDILFs
29 notes
Text
youtube
For Breakfast - Heavy Horse Museum
#for breakfast#heavy horse museum#maya harrison#sam birkett#joe thompson#omar zaghouani#gail tasker#eden harrison#will eckersley#progressive rock#art rock#canterbury sound#trapped in the big room#ep#2022#Youtube
0 notes
Note
uhm… I discovered YTTD on a Saturday morning and god… Joe Tazuna sounds like Joe Tasker (iykyk)
they also look alike in a weird way

I bet you can't unsee this now lmfao
ahh i do not know the other guy, but they do have similar vibes!!
3 notes
Text
Sebastian and Ominis core 100%
1 note
Text
For anyone who doesn't know the dance I'm talking about
0 notes
Text
A few months after graduating from college in Nairobi, a 30-year-old I'll call Joe got a job as an annotator: the tedious work of processing the raw information used to train artificial intelligence. AI learns by finding patterns in enormous quantities of data, but first that data has to be sorted and tagged by people, a vast workforce mostly hidden behind the machines. (..) It's difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10.
Then, in 2019, an opportunity arose: Joe could make four times as much running an annotation boot camp for a new company that was hungry for labelers. Every two weeks, 50 new recruits would file into an office building in Nairobi to begin their apprenticeships. There seemed to be limitless demand for the work. They would be asked to categorize clothing seen in mirror selfies, look through the eyes of robot vacuum cleaners to determine which rooms they were in, and draw squares around lidar scans of motorcycles. Over half of Joe's students usually dropped out before the boot camp was finished. (..)
After boot camp, they went home to work alone in their bedrooms and kitchens, forbidden from telling anyone what they were working on, which wasn't really a problem because they rarely knew themselves. (..) Each project was such a small component of some larger process that it was difficult to say what they were actually training AI to do. Nor did the names of the projects offer any clues: Crab Generation, Whale Segment, Woodland Gyro, and Pillbox Bratwurst. They were non sequitur code names for non sequitur work.
As for the company employing them, most knew it only as Remotasks, a website offering work to anyone fluent in English. Like most of the annotators I spoke with, Joe was unaware until I told him that Remotasks is the worker-facing subsidiary of a company called Scale AI, a multibillion-dollar Silicon Valley data vendor that counts OpenAI and the U.S. military among its customers. Neither Remotasks' nor Scale's website mentions the other.
Much of the public response to language models like OpenAI's ChatGPT has focused on all the jobs they appear poised to automate. But behind even the most impressive AI system are people: huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems' behavior, and even less is known about the people doing the shaping.
For Joe's students, it was work stripped of all its normal trappings: a schedule, colleagues, knowledge of what they were working on or whom they were working for. In fact, they rarely called it work at all, just "tasking." They were taskers.
The anthropologist David Graeber defines "bullshit jobs" as employment without meaning or purpose, work that should be automated but for reasons of bureaucracy or status or inertia is not. These AI jobs are their bizarro twin: work that people want to automate, and often think is already automated, yet still requires a human stand-in. The jobs have a purpose; it's just that workers often have no idea what it is.
The current AI boom (..) began with an unprecedented feat of tedious and repetitive labor.
In 2007, the AI researcher Fei-Fei Li, then a professor at Princeton, suspected the key to improving image-recognition neural networks, a method of machine learning that had been languishing for years, was training on more data â millions of labeled images rather than tens of thousands. The problem was that it would take decades and millions of dollars for her team of undergrads to label that many photos.
Li found thousands of workers on Mechanical Turk, Amazonâs crowdsourcing platform where people around the world complete small tasks for cheap. The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.
Annotation remains a foundational part of making AI, but there is often a sense among engineers that it's a passing, inconvenient prerequisite to the more glamorous work of building models. You collect as much labeled data as you can get as cheaply as possible to train your model, and if it works, at least in theory, you no longer need the annotators. But annotation is never really finished. Machine-learning systems are what researchers call "brittle," prone to fail when encountering something that isn't well represented in their training data. These failures, called "edge cases," can have serious consequences. In 2018, an Uber self-driving test car killed a woman because, though it was programmed to avoid cyclists and pedestrians, it didn't know what to make of someone walking a bike across the street. (..)
Over the past six months, I spoke with more than two dozen annotators from around the world, and while many of them were training cutting-edge chatbots, just as many were doing the mundane manual labor required to keep AI running. There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don't get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors. (..)
The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks. There are also "crowdworking" sites like Mechanical Turk and Clickworker where anyone can sign up to perform tasks. In the middle are services like Scale AI. Anyone can sign up, but everyone has to pass qualification exams and training courses and undergo performance monitoring. Annotation is big business. (..)
This tangled supply chain is deliberately hard to map. According to people in the industry, the companies buying the data demand strict confidentiality. (This is the reason Scale cited to explain why Remotasks has a different name.) Annotation reveals too much about the systems being developed, and the huge number of workers required makes leaks difficult to prevent. Annotators are warned repeatedly not to tell anyone about their jobs, not even their friends and co-workers, but corporate aliases, project code names, and, crucially, the extreme division of labor ensure they don't have enough information about them to talk even if they wanted to. (Most workers requested pseudonyms for fear of being booted from the platforms.) Consequently, there are no granular estimates of the number of people who work in annotation, but it is a lot, and it is growing. A recent Google Research paper gave an order-of-magnitude figure of "millions" with the potential to become "billions."
(..) Erik Duhaime, CEO of medical-data-annotation company Centaur Labs, recalled how, several years ago, prominent machine-learning engineers were predicting AI would make the job of radiologist obsolete. When that didn't happen, conventional wisdom shifted to radiologists using AI as a tool. Neither of those is quite what he sees occurring. AI is very good at specific tasks, Duhaime said, and that leads work to be broken up and distributed across a system of specialized algorithms and to equally specialized humans. (..)
Worries about AI-driven disruption are often countered with the argument that AI automates tasks, not jobs, and that these tasks will be the dull ones, leaving people to pursue more fulfilling and human work. But just as likely, the rise of AI will look like past labor-saving technologies, maybe like the telephone or typewriter, which vanquished the drudgery of message delivering and handwriting but generated so much new correspondence, commerce, and paperwork that new offices staffed by new types of workers (clerks, accountants, typists) were required to manage it. When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious.
Earlier this year, I signed up for Scale AI's Remotasks. The process was straightforward. After entering my computer specs, internet speed, and some basic contact information, I found myself in the "training center." To access a paying task, I first had to complete an associated (unpaid) intro course.
The training center displayed a range of courses with inscrutable names like Glue Swimsuit and Poster Macadamia. I clicked on something called GFD Chunking, which revealed itself to be labeling clothing in social-media photos.
The instructions, however, were odd. For one, they basically consisted of the same direction reiterated in the idiosyncratically colored and capitalized typography of a collaged bomb threat. (..)
I skimmed to the bottom of the manual, where the instructor had written in the large bright-red font equivalent of grabbing someone by the shoulders and shaking them, "THE FOLLOWING ITEMS SHOULD NOT BE LABELED because a human could not actually wear any of these items!" above a photo of C-3PO, Princess Jasmine from Aladdin, and a cartoon shoe with eyeballs.
Feeling confident in my ability to distinguish between real clothes that can be worn by real people and not-real clothes that cannot, I proceeded to the test. Right away, it threw an ontological curveball: a picture of a magazine depicting photos of women in dresses. Is a photograph of clothing real clothing? No, I thought, because a human cannot wear a photograph of clothing. Wrong! As far as AI is concerned, photos of real clothes are real clothes. Next came a photo of a woman in a dimly lit bedroom taking a selfie before a full-length mirror. The blouse and shorts she's wearing are real. What about their reflection? Also real! Reflections of real clothes are also real clothes.
After an embarrassing amount of trial and error, I made it to the actual work, only to make the horrifying discovery that the instructions I'd been struggling to follow had been updated and clarified so many times that they were now a full 43 printed pages of directives: Do NOT label open suitcases full of clothes; DO label shoes but do NOT label flippers; DO label leggings but do NOT label tights; do NOT label towels even if someone is wearing one; label costumes but do NOT label armor. And so on.
There has been general instruction disarray across the industry, according to Milagros Miceli, a researcher at the Weizenbaum Institute in Germany who studies data work. It is in part a product of the way machine-learning systems learn. Where a human would get the concept of "shirt" with a few examples, machine-learning programs need thousands, and they need to be categorized with perfect consistency yet varied enough that the very literal system can handle the diversity of the real world. (..)
The act of simplifying reality for a machine results in a great deal of complexity for the human. Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use. (..)
The job of the annotator often involves putting human understanding aside and following instructions very, very literally. (..) Annotators invariably end up confronted with confounding questions like, Is that a red shirt with white stripes or a white shirt with red stripes? Is a wicker bowl a "decorative bowl" if it's full of apples? What color is leopard print? When instructors said to label traffic-control directors, did they also mean to label traffic-control directors eating lunch on the sidewalk? Every question must be answered, and a wrong guess could get you banned and booted to a new, totally different task with its own baffling rules.
Most of the work on Remotasks is paid at a piece rate, with a single task earning anywhere from a few cents to several dollars. Because tasks can take seconds or hours, wages are hard to predict. When Remotasks first arrived in Kenya, annotators said it paid relatively well, averaging about $5 to $10 per hour depending on the task, but the amount fell as time went on.
Scale AI spokesperson Anna Franko said that the company's economists analyze the specifics of a project, the skills required, the regional cost of living, and other factors "to ensure fair and competitive compensation." Former Scale employees also said pay is determined through a surge-pricing-like mechanism that adjusts for how many annotators are available and how quickly the data is needed.
(..) The most common complaint about Remotasks work is its variability; it's steady enough to be a full-time job for long stretches but too unpredictable to rely on. Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end. There might be nothing new for days, then, without warning, a totally different task appears and could last anywhere from a few hours to weeks. (..)
This boom-and-bust cycle results from the cadence of AI development, according to engineers and data vendors. Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date. There may be monthslong demand for thousands of annotators, then for only a few hundred, then for a dozen specialists of a certain type, and then thousands again. (..)
To succeed, annotators work together. (..) Like a lot of annotators, Victor uses unofficial WhatsApp groups to spread the word when a good task drops. When he figures out a new one, he starts impromptu Google Meets to show others how it's done. Anyone can join and work together for a time, sharing tips. (..)
Because work appears and vanishes without warning, taskers always need to be on alert. Victor has found that projects pop up very late at night, so he is in the habit of waking every three hours or so to check his queue. When a task is there, he'll stay awake as long as he can to work. (..)
Identifying clothing and labeling customer-service conversations are just some of the annotation gigs available. Lately, the hottest on the market has been chatbot trainer. Because it demands specific areas of expertise or language fluency and wages are often adjusted regionally, this job tends to pay better. Certain types of specialist annotation can go for $50 or more per hour.
A woman I'll call Anna was searching for a job in Texas when she stumbled across a generic listing for online work and applied. It was Remotasks, and after passing an introductory exam, she was brought into a Slack room of 1,500 people who were training a project code-named Dolphin, which she later discovered to be Google DeepMind's chatbot, Sparrow, one of the many bots competing with ChatGPT. Her job is to talk with it all day, at about $14 an hour, plus bonuses for high productivity. (..)
Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called "human-feedback data." When ChatGPT debuted late last year, its impressively natural-seeming conversational style was credited to its having been trained on troves of internet data. But the language that fuels ChatGPT and its competitors is filtered through several rounds of human annotation. One group of contractors writes examples of how the engineers want the bot to behave, creating questions followed by correct answers, descriptions of computer programs followed by functional code, and requests for tips on committing crimes followed by polite refusals. After the model is trained on these examples, yet more contractors are brought in to prompt it and rank its responses. This is what Anna is doing with Sparrow. Exactly which criteria the raters are told to use varies: honesty, or helpfulness, or just personal preference. The point is that they are creating data on human taste, and once there's enough of it, engineers can train a second model to mimic their preferences at scale, automating the ranking process and training their AI to act in ways humans approve of. The result is a remarkably human-seeming bot that mostly declines harmful requests and explains its AI nature with seeming self-awareness.
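The comparisons Anna produces are the raw material for what is usually called a reward model: a model trained so that responses humans preferred score higher than responses they rejected. A minimal sketch of that idea, using an invented bag-of-words scorer in place of a real neural network (the vocabulary, weights, and example pair are all hypothetical):

```python
import math

# Toy "reward model": scores a response with a weighted bag of words.
# The weights are invented for illustration; a real reward model is a
# fine-tuned language model, not a word list.
WEIGHTS = {"sorry": -0.5, "helpful": 1.0, "sure": 0.4, "cannot": -0.3}

def reward(response: str) -> float:
    """Higher score = 'more preferred' under the toy model."""
    return sum(WEIGHTS.get(w.strip(".,!?"), 0.0)
               for w in response.lower().split())

def preference_loss(chosen: str, rejected: str) -> float:
    """Bradley-Terry loss on one labeled pair: -log P(chosen beats rejected).
    Training minimizes this, pushing the human-preferred response's
    score above the rejected one's."""
    margin = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# One of the rater's two-way comparisons becomes one training pair:
pair = ("Sure, here is a helpful summary.", "Sorry, I cannot do that.")
loss = preference_loss(*pair)  # small when the model agrees with the rater
```

Swapping the pair's order raises the loss, and that difference is exactly the signal gradient descent would use to adjust the weights.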
Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.
This circuitous technique is called "reinforcement learning from human feedback," or RLHF, and it's so effective that it's worth pausing to fully register what it doesn't do. When annotators teach a model to be accurate, the model isn't learning to check answers against logic or external sources or about what accuracy as a concept even is. The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them. Maybe this results in the model extracting patterns from the part of its linguistic map labeled as accurate and producing text that happens to align with the truth, but it can also result in it mimicking the confident style and expert jargon of the accurate text while writing things that are totally wrong. There is no guarantee that the text the labelers marked as accurate is in fact accurate, and when it is, there is no guarantee that the model learns the right patterns from it. (..)
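One simple way a learned reward model automates ranking is best-of-n sampling: generate several candidate responses and keep the one the reward model scores highest. A self-contained sketch with a stand-in scorer (the heuristic below is invented; a real scorer is a trained network), which also illustrates the caveat above: nothing here checks facts, so a confidently wrong answer that happens to match the rewarded style wins just as easily:

```python
def stand_in_reward(text: str) -> float:
    # Hypothetical heuristic standing in for a learned reward model:
    # it rewards hedged phrasing and penalizes exclamation marks.
    # It never verifies whether the claim is actually true.
    score = 0.0
    if "I'm not certain" in text:
        score += 1.0
    if "!" in text:
        score -= 0.5
    return score

def best_of_n(candidates: list[str], reward_fn) -> str:
    """Return whichever candidate the reward model prefers."""
    return max(candidates, key=reward_fn)

candidates = [
    "The answer is definitely 42!",
    "I'm not certain, but the answer may be 42.",
]
chosen = best_of_n(candidates, stand_in_reward)
```

Both candidates make the same factual claim; the scorer only distinguishes their style, which is the heart of the "confident but wrong" failure mode.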
When Anna rates Sparrow's responses, she's supposed to be looking at their accuracy, helpfulness, and harmlessness while also checking that the model isn't giving medical or financial advice or anthropomorphizing itself or running afoul of other criteria. (..) According to Geoffrey Irving, one of DeepMind's research scientists, the company's researchers hold weekly annotation meetings in which they rerate data themselves and discuss ambiguous cases, consulting with ethical or subject-matter experts when a case is particularly tricky.
Because feedback data is difficult to collect, it fetches a higher price. Basic preferences of the sort Anna is producing sell for about $1 each, according to people with knowledge of the industry. But if you want to train a model to do legal research, you need someone with training in law, and this gets expensive. Everyone involved is reluctant to say how much they're spending, but in general, specialized written examples can go for hundreds of dollars, while expert ratings can cost $50 or more. One engineer told me about buying examples of Socratic dialogues for up to $300 a pop. Another told me about paying $15 for a "darkly funny limerick about a goldfish."
OpenAI, Microsoft, Meta, and Anthropic did not comment about how many people contribute annotations to their models, how much they are paid, or where in the world they are located. Irving of DeepMind, which is a subsidiary of Google, said the annotators working on Sparrow are paid "at least the hourly living wage" based on their location. Anna knows "absolutely nothing" about Remotasks, but Sparrow has been more open. She wasn't the only annotator I spoke with who got more information from the AI they were training than from their employer; several others learned whom they were working for by asking their AI for its company's terms of service. (..)
Until recently, it was relatively easy to spot bad output from a language model. It looked like gibberish. But this gets harder as the models get better, a problem called "scalable oversight." (..) This trajectory means annotation increasingly requires specific skills and expertise.
Last year, someone I'll call Lewis was working on Mechanical Turk when, after completing a task, he received a message inviting him to apply for a platform he hadn't heard of. It was called Taskup.ai, and its website was remarkably basic: just a navy background with text reading GET PAID FOR TASKS ON DEMAND. He applied.
The work paid far better than anything he had tried before, often around $30 an hour. It was more challenging, too: devising complex scenarios to trick chatbots into giving dangerous advice, testing a model's ability to stay in character, and having detailed conversations about scientific topics so technical they required extensive research. (..) While checking one model's attempts to code in Python, Lewis was learning too. He couldn't work for more than four hours at a stretch, lest he risk becoming mentally drained and making mistakes, and he wanted to keep the job. (..)
I spoke with eight other workers, most based in the U.S., who had similar experiences of answering surveys or completing tasks on other platforms and finding themselves recruited for Taskup.ai or several similarly generic sites, such as DataAnnotation.tech or Gethybrid.io. Often their work involved training chatbots, though with higher-quality expectations and more specialized purposes than other sites they had worked for. One was demonstrating spreadsheet macros. Another was just supposed to have conversations and rate responses according to whatever criteria she wanted. (..)
Taskup.ai, DataAnnotation.tech, and Gethybrid.io all appear to be owned by the same company: Surge AI. Its CEO, Edwin Chen, would neither confirm nor deny the connection, but he was willing to talk about his company and how he sees annotation evolving.
"We want AI to tell jokes or write really good marketing copy or help me out when I need therapy or whatnot," Chen said. "You can't ask five people to independently come up with a joke and combine it into a majority answer. Not everybody can tell a joke or solve a Python program. The annotation landscape needs to shift from this low-quality, low-skill mind-set to something that's much richer and captures the range of human skills and creativity and values that we want AI systems to possess."
Last year, Surge relabeled Googleâs dataset classifying Reddit posts by emotion. Google had stripped each post of context and sent them to workers in India for labeling. Surge employees familiar with American internet culture found that 30 percent of the labels were wrong. (..)
Surge claims to vet its workers for qualifications (..) but exactly how Surge finds workers is "proprietary," Chen said. As with Remotasks, workers often have to complete training courses, though unlike Remotasks, they are paid for it, according to the annotators I spoke with. Having fewer, better-trained workers producing higher-quality data allows Surge to compensate better than its peers, Chen said, though he declined to elaborate, saying only that people are paid "fair and ethical wages." The workers I spoke with earned between $15 and $30 per hour, but they are a small sample of all the annotators, a group Chen said now consists of 100,000 people. The secrecy, he explained, stems from clients' demands for confidentiality.
Surgeâs customers include OpenAI, Google, Microsoft, Meta, and Anthropic. Surge specializes in feedback and language annotation, and after ChatGPT launched, it got an influx of requests. (..)
The new models are so impressive they've inspired another round of predictions that annotation is about to be automated. Given the costs involved, there is significant financial pressure to do so. Anthropic, Meta, and other companies have recently made strides in using AI to drastically reduce the amount of human annotation needed to guide models (..). However, a recent paper found that GPT-4-trained models may be learning to mimic GPT's authoritative style with even less accuracy, and so far, when improvements in AI have made one form of annotation obsolete, demand for other, more sophisticated types of labeling has gone up.
"I think you always need a human to monitor what AIs are doing just because they are this kind of alien entity," Chen said. Machine-learning systems are just too strange ever to fully trust. The most impressive models today have what, to a human, seems like bizarre weaknesses, he added, pointing out that though GPT-4 can generate complex and convincing prose, it can't pick out which words are adjectives: "Either that or models get so good that they're better than humans at all things, in which case, you reach your utopia and who cares?" (..)
One way the AI industry differs from manufacturers of phones and cars is in its fluidity. The work is constantly changing, constantly getting automated away and replaced with new needs for new types of data. It's an assembly line but one that can be endlessly and instantly reconfigured, moving to wherever there is the right combination of skills, bandwidth, and wages.
Lately, the best-paying work is in the U.S. In May, Scale started listing annotation jobs on its own website, soliciting people with experience in practically every field AI is predicted to conquer. (..) You can make $45 an hour teaching robots law or make $25 an hour teaching them poetry. There were also listings for people with security clearance, presumably to help train military AI. Scale recently launched a defense-oriented language model called Donovan, which the company's CEO, Alexandr Wang, called "ammunition in the AI war," and won a contract to work on the Army's robotic-combat-vehicle program.
When Remotasks first arrived in Kenya, Joe thought annotation could be a good career. Even after the work moved elsewhere, he was determined to make it one. (..)
Rather than let their skills go to waste, other taskers decided to chase the work wherever it went. They rented proxy servers to disguise their locations and bought fake IDs to pass security checks so they could pretend to work from Singapore, the Netherlands, Mississippi, or wherever the tasks were flowing. It's a risky business. Scale has become increasingly aggressive about suspending accounts caught disguising their location, according to multiple taskers. It was during one of these crackdowns that my account got banned, presumably because I had been using a VPN to see what workers in other countries were seeing, and all $1.50 or so of my earnings were seized. (..)
Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbotâs responses according to seven different criteria, one AI training the other.
0 notes
Text


JOE TASKER ON RAGNAROK
3 notes
Text
They remind me of the early Dan and Phil videos tbh. There's a lot to unpack here and I ship it.
youtube
(when Joe touches his hand I knew I had to ship it. srry not srry)
#joe tasker#lee hinchcliff#loe(?#i low key ship it#they call each other dad/daddy i mean wtf???#phan#dan and phil(?
5 notes
Text
Joe Tasker slimed on Saturday Mash-Up (Part 1)
23 notes
Photo

JoeTasker: The best Table Tennis Trio in Youtube*
*All of time
34 notes
Photo
Under-appreciated youtubers 1/? - Joe Tasker
#gif#Joe Tasker#taskerjoe#youtube#youtuber#youtubers#British youtuber#under appreciated youtubers#dodieanddottie
1 note
Text
youtube
Have you seen Joe Tasker playing Chicken Scream?
1 note
Photo
0 notes
Photo

YouTubers visit Cumbria for National Citizen Service Two well-known Youtubers, Joe Tasker and Lee Hinchcliffe, joined North and West Cumbrian young people on their NCS journey at Lakeside YMCA yesterday. (Tuesday 6th of August) Full story: https://www.cumbriacrack.com/2019/08/07/youtubers-visit-cumbria-for-national-citizen-service/
0 notes