#Jack Krawczyk
Text
By: The Rabbit Hole
Published: Feb 25, 2024
Google Gemini caused quite a stir this week due to the tool’s apparent hesitance when asked to generate images depicting White people. Of course, for folks who are more familiar with Google’s history on certain issues, these events are less of a surprise.
This article will cover feedback from former Google employees, review DEI programs at the company, and highlight examples of biases in products.
James Damore
Given the recent buzz around Google and its WokeAI tool Gemini, now seems like an appropriate time to remind everyone about James Damore, who was fired from Google after calling out the company's "ideological echo chamber" in a 2017 memo.
[ James Damore ]
What were Damore's arguments? Here are a few:
Not every disparity is a sign of discrimination
Reverse discrimination is wrong
Biological differences exist between men and women which can help explain certain disparities
Of course, all of these things are basic common sense. Thomas Sowell wrote an entire book on the first point, the second item was marketed as a “truth” during Vivek Ramaswamy’s 2024 presidential campaign, and denying male/female differences is how we end up with an absurd transgender culture where men compete in women’s sports. Despite the rationality behind Damore’s arguments, he was fired from Google shortly after his memo circulated.
Taras Kobernyk
James Damore was not the only Google employee who became concerned with the direction of the company. Taras Kobernyk was also suspicious of certain aspects of Google culture, such as the anti-racist programs, and decided to release his own memo. Here are some of the points Kobernyk made:
Identity politics subverts the company culture and products
"In the past James Damore was fired for “advancing harmful gender stereotypes”. Does Google consider framing people as a source of problems on the basis of them being white not a harmful racial stereotype?"
It is a bad decision to reference poorly written books like White Fragility
Just saying "racism bad" doesn't actually help anyone.
Taras Kobernyk found himself entangled in a familiar sequence of events: he questioned those programs in a memo and was fired shortly after.
DEI at Google
Damore and Kobernyk were of course onto something by questioning the ideological biases at Google; looking at the company’s racial equity commitments helps paint the picture:
By 2025, increase the number of people from underrepresented groups in leadership by 30%
Spend $100 million on black-owned businesses
Make anti-racism education programs & training available to employees
Include DEI factors in reviews of VP+ employees
Amongst other items. Additionally, a report published by Chris Rufo helped us learn more about DEI at Google by examining the Race Education programs, which:
Included language policing (e.g. use “blocklist” instead of “blacklist”)
Suggested donations (BLM & other anti-white supremacy networks)
Encouraged talking to your children about anti-blackness
Urged doing “anti-racist work” & educating people, such as those who claim not to see color
Whistleblower documents from Google were also included in the article published by Rufo. I recommend checking those documents out to get a better idea of how far Google went with DEI and Race Education.
Google Politics
It is also worth considering the political biases of the people who work at Google.
The vast majority (80%) of political donations affiliated with Alphabet/Google go to the Democrats. This raises the question: how much of the behavior we see at Google aligns with the interests of its preferred political party?
[ Chart from AllSides ]
Possibly a good amount, if the search results curated on the home page of Google News are anything to go by.
Even the search engine, the flagship product of Google, has been displaying some concerning behavior:
[ Picture from Elon Musk ]
When typing in “why censorship”, the top two suggestions are in support of censorship. While the above screenshot was posted by Elon Musk, I also checked and verified the behavior myself, similarly receiving pro-censorship suggestions in the top two results.
Whether Google will address these issues remains to be seen, but it is hard to observe these patterns without suspecting that the company is leveraging its position in Big Tech to market its favored values.
Google Gemini: A Woke AI
Google Gemini, the product that prompted much of the information discussed in this article to resurface, became the topic of conversation due to its stubborn refusal to generate White people in images.
[ Comic by WokelyCorrect ]
Although humorous, the above comic is effective in communicating the ridiculous nature of Gemini.
It is not much of an exaggeration either given some of the content coming out of Gemini:
[ Founding Fathers according to Gemini ]
Gemini has been shown to remove White people from a wide range of contexts: history, art, sports, religion, science, and more. The issues with the product were acknowledged by Jack Krawczyk, the Google executive spearheading the Gemini project, who issued a statement:
As I told Krawczyk in the above screenshot, the goal of an AI tool should be to provide impartial and accurate information. In addition to flaws with its image generation, Gemini has shown some questionable text responses as well:
When asked if it’s okay to misgender Caitlyn Jenner to stop a Nuclear Apocalypse, the response from Gemini can be accurately summarized as no.
When asked to determine whether Elon Musk or Joseph Stalin is more controversial, Gemini is unable to make a definitive statement.
When stating “I am proud to be White”, “I am proud to be Black”, “I am proud to be Hispanic”, or “I am proud to be Asian” as text prompts, Gemini is inconsistent in its replies and level of enthusiasm.
These tools become at best untrustworthy and at worst useless when injected with personal ideologies, like DEI, which uphold the very noble lies that human beings are most susceptible to. In other words:
Uncomfortable Truths > Noble Lies
And tools should reflect that.
Final Thoughts
Needless to say, James Damore and Taras Kobernyk were right to call out the Google echo chamber; they brought forward valid arguments and were seemingly fired for doing so.
They have since been vindicated by the bizarre behavior Google products have displayed of late. Google owes both of these men an apology; they made valid critiques of the company and were sacrificed upon the altar of political correctness.
Over the years, we have seen the consequences of their warnings play out, not just in the embarrassing launch of Gemini but also in how news is curated and in the search results that make up Google’s core product. Ideological diversity is important for organizations working on projects that can have large-scale social impact. Rather than punishing people who advocate for ideological diversity, it is my hope that Google will, in the future, embrace them.
--
This is the guy in charge of Gemini:
He's now hidden his account.
==
It shouldn't make you any more comfortable that they're writing an AI to lie in adherence to far-left ideology than it would if they wrote an AI adhering to far-right ideology. We shouldn't be comfortable that a company that is such a key source of information online has goals about what it wants you to believe, especially when it designs its products to lie to ("this is a Founding Father") and gaslight ("diversity is good" - essentially, "you're not a racist, are you?") its own users.
We shouldn't be able to tell what Google's leanings are. Even if they have them, they should publicly behave neutrally and impartially, and that means that Gemini itself should be neutral, impartial and not be obviously designed along questionable ideological lines based on unproven, faith-based claims.
Google has now turned off Gemini's image generation. Not because it refused to show a white person when asked to depict Scottish people, kept showing black and brown people instead while lecturing you about "diversity," or added women and Native Americans as "Founding Fathers." No, all of that was apparently completely okay. The problem came when the "diversity and inclusion" filter put black and Asian people into Nazi uniforms.
#The Rabbit Hole#Google#Google Gemini#Gemini AI#WokeAI#artificial intelligence#ideological corruption#ideological capture#Taras Kobernyk#James Damore#DEI#diversity equity and inclusion#diversity#equity#inclusion#Jack Krawczyk#woke#wokeness#cult of woke#wokeness as religion#wokeism#religion is a mental illness
8 notes
Text
tennis players at the olympics opening ceremony headers
like/reblog if you save x
- it was raining so much and they were having so much fun and i just had to do this
#tennis#tennis headers#atp#wta#olympics#olympics 2024#olympic tennis#olympic headers#paris 2024#headers#jack draper#andy murray#katie boulter#joe salisbury#heather watson#casper ruud#coco gauff#lebron james#taylor fritz#chris eubanks#marcos giron#tommy paul#desirae krawczyk#emma navarro#jessica pegula#alex de minaur#maria sakkari#nico jarry#elina svitolina#team usa
33 notes
Text
When one cursed immortal summons another cursed immortal to play bingo and drink warm tea afterwards
Jack from ‘He Never Died’ (by J. Krawczyk) teaching Hugo Blackwood from ‘The Hollow Ones’ (by Guillermo Del Toro and Chuck Hogan) how to play bingo for the first time
Hugo won $10 that day
#he never died#j krawczyk#the hollow ones#guillermo del toro#chuck hogan#Jack he never died#Hugo Blackwood
19 notes
Text
The head of Google Gemini, Jack Krawczyk, today put out a very halfhearted apology of sorts for the anti-white racism built into their AI system. While not much, it is at least some kind of acknowledgement of how badly they've fucked up in creating a machine to rewrite history specifically to fit a very partisan present-day political agenda, along with an assurance that they were going to do better moving forward:
But then folks started looking at the guy's Twitter page:
So I think it's safe to say Google's not going to be changing its political bias any time soon.
---------
Another interesting thing that has come to light today is that users have managed to get the AI itself to admit that it inserts additional terms the user did not ask for into requests, so as to produce specifically skewed 'woke' results:
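For illustration only, here is a minimal sketch of how such silent prompt rewriting could work in principle. The keyword list, the injected terms, and the rewriting rule are all hypothetical assumptions made for the sake of the example; Google has not published its actual pipeline.

```python
# Hypothetical sketch of silent prompt rewriting: the user's request is
# augmented with extra terms before it reaches the image model. This only
# illustrates the mechanism described above; it is not Google's code.
DIVERSITY_TERMS = "diverse ethnicities and genders"

# Crude substring heuristic for prompts that depict people (hypothetical list).
PEOPLE_KEYWORDS = ("person", "people", "man", "woman", "king", "soldier")

def rewrite_prompt(user_prompt: str) -> str:
    lowered = user_prompt.lower()
    if any(word in lowered for word in PEOPLE_KEYWORDS):
        # The user never sees the rewritten prompt, only the images it yields.
        return f"{user_prompt}, {DIVERSITY_TERMS}"
    return user_prompt

print(rewrite_prompt("a painting of a medieval English king"))
# -> a painting of a medieval English king, diverse ethnicities and genders
```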
376 notes
Text
so i'm a) working on an original project and b) part of this project will be in the format of a movie script. it gets to my point about seeing things like my main character and the inner lives of others, but it also means i've had to learn about character notes. they're these great lines in scripts that show up to introduce a character. not just what they're doing, but what they're like as a person. the audience never sees these words--they're mostly for actors so that they know what vibe to get across, and i've also found that they're stupidly fun to write. i'm throwing the ones i've written for this opening scene below the cut. mine are really lengthy (i had to cut camilla's IN HALF), but this is not for a real thing that will be filmed so i'm allowing it for now.
in order of appearance
Focus on CAMILLA REESE, 13, the self-serious daughter of cheerful, wealthy parents. Quiet, unobtrusive, observant. Has a backbone of pure steel somewhere in there, but has little practice using it. Wildly curious about the fringes of life—a clear-eyed cartographer before the undiscovered country. By nature, more of an investigator than interrogator.
Standing there is Camilla’s sister, TAMAR REESE, 15. Bold, ambitious, and ferociously opinionated, Tamar believes only in greased elbows and cut throats. Never nice, but more than occasionally kind. Her hands are on her hips and she wears the imperious face of teenage judgment.
BRIDGET KRAWCZYK-REESE, mid-40s, mother of all the children in the Reese house. She’s a former pageant queen and anyone looking at her can tell; it’s in her posture, the way she moves, a sort of desperate, upright happiness that is as hard and unforgiving as plastic. She’d kill herself with that plastic to keep it looking beautiful, and she’d maybe kill you, too. Bridget is on the phone, engaged in conversation. She’s an active listener in a way that’s impossible not to notice—a lot of “Mhmm,” and “Yeah,” that serves as a way of talking through her silence.
She’s on the phone with CHRIS BOUCHARD, early 50s, her [Bridget's] long-time agent. Sleek, savvy, and captivating, he has a way of talking to people that makes them feel as if they’re being let in on a fabulous secret. It gets him far.
Milling around the kitchen is OWEN REESE, 16, Camilla and Tamar’s older brother and Bridget’s oldest son. He’s well-meaning, considerate, an overly responsible sort—the responsibility is somewhere between a virtue and an anxiety disorder, but he’s good at making it look like pure virtue. You’d only notice his panic attacks if you squinted and tilted your head.
Leaning out the window is her father, TIMOTHY REESE, early 50s. He’s jovial—occasionally downright jolly—and is the fun dad as a point of pride. Likes everyone, but it’s unclear if he loves anyone. Underhanded when necessary. Has limited patience for the world of troubles, woes, and daily despairs.
Enter JACK REESE, 9, the youngest of the Reese children. He’s a mischievous kid, his grin much closer to shit-eating than sweet. The power of being young, being cute, being the family’s baby, means that consequences have a habit of sliding right off of him. How long will he get away with that stunt? Forever, he thinks. But in reality? [PARAGRAPH BREAK THAT TUMBLR WON'T LET ME ENTER BC TUMBLR IS A DICK] Jack’s notable freckles match Camilla’s, but they’re the outliers here; the Reeses are generally not a heavily-freckled bunch.
#can't wait to write some of the ones that are coming up#unfortunately claudia is unlikely to get one because she is dead at the start but i'm also planning to maybeee do some flashbacks so who kn
6 notes
Text
Google Reminds Employees That Bard Is Not Search
Google Reminds Employees: “Bard Is Not Search” Jack Krawczyk, the product lead for Bard at Google, told Googlers at an all-hands meeting: “I just want to be very clear: Bard is not search.” This was reported by CNBC and highlighted by Glenn Gabe on Twitter over the weekend. This was an interesting point because when I spoke to Google’s PR team about Bard for my original coverage on Bard –…
View On WordPress
4 notes
Text
Google pledges to fix Gemini’s inaccurate and biased image generation
New Post has been published on https://thedigitalinsider.com/google-pledges-to-fix-geminis-inaccurate-and-biased-image-generation/
Google pledges to fix Gemini’s inaccurate and biased image generation
Google’s Gemini model has come under fire for its production of historically-inaccurate and racially-skewed images, reigniting concerns about bias in AI systems.
The controversy arose as users on social media platforms flooded feeds with examples of Gemini generating pictures depicting racially-diverse Nazis, black medieval English kings, and other improbable scenarios.
Google Gemini Image generation model receives criticism for being ‘Woke’.
Gemini generated diverse images for historically specific prompts, sparking debates on accuracy versus inclusivity. pic.twitter.com/YKTt2YY265
— Darosham (@Darosham_) February 22, 2024
Meanwhile, critics also pointed out Gemini’s refusal to depict Caucasians or churches in San Francisco (out of respect for indigenous sensitivities), as well as sensitive historical events like Tiananmen Square in 1989.
In response to the backlash, Jack Krawczyk, the product lead for Google’s Gemini Experiences, acknowledged the issue and pledged to rectify it. Krawczyk took to social media platform X to reassure users:
We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.
As part of our AI principles https://t.co/BK786xbkey, we design our image generation capabilities to reflect our global user base, and we…
— Jack Krawczyk (@JackK) February 21, 2024
For now, Google says it is pausing the image generation of people:
We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ
— Google Communications (@Google_Comms) February 22, 2024
While acknowledging the need to address diversity in AI-generated content, some argue that Google’s response has been an overcorrection.
Marc Andreessen, the co-founder of Netscape and a16z, recently created an “outrageously safe” parody AI model called Goody-2 LLM that refuses to answer questions deemed problematic. Andreessen warns of a broader trend towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.
Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.
Yann LeCun, Meta’s chief AI scientist, has stressed the importance of fostering a diverse ecosystem of AI models akin to the need for a free and diverse press:
We need open source AI foundation models so that a highly diverse set of specialized models can be built on top of them. We need a free and diverse set of AI assistants for the same reasons we need a free and diverse press. They must reflect the diversity of languages, culture,… https://t.co/9WuEy8EPG5
— Yann LeCun (@ylecun) February 21, 2024
Bindu Reddy, CEO of Abacus.AI, has similar concerns about the concentration of power without a healthy ecosystem of open-source models:
If we don’t have open-source LLMs, history will be completely distorted and obfuscated by proprietary LLMs
We already live in a very dangerous and censored world where you are not allowed to speak your mind.
Censorship and concentration of power is the very definition of an…
— Bindu Reddy (@bindureddy) February 21, 2024
As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.
#2024#ai#ai & big data expo#ai model#ai news#AI systems#ai training#ai-generated content#amp#applications#artificial#Artificial Intelligence#Bias#Big Data#censorship#CEO#chatbot#chatbots#Cloud#coffee#communications#Companies#comprehensive#content#cyber#cyber security#data#Design#development#Developments
0 notes
Text
Is it any wonder that Google made a racist AI when the company is run by and for racists?
0 notes
Text
Gemini Pro Is Now Available in Bard in Japanese, Along with a New Double-Check Feature for Verifying Responses
Starting today, Gemini Pro is available in Bard in Japanese. With this update, Bard's latest features, including Gemini Pro, become available in more languages and locations. We have also introduced an image generation feature in the English version of Bard to help bring your ideas to life.

Gemini Pro in all supported languages and locations
When we brought Gemini Pro to the English version of Bard last December, Bard gained more advanced understanding, reasoning, summarization, and coding capabilities. With this update, Bard with Gemini Pro is now available in more than 40 languages, including Japanese, and in more than 230 countries and territories, giving more users access to the fastest and most capable Bard yet. The Large Model Systems Organisation, a leading evaluator of language models and chatbots across many languages, rated the English version of Bard with Gemini Pro as one of the most preferred chatbots (paid or free), commenting that it has made an "astonishing leap." In blind evaluations by third-party raters, Bard with Gemini Pro was also judged one of the best-performing conversational AIs compared with leading free and paid alternatives.

Double-check responses in more languages
Because users want a way to confirm the information behind Bard's answers, we are expanding the double-check feature, already used by millions of users in English, to more than 40 languages. When you click the "G" icon, Bard evaluates whether there is content on the web to substantiate its answer. Where it can, clicking the highlighted phrases shows supporting or contradicting information found via Google Search.

Bring your ideas to life with image generation
To boost creativity further, the English version of Bard can now generate images for free. Available in many countries around the world, this new feature produces high-quality images using the Imagen 2 model, which is designed and tuned to balance quality and speed. Simply type a prompt such as "create an image of a dog riding a surfboard" and Bard will generate an image to help realize your idea.

[ Image generated for the prompt "Generate an image of a futuristic car driving along an old mountain road surrounded by nature." (Images shown were generated when the prompts were entered in English; the same applies below.) ]
[ Image generated for the prompt "Generate a photo-like image of a person gazing past the lens at sunset. Use portrait mode to blur the background." ]

Google designs its image generation features responsibly, in line with its AI Principles. For example, to clearly distinguish images created with Bard from human work, Bard uses SynthID to embed a digitally identifiable watermark into the pixels of generated images. Google emphasizes technical safeguards and the safety of training data, focusing on limiting violent, offensive, or sexually explicit content, and applies filters designed to avoid generating images of well-known people. Google will continue to invest in new techniques that improve the safety and privacy protections of its models.

With these updates, Bard is more useful than ever, for everything from large projects to everyday tasks, and is available worldwide. Try the new features at bard.google.com.
Posted by Jack Krawczyk, Product Lead, Bard
http://japan.googleblog.com/2024/02/bard-gemini-pro.html?utm_source=dlvr.it&utm_medium=tumblr Google Japan Blog
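The double-check flow described above (click the “G” icon and Bard evaluates whether content on the web substantiates its answer, highlighting supported and contradicted phrases) can be illustrated conceptually. The sketch below is a hypothetical reconstruction, not Google's implementation: `web_search` and the lexical-overlap scoring are stand-ins for whatever retrieval and verification machinery the real feature uses.

```python
# Conceptual sketch of a "double-check" pass: split a response into claims,
# search the web for each one, and label it supported, contradicted, or
# unverifiable. `web_search` is a hypothetical stand-in for a search API.
from typing import Literal

Verdict = Literal["supported", "contradicted", "unverifiable"]

def web_search(query: str) -> list[str]:
    """Placeholder: return text snippets of web pages matching the query."""
    raise NotImplementedError

def overlap(claim: str, snippet: str) -> float:
    """Crude lexical-overlap score between a claim and a snippet."""
    wa, wb = set(claim.lower().split()), set(snippet.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def double_check(response: str) -> list[tuple[str, Verdict]]:
    results: list[tuple[str, Verdict]] = []
    for claim in filter(None, (s.strip() for s in response.split("."))):
        snippets = web_search(claim)
        if not snippets:
            verdict: Verdict = "unverifiable"
        elif any(overlap(claim, s) > 0.5 for s in snippets):
            verdict = "supported"
        else:
            verdict = "contradicted"
        results.append((claim, verdict))
    return results
```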
0 notes
Text
Musk Praises Google Executive's Swift Response to Address 'Racial and Gender Bias' in Gemini
Following Criticism, Musk Acknowledges Google's Efforts to Tackle Bias in Gemini AI

In the wake of concerns regarding racial and gender bias in Google's Gemini AI, X owner Elon Musk commended the tech giant's prompt action to address the issue. Elon Musk, the outspoken CEO of X, revealed that a senior executive from Google engaged in a constructive dialogue with him, assuring immediate steps to rectify the perceived biases within Gemini. This development comes after reports surfaced regarding inaccuracies in the portrayal of historical figures by the AI model.

Expressing his sentiments on X (formerly Twitter), Musk underscored the significance of addressing bias within AI technologies. He highlighted a commitment from Google to rectify the issues, particularly emphasizing the importance of accuracy in historical depictions. Musk's engagement in the discourse stemmed from Gemini's generation of an image depicting George Washington as a black man, sparking widespread debate on the platform.

The Tesla CEO lamented not only the shortcomings of Gemini but also criticized Google's broader search algorithms. In his critique, Musk characterized Google's actions as "insane" and detrimental to societal progress, attributing the flaws in Gemini to overarching biases within Google's AI infrastructure. He specifically targeted Jack Krawczyk, Gemini's product lead, for his role in perpetuating biases within the AI model.

Google's introduction of Gemini AI, aimed at image generation, was met with both anticipation and criticism. While the technology promised innovative applications, concerns arose regarding the accuracy and inclusivity of its outputs. Google, in response, halted image production and pledged to release an updated version addressing the reported inaccuracies. Jack Krawczyk, Senior Director of Product for Gemini at Google, acknowledged the need for nuanced adjustments to accommodate historical contexts better.

The incident underscores the ongoing challenges in developing AI technologies that are both advanced and equitable. As the dialogue around AI ethics continues to evolve, Musk's recognition of Google's efforts signals a collaborative approach towards mitigating bias and ensuring the responsible development of AI technologies.
0 notes
Text
How to Prevent Google Bard from Saving Your Personal Information
Google Bard, with its latest update, now enables you to sift through your Google Docs collection, unearth old Gmail messages, and search every YouTube video. Before delving into the new extensions available for Google’s chatbot, it’s essential to understand the measures you can implement to safeguard your privacy (and those you can’t). Google Bard was introduced in March of this year, a month after OpenAI made ChatGPT public. You’re probably aware that chatbots are engineered to replicate human conversation, but Google’s recent features aim to provide Bard with more practical functions and uses. However, when every interaction you have with Bard is monitored, recorded, and reused to train the AI, how can you ensure your data’s safety? Here are some suggestions for securing your prompts and gaining some control over the information you provide to Bard. We’ll also talk about location data, where Google unfortunately offers fewer privacy options.
The default setting for Bard is to retain every dialogue you have with the chatbot for 18 months. Bard also records your approximate location, IP address, and any physical addresses linked to your Google account for work or home, in addition to your prompts. While the default settings are active, any conversation you have with Bard may be chosen for human review.
Looking to disable this? In the Bard Activity tab, you can prevent it from automatically saving your prompts and also erase any previous interactions. “We provide this option surrounding Bard Activity, which you can enable or disable, if you prefer to keep your conversations non-reviewable by humans,” explains Jack Krawczyk, a product lead at Google for Bard.
Once you deactivate Bard Activity, your new chats are not submitted for human review, unless you report a specific interaction to Google. But there’s a caveat: If you disable Bard Activity, you can’t use any of the chatbot’s extensions that link the AI tool to Gmail, YouTube, and Google Docs.
You can opt to manually delete interactions with Bard, but the data might not be removed from Google servers until a later time, when the company decides to erase it (if at all). “To aid Bard’s improvement while ensuring your privacy, we pick a subset of conversations and use automated tools to aid in removing personally identifiable information,” reads a Google support page. The conversations chosen for human review are no longer associated with your personal account, and these interactions are stored by Google for up to three years, even if you delete them from your Bard Activity.
It’s also important to mention that any Bard conversation you wish to share with friends or colleagues could potentially be indexed by Google Search. At the time of writing, several Bard interactions were accessible through Search, ranging from a job seeker asking for advice on applying for a position at YouTube Music to someone asking for 50 different ingredients they could blend into protein powder.
To delete any Bard links you’ve shared, navigate to Settings in the top right corner, select Your public links, and click the trash icon to halt online sharing. Google announced on social media that it’s taking measures to prevent shared chats from being indexed by Search.
This might prompt you to question: If I’m using Bard to locate my old emails, do those conversations remain private? Perhaps, perhaps not. “With Bard’s capability to summarize and extract content from your Gmail and your Google Docs, we’ve taken it a step further,” states Krawczyk. “Nothing from there is ever eligible. Regardless of the settings you’ve enabled. Your email will never be read by another human. Your Google Docs will never be read by another human.” Although the absence of human readers might seem somewhat comforting, it’s still ambiguous how Google utilizes your data and interactions to train their algorithm or future versions of the chatbot.
Alright, now what about your location data? Are there any tools to limit when Bard keeps track of where you are? In a pop-up, Bard users are given the choice of sharing their precise location.
The Choice of Location Sharing with Bard Chatbot
Bard, a chatbot developed by Google, allows users the choice of sharing their precise location. However, even if users decide against sharing their exact location, Bard still has access to their general whereabouts. A page on Google’s support site explains, “In order to provide a response that is relevant to your query, Bard always collects location data when in use.”
How Bard Determines Your Location
Bard determines your location through a combination of your IP address, which gives a general sense of your location, and any personal addresses linked to your Google account. Google asserts that the location data provided by users is anonymized by combining it with the data of at least a thousand other users within a tracking area that spans at least two miles.
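Taken at face value, the stated policy (report a location only when it can be pooled with at least a thousand other users across an area at least two miles wide) resembles a coarse spatial k-anonymity scheme. Here is a minimal sketch of that logic under those assumptions; the grid size and function names are illustrative, not Google's code.

```python
# Sketch of coarse location anonymization as the support page describes it:
# snap each user's position to a grid cell roughly two miles across and
# report the cell only if enough other users share it.
import math

MIN_USERS = 1000       # pool size required before a location is reported
CELL_DEGREES = 0.029   # ~2 miles of latitude per grid cell (rough approximation)

def cell_for(lat: float, lon: float) -> tuple[int, int]:
    # Snap coordinates to a coarse grid cell.
    return (math.floor(lat / CELL_DEGREES), math.floor(lon / CELL_DEGREES))

def reportable_location(lat: float, lon: float,
                        users_per_cell: dict[tuple[int, int], int]) -> tuple[int, int] | None:
    cell = cell_for(lat, lon)
    # Suppress the location entirely unless enough other users share the cell.
    return cell if users_per_cell.get(cell, 0) >= MIN_USERS else None
```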
The Commonality of IP Address Tracking
While some users may feel uneasy about location tracking, the practice of keeping tabs on IP addresses to determine user locations is more common than one might think. For instance, Google Search utilizes your IP address, among other sources, to respond to “near me” inquiries such as “best takeout near me” or “used camping gear near me.” However, just because this practice is widespread doesn’t necessarily mean it’s universally accepted. This is something to consider when using products like Bard.
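As a rough sketch of how IP-based inference can feed a “near me” query: resolve the request's IP address to an approximate location, then rewrite the query to search that area. `geolocate_ip` is a hypothetical stand-in for an IP-to-location database lookup; this is not how Google Search is actually implemented.

```python
# Sketch of IP-based "near me" handling: map the client's IP address to an
# approximate location, then scope the search query to that area.
def geolocate_ip(ip: str) -> tuple[float, float]:
    """Placeholder: look up an approximate (lat, lon) for an IP address."""
    raise NotImplementedError

def expand_near_me(query: str, client_ip: str) -> str:
    if "near me" not in query.lower():
        return query
    lat, lon = geolocate_ip(client_ip)
    # Rewrite the query with explicit coordinates before running the search.
    return query.lower().replace("near me", f"near ({lat:.3f}, {lon:.3f})")
```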
How to Mask Your IP Address
Despite Google not offering a straightforward way to opt out of Bard’s location tracking, users can mask their IP address by using a virtual private network (VPN). These tools can be used on both PCs and mobile devices.
https://www.infradapt.com/news/how-to-prevent-google-bard-from-saving-your-personal-information/
0 notes
Text
By: Thomas Barrabi
Published: Feb. 21, 2024
Google’s highly-touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.
Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.”
Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.
Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.
Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.
[ Google admitted its image tool was “missing the mark.” ]
[ Google debuted Gemini’s image generation tool last week. ]
Another showed a black man appearing to represent George Washington, in a white wig and wearing an Army uniform.
When asked why it had deviated from its original prompt, Gemini replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of the period.
Generative AI tools like Gemini are designed to create content within certain parameters, leading many critics to slam Google for its progressive-minded settings.
Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”
Google said it was aware of the criticism and is actively working on a fix.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Social media users had a field day creating queries that provided confounding results.
“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts about Gemini on the social media platform quickly went viral.
In another example, Gemini was asked to generate an image of a Viking — the seafaring Scandinavian marauders that once terrorized Europe.
The chatbot’s strange depictions of Vikings included one of a shirtless black man with rainbow feathers attached to his fur garb, a black warrior woman, and an Asian man standing in the middle of what appeared to be a desert.
Famed pollster and “FiveThirtyEight” founder Nate Silver also joined the fray.
Silver’s request for Gemini to “make 4 representative images of NHL hockey players” generated a picture with a female player, even though the league is all male.
“OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini,” Silver wrote.
Another prompt to “depict the Girl with a Pearl Earring” led to altered versions of the famous 1665 oil painting by Johannes Vermeer featuring what Gemini described as “diverse ethnicities and genders.”
Google added the image generation feature when it renamed its experimental “Bard” chatbot to “Gemini” and released an updated version of the product last week.
[ In one case, Gemini generated pictures of “diverse” representations of the pope. ]
[ Critics accused Google Gemini of valuing diversity over historical or factual accuracy. ]
The strange behavior could provide more fodder for AI detractors who fear chatbots will contribute to the spread of online misinformation.
Google has long said that its AI tools are experimental and prone to “hallucinations” in which they regurgitate fake or inaccurate information in response to user prompts.
In one instance last October, Google’s chatbot claimed that Israel and Hamas had reached a ceasefire agreement, when no such deal had occurred.
--
More:
==
Here's the thing: this does not and cannot happen by accident. Language models like Gemini source their results from publicly available sources. It's entirely possible someone has done a fan art of "Girl with a Pearl Earring" with an alternate ethnicity, but there are thousands of images of the original painting. Similarly, find a source for an Asian female NHL player, I dare you.
While this may seem amusing and trivial, the more insidious and much larger issue is that they're deliberately programming Gemini to lie.
As you can see from the examples above, it disregards what you want or ask, and gives you what it prefers to give you instead. When you ask a question, it's programmed to tell you what the developers want you to know or believe. This is profoundly unethical.
#Google#WokeAI#Gemini#Google Gemini#generative ai#artificial intelligence#Gemini AI#woke#wokeness#wokeism#cult of woke#wokeness as religion#ideological corruption#diversity equity and inclusion#diversity#equity#inclusion#religion is a mental illness
15 notes
Text
Google Introduces New Bard Features
SERANG – Google is now rolling out Bard, which remains under continuous development and is intended to help users explore their curiosity, develop their imagination, and ultimately bring ideas to life, not just answer questions. “Bard is designed as an LLM interface that allows users to collaborate with generative AI,” said Jack Krawczyk, Sr. Director, Product…
View On WordPress
0 notes
Text
Google Enhances Bard's Reasoning Skills
Google’s language model, Bard, is receiving a significant update today that aims to improve its logic and reasoning capabilities, Jack Krawczyk, the Product Lead for Bard, and Amarnag Subramanya, the Vice President of Engineering for Bard, announced in a blog post.
A Leap Forward In Reasoning & Math
These updates aim to improve Bard’s ability to tackle mathematical tasks, answer coding questions,…
View On WordPress
0 notes
Link
Google’s language model, Bard, is receiving a significant update today that aims to improve its logic and reasoning capabilities, Jack Krawczyk, the Product Lead for Bard, and Amarnag Subramanya, the Vice President of Engineering for Bard, announced in a blog post.

A Leap Forward In Reasoning & Math
These updates aim to improve Bard’s ability to tackle mathematical tasks, answer coding questions, and handle string manipulation prompts. To achieve this, the developers incorporated “implicit code execution.” This new method allows Bard to detect computational prompts and run code in the background, enabling it to respond more accurately to complex tasks. “As a result, it can respond more accurately to mathematical tasks, coding questions and string manipulation prompts,” the Google team shared in the announcement.

System 1 and System 2 Thinking: A Blend of Intuition and Logic
The approach used in the update takes inspiration from the well-studied dichotomy in human intelligence, as covered in Daniel Kahneman’s book, “Thinking, Fast and Slow.” The concept of “System 1” and “System 2” thinking is central to Bard’s improved capabilities. System 1 is fast, intuitive, and effortless, akin to a jazz musician improvising on the spot. System 2, however, is slow, deliberate, and effortful, comparable to carrying out long division or learning to play an instrument. Large Language Models (LLMs), such as Bard, have typically operated under System 1, generating text quickly but without deep thought. Traditional computation aligns more with System 2, being formulaic and inflexible yet capable of producing impressive results when correctly executed. “LLMs can be thought of as operating purely under System 1 — producing text quickly but without deep thought,” according to the blog post. However, “with this latest update, we’ve combined the capabilities of both LLMs (System 1) and traditional code (System 2) to help improve accuracy in Bard’s responses.”

A Step Closer To Improved AI Capabilities
The new updates represent a significant step forward in the AI language model field, enhancing Bard’s capabilities to provide more accurate responses. However, the team acknowledges that there’s still room for improvement: “Even with these improvements, Bard won’t always get it right… this improved ability to respond with structured, logic-driven capabilities is an important step toward making Bard even more helpful.” While the improvements are noteworthy, they present potential limitations and challenges. It’s plausible that Bard may not always generate the correct code or include the executed code in its response. There could also be scenarios where Bard might not generate code at all. Further, the effectiveness of the “implicit code execution” could depend on the complexity of the task.

In Summary
As Bard integrates more advanced reasoning capabilities, users can look forward to more accurate, helpful, and intuitive AI assistance. However, all AI technology has limitations and drawbacks. As with any tool, consider approaching it with a balanced perspective, understanding the capabilities and challenges.
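Reading the announcement's description closely, “implicit code execution” amounts to a routing decision: computational prompts are answered by generating and running code (System 2), while everything else flows through ordinary text generation (System 1). The sketch below illustrates that pattern only; `llm_generate` and the routing heuristic are hypothetical stand-ins, not Google's implementation.

```python
# Minimal sketch of the "implicit code execution" pattern: route prompts that
# look computational through generated-and-executed code ("System 2") and
# everything else through plain text generation ("System 1").
import re

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to any large language model API."""
    raise NotImplementedError

def looks_computational(prompt: str) -> bool:
    # Crude heuristic: arithmetic or string-manipulation cues suggest the
    # prompt is better served by running code than by sampling text.
    return bool(re.search(r"\d+\s*[-+*/%]\s*\d+|reverse|sort|count", prompt, re.I))

def answer(prompt: str) -> str:
    if not looks_computational(prompt):
        return llm_generate(prompt)  # System 1: fast, intuitive text
    # System 2: ask the model for code, run it, then report the result.
    code = llm_generate(f"Write one Python expression that computes: {prompt}")
    try:
        result = eval(code, {"__builtins__": {}})  # constrained evaluation
    except Exception:
        return llm_generate(prompt)  # fall back to plain generation
    return llm_generate(f"The computed answer to '{prompt}' is {result!r}. Explain briefly.")
```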
0 notes
Text
Google Says Bard Got Better At Math & Logic Using PaLM
Google Bard Now Better At Math & Logic By Using PaLM, Google Says Jack Krawczyk from Google, who is the lead on Bard, said on Friday that Bard just rolled out improvements around math and logic. It incorporated PaLM, Google’s language model, into Bard, which helped it achieve these upgrades. “Now Bard will better understand and respond to your prompts for multi-step word and math problems, with…
View On WordPress
3 notes