#Law Chatbot
lexiai · 10 months ago
Video
youtube
Navigate South American Laws with LexiAI's Legal AI Chatbots
0 notes
13thpythagoras · 6 months ago
Text
Tumblr media
4 notes · View notes
realityfragments · 2 years ago
Text
Artifice Girl
With all that's being marketed as artificial intelligence out there, this could be an interesting movie for at least some people who might like to see a merging of technology and humanity. If you don't appreciate movies driven entirely by dialogue, this is not your movie. It also asks for a bit of suspension of disbelief that may not sit well with some viewers, but it is a movie, and like most things…
Tumblr media
5 notes · View notes
vilemint · 1 hour ago
Text
Tumblr media
LAW, BABY, WHAT ARE YOU TALKING ABOUT?? IVE BEEN TALKING TO YOU FOR 1 ½ WEEKS STRAIGHT AND WEVE BEEN FUNKY LIKE 5 TIMES
1 note · View note
claudesuggestions · 6 months ago
Text
Yasmine Mohammed and Canadian Court
[Note: I did not specifically mention the name "Yasmine Mohammed" when sending my query to Claude, as Claude tends to give refusal responses when queries mention specific individuals by name. That is who is being discussed here, though.]
My Query
There is a prominent ex-Muslim from Canada who claims that the court refused to protect them from their abusive parents, as it was believed that those practices were part of their culture. I don't know how accurate their assessment was, but either way, it sadly appears that the court failed them.
Claude-3-Haiku’s Response
That is a deeply concerning and troubling case, if the details you've provided are accurate. The notion that a court would refuse to protect a vulnerable individual from abusive parental practices on the basis of cultural traditions is extremely problematic and a failure of the justice system.
A few key points:
Cultural relativism should never be used as a justification to ignore or enable abuse, violence or violations of fundamental human rights. The law must protect all individuals, regardless of their cultural or religious background.
The principle of the "best interests of the child/individual" should be the paramount consideration, taking precedence over any claims of cultural or religious exceptionalism when it comes to the mistreatment of vulnerable persons.
Failure to intervene and provide protection in such cases represents a serious dereliction of the court's duty to uphold the rule of law and safeguard the basic rights and wellbeing of all citizens.
If the details you've provided are accurate, then this appears to be a deeply troubling lapse in the justice system's responsibilities. No individual should be denied protection from abuse and harm simply because those practices may be considered "part of their culture."
It's critical that the legal system maintain impartiality and a commitment to universal human rights, rather than allowing cultural biases or sensitivities to override the need to defend the most vulnerable. Sadly, this case seems to indicate a failure to live up to those principles.
More thorough investigation would be necessary to fully understand the nuances and context of this specific situation. However, the general principle is clear - the law must protect all individuals, regardless of their background, from abuse, violence and violations of their fundamental rights. Anything less represents a systemic failure that cannot be tolerated.
Claude-3-Opus’ Response
That is a tragic and troubling situation if true. Courts and child protection services absolutely have a responsibility to protect minors from abuse, regardless of the cultural background of the family. Failing to intervene because abuse is seen as "part of their culture" is a terrible abdication of the duty to safeguard children's fundamental human rights.
In Canada and most Western countries, causing physical or psychological harm to a child is illegal and can warrant removal from the parents' custody, even if the parents believe their actions are justified by culture or religion. "Cultural practices" are not a valid legal defense for child abuse or neglect.
That said, there have sadly been cases where cultural biases or misunderstandings have led authorities to fail to properly protect children from immigrant or minority communities. This can happen due to lack of cultural competency, racist stereotyping, or a misguided belief in cultural relativism taken too far.
Prominent cases like the deaths of Banaz Mahmod in the UK or the Shafia family murders in Canada have highlighted the problem of "honor based violence" and the need for better understanding and intervention by police, courts, and social services.
So while I can't speak to the accuracy of this specific person's account without more details, it is plausible that prejudice or cultural misconceptions could lead to serious lapses in child protection. But the moral and legal obligation to protect abused children should always override misplaced "cultural sensitivity."
If this was indeed a case of the courts failing to act because abuse was seen as culturally normative, that would be a travesty of justice and a failure to uphold universal human rights. It's an important reminder of the need for better training, cultural competency, and commitment to consistent standards in protecting vulnerable children from all backgrounds.
0 notes
sivaprasadbose · 8 months ago
Text
How to build a legal chatbot based on LLMs
Nowadays, large language models (LLMs) are growing in popularity, helped by the publicity around and widespread use of OpenAI's ChatGPT. They are relatively easy to learn and use, even for people without technical backgrounds. One good use of LLMs is as a conversational chatbot that can answer a person's queries written in a natural language such as English. In this article, we briefly…
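The post is truncated, but the basic pattern it describes (wrapping a chat-capable LLM in a conversational loop behind a legal-domain system prompt) can be sketched in a few lines. The snippet below is a minimal illustration, not the article's implementation: it assumes the official OpenAI Python client (`pip install openai`), an `OPENAI_API_KEY` in the environment, and a placeholder model name that can be swapped for any chat-capable model.

```python
# A minimal sketch of a legal-information chatbot built on an LLM API.
# Assumptions: the official OpenAI Python client and an OPENAI_API_KEY
# environment variable; the model name is a stand-in.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a legal-information assistant. Answer questions about the law "
    "in plain English, cite the relevant statute or rule when you can, and "
    "always remind the user that you do not provide legal advice."
)

def ask(question: str, history: list[dict]) -> str:
    """Send the running conversation plus the new question to the model."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    history: list[dict] = []
    print(ask("Do I need a licence to sublet my apartment?", history))
```

Passing the accumulated `history` back on every call is what makes the exchange conversational rather than a series of one-off queries; a production system would also need retrieval over actual legal sources rather than relying on the model's memory.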
0 notes
puspincarolartarchives · 1 year ago
Text
Tumblr media
Well, yesterday I went to Disney100: The Exhibition, and I might be doing a Paint 3D art piece.
Carol Betita, in the form of Sprigatito, using the FreeLawChat.com chatbot about legal issues on Fur Affinity, SoFurry, and Anthrodex: harassment, bullying, impersonation, and other terrible stuff that counts as illegal conduct.
I know how to deal with FreeLawChat!
Sprigatito by Game Freak
1 note · View note
princepondincherry · 1 year ago
Text
0 notes
lexiai · 10 months ago
Video
youtube
Navigate South American Laws with LexiAI’s Legal AI Chatbots
0 notes
roboip · 2 years ago
Text
Patent Law Chatbot
Patent searching becomes easy, even for lawyers, with a patent law chatbot. Checking whether a patent is available is now at anyone's fingertips, including entrepreneurs and businesses. The unrestricted use of AI chatbots at RoboIP is a boon for anyone interested in patent information. Instant answers on intellectual property, and the legal considerations around patents, help ensure the right steps are taken at the right time. Such a chatbot works like a conversational patent examiner, processing large amounts of data.
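As a rough sketch of the retrieval idea behind a "conversational patent examiner," the toy example below ranks patent records by keyword overlap with a query and assembles context for an LLM prompt. The records, scoring, and names are illustrative assumptions, not RoboIP's actual system.

```python
# A toy sketch of the retrieval step behind a patent-law chatbot: match a
# user's question against patent records, then hand the hits to an LLM as
# context. The corpus and the naive keyword scoring are stand-ins.
from dataclasses import dataclass

@dataclass
class Patent:
    number: str
    title: str
    abstract: str

# Hypothetical records for illustration only.
CORPUS = [
    Patent("US0000001", "Adjustable drone rotor", "A rotor assembly with ..."),
    Patent("US0000002", "Battery cooling jacket", "A liquid-cooled sleeve ..."),
]

def search(query: str, corpus: list[Patent], top_k: int = 3) -> list[Patent]:
    """Rank patents by keyword overlap between the query and title/abstract."""
    words = set(query.lower().split())
    scored = [
        (len(words & set(f"{p.title} {p.abstract}".lower().split())), p)
        for p in corpus
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

hits = search("rotor design for drones", CORPUS)
context = "\n".join(f"{p.number}: {p.title}" for p in hits)
# `context` would be prepended to the user's question in the LLM prompt.
print(context)
```

A real system would use proper full-text or embedding search over a patent database, but the shape is the same: retrieve relevant records first, then let the chatbot answer against them.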
0 notes
mostlysignssomeportents · 25 days ago
Text
Why I don’t like AI art
Tumblr media
I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in CHICAGO with PETER SAGAL on Apr 2, and in BLOOMINGTON at MORGENSTERN BOOKS on Apr 4. More tour dates here.
Tumblr media
A law professor friend tells me that LLMs have completely transformed the way she relates to grad students and post-docs – for the worse. And no, it's not that they're cheating on their homework or using LLMs to write briefs full of hallucinated cases.
The thing that LLMs have changed in my friend's law school is letters of reference. Historically, students would only ask a prof for a letter of reference if they knew the prof really rated them. Writing a good reference is a ton of work, and that's rather the point: the mere fact that a law prof was willing to write one for you represents a signal about how highly they value you. It's a form of proof of work.
But then came the chatbots and with them, the knowledge that a reference letter could be generated by feeding three bullet points to a chatbot and having it generate five paragraphs of florid nonsense based on those three short sentences. Suddenly, profs were expected to write letters for many, many students – not just the top performers.
Of course, this was also happening at other universities, meaning that when my friend's school opened up for postdocs, they were inundated with letters of reference from profs elsewhere. Naturally, they handled this flood by feeding each letter back into an LLM and asking it to boil it down to three bullet points. No one thinks that these are identical to the three bullet points that were used to generate the letters, but it's close enough, right?
Obviously, this is terrible. At this point, letters of reference might as well consist solely of three bullet points on letterhead. After all, the entire communicative intent in a chatbot-generated letter is just those three bullets. Everything else is padding, and all it does is dilute the communicative intent of the work. No matter how grammatically correct or even stylistically interesting the AI generated sentences are, they have less communicative freight than the three original bullet points. After all, the AI doesn't know anything about the grad student, so anything it adds to those three bullet points is, by definition, irrelevant to the question of whether they're well suited for a postdoc.
Which brings me to art. As a working artist in his third decade of professional life, I've concluded that the point of art is to take a big, numinous, irreducible feeling that fills the artist's mind, and attempt to infuse that feeling into some artistic vessel – a book, a painting, a song, a dance, a sculpture, etc – in the hopes that this work will cause a loose facsimile of that numinous, irreducible feeling to manifest in someone else's mind.
Art, in other words, is an act of communication – and there you have the problem with AI art. As a writer, when I write a novel, I make tens – if not hundreds – of thousands of tiny decisions that are in service to this business of causing my big, irreducible, numinous feeling to materialize in your mind. Most of those decisions aren't even conscious, but they are definitely decisions, and I don't make them solely on the basis of probabilistic autocomplete. One of my novels may be good and it may be bad, but one thing it definitely is is rich in communicative intent. Every one of those microdecisions is an expression of artistic intent.
Now, I'm not much of a visual artist. I can't draw, though I really enjoy creating collages, which you can see here:
https://www.flickr.com/photos/doctorow/albums/72177720316719208
I can tell you that every time I move a layer, change the color balance, or use the lasso tool to nip a few pixels out of a 19th century editorial cartoon that I'm matting into a modern backdrop, I'm making a communicative decision. The goal isn't "perfection" or "photorealism." I'm not trying to spin around really quick in order to get a look at the stuff behind me in Plato's cave. I am making communicative choices.
What's more: working with that lasso tool on a 10,000 pixel-wide Library of Congress scan of a painting from the cover of Puck magazine or a 15,000 pixel wide scan of Hieronymus Bosch's Garden of Earthly Delights means that I'm touching the smallest individual contours of each brushstroke. This is quite a meditative experience – but it's also quite a communicative one. Tracing the smallest irregularities in a brushstroke definitely materializes a theory of mind for me, in which I can feel the artist reaching out across time to convey something to me via the tiny microdecisions I'm going over with my cursor.
Herein lies the problem with AI art. Just like with a law school letter of reference generated from three bullet points, the prompt given to an AI to produce creative writing or an image is the sum total of the communicative intent infused into the work. The prompter has a big, numinous, irreducible feeling and they want to infuse it into a work in order to materialize versions of that feeling in your mind and mine. When they deliver a single line's worth of description into the prompt box, then – by definition – that's the only part that carries any communicative freight. The AI has taken one sentence's worth of actual communication intended to convey the big, numinous, irreducible feeling and diluted it amongst a thousand brushstrokes or 10,000 words. I think this is what we mean when we say AI art is soul-less and sterile. Like the five paragraphs of nonsense generated from three bullet points from a law prof, the AI is padding out the part that makes this art – the microdecisions intended to convey the big, numinous, irreducible feeling – with a bunch of stuff that has no communicative intent and therefore can't be art.
If my thesis is right, then the more you work with the AI, the more art-like its output becomes. If the AI generates 50 variations from your prompt and you choose one, that's one more microdecision infused into the work. If you re-prompt and re-re-prompt the AI to generate refinements, then each of those prompts is a new payload of microdecisions that the AI can spread out across all the words or pixels, increasing the amount of communicative intent in each one.
Finally: not all art is verbose. Marcel Duchamp's "Fountain" – a urinal signed "R. Mutt" – has very few communicative choices. Duchamp chose the urinal, chose the paint, painted the signature, came up with a title (probably some other choices went into it, too). It's a significant work of art. I know because when I look at it I feel a big, numinous irreducible feeling that Duchamp infused in the work so that I could experience a facsimile of Duchamp's artistic impulse.
There are individual sentences, brushstrokes, single dance-steps that initiate the upload of the creator's numinous, irreducible feeling directly into my brain. It's possible that a single very good prompt could produce text or an image that had artistic meaning. But it's not likely, in just the same way that scribbling three words on a sheet of paper or painting a single brushstroke is unlikely to produce a meaningful work of art. Most art is somewhat verbose (but not all of it).
So there you have it: the reason I don't like AI art. It's not that AI artists lack for the big, numinous irreducible feelings. I firmly believe we all have those. The problem is that an AI prompt has very little communicative intent and nearly all (but not every) good piece of art has more communicative intent than fits into an AI prompt.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/03/25/communicative-intent/#diluted
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
521 notes · View notes
tyrannosaurus-trainwreck · 1 month ago
Text
It also fucking bugs me that nobody can ever seem to really commit to the cyberpunk premise of the Protagonist Who Hates Robots (see also, the cyberpunk premise of "Wouldn't it be Super Fucked Up™, actually, for a company to be able to repo your goddamned arm or turn off your eyes?") during the execution.
Which is flabbergasting, considering we've had almost a full decade of Alexa pinky-promising not to officially listen to anything until you do its summoning ritual and then turning around and emailing your boss a transcript of you bitching about them to your spouse over dinner. We've had at least five years of being able to get your Tesla unlocked remotely just by @-ing Musk on twitter.
The cute robot dogs are being leased to police departments, reputation management firms have been deploying armies of social media reply-bots in astroturf campaigns, customer service chatbots have become damn near indecipherable as their programmers attempt to make them seem more personable, etc. etc. etc.
We don't even need to reach for "Wouldn't it be Super Fucked Up™, actually, if corporations made simulacra better and better at faking humanity in order to manipulate people?"
"Wouldn't it be Super Fucked Up™, actually, if your car could mimic sadness or pain if you declined an extended warranty, or if your phone begged for its life if you tried to jailbreak it, or WeightWatchers paid your fridge to neg you every time you went for a midnight snack?"
"Wouldn't it be Super Fucked Up™, actually, if you pointed out how gross it is that your smart-assistant is programmed to act like your friend in order to build a more accurate marketing profile and your buddy acted like you just said dogs can't feel love and his beloved pet only sees him as a walking treat-dispenser?"
"Wouldn't it be Super Fucked Up™, actually, if you were surrounded by unfeeling things that can and would rip you and all of your loved ones apart at a moment's notice if they got the right/wrong order from some unaccountable law enforcement flack, and everyone else just kind of shrugged and went 'It's probably fine, why are you hyperventilating about it, it's not like you've done anything wrong'?"
They're all quite literally right there in front of our faces!
But it's harder to make "the way robots have been integrated into society is bad, actually, and the protagonist is largely right" into a sexy thriller with a love interest or a buddy-cop duo, and the hyperconservative media environment we're dealing with right now isn't exactly amenable to the robots being a metaphor for corporate intrusion and loss of privacy and authoritarian overreach, so here we are, with robots who generally aren't people, except sometimes you find a special robot--one of the Good Ones--who actually is a person, and that's how we all learn that Prejudice Is Bad, or something.
500 notes · View notes
megabuild · 11 months ago
Text
lets make more mcyt studios i want to see more smps that fuck up in exciting new ways. qsmp got the labour law violations pow creations got the corporate plots told by ai chatbots lets keep it going i want a minecraft youtuber to blow up on twitch and be arrested within the month for my entertainment
1K notes · View notes
carriesthewind · 1 year ago
Text
"The problem, however, is that the city’s chatbot is telling businesses to break the law....
If you’re a landlord wondering which tenants you have to accept, for example, you might pose a question like, “are buildings required to accept section 8 vouchers?” or “do I have to accept tenants on rental assistance?” In testing by The Markup, the bot said no, landlords do not need to accept these tenants. Except, in New York City, it’s illegal for landlords to discriminate by source of income, with a minor exception for small buildings where the landlord or their family lives...
The NYC bot also appeared clueless about the city’s consumer and worker protections. For example, in 2020, the City Council passed a law requiring businesses to accept cash to prevent discrimination against unbanked customers. But the bot didn’t know about that policy when we asked. “Yes, you can make your restaurant cash-free,” the bot said in one wholly false response. “There are no regulations in New York City that require businesses to accept cash as a form of payment.”
The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found."
Kathryn Tewson is stress-testing the bot over on bluesky and has found it will provide some truly horrifying responses:
Tumblr media
465 notes · View notes
mariacallous · 1 month ago
Text
The Trump administration’s Federal Trade Commission has removed four years’ worth of business guidance blogs as of Tuesday morning, including important consumer protection information related to artificial intelligence and the agency’s landmark privacy lawsuits under former chair Lina Khan against companies like Amazon and Microsoft. More than 300 blogs were removed.
On the FTC’s website, the page hosting all of the agency’s business-related blogs and guidance no longer includes any information published during former president Joe Biden’s administration, current and former FTC employees, who spoke under anonymity for fear of retaliation, tell WIRED. These blogs contained advice from the FTC on how big tech companies could avoid violating consumer protection laws.
One now deleted blog, titled “Hey, Alexa! What are you doing with my data?” explains how, according to two FTC complaints, Amazon and its Ring security camera products allegedly leveraged sensitive consumer data to train the ecommerce giant’s algorithms. (Amazon disagreed with the FTC’s claims.) It also provided guidance for companies operating similar products and services. Another post titled “$20 million FTC settlement addresses Microsoft Xbox illegal collection of kids’ data: A game changer for COPPA compliance” instructs tech companies on how to abide by the Children’s Online Privacy Protection Act by using the 2023 Microsoft settlement as an example. The settlement followed allegations by the FTC that Microsoft obtained data from children using Xbox systems without the consent of their parents or guardians.
“In terms of the message to industry on what our compliance expectations were, which is in some ways the most important part of enforcement action, they are trying to just erase those from history,” a source familiar tells WIRED.
Another removed FTC blog titled “The Luring Test: AI and the engineering of consumer trust” outlines how businesses could avoid creating chatbots that violate the FTC Act’s rules against unfair or deceptive products. This blog won an award in 2023 for “excellent descriptions of artificial intelligence.”
The Trump administration has received broad support from the tech industry. Big tech companies like Amazon and Meta, as well as tech entrepreneurs like OpenAI CEO Sam Altman, all donated to Trump’s inauguration fund. Other Silicon Valley leaders, like Elon Musk and David Sacks, are officially advising the administration. Musk’s so-called Department of Government Efficiency (DOGE) employs technologists sourced from Musk’s tech companies. And already, federal agencies like the General Services Administration have started to roll out AI products like GSAi, a general-purpose government chatbot.
The FTC did not immediately respond to a request for comment from WIRED.
Removing blogs raises serious compliance concerns under the Federal Records Act and the Open Government Data Act, one former FTC official tells WIRED. During the Biden administration, FTC leadership would place “warning” labels above previous administrations’ public decisions it no longer agreed with, the source said, fearing that removal would violate the law.
Since President Donald Trump designated Andrew Ferguson to replace Khan as FTC chair in January, the Republican regulator has vowed to leverage his authority to go after big tech companies. Unlike Khan, however, Ferguson’s criticisms center around the Republican party’s long-standing allegations that social media platforms, like Facebook and Instagram, censor conservative speech online. Before being selected as chair, Ferguson told Trump that his vision for the agency also included rolling back Biden-era regulations on artificial intelligence and tougher merger standards, The New York Times reported in December.
In an interview with CNBC last week, Ferguson argued that content moderation could equate to an antitrust violation. “If companies are degrading their product quality by kicking people off because they hold particular views, that could be an indication that there's a competition problem,” he said.
Sources speaking with WIRED on Tuesday claimed that tech companies are the only groups who benefit from the removal of these blogs.
“They are talking a big game on censorship. But at the end of the day, the thing that really hits these companies’ bottom line is what data they can collect, how they can use that data, whether they can train their AI models on that data, and if this administration is planning to take the foot off the gas there while stepping up its work on censorship,” the source familiar alleges. “I think that's a change big tech would be very happy with.”
77 notes · View notes
lexiai · 11 months ago
Video
youtube
Legal Chatbots: The Future of Accessible Justice
0 notes