#this is the real problem with anti-ai legislation
Note
In the category of "useful use cases for generative art", I came across this blog post a while ago by James Ernest, where he talks about how he used midjourney + photoshop to generate art for playtest versions of games he was working on - he could cheaply and quickly generate art for his playtest cards that properly set the tone for the game, and this could also be a starting point for a human artist if the game panned out.
How can you consider yourself any sort of leftist when you defend AI art bullshit? You literally simp for AI techbros and have the gall to pretend you're against big corporations?? Get fucked
I don't "defend" AI art. I think a particular old post of mine that a lot of people tend to read in bad faith must be making the rounds again lmao.
Took me a good while to reply to this because you know what? I decided to make something positive out of this and use this as an opportunity to outline what I ACTUALLY believe about AI art. If anyone seeing this decides to read it in good or bad faith... Welp, your choice I guess.
I have several criticisms of the way the proliferation of AI art generators and LLMs is making a lot of things worse. Some of these are things I have voiced in the past, some of these are things I haven't until now:
Most image and text AI generators are fine-tuned to produce nothing but the most agreeable, generically pretty content slop, pretty much immediately squandering their potential to be used as genuinely interesting artistic tools with anything to offer in terms of a unique aesthetic experience (AI video still manages to look bizarre and interesting but it's getting there too)
In the entertainment industry and a lot of other fields, AI image generation is getting incorporated into production pipelines in ways that lead to the immiseration of working artists, being used to justify either lower wages or straight-up layoffs, and this is something that needs to be fought against. That's why I unconditionally supported the SAG-AFTRA strikes last year and will unconditionally support any collective action to address AI art as a concrete labor issue
In most fields where it's being integrated, AI art is vastly inferior to human artists in any use case where you need anything other than to make a superficially pretty picture really fast. If you need to do anything like ask for revisions or minor corrections, give very specific descriptions of how objects and people are interacting with each other, or just like. generate several pictures of the same thing and have them stay consistent with each other, you NEED human artists and it's preposterous to think they can be replaced by AI.
There is a lot of art on the internet that consists of the most generically pretty, cookie-cutter anime waifu-adjacent slop that has zero artistic or emotional value to either the people seeing it or the person churning it out, and while this certainly was A Thing before the advent of AI art generators, generative AI has made it extremely easy to become the kind of person who churns it out and floods online art spaces with it.
Similarly, LLMs make it extremely easy to generate massive volumes of texts, pages, articles, listicles and what have you that are generic vapid SEO-friendly pap at best and bizarre nonsense misinformation at worst, drowning useful information in a sea of vapid noise and rendering internet searches increasingly useless.
The way LLMs are being incorporated into customer service and similar services not only, again, encourages the further immiseration of customer service workers, but is also completely useless for most customers.
A very annoyingly vocal part of the population of AI art enthusiasts, fanatics and promoters do tend to talk about it in a way that directly or indirectly demeans the merit and skill of human artists and implies that they think of anyone who sees anything worthwhile in the process of creation itself rather than the end product as stupid or deluded.
So you can probably tell by now that I don't hold AI art or writing in very high regard. However (and here's the part that'll get me called an AI techbro, or get people telling me that I'm just jealous of REAL artists because I lack the drive to create art of my own, or whatever else) I do have some criticisms of the way people have been responding to it, and have voiced such criticisms in the past.
I think a lot of the opposition to AI art has crystallized around unexamined gut reactions, whipping up a moral panic, and pressure to outwardly display an acceptable level of disdain for it. And in particular I think this climate has made a lot of people very prone to either uncritically entertain and adopt regressive ideas about Intellectual Property, OR reveal previously held regressive ideas about Intellectual Property that are now suddenly more socially acceptable to express:
(I wanna preface this section by stating that I'm a staunch intellectual property abolitionist for the same reason I'm a private property abolitionist. If you think the existence of intellectual property is a good thing, a lot of my ideas about a lot of stuff are gonna be unpalatable to you. Not much I can do about it.)
A lot of people are suddenly throwing their support behind any proposal that promises stricter copyright regulations to combat AI art, when a lot of these also have the potential to severely undermine fair use laws and fuck over a lot of independent artists for the benefit of big companies.
It was very worrying to see a lot of fanfic authors in particular clap for the George R R Martin OpenAI lawsuit because well... a lot of them don't realize that fanfic is a hobby that's in a position that's VERY legally precarious at best, that legally speaking using someone else's characters in your fanfic is as much of a violation of copyright law as straight up stealing entire passages, and that any regulation that can be used against the latter can be extended against the former.
Similarly, a lot of artists were cheering for the lawsuit against AI art models trained to mimic the style of specific artists. Which I agree is an extremely scummy thing to do (just like a human artist making a living from ripping off someone else's work is also extremely scummy), but I don't think every scummy act necessarily needs to be punishable by law, and some of them would in fact leave people worse off if they were. All this to say: If you are an artist, and ESPECIALLY a fan artist, trust me. You DON'T wanna live in a world where there's precedent for people's artstyles to be considered intellectual property in any legally enforceable way. I know you wanna hurt AI art people but this is one avenue that's not worth it.
Especially worrying to me as an indie musician has been to see people mention the strict copyright laws of the music industry as a positive thing that they wanna emulate. "this would never happen in the music industry because they value their artists' copyright" idk maybe this is a grass-is-greener type of situation but I'm telling you, you DON'T wanna live in a world where copyright law in the visual arts world works the way it does in the music industry. It's not worth it.
I've seen at least one person compare AI art model training to music sampling and say "there's a reason why they cracked down on sampling" as if the death of sampling due to stricter copyright laws was a good thing and not literally one of the worst things to happen in the history of music which nearly destroyed several primarily black music genres. Of course this is anecdotal because it's just One Guy I Saw Once, but you can see what I mean about how uncritical support for copyright law as a tool against AI can lead people to adopt increasingly regressive ideas about copyright.
Similarly, I've seen at least one person go "you know what? Collages should be considered art theft too, fuck you" over an argument where someone else compared AI art to collages. Again, same point as above.
Similarly, I take issue with the way a lot of people seem EXTREMELY personally invested in proving AI art is Not Real Art. I not only find this discussion unproductive, but also similarly dangerously prone to validating very reactionary ideas about The Nature Of Art that shouldn't really be entertained. Also it's a discussion rife with intellectual dishonesty and unevenly applied definitions and standards.
When a lot of people present the argument of AI art not being art because the definition of art is this and that, they try to pretend that this is the definition of art they've always operated under and believed in, even when a lot of the time it's blatantly obvious that they're constructing their definition on the spot and deliberately trying to do so in such a way that it doesn't include AI art.
They never succeed at it, btw. I've seen several dozen different "AI art isn't art because art is [definition]". I've seen exactly zero of those where trying to seriously apply that definition in any context outside of trying to prove AI art isn't art doesn't end up in it accidentally excluding one or more non-AI artforms, usually reflecting the author's blindspots with regard to the different forms of artistic expression.
(However, this is moot because, again, these are rarely definitions that these people actually believe in or adhere to outside of trying to win "Is AI art real art?" discussions.)
Especially worrying when the definition they construct is built around stuff like Effort or Skill or Dedication or The Divine Human Spirit. You would not be happy about the kinds of art that have traditionally been excluded from Real Art using similar definitions.
Seriously when everyone was celebrating that the Catholic Church came out to say AI art isn't real art and sharing it as if it was validating and not Extremely Worrying that the arguments they'd been using against AI art sounded nearly identical to things TradCaths believe I was like. Well alright :T You can make all the "I never thought I'd die fighting side by side with a catholic" legolas and gimli memes you want, but it won't change the fact that the argument being made by the catholic church was a profoundly conservative one and nearly identical to arguments used to dismiss the artistic merit of certain forms of "degenerate" art and everyone was just uncritically sharing it, completely unconcerned with what kind of worldview they were lending validity to by sharing it.
Remember when the discourse about the Gay Sex cats pic was going on? One of the things I remember the most from that time was when someone went "Tell me a definition of art that excludes this picture without also excluding Fountain by Duchamp" and how just. Literally no one was able to do it. A LOT of people tried to argue some variation of "Well, Fountain is art and this image isn't because what turns fountain into art is Intent. Duchamp's choice to show a urinal at an art gallery as if it was art confers it an element of artistic intent that this image lacks" when like. Didn't by that same logic OP's choice to post the image on tumblr as if it was art also confer it artistic intent in the same way? Didn't that argument actually kinda end up accidentally validating the artistic status of every piece of AI art ever posted on social media? That moment it clicked for me that a lot of these definitions require applying certain concepts extremely selectively in order to make sense for the people using them.
A lot of people also try to argue it isn't Real Art based on the fact that most AI art is vapid but like. If being vapid definitionally excludes something from being art you're going to have to exclude a whooole lot of stuff along with it. AI art is vapid. A lot of art is too, I don't think this argument works either.
Like, look, I'm not really invested in trying to argue in favor of The Artistic Merits of AI art but I also find it extremely hard to ignore how trying to categorically define AI art as Not Real Art not only is unproductive but also requires either a) applying certain parts of your definition of art extremely selectively, b) constructing a definition of art so convoluted and full of weird caveats as to be functionally useless, or c) validating extremely reactionary conservative ideas about what Real Art is.
Some stray thoughts that don't fit any of the above sections.
I've occasionally seen people respond to AI art being used for shitposts like "A lot of people have affordable commissions, you could have paid someone like $30 to draw this for you instead of using the plagiarism algorithm and exploiting the work of real artists" and sorry but if you consider paying an artist a rate that amounts to like $5 for several hours of work a LESS exploitative alternative I think you've got something fucked up going on with your priorities.
Also it's kinda funny when people comment on the aforementioned shitposts with some variation of "see, the usage of AI art robs it of all humor because the thing that makes shitposts funny is when you consider the fact that someone would spend so much time and effort in something so stupid" because like. Yeah that is part of the humor SOMETIMES but also people share and laugh at low effort shitposts all the time. Again you're constructing a definition that you don't actually believe in anywhere outside of this type of conversation. Just say you don't like that it's AI art because you think it's morally wrong and stop being disingenuous.
So yeah, this is pretty much everything I believe about the topic.
I don't "defend" AI art, but my opposition to it is firmly rooted in my principles, and that means I refuse to uncritically accept any anti-AI art argument that goes against those same principles.
If you think not accepting and parroting every Anti-AI art argument I encounter because some of them are ideologically rooted in things I disagree with makes me indistinguishable from "AI techbros" you're working under a fucked up dichotomy.
#this is very well written up#and yeah#it's kind of amazing how many anti-ai arguments I see that are also anti-fanart#or anti-independent artist#or anti-art-you-don't-like#this is the real problem with anti-ai legislation#if it becomes possible to copyright an art style you better believe disney will single-handedly eradicate the art market#copyright laws only protect those who have the resources to use them
Text
There are a lot of valid criticisms to be made about AI art generators and large language models, both in terms of ethics and practicality, but some people out there are using the stupidest arguments I've ever seen and seem to be motivated primarily by a kneejerk "ai = bad so any coherent thought I can string together about that must be a valid argument" sort of reaction.
You look at posts about problems with AI art and it's like:
someone who sounds like my stepfather complaining that those newfangled automatic checkouts will put all the checkout chicks out of work and bring down society by putting them all on The Dole and turning them into bludgers instead of hard-working young ladies
someone who sounds like a legislator in the early 2000s arguing that video games "aren't real art"
someone with a well-reasoned and cogent argument about copyright, the labour market, and the increasing ability of bots to make the world more terrible, and how AI is one new small part of that
someone who sounds like a teen arguing that making avatars in picrew is anti-art because it's depriving real artists of work and if you want a profile pic on some random forum then you should either Git Gud and draw it yourself from scratch or make money and pay someone else to do it
Note
People who defend machine learning stealing from creators have to be corporate shills, right? It’s literally just enabling corporations to zero-effort screw over small artists and writers. It’s an attempt to shake-and-bake art and literature like we do with marvel movies. Why the fuck would anyone defend that on the art website??
I think there are a fair number of anti-capitalists who have legitimate reservations about copyright laws. But I think a lot of people have only been exposed to the flaws of copyright law, like the silliness with Sherlock Holmes displaying emotions in screen adaptations, without understanding that copyright and IP law is one of the two most fundamental protections for artists (the other being free speech). Without copyright law, corporations could freely use and misuse artists’ work with no control by the artist, or compensation for the artist.
And in luxury-gay-space-communism artists wouldn’t need to be compensated with money for their work bc their needs would already be met, right? (I know I’m being a little tongue in cheek, but I do actively want a world where everyone has their needs met) But advocating for change that would harm artists right now, in the world we have, just because the law under attack would be a hindrance in that better world, is not the most effective tactic.
I think too that there is an understandable grievance among people who work on developing machine learning, generative AI, LLMs and other technologies in this area, because there is a lot of misunderstanding and misinformation about how AI works. It becomes really frustrating when people want to legislate your life’s work out of existence because of very cerebral takes on what is or isn’t art. However, that grievance being understandable doesn’t make it valid.
And also corporate vampires are actively pursuing LLMs because they want to automate artists out of the process. Part of this is because they don’t want to have to pay labor costs for the artists. The other part is that corporations want total control over their brand. It can be really tough for WB when JK Rowling goes off on a transphobic screed and cuts into their profit margins, and they. Can’t do anything about it because of copyright law. (I am in no way endorsing JK here, I’m just pointing out one reason corporations don’t want artists to control their own copyright).
So like, when people defend AI against copyright law, without understanding or actively working against the way AI is being used to further separate people from artistic creation and distribution, they’re causing real harm to artistic communities (whether in film, digital, literature, or anywhere else).
And I think this is part of the dynamic: if you are part of the problem, but want to not be part of the problem without changing your actions or views, you have to make someone else the problem.
Photo
4 Cyber-security Essentials for Law Firms
Law firms hold vast and varied streams of sensitive data - from Personally Identifiable Information (PII) to intellectual property and trade secrets. This content is highly valuable to cybercriminals looking to access financial data, run extortion schemes, or mount business email compromise (BEC) campaigns. High-profile leaks like the Panama Papers and Paradise Papers, along with regulatory pressure, have pushed compliance to the top of mind.
Below are the top four security tools and practices that law firms can use to guard client data:
Security Information and Event Management (SIEM)
The expanding landscape of security threats has ushered in an equally expansive collection of cyber solutions: anti-virus, anti-spam, advanced firewalls, intrusion detection, endpoint detection and response, etc. But keeping track of those components can be a challenge for law firms, particularly those without in-house IT. A Security Information and Event Management (SIEM) system aims to streamline and track the information pouring in from these disparate sources. With an established management system, your IT team can assess aggregated log data, run ongoing analysis, and tackle problems as they arise.
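As a rough sketch of the kind of correlation a SIEM automates, consider the toy example below; the log records, source names and alert threshold are all invented for illustration.

```python
# Illustrative only: aggregate events from several log sources and flag a
# single source IP that fails logins across multiple systems.
from collections import Counter

events = [
    ("10:01", "firewall",   "203.0.113.7",  "deny"),
    ("10:02", "vpn",        "203.0.113.7",  "login_failed"),
    ("10:02", "mailserver", "203.0.113.7",  "login_failed"),
    ("10:03", "vpn",        "203.0.113.7",  "login_failed"),
    ("10:04", "fileserver", "198.51.100.2", "login_ok"),
]

# Count failed logins per source IP across all aggregated logs
failures = Counter(src for _, _, src, action in events if action == "login_failed")

THRESHOLD = 3  # alert once one source racks up repeated failures
for src, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {src} across log sources")
```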
24x7 System Monitoring
SIEMs generate a great deal of information, and monitoring network traffic around the clock is an essential part of the process. To reduce the "noise," i.e., false alarms, many IT teams are now applying artificial intelligence to drive analysis within their SIEM. AI is adept at identifying false alarms, so your IT staff can concentrate on remediating real problems. Even with AI, your system may flag tens of thousands of alerts each day. You want to be sure that false positives are expunged, false negatives are dealt with, and your team can respond in real time.
Incident Response Plan
In the event of a machine or device compromise, your staff should be able to quarantine the threat to stop further damage or loss. If the threat progresses, their priorities shift to organizational security or even executing a disaster recovery strategy. Incident response plans are an all-encompassing effort, involving not only your IT staff (system quarantining, patching, etc.) but also your employees and management team. Incident response is a last line of defense in the event of a breach, which means your firm should have well-defined and tested plans, guaranteeing every stakeholder has distinct responsibilities.
Cyber-security Expertise
In addition to automated threat detection tools, cybersecurity experts remain a crucial part of your plan. IT pros who have worked with law firms offer necessary risk assessment, can select the best tools, fine-tune detection approaches, and respond to both alarms and indicators of compromise. Seasoned security specialists will also have a thorough understanding of the compliance and regulatory criteria your security systems must fulfill, such as FRCP, ESI, and GDPR. If you do not have in-house staff, consider outsourcing or leveraging a program like IT Leadership On-demand for in-depth consulting services. To find out more about protecting your law firm's data, reach out to NSPL services for a free consultation.
Text
Tech Content Needs Regulation
It may not be a popular perspective, but I’m increasingly convinced it’s a necessary one. The new publishers of the modern age—including Facebook, Twitter, and Google—should be subject to some type of external oversight that’s driven by public interest-focused government regulation.
On the eve of government hearings with the leaders of these tech giants, and in an increasingly harsh environment for the tech industry in general, frankly, it’s fairly likely that some type of government intervention is going to happen anyway. The only real questions at this point are what, how, and when.
Of course, at this particular time in history, the challenges and risks that come with trying to draft any kind of legislation or regulation that wouldn’t do more harm than good are extremely high. First, given the toxic political climate that the US finds itself in, there are significant (and legitimate) concerns that party-influenced biases could kick in—from either side of the political spectrum. To be clear, however, I’m convinced that the issues facing new forms of digital content go well beyond ideological differences. Plus, as someone who has long-term faith in the ability of the democratic principles behind our great nation to eventually get us through the morass in which we currently find ourselves, I strongly believe the issues that need to be addressed have very long-term impacts that will still be critically important even in less politically challenged times.
Another major concern is that the current set of elected officials aren’t the most digitally-savvy bunch, as was evidenced by some of the questions posed during the Facebook-Cambridge Analytica hearings. While there is little doubt that this is a legitimate concern, I’m at least somewhat heartened to know that there were quite a few intelligent issues raised during those hearings. Additionally, given all the other developments around potential election influencing, it seems clear that many in Congress have been compelled to become more intelligent about tech industry-related issues, and I’m certain those efforts to be more tech savvy will continue.
From the tech industry perspective, there are, of course, a large number of concerns as well. Obviously, no industry is eager to be faced with any type of regulations or other laws that could be perceived as limiting their business decisions or other courses of action. In addition, these tech companies have been particularly vocal about saying that they aren’t publishers and therefore shouldn’t be subject to the many laws and regulations already in place for large traditional print and broadcast organizations.
Clearly, companies like Facebook, Twitter and Google aren’t really publishers in the traditional sense of the word. The problem is, it’s clear now that what needs to change is the definition of publishing. If you consider that the end goal of publishing is to deliver information to a mass audience and do so in a way that can influence public opinion—these companies aren’t just publishers, they are literally the largest and most powerful publishing businesses in the history of the world. Period, end of story.
Even in the wildest dreams of publishing and broadcasting magnates of yore like William Randolph Hearst and William S. Paley, they couldn’t imagine the reach and impact that these tech companies have built in a matter of just a decade or so. In fact, the level of influence that Facebook, Twitter, and Google now have, not only on American society, but the entire world, is truly staggering. Toss in the fact that they also have access to vast amounts of personal information on virtually every single one of us, and the impact is truly mind blowing.
In terms of practical impact, the influence of these publishing platforms on elections is of serious concern in the near term, but their impact reaches far wider and crosses into nearly all aspects of our lives. For example, the return of childhood measles—a disease that was nearly eradicated from the US—is almost entirely due to the spread of scientifically invalid anti-vaccine rhetoric being spread across social media and other sites. Like election tampering, that’s a serious impact to the safety and health of our society.
It’s no wonder, then, that these large companies are facing the level of scrutiny that they are now enduring. Like it or not, they should be. We can no longer accept the naïve thought that technology is an inherently neutral topic that���s free of any bias. As we’ve started to learn from AI-based algorithms, any technology built by humans will include some level of “perspective” from the people who create it. In this way, these tech companies are also similar to traditional publishers, because there is no such thing as a truly neutral set of published or broadcast content. Nor should there be. Like these tech giants, most publishing companies generally try to provide a balanced viewpoint and incorporate mechanisms and fail safes to try and do so, but part of their unique charm is, in fact, the perspective (or bias) that they bring to certain types of information. In the same way, I think it’s time to recognize that there is going to be some level of bias inherent in any technology and that it’s OK to have it.
Regardless of any bias, however, the fundamental issue is still one of influence and the need to somehow moderate and standardize the means by which that influence is delivered. It’s clear that, like most other industries, large tech companies aren’t particularly good at moderating themselves. After all, as hugely important parts of a capitalist society, they’re fundamentally driven by return-based decisions, and up until now, the choices they have made and the paths they have pursued have been enormously profitable.
But that’s all the more reason to step back and take a look at how and whether this can continue or if there’s a way to, for example, make companies responsible for the content that’s published on their platforms, or to limit the amount of personal information that can be used to funnel specific content to certain groups of people. Admittedly, there are no easy answers on how to fix the concerns, nor is there any guarantee that legislative or regulatory attempts to address them won’t make matters worse. Nevertheless, it’s becoming increasingly clear to a wider and wider group of people that the current path isn’t sustainable long-term and the backlash against the tech industry is going to keep growing if something isn’t done.
While it’s easy to fall prey to the recent politically motivated calls for certain types of changes and restrictions, I believe it’s essential to think about how to address these challenges longer term and independent of any current political controversies. Only then can we hope to get the kind of efforts and solutions that will allow us to leverage the tremendous benefits that these new publishing platforms enable, while preventing them from usurping their position in our society.
Source: https://techpinions.com/tech-content-needs-regulation/53580
Text
29 Jul 2019: Smaller Go basket size, pizzas should be for everybody, robots in the aisle
Hello, this is the Co-op Digital newsletter - it looks at what's happening in the internet/digital world and how it's relevant to the Co-op, to retail businesses, and most importantly to people, communities and society. Thank you for reading - send ideas and feedback to @rod on Twitter. Please tell a friend about it!
[Image: an Amazon distribution centre near Salford evaporates into the cloud.]
Amazon Go basket size
“Amazon Go shoppers spend on average between $7 and $15 per shopping trip”: research suggests that Amazon Go basket sizes are about half those at other convenience stores. It’s still early days for Go, so it may turn out that the smaller basket (or rather pocket) is the natural size for a checkoutless store. If so, perhaps visit frequency will increase as shoppers get used to the concept.
(Checkoutless shopping startups to keep an eye on, from the same article: Standard Cognition, Zippin, Grabango.)
Domino’s pizzas aren’t for you when you’re disabled
Domino’s Pizza asks the US supreme court to say that disability protections shouldn’t apply online. The argument seems to be that accessibility litigation is increasing but because The Government has declined to give precise guidance on how organisations can comply with the Americans with Disabilities Act, then everyone should be able to... just not bother complying? (It’s weird because it shouldn’t be too difficult to make the website fairly accessible: it’s some menus, choices and a bit of geographic/language localisation, so… some forms, some pictures, some logic. Though it would have been cheaper to build it in from the start...)
It feels like this happens because it is still too easy to think of disabled people as “other”, a separate group of users whose access to a service could be thought of as a difficult-to-build nice-to-have use case rather than an essential. If everyone realised that one day they too will be disabled temporarily or permanently, there’d be less of this. Ability is temporary, and pizzas should be for everybody.
Event: 'Disability Confident' Celebrating diversity at work - Fri 13 Sep 8am at 151 Deansgate.
Robots in the aisle
Check out Marty in action: Ahold Delhaize has robots trundling up and down supermarket aisles looking for stock gaps. If it was a network of cameras and sensors (ie an Amazon Go store), instead of a moving, vertical figure with eyes and face, would shoppers feel differently about it? What should robots look like and how should they behave to reassure shoppers and colleagues? (Are they all called Marty or will they get their own names?)
Related: How contextual research helped us redesign the replenishing process in our Co-op Food stores.
Privacy vs competition (Facebook, again)
The US Federal Trade Commission will levy a $5bn penalty on Facebook for privacy wobbles.
There can be a tension between regulation that strengthens privacy and regulation that supports competition. In the US, that tension is often discussed as if it’s either/or - governments shouldn’t strengthen privacy if it would dampen the vital competitive heat that comes from startups etc. The argument goes something like this:
Facebook negotiates and agrees with the FTC to do these new things which come with a hefty compliance burden. The new compliance things become something like a standard that other companies will need to follow in future. FB is better positioned than most companies to do the burdensome compliance things, so FB would welcome that regulation because it entrenches the company’s position vs nimble upstarts etc. (So GDPR would be anti-competitive according to this view, though you’d wonder if there’s a bit of “how very dare EU” coming from US firms indignant at EU interventionism.)
But even more than that, there’s a more direct reason that strong privacy regulation is in tension with competition: when the gov legislates or has a consent decree with FB that their privacy controls must be much stronger in future, it becomes harder to apply competition-promoting remedies to FB in future. The gov cannot later say to FB “you’re a monopoly so you must open up your data so that etc”. Conclusion: FB is probably delighted at paying only $5bn to never have to share data with any third party competitors.
There’s some truth in and some problems with that argument: if it is just left to the market, we know outcomes can be unequal. We can see that Big Tech are good at absorbing any startups that threaten them these days (which is how FB ended up with Instagram and WhatsApp). So perhaps regulation *is* the meaningful brake on big tech overreach in future.
Unprime
When we talk about Amazon Prime it’s usually in terms of Prime aggressively growing, capturing all of retail, never letting go etc. Yes, Prime’s a beast but it isn’t quite as clear cut as that: 28% of UK Amazon Prime customers say they signed up by mistake. And some percentage of Prime customers are very deliberately signing up to get access to the 15-16 July Prime Day deals, and then cancelling.
Other news
A working definition of Government as a Platform - good read.
US real estate/co-working startup firm WeWork secures 'financial inducement' of £55.7m in Brexit windfall :(
Store forecasting at Walmart scale - AI models (for technical readers).
"The attack then boils down to this: a vendor scans my QR code in exchange for me getting a free pen, and I retrieve a complete list of all contacts they’ve scanned on that device." - What happened when we hacked an expo?
A good example of an “Everything you need to know about how we use data” page, by Projects by If.
Co-op Digital news
We’re testing our ‘Pay in aisle’ app in Co-op Food stores.
Green is the default option for Co-op pension scheme.
Most opened newsletter in the last month: Tales from the crypto. Most clicked story: Communicating effectively through storytelling at Co-op Digital.
Events
Public events:
Northern Azure user group - Tue 30 Jul 6pm at Federation House.
Tech for Good live - Wed 31 Jul 6.30pm at Federation House.
Monzo show & tell Manchester - Tue 6 Aug 6.30pm at Federation House.
'Disability Confident' Celebrating diversity at work - Fri 13 Sep 8am at 151 Deansgate.
ODI’s Strategic data skills - Mon 16 Sep 8.30am online course (needs 3-5 hours/week)
Mind the Product - MTP Engage - Fri 7 Feb 2020 - you can get early bird tickets now.
Internal events:
Food ecommerce show & tell - Mon 29 Jul 10.30am at Fed House 5th floor.
Delivery community of practice - Mon 29 Jul 1.30pm at Fed House.
User experience future vision - Mon 29 Jul and every day this week 4pm at Fed House 5th floor.
Funeralcare show & tell - Tue 30 Jul 1pm at Angel Square 12th floor breakout.
Digital all hands meeting - Wed 31 Jul 2pm at Fed house Defiant.
Data management show & tell - Thu 1 Aug 2.30pm at Angel Square 13th floor breakout.
Membership show & tell - Fri 2 Aug 3pm at Fed House 6th floor kitchen.
More events at Federation House - and you can contact the events team at [email protected]. And TechNW has a useful calendar of events happening in the North West.
Thank you for reading
Thank you, clever and considerate readers and contributors. Please continue to send ideas, questions, corrections, improvements, etc to the newsletterbot’s valet @rod on Twitter. If you have enjoyed reading, please tell a friend!
If you want to find out more about Co-op Digital, follow us @CoopDigital on Twitter and read the Co-op Digital Blog. Previous newsletters.
Photo
Source: https://toldnews.com/business/whats-the-new-weapon-against-money-laundering-gangsters/
What's the new weapon against money laundering gangsters?
Image copyright Getty Images
Image caption Gangster Al Capone was eventually convicted of tax evasion in 1931
Money laundering accounts for up to 5% of global GDP – or $2tn (£1.5tn) – every year, says the United Nations Office on Drugs and Crime. So banks and law enforcement agencies are turning to artificial intelligence (AI) to help combat the growing problem. But will it work?
Money laundering, so-called after gangster Al Capone’s practice of hiding criminal proceeds in cash-only laundromats in the 1920s, is a huge and growing problem.
“Dirty” money is “cleaned” by passing it through layers of seemingly legitimate banks and businesses and using it to buy properties, businesses, expensive cars, works of art – anything that can be sold on for new cash.
And one of the ways criminals do this is called “smurfing”.
Specialist software is used to arrange lots of tiny bank deposits that slip below the radar, explains Mark Gazit, chief executive of ThetaRay, a financial crime AI provider headquartered in Israel.
“A $0.25 transaction will never be spotted by a human, but transactions of that kind can launder $30m if they are done hundreds of millions of times,” he says.
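That scale checks out with simple arithmetic: at $0.25 a time, reaching $30m takes $30,000,000 ÷ $0.25 = 120 million separate transactions, a volume no human review process could ever inspect and precisely the gap automated analysis is meant to fill.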
And stolen money is often laundered to fund further criminal activity. One recent ATM (cash machine) scam cost banks €1bn (£854m) in total across 40 countries, for example.
Image copyright ThetaRay
Image caption ThetaRay boss Mark Gazit says AI can spot patterns of criminal behaviour
“The gang hacked into thousands of ATMs and programmed them to release up to five notes at a certain time – say 3am – at which point a local criminal or ‘money mule’ would pick it up,” says Mr Gazit.
“The money was then converted into Bitcoin and used to fund human trafficking.”
“Money mules” are often recruited to launder this gang cash through their legitimate bank accounts in return for a fee.
“Estimates suggest that not even 1% of criminal funds flowing through the international financial system is confiscated,” says Colin Bell, group head of financial crime risk at HSBC.
Media caption The dangerous world of teen “money mules”
And the problem seems to be getting worse, despite tightening regulations.
In the UK alone, financial crime Suspicious Activity Reports increased by 10% in 2018, according to the National Crime Agency.
The US Federal Bureau of Investigation (FBI) told the BBC it was working on “applied technical enhancements” to its armoury of crime-fighting tools to help it keep up with advances in financial tech, but remains understandably tight-lipped on the details.
However, other organisations are openly talking about their use of AI to fight the money launderers.
“AI that applies ‘machine learning’ can sift through vast quantities of transactions quickly and effectively,” explains HSBC’s Mr Bell.
“This could be a vital tool for pinpointing suspicious activity.”
For this reason, AI is good at spotting smurfing attempts and accounts that are set up remotely by bots rather than humans, for example.
And it can also spot suspicious behaviour by corrupt insiders – a key element in many money laundering operations.
“Using AI removes much of the risk of people deliberately overlooking suspicious activity,” says Adam Williamson, head of professional standards at the UK’s Association of Accounting Technicians (AAT) – a professional body tasked with helping accountants avoid money laundering.
Image copyright Getty Images
Image caption Several high-profile banks have been caught up in money laundering scandals recently
Many of the world’s biggest banks have been embroiled in money laundering scandals in recent years.
Earlier this year, Swiss banking giant UBS was hit with a €3.7bn (£3.2bn) fine after being found guilty of helping wealthy clients in France hide billions of euros from tax authorities and launder the proceeds. It is appealing against the decision.
Last year, Dutch bank ING paid out €775 million for failing to stop criminals laundering money through its accounts.
And Danske Bank’s boss was forced to quit over a €200bn money laundering scandal involving its Estonian branch.
In Latvia, too, the country’s third largest bank ABLV Bank AS, was wound up after US authorities accused it of large-scale money laundering that had enabled its clients to violate nuclear weapons sanctions against North Korea.
AI can crunch mountains of data in real time – emails, phone calls, expense reports – and spot patterns of behaviour humans might not notice across a global banking group.
Image copyright Getty Images
Image caption Crypto-currencies such as Bitcoin have given gangs another way to launder their cash
Once the system has learned legitimate behaviour patterns it can then more easily spot dodgy activity and learn from that.
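A minimal sketch of that idea, assuming a toy feature set of just transaction amount and hour of day (real systems use far richer features and far more data; scikit-learn's IsolationForest stands in here for a production model):

```python
# Illustrative only: learn "legitimate" behaviour, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated legitimate history: (amount in dollars, hour of day)
normal = np.column_stack([
    rng.normal(50, 20, 5000).clip(min=1),  # typical transaction amounts
    rng.normal(13, 3, 5000) % 24,          # mostly daytime activity
])

# A "smurfing" pattern: many tiny deposits clustered in the small hours
smurf = np.column_stack([
    rng.normal(0.25, 0.05, 50).clip(min=0.01),
    rng.normal(3, 0.5, 50) % 24,
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)             # learn what normal activity looks like

flags = model.predict(smurf)  # -1 marks an anomaly, 1 an inlier
print(f"{(flags == -1).sum()}/{len(smurf)} smurfing transactions flagged")
```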
Regulators around the world are encouraging the new technology, perhaps in acknowledgement that they are losing the battle.
US Financial Crimes Enforcement Network (FinCEN) director Kenneth A. Blanco says: “Financial institutions have been improving their ability to identify customers and monitor transactions by experimenting with artificial intelligence and machine learning.
“FinCEN encourages these and other financial services-related innovations.”
AI tech firms, such as ThetaRay, LexisNexis and Refinitiv, are offering businesses tools to tackle money laundering, but there are concerns that this presents its own problems.
“If organisations are buying AI off the shelf, how can they convince regulators they are in control of it?” asks the AAT’s Adam Williamson.
And as good as AI might be at spotting anomalies when sifting through huge swathes of data, it is only as effective as the data it is fed.
So there is a growing recognition of the need for banks, financial institutions, governments, and law enforcement agencies to share more information.
“Europol is designed to operate in partnership with law enforcement agencies, governmental departments and other stakeholders,” says the agency’s deputy executive director Wil van Gemert.
“We embrace the idea of collective intelligence.”
Mark Hayward, a member of the UK’s new Economic Crime Strategic Board, set up in January, says: “Data sharing is one of our main priorities”.
And legislation has to keep up with the latest trends in financial services that criminals can exploit.
The terrorists behind the 2016 Nice truck attack, for example, paid for the vehicles by pre-paid card to take advantage of the anonymity these cards afford the user.
This is why the European Union’s fifth Anti-Money Laundering Directive introduced last year includes digital currencies and prepaid cards for the first time.
Given that the criminals appear to be winning, any tools that can help tackle the problem must surely be welcome.
Text
Sen. Harris tells federal agencies to get serious about facial recognition risks
Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.
In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.
Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is also a need to advance the issue within the government as well.
Sen. Harris at a recent hearing.
Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”
If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.
Algorithmic accountability
“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.
Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elisabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:
Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.
First, the technology may interpret her mannerisms less accurately than a white male candidate.
Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.
Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.
If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.
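As a hypothetical sketch of the mechanism the letter describes (every number below is invented): a screener that scores candidates by similarity to incumbent managers rewards whatever the incumbents happen to share, not competence.

```python
# Illustrative only: score candidates by distance to the incumbent-manager
# profile. Nothing here measures job performance.
import numpy as np

rng = np.random.default_rng(1)

# Toy "mannerism" vectors; the manager group is homogeneous by construction
managers = rng.normal(loc=0.8, scale=0.05, size=(10, 5))
candidates = {
    "candidate_A": rng.normal(0.8, 0.05, 5),  # happens to resemble managers
    "candidate_B": rng.normal(0.2, 0.05, 5),  # equally qualified, different
}

centroid = managers.mean(axis=0)  # the "ideal hire" the system has learned

for name, vec in candidates.items():
    distance = np.linalg.norm(vec - centroid)  # smaller = rated a "better fit"
    print(f"{name}: distance to manager profile = {distance:.2f}")
```

Candidate B scores far worse purely for not resembling the incumbents, which is the bias-behind-a-scientific-veneer problem the letter warns about.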
“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.
A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a timeline of how the Commission plans to look into this by September 28.
Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.
“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”
Another example is offered:
Consider, for example, a situation in which an African American female in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.
Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.
Again, this is very far from fantasy. On stage at Disrupt just a couple weeks ago Chris Atageka of UCOT and Timnit Gebru from Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.
The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, this workshop did not consider the potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and topic have advanced and it is in the interest of the agency and the people it serves to catch up.
The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will they assess and address potential biases? Will they issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).
Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?
“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.
The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.
“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.
The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?
The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.
These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.
You can find the letters in full below.
EEOC: SenHarris – EEOC Facial Rec… (Scribd embed)
FTC: SenHarris – FTC Facial Reco… (Scribd embed)
FBI: SenHarris – FBI Facial Reco… (Scribd embed)
Text
Is AI powered government worth it?
http://bit.ly/2vDcEKO
From the Australian government’s new “data-driven profiling” trial for drug testing welfare recipients, to US law enforcement’s use of facial recognition technology and the deployment of proprietary software in sentencing in many US courts almost by stealth and with remarkably little outcry, technology is transforming the way we are policed, categorized as citizens and, perhaps one day soon, governed.
We are only in the earliest stages of so-called algorithmic regulation – intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws – but it already has profound implications for the relationship between private citizens and the state.
Furthermore, the rise of such technologies is occurring at precisely the moment when faith in governments across much of the Western world has plummeted to an all-time low. Voters across much of the developed world increasingly perceive establishment politicians and those who surround them to be out-of-touch bubble-dwellers and are registering their discontent at the ballot box.
A technical solution
In this volatile political climate, there’s a growing feeling that technology can provide an alternative solution. Advocates of algorithmic regulation claim that many human-created laws and regulations can be better and more immediately applied in real-time by AI than by human agents, given the steadily improving capacity of machines to learn and their ability to sift and interpret an ever-growing flood of (often smartphone-generated) data.
AI advocates also suggest that, based on historical trends and human behaviour, algorithms may soon be able to shape every aspect of our daily lives, from how we conduct ourselves as drivers, to our responsibilities and entitlements as citizens, and the punishments we should receive for not obeying the law. In fact one does not have to look too far into the future to imagine a world in which AI could actually autonomously create legislation, anticipating and preventing societal problems before they arise.
Some may herald this as democracy rebooted. In my view it represents nothing less than a threat to democracy itself – and deep scepticism should prevail. There are five major problems with bringing algorithms into the policy arena:
1) Self-reinforcing bias
What machine learning and AI, in general, excel at (unlike human beings) is analysing millions of data points in real time to identify trends and, based on that, offering up “if this, then that” type conclusions. The inherent problem with that is it carries with it a self-reinforcing bias, because it assumes that what happened in the past will be repeated.
Let’s take the example of crime data. Black and minority neighbourhoods with lower incomes are far more likely to be blighted with crime and anti-social behaviour than prosperous white ones. If you then use algorithms to shape laws, what will inevitably happen is that such neighbourhoods will be singled out for intensive police patrols, thereby increasing the odds of stand-offs and arrests.
This, of course, turns perfectly valid concerns about the high crime rate in a particular area into a self-fulfilling prophecy. If you are a kid born in an area targeted in this way, then the chances of escaping your environment grow ever slimmer.
This is already happening, of course. Predictive policing – which has been in use across the US since the early 2010s – has persistently faced accusations of being flawed and prone to deep-rooted racial bias. Whether or not predictive policing can sustainably reduce crime remains to be proven.
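The loop is easy to reproduce in miniature. In the toy simulation below (invented numbers, two districts with identical true crime rates), patrols follow recorded crime and crime is only recorded where patrols go, so whichever district starts one incident ahead ends up with nearly all the records:

```python
# Illustrative only: a self-reinforcing patrol-allocation loop.
import random

random.seed(0)

true_rate = [0.5, 0.5]  # districts A and B offend at the same true rate
records = [11, 10]      # district A starts one recorded incident ahead

for day in range(3650):  # ten years of daily patrol decisions
    target = 0 if records[0] >= records[1] else 1  # patrol the "hotter" district
    if random.random() < true_rate[target]:
        records[target] += 1  # only patrolled crime enters the records

# District A ends with on the order of 1,800 recorded incidents; district B
# stays at 10 forever, despite identical underlying crime rates.
print(records)
```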
2) Vulnerability to attack
A second and no less important issue around AI-shaped law is security. Virtually all major corporations, government institutions and agencies – including the US Department of Justice – have likely been breached at some point, largely because such organizations tend to lag far behind the hackers when it comes to securing data. It is, to put it mildly, unlikely that governments will be able to protect algorithms from attackers, and as algorithms tend to be “black boxed”, it’s unclear whether we’ll be able to identify if and when an algorithm has even been tampered with.
The recent debate in the US about alleged Russian hacking of the Democratic National Committee, which reportedly aided Donald Trump’s bid to become president, is a case in point. Similarly, owing to the complexity of the code that would need to be written to transfer government and judicial powers to a machine, it is a near certainty, given everything we know about software, that it would be riddled with bugs.
3) Who’s calling the shots?
There is also an issue around conflict of interest. The software used in policing and regulation isn’t developed by governments, of course, but by private corporations, often tech multinationals, who already supply government software and tend to have extremely clear proprietary incentives as well as, frequently, opaque links to government.
Such partnerships also raise questions around the transparency of these algorithms, a major concern given their impact on people’s lives. We live in a world in which government data is increasingly available to the public. This is a public good and I’m a strong supporter of it.
Yet the companies who are benefiting most from this free data surge show double standards: they are fierce advocates of free and open data when governments are the source, but fight tooth and nail to ensure that their own programming and data remain proprietary.
4) Are governments up to it?
Then there’s the issue of governments’ competence on matters digital. The vast majority of politicians, in my experience, have close to zero understanding of the limits of technology – what it can and cannot do. This failure to grasp the fundamentals, let alone the intricacies, of the space means that they cannot adequately regulate the companies that would build the software.
If they are incapable of appreciating why backdoors cannot go hand-in-hand with encryption, they will likely be unable to make the cognitive jump to what algorithmic regulation, which has many more layers of complexity, would require.
Equally, the regulations that the British and French governments are putting in place, which give the state ever-expanding access to citizen data, suggest they do not understand the scale of the risk they are creating by building such databases. It is certainly just a matter of time before the next scandal erupts, involving a massive overreach of government.
5) Algorithms don’t do nuance
Meanwhile, the final issue with the AI approach to regulation, arguably reflecting the hubristic Silicon Valley attitude that there are few if any meaningful problems tech cannot solve, is its underlying assumption that there is always an optimal solution to every problem.
Yet fixing seemingly intractable societal issues requires patience, compromise and, above all, arbitration. Take California’s water shortage. It’s a tale of competing demands – the agricultural industry versus the general population; those who argue for consumption to be cut to combat climate change, versus others who say global warming is not an existential threat. Can an algorithm ever truly arbitrate between these parties? On a macro level, is it capable of deciding who should carry the greatest burden regarding climate change: developed countries, who caused the problem in the first place, or developing countries who say it’s their time to modernize now, which will require them to continue to be energy inefficient?
My point here is that algorithms, while comfortable with black and white, are not good at coping with shifting shades of grey, with nuance and trade-offs; at weighing philosophical values and extracting hard-won concessions. While we could potentially build algorithms that implement and manage a certain kind of society, we would surely first need to agree what sort of society we want.
And then what happens when that society undergoes periodic (rapid or gradual) fundamental change? Imagine, for instance, the algorithm that would have been built when slavery was rife, being gay was unacceptable and women didn’t have the right to vote. This is why, of course, we elect governments to base decisions not on historical trends but on visions that the majority of voters buy into, often honed through compromise.
Much of what civil societies have to do is establish an ever-evolving consensus about how we want our lives to be. And that’s not something we can outsource completely to an intelligent machine.
Setting some ground rules
All the problems notwithstanding, there’s little doubt that AI-powered government of some kind will happen. So, how can we avoid it becoming the stuff of bad science fiction?
To begin with, we should leverage AI to explore positive alternatives instead of just applying it to support traditional solutions to society’s perceived problems. Rather than simply finding and sending criminals to jail faster in order to protect the public, how about using AI to figure out the effectiveness of other potential solutions? Offering young adults literacy, numeracy and other skills training might well represent a far superior and more cost-effective solution to crime than more aggressive law enforcement, as the toy comparison below illustrates.
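As a sketch of the kind of question such a system would answer (every figure below is invented purely for illustration; a real analysis would need real outcome data), interventions can be ranked by cost per crime averted rather than by how quickly they fill jails:

```python
# Toy comparison of candidate interventions -- all numbers are invented
# placeholders, not real criminology data.
interventions = {
    "more aggressive policing":  {"cost": 5_000_000, "crimes_averted": 400},
    "adult literacy & numeracy": {"cost": 2_000_000, "crimes_averted": 350},
    "youth job training":        {"cost": 3_000_000, "crimes_averted": 500},
}

ranked = sorted(interventions.items(),
                key=lambda kv: kv[1]["cost"] / kv[1]["crimes_averted"])

for name, x in ranked:
    print(f"{name}: ${x['cost'] / x['crimes_averted']:,.0f} per crime averted")
```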
Moreover, AI should always be used at a population level, rather than at the individual level, in order to avoid stigmatizing people on the basis of their history, their genes and where they live. The same goes for the more subtle, yet even more pervasive data-driven targeting by prospective employers, health insurers, credit card companies and mortgage providers. While the commercial imperative for AI-powered categorization is clear, when it targets individuals it amounts to profiling with the inevitable consequence that entire sections of society are locked out of opportunity.
To be sure, not all companies use data against their customers. When a 2015 Harvard Business School study, and a subsequent review by Airbnb, uncovered routine bias against black and ethnic minority renters using the home-sharing platform, Airbnb executives took steps to clamp down on the problem. But Airbnb could have avoided the need for the study and its review altogether: a really smart application of AI algorithms to the platform’s data could have picked up the discrimination much earlier and perhaps also suggested ways of preventing it. This approach would exploit technology to support humans in making better decisions, rather than displace humans as decision-makers.
To realize the potential of this approach in the public sector, governments need to devise a methodology that starts with a debate about what the desired outcome would be from the deployment of algorithms, so that we can understand and agree exactly what we want to measure the performance of the algorithms against.
Secondly – and politicians would need to get up to speed here – there would need to be a real-time and constant flow of data on algorithm performance for each case in which they are used, so that algorithms can continually adapt to reflect changing circumstances and needs.
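A minimal sketch of what that monitoring loop could look like in code follows; the metric, target and window are hypothetical placeholders rather than any real government system. The point is only that performance is measured continuously against an outcome agreed on before deployment.

```python
from collections import deque

AGREED_TARGET = 0.90   # e.g. fraction of algorithmic decisions later upheld on human review
WINDOW = 500           # rolling window of recent cases

recent_outcomes = deque(maxlen=WINDOW)

def record_case(upheld_on_review: bool) -> None:
    """Feed one reviewed decision into the rolling performance window."""
    recent_outcomes.append(1.0 if upheld_on_review else 0.0)

def performance_ok() -> bool:
    """True while the rolling metric still meets the pre-agreed target."""
    if len(recent_outcomes) < WINDOW:
        return True  # not enough data yet to judge
    return sum(recent_outcomes) / len(recent_outcomes) >= AGREED_TARGET
```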
Thirdly, any proposed regulation or legislation that is informed by the application of AI should be rigorously tested against a traditional human approach before being passed into law.
Finally, any for-profit company that uses public sector data to strengthen or improve its own algorithm should either share future profits with the government or agree an arrangement whereby said algorithm will at first be leased and, eventually, owned by the government.
Make no mistake, algorithmic regulation is on its way. But AI’s wider introduction into government needs to be carefully managed to ensure that it’s harnessed for the right reasons – for society’s betterment – in the right way. The alternative risks a chaos of unintended consequences and, ultimately, perhaps democracy itself.
This article was originally published on the World Economic Forum website.
Agorism and Bitcoin: Free People Don’t Ask Maxine Waters for Permission
Anti-agorism Congressional Representative Maxine Waters still has misgivings about Facebook’s proposed Libra digital currency, even after meeting with Swiss government officials to discuss the tech last week. The sustained reservations echo the message of July’s open letter from the House of Representatives to Facebook, calling for a halt on Libra’s ongoing development. While many view regulation and careful legislative feet-dragging as a troublesome necessity for crypto mainstreaming, agorists, anarchists, and other free marketeers see a critical problem: the tech is already here, and how we use it non-violently is nobody else’s damn business.
Intro to Agorism
Agorism is, quite simply, the free exchange of goods and services by individual, free market actors. News.bitcoin.com has previously reported on the philosophy, and this primer about agorism and crypto is a good place to start for the agora-curious. Suffice it to say that in its most basic form, agorism is the philosophy and practice of engaging in free market activity outside of the control or regulations of a state: living one’s life in such a way, to the extent possible, that violent governments are ignored, counteracted, and rendered increasingly irrelevant. The word “agora” itself is a Greek term, meaning “open markets.” The hugely influential agorist activist, philosopher and author Samuel Edward Konkin III once defined agorist counter-economics as:
The study or practice of all peaceful human action which is forbidden by the State.
The Problem With Regulation
One of the most misunderstood aspects of agorism, voluntaryism, and anarchism is the fact that chaotic violence and a lack of order are not what is being sought. Agorists want the same things any other sane person wants: better education, better healthcare, better opportunities, and more peace. What is being sought is logical order and voluntary interaction, governed not by sociopathic, economically inept politicians and religious beliefs like the “divine right to rule,” but by logic, science, and the natural reality of individual self-ownership. That is to say, each individual owns his or her own life and body, and by extension, the property legitimately acquired or created by that body and mind.
The regulation of cryptocurrencies by the state exists ostensibly to fight crime and terror. What is seen playing out in reality, however, is that the groups that are by far the largest financiers of terror and violence globally — governments — have labeled themselves “regulators,” and now stifle a technological revolution set to help free billions of people. Waters stated, in her August 25 official assessment of the meeting with Swiss officials:
While I appreciate the time that the Swiss government officials took to meet with us, my concerns remain with allowing a large tech company to create a privately controlled, alternative global currency. I look forward to continuing our Congressional delegation, examining these issues, money laundering, and other matters within the Committee’s jurisdiction.
It is interesting that the state Waters represents, the United States Federal Government, is the world’s leading money launderer, by most rational estimations. After all, what is the unlimited, systematic creation of debt for the benefit of an elite class, represented by pieces of paper and zeroes and ones in computers, but a gigantic scheme to launder financial power? Bitcoin presents a threat because these irresponsible economic practices are simply not possible within the protocol itself.
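The supply cap the author alludes to is verifiable from the protocol’s issuance schedule: the block subsidy starts at 50 BTC and halves every 210,000 blocks, which bounds total issuance below 21 million coins. A few lines confirm the arithmetic:

```python
# Bitcoin's total issuance, computed from the halving schedule.
SUBSIDY_SATOSHI = 50 * 100_000_000   # initial block reward, in satoshi
HALVING_INTERVAL = 210_000           # blocks between halvings

total = 0
subsidy = SUBSIDY_SATOSHI
while subsidy > 0:
    total += subsidy * HALVING_INTERVAL
    subsidy //= 2                    # integer halving, as in the reference client

print(total / 100_000_000)           # ~20,999,999.98 BTC: hard-capped by the protocol
```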
The United States Federal Government spends over $1.25 trillion on war, annually. There is an infestation of child porn users in the Pentagon and at NASA. The IRS pays people to spy on hardworking Americans, organizing threatening letter campaigns to scare even law-abiding citizens into paying money they don’t owe. And these are the regulators “concerned” with crime?
The Biggest Roadblock for Inclusion of the Poor Is Government Regulation
Financial inclusion is a big buzz-phrase these days, especially with influential, mainstream-friendly projects like Libra. It sounds nice. Include the previously disconnected, unbanked, and impoverished in the exciting new “crypto revolution” where blockchain saves us all, ends world hunger, and wipes our asses for us on the way out. There’s just one problem: there’s no need for centralized regulators – only individual human action.
“Hey there, impoverished guy! Wanna get out of debt!? Just head on over to Facebook or Coinbase and create an account. Of course, you’ll have to wait a few weeks to a month for your passport photo to be …. what’s that? You don’t have a passport? Well, I’m sure that’s okay, just present some proof of resi … how’s that? You’re homeless? Oh. Well, not to worry. If you figure out how to pass the KYC/AML requirements, pretty soon you’ll be able to do business online, and send and receive crypto! Bye!”
A cheap smartphone and an internet connection. This is all that is currently needed to send and receive crypto through free trade. Private, P2P platforms like Local.bitcoin.com help make this possible. Introduce a little government, however, and everything becomes cumbersome, violent, and vexingly inconvenient and inhuman. Agorism says if it’s non-violent, trade it freely.
The Desperate IRS
Speaking of poor people, the IRS is understaffed and overworked. The agency is currently flailing to finagle whatever paltry satoshi dust it can out of America’s pockets; over 46,000 of its employees were forced to work for free during the last government shutdown. Some of them got discouraged and decided not to go back to work at all. The agency has turned to fear-mongering letter campaigns and automation, sending out a series of AI-generated notices relating to supposed non-payment of crypto taxes. It has also sent out over 400,000 notices since February 2018 about failure to report income, which could result in the loss of one’s passport. Financial inclusion never sounded less inclusive.
Government Through the Lens of Agorism
“Free people don’t ask for permission.” The commonly repeated agorist bromide deserves fresh attention. In most people’s daily lives, it would be absurd to ask someone else for permission to do things like drive into town, go out for a pizza, or help a friend fix his car. Without payment to a small group of people calling themselves government, though, each of these activities can turn deadly.
When state agents force someone to halt and find they don’t have that special piece of plastic for driving, they can be kidnapped. Those who try to provide a service — say starting a pizza shop — are also not immune. Without the proper building permits and food service licenses (also costing a pretty penny, of course) an entrepreneur will be shut down, fined, and thrown in a cage if they don’t pay. Potentially killed, if they physically resist the kidnapping. Those who try to help a friend with auto repair might also be criminals, thanks to the protection of the state. In Sacramento County, CA and elsewhere, this is already the reality.
No Victim, No Crime
When someone’s neighbor smokes cannabis in their home, they are not violating the body or property of anyone. Going a few miles over the speed limit is not violent, either. Nor is selling tacos outside of a sporting event to willing customers. Nor is collecting rainwater. Neither is drinking raw milk. Generating one’s own electricity and not selling a surplus back to a political jurisdiction called a city is not a violent crime. Nor is refusing to pay taxes.
Critics of the agorist approach are rightly concerned that there must be some means by which to establish order in any given society. Anarchists, agorists, and voluntaryists agree; the prescribed methods are simply different. Where the Maxine Waters, Steven Mnuchins, and Donald Trumps of the world demand submission to their violence-based class system, agorists maintain we are all equal under the biological, metaphysical, immutable reality of individual self-ownership.
Decentralized rules can be set up for any group of property owners anywhere, based on these principles. As such, asking self-styled gods called politicians for permission becomes a laughable prospect, or would, if the risks of disobedience were not so real. Yet, for some, freedom is well worth it, open letters and legislative scribbles from psychopaths be damned.
Source: news.bitcoin
Palmer Luckey’s Secretive Defense Company Is Booming Under Trump
Photo Illustration by Sarah Rogers/The Daily Beast / Photos Getty

As many tech giants grow skittish about cashing in on the surveillance boom, one company helmed by an industry iconoclast seems custom-built for Big Brother. For Anduril Industries, scanning the California desert alongside border agents or helping drones home in on targets isn’t toxic—it isn’t even controversial. That mostly has to do with the company’s founder, Palmer Luckey. The 26-year-old is best known as the designer of the Oculus Rift virtual reality headset that shepherded the futuristic technology into the mainstream. In 2014, Luckey sold his 100-person virtual reality company to Facebook for $3 billion.

Luckey was reportedly forced out of Facebook in early 2017 after The Daily Beast revealed that he was bankrolling an unofficial pro-Trump group dedicated to “shitposting” and circulating anti-Clinton memes. It only took a few months for the boyish, ever-Hawaiian-shirt-clad near-billionaire to launch his second act, a defense company called Anduril Industries. Prone to references to fantasy worlds and role-playing games, Luckey named his new project after a mythical sword from The Lord of the Rings trilogy. Tellingly, the weapon’s other name is the “Flame of the West.”

With Oculus, Luckey turned science fiction into affordable hardware. With Anduril, he’d port those innovations over into the defense sector, fusing affordable hardware and machine learning to create a border and battlefield surveillance suite that the federal government couldn’t resist. Two years ago, Anduril was little more than a placeholder website with a casting call for “dedicated, and patriotic engineers.” But with a handful of contracts in its cap and some friends in high places, Luckey’s AI-powered defense experiment has established itself as an up-and-comer in the scrum for federal business. Anduril is still small—a fraction of the size of a Lockheed or a Raytheon, say—but it has quickly grown to employ close to 100 people, moving into a 155,000-square-foot headquarters in Irvine, California, where it can comfortably double in size.

And far from shying away from politics post-Facebook, Luckey leaned into the MAGA-friendly ideology—donating big money to pro-Trump outfits, and meeting with Trump cabinet officials, all while his company quietly picks up military contracts and expands its work with border patrol. In a recent Reddit thread Luckey defended his new company’s business model: “Of the things people might find divisive about me, this should be near the bottom of the list.”

Palmer Luckey (Anduril/Twitter)

BIG BORDER BUSINESS

When Trump’s vision of a “big, beautiful wall” ran into the costly, inefficient realities of a contiguous physical partition along the southern U.S. border, high-tech surveillance solutions emerged as a viable next best thing. In 2017, Luckey worked with Rep. Will Hurd (R-TX) on cost estimates for legislation to push a virtual border wall into consideration. As part of that collaboration, Hurd introduced Luckey to a rancher in his Texas border district who agreed to let the young company test drive three of its portable sentry towers on his private land. (On Thursday, the 41-year-old tech-savvy Congressman announced that he would not seek re-election “in order to pursue opportunities… to solve problems at the nexus between technology and national security.”)

Anduril bills itself as an “AI product company” specializing in hardware and software for national defense.
Its hallmark product, called Lattice, is a modular surveillance setup comprising drones, “Lattice Sensor Towers,” and software that autonomously identifies potential targets. As it demonstrated in two live pilot programs at the U.S. southern border last year, the system can detect a human presence and push alerts to Customs and Border Protection agents in real time.

Now, the company is expanding its reach. Anduril is currently working on a new pilot program with Customs and Border Protection (CBP) to test “cold weather variations” of its high-tech surveillance system capable of running reliably outside the hot, dry climate of states along the U.S. border with Mexico. That program consists of two limited trials, one in Vermont and one in Montana. The pilots were pursued by the agency’s innovation team, which explores new technologies for guarding U.S. borders and will “determine the efficacy and applicability of the technology to northern border challenges,” according to CBP. While the northern U.S. border sees far less activity outside of designated border crossing sites, it does span some terrain even more remote and challenging than the arid stretches that line the southwest states. The U.S.-Mexico border makes headlines for its divisive role in American immigration policy, but the line dividing the U.S. and Canada is actually five times as long.

Anduril may also be shopping its technology to the other side of the border. In May, Luckey represented Anduril at a Toronto event advertised as part of an official trade delegation to Canada. When asked if Anduril’s business in Canada was purely aspirational or actually in the works, the company declined to comment.

Anduril’s border work was previously limited to a CBP pilot near San Diego and some unofficial testing at a private ranch outside of El Paso. The San Diego program began with only four towers in the agency’s San Diego Sector and over time expanded to 14. Now, with the pilot program successfully ended, those 14 towers remain operational. The company has also turned its unofficial deployment in Texas into a formal relationship. The agency recently bought 18 additional Anduril-made towers and plans to deploy them later this year. That installation is not part of a pilot program. “Like any company, CBP’s future relationship with Anduril will be subject to fair and open competition, the company’s ability to deliver relevant technology, available funding, and a variety of other factors,” CBP told The Daily Beast.

Beyond its border-watching sentry towers, Anduril also makes its own heli-drone, a sort of miniaturized helicopter that can stay airborne for long periods. Those drones, known as Lattice Ghosts, are capable of stealth flight and flying in formation over large swaths of land or sea for anything from “anti-cartel operations to stealth observation.”

An Anduril sentry tower with one of the company’s heli-drones (Anduril)

GAMER GOD GOES TO WASHINGTON

Anduril is a curious company to have grown out of the West Coast tech scene—and a sign of the times. Luckey might still refuse to wear closed-toed shoes, but he’s reinvented himself within Anduril’s hyper-patriotic, veteran-friendly image. Luckey has smartly made efforts to surround himself with serious military types who blend in with the close-cropped national security crowd. The company has quickly built its operation out in Washington D.C., recruiting former Senate Armed Services Committee staff director Christian Brose late last year to serve as the company’s head of strategy.
As the Intercept previously reported, that hire helped get Anduril into the National Armaments Consortium, a nonprofit that connects defense companies with military contracts. “The company’s existed a year, and they already have systems that have been built and fielded right now,” Brose told Defense News around the time of his hiring. “This isn’t the classic play, ‘Give us billions of dollars and 10 years, and we’ll promise we’ll build you something.’ They have developed systems, and they’re going out and solving problems with them.”

Anduril also picked up Scott Sanders, a former intelligence and special operations officer for the Marine Corps, to lead operations. By late 2018, Sanders was demoing Anduril’s hardware and software surveillance system for Marines at Camp Pendleton. Less than a year later, the company sealed the deal on a $13.5 million contract with the Marine Corps to secure bases in Japan, Arizona, and Hawaii, surrounding each with a “virtual ‘digital fortress.’”

With two co-founders from Oculus and four from Palantir, tech’s biggest defense success story, Anduril’s early hires have been key to its quick expansion. One of those was Trae Stephens, a former Palantir engineer and current partner at Peter Thiel’s Founders Fund, who joined Trump’s transition team through Thiel.

Anduril’s leadership represents a blend of political leanings, even if Palmer Luckey’s politics are quite a bit louder. The company’s co-founder and COO Matt Grimm in particular is an active Democratic donor, with donations to Hillary Clinton, Beto O’Rourke, ActBlue, and Pete Buttigieg’s presidential campaign more recently. Co-founder and CEO Brian Schimpf donates to Democrats, too, including Henry Cuellar, a co-sponsor of Hurd’s SMART Wall Act bill in late 2018. Christian Brose represents the traditional Republican wing within the company, having worked under the late Sen. John McCain. Given his work with Trump and Thiel, Stephens has shown a willingness to work with leaders whose politics are more closely aligned with Luckey’s own. Next to Thiel, Luckey is probably Trump’s most high-profile booster in the tech world, even if he was excommunicated from its mainstream.

SIX DEGREES OF TRUMP

Luckey has described himself as “fiscally conservative, pro-freedom, little-L libertarian, and big-R Republican.” Regularly donning wigs and candy-colored anime garb, Luckey might be the only military contractor who’s active on the cosplay circuit. Reportedly a longtime Trump fan, converted after reading The Art of The Deal, Luckey donated $100,000 to Trump’s inaugural committee. He was spotted last month at a Trump 2020 fundraiser put on by Donald Trump Jr. and his girlfriend, former Fox News host Kimberly Guilfoyle.

While his political choices and some of his company may have previously placed him outside of Silicon Valley’s establishment politics, the Trump administration’s embrace of fringey, irreverent far-right ideologues helpfully opened some doors. In 2017, for example, Luckey discussed his border wall tech with Trump’s Interior Secretary Ryan Zinke in a face-to-face partially arranged by Chuck C. Johnson, a former Breitbart reporter who was permanently banned from Twitter for threatening to “take out” Black Lives Matter activist DeRay Mckesson. During Anduril’s earliest days, Luckey also met with former Trump strategist and Breitbart editor Steve Bannon, another figure from the political edges who found his way to the center in 2016.
Luckey, described as a “proud nationalist” by former Oculus friend John Carmack, has evoked ominous language with echoes of Trump’s own on the issue of the border. “If I could wave a magic wand, the United States would have perfect border security and arms wide open to everyone who believes in American values,” Luckey said in a tweet. “Murderous gangs that terrorize communities across North America don’t fit the bill, and I hope we can erase them from existence.” Luckey added that his views are “mainstream libertarian as it gets” and that in spite of his business in border security he is “a big fan of immigration.” In any online scrap over Anduril’s border business, he’s quick to draw a distinction between the concept of “border security” and policies around immigration that shape realities—and technologies—at the U.S. border.

While his departure from Facebook also coincided with the end of the Zenimax trial, in which the Oculus founder defended himself against allegations that his virtual reality empire was built on stolen trade secrets, Luckey’s tendency to live his right-leaning, irreverent politics out loud within Facebook’s tepidly liberal leadership culture led to the events that made the axe come down. “I contributed $10,000 to Nimble America because I thought the organization had fresh ideas on how to communicate with young voters,” Luckey said in a Facebook post at the time, claiming that he actually planned to cast his vote for Gary Johnson. The Wall Street Journal later reported that Luckey’s public support for the third-party candidate was a facet of Facebook’s PR strategy foisted on him by executives at the company.

TECH UNDER THE MICROSCOPE

The tide of public opinion has turned against the tech industry in recent years. After the revelations of Russian interference in the 2016 election and a concurrent wave of heightened sensitivity around privacy, the sector is no longer viewed as an optimistic hub filling the near-future with consequence-free innovation. That shift in public perception, coupled with new activist energy within the tech workforce, means that tech companies are facing a new level of scrutiny on their government defense deals, when previously they might have guiltlessly enjoyed federal cash infusions. Those deals have also grown out of the government’s increased comfort with maturing tech companies capable of handling sensitive contracts and jumping through certification hoops.

When Google came under fire and backed away from the Pentagon’s controversial Project Maven contract, developing AI that can help drones autonomously home in on potential targets, Anduril stepped in. Amazon stayed the course under similar pressure, batting away internal dissent about the Pentagon’s whopping $10 billion cloud computing project for Joint Enterprise Defense Infrastructure, better known as “JEDI.” After Microsoft landed a $480 million Army contract for its HoloLens augmented reality goggles late last year, a cluster of Microsoft employees protested. “While the company has previously licensed tech to the U.S. military, it has never crossed the line into weapons development,” they wrote. “With this contract, it does.” Microsoft CEO Satya Nadella defended the work in an interview with CNN. “We made a principled decision that we’re not going to withhold technology from institutions that we have elected in democracies to protect the freedoms we enjoy,” Nadella said.

Last month, Luckey spelled out Anduril’s own uncomplicated attitude toward military work in an interview with CNBC.
“What I am glad of is that Microsoft and Amazon are both willing to do this contract in the first place. There’s a lot of U.S. tech companies that have been pulling out on the D.O.D,” Luckey said. He went on to criticize Google for withdrawing from the Pentagon’s $10 billion JEDI contract over internal backlash around ethical concerns. “I’m mostly just glad that Amazon and Microsoft are still in there fighting this… they are willing to work with the military,” Luckey said. “I think we could use a lot more of that and I would love to see even more companies in the mix.”

With a president shredding his office’s long-held traditions while obsessing over slowing immigration to a trickle, maybe it’s no surprise that a boyish gamer demi-god in a Hawaiian shirt could reinvent himself as a serious security contractor keen to lock down borders around the world. In June, Anduril entered into a relationship with the UK Royal Navy through its NavyX tech accelerator. “The artificial intelligence and [intelligence, surveillance and reconnaissance] systems from Anduril are game changing technologies for the Royal Marines Future Commando Force,” Royal Navy Chief Technology Officer Dan Cheesman said. Recently, Luckey has hinted at the company’s interest in deploying its border surveillance system to the Irish border, where Brexit has reignited historical tensions along what would become the only land border between the UK and the EU.

A soldier tries the company’s VR system for controlling its hardware (Anduril)

Anduril believes that its technology is modular and versatile enough to be applied well beyond the military sector. While its AI-powered towers have mostly been implemented to secure borders, the company is in conversation about providing tech to other industries, like securing power grids and oil and gas facilities. What’s more, the company has signaled its interest in applying its AR and VR expertise to “real-time battlefield awareness for soldiers”—a chance it might get after landing a piece of the drone-centric Project Maven contract. The company is also interested in providing tech to aid soldiers on the ground.

“Imagine if the Nazis had been the first people to make practical nuclear weapons. Imagine if the Russians had been the first people to make practical nuclear weapons,” Luckey told CNBC last month. If America’s top scientists and technologists had steered clear of that technology due to ethical concerns, Luckey argued, we’d be in “a very different world today.” “It would not be the world that we’re in right now—and it would be a lot worse.”

Read more at The Daily Beast.
The DeanBeat: The inspiring possibilities and sobering realities of making virtual beings
I had the pleasure of attending the first-ever Virtual Beings Summit in San Francisco on Wednesday, where I met real people talking about making virtual characters driven by artificial intelligence.
It felt like I was witnessing the dawn of a new industry. I know that the idea of making a virtual human or animal has been around for a long time, but Edward Saatchi, the CEO of AI-powered virtual being company Fable Studios, gathered a diverse group of people from across disciplines and international borders to speak at the conference, as if they all had the same mission. To be there at the beginning.
Who they are
Above: Edward Saatchi is cofounder of Fable Studios.
Image Credit: Dean Takahashi
The whole day was full of inspiring talks from people who came from as far away as Japan and Australia. So many uses of the technology were on display, built by a wide array of people. Saatchi curated a list of entrepreneurs, investors, artists, writers, engineers, designers, musicians, virtual reality creators, and machine-learning experts. They included people who built virtual influencers, artificial fashion models, AI music creators, virtual superhero chatbots, virtual reality game characters, and augmented reality assistants. The virtual beings will help us with medical issues, entertain us, and god knows what else.
This cross-disciplinary cast is what it will take to create virtual beings: characters that you know aren’t real but with whom you can build a two-way emotional relationship, Saatchi said. And it won’t be machine learning and AI alone that deliver this. It will take artists working alongside engineers and storytellers. These virtual beings will be works of art and engineering. Saatchi also announced that Virtual Beings grants ranging from $1,000 to $25,000 will be awarded to those who create their own virtual beings.
Saatchi’s Fable Studios has shifted from being a VR company into a virtual beings company, and it has created the VR experience Wolves in the Walls, starring an eight-year-old girl, Lucy. Pete Billington and Jessica Shamash of Fable said the goal with Lucy was to create a companion that you could live with or speak to for decades. Lucy was just one of many virtual characters shown at the event. They ranged from Instagram influencer Little Miquela to MuseNet, which is an AI that creates its own music, like a new Mozart composition.
“We think about how we take care of her, and how she takes care of us,” Shamash said.
Amazing progress
Above: Kim Libreri, CTO of Epic Games, shows off A Boy and His Kite.
Image Credit: Dean Takahashi
In a brief talk, Kim Libreri, chief technology officer of Epic Games, showed how fast the effort to create digital humans has progressed. The Unreal Engine company and its partners 3Lateral and Cubic Motion have pushed the state of the art in virtual human demos, starting with A Boy and His Kite in 2015 and continuing with Hellblade in 2016, Mike in 2017, and Siren, Troll, and the Andy Serkis demo in 2018.
But the summit made clear that this wasn’t just a matter of physically reproducing humans with digital animations. It was also about getting the story and the emotion right to make a believable human. Cyan Banister, a partner at Founders Fund and an investor in many Virtual Beings Projects, said she wanted to see if someone could reproduce her grandmother so that she could have conversations with her again. Banister said these characters could be so much more compelling if they remember who you are and converse with you in context.
She became interested in virtual beings when she heard about a Japanese virtual character — Hatsune Miku — who didn’t exist, but who threw successful music concerts singing songs created by fans. She has invested in Fable Studios as well as companies like Artie, which is bringing virtual superhero characters and other celebrities to life as a way to get consumers more engaged with mobile apps.
“I saw Hatsune Miku in person, and that was magical, seeing how genuinely excited people were,” Banister said. “I wondered what is the American equivalent of it. We haven’t seen it yet, but I think it’s coming.”
Would you bring back your best friend?
Above: Eugenia Kuyda, creator of Replika, built a chatbot in memory of her best friend.
Image Credit: Dean Takahashi
My sense of wonder turned into an entirely different kind of emotion when I heard Eugenia Kuyda talk about why she cofounded Replika. Her company was born from a tragedy. Her best friend, Roman Mazurenko, was killed in a car accident. Months afterward, she gathered his old text messages in an effort to preserve his memory. She wanted one more text message from him.
She had her team in Russia build a chatbot using artificial intelligence, with the aim of reproducing the style and nature of Mazurenko’s personality in a text-based chatbot. It worked. Kuyda put it out on the market as Replika, and in the past couple of years it has attracted more than 6 million users. Many of those users write fan letters, saying that they are in love with their chatbot friends.
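To give a feel for the general idea, here is a toy retrieval-style sketch with made-up messages; it is not Replika’s actual system, which reportedly relied on neural sequence models. A simple approach answers a new message with the stored reply whose original prompt is most similar:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (message, reply) pairs standing in for a real text-message corpus.
pairs = [
    ("how are you", "surviving on coffee, as usual"),
    ("want to get dinner", "only if it's georgian food again"),
    ("i miss you", "i'm always around, you know that"),
]

prompts = [p for p, _ in pairs]
vectorizer = TfidfVectorizer().fit(prompts)
prompt_vectors = vectorizer.transform(prompts)

def reply(message: str) -> str:
    """Return the stored reply whose original prompt best matches `message`."""
    scores = cosine_similarity(vectorizer.transform([message]), prompt_vectors)
    return pairs[scores.argmax()][1]

print(reply("hey, how are you doing?"))  # -> "surviving on coffee, as usual"
```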
Above: Replika has 6 million users who text with chatbots.
Image Credit: Dean Takahashi
“It’s like a friend that is there for you 24/7,” Kuyda said. “Some of them went beyond friendships.”
There are so many lonely people in the world, Kuyda said. She has been told that Replika is creepy, but she has begun to figure out how to measure the happiness that it creates. If those lonely people have someone to talk to, they aren’t so lonely anymore, and they can function better in social situations. If Replika keeps making people happier and less lonely, then that is a good thing, she said.
Above: Replika’s conversations
Image Credit: Dean Takahashi
I went up to Kuyda afterward and remarked to her how much it resembled the script of the Academy Award-winning film Her, in which Joaquin Phoenix plays a lonely man who falls in love with his AI-driven computer companion. The worst thing that could happen here is similar to the plot of the movie, where one day the bot simply disappears. Kuyda wants to make sure that doesn’t happen, and she is investigating where to take this next. She wanted to make sure that everyone could have a best friend, as she had Roman.
Who we pretend to be
Above: Lucy from Wolves in the Walls shows what it takes to make a virtual being.
Image Credit: Dean Takahashi
If something was missing at the event, it was sobering talk about how the technology needs some rules of the road. Several speakers hinted that virtual beings could be creepy, as we’ve seen in plenty of science fiction horror stories about AI, from The Terminator to the latest Black Mirror episodes on Netflix.
Since nobody offered this warning, I jumped in myself. On the last panel, I noted how the upcoming Call of Duty: Modern Warfare game will be disturbing because it combines the agency of an interactive video game with realistic combat situations and realistic humans. It puts you under intense pressure while deciding whether to shoot civilians — men or women — who may be harmless or running to detonate a bomb. That’s a disturbing level of realism, and I’m not sure that’s my idea of entertainment.
The potential risks of the wrong use of AI — virtual slaves, deep fakes, Frankenstein monsters, and killing machines — are plentiful.
And that, once again, made me think of the moral of Kurt Vonnegut’s novel Mother Night, whose anti-hero is an American spy who does better at his cover job, as a Nazi propagandist, than he does as a spy. The moral is: “We are what we pretend to be, so we must be careful about what we pretend to be.”
Above: Don’t fall in love. She’s not real.
Image Credit: Dean Takahashi
I said, “I think that’s a wise lesson, not only for users with the agency they have in an open world with virtual beings. You will be able to do things that are there for you to do. But it’s also a lesson for creators of this technology and the decisions they make about how much agency you can have” when you are in control of a virtual being or interacting with one. You have to decide how to best use your hard-earned talent for the good of society when you are thinking about creating a virtual being.
The temptations of the future world of virtual beings are many. But Peter Rojas, partner at Betaworks Ventures, said, “We shouldn’t be afraid to think about legislation and regulations for things that we want to happen.”
He said there are moral, ethical, and responsibility issues that we can discuss another day. Rojas’ firm has funded a company working on technology to identify deep fakes, so that journalists, social media firms, and law enforcement can detect attempts at deception in which someone’s believable head is placed on another person’s body, making them appear to do things they never did.
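The article doesn’t identify the company or its approach. One common detector design, shown here purely as a hedged sketch, treats the problem as binary image classification over face crops; the weights file and input frame below are hypothetical:

```python
# Hypothetical sketch of one common detector design: a binary real/fake
# classifier over face images. The article doesn't say what the funded
# company builds; the weights file here is an assumed artifact.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # outputs: [real, fake]
model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical file
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def probability_fake(image_path: str) -> float:
    """Return the model's estimated probability that a face is synthetic."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return torch.softmax(logits, dim=1)[0, 1].item()

print(probability_fake("frame.jpg"))  # hypothetical input frame
```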
“There is incredible talent working on the different technical problems here on the storytelling side,” Rojas said. “As excited as I am about what’s happening in the field, I also share fears about how this could be used. And where I don’t see a lot of entrepreneurs is in working on new products around technology that will help against the deception.”
I agree with Rojas. Let’s all think this through before we do it.
0 notes
Text
My Health Record justifications 'kind of lame': Godwin
Lawyer and writer Mike Godwin is one of America’s most prominent commentators on digital policy. Recently, he spent more than a month researching Australia’s controversial My Health Record and its background. He didn’t like what he found.
“The benefits are not clear. On the one hand, it seems to be billions of Australian dollars spent for nothing really useful, and on the other hand it seems very privacy invasive,” Godwin told ZDNet last week.
“If you don’t want anyone associated with any healthcare organisation you ever connect to, or with government generally, looking at your health records over some long period of time, you ought to opt out now.”
Godwin thinks the government has done “a very poor job” of justifying My Health Record. In the last couple of months of the opt-out window, at least, it’s been trying to “propagandise” for the centralised digital health record system.
“Honestly, from my perspective, even the best-case stories of My Health Record are kind of lame,” he said.
“If everybody had to carry around a shopping cart full of their health records for every visit to the doctor then you might have a case, but that doesn’t seem to be a problem for most Australians.”
Godwin summarised his research in a 2200-word article for Slate in August.
“If you want the tl;dr version of it, it’s ‘Don’t sign up. Opt out.’ If you forget everything else I’ve said here, just opt out. You can opt in later, but you need to opt out now before November 15.”
Democracies rely on ‘limits to what government can do’
Godwin has also been tracking the progress of the Assistance and Access Bill, the Australian government’s proposed legislation to tackle the problems that end-to-end encrypted messaging poses for law enforcement.
The government has been eager to hose down concerns that the new laws would force vendors to build backdoors into encrypted communications. But some experts say it merely relocates the backdoors, and a new coalition of industry, technology, and human rights groups has formed to fight the legislation.
Godwin says the government does understand the concerns, which is why the legislation is written the way it is.
“The government actually is aware that if they said outright what they really want, that you wouldn’t like it. And so what they do is, they’ve mushed it up a little bit by saying we’re not going to mandate stuff. We’re not going to require Apple or Samsung to build in insecurity, except that we maybe will,” he said.
“If you read through the legislation, you find that the exceptions eat the rule, eat the declared good intent.”
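To see why critics say the exceptions “eat the rule,” it helps to recall what end-to-end encryption guarantees. In the illustrative sketch below (not from the Bill, the article, or any particular messenger), the relay service only ever handles ciphertext, so a surveillance capability can’t be bolted on in the middle; it has to be inserted at an endpoint, which is exactly the built-in insecurity Godwin describes:

```python
# Illustrative only, not from the article or the Bill: in end-to-end
# messaging the relay server never holds the keys, so "lawful access"
# cannot be added in the middle without weakening an endpoint.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

def session_key(my_private, their_public):
    # Both sides derive the same 32-byte key from the X25519 shared secret.
    shared = my_private.exchange(their_public)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"message-key").derive(shared)

key_a = session_key(alice, bob.public_key())
key_b = session_key(bob, alice.public_key())
assert key_a == key_b  # the server relaying ciphertext never sees this key

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key_a).encrypt(nonce, b"meet at 7", None)
print(ChaCha20Poly1305(key_b).decrypt(nonce, ciphertext, None))  # b'meet at 7'
```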
According to Godwin, there’s nothing new in the governmental push for more power, but the digital world has changed the balance.
“For almost all of human history, it’s been impossible for governments or police agencies to know everything that was happening with you privately. If you wanted to have a private conversation with your mate, you would just walk down the road and be out of earshot … if you didn’t want to be seen talking to him, you could walk around the bend of the road so you were not visible.
“But now, because so much of our lives is digital and online, that is a real treasure trove, potentially not just for police agencies but also for any government administrative agency, for intelligence agencies. They want to build that snooper ability into your devices, and that seems inhumane, wrong, anti-democratic,” he said.
“The nature of democracies is that they rely on the idea of limits to what government can do, and you can’t abandon that. You have to stick with that, even if it’s uncomfortable, even if it means you can’t capture every bad guy by breaking into his iPhone.”
Related Coverage
Australian industry and tech groups unite to fight encryption-busting Bill
The new mega-group has called on Canberra to ditch its push to force technology companies to help break into their own systems.
Encryption Bill sent to joint committee with three week submission window
Fresh from rushing the legislation into Parliament, the government will ram its legislation through the Parliamentary Joint Committee on Intelligence and Security.
Home Affairs makes changes to encryption Bill without addressing main concerns
Services providers now have a defence to use if they are required to violate the law of another nation, and the public revenue protection clause has been removed.
Australia’s anti-encryption law will merely relocate the backdoors: Expert
If the Assistance and Access Bill becomes law as it stands, it could affect ‘every website that is accessible from Australia’ with relatively few constraints in the government’s powers.
Internet Architecture Board warns Australian encryption-busting laws could fragment the internet
Industry groups, associations, and people who know what they are talking about line up to warn of drawbacks from Canberra’s proposed Assistance and Access Bill.
Despite risks, only 38% of CEOs are highly engaged in cybersecurity (TechRepublic)
Business leaders believe AI and IoT will seriously impact their security plan, but they’re unsure how to invest resources to defend against new threats.
5 tips to secure your supply chain from cyberattacks (TechRepublic)
It’s nearly impossible to secure supply chains from attacks like the alleged Chinese chip hack that was reported last week. But here are some tips to protect your company.
IT staff systems/data access policy (Tech Pro Research)
IT pros typically have access to company servers, network devices, and data so they can perform their jobs. However, that access entails risk, including exposure of confidential information.
0 notes
Link
Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.
In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.
Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is a need to advance the issue within the government as well.
Sen. Harris at a recent hearing.
Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”
If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.
Algorithmic accountability
“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.
Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elizabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process, and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:
Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.
First, the technology may interpret her mannerisms less accurately than a white male candidate.
Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but may instead be artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.
Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.
If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.
“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.
A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a timeline of how the Commission plans to look into this by September 28.
Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.
“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”
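The testing and disclosure the senators describe can take a very simple form: disaggregate the system’s error rates by demographic group and report the gap. Here is a toy sketch with made-up data and group labels:

```python
# Toy sketch of the disaggregated testing the letters call for: report a
# system's accuracy per demographic group and the gap between groups.
# The data and group labels are made up for illustration.
import pandas as pd

# One row per verification attempt: the subject's demographic group and
# whether the system's decision matched ground truth.
log = pd.DataFrame({
    "group":   ["white_m", "white_m", "white_f", "white_f",
                "black_m", "black_m", "black_f", "black_f"],
    "correct": [True, True, True, False, True, False, False, False],
})

by_group = log.groupby("group")["correct"].mean()
print(by_group)  # accuracy disaggregated by group

gap = by_group.max() - by_group.min()
print(f"accuracy gap between best and worst group: {gap:.0%}")
```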
Another example is offered:
Consider, for example, a situation in which an African American woman in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.
Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.
Again, this is very far from fantasy. Onstage at Disrupt just a couple of weeks ago, Chris Atageka of UCOT and Timnit Gebru of Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.
The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, this workshop did not consider the potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and topic have advanced and it is in the interest of the agency and the people it serves to catch up.
The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will they assess and address potential biases? Will they issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).
Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?
“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.
The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.
“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.
The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?
The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.
These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.
You can find the letters in full below.
EEOC: SenHarris – EEOC Facial Rec… (Scribd embed)
FTC: SenHarris – FTC Facial Reco… (Scribd embed)
FBI: SenHarris – FBI Facial Reco… (Scribd embed)
via TechCrunch
0 notes