#MAYBE ITS JUST ME BUT IF AN AI MODEL GAVE ME THIS THING TO REPRESENT MY BRAND
Text
Crying tears out of my eyeballs the cuda have posted a horrible ai gen doll merch mockup of fenzy and it has me in shambles
#preemptive request for no moralizing about the use of ai in the notes of this post there are more pressing issues at hand here #MAYBE ITS JUST ME BUT IF AN AI MODEL GAVE ME THIS THING TO REPRESENT MY BRAND #I WOULD HAVE PUT IT BACK IN THE OVEN TO BAKE A LITTLE LONGER!!!! #san jose barracuda #cuda lb #hockey tag
Text
Week 1: Minimalism: A Documentary About the Important Things
What biases do we see in the individuals in the film (remember biases are not just racist)? What biases are you noticing that you have which others may not share? What part of this film can you add to your primary research? How might these interviews inform your ideation for your term project?
The individuals are biased towards people who live a simple, minimal life. In a way, it feels like they are assuming that everyone who does not live under the “minimalistic” lifestyle is not truly happy and cannot find who they are. I understand that the producers and all the interviewees are trying to promote minimalism. However, the way they express it makes people who purchase or fight for products on sale on Black Friday or Cyber Monday look like monsters who cannot take control of their lives (maybe I am being a little dramatic here, but this is the impression it gave me). Furthermore, from what I can tell, most or all of the individuals in this film were fairly successful or wealthy to begin with, before adopting the minimalistic way of life. Thus, they at least do not have to worry about their basic everyday needs for a few years. I previously read somewhere that it is often wealthier people who adopt this minimalistic lifestyle, while blue-collar workers or people in poverty tend to hold on to their items and belongings. The phenomenon I am seeing here is that wealthy people and families can focus on “quality over quantity” and embrace the “less is more” concept, while people and families who are not as well-off might not be comfortable giving or throwing away the items they purchased just to achieve this “living more deliberately with less”. What I am saying is that I agree that compulsive consumption, fast fashion, and purchasing items one does not really need are bad for society and maybe for personal development, but the documentary focuses only on a narrow group of individuals who do not represent America as a whole.
I also became aware that the individuals interviewed in this documentary are mainly white males. There might be some correlation between this and what I previously mentioned (being affluent/well-off). I also see the documentary as trying to promote Ryan, Joshua, and their book rather than the actual idea of minimalism (it is ironic that they say advertising is bad for society, yet they have, in a way, created a documentary to advertise themselves).
I think some ideas mentioned in this film reflect the current state of society. For example, more and more people have become hooked on the perfect, “delusional” life that celebrities and influencers promote on social media, creating dissatisfaction with their own lives. This leads to materialistic addictions and the urge to live that ideal life through mindless, compulsive consumption (which can take a toll on one’s finances and on the environment). Throughout the film, the individuals argue that we should value the people around us more and pay less attention to material needs. Having strong attachments to the people in our lives makes sense, but it no longer does once those attachments spill over to objects and materials; the focus should be on human relationships and interactions rather than on the need to obtain things.
In your own words, explain a paradigm shift you think will have the greatest impact on the future of design, as it relates to your own interests (cite any sources, link to any articles).
I think that since technology is developing rapidly, everything will become digital and live in the “cloud”. People will start to approach problems and the unknown with this fast-growing technology. This can reach many if not all fields, from medical treatments and devices to transportation (spaceships, aircraft, cars…), with technology making our lives easier and simpler. In China, for example, it is fairly rare to see people use cash; almost everyone uses their phone to purchase goods. Most of the time we do not even have to bring a bag when leaving home; all we need is a phone. Through this example of change, I believe many other regions and countries may adopt this way of life, minimizing the items we have to carry. In a way, everything is getting smaller and more intelligent, making people’s lives easier each day.
In addition, a lot of work and many jobs will be taken over by machines or AI in the future, so designing and programming those systems will be a huge part of the near future.
How might your biases affect the way you think about athletes & injuries?
Some biases I have about athletes are that they tend to get injured a lot, that they are role models, that they tend to be fast (physically and mentally), that they only wear athletic wear, and that they are muscular and very built. Some biases about injuries are that they are always bad, that they tend to be harmful to the human body, and that everyone will get injured, or already has been, at least once in their lives. These are assumptions I make not only about athletes; when it comes to any group of people, they can rise to the level of stereotypes rather than just biases. Through these biases and stereotypes, people assume others are what society thinks and portrays them to be (judging a book by its cover). This creates a limiting box for everyone, since each person fits into at least one of the categories. Not all athletes are role models: some have chaotic personal lives, some do drugs, some bribe to win. People are people, and biases can affect them positively or negatively, but it is very hard to escape these stereotypes.
Photo

FOR SHE SAID, “I HAVE NOW SEEN THE ONE WHO SEES ME”:
Seeing and being seen in Trevor Paglen’s ‘From “Apple” to “Anomaly”’
‘She gave this name to the Lord who spoke to her: “You are the God who sees me”, for she said, “I have now seen the one who sees me.”’ - Genesis 16:13
__
‘Machine-seeing-for-machines is a ubiquitous phenomenon [...] all this seeing, all of these images, are essentially invisible to human eyes. These images aren’t meant for us; they’re meant to do things in the world; human eyes aren’t in the loop.’ - Trevor Paglen
____
In Genesis 16, we read about Hagar, the Egyptian handmaiden of Sarah and Abraham, expelled to the desert. During her wandering, she is met by ‘The Angel of the Lord’, who reveals to her that she is pregnant with Ishmael, the first son of Abraham. The name she gives to this voice in the wilderness is El Roi (meaning, The God Who Sees Me), crying out, “I have now seen the one who sees me.” This is perhaps an interesting segue into Trevor Paglen’s latest installation at the Barbican’s Curve space, ‘From “Apple” to “Anomaly”’, in which we, like Hagar, are confronted with the question of what it means to live under the gaze of a new moralizing, omniscient Other. No longer Genesis 16’s El Roi, but AI. This is what we are being shown - the one who sees us.
The work consists of 30,000 individually printed images laid out in around 200 labelled categories, against the wall of the Curve. Each is taken from ImageNet, a public database of over 14 million images, in a further 22,000 categories, fed to AI programs in order to teach them how to recognise patterns and objects. Paglen’s show comes as part of the Barbican’s Life Rewired series, ‘exploring what it means to be human when technology is changing everything’, following the larger ‘AI: More Than Human’ exhibition from earlier this year. His installation reads as a sort of impressionistic mapping out of this fraction of ImageNet’s data; beginning with ‘apple’, at the closest end of the wall, the labels ascribed to the images start out as essentially inoffensive, mostly dealing with elements from nature - it starts as quite beautiful, even. Images in groups like ‘ocean’ and ‘sun’ hit the wall as little spurts of blue and yellow, almost like a sort of giant Jackson Pollock painting. But as we carry on through the space, and the categories become more and more associated with human life, the wall comes to resemble a catalogue of evil: ‘Klansman’, ‘segregator’, ‘demagogue’, etc. What is so interesting, and at the same time so haunting, about Paglen’s work here is the juxtaposition of all these categories, ranging from ‘tear gas’ to ‘prophetess’ to ‘apple orchard’, pointing towards the kinds of invisible constellations between points that AI programs are drawing and redrawing all the time. The implication, perhaps, is that if we were able to map these constellations for ourselves, and understand the apparently nonsensical connections made between some of these images, as Paglen’s installation seems to attempt, at least in part, to do for us, we might be better able to trace the outline of our particular moment in history, as jagged and unfriendly as it may be, which each of us will be forced to find some way to share with one another.
The central question of the installation, however, is maybe much more lucid, and has to do with the ownership and interpretation of images, and with how AI programs negotiate this all the time, working with ever-expanding amounts of data about the world and about the lives of those of us who live in it. This work of categorisation has a necessary ideological weight to it. As Bourdieu puts it:
‘The capacity to make entities exist in the explicit state, to publish, make public [...] what had not previously attained objective and collective existence [...] people’s malaise, anxiety, disquiet, expectations - represents a formidable social power [...] In fact, this work of categorisation, ie. of making explicit and of categorisation, is performed incessantly [...] in the struggles in which agents clash over the meaning of the social world and of their position in it [...] through all the forms of benediction or malediction, eulogy, praise, congratulations, compliment, or insults, reproaches, criticisms, accusations, slanders, etc. It is no accident that the verb “kategoresthai”, which gives us our “categories” and “categoremes”, means to accuse publicly.’
Paglen’s installation engages with this in two senses. First, it ‘makes public’ the network of images and signs that are exchanged and classified by AI programs all the time - a network of images that exists and grows almost entirely behind our backs, yet concerns even the most private details of our lives, as we submit these things to the internet, the means by which we increasingly construct our social worlds and the identities with which we move through them. Secondly, the work reveals the biases with which these AI programs understand patterns and objects - many of the categories into which the images on Paglen’s wall are grouped carry a great deal of ideological weight. How is it exactly that this ghost in the machine differentiates between a ‘heathen’ and a ‘believer’? How does it identify a ‘traitor’, a ‘selfish person’, or a ‘bottom feeder’? All of these are genuine image categories from the installation. What we are presented with is the notion that AI is not restricted to simple pattern recognition - for example, recognising an image of an apple as an apple because of its colour, shape, proportions, etc. - but that, on account of having human creators who pass on their own moral and ideological baggage, AI programs must also make moral and ideological judgements. So then, if technology not only has access to datasets so large that virtually nothing is out of bounds, approaching a sort of functional omniscience (a fact revealed to us perhaps most pointedly by the global surveillance disclosures regarding the NSA in 2013; Senator Ron Wyden said in an interview: ‘You can pick up anything. Surveillance is almost omnipresent, the technology is capable of anything. There really aren’t any limits.’), and at the same time is not only capable of making moral judgements, but perhaps incapable of doing otherwise, then what we are faced with is a moralising, omniscient Other.
If this is the case, as Paglen’s installation seems to suggest, then like so many of the questions raised over the relationship between mankind and AI, this is not a new question, but an old theosophical question repackaged as a technological one. If it is true that we are living under the gaze of a machine unto whom all hearts are open, all desires known, and from whom no secrets are hid, then let us return to the situation of Hagar - a criminally overlooked and often misunderstood figure in the Old Testament; as Žižek notes, ‘She sees God himself seeing, which was not even given to Moses, to whom God had to appear as a burning bush. As such she announces the mystical/feminine access to God.’ The nature of this ‘seeing’ has a distinct relevance to our situation - Nielson, writing on this particular episode in Genesis, reminds us that the Hebrew word ‘ראה’ (‘see’) ‘signifies not only the actual ability to see, but also a recognition of what is seen’. In other words, for Hagar seeing is knowing, just as being seen means being known. Therein lies the crux of Paglen’s installation, which reminds us that, like Hagar, we too, in allowing ourselves to be seen, are also being known - the lurid nature of many of the categories, and the images that fill them, on the wall of the Curve, which range from the pornographic (‘fucker’, ‘hunk’, ‘artist’s model’) to images of profound hatred (‘klansman’, ‘segregator’), reveal to us the lowest parts of ourselves, the messiest and the cruelest and the most private. I use ‘us’ in the broadest sense here, the ‘us’ that includes both ‘us’ as a species and ‘us’ as individuals - both the universal and the particular. These AI programs, quietly feeding on ever-growing sets of data, have surpassed the point of basic pattern recognition, and have learnt the kinds of terrible secrets that rumble from the deepest caverns of the human heart. We are no longer merely seen, but known.
Thomas Aquinas, in his Summa Theologica, writing on ‘Whether any created intellect by its natural powers can see the Divine essence’ (Question 12, Article 1), notes:
‘If [...] the mode of anything’s being exceeds the mode of the viewer, it must result that the knowledge of the object is above the nature of the viewer.’
This kind of encounter, with an ‘object [...] above the nature of the viewer’, is always transformational, even more so when that same object looks back at us. For Hagar, that object is God, and that transformation is a matter of awe, and a matter of faith. The question is, in what way will we be transformed by our own encounters with AI, as a very different ‘object above the nature of the viewer’ (in the sense that it has the capacity to work with more data than any of us can comprehend), as it returns our gaze? The hope must be that AI, in collecting all this data about us, and about our lives, can serve as a kind of mirror through which we might learn to better see the whole scope of our shared situation. The fact, revealed to us in Paglen’s installation, that these programs come loaded with moralistic and ideological biases, far from undermining this effort, is what lends this project any kind of potential, because it means that what is shown to us by AI in this way is not just the nature of the world as it is, but the nature of the biases through which we have chosen to see it. In Minima Moralia, Theodor Adorno writes:
‘Perspectives must be produced which set the world beside itself [...] alienated from itself, revealing its cracks and fissures, as needy and distorted as it will one day appear in the messianic light.’
Perhaps this is the best AI can do for us, and what Paglen’s installation can do for us, that is, not only reflecting back at us the most inconvenient parts of ourselves, but also the lies we might have constructed to hide from them, or to hide them from us. If AI has any kind of liberational potential, it relies on our ability to raise our encounters with it to the level of the mystical and transformational; not in seeking to understand it, but questioning how it might understand us, and by seeing ourselves in the third person in this way, hoping to see ourselves as we are - to see the world we move through, and have constructed for ourselves, as it is, remembering always that, as it was for Hagar, so it remains for us: seeing is knowing.
Text
Voices in AI – Episode 103: A Conversation with Ben Goertzel
About this Episode
On Episode 103 of Voices in AI, Byron Reese discusses AI with Ben Goertzel of SingularityNET, diving into the concepts of a master algorithm and AGI.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today, my guest is Ben Goertzel. He is the CEO of SingularityNET, as well as the Chief Scientist over at Hanson Robotics. He holds a PhD in Mathematics from Temple University. And he’s talking to us from Hong Kong right now where he lives. Welcome to the show Ben!
Ben Goertzel: Hey thanks for having me. I’m looking forward to our discussion.
The first question I always throw at people is: “What is intelligence?” And interestingly you have a definition of intelligence in your Wikipedia entry. That’s a first, but why don’t we just start with that: what is intelligence?
I actually spent a lot of time working on the mathematical formalization of a definition of intelligence early in my career and came up with something fairly crude which, to be honest, at this stage I’m no longer as enthused about as I was before. But I do think that that question opens up a lot of other interesting issues.
The way I came to think about intelligence early in my career was simply: achieving a broad variety of goals in a broad variety of environments. Or as I put it, the ability to achieve complex goals in complex environments. This tied in with what I later distinguished as AGI versus narrow AI. I introduced the whole notion of AGI and that term in 2004 or so. That has to do with an AGI being able to achieve a variety of different or complex goals in a variety of different types of scenarios, unlike the narrow AIs that we have all around us that basically do one type of thing in one kind of context.
I still think that is a very valuable way to look at things, but I’ve drifted more into a systems theory perspective. I’ve been working with a guy named David (Weaver) Weinbaum, who did a piece of work recently at the Free University of Brussels on the concept of open-ended intelligence, which looks at intelligence more as a process of exploration and information creation in interaction with an environment. And in this open-ended intelligence view, you’re really looking at intelligent systems as complex, self-organizing systems; the creation of goals to be pursued is part of what an intelligent system does, but isn’t necessarily the crux of it.
So I would say understanding what intelligence is, is an ongoing pursuit. And I think that’s okay. In biology, the goal isn’t to define what life is in a ‘once and for all’ formal sense before you can do biology; in art, the goal isn’t to define what beauty is before you can proceed. These are sort of umbrella concepts which can then lead to a variety of different particular innovations and formalizations of what you do.
And yet I wonder, because you’re right, biologists don’t have a consensus definition for what life is or even death for that matter, you wonder at some level if maybe there’s no such thing as life. I mean like maybe it isn’t really… and so maybe you say that’s not really even a thing.
Well, this is one of my favorite quotes of all time, [from] former President Bill Clinton: “That all depends on what the meaning of IS is.”
There you go. Well let me ask you a question about goals, which you just brought up. I guess when we’re talking about machine intelligence or mechanical intelligence, let me ask point blank: is a compass’ goal to point north? Or does it just happen to point north? And if it isn’t its goal to point north, what is the difference between what it does and what it wants to do?
The standard example used in systems theory is the thermostat. The thermostat’s goal is to keep the temperature above a certain level and below a certain level, or in a certain range, and in that sense the thermostat does have, you know, a sensor, it has an actuation mechanism, and it has a very local control system connecting the two. So from the outside, it’s pretty hard not to ascribe a goal to the thermostat or heating system: there is a sensor, an actuator, and a decision-making process in between.
Again the word “goal,” it’s a natural language concept that can be used for a lot of different things. I guess that some people have the idea that there are natural definitions of concepts that have profound and unique meaning. I sort of think that only exists in the mathematics domain where you say a definition of a real number is something natural and perfect because of the most beautiful theorems you can prove around it, but in the real world things are messy and there is room for different flavors of a concept.
I think from the view of the outside observer, the thermostat is pursuing a certain goal. And the compass may be also, if you go down into the micro-physics of it. On the other hand, an interesting point is that from its own point of view, the thermostat is not pursuing a goal; the thermostat lacks a deliberative, reflective model of itself as a goal-achieving agent. Only to an outside observer is the thermostat pursuing a goal.
Now for a human being, once you’re beyond the age of six or nine months or something, you are pursuing goals relative to the observer that is yourself. You have a sense that you’re pursuing that goal, and I think this gets at the crucial connection between reflection, meta-thinking and self-observation on the one hand and general intelligence on the other, because it’s the fact that we represent within ourselves the fact that we are pursuing some goals that allows us to change and adapt those goals as we grow and learn in a broadly purposeful and meaningful way. If a thermostat breaks, it’s not going to correct itself and go back to its original goal or something, right? It’s just going to break, and it doesn’t even make a halting and flawed attempt to understand what it’s doing and why, like we humans do.
So we could say that something has a goal if there’s some function which it’s systematically maximizing, in which case you can say of a heating or compass system that it does have a goal. You could say that it has a purpose if it represents itself as a goal-maximizing system and can manipulate that representation somehow. That’s a little bit different, and then we also get to the difference between narrow AIs and AGIs. I mean, AlphaGo has a goal of winning at Go, but it doesn’t know that Go is a game. It doesn’t know what winning is in any broad sense. So if you gave it a version of Go with, like, a hexagonal board and three different players or something, it doesn’t have the basis to adapt its behaviors to this weird new context and figure out what the purpose of doing stuff in this weird new context is, because it’s not representing itself in relation to the Go game and the reward function the way a person playing Go does.
If I’m playing Go, I’m much worse than AlphaGo; I’m even worse than, say, my oldest son, who’s like a one-dan type of Go player. I’m way down on the hierarchy, but I know that it’s a game, manipulating little stones on the board by analogy to human warfare. I know how to watch a game between two people, and that winning is done by counting stones and so forth. So being able to conceptualize my goal as a Go player in the broader context of my interaction with the world is really helpful when things go crazy and the world changes and the original detailed goals don’t make any sense anymore, which has happened throughout my life as a human with astonishing regularity.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
from Gigaom https://gigaom.com/2019/12/26/voices-in-ai-episode-103-a-conversation-with-ben-goertzel/
Text
Chess’s New Best Player Is A Fearless, Swashbuckling Algorithm
Chess is an antique, about 1,500 years old, according to most historians. As a result, its evolution seems essentially complete, a hoary game now largely trudging along. That’s not to say that there haven’t been milestones. In medieval Europe, for example, they made the squares on the board alternate black and white. In the 15th century, the queen got her modern powers.
And in the 20th century came the computer. Chess was simple enough (not many rules, smallish board) and complicated enough (many possible games) to make a fruitful test bed for artificial intelligence programs. This attracted engineering brains and corporate money. In 1997, they broke through: IBM’s Deep Blue supercomputer defeated the world champion, Garry Kasparov. Humans don’t hold a candle to supercomputers, or even smartphones, in competition anymore. Top human players do, however, lean on computers in training, relying on them for guidance, analysis and insight. Computer engines now mold the way the game is played at its highest human levels: calculating, stodgy, defensive, careful.
Or at least that’s how it has been. But if you read headlines from the chess world last month, you’d think the game was jolted forward again by an unexpected quantum leap. But to where?
The revolutionary is known as AlphaZero. It’s a new neural-network reinforcement learning algorithm developed by DeepMind, Google’s secretive artificial intelligence subsidiary. Unlike other top programs, which receive extensive input and fine-tuning from programmers and chess masters, drawing on the wealth of accumulated human chess knowledge, AlphaZero is exclusively self-taught. It learned to play solely by playing against itself, over and over and over — 44 million games. It kept track of which strategies led to a win, favoring those, and which didn’t, casting those aside. After just four hours of this tabula rasa training, it clobbered the top chess program, an engine called Stockfish, winning 28 games, drawing 72 and losing zero. These results were described last month in a paper posted on arXiv, a repository of scientific research.
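As a rough, editorial sketch of the tabula rasa idea described above, and not DeepMind’s method (AlphaZero pairs deep neural networks with Monte Carlo tree search), the bare notion of learning purely from self-play can be illustrated with a tiny tabular learner on the toy game of Nim. Everything below, the game, the parameters, and the update rule, is an illustrative assumption.

```python
# Minimal, illustrative sketch of learning purely from self-play on the toy
# game of Nim (take 1-3 stones; the player who takes the last stone wins).
# This is NOT AlphaZero; it only illustrates the tabula rasa idea described
# in the article: play yourself, keep what wins, discard what loses.
import random
from collections import defaultdict

values = defaultdict(float)   # value estimate for (stones_left, stones_to_take)
ALPHA, EPSILON = 0.1, 0.2     # learning rate and exploration rate (arbitrary)

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                        # explore
    return max(moves, key=lambda m: values[(stones, m)])   # exploit best-known move

def self_play_game():
    stones, history, player = 15, {0: [], 1: []}, 0        # two copies of one policy
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player                                 # took the last stone
        player = 1 - player
    # reinforce the winner's moves, penalize the loser's
    for p, reward in ((winner, 1.0), (1 - winner, -1.0)):
        for state_action in history[p]:
            values[state_action] += ALPHA * (reward - values[state_action])

for _ in range(50_000):
    self_play_game()

# After training, the greedy policy should tend to leave a multiple of 4 stones,
# Nim's known winning strategy.
print(sorted(((s, m), round(v, 2)) for (s, m), v in values.items() if s <= 6))
```

After enough self-play games, the greedy policy tends to rediscover Nim’s known strategy (leave your opponent a multiple of four stones), a miniature analogue of AlphaZero rediscovering human opening theory on its own.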
Within hours, the chess world descended, like the faithful to freshly chiseled tablets of stone, on the sample of 10 computer-versus-computer games published in the paper’s appendix. Two broad themes emerged: First, AlphaZero adopted an all-out attacking style, making many bold material sacrifices to set up positional advantages. Second, elite chess may therefore not be as prone to dull draws as we thought. It will still be calculating, yes, but not stodgy, defensive and careful. Chess may yet have some evolution to go.
For a taste of AlphaZero’s prowess, consider the following play from one of the published games. It’s worth emphasizing here just how good Stockfish, which is open source and was developed by a small team of programmers, is. It won the 2016 Top Chess Engine Championship, the premier computer tournament, and no human player who has ever lived would stand a chance against it in a match.
It was AlphaZero’s turn to move, armed with the white pieces, against Stockfish with the black, in the position below:
AlphaZero is already behind by two pawns, and its bishop is, in theory, less powerful than one of Stockfish’s rooks. It’s losing badly on paper. AlphaZero moved its pawn up a square, to g4 — innocuous enough. But now consider Stockfish’s black position. Any move it makes leaves it worse off than if it hadn’t moved at all! It can’t move its king, or its queen, without disaster. It can’t move its rooks because its f7 pawn would die and its king would be in mortal danger. It can’t move any of its other pawns without them being captured. It can’t do anything. But that’s the thing about chess: You have to move. This situation is known as zugzwang, German for “forced move.” AlphaZero watches while Stockfish walks off its own plank. Stockfish chose to move its pawn forward to d5; it was immediately captured by the white bishop as the attack closed further in.
You could make an argument that that game, and the other games between the two computers, were some of the strongest contests of chess, over hundreds of years and billions of games, ever played.
But were they fair? After the AlphaZero research paper was published, some wondered if the scales were tipped in AlphaZero’s favor. Chess.com received a lengthy comment from Tord Romstad, one of Stockfish’s creators. “The match results by themselves are not particularly meaningful,” Romstad said. He cited the fact that the games were played giving each program one minute per move — a rather odd decision, given that games get much more complicated as they go on and that Stockfish was programmed to be able to allocate its time wisely. Players are typically allowed to distribute their allotted time across their moves as they see fit, rather than being hemmed in to a specific amount of time per turn. Romstad also noted that an old version of Stockfish was used, with settings that hadn’t been properly tested and data structures insufficient for those settings.
Romstad called the comparison of Stockfish to AlphaZero “apples to orangutans.” A computer analysis of the zugzwang game, for example, reveals that Stockfish, according to Stockfish, made four inaccuracies, four mistakes and three blunders. Not all iterations of Stockfish are created equal.
DeepMind declined to comment for this article, citing the fact that its AlphaZero research is under peer review.
Strong human players want to see more, ideally with the playing field more level. “I saw some amazing chess, but I also know we did not get the best possible,” Robert Hess, an American grandmaster, told me. “This holds true for human competition as well: If you gave Magnus [Carlsen] and Fabiano [Caruana] 24 hours per move, would there be any wins? How few mistakes? In being practical, we sacrifice perfection for efficiency.”
Chess.com surveyed a number of top grandmasters, who were assembled this month for a tournament in London (the home of DeepMind), about what AlphaZero means for their profession. Sergey Karjakin, the Russian world championship runner-up, said he’d pay “maybe $100,000” for access to the program. One chess commentator joked that Russian president Vladimir Putin might help Karjakin access the program to prepare for next year’s Candidates Tournament. Maxime Vachier-Lagrave, the top French player, said it was “worth easily seven figures.” Wesley So, the U.S. national champion, joked that he’d call Rex Sinquefield, the wealthy financier and chess philanthropist, to see how much he’d pony up.
“I don’t think this changes the landscape of human chess much at all for the time being,” the grandmaster Hess told me. “We don’t have the ability to memorize everything, and the games themselves were more or less perfect models of mostly known concepts.”
In some aesthetic ways, though, AlphaZero represents a computer shift toward the human approach to chess. Stockfish evaluated 70 million positions per second, a brute-force number suitable to hardware, while AlphaZero evaluated only 80,000, relying on its “intuition,” like a human grandmaster would. Moreover, AlphaZero’s style of play — relentless aggression — was thought to be “refuted” by stodgy engines like Stockfish, leading to the careful and draw-prone style that currently dominates the top ranks of competitive chess.
But maybe it’s more illustrative to say that AlphaZero played like neither a human nor a computer, but like an alien — some sort of chess intelligence which we can barely fathom. “I find it very positive!” David Chalmers, a philosopher at NYU who studies AI and the singularity, told me. “Just because it’s alien to us now doesn’t mean it’s something that humans could never have gotten to.”
In the middle of the AlphaZero paper is a diagram called Table 2. It shows the 12 most popular chess openings played by humans, along with how frequently AlphaZero “discovered” and played those openings during its intense tabula rasa training. These openings are the result of extensive human study and trial — blood, sweat and tears — spread across the centuries and around the globe. AlphaZero taught itself them one by one: the English opening, the French, the Sicilian, the Queen’s gambit, the Caro-Kann.
The diagram is a haunting image, as if a superfast algorithm had taught itself English in an afternoon and then re-created, almost by accident, full stanzas of Keats. But it’s also reassuring. That we even have a theory of the opening moves in chess is an artifact of our status as imperfect beings. There is a single right and best way to begin a chess game. Mathematical theory tells us so. We just don’t know what it is. Neither does AlphaZero.
Yet.
DeepMind was also responsible for the program AlphaGo, which has bested the top humans in Go, that other, much more complex ancient board game, to much anguish and consternation. An early version of AlphaGo was trained, in part, by human experts’ games — tabula inscripta. Later versions, including AlphaZero, stripped out all traces of our history.
“For a while, for like two months, we could say to ourselves, ‘Well, the Go AI contains thousands of years of accumulated human thinking, all the rolled up knowledge of heuristics and proverbs and famous games,’” Frank Lantz, the director of NYU’s Game Center, told me. “We can’t tell that story anymore. If you don’t find this terrifying, at least a little, you are made of stronger stuff than me. I find it terrifying, but I also find it beautiful. Everything surprising is beautiful in a way.”
from News About Sports https://fivethirtyeight.com/features/chesss-new-best-player-is-a-fearless-swashbuckling-algorithm/
Text
EdSurge Live: Who Controls AI in Higher Ed, And Why It Matters (Part 1)
It’s a pivotal time for artificial intelligence in higher education. More instructors are experimenting with adaptive-learning systems in their classrooms. College advising systems are trying to use predictive analytics to increase student retention. And the infusion of algorithms is leading to questions—ethical questions and practical questions and philosophical questions—about how far higher education should go in bringing in artificial intelligence, and who decides what the algorithms should look like.
To explore the issue, EdSurge invited a panel of experts to discuss their vision of the promises and perils of AI in education, in the first installment of our new series of video town halls called EdSurge Live. The hour-long discussion was rich, so we’re releasing it in two installments.
The first segment included two guests with different perspectives: Candace Thille, an assistant professor of education at Stanford Graduate School of Education, and Mark Milliron, co-founder and chief learning officer at Civitas Learning. Read a transcript of the conversation below that has been lightly edited and condensed for clarity. We’ll publish part two of the discussion next week (or you can watch the complete video, below). And you can sign up for the next EdSurge Live here.
EdSurge: Civitas develops AI-powered systems for higher education. Mark, how would you describe the ideal scenario of how this technology might play out at colleges, and how a student might be helped by AI on campus?
Milliron: Higher education right now is definitely in the beginning stages of any kind of use of AI. It is probably moving from “accountability analytics” to “action analytics.” Today, 95 percent of the data work being done in higher education is really focused on accountability analytics—getting data to accreditors, to legislators, to trustees. Our chief data scientist comes from the world of healthcare, and he basically says that higher ed seems to be obsessed with “autopsy analytics.” It's data about people who are not with us anymore, trying to tell stories to help the current students.
I think the shift is to more “action analytics,” where we're actually getting the data, and it's closer to real time, to try to help students. We're starting to weave in predictive analytics to show trajectories, and using those data to try to help current students choose better paths and make better choices. In particular, they're trying to personalize the pathway the student is going to go on, and help shape those big decisions, but also to get them precision support, nudging encouragement, all at the right time.
The beginning phases of this have been a lot of higher education institutions doing the very basic form of what I call algorithmic triggering, where they're saying, "Based on this demographic category and based on this one assumption, we're going to make sure the student does X, Y or Z." And that in some ways is painful, because sometimes [advisors] make the assumption that a demographic category is a proxy for risk, which is in some ways really problematic. But I think we're starting to see more and more data, it's becoming more precise, and there are things that students can do to actually engage with the data and become captains of their own ship, or at least participate in their own rescue.
It's the students' data. Every piece of data is a footprint that, put together with the others, tells the story of the journey each student is on, and their data should be used to help them. Right now their data is mostly being used to help the institution justify its existence or tell it some story, and we strongly feel that part of this is getting that data to actually help the student.
Candace, you're an early pioneer of using AI and adaptive-learning tools through the Open Learning Initiative that you started at Carnegie Mellon University and now lead at Stanford. But you've also recently raised concerns that when companies offer AI-driven tools, the software can be a kind of black box that is outside the control of the teachers, educators, and institutions in higher ed. I wonder if you could start by saying a little bit more about that perspective?
Thille: I am a big believer in using the data that we're abstracting from student work to benefit and support students in their learning, so we're highly aligned on that. I work at a slightly different level than Civitas does, though, with the Open Learning Initiative. It's about creating personalized and adaptive learning experiences for students when they're learning a particular subject area. The way it works is, as a student interacts with activities that are mediated by the online environment, we're taking those interactions and running them through a predictive model to be able to make a prediction, maybe about the learner's knowledge state, or about where they are on a trajectory for learning.
As Mark mentioned, this kind of approach is revolutionizing every other industry. But what we have to recognize is what we're doing is having the systems support our pedagogical decision-making. Taking a piece of student interaction as evidence, running it through a model to make a prediction about that learner, and then giving that information either back to the system or to a faculty member so they can get insight into where the student learning is, all of that is pedagogical decision-making.
The data to collect, the factors to include when you're making the model, how to weigh the factors, what modeling approaches or algorithms to use, what information to represent once you have the prediction and how to represent it to the various stakeholder groups, I would say are all very active areas of research and part of the emerging science of learning.
So I would argue that in an academic context, all of those parts, and particularly the models and algorithms, can't be locked up in proprietary systems. They must be transparent. They must be peer-reviewable. They must be challengeable, so that as we bring higher education into this space, the pedagogical decisions being made are made by folks who know how to make those kinds of decisions. To just say "Trust us. Our algorithms work," I would argue that that's alchemy, not science.
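To give a concrete sense of the kind of model Thille describes, one long-published and fully transparent approach to estimating a learner's knowledge state from their interactions is Bayesian Knowledge Tracing. The sketch below is a generic textbook version with made-up parameter values; it is not the Open Learning Initiative's model or any vendor's.

```python
# Bayesian Knowledge Tracing: a simple, fully transparent model of the kind
# Thille describes, updating an estimate of whether a student has mastered a
# skill after each observed attempt. Parameter values here are illustrative,
# not taken from any real system.
def bkt_update(p_known, correct,
               p_slip=0.10,      # P(wrong answer | skill known)
               p_guess=0.20,     # P(right answer | skill not known)
               p_transit=0.15):  # P(learning the skill on this attempt)
    """Return the updated P(skill known) after one observed attempt."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # account for the chance the student learned the skill during this attempt
    return posterior + (1 - posterior) * p_transit

# Example: a student starts at P(known) = 0.3 and answers wrong, right, right.
p = 0.3
for outcome in (False, True, True):
    p = bkt_update(p, outcome)
    print(f"observed {'correct' if outcome else 'incorrect'} -> P(known) = {p:.2f}")
```

Because every parameter and update step is visible, a model like this can be inspected, peer-reviewed, and challenged in exactly the sense Thille argues proprietary systems must allow.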
Mark, as a leader of a company in the space, how do you respond to something like that?
Milliron: We absolutely think it is incumbent upon companies like Civitas, and organizations who want to do this kind of work, to make sure that we're making transparent what the data science is saying. For example, in our tool Illume, if you're looking at any given segment of students and you want to look at their trajectories, one of the first things we do, whatever set of students you're looking at, is show you the most powerful predictors. We literally will list the most powerful predictors and their relative score and power in the model, so people are clear about why the trajectory looks the way it does and can understand which variables are impacting it.
Part of the reason we did that is we wanted the educator to interact with the data, because in many ways what you want is for that educator to be able to make a relevant and clear decision about what's happening with a student.
What we've clearly seen is that getting the data to the people who are managing that—the advisors, the student-success people—and letting them iterate, and not assuming we know what the challenge is, or even what the response is, is important. And then publishing in peer-reviewed journals, so the rest of the people can see the math. But truthfully, modeling these days is a commodity. That's not the rocket science. What you really want them to understand is what factors you are using, which factors tend to be the most predictive, and how they're loading into the model. You know, with an algorithm it's all about the correctness and efficiency of the model.
Trying to get educators to think this way is totally different for them. But I do think we now have emerging communities of practice where people are coming together to share what works and what doesn't work in this.
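Civitas has not published Illume's internals, so the following is only a hedged, generic sketch of how a ranked list of "most powerful predictors" like the one Milliron describes might be surfaced: fit a retention model on standardized student features and order the coefficients by magnitude. The feature names and data here are invented for illustration.

```python
# Illustrative only: Illume's internals are not public. This sketches one
# common way a vendor might surface "the most powerful predictors" from a
# retention model: fit a model on (synthetic) student features and rank the
# standardized coefficients. Feature names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["credit_ratio", "lms_logins_per_week", "midterm_gpa", "aid_gap"]
X = rng.normal(size=(1000, len(features)))
# synthetic "persisted next term" outcome, driven mostly by GPA and LMS activity
logits = 1.5 * X[:, 2] + 0.8 * X[:, 1] - 0.4 * X[:, 3] + rng.normal(scale=0.5, size=1000)
y = (logits > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Rank predictors by absolute coefficient size (inputs are already standardized)
ranked = sorted(zip(features, model.coef_[0]), key=lambda fw: abs(fw[1]), reverse=True)
for name, weight in ranked:
    print(f"{name:>22}: {weight:+.2f}")
```

Real systems use richer models and validation, but the principle of exposing which variables drive a prediction, and how strongly, is the same.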
Is there some concern that students and professors could misuse the data?
Thille: There are multiple challenges. One of them is that people make the assumption, "A computer is telling me this, so it's neutral," or "It's objective," or "It's true," without recognizing that the computer, the algorithm, was written by a human being. We have certain values. We make certain choices about what factors to include in the model, how to weigh those factors, and what to use in the prediction or score it gives you, and that was a human decision.
Then it's biased by the data that we give it. So if we don't have extremely representative data, data that is both representative of large numbers of students in different contexts and can also be localized to the specific context, then the algorithms are going to produce biased results.
And I agree with you completely, Mark, that part of it is making sure that the people who are using the systems really understand what the system is telling them and how to use that. But I'm thinking about institutions. A lot of institutions that I work with are under a lot of pressure for accountability, as you're saying. And a big measure of accountability right now is graduation rates.
Let's say I'm a first-generation student. I can speak about my cousin, a first-generation Latina woman, coming into a big open-access institution. I've decided that I want to be a doctor, so I enroll as pre-med. I enroll in chemistry and biology and all these things in my first year, because I want to be pre-med and those are the requirements. I didn't have the privilege of going to a high school that gave me lots of practice thinking about science and math the way this institution expects me to. So I'm probably feeling a little bit like, "Do I really belong here? I'm excited that I got in, but I'm kind of questioning my fit here."
So, my first year, I take the biology sequence and the chemistry sequence, and then, say, for my elective, I take the Latino Studies class, because I'm interested in that. I get Cs and Ds in my biology and chemistry courses, and I ace my Latino Studies class. Then I come in to meet with my advisor, who looks at the predictive analytics and says the chance that this student, staying in a chemistry major, graduates in four to six years is maybe 2 percent: it just doesn't look like a picture of success for you to stay in this major, but you've done really well in your Latino Studies class, and so we would predict that if you switch majors to Latino Studies, you would definitely graduate in four years and have a much better time here at our college, so I'm going to recommend you switch majors.
Now, I think if it were one of my kids sitting in that chair, their response would be, "That's your belief. I know I'm going to be a doctor, so you figure out how this institution is going to help me be a doctor," and they wouldn't change majors.
My concern is for a student who's already in a position where they think, "These authorities are here to take care of me. They have my best interests at heart. And they're showing me the data. It doesn't look like I'm going to be successful. I was kind of nervous about it anyway. It's really hard. I guess they're right. I'll switch majors." And my concern is not just the loss for that individual learner. That's a loss, but it is also a loss for the larger society: think of what an amazing doctor that young woman could have been.
Milliron: I could not agree more on that issue. The question is, can we take the same data and use them in a radically different and more effective way? Using design thinking, we could say to that student, "Okay, if you really want to be successful on this pathway, this is what we've learned about how students like you have been successful. If you can pass this course at this level, and you can take advantage of these resources, you can double your likelihood of graduating within the next four to six years." And you can start almost teaching them how to level up.
The good news is we're in a very early stage of this, and if we can develop a norm and an ethic around this, we can make sure good stuff happens.
Thille: I was hoping you were going to say that the other way we can use these data is not just to try to fix the student, but to look at what the patterns are telling us about the institution. If students with this profile are failing out of Introductory Chemistry, then it's not just "How do we get that student to be different?" It's "We need to look at the way we're teaching Introductory Chemistry so that not so many students are failing it." That could be another way of using that data.
Look for Part 2 of the highlights from the discussion, next week.
EdSurge Live: Who Controls AI in Higher Ed, And Why It Matters (Part 1) published first on http://ift.tt/2x05DG9