#faceosc
Text
Settling on the final idea
Today was mainly about trying out different settings. How does it sound when a person is placed further back? What happens when an additional sound is used to create an atmosphere? How can we realise and explain our idea?
We asked ourselves these and other questions and put them to the test. It became even clearer to us that SonoBus, Ableton and Zoom together need an enormous amount of bandwidth. This made it difficult, especially for Ramona and to some extent for Mai, to stay connected throughout. Nevertheless, or precisely because of this, a few funny audio recordings came out of it, in which Gian and Nicola hold a conversation while another voice sounds from very far away.
We are focusing more and more on the positioning of the people and less on switching the music on and off with FaceOSC. The example below shows four people in different arrangements.
We also discussed how to proceed with our final product and further user tests, which produced interesting ideas and suggestions.
For our user tests we will create four audio files in which we try out the seating arrangements shown above and send them to different people. In the recordings we talk about something and include the test person. They should listen to the files with headphones and tell us, for each file, at which position they think they are standing.
We also thought it would be interesting to hear the same conversation from different perspectives. How does a dialogue sound in a room when you stand in one spot rather than another? To implement this, we wanted to record a dialogue, have one audio track per person, and then arrange these tracks in different positions. We created a diagram to simplify the positioning in Ableton. For the positioning we used the E4L Source Panner.
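To make the positioning idea concrete, here is a minimal sketch of the general principle of placing a source between two channels by splitting its gain (an equal-power pan law in plain C++); this only illustrates the idea and is not how the E4L Source Panner itself works:
```cpp
#include <cmath>
#include <cstdio>

// azimuth: -1 = hard left, 0 = centre, +1 = hard right (an assumed convention)
void equalPowerPan(float azimuth, float& gainL, float& gainR) {
    const float halfPi = 1.5707963f;
    const float theta  = (azimuth + 1.0f) * 0.5f * halfPi;  // map -1..1 onto 0..pi/2
    gainL = std::cos(theta);  // more signal on the left when the source sits left
    gainR = std::sin(theta);
}

int main() {
    float l = 0, r = 0;
    equalPowerPan(-0.5f, l, r);             // a voice placed slightly to the left
    std::printf("L %.2f  R %.2f\n", l, r);  // prints roughly: L 0.92  R 0.38
    return 0;
}
```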
Photo
I'm still too boring.... a long way to go to be as good as Jeffree Starr 🌟 . . . #jeffreestar #iceleb #interactiveArt #art #coding #computer #ai #faceOsc #faceGame #robot #internetcelebrity #youtubers #charming #me #technology #connect (at DTLA Artwalk) https://www.instagram.com/p/BrReJGRjaY5/?utm_source=ig_tumblr_share&igshid=1ik8kds3onds3
Text
Tools
I am planning to use:
- facial recognition applications, such as FaceOSC
- a camera/webcam to track the person
- Processing to code in
Video
vimeo
MEGTAR - Facial Direction as a Switch
Using TouchDesigner & FaceOSC
Video
vimeo
ER (Emotional Reality) is a speculative technology arising from augmented technologies (such as AR) for the purpose of augmenting empathetic inter-human relationships. Using #EMS (Electric Muscle Stimulation), facial recognition (#faceosc), #maxmsp and #arduino technologies, one person’s facial gestures are imposed onto another person by forcibly contracting their facial muscles with electricity. As facial gestures are closely linked to emotional states, ER allows one user to empathize with and perceive the other to a greater capacity. Speculative future uses include: long-distance relationships, customer service, espionage, military, teenage discipline, quarreling couples, etc.
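Purely to illustrate the signal chain described above, here is a rough sketch of what the Arduino end of such a setup might look like; the pin, baud rate and serial protocol are assumptions, not the project's actual wiring:
```cpp
// Hypothetical Arduino sketch: Max/MSP (fed by faceOSC) sends an intensity
// byte over serial, and the board turns it into a PWM level that an EMS
// driver circuit could follow. All values here are illustrative guesses.
const int EMS_PWM_PIN = 9;   // assumed PWM-capable output pin

void setup() {
  Serial.begin(9600);
  pinMode(EMS_PWM_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    int intensity = Serial.read();        // 0-255, mapped from a facial gesture
    analogWrite(EMS_PWM_PIN, intensity);  // stronger gesture -> stronger stimulation
  }
}
```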
Video
vimeo
Face Casino
Playful automated facial casino experience that augments the viewer with crude new personalities that are positioned over and move with their actual face.
An antidote to the facial recognition systems that now pervade our daily lives. The application pretends to partially know the viewer, offering an interpretation of likes, stats and surreal poetry attached to the side of each new face.
Every time the viewer looks to the side, a new facial combination is loaded. The app encourages a Tinderesque swiping through the facial mash-ups in the pursuit of finding a match, i.e. getting comfortable with the “interpretation of the machine”.
Every configuration is automatically collected and added to an ever evolving piece.
I have a big interest in computer vision and am particularly curious about how machines interpret us with all the other data they hold on us. What do they know about us? How do we appear in their eyes?
Over the past five years we have seen facial augmentation mushroom, with anything from cartoonesque overlays to facial enhancement or distortion apps. I wanted to create a distinct appearance that separated itself from common visual reference points and instead resembled an expression by an "unsophisticated" artificial intelligence.
One big inspiration here was the German Dadaist collage artist Hannah Höch. Her provocative facial photomontages, often a critique of patriarchal society, feel very relevant to most of the political and societal issues we are facing at the moment. Another big inspiration was the work of the American multimedia artist Tony Oursler. In his series of works themed around privacy, surveillance and identity, absurd head installations invite the viewer to see themselves through the lens of a machine.
I was immediately drawn to the face controller homework we had as part of the OSC week. I wanted to create a gestural experience, and Kyle McDonald's FaceOSC add-on seemed a perfect fit to start with. After a short time experimenting with it, though, I realised that ofxFaceTracker was a better fit for what I was trying to achieve. I needed the camera feed and just wanted the overlaid components to move with the user's actual face, so there was no need for OSC messaging from one app to another.
I started experimenting with the ofxFaceTracker add-on examples to evaluate what I needed for my experience to work. Initially I was hoping to create a two-user interaction where the app takes a screenshot of the first user and then comps it onto the second user, but I found it hard to extract the raw data pairs for each face; something that should be technically possible, but was well beyond my technical skill set.
Within the ofxFaceTracker addon folder I found the "example-cutout", which used a polyline moving with the user's actual face to distort another image. I stripped out all the Delaunay code, which I didn't need, and just used the polyline to anchor the face tracker for my mapped facial components. Then I integrated the camera feed and gave it a solarised black-and-white effect to knock the camera feed back from the facial overlays.
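For anyone trying to picture the anchoring step, here is a minimal sketch in the spirit of the standard ofxFaceTracker examples; the file name, offsets and scaling are invented for illustration and this is not the project's actual code:
```cpp
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    ofImage eyeOverlay;                            // one pre-cut png component

    void setup() {
        cam.setup(640, 480);
        tracker.setup();
        eyeOverlay.load("components/eye_01.png");  // assumed file name
    }
    void update() {
        cam.update();
        if (cam.isFrameNew()) tracker.update(ofxCv::toCv(cam));
    }
    void draw() {
        cam.draw(0, 0);                            // camera feed (effects omitted)
        if (!tracker.getFound()) return;
        ofPolyline outline = tracker.getImageFeature(ofxFaceTracker::FACE_OUTLINE);
        ofRectangle face = outline.getBoundingBox();
        // anchor the component relative to the tracked face, scaled to its size
        eyeOverlay.draw(face.x + face.width * 0.2, face.y + face.height * 0.3,
                        face.width * 0.25, face.height * 0.15);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```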
Parallel to my tech experimentation I had the big task of sourcing images, cutting them out in Photoshop and exporting them as PNGs with transparent areas that allow for overlapping. I started with normal facial components but then moved on to exploring objects as well. I wanted the experience to be about the digital representations of our data as well as about our human-reminiscent looks. During this process I found myself hopping between software: Illustrator to create quick compositions and work out the sizing of components, Photoshop to cut out the components, and openFrameworks to feed my efforts into the machine. I enjoyed the surprising and absurd combinations I got back.
I created image vectors for every facial component, each loading in around 10-15 photographic images. Then I randomised the way these images were displayed, with a current object holding one random component of the vector at a time. Every time the facial tracking loses the user's facial data, a new configuration is loaded for all the facial component vectors.
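A rough sketch of how that randomisation could be structured, assuming a simple component class; the names and folder layout are placeholders rather than the real implementation:
```cpp
#include "ofMain.h"

// One "slot" of the face (eyes, mouth, ...) holding its pool of cut-out images.
struct FaceComponent {
    std::vector<ofImage> options;   // the 10-15 cut-out pngs for this slot
    int current = 0;

    void loadFrom(const std::string& folder) {
        ofDirectory dir(folder);
        dir.allowExt("png");
        dir.listDir();
        for (std::size_t i = 0; i < dir.size(); i++) {
            ofImage img;
            img.load(dir.getPath(i));
            options.push_back(img);
        }
    }
    void shuffle() {                // pick a new random image for this slot
        if (!options.empty()) current = (int) ofRandom(options.size());
    }
    ofImage& image() { return options[current]; }
};

// in ofApp::update(), once per frame (wasFound remembers the previous frame):
// if (wasFound && !tracker.getFound()) {
//     for (auto& c : components) c.shuffle();   // face lost -> new configuration
// }
// wasFound = tracker.getFound();
```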
To allude to a desired future feature I created, for now, "fake" image classifier rectangles that reference the system analysing what it sees. I'm hoping to develop this further for our Popup show so that the system cycles through words in the same way that it currently runs through the image vectors.
Finally I integrated a screenshot facility so the system takes a picture of every creation that it compiles. I'm planning to create an ever-evolving piece with that.
I think this piece would be really interesting if it performed sentiment analysis on the user's face and then supplied a live response by searching for images on the internet. I'm also keen to implement machine learning to train the system on its own monster mash-ups. Maybe it could analyse these based on the emotions it interprets in the user's face? I like the absurdity and the creative promise this application holds. Over the summer I'm hoping to experiment with silicone prosthetics that the user could attach to their real face. The app could then compute with these artificial components. This would juxtapose physical with digital augmentation and add another layer to the blurring between human and machine.
I'm really happy with how phase one of this project turned out. Of course, I could spend a lot more time sourcing images and experimenting with what works and what doesn't, but I think it's at a perfect place now to get it user-tested. From a code point of view, I will turn the facial component vectors into classes to simplify the ofApp.cpp file, something I didn't have time to do. I also want to spend some time on the classifiers to deliver a more personalised experience.
From the little user testing I have undertaken, the interaction feels intuitive and playful. At this point I feel confident about expanding it to the next level.
Video
tumblr
An experiment in face-recognition face mapping: "kumadori mapping"
Technologies used: Wekinator, faceOSC, MAX, Madmapper
Text
MIDTERM: Technical Proposal
Platforms:
Max
Wekinator
FaceOSC (or potential other facial recognition software)
Kinect (with NI Mate) or Arduino (using proximity sensors)
Hardware:
2 Laptops
Many CRT monitors
Multi Display Adapters
Webcams
Speakers
Soundcard
Lighting
Prototype Parts:
CRT monitors (Kijiji, pawn shops, Salvation Army, etc.)
Proximity Sensor
Webcam
DIY Parabolic Speakers (http://www.instructables.com/id/Build-a-Parabolic-Speaker/)
Dome
Piezo
Amp
⅛’’ Audio Plugs
Wires
Power Adapter
Text
First Response
For First Response on Friday 28th August, we needed to show something to the whole VCD 4th-year cohort to give them an idea of what we were looking into for our project.
For mine, I decided to show an interactive experience of real-time datamoshing to change the experience of digital imagery.
It was done in a program called Max 7 and utilised an existing patch which could apply a datamosh effect to your webcam feed in real time. I then influenced this effect through FaceOSC, a face-recognition application: smiling in front of the camera triggered the datamosh effect for a certain time period.
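The piece itself runs as a Max 7 patch; purely to illustrate the triggering logic, here is a rough openFrameworks/ofxOsc sketch that listens for FaceOSC's mouth messages and keeps an effect switched on for a few seconds. The port and address follow FaceOSC's usual defaults, and the threshold is a guess:
```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float effectUntil = 0;                   // time (seconds) the effect stays on

    void setup() { receiver.setup(8338); }   // FaceOSC's default output port

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            // a wide mouth is used here as a crude stand-in for "smiling"
            if (m.getAddress() == "/gesture/mouth/width" &&
                m.getArgAsFloat(0) > 16.0f) {              // threshold is a guess
                effectUntil = ofGetElapsedTimef() + 3.0f;  // keep effect on for 3 s
            }
        }
    }
    void draw() {
        bool effectOn = ofGetElapsedTimef() < effectUntil;
        ofDrawBitmapString(effectOn ? "datamosh ON" : "datamosh off", 20, 20);
    }
};

int main() {
    ofSetupOpenGL(320, 240, OF_WINDOW);
    ofRunApp(new ofApp());
}
```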
During the first response day, I set up my project as a mock exhibition, taping instructions onto my laptop to guide the audience on what to do, and hooking it up to a projector so that people beyond the laptop screen could see the experience of the project.
Annoyingly, someone else had set up in my space, which meant that people viewing her project would think that my setup was part of hers, so it didn't get as much attention as it could have. If there's another response day, I'll need to be more brutal about claiming my space.
Anyway, below are the results from the day:
To quickly get people to understand the gist of my project, I displayed a matrix which we were all required to fill out and show:
At the centre of my matrix, which my project would revolve around, are people who are blinded by technology, who can only see it as this perfect thing, only looking at its surface/screen. I wanted people to be aware of the raw computation inside their devices, tech and machines, and of the data that could be messed around with to generate interesting results.
Feedback from others was quite interesting.
From tutors, it was very good.
Karl (feedback located top left) suggested I look into the aesthetics of printing errors, such as fax machines, which was perfect as this was one of the areas I wanted to explore.
Andre (located bottom right) looked more into the refinement of the product I had made and into its exhibition. How can I get people to seamlessly integrate into the experience of this? Can there be a takeaway from this? He challenged my use of instructions, which I fully agree with, as the ones I had made were clumsy, something further proven when users didn't quite acknowledge them.
Feedback from fellow students, however, was mixed. They really liked the idea, but a lot were thinking too traditionally in relation to graphic design, where they were used to seeing concrete forms of output such as a website, app or book. That was not the point of the project, however, as I wanted to simply explore and experiment in order to generate experiences. This could be another intent of my project as well: to break the current mould of thought for students in regards to the need for concrete forms and to let their creativity run free.
Text
Usefulness and sound
We also asked ourselves where, when and why sound is used in our everyday lives. This helped us concentrate better on our new focus. Here are a few examples:
Where?
Notifications, ringtones, timers, films, alarms, sirens, responses to an action, lifts/restaurants/bars/clubs
When and why?
Attracting attention, feedback, creating tension/emotions, warnings, guidance, entertainment
These keywords were good to keep in mind so that the usefulness does not get lost.
As an example: if FaceOSC or another program noticed in a Zoom meeting that people's facial expressions look bored, ambient music could speed up and thereby convey a sense of urgency, like in the last lap of Mario Kart.
Text
Wordless
Concept
In this experiment I am using optical flow and blur to create semi-visible yet colourful faces, and by using the face tracker from ofxCv I tried to generate a random word each time the person opens their mouth. Opening the mouth also grabs a screenshot. If we consider this a public experiment, each time we get words and the ghosts behind those words, and the result turns into an archive of digital ghost portraits.
The main problem I faced was controlling only the mouth to create the interaction. I tried to use FaceOSC to track a specific part of the face but couldn't get it working properly, which is why I kept using two different facial expressions. I still have problems positioning the words on the mouth.
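As a minimal sketch of the mouth trigger (not the project's code; the threshold, word list and mouth landmark index are assumptions), ofxFaceTracker's MOUTH_HEIGHT gesture can fire both the word change and the screenshot:
```cpp
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    std::vector<std::string> words {"drift", "echo", "ghost", "hum"};  // placeholder words
    std::string currentWord;
    bool mouthWasOpen = false;

    void setup() {
        cam.setup(640, 480);
        tracker.setup();
    }
    void update() {
        cam.update();
        if (!cam.isFrameNew()) return;
        tracker.update(ofxCv::toCv(cam));
        if (!tracker.getFound()) return;

        bool mouthOpen = tracker.getGesture(ofxFaceTracker::MOUTH_HEIGHT) > 3.0f; // guessed threshold
        if (mouthOpen && !mouthWasOpen) {               // trigger once per opening
            currentWord = words[(int) ofRandom(words.size())];
            ofSaveScreen("ghost_" + ofGetTimestampString() + ".png");
        }
        mouthWasOpen = mouthOpen;
    }
    void draw() {
        cam.draw(0, 0);
        if (!tracker.getFound()) return;
        ofVec2f mouth = tracker.getImagePoint(62);      // landmark near the mouth (assumed index)
        ofDrawBitmapString(currentWord, mouth.x, mouth.y);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```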
Github: https://github.com/mehtapaydin/wordless
vimeo
Precedents
youtube
https://github.com/crecord/faceOSC/blob/master/textureMouth.pde
vimeo
https://github.com/edap/bubbles
Resources
http://www.v3ga.net/blog2/2009/03/gravity/
https://github.com/armadillu/ofxFboBlur
http://openframeworks.cc/learning/02_graphics/how_to_screenshot/
Text
How
By working with the facial recognition software FaceOSC and experimenting with it, I can explore its possibilities and abilities, and its effects on human behaviour.
Video
vimeo
FaceOSC + Processing