Glitch/Runway
Create a p5.js sketch that receives data from RunwayML (using any model). You can use this Glitch RunwayML template, which hides the keys in a .env file.
I decided to work with the template model from class: I chose to host a model that generates images of landscapes. I remixed the template, changed the URL and token in .env, and adjusted the canvas size to suit landscape images.
public/sketch.js
server.js
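Roughly, the server side works like this (a minimal sketch, not the template's exact code; MODEL_URL and MODEL_TOKEN stand in for the real .env variable names, and I'm assuming the template proxies to the hosted model's /v1/query route with Express and node-fetch):

const express = require('express');
const fetch = require('node-fetch');
const app = express();

app.use(express.json());
app.use(express.static('public')); // serves public/sketch.js

// The p5 sketch POSTs here, so the token never reaches the browser.
app.post('/query', async (req, res) => {
  const response = await fetch(process.env.MODEL_URL + '/v1/query', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ' + process.env.MODEL_TOKEN,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(req.body),
  });
  res.json(await response.json());
});

app.listen(process.env.PORT || 3000);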
The end result was successful: the sketch generated new landscape images, which was great.
[Vimeo: generated landscape images]
Training Generative Model (test)
I also decided to train a generative text model on Runway. I didn't manage to get it working on Glitch at first, though (see the update below).
I chose to train this model on two different poetry datasets: Dante's Divine Comedy and The Persian Mystics by Rumi. I thought the difference in their poetry styles would make for an interesting mix. I didn't know how to train on two separate .txt files, so I just combined them into one file.
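Merging the files just takes a quick one-off script, something like this (dante.txt and rumi.txt are placeholder filenames):

const fs = require('fs');

// Concatenate the two source texts into a single training file.
const combined =
  fs.readFileSync('dante.txt', 'utf8') + '\n' + fs.readFileSync('rumi.txt', 'utf8');
fs.writeFileSync('combined.txt', combined);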
I chose "extra small" for the initial pre-trained model size and 1,000 training steps, since I just wanted to see what it would produce within a short training time. It actually turned out better than I expected: the style of the output text was very similar to both poetry styles.
[Vimeo: sample text output from the trained model]
I would still clean up the dataset manually to get rid of unwanted syntax, punctuation, etc., since these sometimes make the output a bit weird. I would also try training the model for greater accuracy, as I am pretty interested to see what those outputs would look like.
I tried to use Glitch and the sample provided (the runway-ml-gpt-api example). I hosted my model and changed the URL and token; however, I kept getting an error.
I would like to try this again and get it to work!!
UPDATE
After changing the value in the count.json file to 0, it worked!
[Vimeo: the Glitch example generating text]
Outputs below:
Faces to Flowers
What is this model developed to do?
This model was developed to translate images into a certain style based on custom floral print datasets.
[Vimeo: Faces to Flowers outputs]
It also includes a style parameter, so you get multiple style outputs from one model. I played around with this to get the results above. They are pretty distinct from each other in terms of color, and even texture at points, which I think is pretty cool.
Testing out the export option:
I liked playing around with this and moving the slider to see the outcome preview.
[Vimeo: previewing styles with the export slider]
Exported image
Can you write a Model and Data "biography" that covers where it came from and what data was used to train it?
MUNIT is a multi-modal version of an unpaired image translation network. It's like CycleGAN but includes a style parameter, so you get multiple style outputs from one model. MUNIT has been trained on datasets such as CelebA-HQ, SYNTHIA, AFHQ, and Cats. The Faces to Flowers checkpoint was created by Derrick Schultz using FFHQ and a custom floral print dataset.
Describe the results of working with the model, do they match your expectations?
I think they do match my expectations. I expected the results to be pretty artistic and abstract, especially with portrait images, and I actually like that they turn out this way. I personally like this style, as I think it could serve as a starting point for creative projects or artwork.
Compare and contrast working with RunwayML as a tool for machine learning as related to ml5.js, python, and any other tools explored this semester.
So far, with what I've experimented with, RunwayML is pretty easy to use, since we get access to models pre-trained by other users. I also think it's cool that we get to see a community of works and experiment with the models to our own liking. It makes the workflow as a user pretty simple, and I enjoy testing out these different options.
Assignment 7
Reading Response
The questions that Emily Martinez proposes are extremely important and should be addressed when learning about or working with AI. One aspect I was intrigued by was the idea of decolonizing AI: what does it even mean to try to decolonize AI? In The Subtext of a Black Corpus, Ayo discusses the "lack of representation and the misrepresentation of blackness out there." Ayo uses Hemingway as an example: if a model trained on his work outputs "garbage or gibberish", this doesn't change the way he is viewed, whereas "if you run the works of James Baldwin through a model and it outputs nonsense, that's not okay." There is also the question: how do we work in consensual and respectful ways with texts by marginalized authors that are not as well represented, and by virtue of that fact alone much more likely to be misrepresented, misappropriated, or misunderstood if we are not careful?
Existing structural injustices are very much reflected in the technology, and this further prolongs those injustices. For example, could a lack of diversity among developers lead to biases that limit the values integrated into AI systems to specific groups while excluding others? How can we ensure that AI is developed in a way that works for all groups? To be honest, I am not 100% sure what the right steps are at this moment, but I am interested in learning how AI is being decolonized currently and what has been done in the past.
Coding Exercise
The text generation exercise we worked on in class was fun, and I wanted to try it out for this assignment. First, I wanted to generate text with one of the pre-trained models. I used the Woolf model, and it worked pretty well.
[Vimeo: Woolf model output]
I then attempted to use my own model, which I trained during class; however, it did not really work out. For some reason, none of the generated words are actual words from the text, or they seem jumbled up. I am not really sure why this was the case. Perhaps something went wrong during training?
It was also a pretty large .txt file and took a while to train. I do want to try this again and get it to work better.
[Vimeo: output from my own trained model]
I wanted to try one more model just to see if it worked, and this one (Bronte) worked fine too.
[Vimeo: Bronte model output]
I also tried the Markov chain example with my text of choice (Little Women). After all these different examples, I see that the output sometimes clearly relates to the input, but I still wonder how text can be generated in a way that is actually coherent or makes more sense. The reading gave me a bit more insight into this, and I am interested in the capabilities of machine learning language models. (A minimal sketch of the Markov approach follows the video.)
[Vimeo: Markov chain output]
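For reference, the Markov approach boils down to something like this (a minimal word-level sketch, not the exact class example): every sequence of 'order' words maps to the words that follow it in the source, and generation is a random walk over that table. Raising 'order' makes the output more locally coherent, though it also copies the source more closely.

// Build the chain: each key of `order` words maps to the possible next words.
function buildChain(text, order = 1) {
  const words = text.split(/\s+/);
  const chain = {};
  for (let i = 0; i < words.length - order; i++) {
    const key = words.slice(i, i + order).join(' ');
    (chain[key] = chain[key] || []).push(words[i + order]);
  }
  return chain;
}

// Generate text by repeatedly sampling a next word for the current key.
function generate(chain, order = 1, length = 50) {
  const keys = Object.keys(chain);
  let key = keys[Math.floor(Math.random() * keys.length)];
  const out = key.split(' ');
  for (let i = 0; i < length; i++) {
    const options = chain[key];
    if (!options) break; // dead end: this key only appeared at the very end
    out.push(options[Math.floor(Math.random() * options.length)]);
    key = out.slice(-order).join(' ');
  }
  return out.join(' ');
}

// e.g. generate(buildChain(littleWomenText, 2), 2, 100)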
Assignment 5
For this assignment, I decided to build on my previous handpose sketch and break it into multiple sketches for saving and loading the model. Honestly, I struggled with this at first because I was confused by the steps, but I managed to figure it out in the end. I started by loading the data, then created a separate sketch for training, and another one for loading the trained model.
[Vimeo: saving and loading the model]
I struggled a bit with the training part and obtaining the three model files (I forgot exactly how to do this), but it worked in the end. My final product was okay, but very simple.
I think the 🖖🏻 and 🤟🏼 poses were so similar that the model had trouble recognizing the difference between the two. Although this exercise was very simple, I think it was good practice for me to break things down into multiple sketches and understand the process better! (A rough outline of the flow is below.)
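From memory, the three-sketch flow looks roughly like this (a sketch, not my exact code; the three files are what nn.save() exports by default):

let nn = ml5.neuralNetwork({ task: 'classification', debug: true });

// Sketch 1: collect keypoint examples, then save the raw data
// nn.addData(inputs, ['🖖🏻']);
// nn.saveData('handpose-data');

// Sketch 2: load the data, train, then save the model
// (this produces model.json, model_meta.json, and model.weights.bin)
// nn.loadData('handpose-data.json', () => {
//   nn.normalizeData();
//   nn.train({ epochs: 50 }, () => nn.save());
// });

// Sketch 3: load the trained model and classify live input
// nn.load({
//   model: 'model/model.json',
//   metadata: 'model/model_meta.json',
//   weights: 'model/model.weights.bin',
// }, () => nn.classify(inputs, (err, results) => console.log(results)));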
Data Research
Something you find online. For example, take a look at Kaggle, awesome datasets or this list of datasets.
1) RAMEN RATINGS!
https://www.kaggle.com/residentmario/ramen-ratings
I, too, am an avid ramen enjoyer, so this dataset was very attractive to me.
Content/Context
The Ramen Rater is a product review website for the hardcore ramen enthusiast (or "ramenphile"), with over 2500 reviews to date. This dataset is an export of "The Big List" (of reviews), converted to a CSV format.
Each record in the dataset is a single ramen product review. More recently reviewed ramen varieties have higher numbers.
Each record lists Brand, Variety (the product name), Country, and Style (Cup? Bowl? Tray?).
Stars indicate the ramen quality, as assessed by the reviewer, on a 5-point scale
2) MOVIES!
https://www.kaggle.com/ruchi798/movies-on-netflix-prime-video-hulu-and-disney
I also love watching movies from time to time. This dataset lets me find out which streaming platform(s) carry certain movies, as well as their ratings.
Content
The dataset is an amalgamation of:
data that was scraped, comprising a comprehensive list of movies available on various streaming platforms
IMDb dataset
Find a dataset that you collect yourself or is already being collected about you. For example, personal data like steps taken per day, browser history, minutes spent on your mobile device, sensor readings, and more.
HEART RATE!
Here is an example of the data that is automatically collected when I wear my watch.
Assignment 4
Response to Dr. Rebecca Fiebrink
How can machine learning support people's existing creative practices? Expand people's creative capabilities?
I thought Dr. Fiebrink's talk was really insightful. Her points about machines being capable of being creative and making "real art" were thoughtful and got me to think a bit more about it. At the start of this class, I had the same questions and doubts about computers and their ability to mimic art to the level of physical art made by human hands; I always thought this was crucial when thinking about machine-generated art. After watching this talk, I feel like my views have changed a little: the bridge between machine/human interaction, and what it brings to enhance our creative practices, is what makes it so significant. One thing that stands out to me is that we are able to use machine learning together with the body, and this makes art a whole lot more personal. I also think it can help us create and generate things we may never have thought of before. It only helps to enhance creativity.
Real-Time Machine Learning System
Dream up and design the inputs and outputs of a real-time machine learning system for interaction and audio/visual performance. This could be an idea well beyond the scope of what you can do in a weekly exercise.
There are so many possibilities when dreaming up a real-time machine learning system. I would love to see something that allows the user to interact in order to generate some kind of art; I think a bridge between dance and art would be really wonderful. The input would be the user performing a dance, and the data collected could be certain key points from the arms and legs. Depending on the distances between them (regression), the canvas would draw different kinds of shapes (perhaps varying the number of sides, the radius, the size, etc.) and colors. By the end of the dance, the canvas should be filled with artwork in response to the user's dance. (A hypothetical sketch of this idea follows.)
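A hypothetical sketch of the core loop, assuming PoseNet-style keypoints from ml5 and a regression model trained ahead of time (the inputs and outputs here are placeholder choices, not a real design):

let nn = ml5.neuralNetwork({ task: 'regression' });

function gotPoses(poses) {
  if (poses.length === 0) return;
  const p = poses[0].pose;
  // distances between the arms and between the legs as inputs
  const arms = dist(p.leftWrist.x, p.leftWrist.y, p.rightWrist.x, p.rightWrist.y);
  const legs = dist(p.leftAnkle.x, p.leftAnkle.y, p.rightAnkle.x, p.rightAnkle.y);
  nn.predict([arms, legs], (err, results) => {
    if (err) return;
    // pre-trained outputs might be, say, shape size and hue
    fill(results[1].value, 80, 90); // assuming colorMode(HSB)
    circle(random(width), random(height), results[0].value);
  });
}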
Sketch
I decided to build a little upon the sketch we made in class with HandPose. I wanted to see if I could add more key points (I added the middle finger and pinky finger) and tried to train the model to recognize a "rock" sign and a "peace" sign. I would build on this by adding more classifications, or even by trying to turn it into a regression model (though I am not completely sure how to do that). A sketch of how the extra keypoints feed the classifier is below.
Though this was a really simple exercise, I was really happy to see it working properly!
Sketch
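Roughly, the extra keypoints feed the classifier like this (a sketch, assuming nn is the trained ml5.neuralNetwork and label holds the current prediction; the annotations object comes from handpose):

function gotHands(predictions) {
  if (predictions.length === 0) return;
  const a = predictions[0].annotations;
  // the tip ([3]) of each tracked finger, flattened into [x, y, z, ...]
  const inputs = [
    ...a.thumb[3],
    ...a.indexFinger[3],
    ...a.middleFinger[3],
    ...a.pinky[3],
  ];
  nn.classify(inputs, (err, results) => {
    if (!err) label = results[0].label; // e.g. 'rock' or 'peace'
  });
}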
Assignment 3
Model & Data Biography
What questions do you still have about the model and the associated data? Are there elements you would propose including in the biography?
How does understanding the provenance of the model and its data inform your creative process?
Seeing and reading about the model and data biography was really interesting to me. The collection method, and the "How was the data collected?" section of the data biography, particularly caught my eye. Though it states how the data was collected, I still feel there are ethical considerations that have yet to be explicitly stated in the biography. Information regarding the owners' consent, and specifically what agreements were made, should be stated as well.
Handpose Sketch
I didn't really know what I wanted to do for this assignment. I started with the demo we began in class and worked from there. I ended up creating something fairly simple: a sketch that allows the user to paint on a canvas with their fingers. I wanted the user to create something fairly abstract, and I also wanted an element of change, which in this case was the colors.
I focused on three fingers: thumb, index, and middle. These would essentially be the paintbrushes. I wanted to play around with RGB values, so I tried mapping the Z position of the fingertips to the R and G values. This lets the user quickly change colors by moving their fingers forward or backward. I thought about the apps I usually use and the ease of switching between colors (it usually takes a couple of seconds to pick a new color from the color wheel), and I wanted to make this process quick and simple for the user. A minimal sketch of the mapping is below.
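The mapping itself is small; a minimal sketch (the z range here is a guess, which is the same uncertainty I mention at the end; the fingertip comes from handpose's annotations):

function paintFinger(tip) {
  const [x, y, z] = tip; // e.g. predictions[0].annotations.indexFinger[3]
  const r = map(z, -60, 60, 0, 255); // depth controls red...
  const g = map(z, -60, 60, 255, 0); // ...and green, inverted
  noStroke();
  fill(r, g, 150);
  circle(width - x, y, 12); // mirror x to match the webcam view
}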
Here is a test for the red colors:
[Vimeo: testing the red colors]
Here are the colors when the fingers are further away from the webcam:
[Vimeo: colors with fingers farther from the webcam]
Closer to the webcam, the colors are slightly darker / different hues:
[Vimeo: colors with fingers closer to the webcam]
I am happy with the result; it feels like an "abstract" painting once you're done playing around with it. However, I am still unsure about the mapping: I don't know the exact range of the Z values, so the color change may not be as significant as it potentially could be. I initially used the Z value to adjust the size of the circle itself, and it looked cool, but I wanted to experiment with color; it felt a bit overwhelming when both the size and the hues were changing at the same time. Perhaps there could be another way to adjust the size?
Assignment 2
Reading Response
Reflect on the relationship between labels and images in a machine learning image classification dataset? Who has the power to label images and how do those labels and machine learning models trained on them impact society?
Magritte's work clearly challenges the conventions of language/labels and visual representation. Though the caption clearly states that it is not a pipe, as an audience reading the words below the image (which clearly shows a pipe), I actually start to doubt it: is it a pipe or not? I think the relationship between labels and images in the context of machine learning image classification is extremely significant from a cultural and social standpoint. It is important to be aware of the biases that can arise when labelling images and training ML models, and of their impact on society. Label bias can be harmful in the sense that labels aren't fully representative: trained models may not generalize to a universal population. I am curious how exactly this can be changed; how can we ensure labelling without bias?
Teachable Machine
I decided to create a sketch that recognized the soft toys on my bed. Each time it recognized one, a label and a couple of emojis would appear on the canvas. Here is a video of me training the machine:
[Vimeo: training the Teachable Machine model]
I started out with five different classes, one for each soft toy, and then realized that I had forgotten to create a "neutral" class, so I went ahead and added one.
Next, I tested the model to see how well it recognized each toy, and I think it turned out really well. My background was pretty neutral and I never changed location, so I feel like that made the recognition a lot more accurate.
[Vimeo: testing the model on each toy]
After this, I exported the model, got the shareable link, and proceeded to create a p5 sketch. I wanted each label to show up along with some emojis representing the soft toy.
// Inside draw(): show the current label at the bottom of the canvas
fill(255);
textSize(20);
textFont('Georgia');
textAlign(CENTER);
text(label, width / 2, height - 4);

// Pick the emoji(s) for the current label
let emoji = '🙂';
if (label == "MIKE WAZOWSKI") {
  emoji = 'wazowski'; // no matching emoji exists, so plain text
} else if (label == "happy octopus") {
  emoji = '🥰🐙';
} else if (label == "sad octopus") {
  emoji = '😢🐙';
} else if (label == "unicorn") {
  emoji = '🦄💜💖';
} else if (label == "dino") {
  emoji = '🦖💛✨';
}
textSize(150);
text(emoji, width / 2, height / 3);
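For context, the label above comes from a classification loop along these lines (a sketch; the model URL is a placeholder for the actual shareable link):

let classifier, video, label = 'waiting...';

function preload() {
  // placeholder URL for the exported Teachable Machine model
  classifier = ml5.imageClassifier('https://teachablemachine.withgoogle.com/models/XXXX/model.json');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  classifier.classify(video, (err, results) => {
    if (!err) label = results[0].label; // highest-confidence class
    classifyVideo(); // classify the next frame
  });
}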
I then adjusted the styling and size of the text. This is how it turned out:
[Vimeo: the finished sketch]
I thought it turned out pretty well, and it was really fun to create: the soft toys were recognized instantly and with a high level of accuracy.
Assignment 1b
What surprises you about this data set? What questions do you have? Thinking back to last week's assignment, can you think of any ethical considerations around how this data was collected? Are there privacy considerations with the data?
To be honest, it's still pretty difficult for me to comprehend the number of images in this dataset. The issue of privacy instantly comes to mind: it is hard to believe that all of these images were collected ethically, especially with a dataset this large. What else are these images being used for? What information can be accessed from this dataset, and in what ways does this impact whoever owns the images? How are they being collected, and to what extent is the privacy of those images at risk? Also, how often is the dataset updated, and how often should it be? I'm really interested in understanding more about the collection method, because I'm still not 100% sure how it works.
Webcam Image Classification
This was really fun to try out, and it was interesting to see how accurate (or not) it would be and what kinds of objects would be recognized. How much would the angle and the background impact the accuracy?
Sunglasses were recognized pretty easily
However, when put at different angles, they were recognized as completely different things, like a violin (????) or a knee pad.
A water/soda bottle was also recognized; however, when the background was more cluttered, it was classified as a microphone.
My phone and notebook were both recognized as a desktop computer. Is this because of the similarity in shape? Which elements of objects does it recognize?
Assignment 1a
When you hear the words "Artificial Intelligence", what are the first four things that come to your mind?
When I hear the words "Artificial Intelligence", the first thing that comes to my mind is robots, which is something I've always associated AI with over the years. I also think of chatbots on the web. Growing up, I was really fascinated by the way chatbots worked; now I see them in more corporate or professional settings, such as banking services or really any kind of service industry where bots are used to assist customers. I also think of self-driving cars and social media algorithms.
Think about the devices and/or digital services you use daily. Write below a list of the top three that are present in your life
iPhone
MacBook
social media apps
Have these things ever surprised you by guessing something about you that you didn’t expect?
Yes, almost every day. Ads on social media applications such as Instagram, or even YouTube, always seem to know what kind of product or service I may be interested in. One time, I was scrolling through Instagram and a makeup palette that I really wanted showed up, along with other similar makeup products. Sometimes accessories and jewelry that are eerily close to my style/taste show up when I don't expect them to.
Take a moment to see if you can identify what function AI plays in the following list
Email inbox: Spam filtering - recognizes which emails are likely to be spam so you don't have to sort them manually
Check depositing: Handwriting recognition - when depositing checks online, AI allows the machine to recognize the letters and numbers
Texting and mobile keyboards: Predictive text - predicting words that you personally may use next
Netflix: Recommendation engine - suggests shows or movies based on things previously watched (though Netflix is surprisingly not accurate at all)
Google (search function): Recommendation engine - suggestions on searches
Social media platforms: Recommendation engine - AI predicts the user's likes and shows things (ads) based on previous interactions
Automated message systems: Virtual assistants, e.g. chatbots that can answer a customer's questions
What do we gain by having AI in our everyday lives? What do we lose?
I think having AI in our everyday lives allows us to spend less time on things that aren't strictly necessary, or simply makes tasks easier for us. We gain time and efficiency. It could be argued that it makes us lazier, though. We lose privacy, and algorithmic bias is also an issue.
Use the prompts below to help design an AI system.
Problem: Looking for inspiration for new artwork is a little difficult, especially if I want to find existing work in a style similar to my own. It would be cool to easily find art and artists who share similar elements, to 1) use as future references for work or 2) simply find new artists to reach out to.
How can AI help solve it: AI could be used in an app or website that recognizes and analyzes media and filters it according to an artist's personal style - this can include color usage, patterns, media used, textures, form, etc.
Role humans have in addressing issue: Artists involved would need to be comfortable with sharing their art
Data needed: visual data (media) i.e. artwork - analysis of patterns, color, texture, medium, form etc.
How to responsibly gather data (privacy): I don't think privacy would be an extremely large issue (artists could use aliases and not share much personal information).
Flowchart: