Hello everyone, I'm Siddhesh. I'm currently pursuing an MA in Game Development (Design) at Kingston University London. I've read plenty of blogs over the years, and now I'm going to start writing my own. These posts will be about what I'm doing during my course.
SpidyDance - Final Project Phase 5
I was very unhappy with the results after I imported the .exr files from Maya into Nuke. My characters were floating in the air and were not sticking to the track. I went back to Maya, adjusted the assets and rendered them again. Rendering took a long time, so I took an afternoon nap. When I woke up and checked my laptop, it was still rendering, but it finished in the next 15 minutes.
I imported the .exr files into Nuke again, and this attempt was wasted too. I checked everything in Maya and it all looked fine. Then I asked a friend who is a Maya expert, and he asked whether the .fbx I had exported from Nuke was correct. I went back to Nuke, checked my trackers, found the mistake and exported a new .fbx file. I rebuilt everything and imported the file into Maya. This time I was happy with my tracking data.
The problem with my previous export was that the ground was not tracked well, so it sat below the plane when imported into Maya. Now that the ground from Nuke and the plane in Maya were at the same level, I inserted the assets.
Before rendering, I checked everything in Arnold Render View.
I also checked the shadows of the assets.
I rendered everything again and made a delicious dinner while it ran.
I had my dinner and checked the rendering progress. While it was still going, I worked on my other module.
By the time it finished, I was already sleepy, so I just turned off my laptop and went to sleep.
The next morning, I brought the files into Nuke and it worked.
I then color graded the files and they looked nice.
I rendered the footage from Nuke using a Write node and made a .mov file.
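Out of interest, here is a minimal sketch of how that final render step could be done from Nuke's Script Editor; the file paths and frame range are placeholders, not my actual project files.

```python
# Minimal sketch of the final render step in Nuke's Python.
# Paths and frame range are placeholders, not my actual project files.
import nuke

# Read the graded comp (an EXR sequence in this example).
comp = nuke.nodes.Read(file="comp/spidydance.####.exr", first=1, last=300)

# Write the result out as a .mov, as described above.
out = nuke.nodes.Write()
out.setInput(0, comp)
out["file"].setValue("renders/spidydance_final.mov")
out["file_type"].setValue("mov")

# Render the full frame range.
nuke.execute(out, 1, 300)
```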
After rendering, my plan was to add the Gangnam Style track in the background, but I had no time left, so I went with footage without audio.
After so much hard work, it was good to compare what I had planned with what actually happened. I faced many ups and downs while doing this project and learned a lot in these past weeks. It made me realize that making a VFX video is not that easy, and it gave me a new level of respect for the people who make these kinds of videos. Please check out the footage I created below,
You can access my files for this project in this LINK.
I would like to thank Richard, Jamie and Johana for guiding us in these past weeks and solving every query we had.
I would also like to thank my friends who helped me with Maya.
Credits:
Main footage - Kingston University
Batman logo - Clipart Library LINK
Spiderman and speaker - CGTrader.com
Spiderman Animation - www.mixamo.com
This project was made by Siddhesh Thakur (2150114) from Kingston University for the Perfecting the Look module.
SpidyDance - Final Project Phase 4
Now it was time to import the file into Maya. Maya had been working absolutely fine a day before I started this task, and my student license was still valid, but I don't know why it stopped working. I tried every possible thing to make it work but failed miserably. In the end, I asked a friend for his Maya account and he helped me out by logging into Maya on my computer. I really appreciate what he did for me.
I created a plane in Maya and imported the file from Nuke. I also created an HDR image in Photoshop under my friend's guidance, as I was not familiar with the process.
Then I brought that image into Maya and added the assets I got from CGTrader, along with the animated Spiderman.
I adjusted the assets to sit on the plane, then created render layers for them. I had two assets, so I created four layers: Spiderman, Spiderman_Shadow, Speaker and Speaker_Shadow. Then I added two collections to each layer.
I linked the model of each layer to the respective asset.
Then I linked the plane to the plane(Ground) which I added in Maya.
I disabled primary visibility for the shadow collection in the Spiderman layer.
Then I disabled primary visibility for the model collection in the Spiderman_Shadow layer.
I did the same thing with the Speaker and Speaker_Shadow layers.
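As a rough illustration of the idea, here is a small sketch in Maya Python; the object names are hypothetical, and my actual setup was done through Render Setup layers, collections and overrides as described above.

```python
# Rough sketch (maya.cmds) of the idea behind the layers: one pass that shows
# the model, and one "shadow" pass where the model is hidden from camera but
# still casts shadows. Object names are hypothetical; my real setup used
# Render Setup layers, collections and overrides.
import maya.cmds as cmds

assets = ["Spiderman_GEO", "Speaker_GEO"]  # hypothetical transform names

for asset in assets:
    shapes = cmds.listRelatives(asset, shapes=True, fullPath=True) or []
    for shape in shapes:
        # For the beauty layer the model is visible to camera:
        cmds.setAttr(shape + ".primaryVisibility", 1)
        # For the *_Shadow layer you would override this to 0 instead, so the
        # object is invisible to camera but still casts its shadow:
        # cmds.setAttr(shape + ".primaryVisibility", 0)
```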
Now it was time for rendering the footage. I changed the render settings as follows.
I set the render device to CPU in the render settings; selecting the GPU as the render device caused issues with the asset shadows.
Then I simply hit Render Sequence and rendered the footage out to .exr files. With 300 frames per layer, 1,200 frames in total across the four layers, it was taking a lot of time. It was already around 3 AM, so I left my laptop on and went to sleep.
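For reference, this is roughly how those render settings could be set from Maya's Script Editor, assuming the Arnold plug-in is loaded; the renderDevice attribute name is my assumption for the CPU/GPU switch, so treat this as a sketch rather than the exact buttons I clicked.

```python
# Sketch of the Arnold render settings, assuming the mtoa plug-in is loaded.
# The renderDevice attribute name is an assumption and may differ by version.
import maya.cmds as cmds

# Output EXR files from Arnold's default driver.
cmds.setAttr("defaultArnoldDriver.ai_translator", "exr", type="string")

# Render on the CPU rather than the GPU, which gave me shadow issues.
cmds.setAttr("defaultArnoldRenderOptions.renderDevice", 0)  # assumption: 0 = CPU

# 300 frames per layer; with 4 layers that is 1,200 frames in total.
cmds.setAttr("defaultRenderGlobals.startFrame", 1)
cmds.setAttr("defaultRenderGlobals.endFrame", 300)
```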
Thank you.
SpidyDance - Final Project Phase 3
In this phase, I had to find a Batman character, so I went to www.cgtrader.com and searched for Batman. There were many good Batman characters, but most were paid. Fortunately, I found two good royalty-free ones, so I downloaded them and put them into Mixamo to animate them.
Now here's a twist: we all know Mixamo automatically rigs the character, but because of Batman's cape it failed to recognize the model. I tried the same thing with the other Batman and got the same error. My whole plan was disrupted and I was shocked. Then I thought, let's not give up, and I started looking for another superhero character. I looked at my PC wallpaper and found my second character: Spiderman.
I went to CGTrader again and started looking for Spiderman. I found a good Spiderman character that was available for free. This was the character -> SpiderMan
I animated my Spiderman in Mixamo with the Gangnam Style animation.
Then I hit the download button and changed the download settings as follows.
Back in Nuke, I had to create a CameraTracker node to track the camera. I remembered Richard telling us in a lecture that if we want a good track, we might have to use a Sharpen node on the main footage, so I followed his advice and sharpened the image to help the tracking accuracy. I deleted the rejected and unsolved tracks from the AutoTracks section of the CameraTracker and got the solve error down to 0.97; it should be less than 1.
Then I created a scene by clicking the Scene button in the CameraTracker's Export section.
The WriteGeo node creates a .fbx file that we can export from Nuke and import into Maya.
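Here is a minimal sketch of that track-and-export chain in NukeX Python; the paths are placeholders, and the solving and track clean-up were done in the CameraTracker's properties panel rather than scripted.

```python
# Minimal sketch (NukeX Python) of the tracking/export chain described above.
# Paths are placeholders; solving and deleting rejected tracks was done
# through the CameraTracker properties panel, not scripted.
import nuke

plate = nuke.nodes.Read(file="plates/cobblestone.####.exr", first=1, last=300)

# Sharpen the plate slightly to help the tracker, as Richard suggested.
sharpen = nuke.nodes.Sharpen()
sharpen.setInput(0, plate)

tracker = nuke.createNode("CameraTracker")  # NukeX only
tracker.setInput(0, sharpen)

# After tracking/solving and creating the scene from the Export menu,
# a WriteGeo node pointed at that scene exports the .fbx for Maya.
geo = nuke.nodes.WriteGeo(file="export/camera_track.fbx")
```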
That's it for this blog. Thank you.
SpidyDance - Final Project Phase 2
Now that every basic thing was done, it was time to search for the Batman assets. I went to Chrome and searched for a Batman logo, and found one website with pretty attractive logos. I tried multiple logos from www.clipart-library.com, downloaded several and imported them into Nuke. Finally, I found a logo that suited the conditions in my footage.
Please click the LINK to see the logo.
I imported the logo into Nuke and color graded it.
After color grading, I tried to track the wall above the door, but it failed miserably.
Then I tried to track the big grey door using the CornerPin2D and Roto nodes, and that worked.
Then I placed the Batman logo on the door using a Transform node.
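As a rough idea, the logo branch could look something like this in Nuke Python; the file names are placeholders and the corner-pin animation comes from the track.

```python
# Rough sketch of the logo branch; file names are placeholders.
import nuke

plate = nuke.nodes.Read(file="plates/cobblestone.####.exr", first=1, last=300)
logo = nuke.nodes.Read(file="assets/batman_logo.png")

# Scale/position the logo, then pin it onto the door with a CornerPin2D.
xform = nuke.nodes.Transform()
xform.setInput(0, logo)

pin = nuke.nodes.CornerPin2D()
pin.setInput(0, xform)
# The four "to" corners are animated to follow the tracked door corners.

# A Roto can be used as a mask so the logo only shows on the door area.
merge = nuke.nodes.Merge2(operation="over")
merge.setInput(0, plate)  # B input: the plate
merge.setInput(1, pin)    # A input: the pinned logo
```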
Tracking took most of my time. The main twist was yet to come, and because of it I had to change my idea.
So stay tuned for my next blog. Thank you.
SpidyDance - Final Project Phase 1
I was listening to The Batman soundtrack by Michael Giacchino when I got an idea for my Perfecting the Look final project. I was already a fan of Batman and thought this was a good opportunity to create a VFX video related to him. That music was inspiring me to do something crazy.
I planned to start with the basic steps in NukeX, but before that I had to choose a source video that would support my idea. So I chose the Cobblestone Property Footage Video 3 provided by Kingston University.
After this, I created an image sequence from the video using the Write node. Once I had the image sequence, I undistorted the footage using a 24mm checkerboard.
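A minimal sketch of that movie-to-image-sequence step in Nuke Python, with placeholder paths and frame range:

```python
# Sketch of turning the source movie into an image sequence with a Write node.
# Paths and frame range are placeholders for my actual footage.
import nuke

src = nuke.nodes.Read(file="footage/cobblestone_property_3.mov")

seq = nuke.nodes.Write()
seq.setInput(0, src)
seq["file"].setValue("footage/frames/plate.####.exr")  # a separate frames folder

nuke.execute(seq, 1, 300)  # placeholder frame range of the source clip

# The undistort itself was done with a LensDistortion node solved from the
# 24mm checkerboard grid, set up in the node's properties panel.
```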
Then I color graded the image and made it a little brighter. Initially, my plan was to make the footage a little darker because Batman does his work at night; I will explain why I made it brighter in my next blog. Thank you. Stay tuned!
Perfecting the Look (Week 10)
This week was the last week of this module. In the final lecture, Richard introduced us to some additional nodes we can use in our projects. I was not there for the lecture after lunch because some urgent work came up and I had to leave. But when I asked a friend what happened, he told me Richard just answered students' queries. Richard also asked whether anyone had finalized what they were going to do for their final project.
Honestly, I had no idea what to do for my final project. But after spending some time thinking about it, I had an idea. I will cover it in my final project blogs.
That's it. Thank you so much.
Perfecting the Look (Week 8)
Just as I was starting to understand Maya, I cut myself while cooking, so I could not do much hands-on work. But I listened to Richard carefully and understood that this week we were focusing on AOVs and layers.
We can create render layers in the Render Setup window. If there is one asset, we create two collections for it: the first collection is for the asset itself and the second is for the asset's shadow.
We have to disable primary visibility for the shadow collection when we are layering assets, and vice versa.
After this, we learned about the Shuffle node in more depth, looking at the diffuse direct, diffuse indirect, specular direct and specular indirect passes. It was a step up from what we learned in week 2.
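As a rough sketch of the idea, this is how those passes could be pulled out of a multichannel EXR with Shuffle nodes in Nuke Python; the layer names depend on the AOVs actually exported from Maya.

```python
# Sketch of pulling individual Arnold AOVs out of a multichannel EXR with
# Shuffle nodes. The layer names below are the usual Arnold names; they depend
# on which AOVs were exported from Maya.
import nuke

beauty = nuke.nodes.Read(file="renders/spiderman.####.exr", first=1, last=300)

for layer in ["diffuse_direct", "diffuse_indirect",
              "specular_direct", "specular_indirect"]:
    shuffle = nuke.nodes.Shuffle(label=layer)
    shuffle.setInput(0, beauty)
    shuffle["in"].setValue(layer)  # route that AOV layer into rgba
```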
I will try to understand more about this concept in the future as I was unable to focus on this week's lecture.
That is it for this week. Thanks.
Perfecting the Look (Week 7)
Richard revised what we covered in the past week. He told us about Maya and how to export a .fbx file from Nuke. This time he took another asset so that he could explain the steps in Maya practically. I was not familiar with Maya, so I could not focus much during the lecture, but when I came back home I watched the lecture again on Microsoft Teams and understood a little more of it.
What Richard did was basically export the footage from Nuke and bring it into Maya. Then he added a plane where we can put our assets.
To make the shadows work, he created a skydome light. Because of the skydome, the asset casts a shadow that falls on the plane we created.
We can see the asset and its shadow falling on the plane in Arnold Render View.
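A small sketch of that Maya setup (ground plane plus HDR skydome) in Maya Python, assuming the Arnold plug-in (mtoa) is loaded; the names and HDR path are placeholders.

```python
# Small sketch of the Maya setup: ground plane + HDR skydome light.
# Assumes the Arnold plug-in (mtoa) is loaded; names and paths are placeholders.
import maya.cmds as cmds

# Ground plane that catches the asset's shadow.
ground = cmds.polyPlane(name="ground_PLANE", width=20, height=20)[0]

# Arnold skydome light driven by a file texture (the HDR image).
sky = cmds.shadingNode("aiSkyDomeLight", asLight=True, name="sky_DOME")
hdr = cmds.shadingNode("file", asTexture=True, name="sky_HDR")
cmds.setAttr(hdr + ".fileTextureName", "sourceimages/environment.hdr", type="string")
cmds.connectAttr(hdr + ".outColor", sky + ".color", force=True)
```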
Then it was time to render the footage and export it to Nuke.
The render from Maya produced .exr files, which were stored in a folder we created.
After bringing that folder into Nuke, it was time for grading. Richard color corrected the box and saved it.
Richard had an appointment with his dentist so he had to leave and Alex took over.
After the lunch break, Richard came back and showed us how to take a character and put it into Mixamo, which is used to animate characters. Mixamo automatically rigs the character; if the character is not rigged already, we just place the chin, wrists, elbows, knees and groin markers on the predetermined points.
After clicking on the Next button, we can animate the character with the animations available on Mixamo. We just have to select the animation and download it.
Now that we have an animation for our character, we just have to put it in Maya and follow the same steps and render it.
I can see a little progress in myself compared to the previous week, but there are still lots of things to learn.
Thank you.
Perfecting the Look (Week 6)
Richard recovered from COVID and taught us this week. He revised everything that happened in week 5. This week's topic mainly focused on 3D camera tracking and taking the result into Maya. We handled lens distortion first, which corrects the footage and makes the work easier.
After this, it was time for camera tracking and the nodes related to it. Once the camera tracking is done, we can use a Roto node if we want to exclude points from a specific area. The number of tracked points stays the same, but they appear in other parts of the footage rather than in the zone we selected with the Roto node.
We can delete tracks from the AutoTracks section of the CameraTracker node; from there we can delete the unsolved and rejected tracks. We can also play around with the Max Track Error to track the footage more accurately. Richard told us that the solve error should be less than 1 for better results.
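Here is a very rough sketch of masking an area out of the track with a Roto node in Nuke Python. I'm assuming the Roto feeds the CameraTracker's Mask input; the exact input index and mask settings may differ, so treat this purely as an illustration.

```python
# Sketch of masking an area out of the camera track with a Roto node.
# Assumption: the Roto connects to the CameraTracker's Mask input (second
# arrow); the Mask dropdown in the tracker's settings then has to be pointed
# at that input in the properties panel.
import nuke

plate = nuke.nodes.Read(file="plates/shot.####.exr", first=1, last=200)

roto = nuke.nodes.Roto()  # paint the area to exclude from tracking
roto.setInput(0, plate)

tracker = nuke.createNode("CameraTracker")  # NukeX only
tracker.setInput(0, plate)
tracker.setInput(1, roto)  # assumed Mask input index
```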
Then it was time to export the tracked footage. We can simply do this by going into the CameraTracker and, in the Export section, selecting Scene and hitting the Create button.
We can export the geometry using the WriteGeo node, which creates a .fbx file that we can import into Maya. It is up to us which extension we want and what we export. After selecting all the options, we simply hit the Execute button and we are good to go.
We took the exported file and imported it into Maya. I lost track from here, as I had never used Maya in my life. Still, I followed the steps Richard was doing, but at one point I lost that too.
Although we imported the ground from Nuke, the result in Maya was unstable, so we added a plane in Maya and placed and adjusted the asset on it. This way the asset sits on the ground properly and it also looks better.
After this, we rendered the footage. It was hard for me to understand all these steps in one go. It was a different experience to learn so many new things, and I will try to focus more on this part in the future.
That is it for this week. Thank you.
Perfecting the Look (21 Feb 2022 - Week 5)
Unfortunately, Richard got COVID this week, so he sent Alex to teach us more about Nuke while Richard supported us and observed the class through Teams. The lecture was about 3D tracking.
The footage we took was 700 frames, so we had to use a Retime node to bring it down to 200 frames. After the retime, we rendered the footage using the Write node. We did this because the system processes an image sequence faster than a movie file, as we learned in week 2 of the Nuke classes.
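As a sketch, the retime and render-to-sequence step could look like this in Nuke Python; 700 input frames over 200 output frames gives a speed factor of 3.5, and the paths are placeholders.

```python
# Sketch of the retime + render-to-sequence step.
# 700 input frames mapped to 200 output frames: speed = 700 / 200 = 3.5.
import nuke

src = nuke.nodes.Read(file="footage/source.mov", first=1, last=700)

retime = nuke.nodes.Retime()
retime.setInput(0, src)
retime["speed"].setValue(3.5)  # 700 frames played back over 200 frames

out = nuke.nodes.Write(file="footage/frames/plate.####.exr")
out.setInput(0, retime)
nuke.execute(out, 1, 200)
```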
After this, we needed to undistort the footage based on the camera and lens. We added a checkerboard with a 35mm focal length, then added a LensDistortion node and attached it to the checkerboard to undistort the footage.
Nuke cannot track all the lines correctly, so we need to adjust them and make some corrections where needed. At first I was drawing one continuous line all along the checkerboard, and when I clicked the Solve button I got an error. When I raised this with Richard, he explained that we need to draw separate straight lines; carrying a single line across the whole checkerboard causes the error. He corrected us and gave us the solution.
This is what the error looked like:
This is how it should look:
After this, we needed to render it, as we wanted a single undistorted frame from it. Before rendering we used an STMap.
This is the node tree,
Now it was time for 3D tracking. After the lunch break, Alex told us to follow a tutorial and work through it. The tutorial covered how the CameraTracker node works. Only NukeX has the camera tracking features; standard Nuke cannot track.
We can ask for any number of track points, so I set it to 800. It takes a while to track every point.
Once everything is tracked, we need to solve the trackers using the Solve button in the CameraTracker node. If there are any errors, we can fix them in the AutoTracks section of the CameraTracker. Orange and red trackers affect the result, so we need to delete them first.
Once every tracker turns green, we are good to go.
After this it was time to add a cube.
This is the node tree so far,
We rendered the footage using a Write node.
Video with the trackers visible -
Thank you.
Perfecting the Look (14th Feb 2022 - Week 4)
This week Richard introduced us to 3D tracking. He told us about the Camera node, the ScanlineRender node and the Scene node. The Camera node is used to view the image in a 3D space, the ScanlineRender node converts the 3D scene back into a 2D image, and the Scene node works like a Merge node but for 3D objects. We can switch the viewer between 2D and 3D by pressing the Tab key, and we can also change the view from the top right corner of the viewer.
Richard also told us how to move and operate the camera in 3D perspective.
Richard introduced us to the Card node and the Grid node. A Card is a 2D plane that we can attach to another node, and the Grid node generates a grid pattern that we can use as a texture on geometry. We arranged the card so that it matched the ground of the image. It took some time to adjust the camera angle and to match the plane to the background image.
This is how it looks.
After this, we added a cube and attached a checkerboard to it. We can add a CheckerBoard in the same way we add any node, by pressing the Tab key in the node graph. The checkerboard adds a texture to the geometry when the two are connected.
After some time, Richard told us to add a Project3D node and link the card to it; the Project3D node also has to be attached to the image it projects. We applied the same technique to the pillars on the right side: we took a cube and attached the grid texture to it.
This is what it looked like.
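To summarize the setup, here is a rough sketch of that 3D node tree in Nuke Python; depending on the Nuke version some of these node classes have newer variants, and the input indices on ScanlineRender and Project3D are how I read the labelled arrows, so double-check them against the node graph.

```python
# Sketch of the basic 3D setup from this lecture. Input indices on
# ScanlineRender (bg / obj-scn / cam) and Project3D (img / cam) are my reading
# of the labelled arrows; check them in the node graph.
import nuke

plate = nuke.nodes.Read(file="plates/pillars.####.exr")

cam = nuke.nodes.Camera2()
scene = nuke.nodes.Scene()

# A card textured by projecting the plate through the camera.
proj = nuke.nodes.Project3D()
proj.setInput(0, plate)  # img
proj.setInput(1, cam)    # cam
card = nuke.nodes.Card2()
card.setInput(0, proj)
scene.setInput(0, card)

# A cube textured with a checkerboard, standing in for a pillar.
checker = nuke.nodes.CheckerBoard2()
cube = nuke.nodes.Cube()
cube.setInput(0, checker)
scene.setInput(1, cube)

# Render the 3D scene back to 2D.
render = nuke.nodes.ScanlineRender()
render.setInput(0, plate)  # bg
render.setInput(1, scene)  # obj/scn
render.setInput(2, cam)    # cam
```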
Richard then told us to go into vertex selection. At this point I was feeling really hungry and lost track of what he was saying, so we eventually went to the cafe and had our lunch.
After the lunch break, Richard gave us a tutorial to complete and work through, but I needed to finish the previous exercise first. So I jumped back to my station and started working on the vertex selection. I aligned the vertices to match the pillar; it took some time to get there, and I asked my friends how it works. Now that the cube had replaced the pillar, we were asked to add a sphere.
After adding the sphere, I attached a checkerboard to it.
This is the node graph so far.
Then I placed the sphere behind the pillar/cube from 3D view.
In the end it looked like this,
Thanks for your time!
Perfecting the Look (7th Feb 2022 - Week 3)
Jamie's part was over and it was Richard's turn to teach us about Nuke. Jamie had taught us really well, and Richard began by revising everything we did in the last two weeks.
Richard showed us how to convert a movie file into an image sequence using the Write node. Using an image sequence instead of a movie file helps the computer process things more smoothly and quickly.
Retime is helpful for remapping frames, which means we can map a specific input frame range onto a different output range.
TimeOffset is used to shift the footage so it starts from a specific frame.
While creating an image sequence, we have to make sure we create a separate folder inside our footage folder for the video file we are rendering, so that our images do not get saved all over the place and create a mess.
Nodes can be repositioned in the node graph; we can pick them up with Shift+Click and drag them where we want. We can also add grid lines to the node graph and arrange the node tree against them, which helps keep it organized. To add the grid lines, go to Preferences -> Node Graph -> Show Grid.
After that, we came to the main topic of the lecture: matching the colors of different images using the RGB channels. I added a Grade node so I could manage the color settings, then picked the darkest point of the match image for the blackpoint and its brightest point for the whitepoint.
From the reference image, I picked the darkest point for the lift and the brightest point for the gain.
Matching the colors of these images was time consuming; it needed a lot of focus to get one image's colors close to the other's. We did this exercise for two more sets of images.
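For reference, here is a small sketch of that grading step in Nuke Python; on the Grade node the lift and gain sliders are the knobs named black and white, and the values below are placeholders for the pixels you would actually sample.

```python
# Sketch of the grade-matching step. The RGBA values are placeholders for the
# pixels you would actually sample with the color swatches.
import nuke

match = nuke.nodes.Read(file="images/match.jpg")

grade = nuke.nodes.Grade()
grade.setInput(0, match)

# Darkest / brightest points sampled from the match image:
grade["blackpoint"].setValue([0.02, 0.02, 0.03, 0.0])
grade["whitepoint"].setValue([0.91, 0.88, 0.85, 1.0])

# Darkest / brightest points sampled from the reference image:
grade["black"].setValue([0.04, 0.03, 0.05, 0.0])   # lift
grade["white"].setValue([0.95, 0.93, 0.90, 1.0])   # gain
```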
These are the images which I worked on during the lecture.
In the second half, Richard told us about lens distortion, which is used to correct the distortion the lens introduces into the footage. Using the LensDistortion node, we can draw straight lines on the footage so that it can solve the distortion. After drawing the lines and clicking Solve, we get green, orange and red lines. Orange and red lines mean we need to rework the lines we have drawn; if they are green, we are good to go.
Picking the darkest point on the spot where we want to place our asset makes the tracking much easier and the result almost accurate.
Thanks!
Perfecting the Look (31st Jan 2022 - Week 2)
This week we focused on the ColorCorrect node and the basic properties used while color correcting. Another node introduced to us was the Grade node, which helps to balance an image by adjusting its black and white levels. This is usually done before the color correcting.
The next node we learned was Premult, which multiplies the RGB channels by the alpha channel. We can also rearrange the channels of an image with the Shuffle node, i.e. we can move the values of the red channel into the green channel and vice versa.
As shown in the above node tree, we took the background image, the spaceship and its reflection and combined them with Merge nodes. We used the Shuffle node to change the channels and did some color correction using the ColorCorrect node. Again, we followed Jamie's advice and arranged the tree neatly so it stays simple, effective and easy to understand.
The above picture is the work I've done during the lecture.
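As a rough sketch, the tree described above could be built like this in Nuke Python; the file names are placeholders for the background, spaceship and reflection elements.

```python
# Rough sketch of the node tree from this lecture; file names are placeholders.
import nuke

bg = nuke.nodes.Read(file="images/background.jpg")
ship = nuke.nodes.Read(file="images/spaceship.exr")

# Optional channel rearranging, e.g. swapping red and green, with Shuffle.
shuffle = nuke.nodes.Shuffle(red="green", green="red")
shuffle.setInput(0, ship)

# Color correct the element, premultiply it, then merge it over the background.
cc = nuke.nodes.ColorCorrect()
cc.setInput(0, shuffle)

premult = nuke.nodes.Premult()
premult.setInput(0, cc)

comp = nuke.nodes.Merge2(operation="over")
comp.setInput(0, bg)       # B: background
comp.setInput(1, premult)  # A: graded spaceship
```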
Thanks!
Perfecting the Look (24th Jan 2022 - Week 1)
It was my first day at uni and my first lecture was about Perfecting the Look. Mr. Jamie Bhalla is an excellent lecturer and he introduced us to the module brilliantly. He told us how the VFX industry works and what kinds of software we can use. I got to know about Nuke and its basic operations. Nuke is node-based software; basically, we can edit an image and add visual effects to it using Nuke.
On the first day we learned about the Merge, ColorCorrect, CornerPin and Transform nodes, and about the arrangement of nodes. Merge allows us to layer multiple images over one another. ColorCorrect lets us change the contrast, gamma, gain and so on, and we can use it to adjust the highlights, midtones and shadows of the footage. We can place a picture inside another picture using corner pins, positioning the pins wherever we want so the content sits in its desired place.

Mr. Bhalla told us that it is important to arrange the nodes tidily and not to make a mess of the node tree. The nodes should be properly arranged so that if an error pops up, we know exactly where it is; good node arrangement helps us identify errors quickly. Unfortunately, I do not have the file I worked on. Whenever I get access to it, I'll add it here for reference. Thank you so much.
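To make the corner-pin idea concrete, here is a small sketch of a picture-in-picture in Nuke Python; the file names and corner coordinates are made-up values standing in for the tracked corners.

```python
# Small sketch of a corner-pin "picture in picture". The coordinates are
# made-up values standing in for the corners you would actually track or pick.
import nuke

plate = nuke.nodes.Read(file="images/plate.jpg")
insert = nuke.nodes.Read(file="images/insert.jpg")

pin = nuke.nodes.CornerPin2D()
pin.setInput(0, insert)
pin["to1"].setValue([420, 180])  # bottom-left corner of the target area
pin["to2"].setValue([880, 200])  # bottom-right
pin["to3"].setValue([870, 560])  # top-right
pin["to4"].setValue([430, 540])  # top-left

comp = nuke.nodes.Merge2(operation="over")
comp.setInput(0, plate)  # B: the plate
comp.setInput(1, pin)    # A: the pinned insert
```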