Tuesday, 8 December 2015
Finishing off the animation
In high spirits, my partner and I proceeded with the last stages of post production. After finishing all of the drawing we put everything together in Premiere and worked out how much time we needed to fill. There were a few seconds missing, so I went back to the scenes and stretched them a bit, in some places drawing extra animation. Once we had a solid minute of animation I suggested ending it as if it were a trailer, with the title at the end backed by dramatic sound effects. Callum liked the idea and gave me the green light to proceed. The next day we moved on to sound recording. It took quite a bit of rehearsing, but we managed to record all the sounds we needed. My voice was a bit too feminine for the bully, so I lowered the pitch in the audio editing software. After recording I moved on to editing and cutting all the sound effects and placing them onto the final animation. Some of the sounds were recorded too quietly, so I had to boost them myself and fade them in or out where needed to fit the timing of the animation. After putting all the sounds on the animation, I was finally relieved to send Callum the finished thing, and he liked it. Finally we are ready for submission, and I am proud to submit the finished project!
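Out of interest, the pitch change and level fixes can also be scripted. Below is a minimal sketch using the pydub Python library, which is only a stand-in for the audio editor we actually used; the file names and the 0.8 pitch factor are made up.

```python
from pydub import AudioSegment

# Hypothetical file name; each recorded line was exported separately.
line = AudioSegment.from_file("bully_line.wav")

# A common pydub recipe for a pitch drop: pretend the clip was recorded at a
# lower sample rate, then resample back. A factor below 1.0 deepens the voice
# (it also slows the clip down slightly).
factor = 0.8
deeper = line._spawn(line.raw_data, overrides={
    "frame_rate": int(line.frame_rate * factor)
}).set_frame_rate(line.frame_rate)

# Boost a too-quiet take by 6 dB and fade it in and out to sit in the mix.
fixed = (deeper + 6).fade_in(50).fade_out(200)
fixed.export("bully_line_edited.wav", format="wav")
```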
Character and Narrative: finishing off
So for the final stage of making the demo character I had to do some final housekeeping. Basically it was just grouping everything nicely in the Outliner and making sure everything was in its place; locking off some of the attributes (I tried doing that, but I liked the freedom of posing my character with no restrictions, so I left it as it was); making the key attributes for the fingers (I did that on my bully, but it did not seem to be working, so I passed on that); and parenting everything to the master controller. However, if you just parent, it does not work properly; it was a matter of going to the Constrain menu and choosing Scale. That allows the master controller to scale the mesh, controls and skeleton together, which otherwise would not scale.
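A scripted version of that last step might look something like this (Python with maya.cmds); the group and controller names here are hypothetical, and the exact setup is just a sketch of the idea.

```python
import maya.cmds as cmds

# Parent the geometry and control groups under the master controller
# (names are hypothetical).
for grp in ["geo_grp", "controls_grp"]:
    cmds.parent(grp, "master_ctrl")

# Plain parenting does not push the master's scale onto the joints properly,
# so a scale constraint makes the skeleton follow the master controller too.
cmds.scaleConstraint("master_ctrl", "skeleton_grp", maintainOffset=True)
```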
Now it was a matter of making a pose and setting everything up for rendering a turnaround, which was really easy and fun.
Character and Narrative: binding
So I have finally gotten closer to binding. First of all I had to orient the controllers to the joints. You cannot just orient a control straight to the joint, because the control ends up with trash values and if you try it just messes everything up. The way we did it was creating a null group, orienting it to the joint, then parenting it to the control and freezing the transformations on the control, NOT the null group!!! And then orienting the control to the joint. Then it was a matter of setting up the hierarchy of the controls.
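For reference, here is a minimal maya.cmds sketch of that offset-group trick as I understand it; the joint and control names are hypothetical.

```python
import maya.cmds as cmds

# Create the null group (names are hypothetical).
null = cmds.group(empty=True, name="shoulder_ctrl_null")

# Snap the null onto the joint's position and orientation: make a temporary
# constraint, then delete it so no live connection is left behind.
cmds.delete(cmds.parentConstraint("shoulder_jnt", null))

# Put the control under the null, then freeze the CONTROL's transforms only;
# the null keeps the offset values so the control's channels stay clean zeros.
cmds.parent("shoulder_ctrl", null)
cmds.makeIdentity("shoulder_ctrl", apply=True,
                  translate=True, rotate=True, scale=True)
```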
(setting up the hierarchy of controllers)
When orienting the controls to the joints, a few times I had a problem with joints flying off or messing up, but while making my bully character I learned how to fix it: it was just a matter of taking the control out of the hierarchy, orienting the null group again, parenting everything back and making sure to freeze transformations again. By trial and error with the bully character I learned that trash values on the controller can mess everything up. However, when I saved the whole thing and opened it up again, the legs were messed up for some reason. Matt advised to just redo the joints on the legs because everything else was working fine. Once again I was battling with the legs of the character, but the second time around they seemed to cooperate and I was able to bind everything with no major issues. I then moved on to painting the weights on the character -
(painting weights tool)
The Paint Skin Weights Tool is one of the Artisan-based tools in Maya. With the Paint Skin Weights Tool, you can paint weight intensity values on the current smooth skin. When you select an influence to paint, the mesh displays color feedback (by default) in white and black. You can verify that Color Feedback is turned on under the Display heading. Turning on Color Feedback helps you identify the weights on the surface by representing them as grayscale values (smaller values are darker, larger values are lighter). You can also turn on Use color ramp under the Gradient heading to view weights in color.
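The same weights can also be set from script, which is handy for fixing the odd stray vertex; here is a minimal sketch with maya.cmds, where the mesh and joint names are hypothetical.

```python
import maya.cmds as cmds

# Find the skinCluster driving the mesh (names are hypothetical).
skin = cmds.ls(cmds.listHistory("body_mesh"), type="skinCluster")[0]

# Read the weights every influence has on one vertex...
print(cmds.skinPercent(skin, "body_mesh.vtx[42]", query=True, value=True))

# ...and hand that vertex fully to a single joint -- the scripted equivalent
# of painting it white for that influence (weights stay normalised).
cmds.skinPercent(skin, "body_mesh.vtx[42]",
                 transformValue=[("spine_jnt", 1.0)])
```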
So when I was finished with painting the weights, it was finally time to finish everything off.
Character and Narrative: Controls and IK handles
The second step of making the demo character was to make the controls. It was a pretty straightforward process, just a matter of making shapes and placing them next to the joint you wish to control. Afterwards it is necessary to colour the controls so it is easier to tell which side is which: usually it is red for right and blue for left, and everything in the middle is yellow. Since the root control and the hip dislocate control sit in the same place, one of them has to be different in order to tell which is which.
(creating the controls and naming them)
(Colouring the controls, pink for right and blue for left)
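Colouring by hand through the Drawing Overrides gets tedious, so here is a minimal scripted sketch of the same thing (maya.cmds, hypothetical control names).

```python
import maya.cmds as cmds

# In Maya's index palette 13 is red, 6 is blue and 17 is yellow.
for ctrl, colour in (("R_arm_ctrl", 13), ("L_arm_ctrl", 6), ("root_ctrl", 17)):
    shape = cmds.listRelatives(ctrl, shapes=True)[0]
    cmds.setAttr(shape + ".overrideEnabled", 1)
    cmds.setAttr(shape + ".overrideColor", colour)
```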
Then I inserted the IK handles on the legs. IK stands for Inverse Kinematics. There are three types of IK handles with corresponding solvers: Single Chain (SC) Handle, Rotate Plane (RP) Handle, and Spline Handle (there is a small scripted example after the list below).
(SC/RP Handles used for limbs)
(Spline Handle used for necks or tails)
- Single Chain (SC) Handle and Rotate Plane (RP) Handle can be used to animate the motion of an articulated figure's limbs, and similar objects.
- Spline Handle can be used to animate the motion of curvy or twisty shapes, such as tails, necks, spines, tentacles, bull-whips, and snakes.
- SC Handle's end effector tries to reach the position and orientation of its IK handle.
- RP Handle's end effector only tries to reach the position of its IK handle.
- When you are using SC Handle, to control the orientation of the middle joints, you rotate the IK handle.
- When you are using RP Handle, two vectors, the Pole vector and the Handle vector, define the plane in which the middle joints lie.
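Here is the small scripted example mentioned above: creating one handle of each type with maya.cmds. The joint names are hypothetical.

```python
import maya.cmds as cmds

# Rotate Plane solver for a leg -- the usual choice for limbs, steered
# with a pole vector:
cmds.ikHandle(startJoint="hip_jnt", endEffector="ankle_jnt",
              solver="ikRPsolver", name="leg_ik")

# Single Chain solver, where rotating the handle itself turns the chain:
cmds.ikHandle(startJoint="shoulder_jnt", endEffector="wrist_jnt",
              solver="ikSCsolver", name="arm_ik")

# Spline solver for tails, necks and spines, driven by a curve:
cmds.ikHandle(startJoint="tail_01_jnt", endEffector="tail_05_jnt",
              solver="ikSplineSolver", name="tail_ik")
```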
Sunday, 6 December 2015
Character and Narrative: post production
So as I mentioned earlier, we left quite a lot of work for post production. When I finished animating my scenes I moved on to post production, which I decided to do in Flash, and it was a great relief to finally do some 2D animation, and on a Wacom Cintiq, which made the whole process really enjoyable. I first started with importing the rendered-out image sequences and then pinpointing the key frames, drawing up some lines so I could understand how the face is positioned. Then I started animating faces frame by frame; a few times I used a tool that allows the object to be rotated and moved in a set 3D space, which saved me some time. But apart from the tedious frame drawing there was not much to it. I tried to make it as funny as I could, as well as stretching some scenes out to leave room for editing the timing (which was my concern, as mentioned in earlier posts). Callum raised a concern about the speech bubbles. I suggested he make a few sketches so I could better understand what he was looking for. Later on I came up with the idea of a badly drawn speech bubble and asked if Callum could play around with that idea. He came up with some brilliant solutions which I am going to animate into the scenes. I can't wait to put everything together and see the final result, as well as voicing it.
3D Potential and Limitations: Path Tracing
Have you watched 3D-animated Disney flicks like Big Hero 6 and wondered how some of its scenes manage to look surprisingly realistic?
Disney has posted a top-level explanation of how its image rendering engine, Hyperion, works its movie magic. The software revolves around "path tracing," an advanced ray tracing technique that calculates light's path as it bounces off objects in a scene. It takes into account materials (like Baymax's translucent skin), and saves valuable time by bundling light rays that are headed in the same direction -- important when Hyperion is tracking millions of rays at once. The technology is efficient enough that animators don't have to 'cheat' when drawing very large scenes, like BH6's picturesque views of San Fransokyo. Although Disney's tech still isn't perfectly true to life, it's close enough that the studio might just fool you in those moments when it strives for absolute accuracy.
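The core loop is simple enough to sketch. Below is a toy Python version of the idea, nothing like Hyperion's actual code: the Scene, the hit record and the material model are all stand-ins. Follow a ray; whenever it hits a surface, add any light the surface emits and recurse along a bounced ray, tinting the result by the surface colour.

```python
import random

MAX_BOUNCES = 5

def trace(scene, ray, depth=0):
    """Return the (r, g, b) light arriving along one ray."""
    if depth > MAX_BOUNCES:
        return (0.0, 0.0, 0.0)            # path cut off: contributes nothing
    hit = scene.intersect(ray)            # closest surface the ray strikes
    if hit is None:
        return scene.background(ray)      # ray escaped to the environment
    emitted = hit.material.emission       # light the surface gives off itself
    bounce = hit.material.scatter(ray, hit, random.random())
    if bounce is None:
        return emitted                    # ray absorbed, path ends here
    # Recurse along the bounced ray and tint it by the surface colour.
    incoming = trace(scene, bounce.ray, depth + 1)
    return tuple(e + a * i for e, a, i in
                 zip(emitted, bounce.attenuation, incoming))
```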
Strike a Pose
In the beginning of this module we were asked to pose the demo model into some poses to illustrate certain emotions and moods. However the twist was that we had to do it from our own reference images. So basically we had to try and act out the emotion ourselves, take a picture of that and then pose the model according to the reference imagery. So I picked Exhaustion, Hunger, Tiredness, Surprise and Shame.
Exhaustion
Tiredness
Shame
Hunger
Surprise (I look soooooo surprised)
To be honest I did this assignment ages ago, when it was briefed, but I did not publish it because I felt I could do better. So I did: I came back to the images I had done before and redid them, taking lighting into consideration, as well as camera angles.
So here is Exhaustion. When I was acting this out I thought about exhaustion where you have your shoulders hunched and your hands almost dragging on the floor. I made the character look up to the light, giving it a slight feeling of hope, and I also put another point light behind his back to outline the silhouette and separate it from the background.
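For anyone wanting to block that out quickly in script, a rough maya.cmds sketch of the two-light idea follows; the names, positions and intensities are made up, and in practice I placed the lights by eye.

```python
import maya.cmds as cmds

# Key light in front and above, for the character to look up into.
cmds.pointLight(name="hope_key", intensity=1.2)
cmds.move(2, 5, 4, "hope_key")

# Rim light behind the back, to pull the silhouette off the background.
cmds.pointLight(name="silhouette_rim", intensity=0.8)
cmds.move(0, 3, -5, "silhouette_rim")
```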
So here is Hunger. When I am hungry I have the same pose as when tired, only grabbing the empty belly, so that is what I did with the model. As for lighting, I put in a spotlight to have something running through the composition, in this case a shadow. Once again there is another source of light from the back, separating the character's shadow from the background. The camera is positioned to the left because a left/right composition gives a feeling of moving forward or back. In this case forward. To the fridge.
SURPRISE!!! So I figured I would put in another object so the character has something to be surprised about. Also I wanted to light the character's face from the same angle the surprising object appears from.
When I was posing for shame, the first thing I thought of was to cover my face or eyes; I consider that the natural reaction for many people when they are ashamed. So I kept that in mind while posing the character, however I took it a bit further. I imagined that when you are ashamed you want to "run away" from the situation and/or just blend into the background and disappear. That is why I made the lighting very soft and tried to blend the character with the background, using indirect lighting and the physical sun and sky. There is also some spotlight there because the shadows of the feet were blending with the ground too much. The composition logic is the same as in the previous one, positioning everything to the right side to give it a feeling of moving forward.
This one is pretty simple. Tiredness I imagine as seeing the first rays of sunlight after a whole night of work. I think that idea really comes through in this one.
Overall I enjoyed this exercise, so much that I did it twice! It gave me a deeper understanding of how lighting and camera angles can help exaggerate an emotion or mood, and it was just a good chance to play around with those things and see for myself how they change the setting of the scene.
Tuesday, 1 December 2015
3D Potential and Limitations: Rendering
3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images with 3D photorealistic effects or non-photorealistic rendering on a computer.
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering, to more advanced techniques such as: scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photo-realistic rendering, or real-time rendering.
A render farm is a high performance computer system, e.g. a computer cluster, built to render computer-generated imagery (CGI), typically for film and television visual effects.
This is different from a render wall, which is a networked, tiled display used for real-time rendering. The rendering of images is a highly parallelizable activity, as frames and sometimes tiles can be calculated independently of the others, with the main communication between processors being the upload of the initial source material, such as models and textures, and the download of the finished images.
Over the decades, advances in computer power would allow an image to take less time to render. However, the increased computation is instead used to meet demands to achieve state-of-the-art image quality. While simple images can be produced rapidly, more realistic and complicated higher-resolution images can now be produced in more reasonable amounts of time. The time spent producing images can be limited by production time-lines and deadlines, and the desire to create high-quality work drives the need for increased computing power, rather than simply wanting the same images created faster.
To manage large farms, one must introduce a queue manager that automatically distributes processes to the many processors. Each "process" could be the rendering of one full image, a few images, or even a sub-section (or tile) of an image. The software is typically a client–server package that facilitates communication between the processors and the queue manager, although some queues have no central manager. Some common features of queue managers are: re-prioritization of the queue, management of software licenses, and algorithms to best optimize throughput based on various types of hardware in the farm. Software licensing handled by a queue manager might involve dynamic allocation of licenses to available CPUs or even cores within CPUs. A tongue-in-cheek job title for systems engineers who work primarily in the maintenance and monitoring of a render farm is a render wrangler to further the "farm" theme. This job title can be seen in film credits.
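The core queue idea fits in a few lines. Here is a toy Python sketch, not any real farm manager: a shared queue of frame numbers and a handful of worker threads that each pull the next frame to "render". A real manager adds the priorities, licence handling and hardware-aware scheduling described above.

```python
from queue import Queue
from threading import Thread

def worker(name, frames):
    while True:
        frame = frames.get()
        if frame is None:                  # sentinel: no more work
            frames.task_done()
            return
        print(f"{name} rendering frame {frame:04d}")
        frames.task_done()

frames = Queue()
for f in range(1, 25):                     # one second of film at 24 fps
    frames.put(f)

workers = [Thread(target=worker, args=(f"node-{i}", frames)) for i in range(4)]
for w in workers:
    w.start()
for _ in workers:                          # one stop sentinel per worker
    frames.put(None)
frames.join()
```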
(Pixar’s renderfarm)
(Another photo of Pixar's renderfarm)
3D Potential and Limitations: 3D Animation
Computer animation, or CGI animation, is the process used for generating animated images by using computer graphics. The more general term computer-generated imagery encompasses both static scenes and dynamic images while computer animation only refers to moving images.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.
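The tweening idea is easy to sketch in code. Below is a toy Python version using linear interpolation, the simplest case; real packages also offer eased and spline curves.

```python
def tween(key_a, key_b, frames_between):
    """Yield the interpolated values for the in-between frames."""
    for i in range(1, frames_between + 1):
        t = i / (frames_between + 1)       # parameter running from 0 to 1
        yield key_a + (key_b - key_a) * t

# An arm rotation keyed at 0 degrees on frame 1 and 90 degrees on frame 6:
print(list(tween(0.0, 90.0, 4)))           # -> [18.0, 36.0, 54.0, 72.0]
```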
For 3D animations, all frames must be rendered after the modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet (e.g. Adobe Flash, X3D) often use software on the end-users computer to render in real time as an alternative to streaming or pre-loaded high bandwidth animations.
3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.
3D models rigged for animation may contain thousands of control points — for example, "Woody" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and used his expressions to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.