Tuesday, 8 December 2015

Finishing off the animation

In high spirits, my partner and I proceeded with the last stages of post production. After finishing drawing all of the stages we put everything together in Premiere and worked out how much time we needed to fill. There were a few seconds missing, so I went back to the scenes and stretched them a bit, in some places drawing in more animation. Once we had a solid minute of animation I suggested ending it as if it were a trailer, putting the title at the end with dramatic sound effects. Callum liked the idea and gave me the green light to proceed with it.

The next day we moved on to sound recording. It took quite a bit of rehearsing, but we managed to record all the sounds we needed. My voice was a bit too feminine for the bully, so I changed the pitch in the audio editing software. After recording the sounds I moved on to editing and cutting all the sound effects and putting them onto the final animation. Some of the sounds were recorded too quietly, so I had to boost them myself and fade them out or in where needed to fit the timing of the animation. After putting all the sounds on the animation, I was finally relieved to send Callum the finished thing, and he liked it. Finally we are ready for submission, and I am proud to submit the finished project!

Character and Narrative: finishing off

So for the final stage of making the demo character I had to do some final housekeeping. Basically it was just grouping everything nicely in the Outliner and making sure everything was in its place: locking off some of the attributes (I tried doing that, but I liked the freedom of modelling my character with no restrictions, so I left it as it was), making the key attributes for the fingers (I did that on my bully, but it did not seem to be working, so I passed on that) and parenting everything to the master controller. However, if you just parent everything it does not work properly; it was a matter of going to Constrain > Scale, which allows the master controller to scale the mesh, the controls and the skeleton, which otherwise does not scale.
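Here is a minimal sketch of that last step in maya.cmds, the way I understand it; 'master_CTRL', 'root_JNT' and the group names are placeholders, not our actual scene nodes:

```python
# A minimal sketch of the master-control setup described above, using
# maya.cmds; all node names here are placeholders.
import maya.cmds as cmds

# Plain parenting handles translate/rotate, but joints won't follow scale:
cmds.parent("controls_GRP", "geometry_GRP", "master_CTRL")

# Constrain > Scale: let the master controller scale the skeleton too.
cmds.scaleConstraint("master_CTRL", "root_JNT", maintainOffset=True)
```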
Now it was a matter of making a pose and setting everything up for the rendering of a turnaround, which was really easy and fun.

Character and Narrative: binding

So I have finally gotten closer to binding. First of all I had to orient the controllers to the joints. You cannot just orient a control straight to the joint, because the control picks up trash values and everything gets messed up. The way we did it was creating a null group, orienting it to the joint, then parenting the control to it and freezing the transformations on the control, NOT the null group!!! And then orienting the control to the joint. Then it was a matter of setting up the hierarchy of the controls.
(setting up the hierarchy of controllers)
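Here's a rough sketch of that null-group trick in maya.cmds, with placeholder names from my rig (this is just how I understand the workflow):

```python
# A rough sketch of the null-group orient trick, using maya.cmds;
# 'L_arm_JNT' and 'L_arm_CTRL' are placeholder names.
import maya.cmds as cmds

grp = cmds.group(empty=True, name="L_arm_CTRL_grp")

# Snap and orient the null group to the joint, then delete the
# temporary constraints so only the clean values remain:
cmds.delete(cmds.pointConstraint("L_arm_JNT", grp))
cmds.delete(cmds.orientConstraint("L_arm_JNT", grp))

# Parent the control under the group and freeze the CONTROL's
# transforms (not the group's!), so the control sits at zero:
cmds.parent("L_arm_CTRL", grp)
cmds.makeIdentity("L_arm_CTRL", apply=True,
                  translate=True, rotate=True, scale=True)
```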
When orienting the controls to the joints, a few times I had a problem with joints flying off or messing up, but while making my bully character I learned how to fix it: it was just a matter of taking the control out of the hierarchy, orienting the null group again, parenting everything back and making sure to freeze transformations again. By trial and error with the bully character I learned that trash values on the controller can mess everything up. However, when I saved the whole thing and opened it up again, the legs were messed up for some reason. Matt advised me to just redo the joints on the legs, because everything else was working fine. Once again I was battling with the legs of the character, but the second time around they seemed to cooperate and I was able to bind everything with no major issues.
I then moved on to painting the weights on the character-
(painting weights tool)
The Paint Skin Weights Tool is one of the Artisan-based tools in Maya. With the Paint Skin Weights Tool, you can paint weight intensity values on the current smooth skin. When you select an influence to paint, the mesh displays color feedback (by default) in white and black. You can verify that Color Feedback is turned on under the Display heading. Turning on Color Feedback helps you identify the weights on the surface by representing them as grayscale values (smaller values are darker, larger values are lighter). You can also turn on Use color ramp under the Gradient heading to view weights in color. 
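The same weights can also be set without the brush; here's a tiny sketch using maya.cmds.skinPercent, with made-up node names ('skinCluster1', 'bully_mesh' and the joints are placeholders):

```python
# A tiny example of setting skin weights in code rather than painting;
# every node name here is a placeholder.
import maya.cmds as cmds

# Give one vertex an 80/20 split between two joints:
cmds.skinPercent(
    "skinCluster1",
    "bully_mesh.vtx[120]",
    transformValue=[("elbow_JNT", 0.8), ("wrist_JNT", 0.2)],
)

# Query it back to check the "painted" value:
print(cmds.skinPercent("skinCluster1", "bully_mesh.vtx[120]",
                       query=True, value=True))
```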
So when I was finished with painting the weights it was finally time to finish everything off.

Character and Narrative: Controls and IK handles

The second step of making the demo character was to make the controls. It was a pretty straightforward process, just a matter of making shapes and putting them next to the joint you wish to control. Afterwards it is necessary to colour the controls so it's easier to tell which side is which: usually it is red for right and blue for left, and everything in the middle is yellow. Since the root control and the hip control sit in the same place, one of them has to be different in order to tell which is which.
 (creating the controls and naming them)
(Colouring the controls, pink for right and blue for left)
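The colouring is done through the drawing overrides; here's a small sketch in maya.cmds (the control name and my colour picks are placeholders):

```python
# A small sketch of colouring a control curve through its drawing
# overrides; 'L_arm_CTRL' is a placeholder name.
import maya.cmds as cmds

def colour_control(ctrl, colour_index):
    """13 = red, 6 = blue, 17 = yellow in Maya's index palette."""
    for shape in cmds.listRelatives(ctrl, shapes=True) or []:
        cmds.setAttr(shape + ".overrideEnabled", 1)
        cmds.setAttr(shape + ".overrideColor", colour_index)

colour_control("L_arm_CTRL", 6)   # blue for the left side
```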
Then I inserted the IK handles on the legs. IK stands for Inverse Kinematics. There are three types of IK handles with corresponding solvers: the Single Chain (SC) Handle, the Rotate Plane (RP) Handle, and the Spline Handle.
(SC/RP Handles used for limbs) 

(Spline Handle used for necks or tails)

  • Single Chain (SC) Handle and Rotate Plane (RP) Handle can be used to animate the motion of an articulated figure's limbs, and similar objects.
  • Spline Handle can be used to animate the motion of curvy or twisty shapes, such as tails, necks, spines, tentacles, bull-whips, and snakes.
The difference between a Single Chain (SC) Handle and a Rotate Plane (RP) Handle is that:
  • SC Handle's end effector tries to reach the position and orientation of its IK handle.
  • RP Handle's end effector only tries to reach the position of its IK handle.
Notes:
  • When you are using an SC Handle, you rotate the IK handle to control the orientation of the middle joints.
  • When you are using an RP Handle, two vectors, the pole vector and the handle vector, define the plane in which the middle joints lie (see the little sketch below).
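Here's a minimal sketch of creating those handles through maya.cmds, on a made-up hip-knee-ankle chain; all names are placeholders, not from our actual scenes:

```python
# A minimal sketch of adding IK handles in maya.cmds; the joint and
# control names are placeholders for a simple hip->knee->ankle chain.
import maya.cmds as cmds

# Rotate Plane (RP) solver for the leg; the pole vector then controls
# which way the knee points:
leg_ik = cmds.ikHandle(startJoint="hip_JNT", endEffector="ankle_JNT",
                       solver="ikRPsolver", name="leg_IK")[0]
cmds.poleVectorConstraint("knee_CTRL", leg_ik)

# A Spline solver would suit a neck, spine or tail instead:
# cmds.ikHandle(startJoint="spine1_JNT", endEffector="spine5_JNT",
#               solver="ikSplineSolver", createCurve=True)
```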
 

Sunday, 6 December 2015

Character and Narrative: post production

 

So, as I mentioned earlier, we left quite a lot of work for post production. When I finished animating my scenes I moved on to post production, which I decided to do in Flash. It was a great relief to finally do some 2D animation, and on a Wacom Cintiq, which made the whole process really enjoyable. I first started with importing the rendered-out image sequences and then pinpointing the key frames, drawing up some lines so I could understand how each face was positioned. Then I started animating the faces frame by frame; a few times I used a tool that allows the object to be rotated and moved in a set 3D space, which saved me some time. But apart from the tedious frame drawing there was not much to it. I tried to make it as funny as I could, as well as stretching some scenes out to leave room for editing the timing (which was my concern, as mentioned in earlier posts). Callum raised a concern about the speech bubbles. I suggested he make a few sketches so I could better understand what he was looking for. Later on I came up with the idea of a badly drawn speech bubble and asked if Callum could play around with that idea. He came up with some brilliant solutions, which I am going to animate into the scenes. I can't wait to put everything together and see the final result, as well as voicing it.

3D Potential and Limitations: Path Tracing

Have you watched 3D-animated Disney flicks like Big Hero 6 and wondered how some of its scenes manage to look surprisingly realistic?
Disney has posted a top-level explanation of how its image rendering engine, Hyperion, works its movie magic. The software revolves around "path tracing," an advanced ray tracing technique that calculates light's path as it bounces off objects in a scene. It takes into account materials (like Baymax's translucent skin) and saves valuable time by bundling light rays that are headed in the same direction, which matters when Hyperion is tracking millions of rays at once. The technology is efficient enough that animators don't have to 'cheat' when drawing very large scenes, like BH6's picturesque views of San Fransokyo. Although Disney's tech still isn't perfectly true to life, it's close enough that the studio might just fool you in those moments when it strives for absolute accuracy.
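To get a feel for the core idea, here's a toy path tracer sketched in plain Python: follow a ray, bounce it around at random, and average many noisy samples. The scene and numbers are completely made up and this is nowhere near what Hyperion actually does, but the bones of the technique are there:

```python
# A toy diffuse-only path tracer, just to illustrate the idea described
# above: follow a light ray as it bounces around the scene. Everything
# here is invented for illustration.
import math
import random

# Scene: (center, radius, diffuse reflectance, emitted light)
SPHERES = [
    ((0.0, -100.5, -1.0), 100.0, 0.6, 0.0),  # big "floor" sphere
    ((0.0, 0.0, -1.0), 0.5, 0.8, 0.0),       # diffuse ball
    ((0.0, 3.0, -1.0), 1.0, 0.0, 5.0),       # spherical lamp overhead
]

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def hit(origin, direction, center, radius):
    """Smallest positive ray parameter t where the ray meets the sphere."""
    oc = sub(origin, center)
    b = dot(oc, direction)
    disc = b * b - (dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def random_direction():
    """Uniform random direction (for the diffuse bounce)."""
    while True:
        v = tuple(random.uniform(-1, 1) for _ in range(3))
        if 0 < dot(v, v) <= 1:
            n = math.sqrt(dot(v, v))
            return tuple(x / n for x in v)

def trace(origin, direction, depth=0):
    """Radiance along one ray: emitted light plus a recursive bounce."""
    if depth > 4:
        return 0.0
    best = None
    for center, radius, albedo, emit in SPHERES:
        t = hit(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, radius, albedo, emit)
    if best is None:
        return 0.0  # ray escaped into darkness
    t, center, radius, albedo, emit = best
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple(x / radius for x in sub(point, center))
    bounce = random_direction()
    if dot(bounce, normal) < 0:  # keep the bounce on the outside
        bounce = tuple(-x for x in bounce)
    return emit + albedo * trace(point, bounce, depth + 1)

# Average many noisy samples for one pixel's worth of light:
samples = [trace((0, 0, 1), (0, 0, -1)) for _ in range(2000)]
print(sum(samples) / len(samples))
```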

Strike a Pose

At the beginning of this module we were asked to put the demo model into a series of poses to illustrate certain emotions and moods. However, the twist was that we had to do it from our own reference images. So basically we had to try to act out the emotion ourselves, take a picture of it, and then pose the model according to the reference imagery. I picked Exhaustion, Hunger, Tiredness, Surprise and Shame.
Exhaustion

Tiredness

Shame

Hunger

Surprise (I look soooooo surprised)




To be honest, I did this assignment ages ago, when it was briefed, but I did not publish it because I felt I could do better. So I did. I came back to the images I had made before and redid them, taking lighting into consideration, as well as camera angles.

So here is Exhaustion. When I was acting this out I thought of exhaustion as having your shoulders hunched and your hands almost dragging on the floor. I made the character look up into the light, giving it a slight feeling of hope, and I also put another point light behind his back to outline the silhouette and separate it from the background.
So here is Hunger. When I am hungry I have the same pose as when tired, only grabbing my empty belly, so that is what I did with the model. As for lighting, I put in a spotlight to have something running through the composition, in this case a shadow. Once again there is another source of light from the back, separating the character's shadows from the background. The camera is positioned to the left because a left/right composition gives a feeling of moving forward or back. In this case forward. To the fridge.
SURPRISE!!! So I figured I would put in another object so the character has something to be surprised about. I also wanted to light the character's face from the same angle the surprising object appears from.
When I was posing for Shame, the first thing I thought of was covering my face or eyes; I consider that the natural reaction for many people when they are ashamed. I kept that in mind while posing the character, but I took it a bit further. I imagined that when you are ashamed you want to "run away" from the situation and/or just blend into the background and disappear. That is why I made the lighting very soft and tried to blend the character into the background, using indirect lighting and the Physical Sun and Sky. There is also a bit of spotlight there, because the shadows of the feet were blending into the ground too much. The composition logic is the same as in the previous one: positioning everything to the right side to give it a feeling of moving forward.
This one is pretty simple. Tiredness I imagine as seeing the first rays of sunlight after a whole night of work. I think that idea really comes through in this one.
Overall I enjoyed this exercise, so much that I did it twice! It gave me a deeper understanding of how lighting and camera angles can help exaggerate an emotion or mood, and it was just a good chance to play around with those things and see for myself how they change the setting of the scene. 

Tuesday, 1 December 2015

3D Potential and Limitations: Rendering

3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images with 3D photorealistic effects or non-photorealistic rendering on a computer.

 Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering, to more advanced techniques such as: scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photo-realistic rendering, or real-time rendering.
A render farm is a high performance computer system, e.g. a computer cluster, built to render computer-generated imagery (CGI), typically for film and television visual effects.
This is different from a render wall, which is a networked, tiled display used for real-time rendering. The rendering of images is a highly parallelizable activity, as frames and sometimes tiles can be calculated independently of the others, with the main communication between processors being the upload of the initial source material, such as models and textures, and the download of the finished images.
Over the decades, advances in computer power would allow an image to take less time to render. However, the increased computation is instead used to meet demands to achieve state-of-the-art image quality. While simple images can be produced rapidly, more realistic and complicated higher-resolution images can now be produced in more reasonable amounts of time. The time spent producing images can be limited by production time-lines and deadlines, and the desire to create high-quality work drives the need for increased computing power, rather than simply wanting the same images created faster.
To manage large farms, one must introduce a queue manager that automatically distributes processes to the many processors. Each "process" could be the rendering of one full image, a few images, or even a sub-section (or tile) of an image. The software is typically a client–server package that facilitates communication between the processors and the queue manager, although some queues have no central manager. Some common features of queue managers are: re-prioritization of the queue, management of software licenses, and algorithms to best optimize throughput based on various types of hardware in the farm. Software licensing handled by a queue manager might involve dynamic allocation of licenses to available CPUs or even cores within CPUs. A tongue-in-cheek job title for systems engineers who work primarily in the maintenance and monitoring of a render farm is a render wrangler to further the "farm" theme. This job title can be seen in film credits.
(Pixar’s renderfarm)
(Another photo of Pixar's renderfarm)
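The queue-manager idea described above boils down to a shared job queue with many workers pulling from it. Here's a minimal sketch in Python; the node names and the fake "render" are invented, and real farm managers are far more elaborate:

```python
# A minimal sketch of a render-farm queue manager: frames go into a
# shared queue and a pool of "render node" workers pull them off
# independently. Everything here is invented for illustration.
import queue
import threading
import time

frames = queue.Queue()
for frame_number in range(1, 25):       # one short shot's worth of frames
    frames.put(frame_number)

def render_node(name):
    while True:
        try:
            frame = frames.get_nowait()
        except queue.Empty:
            return                      # queue drained, node goes idle
        time.sleep(0.01)                # stand-in for the actual render
        print(f"{name} rendered frame {frame:04d}")
        frames.task_done()

nodes = [threading.Thread(target=render_node, args=(f"node{i}",))
         for i in range(4)]
for n in nodes:
    n.start()
frames.join()                           # block until every frame is done
```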


3D Potential and Limitations: 3D Animation

Computer animation, or CGI animation, is the process used for generating animated images by using computer graphics. The more general term computer-generated imagery encompasses both static scenes and dynamic images while computer animation only refers to moving images.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.
For 3D animations, all frames must be rendered after the modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low-bandwidth animations transmitted via the internet (e.g. Adobe Flash, X3D) often use software on the end-user's computer to render in real time, as an alternative to streaming or pre-loaded high-bandwidth animations.
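The keyframe-and-tween workflow described above is easy to see in Maya's Python API; here is a tiny sketch (the object and frame numbers are placeholders I made up):

```python
# A minimal sketch of keyframing and tweening in Maya's Python API;
# Maya itself computes every in-between frame from the two keys.
import maya.cmds as cmds

ball = cmds.polySphere(name="ball")[0]

# Key frame 1: ball at the origin.
cmds.setKeyframe(ball, attribute="translateX", time=1, value=0)
# Key frame 24: ball ten units to the right.
cmds.setKeyframe(ball, attribute="translateX", time=24, value=10)

# Frames 2-23 are "tweened" automatically; sample one to see:
print(cmds.getAttr(ball + ".translateX", time=12))
```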

3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.
3D models rigged for animation may contain thousands of control points — for example, "Woody" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and used his expressions to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.

Monday, 30 November 2015

3D Potential and Limitations: 3D printing

3D printing, also known as additive manufacturing (AM), refers to various processes used to synthesize a three-dimensional object. In 3D printing, successive layers of material are formed under computer control to create an object. These objects can be of almost any shape or geometry, and are produced from a 3D model or other electronic data source. A 3D printer is a type of industrial robot.

Futurologists such as Jeremy Rifkin believe that 3D printing signals the beginning of a third industrial revolution, succeeding the production line assembly that dominated manufacturing starting in the late 19th century. Using the power of the Internet, it may eventually be possible to send a blueprint of any product to any place in the world to be replicated by a 3D printer with "elemental inks" capable of being combined into any material substance of any desired form.

3D printing in the term's original sense refers to processes that sequentially deposit material onto a powder bed with inkjet printer heads. More recently, the meaning of the term has expanded to encompass a wider variety of techniques such as extrusion and sintering-based processes. Technical standards generally use the term additive manufacturing for this broader sense.

3D printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or with a plain digital camera and photogrammetry software. 3D printed models created with CAD result in fewer errors, and the design of the object can be verified and corrected before it is printed.
The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D scanning is a process of collecting digital data on the shape and appearance of a real object, creating a digital model based on it.

Who would have thought that modern manufacturing could be done without a factory? Since the Industrial Revolution, manufacturing has been synonymous with factories, machine tools, production lines and economies of scale. So it is startling to think about manufacturing without tooling, assembly lines or supply chains. However, that is what is emerging as the future of 3D printing services takes hold.

3D printing has been around for decades, better known as additive manufacturing (building an object layer by layer). What’s new is that 3D printing has reached consumer-friendly price points and footprints, new materials and techniques are making new things possible, and the Internet is tying it all together. Technology has developed to the point where we are rethinking industry. The next industrial revolution is opening up manufacturing to the whole world – where everyone can participate in the process.

(3D art)

This democratization idea will not be much different from the journey computers took: from a few big, centralized mainframes to something we now hold in our hands. At the same time, 3D printing, long used for rapid prototyping, is being applied in a number of industries today, including aerospace and defense, automotive and healthcare.
(3D printed organs)

 As accuracy has improved and the size of printed objects has increased, 3D printing services are being used to create such things as topographical models, lighter airplane parts, aerodynamic car bodies and custom prosthetic devices. In the future, it may be possible for the military to print replacement parts right on the battlefield instead of having to rely on limited spares and supply chains.

3D Potential and Limitations: Uncanny Valley

The uncanny valley is a hypothesis in the field of aesthetics which holds that when features look and move almost, but not exactly, like natural beings, it causes a response of revulsion among some observers. The "valley" refers to the dip in a graph of the comfort level of beings as subjects move toward a healthy, natural likeness described in a function of a subject's aesthetic acceptability. Examples can be found in the fields of robotics and 3D computer animation, among others.
A number of films that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic.  Examples include:

According to roboticist Dario Floreano, the animated baby in Pixar's groundbreaking 1988 short film Tin Toy provoked negative audience reactions, which first led the film industry to take the concept of the uncanny valley seriously.
(the baby from Pixar's Tin Toy, 1988)

The 2001 film Final Fantasy: The Spirits Within, the first photorealistic computer-animated feature film, provoked negative reactions from some viewers due to its near-realistic yet imperfect visual depictions of human characters. The Guardian critic Peter Bradshaw stated that while the film's animation is brilliant, the "solemnly realist human faces look shriekingly phoney precisely because they're almost there but not quite". Rolling Stone critic Peter Travers wrote of the film, "At first it's fun to watch the characters, but then you notice a coldness in the eyes, a mechanical quality in the movements".
(Characters from Final Fantasy: The Spirits Within, 2001)

Several reviewers of the 2004 animated film The Polar Express called its animation eerie. CNN.com reviewer Paul Clinton wrote, "Those human characters in the film come across as downright... well, creepy. So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying." The term "eerie" was used by reviewers Kurt Loder and Manohla Dargis, among others. Newsday reviewer John Anderson called the film's characters "creepy" and "dead-eyed", and wrote that "The Polar Express is a zombie train." Animation director Ward Jenkins wrote an online analysis describing how changes to the Polar Express characters' appearance, especially to their eyes and eyebrows, could have avoided what he considered a feeling of deadness in their faces.
(a screenshot from The Polar Express, 2004)

In a review of the 2007 animated film Beowulf, New York Times technology writer David Gallagher wrote that the film failed the uncanny valley test, stating that the film's villain, the monster Grendel, was "only slightly scarier" than the "closeups of our hero Beowulf’s face... allowing viewers to admire every hair in his 3-D digital stubble."
(Grendel from Beowulf, 2007)

Some reviewers of the 2009 film A Christmas Carol criticized its animation as being creepy. Joe Neumaier of the New York Daily News said of the film, "The motion-capture does no favors to co-stars (Gary) Oldman, Colin Firth and Robin Wright Penn, since, as in 'Polar Express,' the animated eyes never seem to focus. And for all the photorealism, when characters get wiggly-limbed and bouncy as in standard Disney cartoons, it's off-putting". Mary Elizabeth Williams of Salon.com wrote of the film, "In the center of the action is Jim Carrey -- or at least a dead-eyed, doll-like version of Carrey".

In the 2010 film The Last Airbender, the character Appa, the flying bison, has been called "uncanny". Geekosystem's Susana Polo found the character "really quite creepy", noting "that prey animals (like bison) have eyes on the sides of their heads, and so moving them to the front without changing the rest of the facial structure tips us right into the uncanny valley".
(Appa from the movie The Last Airbender, 2010)

The 2011 film Mars Needs Moms was widely criticized for being creepy and unnatural because of its style of animation. The film was the second biggest box office bomb in history, which may have been due in part to audience revulsion.(Mars Needs Moms was produced by Robert Zemeckis's production company, ImageMovers, which had previously produced The Polar Express, Beowulf, and A Christmas Carol.)

By contrast, at least one film, the 2011 The Adventures of Tintin: The Secret of the Unicorn, was praised by reviewers for avoiding the uncanny valley despite its animated characters' realism. Critic Dana Stevens wrote, "With the possible exception of the title character, the animated cast of Tintin narrowly escapes entrapment in the so-called 'uncanny valley.'" Wired Magazine editor Kevin Kelly wrote of the film, "we have passed beyond the uncanny valley into the plains of hyperreality".

Thursday, 26 November 2015

Character and Narrative: Animating

So I have finally started animating my bully! The first problem I encountered was when importing my character into the stage: when moving it forward or back, it kept shrinking. One of my peers suggested checking whether I had somehow locked some of the attributes or tied them together. Since I had a poor understanding of how to check that, I ended up going through all of them and unlocking them, and luckily enough it worked. Then I learned how to set up the lighting and camera, and how to navigate the workspace without getting lost in it, for instance by confusing perspective mode and camera mode.
(perspective mode)
(camera mode)
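For anyone hitting the same shrinking problem, here's a hedged sketch of that "go through everything and unlock it" fix in maya.cmds; 'bully_root' is a placeholder for whatever node misbehaves:

```python
# A small sketch of the unlock-everything fix described above;
# 'bully_root' is a placeholder node name.
import maya.cmds as cmds

node = "bully_root"
for attr in ("translate", "rotate", "scale"):
    for axis in "XYZ":
        plug = "{}.{}{}".format(node, attr, axis)
        if cmds.getAttr(plug, lock=True):   # query the lock state
            print("unlocking", plug)
            cmds.setAttr(plug, lock=False)
```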
With all that set I started animating the character. What seemed really tedious to me was making a key frame for every separate joint. For some reason I imagined that you could select the character, make a keyframe, move it a bit, make another keyframe, and Maya would figure everything else out. But apparently it doesn't work that way: you have to keyframe every joint separately. So to start off I did a simple scene where the bully is laughing at the nerd, so all the bully had to do was point a finger and move his shoulder a bit.
In the Render View I noticed that some of the characters were black, and I figured the issue was that the project was not set up properly; I had been given a workspace but not the whole project. That was not a difficult fix, so I set up the project and reattached all the textures to the characters.

I asked Callum to help me with the render so it was consistent with what he had done in some other scenes, so we went through the shadows and lighting and adjusted them a bit more. After starting the batch render I was frustrated to find that it was rendering the wrong camera angle. I had trouble figuring out why, but Callum noticed that there were keyframes set on the camera; after deleting those and setting up the camera again all was good, and I left it alone to render. Can't wait to draw some faces on that later!


Critique Session

So today we showed our animation test and asked for some feedback. Everyone seemed to like it, and our peers said it looks really good. One suggestion we got was to try putting in a phone ringtone so it would be easier to understand that the character received a message; however, we want to stick with the boopidy boop idea, where every time the characters get messages we voice them as *boopidy boop*, and as the messages get more aggressive, the tone of the messages changes as well. We asked if the 2D animation works with the 3D, and everyone agreed that it works really well. One thing was pointed out: the aspect ratio seemed to change in the video. After the critique I looked into it and it was my mistake, because after exporting the edited image sequence from Flash I did not check the ratio, so some of the composition was cut off. As I could not find the Flash file I was scared I would have to reanimate it, but I came up with an easy fix in After Effects: the part of the video that was missing was not moving, so I just put up the whole image as a background and fitted the cropped video on top of it. Now we are going to proceed, with Callum dealing with the animating while I take on the whole post production. There is a lot of work to be done, but we are trying to stay optimistic and hoping to get the animation done by the deadline.

Test Animation

This week we focused on making the short piece of animation look as finalised as possible for the crit session. Callum managed to render out a whole 6 seconds of animation and I took over with the post production. I also figured that the timing on the animatic needed a bit more work, so I suggested voicing it, to see if it could give us a better feel for the timing. We did some test recordings and decided to use the sounds we made for now, but I really want to re-record them later on. As for the faces, I put the rendered-out images into Flash and drew the faces on frame by frame. To be honest it felt really good to animate in 2D again, so I did not mind spending a bit more time on it, and it looked good too!
Callum was happy with the 2D faces, so we decided to stick with that way of animating them. Afterwards I put everything together in After Effects and tried to play around with sounds as well. Feeling really happy with the animation so far, can't wait to hear the feedback!!

Character and Narrative: background characters

Following the last critique session, Callum and I decided to use the paper cut-out idea for the background characters. At first I suggested we make them as 2D planes and put them up in Maya, but Callum insisted we do them in 3D because it would look better. I raised a concern that it might extend the rendering process, so we decided to try it out and see how it looked. Callum made the 3D mesh and I did the UV textures for them. When we put them into the scene they seemed to look really good, and I noticed that in one of the scenes, with my character laughing at the wimp, the background character would look really bad if it were in 2D, so we decided to stick with it. After some more research into facial expressions I decided to make every character different, not only in their faces but in the shades of their clothing and such. Both Callum and I were happy with the way the background characters looked-




And this is how they look on the mesh-



Character and Narrative: painting weights

So I have finally moved on to painting weights on my character. I thought it was going to be difficult, but apparently once again I was mistaken. Basically, by picking up the Paint Skin Weights Tool and clicking on the vertices, you tell Maya how you want your mesh to be affected by the joints and their position, so it was just a matter of selecting every joint and correcting the amount of influence it has on certain parts of the mesh. Take the fingers: since their joints are so close together, after binding, moving one finger moves some of the geometry on the other fingers, and it just looks untidy. The paint weights tool helps to clean that up.
After painting the weights on, I did some final housekeeping, neatening up the character, locking away some of the attributes and connecting everything to the master controller.
Finally my character is rigged and ready to bully everyone!

Character and Narrative: rigging ain't no game yo

So I moved on from modelling to making the skeleton of the model. The process was pretty straightforward for me, as we had already been through it with the demo character. However, my character's skeleton was a bit different, because he has a hunched back and no neck.
(skeleton of the bully character)
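Roughing out a joint chain like this is simple in maya.cmds too; here's a tiny sketch with invented names and positions, just to show the idea:

```python
# A tiny sketch of roughing out a joint chain in maya.cmds; positions
# and names here are invented for illustration.
import maya.cmds as cmds

cmds.select(clear=True)                  # so the first joint has no parent
root = cmds.joint(name="root_JNT", position=(0, 5, 0))
cmds.joint(name="spine_JNT", position=(0, 6, 0))
cmds.joint(name="head_JNT", position=(0, 7, 0.5))  # hunched: head sits forward
```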

So the skeleton part took no time at all, and I moved on to making the controllers. As I followed the video tutorials I had the controls in no time. However, orienting the joints and the controllers was a challenge for me, because I did not completely understand what I was doing and why. By this point I had started mindlessly following the videos without really understanding them.
And that's not good, because I had to redo all the orientations as they weren't done properly, mainly because at some point I did something I shouldn't have: freeze the transformations, so whenever I would orient something, the joint would fly off. But the second time around it was all good.


Then I moved on to setting up the hierarchy of the joints, which seemed not that difficult. To be fair, the hierarchy part was pretty straightforward: the hand is the child of the forearm, the forearm is the child of the elbow, and so on. And the father of them all is the root control!!
(controller hierarchy)
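In maya.cmds terms that parent chain looks something like this (every name is a placeholder from my rig, and the real hierarchy has more levels in between):

```python
# A tiny sketch of the control hierarchy described above; all names
# are placeholders.
import maya.cmds as cmds

# Child first, parent second: hand under forearm, forearm under elbow...
cmds.parent("hand_CTRL_grp", "forearm_CTRL")
cmds.parent("forearm_CTRL_grp", "elbow_CTRL")
cmds.parent("elbow_CTRL_grp", "shoulder_CTRL")

# ...and the root control is the father of them all:
cmds.parent("shoulder_CTRL_grp", "root_CTRL")
```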

And then came the dreaded part: connecting the mesh to the rig. When it came to that, I lost the SDK handles for the legs for some reason; the leg controls just did not work. I figured there was not much I could do until the next session, so I went on to the UV map. That was pretty easy; however, the hands posed a bit of a challenge to cut up, so I inserted a few edge loops to help.
(Character UV)
After I applied the UV texture the model just broke. It could not move, and the more I tried moving it, the more the UVs shifted out of place while the character itself stayed put. I had no clue what had gone wrong. I asked Matt and he pinpointed the mistake straight away: apparently you cannot add geometry after connecting the mesh to the rig, and that is what I had done when I was cutting up the UV maps. But thank the Maya gods, it was an easy fix; I just needed to detach the mesh from the rig and reconnect it again. After that everything worked fine, and even the legs started working again for some reason.
(Mesh connected to the rig)
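For the record, here's a rough sketch of that detach-and-rebind fix in maya.cmds; 'skinCluster1', 'bully_mesh' and 'root_JNT' are placeholders for my scene's actual nodes:

```python
# A rough sketch of the detach/re-bind fix described above; all node
# names are placeholders.
import maya.cmds as cmds

# Detach the mesh from its existing skinCluster:
cmds.skinCluster("skinCluster1", edit=True, unbind=True)

# ...now it's safe to edit geometry (e.g. insert edge loops for UV cuts)...

# Re-bind the cleaned-up mesh to the skeleton:
cmds.skinCluster("root_JNT", "bully_mesh", toSelectedBones=True)
```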

I felt really relieved that all my hard work on the rig did not go to waste! In high spirits I moved on to painting the weights.