
See lectures by course…

Friday, October 10, 2008

IMD 4003: Character Animation IV

For IMD 4003: Computer Animation, taught by Chris Joslin

1. Poses

Posing is how a character presents itself to the camera to clarify its actions and emotions.

1.1 Posing Body Parts

Posing Bodies

  • Balance: We are always trying to maintain balance - not tipping over. If the centre of balance is off, your character will look like they are about to tip over.
  • Anatomical Correctness: This means that you should keep the joints positioned and rotating within their natural limits, otherwise you will make people feel uncomfortable with the movement.

Posing the Hips and Torso

  • Hips support the weight of the upper body and distribute this through the legs to the ground. When we pick up an object, the object's weight needs to be balanced by our body.
  • Most motions start with a change in the character's hips because most motions begin with a change in balance.

Posing the Legs and Feet

  • Feet: Weight is mostly on the heel of the foot and the ball is used to fine-tune the weight distribution.
  • Knees: These tend to follow the direction in which the foot is pointing.

Posing the Hands and Arms

  • Index Finger: This is the dominant finger, with decreasing dominance as you move towards the pinky.
  • Grasping and Manipulating: The least dominant fingers will curl first when you go to close your hand around something. If you are picking up a heavy object, you would use your entire hand. Delicate objects may only require a few fingers.

1.2 Line of Action

The line of action is useful for making poses seem fluid. The line is drawn from the tips of the toes to the tips of the fingers.

2. Animating with Poses

2.1 Two Types of Animation

1. Straight-Ahead Animation

  • This is a linear progression through the frames starting from the beginning and working to the end.
  • Good for spontaneous and complex motion such as athletic actions.
  • Bad for getting well-defined and solid poses.

2. Pose-to-Pose Animation

  • 1st: Plan the shot by setting out the main poses of the character. 2nd: Break down every action into a series of poses. 3rd: Create the 'tweens.
  • Good for acting and dialogue because each pose can be fitted to the major points in the dialogue track.

2.2 Key Frames

These are typically used to define the position or pose of the character but can also be used to define the colour, shape or transparency.

Motion Graphs

These use curves to indicate exactly how much to move an object in between a set of key frames.
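The idea behind a motion curve can be sketched in a few lines of Python. This uses simple linear interpolation between keyframes as a stand-in; real motion graphs use spline curves (Bezier/Hermite), and the `evaluate` helper is purely illustrative:

```python
from bisect import bisect_right

def evaluate(keyframes, t):
    """Interpolate a value from a sorted list of (time, value) keyframes.

    Linear interpolation stands in for the spline curves a real
    motion graph editor would use.
    """
    times = [k[0] for k in keyframes]
    if t <= times[0]:
        return keyframes[0][1]
    if t >= times[-1]:
        return keyframes[-1][1]
    i = bisect_right(times, t) - 1
    t0, v0 = keyframes[i]
    t1, v1 = keyframes[i + 1]
    u = (t - t0) / (t1 - t0)      # 0..1 between the two keys
    return v0 + u * (v1 - v0)

# Key the x-position of an object at frames 0, 12 and 24:
curve = [(0, 0.0), (12, 10.0), (24, 10.0)]
print(evaluate(curve, 6))   # halfway between the first two keys -> 5.0
```

Changing the shape of the curve between two keys changes how the object accelerates without touching the keys themselves.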

Dope Sheets

These display the key frames as blocks without the curves. It's good because you can change the timing without changing the curves. Also, it allows you to select the key frames for a specific group of objects without manually selecting each part. For example, if you want to move the entire arm, you would just select the 'arm' block and it would move the 'hand' block as well.

Trajectories

Trajectories for objects can help plan out shots.

Ghosting

Ghosting permits you to see multiple exposures of an object. This is good for visualizing objects in motion, especially complex objects.

IMD 4003: Character Animation III

For IMD 4003: Computer Animation, taught by Chris Joslin

1. Facial Rigging

1.1 Morphing

Morphing means to permit one object to assume the shape of another. For example, you can morph a sphere into a face. Because it is so complex, you would create facial expressions individually. You can let the morphing tool smoothly transition between poses. We would use Multi-Target Morphing in order to mix and blend multiple shapes with morph targets.

Morph Targets

Ideally, we would create a morph target for each muscle on the face but this isn't always necessary. One slider in the morphing tool would control one muscle. It's important to have smooth transitions and symmetry (where necessary).
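Multi-target morphing is just a weighted sum of offsets from the base shape. Here is a minimal sketch on a toy one-dimensional "mesh" of vertex positions (the function and target names are made up for illustration; real tools blend full 3D vertex arrays the same way):

```python
def blend_shapes(base, targets, weights):
    """Return base + sum(w * (target - base)) per vertex.

    Each slider weight blends in the offset of one morph target
    relative to the base shape.
    """
    result = list(base)
    for name, w in weights.items():
        target = targets[name]
        for i in range(len(base)):
            result[i] += w * (target[i] - base[i])
    return result

base = [0.0, 0.0, 0.0]
targets = {
    "smile": [0.0, 1.0, 0.0],   # pulls the middle vertex up
    "frown": [0.0, -1.0, 0.0],  # pulls the middle vertex down
}
print(blend_shapes(base, targets, {"smile": 0.5}))  # [0.0, 0.5, 0.0]
```

Because each target contributes an offset, sliders mix naturally: a half "smile" plus a half "frown" cancel out, which is exactly why symmetry between targets matters.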

Target Poses: These are the 'extremes' of the poses - aim for these.

1.2 Basic Facial Poses

Basic poses are fundamental and based on the major muscles of the face.

  1. Lower Face:
    • Smiles - we use the zygomatic major (the corners of the mouth are pulled up into the cheeks);
    • Frown - pulls the corners of the mouth down;
    • Fear - pulls the corners of the mouth out;
  2. Upper Face:
    • Anger - furrowed brow - the inner brow is pulled down;
    • Worry - pulls the outer brows down;
    • Eyes use blinks and winks, squinting, and wide eyes to convey different meanings.

2. Facial Dialogue and Animation

We need to know exactly which parts of the dialogue to emphasize. Rhythm and timing are important.

2.1 Six Main Emotions (SHAFDS)

  1. Sadness: brow raised in the middle, and corners of the mouth down;
  2. Happiness: mouth pulled up, exposed teeth, and cheeks pulled up;
  3. Anger: brows down in the middle, teeth bared;
  4. Fear: raised brows, wide eyes, mouth open wide, and jaw dropped;
  5. Disgust: centre brow slightly lowered, narrow eyes, and side of mouth pulled up towards the side of the nose (in a sneer);
  6. Surprise: raised brows, wide eyes, and slack jaw (mouth slightly open).

2.2 Head, Eye and Lip Sync Animation

Head Animation

Use for emphasizing facial expressions. For example, cocking your head conveys interest or curiosity. People tend to bob their head while they speak to emphasize certain points.

Eye Animation

  • Direction and Eye Contact: the direction in which the eyes are looking insinuates that the character is focused on that object or person. Avoid staring but maintain eye contact to show interest. Breaking contact suddenly can imply evasiveness or embarrassment.
  • Blinking and Turning: people tend to blink in the direction they are turning their head. The eyes tend to lead the movement.
  • Thinking and Eye Direction: From your perspective, if a person looks to the left, they are constructing information. If they are looking to the right, they are remembering it. (This is when the person doing the looking is right-handed. If they are left-handed, reverse all of this.)

    Looking up suggests you are thinking about an image. So keeping in mind what I just mentioned about remembering and constructing information, if a person looks to your left, they are constructing an image. If they look to the right, they're remembering. (Think of a green kangaroo with a red top hat. Which way did you look?)

    Looking to your left or right suggests you are thinking of a sound.

    A person looking down to your left suggests they are remembering an externally expressed emotion, taste or smell. To the right suggests they are talking to themselves in their head.

Lip Sync Animation (just like Britney Spears!)

Lip Sync Animation is when you move the lips to match an audio track. Dialogue is normally recorded before the characters are drawn. "Reading the Track" is when you're breaking down the dialogue track frame-by-frame into individual phonemes (we did this in Flash in 2nd year - see my Assignment)

2.3 Eight Basic Mouth Positions

The most important thing to remember about these positions is that:

  • Vowels: the mouth will open quickly but close slowly.
  • Consonants: these are short in length and break up vowels.

In the list below, the highlighted letters make the mouth motion.

  1. Position 1: closed mouth - mmmm, bubbles, pedantic;
  2. Position 2: teeth closed - deny, grand, then, crack;
  3. Position 3: open jaw - aero, i like bananas;
  4. Position 4: mouth open slightly but wide - eeeeeeeeeeeevil;
  5. Position 5: open jaw - oooooooooooooooooh;
  6. Position 6: open jaw with pursed lips - food;
  7. Position 7: open jaw with tongue moving up against the top teeth - love;
  8. Position 8: bottom lip tucked under top teeth - fart;

Visemes: similar to phonemes but these are generic facial images used to describe a particular sound.

People that can read lips can read visemes.

IMD 4003: Character Animation II

For IMD 4003: Computer Animation, taught by Chris Joslin

1. Rigging Bones

1.1 What is “Rigging”?

Rigging is when you add a skeleton to your model to control it (rather than moving the vertices manually). Then you would use FK and/or IK to control the movement.

1.2 Skeletons

There are 3 steps to define your skeleton:

  1. Hierarchies: These are useful because they do a lot of the work for us. We don't have to replicate real-life skeletons in order to get the right movement - just enough to make it look approximate. We often need fewer bones to duplicate the movement but sometimes we may need more.

    The "Root" is almost always the hip or the pelvis for 3 reasons:
    1. it is close to the centre of gravity;
    2. it's the centre point connecting the upper and lower body; and
    3. it's the central point of almost all motions.
  2. Bones and Joints: Rigid bones are connected by joints. The bones act as a guide for the mesh (skin) deformation. [Suggestion: it's easier to create the mesh first and add the bones later.]
  3. Bone Naming: Name your bones like you would (should!) name your Photoshop layers. It's useful for organizing complex set of bone structures. Some animation programs require that the bones are labelled correctly so that they are linked correctly.

1.3 Skinning/Binding

Once we have correctly positioned the bones within the mesh, we can determine the position of the mesh from the bone positions. However, because each point keeps a fixed distance relationship to its bone at all times, rotating a joint too far will deform the skin into other skin parts, producing awful-looking kinks. We can fix this with weighting.

1.4 Weighting

We assign an influence between 1 and 0 (1 being the greatest influence, and 0 being no influence) to vertices. This influence is based on the movement of the bones. Generally, we are dealing with the influence of 2 bones but there are cases where more bones are involved (the shoulder, for example).

3 Methods of Deformation (FAP):

  1. FFD Animation: We create an FFD lattice around the skeleton. The skeleton controls this FFD and the FFD controls the vertices of the skin.
  2. Associated Point Animation: We associate mesh points with a specific bone or link. The vertex points move parallel to the bone. This is useful because the system can calculate the rotation quickly - each point is only being rotated around one axis.

    The problem with this is that the points will cluster on the inner part of the rotation because there's no association between joints and points. To solve this, we can break up the skin into separate pieces and use spherical joints to hide the issue, but this isn't really skinning. Point-Weighted Animation is much better.
  3. Point-Weighted Animation: Each point is affected by more than one link and the influence on each point is based on a weighted average.
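Point-weighted ("linear blend") skinning can be sketched in 2D: each vertex lands at the weighted average of where every influencing bone's transform would place it. The bone representation here (a pivot point plus a rotation angle) is a simplifying assumption for illustration:

```python
import math

def rotate_about(point, pivot, angle):
    """Rotate a 2D point around a pivot by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + c * x - s * y, pivot[1] + s * x + c * y)

def skin_vertex(vertex, bones, weights):
    """Blend the positions each bone's transform produces for a vertex."""
    x = y = 0.0
    for (pivot, angle), w in zip(bones, weights):
        px, py = rotate_about(vertex, pivot, angle)
        x += w * px
        y += w * py
    return (x, y)

# A vertex near an elbow: 70% upper-arm bone (not moving), 30%
# forearm bone bending 90 degrees around the elbow at (1, 0).
bones = [((0.0, 0.0), 0.0), ((1.0, 0.0), math.pi / 2)]
v = skin_vertex((1.5, 0.0), bones, [0.7, 0.3])
print(v)  # roughly (1.35, 0.15): pulled partway along with the forearm
```

Because the weights sum to 1, the vertex follows a compromise between the two bones rather than snapping to either, which is what smooths out the kinks described above.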

3 Ways to Apply Weights (PEN)

  1. Painted Weights: We use a 3D painting tool to apply weights. This is useful for control in tricky areas because you can very closely fit the shape of the character.
  2. Envelopes: This is the most common method. We use a capsule shape around the skin that has influence over the underlying bones. You can easily shape and resize it.
    Envelopes have “Fall-Offs”, where the strength of the capsule is reduced over a specific distance. This is good for a smooth transition between bones.
  3. Numerical Assignment: We assign one weight value per vertex. This is very tedious but provides a lot of control that envelopes and painted weights can't offer.
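An envelope fall-off can be sketched as a function of a vertex's distance from the capsule: full influence inside an inner radius, none beyond an outer radius, with a smooth blend between. The smoothstep curve used here is one common choice, an assumption rather than any particular package's formula:

```python
def envelope_weight(distance, inner, outer):
    """Weight of an envelope on a vertex at the given distance.

    1.0 inside the inner radius, 0.0 outside the outer radius,
    and a smoothstep fall-off in between.
    """
    if distance <= inner:
        return 1.0
    if distance >= outer:
        return 0.0
    t = (distance - inner) / (outer - inner)
    return 1.0 - (3 * t * t - 2 * t * t * t)   # smoothstep fall-off

print(envelope_weight(0.5, 1.0, 2.0))  # inside the capsule -> 1.0
print(envelope_weight(1.5, 1.0, 2.0))  # halfway through the fall-off -> 0.5
print(envelope_weight(2.5, 1.0, 2.0))  # outside -> 0.0
```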

2. Muscles

Muscle bulge is simulated with muscle systems. You would first model the bones, muscle and skin tissue as deformable bodies, then use physical simulation to calculate the motion.

2.1 Simplified Anatomical Models

Muscles are attached to the bones. As muscles contract (get shorter), they become wider; as they extend (get longer), they become narrower.

If you don't understand, extend your arms out to your sides. Bend your elbows (on the y or z axis) and feel your muscles contract. They get shorter as you bend your elbows.

Muscles can be built using primitive NURB Surfaces or polygon shapes combined with an FFD. The skin would be attached to the muscles with springs or dampers and the muscle deformation would be simulated by the collision between the bone and the muscles.

2.2 Detailed Anatomical Models

Detailed simulations can be performed that accurately depict bone and muscle geometry but the setup is extensive.

IMD 4003: Character Animation I

For IMD 4003: Computer Animation, taught by Chris Joslin

1. Character Design

  • Artistic Considerations: When designing characters, we define the physical characteristics to match the personality. Physical characteristics include: size, proportion, shape, colour, texture and clothing. Poses and movement help define the individuality of the character but that is discussed in another lecture. After the physical characteristics are defined, we create sketches.
  • Technical Considerations: While creating the sketches, you must consider time and budget. For example, if your character has long flowing hair, you might be better off to change it to short, not flowing hair in order to save time and money - any modeling and animation with hair and cloth can be complicated and expensive.

1.1 Two Types of Character Styles

  1. Realistic Characters: These types of characters are used in live action films. They are significantly harder to animate because they must look realistic.
  2. Stylized Characters: These characters don't necessarily need to be true to real-life forms - as long as the general idea is expressed. One important thing is that you must define the rules and constraints of the character (in what they can do to interact with their environment) and you must stick to these rules!

1.2 Creating Stylized Characters

  • Body and Head: Stylized characters have a disproportionate head:body ratio. The head tends to be large in comparison with the body.
  • Face: The face should be large enough to clearly see the emotions expressed.
  • Eyes: Eyes convey a lot of emotion. Their size is important - large eyes tend to convey youthfulness and attractiveness (Psychology insert: women with large eyes tend to be perceived as open and beautiful) while small eyes tend to convey evilness (Psychology insert: small eyes tend to make people seem less open and less attractive). Eyelids help convey emotions as well - mid-way closed eyelids can convey tiredness. The shape of the eyes makes a difference as well.
    Finally, giving non-human forms human-like eyes can help give the character more human characteristics - it is easier to convey emotions and the audience will feel more of a connection to the character.
  • Eyebrows: These convey a variety of emotions (think of your own emotions for examples).
  • Mouths: Mouths are used for speech and emotion. The size of the mouth can convey different personality types: small mouths can mean the character is quiet (less speech) and less expressive; large mouths can mean the character is loud and very expressive.
  • Hands: Because hands are naturally complicated, it is useful to simplify their shape and their movements when possible. Animating the general movement of the hands can be enough to convey the correct meaning (Ex. Small hand movements over a shoe would mean that the character is tying their shoes.)
  • Body Segments:
    • Single Mesh - a single mesh covers the entire body. We need to specify how all the body parts will deform during animation.
    • Segmented Parts - you can segment the body parts at the joints (or where appropriate). This means you worry less about deformation but you need good techniques to hide the seams between segments.

2. Modeling Characters

There are two types of characters: Organic and Non-Organic. Organic characters tend to be smooth (Ex. humans and animals) whereas Non-organic characters can be smooth or hard (Ex. robots and frying pans). We'll look at the surface and skeletal considerations for both types of characters.

2.1 Surface Considerations

NURBs vs. Polygons vs. Subdivision Surfaces

NURBs are slightly more difficult to use because we use NURB patches to control the mesh. The advantage is that achieving a smooth form is very easy. NURBs are often used for faces but we may want to revert to polygons in order to get the proper jaw movement.

Polygons are easy to use because we have great control over the vertices. We tend to use these for solid objects that don't necessarily need an extremely smooth surface.

Subdivision Surfaces use a polygonal mesh but have a NURBs-like smoothing technique for a higher resolution appearance. These are very useful because we can attach a rig to the low-polygonal model to control the high resolution mesh without applying all of the weighting.

2.2 Skeletal Considerations

  • Knees and Elbows: These are the easiest to rig and animate because they are hinge joints (rotate on one-axis). The skin motion is easy to understand: as the angle becomes smaller, the inside of the joint compresses and the outside expands.
  • Wrists: More complex than knees and elbows - they can be rotated on 2 axes.
  • Hips: Hips provide the legs with 2 axes of rotation (forward/backward and side).
  • Shoulders: Shoulders are very complex because they do not rotate around a pivot - they rotate based on different bone combinations. The key issue is that we need to know how the mesh will compress with different movements. To solve this, we can use different models for different movements.
  • Spine: The spine rotates on 3 axes but we deal mostly with the motion of leaning forward and back. We must have enough detail in the mesh around the spine in order for it to bend smoothly.
  • Faces: It is composed of 2 major bones: the skull and the jaw. The jaw movement affects only the bottom part of the face. There are many muscles on the face controlling different areas. I will not list them because this isn't a biology class.

Tuesday, October 07, 2008

IMD 4003: Motion Capture

For IMD 4003: Computer Animation, taught by Chris Joslin

1. The System

1.1 Camera

  • Strobe: used to illuminate the markers with a "visible" wavelength of 623nm.
  • On-Board Processor: each camera has a processor that collects information from the sensor, processes it, sends the information to the camera network switch and finally to Tarsus. This information is compressed as 2D data containing only greyscale reflections of the data.
  • Filter: each camera has a filter attached to it between the lens and the image sensor to filter everything but 623nm wavelengths.

1.2 Lens

The lens has a Focal Length of 12.5mm, a Horizontal (↔) FoV of 54.22° and a Vertical FoV (↑↓) of 42.0°.

  • The area of light projected onto the image sensor is smaller than the area of the complete image sensor. This is why we don't have full frame projection.
  • Depth of Field: the nearest an object can be without being blurry is 0.9m. The farthest it can be is over 10m.

1.3 The Room

The room must be large enough because we can only use the small subset of space where all of the cameras' views converge. All of the reflective surfaces (retro-reflective, reflective and active light sources) must be either removed or covered.

1.4 The Suit

The purpose of the suit is to provide a connection between the markers and the person we're tracking. It keeps the markers on with Velcro so that we don't have to use glue or tape (though they are still sometimes used). It also helps reduce the reflection from the skin.

1.5 Markers

Markers are approximately spherical and have retro-reflective tape on them. There are 4 types of markers:

  1. 18mm: these are flexible and used for bodies.
  2. 14mm: these are solid and used for objects (especially those expecting impact or contact with other objects in the scene).
  3. 4mm and
  4. 2mm: these are used for facial and finger capturing (they offer more precision).

The 18mm markers (the ones we use in class) appear as 7-8px at the centre of the floor. They appear as 3px (the smallest at which you still get a complete circle) at approximately 9m.

2. Vicon IQ

2.1 File Types

These are some common file types you will come across:

  • .x2d: contains raw 2D image data compiled from all of the cameras.
  • .vtt: (Vicon Threshold Template) contains the masking data for each camera.
  • .cp: (Camera Parameters) contains information about where cameras are in the 3D space, the camera distortion, etc.
  • .trial: contains reconstructed data in 2D for all marker positions.
  • .enf: contains information about the directory and the trial files within the directory.
  • .v: contains the animation data connected to a skeleton.
  • .vsk: contains the skeleton for the subject being captured.

2.2 Calibration

The purpose of calibration is to:

  • link the cameras to the 3D environment;
  • link the cameras to each other in this environment in order to calculate the triangulation of the markers; and
  • determine the distortion of the lenses.

Distortion: the further the marker is from the centre of the image, the more it is distorted. By using a "Distortion Map," the software can correct the distortion as much as possible.

Wand: the wand is set up with 3 markers which the cameras know about (so the ratio between the markers is more important than their actual physical distance). By recording the movement, the software can set the position in the 3D space. This is useful for calculating the Distortion Map too. We need about 6000 (±10%) frames with the 240mm wand.

3. Step-By-Step Process

There are five major steps:

  1. Data Management
    • Create the Project
    • Create the Day
    • Create the Session
  2. Setup
    • Open Vicon IQ
    • Connect to Tarsus
    • Set the Strobe to 100
    • Set the Threshold to 10-20
    • Set the Gain to 1
    • Set the Circle Quality to 30-40
  3. Calibration
    • Wand Wave with 240mm wand
    • Set the Origin with the L-Frame
  4. ROM
    • Put on the suit and the markers
    • Capture the ROM
    • Open the .vst Template
    • Reconstruct the markers
    • Label the subject
    • Process the trajectories
    • Calibrate the skeleton
  5. Subject Capture
    • Reconstruct
    • Fill the gaps
    • Kinematic fit
    • Export the subject

IMD 4003: Rigid Bodies II

For IMD 4003: Computer Animation, taught by Chris Joslin

1. Points and Objects

Objects are composed of points (vertices) connected by lines (edges). If you connect vertices with edges in a somewhat organized way, you can create recognizable objects in the World Coordinate System.

2. Translation and Rotation

The value of the vertices (x, y, z) doesn't matter as long as the vertices relate to each other correctly. That is, the distance between the vertices is what matters.

Example: Pretend you have a vertex at (0,0,0) and another at (1,0,0), connected by an edge. If you move them by 1 positive unit on the x-axis, you get (1,0,0) and (2,0,0). The distance between the 2 vertices is the same - they aren't deformed when you translate them.

For this reason, the translation of an object is easy.
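The translation example above, written out as code: moving every vertex by the same offset leaves every edge length (and so the shape) unchanged.

```python
import math

def translate(vertices, dx, dy, dz):
    """Shift every vertex by the same (dx, dy, dz) offset."""
    return [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]

# Two vertices connected by an edge, moved 1 unit along the x-axis:
edge = [(0, 0, 0), (1, 0, 0)]
moved = translate(edge, 1, 0, 0)

print(moved)                                    # [(1, 0, 0), (2, 0, 0)]
print(math.dist(*edge), math.dist(*moved))      # 1.0 1.0 - no deformation
```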

Rotation: if we use xform or setAttr, we are moving the local coordinate system - we are rotating the objects pivot point.

Pivot Points: the point around which the object will rotate.
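Rotating around a pivot point can be sketched as: translate so the pivot sits at the origin, rotate, then translate back. This is a generic illustration (2D, about the z-axis for brevity), not any particular package's implementation:

```python
import math

def rotate_around_pivot(point, pivot, angle):
    """Rotate a 2D point around a pivot by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point[0] - pivot[0], point[1] - pivot[1]   # move pivot to origin
    return (pivot[0] + c * x - s * y,                 # rotate, move back
            pivot[1] + s * x + c * y)

# Rotate (2, 0) by 90 degrees around a pivot at (1, 0):
p = rotate_around_pivot((2.0, 0.0), (1.0, 0.0), math.pi / 2)
print(p)  # approximately (1.0, 1.0)
```

Moving the pivot changes the result entirely, which is why tools like xform operate on the object's pivot rather than its vertices.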

3. Inverse Kinematics

Because I found inverse kinematics boring and hard to absorb, I never finished these notes (sorry!).

IMD 4003: Rigid Body I

For IMD 4003: Computer Animation, taught by Chris Joslin

1. Rigid Bodies

What are Rigid Bodies?

Rigid bodies are used to describe something that does not change shape (including its scale) when it's animated. Animating a rigid body means animating the manipulation of its joints. Hierarchies are important here in order to control the animation of a complex system.

Skeletons

Skeletons are control mechanisms for characters or objects. They are useful because they are fast and intuitive (Ex. If you move your arm, you expect your hand to move as well.). We can allow the hierarchy to control and manipulate details that would otherwise be too time-consuming to work out (Ex. You wouldn't want to manually move the hand and the fingers and the forearm when you rotate your shoulder.)

We can and do use soft body systems (these have no skeletons) but they don't necessarily follow a defined set of rules.

2. Scene Graphs

What are Scene Graphs?

Scene graphs handle hierarchies containing information about the position of the objects in space and other information specific to that hierarchy. They consist of links (the connections between joints, which can have any length, including zero) and joints (see the 5 Types of Joints in Articulated Models for the different types).

Parent-Child Relationships

The relationship between nodes is expressed from the top to the bottom of the hierarchy.

ROOT → PARENT → CHILD

Nodes are created incrementally. As long as your computer can handle it, you can have an infinite number of nodes.

Node Types

In general, nodes are represented as class instances which means there are data and functions associated with each node type.

Example: The Transform Node translates the attached shape in the local coordinate system.
There are many other types of nodes such as Group, Shape, Lights, Camera, Action, etc.
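The parent-child idea can be sketched as a tiny node class (the class name and translation-only transform are simplifying assumptions; real scene graphs accumulate full transform matrices): a node's world position is found by walking from the ROOT down through its parents, accumulating local offsets.

```python
class TransformNode:
    """A minimal scene-graph node holding a local translation offset."""

    def __init__(self, name, local=(0.0, 0.0, 0.0), parent=None):
        self.name, self.local, self.parent = name, local, parent

    def world_position(self):
        """Accumulate offsets from the root down to this node."""
        x, y, z = self.local
        if self.parent is not None:
            px, py, pz = self.parent.world_position()
            x, y, z = x + px, y + py, z + pz
        return (x, y, z)

root = TransformNode("pelvis")
arm = TransformNode("arm", local=(0.0, 1.5, 0.0), parent=root)
hand = TransformNode("hand", local=(0.8, 0.0, 0.0), parent=arm)
print(hand.world_position())  # (0.8, 1.5, 0.0)
```

Moving the root moves everything below it for free, which is exactly why hierarchies "do a lot of the work for us".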

Scene Graph Manager

This handles the rendering and handling of each local coordinate system. It is a set of hierarchical nodes defined either through direct declaration or imported via a scene-graph based file.

3. Articulated Models

What are Articulated Models?

Articulated Models are rigid body objects connected by joints. Each element can be manipulated by the joint angle between each joint. Animation is achieved by a change in these angles over a discrete time step. Δt needs to be small enough in order to avoid jerky motions.

5 Types of Joints (BHPPB)

  1. Ball and Socket: allows free rotation in all axes (x, y, z). (Ex. Hip joint)
  2. Hinge / Revolute: allows free rotation around one axis (x, y or z). (Ex. Door hinge)
  3. Plane / Gliding: permits a slight translation along the axis of the connecting link. (Ex. Hand joint)
  4. Prismatic: allows translation along a single axis (absolutely no rotation).
  5. Bearing / Gimbal: Bearings allow a rotation around an axis (Ex. Rollerblade wheels have ball bearings that rotate around an axis). Gimbals allow rotation around all axes in turn.

Degrees of Freedom: This describes the amount of articulation that is possible within a rigid body hierarchy system. It is measured based on the sum of the number of axis rotations per joint.

Example: A computer mouse has 2 degrees of freedom.

There is a maximum of 3 degrees of rotation (x, y and z).

Joint Limitations: These restrict the motion to provide a natural appearance - generally, we don't want the body of one link to end up in another.

Example: Think of your finger. It can only rotate so many degrees on its axes. If you could rotate your finger back into your hand, that would be awkward and unrealistic.

4. Forward Kinematics

What is "Forward Kinematics"?

Forward kinematics describes the direction of motion that is applied to jointed objects. You start at the ROOT and travel down the kinematics tree, propagating the transformations as you go.

Why do we use it? (Practical Application)

  • Animating hierarchical joint models using rotations;
  • Updating link positions according to rotations;
  • Uses vector-matrix multiplication;
  • The transform matrix is composed of all the joint transforms between sensor/effector and the root.

The Mathematics of Forward Kinematics

Forward kinematics is used when you know the joint angles and you want to find the end-effector position:

$(p,q) = F (\Theta_{i})$ where,

  • $y = \sin(\Theta)$ and
  • $x = \cos (\Theta)$

2D Coordinate System

Let's take a look at Forward Kinematics in a 2D, left-handed system:

where,

  • $x_{e} = l_{1} \cos\Theta_{1} + l_{2} \cos(\Theta_{1}+\Theta_{2})$
  • $y_{e} = l_{1} \sin\Theta_{1} + l_{2} \sin(\Theta_{1}+\Theta_{2})$

In the local coordinate view, we calculate the translation from Left to Right (in the World Coordinate System, we read it Right to Left):

$T = (\text{rot}\Theta_{1})(\text{transl}_{1})(\text{rot}\Theta_{2})(\text{transl}_{2})$, where

  • $\text{rot}\Theta_{1}= \begin{bmatrix} \cos\Theta_{1} & -\sin\Theta_{1} & 0 \\ \sin\Theta_{1} & \cos\Theta_{1} & 0 \\ 0 & 0 & 1 \end{bmatrix}$
  • $\text{transl}_{1}= \begin{bmatrix} 1 & 0 & l_{1} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
  • $\text{rot}\Theta_{2}= \begin{bmatrix} \cos\Theta_{2} & -\sin\Theta_{2} & 0 \\ \sin\Theta_{2} & \cos\Theta_{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$
  • $\text{transl}_{2}= \begin{bmatrix} 1 & 0 & l_{2} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

giving us,

  • $T = \small\begin{bmatrix} \cos\Theta_{1} & -\sin\Theta_{1} & 0 \\ \sin\Theta_{1} & \cos\Theta_{1} & 0 \\ 0 & 0 & 1 \end{bmatrix} \small\begin{bmatrix} 1 & 0 & l_{1} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \small\begin{bmatrix} \cos\Theta_{2} & -\sin\Theta_{2} & 0 \\ \sin\Theta_{2} & \cos\Theta_{2} & 0 \\ 0 & 0 & 1 \end{bmatrix} \small\begin{bmatrix} 1 & 0 & l_{2} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
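As a sanity check, the matrix form can be multiplied out in code and compared against the closed-form end-effector position: applying $T$ to the local origin should land on $(x_e, y_e)$. A quick sketch with plain 3x3 lists:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def transl(l):
    return [[1, 0, l], [0, 1, 0], [0, 0, 1]]

t1, t2, l1, l2 = math.radians(30), math.radians(45), 2.0, 1.5

# T = rot(theta1) * transl(l1) * rot(theta2) * transl(l2)
T = mat_mul(mat_mul(mat_mul(rot(t1), transl(l1)), rot(t2)), transl(l2))
xe, ye = T[0][2], T[1][2]   # T applied to the local origin (0, 0, 1)

# Closed-form end-effector position:
xf = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
yf = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
print(abs(xe - xf) < 1e-9 and abs(ye - yf) < 1e-9)  # True
```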

3D Coordinate System

We can calculate translations and rotations in a 3D coordinate system by using 4x4 matrices, rotating about an arbitrary axis. Each segment has its own coordinate system where,

$T = J_{0} (L_{1}J_{1} L_{2}J_{2} L_{3}J_{3}\text{…})$ where,

  • $J_{0}$ = Position and orientation of the root segment
  • $L_{1}$ = First link transformation
  • $J_{1}$ = First joint transformation

Unfortunately, I had trouble understanding the last 2 slides so there aren't any notes on them.

IMD 4003: Introduction to Animation

For IMD 4003: Computer Animation, taught by Chris Joslin

1. A History of Animation

1.1 What is Animation?

Animation is when an animator determines how an object moves through space and time. There are two types: Motion Pictures and Simulation.

1.2 Early Animation

Examples of early animation are Thaumatrope, Flipbook, Stop Motion and Cel Animation.

Frames and Keyframes

Frames came first - each single image represents a frame. Keyframes are the 'important' (or key) frames of an animation. The frames in between these key frames are called 'tweens.

7 Step Sequence of Conventional Animation

  1. Story is written;
  2. Storyboard is laid out;
  3. Detailed layout - soundtrack;
  4. Keyframes are drawn;
  5. 'Tweens are drawn;
  6. Pencil test - trial film is made;
  7. Cels are prepared and coloured.

1.3 Two Categories of Conventional Animation

  1. Computer-Assisted Animation: 2D and 2.5D systems computerize traditional hand-drawn animation poses (uses interpolation).
  2. Computer-Generated (CG) Animation: covers all changes that have a visual effect (such as motion dynamics, update dynamics, lighting, camera position, focus, etc.). This is very good for flexibility.
    CG is used in the entertainment industry, fine art, education, scientific visualization, et al.

1.4 Two Categories of CG Motion Specification

  1. Low Level Techniques:
    • Consists more of techniques (Ex. 'tweening);
    • Animator should have a very good idea of the motion they would like to achieve;
    • Requires more input from the user.
  2. High Level Techniques:
    • Algorithms are used to generate a motion using a set of rules or constraints (Ex. physically-based motion);
    • Requires less input from the user but is computationally intensive.

Motion Control Method: 1st, models begin as geometric shapes; 2nd, use physically-based models for realism; and 3rd, behavioural models are used to give objects or characters individuality.

1.5 Animation Pipeline (AAR)

  1. Appearance: model synthetic characters and environment, where synthetic characters are life-like forms in the physical or graphical world (Ex. robots and CG characters). This can either be an imitation of real-life forms or imaginary forms (which is more visually pleasing).
  2. Action: animate physical movement, environment elements (such as camera, lighting, etc) and personality/communication. Physical movement can be Physically Feasible (life-like), Perceptually Plausible (exaggerated) or Unrealistic (extreme exaggeration).
  3. Rendering: includes shading and lighting, real-time vs. not, photo-realistic vs. not.

Why Use Object-Oriented Programming? It can be very expressive for us creative artists and powerful enough to use without worrying about how to specify the details we don't care about.

2. Eight Principles of Animation (TEA SAFES)

  1. Timing and Motion: the timing or speed of the motion gives meaning to the movement. This can help emphasize weight, scaling or emotion.
  2. Ease-in/Ease-Out: describes the spacing of the 'tweens (uses non-linear interpolation). This is good for fluidity and naturalism.
  3. Arcs: describes visual path of action from one extreme to another. This is, again, good for fluidity and naturalism.

  4. Squash and Stretch: deforming an object during motion conveys its flexibility and weight; preserving its volume keeps the deformation believable (similar to geometric deformation).
  5. Anticipation: the preparation of an action through a reveal, indication of speed and directing the attention (Ex. In real life, amateur boxers telegraph their punches by pulling back before they punch).
  6. Follow Through: termination of an action through weight and drag, initiation, and overlapping.
  7. Exaggeration: exaggerate the essence of an action; helps to emphasize the action, emotion, shape or sound.
  8. Secondary Action: a secondary action results directly from a primary action. This provides interest and realism. (Ex. You jump off a cliff wearing a cape. The primary action is you jumping and the secondary action is your cape waving in the wind.) This technique is similar to physically-based motion.
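The ease-in/ease-out principle above can be sketched by remapping the 'tween parameter with a smoothstep curve, so motion starts and ends slowly instead of being evenly spaced (the function names here are illustrative):

```python
def smoothstep(u):
    """Remap 0..1 so the ends are slow and the middle is fast."""
    return 3 * u * u - 2 * u * u * u

def tween(start, end, frame, total_frames, ease=True):
    """Interpolate between two poses, optionally with ease-in/ease-out."""
    u = frame / total_frames
    if ease:
        u = smoothstep(u)
    return start + u * (end - start)

# Five frames between poses at 0 and 10:
linear = [round(tween(0, 10, f, 4, ease=False), 2) for f in range(5)]
eased = [round(tween(0, 10, f, 4), 2) for f in range(5)]
print(linear)  # [0.0, 2.5, 5.0, 7.5, 10.0] - even spacing
print(eased)   # [0.0, 1.56, 5.0, 8.44, 10.0] - bunched near the poses
```

The eased 'tweens cluster near the two extremes, which is exactly the non-linear spacing the principle asks for.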

3. Four Methods for Preparing Animation

Traditional Animation Techniques

  1. Straight-Ahead: animator draws or sets up some objects one frame at a time in sequence. This is good for creativity but difficult to time and tweak.
  2. Pose to Pose: similar to keyframing; animator sets up main frames and does the 'tweens later. This is good for tweaking, timing, and planning out the animation. This technique is similar to timing, easing and arcs (TEA).

Traditional Assessment

  1. Staging: presenting the idea so that it is unmistakeably clear. You should be able to tell what the action is solely by the silhouette. This is similar to rendering.
  2. Appeal: character has charm (personality), a pleasing design and simplicity. (Ex. Exaggerate the design, avoid symmetry, use overlapping action.)

4. Five Types of Animation Systems (PSS BR)

  1. Procedural: control over motion is achieved through procedures that explicitly define the movement as a function of time.
  2. Scripting Systems: animator writes a script. The system is not interactive (Ex. ASAS).
  3. Stochastic: controls general features by invoking stochastic processes that generate large amounts of low-level detail (Ex. Good for particle systems).

  4. Behavioural: objects are given a set of rules about how they interact with their environment. (Ex. A school of fish.)
  5. Representation: the data that represents the object is animated. There are 3 types: 1) Jointed objects; 2) Soft (deformable); and 3) Morphing.