Sunday, 16 August 2020

Maya UI to swap textures on a mesh

I've been working on a tool to speed up texturing workflow for crowds in Maya.

It's early stages, but I intend to use this tool to set the Home team and Away team for a stadium crowd setup.

It can be used for teams across many different European leagues. That's hundreds of teams, and I'm not finished populating the texture files and thumbnails yet, but here is a video to show its functionality so far.

and in Houdini

Thursday, 16 July 2020

Show all clips in an Agent's Clip Catalog

Here is a way of displaying all the clips in an Agent's clip catalog.

Let's say you have an agent with a number of clips already defined.
It's very useful to playblast the agent, as it plays the animation in each clip.
It's possible to manually place agents and change the clip of each agent, but that can be laborious and we want to do things procedurally, do we not?

  1. In a fresh scene, lay down a Geometry object, rename it 'Agents'. Inside it, lay down an Agent SOP node. Point Agent Definition Cache to where the Agent definition is located, so that the agent is loaded into your scene.

    All the Agent's clips will be available in the Current Clip dropdown menu.
  2. Drop down an Attribute Wrangle node.
    The Wrangle will create an array from the Clip Catalog.
    Then, it will create a point for each clip.
    Each point will have a string attribute, containing the name of one clip from the catalog.

    The 'addpoint' VEX function is also reading a parameter added to the Wrangle node.
    The 'separation' parameter changes the distance between each point.
    I have decided to arrange all the points along the x-axis. This made sense for me, as all my clips are aligned along the z-axis.

    The Geometry Spreadsheet shows all the points created and the string attribute 'catalogClip'.
    This attribute will be used later.

  3. Next, use a Foreach loop to place an agent on each of the points.

    The Foreach node uses 'Fetch Piece or Point' as the Method.

    The Crowd Source node is reading the 'catalogClip' attribute from each point and using that string value to set the Initial State.
    Gotcha! The VEX function to do this is 'points()' and NOT 'point()'. Note well the extra 's' in the function name. This is a variation of the point() function that handles string attributes.

    To label the clips, use the Font node to generate text.

    The Font node can read data, not just text. It requires the data to be evaluated, using the back-ticks (`)
    The Font node does not take an input, so to get data into it, use a Spare Input.
    Using the cog menu at the top of the parameter pane, choose Add Spare Input.
    Drag the Block Begin node from the Foreach loop into the newly created Spare Input field.
    That allows the Font node to read data from the geometry going into the Foreach loop.
    The data we need is 'catalogClip', so the syntax to read that into the Font node is:
    `points(-1, 0, "catalogClip")`
    Here, the first parameter is -1, which tells the node to read the first Spare Input.
    The text generated by the Font node needs to be translated to the location of each point.
    The expressions in the 'origin' parameter read the "P" attribute of each point.
    The y component of the origin uses the modulus operator (%) to stagger the text positions and avoid overlaps.

    If required, a line can be drawn between the text and the point, to clearly show which label references each Agent.

    The Line node will draw a polyline from a point, in a specified direction.
    I used the same location as the Font node for the 'origin' parameter, only I raised the y-position by a small amount.
    The direction is simply (0, 1, 0).
    The length of the line is just the y-origin, multiplied by -1.

  4. Merge the three elements and connect the merge to the first input of the Block End of the Foreach loop.

  5. Add an Agent Terrain Adaptation node to apply foot locking.

    For this to work, an Agent Prep node needs to specify the IK chains for the legs.
    I copied the one from the Agent Definition HIP file (along with the associated CHOP Network)
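
The Wrangle from step 2 can be sketched as follows. This is a minimal Python version of its logic, shown for clarity; the actual node does this in VEX (with agentclipcatalog(), addpoint() and setpointattrib()), and the clip names and separation value below are just examples.

```python
# One point per clip, spread along the x-axis, each carrying the clip
# name in a string attribute called 'catalogClip'.
# Clip names and separation value are illustrative.
def clip_points(clips, separation):
    points = []
    for i, name in enumerate(clips):
        points.append({"P": (i * separation, 0.0, 0.0),
                       "catalogClip": name})
    return points
```

For example, clip_points(["sit_idle", "clap"], 2.0) creates points at x = 0 and x = 2, each carrying its clip name.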

Wednesday, 8 July 2020

Re-Time Agent Clips using CHOPS

Here is a quick method for re-timing clips for use in a Crowd system.

If you have a clip, loaded from an FBX file directly into the Agent Clip node, that is too slow or too fast, you can re-time the clip.

In my example, I have loaded a number of clips, categorised by type - I love to be organised that way and it speeds things up if you have many clips.

I have a little transition clip where the agent moves its hands from the knees to the lap. The original clip is very slow, taking 65 frames to complete the action. I need it to be done much quicker than that.
I don't want to open the clip and re-animate it, export the clip and re-import it. I would much rather re-time the clip as I am working in the Agent Definition workflow.

Here is the original clip alongside the re-timed clip for comparison


To re-time a clip:

  1. Create a new Agent Clip node, after the original clip has been loaded.
  2. Create a CHOP network node

  3. Inside the CHOP network, create an Agent node. Set the Agent node to read the node upstream of the new Agent Clip node that was just created. In my example, I created a Null just after all the Agent Clip nodes. I point the Agent CHOP to that Null.

  4. Still in the CHOP network, create a Channel CHOP, a Warp CHOP and a Trim CHOP. Connect the nodes as shown in the network above.
  5. The Warp node reads a 'curve' from its second input and will re-time the clip depending on the value of that curve. In my case, I want to speed up the clip by 3x. To do that, I set the value of the channel curve to 3. To halve the speed, set the value to 0.5; to reverse the clip, set the value to -1; and so on.

  6. Next, the clip's channels will need to be trimmed to the correct length. Using the Trim node, set the start and end points as you need them. For my case, the original clip length was 67 frames, so I set the new end point to 22.

  7. Drop down an Output node and set its output flag.
  8. Back out of the CHOP network and set the Agent Clip node to read the re-timed clip.
    Set the input source to 'CHOP' and select the CHOP network that contains the re-timed clip.

This new clip can be used just like any other clip in the Crowd setup. You can use this method to reverse clips and even ramp clip speed if you need to.
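
The arithmetic behind the Warp and Trim settings above is simple. Here is a sketch, assuming the warp channel holds a constant speed multiplier:

```python
# New end frame for a clip warped by a constant speed factor.
# speed > 1 speeds the clip up; 0 < speed < 1 slows it down.
def trimmed_end(original_end, speed):
    return int(round(original_end / speed))
```

With the 67-frame clip above, a speed of 3 gives an end point of 22, which is what the Trim node was set to.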

Tuesday, 23 June 2020

Tuesday, 16 June 2020

Stadium Crowd in Golaem

I decided to re-make the previous stadium shot using Golaem, Maya and Arnold.
I used Golaem a few years ago and I wanted to refresh my skills.

Using the same assets from Character Creator (re-posed), I imported and rigged them in Golaem. I set up one male and one female character, each with a few clothing variations.
Each character is playing a random selection of Golaem's built-in clips.
All this was setup and rendered in a day. Try doing that in Houdini!

Friday, 12 June 2020

Agent Preparation with Skinned Clothing

I have found a solution to using skinned clothing layers in Houdini's crowd system.

This method involves saving the clothing geometry in T-pose, or whatever rest pose your rig uses.
In my case, I am generating characters using Reallusion's Character Creator, which allows a character, with clothing, to be posed in any position. The rig I am using comes from MocapOnline and has a T-pose rest position.
It is possible to use any pose, if the rig is key-framed before frame 1 and animated to the rest position at the start of the clip. There is a step-by-step demonstration of this process by Kevin Ma, which clearly explains what to do.

What I want is to have an agent with multiple shirts, trousers and shoes. A few of each will give a reasonable variety. I plan to vary the shader on each piece of clothing too, but that will come later.

I need to generate a few versions of my character and save out the geometry. Character Creator can output obj and FBX formats.

Here are the variations of my character that I generated:

I will be using the shirts from all five, but only three of the trousers and shoes.
I have named these geometry meshes as MALE_01_VAR_01.obj, MALE_01_VAR_02.obj, etc.

We need to skin all this geometry to the Rig. As mentioned, I am using the Crowd Animation pack from Mocap Online. That comes with a rig and skinned geometry.

Here is what the geometry and rig look like once they have been imported into Houdini.

The third picture shows the geometry and materials that come with the MocapOnline rig. We will not need these, so they can be deleted. We will replace the geometry with our own.

Rename the FBX import node as RIG

Now we will create an un-clothed agent:
Inside RIG, create a Geometry node and name it MALE_01.
Jump inside MALE_01 and create a file node.
The file node should point to one of the obj files exported from Character Creator, let's say MALE_01_VAR_01.obj
That file will have clothes but we are going to remove those clothes.
Geometry exported from Character Creator has primitive groups, which is very useful in this next step.
We want to blast away all the primitive groups belonging to the clothes.
Follow that blast node with a null.

This is now ready for skinning to the rig.
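
If you prefer to build that Blast group pattern programmatically, it can be assembled from the primitive group names. This is a sketch, assuming the Character Creator group names contain recognisable clothing keywords; the names and keywords here are illustrative, so check the real group names in the Geometry Spreadsheet.

```python
# Build a Blast group pattern matching the clothing primitive groups,
# so the clothes can be deleted and only the body remains.
# Keywords and group names are illustrative.
def clothing_blast_pattern(group_names, keywords=("shirt", "trouser", "shoe")):
    matches = [g for g in group_names
               if any(k in g.lower() for k in keywords)]
    return " ".join(matches)
```

Paste the result into the Blast node's Group parameter, with 'Delete Non Selected' unchecked so the clothes are the part that gets removed.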

Jump up one level so you can see the rig and the geometry object.
Select the geometry object and on the Rigging shelf, press the Capture Geometry button.
The viewport will prompt you to select the root node of the rig. Do that and press enter in the viewport.
After a short calculation, the geometry will be skinned to the rig. Sort of.

If you see this kind of result, it's because the bind is calculated at frame 0, not frame 1. SideFX in their infinite wisdom have made that the default. It's easily fixed, though.

Jump into the geometry node again. You will see some new nodes.

The node called Bone Capture Lines has an option to specify which frame to use for binding the geometry to the rig. Set that parameter to 1. Then, on the Capture Cache node, press the Stash button. You should now have a properly skinned character.

Jump up to the /obj level and make a new Geometry node.
Jump inside and drop down an Agent node.
Set the agent's Input to Character Rig, then point it at the RIG node containing the rig and geometry.

You can import clips in the usual way, and then cache out the agent using the AgentDefinitionCache node.
The details of this process are covered in another post, so I will not spend too much time discussing these steps.

Now for the clothing layers.

Jump inside the RIG node again. We are going to create skinned geometry in the same way that we did for the unclothed body.

Create a new Geometry node. Rename it MALE_01_SHIRT_01.
Inside that node, drop down a File node and import MALE_01_VAR_01.obj
We want to delete everything except the shirt geometry, so use a Blast node and in the Group drop-down, choose the primitive group that refers to the shirt and then check the box 'Delete Non Selected'.

Now you just have the shirt. This can be skinned to the rig, as before.
Jump up one level, select the geometry node with the shirt geometry and press the Capture Geometry button on the Rigging shelf.
Again, you will probably have to set the capture frame to 1 and then hit the Stash button, just like we did before.
So you should now have a shirt skinned to the same rig as the body geometry.
We can make an Agent Layer from this.

A couple of critical points to note here:
Do not use a Source Layer. We want the clothing on its own, without any body geometry. We are adding the clothing to the default Agent layer, which is the body, so we do not want another copy of the body.
Bind the clothes shape to the Root node of the rig. Because the clothes shape is skinned, it will follow the Root node the same way as the body does.
Repeat this for all the clothes layers you need.

Save the Agent definition using the AgentDefinitionCache node.

Bringing the Agent into a new scene and using the shirt layers requires an Agent SOP with the Input set to Agent Definition Cache. The agent will have a default layer and all the new clothing layers ('shirt_01', 'shirt_02', etc).
To have some agents using the default and some using the shirt layers, I have used 2 Crowdsource nodes.

One node is used for the default layer and the other to choose a random shirt.
You can use the wildcard ('*') to let Houdini select a shirt with equal probability, or you can guide the selection by weighting the probability of each shirt being chosen.
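
Under the hood, the guided version is just a weighted random choice. Here is a sketch of the idea; the layer names and weights are examples, and in Houdini the Crowd Source node handles this through its own parameters.

```python
import random

# Pick one clothing layer, weighted by probability.
# Seeding (e.g. per agent) makes the choice repeatable.
def pick_layer(layers, weights, seed=None):
    rng = random.Random(seed)
    return rng.choices(layers, weights=weights, k=1)[0]
```

For example, weights of [0.5, 0.3, 0.2] across three shirt layers put the first shirt on roughly half of the agents.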

The clothes should follow the body with any animation clip that the agent is playing.

My skinning skills are limited, which is why there are some areas of inter-penetration, but with more careful skinning these can be fixed.

Wednesday, 10 June 2020

Making a Stadium Crowd from scratch with Houdini and Character Creator

Creating Stadium Crowds is a fairly routine task for a Crowd FX Artist, but there is scope for some interesting workflow and automation.
I will explain how I have made a basic Stadium Crowd with SideFX Houdini and Reallusion Character Creator. I am using Houdini v18 and Character Creator v3.2. I am also using the Crowd Animation pack from Mocap Online. I find the animation clips are good quality and have plenty of variations. You can also source clips for free from Mixamo.

When planning a Stadium Crowd, there are a few things to consider:
  1. Probably the most important question is "how close to the camera is the crowd?". In most cases, the crowd will be far away, in the shade and motion blurred. That may not be the best conditions to showcase your work, but it does allow you to work with lower-quality assets and faster render times.
  2. Is the crowd system to be customisable for multiple scenarios? I was inspired by Postoffice's Crowd and Stadium Tool to start creating a system that can be used for any team in any stadium.
  3. What assets do you have? Do you have a variety of rigged 3D characters? Can you get access to high-quality motion capture? Do you have a model of the stadium? Where are you going to get these assets? In my case, I was lucky to have a good model of a stadium (Manchester City's Etihad Stadium), but there are options out there for free stadium models. You may need to add seating to these models or modify them to suit your needs, but they are a good start.

Break it Down

To create an effect like a stadium crowd, the best strategy is to break the job down into smaller parts while remaining conscious of the whole pipeline.

First up are the assets. We will need the following:
  1. Characters
  2. Animation clips
  3. Geometry for placing the characters
  4. Environment and lighting

There are several sources for gathering character assets. Mixamo has a few that would be suitable for stadium crowd work (Brian, Adam, Liam, Shea, Malcolm, Kate, Suzie, Elizabeth). These are free, rigged and come with animation clips.
If you need higher quality, you may find free models online. Have a look on cgTrader, TurboSquid and Sketchfab.

Another way to get high quality models is to generate them yourself using software, such as Reallusion's Character Creator.
This software can generate character meshes in Obj and Fbx format, posed as you like.
Clothing can be varied using the built-in library, but that library is quite limited. However, it can be extended using clothing geometry from other sources.

Here is an example of a character with clothes from the built-in library

Another example using a modified texture and custom decals on a built-in garment

An example of a mesh (hooded sweater) imported into Character Creator

It is also possible to import textures for skin and faces. Here's Pep:

These meshes can be exported from Character Creator as FBX, which produces a single mesh with primitive groups. Those groups become useful when breaking the mesh apart in Houdini. The export will also save out texture files for each of the separate mesh groups. Textures include Diffuse, Normal map, Specular, Roughness and maybe one or two others, depending on the materials on the clothing. Some of these textures are more detail than a crowd simulation needs, but they are there if required for close-up shots. I would consider Crowd FX unsuitable for close-up shots, so I only use the Diffuse and Normal maps in most cases.