Wednesday, 16 April 2025

Clip Browser UI Update

The UI for the Clip Browser project is complete.

The first page is used to search for clips. The second page is used to add paths to search. On the second page, the user can select a clip and open a dialog to define clip attributes. These attributes are searchable on the first page.

page 1 - search
 
page 2 - paths


dialog - define

When defining a clip's attributes, it's possible to set the clip as a Vignette. This allows the clip to be associated with other clips in the vignette: for example, characters fighting in a melee, conversing, or acting together in some other way.
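As a rough illustration, vignette membership could be modelled as a small grouping structure. The `Vignette` class and its fields below are my own sketch, not the tool's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Vignette:
    """Groups clips that act together, e.g. a melee or a conversation.
    (Illustrative sketch only; names are invented for this example.)"""
    name: str
    clip_names: list = field(default_factory=list)

    def add_clip(self, clip_name):
        # Avoid duplicate membership.
        if clip_name not in self.clip_names:
            self.clip_names.append(clip_name)

    def partners_of(self, clip_name):
        """All other clips associated with this clip via the vignette."""
        return [c for c in self.clip_names if c != clip_name]

melee = Vignette("tavern_brawl")
melee.add_clip("punch_A")
melee.add_clip("block_A")
print(melee.partners_of("punch_A"))  # ['block_A']
```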

The 'Character', 'Primary Action', 'Secondary Action' and 'Direction' attributes can be chosen from a drop-down list, or text can be entered to add a new entry to the list.

There is also an option to associate props with the clip. If props are to be used, a hook into the current pipeline can be added so that prop objects can be selected and added for the clip.

Notes can be added in a multi-line text box, and a score can be given. I have defined five possible scores: 'missing', 'broken', 'popping', 'fixable' and 'good!'. The artist can select clips down to any score level, but a colour-coded icon will be displayed in the Clips Found list on the search page to indicate each clip's score. That feature is to be included in the next iteration of the UI.
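To make the score scale concrete, here is a minimal Python sketch of score-level filtering and colour coding. The colour choices and helper names are my own assumptions, not the tool's actual palette or API:

```python
# Scores in ascending order of quality. The colours are my own guess
# at a traffic-light style coding, not the tool's actual palette.
SCORES = ["missing", "broken", "popping", "fixable", "good!"]
SCORE_COLOURS = {
    "missing": "grey",
    "broken": "red",
    "popping": "orange",
    "fixable": "yellow",
    "good!": "green",
}

def clips_at_or_above(clips, minimum_score):
    """Return clips whose score meets the artist's minimum level."""
    threshold = SCORES.index(minimum_score)
    return [c for c in clips if SCORES.index(c["score"]) >= threshold]

clips = [
    {"name": "walk_A", "score": "good!"},
    {"name": "run_B", "score": "broken"},
    {"name": "idle_C", "score": "fixable"},
]
print([c["name"] for c in clips_at_or_above(clips, "fixable")])
# ['walk_A', 'idle_C']
```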

Thursday, 27 March 2025

Clip browser for Houdini crowds artists

The idea is to create a tool that will present animation clips to the artist, based on certain search or filtering criteria. The artist can then select the clips they want and import them into Houdini, where the clips will be assigned to the correct character with looping frames set correctly.

I have coded the UI for the clip browser in Qt with PySide2.
I purposefully did not use QtCreator or QtDesigner, as I wanted to re-visit the grammar and syntax of Qt and Pyside.

The project is in the initial stages, and there is a long way to go. I have many techniques to learn and many hours of coding, testing and discovery ahead. Here, I present the first draft of the UI, which I can describe in more detail.

The artist will want to work down the UI, starting with the Character selection comboBox. The comboBox will be populated with a list of all the characters in the current Houdini setup.

Next, the artist can select a Primary and/or Secondary action. For example, a clip may have a character who is idle but also waving a flag. In this case, 'idle' is the Primary Action and 'wavingFlag' is the Secondary Action.

The artist can also select a direction. Most of the time this will be left as 'straight', but there may be some turning clips, arcing clips, or up-ramp or down-stairs clips.

A text search will also be available, where the artist can simply type a short description of the required action, or if they know the clip name, the text search can find all clips with that name.
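The matching behaviour described above could look something like this in plain Python; this is an illustrative sketch, not the tool's implementation:

```python
def search_clips(clips, text):
    """Match the query against clip names and descriptions,
    case-insensitively, requiring every whitespace-separated term.
    (Sketch only; the real tool's matching rules may differ.)"""
    terms = text.lower().split()
    results = []
    for clip in clips:
        haystack = (clip["name"] + " " + clip.get("description", "")).lower()
        if all(term in haystack for term in terms):
            results.append(clip)
    return results

library = [
    {"name": "idle_waveFlag_01", "description": "idle, waving a flag"},
    {"name": "run_panic_03", "description": "running in panic"},
]
print([c["name"] for c in search_clips(library, "waving flag")])
# ['idle_waveFlag_01']
```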

Once the artist has filtered the library and found some clips, they will want to review and select some clips to use in their setup. The middle row of the UI contains a list of filtered clips ('clips found'). The artist can select one and the preview of the clip will load in the small viewport. This is a standard Houdini viewport and the artist can tumble the camera and focus on specific areas - the feet, for example. I will include the option to display a ground plane. The character playing the clip will be the character selected in the first comboBox.

If the artist likes the clip, the clip can be added to the 'basket'. Once the artist is happy with their clip selections, they can Check Out the basket. This will create a clip node, organised by Character, and, if the clip is a looping clip, add the in/out frames to the agent's clipProperties node.
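In plain data terms, the Check Out step amounts to grouping the basket by character and carrying the loop frames along. The real tool would create Houdini nodes at this point; the function below is only a sketch of the bookkeeping, with field names of my own choosing:

```python
def check_out(basket):
    """Group basket entries by character; looping clips carry their
    in/out frames so they can be applied to the clipProperties node.
    (Plain-data sketch; the real tool creates Houdini nodes here.)"""
    by_character = {}
    for clip in basket:
        entry = {"name": clip["name"]}
        if clip.get("looping"):
            entry["in_out"] = (clip["loop_in"], clip["loop_out"])
        by_character.setdefault(clip["character"], []).append(entry)
    return by_character

basket = [
    {"name": "walk_A", "character": "soldier", "looping": True,
     "loop_in": 12, "loop_out": 48},
    {"name": "cheer_B", "character": "civilian", "looping": False},
]
result = check_out(basket)
print(result["soldier"][0]["in_out"])  # (12, 48)
```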

There is also an Export List option, for the artist to write out a CSV file containing all the details of the basket of clips.
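The CSV export can be sketched with Python's standard `csv` module; the column set here is an assumption for illustration, not the tool's actual format:

```python
import csv
import io

def export_basket_csv(basket, fileobj):
    """Write one row per basket clip. The columns are illustrative;
    the real export may carry different details."""
    fields = ["name", "character", "looping", "loop_in", "loop_out"]
    writer = csv.DictWriter(fileobj, fieldnames=fields)
    writer.writeheader()
    for clip in basket:
        # Missing fields are written as empty cells.
        writer.writerow({k: clip.get(k, "") for k in fields})

buffer = io.StringIO()
export_basket_csv(
    [{"name": "walk_A", "character": "soldier",
      "looping": True, "loop_in": 12, "loop_out": 48}],
    buffer,
)
print(buffer.getvalue().splitlines()[0])
# name,character,looping,loop_in,loop_out
```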
 

The topic of clip naming is an important one. Consistency is obviously critical, but in many studios there are various naming conventions, brought about by diverse departments and differing ideas about how to do things properly. I am considering an extension to this tool to allow Crowd Leads to find, collate, name and categorise all the clips for their current show, creating a subset of the library with custom metadata applied to the clips to make them more easily searchable. I will update the tool if I find this is a good idea.
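As an example of the kind of metadata extraction such an extension might perform, here is a sketch that parses a hypothetical naming convention. The convention itself (character_primaryAction_secondaryAction_take) is invented for illustration; real studio conventions will differ:

```python
import re

# Hypothetical convention: character_primaryAction_secondaryAction_take,
# e.g. "soldier_run_waveFlag_03". The secondary action is optional.
CLIP_NAME = re.compile(
    r"(?P<character>[a-z]+)_(?P<primary>[a-zA-Z]+)"
    r"(?:_(?P<secondary>[a-zA-Z]+))?_(?P<take>\d+)$"
)

def parse_clip_name(name):
    """Return a metadata dict for a conforming name, else None."""
    match = CLIP_NAME.match(name)
    if not match:
        return None
    return match.groupdict()

print(parse_clip_name("soldier_run_waveFlag_03"))
```

Non-conforming names return None, which is exactly the set a Crowd Lead would need to rename or tag by hand when building the show subset.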

 

Tuesday, 11 March 2025

Stadium Crowd Project. Houdini, Python, Qt

Inspired by the stadium and crowd tool from Postoffice (Amsterdam), I decided to start development of my own system.

I started with a very basic premise: to be able to select any team, from any football league, from any country and assign that choice to the home and away team. This would then be reflected in the texture of the shirts worn by the crowd.

In this way, I would be able to customise the crowd to any combination. Useful for broadcasters who need to quickly swap teams in virtual stadiums.
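Mapping a team choice to a shirt texture can be reduced to a simple path lookup. The directory layout below is a hypothetical example, not the actual asset structure:

```python
def shirt_texture(country, league, team, root="/assets/shirts"):
    """Build the texture path for a chosen team.
    (The path layout is a hypothetical example.)"""
    return "{}/{}/{}/{}.png".format(
        root,
        country.lower(),
        league.lower().replace(" ", "_"),
        team.lower().replace(" ", "_"),
    )

# Assign any combination to the home and away crowd shirts.
home = shirt_texture("England", "Premier League", "Man City")
away = shirt_texture("Spain", "La Liga", "FC Barcelona")
print(home)  # /assets/shirts/england/premier_league/man_city.png
```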

This UI was created in Python 2.


Team shirt selector from Daniel Sidi on Vimeo.

Team Selector tool in Houdini from Daniel Sidi on Vimeo.

Thursday, 6 March 2025

Stadium Reel 2025

For those interested to see my stadium crowd work, I have put together a short stadium reel.
There are just a couple of completed projects:

  • Man City: The End of Football - Coffee & TV - Xylem
  • FC Barcelona: Més que un club - Glassworks - Nike

 


Friday, 28 February 2025

Crowd Reel 2025


 
  • Napoleon - MPC - Apple
  • Mufasa: The Lion King - MPC - Disney
  • Prehistoric Planet - MPC - Apple
  • Aquaman: The Lost Kingdom - MPC - Warner Bros
  • Man City: The End of Football - Coffee & TV - Xylem
  • FC Barcelona: Més que un club - Glassworks - Nike

Tuesday, 28 March 2023

How to create a pointcloud from any joint in the rig of crowd agents.

Here is a method I have learned to create a pointcloud from any joint of a crowd of agents in Houdini.

For example, an army of soldiers carrying guns: the FX department requires the location and orientation of the end of each gun barrel. We can deliver a pointcloud that carries that data.

Here are the steps

First, take your crowd....

// 1. get joint name
string JOINT_NAME = "r_handJA_JNT";
 
// 2. get joint index
int JOINT_IDX = agentrigfind(0, @ptnum, JOINT_NAME);
 
// 3. get position of agent
matrix AGENT_XFORM = primintrinsic(0, "packedfulltransform", @ptnum);
 
// 4. get position of joint within agent
matrix JOINT_XFORM = agentworldtransform(0, @ptnum, JOINT_IDX);
 
// 5. set offset to end of the gun barrel (enter manually)
vector POS = chv("offset");
 
// 6. transform by JOINT_XFORM
POS *= JOINT_XFORM;

// 7. transform by AGENT_XFORM
POS *= AGENT_XFORM;

// 8. set initial direction along the gun (+x direction)
vector DIR = set(1, 0, 0);

// 9. transform by rotation component of JOINT_XFORM
DIR *= matrix3(JOINT_XFORM);

// 10. transform by the rotation component of AGENT_XFORM
DIR *= matrix3(AGENT_XFORM);

// 11. make a new point
int newPoint = addpoint(0, POS);

// 12. set DIR on new point
setpointattrib(0, "DIR", newPoint, DIR);

// 13. delete the agent
removepoint(0, @ptnum, 1);


Then export this pointcloud.

Thursday, 16 July 2020

Show all clips in an Agent's Clip Catalog

Here is a way of displaying all the clips in an Agent's clip catalog.



Let's say you have an agent with a number of clips already defined.
It's very useful to playblast the agent, as it plays the animation in each clip.
It's possible to manually place agents and change the clip of each agent, but that can be laborious and we want to do things procedurally, do we not?

  1. In a fresh scene, lay down a Geometry object and rename it 'Agents'. Inside it, lay down an Agent SOP node. Point the Agent Definition Cache parameter to the location of the Agent definition, so that the agent is loaded into your scene.


    All the Agent's clips will be available in the Current Clip dropdown menu.
     
  2. Drop down an Attribute Wrangle node.
    The Wrangle will create an array from the Clip Catalog.
    Then, it will create a point for each clip.
    Each point will have a string attribute, containing the name of one clip from the catalog.


    The 'addpoint' VEX function is also reading a parameter added to the Wrangle node.
    The 'separation' parameter changes the distance between each point.
    I have decided to arrange all the points along the x-axis. This made sense for me, as all my clips are aligned along the z-axis.


    The Geometry Spreadsheet shows all the points created and the string attribute 'catalogClip'.
    This attribute will be used later.

  3. Next, use a Foreach loop to place an agent on each of the points.


    The Foreach node is using 'Fetch Piece or Point' as the Method


    The Crowd Source node reads the 'catalogClip' attribute from each point and uses that string value to set the Initial State.
    Gotcha! The expression function to do this is 'points()' and NOT 'point()'. Note well the extra 's' in the function name: it is the variation of point() that handles string attributes.

    To label the clips, use the Font node to generate text.




    The Font node can read data, not just text. It requires the data to be evaluated, using the back-ticks (`)
    The Font node does not take an input, so to get data into it, use a Spare Input.
    Using the cog menu at the top of the parameter pane, choose Add Spare Input.
    Drag the Block Begin node from the Foreach loop into the newly created Spare Input field.
    That allows the Font node to read data from the geometry going into the Foreach loop.
    The data we need is 'catalogClip', so the syntax to read that into the Font node is:
    `points(-1, 0, "catalogClip")`
    Here, the first parameter is -1, which tells the node to read the first Spare Input.
    The text generated by the Font node needs to be translated to the location of each point.
    The expressions in the 'origin' parameter read the "P" attribute of each point.
    The y component of the origin is using the modulus operator (%) to stagger the position of the text to avoid overlapping text.

    If required, a line can be drawn between the text and the point, to clearly show which text comment references each Agent.



    The line node will draw a poly line from a point, in a specified direction.
    I used the same location as the Font node for the 'origin' parameter, only I raised the y-position by a small amount.
    The direction is simply (0,1,0)
    The length of the line is just the y-origin, multiplied by -1.

  4. Merge the three elements and connect the merge to the first input of the Block End of the Foreach loop.

  5. Add an Agent Terrain Adaptation node to apply foot locking.



    For this to work, an Agent Prep node needs to specify the IK chains for the legs.
    I copied the one from the Agent Definition HIP file (along with the associated CHOP Network).