Object Detection Description
There are a few ways to detect objects in VRChat. This page starts with the basics and gradually moves to the more complex techniques. Note that it is very easy to run into Oversync issues when trying to detect objects; see the final section to understand more about how to handle this. Also note that detecting a specific object is not currently possible with the tools available in the SDK, only layers. This is very important: to detect a specific object, it needs to be differentiated from all other objects somehow.
Detecting A Layer
For any detection to happen a few things are needed:
- Both objects need a collider. It is best to use a primitive collider (SphereCollider, CapsuleCollider, BoxCollider), but you can use a MeshCollider if it is set to Convex. The detection object should have IsTrigger checked in its collider settings.
- The layer of the detection object needs to collide with the layer of the other object. You can check this by going to Edit -> Project Settings -> Physics, under Layer Collision Matrix.
- One or both objects need a Rigidbody. This is a Unity requirement. If the object moves, add the Rigidbody to it. If you do not want physics applied to the object, check Is Kinematic.
- VRChat specific: The detection object needs a VRC_Trigger with OnEnterTrigger.
VRC_Trigger’s OnEnterTrigger event has two main settings: Layers and Trigger Individuals. Layers is a list of all the layers you want this trigger to act on; you can choose as many as you want. When any object on these layers enters the trigger collider, all the actions in the VRC_Trigger will fire. One special case where they do not fire is when too many objects enter at the same time. This is where Trigger Individuals comes in. Without Trigger Individuals, there is a minimum time delay before the next object can be detected. The old SDK defaulted this setting to off, but the new SDKs default it to on. If an object has multiple colliders, Trigger Individuals will detect each collider separately. OnExitTrigger with Trigger Individuals has been known to fail when too many objects exit at the same time. TODO: Insert Canny link[]; I can't find it!
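The timing difference can be pictured with a short sketch. This is an illustrative model only: the cooldown value and the function name are assumptions, not the SDK's actual implementation.

```python
# Illustrative model of the behavior described above. The 0.5s
# cooldown is an assumption, not the SDK's actual value.

def fired_events(entry_times, trigger_individuals, cooldown=0.5):
    """Return which entry timestamps actually fire the trigger."""
    if trigger_individuals:
        # Every entering collider fires its own event.
        return list(entry_times)
    # Without Trigger Individuals, entries during the cooldown are dropped.
    fired, last = [], None
    for t in sorted(entry_times):
        if last is None or t - last >= cooldown:
            fired.append(t)
            last = t
    return fired

# Three objects enter almost simultaneously:
times = [0.0, 0.1, 0.2]
print(fired_events(times, trigger_individuals=False))  # [0.0] - two entries dropped
print(fired_events(times, trigger_individuals=True))   # [0.0, 0.1, 0.2]
```

Under this model, two of the three near-simultaneous entries are silently dropped unless Trigger Individuals is on, which matches the failure mode the paragraph above warns about.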
Note: If you ever need to turn off the detection object and intend to turn it back on again, turn off only the collider! If you turn off the entire object, it can lead to a repeating execution bug when the object is turned back on. TODO: Link to the old Canny, or remove this note if it is no longer true[]; not even sure what to search for to find this Canny.
Example - Detecting Pickups
As an example, let’s show how to detect when a pickup enters a collider:
Starting from a blank world, create a new cube and position it in a reachable area. Enable IsTrigger on its collider. Next, add a VRC_Trigger and an OnEnterTrigger event. For this example, set the Layers in the VRC_Trigger to Pickup. Since the cube’s layer is Default, which collides with the Pickup layer, the detection will work. To show when an object enters, let’s toggle the cube’s MeshRenderer: add the SetComponentActive action, drag the cube in, and select Toggle.
Now that the detector is set up, we need to create a pickup. Create a sphere and add the VRC_Pickup script. When you add the pickup script, the object’s layer will automatically be set to Pickup.
With this, you can test the world. Grab the pickup and waggle it over the cube, and you will see the cube toggle on and off.
Detecting Specific Objects: Custom Layers
As mentioned earlier, the tools given to us only allow detecting layers. The easiest way to detect specific objects is therefore to give each object its own custom layer. However, this method is very limited, as we only have 10 custom layers. You can use the other VRChat layers, but they may change in the future, so be careful with how you use them. See Layers for how to create your own custom layers.
Detecting Specific Objects: Lifted Colliders
If you need to detect more than 10 objects, then you need a special technique. Lifted Colliders, Infinite Layers, Simulated Physics, Lock & Key Collision, Sky Keys, and Parallel Universe all refer to this same technique.
The idea is simple: instead of detecting an object at its exact location in the world, offset its collision and detect it there. The layer no longer distinguishes the object; the offset itself does.
Picture it like this: you have 3 objects you want to detect. To each object you tie a balloon, and each balloon has a different length of string: the first object's is 1 unit long, the second's 2 units, and the third's 3 units. When an object moves, its balloon moves too. The detector also has 3 balloons, with the same string lengths as the three objects. When you move one object to the detector, only the balloons with the matching string length will touch, letting you know which object it is.
Insert silly diagram
In VRChat terms, this is all about having a second object follow your pickup and detector. The Standard Assets FollowTarget[] script makes this easy: it takes a target transform and an offset, and sets its own position to the target's position plus that offset.
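The follow logic itself is tiny. Here is a minimal sketch of the idea in Python; the function name is illustrative, and the real FollowTarget is a Unity C# script that performs this update every frame:

```python
# Sketch of the Follow Target idea: the follower's position is always
# its target's position plus a fixed offset. Names are illustrative,
# not the actual Standard Assets API.

def follow_target(target_position, offset):
    """Return the follower's position: target plus offset, per axis."""
    return tuple(t + o for t, o in zip(target_position, offset))

# A pickup at (3, 1, -2) with a (0, 100, 0) offset puts its follower
# at (3, 101, -2), far above the playable area.
print(follow_target((3, 1, -2), (0, 100, 0)))  # (3, 101, -2)
```

Because the offset is constant, the follower mirrors every movement of the pickup, just displaced into otherwise unused space where only a matching detector can meet it.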
Each object should use a different multiple of a specific base offset, so that you can’t accidentally trigger a detector with a different object sitting at an unexpected location in your map. The right offset depends on the bounds of your map. There is the "easy" way and there is the exact way:
- The easy way is to set the offset to some number you know is far larger than your map, such as 10,000. Note that very large numbers will eventually run into floating point precision issues: if an object is too far from the origin, it can only move in large increments rather than small steps. This is why you should always build your maps close to the (0, 0, 0) point, but that is not part of this guide. See Object Snapping[] for more examples of this.
- The exact way is to figure out the bounds of your map and use that as your multiple. If your map only allows players to go up and down 100 units, then use 100 as your offset.
Note: Most creators only use the y value for offsets, but you can use any vector value. This vertical offset is why balloons and "sky keys" appear in the analogy and in the technique's other names.
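To see why distinct multiples work, here is a small sketch that models only the y axis. The function names, the 100-unit base offset, and the 0.5-unit overlap radius are all illustrative assumptions:

```python
# Sketch of why distinct offset multiples distinguish objects.
# Assumes a 100-unit base offset and reduces collider overlap to a
# simple 1D distance check; all numbers are illustrative.

OFFSET = 100  # base offset, larger than the map's vertical bounds

def follower_y(pickup_y, multiple):
    """y position of the pickup's floating follower collider."""
    return pickup_y + multiple * OFFSET

def lifted_trigger_y(detector_y, multiple):
    """y position of the detector's lifted trigger zone."""
    return detector_y + multiple * OFFSET

def detects(detector_y, detector_mult, pickup_y, pickup_mult, radius=0.5):
    """True if the pickup's follower overlaps the detector's lifted zone."""
    gap = abs(follower_y(pickup_y, pickup_mult)
              - lifted_trigger_y(detector_y, detector_mult))
    return gap <= radius

# Pickup with multiple 1 held at the detector's height (y = 0):
print(detects(0, 1, 0, 1))  # True: the multiples match
print(detects(0, 2, 0, 1))  # False: the follower is 100 units away
```

When the multiples differ, the follower and the lifted trigger are separated by at least one full base offset, so they can never touch no matter where the pickup is carried inside the map's bounds.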
Example
This example will involve detecting two different pickups at two locations. Same as last example, the detectors will be cubes and the pickups will be spheres.
Create a new cube; this one is for visuals only. Create a second cube as its child and set the child's collider to IsTrigger. Add a VRC_Trigger with an OnEnterTrigger event, set its Layers to Default only, then add a SetComponentActive action, drag the parent in, and set it to Toggle. For this example, we will use an offset of 100, so set the child cube’s y position to 100. Duplicate the parent cube and move it over some; this will be the second detector. Set its child's y position to 200.
Both the detectors are done and now we need to make the pickups:
Create an empty GameObject. Create two spheres as children. On the first, add the pickup script. On the second, add the FollowTarget script and a Rigidbody. In the FollowTarget, set the target to the first sphere and the offset to (0, 100, 0), and also move this sphere to (0, 100, 0). Check IsTrigger on the second sphere and check Is Kinematic on its Rigidbody.
Duplicate the parent of these to make the second pickup and follower. In the second pickup’s FollowTarget, set the offset to (0, 200, 0) and move the follower sphere up to this position as well.
Test in VRChat and you will see that only one sphere will trigger for each cube!
Detecting Object Rotations
TODO anyone? 1-2-3 not it! (Probably easiest to use AutoCam or combo Follow Target with a Look At) -Igbar
Detecting Players
TODO @CyanLaser
There are many reasons you might want to detect players. VRChat dedicates two layers just to players:
- Player Local - you, the local player.
- Player - every other player in the instance.
See Layers for more information on these layers.
Note that all colliders on an avatar are on the respective player layer. This includes avatar chairs and the combat system colliders. It is best not to use Trigger Individuals when detecting players like this.
Unsurprisingly, doing any form of sync with players is difficult. There are two general approaches to it:
- Local only
- Players broadcast when they enter.
Player Tracking
Since player colliders are disabled when a player sits in a chair, the only way to detect such players is to give them your own collider to detect. This can be done using player tracking[] and a following collider driven by FollowTarget.
Detection and Sync
TODO @CyanLaser
This is a difficult topic. Make sure you understand the different Broadcast Types as it is very easy to have Oversync issues.
Syncing methods
Local only
- For activating it only for the local player
Local + object sync
- Works for all players in the room with the object enabled
- Ignores late joiners if object moves away
- Does not buffer
Master Buffer One
- For broadcasting to everyone, including late joiners
- Requires the object to be enabled for the Master, which may not always be the case (Trigger Occlusion)
- Suffers from the Master Buffer One bug. See this Canny for more information.
Pickup holder only
- For broadcasting to everyone, including late joiners
- Should always work if detection happens while someone is holding the object.
Last holder only
- Same as above, and will still work if someone drops the object
- For broadcasting to everyone, including late joiners
- Will not fire if no one holds or last holder leaves room
Other
If you decide you don’t want to use VRC_Triggers for object detection, you can use ActivateTrigger[] from Standard Assets. This is similar to OnEnterTrigger with Trigger Individuals enabled, but you cannot specify the layers. There are no notable examples of why you would do this, but it is an option.