I think of Divinity: Original Sin II when I think of this effect. The environment dissolves around your party of characters as you move through the world. It feels great as a player, and it opens up possibilities for verticality in isometric level design.
Cocoon flips this effect around and dissolves pathways beyond a certain radius from the player to create some really novel puzzles.
I don't know any of the details of how Larian Studios or Geometric Interactive implemented this effect. But, for my tastes and needs, the approach outlined below works well.
I set out to achieve this effect without turning to the stencil or depth buffer. I already have a number of render features in my project that would make that a more complex task than I'd like to take on. Instead, I wanted to see how well I could make this work using only Unity's physics methods.
TLDR
This post shows a typical dissolve effect shader using alpha clipping and 3D noise in Shader Graph.
When it comes to checking whether objects should be dissolved, a raycast check falls short if we have anything more than very simple level geometry.
A sphere cast check helps, but leads to objects dissolving that should not.
Checking the hit points' viewport space and world space coordinates against the player's position goes a long way toward solving this.
I tested whether running two sphere casts per update - one between the camera and player and one behind the player - leads to more reliable results. But for me all of the same checks against the player's position were still required to get good results - making the double sphere cast redundant.
Limiting the radius of the sphere cast and the dissolve helps - your gameplay and level design may or may not allow this simple solution.
You could also try using a mesh collider to trigger dissolve effects on your environment assets; I only include a rough sketch of that approach here.
Make Some Noise
You could try to use a 2D noise texture and triplanar projection to hide UV seams. But I think you'll find better results using procedural 3D noise.
Stefan Gustavson and Ashima Arts have written procedural noise shader routines for WebGL, and you can find them in a convenient package for Unity on Jimmy Cushnie's GitHub.
To avoid seams between different environment assets using this shader, use world space position coordinates (rather than object space) as the input to your procedural noise.
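As a sketch, the sampling logic amounts to the following (Noise3D is a placeholder for whichever noise function you import, not a real Unity API):

// World space input keeps the pattern continuous across separate meshes.
// Noise3D stands in for the imported procedural noise function.
float DissolveNoise(Vector3 worldPosition, float noiseScale)
{
    return Noise3D(worldPosition * noiseScale); // returns a 0..1 value
}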
Turn on alpha clipping in Shader Graph's Graph Inspector, plug your noise into Base Color and Alpha, and any meshes using this shader should look something like this:
I want to use the noise to transition from full transparency around the player character to opacity further from the player. For that I'll need a radial gradient.
Create a radial gradient
I use the screen position of the mesh vertex. I need the magnitude of that screen position vector - the square root of the sum of its squared components, which is the Length node in Shader Graph:
To create a gradient from the middle of the screen rather than the corner I can subtract 0.5 from the screen position. And to better map the full range of the gradient (0 to 1) to the width and height of the screen, I then multiply the vector by 2.
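Expressed as plain C# - a sketch, with screenUV standing in for the Screen Position node's output - the chain so far is:

// Subtract 0.5 to center the gradient, multiply by 2 to map the 0..1
// range across the full width and height, then take the Length.
float RadialGradient(Vector2 screenUV)
{
    Vector2 centered = (screenUV - new Vector2(0.5f, 0.5f)) * 2f; // -1..1, 0 at center
    return centered.magnitude; // Shader Graph's Length node
}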
Plug the output of the Length node into Base Color and your mesh should look something like this:
To control the size of the area that is pure black - which will become fully transparent later - I remap the output of the Length node, exposing control of the minimum value.
To increase the area that is black I need to shift values lower. But to make this easier to conceptualize when adjusting the material, I prefer to pass the float through a One Minus node so the value is positive in the inspector:
I can also expose the maximum value in the inspector to control where pure white starts - in other words, the size of the transition from black to white. For now I've left it unchanged at 1.
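In code, the remap is a single unclamped lerp. I'm assuming a particular wiring here - the One Minus output driving the remap's Out Minimum - but it reproduces the behavior described:

// Remap(gradient, In(0, 1), Out(1 - radius, 1)).
// radius is the positive inspector value; larger values push more of the
// gradient below zero, enlarging the pure black (fully dissolved) area.
float remapped = Mathf.LerpUnclamped(1f - radius, 1f, gradient);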
The gradient currently outputs an oval rather than a circle. It needs to account for the difference between the width and the height of the screen, like this:
Now we should have a nice circular gradient in the middle of the screen. If your player character moves independently of the camera you would need to pass the player character's position into the shader and use that instead of the center of the screen, but this works for my needs:
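With the aspect correction added, the earlier sketch becomes (screenWidth and screenHeight would come from Shader Graph's Screen node):

float RadialGradient(Vector2 screenUV, float screenWidth, float screenHeight)
{
    Vector2 centered = (screenUV - new Vector2(0.5f, 0.5f)) * 2f;
    centered.x *= screenWidth / screenHeight; // stretch x so the oval becomes a circle
    return centered.magnitude;
}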
Control the noise with the radial gradient
The real magic happens after plugging the noise into a Step Node's Edge input and the radial gradient into the In input.
After sending the output of the step node to Alpha your material should now look something like this:
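The Step node boils down to a single comparison. As a sketch, continuing the names from above:

// Step(Edge = noise, In = remapped gradient): 1 (opaque) where the gradient
// has climbed above the local noise value, 0 (clipped) where it hasn't.
float alpha = remapped >= noise ? 1f : 0f;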
It is easy to add a glowing edge between the opaque base color and the full transparency.
To do that, add a small negative float value to the radial gradient, pass that and the noise into a new Step node, and use its output as the t value of a lerp between your base color and an edge color (which you then pass to the Base Color output).
To make the effect more compelling, I have it set to render both back and front faces. And I am using the Is Front Face node to make back faces darker (as if they are in shadow).
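Read as code, those two tweaks look roughly like this - the names, the -0.05 offset, and the 0.5 darkening factor are all illustrative:

// A second step, slightly offset, leaves a thin band where the glow lives.
float edgeStep = (remapped + edgeOffset) >= noise ? 1f : 0f; // edgeOffset ~ -0.05f
Color surface = Color.Lerp(edgeColor, baseColor, edgeStep);  // the band gets edgeColor

// Is Front Face: back faces are rendered darker, as if in shadow.
Color final = isFrontFace ? surface : surface * 0.5f;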
Control when the effect is applied
Right now, the shader is set to dissolve all the time. This is clearly not what we want. It should dissolve only when the object is closer to the camera than the player.
Should you use a raycast? A raycast can easily check whether there is a game object between the camera and the player character (in my case, with an isometric camera, I cast towards the player's head bone).
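Stripped down, that check is only a few lines (variable names here are illustrative):

// Does anything on the dissolve layers sit between the camera and the player?
Vector3 toPlayer = playerHead - cameraPosition;
bool blocked = Physics.Raycast(cameraPosition, toPlayer.normalized,
    out RaycastHit hit, toPlayer.magnitude, dissolveLayers);
// If blocked is true, hit.transform is a candidate for dissolving.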
However, this introduces a false negative problem: it misses objects that we want to dissolve, because it checks a single ray rather than a radius around the player. We then get an undesirable effect where opaque areas of one mesh are visible through the transparent parts of a hit mesh. Here is an example, with the cube on the right dissolving but the cube on the left not:
Reduce the dissolve radius: A very cheap option to improve the raycast is to reduce the radius of the dissolve. If gameplay allows for this, doing so reduces the probability of this ugly effect happening but it obviously doesn't solve the problem.
Use a sphere cast? We could use a sphere cast rather than a raycast to check for hit colliders. This easily solves the false negatives, but it introduces the potential for false positives - dissolving objects that are not (literally or visually) between the player and the camera.
This happens when the sphere radius is large enough to 1) hit the floor the player is standing on, 2) reach things behind the player as the cast arrives at the player's position, or 3) hit complex geometry that sits both behind and in front of the player (e.g. a spiral staircase).
You can see here the floor is dissolving because the hit point is closer to the camera than the player:
We can mostly solve problems 1 and 2 by introducing some position checks:
Convert the hit points of the sphere cast to viewport space, then compare the z value of each hit to the player character's z value in viewport space. This avoids dissolving objects that are behind the player but within reach of the sphere cast's radius. It does not, however, stop the cast from hitting the floor between the player and the camera.
If you never need the floor to dissolve, this isn't a problem: put the floor on a layer the sphere cast ignores.
If you do need the floor to dissolve, you can check whether the hit point is below the player character in world space coordinates. If it is below, you probably don't want to dissolve it; if it is above (perhaps by some margin), you probably do.
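Condensed, the two position checks look like this (cam, hit, playerHead, playerPos and heightTestOffset are stand-ins for the versions in the full manager script below):

float hitDepth = cam.WorldToViewportPoint(hit.point).z;
float playerDepth = cam.WorldToViewportPoint(playerHead).z;
bool inFrontOfPlayer = hitDepth < playerDepth;                   // solves problem 2
bool abovePlayer = hit.point.y > playerPos.y + heightTestOffset; // solves problem 1
bool shouldDissolve = inFrontOfPlayer && abovePlayer;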
I tried running a second sphere cast starting from the player's position, going deeper into the scene, and ignoring any hits from the first sphere cast if they were found by the second. I'd wondered whether this could lead to more accurate results but from my testing it didn't do anything to improve accuracy. (I left the code in below all the same).
Lower the sphere cast radius? We can adjust the sphere cast radius, setting it low enough to avoid any false positives that we haven't eradicated with the steps above.
The sphere cast working well - confirmed hits in green / ignored hits in red:
Use a mesh collider? With a mesh collider fixed to the rotation and position of your camera or player, you could use OnTriggerEnter and OnTriggerExit events to toggle the dissolve on your environment assets. That way you could use a custom mesh shape that fits your camera and environment setup. I didn't take this approach, but if the methods above don't work for you it might be worth considering.
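For completeness, here is a rough, untested sketch of that trigger volume alternative. Note that trigger events only fire if one side has a Rigidbody (a kinematic one works), and a MeshCollider must be convex to act as a trigger:

using UnityEngine;

// A trigger volume parented to the camera (or player). The collider should
// have Is Trigger enabled; the Rigidbody should be marked kinematic.
[RequireComponent(typeof(Collider), typeof(Rigidbody))]
public class DissolveTriggerVolume : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        if (other.TryGetComponent(out DissolveObject d)) d.StartDissolve();
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.TryGetComponent(out DissolveObject d)) d.EndDissolve();
    }
}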
Here is my implementation of a script attached to objects that I want to dissolve:
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class DissolveObject : MonoBehaviour
{
    [Space] public float minRadius = 2f;
    public float lerpSpeed = 0.5f;

    [Space]
    public EasingFunctions.Ease ease = EasingFunctions.Ease.EaseOutCubic;
    protected EasingFunctions.Function EaseFunc;

    private List<Material> _dissolveMaterials;

    private static readonly int UseDissolve = Shader.PropertyToID("_UseDissolve");
    private static readonly int MinimumRadius = Shader.PropertyToID("_Minimum_Radius");

    private void Awake()
    {
        _dissolveMaterials = new List<Material>();
        EaseFunc = EasingFunctions.GetEasingFunction(ease);

        // Cache every material instance on this renderer that exposes the
        // dissolve property, so we only animate materials that support it.
        if (transform.TryGetComponent(out Renderer r))
        {
            Material[] materials = r.materials;
            foreach (Material m in materials.Where(m => m.HasProperty(UseDissolve)))
            {
                _dissolveMaterials.Add(m);
            }
        }
    }

    public void StartDissolve()
    {
        foreach (Material m in _dissolveMaterials)
        {
            StartCoroutine(LerpRadius(m, minRadius));
        }
    }

    public void EndDissolve()
    {
        foreach (Material m in _dissolveMaterials)
        {
            StartCoroutine(LerpRadius(m, 0));
        }
    }

    // Ease the shader's dissolve radius from its current value to end.
    private IEnumerator LerpRadius(Material mat, float end)
    {
        float start = mat.GetFloat(MinimumRadius);
        float t = 0;
        while (t < lerpSpeed)
        {
            float easedT = EaseFunc(0, 1, t / lerpSpeed);
            float radius = Mathf.Lerp(start, end, easedT);
            mat.SetFloat(MinimumRadius, radius);
            t += Time.unscaledDeltaTime;
            yield return null;
        }
        mat.SetFloat(MinimumRadius, end);
    }
}
Here is my implementation of a manager script, which is attached to my main camera in the Unity hierarchy:
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Attach to the main camera (the class name itself is arbitrary).
public class DissolveManager : MonoBehaviour
{
    public PlayerData playerData; // holds data about the player's position
    private Camera _mainCamera;

    public LayerMask dissolveObjects;
    public float sphereCastRadius = 2f;

    [Space]
    [Tooltip("To dissolve, the hit point must be higher than the player position + this offset")]
    public float heightTestOffset = 0.1f;

    [Space] public int raycastHitLimit = 30;
    public bool useTwoSphereCasts;

    private RaycastHit[] _foregroundHits;
    private int _foregroundResults;
    private List<RaycastHit> _fore = new List<RaycastHit>();

    private RaycastHit[] _backgroundHits;
    private int _backgroundResults;
    private List<RaycastHit> _back = new List<RaycastHit>();

    private float _playerViewportDepth;

    private List<DissolveObject> _dissolved = new List<DissolveObject>();
    private List<DissolveObject> _newDissolveObjects = new List<DissolveObject>();
    private List<DissolveObject> _old = new List<DissolveObject>();

    private void Awake()
    {
        _mainCamera = GetComponent<Camera>();
        _foregroundHits = new RaycastHit[raycastHitLimit];
        _backgroundHits = new RaycastHit[raycastHitLimit];
    }

    private void Update()
    {
        RunSphereCast();
        if (useTwoSphereCasts) DissolveNewHitsDoubleSphereCast();
        else DissolveNewHits();
        EndOldDissolves();
    }

    // Cast a sphere from the camera towards the player's head; optionally
    // cast a second sphere from the player onwards into the scene.
    private void RunSphereCast()
    {
        Vector3 playerHeadBone = playerData.playerBones[0];
        _playerViewportDepth = _mainCamera.WorldToViewportPoint(playerHeadBone).z;

        Vector3 cameraPos = transform.position;
        Vector3 direction = playerHeadBone - cameraPos;
        float distance = direction.magnitude;
        direction = direction.normalized;

        _foregroundResults = Physics.SphereCastNonAlloc(cameraPos, sphereCastRadius,
            direction, _foregroundHits, distance, dissolveObjects,
            QueryTriggerInteraction.Collide);

        if (!useTwoSphereCasts) return;
        _backgroundResults = Physics.SphereCastNonAlloc(playerHeadBone, sphereCastRadius,
            direction, _backgroundHits, distance, dissolveObjects,
            QueryTriggerInteraction.Collide);
    }

    private void DissolveNewHits()
    {
        _newDissolveObjects.Clear();
        for (int i = 0; i < _foregroundResults; i++)
        {
            RaycastHit hit = _foregroundHits[i];
            if (hit.transform.TryGetComponent(out DissolveObject d))
            {
                // Ignore hits below the player (e.g. the floor they stand on)
                if (hit.point.y < playerData.position.y + heightTestOffset) continue;

                // Only dissolve hits closer to the camera than the player
                float hitViewportDepth = _mainCamera.WorldToViewportPoint(hit.point).z;
                if (hitViewportDepth < _playerViewportDepth)
                {
                    _newDissolveObjects.Add(d);
                    if (_dissolved.Contains(d)) continue;
                    _dissolved.Add(d);
                    d.StartDissolve();
                }
            }
        }
    }

    private void DissolveNewHitsDoubleSphereCast()
    {
        _newDissolveObjects.Clear();

        // Hits from the camera cast that are in front of the player
        _fore.Clear();
        for (int i = 0; i < _foregroundResults; i++)
        {
            RaycastHit hit = _foregroundHits[i];
            float hitViewportDepth = _mainCamera.WorldToViewportPoint(hit.point).z;
            if (hitViewportDepth < _playerViewportDepth) _fore.Add(hit);
        }

        // Hits from the player cast that are behind the player
        _back.Clear();
        for (int i = 0; i < _backgroundResults; i++)
        {
            RaycastHit hit = _backgroundHits[i];
            float hitViewportDepth = _mainCamera.WorldToViewportPoint(hit.point).z;
            if (hitViewportDepth > _playerViewportDepth) _back.Add(hit);
        }

        // Dissolve foreground hits that the background cast did not also find
        foreach (RaycastHit h in _fore.Where(h => !_back.Contains(h)))
        {
            if (h.transform.TryGetComponent(out DissolveObject d))
            {
                _newDissolveObjects.Add(d);
                if (_dissolved.Contains(d)) continue;
                _dissolved.Add(d);
                d.StartDissolve();
            }
        }
    }

    // Anything dissolved last frame but not hit this frame fades back in.
    private void EndOldDissolves()
    {
        _old.Clear();
        foreach (DissolveObject dissolved in _dissolved
                     .Where(d => !_newDissolveObjects.Contains(d)))
        {
            dissolved.EndDissolve();
            _old.Add(dissolved);
        }
        foreach (DissolveObject d in _old)
        {
            _dissolved.Remove(d);
        }
    }
}
Further Resources
In researching this effect I found Glowfish Interactive's article on this topic, which provided inspiration; you can find it here.