Unity splash paint projectile test

https://www.youtube.com/watch?v=diFIRKu3fh0

A follow-up to the previous post.

Painting on objects in-game

Introduction

A long time ago I read an article about a game being developed called The Unfinished Swan. The concept was that the world is blank white and the player can't see anything until they paint it. Here's what it looked like:
unfinished-swan.jpg
The Unfinished Swan
This concept stuck in my head, even though the game has since been released (and didn't get much praise). So I thought recreating the game's mechanics in Unity would be a nice way of learning.

Research

First I remembered some games with similar features: Tag: The Power of Paint, Portal 2, Splatoon. Searching for implementation details turned up the following:

  1. Decals and Projectors won't do. They're the simplest method, but every splash of paint requires a separate instance; they're better suited to temporary texture modifications.
  2. It's possible to paint directly on a texture. The downside is that the change will appear everywhere the texture is used, and if it's tiled, the change will appear on every tile.
  3. There's a Valve document that describes working with textures; from it one can deduce that gel is actually a set of textures filtered by another texture, so it is only visible where the gel has been applied.
  4. This reddit thread has an explanation of paint mechanics. Basically, a map of the whole level is baked, and the game can draw on it. The paint texture is then displayed "through" this map.

Concept

I wanted to be able to paint on as many objects as possible, including dynamically created ones. To avoid baking a map, I need a separate texture to paint on for every object. At the same time, I should not have to create and assign such textures manually.
So, Experiment #0 is:

  1. Write a shader that renders the "drawing" texture on top of another texture
  2. Automatically create and assign "drawing" textures to objects
  3. Paint on the "drawing" texture at the center of the screen

Experiment #0

Shader

I created a Standard Surface Shader and edited it.
In the Properties block I added a new texture, hiding it from the inspector:

Properties {
    [HideInInspector] _DrawingTex ("Drawing texture", 2D) = "" {}
    // ... (the rest of the standard shader properties) ...
}

Created the corresponding variables:

    sampler2D _DrawingTex;
    sampler2D _MainTex;

    struct Input {
        float2 uv_DrawingTex;
        float2 uv_MainTex;
    };

Altered the surf method:

    void surf (Input IN, inout SurfaceOutputStandard o) {
        float4 drawData = tex2D(_DrawingTex, IN.uv_DrawingTex);
        float4 mainData = tex2D(_MainTex, IN.uv_MainTex) * _Color;
        fixed4 c = lerp(mainData, drawData, drawData.a);
        c.a = drawData.a + mainData.a;
        o.Albedo = c.rgb;
        // Metallic and smoothness come from slider variables
        o.Metallic = _Metallic;
        o.Smoothness = _Glossiness;
        o.Alpha = c.a;
    }

This is simple alpha blending: the drawing texture is layered on top of the base texture. For example, a red splash (1, 0, 0) with alpha 0.5 over opaque white comes out as (1, 0.5, 0.5), halfway between the two.

Texture assignment

I created a script that should be added to every object we want to draw on. At start, it creates a texture and assigns it:

    private readonly Color c_color = new Color(0, 0, 0, 0);

    // ... (renderer and material availability check, sketched below) ...
    m_texture = new Texture2D(textureWidth, textureHeight);
    for (int x = 0; x < textureWidth; ++x)
        for (int y = 0; y < textureHeight; ++y)
            m_texture.SetPixel(x, y, c_color);
    m_texture.Apply();

    m_material.SetTexture("_DrawingTex", m_texture);
    isEnabled = true;
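
For reference, the elided check might look something like this (a minimal sketch under my assumptions; only m_material, m_texture, isEnabled and the texture dimensions come from the snippet above):

void Start()
{
    // Hypothetical availability check: bail out if there's nothing to paint on.
    Renderer objectRenderer = GetComponent<Renderer>();
    if (objectRenderer == null || objectRenderer.material == null)
        return; // isEnabled stays false, so PaintOn becomes a no-op

    // Accessing .material instantiates a per-object copy of the material,
    // so every object ends up with its own drawing texture.
    m_material = objectRenderer.material;

    // ... texture creation and assignment from the snippet above ...
}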

And it has a method that paints a given texture onto the drawing texture:

public void PaintOn(Vector2 textureCoord, Texture2D splashTexture)
{
    if (isEnabled)
    {
        int x = (int)(textureCoord.x * textureWidth) - (splashTexture.width / 2);
        int y = (int)(textureCoord.y * textureHeight) - (splashTexture.height / 2);
        for (int i = 0; i < splashTexture.width; ++i)
            for (int j = 0; j < splashTexture.height; ++j)
            {
                int newX = x + i;
                int newY = y + j;
                Color existingColor = m_texture.GetPixel(newX, newY);
                Color targetColor = splashTexture.GetPixel(i, j);
                float alpha = targetColor.a;
                if (alpha > 0)
                {
                    Color result = Color.Lerp(existingColor, targetColor, alpha);   // blend the splash over the existing color based on splash alpha
                    result.a = existingColor.a + alpha;                             // but sum the alphas: adding paint should never make the base more transparent
                    m_texture.SetPixel(newX, newY, result);
                }
            }

        m_texture.Apply();
    }
}

Painter

Another script finds the object the camera is looking at and paints the given texture on it:

public class ClickScript : MonoBehaviour {
    public Camera mainCamera;
    public Texture2D splashTexture;

    void Update ()
    {
        if (Input.GetMouseButtonDown(0))
        {
            RaycastHit hit;
            if (Physics.Raycast(mainCamera.transform.position, mainCamera.transform.forward, out hit))
            {
                MyShaderBehavior script = hit.collider.gameObject.GetComponent<MyShaderBehavior>();
                if (null != script)
                    script.PaintOn(hit.textureCoord, splashTexture);
            }
        }
    }
}

Setting the scene

I created a material and set its shader to Custom/MyShader, assigned this material and the texture generation script to various objects, and assigned the painter script to the camera.
I also used standard Unity assets (Characters) to enable camera mouse control.

Sources

A package with a shader, two scripts and a splash texture I used.

Results

spl0_scr.png
Splash texture being added on click on various objects
It seems to work. The "drawing" texture does not depend on the base texture's parameters, like tiling.

Drawbacks

  1. The object must have a mesh collider for the physics raycast to return proper texture coordinates. The box you can see on the left was actually modified to have a mesh collider instead of a box collider. Likewise, sphere, capsule, and cylinder colliders will not work.
  2. Splash size depends on the object's size: the splash is drawn in texture space, so the same pixel footprint covers a different area on each object.
  3. Overlapping: drawing near the edge of a texture will draw on the opposing edge (out-of-bounds SetPixel coordinates wrap according to the texture's wrap mode).
  4. The box's mesh uses the same texture on every side, which means the "drawing" texture is also used on every side. This method does not work for such objects.
    spl0_scr1.png

Next

I'd like to try the following:

  1. Try to calculate the optimal "drawing" texture size based on the object's size, so that a splash looks similar on all objects (a rough sketch follows this list)
  2. Avoid overlapping when drawing
  3. Account for the geometry when drawing: the more a surface slopes away from the impact point, the more the splash should be stretched
  4. Try to make a "paint bomb" that paints nearby objects
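
For the first idea, here's a rough sketch of what "optimal size" might mean, scaling resolution with the renderer's bounds (PIXELS_PER_METER is a made-up tuning constant, not something from the project):

// Rough sketch: pick a texture resolution proportional to the object's
// largest world-space extent, so a splash covers a similar area everywhere.
private const float PIXELS_PER_METER = 128f;   // hypothetical paint density

private int ChooseTextureSize(Renderer objectRenderer)
{
    Vector3 size = objectRenderer.bounds.size;
    float maxSide = Mathf.Max(size.x, Mathf.Max(size.y, size.z));
    int pixels = Mathf.NextPowerOfTwo((int)(maxSide * PIXELS_PER_METER));
    return Mathf.Clamp(pixels, 32, 2048);      // keep memory usage sane
}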

Laser Pointer Style VR Controller UI Interaction

Introduction

After I made a VR controller I faced a new challenge: Unity's UI system is not made for VR. I wanted to be able to point the controller at a button and press it. For this, I found the Unity UI system source files and modified its input module so I could do this:
pointer-style ui interaction
(Don't mind the glitchy controller; Vuforia requires better lighting than I had while testing.)

The module

The files are published on GitHub.

What's it for

I think this module can be useful to developers who use unusual controllers and want Unity's UI to work with them. I hope it saves them some time and lets them get to their projects faster. The module will work with any controller that is represented as a game object.

How to use it

  1. Create a child camera for your controller game object;
  2. Set the camera's culling mask to none;
  3. Assign VRControllerInputModule to the EventSystem;
  4. Assign the camera to VRControllerInputModule's Ui Camera;
  5. Assign the camera as Event Camera to all of the canvases;
  6. (For click to work) Call SetIsControllerButtonPressed on your controller's button press.
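
For step 6, the wiring might look something like this (a hypothetical sketch; I'm assuming SetIsControllerButtonPressed takes a bool, so check the module's actual signature):

// Hypothetical bridge between a controller button and the input module.
public class ControllerClickSource : MonoBehaviour
{
    public VRControllerInputModule inputModule;

    void Update()
    {
        // Replace "Fire1" with however your controller reports its button.
        inputModule.SetIsControllerButtonPressed(Input.GetButton("Fire1"));
    }
}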

Why make a new module?

Well, the thing is: Unity's UI system is meant for a mouse or touch screen, not a 3D controller. It uses graphic raycasting, not physics raycasting, and requires a camera (see the sketch below).
I badly wanted to avoid using cameras and use physics raycasting instead, but that requires much more work (almost a total rewrite of the event system and input modules).
I also searched for existing solutions, but only found hardware-specific ones made for Oculus or Vive; they were unsuitable, required specific APIs (for buttons, for example), or were overall too heavy and complicated for my task.
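
To illustrate the camera requirement: graphic raycasting resolves UI hits in the screen space of an event camera, so a camera riding on the controller turns "where the controller points" into a screen-space position. A minimal sketch of the idea (uiCamera stands in for the module's Ui Camera; this is not the module's actual code):

// Requires: using UnityEngine.EventSystems; using System.Collections.Generic;
// Ray through the center of the controller-mounted camera's view.
var pointerData = new PointerEventData(EventSystem.current)
{
    position = new Vector2(uiCamera.pixelWidth / 2f, uiCamera.pixelHeight / 2f)
};
var results = new List<RaycastResult>();
EventSystem.current.RaycastAll(pointerData, results);
// "results" now lists the UI elements under the controller's pointer.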
