kmdreko's

A SIMPLE BLOG

A Simple Sketch Effect

SEPTEMBER 17, 2019 | C++, OPENGL, GRAPHICS

I had always wanted to make 3D renderings appear hand-made, in a way similar to classical 2D animation. After months of late-evening hacking, I had developed a technique that would create the effect I desired using tessellation, geometry shaders, and a lot of fiddling about. I would hesitate to call the result hand-made, but it still evokes the style I was going for.

Here it is used in a later project:

This technique may look familiar to some, but it is not derived from a basic cel shader. I wanted control over the subtle variations you would find in classical animation: lines being over-drawn, off-angle, or otherwise imperfect. These effects cannot be (easily) achieved with a cel shader, which has only depth and model distinctions to work from. What I ended up developing is much more involved, but it gives far more personality to the end result.


1. Making Lines

I used a simple box since it is easy to visualize and demonstrates how various situations are handled. To start, the technique works only on lines. However, I'll go ahead and note that a purely wireframe model is not sufficient for the whole effect; the triangular faces are actually needed for multiple things to work. But, more on that later.

2. Adding Randomness

As a first step, I adjusted the vertex shader to add some randomness to the vertex's position. It's pretty subtle because I don't want the lines to be too crazy at this point.

While this change seems simple, there's already a bit of underlying complexity to it. I made sure that the randomness is deterministic and based on both the vertex and an independent "frame" value that I only change 12 times a second. It's not very apparent in the 20fps gif below, but I promise it is there. I developed this at 144fps, and without that limit it'd be nothing but a jittery mess. This same method was used for all sources of randomness.
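Roughly, the idea looks like this (shown here in C++ rather than GLSL; the hash constants and the sin-based mixing are illustrative stand-ins, not necessarily what my shader uses):

```cpp
#include <cmath>

// Classic sin-based hash (illustrative constants), returning a value in [0, 1).
float hash3(float x, float y, float z) {
    float s = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return s - std::floor(s);
}

// Deterministic per-vertex jitter in [-1, 1). The time is quantized to
// 12 steps per second, so the offset only changes 12 times a second no
// matter how fast the renderer runs.
float jitter(float x, float y, float z, float timeSeconds) {
    float frame = std::floor(timeSeconds * 12.0f);
    return hash3(x + frame, y + frame, z + frame) * 2.0f - 1.0f;
}
```

The key property is that the same vertex in the same 1/12-second window always gets the same offset, so the lines hold still between "frames" instead of shimmering every render.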


3. Splitting Lines

Then I started treading into unfamiliar territory. I wanted to divide each line into segments so I could add more randomness and not rely on just the vertex endpoints. I could've remade the cube with intermediate points, but that adds a burden to model creation and isn't responsive to how it's actually rendered.

This can be done with tessellation shaders. If you don't know, tessellation is an OpenGL feature where a control shader dictates how a primitive will be subdivided and an evaluation shader dictates how the intermediate points are generated. For my purposes, it was pretty simple to take the line and split it evenly into smaller lines. I quickly noticed that I only needed to split up lines that needed the extra detail, so I made the subdivision count dependent on screen-space size. That way, lines further away stay a single line, and those closer to the screen get the additional segments they need.
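The screen-space heuristic is simple; in C++ terms, the control shader's segment count comes out to something like this (the 8-pixel target and 64-segment cap are just numbers I'd reach for, not canonical values):

```cpp
#include <algorithm>
#include <cmath>

// Pick a tessellation segment count from the line's on-screen length.
// Inputs are endpoint positions in pixels; distant (short) lines stay a
// single segment while nearby (long) lines get subdivided.
int segmentCount(float x0, float y0, float x1, float y1,
                 float pixelsPerSegment = 8.0f, int maxSegments = 64) {
    float len = std::hypot(x1 - x0, y1 - y0);
    int n = static_cast<int>(std::ceil(len / pixelsPerSegment));
    return std::clamp(n, 1, maxSegments);
}
```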

As a side note, I made sure that the individual segments stay connected in this example, but they don't have to be.

4. Thickening Lines

Basic lines in OpenGL are severely limited; they're only really good for wireframes since you can't control them much beyond color, width, and stippling. Fortunately, OpenGL also has geometry shaders, which can transform a primitive into multiple or different primitives. So I used one to transform each line into a couple of triangles forming a thin rectangle, so that I could manipulate them more easily. At this point it looks a bit rough, but I go further...


5. Adding Endcaps

Since stopping each line abruptly doesn't look that good, I extended the geometry shader to output eight vertices per line instead of four. That way I could add some smaller bits on the ends to make them appear rounded. It doesn't look good zoomed in, but it looks convincing at a more modest line width.

This was where I really started caring about it rendering correctly regardless of aspect ratio. Since the points in the geometry shader are already in clip-space, extending from the points uniformly would appear skewed on a non-square viewport. I was really frustrated at first trying to account for it in all the required places, but I eventually just normalized the points to screen-space, did what I had to, and transformed them back at the end. It was much simpler that way.
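The normalization round trip is nothing fancy; assuming the clip coordinates have already been divided by w, it's just a remap to pixels and back:

```cpp
struct Vec2 { float x, y; };

// Clip space ([-1, 1] on both axes) to screen space (pixels). Working in
// pixels makes offsets uniform regardless of the viewport's aspect ratio.
Vec2 clipToScreen(Vec2 p, float width, float height) {
    return {(p.x * 0.5f + 0.5f) * width, (p.y * 0.5f + 0.5f) * height};
}

// And back again, once the line-widening work is done.
Vec2 screenToClip(Vec2 p, float width, float height) {
    return {p.x / width * 2.0f - 1.0f, p.y / height * 2.0f - 1.0f};
}
```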

6. Using Depth

This is getting even more complicated; I don't want to draw lines that are obscured by being on the backside or even behind other objects. For that, I need depth.

So here's where I need the model's triangular mesh. I can render the model normally to a separate framebuffer without any visual effects, textures, or even color since all I need is the depth. I then send that depth texture to the geometry shader, which checks the depth manually and will reject lines that are hidden. A lot of work, but now rendered objects look solid.
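The manual check boils down to one texture lookup and a compare; sketched in C++ with the depth texture as a flat array (the bias value is an assumption of mine to avoid self-occlusion artifacts):

```cpp
#include <vector>

// Sample the pre-rendered depth texture at a line point's screen position
// and reject the point if something nearer was drawn there. The small bias
// keeps a line from occluding itself against its own surface.
bool isVisible(const std::vector<float>& depthTex, int texW,
               int px, int py, float pointDepth, float bias = 0.001f) {
    float sceneDepth = depthTex[py * texW + px];
    return pointDepth <= sceneDepth + bias;
}
```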


7. Detecting Edges

So far, I've been drawing the cube using 12 lines, but a triangular mesh of a cube would actually have 18 lines: I haven't been drawing the diagonals. In this case, the diagonals were irrelevant, but I can't make any assumptions on a more complicated model; each line of the mesh will need to be considered. However, just drawing all the front-facing lines won't look good either. So much like a cel shader, I want only the lines on the edges to be drawn.

How can I determine when a line is an edge, given the current camera view? Well, a line from a solid mesh will have two adjacent faces, and it will be on the edge if one face is pointing towards the camera and one is pointing away. However, to know about the adjacent faces' orientations, I need more than just the two points of the line. Instead of drawing two vertices with GL_LINES, I can send four vertices using GL_PATCHES and interpret them in order A-B-C-D: where B-C is the original line, A-B-C is one face, and B-C-D is the other.

After the points are transformed in the vertex shader, the tessellation shader consumes all four points, determines if the line is an edge, and, if not, stops further processing. To make things easier, I don't actually have to calculate which way the faces are pointing; all I need to do is determine whether both adjacent points are on the same side of the line when viewed from the camera. I did this using a trimmed-down version of the distance-from-point-to-line formula and checked whether both points produce the same sign.
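That side test is just the sign of a 2D cross product; in C++ terms:

```cpp
struct Vec2 { float x, y; };

// Sign of the 2D cross product: which side of line b->c the point p lies on.
// This is the trimmed-down point-to-line test (the actual distance is never
// needed, only its sign).
float side(Vec2 b, Vec2 c, Vec2 p) {
    return (c.x - b.x) * (p.y - b.y) - (c.y - b.y) * (p.x - b.x);
}

// B-C is the shared line; A and D are the far points of its two adjacent
// faces. After projection, the line is a silhouette edge when A and D land
// on the same side of B-C.
bool isEdge(Vec2 a, Vec2 b, Vec2 c, Vec2 d) {
    return side(b, c, a) * side(b, c, d) > 0.0f; // same sign => same side
}
```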

8. Handling Interruptions

The last bit is to handle situations where a line is only partially obscured. Before, if either end of a line was hidden, the geometry shader didn't draw it at all. This doesn't work so well when models intersect or if there is something in front of it. So, instead of using the line ends, it considers the intermediate points as well.

I actually went even further; in situations where one end was obscured and the other wasn't, I used a binary search to determine where the line should actually end. This adds a lot more depth checks, but I made the search resolution configurable since some styles are sharp and others are more loose about intersecting lines.
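The search itself is an ordinary binary search over the line's parameter, assuming one end is known to be visible and the other known to be hidden (`visibleAt` here stands in for the per-point depth test):

```cpp
#include <functional>

// Binary search for where a partially obscured line should end: t = 0 is
// the visible end, t = 1 the hidden end. Each iteration halves the
// uncertainty, so "steps" is the configurable search resolution.
float findCutoff(const std::function<bool(float)>& visibleAt, int steps = 8) {
    float lo = 0.0f, hi = 1.0f; // invariant: lo is visible, hi is hidden
    for (int i = 0; i < steps; ++i) {
        float mid = 0.5f * (lo + hi);
        if (visibleAt(mid)) lo = mid; else hi = mid;
    }
    return lo;
}
```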


Conclusion

I'm actually really pleased with the result; I ended up with xkcd-style lines. There are quite a few caveats to how models should be made in order for them to actually look good, but the effect is worth it. I am continuing to use this for some of my other projects.

Performance

Given that models can have 100,000s of lines, each line could be split into 64 smaller lines, and each of those could generate 6 triangles, there's a potentially very large branching factor. However, I've yet to see a significant performance impact; I haven't really stress-tested it, but metrics from drawing 100,000-line models on my high-ish end graphics card barely register a blip (modern graphics cards are amazing).

What you have to consider is how many of those lines are thrown out. The first step beyond vertex transformations is to see if a line is an edge, which I've made very cheap, and that filters out 90% or more of the lines depending on the model complexity. The largest chunk of processing is actually the depth testing, since querying textures is comparatively slow on the graphics card. I'll also point out that this technique works equally well regardless of resolution; the fragment shader plays no part in it besides outputting black.

Integration Into Other Renderers

I wrote a custom Python script to export models with the important information and made a C++/OpenGL program that loads them up and renders them. So this was all custom work and, as much as I'd like it to be, I don't think it is possible to use this technique in a more mainstream rendering pipeline. I haven't looked super hard, but much of the work is done outside the traditional vertex and fragment shaders that are typically used for visual effects. However, if someone knows a lot about custom rendering in Blender, Unity, or anything similar, let me know if it is possible.

Samples

Here are some quick samples I made for visual effects that you wouldn't be able to achieve with a traditional cel shader. Wave effects, ripples, movement lines, smears, etc. By manipulating how each line is drawn, the possibilities are endless.

All the demonstrations here can be found on my Twitter along with continued progress and hilarious glitches. I mostly post game-development stuff there, but give a follow if you are interested in anything I've shown here.