This post is an evaluation of some experiments with noise in a GLSL shader for my master's project. You can download the shader code here.
Noise is elusive; it slips through your fingers. Using noise as your fundamental building block means it is almost impossible to get a completely reproducible result, unless you are so zoomed out that individual features disappear. Unless you are quick enough and catch a screenshot (or save the parameter values, see below), that specific shape is gone and never coming back. That makes painting with noise a, you guessed it, fragile exploration of parameter space, where great care is needed to catch a specific result. But as previously discussed, fragility only arises when there is a value being searched for or protected. How do we encourage that kind of value being placed on shapes arising in noise?
One such way could be to allow the player/painter to save states of the noise and combine several saved states together. The value then comes from the work of painstakingly finding states that fit together in a certain way. When two "images" fit together, they create value through their relationship to each other and their combination into a third, imagined image.
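As a rough sketch of what combining two saved states could look like on the shader side (the texture names and blend uniform here are made up for illustration, and the actual saving would have to happen on the host side):

```glsl
uniform sampler2D savedStateA; // hypothetical: first saved noise state, rendered to a texture
uniform sampler2D savedStateB; // hypothetical: second saved noise state
uniform float blendAmount;     // hypothetical: 0.0 shows only A, 1.0 only B

vec3 combineSavedStates(vec2 uv)
{
    vec3 a = texture(savedStateA, uv).rgb;
    vec3 b = texture(savedStateB, uv).rgb;
    // A simple linear blend; other modes (multiply, screen) would give the
    // pair a different relationship and a different "third image".
    return mix(a, b, blendAmount);
}
```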
As alluded to above, completely uniform noise without any shaping is not fragile in itself. It is the emergent features of the noise, once shaped, that can be fragile.
Using the smooth voronoi function developed by Inigo Quilez as the basis for fractional Brownian motion (fBm, or fractal noise) had the effect of giving the noise some extra definition, but it also gave it a very specific look, a bit nightmarish maybe. I'll have to evaluate whether I think it's worth it. For the moment I'm leaving both functions in the shader file.
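For reference, the fbm part is just the usual octave sum; here is roughly its shape, assuming a smoothVoronoi(vec2) function along the lines of Quilez's article is implemented elsewhere in the file (the octave count and gain are illustrative, not the exact values I use):

```glsl
float smoothVoronoi(vec2 p); // implemented elsewhere, following Quilez's article

float fbm(vec2 p)
{
    float value = 0.0;
    float amplitude = 0.5;
    for (int i = 0; i < 5; i++)
    {
        value += amplitude * smoothVoronoi(p);
        p *= 2.0;          // double the frequency each octave
        amplitude *= 0.5;  // halve the amplitude each octave
    }
    return value;
}
```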
If you zoom in really close, the movement around the y axis in the middle of the screen is strangely symmetrical, and if you zoom out slightly a somewhat darker line becomes visible as an artefact.
It has been easier to find nice-looking results when heavily zoomed in. At a more zoomed-out level the noise becomes uniform and loses a lot of its mystery.
Zoomed in:
Zoomed out:
I should definitely be able to make it more veil-like when zoomed out. Parameters for warping the space are also much more effective on zoomed-out versions, for example with the centerDensity parameter set high:
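As a simplified illustration of the idea (not the exact code from the shader), a centre-based warp could look something like this, where a higher centerDensity packs more of the noise features toward the middle of the screen:

```glsl
uniform float centerDensity; // higher values pack more features near the centre

vec2 warpTowardsCenter(vec2 uv)
{
    vec2 centered = uv - 0.5;
    float dist = max(length(centered), 1e-5); // avoid dividing by zero at the exact centre
    // Stretch the radius non-linearly so more noise is sampled near the middle.
    float warpedDist = pow(dist, 1.0 / (1.0 + centerDensity));
    return 0.5 + centered * (warpedDist / dist);
}
```

The warped coordinate would then be fed into the noise function instead of the raw uv.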
Some parameter values work better together than others. It would therefore be helpful to create a mapping layer for the noise layers that maps a few higher-level parameters to concrete values.
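A sketch of what such a mapping layer could look like, with a single invented high-level control; the parameter names and ranges below are placeholders rather than values from the shader:

```glsl
struct NoiseParams {
    float frequency;    // how zoomed in the noise is
    float warpStrength; // how much the space is distorted
    int   octaves;      // how many fbm octaves to sum
};

// One high-level knob ("turbulence", a made-up name) drives several concrete
// parameters that are known to work well together.
NoiseParams mapHighLevel(float turbulence)
{
    NoiseParams p;
    p.frequency    = mix(1.0, 8.0, turbulence);
    p.warpStrength = mix(0.1, 0.6, turbulence);
    p.octaves      = int(mix(3.0, 6.0, turbulence));
    return p;
}
```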
Putting together layers of noise with different parameters requires strategies for combining them so that each layer serves a different visual function. Some such strategies:
Blurring and alpha overlay together could create the illusion of constantly zooming in or out.
Having layers zoomed differently and then combined through multiplication, as in the sketch below, would allow fine-tuning of tiny details in the resulting image. This could also be done on a single layer by allowing some fine-tuning of the individual fbm octaves.
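A minimal sketch of the multiplication strategy, reusing the fbm function from above (the zoom factors are arbitrary):

```glsl
float combineLayers(vec2 uv)
{
    float coarse = fbm(uv * 1.5);  // zoomed out: the overall shapes
    float fine   = fbm(uv * 12.0); // zoomed in: the tiny details
    // Multiplying lets the fine layer etch detail into the coarse shapes
    // without moving where the dark regions sit.
    return coarse * fine;
}
```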