Brytpunkt is an audiovisual fixed media work commissioned by KAC i Rosa Huset. Using purely synthesised, computer-generated sounds and visuals, it explores the superimposition and separation of different layers of reality. Point clouds shaped by a Lorenz system, depth-of-field simulations and images of faces let different kinds of breaking points unfold as fragments in space.
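The post doesn't spell out how the Lorenz system shapes the clouds, but the basic idea is standard: integrate the system's three differential equations and turn the states into vertices. Here is a minimal openFrameworks-style sketch, assuming forward Euler integration and the textbook parameters (sigma = 10, rho = 28, beta = 8/3); the parameters, scaling and shaping in the actual piece may well differ.

```cpp
// Minimal sketch: a Lorenz system shaping a point cloud in openFrameworks.
// Forward Euler with textbook parameters; illustrative, not the piece's code.
#include "ofMain.h"

class ofApp : public ofBaseApp {
    ofMesh cloud;
    ofEasyCam cam;
    glm::vec3 p{0.1f, 0.0f, 0.0f}; // current state of the system

public:
    void setup() override {
        cloud.setMode(OF_PRIMITIVE_POINTS); // draw vertices as points
    }

    void update() override {
        const float sigma = 10.0f, rho = 28.0f, beta = 8.0f / 3.0f;
        const float dt = 0.005f;
        // A few Euler steps per frame; every state becomes a vertex,
        // so the cloud traces out the attractor over time.
        for (int i = 0; i < 20; ++i) {
            glm::vec3 d(sigma * (p.y - p.x),
                        p.x * (rho - p.z) - p.y,
                        p.x * p.y - beta * p.z);
            p += d * dt;
            cloud.addVertex(p * 10.0f); // scale up to scene units
        }
    }

    void draw() override {
        ofBackground(0);
        ofSetColor(255);
        cam.begin();
        cloud.draw();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```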
The visuals were composed first, by combining a few rather simple techniques and ideas in openFrameworks. They are dominated by point clouds, created as meshes whose vertices are drawn as points, and by the background fading between black and white in different subdivisions of the screen. Keeping in mind that there would be a sound part to the work, I created a threaded timeline system that can trigger sound events with higher precision than the frame rate allows, sending sound trigger messages to SuperCollider via OSC. I also implemented a rendering mode where the visuals are rendered as individual frames at a constant frame rate, to be merged into a video file later.
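As a rough sketch of what such a timeline could look like (the actual implementation isn't shown here): a background thread walks a sorted list of events and sends each one over OSC the moment its time comes, sleeping in millisecond steps in between. The event format, the /trigger address and port 57120 (sclang's default) are my assumptions.

```cpp
// Sketch of a threaded timeline firing OSC triggers to SuperCollider
// independently of the drawing frame rate. Event format, /trigger
// address and port 57120 are illustrative assumptions.
#include "ofMain.h"
#include "ofxOsc.h"

struct SoundEvent {
    float time; // seconds from timeline start
    int id;     // which sound to trigger
};

class TimelineThread : public ofThread {
    std::vector<SoundEvent> events; // assumed sorted by time
    std::size_t next = 0;
    ofxOscSender sender;
    float startTime = 0.0f;

public:
    void setup(std::vector<SoundEvent> evts) {
        events = std::move(evts);
        sender.setup("localhost", 57120);
    }

    void play() {
        next = 0;
        startTime = ofGetElapsedTimef();
        startThread();
    }

    void threadedFunction() override {
        while (isThreadRunning() && next < events.size()) {
            float now = ofGetElapsedTimef() - startTime;
            if (now >= events[next].time) {
                ofxOscMessage m;
                m.setAddress("/trigger");
                m.addIntArg(events[next].id);
                sender.sendMessage(m, false);
                ++next;
            } else {
                // Sleep far less than one frame (16.7 ms at 60 fps) so
                // triggers land with better-than-frame-rate precision.
                ofSleepMillis(1);
            }
        }
    }
};
```

On the SuperCollider side, a matching OSC responder would then start the corresponding synth; only the lightweight trigger messages cross the wire, while all synthesis stays in SuperCollider.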
For practical reasons I finished the visuals before seriously tackling the music. This meant that I could freely compose large parts of the music directly in the DAW instead of having to trigger every sound and sonic change from the timeline "score". The music is therefore made up of both triggered sounds and sounds played on my digital musical instrument, the scarda. All sounds are synthesised in SuperCollider, though.
First, something expertly pointed out to me by my friend and colleague Simon Donelly: don't be afraid to let a scene take time. What I showed him at around 4 minutes easily turned into almost 10 with very few additions, and was much better for it.
One 4k image isn't so big, but when you create 60 of them per second for 584 seconds things add up pretty quickly: that is 60 × 584 = 35,040 frames. The raw rendered frames took up around 22.7 GB of disk space, roughly 0.65 MB per frame. Because the frames consist largely of grains I didn't want too much compression, so using the "Perceptually lossless" render setting in Blender I got a movie file of around 8 GB.
All of this data takes a lot of time to draw, save to disk, read and process. Rendering took around 3 hours, with another 3 hours to combine the frames into a video file. I then had to convert the video to a lower resolution with more compression for use in my DAW, which took another 30 minutes. So allocate a lot of time for rendering! Luckily, I had built my application so that most of it could run in real time at close to the target frame rate, but working more with single-frame changes would have required considerably more time. Implementing the ability to skip forward on the timeline saved me so much time!
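Here is one way such a render mode and skip could look, under the assumption that the app advances the timeline by a fixed timestep per rendered frame (so timing doesn't depend on how long a frame takes to draw) and grabs each frame with ofSaveScreen; the names and the 10-second skip are illustrative, not the actual code.

```cpp
// Sketch of an offline render mode plus a timeline skip. ofSaveScreen
// writes relative to bin/data, so a frames/ folder is assumed to exist.
#include "ofMain.h"

class RenderApp : public ofBaseApp {
    bool rendering = true;          // false = real-time preview
    float timelineTime = 0.0f;      // position on the timeline, in seconds
    const float dt = 1.0f / 60.0f;  // one frame of time per rendered frame
    uint64_t frameIndex = 0;

public:
    void update() override {
        if (rendering) {
            timelineTime += dt; // fixed step: ignore the wall clock
        } else {
            timelineTime += ofGetLastFrameTime(); // real-time preview
        }
    }

    void draw() override {
        ofBackground(0);
        // ... draw the scene for `timelineTime` here ...
        if (rendering) {
            ofSaveScreen("frames/frame_" + ofToString(frameIndex++, 5, '0') + ".png");
        }
    }

    void keyPressed(int key) override {
        // Skipping forward while previewing saves a lot of waiting.
        if (key == OF_KEY_RIGHT) timelineTime += 10.0f;
    }
};
```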
I knew the work would be shown (at least at the premiere) on a projector with a significantly lower resolution than 4k (800x600). Trying different combinations of rendering and conversion sizes made it very clear that the best image quality at 800x600 was obtained by rendering in 4k and then scaling and cropping down to 800x600. This makes sense: downscaling acts as supersampling, smoothing the grainy points, and the crop is needed because 4k is 16:9 while 800x600 is 4:3.