My highly over-engineered, extravagant framework of shaders including each other multiple times with different things defined (to emulate C++ templates, since #GLSL has function overloading but no polymorphism) takes significantly longer to link the #shader than it does to render the #animation.
First attempts with typos gave 100k lines of cascaded errors in the shader info log, with which the Qt GUI list widget was Not Happy At All. Luckily the log went to stdout too, so I could pipe it to a file and see the start, where I had missed a return statement or two.
Current progress: 4-colour conversion in GIMP, then exported a layer for each colour, ran command-line potrace on each layer, and assembled the output into a single SVG with a bash script.
Today this project is morphing into a generalized "make nice lo-colour/flat/line art from images" thing.
Two photos that my algorithms (currently a modified median cut that splits the largest group each time, where size is measured by the product of bounding box volume and pixel count) struggle with at the moment:
- the left image has large flat areas of almost the same colour, and somehow my code dithers the top half in 3 colours instead of making a nice clean split with 2 colours
- the right image has lots of greens and some "obvious" bright red, but limited to 8 colours my algorithms just give greens, no red.
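The splitting heuristic described above (score each box by bounding-box volume times pixel count, split the largest) could look something like this sketch in C. The names `box_t`, `box_score`, and `pick_box_to_split` are mine for illustration, not from the actual code:

```c
#include <assert.h>
#include <stdint.h>

/* One box of the median cut partition: a per-channel RGB bounding
   box (inclusive) plus the number of pixels falling inside it. */
typedef struct {
  uint8_t min[3], max[3];
  uint64_t count;
} box_t;

/* Score = bounding-box volume * pixel count, as described above. */
uint64_t box_score(const box_t *b) {
  uint64_t volume = 1;
  for (int c = 0; c < 3; ++c)
    volume *= (uint64_t)(b->max[c] - b->min[c]) + 1;
  return volume * b->count;
}

/* Each iteration splits the highest-scoring box. */
int pick_box_to_split(const box_t *boxes, int nboxes) {
  int best = 0;
  for (int i = 1; i < nboxes; ++i)
    if (box_score(&boxes[i]) > box_score(&boxes[best]))
      best = i;
  return best;
}
```

A large flat region (tiny volume, huge count) can still outscore a small colourful one, which may be related to the over-splitting seen in the left image.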
GIMP does much better for both images, so next I'll look at how it does it.
continuous coupled cellular automaton in OpenGL/GLSL exported to full colour RGB24 PPM image, with edge detection
median cut in C exporting each of the colours to its own black on white PGM image (this one had 5 colours)
potrace from bi-level bitmap to monochrome vector SVG image, one per colour
geany text-edited the 5 SVGs into one SVG document with some pattern fill defs, changed the fill of each layer to one of the patterns, added stroke
inkscape converted to PDF
okular displayed PDF
gimp took screenshot
TODO: shell script to automate it
TODO: make more patterns to fill with
Implemented median cut colour quantization as described at https://en.wikipedia.org/wiki/Median_cut
The only non-straightforward part was needing to split at the boundary between median and next value, rather than (potentially) in the middle of a joint median region, to avoid weird artifacts where different parts of the image had different colours.
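The boundary rule described above could be sketched like this in C (the name `split_index` is mine for illustration): given the channel values sorted ascending, advance the split point past the whole run of values equal to the median, so that pixels sharing the median value always land in the same half.

```c
#include <assert.h>
#include <stddef.h>

/* Find a split index at the boundary after the run of values equal
   to the median, rather than in the middle of a joint-median region.
   Returns the first index of the upper half; may equal n in the
   degenerate case where the median run reaches the end. */
size_t split_index(const unsigned char *sorted, size_t n) {
  size_t mid = n / 2;
  unsigned char median = sorted[mid];
  size_t i = mid;
  while (i < n && sorted[i] == median)
    ++i; /* advance past the joint-median run */
  return i;
}
```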
Hardcoded to 8 colours for now, not sure how to do optimal palette size choice with this method. Maybe https://en.wikipedia.org/wiki/Akaike_information_criterion or similar could work. But I'd need to redo the code structure, currently it's simply recursive (binary splitting, depth 3) and I'd need it to be iterative with a set of blocks to be able to make the decisions about when to stop splitting.
Hmm, it's very sensitive: these are all with the same input, the only thing changing is the pseudo-random number generator seed. Various numbers of colours are chosen as optimal, ranging from 2 to 11.
Probably I should minimize over some randoms in an inner loop instead of outside the whole process.
Worked (almost) first try! (after fixing compilation errors, one stupid typo in a variable name, and having to set ulimit -s unlimited in my shell because I have Big arrays on the stack).
After adding the shader, diagonal lines are smooth, and axis aligned edges are not jittery (this is not so visible in this particular excerpt).
Before adding the shader, diagonal lines are ugly, especially when animated, and axis-aligned edges are jittery.
Added a #GLSL #shader to my autonomous #audiovisual piece #Puzzle (implemented in #PureData #Pd with #Gem for #OpenGL), to make the very edges of the tiles fade to transparent. Otherwise the width of the black border jitters very distractingly as the tiles move, because the rasterization is quantized and the texture coordinates are not. As a bonus it makes diagonal lines smoother too.
uniform sampler2D tex;
void main(void)
{
    vec2 coord = (gl_TextureMatrix[0] * gl_TexCoord[0]).st;
    vec4 colour = texture2D(tex, coord) * gl_Color;
    float d = length(vec4(dFdx(coord), dFdy(coord))) * sqrt(2.0);
    colour.a *= smoothstep(0.0, d, coord.x)
              * smoothstep(0.0, d, 1.0 - coord.x)
              * smoothstep(0.0, d, coord.y)
              * smoothstep(0.0, d, 1.0 - coord.y);
    gl_FragColor = colour;
}
Videos coming up next.
It's implemented as a bit of a hack modifying the core template pad.html at present, but it should be possible to port it to a standalone Etherpad plugin - I was just fighting with npm and node and require() and things I don't understand at all, so I took the easier way out.
Firefox's "strict" content blocking mode breaks it. Not sure why.
Finally found a copy of the "hairiness" paper that I've been searching for for some time, hadn't realized it had been reprinted in
> COLLECTED PAPERS OF JOHN MILNOR
> VI Dynamical Systems (1953-2000)
> Araceli Bonifant (editor)
> American Mathematical Society
> Providence, Rhode Island
It has some very nice diagrams and discussion about some things I stumbled across too over the years (my GIF is part of a collection of similar-but-different zoom loops).
Iterated tuning for generalized #Feigenbaum points, among other things. Just skimmed bits of it so far, should keep me busy.
The f(q) vs a(q) curves look a bit crisper with this method.
And calculating D(Q) is easy, just
$$D(Q) = \frac{a(Q) \, Q - f(Q)}{Q - 1}$$
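In code, assuming a(Q) and f(Q) are already tabulated, that formula is a one-liner (an illustrative helper, not the actual OpenCL kernel):

```c
#include <assert.h>
#include <math.h>

/* D(Q) = (a(Q) * Q - f(Q)) / (Q - 1), as above.
   Q near 1 is singular and should be skipped or handled
   separately as a limit. */
double dimension(double a, double f, double Q) {
  return (a * Q - f) / (Q - 1.0);
}
```

Sanity check: for a monofractal, a(Q) = f(Q) = D for all Q, and the formula returns that same D back.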
I ported the linear regression to OpenCL too; now only input and output run on the CPU, the GPU does all the calculations.
Some of the test histograms I'm using give bizarro curves though, not sure what's up - bizarro with two different algorithms means it's probably (?) a real behaviour?
Added EXR import to my #MultifractalSpectrum calculator. Hacked an option into #Fractorium command line renderer to output raw histogram data to EXR. Ran a batch process to render and analyze some of the examples that come with Fractorium (in its Data/Bench directory).
Better multifractal spectrum graphs for the fractal flame upthread. The main difference is that the Q range is higher, and more of it is calculated in OpenCL.
Still need to move the simple linear regression code into OpenCL and get it working on the GPU (the OpenCL code runs perfectly fine on CPU, but on my GPU it outputs garbage, different on each run; even the initial image mipmap reduction stage fails).
Computation time with 1024 Q steps is 23mins on CPU, input histogram image resolution 7280x7552x4 f32.