the bot works by iteratively zooming in. given a view, it computes a bunch of randomly zoomed-in pictures, and replaces the view with the zoom that had the best score.

currently it computes 16 zooms at each iteration, and does 10 iterations. ideally the score gradually increases, but that doesn't always happen.
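a minimal sketch of that loop in Python — the view representation, the zoom factor, and the score function are hypothetical stand-ins, not the bot's actual internals:

```python
import random

def best_zoom_search(view, score, zooms_per_iter=16, iterations=10, factor=0.5):
    """Hill-climb sketch: at each iteration, sample randomly zoomed-in
    candidate views and keep the best-scoring one (even if it scores
    worse than the current view, which is why the score can drop)."""
    cx, cy, radius = view
    for _ in range(iterations):
        candidates = []
        for _ in range(zooms_per_iter):
            # pick a new centre inside the current view, shrink the radius
            ncx = cx + random.uniform(-1, 1) * radius * (1 - factor)
            ncy = cy + random.uniform(-1, 1) * radius * (1 - factor)
            candidates.append((ncx, ncy, radius * factor))
        cx, cy, radius = max(candidates, key=score)
    return cx, cy, radius
```

with 10 iterations and a factor of 0.5 the final view is 1024x deeper than the start; the real bot's zoom factor is not stated here.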

the final iteration's hiscore is rendered bigger.

unfortunately it tends to go dark and stretched quite often...

working on a to automatically explore for

the main problem is devising a fitness function. I came up with this:

1. for each pixel in the fractal, compute the local box dimension of its neighbourhood: use the gray value of each pixel as its measure, use square neighbourhoods of radius 1, 3, 7, 15, ..., and fit a simple linear regression to get the slope of the log(measure) vs log(neighbourhood size) graph

2. compute a histogram of all these dimensions (I simply sorted the array), then take as the fitness metric the difference between the values 25% and 75% of the way through the sorted array: this is typically the width of the central bulge in the histogram.
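a rough Python sketch of steps 1 and 2, under some assumptions: gray values are positive floats (the log needs that), and pixels whose largest neighbourhood would run off the image are skipped rather than clipped:

```python
from math import log

def local_box_dimension(img, x, y, radii=(1, 3, 7, 15)):
    """Slope of log(measure) vs log(neighbourhood side) at pixel (x, y),
    where the measure is the sum of gray values in a square
    neighbourhood of the given radius."""
    xs, ys = [], []
    for r in radii:
        side = 2 * r + 1
        measure = sum(img[j][i]
                      for j in range(y - r, y + r + 1)
                      for i in range(x - r, x + r + 1))
        xs.append(log(side))
        ys.append(log(measure))
    # simple linear regression slope
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def fitness(img, margin=15):
    """Width between 25% and 75% of the way through the sorted
    per-pixel dimensions (the central bulge of the histogram)."""
    h, w = len(img), len(img[0])
    dims = sorted(local_box_dimension(img, x, y)
                  for y in range(margin, h - margin)
                  for x in range(margin, w - margin))
    return dims[(3 * len(dims)) // 4] - dims[len(dims) // 4]
```

sanity check: on a constant image the measure grows exactly like side^2, so every local dimension is 2 and the fitness is 0.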

I came up with this after skimming the master's thesis "Multifractal-based Image Analysis with applications in Medical Imaging" by Ethel Nilsson. viewing the dimension image in geeqie with the histogram overlaid was interesting. also inspired by

Mandelbrot

Parameter data for (usually this is stored in image metadata, but Mastodon strips that out...):

z := z^p + c
z := (|x| + i |y|)^p + c
z := z^p + c
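the loop can be iterated like the sketch below; p = 2 is my assumption, since the actual parameter data was stripped with the metadata. the middle step is the Burning Ship-style fold of both coordinates to their absolute values:

```python
def hybrid_escape_time(c, p=2, max_loops=100, escape_radius=2.0):
    """Iterate the three-step hybrid loop
        z := z^p + c
        z := (|x| + i |y|)^p + c
        z := z^p + c
    and return the number of steps before |z| exceeds the escape
    radius, or None if it stays bounded for max_loops loops."""
    z = 0j
    steps = 0
    for _ in range(max_loops):
        for fold in (False, True, False):
            if fold:
                # Burning Ship step: fold into the first quadrant first
                z = complex(abs(z.real), abs(z.imag))
            z = z ** p + c
            steps += 1
            if abs(z) > escape_radius:
                return steps
    return None
```

for a picture, run this over a grid of c values and colour by the step count.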



Using and commands to make a out of image thumbnails sorted by average level:

montage -tile ${COLUMNS}x${ROWS} -geometry ${TILEWIDTH}x${TILEHEIGHT}+0+0 $( identify -colorspace RGB -format "%[fx:mean] %[filename]\n" in-*.png | sort -nr | cut -d\  -f 2 ) tmp.png &&
convert tmp.png -colorspace RGB -geometry ${WIDTH}x${HEIGHT} -colorspace sRGB out.png

I didn't yet figure out how to get the gamma-correct colour space conversion to happen inside montage's own rescaling, so I render at natural size and downscale with convert afterwards.

The last image is the good result, the other two are gamma-incorrect, thus they appear too dark on screen (and presumably print too).

Another good iteration loop formula is:

z := z^p + c
z := (|x| + i |y|)^p + c
z := (|x| + i |y|)^p + c
z := z^p + c

(p = 2)

An excerpt from the parameter plane of this formula iterated in a loop (p = 2):

z := z^p + c
z := (|x| - i y)^p + c
z := (x - i |y|)^p + c

got sRGB gamma-correct scaling working in et. it is a bit blurry because it uses mipmaps instead of more advanced (slower) algorithms like the ones GIMP uses, but the brightness is good.

before today's coding, the output in the et window was too dark (saved PNGs were brighter when downscaled by GIMP)

key things were: GL_SRGB8_ALPHA8 instead of GL_RGBA in the texture specification; glEnable(GL_FRAMEBUFFER_SRGB) to do gamma-correct blending; and a manual linear->sRGB conversion in the final "display texture to screen" fragment shader, because GtkGLArea does not expose a way to get an sRGB default framebuffer (yet; I filed a ticket)
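the manual linear->sRGB step is just the standard sRGB transfer function; here is the same math as a Python sketch (not the shader code itself), plus a tiny demonstration of why averaging pixels in the wrong space comes out too dark:

```python
def linear_to_srgb(c):
    """Standard sRGB encode for a component in [0, 1] (what the display
    shader has to do manually when the framebuffer is not sRGB)."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    """Standard sRGB decode for a component in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def downscale_pair(a_srgb, b_srgb):
    """Gamma-correct 2:1 'downscale' of two sRGB pixel values:
    decode to linear light, average, re-encode."""
    return linear_to_srgb((srgb_to_linear(a_srgb) + srgb_to_linear(b_srgb)) / 2)
```

averaging a black and a white pixel directly in sRGB gives 0.5, which displays too dark; averaging in linear light and re-encoding gives about 0.735, the perceptually correct mid-gray.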

I added light sampling to Raymond: for diffuse opaque surfaces it is vastly more efficient (less noise for given rendering time) than random sampling, at the cost of double the number of rays to trace (an additional one from each surface hit directly towards a light).

Not so beneficial for shiny opaque surfaces in one test. And it breaks rendering of transparent objects (refractive caustics disappear).
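Raymond's source isn't shown here, so this is only a generic sketch of the idea (next-event estimation) for a Lambertian surface lit by a point light; every name and the structure are hypothetical:

```python
from math import pi

def direct_light(hit, normal, albedo, light_pos, light_intensity, occluded):
    """One extra shadow ray per surface hit: sample the light directly
    instead of hoping a random bounce finds it.  For a Lambertian
    (diffuse) BRDF the contribution is albedo/pi * intensity * cos/d^2."""
    dx = [l - h for l, h in zip(light_pos, hit)]
    d2 = sum(v * v for v in dx)
    d = d2 ** 0.5
    wi = [v / d for v in dx]                      # direction towards the light
    cos_theta = sum(n * w for n, w in zip(normal, wi))
    if cos_theta <= 0 or occluded(hit, wi, d):
        return 0.0                                # light below horizon, or shadowed
    return albedo / pi * light_intensity * cos_theta / d2
```

the extra `occluded` shadow ray is the "additional one from each surface hit directly towards a light" — and it only helps when the BRDF is broad; for a sharp shiny lobe or refraction the sampled light direction rarely matches where the BRDF actually sends energy.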

for viewing

depth map exported from (one of the examples that comes with it)

horizontally repeating fractal texture made with (modulusx final transform).

combined with homemade software. final image had autolevels applied to brighten it up before downscaling to 1920x1080@100dpi

I think this one turned out ok despite the depth discontinuities.

for viewing

map exported to from my "raymond" thingy implemented in inside , converted to using . EXR data must be in [0..1] for GIMP to understand it (otherwise it clips, not sure if it's clipped by FragM or GIMP). Not sure if sRGB non-linear response curve is mangling my data.

texture from a photo I took, cropped into a narrow vertical strip and made in

depth and texture combined with . took many iterations to get something that didn't make my eyes hurt too much. depth is jarring, too-small texture detail is distracting.

- for PFM file format (like PGM/PPM but float32 data)

render a vertical strip that repeats horizontally in (possibly also other software like or ). possible use case: texture for generation.

0. start with some flame
1. add a Final transform with linear variation weight 0 and modulusx weight 1. its affine transform should be reset to identity.
2. set flame Width to 256px, and Scale to 128.0 (presumably half the width works in all cases).
3. make sure camera rotation and offset are 0.0
4. render!


- a small seam is still visible in Firefox when zooming in, possible workaround is to render bigger, tile in , then downscale and crop, to make the seam as (subpixel) small as desired

- "overlaps" in the fractal structure, but I don't care about that (yet)




excerpt of a Burning Ship (quasi-)Julia set rendered in the same way

trying to figure out a heuristic for limiting iteration count to prevent the cells becoming too small and numerous, nothing working yet

Playing around with low-iteration count, binary decomposition, edge detection renderings of the Burning Ship. Maybe something to turn into a colouring book?

(a,b)=(0.25,-0.75) results in a dense pattern

trouble is to show that it remains bounded...

Some progress: (a,b)=(-0.75,-0.25) gives a non-repeating but apparently bounded collection of iterates.
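a quick numerical check — assuming these are Burning Ship iterates z := (|x| + i |y|)^2 + c from z = 0 with c = a + b i, which the stripped context doesn't confirm:

```python
def burning_ship_orbit(a, b, n=1000):
    """First n Burning Ship iterates z := (|x| + i |y|)^2 + c
    starting from z = 0, with c = a + b i."""
    c = complex(a, b)
    z = 0j
    orbit = []
    for _ in range(n):
        z = complex(abs(z.real), abs(z.imag))
        z = z * z + c
        orbit.append(z)
    return orbit

orbit = burning_ship_orbit(-0.75, -0.25)
print(max(abs(z) for z in orbit))  # largest |z| over the first 1000 iterates
```

if the printed maximum stays below the escape radius 2 for large n, that's evidence (not proof) of boundedness.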

Need to improve my gnuplot skills to colour the points in a comprehensible gradient from first point to last

I got checking working for the , based on Xavier Buff's method for the Mandelbrot set presented by Arnaud Cheritat:

Replace `der` by the Jacobian $L$ w.r.t. $(x_1, y_1)$. Replace `squared_modulus(der)` with $|\det{L}|$. Arbitrarily use the pixel spacing for `eps`.

Should be straightforward to generalize the idea for other formulas.
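a sketch of the check for the Burning Ship map f(x, y) = (x^2 - y^2 + a, 2|x||y| + b), accumulating the Jacobian by the chain rule; the starting point, iteration cap, and eps here are my guesses, not the actual implementation:

```python
def classify(a, b, eps=1e-6, max_iter=1000, escape_radius2=4.0):
    """Interior/exterior check in the spirit of the method above:
    track the Jacobian L of the iterated map and report 'interior'
    once |det L| < eps, 'exterior' on escape, else 'unknown'."""
    x, y = a, b
    # Jacobian L accumulated from the identity
    l11, l12, l21, l22 = 1.0, 0.0, 0.0, 1.0
    sign = lambda v: -1.0 if v < 0 else 1.0
    for _ in range(max_iter):
        if x * x + y * y > escape_radius2:
            return "exterior"
        if abs(l11 * l22 - l12 * l21) < eps:
            return "interior"
        # Jacobian of one step f(x, y) = (x^2 - y^2 + a, 2|x||y| + b)
        j11, j12 = 2 * x, -2 * y
        j21, j22 = 2 * sign(x) * abs(y), 2 * abs(x) * sign(y)
        # chain rule: L := J . L
        l11, l12, l21, l22 = (j11 * l11 + j12 * l21, j11 * l12 + j12 * l22,
                              j21 * l11 + j22 * l21, j21 * l12 + j22 * l22)
        x, y = x * x - y * y + a, 2 * abs(x) * abs(y) + b
    return "unknown"
```

pixels that come back "unknown" after max_iter iterations are the red zone.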

The red zone of unknown is troubling - I wonder what is really going on in there.

got centred pattern fill working

print at 180dpi (A4) and view cross-eyed from about 7 inches

texture is based on

