Updated the Gem example: a spinning [cube], with desktop OpenGL 1.x Fixed Function Pipeline lighting emulated in WebGL by Regal.
Other primitives like [sphere] and [torus] fail due to a bug with GL_QUAD_STRIP; I've reported it to the Emscripten port.
[teapot] fails for other reasons (incomplete GLU? something else?); it seems to be an infinite loop rather than a simple failure...
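aside: WebGL has no quad primitives, so GL_QUAD_STRIP has to be rewritten somewhere along the way; its vertex ordering happens to match GL_TRIANGLE_STRIP, so one possible workaround looks like this (a minimal TypeScript/WebGL sketch of the idea, not the actual Regal or Gem code):

```typescript
// Sketch only: GL_QUAD_STRIP shares its vertex ordering with GL_TRIANGLE_STRIP,
// so quad-strip vertex data can be drawn unchanged as a triangle strip in WebGL.
// Quad n uses vertices 2n, 2n+1, 2n+3, 2n+2; the two strip triangles
// (2n, 2n+1, 2n+2) and (2n+2, 2n+1, 2n+3) cover exactly the same area.
function drawQuadStripAsTriangleStrip(
  gl: WebGLRenderingContext,
  first: number,
  vertexCount: number
): void {
  gl.drawArrays(gl.TRIANGLE_STRIP, first, vertexCount);
}
```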
Scala Native processing function plugged into a Web Audio ScriptProcessorNode: https://www.sciss.de/temp/sn/whitenoise.html
(AudioWorklet still to do; my Firefox version is too old)
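the browser side is just the usual ScriptProcessorNode wiring; a minimal sketch in TypeScript (`fillBuffer` is a stand-in for the compiled Scala Native processing function, not its real name; here it just writes white noise itself):

```typescript
// Minimal sketch of plugging a processing function into a ScriptProcessorNode.
// `fillBuffer` is a stand-in for the compiled Scala Native generator (assumed
// to fill a Float32Array with samples); here it writes white noise directly.
function fillBuffer(out: Float32Array): void {
  for (let i = 0; i < out.length; i++) {
    out[i] = Math.random() * 2 - 1; // uniform white noise in [-1, 1)
  }
}

const ctx = new AudioContext();
// ScriptProcessorNode is deprecated in favour of AudioWorklet, but still works.
const node = ctx.createScriptProcessor(4096, 1, 1); // bufferSize, inputs, outputs
node.onaudioprocess = (event: AudioProcessingEvent) => {
  fillBuffer(event.outputBuffer.getChannelData(0));
};
node.connect(ctx.destination);
```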
added meta tags to my blog generator:
https://code.mathr.co.uk/cmcms/commitdiff/a0d4e304679ae51f7e5b18e8355755957bb07b67
added some scripts for building in a chroot, so you don't risk weird interactions with your main dev system: https://mathr.co.uk/empd/#build-cleanroom
http://mathr.co.uk/empd/gemtest.html victory! #puredata #gem #web #browser #emscripten #regal #opengl
working on cleaning up / committing my changes to the Gem sources now, will push as soon as I've got a reproducible build script
the fighting may have got easier with this:
https://github.com/emscripten-ports/regal
currently requires the emsdk "incoming" tag, afaict
made a mini-site for my emscripten pd experiments:
https://mathr.co.uk/empd/
managed to get a slightly modified version of #puredata working in the #web #browser using #emscripten
quick start guide:
https://github.com/claudeha/libpd/blob/emscripten/samples/emscripten/pdtest/README.md
demo:
https://cafe.mathr.co.uk/eiskaffee/empd/pdtest.html
don't look:
https://github.com/claudeha/pure-data/commit/56ec8f1e763b307dc5757380f885282c58adb18e
[link to stroboscopic video]
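roughly what the JS glue for the audio side looks like; a hedged TypeScript sketch, assuming the standard libpd C API is exported from the Emscripten build and the patch is preloaded into the virtual filesystem (buffer sizes and the patch name here are illustrative):

```typescript
// Hedged sketch: driving libpd compiled with Emscripten from a ScriptProcessorNode.
// Assumes the standard libpd C API and cwrap/_malloc/HEAPF32 are exported;
// the exact exports, buffer sizes, and patch name are assumptions.
declare const Module: any; // the Emscripten runtime object

const libpd_init = Module.cwrap('libpd_init', 'number', []);
const libpd_init_audio = Module.cwrap('libpd_init_audio', 'number',
  ['number', 'number', 'number']);
const libpd_openfile = Module.cwrap('libpd_openfile', 'number',
  ['string', 'string']);
const libpd_process_float = Module.cwrap('libpd_process_float', 'number',
  ['number', 'number', 'number']);

const channels = 2;
const bufferSize = 4096;        // frames per ScriptProcessorNode callback
const ticks = bufferSize / 64;  // Pd's block size is 64 frames

const ctx = new AudioContext();
libpd_init();
libpd_init_audio(0, channels, ctx.sampleRate);
libpd_openfile('test.pd', '/'); // patch preloaded into Emscripten's MEMFS
// (DSP is assumed to be switched on by the patch itself, e.g. via [loadbang])

// interleaved output buffer on the Emscripten heap (4 bytes per float)
const outPtr = Module._malloc(bufferSize * channels * 4);
const inPtr = Module._malloc(4); // dummy, no audio input used here

const node = ctx.createScriptProcessor(bufferSize, 1, channels);
node.onaudioprocess = (event: AudioProcessingEvent) => {
  libpd_process_float(ticks, inPtr, outPtr);
  const out = Module.HEAPF32.subarray(
    outPtr / 4, outPtr / 4 + bufferSize * channels);
  for (let c = 0; c < channels; c++) {
    const data = event.outputBuffer.getChannelData(c);
    for (let i = 0; i < bufferSize; i++) {
      data[i] = out[i * channels + c]; // de-interleave
    }
  }
};
node.connect(ctx.destination);
```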
Today I'm mostly working on 'nnirror', my art project about training neural networks to recognize themselves.
The ego network is trained using a generative adversarial network against the id network. Ego aims to recognize its own weights (output 1) vs everything else (output 0 for id's attempts to fool ego, output 0 for random input too).
The network weights are visualized at the top left of the first image; below that is the normalized change since the previous epoch.
The second image plots the parameters (learning rates, momenta, etc.): on the left if the ego network failed to achieve enlightenment after 1000 epochs, on the right if it managed to score above 4.5 in that time. The total score is twice the top graph minus the two lower graphs.
I think I'll punt these compiler changes to a future `kf-2.15` branch, the generated code for reference and perturbation iterations is just not good enough yet.
It would be nice, though: adding a formula would become 2 lines of code, and the huge formula.cpp generated from the semi-automatically crafted formula.xml via formula.xsl takes a long time to compile; at least the generated code is split into many files so it can be compiled in parallel...
Hopefully I'll manage to release `kf-2.14.4` today with some less invasive changes.
I enjoyed this release:
Bebhionn - Interplanetary
https://archive.org/details/Zimmer155
minimal, techno, deep
CC BY-NC-ND
While hacking on a reference implementation to go with my work-in-progress paper, I found several bugs in the paper. Mostly embarrassing sign errors, but some off-by-ones too.
I also accidentally fixed one algorithm: arbitrarily setting `sgn(0) = 1` instead of `sgn(0) = 0` makes it succeed in more cases (less risk of a singular matrix).
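in code terms (an illustrative sketch, not the paper's notation):

```typescript
// The usual convention (e.g. Math.sign) returns 0 at 0; this variant never
// returns 0, which in practice reduces the risk of ending up with a singular matrix.
const sgn = (x: number): number => (x < 0 ? -1 : 1);
```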
But now I'm considering deleting 3/4 of a page of vagueness about some advanced techniques and replacing it with a fuller treatment of the basics: as it is now, you already need to know how to render escape time fractals for my first page to make any sense. Running very tight to the 8-page hard limit...
I might be able to shrink some of the images to 2/3 page instead of 1 whole page, but whether the page breaks will still be good is another issue.