@sneak I work with him occasionally - e.g. https://magpi.raspberrypi.com/articles/aphex-twin-midimutant
@nebogeo did you tell your collaborator that you are appropriating his trademarks for your own project? if someone i worked with did that to me without permission i would be pissed.
@sneak one could ask, given that he paid for it, came up with the idea and designed it - whose logo should be on it?
@sneak more info here (it was on a warp mailout) https://www.reddit.com/r/aphextwin/comments/xm1ybl/samplebrain_custom_sample_mashing_app_designed_by/
@nebogeo nice! I was trying to do something similar in PureData this summer, so I can't wait to try this one!
@yhancik I suppose one interesting bit in samplebrain is using a graph (synaptic mode) so you can use more sample material than you can search in realtime
@nebogeo cool! I hope to try it soon.
I once did a vaguely similar thing for two live inputs, but just exhaustive search using energy-per-octave vectors, no fancy brain thing.
Looking at the code, the implementation of one function doesn't make sense to me:
double brain::calc_average_diff(search_params &params)
I'd move the division outside the loop and divide by the count squared.
If comparison is symmetric you could speed it up by a constant factor of 2 by considering only j>=i indices.
But I'm always a bit nervous suggesting changes to art software as it might change the character of the output...
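For what it's worth, the restructuring I mean would look something like this (a hypothetical sketch, not samplebrain's actual internals — `block_diff` and the container type are stand-ins):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in for the per-block comparison; assumed symmetric,
// i.e. block_diff(a, b) == block_diff(b, a).
double block_diff(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) d += std::fabs(a[k] - b[k]);
    return d;
}

// Average difference over all ordered pairs (i, j): the division
// happens once, outside the loop (divide by count squared), and the
// j >= i half-traversal recovers the full sum by doubling the
// off-diagonal terms, which is where the constant factor of 2 comes from.
double calc_average_diff(const std::vector<std::vector<double>>& blocks) {
    double total = 0.0;
    const std::size_t n = blocks.size();
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = i; j < n; ++j) {
            const double d = block_diff(blocks[i], blocks[j]);
            total += (i == j) ? d : 2.0 * d;  // (i,j) and (j,i) counted once each
        }
    }
    return total / static_cast<double>(n * n);
}
```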
@nebogeo I'll be watching it late this evening to get the weekend out of my bones. And congrats for the big story at CDM. https://cdm.link/2022/09/free-sample-mashing-with-samplebrain-by-aphex-twin-and-dave-griffiths/
@nebogeo sooo cool! 🤩 I saw this a few days ago and it made me think of "tomomibot"!
Tomomibot takes any audio chunk, splits it into samples and analyzes the similarity of the samples, to finally train a model that "learns" a pattern in the occurrences of sounds. During live performance something similar happens, but now the model suggests which sample to play next based on the live input. The cool thing is: since the model is only trained on categorized sounds, it's enough for a sample to fall under the right category, meaning you can swap out the samples but still use the same trained sequence model 😊 https://github.com/adzialocha/tomomibot
I built it for Tomomi Adachi; it's surely over-engineered and very chaotic, but it led to some really fun noise and improvisation concerts with Tomomi, who really figured out the weirdness of this instrument https://m.soundcloud.com/tomomibot/tomomibot-drives-2-orchestras
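The sequence-model idea boils down to something like this toy sketch (illustration only, not tomomibot's actual code — the category names and the most-frequent-successor rule are my simplifications here):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Transition counts: category -> (next category -> how often it followed).
using Transitions = std::map<std::string, std::map<std::string, int>>;

// Learn next-category transitions from a sequence of categorized sounds.
Transitions train(const std::vector<std::string>& seq) {
    Transitions t;
    for (std::size_t i = 0; i + 1 < seq.size(); ++i)
        t[seq[i]][seq[i + 1]] += 1;
    return t;
}

// Suggest a follow-up for the category heard live right now.
// This picks the most frequent successor; a real system would rather
// sample in proportion to the counts. Because the model only knows
// categories, the concrete samples behind each category can be swapped
// out without retraining.
std::string suggest_next(const Transitions& t, const std::string& current) {
    auto it = t.find(current);
    if (it == t.end()) return "";
    std::string best;
    int best_count = 0;
    for (const auto& entry : it->second)
        if (entry.second > best_count) { best = entry.first; best_count = entry.second; }
    return best;
}
```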
@nebogeo I was having a discussion with a friend who was saying that this software uses artificial intelligence, while to me it seems to be concatenative synthesis driven by MFCC analysis. Given that the term AI is kind of general, what would be your take on this?
BTW super cool work, congrats!
@nesso the "brain" refers to the connections between similar blocks that speed up/guide the search with big sets of samples, so that's just an analogy (it builds a huge network of "synapses"). "Artificial intelligence" doesn't really have any well-defined meaning, but it doesn't use a neural network, and the "learning" part is defined by you rather than relying on someone else's data (which I think is more interesting)
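Roughly the idea, as a much simplified toy sketch (not the actual implementation — the distance function and containers here are placeholders): build the synapse graph offline, then walk it at search time instead of scanning every block.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Placeholder distance between two feature vectors.
double dist(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) d += (a[k] - b[k]) * (a[k] - b[k]);
    return d;
}

// Offline: connect each block to its k most similar blocks ("synapses").
std::vector<std::vector<std::size_t>> build_synapses(
        const std::vector<std::vector<double>>& blocks, std::size_t k) {
    std::vector<std::vector<std::size_t>> graph(blocks.size());
    for (std::size_t i = 0; i < blocks.size(); ++i) {
        std::vector<std::size_t> order;
        for (std::size_t j = 0; j < blocks.size(); ++j)
            if (j != i) order.push_back(j);
        std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
            return dist(blocks[i], blocks[a]) < dist(blocks[i], blocks[b]);
        });
        order.resize(std::min(k, order.size()));
        graph[i] = order;
    }
    return graph;
}

// Online: greedy walk from the previous match, comparing only
// neighbours, so much more material fits in a realtime budget.
std::size_t search(const std::vector<std::vector<double>>& blocks,
                   const std::vector<std::vector<std::size_t>>& graph,
                   const std::vector<double>& target, std::size_t start) {
    std::size_t best = start;
    double best_d = dist(blocks[best], target);
    bool improved = true;
    while (improved) {
        improved = false;
        for (std::size_t n : graph[best]) {
            const double d = dist(blocks[n], target);
            if (d < best_d) { best = n; best_d = d; improved = true; }
        }
    }
    return best;
}
```

The trade-off is that a greedy walk can land in a local minimum rather than the globally closest block, which is arguably part of the character of the output.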