(Deep) Chernoff Faces and sonification ramblings
Deep Chernoff Faces: https://www.ihatethefuture.com/2020/06/deep-chernoff-faces.html
Basically, draw a bunch of faces to represent and visualize data, taking advantage of pareidolia and the human brain's natural sensitivity to faces to surface nuanced relationships in the data.
I can't help but think about what the sound equivalent would be: human speech. Just like how our eyes are very sensitive to faces and face-like things, our ears are sensitive to speech and speech-like things. Speech synthesis, and subsets of it like vowel/formant synthesis, come to mind.
Despite being around for decades, sonification (data -> sound) is still a bit of a novelty outside of the computer music world. But I think there's great potential in making streams of complex multi-dimensional data babble, leveraging the pattern-matching, statistical-learning bits of our brain to hear insightful things. Maybe filter the babble down to a subset of phonemes from the listener's native language?
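As a rough sketch of what that babble could look like: map two data dimensions onto the first two vowel formants (F1 and F2) and render each data point as a short vowel-like tone. Everything here is illustrative and assumed, not from the post: the formant ranges, bandwidths, fundamental frequency, and the crude sawtooth glottal source are all placeholder choices.

```python
import numpy as np

SR = 16000  # sample rate in Hz (arbitrary choice)

def resonator(x, freq, bw, sr=SR):
    """Second-order IIR resonator: a single formant filter at `freq` Hz
    with bandwidth `bw` Hz."""
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * freq / sr
    a1, a2 = -2 * r * np.cos(theta), r * r
    gain = 1 - r  # rough amplitude normalization
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for n in range(len(x)):
        y[n] = gain * x[n] - a1 * y1 - a2 * y2
        y2, y1 = y1, y[n]
    return y

def vowel(f1, f2, dur=0.2, f0=120, sr=SR):
    """Synthesize a vowel-like tone whose first two formants are f1, f2."""
    n = int(dur * sr)
    t = np.arange(n) / sr
    source = 2 * (t * f0 % 1.0) - 1.0  # crude sawtooth glottal source
    return resonator(source, f1, 80, sr) + resonator(source, f2, 120, sr)

def babble(rows):
    """One vowel per data row; each row's two values (in 0..1) are scaled
    into hypothetical formant ranges: F1 250-850 Hz, F2 800-2500 Hz."""
    chunks = [vowel(250 + 600 * a, 800 + 1700 * b) for a, b in rows]
    return np.concatenate(chunks)

# three data points become three "syllables" of babble
audio = babble([(0.1, 0.9), (0.8, 0.2), (0.5, 0.5)])
```

The resulting `audio` array could be written to a WAV file or streamed out; higher-dimensional data could drive pitch, duration, or consonant-like noise bursts in the same spirit.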