It would begin with what I consider to be music theory first principles. I think this is taught incorrectly in institutions. This causes people to fixate on the wrong things, which leads to poopy-sounding computer-generated music.
Western Music Theory is centered around one instrument. Piano you say? BZZZ! WRONG. Don't let Big Piano fool you. It *all* starts with the human singing voice.
It all begins with melody: plainsong and Gregorian chant. Intervals, stepwise motion, etc. Basically, what goes into making a good melody.
From a single melodic line we move to multiple melodic lines running at once. This is known as counterpoint. The challenge to solve here is how to get many monophonic voices playing well together.
From counterpoint, the concept of harmony and harmonic structure naturally falls into place.
So, how does this relate to computer music?
Western Music theory, up until the late 20th century, naturally assumed human performers and human audiences. With computer music, it's computer performers and human audiences.
The trick with this translation is decoupling the audience from the performer.
Perhaps it starts with this series of questions:
What does it mean for a computer to compute?
How does the nature of computation relate to musical performance?
How can computational musical performance be relatable to our collective human perception of sound?
Then it's just a matter of retroactively applying voice -> melody -> counterpoint -> harmony in this new context.
@paul I think you would love James Tenney's Meta-Hodos and META Meta-Hodos if you haven't read them already. It was his pass at creating a music theory text from first principles: https://monoskop.org/images/1/13/Tenney_James_Meta-Hodos_and_Meta_Meta-Hodos.pdf
Excellent. I was quietly hoping you'd come up with something for me.
@paul This is the only thing of his I've read, but Denis Smalley also has some interesting ideas on the subject: ftp://infomus.it/pub/Papers/AestheticsPapers/OrganisedSound-Smalley.pdf
@paul Also I'd love to read this. Maybe you could serialize it in blog-like-form as you work on it?
If I do get around to working on this, it'll definitely be done in an incremental and informal way.
I may work it into my sndkit project. I think it can fit inside the scope. I'd want supplementary code to go with the ideas.
@paul I found it surprisingly hard to find music theory writeups which clearly explained why there are 12 notes. I’d have thought that would be lesson one, but apparently not.
@mathew It's a hard one to answer definitively. The scales in Western music, as well as their intonation, evolved quite a bit over the centuries. Most of what we consider to be music theory just comes from musical trends seen in 18th century Europe, and at that point the notion of a scale was very much an established axiom. The division of the octave into 12 equal parts comes from the invention of Equal Temperament, which is what we still use today.
If Quora is to be trusted, this answer provides some interesting historical insights into the matter: https://www.quora.com/How-did-Western-music-%E2%80%9Csettle%E2%80%9D-on-a-12-tone-scale?share=1
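To make the equal-temperament idea concrete, here's a small sketch (mine, not from the thread): dividing the octave into 12 equal frequency ratios means each semitone multiplies the frequency by the 12th root of 2. Starting from the standard A440 reference pitch:

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so 12 steps exactly double the frequency (one octave).
A4 = 440.0  # standard concert pitch, in Hz
names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

for n, name in enumerate(names):
    freq = A4 * 2 ** (n / 12)
    print(f"{name:2s} {freq:8.2f} Hz")
```

Twelve steps up from A440 lands exactly on 880 Hz, which is what makes the system close cleanly at the octave.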
@paul What I mean is, you can start from "frequency f and 2f sound good together because of how the ear works" and the concept of harmonics, and get from there to 12 approximately evenly spaced tones that sound good in various combinations. Then you can discuss temperament, and subsets of the 12 to form specific keys.
Sure, that's a common way to derive it. IMO it's a bit of an oversimplification compared to how it works in practice (especially for fretless instruments).