It would begin with what I consider to be music theory first principles. I think this is taught incorrectly in institutions. This causes people to fixate on the wrong things, which leads to poopy-sounding computer-generated music.
Western Music Theory is centered around one instrument. Piano you say? BZZZ! WRONG. Don't let Big Piano fool you. It *all* starts with the human singing voice.
It all begins with melody: plainsong/Gregorian chant. Intervals, step-wise motion, etc. Basically, what goes into making a good melody.
From one melodic line comes multiple melodic lines running at once. This is known as counterpoint. The challenge to solve here is how to get many monophonic sounds playing well together.
From counterpoint, the concept of harmony and harmonic structure naturally falls into place.
So, how does this relate to computer music?
Western Music theory, up until the late 20th century, naturally assumed human performers and human audiences. With computer music, it's computer performers and human audiences.
The trick with this translation is decoupling the audience from the performer.
Perhaps it starts with this series of questions:
What does it mean for a computer to compute?
How does the nature of computation relate to musical performance?
How can computational musical performance be relatable to our collective human perception of sound?
Then, it's just a matter of retroactively applying voice->melody->counterpoint->harmony in this new context.
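As a toy sketch of the "melody" layer only (my own illustration, not anything from this thread): if a good melody favors step-wise motion with the occasional small leap, a computer performer's crudest first pass might be a random walk over scale degrees. The scale choice, MIDI mapping, and step/leap bias here are all assumptions for the sake of the example.

```python
import random

# Whole/half-step pattern of a major scale, in semitones.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

def scale_degree_to_midi(degree, root=60):
    """Map a scale degree (0 = root, may be negative) to a MIDI note number."""
    octave, idx = divmod(degree, 7)
    return root + 12 * octave + sum(MAJOR_STEPS[:idx])

def stepwise_melody(length=8, seed=0):
    """Random-walk melody: mostly steps (+/-1 degree), rare leaps (+/-2)."""
    rng = random.Random(seed)
    degree, melody = 0, []
    for _ in range(length):
        melody.append(scale_degree_to_midi(degree))
        # Bias the walk toward step-wise motion, per the plainchant intuition.
        degree += rng.choice([-1, -1, 1, 1, 1, -2, 2])
    return melody

print(stepwise_melody())
```

Obviously this captures none of what makes a melody *good*; it only makes the step-wise-motion constraint concrete enough to compute with, which is the kind of translation the questions above are asking about.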
@paul I think you would love James Tenney's Meta-Hodos and Meta Meta-Hodos if you haven't read them already. It was his pass at creating a music theory text from first principles: https://monoskop.org/images/1/13/Tenney_James_Meta-Hodos_and_Meta_Meta-Hodos.pdf
Excellent. I was quietly hoping you'd come up with something for me.
Welcome to post.lurk.org, an instance for discussions around cultural freedom, experimental, new media art, net and computational culture, and things like that.