I’ve been wondering—why don’t we have an AI model that can take any piece of music, compress it into a super small “musical script” with parameters, and then generate it back so it sounds almost identical to the original? Kind of like MIDI or sheet music but way more detailed, capturing all the nuances. With modern AI, it seems like this should be possible. Is it a technical limitation, or are we just not thinking about it?
Not AI, but maybe a MIDI file (or another format that holds instrument playback data) played back with the same instruments? You don’t need AI to do this.
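To make that concrete, here's a rough back-of-envelope sketch of why an event-based format like MIDI is so much smaller than the audio it describes. The ~8-byte figure for one note is an approximation (delta-time plus note_on/note_off events); the point is just the order of magnitude:

```python
# MIDI stores performance events (which note, when, how hard),
# not audio samples, so a note costs a handful of bytes.
# Rough estimate: note_on + note_off, each ~3 data bytes plus
# a ~1-byte delta-time -> about 8 bytes per note.
midi_note_bytes = 8

# Raw CD-quality PCM for the same one-second note:
# 44,100 samples/sec * 2 bytes/sample * 2 channels.
pcm_bytes_per_second = 44_100 * 2 * 2  # 176,400 bytes

ratio = pcm_bytes_per_second // midi_note_bytes
print(f"PCM is ~{ratio:,}x larger than the MIDI events for one note")
```

The trade-off is exactly the one the question is circling: MIDI throws away the actual sound (timbre, room, vocal nuance) and keeps only the score-level instructions, so it only "sounds identical" if the playback instruments match.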