1. 26
  1. 4

    Cool. Here are some compositions I just made with it:

    It’s so freeing to be able to cut, copy, and paste, rather than having to re-enter each note individually on the phone’s keypad. Being able to separate measures/sections with blank lines is handy too.

    1. 2

      I made another composition I’m proud of, one that simulates three-voice chords by arpeggiating (cycling through) 64th notes. Here’s my Nokia Composer arrangement of the chorus (0:52–1:09) of “Countryside 2 (Lee Brothers – Glad I Am)” from Double Dragon Neon by Jake Kaufman.

      Something I learned while making this: arpeggiated chords for three voices seem to sound better when you cycle [High, Low, Mid] rather than [High, Mid, Low].
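      As a rough sketch of that trick (the `arpeggiate` helper and the note tokens are made up for illustration, not taken from the actual arrangement), cycling [High, Low, Mid] just means emitting the three chord tones in that fixed order as a run of 64th notes:

      ```javascript
      // Hypothetical sketch: expand a three-voice chord into a cycled run of
      // 64th notes, ordered [High, Low, Mid] as described above. Tokens are
      // Nokia Composer-style strings, with a "64" duration prefix per hit.
      function arpeggiate(high, low, mid, cycles) {
        const order = [high, low, mid];      // High, Low, Mid cycling order
        const out = [];
        for (let i = 0; i < cycles * 3; i++) {
          out.push("64" + order[i % 3]);     // each chord tone is a 64th note
        }
        return out;
      }

      // Two cycles of an illustrative C-major voicing:
      console.log(arpeggiate("e2", "c1", "g1", 2).join(" "));
      // → "64e2 64c1 64g1 64e2 64c1 64g1"
      ```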

      1. 1

        These are nice 👍

      2. 3

        This is fun. The syntax is actually very similar to what LilyPond uses.

        Who recognizes the melody I transcribed here?

        1. 2

          Well that’s lovely. I appreciate that the code isn’t even golfed excessively hard; it’s just a nicely designed regular language.

          Is it possible to render the output of a webaudio thing to a wav file, like you can render the output of an HTML canvas to a PNG?

          “Light My Fire” is a great choice of example song. Very 90s retro. :)

          1. 3

            > Is it possible to render the output of a webaudio thing to a wav file, like you can render the output of an HTML canvas to a PNG?

            It doesn’t look like AudioContext has a one-step solution the way the canvas does with HTMLCanvasElement.toDataURL(). However, the example on the MDN page for AudioContext.createMediaStreamDestination() shows how to hook up a MediaRecorder to an AudioContext, ultimately producing an Opus-encoded Blob that can be played by an <audio> element.
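            A minimal sketch of that wiring, assuming a browser environment (the oscillator source and the `<audio>` selector are placeholders for whatever you are actually synthesizing and displaying):

            ```javascript
            // Browser-only sketch of the MediaRecorder approach described above.
            // Note the result is an Opus-in-a-container Blob, not a WAV; a true
            // .wav would instead need OfflineAudioContext.startRendering() plus
            // hand-encoding the resulting AudioBuffer into a RIFF/WAVE file.
            const ctx = new AudioContext();
            const dest = ctx.createMediaStreamDestination();
            const recorder = new MediaRecorder(dest.stream);

            // Route whatever you are synthesizing into the destination node.
            const osc = ctx.createOscillator();
            osc.connect(dest);

            const chunks = [];
            recorder.ondataavailable = (e) => chunks.push(e.data);
            recorder.onstop = () => {
              const blob = new Blob(chunks, { type: recorder.mimeType });
              document.querySelector("audio").src = URL.createObjectURL(blob);
            };

            recorder.start();
            osc.start();
            setTimeout(() => { osc.stop(); recorder.stop(); }, 1000); // record 1 s
            ```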

            1. 1

              Nice! Thank you.