Researching Music Creation in MakeCode

Unique trying to steal Invalid’s “MakeCode Music Status” here /j.

As of late, I’ve been trying to learn how to compose music and translate that into my MakeCode projects. Naturally, that means the built-in music editor is what I rely on to create my music. Here’s where my issues begin though: the music editor is just not that great.

It does get the job done, but there are many things that could use improvement. That’s why I’ve decided to try to develop an alternative to the existing music editor within MakeCode Arcade: the Unique Tracker for MakeCode (UTM).

This isn’t meant to be an advertisement for the tool, since it’s still early, incomplete, and not a better composing method than the built-in music editor. The point of this post is to document what I’ve learned about how MakeCode’s music system actually functions under the hood, and why tools like this run into the problems they do. Anyone who’s tried pushing the music system beyond the basics has probably hit some of the same issues, so I’m putting this here as a research topic more than anything else.

UTM attempts a tracker-style workflow with six channels that can use any instrument or frequency. Nothing is locked to a single waveform. Notes are color-coded, and custom instruments can be designed using the sound-effect API, giving you the same control you’d expect when creating standalone SFX. The goal was to see whether MakeCode could support a more flexible composing environment similar to FamiTracker or Pico-8’s music editor, but with a simpler, more intuitive interface. Developing this has revealed both clear structural limitations in how the audio engine behaves, and some issues with how the UTM exports sound.

The biggest issue is that every single note needs to be generated as its own sound effect. Since the built-in melody system can’t store custom instrument data, the only option is to recreate entire SFX objects for every note placed in the tracker. This becomes extremely large, extremely fast. Less than thirty seconds of music already results in over seven hundred lines of code. That’s for one piece. This obviously isn’t a realistic storage method for games that are trying to run on hardware or have any memory considerations at all.

There’s also the problem that UTM’s note playback doesn’t match MakeCode Arcade’s native note system. Since UTM relies on custom sound expressions, the way frequencies are calculated and played back isn’t identical to how the built-in music engine handles tones. Certain waveforms, especially noise, don’t behave the same as their built-in counterparts, and some pitches come out slightly different than they do in the stock music editor. Even with the same values, the two systems simply don’t generate the same sound. Because of this, UTM can’t perfectly recreate the built-in instruments, or even guarantee the same pitch accuracy across different waveforms.

Another major limitation is long notes. When using sound effects for music, MakeCode can’t play notes longer than a quarter note when multiple channels are active. If a note needs to sustain across beats, you have to chain together repeated quarter notes of the same frequency. This works, but you can hear the separation between the segments, and the note’s volume can’t evolve across its full length the way it normally would. The result is that long notes lose their shape completely, which removes one of the few advantages custom instruments theoretically had over the built-in music editor.
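To make the workaround concrete, here’s a minimal sketch of the segmentation logic in plain TypeScript (not the actual UTM code; the function name is made up for illustration). Each segment is retriggered as its own sound effect with its own envelope, which is why the seams between segments are audible:

```typescript
// Split a sustained note into repeated quarter-note segments, since a
// note longer than one step can't be held while other channels are
// active. stepMs is the length of one quarter-note step.
function splitLongNote(durationMs: number, stepMs: number): number[] {
    const segments: number[] = [];
    let remaining = durationMs;
    while (remaining > 0) {
        // Each segment is at most one step long; the final segment
        // carries whatever time is left over.
        segments.push(Math.min(stepMs, remaining));
        remaining -= stepMs;
    }
    return segments;
}

// A 700 ms note at a 200 ms step becomes 200 + 200 + 200 + 100 ms,
// each played as a separate sound effect at the same frequency.
const segments = splitLongNote(700, 200);
```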

All of this is why UTM, in its current state, isn’t an improvement over the music editor. It’s heavier, less efficient, less consistent, and limited by the underlying audio scheduling system. To make it a real alternative, I’d need a way to generate a compressed representation of each song—something closer to the existing melody hex, but flexible enough to encode custom instrument data. The line count would need to be reduced by at least two-thirds for longer pieces to be viable.

This post is mainly here to consolidate what I’ve learned in just a few days of development and to provide a clearer picture of where MakeCode’s music limitations actually come from. Most of the issues stem from structural aspects of how the audio engine handles notes, timing, and mixing. But even with those limitations, I don’t think UTM is a lost cause or something that can’t become useful. If it can adapt to the way MakeCode handles audio and find ways to work within those constraints instead of fighting them, it could grow into a genuinely helpful composing tool.

There’s still a lot of room for improvement and experimentation. The more I understand how MakeCode processes sound, the more realistic the solutions become. UTM won’t surpass the built-in editor right now, but with the right compression approach and a better strategy for handling long notes, I can see it becoming a strong addition to the workflow rather than a replacement. This research phase is just the start.

If anyone else has dug into the audio engine or found different behavior, I’d be interested in hearing about it.

Below, you’ll find the UTM project link: UTM - Unique’s Tracker for MakeCode: A new music editor tool for MakeCode Arcade

10 Likes

I don’t know much about making websites, why does it say Claud AI?

2 Likes

Claude.AI is an AI tool that actually helped massively in the little details of MakeCode’s music system that I didn’t fully understand. It also helped bring my idea of a visually intuitive and professional music editor for MakeCode to life. And of course, it provides a website to use the tool publicly without the need for servers or anything like that.

(P.S. NOT A BIG FAN OF AI FOR MULTIPLE REASONS. But when it helps you learn and isn’t doing all the work for you, it can be a useful way to learn and develop new things.)

3 Likes

While I was at school, I finally sat down and tried to figure out a proper solution to convert MakeCode Arcade’s existing music.createSoundEffect–based format into something more usable for tracker-style music. After messing with different ideas, I ended up building a completely new format that fixes the biggest issue with the UTM system.

The biggest problem was the way notes were being played for songs. Each note that was played would look something like this:

music.play(music.createSoundEffect(
    WaveShape.Sine, 330, 330, 140, 255, 200,
    SoundExpressionEffect.None, InterpolationCurve.Linear
), music.PlaybackMode.InBackground)

This was incredibly problematic because notes were never defined up front: even if a note had already been played, the engine had to rebuild the entire sound effect exactly as shown. If the same sound effect was played 20 times, the whole thing was redefined 20 times.

In the new system, each unique sound effect is defined once in a global SFX table, and every stepped note just references it by ID.

// sfx table
    const SFX_WAVE: WaveShape[] = [
        WaveShape.Square, WaveShape.Sine, WaveShape.Sawtooth, WaveShape.Noise, WaveShape.Sawtooth, WaveShape.Square, WaveShape.Triangle, WaveShape.Triangle, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine
    ];
    const SFX_START_FREQ: number[] = [
        262, 50, 65, 262, 69, 277, 277, 262, 262, 330, 392, 415, 370, 349
    ];
    const SFX_END_FREQ: number[] = [
        262, 50, 65, 262, 69, 277, 277, 262, 262, 330, 392, 415, 370, 349
    ];
    const SFX_START_VOL: number[] = [
        255, 255, 255, 255, 255, 255, 255, 255, 140, 140, 140, 140, 140, 140
    ];
    const SFX_END_VOL: number[] = [
        0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255
    ];
    const SFX_DURATION: number[] = [
        200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200
    ];
    const SFX_EFFECT: SoundExpressionEffect[] = [
        SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None
    ];
    const SFX_CURVE: InterpolationCurve[] = [
        InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear
    ];
    const SFX_COUNT = SFX_WAVE.length;

Although that may look like a lot of code, it’s essential: it removes the need for hundreds of lines of the same code repeated over and over again.

The system will also determine the BPM and follow each step of the song at that tempo, without the need to repeat pause(ms) a million times. But the most important addition is that instead of hundreds of lines of sound effects defining just one song, it can all be compressed into a single hex code.
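For reference, converting a tempo into a step length is simple arithmetic. This is just an illustrative sketch (not the UTM’s actual code), assuming one step per quarter note; under that assumption, a 200 ms step corresponds to 300 BPM:

```typescript
// Convert a tempo in BPM into the length of one step in milliseconds,
// assuming one step per quarter note (one beat per step).
function stepMsFromBpm(bpm: number): number {
    return Math.round(60000 / bpm);
}

// 120 BPM -> 500 ms per step; 300 BPM -> a 200 ms step.
const stepMs = stepMsFromBpm(300);
```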

// compressed song data
    const desertRaceData = hex`
        0500010101020500010101020200030300040405000101010202000303000404
        0105050001010102050001010102030003060400040407060001010102060300
        0307040004040602050705000101010205000101010202000303000404050001
        0101020200030300040401050500010101020500010101020300030303000404
        0600010101020302000304000404030105060001010102080600010101020803
        0003080400040408060001010102080300030904000404090205090600010101
        020a0600010101020a0300030a040004040a0600010101020a0300030a040004
        040b02050b0600010101020a0600010101020a0300030a040004040a06000101
        01020a0300030a040004040a02050a0500010101020500010101020300030303
        0004040600010101020302000304000404030105050001010102050001010102
        0200030300040405000101010202000303000404010505000101010205000101
        0102030003060400040407060001010102060300030704000404060205070500
        0101010205000101010202000303000404050001010102020003030004040105
        0500010101020500010101020300030303000404060001010102030200030400
        0404030105060001010102080600010101020803000308040004040806000101
        0102080300030c040004040c02050c0600010101020a0600010101020a030003
        0a040004040a0600010101020a0300030a040004040d02050d06000101010209
        0600010101020903000309040004040906000101010209030003090400040409
        0205090600010101020606000101010207030003060400040407060001010102
        06030003070400040406020507
    `

That hex code is comparable in size to the hex codes generated by the built-in music editor for longer songs, which means this goal can now be considered reached!
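To spell out the layout of that hex data: each step begins with a count byte, followed by that many SFX IDs, and the player pauses one step length after triggering them. Here’s a stand-alone decoder sketch in plain TypeScript, using a number array in place of MakeCode’s hex Buffer (the function name is illustrative):

```typescript
// Decode the step format: [count, id, id, ..., count, id, ...].
// Returns one array of SFX IDs per step, mirroring what the playback
// engine triggers before each pause.
function decodeSteps(data: number[]): number[][] {
    const steps: number[][] = [];
    let i = 0;
    while (i < data.length) {
        const count = data[i++];          // number of notes in this step
        const ids: number[] = [];
        for (let j = 0; j < count; j++) {
            ids.push(data[i++]);          // SFX table index to trigger
        }
        steps.push(ids);
    }
    return steps;
}

// The song above opens with 05 00 01 01 01 02, i.e. one step that
// triggers SFX IDs 0, 1, 1, 1, and 2 simultaneously.
const opening = decodeSteps([0x05, 0x00, 0x01, 0x01, 0x01, 0x02, 0x02, 0x00, 0x03]);
```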

There’s more to do obviously, but this is really great so far. I’m looking forward to developing songs on this tracker. This new system hasn’t been fleshed out fully just yet but it will be coming shortly.


Full-Code Snippet for New HEX Song Compression

//% color="#ff0000" weight=100 icon="\uf025" block="UTMMusic"
namespace utmMusic {
    const STEP_MS = 200

    // sfx table
    const SFX_WAVE: WaveShape[] = [
        WaveShape.Square, WaveShape.Sine, WaveShape.Sawtooth, WaveShape.Noise, WaveShape.Sawtooth, WaveShape.Square, WaveShape.Triangle, WaveShape.Triangle, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine, WaveShape.Sine
    ];
    const SFX_START_FREQ: number[] = [
        262, 50, 65, 262, 69, 277, 277, 262, 262, 330, 392, 415, 370, 349
    ];
    const SFX_END_FREQ: number[] = [
        262, 50, 65, 262, 69, 277, 277, 262, 262, 330, 392, 415, 370, 349
    ];
    const SFX_START_VOL: number[] = [
        255, 255, 255, 255, 255, 255, 255, 255, 140, 140, 140, 140, 140, 140
    ];
    const SFX_END_VOL: number[] = [
        0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255
    ];
    const SFX_DURATION: number[] = [
        200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200
    ];
    const SFX_EFFECT: SoundExpressionEffect[] = [
        SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None
    ];
    const SFX_CURVE: InterpolationCurve[] = [
        InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear
    ];
    const SFX_COUNT = SFX_WAVE.length;

    // compressed song data
    const desertRaceData = hex`
        0500010101020500010101020200030300040405000101010202000303000404
        0105050001010102050001010102030003060400040407060001010102060300
        0307040004040602050705000101010205000101010202000303000404050001
        0101020200030300040401050500010101020500010101020300030303000404
        0600010101020302000304000404030105060001010102080600010101020803
        0003080400040408060001010102080300030904000404090205090600010101
        020a0600010101020a0300030a040004040a0600010101020a0300030a040004
        040b02050b0600010101020a0600010101020a0300030a040004040a06000101
        01020a0300030a040004040a02050a0500010101020500010101020300030303
        0004040600010101020302000304000404030105050001010102050001010102
        0200030300040405000101010202000303000404010505000101010205000101
        0102030003060400040407060001010102060300030704000404060205070500
        0101010205000101010202000303000404050001010102020003030004040105
        0500010101020500010101020300030303000404060001010102030200030400
        0404030105060001010102080600010101020803000308040004040806000101
        0102080300030c040004040c02050c0600010101020a0600010101020a030003
        0a040004040a0600010101020a0300030a040004040d02050d06000101010209
        0600010101020903000309040004040906000101010209030003090400040409
        0205090600010101020606000101010207030003060400040407060001010102
        06030003070400040406020507
    `;

    // playback engine

    function playSfx(id: number) {
        if (id < 0 || id >= SFX_COUNT) return
        music.play(
            music.createSoundEffect(
                SFX_WAVE[id],
                SFX_START_FREQ[id],
                SFX_END_FREQ[id],
                SFX_START_VOL[id],
                SFX_END_VOL[id],
                SFX_DURATION[id],
                SFX_EFFECT[id],
                SFX_CURVE[id]
            ),
            music.PlaybackMode.InBackground
        )
    }

    function playSongData(data: Buffer) {
        let i = 0
        while (i < data.length) {
            const count = data[i++]
            for (let j = 0; j < count; j++) {
                const sfxId = data[i++]
                playSfx(sfxId)
            }
            pause(STEP_MS)
        }
    }

    //% blockId=play_desert_race_song_compact
    //% block="play UTM desert race song (compact)"
    export function play_desert_race_song_compact() {
        playSongData(desertRaceData)
    }
}

1 Like

The main 2 issues I have with the current music system are:

  1. The backend code could totally support custom instruments, which I think I’m gonna look into as a side project, but it just doesn’t. Each instrument is stored in the beginning section of the hex code, so I could totally just edit that and see what happens. I think I’ll make something like my Makecode font editor for this, where it parses the data and I can edit it in a way that makes sense. If I make it good enough that someone other than me could understand what the heck is going on, then I’ll release it. Maybe. It’s currently no more than an idea. One of many.
  2. You can’t start a note of a certain instrument and then have another note of that instrument start or end while that note is playing. This is because notes are stored in batches with a single start and end time, followed by the notes to play, which is yet another space-saving strategy.

When I made my rhythm game, I gained a vague understanding of how the whole built-in music system works, so I would love to help with this project if you have any questions. I would heavily recommend reading the built-in music code and at least seeing how the big picture works, to give you some ideas and stuff at least. The main improvement it makes here is that the instruments are only stored once at the start of the data, which is the main issue with your current system from what you’ve written. Anyways, very cool, and I hope you have fun with this!

2 Likes

This is great research to have, especially for anyone else who attempts the same project! And from what I’ve seen, this is the farthest anyone has gotten to a custom music editor :slightly_smiling_face:
I really can’t add much to the technical details, but I can say that I’d love to make a plug-in for a Makecode custom music editor that automatically makes SAMMY sing in pitch and rhythm with the song. If this project (or another user’s version) becomes viable, let me know! :eyes:

3 Likes

Crazy fact: the new compression method makes export code for songs 97 percent smaller than previously!

I’ll be posting a YouTube tutorial on how to install the UTMMusic extension—no GitHub required, since the files need to stay editable. Then I’ll port the UTM website to GitHub so updating it will be a lot simpler.

4 Likes

This looks pretty interesting! I’ll be tracking this topic because I’m a big fan of composing music and not a very big fan of the makecode music editor that I compose my music on. Btw, @Unique, it’s awesome to have you back. The quality of your projects never ceases to amaze me.

4 Likes

so peak

3 Likes

@Unique you might get way more mileage out of using the existing song format instead of inventing your own. the instrument format for songs is way more flexible than the createSoundEffect function.

here’s some sample code:

const myInstrument = new music.sequencer.Instrument();

/**
 * An instrument is defined with the following components:
 *      - A waveform
 *      - An amplitude (volume) envelope
 *      - An optional pitch envelope
 *      - An optional amplitude low frequency oscillator (tremolo)
 *      - An optional pitch low frequency oscillator (vibrato)
 * 
 * For more information on envelopes, see https://en.wikipedia.org/wiki/Envelope_(music)
 * 
 * For more information on LFOs, see https://en.wikipedia.org/wiki/Low-frequency_oscillation
 */
myInstrument.waveform = 1;                  // The waveform. See https://arcade.makecode.com/developer/sound for a list of values

myInstrument.ampEnvelope.amplitude = 1024;  // A value from 0-1024 which controls the volume of the instrument
myInstrument.ampEnvelope.attack = 400;      // How long in milliseconds until the sound reaches its max volume
myInstrument.ampEnvelope.decay = 100;       // How long in milliseconds until the sound goes from its max volume to its sustain volume
myInstrument.ampEnvelope.sustain = 512;     // A value from 0-1024 which controls the sustain volume of the instrument. This is relative to the amplitude
myInstrument.ampEnvelope.release = 100;     // How long in milliseconds until the sound goes from its sustain volume to 0 after the gate ends

myInstrument.pitchEnvelope.amplitude = 0;   // A value in HZ for how much this pitch envelope will affect the instrument's pitch. 0 Means no pitch envelope
myInstrument.pitchEnvelope.attack = 100;    // How long in milliseconds until the sound reaches its max pitch
myInstrument.pitchEnvelope.decay = 100;     // How long in milliseconds until the sound goes from its max pitch to its sustain pitch
myInstrument.pitchEnvelope.sustain = 512;   // A value from 0-1024 which controls the sustain pitch of the instrument. This is relative to the amplitude
myInstrument.pitchEnvelope.release = 100;   // How long in milliseconds until the sound goes from its sustain pitch to its normal pitch after the gate ends

myInstrument.ampLFO.amplitude = 0;          // A value from 0-1024 which controls how much this LFO affects the volume. 0 Means no LFO
myInstrument.ampLFO.frequency = 0;          // A value in HZ which controls how fast the LFO is

myInstrument.pitchLFO.amplitude = 20;       // A value in HZ which controls how much this LFO affects the pitch. 0 Means no LFO
myInstrument.pitchLFO.frequency = 5;        // A value in HZ which controls how fast the LFO is


/**
 * Plays a note using an instrument.
 * 
 * The gateLength here is how long the note plays for in milliseconds.
 * In synthesizer terms, you can think of it as how long a key is pressed
 */
function playNote(instrument: music.sequencer.Instrument, midiNote: number, gateLength: number) {
    const frequency = music.lookupFrequency(midiNote);

    // Render instrument outputs sound instructions which can be played by MakeCode's synth engine
    const instructions = music.sequencer.renderInstrument(
        instrument,
        frequency,
        gateLength,
        music.volume()
    );

    // Play instructions plays the actual sounds
    music.playInstructions(0, instructions);
}

const notes = [
    60,
    62,
    64,
    65,
    67,
    69,
    71,
    72
]

controller.A.onEvent(ControllerButtonEvent.Pressed, () => {
    for (const note of notes) {
        playNote(myInstrument, note, 500);
        pause(500);
    }
})

and if you want to see the values that the built-in instruments for the music editor use as inspiration, they are all defined here:

2 Likes

This is actually quite interesting! I’ll be messing around with this instrument format to see if the UTM can adopt it, especially since I see some cool settings and fixes for limitations too.

3 Likes

Public Testing

This reply is dedicated to explaining how you can use this tool. It isn’t very complex to use and implement luckily.

This version of the UTM (v1.2.1) still uses the custom UTMF format, though this may change in the future. Future versions of the software will still include the option to export your songs in UTMF.


Step 1:
Paste this code into a custom.ts file within your project. This will create an editable extension you can use to add songs to your game from the UTM.

//% color="#ff0000" weight=100 icon="\uf025" block="UTM Music"
namespace utmMusic {

    // This enum starts empty. The tracker’s auto-merge will
    // inject new entries like:
    //
    //   //% block="My Song"
    //   MY_SONG,
    //
    export enum UTMSong {
        //% block="newAgentSuspenceLoop"
        NEWAGENTSUSPENCELOOP,
    }

    interface SongData {
        stepMs: number;
        sfxWave: WaveShape[];
        sfxStartFreq: number[];
        sfxEndFreq: number[];
        sfxStartVol: number[];
        sfxEndVol: number[];
        sfxDuration: number[];
        sfxEffect: SoundExpressionEffect[];
        sfxCurve: InterpolationCurve[];
        data: Buffer;
    }

    // The auto-merge script looks for this exact function + switch + default.
    // It will inject extra `case UTMSong.X:` blocks before the default.
    function getSongData(song: UTMSong): SongData {
        switch (song) {
            // (cases will be auto-added here by the tracker)

            case UTMSong.NEWAGENTSUSPENCELOOP:
                return {
                    stepMs: 200,
                    sfxWave: [WaveShape.Square, WaveShape.Sine, WaveShape.Sawtooth, WaveShape.Square, WaveShape.Noise, WaveShape.Sawtooth, WaveShape.Square],
                    sfxStartFreq: [262, 50, 65, 131, 262, 69, 277],
                    sfxEndFreq: [262, 50, 65, 131, 262, 69, 277],
                    sfxStartVol: [255, 255, 255, 255, 255, 255, 255],
                    sfxEndVol: [0, 0, 0, 0, 0, 0, 0],
                    sfxDuration: [200, 200, 200, 1600, 200, 200, 200],
                    sfxEffect: [SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None, SoundExpressionEffect.None],
                    sfxCurve: [InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear, InterpolationCurve.Linear],
                    data: hex`
                        06 00 01 01 01 02 03 05 00 01 01 01 02 02 00 04
                        03 00 05 05 04 01 01 01 02 03 00 04 00 03 00 05
                        05 01 06
                    `
                };

            default:
                return null;
        }
    }

    function playSfx(songData: SongData, id: number) {
        const sfxCount = songData.sfxWave.length;
        if (id < 0 || id >= sfxCount) return;

        music.play(
            music.createSoundEffect(
                songData.sfxWave[id],
                songData.sfxStartFreq[id],
                songData.sfxEndFreq[id],
                songData.sfxStartVol[id],
                songData.sfxEndVol[id],
                songData.sfxDuration[id],
                songData.sfxEffect[id],
                songData.sfxCurve[id]
            ),
            music.PlaybackMode.InBackground
        );
    }

    function playSongData(songData: SongData) {
        let i = 0;
        const data = songData.data;
        while (i < data.length) {
            const count = data[i++];
            for (let j = 0; j < count; j++) {
                const sfxId = data[i++];
                playSfx(songData, sfxId);
            }
            pause(songData.stepMs);
        }
    }

    //% blockId=play_utm_song
    //% block="play UTM song %song"
    export function playSong(song: UTMSong) {
        const songData = getSongData(song);
        if (songData) {
            playSongData(songData);
        }
    }

    //% blockId=play_utm_song_loop
    //% block="play UTM song %song in loop"
    export function playSongLoop(song: UTMSong) {
        const songData = getSongData(song);
        if (songData) {
            forever(function () {
                playSongData(songData);
            });
        }
    }
}

Step 2:
Visit the UTM Tool on GitHub. Here, you’ll use the tracker to make songs using custom instruments and a sleek composing format.


Step 3:
Once you’re done composing, press export to get the rendered song.

Once you select export, an export menu will open. The way exports work is pretty simple:

  1. Copy the code that’s already in your UTM extension code file
  2. Press “Merge Song into Extension” and it’ll automatically insert the song you created into the extension code. The updated code can be found in the instructions text box below the merge section.
  3. Replace the contents of your extension file with the updated extension code.

After completing these steps, you’ll get an extension with a single play UTM music block with a dropdown containing all your songs.

This isn’t finished and still needs a lot of polish, but it’s solid, and I already slightly prefer it over the current built-in editor. Either way, I encourage you to try it out and make some cool songs with the UTM!

Stay unique!

4 Likes

I’m also working on phasing out UTMF as the main format for making music in the tracker. The built-in instrument system seems like a perfect solution.

This helps me realize that although the built-in music editor may be a little lackluster or limited in functionality, the instrument system itself is quite useful.

3 Likes

Here’s a strong demo for the UTM, this song has a couple cool things:

  • It uses all six channels for sound.
  • Uses longer note durations alongside quarter notes, making for more interesting rhythms.
  • Custom instruments have the ability to change volume from beginning to end.

6 Likes

WOW!! This is really impressive, it sounds amazing!

3 Likes