
SPI write crashes the micro:bit. I'm baffled

I have a situation in which a write to SPI seems to crash the micro:bit. I've been looking at this for an hour or two now and I really cannot see what is wrong.

My MakeCode extension to drive a Nokia 3310 LCD screen (84 x 48 pixels) is in this repository:
https://github.com/MTKilpatrick/pxt-nokialcd

The TS file has shims for lots of functions that I have transferred to the .cpp file to speed up all the graphics. I’m transferring all the SPI routines to C as well.

The extension begins by calling init() in JS, which in turn calls SPIinit() (in the CPP) to initialise the SPI port. init() then sets up the LCD display with several commands; these are selected by setting digital pin P16 low (LCD_CMD), while data writes to the screen are performed with P16 high (LCD_DAT). Chip select is on P12 and reset is on P8.

The first set-up call is writeFunctionSet(), followed by lcdExtendedFunctions(), which puts the display into extended command mode in order to set up a few parameters. writeFunctionSet() exists in both the JS and C files.

The extension makes the micro:bit hang if I use shim=nokialcd::writeFunctionSet to route the call to the C version of writeFunctionSet(). If I remove the shim so that the JS version runs instead, the whole thing works.

If you look at the code in both the TS and CPP files, writeFunctionSet() is identical: it merely sets P12 and P16 to the appropriate states, calls spi.write() with a value, and then reverts P12 and P16.
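
Here is a sketch of the TS side as I understand it (reconstructed from memory, so the exact body may differ from the repo). With the //% shim line present, the compiled program calls the C++ version and the TS body only runs in the simulator; deleting the shim line makes the TS body run on the device:

//% shim=nokialcd::writeFunctionSet
export function writeFunctionSet(v: number, h: number): void {
    pins.digitalWritePin(DigitalPin.P12, 0)   // select the LCD
    pins.digitalWritePin(DigitalPin.P16, 0)   // P16 low = LCD_CMD (command mode)
    pins.spiWrite(0x20 | (v << 1) | (h & 1))  // PCD8544 "function set" command
    pins.digitalWritePin(DigitalPin.P16, 1)   // revert to LCD_DAT
    pins.digitalWritePin(DigitalPin.P12, 1)   // deselect
}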

Please note that spi.write() is also used in writeSPIBuf() in the CPP file, which transfers the 504-byte buffer to the screen. That function works; I have tested it.
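
For comparison, a TS-only equivalent of that buffer push would look roughly like this (writeSPIBufTS is a hypothetical name for illustration; the real writeSPIBuf() is the C++ version, looping over spi.write()):

function writeSPIBufTS(buf: Buffer): void {
    pins.digitalWritePin(DigitalPin.P12, 0)   // select the LCD
    pins.digitalWritePin(DigitalPin.P16, 1)   // P16 high = LCD_DAT (data mode)
    for (let i = 0; i < buf.length; i++) {
        pins.spiWrite(buf[i])                 // one byte of the 504-byte frame
    }
    pins.digitalWritePin(DigitalPin.P12, 1)   // deselect
}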

What is going on here? If I comment out the spi.write(0x20 | (v << 1) | (h & 1)); line in writeFunctionSet(), the init() routine completes. If I leave the line in, the micro:bit hangs at that point.

Update: the spi.write() function in C only works if, beforehand, I have made at least one call to pins.spiWrite() in the JS code.

I tested this by removing all of the set-up calls in init() that follow the SPI port initialisation, and then calling writeSPIBuf() (in C) to push the 504-byte buffer to the screen. The transfer did not complete. It only completed once I uncommented at least one JS function call that writes a byte to the SPI.
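
In other words, a stripped-down init() along these lines (a sketch of my test, assuming writeSPIBuf() takes no arguments and that a dummy byte works as well as a real set-up command) only completes with the JS-side priming write present:

export function init(): void {
    SPIinit()             // C++ initialisation of the SPI port
    pins.spiWrite(0x00)   // one JS-side write; without it, the C-side writes below hang
    writeSPIBuf()         // C++ transfer of the 504-byte buffer to the screen
}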

OK, so this Nokia LCD is supposed to use spi.format(8,0), not (8,3). But it works with either, presumably because modes 0 and 3 both latch MOSI on the rising clock edge, so the controller samples the same bits either way. Anyway…

Now that I have moved all of the SPI write operations out of the JavaScript and into C, I find that the thing crashes at the init() stage unless I put a pins.spiFormat() call, a pins.spiFrequency() call, or both in the JavaScript init():

export function init(): void {
    pins.spiFormat(8, 0)
    pins.spiFrequency(1000000)
    SPIinit()
}

Even though in the CPP file I have this:

namespace nokialcd {
    SPI spi(mbit_p15, mbit_p14, mbit_p13);
    DigitalOut LCD_CE(mbit_p12);
    DigitalOut LCD_RST(mbit_p8);
    DigitalOut LCD_DC(mbit_p16);
    static Buffer bytearray = NULL;
    static bool state = true;
    static int lcdDE = 0;
    //%
    void SPIinit() {
        LCD_CE = 1;
        lcdDE = 0;
        LCD_RST = 0;
        spi.format(8,0);
        spi.frequency(1000000);
        // ...and so on...

Which clearly initialises the SPI device. Why does the format or the frequency need to be set on the JS side as well? That makes no sense to me.