DSP

Boot SHARC+ DSP Over UART

Published 6 Feb 2026. By Jakob Kastelic.

Most microcontroller projects can be built and programmed without vendor IDEs or expensive debug probes. The ADSP-21569 SHARC DSP, at first glance, appears to be an exception: the official workflow assumes Analog Devices’ CrossCore Embedded Studio (CCES) IDE and a dedicated, costly debug probe. But instead we can just boot the thing over plain UART—the hardware supports it, the manuals describe it, and the tools exist to generate the boot streams.

These are my notes getting started with the Analog Devices SHARC processor installed on the EV-21569-SOM evaluation board, plugged into the EV-SOMCRR-EZLITE carrier board. Since there is very little information available online about these chips, compared to the more “usual” parts from ST or NXP, I hope the writeup will be of some use to someone.

Hardware setup

Install the SOM board into the SOMCRR carrier board and establish the Default Configuration specified in the EV-SOMCRR-EZLITE Manual.

Connect the provided 12V 1.6A power supply to the POWER IN connector on the SOMCRR board. Connect a USB A-to-C cable to P16, labeled USB-C DA (debug agent).

Software setup

Although the debug tools are relatively expensive, Analog Devices offers a board-locked license that allows the CCES IDE to be used without additional cost after purchasing the evaluation boards.

Install the CrossCore Embedded Studio from the Analog Devices website. I’m using “Product version” 3.0.2.0 (w2410301) with “IDE version” 3.0.2029.202410301506.

Also install the EV-21569-EZKIT Board Support Package, Current Release (Rev. 3.0.0), from the SOM website; this provides some additional code examples to start from.

Code sharing is less common in the DSP ecosystem compared to some other software domains. However, basic reference examples are included within the IDE. To access them, follow these steps:

File
  New
    Project ...
    C/C++
      CrossCore Project
        Next >
        (enter project name, say "test")
        Next >
        (select ADSP-21569 revision "any")
        Next >
        Finish
"Click here to browse all examples."

Search for blink under Keywords, select the LED_Blink example for EV-2156x EZ-KIT v2.0.0 [2.0.0], and press “Open example”. Now we can delete the "test" project created just before: right click it in the Project Explorer, select “Delete”, and also check “Delete project contents on disk”.

Only the LEDBlink project remains; compile it (Project -> Build All) and run it (Run -> Debug, followed by Run -> Resume). With some luck, all three yellow LEDs on the SOM board will start blinking. Great!

Boot from UART

Previously we ran the code through the “Debug Agent”, which is an ATSAME70Q21B-CFNT on the SOM carrier board. Now, let’s try to download the same program through the UART interface.

Locate the P14 pin header on the right side of the carrier board, right under the Analog Devices logo. Locate the two top right pins, labelled SPI0 CLK and MISO. Cross-referencing with the datasheet, we learn that these two pins are UART0_TX and UART0_RX, respectively. Connect them to a 3.3V UART to USB adapter (I’m using the UMFT230XB-01 adapter).

The Processor Hardware Reference[1] describes the UART Slave Boot Mode. In particular, the part supports Autobaud Detection which works as follows:

  1. Send the @ character (0x40) to the UART RXD input.

  2. DSP returns four bytes: 0xBF, UART_CLK [15:8], UART_CLK [7:0], 0x00.

  3. Send the entire boot stream.

Step 1. Let’s attempt this in Python:

import serial
s = serial.Serial('COM20', baudrate=115200, timeout=1)
s.write(b'@')
res = s.read(4)

If we print(res), we get back b'\xbf2\x00\x00', i.e. the bytes 0xBF 0x32 0x00 0x00, which matches the expected response format. Great, the processor succeeded with the autobaud detection.

Step 2. Next, we need to convert the ELF file produced by the IDE (LEDBlink_21569.dxe) into a form suitable for loading into the chip. I’m running the elfloader (part of the CCES installation) from WSL2 as follows:

/mnt/c/analog/cces/3.0.2/elfloader.exe \
   -proc ADSP-21569 -b UARTHOST -f ASCII -Width 8 -verbose \
   LEDBlink_21569.dxe -o blink.ldr

Step 3. Back in Python, we can take the file and shove it into the DSP one byte at a time:

with open('blink.ldr', 'r') as f:
    for line in f:
        # each line of the ASCII-format .ldr file holds one byte as a hex string
        d = int(line.strip(), 16)
        s.write(bytes([d]))

After a few seconds of blinking on the RX and TX LEDs on the SOM, indicating the UART0 activity, we see the familiar blinking of the yellow LED4, LED6, and LED7, as before.

That’s great news, as it provides a way to program the part without any specialized hardware tools. For example, in an embedded application an application processor could send the DSP its boot code via UART. It’s a bit slow, but presumably the same process could work over a faster interface like SPI2.
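To sketch what that could look like on the UART side, here is a minimal routine for a Linux-based application processor, following the autobaud-plus-stream sequence described above. The device path, baud rate, function name, and error handling are my own assumptions rather than anything prescribed by ADI:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int boot_sharc_over_uart(const char *dev, const char *ldr_path)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                /* raw bytes, 8N1, no flow control */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    /* Step 1: send the autobaud character. */
    write(fd, "@", 1);

    /* Step 2: expect 0xBF, UART_CLK[15:8], UART_CLK[7:0], 0x00. */
    unsigned char resp[4];
    ssize_t got = 0;
    while (got < 4) {
        ssize_t n = read(fd, resp + got, 4 - got);
        if (n <= 0) { close(fd); return -1; }
        got += n;
    }
    if (resp[0] != 0xBF) { close(fd); return -1; }

    /* Step 3: stream the boot image; the ASCII-format .ldr file holds
     * one hex byte per line, exactly as in the Python loop above. */
    FILE *f = fopen(ldr_path, "r");
    if (!f) { close(fd); return -1; }
    char line[32];
    while (fgets(line, sizeof line, f)) {
        unsigned int byte;
        if (sscanf(line, "%x", &byte) == 1) {
            unsigned char b = (unsigned char)byte;
            write(fd, &b, 1);
        }
    }
    fclose(f);
    close(fd);
    return 0;
}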

Bad news?

If we unplug the USB DA connection to the SOMCRR board, we find that the Python code example above does not communicate with the board anymore (no response to the Autobaud command). But if we plug the USB cable back into the debug agent, then it all works fine. It works even with the CCES program closed.

Silly mistake! The USB connector was the only shared ground between the computer and the DSP board. After attaching a ground wire between the USB-to-serial adapter and P14, all is well again.

Preload executable

Let’s take a closer look at how the IDE runs the code. Go into Run -> Debug Configurations, and add a new “Application with CrossCore Debugger”. Click through the wizard and notice that it adds a “preload” ELF file such as the following one, to be run before the blink code runs:

C:\analog\cces\3.0.2\SHARC\ldr\ezkit21569_preload.dxe

What does this do? Let’s navigate to the following location in the CCES documentation:

CrossCore® Embedded Studio 3.0.2 >
  Integrated Development Environment >
    Debugging Targets >
      Debugging ADSP-SC5xx SHARC+ and ARM Projects

The first sentence says that “preload files are used for the ADSP-SC5xx EZ-KITs and processors only”. These files are “equivalent to initcodes, but used during the debugging phase of development”. From my reading of this section, it appears that the preload files configure external memory, if using it, and only for the multi-core parts. Thus, on ADSP-21569 we should have no use for it.

Nevertheless, the toolchain comes with pre-built init code executables even for the ADSP-21569. In fact, the source code is provided for two of them:

C:\analog\cces\3.0.2\SHARC\ldr\21569_init
C:\analog\cces\3.0.2\SHARC\ldr\21569_preload

One of these is an “initialization code project” and the other a “CCES preload code project”. Clear as mud! I think the init version is used for production applications, while the preload version is used only when running code directly from CCES, but they do more or less the same thing: configure the clocks and the DDR.

Clock configuration is the reason why these files are provided not just for ADSP-SC5xx, but also for the 2156x. However, in this basic tutorial we will not modify the clocks, so the init code is not, in fact, needed after all.

Preload exe vs init code

The elfloader utility supports an -init switch, and the processor boot stream itself supports initialization blocks. The hardware reference manual explains:

An initialization block instructs the boot kernel to perform a function call to the target address after the entire block has loaded. The function called is referred to as the initialization code (Initcode) routine.

Traditionally, an Initcode routine is used to set up the system PLL, bit rates, wait states, and the external memory controllers. Boot time can be significantly reduced when an init block is executed early in the boot process.

We read in the CCES documentation that the init code can also be packaged into a separate project/program, instead of being a part of the full application’s boot stream:

The preload executables are simple programs that invoke the init code functionality to configure the processor prior to loading the main application.

But the CCES documentation warns:

Do not use the preload executables when building bootable LDR files with the -init switch. The preload executables are not configured for use for LDR initialization blocks.

In other words, to use the -init switch, we should compile the 21569_init version of the initialization project.

However, no init code is needed for the blink project, since we do not need to adjust any of the clock parameters—the default values work.

CCES scatters various related files all over the file system, making it seem more complicated to build a project than is necessary. If we collect all the files together, it’s not so bad. Here’s what the IDE-provided Blink example needs to put together:

OBJ = \
  ConfigSoftSwitches_EV_SOMCRR_EZLITE_LED_OFF.doj \
  ConfigSoftSwitches_EV_SOMCRR_EZLITE_LED_ON.doj \
  SoftConfig_EV_21569_SOM_Blink1.doj \
  SoftConfig_EV_21569_SOM_Blink2.doj \
  adi_gpio.doj \
  adi_initialize.doj \
  app_IVT.doj \
  app_heaptab.doj \
  app_startup.doj \
  main.doj \
  pinmux_config.doj \
  sru_config.doj

These files are either assembly files, like app_IVT.s and app_startup.s, or plain C files, like all the rest of them. Some are auto-generated, like adi_initialize.c (initializes SRU and pin-mux), app_heaptab.c, pinmux_config.c, and sru_config.c. The “Soft Config” files are essentially copies of each other, two files to turn the LED on, and two files to turn it off (seriously—the only difference in these files is the “data” written to the LED). This leaves just main.c, and the GPIO driver. By the standards of modern SoCs with tens of peripherals, all of this is almost trivial.

The compilation rules are as simple as can be. For the sake of explicitness, here they are:

blink.ldr: blink.dxe
	$(ELFL) $(ELFFLAGS) $< -o $@

blink.dxe: $(OBJ)
	$(CC) $(LDFLAGS) -o $@ $^

%.doj: %.s
	$(ASM) $(ASFLAGS) -o $@ $<

%.doj: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

We have already met the ELFL (elfloader) step above: it creates the boot stream from an ELF input. The rest are standard linking, assembly, and compilation steps. The toolchain is installed with CCES and is unfortunately entirely closed-source:

CC   = /mnt/c/analog/cces/3.0.2/cc21k.exe
ASM  = /mnt/c/analog/cces/3.0.2/easm21k.exe
ELFL = /mnt/c/analog/cces/3.0.2/elfloader.exe

The remaining pieces of the Makefile are the flags. I’ll give CFLAGS; the other two are very similar:

CFLAGS = \
  -proc ADSP-21569 -si-revision any -flags-compiler \
  --no_wrap_diagnostics -g -DCORE0 -D_DEBUG -DADI_DEBUG \
  -structs-do-not-overlap -no-const-strings -no-multiline -warn-protos \
  -double-size-32 -char-size-8 -swc -gnu-style-dependencies

Discussion

The Blink example is accompanied by a lengthy license agreement that imposes significant restrictions, such as prohibiting external distribution and public posting of source code. This makes it impractical to release modifications without careful review.

The process to build and run the Blink example is somewhat fragile and may break in future versions of the Eclipse-based IDE, making it difficult to fully automate.

Due to the complexity of modern IDEs, it is not always clear which source files beyond main.c are included in the build. While this is not critical for a simple example like Blink, developers should be aware of the build inputs and dependencies when working on more complex projects to ensure proper provenance and supply chain transparency.

Notably, ADI appears to rely on a “security through obscurity” approach, with limited transparency regarding its security mechanisms. This limits developers’ ability to audit or verify the security of the system:

The sources for ROM code are not available in CCES to protect the ADSP-SC5xx/ADSP-215xx secure booting and encryption details.[2]

It is noteworthy that the SHARC+ processor family currently lacks open-source toolchain components—such as an assembler, linker, compiler, loader, and debugging tools—which may limit accessibility for experimentation and early evaluation, potentially affecting broader adoption among engineers.


  1. Analog Devices: ADSP-21562/3/5/6/7/9 SHARC+ Processor Hardware Reference. Revision 1.1, October 2022. Part Number 82-100137-01.
  2. Analog Devices: CrossCore® Embedded Studio 3.0.2 > SHARC-FX Development Tools Documentation > Loader and Utilities Manual > Loader.
Unix

Weinberger on Coding

Published 6 Feb 2026. By Jakob Kastelic (ed.).

All of the quotes below are taken from the interview with Peter J. Weinberger (Murray Hill, 8 September 1989), with Michael S. Mahoney interviewing. This expands on the Unix values captured in the previous article.

Programming as Changing Reality

What you tell the machine to do, it’s not doing it on the model, it’s doing it on the mathematical reality. There’s no gap. Which, I think, makes it very appealing. There’s a difference between a computer program and a theorem, which is that when you prove a theorem you now sort of know something you didn’t know before. But when you write a program, the universe has changed. You could do something you couldn’t do before.

The Ideal of Permanent, Correct Design

I also have this feeling that you never want to have to touch the program. So it’s important to do it right early and that it always be okay. It’s not just a problem of the minute, although one writes a lot of code that’s got to do the problem of the minute, it’s got to fill the niche permanently—which is completely unrealistic but it’s certainly an attitude. And I think that matches this other. If it’s just going to be a slipshod temporary hacked up way of doing it it’s just not going to work long enough. And you’re going to just have to come back and do it again and it’s just too much like work. Not that reality actually matches this in any way but I think that’s the attitude.

Theory-Driven Code vs. Hack-Driven Code

If you have a theory based program you can believe you got the last bug out. If you have a hacked up program, all it is, is bugs. Surrounded by, you know, something that does something. You never get the last bug out. So we’ve got both kinds. And it’s partly the style of the programmer. Both kinds are useful. But one kind’s a lot easier to explain and understand even if it’s not more useful.

Usefulness Ultimately Beats Elegance

A program that’s sufficiently useful can have any number of bad effects or properties. But people prefer small, clean, easy-to-understand programs. But they will use big, horrible, grotesque, disgusting, buggy programs if they’re sufficiently useful. And some will complain louder than others, but it’s a rare few who will say “this is just so awful I won’t use it.”

Learnability Over Perfection

My guess is that there is a modest amount to learn and you can use it. And the truth is our secretaries use it. We don’t have a special system for secretaries. They just use it. Now, when you watch them use it you say “oh, but there’s so many easier ways of doing it, there this and this”, but it doesn’t really matter. They don’t have to use it perfectly.

Documentation as a Design Litmus Test

The story is if you write the documentation early, it’s likely it’ll be possible to explain what your program does, whereas if you wait until your program is completely finished, you may discover that however coherent it looked while you writing the various pieces, it’s impossible to explain it.

Embedded

Ethernet on Bare-Metal STM32MP135

Published 21 Jan 2026. By Jakob Kastelic.

In this writeup we’ll go through the steps needed to bring up the Ethernet peripheral (ETH1) on the STM32MP135 eval board as well as a custom board.

Eval board connections to PHY

The evaluation board uses the LAN8742A-CZ-TR Ethernet PHY chip, connected to the SoC as follows:

PHY pin | PHY signal   | SoC signal       | SoC pin | Alt. Fn. | Notes
16      | TXEN         | PB11/ETH1_TX_EN  | AA2     | AF11     |
17      | TXD0         | PG13/ETH1_TXD0   | AA9     | AF11     |
18      | TXD1         | PG14/ETH1_TXD1   | Y10     | AF11     |
8       | RXD0/MODE0   | PC4/ETH1_RXD0    | Y7      | AF11     | 10k PU
7       | RXD1/MODE1   | PC5/ETH1_RXD1    | AA7     | AF11     | 10k PU
11      | CRS_DV/MODE2 | PC1/ETH1_CRS_DV  | Y9      | AF10     | 10k PU
13      | MDC          | PG2/ETH1_MDC     | V3      | AF11     |
12      | MDIO         | PA2/ETH1_MDIO    | Y4      | AF11     | 1k5 PU
15      | nRST         | ETH1_NRST        | IO9     |          | MCP I/O expander
14      | nINT/REFCLKO | PA1/ETH1_RX_CLK  | AA3     | AF11     |

Reset pin

In this design, the Ethernet PHY connected to ETH1 has its own 25 MHz crystal. Note the ETH1_NRST connection, which goes through the MCP23017T-E/ML I2C I/O expander.

One wonders if it was really necessary to complicate Ethernet bringup by requiring this extra step (I2C + IO config) on an SoC that has 320 pins. True to form, the simple IO expander needs more than 1,300 lines of ST driver code plus lots more in the pointless BSP abstraction layer wrapper.

With a driver that complicated, it’s easier to start from scratch. As it happens, writing these GPIO pins involves just two I2C transactions. The I2C code is trivial; find it here.
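For what it’s worth, here is a sketch of those two transactions. The expander address (0x20), the I2C handle name, the HAL header name, and the guess that ETH1_NRST sits on expander pin B1 (IO9) are all assumptions to check against the schematic:

#include "stm32mp13xx_hal.h"

#define MCP23017_ADDR   (0x20U << 1)   /* 7-bit address 0x20, shifted for the HAL */
#define MCP23017_IODIRB 0x01U          /* port B direction register (BANK = 0)    */
#define MCP23017_OLATB  0x15U          /* port B output latch                     */

/* Release the PHY from reset: one write to make the pin an output,
 * one write to drive it high. */
static void eth_phy_reset_release(I2C_HandleTypeDef *hi2c)
{
    uint8_t dir[2]  = { MCP23017_IODIRB, 0xFD };  /* B1 as output, others inputs */
    uint8_t olat[2] = { MCP23017_OLATB,  0x02 };  /* drive B1 high               */

    HAL_I2C_Master_Transmit(hi2c, MCP23017_ADDR, dir,  2, 100);
    HAL_I2C_Master_Transmit(hi2c, MCP23017_ADDR, olat, 2, 100);
}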

Sending an Ethernet frame from eval board

Again, the ST code examples are very complex, but it takes just over 300 lines of code to send an Ethernet frame, by way of verifying that data can be transmitted over this interface. I asked ChatGPT to summarize what happens in the code:

  1. Configure the pins for Ethernet. First, all the GPIO pins required by the RMII interface are set up. Each pin is switched to its Ethernet alternate function, configured for push-pull output, and set to a high speed. This ensures the STM32’s MAC can physically drive the Ethernet lines correctly. If you’re using an external GPIO expander like the MCP23x17, it is also initialized here, and the relevant pins are set high to enable the PHY or other control signals.

  2. Enable the Ethernet clocks. Before the MAC can operate, the clocks for the Ethernet peripheral—MAC, TX, RX, and the reference clock—are enabled in the RCC. This powers the Ethernet block inside the STM32 and allows it to communicate with the PHY.

  3. Initialize descriptors and buffers. DMA descriptors for transmit (TX) and receive (RX) are allocated and zeroed. The transmit buffer is allocated and aligned to 32 bytes, as required by the DMA. A TX buffer descriptor is created, pointing to the transmit buffer. This descriptor tells the HAL exactly where the frame data is and how long it is.

  4. Configure the Ethernet peripheral structure. The ETH_HandleTypeDef is populated with the MAC address, RMII mode, pointers to the TX and RX descriptors, and the RX buffer size. The clock source for the peripheral is selected. At this stage, the HAL has all the information needed to manage the hardware.

  5. Initialize the MAC and PHY. Calling HAL_ETH_Init() programs the MAC with the descriptor addresses, frame length settings, and other features like checksum offload. The PHY is reset and auto-negotiation is enabled via MDIO. Reading the PHY ID verifies that the PHY is responding correctly.

  6. Start the MAC. With HAL_ETH_Start(), the MAC begins normal operation, monitoring the RMII interface for frames to transmit or receive.

  7. Build the Ethernet frame. A frame is constructed in memory. The first 6 bytes are the destination MAC (broadcast in this case), the next 6 bytes are the source MAC (the STM32’s MAC), followed by a 2-byte EtherType. The payload is copied into the frame (e.g., a short test string), and the frame is padded to at least 60 bytes to satisfy Ethernet minimum length requirements (see the sketch after this list).

  8. Transmit the frame. The TX buffer descriptor is updated with the frame length and pointer to the buffer. HAL_ETH_Transmit() is called, which programs the DMA to fetch the frame from memory and put it onto the Ethernet wire. After this call completes successfully, the frame is sent, and you can see it in Wireshark on the network.
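As an illustration of step 7, here is a minimal sketch of the frame construction. The buffer name, source MAC, and the choice of the IEEE “local experimental” EtherType 0x88B5 are my own assumptions, not taken from the linked example:

#include <stdint.h>
#include <string.h>

#define FRAME_MIN_LEN 60U   /* Ethernet minimum length, excluding the 4-byte FCS */

static uint8_t tx_buf[64] __attribute__((aligned(32)));  /* 32-byte aligned for DMA */

static size_t build_test_frame(const uint8_t src_mac[6])
{
    static const char payload[] = "hello from STM32MP135";

    memset(tx_buf, 0, sizeof tx_buf);
    memset(&tx_buf[0], 0xFF, 6);        /* destination MAC: broadcast */
    memcpy(&tx_buf[6], src_mac, 6);     /* source MAC: ours           */
    tx_buf[12] = 0x88;                  /* EtherType 0x88B5           */
    tx_buf[13] = 0xB5;                  /* (IEEE local experimental)  */
    memcpy(&tx_buf[14], payload, sizeof payload);

    size_t len = 14U + sizeof payload;
    return (len < FRAME_MIN_LEN) ? FRAME_MIN_LEN : len;  /* pad to minimum */
}

The returned length is what goes into the TX buffer descriptor before calling HAL_ETH_Transmit() in step 8.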

For the record, when a cable is connected, the PHY sees the link is up:

> eth_status
Ethernet link is up
  Speed: 100 Mbps
  Duplex: full
  BSR = 0x782D, PHYSCSR = 0x1058

Custom board connections to PHY

The custom board (Rev A) also uses the LAN8742A-CZ-TR Ethernet PHY chip, connected to the SoC as follows:

PHY pin | PHY signal   | SoC signal         | SoC pin | Alt. Fn. | Notes
16      | TXEN         | PB11/ETH1_TX_EN    | N5      | AF11     |
17      | TXD0         | PG13/ETH1_TXD0     | P8      | AF11     |
18      | TXD1         | PG14/ETH1_TXD1     | P9      | AF11     |
8       | RXD0/MODE0   | PC4/ETH1_RXD0      | U6      | AF11     | 10k PU
7       | RXD1/MODE1   | PC5/ETH1_RXD1      | R7      | AF11     | 10k PU
11      | CRS_DV/MODE2 | PA7/ETH1_CRS_DV    | U2      | AF11     | 10k PU
13      | MDC          | PG2/ETH1_MDC       | R1      | AF11     |
12      | MDIO         | PG3/ETH1_MDIO      | L5      | AF11     | 1k5 PU
15      | nRST         | PG11               | M3      |          | 10k PD
14      | nINT/REFCLKO | PG12/ETH1_PHY_INTN | T1      | AF11     | 10k PU
5       | XTAL1/CLKIN  | PA11/ETH1_CLK      | T2      | AF11     |

The differences with respect to the eval board are:

Signal        | Eval board       | Custom board
ETH1_CRS_DV   | PC1/ETH1_CRS_DV  | PA7/ETH1_CRS_DV
ETH1_MDIO     | PA2/ETH1_MDIO    | PG3/ETH1_MDIO
nRST          | GPIO expander    | PG11, 10k pulldown
nINT/REFCLKO  | PA1/ETH1_RX_CLK  | PG12/ETH1_PHY_INTN
XTAL1/CLKIN   | 25 MHz XTAL      | PA11/ETH1_CLK

That is, two different port assignments, direct GPIO for reset instead of expander, clock to be output from the SoC to the PHY, and using INTN signal instead of RX_CLK. All alternate functions are 11, while on the eval board one of them (CRS_DV) was 10.

Transmit Ethernet frame from custom board

First, we need to set the clock correctly. Since the Ethernet PHY does not have a dedicated crystal on the custom board, we need to source its clock from a PLL. In particular, we can set PLL3Q to output 24/2*50/24 = 25 MHz and select the ETH1 clock source:

pclk.PeriphClockSelection = RCC_PERIPHCLK_ETH1;
pclk.Eth1ClockSelection   = RCC_ETH1CLKSOURCE_PLL3;
if (HAL_RCCEx_PeriphCLKConfig(&pclk) != HAL_OK)
   ERROR("ETH1");

With the scope, I can see a 25 MHz clock on the ETH_CLK trace and the nRST pin is driven high (3.3V). Nonetheless, HAL_ETH_Init() returns with an error.

Of course, we forgot to tell the HAL what the Ethernet clock source is. On the eval board, we had

eth_handle.Init.ClockSelection = HAL_ETH1_REF_CLK_RX_CLK_PIN;

But on the custom board, the SoC provides the clock to the PHY:

eth_handle.Init.ClockSelection = HAL_ETH1_REF_CLK_RCC;

Mistake in HAL driver?

With the RCC clock selected for Ethernet, HAL_ETH_Init() fails yet again. This time the problem is in the code that selects the RCC clock source:

if (heth->Init.ClockSelection == HAL_ETH1_REF_CLK_RCC)
{
  syscfg_config |= SYSCFG_PMCSETR_ETH1_REF_CLK_SEL;
}
HAL_SYSCFG_ETHInterfaceSelect(syscfg_config);

The Ethernet interface and clocking setup is done in the PMCSETR register, together with some other configuration.

void HAL_SYSCFG_ETHInterfaceSelect(uint32_t SYSCFG_ETHInterface)
{
   assert_param(IS_SYSCFG_ETHERNET_CONFIG(SYSCFG_ETHInterface));
   SYSCFG->PMCSETR = (uint32_t)(SYSCFG_ETHInterface);
}

Now the driver trips over the assertion. The assertion macro expects the config word to be a pure interface selection, forgetting that the same register also carries the ETH1_REF_CLK_SEL field (amongst others!):

#define IS_SYSCFG_ETHERNET_CONFIG(CONFIG)                                      \
   (((CONFIG) == SYSCFG_ETH1_MII) || ((CONFIG) == SYSCFG_ETH1_RMII) ||         \
    ((CONFIG) == SYSCFG_ETH1_RGMII) || ((CONFIG) == SYSCFG_ETH2_MII) ||        \
    ((CONFIG) == SYSCFG_ETH2_RMII) || ((CONFIG) == SYSCFG_ETH2_RGMII))
#endif /* SYSCFG_DUAL_ETH_SUPPORT */

If we comment out this assertion, the initialization proceeds without further errors. However, link is still down.

Biasing transformer center taps

Even with an Ethernet cable plugged in, link is down:

// Read basic status register
if (HAL_ETH_ReadPHYRegister(&eth_handle, LAN8742_ADDR,
      LAN8742_BSR, &v) != HAL_OK) {
   my_printf("PHY BSR read failed\r\n");
   return;
}

if ((v & LAN8742_BSR_LINK_STATUS) == 0u) {
   my_printf("Link is down (no cable or remote inactive)\r\n");
   return;
}

On the schematic diagram of the custom board, we notice that the RJ-45 transformer center taps (TXCT, RXCT on the J1011F21PNL connector) are decoupled to ground but, unlike on the eval board, are not connected to 3.3V. The LAN8742A datasheet does not talk about this explicitly, but it shows a schematic diagram (Figure 3-23) where the two center taps are tied together and pulled up to 3.3V via a ferrite bead.

Tying the center taps to 3.3V, we still get no link. Printing the PHY Basic Status Register, we see:

Link is down (no cable or remote inactive)
BSR = 0x7809

This means: link down, auto-negotiation not complete.
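For reference, these values decode with the standard MII bit positions; the helper below is my own, not part of the ST LAN8742 driver:

#include <stdint.h>

/* Decode the two BSR values seen in this writeup: 0x7809 (down) and 0x782D (up). */
static void decode_bsr(uint32_t bsr, int *link_up, int *an_done)
{
    *link_up = (int)((bsr >> 2) & 1U);   /* bit 2: link status               */
    *an_done = (int)((bsr >> 5) & 1U);   /* bit 5: auto-negotiation complete */
    /* Bits 11..14 are the static 10/100, half/full capability bits (set in
     * both values); bit 3 is auto-negotiation ability. */
}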

Probing further, the REF_CLK pin is not outputting a 50 MHz clock; instead it sits at about 3.3V.

LEDs and straps

The PHY chip shares its LED pins with configuration straps.

LED1 is shared with REGOFF and is tied to the anode of the LED, which pulls down the pin such that REGOFF=0 and the regulator is enabled. We measure that VDDCR is at 1.25V, which indicates that the internal regulator started successfully. During board operation, this pin is low (close to 0V).

LED2 is shared with the nINTSEL pin, and is connected to the LED cathode. During board operation, this pin is high (close to 3.3V). Selecting nINTSEL=1 means REF_CLK In Mode, as is explained in Table 3-6: “nINT/REFCLKO is an active low interrupt output. The REF_CLK is sourced externally and must be driven on the XTAL1/CLKIN pin.”

Section 3.7.4 explains further regarding the “Clock In” mode:

In REF_CLK In Mode, the 50 MHz REF_CLK is driven on the XTAL1/CLKIN pin. This is the traditional system configuration when using RMII [...]

In REF_CLK In Mode, the 50 MHz REF_CLK is driven on the XTAL1/CLKIN pin. A 50 MHz source for REF_CLK must be available external to the device when using this mode. The clock is driven to both the MAC and PHY as shown in Figure 3-7.

Furthermore, according to Section 3.8.1.6 of the PHY datasheet, the absence of a pulldown resistor on LED2/nINTSEL pin means that LED2 output is active low. That means that the anode of LED2 should have been tied to VDD2A according to Fig. 3-15, rather than ground as is currently the case.

This leaves two alternatives: rework the LED2/nINTSEL strapping so the pin reads 0 at reset (REF_CLK Out Mode, with the PHY generating the 50 MHz reference itself), or keep REF_CLK In Mode and have the SoC supply the 50 MHz clock on XTAL1/CLKIN.

In this instance I chose the latter option and set PLL3Q to output 24/2*50/12 = 50 MHz. The link comes up briefly and the green LED2 blinks:

> eth_status
Ethernet link is up
  Speed: 100 Mbps
  Duplex: full
  BSR = 0x782D, PHYSCSR = 0x1058

But strangely enough, when I check the status just a moment later, the link is down again:

> eth_status
Link is down (no cable or remote inactive)
BSR = 0x7809

Checking repeatedly, sometimes it’s up, and sometimes it’s down.

I see that the current drawn from the 3.3V supply switches between 0.08A and 0.13A continuously, every second or two.

Digging in registers

Printing out some more info in both situations:

Link is down (no cable or remote inactive)
  BSR = 0x7809, PHYSCSR = 0x0040, ISFR = 0x0098, SMR = 0x60E0, SCSIR = 0x0040
SYSCFG_PMCSETR = 0x820000
> e
Ethernet link is up
  Speed: 100 Mbps
  Duplex: full
  BSR = 0x782D, PHYSCSR = 0x1058, ISFR = 0x00CA, SMR = 0x60E0, SCSIR = 0x1058
SYSCFG_PMCSETR = 0x820000

The PHY Basic Status Register (BSR), when link is down (0x7809), shows only the static ability bits (100BASE-TX and 10BASE-T, full and half duplex, auto-negotiation ability, extended capabilities); the link status and auto-negotiation complete bits are clear.

When link is up (0x782D), BSR shows (of course) that the link is up, and also that the auto-negotiation process completed.

The PHY Special Control/Status Register (PHYSCSR), when link is down, does not have a meaningful speed indication (000), or anything else. When link is up, it shows speed as 100BASE-TX full-duplex (110), and that auto-negotiation is done.

The PHY Interrupt Source Flag Register (PHYISFR), when link is down, shows Auto-Negotiation LP Acknowledge, Link Down (link status negated), and ENERGYON generated. When link is up, we get Auto-Negotiation Page Received, Auto-Negotiation LP Acknowledge, ENERGYON generated, and Wake on LAN (WoL) event detected.

The PHY Special Modes Register (PHYSMR), when link is either up or down, shows the same value: 0x60E0. This means that PHYAD=00000 (PHY address) and MODE=111 (transceiver mode of operation set to “All capable. Auto-negotiation enabled.”).

The PHY Special Control/Status Indications Register (PHYSCSIR), when link is up, shows Reversed polarity of 10BASE-T, even though link is 100 Mbps.

SoC PMCSETR has two fields set: ETH1_SEL is set to 100, meaning RMII, and ETH1_REF_CLK_SEL is set to 1, meaning that the reference clock (RMII mode) comes from the RCC.

Solution: PLL config (again!)

Painfully obvious in retrospect, but the problem was that PLL3, from which we’ve derived the Ethernet clock, was set to fractional mode:

rcc_oscinitstructure.PLL3.PLLFRACV  = 0x1a04;
rcc_oscinitstructure.PLL3.PLLMODE   = RCC_PLL_FRACTIONAL;

If instead we derive the clock from PLL4, which is already set to integer mode, then sending the Ethernet frame just works, and the link comes up and stays up:

rcc_oscinitstructure.PLL4.PLLFRACV  = 0;
rcc_oscinitstructure.PLL4.PLLMODE   = RCC_PLL_INTEGER;
// ...
pclk.PeriphClockSelection = RCC_PERIPHCLK_ETH1;
pclk.Eth1ClockSelection   = RCC_ETH1CLKSOURCE_PLL4;

Of course! Ethernet requires a precise 50 MHz reference clock, accurate to about 50 ppm. On the eval board that was not a problem: the PHY had its own crystal, and it returned a clean 50 MHz clock directly back to the SoC’s MAC.
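For completeness, the integer-mode arithmetic that lands exactly on 50 MHz (the same numbers as the PLL3Q attempt above; the field mapping is my reading):

/* f = HSE / M * N / DIV = 24 MHz / 2 * 50 / 12 = 50 MHz exactly,
 * with PLLFRACV = 0 (integer mode), comfortably inside the ~50 ppm
 * RMII budget provided the HSE crystal itself is in spec. */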

Incoherent Thoughts

Scary Things First

Published 21 Jan 2026. By Jakob Kastelic.

This morning it occurred to me that I’m really not looking forward to going to the office, for I’ll have to continue doing something that I spent two days on already, and it’s still not working. I can easily think of many other such things that I’d rather not do, and as it happens each of them comes with a “positive”, or attractive aspect (written in brackets):

These are generalized examples; my real list is longer and more specific, but I won’t bore you with the details since anyone can easily write down their own, personally relevant version.

The point of these contrasts is not so much that the “bad” part of the stick is to be borne because the “good” part is worth so much more. The point is not even to try and forget about the bad part by various means (distraction, expression, repression, suppression), even though that’s what I end up doing most of the time. The point is to try and see them as a single “yin-yang” unit: black in white, white in black.

These contrasts are inevitable, so why waste time fighting them, denying their existence? Relax into the reality, let go of the fear and dread by feeling it directly until your brain gets tired of it. I’m not saying, “stop fearing the inevitable”, as the fear itself is in fact part of the inevitable. The lake would not try to hide its waves when a stone is thrown into it; its waves radiate outwards until they stop. In fact they never really stop, so the lake does not reject them.

Somewhere in the Tao Te Ching it is said that the great power of water (wearing down mountains, etc.) is because it’s not loath to take the lowest, humblest part, where no one wants to be. Elsewhere there’s the image of the malformed tree surviving, while the straight, useful ones are cut down for the carpenter. I wonder if peace can be had in the face of the above mentioned “dreadful” future situations by sinking, in each of them, to the most dreadful point. Assume the most broken, useless mental state: be angry and sad, afraid and trembling, and watch things come and go. Strength in weakness?

On a practical note: each day, do the “dreadful” thing first to avoid wasting too much time and effort doing pointless other things. Looking back, avoidance behaviors are often much more exhausting than what they supposedly protect me from. Or, in someone’s wise words: “Procrastination is not worth the time it takes.”

Embedded

LCD/CTP on Bare-Metal STM32MP135

Published 19 Jan 2026. By Jakob Kastelic.

In this writeup we’ll go through the steps needed to bring up the LCD/CTP peripheral on the custom STM32MP135 board.

Connections

I am using the Rocktech RK050HR01-CT LCD display, connecting to the STM32MP135FAE SoC, as follows:

LCD pin | LCD signal | SoC signal     | SoC pin | Alt. Fn.
1, 2    | VLED+/-    | PB15/TIM1_CH3N | B12     | AF1
8       | R3         | PB12/LCD_R3    | D9      | AF13
9       | R4         | PE3/LCD_R4     | D13     | AF13
10      | R5         | PF5/LCD_R5     | B2      | AF14
11      | R6         | PF0/LCD_R6     | C13     | AF13
12      | R7         | PF6/LCD_R7     | G2      | AF13
15      | G2         | PF7/LCD_G2     | M1      | AF14
16      | G3         | PE6/LCD_G3     | N1      | AF14
17      | G4         | PG5/LCD_G4     | F2      | AF11
18      | G5         | PG0/LCD_G5     | D7      | AF14
19      | G6         | PA12/LCD_G6    | E3      | AF14
20      | G7         | PA15/LCD_G7    | E6      | AF11
24      | B3         | PG15/LCD_B3    | G4      | AF14
25      | B4         | PB2/LCD_B4     | H4      | AF14
26      | B5         | PH9/LCD_B5     | A9      | AF9
27      | B6         | PF4/LCD_B6     | L2      | AF13
28      | B7         | PB6/LCD_B7     | C1      | AF14
30      | DCLK       | PD9/LCD_CLK    | E8      | AF13
31      | DISP       | PG7            | C9      |
32      | HSYNC      | PE1/LCD_HSYNC  | B5      | AF9
33      | VSYNC      | PE12/LCD_VSYNC | B4      | AF9
34      | DE         | PG6/LCD_DE     | A14     | AF13

Backlight

The easiest thing to check is the display backlight, since it’s just a single GPIO pin to turn on/off, or a simple PWM to control the brightness via the duty cycle.

In our case, the backlight pin is connected to TIM1_CH3N, which is alternate function 1:

GPIO_InitTypeDef gpio;
gpio.Pin       = GPIO_PIN_15;
gpio.Mode      = GPIO_MODE_AF_PP;
gpio.Pull      = GPIO_NOPULL;
gpio.Speed     = GPIO_SPEED_FREQ_LOW;
gpio.Alternate = GPIO_AF1_TIM1;
HAL_GPIO_Init(GPIOB, &gpio);

ChatGPT can write the PWM configuration:

__HAL_RCC_TIM1_CLK_ENABLE();

htim1.Instance = TIM1;
htim1.Init.Prescaler         = 99U;
htim1.Init.CounterMode       = TIM_COUNTERMODE_UP;
htim1.Init.Period            = 999U;
htim1.Init.ClockDivision     = TIM_CLOCKDIVISION_DIV1;
htim1.Init.RepetitionCounter = 0;
htim1.Init.AutoReloadPreload = TIM_AUTORELOAD_PRELOAD_DISABLE;
HAL_TIM_PWM_Init(&htim1);

TIM_OC_InitTypeDef oc;
oc.OCMode       = TIM_OCMODE_PWM1;
oc.Pulse        = 500U;
oc.OCPolarity   = TIM_OCPOLARITY_HIGH;
oc.OCNPolarity  = TIM_OCNPOLARITY_HIGH;
oc.OCIdleState  = TIM_OCIDLESTATE_RESET;
oc.OCNIdleState = TIM_OCNIDLESTATE_RESET;
oc.OCFastMode   = TIM_OCFAST_DISABLE;

HAL_TIM_PWM_ConfigChannel(&htim1, &oc, TIM_CHANNEL_3);
HAL_TIMEx_PWMN_Start(&htim1, TIM_CHANNEL_3);
htim1.Instance->BDTR |= TIM_BDTR_MOE;

The only “tricky” part, or the part that AI got wrong, was that we have to use HAL_TIMEx_PWMN_Start() instead of HAL_TIM_PWM_Start(), since we’re dealing with the complementary output. With that fixed, the brightness pin showed a clean square wave output, with duty cycle adjustable in units of percent:

__HAL_TIM_SET_COMPARE(&htim1, TIM_CHANNEL_3, 
      (htim1.Init.Period + 1U) * percent / 100U);

Unfortunately, the PCB reverses all the pins and the connector is single-sided, so we cannot directly check whether the above works on the actual display. Nonetheless, we see a nice 2.088893 kHz square wave with 50% duty cycle, and we can tune it from 0% to 100%.
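The PWM frequency follows from the prescaler and period configured above; back-calculating the TIM1 kernel clock from the measurement (arithmetic mine):

/* f_PWM = f_TIM1 / ((Prescaler + 1) * (Period + 1)) = f_TIM1 / (100 * 1000)
 * measured f_PWM = 2.088893 kHz  =>  f_TIM1 is roughly 208.9 MHz */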

CTP connections

The Rocktech RK050HR01-CT LCD display includes a capacitive touchpad (CTP), connecting to the STM32MP135FAE SoC, as follows:

CTP pin | CTP signal | SoC signal    | SoC pin | Alt. Fn.
1       | SCL        | PH13/I2C5_SCL | A10     | AF4
8       | SDA        | PF3/I2C5_SDA  | B10     | AF4
4       | RST        | PB7           | A4      |
5       | INT        | PH12          | C2      |

Luckily the 6-pin CTP connector, albeit wired in reverse, has contacts on both top and bottom sides, so we can simply flip the ribbon cable. With an entirely usual I2C configuration, it simply works. Check out the final result here.

My GT911 driver is just under 300 lines of code; it’s very interesting that it takes ST almost 3,000 (yes, it has more features ... Whatever, I don’t need them!)

stm32cubemp13-v1-2-0/STM32Cube_FW_MP13_V1.2.0/Drivers/BSP/Components/gt911$ cloc .
      12 text files.
      12 unique files.
       1 file ignored.

github.com/AlDanial/cloc v 1.90  T=0.10 s (109.2 files/s, 48189.3 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
CSS                              1            209             56           1446
C                                2            223            636            940
C/C++ Header                     3            159            614            421
Markdown                         2             24              0             62
HTML                             1              0              3             56
SVG                              2              0              0              4
-------------------------------------------------------------------------------
SUM:                            11            615           1309           2929
-------------------------------------------------------------------------------

My example code prints out the touch coordinates whenever the touch interrupt fires. Not much more to do, since the CTP will be used within some application which will implement more advanced features. The only reason to include this in the bootloader code is to verify that the I2C connection works.
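For reference, here is roughly what that per-interrupt read looks like. The register addresses (status at 0x814E, first point at 0x8150), the 7-bit device address 0x5D, and the handle and print function names are assumptions from memory of the GT911 datasheet and my own code, so verify them before reuse:

#include "stm32mp13xx_hal.h"

#define GT911_ADDR       (0x5DU << 1)   /* 7-bit 0x5D, shifted for the HAL        */
#define GT911_REG_STATUS 0x814EU        /* buffer status + number of touch points */
#define GT911_REG_POINT1 0x8150U        /* first point: X lo, X hi, Y lo, Y hi    */

extern I2C_HandleTypeDef hi2c5;         /* I2C5 handle (assumed name) */
extern void my_printf(const char *fmt, ...);

static void ctp_read_once(void)
{
    uint8_t status = 0;
    uint8_t pt[4];

    if (HAL_I2C_Mem_Read(&hi2c5, GT911_ADDR, GT911_REG_STATUS,
                         I2C_MEMADD_SIZE_16BIT, &status, 1, 100) != HAL_OK)
        return;

    if ((status & 0x80U) && (status & 0x0FU)) {   /* buffer ready, at least one touch */
        HAL_I2C_Mem_Read(&hi2c5, GT911_ADDR, GT911_REG_POINT1,
                         I2C_MEMADD_SIZE_16BIT, pt, sizeof pt, 100);
        my_printf("touch at x=%u y=%u\r\n",
                  (unsigned)(pt[0] | (pt[1] << 8)),
                  (unsigned)(pt[2] | (pt[3] << 8)));
    }

    /* Acknowledge by clearing the status register. */
    uint8_t zero = 0;
    HAL_I2C_Mem_Write(&hi2c5, GT911_ADDR, GT911_REG_STATUS,
                      I2C_MEMADD_SIZE_16BIT, &zero, 1, 100);
}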

LCD

The custom board is wired backwards, but we can verify that the code is correct on the eval board. Apart from initially forgetting to turn on the LCD_DISP signal, it all worked. You set up a framebuffer somewhere (I just used the beginning of DDR memory), write pixels there, and the picture magically appears on the display. For example, to display solid colors:

volatile uint8_t *lcd_fb = (volatile uint8_t *)DRAM_MEM_BASE;

for (uint32_t y = 0; y < RK043FN48H_HEIGHT; y++) {
   for (uint32_t x = 0; x < RK043FN48H_WIDTH; x++) {
      uint32_t p    = (y * RK043FN48H_WIDTH + x) * 3U;
      lcd_fb[p + 0] = b; // blue
      lcd_fb[p + 1] = g; // green
      lcd_fb[p + 2] = r; // red
   }
}

/* make sure CPU writes reach DDR before LTDC reads */
L1C_CleanDCacheAll();

40-pin adapter

Making use of an adapter from the 40-pin FFC ribbon cable to jumper wires, we can verify the signals also on the custom board. We see:

R[3:7] signal when screen set to red, otherwise low
G[3:7] signal when screen set to green, otherwise low
B[3:7] signal when screen set to blue, otherwise low
DCLK:  10 MHz
DISP:  3.3V
HSYNC: 17.6688 kHz, 92.76% duty cycle
VSYNC: 61.779 Hz, 96.5% duty cycle
DE:    16.7--16.9 kHz, ~84% duty cycle

We can see the brightness change when adjusting the duty cycle of the backlight.

The left ~2/3 of the screen shows white vertical stripes, with the exact pattern depending on which “color” the screen is set to. The right ~1/3 of the screen is black. This is to be expected, since we’re using the same settings for both displays. Here are the settings that work fine on the eval board:

#define LCD_WIDTH  480U // LCD PIXEL WIDTH
#define LCD_HEIGHT 272U // LCD PIXEL HEIGHT
#define LCD_HSYNC  41U  // Horizontal synchronization
#define LCD_HBP    13U  // Horizontal back porch
#define LCD_HFP    32U  // Horizontal front porch
#define LCD_VSYNC  10U  // Vertical synchronization
#define LCD_VBP    2U   // Vertical back porch
#define LCD_VFP    2U   // Vertical front porch
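As a cross-check, these timings together with the 10 MHz DCLK reproduce the sync rates measured above (arithmetic mine):

/* line  = WIDTH  + HSYNC + HBP + HFP = 480 + 41 + 13 + 32 = 566 clocks
 * frame = HEIGHT + VSYNC + VBP + VFP = 272 + 10 +  2 +  2 = 286 lines
 * HSYNC rate = 10 MHz / 566     ≈ 17.67 kHz  (measured 17.6688 kHz)
 * VSYNC rate = 17.67 kHz / 286  ≈ 61.8 Hz    (measured 61.779 Hz)   */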

The custom board uses a different display, so let’s try different settings:

#define LCD_WIDTH   800U
#define LCD_HEIGHT  480U
#define LCD_HSYNC   1U
#define LCD_HBP     8U
#define LCD_HFP     8U
#define LCD_VSYNC   1U
#define LCD_VBP     16U
#define LCD_VFP     16U

Now the screen is totally white, regardless of which color we send it. We notice that the LCD datasheet specifies a minimum clock frequency of 10 MHz. Note that on the STM32MP135, the LCD clock comes from PLL4Q. Raising the DCLK to 24 MHz, the screen works! We get to see all the colors. The PLL4 configuration that works for me is:

rcc_oscinitstructure.PLL4.PLLState  = RCC_PLL_ON;
rcc_oscinitstructure.PLL4.PLLSource = RCC_PLL4SOURCE_HSE;
rcc_oscinitstructure.PLL4.PLLM      = 2;
rcc_oscinitstructure.PLL4.PLLN      = 50;
rcc_oscinitstructure.PLL4.PLLP      = 12;
rcc_oscinitstructure.PLL4.PLLQ      = 25;
rcc_oscinitstructure.PLL4.PLLR      = 6;
rcc_oscinitstructure.PLL4.PLLRGE    = RCC_PLL4IFRANGE_1;
rcc_oscinitstructure.PLL4.PLLFRACV  = 0;
rcc_oscinitstructure.PLL4.PLLMODE   = RCC_PLL_INTEGER;
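Working through the dividers (assuming the same 24 MHz HSE as in the earlier arithmetic):

/* VCO   = HSE / PLLM * PLLN = 24 MHz / 2 * 50 = 600 MHz
 * PLL4Q = 600 MHz / 25 = 24 MHz  -> the LTDC pixel clock (DCLK)
 * PLL4P = 600 MHz / 12 = 50 MHz,  PLL4R = 600 MHz / 6 = 100 MHz */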

USB stops working

Unfortunately, just as the LCD becomes configured correctly and is able to display the solid red, green, or blue colors, I noticed that the USB MSC interface disappeared. If I comment out the LCD init code, so it does not run, then USB comes back. How could they possibly interact?

Even more interesting, the USB stops working only if both of the following functions are called: lcd_backlight_init(), which configures the backlight brightness PWM, and lcd_panel_init(), which does panel timing and pin configuration.

As it turns out, my 3.3V supply was set with a 0.1A current limit. Having enabled so many peripherals, the current draw can be a bit higher now. Increasing the current limit to 0.2A, everything works fine. In the steady state, after init is complete, the board draws just under 0.1A from the 3.3V supply. (For the record, I’m drawing about 0.26A from the combined 1.25V / 1.35V supply.)

Conclusion

Bringing up the LCD on the custom board ultimately came down to matching the panel’s exact timing and, critically, running the pixel clock within the range specified by the datasheet. Once the LTDC geometry and PLL4Q frequency were correct, the display worked immediately, confirming that the signal wiring and framebuffer logic were sound.