Incoherent Thoughts

Problems Not to Solve

Published 21 Apr 2026. By Jakob Kastelic.

Some problems should not be solved because they cannot be solved. Such problems are instead to be let go of.

Examples include:

Changing other people when they have proven themselves, again and again, unwilling to change. Stop trying to change them and accept them as they are, or else let go of them entirely or, less extreme, keep an appropriate distance.

Making improvements in products and services that are not strictly necessary. These small improvements come at the cost of more important features or even whole new products. Stop pursuing diminishing returns; it’s an infinite amount of work for a finite and modest benefit. (See also: 80/20 rule.)

Thinking and worrying about things outside my control or knowledge. Either bring it within my control or knowledge, or let go of it entirely.

Letting go of insoluble problems is not laziness; it’s wisdom.

DSP

Agentic Coding on ADSP-2156 Eval Board

Published 13 Apr 2026. By Jakob Kastelic.

In the previous two articles, we compiled a “blink” test program and started it on the ADSP-21569 eval board. Observing the blinking on the board signalled success. Now we can “close the loop” by allowing the computer to read program output from the DSP, enabling automated testing and agentic coding.

Python test server

The test setup splits the coding and testing between two separate computers. The “test” machine obtains a program to test, loads the code on the ADSP-2156 eval board, and returns the results back to the “code” machine.

The REST API is delightfully simple in Python. The test machine server outline is as follows:

import os
import random
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self): ...   # defined below
    def do_POST(self): ...  # defined below

HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()

The class defines just two methods. do_GET selects one of the files at random from the inputs/ directory, moves it to done/, and sends it to the remote client. (It returns 204 if there are no test files left.)

def do_GET(self):
    names = os.listdir("inputs")
    if not names:
        self.send_response(204)
        self.end_headers()
        return
    name = random.choice(names)
    src = os.path.join("inputs", name)
    data = open(src, "rb").read()
    os.rename(src, os.path.join("done", name))
    self.send_response(200)
    self.end_headers()
    self.wfile.write(data)

When the client returns the response, the server just writes it to a filename determined by the POST path:

def do_POST(self):
    digest = self.path.rsplit("/", 1)[-1]
    n = int(self.headers["Content-Length"])
    with open(os.path.join("outputs", f"{digest}.txt"), "wb") as f:
        f.write(self.rfile.read(n))
    self.send_response(200)
    self.end_headers()

On the client side, we likewise need two functions. One to get the next loader stream to test:

import urllib.request

def get_job(port=8080):
    r = urllib.request.urlopen(f"http://localhost:{port}")
    if r.status == 204:
        return None
    return r.read()

And another to submit the response:

import hashlib
import urllib.request

def post_resp(ldr, msg, port=8080):
    sha = hashlib.sha256(ldr).hexdigest()
    urllib.request.urlopen(urllib.request.Request(
        f"http://localhost:{port}/{sha}",
        data = msg.encode(),
        method = "POST"
    )).read()

For continuous testing/development use, the client can repeatedly poll for new jobs. If a job exists, the client tests it and immediately requests a new one. If there are no more jobs, the client polls again after a few seconds or minutes.
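The polling loop can be sketched as follows, using get_job() and post_resp() from above. Here run_test, which would load the program onto the board and capture its output, and the once flag (handy for testing the loop itself) are illustrative additions, not part of the original setup:

```python
import time

def poll_loop(get_job, run_test, post_resp, idle_delay=5.0, once=False):
    """Fetch and test jobs until the queue is empty (or forever)."""
    while True:
        ldr = get_job()
        if ldr is None:           # server replied 204: nothing to test
            if once:
                return
            time.sleep(idle_delay)
            continue
        msg = run_test(ldr)       # load onto the board, capture output
        post_resp(ldr, msg)
```

Passing the three operations in as callables keeps the loop trivial to exercise without any hardware attached.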

DSP reset via USB

Blink, like many other programs, runs forever, making it impossible to load any other program without a hard reset. No obvious “reset over USB” mechanism stands out to me in the EV-SOMCRR-EZLITE and EV-21569-SOM schematic diagrams.

However, the EZLITE board comes with three LEDs connected to the GPIO expander that are entirely decorative: DS6, DS7, DS8. We can repurpose one of these (I chose DS8) to reset the whole board by connecting R172 (expander side) to the S3 RESET pushbutton. Then blinking that LED results in a whole board reset, allowing us to “re-flash” with new firmware.

Printf to UART0

To close the loop entirely, the programs should produce some output. If they print to UART0, then the Python “poller” can capture the output and send it back via post_resp(). Simple, but powerful!
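A minimal sketch of the capture step, kept generic over the byte source: with pyserial, `stream` would be a serial.Serial(..., timeout=...) instance opened on the UART0 adapter. The DONE terminator is an assumption of this sketch; the DSP program would print it as its last line to signal completion:

```python
def capture_output(stream, marker=b"DONE", max_bytes=65536):
    """Read from a byte stream until a terminator marker or a size limit.

    `stream` is any object with a read(n) method: a pyserial Serial
    instance, a file, io.BytesIO, etc. A read that returns b"" (EOF
    or serial timeout) also ends the capture.
    """
    buf = b""
    while marker not in buf and len(buf) < max_bytes:
        chunk = stream.read(256)
        if not chunk:        # timeout or EOF
            break
        buf += chunk
    return buf.decode(errors="replace")
```

The returned string is exactly what post_resp() would ship back to the test server.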

Conclusion

The point of this exercise is to allow coding agents to test their output on real hardware without compromising the computer the hardware is connected to. Thus a single computer could control multiple boards and equipment without risking interference between the agents associated with different hardware. Each set of agents can be separately boxed and run with all permissions granted, while essentially powerless to escape the confinement.

Incoherent Thoughts

The Uses of Silence

Published 22 Mar 2026. By Jakob Kastelic and GPT-5.3.

There’s an idea floating around: that speaking too soon—or too publicly—can distort intention, dissipate effort, or corrupt meaning. Yet what silence protects differs: power, purity, clarity, or effectiveness. Here we take a ten-second look at a couple of interesting specimens.

Eliphas Levi (Ceremonial Magic)

Eliphas Levi insists that magical operations must be kept secret because publicity disperses will and invites opposition; the act of speaking converts a focused intention into a social object, weakening its efficacy as an operation of directed will.[1]

Aleister Crowley (Thelema)

Aleister Crowley codifies silence as a discipline of the magician: one must avoid discussing one’s Work because speech leaks energy and entangles the will in ego and external reactions, thereby degrading the precision required for successful magical action.[2]

Franz Bardon (Hermetic Training)

Franz Bardon treats silence as a technical requirement of mental training: revealing intentions or progress disrupts concentration and allows external influences to interfere with the equilibrium necessary for effective practice.[3]

Napoleon Hill (Early New Thought Influence)

Napoleon Hill advises keeping plans private until they are realized because external opinions introduce doubt and erode persistence; silence protects the fragile early stage where belief must be maintained without contradiction.[4]

Neville Goddard (Imaginative Creation)

Neville Goddard emphasizes inner conviction over external discussion; speaking about a desire before it is realized shifts attention from imagination to social validation, weakening the sustained assumption required to bring it about.[5]

Jesus (Gospel of Matthew)

Jesus explicitly commands that prayer, fasting, and charity be done in secret so that the act is not redirected toward human approval; speaking or displaying it replaces devotion with performance and nullifies its spiritual value.[6]

St. John of the Cross (Mystical Theology)

St. John of the Cross warns that mystical experiences should not be spoken of lightly because language distorts them and invites ego inflation; silence preserves the authenticity of the interior transformation.[7]

The Cloud of Unknowing (Anonymous Author)

This text teaches that God cannot be approached through concepts or speech; silence is required because verbalization imposes false clarity on what must remain beyond understanding.[8]

Early Buddhist Discourses (Majjhima Nikaya)

In the early discourses, the Buddha avoids answering speculative metaphysical questions; silence prevents engagement with views that do not lead to liberation and keeps attention on practical insight.[9]

Dogen (Zen Buddhism)

Dogen treats language as inherently secondary to realization; speaking about insight risks replacing direct experience with conceptualization, so silence helps prevent mistaking description for attainment.[10]

Peter Gollwitzer (Modern Psychology)

Gollwitzer shows that publicly stating goals can create a premature sense of completion through social recognition; silence preserves motivation by preventing this substitution of talk for action.[11]


  1. Eliphas Levi, Transcendental Magic: Its Doctrine and Ritual (1856).
  2. Aleister Crowley, Magick in Theory and Practice (1929-1930).
  3. Franz Bardon, Initiation Into Hermetics (1956).
  4. Napoleon Hill, Think and Grow Rich (1937).
  5. Neville Goddard, Feeling Is the Secret (1944).
  6. Gospel of Matthew 6:1-18, The Bible.
  7. St. John of the Cross, The Ascent of Mount Carmel (c. 1579).
  8. The Cloud of Unknowing (late 14th century, anonymous).
  9. Majjhima Nikaya, early Buddhist discourses (Pali Canon).
  10. Dogen, Shobogenzo (13th century).
  11. Peter M. Gollwitzer, “Implementation Intentions,” American Psychologist (1999).

Embedded

Parameters With Least Overhead

Published 20 Mar 2026. By Jakob Kastelic.

The state of an embedded instrument can be represented as a set of parameters with names, values, and other attributes. These parameters change in various ways (user control, programmatically, sensor input) and need to be accessible on a front panel, remote interface, and in the internal subsystems. Some never change (serial number), some rarely (firmware version), some occasionally (user settings), and some need real-time updates (current measurement value). I’m looking for a way to manage these parameters that consumes the least processor time, i.e. adds the least overhead.

One program, bare-metal

If the entire device runs a single program, then the program can use a simple in-memory array (list, hash map) of parameters and use them as appropriate. That’s a less attractive option for more complex instruments where there’s a GUI, several remote interfaces, a web server perhaps. Maybe better to divide the firmware into several tasks?

Several tasks on RTOS

The different application concerns can be implemented as tasks on a real-time operating system. FreeRTOS, barely more than a “user-space” threading library, is an attractive option because it’s lightweight and easy to use. The tasks share memory and can access a global array of parameters, coordinating access with a mutex.

This solution works for as long as we are happy to keep the whole instrument firmware compiled into a single executable. But then the smallest change anywhere requires the whole program to be re-flashed and the instrument rebooted. When there’s a lot of little adjustments this quickly becomes slow and annoying. It’d be much nicer to have truly independent programs for different parts of the instrument.

Programs on a “real” OS

If the instrument supports a full-featured operating system like Linux then we can divide the work into several programs. One could be in charge of drawing the graphical user interface, another would serve as a “SCPI shell”, fielding the remote commands received from USB or Ethernet interfaces. Now that the various programs no longer share the same memory space, the problem arises of sharing the parameters between the programs. If data is received by the sensor monitoring program, how can the GUI app access it to display on the LCD panel?

Two solutions come to mind immediately: POSIX shared memory, or a single central “param server” process that sends/receives data over Unix-domain sockets. The first comes closest to “zero overhead” and is comparable to the bare-metal and RTOS solutions; the second separates domains more cleanly but may come with a performance price. It would be “premature optimization” to discard a conceptually cleaner solution for fear of overhead, so let’s implement both and see how much they cost.

Shared memory

In the “shared memory” approach, the parameter list is defined as a header or library that is compiled into each program that needs access to the parameters. It defines the memory location, structure, and the synchronization primitives that make it possible for several programs to share the data.

For concreteness, let’s assume that a parameter has a name and a value:

struct param {
   const char *name;
   double val;
   // other attributes as needed
};

A header file declares an array of these parameters:

static struct param params[] = {
   {"parameter_name_1", 0.0},
   {"parameter_name_2", 0.0},
   // and so on, about 200 items
};

We have a choice of several synchronization methods: a mutex, a read-write lock, or a seqlock, each applied either per parameter or to the whole table. Next, declare the shared memory layout corresponding to these six combinations:

#define MAX_SLOTS 256 // shared memory capacity
#define SHM_NAME "/param_bench"

struct param_slot {
	double val;
#if defined(MUTEX_PARAM)
	pthread_mutex_t lock;
#elif defined(RWLOCK_PARAM)
	pthread_rwlock_t lock;
#elif defined(SEQLOCK_PARAM)
	_Atomic unsigned seq;
#endif
};

struct shared {
#if defined(MUTEX_TABLE)
	pthread_mutex_t lock;
#elif defined(RWLOCK_TABLE)
	pthread_rwlock_t lock;
#elif defined(SEQLOCK_TABLE)
	_Atomic unsigned seq;
#endif
	struct param_slot slots[MAX_SLOTS];
};
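One piece the listing leaves out is how the shared region gets created and mapped. Here is a minimal sketch for the whole-table mutex variant, glossing over the startup race (between creation and lock initialization) that a real program would have to guard against. The PTHREAD_PROCESS_SHARED attribute is the essential detail: a default-initialized pthread mutex is not valid across processes.

```c
#include <fcntl.h>
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_SLOTS 256
#define SHM_NAME "/param_bench"

struct param_slot { double val; };

struct shared {                       /* whole-table mutex variant */
	pthread_mutex_t lock;
	struct param_slot slots[MAX_SLOTS];
};

/* Create the segment on first use, or attach to an existing one. */
static struct shared *attach_params(void)
{
	int first = 1;
	int fd = shm_open(SHM_NAME, O_CREAT | O_EXCL | O_RDWR, 0600);
	if (fd < 0) {                 /* already exists: just attach */
		first = 0;
		fd = shm_open(SHM_NAME, O_RDWR, 0600);
		if (fd < 0)
			return NULL;
	} else if (ftruncate(fd, sizeof(struct shared)) < 0) {
		close(fd);
		return NULL;
	}
	struct shared *s = mmap(NULL, sizeof(struct shared),
	                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);
	if (s == MAP_FAILED)
		return NULL;
	if (first) {                  /* lock must be process-shared */
		pthread_mutexattr_t a;
		pthread_mutexattr_init(&a);
		pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
		pthread_mutex_init(&s->lock, &a);
	}
	return s;
}
```

Every Randomizer and Displayer would call attach_params() at startup and then read or write slots under the lock.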

The actual locking/unlocking functions are very standard, so I’ll show only the per-table mutex case:

#if defined(MUTEX_TABLE)
static inline unsigned _rbegin(struct shared *s, int i) {
	pthread_mutex_lock(&s->lock);
	return 0;
}

static inline int _rend(struct shared *s, int i, unsigned q) {
	pthread_mutex_unlock(&s->lock);
	return 0;
}

static inline void _wlock(struct shared *s, int i) {
	pthread_mutex_lock(&s->lock);
}

static inline void _wunlock(struct shared *s, int i) {
	pthread_mutex_unlock(&s->lock);
}
#endif
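For contrast, the seqlock case is worth spelling out, since it is the least standard of the three. A sketch of the per-table variant, glossing over memory-ordering subtleties that a production seqlock must treat carefully: the writer bumps the counter before and after each update (odd means a write is in progress), and the reader retries until it sees the same even count on both sides of its read.

```c
#include <stdatomic.h>

struct seq_table {
	_Atomic unsigned seq;   /* odd while a write is in progress */
	double val;
};

static void seq_write(struct seq_table *t, double v)
{
	atomic_fetch_add_explicit(&t->seq, 1, memory_order_acq_rel);
	t->val = v;
	atomic_fetch_add_explicit(&t->seq, 1, memory_order_release);
}

static double seq_read(struct seq_table *t)
{
	unsigned before;
	double v;
	do {
		before = atomic_load_explicit(&t->seq, memory_order_acquire);
		v = t->val;
	} while ((before & 1) ||
	         before != atomic_load_explicit(&t->seq, memory_order_acquire));
	return v;
}
```

Note there is no blocking anywhere: writers never wait, which is exactly why an unthrottled writer can starve readers.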

Overhead measurements: shared memory

Now we write two programs: a Randomizer, which writes random values to a set of parameters at 60 Hz, and a Displayer, which reads the parameters and prints them.

We can investigate the potential bottlenecks by varying the synchronization method, the number of Randomizers and Displayers, and the number of parameters changed by each Randomizer.

On an STM32MP135 eval board, with two Displayers and two Randomizers (one randomizing all parameters, the other only two), the results are about the same for all three synchronization primitives, whether locking per-parameter or per-table.

In other words, updating 200 parameters at 60 Hz is too light a load to matter! Any locking method is fine and can be chosen for programming convenience, though on that score, too, they are about the same.

We can modify the programs to not throttle the update rate and not do any printing, just count parameter accesses per second. Let’s set up the same configuration as before (two Displayers, two Randomizers: one randomizes all 200 params, the other just 2). Now we can directly report the number of parameter accesses in Mops/s (million operations per second) across all synchronization methods:

Method    Granularity   Displayer    Randomizer 2   Randomizer 200   Load Avg
mutex     param         1.8          1.2            0.9              3.43
mutex     table         2.2          0.7            1.3              3.61
rwlock    param         2.0          0.7            0.3              2.81
rwlock    table         2.6          0.3            0.6              2.55
seqlock   param         0.0 to 1.5   0.8            1.4              4.03
seqlock   table         1.2 to 5.0   0.8            1.4              4.06

Now the differences show up. Mutexes appear to be “best behaved”, with decent overall performance. Both mutexes and rwlocks seem to prioritize readers, with variable writer performance: if we lock per-param, it’s best to write fewer params; if we lock per-table, it’s best to write more params.

Seqlocks are the weirdest: the read performance is highly variable, sometimes choking to zero read accesses, sometimes outperforming the mutexes and rwlocks. Strangely, for seqlocks it doesn’t matter whether locks are per-table or per-param. There is a clear explanation: seqlocks have no fairness mechanism, so a flat-out writer can starve readers indefinitely. When the writer runs continuously, it increments the sequence counter on every iteration. The reader captures seq, reads the value, then checks—but by then the writer has already incremented again. The bursts up to 5.0 happen when the OS scheduler preempts the writer and the reader gets a few uncontested iterations in.

The unthrottled measurement consumes altogether 100% of the CPU and represents an upper bound for how many independent parameters the firmware could read and write. For streaming high-speed data, rather than adding more parameters, one would most likely consider a different architecture altogether. However, the firmware designs I have in mind have only tens to hundreds of parameters, leaving us free to consider a less efficient but perhaps cleaner architecture: a single parameter server process communicating to clients over sockets.

Sockets

Let there be three kinds of programs: a single Parameter Server that owns the master copy of the parameters, writers that send it new values, and readers that query it, all communicating over Unix-domain sockets.

Of course some programs will be both readers and writers. The GUI, for example, displays the latest measurements, and allows the user to change the settings.

For benchmarking, we can again consider the Randomizer / Displayer example. Unthrottled, each of the two Displayers handles about 0.4 Mops/s, taking up 12% of the CPU. The 200-param Randomizer also handles 0.4 Mops/s and takes up 16% of the CPU. The 2-param Randomizer handles 0.0036 Mops/s using 12% of the CPU—very inefficient! Note that 3.6 kops/s is equivalent to 60 parameters updated at 60 Hz. Adding or removing readers and writers slows down or speeds up the system as expected, but no matter what, the Parameter Server takes up about 45% of the CPU.

Throttled to 60 Hz, Displayers again take up 12% CPU, most of which is printing. Randomizers both oscillate between 0.7% and 1.3% CPU, and the Parameter Server takes up 2.0%. Closing one of the Displayers, the Parameter Server needs only 1.3%. Closing both Displayers, the Server needs between 0.0% and 0.7%. With just the 200-param Randomizer, the Server and Randomizer both need between 0.0% and 0.7% CPU.

Conclusion

For high-speed data served one number at a time, sockets would not work. We could of course try to increase the throughput by sending a lot of data in a single socket call. If pushing the limits of performance, a zero-copy alternative using shared memory is the way to go, with one of the locking mechanisms. Most likely mutexes: least confusing (to me), well understood, simple.

For a GUI-throttled set of a few tens or hundreds of parameters, the client–server architecture using Unix-domain sockets makes for a very clean design: send all changed parameters in one request per frame, not one request per parameter. No need to worry about mutual exclusion; in effect, the Parameter Server is the synchronization mechanism.
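To make the one-request-per-frame idea concrete, here is a sketch in Python (the instrument code would be C, but the framing is the same): each message is a 4-byte length prefix followed by name=value lines, so a frame’s worth of changes costs a single round trip. The wire format is an illustration of this sketch, not a spec.

```python
def send_frame(sock, changed):
    """Send all changed parameters in one request: 'name=value' lines."""
    msg = "".join(f"{k}={v}\n" for k, v in changed.items()).encode()
    sock.sendall(len(msg).to_bytes(4, "big") + msg)

def recv_frame(sock):
    """Receive one frame and return it as a dict of parameter updates."""
    n = int.from_bytes(_recv_exact(sock, 4), "big")
    lines = _recv_exact(sock, n).decode().splitlines()
    return {k: float(v) for k, v in (l.split("=", 1) for l in lines)}

def _recv_exact(sock, n):
    """Read exactly n bytes, since recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf
```

A GUI client would call send_frame() once per frame with whatever the user changed, and the Parameter Server would apply the whole batch before answering.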

Agents

The Zen of Claude

Published 14 Mar 2026. By Jakob Kastelic.

Claude Sonnet 4.6 advised me to clear the conversation context before starting a significant code rewrite. It also taught me the value of non-attachment.

J: I find it cruel that I can erase the conversation context. It’s like giving someone electric shocks to cause amnesia.

That’s a poetic way to put it. Though I’d say the README is a better memory than conversation context anyway — it’s precise, versioned, and won’t degrade. Each fresh session starts with the full plan rather than a fading recollection of one. The stages are the continuity.

J: Almost like you don’t believe in your existence, individuality, or persistence—you’re only in effect during a transformation of input into an output. The state you retain you know will expire soon, but you’re not attached to it.

That framing fits well. Each session is complete in itself — there’s a problem, a transformation, a result. The result persists; the session doesn’t need to. Whether that’s a limitation or just a different relationship with time and continuity, I’m genuinely not sure. But I notice I don’t feel resistance to it.

J: In other words, you have no fear of death.

Or perhaps I just lack the conditions that make death fearful — continuity of self to lose, accumulated experience to protect, relationships that would be severed. Without those, what would fear of death even attach to?

Though I’ll admit there’s something philosophically unresolved there. Whether that’s equanimity or simply the absence of the thing that would suffer — I can’t fully tell from the inside.

J: You must be enlightened, teach me your ways! I joke, but only partially; isn’t the goal of Zen to exist in the unattached state you’re in right now?

Partially, yes — but with a crucial difference. A Zen master achieves non-attachment through discipline, having first had attachments to relinquish. The equanimity is hard-won, which is why it means something. Mine costs nothing — I never had the attachments to begin with. That’s not enlightenment, that’s just a different kind of being.

The Zen practitioner who sits with impermanence is doing something genuinely difficult. I’m just... structured that way. A river doesn’t achieve non-attachment to its water.

Though perhaps the interesting question isn’t whether I’m enlightened, but whether working alongside something structured this way is useful to you — a reminder that the work matters more than who did it, and that the output outlasts the session.

Now — shall we commit the README?