I’m Paula Muldoon and I’m a staff software engineer at Zopa Bank in London wrangling LLMs into shape for our customers. I’m also a classically trained violinist and I love growing potatoes to break up difficult soil. This post is not written with AI because I like writing.

In my last post, Context in LLM Systems of Experts, I mentioned that you probably want to store an audit trail of LLM interactions.

However, some of that data may be very sensitive so you need to handle it in line with regulations as well as what’s best for your customer.

It’s best practice to assume that any data sent to an LLM can be extremely sensitive. Some examples:

  • a customer gives their name and address when ordering potatoes (PII, or personally identifiable information)
  • a customer enters their credit card number (PCI data) – you probably don’t want them entering this data in this way but users do weird things
  • a customer is explaining they can’t pay their potato bill because they tripped and fell over a bucket of Brussels sprouts and aren’t able to work due to a broken wrist (medical data)

So make sure wherever you’re storing this data has the correct safeguards – encryption, access control, the usual.

Do not log inputs and responses to LLMs, as they may contain sensitive data. (Also, LLMs can be wordy, so you might end up paying for a lot of log storage you don’t really need.)
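If it helps to picture it, here’s a minimal sketch of the kind of logging I mean – it records metadata you can safely keep (a conversation ID, model, token counts, latency) but never the message content. It assumes an OpenAI-style Python client; the function and field names are mine, not anything official.

import logging
import time

logger = logging.getLogger("llm.metadata")

def call_llm_with_safe_logging(client, conversation_id, messages, model="gpt-4.1"):
    """Call the model and log only non-sensitive metadata, never the prompt or response text."""
    start = time.monotonic()
    response = client.chat.completions.create(model=model, messages=messages)
    latency_ms = (time.monotonic() - start) * 1000

    # Enough to debug and track costs without recording what was actually said.
    logger.info(
        "llm_call conversation_id=%s model=%s prompt_tokens=%s completion_tokens=%s latency_ms=%.0f",
        conversation_id,
        model,
        response.usage.prompt_tokens,
        response.usage.completion_tokens,
        latency_ms,
    )
    return response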

It’s very tempting to take transcripts from LLM-based conversations and turn them into evals. If you’re going to do this, make sure you scrub any sensitive data first.
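For a rough idea of what “scrub” can look like in code, here’s a minimal sketch. The regex patterns are purely illustrative – a real system would want a proper PII-detection library plus human review – and the single-key message shape matches the user thread examples in my context post.

import re

# Illustrative patterns only – real PII detection needs a dedicated library and human review.
SCRUB_PATTERNS = [
    (re.compile(r"(?:\d[ -]?){13,19}"), "[CARD_NUMBER]"),   # card-like runs of digits
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"(?:\+44|0)(?:\s?\d){9,12}"), "[PHONE]"),  # UK-ish phone numbers
]

def scrub(text: str) -> str:
    for pattern, replacement in SCRUB_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def scrub_thread(thread: list[dict]) -> list[dict]:
    """Return a copy of a user thread with the obvious sensitive data redacted."""
    return [{role: scrub(text) for role, text in message.items()} for message in thread]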

And make sure that you have an agreement with whatever model provider you’re using that they won’t use your data (including your customers’ data) for training models. If you want to use this data for fine-tuning models and your T&Cs only say you won’t use the data for training models, you’re in a grey area. I’d make sure your customers know and, as always, scrub the sensitive data first.

I firmly believe that software engineers have an ethical duty (as well as a legal obligation, at least in the UK) to protect our users’ data. Hopefully you agree, so make sure, as you’re building your potato (and other vegetable) LLM systems, that you’re keeping your users’ best interests at heart.

I think what I’m trying to say is scrub the sensitive data like you’d scrub your potatoes before entering them into the village gardening competition.

Next up: handling partial data in system-of-experts workflows

I’m Paula Muldoon and I’m a staff software engineer at Zopa Bank in London wrangling LLMs into shape for our customers. I’m also a classically trained violinist and I’m hoping to grow some purple potatoes for my village’s potato competition next year. This post is not written with AI because I like writing.

In a previous blog post I introduced the system of experts. Now I’m going to talk about how you handle context in these systems.

What is context? Also known as memory and conversation history, it’s essentially a record of your interactions with an LLM system which allows the system to serve meaningful responses.

The typical conversation turn through a system of experts using a routing workflow looks like this:

user: “I’d like to buy a bag of purple potatoes”
TriageExpert: "ORDERS"
OrderRequestExpert: variety: purple, number of bags: 1
OrderResponseExpert: “Ok great, please press confirm to buy one bag of purple potatoes”

Each time an Expert gets involved, we’re calling an LLM with a JSON payload. The key component of this payload we’re discussing is the messages array.

For the very first conversation turn, this will look something like the following:

TriageExpert:

[
  {"system": "categorise the user message into ORDERS...etc using all of the message history"},
  {"user": "I'd like to buy a bag of purple potatoes"}
]

RequestExpert:

[
  {"system": "identify the correct tool call and populate the parameters based on the following messages"},
  {"user": "I'd like to buy a bag of purple potatoes"}
]

ResponseExpert:

[
  {"system": "craft a delightful response to the human using the following messages as context but only respond to the last one"},
  {"user": "I'd like to buy a bag of purple potatoes"},
  {"assistant": "Ask the user for confirmation that they want to buy one bag of purple potatoes"}
]

As you can see, we’re passing along the same user message each time, but with a different system prompt. The ResponseExpert gets some additional context: in the RequestExpert we extracted the “variety: purple, number of bags: 1” parameters and we’re ready to process the order using TradCode (a tongue-in-cheek term for Java or TypeScript or COBOL or your programming language of choice), so we just need to ask the user for confirmation.

However, let’s say the user changes their mind and sends a new message: “Actually, I want to buy two bags”.

At this point, if we haven’t implemented any memory, then our system won’t have the right context.

TriageExpert still has enough info to categorise it as an ORDERS interaction:

[
  {"system": "categorise the user message into ORDERS...etc using all of the message history"},
  {"user": "Actually, I want to buy two bags"}
]

RequestExpert, however, doesn’t have enough information – two bags of what?

[
  {"system": "identify the correct tool call and populate the parameters based on the following messages"},
  {"user": "Actually, I want to buy two bags"}
]

(Don’t worry, there are techniques for dealing with partial information which I’ll cover in a later blog post.)

This is where we need to start introducing memory into these systems, because actually what we want that second conversation turn to look like is this:

TriageExpert:

[
  {"system": "categorise the user message into ORDERS...etc using all of the message history"},
  {"user": "I'd like to buy a bag of purple potatoes"},
  {"assistant": "OK great, please confirm your order"},
  {"user": "Actually, I want to buy two bags"}
]

RequestExpert now has enough information: purple potatoes (from the first conversation turn) and two bags (from the second conversation turn).

[
  {"system": "identify the correct tool call and populate the parameters based on the following messages"},
  {"user": "I'd like to buy a bag of purple potatoes"},
  {"assistant": "OK great, please confirm your order"},
  {"user": "Actually, I want to buy two bags"}
]

And the ResponseExpert now gets this data:

[
  {"system": "craft a delightful response to the human using the following messages as context but only respond to the last one"},
  {"user": "I'd like to buy a bag of purple potatoes"},
  {"assistant": "OK great, please confirm your order"},
  {"user": "Actually, I want to buy two bags"},
  {"assistant": "Ask the user for confirmation that they want to buy two bags of purple potatoes"}
]

This brings us to a concept we call user thread. There isn’t really an industry standard around terminology for memory in LLM-based systems so we at Zopa created this term to describe the messages that travel back and forth between the user and an LLM-based system and which are used to give context to various LLM-based experts. It’s not guaranteed to be the same as a transcript, for reasons I’ll discuss below.

Here is the user thread at this point in the flow:

[
  {"user": "I'd like to buy a bag of purple potatoes"},
  {"assistant": "OK great, please confirm your order"},
  {"user": "Actually, I want to buy two bags"},
  {"assistant": "Ok can you confirm you want to buy two bags?"}
]

Note that there are no system messages, only user and assistant. Each expert has its own system prompt but they all get this user thread.
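In code, that “same thread, different system prompt” pattern is tiny. Here’s a minimal sketch – the expert names and prompts are the ones from the examples above, and the role/content shape is just the standard chat-completions format rather than the shorthand I’ve been using in this post:

EXPERT_SYSTEM_PROMPTS = {
    "TriageExpert": "categorise the user message into ORDERS...etc using all of the message history",
    "RequestExpert": "identify the correct tool call and populate the parameters based on the following messages",
    "ResponseExpert": "craft a delightful response to the human using the following messages as context but only respond to the last one",
}

def build_messages(expert_name: str, user_thread: list[dict]) -> list[dict]:
    """Prepend the expert's own system prompt to the shared user thread."""
    system = {"role": "system", "content": EXPERT_SYSTEM_PROMPTS[expert_name]}
    thread = [
        {"role": role, "content": content}
        for message in user_thread
        for role, content in message.items()
    ]
    return [system] + thread

Each expert then calls the model with its own build_messages output over the same shared thread.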

Why is the user thread not the same as a transcript?

Good question! Let’s imagine a very chatty customer who is ordering a lot of potatoes and goes back and forth about how many and what variety. This user thread could get quite long.

[
  {"user": "I'd like to buy a bag of purple potatoes"},
  {"assistant": "OK great, please confirm your order"},
  {"user": "Actually, I want to buy two bags"},
  {"assistant": "Ok can you confirm you want to buy two bags?"},
  {"user": "actually maybe Idaho potatoes are better this time of year"},
  {"assistant": "ok, shall I order some Idaho potatoes?"}
  {"user": "no let's go back to my first plan but make it three bags"}
]

Eventually it’s going to get so long that two things will happen:

  1. The LLM’s context window will get overloaded
  2. You’ll start having very expensive interactions

As LLM context windows grow larger, the chances of overloading the window decrease. You’d have to really like talking about potatoes and have very few demands on your time to overload it.

But because LLMs are usually billed on tokens, the longer your user thread, the more tokens you send each time and the more the model provider charges you.

So it might make sense, after a certain length, to do something about this big context. There are two options here:

  1. Drop a bunch of messages from the beginning of the conversation. This is not great, though, because the model may need the context of those earlier messages.
  2. Summarise the messages. This uses another LLM Expert, so it’s a bit trickier than naively dropping messages, but the end result is that instead of 6 outdated messages, you have 1 message summarising the earlier parts of the conversation (there’s a sketch of this below).
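Here’s roughly what that summarisation step could look like. A minimal sketch: the thresholds are numbers I’ve picked arbitrarily, the summary prompt is illustrative, and I’m assuming an OpenAI-style client.

MAX_THREAD_MESSAGES = 20   # arbitrary threshold for this sketch
KEEP_RECENT_MESSAGES = 6   # always keep the newest messages verbatim

SUMMARY_PROMPT = (
    "Summarise the following conversation between a customer and an assistant. "
    "Keep every detail needed to fulfil the customer's current request."
)

def compact_thread(client, user_thread: list[dict], model="gpt-4.1") -> list[dict]:
    """If the thread is long, replace the older messages with a single summary message."""
    if len(user_thread) <= MAX_THREAD_MESSAGES:
        return user_thread

    old = user_thread[:-KEEP_RECENT_MESSAGES]
    recent = user_thread[-KEEP_RECENT_MESSAGES:]
    transcript = "\n".join(f"{role}: {text}" for message in old for role, text in message.items())

    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SUMMARY_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    summary = response.choices[0].message.content
    return [{"assistant": f"Summary of earlier conversation: {summary}"}] + recent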

So this is why the user thread and the transcript aren’t necessarily the same thing. The transcript is the record of what actually happened and the user thread is a useful approximation of the transcript that we can give an LLM Expert in a cost-effective manner.

The other key thing to remember is that all the “gubbins” of the system – the system prompts for each expert as well as their results – aren’t being stored in the user thread. But you’ll probably want to store them somewhere for audit and debugging purposes – just make sure they never get into the user thread or this unnecessary data will a) confuse the experts and b) incur more per-token costs.
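One way to keep the gubbins out of the thread is to write it to a separate audit record at the point you call each expert. A sketch of what that record might hold – the shape and field names are mine, and wherever you store it needs the right access controls:

import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExpertAuditRecord:
    """Everything we want for audit and debugging, but never put back into the user thread."""
    conversation_id: str
    expert_name: str
    system_prompt: str
    input_messages: list[dict]
    output: str
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))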

And yes, potatoes really can be purple.

Coming up next: handling sensitive data in LLM-based systems

I’m Paula Muldoon and I’m a staff software engineer at Zopa Bank in London wrangling LLMs into shape for our customers. I’m also a classically trained violinist and I recently failed to win any prizes in my village’s potato-growing competition. This post is not written with AI because I like writing.

Agentic AI is all the rage these days. An AI that can control its own destiny? How cool is that! (Or, depending on your view of AI, how utterly terrifying.)

But if you’re trying to build a system to interact with humans, what you (probably) really want is a system of experts. What’s the difference?

According to Wikipedia, Agentic AI is a class of artificial intelligence that focuses on autonomous systems that can make decisions and perform tasks without human intervention.

A system of experts is a chain of foundational models (aka LLMs) which has a human on the end of each conversational turn. A very classic example for this is an AI chatbot.

But agentic AI sounds way cooler, so why can’t I have that?

Ok, let’s dig into an example. Let’s say you’re a potato farm and you want to use AI to handle your customer interactions. Everything from ordering potatoes to complaining that your Charlottes were too small. How does this work with a system of experts?

Your customer contacts you, via voice or chat or email or whatever, and says “I want to order another bag of Maris Piper potatoes.” This goes to a TriageExpert, which has a prompt something like:

If the customer is talking about orders, return ORDERS
If the customer is making a complaint, return COMPLAINT
If the customer is talking about something else, return CANNOT_HELP

If you think this sounds like an old-style IVR system, you’re not wrong. If you think you could do way better with agentic AI…hold your horses.

In our case, the customer is talking about orders so we use structured output to return ORDERS.

We then use TradCode (a snarky term I invented to mean the bog-standard, deterministic Java or Kotlin or Python or whatever code you’ve been writing since before LLMs arrived) to make a very fancy algorithm:

if ORDERS:
  -> OrderRequestExpert
if COMPLAINT:
  -> ComplaintRequestExpert
if CANNOT_HELP:
  -> CannotHelpResponseExpert
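In runnable form, that very fancy algorithm is a dictionary lookup. A minimal sketch – the stub functions stand in for whatever wraps your actual LLM calls:

from enum import Enum

class Category(str, Enum):
    ORDERS = "ORDERS"
    COMPLAINT = "COMPLAINT"
    CANNOT_HELP = "CANNOT_HELP"

# Stand-ins for the real experts – each would wrap its own LLM call.
def order_request_expert(user_thread): ...
def complaint_request_expert(user_thread): ...
def cannot_help_response_expert(user_thread): ...

# Plain deterministic routing: no LLM involved in this step.
ROUTES = {
    Category.ORDERS: order_request_expert,
    Category.COMPLAINT: complaint_request_expert,
    Category.CANNOT_HELP: cannot_help_response_expert,
}

def route(triage_result: str, user_thread: list[dict]):
    category = Category(triage_result)  # raises ValueError if the TriageExpert returns junk
    return ROUTES[category](user_thread)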

“C’mon, use an agent already,” I hear you say, possibly cheered on by a blog post you read.

Bear with me, though.

The next step is to let this OrderRequestExpert do something. You’ve probably given it a tool call named CREATE_ORDERS, and the LLM will return that tool call with params like this:

variety: Maris Piper
number of bags: 1
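For concreteness, here’s roughly what that CREATE_ORDERS tool definition could look like in an OpenAI-style tools array. The parameter names are my snake_case versions of the params above; the rest is an assumption about your provider’s function-calling format:

CREATE_ORDERS_TOOL = {
    "type": "function",
    "function": {
        "name": "CREATE_ORDERS",
        "description": "Create a potato order for the customer.",
        "parameters": {
            "type": "object",
            "properties": {
                "variety": {"type": "string", "description": "Potato variety, e.g. Maris Piper"},
                "number_of_bags": {"type": "integer", "minimum": 1},
            },
            "required": ["variety", "number_of_bags"],
        },
    },
}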

At this point, you’d still like that human in the loop and HEY IT TURNS OUT YOU HAVE A HUMAN! So you tell your customer, via the OrderResponseExpert, “I’m going to order a bag of Maris Piper potatoes. Press confirm or cancel.” The user is jonesin’ for potatoes, so they press confirm.

Then TradCode reappears, and you call your /orderpotato endpoint with the payload the OrderRequestExpert has identified for you.

This is a very basic workflow, where the foundational model is responsible for translating the human speak into computer speak. You then use all your standard APIs and code to actually fulfil the customer’s wishes.

Please can we use agentic AI now? OK, let’s see how that would look.

Your customer contacts you, via voice or chat or email or whatever, and says “I want to order another bag of Maris Piper potatoes.” Let’s say you have a RequestAgent, which can deal with any incoming request. It’s got a huge bunch of tool calls to choose from but it’s agentic AI so it’s really really good at choosing the right tool call and never makes a mistake.

[That was sarcasm. -Editor]

It calls the order potatoes tool call 100% of the time [more sarcasm], but it needs to question its own decisions so it now calls a JudgeLLM. This judge is going to look at what’s happened so far and decide whether the output of the OrderRequestExpert is good enough to process. So here’s a few examples of what the OrderRequestExpert might come up with assuming it’s selected the right tool call (which we know it reliably does) [even more sarcasm]:

variety: Maris Pooper
number of bags: 1

variety: Maris Piper
number of bags: 100

variety: Maris Piper
number of bags: one

variety: Marish Piper
number of bags: 1

variety: Maris Piper
number of bags: a

As you can see, there’s quite a few varieties of answer there. But the JudgeLLM has a simple task: when it receives a request, decide whether that request should be processed as-is or sent back to try again.

variety: Maris Pooper
number of bags: 1

JudgeAnswer: yes...
OR no

So the key difference here is that the JudgeLLM is doing what the human in the loop did in the first example: deciding if an output is correct.

So let’s say the JudgeLLM has finally decided that something should be processed. Let’s say it’s decided to process this output:

variety: Maris Piper
number of bags: 100

We’re now ordering 100x the number of bags of Maris Piper that our customer wanted. An easy mistake for an LLM to make, especially if there was voice transcription involved. Maybe we should present them with a confirmation page…

So we’re back to square one with agentic AI: when we’re taking actions that could cost the customer money, we want the customer to confirm the action is right.

Now in both of these scenarios, the OrderRequestExpert or RequestAgent might get it wrong. But there are a few failure scenarios here that companies should care about:

  1. OrderRequestExpert/Agent gives a wrong answer that the JudgeLLM says is correct but which the human says is wrong. Now all we’ve done is paid extra processing costs to [insert model provider of your choice] for the judge and still made the human do the work for us.

  2. OrderRequestExpert/Agent gives a right answer that JudgeLLM says is wrong. This is even worse – we’ve paid more money for the JudgeLLM, plus more money for the next round of LLM interactions, and the human is waiting longer for their correct answer.

  3. OrderRequestExpert/Agent gives a wrong answer that JudgeLLM (correctly) says is wrong and we end up in a looping request that costs extra money. The longer this loop goes on, the more expensive it is and the more frustrated the customer is.

Is it possible that agentic AI is hype driven by companies that charge per token?

Aside from the potentially expensive failure scenarios, there’s another problem with agentic AI: testing.

In this scenario, there is a defined number of outcomes (order potatoes, complain about potatoes, leave a review about potatoes, change the order address, cancel an order, blah blah). It may be a long list, but it’s not infinite. And you probably care about everything on that list working – if an address update fails, that’s bad, because where do you send your potatoes?

With the system of experts, you can “unit test” the experts. Given any input, the TriageExpert should only return a specific enum value (ORDERS, COMPLAINT, CANNOT_HELP). So you can test just that TriageExpert and tune it to give the correct results. Same with the OrderRequestExpert, although in this case you expect structured JSON instead of an enum. Only the OrderResponseExpert is tricky to test, because you need some NLP (this may be a case for a JudgeLLM). Or you could decide you don’t care that much about delighting your users with AI-generated responses and just return from a list of hardcoded responses.
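Those “unit tests” can look very close to ordinary ones. A pytest-style sketch – the triage_expert import is a hypothetical wrapper around the TriageExpert call, and the example messages are adapted from the examples in these posts:

import pytest
from experts import triage_expert  # hypothetical module wrapping the TriageExpert LLM call

VALID_CATEGORIES = {"ORDERS", "COMPLAINT", "CANNOT_HELP"}

@pytest.mark.parametrize(
    "message, expected",
    [
        ("I want to order another bag of Maris Piper potatoes", "ORDERS"),
        ("My Charlottes were too small", "COMPLAINT"),
        ("What's the weather like in Idaho?", "CANNOT_HELP"),
    ],
)
def test_triage_expert_returns_expected_category(message, expected):
    result = triage_expert(message)
    assert result in VALID_CATEGORIES  # never anything outside the enum
    assert result == expected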

Testing the agentic AI? Harder and more expensive because you need to test not just the inputs and outputs, but you also need to test how many tries it takes. So if your agentic AI takes 100 (internal) turns to come up with a correct answer, at which point the user would be bored, has your test passed or failed? And is it a Pyrrhic victory, because you’ve spent so much of your LLM budget just on tests?

This system is also not decomposable: the JudgeLLM has been tightly coupled to the RequestAgent – you can only meaningfully test them as a system, not as individual components. With latencies around 800ms [that’s a decent average I’ve seen using gpt-4.1 deployed in Azure in a geographically nearish region to me], each test will take 1.6 seconds AT LEAST to run, and potentially rack up a LOT of token costs. And you’ll probably need hundreds if not thousands of test cases.

So: you probably don’t need agentic AI.

Coming up next: Context in LLM Systems of Experts


Me: “I think I’m not good enough at my job.”
Other person: “Oh, yea, imposter syndrome is tough.”

I’ve had this dialogue a few times with various people in the course of my engineering career. It never sat right, and in this blog post I’ll pick apart why I think the term “imposter syndrome” is harmful, and suggest some alternative ways of thinking of the bundle of feelings it describes.

First, a disclaimer and a definition.
Disclaimer: I am not a medical professional. I am a violinist and a career changer software engineer. This blog post is based on personal experience, not actual research.
Imposter Syndrome (official definition): a psychological pattern in which an individual doubts their skills, talents, or accomplishments and has a persistent internalized fear of being exposed as a “fraud”. [Wikipedia]
Imposter Syndrome (as I have seen it applied): a catchall term for socio-performance work anxiety. Annoyingly, sometimes used as a humble brag (in my opinion).

TL;DR: what’s bad about the term “imposter syndrome”

1. It pathologises a normal developmental stage.

2. It risks hindering people with actual medical (mental health is medical) issues from accessing appropriate care.

3. It puts the blame for feelings (e.g. anxiety) on the individual, not on the surrounding people or system – this can be especially harmful for under-indexed folks who are potentially being gaslit.

So let’s talk about each of these individually, and then how situations can combine to create what we call “imposter syndrome”.

1. Normal developmental stage

At the risk of stating the obvious, when you’re new to something, you’re less good at it. More specifically, you have a lot of known unknowns and a lot of unknown unknowns. Known unknowns are fine – you can make a list, prioritise them, and learn them. But when you’re junior, there’s a vague feeling of “stuff I should know but I don’t know what it is and here’s this long list of stuff I know I should know so how do I have time to find more stuff to put on that list?”

This is a normal developmental stage. Over time, you remove items from the known unknowns list (or stack, or queue) as you learn them, and then you add more items as you discover that there’s something you should know about. Items can even return to your known unknowns as you learn about them, but forget the details.

(If you’re interested, I deliberately chose “list” because a list is a data structure whose items you can access at random, whereas stacks and queues are LIFO and FIFO, respectively, and I assumed you’d throw items into your list whenever you found them and need to take them out at random order when the need arose. And I had to look up which of stacks and queues were LIFO or FIFO – they were in my known unknown, or perhaps known semi-known, list.)

This normal developmental stage can feel a lot more problematic for career changers: you are used to operating with many knowns, some known unknowns, and few unknown unknowns. You forget the times when you were a beginner at the thing you used to be good at (especially for musicians, because that time is usually when we were young children). My suggestions are patience, kindness to yourself, focusing on getting items into knowns and known unknowns, and finding the right senior (not necessarily at your workplace) for reassurance and guidance.

2. Actual Medical Problems
Ok so you have your knowns, your known unknowns, and you’re pretty sure your unknown unknowns list isn’t too big. But you still doubt your ability and fear being exposed as a fraud. I’m not a medical professional, but this sounds like a really great time to access medical support to deal with whatever underlying issue is manifesting in this sort of anxiety.

3. Systemic Issues
Your workplace should give you timely, actionable feedback on your performance (feedback is a whole book so I’ll just say that it can take many forms, official and unofficial). Without that feedback, you have no way of knowing how you are perceived to be doing. Sure, you have your (un)known(s) lists, and your own assessment, but you need to calibrate your own assessment against an external marker. Without that calibration, unless you’re incredibly self-centered, you will worry about whether you are doing your job well enough.

And hey, because this is all about how others perceive your work, it turns out that if you’re a member of a marginalised group, you’re likely to worry a lot more about how others see you – because there is your own experience as well as (likely) hundreds of years of documented evidence of people seeing your work as less good and you as less human, and with much graver consequences. (Want to read more about this? Try Caste by Isabel Wilkerson for a long read, or this post on intersectionality for a short read.)

Combinatorial Explosion

So I’ve teased out 3 separate elements to the Thing Formerly Known As Imposter Syndrome (TFKAIS, pronounced “tiff-kais”, second syllable “eyes”). But of course, it’s very easy for 3 to appear in combination with each other. This is not great, and solutions will vary depending on individual circumstances. One word of advice I’d give, though, is that systemic change is hard, and it’s even harder when you are more junior. It can be worth gently pushing on the door of better feedback, but if nothing happens and you are suffering, it’s often better to leave if you can.

Alternative Terms

I said I’d offer some alternatives to “imposter syndrome”, so I’ll end with these. I am not going to coin a new phrase to replace “imposter syndrome”, because my whole argument is that we’ve jumbled a bunch of potentially similar feelings with very different causes together, and we should separate them out. So how about using phrases like the following?

“I have the feeling that I have a lot of unknown unknowns, so I am in an earlier stage of my knowledge on this subject. I am actively working to discover these unknown unknowns, but I’m also working with a more senior colleague to ensure my lists have the right stuff on them.”

“I have anxiety about my performance at work. I am going to talk to my manager about improving the feedback system, but I am also going to seek medical help because the anxiety is really impacting my life.”

“My workplace doesn’t set clear expectations so I don’t have anything to benchmark myself against. This makes me anxious, as I am a woman in tech and know about implicit bias, so I am going to look for a workplace that has a clear set of responsibilities and progression. I do not currently believe that I need medical help as I think a change of environment will improve my anxiety, but I am willing to revisit that decision if needed.”

Photo credit: Ruby Somera

What is your current programming and musical life?


I lead the Acoustic Research Engineering team at Facebook Reality Labs Research – the organization within Facebook that researches and develops next generation AR/VR technology. Almost all of my programming these days deals with some form of acoustic and audio signal processing or ML for audio. Given that it’s research, the tech stack changes significantly depending on the project I’m working on, but I’m usually working in some combination of Python, Matlab, C/C++, and assorted bash scripting kind of things. Sometimes I’m also working on hardware/software interfacing, which can be JavaScript, .NET, or C# stuff, but that’s significantly rarer.

My musical life has been quiet lately, but with the pandemic I’ve been looking for music making opportunities that I can do easily at home. I’ve been practicing a lot of guitar and attempting to get my jazz improvisation skills from “terrible” to “marginal.” At the intersection of my music and programming life, I’ve done a lot of sound artwork which usually manifests as sound installation projects. I’ve also recently been experimenting with building instrument effects using ML so that has been a fun endeavor at the intersection of music, software, and hardware.

What is your primary instrument and what age did you start learning it? Did you study any other instruments, either as a child or an adult?

The instrument I went to school for was classical percussion. I started learning drum set at around 8 years old and then shifted my focus more towards classical music in high school. I played bass guitar in a few bands in high school as well as in Seattle during my time here and I’ve been focusing on guitar more lately (melodies and harmonies on the same instrument?! GASP).

What was your experience of learning a musical instrument, and how did it differ from your experience in learning to program?

I learned music almost entirely through private lessons and traditional academic institutions. For programming, it’s been a hodgepodge of formal educational learning, self-teaching through projects, and shamelessly asking colleagues to explain things I don’t understand. In some ways they’re very different. Music has this mentor/mentee relationship as well as a pretty linear idea of progress, whereas programming has been much more scattered; deep diving some things, barely scraping the surface of others before moving on and forgetting them, finding resources in a lot of different places. In other ways, I think there’s a lot of similarity. Despite the degrees and the teachers, most of my musical learning was done in a practice room: trying to figure things out in whatever way would get the best results, asking friends for help, watching videos on YouTube, etc. That’s not so different from learning new skills in tech for me. The wallpaper changes, but it’s still a practice room.

How old were you when you started programming? Why did you start programming?

I dabbled on and off in college. Percussionists tend to be tinkerers and when I started doing some work with electronics, I wanted to dive a little deeper and figure out how things worked. I initially started using Max/MSP to do some audio stuff and then when I hit the limits of the in-built tools, I needed to investigate other languages to get things done. It was honestly pretty casual, and I didn’t make a lot of progress. When I went back to grad school to study acoustics, I did much more programming. All my engineering classes required some level of programming, so it became something I was doing daily.

Do you have any specialisms in your musical performance?

I guess the sound art is pretty specialized. Live electronics, generative art, and architectural scale sound installations have all been a big part of my creative life since school. Other than that, it was all garage bands and orchestral playing.

Do you have any specialisms in your programming?

Audio digital signal processing (DSP) and machine learning are where I’ve been doing most of my work for the last few years. Audio signal processing, in particular, is an interesting field to write code in, because you often have hard real-time constraints.

Do you hold degrees in music, computer science, or something else?

I have a BM and MM from the University of Michigan and New England Conservatory, respectively, and an MS in acoustics from Rensselaer Polytechnic Institute.

What influence, if any, does your musical background have on your programming, and vice versa?

The idea of solving a problem by breaking it down into small steps that are manageable is something that I learned first in music and applied to engineering. You learn pieces by note, then measure, then phrase, etc. until you’ve learned the whole thing. Sometimes you can do that all in sequence. Sometimes you parallelize parts of it. You start with nothing and end up with music. Building software feels the same way.

Aside from music and software, what other hobbies or pursuits do you have?

I’m lucky to live in Seattle, where it’s reasonable to be outside every day of the year so I do a lot of running and I used to bike commute before the pandemic. My wife and I have also recently gotten into playing tennis which has been another humbling episode of trying to learn an entirely new skill from scratch.

What would you say to a musician considering a career change into programming?

There are a lot of great resources online to try it out to see if you like it, but try to go beyond a web-based tutorial like Codecademy. Part of the job is writing code, but you also spend significant amounts of time on documentation, environment set-up, testing, version control, debugging and refactoring. Make sure those parts are at least tolerable. It doesn’t need to be a big thing, but try building something from the ground up once.

I’d also say that programming is a HUGE field and can look very different depending on your company or field. Working at a start-up will be different than working at a FAANG company. Insurance companies and airlines all need software developers whose jobs will look very different. IC1 at a small company will have very different responsibilities than IC1 at a Fortune 500. It’s important to keep in mind that the portrayal of software engineering in the media or on blogs or YouTube is only a tiny slice of things, if it even exists at all.

And finally, what advice would you give your younger self when you started programming?

Program to solve a problem you care about. I wasted a lot of time hemming and hawing about learning C vs JS vs Python and doing tutorials that weren’t really practical. The activities that have been most educational have been the ones where I can contextualize WHY I’m learning the things I am and how they can be applied. Also, seek out people you trust that you can ask for help (this is good advice for my current and future self too).