How I Code in 2026 #8: Postscript - UI, Limits, Fears
🇦🇷 Click here to read this article in Spanish.
After attending the Claude Code Hack night organized by LLM Native, I realized there were some things left out of the series that I wanted to mention. So here they are, in no particular order.
ASCII for designing interfaces
As I mentioned before, I started programming in QuickBasic at 8 years old, without a graphical interface, in DOS. Back then, the few interfaces that existed were ANSI: characters and colors. I've always believed that using ASCII to explore interfaces is a digital form of pen and paper: you ignore the distracting parts and focus on layout and user experience.
That's why when I started working with code agents, it was natural for me to use ASCII to communicate interface ideas. It's fast, you don't leave the agent's interface, and it forces you to think about what's essential.
For example, for an agent orchestration app, I'd give it a prompt like this:
I'd need to create an ASCII art mockup for an agent orchestration application, it should have the list of agents on the left, we need to put some way to determine if they need attention, in the center we'll see the "selected" agent and a small conversation that you can invent, or maybe we can show a conversation about creating an ASCII art mockup, turtles all the way down.
And it generates something like this:
┌──────────────────────────────────────────────────────────────────────────────┐
│ Agent Orchestrator                                               [_] [□] [X] │
├───────────────────────┬──────────────────────────────────────────────────────┤
│ AGENTS                │ claude-frontend                                      │
│                       │ Status: Waiting for input                            │
│ ● claude-frontend     ├──────────────────────────────────────────────────────┤
│ ○ claude-backend      │                                                      │
│ ○ claude-docs         │ YOU: I need an ASCII mockup for an agent             │
│ ! claude-tests        │      orchestration app                               │
│                       │                                                      │
│                       │ AGENT: I'll create a mockup. What sections           │
│                       │        do you need?                                  │
│                       │                                                      │
│                       │ YOU: List of agents on the left, selected            │
│                       │      agent's conversation in the center              │
│                       │                                                      │
│                       │ AGENT: Perfect. Should agents show status            │
│                       │        indicators for attention needed?              │
│                       │                                                      │
│                       │ YOU: Yes, something simple like a dot or icon        │
│                       │                                                      │
│                       ├──────────────────────────────────────────────────────┤
│ ┌───────────────────┐ │                                                      │
│ │    + New Agent    │ │ > Write your message...                     [Send]   │
│ └───────────────────┘ │                                                      │
└───────────────────────┴──────────────────────────────────────────────────────┘

LEGEND:
● = selected   ○ = idle   ! = needs attention
This is very useful for web layouts, architecture diagrams, data flows, CLI interfaces, game or app UIs, you name it. Not only that, Claude is pretty good at understanding dimensions: if you want a mobile app mockup, it gets the aspect ratio right and the object sizes make sense.
Current limits of agents
Security
Code generated by agents is not reliable for security-critical contexts. A while ago I read a Veracode study that tested over 100 language models. I haven't revisited this with newer models, but the results were concerning.
- 45% of generated code has security flaws
- AI code is 2.74x more likely to have XSS vulnerabilities
- 1.91x more likely to have insecure object references
- 1.88x more likely to have insecure password and credential handling
The recommendation is simple: manually review the critical parts and don't blindly approve what the agent sends. You can also add an automated security-focused review step, giving the model strong context about recurring issues (injection, XSS, etc.), but manual review is still necessary.
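To make that automated step concrete, here's a minimal sketch of what it could look like in CI, assuming the Anthropic TypeScript SDK (@anthropic-ai/sdk) and Node; the model name, prompt, and NO_FINDINGS convention are placeholders of mine, not a standard:

```typescript
// security-review.ts: a cheap, security-focused first pass over a diff.
// Sketch only; adapt the prompt to the recurring issues in your codebase.
import Anthropic from "@anthropic-ai/sdk";
import { execSync } from "node:child_process";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the env

// Review only what changed: the diff against the main branch.
const diff = execSync("git diff origin/main...HEAD", { encoding: "utf8" });

const SYSTEM = `You are a security reviewer. Focus on recurring issues:
SQL/command injection, XSS, insecure direct object references,
and hard-coded credentials or weak password handling.
Report each finding as: file, line, issue, severity (high/medium/low).
If there are no findings, answer exactly: NO_FINDINGS.`;

const response = await client.messages.create({
  model: "claude-sonnet-4-5", // placeholder: use whatever model you run
  max_tokens: 2048,
  system: SYSTEM,
  messages: [{ role: "user", content: `Review this diff:\n\n${diff}` }],
});

const report = response.content
  .map((block) => (block.type === "text" ? block.text : ""))
  .join("\n");

console.log(report);
// Fail the pipeline so a human still reads the flagged code.
if (!report.includes("NO_FINDINGS")) process.exit(1);
```

The point isn't that the model catches everything; it's a cheap first pass that tells a human where to look line by line.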
Visuals and temporality
Even using tools like Playwright MCP or Chrome Tools to automate the browser and take screenshots, there's a very real limit: temporality.
It's not possible to automatically test animations: you can take screenshots, and that's it. I'm experimenting with this in my LLM-first game engine: the idea is an MCP endpoint where the LLM requests a series of frames, and the runtime composes a mosaic image from those frames to express the animation. But this is a workaround; the temporality issue isn't resolved. It's very hard to reason about temporality with an agent, because text is static.
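Here's a rough sketch of that mosaic idea using Playwright and sharp, outside of any engine; the URL, frame count, and interval are hypothetical, and this isn't the actual MCP endpoint:

```typescript
// frame-mosaic.ts: capture N frames of an animation and tile them into
// one image an agent can look at. Sketch; tune FRAMES and INTERVAL_MS.
import { chromium } from "playwright";
import sharp from "sharp";

const FRAMES = 8;
const INTERVAL_MS = 100; // sampling period, tuned to the animation's speed
const WIDTH = 320;
const HEIGHT = 240;

const browser = await chromium.launch();
const page = await browser.newPage({
  viewport: { width: WIDTH, height: HEIGHT },
});
await page.goto("http://localhost:3000"); // wherever the animation runs

// Capture a burst of screenshots at a fixed interval.
const shots: Buffer[] = [];
for (let i = 0; i < FRAMES; i++) {
  shots.push(await page.screenshot());
  await page.waitForTimeout(INTERVAL_MS);
}
await browser.close();

// Tile the frames left to right into a single strip.
await sharp({
  create: {
    width: WIDTH * FRAMES,
    height: HEIGHT,
    channels: 4,
    background: { r: 0, g: 0, b: 0, alpha: 1 },
  },
})
  .composite(shots.map((input, i) => ({ input, left: i * WIDTH, top: 0 })))
  .png()
  .toFile("mosaic.png");
```

Note that screenshots in a loop aren't frame-accurate (capture timing jitters), which is exactly why this is a workaround and not a fix.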
Beyond that, you'll often pass it a mockup and get back something similar that still misses basics any human would respect: font sizes, gradients, proportions. Visual reasoning still lags far behind when you need precision. If you don't need precision and just want it to "do X," it's fine; for careful visual work, it's still better to do it yourself.
Another problem is that communicating visual ideas in text is very difficult. For example, I was working on a WebGL visual effect that needed a very specific curve, and I couldn't find a way to explain to Claude what that curve should look like; I passed it mockups and it still couldn't get it right. I ended up implementing it by hand, editing the code Claude had generated.
Will someone please think of the children?
This is something that genuinely worries me...
"When juniors never write real code, will we run out of future seniors who actually understand computers?"
It's hard to think about this. You can't predict how all this will play out, but the few studies on the topic aren't very encouraging.
Your Brain on ChatGPT
The MIT Media Lab published a study where they used EEG to measure brain activity while participants wrote essays. They divided them into three groups: LLM, Search Engine, and Brain-only (no tools).
The results:
- The LLM group showed the weakest brain connectivity
- 83% of the ChatGPT group couldn't recall key points from their own essays
- Most concerning: the cognitive decline continued after they stopped using ChatGPT
The study suggests delaying AI integration until sufficient self-driven cognitive effort has been made. (Note: the study hasn't been peer-reviewed yet, so it could be nonsense, but it caught my attention and I'm watching for follow-up work.)
Technological capture
We need to be aware that everything we do with agents will tend toward a loss of skills and a capture of our capabilities by technology.
This problem isn't new. Technology has been capturing us since the origins of tools. But until now, our brains were relatively safe. With attentional capture by social media and skill capture by AI, it could be catastrophic in a few generations.
That's why I believe we need to use AI while being conscious of this, and implement some kind of mitigation that allows us to develop and keep our brains healthy.
That said, AI is also a path to liberation from repetitive and dehumanizing tasks. I'm never debugging a null pointer exception by hand again in my life, no way.
Mitigations
I'm thinking a lot about what kinds of activities could keep my brain active. For now:
- Manual code review of critical code (I ignore boilerplate and focus on the important functions).
- Writing as much as I can (I keep a handwritten journal, this blog, and I'm writing a science fiction novel).
- Using a d10 as a saving throw: if I roll a 9 or 10, I do the issue by hand (yes, programming alone is that boring sometimes); there's a toy version of the roll after this list.
- And the most interesting one: making an effort to understand difficult topics in depth. Right now I'm studying some probability and statistics (it's not that hard) and reading Popper and Deutsch, thinking a lot about what their ideas imply for AI.
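For the record, the saving throw is about this much code; a toy sketch in TypeScript, and the 9-or-higher threshold is just my house rule:

```typescript
// d10 saving throw: decide whether the next issue gets done by hand.
// Toy sketch; the 9+ threshold is a personal house rule, not doctrine.
const d10 = (): number => Math.floor(Math.random() * 10) + 1;

const roll = d10();
console.log(
  roll >= 9
    ? `Rolled ${roll}: do this one by hand.`
    : `Rolled ${roll}: hand it to the agent.`
);
```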
The key is balance: use AI for what it does well (repetitive tasks, boilerplate, quick exploration) and keep exercising the capabilities we want to preserve.
I think that was everything I had left to say for now. Until next time.