PhilipMat

"The machine didn't take your craft. You gave it up." -- David Abram

It’s a bit of a long essay by David Abram in which he argues that LLMs shifted where problems are solved, much the way we shifted to high-level languages and compilers.

I have been doing this for years, and the hardest parts of the job were never about typing out code. I have always struggled most with understanding systems, debugging things that made no sense, designing architectures that wouldn’t collapse under heavy load, and making decisions that would save months of pain later.

None of these problems can be solved by LLMs. They can suggest code, help with boilerplate, and sometimes act as a sounding board. But they don’t understand the system, they don’t carry context in their “minds”, and they certianly [sic] don’t know why a decision is right or wrong.

I think that’s a solid argument.
However, at another level, code is a measurable output, while knowledge is not.

I have seen product owners/managers be mistaken about their level of “understanding the system” and believe that, with the help of an AI agent, that “understanding” is trivial to transfer into some output (code).

And most importantly, they don’t choose. That part is still yours. The real work of software development, the part that makes someone valuable, is knowing what should exist in the first place, and why.

I also fully agree with this, though I haven’t quite seen it borne out in practice in any environment. If it were true in the workforce, we would see much higher value assigned to long-timers who have that knowledge but are not interested in climbing the ladder.
In my experience, “seniority” is more closely associated with demonstrating forward-looking coding and architectural skills than with knowledge of the existing systems (and the choices made to get there); those tend to be mere by-products or “nice-to-haves”.

If you reduce yourself to “the one who types code,” then yes, you should feel obsolete. But don’t fool yourself any further: typing code was never the essence of the craft.

This is a long-standing argument: the “code” is an incidental side-product of software development; the main goal is satisfying product requirements. (And, cynically but no less true, in enterprise environments it’s about making your boss look good.)

The real danger is that people stop thinking. The actual trap is engineers letting the tool carry the cognitive load they were meant to build – the abdication of reason from within.

I’d argue that what we see is a real transfer of “work” from engineers to product builders, the latter of whom (might) over-estimate their knowledge and conclude they have no need of the former.

To take it one step further: if the decisions that built a system are encoded in documentation – including product decisions, architectural decisions, etc. – and an LLM can look at those and the resulting code and explain “everything”, then where do the engineers David describes fit in?

Removing the menu icons in macOS Tahoe

Saw this advice from multiple sources, most pointing to @stroughtonsmith:

defaults write -g NSMenuEnableActionImages -bool NO followed by a computer restart results in the menu icons being hidden, except when apps specifically override this setting.
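For reference, here’s the write and its revert; the revert via defaults delete is standard defaults behavior, not something from the linked advice:

```shell
# Hide menu item icons globally (takes effect after a restart)
defaults write -g NSMenuEnableActionImages -bool NO

# Revert: remove the override and return to the default behavior
defaults delete -g NSMenuEnableActionImages
```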

It even preserves the couple of instances where you do want icons, like window zoom/resize.

Firefox, for example, keeps icons for the Bookmarks menu entries. Finder keeps the icons for the specific locations under the Go menu.
iTerm2 doesn’t seem to be observing this setting as of v3.6.9, but a fix is in the works.

As Daring Fireball notes, Safari needs 26.4 to observe this preference.

Tags: til, macos, mac, terminal

Two macOS tools for sandboxing agents

Both Agent Safehouse and Nono (get it, no-no?) use macOS sandboxing to execute agents.

Agent Safehouse

Pull down a self-contained Bash script with curl, and drop it in ~/.local/bin. Run your agent command prefixed with safehouse: safehouse opencode.
The tool auto-detects the git root of the working directory, applies a deny-all baseline, and layers on permissions for common toolchains.
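The deny-all-then-allow approach can be pictured with a hand-rolled sandbox-exec profile. A sketch only: the profile language is Apple’s SBPL, but the specific rules and paths below are illustrative assumptions, not Safehouse’s actual ruleset:

```shell
# agent.sb — deny-all baseline, then layer on permissions (illustrative)
cat > agent.sb <<'EOF'
(version 1)
(deny default)                        ; deny-all baseline
(allow process-exec process-fork)     ; let the agent spawn its tools
(allow file-read* (subpath "/usr"))   ; read-only access to the toolchain
(allow file-read* file-write*
  (subpath "/path/to/your/repo"))     ; read/write only inside the repo
EOF

# Run the agent under the profile (sandbox-exec is deprecated but still ships)
sandbox-exec -f agent.sb opencode
```

Tools like Safehouse and Nono exist precisely so you don’t have to maintain profiles like this by hand.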

Nono

Same, but installed with brew. Then nono run --profile claude-code -- claude to run a sandboxed agent.

Nono works on Linux as well; Agent Safehouse is macOS only. Nono is written in Rust, Agent Safehouse is all fish-shell scripting.

Founderland.ai mentions a few others:

Microsandbox and Agent Harbor lean on VM-level isolation. DevCage and AgentSphere target multi-platform or cloud deployments. Kilntainers gives each agent an ephemeral Linux sandbox via containers or microVMs.

May be worth investigating. It’s a fledgling space, so new tools will come and go.

TIL: Essential Claude Code Skills and Commands

Summary

The article explains the difference between Claude Code’s built-in slash commands and its prompt-based skills: slash commands are fixed, non-AI operations (like /clear or /model), while skills load instruction files into Claude’s context and can spawn subagents, accept arguments, use tools, and include supporting files and frontmatter. The commands and skills systems have been unified under the slash-command interface, with .claude/skills/ recommended for new customizations because it supports richer features (templates, dynamic context, subagents, and more).

It then surveys the most useful built-in skills and commands: /simplify (automated code-quality review that spawns parallel reviewers and can auto-fix issues), /review (thorough code/PR review for bugs and edge cases), /batch (decomposes large refactors into parallel worktree agents), /loop (recurring scheduled prompts), /debug (session diagnostics), and /claude-api (loads API reference material). Helpful slash commands covered include /compact (conversation compression), /diff (interactive diff of Claude’s edits), /btw (side questions without polluting context), /copy (copy code to clipboard), and /rewind (undo changes). The piece highlights practical workflows—e.g., run /review for correctness then /simplify for cleanup—and recommends listing available skills with /skills.

Commands/skills I didn’t know about that seem useful:

  • /btw lets you ask a side question without affecting the main conversation context
  • /simplify reviews your recently changed files for code reuse opportunities, quality issues, and efficiency improvements – and then fixes them automatically.
  • /review gives you a proper code review of your changes – the kind of feedback you’d expect from a thorough pull request review.
    [..] My typical workflow is: make changes, run /review to catch issues, fix anything it flags, then run /simplify to clean things up.

Also /copy to copy code to clipboard, with a selector for multiple changes, and /rewind to roll back to a certain point in order to explore a new path.

Source: Essential Claude Code Skills and Commands

TIL: Single-executable local LLM

Summary

llamafile is a single-file executable format that packages an open LLM’s runtime and weights so the model can run locally with no installation. By combining llama.cpp with Cosmopolitan Libc, a llamafile contains everything needed to execute a model on a user’s machine and aims to make open LLMs more accessible to developers and end users.

Technically, llamafiles add runtime dispatching for multiple CPU microarchitectures and concatenate AMD64 and ARM64 builds so the appropriate binary runs on each system. The format targets six OSes (macOS, Windows, Linux, FreeBSD, OpenBSD, NetBSD) and supports embedding weights via PKZIP in the GGML library for memory-mapped, self-contained distribution. The project provides tooling to create and distribute llamafiles, is an Apache 2.0-licensed project with MIT-licensed changes to llama.cpp, and has recently been adopted by Mozilla.ai, which is soliciting community feedback on modernization plans.
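The usage pattern looks roughly like this (a sketch; the filename is a placeholder, and real llamafiles are multi-gigabyte downloads from the project’s releases):

```shell
# A llamafile is a single file: runtime + weights. Make it executable and run it.
chmod +x model.llamafile
./model.llamafile        # starts a local chat UI / API server

# The same file is also a valid ZIP archive — the weights are embedded inside
unzip -l model.llamafile
```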

Because it’s an LLM, this is akin to having Wikipedia offline – but you can ask it questions.

It’s also powered by Cosmopolitan Libc, the explanation of which is an amazing piece of work in itself.

Source: TIL: Single-executable local LLM