Weekly notes
- The Evolution of Garbage Collectors: From Java’s CMS to ZGC, and a JVM vs Go vs Rust Latency Shootout: Modern garbage collectors are great. Stopping threads at safepoints during a GC doesn’t seem to introduce much latency, and Go / Java / Rust aren’t that far from each other. (Note Rust doesn’t have a GC; memory is freed deterministically through ownership/RAII, with reference counting (Rc/Arc) as an opt-in, and you maybe have to think about memory allocator config up front?)
- Astronomical probability of UUIDv4 collision: UUIDv4 collisions are astronomically unlikely
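The birthday bound makes "astronomically unlikely" concrete. A quick back-of-envelope sketch (UUIDv4 has 122 random bits; the other 6 are fixed version/variant bits):

```python
import math

# UUIDv4: 122 of the 128 bits are random.
RANDOM_BITS = 122

def collision_probability(n: int) -> float:
    """Birthday-bound approximation of P(at least one collision)
    among n independently generated random UUIDs."""
    return 1 - math.exp(-n * (n - 1) / 2 / 2**RANDOM_BITS)

# Even a trillion UUIDs gives a vanishingly small collision chance.
p = collision_probability(10**12)
print(f"{p:.1e}")  # on the order of 1e-13
```

So you’d need to generate UUIDs at an absurd rate for a very long time before a collision becomes plausible.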
- Omicron Database Design: Oxide describes choices they made in their db design. Stuff for me to think about:
- No foreign keys for performance?
- Scalability def: add a server to increase capacity, with little or no sensitivity to the size of the collection
- DB transaction: a group of statements that together should have ACID properties (atomicity, consistency, isolation, durability)
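A tiny sketch of that definition using Python’s stdlib sqlite3 (table and column names are made up for illustration): either both statements in the transfer take effect, or neither does.

```python
import sqlite3

# In-memory db with a hypothetical accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
except sqlite3.Error:
    pass  # on failure, neither update would be visible (atomicity)

balances = [row[0] for row in conn.execute(
    "SELECT balance FROM accounts ORDER BY id")]
print(balances)  # → [50, 50]
```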
- What is the use of ProxyPassReverse Directive: Makes sense. Spending a bit of time thinking about httpd configuration these last few days … sigh :)
A comment in Apache httpd’s source code, mod_proxy_http.c:
/* RFC2616 tells us to forward this.
*
* OTOH, an interim response here may mean the backend
* is playing sillybuggers. The Client didn't ask for
* it within the defined HTTP/1.1 mechanisms, and if
* it's an extension, it may also be unsupported by us.
*
* There's also the possibility that changing existing
* behaviour here might break something.
*
* So let's make it configurable.
*
* We need to force "r->expecting_100 = 1" for RFC behaviour
* otherwise ap_send_interim_response() does nothing when
* the client did not ask for 100-continue.
*
* 101 Switching Protocol has its own configuration which
* shouldn't be interfered by "proxy-interim-response".
*/
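For reference, a minimal reverse-proxy stanza (hostnames and paths here are made up) showing what ProxyPassReverse is for: it rewrites Location, Content-Location, and URI headers in backend responses so redirects point at the proxy rather than at the internal backend.

```apache
<VirtualHost *:80>
    ServerName www.example.com

    # Forward requests under /app/ to the backend
    ProxyPass        /app/ http://backend.internal:8080/
    # Rewrite redirect headers in responses so clients see the
    # proxy's URL space, not http://backend.internal:8080/
    ProxyPassReverse /app/ http://backend.internal:8080/
</VirtualHost>
```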
- VIDEO Storytelling & Science with James Cameron (Full Episode): Great interview with James Cameron about movies, science, and science fiction
- More people are using OpenAI’s agent Codex: This is a neat survey of PRs submitted to GitHub and accepted
- Development gets better with Age: LLMs are turning out to be surprisingly effective at many tasks that nobody could have predicted. With that context, how do we use them to deliver more value faster while keeping in mind what our customers actually need? Being an older developer means we’ve been through a few of these cycles already, where an amazing new tool comes out and changes how we think about problem solving. LLMs haven’t been normalized in our heads yet and we don’t quite know what their limits are right now, is all …
- LLMs as Parts of Systems: Marc Brooker talks about how AI agents, combined with other components in a system, are able to participate in solving more kinds of problems. Also, the idea that proper AI needs to “think how we think” was interesting, if I’m interpreting that rightly. Interesting, and proving to be false as far as I can tell.
On efficiency
Here’s the lightning sketch of Paul’s Treatise Against Efficiency that I’ve never written:
1. Efficiency is asymptotically inefficient: as costs approach zero, the cost of further reducing them approaches infinity.
2. Efficiency prioritizes the measurable over the difficult-to-measure.
3. Efficiency prioritizes what those in power see (or imagine) over on-the-ground reality.
4. Following from 2 and 3, efficiency reduces the amount and quality of information flowing into a human system.
5. Efficiency foments institutional inflexibility.
6. By removing slack, efficiency causes small failures to cascade more readily and increases the risk of catastrophic failure.
7. Following from 4, 5, and 6, efficiency trades small costs for massive risks: from failures, from missed opportunities, and from inability to adjust.
8. Efficiency, when pushed, strangles the emergent phenomena that in the long term create all new things of value.
9. Thus, although it can be a by-product of evolution, efficiency as a goal in itself strangles evolution.
10. Efficiency as a goal strangles joy.
- Best practices for using GitHub Copilot to work on tasks: A few simple guidelines, like starting by giving it small tasks, what sorts of things you should mostly do yourself (large refactors, or those times when you want to learn more), and a sample agents.md file.
- Designing agentic loops: Agents can be set up to run on their own, and for some problems that can be a good idea (e.g. debugging, trying out many variations on a specific theme). You have to be careful to limit what they can do, though. The agentic loop can involve giving something like Claude access to your dev environment, your terminal where it can run commands like you would, and potentially access to cloud platforms depending on the task.
- Kiro and the future of AI spec-driven software development: Marc Brooker uses Kiro + LLMs to make a Towers of Hanoi game. He started by asking the LLM to help him write a spec.
His first prompt:
Let’s start a specification for building a program to play the Towers of Hanoi game with a Javascript-based UI.
He worked with the tool to write a spec. And then:
Let’s start building.
At some point he added to his initial spec:
- When the solution demonstration starts, it starts from the current game state.
- The player can choose a solution demonstration that completes the game, OR a single-step towards completing the game.
Starting with a well-written spec is a good idea in general, and predates LLMs.