- How We Reduced Median Memory Estimation Error by 99%, With the Help of AI: The time to learn how to do a data analysis task, part of a cost optimization exercise involving wrongly sized containers that couldn’t finish their batch work because they were OOM-killed, was compressed by an LLM agent. Very helpfully. Allegedly.
- My local agentic dev setup today: Local models might actually be fun to play with, but sadly they need better hardware than I have. Claude and Codex are much better right now. This fellow describes his current local setup. He’s using pi.dev, which makes me wonder what other harnesses are like; I think I’ll try a few non-Claude ones. How important is the harness?
- Field report: coding with Qwen 3.6 35B-A3B on an M2 MacBook Pro with 32GB RAM: Lots of detail about another local AI setup, with a couple of experimental outcomes showing how it did on real development tasks. It’s coming along. Local hardware for these models is super expensive right now because of the AI bubble; I need that to contract a bit.
- The Boring Internet: This resonated with me. The internet applications we use daily are a thin substrate over a lot of gnarly stuff that has been running for a long time now: open, and (mostly) unencumbered by the gobs of VC money that eventually require websites to enshittify.