-
Otra de Vieques
-
Adios desde Vieques
-
Hola desde Vieques
-
Phantom Obligation (Text Version) | Terry Godier
But when we applied that same visual language to RSS (the unread counts, the bold text for new items, the sense of a backlog accumulating) we imported the anxiety without the cause. Nobody is waiting.
Really good essay about the design of RSS readers. If this appeals to you, check out Current, the author’s app that implements the ideas in this essay.
In summer 2024, I started a hobby project to build a personal RSS reader, and I took a similar approach to the one described here: no unread counts, and a feed-like view of posts across all sources, organized the way I like. I use it every day.
If I hadn’t built this, I’d probably be using Current as my daily RSS reader.
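The core of that “no unread counts” design is tiny. Here’s a hypothetical sketch (the `Post` shape and `merged_feed` helper are my illustration, not the actual code from my reader or from Current):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    source: str        # feed name, e.g. "some-blog"
    title: str
    published: datetime


def merged_feed(sources: dict[str, list[Post]]) -> list[Post]:
    """Flatten every source's posts into one reverse-chronological feed.

    Note there is no read/unread state at all: the feed is simply
    whatever exists right now, newest first. Nothing accumulates.
    """
    posts = [p for items in sources.values() for p in items]
    return sorted(posts, key=lambda p: p.published, reverse=True)
```

Fetching and parsing the actual feeds (e.g. with a library like feedparser) happens upstream; the point is that no counter or backlog is ever tracked.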
-
Technology Connections YouTube video: You are being misled about renewable energy
If this doesn’t convince you that the continued pursuit of fossil fuels is friggen stupid and that the people who are pushing for this are lying to you, then I don’t know what will.
Watch through to the end to see a nerd clutching a solar panel become very, justifiably, angry about how this connects to the fascist bastard Republicans currently in office.
-
With a 10x boost, if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value. For someone. But who actually gets to keep that value?
Interesting post about burnout and AI. Feels like this is increasingly relevant for anyone working with AI but especially software devs.
-
Finished reading: Hole in the Sky by Daniel H. Wilson 📚
-
AI, Software Development, and Centralization
Two posts on AI that caught my attention recently:
Don’t fall into the anti-AI hype -
I love writing software, line by line. It could be said that my career was a continuous effort to create software well written, minimal, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don’t want AI to economically succeed, I don’t care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But, I would not respect myself and my intelligence if my idea of software and society would impair my vision: facts are facts, and AI is going to change programming forever.
Antirez is the creator of Redis, a well-respected caching system that I’ve used professionally. Writing this kind of software requires extra care and thoughtfulness to maintain the project’s performance and reliability goals. So, I find it interesting that LLMs are very useful to Antirez in this codebase.
I do think that LLMs will be a standard tool for programming going forward. But that’s probably been obvious for folks working in the field for a while now.
What caught my attention more is his views on the ecosystem around LLMs:
However, this technology is far too important to be in the hands of a few companies. For now, you can do the pre-training better or not, you can do reinforcement learning in a much more effective way than others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with frontier models of closed labs. There is a sufficient democratization of AI, so far, even if imperfect. But: it is absolutely not obvious that it will be like that forever. I’m scared about the centralization. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough “magic” inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google are so near in their results, for years now).
This aligns well with how I feel about LLMs. I think anyone who cares about computing and its impact on society has a vested interest in seeing that AI does not become centralized. And I agree that there’s too much practical use for LLMs in coding for things to totally evaporate in a cloud of hype. (Don’t take this to mean there isn’t too much hype around LLMs and a ton of worthless AI slop. I think the industry as a whole will see the bubble burst, but, like previous tech bubbles, we will see some aspects of the technology make it through.)
Birchtree blogged about using LLMs to quickly build personalized software:
LLMs have made simple software trivial
I was out for a run today and I had an idea for an app. I busted out my own app, Quick Notes, and dictated what I wanted this app to do in detail. When I got home, I created a new project in Xcode, I committed it to GitHub, and then I gave Claude Code on the web those dictated notes and asked it to build that app.
About two minutes later, it was done…and it had a build error. 😅
What’s happening here, I think, will quickly be a major shift in the industry and it’s one that I’m excited about: software development will become increasingly decentralized and personal. For ages, we’ve all had incredibly powerful computers (and phones!) accessible to us on a daily basis. Yet, our dominant computing experiences have become increasingly centralized. Just think how much of our digital lives runs in the cloud rather than on our personal machines. Now, don’t get me wrong, I love the web. But what I love about it is how it is open and accessible to everyone as both a reader and a writer. However, today the web is dominated by a handful of companies (Google, namely). And it’s the same situation for our personal devices: Apple, Google, and Microsoft retain so much power over what’s allowed to run on their operating systems.
But, I think there’s a path into the future where LLMs help chip away at the centralization we are currently experiencing. Apple and others will lose their positions as gatekeepers and toll extractors if people with no development skills are able to use LLMs to build whatever idea they have into functioning software that runs on their personal devices. I think the trick here is to ensure we don’t just replace Apple, Google, and Microsoft with OpenAI, Anthropic, and, well, Google again.
To that end, it’s become a hobby of mine to experiment with open source LLMs and the ecosystem around them on my Framework Desktop, which can easily run large models on its shared-memory architecture. I hope to write here in more detail about what I’ve been experimenting with. But for now, here are a few pointers if you are interested in this space:
- clawd.bot: Think open source Siri, with far more capabilities and integrations. I’ve only just started playing with this, but it’s super interesting.
- opencode: A Claude Code-like CLI tool that can use LLMs running in the cloud as well as models you run locally, which is how I’ve been using it.
-
Currently reading: Hole in the Sky by Daniel H. Wilson 📚
-
Finished reading: Moby-Dick by Herman Melville 📚
Epic! I read this with a book club, and I’m very glad it was the club’s pick because I’m not sure I would have read it otherwise. Moby-Dick was very different from what I expected: more enjoyable and more relevant.
-
pluralistic.net/2026/01/01/39c3
Which is why I’ve come to Hamburg today. Because, after decades of throwing myself against a locked door, the door that leads to a new, good internet, one that delivers both the technological self-determination of the old, good internet, and the ease of use of Web 2.0 that let our normie friends join the party, that door has been unlocked.
I’m a fan of Cory Doctorow and the EFF; I think they argue for the right side of most issues. If you care about open technology (and I’d argue everyone should), this post is worth a read. It describes an opportunity for the world to loosen the grip that American tech companies have over technology and the way it’s used.
I’ve been aware of America’s DMCA law, and its flaws, since the law’s inception. What I didn’t realize is that America has coerced most of the world into passing similar laws locally. Give this post a read for why that matters and why there’s an opportunity now to unwind the effects of these laws.
-
www.joanwestenberg.com/the-case-for-blogging-in-the-ruins
When people talk about the Enlightenment as if it were an intellectual garden party where everyone sipped wine and agreed about reason, they’re missing the part where producing and distributing ideas was (in fact) dangerous and thankless work.
Diderot’s project was fundamentally about building infrastructure for thinking. He wanted to create a shared repository of human knowledge that anyone could access, organized in a way that invited exploration and cross-referencing. He believed that structuring information properly could change how people thought.
He was right.
Couldn’t agree more with Joan’s whole post. Start a blog! Joan links to some good platforms.
Or at least give RSS readers another try!
-
Year in books for 2025
-
Finished reading: Neuromancer by William Gibson 📚
The cyberpunk classic. I’ve read it before, but it’s been a long time and I had forgotten the plot. It’s such a great book, and it’s incredible that it defined a genre.
-
jwz: The original Mozilla “Dinosaur” logo artwork
It has come to my attention that the artwork for the original mozilla.org “dinosaur” logo is not widely available online. So, here it is.
-
There’s been a flurry of discussion on Hacker News and other tech forums about what killed Perl. I wrote a lot of Perl in the mid-90s and subsequently worked on some of the most trafficked sites on the web in mod_perl in the early 2000s, so I have some thoughts. My take: it was mostly baked into the culture. Perl grew up in a reactionary community with conservative values, which prevented it from evolving into a mature general-purpose language ecosystem. Everything else filled the gap.
-
Hank Green And The Fantastical Tales of God AIs
Savannah, Georgia—In the old lacquered coffee shop on the corner of Chippewa Square, I eat a blueberry scone the size of a young child’s head and sip cold black coffee while staring incredulously at my phone. I’m watching Hank Green interview Nate Soares, co-author of the new book If Anyone Builds It, Everyone Dies, and I am in utter disbelief at the conversation taking place before my eyes. Hank Green, the internet’s favorite rational science nerd, does not appear to be approaching this interview with any critical lens at all. Instead, he seems to be outright gushing over Soares, an AI-doomerist who’s made it impossible to know where his message ends and big tech’s lobbying begins. Let me explain…
Pretty good dive into part of why I don’t trust the companies producing the frontier AI models.
I also don’t buy AI doomerism. I agree it’s FUD and a regulatory capture tactic that also distracts from the real issues associated with AI.
There’s no putting LLMs back in the bag. So rather than anoint a few companies as monopolies over the technology, I’d really rather see a continued open source ecosystem around it, which, I think, will help us find ways to apply this technology for the benefit of individuals rather than keeping all the wealth and power concentrated in a few companies.
-
Finished reading: Sky Daddy by Kate Folk 📚
This book is funny and wild! It starts with an epigraph from Moby-Dick, which is fitting since I’ve been reading that with my book club.
-
Was not expecting to find some hope and inspiration on the topics of climate change and politics from Al Gore on the Zero podcast.
-
Some photos from the nearby Doll’s Head Trail: