This week, I am prepping to give a talk at the US-RSE Monthly Community Call, and in preparation I have started to write down my workflow for when I write software and the cases where LLMs are beneficial. I want to caveat this with one big asterisk: LLMs can be helpful, but there are legitimate concerns about their use. My biggest learning in all of this: never blindly trust the output of an AI, or you are setting yourself up for failure.
My editor and model choice
I was a long-time Neovim user, but I have moved to using Zed almost full time. I find Zed to be the perfect mix of speed, features, and agentic editing. Zed gives me lots of ways to interact with agents and code without forcing me to pick one way to achieve a task. Additionally, with its integration of MCP servers, I am able to leverage a lot of the tooling I used to use with Claude Desktop.
I also use Claude Desktop for working through and designing tasks that I want to complete. I find it more helpful when I want to focus on a specific task and explore potential considerations, and it makes a very helpful rubber duck when solo coding. These days, I am using Claude 4 both on the desktop and in Zed via GitHub Copilot Chat. I find Claude 4 to be more than enough for my coding tasks, and it causes very little friction. In some of my side agent work, I tend to use some of OpenAI’s cheaper models like o3-mini or gpt-4o-mini to handle the short data extraction tasks I use agent workflows for (more details in a later post).
Tools used
I find that using any LLM for programming tasks requires context. In my own testing, LLMs work better when they have enough tools and information about the task they need to complete. I like to think of using an LLM as talking to a new team member: they might have the capability to solve the problem, but they will need the why and the how to actually do it.
Zed includes some filesystem tools for manipulating files and using the CLI, which makes working with files in my editor a breeze. Additionally, I have started to use Context7 and its accompanying MCP server to link to docs that exist in its database. If your docs aren’t in their database, no worries: they have a pretty easy tool for adding them (I did this with a project I developed, where it parsed our LLM-accessible docs). I have largely been using Context7 to work with documentation on web dev projects with robust docs (particularly Svelte, Astro, and Deno).
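Conceptually, a docs-lookup tool like this boils down to a function the model can call with a library name and a query, getting relevant documentation back as context. Here is a toy, stdlib-only sketch of that idea (this is not Context7's actual interface; the database contents and word-overlap scoring are invented for illustration):

```python
# Toy sketch of a docs-lookup tool (NOT Context7's real API):
# the model calls a function with a library and a query, and gets
# relevant doc text back to use as context.
DOCS_DB = {
    "svelte": {
        "reactivity": "Svelte 5 declares reactive state with the $state rune.",
        "props": "Component props are declared with the $props rune.",
    },
}

def lookup_docs(library: str, query: str) -> str:
    """Return the indexed snippet that best matches the query (toy scoring)."""
    topics = DOCS_DB.get(library.lower())
    if not topics:
        return f"No docs indexed for {library!r}."
    query_words = set(query.lower().split())

    def score(topic: str) -> int:
        # Overlap between the query's words and the snippet's words.
        snippet_words = set(topics[topic].lower().split()) | {topic}
        return len(query_words & snippet_words)

    best = max(topics, key=score)
    return topics[best]

print(lookup_docs("svelte", "how do I declare reactive state"))
```

A real MCP server does the same thing over a protocol: it advertises the tool, the model decides when to call it, and the returned text lands in the conversation as context.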
My other guilty pleasure, which I use in both Zed and Claude Desktop, is the Linear MCP Server. At work, we use Linear for task management, and the Linear MCP Server helps me generate new tasks that I might need and file them correctly. Furthermore, it lets me quickly close tasks related to a coding session from the chat interface I am already in.
The actual workflow
Generally, a coding session starts with me reading my task list and determining what I need to work on. I start on the task and write most of it myself, which lets me sketch my idea out roughly. Then I will use an LLM to fill in the areas that are missing detail or to help me refactor my code toward better practices.
If I use an LLM for a more “vibe coding” case, I create a new branch and have the LLM generate a list of tasks first. When it completes a task, I have it create a git commit so that I can revert changes if necessary. I also do this for documentation generation. I find writing READMEs can be very tedious work, so I will have an LLM parse my codebase and draft a README structure that I can review and edit as needed.
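The checkpointing loop above amounts to a few git commands. Here is a sketch you can try safely in a throwaway repo (branch name, file, and commit messages are invented examples, not from a real session):

```shell
# Demo in a throwaway repo so this is safe to run anywhere;
# branch name and commit messages are invented examples.
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"   # identity just for the demo commits
git config user.name "demo"
git commit -q --allow-empty -m "init"

git switch -c vibe/readme-refresh          # isolate the session on its own branch

# ...the agent completes task 1 (simulated here by writing a file)...
echo "# My Project" > README.md
git add -A
git commit -q -m "agent: task 1 - scaffold README"

# If a task went sideways, roll back to the last good checkpoint:
git revert --no-edit HEAD                  # or: git reset --hard HEAD~1
```

One commit per completed task keeps the history granular enough that a bad step costs one revert, not a whole session.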
It is important to note that my workflow is not reliant on using LLMs to write code. Most of my code is still written by me and validated using existing tools like compilers or type-checkers. In fact, I find that LLMs are really good at leveraging these tools to help inform if an edit is correct or needs work.
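That feedback loop can be sketched in a few lines. Here I use Python's built-in compile() as the cheapest possible validator; a real agent loop would shell out to mypy, tsc, or cargo check in the same way, feeding the diagnostics back to the model (the function and example snippets are mine, not from any particular tool):

```python
# Sketch of "let the tooling judge the edit": before accepting code an
# LLM proposes, run it through a compiler check and feed back any errors.

def edit_is_valid(source: str, filename: str = "<llm-edit>") -> tuple[bool, str]:
    """Compile the proposed source; return (ok, diagnostic message)."""
    try:
        compile(source, filename, "exec")
        return True, "ok"
    except SyntaxError as err:
        # The diagnostic goes straight back to the model as feedback.
        return False, f"line {err.lineno}: {err.msg}"

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"   # missing colon

print(edit_is_valid(good))   # (True, 'ok')
print(edit_is_valid(bad))
```

The same pattern generalizes: any tool that returns a pass/fail plus a message (type-checker, linter, test suite) can sit in that slot.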
What I want to try next
I mentioned earlier that context is king, but I shouldn’t have to restate that context every time. I have started to investigate the use of rules files to capture best practices. However, I have found this difficult because I often work across multiple languages, and best practices differ from language to language.
One thing I am currently working on is domain-specific context. Working in computational chemistry, we have a lot of great tools, but they aren’t as common as something like React. Thus, I plan on building my own private RAG pipeline for computational chemistry tools, using the Crawl4AI RAG MCP Server to help me build a workflow specific to the tools in this space.
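The retrieval step of such a pipeline can be sketched with nothing but the standard library. This is not the Crawl4AI server's API, just the core idea: crawl docs into chunks, embed them, and return the nearest chunks for a query. Bag-of-words vectors stand in for real learned embeddings, and the doc chunks are invented for the example:

```python
# Toy RAG retrieval: embed doc chunks, rank them by cosine similarity
# to the query, return the top k as context for the model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Chunks as a crawler might produce from computational-chemistry docs
# (contents are invented for the example).
chunks = [
    "psi4 computes single point energies with the energy function",
    "geometry optimization is run with the optimize function",
    "rdkit converts smiles strings into molecule objects",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("how do I optimize a geometry"))
# ['geometry optimization is run with the optimize function']
```

A real pipeline swaps in learned embeddings and a vector store, but the shape stays the same, which is why a private, domain-specific index is mostly a crawling and chunking problem.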
Additionally, I plan on exploring the use of Taskmaster to handle project-scale task management in my codebases. I also intend to start using a memory system to share memory between Zed and Claude Desktop, enabling a more cohesive flow between the tools I use.
Conclusion
Overall, I have found LLMs to be really useful for iterating on the tasks I often procrastinate on. Things like task and document generation have improved dramatically. Additionally, I can leverage LLMs for more cohesive and actionable ideation by quickly prototyping the ideas I have. However, there have been quite a few times where blindly trusting AI set me back days, a mistake I will not forget. At the end of the day, these are powerful tools, but we need to be careful in using them to ensure the long-term sustainability of the things we build. As always, if you want to comment, join me over on Bluesky!