How to Build Internal Software in 2026

The agentic era allows us to make software fast and cheap - it's not just SaaS companies that should be taking advantage of it. Internal software has always been about solving a hyper-specific problem; the core issue is that it was usually prohibitively expensive and had to provide a huge amount of value in order to justify building and supporting it. I tend to think that a really targeted solution beats a general solution any day of the week, so if the cost of creating that software is close enough to zero, why buy a license from a vendor that only solves 80 or 90% of the problem? Software companies themselves are currently ditching vendors in droves and preferring to build in-house solutions. As that trend accelerates and as non-software organizations gain access to the right tools, it's a reasonable bet that the future of software lies in hyper-specific internal software.

So with that in mind, here's my top 10 for making internal software. We'll start off with what is probably a hard one if you aren't an engineer - tooling.

1. Tools: Use a coding agent, git worktrees, and a decent terminal

The real concept here is to have a cohesive setup. If you are a software engineer, you probably have strong opinions and an existing setup. If you are reading this in 2027, this is probably outdated, but all you should take away is that you need a development environment that's ergonomic, doesn't get in your way, and that you can develop muscle memory for. The rest of the points in this blog are going to be more about architecture, identity, and workflows, so if you have a setup already, skip to #2.

Anyways, back to tooling.

I use Claude Code; it's fairly cheap for what you get out of it. Cursor is pretty good and the agent is built into a VS Code clone, although TBH you probably aren't gonna look at the code much. There's a bunch of open source agents that let you plug in whatever provider you want - the Vibe CLI from Mistral is great for this; I use it to wrap ollama for doing work with local models.

Git worktrees make working with agents a ton nicer, and there's lots of tools out there for making them ergonomic. The basic premise is to create a repository and have an agent scaffold your app out, then create a worktree for each feature you want to add, run an agent in each worktree, and merge them back in as agents complete. This workflow implies that you need git - and you do: it's what engineers use for version control, it enables multiple people (or clankers) to work on the same codebase, and it's also how we share code. Apps like Emdash do a great job of abstracting the technical bits away.
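If you'd rather see the worktree-per-feature flow as code, here's a minimal sketch that wraps the git commands with Python's `subprocess`. The branch/path naming is just one convention I made up; any scheme works.

```python
import subprocess
from pathlib import Path

def run(args, cwd):
    # Run a git command in the given directory, raising if it fails.
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

def add_feature_worktree(repo: Path, feature: str) -> Path:
    """Create a sibling directory <repo>-<feature> as a worktree on a new branch."""
    tree = repo.parent / f"{repo.name}-{feature}"
    run(["git", "worktree", "add", "-b", feature, str(tree)], cwd=repo)
    return tree  # point a coding agent at this directory

def merge_feature(repo: Path, feature: str) -> None:
    """Merge the finished branch back into the main checkout and clean up."""
    tree = repo.parent / f"{repo.name}-{feature}"
    run(["git", "merge", feature], cwd=repo)
    run(["git", "worktree", "remove", str(tree)], cwd=repo)
    run(["git", "branch", "-d", feature], cwd=repo)
```

Each agent gets its own directory and branch, so they can't step on each other's files; you merge back in whatever order they finish.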

The terminal is something you should be comfortable working in. Software tooling is usually native to the command line (e.g. a CLI, command line interface), so you will be lost if you can't work in your terminal. Warp is a great one - they build an agent right into it so you can always just ask it for the command you need. It also has nifty autocomplete.

2. Multi-tenancy is hard, avoid it if possible

Multi-tenant software is predominant in SaaS because the entire idea of SaaS is to spread the cost of the application over as many customers as possible. If your app is multi-tenant, there is a whole list of complications that come up:

  • logical separation of data (e.g. customer A can't see customer B's data)
  • user data structures go a level deeper - users and accounts
  • authentication and authorization (identity) needs to be fancier
  • database structures contain more relationships
  • etc

So, you know, just avoid it. Internal software is kinda magical these days; it's cheap to make, and once you strip away the complexity of commercial SaaS, it's relatively easy to create something that solves your problems. As far as identity goes, you probably already have some way of verifying the user. Company email works great - just send a magic link with a token in it. Maybe your company has an internal intranet: pop the app in there and you know the user was already authorized. If your company uses Google Workspace, just slap a Google login in front of it.

3. Modular Monoliths

We call an app a monolith if a single program runs everything your software does. The opposite is microservices, where we break tasks out into dedicated programs for things like authentication or event publishing. The hard part of building your app out of microservices is sharing state - that is, keeping all the different parts of your app on the same page with respect to what users are doing on it.

I recommend something in between these architectures - small but complete applications. Coding agents will have an easier time creating and updating a small application, and monoliths are way easier to host - it's just one app, after all.
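The "modular" part just means each feature lives in its own self-contained chunk of code while everything still runs in one process. A toy sketch (the `orders`/`reports` modules are invented for illustration - in a real app they'd be separate files registering into one dispatch table):

```python
from typing import Callable, Dict

# The "app" is one process with a single dispatch table.
routes: Dict[str, Callable[..., object]] = {}

def route(path: str):
    """Decorator each module uses to register its handlers."""
    def register(fn):
        routes[path] = fn
        return fn
    return register

# --- orders module (would be orders.py) ----------------------------
@route("/orders/add")
def add_order(orders: list, item: str) -> list:
    orders.append(item)
    return orders

# --- reports module (would be reports.py) --------------------------
@route("/reports/count")
def order_count(orders: list) -> int:
    return len(orders)
```

Modules stay independent enough that an agent can rewrite one without touching the others, but there's still only one program to deploy.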

4. Use a lightweight data layer

There is probably no need for a Postgres cluster. You probably don't need Redis. High availability? Doubt that too.

Most production-level, commercial software products are distributed systems - multiple physical machines are involved in delivering different aspects of the software, the services themselves likely all have multiple copies running at once (hundreds or thousands of copies even), and there are usually several pieces of software making up one "app". This complexity introduces several things that make your life harder than it needs to be, and one of those is how to share state across everything. One way is to use a database and let everything connect to it and query for the application state. Another method is called "consensus", where each copy of a service communicates state with the others. Both are probably more complicated than what your four-person team at work needs to generate reports on weekly orders for pizza boxes.

Design the app so that it only ever expects one copy of the application running at any given time - keep your data in memory and flush it to disk in a JSON file or a CSV every now and then. Back up to S3 or similar if it's really important.
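The whole data layer can be this small. One thing worth doing even at this scale is an atomic write (write a temp file, then rename) so a crash mid-flush never leaves you with a half-written file - a sketch:

```python
import json, os, tempfile
from pathlib import Path

class TinyStore:
    """In-memory state, flushed to a JSON file with an atomic rename."""

    def __init__(self, path: str):
        self.path = Path(path)
        # Reload whatever the last flush left behind, if anything.
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, key, value):
        self.data[key] = value

    def flush(self):
        # Write to a temp file in the same directory, then rename over the
        # target: readers never observe a partially written file.
        fd, tmp = tempfile.mkstemp(dir=self.path.parent, suffix=".tmp")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.path)
```

Call `flush()` on a timer or after writes; the resulting JSON file is also trivially easy to back up to S3.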

5. Use a Sandbox

Get ready for a shameless plug (Heyo Computer sells sandboxes, although you can create and run them on your own hardware for free).

In all seriousness however, you want a sandbox for a couple of reasons and uses. First off, it's convenient. Very likely during the development of an app, you will be installing software and tooling - either as dependencies of what you're building or just as helpers. Polluting your own machine has downsides, and if you have multiple projects going at once, you need a way to separate concerns. Sandboxes come in many different forms but are essentially just an isolation system for your project. A sandbox makes development easier, but choosing the right kind of sandbox also makes deployment and sharing your app easier. In a perfect world, when you need to share your app with someone else, you can just move the sandbox to the internet or send the entire thing to your coworker.

Another use of a sandbox: making an agent. If you are building an agent to do some hyper specific task for you, you will want an isolation layer for the agent. Typically, an "agent" is a loop of LLM inference calls and tool calling; the LLM requests a tool call by responding in a specific format and we use traditional software to run the tool (which is just another piece of code). A lot of agents complete tasks by writing scripts and then calling the script. The sandbox isolates the LLM generated code when it runs, protecting you from side effects.
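To make the "agent writes a script, we run it in isolation" step concrete, here's the shape of it. To be clear, this is not a real security boundary - a proper sandbox is a container, VM, or separate machine - it just shows the moving parts: a scratch directory, a timeout, and captured output.

```python
import subprocess, sys, tempfile
from pathlib import Path

def run_generated_script(code: str, timeout: int = 10) -> str:
    """Run LLM-generated Python in a throwaway directory and return stdout."""
    with tempfile.TemporaryDirectory() as scratch:
        script = Path(scratch) / "task.py"
        script.write_text(code)
        result = subprocess.run(
            [sys.executable, str(script)],
            cwd=scratch,                 # file side effects land in the scratch dir
            capture_output=True, text=True,
            timeout=timeout,             # runaway scripts get killed
        )
        return result.stdout
```

A real sandbox swaps the `subprocess.run` call for "run this inside the container/VM", but the loop around it - write the script, execute, collect output, feed it back to the model - stays the same.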

6. Think about how much you trust your users

Is your only user you? Great, don't bother building RBAC (role-based access control, a very enterprisey feature). Is it machine to machine and you control both machines? Great, use bearer tokens. The point is really: get away with the least complicated identity system you can, ideally relying on another, more authoritative system.

If you are distributing on the web or over HTTP, the simplest form of authorization can be done at the header level on requests; it's so easy that it would be weird if you did nothing else.
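Here's about the simplest version of that header-level check - a shared bearer token, compared in constant time. The token value is a placeholder; in practice you'd generate a long random string and hand it out via a password manager:

```python
import hmac

EXPECTED_TOKEN = "a-long-random-string"  # assumption: shared out of band

def authorized(headers: dict) -> bool:
    """Check an `Authorization: Bearer <token>` header on an incoming request."""
    value = headers.get("Authorization", "")
    if not value.startswith("Bearer "):
        return False
    token = value[len("Bearer "):]
    # compare_digest does a constant-time comparison, so an attacker can't
    # guess the token byte-by-byte from response timing
    return hmac.compare_digest(token, EXPECTED_TOKEN)
```

Wire this in as the first thing every request handler does and you've cleared the "weird if you did nothing" bar.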

7. Plan Ahead for Distribution

Who needs to access the app and where do they need to do it from? Will everyone using the app have the source code?

Not everything needs to be distributed over the public internet. Distributing securely on the web and doing so on an internal intranet are different ballgames - having identity built into your distribution mechanism reduces your surface area. A private git repo and a shared secret in a password manager can go a long way.

Desktop apps generally require signing and other privacy measures if you want to distribute widely, and have a far different distribution mechanism than web apps - notably, a desktop or CLI user gets a choice about whether and when to update.

Web apps have the disadvantage of being, well, web apps - having public entrypoints to your software introduces its own class of problems. At a minimum you now need to have and configure a web server, web application firewall, and internet gateway. A lot of platforms will do this for you but you'll usually want to control the DNS as well.

8. Make a Pipeline

Regardless of where you distribute your app, a build and deploy pipeline will save both time and tears. That is, these steps should be automated: you push a change, the change makes it to the users. Deploys can be manually gated (e.g. someone has to click a button or run a command to kick one off), but builds shouldn't be, especially if you have multiple contributors - breaking the build sucks for everyone, and the person who broke it should be informed as soon as possible, since they are also the most likely person to fix it. Automated pipelines are also nice for any alerting you might want to integrate for important services - "this deployed successfully" can stay in your build system, but "failed to deploy" and "health check failed" are really helpful to know about when they happen.

9. At the minimum, have an agent investigate security

The same advantages that make creating software fast and cheap in the AI era also make it fast and cheap for bad actors to create and take advantage of vulnerabilities.

Access control is likely the most common security failure out there. Coding agents are great at writing tests, so have yours make a bunch of end-to-end tests (tests that work through an entire feature or code path) that attempt to compromise access control.

Then, inspect your supply chain - ideally, every time you add a package or upgrade a dependency, something should do a sweep of your dependencies. Socket.dev is great for the JavaScript ecosystem.

10. Build for updates

There's a couple of themes to tie up here - distribution and identity. Identity governs who you can trust and how well you can trust them while distribution is kind of the mechanics of getting it to a user - build it, copy the artifacts somewhere, make a container image, deploy it to production, migrate data, etc.

Traditional software goes through a review and approval process, what compliance frameworks call "change management". Reviewing code written by the clankers can be onerous, so some teams elect to have another clanker review a pull request (a bunch of related changes that we want to merge into the working branch of source control), but be warned: clankers cannot be held accountable - only you can.

After reviews and approval, you need a path to update your project. If you distribute the app via source code and its repository, you are done. Otherwise, you will need to push an update through to wherever the app is running.

One aspect to be careful with is the data layer. It is very common to have to change the shape of the data you are storing (e.g. adding a column to a database); it is also very common for these data migrations to cause incidents, and changes can be difficult to roll back - when something goes wrong, you can easily find yourself restoring from a backup (if it exists!) or manually doing surgery on your data. Neither is much fun. If you have a change to your data shape to roll out, test it locally on data that looks like "production" (in a compliance-constrained environment, pulling production data onto a local machine for testing is typically a no-no). If you use a traditional database and the data is critical, you should be taking backups anyway and ensuring that you know how the write-ahead log works. Take the 2 minutes to ensure that you can recover before pushing out a data change.
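If your data layer is a JSON file like the one suggested earlier, "take a backup before migrating" can be literal and cheap. A sketch, where `add_status` is a made-up example of an "add a column" style migration:

```python
import json, shutil, time
from pathlib import Path

def migrate(path: str, migration) -> None:
    """Back up the data file, then apply `migration` to the parsed JSON."""
    src = Path(path)
    backup = src.with_suffix(f".{int(time.time())}.bak")
    shutil.copy2(src, backup)  # cheap insurance before touching the data
    data = json.loads(src.read_text())
    src.write_text(json.dumps(migration(data)))

# Example migration: give every record a status field with a default
def add_status(data):
    for row in data:
        row.setdefault("status", "pending")
    return data
```

If the new shape breaks the app, recovery is copying the `.bak` file back - the two-minute "can I recover?" check from above, done by construction.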

During deployments, it's likely that services will go down - you can architect for high availability but that adds a fair amount of complexity for internal software and you should be pragmatic about what you actually need. Give your users a heads up.

Finally, have a way to verify that a fresh update is working as intended. The simplest form of this is opening the app and running through the core features. You can add simple automation scripts or something like Playwright to make things more consistent (always a good idea), and basic health checks (a ping test) and alerts go a long way.