Mindset Shift: How AI is Bringing Services Back In-House


With the rise of AI and intelligent agents, I foresee that, in the near future, parts of the SaaS stack will be brought back in-house, mainly for cost-saving reasons. A single engineer who is knowledgeable in multiple areas (though not an expert in any one of them) might be able to develop and maintain a service from A to Z. This approach could mean reduced reliability, but significant cost savings.

Introduction

I have been pro-cloud for most of my career. When it came to the buy-versus-run-in-house decision, I almost always landed on the same side: buy.

Whether it was a database, a caching service, WebSockets, or an observability stack.

But this has changed with the rise of AI. It's not that AI provides more knowledge or resources on how to run services in-house; it's more the peace of mind, a "safety net" if you will. A mate willing to lend a hand when you are stuck or start doubting yourself. You always have AI to ask for advice when you hit an error.

I experienced this first-hand when I moved my "home lab" from Google Cloud to an Ubuntu server hosted at home, running a lightweight Kubernetes distribution that I manage myself. It was fun, and I felt more confident making changes.

By doing this, I will break even on my home server investment through the costs I no longer pay for my personal Google Cloud project lab.


This is especially true when open source software is good enough for what a project needs, compared to buying a solution.

Most of the time, the question to settle with the team is: "Who will have to support it?"

Pre-AI Era

Early Stage Startups

New startups typically lacked the technical depth and operational experience to run complex infrastructure. Cloud or managed services provided fast, reliable solutions without needing to invest in building and maintaining systems internally. The risk of downtime or misconfiguration was high without expert, hands‑on troubleshooting—which wasn’t as readily accessible.

Growth Stage / Mid‑Size Companies

At this stage, companies took a hybrid approach. As they grew and built up internal expertise, non-mission-critical services (like caching, observability, or even some database systems) might be moved in-house for cost savings; they would usually hire dedicated expertise at this point. However, many organizations still preferred to "buy" these services to avoid the provisioning overhead, the maintenance burden, and the risk of operational errors, because their applications were developed on the assumption that the underlying services would always be available. That assumption complicates a move in-house and can make it financially unattractive.

Large Enterprises

For core, mission‑critical applications where control, security, and customizability are crucial, enterprises often invested in building their own systems. Non-critical or commodity services were usually outsourced or run in the cloud to minimize risk and leverage external expertise.


AI‑Enhanced Era

Early Stage Startups

This is where I think the focus is shifting, or will shift: primarily buy, with emerging in-house options. Startups continue to rely on cloud services for mission-critical needs, given the rapid time-to-market advantages. However, with AI tools available (like ChatGPT for troubleshooting), even lean teams can consider experimenting with building simple, non-critical components in-house to reduce costs. Even open source AI models are good enough to handle many troubleshooting tasks, especially on Apple M-series chips, since NVIDIA GPU prices have skyrocketed. You can run AI locally using LM Studio. But I digress.
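As a concrete sketch of that "safety net", here is how a few lines of Python could feed an error log to a locally hosted model. This assumes LM Studio's default local server address and its OpenAI-compatible chat endpoint; the model name, system prompt, and error message are placeholders, not a prescribed setup.

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API,
# by default at http://localhost:1234/v1.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_troubleshooting_prompt(error_log: str) -> dict:
    """Build an OpenAI-style chat payload asking a local model for help."""
    return {
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [
            {
                "role": "system",
                "content": "You are an SRE assistant. Suggest likely causes and fixes.",
            },
            {"role": "user", "content": f"My service failed with:\n{error_log}"},
        ],
        "temperature": 0.2,  # keep troubleshooting answers focused
    }


def ask_local_model(error_log: str) -> str:
    """POST the payload to the local server and return the model's reply."""
    payload = json.dumps(build_troubleshooting_prompt(error_log)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a model loaded in LM Studio, `ask_local_model("redis-server: Can't save in background: fork: Cannot allocate memory")` would return a troubleshooting suggestion without any data leaving your machine.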

Growth Stage / Scaling Companies


As companies mature and gain both technical expertise and a better understanding of their non-core service needs, they can gradually shift non-mission-critical services (such as observability tools or simple caching solutions) in-house. AI-powered assistance provides a "safety net": developers can quickly get advice and resolve errors, lowering the barrier to maintaining internal systems and thus saving cash. At this point, the engineering team has clear areas of responsibility, with one or two developers overseeing the entire infrastructure.

Mature Enterprise


Large companies with robust DevOps practices can afford to run more services internally, even for non-critical functions. With AI tools integrated into their support workflows, these organizations experience lower downtime and quicker problem resolution, enhancing control and reducing operational expenses. The decision is still contextual: critical services often remain on trusted cloud platforms, while non-essential services might be re-architected in-house to optimize cost and flexibility.


Pricing Comparison: Managed Redis on GCP vs. Running on a GCP VM

For this example, assume the project runs in the europe-west3 region and targets non–mission-critical services.

Requirements:

  • Redis Memory Size: 100 GB

Instance Costs:

  • Managed Redis (Memorystore for Redis), Basic tier: approximately $1,700 per month
  • GCP Compute Engine VM: a 16-vCPU, 128-GB RAM instance costs roughly $680 per month

This shows about a 2.5x cost difference in favor of running Redis on a VM.
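A quick back-of-the-envelope check of that ratio, using the figures quoted above (prices drift over time and vary by region, so treat them as a snapshot):

```python
# Rough monthly cost comparison for the europe-west3 figures quoted above.
managed_redis = 1700.0   # Memorystore Basic, 100 GB, $/month
vm_self_hosted = 680.0   # 16 vCPU / 128 GB RAM VM, $/month

ratio = managed_redis / vm_self_hosted
monthly_savings = managed_redis - vm_self_hosted

print(f"cost ratio: {ratio:.1f}x, monthly savings: ${monthly_savings:,.0f}")
# -> cost ratio: 2.5x, monthly savings: $1,020
```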

Additional Considerations (Risks & Overheads):

  • Initial Setup Overhead:
    There are various methods for installing Redis (e.g., Docker or using a Kubernetes Helm chart). The best approach depends on your performance needs vs ability to maintain the solution.
  • Scalability Concerns:
    Running Redis on a VM carries the risk of hitting memory limits, which could lead to performance bottlenecks, especially if frequent resizing is required.
  • Monitoring & Troubleshooting Complexity:
    While managed services include built-in monitoring and alerting, using a VM means you must set these up yourself. (On the plus side, Google Cloud Platform has a great monitoring solution.)

Cost vs. Complexity Trade-Off:

  • If your usage is “set-and-forget” – where Redis runs steadily without needing frequent adjustments – the 2.5x savings may be well worth the trade-offs.
  • If your workload requires constant resizing or frequent manual adjustments, the benefits of a managed service could outweigh the cost savings.

Additionally, running Redis on a 128-GB instance might leave you with around 10 GB of extra memory to run other processes if needed.
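Where does that ~10 GB come from? A rough budget might look like the sketch below. The overhead percentages are assumptions, not measurements: Redis generally needs headroom above `maxmemory` for fragmentation, client buffers, and copy-on-write during background saves, and the OS needs a slice too.

```python
total_ram_gb = 128
redis_maxmemory_gb = 100

# Assumed overheads -- tune these to your own workload:
redis_overhead_gb = total_ram_gb * 0.12  # fragmentation, buffers, fork copy-on-write
os_reserve_gb = 3                        # kernel, sshd, monitoring agents

free_gb = total_ram_gb - redis_maxmemory_gb - redis_overhead_gb - os_reserve_gb
print(f"~{free_gb:.0f} GB left for other processes")
# -> ~10 GB left for other processes
```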

PS: factor the time spent on maintenance into the cost if you take a service in-house. It is kind of obvious, but I still wanted to bring it up.
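To make that concrete: the savings buy you a fixed number of engineering hours per month. The hourly rate below is a made-up example; substitute your own loaded cost.

```python
# How many maintenance hours per month would erase the savings?
monthly_savings = 1700 - 680   # $/month, from the comparison above
engineer_hourly_rate = 100     # $/hour, hypothetical loaded cost

break_even_hours = monthly_savings / engineer_hourly_rate
print(f"self-hosting stops paying off above ~{break_even_hours:.1f} hours/month")
# -> self-hosting stops paying off above ~10.2 hours/month
```

If keeping the VM-based Redis healthy costs more than roughly a day of engineering time per month, the managed service may be the cheaper option after all.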

Conclusion

As AI and intelligent agents continue to mature, we may soon see a shift where in-house solutions become a viable option for parts of SaaS. This evolution challenges the traditional reliance on cloud platforms, inviting us to weigh cost savings against operational complexity.