That simple question kicked off one of my most rewarding experiments in Vibe Coding and Agentic AI — powered entirely by Ollama running locally on my machine.
Motivation: Coding by Vibe, not by Ticket
Lately, I’ve been inspired by the idea of "Vibe Coding" — a freeform, creative development style where we start with a concept or feeling and let the code evolve organically, often in partnership with an AI assistant. It’s not about Jira tickets or rigid specs; it’s about prototyping fast and iterating naturally.
My goal was to build a movie recommendation app where users enter a movie title and get back a vibe-based summary and some thoughtful movie suggestions — not just by keyword match, but by understanding why someone liked the original movie.
Stage 1: The Big Idea
I started with a prompt:
"Take a movie name from the user, determine its vibe using its genre, plot, and characters, and recommend similar movies."
The app needed to:
- Fetch movie metadata from the OMDb API
- Use a local LLM (via Ollama) to generate a vibe summary and similar movie suggestions
- Serve results via a clean JSON API
We scaffolded a Spring Boot project, created REST controllers and services, and started building out the logic to integrate with both the OMDb API and the locally running Ollama LLM.
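To make that concrete, here's a minimal sketch of the kind of endpoint we scaffolded. The class name, route, and response shape are illustrative assumptions, not the exact code in the repo:

```java
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller: in the real app, a service behind this fetches
// OMDb metadata and asks the local LLM for a vibe summary.
@RestController
public class MovieVibeController {

    // Response shape mirrors the vibe-summary-plus-recommendations idea.
    public record VibeResponse(String title, String vibe, List<String> recommendations) {}

    @GetMapping("/api/recommendations")
    public VibeResponse recommend(@RequestParam String title) {
        // Stubbed here so the sketch stands alone; the real logic delegates
        // to services that call OMDb and Ollama.
        return new VibeResponse(title, "placeholder vibe", List.of());
    }
}
```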
Stage 2: Engineering the Integration
Things were going smoothly until they weren’t. 😅
Compilation Errors
When we added the OmdbMovieResponse model, our service layer suddenly couldn't find the getTitle(), getPlot(), and related methods, even though they clearly existed. The culprit? Missing getters (at least, that's what we thought at the time...).
We tried:
- Manually writing getters ✅
- Using Lombok's @Getter annotation ✅
- Cleaning and rebuilding Maven ✅
Still, values were null at runtime.
The Root Cause
Turns out the problem was with URL encoding of the title parameter. Movie titles with spaces (like The Matrix) weren’t properly encoded, which broke the API call. Once we fixed that, everything clicked into place. 🎯
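For reference, the fix amounts to percent-encoding the title before it goes into the query string. A minimal sketch using only the JDK (the apikey value is a placeholder):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class OmdbUrlExample {
    public static void main(String[] args) {
        String title = "The Matrix";
        // URLEncoder form-encodes spaces as '+', which OMDb accepts;
        // the point is that the raw space never reaches the URL.
        String encoded = URLEncoder.encode(title, StandardCharsets.UTF_8);
        String url = "https://www.omdbapi.com/?t=" + encoded + "&apikey=YOUR_KEY";
        System.out.println(url); // https://www.omdbapi.com/?t=The+Matrix&apikey=YOUR_KEY
    }
}
```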
Note: The AI would never have figured this out by itself. This was my natural instincts kicking in to guide the AI, just as I would direct any other human developer. Also, it has been ages since I worked on a Spring Boot project with Maven; the usual gotchas are still there in 2025 🙄.
Stage 3: Talking to the LLM (via Ollama)
This was where things got really fun.
Instead of relying on cloud APIs like OpenAI, I used Ollama, a local runtime for open-source LLMs. It let me:
- Run a model like LLaMA or Mistral locally
- Avoid API keys and cloud latency
- Iterate on prompts rapidly without rate limits
The app sends movie metadata (genre, plot, characters) to the local LLM with a tailored prompt. The LLM returns:
- A summarized “vibe” of the movie
- A list of recommended films with similar emotional or narrative energy
The results were surprisingly nuanced and human-like.
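For the curious, here's roughly what that call looks like against Ollama's local REST endpoint. This is a standalone sketch with the JDK HTTP client; the model name and prompt wording are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaVibeExample {
    public static void main(String[] args) throws Exception {
        // Illustrative prompt assembled from OMDb metadata (hard-coded here).
        String prompt = "Genre: Action, Sci-Fi. Plot: A hacker learns reality is a simulation. "
                + "Summarize this movie's vibe in one sentence, then suggest five similar films.";
        String body = """
                {"model": "mistral", "prompt": "%s", "stream": false}
                """.formatted(prompt);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate")) // Ollama's default port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // With "stream": false, the generated text arrives in the "response"
        // field of a single JSON object.
        System.out.println(response.body());
    }
}
```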
Tests, Cleanups, and Git Prep
To make the app production-ready:
- We wrote integration tests using MockMvc
- Hid API keys in .env files and excluded them via .gitignore
- Structured the MovieVibeRecommendationResponse as a list of objects, not just strings
- Wrote a solid README.md for onboarding others
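One of those tests, as a hedged sketch; the endpoint path and JSON field names are assumptions for illustration:

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Hypothetical integration test: spins up the Spring context and exercises
// the recommendation endpoint through MockMvc without a real server.
@SpringBootTest
@AutoConfigureMockMvc
class MovieVibeControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void returnsVibeAndRecommendationsForKnownTitle() throws Exception {
        mockMvc.perform(get("/api/recommendations").param("title", "The Matrix"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.vibe").exists())
                .andExpect(jsonPath("$.recommendations").isArray());
    }
}
```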
Going Agentic
With the basic loop working, I asked:
How can this app become Agentic AI?
We designed the logic to act more like an agent than a pipeline:
- It fetches movie metadata
- Synthesizes emotional and narrative themes
- Determines recommendations with intent — not just similarity
This emergent behavior made the experience feel more conversational and human, despite being fully automated and offline.
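Stripped down, that loop looks something like this. The collaborator interfaces and method names are invented for the sketch:

```java
import java.util.List;

// Hedged sketch of the agent-style loop: each step feeds the next, and the
// final recommendations are driven by the synthesized vibe, not raw keywords.
public class MovieVibeAgent {

    interface OmdbClient { String fetchMetadata(String title); }   // assumption
    interface VibeLlm {                                            // assumption
        String synthesizeVibe(String metadata);
        List<String> recommendForVibe(String vibe);
    }

    private final OmdbClient omdb;
    private final VibeLlm llm;

    public MovieVibeAgent(OmdbClient omdb, VibeLlm llm) {
        this.omdb = omdb;
        this.llm = llm;
    }

    public List<String> run(String title) {
        String metadata = omdb.fetchMetadata(title);   // 1. fetch movie metadata
        String vibe = llm.synthesizeVibe(metadata);    // 2. synthesize themes
        return llm.recommendForVibe(vibe);             // 3. recommend with intent
    }
}
```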
Reflections
This project was peak Vibe Coding — no rigid architecture upfront, just a flowing experiment with a clear purpose and evolving ideas.
The use of Ollama was especially empowering. Running an LLM locally gave me:
- Full control of the experience
- No API costs or usage caps
- A deeper understanding of how AI can enhance personal and creative tools
Next Steps
For future improvements, I'd love to:
- Add a slick front-end UI (maybe with React or Tailwind)
- Let users rate and fine-tune their recommendations
- Persist data for returning visitors
- Integrate retrieval-augmented generation for even smarter results
But even as an MVP, the app feels alive. It understands vibe. And that's the magic. I committed the code to my GitHub at https://github.com/tyrell/movievibes. All of this was done within a few hours of publishing my previous post about Spring AI.
A Word on Spring AI
While this project used a more manual approach to interact with Ollama, I’m excited about the emerging capabilities of Spring AI. It promises to simplify agentic workflows by integrating LLMs seamlessly into Spring-based applications — with features like prompt templates, model abstractions, embeddings, and even memory-backed agents.
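As a taste, the whole Ollama exchange above could collapse into a few lines with Spring AI's ChatClient. This is a hedged sketch against the API as I understand it today, assuming the Ollama starter is on the classpath; details may shift as the project evolves:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

// Sketch only: assumes Spring AI's Ollama starter has auto-configured a
// chat model, so a ChatClient.Builder can be injected directly.
@Service
public class SpringAiVibeService {

    private final ChatClient chatClient;

    public SpringAiVibeService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    public String summarizeVibe(String genre, String plot) {
        return chatClient.prompt()
                .user("Genre: %s. Plot: %s. Summarize this movie's vibe in one sentence."
                        .formatted(genre, plot))
                .call()
                .content();
    }
}
```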
As Spring AI matures, I see it playing a major role in production-grade, AI-powered microservices. It aligns well with Spring’s core principles: abstraction, convention over configuration, and testability.
Try the idea. Build something weird. Talk to your code. Let it talk back. Locally. ✨
UPDATE (01/AUG/2025): Read the sequel to this post here.