pilifs/Terminal-Value: A sandbox that demonstrates rendering personalized user views with an LLM-enabled pipeline, along with the engineering principles that make it possible.


This repository takes a Base Home Page Web Component from an eCommerce web app:
[Image: base home page thumbnail]

Then feeds it into an LLM, along with user-specific context (e.g., notes from a CRM system), through a fully programmatic pipeline, to render a Custom Client-Specific Home Page Web Component:
[Image: client-specific home page thumbnail]

Each custom web component is rendered in one shot using roughly 10,000 tokens and averages about 4 KB unminified. Sample raw responses from the Gemini Batch API, showing the raw prompt input, the raw LLM response, and detailed token usage metadata, can be found in ./apps/gemini-batch/local-inputs.
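If you want a sense of what those sample files record without cloning the repo, the information looks roughly like the shape below. The field names are illustrative only, not the exact format of the files in ./apps/gemini-batch/local-inputs:

```ts
// Illustrative shape only; field names are assumptions, not the exact format of the
// files stored in ./apps/gemini-batch/local-inputs.
interface SampleBatchRecord {
  promptInput: string;   // raw prompt sent to the Gemini Batch API (base component + user context)
  llmResponse: string;   // raw LLM response containing the rendered web component source
  usage: {
    promptTokens: number;   // tokens consumed by the prompt
    responseTokens: number; // tokens in the generated component
    totalTokens: number;    // roughly 10,000 per one-shot render
  };
}
```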

The framework and architecture behind this approach give this repository its name: Terminal Value. It’s not quite using an LLM as a compiler, but it’s also not quite agentic AI, MCP, vibe coding, or any other LLM-related vernacular I have come across. It feels more like transactional AI: invoking an LLM with a higher-order transaction to elicit a structured reasoning response.

Before continuing, I encourage you to start with Approaching LLMs Like an Engineer, a blog post embedded in this repo that details the philosophy applied here, along with bite-sized examples.

We are approaching LLMs wrong. The industry is rushing to ‘vibe code’ by throwing massive context windows and brute-force agentic loops at problems and hoping it sticks. This is more like gambling than software engineering. We are skipping primitives and jumping straight to leaky abstractions. To build better systems, we must stop treating LLMs as magic wands and start treating them as predictable, probabilistic components whose value is amplified when harnessed by a deterministic architecture.

This repository serves as a sandbox to anchor these views. The codebase demonstrates how to generate custom, personalized web components for key clients of a mock online ski shop with LLMs in a fully automated way. Notably, while almost every line of code here was written by an LLM, none of it was “vibe coded” or generated by autonomous agents. It was built using the precise interaction patterns detailed in that post.

Run npm install, then npm run start:ski-shop to start the example eCommerce app. Here are some links to custom-rendered components you can check out after running the app:

Compare and contrast them with the base home page experience, which is used in the prompt. Visit the admin page to view all client details and open other custom LLM-rendered pages.

[Image: admin page thumbnail]

There are three apps in this repo.

  • Ski Shop: detailed above.
  • Gemini Batch: an app you can start by executing npm run start:gemini. It has a crude front-end to help keep track of Gemini Batch API requests to render components, along with other methods to interact with this API.
  • Terminal Value: a pipeline that renders custom views for Ski Shop by extracting relevant user info and key files, programmatically passing them to an LLM, and dynamically serving the results.

Terminal Value Architecture

The architecture behind the Terminal Value pipeline looks something like this, at a high level:

Database -> Parse Data -> Construct Base Prompts -> Append Code Context -> Invoke LLM -> Serve Result Dynamically
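To make the stages concrete, here is a minimal sketch of how they might compose. Every name below (loadClientContext, invokeLlm, serveComponent, and so on) is hypothetical; the actual code in the repo is organized differently across geminiBatchServices and coreServices:

```ts
// Hypothetical sketch of the pipeline stages; names do not match the repo's actual code.

interface ClientContext {
  clientId: string;
  crmNotes: string[]; // user-specific context, e.g. notes from the CRM system
}

// Stand-ins for repo-specific services (database access, Gemini Batch call, dynamic serving).
declare function loadClientContext(clientId: string): Promise<ClientContext>;     // Database -> Parse Data
declare function invokeLlm(prompt: string): Promise<string>;                      // Invoke LLM
declare function serveComponent(clientId: string, source: string): Promise<void>; // Serve Result Dynamically

// Construct Base Prompts: turn parsed user context into a prompt.
function constructBasePrompt(ctx: ClientContext): string {
  return `Personalize the base home page for client ${ctx.clientId}.\nCRM notes:\n${ctx.crmNotes.join("\n")}`;
}

// Append Code Context: attach the base web component source so the LLM can rewrite it.
function appendCodeContext(prompt: string, baseComponentSource: string): string {
  return `${prompt}\n\nBase web component source:\n${baseComponentSource}`;
}

// One end-to-end, one-shot render for a single client.
async function renderCustomView(clientId: string, baseComponentSource: string): Promise<void> {
  const ctx = await loadClientContext(clientId);
  const prompt = appendCodeContext(constructBasePrompt(ctx), baseComponentSource);
  const rendered = await invokeLlm(prompt); // ~10,000 tokens in, ~4 KB of component out
  await serveComponent(clientId, rendered);
}
```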

You can see additional details by browsing the repository. For now, rather than spend more time documenting, I will refactor this code in the coming days to be much cleaner, then update this README.

The mock eCommerce application architecture looks something like this, at a high level:

Events -> Projections -> Database

The Ski Shop application implements an event sourcing / CQRS pattern to tightly control all state changes. It leverages projections to simulate strongly consistent writes and eventually consistent reads. It was written this way because, in theory, it makes it easy to efficiently re-render a custom view whenever an event changes context specific to one or more users. LLMs also seem to do well with functional codebases.
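As a rough illustration of the pattern, here is a pure projection folding events into a read model that could flag a client for re-rendering. The event and projection names are invented for this sketch and are not the repo's actual types:

```ts
// Invented types for illustration; the repo's actual events and projections differ.

type ClientEvent =
  | { type: "CrmNoteAdded"; clientId: string; note: string; at: string }
  | { type: "OrderPlaced"; clientId: string; sku: string; at: string };

interface ClientContextProjection {
  clientId: string;
  crmNotes: string[];
  recentSkus: string[];
  needsRerender: boolean; // flag the Terminal Value pipeline could pick up
}

// Pure projection: fold the event stream into an eventually consistent read model.
function project(events: ClientEvent[], clientId: string): ClientContextProjection {
  const initial: ClientContextProjection = { clientId, crmNotes: [], recentSkus: [], needsRerender: false };
  return events
    .filter((e) => e.clientId === clientId)
    .reduce((state, e): ClientContextProjection => {
      switch (e.type) {
        case "CrmNoteAdded":
          return { ...state, crmNotes: [...state.crmNotes, e.note], needsRerender: true };
        case "OrderPlaced":
          return { ...state, recentSkus: [...state.recentSkus, e.sku], needsRerender: true };
      }
    }, initial);
}
```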

Here are some feature ideas on my mind.

  • Refactor Terminal Value Pipeline

The current approach is rife with side effects, as I have not finished extracting all the logic from geminiBatchServices to coreServices. The data structure behind the prompt will also change to make it easier to render other multi-modal components, enabling an integrated vertical experience for the end user.

  • Add Additional User-Specific Render Prompts

The obvious ones are marketing prompts, such as Reddit or Twitter copy. It would also be interesting to show example marketing images with the same look and feel as the web components and marketing copy.

  • Harden Web App and Make Context More Realistic

Update the app so dynamic pricing is set by a back-end config, and allow the LLM to render this dynamically as well. Refine viewport and device context so it can be passed to the LLM for device-specific experiences.

  • Optimize and Demonstrate Scaled Example

Render for 10,000 users. Analyze the prompt much more carefully to tune performance. Publish token utilization metrics.

If you’d like to work on these, or submit any of your own, please read the contributing guidelines first, then feel free to jump in.

This is meant to be a thought-provoking example, not a startup or polished final product. Your participation is strongly encouraged!

License: MIT


