AI prompts are hardcoded everywhere:
```js
// ❌ Scattered across your codebase
const prompt = "You are a helpful assistant that generates marketing copy...";
await llm.complete(prompt);
```
This creates:
- Version chaos – Which prompt version is in production?
- No collaboration – Engineers and prompt experts can’t work independently
- Deployment hell – Changing a prompt requires code deployment
- Zero reusability – Every app reinvents prompt management
PLP is a universal protocol that decouples prompts from code, just like APIs decouple frontends from backends.
```js
// ✅ Clean, versioned, traceable
import { PLPClient } from '@goreal-ai/plp-client';

const plp = new PLPClient('https://prompts.goreal.ai');
const prompt = await plp.get('marketing/welcome-email', '1.2.0');
await llm.complete(prompt.content);
```
- RESTful & Simple – Three endpoints: GET, PUT, DELETE
- Language Agnostic – Works with any stack (Python, JS, Go, Rust…)
- Version Control Built-In – Fetch latest or pin to specific versions
- Open Standard – MIT licensed, community-driven
PLP is an open specification – like HTTP or OAuth. Any prompt management tool can implement it, and any AI framework can consume it.
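Concretely, a client can address the protocol's three operations with plain HTTP. The sketch below only builds the request URLs; the `/v1/prompts/{id}` route matches the Express middleware example later in this README, while the `version` query parameter is an assumption about how pinning is expressed on the wire.

```js
// Hypothetical URL builder for the three PLP operations (illustrative, not
// part of any published SDK).
function promptUrl(baseUrl, id, version) {
  // Prompt ids may contain slashes (e.g. "marketing/welcome-email"),
  // so encode the id as a single path segment.
  const url = `${baseUrl}/v1/prompts/${encodeURIComponent(id)}`;
  return version ? `${url}?version=${encodeURIComponent(version)}` : url;
}

// GET    promptUrl(base, id)            → fetch latest
// GET    promptUrl(base, id, '1.2.0')   → fetch a pinned version
// PUT    promptUrl(base, id)            → create or update
// DELETE promptUrl(base, id)            → remove
```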
For Prompt Library Providers: Implement PLP and your service works with every PLP client.
For AI Framework Developers: Add a PLP client and your framework works with any PLP-compliant prompt library.
Learn more: Protocol Overview | Full Specification
```bash
# JavaScript/Node.js
npm install @goreal-ai/plp-client

# Python
pip install plp-client
```
JavaScript Example:
```js
import { PLPClient } from '@goreal-ai/plp-client';

const client = new PLPClient('https://your-plp-server.com', {
  apiKey: 'optional-key'
});

// Get latest version
const prompt = await client.get('product/feature-announcement');
console.log(prompt.content); // "Hello {{name}}..."

// Save a new prompt
await client.put('support/faq-response', {
  content: 'Answer: {{answer}}',
  meta: { author: 'shimon', version: '1.0.0' }
});

// Delete a prompt
await client.delete('deprecated/old-prompt');
```
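The `{{name}}`-style placeholders in the fetched content are filled in by your application before the prompt is sent to the model. A minimal renderer might look like this; it is an illustrative sketch, not part of the PLP client, and assumes placeholders are simple word-character keys:

```js
// Fill {{key}} placeholders from a variables object; unknown keys are left
// intact so missing data is easy to spot.
function renderPrompt(content, vars) {
  return content.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match
  );
}

const greeting = renderPrompt(
  "Hello {{name}}, welcome to {{product}}!",
  { name: "Ada", product: "PLP" }
);
console.log(greeting); // "Hello Ada, welcome to PLP!"
```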
Python Example:
```python
from plp_client import PLPClient

client = PLPClient("https://your-plp-server.com", api_key="optional-key")

# Get latest version
prompt = client.get("product/feature-announcement")
print(prompt.content)  # "Hello {{name}}..."

# Save a new prompt
client.put("support/faq-response", {
    "content": "Answer: {{answer}}",
    "meta": {"author": "shimon", "version": "1.0.0"}
})
```
Every prompt follows this structure:
```json
{
  "id": "marketing/welcome-email",
  "content": "Hello {{name}}, welcome to {{product}}!",
  "meta": {
    "version": "1.2.0",
    "author": "shimon",
    "description": "Sent on user signup",
    "model_config": { "temperature": 0.7 }
  }
}
```
See Full Specification for details.
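For quick client-side checks, a record in this shape can be validated with a small predicate. The required-field list below is an assumption drawn from the example above, not from the spec itself; the Full Specification is authoritative.

```js
// Illustrative shape check for a PLP prompt record: id and content are
// treated as required strings, meta as an optional object.
function isValidPrompt(p) {
  return (
    typeof p === 'object' && p !== null &&
    typeof p.id === 'string' &&
    typeof p.content === 'string' &&
    (p.meta === undefined || (typeof p.meta === 'object' && p.meta !== null))
  );
}
```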
```bash
npm install @goreal-ai/plp-express-middleware
```
```js
import express from 'express';
import { plpMiddleware } from '@goreal-ai/plp-express-middleware';

const app = express();

// Add PLP endpoints with file-based storage
app.use('/v1', plpMiddleware({
  storage: './prompts-db'
}));

app.listen(3000);
// Now serving: GET/PUT/DELETE /v1/prompts/{id}
```
Use our OpenAPI spec to generate server stubs:
```bash
# Generate Go server
openapi-generator generate -i spec/openapi.yaml -g go-server

# Generate Python FastAPI
openapi-generator generate -i spec/openapi.yaml -g python-fastapi
```
Or manually implement the three endpoints.
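To show how small that surface is, here is an in-memory sketch of the three operations. The versioning semantics (latest = most recent write, versions matched via `meta.version`) are assumptions for illustration; real servers should follow the Full Specification.

```js
// Toy in-memory PLP store: one function per protocol operation.
function createPromptStore() {
  const prompts = new Map(); // id → array of versions, newest last

  return {
    // GET /v1/prompts/{id}[?version=...]
    get(id, version) {
      const versions = prompts.get(id) || [];
      if (version === undefined) return versions[versions.length - 1] ?? null;
      return versions.find((p) => p.meta?.version === version) ?? null;
    },
    // PUT /v1/prompts/{id}
    put(id, prompt) {
      const versions = prompts.get(id) || [];
      versions.push({ id, ...prompt });
      prompts.set(id, versions);
    },
    // DELETE /v1/prompts/{id}
    delete(id) {
      return prompts.delete(id);
    },
  };
}
```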
| Language | Package | Status |
|---|---|---|
| JavaScript/TypeScript | `@goreal-ai/plp-client` | ✅ Ready |
| Python | `plp-client` | ✅ Ready |
| Go | `go-plp` | 🚧 Coming Soon |
| Rust | `plp-rs` | 🚧 Coming Soon |
Want to build an SDK? Follow the Implementation Guide.
Want to build a PLP-compliant server or client?
- Read the Full Specification
- Follow the Compliance Guide
- Test your implementation
- Add the compliance badge:
[](https://github.com/gorealai/plp)
- GitHub Discussions – Ask questions, share implementations
- Contributing – See CONTRIBUTING.md for guidelines
- Roadmap – Check Issues for planned features
Built with ❤️ by GoReal.AI
MIT © GoReal.AI
See LICENSE for details.