# Quickstart

Get started with LLMOps in minutes. This guide walks you through installing the SDK, setting up a database, and making your first API request.
## Prerequisites
- Node.js 18+
- PostgreSQL database
- An LLM provider API key (e.g., OpenRouter)
## Set up the database
LLMOps requires a PostgreSQL database. You can use Docker to quickly spin one up:
```bash
docker run -d \
  --name llmops-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=llmops \
  -p 5432:5432 \
  postgres:16
```

Your `POSTGRES_URL` will be:

```
POSTGRES_URL=postgresql://postgres:password@localhost:5432/llmops
```

## Create the LLMOps configuration
Install the dependencies (for example, `npm install @llmops/sdk express pg dotenv openai`), then create a file called `llmops.ts` to configure LLMOps:
```typescript
import { llmops } from '@llmops/sdk';
import { Pool } from 'pg';

export default llmops({
  basePath: '/llmops',
  providers: {
    openrouter: {
      apiKey: process.env.OPENROUTER_API_KEY || '',
    },
  },
  database: new Pool({
    connectionString: process.env.POSTGRES_URL || '',
  }),
});
```

## Create your server
Set up an Express server in `server.ts` with the LLMOps middleware:
```typescript
import 'dotenv/config';
import express from 'express';
import { createLLMOpsMiddleware } from '@llmops/sdk/express';
import llmopsClient from './llmops';

const app = express();
const port = 3000;

app.use(express.json());
app.use('/llmops', createLLMOpsMiddleware(llmopsClient));

app.listen(port, () => {
  console.log(`LLMOps running at http://localhost:${port}/llmops`);
});
```

## Set environment variables
Create a `.env` file with your configuration:

```
POSTGRES_URL=postgresql://postgres:password@localhost:5432/llmops
OPENROUTER_API_KEY=sk-or-your-api-key
```

## Start the server
Run your server:

```bash
npx tsx server.ts
```

Your LLMOps instance is now running at `http://localhost:3000/llmops`.
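If the server fails to start, a missing environment variable is a common culprit. A small guard like the following (an illustrative helper, not part of the SDK) fails fast with a clear message:

```typescript
// Throw early if any required environment variable is missing or empty.
function assertEnv(names: string[]): void {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}

// In server.ts, before starting the app:
// assertEnv(['POSTGRES_URL', 'OPENROUTER_API_KEY']);
```

This keeps configuration errors at startup, rather than surfacing later as opaque connection or authentication failures.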
## Create a config
Open the LLMOps dashboard and create your first config to start routing requests to your LLM providers.
## Making your first request
Once you've created a config in the LLMOps dashboard, you can make requests using the OpenAI-compatible API:
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:3000/llmops/api/genai',
  apiKey: '',
  defaultHeaders: {
    'x-llmops-config': 'your-config-id',
  },
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
```

## Next steps
- Configuration - Learn about all configuration options
- Authentication - Set up authentication for your deployment
- Providers - Configure different LLM providers