# Express

Learn how to integrate LLMOps into your Express.js application using the official middleware.

## Installation

Install the required dependencies:
```bash
npm install @llmops/sdk pg express
```

## Setup

### Create LLMOps configuration

Create an `llmops.ts` file to configure your LLMOps instance:
```ts
import { llmops } from '@llmops/sdk';
import { Pool } from 'pg';

export default llmops({
  basePath: '/llmops',
  providers: {
    openrouter: {
      apiKey: process.env.OPENROUTER_API_KEY || '',
    },
  },
  database: new Pool({
    connectionString: process.env.DATABASE_URL || '',
  }),
});
```

### Create Express server
Set up your Express server with the LLMOps middleware:
```ts
import 'dotenv/config';
import express from 'express';
import { createLLMOpsMiddleware } from '@llmops/sdk/express';
import llmopsClient from './llmops';

const app = express();
const port = 3000;

app.use(express.json());

// Mount the LLMOps middleware
app.use('/llmops', createLLMOpsMiddleware(llmopsClient));

app.listen(port, () => {
  console.log(`LLMOps running at http://localhost:${port}/llmops`);
});
```

### Set environment variables
Create a `.env` file with your configuration:

```bash
DATABASE_URL=postgresql://postgres:password@localhost:5432/llmops
OPENROUTER_API_KEY=sk-or-your-api-key
```

### Run the server
Start your Express server:
```bash
npx tsx server.ts
```

Your LLMOps instance is now available at `http://localhost:3000/llmops`.
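In scripts or CI it can help to wait until the server actually answers before sending traffic. A minimal sketch, polling the health endpoint; `waitForLLMOps` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper (not part of the SDK): poll the health endpoint
// until the server answers, or give up after `attempts` tries.
async function waitForLLMOps(baseUrl: string, attempts = 10): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(`${baseUrl}/health`);
      if (res.ok) return true;
    } catch {
      // server not listening yet; fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  return false;
}
```

For example, `await waitForLLMOps('http://localhost:3000/llmops')` resolves to `true` once the health check passes.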
## API Endpoints
Once mounted, the following endpoints are available:
| Endpoint | Description |
|---|---|
| `GET /llmops/health` | Health check endpoint |
| `/llmops/api/genai/v1/*` | OpenAI-compatible API |
| `GET /llmops` | LLMOps dashboard UI |
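Because the `/llmops/api/genai/v1/*` surface is OpenAI-compatible, any HTTP client can talk to it directly. A small sketch that assembles such a request; `llmopsRequest` is a hypothetical helper, and the header name and placeholder credentials mirror the SDK example in this guide:

```typescript
// Hypothetical helper: build a Request against the OpenAI-compatible API.
// The Authorization bearer token is your environment secret, and the
// x-llmops-config header selects which config to route through.
function llmopsRequest(path: string, secret: string, configId: string): Request {
  return new Request(`http://localhost:3000/llmops/api/genai${path}`, {
    headers: {
      Authorization: `Bearer ${secret}`,
      'x-llmops-config': configId,
    },
  });
}

// e.g. const res = await fetch(llmopsRequest('/v1/models', 'your-environment-secret', 'your-config-id'));
```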
## Making Requests
Use the OpenAI SDK to make requests to your LLMOps instance:
```ts
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:3000/llmops/api/genai',
  apiKey: 'your-environment-secret',
  defaultHeaders: {
    'x-llmops-config': 'your-config-id',
  },
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
```
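If you pass `stream: true`, the OpenAI SDK returns an async iterable of chunks whose text lives in `delta.content`. A minimal sketch for accumulating those deltas into the full completion; the `Chunk` type is a narrowed assumption covering only the fields used here:

```typescript
// Narrowed sketch of the OpenAI streaming chunk shape; only the
// fields read below are declared.
type Chunk = { choices: Array<{ delta?: { content?: string | null } }> };

// Concatenate streamed delta content into the full completion text.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

With the client above, `await collectStream(await openai.chat.completions.create({ model, messages, stream: true }))` should yield the same string as `response.choices[0].message.content` does in the non-streaming call.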