
Hono

Learn how to integrate LLMOps into your Hono application using the official middleware.

Installation

Install the required dependencies:

npm install @llmops/sdk pg hono @hono/node-server

Setup

Create LLMOps configuration

Create an llmops.ts file to configure your LLMOps instance:

llmops.ts
import { llmops } from '@llmops/sdk';
import { Pool } from 'pg';

export default llmops({
  basePath: '/llmops',
  providers: {
    openrouter: {
      apiKey: process.env.OPENROUTER_API_KEY || '',
    },
  },
  database: new Pool({
    connectionString: process.env.DATABASE_URL || '',
  }),
});

Create Hono server

Set up your Hono server with the LLMOps middleware:

server.ts
import 'dotenv/config';
import { Hono } from 'hono';
import { serve } from '@hono/node-server';
import { createLLMOpsMiddleware } from '@llmops/sdk/hono';
import llmopsClient from './llmops';

const app = new Hono();
const port = 3000;

// Health check endpoint
app.get('/health', (c) => {
  return c.text('Hello from Hono!');
});

// Mount LLMOps middleware
app.use('/llmops/*', createLLMOpsMiddleware(llmopsClient));

serve(
  {
    fetch: app.fetch,
    port,
  },
  (info) => {
    console.log(`LLMOps running at http://localhost:${info.port}/llmops`);
  }
);

Set environment variables

Create a .env file with your configuration:

.env
DATABASE_URL=postgresql://postgres:password@localhost:5432/llmops
OPENROUTER_API_KEY=sk-or-your-api-key
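The configuration above falls back to empty strings when these variables are unset, which tends to surface later as confusing connection or authentication errors. A minimal fail-fast check can catch this at startup; `assertEnv` is an illustrative helper, not part of @llmops/sdk:

```typescript
// Sketch: fail fast at startup if required environment variables are missing.
// `assertEnv` is a hypothetical helper, not part of @llmops/sdk.
function assertEnv(names: string[]): void {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}

// Call before creating the LLMOps instance, e.g. at the top of llmops.ts:
// assertEnv(['DATABASE_URL', 'OPENROUTER_API_KEY']);
```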

Run the server

Start your Hono server:

npx tsx server.ts

Your LLMOps instance is now available at http://localhost:3000/llmops.

API Endpoints

Once mounted, the following endpoints are available:

Endpoint                  Description
GET /llmops/health        Health check endpoint
/llmops/api/genai/v1/*    OpenAI-compatible API
GET /llmops               LLMOps dashboard UI
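A quick way to smoke-test the mounted routes is to probe the health endpoint. This is a sketch, not part of the SDK: `isHealthy` is an illustrative helper, and it assumes the server from the previous section is listening at the given base URL.

```typescript
// Sketch: probe the health endpoint of a running LLMOps instance.
// `isHealthy` is an illustrative helper, not part of @llmops/sdk.
async function isHealthy(base: string): Promise<boolean> {
  try {
    const res = await fetch(`${base}/llmops/health`);
    return res.ok;
  } catch {
    // Connection errors (server not running) count as unhealthy.
    return false;
  }
}

// isHealthy('http://localhost:3000') resolves to true once the server is up.
```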

Making Requests

Use the OpenAI SDK to make requests to your LLMOps instance:

import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:3000/llmops/api/genai',
  apiKey: 'your-environment-secret',
  defaultHeaders: {
    'x-llmops-config': 'your-config-id',
  },
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
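If you prefer not to pull in the OpenAI SDK, the same call can be made with plain fetch against the OpenAI-compatible endpoint listed above. This is a sketch under that assumption: `buildChatRequest` is an illustrative helper, and the secret and config values are the same placeholders as in the SDK example.

```typescript
// Sketch: the same chat completion as a raw HTTP request.
// `buildChatRequest` is an illustrative helper, not part of any SDK.
function buildChatRequest(baseURL: string, secret: string, configId: string): Request {
  return new Request(`${baseURL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The OpenAI SDK sends the API key as a Bearer token.
      Authorization: `Bearer ${secret}`,
      'x-llmops-config': configId,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: 'Hello!' }],
    }),
  });
}

// const res = await fetch(
//   buildChatRequest('http://localhost:3000/llmops/api/genai', 'your-environment-secret', 'your-config-id')
// );
```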
