Agent Chat - Development Guide
Local Setup, Testing, and Contribution Guidelines
Prerequisites
Required Software
- Node.js: 20.0.0+ (LTS recommended)
- npm: 9.0.0+
- PostgreSQL: 14+ (for persistent storage)
- Redis: 7+ (for sessions and caching)
- Qdrant: 1.7+ (for vector search)
- Docker: 24+ (optional, for containerized dependencies)
Optional Tools
- MongoDB: 6+ (for LibreChat compatibility)
- Neo4j: 5+ (for knowledge graph features)
- MeiliSearch: 1.5+ (for full-text search)
- OrbStack: Latest (macOS container runtime)
Initial Setup
1. Clone Repository
# Clone from GitLab
git clone https://gitlab.bluefly.io/llm/common_npm/agent-chat.git
cd agent-chat
# Install dependencies
npm install
# Create environment file
cp .env.sample .env
2. Configure Environment
Edit .env file:
# Server Configuration
AGENT_CHAT_PORT=3080
AGENT_CHAT_HOST=0.0.0.0
NODE_ENV=development
LOG_LEVEL=debug
# Database Connections
POSTGRESQL_URL=postgresql://user:pass@localhost:5432/agent_chat
MONGODB_URL=mongodb://localhost:27017/agent_chat
REDIS_URL=redis://localhost:6379
# Vector Database
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY=
# LLM Platform Integration
LLM_GATEWAY_URL=http://localhost:4000
VECTOR_HUB_URL=http://localhost:6333
AGENT_MESH_URL=http://localhost:3005
# Authentication
JWT_SECRET=your-secret-key-here
SESSION_TIMEOUT=3600
# Observability
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006
PROMETHEUS_PORT=9090
JAEGER_ENDPOINT=http://localhost:14268/api/traces
# Feature Flags
LIBRECHAT_ENHANCED=true
KAGENT_ENABLED=true
OSSA_ENABLED=true
AGENT_STUDIO_ENABLED=true
ROCKETSHIP_ENABLED=false # Disable experimental features in dev
ECHO_VOICE_ENABLED=false
# CORS
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:3080
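The server's own config loader is not shown here, but a minimal startup validation for the variables above might look like the following. This is an illustrative sketch, not part of the codebase; the variable names mirror the `.env` sample and `loadConfig` is a hypothetical helper:

```typescript
// Illustrative config loader; fails fast when required variables are unset.
interface AppConfig {
  port: number;
  postgresUrl: string;
  redisUrl: string;
  jwtSecret: string;
}

export function loadConfig(
  env: Record<string, string | undefined> = process.env,
): AppConfig {
  const required = ['POSTGRESQL_URL', 'REDIS_URL', 'JWT_SECRET'] as const;
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return {
    // Default matches AGENT_CHAT_PORT in the sample above
    port: Number(env.AGENT_CHAT_PORT ?? 3080),
    postgresUrl: env.POSTGRESQL_URL!,
    redisUrl: env.REDIS_URL!,
    jwtSecret: env.JWT_SECRET!,
  };
}
```

Failing fast at boot makes misconfigured environments obvious instead of surfacing as connection errors minutes later.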
3. Start Dependencies
Option A: Docker Compose (Recommended)
# Start all dependencies
docker-compose up -d
# Verify services
docker-compose ps
docker-compose.yml:
version: '3.8'
services:
  postgres:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: agent_chat
      POSTGRES_USER: agent_chat
      POSTGRES_PASSWORD: dev_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
  qdrant:
    image: qdrant/qdrant:v1.7.4
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage
  mongodb:
    image: mongo:7
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: dev_password
    volumes:
      - mongo_data:/data/db
volumes:
  postgres_data:
  redis_data:
  qdrant_data:
  mongo_data:
Option B: Manual Setup
# PostgreSQL
brew install postgresql@16
brew services start postgresql@16
createdb agent_chat
# Redis
brew install redis
brew services start redis
# Qdrant
docker run -d -p 6333:6333 qdrant/qdrant:v1.7.4
# MongoDB (optional; requires the mongodb/brew tap)
brew tap mongodb/brew
brew install mongodb-community
brew services start mongodb-community
4. Database Setup
# Run Prisma migrations
npx prisma migrate dev
# Generate Prisma client
npx prisma generate
# Seed database (optional)
npm run db:seed
# Setup Qdrant collections
npm run agent-os:qdrant:setup
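The internals of `agent-os:qdrant:setup` are not documented here, but a setup step of this kind typically checks for and creates the collections the app needs. As an illustration only: the `QdrantLike` interface below is a hand-rolled stand-in for the real Qdrant client, and the collection name and vector size in the usage are placeholders:

```typescript
// Idempotent collection setup, sketched against a minimal client interface.
interface QdrantLike {
  getCollections(): Promise<{ collections: { name: string }[] }>;
  createCollection(
    name: string,
    opts: { vectors: { size: number; distance: string } },
  ): Promise<unknown>;
}

// Returns true if the collection was created, false if it already existed.
export async function ensureCollection(
  client: QdrantLike,
  name: string,
  vectorSize = 1536,
): Promise<boolean> {
  const { collections } = await client.getCollections();
  if (collections.some((c) => c.name === name)) return false;
  await client.createCollection(name, {
    vectors: { size: vectorSize, distance: 'Cosine' },
  });
  return true;
}
```

Making setup idempotent means the script can run safely on every boot or deploy.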
5. Build and Start
# Build TypeScript
npm run build
# Start development server (with auto-reload)
npm run dev
# Or start production build
npm run start
6. Verify Installation
# Check health
curl http://localhost:3080/health
# Check API info
curl http://localhost:3080/api/info
# Test chat endpoint
curl -X POST http://localhost:3080/api/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4-turbo",
"messages": [{"role": "user", "content": "Hello"}]
}'
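For scripted checks, the same request can be issued from TypeScript. This helper is illustrative: the response shape assumes the OpenAI-compatible format used throughout this guide, and `fetchImpl` is injectable so the function can be exercised without a running server:

```typescript
// Minimal client for the completions endpoint shown above (illustrative).
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ ok: boolean; status: number; json(): Promise<any> }>;

export async function chatCompletion(
  baseUrl: string,
  model: string,
  messages: ChatMessage[],
  fetchImpl: FetchLike = (globalThis as any).fetch,
): Promise<string> {
  const res = await fetchImpl(`${baseUrl}/api/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages }),
  });
  if (!res.ok) throw new Error(`Chat request failed with status ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses carry the reply under choices[0].message
  return data.choices[0].message.content;
}
```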
Development Workflow
Project Structure
agent-chat/
├── src/
│   ├── agent-os/           # Agent OS features
│   │   ├── core/           # Agent launcher
│   │   ├── memory/         # 5-layer memory system
│   │   └── types/          # TypeScript interfaces
│   ├── api/                # GraphQL and REST APIs
│   │   ├── routes/         # Express routes
│   │   ├── resolvers/      # GraphQL resolvers
│   │   └── schema.graphql  # GraphQL schema
│   ├── core/               # Core business logic
│   │   ├── services/       # Service layer
│   │   └── use-cases/      # Use case layer
│   ├── infrastructure/     # External integrations
│   │   ├── database/       # Database clients
│   │   ├── providers/      # LLM providers
│   │   ├── mcp/            # MCP server
│   │   └── websocket/      # WebSocket handlers
│   ├── librechat/          # LibreChat compatibility
│   ├── middleware/         # Express middleware
│   ├── services/           # Legacy services
│   ├── types/              # TypeScript types
│   ├── server.js           # Main server entry
│   └── mcp-server.js       # MCP server entry
├── openapi/                # OpenAPI specifications
├── examples/               # Integration examples
├── tests/                  # Test suites
├── scripts/                # Utility scripts
├── infrastructure/         # Deployment configs
│   ├── kubernetes/         # K8s manifests
│   ├── docker/             # Dockerfiles
│   └── grafana/            # Dashboards
├── package.json
├── tsconfig.json
├── .env.sample
└── README.md
Development Scripts
# Development
npm run dev # Start with auto-reload
npm run build # Build TypeScript
npm run start # Start production build
# Testing
npm test # Run all tests
npm run test:unit # Unit tests only
npm run test:integration # Integration tests
npm run test:e2e # End-to-end tests
npm run test:coverage # Coverage report
# Code Quality
npm run lint # ESLint
npm run lint:fix # Auto-fix linting issues
npm run format # Prettier formatting
npm run typecheck # TypeScript type checking
# Database
npm run db:migrate # Run migrations
npm run db:reset # Reset database
npm run db:seed # Seed test data
npm run db:studio # Prisma Studio GUI
# OpenAPI
npm run openapi:sync # Sync specs to registry
npm run openapi:validate # Validate specs
npm run openapi:bundle # Bundle to JSON
npm run openapi:docs # Generate HTML docs
# Agent OS
npm run agent-os:setup # Setup Agent OS
npm run agent-os:demo # Run Agent OS demo
Hot Reload Development
# Terminal 1: TypeScript compilation in watch mode
npm run dev
# Terminal 2: Run tests in watch mode
npm run test:watch
# Terminal 3: Monitor logs
tail -f logs/combined.log
Adding New Features
1. Create Feature Branch
git checkout development
git pull origin development
git checkout -b feature/new-chat-feature
2. Implement Feature
Example: Add custom model routing
// src/core/services/custom-routing-service.ts
import { logger } from '../../utils/logger';

export interface RoutingContext {
  complexity: 'low' | 'medium' | 'high';
}

export class CustomRoutingService {
  async selectModel(context: RoutingContext): Promise<string> {
    logger.info('Selecting model for context', { context });

    // Route complex tasks to Claude; everything else goes to GPT-4 Turbo
    if (context.complexity === 'high') {
      return 'claude-3-5-sonnet-20241022';
    }
    return 'gpt-4-turbo';
  }
}

export const customRoutingService = new CustomRoutingService();
3. Add Tests
// tests/unit/custom-routing-service.test.ts
import { CustomRoutingService } from '../../src/core/services/custom-routing-service';

describe('CustomRoutingService', () => {
  let service: CustomRoutingService;

  beforeEach(() => {
    service = new CustomRoutingService();
  });

  it('should select Claude for high complexity', async () => {
    const model = await service.selectModel({ complexity: 'high' });
    expect(model).toBe('claude-3-5-sonnet-20241022');
  });

  it('should select GPT-4 for low complexity', async () => {
    const model = await service.selectModel({ complexity: 'low' });
    expect(model).toBe('gpt-4-turbo');
  });
});
4. Update Documentation
/**
* Custom Routing Service
*
* Intelligently selects LLM model based on task complexity.
*
* @example
* ```typescript
* const model = await customRoutingService.selectModel({
* complexity: 'high'
* });
* ```
*/
5. Run Quality Checks
# Type check
npm run typecheck
# Lint
npm run lint:fix
# Format
npm run format
# Test
npm test
# Build
npm run build
Testing
Unit Tests
// tests/unit/chat-service.test.ts
import { ChatService } from '../../src/core/services/chat-service';
import { mockLLMGateway } from '../mocks/llm-gateway';

describe('ChatService', () => {
  let chatService: ChatService;

  beforeEach(() => {
    chatService = new ChatService(mockLLMGateway);
  });

  it('should send message and receive response', async () => {
    const response = await chatService.sendMessage({
      content: 'Hello',
      model: 'gpt-4-turbo'
    });
    expect(response).toHaveProperty('content');
    expect(response.content).toBeTruthy();
  });

  it('should handle streaming responses', async () => {
    const tokens: string[] = [];
    await chatService.sendMessageStream(
      { content: 'Hello', model: 'gpt-4-turbo' },
      (token) => tokens.push(token)
    );
    expect(tokens.length).toBeGreaterThan(0);
  });
});
Integration Tests
// tests/integration/api.test.ts
import request from 'supertest';
import { app } from '../../src/server';

describe('Chat API Integration', () => {
  it('POST /api/chat/completions should return response', async () => {
    const response = await request(app)
      .post('/api/chat/completions')
      .send({
        model: 'gpt-4-turbo',
        messages: [{ role: 'user', content: 'Hello' }]
      })
      .expect(200);

    expect(response.body).toHaveProperty('choices');
    expect(response.body.choices[0].message.content).toBeTruthy();
  });

  it('GET /api/conversations should return list', async () => {
    const response = await request(app)
      .get('/api/conversations')
      .expect(200);

    expect(response.body).toHaveProperty('conversations');
    expect(Array.isArray(response.body.conversations)).toBe(true);
  });
});
E2E Tests
// tests/e2e/chat-flow.test.ts
import { chromium, Browser, Page } from 'playwright';

describe('Chat Flow E2E', () => {
  let browser: Browser;
  let page: Page;

  beforeAll(async () => {
    browser = await chromium.launch();
  });

  afterAll(async () => {
    await browser.close();
  });

  beforeEach(async () => {
    page = await browser.newPage();
    await page.goto('http://localhost:3080');
  });

  it('should send message and receive response', async () => {
    await page.fill('#message-input', 'What is 2+2?');
    await page.click('#send-button');
    await page.waitForSelector('.assistant-message');

    const response = await page.textContent('.assistant-message');
    expect(response).toContain('4');
  });
});
Debugging
VS Code Configuration
.vscode/launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Server",
      "skipFiles": ["<node_internals>/**"],
      "program": "${workspaceFolder}/src/server.js",
      "runtimeArgs": ["--loader", "ts-node/esm"],
      "env": {
        "NODE_ENV": "development",
        "DEBUG": "*"
      }
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Tests",
      "skipFiles": ["<node_internals>/**"],
      "program": "${workspaceFolder}/node_modules/jest/bin/jest",
      "args": ["--runInBand", "--no-cache"],
      "env": {
        "NODE_ENV": "test"
      }
    }
  ]
}
Debug Logging
import debug from 'debug';

const log = debug('agent-chat:service');

export class MyService {
  async doSomething() {
    log('Starting operation');
    const result = await this.run(); // placeholder for the actual work
    log('Operation complete', { result });
  }
}
Run with debug output:
DEBUG=agent-chat:* npm run dev
Performance Profiling
CPU Profiling
# Run with CPU profiler
node --prof dist/server.js
# Generate report
node --prof-process isolate-*.log > profile.txt
Memory Profiling
# Run with memory snapshots
node --inspect dist/server.js
# Open Chrome DevTools
# chrome://inspect
# Take heap snapshots
Load Testing
# Install artillery
npm install -g artillery
# Run load test
artillery run tests/load/chat-load.yml
tests/load/chat-load.yml:
config:
  target: 'http://localhost:3080'
  phases:
    - duration: 60
      arrivalRate: 10
  processor: "./processor.js"
scenarios:
  - name: "Chat Flow"
    flow:
      - post:
          url: "/api/chat/completions"
          json:
            model: "gpt-4-turbo"
            messages:
              - role: "user"
                content: "Hello"
Continuous Integration
GitLab CI Pipeline
.gitlab-ci.yml:
include:
  - component: gitlab.bluefly.io/llm/gitlab_components/workflow/golden@v0.1.0
    inputs:
      project_name: "agent-chat"
      enable_auto_flow: true
      kubernetes_namespace: "agent-chat"
      enable_observability: true

stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:20
  services:
    - postgres:16
    - redis:7
  script:
    - npm ci
    - npm run lint
    - npm run typecheck
    - npm run test:coverage
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'

build:
  stage: build
  image: docker:24
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - kubectl apply -f infrastructure/kubernetes/
  only:
    - main
    - development
Contributing
Code Style
- Formatting: Prettier (runs automatically on commit)
- Linting: ESLint with TypeScript rules
- Naming: camelCase for variables, PascalCase for classes
- Comments: JSDoc for public APIs
Commit Messages
Follow conventional commits:
feat: add custom model routing
fix: resolve WebSocket reconnection issue
docs: update API reference
refactor: extract chat service logic
test: add integration tests for vector search
chore: update dependencies
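A pre-commit hook could enforce the convention with a check along these lines. The type list mirrors the examples above and is not the project's official policy; the optional scope and `!` (breaking-change marker) follow the general conventional-commits format:

```typescript
// Illustrative conventional-commit check for the first line of a message.
const COMMIT_RE = /^(feat|fix|docs|refactor|test|chore)(\([\w-]+\))?!?: .+/;

export function isConventionalCommit(message: string): boolean {
  // Only the subject line is validated; the body is free-form.
  return COMMIT_RE.test(message.split('\n')[0]);
}
```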
Pull Request Process
1. Create Feature Branch: git checkout -b feature/my-feature
2. Make Changes: Implement feature with tests
3. Run Quality Checks: npm run lint && npm test
4. Commit Changes: Use conventional commit messages
5. Push Branch: git push origin feature/my-feature
6. Create MR: Open merge request in GitLab
7. Code Review: Address reviewer feedback
8. CI/CD: Ensure all pipelines pass
9. Merge: Maintainer merges to development
Code Review Checklist
- [ ] Code follows style guide
- [ ] Tests added for new features
- [ ] Documentation updated
- [ ] No breaking changes (or properly documented)
- [ ] Performance impact considered
- [ ] Security implications reviewed
- [ ] Backward compatibility maintained
- [ ] Error handling implemented
- [ ] Logging added for debugging
Troubleshooting
Common Issues
WebSocket Connection Fails
# Check CORS configuration
echo $ALLOWED_ORIGINS
# Verify Socket.IO version compatibility
npm ls socket.io
Database Connection Error
# Test PostgreSQL connection
psql $POSTGRESQL_URL -c "SELECT 1"
# Check Redis connection
redis-cli -u $REDIS_URL ping
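If connections fail only while the docker-compose services are still warming up, a retry with exponential backoff on the application side usually resolves it. A generic sketch (attempt count and delays are arbitrary defaults, not values the codebase uses):

```typescript
// Retry an async operation with exponential backoff between attempts.
export async function retry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  baseDelayMs = 200,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Wrapping the initial PostgreSQL or Redis connection in `retry(...)` makes cold starts much less fragile.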
High Memory Usage
# Check for memory leaks
node --expose-gc --inspect dist/server.js
# Monitor memory usage
watch -n 1 "ps aux | grep node"
Slow Response Times
# Enable Phoenix tracing
export PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006
# Check database query performance
npm run db:studio # Prisma Studio
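To spot slow endpoints locally without full tracing, a simple timing middleware can log per-request durations. This sketch is not part of the codebase; req/res are typed structurally so it stays framework-agnostic while matching the Express middleware shape:

```typescript
// Logs "METHOD url took Nms" when each response finishes.
type Req = { method: string; url: string };
type Res = { on(event: 'finish', cb: () => void): void };

export function requestTimer(log: (line: string) => void = console.log) {
  return (req: Req, res: Res, next: () => void): void => {
    const start = Date.now();
    // 'finish' fires once the response has been handed to the OS
    res.on('finish', () => {
      log(`${req.method} ${req.url} took ${Date.now() - start}ms`);
    });
    next();
  };
}
```

Mounted early in the middleware chain, it gives a cheap first signal before reaching for Phoenix or Jaeger.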
Resources
- GitLab Repository: https://gitlab.bluefly.io/llm/common_npm/agent-chat
- Issue Tracker: https://gitlab.bluefly.io/llm/common_npm/agent-chat/-/issues
- CI/CD Pipelines: https://gitlab.bluefly.io/llm/common_npm/agent-chat/-/pipelines
- Package Registry: https://gitlab.bluefly.io/llm/common_npm/agent-chat/-/packages
- OpenAPI Registry: https://gitlab.bluefly.io/llm/technical-guide/openapi/agent-chat/
Related Pages:
- Architecture - System design and components
- API Reference - Complete endpoint documentation
- Integration Guide - Integration examples
Last Updated: 2025-11-02