# API Reference

## Modular API Reference (2025)

This section documents all public API endpoints for the J4F Assistant, built on a fully modular architecture with LangChain-powered streaming.

---

## Architecture Overview

The API is built using modular route handlers:

- **AssistantRoutes.js** - AI assistant and streaming endpoints
- **ConversationRoutes.js** - Conversation management
- **ModelRoutes.js** - Model management and switching
- **TerminalRoutes.js** - Terminal session management
- **WorkspaceRoutes.js** - Workspace and Git operations
- **StreamingRoutes.js** - Streaming-specific endpoints
- **UnifiedAPIRoutes.js** - Unified API for external providers

---

## Authentication & Security

- Most endpoints require no authentication by default (configurable via `.env`)
- All system commands go through security modules (risk analysis, audit logging)
- Rate limiting and CORS are configurable via environment variables
- For external model APIs, see their respective documentation for API key usage

---

## Endpoints

### Chat & Assistant (AssistantRoutes.js)

#### POST /api/message
Send a message to the AI assistant (non-streaming)

- **Body**: `{ message: string, mode?: string, model?: string, conversation_id?: string, promptOrders?: object }`
- **Response**: `{ response: string, context: object }`
- **Features**: Tool integration, context awareness, security validation

#### POST /api/message/stream
Stream a message response using LangChain-powered streaming

- **Body**: `{ message: string, mode?: string, model?: string, conversation_id?: string, promptOrders?: object }`
- **Response**: Server-Sent Events (SSE) stream
- **Headers**: `Content-Type: text/event-stream`
- **Features**: Real-time streaming, tool execution, conversation memory

**Example Stream Response:**

```
data: {"type": "text", "content": "I'll help you with that..."}
data: {"type": "tool_call", "name": "file_read", "params": {"path": "/file.txt"}}
data: {"type": "text", "content": "File contents: Hello World"}
data: {"type": "done"}
```
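Because the browser's `EventSource` API only supports GET requests, clients typically consume these POST-based streams with `fetch` and parse the SSE lines manually. The following TypeScript sketch shows one way to do that for `/api/message/stream`; it assumes the event shapes documented above and keeps buffering and error handling deliberately minimal:

```typescript
// Minimal SSE consumer for POST /api/message/stream (a sketch, not the
// project's official client). Requires Node 18+ for the global fetch.
async function streamMessage(message: string): Promise<void> {
  const res = await fetch("http://localhost:3000/api/message/stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Accept": "text/event-stream",
    },
    body: JSON.stringify({ message }),
  });
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Each SSE payload line starts with "data: " and ends with a newline.
    let newline: number;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line.startsWith("data: ")) continue;

      const event = JSON.parse(line.slice("data: ".length));
      if (event.type === "text") process.stdout.write(event.content);
      else if (event.type === "error") console.error(event.message);
      else if (event.type === "done") return;
    }
  }
}

streamMessage("Hello, can you help me with Python?").catch(console.error);
```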
#### POST /api/message/stream/markdown
Stream a markdown-formatted response

- **Body**: `{ message: string, ... }`
- **Response**: SSE stream with markdown chunks

#### GET /api/status
Get current assistant status and health information

- **Response**: `{ status: string, assistant: boolean, timestamp: string, uptime: number, ... }`

#### GET /api/context
Get current conversation context and memory

- **Response**: `{ context: object, conversationId: string, messageCount: number }`

#### POST /api/context/clear
Clear current conversation context

- **Response**: `{ success: boolean, message: string }`

### Streaming (StreamingRoutes.js)

#### POST /api/message/stream
Stream a message response (see above)

#### POST /api/message/stream/markdown
Stream a markdown-formatted response (see above)

### Conversations (ConversationRoutes.js)

#### GET /api/conversations
List all saved conversations

- **Response**: `{ conversations: Array<{id, title, lastModified, messageCount}> }`

#### POST /api/conversation/save
Save the current conversation with an optional title

- **Body**: `{ title?: string, conversationData?: object }`
- **Response**: `{ success: boolean, conversationId: string }`

#### GET /api/conversation/:id
Get a specific conversation by ID

- **Response**: `{ conversation: object, messages: Array, metadata: object }`

#### DELETE /api/conversation/:id
Delete a conversation by ID

- **Response**: `{ success: boolean, message: string }`

#### POST /api/conversation/:id/set-current
Set a conversation as the current active conversation

- **Response**: `{ success: boolean, conversationId: string }`

#### GET /api/conversation/current
Get the current active conversation

- **Response**: `{ conversation: object, messages: Array }`

### Models (ModelRoutes.js)

#### GET /api/models
List all available AI models and their status

- **Response**: `{ models: Array<{name, status, provider, capabilities}> }`

#### POST /api/model/download
Download a new model (for Ollama or similar)

- **Body**: `{ modelName: string, provider?: string }`
- **Response**: `{ success: boolean, status: string }`

#### POST /api/model/switch
Switch the active AI model

- **Body**: `{ modelName: string, provider?: string }`
- **Response**: `{ success: boolean, activeModel: string }`

#### GET /api/model/current
Get information about the currently active model

- **Response**: `{ model: string, provider: string, capabilities: object }`

### Terminal (TerminalRoutes.js)

#### POST /api/terminals
Create a new terminal session

- **Body**: `{ workingDirectory?: string, environment?: object }`
- **Response**: `{ terminalId: string, status: string }`

#### GET /api/terminals/:terminalId
Get terminal session information and status

- **Response**: `{ terminalId: string, status: string, workingDirectory: string }`

#### DELETE /api/terminals/:terminalId
Kill a terminal session and clean up resources

- **Response**: `{ success: boolean, message: string }`

#### POST /api/terminals/:terminalId/execute
Execute a command in a terminal session

- **Body**: `{ command: string, waitForCompletion?: boolean }`
- **Response**: `{ success: boolean, output?: string, exitCode?: number }`
- **Security**: Commands go through risk analysis and may require confirmation

#### GET /api/terminals/:terminalId/output
Get recent output from a terminal session

- **Query**: `?lines=50` (optional, default 50)
- **Response**: `{ output: string, lastUpdate: timestamp }`
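The terminal endpoints compose into a simple lifecycle: create a session, execute a command, then tear the session down. Below is a hedged TypeScript sketch of that flow, using only the routes and fields documented above and assuming the response shapes match the docs exactly:

```typescript
// End-to-end terminal session sketch: create -> execute -> delete.
const BASE = "http://localhost:3000";

async function runInTerminal(command: string): Promise<string> {
  // 1. Create a terminal session.
  const created = await fetch(`${BASE}/api/terminals`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ workingDirectory: "/tmp" }),
  }).then((r) => r.json());
  const id: string = created.terminalId;

  try {
    // 2. Execute and wait for completion. High-risk commands may instead
    //    be parked in /api/security/pending until confirmed.
    const result = await fetch(`${BASE}/api/terminals/${id}/execute`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ command, waitForCompletion: true }),
    }).then((r) => r.json());
    return result.output ?? "";
  } finally {
    // 3. Kill the session and free its resources.
    await fetch(`${BASE}/api/terminals/${id}`, { method: "DELETE" });
  }
}

runInTerminal("ls -la").then(console.log).catch(console.error);
```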
### Workspaces (WorkspaceRoutes.js)

#### GET /api/workspaces
List all configured workspaces

- **Response**: `{ workspaces: Array<{id, name, path, branch, status}> }`

#### POST /api/workspaces/switch
Switch to a different workspace

- **Body**: `{ workspaceId: string }`
- **Response**: `{ success: boolean, activeWorkspace: object }`

#### POST /api/workspaces/add
Add a new workspace to the system

- **Body**: `{ name: string, path: string, description?: string }`
- **Response**: `{ success: boolean, workspaceId: string }`

#### DELETE /api/workspaces/:workspaceId
Remove a workspace from the system

- **Response**: `{ success: boolean, message: string }`

#### POST /api/workspaces/:workspaceId/refresh
Refresh workspace information (files, Git status, etc.)

- **Response**: `{ success: boolean, workspace: object }`

#### POST /api/workspaces/refresh-all
Refresh information for all workspaces

- **Response**: `{ success: boolean, workspaces: Array }`

#### GET /api/workspaces/:workspaceId/branches
List Git branches in a workspace

- **Response**: `{ branches: Array<{name, current, lastCommit}> }`

#### POST /api/workspaces/:workspaceId/switch-branch
Switch the Git branch in a workspace

- **Body**: `{ branchName: string }`
- **Response**: `{ success: boolean, currentBranch: string }`

### Unified API (UnifiedAPIRoutes.js)

#### POST /api/unified/message
Send a message to any provider (OpenAI, Anthropic, Ollama, etc.)

- **Body**: `{ provider: string, message: string, model?: string, mode?: string, conversation_id?: string, promptOrders?: object, apiKey?: string }`
- **Response**: `{ response: string, model: string, conversation_id: string }`

#### POST /api/unified/stream
Stream a message response from any provider

- **Body**: same as above
- **Response**: SSE stream (see streaming format)
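As a sketch of how the unified endpoint removes per-provider branching, the TypeScript below wraps `/api/unified/message` in a single helper. The provider ID strings are assumptions based on the providers named above:

```typescript
// Provider-agnostic helper around POST /api/unified/message (sketch).
type Provider = "openai" | "anthropic" | "ollama"; // assumed provider ids

async function unifiedMessage(
  provider: Provider,
  message: string,
  apiKey?: string // needed for hosted providers; unused for Ollama
): Promise<string> {
  const res = await fetch("http://localhost:3000/api/unified/message", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ provider, message, apiKey }),
  });
  if (!res.ok) throw new Error(`Unified API error: ${res.status}`);
  const data = await res.json(); // { response, model, conversation_id }
  return data.response;
}

unifiedMessage("ollama", "Summarize this repo").then(console.log);
```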
### External Provider APIs

#### POST /api/openai/message
Send a message to OpenAI (non-streaming)

- **Body**: `{ message: string, model?: string, mode?: string, conversation_id?: string, promptOrders?: object, apiKey: string }`
- **Response**: `{ response: string }`

#### POST /api/openai/stream
Stream a message response from OpenAI

- **Body**: same as above
- **Response**: SSE stream

#### GET /api/openai/models
List available OpenAI models

- **Query**: `?apiKey=...`
- **Response**: `{ models: Array }`

#### POST /api/anthropic/message
Send a message to Anthropic (non-streaming)

- **Body**: `{ message: string, model?: string, mode?: string, conversation_id?: string, promptOrders?: object, apiKey: string }`
- **Response**: `{ response: string }`

#### POST /api/anthropic/stream
Stream a message response from Anthropic

- **Body**: same as above
- **Response**: SSE stream

#### GET /api/anthropic/models
List available Anthropic models

- **Query**: `?apiKey=...`
- **Response**: `{ models: Array }`

#### POST /api/ollama/message
Send a message to Ollama (non-streaming)

- **Body**: `{ message: string, model?: string, mode?: string, conversation_id?: string, promptOrders?: object }`
- **Response**: `{ response: string }`

#### POST /api/ollama/stream
Stream a message response from Ollama

- **Body**: same as above
- **Response**: SSE stream

#### GET /api/ollama/models
List available Ollama models

- **Response**: `{ models: Array }`

### Prompt Orders & Audit

#### GET /api/prompt-orders
Get all prompt order configurations

- **Response**: `{ orders: Array }`

#### POST /api/prompt-orders
Create or update a prompt order

- **Body**: `{ order: object }`
- **Response**: `{ success: boolean, orderId: string }`

#### DELETE /api/prompt-orders/:orderId
Delete a prompt order

- **Response**: `{ success: boolean }`

#### GET /api/audit/log
Get audit log entries

- **Query**: `?limit=100&offset=0&type=command`
- **Response**: `{ entries: Array<{timestamp, type, details, risk}> }`

### Security Endpoints

#### GET /api/security/audit
Get audit log entries (admin only)

- **Query**: `?limit=100&offset=0&type=command`
- **Response**: `{ entries: Array<{timestamp, type, details, risk}> }`

#### POST /api/security/confirm
Confirm a pending high-risk command

- **Body**: `{ commandId: string, confirmed: boolean }`
- **Response**: `{ success: boolean, executed?: boolean }`

#### GET /api/security/pending
Get pending commands requiring confirmation

- **Response**: `{ pending: Array<{id, command, risk, timestamp}> }`
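Together, `/api/security/pending` and `/api/security/confirm` form the confirmation workflow for high-risk commands. A minimal TypeScript sketch of a reviewer loop, assuming the field names documented above:

```typescript
// Review pending high-risk commands and approve them (sketch).
async function confirmAllPending(): Promise<void> {
  const base = "http://localhost:3000";
  const { pending } = await fetch(`${base}/api/security/pending`)
    .then((r) => r.json());

  for (const cmd of pending) {
    console.log(`[${cmd.risk}] ${cmd.command} (queued ${cmd.timestamp})`);
    // Approve the command; send { confirmed: false } to reject instead.
    const result = await fetch(`${base}/api/security/confirm`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ commandId: cmd.id, confirmed: true }),
    }).then((r) => r.json());
    console.log(`executed: ${result.executed ?? false}`);
  }
}

confirmAllPending().catch(console.error);
```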
### WebSocket & SSE Endpoints

#### ws://localhost:3000/ws
Main WebSocket for real-time chat and events

#### ws://localhost:3000/api/terminals/:terminalId/ws
WebSocket for a real-time terminal session

#### ws://localhost:3000/ws/markdown-stream
WebSocket for markdown streaming

---

## LangChain Streaming Features

### Streaming Capabilities

The J4F Assistant uses **LangChain exclusively** for streaming, providing:

1. **Enhanced Reliability** - Automatic error recovery and connection management
2. **Conversation Memory** - Context preservation across streaming sessions
3. **Tool Integration** - Seamless tool execution during streaming
4. **Advanced Prompting** - Dynamic prompt building and templates

### Streaming Response Format

All streaming endpoints return Server-Sent Events (SSE) in the following format:

```
data: {"type": "text", "content": "Response text chunk"}
data: {"type": "tool_call", "name": "tool_name", "params": {...}}
data: {"type": "tool_result", "name": "tool_name", "result": "..."}
data: {"type": "error", "message": "Error description"}
data: {"type": "done", "metadata": {...}}
```

### Stream Event Types

- **text** - Text content chunks from the AI model
- **tool_call** - Tool execution request with parameters
- **tool_result** - Results from tool execution
- **context_update** - Conversation context changes
- **error** - Error information (non-fatal)
- **done** - Stream completion with metadata

## Security Features

### Command Execution Security

All system commands go through multiple security layers:

1. **Risk Analysis** - Commands are analyzed for potential risks
2. **Confirmation Workflow** - High-risk commands require user confirmation
3. **Audit Logging** - All command executions are logged
4. **Workspace Isolation** - Commands are restricted to configured workspaces

---

## Error Handling

### Error Response Format

All endpoints return consistent error responses:

```json
{
  "success": false,
  "error": {
    "code": "ERROR_CODE",
    "message": "Human-readable error message",
    "details": {...},
    "timestamp": "2025-06-19T10:30:00Z"
  }
}
```

### Common Error Codes

- **INVALID_REQUEST** - Malformed request or missing parameters
- **MODEL_UNAVAILABLE** - AI model is not available or not responding
- **STREAMING_ERROR** - Error during a streaming response
- **TOOL_EXECUTION_ERROR** - Error executing a tool or command
- **WORKSPACE_ERROR** - Workspace-related error (path not found, permission denied)
- **SECURITY_VIOLATION** - Security policy violation
- **RATE_LIMIT_EXCEEDED** - Too many requests in the time window

### HTTP Status Codes

- **200** - Success
- **400** - Bad Request (invalid parameters)
- **401** - Unauthorized (if authentication is enabled)
- **403** - Forbidden (security policy violation)
- **404** - Not Found (conversation, workspace, etc.)
- **429** - Too Many Requests (rate limiting)
- **500** - Internal Server Error
- **503** - Service Unavailable (model or service down)

## Configuration via Environment Variables

### Model Configuration

```bash
AI_MODEL_BASE_URL=http://localhost:11434
AI_MODEL_NAME=llama2
AI_MODEL_API_KEY=
DEFAULT_TEMPERATURE=0.7
MAX_CONVERSATION_HISTORY=50
```

### API Configuration

```bash
PORT=3000
ALLOWED_ORIGINS=http://localhost:3000
API_RATE_LIMIT=100
```

### Feature Flags

```bash
ENABLE_WEB_INTERFACE=true
ENABLE_SCHEDULING=true
ENABLE_FILE_OPERATIONS=true
ENABLE_SYSTEM_COMMANDS=false
```

### Security Configuration

```bash
COMMAND_CONFIRMATION_REQUIRED=true
AUDIT_LOG_RETENTION_DAYS=30
WORKSPACE_RESTRICTION_ENABLED=true
```

## WebSocket Support

### Real-time Communication

In addition to the REST endpoints, the assistant supports WebSocket connections for real-time communication:

- **Connection**: `ws://localhost:3000/ws`
- **Authentication**: Token-based (if enabled)
- **Features**: Real-time streaming, bidirectional tool execution, live terminal sessions

### WebSocket Message Format

```json
{
  "type": "message|tool_call|context_update|error",
  "id": "unique-message-id",
  "data": {...},
  "timestamp": "2025-06-19T10:30:00Z"
}
```

## Rate Limiting

### Default Limits

- **Chat endpoints**: 60 requests per minute per IP
- **Model management**: 10 requests per minute per IP
- **Terminal operations**: 30 requests per minute per IP
- **Workspace operations**: 20 requests per minute per IP

### Headers

Rate limit information is included in response headers:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1671891600
```

## Examples

### Basic Chat Request

```bash
curl -X POST http://localhost:3000/api/message \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, can you help me with Python?"}'
```

### Streaming Chat Request

```bash
curl -X POST http://localhost:3000/api/message/stream \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"message": "Explain machine learning"}'
```

### Tool Execution Request

```bash
curl -X POST http://localhost:3000/api/message \
  -H "Content-Type: application/json" \
  -d '{"message": "List files in the current directory"}'
```

### Model Switching

```bash
curl -X POST http://localhost:3000/api/model/switch \
  -H "Content-Type: application/json" \
  -d '{"modelName": "llama2:13b"}'
```

### Terminal Command Execution

```bash
curl -X POST http://localhost:3000/api/terminals/term-123/execute \
  -H "Content-Type: application/json" \
  -d '{"command": "ls -la", "waitForCompletion": true}'
```

For more detailed information about the underlying architecture, see [Backend Overview](../backend/Overview.md) and [LangChain Streaming](../backend/LangChain-Streaming.md).

---

## Contributing New Endpoints

- Add new route modules in `src/interfaces/routes/` (a sketch follows below)
- Register them in `web-modular.js` or `RouteSetup.js`
- Document new endpoints here (API Reference)
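As a starting point, a new route module might look like the TypeScript sketch below. This assumes Express-style routers; the project's modules are plain `.js` and the actual registration hook in `web-modular.js` / `RouteSetup.js` may differ, so treat the names and wiring as illustrative only:

```typescript
// Hypothetical ExampleRoutes module; names and wiring are illustrative,
// assuming Express-style routers rather than the project's exact API.
import { Router, Request, Response } from "express";

export function createExampleRoutes(): Router {
  const router = Router();

  // GET /api/example/ping - trivial endpoint following the
  // { success, message } response convention used in this reference.
  router.get("/api/example/ping", (_req: Request, res: Response) => {
    res.json({ success: true, message: "pong" });
  });

  return router;
}
```

Whatever the real wiring looks like, keeping one module per concern mirrors the structure listed in the Architecture Overview.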