API
Welcome to the Neura AI API documentation! This guide provides detailed information about how to interact with the Neura AI endpoints to integrate AI-powered capabilities into your applications.
Table of Contents
1. JSON Endpoint
2. Omni Multi-Part Endpoint
3. Research and Discovery Endpoint
4. Endpoints for History and Logging
5. NLP Specific Endpoint
6. Proxy Interaction Endpoint
7. Error Handling
   - Common Error Codes
1. JSON Endpoint
Overview
The JSON Endpoint offers a comprehensive way to interact with Neura AI's capabilities. It accepts a JSON payload containing parameters for authentication, messages, file data, user ID, and session ID, allowing for versatile and context-aware interactions.
HTTP Method & Path
HTTP Method: POST
Path: `/v1/chat/completions`
Request Body Format
The request body should be in JSON format, containing the following parameters:
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `auth.token` | String | Yes | Bearer token for authentication. |
| `messages` | String | Yes | The message array or prompt for the AI. |
| `file_data` | `Vec<u8>` | No | Byte array representing file data in various formats (e.g., PDF, DOCX, JPG). |
| `user_id` | String | No | User ID for personalization or tracking. |
| `session_id` | String | No | UUIDv4 session ID for maintaining context across multiple interactions. |
| `stream` | String | No | Whether to stream the response in real time. |
| `reasoning_format` | String | No | Available for O3 and DeepSeek R1 models. |
Example Request
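A minimal sketch in Python with the `requests` library, using only the parameters documented above. The host below is a placeholder, and the exact JSON encoding of `file_data` is an assumption:

```python
import requests

BASE_URL = "https://YOUR-NEURA-HOST"  # placeholder; substitute your actual Neura AI host

payload = {
    "auth": {"token": "YOUR_BEARER_TOKEN"},                  # required: Bearer token
    "messages": "Summarize the attached notes in three bullet points.",  # required: prompt or message array
    "user_id": "user-123",                                   # optional: personalization/tracking
    "session_id": "123e4567-e89b-42d3-a456-426614174000",    # optional: UUIDv4 session ID
    # "file_data": list(open("notes.pdf", "rb").read()),     # optional byte array; exact encoding is an assumption
}

resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```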
Example Response
The response will depend on the request and the AI model's processing. Here’s a general structure of the response:
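The exact field names are not fixed by this guide; the sketch below assumes an OpenAI-style chat completion object purely for illustration:

```python
# Hypothetical response shape (every field name here is an assumption, not the documented contract):
example_response = {
    "id": "chatcmpl-abc123",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "..."}}
    ],
    "usage": {"prompt_tokens": 42, "completion_tokens": 128},
}
```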
Authentication
All requests must include a valid Bearer token in the auth.token field. The token should be provided in the JSON payload as shown in the example above.
Additional Considerations
- File Formats: Supported file formats include PDF, DOCX, TXT, JPG, and others. Ensure that file_data is provided as a byte array or file reference.
- Session Management: Use session_id to maintain context across multiple interactions, enhancing the conversational experience.
- User ID: Include user_id for personalization or tracking purposes.
2. Omni Multi-Part Endpoint
Overview
The Omni Multi-Part Endpoint is designed to handle multi-modal interactions, supporting both text and file inputs through a single endpoint. It provides a flexible way to integrate AI capabilities into your applications, allowing for a combination of text prompts and file uploads.
HTTP Method & Path
HTTP Method: POST
Path: `/v1/chat/completions/multipart`
Request Format
The request should be sent as `multipart/form-data`, allowing for both text fields and file uploads. The supported fields are:
- `messages`: The text input or prompt for the AI.
- `session_id`: UUIDv4 for maintaining context across multiple interactions.
- `user_id`: User ID for personalization or tracking.
- `file`: File data for processing (e.g., PDF, DOCX, JPG).
- `stream`: Boolean indicating whether to stream the response in real time.
- `reasoning_format`: Optional parameter for models that support different reasoning formats.
Example Request
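A minimal sketch in Python with the `requests` library, using the fields listed above. The host is a placeholder, and the Bearer-token header is an assumption, since authentication for this endpoint is not listed among its fields:

```python
import requests

BASE_URL = "https://YOUR-NEURA-HOST"  # placeholder; substitute your actual Neura AI host
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}  # assumed auth mechanism

fields = {
    "messages": "What are the key points of this file?",
    "session_id": "123e4567-e89b-42d3-a456-426614174000",  # optional UUIDv4
    "user_id": "user-123",                                  # optional
    "stream": "false",                                      # optional; sent as a form string
}

with open("report.pdf", "rb") as fh:
    files = {"file": ("report.pdf", fh, "application/pdf")}
    resp = requests.post(
        f"{BASE_URL}/v1/chat/completions/multipart",
        data=fields,
        files=files,
        headers=headers,
        timeout=120,
    )

resp.raise_for_status()
print(resp.json())
```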
Example Response
The response will depend on the request and the AI model's processing. Here’s a general structure of the response:
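For a non-streamed request, the body can be read with `resp.json()` as in the sketch above and is expected to follow the same general structure as the JSON endpoint's response.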
Streamed Response Format
If streaming is enabled, the response will be sent in chunks:
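A minimal client-side sketch for consuming a streamed response with `requests`; the newline-delimited chunk framing and the auth header are assumptions, and the host is a placeholder:

```python
import requests

BASE_URL = "https://YOUR-NEURA-HOST"  # placeholder

with requests.post(
    f"{BASE_URL}/v1/chat/completions/multipart",
    data={"messages": "Stream a short summary.", "stream": "true"},
    headers={"Authorization": "Bearer YOUR_BEARER_TOKEN"},  # assumed auth mechanism
    stream=True,   # tell requests not to buffer the whole body
    timeout=120,
) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_lines(decode_unicode=True):
        if chunk:          # skip keep-alive blank lines
            print(chunk)   # the payload format of each chunk depends on the API
```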
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `messages` | String | Yes | The message array or prompt for the AI. |
| `session_id` | String | No | UUIDv4 session ID for maintaining context across multiple interactions. |
| `user_id` | String | No | User ID for personalization or tracking. |
| `file` | File | No | File data for processing (e.g., PDF, DOCX, JPG). |
| `stream` | Boolean | No | Whether to stream the response in real time. |
| `reasoning_format` | String | No | Available for models that support different reasoning formats. |
3. Research and Discovery Endpoint
Overview
The Research and Discovery Endpoint is designed to fetch relevant information from the web based on user queries. It utilizes the Gemini API to provide accurate and up-to-date results.
HTTP Method & Path
HTTP Method: POST
Path: `/v1/research/web`
Request Format
The request body should be in JSON format, containing the following parameters:
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `query` | String | Yes | The search query to be processed. |
| `num_results` | Number | No | The number of results to return. |
| `session_id` | String | No | UUIDv4 session ID for maintaining context across multiple interactions. |
| `user_id` | String | No | User ID for personalization or tracking. |
Example Request
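A minimal sketch in Python with the `requests` library, using only the parameters documented above. The host is a placeholder and the Bearer-token header is an assumption:

```python
import requests

BASE_URL = "https://YOUR-NEURA-HOST"  # placeholder

payload = {
    "query": "latest developments in retrieval-augmented generation",
    "num_results": 5,                                        # optional
    "session_id": "123e4567-e89b-42d3-a456-426614174000",    # optional UUIDv4
    "user_id": "user-123",                                   # optional
}

resp = requests.post(
    f"{BASE_URL}/v1/research/web",
    json=payload,
    headers={"Authorization": "Bearer YOUR_BEARER_TOKEN"},   # assumed auth mechanism
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```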
Example Response
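Purely as an illustration, a list of result entries might look like the sketch below; every field name here is an assumption:

```python
# Hypothetical response shape (all field names are assumptions):
example_response = {
    "results": [
        {"title": "Example result", "url": "https://example.com", "snippet": "..."},
    ],
    "session_id": "123e4567-e89b-42d3-a456-426614174000",
}
```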
4. Endpoints for History and Logging
These endpoints allow you to fetch chat history, retrieve user IDs, fetch logs, and clear logs; a short usage sketch follows the list below.
Fetch Chat History
HTTP Method: GET
Path: `/v1/history/fetch`
Get User IDs
HTTP Method: GET
Path: `/v1/history/users/ids`
Fetch Logs
HTTP Method: GET
Path: `/v1/logs/fetch`
Clear Logs
HTTP Method: POST
Path: `/v1/logs/clear`
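A minimal sketch of calling two of these endpoints in Python; the host is a placeholder, query parameters are not documented above, and the Bearer-token header is an assumption:

```python
import requests

BASE_URL = "https://YOUR-NEURA-HOST"  # placeholder
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}  # assumed auth mechanism

# Fetch chat history
history = requests.get(f"{BASE_URL}/v1/history/fetch", headers=headers, timeout=30)
history.raise_for_status()
print(history.json())

# Clear logs (POST; no request body is documented for this endpoint)
cleared = requests.post(f"{BASE_URL}/v1/logs/clear", headers=headers, timeout=30)
cleared.raise_for_status()
```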
5. NLP Specific Endpoint
Lexicon Access
HTTP Method: POST
Path: `/v1/lexicon`
This endpoint provides access to the Lexicon NLP capabilities for advanced text processing.
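The request schema for this endpoint is not detailed above, so the sketch below uses a hypothetical `text` field purely for illustration; the host and auth header are placeholders as well:

```python
import requests

BASE_URL = "https://YOUR-NEURA-HOST"  # placeholder

# NOTE: the "text" field is hypothetical and stands in for whatever schema
# the Lexicon endpoint actually expects.
resp = requests.post(
    f"{BASE_URL}/v1/lexicon",
    json={"text": "Neura AI processes natural language."},
    headers={"Authorization": "Bearer YOUR_BEARER_TOKEN"},  # assumed auth mechanism
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```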
6. Proxy Interaction Endpoint
Overview
This endpoint is used for load-balanced request handling and follows the same format as the main interaction endpoint.
HTTP Method: POST
Path: `/v1/proxy/chat/completions`
7. Error Handling
All errors follow this format:
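As an illustration only, an error object carrying one of the codes listed below might look like this sketch; the envelope and field names are assumptions:

```python
# Hypothetical error shape (field names are assumptions, not the documented contract):
example_error = {
    "error": {
        "code": "VALIDATION_ERROR",   # one of the common error codes listed below
        "message": "Invalid input: 'messages' is required.",
    }
}
```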
Common Error Codes
- `PROCESSING_ERROR`: General processing failure.
- `TIMEOUT_ERROR`: Request exceeded the time limit.
- `VALIDATION_ERROR`: Invalid input.
- `AUTH_ERROR`: Authentication failure.
This documentation is regularly updated to reflect new features and improvements. For any questions or feedback, please get in touch with us at info@meetneura.ai.