- 🚀 **Asynchronous API**: Built with Tokio for high-performance async I/O
- 🔄 **Streaming Support**: Real-time streaming of AI responses
- 🔒 **Retry Logic**: Automatic retries for failed requests with exponential backoff
- 🚦 **Rate Limiting**: Built-in throttling to respect API limits
- 💾 **Caching**: Optional response caching to improve performance
- 📁 **File Handling**: Support for sending files to the API
- 🔀 **Model Routing**: Support for specifying different AI models
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
neura-ai-router = "0.1.0"
```
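The examples below also rely on `tokio` (for the `#[tokio::main]` runtime) and `futures` (for `StreamExt` in the streaming example). If they are not already in your project, dependencies along these lines should work; the version numbers are illustrative, not pinned by this SDK:

```toml
[dependencies]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
futures = "0.3"
```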
## Basic Usage

### Initialize the Client

```rust
use neura_ai_router::{NeuraClient, CompletionsRequest};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Create a client
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    // Create a completion
    let request = CompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Hello, how are you?"}]"#)
        .build()?;

    let response = client.create_completion(request).await?;
    println!("Response: {}", response.response);

    Ok(())
}
```
### Model Routing

```rust
use neura_ai_router::{NeuraClient, RouterCompletionsRequest};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    // Create a completion with a specific model
    let request = RouterCompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Hello, how are you?"}]"#)
        .model("gpt-4")
        .build()?;

    let response = client.create_router_completion(request).await?;
    println!("Response: {}", response.response);

    Ok(())
}
```
### Streaming Responses

```rust
use futures::StreamExt;
use neura_ai_router::{NeuraClient, CompletionsRequest};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    let request = CompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Tell me a story"}]"#)
        .stream(true)
        .build()?;

    let mut stream = client.create_completion_stream(request).await?;
    let mut full_response = String::new();

    while let Some(chunk) = stream.next().await {
        println!("Received chunk: {}", chunk.chunk);
        full_response.push_str(&chunk.chunk);
    }

    println!("Full response: {}", full_response);

    Ok(())
}
```
## Advanced Usage

### Sending File Data

```rust
use neura_ai_router::{NeuraClient, CompletionsRequest};
use std::error::Error;
use std::fs;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    // Read file data
    let file_data = fs::read("path/to/file")?;

    // Create a completion with file data
    let request = CompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Analyze this file"}]"#)
        .file_data_from_bytes(&file_data)
        .build()?;

    let response = client.create_completion(request).await?;
    println!("Response: {}", response.response);

    Ok(())
}
```
### Configuring Caching

```rust
use neura_ai_router::NeuraClient;
use std::error::Error;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .enable_cache(true)
        .cache_ttl(Duration::from_secs(60)) // 1 minute
        .build()?;

    // Use the client...

    // Clear the cache when needed
    client.clear_cache();

    Ok(())
}
```
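As a rough sketch of what caching buys you (this assumes cached responses are keyed on the request contents, which is not documented here), sending the same request twice inside the TTL should only hit the API once:

```rust
use neura_ai_router::{NeuraClient, CompletionsRequest};
use std::error::Error;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .enable_cache(true)
        .cache_ttl(Duration::from_secs(60))
        .build()?;

    // Send an identical request twice; within the TTL the second
    // response should be served from the cache, not the network.
    for attempt in 1..=2 {
        let request = CompletionsRequest::builder()
            .messages(r#"[{"role":"user","content":"Hello"}]"#)
            .build()?;
        let response = client.create_completion(request).await?;
        println!("Attempt {}: {}", attempt, response.response);
    }

    Ok(())
}
```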
### Configuring Retries and Rate Limiting

```rust
use neura_ai_router::NeuraClient;
use std::error::Error;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .max_retries(5)
        .retry_delay(Duration::from_millis(500))
        .requests_per_minute(120)
        .build()?;

    // Use the client...

    Ok(())
}
```
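For intuition, assuming the retry delay doubles on each attempt (the usual exponential backoff scheme; the SDK's exact schedule is not documented here), the configuration above would wait roughly as follows between attempts:

```rust
use std::time::Duration;

// Illustrative only: the delays a doubling (exponential) backoff
// schedule produces from a base delay. The SDK's exact schedule
// is an assumption here.
fn backoff_delays(base: Duration, max_retries: u32) -> Vec<Duration> {
    (0..max_retries).map(|attempt| base * 2u32.pow(attempt)).collect()
}

fn main() {
    // With retry_delay = 500 ms and max_retries = 5:
    // 500ms, 1s, 2s, 4s, 8s
    for (i, delay) in backoff_delays(Duration::from_millis(500), 5).iter().enumerate() {
        println!("retry {} after {:?}", i + 1, delay);
    }
}
```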
### Using Reasoning Format

```rust
use neura_ai_router::{NeuraClient, CompletionsRequest};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    // Create a completion with reasoning format
    let request = CompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Solve this problem step by step"}]"#)
        .reasoning_format("chain-of-thought")
        .build()?;

    let response = client.create_completion(request).await?;
    println!("Response: {}", response.response);

    Ok(())
}
```
### Session and User Management

```rust
use neura_ai_router::{NeuraClient, CompletionsRequest};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    // Create a completion with session and user tracking
    let request = CompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Remember this information"}]"#)
        .session_id("session-123")
        .user_id("user-456")
        .build()?;

    let response = client.create_completion(request).await?;
    println!("Response: {}", response.response);

    Ok(())
}
```
## Error Handling

The SDK provides detailed error information for both API errors and client-side errors.
```rust
use neura_ai_router::{NeuraClient, CompletionsRequest, Error};
use std::error::Error as StdError;

#[tokio::main]
async fn main() -> Result<(), Box<dyn StdError>> {
    let client = NeuraClient::builder()
        .api_key("your-api-key")
        .base_url("https://api.example.com")
        .build()?;

    let request = CompletionsRequest::builder()
        .messages(r#"[{"role":"user","content":"Hello"}]"#)
        .build()?;

    match client.create_completion(request).await {
        Ok(response) => {
            println!("Success: {}", response.response);
        }
        Err(e) => match e {
            Error::Api(msg) => println!("API error: {}", msg),
            Error::HttpClient(e) => println!("HTTP error: {}", e),
            Error::Json(e) => println!("JSON error: {}", e),
            _ => println!("Other error: {}", e),
        },
    }

    Ok(())
}
```
## Examples

Check out the `examples/` directory for more sample code:

- `examples/basic.rs` - Basic completion example
- `examples/streaming.rs` - Example of streaming responses
- `examples/model_routing.rs` - Working with different models
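Assuming the standard Cargo layout, each of these can be run with `cargo run --example <name>`, e.g. `cargo run --example basic`.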
## License

Neura SDK is licensed under the MIT License. By contributing, you agree that your contributions will be licensed under the same terms.
Copyright (c) 2025 Neura AI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
For more information: https://opensource.org/licenses/MIT