Quickstart

Aiberm provides a unified API that gives you access to hundreds of AI models through a single endpoint. Get started in minutes with our simple integration process.

Tip

Looking for information about free models and rate limits? Please see the FAQ.

Using the OpenAI SDK

The easiest way to get started is the official OpenAI SDK: just change the base_url to point at Aiberm, and everything else works as usual.

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key-here",
    base_url="https://aiberm.com/v1"
)

response = client.chat.completions.create(
    model="google/gemini-2.5-flash",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Using the Aiberm API directly

You can also call the API directly with plain HTTP requests.

Info

Use our interactive Request Builder to generate API requests.

import requests

response = requests.post(
    "https://aiberm.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "google/gemini-2.5-flash",
        "messages": [
            {"role": "user", "content": "Hello!"}
        ]
    }
)

print(response.json())
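Rather than printing the raw JSON, you will usually want just the reply text. A minimal sketch, assuming the response body follows the standard OpenAI chat-completions shape (the extract_reply helper and the trimmed sample body are illustrative, not part of any SDK):

```python
import json

def extract_reply(body: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat completion body."""
    return body["choices"][0]["message"]["content"]

# A trimmed sample body in the shape the endpoint is assumed to return:
sample = json.loads(
    '{"choices": [{"message": {"role": "assistant", "content": "Hello there!"}}]}'
)

print(extract_reply(sample))  # prints: Hello there!
```

In the example above you would call extract_reply(response.json()) after checking that the request succeeded.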

Next Steps

Our API also supports advanced features such as streaming and batch processing; see the detailed documentation for the Chat Completions API to learn more.
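Streaming, for example, delivers the reply token by token. A sketch of handling a streamed response, assuming Aiberm's streaming follows the OpenAI-compatible server-sent-events format (parse_sse_line is an illustrative helper for the raw-HTTP case, not part of any SDK):

```python
import json

def parse_sse_line(line: str):
    """Return the text delta from one OpenAI-style SSE line, or None."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

# With the OpenAI SDK the loop is simpler -- pass stream=True and iterate:
#
#   stream = client.chat.completions.create(
#       model="google/gemini-2.5-flash",
#       messages=[{"role": "user", "content": "Hello!"}],
#       stream=True,
#   )
#   for chunk in stream:
#       delta = chunk.choices[0].delta.content
#       if delta:
#           print(delta, end="", flush=True)

print(parse_sse_line('data: {"choices": [{"delta": {"content": "Hi"}}]}'))  # prints: Hi
```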

Supported Models

We support various AI models, including but not limited to:

OpenAI Models

  • GPT-4
  • GPT-4 Turbo
  • GPT-3.5 Turbo

Anthropic Models

  • Claude-3 Opus
  • Claude-3 Sonnet
  • Claude-3 Haiku

Other Models

  • Gemini Pro
  • Llama 2
  • More…