# Running with Ollama
Datapizza AI supports running local models through Ollama, giving you complete control over your AI infrastructure while maintaining privacy and reducing costs.
## Prerequisites
Before getting started, you'll need to have Ollama installed and running on your system.
### Installing Ollama
- Download and install Ollama:
  - Visit ollama.ai and download the installer for your operating system
  - Follow the installation instructions for your platform
- Start the Ollama service (see the commands below)
- Pull a model (see the commands below)
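Starting the service and pulling a model are done with the Ollama CLI. The model name here is only an example; use whichever model you plan to reference in the client configuration below:

```bash
# Start the Ollama server (skip this if Ollama already runs as a desktop/system service)
ollama serve

# Download a model for local use; gemma2:2b matches the Basic Usage example below
ollama pull gemma2:2b
```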
## Installation
Install the Datapizza AI OpenAI-like client:
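A minimal sketch of the install step, assuming the distribution name mirrors the import path used in the example below (check the project's package index listing for the exact name):

```bash
pip install datapizza-ai-clients-openai-like
```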
## Basic Usage
Here's a simple example of how to use Datapizza AI with Ollama:
```python
from datapizza.clients.openai_like import OpenAILikeClient

# Create a client that points at the local Ollama server
client = OpenAILikeClient(
    api_key="",  # Ollama doesn't require an API key
    model="gemma2:2b",  # Use any model you've pulled with Ollama
    system_prompt="You are a helpful assistant.",
    base_url="http://localhost:11434/v1",  # Default Ollama API endpoint
)

# Simple query
response = client.invoke("What is the capital of France?")
print(response.content)
```
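If the request fails, check that the Ollama server is running and that the model named in `model` has already been pulled; otherwise the call will fail with a connection error or a missing-model error from the server.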