Run local LLMs inside your browser

Privacy-focused, personalized AI with zero infrastructure costs.

npm i @browserai/browserai

Click the button below to start a conversation with AI models running locally in your browser.

Integration
import { BrowserAI } from '@browserai/browserai'

const browserAI = new BrowserAI()
await browserAI.loadModel('llama-3.2-1b-instruct')

const response = await browserAI.generateText('Hello, how are you?')

How It Works

BrowserAI leverages WebAssembly and WebGPU to run increasingly efficient small language models directly in your browser. Integration takes just a few lines of code - no external APIs or servers required.
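To make the WebGPU/WebAssembly split concrete, here is a minimal sketch of the fallback idea: prefer the hardware-accelerated WebGPU path when the browser exposes it, and fall back to a portable WASM path otherwise. The `pickBackend` helper is illustrative only, not part of the BrowserAI API (the library handles backend selection internally).

```javascript
// Illustrative sketch, NOT part of the BrowserAI API: choose an
// inference backend from what the environment exposes.
function pickBackend(env) {
  // env.gpu mirrors the presence of navigator.gpu (WebGPU);
  // env.wasm mirrors WebAssembly support.
  if (env.gpu) return 'webgpu' // GPU-accelerated path
  if (env.wasm) return 'wasm'  // portable CPU fallback
  return 'unsupported'
}

// In a real page you would probe the browser like this:
// pickBackend({
//   gpu: !!navigator.gpu,
//   wasm: typeof WebAssembly !== 'undefined',
// })
```

Because nearly every modern browser supports WebAssembly, models keep working even where WebGPU is unavailable; WebGPU simply makes them faster.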

1

$0 operational cost

No API fees or cloud infrastructure expenses

2

100% privacy

Local processing with no data exposed to third parties

3

Effortless Integration

No servers, API keys, rate limits, or infrastructure to maintain

Ready to get started?

Experience the power of AI running directly in your browser. Try BrowserAI now or explore our documentation to learn more.