
Add AI to Your SaaS in 30 Minutes

A complete integration guide with real code examples for Node.js and Python. Choose the right API, integrate it, and keep costs under $20/month.

Updated May 14, 2026. Prices verified against official provider pages.

Adding AI features to your SaaS doesn't require a machine learning team or a six-figure budget. With the right API choice and a simple integration pattern, you can ship AI-powered features to your users in under 30 minutes — for under $20/month.

This guide walks you through the entire process: choosing a provider, writing the integration code, adding cost controls, and optimizing for your budget. Every code example is production-ready.

Step 1: Choose Your AI API (5 minutes)

The first decision is which provider to use. Here's the honest breakdown for SaaS integration in 2026:

| Provider | Best For | Input/1M | Output/1M | Free Tier |
|---|---|---|---|---|
| OpenAI (GPT-4o mini) | General SaaS features | $0.15 | $0.60 | $5 credit |
| Google (Gemini 2.0 Flash) | High-volume, multimodal | $0.10 | $0.40 | Generous free tier |
| Anthropic (Claude 4 Haiku) | Long documents, complex tasks | $0.80 | $4.00 | $5 credit |
| DeepSeek (V4 Flash) | Ultra-low cost | $0.14 | $0.28 | Limited |

Recommendation for most SaaS products: Start with GPT-4o mini ($0.15/$0.60). It's the best balance of quality, cost, and ecosystem support. If you need multimodal (image/video), use Gemini 2.0 Flash ($0.10/$0.40). If you're cost-constrained, use DeepSeek V4 Flash ($0.14/$0.28).
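Picking by blended price is simple arithmetic. A minimal sketch, assuming the prices in the table above and a 70% input / 30% output token mix (the split used in the cost section of this guide); the dictionary keys are illustrative labels, not official API model identifiers:

```python
# Blended cost per 1M tokens per provider, assuming a 70% input /
# 30% output token split. Prices (USD per 1M tokens) from the table above.
PRICES = {
    "gpt-4o-mini":       {"input": 0.15, "output": 0.60},
    "gemini-2.0-flash":  {"input": 0.10, "output": 0.40},
    "claude-4-haiku":    {"input": 0.80, "output": 4.00},
    "deepseek-v4-flash": {"input": 0.14, "output": 0.28},
}

def blended_cost_per_1m(model: str, input_share: float = 0.7) -> float:
    """Weighted average price per 1M tokens for a given input/output mix."""
    p = PRICES[model]
    return input_share * p["input"] + (1 - input_share) * p["output"]

def cheapest(input_share: float = 0.7) -> str:
    """Return the model with the lowest blended price for this mix."""
    return min(PRICES, key=lambda m: blended_cost_per_1m(m, input_share))
```

Note that the ranking depends on your mix: the more output-heavy your feature, the more DeepSeek's cheap output tokens matter.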

Step 2: Node.js Integration (10 minutes)

Here's a production-ready integration for a typical SaaS feature: an AI-powered search or chat feature.

Install the SDK

```bash
# OpenAI
npm install openai

# Or Anthropic
npm install @anthropic-ai/sdk

# Or Google
npm install @google/generative-ai
```

Basic Integration (OpenAI)

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function aiFeature(userInput) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'You are a helpful assistant for [your product].' },
      { role: 'user', content: userInput },
    ],
    max_tokens: 500,
    temperature: 0.7,
  });
  return response.choices[0].message.content;
}
```

With Cost Controls (Important!)

```javascript
async function aiFeatureSafe(userInput, userId) {
  // 1. Check the user's monthly usage
  // (getUserUsage/trackUsage are your persistence layer, e.g. Redis or SQL)
  const usage = await getUserUsage(userId);
  if (usage.monthlyTokens > 1_000_000) {
    return "You've reached your monthly AI limit. Upgrade for more.";
  }

  // 2. Set a token budget per request
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: userInput },
    ],
    max_tokens: 300, // Cap output to control costs
    temperature: 0.7,
  });

  // 3. Track usage
  const tokensUsed = response.usage.total_tokens;
  await trackUsage(userId, tokensUsed);

  return response.choices[0].message.content;
}
```

Step 3: Python Integration (10 minutes)

Install the SDK

```bash
# OpenAI
pip install openai

# Or Anthropic
pip install anthropic

# Or Google
pip install google-generativeai
```

Basic Integration (OpenAI)

```python
from openai import OpenAI

client = OpenAI()

def ai_feature(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input},
        ],
        max_tokens=500,
        temperature=0.7,
    )
    return response.choices[0].message.content
```

With Cost Controls

```python
import tiktoken

def count_tokens(text: str) -> int:
    enc = tiktoken.encoding_for_model("gpt-4o-mini")
    return len(enc.encode(text))

def ai_feature_safe(user_input: str, user_id: str) -> str:
    # Check usage (get_user_usage/track_usage are your persistence layer)
    usage = get_user_usage(user_id)
    if usage["monthly_tokens"] > 1_000_000:
        return "Monthly AI limit reached. Upgrade for more."

    # Reject oversized inputs before paying for them
    input_tokens = count_tokens(user_input)
    if input_tokens > 2000:
        return "Input too long. Please shorten your message."

    # `client` comes from the basic integration above
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input},
        ],
        max_tokens=300,
        temperature=0.7,
    )

    # Track usage
    total_tokens = response.usage.total_tokens
    track_usage(user_id, total_tokens)
    return response.choices[0].message.content
```
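The cost-control examples above call get_user_usage and track_usage without defining them. Here is a minimal in-memory sketch of those helpers; a real deployment would persist the counters per billing month in Redis or your database:

```python
from collections import defaultdict

# In-memory usage store; a real app would key this by (user_id, month)
# and persist it in Redis or SQL so limits survive restarts.
_usage: dict = defaultdict(int)

def get_user_usage(user_id: str) -> dict:
    """Return the user's token consumption for the current month."""
    return {"monthly_tokens": _usage[user_id]}

def track_usage(user_id: str, tokens: int) -> None:
    """Add a request's token count to the user's monthly total."""
    _usage[user_id] += tokens
```

Remember to reset (or re-key) the counters at the start of each billing period, or the limit becomes a lifetime cap.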

Step 4: Real Cost Breakdown

Here's what your AI feature actually costs at different usage levels. All calculations use GPT-4o mini pricing ($0.15/1M input, $0.60/1M output) and assume a 70% input / 30% output token split at roughly 560 tokens per request.

Monthly cost (GPT-4o mini)

| Volume | Cost |
|---|---|
| 100 requests/day (3K/mo) | $0.48/mo |
| 1K requests/day (30K/mo) | $4.80/mo |
| 10K requests/day (300K/mo) | $48.00/mo |
| 100K requests/day (3M/mo) | $480/mo |

For under $20/month, you can handle roughly 120K requests per month (about 4K per day) at the $4.80-per-30K rate above. That's enough for most early-stage SaaS products.
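The table above is simple arithmetic. This sketch reproduces it, assuming roughly 560 tokens per request (the value implied by the figures) and the stated 70/30 input/output split:

```python
def monthly_cost(requests_per_day: int,
                 tokens_per_request: int = 560,
                 input_price: float = 0.15,   # $ per 1M input tokens
                 output_price: float = 0.60,  # $ per 1M output tokens
                 input_share: float = 0.7) -> float:
    """Estimated monthly spend in USD for a given request volume."""
    monthly_tokens = requests_per_day * 30 * tokens_per_request
    blended = input_share * input_price + (1 - input_share) * output_price
    return monthly_tokens * blended / 1_000_000
```

Swap in another provider's prices to model alternatives; only the two price arguments change.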

Cost comparison at 30K requests/month (same per-request size and 70/30 split)

| Model | Monthly cost |
|---|---|
| GPT-4o mini | $4.80 |
| Gemini 2.0 Flash | $3.19 |
| DeepSeek V4 Flash | $3.06 |
| Claude 4 Haiku | $29.57 |
| GPT-4o | $79.80 |

Step 5: Cost Optimization Tips

These strategies can cut your AI costs by 40-70%:

1. Use the cheapest model that works

Most SaaS features (search suggestions, content generation, FAQ bots) work perfectly on GPT-4o mini ($0.15/$0.60). Don't use GPT-4o ($2.50/$10) unless you need the extra quality.

2. Cap max_tokens aggressively

Output tokens are 4x more expensive than input tokens. If your feature only needs a short response, set max_tokens: 200 instead of 500. That's a 60% cost reduction on output.
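A quick worked example of that saving, using the $0.60/1M output price from above:

```python
def output_cost(tokens: int, requests: int, price_per_1m: float = 0.60) -> float:
    """Worst-case output-token spend in USD for a batch of requests."""
    return tokens * requests * price_per_1m / 1_000_000

# Capping max_tokens at 200 instead of 500 cuts the worst-case
# output bill by 60%. Over 30,000 requests:
full = output_cost(500, 30_000)    # max_tokens: 500
capped = output_cost(200, 30_000)  # max_tokens: 200
```

Note this caps the worst case; actual responses shorter than the cap cost less either way.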

3. Cache repeated queries

If users ask similar questions, cache the responses. A simple Redis cache with a 1-hour TTL can reduce API calls by 30-50% for common queries.
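Redis is the right choice in production; as an illustration, here is a minimal in-process TTL cache with the same shape. The `ai_call` parameter is a stand-in for whichever AI function you wrap (e.g. `ai_feature` from Step 3):

```python
import hashlib
import time

# Cache keyed by a hash of the normalized prompt. A real deployment
# would use a shared store like Redis so all processes benefit.
_cache: dict = {}
TTL_SECONDS = 3600  # 1-hour TTL

def cached_ai_feature(user_input: str, ai_call) -> str:
    """Return a recent cached response for this prompt, or call the API."""
    key = hashlib.sha256(user_input.strip().lower().encode()).hexdigest()
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]
    result = ai_call(user_input)
    _cache[key] = (time.time(), result)
    return result
```

Normalizing the key (strip + lowercase) makes near-duplicate queries like "Hi" and " hi " share one entry; for user-specific answers, include the user ID in the key instead.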

4. Use system prompts wisely

Long system prompts consume input tokens on every request. Keep system prompts under 200 tokens, and send only the context each request actually needs; for example, inject dynamic values such as today's date at request time instead of enumerating every case in the prompt.

5. Batch where possible

If your feature processes multiple items (e.g., categorizing support tickets), batch them into a single API call instead of making separate calls. OpenAI's Batch API offers 50% off.
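One way to batch is to pack the items into a single prompt and ask for one answer per line. A sketch, with a hypothetical `batch_prompt` helper for ticket categorization:

```python
def batch_prompt(tickets: list, categories: list) -> str:
    """Combine many tickets into one classification request,
    instead of making one API call per ticket."""
    lines = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(tickets))
    return (
        f"Classify each support ticket as one of: {', '.join(categories)}.\n"
        f"Reply with one category per line, in order.\n\n{lines}"
    )
```

You pay the instruction tokens once per batch rather than once per ticket, on top of any Batch API discount.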

Common SaaS Features and Their Costs

| Feature | Tokens/Request | Cost per 1K Requests | Monthly @ 10K req |
|---|---|---|---|
| Search suggestions | ~200 | $0.06 | $0.57 |
| FAQ bot | ~500 | $0.14 | $1.43 |
| Content generation | ~1,500 | $0.43 | $4.28 |
| Data extraction | ~2,000 | $0.57 | $5.70 |
| Code assistant | ~3,000 | $0.86 | $8.55 |
| Document analysis | ~5,000 | $1.43 | $14.25 |

All estimates use GPT-4o mini pricing with the same 70% input / 30% output split as the rest of this guide. For cheaper alternatives, see our cost calculator.

What to Do Next

  1. Pick your feature: Start with one AI feature (search suggestions or FAQ bot are easiest)
  2. Choose a provider: GPT-4o mini for most cases, Gemini Flash for multimodal, DeepSeek for ultra-low cost
  3. Integrate: Copy the code examples above, adjust the system prompt for your use case
  4. Add cost controls: Set token limits, track usage, add monthly caps per user
  5. Monitor: Check your API dashboard weekly for the first month
  6. Optimize: Once you have usage data, optimize model choice and caching

Calculate your exact costs

Use our free calculator to model your specific usage and find the cheapest provider.

Related Reading