
ChatGPT API Tutorial for Beginners: Your First AI Application

DeviDevs Team
7 min read
#chatgpt #openai #api #tutorial #beginner

Want to build with ChatGPT but don't know where to start? This beginner-friendly guide takes you from zero to your first working AI application.

What You'll Learn

Journey Map:
1. Get API Key         → 5 minutes
2. First API Call      → 10 minutes
3. Build Simple Chat   → 20 minutes
4. Add Features        → 30 minutes
   ─────────────────────────────
   Total: ~1 hour to working app

Step 1: Get Your API Key

Create an OpenAI Account:

  1. Go to https://platform.openai.com
  2. Sign up with email or Google
  3. Verify your email

Get API Key:

  1. Click your profile → "API keys"
  2. Click "Create new secret key"
  3. Give it a name like "my-first-app"
  4. Copy the key immediately (you won't see it again!)
# Your key looks like this:
sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 
# Store it safely - NEVER share or commit to git!

Add Payment Method:

The free tier has strict rate and usage limits. Add a payment method for better access:

  1. Go to Settings → Billing
  2. Add credit card
  3. Set usage limits to avoid surprises

Step 2: Set Up Your Environment

Option A: Python (Recommended for beginners)

# Install Python if needed: https://python.org
 
# Create project folder
mkdir my-first-ai-app
cd my-first-ai-app
 
# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
 
# Install OpenAI library
pip install openai python-dotenv

Create .env file for API key:

# .env file (DON'T commit this to git!)
OPENAI_API_KEY=sk-your-key-here

Option B: Node.js

mkdir my-first-ai-app
cd my-first-ai-app
npm init -y
npm install openai dotenv

Create .env file for API key:

# .env file (DON'T commit this to git!)
OPENAI_API_KEY=sk-your-key-here

Step 3: Your First API Call

Python - Hello World:

# hello_ai.py
import os
from openai import OpenAI
from dotenv import load_dotenv
 
# Load API key from .env file
load_dotenv()
 
# Create client
client = OpenAI()
 
# Make your first request!
response = client.chat.completions.create(
    model="gpt-4o-mini",  # Cheaper model for learning
    messages=[
        {"role": "user", "content": "Say hello and tell me a fun fact!"}
    ]
)
 
# Print the response
print(response.choices[0].message.content)

Run it:

python hello_ai.py
# Output: Hello! Here's a fun fact: Honey never spoils...

JavaScript - Hello World:

// hello_ai.js
require('dotenv').config();
const OpenAI = require('openai');
 
const client = new OpenAI();
 
async function main() {
    const response = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages: [
            { role: "user", content: "Say hello and tell me a fun fact!" }
        ]
    });
 
    console.log(response.choices[0].message.content);
}
 
main();

Step 4: Understanding the API

The Messages Array:

messages = [
    # System message: Sets AI behavior
    {"role": "system", "content": "You are a helpful coding tutor."},
 
    # User messages: What you ask
    {"role": "user", "content": "What is a variable?"},
 
    # Assistant messages: AI's previous responses
    {"role": "assistant", "content": "A variable is like a labeled box..."},
 
    # Another user message
    {"role": "user", "content": "Can you give me an example?"}
]
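
One practical detail: the API is stateless, so every call resends the full messages list, and a long conversation steadily grows your token usage. A minimal sketch of capping history (the trim_history helper is my own, not part of the OpenAI library):

```python
def trim_history(messages, max_turns=10):
    """Keep the system message plus only the most recent messages."""
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-max_turns:]
    return system + recent
```

Call it on your history before each request: the bot keeps its persona from the system message but forgets older turns.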

Key Parameters:

response = client.chat.completions.create(
    model="gpt-4o-mini",     # Which AI model to use
    messages=messages,        # Conversation history
    temperature=0.7,          # Creativity (0=focused, 2=creative)
    max_tokens=500,           # Maximum response length
)

Available Models:

| Model | Best For | Cost |
|-------|----------|------|
| gpt-4o-mini | Learning, simple tasks | $ |
| gpt-4o | Complex tasks, production | $$ |
| gpt-4-turbo | Long documents | $$$ |

Step 5: Build a Simple Chatbot

Python Chatbot:

# chatbot.py
import os
from openai import OpenAI
from dotenv import load_dotenv
 
load_dotenv()
client = OpenAI()
 
def chat():
    print("ChatBot Ready! Type 'quit' to exit.\n")
 
    # Store conversation history
    messages = [
        {"role": "system", "content": "You are a friendly assistant. Keep responses concise."}
    ]
 
    while True:
        # Get user input
        user_input = input("You: ").strip()
 
        if user_input.lower() == 'quit':
            print("Goodbye!")
            break
 
        if not user_input:
            continue
 
        # Add user message to history
        messages.append({"role": "user", "content": user_input})
 
        # Get AI response
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            max_tokens=500
        )
 
        # Extract and print response
        ai_message = response.choices[0].message.content
        print(f"\nBot: {ai_message}\n")
 
        # Add AI response to history (for context)
        messages.append({"role": "assistant", "content": ai_message})
 
if __name__ == "__main__":
    chat()

Run your chatbot:

python chatbot.py
 
# ChatBot Ready! Type 'quit' to exit.
#
# You: What's Python?
#
# Bot: Python is a popular programming language known for being
# easy to read and learn...
#
# You: How do I install it?
#
# Bot: You can install Python by visiting python.org...

Step 6: Add Streaming (Real-time Responses)

Stream responses token by token, like the ChatGPT interface:

# streaming_chat.py
import os
from openai import OpenAI
from dotenv import load_dotenv
 
load_dotenv()
client = OpenAI()
 
def stream_chat():
    print("Streaming ChatBot Ready!\n")
 
    messages = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]
 
    while True:
        user_input = input("You: ").strip()
 
        if user_input.lower() == 'quit':
            break
 
        messages.append({"role": "user", "content": user_input})
 
        # Enable streaming
        print("Bot: ", end="", flush=True)
 
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            stream=True  # Enable streaming!
        )
 
        # Collect response as it streams
        full_response = ""
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                content = chunk.choices[0].delta.content
                print(content, end="", flush=True)
                full_response += content
 
        print("\n")
        messages.append({"role": "assistant", "content": full_response})
 
if __name__ == "__main__":
    stream_chat()

Step 7: Handle Errors Gracefully

Common errors and how to handle them:

import openai
from openai import OpenAI
 
client = OpenAI()
 
def safe_chat(messages):
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages
        )
        return response.choices[0].message.content
 
    except openai.AuthenticationError:
        return "Error: Invalid API key. Check your OPENAI_API_KEY."
 
    except openai.RateLimitError:
        return "Error: Too many requests. Please wait a moment."
 
    except openai.APIConnectionError:
        return "Error: Can't connect to OpenAI. Check your internet."
 
    except openai.BadRequestError as e:
        return f"Error: Bad request - {e.message}"
 
    except Exception as e:
        return f"Unexpected error: {str(e)}"
 
# Usage
messages = [{"role": "user", "content": "Hello!"}]
result = safe_chat(messages)
print(result)
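
For transient failures like RateLimitError, waiting and retrying usually works. A sketch of exponential backoff (the with_retries wrapper is my own; the openai client also has built-in retries you can configure with its max_retries setting):

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0):
    """Call fn(); on failure wait base_delay, 2x, 4x, ... before retrying."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Usage (sketch):
# reply = with_retries(lambda: client.chat.completions.create(...))
```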

Step 8: Track Your Costs

Monitor token usage:

def chat_with_cost_tracking(messages):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    )
 
    # Get token counts
    usage = response.usage
    prompt_tokens = usage.prompt_tokens
    completion_tokens = usage.completion_tokens
    total_tokens = usage.total_tokens
 
    # Calculate cost (gpt-4o-mini prices)
    # $0.15 per 1M input tokens, $0.60 per 1M output tokens
    cost = (prompt_tokens * 0.00000015) + (completion_tokens * 0.0000006)
 
    print(f"Tokens used: {total_tokens} (Cost: ${cost:.6f})")
 
    return response.choices[0].message.content
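
Prices differ per model and change over time, so it helps to keep them in one table rather than hard-coding them in the math. A sketch (the numbers are published rates at the time of writing; always verify against openai.com/pricing):

```python
# $ per 1M tokens -- verify against the current pricing page!
PRICES = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "gpt-4o":      {"input": 2.50, "output": 10.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate the dollar cost of one request from its token counts."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000
```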

Complete Beginner Project: Personal Assistant

# assistant.py
import os
from openai import OpenAI
from dotenv import load_dotenv
from datetime import datetime
 
load_dotenv()
client = OpenAI()
 
class PersonalAssistant:
    def __init__(self, name="Assistant"):
        self.name = name
        self.messages = [
            {
                "role": "system",
                "content": f"""You are {name}, a helpful personal assistant.
                Today's date is {datetime.now().strftime('%B %d, %Y')}.
                Be friendly, concise, and helpful."""
            }
        ]
 
    def chat(self, user_message):
        self.messages.append({"role": "user", "content": user_message})
 
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=self.messages,
                max_tokens=500,
                temperature=0.7
            )
 
            ai_message = response.choices[0].message.content
            self.messages.append({"role": "assistant", "content": ai_message})
 
            return ai_message
 
        except Exception as e:
            return f"Sorry, I encountered an error: {str(e)}"
 
    def clear_history(self):
        """Start fresh conversation"""
        self.messages = self.messages[:1]  # Keep system message
 
def main():
    assistant = PersonalAssistant("Aria")
 
    print(f"🤖 {assistant.name} is ready to help!")
    print("Commands: 'quit' to exit, 'clear' to reset conversation\n")
 
    while True:
        user_input = input("You: ").strip()
 
        if user_input.lower() == 'quit':
            print(f"{assistant.name}: Goodbye! 👋")
            break
        elif user_input.lower() == 'clear':
            assistant.clear_history()
            print(f"{assistant.name}: Conversation cleared! How can I help?\n")
            continue
        elif not user_input:
            continue
 
        response = assistant.chat(user_input)
        print(f"\n{assistant.name}: {response}\n")
 
if __name__ == "__main__":
    main()

What's Next?

Now that you have the basics, explore:

  1. Add memory - Store conversations in a database
  2. Build a web app - Use Flask or FastAPI
  3. Add RAG - Let your bot answer from your documents
  4. Deploy - Put your app on the internet
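
For item 1, a minimal file-based sketch of memory (the function names are my own; a real app would likely use SQLite or a proper database):

```python
import json
from pathlib import Path

def save_conversation(messages, path="conversation.json"):
    """Write the messages list to disk as JSON."""
    Path(path).write_text(json.dumps(messages, indent=2))

def load_conversation(path="conversation.json"):
    """Load a saved conversation, or start fresh if none exists."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []
```

Save after every assistant reply and load at startup, and your bot remembers past sessions.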

Quick Reference

# Install
pip install openai python-dotenv
 
# Test API key
python -c "from openai import OpenAI; OpenAI().models.list()"
 
# Simple call
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Need Help Building AI Apps?

Going from tutorial to production requires more than just API calls. Our team can help with:

  • Production-ready AI architecture
  • Security and compliance (EU AI Act)
  • Cost optimization
  • Custom AI solutions

Get AI development help
