AI Automation

Post-Bot

AI-Powered Social Media Content Automation

2024 · Python · FastAPI · Groq LLM · Production Ready

Project Overview

The Problem

Content creators spend 15-20 hours weekly managing social media across multiple platforms. Manual posting is time-consuming and leads to inconsistent brand voice.

The Solution

An AI-powered automation system that generates platform-optimized content, schedules posts, and maintains consistent brand voice using LLM technology.

Key Metrics

  • ⚡ 80% reduction in content creation time
  • 📈 3x increase in posting consistency
  • 🎯 Platform-specific content optimization

System Architecture

High-level overview of the system components and data flow.

flowchart TB
    subgraph Client["Client Layer"]
        UI[Web Dashboard]
        API_CLIENT[API Client]
    end
    subgraph API["API Gateway"]
        FASTAPI[FastAPI Server]
        AUTH[Auth Middleware]
        RATE[Rate Limiter]
    end
    subgraph Core["Core Services"]
        CONTENT[Content Generator]
        SCHEDULER[Post Scheduler]
        ANALYTICS[Analytics Engine]
    end
    subgraph AI["AI Layer"]
        GROQ[Groq LLM API]
        PROMPT[Prompt Engine]
    end
    subgraph Data["Data Layer"]
        POSTGRES[(PostgreSQL)]
        REDIS[(Redis Cache)]
    end
    subgraph External["External APIs"]
        TWITTER[Twitter/X API]
        LINKEDIN[LinkedIn API]
        INSTAGRAM[Instagram API]
    end
    UI --> FASTAPI
    API_CLIENT --> FASTAPI
    FASTAPI --> AUTH
    AUTH --> RATE
    RATE --> CONTENT
    RATE --> SCHEDULER
    RATE --> ANALYTICS
    CONTENT --> PROMPT
    PROMPT --> GROQ
    SCHEDULER --> POSTGRES
    SCHEDULER --> REDIS
    CONTENT --> POSTGRES
    SCHEDULER --> TWITTER
    SCHEDULER --> LINKEDIN
    SCHEDULER --> INSTAGRAM

Database Schema

Entity-Relationship diagram showing the core data models.

erDiagram
    USERS ||--o{ POSTS : creates
    USERS ||--o{ ACCOUNTS : owns
    USERS ||--o{ TEMPLATES : creates
    ACCOUNTS ||--o{ POSTS : publishes
    TEMPLATES ||--o{ POSTS : generates
    USERS {
        uuid id PK
        string email UK
        string password_hash
        string name
        timestamp created_at
        boolean is_active
    }
    ACCOUNTS {
        uuid id PK
        uuid user_id FK
        string platform
        string access_token
        string refresh_token
        timestamp token_expires
        json metadata
    }
    POSTS {
        uuid id PK
        uuid user_id FK
        uuid account_id FK
        uuid template_id FK
        text content
        string status
        timestamp scheduled_for
        timestamp published_at
        json analytics
    }
    TEMPLATES {
        uuid id PK
        uuid user_id FK
        string name
        text prompt
        string platform
        json parameters
    }

Engineering Challenges

01

Rate Limiting at Scale

Problem

Social media APIs have strict rate limits. Hitting limits causes failed posts and poor UX.

Solution

Implemented a distributed rate limiter in Redis using a sliding-window algorithm. Added exponential backoff with jitter for retries.

# Sliding window rate limiter backed by a Redis sorted set
async def check_rate_limit(user_id: str, platform: str) -> bool:
    key = f"rate:{platform}:{user_id}"
    now = time.time()
    pipe = redis.pipeline(transaction=True)
    pipe.zremrangebyscore(key, 0, now - WINDOW_SIZE)  # evict entries outside the window
    pipe.zadd(key, {f"{now}:{uuid4()}": now})         # record this request
    pipe.zcard(key)                                   # count requests still in the window
    pipe.expire(key, WINDOW_SIZE)                     # garbage-collect idle keys
    _, _, count, _ = await pipe.execute()
    return count <= RATE_LIMIT[platform]
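The retry side mentioned above isn't shown in the snippet; a minimal sketch of full-jitter exponential backoff (function name and defaults here are illustrative, not taken from the project):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    # Full jitter: pick a random delay in [0, min(cap, base * 2**attempt)].
    # Randomizing the whole interval spreads retries out instead of letting
    # many failed jobs retry in lockstep and hit the rate limit again.
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

The caller would sleep for `backoff_delay(attempt)` seconds before each retry, up to some maximum attempt count.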
02

Token Refresh Race Conditions

Problem

Multiple concurrent requests could trigger simultaneous OAuth token refreshes, causing token invalidation.

Solution

Used Redis distributed locks to ensure only one refresh happens at a time. Implemented optimistic locking with version checks.

# Distributed lock for token refresh
async def refresh_token(account_id: str):
    lock_key = f"lock:refresh:{account_id}"
    async with redis.lock(lock_key, timeout=30):
        # Re-read inside the lock: another worker may have refreshed already
        account = await db.get(account_id)
        if account.token_expires > now():
            return account.access_token
        # Proceed with refresh...
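The optimistic-locking half can be illustrated with an in-memory compare-and-swap; in the real system this would be an `UPDATE accounts SET ... WHERE version = :expected` whose row count reveals success (the function and field names here are hypothetical):

```python
def try_refresh(row: dict, expected_version: int, new_token: str) -> bool:
    # Compare-and-swap on a version column: if another writer already
    # refreshed the token, the version no longer matches and we give up
    # rather than clobbering the newer token.
    if row["version"] != expected_version:
        return False
    row["access_token"] = new_token
    row["version"] += 1
    return True
```

A caller that loses the race re-reads the row and uses the token the winner stored.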
03

LLM Response Consistency

Problem

LLM outputs varied significantly for the same input, making brand voice inconsistent.

Solution

Created a prompt templating system with brand voice parameters. Added post-processing to enforce character limits and formatting rules.

# Brand voice prompt template
PROMPT_TEMPLATE = """
You are writing for {brand_name}.
Tone: {tone} (e.g., professional, casual)
Platform: {platform}
Character limit: {char_limit}

Rules:
- {rules}

Generate a post about: {topic}
"""

Key Takeaways

🔒 Security First

OAuth token management requires careful handling. Always encrypt tokens at rest and implement proper rotation.

📊 Observability Matters

Added comprehensive logging and metrics from day one. Invaluable for debugging async scheduler issues.

🔄 Idempotency is Essential

All scheduler operations are idempotent. Prevents duplicate posts when retrying failed jobs.
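One common way to get that idempotency is a deterministic job key; this sketch assumes the key is derived from the fields that define a publish (the field choice is illustrative):

```python
import hashlib

def idempotency_key(account_id: str, content: str, scheduled_for: str) -> str:
    # The same job inputs always hash to the same key, so the scheduler can
    # record "already published" under this key and skip duplicate retries.
    raw = f"{account_id}|{scheduled_for}|{content}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()
```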

Cache Strategically

Cached LLM responses for similar prompts reduced API costs by 40% and improved response times.
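A cache key for "similar prompts" can normalize the prompt text before hashing; this sketch assumes exact-match caching after normalization (rather than semantic similarity), with hypothetical names:

```python
import hashlib

def llm_cache_key(prompt: str, model: str, temperature: float) -> str:
    # Collapse whitespace and case so trivially different prompts hit the
    # same cache entry; model and temperature stay in the key because
    # changing either changes the expected output.
    normalized = " ".join(prompt.lower().split())
    raw = f"{model}|{temperature}|{normalized}".encode("utf-8")
    return "llm:" + hashlib.sha256(raw).hexdigest()
```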

Interested in the Technical Details?

Check out the full source code or reach out to discuss the architecture.