Memory Sizing

Choose the right memory for your workload and scale efficiently

Memory Sizing Guide

Choose the right memory allocation for your Redis workload. Get it wrong and you'll either waste money or trigger evictions. Get it right and sleep soundly at night.

Memory Units Explained

SwiftCache stores memory limits in bytes but displays them in megabytes (MB) for convenience. This mismatch has caused more than one 3 AM debugging session.

Unit Conversion Quick Reference

Display (MB)   Bytes           How to Calculate
10             10,485,760      10 * 1024 * 1024
64             67,108,864      64 * 1024 * 1024
128            134,217,728     128 * 1024 * 1024
256            268,435,456     256 * 1024 * 1024
512            536,870,912     512 * 1024 * 1024
1024 (1 GB)    1,073,741,824   1024 * 1024 * 1024
2048 (2 GB)    2,147,483,648   2048 * 1024 * 1024
4096 (4 GB)    4,294,967,296   4096 * 1024 * 1024

Converting MB to Bytes (The Formula You'll Need)

bytes = MB * 1024 * 1024

Examples:

  • 512 MB = 512 × 1,024 × 1,024 = 536,870,912 bytes
  • 1024 MB = 1,024 × 1,024 × 1,024 = 1,073,741,824 bytes (1 GB)

API Expects Bytes

When creating or scaling an instance, always use bytes:

// Creating 512MB instance - use 536,870,912 bytes
const response = await fetch('https://api.swiftcache.io/api/v1/instances', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk_live_xxx',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    organizationId: 'org_123',
    name: 'my-cache',
    maxMemory: 536870912,  // Always in bytes, not MB!
    region: 'eu-central'
  })
});

The API response will show both:

{
  "instance": {
    "maxMemory": 536870912,  // Raw bytes
    "maxMemoryMb": 512       // Converted to MB for readability
  }
}

Creating a Helper Function

Stop converting manually. This is what helper functions are for:

// TypeScript
function mbToBytes(mb: number): number {
  return mb * 1024 * 1024;
}

function bytesToMb(bytes: number): number {
  return Math.round(bytes / (1024 * 1024));
}

// Usage
const bytes = mbToBytes(512);  // 536870912
const mb = bytesToMb(536870912); // 512

# Python
def mb_to_bytes(mb: int) -> int:
    return mb * 1024 * 1024

def bytes_to_mb(bytes_val: int) -> int:
    return round(bytes_val / (1024 * 1024))

# Usage
bytes_val = mb_to_bytes(512)  # 536870912
mb = bytes_to_mb(536870912)   # 512

How Much Memory Do You Actually Need?

This depends on three things: what you're caching, how long you keep it, and your traffic patterns.

Estimate Your Working Set Size

Working set = the data you need to keep hot (in memory for fast access)

// Example: E-commerce product cache
const products = 50000;
const bytesPerProduct = 2048; // ~2KB per product JSON
const workingSetBytes = products * bytesPerProduct;
const neededMb = Math.ceil(workingSetBytes / (1024 * 1024));
console.log(`Need at least ${neededMb}MB`); // ~98MB

Common Workload Sizing

Session Storage (user login sessions):

  • 1,000 concurrent users × 2KB per session = ~2MB
  • Recommendation: 64MB (with safety margin)

Rate Limiting (tracking API calls):

  • 10,000 users × 100 bytes per entry = ~1MB
  • Recommendation: 32MB

Product Cache (e-commerce):

  • 10,000 products × 2KB each = ~20MB
  • Recommendation: 128MB (to handle spikes)

Real-time Leaderboards (gaming):

  • 100,000 players × 50 bytes each = ~5MB
  • Recommendation: 64MB

Job Queue (background jobs):

  • 50,000 queued jobs × 1KB each = ~50MB
  • Recommendation: 256MB (queues grow quickly)
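
The per-workload estimates above all follow the same arithmetic, which can be captured in a small helper (a sketch; the function name is ours, not part of the SwiftCache API):

```python
def working_set_mb(items: int, bytes_per_item: int) -> float:
    """Back-of-envelope working-set estimate in MB: items * size / 2^20."""
    return items * bytes_per_item / (1024 * 1024)

# Session storage example from above: 1,000 users x 2KB sessions
print(f"{working_set_mb(1000, 2048):.1f} MB")  # ~2.0 MB
```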

Account for Redis Overhead

Redis doesn't use memory 100% efficiently. Account for:

  • Encoding overhead: ~20% extra for Redis internal structures
  • Fragmentation: ~10-15% when frequently updating data
  • Safety margin: ~20% buffer for traffic spikes

Recommended Memory = (Working Set × 1.35) + Safety Buffer

Example:

Working Set: 100MB
With Overhead: 100MB × 1.35 = 135MB
With 50MB Safety: 135MB + 50MB = 185MB
Choose: 256MB (next size up)
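
The overhead rule above can be expressed as a short sketch (the power-of-two rounding assumes instance sizes double, as in the conversion table; the helper name is ours):

```python
def recommended_mb(working_set_mb: float, safety_mb: float = 50) -> int:
    """Apply ~35% overhead, add a safety buffer, round up to the next doubled size."""
    needed = working_set_mb * 1.35 + safety_mb
    size = 64  # smallest size considered in this sketch
    while size < needed:
        size *= 2
    return size

print(recommended_mb(100))  # 100 * 1.35 + 50 = 185 -> 256
```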

Eviction Policies Explained

If you run out of memory, Redis will evict (delete) data. The policy determines what gets deleted.

Available Policies

Policy           Behavior                                       Good For
allkeys-lru      Delete least recently used key (default)       General purpose, most workloads
volatile-lru     Delete least recently used key with a TTL set  Mixed hot/cold data
allkeys-lfu      Delete least frequently used key               Caches with skewed access patterns
volatile-lfu     Delete least frequently used key with a TTL    Caches with expiration
volatile-ttl     Delete key with shortest remaining TTL         Time-sensitive data
volatile-random  Randomly delete a key with a TTL               Testing only
allkeys-random   Randomly delete any key                        Rare use cases
noeviction       Throw an error, don't evict                    Critical data only

Choosing Your Eviction Policy

Use allkeys-lru (default) if:

  • You're caching data (sessions, products, pages)
  • You don't set explicit TTLs
  • You want predictable behavior

Use volatile-lru if:

  • You cache some things permanently
  • You expire other things manually
  • You want the most control

Use noeviction if:

  • Your data is critical
  • You never want to lose anything
  • You've sized memory correctly (will error if full)

// Creating instance with custom eviction policy
const response = await fetch('https://api.swiftcache.io/api/v1/instances', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk_live_xxx',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    organizationId: 'org_123',
    name: 'critical-cache',
    maxMemory: 536870912, // 512MB
    evictionPolicy: 'noeviction', // Never evict, error instead
    region: 'eu-central'
  })
});

Monitoring Memory Usage via Metrics

After creating your instance, monitor how much memory you're actually using.

Get Memory Metrics

# Fetch last 24 hours of metrics
curl -H "Authorization: Bearer sk_live_xxx" \
  https://api.swiftcache.io/api/v1/instances/inst_abc123/metrics

# Response includes command counts, bytes in/out, cache hits/misses
{
  "metrics": [
    {
      "time": "2024-03-15T10:00:00Z",
      "commands": 1250,
      "bytesIn": 102400,
      "bytesOut": 204800,
      "hits": 890,
      "misses": 360
    },
    ...
  ],
  "current": {
    "commands": 1250,
    "bytesIn": 102400,
    "bytesOut": 204800,
    "hits": 890,
    "misses": 360
  }
}

Using Metrics to Right-Size

Monitor these metrics over 24 hours:

  1. Command Count: Are you hitting your limit?
  2. Bytes In/Out: How much data flows through?
  3. Hit Ratio: hits / (hits + misses) = percentage of requests served from cache

// Calculate hit ratio from metrics
const metrics = await fetchMetrics('inst_abc123');
const totalHits = metrics.reduce((sum, m) => sum + m.hits, 0);
const totalMisses = metrics.reduce((sum, m) => sum + m.misses, 0);
const hitRatio = totalHits / (totalHits + totalMisses);

console.log(`Hit ratio: ${(hitRatio * 100).toFixed(1)}%`);

// Interpretation:
// 90%+ = Memory is probably right-sized, you're serving from cache
// 70-90% = Reasonable, but could scale up for better performance
// <70% = Too many misses; the instance is probably too small

Python Metrics Example

import requests
from datetime import datetime

def analyze_memory_usage(instance_id, api_key):
    response = requests.get(
        f'https://api.swiftcache.io/api/v1/instances/{instance_id}/metrics',
        headers={'Authorization': f'Bearer {api_key}'}
    )
    response.raise_for_status()

    data = response.json()
    metrics = data['metrics']

    # Calculate statistics
    total_commands = sum(m['commands'] for m in metrics)
    total_hits = sum(m['hits'] for m in metrics)
    total_misses = sum(m['misses'] for m in metrics)
    total_bytes_in = sum(m['bytesIn'] for m in metrics)

    hit_ratio = total_hits / (total_hits + total_misses) if (total_hits + total_misses) > 0 else 0
    avg_command_size = total_bytes_in / total_commands if total_commands > 0 else 0

    print(f"Last 24 hours summary:")
    print(f"  Commands: {total_commands:,}")
    print(f"  Hit ratio: {hit_ratio * 100:.1f}%")
    print(f"  Data in: {total_bytes_in / 1024 / 1024:.1f} MB")
    print(f"  Avg command: {avg_command_size:.0f} bytes")

    # Recommendation
    if hit_ratio < 0.7:
        print("  Recommendation: Consider scaling up memory")
    elif hit_ratio > 0.95:
        print("  Recommendation: Memory is well-utilized, no action needed")

analyze_memory_usage('inst_abc123', 'sk_live_xxx')

Scaling Memory Up or Down

Scale Up (Increase Memory)

When your hit ratio drops or you're seeing evictions:

# Scale to 1GB (1073741824 bytes)
curl -X PATCH \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{"maxMemory": 1073741824}' \
  https://api.swiftcache.io/api/v1/instances/inst_abc123

# Response: updated instance details
{
  "instance": {
    "id": "inst_abc123",
    "maxMemory": 1073741824,
    "maxMemoryMb": 1024,
    "status": "RUNNING"
  }
}

Important: Scaling up happens instantly with no downtime. Existing connections stay active.

Scale Down (Decrease Memory)

You cannot scale down - only up. Once allocated, memory stays the same or increases. If you need less:

  1. Create a new instance with smaller size
  2. Migrate data from old to new instance
  3. Update your application to use new instance
  4. Delete old instance

// Migration pattern (node-redis v4; GET/SET copies string values only)
const { createClient } = require('redis');

const oldInstance = { hostname: 'old.redis.swiftcache.io' };
const newInstance = { hostname: 'new.redis.swiftcache.io' };

// 1. Connect to both
const oldClient = createClient({ url: `redis://${oldInstance.hostname}` });
const newClient = createClient({ url: `redis://${newInstance.hostname}` });
await oldClient.connect();
await newClient.connect();

// 2. Copy all keys (SCAN avoids blocking the server on large keyspaces)
for await (const key of oldClient.scanIterator()) {
  const value = await oldClient.get(key);
  await newClient.set(key, value);
}

// 3. Update app config to use newInstance
// (async, can take time)

// 4. Delete old instance after confirming new one works
await fetch(`https://api.swiftcache.io/api/v1/instances/${oldInstanceId}`, {
  method: 'DELETE',
  headers: { 'Authorization': 'Bearer sk_live_xxx' }
});

Plan Limits on Memory

Your plan determines the maximum memory per instance. Check your plan before scaling:

Plan           Max Memory Per Instance   Max Total Memory
Free           256MB                     256MB
Starter        1GB                       2GB
Professional   16GB                      64GB
Enterprise     Custom                    Custom

If you hit your plan limit:

  1. Delete an unused instance
  2. Upgrade your plan
  3. Contact sales for custom limits

# This will fail if you exceed your plan limits
# (the 2GB request below exceeds the Free plan's 256MB cap)
curl -X PATCH \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{"maxMemory": 2147483648}' \
  https://api.swiftcache.io/api/v1/instances/inst_abc123

# Response: 403 Forbidden
{
  "error": "PLAN_LIMIT_EXCEEDED",
  "message": "Memory limit exceeded for Free plan (max 256MB)",
  "suggestion": "Upgrade to Starter plan to increase limits"
}

Right-Sizing Checklist

Before you finalize your memory size:

  • Calculated working set size - Know your data volume
  • Added overhead buffer - 35% for Redis internals + fragmentation
  • Set eviction policy - Usually allkeys-lru
  • Reviewed plan limits - Know your maximum
  • Created the instance - Use bytes, not MB!
  • Waited for RUNNING - Polled until provisioning done
  • Monitored 24 hours - Captured real traffic metrics
  • Verified hit ratio - Aim for 80%+ hits
  • Prepared to scale - Know how to increase if needed

Common Sizing Mistakes

Mistake 1: Using MB when API expects bytes

// WRONG - API receives 512 instead of 536,870,912
maxMemory: 512

// CORRECT - 512 MB in bytes
maxMemory: 536870912
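
A cheap guard catches this class of bug before the request leaves your code (a sketch; the threshold and helper name are ours):

```python
MIN_PLAUSIBLE_BYTES = 10 * 1024 * 1024  # smallest size in the conversion table (10 MB)

def assert_bytes(max_memory: int) -> int:
    """Reject values small enough to look like MB passed where bytes are expected."""
    if max_memory < MIN_PLAUSIBLE_BYTES:
        raise ValueError(
            f"maxMemory={max_memory} looks like MB; did you mean {max_memory * 1024 * 1024}?"
        )
    return max_memory

assert_bytes(536870912)  # OK: 512 MB expressed in bytes
```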

Mistake 2: Sizing based on peak, not sustained

// Wrong approach: size for absolute peak
Average traffic: 100 users × 2KB sessions = 200KB
Peak traffic: 1,000 users × 2KB sessions = 2MB
// Size for peak? You pay for that capacity the whole month.

// Better approach
Size for the 80-90th percentile of traffic, then scale up if needed.

Mistake 3: Setting noeviction without monitoring

// If you use noeviction, Redis throws errors when full
evictionPolicy: 'noeviction'
// Better: Monitor metrics and scale before hitting limit

Mistake 4: Ignoring overhead

// Wrong
"I need 100MB data, so 100MB instance is enough"
// Actual needed: 135MB (with 35% overhead)

// Right
"I need 100MB data, so 256MB instance is safe"

Summary

Memory sizing is about balance: too small and you get cache misses, too big and you overpay. Use this process:

  1. Estimate your working set (products × size, sessions, etc.)
  2. Add overhead - multiply by 1.35
  3. Add safety margin - 10-20% extra
  4. Start conservative - Easier to scale up than down
  5. Monitor metrics - Hit ratio tells you if you're right-sized
  6. Scale responsively - Increase when hit ratio drops below 80%

When in doubt, start with 256MB or 512MB and scale based on real metrics. Your analytics will tell you the truth.