Caching Layer
Redis-based caching layer for the Promenade Platform that improves performance and reduces database load for frequently accessed data.
Overview
Promenade implements a flexible caching layer with multiple adapters (Redis, NoOp) and a configurable TTL per resource type. The cache follows the cache-aside pattern with write-through invalidation for consistency.
Key Features:
- Multiple Adapters: Redis (production), NoOp (testing/fallback)
- Resource-Specific TTL: Different expiration times for reference/user/session data
- Graceful Degradation: Application continues if Redis unavailable
- Pattern-Based Invalidation: SCAN for efficient bulk deletion
- JSON Marshaling: Automatic serialization of complex types
- Context-Aware: Request tracing and cancellation support
Architecture
Cache Interface
type ICache interface {
Get(ctx context.Context, key string, dest interface{}) error
Set(ctx context.Context, key string, value interface{}, ttl time.Duration) error
Delete(ctx context.Context, key string) error
DeletePattern(ctx context.Context, pattern string) error
Clear(ctx context.Context) error
Health(ctx context.Context) error
Close(ctx context.Context) error
}
Adapters
Redis Adapter (Production):
- Distributed caching across multiple instances
- Persistent storage with TTL expiration
- High throughput (10K+ ops/sec)
- Separate database (DB 2) for isolation
NoOp Adapter (Testing):
- Always returns ErrCacheMiss
- No actual caching (always fresh data)
- Used in tests and when Redis unavailable
Configuration
YAML Configuration
Development (config/app.postgres-dev.yaml or config/app.sqlite-dev.yaml):
cache:
enabled: true
adapter: "redis"
prefix: "promenade:dev:"
default_ttl: "5m"
ttl:
# Reference data (rarely changes)
countries: "1h"
currencies: "1h"
languages: "1h"
timezones: "1h"
# User data (changes more frequently)
user_profile: "15m"
customer: "10m"
# Session data (short-lived)
session: "30m"
Production (config/app.postgres-prod.yaml):
cache:
enabled: true
adapter: "redis"
prefix: "promenade:prod:"
default_ttl: "10m"
ttl:
countries: "24h" # Reference data changes very rarely
currencies: "24h"
languages: "24h"
timezones: "24h"
user_profile: "30m" # User data cached longer
customer: "20m"
session: "1h"
Testing (config/app.postgres-test.yaml or config/app.sqlite-test.yaml):
cache:
enabled: false
adapter: "noop"
prefix: "promenade:test:"
default_ttl: "1m"
ttl:
countries: "1m"
currencies: "1m"
languages: "1m"
timezones: "1m"
user_profile: "1m"
customer: "1m"
session: "1m"
Redis Database Layout
| DB | Purpose | Key Pattern |
|---|---|---|
| 0 | Token Revocation | token:{jti} |
| 1 | Event Bus | bus:{topic}:{event_id} |
| 2 | Cache Layer | promenade:{env}:{resource}:{id} |
| 3 | Sessions | session:{session_id} |
Usage Examples
Initialize Cache
// In cmd/api/main.go
var cacheClient cache.ICache
if redisClient != nil {
cacheConfig, err := cfg.Cache.ToCacheConfig()
if err != nil {
logger.Fatal("Failed to parse cache config", slog.Any("error", err))
}
// Create Redis client for cache (separate DB)
cacheRedisClient := redis.NewClient(&redis.Options{
Addr: cfg.Database.Redis.Addr,
Password: cfg.Database.Redis.Password,
DB: cfg.Database.Redis.Databases.Cache, // Use cache DB
PoolSize: cfg.Database.Redis.PoolSize,
MaxRetries: cfg.Database.Redis.MaxRetries,
})
cacheClient, err = cache.NewCache(cacheConfig, cacheRedisClient)
if err != nil {
logger.Fatal("Failed to initialize cache", slog.Any("error", err))
}
logger.Info("Cache initialized",
slog.String("adapter", cacheConfig.Adapter),
slog.Bool("enabled", cacheConfig.Enabled),
)
} else {
// Fallback to no-op cache when Redis unavailable
cacheClient, _ = cache.NewCache(&cache.Config{Enabled: false, Adapter: "noop"}, nil)
logger.Warn("Cache disabled (Redis unavailable)")
}
defer cacheClient.Close(context.Background())
UseCase Integration
Cache-Aside Pattern (Read-Through):
type countryUseCase struct {
repo IRepository
cache cache.ICache
}
// GetByID - Cache-aside pattern with 1 hour TTL
func (uc *countryUseCase) GetByID(ctx context.Context, id uuidv7.UUID) (*Country, error) {
// Try cache first
cacheKey := fmt.Sprintf("country:id:%s", id.String())
var cached Country
if err := uc.cache.Get(ctx, cacheKey, &cached); err == nil {
return &cached, nil
}
// Cache miss - fetch from database
country, err := uc.repo.GetByID(ctx, id)
if err != nil {
return nil, err
}
// Store in cache (1 hour TTL for reference data)
const cacheTTL = 1 * time.Hour
if err := uc.cache.Set(ctx, cacheKey, country, cacheTTL); err != nil {
logger.FromContext(ctx).Error("Failed to cache country", slog.Any("error", err))
// Don't fail the request if caching fails
}
return country, nil
}
Write-Through Invalidation:
// Update - Invalidate cache on write
func (uc *countryUseCase) Update(ctx context.Context, country *Country) error {
// Update database
if err := uc.repo.Update(ctx, country); err != nil {
return err
}
// Invalidate related cache entries
idKey := fmt.Sprintf("country:id:%s", country.ID.String())
codeKey := fmt.Sprintf("country:code:%s", country.Code)
listKey := "country:list:all"
uc.cache.Delete(ctx, idKey)
uc.cache.Delete(ctx, codeKey)
uc.cache.Delete(ctx, listKey)
return nil
}
Pattern-Based Invalidation:
// Create - Invalidate list cache only
func (uc *countryUseCase) Create(ctx context.Context, country *Country) error {
if err := uc.repo.Create(ctx, country); err != nil {
return err
}
// Invalidate all list caches (creates don't affect individual item caches)
uc.cache.DeletePattern(ctx, "country:list:*")
return nil
}
Key Naming Conventions
Consistent key patterns for easy management and debugging:
{entity}:id:{uuid} # Single item by ID
{entity}:code:{code} # Single item by code
{entity}:name:{name} # Single item by name
{entity}:list:all # Complete list
{entity}:list:{filter} # Filtered list
{entity}:user:{user_id} # User-specific items
Examples:
country:id:01JGABC... # Country by UUID
country:code:US # Country by ISO code
country:list:all # All active countries
currency:id:01JGXYZ... # Currency by UUID
currency:code:USD # Currency by ISO code
user:profile:01JGDEF... # User profile by user ID
customer:id:01JGMNO... # Customer by UUID
TTL Strategy
Reference Data (Long TTL)
Characteristics: Rarely changes, frequently accessed, expensive to compute
TTL:
- Dev: 1 hour
- Prod: 24 hours
Resources: Countries, Currencies, Languages, Timezones
Rationale: Reference data changes very rarely (perhaps monthly), so it is safe to cache for long periods.
User Data (Medium TTL)
Characteristics: Changes occasionally, user-specific, moderate cost
TTL:
- Dev: 10-15 minutes
- Prod: 20-30 minutes
Resources: User Profiles, Customer Records
Rationale: Users update profiles infrequently, but changes should be visible relatively quickly.
Session Data (Short TTL)
Characteristics: Changes frequently, security-sensitive, cheap to fetch
TTL:
- Dev: 30 minutes
- Prod: 1 hour
Resources: Session state, temporary data
Rationale: Sessions need to reflect current state, but complete freshness is not critical.
Error Handling
Cache Miss
err := cache.Get(ctx, key, &dest)
if errors.Is(err, cache.ErrCacheMiss) {
// Expected - fetch from database
dest, err = repo.GetFromDB(ctx)
}
if err != nil {
// Unexpected error (Redis down, etc.)
// Log and continue without cache
logger.FromContext(ctx).Warn("Cache error", slog.Any("error", err))
}
Graceful Degradation
// If cache operation fails, log and continue
if err := cache.Set(ctx, key, value, ttl); err != nil {
logger.FromContext(ctx).Error("Failed to cache", slog.Any("error", err))
// Don't return error - caching is not critical
}
// Application still works when Redis is down
if redisClient == nil {
cacheClient, _ = cache.NewCache(&cache.Config{Enabled: false, Adapter: "noop"}, nil)
logger.Warn("Cache disabled (Redis unavailable)")
}
Testing
Unit Tests with NoOp Cache
func TestCountryUseCase_GetByID(t *testing.T) {
mockRepo := new(MockCountryRepository)
noopCache, _ := cache.NewCache(&cache.Config{Enabled: false, Adapter: "noop"}, nil)
uc := NewCountryUseCase(mockRepo, noopCache)
// Test passes through to repository (no caching)
country := &Country{ID: uuidv7.New(), Code: "US"}
mockRepo.On("GetByID", mock.Anything, country.ID).Return(country, nil)
result, err := uc.GetByID(context.Background(), country.ID)
assert.NoError(t, err)
assert.Equal(t, country.Code, result.Code)
mockRepo.AssertExpectations(t)
}
Integration Tests with Real Redis
func TestCacheIntegration(t *testing.T) {
// Setup test Redis
redisClient := redis.NewClient(&redis.Options{
Addr: "localhost:6380",
DB: 2,
})
defer redisClient.Close()
cacheConfig := &cache.Config{
Enabled: true,
Adapter: "redis",
Prefix: "test:",
}
c, err := cache.NewCache(cacheConfig, redisClient)
require.NoError(t, err)
defer c.Close(context.Background())
// Test Set/Get
ctx := context.Background()
key := "country:id:test"
country := &Country{Code: "US", Name: "United States"}
err = c.Set(ctx, key, country, 1*time.Minute)
require.NoError(t, err)
var retrieved Country
err = c.Get(ctx, key, &retrieved)
require.NoError(t, err)
assert.Equal(t, "US", retrieved.Code)
}
Performance Metrics
Redis Cache
Throughput:
- Get: ~15K ops/sec
- Set: ~12K ops/sec
- Delete: ~18K ops/sec
Latency (P50/P95/P99):
- Get: 0.8ms / 1.5ms / 3ms
- Set: 1.2ms / 2.1ms / 4ms
- Delete: 0.5ms / 1ms / 2ms
Memory Usage:
- Reference data (4 resources × ~250 items): ~2MB
- Active sessions (1000 users): ~5MB
- Total typical: ~10MB
NoOp Cache
Throughput: N/A (always miss)
Latency: <0.01ms (no-op)
Memory Usage: 0MB
Monitoring
Health Checks
// Cache health check
if err := cacheClient.Health(ctx); err != nil {
logger.Error("Cache unhealthy", slog.Any("error", err))
}
Metrics (Planned)
Cache Hit Rate:
type CacheMetrics struct {
Hits int64
Misses int64
Errors int64
}
func (m *CacheMetrics) HitRate() float64 {
total := m.Hits + m.Misses
if total == 0 {
return 0
}
return float64(m.Hits) / float64(total)
}
Instrumentation (Future):
- Prometheus metrics endpoint
- Grafana dashboards
- Cache hit/miss rates by resource
- Average latency per operation
- Memory usage tracking
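Under concurrent request handlers the planned counters would need atomic updates. One possible sketch of the same `CacheMetrics` type with race-safe recording (the `RecordHit`/`RecordMiss` method names are assumptions, not existing API):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// CacheMetrics counts hits, misses, and errors; atomic operations make it
// safe to update from concurrent goroutines without a mutex.
type CacheMetrics struct {
	Hits   int64
	Misses int64
	Errors int64
}

func (m *CacheMetrics) RecordHit()  { atomic.AddInt64(&m.Hits, 1) }
func (m *CacheMetrics) RecordMiss() { atomic.AddInt64(&m.Misses, 1) }

// HitRate returns hits / (hits + misses), or 0 before any lookups.
func (m *CacheMetrics) HitRate() float64 {
	hits := atomic.LoadInt64(&m.Hits)
	total := hits + atomic.LoadInt64(&m.Misses)
	if total == 0 {
		return 0
	}
	return float64(hits) / float64(total)
}

func main() {
	var m CacheMetrics
	for i := 0; i < 90; i++ {
		m.RecordHit()
	}
	for i := 0; i < 10; i++ {
		m.RecordMiss()
	}
	fmt.Println(m.HitRate()) // 0.9
}
```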
Best Practices
DO
- Use cache for read-heavy workloads (countries, currencies, profiles)
- Invalidate on writes to maintain consistency
- Use pattern-based deletion for bulk invalidation
- Log cache errors but don't fail requests
- Set appropriate TTL based on data change frequency
- Use consistent key naming for easy debugging
- Test with NoOp adapter for unit tests
- Monitor cache hit rates in production
DON'T
- DON'T cache frequently changing data (real-time quotes, stock prices)
- DON'T cache security-sensitive data without encryption
- DON'T fail requests on cache errors (graceful degradation)
- DON'T use cache for write-heavy workloads
- DON'T store large objects (>1MB) in cache
- DON'T rely on cache for correctness (always have DB fallback)
- DON'T forget to invalidate on updates/deletes
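The "no large objects" rule can be enforced with a guard before calling Set. A sketch under stated assumptions: the 1 MB limit matches the guideline above, and `marshalIfCacheable` is an illustrative helper, not existing project API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

const maxCacheValueBytes = 1 << 20 // 1 MB, per the guideline above

// marshalIfCacheable serializes a value and reports whether it is small
// enough to cache; oversized or unmarshalable values are skipped.
func marshalIfCacheable(v interface{}) ([]byte, bool) {
	data, err := json.Marshal(v)
	if err != nil || len(data) > maxCacheValueBytes {
		return nil, false
	}
	return data, true
}

func main() {
	small := map[string]string{"code": "US", "name": "United States"}
	if _, ok := marshalIfCacheable(small); ok {
		fmt.Println("cacheable")
	}

	big := make([]byte, 2<<20) // 2 MB blob, exceeds the limit once serialized
	if _, ok := marshalIfCacheable(big); !ok {
		fmt.Println("skipped: too large")
	}
}
```

Skipping (rather than erroring) keeps the graceful-degradation contract: an oversized value simply stays uncached and is served from the database.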
Troubleshooting
Cache Always Misses
Symptoms: Every cache.Get() call returns ErrCacheMiss
Possible Causes:
- Redis not running
- Wrong Redis database number
- Cache disabled in config (enabled: false)
- NoOp adapter in use
Solution:
# Check Redis connection
redis-cli -h localhost -p 6379 -n 2 PING
# Check config
grep -A 10 "cache:" config/app.postgres-dev.yaml
# Check logs
grep "Cache initialized" logs/app.log
Cache Not Invalidating
Symptoms: Stale data returned after updates
Possible Causes:
- Missing invalidation in Update/Delete methods
- Wrong cache keys
- Pattern mismatch in DeletePattern
Solution:
// Always invalidate on writes
func (uc *useCase) Update(ctx context.Context, entity *Entity) error {
if err := uc.repo.Update(ctx, entity); err != nil {
return err
}
// Invalidate ALL related keys
uc.cache.Delete(ctx, fmt.Sprintf("entity:id:%s", entity.ID))
uc.cache.DeletePattern(ctx, "entity:list:*") // All lists
return nil
}
High Memory Usage
Symptoms: Redis memory usage growing unbounded
Possible Causes:
- TTL not set (keys never expire)
- Caching large objects
- Too many cached items
Solution:
# Check Redis memory
redis-cli -n 2 INFO memory
# Check key count
redis-cli -n 2 DBSIZE
# Check TTL on keys
redis-cli -n 2 TTL "promenade:dev:country:id:01JGABC..."
# Set default TTL policy
redis-cli CONFIG SET maxmemory-policy allkeys-lru
Related Documentation
- Main README - Project overview
- Package Overview - All packages documentation
- Cache Package - Detailed API reference
- Configuration Reference - YAML config guide
- Shared Context - Cache integration example
Status: Production-ready
Version: 1.0
Last Updated: December 30, 2025
Maintainer: Promenade Team