Simple guides introducing Fabrica with hands-on examples.
This guide covers using Ent as the storage backend for Fabrica-generated microservice APIs.
Ent is a powerful entity framework for Go: schemas are defined as code, and Ent generates a statically typed client, query builders, and automatic migrations from them.
Use Ent when you need a relational database (PostgreSQL, MySQL, or SQLite), rich queries, label-based filtering, or transactions.
Use File Storage when you want zero external dependencies for prototyping, demos, or small single-node deployments.
Here’s the complete workflow to create and run an Ent-backed Fabrica API:
```bash
# 1. Create a new project with Ent storage
fabrica init my-api --storage=ent --db=postgres
cd my-api

# 2. Add your resources
fabrica add resource Device

# 3. Generate all code (handlers, storage, Ent client)
# Ent client generation now runs automatically when using Ent storage
fabrica generate

# 4. Update dependencies
go mod tidy

# 5. Set up your database connection
export DATABASE_URL="postgres://user:pass@localhost:5432/mydb?sslmode=disable"
# For SQLite development: export DATABASE_URL="file:test.db?cache=shared&_fk=1"

# 6. Build and run (migrations run automatically on startup)
go build -o api ./cmd/server
./api

# 7. Test your API
curl http://localhost:8080/api/v1/devices
```
When you run `fabrica init --storage=ent`, the following files are created:

| File | Purpose |
|---|---|
| `internal/storage/ent/schema/*.go` | Ent schema definitions (Resource, Label, Annotation) |
| `internal/storage/generate.go` | Contains the `//go:generate` directive for Ent code generation |
| `internal/storage/ent_adapter.go` | Converts between Fabrica resources and Ent entities |
| `internal/storage/storage_ent.go` | Ent-backed storage implementation |
| `cmd/server/main.go` | Includes database connection and auto-migration |
Note: `fabrica generate` automatically runs Ent code generation when Ent storage is detected, creating `internal/storage/ent/*.go` files with the generated client code.
```bash
# PostgreSQL
export DATABASE_URL="postgres://user:pass@localhost/mydb?sslmode=disable"

# MySQL
export DATABASE_URL="user:pass@tcp(localhost:3306)/mydb?parseTime=true"

# SQLite (development/testing)
export DATABASE_URL="file:./data.db?cache=shared&_fk=1"
```
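The generated server presumably derives the SQL driver from the shape of `DATABASE_URL`. As a stdlib-only illustration of that mapping (the `dialectFor` helper is hypothetical, not part of the generated code):

```go
package main

import (
	"fmt"
	"strings"
)

// dialectFor guesses the database dialect from a DATABASE_URL-style string.
// The mapping mirrors the three connection-string formats shown above.
func dialectFor(dbURL string) string {
	switch {
	case strings.HasPrefix(dbURL, "postgres://"):
		return "postgres"
	case strings.HasPrefix(dbURL, "file:"):
		return "sqlite3"
	default:
		// MySQL DSNs ("user:pass@tcp(...)/db") carry no URL scheme.
		return "mysql"
	}
}

func main() {
	fmt.Println(dialectFor("postgres://user:pass@localhost/mydb?sslmode=disable"))
}
```

MySQL is the fallback here because its DSN format is the only one of the three without a leading scheme.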
Fabrica maintains its Kubernetes-style Resource{Spec, Status} pattern while using Ent for persistence:
```
┌─────────────────────────────────────────────┐
│ HTTP Handler (Generated)                    │
│ 1. Decode JSON → Fabrica Resource struct    │
└─────────────────┬───────────────────────────┘
                  │
                  ↓
┌─────────────────────────────────────────────┐
│ Layer 2: Fabrica Struct Tag Validation      │
│ - Validates Spec/Status structure           │
│ - K8s validators (k8sname, labels, etc.)    │
└─────────────────┬───────────────────────────┘
                  │
                  ↓
┌─────────────────────────────────────────────┐
│ Layer 3: Custom Business Validation         │
│ - Cross-field validation                    │
│ - Database lookups                          │
└─────────────────┬───────────────────────────┘
                  │
                  ↓
┌─────────────────────────────────────────────┐
│ Ent Adapter (Generated)                     │
│ - Marshal Spec/Status to JSON               │
│ - Convert Fabrica Resource → Ent Entity     │
└─────────────────┬───────────────────────────┘
                  │
                  ↓
┌─────────────────────────────────────────────┐
│ Layer 1: Ent Schema Validation              │
│ - Field constraints                         │
│ - Unique constraints                        │
└─────────────────┬───────────────────────────┘
                  │
                  ↓
┌─────────────────────────────────────────────┐
│ Database                                    │
└─────────────────────────────────────────────┘
```
Ent uses three tables to store Fabrica resources:
`resources` table:

```sql
CREATE TABLE resources (
    id               SERIAL PRIMARY KEY,
    uid              VARCHAR(255) UNIQUE NOT NULL,
    name             VARCHAR(253) NOT NULL,
    api_version      VARCHAR(50) DEFAULT 'v1',
    kind             VARCHAR(100) NOT NULL,
    resource_type    VARCHAR(100) NOT NULL,
    spec             JSONB NOT NULL,   -- Desired state as JSON
    status           JSONB,            -- Observed state as JSON
    created_at       TIMESTAMP NOT NULL,
    updated_at       TIMESTAMP NOT NULL,
    resource_version VARCHAR(50) DEFAULT '1',
    namespace        VARCHAR(253)
);
```

`labels` table:

```sql
CREATE TABLE labels (
    id          SERIAL PRIMARY KEY,
    resource_id INTEGER REFERENCES resources(id) ON DELETE CASCADE,
    key         VARCHAR(253) NOT NULL,
    value       VARCHAR(63)
);
```

`annotations` table:

```sql
CREATE TABLE annotations (
    id          SERIAL PRIMARY KEY,
    resource_id INTEGER REFERENCES resources(id) ON DELETE CASCADE,
    key         VARCHAR(253) NOT NULL,
    value       TEXT
);
```
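Conceptually, the generated Ent adapter bridges these tables and the typed resource by serializing `Spec`/`Status` into the JSON columns. A minimal stdlib sketch of that round trip (type and function names are illustrative, not the generated API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// entResourceRow mirrors the resources table above: spec and status are
// stored as raw JSON (the JSONB columns).
type entResourceRow struct {
	UID    string
	Kind   string
	Spec   json.RawMessage
	Status json.RawMessage
}

type deviceSpec struct {
	Location string `json:"location"`
}

// toRow does what the adapter does conceptually on writes:
// marshal the typed Spec into the JSON column.
func toRow(uid, kind string, spec any) (entResourceRow, error) {
	raw, err := json.Marshal(spec)
	if err != nil {
		return entResourceRow{}, err
	}
	return entResourceRow{UID: uid, Kind: kind, Spec: raw}, nil
}

// fromRow reverses the conversion on reads.
func fromRow(row entResourceRow, spec any) error {
	return json.Unmarshal(row.Spec, spec)
}

func main() {
	row, _ := toRow("dev-abc123", "Device", deviceSpec{Location: "Building A"})
	var s deviceSpec
	_ = fromRow(row, &s)
	fmt.Println(s.Location) // the spec round-trips through the JSON column
}
```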
Ent integration maintains Fabrica’s comprehensive validation approach:
Defined in the generated `internal/storage/ent/schema/resource.go`:

```go
field.String("uid").
    Unique().
    Immutable().
    NotEmpty()

field.String("name").
    NotEmpty().
    MaxLen(253)

field.JSON("spec", json.RawMessage{}).
    Validate(func(data json.RawMessage) error {
        if len(data) == 0 {
            return fmt.Errorf("spec cannot be empty")
        }
        return nil
    })
```
Validates: field-level constraints (non-empty, max length, immutability) and unique constraints, enforced at the database layer.
When: automatically during `.Save()` / `.Create()`.
In your resource definitions:
```go
type DeviceSpec struct {
    Name      string `json:"name" validate:"required,k8sname"`
    Location  string `json:"location" validate:"required,min=1,max=100"`
    Model     string `json:"model" validate:"required"`
    IPAddress string `json:"ipAddress" validate:"omitempty,ip"`
}
```
In handlers (auto-generated):
```go
// Layer 2: Struct tag validation
if err := validation.ValidateResource(&device); err != nil {
    http.Error(w, err.Error(), http.StatusBadRequest)
    return
}
```
Validates: the Spec/Status structure via struct tags, including the Kubernetes-style validators (`k8sname`, label syntax, and so on).
When: in HTTP handlers, before the database save.
Implement `Validate(ctx)` on your resources:

```go
func (d *Device) Validate(ctx context.Context) error {
    // Cross-field validation
    if d.Spec.Location == "production" && d.Spec.Model == "" {
        return fmt.Errorf("model required for production devices")
    }

    // Database lookup validation
    if d.Spec.ParentDeviceID != "" {
        exists, err := deviceExists(ctx, d.Spec.ParentDeviceID)
        if err != nil || !exists {
            return fmt.Errorf("parent device not found")
        }
    }

    // Business rules
    if d.Spec.IPAddress != "" {
        available, _ := isIPAvailable(ctx, d.Spec.IPAddress)
        if !available {
            return fmt.Errorf("IP address already in use")
        }
    }

    return nil
}
```
In handlers:
```go
// Layer 3: Custom validation
if err := validation.ValidateWithContext(r.Context(), &device); err != nil {
    http.Error(w, err.Error(), http.StatusBadRequest)
    return
}
```
Validates: cross-field rules, database lookups, and other business logic.
When: after struct validation, before the database save.
```go
// Name the variable "dev" so it does not shadow the "device" package
dev := &device.Device{
    APIVersion: "infra.example.io/v1",
    Kind:       "Device",
    Metadata:   Metadata{},
    Spec: device.DeviceSpec{
        Name:     "sensor-001",
        Location: "Building A",
        Model:    "TMP-100",
    },
}
dev.Metadata.Initialize("sensor-001", uid)
dev.SetLabel("environment", "production")

// All three validation layers execute
err := storage.SaveDevice(ctx, dev)
```
```go
// Load single resource
device, err := storage.LoadDevice(ctx, "dev-abc123")

// Load all devices
devices, err := storage.LoadAllDevices(ctx)
```
For advanced queries, use the Ent client directly:
```go
// In your custom code
devices, err := entClient.Resource.Query().
    Where(resource.KindEQ("Device")).
    Where(resource.HasLabelsWith(
        label.KeyEQ("environment"),
        label.ValueEQ("production"),
    )).
    WithLabels().
    All(ctx)
```
```go
device, err := storage.LoadDevice(ctx, uid)
if err != nil {
    return err
}

device.Spec.Location = "Building B"
device.Touch() // Update timestamp

err = storage.SaveDevice(ctx, device)
```
```go
err := storage.DeleteDevice(ctx, uid)
if errors.Is(err, storage.ErrNotFound) {
    // Handle not found
}
```
The Fabrica-generated `main.go` includes auto-migration:

```go
// Run auto-migration
if err := client.Schema.Create(
    ctx,
    migrate.WithDropIndex(true),
    migrate.WithDropColumn(true),
); err != nil {
    log.Fatalf("failed creating schema: %v", err)
}
```
Development: safe for rapid iteration.
Production: use versioned migrations instead.
For production, use Ent’s migration system:
```bash
# Generate migration files
fabrica ent migrate --dry-run > migrations/001_init.sql

# Review and apply
psql $DATABASE_URL < migrations/001_init.sql
```
```bash
fabrica init my-api --storage=ent --db=postgres
```

`go.mod` includes:

```
require github.com/lib/pq latest
```

Connection string:

```bash
export DATABASE_URL="postgres://user:pass@localhost/dbname?sslmode=disable"
```

Features: native JSONB columns for spec/status and robust concurrent access; the usual choice for production deployments.
```bash
fabrica init my-api --storage=ent --db=mysql
```

`go.mod` includes:

```
require github.com/go-sql-driver/mysql latest
```

Connection string:

```bash
export DATABASE_URL="user:pass@tcp(localhost:3306)/dbname?parseTime=true"
```
```bash
fabrica init my-api --storage=ent --db=sqlite
```

`go.mod` includes:

```
require github.com/mattn/go-sqlite3 latest
```

Connection string:

```bash
export DATABASE_URL="file:./data.db?cache=shared&_fk=1"
```

Use for: local development, automated tests, and small single-node deployments.
For operations requiring atomicity:
```go
// Start transaction
tx, err := entClient.Tx(ctx)
if err != nil {
    return err
}

// Multiple operations
device1, err := tx.Resource.Create()./* ... */.Save(ctx)
if err != nil {
    tx.Rollback()
    return err
}

device2, err := tx.Resource.Create()./* ... */.Save(ctx)
if err != nil {
    tx.Rollback()
    return err
}

// Commit
return tx.Commit()
```
Access the Ent client for advanced queries:
```go
// Complex filtering
devices, err := entClient.Resource.Query().
    Where(
        resource.KindEQ("Device"),
        resource.HasLabelsWith(
            label.Or(
                label.ValueEQ("production"),
                label.ValueEQ("staging"),
            ),
        ),
    ).
    Order(ent.Desc(resource.FieldCreatedAt)).
    Limit(10).
    All(ctx)

// Count resources by type
counts, err := entClient.Resource.Query().
    GroupBy(resource.FieldKind).
    Aggregate(ent.Count()).
    Ints(ctx)
```
If `fabrica ent` commands fail because the project is not Ent-enabled:

```bash
# Check if you're in an Ent-enabled project
ls internal/storage/ent/schema

# If the directory doesn't exist, your project wasn't initialized with Ent
# Create a new Ent-enabled project:
fabrica init my-new-api --storage=ent
```
This means Ent client code hasn’t been generated yet:
```bash
# Ent generation happens automatically during fabrica generate
fabrica generate

# Note: 'fabrica ent generate' is deprecated but still works for backward compatibility
```
```bash
# Test database connection
psql $DATABASE_URL -c "SELECT 1"

# Verify DATABASE_URL is set
echo $DATABASE_URL

# Check Ent client initialization in logs
tail -f /var/log/myapp.log | grep "ent"
```
```bash
# Check current database state
fabrica ent describe

# Drop and recreate (DEVELOPMENT ONLY - destroys all data!)
psql $DATABASE_URL -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
fabrica ent migrate

# For production, use versioned migrations
```
Enable query logging:
```go
client, err := ent.Open("postgres", dbURL,
    ent.Debug(), // Log all queries
)
```
Add indexes:
```go
// In schema definition
func (Resource) Indexes() []ent.Index {
    return []ent.Index{
        index.Fields("kind", "created_at"),
        index.Fields("namespace", "name"),
    }
}
```
To migrate existing file-based projects to Ent:
```bash
tar -czf backup.tar.gz ./data

fabrica init new-project --storage=ent --db=postgres

# Copy resource definitions from apis/<group>/<version>/ in your old project
# (Legacy projects: move from pkg/resources/* into apis/<group>/<version>/)

cd new-project
fabrica generate  # Automatically generates Ent client code
```
```go
// Load from file backend
devices, _ := fileBackend.LoadAll(ctx, "Device")

// Save to Ent backend
for _, device := range devices {
    entBackend.Save(ctx, "Device", device.UID, device)
}
```
Fabrica now generates query builder functions that expose Ent’s powerful query capabilities:
```go
// Query all devices
devices, err := storage.QueryDevices(ctx).
    Order(ent.Asc(resource.FieldCreatedAt)).
    All(ctx)

// Query with label filtering
prodDevices, err := storage.ListDevicesByLabels(ctx, map[string]string{
    "env":  "prod",
    "team": "ops",
})

// Get single device by UID
device, err := storage.GetDeviceByUID(ctx, "dev-abc123")
```
Generated query functions per resource:

- `Query<PluralName>(ctx)` - Returns the Ent query builder
- `Get<Name>ByUID(ctx, uid)` - Loads a single resource
- `List<PluralName>ByLabels(ctx, labels)` - Filters by exact label match

Generic query functions:

- `QueryResources(ctx, kind)` - Query builder for any kind
- `QueryResourcesByLabels(ctx, kind, labels)` - Label-based filtering

Use the `WithTx` wrapper for atomic multi-resource operations:
```go
import "your-project/internal/storage"

err := storage.WithTx(ctx, func(tx *ent.Tx) error {
    // Create device
    device := &Device{...}
    if err := tx.Resource.Create().
        SetUID(device.UID).
        SetKind("Device").
        // ... other fields
        Exec(ctx); err != nil {
        return err
    }

    // Create related rack (both succeed or both fail)
    rack := &Rack{...}
    if err := tx.Resource.Create().
        SetUID(rack.UID).
        SetKind("Rack").
        // ... other fields
        Exec(ctx); err != nil {
        return err
    }

    return nil // Commit
})
```
If any operation fails, the entire transaction rolls back automatically.
Generated server binaries include export and import subcommands for backup and migration:
Export resources:
```bash
# Export all resources to YAML
./myapi export --format yaml --output ./backup

# Export specific types
./myapi export --kinds Device,Rack --output ./partial

# Export to JSON
./myapi export --format json --output ./backup-json

# Organize by type in subdirectories
./myapi export --format yaml --output ./backup --per-type
```
Import resources:
```bash
# Import from backup
./myapi import --input ./backup

# Dry run to preview changes
./myapi import --input ./backup --dry-run

# Replace mode (delete all first)
./myapi import --input ./backup --mode replace

# Skip existing resources
./myapi import --input ./backup --skip-existing
```
Export directory structure:
```
backup/
├── devices/
│   ├── device-001.yaml
│   └── device-002.yaml
└── racks/
    └── rack-001.yaml
```
Import modes:

- `upsert` (default) - Create new or update existing resources
- `replace` - Delete all resources first, then import
- `skip` - Only create new resources, skip existing

Architecture:

- Generated into `cmd/server/export.go` and `cmd/server/import.go`
- Uses `storage.Query{Resource}(ctx).All(ctx)` for direct storage access
- Uses `storage.WithTx()` for transactional imports

Use cases: backups before upgrades, migrating data between environments, and disaster recovery.
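The semantics of the three import modes can be sketched as a merge over a map of UID → resource (the `applyImport` helper is illustrative, not the generated code):

```go
package main

import "fmt"

// applyImport merges incoming resources into existing ones according to
// the import mode; mode names match the CLI flags described above.
func applyImport(existing, incoming map[string]string, mode string) map[string]string {
	out := map[string]string{}
	if mode != "replace" { // replace starts from an empty store
		for k, v := range existing {
			out[k] = v
		}
	}
	for k, v := range incoming {
		if mode == "skip" {
			if _, ok := out[k]; ok {
				continue // keep the existing resource untouched
			}
		}
		out[k] = v // upsert/replace: create or overwrite
	}
	return out
}

func main() {
	existing := map[string]string{"a": "old"}
	incoming := map[string]string{"a": "new", "b": "new"}
	fmt.Println(applyImport(existing, incoming, "skip")["a"]) // → old
}
```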
See Example 10 - Export/Import for complete workflows.
Fallback for older versions: For Fabrica versions before v0.4.0, use the manual REST-based scripts in Example 10 that interact with the running API.
Filter by labels:
```go
// Find all production servers
servers, err := storage.ListServersByLabels(ctx, map[string]string{
    "env": "prod",
})
```
Get single resource:
```go
server, err := storage.GetServerByUID(ctx, "srv-12345")
```
Atomic multi-resource operation:
```go
err := storage.WithTx(ctx, func(tx *ent.Tx) error {
    // Create server
    // Create config
    // Both succeed or both fail
    return nil
})
```
Q: How do I query with multiple conditions?
A: Use the generated query builders:

```go
servers, err := storage.QueryServers(ctx).
    Where(predicate...).
    Order(...).
    All(ctx)
```
Q: How do I ensure atomicity?
A: Wrap operations in `WithTx()`:

```go
storage.WithTx(ctx, func(tx *ent.Tx) error {
    // Multiple operations
    return nil // Commits if no error
})
```
Q: Can I export and re-import data?
A: Yes, use the generated export/import commands (v0.4.0+):

```bash
# Export all data
./myapi export --format yaml --output ./backup

# Import with various modes
./myapi import --input ./backup --mode upsert   # Default: create or update
./myapi import --input ./backup --mode replace  # Delete all first
./myapi import --input ./backup --dry-run       # Preview changes
```

Commands are generated into `cmd/server/export.go` and `cmd/server/import.go`. They use the storage abstraction directly for efficient, atomic operations. See Example 10 - Export/Import for complete workflows.
Q: What about pagination?
A: Use `Limit`/`Offset` on the query builders:

```go
servers, err := storage.QueryServers(ctx).
    Offset(pageSize * pageNum).
    Limit(pageSize).
    All(ctx)
```
Q: How do migrations work?
A: Ent automatically generates migrations from `internal/storage/ent/schema/*.go`. Run:

```bash
cd internal/storage && go generate ./...
```
Tip: append `?charset=utf8mb4` to the MySQL connection string for full Unicode support.