Garbage Collection in Kubernetes: OwnerReferences and Finalizers


You delete a Deployment. Seconds later, its ReplicaSet is gone. Then the Pods vanish. You didn’t delete them explicitly—Kubernetes garbage collection did. But how does it know what to delete? And what happens when you need to clean up external resources that Kubernetes doesn’t know about?

This post covers the two mechanisms that control object lifecycle: OwnerReferences for automatic cascading deletion, and Finalizers for custom cleanup logic.

Imagine you create a Deployment. Kubernetes creates a ReplicaSet, which creates Pods:

Deployment (my-app)
    └── ReplicaSet (my-app-7d9fc5)
            ├── Pod (my-app-7d9fc5-abc12)
            ├── Pod (my-app-7d9fc5-def34)
            └── Pod (my-app-7d9fc5-ghi56)

Now you delete the Deployment. What should happen to the ReplicaSet and Pods?

Without garbage collection: They’d become orphans—still running, consuming resources, but no longer managed by anything. You’d have to manually track and delete them.

With garbage collection: Kubernetes automatically deletes dependents when their owner is deleted. Delete the Deployment, and the whole tree disappears.
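That cascade can be sketched as a simple graph walk in plain Go (no Kubernetes libraries; the map and names below are illustrative, not the real GC data structures):

```go
package main

import "fmt"

// dependents maps an owner to the objects that list it in ownerReferences.
var dependents = map[string][]string{
	"Deployment/my-app":        {"ReplicaSet/my-app-7d9fc5"},
	"ReplicaSet/my-app-7d9fc5": {"Pod/my-app-7d9fc5-abc12", "Pod/my-app-7d9fc5-def34", "Pod/my-app-7d9fc5-ghi56"},
}

// cascadeDelete collects everything that goes away when owner is deleted,
// walking the dependency graph depth-first.
func cascadeDelete(owner string) []string {
	deleted := []string{owner}
	for _, dep := range dependents[owner] {
		deleted = append(deleted, cascadeDelete(dep)...)
	}
	return deleted
}

func main() {
	for _, obj := range cascadeDelete("Deployment/my-app") {
		fmt.Println(obj) // the Deployment, its ReplicaSet, and all three Pods
	}
}
```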

Every Kubernetes object can declare its owners via the metadata.ownerReferences field:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-7d9fc5-abc12
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: my-app-7d9fc5
      uid: 12345678-1234-1234-1234-123456789abc
      controller: true
      blockOwnerDeletion: true
Field                Required  Description
apiVersion           Yes       API version of the owner
kind                 Yes       Kind of the owner
name                 Yes       Name of the owner
uid                  Yes       UID of the owner (prevents accidental matches)
controller           No        If true, this is THE controller (only one allowed)
blockOwnerDeletion   No        If true, blocks owner deletion until this object is deleted

When your controller creates child resources, set the owner reference:

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

func (r *MyReconciler) createPod(ctx context.Context, owner *myv1.MyResource) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:      owner.Name + "-pod",
            Namespace: owner.Namespace,
        },
        Spec: corev1.PodSpec{
            // ...
        },
    }

    // Set owner reference - enables garbage collection
    if err := controllerutil.SetControllerReference(owner, pod, r.Scheme); err != nil {
        return err
    }

    return r.Create(ctx, pod)
}

SetControllerReference does several things:

  1. Sets ownerReferences with the owner’s details
  2. Sets controller: true (marks this as THE controller)
  3. Sets blockOwnerDeletion: true
  4. Returns an error if the object already has a different controller
  5. Validates that a namespaced owner is in the same namespace as the dependent (cluster-scoped owners are allowed)

An object can have multiple owners:

ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: my-app-7d9fc5
    uid: abc123
    controller: true      # This is the controller
  - apiVersion: v1
    kind: ConfigMap
    name: shared-config
    uid: def456
    controller: false     # Just an owner, not the controller

Rules:

  • Only one owner can have controller: true
  • Object is garbage collected when all owners are deleted
  • Use SetOwnerReference (not SetControllerReference) for non-controller owners
// Non-controller owner reference
if err := controllerutil.SetOwnerReference(configMap, pod, r.Scheme); err != nil {
    return err
}

When you delete an owner, what happens to its dependents? Kubernetes supports three deletion propagation policies:

Foreground. The owner waits for dependents to be deleted first:

1. Owner gets deletionTimestamp set
2. Owner enters "deletion in progress" state
3. GC deletes the dependents
4. Once all dependents with blockOwnerDeletion=true are gone, the owner is deleted
kubectl delete deployment my-app --cascade=foreground
// Programmatically (r embeds client.Client)
propagation := metav1.DeletePropagationForeground
r.Delete(ctx, deployment, &client.DeleteOptions{
    PropagationPolicy: &propagation,
})

Use when: You need to ensure children are gone before the parent disappears (e.g., cleaning up PVCs before deleting a StatefulSet).

Background. The owner is deleted immediately; dependents are garbage collected asynchronously:

1. Owner is deleted immediately
2. GC notices orphaned dependents
3. GC deletes dependents in the background
kubectl delete deployment my-app --cascade=background
# or just
kubectl delete deployment my-app  # background is default

Use when: You don’t need to wait for cleanup (most cases).

Orphan. Delete the owner but leave dependents alone:

1. Owner is deleted
2. Dependents remain, but ownerReferences are cleared
3. Dependents become standalone objects
kubectl delete deployment my-app --cascade=orphan

Use when: You want to “detach” resources. For example, adopting Pods into a new ReplicaSet.
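The three policies can be contrasted with a toy simulation in plain Go (types and names are illustrative, not the real GC implementation):

```go
package main

import "fmt"

type object struct {
	name     string
	ownerRef string // "" means no owner
}

// applyPolicy simulates what happens to the direct dependents of an owner
// under each propagation policy: the deletion order, and which objects survive.
func applyPolicy(policy string, owner string, deps []object) (deleted []string, survivors []object) {
	switch policy {
	case "Foreground":
		// dependents are removed first, owner last
		for _, d := range deps {
			deleted = append(deleted, d.name)
		}
		deleted = append(deleted, owner)
	case "Background":
		// owner is removed immediately, dependents asynchronously afterwards
		deleted = append(deleted, owner)
		for _, d := range deps {
			deleted = append(deleted, d.name)
		}
	case "Orphan":
		// owner is removed; dependents survive with ownerReferences cleared
		deleted = append(deleted, owner)
		for _, d := range deps {
			d.ownerRef = ""
			survivors = append(survivors, d)
		}
	}
	return deleted, survivors
}

func main() {
	deps := []object{{name: "ReplicaSet/my-app-7d9fc5", ownerRef: "Deployment/my-app"}}
	for _, p := range []string{"Foreground", "Background", "Orphan"} {
		del, surv := applyPolicy(p, "Deployment/my-app", deps)
		fmt.Println(p, del, len(surv))
	}
}
```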

Garbage collection is implemented by the garbage collector controller in kube-controller-manager. Here’s how it works:

The GC controller maintains an in-memory graph of all owner-dependent relationships:

┌─────────────────────────────────────────────────────┐
│              GC Dependency Graph                    │
│                                                     │
│  Deployment/my-app                                  │
│       │                                             │
│       └──► ReplicaSet/my-app-7d9fc5                 │
│                 │                                   │
│                 ├──► Pod/my-app-7d9fc5-abc12        │
│                 ├──► Pod/my-app-7d9fc5-def34        │
│                 └──► Pod/my-app-7d9fc5-ghi56        │
│                                                     │
│  Service/my-svc (no dependents)                     │
│                                                     │
└─────────────────────────────────────────────────────┘

When an object is deleted:

  1. GC detects deletion via watch events
  2. Looks up dependents in the graph
  3. For each dependent:
    • If blockOwnerDeletion=true and foreground deletion: delete dependent first
    • If background deletion: queue dependent for deletion
    • If orphan deletion: remove ownerReference from dependent

If the GC finds an object with an ownerReference pointing to a non-existent owner:

ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: my-app-7d9fc5
    uid: abc123  # This UID no longer exists!

The object is considered orphaned and will be deleted (unless orphan propagation was used).

Important: The UID must match. If you delete and recreate an owner with the same name, dependents won’t automatically re-attach—they’ll be garbage collected because the UID changed.
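The UID-matching rule can be sketched in plain Go (a toy model, not the real GC code): an object is orphaned only when none of its owners' UIDs are still live.

```go
package main

import "fmt"

type ref struct {
	name string
	uid  string
}

// isOrphaned reports whether all of an object's owner UIDs are gone from
// the set of live UIDs. A matching name is not enough: the UID must match.
func isOrphaned(refs []ref, liveUIDs map[string]bool) bool {
	for _, r := range refs {
		if liveUIDs[r.uid] {
			return false
		}
	}
	return len(refs) > 0
}

func main() {
	// The owner was deleted and recreated: same name, new UID.
	live := map[string]bool{"def456": true}
	refs := []ref{{name: "my-app-7d9fc5", uid: "abc123"}}
	fmt.Println(isOrphaned(refs, live)) // true: old UID no longer exists
}
```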

OwnerReferences handle Kubernetes-native relationships. But what if deleting your custom resource should:

  • Delete an S3 bucket?
  • Remove a DNS record?
  • Clean up a database user?
  • Revoke cloud IAM permissions?

Kubernetes doesn’t know about these external resources. Finalizers let you run custom cleanup logic before an object is deleted.

A finalizer is just a string in metadata.finalizers:

apiVersion: myapp.example.com/v1
kind: Database
metadata:
  name: my-db
  finalizers:
    - databases.myapp.example.com/cleanup
spec:
  # ...

When you delete an object with finalizers:

  1. Kubernetes sets deletionTimestamp but doesn’t delete the object
  2. Object enters “terminating” state — it still exists in etcd
  3. Your controller sees the deletion (via watch)
  4. Controller performs cleanup (delete S3 bucket, etc.)
  5. Controller removes the finalizer from the object
  6. Once all finalizers are removed, Kubernetes deletes the object
DELETE request
     │
     ▼
┌─────────────────────────┐
│ Has finalizers?         │
│                         │
│ Yes: Set deletionTime-  │
│      stamp, keep object │
│                         │
│ No: Delete immediately  │
└─────────────────────────┘
     │ (Yes)
     ▼
┌─────────────────────────┐
│ Object in "terminating" │
│ state, still in etcd    │
└─────────────────────────┘
     │
     ▼
┌─────────────────────────┐
│ Controller sees object  │
│ with deletionTimestamp  │
│                         │
│ Performs cleanup...     │
│ Removes finalizer       │
└─────────────────────────┘
     │
     ▼
┌─────────────────────────┐
│ All finalizers removed  │
│ Object deleted from     │
│ etcd                    │
└─────────────────────────┘
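The lifecycle above can be modeled as a small state machine in plain Go (a sketch, not the API server's actual logic):

```go
package main

import "fmt"

type object struct {
	deletionTimestamp string
	finalizers        []string
}

// requestDelete mimics the API server: with finalizers present the object
// is only marked terminating; without them it is deleted outright.
// Returns true if the object was actually removed.
func requestDelete(o *object) bool {
	if len(o.finalizers) > 0 {
		o.deletionTimestamp = "2025-01-25T10:00:00Z"
		return false
	}
	return true
}

// removeFinalizer mimics the controller's final step; once the list is
// empty, a terminating object can be removed from etcd.
func removeFinalizer(o *object, name string) bool {
	kept := o.finalizers[:0]
	for _, f := range o.finalizers {
		if f != name {
			kept = append(kept, f)
		}
	}
	o.finalizers = kept
	return o.deletionTimestamp != "" && len(o.finalizers) == 0
}

func main() {
	o := &object{finalizers: []string{"databases.myapp.example.com/cleanup"}}
	fmt.Println(requestDelete(o))                                          // false: terminating, not gone
	fmt.Println(removeFinalizer(o, "databases.myapp.example.com/cleanup")) // true: now deletable
}
```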

Here’s the standard pattern in a controller:

const finalizerName = "databases.myapp.example.com/cleanup"

func (r *DatabaseReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := log.FromContext(ctx)

    // Fetch the Database instance
    db := &myappv1.Database{}
    if err := r.Get(ctx, req.NamespacedName, db); err != nil {
        if errors.IsNotFound(err) {
            // Object not found, could have been deleted after reconcile request
            return ctrl.Result{}, nil
        }
        return ctrl.Result{}, err
    }

    // Check if object is being deleted
    if db.ObjectMeta.DeletionTimestamp.IsZero() {
        // Object is NOT being deleted
        // Add finalizer if it doesn't exist
        if !controllerutil.ContainsFinalizer(db, finalizerName) {
            controllerutil.AddFinalizer(db, finalizerName)
            if err := r.Update(ctx, db); err != nil {
                return ctrl.Result{}, err
            }
        }
    } else {
        // Object IS being deleted
        if controllerutil.ContainsFinalizer(db, finalizerName) {
            // Run cleanup logic
            if err := r.cleanupExternalResources(ctx, db); err != nil {
                // If cleanup fails, requeue
                return ctrl.Result{}, err
            }

            // Remove finalizer to allow deletion
            controllerutil.RemoveFinalizer(db, finalizerName)
            if err := r.Update(ctx, db); err != nil {
                return ctrl.Result{}, err
            }
        }

        // Finalizer removed, object will be deleted
        return ctrl.Result{}, nil
    }

    // Normal reconciliation logic
    return r.reconcileDatabase(ctx, db)
}

func (r *DatabaseReconciler) cleanupExternalResources(ctx context.Context, db *myappv1.Database) error {
    log := log.FromContext(ctx)
    log.Info("Cleaning up external resources", "database", db.Name)

    // Delete the actual database
    if err := r.cloudProvider.DeleteDatabase(ctx, db.Spec.DatabaseID); err != nil {
        // Ignore "not found" errors — resource may already be deleted
        if !isNotFound(err) {
            return err
        }
    }

    // Delete associated secrets
    if err := r.cloudProvider.DeleteCredentials(ctx, db.Spec.CredentialsID); err != nil {
        if !isNotFound(err) {
            return err
        }
    }

    log.Info("Successfully cleaned up external resources")
    return nil
}

1. Add finalizer early

Add the finalizer before creating external resources:

// Good: Add finalizer first
if !controllerutil.ContainsFinalizer(db, finalizerName) {
    controllerutil.AddFinalizer(db, finalizerName)
    if err := r.Update(ctx, db); err != nil {
        return ctrl.Result{}, err
    }
    // Requeue to continue after finalizer is persisted
    return ctrl.Result{Requeue: true}, nil
}

// Now safe to create external resource
if err := r.createExternalDatabase(ctx, db); err != nil {
    return ctrl.Result{}, err
}

If you create the external resource first and then crash before adding the finalizer, the resource becomes orphaned.

2. Make cleanup idempotent

Cleanup may run multiple times (controller restarts, errors, requeues):

func (r *Reconciler) cleanupExternalResources(ctx context.Context, db *myappv1.Database) error {
    // Idempotent: safe to call even if already deleted
    err := r.cloudProvider.DeleteDatabase(ctx, db.Spec.DatabaseID)
    if err != nil && !isNotFound(err) {
        return err  // Real error, retry
    }
    // Success or already deleted — both are fine
    return nil
}
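To see why this matters, here is a toy in-memory provider (all names hypothetical) showing that cleanup can safely run twice:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// fakeProvider: deletes succeed once, then report "not found".
type fakeProvider struct {
	databases map[string]bool
}

func (p *fakeProvider) DeleteDatabase(id string) error {
	if !p.databases[id] {
		return errNotFound
	}
	delete(p.databases, id)
	return nil
}

// cleanup treats "not found" as success, so retries and requeues are harmless.
func cleanup(p *fakeProvider, id string) error {
	if err := p.DeleteDatabase(id); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	p := &fakeProvider{databases: map[string]bool{"db-1": true}}
	fmt.Println(cleanup(p, "db-1")) // <nil>: deleted
	fmt.Println(cleanup(p, "db-1")) // <nil>: already gone, still success
}
```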

3. Handle cleanup failures gracefully

If cleanup fails, return an error to requeue. But consider adding a timeout or retry limit:

func (r *Reconciler) cleanupExternalResources(ctx context.Context, db *myappv1.Database) error {
    log := log.FromContext(ctx)

    // Check if we've been trying too long
    if db.DeletionTimestamp != nil {
        deleteAge := time.Since(db.DeletionTimestamp.Time)
        if deleteAge > 1*time.Hour {
            // Log and give up — manual intervention required
            log.Error(nil, "Cleanup taking too long, giving up",
                "database", db.Name, "age", deleteAge)
            return nil  // Remove finalizer anyway
        }
    }

    return r.doCleanup(ctx, db)
}

4. Use unique finalizer names

Include your domain to avoid collisions:

// Good
const finalizerName = "databases.myapp.example.com/cleanup"

// Bad — could collide with other controllers
const finalizerName = "cleanup"
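A quick sanity check for the convention (a sketch; note that some built-in Kubernetes finalizers, such as foregroundDeletion, are intentionally unqualified):

```go
package main

import (
	"fmt"
	"strings"
)

// isQualified reports whether a finalizer name carries a domain qualifier
// ("domain/name"), the convention that avoids collisions between controllers.
func isQualified(finalizer string) bool {
	parts := strings.SplitN(finalizer, "/", 2)
	return len(parts) == 2 && strings.Contains(parts[0], ".") && parts[1] != ""
}

func main() {
	fmt.Println(isQualified("databases.myapp.example.com/cleanup")) // true
	fmt.Println(isQualified("cleanup"))                             // false: no domain, collision-prone
}
```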

5. Don’t block indefinitely

A stuck finalizer blocks deletion forever. Always have a path to completion:

func (r *Reconciler) cleanupExternalResources(ctx context.Context, db *myappv1.Database) error {
    // Use context with timeout
    cleanupCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
    defer cancel()

    if err := r.doCleanup(cleanupCtx, db); err != nil {
        if cleanupCtx.Err() == context.DeadlineExceeded {
            // Timeout — requeue with backoff
            return fmt.Errorf("cleanup timed out, will retry: %w", err)
        }
        return err
    }
    return nil
}

For complex resources, you often need both:

func (r *DatabaseReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    db := &myappv1.Database{}
    if err := r.Get(ctx, req.NamespacedName, db); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Handle deletion
    if !db.DeletionTimestamp.IsZero() {
        return r.handleDeletion(ctx, db)
    }

    // Add finalizer for external resources
    if !controllerutil.ContainsFinalizer(db, finalizerName) {
        controllerutil.AddFinalizer(db, finalizerName)
        if err := r.Update(ctx, db); err != nil {
            return ctrl.Result{}, err
        }
    }

    // Create Secret with owner reference (GC handles this)
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:      db.Name + "-credentials",
            Namespace: db.Namespace,
        },
        Data: map[string][]byte{
            "password": generatePassword(),
        },
    }
    if err := controllerutil.SetControllerReference(db, secret, r.Scheme); err != nil {
        return ctrl.Result{}, err
    }
    if err := r.Create(ctx, secret); err != nil && !errors.IsAlreadyExists(err) {
        return ctrl.Result{}, err
    }

    // Create external database (finalizer handles cleanup)
    if err := r.ensureExternalDatabase(ctx, db); err != nil {
        return ctrl.Result{}, err
    }

    return ctrl.Result{}, nil
}

func (r *DatabaseReconciler) handleDeletion(ctx context.Context, db *myappv1.Database) (ctrl.Result, error) {
    if !controllerutil.ContainsFinalizer(db, finalizerName) {
        return ctrl.Result{}, nil
    }

    // Clean up external resources (not covered by OwnerReferences)
    if err := r.deleteExternalDatabase(ctx, db); err != nil {
        return ctrl.Result{}, err
    }

    // Remove finalizer
    // Note: The Secret will be automatically deleted by GC (OwnerReference)
    controllerutil.RemoveFinalizer(db, finalizerName)
    if err := r.Update(ctx, db); err != nil {
        return ctrl.Result{}, err
    }

    return ctrl.Result{}, nil
}

In this example:

  • Secret: Uses OwnerReference → automatic GC deletion
  • External database: Uses Finalizer → custom cleanup logic
Some useful commands for inspecting owner references and finalizers:

# See owner references
kubectl get pod my-pod -o jsonpath='{.metadata.ownerReferences}' | jq

# Find all objects owned by a specific resource
kubectl get all --all-namespaces -o json | jq '
  .items[] | 
  select(.metadata.ownerReferences[]?.name == "my-deployment") |
  "\(.kind)/\(.metadata.name)"
'
# See finalizers on an object
kubectl get database my-db -o jsonpath='{.metadata.finalizers}'

# Find objects stuck in terminating (have deletionTimestamp but still exist)
kubectl get all --all-namespaces -o json | jq '
  .items[] |
  select(.metadata.deletionTimestamp != null) |
  "\(.kind)/\(.metadata.namespace)/\(.metadata.name): \(.metadata.finalizers)"
'

If an object is stuck terminating because the controller is gone or broken:

# DANGEROUS: Remove finalizer manually to unblock deletion
kubectl patch database my-db -p '{"metadata":{"finalizers":null}}' --type=merge

# Or edit directly
kubectl edit database my-db
# Remove the finalizers array

Warning: This skips cleanup! External resources may be orphaned.

# Check garbage collector logs in controller-manager
kubectl logs -n kube-system kube-controller-manager-<node> | grep -i garbage

Sometimes you want a controller to “adopt” existing resources:

func (r *Reconciler) adoptOrphanedPods(ctx context.Context, owner *myv1.MyResource) error {
    // Find pods that should be owned but aren't
    pods := &corev1.PodList{}
    if err := r.List(ctx, pods, 
        client.InNamespace(owner.Namespace),
        client.MatchingLabels{"app": owner.Name},
    ); err != nil {
        return err
    }

    for _, pod := range pods.Items {
        // Skip if already owned by someone else
        if metav1.GetControllerOf(&pod) != nil {
            continue
        }

        // Adopt the pod
        if err := controllerutil.SetControllerReference(owner, &pod, r.Scheme); err != nil {
            return err
        }
        if err := r.Update(ctx, &pod); err != nil {
            return err
        }
    }
    return nil
}

OwnerReferences don’t work across namespaces: a namespaced dependent can only reference owners in its own namespace (or cluster-scoped owners). For cross-namespace relationships, use finalizers:

// ClusterDatabase (cluster-scoped) creates Secrets in user namespaces
func (r *ClusterDatabaseReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    cdb := &myv1.ClusterDatabase{}
    if err := r.Get(ctx, req.NamespacedName, cdb); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    if !cdb.DeletionTimestamp.IsZero() {
        // Clean up secrets in all namespaces
        for _, ns := range cdb.Spec.TargetNamespaces {
            secret := &corev1.Secret{
                ObjectMeta: metav1.ObjectMeta{
                    Name:      cdb.Name + "-credentials",
                    Namespace: ns,
                },
            }
            if err := r.Delete(ctx, secret); err != nil && !errors.IsNotFound(err) {
                return ctrl.Result{}, err
            }
        }
        // Remove finalizer
        controllerutil.RemoveFinalizer(cdb, finalizerName)
        return ctrl.Result{}, r.Update(ctx, cdb)
    }

    // Can't use OwnerReference (cross-namespace), so we must clean up manually
    // ...
}

If you add a finalizer but your controller isn’t running (or crashes permanently), objects get stuck:

apiVersion: myapp.example.com/v1
kind: Database
metadata:
  name: stuck-db
  deletionTimestamp: "2025-01-25T10:00:00Z"  # Stuck!
  finalizers:
    - databases.myapp.example.com/cleanup     # No controller to remove this

Prevention:

  • Ensure controllers are highly available
  • Consider finalizer timeouts
  • Document manual recovery procedures

Don’t create circular ownership:

# Bad: A owns B, B owns A
# Result: Neither can be deleted!

# Object A
ownerReferences:
  - name: B
    uid: ...

# Object B
ownerReferences:
  - name: A
    uid: ...

The GC controller detects and logs circular references but can’t resolve them automatically.
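Cycle detection over ownerReferences is a plain graph walk; a minimal sketch (toy types, not the GC's real data structures):

```go
package main

import "fmt"

// hasCycle follows the chain of owners from start and reports whether it
// revisits an object — the circular-ownership situation described above.
// owners maps each object to its owner ("" or absent means no owner).
func hasCycle(owners map[string]string, start string) bool {
	seen := map[string]bool{}
	for cur := start; cur != ""; cur = owners[cur] {
		if seen[cur] {
			return true
		}
		seen[cur] = true
	}
	return false
}

func main() {
	// A owns B, B owns A: neither can ever be deleted.
	cyclic := map[string]string{"A": "B", "B": "A"}
	fmt.Println(hasCycle(cyclic, "A")) // true

	// A normal ownership chain terminates.
	chain := map[string]string{"Pod": "ReplicaSet", "ReplicaSet": "Deployment"}
	fmt.Println(hasCycle(chain, "Pod")) // false
}
```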

You can see the UID-matching behavior in action:

# Create deployment
kubectl create deployment my-app --image=nginx

# Note the ReplicaSet's ownerReference UID
kubectl get rs -o jsonpath='{.items[0].metadata.ownerReferences[0].uid}'
# abc123

# Delete and recreate deployment with same name
kubectl delete deployment my-app
kubectl create deployment my-app --image=nginx

# New deployment has different UID
kubectl get deployment my-app -o jsonpath='{.metadata.uid}'
# def456 (different!)

# Old ReplicaSet (if it somehow survived) would be orphaned
# because its ownerReference.uid (abc123) doesn't match

Kubernetes provides two mechanisms for managing object lifecycle:

Mechanism         Use Case                                       How It Works
OwnerReferences   Kubernetes-native parent-child relationships   Automatic cascading deletion by GC controller
Finalizers        External resources, custom cleanup logic       Blocks deletion until controller removes finalizer

OwnerReferences:

  • Set via controllerutil.SetControllerReference() or SetOwnerReference()
  • Same namespace only
  • Automatic cleanup by garbage collector
  • Three propagation policies: Foreground, Background, Orphan

Finalizers:

  • Add before creating external resources
  • Remove after cleanup is complete
  • Must be idempotent
  • Stuck finalizers block deletion indefinitely

When to use which:

  • Child Kubernetes objects (Pods, Secrets, ConfigMaps) → OwnerReferences
  • External resources (cloud databases, DNS records, IAM) → Finalizers
  • Cross-namespace relationships → Finalizers
  • Need custom cleanup ordering → Finalizers

The combination of both gives you complete control over resource lifecycle—automatic cleanup for Kubernetes objects, and guaranteed custom cleanup for everything else.