Evaluating C, C++, Rust, and Zig for Modern Low-Level Development (Memory Management)

Balancing Speed, Safety, and Complexity in Low-Level Development and Systems Programming


In this part, we will focus on the memory management of each programming language.


Introduction

In the previous section, we explored the design philosophies of C, C++, Rust, and Zig, analyzing how each language balances performance, safety, and complexity in low-level programming.

In this section, we will delve into memory management, comparing how these four languages handle stack vs. heap allocation, manual vs. automated memory control, and safety mechanisms. We will examine common pitfalls like memory leaks, fragmentation, and undefined behavior, and explore how each language mitigates (or exposes) these risks.

By understanding these trade-offs, we can better assess how memory management choices influence developer experience, performance optimization, and real-world system reliability.


Memory Management

Memory Layout

The fundamental memory layout (stack, heap, static, and text segments) remains largely the same for all programming languages—C, C++, Rust, and Zig—because it is dictated by the underlying operating system and hardware rather than the language itself. However, how each language interacts with these memory regions differs, especially regarding memory management, allocation strategies, and safety mechanisms.


The stack is static in the sense that its size is determined at program startup and does not grow dynamically during execution. It is managed automatically by the OS or runtime, and the programmer does not need to allocate or free memory explicitly. It stores function call frames, local variables, and return addresses. When a function exits, its stack frame is automatically deallocated.

The heap is dynamic and grows as needed. It allows manual memory management (as in C, C++, and Zig), garbage collection (as in Java, JavaScript, and Go), or ownership-based automatic management (as in Rust). Unlike the stack, heap memory must be explicitly managed, either by the developer or through an automated system like GC or ownership rules.

Thus, stack memory is predictable and automatic, while heap memory provides flexibility at the cost of manual intervention or runtime overhead.

C

In C, heap memory is managed manually using:

  • malloc(size_t size): Allocates a block of memory of the given size but does not initialize it.
  • realloc(void *ptr, size_t new_size): Resizes an already allocated block, copying its contents if needed.
  • free(void *ptr): Deallocates a previously allocated memory block.

Heap memory is particularly useful for dynamically growing data structures like arrays, linked lists, and trees that need to be allocated at runtime when the required size is not known in advance.

Since dynamically allocated memory is not associated with a named variable (as stack variables are), the only way to reference it is through pointers. When malloc() or realloc() is used, it returns a pointer to the allocated block, which must be stored and later used to access or modify the memory.

#include <stdio.h>
#include <stdlib.h>
int main() {
    int *arr = malloc(5 * sizeof(int));  // Allocate an array of 5 integers on the heap
    if (arr == NULL) {
        printf("Memory allocation failed\n");
        return 1;  // Exit if allocation failed
    }
    // Assign values using pointer arithmetic
    for (int i = 0; i < 5; i++) {
        *(arr + i) = i * 10;  // Equivalent to arr[i] = i * 10;
    }
    // Read values using pointer dereferencing
    for (int i = 0; i < 5; i++) {
        printf("%d ", *(arr + i));  // Equivalent to printf("%d ", arr[i]);
    }
    free(arr);  // Deallocate heap memory
    return 0;
}

However, manually managing memory introduces several potential pitfalls, making it both powerful and risky.

Memory Leaks (Forgetting to Free Memory)

  • If dynamically allocated memory is not freed, it remains allocated indefinitely, leading to a memory leak.
  • Over time, excessive leaks can exhaust available memory, causing performance degradation or even system crashes.
#include <stdlib.h>
void allocate_memory() {
    int *ptr = malloc(10 * sizeof(int));  // Allocates 10 integers
    // Forgetting to call free(ptr) causes a memory leak
}
int main() {
    allocate_memory();
    return 0;
}

Dangling Pointers (A Pointer That Still Holds a Freed Memory Address)

  • A pointer that still refers to a memory location that has already been deallocated is called a dangling pointer.
  • Dangling pointers arise when memory is freed (or an object is destroyed) without updating the pointers that reference it; those pointers then point to deallocated memory.
#include <stdio.h>
#include <stdlib.h>
int main() {
    int *ptr = malloc(sizeof(int));
    *ptr = 42;
    free(ptr);  // Memory is freed
    printf("%d\n", *ptr);  // Undefined behavior: accessing freed memory
    return 0;
}
// Solution
free(ptr);
ptr = NULL;  // Prevents accidental use

Double Free (Freeing the Same Pointer Twice)

  • Calling free() twice on the same pointer leads to corruption of memory management structures.
  • Some implementations might crash, while others may allow the program to continue running unpredictably.
#include <stdlib.h>
int main() {
    int *ptr = malloc(sizeof(int));
    free(ptr);
    free(ptr);  // ERROR: Double free
    return 0;
}

Buffer Overflows (Writing Outside Allocated Memory)

Occurs when writing beyond the allocated heap memory, leading to corruption of adjacent memory.

#include <stdio.h>
#include <stdlib.h>
int main() {
    int *arr = malloc(5 * sizeof(int));  // Allocating space for 5 integers
    arr[5] = 42;  // ERROR: Out-of-bounds access
    printf("%d\n", arr[5]);
    free(arr);
    return 0;
}

Memory Fragmentation

  • Fragmentation occurs when memory is allocated and freed in an unorganized way, leading to gaps in the heap.
  • Over time, the available memory may become fragmented, making it harder to allocate large blocks, even if the total free memory is sufficient.
void create_fragmentation() {
    int *a = malloc(1000);
    int *b = malloc(1000);
    free(a);  // Leaves a hole in memory
    int *c = malloc(500);  // Might not reuse the hole efficiently
}

While C's manual memory management requires discipline, it becomes manageable and even efficient when we adopt its philosophy and leverage the right tools.

C++

C++ memory management is mostly like C’s, in that it still requires manual control over memory allocation and deallocation. However, C++ provides better APIs and tools that make memory management more structured and safer if used properly.

C++ has new and delete as replacements for malloc() and free():

  • new dynamically allocates an object and calls its constructor. Unlike malloc, new initializes the object properly.
  • delete releases the memory and calls the destructor.
#include <iostream>
int main() {
    int *ptr = new int(42); // Allocating memory on the heap
    std::cout << *ptr << std::endl; // Accessing memory
    delete ptr; // Freeing memory
    return 0;
}

In addition, modern C++ (since C++11) introduces smart pointers, which automate memory management and prevent common issues like leaks and dangling pointers.

C++ leverages Object-Oriented Programming (OOP) to implement automatic memory management through destructors. This is the foundation of RAII (Resource Acquisition Is Initialization), which ensures that allocated resources (memory, file handles, sockets, etc.) are automatically cleaned up when an object goes out of scope.

std::unique_ptr (Exclusive Ownership)

  • std::unique_ptr follows a single-owner model (exclusive ownership).
  • std::unique_ptr owns a dynamically allocated object and is responsible for freeing it when the unique_ptr itself is destroyed. This happens automatically when:
    • It goes out of scope (end of function/block).
    • It is explicitly reset using .reset().
    • It is moved to another std::unique_ptr (ownership transfer).
  • std::unique_ptr cannot be copied; it can only be moved. Moving a std::unique_ptr transfers ownership to the destination, and the original pointer is set to nullptr.
#include <iostream>
#include <memory>
class Test {
public:
    Test() { std::cout << "Test created\n"; }
    ~Test() { std::cout << "Test destroyed\n"; }
};
int main() {
    std::unique_ptr<Test> ptr1 = std::make_unique<Test>(); // Allocates object
    std::unique_ptr<Test> ptr2 = std::move(ptr1); // Transfers ownership to ptr2
    if (!ptr1) {
        std::cout << "ptr1 is now null\n";
    }
    return 0; // ptr2 goes out of scope, Test is destroyed
}

std::shared_ptr (Reference-Counted Shared Ownership)

  • Reference-counted smart pointer.
  • Allows multiple pointers to share ownership of the same object.
  • Memory is automatically freed when the last reference goes out of scope.
#include <iostream>
#include <memory>
int main() {
    std::shared_ptr<int> ptr1 = std::make_shared<int>(100);
    std::shared_ptr<int> ptr2 = ptr1; // Shared ownership
    std::cout << *ptr1 << ", " << *ptr2 << std::endl; // Both access the same memory
    return 0; // Memory is automatically freed when both pointers go out of scope
}

🔴 std::shared_ptr avoids manual memory tracking, but introduces slight overhead due to reference counting.

Safety & Performance Trade-offs

As in any other engineering discipline, there is no perfect solution: even with modern C++ features, memory issues can arise. They are less frequent than in raw pointer-based C++, but careful design choices are still required to avoid them:

  1. Even though std::shared_ptr automates memory management using reference counting, it cannot break cyclic references (when two or more shared_ptr instances reference each other). This prevents the reference count from reaching zero, leading to a memory leak.
  2. While std::unique_ptr manages memory automatically, using custom deleters incorrectly can result in double deletion or memory corruption.
  3. std::weak_ptr does not contribute to reference counting, which prevents cyclic leaks. However, if it is used without checking whether the object is still valid, it may become dangling, leading to undefined behavior.
  4. Stack Exhaustion (Deep Recursion or Large Objects): Even with smart pointers, excessive recursion depth or large object allocations can still cause stack overflow.
#include <iostream>
#include <memory>
void recursiveFunction(int depth) {
    std::unique_ptr<int> ptr = std::make_unique<int>(depth);
    if (depth > 0) {
        recursiveFunction(depth - 1); // Too many recursive calls cause stack overflow
    }
}
int main() {
    recursiveFunction(100000); // Stack overflow risk
    return 0;
}

Rust

Rust redefines memory management by enforcing safety at compile time through its ownership model and borrow checker. Unlike C and C++, Rust does not require manual memory deallocation, nor does it use garbage collection. Instead, it ensures that every value has a clear owner, and memory is automatically freed when the owner goes out of scope.

The borrowing system prevents a mutable reference from coexisting with any other reference to the same object, eliminating entire classes of bugs such as use-after-free, buffer overflows, and data races in multithreaded code.

Borrowing means taking a reference to data without owning it. Each value in Rust has a single owner (by default, that’s the variable that owns it). When we assign or pass that value to another variable, by default it moves (transfers ownership) rather than creating an alias.

The key point is that Rust's compiler resolves most memory-lifetime issues at compile time. At runtime there is no extra metadata or overhead; everything is checked statically.
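A minimal sketch of the move-versus-borrow distinction (the measure function is illustrative):

```rust
fn main() {
    let s = String::from("hello"); // `s` owns the heap buffer

    let len = measure(&s);         // borrow: `s` remains usable afterwards
    println!("{} has length {}", s, len);

    let t = s;                     // move: ownership transfers to `t`
    // println!("{}", s);          // compile error: `s` was moved
    println!("{}", t);             // `t` frees the buffer at end of scope
}

fn measure(text: &str) -> usize {  // takes a reference, not ownership
    text.len()
}
```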

Single Ownership: Box&lt;T&gt;

Box&lt;T&gt; provides exclusive ownership over a heap-allocated value. When the Box goes out of scope, it automatically deallocates the memory.

fn main() {
    let b = Box::new(42); // Heap allocation
    println!("{}", *b); // Dereferencing
} // `b` goes out of scope and memory is freed

📌 Similar to: std::unique_ptr in C++ (exclusive ownership, automatically freed when out of scope).

Shared Ownership: Rc&lt;T&gt; (Reference Counting)

Rc&lt;T&gt; is a reference-counted smart pointer that allows multiple owners of the same heap-allocated value. It is single-threaded only (not thread-safe).

use std::rc::Rc;

fn main() {
    let a = Rc::new(42); // Shared heap allocation
    let b = Rc::clone(&a); // Increases reference count

    println!("a: {}, b: {}", a, b); // Both share ownership
    println!("Reference count: {}", Rc::strong_count(&a)); // Output: 2
} // Memory is freed when the last reference is dropped

📌 Similar to: std::shared_ptr in C++ (reference counting). Just like std::shared_ptr, Rc&lt;T&gt; can form reference cycles, which leak memory.
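The escape hatch is the same in spirit as C++'s std::weak_ptr: a non-owning Weak&lt;T&gt; handle. A minimal sketch with an illustrative Node type:

```rust
use std::rc::{Rc, Weak};

struct Node {
    parent: Weak<Node>, // non-owning back-reference: no cycle possible
    value: i32,
}

fn main() {
    let parent = Rc::new(Node { parent: Weak::new(), value: 1 });

    // Rc::downgrade creates a Weak handle; the strong count stays at 1.
    let child = Rc::new(Node { parent: Rc::downgrade(&parent), value: 2 });

    assert_eq!(Rc::strong_count(&parent), 1);

    // upgrade() yields Some(Rc) while the target is alive, None once dropped.
    if let Some(p) = child.parent.upgrade() {
        println!("child {} sees parent {}", child.value, p.value);
    }
} // both nodes are freed here: no strong cycle keeps them alive
```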

Thread-Safe Shared Ownership: Arc&lt;T&gt;

Arc&lt;T&gt; is the thread-safe version of Rc&lt;T&gt;, using atomic reference counting for multi-threaded environments.

use std::sync::Arc;
use std::thread;
fn main() {
    let counter = Arc::new(42); // Thread-safe shared ownership
    let handles: Vec<_> = (0..3).map(|_| {
        let counter_clone = Arc::clone(&counter);
        thread::spawn(move || {
            println!("{}", counter_clone);
        })
    }).collect();
    for handle in handles {
        handle.join().unwrap();
    }
}

📌 Similar to: std::shared_ptr in C++ with std::atomic for thread safety.

Interior Mutability: RefCell&lt;T&gt; and Mutex&lt;T&gt;

Rust does not allow mutation through shared references (&T). However, if we need mutable shared access, we can use interior mutability mechanisms:

  • RefCell&lt;T&gt; (Single-threaded): Moves borrow checking from compile time to runtime.
  • Mutex&lt;T&gt; (Multi-threaded): Ensures safe concurrent access across threads.
use std::cell::RefCell;

fn main() {
    let value = RefCell::new(42);
    *value.borrow_mut() = 100; // Mutable access inside immutable reference
    println!("{}", value.borrow());
}

📌 Similar to: a mutable member accessed through a const interface in C++. Runtime borrow checking can cause a panic! if borrowing rules are violated.
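For the multi-threaded case, Arc and Mutex are typically combined; a minimal sketch:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0)); // shared, mutable, thread-safe

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // lock() blocks until the mutex is free, then grants
                // exclusive mutable access to the inner value.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("final count: {}", *counter.lock().unwrap()); // prints 4
}
```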

Safety & Performance Trade-offs

While Rust’s ownership, borrowing, and lifetimes eliminate common memory issues, they also introduce new constraints that can feel restrictive:

  • Borrowing and lifetimes require a new way of thinking, making it harder for beginners or those coming from garbage-collected languages (Python, Java).
  • Rust moves ownership by default when assigning or passing a value, meaning developers often need explicit clone() calls to work around ownership restrictions.
  • Lifetimes ensure references are always valid, but writing explicit lifetime annotations can be tricky, especially for complex function signatures.
  • Rust’s strict rules sometimes frustrate developers, leading them to overuse unsafe to bypass restrictions.

Rust eliminates entire classes of memory errors, but requires re-learning memory management compared to C++’s RAII-based approach.

Zig

Zig brings back manual memory management like C, but instead of relying on malloc() and free(), it introduces a structured and explicit allocator system. The function responsible for allocation takes an Allocator argument and leaves it to the caller to handle deallocation.

const std = @import("std");

fn createBuffer(allocator: std.mem.Allocator, size: usize) ![]u8 {
    return try allocator.alloc(u8, size); // Allocating a buffer of `size` bytes
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){}; // Creating a general-purpose allocator
    defer _ = gpa.deinit(); // Reports leaked allocations in debug builds
    const allocator = gpa.allocator(); // Extracting the allocator interface

    const buffer = try createBuffer(allocator, 128); // Allocating a buffer
    defer allocator.free(buffer); // Ensuring the buffer is freed

    std.debug.print("Buffer allocated with {} bytes\n", .{buffer.len});
}

The convention in Zig dictates that since the caller provides the allocator, it is also responsible for later freeing the memory. This approach makes memory management explicit while allowing developers to choose different allocation strategies, such as heap allocation, stack allocation, or arena-based allocation. This flexibility means that developers can swap in different memory strategies depending on performance needs, ensuring greater control over how and when memory is freed.

Since Zig does not have destructors or RAII like C++, it uses defer statements to guarantee resource cleanup. defer schedules a block of code to run when the current scope exits, making it easy to ensure that allocated memory is freed properly, even in the presence of early returns or errors.

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const buffer = try allocator.alloc(u8, 256);
    defer allocator.free(buffer); // Ensures memory is freed even if an error occurs

    if (buffer.len > 100) return error.TooLarge; // `defer` still executes here
}

defer ensures that allocator.free(buffer) runs even if the function returns early due to an error, preventing memory leaks.

For cases where many small allocations need to be freed together, Zig provides arena allocators, similar to Objective-C’s NSAutoreleasePool. Instead of freeing individual allocations, an arena allocator frees all allocated memory at once when deinit() is called.

const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // Free all allocated memory in the arena at once

    const allocator = arena.allocator();

    const buffer1 = try allocator.alloc(u8, 64);
    const buffer2 = try allocator.alloc(u8, 128);

    std.debug.print("Allocated two buffers of {} and {} bytes\n", .{ buffer1.len, buffer2.len });

    // No need to call `free()` on buffer1 and buffer2 individually
}

Using an arena allocator is particularly useful for batch-processing scenarios where multiple allocations need to be freed together at a specific point in time, reducing fragmentation and allocation overhead.

Zig supports multiple allocation strategies, allowing fine-tuned control over memory usage:

  • General-Purpose Allocator (std.heap.GeneralPurposeAllocator): The default dynamic memory allocator.
  • Arena Allocator (std.heap.ArenaAllocator): Optimized for bulk allocations, freeing everything at once.
  • Stack Fallback Allocator (std.heap.stackFallback): Serves allocations from a fixed-size stack buffer first, ideal for short-lived allocations.
  • Fixed Buffer Allocator (std.heap.FixedBufferAllocator): Uses a pre-allocated buffer, avoiding heap allocations.

Each allocator serves a different purpose, and Zig allows us to choose the right one based on performance needs and memory lifetime requirements.
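As a sketch of the fixed-buffer strategy (assuming a recent Zig standard library; the exact API has shifted across versions), a FixedBufferAllocator serves allocations from a caller-provided buffer, so short-lived allocations never touch the heap:

```zig
const std = @import("std");

pub fn main() !void {
    var buffer: [256]u8 = undefined; // pre-allocated storage on the stack
    var fba = std.heap.FixedBufferAllocator.init(&buffer);
    const allocator = fba.allocator();

    const slice = try allocator.alloc(u8, 64); // carved out of `buffer`
    defer allocator.free(slice);

    std.debug.print("Allocated {} bytes without touching the heap\n", .{slice.len});
}
```

Because the backing buffer lives on the stack, everything allocated from it disappears when the frame unwinds; allocation fails with error.OutOfMemory once the buffer is exhausted.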

Zig stands out by avoiding automatic reference counting or borrow checking while still maintaining a high level of predictability and safety in memory management.

Comparison Summary

Instead of relying on the compiler to enforce ownership, Zig makes allocation and deallocation the programmer’s responsibility, but helps with language features like defer and function-level allocation contracts:

| Feature | Zig | C | C++ | Rust |
| --- | --- | --- | --- | --- |
| Memory Management Model | Explicit allocators | Manual (malloc/free) | RAII, smart pointers (unique_ptr) | Ownership and borrow checker |
| Automatic Cleanup | defer | No | RAII (std::unique_ptr) | Compiler-enforced scope-based cleanup |
| Reference Counting | No | No | std::shared_ptr | Rc, Arc |
| Custom Allocators | Native feature | Rare | Possible but complex | Limited |
| Garbage Collection | No | No | No | No |

In the article “Is Zig Safer Than Unsafe Rust?” from Rust Magazine, the discussion centers on Zig’s built-in safety strategies, such as explicit allocation policies and non-null default pointers, which can mitigate certain risks associated with manual memory management. The article suggests that while unsafe Rust allows for operations akin to those in C, potentially leading to safety issues if misused, Zig’s approach may offer a safer alternative in some contexts.

Similarly, the Reddit discussion “When Zig is Safer and Faster than (Unsafe) Rust” highlights a case where implementing a project in Zig resulted in code that was not only safer but also faster compared to its unsafe Rust counterpart. This underscores how Zig’s design can lead to more efficient and secure code in scenarios where Rust’s safety guarantees are bypassed.

Finally, the article “Comprehensive Understanding of Unsafe Rust” provides an in-depth look into the necessity and implications of using unsafe in Rust. It emphasizes that while unsafe Rust allows for greater control and performance optimizations, it also requires meticulous attention to uphold safety invariants, as the compiler’s guarantees are relaxed.

As a lover of simplicity and efficiency, I am starting to feel a special fondness for Zig, but let’s move on to explore other features.


Conclusion

Memory management is at the heart of low-level programming, dictating how a language balances control, safety, and performance:

  • C provides full control over memory allocation and deallocation but requires manual discipline to avoid memory corruption and leaks.
  • C++ introduces RAII and smart pointers, reducing risk but adding complexity and abstraction overhead.
  • Rust enforces strict ownership and borrowing rules, eliminating common memory errors at compile-time, but demanding a different mental model for resource management.
  • Zig offers explicit allocators and fine-grained control, avoiding hidden behavior, but leaving safety enforcement to the developer.

These differences become even more crucial in concurrent programming, where data races, synchronization, and memory safety must be carefully managed to ensure correct, scalable, and efficient multithreading. In the next section, we will explore how each language handles concurrency, evaluating threading models, synchronization primitives, and strategies to prevent data corruption in parallel execution.

