
Most of the time, Rust’s strict safety guarantees protect you from memory bugs and data races without you even thinking about them. But what happens when you hit a performance-critical section, need to interface with low-level C code, or bend the borrow checker just enough to get your job done? That’s where unsafe Rust comes in — a powerful tool that should be used with care, but not feared.
In this post, we’ll pull back the curtain on unsafe, demystifying its purpose, exploring when it’s justified, and showing how to use it responsibly without compromising your codebase’s safety and maintainability.
Introduction
Rust is built on the promise of memory safety and fearless concurrency, and it delivers this through strict compile-time checks. However, there are rare and specific cases where the compiler simply cannot verify what you’re trying to do, even though you might know it’s safe. In these situations, Rust gives you the option to opt out of some of its guarantees using the unsafe keyword.
Contrary to its name, unsafe doesn’t mean “bad” or “dangerous” — it means “you, the programmer, are asserting to the compiler that you know what you’re doing.” It’s a contract of trust, not a license to write reckless code. In fact, judicious use of unsafe can unlock optimizations, enable low-level systems work, and make certain abstractions possible that would be otherwise unreachable in safe Rust.
This post will walk through the major use cases, best practices, and real-world examples of unsafe Rust — empowering you to use it thoughtfully, or at the very least, understand it when you encounter it in libraries or codebases you depend on.
What Does unsafe Mean in Rust?
Rust is famous for its guarantees around safety and concurrency, enforced at compile time by its powerful type system and borrow checker. But there are limits to what the compiler can verify. In rare but necessary cases, Rust gives you an “escape hatch” — the unsafe keyword — to step outside of its usual guarantees.
But don’t let the word scare you: using unsafe doesn’t mean your code is inherently buggy or reckless. It simply means you, the programmer, are taking on the responsibility for upholding some of the guarantees that the compiler would otherwise enforce.
This section breaks down what unsafe really means, when it comes into play, and what kind of operations it allows — and doesn’t allow.
Note: All example code in this post is educational and designed to be safe to run as-is. If you expand on these examples for your own projects, use caution and make sure you fully understand the implications of using unsafe Rust.
The Scope of unsafe
When you mark a block of code as unsafe, you are signaling to the compiler: “I know what I’m doing here, even though you can’t prove it.” But it’s important to understand what exactly unsafe permits — and what it doesn’t.
Using unsafe gives you access to five specific capabilities that are otherwise disallowed in safe Rust:
- Dereferencing raw pointers.
- Calling unsafe functions (including foreign functions).
- Accessing or modifying mutable static variables.
- Implementing unsafe traits.
- Accessing union fields.
That’s it. Everything else — like array bounds checking, type checking, and borrow checking within safe code — remains in place, even inside an unsafe block.
Let’s look at a few simple examples to see what unsafe lets us do — and what it doesn’t.
Open a shell window (Terminal on Mac/Linux, Command Prompt or PowerShell on Windows). Then navigate to the directory where you store Rust packages
for this blog series, and run the following command:
cargo new unsafe_in_rust
Next, change into the newly created unsafe_in_rust directory and open it in VS Code (or your favorite IDE).
Note: Using VS Code is highly recommended for following along with this blog series. Be sure to install the Rust Analyzer extension — it offers powerful features like code completion, inline type hints, and quick fixes.
Also, make sure you’re opening the unsafe_in_rust directory itself in VS Code. If you open a parent folder instead, the Rust Analyzer extension might not work properly — or at all.
As we see examples in this post, you can either replace the contents of main.rs or instead comment out the current code for future reference with a multi-line comment:
/*
CODE TO COMMENT OUT
*/
We’ll work through a number of examples throughout this post. Open the file src/main.rs and replace its contents entirely with the code for this example.
Example: Dereferencing a Raw Pointer
fn main() {
    let x = 42;
    let r = &x as *const i32; // raw pointer

    unsafe {
        println!("Raw pointer dereferenced: {}", *r);
    }
}

/*
Output:
Raw pointer dereferenced: 42
*/
This code compiles and runs fine — but dereferencing a raw pointer (*r) is only allowed in an unsafe block. Unlike normal references, raw pointers don’t guarantee that they point to valid memory, or that the memory is properly aligned or initialized. That’s why this action is considered unsafe.
If you comment out the opening and closing unsafe block lines, you’ll get a compilation error:
// unsafe {
println!("Raw pointer dereferenced: {}", *r);
// }
error[E0133]: dereference of raw pointer is unsafe and requires unsafe block
 --> src/main.rs:6:50
  |
6 |         println!("Raw pointer dereferenced: {}", *r);
  |                                                  ^^ dereference of raw pointer
Example: Calling an Unsafe Function
unsafe fn dangerous() {
    println!("You’ve entered an unsafe function.");
}

fn main() {
    unsafe {
        dangerous();
    }
}

/*
Output:
You’ve entered an unsafe function.
*/
Here, we’ve defined an unsafe fn named dangerous. Rust requires you to wrap the call to it in an unsafe block — even if the function itself is harmless — because it’s been marked as potentially unsafe to call.
Example: What unsafe Does Not Let You Do
fn main() {
    let v = vec![1, 2, 3];

    unsafe {
        // Still panics! Bounds checks are not disabled.
        println!("{}", v[99]);
    }
}

/*
Output:
thread 'main' panicked at 'index out of bounds: the len is 3 but the index is 99'
*/
Despite being inside an unsafe block, this code will still panic. That’s because indexing a vector is a safe operation in Rust, and safety rules still apply to safe code — even inside unsafe blocks. The compiler only gives you additional powers inside unsafe; it doesn’t remove all other checks.
The Five Things You Can Do in unsafe
The unsafe keyword unlocks just five special powers in Rust — and each comes with a heavy dose of responsibility. When you’re writing unsafe code, you’re telling the compiler: “Trust me, I’ve ensured this operation won’t cause undefined behavior.” But you’re not disabling all of Rust’s checks — you’re only gaining access to a limited set of features that require manual safety guarantees.
Let’s walk through each of the five things you’re allowed to do in an unsafe block, with examples you can run safely and learn from.
1. Dereferencing Raw Pointers
Rust’s regular references (& and &mut) are always safe. But sometimes you need to step outside those rules — for example, when interfacing with C code or building custom abstractions. That’s when raw pointers (*const T and *mut T) come in — and dereferencing them requires unsafe.
fn main() {
    let value = 100;
    let ptr = &value as *const i32;

    unsafe {
        println!("Dereferenced value: {}", *ptr);
    }
}

/*
Output:
Dereferenced value: 100
*/
This example is safe because we’re converting a valid reference into a raw pointer, and immediately dereferencing it while it’s still valid.
2. Calling Unsafe Functions or Methods
Some functions in Rust are explicitly marked as unsafe to call, meaning the compiler can’t guarantee safety when using them — but the function itself might still be perfectly safe if used correctly.
unsafe fn greet() {
    println!("Hello from an unsafe function!");
}

fn main() {
    unsafe {
        greet();
    }
}

/*
Output:
Hello from an unsafe function!
*/
Declaring a function unsafe means that the caller must uphold certain invariants — even if the body is safe, like in this example.
What does “the caller must uphold certain invariants” really mean?
An invariant is simply a condition that must always be true to ensure safety or correctness.
So when we say:
“The caller must uphold certain invariants,”
we mean:
There are certain rules that must be followed before calling this function. The function assumes those rules are true — and if they’re not, bad things (like undefined behavior) can happen.
Example: A real-world analogy
Imagine a toaster that says:
“Only put in dry bread. Do not insert wet items or metal.”
The invariant here is:
- “What you put in is dry bread.”
The toaster won’t check — it just assumes you followed the rule.
If you break that rule (e.g. insert a wet sandwich), it could spark, short out, or catch fire.
In Rust, an unsafe fn works the same way:
- It assumes you’ve validated the inputs.
- It does not check.
- If you break the rule → undefined behavior.
Rust Example: get_unchecked(index)
Rust’s standard library has a method:
unsafe fn get_unchecked(&self, index: usize) -> &T
The invariant: index must be within the bounds of the slice.
If the caller breaks that — say by passing index = 999 on a 3-element slice — then memory may be read incorrectly, leading to crashes or worse.
So in plain English:
“The caller must uphold certain invariants” means:
You must make sure certain rules are followed before calling this function — because the function assumes they are true and won’t check for you.
3. Accessing or Modifying Mutable Static Variables
Global mutable state is inherently risky in concurrent programs, so Rust restricts access to static mut variables. Accessing or modifying them requires unsafe, because the compiler cannot guarantee exclusive access or thread safety.
use std::ptr;

static mut COUNTER: i32 = 0;

fn main() {
    unsafe {
        let ptr = &mut COUNTER as *mut i32;
        // Safely read, modify, and write using volatile operations
        let current = ptr::read_volatile(ptr);
        ptr::write_volatile(ptr, current + 1);
        let updated = ptr::read_volatile(ptr);
        println!("COUNTER: {}", updated);
    }
}

/*
Output:
COUNTER: 1
*/
This version avoids triggering a compiler lint about creating a shared reference to a static mut value — which would happen if you tried to format COUNTER directly using println!. Using std::ptr::read_volatile and write_volatile ensures the memory access is treated explicitly and avoids subtle aliasing risks.
Note: This is safe in a single-threaded context. In multi-threaded code, you’d need proper synchronization like std::sync::atomic or Mutex to prevent data races and undefined behavior.
Note: You may see the following compiler warning or error:
error: creating a mutable reference to mutable static is discouraged
 --> src/main.rs:7:19
  |
7 |         let ptr = &mut COUNTER as *mut i32;
  |                   ^^^^^^^^^^^^ mutable reference to mutable static
This happens because even inside an unsafe block, Rust discourages taking mutable references to a static mut — it can violate aliasing rules and cause undefined behavior in multithreaded code.
In this controlled example, the code is safe because:
- We’re in a single-threaded context,
- We never alias COUNTER,
- And we treat the pointer as raw with ptr::read_volatile and write_volatile.
However, in production code or multithreaded scenarios, consider using AtomicI32 or Mutex<i32> instead of static mut.
4. Implementing Unsafe Traits
Traits marked as unsafe indicate that incorrect implementations could lead to undefined behavior. Implementing them requires an explicit unsafe impl.
Here’s a contrived but safe example using a custom unsafe trait:
unsafe trait Foo {
    fn do_something(&self);
}

unsafe impl Foo for i32 {
    fn do_something(&self) {
        println!("i32 doing something: {}", self);
    }
}

fn main() {
    let num: i32 = 42;
    num.do_something();
}

/*
Output:
i32 doing something: 42
*/
You won’t often need to declare your own unsafe traits, but it’s common with standard library traits like Send and Sync, which have strict invariants.
5. Accessing Union Fields
Unions are like structs, but only one field is valid at a time. Reading from a union field is unsafe because the compiler can’t guarantee which field is currently valid.
union MyUnion {
    int: i32,
    float: f32,
}

fn main() {
    let u = MyUnion { int: 99 };

    unsafe {
        println!("Union as int: {}", u.int);
    }
}

/*
Output:
Union as int: 99
*/
This example works because we initialized the union with int, and then read from the same field. Accessing a different field than the one initialized would be UB — but we’re not doing that here.
Accessing union fields (reading from a field) is unsafe because Rust can’t track which field is currently valid. Writing to a union field, however, is safe.
What Is Undefined Behavior (UB)?
Undefined Behavior, or UB, means that your program does something the compiler assumes is impossible — and once that happens, all bets are off.
In Rust (just like in C/C++), UB is dangerous because it allows the compiler to make optimizations based on the assumption that UB never happens. If you violate that assumption, your program might:
- Crash unexpectedly
- Corrupt memory
- Pass tests but fail in production
- Leak data
- Behave inconsistently across builds or platforms
In short: UB is not just a bug — it breaks the entire safety model.
Examples of UB in Rust
All of the following are undefined behavior even though they may compile and run:
- Dereferencing a null or dangling pointer
- Creating two &mut references to the same data
- Reading uninitialized memory
- Calling a function with an incorrect function pointer signature
- Violating type invariants (like using an invalid enum discriminant)
- Data races in multithreaded code
UB vs. Unsafe
Just because something is unsafe doesn’t mean it causes UB. But if your unsafe code violates Rust’s rules, it can cause UB — and that’s where things get dangerous.
Think of unsafe as giving you a knife — you can cut a sandwich, or you can cut your finger. Rust won’t stop you — but it will assume you know which side is sharp.
These five capabilities form the entire scope of what unsafe allows. If you’re doing anything outside these areas, you likely don’t need unsafe — or you might be doing something truly dangerous.
Why Rust Needs an Escape Hatch
Rust is known for its strong safety guarantees, especially around memory safety and concurrency. These guarantees are enforced by the compiler through strict rules about ownership, borrowing, lifetimes, and type checking.
But the real world is messy. Sometimes, you need to do things the compiler can’t verify — like interfacing with C code, building performance-critical abstractions, or manipulating memory directly. In these cases, the compiler’s safety net becomes a roadblock.
That’s why Rust provides an escape hatch: the unsafe keyword. It allows you to tell the compiler,
“I know what I’m doing. Let me handle this part manually.”
Let’s look at a few cases where safe Rust simply isn’t enough — and why unsafe becomes essential.
Example: Accessing Elements Without Bounds Checks
In performance-critical code, bounds checks can become a bottleneck. Rust’s safe .get() and [] indexing will return None or panic on out-of-bounds access. But what if you’re sure the index is valid and you want to skip the check?
fn main() {
    let nums = vec![10, 20, 30];
    let second = unsafe { *nums.as_ptr().add(1) };
    println!("Second element: {}", second);
}

/*
Output:
Second element: 20
*/
Here, we manually calculate the pointer to the second element and dereference it. This bypasses the bounds check — something safe Rust would never allow.
Example: Building a Safe Abstraction That Requires Unsafe Internals
Let’s say you want to create your own abstraction over a buffer. You want the outside of your API to be safe — but you may need raw pointers internally.
struct MyBuffer {
    data: Vec<i32>,
}

impl MyBuffer {
    fn new(values: Vec<i32>) -> Self {
        MyBuffer { data: values }
    }

    fn get_unchecked(&self, index: usize) -> i32 {
        unsafe { *self.data.as_ptr().add(index) }
    }
}

fn main() {
    let buf = MyBuffer::new(vec![5, 10, 15]);
    let val = buf.get_unchecked(2); // Safe as long as index is valid
    println!("Value at index 2: {}", val);
}

/*
Output:
Value at index 2: 15
*/
This is a classic use of unsafe: you hide it behind a safe API, and you, as the implementer, ensure the invariant (“index must be valid”) is maintained.
Example: Interfacing with C Code (FFI)
Rust can’t validate foreign function interfaces. When calling C code, you’re stepping outside Rust’s safety model — so it must be wrapped in unsafe.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    let x = -42;
    let result = unsafe { abs(x) };
    println!("Absolute value: {}", result);
}

/*
Output:
Absolute value: 42
*/
This code produces a compilation error:
error: extern blocks must be unsafe
 --> src/main.rs:1:1
  |
1 | / extern "C" {
2 | |     fn abs(input: i32) -> i32;
3 | | }
  | |_^

error: could not compile `unsafe_in_rust` (bin "unsafe_in_rust") due to 1 previous error
If you declare an extern block, Rust requires you to mark it as unsafe extern "C", because the compiler can’t validate the behavior or existence of foreign functions. If you omit unsafe, you’ll get an error.
What’s Going On?
Rust requires extern blocks to be explicitly marked unsafe, even if you’re not immediately calling the functions inside. This is because declaring foreign functions is inherently unsafe — Rust can’t verify their existence or behavior.
So the code should be written like this:
unsafe extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    let x = -42;
    let result = unsafe { abs(x) };
    println!("Absolute value: {}", result);
}

/*
Output:
Absolute value: 42
*/
Note: This example works because abs is part of the standard C library (libc), which is available on most platforms by default. You don’t need to write or compile any C code yourself — but you do need to declare the function inside an unsafe extern "C" block, because Rust cannot verify foreign function signatures or behavior at compile time.
Summary
Rust’s safety model is powerful, but sometimes too restrictive for certain low-level or high-performance tasks. That’s why unsafe exists — not as a loophole, but as a deliberate tool for developers who understand when and why to use it.
The escape hatch isn’t a bug — it’s a feature, and a critical part of making Rust both safe and systems-level capable.
When to Reach for unsafe
Rust gives you powerful guarantees around memory safety, data races, and aliasing — and it maintains these guarantees through strict compile-time checks. But sometimes, you’ll run into situations where you need to step beyond what the borrow checker or type system allows.
That’s where unsafe comes in.
Using unsafe should be rare, deliberate, and well-contained. This section walks through the most common and legitimate scenarios where unsafe Rust is not only acceptable, but necessary — starting with one of the most practical: interfacing with foreign code.
Interfacing with Foreign Code (FFI)
Rust is often used in systems-level programming where interoperability with C libraries is required — for instance, when calling system APIs or using a performance-critical C library that already exists. Since Rust can’t validate the correctness or safety of external code, any interaction with foreign functions requires unsafe.
Let’s walk through some examples that safely demonstrate calling C functions from Rust.
Example: Calling a Standard C Library Function (abs)
The standard C function abs() returns the absolute value of an integer. It’s part of libc and available on most platforms by default, so you don’t need to write or compile any C code yourself. Our previous example showed this:
unsafe extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    let x = -42;
    let result = unsafe { abs(x) };
    println!("Absolute value: {}", result);
}

/*
Output:
Absolute value: 42
*/
Example: Safer Wrapper Around an FFI Call
Since calling FFI functions requires unsafe, it’s best practice to wrap those calls in a safe function that enforces any needed checks. Here’s how to encapsulate abs() safely:
unsafe extern "C" {
    fn abs(input: i32) -> i32;
}

fn safe_abs(input: i32) -> i32 {
    // This wrapper ensures only valid i32 values are passed
    unsafe { abs(input) }
}

fn main() {
    println!("safe_abs(-7) = {}", safe_abs(-7));
}

/*
Output:
safe_abs(-7) = 7
*/
By isolating the unsafe call inside a controlled wrapper, you reduce the surface area for bugs and make your code easier to reason about and test.
Want to go deeper? You can use the cc crate and a build script (build.rs) to compile your own C code and call it from Rust. I’ll cover this in an upcoming post on advanced FFI techniques.
Performance-Critical Inner Loops
Rust’s safety guarantees come with a small runtime cost — especially when performing repetitive operations in tight loops. Bounds checking on arrays, iterator overhead, and borrow-checker restrictions can sometimes stand in the way of maximum performance, even when you know the code is safe.
In scenarios like video encoding, graphics rendering, scientific simulations, or game loops, even small inefficiencies can add up. That’s when unsafe can be used as a tool to unlock performance optimizations — as long as you’re careful and uphold Rust’s safety invariants manually.
Let’s explore a few examples where unsafe helps shave off overhead in performance-critical loops.
Example: Skipping Bounds Checks in a Loop
Normally, indexing into a vector like vec[i] performs a bounds check on each access. If you’re absolutely sure the index is safe — say, because you’re iterating over known-valid ranges — you can skip that check using raw pointers.
fn main() {
    let data = vec![1, 2, 3, 4, 5];
    let ptr = data.as_ptr(); // raw pointer to the first element

    unsafe {
        for i in 0..data.len() {
            // Skip bounds check by using raw pointer arithmetic
            let val = *ptr.add(i);
            println!("Element {}: {}", i, val);
        }
    }
}

/*
Output:
Element 0: 1
Element 1: 2
Element 2: 3
Element 3: 4
Element 4: 5
*/
We use as_ptr() to get a raw pointer to the underlying array, then use ptr.add(i) to navigate through it without triggering bounds checks. This avoids redundant safety checks in hot paths like data processing loops.
Example: Comparing Safe vs. Unsafe in a Benchmark Setup (Conceptual)
While we won’t write a full benchmark here, it’s worth noting that unsafe code like the above can cut down overhead in real-world benchmarks — especially in deeply nested or highly iterative code.
Tip: Before reaching for unsafe to micro-optimize, use benchmarking tools like criterion or cargo bench to verify the performance bottleneck is real. Don’t sacrifice safety for a performance gain you can’t measure.
Example: A Controlled Panic Using Safe Indexing
To contrast with unsafe, here’s the safe version of the same loop using vec[i], which includes bounds checking. Let’s intentionally trigger a panic:
fn main() {
    let data = vec![10, 20, 30];

    for i in 0..4 {
        // This will panic at i = 3
        println!("Element {}: {}", i, data[i]);
    }
}

/*
Output:
Element 0: 10
Element 1: 20
Element 2: 30
thread 'main' panicked at 'index out of bounds: the len is 3 but the index is 3'
*/
In real applications, this check protects you. But in performance-critical code where the index is guaranteed to be valid, bounds checking can be safely removed — with unsafe, if you’re confident.
Final Thoughts
Using unsafe to improve performance can make sense — but only if:
- You’ve benchmarked and proven it’s a bottleneck.
- You can uphold all memory safety guarantees yourself.
- You isolate and document the unsafe code carefully.
Rust is fast by default. Reach for unsafe only when the last bit of performance really matters.
Shared Mutability with Raw Pointers
Rust’s ownership system ensures that you can’t have both shared and mutable access to the same data at the same time — a core rule that prevents data races and memory corruption. In safe Rust, this is enforced through the borrow checker: either you get one &mut, or many &, but not both.
But sometimes, especially in low-level code or when interfacing with hardware or C libraries, you need to mutably access data even when other parts of the program might still have a reference. That’s where unsafe and raw pointers (*mut T and *const T) give you the ability to opt out of these restrictions — with the caveat that you now carry the full responsibility for maintaining safety.
Let’s walk through what shared mutability looks like using raw pointers, and how to use it responsibly.
Example: Modifying a Value Through a Raw Mutable Pointer
fn main() {
    let mut value = 42;
    let ptr = &mut value as *mut i32;

    unsafe {
        *ptr += 1;
        println!("Modified value: {}", *ptr);
    }
}

/*
Output:
Modified value: 43
*/
We create a raw mutable pointer from a &mut, then modify and read the value using *ptr. This bypasses borrow checking, so you must ensure no other references to value are used during this time.
Example: Creating Raw Pointers for Shared and Mutable Access
Here’s an example that demonstrates why this is dangerous if misused — we create both a shared and a mutable raw pointer to the same data. This is fine as long as you don’t dereference both at the same time.
fn main() {
    let mut data = 100;
    let const_ptr = &data as *const i32;
    let mut_ptr = &mut data as *mut i32;

    unsafe {
        // Mutate the data through the mutable pointer
        *mut_ptr += 1;
        // Now read the data through the shared pointer
        println!("Data (via const ptr): {}", *const_ptr);
    }
}

/*
Output:
Data (via const ptr): 101
*/
This is undefined behavior in theory, because it violates aliasing rules: Rust assumes that data accessed via a shared reference isn’t also being mutated. However, this specific example runs fine and illustrates the danger — in real code, you should avoid using both pointers like this unless you’re absolutely sure it’s safe (e.g., via synchronization or fencing).
Tip: Even if something “works”, it may still be UB (undefined behavior). The compiler could reorder operations or make assumptions that break your program in subtle ways.
Example: Safer Shared Mutability with Cell<T>
Instead of using raw pointers directly, safe Rust provides interior mutability types like Cell<T> and RefCell<T> to let you mutate data through shared references safely.
use std::cell::Cell;

fn main() {
    let value = Cell::new(5);

    // Shared reference, but we can still mutate the contents
    value.set(value.get() + 1);
    println!("Cell value: {}", value.get());
}

/*
Output:
Cell value: 6
*/
This is the preferred way to achieve shared mutability safely in most cases. Only fall back to raw pointers when Cell or RefCell won’t work due to performance, FFI constraints, or non-Rust compatibility.
Summary
Raw pointers give you full control — and full risk. Shared mutability via *mut T and *const T is one of the key motivations for using unsafe, but it should always be used with caution.
Reach for raw pointers only when:
- The borrow checker genuinely can’t express what you’re doing.
- You’ve ensured no overlapping access will occur.
- You can’t use Cell, RefCell, or other safe abstractions instead.
Implementing Unsafe Traits or Abstractions
Sometimes, the abstraction you’re implementing has requirements that can’t be enforced by the Rust compiler, but are nonetheless crucial for soundness. In those cases, Rust allows you to declare a trait as unsafe — meaning implementing the trait is unsafe, because violating its contract can lead to undefined behavior.
This is different from simply using unsafe blocks in a function. Declaring a trait as unsafe shifts the burden of safety to the implementor, who must ensure that certain invariants are upheld.
Let’s walk through what this means with some simple and safe-to-run examples.
Example: Declaring and Implementing an Unsafe Trait
unsafe trait MyUnsafeTrait {
    fn act(&self);
}

unsafe impl MyUnsafeTrait for i32 {
    fn act(&self) {
        println!("Acting on i32: {}", self);
    }
}

fn main() {
    let x = 123;
    x.act(); // Calls the unsafe trait method
}

/*
Output:
Acting on i32: 123
*/
Here, MyUnsafeTrait is declared with the unsafe keyword, and we use unsafe impl to implement it for i32. The method itself is safe to call, but implementing the trait incorrectly in real-world use could cause UB. For instance, if this trait promised thread safety or certain memory layout constraints, violating those could break programs using it.
Example: Real-World Unsafe Traits — Send and Sync
Rust’s standard library has several unsafe traits, such as Send and Sync, which are used to mark types as safe to transfer across threads or to share between threads.
Here’s a simplified and safe simulation of what that looks like (note: you usually won’t implement these manually unless you’re writing a concurrency abstraction):
use std::marker::PhantomData;
use std::rc::Rc;

// Hypothetical wrapper that falsely claims `Send`
struct MyWrapper {
    _data: Rc<i32>, // Rc<T> is NOT thread-safe
    _phantom: PhantomData<*const ()>,
}

// Manually implementing Send — this is unsafe and wrong!
unsafe impl Send for MyWrapper {}

fn main() {
    println!("Declared MyWrapper as Send (do not do this in real code!)");
}

/*
Output:
Declared MyWrapper as Send (do not do this in real code!)
*/
This code compiles, but it lies to the compiler. Rc<T> is not thread-safe, and marking it Send can cause data races or memory corruption in multithreaded contexts. This is why implementing unsafe traits should be done only with a deep understanding of what the trait contract entails.
Note: Don’t actually do this. It’s an example of how easy it is to misuse unsafe traits if you’re not careful.
Example: Safer Alternative Using Trait Bounds
Whenever possible, prefer using trait bounds and generic constraints to enforce invariants in safe code. Here’s an example that stays entirely within safe Rust:
trait MySafeTrait {
    fn double(&self) -> i32;
}

impl MySafeTrait for i32 {
    fn double(&self) -> i32 {
        self * 2
    }
}

fn main() {
    let num = 21;
    println!("Double: {}", num.double());
}

/*
Output:
Double: 42
*/
This shows that most traits in Rust should be safe. You should only use unsafe trait when:
- There’s no way to enforce the required invariants with the type system alone.
- The trait contract, if violated, could cause undefined behavior.
Summary
Use unsafe trait and unsafe impl when you’re creating abstractions that:
- Rely on invariants Rust can’t check for you (e.g. thread safety, memory layout, FFI guarantees).
- Could result in UB if implemented incorrectly.
- Are backed by a soundness contract that you must document and uphold.
Always ask yourself: Can I express this safely using regular trait bounds or safe wrappers? If so — do that instead.
Understanding the Risks
Using unsafe in Rust gives you tremendous power — but with that power comes the responsibility to uphold safety guarantees that the compiler would normally enforce for you.
One of the biggest risks is triggering undefined behavior (UB) — a condition where the Rust compiler makes no guarantees about what your program will do. Once UB occurs, your program becomes unpredictable, untrustworthy, and potentially unsafe to run.
This section covers the real dangers of UB, what causes it, and how it can impact your program even when it seems like everything is working fine.
Undefined Behavior and Its Consequences
Undefined Behavior (UB) occurs when a program violates Rust’s core safety assumptions in a way that the compiler cannot detect. Once UB happens, the compiler is allowed to do literally anything — including optimizing your code in ways that make it fail silently, crash, leak data, or become exploitable.
UB is different from a crash or panic — it’s worse because:
- It might not fail immediately.
- It might pass tests and still break in production.
- It can lead to security vulnerabilities, especially in concurrent or FFI-heavy code.
Let’s look at examples of how UB can occur, and why it’s so important to avoid.
Example: Dereferencing a Dangling Pointer
fn main() {
    let dangling = {
        let temp = 99;
        &temp as *const i32 // raw pointer to a stack value that will go out of scope
    };

    unsafe {
        // This is undefined behavior: dereferencing a dangling pointer
        // The program may crash or print garbage
        println!("Dangling pointer value: {}", *dangling);
    }
}

/*
Output:
Dangling pointer value: 99 // OR a crash, OR garbage value — it's UB!
*/
Even though this might appear to work (and often does in small programs), it’s undefined behavior — the compiler assumes you would never dereference a pointer to deallocated stack memory. If it sees that you did, it may optimize your program in ways that make things go horribly wrong.
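The safe fix is simply to move the value out of the inner scope instead of smuggling a pointer to it:

```rust
fn value_from_scope() -> i32 {
    let from_block = {
        let temp = 99;
        temp // copy the value out; no pointer into the expiring stack frame
    };
    from_block
}

fn main() {
    println!("Value: {}", value_from_scope()); // prints 99
}
```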
Example: Violating Aliasing Rules with Shared and Mutable Access
Rust’s aliasing model says you can either have:
- One &mut reference, or
- Multiple & references,
but not both at the same time.
unsafe lets you bypass that — and if you misuse it, UB ensues.
fn main() {
let mut data = 42;
let r1 = &data as *const i32;
let r2 = &mut data as *mut i32;
unsafe {
// Mutate and read from overlapping pointers
*r2 += 1;
println!("Read from r1: {}", *r1); // UB: shared + mutable aliasing
}
}
/*
Output:
Read from r1: 43 // Or something worse
*/
This violates Rust’s aliasing rules. Even though it may “work,” the behavior is undefined and can result in corruption or hard-to-diagnose bugs in larger applications.
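When you genuinely need mutation through shared handles, safe Rust offers interior mutability instead — for example `std::cell::Cell`:

```rust
use std::cell::Cell;

fn main() {
    let data = Cell::new(42);
    let r1 = &data; // multiple shared references are allowed...
    let r2 = &data;
    r2.set(r2.get() + 1); // ...and Cell permits mutation through them, safely
    println!("Read from r1: {}", r1.get()); // prints 43
}
```

Same observable result as the raw-pointer version, but with no aliasing violation and no UB.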
Example: Safe Code That Panics vs. Unsafe Code That UB’s
To illustrate the contrast, here’s a safe panic:
fn main() {
let data = vec![1, 2, 3];
println!("{}", data[99]); // Panics with bounds check
}
/*
Output:
thread 'main' panicked at 'index out of bounds: the len is 3 but the index is 99'
*/
And here’s the unsafe equivalent that causes UB instead.
Warning: This code demonstrates undefined behavior and should not be run as-is. Dereferencing a pointer to an out-of-bounds memory location is not safe, even if it seems to work. In real-world code, this could cause crashes, data corruption, or subtle logic errors.
fn main() {
let data = vec![1, 2, 3];
let ptr = data.as_ptr();
unsafe {
// No bounds check — this is undefined behavior
// Even though it might "work" on your machine, it is not valid Rust
// println!("{}", *ptr.add(99)); // ❌ DO NOT RUN
println!("Skipping dereference to avoid undefined behavior.");
}
}
/*
Output:
Skipping dereference to avoid undefined behavior.
*/
Key Takeaways
- UB breaks all guarantees. The compiler assumes your unsafe code upholds its invariants. If it doesn’t, your program may misbehave in subtle and dangerous ways.
- UB is not a crash. A crash is safe and well-defined; UB is the compiler acting as if your assumptions were true when they weren’t.
- Use
unsafe
with surgical precision, isolate it, and double-check any assumptions you make.
Soundness vs. Safety: What You’re Really Promising
When working with unsafe
, it’s important to understand a deeper concept than just “don’t crash”: soundness. In Rust, safety refers to code that doesn’t cause undefined behavior. Soundness, on the other hand, refers to whether a safe abstraction can be relied on to uphold Rust’s guarantees.
If you write
unsafe
code that allows safe Rust to trigger UB, your abstraction is unsound.
If your unsafe code never causes UB when used as intended, your abstraction is sound.
Even though the Rust compiler can’t check for unsoundness directly, it assumes all safe
code is, by definition, free from undefined behavior. So if you use unsafe
to build something that breaks that assumption — even if it compiles — you’ve created a soundness bug.
Let’s break it down with examples.
Example: A Safe Abstraction That’s Sound
Suppose we’re wrapping a raw pointer-based access method but keeping the API safe:
struct Wrapper {
data: Vec<i32>,
}
impl Wrapper {
fn get_unchecked(&self, index: usize) -> Option<i32> {
if index < self.data.len() {
// Safe because we validate the index
Some(unsafe { *self.data.as_ptr().add(index) })
} else {
None
}
}
}
fn main() {
let w = Wrapper { data: vec![10, 20, 30] };
println!("Index 1: {:?}", w.get_unchecked(1));
println!("Index 99: {:?}", w.get_unchecked(99));
}
/*
Output:
Index 1: Some(20)
Index 99: None
*/
This abstraction is sound: it uses unsafe
internally, but never allows UB to leak into safe Rust. The Option
ensures no invalid accesses occur.
Example: A Safe Abstraction That’s Unsound
Now let’s see a case where we wrap unsafe
in a safe
API without validating the input. It looks safe — but it’s actually dangerous.
Warning: This example illustrates a soundness bug. Even though the method is declared safe, it allows safe Rust to trigger undefined behavior by accessing memory out of bounds.
The unsafe code inside get_unchecked()
trusts the caller to provide a valid index, but the API doesn’t enforce that. This breaks Rust’s safety guarantees — and is why this code should not be run with invalid input.
struct UnsafeWrapper {
data: Vec<i32>,
}
impl UnsafeWrapper {
fn get_unchecked(&self, index: usize) -> i32 {
// ❌ No bounds check!
unsafe { *self.data.as_ptr().add(index) }
}
}
fn main() {
let w = UnsafeWrapper { data: vec![1, 2, 3] };
// Looks safe to call — but this would cause undefined behavior
// let _ = w.get_unchecked(99); // ❌ DO NOT RUN
println!("Skipping unsafe call that would cause undefined behavior.");
}
/*
Output:
Skipping unsafe call that would cause undefined behavior.
*/
Even though this method doesn’t require unsafe
to call, it’s unsound because it lets safe Rust trigger undefined behavior. That breaks the promise of Rust’s safety model.
Example: Safe Code That Can Trust Sound Abstractions
Now let’s show how a properly sound abstraction is just as usable as normal safe code:
struct Wrapper {
data: Vec<i32>,
}
impl Wrapper {
fn get_unchecked(&self, index: usize) -> Option<i32> {
if index < self.data.len() {
// Safe use of unsafe pointer math with bounds check
Some(unsafe { *self.data.as_ptr().add(index) })
} else {
None
}
}
}
fn sum_values(wrapper: &Wrapper) -> i32 {
(0..3)
.filter_map(|i| wrapper.get_unchecked(i))
.sum()
}
fn main() {
let w = Wrapper { data: vec![1, 2, 3] };
println!("Sum: {}", sum_values(&w));
}
/*
Output:
Sum: 6
*/
The caller of Wrapper::get_unchecked()
doesn’t need to worry about safety — that’s the whole point of sound abstractions. You used unsafe
responsibly, documented the invariants, and now other code can call your methods safely and confidently.
Summary
- Safety is local: It means “this code doesn’t cause UB.”
- Soundness is global: It means “this abstraction can’t be used in a way that causes UB.”
- Writing unsafe isn’t just about avoiding crashes — it’s about protecting safe users from making mistakes that lead to UB.
- The goal of unsafe is to create sound, safe APIs — even if you need to take a shortcut under the hood.
Examples of Common Pitfalls
When using unsafe
, the compiler takes a step back and trusts that you’ll uphold Rust’s safety invariants. But even experienced developers can fall into subtle traps that introduce undefined behavior, make abstractions unsound, or create data races in disguise.
This section shows a few common mistakes when writing unsafe Rust — each example is safe to run in your terminal, and together they show how easily unsafe can be misused and why caution is essential.
Example: Dereferencing a Null Pointer
Rust normally makes it impossible to dereference null. But with raw pointers, you can — and it’s instant undefined behavior if you do.
fn main() {
let ptr: *const i32 = std::ptr::null();
unsafe {
// UB: Dereferencing a null pointer
// This might crash, or seem to work depending on the platform
// println!("{}", *ptr); // Commented out to prevent a crash
println!("Would crash if we dereferenced the null pointer.");
}
}
/*
Output:
Would crash if we dereferenced the null pointer.
*/
Important: This example does not dereference the pointer — but the code shows what would happen if you did. In actual unsafe code, this kind of mistake can be easy to miss.
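A defensive pattern when a raw pointer might be null is pointer::as_ref, which turns the null case into None instead of UB. It is still an unsafe call — you must guarantee the pointer is either null or valid and aligned:

```rust
fn main() {
    let ptr: *const i32 = std::ptr::null();
    // SAFETY: `as_ref` requires the pointer to be null or valid-and-aligned;
    // null satisfies that, and we get `None` back instead of undefined behavior.
    match unsafe { ptr.as_ref() } {
        Some(v) => println!("Value: {}", v),
        None => println!("Pointer was null — no dereference attempted."),
    }
}
```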
Example: Misusing mem::transmute
std::mem::transmute
is one of the most dangerous tools in Rust — it blindly converts one type into another based purely on size and memory layout, with no safety checks. Even if the types are the same size, transmute
can violate type invariants and cause undefined behavior.
Warning: Do not run the unsafe transmute shown below. While it compiles, it causes undefined behavior by creating an invalid char
. Rust assumes all char
values are valid Unicode scalar values (i.e., in the range 0x0000..=0x10FFFF
, excluding surrogate code points). Transmuting an invalid u32
into a char
breaks that guarantee — potentially leading to memory corruption or subtle bugs.
use std::mem;
fn main() {
let x: u32 = 0x110000; // Not a valid Unicode scalar value
// ❌ This would cause undefined behavior if uncommented
// let c: char = unsafe { mem::transmute(x) };
// println!("Transmuted char: {}", c);
println!("Skipping invalid transmute to avoid undefined behavior.");
}
/*
Output:
Skipping invalid transmute to avoid undefined behavior.
*/
Use safe alternatives when converting types with constraints. For instance, this is the safe and preferred way:
fn main() {
let x: u32 = 0x110000;
match char::from_u32(x) {
Some(c) => println!("Valid char: {}", c),
None => println!("Invalid Unicode scalar value: {}", x),
}
}
/*
Output:
Invalid Unicode scalar value: 1114112
*/
Example: Dangling Pointer from Moved Value
Creating raw pointers is safe — but dereferencing them after moving the value they point to is not.
fn main() {
let value = String::from("hello");
let ptr = value.as_ptr(); // pointer into the String
let _moved = value; // ownership moved, `value` is no longer valid
unsafe {
println!("Pointer still exists, but is now dangling: {:?}", ptr);
// ❌ Dereferencing now would be undefined behavior
// println!("{}", *ptr); // DO NOT RUN
println!("Dereferencing now would be undefined behavior.");
}
}
/*
Output:
Pointer still exists, but is now dangling: 0x...
Dereferencing now would be undefined behavior.
*/
Warning: This example demonstrates a dangling pointer — the value was moved, so the raw pointer is no longer valid. Do not dereference the pointer, even if it looks like the memory is still accessible. Dereferencing a dangling pointer is undefined behavior and can lead to crashes or memory corruption.
This shows how easy it is to accidentally use a dangling pointer — and why raw pointers must always be treated carefully in Rust.
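The fix is to take the raw pointer from the value’s current owner, so the pointer and the live allocation stay together — and to check any other preconditions (like non-emptiness) before the unsafe read:

```rust
fn first_byte(s: &String) -> Option<u8> {
    if s.is_empty() {
        return None;
    }
    let ptr = s.as_ptr();
    // SAFETY: `s` is borrowed (alive and unmoved) for this call, and we just
    // checked that it has at least one readable byte.
    Some(unsafe { *ptr })
}

fn main() {
    let value = String::from("hello");
    let moved = value; // move first...
    // ...then take the pointer from the live owner, inside the helper
    if let Some(b) = first_byte(&moved) {
        println!("First byte: {}", b as char); // prints 'h'
    }
}
```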
Final Thoughts
These pitfalls show how unsafe
bypasses Rust’s normal protections — and how dangerous that can be when you’re working with raw memory, lifetimes, or conversions.
Golden Rule: Just because code compiles, doesn’t mean it’s correct — especially in
unsafe
Rust.
Always:
- Document invariants clearly,
- Minimize unsafe block scope,
- Validate inputs before unsafe operations,
- Use safe abstractions where possible.
Writing unsafe
Code Safely
Using unsafe
in Rust doesn’t mean writing dangerous code — it means taking full responsibility for a specific operation that Rust cannot prove is safe. The goal is to contain and isolate the unsafety, not to let it leak throughout your codebase.
This section explores best practices for writing safe, maintainable code even when you need to use unsafe
. We’ll start with one of the most important techniques: keeping unsafe
blocks small and tightly scoped.
Minimizing the Scope of unsafe
Blocks
The smaller your unsafe
block, the easier it is to reason about. You should only wrap the exact operation that requires unsafe
, not entire functions or large chunks of logic. This makes it clear what part of the code is unsafe, and what assumptions need to be upheld.
Let’s look at a few examples of how to reduce the scope of unsafe
in real code.
Example: Wrapping Only the Dangerous Operation
fn get_nth(slice: &[i32], index: usize) -> Option<i32> {
if index < slice.len() {
// Only the pointer dereference is unsafe
Some(unsafe { *slice.as_ptr().add(index) })
} else {
None
}
}
fn main() {
let data = [10, 20, 30];
println!("Element at index 1: {:?}", get_nth(&data, 1));
}
/*
Output:
Element at index 1: Some(20)
*/
Why this is good:
Only the exact *slice.as_ptr().add(index)
expression is wrapped in unsafe
, not the entire if
block. This tells readers and auditors exactly where the danger is, and that bounds were checked before entering the unsafe block.
Example: Too Much Scope (Don’t Do This)
Here’s a version that uses a bigger-than-necessary unsafe
block — not ideal:
fn get_nth_bad(slice: &[i32], index: usize) -> Option<i32> {
unsafe {
if index < slice.len() {
Some(*slice.as_ptr().add(index))
} else {
None
}
}
}
Problem: This makes it harder to know which part is unsafe. It also invites someone to later add logic inside the block that doesn’t need to be unsafe — increasing the chance of accidental bugs.
Example: Safe API Built from Tiny unsafe
Block
This approach creates a safe wrapper for a dangerous operation, encapsulated cleanly:
struct Wrapper {
data: Vec<i32>,
}
impl Wrapper {
fn get_unchecked(&self, index: usize) -> Option<i32> {
if index < self.data.len() {
Some(unsafe { *self.data.as_ptr().add(index) })
} else {
None
}
}
}
fn main() {
let w = Wrapper { data: vec![1, 2, 3] };
println!("Index 2: {:?}", w.get_unchecked(2));
}
/*
Output:
Index 2: Some(3)
*/
Good practice: The unsafe
code is minimal, wrapped in bounds checking, and used to power a safe API. This keeps callers in safe Rust, even if the implementation uses unsafe
internally.
Summary
Minimizing the scope of unsafe
blocks helps:
- Prevent accidental mistakes,
- Communicate clearly what code is trusted and why,
- Isolate danger for better testing and auditing.
Golden Rule: Wrap only what truly needs unsafe
— no more, no less.
Encapsulating Unsafe Code in Safe Abstractions
One of the most important goals when using unsafe
in Rust is to hide it behind a safe, well-tested abstraction. This means wrapping the unsafe logic in a way that prevents unsafe usage by others, enforces any required invariants, and guarantees that safe code can’t cause undefined behavior by mistake.
This is the same principle that the Rust standard library follows: Vec
, Box
, and String
all use unsafe internally — but they expose clean, safe interfaces. (Option::unwrap_unchecked
is the inverse case: an unsafe fn
that deliberately passes the contract on to the caller.)
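You can see the two layers side by side on Vec itself: a checked, safe accessor, and an unchecked unsafe one whose precondition the caller must uphold:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Safe, checked access: returns Option, never UB.
    assert_eq!(v.get(1), Some(&2));
    assert_eq!(v.get(99), None);

    // Unsafe, unchecked access: the caller must guarantee the index is in bounds.
    // SAFETY: 2 < v.len()
    let last = unsafe { *v.get_unchecked(2) };
    println!("last = {}", last); // prints 3
}
```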
Let’s walk through how to do that effectively in your own code.
Example: A Safe Wrapper for Unchecked Indexing
struct SafeVec {
data: Vec<i32>,
}
impl SafeVec {
fn new(data: Vec<i32>) -> Self {
SafeVec { data }
}
fn get_fast(&self, index: usize) -> Option<i32> {
if index < self.data.len() {
// Encapsulate the unsafe pointer math internally
Some(unsafe { *self.data.as_ptr().add(index) })
} else {
None
}
}
}
fn main() {
let svec = SafeVec::new(vec![10, 20, 30]);
println!("Index 1: {:?}", svec.get_fast(1));
println!("Index 99: {:?}", svec.get_fast(99)); // Gracefully returns None
}
/*
Output:
Index 1: Some(20)
Index 99: None
*/
This example uses unsafe
internally but guarantees safety by checking the bounds beforehand. The caller can’t misuse the function — it’s sound.
Example: Encapsulation with Lifetimes
Here’s a more advanced example where we return a reference, preserving Rust’s lifetime safety even though we use unsafe under the hood.
struct Wrapper<'a> {
slice: &'a [i32],
}
impl<'a> Wrapper<'a> {
fn new(slice: &'a [i32]) -> Self {
Wrapper { slice }
}
fn get_unchecked_ref(&self, index: usize) -> Option<&i32> {
if index < self.slice.len() {
Some(unsafe { &*self.slice.as_ptr().add(index) })
} else {
None
}
}
}
fn main() {
let values = [5, 10, 15];
let wrap = Wrapper::new(&values);
if let Some(val) = wrap.get_unchecked_ref(1) {
println!("Reference: {}", val);
}
}
/*
Output:
Reference: 10
*/
Even though we’re creating a reference from a raw pointer (which is unsafe), we only do it after checking the index — and we return it with a lifetime tied to the input slice, preserving soundness.
Example: A More General Use Case (ByteReader)
Here’s a real-world-style abstraction: reading bytes from a buffer without bounds checks on every read.
struct ByteReader<'a> {
ptr: *const u8,
len: usize,
pos: usize,
_marker: std::marker::PhantomData<&'a [u8]>,
}
impl<'a> ByteReader<'a> {
fn new(slice: &'a [u8]) -> Self {
ByteReader {
ptr: slice.as_ptr(),
len: slice.len(),
pos: 0,
_marker: std::marker::PhantomData,
}
}
fn next(&mut self) -> Option<u8> {
if self.pos < self.len {
let byte = unsafe { *self.ptr.add(self.pos) };
self.pos += 1;
Some(byte)
} else {
None
}
}
}
fn main() {
let data = b"abc";
let mut reader = ByteReader::new(data);
while let Some(byte) = reader.next() {
println!("Byte: {}", byte as char);
}
}
/*
Output:
Byte: a
Byte: b
Byte: c
*/
This shows how unsafe internals can power a clean, safe API. The invariants (ptr
must point to a valid buffer, and pos < len
) are upheld by the implementation.
Summary
Encapsulating unsafe code in safe abstractions lets you:
- Write flexible, performant APIs,
- Prevent misuse from external callers,
- Keep the unsafety localized and auditable.
Treat unsafe like a sharp tool: store it in a safe box, document it, and only expose what can’t hurt the rest of the program.
Documenting Invariants and Preconditions
When you write unsafe
code in Rust, the compiler takes your word that you’re upholding certain guarantees — things it would normally enforce in safe code. These guarantees are called invariants and preconditions, and it’s your job to make them explicit.
If someone later uses your code without understanding what must be true for it to remain safe, they might inadvertently trigger undefined behavior. That’s why documenting these assumptions is just as important as writing the unsafe code itself.
Let’s look at examples where documenting invariants makes unsafe code safer to understand, maintain, and use correctly.
Example: Documenting a Valid Index Precondition
/// SAFETY: `index` must be less than `slice.len()`
/// The caller must guarantee that the index is within bounds.
unsafe fn get_unchecked(slice: &[i32], index: usize) -> i32 {
*slice.as_ptr().add(index)
}
fn main() {
let data = [10, 20, 30];
let safe_value = if 1 < data.len() {
// Safe because we uphold the documented invariant
unsafe { get_unchecked(&data, 1) }
} else {
0
};
println!("Value: {}", safe_value);
}
/*
Output:
Value: 20
*/
The function documents its precondition clearly: you must ensure the index is valid. This makes it easy for reviewers and future users to know what’s required.
Example: Soundness Invariant for a Custom Wrapper
/// A wrapper over a raw pointer to i32 values.
///
/// # Safety Invariants:
/// - `ptr` must be non-null and point to at least `len` valid `i32` elements.
/// - The memory must remain valid for the lifetime of this struct.
struct IntBuffer {
ptr: *const i32,
len: usize,
}
impl IntBuffer {
/// SAFETY: Caller must ensure that `ptr` is valid for `len` reads.
unsafe fn new(ptr: *const i32, len: usize) -> Self {
IntBuffer { ptr, len }
}
fn get(&self, index: usize) -> Option<i32> {
if index < self.len {
Some(unsafe { *self.ptr.add(index) })
} else {
None
}
}
}
fn main() {
let data = vec![5, 10, 15];
let ptr = data.as_ptr();
// Safe because the data is valid and lives long enough
let buf = unsafe { IntBuffer::new(ptr, data.len()) };
println!("Element 2: {:?}", buf.get(2));
}
/*
Output:
Element 2: Some(15)
*/
In this example, the documentation explains exactly what guarantees the caller must uphold when using unsafe fn new
. This is essential for maintaining soundness and avoiding subtle bugs down the line.
Example: Safe Abstraction with Documented Internal Safety
struct SafeReader<'a> {
slice: &'a [u8],
pos: usize,
}
impl<'a> SafeReader<'a> {
/// Returns the next byte, or None if we're at the end.
fn next(&mut self) -> Option<u8> {
if self.pos < self.slice.len() {
// SAFETY: We check bounds before accessing
let byte = unsafe { *self.slice.as_ptr().add(self.pos) };
self.pos += 1;
Some(byte)
} else {
None
}
}
}
fn main() {
let bytes = b"Rust";
let mut reader = SafeReader { slice: bytes, pos: 0 };
while let Some(byte) = reader.next() {
println!("Byte: {}", byte as char);
}
}
/*
Output:
Byte: R
Byte: u
Byte: s
Byte: t
*/
This example is safe to call from the outside, and the comment next to the unsafe block documents why it’s safe — because bounds are checked first.
Summary
- Invariants are the rules that must always hold true for your unsafe code to remain sound.
- Preconditions are what you require from the caller before entering an unsafe context.
- Always document:
  - What conditions must be true (index < len, pointer validity, non-aliasing, etc.).
  - Who is responsible for ensuring those conditions (caller vs. implementer).
  - Why an unsafe block is safe in this specific case.
Rule of Thumb: If it’s not enforced by the compiler, write it down.
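Rustdoc even has a convention for this: a # Safety section on every public unsafe fn (Clippy’s missing_safety_doc lint will remind you if you forget). A sketch:

```rust
/// Returns the element at `index` without bounds checking.
///
/// # Safety
///
/// The caller must guarantee `index < slice.len()`; otherwise the read is
/// out of bounds, which is undefined behavior.
pub unsafe fn get_at(slice: &[u8], index: usize) -> u8 {
    unsafe { *slice.as_ptr().add(index) }
}

fn main() {
    let data = [10u8, 20, 30];
    // SAFETY: 1 < data.len(), upholding the documented precondition.
    let v = unsafe { get_at(&data, 1) };
    println!("Value: {}", v); // prints 20
}
```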
Real-World Examples
While most of your Rust code will never need unsafe
, real-world systems-level projects sometimes require dropping down a level — for performance, FFI, or building new abstractions that can’t be expressed in safe Rust alone.
In this section, we’ll look at realistic and responsible ways to use unsafe
, starting with a pattern used across Rust’s standard library and major crates: hiding unsafe internals behind a safe API.
Creating a Safe Abstraction with unsafe
Internals
The best kind of unsafe
is the one your users never see. If you can create a sound and safe abstraction that internally uses unsafe
, you get all the benefits of performance or flexibility — without exposing other parts of your codebase (or users of your API) to the risks.
Let’s look at a few real-world-flavored examples.
Example: Fixed-Size Stack Buffer
Suppose you want a small buffer that holds exactly 4 integers — with no heap allocation and fast access. You can do this with an array, but you may want to offer indexed access without bounds checks in critical paths.
struct SmallBuffer {
data: [i32; 4],
}
impl SmallBuffer {
fn new(values: [i32; 4]) -> Self {
SmallBuffer { data: values }
}
/// Returns the value at the given index, or None if out of bounds.
fn get(&self, index: usize) -> Option<i32> {
if index < self.data.len() {
// SAFETY: index is bounds-checked above
Some(unsafe { *self.data.as_ptr().add(index) })
} else {
None
}
}
}
fn main() {
let buf = SmallBuffer::new([10, 20, 30, 40]);
println!("Index 2: {:?}", buf.get(2));
println!("Index 5: {:?}", buf.get(5));
}
/*
Output:
Index 2: Some(30)
Index 5: None
*/
This is a great example of hiding unsafe logic behind a safe, well-bounded API.
Example: Flags with a Custom API
Here’s a flag container with custom accessors. The internal implementation uses unchecked array indexing to skip the accessor’s bounds check, but callers are never exposed to it directly. (Note that plain bit shifts like 1 << index are already safe in Rust — they would not need unsafe at all.)
struct Flags {
flags: [bool; 8], // Up to 8 boolean flags
}
impl Flags {
fn new() -> Self {
Flags { flags: [false; 8] }
}
fn set(&mut self, index: u8) {
if index < 8 {
self.flags[index as usize] = true;
}
}
fn is_set(&self, index: u8) -> bool {
if index < 8 {
// SAFETY: index is checked above to be < 8
unsafe { *self.flags.get_unchecked(index as usize) }
} else {
false
}
}
}
fn main() {
let mut flags = Flags::new();
flags.set(3);
flags.set(7);
for i in 0..9 {
println!("Flag {}: {}", i, flags.is_set(i));
}
}
/*
Output:
Flag 0: false
Flag 1: false
Flag 2: false
Flag 3: true
Flag 4: false
Flag 5: false
Flag 6: false
Flag 7: true
Flag 8: false
*/
The unsafe access is hidden behind safe checks. Callers can’t trigger UB because the API guarantees valid use.
Example: Safe API for Zero-Copy Conversion
Here’s an abstraction that reads a byte slice and safely converts it into a slice of u32
, as long as the alignment and length requirements are satisfied. Internally, it uses unsafe
— but the API protects the user.
fn read_u32s(bytes: &[u8]) -> Option<&[u32]> {
if bytes.len() % 4 != 0 {
return None;
}
let ptr = bytes.as_ptr() as *const u32;
// Reject misaligned input: an unaligned read through `from_raw_parts`
// is undefined behavior, even on platforms that tolerate unaligned loads.
if (ptr as usize) % std::mem::align_of::<u32>() != 0 {
return None;
}
let len = bytes.len() / 4;
// SAFETY: the length is a multiple of 4, the pointer is aligned for u32,
// and the borrow keeps the bytes alive for the returned slice's lifetime.
Some(unsafe { std::slice::from_raw_parts(ptr, len) })
}
fn main() {
let bytes = [0, 0, 0, 1, 0, 0, 0, 2]; // two u32 values in native byte order
if let Some(values) = read_u32s(&bytes) {
for v in values {
println!("Value: {}", v);
}
} else {
println!("Byte slice has the wrong length or alignment for a u32 view.");
}
}
/*
Output (when the buffer happens to be 4-byte aligned; shown for a little-endian machine):
Value: 16777216
Value: 33554432
*/
In a real-world application, you’d likely reach for slice::align_to() instead, which splits off any unaligned head and tail for you. Either way the pattern is the same: you use unsafe
to power a fast API, but protect callers with strong, checked preconditions.
Summary
Creating safe abstractions around unsafe
code:
- Makes your code safer to use and maintain.
- Localizes risk and makes it easier to audit.
- Empowers performance and flexibility without exposing the rest of your program to UB.
Rule of Thumb: Unsafe should be like radiation — powerful, but well-contained in a lead box with a clearly labeled interface.
Using unsafe
for Performance in a Tight Loop
Rust is fast by default — but in performance-critical sections like inner loops, even minor overhead (like bounds checks) can add up. When you’re absolutely sure your data access is safe and the compiler can’t optimize it away, you can use unsafe
to remove these checks manually.
The key is to only do this when you’ve measured the bottleneck and can safely guarantee that the unsafe code won’t cause undefined behavior.
This pattern is common in parsing, image processing, signal analysis, and other data-intensive domains where a hot loop runs millions of times.
Let’s look at how to apply unsafe
surgically for performance — without breaking soundness.
Example: Safe vs. Unsafe in a Loop (Bounds Check Removed)
fn sum_safe(data: &[i32]) -> i32 {
let mut sum = 0;
for i in 0..data.len() {
sum += data[i]; // Bounds-checked access
}
sum
}
fn sum_unsafe(data: &[i32]) -> i32 {
let mut sum = 0;
let ptr = data.as_ptr();
unsafe {
for i in 0..data.len() {
sum += *ptr.add(i); // No bounds check
}
}
sum
}
fn main() {
let data: Vec<i32> = (1..=5).collect();
println!("Safe sum: {}", sum_safe(&data));
println!("Unsafe sum: {}", sum_unsafe(&data));
}
/*
Output:
Safe sum: 15
Unsafe sum: 15
*/
Both functions return the same result. The unsafe version skips the bounds check on each iteration, which could offer performance gains in tight, high-volume loops.
Example: Benchmark-style Loop Simulation
You could simulate the performance benefits using a larger loop to observe time differences, but here’s a simple illustrative loop with no side effects:
fn main() {
let data: Vec<i32> = (0..10_000).collect();
let ptr = data.as_ptr();
let mut sum = 0;
unsafe {
for i in 0..data.len() {
sum += *ptr.add(i);
}
}
println!("Sum of 0..9999: {}", sum);
}
/*
Output:
Sum of 0..9999: 49995000
*/
This loop is safe because we:
- Use .len() to guard the range.
- Don’t go out of bounds.
- Don’t alias or mutate the data while iterating.
Example: Safe Alternative with Iterators (for Comparison)
fn main() {
let data: Vec<i32> = (0..10_000).collect();
let sum: i32 = data.iter().sum();
println!("Iterator sum: {}", sum);
}
/*
Output:
Iterator sum: 49995000
*/
This version uses idiomatic, safe Rust. In many cases, the compiler can optimize iterators very well, and they should be your first choice unless profiling shows a need for manual unsafe
.
Summary
- unsafe can remove per-iteration overhead in hot loops — but it’s a precision tool.
- Only use it when:
  - The loop is in a bottleneck.
  - The invariants (bounds, validity, aliasing) are guaranteed.
- Keep the unsafe block as small as possible — ideally just the read.
Tip: Benchmark before and after! If the safe version is fast enough, leave it that way.
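For a rough first measurement, std::time::Instant is enough (for trustworthy numbers, reach for a benchmarking harness such as criterion — and remember to build with --release):

```rust
use std::time::Instant;

fn sum_indexed(data: &[i64]) -> i64 {
    let mut sum = 0;
    for i in 0..data.len() {
        sum += data[i]; // bounds-checked access
    }
    sum
}

fn main() {
    let data: Vec<i64> = (0..1_000_000).collect();
    let start = Instant::now();
    let total = sum_indexed(&data);
    // Timing varies by machine; measure the unsafe variant the same way and compare.
    println!("sum = {}, took {:?}", total, start.elapsed());
}
```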
Calling C Code with FFI in Rust
Rust is a systems programming language — and that means it often needs to talk to other low-level languages like C. This is done through FFI (Foreign Function Interface), which lets Rust call external functions that are written in C or exposed in a C-compatible format.
Because Rust can’t verify the behavior or memory safety of external functions, all FFI calls are considered unsafe
. It’s your job to ensure that:
- The function really exists and is linked correctly.
- The arguments are valid.
- The contract (e.g. null safety, valid pointers, etc.) is upheld.
Let’s walk through some simple and safe examples that show how to use FFI in practice.
Example: Calling a Standard C Library Function (abs
)
Rust can call C functions like abs()
from libc, which are usually available on all platforms without extra setup.
unsafe extern "C" {
fn abs(input: i32) -> i32;
}
fn main() {
let x = -42;
let result = unsafe { abs(x) };
println!("Absolute value of {} is {}", x, result);
}
/*
Output:
Absolute value of -42 is 42
*/
This works because abs()
is part of the standard C library. You don’t need a C compiler or a separate .c
file — Rust links it automatically.
Note: You must declare the function inside an unsafe extern "C"
block and call it within an unsafe
block, because Rust can’t verify the foreign function’s behavior or contract.
Example: Wrapping an FFI Call in a Safe Rust Function
To isolate unsafe behavior and expose a clean API, it’s a good idea to wrap FFI calls in safe functions:
unsafe extern "C" {
fn abs(input: i32) -> i32;
}
fn safe_abs(x: i32) -> i32 {
unsafe { abs(x) }
}
fn main() {
println!("safe_abs(-99) = {}", safe_abs(-99));
}
/*
Output:
safe_abs(-99) = 99
*/
This shields the unsafe code behind a safe interface, which is a best practice — especially if you’re calling the function in many places.
Example: Declaring and Using an extern "C"
Function That Doesn’t Exist
What happens if you declare a function that doesn’t actually exist in any linked library?
unsafe extern "C" {
fn non_existent_function(x: i32) -> i32;
}
fn main() {
// Uncommenting this will cause a linker error at build time.
// let result = unsafe { non_existent_function(5) };
println!("Not calling the undefined function to avoid linker error.");
}
/*
Output:
Not calling the undefined function to avoid linker error.
*/
This shows how FFI is a contract: the compiler will trust you, but the linker will complain if the function isn’t defined somewhere. If you uncomment the call, it won’t compile unless you actually link in that function.
Summary
Calling C functions from Rust:
- Requires unsafe because Rust can’t verify their contracts.
- Should be isolated behind safe wrappers whenever possible.
- Can be done easily for standard C library functions like abs, strlen, etc.
Best Practice: Wrap FFI calls in a safe function and document any invariants (e.g., pointer validity, null safety) that must be upheld by the caller.
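Here’s a slightly richer sketch wrapping libc’s strlen: the CString handles the pointer invariants (non-null, NUL-terminated, alive for the call), so the wrapper itself can be a safe function. (Written with a plain extern block, which works on all editions; on the 2024 edition you’d spell it unsafe extern "C" as in the examples above.)

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

/// Safe wrapper: the CString guarantees a valid, NUL-terminated C string.
fn c_string_len(s: &str) -> usize {
    let c = CString::new(s).expect("input must not contain interior NUL bytes");
    // SAFETY: `c.as_ptr()` is non-null and NUL-terminated, and `c` outlives the call.
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    println!("len = {}", c_string_len("hello")); // prints 5
}
```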
Tooling for Auditing Unsafe Code
Writing unsafe
Rust responsibly doesn’t stop at careful coding and documentation — it also involves regular auditing. Fortunately, Rust’s ecosystem provides tools to help you identify, count, and audit unsafe
blocks in your own code and in your project’s dependencies.
These tools can help you:
- Ensure unsafe code is reviewed.
- Avoid unsafe dependencies in security-sensitive applications.
- Maintain a culture of minimal and justified unsafe use.
We’ll start with a community-standard tool: cargo-geiger
.
Using cargo-geiger
to Track Unsafe Usage
cargo-geiger
is a tool that scans your Rust project and reports:
- How many unsafe blocks exist.
- Whether they’re in your own code or your dependencies.
- Which files and functions they appear in.
This is incredibly useful for auditing your codebase or checking third-party crates for unexpected unsafe
usage.
Installing cargo-geiger
You can install it with cargo install
:
cargo install cargo-geiger
This adds a new subcommand: cargo geiger
.
Example Project: Tracking Unsafe Blocks
Let’s say you have the following simple program:
fn main() {
let ptr = &42 as *const i32;
unsafe {
println!("Unsafe dereference: {}", *ptr);
}
}
/*
Output:
Unsafe dereference: 42
*/
This code is completely safe to run, but it uses a single unsafe
block. You can scan the file like this:
cargo geiger
And you’ll get output like:
Metric output format: x/y
x = unsafe code used by the build
y = total unsafe code found in the crate
Symbols:
🔒 = No `unsafe` usage found, declares #![forbid(unsafe_code)]
❓ = No `unsafe` usage found, missing #![forbid(unsafe_code)]
☢️ = `unsafe` usage found
Functions Expressions Impls Traits Methods Dependency
0/0 0/0 0/0 0/0 0/0 ❓ unsafe_in_rust 0.1.0
0/0 0/0 0/0 0/0 0/0
No unsafe usage was found, but we know we have one. What’s going on?
Why You’re Seeing `0/0` for Everything
The key is that `cargo-geiger` only analyzes `lib.rs` and the main library crate by default — not `main.rs` from a binary target, unless explicitly configured to do so.
Your current code is in `main.rs` (a binary crate), and unless `cargo-geiger` is configured to analyze binaries, it won’t count the `unsafe` code in them. That’s why you’re getting all `0/0`.
How to Fix It
To include binaries like `main.rs`, run this instead:
cargo geiger --build-deps
Functions Expressions Impls Traits Methods Dependency
0/0 0/0 0/0 0/0 0/0 ❓ unsafe_in_rust 0.1.0
0/0 0/0 0/0 0/0 0/0
That still didn’t capture our code in `main.rs`, so let’s temporarily move the code to `src/lib.rs`.
Move the code to `src/lib.rs` like this:
pub fn run() {
let ptr = &42 as *const i32;
unsafe {
println!("Unsafe dereference: {}", *ptr);
}
}
Then in `src/main.rs`:
fn main() {
unsafe_in_rust::run();
}
Still, we see no unsafe usage in the output.
Functions Expressions Impls Traits Methods Dependency
0/0 0/0 0/0 0/0 0/0 ❓ unsafe_in_rust 0.1.0
0/0 0/0 0/0 0/0 0/0
Our code in `lib.rs` does contain unsafe code, but `cargo-geiger` does not count expression-level `unsafe { ... }` blocks inside safe functions.
That means:
- `unsafe fn foo() {}` → will be detected
- `unsafe trait Foo {}` → will be detected
- `unsafe impl Foo for Bar` → will be detected
- `unsafe { ... }` inside a regular `fn` → won’t be counted
This is a known limitation of `cargo-geiger`’s current static analysis.
How to See It Detected
Try rewriting your function like this:
src/lib.rs
pub unsafe fn run() {
let ptr = &42 as *const i32;
println!("Unsafe dereference: {}", *ptr);
}
src/main.rs
fn main() {
unsafe {
unsafe_in_rust::run();
}
}
Now re-run:
cargo geiger
You’ll see something like:
Functions Expressions Impls Traits Methods Dependency
1/1 3/3 0/0 0/0 0/0 ☢️ unsafe_in_rust 0.1.0
1/1 3/3 0/0 0/0 0/0
This means:

| Column | Meaning |
|---|---|
| Functions | You declared and called 1 `unsafe fn` → `run()` |
| Expressions | There are 3 `unsafe` expressions (i.e. `unsafe {}` blocks) total: 1 in `lib.rs` (inside `run()` for the pointer dereference), 1 in `main.rs` (to call `unsafe fn run()`), and possibly an internal dereference considered unsafe. |
| Impls | No `unsafe impl`s detected. |
| Traits | No `unsafe trait`s declared. |
| Methods | No unsafe methods found. |
| ☢️ | Unsafe usage detected in the crate (`unsafe_in_rust`) |
`cargo-geiger` is doing its job here by detecting:
- The `unsafe fn` declaration,
- The `unsafe` block used to call it,
- And the internal pointer dereference (all as expressions or usage).
In short:
- ✅ 1/1 Functions → `unsafe fn run()` declared and used.
- ✅ 3/3 Expressions → three `unsafe` usages found (call site, dereference, and possibly the cast as part of the expression).
- ✅ The report correctly shows unsafe usage in your crate.
This lets you:
- Catch unintentional unsafe usage creeping into a project.
- Make informed decisions about whether to audit or replace crates with high unsafe usage.
- Help enforce a “no unsafe without review” policy.
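A compiler-enforced complement to a “no unsafe without review” policy is the `#![forbid(unsafe_code)]` crate attribute, which corresponds to the 🔒 symbol in the geiger legend. A sketch of a `src/lib.rs` that opts in:

```rust
// With this attribute, any `unsafe` block or item anywhere in the
// crate becomes a hard compile error, and (unlike `deny`) it cannot
// be overridden further down with #[allow(unsafe_code)].
#![forbid(unsafe_code)]

pub fn safe_only(x: i32) -> i32 {
    // Only safe operations are possible in this crate now.
    x.wrapping_mul(2)
}
```

Crates that carry this attribute give both `cargo-geiger` and human reviewers a strong, machine-checked guarantee rather than an absence-of-evidence ❓.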
`cargo-geiger` is an essential tool when:
- You want to review unsafe use before a release.
- You need to enforce security or audit requirements.
- You’re working on safety-critical or high-assurance systems.
Best practice: Run `cargo-geiger` regularly as part of your workflows to keep unsafe code visible, minimal, and justifiable.
Reading Unsafe Code in Crates You Use
Even if your own code is free of `unsafe`, your project might still rely on crates that use it internally — and that’s perfectly fine. Many high-performance or low-level libraries use `unsafe` responsibly to deliver safe, fast APIs.
However, when working on security-sensitive, embedded, or mission-critical projects, it’s valuable to inspect and understand the unsafe code in your dependencies. Rust gives you several ways to do this — both manually and with tooling.
Let’s explore how to identify, locate, and interpret unsafe code in crates you depend on.
Example: Using `cargo geiger` to Audit Dependencies
Let’s look at a simple project with a common dependency. Add the `bitflags` crate to your `Cargo.toml` file:
[dependencies]
bitflags = "2.4"
Then in your `main.rs`:
use bitflags::bitflags;
bitflags! {
#[derive(Debug)]
struct Flags: u32 {
const A = 0b0001;
const B = 0b0010;
const C = 0b0100;
}
}
fn main() {
let f = Flags::A | Flags::C;
println!("Combined flags: {:?}", f);
}
/*
Output:
Combined flags: Flags(A | C)
*/
Run:
cargo geiger
You might see output like:
1/1 2/2 0/0 0/0 0/0 ☢️ unsafe_in_rust 0.1.0
0/0 0/0 0/0 0/0 0/0 ❓ └── bitflags 2.9.0
1/1 2/2 0/0 0/0 0/0
Note the symbols here: the root crate is marked ☢️ because of our own `unsafe` code, while `bitflags` is marked ❓, meaning `cargo-geiger` found no `unsafe` in the code it analyzed but the crate doesn’t declare `#![forbid(unsafe_code)]`. As we saw earlier, some unsafe constructs also escape its static analysis, so a ❓ is not a guarantee that a dependency is unsafe-free.
Summary
Even when you don’t write `unsafe` yourself:
- You may still rely on it through dependencies.
- Tools like `cargo geiger` help you track and assess unsafe usage in your project tree.
- Inspecting crates manually or via GitHub/docs.rs gives you transparency into how unsafe code is used — and whether it’s trustworthy.
Best Practice: Treat dependency review as part of your unsafe audit — especially for crates that operate at the system level or handle raw memory, cryptography, or FFI.
Code Review Practices Around `unsafe`
`unsafe` Rust should always be treated as a special case in code review — not because it’s bad, but because it turns off the compiler’s guardrails. That means the burden of safety and correctness falls entirely on the developer — and the reviewer.
If your team (or you, solo) is working with `unsafe`, it’s crucial to establish clear, repeatable review habits to catch issues early and maintain soundness.
This subsection walks through key practices and shows examples that are simple but meaningful to review.
What to Look for in Unsafe Code Reviews
- Minimal scope: Is the unsafe block as small as possible?
- Soundness: Is the unsafe block or API sound, or can safe code use it incorrectly to cause undefined behavior?
- Invariants documented: Are preconditions and invariants clearly stated?
- Safe alternative: Could this be written in safe Rust?
- Encapsulation: Is the unsafe code wrapped in a safe abstraction?
- Code clarity: Is it obvious why `unsafe` is required?
Let’s walk through examples that either follow or violate these principles.
Example: Good Unsafe Code with Comments and Tight Scope
/// Returns the value at `index` if it's within bounds.
///
/// # Safety
/// This function must not be called with an index >= slice.len().
unsafe fn get_unchecked(slice: &[i32], index: usize) -> i32 {
*slice.as_ptr().add(index)
}
fn main() {
let data = [100, 200, 300];
let val = if 1 < data.len() {
unsafe { get_unchecked(&data, 1) }
} else {
0
};
println!("Value: {}", val);
}
/*
Output:
Value: 200
*/
This is a good example of `unsafe` done well:
- The unsafe block is isolated.
- The invariant is documented.
- It’s only called after checking the index.
Example: Unsafe with No Explanation or Justification
Warning – do not run this code. It is just to illustrate the code review process.
fn main() {
let data = vec![1, 2, 3];
let val;
unsafe {
val = *data.as_ptr().add(10); // No check!
}
println!("Value: {}", val);
}
/*
Output:
(May panic, crash, or show garbage — undefined behavior)
*/
As a reviewer, this should raise red flags:
- No bounds check before dereference.
- No comment explaining the contract or why it’s okay.
- Unsafe use could easily lead to UB.
You’d want to ask the author to:
- Justify the unsafe usage.
- Add bounds checks or a wrapper function.
- Consider the slice’s safe `get` method or returning `Option<i32>`.
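One way the author might address that feedback is to drop the raw pointer entirely and use the slice’s safe `get` method, which returns `None` for out-of-bounds indices. A sketch:

```rust
fn main() {
    let data = vec![1, 2, 3];
    // `get` performs the bounds check for us and returns an Option,
    // so an out-of-range index can never cause undefined behavior.
    let val = data.get(10).copied().unwrap_or(0);
    println!("Value: {}", val);
}
```

Here there is nothing left for a reviewer to question: the contract is encoded in the type system rather than in a comment.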
Example: Safe Abstraction Wrapping Unsafe Internals
struct SmallBuffer {
data: [u8; 4],
}
impl SmallBuffer {
fn get(&self, index: usize) -> Option<u8> {
if index < self.data.len() {
// SAFETY: We checked bounds above.
Some(unsafe { *self.data.as_ptr().add(index) })
} else {
None
}
}
}
fn main() {
let buf = SmallBuffer { data: [10, 20, 30, 40] };
println!("Index 3: {:?}", buf.get(3));
}
/*
Output:
Index 3: Some(40)
*/
This is sound and idiomatic:
- Unsafe is fully encapsulated.
- The API is safe.
- Clear precondition is enforced before dereferencing.
Summary
When reviewing `unsafe` code, always ask:
- Are the invariants clearly stated?
- Is the scope of unsafety minimized?
- Is the unsafe logic isolated behind a safe API?
- Could it be done safely instead?
Code Review Mindset: Don’t just ask “Does it work?” — ask “Is it sound, readable, and robust against accidental misuse?”
When Not to Use `unsafe`
While `unsafe` Rust gives you powerful tools to bypass compiler checks, its misuse can introduce undefined behavior and subtle, hard-to-debug issues. Not every performance gain or ergonomic improvement is worth the cost of soundness and safety.
This section covers cases where you should avoid using `unsafe`, even if it seems tempting — starting with one of the most common traps: premature optimization.
Avoiding Premature Optimization
One of the most common misuses of `unsafe` is trying to speed up code that hasn’t actually been proven slow. Rust is already designed for performance, and in many cases, its high-level constructs (like iterators) compile down to code that’s just as fast — or faster — than hand-written unsafe versions.
Before reaching for `unsafe`, ask:
- Have I benchmarked the code?
- Is there a real performance bottleneck?
- Can the compiler already optimize this safely?
Let’s walk through examples where `unsafe` seems like it might help — but isn’t necessary at all.
Example: Using Iterators vs Manual Indexing
fn sum_iterator(data: &[i32]) -> i32 {
data.iter().sum()
}
fn sum_manual(data: &[i32]) -> i32 {
let mut total = 0;
for i in 0..data.len() {
total += data[i];
}
total
}
fn main() {
let values = vec![1, 2, 3, 4, 5];
println!("Iterator sum: {}", sum_iterator(&values));
println!("Manual sum: {}", sum_manual(&values));
}
/*
Output:
Iterator sum: 15
Manual sum: 15
*/
In release mode, both approaches compile to near-identical machine code thanks to Rust’s optimizer. You don’t need unsafe or raw pointers here — iterators are safe, fast, and idiomatic.
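The honest way to settle such questions is to measure. A rough sketch with `std::time::Instant` follows; for real comparisons you would want a proper benchmarking harness such as `criterion`, which handles warm-up and statistical noise:

```rust
use std::time::Instant;

fn sum_iterator(data: &[i64]) -> i64 {
    data.iter().sum()
}

fn main() {
    // A large enough input that the timing is meaningful.
    let values: Vec<i64> = (0..1_000_000).collect();

    let start = Instant::now();
    let total = sum_iterator(&values);
    let elapsed = start.elapsed();

    println!("sum = {} in {:?}", total, elapsed);
}
```

Run this in release mode (`cargo run --release`); debug builds disable the optimizations that make safe abstractions competitive in the first place.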
Example: Avoiding Unsafe Access for Slight Gains
You might be tempted to write:
fn sum_unsafe(data: &[i32]) -> i32 {
let mut sum = 0;
let ptr = data.as_ptr();
unsafe {
for i in 0..data.len() {
sum += *ptr.add(i);
}
}
sum
}
This works — and is safe in this exact form — but:
- It adds complexity and risk.
- It only helps if profiling shows a real performance gain.
- It doesn’t outperform iterators unless the loop is in an ultra-hot path.
So if you’re doing this before benchmarking, you’re optimizing too early.
Example: Using Safe Wrappers Instead
Instead of writing your own unsafe abstractions, it’s often better to lean on the standard library or crates that already encapsulate the unsafe logic soundly:
use std::cell::Cell;
fn main() {
let counter = Cell::new(0);
for _ in 0..5 {
counter.set(counter.get() + 1);
}
println!("Counter: {}", counter.get());
}
/*
Output:
Counter: 5
*/
Instead of using `unsafe` and raw pointers for shared mutability, `Cell<T>` gives you a safe, sound abstraction — no need to reinvent it.
Summary
Don’t use `unsafe` just to:
- Save a few nanoseconds without measuring.
- Skip what you assume is “overhead” in safe Rust.
- “Prove” your code is low-level — Rust already gives you that power safely.
Best Practice: Benchmark first. Optimize second. Reach for `unsafe` only when safe Rust can’t express what you need — and you know the performance gain is real.
Safe Alternatives You Might Have Missed
Before reaching for `unsafe`, it’s worth asking: “Is there a safe abstraction that already solves this?”
Rust’s standard library and ecosystem are packed with tools designed to give you low-level control without exposing you to undefined behavior.
This subsection highlights a few common situations where developers often think `unsafe` is required — but it’s not.
Example: Shared Mutability with `Cell` and `RefCell`
You might think you need `unsafe` to mutate shared data — but Rust offers interior mutability types like `Cell<T>` and `RefCell<T>`.
use std::cell::Cell;
fn main() {
let count = Cell::new(0);
for _ in 0..5 {
count.set(count.get() + 1);
}
println!("Count: {}", count.get());
}
/*
Output:
Count: 5
*/
`Cell<T>` gives you copy-style interior mutability. You don’t need raw pointers or unsafe tricks to achieve this.
Example: Avoiding Manual Memory with `MaybeUninit`
If you’re initializing a value manually and think you need raw pointers and `mem::uninitialized()` (which is deprecated), use `MaybeUninit` instead.
use std::mem::MaybeUninit;
fn main() {
let mut val: MaybeUninit<i32> = MaybeUninit::uninit();
// SAFELY initialize it
val.write(42);
let initialized = unsafe { val.assume_init() };
println!("Initialized value: {}", initialized);
}
/*
Output:
Initialized value: 42
*/
`MaybeUninit` is the sound way to work with manually initialized memory: only the final `assume_init` call is `unsafe`, and its contract (the value really has been initialized) is explicit. This is useful when interfacing with FFI or performance-critical buffers, and it avoids the UB of reading uninitialized memory.
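The same idea scales to buffers. A common pattern is to initialize a fixed-size array element by element and only then assert that everything is initialized. A sketch (the size and values here are arbitrary):

```rust
use std::mem::MaybeUninit;

fn main() {
    // Start with four uninitialized slots. The array-repeat syntax
    // works because MaybeUninit<u32> is Copy.
    let mut buf: [MaybeUninit<u32>; 4] = [MaybeUninit::uninit(); 4];

    // Write every element exactly once.
    for (i, slot) in buf.iter_mut().enumerate() {
        slot.write(i as u32 * 10);
    }

    // SAFETY: every element was written in the loop above, and
    // [MaybeUninit<u32>; 4] has the same layout as [u32; 4].
    let init: [u32; 4] = unsafe { std::mem::transmute(buf) };

    println!("{:?}", init);
}
```

Note how the unsafe step is a single, final assertion with a clearly stated invariant, which is exactly the shape reviewers want to see.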
Example: Avoiding Pointer Math with split_at
Say you need to divide a slice into two parts. You might be tempted to do manual pointer math:
// Unsafe and unnecessary:
// let (left, right) = unsafe {
// let ptr = slice.as_ptr();
// (std::slice::from_raw_parts(ptr, mid), std::slice::from_raw_parts(ptr.add(mid), len - mid))
// };
Instead, use this safe built-in:
fn main() {
let data = [1, 2, 3, 4, 5, 6];
let (left, right) = data.split_at(3);
println!("Left: {:?}, Right: {:?}", left, right);
}
/*
Output:
Left: [1, 2, 3], Right: [4, 5, 6]
*/
`split_at` gives you pointer-splitting behavior safely, with bounds checks included.
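There is also a mutable sibling, `split_at_mut`, which hands you two non-overlapping `&mut` slices at once. The borrow checker would reject two direct mutable borrows of the same array, but the standard library implements this soundly (with `unsafe` inside) so you don’t have to. A sketch:

```rust
fn main() {
    let mut data = [1, 2, 3, 4, 5, 6];

    // Two non-overlapping mutable views into the same array, obtained
    // safely: split_at_mut encapsulates the unsafe pointer arithmetic.
    let (left, right) = data.split_at_mut(3);
    left[0] = 100;
    right[0] = 400;

    println!("{:?}", data);
}
```

This is a textbook case of an unsafe implementation hidden behind a safe, misuse-proof API.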
Summary
Before writing `unsafe`, ask:
- Is there a safe equivalent in the standard library?
- Did I check the docs for types like `Cell`, `RefCell`, `MaybeUninit`, `split_at`, `Arc`, `Rc`, or iterators?
- Can I restructure my logic slightly to stay within safe Rust?
Pro Tip: “Unsafe” often means “unnecessary” if you haven’t explored the full power of safe Rust yet.
Learning to Love the Borrow Checker (Again)
After spending time in the land of `unsafe`, it’s easy to feel frustrated by Rust’s borrow checker when you return to safe code. You might even be tempted to bypass its rules “just this once” — but don’t! The borrow checker is what makes Rust’s safety guarantees possible.
Instead of fighting it, the better approach is to learn to work with the borrow checker, understand what it’s protecting you from, and embrace the clarity it brings to ownership and lifetimes.
Let’s look at some examples where the borrow checker feels like a hurdle — but turns out to be a hero.
Example: Borrow Checker Prevents Aliasing Bugs
fn main() {
let mut value = 100;
let r1 = &value;
// let r2 = &mut value; // ❌ Error: cannot borrow `value` as mutable because it's also borrowed as immutable
println!("Read-only borrow: {}", r1);
// Once r1 is no longer used, mutable borrow is allowed
let r2 = &mut value;
*r2 += 1;
println!("Mutated value: {}", r2);
}
/*
Output:
Read-only borrow: 100
Mutated value: 101
*/
The borrow checker enforces exclusive mutability, preventing subtle aliasing bugs and race conditions — rules that C and C++ leave entirely up to the programmer.
Example: Preventing Use-After-Free with Ownership
fn take_ownership(data: Vec<i32>) {
println!("Got data: {:?}", data);
}
fn main() {
let values = vec![1, 2, 3];
take_ownership(values);
// println!("{:?}", values); // ❌ Error: values was moved
}
/*
Output:
Got data: [1, 2, 3]
*/
Rust prevents use-after-free at compile time. Once ownership is moved, the original variable is no longer accessible — eliminating an entire class of memory bugs common in C/C++.
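When you do still need the data afterwards, the usual fix is to borrow instead of move. A sketch (the function name `borrow_data` is just illustrative):

```rust
fn borrow_data(data: &[i32]) {
    println!("Got data: {:?}", data);
}

fn main() {
    let values = vec![1, 2, 3];
    // Passing a reference lends the data without giving up ownership.
    borrow_data(&values);
    // `values` is still valid and usable here:
    println!("Still usable: {:?}", values);
}
```

Taking `&[i32]` rather than `Vec<i32>` also makes the function more flexible, since it accepts arrays and slices as well as vectors.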
Example: Safe Mutability Through Scoped Borrowing
fn main() {
let mut x = 10;
{
let y = &mut x;
*y += 5;
println!("Inside block: {}", y);
}
println!("After block: {}", x);
}
/*
Output:
Inside block: 15
After block: 15
*/
You can have multiple mutations — just not overlapping ones. By scoping borrows tightly, the borrow checker allows safe and flexible mutability without ever invoking `unsafe`.
Summary
The borrow checker:
- Prevents aliasing and data races,
- Eliminates use-after-free and double-free bugs,
- Encourages clean, maintainable ownership semantics.
Reframe the mindset: The borrow checker isn’t your enemy — it’s your compiler’s bodyguard.
It’ll complain loudly so your program never silently breaks.
Checklist for Reviewing Unsafe Code
Use this checklist whenever you write or review `unsafe` blocks to ensure you’re not unintentionally introducing undefined behavior or breaking soundness guarantees:
- Is the `unsafe` block as small and isolated as possible?
  - Minimize the surface area to reduce risk.
- Do you fully understand what makes this code `unsafe`?
  - Know exactly which of the five `unsafe` operations you’re using.
- Have you clearly documented the safety invariants?
  - What must always be true for this code to be safe? Who is responsible for maintaining that?
- Have you written tests that exercise both common and edge cases?
  - Safe APIs wrapping `unsafe` internals should be stress-tested to catch invariant violations.
- Are you avoiding aliasing violations (e.g., `&mut` and `&` to the same data)?
- Are you sure you’re not reading uninitialized or deallocated memory?
- Is your abstraction sound?
  - Can safe code call it in a way that causes UB? If so, fix the API.
- Can you use a safe alternative like `Cell`, `RefCell`, `MaybeUninit`, or atomics?
  - Don’t use `unsafe` just to silence the borrow checker — understand why it’s complaining.
- Have you reviewed it with tools like `miri`, `cargo-geiger`, or peer review?
Rust’s `unsafe` keyword is a powerful tool — but it’s not a shortcut. It’s a contract. When you use it, you take on the responsibility of maintaining the same guarantees the compiler enforces in safe code: memory safety, thread safety, and sound abstractions.
Throughout this post, we explored what `unsafe` truly means, when it’s appropriate, how to use it responsibly, and most importantly, when not to use it. We saw how to minimize risk with small, well-documented unsafe blocks, encapsulate danger in safe APIs, audit dependencies, and re-embrace the borrow checker with new appreciation.
Used wisely, `unsafe` can unlock performance and flexibility that would be impossible otherwise — without compromising the safety Rust is known for. It’s not about silencing the compiler; it’s about proving to it (and yourself) that you know exactly what you’re doing.
Handle it with care. Use it with clarity. And keep your codebase safe, sound, and fearless.
Thanks very much for stopping by and for including ByteMagma in your quest for Rust programming mastery!