
Async Programming in Rust: Futures, async/await, and Executors

This post builds on our previous exploration of Rust’s thread-based concurrency. Now, we’ll dive into Rust’s powerful asynchronous programming model—designed for high-performance, non-blocking, and I/O-heavy workloads.

Here is our earlier post on concurrency in Rust: Mastering Rust Concurrency: Atomics, Locks, and Threads


Introduction

Rust’s async programming model provides a powerful way to write efficient, non-blocking code—especially useful when dealing with tasks like file I/O, networking, or handling thousands of concurrent connections.

Unlike thread-based concurrency, async Rust uses lightweight tasks, futures, and executors to achieve high performance with fewer system resources.

In this guide, we’ll walk through the core concepts and syntax of async programming in Rust, explore how it works under the hood, and show how to build real-world async applications using tools like tokio and async-std.


Understanding the Async Programming Model

Asynchronous programming in Rust is centered around futures, which represent values that may not be available yet but will be computed or resolved eventually. Unlike thread-based concurrency, async Rust doesn’t spawn OS threads for every concurrent task. Instead, it relies on futures and executors to efficiently manage and schedule tasks without blocking threads.

This section lays the foundation for async programming by explaining what a Future is, how it’s used, and how it integrates with async/await syntax.


What is a Future?

A Future in Rust is a value that represents an asynchronous computation. Think of it as a promise to deliver a value at some point in the future. However, unlike some other languages, Rust’s futures are lazy—they do nothing until polled by an executor.

Rust’s standard library provides the Future trait:

pub trait Future {
    type Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

This low-level trait is typically not implemented manually. Instead, developers use the async/await syntax, which compiles down to a state machine that implements this trait.


Let’s get started writing some code and explore this with practical examples.

Open a shell window (Terminal on Mac/Linux, Command Prompt or PowerShell on Windows). Then navigate to the directory where you store Rust packages for this blog series, and run the following command:

cargo new async_programming

Next, change into the newly created async_programming directory and open it in VS Code (or your favorite IDE).

Note: Using VS Code is highly recommended for following along with this blog series. Be sure to install the Rust Analyzer extension — it offers powerful features like code completion, inline type hints, and quick fixes.

Also, make sure you’re opening the async_programming directory itself in VS Code. If you open a parent folder instead, the Rust Analyzer extension might not work properly — or at all.

As we see examples in this post, you can either replace the contents of main.rs or comment out the current code for future reference with a multi-line comment:

/*
    CODE TO COMMENT OUT
*/

Now, open the file src/main.rs and replace its contents entirely with the code for this example.


Example: A Simple Future Using async fn

async fn say_hello() {
    println!("Hello from the future!");
}

#[tokio::main]
async fn main() {
    say_hello().await;
}
/* Output: Hello from the future! */

This example uses the #[tokio::main] macro to run async code. You’ll need the Tokio runtime, so update your Cargo.toml file dependencies:

[dependencies]
tokio = { version = "1", features = ["full"] }

The full feature includes everything you need for basic async, including the macros and task scheduling.
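
If you'd rather not enable everything, a smaller feature set is enough for these first examples (a sketch; later examples in this post also use Tokio's time and sync features):

[dependencies]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }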

say_hello() returns a Future that, when .awaited, executes the function body.

say_hello().await;

The #[tokio::main] macro provides the executor needed to run the async code.
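
Because futures are lazy, nothing happens until the .await. Here's a small sketch demonstrating that (noisy is just an illustrative name):

async fn noisy() {
    println!("This only runs when polled");
}

#[tokio::main]
async fn main() {
    let fut = noisy(); // nothing printed yet: the future is inert
    println!("Future created, not yet awaited");
    fut.await; // now the body runs
}

/* Output: 
Future created, not yet awaited
This only runs when polled 
*/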


Example: A Future That Returns a Value

async fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[tokio::main]
async fn main() {
    let sum = add(5, 7).await;
    println!("The sum is: {}", sum);
}

/* Output: The sum is: 12 */

This example also uses the #[tokio::main] macro to run async code, so you’ll need the Tokio dependency in Cargo.toml for this example as well:

[dependencies]
tokio = { version = "1", features = ["full"] }

Just like synchronous functions, async fn can return values. But they do so via a Future, and the actual value is only accessible after .await.

let sum = add(5, 7).await;

Example: Manually Implementing a Custom Future (For Advanced Understanding)

use std::pin::Pin;
use std::task::{Context, Poll};
use std::future::Future;

struct ImmediateHello;

impl Future for ImmediateHello {
    type Output = ();

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        println!("Hello from a custom future!");
        Poll::Ready(())
    }
}

fn main() {
    let mut future = ImmediateHello;
    let waker = futures::task::noop_waker();
    let mut cx = Context::from_waker(&waker);

    let _ = Pin::new(&mut future).poll(&mut cx);
}

/* Output: Hello from a custom future! */

This example uses futures::task::noop_waker() to simulate an executor polling a future. You’ll need the futures crate, so update your Cargo.toml file dependencies. You can keep the tokio dependency and just add the futures dependency.

[dependencies]
tokio = { version = "1", features = ["full"] }
futures = "0.3"

This provides the noop_waker() utility used to manually poll the custom future.

This is a low-level view of what an async fn gets compiled into—a state machine that implements the Future trait. In real projects, you rarely need to do this manually, but it’s helpful to understand how async/await works behind the scenes.


How async/await Works Under the Hood

Rust’s async/await syntax looks simple, but it hides some powerful mechanics under the surface. When you write an async fn, you’re not defining a function that runs asynchronously—you’re defining a function that returns a Future. That future doesn’t do anything until it’s polled by an executor.

Under the hood, the Rust compiler transforms your async fn into a state machine that implements the Future trait. Each .await point becomes a state transition, and the poll method advances through those states.

This design gives Rust async its efficiency and predictability, while still maintaining memory safety and zero-cost abstractions.


Example: async/await Compiles to a Future

Here’s how a basic async fn is desugared into something that implements Future behind the scenes.

async fn compute() -> i32 {
    42
}

#[tokio::main]
async fn main() {
    let result = compute().await;
    println!("Result: {}", result);
}

/* Output: Result: 42 */

Under the hood, compute() returns an anonymous type that implements Future<Output = i32>. The executor (in this case, tokio) polls the future until it’s ready.

This example also uses the tokio crate dependency added earlier to Cargo.toml.


Example: async/await State Transitions Illustrated

Let’s walk through a sequence with two .await calls to visualize how async functions move through states.

async fn first() -> i32 {
    10
}

async fn second(x: i32) -> i32 {
    x + 20
}

#[tokio::main]
async fn main() {
    let a = first().await;
    let b = second(a).await;
    println!("Final result: {}", b);
}

/* Output: Final result: 30 */

The compiler generates a future that first awaits first(), stores the result, then proceeds to second(). Each .await represents a pause point and state transition.

This example also uses the tokio crate dependency added to Cargo.toml earlier.
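
To make the state-machine idea concrete, here is a rough hand-written sketch of what the compiler conceptually generates for the two-step function above. ChainFuture and ChainState are illustrative names, and both awaited values are assumed to resolve immediately; the real generated code also stores the inner futures and handles Poll::Pending.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

enum ChainState {
    Start,
    GotFirst(i32),
    Done,
}

struct ChainFuture {
    state: ChainState,
}

impl Future for ChainFuture {
    type Output = i32;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        loop {
            match self.state {
                ChainState::Start => {
                    // corresponds to `let a = first().await;`
                    self.state = ChainState::GotFirst(10);
                }
                ChainState::GotFirst(a) => {
                    // corresponds to `let b = second(a).await;`
                    self.state = ChainState::Done;
                    return Poll::Ready(a + 20);
                }
                ChainState::Done => panic!("future polled after completion"),
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let result = ChainFuture { state: ChainState::Start }.await;
    println!("Final result: {}", result);
}

/* Output: Final result: 30 */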


Example: Manually Polling the Desugared Future (Advanced)

To really drive the point home, here’s a simplified manual version of how a future gets polled and yields a result.

use std::pin::Pin;
use std::task::{Context, Poll};
use std::future::Future;
use futures::task::noop_waker;

struct SimpleFuture;

impl Future for SimpleFuture {
    type Output = i32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        println!("Polling SimpleFuture");
        Poll::Ready(99)
    }
}

fn main() {
    let mut future = SimpleFuture;
    let waker = noop_waker();
    let mut ctx = Context::from_waker(&waker);

    let result = Pin::new(&mut future).poll(&mut ctx);
    println!("Poll result: {:?}", result);
}

/* Output: 
Polling SimpleFuture
Poll result: Ready(99) 
*/

This example shows what happens when a future is manually polled. While real async functions produce more complex state machines, the principle is the same.

This example also uses the futures crate dependency added to the Cargo.toml file earlier.


Tasks vs Threads: A Conceptual Shift

In traditional concurrency, we often think in terms of threads—independent paths of execution that the OS schedules on CPU cores. Each thread has its own stack and system resources. While powerful, threads can be expensive and limited in number, especially for high-concurrency I/O-bound applications like web servers or chat apps.

Rust’s async model introduces a new abstraction: the task. A task is a lightweight unit of work managed by an async runtime, not the OS. Tasks are just state machines (futures) that yield control when waiting, allowing the runtime to schedule other tasks in the meantime. This makes async Rust highly scalable and memory-efficient.

Let’s compare them in action.


Example: Spawning Threads

This example spawns several OS threads and waits for them to complete.

use std::thread;

fn main() {
    let mut handles = vec![];

    for i in 0..5 {
        let handle = thread::spawn(move || {
            println!("Hello from thread {}", i);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }
}

/* Output (order may vary): 
Hello from thread 0
Hello from thread 1
Hello from thread 2
Hello from thread 3
Hello from thread 4 
*/

Each call to thread::spawn creates a new OS thread. Threads are isolated and relatively heavy.


Example: Spawning Async Tasks

Now let’s do the same thing using async tasks with Tokio. These don’t create threads—they’re run by the async runtime.

use tokio::task;

#[tokio::main]
async fn main() {
    let mut handles = vec![];

    for i in 0..5 {
        let handle = task::spawn(async move {
            println!("Hello from task {}", i);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }
}

/* Output (order may vary): 
Hello from task 0
Hello from task 1
Hello from task 2
Hello from task 3
Hello from task 4 
*/

tokio::task::spawn creates async tasks that are executed by the runtime on a single-threaded or multi-threaded scheduler, depending on configuration.

This example also makes use of the tokio crate dependency added earlier to Cargo.toml.


Example: Combining Both – Async Tasks on Multiple Threads

For completeness, here’s how Tokio handles multiple tasks and uses a thread pool under the hood:

use tokio::task;
use std::time::Duration;

#[tokio::main(flavor = "multi_thread", worker_threads = 4)]
async fn main() {
    let mut handles = vec![];

    for i in 0..10 {
        let handle = task::spawn(async move {
            tokio::time::sleep(Duration::from_millis(100)).await;
            println!("Task {} done", i);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }
}

/* Output (order and timing may vary): 
Task 0 done
Task 1 done
...
Task 9 done 
*/

Here we explicitly configure Tokio to use a thread pool (multi_thread) with 4 worker threads. Async tasks are distributed across these threads.

This example also uses the tokio crate dependency added earlier to the Cargo.toml file.


Writing Async Code in Rust

Now that we’ve covered how Rust’s async model works under the hood, it’s time to write some real-world async code. In this section, we’ll learn how to declare and call async functions, chain multiple async calls, and use .await to drive those futures to completion. These tools are the foundation of building scalable, non-blocking applications in Rust.


Declaring and Calling async Functions

Declaring an async fn is almost identical to a normal function—except it returns a Future instead of executing immediately. To run the body of an async function, you must .await the future it returns, typically inside another async context or with the help of an async runtime like Tokio.

Let’s go step-by-step through examples that show how to define and use async functions in practical scenarios.


Example: A Basic async fn That Prints a Message

async fn greet() {
    println!("Hello from an async function!");
}

#[tokio::main]
async fn main() {
    greet().await;
}

/* Output: Hello from an async function! */

greet() is an async function that returns a Future. The .await keyword drives the future to completion using the Tokio runtime.


Example: Returning Values from an async Function

async fn multiply(x: i32, y: i32) -> i32 {
    x * y
}

#[tokio::main]
async fn main() {
    let result = multiply(4, 6).await;
    println!("Result is: {}", result);
}

/* Output: Result is: 24 */

Async functions can return values just like synchronous ones, but the value is delivered through a future: an async fn declared to return T actually returns an anonymous type implementing Future<Output = T>.
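
Here's a sketch of that desugaring: this hand-written signature is equivalent to the async fn above, returning the future explicitly via an async block.

use std::future::Future;

// Equivalent to `async fn multiply(x: i32, y: i32) -> i32`.
fn multiply(x: i32, y: i32) -> impl Future<Output = i32> {
    async move { x * y }
}

#[tokio::main]
async fn main() {
    let result = multiply(4, 6).await;
    println!("Result is: {}", result);
}

/* Output: Result is: 24 */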


Example: Composing Async Functions

async fn fetch_user_id() -> u32 {
    42
}

async fn fetch_username(user_id: u32) -> String {
    format!("user_{}", user_id)
}

#[tokio::main]
async fn main() {
    let user_id = fetch_user_id().await;
    let username = fetch_username(user_id).await;
    println!("Username is: {}", username);
}

/* Output: Username is: user_42 */

You can chain async calls just like regular function calls—just remember to .await each step. This composition style is common in database queries, API calls, and file operations.


These examples demonstrate the key aspects of writing and invoking async functions: defining them with async fn, returning values, and using .await to progress through asynchronous workflows.


Using .await with Futures

The .await keyword is what powers Rust’s asynchronous syntax. When you call an async fn, it returns a future—an object that represents a computation that hasn’t completed yet. By calling .await on a future, you’re telling the Rust async runtime to poll it until it’s ready, then yield the result.

Under the hood, .await suspends the current task whenever the future isn't ready yet, letting the executor run other tasks in the meantime, which makes it perfect for I/O-bound or highly concurrent applications.

Let’s see .await in action across several realistic examples.


Example: Awaiting a Basic Future

async fn get_value() -> i32 {
    100
}

#[tokio::main]
async fn main() {
    let val = get_value().await;
    println!("Value: {}", val);
}

/* Output: Value: 100 */

The call to get_value() returns a future. Using .await runs that future and extracts the resulting value once ready.


Example: Awaiting a Delayed Future with tokio::time::sleep

use tokio::time::{sleep, Duration};

async fn delayed_message() {
    sleep(Duration::from_secs(2)).await;
    println!("Finished waiting!");
}

#[tokio::main]
async fn main() {
    println!("Start...");
    delayed_message().await;
    println!("Done.");
}

/* Output: 
Start... (wait 2 seconds) 
Finished waiting! 
Done. 
*/

This example simulates a delayed operation using tokio::time::sleep. The .await allows other tasks to run during the sleep.
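
To see that yielding in action, here's a sketch where a spawned background task keeps printing while the main task is suspended in sleep:

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // A background task that ticks while the main task sleeps.
    tokio::spawn(async {
        for i in 1..=3 {
            sleep(Duration::from_millis(500)).await;
            println!("tick {}", i);
        }
    });

    println!("Main task sleeping...");
    sleep(Duration::from_secs(2)).await;
    println!("Main task done");
}

/* Output: 
Main task sleeping...
tick 1
tick 2
tick 3
Main task done 
*/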


Example: Mixing Await and Synchronous Code

use tokio::time::{sleep, Duration};

async fn async_step(n: u64) {
    println!("Step {}: sleeping...", n);
    sleep(Duration::from_millis(n * 100)).await;
    println!("Step {}: done", n);
}

#[tokio::main]
async fn main() {
    println!("Starting steps...");
    async_step(1).await;
    println!("Back to sync code...");
    async_step(2).await;
}

/* Output: 
Starting steps... 
Step 1: sleeping... (wait 100ms) 
Step 1: done 
Back to sync code... 
Step 2: sleeping... (wait 200ms) 
Step 2: done 
*/

You can freely alternate between async and sync code. The key is to .await any future-returning function before using its result or continuing.


These examples show how .await interacts with futures in Rust: suspending the current task while letting other tasks proceed, enabling highly efficient multitasking.


Composing Multiple Async Operations

One of the key strengths of async Rust is the ability to compose multiple async operations—either sequentially or concurrently. You can await operations in order, or run them in parallel using utilities like tokio::join! or futures::join!.

Understanding how to compose async calls is essential when writing programs that depend on multiple asynchronous steps—whether it’s fetching from multiple APIs, querying a database, or performing file and network I/O together.

Let’s explore how to compose async operations both sequentially and concurrently.


Example: Sequential Composition

async fn load_user_id() -> u32 {
    println!("Fetching user ID...");
    101
}

async fn load_user_profile(user_id: u32) -> String {
    println!("Fetching profile for user {}...", user_id);
    format!("UserProfile_{}", user_id)
}

#[tokio::main]
async fn main() {
    let user_id = load_user_id().await;
    let profile = load_user_profile(user_id).await;

    println!("Loaded profile: {}", profile);
}

/* Output: 
Fetching user ID... 
Fetching profile for user 101... 
Loaded profile: UserProfile_101 
*/

Each async call is .awaited one after the other, just like chaining synchronous function calls. This is great when each step depends on the previous result.


Example: Concurrent Composition with tokio::join!

use tokio::time::{sleep, Duration};
use tokio::join;

async fn fetch_temperature() -> i32 {
    sleep(Duration::from_secs(2)).await;
    println!("Temperature fetched");
    72
}

async fn fetch_humidity() -> u32 {
    sleep(Duration::from_secs(1)).await;
    println!("Humidity fetched");
    55
}

#[tokio::main]
async fn main() {
    let (temp, humidity) = join!(fetch_temperature(), fetch_humidity());

    println!("Weather report: {}°F, {}% humidity", temp, humidity);
}

/* Output: 
Humidity fetched 
Temperature fetched 
Weather report: 72°F, 55% humidity 
*/

tokio::join! runs both futures concurrently on the same thread. The .await inside each future still yields control when blocked, so both tasks proceed efficiently.
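
To convince yourself the two fetches really overlap, here's a quick sketch that times the join with std::time::Instant; the total is roughly 2 seconds (the longer of the two), not 3:

use std::time::Instant;
use tokio::time::{sleep, Duration};
use tokio::join;

async fn fetch_temperature() -> i32 {
    sleep(Duration::from_secs(2)).await;
    72
}

async fn fetch_humidity() -> u32 {
    sleep(Duration::from_secs(1)).await;
    55
}

#[tokio::main]
async fn main() {
    let start = Instant::now();
    let (temp, humidity) = join!(fetch_temperature(), fetch_humidity());
    println!("{}°F, {}% humidity (took {:?})", temp, humidity, start.elapsed());
}

/* Output (approximate): 72°F, 55% humidity (took 2.00s) */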


Example: Dynamic Composition with a Loop and .await

async fn async_double(n: i32) -> i32 {
    n * 2
}

#[tokio::main]
async fn main() {
    let nums = vec![1, 2, 3, 4];
    let mut results = vec![];

    for num in nums {
        let doubled = async_double(num).await;
        results.push(doubled);
    }

    println!("Doubled values: {:?}", results);
}

/* Output: Doubled values: [2, 4, 6, 8] */

Here, we .await futures inside a loop, which is useful when the number of operations isn't known at compile time. Note that this runs them one after another; a concurrent variant is sketched below.
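
If the operations are independent, the runtime-agnostic futures crate (added to Cargo.toml earlier) provides join_all to run a dynamically sized collection of futures concurrently. A sketch:

use futures::future::join_all;

async fn async_double(n: i32) -> i32 {
    n * 2
}

#[tokio::main]
async fn main() {
    let nums = vec![1, 2, 3, 4];

    // Build all the futures first, then await them together.
    let futures: Vec<_> = nums.into_iter().map(async_double).collect();
    let results = join_all(futures).await;

    println!("Doubled values: {:?}", results);
}

/* Output: Doubled values: [2, 4, 6, 8] */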


These examples show how async operations can be composed sequentially when order matters, or concurrently for performance. Mastering these patterns will let you scale your applications with ease.


Executors and Runtimes

When working with async Rust, it’s important to understand that futures don’t execute on their own. They’re just inert state machines waiting to be polled. To actually run them, we need an executor—a core component of an async runtime.

Executors schedule and drive futures to completion by polling them when they’re ready to make progress. Runtimes like tokio and async-std provide these executors, along with additional utilities like timers, I/O support, and task spawning.

In this section, we’ll explore what executors do, how they relate to runtimes, and how to use them to run async code.


What is an Executor?

An executor is responsible for polling futures to completion. It polls a future, and if the future isn't ready, it waits until the future's waker signals that progress is possible before polling again. Executors can be single-threaded or multi-threaded, and they manage the scheduling of tasks without blocking threads unnecessarily.

In Rust, you rarely write an executor yourself—libraries like tokio or async-std do that for you. But understanding the role of the executor helps demystify how your async code actually runs.


Example: Manually Driving a Future Without an Executor (Educational)

This demonstrates how an executor would call .poll() on a future directly.

use std::pin::Pin;
use std::task::{Context, Poll};
use std::future::Future;
use futures::task::noop_waker;

struct MyFuture;

impl Future for MyFuture {
    type Output = &'static str;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        println!("Polling MyFuture...");
        Poll::Ready("Done")
    }
}

fn main() {
    let mut future = MyFuture;
    let waker = noop_waker();
    let mut ctx = Context::from_waker(&waker);
    let result = Pin::new(&mut future).poll(&mut ctx);
    println!("Poll result: {:?}", result);
}

/* Output: 
Polling MyFuture... 
Poll result: Ready("Done") 
*/

This is a manual simulation of what an executor does behind the scenes: repeatedly calling .poll() until the future is Ready.
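
Extending that idea, here's a toy block_on sketch that polls a future in a loop until it's Ready. Real executors don't spin like this; they park the thread until the waker fires. toy_block_on is an illustrative name:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use futures::task::noop_waker;

// Toy executor: polls a future in a loop until it completes.
fn toy_block_on<F: Future + Unpin>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    loop {
        match Pin::new(&mut fut).poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => std::hint::spin_loop(), // a real executor parks here
        }
    }
}

fn main() {
    // Box::pin makes the async block's future Unpin so Pin::new works.
    let result = toy_block_on(Box::pin(async { 1 + 2 }));
    println!("Result: {}", result);
}

/* Output: Result: 3 */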


Example: Using the Tokio Executor with #[tokio::main]

This shows how Tokio’s executor handles task scheduling and polling for you.

async fn say_hello() -> &'static str {
    "Hello from the executor!"
}

#[tokio::main]
async fn main() {
    let message = say_hello().await;
    println!("{}", message);
}

/* Output: Hello from the executor! */

The #[tokio::main] macro sets up a runtime with an executor that handles polling the returned future. You don’t need to write any poll() logic yourself.


Example: Manually Creating a Tokio Runtime and Spawning a Task

This gives you more control over the executor and shows how to create one manually.

use tokio::runtime::Runtime;

async fn compute() -> i32 {
    99
}

fn main() {
    let rt = Runtime::new().unwrap();

    let result = rt.block_on(async {
        let handle = tokio::spawn(async {
            compute().await
        });

        handle.await.unwrap()
    });

    println!("Computed: {}", result);
}

/* Output: Computed: 99 */

You manually create a Tokio runtime using Runtime::new() and run async code with block_on. This is useful in libraries or non-async main functions.


These examples help illustrate what an executor does: it polls futures, schedules tasks, and runs your async functions. While you won’t often interact with executors directly, they’re the engine behind every .await.


Popular Runtimes: tokio vs async-std

Rust’s async model is runtime-agnostic: the language provides the Future trait and async/await syntax, but it’s up to an external runtime to drive futures to completion. The two most widely used runtimes in the Rust ecosystem are:

  • Tokio – fast, feature-rich, highly configurable; widely adopted in production systems.
  • async-std – simpler, standard-library-like API with a focus on ease of use and compatibility.

While both serve the same purpose, they differ in performance, ergonomics, ecosystem integration, and design philosophy. Let’s look at them in action.


Example: A Simple async Function with Tokio

async fn get_message() -> &'static str {
    "Hello from Tokio!"
}

#[tokio::main]
async fn main() {
    let msg = get_message().await;
    println!("{}", msg);
}

/* Output: Hello from Tokio! */

Tokio uses the #[tokio::main] macro to launch its executor. It’s very fast and widely supported, with extra crates for networking, file I/O, timers, and more.


Example: The Same Code Using async-std

async fn get_message() -> &'static str {
    "Hello from async-std!"
}

#[async_std::main]
async fn main() {
    let msg = get_message().await;
    println!("{}", msg);
}

/* Output: Hello from async-std! */

async-std mimics the std library API and also provides a procedural macro to launch the async runtime. It’s known for being beginner-friendly and lightweight.

Note that for this example, in addition to the tokio dependency, we also need to add the async-std dependency to the Cargo.toml file:

[dependencies]
tokio = { version = "1", features = ["full"] }
async-std = { version = "1.12", features = ["attributes"] }

Example: Concurrent Tasks – Tokio vs async-std

Let’s compare how both runtimes handle task spawning.

Tokio:

use tokio::task;

#[tokio::main]
async fn main() {
    let handle = task::spawn(async {
        "Running in a Tokio task"
    });

    let result = handle.await.unwrap();
    println!("{}", result);
}

/* Output: Running in a Tokio task */

async-std:

use async_std::task;

#[async_std::main]
async fn main() {
    let handle = task::spawn(async {
        "Running in an async-std task"
    });

    let result = handle.await;
    println!("{}", result);
}

/* Output: Running in an async-std task */

Summary

Feature                 | Tokio                                | async-std
------------------------|--------------------------------------|-------------------------------
Performance             | High-performance, low-level control  | Good, simpler model
Ecosystem Integration   | Huge ecosystem, widely used          | Smaller but growing
Learning Curve          | Moderate                             | Easier for beginners
Built-in Utilities      | Timers, networking, fs, sync, etc.   | Simpler equivalents available
Community Adoption      | Most popular async runtime           | Second most popular

If you’re building a production system or web server, Tokio is typically the go-to. For smaller tools or for learning purposes, async-std can be a great fit.


Spawning Tasks with Executors

In Rust async runtimes like Tokio and async-std, spawning a task means creating a lightweight unit of execution (a future) that the runtime will schedule and run independently. Unlike threads, spawned tasks are non-blocking, share memory more easily, and are managed by the async executor, not the operating system.

Spawning is useful for running multiple operations concurrently—such as serving multiple requests, listening for events, or handling background processing.

Let’s walk through how to spawn tasks using both Tokio and async-std.


Example: Spawning a Task with Tokio

use tokio::task;

#[tokio::main]
async fn main() {
    let handle = task::spawn(async {
        println!("Running inside a Tokio task");
        42
    });

    let result = handle.await.unwrap();
    println!("Task returned: {}", result);
}

/* Output: 
Running inside a Tokio task
Task returned: 42 
*/

task::spawn creates a future and schedules it for execution. The result is a JoinHandle, which you can .await to get the result.
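
One detail worth noting: unlike a bare future, a spawned task starts executing as soon as it's spawned; awaiting the JoinHandle only retrieves its result. A quick sketch:

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // The task begins running immediately, before we await the handle.
    let handle = tokio::spawn(async {
        println!("Task is already running");
        7
    });

    sleep(Duration::from_millis(100)).await; // give the task time to print
    println!("About to await the handle");
    let value = handle.await.unwrap();
    println!("Got: {}", value);
}

/* Output: 
Task is already running
About to await the handle
Got: 7 
*/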


Example: Spawning Multiple Concurrent Tasks with Tokio

use tokio::task;

#[tokio::main]
async fn main() {
    let mut handles = vec![];

    for i in 1..=3 {
        let handle = task::spawn(async move {
            println!("Task {} is running", i);
            i * 10
        });
        handles.push(handle);
    }

    for handle in handles {
        let result = handle.await.unwrap();
        println!("Task returned: {}", result);
    }
}

/* Output: 
Task 1 is running
Task 2 is running
Task 3 is running
Task returned: 10
Task returned: 20
Task returned: 30 
*/

Each task runs independently, and they’re all scheduled concurrently by the Tokio executor.


Example: Spawning a Task with async-std

use async_std::task;

#[async_std::main]
async fn main() {
    let handle = task::spawn(async {
        println!("Running inside an async-std task");
        123
    });

    let result = handle.await;
    println!("Task returned: {}", result);
}

/* Output: 
Running inside an async-std task
Task returned: 123 
*/

The API is very similar to Tokio’s, making it easy to switch between runtimes if needed.


Spawning tasks is a foundational technique in async programming—allowing your program to handle many operations simultaneously, without blocking threads or duplicating resources. It’s the async alternative to multithreaded execution, but with much lower overhead.


Common Async Patterns and Tools

Once you’re comfortable writing async functions and spawning tasks, you’ll often need to coordinate multiple futures—run some concurrently, wait for all to finish, or race them and react to the first result. Rust provides macros like join! and select! in the tokio and futures crates to support these patterns ergonomically and efficiently.

In this section, we’ll explore the most useful async combinators and patterns you’ll use to build concurrent flows in real-world applications.


Using join! and select! for Concurrency

The join! macro is used when you want to run multiple async operations concurrently and wait for all of them to complete. It’s ideal for parallel tasks that don’t depend on each other.

The select! macro, on the other hand, lets you race multiple async operations—it returns as soon as one of the futures completes, canceling or ignoring the others. This is great for timeout handling or prioritizing responsiveness.

Let’s look at these in action.


Example: Using tokio::join! to Wait for Multiple Futures

use tokio::time::{sleep, Duration};
use tokio::join;

async fn fetch_user() -> &'static str {
    sleep(Duration::from_secs(1)).await;
    "user123"
}

async fn fetch_settings() -> &'static str {
    sleep(Duration::from_secs(2)).await;
    "dark_mode=true"
}

#[tokio::main]
async fn main() {
    let (user, settings) = join!(fetch_user(), fetch_settings());
    println!("User: {}, Settings: {}", user, settings);
}

/* Output: 
User: user123, Settings: dark_mode=true 
*/

Both functions start executing at the same time. join! waits until both complete and returns their results as a tuple.


Example: Using tokio::select! to Wait for the First Completed Future

use tokio::time::{sleep, Duration};
use tokio::select;

async fn slow_task() -> &'static str {
    sleep(Duration::from_secs(3)).await;
    "slow result"
}

async fn fast_task() -> &'static str {
    sleep(Duration::from_secs(1)).await;
    "fast result"
}

#[tokio::main]
async fn main() {
    let result = select! {
        res = slow_task() => res,
        res = fast_task() => res,
    };

    println!("First completed: {}", result);
}

/* Output: First completed: fast result */

select! races both futures and returns the result of the one that finishes first. This is great for responsiveness and fallbacks.


Example: Using futures::join! and futures::select! (Alternative Runtime)

If you’re not using Tokio, the futures crate provides similar functionality, though futures::select! additionally requires its futures to be fused and pinned.

use futures::join;

async fn step_one() -> &'static str {
    "step one"
}

async fn step_two() -> &'static str {
    "step two"
}

#[async_std::main]
async fn main() {
    let (a, b) = join!(step_one(), step_two());
    println!("Results: {}, {}", a, b);
}

/* Output: Results: step one, step two */

This version uses futures::join! and works similarly to tokio::join!, but it’s runtime-agnostic. It’s compatible with async-std or custom executors.


These macros are essential for controlling concurrency in async Rust. Use join! when you need all results, and select! when you want to respond to the first result quickly.


Async Channels and Message Passing

In asynchronous applications, especially those that follow an actor model or need to coordinate between multiple tasks, message passing is often preferred over shared mutable state. Rust provides several ways to accomplish this, and both Tokio and async-std (via async-channel) support efficient, non-blocking channels for sending messages between tasks.

Channels are a safe and scalable way to communicate across tasks without worrying about locks or shared state. In this subsection, we’ll look at creating async channels, sending and receiving messages, and how to use them to structure concurrent async systems.


Example: Using tokio::sync::mpsc for Message Passing

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel(5);

    tokio::spawn(async move {
        tx.send("hello").await.unwrap();
        tx.send("from").await.unwrap();
        tx.send("tokio channel").await.unwrap();
    });

    while let Some(msg) = rx.recv().await {
        println!("Received: {}", msg);
    }
}

/* Output: 
Received: hello
Received: from
Received: tokio channel 
*/

This example shows how to send multiple messages from a spawned task to the main task using a Tokio MPSC (multi-producer, single-consumer) channel.
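
Because the channel is multi-producer, the sender can be cloned and handed to several tasks. A sketch with two producers:

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel(5);
    let tx2 = tx.clone();

    tokio::spawn(async move {
        tx.send("from producer 1").await.unwrap();
    });

    tokio::spawn(async move {
        tx2.send("from producer 2").await.unwrap();
    });

    // recv() returns None once every sender has been dropped.
    while let Some(msg) = rx.recv().await {
        println!("Received: {}", msg);
    }
}

/* Output (order may vary): 
Received: from producer 1
Received: from producer 2 
*/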


Example: One Sender, Many Receivers with tokio::sync::broadcast

use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (tx, _) = broadcast::channel(10);

    for i in 1..=3 {
        let mut rx = tx.subscribe();
        tokio::spawn(async move {
            while let Ok(msg) = rx.recv().await {
                println!("Receiver {} got: {}", i, msg);
            }
        });
    }

    tx.send("Broadcast message 1").unwrap();
    tx.send("Broadcast message 2").unwrap();
}

/* Output: 
Receiver 1 got: Broadcast message 1
Receiver 2 got: Broadcast message 1
Receiver 3 got: Broadcast message 1
Receiver 1 got: Broadcast message 2
Receiver 2 got: Broadcast message 2
Receiver 3 got: Broadcast message 2 
*/

broadcast::channel allows one sender to send messages to many receivers. This is useful in pub/sub or event-driven systems. Note that main may return (shutting down the runtime and its tasks) before the receivers finish printing; in a real program you would keep the receiver handles and await them, or add a short delay before exiting.


Example: Using async-channel with async-std

use async_channel::{bounded, Sender};
use async_std::task;

#[async_std::main]
async fn main() {
    let (sender, receiver) = bounded::<&str>(3);

    let sender_task = task::spawn(send_messages(sender));

    let receiver_task = task::spawn(async move {
        while let Ok(msg) = receiver.recv().await {
            println!("Got: {}", msg);
        }
    });

    sender_task.await;
    receiver_task.await;
}

async fn send_messages(sender: Sender<&str>) {
    sender.send("hi").await.unwrap();
    sender.send("from").await.unwrap();
    sender.send("async-std").await.unwrap();
}

/* Output: 
Got: hi
Got: from
Got: async-std 
*/

async-channel provides runtime-agnostic MPSC channels and works great with async-std or custom runtimes.

Note that for this example we need to add the async-channel crate to the Cargo.toml file:

[dependencies]
async-std = { version = "1.12", features = ["attributes"] }
async-channel = "1.9"

Async channels are essential for task coordination, pipelines, actors, and clean task decoupling. Whether you’re broadcasting events or just passing data between tasks, channels make async communication robust and safe.


Error Handling in Async Code

Error handling in asynchronous Rust follows the same principles as synchronous code: you use Result<T, E> to represent success or failure, and you propagate or handle errors using ?, match, or combinators like map_err. However, there are a few important considerations:

  • Async functions should return Result<T, E> if they may fail.
  • If you .await a future that returns a Result, the error needs to be handled or propagated.
  • You can combine .await and ? for clean, chainable async error handling.

Let’s go through some practical examples of handling errors effectively in async contexts.


Example: Returning Result from an async fn

async fn divide(a: i32, b: i32) -> Result<i32, String> {
    if b == 0 {
        Err("Cannot divide by zero".into())
    } else {
        Ok(a / b)
    }
}

#[tokio::main]
async fn main() {
    match divide(10, 0).await {
        Ok(result) => println!("Result: {}", result),
        Err(e) => eprintln!("Error: {}", e),
    }
}

/* Output: Error: Cannot divide by zero */

Async functions can return Result<T, E> just like regular functions, and .await works seamlessly with match.


Example: Using ? to Propagate Errors in Async Code

async fn read_user() -> Result<&'static str, &'static str> {
    Ok("user123")
}

async fn load_profile(user: &str) -> Result<String, &'static str> {
    if user.is_empty() {
        Err("User not found")
    } else {
        Ok(format!("Profile for {}", user))
    }
}

async fn get_profile() -> Result<String, &'static str> {
    let user = read_user().await?;
    let profile = load_profile(user).await?;
    Ok(profile)
}

#[tokio::main]
async fn main() {
    match get_profile().await {
        Ok(profile) => println!("{}", profile),
        Err(err) => eprintln!("Failed: {}", err),
    }
}

/* Output: Profile for user123 */

The ? operator works with .await, making it easy to chain multiple fallible async calls with clean syntax.


Example: Handling Errors from Spawned Tasks

use tokio::task;

async fn risky_task() -> Result<i32, &'static str> {
    Err("Something went wrong")
}

#[tokio::main]
async fn main() {
    let handle = task::spawn(async {
        risky_task().await
    });

    match handle.await {
        Ok(Ok(val)) => println!("Success: {}", val),
        Ok(Err(e)) => eprintln!("Task error: {}", e),
        Err(e) => eprintln!("Join error: {}", e),
    }
}

/* Output: Task error: Something went wrong */

When using tokio::spawn, awaiting the JoinHandle yields a Result<Result<T, E>, JoinError>. You must handle both layers: the outer Result reports whether the task ran to completion at all (it fails if the task panicked or was cancelled), and the inner one is your task's own Result.


Handling errors in async code follows the same robust patterns as synchronous Rust—just with .await in the mix. The key is to clearly define what your async functions may fail with, and use Result, ?, and proper matching to make your error handling clean and expressive.


Advanced Topics in Async Rust

Once you’re comfortable writing and managing async code, it’s helpful to explore some of the lower-level mechanics that make Rust’s async model both powerful and safe. These advanced topics—like pinning, async traits, cancellation, and lifetimes—provide insight into how Rust maintains strict memory guarantees even in complex async workflows.


Pinning and Why It Matters

Pinning is a key concept in Rust async internals, but you can write async code for quite a while without needing to deal with it directly. So why does it matter?

The Rust compiler transforms async fns into state machines. These state machines store internal values across .await points. If such a future were moved in memory while being polled, it could invalidate references inside it and lead to undefined behavior. To prevent this, Rust requires that these futures be pinned—they must not move once they’ve been polled.

In most cases, pinning is handled automatically by executors like Tokio. But when you implement Future manually or work with advanced async constructs, understanding pinning becomes essential.
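
Here's a sketch of the kind of future that makes pinning necessary: it borrows from its own state across an .await point, so the compiled state machine holds data and a reference into it side by side, and moving it mid-execution would invalidate that reference (borrows_across_await is an illustrative name):

async fn borrows_across_await() {
    let data = vec![1, 2, 3];
    let first = &data[0]; // reference into the future's own state
    tokio::task::yield_now().await; // suspension point: both are saved
    println!("first = {}", first);
}

#[tokio::main]
async fn main() {
    // The executor pins this future for us before polling it.
    borrows_across_await().await;
}

/* Output: first = 1 */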


Example: Re-Pinning an Unpin Future Between Polls

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use futures::task::noop_waker;

struct MyFuture {
    polled: bool,
}

impl Future for MyFuture {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polled {
            Poll::Ready("Already polled")
        } else {
            self.polled = true;
            Poll::Pending
        }
    }
}

fn main() {
    let mut fut = MyFuture { polled: false };

    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Pin each poll individually
    let _ = Pin::new(&mut fut).poll(&mut cx);
    let _ = Pin::new(&mut fut).poll(&mut cx);
}

/* Output: 
(nothing visible, but poll() is called twice internally) 
*/

Because MyFuture has no self-references, it is Unpin, so creating a fresh Pin for each poll is safe. Futures generated from async fns that borrow across .await points are !Unpin and must stay pinned in one place once polled.

Example: Why Boxed Futures Are Pinned Automatically

When you use Box::pin, the boxed data is heap-allocated and won’t move, satisfying the pinning requirement.

use std::pin::Pin;
use std::future::Future;

async fn greet() {
    println!("Hello, pinned future!");
}

fn main() {
    let fut = greet(); // impl Future
    let _pinned: Pin<Box<dyn Future<Output = ()>>> = Box::pin(fut);
}

/* Output: (none – no executor in this example) */

Box::pin is the safe way to pin a future. This is often necessary when working with trait objects (dyn Future) or passing futures across threads.


Example: Pinning in Custom Future Combinators

If you’re building your own combinator, you may need to use Pin in your implementation:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use futures::task::noop_waker;

struct DelayOnce {
    polled: bool,
}

impl Future for DelayOnce {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polled {
            Poll::Ready("Ready after delay")
        } else {
            self.polled = true;
            Poll::Pending
        }
    }
}

fn main() {
    let mut delay = DelayOnce { polled: false };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Pin inline to avoid moving
    println!("First poll: {:?}", Pin::new(&mut delay).poll(&mut cx));
    println!("Second poll: {:?}", Pin::new(&mut delay).poll(&mut cx));
}

/* Output: 
First poll: Pending
Second poll: Ready("Ready after delay") 
*/

The Pin type guarantees the memory location is stable. Without pinning, this future could be moved mid-execution, violating Rust’s safety guarantees.


Pinning prevents future state machines from being moved in memory during execution, preserving internal reference safety. Most of the time, async runtimes handle this for you—but when you implement custom futures or work with dyn Future, pinning becomes your responsibility.


Async Traits and the Current Limitations

Traits are a cornerstone of Rust’s abstraction capabilities. Naturally, you may want to define async functions in traits, especially when building trait-based services or plugins. For a long time this wasn’t possible on stable Rust; Rust 1.75 stabilized async fn in traits for static dispatch, but trait objects (dyn Trait) still require workarounds.

The core issue is that an async fn in a trait returns an opaque type (impl Future), while a trait object needs a single, concrete return type that can sit behind a vtable. This leads to one of the most well-known workarounds in async Rust: using boxed futures, or using the async-trait crate, which hides the complexity.

Let’s explore the problem and the available solutions.


Example: The Problem – async fn Directly in a Trait (Before Rust 1.75)

trait Fetcher {
    async fn fetch(&self) -> String; // ❌ Fails before Rust 1.75
}

On older compilers, you’ll get an error like:

error[E0706]: functions in traits cannot be declared `async`

Before Rust 1.75, trait methods couldn’t have opaque return types like impl Future at all. Even on newer compilers, a trait written this way can’t be used as a trait object (dyn Fetcher), which is what the workarounds below address.
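
If you’re on Rust 1.75 or newer and only need static dispatch, the direct form now compiles (a sketch; the trait still can’t be used as a dyn object this way):

trait Fetcher {
    async fn fetch(&self) -> String; // OK on Rust 1.75+
}

struct MyFetcher;

impl Fetcher for MyFetcher {
    async fn fetch(&self) -> String {
        "Native async fn in trait".to_string()
    }
}

#[tokio::main]
async fn main() {
    let f = MyFetcher;
    println!("Fetched: {}", f.fetch().await);
}

/* Output: Fetched: Native async fn in trait */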


Example: Using Box<dyn Future> for Manual Trait Workaround

You can work around this by boxing the returned future manually:

use std::future::Future;
use std::pin::Pin;

trait Fetcher {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + Send>>;
}

struct MyFetcher;

impl Fetcher for MyFetcher {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + Send>> {
        Box::pin(async {
            "Manual boxed future".to_string()
        })
    }
}

#[tokio::main]
async fn main() {
    let f = MyFetcher;
    let result = f.fetch().await;
    println!("Fetched: {}", result);
}

/* Output: Fetched: Manual boxed future */

This is verbose but works on stable Rust. You’re returning a heap-allocated Future trait object (dyn Future) wrapped in a Pin<Box<...>>.


Example: Using the async-trait Crate for Ergonomic Syntax

The async-trait crate abstracts the boxing for you, letting you use async fn in traits as if it were natively supported.

use async_trait::async_trait;

#[async_trait]
trait Fetcher {
    async fn fetch(&self) -> String;
}

struct MyFetcher;

#[async_trait]
impl Fetcher for MyFetcher {
    async fn fetch(&self) -> String {
        "Using async-trait".to_string()
    }
}

#[tokio::main]
async fn main() {
    let f = MyFetcher;
    let result = f.fetch().await;
    println!("Fetched: {}", result);
}

/* Output: Fetched: Using async-trait */

This is the most ergonomic approach and is widely used in practice. Internally, it uses boxing under the hood.

Note that we need to add the async-trait crate dependency to the Cargo.toml file:

[dependencies]
tokio = { version = "1", features = ["full"] }
async-trait = "0.1"

Summary

  • Native async fn in traits landed in Rust 1.75 for static dispatch, but still doesn’t work with trait objects (dyn Trait).
  • You can manually box futures and return Pin<Box<dyn Future>>, but it’s verbose.
  • The async-trait crate provides a clean and production-ready workaround, widely used in the ecosystem.

Cancellation and Timeouts

In asynchronous applications, tasks may sometimes need to be cancelled—for example, when a client disconnects, a timeout occurs, or a higher-priority task preempts another. Rust’s async model handles cancellation in a cooperative and safe way: when a future is dropped, it simply stops at its most recent suspension point. Its local values are dropped normally (Drop impls still run), but none of its remaining code executes.

This makes cancellation inexpensive and predictable. For timeouts, libraries like Tokio provide tools such as tokio::time::timeout() to enforce time limits on async operations.

Let’s walk through how cancellation and timeouts work, and how to use them to write robust async code.


Example: Cancelling a Spawned Task with abort()

use tokio::time::{sleep, Duration};

async fn long_task() {
    println!("Starting long task...");
    sleep(Duration::from_secs(5)).await;
    println!("This will not be printed if cancelled");
}

#[tokio::main]
async fn main() {
    let handle = tokio::spawn(long_task());

    sleep(Duration::from_secs(1)).await;
    println!("Cancelling the task...");
    handle.abort(); // Cancel the task
}

/* Output: 
Starting long task...
Cancelling the task... 
*/

Calling abort() cancels the spawned task at its next suspension point. Note that merely dropping a Tokio JoinHandle does not cancel the task; it only detaches it. Dropping a future you own directly (one you haven't spawned) does cancel it, which is exactly what timeout() and select! rely on below.


Example: Using tokio::time::timeout() to Limit Task Duration

use tokio::time::{sleep, Duration, timeout};

async fn slow_operation() -> &'static str {
    sleep(Duration::from_secs(3)).await;
    "Completed"
}

#[tokio::main]
async fn main() {
    let result = timeout(Duration::from_secs(1), slow_operation()).await;

    match result {
        Ok(msg) => println!("Success: {}", msg),
        Err(_) => eprintln!("Timed out!"),
    }
}

/* Output: Timed out! */

If the future doesn’t complete within the timeout duration, timeout() returns an Err(tokio::time::error::Elapsed). The future is dropped, canceling its execution.


Example: Graceful Cancellation with a Select Pattern

use tokio::time::{sleep, Duration};
use tokio::select;

async fn cancellable_task() {
    for i in 1..=5 {
        println!("Working... step {}", i);
        sleep(Duration::from_secs(1)).await;
    }
    println!("Task completed");
}

#[tokio::main]
async fn main() {
    let cancel_delay = sleep(Duration::from_secs(3));
    tokio::pin!(cancel_delay); // Pin the timeout

    select! {
        _ = cancellable_task() => println!("Finished normally"),
        _ = &mut cancel_delay => println!("Cancelled by timeout"),
    }
}

/* 
Output: Working... step 1
Working... step 2
Working... step 3
Cancelled by timeout 
*/

select! lets you race two async operations: here, the task either finishes or gets cancelled after 3 seconds. This pattern is extremely useful for timeouts, fallbacks, and graceful exits.


Summary

  • Dropping a future you own cancels it; spawned Tokio tasks are cancelled explicitly with JoinHandle::abort().
  • Use tokio::time::timeout() to enforce time constraints on async operations.
  • select! enables racing and graceful cancellation with fine-grained control.

Using Async in Structs and with Lifetimes

When building more complex async applications, you’ll often find yourself needing to store futures in structs, call async functions that borrow data, or return futures that depend on lifetimes. These patterns bring async and Rust’s strict ownership system together—and can trip up even experienced Rustaceans.

There are a few common scenarios to master:

  • Returning futures from struct methods
  • Calling async functions that borrow from self
  • Returning impl Future with lifetimes
  • Working with references inside async blocks

Let’s walk through some real-world examples that highlight best practices and common pitfalls.


Example: Async Method That Returns a Future

struct User {
    name: String,
}

impl User {
    async fn greet(&self) -> String {
        format!("Hello, {}!", self.name)
    }
}

#[tokio::main]
async fn main() {
    let user = User { name: "Alice".into() };
    let msg = user.greet().await;
    println!("{}", msg);
}

/* Output: Hello, Alice! */

You can define async fn directly in impl blocks. If the method borrows self, lifetimes are managed automatically by Rust’s async transformation.


Example: Async Method with Borrowed Arguments and Explicit Lifetimes

struct Logger;

impl Logger {
    async fn log<'a>(&self, msg: &'a str) -> &'a str {
        println!("LOG: {}", msg);
        msg
    }
}

#[tokio::main]
async fn main() {
    let logger = Logger;
    let message = "System initialized.";
    let returned = logger.log(message).await;
    println!("Returned message: {}", returned);
}

/* Output: 
LOG: System initialized.
Returned message: System initialized. 
*/

The method borrows a string slice with an explicit lifetime 'a, and the returned future preserves that lifetime. Rust will ensure this is safe even across .await points.


Example: Returning impl Future from a Struct Method

If you want to return a future without making the whole method async, you can do it using impl Future:

use std::future::Future;

struct TaskScheduler;

impl TaskScheduler {
    fn schedule(&self, task: &'static str) -> impl Future<Output = ()> {
        async move {
            println!("Executing task: {}", task);
        }
    }
}

#[tokio::main]
async fn main() {
    let scheduler = TaskScheduler;
    scheduler.schedule("Backup").await;
}

/* Output: Executing task: Backup */

This pattern is useful when your method needs to be callable in both sync and async contexts, or when you want more control over how the future is used.


Summary

  • You can freely use async fn in impl blocks, and borrow self safely.
  • Lifetimes in async methods must be managed carefully, especially when borrowing external data.
  • Returning impl Future gives flexibility, but boxed futures may be required when working across traits or dynamic dispatch.


Real-World Example: Building an Async Application

Now that we’ve explored Rust’s async foundations and advanced features, it’s time to put everything together in a practical example. Real-world async applications often involve handling multiple tasks concurrently, such as serving client requests, performing background I/O, or communicating between components.

In this section, we’ll build a simple asynchronous web server using the tokio and hyper crates. We’ll walk through setting up an async HTTP server, handling requests, and sending responses—illustrating how async Rust shines in highly concurrent environments.


Creating a Simple Async Web Server

We’ll build a minimal async HTTP server using hyper, a fast and correct HTTP implementation that runs on top of tokio. It allows you to handle requests concurrently and write fully non-blocking networking code.


Example: Minimal Hyper Server with a Static Response

use hyper::{Body, Request, Response, Server};
use hyper::service::{make_service_fn, service_fn};

async fn handle(_req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    Ok(Response::new(Body::from("Hello from async Rust!")))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = ([127, 0, 0, 1], 3000).into();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, hyper::Error>(service_fn(handle))
    });

    let server = Server::bind(&addr).serve(make_svc);

    println!("Listening on http://{}", addr);
    server.await?;
    Ok(())
}

/* Output: 
(Listening on http://127.0.0.1:3000, and if you curl or open it in a browser:) 
Hello from async Rust! 
*/

This sets up a simple HTTP server that listens on localhost:3000 and returns a static response. Each request is handled concurrently and asynchronously.

You can enter this URL in your browser and press Enter:

http://127.0.0.1:3000

Note that we need to add the hyper crate dependency to the Cargo.toml file:

[dependencies]
tokio = { version = "1", features = ["full"] }
hyper = { version = "0.14", features = ["full"] }

Example: Adding Route Matching (Manual Approach)

use hyper::{Body, Request, Response, Server, Method, StatusCode};
use hyper::service::{make_service_fn, service_fn};

async fn router(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    match (req.method(), req.uri().path()) {
        (&Method::GET, "/") => Ok(Response::new(Body::from("Home Page"))),
        (&Method::GET, "/about") => Ok(Response::new(Body::from("About Us"))),
        _ => {
            let mut not_found = Response::new(Body::from("Not Found"));
            *not_found.status_mut() = StatusCode::NOT_FOUND;
            Ok(not_found)
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = ([127, 0, 0, 1], 8080).into();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, hyper::Error>(service_fn(router))
    });

    println!("Listening on http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output:
curl http://localhost:8080/ → Home Page
curl http://localhost:8080/about → About Us
curl http://localhost:8080/xyz → Not Found 
*/

This example adds basic routing by inspecting the method and URI. For more complex routing, you could use a framework like warp or axum.

Stop the previous example with Ctrl+C, start this one, and then use the curl commands above or the following URLs in the browser to see the results:

http://localhost:8080 (Home Page)

http://localhost:8080/about (About Us)

http://localhost:8080/xyz (Not Found)


Example: Simulating Asynchronous I/O with tokio::time::sleep

use hyper::{Body, Request, Response, Server};
use hyper::service::{make_service_fn, service_fn};
use tokio::time::{sleep, Duration};

async fn handle(_req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    sleep(Duration::from_secs(2)).await;
    Ok(Response::new(Body::from("Delayed Response")))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = ([127, 0, 0, 1], 4000).into();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, hyper::Error>(service_fn(handle))
    });

    println!("Listening on http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output: 
curl http://localhost:4000
response arrives after ~2 seconds 
Delayed Response 
*/

You can simulate real I/O delays using tokio::time::sleep—which doesn’t block threads and allows other requests to be served during the delay.

Stop the previous example with Ctrl+C, run this one, and open the following URL in the browser:

http://127.0.0.1:4000


This real-world example shows how to build a fast and efficient async web server using Tokio and Hyper—illustrating concurrency, request routing, and non-blocking I/O.


Handling Concurrent Requests

One of the biggest strengths of Rust’s async ecosystem is the ability to handle many client requests concurrently, without spawning a new thread for each one. Thanks to the underlying async runtime and non-blocking I/O, servers built with tokio and hyper can efficiently handle thousands of simultaneous connections using just a handful of threads.

In this subsection, we’ll show how your async server automatically supports concurrent request handling, demonstrate slow and fast endpoints running side by side, and optionally limit concurrency using semaphores or task tracking.


Example: Handling Slow and Fast Requests Concurrently

use hyper::{Body, Request, Response, Server, Method, StatusCode};
use hyper::service::{make_service_fn, service_fn};
use tokio::time::{sleep, Duration};

async fn router(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    match (req.method(), req.uri().path()) {
        (&Method::GET, "/fast") => {
            Ok(Response::new(Body::from("Fast response")))
        },
        (&Method::GET, "/slow") => {
            sleep(Duration::from_secs(5)).await;
            Ok(Response::new(Body::from("Slow response")))
        },
        _ => {
            let mut not_found = Response::new(Body::from("Not Found"));
            *not_found.status_mut() = StatusCode::NOT_FOUND;
            Ok(not_found)
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = ([127, 0, 0, 1], 7070).into();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, hyper::Error>(service_fn(router))
    });

    println!("Server running at http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output:
curl http://localhost:7070/fast → responds immediately
curl http://localhost:7070/slow → responds after 5 seconds
Both can be requested at the same time from separate terminals! 
*/

Even though /slow sleeps for 5 seconds, /fast can respond instantly if requested concurrently. This is true concurrency managed by the async runtime.

Stop the previous example with Ctrl + C, run this one, and then use the curl commands or visit in the browser:

http://127.0.0.1:7070/fast

http://127.0.0.1:7070/slow
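
To observe the overlap from code rather than two terminals, here's a sketch using hyper's client side (available if your hyper dependency enables the full feature set). With the server above running, run this from a second package; each spawned task prints as soon as its own response arrives:

use hyper::Client;
use tokio::time::Instant;

#[tokio::main]
async fn main() {
    let client = Client::new();
    let start = Instant::now();

    // Kick off both requests at once as separate tasks.
    let slow = {
        let client = client.clone();
        tokio::spawn(async move {
            let res = client.get("http://127.0.0.1:7070/slow".parse().unwrap()).await;
            println!("/slow done after {:?}: {:?}", start.elapsed(), res.map(|r| r.status()));
        })
    };
    let fast = tokio::spawn(async move {
        let res = client.get("http://127.0.0.1:7070/fast".parse().unwrap()).await;
        println!("/fast done after {:?}: {:?}", start.elapsed(), res.map(|r| r.status()));
    });

    let _ = tokio::join!(fast, slow);
}

/* Output (times approximate):
/fast done after a few ms: Ok(200 OK)
/slow done after ~5s: Ok(200 OK)
*/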


Example: Logging Request Concurrency with a Counter

use std::sync::{Arc, atomic::{AtomicUsize, Ordering}};
use hyper::{Body, Request, Response, Server};
use hyper::service::{make_service_fn, service_fn};
use tokio::time::{sleep, Duration};

async fn handle(_req: Request<Body>, counter: Arc<AtomicUsize>) -> Result<Response<Body>, hyper::Error> {
    let active = counter.fetch_add(1, Ordering::SeqCst) + 1;
    println!("Request started: {} active", active);

    sleep(Duration::from_secs(2)).await;

    let remaining = counter.fetch_sub(1, Ordering::SeqCst) - 1;
    println!("Request finished: {} active", remaining);

    Ok(Response::new(Body::from("Done")))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let counter = Arc::new(AtomicUsize::new(0));

    let make_svc = {
        let counter = counter.clone();
        make_service_fn(move |_conn| {
            let counter = counter.clone();
            async move {
                Ok::<_, hyper::Error>(service_fn(move |req| handle(req, counter.clone())))
            }
        })
    };

    let addr = ([127, 0, 0, 1], 9090).into();
    println!("Listening on http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output: 
Request started: 1 active
Request started: 2 active
Request finished: 1 active
Request finished: 0 active 
*/

This logs how many concurrent requests are active at a time. You’ll see overlap if you hit the server quickly from multiple terminals or with a tool like ab or wrk.

Stop the previous example with Ctrl + C, run this one, and then visit in the browser:

http://127.0.0.1:9090

In the browser you'll just see the message "Done", but the shell window will show output like the comment above.


Example: Limiting Concurrency Using a Semaphore

use std::sync::Arc;
use tokio::sync::Semaphore;
use hyper::{Body, Request, Response, Server};
use hyper::service::{make_service_fn, service_fn};
use tokio::time::{sleep, Duration};

async fn handle_with_limit(req: Request<Body>, semaphore: Arc<Semaphore>) -> Result<Response<Body>, hyper::Error> {
    let _permit = semaphore.acquire().await.unwrap();
    println!("Handling request: {}", req.uri().path());

    sleep(Duration::from_secs(3)).await;

    Ok(Response::new(Body::from("Response from limited handler")))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let semaphore = Arc::new(Semaphore::new(2)); // Max 2 concurrent requests

    let make_svc = {
        let semaphore = semaphore.clone();
        make_service_fn(move |_conn| {
            let semaphore = semaphore.clone();
            async move {
                Ok::<_, hyper::Error>(service_fn(move |req| handle_with_limit(req, semaphore.clone())))
            }
        })
    };

    let addr = ([127, 0, 0, 1], 6060).into();
    println!("Server with limit on http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output: 
Only two requests handled at a time; others wait until a slot is free 
*/

Stop the previous example with Ctrl + C, run this one, and then visit in the browser:

http://127.0.0.1:6060

Using tokio::sync::Semaphore, you can enforce concurrency limits across requests—handy for rate-limiting, resource caps, or graceful degradation.
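
As one sketch of the rate-limiting idea: swapping handle_with_limit for the handler below (and adding StatusCode to the hyper imports) makes the server reject excess requests immediately with 503 instead of queueing them, using try_acquire:

use hyper::StatusCode;

async fn handle_with_limit(_req: Request<Body>, semaphore: Arc<Semaphore>) -> Result<Response<Body>, hyper::Error> {
    match semaphore.try_acquire() {
        // A permit is available: hold it for the duration of the request.
        Ok(_permit) => {
            sleep(Duration::from_secs(3)).await;
            Ok(Response::new(Body::from("Response from limited handler")))
        }
        // All permits are taken: fail fast instead of waiting.
        Err(_) => {
            let mut busy = Response::new(Body::from("Server busy, try again later"));
            *busy.status_mut() = StatusCode::SERVICE_UNAVAILABLE;
            Ok(busy)
        }
    }
}

Whether you queue (acquire) or shed load (try_acquire) depends on whether clients would rather wait or retry.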


With async runtimes like Tokio, concurrent request handling is efficient, scalable, and straightforward. You can monitor, throttle, and structure request flow with very fine control using simple async constructs.


Integrating Async File I/O

In many real-world async applications, such as web servers, logging services, and data processors, you need to perform file I/O—whether reading from or writing to disk. While Rust’s standard library provides synchronous file APIs, Tokio extends this with a suite of fully asynchronous file utilities in tokio::fs.

These functions are non-blocking and allow file operations to run without halting the runtime, making them ideal for high-concurrency async applications.

Let’s walk through how to perform async reads, writes, and combine file I/O with request handling.


Example: Async File Read with tokio::fs::read_to_string

use tokio::fs;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let content = fs::read_to_string("example.txt").await?;
    println!("File contents:\n{}", content);
    Ok(())
}

/* Output: 
File contents: (This will show the contents of example.txt if it exists.) 
*/

This asynchronously reads a whole file as a String. It doesn’t block other tasks while waiting on disk I/O.
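
For large files you may not want to pull everything into one String. Here's a minimal sketch using tokio's buffered reader to stream the file line by line:

use tokio::fs::File;
use tokio::io::{AsyncBufReadExt, BufReader};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("example.txt").await?;
    let mut lines = BufReader::new(file).lines();

    // Each next_line() await yields to other tasks while waiting on disk.
    while let Some(line) = lines.next_line().await? {
        println!("line: {}", line);
    }
    Ok(())
}

/* Output: each line of example.txt, printed one at a time */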


Example: Async File Write with tokio::fs::write

use tokio::fs;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let data = "Log entry: async write successful!\n";
    fs::write("log.txt", data).await?;
    println!("Data written to log.txt");
    Ok(())
}

/* Output: Data written to log.txt */

This writes data asynchronously to a file. It creates the file if it doesn't exist and overwrites it if it does.
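
If appending is what you want (for a log file, say) rather than overwriting, here's a sketch using tokio::fs::OpenOptions:

use tokio::fs::OpenOptions;
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open log.txt for appending, creating it first if necessary.
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("log.txt")
        .await?;

    file.write_all(b"Log entry: appended asynchronously\n").await?;
    println!("Entry appended to log.txt");
    Ok(())
}

/* Output: Entry appended to log.txt */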


Example: Read File Content in a Web Request Handler

use hyper::{Body, Request, Response, Server, StatusCode};
use hyper::service::{make_service_fn, service_fn};
use tokio::fs;

async fn handle(_req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    match fs::read_to_string("data.txt").await {
        Ok(contents) => Ok(Response::new(Body::from(contents))),
        Err(_) => {
            let mut not_found = Response::new(Body::from("File not found"));
            *not_found.status_mut() = StatusCode::NOT_FOUND;
            Ok(not_found)
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = ([127, 0, 0, 1], 5050).into();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, hyper::Error>(service_fn(handle))
    });

    println!("Serving file at http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output:
Visit http://localhost:5050 → shows content of data.txt
If file doesn’t exist → returns “File not found” 
*/

This combines tokio::fs::read_to_string with hyper to serve file content in HTTP responses. It’s a great foundation for file-based APIs or templated responses.


Summary

  • Use tokio::fs for fully async file operations in concurrent systems.
  • These APIs mirror their sync counterparts but are non-blocking and work with .await.
  • Combine them seamlessly in request handlers, background jobs, or logging pipelines.


When to Choose Async vs Threads

Rust provides two powerful models for concurrent programming: asynchronous tasks and OS threads. Both have their strengths, but choosing the right one for your application depends on your performance goals, resource constraints, and workload type.

This section explores when to choose async over threads (and vice versa), with a focus on real-world use cases, performance trade-offs, and system behavior under load.


Performance Considerations

At a high level:

  • Async excels in handling many lightweight, I/O-bound tasks efficiently using a small number of threads.
  • Threads excel in CPU-bound scenarios where tasks require sustained, parallel computation.

Let’s break it down with examples and performance-focused insights.


Example: Spawning Many Threads – Memory and Context Switching Overhead

use std::thread;
use std::time::Duration;

fn main() {
    let mut handles = vec![];

    // One OS thread per task: each gets its own stack and must be
    // scheduled by the kernel. Builder::spawn returns a Result, so we
    // can stop gracefully if the OS refuses to create more threads.
    for i in 0..10_000 {
        match thread::Builder::new().spawn(|| {
            thread::sleep(Duration::from_millis(100));
        }) {
            Ok(handle) => handles.push(handle),
            Err(e) => {
                eprintln!("Failed to spawn thread {}: {}", i, e);
                break;
            }
        }
    }

    println!("Spawned {} threads.", handles.len());

    for handle in handles {
        let _ = handle.join();
    }
}

/* Output (system-dependent):
Spawned 10000 threads.
or, if the OS thread limit is lower, an earlier
Failed to spawn thread N: ...
*/

Threads are powerful but heavy: each thread has its own stack (typically 2MB). Spawning thousands of threads can exhaust system resources and increase context switch overhead.


Example: Spawning Thousands of Async Tasks (Efficient)

use tokio::task;

#[tokio::main]
async fn main() {
    for i in 0..10_000 {
        task::spawn(async move {
            println!("Task {} running", i);
        });
    }

    // Allow spawned tasks to complete
    tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
}

/* Output (order may vary):
Task 0 running
Task 1 running
...
Task 9999 running
*/

Async tasks are extremely lightweight. You can spawn tens of thousands with negligible memory usage, making them ideal for I/O-heavy workloads like web servers or message processors.
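
As a sketch of a more deterministic variant, you can collect the JoinHandles and await each one instead of sleeping for an arbitrary second:

use tokio::task;

#[tokio::main]
async fn main() {
    // Spawn 10,000 tasks and keep their JoinHandles.
    let handles: Vec<_> = (0..10_000u64)
        .map(|i| task::spawn(async move { i * 2 }))
        .collect();

    // Awaiting every handle guarantees all tasks have finished.
    let mut sum = 0u64;
    for handle in handles {
        sum += handle.await.unwrap();
    }
    println!("Sum of task results: {}", sum);
}

/* Output: Sum of task results: 99990000 */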


Example: CPU-Bound Work – Threads Are Often Better

use std::thread;

fn expensive_work(n: u32) -> u64 {
    (0..n as u64).map(|x| x * x).sum()
}

fn main() {
    let handles: Vec<_> = (0..4).map(|i| {
        thread::spawn(move || {
            let result = expensive_work(1_000_000);
            println!("Thread {} result: {}", i, result);
        })
    }).collect();

    for handle in handles {
        handle.join().unwrap();
    }
}

/* Output: 
Thread 0 result: 333332833333500000
Thread 1 result: 333332833333500000
... 
*/

For pure number crunching and CPU-intensive workloads, OS threads scale better across CPU cores. Async won’t help much unless the I/O bottlenecks dominate.
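
When CPU-heavy work does show up inside an async application, a common pattern (sketched here) is tokio::task::spawn_blocking, which runs the computation on Tokio's dedicated blocking thread pool so the async worker threads stay free:

use tokio::task;

fn expensive_work(n: u64) -> u64 {
    (0..n).map(|x| x * x).sum()
}

#[tokio::main]
async fn main() {
    // spawn_blocking moves the closure to a thread pool meant for
    // blocking/CPU-bound work; awaiting it doesn't stall the executor.
    let result = task::spawn_blocking(|| expensive_work(1_000_000))
        .await
        .unwrap();

    println!("Result: {}", result);
}

/* Output: Result: 333332833333500000 */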


Summary

Workload Type               Prefer     Why
I/O-bound (network, file)   Async      Lightweight, scales to thousands of tasks
CPU-bound (computation)     Threads    Uses all cores, no polling overhead
Mixed workloads             Hybrid     Threads for CPU tasks, async for I/O

Async is not always faster—it’s about the right tool for the right task. For high-throughput I/O-bound systems, async wins. For parallel computation, threads rule.


Memory and Resource Efficiency

One of the most compelling reasons to choose async Rust over threads is its efficient use of system resources, especially memory and CPU scheduling overhead. While threads give you preemptive multitasking and native parallelism, they also come with significant memory costs (stack size) and OS-level context switching.

Async tasks, in contrast, are cooperatively scheduled within a single thread or a small thread pool. They don’t require their own stacks, and they yield control voluntarily using .await, making them extremely lightweight—ideal for applications with thousands of concurrent tasks.

Let’s compare how async tasks and threads behave in terms of memory and scheduling efficiency.


Example: Thread Stack Usage Adds Up Quickly

use std::thread;

fn main() {
    let mut handles = vec![];

    for _ in 0..1000 {
        let handle = thread::spawn(|| {
            std::thread::sleep(std::time::Duration::from_millis(10));
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Spawned 1000 threads.");
}

/* Output: Spawned 1000 threads. */

Spawning 1000 threads each using a 2MB default stack can consume ~2GB of virtual memory. While modern OSes use lazy allocation, this still pressures system limits and increases context switch overhead.
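
If you truly need many threads, std::thread::Builder lets you shrink the per-thread stack. Here's a sketch (the 64 KB figure is an arbitrary illustration; size the stack to your actual workload or risk a stack overflow):

use std::thread;

fn main() {
    // Request a 64 KB stack instead of the 2 MB default.
    let handle = thread::Builder::new()
        .stack_size(64 * 1024)
        .spawn(|| {
            println!("Running on a small-stack thread");
        })
        .unwrap();

    handle.join().unwrap();
}

/* Output: Running on a small-stack thread */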


Example: 10,000 Async Tasks on a Single Thread (Efficient!)

use tokio::task;
use tokio::time::{sleep, Duration};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    for _ in 0..10_000 {
        task::spawn(async {
            sleep(Duration::from_millis(1)).await;
        });
    }

    println!("Spawned 10,000 tasks!");
    sleep(Duration::from_secs(1)).await;
}

/* Output: Spawned 10,000 tasks! */

Async tasks are stackless state machines, often requiring just a few hundred bytes each. Even with 10,000 tasks, memory usage stays low, and no extra OS threads are created beyond the main thread.


Example: Compare Task Overhead Using System Monitor

Try launching each of the above programs and monitor memory usage using:

  • htop or top (Linux/macOS)
  • Task Manager (Windows)

You’ll observe:

  • Threaded program: high memory usage, many entries in the process/thread list.
  • Async program: minimal memory footprint, single-threaded (if using current_thread flavor).

This highlights async Rust’s suitability for resource-constrained environments like embedded systems, serverless functions, and high-density I/O services.


Summary

Feature                 Threads                          Async Tasks
Stack size              ~2MB per thread (by default)     Minimal (state machine only)
Scheduling              OS-level, preemptive             Runtime-level, cooperative
Max concurrent units    ~Thousands (realistically)       10,000+ easily
Ideal for               CPU-bound tasks                  I/O-bound, high-concurrency tasks

Async Rust enables you to scale efficiently without blowing up memory usage—a critical factor for high-throughput servers and low-latency systems.


Use Cases Best Suited for Async

Async Rust shines in applications that require handling a large number of concurrent, I/O-bound tasks. Because async tasks are lightweight and non-blocking, they allow you to maximize throughput without exhausting system resources like threads or memory.

You should reach for async when your application involves:

  • High concurrency with network I/O
  • Real-time messaging or event streaming
  • Asynchronous file or database access
  • Long-lived background tasks (sockets, polling)
  • Scalable microservices or web APIs

Below are some classic use cases where async Rust is a perfect fit.


Example: Building a High-Concurrency Web Server

use hyper::{Body, Request, Response, Server};
use hyper::service::{make_service_fn, service_fn};

async fn handle(_req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    Ok(Response::new(Body::from("Hello, async web!")))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = ([127, 0, 0, 1], 3001).into();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, hyper::Error>(service_fn(handle))
    });

    println!("Running async web server on http://{}", addr);
    Server::bind(&addr).serve(make_svc).await?;
    Ok(())
}

/* Output: 
Running async web server on http://127.0.0.1:3001
(Response: "Hello, async web!") 
*/

Great for microservices, APIs, or any service handling many simultaneous clients efficiently.


Example: Real-Time Messaging or Notifications

use tokio::sync::broadcast;
use tokio::task;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let (tx, _) = broadcast::channel::<&str>(10);

    for i in 1..=3 {
        let mut rx = tx.subscribe();
        task::spawn(async move {
            while let Ok(msg) = rx.recv().await {
                println!("Listener {} got: {}", i, msg);
            }
        });
    }

    tx.send("New message!").unwrap();
    tx.send("Another event!").unwrap();

    // Give the listeners time to receive messages
    sleep(Duration::from_millis(100)).await;
}


/* Output: Listener 1 got: New message!
Listener 2 got: New message!
Listener 3 got: New message!
Listener 1 got: Another event!
... 
*/

Perfect for pub-sub systems, chat apps, or internal event buses—all with minimal CPU and memory overhead.


Example: Polling an External API Without Blocking

use tokio::time::{sleep, Duration};

async fn poll_api() {
    loop {
        println!("Polling external API...");
        sleep(Duration::from_secs(5)).await;
    }
}

#[tokio::main]
async fn main() {
    tokio::spawn(poll_api());

    println!("App is doing other work...");
    sleep(Duration::from_secs(12)).await;
    println!("Main task done.");
}

/* Output: App is doing other work...
Polling external API...
Polling external API...
Polling external API...
Main task done. */

Use this pattern for background jobs, health checks, schedulers, or watchers—all without blocking your main logic.
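
A common refinement of this loop is tokio::time::interval, which ticks on a fixed schedule instead of drifting by however long each iteration's work takes. A minimal sketch:

use tokio::time::{interval, Duration};

#[tokio::main]
async fn main() {
    // The first tick completes immediately; later ticks fire every 5s
    // regardless of how long the loop body took.
    let mut ticker = interval(Duration::from_secs(5));

    for _ in 0..3 {
        ticker.tick().await;
        println!("Polling external API...");
    }
}

/* Output (one line immediately, then every 5 seconds):
Polling external API...
Polling external API...
Polling external API...
*/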


Async is ideal for:

✅ High-throughput servers and APIs
✅ Real-time message passing
✅ Concurrent background processing
✅ Polling, scheduling, and reactive systems
✅ Any I/O-heavy system where responsiveness matters

When your app needs to scale connections, not cores, async is your best friend.


Rust offers both async tasks and OS threads for concurrency, and each excels in different scenarios:

  • Use async when handling thousands of lightweight, I/O-bound tasks like serving HTTP requests, managing sockets, or polling resources. Async tasks are memory-efficient, cooperatively scheduled, and scale well even on a single thread.
  • Use threads for CPU-bound or compute-heavy workloads that benefit from true parallelism across cores. Threads are more appropriate for heavy calculations, compression, rendering, or simulations.
  • For mixed workloads, a hybrid model often works best: use threads for parallel computation and async for orchestrating I/O and communication.

Async Rust shines when system throughput, responsiveness, and resource efficiency are key. Threads remain valuable for their simplicity and raw performance in CPU-intensive use cases.

Choose based on the workload, not the hype. Async is powerful—but threads still have their place.


Thank you for visiting and for allowing ByteMagma to be a part of your Rust mastery journey!
