
Testing is a first-class citizen of the Rust language.
Rust’s built-in test framework makes it easy to write and run both unit and integration tests. Whether you’re validating a single function or ensuring your modules work together seamlessly, testing helps you catch bugs early.
In this post, we’ll start with simple unit tests, work our way up to integration tests, and see how Rust makes it all surprisingly intuitive.
Unlike many other languages that require third-party libraries to get started with testing, Rust comes with a full-featured test framework built into the language and the compiler.
With just the #[test] attribute and the cargo test command, you can:
- Define unit and integration tests
- Assert expectations using built-in macros
- Group tests in modules
- Mark tests to be ignored or to expect panics
- Filter and run specific tests easily
This seamless integration means you don’t have to choose or configure a testing framework; you can just write code and tests side by side and focus on writing high-quality software.
Unit Tests vs Integration Tests
Unit tests are focused, low-level tests that validate the behavior of a small, isolated piece of code, usually a single function, method, or module. They:
- Live inside the same crate (often the same file) as the code being tested
- Are defined in a special #[cfg(test)] module
- Can access private functions and internal details
Use them to verify edge cases, logic branches, or specific calculations in your core logic.
Integration tests live in a separate tests/ directory at the root of your crate. These:
- Test your library or binary as a black box
- Only access the public API
- Are ideal for end-to-end checks, public-facing behaviors, and cross-module interactions
Together, unit and integration tests give you broad assurance that your crate behaves as expected.
Let’s get started!
Getting Started with Unit Testing
Let’s create a new Rust library, write some code, and write some tests. cd into the directory where you store Rust packages for this blog, and execute this command:
cargo new math --lib
This creates a new package math with one library crate, src/lib.rs.
Now cd into the math directory and open that directory in VS Code or your favorite IDE.
Note: using VS Code will make it easier to follow along with our posts, and installing the Rust Analyzer extension is also beneficial.
Also, opening the directory containing your Rust package allows the Rust Analyzer extension to work better. If you open a parent folder in VS Code, the extension might not work at all, so open the math directory itself in VS Code.
Tests are designed to verify that non-test code works as expected. You’re creating a Rust program that does something useful. Tests verify your program code does what it is designed to do.
The bodies of test functions typically perform these three actions:
- Set up any needed data or state.
- Run the code you want to test.
- Assert that the results are what you expect.
#[test] Annotates Functions as Tests
To make a function a test function, simply annotate it with the test attribute.
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
Attributes are metadata about Rust code. We’ve seen this in other blog posts, where we used the derive attribute to derive the Debug trait and enable printing of non-primitive and complex types:
#[derive(Debug)]
When you create a new library package with cargo, a tests module containing a test function is automatically generated. Open the lib.rs file that was generated for the math library package we just created.
pub fn add(left: u64, right: u64) -> u64 {
left + right
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
}
Cargo generated our library with some example code, fn add(), and an example test module and test function. This is to help you quickly become productive as you add code to your library and write tests against your code.
Note that the following line makes all code in the parent module, including private code, available to test in the tests module. That includes all functions, structs, enums etc. in the parent module.
use super::*;
For this simple lib.rs, the parent module is everything outside the tests module. Note that although this could include library code below the test module, the best practice in Rust is to place tests below library code.
Let’s analyze the it_works() test function.
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
The annotation #[test] indicates to the test runner that this function is a test. The tests module could have other functions that are not tests, perhaps performing setup for tests, or other tasks. So ensure you annotate test functions.
Our test function body uses the assert_eq!() macro to verify that the result of calling the add() function with arguments of 2 and 2 returns a value of 4. Let’s see if this test passes by running cargo test in the shell window, with the current working directory as our math directory.
running 1 test
test tests::it_works ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests math
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Here we see that one test was run and it passed. The output tells us our test function it_works passed (ok):
test tests::it_works ... ok
Note that it says zero Doc-tests were run.
Doc-tests are tests you can include in documentation comments for your code. Doc-tests help ensure your code and its documentation comments stay in sync.
Replace the contents of lib.rs with the following to add a doc test for our add() function:
/// Adds two unsigned 64-bit integers.
///
/// # Examples
///
/// ```
/// // Import your crate or use a fully-qualified path if testing externally
/// use math::add;
///
/// let result = add(2, 2);
/// assert_eq!(result, 4);
/// ```
pub fn add(left: u64, right: u64) -> u64 {
left + right
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
}
Run the tests again with cargo test and you should see this output.
running 1 test
test tests::it_works ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests math
running 1 test
test src/lib.rs - add (line 5) ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.50s
Now we have one test being run and passing, and one doc-test being run and passing.
Let’s take a closer look at the reported results:
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
The overall result was ok, meaning all tests passed. If any test fails, the overall result will be FAILED.
Next the output tells us one test passed, zero failed, zero were ignored, zero were measured, and zero were filtered out. Later we’ll see how you can ignore tests, and how you can filter tests to run only specific ones by passing a string to the cargo test command.
Measured tests refer to performance benchmarks, which are disabled in stable Rust and only work with nightly builds.
If you did want to give benchmark tests a try:
Switch to using the Rust nightly build:
rustup override set nightly
(The test crate used for benchmarking ships with the nightly toolchain, so no Cargo.toml changes are needed.)
Add this to your lib.rs
#![feature(test)]
extern crate test;
#[bench]
fn bench_addition(b: &mut test::Bencher) {
b.iter(|| 1 + 2);
}
Run the benchmark tests:
cargo bench
Reference the benchmark tests section of the Official Rust Book.
Change the Name of Your Test Function
Let’s change the name of our test function to something more meaningful and relevant:
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn adds_two_numbers() {
let result = add(2, 2);
assert_eq!(result, 4);
}
}
Execute cargo test
and the output reflects our test function name change:
running 1 test
test tests::adds_two_numbers ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Add Another Test
Let’s add another test. We use the panic!() macro to cause the test to fail.
#[test]
fn second_test() {
panic!("Force this test to fail");
}
Execute cargo test and this is the output:
running 2 tests
test tests::adds_two_numbers ... ok
test tests::second_test ... FAILED
failures:
---- tests::second_test stdout ----
thread 'tests::second_test' panicked at src/lib.rs:28:9:
Force this test to fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
tests::second_test
test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
The results indicate that one test result was ok and one test result was FAILED and the overall test result was FAILED.
Tests fail when something in the test panics. The assert_eq!() macro panics if the values are not equal.
Note that each test executes in a separate thread, and when the main testing thread sees an individual test thread has died, it marks the test as having failed.
We won’t cover threads in detail in this post, but a thread is a lightweight, independent path of execution within a program. A thread is like a mini-program running alongside others in the same application.
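As a quick illustration only (threads deserve their own post), here is a minimal sketch of spawning a thread:
use std::thread;

fn main() {
    // Spawn a lightweight, independent path of execution alongside the main thread.
    let handle = thread::spawn(|| {
        println!("Hello from another thread!");
    });
    // Wait for the spawned thread to finish.
    handle.join().unwrap();
}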
The output tells us why the test failed and where in the file it failed:
thread 'tests::second_test' panicked at src/lib.rs:28:9:
The test failed on line 28 of file lib.rs:
panic!("Force this test to fail");
The output also mentions you can rerun the tests with the RUST_BACKTRACE environment variable set to display a backtrace. Execute this now:
RUST_BACKTRACE=1 cargo test
This sets the environment variable RUST_BACKTRACE to a value of 1 while the tests are running. Here is the output:
running 2 tests
test tests::adds_two_numbers ... ok
test tests::second_test ... FAILED
failures:
---- tests::second_test stdout ----
thread 'tests::second_test' panicked at src/lib.rs:28:9:
Force this test to fail
stack backtrace:
0: rust_begin_unwind
at /rustc/4d91de4e48198da2e33413efdcd9cd2cc0c46688/library/std/src/panicking.rs:692:5
1: core::panicking::panic_fmt
at /rustc/4d91de4e48198da2e33413efdcd9cd2cc0c46688/library/core/src/panicking.rs:75:14
2: math::tests::second_test
at ./src/lib.rs:28:9
3: math::tests::second_test::{{closure}}
at ./src/lib.rs:27:21
4: core::ops::function::FnOnce::call_once
at /Users/recording/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
5: core::ops::function::FnOnce::call_once
at /rustc/4d91de4e48198da2e33413efdcd9cd2cc0c46688/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
failures:
tests::second_test
test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.05s
The backtrace can provide valuable information in tracking down the cause of the failed test.
The assert!() Macro
In addition to the assert_eq!(val1, val2) macro, which calls the panic!() macro if the two values are not equal, Rust also provides the assert!() macro, which calls panic!() if the expression passed to it does not evaluate to true.
Add the following code to lib.rs above the test module:
#[derive(Debug)]
struct Point {
x: u32,
y: u32,
}
impl Point {
fn same_point(&self, other: &Point) -> bool {
self.x == other.x && self.y == other.y
}
}
Our method same_point() returns a boolean, so we can use the assert!() macro to test it.
Now add this code inside the test module:
#[test]
fn points_are_the_same() {
let point_one = Point {
x: 100,
y: 50,
};
let point_two = Point {
x: 100,
y: 50,
};
assert!(point_one.same_point(&point_two));
}
Execute cargo test and you should see these results:
running 3 tests
test tests::adds_two_numbers ... ok
test tests::points_are_the_same ... ok
test tests::second_test ... FAILED
failures:
---- tests::second_test stdout ----
thread 'tests::second_test' panicked at src/lib.rs:40:9:
Force this test to fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
tests::second_test
test result: FAILED. 2 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Our new test passes, and our old tests have one pass one failure as before.
Let’s add another test to our test module:
#[test]
fn points_are_not_the_same() {
let point_one = Point {
x: 100,
y: 50,
};
let point_two = Point {
x: 300,
y: 50,
};
assert!(!point_one.same_point(&point_two));
}
Run the tests again and you’ll see this test passes as well. Here we do something slightly different: we use the ! (not, negation) operator to test that the result of calling the same_point() function is NOT true (it is false). Because the x fields of the two input points do not have the same value, the function returns false. The ! (negation) operator converts the false to a true, assert!() is satisfied, and the test passes.
Introduce a Bug
Now let’s make a code change that results in a bug in our code. Replace the Point impl block with the following:
impl Point {
fn same_point(&self, other: &Point) -> bool {
self.x == other.x || self.y == other.y
}
}
Here we’ve replaced the && (AND) operator with the || (OR) operator. Now the function will return true if either the two points’ x fields are the same OR the two points’ y fields are the same.
That’s a bug, because the purpose of the same_point() function is to compare two Points and determine if they are the same point, which can only be the case if both the x AND y fields are the same.
Run the tests again:
running 4 tests
test tests::adds_two_numbers ... ok
test tests::points_are_the_same ... ok
test tests::second_test ... FAILED
test tests::points_are_not_the_same ... FAILED
The failed test causes us to investigate the points_are_not_the_same test failure. We find the bug in our code, replace the || operator with the && operator, rerun our tests, and confirm that our bug fix worked.
assert_eq!() and assert_ne!()
We’ve seen the assert_eq!() macro in action, which verifies that the code results in a value equal to an expected value. Rust also offers an assert_ne!() macro that does the opposite: it verifies the code results in a value that is not equal to an expected value.
Choosing between assert!(), assert_eq!(), and assert_ne!() for any particular test is partly personal preference and partly about keeping your tests easy to understand.
Note that when a test fails, the assert_eq!() and assert_ne!() macros provide the values that were compared, helping you understand what went wrong. So they might be a better choice than the assert!() macro, which does not provide this additional information.
Let’s change one of our tests so it fails, so we can see the additional information assert_ne!() provides.
#[test]
fn adds_two_numbers() {
let result = add(2, 2);
assert_ne!(result, 4);
}
Here we’ve changed the adds_two_numbers() test to use the assert_ne!() macro instead of the assert_eq!() macro. Now run the tests. The test fails and gives this output:
thread 'tests::adds_two_numbers' panicked at src/lib.rs:35:9:
assertion `left != right` failed
left: 4
right: 4
The assert_ne!() macro asserts that left != right, but as both sides have the value 4, the test fails. This is a problem with our test, not our code; we should be using the assert_eq!() macro, so this was just for teaching purposes.
Note that some languages use the terms expected and actual for the two values being compared; Rust uses the terms left and right instead.
When a test fails, the assert_eq!() and assert_ne!() macros print their arguments using debug formatting, so the values being compared must implement the PartialEq and Debug traits.
Primitive types such as integers, floats, bool, char, and a few others implement the PartialEq and Debug traits by default, but complex types such as structs, enums, etc. do not.
Traits will be covered in detail in a future post, but for now understand that you can add the following before your struct, enum, etc. to derive the default implementations of the PartialEq and Debug traits:
#[derive(PartialEq, Debug)]
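For example, here is a minimal sketch (the Size struct and its test are hypothetical, not part of our math library) showing how deriving these traits lets assert_eq!() compare struct values and print them on failure:
#[derive(PartialEq, Debug)]
struct Size {
    width: u32,
    height: u32,
}

#[cfg(test)]
mod size_tests {
    use super::*;

    #[test]
    fn sizes_are_equal() {
        let a = Size { width: 10, height: 20 };
        let b = Size { width: 10, height: 20 };
        // PartialEq enables the == comparison; Debug enables printing both values on failure.
        assert_eq!(a, b);
    }
}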
Adding Custom Failure Messages
The assert!(), assert_eq!(), and assert_ne!() macros take an optional argument after any required arguments. That optional argument is passed to the format!() macro that is used to format strings. You pass a format string that includes placeholders such as {} and values to go in those placeholders.
This feature is useful to allow tests to provide more information on what went wrong on failure.
Let’s add this function above the tests module in lib.rs:
pub fn greeting(name: &str) -> String {
format!("Hello {name}!")
}
Now add this test to the tests module:
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn greeting_contains_name() {
let result = greeting("Carol");
assert!(result.contains("Carol"));
}
}
Note that our test simply asserts that the return value contains the text “Carol”. We do this because we might change the implementation of the greeting() function to append the name to a different message, such as Welcome {name} or perhaps Greetings {name}. Our test doesn’t need to change, even if the code implementation changes.
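For instance, here is a hedged sketch of such a change (this Welcome wording is hypothetical, not something we will actually keep): with this implementation the greeting_contains_name test would still pass, because it only checks that the name appears somewhere in the result.
pub fn greeting(name: &str) -> String {
    // A different message that still contains the name, so the test still passes.
    format!("Welcome {name}!")
}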
Run the tests and this test passes. Now let’s introduce a bug in our code. Replace the greeting() function with the following:
pub fn greeting(name: &str) -> String {
String::from("Hello!")
}
The test fails but we don’t get much useful information on what went wrong. Let’s add a custom error message to our test:
#[test]
fn greeting_contains_name() {
let result = greeting("Carol");
assert!(
result.contains("Carol"),
"Greeting did not contain name, value was `{result}`"
);
}
We now pass the optional argument, a format string with a placeholder that embeds the function call result in the custom error message. Run the tests again and our error message is now a bit more helpful, and includes the value returned from the function.
thread 'tests::greeting_contains_name' panicked at src/lib.rs:78:9:
Greeting did not contain name, value was `Hello!`
Checking for Panics with #[should_panic]
Sometimes our code is designed to intentionally panic in certain situations. We can test for this by annotating our test with #[should_panic]. Our test will pass if the code panics and the test will fail if the code does not panic.
Add this code above the tests module:
pub struct Guess {
value: i32,
}
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 || value > 100 {
panic!("Guess value must be between 1 and 100, got {value}.");
}
Guess { value }
}
}
Now add this test:
#[test]
#[should_panic]
fn greater_than_100() {
Guess::new(200);
}
We place the #[should_panic] annotation after the #[test] annotation and before the test function.
Run the tests and this new test passes. Our new() function is designed to panic if the input value is less than 1 or greater than 100. We pass a value of 200, so we expect the code to panic, and the test passes.
Now let’s change the code to introduce a bug:
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 {
panic!("Guess value must be between 1 and 100, got {value}.");
}
Guess { value }
}
}
We’ve changed the condition to only check for values less than one. Run the tests again and our new test fails.
test tests::greater_than_100 - should panic ... FAILED
---- tests::greater_than_100 stdout ----
note: test did not panic as expected
Tests using should_panic can be imprecise because our code could panic for a reason other than the one we expect.
We can add an optional expected parameter to the should_panic annotation, and the test harness will check that the panic message contains the value of the expected parameter.
Let’s change our code to panic with different messages when the provided value is less than 1 or greater than 100.
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 {
panic!(
"Guess value must be greater than or equal to 1, got {value}."
);
} else if value > 100 {
panic!(
"Guess value must be less than or equal to 100, got {value}."
);
}
Guess { value }
}
}
Now we’ll update the test to use the optional expected parameter of the should_panic annotation.
#[test]
#[should_panic(expected = "less than or equal to 100")]
fn greater_than_100() {
Guess::new(200);
}
The test passes because the value of the expected parameter is contained within the message produced by the panic.
You can specify a value for the expected parameter that makes your test more or less precise; it is up to your requirements.
To introduce a bug, let’s swap the messages for the if-else construct:
if value < 1 {
panic!(
"Guess value must be less than or equal to 100, got {value}."
);
} else if value > 100 {
panic!(
"Guess value must be greater than or equal to 1, got {value}."
);
}
The test now fails and we get more information about the failure.
thread 'tests::greater_than_100' panicked at src/lib.rs:43:13:
Guess value must be greater than or equal to 1, got 200.
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
note: panic did not contain expected string
panic message: `"Guess value must be greater than or equal to 1, got 200."`,
expected substring: `"less than or equal to 100"`
Using Result<T, E> in Tests
Rather than panicking for failed tests, we can instead have tests return the Result<T, E> enum, returning Err() on failure.
Let’s modify our first test, the one we renamed, to use Result<T, E>.
#[test]
fn adds_two_numbers() -> Result<(), String> {
let result = add(2, 2);
if result == 4 {
Ok(())
} else {
Err(String::from("two plus two does not equal four"))
}
}
Run the tests and this test passes. Now change the test to result in an error and the test fails:
#[test]
fn adds_two_numbers() -> Result<(), String> {
let result = add(2, 6);
if result == 4 {
Ok(())
} else {
Err(String::from("two plus two does not equal four"))
}
}
---- tests::adds_two_numbers stdout ----
Error: "two plus two does not equal four"
Obviously our test is now flawed as we should be passing the function values of 2 and 2, not 2 and 6, but this was just for teaching purposes.
Note that you cannot use #[should_panic] with tests that use Result<T, E>.
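One benefit of Result-returning tests is that you can use the ? operator inside the test body. Here is a minimal sketch (the parses_number test is just an illustration, not part of our math library):
#[test]
fn parses_number() -> Result<(), std::num::ParseIntError> {
    // If parsing fails, ? returns the Err, which makes the test fail.
    let n: i32 = "42".parse()?;
    assert_eq!(n, 42);
    Ok(())
}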
Controlling How Tests Are Run
Just as cargo run compiles your code and executes the resultant binary, cargo test compiles your code in test mode and runs the resultant test binary. The default behavior of cargo test is to run the tests in parallel and output the results gathered from the tests. You can specify command line options to change this behavior.
Some command line arguments go with the cargo test command, and other command line arguments go with the resultant test binary. The command line arguments for the cargo test command come first, then come the command line arguments for the test binary. They are separated by -- (two dashes).
cargo test --help displays the options for the cargo test command itself, such as options for building, compiling, and filtering tests.
cargo test -- --help passes the --help flag to the test binary that Cargo builds and runs, displaying the test binary’s own options. The first -- tells Cargo that no more arguments are meant for Cargo itself; everything after it should be passed to the test runner.
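For example, a command combining both sides of the -- separator might look like this (the --release flag goes to Cargo, while --test-threads=1, explained in the next section, goes to the test binary):
cargo test --release -- --test-threads=1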
Running Tests in Parallel or Consecutively
By default your tests are run in parallel using multiple threads. This has the advantage that your tests run quicker and you see the results sooner. Because the tests are running in parallel, you need to make sure they don’t depend on each other or depend on any shared state, such as the current working directory or environment variables.
One test might write to a file, and other tests might depend on the contents of that file. But then another test might modify the contents of that file, causing other tests to fail.
You could overcome this problem by ensuring all your tests write to different files, or you could have your tests run sequentially. You can pass the --test-threads argument to the test binary indicating how many threads you want to use to execute the tests. The following ensures all tests run on a single thread, making the tests run sequentially.
cargo test -- --test-threads=1
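Alternatively, here is a minimal sketch of the different-files approach (the file name and test are hypothetical, not part of our math library): each test writes to its own uniquely named file, so tests running in parallel can’t clash over shared state.
use std::fs;

#[test]
fn writes_its_own_file() {
    // Hypothetical test: use a uniquely named file so parallel tests don't share state.
    let path = std::env::temp_dir().join("math_writes_its_own_file.txt");
    fs::write(&path, "42").unwrap();
    let contents = fs::read_to_string(&path).unwrap();
    assert_eq!(contents, "42");
    // Clean up; ignore errors if the file was already removed.
    let _ = fs::remove_file(&path);
}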
Showing Function Output
When a test passes, the Rust test library captures anything printed to the terminal. So if we use a println!() in the code to be tested, and the test passes we won’t see the print output, because the Rust test library “eats it”. We’ll just see a message that the test passed.
If the test fails we will see the message that the test failed and we will see the output of any println!() used in the code being tested.
Our lib.rs is getting a bit cluttered. Replace the entire contents of lib.rs with this content:
fn prints_and_returns_10(a: i32) -> i32 {
println!("I got the value {a}");
10
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn this_test_will_pass() {
let value = prints_and_returns_10(4);
assert_eq!(value, 10);
}
#[test]
fn this_test_will_fail() {
let value = prints_and_returns_10(8);
assert_eq!(value, 5);
}
}
This has a function that doesn’t really do anything, and two tests, one that passes and one that fails.
Run the tests and you’ll see that for the passing test, although the function prints “I got the value 4”, the test library captures it and it does not appear in the test output. But for the failing test we do see the printed output:
---- tests::this_test_will_fail stdout ----
I got the value 8
If we want to see the println!() output for passing tests as well, we can pass the --show-output argument to the test binary.
cargo test -- --show-output
---- tests::this_test_will_pass stdout ----
I got the value 4
---- tests::this_test_will_fail stdout ----
I got the value 8
Running Single Tests
If you have many tests, it can take a long time for them to run. If you’re working on a specific area of the code, you might want to run only the tests for that code. You can choose which test(s) to run by specifying their name(s).
Replace the content of lib.rs with this content:
pub fn add_two(a: usize) -> usize {
a + 2
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn add_two_and_two() {
let result = add_two(2);
assert_eq!(result, 4);
}
#[test]
fn add_three_and_two() {
let result = add_two(3);
assert_eq!(result, 5);
}
#[test]
fn one_hundred() {
let result = add_two(100);
assert_eq!(result, 102);
}
}
Running the tests with no arguments runs all of them (in parallel by default):
running 3 tests
test tests::add_three_and_two ... ok
test tests::one_hundred ... ok
test tests::add_two_and_two ... ok
Let’s run just one test by specifying its name:
cargo test one_hundred
running 1 test
test tests::one_hundred ... ok
Filtering to Run Specific Multiple Tests
We can’t pass multiple test names this way; instead, to run a specific set of tests we filter by name.
To filter and only run tests whose names contain the word “add”:
cargo test add
running 2 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok
Note that the module containing a test is part of the test’s name, so we can run all the tests in a module by filtering on the module’s name.
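For example, since all our tests live in the tests module, filtering on the module name runs all of them:
cargo test tests::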
Ignoring Tests Unless Specifically Requested
You might not want to run some long-running tests every time. Rather than filtering to control which tests run, you can annotate tests with #[ignore] and those tests will be skipped.
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn adds_two_numbers() {
let result = add(2, 2);
assert_eq!(result, 4);
}
#[test]
#[ignore]
fn expensive_test() {
// code that takes an hour to run
}
}
If you want to run only the ignored tests:
cargo test -- --ignored
If you want to run all tests whether they’re ignored or not:
cargo test -- --include-ignored
Test Organization
When talking about software testing, there are many terms out there to refer to tests and their organization. The Rust community thinks about tests in two categories: unit tests and integration tests.
So far in this post we’ve only created unit tests. Unit tests are small and more focused, testing one module at a time in isolation, and can test private interfaces. Integration tests are entirely external to your library, and exercise your code in the same way any other external code would. Integration tests only use the public interface, and can exercise multiple modules at once.
Both unit tests and integration tests are important.
Unit Tests
Unit tests exercise each unit of code in isolation to quickly identify areas of code that aren’t working as expected. Unit tests go in the src directory, in the same files as the code they will test.
The #[cfg(test)] annotation on the tests module tells the compiler to compile and run the tests only when you execute cargo test, not when you execute cargo build or cargo run.
Later, when we examine integration tests, we’ll see that they don’t need the #[cfg(test)] annotation, because integration tests live in their own directory rather than in the same files as the code they test.
Here’s the default code generated by cargo new XYZ --lib to create a new library:
pub fn add(left: u64, right: u64) -> u64 {
left + right
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
}
The cfg here stands for configuration and tells Rust that the following item should only be included if a certain configuration option is present.
The configuration option is test, so the compiler only compiles and runs our tests if we execute cargo test.
Unit tests can test private functions; integration tests can only test public functions. Replace the code in lib.rs with this:
pub fn add_two(a: usize) -> usize {
internal_adder(a, 2)
}
fn internal_adder(left: usize, right: usize) -> usize {
left + right
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn internal() {
let result = internal_adder(2, 2);
assert_eq!(result, 4);
}
}
Integration Tests
In Rust, integration tests are entirely external to your library. Integration tests use your library in the same way any other code would, and they can only access the public interface of your code.
Their purpose is to test whether various parts of your library work together properly. Code may work correctly in isolation but when used in an integrated way it might not.
Integration tests are placed in a tests directory you create at the same level as the src directory:
adder
├── Cargo.lock
├── Cargo.toml
├── src
│ └── lib.rs
└── tests
└── integration_test.rs
To get started, change the working directory of your shell window to somewhere outside a Rust package, and create a new library package with cargo new adder --lib. The point is that we don’t want to create this new library inside an existing Rust package.
Now create a tests directory at the same level as the src directory. Then add a file integration_test.rs in the tests directory and add this code to that file:
use adder::add_two;
#[test]
fn it_adds_two() {
let result = add_two(2);
assert_eq!(result, 4);
}
Each file in the tests directory is a separate crate, so we need to bring the library into each integration test file:
use adder::add_two;
This is why we must bring functions in with a use statement, as if we were using them from an external crate. We didn’t need to do this for unit tests because they were in the same file as the code they test.
For integration tests we don’t need the annotation #[cfg(test)] because Cargo treats the tests directory as special, and compiles files in the tests directory only when we run cargo test.
Execute cargo test and you’ll see our one test passes:
Compiling adder v0.1.0 (/Users/recording/Documents/backupDONOTDELETE.tmp/paragonica/bytemagma-blogs/rust/blog-projects/testing_in_rust/adder)
Finished `test` profile [unoptimized + debuginfo] target(s) in 0.41s
Running unittests src/lib.rs (target/debug/deps/adder-40a10fb2bb98b24f)
running 1 test
test tests::internal ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running tests/integration_test.rs (target/debug/deps/integration_test-618cc6cf6f3bef11)
running 1 test
test it_adds_two ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests adder
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
cargo test runs the unit test, doc test, and integration test sections. If any of these test sections fails, the following sections won’t run. So if a unit test fails, the doc tests and integration tests won’t be run.
Each integration test file has its own section in the test output.
You can run a specific integration test by specifying the test function name:
cargo test it_adds_two
To run all the tests in a specific integration test file give the name of the file (without the .rs extension):
cargo test --test integration_test
Submodules in Integration Tests
As you create more integration tests, you might want to break them up into separate files in the tests directory, perhaps organizing them by the functionality they test.
But what if you want to add some shared code, such as setup code for tests? Go ahead and add a file common.rs to the tests directory with this content:
pub fn setup() {
// setup code specific to your library's tests would go here
}
If you now run cargo test you’ll see something in the output referencing common.rs, even though there are no tests in that file, and none of our tests reference the file:
Running tests/common.rs (target/debug/deps/common-02ba26b020a0882d)
We don’t want non-test files showing up in the test output; we just wanted some common code our integration tests could use.
To avoid this, instead of having a file tests/common.rs, we’ll create a directory tests/common and in that directory we’ll create a file mod.rs, copy the content from common.rs, and then remove file common.rs.
So this is our new folder structure:
├── Cargo.lock
├── Cargo.toml
├── src
│ └── lib.rs
└── tests
├── common
│ └── mod.rs
└── integration_test.rs
Now executing cargo test will not show anything in the output for our common mod.rs file.
Putting the file into a subdirectory tells Cargo not to treat it as an integration test file. So all integration test files must be directly inside the tests directory; if you were to put an integration test file in a subdirectory, its tests would not be run.
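To actually use the shared setup code from an integration test, declare the module and call it. For example, tests/integration_test.rs could look like this:
use adder::add_two;

mod common;

#[test]
fn it_adds_two() {
    // Run any shared setup before exercising the public API.
    common::setup();
    let result = add_two(2);
    assert_eq!(result, 4);
}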
Integration Tests for Binary Crates
If our project is a binary crate that only contains a src/main.rs file and has no library file, we can’t create integration tests in the tests directory and bring functions defined in the src/main.rs file into scope with a use statement. Only library crates expose functions that other crates can use; binary crates are meant to be run on their own.
This is one reason Rust projects that provide a binary have a straightforward src/main.rs file that calls logic that lives in the src/lib.rs file. Using that structure, integration tests can test the library crate with use to make the important functionality available. If the important functionality works, the small amount of code in the src/main.rs file will work as well, and that small amount of code doesn’t need to be tested.
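As a rough sketch of that layout (assuming the adder library from above; the printed message is just an illustration), src/main.rs stays thin and delegates to the library:
// src/main.rs
use adder::add_two;

fn main() {
    // All the real logic lives in the library crate, which the integration tests cover.
    println!("2 + 2 = {}", add_two(2));
}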
Well, this has been another long post, but it walked us through the most important topics related to creating unit and integration tests for Rust programs. The Rust test framework makes it easy to create tests that ensure our code works as expected as we continue to develop it. And if we change one part of the code later, we can run the test suite to confirm we didn’t break other code that depends on the changed code.
Thanks again for stopping by, and for allowing ByteMagma to be part of your Rust mastery journey.