Efficient File Upload and Download with Axum in Rust: A Comprehensive Guide

Introduction

Handling large file uploads efficiently is a critical requirement for many web applications. Whether you’re building a file-sharing service, a content management system, or a data processing pipeline, managing large files without exhausting system memory is essential. In this blog post, we’ll explore how to build a Rust-based backend using the Axum framework that supports chunked file uploads and downloads. This approach ensures that even large files can be uploaded and downloaded efficiently, without consuming excessive memory.

Why Use Chunked Upload and Download?

Chunked file upload and download are techniques that allow handling large files in smaller, manageable pieces (chunks). Instead of loading the entire file into memory, the server processes it in smaller segments (the sketch after the list below shows the core idea). This approach offers several benefits:

  1. Memory Efficiency: Prevents memory exhaustion by processing files in smaller chunks.
  2. Resumable Uploads: Allows users to resume uploads if the connection is interrupted.
  3. Parallel Processing: Enables efficient parallel processing of large files.
  4. Improved Reliability: Reduces the risk of upload failures due to network issues.
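
To make this concrete, here is a minimal sketch of the chunking idea on the client side. The split_into_chunks helper and the 1 MiB chunk size are illustrative choices, not part of the server we build below:

use std::fs::File;
use std::io::Read;

// 1 MiB per chunk; an arbitrary size chosen for illustration
const CHUNK_SIZE: usize = 1024 * 1024;

fn split_into_chunks(path: &str) -> std::io::Result<Vec<Vec<u8>>> {
    let mut file = File::open(path)?;
    let mut chunks = Vec::new();
    loop {
        let mut buffer = vec![0u8; CHUNK_SIZE];
        // read() may return fewer bytes than requested, so chunk sizes can vary
        let bytes_read = file.read(&mut buffer)?;
        if bytes_read == 0 {
            break; // reached end of file
        }
        buffer.truncate(bytes_read);
        chunks.push(buffer);
    }
    Ok(chunks)
}

(A real client would send each chunk as soon as it is read rather than collecting them all in memory first.)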

Prerequisites

Before we dive into the implementation, ensure you have the following installed:

  • Rust (latest stable version)
  • Cargo (comes with Rust)

We’ll also rely on these crates:

  • tokio for the asynchronous runtime
  • axum for building the API
  • serde for deserializing request parameters
  • tower-http for CORS middleware

Add the following dependencies to your Cargo.toml file:

[dependencies]
axum = { version = "0.7.6", features = ["multipart"] }
tokio = { version = "1.39.3", features = ["full"] }
serde = { version = "1.0.183", features = ["derive"] }
tower-http = { version = "0.5.2", features = ["cors"] }

Setting Up the Axum Server

Let’s start by setting up the Axum server and configuring CORS so a frontend running on http://localhost:3000 can call the API.

use axum::{
    body::Body,
    extract::{Multipart, Query},
    http::{HeaderValue, Method, StatusCode},
    response::{IntoResponse, Response},
    routing::{get, post},
    Router,
};
use serde::Deserialize;
use tower_http::cors::CorsLayer;
use std::{
    fs::{self, File, OpenOptions},
    io::{Read, Seek, SeekFrom, Write},
};

#[tokio::main]
async fn main() {
    // Initialize uploads directory
    fs::create_dir_all("./uploads/temp").unwrap();

    let cors = CorsLayer::new()
        .allow_origin("http://localhost:3000".parse::<HeaderValue>().unwrap())
        .allow_methods([Method::GET, Method::POST]);

    // Build our app
    let app = Router::new()
        .route("/upload", post(upload_chunk))
        .route("/download", get(download_chunk))
        .layer(cors);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8000").await.unwrap();
    println!("Server running on http://localhost:8000");
    axum::serve(listener, app).await.unwrap();
}

Implementing the File Upload Handler

The upload handler receives each chunk as multipart form data, together with the file name, the chunk number, and the total number of chunks. Chunks are written to a temporary directory and assembled into the full file once the last one arrives.

pub async fn upload_chunk(mut multipart: Multipart) -> impl IntoResponse {
    let mut file_name = String::new();
    let mut chunk_number: usize = 0;
    let mut total_chunks: usize = 0;
    let mut chunk_data = Vec::new();

    // Walk the multipart fields: metadata arrives as text, the chunk as bytes
    while let Some(field) = match multipart.next_field().await {
        Ok(f) => f,
        Err(err) => {
            eprintln!("Error reading multipart field: {:?}", err);
            return StatusCode::BAD_REQUEST;
        }
    } {
        let field_name = field.name().unwrap_or_default().to_string();
        match field_name.as_str() {
            "fileName" => file_name = sanitize_filename(&field.text().await.unwrap_or_default()),
            "chunkNumber" => chunk_number = field.text().await.unwrap_or_default().parse().unwrap_or(0),
            "totalChunks" => total_chunks = field.text().await.unwrap_or_default().parse().unwrap_or(0),
            // bytes() yields a Bytes value, so convert it to a Vec on success
            "chunk" => chunk_data = field.bytes().await.map(|b| b.to_vec()).unwrap_or_default(),
            _ => {}
        }
    }

    if file_name.is_empty() || chunk_data.is_empty() {
        return StatusCode::BAD_REQUEST;
    }

    // Store this chunk under ./uploads/temp/<file_name>/chunk_<n>
    let temp_dir = format!("./uploads/temp/{}", file_name);
    if fs::create_dir_all(&temp_dir).is_err() {
        return StatusCode::INTERNAL_SERVER_ERROR;
    }
    let chunk_path = format!("{}/chunk_{}", temp_dir, chunk_number);
    let mut file = match File::create(&chunk_path) {
        Ok(f) => f,
        Err(_) => return StatusCode::INTERNAL_SERVER_ERROR,
    };
    if file.write_all(&chunk_data).is_err() {
        return StatusCode::INTERNAL_SERVER_ERROR;
    }

    // Once every chunk is on disk, stitch them together into the final file
    if is_upload_complete(&temp_dir, total_chunks) && assemble_file(&temp_dir, &file_name, total_chunks).is_err() {
        return StatusCode::INTERNAL_SERVER_ERROR;
    }

    StatusCode::OK
}
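
Both handlers call a small sanitize_filename helper, which keeps a crafted fileName such as ../../etc/passwd from escaping the uploads directory. Here is a minimal sketch of what such a helper might look like; a production service might prefer an established crate such as sanitize-filename:

fn sanitize_filename(name: &str) -> String {
    // Keep only the final path component, then drop anything that is not
    // alphanumeric, '.', '-', or '_'. A minimal sketch, not a hardened filter.
    name.replace('\\', "/")
        .rsplit('/')
        .next()
        .unwrap_or_default()
        .chars()
        .filter(|&c| c.is_ascii_alphanumeric() || matches!(c, '.' | '-' | '_'))
        .collect()
}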

Checking Upload Completion and Assembling File

Once all chunks are uploaded, we assemble them into the final file.

fn is_upload_complete(temp_dir: &str, total_chunks: usize) -> bool {
    // The upload is complete once the temp directory holds one file per chunk
    match fs::read_dir(temp_dir) {
        Ok(entries) => entries.count() == total_chunks,
        Err(_) => false,
    }
}

fn assemble_file(temp_dir: &str, file_name: &str, total_chunks: usize) -> std::io::Result<()> {
    let output_path = format!("./uploads/{}", file_name);
    // truncate(true) guards against leftover bytes if the file already exists
    let mut output_file = OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true)
        .open(&output_path)?;

    // Append the chunks in order, then discard the temp directory
    for chunk_number in 0..total_chunks {
        let chunk_path = format!("{}/chunk_{}", temp_dir, chunk_number);
        let chunk_data = fs::read(&chunk_path)?;
        output_file.write_all(&chunk_data)?;
    }

    fs::remove_dir_all(temp_dir)?;
    Ok(())
}
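
To exercise the upload path end to end, a client posts each chunk as multipart form data using the same field names the handler matches on (fileName, chunkNumber, totalChunks, chunk). The sketch below shows one way to do this from Rust; it assumes the reqwest crate (with its multipart feature) as a client-side dependency and reuses the split_into_chunks helper from earlier:

// Hypothetical client; assumes reqwest = { version = "0.12", features = ["multipart"] }
use reqwest::multipart::{Form, Part};

async fn upload_file(path: &str, file_name: &str) -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let chunks = split_into_chunks(path)?; // helper sketched earlier
    let total_chunks = chunks.len();

    for (chunk_number, chunk) in chunks.into_iter().enumerate() {
        let form = Form::new()
            .text("fileName", file_name.to_string())
            .text("chunkNumber", chunk_number.to_string())
            .text("totalChunks", total_chunks.to_string())
            .part("chunk", Part::bytes(chunk));

        // Each chunk is an independent request, which is what makes
        // resuming an interrupted upload possible
        client
            .post("http://localhost:8000/upload")
            .multipart(form)
            .send()
            .await?
            .error_for_status()?;
    }
    Ok(())
}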

Handling File Downloads in Chunks

To download files in chunks, we use a query-based approach to specify the file name, offset, and chunk size.

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")] // accepts ?fileName=...&offset=...&chunkSize=...
struct DownloadParams {
    file_name: String,
    offset: u64,
    chunk_size: usize,
}

async fn download_chunk(Query(params): Query<DownloadParams>) -> impl IntoResponse {
    let file_path = format!("./uploads/{}", sanitize_filename(&params.file_name));

    // Return 404 instead of panicking when the file does not exist
    let mut file = match File::open(&file_path) {
        Ok(f) => f,
        Err(_) => return StatusCode::NOT_FOUND.into_response(),
    };

    // Seek to the requested offset and read at most one chunk; in production,
    // cap chunk_size and prefer tokio::fs, since std::fs blocks the runtime
    let mut buffer = vec![0; params.chunk_size];
    if file.seek(SeekFrom::Start(params.offset)).is_err() {
        return StatusCode::INTERNAL_SERVER_ERROR.into_response();
    }
    let bytes_read = match file.read(&mut buffer) {
        Ok(n) => n,
        Err(_) => return StatusCode::INTERNAL_SERVER_ERROR.into_response(),
    };

    // An empty read means the offset is at or past the end of the file
    if bytes_read == 0 {
        return StatusCode::NO_CONTENT.into_response();
    }

    buffer.truncate(bytes_read);
    Response::builder()
        .status(StatusCode::OK)
        .header("Content-Type", "application/octet-stream")
        .body(Body::from(buffer))
        .unwrap()
        .into_response()
}
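
On the client side, downloading becomes a loop that advances the offset until the server answers 204 No Content. Again a minimal sketch assuming the reqwest crate; it also assumes the file name needs no URL encoding:

use std::io::Write;

async fn download_file(file_name: &str, dest: &str) -> Result<(), Box<dyn std::error::Error>> {
    const CHUNK_SIZE: usize = 1024 * 1024; // request 1 MiB per round trip
    let client = reqwest::Client::new();
    let mut output = std::fs::File::create(dest)?;
    let mut offset: u64 = 0;

    loop {
        let url = format!(
            "http://localhost:8000/download?fileName={}&offset={}&chunkSize={}",
            file_name, offset, CHUNK_SIZE
        );
        let response = client.get(&url).send().await?;

        // The server answers 204 once the offset reaches the end of the file
        if response.status() == reqwest::StatusCode::NO_CONTENT {
            break;
        }

        let bytes = response.error_for_status()?.bytes().await?;
        output.write_all(&bytes)?;
        offset += bytes.len() as u64;
    }
    Ok(())
}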

Conclusion

In this blog post, we’ve built a Rust-based backend using the Axum framework that supports chunked file uploads and downloads. This approach ensures efficient handling of large files without exhausting system memory. By processing files in smaller chunks, we improve performance, reliability, and scalability.

Key Takeaways:

  • Chunked uploads prevent memory exhaustion and support resumable uploads.
  • Chunked downloads allow efficient retrieval of large files.
  • Axum and tokio provide a robust foundation for building asynchronous web servers in Rust.

Now, you can integrate this API with a frontend framework like React or Next.js to build a complete file-sharing system. Happy coding!
