Laravel File Storage and Uploads
Handling file uploads is one of the most common requirements in web applications — from profile pictures and product images to PDFs, CSVs, and video files. Laravel's filesystem abstraction, powered by Flysystem, gives you a clean, unified API for working with local storage, Amazon S3, DigitalOcean Spaces, and other cloud providers without ever changing your application code. In this guide, you'll learn everything from basic uploads to secure cloud storage with production-ready patterns.
Laravel wraps the Flysystem library to give every disk — local, S3, SFTP — the exact same API. You can develop on local disk and deploy to S3 just by changing an environment variable. No code changes required.
The Filesystem Architecture
Before touching any code, it helps to understand how Laravel organises file storage. At the core is the concept of disks — named storage backends defined in config/filesystems.php. Each disk has a driver (local, s3, ftp, sftp) and driver-specific options like root path or bucket name.
Laravel ships with three disks preconfigured:
- local — maps to storage/app, not publicly accessible
- public — maps to storage/app/public, served via a symlink at public/storage
- s3 — Amazon S3 (or any S3-compatible service like DigitalOcean Spaces or Cloudflare R2)
*Figure: Laravel's Flysystem abstraction routes Storage calls to any backend disk.*
Configuration
The config/filesystems.php file is where all disks are defined. You should drive everything sensitive through .env variables — never hardcode credentials.
```php
'default' => env('FILESYSTEM_DISK', 'local'),

'disks' => [

    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
        'throw' => false,
    ],

    'public' => [
        'driver' => 'local',
        'root' => storage_path('app/public'),
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
        'throw' => false,
    ],

    's3' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
        'url' => env('AWS_URL'),
        'endpoint' => env('AWS_ENDPOINT'), // for S3-compatible (e.g. R2)
        'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
        'throw' => false,
    ],

],
```
Creating the Public Symlink
For the public disk to be web-accessible, you need to create a symbolic link from public/storage to storage/app/public. This is a one-time setup command:
```bash
php artisan storage:link
```
Run php artisan storage:link during every fresh deployment. If you're using Docker or a shared hosting environment, make sure the web server can follow symlinks (check Options +FollowSymLinks in Apache or disable_symlinks off in Nginx).
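The symlink target itself comes from the links array in config/filesystems.php, so you can add further mappings if you need them. Here is a minimal sketch of that array; the second entry (images) is purely hypothetical and only shows the shape:

```php
// config/filesystems.php
'links' => [
    public_path('storage') => storage_path('app/public'),
    // Hypothetical extra mapping, created the next time you run storage:link
    public_path('images') => storage_path('app/images'),
],
```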
Basic File Operations
The Storage facade is your primary interface. It accepts an optional disk argument — if omitted, it uses the default disk from your config.
```php
use Illuminate\Support\Facades\Storage;
// Write a file
Storage::put('reports/q1.pdf', $pdfContent);
// Write to a specific disk
Storage::disk('s3')->put('reports/q1.pdf', $pdfContent);
// Read a file
$content = Storage::get('reports/q1.pdf');
// Check existence
if (Storage::exists('reports/q1.pdf')) { ... }
// Get file size in bytes
$size = Storage::size('reports/q1.pdf');
// Get last modified timestamp
$timestamp = Storage::lastModified('reports/q1.pdf');
// Delete a file
Storage::delete('reports/q1.pdf');
// Delete multiple files
Storage::delete(['file1.txt', 'file2.txt']);
// List files in a directory
$files = Storage::files('reports');
$allFiles = Storage::allFiles('reports'); // recursive
// List directories
$dirs = Storage::directories('reports');
// Create a directory
Storage::makeDirectory('reports/2026');
// Delete a directory (and its contents)
Storage::deleteDirectory('reports/temp');
```
File Visibility
Files stored via the local driver are private by default. To make a file publicly accessible when using a cloud driver like S3, set its visibility to public:
```php
// Store with explicit visibility
Storage::disk('s3')->put('avatars/user-1.jpg', $data, 'public');

// Change visibility after storing
Storage::setVisibility('avatars/user-1.jpg', 'public');

// Get current visibility
$visibility = Storage::getVisibility('avatars/user-1.jpg'); // 'public' or 'private'

// Get public URL (works for public disk and public S3 files)
$url = Storage::url('avatars/user-1.jpg');

// Generate a temporary signed URL for private files (S3 only)
$signedUrl = Storage::disk('s3')->temporaryUrl(
    'documents/contract.pdf',
    now()->addMinutes(30)
);
```
Handling HTTP File Uploads
When a user uploads a file via a form, Laravel wraps it in an Illuminate\Http\UploadedFile instance, which extends Symfony's UploadedFile. The store and storeAs methods make persisting uploads trivial.
The store() Method
store() generates a unique, random filename (via the uploaded file's hashName() method), preventing collisions automatically. This is the recommended approach for most uploads:
```php
public function updateAvatar(Request $request)
{
    $request->validate([
        'avatar' => 'required|image|mimes:jpg,jpeg,png,gif,webp|max:2048',
    ]);

    $file = $request->file('avatar');

    // store() returns the generated path, e.g. "avatars/abc123def456.jpg"
    $path = $file->store('avatars', 'public');

    // Delete old avatar if exists
    if ($request->user()->avatar_path) {
        Storage::disk('public')->delete($request->user()->avatar_path);
    }

    $request->user()->update(['avatar_path' => $path]);

    return back()->with('success', 'Avatar updated successfully.');
}
```
The storeAs() Method
When you need to control the filename — for example, naming a file after a user ID or a slug — use storeAs():
```php
public function uploadDocument(Request $request)
{
    $request->validate([
        'document' => 'required|file|mimes:pdf,doc,docx|max:10240',
    ]);

    $file = $request->file('document');

    // extension() derives the extension from the file's actual MIME type,
    // rather than trusting the client-supplied filename
    $ext = $file->extension();
    $filename = 'contract-' . auth()->id() . '-' . time() . '.' . $ext;

    $path = $file->storeAs('documents', $filename, 's3');

    Document::create([
        'user_id' => auth()->id(),
        'file_path' => $path,
        'file_name' => $file->getClientOriginalName(),
        'file_size' => $file->getSize(),
        'mime_type' => $file->getMimeType(),
    ]);

    return response()->json(['path' => $path], 201);
}
```
File Validation Best Practices
Never trust client-provided file metadata. Laravel's validation rules let you whitelist exactly what you accept. Here's a comprehensive reference of file validation rules:
| Rule | What It Validates | Example |
|---|---|---|
| file | Is a successfully uploaded file | 'doc' => 'file' |
| image | JPEG, PNG, GIF, BMP, SVG, or WebP | 'photo' => 'image' |
| mimes:jpg,png | MIME type by content inspection | 'photo' => 'mimes:jpg,png,webp' |
| mimetypes:image/jpeg | Exact MIME type string | 'photo' => 'mimetypes:image/jpeg' |
| max:2048 | Maximum file size in kilobytes | 'doc' => 'max:10240' |
| min:10 | Minimum file size in kilobytes | 'video' => 'min:100' |
| dimensions:... | Image width/height constraints | 'avatar' => 'dimensions:max_width=500,max_height=500' |
| extensions:pdf,docx | File extension (Laravel 10.31+) | 'report' => 'extensions:pdf,xlsx' |
Always use mimes or mimetypes instead of relying on getClientOriginalExtension(). A malicious user can rename shell.php to shell.jpg. Laravel's mimes rule uses finfo_file() to inspect the actual binary content, making it much harder to spoof.
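Newer Laravel releases also ship a fluent File rule object that builds the same constraints. The sketch below combines type, size, and dimension checks; the field names and limits are illustrative, not part of the examples above:

```php
use Illuminate\Validation\Rule;
use Illuminate\Validation\Rules\File;

$request->validate([
    // Accept only real JPEG/PNG/WebP content, between 10 KB and 2 MB
    'avatar' => [
        'required',
        File::types(['jpg', 'jpeg', 'png', 'webp'])
            ->min(10)
            ->max(2 * 1024),
    ],
    // Image-specific helper with dimension constraints
    'banner' => [
        'required',
        File::image()->dimensions(
            Rule::dimensions()->maxWidth(2000)->maxHeight(1000)
        ),
    ],
]);
```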
Multiple File Uploads
```php
// HTML form
// <input type="file" name="photos[]" multiple>

// Validation
$request->validate([
    'photos' => 'required|array|max:5',
    'photos.*' => 'image|mimes:jpg,jpeg,png,webp|max:4096',
]);

// Process each file
$paths = [];
foreach ($request->file('photos') as $photo) {
    $paths[] = $photo->store('gallery', 'public');
}

// Or using collect()
$paths = collect($request->file('photos'))
    ->map(fn ($file) => $file->store('gallery', 'public'))
    ->toArray();
```
Amazon S3 and Cloud Storage
Moving from local to S3 requires two steps: installing the AWS SDK and updating your .env. Your application code stays unchanged.
Install the AWS SDK
Laravel uses the league/flysystem-aws-s3-v3 package for S3 integration.
```bash
composer require league/flysystem-aws-s3-v3 "^3.0"
```
Configure Environment Variables
Add your AWS credentials and bucket name to .env.
```
FILESYSTEM_DISK=s3
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=my-app-uploads
AWS_URL=https://my-app-uploads.s3.us-east-1.amazonaws.com
```
Use S3 in Your Code
Since FILESYSTEM_DISK=s3, Storage::put() now writes directly to S3. For multi-disk apps, explicitly specify the disk.
```php
// Upload (same API as local)
$path = $request->file('invoice')->store('invoices', 's3');

// Get a permanent URL (only works if the file is public)
$url = Storage::disk('s3')->url($path);

// Get a 60-minute temporary signed URL for private files
$signedUrl = Storage::disk('s3')->temporaryUrl($path, now()->addHour());

// Stream a large file to avoid memory exhaustion
return response()->stream(function () use ($path) {
    $stream = Storage::disk('s3')->readStream($path);
    fpassthru($stream);
    fclose($stream);
}, 200, [
    'Content-Type' => Storage::disk('s3')->mimeType($path),
    'Content-Disposition' => 'attachment; filename="'.basename($path).'"',
]);
```
DigitalOcean Spaces and Cloudflare R2
Both services are S3-compatible. The only difference is the endpoint and use_path_style_endpoint settings:
```
AWS_ACCESS_KEY_ID=DO_SPACES_KEY
AWS_SECRET_ACCESS_KEY=DO_SPACES_SECRET
AWS_DEFAULT_REGION=nyc3
AWS_BUCKET=my-spaces-bucket
AWS_ENDPOINT=https://nyc3.digitaloceanspaces.com
AWS_URL=https://my-spaces-bucket.nyc3.digitaloceanspaces.com
AWS_USE_PATH_STYLE_ENDPOINT=false
```
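For Cloudflare R2 the shape is the same; only the endpoint and region change. A sketch with placeholder values (your account ID, bucket, and public URL will differ, and whether you need path-style requests depends on your setup):

```
AWS_ACCESS_KEY_ID=R2_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=R2_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION=auto
AWS_BUCKET=my-r2-bucket
AWS_ENDPOINT=https://<ACCOUNT_ID>.r2.cloudflarestorage.com
# Public URL only if the bucket is exposed via a custom domain or r2.dev
# AWS_URL=https://files.example.com
AWS_USE_PATH_STYLE_ENDPOINT=false
```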
Image Processing with Intervention Image
Resizing and optimising images server-side prevents gigantic uploads from reaching your storage. The most popular package for this in the Laravel ecosystem is Intervention Image v3; the intervention/image-laravel package pulls in the core library and registers the facade used below.

```bash
composer require intervention/image-laravel
```
```php
use Intervention\Image\Laravel\Facades\Image;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

public function uploadAvatar(Request $request)
{
    $request->validate([
        'avatar' => 'required|image|mimes:jpg,jpeg,png,webp|max:5120',
    ]);

    $file = $request->file('avatar');
    $filename = Str::uuid() . '.jpg';

    // Resize to 200x200, cover crop, convert to JPEG
    $image = Image::read($file)
        ->cover(200, 200)
        ->toJpeg(quality: 85);

    // Store the processed image bytes
    Storage::disk('public')->put("avatars/{$filename}", (string) $image);

    // Optional: also create a thumbnail
    $thumb = Image::read($file)
        ->cover(50, 50)
        ->toJpeg(quality: 75);

    Storage::disk('public')->put("avatars/thumbs/{$filename}", (string) $thumb);

    $request->user()->update(['avatar' => "avatars/{$filename}"]);

    return back()->with('success', 'Avatar uploaded.');
}
```
Security Best Practices
File uploads are one of the most exploited attack vectors in web applications. Following these guidelines keeps your application safe:
Complete Security Checklist
| Threat | Mitigation |
|---|---|
| PHP file execution via upload | Never store uploads inside the public/ web root; use storage/app or S3 |
| MIME spoofing | Use mimes or mimetypes validation (inspects binary); never trust extension alone |
| Path traversal | Never use client filenames directly; always generate your own using Str::uuid() or hash_file() |
| Oversize uploads (DoS) | Enforce max: in validation AND set upload_max_filesize and post_max_size in php.ini |
| Sensitive file exposure | Use temporary signed URLs for private files; never serve them via direct S3 URL |
| Server-side image attacks (ImageMagick) | Use Intervention Image with GD driver; keep ImageMagick policy.xml hardened if used |
| SVG XSS | Sanitize SVGs with a dedicated sanitizer such as enshrined/svg-sanitize (see the sketch below), or reject SVGs entirely from user uploads |
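If you genuinely need to accept SVG uploads, run them through a sanitizer before storing them. A minimal sketch assuming the enshrined/svg-sanitize package is installed (composer require enshrined/svg-sanitize); the field and path names are illustrative:

```php
use enshrined\svgSanitize\Sanitizer;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

$raw = file_get_contents($request->file('logo')->getRealPath());

// Strip scripts, event handlers, and external references from the SVG
$sanitizer = new Sanitizer();
$clean = $sanitizer->sanitize($raw); // returns false if the file cannot be cleaned

abort_if($clean === false, 422, 'The SVG could not be sanitized.');

Storage::disk('public')->put('logos/' . Str::uuid() . '.svg', $clean);
```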
Serving Private Files Through Laravel
For files that require authentication — contracts, invoices, user data — never expose the storage path directly. Route all access through a controller that checks permissions first:
```php
// routes/web.php
Route::get('/documents/{document}/download', [DocumentController::class, 'download'])
    ->middleware('auth')
    ->name('documents.download');

// app/Http/Controllers/DocumentController.php
use Illuminate\Support\Facades\Gate;
use Illuminate\Support\Facades\Storage;

public function download(Document $document)
{
    // Authorise: only the owner can download
    Gate::authorize('view', $document);

    // File stored privately on S3 (not public)
    $path = $document->file_path;

    if (!Storage::disk('s3')->exists($path)) {
        abort(404, 'File not found.');
    }

    return Storage::disk('s3')->download($path, $document->original_name);
}

// Or stream directly without temp files:
public function preview(Document $document)
{
    Gate::authorize('view', $document);

    return response()->stream(function () use ($document) {
        $stream = Storage::disk('s3')->readStream($document->file_path);
        fpassthru($stream);
    }, 200, [
        'Content-Type' => $document->mime_type,
        'Cache-Control' => 'private, max-age=3600',
    ]);
}
```
Chunked Uploads for Large Files
Uploading large files (videos, datasets) in a single HTTP request is fragile — connections time out, PHP memory limits are hit, and users can't resume failed uploads. The solution is chunked uploads: split the file client-side and reassemble it server-side.
```php
public function uploadChunk(Request $request)
{
    $request->validate([
        'file' => 'required|file|max:51200', // 50MB per chunk
        'upload_id' => 'required|uuid',
        'chunk_index' => 'required|integer|min:0',
        'total_chunks' => 'required|integer|min:1',
    ]);

    $uploadId = $request->input('upload_id');
    $chunkIndex = $request->input('chunk_index');
    $totalChunks = (int) $request->input('total_chunks');
    $chunk = $request->file('file');

    // Store each chunk temporarily
    $chunk->storeAs("chunks/{$uploadId}", "part_{$chunkIndex}", 'local');

    // Check if all chunks have arrived
    $storedChunks = Storage::disk('local')->files("chunks/{$uploadId}");

    if (count($storedChunks) === $totalChunks) {
        // Reassemble the chunks, in order, into a single local file
        $finalPath = "uploads/{$uploadId}.bin";
        Storage::disk('local')->makeDirectory('uploads');

        $output = fopen(Storage::disk('local')->path($finalPath), 'wb');

        for ($i = 0; $i < $totalChunks; $i++) {
            $input = fopen(Storage::disk('local')->path("chunks/{$uploadId}/part_{$i}"), 'rb');
            stream_copy_to_stream($input, $output);
            fclose($input);
        }

        fclose($output);

        // Clean up temp chunks
        Storage::disk('local')->deleteDirectory("chunks/{$uploadId}");

        return response()->json(['status' => 'complete', 'path' => $finalPath]);
    }

    return response()->json(['status' => 'chunk_received', 'index' => $chunkIndex]);
}
```
For production chunked uploads, consider Filepond (JavaScript) on the frontend paired with pion/laravel-chunk-upload on the backend. These handle edge cases like chunk ordering, retries, and cleanup that a hand-rolled solution can miss.
Key Takeaways
Laravel's filesystem abstraction makes file handling both powerful and portable. Here's a summary of what to take away from this guide:
What You've Learned
- Flysystem disks give you one consistent API across local, S3, SFTP, and any other backend
- store() generates collision-safe random filenames automatically; storeAs() lets you control the name
- Validation rules like mimes, max, and dimensions are your first line of defence
- Never store uploads in the public web root — route private files through a controller that checks authorisation
- Temporary signed URLs are the right tool for time-limited access to private S3 files
- Intervention Image lets you resize and reformat before storing, saving bandwidth and storage costs
- Use chunked uploads for files >10 MB to avoid timeouts and allow resumable transfers
"Laravel's filesystem abstraction is one of its most underappreciated features. Write your upload logic once, and you can move between local development and S3 production with a single environment variable change."
Whether you're building a simple avatar uploader or a document management system that handles thousands of files daily, Laravel's storage system scales with your needs. Combine disk abstraction, strict validation, and proper authorisation, and you'll ship file upload features that are both developer-friendly and secure.