How to upscale and enhance video quality with FFmpeg: scaling, denoising, stabilization

INEZA Felin-Michel

10 April 2026

TL;DR

FFmpeg upscales video with -vf "scale=1920:1080:flags=lanczos" — Lanczos generally gives the best quality when upscaling. For denoising, hqdn3d reduces grain while preserving edges. For stabilization, vidstab removes camera shake through a two-pass process. Combine all three in a single filter chain for a complete quality enhancement pipeline.

Introduction

Video quality enhancement with FFmpeg goes beyond just changing resolution. True enhancement combines resolution upscaling with intelligent denoising and optional stabilization. Each step targets a different quality problem: soft or pixelated footage, grainy or noisy frames, and shaky camera movement.

This guide covers each technique independently and shows how to combine them.

Scaling algorithms

The scaling algorithm determines how FFmpeg fills in new pixels when upscaling. The choice has a visible effect on output quality.

Algorithm   Speed     Quality   Best for
neighbor    Fastest   Lowest    Pixel art
bilinear    Fast      Low       Speed-critical jobs
bicubic     Medium    Good      General downscaling
lanczos     Slower    Best      Upscaling
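To see the differences for yourself, upscale the same clip once per algorithm and compare matching frames. This sketch synthesizes a small test clip with FFmpeg's testsrc2 source, so no real footage is assumed; the filenames are arbitrary:

```shell
# Generate a 2-second synthetic 360p clip (stand-in for real footage).
ffmpeg -y -v error -f lavfi -i testsrc2=duration=2:size=640x360:rate=30 \
  -c:v libx264 -crf 23 sample_360p.mp4

# Upscale the same clip once per algorithm for a side-by-side comparison.
for algo in neighbor bilinear bicubic lanczos; do
  ffmpeg -y -v error -i sample_360p.mp4 -vf "scale=1280:720:flags=${algo}" \
    -c:v libx264 -crf 20 "upscaled_${algo}.mp4"
done
```

Open the four outputs in a player and step through matching frames; the differences are most visible on diagonal edges and fine texture.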

Upscale to 1080p with Lanczos:

ffmpeg -i input_720p.mp4 -vf "scale=1920:1080:flags=lanczos" -c:v libx264 -crf 20 output_1080p.mp4

Maintain aspect ratio:

ffmpeg -i input.mp4 -vf "scale=1920:-2:flags=lanczos" -c:v libx264 -crf 20 output.mp4

The -2 value tells FFmpeg to compute the height automatically from the source aspect ratio, rounded to a multiple of 2 (libx264 requires even dimensions for yuv420p output).
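You can confirm what -2 computed with ffprobe. The sketch below uses a synthetic 700x394 clip, a size deliberately chosen so the auto-computed height needs rounding:

```shell
# Source with an awkward size, so the auto height isn't already even.
ffmpeg -y -v error -f lavfi -i testsrc2=duration=1:size=700x394:rate=30 \
  -c:v libx264 sample.mp4

ffmpeg -y -v error -i sample.mp4 -vf "scale=1920:-2:flags=lanczos" \
  -c:v libx264 -crf 20 out_scaled.mp4

# Report the actual output dimensions as "width,height".
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height -of csv=p=0 out_scaled.mp4
```

The reported height will always be even, whatever the source proportions were.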

Scale to 4K:

ffmpeg -i input.mp4 -vf "scale=3840:-2:flags=lanczos" -c:v libx264 -crf 18 -preset slow output_4k.mp4

-preset slow tells x264 to spend more time optimizing compression, which matters more at higher resolutions.

Denoising with hqdn3d

The hqdn3d filter performs high-quality 3D denoising: "3D" because it smooths both spatially (within each frame) and temporally (across frames), removing grain and noise while preserving edge detail.

ffmpeg -i noisy_video.mp4 -vf "hqdn3d=4:3:6:4.5" -c:v libx264 -crf 20 denoised.mp4

The four parameters, in order: luma_spatial:chroma_spatial:luma_temporal:chroma_temporal. The spatial strengths smooth within a frame; the temporal strengths average across frames. The values 4:3:6:4.5 are the filter's defaults.

Stronger denoising:

ffmpeg -i grainy.mp4 -vf "hqdn3d=10:8:15:10" -c:v libx264 -crf 20 clean.mp4

Higher values remove more noise but may blur fine detail. Test with different settings before processing the full video.
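A quick way to A/B strengths is to render a short sample at each setting and compare. The clip here is synthesized with testsrc2 plus FFmpeg's noise filter as a stand-in for genuinely grainy footage; in practice you would cut a sample from your source with -t instead:

```shell
# Synthesize a noisy 3-second clip (testsrc2 + temporal noise).
ffmpeg -y -v error -f lavfi \
  -i "testsrc2=duration=3:size=640x360:rate=30,noise=alls=20:allf=t" \
  -c:v libx264 -crf 20 noisy_sample.mp4

# Render the sample at light, default, and strong denoise strengths.
for strength in 2:1.5:3:2.5 4:3:6:4.5 10:8:15:10; do
  out="denoised_$(echo "$strength" | tr : _).mp4"
  ffmpeg -y -v error -i noisy_sample.mp4 -vf "hqdn3d=${strength}" \
    -c:v libx264 -crf 20 "$out"
done
```

Compare the three outputs against the noisy sample and pick the weakest setting that still cleans up the grain.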

Light denoising (preserve detail):

ffmpeg -i video.mp4 -vf "hqdn3d=2:1.5:3:2.5" -c:v libx264 -crf 20 output.mp4

Stabilization with vidstab

The vidstab filter requires a two-pass approach: first analyze the motion, then apply stabilization.

Installation: vidstab support depends on your FFmpeg build; check with ffmpeg -filters | grep vidstab. On macOS, brew install ffmpeg includes it.

Pass 1: Analyze motion

ffmpeg -i shaky_video.mp4 -vf "vidstabdetect=stepsize=6:shakiness=8:accuracy=9:result=transform.trf" -f null -

stepsize=6: granularity of the motion search, in pixels (lower is more precise but slower). shakiness=8 (1-10): how much camera shake to expect. accuracy=9 (1-15): detection accuracy. The -f null - discards the video output; this pass only needs to write the .trf file.

Pass 2: Apply stabilization

ffmpeg -i shaky_video.mp4 -vf "vidstabtransform=input=transform.trf:zoom=1:smoothing=10" -c:v libx264 -crf 20 stabilized.mp4

zoom=1 adds 1% zoom to compensate for edge cropping that stabilization causes. Increase if black borders appear. smoothing=10 controls how smooth the camera path becomes (higher = smoother).

More aggressive stabilization:

ffmpeg -i video.mp4 -vf "vidstabtransform=input=transform.trf:zoom=3:smoothing=30:optzoom=1" -c:v libx264 -crf 20 stable.mp4

optzoom=1 automatically optimizes zoom to avoid borders.
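The two passes can be wrapped in a short script. The input below is synthesized as a stand-in (substitute your actual shaky footage), and a guard skips cleanly on FFmpeg builds without libvidstab:

```shell
# Stand-in input; replace with your real shaky footage in practice.
ffmpeg -y -v error -f lavfi -i testsrc2=duration=2:size=640x360:rate=30 \
  -c:v libx264 shaky_input.mp4

if ffmpeg -hide_banner -filters 2>/dev/null | grep -q vidstabdetect; then
  # Pass 1: analyze motion, writing transform.trf.
  ffmpeg -y -v error -i shaky_input.mp4 \
    -vf "vidstabdetect=shakiness=8:accuracy=9:result=transform.trf" -f null -
  # Pass 2: apply the recorded transforms.
  ffmpeg -y -v error -i shaky_input.mp4 \
    -vf "vidstabtransform=input=transform.trf:zoom=1:smoothing=10" \
    -c:v libx264 -crf 20 stabilized.mp4
else
  echo "vidstab not available in this FFmpeg build; skipping"
fi
```

Keep the two invocations pointed at the same input file: the transforms in transform.trf only make sense for the footage they were detected on.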

Combined quality enhancement pipeline

Run all three operations together in a single filter chain. The vidstabdetect pass from the previous section must already have produced transform.trf:

ffmpeg -i source.mp4 \
  -vf "hqdn3d=4:3:6:4.5,vidstabtransform=input=transform.trf:zoom=1:smoothing=10,scale=1920:-2:flags=lanczos" \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy \
  enhanced.mp4

Apply this order: denoise first (so noise is not smeared or sharpened by later steps), then stabilize, then scale. Stabilization runs before scaling because the motion data in transform.trf is tied to the resolution it was detected at; scaling last also avoids upscaling residual noise.
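The full workflow, detect pass plus combined chain, fits in one runnable sketch. The input is synthesized here so the script is self-contained, vidstab availability is guarded, and -preset medium stands in for slow to keep the demo quick; one caveat worth repeating is that transform.trf is resolution-dependent, so vidstabtransform is placed before scale:

```shell
# Synthesized stand-in for source.mp4; use your real footage in practice.
ffmpeg -y -v error -f lavfi -i testsrc2=duration=2:size=640x360:rate=30 \
  -c:v libx264 source.mp4

if ffmpeg -hide_banner -filters 2>/dev/null | grep -q vidstabdetect; then
  # Detect pass: must run before the combined chain.
  ffmpeg -y -v error -i source.mp4 \
    -vf "vidstabdetect=shakiness=8:accuracy=9:result=transform.trf" -f null -
  # Denoise -> stabilize (at source resolution) -> upscale.
  # (-preset slow in production; medium here for a quicker demo run.)
  ffmpeg -y -v error -i source.mp4 \
    -vf "hqdn3d=4:3:6:4.5,vidstabtransform=input=transform.trf:zoom=1:smoothing=10,scale=1920:-2:flags=lanczos" \
    -c:v libx264 -crf 18 -preset medium -c:a copy enhanced.mp4
else
  echo "vidstab not available; skipping stabilization demo"
fi
```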

Sharpening filter

If footage looks soft rather than noisy, sharpening can help:

ffmpeg -i video.mp4 -vf "unsharp=5:5:1.5:5:5:0.5" -c:v libx264 -crf 20 sharpened.mp4

Parameters: lx:ly:la:cx:cy:ca (luma/chroma matrix sizes and amounts)
lx:ly — luma matrix size in pixels (odd values, 3-23)
la — luma amount (positive = sharpen, negative = blur)
cx:cy:ca — the chroma equivalents

For light sharpening: unsharp=3:3:0.5:3:3:0.0
For strong sharpening: unsharp=5:5:2.5:5:5:0.0
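Before committing to a strength, you can render original and sharpened frames side by side in a single file using split and hstack; the input is synthesized here and the output name is arbitrary:

```shell
# Stand-in input; substitute your soft footage in practice.
ffmpeg -y -v error -f lavfi -i testsrc2=duration=2:size=640x360:rate=30 \
  -c:v libx264 soft.mp4

# Left half: original. Right half: sharpened. Step through frames to compare.
ffmpeg -y -v error -i soft.mp4 \
  -filter_complex "[0:v]split[a][b];[b]unsharp=5:5:1.5:5:5:0.0[s];[a][s]hstack" \
  -c:v libx264 -crf 20 sharpen_compare.mp4
```

The output is twice the source width, so any halo or ringing from over-sharpening shows up immediately against the untouched left half.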

Performance considerations

Enhancement operations are compute-heavy: a denoise + upscale pass on a 10-minute 1080p video can take anywhere from a few minutes to well over the video's own duration, depending on hardware, filters, and preset.

Use -preset to balance encoding speed against file size: faster presets (ultrafast through fast) encode quickly but compress less efficiently, while slower presets (slow, veryslow) produce smaller files at the same CRF.

For batch processing, GNU parallel can process multiple files concurrently:

parallel ffmpeg -i {} -vf "scale=1920:-2:flags=lanczos" -c:v libx264 -crf 20 enhanced_{/} ::: *.mp4

{} expands to each input path and {/} to its basename, so outputs are written as enhanced_<name> alongside the sources.
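When GNU parallel isn't installed, a plain shell loop does the same work sequentially. One input is synthesized here so the loop has something to process; the out/ directory and naming are arbitrary choices:

```shell
# Synthesize one input so the loop is demonstrable without real files.
ffmpeg -y -v error -f lavfi -i testsrc2=duration=1:size=640x360:rate=30 \
  -c:v libx264 sample_clip.mp4

mkdir -p out
for f in *.mp4; do
  [ -e "$f" ] || continue   # skip the literal glob if nothing matched
  ffmpeg -y -v error -i "$f" -vf "scale=1920:-2:flags=lanczos" \
    -c:v libx264 -crf 20 "out/enhanced_${f}"
done
```

Writing outputs to a separate directory keeps re-runs from re-processing already-enhanced files.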

Connecting to AI video upscaling APIs

For AI-powered upscaling, which can outperform FFmpeg's filters on low-quality or damaged footage, specialized APIs exist alongside FFmpeg's tools.

WaveSpeedAI, for example, offers models that use neural-network upscaling rather than algorithmic interpolation:

POST https://api.wavespeed.ai/api/v2/wavespeed-ai/video-enhance
Authorization: Bearer {{WAVESPEED_API_KEY}}
Content-Type: application/json

{
  "video_url": "https://storage.example.com/source-video.mp4",
  "scale": 2,
  "enhance": true
}

Test this with Apidog before integrating. Useful assertions: the status code is 200, and the response body contains an id field to poll with.

Poll the status endpoint for completion, then compare the AI-upscaled output against FFmpeg’s Lanczos output. AI upscaling handles textures and fine detail better than algorithmic methods; FFmpeg is faster and free.

Use FFmpeg for standard quality work and API-based AI upscaling for footage where quality matters most.

FAQ

Is Lanczos better than bicubic for all cases?
For upscaling, yes. For downscaling, bicubic is often faster with comparable quality. Lanczos is computationally more expensive.

Does vidstab work on phone footage?
Yes. Phone footage often benefits most from stabilization. The shakiness parameter should be set high (8-10) for handheld phone video.

How much zoom is needed to hide stabilization borders?
Typically 3-8% depending on how shaky the source is. Set optzoom=1 to let FFmpeg calculate it automatically.

Can FFmpeg enhance low-resolution historical footage?
FFmpeg’s filters help but have limits. AI-based upscaling tools (like ESRGAN or specialized video enhancement APIs) produce significantly better results on severely degraded footage.

Does denoising slow down playback?
No. Denoising is a processing step during conversion, not a real-time effect. The output video plays normally.
