avfilter/dnn_backend_torch: add CUDA/ROCm device support

Add support for CUDA and ROCm (AMD GPU) devices in the LibTorch DNN
backend.

This works for both NVIDIA CUDA and AMD ROCm, as PyTorch exposes ROCm
through the CUDA-compatible API.

Usage:

./ffmpeg -i input.mp4 -vf scale=224:224,format=rgb24,dnn_processing=dnn_backend=torch:model=sr_model_torch.pt:device=cuda output.mp4

Reviewed-by: Guo Yejun <yejun.guo@intel.com>
Signed-off-by: younengxiao <steven.xiao@amd.com>
Author: stevxiao
Date: 2026-02-05 11:07:10 -05:00
Committed-by: Guo Yejun
parent 924cc51ffe
commit a077da895b

@@ -439,6 +439,13 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
 #else
         at::detail::getXPUHooks().initXPU();
 #endif
+    } else if (device.is_cuda()) {
+        // CUDA device - works for both NVIDIA CUDA and AMD ROCm (which uses a CUDA-compatible API)
+        if (!torch::cuda::is_available()) {
+            av_log(ctx, AV_LOG_ERROR, "CUDA/ROCm is not available\n");
+            goto fail;
+        }
+        av_log(ctx, AV_LOG_INFO, "Using CUDA/ROCm device: %s\n", device_name);
     } else if (!device.is_cpu()) {
         av_log(ctx, AV_LOG_ERROR, "Not supported device:\"%s\"\n", device_name);
         goto fail;
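
The device-validation flow the hunk adds can be sketched as follows. This is a hypothetical Python rendering for illustration only (the real backend is C++ and also handles XPU and other device strings via `torch::Device`); `select_device` and its parameters are not part of FFmpeg. The key point mirrored here is that a single `cuda` branch covers both NVIDIA and ROCm builds, because PyTorch's ROCm build answers `torch::cuda::is_available()` through the same CUDA-compatible API:

```python
def select_device(device_name: str, cuda_available: bool) -> str:
    """Hypothetical sketch of the backend's device check.

    cuda_available stands in for torch::cuda::is_available(), which is
    true on both NVIDIA CUDA and AMD ROCm builds of PyTorch.
    """
    name = device_name.lower()
    if name == "cpu":
        # CPU is always accepted; no runtime probe needed.
        return name
    if name.startswith("cuda"):
        # Covers "cuda" and indexed forms like "cuda:0".
        if not cuda_available:
            raise RuntimeError("CUDA/ROCm is not available")
        return name
    # Anything else is rejected, matching the backend's
    # "Not supported device" error path.
    raise ValueError(f"Not supported device: {device_name!r}")
```

On a ROCm system the user still passes `device=cuda` on the FFmpeg command line, exactly as in the usage example above.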