PyTorch torch.nn.HuberLoss Function
torch.nn.HuberLoss is the Huber loss function in PyTorch.
It combines MSE and MAE: quadratic for small errors (stable gradients near zero) and linear for large errors (robust to outliers).
Function Definition
torch.nn.HuberLoss(reduction='mean', delta=1.0)
Parameters:
- reduction: how the element-wise losses are aggregated: 'none', 'mean' (default), or 'sum'
- delta: the threshold at which the loss switches from the quadratic (MSE-like) branch to the linear (MAE-like) branch; default 1.0
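A quick sketch of how the reduction parameter behaves (the tensor values here are arbitrary examples, not from the original text):

```python
import torch
import torch.nn as nn

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, 2.0, 5.0])

# 'none' keeps the per-element losses; 'mean' and 'sum' aggregate them
loss_none = nn.HuberLoss(reduction='none')(pred, target)
loss_mean = nn.HuberLoss(reduction='mean')(pred, target)
loss_sum = nn.HuberLoss(reduction='sum')(pred, target)

print(loss_none)         # tensor of shape (3,): [0.125, 0.0, 1.5]
print(loss_mean.item())  # equals loss_none.mean()
print(loss_sum.item())   # equals loss_none.sum()
```

With delta=1.0 the errors 0.5 and 0.0 fall on the quadratic branch, while the error 2.0 falls on the linear branch.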
Mathematical Definition
L(y, ŷ) = 0.5 * (y - ŷ)²,         if |y - ŷ| ≤ δ
L(y, ŷ) = δ * |y - ŷ| - 0.5 * δ², if |y - ŷ| > δ
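The two-branch formula can be verified directly against nn.HuberLoss; huber_manual below is a hypothetical helper written for this check, not part of the PyTorch API:

```python
import torch
import torch.nn as nn

def huber_manual(pred, target, delta=1.0):
    # Implements the two cases above with torch.where
    err = torch.abs(pred - target)
    quad = 0.5 * err ** 2                 # |y - ŷ| <= delta branch
    lin = delta * err - 0.5 * delta ** 2  # |y - ŷ| > delta branch
    return torch.where(err <= delta, quad, lin).mean()

pred = torch.tensor([1.0, 2.0, 3.0, 10.0])
target = torch.tensor([1.5, 2.5, 3.5, 8.0])

# Matches the built-in loss (default reduction='mean')
print(huber_manual(pred, target).item())
print(nn.HuberLoss()(pred, target).item())
```

Both lines print the same value: the three small errors (0.5) use the quadratic branch, the outlier error (2.0) uses the linear branch.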
Usage Examples
Example 1: Basic Usage
import torch
import torch.nn as nn
criterion = nn.HuberLoss(delta=1.0)
pred = torch.tensor([1.0, 2.0, 3.0, 10.0])
target = torch.tensor([1.5, 2.5, 3.5, 8.0])
loss = criterion(pred, target)
print("Huber Loss:", loss.item())
# Compare with MSE: the outlier (10.0 vs 8.0) inflates MSE more than Huber
mse = nn.MSELoss()(pred, target)
print("MSE Loss:", mse.item())
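A short sketch of how delta affects the result on the same data: the smaller delta is, the earlier the loss switches to the linear branch, so the outlier contributes less.

```python
import torch
import torch.nn as nn

pred = torch.tensor([1.0, 2.0, 3.0, 10.0])
target = torch.tensor([1.5, 2.5, 3.5, 8.0])

# Smaller delta -> the outlier (error = 2.0) is penalized linearly,
# keeping the total loss lower; larger delta behaves more like MSE
losses = {}
for delta in (0.5, 1.0, 2.0):
    losses[delta] = nn.HuberLoss(delta=delta)(pred, target).item()
    print(f"delta={delta}: {losses[delta]:.4f}")
```

On this data the loss grows monotonically with delta, because a larger delta pushes more of the outlier's error onto the quadratic branch.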
Example 2: Regression Training
import torch
import torch.nn as nn
model = nn.Linear(10, 1)
criterion = nn.HuberLoss()
optimizer = torch.optim.Adam(model.parameters())
# A single training step on random data
X = torch.randn(100, 10)
y = torch.randn(100, 1)
model.train()
optimizer.zero_grad()
output = model(X)
loss = criterion(output, y)
loss.backward()
optimizer.step()
print("Training loss:", loss.item())
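A related note: HuberLoss is closely tied to nn.SmoothL1Loss. With matching thresholds, HuberLoss(delta=d) equals d times SmoothL1Loss(beta=d), which the following sketch checks:

```python
import torch
import torch.nn as nn

pred = torch.tensor([1.0, 2.0, 3.0, 10.0])
target = torch.tensor([1.5, 2.5, 3.5, 8.0])

delta = 2.0
huber = nn.HuberLoss(delta=delta)(pred, target)
smooth_l1 = nn.SmoothL1Loss(beta=delta)(pred, target)

# HuberLoss(delta=d) == d * SmoothL1Loss(beta=d)
print(huber.item(), (delta * smooth_l1).item())
```

This is why SmoothL1Loss is often used interchangeably with Huber loss in object-detection code.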
Use Cases
- Regression tasks: data containing outliers
- Object detection: bounding-box regression
- Robust training: noisy data

PyTorch torch.nn Reference Manual