PyTorch torch.nn.LeakyReLU Function
torch.nn.LeakyReLU is PyTorch's Leaky ReLU activation function.
Instead of zeroing negative inputs, it scales them by a small positive slope, so they keep a small gradient and the "dead neuron" problem is avoided.
Function Definition
torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)
Parameters
negative_slope: slope applied to negative inputs; default 0.01
inplace: if True, performs the operation in place on the input tensor; default False
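A minimal sketch of what inplace=True does: the input tensor is overwritten rather than a new one being allocated, which saves memory but destroys the original values.

import torch
import torch.nn as nn

# inplace=True modifies x directly instead of returning a fresh tensor
x = torch.tensor([-1.0, 1.0])
nn.LeakyReLU(negative_slope=0.01, inplace=True)(x)
print(x.tolist())  # [-0.01, 1.0] -- x itself was changed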
Formula
f(x) = x                    if x > 0
f(x) = negative_slope * x   if x <= 0
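The slope shows up directly in the gradient; a minimal autograd check (keeping inputs away from x = 0, where the subgradient convention matters):

import torch
import torch.nn as nn

# The derivative is 1 for x > 0 and negative_slope for x < 0
lrelu = nn.LeakyReLU(negative_slope=0.1)
x = torch.tensor([-2.0, -1.0, 1.0, 2.0], requires_grad=True)
lrelu(x).sum().backward()
print(x.grad.tolist())  # [0.1, 0.1, 1.0, 1.0]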
Usage Examples
Example 1: Basic Usage
import torch
import torch.nn as nn
lrelu = nn.LeakyReLU(negative_slope=0.1)
x = torch.tensor([-2.0, -1.0, 0.0, 1.0, 2.0])
output = lrelu(x)
print("Input:", x.tolist())
print("Output:", output.tolist())
print("Negative inputs are scaled by the slope instead of zeroed")
Example 2: Use in a GAN
import torch
import torch.nn as nn
# LeakyReLU (a slope of 0.2 is a common choice) is widely used in GAN generators
generator = nn.Sequential(
    nn.Linear(100, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 512),
    nn.LeakyReLU(0.2),
    nn.Linear(512, 784),
    nn.Tanh()
)
z = torch.randn(4, 100)
output = generator(z)
print("Input:", z.shape, "-> Output:", output.shape)
Example 3: Preventing Dead Neurons
import torch
import torch.nn as nn
# Compare ReLU with LeakyReLU
relu = nn.ReLU()
lrelu = nn.LeakyReLU()
# Shift the inputs so almost all values are negative
x = torch.randn(10, 100) - 5
# ReLU zeroes out nearly every value
print("Nonzero fraction after ReLU:", (relu(x) != 0).float().mean().item())
# LeakyReLU keeps a small non-zero output (and gradient) instead
print("Nonzero fraction after LeakyReLU:", (lrelu(x) != 0).float().mean().item())
Use Cases
- GANs: generators and discriminators
- Preventing dead neurons
- Sparse networks
