Proximal Gradient Descent
The proximal gradient method is a variant of gradient descent used mainly to solve optimization problems whose objective function is non-differentiable. If the objective function is not differentiable at some points, the gradient does not exist there, so the traditional gradient descent method cannot be applied.
Instead of requiring a gradient of the non-smooth part, the proximal gradient method takes an ordinary gradient step on the smooth part and then maps the result to a nearby point through a proximal operator. It is commonly used to solve L1-regularized problems.
Related concepts
Assume $F(x) = f_0(x) + f_1(x)$, where $f_0$ and $f_1$ are convex functions and $f_1$ is a smooth function. Then the proximal gradient update is

$$x^{k+1} = \operatorname{prox}_{t f_0}\big(x^k - t\,\nabla f_1(x^k)\big),$$

where the proximal operator is defined as

$$\operatorname{prox}_{t f_0}(y) = \arg\min_x \Big( f_0(x) + \tfrac{1}{2t}\,\|x - y\|^2 \Big)$$

and $t > 0$ is the step size.
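As an illustration, here is a minimal sketch of the proximal operator for the L1 norm, the case that makes this method useful for L1 regularization; its closed form is element-wise soft thresholding. The function name `soft_threshold` and the use of NumPy are assumptions for this example, not part of the original text.

```python
import numpy as np

def soft_threshold(y, thresh):
    """Proximal operator of f0(x) = thresh * ||x||_1.

    Solves argmin_x ( thresh * ||x||_1 + 0.5 * ||x - y||^2 )
    element-wise; the solution is the classic soft-thresholding rule.
    """
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)
```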
Proximal Gradient Method Process
For the objective function $F(x) = f_0(x) + f_1(x)$, where $f_0$ is non-smooth and $f_1$ is smooth, the iteration is defined as follows (a concrete sketch of this iteration is given after the list below):

$$x^{r+1} = \operatorname{prox}_{t f_0}\big(x^r - t\,\nabla f_1(x^r)\big), \qquad r = 0, 1, 2, \dots$$

- When $f_0 = 0$, the formula reduces to the gradient descent method.
- When $f_1 = 0$, the formula reduces to the proximal point method.
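To make the iteration above concrete, below is a minimal sketch for the LASSO problem, taking $f_1(x) = \tfrac{1}{2}\|Ax - b\|^2$ as the smooth part and $f_0(x) = \lambda\|x\|_1$ as the non-smooth part. The function names, fixed step size, and iteration count are illustrative assumptions; the prox step reuses the soft-thresholding rule shown earlier, redefined here so the snippet runs on its own.

```python
import numpy as np

def soft_threshold(y, thresh):
    # Proximal operator of thresh * ||.||_1 (element-wise soft thresholding).
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

def proximal_gradient_lasso(A, b, lam, t, n_iters=500):
    """Proximal gradient (ISTA-style) iteration for
    F(x) = 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for r in range(n_iters):
        grad = A.T @ (A @ x - b)                   # gradient step on the smooth part f1
        x = soft_threshold(x - t * grad, t * lam)  # prox step on the non-smooth part f0
    return x

# Usage example on random data; a safe step size is t <= 1 / L,
# where L is the largest eigenvalue of A^T A (the Lipschitz constant of grad f1).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
t = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = proximal_gradient_lasso(A, b, lam=0.1, t=t)
```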
Special cases of the proximal gradient method
- Projected Landweber;
- Alternating projection;
- Alternating direction method of multipliers;
- Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), sketched below.
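For the FISTA special case, the only change relative to the ISTA sketch above is that the gradient and prox steps are taken at an extrapolated point, which accelerates the convergence rate from $O(1/r)$ to $O(1/r^2)$. The following is a hedged sketch for the same LASSO setting, with the same assumed helper names.

```python
import numpy as np

def soft_threshold(y, thresh):
    # Proximal operator of thresh * ||.||_1.
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

def fista_lasso(A, b, lam, t, n_iters=500):
    """FISTA for F(x) = 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    Same gradient/prox steps as ISTA, evaluated at an extrapolated point y."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    s = 1.0
    for r in range(n_iters):
        x_next = soft_threshold(y - t * A.T @ (A @ y - b), t * lam)
        s_next = (1.0 + np.sqrt(1.0 + 4.0 * s * s)) / 2.0
        y = x_next + ((s - 1.0) / s_next) * (x_next - x)   # momentum / extrapolation step
        x, s = x_next, s_next
    return x
```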