Working with the Heaviside Function, Part 1 (Machine Learning 2024)

Monodeep Mukherjee
2 min read · Mar 28, 2024
1. Sigmoidal approximations of a nonautonomous neural network with infinite delay and Heaviside function (arXiv)

Authors: Peter E. Kloeden, V. M. Villarragut

Abstract: In this paper, we approximate a nonautonomous neural network with infinite delay and a Heaviside signal function by neural networks with sigmoidal signal functions. We show that the solutions of the sigmoidal models converge to those of the Heaviside inclusion as the sigmoidal parameter vanishes. In addition, we prove the existence of pullback attractors in both cases, and the convergence of the attractors of the sigmoidal models to those of the Heaviside inclusion.
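
To make the limiting behaviour concrete: the abstract's key object is a family of sigmoidal signal functions that steepen into the Heaviside step as a parameter goes to zero. Below is a minimal numerical sketch of that idea; the logistic sigmoid with steepness parameter eps is my assumption for illustration, since the paper's actual signal functions and delay dynamics are not specified in the abstract.

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic sigmoid

def heaviside(x):
    # Heaviside step: 0 for x < 0, 1 for x > 0, 1/2 at x = 0 by convention.
    return np.heaviside(x, 0.5)

def sigmoid_eps(x, eps):
    # Logistic sigmoid steepened by eps; for x != 0 it converges
    # pointwise to the Heaviside step as eps -> 0.
    return expit(x / eps)

# Measure the largest gap on a grid that excludes the jump at x = 0.
x = np.concatenate([np.linspace(-1.0, -0.01, 100), np.linspace(0.01, 1.0, 100)])
for eps in (1.0, 0.1, 0.01, 0.001):
    gap = np.max(np.abs(sigmoid_eps(x, eps) - heaviside(x)))
    print(f"eps = {eps:<6}  max gap away from 0: {gap:.2e}")
```

The gap shrinks rapidly at any fixed point away from the jump, but never uniformly across it: within an eps-width layer around x = 0 the sigmoid is far from both 0 and 1, which is consistent with the paper treating the limit as a Heaviside differential inclusion rather than a plain function limit.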

2. Time-Optimal Control via Heaviside Step-Function Approximation (arXiv)

Authors: Kai Pfeiffer, Quang-Cuong Pham

Abstract: Least-squares programming is a popular tool in robotics due to its simplicity and availability of open-source solvers. However, certain problems like sparse programming in the ℓ0- or ℓ1-norm for time-optimal control are not equivalently solvable. In this work, we propose a non-linear hierarchical least-squares programming (NL-HLSP) for time-optimal control of non-linear discrete dynamic systems. We use a continuous approximation of the Heaviside step function with an additional term that avoids vanishing gradients. We use a simple discretization method by keeping states and controls piece-wise constant between discretization steps. This way, we obtain a comparatively easily implementable NL-HLSP in contrast to direct transcription approaches of optimal control. We show that the NL-HLSP indeed recovers the discrete time-optimal control in the limit for resting goal points. We confirm the results in simulation for linear and non-linear control scenarios.
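
The phrase "an additional term that avoids vanishing gradients" is the interesting detail here: a sharp sigmoid approximating the step has a derivative that decays exponentially away from the jump, which stalls gradient-based solvers. The sketch below illustrates the principle with a steep logistic plus a small linear term; the steepness k and slope alpha are illustrative choices of mine, not the paper's actual construction.

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic sigmoid

def smooth_step(x, k=50.0, alpha=1e-3):
    # Sharp sigmoid approximating the Heaviside step, plus a small
    # linear term alpha * x whose constant slope keeps the gradient
    # from vanishing far from the jump.
    return expit(k * x) + alpha * x

def smooth_step_grad(x, k=50.0, alpha=1e-3):
    s = expit(k * x)
    # d/dx expit(k x) = k * s * (1 - s): exponentially small in |x|.
    # The + alpha floor is what a descent method can still follow.
    return k * s * (1.0 - s) + alpha

for x in (-2.0, -0.1, 0.0, 0.1, 2.0):
    print(f"x = {x:5.1f}   step ~ {smooth_step(x):7.4f}   grad = {smooth_step_grad(x):.6f}")
```

Without the alpha term, the gradient at x = ±2 would be on the order of 1e-42, effectively invisible to a numerical solver; with it, the solver always sees a slope of at least alpha, at the cost of a small distortion of the step values.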

Monodeep Mukherjee

Universe enthusiast. Writes about computer science, AI, physics, neuroscience, technology, and front-end and back-end development.