Tuning PID Controller Parameters with Backpropagation
Ivan Kazakov

Hi, Ivan,

Thanks for sharing the article. Using gradient descent and a simulation to tune the PID gains is a very good idea; I am quite inspired.

However, after reading your post, it was not clear to me what your error function is. I initially thought you were minimizing the “steer” value coming from the PID output, which would be a minimum-control optimization and would not make sense. Then I had a look at your code on GitHub, and now I just want to confirm with you that I have understood it correctly.

The error function being optimized is f(steer) = cross_track_error² (steer is hidden inside this function, since it goes through the complex dynamics of the car, but we can observe the end result, the cross_track_error). You then optimize this function with respect to the Kp, Ki, and Kd parameters.
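Just to check my reading, here is a minimal sketch of how I picture the objective (Simulator, crossTrackError, step, and control are hypothetical names I made up, not taken from your repo):

double runLoss(PID &pid, Simulator &sim, int steps)
{
    double loss = 0.0;
    for (int i = 0; i < steps; ++i)
    {
        double cte = sim.crossTrackError();  // the observable end result
        double steer = pid.control(cte);     // steer depends on Kp, Ki, Kd
        sim.step(steer);                     // complex car dynamics hidden here
        loss += cte * cte;                   // f(steer) = cross_track_error²
    }
    return loss;
}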

So, for example, for optimizing Kp, the gradient of f(steer) with respect to Kp would be (df/dKp) = (df/dsteer)*(dsteer/dKp) = dE * dx.

dE and dx are the symbols used in the following gradient descent update from your code:

void PID::adjust(double &Kx, double dx, double dE)
{
    double partialDKx = Kx * dx * dE * learnRate_;
    Kx -= partialDKx;
}

What is not clear to me is why Kx (in this case Kp) also appears in the partial derivative term.
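For comparison, a plain gradient descent step with the same symbols would, as far as I understand it, look like this (adjustPlain is just my hypothetical name for it):

void PID::adjustPlain(double &Kx, double dx, double dE)
{
    // standard update: Kx -= learning_rate * (df/dKx), with df/dKx = dE * dx
    double partialDKx = dx * dE * learnRate_;
    Kx -= partialDKx;
}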
