When we use LQR, we penalize any deviations from an upright riding position. Q penalizes errors in the state. The state vector has lean, lean rate, and delta as x1, x2, and x3 respectively. So, <math>\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \text{lean} \\ \text{lean rate} \\ \delta \end{bmatrix}</math>. Thus, our total error function is:

<math>J = \int_0^\infty \left( \vec{x}^\intercal Q\vec{x} + \vec{u}^\intercal R\vec{u} \right)\,dt</math>

where (when we use it) <math>\vec{u}</math> is <math>[\dot{\delta}]</math> and <math>R</math> is <math>[0.01]</math>. Also, <math>Q</math> is: <math>Q = \begin{bmatrix} 1&0&0\\0&0.01&0\\0&0&0.01\end{bmatrix}</math>. Your mileage may vary; try changing these weights and seeing what happens.
Ryan thinks we really have to ensure that each state has a nonzero cost, because LQR will eventually minimize cost. If we could somehow get negative cost, LQR would just give us that state, which sucks.
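For example, here is a minimal MATLAB sketch of the weights quoted above, with an eigenvalue check that every state and input carries a positive cost; the assert-based check is just one way to encode Ryan's point, not something from our actual code:

<pre>
% Cost matrices from above: penalize lean heavily, lean rate and delta lightly.
Q = diag([1, 0.01, 0.01]);   % state cost for [lean; lean rate; delta]
R = 0.01;                    % input cost for the steer rate (delta dot)

% Sanity check: every state and input should carry a positive cost,
% otherwise the optimizer could "reward" letting that state grow.
assert(all(eig(Q) > 0), 'Q must be positive definite');
assert(all(eig(R) > 0), 'R must be positive definite');
</pre>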
So, what LQR does is say we must use the control law <math>\vec{u} = -K\vec{x}</math>, where you get <math>K</math> in MATLAB with the code:
[K, S, e] = lqr(A, B, Q, R);
where the system is defined as <math>\dot{\vec{x}} = A\vec{x} + B\vec{u}</math>.
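Putting it together, a minimal MATLAB sketch might look like the following; the A and B matrices are made-up placeholders standing in for the bike's actual linearized dynamics, while Q, R, and the lqr call are the ones described above:

<pre>
% Placeholder linearized dynamics for x = [lean; lean rate; delta], u = [delta dot].
% Replace A and B with the real linearization of the bike model.
A = [0 1 0;
     9 0 2;     % made-up numbers, for illustration only
     0 0 0];
B = [0; 1; 1];

Q = diag([1, 0.01, 0.01]);
R = 0.01;

% K is the gain, S the Riccati solution, e the closed-loop eigenvalues.
[K, S, e] = lqr(A, B, Q, R);

% The closed-loop system xdot = (A - B*K)*x should be stable,
% i.e. all entries of e should have negative real part.
disp(e)
</pre>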
On the bike, it is possible that we can use the control law <math>\dot{\delta} = -K\vec{x}</math>, i.e. command a steer rate from the current state.
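A sketch of one timestep of that loop (the gain values and sensor readings below are made up for illustration; K would really come from the lqr call above, and the state would come from our sensors or estimator):

<pre>
% K comes from the lqr() call above; shown here with made-up numbers.
K = [10.0 2.0 0.5];

% Hypothetical sensor / estimator readings for one timestep.
lean      = 0.05;    % rad
lean_rate = -0.10;   % rad/s
delta     = 0.02;    % rad (steer angle)

x = [lean; lean_rate; delta];   % current state estimate
delta_dot_cmd = -K * x;          % LQR control law u = -K*x: commanded steer rate
</pre>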
What's good about LQR? You can tune the cost matrices (Q and R) instead of tuning the gains directly, and the cost matrices have easier intuition (they say how much we want to avoid each kind of error).