CSCE 574 Robotics Notes

Hydraulic Drive

Shape Memory Alloy

Pneumatic Drive

Electric Drive

For a differential drive robot whose wheels are a distance d apart and have velocities V_l and V_r, estimate the linear velocity V and the angular velocity \omega.

\|V\| = {V_r + V_l \over 2}
V_x = \|V\| \cos \theta
V_y = \|V\| \sin \theta
\omega = {V_r - V_l \over d}
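These equations can be checked with a short sketch (Python here; the function name and the world-frame velocity components are my own additions):

```python
import math

def diff_drive_velocities(v_l, v_r, d, theta):
    """Body speed and turn rate of a differential drive robot.

    v_l, v_r: left/right wheel velocities; d: wheel separation;
    theta: current heading in radians.
    """
    v = (v_r + v_l) / 2.0          # linear speed ||V||
    omega = (v_r - v_l) / d        # angular velocity
    v_x = v * math.cos(theta)      # world-frame components
    v_y = v * math.sin(theta)
    return v, v_x, v_y, omega

# Equal wheel speeds give straight-line motion; opposite speeds spin in place.
v, vx, vy, w = diff_drive_velocities(1.0, 1.0, 0.5, 0.0)
```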






Laser scanner


Stereo Camera

Frontier Based Exploration Algorithm

  1. Go to nearest unexplored cell (that is not an obstacle).
  2. Repeat.
  3. Profit!
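Step 1 amounts to a breadth-first search over the occupancy grid; a minimal sketch, assuming cells labeled free/obstacle/unknown (the labels are illustrative):

```python
from collections import deque

FREE, OBSTACLE, UNKNOWN = 0, 1, 2  # hypothetical cell labels

def nearest_unexplored(grid, start):
    """BFS from `start` through free space to the closest UNKNOWN cell.

    grid: list-of-lists occupancy grid; start: (row, col).
    Returns the nearest frontier cell, or None if all reachable cells are explored.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == UNKNOWN:
            return (r, c)               # closest unexplored cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and grid[nr][nc] != OBSTACLE):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None
```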

Higher exploitation (localization) means higher map accuracy but lower mapping efficiency, whereas higher exploration means higher mapping efficiency but lower accuracy.

Generalized Voronoi Graph Exploration Algorithm

  1. Access GVG
  2. Follow edge
  3. Reach node
  4. Select edge to closest frontier
  5. Repeat

Outdoor Path Planning Cost Parameters

  1. Ease of terrain (ruggedness)
  2. Distance
  3. Safety

Indoor Path Planning Cost Parameters

  1. Distance (efficiency)
  2. Closeness to walls (for localization)
  3. Human traffic

Sonar Sensor Inaccuracy

Problems with Euler Angles

Global Localization

Unknown initial position


Position Tracking (i.e. Local Localization)

Known initial position

Kidnapped Robot Problem (i.e. Re-Localization)

Incorrect known position

For a Bayesian Filter:
Bel(x_t) = p(x_t | o_t,a_{t-1},o_{t-1},a_{t-2},…,o_0)
where o_i are observations at time i and a_i are actions at time i. Simplify the equation using the Markov property, the theorem of total probability and Bayes rule to get to:

Bel(x_t) = \eta p(o_t | x_t) \int p(x_t | x_{t-1},a_{t-1})Bel(x_{t-1})dx_{t-1}


Bayes' Rule: p(a|b) = {p(b|a)p(a) \over p(b)}

You can assume: \eta = 1 / p(o_t | a_{t-1},…,o_0)


Bel(x_t) = p(x_t | Z_T)

Expand Z_T:

Bel(x_t) = p(x_t | o_t,a_{t-1},o_{t-1},...,o_0)

Definition of history:

h_t := a_{t-1},o_{t-1},a_{t-2},...,o_0
Bel(x_t) = p(x_t | o_t,h_t)

Apply Bayes’ Rule (p(a|b) = {p(b|a)p(a) \over p(b)}):

Bel(x_t) = {p(o_t | x_t,h_t) p(x_t | h_t) \over p(o_t | h_t)}

Substitute in \eta = 1 / p(o_t | h_t):

Bel(x_t) = \eta p(o_t | x_t,h_t) p(x_t | h_t)

Apply the Markov assumption: the probability of the current observation depends only on the current state (if you are somewhere, you will see the same thing no matter how you got there), whereas the state itself still depends on history:

Bel(x_t) = \eta p(o_t | x_t) p(x_t | h_t)

Apply the law of total probability:

Bel(x_t) = \eta p(o_t | x_t) \int p(x_t|x_{t-1},h_t)p(x_{t-1}|h_t) dx_{t-1}

Another Markov assumption (the probability of the current state depends only on the previous state and the previous action):

Bel(x_t) = \eta p(o_t|x_t)\int p(x_t|x_{t-1},a_{t-1})p(x_{t-1}|h_t)dx_{t-1}

Substitute in definition for Bel(x_{t-1}):

Bel(x_t) = \eta p(o_t|x_t) \int p(x_t|x_{t-1},a_{t-1})Bel(x_{t-1}) dx_{t-1}
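The recursion just derived can be written directly for a discrete state space, where the integral becomes a sum; the two-state door example below is purely illustrative:

```python
def bayes_filter_step(bel, action, observation, motion_model, sensor_model):
    """One Bayes filter step: Bel(x_t) = eta p(o_t|x_t) sum_x' p(x_t|x',a) Bel(x').

    bel: dict mapping state -> probability, i.e. Bel(x_{t-1}).
    motion_model(x, x_prev, a) -> p(x_t = x | x_{t-1} = x_prev, a_{t-1} = a).
    sensor_model(o, x) -> p(o_t = o | x_t = x).
    """
    states = list(bel)
    # Prediction: total probability over the previous state.
    predicted = {x: sum(motion_model(x, xp, action) * bel[xp] for xp in states)
                 for x in states}
    # Correction: weight by the observation likelihood, then normalize (eta).
    unnorm = {x: sensor_model(observation, x) * predicted[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

# Illustrative example: a door that never moves (identity motion model) and a
# sensor that reports "open" with prob. 0.6 when it is open, 0.2 when closed.
def motion(x, xp, a):
    return 1.0 if x == xp else 0.0

def sensor(o, x):
    return {("open", "open"): 0.6, ("open", "closed"): 0.2}[(o, x)]

belief = bayes_filter_step({"open": 0.5, "closed": 0.5}, None, "open", motion, sensor)
```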

For a mobile robot whose estimated motion is described by:

\hat{x}_{t+1} = \hat{x}_t + (V_t + w_{V_t})\delta t\cos \hat{\phi}_t
\hat{y}_{t+1} = \hat{y}_t + (V_t + w_{V_t})\delta t\sin \hat{\phi}_t
\hat{\phi}_{t+1} = \hat{\phi}_t + (\omega_t + w_{\omega_t})\delta t

and its real motion is defined as:

x_{t+1} = x_t + V_t \delta t\cos \phi_t
y_{t+1} = y_t + V_t \delta t\sin \phi_t
\phi_{t+1} = \phi_t + \omega_t \delta t

Derive the error: \tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1}
using small angle approximation.

\tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1}
=x_t+V_t\delta t\cos\phi_t-(\hat{x}_t+(V_t+w_{V_t})\delta t\cos\hat{\phi}_t)
=\tilde{x}_t+V_t\delta t(\cos\phi_t-\cos\hat{\phi}_t)-w_{V_t}\delta t\cos\hat{\phi}_t
=\tilde{x}_t+V_t\delta t(\cos(\hat{\phi}_t+\tilde{\phi}_t)-\cos\hat{\phi}_t)-w_{V_t}\delta t\cos\hat{\phi}_t
=\tilde{x}_t+V_t\delta t([\cos\hat{\phi}_t\cos\tilde{\phi}_t-\sin\hat{\phi}_t\sin\tilde{\phi}_t]-\cos\hat{\phi}_t)-w_{V_t}\delta t\cos\hat{\phi}_t
=\tilde{x}_t+V_t\delta t([\cos\hat{\phi}_t-\tilde{\phi}_t\sin\hat{\phi}_t]-\cos\hat{\phi}_t)-w_{V_t}\delta t\cos\hat{\phi}_t
=\tilde{x}_t-V_t\delta t\tilde{\phi}_t\sin\hat{\phi}_t-w_{V_t}\delta t\cos\hat{\phi}_t

\tilde{y}_{t+1} = y_{t+1} - \hat{y}_{t+1}
=y_t+V_t\delta t\sin\phi_t-(\hat{y}_t+(V_t+w_{V_t})\delta t\sin\hat{\phi}_t)
=\tilde{y}_t+V_t\delta t(\sin\phi_t-\sin\hat{\phi}_t)-w_{V_t}\delta t\sin\hat{\phi}_t
=\tilde{y}_t+V_t\delta t(\sin(\hat{\phi}_t+\tilde{\phi}_t)-\sin\hat{\phi}_t)-w_{V_t}\delta t\sin\hat{\phi}_t
=\tilde{y}_t+V_t\delta t([\sin\hat{\phi}_t\cos\tilde{\phi}_t+\cos\hat{\phi}_t\sin\tilde{\phi}_t]-\sin\hat{\phi}_t)-w_{V_t}\delta t\sin\hat{\phi}_t
=\tilde{y}_t+V_t\delta t([\sin\hat{\phi}_t+\tilde{\phi}_t\cos\hat{\phi}_t]-\sin\hat{\phi}_t)-w_{V_t}\delta t\sin\hat{\phi}_t
=\tilde{y}_t+V_t\delta t\tilde{\phi}_t\cos\hat{\phi}_t-w_{V_t}\delta t\sin\hat{\phi}_t

\tilde{\phi}_{t+1} = \phi_{t+1} - \hat{\phi}_{t+1}
=\phi_t+\omega_t\delta t-(\hat{\phi}_t+(\omega_t+w_{\omega_t})\delta t)
=\tilde{\phi}_t+\omega_t\delta t-\omega_t\delta t-w_{\omega_t}\delta t
=\tilde{\phi}_t-w_{\omega_t}\delta t
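As a sanity check, the exact error can be compared numerically against the derived small-angle form of \tilde{x}_{t+1}; the specific numbers below are arbitrary:

```python
import math

def exact_x_error(x_err, phih, phi_err, V, w_V, dt):
    """x_{t+1} - xhat_{t+1} computed directly from the two motion models.

    Takes xhat_t = 0, so the true state is x_t = x_err and phi_t = phih + phi_err.
    """
    x_next = x_err + V * dt * math.cos(phih + phi_err)   # true motion
    xh_next = (V + w_V) * dt * math.cos(phih)            # estimated motion
    return x_next - xh_next

def small_angle_x_error(x_err, phih, phi_err, V, w_V, dt):
    """Derived form: xtilde_t - V dt phitilde_t sin(phihat_t) - w_V dt cos(phihat_t)."""
    return x_err - V * dt * phi_err * math.sin(phih) - w_V * dt * math.cos(phih)
```

For a small heading error (e.g. phi_err = 0.001 rad) the two agree to well below measurement noise, which is what the small-angle approximation relies on.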

Resampling is a process necessary for particle filters with a finite number of particles, in which the particles are redistributed according to the current PDF (probability density function).

Particle Filter State Estimation Algorithm

  1. Calculate the pre-reading state estimate.
  2. Calculate the state estimate taking into consideration the pre-reading estimate and the reading.
  3. Redistribute the particles according to the new state estimate.
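A minimal 1D sketch of one predict/weight/resample cycle; the additive-Gaussian motion and sensor models are illustrative assumptions, not from the notes:

```python
import math
import random

def particle_filter_step(particles, control, observation, motion_noise, obs_noise):
    """One predict/weight/resample cycle of a 1D particle filter.

    particles: list of floats (state hypotheses).
    """
    # 1. Pre-reading estimate: push every particle through the noisy motion model.
    predicted = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Reading update: weight each particle by the observation likelihood.
    weights = [math.exp(-0.5 * ((observation - p) / obs_noise) ** 2)
               for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: redistribute particles according to the weighted distribution.
    return random.choices(predicted, weights=weights, k=len(particles))
```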

Simultaneous Localization and Mapping

SLAM is the process of building a map while simultaneously using that map to continually localize.

Main challenges for SLAM:

A robot configuration is a specification of the positions of all robot points relative to a fixed coordinate system. This configuration can be represented as a vector. Therefore, the configuration space is the space of all such vectors: all of the possible configurations of the robot. The topology of this space is often not Cartesian. Free space is the subset in which the robot is not touching any obstacle. Semi-free space is the subset in which the robot is touching but not overlapping any obstacle. C-Obstacle space is the subset in which the robot overlaps an obstacle. Two paths are homotopic when they have the same endpoints and can be continuously deformed into each other.

Probabilistic Roadmap (PRM)

Select random points in the free space, connect them, do a graph search.

Rapidly-Exploring Random Trees (RRT)

Select a random point, add a node at some constant distance in the direction of the random point from the nearest node to the point (unless the point is closer than the constant distance, in which case you add the point as a node).
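The extension step described above can be sketched as follows (collision checking is omitted for brevity; the function name is my own):

```python
import math

def rrt_extend(nodes, rand_point, step):
    """One RRT extension: steer from the nearest tree node toward rand_point.

    nodes: list of (x, y) tree vertices, mutated in place.
    Adds and returns the new vertex.
    """
    near = min(nodes, key=lambda n: math.dist(n, rand_point))
    d = math.dist(near, rand_point)
    if d <= step:
        new = rand_point                  # sample within reach: add it directly
    else:
        t = step / d                      # move a fixed step toward the sample
        new = (near[0] + t * (rand_point[0] - near[0]),
               near[1] + t * (rand_point[1] - near[1]))
    nodes.append(new)
    return new
```

Sampling rand_point uniformly from the workspace biases growth toward large unexplored regions, which is what makes the tree "rapidly exploring".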

Visibility Graph

  1. Connect initial and goal locations with visible vertices.
  2. Connect each obstacle vertex to visible obstacle vertices.
  3. Remove edges that intersect the interior of an obstacle.
  4. Plan on the resulting graph.

Guiding principle: stick to corners.


Generalized Voronoi Graph (GVG) Planning

  1. Construct the GVG.
  2. Plan on that graph.

Guiding principle: stick to the center of corridors.

Bug 2

Line from start to goal is s-line.

  1. Head toward goal on s-line
  2. If an obstacle is in the way, follow its boundary until encountering the s-line at a point closer to the goal.
  3. Repeat.

Bug2 requirements:


Wavefront Planner

  1. Label the goal “2”, obstacles “1”, and all other cells “0”.
  2. Starting at the “2”, fill out adjacent “0”s with “3”.
  3. Fill out “0”s adjacent to “3”s with “4”s.
  4. And so on…

“0”s that remain are unreachable. To plan a path, start at the start point, move to an adjacent cell whose number is one lower, and repeat until you reach the goal.
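The fill step above is a breadth-first search from the goal; a minimal sketch, assuming the grid already has the goal marked “2” and obstacles “1”:

```python
from collections import deque

def wavefront(grid, goal):
    """Fill a wavefront cost grid in place: goal = 2, obstacles = 1, free = 0.

    grid: list of lists with grid[goal] already set to 2.
    Cells still 0 afterwards are unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        value = grid[r][c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                grid[nr][nc] = value + 1   # "3"s around the goal, then "4"s, ...
                queue.append((nr, nc))
    return grid
```

A path is then read off by repeatedly stepping to any neighbor whose number is one lower.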

Bug 1

  1. Head toward goal.
  2. If obstacle encountered, circumnavigate. Remember closest point to goal.
  3. Circumnavigate back to the closest point.
  4. Repeat.


Brushfire Algorithm

  1. Surround the obstacles (including the map edge, if impassable) with “fire”.
  2. Expand the fire to adjacent cells.
  3. Repeat step 2.
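The fire-spreading steps above amount to a multi-source BFS; this sketch records the step at which each cell catches fire, i.e. a distance-to-nearest-obstacle map (the function name and labeling are my own):

```python
from collections import deque

def brushfire(obstacles, rows, cols):
    """Distance-to-nearest-obstacle grid via multi-source BFS ("fire").

    obstacles: set of (row, col) cells that are on fire at step 1.
    Obstacle cells hold 1, their neighbors 2, and so on outward.
    """
    dist = [[0] * cols for _ in range(rows)]
    queue = deque()
    for (r, c) in obstacles:
        dist[r][c] = 1
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == 0:
                dist[nr][nc] = dist[r][c] + 1   # fire spreads one cell per step
                queue.append((nr, nc))
    return dist
```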

© Emberlynn McKinney