CSCE 574 Robotics Notes
Hydraulic Drive
 Higher torque
 Useful for e.g. earth digging
Shape Memory Alloy
 Fine control
 Don’t require speed controllers
 Useful for e.g. prehensile hands (light touch)
Pneumatic Drive
 Fast
 Useful for e.g. sorting (manufacturing; high throughput)
Electric Drive
 Doesn’t require external power conversion (e.g. a compressor)
 Easy to control
 Rotational by default
 Useful for e.g. wheeled systems
For a differential drive robot, where the wheels are distance d apart and the wheel velocities are V_l and V_r, estimate the linear velocity V and the angular velocity \omega.
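Under the standard differential-drive model, the answer is V = (V_r + V_l)/2 and \omega = (V_r - V_l)/d. A quick sketch (function name is my own):

```python
def diff_drive_velocities(v_l, v_r, d):
    """Body-frame velocities of a differential-drive robot.

    v_l, v_r: left/right wheel velocities; d: wheel separation.
    Standard result: V = (v_r + v_l)/2, omega = (v_r - v_l)/d,
    with omega positive (counter-clockwise) when the right wheel is faster.
    """
    V = (v_r + v_l) / 2.0
    omega = (v_r - v_l) / d
    return V, omega
```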
Topological
 Graph of nodes (points)
 Low complexity
 Exploration is graph exploration
 Useful for finding the safest path (GVG)
 Useful for simple environments
 Useful for indoor environments (hallways, rooms, nice corners)
Grid-Based
 Discretized map (grid)
 Exploration is frontier-based
 Easy to implement
 More difficult to navigate
 Useful for reducing complex obstacles
 Can localize to a specific cell (often more accurate than to topol. node)
 Useful when you know your location (e.g. GPS)
Feature-Based
 Locations of features
 Easiest to localize in
 No inherent exploration method
 Obstacles difficult to define
 Useful where localization is imperative
 Useful where you don’t have something like GPS but also have a complex environment and need to localize easily (e.g. a coral reef)
Exteroceptive
 Measures the environment
 Camera
 Laser scanner
Proprioceptive
 Measures the robot
 Wheel encoder
 IMU
Laser scanner
 Accurate
 Low computational cost
 High energy cost
 Does not work on transparent (or sometimes black) obstacles
Sonar
 Low accuracy
 Low computational cost
 Middling energy cost
Stereo Camera
 Middling accuracy (depends on setup)
 High computational cost
 Low Energy Cost
Frontier Based Exploration Algorithm
 Go to nearest unexplored cell (that is not an obstacle).
 Repeat.
 Profit!
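A minimal sketch of nearest-frontier selection on an occupancy grid (the cell values here are my own convention: 0 free, -1 unknown, 1 obstacle); a BFS from the robot's cell through free space returns the closest unexplored cell:

```python
from collections import deque

FREE, UNKNOWN, OBSTACLE = 0, -1, 1

def nearest_frontier(grid, start):
    """BFS from `start` through free cells; return the first unknown
    (unexplored) cell reached, i.e. the nearest frontier cell."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    seen = {start}
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == UNKNOWN:
            return (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and (nr, nc) not in seen and grid[nr][nc] != OBSTACLE:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # no unknown cells reachable: exploration is done
```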
Higher exploitation (localization) means higher map accuracy, but lower efficiency, whereas higher exploration means higher mapping efficiency, but lower accuracy.
Generalized Voronoi Graph Exploration Algorithm
 Access GVG
 Follow edge
 Reach node
 Select edge to closest frontier
 Repeat
Outdoor Path Planning Cost Parameters
 Ease of terrain (ruggedness)
 Distance
 Safety
Indoor Path Planning Cost Parameters
 Distance (efficiency)
 Closeness to walls (for localization)
 Human traffic
Sonar Sensor Inaccuracy
 Specular reflections (erroneous reflection)
 Crosstalk (multiple sensors interfering)
 Speed of sound difference in different materials
Problems with Euler Angles
 Gimbal lock
 Discontinuity at the 0/2\pi rad wraparound
Localization Problems
 Global Localization: unknown initial position
 Tracking: known initial position
 Kidnapped Robot Problem (i.e. Re-Localization): incorrect known position
For a Bayesian Filter:
Bel(x_t) = p(x_t | o_t, a_{t-1}, o_{t-1}, a_{t-2}, …, o_0)
where o_i are observations at time i and a_i are actions at time i. Simplify the equation using the Markov property, the theorem of total probability, and Bayes' rule to get to:
Bel(x_t) = \eta p(o_t | x_t) \int p(x_t | x_{t-1}, a_{t-1}) Bel(x_{t-1}) dx_{t-1}
where:
Bayes' rule: p(a|b) = \frac{p(b|a)p(a)}{p(b)}
you can assume: \eta = 1 / p(o_t | a_{t-1}, …, o_0)
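The discrete-state version of this recursion (integral becomes a sum) can be sketched directly; the state space, action model, and observation likelihood below are made-up placeholders:

```python
def bayes_filter_step(bel, action_model, obs_likelihood):
    """One recursion of Bel(x_t) = eta * p(o_t|x_t) * sum_x' p(x_t|x',a) Bel(x').

    bel: list of probabilities over discrete states.
    action_model[i][j]: p(x_t = j | x_{t-1} = i, a_{t-1}).
    obs_likelihood[j]: p(o_t | x_t = j).
    """
    n = len(bel)
    # prediction: theorem of total probability over the previous state
    pred = [sum(bel[i] * action_model[i][j] for i in range(n)) for j in range(n)]
    # correction: weight by the observation likelihood, then normalize (eta)
    unnorm = [obs_likelihood[j] * pred[j] for j in range(n)]
    eta = 1.0 / sum(unnorm)
    return [eta * u for u in unnorm]
```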
Estimation:
Expand Z_T:
Definition of history:
Apply Bayes’ Rule (p(a|b) = {p(b|a)p(a) \over p(b)}):
Substitute in \eta = 1 / p(o_t | h_t):
Apply Markov assumption (the probability of our current observation is only dependent on our current state: if you are somewhere, you will see the same thing no matter how you got there, whereas the state itself is dependent on history):
Apply the law of total probability:
Another Markov assumption (the probability of our current state is dependent only on our previous state and our previous history):
Substitute in definition for Bel(x_{t-1}):
For a mobile robot whose estimated motion is described by:
\hat{x}_{t+1} = \hat{x}_t + (V_t + w_{V_t})\delta t\cos\hat{\phi}_t
\hat{y}_{t+1} = \hat{y}_t + (V_t + w_{V_t})\delta t\sin\hat{\phi}_t
\hat{\phi}_{t+1} = \hat{\phi}_t + (\omega_t + w_{\omega_t})\delta t
and its real motion is defined as:
x_{t+1} = x_t + V_t\delta t\cos\phi_t
y_{t+1} = y_t + V_t\delta t\sin\phi_t
\phi_{t+1} = \phi_t + \omega_t\delta t
Derive the error \tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1} using the small-angle approximation.
\tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1}
= x_t + V_t\delta t\cos\phi_t - (\hat{x}_t + (V_t + w_{V_t})\delta t\cos\hat{\phi}_t)
= \tilde{x}_t + V_t\delta t(\cos\phi_t - \cos\hat{\phi}_t) - w_{V_t}\delta t\cos\hat{\phi}_t
= \tilde{x}_t + V_t\delta t(\cos(\hat{\phi}_t + \tilde{\phi}_t) - \cos\hat{\phi}_t) - w_{V_t}\delta t\cos\hat{\phi}_t
= \tilde{x}_t + V_t\delta t([\cos\hat{\phi}_t\cos\tilde{\phi}_t - \sin\hat{\phi}_t\sin\tilde{\phi}_t] - \cos\hat{\phi}_t) - w_{V_t}\delta t\cos\hat{\phi}_t
= \tilde{x}_t + V_t\delta t([\cos\hat{\phi}_t - \tilde{\phi}_t\sin\hat{\phi}_t] - \cos\hat{\phi}_t) - w_{V_t}\delta t\cos\hat{\phi}_t
= \tilde{x}_t - V_t\delta t\tilde{\phi}_t\sin\hat{\phi}_t - w_{V_t}\delta t\cos\hat{\phi}_t
\tilde{y}_{t+1} = y_{t+1} - \hat{y}_{t+1}
= y_t + V_t\delta t\sin\phi_t - (\hat{y}_t + (V_t + w_{V_t})\delta t\sin\hat{\phi}_t)
= \tilde{y}_t + V_t\delta t(\sin\phi_t - \sin\hat{\phi}_t) - w_{V_t}\delta t\sin\hat{\phi}_t
= \tilde{y}_t + V_t\delta t(\sin(\hat{\phi}_t + \tilde{\phi}_t) - \sin\hat{\phi}_t) - w_{V_t}\delta t\sin\hat{\phi}_t
= \tilde{y}_t + V_t\delta t([\sin\hat{\phi}_t\cos\tilde{\phi}_t + \cos\hat{\phi}_t\sin\tilde{\phi}_t] - \sin\hat{\phi}_t) - w_{V_t}\delta t\sin\hat{\phi}_t
= \tilde{y}_t + V_t\delta t([\sin\hat{\phi}_t + \tilde{\phi}_t\cos\hat{\phi}_t] - \sin\hat{\phi}_t) - w_{V_t}\delta t\sin\hat{\phi}_t
= \tilde{y}_t + V_t\delta t\tilde{\phi}_t\cos\hat{\phi}_t - w_{V_t}\delta t\sin\hat{\phi}_t
\tilde{\phi}_{t+1} = \phi_{t+1} - \hat{\phi}_{t+1}
= \phi_t + \omega_t\delta t - (\hat{\phi}_t + (\omega_t + w_{\omega_t})\delta t)
= \tilde{\phi}_t + \omega_t\delta t - \omega_t\delta t - w_{\omega_t}\delta t
= \tilde{\phi}_t - w_{\omega_t}\delta t
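As a sanity check, the noise-free part of the linearized error dynamics above can be written as one small matrix update and compared numerically against the exact error; the function name and test values are my own:

```python
import numpy as np

def error_propagation(err, V, phi_hat, dt):
    """Propagate the pose error [x~, y~, phi~] one step using the
    linearized (small-angle) model derived above, with noise set to zero:
        x~'   = x~ - V*dt*phi~*sin(phi_hat)
        y~'   = y~ + V*dt*phi~*cos(phi_hat)
        phi~' = phi~
    """
    A = np.array([
        [1.0, 0.0, -V * dt * np.sin(phi_hat)],
        [0.0, 1.0,  V * dt * np.cos(phi_hat)],
        [0.0, 0.0,  1.0],
    ])
    return A @ err
```

For a small heading error (phi~ on the order of 0.01 rad) the linearized step agrees with the exact trigonometric difference to well under a percent, which is the point of the small-angle approximation.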
Resampling is a process necessary for particle filters with finite numbers of particles in which the particles are redistributed according to the current PDF (probability density function).
Particle Filter State Estimation Algorithm
 Propagation: calculate the pre-reading state estimate.
 Update: calculate the state estimate taking into consideration the pre-reading estimate and the reading.
 Resampling: redistribute the particles according to the new state estimate.
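A minimal 1-D sketch of the three steps above; the motion and likelihood functions are placeholders, not any particular sensor model:

```python
import random

def particle_filter_step(particles, motion, likelihood):
    """One propagate/update/resample cycle of a 1-D particle filter.

    particles: list of state samples.
    motion(x): samples the (possibly noisy) motion model p(x_t | x_{t-1}, a).
    likelihood(x): p(o_t | x) for the current reading.
    """
    # propagation: pre-reading estimate
    particles = [motion(x) for x in particles]
    # update: weight each particle by the observation likelihood
    weights = [likelihood(x) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resampling: redistribute particles according to the new estimate
    return random.choices(particles, weights=weights, k=len(particles))
```

With a likelihood that strongly favors one region, resampling concentrates nearly all particles there after a single cycle.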
Simultaneous Localization and Mapping
SLAM is the process of building a map while simultaneously using that map to continually localize.
Main challenges for SLAM:
 Controlling uncertainty
 Controlling growth of complexity
 Achieving autonomous exploration
A robot configuration is a specification of the positions of all robot points relative to a fixed coordinate system. This configuration can be represented as a vector. Therefore, the configuration space is the vector space of the configuration: all of the possible configurations of the robot. The topology of this space is often not Cartesian. Free space is the vector space in which the robot is not touching any obstacle. Semi-free space is the vector space in which the robot is touching but not overlapping any obstacle. C-Obstacle space is the vector space in which the robot is overlapping an obstacle. Two paths are homotopic when they have the same endpoints and can be continuously deformed into each other.
Sampling-Based Path Planners
 Probabilistic Roadmap (PRM): select random points in the free space, connect them, do a graph search.
 Rapidly-Exploring Random Trees (RRT): select a random point, add a node at some constant distance in the direction of the random point from the nearest node to the point (unless the point is closer than the constant distance, in which case you add the point itself as a node).
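The RRT extension step can be sketched as follows (collision checking omitted; `math.dist` requires Python 3.8+):

```python
import math

def rrt_extend(nodes, sample, step):
    """One RRT iteration: find the node nearest `sample` and grow toward it.

    nodes: list of (x, y) tree nodes; sample: random point; step: extension distance.
    Appends and returns the new node.
    """
    near = min(nodes, key=lambda n: math.dist(n, sample))
    d = math.dist(near, sample)
    if d <= step:            # sample closer than the step size: add it directly
        new = sample
    else:                    # otherwise step a constant distance toward the sample
        t = step / d
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
    nodes.append(new)
    return new
```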
Visibility Graph
 Connect initial and goal locations with visible vertices.
 Connect each obstacle vertex to visible obstacle vertices.
 Remove edges that intersect the interior of an obstacle.
 Plan on the resulting graph.
Guiding principle: stick to corners.
GVG
 Construct GVG.
 Plan on that graph.
Guiding principle: stick to the center of corridors.
Bug 2
Line from start to goal is the s-line.
 Head toward goal on the s-line.
 If an obstacle is in the way, follow its boundary until encountering the s-line at a point closer to the goal.
 Repeat.
Bug 2 requirements:
 Local sensing (e.g. whiskers)
 Known direction to goal (e.g. beacon/IMU)
Wavefront:
 Label goal “2”, obstacles “1”, all others “0”.
 Starting at the “2”, fill out adjacent “0”s with “3”.
 Fill out “0”s adjacent to “3”s with “4”s.
 And so on…
“0”s are unreachable. To plan path, start at start point, go down a number, repeat until you reach the goal.
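A minimal sketch of the wavefront labeling (assumes the grid already marks obstacles as 1 and free cells as 0, with the goal given as a coordinate):

```python
from collections import deque

def wavefront(grid, goal):
    """Label a grid in place per the wavefront scheme: goal=2, obstacles=1,
    free=0.  After labeling, any cell still 0 is unreachable; a path follows
    strictly decreasing labels from the start down to the goal's 2."""
    rows, cols = len(grid), len(grid[0])
    grid[goal[0]][goal[1]] = 2
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                grid[nr][nc] = grid[r][c] + 1
                queue.append((nr, nc))
    return grid
```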
Bug 1
 Head toward goal.
 If obstacle encountered, circumnavigate. Remember closest point to goal.
 Circumnavigate back to closest point.
 Repeat.
Grassfire/Brushfire
 Surround the obstacles (including map edge if impassable) with “fire”.
 Expand fire to adjacent cells.
 Repeat step 2.
© Emberlynn McKinney