Move Tatrabot 2D mapping - Johanna, Romana, Nicole, Oswaldo, Dušan, Viktor


The project's aim was to let the Tatrabot successfully reach a given target position in a grid environment using the A* search algorithm. This is realized by building an internal two-dimensional map that represents the robot's current position in the environment and its knowledge about probable obstacle locations, gained from ultrasonic measurements taken during movement. To account for inaccurate movements, the Tatrabot uses its IR light sensors to perceive and follow the grid's lines.

Grid to Map Representation

An 8 by 8 grid of 25 by 25 cm squares with 2.5 cm black borders was printed out as the layout of the map over which the robot navigates. This corresponds to a 16 by 16 (row, column) coordinate system, as shown in the figure below.

WholeGrid.png RealLife.jpg

Left: Representation of the Grid, right: robot in its environment in real life

To represent the grid layout, the obstacle arrangement, and the robot's movement within it, three virtual maps were constructed from data the robot collected during the mapping phase. In this phase, a human operator entered control commands on a keyboard, and the commands were sent to the robot over a Bluetooth radio connection. Once the map had been constructed, the robot could execute orders to navigate to a specified destination, using the A* search algorithm to search through the map. Signals from the ultrasonic sensors were received either only at the end of each movement, or continuously during the movement. The latter option did not prove useful, since the robot's trajectory while moving from one crossing to another was too wavy. In the visualizations below, the position of the robot within the map is represented by an "R" followed by an arrow (<, >, ^ or v) corresponding to the robot's orientation. The three constructed maps follow; the figures illustrate the state of the maps at the same moment in time.


See Obstacle Map

-----see obstacle----------------
 0   0   0   1   1   1   0   0   0   0   0   0   0   0   0   0
 0   0   0  20  15   1   0   0   0   0   0   2   2   5  14   0
 0   0   1  54  57  11  13   0   0   1   1   1   0   3  12   0
 0   0   1  52  46   0   0   0   0  25   0   0   1   4  12   0
 0   0   0  28  14   0  24   2   0   8   0  14   1   2  12   0
 0   0  11   9   0   0   1   3   0   0   0   9   1   1   0   0
 0   0  11   9   0   0   0  21  83  83  47   3   0   0   0   0
 0   0  18  16   0   0   0  35  81  81  34  14   1   0   2  11
 0   0  18  16   0   0   0   1   4   5   5   1   1   0   2  11
 0   0  18  18   7   7   0   0   1   2   1   1   1   0  18  13
 0   0   7  54   3   3  11  10  10   0   0   1   1  18  24   4
 0   0   7  75  39  39  24   1   1   0   0   0   0  18  22   3
 0   0   0  49  57  24  Rv  13  13   0   0   0   0  18   6  54
 0   0   0   7  46   1   1   1   1   1   0   2   2   2   2  55
 0   0   0   8 106   1   0   0   0   0   0   3   3   3   3 115
 0   0   0   2  61   2   1   0   0   0   0   1   1   1   1  60

The see obstacle map records how many times an obstacle has been sensed at each position. Depending on the detected distance of the object, the radius of the ultrasonic beam at that distance is calculated. The algorithm then takes the width of the cells and the orientation of the robot into account to determine which cells fall within that radius, and increments the value representing the obstacle probability in those cells.
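A minimal sketch of this update for a robot facing "up" on the map; the 7.5° half-angle, cell size, and helper names are assumptions (the datasheet only bounds the beam below 15°):

  #include <math.h>

  #define MAP_SIZE 16
  #define CELL_CM  12.5f   /* one map cell is half of a 25 cm grid square */

  unsigned char see_obstacle[MAP_SIZE][MAP_SIZE];

  /* Increment the counters of all cells covered by the beam at the
     measured distance. Other robot orientations would rotate the
     row/column offsets accordingly. */
  void mark_obstacle(int robot_row, int robot_col, float distance_cm)
  {
      float radius_cm = distance_cm * tanf(7.5f * 3.14159265f / 180.0f);
      int   row  = robot_row - (int)(distance_cm / CELL_CM);
      int   half = (int)(radius_cm / CELL_CM);

      for (int col = robot_col - half; col <= robot_col + half; col++)
          if (row >= 0 && row < MAP_SIZE && col >= 0 && col < MAP_SIZE)
              see_obstacle[row][col]++;
  }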

The position and orientation of the robot within the grid are not measured absolutely; they are tracked from the initial position and orientation by accumulating the changes caused by each movement. The direction of signal detection is adjusted accordingly.


See Nothing Map

-----see nothing-----------------
 0   0   0  11  11  11   0   0   0  18  18  39  65  65  47   0
33   3   3  15  34  34  22   0   0  33  33  54  80  77  21   0
33  33   5  19  62 108  69  84  51  69  55  55  82  79  23   4
65  33   5  30  73 119  69  84  51  76  62  62  87  83   4   4
35  35   3  27  69  83   9  15  15  34  34  25  54  53   4  13
41  32   0  11  61  61  34  40  15  24  43  34  84  83  13  13
 9   9   0  11 104 104  45  54  30  21  40  28  53  53  13  13
31  31  22  18  71  71  36  65 109  84  41  31  95  95  70  38
22  22  22  40 103 103  83  83 155 171 242  63 105 105 112  80
41  41  41  55  85  85  72  72 163 167  87  82 124 124 106  72
19  19  19  51 119 119 122  60 233 252  19  40  97  79  55  50
19  19  19  31  57  57  75  36 126 252  19  19  77  59  37  13
 0   0   0   7  77  87  Rv  42  80  79  22  43  74  93  87  12
 0   0   0  33  40  86 101  61  79 101  22  41  72  72  72  19
 0   0   0  32  39  91 109  61  20  42  22  38  75 255 255  16
 0   0   0  30  30 111 108  61  20  42  22  37  74  74  74  15

The see nothing map increments the probability of "no obstacle" in the cells in front of a detected object. It also increments the "no obstacle" probability of cells within a 154 cm range when no obstacle is detected at all. The radius of the ultrasonic beam is taken into consideration as in the "see obstacle" map.


Full Map

FullMap duringmovements.png FullMap.png The left figure shows a map resulting from measurements taken during the robot's movement (as in the current version of the code). The map on the right depicts a less noisy map, which evolved from measurements taken only while the robot was standing still.


Due to the angle of the obstacles with respect to the robot, the material and reflectance of the obstacles, and the general sensitivity of the sensors, it is estimated that an obstacle is only detected about 1/5 of the time. To compensate for this, the "see obstacle" value therefore only needs to reach 1/5 of the "see nothing" value for an obstacle to be considered present. For clarity, the position of the robot is noted as coordinates at the bottom of the map.
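In code, that decision reduces to a small check; a sketch, where the seen > 0 guard for never-detected cells is an added assumption:

  /* An obstacle is treated as present once the "see obstacle" count
     reaches at least 1/5 of the "see nothing" count for that cell. */
  int obstacle_present(int seen, int nothing)
  {
      return seen > 0 && 5 * seen >= nothing;
  }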


Visual Representation Using UTF-8 Characters

The probability of an obstacle being present is further represented in increments using UTF-8 characters to create a better visualization of the map. It is calculated for each location using the following formula:

ProbabilityChars.png

The meaning and encoding of each character used are given in the following table:

Character	Probability	UTF-8 (hexadecimal)

x		unknown		-
 		0		-
.		0.1		-
:		0.2		-
o		0.3		-
O		0.4		-
@		0.5		-
░		0.6		0xE2 0x96 0x91
▒		0.7		0xE2 0x96 0x92
▓		0.8		0xE2 0x96 0x93
█		1		0xE2 0x96 0x88
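The table translates directly into a small printing helper; a sketch, assuming the probability has already been computed by the formula above and normalized to [0,1], with a negative value marking unknown cells:

  #include <stdio.h>

  /* Print the display character for obstacle probability p in [0,1];
     p < 0 stands for "unknown". Values of 0.9 and above map to the
     full block, since the table has no 0.9 entry (an assumption). */
  void print_probability_char(float p)
  {
      static const char *chars[] = {
          " ", ".", ":", "o", "O", "@",
          "\xE2\x96\x91",  /* ░ */
          "\xE2\x96\x92",  /* ▒ */
          "\xE2\x96\x93",  /* ▓ */
          "\xE2\x96\x88"   /* █ */
      };
      if (p < 0.0f) { printf("x"); return; }
      int i = (int)(p * 10.0f);
      printf("%s", chars[i > 9 ? 9 : i]);
  }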

The UTF-8 characters used are encoded in three bytes, as explained here using the example of the full square (█):

  1. The full square has the Unicode code point U+2588.
  2. All Unicode code points between U+0800 and U+FFFF are encoded in three bytes.
  3. The hexadecimal 2588 is 0010 0101 1000 1000 in binary.
  4. Since the character is to be encoded in three bytes, the prefix 1110 is added at the beginning.
  5. The first four bits of the code point are written after this prefix, still within the first byte, resulting in 1110 0010.
  6. The remaining part to be encoded is: ...0101 1000 1000.
  7. The second byte starts with 10 to mark it as a continuation byte, and the rest of it is filled with the next six bits, resulting in 1110 0010 1001 0110 so far. What remains is: ...00 1000.
  8. The third byte repeats the previous step: 10 is put at the beginning and the remaining six bits form the rest, resulting in 1110 0010 1001 0110 1000 1000.
  9. Written in hexadecimal, this is E2 96 88.
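The same procedure in code, as a small sketch for code points in the three-byte range U+0800 to U+FFFF:

  #include <stdio.h>

  /* Encode a code point from U+0800..U+FFFF into three UTF-8 bytes:
     1110xxxx 10xxxxxx 10xxxxxx. */
  void utf8_encode3(unsigned int cp, unsigned char out[3])
  {
      out[0] = 0xE0 | (cp >> 12);          /* 1110 + top 4 bits  */
      out[1] = 0x80 | ((cp >> 6) & 0x3F);  /* 10 + middle 6 bits */
      out[2] = 0x80 | (cp & 0x3F);         /* 10 + low 6 bits    */
  }

  int main(void)
  {
      unsigned char b[3];
      utf8_encode3(0x2588, b);                       /* full square █  */
      printf("%02X %02X %02X\n", b[0], b[1], b[2]);  /* prints E2 96 88 */
      return 0;
  }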


Movement

The robot is programmed to perform three kinds of movements: forwards, leftwards, and rightwards. Movements always start from an assumed default position, i.e. in the top left corner of the allocated cell, immediately behind the gridline, with the leftmost LED sensor sensing the left-side line of the grid. After each move, the position of the robot on the map is updated accordingly. One square on the grid is represented by a 2 × 2 block of cells on the virtual map.

InitialGrid.png

Default position
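A minimal sketch of this bookkeeping, assuming turns change only the heading while a forward move advances one grid square (both simplifications; all names hypothetical):

  /* Robot pose on the 16x16 map; heading: 0=up, 1=right, 2=down, 3=left.
     One grid square is a 2x2 block of cells, so a forward move shifts by 2. */
  typedef struct { int row, col, heading; } Pose;

  static const int DROW[4] = { -2, 0, 2, 0 };
  static const int DCOL[4] = {  0, 2, 0, -2 };

  void after_forward(Pose *p) { p->row += DROW[p->heading];
                                p->col += DCOL[p->heading]; }
  void after_right(Pose *p)   { p->heading = (p->heading + 1) % 4; }
  void after_left(Pose *p)    { p->heading = (p->heading + 3) % 4; }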


Forward

For the forward motion, starting behind the line, the robot moves forward by rotating both motors for a short while, passing the initial line. A line-following function is then activated. Because the two motors may rotate at unequal speeds, the robot is programmed to compensate for deviations from the line:

  • If only the first sensor from the left is on the line --> move forward by rotating both motors.
  • If the second sensor from the left sees the line (the robot has turned left) --> stop the right motor, continue with the left motor.
  • If the first sensor from the left stops seeing the line (the robot has turned right) --> stop the left motor, continue with the right motor.

The loop is interrupted when the sensor on the far right detects a line, meaning that the robot has reached the next perpendicular grid line. The robot then reverses slightly with both motors to place itself in the default position.
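A sketch of this loop; all hardware helpers, sensor indices, and timing values are assumptions, not the project's actual names:

  /* Hypothetical hardware helpers: on_line(i) reads IR sensor i
     (numbered from the left), motor_left/motor_right set motor speeds. */
  extern int  on_line(int sensor);
  extern void motor_left(int speed);
  extern void motor_right(int speed);
  extern void delay_ms(int ms);

  #define SPEED            50   /* assumed motor speed                       */
  #define FAR_RIGHT_SENSOR  3   /* assumed index of the far-right IR sensor  */
  #define REVERSE_MS      200   /* assumed reversing time                    */

  /* Follow the left-side line until the far-right sensor detects the
     next perpendicular grid line, then back up into default position. */
  void move_forward(void)
  {
      motor_left(SPEED); motor_right(SPEED);        /* pass the initial line */
      while (!on_line(FAR_RIGHT_SENSOR)) {
          if (on_line(1))       { motor_right(0);  motor_left(SPEED);  }  /* drifted left  */
          else if (!on_line(0)) { motor_left(0);   motor_right(SPEED); }  /* drifted right */
          else                  { motor_left(SPEED); motor_right(SPEED); }
      }
      motor_left(-SPEED); motor_right(-SPEED);      /* reverse behind the line */
      delay_ms(REVERSE_MS);
      motor_left(0); motor_right(0);
  }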


Right Turn

Starting from default position with robot at rest:

  • Rotate the left motor while leaving the right motor at rest. Stop the movement when the robot has rotated 90° (25 counts).
  • Reverse both motors to relocate the robot behind the line.

Follow the same loop as in the forward function to follow the line and regain the default position.

RightTurn.png

Robot performing a turn to the right


Left Turn

Starting from default position with robot at rest:

  • Rotate both motors to move forward past the initial line.
  • Follow the line forward for a short time.
  • Stop the right motor while rotating the left motor backwards until the robot has rotated 90°.
  • Move the robot backwards (both motors) over the line for a given amount of time.
  • Follow the same loop as in the forward function to follow the line and regain the default position.

LeftTurn.png

Robot performing a turn to the left


Navigation

Coordinate Systems

The robot uses three coordinate systems:

  • Local
  • Global
  • Grid


Coordinates.png


Every ultrasonic sensor has its own local coordinate system. The origin [0,0] lies directly in front of the sensor; the X-coordinate grows from left to right and the Y-coordinate grows away from the sensor, in the direction it points.

The global coordinate system is used for the actual calculations. It is a standard mathematical coordinate system, and each sensor has its own defined shift from the global origin [0,0].

The grid coordinate system is the last one; it is used for the grid representation of the map. The X-coordinate is the same as in the global system, but the Y-coordinate is flipped, because the map is stored in a two-dimensional array with row 0 at the top. Every point in the real grid is represented by 4 cells in our array.

The final calculation is performed in the global coordinate system using these shifts, and the resulting "seen" and "unseen" cells are obtained by a vector transformation into the grid coordinate system.
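A minimal sketch of chaining the systems together; the per-sensor rotation implied by the sensors' different headings is omitted, and CELL_CM and MAP_SIZE are the assumed values from the earlier sketch:

  typedef struct { float x, y; } Point;

  /* Each sensor's shift of its local origin within the global system. */
  typedef struct { float dx, dy; } Shift;

  /* Local -> global: apply the sensor's shift. */
  Point local_to_global(Point p, Shift s)
  {
      Point g = { p.x + s.dx, p.y + s.dy };
      return g;
  }

  /* Global -> grid: X stays, Y is flipped because the map array
     has row 0 at the top. */
  void global_to_grid(Point g, int *row, int *col)
  {
      *col = (int)(g.x / CELL_CM);
      *row = (MAP_SIZE - 1) - (int)(g.y / CELL_CM);
  }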


Implementation of A* Algorithm

Function initialize_plan_data_structures

Initializes the data that will be used to make a plan to go from a starting position to a final position.

Data structures:

  • An array of blocks visited by the robot, initially 0 for every block except the block where the robot currently is (value 1).
  • A queue start index initialized to 0.
  • A queue end index initialized to 1.


Function insert_into_queue

Inserts row and column values into the queue according to the plan, until the queue's start index equals its end index.

Data structures:

  • Arrays of the rows and columns in the queue.
  • The start and end indices of the queue.


Function create_plan

Creates a valid plan. It executes initialize_plan_data_structures and, while the queue's end index is higher than its start index, takes the current row and column from the queue. It then tries each direction, forwards and backwards, in both rows and columns. The algorithm looks for a suitable route to the goal without leaving the map, avoiding obstacles, and without visiting already-visited spaces on the grid. If a space is suitable for the robot to move to, the function marks it as visited, records the number of steps from the start for that space, and inserts it into the queue. When the candidate row and column reach the destination row and column, the queue's start index will have caught up with its end index, meaning that the points in the queue describe the route from start to goal. Finally, the function derives the direction in which the robot will move at each step, together with the corresponding rows and columns, adds up the steps required to reach the goal, and calculates the plan length. A condensed sketch follows the data-structure list below.

Data structures:

  • Array of suitable directions.
  • Array of candidate rows and columns.
  • Array of visited rows and columns.
  • Steps from start based on the candidate rows and columns.
  • Plan length based on steps from start.
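A condensed sketch of the search loop these functions describe; cell_has_obstacle and the map size are assumptions, and extracting the actual movement plan from steps_from_start is omitted:

  #include <string.h>

  #define MAP_SIZE 16
  #define QMAX (MAP_SIZE * MAP_SIZE)

  extern int cell_has_obstacle(int row, int col);   /* hypothetical test */

  int queue_rows[QMAX], queue_cols[QMAX];
  int visited[MAP_SIZE][MAP_SIZE];
  int steps_from_start[MAP_SIZE][MAP_SIZE];

  /* Expand outward from the start until the goal is reached, using the
     queue and visited array described above. Returns the number of
     steps to the goal, or -1 when no trajectory exists. */
  int plan_steps(int sr, int sc, int gr, int gc)
  {
      static const int dr[4] = { -1, 1, 0, 0 };
      static const int dc[4] = {  0, 0, -1, 1 };
      int q_start = 0, q_end = 1;

      memset(visited, 0, sizeof visited);
      visited[sr][sc] = 1;
      queue_rows[0] = sr; queue_cols[0] = sc;
      steps_from_start[sr][sc] = 0;

      while (q_end > q_start) {
          int r = queue_rows[q_start], c = queue_cols[q_start];
          q_start++;
          if (r == gr && c == gc) return steps_from_start[r][c];
          for (int d = 0; d < 4; d++) {
              int nr = r + dr[d], nc = c + dc[d];
              if (nr < 0 || nr >= MAP_SIZE || nc < 0 || nc >= MAP_SIZE) continue;
              if (visited[nr][nc] || cell_has_obstacle(nr, nc)) continue;
              visited[nr][nc] = 1;
              steps_from_start[nr][nc] = steps_from_start[r][c] + 1;
              queue_rows[q_end] = nr; queue_cols[q_end] = nc;
              q_end++;
          }
      }
      return -1;   /* no trajectory exists */
  }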


Ultrasonic Sensor Model

Cone.png

The cone is defined by the following two functions (see also the figure on the right):

Formula.png

These functions are used to determine whether a point lies in the cone. The condition is met when the result of the first function lies in the interval (-∞, 0] and the result of the second in [0, ∞).

The process of updating seen and unseen cells is as follows:

  1. Get the sensor's measurement as a distance.
  2. Process every row from the robot's position up to that distance as unseen.
  3. Update the columns in each row to the left until at least one condition is met:
    1. the cell's left distant edge is in the cone,
    2. the cell's right distant edge is in the cone,
    3. the cone lies between the left and right distant edges.
  4. Update the columns in each row to the right in the same way.
  5. Mark all columns at the measured distance as seen, using the same process.

This process is repeated for every sensor used, each with its own measured distance.
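A sketch of the membership test: the actual boundary functions are those in the figure above; straight edges through the sensor origin and a 7.5° half-angle are assumed here.

  #include <math.h>

  #define HALF_ANGLE (7.5f * 3.14159265f / 180.0f)  /* assumed half-angle */

  /* Boundary functions of the cone in the sensor's local frame.
     A point is in the cone when f1 <= 0 and f2 >= 0, matching the
     intervals described above. */
  float f1(float x, float y) { return x - y * tanf(HALF_ANGLE); }
  float f2(float x, float y) { return x + y * tanf(HALF_ANGLE); }

  int in_cone(float x, float y)
  {
      return y >= 0.0f && f1(x, y) <= 0.0f && f2(x, y) >= 0.0f;
  }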


Sensors

Ultrasonic Measurement System

Our robot is equipped with three HC-SR04P ultrasonic sensors.

Their general directions are:

  • Front
  • Left
  • Right

The sensors run in an endless loop, measuring distances in all three directions by working through 18 states, with six states for each sensor direction.

  1. Sets value 1 on pin GPIOB13.
  2. Clears the GPIOB13 pin.
  3. Waits for a value on GPIO12, the starting bit.
  4. Waits until the sensor stops transmitting on GPIO12, which signals that the measurement has finished. If the transmission does not stop within 870 ms, waiting is aborted and a default value is used.
  5. Is used as a state signaling that this sensor has already been measured and the other sensors can be used.
  6. Means that the measurement failed because no response was received.

States 1-6 work with sensor 1, while states 7-12 are used for sensor 2 and 13-18 for sensor 3.
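A skeleton of one sensor's six states; gpio_set, gpio_clear, gpio_read, millis, and the default distance stand in for the real STM32 register accesses and are assumptions:

  extern void     gpio_set(int pin);
  extern void     gpio_clear(int pin);
  extern int      gpio_read(int pin);
  extern unsigned millis(void);

  #define DEFAULT_DISTANCE 400   /* hypothetical fallback value */

  enum { SET_TRIG, CLEAR_TRIG, WAIT_ECHO, WAIT_END, MEASURED, FAILED };

  void sensor_step(int *state, unsigned *t0, unsigned *result)
  {
      switch (*state) {
      case SET_TRIG:   gpio_set(13);   *state = CLEAR_TRIG; break;  /* state 1 */
      case CLEAR_TRIG: gpio_clear(13); *state = WAIT_ECHO;  break;  /* state 2 */
      case WAIT_ECHO:                                               /* state 3 */
          if (gpio_read(12)) { *t0 = millis(); *state = WAIT_END; }
          break;
      case WAIT_END:                                                /* state 4 */
          if (!gpio_read(12))              *state = MEASURED;
          else if (millis() - *t0 > 870) { *result = DEFAULT_DISTANCE;
                                           *state = FAILED; }
          break;
      case MEASURED: break;   /* state 5: done, next sensor may measure */
      case FAILED:   break;   /* state 6: no response was received      */
      }
  }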


HCSR04P module

This module is the second generation of the HC-SR04 ultrasonic sensor. Unlike the first-generation HC-SR04, which can only operate between 4.8 V and 5 V DC, this new version has a wider input-voltage range, allowing it to work with controllers operating at 3.3 V. It measures distance using sonar: an ultrasonic pulse (40 kHz, well above human hearing) is transmitted from the unit, and the distance to the target is determined by measuring the time required for the echo to return.

Viktor2.jpg

Features

  • HC-SR04P upgraded version with a built-in crystal; more reliable, lower working current.
  • Power supply: 3.3 to 5 V DC
  • Quiescent current: <2 mA
  • Working current: 2.8 mA @ 5 V
  • Effective angle: <15 degrees
  • Ranging distance: 2 to 400 cm
  • Connector: 4-pin header with 2.54 mm pitch
  • Dimensions: 45 mm × 20 mm × 15 mm
  • Weight: 8.5 g
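Distance then follows directly from the measured round-trip time of the echo; a minimal conversion, assuming the echo pulse width is available in microseconds:

  /* Sound travels at roughly 343 m/s (0.0343 cm/us) and the pulse
     covers the distance twice, so: distance = echo_time * 0.0343 / 2. */
  float echo_to_cm(unsigned int echo_us)
  {
      return (float)echo_us * 0.0343f / 2.0f;
  }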


Rotation Sensors

The number of wheel rotations is measured by built-in rotation sensors in the following way: a wooden disc resembling a gear wheel is attached to each wheel and turns with it. Next to it, fixed on the robot's body, sits the actual sensor, which reacts to the light change every time one of the disc's teeth passes by while the robot is moving. More precisely, the number of light changes is counted: since the disc has 19 teeth (and therefore also 19 gaps), 38 measured light changes represent one whole rotation of a wheel.
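A small helper converting the counted light changes into travelled distance; the wheel circumference is a hypothetical placeholder:

  #define CHANGES_PER_REV 38            /* 19 teeth + 19 gaps */
  #define WHEEL_CIRCUM_CM 17.0f         /* hypothetical value; measure the real wheel */

  /* Distance travelled by one wheel, from its light-change count. */
  float counts_to_cm(unsigned int counts)
  {
      return (float)counts / CHANGES_PER_REV * WHEEL_CIRCUM_CM;
  }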


STM32 platform

STM32 is a family of 32-bit microcontroller integrated circuits by STMicroelectronics. The STM32 chips are grouped into related series that are based around the same 32-bit ARM processor core, such as the Cortex-M7F, Cortex-M4F, Cortex-M3, Cortex-M0+, or Cortex-M0. Internally, each microcontroller consists of the processor core, static RAM, flash memory, a debugging interface, and various peripherals.


Visual Studio

The aim of the Visual Studio program is to create a visualisation of the physical Tatrabot's actions. It generates a map with probable obstacles and, using the A* search, marks the most efficient route to the final destination depending on the starting point, the final destination, and the obstacles. This lets us test the task faster than in the real world.

It utilizes the following functions:

  • fill_the_map creates a map with a given number of rows and columns, simulating the real field.
  • char robot_orientation marks the robot's orientation at its current position with an arrow symbol and marks obstacles with the "@" symbol.
  • print_level_of_gray visualises the magnitude of the obstacle-occurrence probability at each spot separately.
  • initialize_plan_data_structures organises the data (starting point, destination, and visited spots) needed for the evaluation of the plan.
  • insert_into_queue calculates values for each spot and inserts them into a queue.
  • create_plan finds the most appropriate route and computes the plan's length.
  • navigate_to executes and prints the calculated route; if the create_plan function did not find a suitable route, it prints "No trajectory exists."
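The overall simulation flow could be wired together like this; all signatures and the example goal are assumptions based on the descriptions above, not the project's actual prototypes:

  #include <stdio.h>

  extern void fill_the_map(void);
  extern void print_level_of_gray(void);
  extern void navigate_to(int row, int col);

  int main(void)
  {
      fill_the_map();           /* build the simulated field                */
      print_level_of_gray();    /* visualize the obstacle probabilities     */
      navigate_to(14, 14);      /* plans with create_plan, then executes;   */
                                /* prints "No trajectory exists." when no   */
                                /* suitable route is found                  */
      return 0;
  }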


Video Documentation

Video Documentation

Full Code

Full Code on Github