Autonomous Drone with AI Object Detection
| Qty | Part | RS Stock No. |
| --- | --- | --- |
| 1 | RS PRO AA NiMH Rechargeable Batteries, 2.7Ah, 1.2V | 1834279 |
| 1 | Raspberry Pi Camera Module, CSI-2, 3280 x 2464 pixel resolution | 9132664 |
| 1 | Raspberry Pi 4 8GB Starter Kit | 2097566 |
| 1 | RS PRO NiCd/NiMH Battery Charger for 9V, AA, AAA, C, D cells, with EU and UK plugs | 8003079 |
| 1 | DesignSpark Official Raspberry Pi Black Case with 7in Capacitive Touch Screen | 1228914 |
| 1 | Pixhawk Flight Controller | |
| 1 | Alien RR7 300mm Quadcopter Frame | |
| 1 | Ori 4in1 Mini 25A ESC | |
| 4 | EMAX ECO II Series 2807 Motor | |
The main purpose of this project is to verify the feasibility of developing truly autonomous solutions for future industrial applications. Understanding the hardware required for this purpose is vital to rolling out fleets of unmanned aerial vehicles for package delivery, smart surveillance, and search and rescue operations.
Hardware Design and Assembly
In recent years, drone hobbyists and enthusiasts have created a market for off-the-shelf constituent parts for unmanned vehicles, typically supported by open-source software. Drone hardware is typically connected as shown in the following diagram.
The drone was assembled using standard hardware available for purchase online. All parts are listed in the parts list.
The drone was initially built without the Raspberry Pi or Pi camera attached, so it could be confirmed to fly correctly using a radio controller.
Relatively large and powerful motors were chosen, as the frame had to be big enough to carry the Raspberry Pi and the motors needed to lift the additional weight.
Basic Software Setup
An application called Mission Planner was installed on a Windows laptop. This is known as ground control software and is used to install firmware on a flight controller. It is also used to perform sensor calibrations, modify PID loops and adjust various settings. The flight controller was set up with the correct firmware for the copter type, and the radio controller was connected and calibrated to be consistent with the flight controller’s PWM outputs for the motors.
Once this was done, the drone was taken for a test flight. The motors were clearly responding incorrectly to inputs from the radio controller, as the drone was unable to lift off stably. This indicated a problem with the data being sent from the flight controller to the electronic speed controllers, which delivered power to the motors. A setting was changed on the flight controller via Mission Planner which rectified this: the electronic speed controllers were expecting a different data rate from the one originally being provided. The following test flight was successful.
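The article does not name the exact setting that was changed. On ArduPilot-based flight controllers such as the Pixhawk, the ESC output protocol is selected by the MOT_PWM_TYPE parameter, so a fix along these lines is plausible. The change here was made through the Mission Planner GUI, but the same parameter can also be set programmatically; the sketch below assumes pymavlink, and the port name, baud rate, and parameter value are illustrative only:

```python
# Sketch: setting the ESC output protocol parameter on an ArduPilot
# flight controller with pymavlink. Port, baud, and value are assumptions.
from pymavlink import mavutil

# Connect over the flight controller's USB serial port
master = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)
master.wait_heartbeat()

# MOT_PWM_TYPE selects the ESC protocol (0 = Normal PWM, 1 = OneShot,
# 4 = DShot150, ...); the value the ESCs expect depends on the hardware.
master.mav.param_set_send(
    master.target_system,
    master.target_component,
    b'MOT_PWM_TYPE',
    0,  # 0 = standard PWM
    mavutil.mavlink.MAV_PARAM_TYPE_REAL32,
)
```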
AI Software Setup
The Raspberry Pi was set up with TensorFlow. The associated Python modules were installed, and the example implementation was modified for the purposes of this project. The model used was the COCO SSD MobileNet v1 model, which supports the detection of 80 different object types. Once installed, a basic Python script was written to test the performance of the program on the Raspberry Pi. Since this testing was done at a desk, the Raspberry Pi touchscreen was used: the camera could be mounted on the included case, and the screen could display the detected-object stream. This yielded a detection rate of approximately 5 fps.
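The test script itself is not reproduced in the article. As a minimal sketch, inference with the standard TensorFlow Lite build of COCO SSD MobileNet v1 typically looks like the following; the model file name is a placeholder, and the output ordering follows the standard release of the model:

```python
# Minimal sketch: running COCO SSD MobileNet v1 (TensorFlow Lite) on one
# camera frame. The model file name is a placeholder, not from the article.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path='detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect(frame_rgb):
    """Run the SSD model on a 300x300 uint8 RGB frame and return raw outputs."""
    input_data = np.expand_dims(frame_rgb, axis=0)  # shape (1, 300, 300, 3)
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # [ymin, xmin, ymax, xmax], normalised 0..1
    classes = interpreter.get_tensor(output_details[1]['index'])[0]  # COCO class indices
    scores = interpreter.get_tensor(output_details[2]['index'])[0]   # confidences 0..1
    return boxes, classes, scores
```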
Since the only object required to be detected for this project was a person, the script was modified to disregard other detected objects, and therefore not to create annotations for them on the image. To avoid confusion as to which object should be followed, the program was also modified to allow a maximum of one object annotation per frame. This was set to annotate only the largest object, as this could be considered the closest object to the camera. Below are screenshots of the program before and after modification.
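The modified filtering step is not listed in the article; a minimal sketch, assuming the normalised outputs from the detection sketch above and an arbitrary confidence threshold, might look like this:

```python
# Sketch of the modification: ignore everything except 'person' and keep
# only the largest (presumed closest) detection. Threshold is an assumption.
PERSON_CLASS_ID = 0  # index of 'person' in this model's COCO label map
MIN_SCORE = 0.5      # assumed confidence cut-off

def largest_person(boxes, classes, scores):
    """Return the box of the biggest confidently detected person, or None."""
    best_box, best_area = None, 0.0
    for box, cls, score in zip(boxes, classes, scores):
        if int(cls) != PERSON_CLASS_ID or score < MIN_SCORE:
            continue  # disregard other object types and weak detections
        ymin, xmin, ymax, xmax = box
        area = (ymax - ymin) * (xmax - xmin)
        if area > best_area:
            best_box, best_area = box, area
    return best_box
```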
Streaming From Drone to Ground
Due to cost restrictions, it was decided to use the Raspberry Pi’s onboard WiFi to stream video data from the camera to a grounded computer while the drone was in flight. An additional Python module was created, which provided basic functions for connecting to the grounded computer and sending data over the TCP protocol. The stream was very slow and the camera image was blurry due to vibrations from the motors, but it was able to show the drone moving to follow detected objects. Detected-object boxes were sent as part of the stream, which required them to be drawn onto the camera image each time before transmission.
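The streaming module is not reproduced in the article. A minimal sketch of the approach described, in which each annotated frame is JPEG-compressed and sent over TCP with a simple length-prefixed framing, could look like this; the host address, port, and class name are placeholders:

```python
# Sketch of the ground-station link: JPEG-encode each annotated frame
# (boxes already drawn on) and send it over TCP with a length prefix.
# Host, port, and the GroundLink name are placeholders, not the article's API.
import socket
import struct

import cv2

class GroundLink:
    def connect(self, host='192.168.0.10', port=5000):
        self.sock = socket.create_connection((host, port))

    def send_frame(self, annotated_frame):
        """Compress the annotated frame and transmit it to the ground station."""
        ok, jpeg = cv2.imencode('.jpg', annotated_frame)
        if not ok:
            return
        data = jpeg.tobytes()
        # A 4-byte big-endian length prefix lets the receiver re-frame the stream
        self.sock.sendall(struct.pack('>I', len(data)) + data)
```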
Full Hardware Integration
The Raspberry Pi was attached to the drone and powered using a standard USB portable charger. This provided the stable 5V supply the Pi requires to avoid system crashes and memory corruption. A 5V voltage regulator could theoretically be used instead to power the Pi directly from the battery, which would save space and weight on the drone. However, the high and rapidly changing currents drawn by the drone motors make the battery voltage unstable, which can in turn lead to unstable output voltages from an inexpensive voltage regulator. The 5V portable charger was sufficient to power the Pi, as the Pi was measured to draw less than 5W when running TensorFlow-based applications.
One of the Pi’s USB ports was used to connect a USB-to-micro-USB cable between the Pi and the flight controller. This enabled the Pi to send encoded command messages to the flight controller.
Once the Pi was securely attached, testing began to ensure the data link between the flight controller and the Pi was operational. This took the form of a simple Python script that declared a drone object, taking as arguments the name of the Pi’s USB serial port and a suitable baud rate. The drone was then ‘armed’ as part of the script, which started the motors running. This was confirmed to work, showing the Pi could instruct the drone.
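The article does not name the Python module used, but DroneKit-Python matches the description (a drone object constructed from a port name and baud rate, then armed), so the sketch below assumes it; the port and baud values are typical rather than confirmed:

```python
# Sketch of the link test, assuming DroneKit-Python (the article does not
# name the module). Port and baud rate are typical values, not confirmed.
from dronekit import connect, VehicleMode

# Declare the drone object over the Pi's USB serial port
vehicle = connect('/dev/ttyACM0', baud=57600, wait_ready=True)

# Arming starts the motors, confirming the Pi can instruct the drone
vehicle.mode = VehicleMode('STABILIZE')
vehicle.armed = True
```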
Achieving Autonomous Control
Using the MAVLink protocol, the Raspberry Pi can send flight commands to the flight controller, and the flight controller can return useful telemetry information to the Pi. This can be achieved using an associated Python module. Since the drone relies only on an accelerometer for position control, it ‘drifts’ significantly when hovering, and this is exacerbated in windy conditions. The types of autonomous flight instruction were therefore restricted to ensure the safety of both the drone and its surroundings. To demonstrate the self-guided movement of the drone, a simple yaw command was added based on the position of a detected person in the viewfinder of the camera. These commands worked as an interruption to the standard RC input, meaning both types of input could be used in the same flight.
If a person was significantly to the right of the camera’s line of sight, the drone would be instructed to turn to the right. Similarly, if a person was to the left, the drone would turn to the left. This allowed for manual control to adjust the drone’s horizontal movement to avoid excessive drifting in space, while still displaying autonomous capabilities.
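The steering code is not shown in the article. Since the yaw commands interrupt the standard RC input, one plausible sketch uses RC channel overrides; it again assumes DroneKit, the usual yaw channel mapping, and arbitrary dead-band and PWM values:

```python
# Sketch of yaw steering from the detected person's position, assuming
# DroneKit RC overrides. Channel, dead-band, and PWM values are assumptions.
YAW_CHANNEL = '4'  # yaw on a typical channel mapping
DEADBAND = 0.1     # ignore small offsets to avoid oscillation

def steer_towards(vehicle, box):
    """Yaw right/left if the person's box centre is off-centre in the frame."""
    if box is None:
        vehicle.channels.overrides[YAW_CHANNEL] = None  # return control to RC
        return
    ymin, xmin, ymax, xmax = box
    offset = (xmin + xmax) / 2.0 - 0.5  # box centre, normalised; 0 = frame centre
    if offset > DEADBAND:
        vehicle.channels.overrides[YAW_CHANNEL] = 1600  # nudge yaw right
    elif offset < -DEADBAND:
        vehicle.channels.overrides[YAW_CHANNEL] = 1400  # nudge yaw left
    else:
        vehicle.channels.overrides[YAW_CHANNEL] = None  # centred: no override
```

Clearing the override (setting it to None) hands yaw back to the radio controller, which is consistent with manual and autonomous input sharing the same flight.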
This project was able to demonstrate the abilities of neural-network-based image processing for autonomous navigation applications. It showed this could be achieved with consumer hardware and open-source software, proving that the technology can feasibly be implemented industrially. There are, however, clear improvements that would be needed for a robust industrial solution. The camera on the drone requires stabilization to counteract vibrations from the motors. The drone should also include a depth sensor to avoid drifting-related crashes. A more efficient data transmission protocol should also be designed to transmit video to grounded devices at a higher framerate.
Thanks to the RS Components Grassroots student project fund, key hardware components including the Raspberry Pi and camera could be sourced for this project.