Student Innovation - Gigapixel Photography with the Raspberry Pi HQ Camera
| Qty | Part | RS Stock No. |
|---|---|---|
| 1 | Raspberry Pi 4 4GB Model B | 182-2096 |
| 1 | Raspberry Pi HQ Camera | 201-2852 |
| 35 | RS PRO M6 x 8mm Hex Socket Button Screw, Black Self-Colour Steel | 822-9117 |
| 4 | 300mm x 8mm Diameter Stainless Steel Rod | 786-6015 |
| 9 | RS PRO M3 x 20mm Hex Socket Cap Screw, Plain Stainless Steel | 293-319 |
| 12 | RS PRO M4 x 8mm Hex Socket Button Screw, Plain Stainless Steel | 183-8626 |
| 1 | Bosch Rexroth Aluminium Strut, 20 x 20mm, 6mm Groove, 3000mm Length | 466-7219 |
| 1 | GT2 timing belt | |
| 6 | 2020 extrusion corner bracket | |
| 3 | NEMA17 stepper motor | |
| 35 | M6 T-nut | |
| 1 | Anet A8 mainboard | |
| 2 | 8mm lead screw (I used 345mm length) | |
| 2 | 8mm flanged lead nut | |
| 4 | M2.5 x 5mm socket screw | |
| 4 | M2.5 threaded insert | |
| 12 | M3 x 8mm countersunk socket screw | |
| 2 | Flexible shaft coupling | |
| 4 | M3 threaded insert | |
| 2 | Micro limit switch | |
| 8 | M3 square nut | |
| 1 | Large format lens (I used a Carl Zeiss Jena 12cm f/4.5 Tessar) | |
| 1 | Corrugated plastic board | |
| 2 | Z axis stepper mount | |
| 1 | X axis mount (left) | |
| 1 | X axis mount (right) | |
| 2 | Z axis top mount | |
| 1 | X axis carriage mount | |
| 1 | Pi HQ camera mount | |
| 1 | Z axis limit switch mount | |
| 1 | X axis limit switch mount | |
| 1 | X axis limit switch contact | |
In early 2020 the Raspberry Pi Foundation released the Pi HQ Camera. It features a sensor which is bigger and higher resolution than those of previous Pi cameras, and it is designed for use with C- or CS-mount lenses (commonly used in things like CCTV cameras). This is great, but what if you want to take even higher resolution images with your Raspberry Pi?
I created a motion system which moves the Raspberry Pi HQ camera around a 2D plane, behind a fixed position large format lens. As this lens projects an image many times larger than the sensor of the Pi HQ camera, images can be taken at multiple locations behind the lens, which can later be stitched into a single high-resolution image.
Old large format lenses can be easily found on sites like eBay for relatively little money, and although using them on a standard digital camera is possible, only the centre of the frame is captured. This is because they are designed for a much larger film/sensor size. From the illustration below, it can be seen that when a large format lens is mounted in front of a standard sized sensor, it can only see a small part of the full image. This reduction in view is commonly known as the ‘crop factor’.
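As a rough worked example, the crop factor can be estimated as the ratio of the format diagonal to the sensor diagonal. The dimensions below are assumptions, not measurements from this build: roughly 6.3 x 4.7 mm for the HQ Camera's IMX477 sensor, and 9 x 12 cm as one plausible plate format for a 12cm Tessar.

```python
import math

# All dimensions in mm. These are approximate, assumed values -- substitute
# the actual sensor size and the format your lens was designed to cover.
SENSOR_W, SENSOR_H = 6.29, 4.71   # Pi HQ Camera (IMX477), approx.
PLATE_W, PLATE_H = 120.0, 90.0    # assumed 9 x 12 cm plate format

def diagonal(w: float, h: float) -> float:
    """Diagonal of a w x h rectangle."""
    return math.hypot(w, h)

# Crop factor: how many times larger the lens's intended format diagonal
# is than the sensor diagonal.
crop_factor = diagonal(PLATE_W, PLATE_H) / diagonal(SENSOR_W, SENSOR_H)
print(f"Crop factor: {crop_factor:.1f}")
```

With these assumed numbers the sensor sees roughly a nineteenth of the frame diagonal, which is why so many individual captures are needed to cover the whole image circle.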
In order to produce an image which covers the entire large format frame, instead of rigidly mounting the lens to the camera sensor, the camera sensor can be translated around the 2D plane of the image. Individual images can then be captured and stitched together to create a single high-resolution image showing the full image projected by the large format lens.
To move the camera around on a fixed plane, a motion mechanism similar to that of a 3D printer was used. The frame is constructed from 2020 aluminium extrusion, with the camera mounted to bearings which run on smooth rods. The camera is pulled left and right by a belt connected to a stepper motor. This horizontal axis is mounted at each end to a threaded rod, which raises and lowers the camera using two more stepper motors. The lens is mounted to a rail which allows it to slide backwards and forwards in order to focus. Rigid board is mounted to the back and sides of the frame to block excess light, and black corrugated plastic covers the front and top of the housing while images are being taken.
The stepper motors are connected to the old 3D printer control board, and a basic program was created using the Arduino IDE which takes commands from the Raspberry Pi. These commands tell the control board which direction to turn the stepper motors, and how many steps to move by. There is also a 'home' command which runs at the start of the program to position the camera in a known location, using a limit switch on each axis.
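The command grammar itself is whatever the Arduino sketch on the control board parses; as an illustrative sketch only, with a made-up `"<axis> <steps>"` format, the Pi side might build its command lines like this:

```python
def format_move(axis: str, steps: int) -> str:
    """Build one ASCII command line for the motor control board.

    The "<AXIS> <signed steps>" grammar here is a hypothetical example --
    the real format is whatever the Arduino program expects.
    """
    if axis not in ("X", "Z"):
        raise ValueError("this rig has an X (belt) axis and a Z (lead screw) axis")
    return f"{axis} {steps}\n"

# On the Pi, each line would then be written over USB serial, e.g. with
# pyserial (not shown running here, as it needs the board attached):
#   import serial
#   link = serial.Serial("/dev/ttyUSB0", 115200, timeout=5)
#   link.write(format_move("X", 1600).encode("ascii"))
# plus a bare "HOME\n" at start-up to seek the limit switches.
```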
Python is used on the Raspberry Pi to plan a route for the motion system to take. The user inputs the desired number of rows and columns which the final image will be made up of, and the starting position and path are generated. This path is centred behind the lens, in the middle of the known limits of the motion system, and the path and individual image locations are then plotted. The path planner also has an input for image overlap, which is useful both for blending the images later and for producing a coarse test path.
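The core of such a planner can be sketched in a few lines of Python. The serpentine ordering, placeholder travel limits, and rounding below are illustrative assumptions, not the script attached to this article:

```python
# Usable travel in motor steps -- placeholder values, not measured from the rig.
FRAME_W_STEPS = 16000
FRAME_H_STEPS = 12000

def plan_path(cols: int, rows: int, overlap: float = 0.1):
    """Return capture positions centred in the frame, in serpentine order.

    overlap is the fraction by which neighbouring tiles overlap, so the
    distance between capture positions is the tile pitch scaled by
    (1 - overlap).
    """
    pitch_x = (FRAME_W_STEPS // cols) * (1 - overlap)
    pitch_z = (FRAME_H_STEPS // rows) * (1 - overlap)
    # Centre the grid in the middle of the motion system's travel.
    x0 = (FRAME_W_STEPS - pitch_x * (cols - 1)) / 2
    z0 = (FRAME_H_STEPS - pitch_z * (rows - 1)) / 2
    path = []
    for r in range(rows):
        # Alternate sweep direction on each row to minimise travel.
        cols_order = range(cols) if r % 2 == 0 else reversed(range(cols))
        for c in cols_order:
            path.append((round(x0 + c * pitch_x), round(z0 + r * pitch_z)))
    return path

positions = plan_path(cols=3, rows=2, overlap=0.1)
```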
Once the path is confirmed, the Raspberry Pi sends each movement command to the motor control board, stopping to take an image at each pre-determined location. Once all of the images have been taken, they can be processed and aligned into a single complete image using NumPy and OpenCV. Although this stitching can be done on the Pi 4 itself, the overall resolution is limited by the available RAM: only images of around 500 megapixels can be created on a 4GB Pi. One of the images stitched using this NumPy/OpenCV method on the Pi is shown below. Although the lens aperture and the camera's ISO, shutter speed and white balance are supposedly fixed, there are clear inconsistencies between the individual images. Some of these can be reduced by converting the result to greyscale, but the boundaries between individual images are still noticeable. Stitching at full resolution with the same NumPy/OpenCV method is possible on a more powerful desktop PC, which has so far produced images of up to 40560 x 30400 pixels (1.23 gigapixels). With overlapping images, proper blending is possible in OpenCV, but I have not yet tried this approach.
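To make the naive stitching step concrete, here is a toy version in NumPy, assuming equal-sized tiles captured on a known grid with no overlap or blending (which is why exposure differences between tiles remain visible). The tile size is scaled down from the HQ Camera's real 4056 x 3040 resolution to keep memory use small, and random-looking flat arrays stand in for real captures:

```python
import numpy as np

# 1/10-scale tiles for the demo; the real HQ Camera tiles are 3040 x 4056.
TILE_H, TILE_W = 304, 406

def stitch_grid(tiles, rows, cols):
    """Paste a row-major list of (TILE_H, TILE_W, 3) tiles onto one canvas."""
    canvas = np.zeros((rows * TILE_H, cols * TILE_W, 3), dtype=np.uint8)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)  # grid position from row-major index
        canvas[r * TILE_H:(r + 1) * TILE_H,
               c * TILE_W:(c + 1) * TILE_W] = tile
    return canvas

# Stand-in "captures": four flat grey tiles of increasing brightness.
tiles = [np.full((TILE_H, TILE_W, 3), 40 * i, dtype=np.uint8) for i in range(4)]
mosaic = stitch_grid(tiles, rows=2, cols=2)
```

Proper blending across overlapping tiles (for example with OpenCV's multi-band blending) would replace the hard paste in the inner loop; this sketch only shows the grid bookkeeping.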
I have done another test using Microsoft Image Composite Editor, which does a good job of blending the images together (though some faint lines and mismatched edges are visible), and can stitch images at over 1,000 megapixels. This equates to a 10x10 grid of images from the Pi HQ camera, with an overlap of around 10%. A reduced resolution image is shown below, along with a cropped region.
The link below should take you to a page displaying the image at full resolution.
3D printable STL files are attached to this article, along with the Arduino code (which runs on the Anet mainboard), a Python script for taking the images, and another Python script for merging the images.
The parts list at the top of this page contains (I think) every part used in this project (apart from the 12V mainboard and 5V Pi power supplies). Alternatives to the old 3D printer parts I used can easily be found on RS!