
Design, Implementation and Evaluation of Raspberry Pi Based Parking Monitoring System

Songzeng Fan

A report submitted for the course COMP 8755: Individual Computing Project

Supervised by: Sid Chi-Kin Chau
The Australian National University

June 2020
© Songzeng Fan 2020


Except where otherwise indicated, this report is my own original work.

Songzeng Fan
12 June 2020


Acknowledgments

I would like to express my thanks to:

1. My supervisor, Dr. Sid Chau. It was his patient guidance that helped me finish the project.

2. Dr. Sid's PhD student Nan Wang, his research student Henry Zhu, and his master's students Haohan Lian and Xiyu Wang. They provided me with a lot of help.

3. My family and my friends. They always support me in life and in spirit.


Abstract

Parking monitoring systems are widely used in the parking lots of shopping malls, apartments, and office buildings. Today's technologies mostly need special sensors [Shih and Tsai, 2014], or transfer the data to a remote server and use its powerful computing capacity to process the captured images [Hudson et al., 2018]. In this report, I embed a parking lot monitoring system into the Raspberry Pi, a small single-board computer. The system runs in real-time on this platform with low computational capacity, and can accurately identify vehicles entering and leaving even when the picture is blurred (to protect the privacy of pedestrians and vehicles, a blur filter is placed in front of the camera). The algorithm I applied is mainly based on background subtraction provided by OpenCV. Furthermore, optimizations such as reducing the calculated area of the input image and balancing the light intensity in the picture are also used to improve the overall operating efficiency of the system and the accuracy of identification. The system can also work at night, when the camera turns on night vision mode. I also compare the working efficiency and recognition accuracy of the system among different Raspberry Pi models. In the future, the system can be optimized for long hours of outdoor work.

Keywords: Raspberry Pi, Parking lot monitoring system, OpenCV, Blurry Filter


Contents

Acknowledgments

Abstract

1 Introduction
  1.1 Problem Statement
  1.2 Motivations
  1.3 Project Scope
  1.4 Report Outline

2 Background and Related Work
  2.1 Background
    2.1.1 Raspberry Pi
    2.1.2 OpenCV
    2.1.3 Background Subtraction
  2.2 Related work
  2.3 Summary

3 Design and Implementation
  3.1 Basic Software Design
    3.1.1 Motivation
    3.1.2 Mixture of Gaussian (MOG)
    3.1.3 Determination of vehicle movement status
  3.2 Optimization Method
    3.2.1 Reduce the Actual Detection Area
    3.2.2 Luminance Proportion
  3.3 Prototype Making
    3.3.1 Core Component
    3.3.2 Assemble
  3.4 Summary

4 Experimental Methodology
  4.1 Software platform
  4.2 Hardware platform
  4.3 System Testing Process
    4.3.1 Validity and accuracy verification
  4.4 Running Performance Comparison
  4.5 Summary

5 Results and Discussion
  5.1 Result of Validity and accuracy verification
  5.2 Performance comparison
  5.3 Discussion
  5.4 Summary

6 Conclusion
  6.1 Limitation and Future Work

Bibliography

Appendix 1
  .1 Project Description
    .1.1 Project Title
    .1.2 Project Description
    .1.3 Learning Objective
    .1.4 Expected Outcome

Appendix 2
  .2 Study Contract

Appendix 3
  .3 Description of Artefacts

Appendix 4
  .4 README file
  Design, Implementation and Evaluation of Raspberry Pi based parking monitoring system
    .5 Description of Artefacts
    .6 Setup
    .7 Run


List of Figures

2.1 Raspberry Pi Model 4, from https://techcrunch.com/2019/07/09/the-raspberry-pi-4-doesnt-work-with-all-usb-c-cables/
2.2 Background frame, from wikipedia.org/Foreground detection

3.1 The workflow of the whole system
3.2 The input image after applying blurry filter
3.3 To create MOG or MOG2 background subtractor using OpenCV
3.4 Set a dividing line in the picture
3.5 Set buffer areas on both sides
3.6 Reduce the detection area
3.7 Images with uneven light intensity distribution
3.8 Light destroys the established model
3.9 After applying brightness balance method
3.10 Raspberry Pi and its extensible interface
3.11 Fisheye camera and its effect
3.12 Wide-angle camera and its effect
3.13 BattBorg - Pi Battery Power Board
3.14 Use powerbank to supply power
3.15 Double-sided frosted plastic material
3.16 Assembly process

4.1 Actual experiment site
4.2 Placed a layer of the blurry filter
4.3 Placed two layers of the blurry filter
4.4 Placed three layers of the blurry filter
4.5 Placed a special blurry filter

5.1 Result Comparison Between RPI 3B and 4B

1 Structure of Artefact


List of Tables

4.1 Specification Hardware Comparison of Model 3B and Model 4B

5.1 Record of Validity and Accuracy
5.2 Record Performance Comparison


Chapter 1

Introduction

In this chapter, I will introduce the main problem I encountered in Section 1.1. In Section 1.2, the aim and the contribution of this project are given. In Section 1.3, I will introduce the whole project from a global perspective. The report outline is provided in Section 1.4.

1.1 Problem Statement

In modern society, the parking monitoring system plays a significant role in the intelligent management of a parking lot. By monitoring the traffic at the entrance and exit of the parking lot, it is very convenient to know the currently vacant parking spaces, which helps drivers easily find a parking lot with free spaces. In this project, we simplified the problem to monitoring the flow of vehicles at the entrance and exit. Currently, we focus on parking lots with only one entrance, which means that if we place the camera at the entrance, we can monitor all vehicles entering and leaving.

Vehicle flow monitoring is now widely researched and applied. The widely used methods either require special sensors [Shih and Tsai, 2014] or need to transfer the data to a server for computation [Hudson et al., 2018]. The former method increases the overall cost. For the latter method, the entire system needs a server that can provide powerful computation resources, which also makes its cost higher; besides, if it is a remote server deployed in the cloud, the entire system is at risk of being hacked. Therefore, in this project I use a Raspberry Pi for local computing and use the official camera as the sensor. The Raspberry Pi's official camera comes with night vision, which allows this monitoring system to work at night.

Another problem I encountered was that, due to the perspective of the camera, it could always capture the faces of passing pedestrians and the cars themselves, which is not conducive to protecting their privacy. So in the system design, I added a filter in front of the camera to ensure that the images input to the system are all blurred, in order to protect privacy.


1.2 Motivations

First of all, using a Raspberry Pi for vehicle monitoring can greatly reduce costs. The whole system also has good mobility: it is very small and light, and for different parking lots only some parameters in the program need to be changed for it to work. The addition and optimization of functions can improve the overall performance. Blurring the input image also protects privacy well.

1.3 Project Scope

This project can be summarized in the following parts:

1. Build a Raspberry Pi based device that can work all day and process the input images in real-time.

2. Place a blur filter in front of the camera and optimize the program to protect the privacy of pedestrians and vehicles.

3. Compare the behaviour of the system among different Raspberry Pi models and the performance differences when placing different numbers of layers of the blurry filter.

1.4 Report Outline

In Chapter 2, I will give some related background for this project. Chapter 3 covers the whole design, including the system, the installation of the Raspberry Pi, and its required components. In Chapter 4, I will talk about the process of the experiment. The specific results and the performance comparison of the system on different Raspberry Pi models are given in Chapter 5. Then I will give the conclusion and future work in Chapter 6.


Chapter 2

Background and Related Work

In this chapter, I will introduce some key background knowledge involved in my project. Section 2.1 gives the background material necessary in order to read this report, and related work is given in Section 2.2.

2.1 Background

2.1.1 Raspberry Pi

The first thing I want to introduce is the base of this project and also the computational platform for the parking lot monitoring system: the Raspberry Pi. It is a card-sized computer that looks like the figure below.


Figure 2.1: Raspberry Pi Model 4 from https://techcrunch.com/2019/07/09/the-raspberry-pi-4-doesnt-work-with-all-usb-c-cables/.

The Raspberry Pi is a flexible platform that supports the user to watch videos, surf the Internet, write programs, and do experiments [Richardson and Wallace, 2012]. Moreover, as can be seen from the above picture, the Raspberry Pi has many expandable interfaces, including HDMI (High Definition Multimedia Interface), USB (Universal Serial Bus), an RCA connector, and so on. There are also built-in Ethernet, WLAN, and Bluetooth network connections.

As for software, Raspbian is the recommended operating system for normal use on a Raspberry Pi. It has been the official operating system for all models of the Raspberry Pi since 2015 and is highly optimized for the Raspberry Pi line's low-performance hardware [Harrington, 2015].

The models used in this project are the Raspberry Pi 3B and Raspberry Pi 4B. In some basic configurations, such as the expandable interfaces and network services mentioned above, there is no significant difference between the two. The 4B model uses a four-core CPU with a main frequency of 1.5 GHz, while the 3B model uses a four-core CPU with a main frequency of 1.2 GHz. Another performance improvement over the 3B is that the 4B has 4 GB of memory, while the 3B only has 1 GB. Here we use these two models to compare the performance of the same system on two different Raspberry Pis.


2.1.2 OpenCV

The full name of OpenCV is the Open Source Computer Vision Library. It provides developers with many useful tools for computer vision. Furthermore, the design of OpenCV pays great attention to working efficiency, and it supports real-time applications very well [Bradski and Kaehler, 2008]. This feature provides great support for developers to process the input images of the entrance and exit of the parking lot in real-time.

Another feature of OpenCV is that it supports cross-platform and cross-language development. It not only supports mainstream operating systems such as Windows, Linux, and Mac OS X, but also supports the Raspbian operating system. At the same time, it provides hardware optimization for the ARM instruction set, which is used by the Raspberry Pi's CPU.

Also, OpenCV supports a variety of mainstream programming languages, including Python, Ruby, MATLAB, etc., which also provides convenience for development.

2.1.3 Background Subtraction

Background subtraction is a widely used method for detecting moving objects. As its name indicates, it obtains the target we need to detect by subtracting the background from the image. The rationale of this approach is to compare the input frame with a "background frame" and find the differences, which usually correspond to moving objects. The "background frame" discussed here is chosen with no moving objects inside it, so the results of the subsequent comparison are more accurate [Piccardi, 2004]. The following picture clearly shows the background frame. There are no passengers on the left-hand side, so we can simply identify them when the right-hand side frame is input.

Figure 2.2: Background frame from wikipedia.org/Foreground detection
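To make the idea concrete, the following minimal Python/OpenCV sketch implements this naive comparison by differencing each incoming frame against a stored background frame. The file name and threshold value are illustrative assumptions, not part of the report's final method.

import cv2

# Load a pre-selected background frame (no moving objects inside).
background = cv2.imread("background_frame.jpg", cv2.IMREAD_GRAYSCALE)

def moving_object_mask(frame_bgr, threshold=30):
    """Mark pixels that differ noticeably from the background frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)   # per-pixel absolute difference
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask                            # white pixels = foreground (moving objects)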

After selecting the background frame, the next step is to distinguish the foreground from the background. At present, the mainstream approaches that distinguish the different parts are the Mixture of Gaussians (MOG), k-nearest neighbours (KNN), and other classification algorithms. In this project, MOG is preferred due to its better resistance to noise.

2.2 Related work

Parking monitoring systems have been widely used in recent years. And as a low-cost computing platform, the Raspberry Pi can support low-budget projects well. In recent years, many people have developed parking lot monitoring systems based on the Raspberry Pi. For example, Nataliana et al. [2014] introduced a parking monitoring system which monitors an ordered parking area. In their project, they place infrared sensors in the monitored area to detect vehicle movement.

There are also people who implement background subtraction algorithms on the Raspberry Pi. Cocorullo et al. [2015] designed a background subtraction method which could be embedded into video surveillance systems on the Raspberry Pi. They claim that their algorithm is robust against the noise typically occurring in both indoor and outdoor environments.

2.3 Summary

In this chapter, I introduced the related background knowledge of the Raspberry Pi, which is the main device used in this project; OpenCV, which integrates a number of methods useful for this project; and background subtraction, which is the main algorithm of this project. In Chapter 3, I will talk about the design of the project, including the algorithm design, optimization methods, and prototype making.


Chapter 3

Design and Implementation

In this chapter, the design and implementation method will be introduced. In Section 3.1, I will introduce the software design, including the logic diagram of the entire software operation and some key algorithms. In Section 3.2, I will introduce some optimizations for the Raspberry Pi and the outdoor experiment. In Section 3.3, I will show how the prototype hardware for the experiment was made.

3.1 Basic Software Design

I cooperated with my partner Haohan Lian to implement this part, and I later optimized our original design. Although this part is not entirely my own effort, I will still explain the design in detail for the sake of reading consistency. The following flow chart shows the workflow of the whole system.

Figure 3.1: The workflow of the whole system

3.1.1 Motivation

The main algorithm involved is background subtraction. It brings two benefits. First, as an object detection algorithm, background subtraction requires fewer computing resources than deep learning, which suits the Raspberry Pi's low computing capacity. Second, one possible risk is that hackers compromise the Raspberry Pi's camera, causing the pictures captured by the camera to leak and thus violating the privacy of vehicles and pedestrians. To protect their privacy, we place a blurry filter in front of the camera, so the input image looks like Figure 3.2.


Figure 3.2: The input image after applying blurry filter

In this case, the external contour of objects is less clear. If the traditional target detection approach is used, that is, directly identifying the object in the picture, the accuracy will be low. From these two points of view, background subtraction is a more suitable method for the current situation. I will briefly explain its principle in the following content.

For our software, we chose the Mixture of Gaussians for modeling background objects. The naive background subtraction method mentioned in Section 2.1 always chooses a background frame, and then compares the current frame with the background frame to get the foreground objects. This method is very effective, but it has the disadvantage that it requires background objects to appear in almost the same position, even at pixel scale. This is very inconvenient for outdoor experiments: the shadow of an object moves with the changing angle of the light, or a tree branch falls. These changes of background objects may affect the comparison between subsequent frames and the selected background frame.

3.1.2 Mixture of Gaussian (MOG)

To solve the above problems, KaewTraKulPong and Bowden [2002] describe an approach to model each background pixel by a mixture of K Gaussian distributions (K chosen from 3 to 5). The weights of the mixture represent the time proportions that those colors stay in the scene. The probable background colors are the ones that stay longer and are more static. In this case, the moving foreground objects are often obtained more accurately. This algorithmic idea provides many inspirations for identifying foreground objects in blurred input images.

After that, the algorithm was optimized and MOG2 was proposed [Zivkovic and Van Der Heijden, 2006].


# import the OpenCV library
import cv2

# To create a MOG background subtractor (provided by the opencv-contrib bgsegm module)
mog_method = cv2.bgsegm.createBackgroundSubtractorMOG()

# To create a MOG2 background subtractor
mog2_method = cv2.createBackgroundSubtractorMOG2()

Figure 3.3: To create MOG or MOG2 background subtractor using OpenCV

One of the important optimizations is that it selects an appropriate number of Gaussian distributions for each pixel, so in some simple scenarios its modeling speed is faster than MOG. Theoretically, MOG2 also handles illumination differences better than MOG.

OpenCV provides several useful functions to implement MOG and MOG2. We can create a background subtractor using the code in Figure 3.3, and then apply the background subtractor to each frame to get the foreground of the current frame.

After obtaining the foreground of the current frame, we need to determine which foreground objects are cars and which objects are irrelevant to the system. At present, we don't have a perfect way to identify cars in the foreground by their shape. In the experiment, the camera is usually placed at a high position near the entrance, so among the foreground objects, the larger ones are almost always cars. Therefore, we mainly judge whether an object is a car by the size of the foreground object. We set a suitable threshold value: if the size of the foreground object is larger than this threshold, we regard it as a car; otherwise, we ignore it. Although this method sounds less rigorous, few other objects exceeded this threshold in our experiments.
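A minimal sketch of this size-based filtering is shown below. It assumes OpenCV 3.x (as used in this report, where cv2.findContours returns three values) and an illustrative area threshold; the actual threshold in the system is tuned to the camera placement.

import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()
MIN_CAR_AREA = 5000  # illustrative threshold in pixels, tuned per camera placement

def detect_cars(frame):
    """Return bounding boxes of foreground blobs large enough to be treated as cars."""
    fg_mask = subtractor.apply(frame)      # foreground mask of the current frame
    fg_mask = cv2.medianBlur(fg_mask, 5)   # suppress small noise speckles
    _, contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 3.x signature
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > MIN_CAR_AREA]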

3.1.3 Determination of vehicle movement status

So far, we have obtained the cars in the current frame. To calculate the flow of vehicles in and out of the parking lot, we need to analyze the moving state of the objects marked as vehicles in the foreground. Since the camera is placed horizontally, vehicles entering or leaving the parking lot cross the image horizontally, so we set a dividing line in the image as in Figure 3.4.


Figure 3.4: Set a dividing line in the picture

Therefore, we can simply judge the movement status of a car by determining whether the vehicle passes the line from left to right or from right to left. If it moves from left to right, it is marked as leaving the parking lot, and if it moves from right to left, we regard it as entering the parking lot.

Actually, the frames we obtain are discrete, so we may not be able to capture exactly the frame where the boundary of the vehicle just touches the line. So we set buffer zones on both sides of this line, as in Figure 3.5, to widen the area of judgment. This helps the system avoid missing some cars.

Figure 3.5: Set buffer areas on both sides


In this way, we can analyze the order in which a vehicle passes through the two buffer zones to determine whether it enters or leaves the parking lot. If the car enters the right buffer zone and then the left buffer zone, I mark it as entering the parking lot; otherwise, I mark it as leaving the parking lot.
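The following sketch shows one way this buffer-zone logic can be expressed in code. The line position, buffer width, and the assumption that each detected car carries a track id and a centroid x coordinate are illustrative; the report does not show its exact implementation.

LINE_X = 400   # x position of the dividing line in pixels (assumed)
BUFFER = 40    # half-width of the buffer band in pixels (assumed)

entered = 0
exited = 0
last_zone = {}  # maps a car's track id to the last buffer zone it occupied

def update_counts(track_id, cx):
    """Update the in/out counters from the x coordinate of a tracked car's centroid."""
    global entered, exited
    if cx < LINE_X - BUFFER:
        zone = "left"
    elif cx > LINE_X + BUFFER:
        zone = "right"
    else:
        return  # still inside the buffer band, wait for the next frame
    previous = last_zone.get(track_id)
    if previous == "right" and zone == "left":
        entered += 1   # right -> left: car entering the parking lot
    elif previous == "left" and zone == "right":
        exited += 1    # left -> right: car leaving the parking lot
    last_zone[track_id] = zone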

3.2 Optimization Method

3.2.1 Reduce the Actual Detection Area

This optimization method was proposed by me, and my partner Haohan worked with me to refine the idea step by step.

Since the camera covers a wide area, a large part of the input image does not contain target cars, and there is no point in modeling the background objects in those areas. Based on this idea, we crop the input image to reduce the number of pixels that need to be modeled and thereby improve the efficiency of MOG.

Figure 3.6: Reduce the detection area

The area surrounded by the green line is the critical area; cars must pass through it to enter the parking lot. According to the experimental statistics, this optimization saves about 20 percent of the CPU consumption.
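A minimal sketch of this cropping step is shown below; the region coordinates are illustrative assumptions and are chosen per site in practice.

# Region of interest covering the strip that cars must cross (values are assumed).
ROI_X, ROI_Y, ROI_W, ROI_H = 100, 200, 600, 200

def crop_detection_area(frame):
    """Return only the part of the frame that needs background modeling."""
    return frame[ROI_Y:ROI_Y + ROI_H, ROI_X:ROI_X + ROI_W]

# The background subtractor is then applied to the cropped frame only:
# fg_mask = subtractor.apply(crop_detection_area(frame))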

3.2.2 Luminance Proportion

Due to different lighting conditions, the surface of an object often has an uneven brightness distribution. Some objects are too reflective, or there is a light source in the picture itself, so some objects in the picture are washed out by strong light and cannot be seen clearly, as in Figure 3.7.


Figure 3.7: Images with uneven light intensity distribution

Furthermore, if we put the filter in front of the camera, the light spots of the car's headlights and taillights will appear larger in the image, thus destroying the background model we have built. Figure 3.8 shows the situation: the light source makes the taillight of the car look huge. Through the obtained foreground image, we found that the size of the taillight in the foreground is much larger than its actual size.

Figure 3.8: Light destroys the established model

The main reason the system misjudges the size of the foreground object in this situation lies in the principle of the MOG algorithm mentioned above. The MOG algorithm mainly judges whether an object is moving by sensing color changes in the input frames. A strong light source in the picture changes the color of a large area to a color close to its own, and the blurry filter further magnifies this color block. All these factors lead to the system's poor handling of strong light sources in the picture.

To solve this problem, we need to balance the light intensity in the picture. Peng and Jiang [2006] proposed a method that divides the image into blocks according to the luminance proportion and dynamically adjusts them so that the adjusted image background brightness is approximately at the same level.

The specific algorithm process is displayed below; a code sketch of the procedure follows the list.

1. Calculate the global average light intensity of the image. (We can achieve this step by converting the image into a gray-scale image and then calculating the average grayscale.)

2. Divide the image into blocks (usually 16×16 or 32×32 blocks; of course, the smaller the block size, the better the effect of the algorithm, but the required computing resources will also be greater).

3. Average the light intensity of each sub-block to obtain a thumbnail of the brightness distribution of the original image.

4. Subtract the global average brightness from each value in the sub-block brightness matrix to obtain the sub-block brightness difference matrix, so that the brightness difference of sub-blocks in high-brightness areas is positive and the brightness difference of low-brightness sub-blocks is negative.

5. Extend the sub-block brightness difference matrix to the same size as the original image by interpolation (the authors show that bicubic interpolation can be used to guarantee data smoothness). After this step, we obtain the full-image brightness difference matrix.

6. The last step is to normalize the data: adjust each pixel according to the weakest and strongest light intensity in the original image, so that the adjusted brightness matches the entire light intensity range.
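The sketch below follows the six steps above using OpenCV and NumPy; the block size, the use of resizing for block averaging, and the final normalization range are illustrative assumptions rather than the exact implementation used in this project.

import cv2
import numpy as np

def balance_luminance(frame_bgr, block=32):
    """Approximate block-based luminance balancing of a frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    global_mean = gray.mean()                                    # step 1
    # steps 2-3: per-block averages form a thumbnail of the brightness distribution
    thumb = cv2.resize(gray, (w // block, h // block), interpolation=cv2.INTER_AREA)
    diff_small = thumb - global_mean                             # step 4
    # step 5: expand the difference matrix back to full size with bicubic interpolation
    diff_full = cv2.resize(diff_small, (w, h), interpolation=cv2.INTER_CUBIC)
    balanced = gray - diff_full                                  # remove local bias
    # step 6: renormalize to the original intensity range
    balanced = cv2.normalize(balanced, None, float(gray.min()), float(gray.max()),
                             cv2.NORM_MINMAX)
    return balanced.astype(np.uint8)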

After applying the above method, the taillight of the car appears in the foreground as shown in Figure 3.9. Now the size of the taillight in the foreground is closer to its actual size than before.


Figure 3.9: After applying brightness balance method

3.3 Prototype Making

After completing the software design, the next step is to assemble the hardware equipment needed for the experiment. Here we need to install a series of components on the Raspberry Pi.


3.3.1 Core Component

Figure 3.10: Raspberry Pi and its extensible interface

Figure 3.10 shows the Raspberry Pi and its extensible interfaces. The model in the figure is the Raspberry Pi 4 Model B. In this project, we also used the Model 3B; because the structure of the two is almost the same, I will not add an extra introduction for the 3B here. The connectors I marked in the picture are the ones mainly used in our project. The power-in connector allows us to connect an external power supply. The USB ports allow us to connect control devices such as a mouse and keyboard. Each of the 40 GPIO header pins can be designated (in software) as an input or output pin and used for a wide range of purposes [Pi, 2019]; in this project, they are used as the power output for the cooling fan and also as a power input when using a battery box for the power supply. The CSI camera connector is used to connect the camera. In this project, we tested two kinds of cameras: a fish-eye camera and a wide-angle camera.

Camera

Figure 3.11 shows the fish-eye camera and the effect of this camera. Figure 3.12 shows the wide-angle camera and its effect.

Figure 3.11: Fisheye camera and its effect

Figure 3.12: Wide-angle camera and its effect

From the pictures, we can see that the picture taken by the fish-eye camera is round, and the image on the edge of the picture is slightly distorted. The picture taken by the wide-angle camera is a rectangle, and the shapes of the objects inside are similar to what you usually see, without the curvature seen in the picture taken by the fish-eye camera.

The two cameras were placed in the same place, and from the pictures taken by them, we can see that the field of view of the fish-eye camera is much larger than that of the wide-angle camera and can cover the entire parking lot entrance. So in the actual experiments, we chose the fish-eye camera to monitor the parking lot's entrance.

Power Supply

In the experiment, I tested two power supplies. Figure 3.13 shows the connection method when using the BattBorg as a power supply. Figure 3.14 shows the connection when using a power bank as a power supply.

Figure 3.13: Battborg - Pi Battery Power Board

The black box at the top of the picture is the battery box, and the green object on the left of the picture is the BattBorg. It is a plug-and-play power converter for the Raspberry Pi which allows people to power the Raspberry Pi with batteries. It is connected to the Raspberry Pi via the GPIO pin header to supply power. It has a maximum output of 1.5 A and works with most batteries/battery packs that are between 7-36 V [PiBorg, 2017].


Figure 3.14: Use powerbank to supply power

The power bank connects to the power-in connector on the Raspberry Pi to supply power, and it can adapt to various voltages.

The Raspberry Pi 3 Model B recommends a 2.5 A PSU current capacity, and its maximum total USB peripheral current draw is 1.2 A. The Raspberry Pi 4 Model B requires a 3 A PSU current capacity, and its maximum total USB peripheral current draw is the same as the Model 3B. During the actual experiment, I found that the BattBorg can only keep the Raspberry Pi running when the monitoring software is not working; when the monitoring software is turned on, the Raspberry Pi automatically restarts due to insufficient current input. Therefore, the power bank was mainly used as the power supply when doing experiments.

Blurry Filter

Since there is no blurry filter customized for the Raspberry Pi camera on the market, the blurry filter I used for this project is handmade. The main raw material is a double-sided frosted plastic sheet, shown in Figure 3.15.


Figure 3.15: Double-sided frosted plastic material

This material has very poor light transmittance. We increased the light transmittance by wrapping transparent tape on both sides.

3.3.2 Assemble

After all components are ready, the next step is to assemble them all. Figure 3.16 indicates the specific process of assembling.

Figure 3.16: Assembly process

The first step is to connect the camera to the CSI camera connector and connect the power bank. The second step is to add a cooling fan through the 40-pin GPIO connector. The temperature of the Raspberry Pi during the experiment can become very high because the experiment is carried out outdoors, with continuous sunlight and a long-running program. If it is kept at a high temperature for a long time, not only will the working frequency of the CPU be reduced, but other hardware may also be damaged. So we added a fan over the CPU.

The last step in the figure shows what the whole system looked like in our final experiment. We put the Raspberry Pi and other parts in the black box at the bottom, mounted the camera on top, and made a protective shell for it. We also installed an opaque tube on the camera to reduce the sunlight reflected by the blurry filter.

3.4 Summary

In this chapter, I mainly discussed the prototype design, including the software and hardware design. In Section 3.1, I introduced the software design, including the background subtraction algorithm and the determination of car moving status. In Section 3.2, I introduced some optimizations for the Raspberry Pi and the outdoor experiment. In Section 3.3, I showed the core components of the hardware prototype and the assembly process. In the next chapter, the experimental methodology will be introduced.


Chapter 4

Experimental Methodology

In this chapter, I will introduce the methodology of the experiment. In the experiment, I focus on testing the feasibility and correctness of the software algorithm, the impact of the blurry filter, and the performance comparison between different Raspberry Pi models. In Section 4.1, I will introduce the software used in the experiment on the Raspberry Pi. In Section 4.2, I will provide a detailed introduction to the hardware platform and compare the differences between the Raspberry Pi models. In Section 4.3, the details of the experiment during testing will be introduced.

4.1 Software platform

For the Raspberry Pi 3B and Raspberry Pi 4B, I use the same version of the Raspbian operating system. To have a graphical interface to monitor the images captured by the camera, I chose "Raspberry Pi OS (32-bit) with desktop". For subsequent performance improvements, it can be changed to the Lite version, which excludes the user interface and has a smaller overall footprint.

The entire software system is developed with Python 3.7 and OpenCV 3.3. It is worth mentioning that in the experiment, I found that if I use OpenCV 4 or above, I may encounter some problems with the number of function return values. Compared with OpenCV 3, OpenCV 4 seems to have changed some functions. At present, the hardware optimization of OpenCV 4 for the Raspberry Pi is not very good and the installation is also cumbersome, so I still use OpenCV 3 here.
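As a small illustration of the return-value difference mentioned above: cv2.findContours returns three values in OpenCV 3.x but only two in OpenCV 4.x, so code written for one version breaks on the other. Taking the last two returned elements is a common way to keep code working on both versions; the sketch below is illustrative only.

import cv2

def find_contours_compat(mask):
    """Find external contours in a binary mask on both OpenCV 3.x and 4.x."""
    results = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours, hierarchy = results[-2:]   # last two items are the same in both versions
    return contours, hierarchy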

4.2 Hardware platform

In the experiment, I use the Raspberry Pi 3 Model B and Raspberry Pi 4 Model B to conduct a comparative test; the specific hardware configurations of the two are listed in Table 4.1.

I also need to set several configurations on the Raspberry Pi. In this experiment, I set the camera input video size to 800 × 480 and the video frame rate to 24 frames per second. Besides, I allocate 128 MB of memory to the GPU in the configuration interface of the Raspberry Pi.
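A hedged sketch of these capture settings is shown below, assuming the picamera library is used to read frames from the official camera module (the report does not name the capture API). The GPU memory split itself is set through raspi-config or /boot/config.txt rather than in Python.

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (800, 480)   # video size used in the experiment
    camera.framerate = 24            # frame rate used in the experiment
    with picamera.array.PiRGBArray(camera, size=(800, 480)) as stream:
        for _ in camera.capture_continuous(stream, format="bgr", use_video_port=True):
            frame = stream.array     # BGR frame, ready for OpenCV processing
            stream.truncate(0)       # reset the stream buffer for the next frame
            break                    # this sketch grabs a single frame only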


Model Type             RPi 3 Model B                       RPi 4 Model B
Instruction Set        ARMv8-A (64/32-bit)                 ARMv8-A (64/32-bit)
SoC                    Broadcom BCM2837                    Broadcom BCM2711
FPU                    VFPv4 + NEON                        VFPv4 + NEON
CPU                    4 × Cortex-A53 @ 1.2 GHz            4 × Cortex-A72 @ 1.5 GHz
GPU                    Broadcom VideoCore IV @ 250 MHz     Broadcom VideoCore VI @ 500 MHz
Memory (SDRAM)         1 GiB                               4 GiB
USB 2.0 ports          4                                   2
USB 3.0 ports          0                                   2
Bluetooth              4.1 BLE                             5.0
WiFi (IEEE 802.11)     b/g/n single band 2.4 GHz           b/g/n/ac dual band 2.4/5 GHz

Table 4.1: Specification Hardware Comparison of Model 3B and Model 4B. Data mainly from: https://www.raspberrypi.org/documentation/hardware/raspberrypi/README.md

4.3 System Testing Process

I divide the experiment into two parts. The first part is the evaluation of the validity and accuracy of the algorithm; this part is judged by whether the Raspberry Pi can accurately track the vehicles entering or leaving the parking lot. The second part is the evaluation of the overall system performance; in this part, I monitor the CPU and GPU temperatures, CPU usage, and memory usage of the Raspberry Pi while the parking lot monitoring system is working. Figure 4.1 shows the actual experiment site. During the experiment, I used a remote desktop connection to the Raspberry Pi to observe the experiment.

Figure 4.1: Actual experiment site


4.3.1 Validity and accuracy verification

As mentioned above, I made some blurry filters using double-sided frosted plastic sheets. Here I place these filters in front of the camera and superimpose different numbers of filters to evaluate whether the system can accurately detect vehicles in the blurry input picture.

Figure 4.2: Placed a layer of the blurry filter

Figure 4.2 shows the effect when one layer of the blurry filter is placed in front of the camera. The black window below shows the foreground of the current frame, which is produced by the parking monitoring system. It can be seen that the current vehicle can still be tracked by the system, and the vehicle outline is identifiable.


Figure 4.3: Placed two layers of the blurry filter

Figure 4.3 shows the effect when two layers of the blurry filter are applied. At this time, the overall video is more blurred than when only one filter is placed, but the car can still be detected by the system and its outline can still be discerned.


Figure 4.4: Placed three layers of the blurry filter

Figure 4.4 illustrates the effect when three layers of the blurry filter are applied. At this time, the outline of the car is more blurred, but the monitoring system can still correctly determine its trajectory.


Figure 4.5: Placed a special blurry filter

Different from the ordinary blurry filters, I also made a special blurry filter. Ordinary filters have transparent tape affixed to both sides to enhance light transmittance; this special piece has tape on only one side. Figure 4.5 shows the effect of the special blurry filter. Judging from the degree of blur, after placing this filter it is difficult for humans to distinguish objects in the picture. The monitoring system can still determine the trajectory of the vehicle, but the contour of the vehicle can no longer be distinguished. This is the limit of blurriness that this system can withstand.

4.4 Running Performance Comparison

During the experiment, I also compared the running performance of the two Raspberry Pi models to analyze the hardware requirements of the monitoring system. I mainly monitored the CPU and GPU temperatures, CPU usage, and memory usage. The Raspberry Pi provides a very convenient interface through the Python "os" package, which I can simply use to get the current status of the Raspberry Pi. In the next chapter, I will analyze the data I collected and make a comparison between the Raspberry Pi 3B and Raspberry Pi 4B.
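A hedged sketch of how such status readings could be collected through the "os" package is shown below, assuming the vcgencmd tool and standard Linux commands available on Raspbian; the report does not show its exact monitoring code.

import os

def read_status():
    """Collect rough temperature, CPU and memory readings on a Raspberry Pi."""
    temp = os.popen("vcgencmd measure_temp").read().strip()       # e.g. temp=57.0'C
    cpu = os.popen("top -bn1 | grep 'Cpu(s)'").read().strip()     # CPU usage snapshot
    mem = os.popen("free -m | grep Mem").read().strip()           # memory usage in MB
    return temp, cpu, mem

print(read_status())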

4.5 Summary

In this chapter, I gave the design of the specific experiments used to evaluate the entire system. In the next chapter, I will give the experiments' results and some discussion of the results.


Chapter 5

Results and Discussion

In this chapter, I will give the results of the experiments described in Chapter 4. In Section 5.1, I will talk about the validity and accuracy verification of the system. In Section 5.2, I will give the resulting data of the performance comparison between the Raspberry Pi 3B and Raspberry Pi 4B. Furthermore, I will discuss some experimental findings and my understanding of them in Section 5.3.

5.1 Result of Validity and accuracy verification

Due to the impact of Covid-19, the campus was closed, so the traffic in the parking lot used for the test was not large. For each situation (one-layer filter, two-layer filter, etc.), the system was tested for more than 20 minutes, with an average of 4 cars coming in and out. I recorded the whole process of the experiment on video and then analyzed the results of the experiment from the videos. The data recorded in the experiment are given in the following table.

Case                        1-layer filter   2-layer filters   3-layer filters   1 special layer
Test duration (min:sec)     20:43            22:30             20:56             25:51
In and out actions          9                8                 8                 9
Correctly identified        9                8                 8                 8
Comment                     None             None              None              A car coming out of the parking lot was incorrectly counted twice

Table 5.1: Record of Validity and Accuracy

In the experiment, the system correctly identified and recorded the entry and exit of cars when using 1, 2, or 3 layers of ordinary filters. But when applying the special filter, there was a situation where two cars queued out of the parking lot at the same time and the first car was incorrectly counted twice. I think the main reason is that the picture at that moment was too blurry, resulting in an unstable vehicle shape in the foreground picture, which erroneously triggered multiple judgment conditions.

5.2 Performance comparison

Here I mainly compare the CPU and GPU temperatures, CPU usage, and memory usage of the two Raspberry Pi models when running this system. The following table shows the result. Since the use of different filters does not affect the processing speed and resource usage of the program, I used a set of averaged, representative data here.

Model Type                       RPi 3 Model B   RPi 4 Model B
CPU usage (normal)               1.4%            1.3%
CPU temperature (normal)         57 °C           59 °C
GPU temperature (normal)         57 °C           59 °C
Memory usage (normal)            20%             11.9%
CPU usage (working)              45.8%           24.1%
CPU temperature (working)        69 °C           63 °C
GPU temperature (working)        70 °C           63 °C
Memory usage (working)           35%             13.4%
Time to process one frame        90 ms           70 ms

Table 5.2: Record Performance Comparison

Figure 5.1: Result Comparison Between RPI 3B and 4B

From the data in the table above, we can see that the computing resources required to run the system are not high for either the Model 3B or the Model 4B. From the data I recorded, the running system needs about 300-400 MB of memory. Also, the times to process each frame are relatively similar: the Model 3B takes an average of 90 ms to process a frame, while the Model 4B takes about 70 ms.

5.3 Discussion

The experimental data show that the system can run smoothly even on the Model 3B, which has weaker hardware. But in the experiment, several factors greatly affect performance. The first is the optimization method I proposed in Section 3.2.2, which has a parameter to adjust the accuracy of the image optimization: the smaller the value, the better the effect of compensating for the light source, but the more computing resources it requires. Since the influence of the light source was not large during the experiment, I set this value relatively large, so the demand for computation resources was small. Another point is that in future applications, if the graphical display of the current processing is not needed, the graphical interface can be closed, which saves a lot of computing resources.

In addition, I did some experiments at night with the camera's night vision mode turned on, but the actual results were very unstable. Because MOG mainly determines the foreground and background by how long a color stays in the input image, and the color of a vehicle in night vision mode is not the same as in daytime, the detection accuracy is poor for vehicles of certain colors (such as black vehicles).

5.4 Summary

In this chapter, I gave the experiment results and discussed the findings from the experiment. In the next chapter, I will give the conclusion of the report.


Chapter 6

Conclusion

I introduced in detail the software and hardware design, prototype assembly, and performance evaluation process of the parking lot monitoring system. In terms of software design, this system mainly uses a background subtraction algorithm based on the Mixture of Gaussians and is optimized in terms of the processed image size, lighting balance, etc. In terms of hardware, I chose a fisheye camera for capturing images, used a power bank to supply power, and optimized some details for the outdoor experiment. In the case of highly blurred input images, the system can still accurately track the cars' movement. Also, the system runs smoothly on the Raspberry Pi Model 3B and Model 4B, and the CPU and memory usage is not high.

6.1 Limitation and Future Work

Although the current design can meet most of our needs, there are still some points that can be improved or changed.

1. Firstly, the way we determine whether an object in the foreground is a car is a little less rigorous. At present, we don't have a perfect way to identify cars in the foreground by their shape. In the experiment, the camera is usually placed at a high position near the entrance, so among the foreground objects, the larger ones are almost always cars. Here, we mainly judge whether an object is a car by the size of the foreground object. In the experiment, I did not encounter a non-car object larger than a car in the foreground, but in theory one could still appear. However, I think that unless the overall background subtraction algorithm is changed, it is difficult to judge whether an object is a car by its shape. On this issue, better methods can be developed in the future.

2. Secondly, the current system does not perform as well at night as during the day. At night, my balanced-lighting optimization is very effective, but I need to choose between performance and accuracy. In addition, there are more interference factors at night than during the day. For example, when the camera turns on night vision mode, the lights on both sides turn on, which attracts many flying insects at night. These flying insects cause a lot of interference in the judgment of foreground objects. So the optimization of the night mode is also a point that can be considered in the future.

3. At last, long-term stress testing is required. During my experiments, I did not record the energy consumption when the system was deployed. If the Raspberry Pi needs to be placed outdoors for a long time in the future, experiments are needed to test the problems caused by the system working for a long time. Besides, some optimizations can be made for long hours of work; for example, the camera can be turned off to reduce energy consumption when there is no traffic.


Bibliography

Bradski, G. and Kaehler, A., 2008. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc.

Cocorullo, G.; Corsonello, P.; Frustaci, F.; Guachi, L.; and Perri, S., 2015. Embedded surveillance system using background subtraction and Raspberry Pi. In 2015 AEIT International Annual Conference (AEIT), 1-5. IEEE.

Harrington, W., 2015. Learning Raspbian. Packt Publishing Ltd.

Hudson, T. G.; Caldwell, J. M.; Gage, R. C.; and Danko, N., 2018. Parking lot monitoring system. US Patent 10,121,172.

KaewTraKulPong, P. and Bowden, R., 2002. An improved adaptive background mixture model for real-time tracking with shadow detection. In Video-Based Surveillance Systems, 135-144. Springer.

Nataliana, D.; Syamsu, I.; and Giantara, G., 2014. Sistem monitoring parkir mobil menggunakan sensor infrared berbasis Raspberry Pi. ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika, 2, 1 (2014), 68.

Peng, X. B. and Jiang, J. G., 2006. An image segmentation thresholding method based on luminance proportion. Computer Technology and Development, 11 (2006).

Pi, R., 2019. Documentation > usage > gpio.

PiBorg, 2017. BattBorg - overview and getting started.

Piccardi, M., 2004. Background subtraction techniques: a review. In 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), vol. 4, 3099-3104. IEEE.

Richardson, M. and Wallace, S., 2012. Getting Started with Raspberry Pi. O'Reilly Media, Inc.

Shih, S.-E. and Tsai, W.-H., 2014. A convenient vision-based system for automatic detection of parking spaces in indoor parking lots using wide-angle cameras. IEEE Transactions on Vehicular Technology, 63, 6 (2014), 2521-2532.

Zivkovic, Z. and Van Der Heijden, F., 2006. Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognition Letters, 27, 7 (2006), 773-780.


Appendix 1

.1 Project Description

.1.1 Project Title

Design, Implementation and Evaluation of Raspberry Pi Based Parking Monitoring System

.1.2 Project Description

In modern society, the parking system is very important for the intelligent management of a parking lot. Here we want to develop a parking system that can run efficiently on a Raspberry Pi. The system can track and record car movements and update the available parking spaces using the information it receives.

The basic algorithm is based on OpenCV, which is a library of programming functions mainly aimed at real-time computer vision. Because of its excellent cross-platform capability and algorithmic efficiency, we can analyze the images captured on the Raspberry Pi in real time.

The whole project can be summarized in the following parts:

1. Record video in an actual parking lot and run some experiments with the existing algorithm.

2. Use the OpenCV library and other advanced methods to optimize the algorithm so that it can run efficiently on a Raspberry Pi.

3. Install a camera on the Raspberry Pi to test whether the algorithm can recognize vehicles in real time.

4. Use the assembled camera in the parking lot for actual testing.

5. Evaluate the performance of the algorithm and the whole system.

.1.3 Learning Objective

1. Investigate methods for optimizing the recognition algorithm.


2. Analyze methods for running the software on the Raspberry Pi.

.1.4 Expected Outcome

Optimized parking monitoring system and the assembled Raspberry Pi prototype.


Appendix 2

.2 Study Contract

The study contract is attached as follows.


Appendix 3

.3 Description of Artefacts

Figure 1: Structure of Artefact

Figure 1 shows the structure of the artefact. The artefact contains two Python files, one README file, one shell script, one directory, and one test video.


1. Python file "backsub_args.py" is the python file to run the parking monitoringsystem.

2. Python file "system_evaluation.py" is the python script to get Raspberry Pi’scurrent system status, including CPU usage, CPU temperature, memory usage.

3. Shell script "requirements.sh" is the shell script to quickly build the project.

4. Markdown file "README.md" is the README file for the artifacts.

5. Directory "demo_video" includes two demo videos that are captured duringour field test.

6. Video "stableVideo_trim.mp4" is used to test whether the system can work.

Within the "backsub_args.py" file, I added the optimization functions and restructured the code based on Haohan Lian's version; the rest of the artefact is my own work.
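For reference, system metrics such as those reported by system_evaluation.py can be read directly from Raspberry Pi OS. The sketch below is not the actual script, only an illustration of how CPU usage, CPU temperature, and memory usage might be collected using /proc and the vcgencmd firmware tool.

    import subprocess
    import time

    def cpu_temperature():
        """CPU temperature read via the firmware tool shipped with Raspberry Pi OS."""
        out = subprocess.check_output(["vcgencmd", "measure_temp"]).decode()
        return float(out.split("=")[1].split("'")[0])   # output looks like "temp=47.2'C"

    def memory_usage_percent():
        """Memory usage computed from /proc/meminfo (values are in kB)."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.strip().split()[0])
        used = info["MemTotal"] - info["MemAvailable"]
        return 100.0 * used / info["MemTotal"]

    def cpu_usage_percent(interval=1.0):
        """Overall CPU usage over a short interval, computed from /proc/stat."""
        def snapshot():
            with open("/proc/stat") as f:
                fields = [float(x) for x in f.readline().split()[1:]]
            return fields[3] + fields[4], sum(fields)   # (idle + iowait, total)

        idle1, total1 = snapshot()
        time.sleep(interval)
        idle2, total2 = snapshot()
        return 100.0 * (1.0 - (idle2 - idle1) / (total2 - total1))

    if __name__ == "__main__":
        print("CPU usage:    %.1f%%" % cpu_usage_percent())
        print("CPU temp:     %.1f C" % cpu_temperature())
        print("Memory usage: %.1f%%" % memory_usage_percent())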


Appendix 4

.4 README file

The README file is attached as follows.



Design, Implementation and Evaluation of Raspberry Pi based parking monitoring system

Author: Songzeng Fan
Student ID: u6013724
Supervised by: Dr. Sid Chi-Kin Chau
Course code: COMP8755 - Individual Computing Project

.5 Description of Artefacts

The artefact of this project is the design, implementation and evaluation of the Raspberry Pi based parking monitoring system.

The artefact contains two Python files, one README file, one shell script, one directory, and one test video.

• The Python file backsub_args.py runs the parking monitoring system.

• The Python file system_evaluation.py reports the Raspberry Pi's current system status, including CPU usage, CPU temperature, and memory usage.

• The shell script requirements.sh quickly sets up the project environment.

• The Markdown file README.md is this README file.

• The directory demo_video contains two demo videos captured during our field test.

• The video stableVideo_trim.mp4 is used to test whether the system works.


.6 Setup

You can follow the instructions below to set up the environment manually, or use the shell script "requirements.sh" to build the project. The prerequisite for using this script is that you have installed the official Raspberry Pi system "Raspberry Pi OS (32-bit) with desktop", which is available at https://www.raspberrypi.org/downloads/raspberry-pi-os/

1. Open any shell interpreter and run the command bash requirements.sh or ./requirements.sh to set up the environment.

2. If you prefer to build manually, first install Python 3 and git, and then use the command sudo apt install python3-opencv to install OpenCV. Please note that installing OpenCV this way is suitable for the Raspberry Pi.
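As a quick sanity check after either setup path (this check is not part of the artefact), you can confirm that the installed OpenCV bindings are visible to Python 3:

    # Verify that the apt-installed OpenCV bindings can be imported from Python 3.
    import cv2

    print(cv2.__version__)  # prints a version string such as 3.x or 4.x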

.7 Run

There are several parameters you can customize to tune the system's performance.

1. Use the command python backsub_args.py to run the system. The following parameters can be customized (an illustrative argparse declaration of these flags is sketched at the end of this README):

• --mode is a string: type video for video testing or camera for real-time processing.

• --input determines which video file to use when --mode is video.

• --algo is a string: the background subtraction method (KNN or MOG2).

• --light is a bool: whether to apply the balance-light-intensity function (True to enable, False to disable).

• --kernelSize is an int: the kernel size for the balance-light-intensity function.

• --threshold is an int: the custom valid buffer zone distance.

• --vehicle_size is an int: the custom size of vehicles to be detected.

2. Use the command python system_evaluation.py to get the current system status.
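For reference, the flags listed above could be declared with argparse as sketched below. This is an illustration only; the default values and help texts are assumptions rather than the ones used in the actual script.

    import argparse

    def str2bool(value):
        # argparse's type=bool does not parse "False" correctly, so map the strings explicitly
        return str(value).lower() in ("true", "1", "yes")

    parser = argparse.ArgumentParser(description="Raspberry Pi based parking monitoring system")
    parser.add_argument("--mode", type=str, choices=["video", "camera"], default="camera",
                        help="'video' for video-file testing, 'camera' for real-time processing")
    parser.add_argument("--input", type=str, default="stableVideo_trim.mp4",
                        help="video file to process when --mode is 'video'")
    parser.add_argument("--algo", type=str, choices=["KNN", "MOG2"], default="MOG2",
                        help="background subtraction method")
    parser.add_argument("--light", type=str2bool, default=False,
                        help="whether to apply the balance-light-intensity function")
    parser.add_argument("--kernelSize", type=int, default=31,
                        help="kernel size for the balance-light-intensity function")
    parser.add_argument("--threshold", type=int, default=50,
                        help="valid buffer zone distance")
    parser.add_argument("--vehicle_size", type=int, default=5000,
                        help="size of vehicles to be detected")
    args = parser.parse_args()
    print(args)

With such a declaration, a typical invocation would look like python backsub_args.py --mode video --input stableVideo_trim.mp4 --algo MOG2 --light True.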