TABLE OF CONTENTS
TITLE PAGE ................................................................ i
APPROVAL SHEET ............................................................ ii
STATEMENT OF ORIGINALITY & CONSENT FOR FINAL PROJECT PUBLICATION .......... iii
PREFACE / ACKNOWLEDGEMENTS ................................................ vii
ABSTRAK ...................................................................
ABSTRACT ..................................................................
TABLE OF CONTENTS ......................................................... xiii
LIST OF FIGURES ........................................................... xv
LIST OF TABLES ............................................................ xvii
CHAPTER I INTRODUCTION .................................................... 1
1.1 Background ............................................................ 1
1.2 Problem Formulation ................................................... 2
1.3 Research Objectives ................................................... 2
1.4 Research Benefits ..................................................... 2
1.5 Research Scope ........................................................ 2
1.6 Writing Structure ..................................................... 3
CHAPTER II LITERATURE REVIEW .............................................. 5
2.1 Definition of Robots .................................................. 5
2.1.1 Line Follower Robot ................................................. 9
2.1.2 Detection Robot ..................................................... 10
2.2 Definition of Software ................................................ 10
2.2.1 The Geany Text Editor Application ................................... 10
2.2.2 OpenCV .............................................................. 12
2.2.3 The Python Programming Language ..................................... 12
2.3 Definition of Hardware ................................................ 13
2.3.1 Raspberry Pi ........................................................ 13
2.3.2 Pi Camera Module .................................................... 15
2.3.3 DC Motors ........................................................... 15
2.3.4 L298 Motor Driver ................................................... 18
2.3.5 Jumper Cables ....................................................... 18
2.4 Digital Image Processing .............................................. 20
CHAPTER III RESEARCH METHOD ............................................... 25
3.1 Research Method ....................................................... 25
3.2 Research Design ....................................................... 25
3.3 System Design ......................................................... 27
3.3.1 System Block Diagram ................................................ 27
3.4 Software Design ....................................................... 28
3.4.1 Flowchart ........................................................... 28
3.5 Hardware Design ....................................................... 30
3.5.1 Raspberry Pi with Pi Camera Layout .................................. 30
3.5.2 Motor Driver Layout ................................................. 30
3.5.3 Overall Device Layout ............................................... 31
CHAPTER IV RESULTS AND DISCUSSION ......................................... 33
4.1 Process ............................................................... 33
4.1.1 Hardware Implementation ............................................. 33
4.1.2 Software Implementation ............................................. 34
4.2 Software Testing ...................................................... 34
4.2.1 Image Processing Program Test ....................................... 34
4.2.2 Image Distance Test ................................................. 35
4.2.3 Lux Test ............................................................ 36
4.3 Hardware Testing ...................................................... 37
4.3.1 Raspberry Pi 3 Test ................................................. 38
4.3.2 Raspberry Pi Camera Module Test ..................................... 39
4.3.3 L298N Motor Driver Test ............................................. 39
4.4 Overall Testing ....................................................... 40
4.4.1 Arrow Pattern Object Test ........................................... 40
CHAPTER V CONCLUSIONS AND SUGGESTIONS ..................................... 51
5.1 Conclusions ........................................................... 51
5.2 Suggestions ........................................................... 51
REFERENCES ................................................................ 53
APPENDICES ................................................................ 55
LIST OF FIGURES
Figure 2.1  : Example Robot Forms ......................................... 6
Figure 2.2  : Mobile Robot ................................................ 7
Figure 2.3  : Manipulator Robot ........................................... 8
Figure 2.4  : Humanoid Robot .............................................. 10
Figure 2.5  : Legged Robot ................................................ 11
Figure 2.6  : Line Follower Robot ......................................... 13
Figure 2.7  : Detection Robot ............................................. 14
Figure 2.8  : Geany Interface ............................................. 14
Figure 2.9  : The OpenCV Library .......................................... 15
Figure 2.10 : The Python Programming Language ............................. 17
Figure 2.11 : Raspberry Pi Model B+ Architecture .......................... 17
Figure 2.12 : Standard GPIO Map of the Raspberry Pi Model B+ .............. 18
Figure 2.13 : Pi Camera ................................................... 18
Figure 2.14 : DC Motor .................................................... 19
Figure 2.15 : L298 Motor Driver IC ........................................ 20
Figure 2.16 : L298N Motor Driver .......................................... 20
Figure 2.17 : Male-to-Male Jumper Cable ................................... 21
Figure 2.18 : Female-to-Female Jumper Cable ............................... 21
Figure 2.19 : Female-to-Male Jumper Cable ................................. 21
Figure 2.20 : Digital Image Processing .................................... 24
Figure 3.1  : Research Method ............................................. 26
Figure 3.2  : Process Block Diagram ....................................... 27
Figure 3.3  : Flowchart of the Arrow-Pattern-Following Robot .............. 28
Figure 3.4  : Raspberry Pi with Pi Camera Layout .......................... 28
Figure 3.5  : L298N with Raspberry Pi and Battery Layout .................. 29
Figure 3.6  : Overall Device Layout ....................................... 30
Figure 4.1  : Components After Assembly ................................... 32
Figure 4.2  : Raspberry Pi Powered On ..................................... 33
Figure 4.3  : Raspberry Pi Start Screen ................................... 34
Figure 4.4  : Real-Time Stream from the Raspberry Pi Camera Module ........ 38
This page intentionally left blank
LIST OF TABLES
Table 2.1  : Pin Functions of the L298 IC ................................. 9
Table 4.1  : Image Processing Program Test ................................ 10
Table 4.2  : Image Distance Test .......................................... 20
Table 4.3  : Lux Test ..................................................... 29
Table 4.4  : L298N Motor Driver Test ...................................... 30
Table 4.5  : Forward Pattern Object Test Under Dim Light .................. 31
Table 4.6  : Forward Pattern Object Test Under Bright Light ............... 32
Table 4.7  : U-Turn Pattern Object Test Under Dim Light ................... 35
Table 4.8  : U-Turn Pattern Object Test Under Bright Light ................ 45
Table 4.9  : Right-Turn Pattern Object Test Under Dim Light ............... 56
Table 4.10 : Right-Turn Pattern Object Test Under Bright Light ............ 58
Table 4.11 : Left-Turn Pattern Object Test Under Dim Light ................ 60
Table 4.12 : Left-Turn Pattern Object Test Under Bright Light ............. 65
This page intentionally left blank
FINAL PROJECT

MOBILE ROBOT NAVIGATION USING A CAMERA
BASED ON ARROW DIRECTION PATTERNS

Submitted as one of the requirements for the degree of
Sarjana Komputer in the Informatics Study Program

Submitted by:
Moch Subahan
1461505122

DEPARTMENT OF INFORMATICS ENGINEERING
FACULTY OF ENGINEERING
UNIVERSITAS 17 AGUSTUS 1945 SURABAYA
2019
PREFACE / ACKNOWLEDGEMENTS

The author gives praise and thanks to Allah SWT, who continually bestows His blessings, mercy, and grace, making it possible to complete this Final Project on time, entitled:

"Mobile Robot Navigation Using a Camera Based on Arrow Direction Patterns"

This Final Project was prepared to fulfill one of the requirements for completing the S1 program in the Department of Informatics Engineering, Universitas 17 Agustus 1945 Surabaya. On this occasion, the author sincerely thanks everyone who has given help, opportunities, guidance, and direction, whether directly or indirectly, during the completion of this Final Project. In particular, the author expresses deepest gratitude to:

1) Allah SWT, who has given His guidance and grace, and His messenger the Prophet Muhammad SAW.
2) Dr. Mulyanto Nugroho M.M., CMA., CPAI., Rector of Universitas 17 Agustus 1945 Surabaya.
3) Dr. Ir. Sajiyo M.Kes., Dean of the Faculty of Engineering, Universitas 17 Agustus 1945 Surabaya.
4) Geri Kusnanto S.Kom., M.M., Head of the Informatics Engineering Study Program, Universitas 17 Agustus 1945 Surabaya.
5) Nuril Esti Khomariah S.ST., M.T., principal supervisor, who devoted much time, energy, and thought, and who provided direction, advice, and guidance from the initial development of the system through the writing of this thesis.
6) The lecturers of the Department of Informatics Engineering, who educated the author and shared their knowledge throughout the author's studies.
7) The author's parents and beloved family, who always supported, prayed for, and motivated the author, and provided for all the author's needs until this Final Project was completed.
8) Fellow students of the 2015 cohort of the Department of Informatics Engineering, Universitas 17 Agustus 1945 Surabaya, who struggled together and helped one another for roughly three and a half years in pursuit of shared success.

The author is also aware that this Final Project still has many shortcomings and weaknesses, and therefore welcomes constructive criticism and suggestions so that future work may be better. Finally, the author offers sincere apologies, and asks Allah SWT for forgiveness, for any words that may have been inappropriate, whether intended or not, for error belongs to humankind and truth belongs to Allah SWT alone.

In closing, the author hopes that God Almighty will repay the kindness of all who have helped. May this Final Project benefit the development of knowledge and all parties, especially the students of the Department of Informatics Engineering.

Surabaya, 25 July 2019
The Author
APPENDIX

Source Code: Sign Detection with the Camera

from imutils.perspective import four_point_transform
import imutils
import cv2
import numpy as np

camera = cv2.VideoCapture(0)

def findTrafficSign():
    '''
    This function finds blobs of blue color in the image.
    After the blobs are found it detects the largest square blob,
    which must be the sign.
    '''
    # define the HSV range for the blue color of the traffic sign
    lower_blue = np.array([85, 100, 70])
    upper_blue = np.array([115, 255, 255])

    while True:
        # grab the current frame
        (grabbed, frame) = camera.read()
        if not grabbed:
            print("No input image")
            break
        frame = imutils.resize(frame, width=500)
        frameArea = frame.shape[0] * frame.shape[1]

        # convert the color image to the HSV color scheme
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        # define a kernel for smoothing
        kernel = np.ones((3, 3), np.uint8)

        # extract a binary image with the active blue regions
        mask = cv2.inRange(hsv, lower_blue, upper_blue)

        # morphological operations
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        # find contours in the mask
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]

        # define a string variable to hold the detected sign description
        detectedTrafficSign = None

        # define variables to hold values during the loop
        largestArea = 0
        largestRect = None

        # only proceed if at least one contour was found
        if len(cnts) > 0:
            for cnt in cnts:
                # Rotated rectangle: the bounding rectangle is drawn with
                # minimum area, so it considers the rotation as well.
                # cv2.minAreaRect() returns a Box2D structure containing
                # (center (x, y), (width, height), angle of rotation).
                # To draw this rectangle we need its 4 corners,
                # obtained with cv2.boxPoints().
                rect = cv2.minAreaRect(cnt)
                box = cv2.boxPoints(rect)
                box = np.int0(box)

                # compute the Euclidean length of two adjacent sides
                sideOne = np.linalg.norm(box[0] - box[1])
                sideTwo = np.linalg.norm(box[0] - box[3])
                # compute the area of the rectangle
                area = sideOne * sideTwo
                # keep the largest rectangle among all contours
                if area > largestArea:
                    largestArea = area
                    largestRect = box

        # draw the contour of the found rectangle on the original image
        if largestArea > frameArea * 0.02:
            cv2.drawContours(frame, [largestRect], 0, (0, 0, 255), 2)

            # cut out and warp the interesting area
            warped = four_point_transform(mask, [largestRect][0])

            # show the warped image if a rectangle was found
            #cv2.imshow("Warped", cv2.bitwise_not(warped))

            # detect the sign inside the found rectangle
            detectedTrafficSign = identifyTrafficSign(warped)
            #print(detectedTrafficSign)

            # write the description of the sign on the original image
            cv2.putText(frame, detectedTrafficSign, tuple(largestRect[0]),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 255, 0), 2)

        # show the original image
        cv2.imshow("Original", frame)

        # if the `q` key was pressed, break from the loop
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            print("Stop program and close all windows")
            break

def identifyTrafficSign(image):
    '''
    In this function we select ROIs in which we expect the parts of the
    sign. If an ROI has more active pixels than the threshold we mark it
    as 1, else 0. After passing through all four regions, we compare the
    tuple of ones and zeros with the keys in the SIGNS_LOOKUP dictionary.
    '''
    # define the dictionary of sign segments so we can identify
    # each sign in the image
    SIGNS_LOOKUP = {
        (1, 0, 0, 1): 'Turn Right',     # turnRight
        (0, 0, 1, 1): 'Turn Left',      # turnLeft
        (0, 1, 0, 1): 'Move Straight',  # moveStraight
        (1, 0, 1, 1): 'Turn Back',      # turnBack
    }
    THRESHOLD = 150
    image = cv2.bitwise_not(image)
    (subHeight, subWidth) = np.divide(image.shape, 10)
    subHeight = int(subHeight)
    subWidth = int(subWidth)

    # mark the ROI borders on the image
    cv2.rectangle(image, (subWidth, 4*subHeight), (3*subWidth, 9*subHeight), (0, 255, 0), 2)    # left block
    cv2.rectangle(image, (4*subWidth, 4*subHeight), (6*subWidth, 9*subHeight), (0, 255, 0), 2)  # center block
    cv2.rectangle(image, (7*subWidth, 4*subHeight), (9*subWidth, 9*subHeight), (0, 255, 0), 2)  # right block
    cv2.rectangle(image, (3*subWidth, 2*subHeight), (7*subWidth, 4*subHeight), (0, 255, 0), 2)  # top block

    # extract the 4 ROIs from the thresholded sign image
    leftBlock = image[4*subHeight:9*subHeight, subWidth:3*subWidth]
    centerBlock = image[4*subHeight:9*subHeight, 4*subWidth:6*subWidth]
    rightBlock = image[4*subHeight:9*subHeight, 7*subWidth:9*subWidth]
    topBlock = image[2*subHeight:4*subHeight, 3*subWidth:7*subWidth]

    # track the fraction of active pixels in each ROI
    leftFraction = np.sum(leftBlock) / (leftBlock.shape[0] * leftBlock.shape[1])
    centerFraction = np.sum(centerBlock) / (centerBlock.shape[0] * centerBlock.shape[1])
    rightFraction = np.sum(rightBlock) / (rightBlock.shape[0] * rightBlock.shape[1])
    topFraction = np.sum(topBlock) / (topBlock.shape[0] * topBlock.shape[1])

    segments = (leftFraction, centerFraction, rightFraction, topFraction)
    segments = tuple(1 if segment > THRESHOLD else 0 for segment in segments)
    cv2.imshow("Warped", image)

    if segments in SIGNS_LOOKUP:
        return SIGNS_LOOKUP[segments]
    else:
        return None

def main():
    findTrafficSign()

if __name__ == '__main__':
    main()
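The four-region lookup in identifyTrafficSign() can be exercised without a camera or OpenCV. Below is a minimal NumPy-only sketch: the function name classify and the synthetic mask are illustrative, while the ROI bounds, the threshold of 150, and the lookup table follow the listing above (the per-block fraction there, sum divided by pixel count, equals the block mean).

```python
import numpy as np

SIGNS_LOOKUP = {
    (1, 0, 0, 1): 'Turn Right',
    (0, 0, 1, 1): 'Turn Left',
    (0, 1, 0, 1): 'Move Straight',
    (1, 0, 1, 1): 'Turn Back',
}
THRESHOLD = 150

def classify(image):
    # image: 2-D uint8 binary mask (0 or 255), as cv2.inRange would produce
    sub_h, sub_w = image.shape[0] // 10, image.shape[1] // 10
    left   = image[4*sub_h:9*sub_h, 1*sub_w:3*sub_w]
    center = image[4*sub_h:9*sub_h, 4*sub_w:6*sub_w]
    right  = image[4*sub_h:9*sub_h, 7*sub_w:9*sub_w]
    top    = image[2*sub_h:4*sub_h, 3*sub_w:7*sub_w]
    # fraction = mean pixel value per ROI; fully active blocks approach 255
    segments = tuple(1 if roi.mean() > THRESHOLD else 0
                     for roi in (left, center, right, top))
    return SIGNS_LOOKUP.get(segments)

# synthetic mask: activate the left and top blocks -> segments (1, 0, 0, 1)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:90, 10:30] = 255   # fills the left block exactly
mask[20:40, 30:70] = 255   # fills the top block exactly
print(classify(mask))      # -> Turn Right
```

An all-black mask yields a segment tuple of (0, 0, 0, 0), which is not in the table, so classify() returns None, matching the listing's fall-through behavior.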
Source Code: Pattern Detection

from imutils.perspective import four_point_transform
from imutils import contours
import imutils
import cv2
import numpy as np

# define the dictionary of sign segments so we can identify
# each sign in the image
SIGNS_LOOKUP = {
    (1, 0, 0, 1): 'turnRight',      # turnRight
    (0, 0, 1, 1): 'turnLeft',       # turnLeft
    (0, 1, 0, 0): 'moveStraight',   # moveStraight
    (1, 0, 1, 1): 'turnBack',       # turnBack
}

camera = cv2.VideoCapture(0)

def defineTrafficSign(image):
    # pre-process the image by resizing it, converting it to
    # grayscale, blurring it, and computing an edge map
    image = imutils.resize(image, height=500)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blurred, 50, 200, 255)

    # find contours in the edge map, then sort them by their
    # size in descending order
    cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)
    # is_cv2() and is_cv3() are simple functions that determine the
    # OpenCV version of the current environment;
    # cnts[0] or cnts[1] holds the contours
    cnts = cnts[0] if imutils.is_cv2() else cnts[1]
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
    displayCnt = None

    # loop over the contours
    for c in cnts:
        # approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        # if the contour has four vertices, then we have found the sign
        if len(approx) == 4:
            displayCnt = approx
            break

    # extract the sign borders and apply a perspective transform:
    # a common task in computer vision and image processing is to perform
    # a 4-point perspective transform of an ROI in an image to obtain a
    # top-down, "bird's eye view" of the ROI
    warped = four_point_transform(gray, displayCnt.reshape(4, 2))
    output = four_point_transform(image, displayCnt.reshape(4, 2))

    # draw a red square on the image
    cv2.drawContours(image, [displayCnt], -1, (0, 0, 255), 5)

    # threshold the warped image, then apply a series of morphological
    # operations to clean up the thresholded image;
    # cv2.THRESH_OTSU automatically calculates a threshold value
    # from the image histogram for a bimodal image
    thresh = cv2.threshold(warped, 0, 255,
                           cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (1, 5))
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)

    (subHeight, subWidth) = np.divide(thresh.shape, 10)
    subHeight = int(subHeight)
    subWidth = int(subWidth)

    # mark the ROI borders on the output image
    cv2.rectangle(output, (subWidth, 4*subHeight), (3*subWidth, 9*subHeight), (0, 255, 0), 2)    # left block
    cv2.rectangle(output, (4*subWidth, 4*subHeight), (6*subWidth, 9*subHeight), (0, 255, 0), 2)  # center block
    cv2.rectangle(output, (7*subWidth, 4*subHeight), (9*subWidth, 9*subHeight), (0, 255, 0), 2)  # right block
    cv2.rectangle(output, (3*subWidth, 2*subHeight), (7*subWidth, 4*subHeight), (0, 255, 0), 2)  # top block

    # extract the 4 ROIs from the thresholded sign image
    leftBlock = thresh[4*subHeight:9*subHeight, subWidth:3*subWidth]
    centerBlock = thresh[4*subHeight:9*subHeight, 4*subWidth:6*subWidth]
    rightBlock = thresh[4*subHeight:9*subHeight, 7*subWidth:9*subWidth]
    topBlock = thresh[2*subHeight:4*subHeight, 3*subWidth:7*subWidth]

    # track the fraction of each ROI:
    # (sum of active pixels) / (total number of pixels)
    leftFraction = np.sum(leftBlock) / (leftBlock.shape[0] * leftBlock.shape[1])
    centerFraction = np.sum(centerBlock) / (centerBlock.shape[0] * centerBlock.shape[1])
    rightFraction = np.sum(rightBlock) / (rightBlock.shape[0] * rightBlock.shape[1])
    topFraction = np.sum(topBlock) / (topBlock.shape[0] * topBlock.shape[1])

    segments = (leftFraction, centerFraction, rightFraction, topFraction)
    segments = tuple(1 if segment > 230 else 0 for segment in segments)

    if segments in SIGNS_LOOKUP:
        # show the original image
        cv2.imshow("output", output)
        return SIGNS_LOOKUP[segments]
    else:
        return None

while True:
    (grabbed, frame) = camera.read()
    defineTrafficSign(frame)
    # if the `q` key was pressed, break from the loop
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        print("Stop program and close all windows")
        break
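Both listings hand the detected quadrilateral to imutils' four_point_transform, which begins by ordering the four corners top-left, top-right, bottom-right, bottom-left before computing the warp. The sketch below shows the usual sum/difference ordering trick in plain NumPy; it illustrates the common implementation pattern rather than imutils' exact code, and the corner values are arbitrary test data.

```python
import numpy as np

def order_points(pts):
    # pts: (4, 2) array of corner coordinates in arbitrary order
    s = pts.sum(axis=1)              # x + y: smallest at top-left, largest at bottom-right
    d = np.diff(pts, axis=1)[:, 0]   # y - x: smallest at top-right, largest at bottom-left
    return np.array([pts[np.argmin(s)],   # top-left
                     pts[np.argmin(d)],   # top-right
                     pts[np.argmax(s)],   # bottom-right
                     pts[np.argmax(d)]])  # bottom-left

# corners of an axis-aligned 40x30 rectangle, deliberately shuffled
corners = np.array([[50, 40], [10, 10], [50, 10], [10, 40]], dtype=float)
print(order_points(corners))
# ordered: (10,10) top-left, (50,10) top-right, (50,40) bottom-right, (10,40) bottom-left
```

With a consistent corner order, the destination rectangle for cv2.getPerspectiveTransform can be built deterministically, which is why both detection scripts can rely on the warped image's orientation.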
Sourcecode Keseluruhan
65
Universitas 17 Agustus 1945 Surabaya
from __future__ import division
import cv2
import numpy as np
import argparse
from operator import xor
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
from imutils.perspective import four_point_transform
# libraries to send data to Serial port
import serial
import struct
defaultSpeed = 50
windowCenter = 320
centerBuffer = 10
pwmBound = float(50)
cameraBound = float(320)
kp = pwmBound / cameraBound
leftBound = int(windowCenter - centerBuffer)
rightBound = int(windowCenter + centerBuffer)
error = 0
ballPixel = 0
#GPIO
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)
#Pin definitions
rightFwd = 7
rightRev = 11
leftFwd = 13
leftRev = 15
#GPIO initialization
GPIO.setup(leftFwd, GPIO.OUT)
GPIO.setup(leftRev, GPIO.OUT)
GPIO.setup(rightFwd, GPIO.OUT)
GPIO.setup(rightRev, GPIO.OUT)
66
Universitas 17 Agustus 1945 Surabaya
#Disable movement at startup
GPIO.output(leftFwd, False)
GPIO.output(leftRev, False)
GPIO.output(rightFwd, False)
GPIO.output(rightRev, False)
#PWM Initialization
rightMotorFwd = GPIO.PWM(rightFwd, 50)
leftMotorFwd = GPIO.PWM(leftFwd, 50)
rightMotorRev = GPIO.PWM(rightRev, 50)
leftMotorRev = GPIO.PWM(leftRev, 50)
rightMotorFwd.start(defaultSpeed)
leftMotorFwd.start(defaultSpeed)
leftMotorRev.start(defaultSpeed)
rightMotorRev.start(defaultSpeed)
def updatePwm(rightPwm, leftPwm):
rightMotorFwd.ChangeDutyCycle(rightPwm)
leftMotorFwd.ChangeDutyCycle(leftPwm)
def pwmStop():
rightMotorFwd.ChangeDutyCycle(0)
rightMotorRev.ChangeDutyCycle(0)
leftMotorFwd.ChangeDutyCycle(0)
leftMotorRev.ChangeDutyCycle(0)
# default values for servos
currentPan = 95
currentTilt = 45
# default values for PID
MAX_MOTOR_SPEED = 230 # from 127 to 255
e_prev = 0
e_int = 0
# start values for HSV range, that can be choose with findHSVRange() on
startup
v1_min, v2_min, v3_min, v1_max, v2_max, v3_max = (0,0,0,180,255,255)
# initialize the camera and grab a reference to the raw camera capture
67
Universitas 17 Agustus 1945 Surabaya
cameraResolution = (640, 480)
camera = PiCamera()
camera.resolution = cameraResolution
camera.framerate = 90
camera.brightness = 60
camera.rotation = 180
rawCapture = PiRGBArray(camera, size=cameraResolution)
# allow the camera to warmup
time.sleep(2)
# record video from the camera
#fourcc = cv2.VideoWriter_fourcc(*'XVID')
#out = cv2.VideoWriter('output.avi',fourcc, 6.0, (640,480))
# parameters of the center of the frame
halfFrameWidth = cameraResolution[0]/2
halfFrameHeight = cameraResolution[1]/2
#define serial port
#usbport = '/dev/ttyACM0'
#serialArduino = serial.Serial(usbport, 9600, timeout=1)
###########################
# Block of help functions #
###########################
def get_arguments():
'''
Help function to hold script arguments
'''
ap = argparse.ArgumentParser()
ap.add_argument('-p', '--programm', required=True,
help='Specify programm to start: "-p line" - line following, "-p
sign" - move with signs, "-p track" - color object tracking')
ap.add_argument('-c', '--color', required=False,
help='Start HSV trackbar to choose color',
action='store_true')
args = vars(ap.parse_args())
return args
68
Universitas 17 Agustus 1945 Surabaya
def mapValueToRange(value, fromMin, fromMax, toMin, toMax):
'''
Mapping function from one range to another
>>> translate(127, 0, 255, -255, 255) translate from 0 - 255 to -255 - 255
-1
'''
# Figure out how 'wide' each range is
fromSpan = fromMax - fromMin
toSpan = toMax - toMin
# Convert the left range into a 0-1 range (float)
valueScaled = (value - fromMin) / fromSpan
# Convert the 0-1 range into a value in the right range.
return int(toMin + (valueScaled * toSpan))
def findHSVRange():
'''
This is a help function to find HSV color ranges, that will be used in other
functions of our robot
It will cteate trackbars to find optimal range values from the captured images
'''
global v1_min, v2_min, v3_min, v1_max, v2_max, v3_max
# he function namedWindow creates a window that can be used as a
placeholder for images and trackbars.
# Created windows are referred to by their names.
cv2.namedWindow("Trackbars", 0)
for i in ["MIN", "MAX"]:
v = 0 if i == "MIN" else 255
for j in 'HSV':
if j == 'H':
# create trackbar for Hue from 0 tj 180 degrees
# For HSV, Hue range is [0,179], Saturation range is [0,255] and
Value range is [0,255].
cv2.createTrackbar("%s_%s" % (j, i), "Trackbars", v, 179, (lambda x:
None))
else:
cv2.createTrackbar("%s_%s" % (j, i), "Trackbars", v, 255, (lambda x:
None))
69
Universitas 17 Agustus 1945 Surabaya
while True:
camera.capture(rawCapture, use_video_port=True, format='bgr')
frame = rawCapture.array
frame_to_thresh = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
values = []
for i in ["MIN", "MAX"]:
for j in 'HSV':
v = cv2.getTrackbarPos("%s_%s" % (j, i), "Trackbars")
values.append(v)
v1_min, v2_min, v3_min, v1_max, v2_max, v3_max = values
thresh = cv2.inRange(frame_to_thresh, np.array([v1_min, v2_min,
v3_min]), np.array([v1_max, v2_max, v3_max]))
kernel = np.ones((3,3),np.uint8)
mask = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
cv2.imshow("Original", frame)
cv2.imshow("Mask", mask)
rawCapture.truncate(0)
if cv2.waitKey(1) & 0xFF is ord('q'):
cv2.destroyAllWindows()
print("Stop programm and close all windows")
break
def pidController(xCentroidCoordinate, xCenterOfTheImage, Kp, Kd, Ki):
global e_prev
global e_int
error = xCentroidCoordinate - xCenterOfTheImage
e_int = e_int + error
e_diff = error - e_prev
pid = Kp * error + Ki * e_int + Kd * e_diff
#print('pid (%d)= (Kp(%d) * error(%d))%d + (Ki(%d) * e_int(%d))%d +
70
Universitas 17 Agustus 1945 Surabaya
(Kd(%d) * e_diff(%d))%d' % (pid, Kp, error, Kp*error, Ki, e_int, Ki*e_int, Kd,
e_diff, Kd*e_diff ))
e_prev = error
if abs(pid) < MAX_MOTOR_SPEED:
return pid
else:
if pid > MAX_MOTOR_SPEED:
return MAX_MOTOR_SPEED
elif pid < -MAX_MOTOR_SPEED:
return -MAX_MOTOR_SPEED
'''def movePanTilt(servo, angle):
Moves the specified servo to the supplied angle.
Arguments:
servo
the servo number to command, an integer from 1-4
angle
the desired servo angle, an integer from 0 to 180
(e.g.) >>> servo.move(2, 90)
# "move servo #2 to 90 degrees"
if (0 <= angle <= 180):
serialArduino.write(struct.pack('>B', 255)) # code 255 for servo angles
serialArduino.write(struct.pack('>B', servo))
serialArduino.write(struct.pack('>B', angle))
if servo == 1:
print("Pan angle is: ", angle)
else:
print("Tilt angle is: ", angle)
else:
print ("Servo angle must be an integer between 0 and 180.\n")
'''
'''def moveMotors(left, right):
Moves the motors with specific speed.
We can send to serial only values between 0 and 255.
To move motors in different directions we define 0-126 for back motion
and 128-255 for stright motion.
71
Universitas 17 Agustus 1945 Surabaya
On Arduino we will map values like this leftPWM = map(leftPWM, 0, 255,
-255, 255);
127 will be for stop the motors
if (0 <= left <= 255) or (0 <= right <= 255):
serialArduino.write(struct.pack('>B', 254)) # code 254 for move() function
on Arduino
serialArduino.write(struct.pack('>B', left))
serialArduino.write(struct.pack('>B', right))
else:
print ("Speed must be an integer between 0 and 255.\n")
'''
def calculateAnglesToMove(coordinates):
''' The function takes the coordinates of the largest object that was substructed
from the image
and calculates new coordinates of pan/tilt servos to destinct the center of
this object with
the camera
'''
global currentPan
global currentTilt
# calculate difference in pixels between center of the frame and centroid
coordinates
differenceInX = halfFrameWidth - coordinates[0]
differenceInY = halfFrameHeight - coordinates[1]
# calculate angle that must be add/subtract to/from current servos position to
reach
# the center of the frame with centroid. 6 pix is the approximate value for 1
degree servo movement for (320, 240) frame
changePanSeroAngleBy = differenceInX/12
changeTiltSeroAngleBy = differenceInY/12
if changePanSeroAngleBy > 0:
currentPan += abs(changePanSeroAngleBy)
else:
currentPan -= abs(changePanSeroAngleBy)
72
Universitas 17 Agustus 1945 Surabaya
if currentPan > 180:
currentPan = 180
elif currentPan < 0:
currentPan = 0
panAngle = currentPan
#print ("currentPan: %d" % currentPan)
if changeTiltSeroAngleBy > 0:
currentTilt += abs(changeTiltSeroAngleBy)
else:
currentTilt -= abs(changeTiltSeroAngleBy)
if currentTilt > 180:
currentTilt = 180
elif currentTilt < 0:
currentTilt = 0
tiltAngle = currentTilt
#print ("currentTilt: %d" % currentTilt)
return panAngle, tiltAngle
def identifyTrafficSign(image):
    '''
    Select four ROIs in which we expect the parts of the sign to lie.
    If an ROI has more active pixels than THRESHOLD it is marked 1, else 0.
    After passing through all four regions, the tuple of ones and zeros is
    compared with the keys of the SIGNS_LOOKUP dictionary.
    Helper function for findTrafficSign().
    '''
    # dictionary of sign segments so each sign on the image can be identified
    SIGNS_LOOKUP = {
        (1, 0, 0, 1): 'Turn Right',    # turnRight
        (0, 0, 1, 1): 'Turn Left',     # turnLeft
        (0, 1, 0, 1): 'Move Straight', # moveStraight
        (1, 0, 1, 1): 'Turn Back',     # turnBack
    }
    THRESHOLD = 150
    image = cv2.bitwise_not(image)
    # (roiH, roiW) = roi.shape
    #subHeight = thresh.shape[0]/10
    #subWidth = thresh.shape[1]/10
    (subHeight, subWidth) = np.divide(image.shape, 10)
    subHeight = int(subHeight)
    subWidth = int(subWidth)
    # mark the ROI borders on the image
    #cv2.rectangle(image, (subWidth, 4*subHeight), (3*subWidth, 9*subHeight), (0,255,0), 2)   # left block
    #cv2.rectangle(image, (4*subWidth, 4*subHeight), (6*subWidth, 9*subHeight), (0,255,0), 2) # center block
    #cv2.rectangle(image, (7*subWidth, 4*subHeight), (9*subWidth, 9*subHeight), (0,255,0), 2) # right block
    #cv2.rectangle(image, (3*subWidth, 2*subHeight), (7*subWidth, 4*subHeight), (0,255,0), 2) # top block
    # extract the four ROIs from the thresholded sign image
    leftBlock = image[4*subHeight:9*subHeight, subWidth:3*subWidth]
    centerBlock = image[4*subHeight:9*subHeight, 4*subWidth:6*subWidth]
    rightBlock = image[4*subHeight:9*subHeight, 7*subWidth:9*subWidth]
    topBlock = image[2*subHeight:4*subHeight, 3*subWidth:7*subWidth]
    # mean intensity (fraction of active pixels) of each ROI
    leftFraction = np.sum(leftBlock)/(leftBlock.shape[0]*leftBlock.shape[1])
    centerFraction = np.sum(centerBlock)/(centerBlock.shape[0]*centerBlock.shape[1])
    rightFraction = np.sum(rightBlock)/(rightBlock.shape[0]*rightBlock.shape[1])
    topFraction = np.sum(topBlock)/(topBlock.shape[0]*topBlock.shape[1])
    segments = (leftFraction, centerFraction, rightFraction, topFraction)
    segments = tuple(1 if segment > THRESHOLD else 0 for segment in segments)
    cv2.imshow("Warped", image)
    if segments in SIGNS_LOOKUP:
        return SIGNS_LOOKUP[segments]
    else:
        return None
###########################
#  End of help functions  #
###########################
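The four-segment lookup in identifyTrafficSign() can be exercised on a synthetic mask without OpenCV. This sketch uses the ROI mean intensity, which equals the np.sum-divided-by-size computation above (the `classify` helper and the 100x100 test image are introduced here for illustration only):

```python
import numpy as np

SIGNS_LOOKUP = {
    (1, 0, 0, 1): 'Turn Right',
    (0, 0, 1, 1): 'Turn Left',
    (0, 1, 0, 1): 'Move Straight',
    (1, 0, 1, 1): 'Turn Back',
}
THRESHOLD = 150

def classify(image):
    """Mean intensity of four ROIs -> tuple of 1/0 -> sign name or None."""
    sh, sw = image.shape[0] // 10, image.shape[1] // 10
    rois = (
        image[4*sh:9*sh, 1*sw:3*sw],   # left block
        image[4*sh:9*sh, 4*sw:6*sw],   # center block
        image[4*sh:9*sh, 7*sw:9*sw],   # right block
        image[2*sh:4*sh, 3*sw:7*sw],   # top block
    )
    segments = tuple(1 if r.mean() > THRESHOLD else 0 for r in rois)
    return SIGNS_LOOKUP.get(segments)

# synthetic "Turn Left" mask: right and top blocks filled white
img = np.zeros((100, 100), np.uint8)
img[40:90, 70:90] = 255   # right block
img[20:40, 30:70] = 255   # top block
print(classify(img))  # Turn Left
```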
def followTheColoreObject():
    if args['color']:
        lower = np.array([v1_min, v2_min, v3_min])
        upper = np.array([v1_max, v2_max, v3_max])
    else:
        # define the lower and upper boundaries of the tracked
        # ball in the HSV color space
        lower = np.array([55,100,0])
        upper = np.array([75,255,255])
    while True:
        #start = time.time()
        # The use_video_port parameter controls whether the camera's image or video
        # port is used to capture images. It defaults to False, which means the
        # camera's image port is used. That port is slow but produces better quality
        # pictures. If you need rapid capture up to the rate of video frames, set it to True.
        camera.capture(rawCapture, use_video_port=True, format='bgr')
        # at this point the image is available as stream.array
        frame = rawCapture.array
        # draw the center of the image
        cv2.line(frame, (halfFrameWidth - 20, halfFrameHeight), (halfFrameWidth + 20, halfFrameHeight), (0,255,0), 2)
        cv2.line(frame, (halfFrameWidth, halfFrameHeight - 20), (halfFrameWidth, halfFrameHeight + 20), (0,255,0), 2)
        frame_to_thresh = frame.copy()
        hsv = cv2.cvtColor(frame_to_thresh, cv2.COLOR_BGR2HSV)
        kernel = np.ones((5,5), np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # find contours in the mask and initialize the current
        # (x, y) center of the ball
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        center = None
        # only proceed if at least one contour was found
        if len(cnts) > 0:
            # find the largest contour in the mask, then use its
            # image moments to compute the centroid
            c = max(cnts, key=cv2.contourArea)
            M = cv2.moments(c)
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
            # draw the center of the tracked object
            cv2.circle(frame, center, 5, (0, 0, 255), -1)
            # draw a line from the center of the frame to the object's center
            cv2.line(frame, (halfFrameWidth, halfFrameHeight), center, (255,0,0), 2)
            # calculate the new pan/tilt angles
            panAngle, tiltAngle = calculateAnglesToMove(center)
            # move servos: send the command to the Arduino
            movePanTilt(1, panAngle)
            movePanTilt(2, tiltAngle)
            time.sleep(0.2)
        # show images
        cv2.imshow("Original", frame)
        cv2.imshow("Mask", mask)
        out.write(frame)
        # clear the stream in preparation for the next frame
        rawCapture.truncate(0)
        # if the `q` key was pressed, break from the loop
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            out.release()
            moveMotors(127, 127)
            movePanTilt(1, currentPan)
            movePanTilt(2, currentTilt)
            print("Stop program and close all windows")
            break
        #stop = time.time()
        #print(stop-start)
def followTheLine():
    if args['color']:
        lower1 = np.array([v1_min, v2_min, v3_min])
        upper1 = np.array([v1_max, v2_max, v3_max])
        # only one user-supplied range, so make the second mask identical
        lower2 = lower1
        upper2 = upper1
    else:
        # define the lower and upper boundaries of the red line in the HSV
        # color space; red wraps around the hue axis, so two ranges are needed
        lower1 = np.array([0,100,80])
        upper1 = np.array([10,255,255])
        lower2 = np.array([170,100,80])
        upper2 = np.array([180,255,255])
    # move servos: send the command to the Arduino
    movePanTilt(1, currentPan)
    movePanTilt(2, 0)  # 0 is the lowest tilt position
    time.sleep(0.2)
    while True:
        # The use_video_port parameter controls whether the camera's image or video
        # port is used to capture images. It defaults to False, which means the
        # camera's image port is used. That port is slow but produces better quality
        # pictures.
        # If you need rapid capture up to the rate of video frames, set it to True.
        camera.capture(rawCapture, use_video_port=True, format='bgr')
        # at this point the image is available as stream.array
        frame = rawCapture.array
        frame_to_thresh = frame.copy()
        hsv = cv2.cvtColor(frame_to_thresh, cv2.COLOR_BGR2HSV)
        kernel = np.ones((5,5), np.uint8)
        # for red we need two masks
        mask1 = cv2.inRange(hsv, lower1, upper1)
        mask2 = cv2.inRange(hsv, lower2, upper2)
        mask = mask1 + mask2
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # keep only the horizontal band of the frame where the line is expected
        mask[0:90, 0:640] = 0
        mask[320:, 0:640] = 0
        # find contours in the mask and initialize the current
        # (x, y) center of the line
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        center = None
        # only proceed if at least one contour was found
        if len(cnts) > 0:
            # find the largest contour in the mask, then use its
            # image moments to compute the centroid
            c = max(cnts, key=cv2.contourArea)
            M = cv2.moments(c)
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
            # draw the center of the tracked object
            cv2.circle(frame, center, 5, (0, 0, 255), -1)
            pid = pidController(center[0], halfFrameWidth, 0.5, 0.19, 0.04)  # 0.5, 0.192, 0.03
            if pid < 0:
                moveMotors(MAX_MOTOR_SPEED + pid, MAX_MOTOR_SPEED + pid*0.1)
            else:
                moveMotors(MAX_MOTOR_SPEED - pid*0.1, MAX_MOTOR_SPEED - pid)
        else:
            moveMotors(127, 127)
        # show images
        cv2.imshow("Original", frame)
        cv2.imshow("Mask", mask)
        out.write(frame)
        # clear the stream in preparation for the next frame
        rawCapture.truncate(0)
        # if the `q` key was pressed, break from the loop
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            out.release()
            moveMotors(127, 127)
            movePanTilt(1, currentPan)
            movePanTilt(2, currentTilt)
            print("Stop program and close all windows")
            break
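The pidController() function called above is defined elsewhere in the program; as a rough sketch of what such a controller computes, the following minimal positional PID is shown (the `PidController` class, its `step` method, and the use of instance attributes instead of globals are all illustrative assumptions, not the thesis' actual implementation):

```python
class PidController:
    """Minimal positional PID: output = kp*e + ki*sum(e) + kd*(e - e_prev)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, value, target):
        error = target - value
        self.integral += error                 # accumulate for the I term
        derivative = error - self.prev_error   # difference for the D term
        self.prev_error = error
        return self.kp*error + self.ki*self.integral + self.kd*derivative

pid = PidController(0.5, 0.19, 0.04)  # gains used by followTheLine()
# line centroid 20 px right of the frame center -> negative correction,
# so the left motor is slowed down, steering the robot to the right
print(pid.step(340, 320))
```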
def findTrafficSign():
    '''
    Find blue blobs on the image, then detect the largest square blob,
    which should be the traffic sign.
    '''
    # move servos: send the command to the Arduino
    #movePanTilt(1, currentPan)
    #movePanTilt(2, currentTilt)
    #time.sleep(2)
    lastDetectedTrafficSign = None
    if args['color']:
        lower = np.array([v1_min, v2_min, v3_min])
        upper = np.array([v1_max, v2_max, v3_max])
    else:
        # define the HSV range for the blue color of the traffic sign
        lower = np.array([80,0,0])
        upper = np.array([130,255,255])
    while True:
        # The use_video_port parameter controls whether the camera's image or video
        # port is used to capture images. It defaults to False, which means the
        # camera's image port is used. That port is slow but produces better quality
        # pictures. If you need rapid capture up to the rate of video frames, set it to True.
        camera.capture(rawCapture, use_video_port=True, format='bgr')
        # at this point the image is available as stream.array
        frame = rawCapture.array
        frame_to_thresh = frame.copy()
        frameArea = frame.shape[0]*frame.shape[1]
        # convert the color image to the HSV color scheme
        hsv = cv2.cvtColor(frame_to_thresh, cv2.COLOR_BGR2HSV)
        # define kernel for smoothing
        kernel = np.ones((3,3), np.uint8)
        # extract a binary image with the active blue regions
        mask = cv2.inRange(hsv, lower, upper)
        # morphological operations
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # find contours in the mask
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        # define a string variable to hold the detected sign description
        detectedTrafficSign = None
        # define variables to hold values during the loop
        largestArea = 0
        largestRect = None
        largestContour = None
        center = None
        # only proceed if at least one contour was found
        if len(cnts) > 0:
            for cnt in cnts:
                # Rotated rectangle: the bounding rectangle is drawn with minimum
                # area, so it considers the rotation as well. The function used is
                # cv2.minAreaRect(). It returns a Box2D structure which contains
                # ( center (x,y), (width, height), angle of rotation ).
                # To draw this rectangle we need its 4 corners, which are
                # obtained with cv2.boxPoints().
                rect = cv2.minAreaRect(cnt)
                box = cv2.boxPoints(rect)
                box = np.int0(box)
                # compute the Euclidean length of two adjacent sides of the rectangle
                sideOne = np.linalg.norm(box[0]-box[1])
                sideTwo = np.linalg.norm(box[0]-box[3])
                # area of the rectangle
                area = sideOne*sideTwo
                # find the largest rectangle among all contours
                if area > largestArea:
                    largestArea = area
                    largestRect = box
                    largestContour = cnt
            #print("Largest Area: %d, Frame Area: %d" % (largestArea, frameArea))
            if largestArea > frameArea*0.001:
                # compute the moments of the rectangle
                M = cv2.moments(largestContour)
                # find the center of the rectangle
                center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
                if largestArea > frameArea*0.15:
                    #moveMotors(127,127)  # 127 is mapped to 0 on the Arduino
                    pwmStop()
                    print("Big sign too close")
                    time.sleep(0.5)
                    if lastDetectedTrafficSign == 'Turn Right':
                        # right
                        #moveMotors(255,0)
                        updatePwm(100, 0)
                        time.sleep(0.70)
                        #moveMotors(127,127)
                        pwmStop()
                        print("Turned to the right")
                        time.sleep(0.5)
                    elif lastDetectedTrafficSign == 'Turn Left':
                        # left
                        #moveMotors(0,255)
                        updatePwm(0, 50)
                        time.sleep(0.70)
                        #moveMotors(127,127)
                        pwmStop()
                        print("Turned to the left")
                        time.sleep(0.5)
                    elif lastDetectedTrafficSign == 'Turn Back':
                        # reverse by turning right for longer
                        #moveMotors(255,0)
                        updatePwm(100, 0)
                        time.sleep(1)
                        #moveMotors(127,127)
                        pwmStop()
                        print("Turned back")
                        time.sleep(0.5)
                    elif lastDetectedTrafficSign == 'Move Straight':
                        updatePwm(50, 50)
                        time.sleep(1)
                        pwmStop()
                        print("Go, go, go")
                        time.sleep(0.5)
                '''else:
                    # compute the error with PID to move in the sign's direction
                    pid = pidController(center[0], halfFrameWidth, 0.5, 0, 0)
                    # if the error is negative, slow down the left motor, otherwise the right one
                    if pid < 0:
                        updatePwm(defaultSpeed - pid, defaultSpeed - pid*0.1)
                    else:
                        updatePwm(defaultSpeed + pid*0.1, defaultSpeed + pid)
                '''
                # draw the contour of the found rectangle on the original image
                cv2.drawContours(frame, [largestRect], 0, (0,0,255), 2)
                # cut and warp the interesting area
                warped = four_point_transform(mask, largestRect)
                # show the image if a rectangle was found
                cv2.imshow("Warped", cv2.bitwise_not(warped))
                # use the helper function to identify the sign on the found rectangle
                detectedTrafficSign = identifyTrafficSign(warped)
                print('detectedTrafficSign', detectedTrafficSign)
                if detectedTrafficSign is not None:
                    lastDetectedTrafficSign = detectedTrafficSign
                    print('lastDetectedTrafficSign', lastDetectedTrafficSign)
                    # write the description of the sign on the original image
                    cv2.putText(frame, detectedTrafficSign, tuple(largestRect[0]),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 255, 0), 2)
            # if there is no blue rectangle on the frame, stop
            else:
                pwmStop()  # 127 is mapped to 0 on the Arduino
                print("No sign")
        # show the original image
        cv2.imshow("Original", frame)
        cv2.imshow("Mask", mask)
        #out.write(frame)
        # clear the stream in preparation for the next frame
        rawCapture.truncate(0)
        # if the `q` key was pressed, break from the loop
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            out.release()
            moveMotors(127, 127)
            movePanTilt(1, currentPan)
            movePanTilt(2, currentTilt)
            print("Stop program and close all windows")
            break
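The rectangle-area computation inside the contour loop of findTrafficSign() can be checked in isolation with plain NumPy (the `box_area` helper and the sample corner array are introduced here for illustration only; corners are assumed in the order cv2.boxPoints() returns them, so box[1] and box[3] are the neighbors of box[0]):

```python
import numpy as np

def box_area(box):
    """Area of a rotated rectangle from its 4 corner points, as in the
    contour loop above: two adjacent side lengths multiplied together."""
    side_one = np.linalg.norm(box[0] - box[1])
    side_two = np.linalg.norm(box[0] - box[3])
    return side_one * side_two

# axis-aligned 40x30 rectangle as a simple sanity check
box = np.array([[10, 10], [50, 10], [50, 40], [10, 40]], float)
print(box_area(box))  # 1200.0
```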
def main():
    if args['programm'] == 'sign':
        if args['color']:
            findHSVRange()
        findTrafficSign()
    elif args['programm'] == 'track':
        if args['color']:
            findHSVRange()
        followTheColoreObject()
    elif args['programm'] == 'line':
        if args['color']:
            movePanTilt(1, currentPan)
            movePanTilt(2, 0)
            time.sleep(0.2)
            findHSVRange()
        followTheLine()

if __name__ == '__main__':
    # store args as a global variable: if this line were inside main() the
    # args values could not be used in the other functions
    args = get_arguments()
    # start the main function
    main()
PROGRAM STUDI TEKNIK INFORMATIKA
FAKULTAS TEKNIK
UNIVERSITAS 17 AGUSTUS 1945 SURABAYA

FINAL PROJECT APPROVAL SHEET

NAME          : Moch Subahan
NBI           : 1461505122
STUDY PROGRAM : S-1 Informatics
FACULTY       : Engineering
TITLE         : Navigasi Mobile Robot Menggunakan Kamera Berdasarkan Pola Arah Panah
                (Mobile Robot Navigation Using a Camera Based on Arrow Direction Patterns)

Acknowledged / Approved by
Supervisor
(Nuril Esti Khomariah, S.ST., M.T.)
NPP. 20460.16.0725

Dean of the Faculty of Engineering,
Universitas 17 Agustus 1945 Surabaya
(Dr. Ir. Sajiyo, M.Kes.)
NPP. 20410.90.0197

Head of the Informatics Study Program,
Universitas 17 Agustus 1945 Surabaya
(Geri Kusnanto, S.Kom, MM)
NPP. 20460.94.0401

This page is intentionally left blank.
Turnitin Originality Report
NAVIGASI MOBILE ROBOT MENGGUNAKAN KAMERA BERDASARKAN POLA ARAH PANAH
by Moch Subahan 1461505122
FILE: 1461505122-NAVIGASI-MOBILE-ROBOT.PDF (611.81K)
TIME SUBMITTED: 02-AUG-2019 04:25PM (UTC+0700)
SUBMISSION ID: 1157010096
WORD COUNT: 1331
CHARACTER COUNT: 7952
SIMILARITY INDEX: 25% (Internet sources: 24%, Publications: 1%, Student papers: 5%)
EXCLUDE QUOTES: OFF
EXCLUDE BIBLIOGRAPHY: OFF
EXCLUDE MATCHES: OFF
MATCH ALL SOURCES (ONLY SELECTED SOURCE PRINTED): 8% pt.scribd.com (Internet source)