Original Article
A reproducible educational plan to teach mini autonomous race car programming
International Journal of Electrical Engineering & Education
2020, Vol. 57(4) 340–360
© The Author(s) 2020
Article reuse guidelines:
sagepub.com/journals-permissions
DOI: 10.1177/0020720920907879
journals.sagepub.com/home/ije
Süleyman Eken1, Muhammed Şara2, Yusuf Satılmış2, Münir Karslı2, Muhammet Furkan Tufan2, Houssem Menhour2 and Ahmet Sayar2
Abstract
As autonomous cars and their complex features grow in popularity, ensuring that
analyses and capabilities are reproducible and repeatable has taken on importance in
education plans too. This paper describes a reproducible research plan on mini autonomous race car programming. This educational plan is designed and implemented as
part of a summer internship program at Kocaeli University and it consists of theoretical
courses and laboratory assignments. A literate programming approach with the Python
language is used for programming the race car. To assess the educational program’s
impact on the learning process and to evaluate the acceptance and satisfaction level of
students, they answered an electronic questionnaire after finishing the program.
According to students’ feedback, the reproducible educational program is useful for
learning and consolidating new concepts of mini autonomous car programming.
Keywords
Reproducible research in robotics, mini autonomous car, robot operating system, deep
learning, Jupyter Notebook, electronic questionnaire
1Department of Information Systems Engineering, Kocaeli University, İzmit, Kocaeli, Turkey
2Department of Computer Engineering, Kocaeli University, İzmit, Kocaeli, Turkey

Corresponding author:
Süleyman Eken, Department of Information Systems Engineering, Kocaeli University, İzmit 41001, Kocaeli, Turkey.
Email: suleyman.eken@kocaeli.edu.tr
Introduction
Reproducibility is considered an attainable minimum standard for evaluating scientific claims when fully independent replication of a study with different data is not feasible. This standard requires researchers to make both their data and computer code available to their peers, but it still falls short of full replication, since the same data, rather than independently collected data, are analyzed again.
Nevertheless, this standard, while limited, still allows for assessing the quality of scientific claims by verifying the original data and code. It fills the gap between full replication and none at all; after all, the reproducibility of a study can lie on a scale of possibilities rather than just at the two extreme points.
Reproducible research has been a recurring subject appearing in high-impact journals. The state of reproducibility has been studied thoroughly in some domains,
e.g., in computer systems research1 or bioinformatics.2 Some studies have covered
the field of robotics research as well. Compared with other fields, robotics is special
in that it aims to enable an embodied agent to interact with the world physically.
Research on robotics has been concerned with creating efficient models that meet requirements like body dynamics rigidity or multiple-view geometry, determining an action output based on the perceptual input, or predicting the perceptual output based on a certain action. While this works fine in a controlled setup with well-defined variables, it makes it a difficult challenge for robots to move out of these environments into the real world, with its unpredictability and wider range of tasks and situations. Robots are improving significantly and quickly in how they perceive the world around them; they can now sense the world in new, unprecedented ways: see, hear, read, touch, etc.3–5 The ability to deal with these
new inputs and learn from them pushes the capabilities of robots steps ahead. Even
though researchers have studied and applied various machine learning approaches
on this task, this often comes with a caveat represented by some assumptions or
simplifications in the mapping between perception and action and vice versa.6–8
This, however, can be alleviated, thanks to the availability of big data and the
improved machine learning approaches to benefit from it.9
Unlike computational science disciplines, robotics research faces practical challenges when it comes to replicating or reproducing results for objective assessment and comparison. The difficulty of perfectly controlling environmental variables, the poor understanding of comparable evaluation metrics or of goal similarity across different domains, and the uncertainty about what information is needed to allow result replication all contribute to making this challenging. To make matters worse,
we are yet to build a solid theoretical foundation for experimental replicability in
robotics, hindering research progress and technology transfer. Though it is worth
noting that recent years have seen significant movement forward in this regard,
with workshops10,11 and summer schools12 covering the design, planning, experimental methodology, and benchmarking of reproducible research in robotics.
IEEE Robotics and Automation Magazine also introduced a new type of article, the R-article. It includes a description of and access information for data sets and the
source code or its executable files on CodeOcean,13 as well as hardware (HW)
description and identifier,14 with details on the tests that have been run.
Teaching and learning autonomous vehicle programming can be a difficult task,
and adding robotic vehicle design and construction to the mix only makes the
learning curve even steeper. This paper presents a reproducible research plan to
teach autonomy using state-of-the-art sensors and embedded computers for mobile
robotics built upon a 1/10-scale racing car platform. A combination of theoretical
courses and laboratory assignments has been given to undergraduate students
using this methodology as part of a summer internship program at Kocaeli
University. First, we covered the theoretical courses of this program, which include subjects such as the basics of commanding the vehicle actuators (steering and drive motor), robot operating system (ROS) fundamentals, leveraging data from the light detection and ranging (Lidar) sensor, the inertial measurement unit (IMU) and the wheel odometer, basics of control systems, basics of blob detection in images, and fundamentals of robot perception, including visual servoing. Then, students implement what they learned in the laboratory exercises. The 10 laboratory experiments in this research plan are designed to provide the students with hands-on experience with mini autonomous vehicle programming. These experiments offer a fitting laboratory solution to bridge the gap between the theoretical knowledge acquired from the courses and its real-time application. The contributions of this paper can be summarized as follows:
• We have presented modern scientific computational environments for programming mini autonomous vehicles using a literate programming approach with the
Python programming language due to its popularity within the scientific computing community. The environment makes it possible to combine algorithmic
details and explanations with the implementation code in one file.
• We have evaluated the reproducible educational plan based on students’
surveys.
The rest of this paper is organized as follows: section “Related work” presents
related work about courses and robotics competitions with autonomous vehicles
and similar platforms for teaching this subject. Section “Used platform” describes
the open-source HW and software of the platform. Section “Educational
approach” introduces the plan including theoretical lessons and laboratory assignments. Section “Evaluation and Discussion” gives students’ feedback on our
reproducible educational program and a discussion of the results. Finally, the last section presents the conclusions of this work.
Related work
The use of autonomous vehicles is becoming a common choice as a tool for teaching several related subjects, including robotics, electronics,15 programming, and
complex control. Carnegie Mellon’s Robot Autonomy course pioneered this
approach in 2002 and set the framework for using creativity to spark the students' interest in learning.16,17 There is also the LEGO Mindstorms NXT, a low-cost programmable robotics kit that has seen wide adoption because it enables teachers and learners of all ages to produce autonomous vehicles in an easy and timely manner, thanks to its canned libraries and a powerful graphical programming language.18
Competitions and camps have also shown incredible results, establishing them as effective tools for motivating students to extend their knowledge and improve their ability to program a robotic vehicle to run autonomously.19,20 The FIRST
Robotics Challenge, the FIRST Lego League,21 BEST Robotics,22 National
Robotics Challenge (NRC),23 and EARLY Robotics24 are a few examples.
These programs are geared toward precollege students and cover a wide range
of choices of robotics HW for students to build and experiment with. There are
many other examples25,26 of successful approaches to teaching the subject.
During this program, we took advantage of the flexibility and extensibility of the Jupyter notebook web application in our development environment; this approach facilitates both the educational and the reproducible research workflow. In addition to those benefits, the use of the popular Python programming language minimizes the effort required to get up and running with the environment and to get a hold of its syntax and semantics. Moreover, the availability of free and open-source packages for this language is another reason for choosing it; they can be easily integrated into the environment, extending it to suit whatever the developer needs.
The platforms used to teach autonomous control are as varied as the curricula for teaching it; this variation covers a wide range of capabilities and price points.27 They also vary in how robust they are, from children's toys through hobbyist kits to industrial equipment. The most notable examples at the lower price end include Lego kits21,28,29 and kits based on the iRobot Create fitted with either a Gumstix-embedded computer30 or a laptop.31 These platforms serve the important purpose of lowering the entry bar for as many students as possible, especially those with limited budgets. However, the majority of them lack the state-of-the-art sensors and computers that power many contemporary robotic systems. The platform we opted to use, on the other hand, includes the state-of-the-art HW utilized in real-world robotic vehicles today. Detailed information about the platform is given in the following section.
Used platform
In this section, we will describe the HW configuration of the car kit and its features. Racecar, or Rapid Autonomous Complex–Environment Competing
Ackermann-steering Robot, was initially developed for a competition by MIT in
2015, then updated in 2016 to be used for teaching robotics.32 The kit uses a 1/10-scale Traxxas RC Rally Car as its backbone (see Figure 1(a)), accompanied by a VESC,33 an open-source electronic speed controller, and the Nvidia Jetson TX2 developer board, which comes with 256 CUDA cores, as its main processor. Some advantages associated with open-source systems are lower HW costs, no vendor lock-in, simple license management, abundant support, easy scaling and consolidation, and the low cost and large amount of information available on the Internet.

Figure 1. Hardware parts of the used car kit: (a) Racecar mini autonomous developer kit and (b) its hardware components.
As for sensors, the kit includes three types: a stereo camera, a Lidar, and an IMU. The ZED stereo camera34 allows the use of standard stereo matching approaches to acquire depth information from two images taken by two different cameras. The Stereolabs Software Development Kit (SDK) implements a semi-global matching algorithm that works on Graphics Processing Unit (GPU)-based computers, which makes it suitable for our platform with the Jetson TX2. 3D mapping
also can be done using the same algorithm. The camera is capable of working in the video modes tabulated in Table 1.

Table 1. ZED camera video modes.

Video mode   Frames per second   Output resolution
2.2K         15                  4416 × 1242
1080p        30                  3840 × 1080
720p         60                  2560 × 720
WVGA         100                 1344 × 376
The second sensor is the Scanse Sweep Lidar.35 It can measure distances of up to 40 m while collecting 1000 samples per second. The Sweep has a head that rotates clockwise while emitting a laser beam to collect data in a single plane. Finally, we have the Sparkfun 9 degrees of freedom (9-DoF) Razor IMU M036 as the IMU. It has a SAMD21 microprocessor paired with an MPU-9250 9-DoF sensor, making it a small and flexible IMU. This means we can program it for various tasks: monitoring and logging motion, transmitting Euler angles over a serial port, or acting as a step-counting pedometer. The 9-DoF Razor's MPU-9250 combines three 3-axis sensors: an accelerometer to measure linear acceleration, a gyroscope to measure angular rotation, and a magnetometer to measure the magnetic field vectors around it.
The parts can be placed on two laser-cut or 3D-printed plates. One plate is dedicated to carrying the Nvidia Jetson TX2 dev kit, while the rest of the parts can be distributed between the second plate and the car chassis (see Figure 1(b) for the components). The main computer runs ROS on top of the Ubuntu operating system. The ROS environment makes our software stack modular, i.e. separate software modules can be developed for different tasks, such as the computer vision system, the autonomous driving system, or the motor control system.
Educational approach
Within the 2018 summer internship program, a group of 15 undergraduate computer engineering students was selected to be taught complex robot tasks on the
racecar mini autonomous developer kit. The applied reproducible educational
program includes theoretical lectures and laboratory assignments that allow students to practice their theoretical knowledge and get hands-on experience. A literate programming approach (via Jupyter notebooks) with the Python
programming language is used for programming the vehicle.
Originally introduced in 2011 as IPython notebooks, Jupyter notebooks37 are an open-source interactive web-based application that streamlines and simplifies literate programming.38,39 Jupyter notebooks supported only Python at first, as the early name implies, then extended support to a large number of programming languages through the addition of the appropriate Jupyter kernels. The kernels communicate with the system using Jupyter's communication protocol, the so-called Jupyter messaging protocol.

Figure 2. Thresholding example of Jupyter Notebook from assignment labs.

A notebook is a JavaScript Object Notation (JSON) document,
which is parsed and displayed in a human-friendly format (see Figures 2 and 3).
Each laboratory assignment notebook is a document containing the description of
an image-analysis workflow using equations, text, the executable implementation
of the workflow, and figures and tables generated by running the code. The code is
split into logical sections called “code cells” or “code chunks” that can be executed
interactively, providing a natural setting for modifying the code and experimenting
with parameter values and seeing their effect on algorithm performance. Another
benefit of running individual code chunks is that one can choose to run some parts
only once, while others can be repeatedly executed. For example, while a data-loading cell only needs to run once, an algorithmic cell could run several times to
compare different approaches to solving the same problem before settling on the
best one.
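Because a notebook is just a JSON document, its cell structure can be inspected programmatically. The following is a minimal sketch, assuming a hypothetical notebook file named lab0.ipynb produced during the labs:

```python
import json

# Hypothetical notebook file name; any .ipynb produced in the labs would work.
with open("lab0.ipynb") as f:
    nb = json.load(f)

# A notebook is a JSON document: a list of cells plus metadata and a format version.
print("nbformat:", nb["nbformat"], nb["nbformat_minor"])
for i, cell in enumerate(nb["cells"]):
    source = "".join(cell["source"])
    print(f"cell {i} ({cell['cell_type']}): {source[:40]!r}")
```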
Figure 3. Segmentation example of Jupyter Notebook from assignment labs.
Theoretical lectures
This subsection introduces the seven theoretical lectures. Each lecture includes a
brief description of the learning objectives. Table 2 shows the recommended road
map for instructors, with each assignment and its appropriate lecture.
LEC0—ROS fundamentals. This lecture covers the basics of commanding the
vehicle actuators (steering and motor). The students get familiar with how to
obtain data from the Lidar, the IMU, and the wheel odometer (through the
VESC). They also learn how to work with ROS40 and its components such as
nodes, topics, messages, services, etc. ROS is Berkeley Software Distribution (BSD)-licensed software for controlling robotic components from a PC. An ROS system consists of a number of independent nodes that communicate with each other using a publish/subscribe messaging pattern.41 For example, a node might implement a particular sensor's driver and publish its acquired data in a stream of
Figure 4. Examples from the data set. (a) Sample 1, (b) sample 2, (c) sample 3, (d) sample 4,
(e) sample 5, and (f) sample 6.
messages. These messages are then available for consumption by all other nodes
that subscribe to that stream, including filters, loggers, and also higher level systems such as pathfinding, collision detection, etc. Our software modules running
on the Jetson42 are structured as ROS nodes. The nodes for the LIDAR and other
sensors are part of the software package given to students. Students integrate
existing software modules (drivers for reading sensor data, and their custom algorithms) to quickly develop a complete autonomous system.
LEC1—Control systems theory. Many engineered systems have an input affecting the
value of some output. If we are interested in regulating the values of the output, as
we frequently are, we become interested in algorithms to adjust the inputs accordingly. This will be facilitated by making feedback about the current state of the
output available to the control algorithm, though it is also possible to try to
“blindly” adjust the input. Regulation algorithms and their implementations are
called control systems. The difficulty of the control problem depends on certain
factors including the physical complexity of the system, its time dynamics and
those of the desired output trajectory, the resolution and latency of the available
Figure 5. An example from dataset and its XML file. (a) A sample of parking traffic sign (b) and
its annotated XML file.
Table 2. Syllabus for teaching mini autonomous race car programming.

Week    Lecture                                      Laboratory assignment(s)
1       LEC0—ROS fundamentals                        LAB0—ROS Development with Python
1       LEC1—Control Systems Theory                  LAB1—Bang-Bang and Safety Controllers
2,3     LEC2—Introduction to Computer Vision         LAB2—OpenCV Basics; LAB3—Streaming Video with
        and OpenCV                                   OpenCV; LAB4—Semantic Segmentation; LAB5—Blob
                                                     Detection using the ZED Camera
2,3     LEC3—Deep Learning Fundamentals and Tools    LAB6—End-to-End Learning for Steering
                                                     Autonomous Vehicle
2,3     LEC4—Object Detection and Classification     LAB7—Traffic Sign Classification and Localization
4       LEC5—Visual Servoing                         LAB8—Parking in front of a Visual Marker
5       LEC6—Localization                            LAB9—Histogram Filters and 2D Kalman Filter
6,7,8   Testing and deployment of all assignments
        on the mini autonomous vehicle
feedback sensors, etc.43 We study a collection of common and relatively straightforward control techniques such as feed-forward control, Bang–Bang control, Proportional–Integral–Derivative (PID) control, and differential drive velocity control.44
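As an illustration of one of these techniques, the following is a minimal sketch of a discrete PID controller in Python; the gains and time step are illustrative values, not the ones used in the course material.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not course values)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate error
        derivative = (error - self.prev_error) / self.dt  # rate of change of error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate speed toward 1.0 m/s from a (hypothetical) measurement.
controller = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.05)
command = controller.update(setpoint=1.0, measurement=0.6)
print(command)
```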
LEC2—Introduction to computer vision and OpenCV. An important part of the human
driving experience is keeping an eye on the environment, as the safety of the driver
and that of many other people depends on that. We specifically try to spot any
obstacles such as pedestrians and other cars and evade them. Similarly, autonomous vehicles must leverage their sensors and intelligence to perform the same task of detecting obstacles, as this reinforces the car's understanding of its environment.
Other vehicles on the road represent one of the most important obstacles to deal
with, as they are most likely the biggest and the most frequently encountered
objects in our lane or neighboring ones. Within this lecture, students start by learning basic image processing operations with OpenCV45 such as accessing and modifying pixel values, accessing image properties, setting a region of interest (ROI), splitting and merging images, blurring, finding contours, and line and shape detection. Then they go on to learn more advanced concepts applicable to autonomous driving.
LEC3—Deep learning fundamentals and tools. In 2005, Sebastian Thrun and his team won the DARPA Grand Challenge46 using machine learning, which ignited a huge interest in machine learning and deep learning for developing autonomous vehicles. Today, continuous advances in deep learning technology keep pushing the autonomous vehicle industry forward. Within this lecture, students learn how to train a deep neural network to correctly classify newly introduced images, deploy their models into applications, find ways to improve the performance of these applications, and deploy end-to-end deep learning models for autonomous driving. They also get familiar with frameworks like TensorFlow,47 the most widely used library for production deep learning models, and Keras,48 a high-level API built on TensorFlow.
LEC4—Object detection and classification. Perception, which is the task of environment
sensing including object classification, detection, 3D position estimation, and
simultaneous localization and mapping (SLAM),49 is as important to autonomous
vehicles as it is to human drivers. Without it, the vehicle would not be able to
navigate through any kind of obstacles. And that brings us to the subtask of object
detection, for which we need accurate and reliable algorithms to deploy on the
autonomous vehicle. Within this lecture, students learn OverFeat,50 VGG16,51 and
YOLO52 models.
LEC5—Visual servoing. Visual servoing is the use of cameras and a computer vision
system to control the motion of a robot. Students leverage what they learned about
OpenCV and ROS so far combined with image-based visual servoing to program
the racecar to park in front of solid-color objects, like orange cones or specific
traffic signs.
LEC6—Localization. Navigation is a key and challenging task for mobile robots. To
perform navigation successfully, four subtasks are required: perception,
localization, cognition, and motion control. Localization is the robot’s ability to
determine its position relative to the environment, and it is the area that has
received the most research attention in the past decade, allowing for significant
advances to be made. The use of localization techniques enables a robot to find two
basic pieces of information: what are the robot’s current location and orientation
within its environment? This lecture also covers subjects ranging from histogram
filters to N-dimensional Kalman Filters as applied in localization.
Laboratory assignments
This section introduces the 10 laboratory experiments53 accompanying the
lectures.
LAB0—ROS development with Python. Typically, the entire software functionality of a robot is divided using ROS into different modules, each running over one or multiple nodes. This laboratory assignment covers how to write a simple ROS application consisting of a publisher and a subscriber node in Python and how to get familiar with concepts like topics and messages. A node is the ROS term for an executable that is connected to the ROS network. Topics are named buses over which nodes exchange messages. A message is a simple data structure comprising typed fields such as Bool, Int32, String, etc. In this laboratory, students implement a publisher ROS node that generates a random number between 0 and 10 and write a subscriber ROS node that squares the generated number and displays the result.
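A minimal sketch of the two nodes described above, written with rospy; the topic name random_number and the 10 Hz publishing rate are assumptions for illustration, not values fixed by the lab handout.

```python
# --- Publisher node: publishes a random integer between 0 and 10 ---
import random
import rospy
from std_msgs.msg import Int32

def publisher():
    rospy.init_node("random_number_publisher")
    pub = rospy.Publisher("random_number", Int32, queue_size=10)
    rate = rospy.Rate(10)  # assumed 10 Hz publishing rate
    while not rospy.is_shutdown():
        pub.publish(Int32(data=random.randint(0, 10)))
        rate.sleep()

# --- Subscriber node (a separate script): squares the number and logs it ---
def callback(msg):
    rospy.loginfo("received %d, squared %d", msg.data, msg.data ** 2)

def subscriber():
    rospy.init_node("square_subscriber")
    rospy.Subscriber("random_number", Int32, callback)
    rospy.spin()

if __name__ == "__main__":
    publisher()  # run subscriber() instead in the second script
```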
LAB1—Bang-bang and safety controllers. The LIDAR sensor provides the system with a cloud of distance measurements in polar form. Using these data, a system can map the environment around it and navigate accordingly. In this lab, students are asked to implement a Bang-Bang safety controller as an "emergency stop" node using ROS and Python. The node receives the LIDAR data through the LIDAR sensor topic. If the distance measurements right in front of the car are below a specified threshold, the node recognizes that there is an obstacle and publishes a stop command to the SAFETY topic. The car, in turn, stops as soon as it receives that command, preventing a collision.
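A minimal sketch of such an emergency-stop node, assuming the standard LaserScan and AckermannDriveStamped ROS message types, generic scan and safety topic names, and an illustrative 0.5 m threshold:

```python
import rospy
from sensor_msgs.msg import LaserScan
from ackermann_msgs.msg import AckermannDriveStamped

THRESHOLD = 0.5  # metres; illustrative value, not the lab's exact threshold

def scan_callback(scan):
    # Look only at the readings roughly in front of the car (centre of the scan).
    mid = len(scan.ranges) // 2
    front = [r for r in scan.ranges[mid - 10:mid + 10] if r > 0.0]
    if front and min(front) < THRESHOLD:
        stop = AckermannDriveStamped()   # speed defaults to 0.0, i.e. a stop command
        safety_pub.publish(stop)

if __name__ == "__main__":
    rospy.init_node("safety_controller")
    safety_pub = rospy.Publisher("safety", AckermannDriveStamped, queue_size=1)
    rospy.Subscriber("scan", LaserScan, scan_callback)
    rospy.spin()
```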
LAB2—OpenCV basics. In this lab, students learn how to use OpenCV for basic operations such as reading an image from a file, displaying an image on the screen, gray-scale conversion, mirroring, resizing, cropping, image segmentation, and shuffling. They also use the MatPlotLib library54 to display images.
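A short sketch of these basic operations; the file name lane.jpg is hypothetical.

```python
# Minimal sketch of the LAB2 operations (the input file name is hypothetical).
import cv2
from matplotlib import pyplot as plt

img = cv2.imread("lane.jpg")                       # read an image from a file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # gray-scale conversion
mirrored = cv2.flip(img, 1)                        # mirror horizontally
resized = cv2.resize(img, (320, 240))              # resize to 320x240
cropped = img[50:200, 100:300]                     # crop a region by slicing

# MatPlotLib expects RGB order, while OpenCV loads images as BGR.
plt.imshow(cv2.cvtColor(cropped, cv2.COLOR_BGR2RGB))
plt.title("cropped region")
plt.show()
```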
LAB3—Streaming video with OpenCV. In this lab, students learn how to send a continuous stream of images with ROS as a topic. Our goal here is to learn how to distribute sensor data from one node to several other nodes that need them, as is the case with the camera in our project and the several modules that use its images to perform different tasks.
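A possible sketch of such an image-publishing node using cv_bridge; the topic name, camera index, and frame rate are assumptions, not values prescribed by the lab.

```python
# Sketch of an image-publishing node; topic name and camera index are assumed.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

if __name__ == "__main__":
    rospy.init_node("camera_publisher")
    pub = rospy.Publisher("camera/image_raw", Image, queue_size=1)
    bridge = CvBridge()
    cap = cv2.VideoCapture(0)          # first attached camera (assumed)
    rate = rospy.Rate(30)              # roughly 30 frames per second
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            # Convert the OpenCV BGR frame into a ROS Image message.
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
        rate.sleep()
```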
LAB4—Semantic segmentation. Image semantic segmentation is the partitioning of an
image into semantically meaningful areas or classes. In a sense, it is similar to
image classification but applied at a pixel level. Semantic segmentation has been
used in a wide range of applications ranging from robotics to augmented reality,
including in autonomous driving. For the latter application, a scene could be segmented into streets, traffic signs, street markings, cars, pedestrians, or sidewalks,
all of which are important data for an autonomous vehicle. In this laboratory,
students will train a SegNet55 on the CamVid data set56 for road scenes, and then
try it on the data from the car’s camera.
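To illustrate the pixel-wise classification idea, the following is a toy Keras encoder-decoder, far smaller than the SegNet actually trained in the lab; the input size and the 12-class label count are assumptions based on the commonly used CamVid label set.

```python
# Toy encoder-decoder for pixel-wise classification, far smaller than SegNet.
from tensorflow import keras
from tensorflow.keras import layers

def tiny_segmenter(num_classes=12, input_shape=(360, 480, 3)):
    inputs = keras.Input(shape=input_shape)
    # Encoder: convolutions with downsampling.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    # Decoder: upsampling back to the input resolution.
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # One softmax score per class at every pixel.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return keras.Model(inputs, outputs)

model = tiny_segmenter()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```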
LAB5—Blob detection using the ZED camera. The goal of this lab is to learn how to use the stereo camera with the onboard GPU to influence the vehicle's control. This is possible thanks to the computer vision algorithms implemented in the OpenCV library, which give a robot a sense of perception in its field of view. Using this perception, the students are asked to locate targets, called "blobs," track them, and ultimately make decisions based on the position of these blobs. The task of blob detection can be summarized with the following computer vision operations (a minimal OpenCV sketch follows the list):
• The first step is converting the frames from RGB (red, green, blue) into HSV
(hue, saturation, value) color space. The differences can be understood from the
names; while RGB stores each pixel as a combination of red, green, and blue
values, HSV describes them as a combination of hue (angle around the color
wheel), saturation (how “pure” the color is), and value (how vibrant the color
is).
• A threshold is applied to the image to filter out the desired blob with a certain color range. Filtering red, for example, is achieved by keeping only pixels with values between HSV (2, 95%, 47%) and HSV (20, 100%, 100%).
• The image is eroded to remove noise and smooth edges.
• Outlines around the blobs are highlighted using the OpenCV findContours function.
• Smaller contours usually result from noise, so we drop them and keep only the
one with the largest area.
• A polygon encapsulating the resulting contour is approximated using OpenCV’s
approxPolyDP. If it fits the shape of the desired blob, we keep it, otherwise, we
drop it.
• The blob is described as a single position in terms of the center of the final
polygon.
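The sketch below strings the listed steps together with OpenCV; the HSV bounds target an orange-ish colour and are illustrative, not the exact values used in the lab.

```python
# Sketch of the blob-detection steps above; HSV bounds are illustrative.
import cv2
import numpy as np

def detect_blob(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)           # BGR -> HSV
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))     # colour threshold
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8))          # remove noise
    # [-2] keeps this working with both OpenCV 3 and 4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)               # drop small contours
    poly = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    m = cv2.moments(poly)
    if m["m00"] == 0:
        return None
    # Blob described by the centre of the approximating polygon.
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
```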
LAB6—End-to-end learning for steering autonomous vehicle. Vehicle steering, as simple as
it seems, is one of the most challenging aspects of an Artificial Intelligence (AI)
robotic system. That is why researchers have focused on training deep neural
Table 3. Sample from data set.

Image   File name                            Speed      Angle
20      /home/jetson/racecar/056/00020.jpg   0.111419   0.0
118     /home/jetson/racecar/026/00118.jpg   0.625495   0.11
151     /home/jetson/racecar/026/00151.jpg   0.550052   0.24
225     /home/jetson/racecar/026/00225.jpg   0.500523   0.17
313     /home/jetson/racecar/026/00341.jpg   0.458846   0.24
network models on visual data from front cameras paired with the steering
angles.57 In this lab, students deploy different end-to-end deep learning models
on the Racecar and evaluate their performance. Experimental analysis shows that different models successfully manage to steer the vehicle autonomously. The pipeline for this lab can be listed as follows (a sketch of one possible steering network is given after the list):
• Building the data set: The vehicle's speed and steering angle are controlled with a Logitech F710 joystick. An ROS module subscribes to the joystick events, recording both of these values as well as images taken from the ZED camera, which are then stored in a directory as shown in Table 3.
• Splitting data set into training and test samples.
• A Python generator is used to produce training data on the fly, rather than storing everything in memory. Data augmentation techniques such as flipping, brightness changes, and shifts are also applied.
• Deploying different end-to-end models after training and steering the vehicle
based on data from the front view camera.
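The following is one possible small steering network in Keras, loosely in the spirit of end-to-end steering models; the layer sizes and the 160 × 320 input are illustrative, not the exact models deployed in the lab.

```python
# One possible end-to-end steering network; sizes are illustrative only.
from tensorflow import keras
from tensorflow.keras import layers

def steering_model(input_shape=(160, 320, 3)):
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Lambda(lambda x: x / 255.0 - 0.5),      # normalise pixel values
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(1),                               # predicted steering angle
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Training would use the recorded (image, angle) pairs of Table 3, fed through
# a Python generator with flip/brightness/shift augmentation.
model = steering_model()
model.summary()
```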
LAB7—Traffic sign classification and localization. Recognizing traffic signs58 is another crucial task of autonomous driving. In this lab assignment, students are asked to achieve this using a real-time object detection and classification algorithm, namely Tiny-YOLO.59 Students implement an ROS node with their trained Tiny-YOLO model. The workflow of traffic sign detection can be broken up into two steps: (i) locating possible signs in a large image and (ii) classifying each cropped traffic sign if it is indeed relevant and recognized by the trained model, or dismissing it if not.
In this lab, students cover in detail the different steps needed to create a pipeline,
which enables localization and classification of traffic signs. The pipeline itself
looks as follows:
• Building data set: same as in LAB6 (see Figure 4(a) to (f)).
• Image annotation: We labeled the frames using an image annotation tool called LabelImg.60 Each traffic sign is first enclosed in a rectangular bounding box, and the resulting area is then labeled according to the sign's class. Annotations are stored in an XML file following the same format used by ImageNet, called PASCAL VOC. Each entry stores the coordinates,
width, height, and class label of its respective image (see Figure 5(a) and (b) for an example; a small parsing sketch follows this list).
• Data normalization: Reducing variability across the training data makes learning more stable, helps the network generalize better to novel data, and keeps the inputs within the useful range of the nonlinearities.
• Data augmentation: Image augmentation is a necessary step to improve the performance of the model and mitigate the downside of having a small training data set for our deep neural network. This is achieved by synthesizing training images using one or more processing methods such as random rotation, brightness change, and axis flips. imgaug61 is a powerful library implementing image augmentation techniques for machine learning.
• Getting Google Colab ready to use: Google Colaboratory, or Colab for short,62 is a free cloud service offered by Google for machine learning education and research. It offers dedicated virtual machines equipped with NVIDIA Tesla K80 cards for GPU acceleration and comes with an online Jupyter notebook environment, eliminating the need for any setup on a local machine.
• Image classification using different pretrained models: In this part, students learn transfer learning and fine-tuning. Darknet-19 was used as an example pretrained model.
• Measuring network performance: Students put the model to the test with their test data set and evaluate various performance metrics such as accuracy, loss, precision, etc.
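As a small illustration of the annotation format, the sketch below reads the bounding boxes from one PASCAL VOC-style XML file produced by LabelImg; the file name and the printed example values are hypothetical.

```python
# Read label and bounding-box coordinates from one PASCAL VOC-style XML file.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        boxes.append((label,
                      int(bb.find("xmin").text), int(bb.find("ymin").text),
                      int(bb.find("xmax").text), int(bb.find("ymax").text)))
    return boxes

# Hypothetical file name and output, e.g. [("parking", 120, 60, 210, 150)]
print(read_voc_boxes("00020.xml"))
```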
LAB8—Parking in front of a visual marker. The goal of this lab is to bind the vehicle's vision directly to its control, allowing it to reliably navigate toward a visual marker. The visual marker is determined by blob detection, which is covered in LAB5; what this lab adds on top of that is the ability to track the blob's center along the horizontal axis and issue the appropriate Ackermann steering commands to direct the vehicle toward it.
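A minimal sketch of this steering logic: a proportional command computed from the blob centre's horizontal offset, published as an Ackermann drive message. The gain, image width, topic name, and stand-in blob position are assumptions for illustration.

```python
import rospy
from ackermann_msgs.msg import AckermannDriveStamped

IMAGE_WIDTH = 672     # ZED WVGA frame width per eye (assumed)
K_STEER = 0.005       # proportional gain, illustrative only

def blob_to_command(blob_x, target_reached):
    cmd = AckermannDriveStamped()
    error = (IMAGE_WIDTH / 2.0) - blob_x            # pixels left/right of centre
    cmd.drive.steering_angle = K_STEER * error      # turn toward the marker
    cmd.drive.speed = 0.0 if target_reached else 0.5
    return cmd

if __name__ == "__main__":
    rospy.init_node("parking_controller")
    pub = rospy.Publisher("ackermann_cmd", AckermannDriveStamped, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect
    # In the lab, blob_x would come from the LAB5 detector; a fixed value
    # stands in here for one detection.
    pub.publish(blob_to_command(blob_x=400, target_reached=False))
```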
LAB9—Histogram filters and 2D Kalman filter. For this lab, students deal with filtering,
which is a general method for estimating unknown variables from noisy observations over time. In particular, they learn the Bayes Filter and some of its variants:
the Histogram Filter and the Kalman Filter. They then apply these algorithms to the robot localization problem.
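As a stepping stone to the 2D version used in the lab, the following is a minimal one-dimensional Kalman filter run over a sequence of measurements and motions; the noise values are illustrative.

```python
# Minimal 1D Kalman filter (position only); noise values are illustrative.
def kalman_1d(mu, var, measurements, motion, r=1.0, q=0.5):
    """mu/var: prior mean and variance; r: measurement noise; q: motion noise."""
    estimates = []
    for z, u in zip(measurements, motion):
        # Measurement update: weighted average of prior and observation.
        k = var / (var + r)              # Kalman gain
        mu = mu + k * (z - mu)
        var = (1 - k) * var
        # Motion (prediction) update: shift the mean, grow the uncertainty.
        mu = mu + u
        var = var + q
        estimates.append((mu, var))
    return estimates

print(kalman_1d(0.0, 1000.0, measurements=[1.0, 2.0, 3.0], motion=[1.0, 1.0, 1.0]))
```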
Evaluation of educational program and discussion of results
To quantitatively assess the educational program's impact on the above learning process and to get a tangible idea of the students' satisfaction level, an electronic survey was conducted at the last session of the summer internship program, in which students graded their satisfaction on a 14-question form according to a five-level Likert-type scale: strongly disagree, disagree, neutral, agree, and strongly agree. Thus, they were able to express their opinion about how useful
Table 4. Students' survey questions.

No.   Question

Learning value
Q1    LAB0 enhanced my ability to understand the theoretical ROS lecture in a new way.
Q2    LAB1 enhanced my ability to understand the theoretical feedback control systems lecture in a new way.
Q3    LABs 2-5 enhanced my ability to understand the theoretical image processing lecture in a new way.
Q4    LAB6 and LAB7 enhanced my ability to understand the theoretical deep learning lecture in a new way.
Q5    LAB8 enhanced my ability to understand the theoretical visual servoing lecture in a new way.
Q6    LAB9 enhanced my ability to understand the theoretical localization lecture in a new way.

Value added
Q7    The reproducible educational plan helped me to integrate different technologies into working systems solutions.
Q8    I can learn similar topics in this way without such a training plan.
Q9    I can prepare such a reproducible educational study for other lessons.

Design and other issues
Q10   The ideas and concepts within the reproducible educational plan were presented clearly and easy to follow.
Q11   The time allocated to the course was enough.
Q12   I was interested in this topic before I started to study it.
Q13   I am interested in this topic after having studied it.
Q14   I recommend this educational program to a fellow student.
the program was in reinforcing and improving their knowledge of the field. This feedback from students is important because it gives instructors a good idea of which characteristics of their teaching approach should be modified, allowing each new session to be an improvement over the previous one. Moreover, the interaction between students and educators ensures the effectiveness of the course, helps maintain the quality of university education, and permits them to reflect upon their experiences, among other benefits. Questionnaire items are structured in three subscales: "Learning value," "Value added," and "Design and other issues." The aim of each item is explained as follows:
• Learning value encompasses the questions that try to probe students’ perceptions of how effectively the reproducible educational program is designed for the
purpose of teaching mini autonomous race car programming.
• Value added tries to assess the ability to learn and prepare similar systems and
how much of this ability is thanks to this program.
• Design and other issues focus on students’ judgment of the clarity of the course
and their interest in the program.
Figure 6. Student responses for the reproducible educational plan survey (number of students = 15): (a) learning value (Q1–Q6), (b) value added (Q7–Q9), and (c) design and other issues (Q10–Q14).
The full questionnaire is presented in Table 4. Answers were collected anonymously and the survey data were analyzed using Excel. The first six items make up the Learning value subscale, the subsequent three items make up the Value added subscale, and the last five items relate to the Design and other issues subscale. Figure 6(a) to (c) represents the responses of the 15 students who took part in the survey; the high percentage of "strongly agree" and "agree" answers indicates the success of this reproducible educational program for mini autonomous car programming. Regarding learning value, 60% of students agree or strongly agree,
showing an important positive attitude toward the improvement of their abilities
in topics such as ROS, feedback control systems, image processing basics, and
visual servoing. Forty-seven percent of students agree or strongly agree that they can integrate different technologies, but a relatively large number of students are neutral toward preparing such a reproducible educational notebook. Most of
the students answered positively regarding the educational design and other
issues section.
Conclusions
A reproducible educational program specifically focused on the teaching of mini
autonomous car programming has been presented in this paper. The proposed
training program includes different lectures and laboratory assignments. Data from students' feedback indicate success in achieving the learning objectives
of the program. The student response was satisfactory, showing a high degree of
interest in the proposed approach as well as satisfaction with it. We will increase
the number of students in the upcoming years for this reproducible open-source
robotic education program.
We plan to add a vehicle-to-vehicle communication module for better handling of closed roads and congested traffic scenarios. The locations of traffic accidents and jams are expected to be broadcast by other vehicles, allowing the receivers to avoid such roads and plan alternative routes.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Kocaeli University
Scientific Research and Development Support Program (BAP) in Turkey under project
number 2017/067HD. We would also like to thank OBSS for their support and
OpenZeka for their training under MARC program.
ORCID iD
Süleyman Eken  https://orcid.org/0000-0001-9488-908X
References
1. Collberg C and Proebsting TA. Repeatability in computer systems research. Commun
ACM 2016; 59: 62–69.
2. Hothorn T and Leisch F. Case studies in reproducibility. Brief Bioinformat 2011; 12:
288–300.
3. Ruvolo P. Dude, where's my robot?: A localization challenge for undergraduate robotics. Proc AAAI Conf Artif Intell 31: 4798–4802.
4. Bihl T, Jenkins T, Cox C, et al. From lab to internship and back again: learning autonomous systems through creating a research and development ecosystem. In: Proc AAAI
Conf Artif Intell 33: 9635–9643.
5. Casser V, Pirk S, Mahjourian R, et al. Depth prediction without the sensors: leveraging
structure for unsupervised learning from monocular videos. In: Proc AAAI Conf Artif
Intell 33: 8001–8008.
6. Chi L and Mu Y. Deep steering: learning end-to-end driving model from spatial and
temporal visual cues. arXiv preprint arXiv:170803798 2017.
7. Fridman L, Jenik B and Reimer B. Arguing machines: perception control system redundancy and edge case discovery in real-world autonomous driving. arXiv preprint
arXiv:171004459 2017.
8. Snider JM, et al. Automatic steering methods for autonomous automobile path tracking.
Pittsburgh, PA: Robotics Institute, 2009.
9. Bohg J, Ciocarlie M, Civera J, et al. Big data on robotics. Big Data 2016; 4:
195–196.
10. Workshop at ICRA, http://www.heronrobots.com/EuronGEMSig/gem-sig-events/
icra2017workshoprrr (accessed 10 February 2019).
11. Workshop at ICRA in Brisbane, http://www.heronrobots.com/EuronGEMSig/gem-sig-events/ICRA2018Workshop (accessed 10 February 2019).
12. IEEE RAS summer school on deep learning for robot vision, http://robotvision2019.
amtc.cl/ (accessed 10 February 2019).
13. Code ocean, https://codeocean.com/ (accessed 10 February 2019).
14. Bonsignorio F. A new kind of article for reproducible research in intelligent robotics
[from the field]. IEEE Robot Automat Mag 2017; 24: 178–182.
15. Dahnoun N. Teaching electronics to first-year non-electrical engineering students. Int J
Electr Eng Educ 2017; 54: 178–186.
16. Nourbakhsh I, Crowley K, Wilkinson K, et al. The educational impact of the robotic
autonomy mobile robotics course. Technical Report CMU-RI-TR-03-29. Pittsburgh,
PA: Carnegie Mellon University, 2003.
17. Nourbakhsh IR, Crowley K, Bhave A, et al. The robotic autonomy mobile robotics
course: robot design, curriculum design and educational assessment. Auton Robots 2005;
18: 103–127.
18. de Gabriel JMG, Mandow A, Fernandez-Lozano J, et al. Using lego nxt mobile robots
with labview for undergraduate courses on mechatronics. IEEE Trans Educ 2011; 54:
41–47.
19. Calnon M, Gifford CM and Agah A. Robotics competitions in the classroom: enriching
graduate-level education in computer science and engineering. Global J Eng Educ 2012;
14: 6–13.
20. Bousaba NA, Conrad CMHJM and Cecchi V. Keys to success in the IEEE hardware
competition. In: Proceedings of the American Society for Engineering Education Annual
Conference, pp.1–18.
21. Chris C. Learning with first LEGO league. In: McBride R and Searson M (eds)
Proceedings of society for information technology & teacher education international conference 2013. New Orleans, LA: Association for the Advancement of Computing in
Education (AACE), pp. 5118–5124.
22. Fike H, Barnhart P, Brevik CE, et al. Using a robotics competition to teach about and
stimulate enthusiasm for Earth science and other STEM topics. In EGU General Assembly
Conference Abstracts, EGU General Assembly Conference Abstracts 18: EPSC2016–10.
23. National robotics challenge, https://www.thenrc.org/ (accessed 10 February 2019).
24. Engineering and robotics learned young (early), http://www.earlyrobotics.org/ (accessed
10 February 2019).
25. Self-driving fundamentals: Featuring Apollo, https://www.udacity.com/course/self-driving-car-fundamentals-featuring-apollo--ud0419 (accessed 10 February 2019).
26. Mit 6.s094: Deep learning for self-driving cars, https://selfdrivingcars.mit.edu/ (accessed
10 February 2019).
27. McLurkin J, Rykowski J, John M, et al. Using multi-robot systems for engineering
education: teaching and outreach with large numbers of an advanced, low-cost robot.
IEEE Trans Educ 2013; 56: 24–33.
28. Danahy E, Wang E, Brockman J, et al. Lego-based robotics in higher education:
15 years of student creativity. Int J Adv Robot Syst 2014; 11: 27.
29. Afari E and Khine M. Robotics as an educational tool: Impact of Lego mindstorms.
International Journal of Information and Education Technology 2017; 7: 437–442.
30. Crenshaw TL and Beyer S. Upbot: a testbed for cyber-physical systems. In: Proceedings
of the 3rd international conference on cyber security experimentation and test. CSET’10.
Berkeley, CA: USENIX Association, pp. 1–8.
31. Kelly J, Binney J, Pereira A, et al. Just add wheels: leveraging commodity laptop hardware for robotics and AI education. In: AAAI 2008 AI Education Colloquium. Chicago,
IL. Also appears in AAAI Technical Report WS-08-02. Menlo Park, CA: AAAI Press,
pp. 50–55.
32. Karaman S, Anders A, Boulet M, et al. Project-based, collaborative, algorithmic robotics for high school students: programming self-driving race cars at MIT. In: 2017 IEEE
integrated STEM education conference (ISEC). pp. 195–203. IEEE.
33. VESC—open source esc, http://vedder.se/2015/01/vesc-open-source-esc/ (accessed
10 February 2019).
34. Stereo labs, https://www.stereolabs.com/zed/ (accessed 10 February 2019).
35. Scanse, https://github.com/scanse (accessed 10 February 2019).
36. Sparkfun, https://www.sparkfun.com/products/retired/10736 (accessed 10 February
2019).
37. Jupyter notebook, https://jupyter.org/ (accessed 10 February 2019).
38. Kluyver T, Ragan-Kelley B, Perez F, et al. Jupyter notebooks—a publishing format for
reproducible computational workflows. In: Loizides F and Scmidt B (eds) Positioning
and power in academic publishing: players, agents and agendas. Amsterdam: IOS Press,
pp. 87–90.
39. Shen H. Interactive notebooks: sharing the code. Nature 2014; 515: 151–152.
40. Ros, http://www.ros.org/ (accessed 10 February 2019).
41. Kul S, Eken S and Sayar A. Distributed and collaborative real-time vehicle detection
and classification over the video streams. Int J Adv Robot Syst 2017; 14:
1729881417720782.
42. Jetson tx2 module, https://developer.nvidia.com/embedded/buy/jetson-tx2 (accessed
10 February 2019).
43. Twigg P, Ponnapalli P and Fowler M. Workshop problem-solving for improved student
engagement and increased learning in engineering control. Int J Electr Eng Educ 2018;
55: 120–129.
44. Skruch P, et al. Control systems in semi and fully automated cars. In Mitkowski W,
Kacprzyk J and Oprzedkiewicz K (eds) Trends in advanced intelligent control, optimization and automation. Cham: Springer International Publishing, pp. 155–167. ISBN
978-3-319-60699-6.
45. Opencv, https://opencv.org/ (accessed 10 February 2019).
46. Darpa grand challenge, https://cs.stanford.edu/group/roadrunner/old/index.html
(accessed 10 February 2019).
47. Tensorflow, https://github.com/tensorflow/tensorflow (accessed 10 February 2019).
48. Keras, https://github.com/keras-team/keras (accessed 10 February 2019).
49. Montemerlo M, Thrun S, Koller D, et al. FastSLAM: a factored solution to the simultaneous localization and mapping problem. In: Proceedings of the AAAI national conference on artificial intelligence. Menlo Park, CA: AAAI, pp. 593–598.
50. Sermanet P, Eigen D, Zhang X, et al. Overfeat: integrated recognition, localization and
detection using convolutional networks, https://arxiv.org/abs/1312.6229 (2013).
51. Simonyan K and Zisserman A. Very deep convolutional networks for large-scale image
recognition. CoRR 2014; abs/1409.1556.
52. Redmon J, Divvala S, Girshick R, et al. You only look once: unified, real-time object
detection. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR),
pp. 779–788.
53. Racecar-training, https://github.com/husmen/racecar-training (accessed 10 February
2019).
54. Matplotlib, https://matplotlib.org/ (accessed 10 February 2019).
55. Segnet, http://mi.eng.cam.ac.uk/projects/segnet/ (accessed 10 February 2019).
56. Motion-based segmentation and recognition dataset, http://mi.eng.cam.ac.uk/research/
projects/VideoRec/CamVid/ (accessed 10 February 2019).
57. Karsli M, Satilmiş Y, Şara M, et al. End-to-end learning model design for steering
autonomous vehicle. In: 2018 26th Signal processing and communications applications
conference (SIU), pp. 1–4. IEEE.
58. Satılmış Y, Tufan F, Şara M, et al. CNN based traffic sign recognition for mini autonomous vehicles. In: Swiatek J, Borzemski L and Wilimowska Z (eds) Information systems architecture and technology: proceedings of 39th international conference on information systems architecture and technology—ISAT 2018. Cham: Springer International Publishing, pp. 85–94. ISBN 978-3-319-99996-8.
59. Redmon J and Farhadi A. Yolo9000: better, faster, stronger. In: 2017 IEEE conference
on computer vision and pattern recognition (CVPR), pp. 7263–7271.
60. labelimg, https://github.com/tzutalin/labelImg (accessed 10 February 2019).
61. imgaug, https://github.com/aleju/imgaug (accessed 10 February 2019).
62. Colab, https://colab.research.google.com (accessed 10 February 2019).