Citation
Al-Azizi, Jalal Ibrahim (2020). Deep learning approach for automated geospatial data collection. Doctoral thesis, Universiti Putra Malaysia.
Abstract
Geospatial data collection and mapping are key tasks for many users of spatial information. Traditionally, they can be carried out with a variety of methods, such as mobile mapping, remote sensing, and conventional surveying, each with its own advantages, accuracy, costs, and limitations. It is therefore essential to assess the requirements of a project to ensure that data of the required quality are acquired at the lowest possible cost. One of the greatest barriers, however, is the availability of digital spatial data and attributes, largely because these methods are costly and demand considerable effort and time.
Advances in technology, such as object recognition through artificial intelligence, have led to novel feature-extraction approaches for a number of applications, with information expected to become more accurate and available in real time at lower operational and field-observation costs. Several research groups have therefore investigated the detection of road objects such as road signs. The main drawback of these works, however, is that none of them used low-cost sensors to generate geospatial maps; in addition, some are expensive and require considerable time to process the collected information.
In this study, I present a new approach to real-time geospatial data collection and map generation that integrates deep learning and geomatics technologies. The proposed solution runs on a laptop connected to a single vision sensor (e.g. a camera) to capture photographs or videos and to a location unit (e.g. a global navigation satellite system receiver) to record the user's geographic coordinates. For the selected object classes, a customized dataset and a prototype framework, "DeepAutoMapping", were built.
"DeepAutoMapping" was developed on the basis of convolutional neural
networks inspired by recent rapid advancements in deep learning literature to
detect, locate and recognize four main street objects (trees, street light poles,
traffic signs, and palms) based on a defined object detection dataset. The
prototype calculates the positioning of the detected object using a geographic
coordinate system and then generates a geospatial database including object
ID, object name, single photograph or video sequence (based on the type of
test), distances, bearings, user and object coordinates. It allows users to verify
the results in real time without the need to revisit the site.
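The abstract does not specify how the object position is computed from the captured data. A minimal sketch of one plausible reading follows, assuming the classic forward geodetic problem on a spherical Earth: the object's coordinates are derived from the user's GNSS position, the measured bearing, and the estimated distance, and stored in a record whose fields mirror those listed above. The record schema, function names, and the spherical Earth model are assumptions for illustration, not the thesis's actual implementation.

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres (spherical model)

@dataclass
class DetectionRecord:
    """One row of the geospatial database described in the abstract.
    Field names follow the abstract; the exact schema is an assumption."""
    object_id: int
    object_name: str      # e.g. "tree", "street light pole", "traffic sign", "palm"
    source: str           # single photograph or video sequence
    distance_m: float     # estimated camera-to-object distance
    bearing_deg: float    # bearing from user to object, clockwise from north
    user_lat: float
    user_lon: float
    object_lat: float
    object_lon: float

def destination_point(lat_deg: float, lon_deg: float,
                      bearing_deg: float, distance_m: float) -> tuple[float, float]:
    """Forward geodetic problem on a sphere: given a start point,
    bearing, and distance, return the destination point."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    ang = distance_m / EARTH_RADIUS_M  # angular distance in radians

    lat2 = math.asin(math.sin(lat1) * math.cos(ang)
                     + math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Hypothetical example: a traffic sign detected 12 m away on a bearing of
# 45 degrees from a user standing at (3.0023 N, 101.7070 E).
obj_lat, obj_lon = destination_point(3.0023, 101.7070, 45.0, 12.0)
record = DetectionRecord(1, "traffic sign", "video sequence",
                         12.0, 45.0, 3.0023, 101.7070, obj_lat, obj_lon)
print(record)
```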
Various evaluation and test scenarios were conducted to validate the outputs. The findings show that the proposed approach is easy to use and, in an outdoor environment, achieves a detection accuracy of 88% with 6% false detections and a positioning accuracy of 6.16 m for video streaming and 9.99 m for single photographs.
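The abstract does not state which error statistic the metre-level positioning accuracy figures represent. One plausible reading, sketched below as an assumption, is the root mean square of the great-circle (haversine) distances between each estimated object position and its surveyed ground-truth position; the thesis may use a different error measure.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def positioning_rmse_m(estimated, surveyed):
    """RMSE of horizontal positioning error over paired (lat, lon) points."""
    errors = [haversine_m(e[0], e[1], g[0], g[1])
              for e, g in zip(estimated, surveyed)]
    return math.sqrt(sum(err ** 2 for err in errors) / len(errors))

# Hypothetical example: two detected objects versus surveyed reference positions.
est = [(3.00231, 101.70702), (3.00240, 101.70730)]
ref = [(3.00235, 101.70705), (3.00236, 101.70726)]
print(f"positioning error: {positioning_rmse_m(est, ref):.2f} m")
```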
Compared with currently available data collection methods, the proposed solution can be considered a pipeline for one of the fastest and cheapest methods of data survey and geospatial map generation. It also opens up a new research area for geospatial data collection using deep learning.
Additional Metadata
Item Type: Thesis (Doctoral)
Subjects: Geospatial data - Research; Geographic information systems; Spatial analysis (Statistics)
Call Number: FK 2020 83
Chairman Supervisor: Associate Professor Helmi Zulhaidi Mohd Shafri, PhD
Divisions: Faculty of Engineering
Keywords: Geospatial Data, Mapping and Localisation, Deep Learning Neural Networks, Positioning, GIS, Computer Vision for Automation, Survey Method
Depositing User: Ms. Nur Faseha Mohd Kadim
Date Deposited: 01 Jun 2021 01:16
Last Modified: 08 Dec 2021 02:57
URI: http://psasir.upm.edu.my/id/eprint/85693