This paper assesses how multi-resolution object-based classification methods can be used to classify urban environments made up of a mixture of buildings, sub-elements such as car parks, roads, shade and pavements, and vegetation such as grass and trees. An unmanned aerial vehicle (UAV) was used to provide high-resolution mosaic orthoimages and to generate a Digital Surface Model (DSM). For the study area chosen for this paper, 400 aerial images with a spatial resolution of 7 cm were used to build the orthoimage mosaic and DSM, which were georeferenced using a well-distributed network of 12 ground control points (GCPs) (RMSE = 8 cm). Combined with an onboard RTK-GNSS-enabled dual-frequency receiver, this provided absolute block orientation with an accuracy comparable to that achieved by traditional indirect sensor orientation. In RTK operation, the GNSS receiver on the UAV receives a differential correction signal from a base station through a communication link. This allows the precise position of the UAV to be established: the RTK corrections allow position, velocity, altitude and heading to be tracked, alongside the measurement of raw sensor data. The confusion matrices show that the overall accuracy of the object-oriented classification was 84.37%, with an overall Kappa of 0.74. The classes with poor classification accuracy were shade, parking lots and concrete pavements, with producer's accuracies of 81%, 74% and 74% respectively, while lakes and solar panels each scored 100%, indicating good classification accuracy.
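The accuracy figures quoted above are standard confusion-matrix statistics. The sketch below shows how overall accuracy, Cohen's Kappa and per-class producer's accuracy are derived from a confusion matrix; the matrix counts and function names are illustrative only, not the study's data.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy: correctly classified samples / total samples."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Cohen's Kappa: agreement beyond chance, from a square confusion matrix."""
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def producers_accuracy(cm):
    """Producer's accuracy per class: diagonal / reference (column) totals."""
    return np.diag(cm) / cm.sum(axis=0)

# Illustrative 3-class matrix (rows = classified, columns = reference);
# the counts are invented, not taken from the study.
cm = np.array([[50,  2,  3],
               [ 4, 40,  6],
               [ 1,  3, 41]])
print(round(overall_accuracy(cm), 4))
print(round(cohens_kappa(cm), 4))
```

With real data, the same functions applied to the study's full matrix would reproduce the reported 84.37% overall accuracy and Kappa of 0.74.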
In recent years, photogrammetry has come to be recognised as a highly effective surveying method for producing 3D representations of the Earth's surface. This is because it can be deployed on demand and can create high-resolution data, including DSM layers and orthophotos (orthorectified images). Photogrammetry encompasses the analysis of Earth-based (terrestrial) data as well as dedicated air- and space-borne campaigns [
Despite these strengths, the use of aerial photogrammetry has historically been limited. It was seen as a high-cost method of data collection and often faced difficulties in collecting 3D topographic data, orthophotos, topographic maps and other map features because of the large-format metric cameras that were required [
There have also been developments in Global Navigation Satellite Systems (GNSS) that are particularly relevant to this paper. Real-Time Kinematic (RTK) devices are increasingly being fitted to readily available UAVs. This matters because RTK allows the position of the UAV to be tracked more easily and helps ensure that the data collected are more accurate (to within 2 cm) [
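The principle behind RTK-style differential correction can be illustrated numerically. This is a conceptual single-epoch sketch only (real RTK resolves carrier-phase ambiguities and is far more involved); all coordinates below are invented for illustration. Errors common to the base station and the rover (UAV) cancel: the base's known position is compared with its measured position, and the resulting error estimate is applied to the rover.

```python
# Conceptual differential correction (illustrative values, degrees lon/lat).
base_true      = (35.000000, 32.000000)   # surveyed base station position
base_measured  = (35.000012, 31.999991)   # base position as measured this epoch
rover_measured = (35.100020, 32.049985)   # rover (UAV) raw measurement

# Shared atmospheric/orbit errors estimated at the base...
corr = (base_true[0] - base_measured[0], base_true[1] - base_measured[1])

# ...are applied to the rover measurement, cancelling the common error.
rover_corrected = (rover_measured[0] + corr[0], rover_measured[1] + corr[1])
print(rover_corrected)  # ≈ (35.100008, 32.049994)
```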
Advances in remote sensing have made UAVs even more useful and effective data collection tools, as UAVs can now combine temporal and spatial sensing. This allows even more precise recognition of features, although the images produced can be subject to noise from shadows or the salt-and-pepper effect [
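One standard remedy for salt-and-pepper noise, included here for illustration rather than taken from the source, is median filtering: each pixel is replaced by the median of its neighbourhood, which discards isolated extreme values. A minimal NumPy sketch on a synthetic image:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter; edges handled by reflection padding."""
    padded = np.pad(img, 1, mode='reflect')
    # Stack the 9 shifted views of the 3x3 neighbourhood, take the median.
    stacked = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(stacked, axis=0)

rng = np.random.default_rng(0)
img = np.full((32, 32), 128.0)           # flat grey test image
noisy = img.copy()
mask = rng.random(img.shape) < 0.05      # corrupt ~5% of pixels
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())  # salt & pepper
cleaned = median_filter3(noisy)
```

Because the corrupted pixels are isolated, the 3×3 median restores almost all of them, whereas a mean filter would smear the extreme values into their neighbours.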
However, these building detection algorithms are not without problems and can struggle to identify buildings smaller than 50 m2 or buildings on sloped ground. This is particularly common in informal settlements, meaning that these detection algorithms are not suitable for such areas. To ensure that buildings in these areas can be mapped, both 2D and 3D features must be analysed in order to achieve a high level of classification accuracy. The aim of this research, then, is to assess the effectiveness of the object-oriented image analysis software eCognition (Definiens Imaging, Germany) in urban environments that include features such as buildings, roads, car parks and vegetation. This is done by combining high-spatial-resolution mosaic orthoimages and DSM layers in order to classify features of the environment. This approach has an advantage over classification of VHR satellite imagery alone: UAV orthoimages can combine object segmentation with fuzzy digital classification to recognise features in a diverse environment, whereas objects may be too spectrally similar for VHR imagery to be used effectively.
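The core object-based idea of combining per-segment spectral information with DSM height can be sketched as follows. This is a toy rule set with invented thresholds (NDVI 0.3, height 2.5 m) applied to synthetic data; it is not eCognition's actual segmentation or fuzzy classification, only an illustration of why 2D and 3D features together separate classes that are spectrally similar.

```python
import numpy as np

def classify_segments(ndvi, height, labels):
    """Toy object-based rules: mean NDVI separates vegetated from built
    surfaces, mean DSM height separates tall from low objects.
    Thresholds are illustrative, not from the study."""
    classes = {}
    for seg in np.unique(labels):
        m = labels == seg
        g, h = ndvi[m].mean(), height[m].mean()
        if g > 0.3:
            classes[seg] = 'tree' if h > 2.5 else 'grass'
        else:
            classes[seg] = 'building' if h > 2.5 else 'pavement'
    return classes

# Tiny synthetic scene: four 4x4 segments in a 2x2 block layout.
labels = np.repeat(np.repeat(np.array([[0, 1], [2, 3]]), 4, 0), 4, 1)
ndvi   = np.where(np.isin(labels, [0, 2]), 0.6, 0.1)   # segments 0, 2 vegetated
height = np.where(np.isin(labels, [0, 1]), 5.0, 0.5)   # segments 0, 1 elevated
print(classify_segments(ndvi, height, labels))
# segment 0 -> tree, 1 -> building, 2 -> grass, 3 -> pavement
```

Note that NDVI alone cannot separate tree from grass, and height alone cannot separate building from tree; only the combination resolves all four classes.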
The site chosen for this research was the Jordan University of Science and Technology (JUST). Founded in 1986 and designed by the Japanese architect Kenzo Tange, the campus combines futuristic style and sustainability. It is located 70 km north of the capital Amman and 6 km south of Al-Ramtha, at latitude 32˚28'36.77"N and longitude 35˚58'24.05"E, as shown in
The MARSRobotics® Talon fixed-wing UAV, as seen in
| Specification | Technical details |
|---|---|
| Wingspan | 2.0 m (6.56 ft) |
| Construction | EPO foam |
| Take-off | Hand-launch, fully automatic |
| Take-off weight | 3500 g (7.7 lbs) |
| Cruise speed | 50 km/h (31 mph) |
| Maximum speed | 85 km/h (52.8 mph) |
| Motor | 530 kV brushless motor |
| Battery | 3-cell 9000 mAh; two batteries required for flight |
| Flight time | 1.5 hours (with full payload) |
| Landing | Repeatedly passes over the desired area at 30 m - 40 m |
| Autopilot | Pixhawk by 3D Robotics |
| Max. altitude | About 2000 m above sea level |
| Telemetry | Battery status, altitude, ground speed, compass, distance travelled, flight time (speech enabled) |
| Operating conditions | All-weather performance; can fly in light rain, as all electronics are enclosed |
The MARSRobotics® Talon carries a SONY A6000 (ILCE-6000L) mirrorless digital camera, as seen in
The way in which the flight is controlled is crucial to the MARSRobotics® Talon. Drones such as this can be controlled in a variety of ways, for example via GPS-enabled autopilot systems or radio-controlled hardware. In this study, the Pixhawk autopilot system was used to control the UAV. This is an open-source autopilot system marketed towards users of inexpensive autonomous
| Specifications | Technical details |
|---|---|
| Camera format | Compact system camera |
| Weight | 468 g, includes rechargeable batteries |
| Size | 120 × 67 × 45 mm |
| Sensor type | CMOS |
| Effective megapixels | 24.3 |
| Sensor format | APS-C |
| Sensor size | 23.50 mm × 15.60 mm |
| Aspect ratio | 3:2 |
| Colour filter type | RGBG |
| Image resolution | 6000 × 4000 (24.0 MP, 3:2) |
| Image file format | RAW + JPEG |
| Continuous-mode frames/second | 11.1 |
| Focal length (actual) | 16 - 50 mm |
| Zoom ratio | 3.13× |