
Current Issue

2019, Volume 2, Issue 2. Publication date: 2019-06-20
Preface to the Album “Digital Photogrammetry and Machine Vision”
2019, 2(2):  1-1.  doi:10.11947/j.JGGS.2019.0201
General Structure Physics of an Aerial Remote Sensing Platform and Its Systemic Accuracy Criterion
Lei YAN, Zhengkang ZUO, Yingcheng LI, Xiuxiao YUAN, Yan SONG, Qingsheng XUE, Shihu ZHAO
2019, 2(2):  2-16.  doi:10.11947/j.JGGS.2019.0202

Accuracy is a key factor in high-resolution remote sensing and photogrammetry. The main factors affecting accuracy are imaging system errors and data processing errors. Due to the complexity of aerial camera errors, this paper focuses on the design of digital aerial camera systems and the means to reduce system errors and data processing inefficiencies. Many kinds of digital aerial camera systems exist at present; however, they lack a unified physical model, which ultimately leads to more complicated designs and multi-camera modes. Such a system is complex and costly, as it is easily affected by factors such as vibration and temperature; consequently, the installed accuracy can only reach the millimeter level. Here, we describe a unified physical structure for a digital aerial camera that encompasses out-of-field multi-charge-coupled-device (CCD), in-field multi-CCD, and once-imaging and twice-imaging digital camera systems. This model is referred to as the variable baseline-height ratio spatiotemporal model. The variable ratio allows the opto-mechanical spatial parameters to be linked with height accuracy, thus providing a connection to the surface elevation. The twice-imaging digital camera prototype system and the wideband limb imaging spectrometer provide a transformation prototype from the current multi-rigid once-imaging aerial camera to a single rigid structure. Thus, our research lays a theoretical foundation and provides prototype references for the construction and industrialization of digital aerial systems.
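For orientation, the role of the baseline-height ratio in elevation accuracy can be seen in the textbook stereo error-propagation relation below. This is a generic illustration, not the paper's variable baseline-height ratio spatiotemporal model, and the symbols are the standard ones rather than quantities defined in the paper.

```latex
% Textbook stereo height-accuracy relation (illustrative only):
%   \sigma_Z : standard deviation of the derived elevation
%   Z : flying height, B : stereo baseline, f : focal length
%   \sigma_p : parallax measurement precision in the image plane
\sigma_Z \;=\; \frac{Z^{2}}{B\,f}\,\sigma_p
        \;=\; \frac{Z}{B/Z}\cdot\frac{\sigma_p}{f}
```

The relation makes the abstract's point explicit: for fixed flying height, focal length, and parallax precision, a larger baseline-height ratio B/Z yields better height accuracy, which is why linking the opto-mechanical parameters to this ratio connects the camera design to surface elevation accuracy.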

Smart Photogrammetric and Remote Sensing Image Processing for Very High Resolution Optical Images—Examples from the CRC-AGIP Lab at UNB
Yun ZHANG
2019, 2(2):  17-26.  doi:10.11947/j.JGGS.2019.0203

This paper introduces some of the image processing techniques developed in the Canada Research Chair in Advanced Geomatics Image Processing Laboratory (CRC-AGIP Lab) and in the Department of Geodesy and Geomatics Engineering (GGE) at the University of New Brunswick (UNB), Canada. The techniques were developed by innovatively ("smartly") utilizing the characteristics of the available very high resolution optical remote sensing images to solve important problems or create new applications in photogrammetry and remote sensing. The techniques introduced are: automated image fusion (UNB-PanSharp), satellite image online mapping, street view technology, moving vehicle detection using a single set of satellite imagery, supervised image segmentation, image matching in smooth areas, and change detection using images from different viewing angles. Because of their broad application potential, some of the techniques have made a global impact, and others have demonstrated the potential to do so.
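To illustrate the fusion idea only, a minimal sketch of a generic component-substitution pan-sharpening scheme follows. This is not the UNB-PanSharp algorithm, whose band weighting is derived differently; the function name, equal-weight default, and array layout are assumptions for illustration.

```python
# Generic component-substitution pan-sharpening (illustrative sketch;
# NOT the UNB-PanSharp algorithm, whose band weights differ).
import numpy as np

def pansharpen(ms, pan, weights=None):
    """ms: (bands, H, W) multispectral data upsampled to the pan grid;
    pan: (H, W) panchromatic band; returns a sharpened (bands, H, W) cube."""
    bands = ms.shape[0]
    if weights is None:                             # assumed equal weights
        weights = np.full(bands, 1.0 / bands)
    intensity = np.tensordot(weights, ms, axes=1)   # synthetic low-res pan
    gain = pan / np.maximum(intensity, 1e-6)        # per-pixel injection gain
    return ms * gain                                # modulate each band
```

The sketch shows where a method like UNB-PanSharp differs from naive fusion: the quality of the synthetic intensity (here, a plain weighted sum) is what controls spectral distortion in the result.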

Research on Key Technologies of Precise InSAR Surveying and Mapping Applications Using Automatic SAR Imaging
Xinming TANG, Tao LI, Xiaoming GAO, Qianfu CHEN, Xiang ZHANG
2019, 2(2):  27-37.  doi:10.11947/j.JGGS.2019.0204

Precise interferometric synthetic aperture radar (InSAR) is a new intelligent photogrammetric technology based on automatic imaging and processing. It has become the most efficient satellite surveying and mapping (SASM) method, using the interferometric phase to create global digital elevation models (DEMs) with high precision. In this paper, we propose the application of systematic InSAR technologies to SASM. Three key technologies are addressed: calibration, data processing, and post-processing. First, we calibrate the geometric and interferometric parameters, including the azimuth time delay, range time delay, and atmospheric delay, as well as baseline errors. Second, we use the calibrated parameters to create a precise DEM; one of the important procedures in data processing is the resolution of phase ambiguities. Finally, we improve the DEM quality through the joint use of block adjustment, long-short baseline combination, and merging of descending and ascending data. We use six sets of TanDEM-X data covering Shanxi to conduct the experiment. The root mean square error of the final DEM is 5.07 m in the mountainous regions, and the low-coherence area is 0.8 km². The result meets the Chinese domestic SASM accuracy standard at both the 1:50000 and 1:25000 measurement scales.
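For readers new to InSAR, the standard phase-to-height relation below shows why baseline calibration matters so much for DEM precision. It is a textbook relation given here for orientation, not the paper's calibration model.

```latex
% Textbook InSAR height sensitivity. The height of ambiguity h_amb is
% the elevation change producing one 2*pi cycle of interferometric phase:
%   \lambda : radar wavelength, R : slant range, \theta : incidence angle,
%   B_\perp : perpendicular baseline, p = 2 (repeat-pass) or 1 (bistatic)
h_{\mathrm{amb}} \;=\; \frac{\lambda R \sin\theta}{p\,B_{\perp}},
\qquad
\Delta h \;=\; \frac{h_{\mathrm{amb}}}{2\pi}\,\Delta\phi
```

Because the derived height scales inversely with the perpendicular baseline, small baseline errors propagate directly into elevation errors, which is why the paper calibrates baseline errors alongside the azimuth, range, and atmospheric delays.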

Progress and Applications of Visual SLAM
Kaichang DI, Wenhui WAN, Hongying ZHAO, Zhaoqin LIU, Runzhi WANG, Feizhou ZHANG
2019, 2(2):  38-49.  doi:10.11947/j.JGGS.2019.0205

Visual simultaneous localization and mapping (SLAM) provides mapping and self-localization results for a robot in an unknown environment based on visual sensors, which have the advantages of small volume, low power consumption, and rich information acquisition. Visual SLAM is essential to, and plays a significant role in, supporting automated and intelligent robotic applications. This paper presents the key techniques of visual SLAM, summarizes the current research status, and analyzes new trends in visual SLAM research and development. Finally, specific applications of visual SLAM in restricted environments, including deep space and indoor scenarios, are discussed.
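As a minimal sketch of the front end of a feature-based visual SLAM pipeline of the kind surveyed here, the step below estimates the relative camera motion between two frames with OpenCV. The camera intrinsics K are assumed known, and mapping, loop closure, and back-end optimization are omitted.

```python
# One visual-odometry step: track features, then recover relative pose.
import cv2
import numpy as np

def vo_step(img_prev, img_curr, K):
    """Estimate relative rotation R and (unit-scale) translation t between
    two consecutive grayscale frames, given the 3x3 intrinsic matrix K."""
    pts_prev = cv2.goodFeaturesToTrack(img_prev, maxCorners=1000,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_curr,
                                                   pts_prev, None)
    good = status.ravel() == 1            # keep successfully tracked points
    p0, p1 = pts_prev[good], pts_curr[good]
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t   # monocular translation is recoverable only up to scale
```

The scale ambiguity in the last line is exactly why full visual SLAM systems add mapping, keyframe management, and loop closure on top of this front end.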

Stream-computing of High Accuracy On-board Real-time Cloud Detection for High Resolution Optical Satellite Imagery
Mi WANG, Zhiqi ZHANG, Zhipeng DONG, Shuying JIN, Hongbo SU
2019, 2(2):  50-59.  doi:10.11947/j.JGGS.2019.0206

This paper focuses on time efficiency for machine vision and intelligent photogrammetry, in particular a high-accuracy on-board real-time cloud detection method. As technology develops, data acquisition capability is growing continuously and the volume of raw data is increasing explosively. Meanwhile, because of higher data accuracy requirements, the computation load is also becoming heavier. This situation makes time efficiency extremely important. Moreover, the cloud cover rate of optical satellite imagery reaches approximately 50%, which seriously restricts the applications of on-board intelligent photogrammetry services. To meet on-board cloud detection requirements and provide valid input data for subsequent processing, this paper presents a stream-computing solution for high-accuracy on-board real-time cloud detection that follows the "bottom-up" understanding strategy of machine vision and uses multiple embedded GPUs with significant potential for on-board deployment. Without external memory, the data-parallel pipeline system built from the solution's multiple processing modules supports "stream-in, processing, stream-out" real-time stream computing. In experiments, images from the GF-2 satellite are used to validate the accuracy and performance of the approach, and the results show that the solution not only achieves high cloud detection accuracy but also meets on-board real-time processing requirements.
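A hedged sketch of the "stream-in, processing, stream-out" pattern follows. The queue-and-thread structure is illustrative only; the brightness threshold is a placeholder for the paper's actual cloud detection model, and a real on-board system would run the stage on embedded GPUs rather than a CPU thread.

```python
# Tile-wise streaming pipeline sketch: tiles stream in, a worker stage
# produces cloud masks, results stream out without buffering the scene.
import threading, queue
import numpy as np

tiles_in, masks_out = queue.Queue(maxsize=8), queue.Queue()
STOP = object()                                   # end-of-stream sentinel

def detector():
    while (tile := tiles_in.get()) is not STOP:
        mask = tile > 0.6                         # placeholder cloud test
        masks_out.put(mask)
    masks_out.put(STOP)

worker = threading.Thread(target=detector)
worker.start()
for _ in range(4):                                # stream-in: 4 dummy tiles
    tiles_in.put(np.random.rand(512, 512).astype(np.float32))
tiles_in.put(STOP)
while (m := masks_out.get()) is not STOP:         # stream-out
    print("cloud fraction:", m.mean())
worker.join()
```

The bounded input queue is the point of the design: memory stays constant regardless of scene size, which is what makes the approach feasible without external memory on board.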

Research on 3D Target Pose Tracking and Modeling
Yang SHANG, Xiaoliang SUN, Yueqiang ZHANG, You LI, Qifeng YU
2019, 2(2):  60-69.  doi:10.11947/j.JGGS.2019.0207

This paper tackles pose tracking and model refinement, one of the fundamental tasks of 3D photogrammetry. The research belongs to videometrics, an interdisciplinary field that combines computer vision, digital image processing, photogrammetry, and optical measurement; related works are summarized briefly in this paper. We study the problem of pose tracking for targets with 3D models. For targets with accurate 3D models, line-model-based pose tracking methods are proposed for targets rich in line features, and experimental results indicate that the proposed methods track the target pose accurately. Normal distance iterative reweighted least squares and distance image iterative least squares methods are proposed to handle more general targets. For targets with inaccurate 3D line models, bundle adjustment is adopted to tackle pose tracking in image sequences; the proposed method optimizes the model line parameters and the pose parameters simultaneously. In simulation experiments of satellite pose tracking, the model line orientation and position errors are 0.3° and 3.5 mm, and the mean angle and position errors of the pose are 0.12° and 20.1 mm. Line features are also used to track the pose of targets with unknown 3D models through image sequences, with the model line parameters and pose parameters optimized under a structure-from-motion (SFM) framework. In those simulation experiments, the reconstructed line orientation and position errors are 0.4° and 7.5 mm, and the mean angle and position errors of the pose are 0.16° and 23.5 mm.
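The iterative reweighted least squares (IRLS) principle behind the proposed normal-distance method can be sketched generically as below: robust linear estimation with Huber weights. The paper's actual line-distance residuals and pose parameterization are not reproduced; this only shows the reweighting loop.

```python
# Generic IRLS: solve A x ~ b robustly by down-weighting large residuals.
import numpy as np

def irls(A, b, k=1.345, iters=20):
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary LS start
    for _ in range(iters):
        r = b - A @ x                              # residuals
        s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)           # Huber weight function
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```

In a line-based tracker, the rows of A would come from linearizing the normal distances between projected model lines and image edges with respect to the pose parameters; outlier edges then receive small weights and stop corrupting the pose update.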

Microlens Light Field Imaging Method Based on Bionic Vision and 3-3 Dimensional Information Transforming
Shoujiang ZHAO, Fan LIU, Peng YANG, Hongying ZHAO, Anand ASUNDI, Lei YAN, Haimeng ZHAO
2019, 2(2):  70-78.  doi:10.11947/j.JGGS.2019.0208

This paper takes the 3-3-2 information processing method for the capture of moving objects as its premise and proposes a basic principle of three-dimensional (3D) imaging using a biological compound eye. Traditional bionic vision is limited by the available hardware. Therefore, in this paper, the new-generation technology of the microlens-array light-field camera is proposed as a method for extracting depth information from a single image. A significant characteristic of light-field imaging is that it records both the intensity and the direction of the light entering the camera. Herein, a refocusing method using light-field images is proposed: by calculating the focusing cost at different depths, the imaging plane of the object is determined, and a depth map is constructed from the position of that imaging plane. Compared with traditional light-field depth estimation, the depth map calculated by this method significantly improves resolution and does not depend on the number of light-field microlenses. In addition, considering that software algorithms rely on hardware structure, this study develops imaging hardware only 7 cm long based on the second-generation microlens camera structure, further validating its important refocusing characteristics. It thereby provides a technical foundation for 3D imaging with a single camera.
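A minimal sketch of the refocus-and-score idea follows: shift-and-sum refocusing over candidate depth slopes, with gradient energy as the focusing cost. The view/offset representation and integer shifts are simplifying assumptions, not the paper's calibration or cost function.

```python
# Depth-from-focus on a light field: refocus at each candidate depth
# slope alpha, then pick the slope with the sharpest refocused image.
import numpy as np

def refocus(views, offsets, alpha):
    """views: dict {(u, v): HxW sub-aperture image};
    offsets: dict {(u, v): (du, dv)} angular offsets; alpha: depth slope."""
    acc = np.zeros_like(next(iter(views.values())), dtype=np.float64)
    for uv, img in views.items():
        du, dv = offsets[uv]
        # shift each view proportionally to its angular offset, then sum
        acc += np.roll(np.roll(img, int(round(alpha * du)), axis=0),
                       int(round(alpha * dv)), axis=1)
    return acc / len(views)

def focus_cost(img):
    gy, gx = np.gradient(img)
    return np.mean(gx**2 + gy**2)     # higher gradient energy = sharper

def best_depth(views, offsets, alphas):
    return max(alphas, key=lambda a: focus_cost(refocus(views, offsets, a)))
```

Because the cost is evaluated per pixel region rather than per microlens, a depth map built this way can exceed the microlens-count resolution, which is the property the abstract emphasizes.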

Splitting and Merging Based Multi-model Fitting for Point Cloud Segmentation
Liangpei ZHANG, Yun ZHANG, Zhenzhong CHEN, Peipei XIAO, Bin LUO
2019, 2(2):  78-89.  doi:10.11947/j.JGGS.2019.0209

This paper deals with massive point cloud segmentation on the basis of machine vision, the second essential factor for the intelligent processing of three-dimensional conformation data in digital photogrammetry. A multi-model fitting method is used to segment the point cloud according to its spatial distribution and spatial geometric structure by fitting the points to different geometric primitive models. Because a point cloud usually contains a large number of 3D points, unevenly distributed over various complex structures, this paper proposes a point cloud segmentation method based on multi-model fitting. First, the point cloud is pre-segmented using a clustering method based on density distribution; then fitting and segmentation are carried out using a split-and-merge multi-model fitting method. Different fitting methods are used for planes and arc surfaces, finally achieving segmentation of dense indoor point clouds. The experimental results show that this method achieves automatic segmentation of the point cloud without setting the number of models in advance. Compared with existing point cloud segmentation methods, it has obvious advantages in segmentation quality and time cost and achieves higher segmentation accuracy. After processing with the proposed method, even large-scale point clouds with complex structures can be segmented into 3D geometric elements with finer and more accurate model parameters, giving rise to an accurate 3D conformation.
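The two-stage idea can be sketched as below, with DBSCAN standing in for the density-based pre-segmentation and a least-squares plane fit per cluster. The split-and-merge refinement and the arc-surface fitting are omitted, and the eps/min_samples values are illustrative assumptions.

```python
# Stage 1: density-based clustering. Stage 2: fit a primitive per cluster.
import numpy as np
from sklearn.cluster import DBSCAN

def fit_plane(points):
    """Least-squares plane via SVD; returns (unit normal n, centroid c)
    such that the plane is n . (x - c) = 0."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c                   # smallest singular vector = normal

def segment(cloud, eps=0.05, min_samples=30):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cloud)
    planes = {}
    for lab in set(labels) - {-1}:     # -1 marks DBSCAN noise points
        pts = cloud[labels == lab]
        n, c = fit_plane(pts)
        rms = np.sqrt(np.mean(((pts - c) @ n) ** 2))  # planarity measure
        planes[lab] = (n, c, rms)      # split/merge would refine from here
    return labels, planes
```

A high per-cluster RMS is exactly the signal a split step would act on (the cluster spans more than one primitive), while near-identical fitted planes in adjacent clusters would trigger a merge.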

Satellite Image Matching Method Based on Deep Convolutional Neural Network
Dazhao FAN, Yang DONG, Yongsheng ZHANG
2019, 2(2):  90-100.  doi:10.11947/j.JGGS.2019.0210

This article focuses on the first aspect of the album on deep learning: the deep convolutional method. Traditional matching point extraction algorithms typically use manually designed feature descriptors, with the shortest distance between descriptors as the matching criterion. The matching result can easily fall into a local extremum, causing some matching points to be missed. To address this problem, we introduce a two-channel deep convolutional neural network based on spatial scale convolution, which learns matching patterns between images to realize satellite image matching with a deep convolutional neural network. The experimental results show that, compared with traditional matching methods, the proposed method extracts richer matching points from heterogeneous, multi-temporal, and multi-resolution satellite images. In addition, the accuracy of the final matching results is maintained above 90%.
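A two-channel patch-matching network can be sketched as below: the two candidate patches are stacked as input channels and scored jointly, so the network learns the comparison itself rather than a descriptor plus a distance. This PyTorch sketch uses illustrative layer sizes, not the paper's spatial scale convolution architecture.

```python
# Two-channel matching CNN: stack both patches as channels, output a
# match / non-match logit for the pair.
import torch
import torch.nn as nn

class TwoChannelMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(           # input: (N, 2, 64, 64)
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(128, 1)           # match / non-match logit

    def forward(self, patch_a, patch_b):
        x = torch.cat([patch_a, patch_b], dim=1)  # stack as 2 channels
        return self.score(self.features(x).flatten(1))

# Usage: score a batch of 4 candidate grayscale patch pairs.
logits = TwoChannelMatcher()(torch.rand(4, 1, 64, 64),
                             torch.rand(4, 1, 64, 64))
```

Stacking the patches lets early convolutions compare the two images directly, which is what helps across heterogeneous and multi-temporal imagery where hand-crafted descriptors diverge.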

Salient Object Detection from Multi-spectral Remote Sensing Images with Deep Residual Network
Yuchao DAI, Jing ZHANG, Mingyi HE, Fatih PORIKLI, Bowen LIU
2019, 2(2):  101-110.  doi:10.11947/j.JGGS.2019.0211

Salient object detection aims at identifying the visually interesting object regions that are consistent with human perception. Multispectral remote sensing images provide rich radiometric information revealing the physical properties of the observed objects, and thus hold great potential for salient object detection in remote sensing. Conventional salient object detection methods often employ handcrafted features to predict saliency by evaluating pixel-wise or superpixel-wise contrast. With the recent adoption of deep learning frameworks, in particular fully convolutional neural networks, there has been profound progress in visual saliency detection. However, this success has not been extended to multispectral remote sensing images, and existing multispectral salient object detection methods are still mainly based on handcrafted features, essentially due to the difficulties of image acquisition and labeling. In this paper, we propose a novel deep residual network based on a top-down model, trained in an end-to-end manner to tackle the above issues in multispectral salient object detection. Our model effectively exploits the saliency cues at different levels of the deep residual network. To overcome the limited availability of remote sensing images for training our deep residual network, we also introduce a new spectral image reconstruction model that can generate multispectral images from RGB images. Extensive experimental results on both multispectral and RGB salient object detection datasets demonstrate a significant performance improvement of more than 10% over state-of-the-art methods.
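The residual building block underlying such a network can be sketched as follows. The channel counts, depth, and stem are illustrative assumptions rather than the paper's architecture; the multispectral stem merely shows how extra spectral bands enter a network otherwise built for RGB.

```python
# Residual block with identity skip, plus a toy multispectral saliency head.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))   # identity skip connection

# A multispectral input (e.g. 8 bands) only changes the stem convolution:
stem = nn.Conv2d(8, 64, 3, padding=1)        # 8 spectral bands -> 64 ch.
saliency = nn.Sequential(stem, ResidualBlock(64), ResidualBlock(64),
                         nn.Conv2d(64, 1, 1), nn.Sigmoid())
out = saliency(torch.rand(1, 8, 128, 128))   # per-pixel saliency map
```

The skip connections are what let saliency cues from different depths be combined without degrading gradients, and the band-count dependence being confined to the stem is why an RGB-to-multispectral reconstruction model can usefully augment scarce multispectral training data.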