This paper is devoted to the development and testing of optimal procedures for retrieving biophysical crop variables by exploiting the spectral information of the current multispectral optical satellites Sentinel-2 and Venus, and in view of the advent of new Sino-EU hyperspectral satellites (e.g., PRISMA, EnMAP, and GF-5). Two analyses were carried out for the estimation of the biophysical crop variables leaf area index (LAI) and leaf chlorophyll content (Cab): a comparison of non-kernel-based and kernel-based Machine Learning Regression Algorithms (MLRA), and a comparison of Sentinel-2 and Venus data over the durum wheat-growing season. Results show that for Sentinel-2 data, Gaussian Process Regression (GPR) was the best performing algorithm for both LAI (R2=0.89, RMSE=0.59) and Cab (R2=0.70, RMSE=8.31), whereas for PRISMA simulated data, Kernel Ridge Regression (KRR) outperformed all other MLRA for both LAI (R2=0.91, RMSE=0.51) and Cab (R2=0.83, RMSE=6.09). Results of the Sentinel-2 and Venus comparison for the durum wheat-growing season were consistent with ground truth data and also confirm that the SWIR bands, which are used as tie-points in the PROSAIL inversion, are extremely useful for accurate retrieval of crop biophysical parameters.
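As a sketch of how a kernel-based MLRA of the kind compared here works, the following pure-Python kernel ridge regression solves the dual problem alpha = (K + lambda*I)^-1 y with an RBF kernel. The one-dimensional inputs, kernel width and regularization strength are illustrative assumptions, not the settings or data used in the paper.

```python
import math

def rbf(a, b, gamma=1.0):
    # Radial basis function (Gaussian) kernel on scalar inputs.
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-3, gamma=1.0):
    # Dual solution of kernel ridge regression: alpha = (K + lam*I)^-1 y.
    K = [[rbf(xi, xj, gamma) + (lam if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    return solve(K, ys)

def krr_predict(xs, alpha, x, gamma=1.0):
    # A prediction is a kernel-weighted sum over the training inputs.
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, xs))

# Toy example: a smooth "biophysical variable" as a function of one band value.
train_x = [0.0, 0.5, 1.0, 1.5, 2.0]
train_y = [math.sin(x) for x in train_x]
alpha = krr_fit(train_x, train_y)
```

In practice the inputs would be multi-band reflectance vectors rather than scalars, and the kernel width and regularizer would be tuned by cross-validation.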
Deep learning has become a popular, mainstream technology in many research fields related to learning, and has shown its impact on photogrammetry. According to the definition of photogrammetry, i.e., a discipline that studies the shapes, locations, sizes, characteristics and inter-relationships of real objects from optical images, photogrammetry concerns two aspects: geometry and semantics. From these two aspects, we review the history of deep learning, discuss its current applications in photogrammetry, and forecast the future development of the field. In geometry, deep convolutional neural networks (CNNs) have been widely applied in stereo matching, SLAM and 3D reconstruction, and have achieved promising results, though further improvement is needed. In semantics, conventional methods that rely on empirical, handcrafted features have failed to extract semantic information accurately and to produce "semantic thematic maps" comparable to the 4D products (DEM, DOM, DLG, DRG) of photogrammetry. This has caused the semantic part of photogrammetry to be neglected for a long time. The powerful generalization capacity of deep learning, its ability to fit arbitrary functions, and its stability across diverse situations are making the automatic production of thematic maps possible. We review the achievements obtained in road network extraction, building detection and crop classification, etc., and forecast that producing high-accuracy semantic thematic maps directly from optical images will become reality, and that such maps will become a standard product of photogrammetry. Finally, we introduce our two current research efforts related to geometry and semantics, respectively: stereo matching of aerial images based on deep learning and transfer learning, and precise crop classification from satellite spatio-temporal images based on 3D CNNs.
The aim of this paper is to offer a statistically sound method for precisely accounting for the speed of land degradation and regeneration processes. Most common analyses of land degradation focus instead on the extent of degraded areas rather than on the intensity of degradation processes. The study was implemented for the Potential Extent of Desertification in China (PEDC), composed of arid, semi-arid, and dry sub-humid regions, and refers to the period 2002 to 2012. The metrics were standardized partial regression coefficients from stepwise regressions, fitted using net primary productivity as the dependent variable, and year number and aridity as predictors. The results indicate that: ① the extent of degrading lands (292,896 km2 or 9.12% of the PEDC) exceeds the area that is recovering (194,560 km2 or 6.06% of the PEDC); and ② the intensity of degrading trends is lower than that of increasing trends in three land cover types (grassland, desert, and crops) and in two aridity levels (semi-arid and dry sub-humid). Such an outcome might help target restoration policies by the Chinese government, and documents a possible case of hysteresis.
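The metric used here, a standardized partial regression coefficient, can be sketched as follows: standardize the dependent variable (NPP) and both predictors (year, aridity) to z-scores, then solve the two-predictor least-squares normal equations. The synthetic numbers below are illustrative only, not data from the study.

```python
from statistics import mean, stdev

def zscore(xs):
    # Convert a series to standard scores so coefficients are comparable.
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def std_partial_coeffs(y, x1, x2):
    # Standardized partial regression coefficients of y on x1 and x2,
    # via the 2x2 least-squares normal equations.
    y, x1, x2 = zscore(y), zscore(x1), zscore(x2)
    s11 = sum(a * a for a in x1); s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

# Synthetic example: NPP with a positive trend over 2002-2012 plus an aridity effect.
years = list(range(2002, 2013))
aridity = [0.30, 0.50, 0.40, 0.60, 0.35, 0.55, 0.45, 0.50, 0.40, 0.60, 0.52]
npp = [0.02 * (y - 2002) + 0.5 * a for y, a in zip(years, aridity)]
b_year, b_aridity = std_partial_coeffs(npp, years, aridity)
```

A positive standardized coefficient on year indicates recovery, a negative one degradation, and its magnitude (not the area affected) measures the intensity of the trend.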
Taking autonomous driving and driverless vehicles as the research object, we discuss and define the intelligent high-precision map. The intelligent high-precision map is considered a key link in future travel, a carrier for real-time perception of traffic resources over the entire space-time range, and the criterion for the operation and control of the whole driving process. As a new form of map, it has distinctive features in terms of cartographic theory and application requirements compared with traditional navigation electronic maps. Thus, it is necessary to analyze and discuss its key features and problems to promote research on and application of the intelligent high-precision map. Accordingly, we propose an information transmission model based on cartographic theory, combined with the wheeled robot's control flow in practical applications. Next, we put forward the data logic structure of the intelligent high-precision map and analyze its application in autonomous driving. Then, we summarize the computing mode of "Crowdsourcing + Edge-Cloud Collaborative Computing" and carry out a key technical analysis of how to improve the quality of crowdsourced data. We also analyze effective application scenarios of the intelligent high-precision map in the future. Finally, we present some thoughts and suggestions for the future development of this field.
Integrity is an important index for GNSS-based navigation and positioning, and the receiver autonomous integrity monitoring (RAIM) algorithm has been presented for integrity applications. In integrated navigation systems combining a global navigation satellite system (GNSS) and an inertial navigation system (INS), the conventional RAIM algorithm has been extended to extended receiver autonomous integrity monitoring (ERAIM). However, the ERAIM algorithm may fail and a false alarm may be generated once the measurements are contaminated by significant outliers, a problem rarely discussed in the existing literature. In this paper, a robust fault detection method and the corresponding data processing algorithm are proposed based on the ERAIM algorithm and robust estimation. In the proposed algorithm, the weights of the measurements are adjusted with an equivalent weight function, improving the efficiency of outlier detection and identification; therefore, the estimates become more reliable and the probability of false alarms is decreased. Experiments with data collected under actual environments were implemented, and the results indicate that the proposed algorithm is more efficient than the conventional ERAIM algorithm for multiple outliers, and a better filtering performance is achieved.
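The abstract does not specify which equivalent weight function is used; a common choice in robust estimation is the Huber-type function sketched below, which keeps the weight of small standardized residuals at 1 and down-weights larger ones, so suspected outliers contribute less to the solution. Treat this as an assumed stand-in, not the paper's exact scheme.

```python
def equivalent_weight(v, k=1.5):
    # Huber-type equivalent weight for a standardized residual v:
    # full weight inside the threshold k, reduced weight k/|v| outside.
    av = abs(v)
    return 1.0 if av <= k else k / av

# Reweighting a set of standardized residuals: the 8.0 outlier is
# strongly down-weighted, while well-behaved measurements keep weight 1.
residuals = [0.2, -0.8, 1.2, 8.0]
weights = [equivalent_weight(v) for v in residuals]
```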
The map is one of the means of communication created by human beings. Cartographers have long made efforts to compare maps to natural languages so as to establish a "cartographic language" or "map language". One such effort is to adopt Shannon's information theory, which originated in digital communication, into cartography so as to establish an entropy-based cartographic communication theory. However, success has been very limited although research work started as early as the mid-1960s. It was then found that the bottleneck was the lack of appropriate measures for the spatial (configurational) information of (graphic and image) maps, as the classic Shannon entropy is only capable of characterizing statistical information but fails to capture the configurational information of (graphic and image) maps. Fortunately, after over 40 years of development, some bottleneck problems have been solved. More precisely, generalized Shannon entropies for the metric and thematic information of (graphic) maps have been developed, and the first feasible solution for computing the Boltzmann entropy of image maps has been invented, which is capable of measuring the spatial information of not only numerical images but also categorical maps. With such progress, it is now feasible to build the "Information Theory of Cartography". In this paper, a framework for such a theory is proposed and some key issues are identified. Some of these issues have already been tackled while others still need effort; as a result, a research agenda is set for future action. After all these issues are tackled, the theory will mature into a theoretical basis of cartography. It is expected that the Information Theory of Cartography will play an increasingly important role in the discipline of cartography, because more and more researchers have advocated that information is more fundamental than matter and energy.
The concept of resilient positioning, navigation and timing (PNT) is described. The definition of resilient PNT is given, the relationship between integrated (or comprehensive) PNT and resilient PNT is analyzed, and it is pointed out that integrated PNT is the foundation of resilient PNT. Resilient PNT can be divided into resilient sensor integration, a resilient functional model and a resilient stochastic model. The strategy and principles of the resilient integration of sensors are discussed: it should be designed following the principles of optimality, availability, compatibility and interoperability. The concept of the resilient functional model and possible modification strategies for the different functional models are also described, and several possible routes for improving resilient stochastic models are set forth. It is pointed out that optimal improvements of stochastic models for multiple PNT sources should follow the same variance scale. Finally, resilient PNT data fusion for the state parameters is given based on the resilient functional and stochastic models.
Volume is a basic parameter in the morphological analysis of spatial objects; however, calculating the volume of irregular objects is challenging. The point cloud slicing method proposed in this study effectively calculates the volume of a spatial object from the point cloud obtained through three-dimensional laser scanning (3DLS). In this method, the point cloud is first cut into a sequence of uniformly spaced slices along a specified direction, yielding a series of discrete point cloud slices. The outline boundary polygon of each slice is then extracted in slicing order, and the area of each polygon is calculated. Finally, the volume of each point cloud section is computed from the slice areas and the adjacent slice gap, and the total volume of the scanned object is obtained by summing the individual section volumes. According to the results and analysis of the calculated examples, the slice-based volume calculation method for point clouds of irregular objects obtained through 3DLS is correct, concise in process, reliable in results, efficient in calculation, and controllable in accuracy. The method is thus a good solution to the volume calculation of irregular objects.
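The steps above can be sketched in a few lines: bin the points into z-slices, take the area of each slice's boundary polygon, and integrate the areas over the slice gaps. The convex hull used below is an illustrative simplification; the paper's boundary search can also follow concave outlines.

```python
from collections import defaultdict

def hull_area(pts):
    # Andrew's monotone chain convex hull of 2D points, then shoelace area.
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    n = len(hull)
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                         - hull[(i + 1) % n][0] * hull[i][1] for i in range(n)))

def slice_volume(points, dz):
    # Group points into z-slices of thickness dz, take each slice's
    # boundary area, and integrate slice areas with the trapezoidal rule.
    slices = defaultdict(list)
    for x, y, z in points:
        slices[round(z / dz)].append((x, y))
    keys = sorted(slices)
    areas = [hull_area(slices[k]) for k in keys]
    return sum(0.5 * (areas[i] + areas[i + 1]) * (keys[i + 1] - keys[i]) * dz
               for i in range(len(keys) - 1))

# Sanity check: a unit cube sampled on a regular grid has volume 1.
xs = [i / 4 for i in range(5)]
cube = [(x, y, z) for x in xs for y in xs for z in xs]
volume = slice_volume(cube, dz=0.25)
```

As the abstract notes, accuracy is controllable: a smaller slice gap dz tracks the object's shape variation along the slicing direction more closely, at higher computational cost.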
The tremendous development of Synthetic Aperture Radar (SAR) missions in recent years facilitates the study of smaller-amplitude ground deformation over greater spatial scales using longer time series. However, the wider coverage of SAR imagery poses greater challenges than ever for correcting atmospheric effects. Previous attempts have used observations from the Global Positioning System (GPS) and Numerical Weather Models (NWMs) to separate atmospheric delays, but they are limited by ① the availability (and distribution) of GPS stations; ② the low spatial resolution of NWMs; and ③ the difficulty of quantifying their performance. To overcome these limitations, we have developed the Generic Atmospheric Correction Online Service for InSAR (GACOS), which utilizes high-resolution European Centre for Medium-Range Weather Forecasts (ECMWF) products through an Iterative Tropospheric Decomposition (ITD) model. This reduces the coupling effects of tropospheric turbulence and stratification and hence achieves equivalent performance over flat and mountainous terrains. GACOS has a range of notable features: ① global coverage; ② all-weather, all-time usability; ③ availability with a maximum two-day latency; and ④ indicators to assess the model's performance and feasibility. In this paper, we demonstrate some successful applications of the GACOS online service to a variety of geophysical studies.
Weighted mean temperature (Tm) is a critical parameter in Global Navigation Satellite System (GNSS) technology for retrieving precipitable water vapor (PWV). It is convenient to obtain high-precision Tm estimates near the surface using the Bevis formula and surface temperature. However, some studies have pointed out that the Bevis formula has large uncertainties in high-altitude regions. We investigate the applicability of the Bevis formula at different height levels and find that it is relatively precise at low altitudes, while its precision gradually decreases with increasing altitude. To solve this problem, we analyze the relationship between Tm and atmospheric temperature within near-Earth space (heights of 0~10 km) and find a high correlation on a global scale. Accordingly, we build a global weighted mean temperature model based on near-Earth atmospheric temperature. Validation results show that the model can provide high-precision Tm estimates at any height level within near-Earth space.
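For reference, the surface-temperature form of the Bevis formula discussed here, together with the standard conversion factor from zenith wet delay to PWV, can be written as below. The refractivity constants are commonly used literature values and SI (pascal) units are assumed; this is background context, not the new model proposed in the paper.

```python
RV = 461.5      # specific gas constant of water vapour, J/(kg*K)
RHO_W = 1000.0  # density of liquid water, kg/m^3
K2P = 0.221     # refractivity constant k2', K/Pa
K3 = 3739.0     # refractivity constant k3, K^2/Pa

def bevis_tm(ts):
    # Bevis et al. (1992) mid-latitude fit: Tm ~ 70.2 + 0.72 * Ts (kelvin).
    return 70.2 + 0.72 * ts

def pwv_factor(tm):
    # Dimensionless factor Pi with PWV = Pi * ZWD (zenith wet delay).
    return 1e6 / (RHO_W * RV * (K3 / tm + K2P))

tm = bevis_tm(288.15)       # Tm for a 15 degC surface temperature
pi_factor = pwv_factor(tm)  # typically around 0.15
```

Because Pi depends on Tm, a Tm bias at high altitude propagates directly into the retrieved PWV, which is why the paper replaces the surface-temperature fit with a height-dependent model.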
Laser scanning systems based on Simultaneous Localization and Mapping (SLAM) technology have the advantages of low cost, high precision and high efficiency, and have drawn wide attention in the field of surveying and mapping in recent years. Although real-time data acquisition can be achieved using SLAM technology, the precision of the data cannot be ensured, and inconsistencies exist in the acquired point cloud. To improve the precision of the point cloud obtained by this kind of system, this paper presents a hierarchical global point cloud optimization algorithm. Firstly, the "point-to-plane" iterative closest point (ICP) algorithm is used to match overlapping point clouds and form constraints between the trajectories of the scanning system. Then a pose graph is constructed to optimize the trajectory. Finally, the optimized trajectory is used to refine the point cloud. Computational efficiency is improved by decomposing the optimization into two levels, local and global. The experimental results show that the RMSE of the distance between corresponding points in overlapping areas is reduced by about 50% after optimization, and the internal inconsistency is effectively eliminated.
Interferometric Synthetic Aperture Radar (InSAR) provides unique capabilities to map regional and global topography and deformation of the Earth's surface, and has led to a broad spectrum of deformation monitoring applications. To adapt to various challenging monitoring environments, researchers have made tremendous innovations to deal with issues such as atmospheric and ionospheric effects, loss of coherence due to large displacements, geometric distortions, and unwrapping errors. Owing to recent technical and methodological advances, the Earth's surface deformation, ranging from earthquake ruptures, volcanic eruptions, landslides and glaciers to groundwater storage variations, mining subsidence and infrastructure instability, can now be mapped anywhere in the world at high spatial and temporal resolutions. This special issue received a set of contributions highlighting recent advances in methodologies and applications of InSAR to ground deformation monitoring. We aim to present overviews of both the state of the art of SAR/InSAR techniques and the next generation of applications across the broad range of deformation monitoring applications.
Target detection in synthetic aperture radar (SAR) images is widely used in military reconnaissance and surveillance. Traditional SAR image target detection methods require substantial empirical knowledge because the characteristics of SAR images change greatly under different configurations (attitude, pitch angle, imaging parameters, etc.), resulting in high generalization error. Recently, deep learning has achieved great success in the field of image processing. Research shows that deep learning can achieve a more intrinsic description of the data, while the model has stronger modeling and generalization abilities. To address the problem of insufficient data in SAR data sets, an experimental system for acquiring SAR image data in real scenes was built. Then transfer learning and an improved convolutional neural network algorithm (PCA + Faster R-CNN) are applied to improve target detection precision. Finally, experimental results demonstrate the effectiveness of the proposed method.
In this paper, studies on offshore wind farm wakes observed by spaceborne synthetic aperture radar (SAR) are reviewed, mainly based on our previous research. In particular, we focus on wind wakes and tidal current wakes observed at high spatial resolution by the spaceborne SAR sensors TerraSAR-X, Gaofen-3 and Radarsat-2 at two offshore wind farms, i.e., Alpha Ventus in the North Sea and the farm near Donghai Bridge in the East China Sea. Representative examples of wind wakes and tidal current wakes observed by SAR at the two farms are presented and compared. A preliminary statistical analysis of the morphology of wind wakes downstream of Alpha Ventus is presented as well. Besides these studies of wakes generated by a single offshore wind farm, we show an example of wakes downstream of multiple wind farms in the North Sea to demonstrate the "cluster" effect of multiple offshore wind farms on sea wind.
Deep convolutional neural networks have recently made great progress in semantic segmentation. However, because of their fixed convolution kernel geometry, standard convolutional neural networks are limited in their ability to model geometric transformations. Therefore, deformable convolution is introduced to enhance the adaptability of convolutional networks to spatial transformations. Moreover, owing to the pooling layers in the network architecture, deep convolutional neural networks cannot adequately segment local objects at the output layer. To overcome this shortcoming, the rough segmentation predictions of the output layer are refined by fully connected conditional random fields to improve segmentation quality. The proposed method can easily be trained end-to-end using standard backpropagation. Finally, the proposed method is tested on the ISPRS dataset. The results show that it can effectively overcome the influence of complex object structure and obtains state-of-the-art accuracy on the ISPRS Vaihingen 2D semantic labeling dataset.
The rate of total electron content (TEC) change index (ROTI) can be regarded as an effective indicator of the level of ionospheric scintillation, particularly at low and high latitudes. Accurate prediction of the ROTI is essential to reduce the impact of ionospheric scintillation on Earth observation systems such as global navigation satellite systems. However, it is difficult to predict the ROTI with high accuracy because of the complexity of the ionosphere. In this study, advanced machine learning methods are investigated for ROTI prediction over a high-latitude station in Canada. These methods predict the ROTI in the next 5 minutes using data from the past 15 minutes at the same location. Experimental results show that the bidirectional gated recurrent unit network (BGRU) outperforms the other six approaches tested in the research. The RMSEs of the ROTI predicted by the BGRU method in all four seasons of 2017 are less than 0.05 TECU/min, and the method exhibits a high level of robustness in dealing with abrupt solar activity.
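The ROTI quantity being predicted is itself straightforward to compute from a TEC series: it is the standard deviation of the rate of TEC change (ROT) over a 5-minute window. A minimal sketch, assuming 30-second TEC sampling (the paper's exact sampling and windowing may differ):

```python
import statistics

def rot(tec, dt_min=0.5):
    # Rate of TEC change in TECU/min (0.5 min = 30 s sampling assumed).
    return [(tec[i + 1] - tec[i]) / dt_min for i in range(len(tec) - 1)]

def roti(tec, window=10, dt_min=0.5):
    # ROTI: standard deviation of ROT over a sliding 5-minute window
    # (10 ROT samples at 30 s sampling).
    r = rot(tec, dt_min)
    return [statistics.pstdev(r[i:i + window]) for i in range(len(r) - window + 1)]

# A smoothly drifting TEC series gives constant ROT, hence ROTI near 0;
# a sudden 1 TECU jump shows up clearly in the ROTI series.
quiet_tec = [10.0 + 0.1 * i for i in range(30)]
quiet_roti = roti(quiet_tec)
disturbed_tec = list(quiet_tec)
disturbed_tec[15] += 1.0
disturbed_roti = roti(disturbed_tec)
```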
China is affected by some of the world's most serious geological disasters and experiences heavy economic damage every year. Geohazards occur not only in remote areas but also in highly populated cities. Within the framework of the Dragon-4 32365 Project, this paper presents the main results and major conclusions derived from an extensive exploitation of Sentinel-1, ALOS-2 (Advanced Land Observing Satellite 2), GF-3 (GaoFen Satellite 3), and other recently launched SAR (Synthetic Aperture Radar) sensors, together with methods that allow the evaluation of their importance for various geohazards. In the scope of this project, the great benefits of recent remote sensing data (wide spatial and temporal coverage), which allow a detailed reconstruction of past displacement events and the monitoring of currently occurring phenomena, are exploited to study different areas and geohazard problems, including: surface deformation of mountain slopes; identification and monitoring of ground movements and subsidence; landslides; ground fissures; and building inclination. Suspicious movements detected in the different study areas were cross-validated with different SAR sensors and ground truth data.
Remote sensing provides key inputs to a wide range of models and methods developed for quantifying forest carbon. In particular, the carbon inventory methods recommended by the IPCC require biomass data and a suite of forest disturbance products. Significant progress has been made in deriving these products by leveraging publicly available remote sensing assets, including observations acquired by the long-running Landsat mission and by systems launched within the past decade, including Sentinel-2, Sentinel-1, GEDI, and ICESat-2. With the L-band NISAR and P-band BIOMASS missions to be launched in 2023, the Earth's land surfaces will be imaged by optical and multi-band (C-, L-, and P-band) radar systems that can provide global, sub-weekly observations at sub-hectare spatial resolutions for public use. Fine-scale products derived from these observations will be crucial for developing the monitoring, reporting, and verification (MRV) capabilities needed to support carbon trading, REDD+, and other market-driven tools aimed at achieving climate mitigation goals through forest management at all levels. Following a brief discussion of the roles of forests in the global carbon cycle and the wide range of models and methods available for evaluating forest carbon dynamics, this paper provides an overview of recent progress and forthcoming opportunities in using remote sensing to map forest structure and biomass, detect forest disturbances, determine disturbance attribution, quantify disturbance intensity, and estimate harvested timber volume. Advances in these research areas require large quantities of well-distributed reference data to calibrate remote sensing algorithms and to validate the derived products. In addition, two of the forest carbon pools, dead organic matter and soil carbon, are difficult to monitor using modern remote sensing capabilities.
Carefully designed inventory programs are needed to collect the required reference data as well as the data needed to estimate dead organic matter and soil carbon.
To achieve high short-term prediction accuracy for ionospheric TEC, we first transform a seasonal time series of ionospheric Total Electron Content (TEC) into a stationary time series by seasonal and regular differencing, with full consideration of the multiplicative seasonal model. Next, we use the Autoregressive Integrated Moving Average (ARIMA) model from time series analysis theory to model the stationary TEC values and predict the TEC series. Using TEC data from 2008 to 2012 provided by the Center for Orbit Determination in Europe (CODE) as sample data, we analyzed the precision of this method for predicting ionospheric TEC values from high to low latitudes during both quiet and active ionospheric periods. The effect of the TEC sample length on prediction accuracy is analyzed as well. Results from numerical experiments show that during ionospherically quiet periods the average relative prediction accuracy over a six-day span reaches 83.3%, with average prediction residuals of about 0.18±1.9 TECu, while during active periods it is 86.6%, with an average prediction residual of about 0.69±2.6 TECu. During quiet periods, over 90% of the prediction residuals are within ±3 TECu, while during active periods only about 81% are. Both periods show that the higher the latitude, the higher the absolute precision and the lower the relative prediction accuracy. In addition, the results show that prediction accuracy improves as the TEC sample sequence lengthens, but gradually declines once the length exceeds the optimal value of about 30 days. On the other hand, with the same TEC sample, the predictive accuracy decreases as the number of predicted days increases; although the decrease is not apparent at the beginning, it becomes significant after 30 days.
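The stationarization step described above, one seasonal difference followed by one regular difference, can be illustrated on a toy series built from a linear trend plus a 12-sample seasonal cycle; after both differences the series is numerically zero, i.e., stationary. The period, trend and amplitude values are illustrative assumptions, not the CODE TEC data.

```python
import math

def difference(series, lag=1):
    # x'_t = x_t - x_{t-lag}: lag=1 removes a linear trend,
    # lag equal to the season length removes the seasonal cycle.
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# Toy TEC-like series: linear trend plus a 12-sample seasonal cycle.
tec = [10 + 0.05 * t + 3 * math.sin(2 * math.pi * t / 12) for t in range(120)]
stationary = difference(difference(tec, lag=12), lag=1)
```

In multiplicative seasonal ARIMA notation this corresponds to d=1 and D=1 with season length s=12; an ARIMA model is then fitted to the differenced (stationary) series.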
This paper presents a geometric calibration method based on sparse ground control points (GCPs) for linear push-broom optical satellites. The method achieves optimal estimates of the internal and external parameters with two overlapping images along the charge-coupled device (CCD) and sparse GCPs in the image region, thereby removing the dependence on expensive calibration site data. With the calibrated parameters, the line of sight (LOS) of all CCD detectors can be recovered. The paper first establishes the rigorous imaging model of a linear push-broom optical satellite based on its imaging mechanism. The calibration model is then constructed by improving the internal sensor model with a viewing-angle model, after an analysis of the systematic errors in the imaging model. A step-wise solution is applied to obtain the optimal estimates of the external and internal parameters. Finally, we conduct a set of experiments on the ZY-3 NAD camera and verify the accuracy and effectiveness of the presented method by comparison.
The solid Earth responds elastically to terrestrial water storage (TWS) changes. Here, GPS vertical position data at 31 stations of the Crustal Movement Observation Network of China (CMONOC) from August 2010 to December 2016 are used to detect droughts in Southwest China. Monthly GPS vertical displacements respond negatively to precipitation changes, to TWS changes observed by the Gravity Recovery and Climate Experiment (GRACE), and to river water level variations. GPS vertical position anomalies (the non-seasonal term) correlate well negatively (correlations of about -0.70) with the meteorological composite index (CI) commonly used in China and with the GRACE drought severity index (GRACE-DSI), but less well with the standardized precipitation evapotranspiration index (SPEI). Compared to CI, GPS vertical position anomalies have the advantage of quickly detecting droughts caused by abrupt precipitation deficits. GRACE-DSI is less accurate in drought monitoring for some periods due to missing data, while SPEI, with its large variability, can in some cases overestimate the severity of abrupt precipitation deficits. This study shows the reliability and advantages of GPS data in drought monitoring.
This article focuses on the first aspect of this special issue on deep learning: deep convolutional methods. Traditional matching point extraction algorithms typically use manually designed feature descriptors, with the shortest distance between them as the matching criterion. The matching result can easily fall into a local extremum, which causes some matching points to be missed. To address this problem, we introduce a two-channel deep convolutional neural network based on spatial-scale convolution, which learns matching patterns between images to realize satellite image matching. The experimental results show that, compared with traditional matching methods, the method extracts richer matching points for heterogeneous, multi-temporal and multi-resolution satellite images. In addition, the accuracy of the final matching results remains above 90%.
According to the characteristics of road features, an encoder-decoder deep semantic segmentation network is designed for road extraction from remote sensing images. Firstly, as road targets are rich in local detail but simple in semantic features, an encoder-decoder network with shallow layers and high resolution is designed to improve the representation of detail. Secondly, as roads occupy only a small proportion of a remote sensing image, the cross-entropy loss function is improved to alleviate the imbalance between positive and negative samples during training. Experiments on large road extraction datasets show that the proposed method achieves a recall of 83.9%, a precision of 82.5% and an F1-score of 82.9%, and can extract road targets from remote sensing images completely and accurately. The encoder-decoder network designed in this paper performs well in the road extraction task and requires little manual intervention, so it has good application prospects.
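One common way to improve the cross-entropy loss against foreground-background imbalance, sketched here as an assumption since the abstract does not give the paper's exact weighting, is to put a class weight on the rare positive (road) pixels:

```python
import math

def weighted_bce(y_true, y_pred, w_pos=1.0, eps=1e-7):
    # Binary cross-entropy with a positive-class weight w_pos:
    # w_pos > 1 makes errors on rare road pixels cost more than
    # errors on the abundant background pixels.
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(w_pos * t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Imbalanced toy batch: one road pixel, three background pixels.
y_true = [1, 0, 0, 0]
y_pred = [0.3, 0.1, 0.1, 0.1]
plain = weighted_bce(y_true, y_pred, w_pos=1.0)
weighted = weighted_bce(y_true, y_pred, w_pos=3.0)
```

With w_pos > 1, a poorly predicted road pixel raises the loss more than in the unweighted case, pushing the optimizer to recover thin road structures instead of defaulting to background.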
The atmospheric carbon dioxide (CO2) concentration increased to more than 405 parts per million (ppm) in 2017 due to human activities such as deforestation, land-use change and the burning of fossil fuels. Although there is broad scientific consensus on the damaging consequences of the climate change associated with increasing greenhouse gas concentrations, fossil CO2 emissions have continued to increase in recent years, mainly from rapidly developing economies; China is now the largest emitter of CO2, generating about 30% of all emissions globally. To allow more reliable forecasts of the future state of the carbon cycle and to support efforts to mitigate greenhouse gas emissions, a better understanding of the global and regional carbon budget is needed. Space-based measurements of CO2 can provide observations with dense coverage and sampling, yielding improved constraints on carbon fluxes and emissions. The Chinese Global Carbon Dioxide Monitoring Scientific Experimental Satellite (TanSat) was established by the National High Technology Research and Development Program of China with the main objective of monitoring atmospheric CO2 and CO2 fluxes at regional and global scales. TanSat was successfully launched in December 2016, and as part of the Dragon programme of ESA and the Ministry of Science and Technology (MOST), a team of researchers from Europe (UK and Finland) and China has evaluated early TanSat data and contrasted it against data from the GOSAT mission and models. In this manuscript, we report on retrieval intercomparisons of TanSat data using two different retrieval algorithms, on validation efforts for the East Asia region using GOSAT CO2 data, and on first assessments of TanSat and GOSAT CO2 data against model calculations using the GEOS-Chem model.
Observation and modeling of the coupled energy and water balance are key to understanding hydrospheric and cryospheric processes at high elevation. This paper summarizes progress on this aspect in relation to different earth system elements, from glaciers to wetlands. The energy budget of two glaciers, i.e. Xiao Dongkemadi and Parlung No. 4, was studied by means of extended field measurements, and a distributed model of the coupled energy and mass balance was developed and evaluated. The need for accurate characterization of surface albedo was further documented for the entire Qinghai-Tibet Plateau by numerical experiments with the Weather Research and Forecasting (WRF) model on the sensitivity of the atmospheric boundary layer to the parameterization of land surface processes. A new approach to the calibration of a coupled distributed watershed model of the energy and water balance was demonstrated in a case study on the Heihe River Basin in northwestern China. The assimilation of land surface temperature led to the retrieval of critical soil and vegetation properties such as soil permeability and canopy resistance to the exchange of vapour and carbon dioxide. Retrievals of actual evapotranspiration (ET) were generated by the ETMonitor system and evaluated against eddy covariance measurements at sites spread throughout Asia. As regards glacier response to climate variability, the combined findings based on satellite data and model experiments showed that the spatial variability of surface albedo and temperature is significant and controls both glacier mass balance and flow. Experiments with both atmospheric and hydrosphere-cryosphere models documented the need for, and advantages of, accurate retrievals of land surface albedo to capture land-atmosphere interactions at high elevation.
Cartography and maps support the continuously rising awareness of the power of spatial data, which in turn lays a foundation for the popularity of various location-based services and applications in society. Cartography and Geographic Information System (GIS) education has been a core activity in the cartographic academic community for knowledge creation and transfer in higher education institutions. Maps in primary and high schools play a unique role across disciplines in building the spatial thinking capacities of young generations. Over the years, educators have trained students via lectures and lab work, into which digital technologies have gradually been incorporated. The COVID-19 pandemic has accelerated the adoption of digital technologies in online teaching and learning. Teachers have, passively or proactively, adapted to conducting their teaching online and to redesigning their lectures and their assessments of students' performance. On the other side, students are getting used to online learning even more quickly, using various digital devices in an interactive and collective way. This creates opportunities for cartographic and GIS educators to build a body of knowledge for cartography that can be used to construct open-source educational resources systematically. Furthermore, flexible curricula can be designed and implemented for professional and continuing education and training at various levels. Future education in cartography and GIS can improve map literacy and make education sustainable.
Wetlands are among the most productive and essential ecosystems on earth, but they are also highly sensitive and vulnerable to climate change and human disturbance. One of the current scientific challenges is to integrate high-resolution remote sensing data of wetlands with wildlife movements, a task we achieve here for dynamic waterbird movements. We demonstrate that White-naped cranes (Antigone vipio) wintering at the Poyang Lake wetlands in southeastern China mainly used habitats created by dramatic hydrological variations, i.e. seasonal water-level fluctuations. Our data suggest that White-naped cranes tend to follow the water-level recession process, keeping close to the boundary of water patches most of the time. We also highlight the benefits of interdisciplinary approaches for gaining a better understanding of wetland ecosystem complexity.
Reliable and prompt information on forest above-ground biomass (AGB) and tree diameter at breast height (DBH) is crucial for sustainable forest management. Remote sensing technology, especially Light Detection and Ranging (LiDAR), has proven effective for estimating important tree variables. This study proposes predicting DBH and AGB from tree height and other LiDAR-derived metrics. For DBH prediction, we developed a nonlinear estimation equation based on total tree height. For AGB prediction, we used regression methods including multiple linear regression (MLR), random forest (RF) and support vector regression (SVR). We conducted the study in the Gudao forest area, dominated by Robinia pseudoacacia trees and located in the Yellow River Delta (YRD), China. For our approaches, we used Unmanned Aerial Vehicle (UAV) and backpack LiDAR point cloud datasets obtained in June 2017, together with three field measurement campaigns in the same study area, carried out in June 2017, June 2019 and October 2019. The results demonstrate that: ① individual tree segmentation (ITS) of the LiDAR data, from which we extracted individual tree information such as tree location and tree height, was carried out with an overall accuracy of F=0.91; ② using the ITS height data from the 2019 field stand for fitting, we developed a nonlinear DBH estimation equation with a Root Mean Square Error (RMSE) of 3.61 cm, later validated with the 2017 dataset; ③ forest AGB at stand level was estimated with the MLR, RF and SVR regression methods, and the SVR method gave the highest accuracy, with R2=0.82 compared with R2=0.72 for RF and R2=0.70 for MLR. AGB calculated at plot level from the 2017 LiDAR data was used to validate the accuracy of both models. Combining the UAV LiDAR data with the backpack LiDAR data significantly improved the overall ITS.
The UAV LiDAR's ability to provide highly accurate tree heights, together with DBH from the regression equation and other extracted LiDAR metrics, yielded high accuracy in estimating forest AGB. This study also shows that being cost-free is not the only advantage of freely available software: in ITS performance and LiDAR metrics extraction, it proved to be as good as commercially available software.
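A nonlinear DBH-from-height equation of the kind described above can be sketched with a power-law allometry, DBH = a·H^b, fitted by log-linear least squares. The power-law form and the synthetic data below are illustrative assumptions; the abstract does not give the paper's exact equation or coefficients:

```python
import numpy as np

# Assumed allometric form: DBH = a * H**b (hypothetical).
# Taking logs makes it linear: log(DBH) = log(a) + b*log(H),
# which can be fitted by ordinary least squares.
rng = np.random.default_rng(42)
heights = rng.uniform(5.0, 15.0, 80)                          # tree heights (m)
dbh_obs = 1.2 * heights**1.1 * rng.lognormal(0.0, 0.05, 80)   # synthetic DBH (cm)

b, log_a = np.polyfit(np.log(heights), np.log(dbh_obs), 1)    # slope = b
a = np.exp(log_a)
dbh_fit = a * heights**b
rmse = np.sqrt(np.mean((dbh_fit - dbh_obs) ** 2))             # fit quality (cm)
```

In the study's workflow, `heights` would come from the ITS step and `dbh_obs` from the 2019 field measurements, with the fitted equation then validated against the 2017 dataset.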
In studies of upwelling, data from infrared and optical sensors are usually used, which provide information on sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration. In this paper, we show that synthetic aperture radar (SAR) images can also make a valuable contribution to such studies. Upwelling regions become detectable by SAR because they are associated with a reduction of the radar backscatter due to ① the change in the stability of the air-sea interface and/or ② the presence of biogenic slicks. Furthermore, the boundary of upwelling regions appears as a line of increased radar backscatter due to the presence of convergent surface flow.
Shallow-water multi-beam echo sounders (MBESs) are characterized by high resolution and high data density, and MBES data processing is a hotspot in modern marine surveying. The Combined Uncertainty and Bathymetry Estimator (CUBE) is the mainstream MBES data processing algorithm, although its core theory and parameters are not widely understood. In this paper, the basic principle, mathematical model, key parameters, and main processing steps of CUBE are described systematically. A parameter-group optimization method that combines CUBE with a surface filter is established. Additionally, an example is given that shows the steps of parameter-group optimization, including selection of a typical area, parameter-group testing, and comparative analysis, and the method is then applied to shallow-water MBES data processing. The results show that the method can effectively improve the accuracy and efficiency of automatic data processing, and it is thus of practical engineering value.
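The basic principle of CUBE mentioned above can be illustrated with a much-simplified node update: each grid node keeps one or more depth hypotheses, a new sounding is merged into a compatible hypothesis by inverse-variance weighting, and an incompatible sounding spawns a new hypothesis. The function below is a sketch of this idea only, not the full CUBE algorithm, and `spawn_k` is an illustrative parameter:

```python
import math

def cube_node_update(hypotheses, depth, var, spawn_k=3.0):
    """Assimilate one sounding (depth estimate, variance) into a grid node.

    hypotheses : list of [estimate, variance] pairs kept at the node.
    The sounding is merged into the first hypothesis within spawn_k
    standard deviations via an inverse-variance weighted mean; otherwise
    a new hypothesis is started (simplified CUBE-style behaviour).
    """
    for h in hypotheses:
        est, v = h
        if abs(depth - est) <= spawn_k * math.sqrt(v + var):
            gain = v / (v + var)          # Kalman-style gain
            h[0] = est + gain * (depth - est)
            h[1] = v * var / (v + var)    # variance shrinks after update
            return hypotheses
    hypotheses.append([depth, var])       # outlier: start a new hypothesis
    return hypotheses
```

Parameter-group optimization in this setting amounts to tuning values such as the spawn threshold jointly with the surface-filter settings and comparing the resulting surfaces over a typical test area.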
Journal of Geodesy and Geoinformation Science (JGGS) is supervised by the China Association for Science and Technology, sponsored by the Chinese Society for Geodesy, Photogrammetry and Cartography and SinoMaps Press Co., Ltd., and published by Surveying and Mapping Press Co., Ltd. It is the English-language sister journal of Acta Geodaetica et Cartographica Sinica, is distributed both in China and abroad, and is published quarterly.