Journal of Geodesy and Geoinformation Science, 2021, Vol. 4, Issue (4): 46-62. doi: 10.11947/j.JGGS.2021.0404
Kexian WANG1, Shunyi ZHENG1, Rui LI1, Li GUI2
Received: 2021-02-28
Accepted: 2021-08-30
Online: 2021-12-20
Published: 2021-12-30
Contact: Rui LI
E-mail: kxwang@whu.edu.cn; lironui@whu.edu.cn
About author: Kexian WANG (1998—), male, majors in hyperspectral image classification and deep learning.
Kexian WANG, Shunyi ZHENG, Rui LI, Li GUI. A Deep Double-Channel Dense Network for Hyperspectral Image Classification[J]. Journal of Geodesy and Geoinformation Science, 2021, 4(4): 46-62.
Tab.1
Implementation details of the DDCD
| Layer name | | Kernel size | Group | Output size |
|---|---|---|---|---|
| Input | | - | - | (9×9×200, 1) |
| 3D-CNN+BN+ReLU | | (1×1×7) | 1 | (9×9×97, 24) |
| Linear Attention Mechanism | | - | - | (9×9×97, 24) |
| | | - | - | (9×9×97, 24) |
| Concatenate | | - | - | (9×9×97, 48) |
| 3D two-way dense layer | 3D-CNN+BN+ReLU | (1×1×1) | 3 | (9×9×97, 48) |
| | 3D-CNN+BN+ReLU | (3×3×3) | 3 | (9×9×97, 12) |
| | 3D-CNN+BN+ReLU | (3×3×3) | 3 | (9×9×97, 12) |
| | 3D-CNN+BN+ReLU | (1×1×1) | 3 | (9×9×97, 48) |
| | 3D-CNN+BN+ReLU | (3×3×3) | 3 | (9×9×97, 12) |
| Concatenate | | - | - | (9×9×97, 72) |
| 3D-CNN+BN+ReLU | | (3×3×97) | 3 | (9×9×1, 60) |
| Global Average Pooling | | - | - | (1×60) |
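Read row by row, Tab.1 traces a 9×9×200 input patch through the network: a spectral (1×1×7) convolution with stride 2 reduces 200 bands to 97, a linear attention branch is concatenated back onto the features, the two-way dense layer adds 12+12 feature maps, and a (3×3×97) convolution collapses the spectral axis before global average pooling. The PyTorch sketch below reproduces that pipeline so the shapes can be checked; it is a minimal reconstruction from the table alone, so the exact linear attention variant, the grouped-convolution details, and all class and helper names are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttention3D(nn.Module):
    """Simplified linear attention over all spectral-spatial positions,
    using the kernel feature map phi(x) = elu(x) + 1 (linear in positions)."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv3d(channels, channels, 1)
        self.k = nn.Conv3d(channels, channels, 1)
        self.v = nn.Conv3d(channels, channels, 1)

    def forward(self, x):
        b, c, d, h, w = x.shape
        q = (F.elu(self.q(x).flatten(2)) + 1).transpose(1, 2)   # (b, n, c)
        k = (F.elu(self.k(x).flatten(2)) + 1).transpose(1, 2)   # (b, n, c)
        v = self.v(x).flatten(2).transpose(1, 2)                # (b, n, c)
        kv = k.transpose(1, 2) @ v                              # (b, c, c)
        denom = (q @ k.sum(1, keepdim=True).transpose(1, 2)).clamp(min=1e-6)
        out = (q @ kv) / denom                                  # (b, n, c)
        return out.transpose(1, 2).reshape(b, c, d, h, w)

def conv_block(cin, cout, k, p, groups=1):
    """3D-CNN + BN + ReLU, the repeated unit of Tab.1."""
    return nn.Sequential(nn.Conv3d(cin, cout, k, padding=p, groups=groups),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class TwoWayDenseLayer(nn.Module):
    """3D two-way dense layer of Tab.1: top channel = two stacked 3x3x3
    convolutions (effective 5x5x5 receptive field), bottom channel = one 3x3x3."""
    def __init__(self, in_ch=48, growth=12, groups=3):
        super().__init__()
        self.top = nn.Sequential(conv_block(in_ch, in_ch, 1, 0, groups),
                                 conv_block(in_ch, growth, 3, 1, groups),
                                 conv_block(growth, growth, 3, 1, groups))
        self.bottom = nn.Sequential(conv_block(in_ch, in_ch, 1, 0, groups),
                                    conv_block(in_ch, growth, 3, 1, groups))

    def forward(self, x):
        # Dense connection: the input is concatenated with both channel outputs.
        return torch.cat([x, self.top(x), self.bottom(x)], dim=1)  # 48+12+12 = 72

class DDCD(nn.Module):
    """End-to-end sketch: a (1, 200, 9, 9) patch -> class logits."""
    def __init__(self, num_classes=16):
        super().__init__()
        # (1x1x7) spectral convolution with stride 2: 200 bands -> 97.
        self.stem = nn.Sequential(nn.Conv3d(1, 24, (7, 1, 1), stride=(2, 1, 1)),
                                  nn.BatchNorm3d(24), nn.ReLU(inplace=True))
        self.attn = LinearAttention3D(24)
        self.dense = TwoWayDenseLayer(in_ch=48, growth=12, groups=3)
        # (3x3x97) convolution collapses the spectral axis: 97 -> 1.
        self.fuse = conv_block(72, 60, (97, 3, 3), (0, 1, 1), groups=3)
        self.head = nn.Linear(60, num_classes)

    def forward(self, x):                           # x: (b, 1, 200, 9, 9)
        f = self.stem(x)                            # (b, 24, 97, 9, 9)
        f = torch.cat([f, self.attn(f)], dim=1)     # (b, 48, 97, 9, 9)
        f = self.dense(f)                           # (b, 72, 97, 9, 9)
        f = self.fuse(f)                            # (b, 60, 1, 9, 9)
        f = F.adaptive_avg_pool3d(f, 1).flatten(1)  # global average pooling
        return self.head(f)

# Shape check: DDCD()(torch.randn(2, 1, 200, 9, 9)).shape -> (2, 16)
```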
Tab.2
Number of samples per class for training, validation, and testing of the Indian Pines (IP) dataset
No. | Class | Total number | Train | Val | Test |
---|---|---|---|---|---|
1 | Alfalfa | 46 | 3 | 3 | 40 |
2 | Corn-notill | 1428 | 42 | 42 | 1344 |
3 | Corn-mintill | 830 | 24 | 24 | 782 |
4 | Corn | 237 | 7 | 7 | 223 |
5 | Grass-pasture | 483 | 14 | 14 | 455 |
6 | Grass-trees | 730 | 21 | 21 | 688 |
7 | Grass-pasture-mowed | 28 | 3 | 3 | 22 |
8 | Hay-windrowed | 478 | 14 | 14 | 450 |
9 | Oats | 20 | 3 | 3 | 14 |
10 | Soybean-notill | 972 | 29 | 29 | 914 |
11 | Soybean-mintill | 2455 | 73 | 73 | 2309 |
12 | Soybean-clean | 593 | 17 | 17 | 559 |
13 | Wheat | 205 | 6 | 6 | 193 |
14 | Woods | 1265 | 37 | 37 | 1191 |
15 | Buildings-Grass-Trees | 386 | 11 | 11 | 364 |
16 | Stone-Steel-Towers | 93 | 3 | 3 | 87 |
| | Total | 10249 | 307 | 307 | 9635 |
Tab.3
Number of samples per class for training, validation, and testing of the Pavia University (UP) dataset
No. | Class | Total number | Train | Val | Test |
---|---|---|---|---|---|
1 | Asphalt | 6631 | 33 | 33 | 6565 |
2 | Meadows | 18649 | 93 | 93 | 18463 |
3 | Gravel | 2099 | 10 | 10 | 2079 |
4 | Trees | 3064 | 15 | 15 | 3034 |
5 | Painted metal sheets | 1345 | 6 | 6 | 1333 |
6 | Bare Soil | 5029 | 25 | 25 | 4979 |
7 | Bitumen | 1330 | 6 | 6 | 1318 |
8 | Self-Blocking Bricks | 3682 | 18 | 18 | 3646 |
9 | Shadows | 947 | 4 | 4 | 939 |
| | Total | 42776 | 210 | 210 | 42356 |
Tab.4
Number of samples per class for training, validation, and testing of the Pavia Center (PC) dataset
No. | Class | Total number | Train | Val | Test |
---|---|---|---|---|---|
1 | Water | 65971 | 65 | 65 | 65841 |
2 | Trees | 7598 | 7 | 7 | 7584 |
3 | Meadows | 3090 | 3 | 3 | 3084 |
4 | Bricks | 2685 | 3 | 3 | 2679 |
5 | Soil | 6584 | 6 | 6 | 6572 |
6 | Asphalt | 9248 | 9 | 9 | 9230 |
7 | Bitumen | 7287 | 7 | 7 | 7273 |
8 | Tiles | 42826 | 42 | 42 | 42742 |
9 | Shadows | 2863 | 3 | 3 | 2857 |
| | Total | 148152 | 145 | 145 | 147862 |
Tab.5
Number of samples per class for training, validation, and testing of the HyRANK (HV) dataset
No. | Class | Total number | Train | Val | Test |
---|---|---|---|---|---|
1 | Dense urban fabric | 1262 | 37 | 37 | 1188 |
2 | Mineral extraction sites | 204 | 6 | 6 | 192 |
3 | Non-irrigated arable land | 614 | 18 | 18 | 578 |
4 | Fruit trees | 150 | 4 | 4 | 142 |
5 | Olive groves | 1768 | 53 | 53 | 1662 |
6 | Coniferous forest | 361 | 10 | 10 | 341 |
7 | Dense sclerophyllous vegetation | 5035 | 151 | 151 | 4733 |
8 | Sparse sclerophyllous vegetation | 6374 | 191 | 191 | 5992 |
9 | Sparsely vegetated areas | 1754 | 52 | 52 | 1650 |
10 | Rocks and sand | 492 | 14 | 14 | 464 |
11 | Water | 1612 | 48 | 48 | 1516 |
12 | Coastal water | 398 | 11 | 11 | 376 |
| | Total | 20024 | 595 | 595 | 19429 |
Tab.6
Number of samples per class for training, validation, and testing of the Kennedy Space Center (KSC) dataset
No. | Class | Total number | Train | Val | Test |
---|---|---|---|---|---|
1 | Scrub | 761 | 22 | 22 | 717 |
2 | CP hammock | 243 | 7 | 7 | 229 |
3 | CP/Oak | 256 | 7 | 7 | 242 |
4 | Slash pine | 252 | 7 | 7 | 238 |
5 | Oak/Broadleaf | 161 | 4 | 4 | 153 |
6 | Hardwood | 229 | 6 | 6 | 217 |
7 | Swamp | 105 | 3 | 3 | 99 |
8 | Graminoid marsh | 431 | 12 | 12 | 407 |
9 | Spartina marsh | 520 | 15 | 15 | 490 |
10 | Cattail marsh | 404 | 12 | 12 | 380 |
11 | Salt marsh | 419 | 12 | 12 | 395 |
12 | Mud flats | 503 | 15 | 15 | 473 |
13 | Water | 927 | 27 | 27 | 873 |
| | Total | 5211 | 149 | 149 | 4913 |
Tab.7
Number of samples per class for training, validation, and testing of the Botswana (BS) dataset
No. | Class | Total number | Train | Val | Test |
---|---|---|---|---|---|
1 | Water | 270 | 3 | 3 | 264 |
2 | Hippo grass | 101 | 2 | 2 | 97 |
3 | Floodplain grasses1 | 251 | 3 | 3 | 245 |
4 | Floodplain grasses2 | 215 | 3 | 3 | 209 |
5 | Reeds1 | 269 | 3 | 3 | 263 |
6 | Riparian | 269 | 3 | 3 | 263 |
7 | Firescar2 | 259 | 3 | 3 | 253 |
8 | Island interior | 203 | 3 | 3 | 197 |
9 | Acacia woodlands | 314 | 4 | 4 | 306 |
10 | Acacia shrublands | 248 | 3 | 3 | 242 |
11 | Acacia grasslands | 305 | 4 | 4 | 297 |
12 | Short mopane | 181 | 2 | 2 | 177 |
13 | Mixed mopane | 268 | 3 | 3 | 262 |
14 | Exposed soils | 95 | 1 | 1 | 93 |
| | Total | 3248 | 40 | 40 | 3168 |
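Across Tab.2 to Tab.7 the splits follow one recipe: for each class, a small fixed fraction of the labeled pixels is drawn for training, an equal number for validation, and the remainder is held out for testing, with a floor so that tiny classes (e.g. Oats in IP, 3/3/14) still receive a few samples. The sketch below shows one way to reproduce such a stratified split; the function name, the rounding rule, and the per-class floor are assumptions inferred from the IP counts (the rounding evidently varies slightly across datasets), not the authors' code.

```python
import numpy as np

def stratified_split(labels, frac=0.03, floor=3, seed=0):
    """Split labeled pixel indices per class into train/val/test.
    `labels` is a 1-D array of class ids, with 0 marking unlabeled pixels."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for cls in np.unique(labels[labels > 0]):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        n = max(floor, int(frac * idx.size))  # e.g. IP Alfalfa: 46 -> 3/3/40
        train.extend(idx[:n])
        val.extend(idx[n:2 * n])
        test.extend(idx[2 * n:])
    return np.asarray(train), np.asarray(val), np.asarray(test)

# Usage with a ground-truth map `gt` (2-D array of class ids):
# tr, va, te = stratified_split(gt.ravel(), frac=0.03)
```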
Tab.8
Classification results for the IP dataset using 3% training samples
Class | SVM/(%) | CDCNN/(%) | SSRN/(%) | FDSSC/(%) | DBMA/(%) | Proposed/(%) |
---|---|---|---|---|---|---|
1 | 29.34±3.60 | 50.17±6.79 | 79.16±8.97 | 97.08±2.71 | 94.80±3.69 | 98.37±1.63 |
2 | 55.51±0.32 | 56.59±4.46 | 86.15±2.21 | 96.24±1.91 | 91.08±0.72 | 95.64±1.37 |
3 | 62.66±1.07 | 53.17±3.07 | 91.67±2.98 | 93.14±2.65 | 85.40±5.48 | 94.95±1.35 |
4 | 42.74±3.49 | 53.31±3.91 | 84.37±4.53 | 97.17±1.07 | 88.88±2.69 | 95.37±1.58 |
5 | 85.30±1.28 | 84.05±5.66 | 97.69±1.89 | 98.42±0.64 | 97.43±0.51 | 98.24±0.73 |
6 | 82.11±1.52 | 89.03±2.82 | 95.85±1.56 | 97.02±0.87 | 96.76±1.18 | 98.12±0.79 |
7 | 64.17±6.13 | 46.28±8.09 | 90.93±6.09 | 72.21±12.14 | 52.66±9.25 | 73.47±22.19 |
8 | 89.79±0.92 | 92.06±0.98 | 97.72±1.34 | 100.0±0.00 | 100.0±0.00 | 100.0±0.00 |
9 | 42.40±10.06 | 52.17±13.04 | 74.64±10.86 | 71.29±18.60 | 62.66±6.34 | 90.21±4.17 |
10 | 63.01±2.72 | 52.87±7.61 | 85.75±3.81 | 86.01±4.24 | 82.43±3.32 | 91.31±2.35 |
11 | 64.09±1.25 | 67.79±3.09 | 88.65±1.80 | 91.56±4.05 | 90.54±1.83 | 95.67±1.66 |
12 | 48.50±1.15 | 44.67±3.28 | 86.34±2.73 | 90.63±2.75 | 80.10±5.37 | 91.02±3.59 |
13 | 87.37±2.35 | 87.12±2.60 | 99.00±1.00 | 99.79±0.20 | 98.55±0.79 | 99.05±0.69 |
14 | 89.71±0.41 | 91.17±1.25 | 95.52±0.55 | 97.01±1.69 | 97.14±0.75 | 96.51±1.47 |
15 | 61.51±2.73 | 73.97±1.00 | 94.28±1.54 | 93.24±2.18 | 86.18±2.21 | 95.69±2.31 |
16 | 97.64±1.29 | 94.36±1.33 | 94.15±2.15 | 96.99±1.69 | 94.55±3.93 | 95.57±1.87 |
OA | 68.69±0.50 | 66.90±7.38 | 90.24±1.18 | 93.16±1.96 | 89.89±1.33 | 95.44±1.59 |
AA | 66.62±1.37 | 68.05±2.06 | 90.12±1.54 | 92.36±3.08 | 87.45±2.34 | 94.31±1.23 |
K×100 | 63.93±0.49 | 62.36±7.78 | 88.84±1.36 | 92.18±2.28 | 88.48±1.51 | 94.79±1.31 |
Tab.9
Classification results for the UP dataset using 0.5% training samples
Class | SVM/(%) | CDCNN/(%) | SSRN/(%) | FDSSC/(%) | DBMA/(%) | Proposed/(%) |
---|---|---|---|---|---|---|
1 | 83.61±2.58 | 87.30±2.83 | 98.89±0.47 | 97.42±1.06 | 92.91±0.94 | 97.31±2.43 |
2 | 84.96±2.07 | 92.65±1.12 | 97.96±0.37 | 98.69±0.34 | 96.03±2.1 | 99.13±0.41 |
3 | 58.75±5.38 | 45.81±12.22 | 74.34±10.03 | 91.34±6.61 | 89.41±4.36 | 94.13±5.81 |
4 | 96.37±0.86 | 95.02±2.65 | 98.98±0.47 | 97.75±1.59 | 96.86±1.48 | 98.00±1.66 |
5 | 94.99±1.16 | 96.96±1.27 | 99.93±0.06 | 99.67±0.11 | 99.49±0.16 | 99.71±0.17 |
6 | 81.90±4.16 | 82.71±4.28 | 91.07±4.24 | 98.72±0.27 | 96.86±0.92 | 97.64±2.46 |
7 | 53.26±13.41 | 69.82±8.51 | 78.69±4.42 | 96.53±1.30 | 95.18±4.49 | 98.67±1.33 |
8 | 71.36±1.96 | 65.38±2.04 | 77.71±4.16 | 74.33±2.14 | 81.67±2.05 | 86.21±2.17 |
9 | 99.89±0.03 | 93.89±1.76 | 98.60±0.78 | 97.17±0.95 | 92.78±3.73 | 98.11±0.95 |
OA | 82.63±2.95 | 85.82±1.62 | 92.92±1.26 | 95.32±1.24 | 93.79±1.53 | 96.98±1.23 |
AA | 80.57±4.68 | 81.06±2.93 | 90.68±1.93 | 94.62±2.14 | 93.47±1.96 | 96.21±1.30 |
K×100 | 76.23±4.56 | 81.08±2.11 | 90.66±1.63 | 93.78±1.66 | 91.69±2.15 | 95.86±1.64 |
Tab.10
Classification results for the PC dataset using 0.1% training samples
Class | SVM/(%) | CDCNN/(%) | SSRN/(%) | FDSSC/(%) | DBMA/(%) | Proposed/(%) |
---|---|---|---|---|---|---|
1 | 99.75±0.09 | 97.08±0.75 | 99.98±0.02 | 99.85±0.13 | 99.84±0.05 | 99.74±0.24 |
2 | 83.36±2.23 | 82.01±4.28 | 97.05±0.85 | 87.73±4.72 | 93.32±2.77 | 93.27±5.32 |
3 | 62.47±5.34 | 85.20±5.92 | 77.15±5.88 | 84.17±7.69 | 76.82±5.15 | 86.67±9.59 |
4 | 63.15±5.80 | 49.91±12.72 | 65.43±6.89 | 63.46±17.03 | 64.52±2.83 | 76.95±12.00 |
5 | 82.76±4.16 | 73.20±4.87 | 89.23±2.16 | 90.01±3.43 | 87.02±3.23 | 93.22±6.24 |
6 | 83.52±2.27 | 85.77±2.24 | 87.29±4.79 | 88.96±3.80 | 87.87±3.94 | 90.76±2.20 |
7 | 91.88±0.97 | 86.91±7.96 | 99.64±0.30 | 94.53±5.19 | 99.14±0.35 | 97.92±1.69 |
8 | 95.26±2.57 | 97.28±0.69 | 98.19±0.86 | 99.66±0.10 | 99.55±0.18 | 99.58±0.09 |
9 | 99.77±0.10 | 92.57±3.53 | 97.99±1.77 | 92.43±7.23 | 95.61±2.34 | 99.46±0.83 |
OA | 93.87±1.64 | 92.92±2.08 | 96.36±1.39 | 96.54±0.78 | 96.50±0.86 | 97.54±0.33 |
AA | 84.66±0.85 | 83.33±5.99 | 90.22±2.24 | 88.98±4.36 | 89.30±2.18 | 93.06±1.28 |
K×100 | 91.27±2.38 | 89.86±3.00 | 94.83±1.97 | 95.10±1.11 | 95.04±1.22 | 96.51±0.47 |
Tab.11
Classification results for the HV dataset using 3% training samples
Class | SVM/(%) | CDCNN/(%) | SSRN/(%) | FDSSC/(%) | DBMA/(%) | Proposed/(%) |
---|---|---|---|---|---|---|
1 | 74.27±0.37 | 85.98±4.20 | 93.80±5.34 | 89.81±7.88 | 89.92±7.10 | 93.38±4.40 |
2 | 88.07±1.51 | 89.08±11.56 | 99.43±1.14 | 99.79±0.42 | 100.0±0.00 | 98.81±2.38 |
3 | 85.93±2.39 | 73.49±16.74 | 93.25±3.74 | 79.05±15.69 | 93.31±6.61 | 93.68±4.17 |
4 | 89.01±3.21 | 67.04±20.48 | 83.92±21.15 | 59.09±34.54 | 91.79±8.37 | 91.63±16.18 |
5 | 86.80±0.36 | 85.82±4.81 | 88.97±1.88 | 93.37±1.81 | 88.18±2.22 | 93.17±2.20 |
6 | 97.09±1.04 | 88.71±9.59 | 99.23±0.95 | 99.59±0.68 | 98.52±0.81 | 99.58±0.83 |
7 | 95.83±0.95 | 94.86±1.86 | 97.07±0.70 | 97.26±1.22 | 96.87±2.53 | 97.92±1.92 |
8 | 86.28±3.11 | 89.26±4.77 | 93.38±1.15 | 95.34±1.55 | 96.17±1.42 | 95.86±0.86 |
9 | 83.29±0.57 | 84.98±7.15 | 95.80±2.03 | 91.25±8.38 | 91.97±6.47 | 96.23±2.03 |
10 | 91.40±0.05 | 89.61±9.03 | 96.13±5.02 | 97.61±4.78 | 98.42±2.12 | 99.33±1.35 |
11 | 94.56±0.68 | 81.73±2.41 | 99.91±0.12 | 99.96±0.05 | 100.0±0.00 | 100.0±0.00 |
12 | 100.0±0.00 | 39.64±48.57 | 99.84±0.31 | 100.0±0.00 | 100.0±0.00 | 100.0±0.00 |
OA | 88.74±1.31 | 87.85±2.85 | 94.80±0.52 | 94.52±1.48 | 95.06±1.41 | 96.44±0.57 |
AA | 89.38±0.58 | 80.85±7.66 | 95.06±1.51 | 91.84±3.84 | 95.43±1.13 | 96.63±1.18 |
K×100 | 85.99±0.67 | 84.90±3.58 | 93.53±0.66 | 93.19±1.85 | 93.86±1.77 | 95.58±0.70 |
Tab.12
Classification results for the KSC dataset using 3% training samples
Class | SVM/(%) | CDCNN/(%) | SSRN/(%) | FDSSC/(%) | DBMA/(%) | Proposed/(%) |
---|---|---|---|---|---|---|
1 | 89.75±1.54 | 94.67±1.74 | 95.17±1.58 | 98.97±0.71 | 98.96±1.00 | 99.17±0.90 |
2 | 86.65±2.65 | 62.59±3.97 | 93.72±2.01 | 94.97±2.57 | 91.96±3.20 | 98.32±2.23 |
3 | 66.28±5.25 | 47.27±10.10 | 82.05±11.36 | 78.79±8.20 | 74.34±6.66 | 77.36±12.37 |
4 | 41.40±3.67 | 34.20±4.45 | 57.43±9.23 | 62.66±6.56 | 61.51±4.46 | 85.34±4.53 |
5 | 52.04±4.55 | 5.50±5.50 | 62.15±19.78 | 71.91±19.34 | 73.26±9.47 | 93.52±2.98 |
6 | 54.60±3.45 | 61.68±6.00 | 80.83±11.73 | 84.76±9.03 | 87.88±6.42 | 97.68±3.09 |
7 | 72.43±2.88 | 17.88±13.45 | 81.49±9.25 | 85.36±7.13 | 85.09±3.05 | 92.17±2.52 |
8 | 84.08±2.82 | 62.07±6.42 | 92.59±3.23 | 99.00±0.62 | 93.71±2.89 | 97.10±1.07 |
9 | 82.88±2.70 | 76.51±2.27 | 93.38±1.50 | 99.63±0.23 | 93.30±2.00 | 99.21±0.80 |
10 | 96.48±1.84 | 73.94±7.53 | 99.48±0.45 | 100.0±0.00 | 96.63±1.96 | 99.53±0.94 |
11 | 92.93±0.96 | 94.69±2.55 | 97.84±0.87 | 99.13±0.87 | 99.95±0.05 | 98.79±1.54 |
12 | 90.61±2.56 | 83.50±4.64 | 97.01±1.22 | 98.71±0.49 | 93.95±1.18 | 98.34±0.95 |
13 | 99.78±0.17 | 98.21±0.33 | 99.95±0.05 | 100.0±0.00 | 99.77±0.23 | 99.70±0.48 |
OA | 84.10±2.27 | 75.88±3.13 | 89.72±1.87 | 94.22±2.64 | 91.89±1.35 | 96.79±1.26 |
AA | 77.68±1.86 | 62.51±6.27 | 87.16±2.78 | 90.30±6.12 | 88.49±2.34 | 95.21±1.87 |
K×100 | 82.28±2.54 | 73.11±3.49 | 88.54±2.09 | 93.56±2.93 | 90.97±1.51 | 96.49±1.56 |
Tab.13
Classification results for the BS dataset using 1% training samples
Class | SVM/(%) | CDCNN/(%) | SSRN/(%) | FDSSC/(%) | DBMA/(%) | Proposed/(%) |
---|---|---|---|---|---|---|
1 | 99.85±0.15 | 71.31±18.08 | 99.33±0.41 | 96.96±1.65 | 98.16±0.68 | 98.21±0.96 |
2 | 77.30±5.42 | 44.29±16.83 | 92.30±3.35 | 82.61±8.66 | 95.55±3.73 | 92.00±8.88 |
3 | 74.28±5.94 | 67.96±17.77 | 99.41±0.50 | 100.0±0.00 | 98.76±0.84 | 99.59±0.64 |
4 | 57.35±3.13 | 43.36±18.85 | 81.01±5.13 | 82.32±4.02 | 81.99±4.47 | 91.39±4.69 |
5 | 82.65±2.10 | 57.80±15.31 | 86.27±5.19 | 89.41±3.42 | 86.74±4.43 | 87.98±5.17 |
6 | 53.09±4.22 | 52.47±13.71 | 90.93±3.97 | 95.46±2.27 | 89.21±3.86 | 94.83±2.52 |
7 | 95.29±4.14 | 87.84±4.85 | 100.0±0.00 | 97.11±2.60 | 94.58±2.36 | 99.76±0.47 |
8 | 75.02±6.92 | 59.67±16.04 | 95.21±1.90 | 96.98±1.29 | 98.93±0.84 | 97.90±2.64 |
9 | 70.95±4.50 | 71.29±18.55 | 90.24±2.63 | 87.71±6.26 | 95.47±4.05 | 99.27±0.90 |
10 | 68.53±3.80 | 65.13±17.94 | 86.39±4.84 | 94.16±3.70 | 93.08±4.11 | 92.39±8.13 |
11 | 93.00±1.57 | 61.53±16.87 | 99.05±0.62 | 99.11±0.89 | 95.32±3.60 | 99.39±0.60 |
12 | 84.85±3.46 | 52.88±14.96 | 97.56±1.66 | 97.26±2.34 | 97.50±1.53 | 98.30±0.63 |
13 | 79.47±4.56 | 71.31±18.07 | 97.69±1.38 | 91.44±4.20 | 98.90±1.00 | 99.13±0.59 |
14 | 69.90±12.95 | 57.42±13.93 | 100.0±0.00 | 99.78±0.22 | 98.43±0.98 | 100.0±0.00 |
OA | 74.73±2.45 | 61.01±26.42 | 92.90±1.37 | 92.69±2.80 | 93.51±2.30 | 95.66±1.30 |
AA | 77.25±1.58 | 61.73±27.86 | 93.96±1.18 | 93.59±2.63 | 94.47±1.90 | 96.17±1.45 |
K×100 | 72.70±2.63 | 58.50±27.32 | 92.31±1.48 | 92.08±3.03 | 92.97±2.49 | 95.40±1.41 |
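The three summary rows in Tab.8 to Tab.13 (OA, AA, and K×100) are all standard functions of the confusion matrix; the ± entries are presumably mean ± standard deviation over repeated runs. A short sketch of the standard definitions (not the authors' evaluation script):

```python
import numpy as np

def summary_metrics(C):
    """OA, AA, and Cohen's kappa (x100) from a confusion matrix C,
    with rows = ground truth and columns = predictions."""
    C = np.asarray(C, dtype=float)
    total = C.sum()
    oa = np.trace(C) / total                          # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))          # mean per-class accuracy
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / total**2   # expected chance agreement
    kappa = (oa - pe) / (1 - pe)                      # Cohen's kappa
    return 100 * oa, 100 * aa, 100 * kappa            # matches the K×100 rows
```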
Fig.10
The OA comparison between the 3D Double-Channel dense layer with the linear attention mechanism and its ablated variants. There are two channels in the 3D Double-Channel dense layer. The top channel contains two stacked 3×3×3 convolutional layers, which together are equivalent to a 5×5×5 kernel and capture global information. The bottom channel uses a 3×3×3 kernel to exploit local visual patterns. Building on the linear attention mechanism ablation experiment, we further remove one 3×3×3 convolutional layer from the top channel to verify the effectiveness of the Double-Channel dense layer, and we also include the original dense layer in the comparison.
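In code terms, the Fig.10 ablation amounts to shortening the top channel by one convolution, which collapses its effective receptive field from 5×5×5 back to 3×3×3 so that both channels see only local context. A sketch against the hypothetical TwoWayDenseLayer above (reusing its `conv_block` helper and grouped convolutions; the class name is ours):

```python
import torch
import torch.nn as nn

class AblatedTwoWayDenseLayer(nn.Module):
    """Fig.10 ablation: only one 3x3x3 convolution in the top channel,
    making the two channels structurally identical (local context only)."""
    def __init__(self, in_ch=48, growth=12, groups=3):
        super().__init__()
        self.top = nn.Sequential(conv_block(in_ch, in_ch, 1, 0, groups),
                                 conv_block(in_ch, growth, 3, 1, groups))
        self.bottom = nn.Sequential(conv_block(in_ch, in_ch, 1, 0, groups),
                                    conv_block(in_ch, growth, 3, 1, groups))

    def forward(self, x):
        # Same dense connection as the full layer: concat input + both channels.
        return torch.cat([x, self.top(x), self.bottom(x)], dim=1)
```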