
Image recognition for different developmental stages of rice by RAdam convolutional neural networks


Xu Jianpeng 1,2, Wang Jie 1,2, Xu Xiang 1, Ju Shucun 1,2

(1. Anhui Rural Comprehensive Economic Information Center, Hefei 230031, China; 2. Anhui Engineering Laboratory of Agricultural Ecological Big Data, Hefei 230031, China)

At present, information on rice developmental stages is obtained mainly by manual field observation, which is inefficient and subjective. To address this, an image recognition method based on the ResNet50 convolutional neural network with the Rectified Adam (RAdam) optimizer is proposed for automatic identification of the key developmental stages of rice. Phenological features of rice in 12 experimental fields were photographed automatically and continuously for two consecutive years, and the collected images were preprocessed to build a classified image dataset covering the developmental stages of rice. A method combining the ExG factor with the Otsu algorithm was used to segment the rice images and reduce interference from the paddy background. The classification performance of four models, VGG16, VGG19, ResNet50 and Inception v3, was compared, the best-performing network was selected, and its parameters were tuned. Changes in model accuracy and loss under different optimizers were also compared, and the RAdam optimizer was chosen. The results show that the model built on the RAdam-optimized convolutional neural network reached a classification accuracy of 97.33% in real-world scenes, with high network stability and fast convergence, providing an effective method for automated observation of rice developmental stages.

image recognition; neural network; model; rice; RAdam; ResNet50; developmental stage

0 Introduction

Rice is one of the most important food crops in China: the national rice planting area accounts for about 30% of the total area under food crops, and rice output is close to half of total grain production. Monitoring the developmental stages of rice means recording the morphological changes at each stage of growth and development, which reflects the growth status of the crop. By analyzing the relationship between each developmental stage and meteorological conditions, agrometeorological services help field managers plan management activities such as irrigation and fertilization in time, and also provide an important reference for assessing crop growth and estimating yield [1]. Research on observation techniques for rice developmental stages is therefore of great significance.

At present, information on rice developmental stages is obtained mainly by manual observation: observers carry out field measurements according to the definitions and descriptions in the agrometeorological observation specifications, which is not only inefficient but also consumes a great deal of manpower and material resources, and cannot meet the demand for real-time, rapid monitoring. Remote sensing, with its near-real-time, large-area, rapid and non-destructive advantages, provides an effective technical means for identifying crop phenological stages. Liu et al. [2] used satellite remote sensing to extract the distribution of rice planting in their study area with a classification accuracy of 89.19%; Sun et al. [3] identified developmental stages from the Enhanced Vegetation Index (EVI) characteristics of rice at the transplanting, early tillering, heading and maturity stages, with absolute errors for most stages of less than 16 d. As intelligent techniques represented by deep learning have matured in scene recognition and object classification, deep learning has become an efficient and accurate tool for object recognition [3] and has gradually been applied to identifying crop phenological traits. Bai [4] used image recognition for automated observation of the transplanting, tillering and heading stages of rice, with errors between automatic detection and manual records generally within 3 d; Yang et al. [5] studied the recognition of key rice developmental stages using machine learning combined with vegetation index thresholds, and their model based on the K-Nearest Neighbor (KNN) algorithm reached an accuracy of 86.04% on UAV data; Ikasari et al. [6] combined remote sensing and deep learning, and their Multi-Layer Perceptron (MLP) with multiple regularizations, dropout and batch normalization reached an accuracy of 70.28% on their dataset; Gupta et al. [7] applied a pretrained ResNet50 convolutional neural network to weed and crop classification, achieving a recognition rate of 95.23%. With the rapid development of computing and algorithms in recent years, deep learning has become an indispensable tool for image classification [8] and is increasingly used in crop phenological observation [9], but most studies rely on a single learner and their recognition rates are generally not high [5]. Studies on recognizing rice developmental stages with convolutional neural networks are few, work applying a tuned, pretrained ResNet50 network to this task is rarely reported, and the number of developmental stages covered by existing automatic observation studies is very limited [9]. Overall, observation techniques for the key developmental stages of rice are still immature and cannot meet the needs of operational observation services.

In summary, to improve the accuracy of image recognition for the key developmental stages of rice, extend the range of stages that can be identified automatically, and explore the optimal settings of the relevant model parameters, this study takes digital images of rice experimental fields with different sowing dates as the research object and proposes an automatic recognition method that combines image segmentation with image classification. The ResNet50 network model is combined with the Rectified Adam (RAdam) optimizer [10] for automatic recognition of rice developmental stages, with the aim of partially replacing manual observation of some stages during rice growth and providing model support for the development of embedded devices for observing rice phenology.

1 Rice image dataset

1.1 Data sources

The training and validation sets both come from images collected automatically in the rice experimental fields (117°03′26″E, 31°57′20″N) at the Hefei sub-center experimental base of the Anhui Agrometeorological Center. For two consecutive years (2019 and 2020), single-season rice of four varieties (Dangyujing 10, Xuanjingnuo 1, Chuangliangyou 699 and Liangyou 631) was planted on six sowing dates (April 24, April 29, May 4, May 9, May 14 and May 19) in 12 experimental plots laid out in a grid, each plot measuring 12 m × 5 m (Fig. 1). Two high-definition wide-angle cameras were mounted at the two ends of the rectangular field to photograph the whole growth process of the rice from transplanting to harvest (May 1 to October 31). The cameras were Hikvision iDS-2DF88 units with video output up to 3 840×2 160 at 25 fps, 2 100-line resolution and 37× optical zoom, supporting up to 300 preset positions and 18 cruise paths; the mounting poles stood 2.5 m above the ground, and 10 shooting preset points were set in each plot for each sowing date. To minimize the marked changes in illumination caused by direct sunlight, two shooting times, 08:00 and 16:00, were set each day, and rice photos and short videos were captured on a timed, fixed-point cruise schedule and uploaded automatically to the central data server.

The collected image data were stored in jpg format and the video data in mp4 format. Over the two years of experiments, continuous tracking of the rice from transplanting into the paddy field onward accumulated 11 682 images and 496 h of video in the database, covering seven developmental stages of rice: regreening, tillering, jointing, booting, heading, milk and maturity [11-12], which helps ensure that a network trained on these data is robust. Among them, the regreening, tillering, jointing, heading and milk stages are key phases of rice growth and development [13], and the growth status of rice during these five stages has a large influence on final yield and quality; example rice images are shown in Fig. 2. This paper therefore focuses on recognizing these five stages.

1.2 Dataset augmentation

To enlarge the training data and give the trained network better invariance to rotation, translation and scaling, data augmentation was applied to the training set before training [14]: geometric transformations, using one or a combination of flipping, rotation, scaling, cropping, translation, noise perturbation and patch extraction, were used to increase the amount of input data [15]. After augmentation the rice image dataset contained 35 422 images in total, 24 794 in the training set and 10 628 in the validation set; the number of images for each developmental stage is given in Table 1. The dataset contains rice images at various scales as well as high-noise images, which helps improve the robustness of the network [16] (a minimal augmentation sketch is given after Table 1).

Table 1 Number of images for each developmental stage
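As an illustration of the geometric augmentation described in Section 1.2, the sketch below uses the Keras ImageDataGenerator; the parameter values and the directory layout (one sub-folder per stage) are illustrative assumptions, not the settings reported in the paper.

```python
import tensorflow as tf

# Illustrative augmentation pipeline: flips, rotations, shifts, zoom and
# brightness jitter, roughly matching the transformations listed in Section 1.2.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20,            # random rotation up to +/-20 degrees
    width_shift_range=0.1,        # horizontal translation
    height_shift_range=0.1,       # vertical translation
    zoom_range=0.2,               # random scaling
    horizontal_flip=True,         # mirror flip
    brightness_range=(0.8, 1.2),  # mild illumination jitter
    fill_mode="nearest",
)

# Hypothetical directory with one sub-folder per developmental stage,
# e.g. data/train/tillering/xxx.jpg
train_flow = augmenter.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical"
)
```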

2 Rice developmental stage recognition method

2.1 Image segmentation method

Analysis of the collected images shows that the background differs considerably among the developmental stages; in particular, at the regreening and tillering stages a large part of the image is paddy field, which contains many interfering elements (water, soil, debris, etc.) and hampers extraction of stage-specific feature information. A method combining the ExG factor with the Otsu algorithm was therefore used to segment the rice images, retaining the information that characterizes the developmental stage and removing or reducing background interference [17-18]. The ExG factor is obtained through operations on the color components: the system automatically enumerates the results of color-component operations, and the best linear combination coefficients of the RGB components are selected under human supervision. The RGB linear combination [19] is given by Eq. (1):

E(x, y) = r·R(x, y) + g·G(x, y) + b·B(x, y)    (1)

where E(x, y) is the feature obtained from the linear combination; R(x, y), G(x, y) and B(x, y) are the gray values of the red, green and blue components of the image at pixel (x, y); r, g and b are the corresponding linear coefficients; and (x, y) denotes the two-dimensional pixel coordinates of the color components.

If E(x, y) ≤ 0, then E(x, y) = 0; if E(x, y) ≥ 255, then E(x, y) = 255, so that all feature values are normalized to the range 0 to 255. Learning the combination coefficients through these component operations showed that r = −1, g = 2, b = −1 gives good results; this combination is strongly robust to changes in the illumination and color of the target and satisfies the requirement of extracting the "green" feature of rice plant images. The grayscale factor obtained is therefore 2G − R − B, i.e., the ExG factor, which is used as the segmentation index in this paper.
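A minimal sketch of the ExG (2G − R − B) grayscale computation described above, assuming an RGB image loaded as a NumPy array; the function name `exg_gray` is illustrative.

```python
import numpy as np

def exg_gray(rgb: np.ndarray) -> np.ndarray:
    """Compute the ExG (2G - R - B) grayscale image, clipped to the range 0-255."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    exg = 2 * g - r - b                            # Eq. (1) with r = -1, g = 2, b = -1
    return np.clip(exg, 0, 255).astype(np.uint8)   # normalize feature values to 0-255
```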

The green feature of rice plants is prominent, so their ExG values differ markedly from those of the background, and an optimal segmentation threshold exists between the two. In this paper the collected images are converted to grayscale with the ExG factor, and the Otsu method is then used to obtain this threshold [20]. The Otsu criterion is given by Eq. (2):

threshold = arg max_t σ_B²(t)    (2)

where σ_B²(t) is the between-class variance obtained when the pixels of the grayscale image are divided into two classes at gray level t, f is the gray value of an image pixel, and threshold is the gray level that maximizes the between-class variance over all pixels of the grayscale image. A pixel whose gray value is greater than the threshold is regarded as rice plant; otherwise it belongs to the background.
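A sketch of the combined ExG + Otsu segmentation, assuming OpenCV is available; it recomputes the ExG grayscale inline and is an illustration of the approach rather than the authors' implementation.

```python
import cv2
import numpy as np

def segment_rice(rgb: np.ndarray) -> np.ndarray:
    """Suppress the paddy background using the ExG factor and Otsu's threshold."""
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    gray = np.clip(2 * g - r - b, 0, 255).astype(np.uint8)     # ExG grayscale
    # Otsu's method picks the threshold that maximizes the between-class variance.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(rgb, rgb, mask=mask)                # keep only plant pixels
```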

2.2 Image classification method

Convolutional neural networks (CNNs) are the method of choice for image classification [21], and several classical CNN architectures such as VGGNet, GoogLeNet and ResNet have appeared in recent years [22].

The VGG models have a simple structure; the most widely used variants are VGG16 and VGG19. VGG16 contains 13 convolutional layers and 3 fully connected layers, and VGG19 contains 16 convolutional layers and 3 fully connected layers; both use 3×3 convolution kernels and 2×2 max-pooling layers, with max pooling progressively reducing the number of neurons in each stage. The last three layers are two fully connected layers with 4 096 neurons each and a softmax layer.

The GoogLeNet model, also called the Inception model, introduces the Inception structure, which uses several convolution kernels of different sizes together with pooling layers and fuses the features of convolution kernels at different scales for dimensionality reduction and mapping, increasing the depth and width of the network while reducing the number of model parameters.

ResNet50 is the ResNet model with 50 layers. It starts with a 7×7 convolution layer with 64 output channels, followed by 16 (3+4+6+3) building blocks of 3 layers each, giving 48 layers, and ends with a fully connected (FC) layer; the count of 50 layers refers only to the convolutional and fully connected layers, with the ReLU and pooling layers not included [23].

ResNet50 has lower memory and time requirements than VGG, higher accuracy than VGG and GoogLeNet, better computational efficiency than VGG, and a simpler structure than GoogLeNet. The residual modules introduced in ResNet50 effectively alleviate the gradient vanishing, gradient explosion and degradation problems caused by deepening the network [24]; its structure is shown in Fig. 3 and its parameters in Table 2. In this paper the network parameters were tuned and the choice of optimizer examined (Section 2.3), making the model better suited to classifying images of rice developmental stages.
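As an illustration of this design, the sketch below builds a five-class classifier on the ImageNet-pretrained ResNet50 provided by Keras; the classification head (global average pooling followed by a softmax layer) is an assumption, since the paper does not describe its head in detail.

```python
import tensorflow as tf

NUM_STAGES = 5  # regreening, tillering, jointing, heading, milk

def build_resnet50_classifier() -> tf.keras.Model:
    """ResNet50 backbone pretrained on ImageNet with a softmax head for 5 stages."""
    backbone = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3)
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    outputs = tf.keras.layers.Dense(NUM_STAGES, activation="softmax")(x)
    return tf.keras.Model(inputs=backbone.input, outputs=outputs)
```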

2.3 RAdam optimizer

A network model has many parameters, and a suitable optimization algorithm is needed to learn them. Adam is one of the most widely used optimizers and is applicable to many deep learning networks; it estimates the first and second moments of each gradient component with exponential moving averages to obtain the update at each step, thereby providing an adaptive learning rate [25]. In the early stage of training, however, the variance of the second-moment estimate can become extremely large, and the Adam update rule is then no longer reliable.

Table 2 Network parameters of the convolutional neural network model

Note: conv1 is the first convolution layer; conv2_x, conv3_x, conv4_x and conv5_x are the second to fifth convolution modules; Max Pool is the max-pooling layer and AVE Pool is the average-pooling layer.

Rectified Adam (RAdam) [10], an optimizer improved from Adam, introduces a rectification term for the adaptive learning rate to correct the variance problem of Adam; in the early stage of training the update rule falls back to Stochastic Gradient Descent (SGD) with momentum. This removes the need for the manually tuned warmup involved in training, makes training more robust to the choice of learning rate, and provides better training accuracy and generalization across a variety of datasets and convolutional network architectures.
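A sketch of compiling such a model with RAdam via the RectifiedAdam implementation in TensorFlow Addons; the learning rate shown is the tuned value reported in Section 3.2.2, and the use of this particular package (rather than the authors' own implementation) is an assumption.

```python
import tensorflow as tf
import tensorflow_addons as tfa

def compile_with_radam(model: tf.keras.Model, lr: float = 1e-4) -> tf.keras.Model:
    """Compile a classifier with the RAdam optimizer (rectified adaptive learning rate)."""
    model.compile(
        optimizer=tfa.optimizers.RectifiedAdam(learning_rate=lr),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```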

3 Experiments and analysis of results

3.1 Experimental procedure

The hardware environment was 16 GB of memory, an Intel(R) Xeon(R) E7 CPU and an NVIDIA Quadro P600 GPU, running Windows 10. The open-source deep learning framework TensorFlow was used, and the GPU was invoked through TensorFlow to parallelize the convolutional neural network computations [26].

The data were processed as follows:

1) Before the network model was selected, the data were prepared and each image was classified and labeled manually [27]; the augmented dataset was split into a training set and a validation set at a ratio of 7:3. The experimental data were then preprocessed, including resizing, per-pixel mean subtraction and normalization; images of different sizes were uniformly converted to 224 pixels × 224 pixels × 3 channels [22,28] to allow comparison (a preprocessing sketch is given after this list).

2) The rice images were segmented by the combined ExG factor and Otsu method to remove interfering background information from the paddy field and extract as much developmental-stage feature information as possible. The segmentation results after background removal are shown in Fig. 4.

Fig. 4 Example of paddy field image segmentation

3) Four pretrained deep convolutional network models, VGG16, VGG19, Inception v3 (GoogLeNet) and ResNet50 [7,29], were compared on the classification of rice developmental stage images to find the best network. Models were trained in comparative experiments under different settings; training was run for 20 epochs, the accuracy and loss output at the end of training were recorded every 4 epochs, and the network model was saved. Taking the ResNet50 network as an example, the training workflow is shown in Fig. 5.

4) Hyperparameter tuning of the best model. The hyperparameters of the network include the learning rate and the batch size; the learning rate affects how the model converges, and the batch size affects its generalization performance. For rice developmental stage image recognition, the hyperparameters were designed with reference to related models on similar datasets and to a series of experiments on the dataset of this study, and were then unified [30-31]. The performance of the best model before and after tuning was compared to determine the optimal tuned model.

5) The model with tuned hyperparameters was further optimized with the Adam and RAdam optimizers, and the model accuracy and loss were compared to select the better optimizer.

6) The optimal model with the better optimizer was used to recognize rice developmental stages, and the results were compared with manual observations.
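A minimal sketch of the preprocessing in step 1): resizing to 224×224, simple normalization and a 7:3 training/validation split. The directory name and the normalization constant are illustrative assumptions.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

def load_datasets(data_dir: str = "data/all_stages", batch_size: int = 32):
    """Resize images to 224x224 and split them 7:3 into training and validation sets."""
    common = dict(
        directory=data_dir, validation_split=0.3, seed=42,
        image_size=IMG_SIZE, batch_size=batch_size, label_mode="categorical",
    )
    train_ds = tf.keras.utils.image_dataset_from_directory(subset="training", **common)
    val_ds = tf.keras.utils.image_dataset_from_directory(subset="validation", **common)

    def normalize(x, y):
        # Scale pixels to [0, 1]; per-pixel mean subtraction could be added here.
        return tf.cast(x, tf.float32) / 255.0, y

    return train_ds.map(normalize), val_ds.map(normalize)
```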

3.2 Results and analysis

3.2.1 Selection of the optimal convolutional neural network model

The classification results for rice developmental stage images obtained with the four deep convolutional network models VGG16, VGG19, Inception v3 (GoogLeNet) and ResNet50 are shown in Table 3.

Table 3 Learning results of different network models

Table 3 shows that the VGG16 model reached a training accuracy of 99.46% and a validation accuracy of 94.76%, the VGG19 model 94.36% and 89.43%, and the Inception v3 model 98.70% and 93.59%, while the training and validation accuracies of the ResNet50 model were 99.59% and 96.88%, respectively. The ResNet50 model clearly outperformed the other three, so it was initially selected for classifying rice developmental stage images.

3.2.2 Parameter tuning of the ResNet50 network model

For hyperparameter tuning of the ResNet50 model, learning rates of 1.00×10⁻⁴, 1.00×10⁻³ and 1.40×10⁻² were selected on an exponential scale, and batch sizes of 16, 32, 64 and 128 were compared [32-34]. The initial learning rate of the network model was 1.00×10⁻⁴ and the initial batch size 16; after repeated tuning experiments [35], a learning rate of 1.00×10⁻⁴ and a batch size of 32 were finally chosen. The learning results are compared in Table 4: after tuning, the accuracy of the ResNet50 model increased and its loss decreased, with the validation accuracy and loss reaching 97.66% and 0.009, respectively, and the training time was shortened by 737 s. The tuned model therefore performs better, and the tuned ResNet50 convolutional neural network was used in this paper for classifying rice developmental stage images.
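One way to realize the comparison described above is an exhaustive search over the candidate learning rates and batch sizes; the sketch below assumes a hypothetical `make_datasets(batch_size)` factory returning the training and validation sets and is not the authors' tuning script.

```python
import itertools
import tensorflow as tf
import tensorflow_addons as tfa

LEARNING_RATES = [1e-4, 1e-3, 1.4e-2]   # candidate learning rates from the paper
BATCH_SIZES = [16, 32, 64, 128]         # candidate batch sizes from the paper

def build_model() -> tf.keras.Model:
    backbone = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3)
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    return tf.keras.Model(backbone.input, tf.keras.layers.Dense(5, activation="softmax")(x))

def grid_search(make_datasets):
    """Train a model for every (learning rate, batch size) pair and keep the best one."""
    best_setting, best_acc = None, 0.0
    for lr, batch in itertools.product(LEARNING_RATES, BATCH_SIZES):
        train_ds, val_ds = make_datasets(batch)   # hypothetical dataset factory
        model = build_model()
        model.compile(optimizer=tfa.optimizers.RectifiedAdam(learning_rate=lr),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        history = model.fit(train_ds, validation_data=val_ds, epochs=20, verbose=0)
        val_acc = max(history.history["val_accuracy"])
        if val_acc > best_acc:
            best_setting, best_acc = (lr, batch), val_acc
    return best_setting, best_acc
```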

Table 4 Comparison of learning results before and after parameter tuning

3.2.3 Comparison of different optimizers

The changes in model accuracy and loss with the Adam and RAdam optimizers are shown in Fig. 6. As the number of epochs increases, the loss keeps decreasing, starts to converge at the 4th epoch and gradually approaches zero. During the first three epochs of training, the model with the Adam optimizer converges more slowly, and its accuracy and loss are not as good as those of the model with the RAdam optimizer.

In training for rice developmental stage image recognition, the ResNet50 model with the RAdam optimizer thus converges noticeably faster than with the Adam optimizer, so the RAdam optimizer was adopted in this paper.

3.2.4 Validation of the recognition results for each developmental stage

To verify the recognition rate of the model for each developmental stage in real scenes, rice images of each stage were taken manually in other paddy fields and publicly available rice images were collected from the internet, giving 150 images in total (30 sample images per stage) as validation samples for machine recognition. The comparison with manual identification is given in Table 5: the average correct recognition rate over the five stages reached 97.33%, with 4 samples misidentified, and the correct recognition rates for the regreening, heading and milk stages reached 100%. This shows that the RAdam-based ResNet50 method proposed in this paper achieves high accuracy for rice developmental stage recognition after parameter tuning (a sketch of the per-stage accuracy computation is given after Table 5).

Table 5 Comparison of model recognition results and manual observations for each developmental stage
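A minimal sketch of how the per-stage accuracies compared in Table 5 could be computed from model predictions; the label order and array layout are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

STAGES = ["regreening", "tillering", "jointing", "heading", "milk"]  # assumed label order

def per_stage_accuracy(model: tf.keras.Model, images: np.ndarray, labels: np.ndarray) -> dict:
    """Compare model predictions with manual labels and report accuracy per stage.

    `images` has shape (N, 224, 224, 3); `labels` holds integer stage indices.
    """
    preds = np.argmax(model.predict(images, verbose=0), axis=1)
    report = {}
    for idx, name in enumerate(STAGES):
        mask = labels == idx
        report[name] = float(np.mean(preds[mask] == labels[mask])) if mask.any() else float("nan")
    report["average"] = float(np.mean(preds == labels))
    return report
```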

4 Conclusions

This paper proposes a visual recognition strategy for rice developmental stages based on a convolutional neural network. A rice image dataset was first constructed and preprocessed by augmentation and segmentation; a network suited to recognizing the key developmental stages of rice was then built on the ResNet50 model, and the parameters that affect model performance, such as the batch size, as well as the optimizer, were analyzed. The results show the following:

1) Among the four convolutional neural network models VGG16, VGG19, Inception v3 and ResNet50, ResNet50 performed best. The rice developmental stage recognition model built and trained on ResNet50 recognizes the five key developmental stages of rice well, with a validation accuracy of 97.66% during training.

2) The accuracy and loss of the ResNet50 model for recognizing rice developmental stages differ considerably between the Adam and RAdam optimizers; RAdam is more stable and converges faster than Adam.

3) The average correct recognition rate of the proposed method over the key developmental stages of rice reached 97.33%. The recognition rates for the tillering and jointing stages were lower because their classification features are less distinct, and recognition of stages with indistinct features, such as the booting and maturity stages, still requires further study.

This study is of practical significance for the application of deep learning in agrometeorological research and services; the application of the method to other phenological traits of rice and to other crops remains to be studied.

[1]陸佳嵐,王凈,馬成,等. 長江流域中稻產(chǎn)量和品質(zhì)性狀差異與其生育期氣象因子的相關(guān)性[J]. 江蘇農(nóng)業(yè)學(xué)報,2020,36(6):1361-1372.

Lu Jialan, Wang Jing, Ma Cheng, et al. Correlation between the differences in yield and quality traits among various types of middle rice and meteorological factors during growth period in the Yangtze River basin[J]. JAS, 2020, 36(6): 1361-1372. (in Chinese with English abstract)

[2]劉丹,于成龍,李帥,等. 基于遙感的黑龍江省東部水稻種植信息提取[J]. 中國農(nóng)學(xué)通報,2013,29(27):30-34.

Liu Dan, Yu Chenglong, Li Shuai, et al. Rice planting extraction in northern heilongjiang province based on remote sensing[J]. Chinese Agricultural Science Bulletin, 2013, 29(27): 30-34. (in Chinese with English abstract)

[3]孫華生,黃敬峰,彭代亮. 利用MODIS數(shù)據(jù)識別水稻關(guān)鍵生長發(fā)育期[J]. 遙感學(xué)報,2009,13(6):1122-1137.

Sun Huasheng, Huang Jingfeng, Peng Dailiang. Detecting major growth stages of paddy rice using MODIS data[J]. Journal of Remote Sensing, 2009, 13(6): 1122-1137. (in Chinese with English abstract)

[4]白曉東. 基于圖像的水稻關(guān)鍵發(fā)育期自動觀測技術(shù)研究[D]. 武漢:華中科技大學(xué),2014.

Bai Xiaodong. Research on Automatic Observation Technology of Rice Critical Development Period Based on Image[D]. Wuhan: Huazhong University of Science and Technology, 2014. (in Chinese with English abstract)

[5]楊振忠,方圣輝,彭漪,等. 基于機器學(xué)習(xí)結(jié)合植被指數(shù)閾值的水稻關(guān)鍵生育期識別[J]. 中國農(nóng)業(yè)大學(xué)學(xué)報,2020,25(1):76-85.

Yang Zhenzhong, Fang Shenghui, Peng Yi, et al. Recognition of the rice growth stage by machine learning combined with vegetation index threshold[J]. Journal of China Agricultural University, 2020, 25(1): 76-85. (in Chinese with English abstract)

[6]Ikasari I H, Ayumi V, Fanany M I, et al. Multiple regularizations deep learning for paddy growth stages classification from LANDSAT-8[C]//2016 International Conference on Advanced Computer Science and Information Systems (ICACSIS). IEEE, 2016: 512-517.

[7]Gupta K, Rani R, Bahia N K. Plant-seedling classification using transfer learning-based deep convolutional neural networks[J]. International Journal of Agricultural and Environmental Information Systems (IJAEIS), 2020, 11(4):25-40.

[8]李小占,馬本學(xué),喻國威,等. 基于深度學(xué)習(xí)與圖像處理的哈密瓜表面缺陷檢測[J]. 農(nóng)業(yè)工程學(xué)報,2021,37(1):223-232.

Li Xiaozhan, Ma Benxue, Yu Guowei, et al. Surface defect detection of Hami melon using deep learning and image processing[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(1): 223-232. (in Chinese with English abstract)

[9]李穎,陳懷亮. 機器學(xué)習(xí)技術(shù)在現(xiàn)代農(nóng)業(yè)氣象中的應(yīng)用. 應(yīng)用氣象學(xué)報,2020,31(3):257-266.

Li Ying, Chen Huailiang. Review of machine learning approaches for modern agrometeorology[J]. J Appl Meteor Sci, 2020, 31(3): 257-266. (in Chinese with English abstract)

[10]Liu L Y, Jiang H M, He P C, et al. On the variance of the adaptive learning rate and beyond[C/OL]. ICLR, 2020. 2020-04-17. arXiv:1908.03265v3

[11]嚴美春,曹衛(wèi)星,羅衛(wèi)紅,江海東. 小麥發(fā)育過程及生育期機理模型的研究I. 建模的基本設(shè)想與模型的描述[J]. 應(yīng)用生態(tài)學(xué)報,2000,11(3):355-359.

Yan Meichun, Cao Weixing, Luo Weihong, et al. A mechanistic model of phasic and phenological development of wheat I. Assumption and description of the model[J]. Chinese Journal of Applied Ecology, 2000, 11(3): 355-359. (in Chinese with English abstract)

[12]蘇李君,劉云鶴,王全九. 基于有效積溫的中國水稻生長模型的構(gòu)建[J]. 農(nóng)業(yè)工程學(xué)報,2020,36(1):162-174.

Su Lijun, Liu Yunhe, Wang Quanjiu. Rice growth model in China based on growing degree days[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(1): 162-174. (in Chinese with English abstract)

[13]孟亞利,曹衛(wèi)星,周治國,等. 基于生長過程的水稻階段發(fā)育與物候期模擬模型[J]. 中國農(nóng)業(yè)科學(xué),2003,36(11):1362-1367.

Meng Yali, Cao Weixing, Zhou Zhiguo, et al. A process-based model for simulating phasic development and phenology in rice[J]. Scientia Agricultura Sinica, 2003, 36(11): 1362-1367. (in Chinese with English abstract)

[14]韓江洪,袁稼軒,衛(wèi)星,等. 基于深度學(xué)習(xí)的井下巷道行人視覺定位算法[J]. 計算機應(yīng)用,2019,39(3):688-694.

Han Jianghong, Yuan Jiaxuan, Wei Xing, et al. Pedestrian visual positioning algorithm for underground roadway based on deep learning[J]. Journal of Computer Applications, 2019, 39(3): 688-694. (in Chinese with English abstract)

[15]王東方,汪軍. 基于遷移學(xué)習(xí)和殘差網(wǎng)絡(luò)的農(nóng)作物病害分類[J]. 農(nóng)業(yè)工程學(xué)報,2021,37(4):199-207.

Wang Dongfang, Wang Jun. Crop disease classification with transfer learning and residue networks[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(4): 199-207. (in Chinese with English abstract)

[16]王靖宇,王霰禹,張科,等. 基于深度神經(jīng)網(wǎng)絡(luò)的低空弱小無人機目標檢測研究[J]. 西北工業(yè)大學(xué)學(xué)報,2018,36(2):258-263.

Wang Jingyu, Wang Xianyu, Zhang Ke, et al. Small UAV target detection model based on deep neural network[J]. Northwestern Polytechnical University, 2018, 36(2): 258-263. (in Chinese with English abstract)

[17]宋森森,賈振紅,楊杰,等. 結(jié)合Ostu閾值法的最小生成樹圖像分割算法[J]. 計算機工程與應(yīng)用,2019,55(9):178-183.

Song Sensen, Jia Zhenhong, Yang Jie, et al. Image segmentation algorithm of minimum spanning tree combined with ostu threshold method[J]. CEA, 2019, 55(9): 178-183. (in Chinese with English abstract)

[18]黃巧義,張木,李蘋,等. 支持向量機和最大類間方差法結(jié)合的水稻冠層圖像分割方法[J]. 中國農(nóng)業(yè)科技導(dǎo)報,2019,21(4):52-60.

Huang Qiaoyi, Zhang Mu, Li Ping, et al. Rice canopy image segmentation using support vector machine and otsus method[J]. Journal of Agricultural Science and Technology, 2019, 21(4): 52-60. (in Chinese with English abstract)

[19]劉帥兵,楊貴軍,景海濤,等.基于無人機數(shù)碼影像的冬小麥氮含量反演[J].農(nóng)業(yè)工程學(xué)報,2019,35(11):75-85.

Liu Shuaibing, Yang Guijun, Jing Haitao, et al.Retrieval of winter wheat nitrogen content based on UAV digital image[J].Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE),2019,35(11):75-85. (in Chinese with English abstract)

[20]王見,周勤,尹愛軍. 改進Otsu算法與ELM融合的自然場景棉桃自適應(yīng)分割方法[J]. 農(nóng)業(yè)工程學(xué)報,2018,34(14):173-180.

Wang Jian, Zhou Qin, Yin Aijun. Self-adaptive segmentation method of cotton in natural scene by combining improved Otsu with ELM algorithm[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(14): 173-180. (in Chinese with English abstract)

[21]岑海燕,朱月明,孫大偉,等. 深度學(xué)習(xí)在植物表型研究中的應(yīng)用現(xiàn)狀與展望[J]. 農(nóng)業(yè)工程學(xué)報,2020,36(9):1-16.

Cen Haiyan, Zhu Yueming, Sun Dawei, et al. Current status and future perspective of the application of deep learning in plant phenotype research[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(9): 1-16. (in Chinese with English abstract)

[22]張瑞青,李張威,郝建軍,等. 基于遷移學(xué)習(xí)的卷積神經(jīng)網(wǎng)絡(luò)花生莢果等級圖像識別[J]. 農(nóng)業(yè)工程學(xué)報,2020,36(23):171-180.

Zhang Ruiqing, Li Zhangwei, Hao Jianjun, et al. Image recognition of peanut pod grades based on transfer learning with convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(23): 171-180. (in Chinese with English abstract)

[23]郭敏鋼,宮鶴. 基于Tensorflow對卷積神經(jīng)網(wǎng)絡(luò)的優(yōu)化研究[J]. 計算機工程與應(yīng)用,2020,56(1):158-164.

Guo Mingang, Gong He. Optimization of convolutional neural network based on tensorflow[J]. CEA, 2020, 56(1): 158-164. (in Chinese with English abstract)

[24]王丹丹,何東健. 基于R-FCN深度卷積神經(jīng)網(wǎng)絡(luò)的機器人疏果前蘋果目標的識別[J]. 農(nóng)業(yè)工程學(xué)報,2019,35(3):156-163.

Wang Dandan, He Dongjian. Recognition of apple targets before fruits thinning by robot based on R-FCN deep convolution neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(3): 156-163. (in Chinese with English abstract)

[25]趙春江,文朝武,林森,等. 基于級聯(lián)卷積神經(jīng)網(wǎng)絡(luò)的番茄花期識別檢測方法[J]. 農(nóng)業(yè)工程學(xué)報,2020,36(24):143-152.

Zhao Chunjiang, Wen Chaowu, Lin Sen, et al. Tomato florescence recognition and detection method based on cascaded neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(24): 143-152. (in Chinese with English abstract)

[26]張小莉,程光,張慰慈. 基于改進深度卷積神經(jīng)網(wǎng)絡(luò)的網(wǎng)絡(luò)流量分類方法. 中國科學(xué):信息科學(xué),2021,51(1):56-74.

Zhang Xiaoli, Cheng Guang, Zhang Weici. Network traffic classification method based on improved deep convolutional neural network[J]. Sci Sin Inform, 2021, 51(1): 56-74. (in Chinese with English abstract)

[27]高耀東,侯凌燕,楊大利. 基于多標簽學(xué)習(xí)的卷積神經(jīng)網(wǎng)絡(luò)的圖像標注方法[J]. 計算機應(yīng)用,2017,37(1):228-232.

Gao Yaodong, Hou Lingyan, Yang Dali. Automatic image annotation method using multi-label learning convolutional neural network[J]. Journal of Computer Applications, 2017, 37(1): 228-232. (in Chinese with English abstract)

[28]王雨瀅,趙慶生,梁定康. 基于深度學(xué)習(xí)網(wǎng)絡(luò)的電氣設(shè)備圖像分類[J]. 科學(xué)技術(shù)與工程,2020,20(23):9491-9496.

Wang Yuying, Zhao Qingsheng,Liang Dingkan. Electrical equipment image classification based on deep learning network[J]. Science Technology and Engineering, 2020, 20(23): 9491-9496. (in Chinese with English abstract)

[29]黃雙萍,孫超,齊龍,等. 基于深度卷積神經(jīng)網(wǎng)絡(luò)的水稻穗瘟病檢測方法[J]. 農(nóng)業(yè)工程學(xué)報,2017,33(20):169-176.

Huang Shuangping, Sun Chao, Qi Long, et al. Rice panicle blast identification method based on deep convolution neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(20): 169-176. (in Chinese with English abstract)

[30]趙立新,侯發(fā)東,呂正超,等.基于遷移學(xué)習(xí)的棉花葉部病蟲害圖像識別[J].農(nóng)業(yè)工程學(xué)報,2020,36(7):184-191.

Zhao Lixin, Hou Fadong, Lü Zhengchao, et al.Image recognition of cotton leaf diseases and pests based on transfer learning[J].Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE),2020,36(7):184-191. (in Chinese with English abstract)

[31]王東方,汪軍. 基于遷移學(xué)習(xí)和殘差網(wǎng)絡(luò)的農(nóng)作物病害分類[J]. 農(nóng)業(yè)工程學(xué)報,2021,37(4):199-207.

Wang Dongfang,Wang Jun.Crop disease classification with transfer learning and residual networks[J].Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE),2021,37(4):199-207. (in Chinese with English abstract)

[32]王丹丹,何東健. 基于R-FCN深度卷積神經(jīng)網(wǎng)絡(luò)的機器人疏果前蘋果目標的識別[J]. 農(nóng)業(yè)工程學(xué)報,2019,35(3):156-163.

Wang Dandan, He Dongjian. Recognition of apple targets before fruits thinning by robot based on R-FCN deep convolution neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(3): 156-163. (in Chinese with English abstract)

[33]陸楷煜,夏春蕾,戴曙光,等. 特征融合在植物葉片識別中的應(yīng)用研究[J]. 軟件導(dǎo)刊,2020,19(10):71-75.

Lu Kaiyu, Xia Chunlei, Dai Shuguang, et al. Application research on multi-feature fusion in plant leaf recognition[J]. Software Guide, 2020, 19(10): 71-75. (in Chinese with English abstract)

[34]苗開超,周建平,陶鵬,等. 自適應(yīng)混合卷積神經(jīng)網(wǎng)絡(luò)的霧圖能見度識別[J]. 計算機工程與應(yīng)用,2020,56(10):205-212.

Miao Kaichao, Zhou Jianping, Tao Peng, et al. Visibility recognition of fog figure based on self-adaptive hybrid convolutional neural network[J]. CEA, 2020, 56(10): 205-212. (in Chinese with English abstract)

[35]羅會蘭,易慧. 基于迭代訓(xùn)練和集成學(xué)習(xí)的圖像分類方法[J]. 計算機工程與設(shè)計,2020,41(5):1301-1307.

Luo Huilan, Yi Hui. Image classification method based on iterative training and ensemble learning[J]. Computer Engineering and Design, 2020, 41(5): 1301-1307. (in Chinese with English abstract)

Image recognition for different developmental stages of rice by RAdam deep convolutional neural networks

Xu Jianpeng1,2, Wang Jie1,2, Xu Xiang1, Ju Shucun1,2

(1. Anhui Rural Comprehensive Economic Information Center, Hefei 230031, China; 2. Anhui Engineering Laboratory of Agricultural Ecological Big Data, Hefei 230031, China)

An improved Convolutional Neural Network (CNN) was proposed to replace the current manual observation of the rice development period for higher efficiency and accuracy. In this study, a 50-layer CNN image recognition model was established using the Rectified Adam (RAdam) optimizer. Five developmental stages of rice were selected for automatic detection, including the regreening, tillering, jointing, heading, and milk stages. Two cameras were installed in 12 test fields for two consecutive years, where two pre-set points were set in each test field. Images and videos of rice were taken continuously at 8:00 and 16:00 each day. Geometric transformations of the images were also used to increase the amount of input data. Finally, 35 422 classified images of rice developmental stages were obtained. The training and test datasets were divided at a ratio of 7:3, and the original 1 920×1 080 pixel images were resized to 224×224 pixels. Each image was then classified and labelled manually. The ExG factor combined with the Otsu threshold was utilized to segment the rice images, to avoid interference from factors in the rice field (water, soil, and garbage) with the characteristics of the rice development period. Strong robustness was obtained under changes in light and color, meeting the requirement of extracting the "green" characteristics of rice plant images. The parallel operation of the CNN was realized by TensorFlow on the GPU. Four pre-trained CNN models were selected for comparative experiments, including VGG16, VGG19, ResNet50, and Inception v3. The initial learning rate was set to 0.001. The training accuracies of the VGG16, VGG19, and Inception v3 network models were 99.46%, 94.36%, and 98.70%, respectively, whereas the verification accuracies were 94.76%, 89.43%, and 93.59%, respectively. The training accuracy of the ResNet50 network model was about 5% higher than that of the VGG19 network model, and also higher than those of the VGG16 and Inception v3 network models. The loss value of the ResNet50 network model was also about 90% lower than those of the other models. Thus, it was inferred that the ResNet50 model was better suited to identifying the key developmental stages of rice. Nevertheless, the accuracy and loss of the ResNet50 model varied greatly under the Adam and RAdam optimizers. The RAdam optimizer converged faster and more stably than Adam, although the time per training step was similar (11 s for Adam and 12 s for RAdam). Multiple experiments were performed on the batch size and learning rate to further evaluate the performance of the ResNet50 model. The training time was reduced by 737 s when the learning rate was set to 0.001 and the batch size to 32. Subsequently, 5 experiments were performed on the ResNet50 network model to train the datasets of rice images at different developmental stages. The accuracies of the training and validation sets were 99.53% and 97.66%, respectively, when training reached the 18th epoch, and remained stable as iterative training continued. The constructed CNN model can be expected to recognize rice images at different developmental stages, with an average recognition accuracy of 97.33%, high network stability, and fast convergence. The findings provide an effective way to automatically monitor the developmental stages of rice in intelligent agriculture.

image recognition; neural networks; models; rice; RAdam; ResNet50; developmental stage


Xu Jianpeng, Wang Jie, Xu Xiang, et al. Image recognition for different developmental stages of rice by RAdam deep convolutional neural networks[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(8): 143-150. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2021.08.016 http://www.tcsae.org

Received: 2020-01-02; Revised: 2021-03-10

Funding: Major Science and Technology Project of Anhui Province (202003A06020016); Science and Technology Boosting Economy 2020 Meteorological Industry Project (KJZLJJ202002)

First author: Xu Jianpeng, senior engineer, research interests include machine learning, agricultural informatization and agricultural meteorology. Email: 20333800@qq.com

doi: 10.11975/j.issn.1002-6819.2021.08.016; CLC number: S126; Document code: A; Article ID: 1002-6819(2021)-08-0143-08
