Student: HSIEH, CHENG-HUA (謝承樺)
Thesis title: The Plant Phenotyping Analysis System using Point Cloud Data (應用點雲影像之植物表型分析系統)
Advisor: CHEN, DING-HORNG (陳定宏)
Committee members: SUN, YONG-NIAN (孫永年); KE, JIAN-QUAN (柯建全)
Oral defense date: 2020-07-28
Degree: Master
Institution: Southern Taiwan University of Science and Technology
Department: Department of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2020
Academic year of graduation: 108
Language: Chinese
Number of pages: 56
Keywords (Chinese): 植物表型分析系統、點雲影像、葉片分割、深度學習
Keywords (English): Plant Phenotyping System; Point Cloud; Leaf Segmentation; Deep Learning
With advances in automation and computer image recognition, high-throughput plant phenotyping systems have begun to incorporate these techniques, acquiring key plant-growth information through non-destructive, non-invasive measurement. This thesis acquires 3D point-cloud images of Arabidopsis with a depth camera and designs a plant phenotyping analysis system around them. The system pipeline comprises pre-processing, leaf segmentation, leaf tracking, phenotypic parameter calculation, and 3D model reconstruction, and outputs a model file together with a JSON file of phenotypic parameters; combined with the plant-growth observation chamber of Hipoint (海博特), the results can be browsed by the user through a Web interface.
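The pipeline described above can be sketched as a sequence of stages ending in a JSON export. The stage and field names below are illustrative assumptions that mirror the thesis chapters, not the author's actual code.

```python
import json

def preprocess(cloud):
    """Filter noise points and smooth the point-cloud surface (placeholder)."""
    return [p for p in cloud if p is not None]

def segment_leaves(cloud):
    """Split the plant cloud into per-leaf clouds (Mask R-CNN in the thesis)."""
    return {"leaf_1": cloud}

def track_leaves(leaves, previous):
    """Match each leaf to its counterpart in the previous time step."""
    return leaves

def compute_parameters(leaves):
    """Compute whole-plant and per-leaf phenotypic parameters (dummy values)."""
    return {"plant_height_mm": 42.0,
            "leaves": {name: {"area_mm2": 10.0 * len(pts)}
                       for name, pts in leaves.items()}}

def run_pipeline(cloud, previous=None):
    cloud = preprocess(cloud)
    leaves = track_leaves(segment_leaves(cloud), previous)
    # The thesis exports the parameters as a JSON file for the Web interface.
    return json.dumps(compute_parameters(leaves), indent=2)

result = run_pipeline([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)])
```

Each stage here is a stub; the point is the overall flow from raw cloud to a browsable JSON document.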
Leaf segmentation is one of the key steps of plant phenotyping analysis; its quality directly affects the subsequent calculation of leaf parameters. This thesis uses the depth information of the 3D point cloud, together with a proposed transfer function, to adjust the colors of the two-dimensional image, thereby enhancing the features available for learning, and trains a Mask R-CNN model to perform leaf segmentation. Experimental results, evaluated with the standard protocol of the Leaf Segmentation Challenge (LSC), show that the proposed algorithm improves leaf-segmentation accuracy.
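One way to read the depth-guided color adjustment described above is as a per-pixel transfer function that modulates image intensity by normalized depth. The linear blend below is a minimal sketch under assumed details; the thesis's actual transfer function is not reproduced here, and `alpha` is a hypothetical parameter.

```python
import numpy as np

def depth_color_adjust(image, depth, alpha=0.5):
    """Modulate a 2D image by normalized depth to emphasize leaf boundaries.

    image: (H, W, 3) float array with values in [0, 1]
    depth: (H, W) float array (e.g., millimeters from the depth camera)
    alpha: blend weight for the depth term (assumed parameter)
    """
    # Normalize depth to [0, 1]; the epsilon guards against a flat depth map.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    # Closer points (small d) are brightened and farther points darkened,
    # so neighboring leaves at different heights separate more clearly.
    gain = 1.0 + alpha * (1.0 - 2.0 * d)
    return np.clip(image * gain[..., None], 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)
dep = np.array([[100.0, 100.0], [200.0, 200.0]])
out = depth_color_adjust(img, dep)
```

The adjusted image would then be fed to Mask R-CNN in place of the plain RGB frame.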
In discussion with plant-phenotyping researchers, this study defined 28 phenotypic parameters, divided into whole-plant parameters and single-leaf parameters. The whole-plant parameters cover plant appearance, angle, area, and height; the single-leaf parameters cover leaf geometry, area, and angle. They also include phenotypic parameters commonly seen in existing systems, such as leaf area and plant height. This study derives the formula for every parameter and computes it using 3D point-cloud image analysis; the results are calibrated against standard models, and the comparisons show errors within 10% for all parameters.
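As an illustration of how such parameters can be derived from a point cloud, the sketch below computes two of the common ones mentioned above: plant height as the vertical extent of the cloud, and projected leaf area via a 2D convex hull. The convex-hull approach and all function names are assumptions for illustration, not the thesis's exact formulas.

```python
import numpy as np

def _convex_hull_2d(pts):
    """Andrew's monotone-chain convex hull for (x, y) points, returned CCW."""
    pts = sorted(map(tuple, pts))
    def build(seq):
        h = []
        for p in seq:
            # Pop while the last two hull points and p make a non-left turn.
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]

def plant_height(points):
    """Height as the z-extent of the cloud (assumes z is the vertical axis)."""
    z = points[:, 2]
    return float(z.max() - z.min())

def projected_leaf_area(points):
    """Area of the cloud projected onto the ground plane, via the shoelace
    formula over its 2D convex hull."""
    hull = _convex_hull_2d(points[:, :2])
    x = np.array([p[0] for p in hull])
    y = np.array([p[1] for p in hull])
    return float(0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))))

pts = np.array([[0, 0, 0], [1, 0, 5], [1, 1, 10], [0, 1, 2], [0.5, 0.5, 7]], float)
h = plant_height(pts)         # z-extent of the cloud
a = projected_leaf_area(pts)  # area of the projected convex hull
```

In practice the thesis calibrates such computed values against standard models, which is how the sub-10% error bound is verified.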

With the advancement of image processing and computer vision technology, high-throughput plant phenotyping systems have become an important tool for understanding the growth process of plants. This thesis proposes a plant phenotyping analysis system based on 3D point cloud data. The system pipeline comprises pre-processing, leaf segmentation, leaf tracking, 3D model reconstruction, and phenotypic parameter calculation.
Leaf segmentation is one of the key steps of this system. In this thesis, we adjust the colors of the two-dimensional image based on the depth information of the point cloud and a proposed transfer function, thereby strengthening leaf boundary features, and train a Mask R-CNN model to perform the segmentation. Evaluated with the standard leaf-segmentation protocol, the proposed algorithm effectively improves segmentation accuracy. In addition, we defined several key plant growth parameters in discussion with botanical researchers, derived their calculation formulas, and verified the computed values against calibration models. The phenotypic parameters are exported in JSON format and can be imported into a user-friendly interface for browsing and comparing the data. Experimental results show that the errors of all proposed phenotypic parameters are less than 10%.
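The JSON export mentioned above might look like the following; the schema (field names, units, identifiers) is an illustrative assumption, since the actual format is not reproduced here.

```python
import json

# Hypothetical phenotypic-parameter record for one plant at one time point.
parameters = {
    "plant_id": "arabidopsis_001",
    "timestamp": "2020-07-28T10:00:00",
    "whole_plant": {"height_mm": 35.2, "projected_area_mm2": 1820.5},
    "leaves": [
        {"id": 1, "area_mm2": 210.3, "angle_deg": 42.1},
        {"id": 2, "area_mm2": 185.7, "angle_deg": 38.4},
    ],
}

# Serialize for the Web interface; indent makes the file human-readable.
text = json.dumps(parameters, indent=2)
```

A Web front end can then load one such file per time point and plot parameters over the growth period.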

Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1 Background and Motivation
1.2 Research Limitations
1.3 Thesis Organization
Chapter 2  Literature Review
2.1 Plant Phenotyping Systems
2.1.1 PHI: Phenome high-throughput investigator
2.1.2 RIPPS: RIKEN integrated plant phenotyping system
2.1.3 HHIS: High-throughput hyperspectral imaging system
2.1.4 PhenoTrac 4
2.2 Leaf Segmentation Methods
2.2.1 Nottingham: Segmentation with SLIC superpixels
2.2.2 IPK Gatersleben: 3D histogram-based segmentation
2.2.3 Wageningen: Leaf segmentation with watersheds
2.2.4 Leaf segmentation through the classification of edges
2.2.5 Deep leaf segmentation using synthetic data
Chapter 3  Plant Phenotyping Analysis System
3.1 System Overview
3.2 Point Cloud Pre-processing
3.2.1 K-dimensional tree
3.2.2 Noise Filtering
3.2.3 Point Cloud Surface Smoothing
3.3 Leaf Segmentation
3.3.1 Leaf Segmentation Pipeline
3.3.2 Color Adjustment
3.3.3 Image Annotation
3.3.4 Data Augmentation
3.3.5 Mask R-CNN
3.3.6 k-fold Cross-Validation
3.4 Leaf Tracking
3.4.1 Point Cloud Centroid Calculation
3.4.2 Leaf Alignment
3.5 Plant Model Reconstruction
3.5.1 Greedy Projection Algorithm
3.5.2 Delaunay Triangulation
3.6 Plant Phenotypic Parameters
3.6.1 Description of Phenotypic Parameters
Chapter 4  Experimental Results and Discussion
4.1 Leaf Segmentation Results
4.1.1 Evaluation Metrics for Leaf Segmentation
4.1.2 Evaluation Results
4.2 Verification of Phenotypic Parameter Calculation
4.3 User Interface
4.3.1 Interface Overview
4.3.2 Feature Description
Chapter 5  Conclusion
References