Accurate detection and classification of wafer defects is critical in semiconductor fabrication, as it provides interpretable data for identifying the root causes of process problems. With this information, manufacturing engineers can carry out quality-management and yield-improvement activities to reduce the wafer defect rate. The traditional approach to wafer defect classification, conducted manually by expert engineers using computer-aided tools, is time-consuming and can be inaccurate. As a result, automated identification of wafer defects using deep learning algorithms has received substantial attention as a way to improve the performance of the detection process. Engineers can use timely defect-categorization results to identify corrective measures for manufacturing process variation and to prevent wafer defects, ultimately reducing quality costs and approaching the goal of zero defects. In this research, we evaluate the You Only Look Once (YOLO) architecture for classifying mixed-type wafer map defects. We train YOLOv8 classification models on 4,000 wafer maps with mixed-type defects, and the experimental results show that classification accuracy reaches 99.4%. The YOLOv8 image classification task proves to be highly efficient and beneficial for classifying mixed-type defects on semiconductor wafers.
ABSTRACT I
ABSTRACT (CHINESE) II
LIST OF CONTENTS III
LIST OF TABLES V
LIST OF FIGURES VI
CHAPTER 1. INTRODUCTION 1
1.1 BACKGROUND 1
1.2 DESCRIPTION OF THE OBJECTIVES 3
1.3 STRUCTURE OF THE THESIS 5
CHAPTER 2. RELATED WORK 6
2.1 WAFER MAP DEFECT PATTERN CLASSIFICATION 6
2.2 DIFFERENT GENERATIONS OF YOLO MODELS 8
CHAPTER 3. PROPOSED METHODOLOGY 13
3.1 WAFER MANUFACTURING OVERVIEW 13
3.1.1 Wafer Manufacturing Process 13
3.1.2 Wafer Testing and Wafer Bin Map 18
3.1.3 Wafer Defect Recognition 21
3.2 MIXED-TYPE WAFER DEFECT PATTERN DATASET 22
3.3 METHODOLOGY 25
3.3.1 YOLOv8 Classification Model 25
3.4 METRICS 29
3.4.1 Confusion Matrix 30
3.4.2 Top-1 Accuracy 31
3.5 WORKFLOW 32
3.5.1 Target Statement and Definition 33
3.5.2 Data Preparation 33
3.5.3 Model Selection 34
3.5.4 Analysis and Interpretation 35
3.6 EXPERIMENTAL ENVIRONMENTS 35
CHAPTER 4. RESULTS AND ANALYSIS 36
4.1 CLASSIFICATION RESULTS 36
4.1.1 Model Performance 36
4.1.2 Inference Results 44
4.2 COMPARISON OF MODEL PERFORMANCE 46
CHAPTER 5. CONCLUSIONS AND FUTURE WORK 48
5.1 CONCLUSIONS 48
5.2 FUTURE WORK 49
REFERENCES 50