In this study, we investigate the effectiveness of ResNet, a deep neural network architecture, as a deep learning approach to printed document source identification. ResNet is known for mitigating the vanishing gradient problem and learning highly representative features. We apply several ResNet variants, namely ResNet50, ResNet101, and ResNet152, as the backbone of our classification model and train them on a comprehensive dataset of microscopic printed images containing printing patterns from various source printers. We also incorporate Mix-up augmentation, a technique that generates virtual training samples by interpolating pairs of images and their labels, to further improve the performance and generalization of the model. The experimental results show that the ResNet101 and ResNet152 variants performed best at distinguishing printer sources from microscopic printed patterns. We also developed a mobile application to test the practical feasibility of our findings. In conclusion, this study lays the groundwork for a pre-trained model with sufficiently accurate identification performance that can be deployed on mobile devices to identify the printer source of a document.
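To make the interpolation idea behind Mix-up concrete, the following minimal PyTorch-style sketch shows how a batch of images and labels can be blended; the function name, the alpha value, and the use of one-hot labels are illustrative assumptions, not details taken from this study.

import torch

def mixup_batch(images, labels, num_classes, alpha=0.2):
    # Draw a mixing coefficient from a Beta(alpha, alpha) distribution
    # (alpha=0.2 is a common default, assumed here for illustration).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Pair each sample with a randomly chosen partner in the same batch.
    perm = torch.randperm(images.size(0))

    # Interpolate the images pixel-wise.
    mixed_images = lam * images + (1.0 - lam) * images[perm]

    # Interpolate the labels in one-hot form, producing soft targets.
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    mixed_labels = lam * one_hot + (1.0 - lam) * one_hot[perm]

    return mixed_images, mixed_labels

The mixed images and soft labels would then replace the original batch during training, which is what allows Mix-up to act as a regularizer and improve generalization.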