Interpreting and explaining complex models, such as ensemble machine learning models for opinion mining, is essential to increase the transparency, fairness, and reliability of positive and negative opinion prediction results. Although ensemble learning models offer significant benefits, their lack of interpretability makes it hard to understand the rationale behind their predictions, creating a complex model-interpretation problem. There is also limited research on developing ensemble learning models that describe the model's internal function and behavior. In this paper, we propose a new approach to opinion mining with random density forest interpretation to provide explanatory power in opinion mining. Using Local Interpretable Model-agnostic Explanations (LIME), we further interpret the random density forest model that predicts opinion polarity in domain-specific opinion mining on online reviews of restaurants and hotels. The approach yields accurate results in terms of the contribution of opinion features to the overall mined opinion. In addition, we compare the probability density of opinion feature words and examine the contribution of essential features to the results. Model prediction using Shapley values, based on the interaction values of opinion feature words, reveals how strongly each feature influences the predicted positive or negative opinion polarity. Empirical results show that the proposed system explains its predictions efficiently.
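The Shapley-value attribution mentioned above can be illustrated with a small self-contained sketch. The word list, sentiment weights, and the additive scoring function below are hypothetical stand-ins for the paper's random density forest, chosen only to show how each opinion feature word's exact Shapley value is computed by averaging its marginal contribution over all subsets of the other words:

```python
from itertools import combinations
from math import factorial

# Hypothetical per-word sentiment weights (illustrative only, not from the paper).
WEIGHTS = {"delicious": 2.0, "friendly": 1.0, "slow": -1.5}

def score(words):
    """Toy opinion scorer: sum of sentiment weights of the words present."""
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def shapley(words):
    """Exact Shapley value of each word toward the review's opinion score."""
    n = len(words)
    phi = {}
    for w in words:
        others = [x for x in words if x != w]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k, times w's marginal contribution.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(list(subset) + [w]) - score(list(subset)))
        phi[w] = total
    return phi

review = ["delicious", "friendly", "slow"]
vals = shapley(review)
```

Because the toy scorer is additive, each word's Shapley value equals its weight, and the values sum to the full review score; for a real random forest the marginal contributions interact, which is what makes Shapley-based attribution informative.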
Can Tho University Journal of Science