Research Data Details
Paper Title | Deep Multi-Layer Neural Network with Variable-Depth Output |
Publication Date | 2023-12-20 |
Indexing Category | SCI |
All Authors | Shiueng-Bien Yang, Ting-Wen Liang |
Author Order | Second author |
Corresponding Author | No |
Journal Name | International Journal of Pattern Recognition and Artificial Intelligence |
Volume | |
Peer Reviewed | Yes |
Issue | 37 |
Country/Region of Journal Publication | NATTWN-Republic of China (Taiwan) |
Publication Year | 2023 |
Publication Month | 12 |
Publication Format | Print |
Associated Project | None |
Publicly Available Document | |
Attachment | S021800142359022X.pdf |
[English Abstract]:
In this study, a deep multi-layer neural network (DMLNN) with variable-depth output (VDO), called VDO-DMLNN, is proposed for classification. Unlike the traditional DMLNN, for which the user must define the network architecture in advance, VDO-DMLNN is produced from the top down, layer by layer, until the classification error rate of VDO-DMLNN no longer decreases. The user thus does not need to define the depth of VDO-DMLNN in advance. The combination of the genetic algorithm (GA) and the self-organizing feature map (SOFM), called GA–SOFM, is proposed to automatically generate the weights and the proper number of nodes for each layer in VDO-DMLNN. In addition, the output nodes can be located at different levels of VDO-DMLNN rather than all being at the last layer, as in the traditional DMLNN. Thus, the average computing time required to recognize an input sample in VDO-DMLNN is less than that in the traditional DMLNN when both networks have the same classification error rate. Finally, VDO-DMLNN is compared with several state-of-the-art neural networks in the experiments.