Citation: Jin Xie, San-Yang Liu, Jia-Xi Chen. A Framework for Distributed Semi-supervised Learning Using Single-layer Feedforward Networks. International Journal of Automation and Computing.

A Framework for Distributed Semi-supervised Learning Using Single-layer Feedforward Networks

More Information
  • Author Bio:

    Jin Xie received the B. Sc. degree in mathematics from North China Electric Power University, China in 2013, and the Ph. D. degree in mathematics from Xidian University, China in 2020. He is currently a lecturer with the School of Mathematics and Statistics, Xidian University, China. His research interests include machine learning, neural networks, semi-supervised learning, and distributed learning.

    San-Yang Liu received the B. Sc. degree from Shaanxi Normal University, China in 1982, the M. Sc. degree from Xidian University, China in 1984, and the Ph. D. degree from Xi'an Jiaotong University, China in 1989. Currently, he is the director of the Institute of Industrial and Applied Mathematics and of the Center for Mathematics and Cross Science, Xidian University, China. His research interests include optimization methods and their applications, nonlinear analysis, and system modeling.

    Jia-Xi Chen received the M. Sc. and Ph. D. degrees in applied mathematics from Xidian University, China in 2018 and 2020, respectively. He is currently a lecturer at the Department of Applied Mathematics, Xidian University, China. His research interests include adaptive control, multi-agent systems, and Takagi-Sugeno (T-S) fuzzy systems.

  • Received Date: 2021-03-15
  • Accepted Date: 2021-08-19
  • This paper proposes a framework for manifold regularization (MR) based distributed semi-supervised learning (DSSL) using single-layer feedforward neural networks (SLFNNs). The proposed framework, denoted DSSL-SLFNN, builds on the SLFNN, the MR framework, and a distributed optimization strategy, and a series of algorithms are derived from it to solve DSSL problems. In DSSL problems, data consisting of labeled and unlabeled samples are distributed over a communication network, where each node has access only to its own data and can communicate only with its neighbors. In some scenarios, such problems cannot be solved by centralized algorithms. Under the DSSL-SLFNN framework, each node over the communication network initializes an SLFNN with the same basis functions for semi-supervised learning (SSL). All nodes then compute the globally optimal coefficients of the SLFNN using the distributed datasets and local updates. During the learning process, each node exchanges only local coefficients with its neighbors rather than raw data, so DSSL-SLFNN based algorithms work in a fully distributed fashion and preserve privacy. Finally, several simulations are presented to show the efficiency of the proposed framework and the derived algorithms.
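
    To make the workflow in the abstract concrete, below is a minimal Python sketch of this kind of computation, under stated assumptions: the shared random sigmoid basis, the k-NN graph Laplacian, the 4-node ring topology, and the weights lam and gam are all illustrative choices, and plain neighbor averaging of coefficients stands in for the paper's actual distributed optimization strategy. This is not the authors' algorithm; it only mirrors the communication pattern in which nodes exchange SLFNN coefficients and never raw samples.

    import numpy as np

    rng = np.random.default_rng(0)

    def hidden(X, W, b):
        # Shared SLFNN basis: sigmoid random features, identical at every
        # node because W and b are drawn from a common seed.
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    def laplacian(X, k=5):
        # k-NN graph Laplacian over a node's local labeled+unlabeled samples.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        A = np.exp(-d2)                          # Gaussian affinities
        far = np.argsort(d2, axis=1)[:, k + 1:]  # drop all but self + k nearest
        for i in range(len(X)):
            A[i, far[i]] = 0.0
        A = np.maximum(A, A.T)                   # symmetrize the graph
        return np.diag(A.sum(axis=1)) - A

    n_in, n_hid, n_lab, n_unlab = 2, 30, 10, 40
    W = rng.standard_normal((n_in, n_hid))  # same basis functions at all nodes
    b = rng.standard_normal(n_hid)
    lam, gam = 1e-2, 1e-1                   # illustrative regularization weights
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # hypothetical 4-node ring

    # Each node solves its local MR-regularized least-squares problem in closed
    # form; afterwards only the coefficient vector beta leaves the node.
    betas = []
    for _ in ring:
        Xl = rng.standard_normal((n_lab, n_in))    # private labeled samples
        yl = np.sign(Xl[:, :1])                    # toy labels
        Xu = rng.standard_normal((n_unlab, n_in))  # private unlabeled samples
        X = np.vstack([Xl, Xu])
        H, Hl = hidden(X, W, b), hidden(Xl, W, b)
        L = laplacian(X)
        M = Hl.T @ Hl + lam * np.eye(n_hid) + gam * H.T @ L @ H
        betas.append(np.linalg.solve(M, Hl.T @ yl))

    # Decentralized consensus: each node repeatedly averages coefficients with
    # its neighbors only; raw data never crosses the network.
    for _ in range(200):
        betas = [(betas[k] + sum(betas[j] for j in ring[k])) / (1 + len(ring[k]))
                 for k in ring]

    Plain neighbor averaging only approximates the centralized solution (on this topology it converges to the mean of the local solutions); the algorithms in the paper instead drive all nodes to the globally optimal coefficients through iterative distributed updates, while keeping the same neighbor-only communication pattern.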

  • [1]
    W. Jia, J. Gao, W. Xia, Y. Zhao, H. Min, J. T. Lu. A performance evaluation of classic convolutional neural networks for 2D and 3D palmprint and palm vein recognition. International Journal of Automation and Computing, vol. 18, no. 1, pp. 18–44, 2021. DOI: 10.1007/s11633-020-1257-9.
    [2]
    X. B. Fu, S. L. Yue, D. Y. Pan. Camera-based basketball scoring detection using convolutional neural network. International Journal of Automation and Computing, vol. 18, no. 2, pp. 266–276, 2021. DOI: 10.1007/s11633-020-1259-7.
    [3]
    K. Aukkapinyo, S. Sawangwong, P. Pooyoi, W. Kusakunniran. Localization and classification of rice-grain images using region proposals-based convolutional neural network. International Journal of Automation and Computing, vol. 17, no. 2, pp. 233–246, 2020. DOI: 10.1007/s11633-019-1207-6.
    [4]
    T. Matias, F. Souza, R. Araujo, C. H. Antunes. Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine. Neurocomputing, vol. 129, pp. 428–436, 2014. DOI: 10.1016/j.neucom.2013.09.016.
    [5]
    M. Li, D. H. Wang. Insights into randomized algorithms for neural networks: Practical issues and common pitfalls. Information Sciences, vol. 382-383, pp. 170–178, 2017. DOI: 10.1016/j.ins.2016.12.007.
    [6]
    D. H. Wang, M. Li. Stochastic configuration networks: Fundamentals and algorithms. IEEE Transactions on Cybernetics, vol. 47, no. 10, pp. 3466–3479, 2017. DOI: 10.1109/TCYB.2017.2734043.
    [7]
    M. Belkin, P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, vol. 56, no. 1−3, pp. 209–239, 2004. DOI: 10.1023/B:MACH.0000033120.25363.1e.
    [8]
    M. Belkin, P. Niyogi, V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, vol. 7, pp. 2399–2434, 2006.
    [9]
    O. Chapelle, B. Scholkopf, A. Zien. Semi-Supervised Learning, Cambridge, USA: MIT Press, 2006.
    [10]
    S. Melacci, M. Belkin. Laplacian support vector machines trained in the primal. Journal of Machine Learning Research, vol. 12, pp. 1149–1184, 2011.
    [11]
    Z. Q. Qi, Y. J. Tian, Y. Shi. Laplacian twin support vector machine for semi-supervised classification. Neural Networks, vol. 35, pp. 46–53, 2012. DOI: 10.1016/j.neunet.2012.07.011.
    [12]
    S. Scardapane, D. Comminiello, M. Scarpiniti, A. Uncini. A semi-supervised random vector functional-link network based on the transductive framework. Information Sciences, vol. 364−365, pp. 156–166, 2016. DOI: 10.1016/j.ins.2015.07.060.
    [13]
    S. R. Sain. The nature of statistical learning theory. Technometrics, vol. 38, no. 4, Article number 409, 1996.
    [14]
    J. Wang, X. Shen. On transductive support vector machines. Prediction &Discovery, vol. 27, no. 6, pp. 1463–1462, 2007.
    [15]
    D. P. Kingma, D. J. Rezende, S. Mohamed, M. Welling. Semi-supervised learning with deep generative models. In Proceedings of the 27th International Conference on Neural Information Processing Systems, MIT Press, Cambridge, USA, pp. 3581−3589, 2014.
    [16]
    Y. F. Li, J. T. Kwok, Z. H. Zhou. Cost-sensitive semi-supervised support vector machine. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, AAAI Press, Atlanta, USA, pp. 500−505, 2010.
    [17]
    M. F. A. Hady, F. Schwenker, G. Palm. Semi-supervised learning for tree-structured ensembles of RBF networks with Co-training. Neural Networks, vol. 23, no. 4, pp. 497–509, 2010. DOI: 10.1016/j.neunet.2009.09.001.
    [18]
    T. Miyato, S. I. Maeda, M. Koyama, S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 8, pp. 1979–1993, 2018. DOI: 10.1109/TPAMI.2018.2858821.
    [19]
    K. Avrachenkov, V. S. Borkar, K. Saboo. Distributed and asynchronous methods for semi-supervised learning. In Proceedings of the 13th International Workshop on Algorithms and Models for the Web-Graph, Springer, Montreal, Canada, pp. 34−46, 2016.
    [20]
    X. Y. Chang, S. B. Lin, D. X. Zhou. Distributed semi-supervised learning with kernel ridge regression. Journal of Machine Learning Research, vol. 18, no. 1, pp. 1493–1514, 2017.
    [21]
    P. C. Shen, X. Du, C. G. Li. Distributed semi-supervised metric learning. IEEE Access, vol. 4, pp. 8558–8571, 2016. DOI: 10.1109/ACCESS.2016.2632158.
    [22]
    S. Scardapane, R. Fierimonte, P. Di Lorenzo, M. Panella, A. Uncini. Distributed semi-supervised support vector machines. Neural Networks, vol. 80, pp. 43–52, 2016. DOI: 10.1016/j.neunet.2016.04.007.
    [23]
    R. Fierimonte, S. Scardapane, A. Uncini, M. Panella. Fully decentralized semi-supervised learning via privacy-preserving matrix completion. IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 11, pp. 2699–2711, 2017. DOI: 10.1109/TNNLS.2016.2597444.
    [24]
    B. Güler, A. S. Avestimehr, A. Ortega. Privacy-aware distributed graph-based semi-supervised learning. In Proceedings of the 29th International Workshop on Machine Learning for Signal Processing, IEEE, Pittsburgh, USA, pp. 1−6, 2019.
    [25]
    J. Xie, S. Y. Liu, H. Dai. A distributed semi-supervised learning algorithm based on manifold regularization using wavelet neural network. Neural Networks, vol. 118, pp. 300–309, 2019. DOI: 10.1016/j.neunet.2018.10.014.
    [26]
    W. Ai, W. S. Chen, J. Xie. A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights. Information Sciences, vol. 373, pp. 404–418, 2016. DOI: 10.1016/j.ins.2016.09.016.
    [27]
    J. Xie, W. S. Chen, H. Dai. Distributed cooperative learning algorithms using wavelet neural network. Neural Computing and Applications, vol. 31, no. 4, pp. 1007–1021, 2019. DOI: 10.1007/s00521-017-3134-1.
    [28]
    J. X. Chen, J. M. Li. Global FLS-based consensus of stochastic uncertain nonlinear multi-agent systems. International Journal of Automation and Computing, vol. 18, no. 5, pp. 826−837.
    [29]
    J. Xie, S. Y. Liu, H. Dai, Y. Rong. Distributed semi-supervised learning algorithms for random vector functional-link networks with distributed data splitting across samples and features. Knowledge-Based Systems, vol. 195, Article number 105577, 2020. DOI: 10.1016/j.knosys.2020.105577.
    [30]
    Y. H. Chen, B. Yang, J. W. Dong. Time-series prediction using a local linear wavelet neural network. Neurocomputing, vol. 69, no. 4−6, pp. 449–465, 2006. DOI: 10.1016/j.neucom.2005.02.006.
    [31]
    D. S. Broomhead, D. Lowe. Radial basis functions, multi-variable functional interpolation and adaptive networks. Advances in Neural Information Processing Systems, vol. 4148, pp. 728–734, 1988.
    [32]
    K. Z. Mao. RBF neural network center selection based on Fisher ratio class separability measure. IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1211–1217, 2002. DOI: 10.1109/TNN.2002.1031953.
    [33]
    R. Olfati-Saber, J. A. Fax, R. M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007. DOI: 10.1109/JPROC.2006.887293.
    [34]
    J. Lu, C. Y. Tang. Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case. IEEE Transactions on Automatic Control, vol. 57, no. 9, pp. 2348–2354, 2012. DOI: 10.1109/TAC.2012.2184199.
    [35]
    S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends ® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011. DOI: 10.1561/2200000016.
    [36]
    A. H. Sayed. Adaptive networks. Proceedings of the IEEE, vol. 102, no. 4, pp. 460–497, 2014. DOI: 10.1109/JPROC.2014.2306253.
  • 加载中
