Citation: Ann Smith, Fengshou Gu and Andrew D. Ball. An Approach to Reducing Input Parameter Volume for Fault Classifiers. International Journal of Automation and Computing, vol. 16, no. 2, pp. 199-212, 2019. DOI: 10.1007/s11633-018-1162-7

An Approach to Reducing Input Parameter Volume for Fault Classifiers

Abstract: As condition monitoring of systems continues to grow in both complexity and application, an overabundance of data is amassed and computational capabilities cannot keep pace with the resulting processing requirements. A means of establishing computable prognostic models that accurately reflect process condition, whilst alleviating computational burdens, is therefore essential. This is achievable by restricting the amount of redundant information fed into modelling algorithms. In this paper, a variable clustering approach is investigated that reorganises the harmonics of common diagnostic features in rotating machinery into a smaller number of heterogeneous groups reflecting the condition of the machine with minimal information redundancy. Naïve Bayes classifiers built from a reduced number of highly sensitive input parameters achieved superior classification power over higher-dimensional classifiers, demonstrating the effectiveness of the proposed approach. Furthermore, the generic capability of the parameters was evidenced through confirmatory factor analysis: parameters with superior deterministic power were identified alongside complementary, uncorrelated variables, while variables with little explanatory capacity could be eliminated, leading to further variable reductions. The information retained was also evaluated with Naïve Bayes classifiers, showing that classification rates remain sufficiently high when only the first few harmonics are used. Further gains were obtained by compressing the chosen envelope harmonic features: a Naïve Bayes classification model incorporating just two compressed input variables realised an 83.3% success rate, both an increase in classification rate and a substantial reduction in input volume compared with the former ten-parameter model.
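The pipeline summarised in the abstract, clustering correlated harmonic features to remove redundancy and then training a Naïve Bayes classifier on the reduced set, can be sketched as follows. This is a minimal illustration and not the authors' implementation: the correlation-based hierarchical clustering, the 0.5 distance threshold, the representative-selection rule and the synthetic data are all assumptions made for the example.

# Minimal sketch (not the paper's code): cluster correlated harmonic features,
# keep one representative per cluster, then fit a Gaussian Naive Bayes classifier.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for envelope-harmonic features: 300 samples, 10 harmonics, 3 fault classes.
n_samples, n_features = 300, 10
y = rng.integers(0, 3, n_samples)
X = rng.normal(size=(n_samples, n_features)) + y[:, None] * rng.normal(size=n_features)

# 1. Variable clustering: group features whose pairwise correlation is high,
#    so each group carries largely non-redundant information.
corr = np.corrcoef(X, rowvar=False)
dist = squareform(1.0 - np.abs(corr), checks=False)        # correlation distance, condensed form
clusters = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")

# 2. Keep one representative feature per cluster (here simply the first member of each group).
keep = [np.flatnonzero(clusters == c)[0] for c in np.unique(clusters)]
X_reduced = X[:, keep]

# 3. Compare Naive Bayes on the reduced inputs against the full feature set.
full_score = cross_val_score(GaussianNB(), X, y, cv=5).mean()
reduced_score = cross_val_score(GaussianNB(), X_reduced, y, cv=5).mean()
print(f"kept {len(keep)}/{n_features} features, "
      f"accuracy full={full_score:.3f}, reduced={reduced_score:.3f}")

With redundant inputs removed, the reduced classifier typically matches or exceeds the full-dimensional one on the synthetic data, which mirrors the behaviour the abstract reports for the real diagnostic features.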
