Volume 4, Number 4, 2007

Special Issue on Reliable Computing (pp. 325-412)

Regular Paper
An Imperfect-debugging Fault-detection Dependent-parameter Software Reliability Model
2007, vol. 4, no. 4, pp. 325-328, doi: 10.1007/s11633-007-0325-8
Abstract:
Software reliability growth models (SRGMs) incorporating the imperfect debugging and learning phenomena of developers have recently been developed by many researchers to estimate software reliability measures such as the number of remaining faults and software reliability. However, the model parameters of the fault content rate function and the fault detection rate function of these SRGMs are usually assumed to be independent of each other. In practice, this assumption may not hold, and it is worth investigating what happens when it does not. In this paper, we undertake such a study and propose a software reliability model that connects imperfect debugging and the learning phenomenon through a parameter common to the two functions, called the imperfect-debugging fault-detection dependent-parameter model. Software testing data collected from real applications are used to illustrate both the descriptive and the predictive power of the proposed model by determining the non-zero initial debugging process.
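As a rough illustration of the modeling idea (a generic NHPP imperfect-debugging formulation, not the paper's concrete functions), the mean value function m(t), the time-dependent fault content a(t) and a learning-curve detection rate b(t) are typically coupled as

```latex
\frac{dm(t)}{dt} = b(t)\bigl[a(t) - m(t)\bigr], \qquad
\frac{da(t)}{dt} = \alpha\,\frac{dm(t)}{dt}, \qquad
b(t) = \frac{b}{1 + \beta e^{-bt}},
```

where α is the fault introduction rate of imperfect debugging and β captures the learning phenomenon. The dependent-parameter idea replaces the usual assumption of independent a(t) and b(t) with a parameter shared by the two functions.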
Sliding Mode Control Design via Reduced Order Model Approach
B. Bandyopadhyay, Alemayehu G/Egziabher Abera, S. Janardhanan, Victor Sreeram
2007, vol. 4, no. 4, pp. 329-334, doi: 10.1007/s11633-007-0329-4
Abstract:
This paper presents a design of continuous-time sliding mode control for higher order systems via a reduced order model. It is shown that a continuous-time sliding mode control designed for the reduced order model gives similar performance for the higher order system. The method is illustrated by numerical examples. The paper also introduces a technique for the design of a sliding surface such that the system satisfies a cost-optimality condition when on the sliding surface.
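For context, a minimal sketch of the standard reaching-law design on a reduced order model (placeholder matrices A_r, B_r and gains c, k; not the paper's specific construction):

```latex
\dot{x}_r = A_r x_r + B_r u, \qquad s = c^{\mathsf T} x_r, \qquad
u = -\,(c^{\mathsf T} B_r)^{-1}\bigl[c^{\mathsf T} A_r x_r + k\,\mathrm{sgn}(s)\bigr], \quad k > 0,
```

which yields \dot{s} = -k\,\mathrm{sgn}(s) for the reduced model; the control designed this way is then applied to the full-order plant.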
Analyzing Effect of Demand Rate on Safety of Systems with Periodic Proof-tests
Manoj Kumar, A. K. Verma, A. Srividya
2007, vol. 4, no. 4, pp. 335-341, doi: 10.1007/s11633-007-0335-6
Abstract:
Quantitative safety assessment of safety systems plays an important role in decision making at all stages of the system lifecycle, i.e., design, deployment and phase-out. Most safety assessment methods consider only system parameters, such as configuration, hazard rate, coverage, repair rate, etc., along with periodic proof-tests (or inspections). Not considering the demand rate gives a pessimistic safety estimate for applications with a low demand rate, such as nuclear power plants, chemical plants, etc. In this paper, a basic model of IEC 61508 is used. The basic model is extended to incorporate process demand and the behavior of an electronic- and/or computer-based system following diagnosis or proof-test. A new safety index, probability of failure on actual demand (PFAD), based on the extended model and the demand rate, is proposed. Periodic proof-testing makes the model semi-Markovian, so a piecewise continuous-time Markov chain (CTMC) based method is used to derive the mean state probabilities of elementary or aggregated states. Methods to determine the probability of failure on demand (PFD) (IEC 61508) and PFAD based on these state probabilities are described. In an example, the safety indices PFD and PFAD are compared.
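For context, the standard IEC 61508 low-demand relation that PFAD extends (shown for a 1oo1 channel with dangerous undetected failure rate λ_DU and proof-test interval T_p; the paper's PFAD expression itself is not reproduced here):

```latex
\mathrm{PFD}_{\mathrm{avg}} = \frac{1}{T_p}\int_0^{T_p} \mathrm{PFD}(t)\,dt \;\approx\; \frac{\lambda_{DU}\,T_p}{2}.
```

PFAD, as described in the abstract, additionally brings the process demand rate into the index explicitly.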
A New Subdivision Algorithm for the Bernstein Polynomial Approach to Global Optimization
P. S. V. Nataraj, M. Arounassalame
2007, vol. 4, no. 4, pp. 342-352, doi: 10.1007/s11633-007-0342-7
Abstract:
In this paper, an improved algorithm is proposed for unconstrained global optimization to tackle non-convex nonlinear multivariate polynomial programming problems. The proposed algorithm is based on the Bernstein polynomial approach. Novel features of the proposed algorithm are that it uses a new rule for the selection of the subdivision point, modified rules for the selection of the subdivision direction, and a new acceleration device to avoid some unnecessary subdivisions. The performance of the proposed algorithm is numerically tested on a collection of 16 test problems. The results of the tests show the proposed algorithm to be superior to the existing Bernstein algorithm in terms of the chosen performance metrics.
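The algorithm rests on the Bernstein range-enclosure property: the minimum and maximum Bernstein coefficients of a polynomial on a box bound its range there, and subdivision tightens the bounds. A minimal univariate sketch in Python (illustrative only; the paper's subdivision and acceleration rules are not reproduced):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum(a[j] * x**j) on [0, 1].

    min(b) and max(b) enclose the range of p on [0, 1]; subdividing
    the interval tightens this enclosure, which is the basis of the
    Bernstein branch-and-bound approach to global optimization.
    """
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

# p(x) = 1 - 3x + 2x^2 has range [-0.125, 1.0] on [0, 1].
b = bernstein_coeffs([1.0, -3.0, 2.0])
print(min(b), max(b))  # -0.5, 1.0 -- a valid (if loose) enclosure
```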
Computational Analysis of Performance for Heterogeneous Integrated System with Test Automation
K. Krishna Mohan, A. Srividya, Ravikumar Gedela
2007, vol. 4, no. 4, pp. 353-358, doi: 10.1007/s11633-007-0353-4
Abstract:
Heterogeneity is inevitable in enterprises due to their varied input requirements, and the use of proprietary integration products increases enterprise costs. During integration, the focus is often found to address only the functional requirements, while the non-functional requirements are side-stepped in the initial stages of a project. Moreover, the use of proprietary integration products and non-standards-based integration platforms has given rise to inflexible integration infrastructures, resulting in adaptability concerns. Web services-based integration, built on open standards, is deemed to be the only feasible solution in such cases. This paper presents a performance analysis of enterprise integration in heterogeneous environments for distributed and transactional applications. The analysis is a step towards making intelligent decisions well in advance when choosing integration mechanisms/products to address the functional as well as the non-functional requirements, considering future integration needs.
Considering the Fault Dependency Concept with Debugging Time Lag in Software Reliability Growth Modeling Using a Power Function of Testing Time
V. B. Singh, Kalpana Yadav, Reecha Kapur, V. S. S. Yadavalli
2007, vol. 4, no. 4, pp. 359-368, doi: 10.1007/s11633-007-0359-y
Abstract:
Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon; still, new models are being proposed that can fit a greater number of reliability growth curves. Often, mathematical models are developed under the assumption that detected faults are immediately corrected. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique used, and so on. Thus, a detected fault need not be immediately removed, and removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and show how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the concept of a power function of testing time, we then propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed upon a failure being observed. Dependent faults, however, are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity and applicability.
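The delay-effect reinterpretation mentioned above can be made concrete with a standard example (not one of the paper's four new models): the delayed S-shaped SRGM arises from exponential detection combined with a logarithmic debugging time lag,

```latex
m_r(t) = m_d\bigl(t - \phi(t)\bigr), \qquad
m_d(t) = a\bigl(1 - e^{-bt}\bigr), \qquad \phi(t) = \tfrac{1}{b}\ln(1 + bt),
```

which gives m_r(t) = a[1 - (1 + bt)e^{-bt}], i.e., removal lags detection by the delay effect factor φ(t).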
Reliability Growth Modeling and Optimal Release Policy Under Fuzzy Environment of an N-version Programming System Incorporating the Effect of Fault Removal Efficiency
P. K. Kapur, Anshu Gupta, P. C. Jha
2007, vol. 4, no. 4, pp. 369-379, doi: 10.1007/s11633-007-0369-9
Abstract:
Failure of a safety-critical system can lead to big losses. Very high software reliability is required for automating the operation of systems such as aircraft controllers and nuclear reactor controller software systems. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using schemes such as fault recovery (recovery block scheme), fault masking (N-version programming (NVP)) or a combination of both (hybrid scheme). Such software incorporates the ability of the system to survive even when a failure occurs. Many researchers in the field of software engineering have done excellent work studying the reliability of fault-tolerant systems, though most of them consider stable system reliability. Few attempts have been made in reliability modeling to study the reliability growth of an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In that model, a proportion of the number of failures is taken as a measure of fault generation, whereas an appropriate measure of fault generation should be the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effects of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide measures of debugging effectiveness and the additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of the software to improve its performance in terms of competition and cost. In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss a fuzzy optimization technique for solving the problem with a numerical illustration.
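For context, the baseline reliability structure of an NVP system with majority voting (a textbook relation assuming three independent versions of reliability R_v and a perfect voter; not the paper's growth model):

```latex
R_{3\mathrm{VP}} = R_v^3 + 3R_v^2(1 - R_v),
```

i.e., the system succeeds when at least two of the three versions succeed. The proposed SRGM makes R_v grow with testing while accounting for imperfect debugging and error generation.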
Coverage Modeling and Reliability Analysis Using Multi-state Function
S. Prabhudeva, A. K. Verma
2007, vol. 4, no. 4, pp. 380-387, doi: 10.1007/s11633-007-0380-1
Abstract:
Fault tree analysis is an effective method for predicting the reliability of a system. It gives a pictorial representation and logical framework for analyzing reliability. It has also long been used as an effective method for the quantitative and qualitative analysis of the failure modes of critical systems. In this paper, we propose a new general coverage model (GCM) based on hardware independent faults. Using this model, an effective software tool can be constructed to detect, locate and recover faults in a faulty system. The model can also be applied, using failure mode effect analysis (FMEA), to identify the key components that can cause the failure of the system.
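As a small worked example of the coverage concept (a textbook two-unit standby system with coverage c and unit hazard rate λ; not the paper's GCM):

```latex
R(t) = e^{-\lambda t} + c\,\lambda t\,e^{-\lambda t},
```

where the first term is survival of the primary unit and the second is a covered failure (detected, located and recovered) followed by survival of the spare; with probability 1 - c the fault is uncovered and the system fails immediately.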
Software Operational Profile Based Test Case Allocation Using Fuzzy Logic
K. Saravana Kumar, Ravindra Babu Misra
2007, vol. 4, no. 4, pp. 388-395, doi: 10.1007/s11633-007-0388-6
Abstract:
Software operational profile (SOP) is used in software reliability prediction, software quality assessment, performance analysis of software, test case allocation, determination of when to stop testing, etc. Due to the limited data resources and the large effort required to collect and convert the gathered data into point estimates, software professionals are observed to be reluctant to develop the SOP. A framework is proposed to develop the SOP using fuzzy logic, which requires usage data in linguistic form. The resulting profile is named the fuzzy software operational profile (FSOP). Building on this work, this paper proposes a generalized approach for the allocation of test cases, in which the occurrence probabilities of operations obtained from the FSOP are combined with the criticality of the operations using a fuzzy inference system (FIS). Traditional methods for the allocation of test cases do not consider the application in which the software operates, which is intuitively incorrect. To solve this problem, allocation of test cases with respect to the software application using the FIS model is also proposed in this paper.
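A crude stand-in for the FIS-based allocation (the paper's fuzzy rule base is not reproduced; the product-style combination below is an assumption for illustration):

```python
def allocate_tests(ops, total_tests):
    """ops: dict name -> (occurrence_weight, criticality), both in [0, 1].

    Combines each operation's FSOP occurrence weight with its
    criticality into an allocation weight, then distributes a fixed
    test budget proportionally -- a simplified proxy for the paper's
    fuzzy inference system (FIS).
    """
    score = {name: occ * (0.5 + 0.5 * crit)  # criticality boosts the weight
             for name, (occ, crit) in ops.items()}
    norm = sum(score.values())
    return {name: round(total_tests * s / norm) for name, s in score.items()}

print(allocate_tests({"login": (0.6, 0.9), "report": (0.3, 0.4),
                      "admin": (0.1, 0.8)}, total_tests=100))
```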
Discrete Software Reliability Growth Modeling for Errors of Different Severity Incorporating Change-point Concept
D. N. Goswami, Sunil K. Khatri, Reecha Kapur
2007, vol. 4, no. 4, pp. 396-405, doi: 10.1007/s11633-007-0396-6
Abstract:
Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. In most of the existing research in the literature, it is assumed that a similar testing effort is required for each debugging effort. However, in practice, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard and complex. This categorization may be extended to n types of faults on the basis of severity. Although some existing research in the literature has incorporated the idea that the fault removal rate (FRR) differs for different types of faults, it assumes that the FRR remains constant over the entire testing period. On the contrary, it has been observed that as testing progresses, the FRR changes due to changes in testing strategy, skill, environment and personnel resources. In this paper, a general discrete SRGM is proposed for errors of different severity in software systems using the change-point concept. The models are then formulated for two particular environments and validated on two real-life data sets. The results show better fit and wider applicability of the proposed models to different types of failure datasets.
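A minimal form of the change-point idea for a discrete SRGM (illustrative; the paper's two environment-specific models are not reproduced):

```latex
m(n+1) - m(n) = b(n)\bigl[a - m(n)\bigr], \qquad
b(n) = \begin{cases} b_1, & n \le \tau, \\ b_2, & n > \tau, \end{cases}
```

where m(n) is the expected cumulative number of faults removed by the n-th test occasion, a is the initial fault content, and the FRR switches from b_1 to b_2 at the change-point τ.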
Fuzzy Logic Based Group Maturity Rating for Software Performance Prediction
A. K. Verma, Anil R, Om Prakash Jain
2007, vol. 4, no. 4, pp. 406-412, doi: 10.1007/s11633-007-0406-8
Abstract:
Driven by market requirements, software services organizations have adopted various software engineering process models (such as the capability maturity model (CMM), capability maturity model integration (CMMI), ISO 9001:2000, etc.) and practice the project management concepts defined in the project management body of knowledge. While this has definitely helped organizations bring some method into the software development madness, there is always a demand for comparing various groups within the organization in terms of their practice of these defined process models. Even though many metrics exist for comparison, given the variety of projects in terms of technology, life cycle, etc., finding a single metric that caters to this is a difficult task. This paper proposes a model for arriving at a group maturity rating within the organization. Considering the linguistic, imprecise and uncertain nature of software measurements, a fuzzy logic approach is used for the proposed model. Unhindered by barriers such as differences in technology or life cycle, the proposed model helps the organization compare its different groups with reasonable precision.
Formal Reduction of Interfaces to Large-scale Process Control Systems
Walter Hussak, Shuang-Hua Yang
2007, vol. 4, no. 4, pp. 413-421, doi: 10.1007/s11633-007-0413-9
Abstract:
A formal methodology is proposed to reduce the amount of information displayed to remote human operators at interfaces to large-scale process control plants of a certain type. The reduction proceeds in two stages. In the first stage, minimal reduced subsets of components, which give full information about the state of the whole system, are generated by determining functional dependencies between components. This is achieved by using a temporal logic proof obligation to check whether the state of all components can be inferred from the state of the components in a subset, in specified situations that the human operator needs to detect, with respect to a finite state machine model of the system and of human operator behavior. Generation of reduced subsets is automated with the help of a temporal logic model checker. The second stage determines the interconnections between components to be displayed in the reduced system, so that the natural overall graphical structure of the system is maintained. For this purpose, a formal definition of an aesthetic is given for the required subgraph of a graph representation of the full system containing the reduced subset of components. The methodology is demonstrated by a case study.
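An illustrative schema for the first-stage proof obligation (the function f_c and the notation are hypothetical, not the paper's formulation): for each component c outside a candidate reduced subset R, the model checker verifies that c's state is functionally determined by the states of R in all reachable states,

```latex
\mathbf{AG}\,\bigl(\mathit{state}(R) = v \;\rightarrow\; \mathit{state}(c) = f_c(v)\bigr)
\quad \text{for every reachable valuation } v \text{ of } R.
```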
Modeling and Control of Time-pressure Dispensing for Semiconductor Manufacturing
Cong-Ping Chen, Han-Xiong Li, Han Ding
2007, vol. 4, no. 4, pp. 422-427, doi: 10.1007/s11633-007-0422-8
Abstract:
To improve the consistency of the adhesive amount dispensed by the time-pressure dispenser used in semiconductor manufacturing, a non-Newtonian fluid flow rate model is developed to represent and estimate the adhesive amount dispensed in each cycle. Taking gas compressibility into account, an intelligent model-based control strategy is proposed to compensate for the deviation of the dispensed adhesive amount from the desired one. Both simulations and experiments show that dispensing consistency is greatly improved by using the model-based control strategy developed in this paper.
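For context, the standard capillary flow relation for a power-law (Ostwald-de Waele) fluid, on which non-Newtonian dispensing models of this kind are commonly built (needle radius R, length L, pressure drop ΔP, consistency K, flow index n; the paper's exact model is not reproduced):

```latex
Q = \frac{\pi n R^3}{3n + 1}\left(\frac{\Delta P\,R}{2KL}\right)^{1/n}.
```

The dispensed amount per cycle is then the integral of Q over the pressure pulse, which is where gas compressibility can enter the estimate.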
A Feature-based Robust Digital Image Watermarking Against Desynchronization Attacks
Xiang-Yang Wang, Jun Wu
2007, vol. 4, no. 4, pp. 428-432, doi: 10.1007/s11633-007-0428-2
Abstract:
In this paper, a new content-based image watermarking scheme is proposed. The Harris-Laplace detector is adopted to extract feature points, which can survive a variety of attacks. Local characteristic regions (LCRs) are adaptively constructed based on scale-space theory. The LCRs are then mapped to a geometrically invariant space by using an image normalization technique. Finally, several copies of the digital watermark are embedded into the non-overlapped LCRs by quantizing the magnitude vectors of discrete Fourier transform (DFT) coefficients. By binding the watermark to the LCRs, resilience against desynchronization attacks can be readily obtained. Simulation results show that the proposed scheme is invisible and robust against various attacks, including common signal processing and desynchronization attacks.
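The magnitude quantization step can be illustrated with standard dithered quantization index modulation (QIM); this is the generic technique, not necessarily the paper's exact quantizer:

```latex
m_w = \Delta \cdot \mathrm{round}\!\left(\frac{m - w\Delta/2}{\Delta}\right) + \frac{w\Delta}{2}, \qquad w \in \{0, 1\},
```

where m is a DFT magnitude and Δ the quantization step; the detector recovers the embedded bit by finding which of the two quantizer lattices lies nearest to the received magnitude.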