Data Mining Techniques to Automate Software Testing

Seema Phogat, Research Scholar, and Dr. Sumeet Gill, Research Supervisor, Singhania University

Abstract: The design of software tests is mostly based on the testers' expertise, while test automation tools are limited to the execution of pre-planned tests only. Evaluation of test outputs also demands considerable effort from human testers, who often have imperfect knowledge of the requirements specification. Not surprisingly, this manual approach to software testing results in heavy losses to the world's economy.

The costs of so-called "catastrophic" software failures (such as the Mars Polar Lander shutdown in 1999) are even harder to measure. In this paper, we demonstrate the potential use of data mining algorithms for automated induction of functional requirements from execution data. The induced data mining models of the tested software can be utilized for recovering missing and incomplete specifications, designing a minimal set of regression tests, and evaluating the correctness of software outputs when testing new, potentially flawed releases of the system.

To study the feasibility of the proposed approach, we have applied a novel data mining algorithm called Info-Fuzzy Network (IFN) to execution data of a general-purpose code for solving partial differential equations. After being trained on a relatively small number of randomly generated input-output examples, the model constructed by the IFN algorithm has shown a clear capability to discriminate between correct and faulty versions of the program.

Keywords: Automated Software Testing, Regression Testing, Input-Output Analysis, Info-Fuzzy Networks, Finite Element Solver.

Introduction:

Data mining

Data mining commonly involves four classes of tasks:
• Association rule learning – searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
• Clustering – the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data.
• Classification – the task of generalizing known structure to apply to new data. For example, an email program might attempt to classify an email as legitimate or spam. Common algorithms include decision tree learning, nearest neighbor, naive Bayesian classification, neural networks, and support vector machines.
• Regression – attempts to find a function which models the data with the least error.

Results validation

The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set.

Not all patterns found by the data mining algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish spam from legitimate emails would be trained on a training set of sample emails.

Once trained, the learned patterns would be applied to the test set of emails on which the algorithm had not been trained. The accuracy of these patterns can then be measured by how many emails they correctly classify. A number of statistical methods, such as ROC curves, may be used to evaluate the algorithm. If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, the final step is to interpret the learned patterns and turn them into knowledge.
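As an illustration of this train-and-validate workflow, the following sketch (not part of the original study) trains a decision tree classifier on one subset of synthetic data and measures its accuracy and ROC AUC on a held-out test set; the scikit-learn library, the synthetic data set, and all parameter values are illustrative assumptions.

```python
# Sketch of the validation workflow described above: train a classifier on one
# subset of the data, then evaluate it on a held-out test set it has never seen.
# Synthetic data stands in for a real email corpus.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Illustrative stand-in for "spam vs. legitimate" feature vectors and labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# Hold out 30% of the examples; the model never sees them during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier(max_depth=5, random_state=42)
model.fit(X_train, y_train)

# Patterns learned on the training set are applied to the unseen test set.
y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]

print("test accuracy:", accuracy_score(y_test, y_pred))
print("test ROC AUC :", roc_auc_score(y_test, y_score))
```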

Notable uses

Games

Since the early 1960s, the availability of oracles for certain combinatorial games, also called tablebases (e.g., for 3×3 chess with any beginning configuration, small-board dots-and-boxes, small-board hex, and certain endgames in chess, dots-and-boxes, and hex), has opened a new area for data mining: the extraction of human-usable strategies from these oracles. Current pattern recognition approaches do not seem to fully acquire the high level of abstraction required to be applied successfully.

Instead, extensive experimentation with the tablebases, combined with an intensive study of tablebase answers to well-designed problems and with knowledge of prior art (i.e., pre-tablebase knowledge), is used to yield insightful patterns. Berlekamp in dots-and-boxes and John Nunn in chess endgames are notable examples of researchers doing this work, though they were not and are not involved in tablebase generation.

Business

Data mining in customer relationship management applications can contribute significantly to the bottom line.

Rather than randomly contacting a prospect or customer through a call center or sending mail, a company can concentrate its efforts on prospects that are predicted to have a high likelihood of responding to an offer. More sophisticated methods may be used to optimize resources across campaigns so that one may predict to which channel and to which offer an individual is most likely to respond—across all potential offers. Additionally, sophisticated applications could be used to automate the mailing.

Once the results from data mining (potential prospect/customer and channel/offer) are determined, this "sophisticated application" can automatically send either an e-mail or regular mail. Finally, in cases where many people will take an action without an offer, uplift modeling can be used to determine which people will show the greatest increase in response if given an offer. Data clustering can also be used to automatically discover the segments or groups within a customer data set. Businesses employing data mining may see a return on investment, but they also recognize that the number of predictive models can quickly become very large.

Rather than one model to predict how many customers will churn, a business could build a separate model for each region and customer type. Then, instead of sending an offer to all people that are likely to churn, it may want to send offers only to a subset of those customers. Finally, it may want to determine which customers are going to be profitable over a window of time and send offers only to those that are likely to be profitable. In order to maintain this quantity of models, a business needs to manage model versions and move to automated data mining.

Data mining can also be helpful to human-resources departments in identifying the characteristics of their most successful employees. Information obtained, such as the universities attended by highly successful employees, can help HR focus recruiting efforts accordingly. Additionally, Strategic Enterprise Management applications help a company translate corporate-level goals, such as profit and margin share targets, into operational decisions, such as production plans and workforce levels. Another example of data mining, often called market basket analysis, relates to its use in retail sales.

If a clothing store records the purchases of customers, a data-mining system could identify those customers who favor silk shirts over cotton ones. Although some explanations of such relationships may be difficult to find, taking advantage of them is easier. The example deals with association rules within transaction-based data. Not all data are transaction based, and logical or inexact rules may also be present within a database. Market basket analysis has also been used to identify the purchase patterns of the Alpha consumer.

Alpha consumers are people who play a key role in connecting with the concept behind a product, then adopting that product, and finally validating it for the rest of society. Analyzing the data collected on this type of user has allowed companies to predict future buying trends and forecast supply demands. Data mining is a highly effective tool in the catalog marketing industry. Catalogers have a rich history of customer transactions covering millions of customers and dating back several years. Data mining tools can identify patterns among customers and help identify the customers most likely to respond to upcoming mailing campaigns.

Data mining for business applications is a component which needs to be integrated into a complex modeling and decision-making process. Reactive Business Intelligence (RBI) advocates a holistic approach that integrates data mining, modeling, and interactive visualization into an end-to-end discovery and continuous innovation process powered by human and automated learning. In the area of decision making, the RBI approach has been used to mine the knowledge progressively acquired from the decision maker and to self-tune the decision method accordingly.

An example of data mining related to an integrated-circuit production line is described in the paper "Mining IC Test Data to Optimize VLSI Testing." In this paper, the application of data mining and decision analysis to the problem of die-level functional test is described. Experiments mentioned in the paper demonstrate the ability of a system that mines historical die-test data to create a probabilistic model of patterns of die failure. These patterns are then utilized to decide, in real time, which die to test next and when to stop testing.

This system has been shown, based on experiments with historical test data, to have the potential to improve profits on mature IC products.

Science and engineering

In recent years, data mining has been used widely in the areas of science and engineering, such as bioinformatics, genetics, medicine, education, and electrical power engineering. In the study of human genetics, an important goal is to understand the mapping relationship between inter-individual variation in human DNA sequences and variability in disease susceptibility.

In lay terms, the goal is to find out how changes in an individual's DNA sequence affect the risk of developing common diseases such as cancer. This is very important for improving the diagnosis, prevention, and treatment of these diseases. The data mining method used to perform this task is known as multifactor dimensionality reduction. In the area of electrical power engineering, data mining methods have been widely used for condition monitoring of high-voltage electrical equipment.

The purpose of condition monitoring is to obtain valuable information on the health status of the equipment's insulation. Data clustering techniques such as the self-organizing map (SOM) have been applied to vibration monitoring and analysis of transformer on-load tap changers (OLTCs). Using vibration monitoring, it can be observed that each tap-change operation generates a signal that contains information about the condition of the tap-changer contacts and the drive mechanisms. Obviously, different tap positions will generate different signals.

However, there was considerable variability amongst normal-condition signals for exactly the same tap position. SOM has been applied to detect abnormal conditions and to estimate the nature of the abnormalities. Data mining methods have also been applied to dissolved gas analysis (DGA) of power transformers. DGA, as a diagnostic technique for power transformers, has been available for many years. Methods such as SOM have been applied to analyze the data and to determine trends which are not obvious to standard DGA ratio methods such as the Duval Triangle.

A fourth area of application for data mining in science/engineering is within educational research, where data mining has been used to study the factors leading students to choose to engage in behaviors which reduce their learning and to understand the factors influencing university student retention. A similar example of the social application of data mining is its use in expertise finding systems, whereby descriptors of human expertise are extracted, normalized and classified so as to facilitate the finding of experts, particularly in scientific and technical fields.

In this way, data mining can facilitate institutional memory.

Info-Fuzzy Network Structure

A comprehensive description of the info-fuzzy network (IFN) methodology for data-driven induction of predictive models is provided in [26]. An info-fuzzy network (see Figure 1) has an "oblivious" tree-like structure, where the same input attribute is used across all nodes of a given layer (level). The network components include the root node, a changeable number of hidden layers (one layer for each selected input), and the target (output) layer representing the possible output values.

Each target node is associated with a value (class) in the domain of a target attribute. The target layer has no equivalent in decision trees, but a similar concept of category nodes is used in decision graphs. If the IFN model is aimed at predicting the values of a continuous target attribute, the target nodes represent disjoint intervals in the attribute range. There is no limit on the maximum number of output nodes in an info-fuzzy network. The sample network in Figure 1 has three output nodes, denoted by the numbers 1, 2, and 3. Hidden layer No. l consists of nodes representing conjunctions of values of the first l input attributes, which is similar to the definition of an internal node in a standard decision tree. For continuous inputs, the values represent intervals identified automatically by the network construction algorithm. In Figure 1, we have two hidden layers (No. 1 and No. 2). As in Oblivious Read-Once Decision Graphs [19], all nodes of a hidden IFN layer are labeled by the same feature. However, IFN extends the "read-once" restriction to continuous input attributes by allowing multiway splits of a continuous domain at the same network level.

The final (terminal) nodes of the network represent non-redundant conjunctions of input values that produce distinct outputs. The five terminal nodes in Figure 1 include (1,1), (1,2), 2, (3,1), and (3,2). If the network is induced from the execution data of a software system, each interconnection between a terminal node and a target node represents a possible output of a test case. For example, the connection (1,1) → 1 in Figure 1 means that we expect the output value of 1 for a test case where both input variables are equal to 1. The connectionist nature of IFN resembles the structure of a multi-layer neural network.
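To make this interpretation concrete, the following sketch (not taken from the paper) encodes terminal-to-target connections as a lookup table and uses it as a simple test oracle for a new release; apart from the connection (1,1) → 1 mentioned above, the expected outputs, the matching helper, and the run_program stand-in are illustrative assumptions.

```python
# Sketch: the terminal-node / target-node connections of an induced IFN model
# can serve as a test oracle for a new release. Only (1,1) -> 1 comes from
# Figure 1; the other expected outputs are arbitrary placeholders.
expected_output = {
    (1, 1): 1,   # from Figure 1: both inputs equal to 1 should produce output 1
    (1, 2): 2,   # placeholder
    (2,):   3,   # node 2 is terminal after the first layer; second input irrelevant
    (3, 1): 1,   # placeholder
    (3, 2): 3,   # placeholder
}

def expected_for(inputs):
    """Match a test case against the terminal nodes, longest conjunction first."""
    for k in range(len(inputs), 0, -1):
        node = tuple(inputs[:k])
        if node in expected_output:
            return expected_output[node]
    raise KeyError(f"no terminal node covers inputs {inputs}")

def check_release(run_program, test_cases):
    """Flag test cases where a new release disagrees with the induced model.
    run_program is a hypothetical stand-in for the tested software."""
    failures = []
    for case in test_cases:
        actual = run_program(case)
        if actual != expected_for(case):
            failures.append((case, expected_for(case), actual))
    return failures
```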

The idea of restricting the order of input attributes in graph-based representations of Boolean functions, called “function graphs”, has proven to be very useful for automatic verification of logic design, especially in hardware. Bryant [5] has shown that each Boolean function has a unique (up to isomorphism) reduced function graph representation, while any other function graph denoting the same function contains more vertices. Moreover, all symmetric functions can be represented by graphs where the number of nodes grows at most as the square of the number of input attributes.

As indicated by Kohavi, these properties of Boolean function graphs can be easily generalized to oblivious read-once decision graphs representing k-categorization functions. Consequently, the "read-once" structure of info-fuzzy networks makes them a natural modelling technique for testing complex software systems.

Review of related literature:

A recent study by the National Institute of Standards & Technology [32] found that "the national annual costs of an inadequate infrastructure for software testing is estimated to range from $22.2 to $59.5 billion" (p. ES-3), or a few tenths of a percent of the US gross domestic product. This number does not include the costs associated with catastrophic failures of mission-critical software (such as the $165 million Mars Polar Lander shutdown in 1999). According to another report, the U.S. Department of Defense alone loses over four billion dollars a year due to software failures. A program fails when it does not do what it is required to do. The purpose of testing a program is to discover faults that cause the system to fail, rather than to prove the program's correctness. Some even argue that program correctness can never be demonstrated through software testing.

The reasons for that include such complexity issues as the size of the input domain, the number of possible paths through the program, and wrong or incomplete specifications. A successful test should reveal a problem in software; tests that do not expose any faults are useless, since they hardly provide any indication that the program works properly. In developing a large system, the test of the entire application (system testing) is usually preceded by the stages of unit testing and integration testing. The activities of system testing include function testing, performance testing, acceptance testing, and installation testing.

The ultimate goal of function testing is to verify that the system performs its functions as specified in the requirements and that no undiscovered errors are left. Since proving code correctness is not feasible, especially for large software systems, practical testing is limited to a series of experiments showing the program's behavior in certain situations. Each choice of input testing data is called a test case. If the structure of the tested program itself is used to build a test case, this is called a white-box (or open-box) approach. Several white-box methods for automated generation of test cases are described in the literature.

For example, one technique uses mutation analysis to create test cases for unit and module testing. A test set is considered adequate if it causes all mutated (incorrect) versions of the program to fail. The idea of testing programs by injecting simulated faults into the code has been extended further in later work. Another approach presents a family of strategies for automated generation of test cases from Boolean specifications. However, as has been observed, modern software systems are too large to be tested by the white-box approach as a single entity. White-box testing techniques can work only at the subsystem level.

In function tests that are aimed at checking that a complex software system meets its specification, black-box (or closed-box) testing is typically used.

INPUT-OUTPUT ANALYSIS WITH INFO-FUZZY NETWORKS

The architecture of the IFN-based environment for automated input-output analysis is shown in Figure 2.

The Random Tests Generator (RTG) obtains the list of system inputs and outputs, along with their types (discrete, continuous, etc.), from the System Specification. No information about the functional requirements is needed, since the IFN algorithm automatically reveals input/output relationships from randomly generated training cases. In our case study (see the next section), we explore the effect of the number of randomly generated training cases on the predictive accuracy of the IFN model. Systematic, non-random approaches to training-set generation may also be considered.
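A minimal sketch of how RTG, together with the Test Bed described below, might be realized is given here; the dictionary-based input specification, the random_tests and execute helpers, and the legacy_system callable are assumptions introduced for illustration only.

```python
import random

# Hypothetical specification of the legacy system's inputs (name, type, range),
# in the spirit of the SAIO module; the format is an assumption for illustration.
input_spec = {
    "Side1": ("continuous", -10.0, 10.0),
    "Side2": ("continuous", -10.0, 10.0),
    "Side3": ("continuous", -10.0, 10.0),
    "Side4": ("continuous", -10.0, 10.0),
}

def random_tests(spec, n_cases, seed=0):
    """RTG: draw random combinations of values within each input's range."""
    rng = random.Random(seed)
    cases = []
    for _ in range(n_cases):
        case = {}
        for name, (vtype, low, high) in spec.items():
            if vtype == "continuous":
                case[name] = rng.uniform(low, high)
            else:
                case[name] = rng.randint(int(low), int(high))
        cases.append(case)
    return cases

def execute(cases, legacy_system):
    """Test Bed: feed each generated case to the system and record its outputs."""
    return [(case, legacy_system(case)) for case in cases]
```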

The IFN algorithm is trained on inputs provided by RTG and on outputs obtained from a legacy system by means of the Test Bed module. As indicated above, a separate IFN model is built for each output variable. The following information can be derived from each IFN model:
1. A set of input attributes relevant to the corresponding output.
2. Logical (if… then…) rules expressing the relationships between the selected input attributes and the corresponding output. The set of rules appearing at each terminal node represents the distribution of output values at that node. These rules can be used to determine the predicted (most probable) value of the output in the corresponding test case.
3. Discretization intervals of each continuous input attribute included in the network. In testing terms, each interval represents an "equivalence class", since for all values of a given interval the output values conform to the same distribution.
4. A set of non-redundant test cases. The terminal nodes of the network are converted into test cases, each representing a non-redundant conjunction of input values / equivalence classes and the corresponding distribution of output values.
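As an illustration of items 2–4 above, the following sketch (with hypothetical interval boundaries and output distributions) converts terminal nodes into non-redundant test cases, each labeled with the predicted, i.e. most probable, output value.

```python
# Each terminal node is a conjunction of equivalence-class intervals plus a
# distribution of observed output values. The numbers here are hypothetical;
# a real model would be induced by the IFN algorithm.
terminal_nodes = [
    # ({input: (low, high)}, {output value: relative frequency})
    ({"Side1": (-10.0, -2.5), "Side2": (-10.0, 0.0)}, {1: 0.85, 2: 0.15}),
    ({"Side1": (-2.5, 10.0),  "Side2": (-10.0, 0.0)}, {2: 0.90, 3: 0.10}),
    ({"Side1": (-10.0, 10.0), "Side2": (0.0, 10.0)},  {3: 1.00}),
]

def to_test_cases(nodes):
    """Convert terminal nodes into non-redundant test cases, each carrying the
    predicted (most probable) output value for its conjunction of intervals."""
    cases = []
    for intervals, distribution in nodes:
        predicted = max(distribution, key=distribution.get)
        cases.append({"intervals": intervals, "expected": predicted})
    return cases

for case in to_test_cases(terminal_nodes):
    print(case)
```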

Whenever the behavior of the tested application is characterized by some regular pattern, we expect the number of IFN-based test cases to be much smaller than the number of random cases used for training the network. This assumption is supported by the results of our case studies, including the one presented in this paper.

Legacy System (LS). This module represents a program, a component, or a system to be tested in subsequent versions of the software. We assume here that this module is data-driven, i.e., it should have a well-defined interface in terms of obtained inputs and returned outputs.

Examples of data-driven software range from real-time controllers to business-logic applications.

Specification of Application Inputs and Outputs (SAIO). Basic data on each input and output variable in the Legacy System interface includes the variable name, type (discrete, continuous, nominal, etc.), and a list or a range of possible values. Such information is generally available from requirements management and test management tools (e.g., Rational RequisitePro® or TestDirector®).

Random Tests Generator (RTG). This module generates random combinations of values in the range of each input variable.

Variable ranges are obtained from the SAIO module. The number of training cases to generate is determined by the user. The generated training cases are used by the Test Bed and the IFN modules.

Test Bed (TB). This module, sometimes called a "test harness", feeds the training cases generated by the RTG module to the Legacy System (LS). Execution of training cases can be performed by commercial test automation tools. The TB module obtains the LS outputs for each executed case and forwards them to the IFN module.

Info-Fuzzy Network Algorithm (IFN).

The input to the IFN algorithm includes the training cases randomly generated by the RTG module and the outputs produced by the Legacy System for each test case. IFN also uses the descriptions of variables stored by the SAIO module. The IFN algorithm is run repeatedly to find a subset of input variables relevant to each output and the corresponding set of non-redundant test cases. Actual test cases are generated from the automatically detected equivalence classes by using an existing testing policy (e.g., one test for each side of every equivalence class).
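The testing policy mentioned above, one test for each side of every equivalence class, could be sketched as follows; the interval endpoints and the epsilon offset are illustrative assumptions.

```python
# Sketch of the policy "one test for each side of every equivalence class":
# for each discretization interval of a continuous input, generate a value
# just inside each end of the interval.
def boundary_tests(intervals, eps=1e-3):
    tests = []
    for low, high in intervals:
        tests.append(low + eps)    # just inside the lower boundary
        tests.append(high - eps)   # just inside the upper boundary
    return tests

# Example: hypothetical equivalence classes detected for one continuous input.
side1_classes = [(-10.0, -2.5), (-2.5, 3.0), (3.0, 10.0)]
print(boundary_tests(side1_classes))
```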

We have applied the IFN algorithm to execution data of a small business application (Credit Approval), where the algorithm chose 165 representative test cases from a total of 11 million combinatorial tests. The Credit Approval application had mostly discrete inputs and two outputs (one of them binary). It was based on well-defined business rules implemented in fewer than 300 lines of code. In the next section, we evaluate the proposed approach on a much more complex program which, unlike Credit Approval, is characterized by real-valued inputs and outputs only.

A Case Study: Finite Element Solver

The Finite Element Method

The finite element method was introduced in the late 1960s for solving problems in mechanical engineering and quickly became a powerful tool for solving differential equations in mathematics, physics, and the engineering sciences [28][12][14]. The method consists of two stages. The first stage is finding a "functional", which is usually an integral that contains the input of the problem and an unknown function, and which is minimized by the solution of the differential equation.

The existence of such a functional is a necessary condition for implementing the finite element method. The second stage is partitioning the domain over which the equation is solved into "elements", usually triangles. This process is called "triangulation" and it provides the "finite elements". Over each element the solution is approximated by a low-degree polynomial (usually no more than fourth order). The coefficients of each polynomial are unknown, but they are determined at the end of the minimization process.
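As a brief sketch of how the minimization determines the coefficients (the standard finite element argument, not quoted from the paper), expanding the approximate solution in piecewise-linear nodal basis functions turns the functional into a quadratic form; the basis functions $N_i$, the coefficient vector $\mathbf{c}$, and the stiffness matrix $K$ are introduced here for illustration:

```latex
\varphi_h(x,y) = \sum_{i=1}^{n} c_i N_i(x,y), \qquad
F(\varphi_h) = \iint_D \left[ (\varphi_h)_x^2 + (\varphi_h)_y^2 \right] dx\,dy
             = \mathbf{c}^{\mathsf T} K \mathbf{c},
\qquad
K_{ij} = \iint_D \nabla N_i \cdot \nabla N_j \, dx\,dy .
```

Setting the derivatives of $F$ with respect to the coefficients of the interior (free) nodes to zero, while the boundary coefficients are fixed by the boundary condition, yields a sparse linear system $K_{II}\,\mathbf{c}_I = -K_{IB}\,\mathbf{c}_B$ that determines the unknown coefficients.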

The union of the polynomials associated with all the finite elements provides an approximate solution to the problem. As in finite differences, a finer "finite element mesh" will provide a better approximation. The choice of quadratic or cubic polynomials generates a nonlinear approximation over each element and usually improves the accuracy of the approximate solution.

The Case Study

Consider solving the Laplace equation

$\nabla^2 \varphi = \varphi_{xx} + \varphi_{yy} = 0$   (1)

over the unit square

$D = \{(x, y) \mid 0 \le x \le 1,\ 0 \le y \le 1\}$   (2)

where the solution equals some given $f(x, y)$ on the square's boundary. This problem, i.e., finding $\varphi(x, y)$ inside the square, has a unique solution. To find it we first define the functional

$F = \iint_D (\varphi_x^2 + \varphi_y^2)\,dx\,dy$   (3)

where $\varphi(x, y)$ is an arbitrary function which equals $f(x, y)$ on the square's boundary. This functional attains its minimum at $\varphi = \varphi_0$, where $\varphi_0$ is the exact solution. We now triangulate the square as shown in Figure 3.

Figure 3: Triangulation of the domain

A low-degree polynomial (say, linear) is chosen to approximate the solution over each triangle. The unknown coefficients of each polynomial are determined by minimizing the functional F (Eq. (3)) subject to the boundary conditions:

$\varphi = f$ on $B$   (4)

The Code

We applied the Unstructured Mesh Finite Element Solver (UMFES), a general finite element program for solving 2-D elliptic partial differential equations (e.g., Laplace's equation) over an arbitrary bounded domain. UMFES was implemented in about 3,000 lines of FORTRAN 77 code. The program consists of two parts. The first part obtains as input the domain over which the differential equation is solved and triangulates it using a fuzzy expert system technique.

The input for this part is a sequence of points which defines the domain's boundary counterclockwise. It is assumed that the domain is simply connected, i.e., that it does not include holes. If it does, the user must first divide it into several simply connected subdomains and then provide their boundaries separately. The output of the code's first part is a sequence of triangles whose union is the original domain. The triangulation is carried out in such a way that any two triangles whose intersection is not empty have either one complete side in common or a single vertex.

The approximate solution is calculated at the triangles' vertices, which are called nodes. The nodes are numbered so that any two nodes whose numbers are close are also geometrically close, as much as possible. Fulfilling this requirement guarantees that the coefficient matrix of the unknowns, whose inverse is calculated at the final stage of the numerical process, has its nonzero elements located close to the main diagonal, which is advantageous for large matrices.

The triangulation rules, which are based on a vast accumulation of knowledge and experience, were chosen to minimize the computational burden of the process and to increase the accuracy of the approximate solution for a prefixed number of nodes. For example, given an initial domain, we first divide its boundary into small segments by extending the initial sequence of points. We have now obtained the boundary nodes and are ready to start the triangulation. Next, we need to find the best choice of a single secant that will divide the given domain into a pair of subdomains.

The answer, which is a rule of thumb, is to draw a secant that lies completely within the domain, provides two subdomains with 'similar' numbers of nodes on their boundaries, and is 'as short as possible'. After drawing this 'optimal' secant, we treat each subdomain separately and repeat the process. The triangulation terminates when each subdomain is a triangle. Other rules guarantee that a triangle will hardly ever possess an angle close to 0° or to 180°. This allows us to use fewer triangles and increases the solution's accuracy.
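The secant-selection rule of thumb described above could be sketched as a simple scoring procedure; the sketch below assumes a convex domain (so that every chord between boundary nodes lies inside it), and the equal weighting of the balance and length criteria is an arbitrary illustrative choice.

```python
import math

def choose_secant(boundary, balance_weight=1.0, length_weight=1.0):
    """Pick a secant between two non-adjacent boundary nodes that balances the
    number of boundary nodes on each side and is as short as possible.
    Assumes a convex domain, so every such chord lies inside it."""
    n = len(boundary)
    best, best_score = None, float("inf")
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # adjacent along the closed boundary
            (x1, y1), (x2, y2) = boundary[i], boundary[j]
            # Boundary nodes on each side of the chord (endpoints counted on both).
            side_a = j - i + 1
            side_b = n - (j - i) + 1
            balance = abs(side_a - side_b)
            length = math.hypot(x2 - x1, y2 - y1)
            score = balance_weight * balance + length_weight * length
            if score < best_score:
                best, best_score = (i, j), score
    return best

# Example: boundary nodes of the unit square, listed counterclockwise.
square = [(0, 0), (0.5, 0), (1, 0), (1, 0.5), (1, 1), (0.5, 1), (0, 1), (0, 0.5)]
print(choose_secant(square))
```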

Two types of triangulation are commonly used: (a) the triangles are 'close' to right-angled isosceles, and (b) the triangles are 'close' to equilateral. We may also adjust the density of the triangles within the domain. A triangulation with a variable density is demonstrated in Figure 4, where the density function is

$\rho(x, y) = 1 + 2x^2 + 3y^2$   (5)

Consequently, the triangulation is such that the smallest number of triangles per unit area occurs at (0, 0) and the largest at (1, 1). The density function in Figure 3 is clearly $\rho(x, y) = \text{const}$.

The UMFES code can cope with any two-dimensional finite domain with a finite number of holes. Once the triangulation is completed, the code's second part applies the finite element method to solve any 2-D elliptic partial differential equation. The user now provides the second part of the input, which is the boundary conditions (B.C.), i.e., information about the solution over the domain's boundary.

Figure 4: Triangulation Using a Variable Density Function

The code can treat three types of B.C. The first is the Dirichlet type, where the solution $\varphi$ is a specified function over some portion of the boundary. The second type is called homogeneous Neumann and states that $\partial\varphi/\partial n = 0$ over a second portion of the boundary. The third is a mixed-type B.C., represented by the relation $\partial\varphi/\partial n + \alpha\varphi = \beta$, where $\alpha$ and $\beta$ are functions given along a third portion of the boundary. The density of the finite elements is specified by the user and should be based on some preliminary knowledge of the solution's features, such as the subdomains where it varies slowly or rapidly.

The Experiments

We have solved the case study by replacing $f(x, y)$ with constants.

Each problem was associated with Dirichlet B.C. defined by four constants (inputs), which specified the solution over the four sides of the unit square. The four inputs were denoted Side1 … Side4, respectively. The five outputs were taken as the solution's values at the fixed points defined by the following coordinate pairs: (1/4, 1/4), (3/4, 1/4), (3/4, 3/4), (1/4, 3/4), and (1/2, 1/2). Accordingly, we refer to the outputs as Out1 … Out5. The minimal distance from the boundary affects the predictive accuracy of the numerical solution.

Thus, we can expect the prediction for the last point (Out5) to be the least accurate one. The boundary conditions were chosen randomly between -10 and +10, and we chose a sufficiently fine mesh of elements to guarantee at least 1% accuracy at each output. Since the maximum and the minimum values of the solution are obtained on the boundary, the numerical values inside the domain must lie between -10 and 10, just like the boundary (input) values. Thus, each problem was represented by a vector of 9 real-valued components: 4 inputs and 5 outputs.
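A sketch of how the 9-component training vectors could be assembled is shown below; umfes_solve is a hypothetical wrapper around the FORTRAN solver (for example via f2py or a subprocess), and its name and interface are assumptions.

```python
import random

def make_training_vectors(n_cases, umfes_solve, seed=0):
    """Generate the 9-component vectors described above: four random Dirichlet
    boundary constants in [-10, 10] followed by the five solution values
    returned by the solver at the fixed interior points.
    umfes_solve is a hypothetical wrapper around the FORTRAN code."""
    rng = random.Random(seed)
    vectors = []
    for _ in range(n_cases):
        sides = [rng.uniform(-10.0, 10.0) for _ in range(4)]
        outs = umfes_solve(sides)          # -> [Out1, ..., Out5]
        vectors.append(sides + list(outs))
    return vectors
```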

Our first series of experiments was aimed at finding the minimal number of cases required to train the info-fuzzy network to a reasonable accuracy. In this and all subsequent experiments, we partitioned the values of all target attributes into 20 intervals of equal frequency each, assuming this would be sufficient precision for identifying the basic input-output relationships of the program. The algorithm was repeatedly trained on numbers of cases varying between 50 and 5,000, producing five networks representing the five outputs in each run.
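The 20 equal-frequency intervals of a target attribute can be obtained by cutting each output at its empirical quantiles; the following NumPy sketch uses random placeholder values in place of the actual solver outputs.

```python
import numpy as np

def equal_frequency_bins(values, n_bins=20):
    """Partition a continuous target attribute into n_bins intervals that each
    contain (approximately) the same number of training cases."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    # Assign each value the index of the interval it falls into (0 .. n_bins-1).
    labels = np.clip(np.searchsorted(edges, values, side="right") - 1, 0, n_bins - 1)
    return edges, labels

# Placeholder for one of the solver outputs (e.g. Out1) over the training cases.
out1 = np.random.uniform(-10, 10, size=5000)
edges, labels = equal_frequency_bins(out1)
print(np.bincount(labels))   # roughly equal counts per interval
```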

All training cases were generated randomly in the input space of the problem. The predictions of every induced model were compared to the actual output values on the training cases and on 500 separate validation cases using the Root Mean Square Error (RMSE).

Summary and conclusions

In this research, we present and evaluate an emerging DM-based methodology for automated input-output analysis of data-driven software systems. As indicated in [36], such systems include embedded (real-time) applications, application program interfaces (API), and form-based web applications.

The proposed method will address the following unsolved problems associated with regression testing of legacy systems:
• The method will automatically produce a set of non-redundant test cases covering the most common functional relationships existing in the software (including the corresponding equivalence classes). Existing methods and tools for automated software testing do not provide this capability.
• The method will be applicable to testing complex software systems, since it does not depend on the analysis of system code like the white-box methods of automated test case selection.
• No significant human effort is required to use the method. Current methods and techniques of test case design assume manual analysis of either the requirements or the code.
• Missing, outdated, or incomplete requirements are not an obstacle for the proposed methodology, since it learns the functional relationships automatically from the execution data itself. Current methods of test case generation assume the existence of detailed requirements, which may be unavailable or incomplete in many legacy systems.

Issues for ongoing research include inducing a single model based on multiple outputs, test set generation and reduction, and the application of the proposed methodology to large-scale software systems.

Future experiments will also include evaluation of the method's capability to detect various types of errors injected into the code of the tested application.

REFERENCES

Astra QuickTest from Mercury Interactive, http://astratryandbuy.mercuryinteractive.com
Beizer, B. Software Testing Techniques. 2nd Edition, Thomson, 1990.
Blackburn, M. R., Busser, R. D., Fontaine, J. S. Automatic Generation of Test Vectors for SCR-Style Specifications. Proceedings of the 12th Annual Conference on Computer Assurance (Gaithersburg, Maryland, June 1997).
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, P. J. Classification and Regression Trees. Wadsworth, 1984.
Bryant, R. E. Graph-Based Algorithms for Boolean Function Manipulation. IEEE Transactions on Computers, C-35-8, 677-691, 1986.
Cover, T. M., and Thomas, J. A. Elements of Information Theory. Wiley, 1991.
DeMillo, R. A., and Offutt, A. J. Constraint-Based Automatic Test Data Generation. IEEE Transactions on Software Engineering, 17, 9, 900-910, 1991.
Dustin, E., Rashka, J., Paul, J. Automated Software Testing: Introduction, Management, and Performance. Addison-Wesley, 1999.
Elbaum, S., Malishevsky, A. G., Rothermel, G. Prioritizing Test Cases for Regression Testing. Proc. of ISSTA '00, 102-112, 2000.
El-Ramly, M., Stroulia, E., Sorenson, P. From Run-time Behavior to Usage Scenarios: An Interaction-pattern Mining Approach. Proceedings of KDD-2002 (Edmonton, Canada, July 2002), ACM Press, 315-327.
Fayyad, U., and Irani, K. Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. Proc. Thirteenth Int'l Joint Conference on Artificial Intelligence (San Mateo, CA, 1993), 1022-1027.
Friedman, M., Rosenfeld, Y., Rabinovitch, A., and Thieberger, R. Finite Element Methods for Solving the Two Dimensional Schrodinger Equation. J. of Computational Physics, 26, 2, 1978.
Friedman, M., Schneider, M., Kandel, A. FIDES – Fuzzy Intelligent Differential Equation Solver. Avignon, 1987.
Friedman, M., Strauss, M., Amendt, P., London, R. A., and Glinsky, M. E. Two-Dimensional Rayleigh Model for Bubble Evolution in Soft Tissue. Physics of Fluids, 14, 5, 2002.
Hamlet, D. What Can We Learn by Testing a Program? Proc. of ISSTA '98, 50-52, 1998.
Hildebrandt, R., Zeller, A. Simplifying Failure-Inducing Input. Proc. of ISSTA '00, 135-145, 2000.
Kaner, C., Falk, J., Nguyen, H. Q. Testing Computer Software. Wiley, 1999.
Kohavi, R. Bottom-Up Induction of Oblivious Read-Once Decision Graphs. Proceedings of ECML-94, European Conference on Machine Learning (Catania, Italy, April 6-8, 1994), 154-169.
Kohavi, R., and Li, C.-H. Oblivious Decision Trees, Graphs, and Top-Down Pruning. Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 1071-1077, 1995.
Last, M., and Kandel, A. Automated Quality Assurance of Continuous Data. NATO Advanced Research Workshop on Systematic Organisation of Information in Fuzzy Systems (Vila Real, Portugal, October 25-27), 2001.
Last, M., and Kandel, A. Automated Test Reduction Using an Info-Fuzzy Network. To appear in Annals of Software Engineering, Special Volume on Computational Intelligence in Software Engineering, 2003.
Last, M., and Maimon, O. A Compact and Accurate Model for Classification. To appear in IEEE Transactions on Knowledge and Data Engineering, 2003.
Last, M. Online Classification of Nonstationary Data Streams. Intelligent Data Analysis, 6, 2, 129-147, 2002.
Last, M., Kandel, A., Maimon, O. Information-Theoretic Algorithm for Feature Selection. Pattern Recognition Letters, 22 (6-7), 799-811, 2001.
Last, M., Maimon, O., Minkov, E. Improving Stability of Decision Trees. International Journal of Pattern Recognition and Artificial Intelligence, 16, 2, 145-159, 2002.
Maimon, O., and Last, M. Knowledge Discovery and Data Mining – The Info-Fuzzy Network (IFN) Methodology. Kluwer Academic Publishers, Massive Computing, Boston, December 2000.
Mayrhauser, A. von, Anderson, C. W., Chen, T., Mraz, R., Gideon, C. A. On the Promise of Neural Networks to Support Software Testing. In W. Pedrycz and J. F. Peters (eds.), Computational Intelligence in Software Engineering. World Scientific, 3-32, 1998.
McDonald, B. H., and Wexler, A. Finite Element Solution of Unbounded Field Problems. IEEE Transactions on Microwave Theory and Techniques, MTT-20, 12, 1972.
Mikhlin, S. G. Variational Methods in Mathematical Physics. Oxford, Pergamon Press, 1965.
Minium, E. W., Clarke, R. B., Coladarci, T. Elements of Statistical Reasoning. Wiley, New York, 1999.
Nahamias, S. Production and Operations Analysis. 2nd ed., Irwin, 1993.
National Institute of Standards & Technology. The Economic Impacts of Inadequate Infrastructure for Software Testing. Planning Report 02-3, May 2002.
Pfleeger, S. L. Software Engineering: Theory and Practice. 2nd Edition, Prentice-Hall, 2001.
Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
Rao, C. R., and Toutenburg, H. Linear Models: Least Squares and Alternatives. Springer-Verlag, 1995.
Schroeder, P. J., and Korel, B. Black-Box Test Reduction Using Input-Output Analysis. Proc. of ISSTA '00, 173-177, 2000.
Vanmali, M., Last, M., Kandel, A. Using a Neural Network in the Software Testing Process. International Journal of Intelligent Systems, 17, 1, 45-62, 2002.
Voas, J. M., and McGraw, G. Software Fault Injection: Inoculating Programs against Errors. Wiley, 1998.
Weyuker, E., Goradia, T., and Singh, A. Automatically Generating Test Data from a Boolean Specification. IEEE Transactions on Software Engineering, 20, 5, 353-363, 1994.
Zienkiewicz, O. C. The Finite Element Method in Engineering Science. London, McGraw-Hill, 1971.
