EMMC-CSA: Discussion of approaches to validation

Within the EMMC-CSA we have made some preliminary investigations of techniques for validation and their relation to materials model validation. These investigations point to the three-step approach to model validation formulated by Naylor and Finger [1967], which has been widely followed:

Step 1. Build a model that has high face validity.

Step 2. Validate model assumptions.

Step 3. Compare the model input-output transformations to corresponding input-output transformations for the real system.

How does this apply to physics-based or data-based models?

  1. Face validity
    1. A model that has face validity appears to be a reasonable imitation of a real-world system to people who are knowledgeable about that system.
    2. Face validity is tested by having users and people knowledgeable about the system examine model output for reasonableness and, in the process, identify deficiencies.
    3. Typically this might involve systematic studies of the variation of performance with input parameters – does the performance mimic realistic expectations? It seems that this is a natural part of model development, but exposure to a wider (independent) population is part of the validation process, perhaps particularly for academic software owners looking to expand the use of their models in industry.
  2. Validation of model assumptions. Assumptions made about a model generally fall into two categories: structural assumptions about how the system works, and data assumptions.
    1. Structural assumptions. Assumptions made about how the system operates and how it is physically arranged are structural assumptions. For example, how many servers are there in a fast-food drive-through lane, and if there is more than one, how are they utilized? Do the servers work in parallel, where a customer completes a transaction by visiting a single server, or does one server take orders and handle payment while another prepares and serves the order? Many structural problems in a model come from poor or incorrect assumptions. If possible, the workings of the actual system should be closely observed to understand how it operates. The system's structure and operation should also be verified with users of the actual system.
    2. Data assumptions. There must be a sufficient amount of appropriate data available to build a conceptual model and validate a model. Lack of appropriate data is often the reason attempts to validate a model fail. Data should be verified to come from a reliable source. A typical error is assuming an inappropriate statistical distribution for the data. The assumed statistical model should be tested using goodness-of-fit tests and other techniques; examples of goodness-of-fit tests are the Kolmogorov–Smirnov test and the chi-square test. Any outliers in the data should be checked. In relation to materials models there are two model types to consider in this context.
      1. Physics-based models. In this case there are structural model assumptions which can be well quantified; e.g. when creating models of atomic structures, we assume that atoms do not come too close to each other, or that atoms are quite often found in octahedral or tetrahedral surroundings. These assumptions then enter the models used to work with the input data. Sensitivity of the model output to these assumptions should be part of the validation process.
      2. Data-based models. The examples given suggest that this stage is applicable to ad-hoc (empirical) models. However, data assumptions are certainly a stage that needs to be applied to data-based models. We note that there is potentially a link between physics-based models and data-based models, in that the latter can use data generated by the former, so it is suggested that this link be considered when drawing up final recommendations for model validation.
  3. Validating input/output transformation
    1. The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions. Data recorded while observing the system must be available in order to perform this test. The model output that is of primary interest should be used as the measure of performance.
    2. We think that this seems to involve a systematic investigation of the transfer function of the model, considered as a black box, against experimental data. This is a good validation of a physics model; for example, DFT calculations can be validated by calculating minimum-energy configurations and lattice spacings. Each model type (discrete, mesoscopic, continuum) would need its own set of benchmark data to allow comparison of codes. Informal discussions with Volker and Erich suggest that this is the essential validation approach generally adopted by MDS, with the proviso that other companies may use different approaches, presumably determined by the software and the customer.
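The face-validity step above – systematic variation of performance with input parameters – can be made concrete with a small sketch. The "model" below is a purely hypothetical stand-in (an exponential thermal-softening law for yield stress), chosen only to illustrate checking that an output trend matches expert expectations; it is not any specific EMMC code.

```python
import math

def yield_stress(temperature_k, sigma0=500.0, t_ref=300.0):
    """Toy model (illustrative only): yield stress in MPa, decreasing with temperature in K."""
    return sigma0 * math.exp(-(temperature_k - t_ref) / 1000.0)

def is_monotonically_decreasing(values):
    """True if each value is strictly smaller than the one before it."""
    return all(a > b for a, b in zip(values, values[1:]))

# Sweep the input parameter over a physically relevant range.
temperatures = [300.0, 400.0, 500.0, 600.0, 700.0]
stresses = [yield_stress(t) for t in temperatures]

# A domain expert expects strength to fall as temperature rises;
# the sweep turns that qualitative expectation into a systematic check.
assert is_monotonically_decreasing(stresses)
```

In practice the sweep would cover every input parameter in turn, and the expected trends would be supplied by people knowledgeable about the real system rather than hard-coded by the developer.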
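The goodness-of-fit testing mentioned under data assumptions can be sketched as follows. This is a minimal, standard-library implementation of the one-sample Kolmogorov–Smirnov statistic (in real work one would typically call a library routine such as `scipy.stats.kstest`); the lattice-spacing sample is invented for illustration, and since the normal parameters are fitted from the same sample, the plain critical value used here is only approximate (a Lilliefors-type correction would be stricter).

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Compare the theoretical CDF against the empirical CDF just
        # before and just after each jump.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Hypothetical measured lattice spacings (angstroms), assumed normal.
sample = [3.52, 3.61, 3.55, 3.58, 3.49, 3.63, 3.57, 3.54, 3.60, 3.56]
mu = sum(sample) / len(sample)
sigma = math.sqrt(sum((x - mu) ** 2 for x in sample) / (len(sample) - 1))

d = ks_statistic(sample, lambda x: normal_cdf(x, mu, sigma))
d_crit = 1.36 / math.sqrt(len(sample))  # asymptotic 5% critical value
print(f"D = {d:.3f}, critical ~ {d_crit:.3f}, normal plausible: {d < d_crit}")
```

The same skeleton applies to any assumed distribution: swap in the appropriate CDF and reject the data assumption when D exceeds the critical value.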
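The input-output transformation test in step 3 can likewise be sketched: the model is treated as a black box and its outputs compared with reference (experimental) data for the same inputs. The element names and lattice-constant values below are illustrative placeholders, not real DFT or experimental results, and the 1% tolerance is an assumed acceptance criterion.

```python
# Reference data: element -> measured lattice constant in angstroms (hypothetical values).
reference = {"Al": 4.046, "Cu": 3.615, "Ni": 3.524}

# Model predictions for the same inputs (hypothetical values).
model_output = {"Al": 4.04, "Cu": 3.63, "Ni": 3.52}

def validate(model, ref, rel_tol=0.01):
    """Return, per input, the relative error and whether it is within tolerance."""
    report = {}
    for key, measured in ref.items():
        err = abs(model[key] - measured) / measured
        report[key] = (err, err <= rel_tol)
    return report

report = validate(model_output, reference)
for key, (err, ok) in sorted(report.items()):
    print(f"{key}: relative error {err:.3%} -> {'PASS' if ok else 'FAIL'}")
```

A benchmark set of this shape – agreed inputs, agreed reference outputs, agreed tolerances – is what each model type (discrete, mesoscopic, continuum) would need in order to allow comparison of codes.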

To comment and engage in the discussions please follow the instructions to register and join the Modelling and Validation Working Group Online and then visit the Groups Forum.
