People, tools, processes and data can be seen as the four pillars of a successful deployment of materials modelling in industry. During this meeting, we would like to develop strategies for these pillars in discussions with leading modelling experts from industry.
The objective of this meeting is therefore to gather a substantial number of experts from the different stakeholders interested in materials adoption/development, and to provide a platform for discussing industrial requirements as compared to state-of-the-art modelling techniques. The ambition of the workshop is to cover the most relevant aspects, which are critical to the widespread adoption of materials modelling techniques in industry. To this end, six thematic sessions are organized and will cover:
state-of-the-art modelling techniques and guidelines for further model developments;
the perspective of the European software owners/developers;
the economic impact of materials modelling on industrial innovation;
strategies for improving the two-way transfer of knowledge between academia and industry – i.e. Translators and their training requirements;
interoperability requirements and frameworks – e.g. ontologies – for the integration of models and software.
Finally, a special session will be dedicated to discussing the potential of artificial intelligence in the framework of materials modelling, with particular emphasis on the benefits for high-throughput simulations, big data and their mining – e.g. data-driven modelling – on the way towards Industry 4.0.
Organisation, Contact & Support
This is an “invitation-only” event. You will be contacted by the organisers. After you have confirmed your participation, you will receive the registration link.
Are you interested in software commercialisation and business models relevant to the field of materials modelling? EMMC would be pleased to invite you to a workshop where you can learn from a range of experts.
Experienced Software Owners will share their successful practices with the audience and will cover open-source, non-profit and commercial business models.
The workshop is aimed in particular at postdocs, academic and industrial R&D group leaders, and advanced PhD students who actively work on materials modelling software and codes and are looking into business models. The workshop will be a collaborative event, and we expect active interaction between impulse speakers, trainers and motivated participants. Hence, we would like to invite everyone with an interest in sharing their experience to participate and to apply for one of the 50 available places.
We know that atoms and molecules attract each other simply because we know that matter condenses around us. We also know that they repel each other at short range, as molecules cannot be pushed very close together. This means that any attempt to describe inter-molecular interactions needs a wall at short distances and a well at longer distances. A common wall-well model is the Lennard-Jones potential.
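Written out in its standard form, the Lennard-Jones potential combines exactly these two features in a single expression,

V(r) = 4\varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right],

where the r^{-12} term provides the steep repulsive wall, the r^{-6} term gives the attractive well of depth \varepsilon, and \sigma is the distance at which the potential crosses zero.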
When we do ab initio modelling with nuclei and electrons, the interactions between the nuclei, in the presence of the electrons, come from first principles and are calculated from the laws of physics. There is no need to think about attractions and repulsions separately. But when we go over from quantum mechanics (electronic models) to classical mechanics (atomistic models), we have to model both the sizes of the atoms and their mutual interactions. For this we have the several-decades-old molecular mechanics (MM) force fields (FFs), which still hold a strong position in all-atom (AA) molecular simulations. More sophisticated terms (polarizable, reactive, cross-terms, non-additive, three-body, etc.) are being developed worldwide, but most simulations are still performed with the simplest possible terms, as they are often very robust and established models in classical physics. Development is also in progress where machine learning techniques are applied to create accurate potential energy surfaces and force fields.
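To make "the simplest possible terms" concrete, a typical class-I force field writes the total energy as harmonic bonded terms plus pairwise non-bonded terms:

E = \sum_{\text{bonds}} k_b (r - r_0)^2
  + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\text{dihedrals}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right]
  + \sum_{i<j} \left( 4\varepsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12} - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right] + \frac{q_i q_j}{4\pi\epsilon_0 r_{ij}} \right).

The more sophisticated terms mentioned above are additions to, or replacements of, pieces of this expression.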
However, when we go from atomistic models to mesoscopic models, we no longer have such well-defined conceptual interaction blocks from which to construct a mesoscale force field. Mesoscopic particles are generally very much softer than atoms, thus requiring much simpler and softer potentials. Nearly all the internal degrees of freedom that we have in molecules are gone, except for artificial bonds to connect the beads and sometimes artificial angles for three neighbouring beads. A few types of mesoscale force fields exist, but they are better characterised as ad hoc: some very simple objects (spheres) have been fitted to roughly reproduce some experimental data. It is not at all clear how to construct a generally valid or transferable mesoscale force field. In the future there will be an increased need for mesoscale and coarse-grained simulations, as we move towards larger and more complex soft-matter systems simulated over longer times. But as long as we do not have accurate mesoscopic models, it is difficult to couple and link these models with continuum or more macroscopic models. In materials science it may simply be better to connect atomistic models directly with continuum models, for example using finite elements, and skip the third level of discrete models, namely the mesoscopic models.
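As one concrete example of such a soft interaction, the conservative force in dissipative particle dynamics (DPD) is just a linear repulsion that vanishes beyond a cut-off r_c:

F^{C}_{ij} = a_{ij} \left( 1 - r_{ij}/r_c \right) \hat{\mathbf{r}}_{ij} \quad \text{for } r_{ij} < r_c, \qquad F^{C}_{ij} = 0 \quad \text{otherwise}.

A single parameter a_{ij} per pair of bead types, finite even at complete overlap, is what makes the beads "soft" – and also part of what makes general validity or transferability hard to claim.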
We would like to initiate an informal discussion about mesoscopic discrete particle models. What is/are, in your opinion, the best model(s) for soft-particle mesoscopic simulations? Tell us about your experiences and visions!
Kersti Hermansson & Aatto Laaksonen – Uppsala University
Every one of us who writes computer programs to solve scientific and engineering problems knows that the coding itself takes only a small part of the total development time. What takes most of the time is the tedious debugging of the program before the programmer and the users are satisfied. Sometimes not all of the bugs are ever found, which may, or may not, have consequences.
There are many ways to carry out the debugging, from placing simple write statements in the code, to checking that calculated numbers are reasonable, to using debugging and optimization tools. Some programming languages are safer than others. In general, syntax errors are relatively easy to find and are often already caught by the compiler, while, for example, logical errors may take a long time to spot, as the program runs well and keeps producing results. A further class are runtime errors, for example when we attempt to divide by zero or operate on an undefined number while the program is running.
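A minimal sketch in Python (with hypothetical names) illustrates the difference: the commented-out logical error would happily keep producing wrong numbers, while explicit guards turn silent failures into loud ones:

    import math

    def mean_bond_length(lengths):
        # A logical error like the commented-out line below would run
        # happily and keep producing wrong results for any input size:
        # return sum(lengths) / 10
        if not lengths:
            raise ValueError("empty input would otherwise divide by zero")
        result = sum(lengths) / len(lengths)
        # Guard against undefined numbers (NaN/infinity) propagating silently.
        if not math.isfinite(result):
            raise ArithmeticError(f"non-finite mean: {result}")
        return result

    print(mean_bond_length([1.52, 1.54, 1.53]))  # 1.53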
We learn to find the errors (most of them, at least) in our own ways, by systematically or randomly searching for them. There exist many well-known and embarrassing errors, from computer chips being rather bad at maths, to GPS maps leading nowhere, or over a cliff, to satellites failing to reach orbit. Long ago, at the IBM laboratories, a post-doctoral colleague of ours shared a story from the lab he came from, where an office mate had changed a sign in the program he had been writing for the previous three months – not as a practical joke, but rather to make his life miserable in a tough and competitive environment.
With this blog post we would like you to tell us about your own experiences with bugs. What was the most fatal or curious bug you have had in your own software, or in software written by others? And how did you find it? Share your experiences with us!
Kersti Hermansson & Aatto Laaksonen – Uppsala University
This EMMC-CSA White Paper provides a basis for standards of modelling software development and addresses areas such as method description, assumptions, accuracy and limitations; testing requirements; version control; user documentation; and continuous support and issue resolution.
The document is based on the work already carried out in the context of the EMMC to drive the adoption of software quality measures, and to ensure sustainable implementation of this EMMC initiative. Given the high level of sophistication of each of the developments which solve particular aspects of the multi-physics/chemistry spectrum of materials modelling, the industrial usefulness of individual achievements requires integration into larger software systems. Thus, guidelines and standards are needed, which will enable the exploitation of these codes.
The major outcome is a set of guidelines for academic software developers creating materials modelling codes. In many cases, design decisions taken at an early stage have unforeseeable consequences for many years ahead. In this context, the white paper gives academic researchers a framework which paves the way for successful integration and industrial deployment of materials modelling. This goal is achieved by addressing a range of topics including model descriptions and software architectures, implementation, programming languages and deployment, intellectual property and license considerations, verification, testing, validation and robustness, organization of software development, metadata, user documentation, and support.
In version 2.0 an appendix with “Online resources to development of scientific software” has been added.
What is materials modelling good for? The webinar examines the impact materials modelling makes, both at the macro-economic and at the organisational level. In particular, the wide range of impact types and mechanisms will be discussed, based on evidence from surveys and interviews with users. It will be argued that a much wider potential remit for modelling should be considered than is commonly the case.
In the light of these impact mechanisms, ways of measuring and increasing impact are discussed. Setting and assessing impact levels is shown to be important, and in this context a maturity model will be introduced. Higher levels of maturity are associated with integration and optimisation and set the scene for modelling as a key factor impacting on digitalisation.
Modelling and simulation practitioners
Researchers and engineers
Key Learning Objectives
Scope of materials modelling
Macro-economic impact of the modelling field and micro-economic impact on organizations and their value chains
Impact types and mechanisms
Increasing impact by stronger integration of modelling
Within the EMMC-CSA we have made some preliminary investigations of validation techniques and their relation to materials model validation. This reveals a three-step approach to model validation, formulated by Naylor and Finger, that has been widely followed:
Step 1. Build a model that has high face validity.
Step 2. Validate model assumptions.
Step 3. Compare the model input-output transformations to corresponding input-output transformations for the real system.
How does this apply to physics-based or data-based models?
A model has face validity if it appears to be a reasonable imitation of a real-world system to people who are knowledgeable about that system.
Face validity is tested by having users and people knowledgeable about the system examine model output for reasonableness, and in the process identify deficiencies.
Typically this might involve systematic studies of the variation of performance with input parameters – does the performance mimic realistic expectations? This seems to be a natural part of model development, but exposure to a wider (independent) population is part of the validation process, perhaps particularly for academic software owners looking to expand the use of their models in industry.
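A minimal sketch of such a face-validity check (Python, with a hypothetical model function) might sweep one input parameter and verify that the output follows the physically expected trend:

    # Hypothetical face-validity check: sweep one input parameter and
    # verify that the output follows the expected (here monotonic) trend.
    def is_monotonically_increasing(model, param_values):
        outputs = [model(p) for p in param_values]
        return all(a < b for a, b in zip(outputs, outputs[1:]))

    # 'equation_of_state' stands in for any model exposed as input -> output;
    # for a simple fluid we expect pressure to rise with density.
    def equation_of_state(density):
        return 2.5 * density + 0.1 * density ** 2

    densities = [0.1 * k for k in range(1, 11)]
    print(is_monotonically_increasing(equation_of_state, densities))  # True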
Validation of model assumptions. Assumptions made about a model generally fall into two categories: structural assumptions about how the system works, and data assumptions.
Structural assumptions. Assumptions about how the system operates and how it is physically arranged are structural assumptions. For example, how many servers are there in a fast-food drive-through lane, and if there is more than one, how are they utilized? Do the servers work in parallel, where a customer completes a transaction by visiting a single server, or does one server take orders and handle payment while the other prepares and serves the order? Many structural problems in a model come from poor or incorrect assumptions. If possible, the workings of the actual system should be closely observed to understand how it operates. The system's structure and operation should also be verified with users of the actual system.
Data assumptions. There must be a sufficient amount of appropriate data available to build a conceptual model and to validate a model. Lack of appropriate data is often the reason why attempts to validate a model fail. Data should be verified to come from a reliable source. A typical error is assuming an inappropriate statistical distribution for the data. The assumed statistical model should be tested using goodness-of-fit tests and other techniques; examples of goodness-of-fit tests are the Kolmogorov–Smirnov test and the chi-square test. Any outliers in the data should be checked.
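As a sketch of how such a test might look in practice (Python with NumPy/SciPy; the normal distribution assumed here is purely illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=1.2, size=500)  # stand-in for measured data

    # Kolmogorov-Smirnov test against the assumed (normal) distribution.
    # Note: estimating loc/scale from the same data makes the p-value only
    # approximate; a stricter test would use independently chosen parameters.
    loc, scale = data.mean(), data.std(ddof=1)
    ks_stat, p_value = stats.kstest(data, "norm", args=(loc, scale))
    print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

    # Simple outlier screen: flag points more than four standard deviations out.
    outliers = data[np.abs(data - loc) > 4 * scale]
    print(f"{outliers.size} potential outliers")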
In relation to materials models there are two model types to consider in this context.
Physics-based models. In this case there are structural model assumptions which can be well quantified; e.g. when creating models of atomic structures, we assume that atoms do not come too close to each other, or that atoms are quite often found in octahedral or tetrahedral surroundings. These assumptions then enter the models used to work with the input data. Sensitivity of the model output to these assumptions should be part of the validation process.
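A minimal sketch of checking one such structural assumption (Python, with an illustrative threshold):

    import itertools
    import math

    def min_pair_distance(coords):
        # coords: list of (x, y, z) atomic positions, e.g. in Angstrom
        return min(math.dist(a, b)
                   for a, b in itertools.combinations(coords, 2))

    structure = [(0.0, 0.0, 0.0), (1.6, 0.0, 0.0), (0.0, 1.6, 0.0)]
    # Illustrative hard lower bound on interatomic separation (0.7 Angstrom).
    assert min_pair_distance(structure) > 0.7, "atoms unphysically close"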
Data-based models. The examples given suggest that this stage is applicable to ad hoc (empirical) models. However, data assumptions certainly also need to be checked for data-based models. We note that there is potentially a link between physics-based and data-based models, in that the latter can use data generated by the former, so it is suggested to consider this when drawing up final recommendations for model validation.
Validating input-output transformations
The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions. Data recorded while observing the system must be available in order to perform this test. The model output that is of primary interest should be used as the measure of performance.
We think that this seems to involve a systematic investigation of the transfer function of the model, considered as a black box, against experimental data, which is a good validation of a physics model; for example, DFT calculations can be validated by calculating minimum-energy configurations and lattice spacings (a minimal sketch of such a benchmark comparison is given below). Each model type (discrete, mesoscopic, continuum) would need its own set of benchmark data to allow comparison of codes. Informal discussions with Volker and Erich suggest that this is the essential validation approach generally adopted by MDS, with the proviso that other companies may use different approaches, presumably determined by the software and the customer.
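As a sketch of such a benchmark comparison (Python; the tolerance and the numbers are illustrative only):

    # Accept the model if the relative error against the reference value
    # is within a chosen tolerance (2 % here, purely illustrative).
    def validate_lattice_constant(computed, reference, rel_tol=0.02):
        rel_err = abs(computed - reference) / reference
        return rel_err <= rel_tol, rel_err

    # Illustrative numbers: fcc aluminium lattice constant in Angstrom.
    ok, err = validate_lattice_constant(computed=4.04, reference=4.05)
    print(f"relative error = {err:.3%}, within tolerance: {ok}")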
The “Workshop on Interoperability in Materials Modelling” took place in Cambridge on November 7-8, 2017 and attracted a wide range of stakeholder communities, including academic and commercial materials modelling software owners, manufacturing industry, modellers covering different types of models and applications, repository owners, and academic and commercial data and science/informatics software owners/consultants. It also brought together a number of current EU projects, the three DG CNECT Centres of Excellence in the materials modelling field (NOMAD, MAX, E-CAM) and EUDAT.
The objective of the workshop was for EMMC to seek the support of and endorsement by the wider materials modelling community for the European Materials Modelling Ontology (EMMO). EMMC is also gathering requirements and outlining plans for interoperability between data repositories and marketplaces.
The major outcome of the workshop was the wide agreement on the need for ontologies in materials modelling. Moreover, there was a call for an integrated effort to develop ontologies for the whole field, including materials characterisation and modelling all the way to chemicals and materials development in industry. Such a development is regarded as key to success in digitalisation. The draft EMMO was well received and its current development was widely endorsed. The status, requirements, expectations, benefits, as well as potential pitfalls of ontologies were discussed in detail. The need for a common semantic basis – and hence ontologies – to drive the marketplace platform that will link all stakeholders (providers and buyers) and integrate translation and decision support was highlighted. There is good evidence that the use of the RoMM terminology in communication about modelling and simulation is spreading well beyond EU projects. Finally, there is good evidence that the documentation of simulations using MODA is now widely endorsed and is supporting communication and the development of the MODA portal.
The workshop has been organized within the MODSIM project of the Subcommittee on Polymer Terminology, which is part of the Polymer Division of IUPAC. The aim of the MODSIM project is to establish a consensus around the technical terms used in modeling and simulation of polymers and related materials.
The workshop is part of the yearly activities of the IT-SIMUL node of CECAM.
The organizers also acknowledge the PRIN 2015 project “Molecular organization in organic thin films via computer simulation of their fabrication processes” (2015XJA9NT).