
Computer-Aided Design 45 (2013) 4–25


Key computational modeling issues in Integrated Computational Materials Engineering

Jitesh H. Panchal a, Surya R. Kalidindi b, David L. McDowell c,∗

a School of Mechanical and Materials Engineering, Washington State University, Pullman, WA 99164-2920, USA
b Department of Materials Science and Engineering, Department of Mechanical Engineering and Mechanics, Drexel University, Philadelphia, PA 19104, USA
c School of Materials Science and Engineering, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405, USA

Article info

Keywords: Materials design; Multiscale modeling; ICME; Databases; Uncertainty

Abstract

Designing materials for targeted performance requirements as required in Integrated Computational Materials Engineering (ICME) demands a combined strategy of bottom–up and top–down modeling and simulation which treats various levels of hierarchical material structure as a mathematical representation, with infusion of systems engineering and informatics to deal with differing model degrees of freedom and uncertainty. Moreover, with time, the classical materials selection approach is becoming generalized to address concurrent design of microstructure or mesostructure to satisfy product-level performance requirements. Computational materials science and multiscale mechanics models play key roles in evaluating performance metrics necessary to support materials design. The interplay of systems-based design of materials with multiscale modeling methodologies is at the core of materials design. In high performance alloys and composite materials, maximum performance is often achieved within a relatively narrow window of process path and resulting microstructures.

Much of the attention to ICME in the materials community has focused on the role of generating and representing data, including methods for characterization and digital representation of microstructure, as well as databases and model integration. On the other hand, the computational mechanics of materials and multidisciplinary design optimization communities are grappling with many fundamental issues related to stochasticity of processes and uncertainty of data, models, and multiscale modeling chains in decision-based design. This paper explores computational and information aspects of design of materials with hierarchical microstructures and identifies key underdeveloped elements essential to supporting ICME. One of the messages of this overview paper is that ICME is not simply an assemblage of existing tools, for such tools do not have natural interfaces to material structure nor are they framed in a way that quantifies sources of uncertainty and manages uncertainty in representing physical phenomena to support decision-based design.

© 2012 Elsevier Ltd. All rights reserved.

1. The path to Integrated Computational Materials Engineering (ICME)

Within the past decade, several prominent streams of research and development have emerged regarding integrated design of materials and products. One involves selection of materials, emphasizing population of databases and efficient search algorithms for properties or responses that best suit a set of specified performance indices [1], often using combinatorial search methods [2], pattern recognition, and so on. In such materials informatics approaches, attention is focused on data mining, visualization, and providing convenient and powerful interfaces for the designer to

∗ Corresponding author. Tel.: +1 404 894 5128; fax: +1 404 894 0186. E-mail address: [email protected] (D.L. McDowell).

0010-4485/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.cad.2012.06.006

support materials selection. Another class of approaches advocates simulation-based design to exploit computational materials science and physics in accelerating the discovery of new materials, computing structure and properties using a bottom–up approach. This can be combined with materials informatics. For example, the vision for Materials Cyber-Models for Engineering is a computational materials physics and chemistry perspective [3] on using quantum and molecular modeling tools to explore for new materials and compounds, making the link to properties. Examples include the computational estimate of stable structure and properties of multicomponent phases using first principles approaches (e.g., Refs. [4–6]). These approaches have been developed and popularized largely within the context of the materials chemistry, physics and science communities.

A third approach, involving more the integration of databases with tools for modeling and simulation, as well as design


Fig. 1. Hierarchy of levels from atomic scale to system level in concurrent materials and product design (multi-level design). Existing systems design methods focus on levels to the right of the vertical bar, treating 'materials design' as a materials selection problem. Levels to the left require computational materials science and mechanics models of microstructure–property relations. Olson's linear mapping of process–structure–property–performance relations in Materials by Design™ is shown on the upper left [10].

optimization and concurrent design for manufacture, is that of Integrated Computational Materials Engineering (ICME). It is concerned with multiple levels of structure hierarchy typical of materials with microstructure. ICME aims to reduce the time to market of innovative products by exploiting concurrent design and development of material, product, and process/manufacturing path, as described in the National Research Council report by a National Materials Advisory Board (NMAB) committee [7]. It is "an approach to design products, the materials that comprise them, and their associated materials processing methods by linking materials models at multiple length scales".

ICME was preceded by another major Defense Advanced Research Projects Agency (DARPA)-funded initiative, Accelerated Insertion of Materials (AIM), from 2001–2003, which sought to build systems approaches to accelerate the insertion of new and/or improved materials into products. An NMAB study [8] discussed the DARPA-AIM program, highlighting the role of small technology startup companies in enabling this technology, and summarizing the commercial computational tools and supporting databases currently available. The impact of AIM on framing of ICME is discussed further in Ref. [9].

The significant distinction of ICME from the first two approaches (materials informatics and combinatorial searches for atomic/molecular structures) is its embrace of the engineering perspective of a top–down, goal–means strategy elucidated by Olson [10]. This perspective was embraced and refined for the academic and industry/government research communities at a 1997 National Science Foundation (NSF)-sponsored workshop hosted by the Georgia Institute of Technology and Morehouse College [11] on Materials Design Science and Engineering (MDS&E). Effectively, ICME incorporates informatics and combinatorics, as well as multiscale modeling and experiments. As mentioned, ICME is fully cognizant of the important role that microstructure plays in tailoring material properties/responses in most engineering applications, well beyond the atomic or molecular scale. This is a key concept in materials science, as typically distinct from materials physics or chemistry. For example, mechanical properties depend not only on atomic bonding and atomic/molecular structure, but are strongly influenced by morphology of multiple phases, phase and interface strengthening, etc. Fig. 1 shows a hierarchy of material

length scales of atomic structure and microstructure that can be considered in design of advanced metallic alloys for aircraft gas turbine engine components in commercial aircraft. The diagram inset in the upper-left in Fig. 1 shows the Venn diagram from Olson [10] of overlapping process–structure–property–performance relations that are foundational to design of materials to the left of the vertical bar; conventional engineering design methods and multidisciplinary design optimization (MDO) are rather well established. Extending systems design methods down to address the concurrent design of material structure at various levels of hierarchy is a substantial challenge [12–15]. The process of tailoring the material to deliver application-dependent performance requirements is quite distinct from that of traditional materials selection (relating properties to performance requirements) or from combinatorial searching for atomic/molecular structure–property relations at the level of individual phases; such combinatorial searches typically do not consider the role of microstructure, which is typically central to meeting multifunctional performance requirements. The conventional approach in science-based modeling of hierarchical systems is a bottom–up, deductive methodology of modeling the material's process path, microstructure, and resulting properties, with properties then related to performance requirements, as shown in Fig. 1. Design is a top–down process in which systems level requirements drive properties, structure and process routes.

The pervasive difficulty in inverting process–structure and structure–property relations in modeling and simulation greatly complicates this top–down enterprise, and owes to certain unavoidable realities of material systems:

• Nonlinear, nonequilibrium path dependent behavior that requires intensive simulation effort, limiting parametric study and imparting dependence upon initial conditions.

• Transitions from dynamic (lower length and time scales) to thermodynamic (higher length and time scales) models for cooperative phenomena in multiscale modeling, with non-uniqueness due to reduction of model degrees of freedom during scale-up.

• Potential for a wide range of suboptimal solutions that complicate design search and optimization.


Fig. 2. Typical separation of materials development and materials selection activities in the process of integrated design of materials and products.

• Approximations made in digital microstructure representation of material structure.

• Dependence of certain properties such as true fracture ductility, fracture toughness, fatigue strength, etc., on extreme value distributions of microstructure attributes that affect damage/degradation.

• Microstructure metastability and long term evolution in service.

• Lack and variability of experimental data for validation purposes or to support methods based on pattern recognition.

• Uncertainty of microstructure, models, and model parameters.

Since uncertainty is endemic in most facets of ICME, managing uncertainty and its propagation through model/experiment chains is of paramount importance. Materials design is complicated by the complexity of nonlinear phenomena and pervasive uncertainty in materials processing, related models and model parameters, for example. This motivates the notion of robust design and other means to address the role of uncertainty. The materials design challenge is to develop methods that employ bottom–up modeling and simulation, calibrated and validated by characterization and measurement to the extent possible, combined with top–down, requirements-driven exploration of the hierarchy of material length scales shown in Fig. 1. It is a multi-level engineering systems design problem, characterized by top–down design of systems with successive embedded sub-systems, such as the aircraft shown in Fig. 1. Multiscale modeling pertains more to the bottom–up consideration of scales of materials hierarchy.

Another critical aspect of materials design identified in the NMAB ICME report [7] is the key role played by quantitative representation of microstructure. The notion of microstructure-mediated design is predicated on the use of mathematical/digital representation of microstructure, rather than properties, as the taxonomic basis for describing material structure hierarchy in materials design. This is not merely semantics, but has quite practical implications in terms of actual pathways of material and product development. Specifically, process–structure relations tend to be quite dissociated from structure–property relations, with different model developers, organizations involved, degree of empiricism, etc. The timeframes for process–structure relations in materials development often do not match those of structure–property relations. Process–structure relations tend to be more highly empirical, complex, and less developed than structure–property relations. The light dashed line on the left in Fig. 2 effectively represents a "fence" between organizations/groups pursuing process–structure and structure–property relations, reflecting the common decomposition of activities between material suppliers and OEMs. One goal of an integrated systems strategy for materials design and development is to more closely couple these activities. Clearly, microstructure is solidly at this interface, the result of process–structure relations. Hence, framing ICME in terms of microstructure-mediated design facilitates enhanced concurrency of material suppliers and OEM activities. Another important goal of ICME should be to effectively remove the bold dashed line to

the right in Fig. 2 that serves as a "fence" between materials development and materials selection. Conventionally, design engineers have worked with databases of existing materials to select, rather than design, materials in the process of designing components (systems, sub-systems). Tools for materials selection [1] are reasonably well-developed, supporting the materials selection problem. However, as discussed by McDowell et al. [12–15], materials selection supports the property–performance linkage to the right in Fig. 2 and is just one component of the overall integration of materials and product design. Integration of design engineering into the overall process requires a coupling back to material suppliers, for example. This is one of the great challenges to realizing ICME in terms of concurrent computational materials design and manufacturing. As stated in the report of the 1998 MDS&E workshop [11], "The field of materials design is entrepreneurial in nature, similar to such areas as microelectronic devices or software. MDS&E may very well spawn a 'cottage industry' specializing in tailoring materials for function, depending on how responsive large supplier industries can be to this demand. In fact, this is already underway".

The ICME report [7] focused mainly on advances in modeling and simulation, the role of experiments in validation, databases and microstructure representation, and design integration tools. In addition, cultural barriers to ICME in industry, government and academia were discussed. Elements of ICME broadly encompass:

• Modeling and simulation across multiple levels of hierarchy of structure, including both process–structure and structure–property relations
• Experiments and prototyping
• Distributed collaborative networks
• Materials databases
• Microstructure representation as the focus of the materials design problem
• Knowledge framework that integrates databases with experiments and modeling/simulation to pursue inverse design
• Uncertainty quantification and management
• Integration with OEMs and materials suppliers.

The Materials Genome Initiative aspires to build on the preceding AIM and ICME initiatives to develop an infrastructure to accelerate advanced materials discovery and deployment, including all phases of development, property optimization, systems design and integration, certification and manufacturing [16]. As outlined in Refs. [7,15], this goal and the premise of ICME is more comprehensive in addressing the connection with product design and manufacturing than combinatorial searching algorithms applied using first principles calculations to discover new materials or multicomponent phases, for example. The notion of genomics as predictive exploration of crystal structures, single phases, or molecules for desirable phase properties is a valuable part of the overall concept of ICME to more broadly scope during design exploration, but the "connective tissue" of multiscale modeling and multi-level decision-based design in ICME are essential aspects of its integrative character that links microstructure to performance and considers integrated systems design, certification, manufacturing, and


product scale-up. Moreover, the analogy of ICME to bioinformatics, DNA sequencing and so forth is therefore limited and not fully descriptive of the scope and challenges of ICME. In our view, the notion of integrating manufacturing processes with concurrent design of materials and products is at the core of ICME, considering the inescapable role of the hierarchy of material structure (i.e., microstructure) on properties and performance. Of course, one can conceive of important application domains in which discovery of new material phases perhaps addresses the majority of the accelerated deployment problem; it may be entirely sufficient for purposes of searching for specific molecular or atomic structures for nanoscale applications such as quantum dots, semiconductors, etc. But even in many applications where certain key performance indicators are dictated by atomic structure (e.g., piezoelectrics, battery and fuel cell electrodes, etc.), material durability and utility in service is often dictated by mesoscopic heterogeneity of structure and its lack of reversibility under cycling or in extreme environments. Therefore, materials informatics is a key technology for nanoscale design problems but only one toolset in the solution strategy for multi-level materials design that must consider scale-up and long term product performance.

This paper will focus on several critical path issues in realizing ICME, namely materials databases, the role of multiscale modeling, materials knowledge systems and strategies that support inverse design (top–down in Fig. 1), and uncertainty quantification and management considerations that undergird plausible approaches.

2. Materials databases and the need for a structure-based taxonomy

The NMAB report [7] has emphasized the key role of materials databases in capturing, curating, and archiving the critical information needed to facilitate ICME. While the report does not elaborate on how these databases should be structured or what specifically they should include, it discusses how these new databases will produce substantial savings by accelerating the development of advanced high performance materials, while eliminating the unintended repetition of effort from multiple groups of researchers possibly over multiple generations.

Before we contemplate the ideal structure and content of such novel databases, it would be beneficial to gain a historical perspective on the successes and failures of prior efforts on building materials databases. In a recent summary, Freiman et al. [17] discuss the lessons learned from building and maintaining the National Materials Property Data Network that was operated commercially until 1995. Although this network contained numerous databases on many different materials, it proved too costly to maintain because of (i) a lack of a single common standard for information on the very broad range of materials included in the database, (ii) enhancements/modifications to testing procedures used for evaluating properties of interest that sometimes invalidated previous measurements, and (iii) the placement of onus on a single entity as opposed to being jointly developed by multiple entities that possess the broad range of requisite sophisticated skill sets. Notwithstanding these shortcomings, a major deficiency in these, as well as other similar, databases is that they did not aim to capture the details of the microstructure. It is well-known in the materials science and engineering discipline that the microstructure–property connections are not unique, and that composition alone is inadequate to correlate to the many physical properties of the material. Indeed, the spatial arrangement of local features or "states" in the internal structure (e.g., morphological attributes) at various constituent length scales plays an influential role in determining the overall properties of the material [18–20].

The key materials databases needed to facilitate ICME have to be built around microstructure information. There is therefore a critical need to establish a comprehensive framework for a rigorous quantification of microstructure in a large number of disparate material systems. Furthermore, the framework has to be sufficiently versatile to allow compact representation and visualization of the salient microstructure–property–processing correlations, which populate the core knowledge bases used by materials designers. It is important to recognize here that only the microstructure space can serve as the common higher dimensional space in which the much needed knowledge of the process–microstructure–property correlations can be stored and communicated reliably without appreciable loss of information. Any attempt to represent this core knowledge by circumventing microstructure using lower dimensional spaces (e.g., processing–property correlations or composition–property correlations) should be expected to result in substantial loss of fidelity and utility in the generated knowledge bases.

In arriving at a rigorous framework for microstructure quantification, it is extremely important to recognize and understand the difference between a micrograph (or a microstructure dataset) and the microstructure function [21]. The microstructure function is best interpreted as a stochastic process, in essence a response to applied stimuli, whose experimental outcome is a micrograph (a two or three dimensional microstructure dataset). As such, the microstructure function cannot be observed directly and is expected to capture the ensemble-averaged structure information from a multitude of micrographs extracted from multiple samples processed by nominally the same manufacturing history. Since individual micrographs have no natural origin, the only meaningful information one can extract from each micrograph has to be concerning the relative spatial arrangement or spatial correlations of local material states (e.g., phases, orientations) in the structure. A number of different measures of the spatial correlations in the microstructure have been proposed and utilized in the literature [19,20]. Of these, only the n-point spatial correlations (or n-point statistics) provide the most complete set of measures that are naturally organized according to increasing degree of microstructure information. For example, the most basic of the n-point statistics are the 1-point statistics, and they reflect the probability density associated with finding a specific local state of interest at any randomly selected single point (or voxel) in the microstructure. In other words, they essentially capture the information on volume fractions of the various distinct local states present in the material system. The next higher level of microstructure information is contained in the 2-point statistics, denoted $f_r^{h h'}$, which capture the probability density associated with finding local states $h$ and $h'$ at the tail and head, respectively, of a prescribed vector, $r$, randomly placed into the microstructure. There is a tremendous increase in the amount of microstructure information contained in the 2-point statistics compared to the 1-point statistics. Higher-order correlations (3-point and higher) are defined in a completely analogous manner. Another benefit of treating the microstructure function as a stochastic process is the associated rigorous quantification of the variance associated with the microstructure [21]. The variance in structure can then be combined appropriately with the other uncertainties in the process (for example, those associated with the measurements and those associated with the models used to predict overall properties or performance characteristics of interest in an engineering application) to arrive at the overall variance in the performance characteristics or responses of interest.
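To make these definitions concrete, the following sketch (a minimal illustration, not code from the paper) computes the 1-point statistics and the 2-point autocorrelation $f_r^{11}$ of a synthetic two-phase voxelized microstructure, assuming periodic boundaries so that correlations for all vectors $r$ can be evaluated at once with FFTs; the grid size, volume fraction, and random structure are arbitrary choices.

```python
# Sketch: 1-point and 2-point spatial statistics of a synthetic two-phase
# microstructure, assuming periodic boundaries so that correlations for all
# vectors r reduce to FFT-based circular correlations. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
S = (64, 64)                        # spatial bins (voxels) of a 2-D micrograph
micro = rng.random(S) < 0.35        # True marks local state h = 1

# Microstructure function m_s^h: one indicator (0/1) array per local state h.
m = {0: (~micro).astype(float), 1: micro.astype(float)}

# 1-point statistics: probability of finding state h at a random voxel,
# i.e., the volume fraction of each local state.
f1 = {h: m[h].mean() for h in m}

def two_point(h, hp):
    """2-point statistics f_r^{h h'}: probability of state h at the tail and
    state h' at the head of a vector r placed randomly in the structure."""
    F, Fp = np.fft.fftn(m[h]), np.fft.fftn(m[hp])
    return np.real(np.fft.ifftn(np.conj(F) * Fp)) / micro.size

f2 = two_point(1, 1)                # autocorrelation of state 1, all r at once
print("volume fractions:", f1)
print("f_{r=0}^{11} equals volume fraction of state 1:", np.isclose(f2[0, 0], f1[1]))
```

At the zero vector the autocorrelation recovers the 1-point statistic, which is a convenient sanity check on any such implementation.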

Materials Informatics has recently emerged as a new field of study that focuses on the development of new toolkits for efficient synthesis of the core knowledge in large materials datasets (generated by both experiments and models). The NMAB report [7] has specifically highlighted the important role to be played by Materials Informatics in the realization of the potential of ICME. The standardization and adoption of a rigorous framework, such as the


n-point statistics, in enabling the quantification of the microstructure is a critical and essential step in the emergence of Materials Informatics. In recent work, it was demonstrated that the use of concepts borrowed from informatics can produce a new toolkit [22] with the following unprecedented capabilities: (i) archival, real-time searches, and direct quantitative comparisons of different microstructure datasets within a large microstructure database, (ii) automatic identification and extraction of microstructure features or other metrics of interest from very large datasets, (iii) establishment of reliable microstructure–property correlations using objective measures of the microstructure, and (iv) reliable extraction of precise quantitative insights on how the local neighborhood influences the localization of macroscale loading and/or the local evolution of microstructure, leading to development of robust, scale-bridging, microstructure–property–processing linkages. The new field of Materials Informatics is still very much in its embryonic stage of development. The customization and adoption of modern information technology tools such as image-based search engines, data-mining, machine learning and crowd-sourcing can potentially identify completely new research directions for addressing successfully some of the most difficult computational challenges in the implementation of the ICME paradigm.

3. Multiscale modeling and materials design

Multiscale modeling involves seeking solutions to physical problems that have important features at multiple spatial and/or temporal scales. Multiscale models are necessary to address the relative contributions of various levels of material structure hierarchy shown in Fig. 1 to responses at higher scales [12–15,23–25]. However, multiscale models are subservient to the overall design integration strategy in materials design. In other words, they offer utility in various specific ways to materials design, including, but not limited to:

• relating polycrystalline and/or polyphase microstructures to properties involving cooperative behavior at higher scales (e.g., yield strength, ductility, ductile fracture, etc.),

• quantifying sensitivity of higher scale responses to different mechanisms and levels of structure/microstructure hierarchy,

• supporting microstructure-sensitive prediction of material failure or degradation,

• quantifying the influence of environment or complicating contributions of impurities, manufacturing induced variability or defects,

• considering competition of distinct mechanical, chemical and transport phenomena without relying too heavily on intuition to guide solutions in the case of highly nonlinear interactions, and

• bridging domains of quantum physics and chemistry with engineering for problems that span vast ranges of length and time scales.

Olson's Venn diagram structure in Fig. 1 should not be confused with multiscale modeling strategies; the former is much more comprehensive in scope, embedding multiscale modeling as part of the suite of modeling and simulation tools that provide decision support in design. For example, structure–property relations can involve the full gamut of length and time scales, as can process–structure relations. In other words, the levels in Olson's linear design strategy of Fig. 1 do not map uniquely to levels of material hierarchy. Even concurrent multiscale models, which attempt to simultaneously execute models at different levels of resolution or fidelity, do not serve the full purpose of top–down materials design. Materials design and multiscale modeling are not equivalent pursuits. This distinction is important because

it means that notions of cyberdiscovery of new or improved materials must emphasize not only modeling and simulation tools but also systems design strategies for using simulations to support decision-based concurrent design of materials and products.

Materials with long range order, such as crystalline materials or oriented fibrous composites or thermoplastic polymer chains, offer a particular challenge and motivate sophisticated multiscale modeling methods. Characteristic scale transitions achieved by multiscale modeling do not follow a single mathematical or algorithmic structure, owing both to the consideration of long range order and to the fact that as models are coarse grained, the models shift from those concerning nonequilibrium, dynamic particle systems to continuously distributed mass systems that often invoke statistical equilibrium arguments and radically reduce degrees of freedom. A distinction is made between so-called hierarchical and concurrent multiscale modeling schemes. The former apply scale-appropriate models to different levels of hierarchy to generate responses or properties at that scale. In contrast, the latter attempt to cast the problem over common spatial and temporal domains by virtue of boundary conditions and introduction of consistent variable order descriptions of kinematics to achieve a variable resolution/fidelity description. Clearly, concurrent schemes are more costly but are argued to provide high value in cases in which localization of responses within a microstructure cannot be located or anticipated a priori, as well as the propagation of such localization events across a broad range of scales shown in Fig. 1. Information is lost either way: in the case of hierarchical multiscale models, information is lost (i.e., increase of information entropy) by disposing of model degrees of freedom, while in the case of concurrent multiscale modeling information is lost by virtue of the idealization necessary to frame the models with consistent structure to pass information back and forth across length and time scales.

It is a daunting task to briefly summarize a range of multiscale modeling approaches for all relevant process–structure or structure–property relations for all material classes, beyond the scope of any single work. We accordingly limit discussion here to multiscale dislocation plasticity of metals to illustrate the range of considerations involved. Dislocation plasticity is a multiscale, multi-mechanism process [24–27] that focuses on evolving irreversible microstructure rearrangement [28] associated with line defects in crystals. Existing approaches address mechanisms over a wide range of length and time scales, as summarized below.

(a) Discrete to continuous modeling approaches based on contiguous (domain decomposition) or overlapping (coarse graining) domains:

• Domain decomposition, with an atomistic domain abutted to an overlapping continuum domain to atomistically resolve crack tips and interfaces [29–35] to model fracture and defect interactions.

• Quasi-continuum approaches with the continuum elastic energy function informed by nonlocal atomistic interatomic potentials, suitable for long range fields without defects [36–39], with adaptivity to selectively model regions with full atomic resolution [40].

• Dynamically equivalent continuum or quasi-continuum, consistent with fully dynamic atomistic simulations over the same overlapping domain [41–46].

• Coarse-grained molecular dynamics [47–51] for somewhat longer ranged spatial interactions of microstructure and defects such as dislocation pile-ups.

• Continuum phase field modeling (PFM) [52,53] to simulate microstructure morphology and evolution, including microscopic PFM [54] informed by ab initio or atomistic simulations in terms of energy functions, with kinetics (mobilities) established by matching overlapping MD or experiments [55].


Table 1. Examples of dominant material length scales (in each row) for design with regard to bulk mechanical properties of metallic systems. The columns of the original table are: strength (bulk polycrystals); inelastic flow and hardening, uniform ductility; brittle fracture resistance; ductile fracture resistance; high cycle fatigue; and low cycle fatigue. Entries at each scale:

2 nm — unit processes of nucleation (low defect density single crystals, nanoscale multilayers); bond breaking (cleavage).

20 nm — source activation (nanocrystalline metals with prior cold work); dislocation reactions with grain, phase or twin boundaries; cooperative effects of impurities, segregants, oxides, etc. (intergranular); valve effect dislocation nucleation.

200 nm — source controlled (NEMS, MEMS); dislocation multiplication, junctions; source activation.

2 µm — free surfaces and starvation (micropillars, unconstrained layers); statistical field of junctions, forests; fracture process zone; damage process zone; localized slip structures.

20 µm — intergranular interactions; boundary interactions, roughness and crack path bifurcation; individual activated grains or phases, barrier spacing; damage process zone, barrier spacing.

200 µm — effect of neighbors.

(b) Concurrent multiscale continuum modeling schemes for plastic deformation and failure processes based on imposition of consistent multiscale boundary conditions and/or higher order continua accounting for heterogeneous deformation at fine scales over the same physical domain [56–61].

(c) Multiscale modeling schemes of hierarchical nature for inelastic deformation and damage in the presence of microstructure:

• Multilevel models for plasticity and fracture based on handshaking [62–65].

• Self-consistent micromechanics schemes [66–68].

• Internal state variable models informed by atomistic/continuum simulations and coupled with micromechanics or FE methods [69,70].

• Weak form variational principle of virtual velocities (PVV) with addition of enhanced kinematics based on microstructure degrees of freedom [71–73].

• Extended weak form variational approaches of generalized continua or second gradient theory with conjugate microstresses to represent gradient effects [74–79].

(d) Statistical continuum theories that directly incorporate discrete dislocation dynamics [80–89] and dislocation substructure evolution [24,25].

(e) Field dislocation mechanics based on incompatibility with dislocation flux and Nye transport [90–93].

(f) Dislocation substructure formation and evolution models based on fragmentation and blocky single slip or laminated flow characteristics, including non-convex energy functions [94–98].

(g) Transition state theory, based on use of atomistic unit process models [99–101] to inform event probabilities in kinetic Monte Carlo models of mesoscopic many body behavior [102] and other coarse graining approaches [103].

Clearly, even in the focused application area of multiscale dislocation plasticity, no single multiscale modeling strategy exists or suffices to wrap together the various scale transitions. Each material class and response of interest is influenced by a finite, characteristic set of length and time scales associated with key controlling microstructure attributes. For this reason, materials design can be (and often is) pursued using sets of models that are scale appropriate to the length scales and microstructure attributes deemed most influential on the target response(s) at the scale of the application. So if the scale of the application is on the order of tens of microns, as in MEMS devices, both the atomic scale structure and mesostructure of defect distributions (dislocations, grain boundaries, etc.) can be important. For nanostructures, however, atomic scale arrangement and its evolution is dominant. In many systems, the hierarchy includes strong heterogeneity or phase contrast (e.g., weldments in structures, fiber-reinforced polymer composites) and these scales may dominate responses of interest. Even interfacial-dominated phenomena, such as fracture of heterogeneous materials (cf. [104–107]), are complicated according to whether the fracture occurs via brittle or ductile separation, with the latter involving interface structure only as a gating mechanism for inelastic deformation modes [108].

For illustrative purposes, at the risk of oversimplifying the wide range of phenomena and scales over a wide range of metallic systems, Table 1 attempts to identify dominant material length scales to address in design for various target mechanical properties of metallic systems. Although phenomena occur over a full spectrum of scales, the role of defects and microstructure is largely manifested at various discrete levels of structure. One important point is that the dominant mechanisms and scales change with the scale and nature of the application. Another point is that certain properties are sensitive to material structure at only one or two levels of hierarchy. These observations suggest that distinct models or scale transition methods that bridge several scales can supply first order information for materials design. Clearly, no single multiscale modeling strategy exists or suffices to link the various scale transitions. Table 2 offers a partial glimpse into the complexities of models at various length and time scales for metallic systems, distinguishing scale transition methods from models appropriate to each scale.

It is emphasized that there is considerable uncertainty in any kind of multiscale modeling scheme, including the selection of specific length scales to model, approximations made in separating length and time scales in models used, model structure and parameter uncertainty at each scale, approximations made in various scale transition methods as well as boundary conditions, and lack of complete characterization of initial conditions. Furthermore, microstructure itself has random character, such as sizes and distributions of grains, phases, and so forth. These sources of uncertainty give rise to the need to consider sensitivity of properties and responses of interest to variation of microstructure at various scales, as well as propagation of model uncertainty through multiscale model chains. Uncertainty will be addressed in a later section of this paper.
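Because propagation of uncertainty through model chains recurs throughout this paper, a minimal Monte Carlo sketch may help fix ideas: an uncertain microstructure attribute is pushed through a toy structure–property model and then a toy property–performance model, and the variance and failure probability of the final response are read off. The Hall–Petch-like form, the noise term, and all numerical values are illustrative assumptions, not models from the paper.

```python
# Sketch: Monte Carlo propagation of uncertainty through a two-model chain,
# structure -> property -> performance. Model forms and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Uncertain microstructure attribute: mean grain size d (microns), lognormal.
d = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n)

# Model 1 (structure -> property): Hall-Petch-like yield strength (MPa),
# with an additive term standing in for model-form uncertainty.
sigma_y = 150.0 + 500.0 / np.sqrt(d) + rng.normal(0.0, 10.0, size=n)

# Model 2 (property -> performance): safety margin against a 250 MPa demand.
margin = sigma_y - 250.0

print(f"mean margin   = {margin.mean():8.2f} MPa")
print(f"std of margin = {margin.std():8.2f} MPa")
print(f"P(margin < 0) = {(margin < 0).mean():8.4f}")
```

Even this toy chain shows how variance at the microstructure level and model-form noise combine into a distribution, rather than a point value, at the performance level.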


Table 2. Models and scale bridging methods for multiscale dislocation plasticity in metallic crystals and polycrystals. Scale bridging rows (shaded boxes in the original) are interleaved between the model rows below; each entry gives length scale, time scale, model, and primary sources of uncertainty.

2 nm (time scale: NA, ground state) — Model: first principles, e.g., Density Functional Theory (DFT). Uncertainty: assumptions in DFT method, placement of atoms.
Scale bridging: quantum MD.

200 nm (10 ns) — Model: molecular dynamics (MD). Uncertainty: interatomic potential, cutoff, thermostat and ensemble.
Scale bridging: domain decomposition, coupled atomistics discrete dislocation (CADD), coarse grained MD, kinetic Monte Carlo. Uncertainty: attenuation due to abrupt interfaces of models, passing defects, coarse graining defects.

2 µm (s) — Model: discrete dislocation dynamics. Uncertainty: discretization of dislocation lines, cores, reactions and junctions, grain boundaries.
Scale bridging: multiscale crystal plasticity. Uncertainty: averaging methods for defect kinetics and lattice rotation.

20 µm (1000 s) — Model: crystal plasticity, including generalized continuum models (gradient, micropolar, micromorphic). Uncertainty: kinetics, slip system hardening (self and latent) relations, cross slip, obstacle interactions, increasing number of adjustable parameters.
Scale bridging: RVE simulations, polycrystal/composite homogenization. Uncertainty: RVE size, initial and boundary conditions, eigenstrain fields, self-consistency.

200 µm (days) — Model: heterogeneous FE with simplified constitutive relations. Uncertainty: mesh refinement, convergence, model reduction.
Scale bridging: substructuring, variable fidelity, adaptive remeshing, multigrid. Uncertainty: loss of information, remeshing error, boundary conditions.

>2 mm (years) — Model: structural FE. Uncertainty: low order models, meshing, geometric modeling.

4. Methods for requirements-driven, top–down design

As mentioned earlier, many of the currently available materials databases are better suited for material selection [109] instead of rigorous materials design. This is because materials design requires knowledge of a validated set of process–microstructure–property linkages that are amenable to information flow in both forward and reverse directions, as essential to facilitate solutions to top–down, requirements-driven (inverse) design problems.
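To make the selection/design distinction concrete, the sketch below shows what a database-driven selection step looks like: candidates are screened on a property constraint and ranked by a classical Ashby performance index ($M = E^{1/2}/\rho$ for a light, stiff beam, in the spirit of the performance indices of Ref. [1]). The material records and property values are illustrative placeholders, not data from the paper.

```python
# Illustrative sketch of database-driven materials selection (not from the
# paper): screen candidates on a property constraint, then rank them by the
# Ashby performance index for a light, stiff beam, M = sqrt(E) / rho.
from math import sqrt

# Hypothetical property records (E in GPa, rho in Mg/m^3), for illustration only.
materials = [
    {"name": "Al alloy", "E": 70.0,  "rho": 2.7},
    {"name": "Ti alloy", "E": 110.0, "rho": 4.5},
    {"name": "Steel",    "E": 210.0, "rho": 7.8},
    {"name": "CFRP",     "E": 150.0, "rho": 1.6},
    {"name": "Mg alloy", "E": 45.0,  "rho": 1.8},
]

def beam_index(m):
    """Performance index M = E^(1/2) / rho for a minimum-mass stiff beam."""
    return sqrt(m["E"]) / m["rho"]

candidates = [m for m in materials if m["E"] >= 50.0]   # screening step
for m in sorted(candidates, key=beam_index, reverse=True):
    print(f'{m["name"]:10s}  M = {beam_index(m):.2f}')
```

Selection of this kind operates entirely in the property–performance space on the right of Fig. 2; it cannot propose a new microstructure or process path, which is precisely the limitation discussed here.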

For example, sophisticated analytical theories and computational models already exist for predicting several macroscale (effective or homogenized) properties associated with a given microstructure. However, most of these theories are not well suited for identifying the complete set of microstructures that are theoretically predicted to meet or exceed a set of designer-specified macroscale properties or performance criteria. Topology optimization is one of the early approaches developed for materials design. In this method, the region to be occupied by the material is tessellated into spatial bins and numerical approaches such as finite element methods are used to solve the governing field equations. The design problem is set up as an optimization problem, where the objective is the maximization of a selected macroscale property (or a performance criterion) and the design space enumerates all of the possible assignments of one of the available choices of material phases (e.g., a solid or a pore phase) to each of the spatial bins in the numerical model [110,111]. Although the method has been successfully applied to a limited class of problems [111–116], its main drawback is that it is likely to explore only a very small region of the vast design space. As an example, consider a relatively small problem with about 100 elements covering the space and assume that there are two choices for the selection of a material phase in each element. For this simple problem, the design space constitutes $2^{100}$ discrete choices of microstructures. It is therefore very unlikely that an optimization process can explore the complete design space unless constraints (perhaps ad hoc or limited in nature) are imposed. Moreover, explicit simulation of a large parametric range of microstructures via finite element or finite difference methods is extremely time-consuming, particularly if nonlinear and time-dependent processes are considered.
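The $2^{100}$ figure can be made tangible with a toy version of the bin-assignment problem. The sketch below is illustrative only: the objective is a stub standing in for an expensive finite element evaluation, and random search plays the role of an unguided optimizer touching a vanishing fraction of the space.

```python
# Sketch: binary phase assignment over 100 spatial bins gives 2**100 designs;
# random search touches a vanishing fraction of them. Objective is a stub.
import random

BINS = 100
print(f"design space size: 2**{BINS} = {2**BINS:.3e}")

def stub_property(design):
    # Placeholder for an expensive FE evaluation of a macroscale property:
    # here, just reward solid bins near the domain ends (illustrative only).
    return sum(x for i, x in enumerate(design) if i < 20 or i >= 80) - 0.1 * sum(design)

best = None
for _ in range(10_000):                  # explores ~1e4 of ~1.3e30 designs
    design = [random.randint(0, 1) for _ in range(BINS)]
    score = stub_property(design)
    if best is None or score > best[0]:
        best = (score, design)

print("best random-search score:", best[0])
print("fraction of space explored:", 10_000 / 2**BINS)
```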

A number of modern approaches to material design start by dramatically reducing the design space by focusing on the most important measures of the microstructure deemed relevant to the problem. However, this reduction of the design space is attained by rational consideration of the governing physics of the problem and/or taking into account the constraints on the availability of microstructures (i.e., recognizing that only a very small fraction of all theoretically possible microstructures are accessible by currently employed manufacturing options). In other words, in these approaches, the design space in the simple problem illustrated above will be drastically reduced first by consideration of a suitably selected subset of microstructure measures deemed relevant to the selected physical phenomenon of interest, and then optimal solutions will be explored in this reduced space. It is anticipated that the approach described above will produce better solutions compared to conventional approaches that rely exclusively on the optimization algorithms. We next discuss examples of approaches that attempt to reduce complexity of top–down design as an illustration.

Microstructure Sensitive Design (MSD) was formulated using spectral representations across the process, microstructure, and property design spaces in order to drastically reduce the computational requirements for bi-directional linkages between the spaces that can enable inverse design [20,117–125]. Much of the focus in MSD was placed on anisotropic, polycrystalline materials, where the local material state is defined using a proper orthogonal tensor that captures the local orientation of the crystal lattice in each spatial bin in the microstructure with respect to a fixed sample (or global) reference frame. The corresponding local state space is identified as the fundamental zone of SO(3) (the set of all distinct proper orthogonal tensors in 3-D) that takes into account the inherent symmetries of the crystal lattice. The distinguishing feature of MSD is its use of spectral representations (e.g., generalized spherical harmonics [126]) of functions defined over the orientation space. This leads to compact representations of process–structure–property relationships that can be executed at minimal computational cost. Most of the MSD success stories have focused on 1-point statistics of microstructure (also called the Orientation Distribution Function [126]). At its core, the MSD effort involved the development of very large and highly efficient databases organized in the form of microstructure hulls and anisotropic property closures. Microstructure hulls delineate the complete set of theoretically feasible statistical distributions that


quantitatively capture the important details of the microstructure. Property closures identify the complete set of theoretically feasible combinations of macroscale effective properties for a selected homogenization (composite) theory. The primary advantages of the MSD approach are its (a) consideration of anisotropy of the properties at the local length scales, (b) exploration of the complete set of relevant microstructures (captured in the microstructure hull and property closures) leading to global optima, and (c) invertibility of the process–microstructure–property relations (due to the use of spectral methods in representing these linkages).
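The compaction that spectral representation buys can be illustrated in a simplified planar analogue, where the generalized spherical harmonics over SO(3) reduce to an ordinary Fourier series over a single orientation angle. The sketch below uses an assumed synthetic ODF and shows a smooth texture captured essentially exactly by a handful of coefficients; the full MSD machinery replaces this with generalized spherical harmonics over fundamental zones of SO(3) [126].

```python
# Sketch: spectral (Fourier) representation of a planar ODF as a low-dimensional
# analogue of the generalized spherical harmonics used in MSD. Synthetic ODF.
import numpy as np

N = 360
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Synthetic smooth ODF with two texture components (illustrative only).
odf = 1.0 + 0.8 * np.cos(4.0 * theta) + 0.3 * np.cos(8.0 * theta)

# Spectral coefficients via FFT; truncate to the first K harmonics.
coeffs = np.fft.rfft(odf) / N
K = 10
truncated = coeffs.copy()
truncated[K:] = 0.0
odf_hat = np.fft.irfft(truncated * N, n=N)

err = np.max(np.abs(odf - odf_hat))
print(f"max reconstruction error with {K} harmonics: {err:.2e}")
```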

Extension of the MSD framework to include 2-point statistics of orientation has proved to be computationally expensive because of the very large number of statistical variables and terms involved. It is important to recognize that 1-point statistics are defined exclusively over the local state space, whereas the higher-order statistics are defined over the product spaces of the orientation space (for the local state variables) and the real three-dimensional physical space (for the vectors defining the spatial correlations). While the use of generalized spherical harmonics provides substantial compaction for functions over the orientation space, the analogous compact representations for functions over the real three-dimensional physical space are yet to be developed.

In an effort to address the need to establish high fidelity microstructure–property–process relations that include higher-order descriptors of the microstructure, a new mathematically rigorous approach for scale-bridging called Microstructure Knowledge Systems (MKS) was recently developed. MKS facilitates bi-directional exchange of information between multiple length (and possibly time) scales. In the MKS approach [127–131], the focus is on localization (the opposite of homogenization), which describes the spatial distribution of the response field of interest (e.g., stress or strain fields) at the microscale for an imposed loading condition at the macroscale. Localization is critically important in correlating various failure-related properties of the material with appropriate microstructure conformations that produce the hot spots associated with damage initiation in the material. Moreover, if one can successfully establish accurate expressions for localization (referred to as localization linkages), it automatically leads to highly accurate homogenization and computationally efficient multiscale modeling and simulation.

The MKS approach is built on the statistical continuum theories developed by Kröner [132,133]. However, the MKS approach dramatically improves the accuracy of these expressions by calibrating the convolution kernels in these expressions to results from previously validated physics-based models. In recent work [131], the MKS approach was demonstrated to successfully capture the tails of the microscale stress and strain distributions in composite material systems with relatively high phase contrast, using the higher-order terms in the localization relationships. The approach has its analogue in the use of Volterra kernels [134,135] for modeling the dynamic response of nonlinear systems.

In the MKS framework, the localization relationships of interest are expressed as calibrated meta-models that take the form of a simple algebraic series whose terms capture the individual contributions from a hierarchy of microstructure descriptors. Each term in this series expansion is expressed as a convolution of the appropriate local microstructure descriptor and its corresponding local influence at the microscale. The microstructure is defined as a digital signal using the discrete approximation of the microstructure function, $m_s^h$, reflecting the volume fraction of local state $h$ in the spatial bin (or voxel) $s$. Let $p_s$ be the averaged local response in spatial cell $s$ defined on the same spatial grid as the microstructure function; it reflects a thoughtfully selected local physical response central to the phenomenon being studied, and may include stress, strain or microstructure evolution rate variables [127–130]. In the MKS framework, the localization relationship captures the local response field using a set of kernels and their convolution with higher-order descriptions of the local microstructure [132,133,136–142].

The localization relationship can be expressed as

$$p_s = \sum_{h_1=1}^{H} \sum_{t_1=0}^{S-1} a_{t_1}^{h_1} m_{s+t_1}^{h_1} + \sum_{h_1=1}^{H} \sum_{h_2=1}^{H} \sum_{t_1=0}^{S-1} \sum_{t_2=0}^{S-1} a_{t_1 t_2}^{h_1 h_2} m_{s+t_1}^{h_1} m_{s+t_1+t_2}^{h_2} + \sum_{h_1=1}^{H} \sum_{h_2=1}^{H} \sum_{h_3=1}^{H} \sum_{t_1=0}^{S-1} \sum_{t_2=0}^{S-1} \sum_{t_3=0}^{S-1} a_{t_1 t_2 t_3}^{h_1 h_2 h_3} m_{s+t_1}^{h_1} m_{s+t_1+t_2}^{h_2} m_{s+t_1+t_2+t_3}^{h_3} + \cdots, \quad (1)$$

where $H$ and $S$ denote the number of distinct local states and the number of spatial bins used to define the microstructure signal ($m_s^h$), and the $t_i$ enumerate the different vectors used to define the spatial correlations. The kernels, or influence coefficients, are defined by the set $\{a_{t_1}^{h_1}, a_{t_1 t_2}^{h_1 h_2}, a_{t_1 t_2 t_3}^{h_1 h_2 h_3}, \ldots\}$; $a_{t_1}^{h_1}$, $a_{t_1 t_2}^{h_1 h_2}$, and $a_{t_1 t_2 t_3}^{h_1 h_2 h_3}$ are the first, second, and third-order influence coefficients, respectively. The higher-order descriptions of the local microstructure are expressed by the products $m_{s+t_1}^{h_1} m_{s+t_1+t_2}^{h_2} \cdots m_{s+t_1+t_2+\cdots+t_N}^{h_N}$, where $N$ denotes the order.

The higher-order localization relationship shown in Eq. (1) is a particular form of Volterra series [134,135,143–146] and implicitly assumes that the system is spatially-invariant, causal, and has finite memory. The influence coefficients in Eq. (1) are analogous to Volterra kernels. The higher-order terms in the series are designed to capture systematically the nonlinearity in the system. The influence coefficients are expected to be independent of the microstructure, since the system is assumed to be spatially invariant. Application of the concepts of Volterra series directly to materials phenomena poses major challenges. Materials phenomena require an extremely large number of input channels (because of the complexities associated with microstructure representation) and an extremely large number of output channels (because of the broad range of response fields of interest). However, the microstructure representation described above allows a reformulation of the localization relationship into a form that is amenable for calibration to results from previously validated numerical models (e.g. finite element models, phase-field models) by linear regression in the Discrete Fourier transform space [127–131], which is one of the distinctive features of the MKS framework.
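A minimal sketch of this calibration idea follows, for the first-order terms of Eq. (1) in one dimension with $H = 2$. Because the DFT uncouples the regression into a small least-squares problem at each frequency, the influence coefficients can be fit frequency by frequency. Here a known synthetic kernel stands in for the previously validated physics-based model (e.g., an FE model) that would normally generate the calibration responses, so the fit can be verified on a held-out microstructure; all sizes and values are illustrative.

```python
# Sketch: calibrating first-order MKS influence coefficients by per-frequency
# linear regression in discrete Fourier transform (DFT) space. A known synthetic
# kernel emulates the validated physics-based model supplying responses p_s.
import numpy as np

rng = np.random.default_rng(2)
S = 32                                   # spatial bins (1-D for brevity), H = 2

def micro_signal(frac):
    """Two-state microstructure signal m[h, s] (one-hot indicator per voxel)."""
    phase = (rng.random(S) < frac).astype(float)
    return np.stack([1.0 - phase, phase])

# Synthetic "true" first-order influence coefficients a[h, t] (short-ranged).
a_true = np.zeros((2, S))
a_true[0, :3] = [0.5, 0.2, 0.1]
a_true[1, :3] = [1.5, 0.6, 0.2]

def response(m, a):
    """First-order localization p_s = sum_h sum_t a[h,t] m[h, s+t] (periodic)."""
    return np.real(sum(np.fft.ifft(np.conj(np.fft.fft(a[h])) * np.fft.fft(m[h]))
                       for h in range(2)))

# Calibration ensemble spanning a range of volume fractions.
M_list, P_list = [], []
for frac in np.linspace(0.2, 0.8, 40):
    m = micro_signal(frac)
    M_list.append(np.fft.fft(m, axis=1))            # M[h, k]
    P_list.append(np.fft.fft(response(m, a_true)))  # P[k]

# The DFT uncouples the fit: at each frequency k, solve the small least-squares
# problem P_k ~ sum_h conj(A[h, k]) M[h, k].
A_fit = np.zeros((2, S), dtype=complex)
for k in range(S):
    X = np.array([[Mk[0, k], Mk[1, k]] for Mk in M_list])
    y = np.array([Pk[k] for Pk in P_list])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    A_fit[:, k] = np.conj(beta)

# Validate on a held-out microstructure: predicted vs. "true" response field.
m_test = micro_signal(0.5)
P_pred = sum(np.conj(A_fit[h]) * np.fft.fft(m_test[h]) for h in range(2))
print("max prediction error:",
      np.max(np.abs(np.real(np.fft.ifft(P_pred)) - response(m_test, a_true))))
```

The per-frequency decoupling is what makes the regression tractable even for large spatial grids; in actual MKS studies the same structure carries over to 3-D grids and higher-order terms.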

A major challenge in the MKS framework is the calibration of the selected higher-order influence coefficients via linear regression. Although other approaches have been used in the literature to approximate the higher-order Volterra kernels (e.g. cross-correlation methods [145,146], artificial neural networks [144], and Bayesian methods [147,148]), the solutions obtained using an ordinary least squares fit [131] have generally performed best in the MKS case studies conducted to date. Fig. 3 demonstrates the accuracy of the MKS approach for predicting the local elastic response in an example microstructure with a contrast of 10 in the values of the Young's moduli of the local states. The MKS methodology has thus far been successfully applied to capturing thermo-elastic stress (or strain) distributions in composite materials, rigid-viscoplastic deformation fields in composite materials, and the evolution of the composition fields in spinodal decomposition of binary alloys [127–131].

Fig. 3. Demonstration of the accuracy of the MKS approach in predicting the local elastic fields in a two-phase composite system with a contrast of 10 in the elastic moduli of the constituent phases. The strain contours on a mid-section through the 3-D composite microstructure are shown for the Materials Knowledge System and the finite element model, along with the strain distributions in each phase in the entire 3-D volume element. The MKS approach produced excellent predictions at a fraction of the computational cost associated with the finite element method (FEM).

The latter two phenomena (rigid-viscoplastic deformations and spinodal decomposition) involve highly nonlinear governing field equations. However, the MKS framework has not yet been applied to highly complex microstructures such as those seen in polycrystalline material systems. For the compact inclusion of lattice orientation in the higher-order localization relationships, it would be necessary to introduce the spectral representations over the orientation space (used in the MSD framework) into the MKS framework described above.

The MSD and MKS approaches described in the preceding address the compact representation of high-fidelity microstructure–property–processing linkages at a selected length scale. In order to obtain viable materials design solutions, one needs to link relevant models from different length and time scales, quantify uncertainty in each model, and communicate the uncertainty throughout the model chain in a robust multi-level design framework.

In a later section, the Inductive Design Exploration Method (IDEM) will be introduced as a means of mitigating or managing uncertainty while facilitating top–down searches for candidate microstructures based on mappings compiled from bottom–up simulations. In general, the utility of IDEM might be greatest for applications involving highly heterogeneous datasets and/or material models for which the inverse design strategy afforded by the MSD or MKS approaches might be less clearly related to microstructure representation. Such cases may be typical of multi-physics phenomena with weak coupling to each other or to radically different length scales of microstructure, or of problems for which the microstructure–property/response relations run through a sequence of hierarchical models, with each transition characterized by a reduction of model order.

5. Characterization for model validation and process–structure models

Verification and validation (V&V) is a core component of ICME [7], yet it remains an underdeveloped aspect of simulation-supported design of materials and products within the ICME vision. Within systems engineering, verification refers to the process of ensuring that the "system is built right" (i.e., the system meets its requirements), whereas validation refers to building the "right system" (i.e., the system satisfies the users' needs) [149]. These are general definitions that apply to verification and validation of individual and multiscale models, design processes, and design outcomes. Within the context of simulation model development [150], verification and validation have specific definitions, which are consistent with the systems engineering definitions above:

• Verification: The process of determining that a model implementation accurately represents the developer's conceptual description of the model and the solution to the model.

• Validation: The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.

One of the most difficult tasks in validating models is the documentation of the material microstructure and its evolution in any selected process. As noted earlier, it is essential to address stochasticity of microstructure. Consequently, there is a critical need to establish rigorous protocols for documenting, in a statistically meaningful way, the rich three-dimensional (3-D) topological details of the microstructure that might span multiple length scales. These protocols need to establish reliable statistical measures to assess if a sufficient number of datasets have been acquired, and if the spatial extent and spatial resolution in the acquired datasets are adequate. The protocols should also address reconstructions of representative volume elements (RVEs) from the measured datasets. Since the microstructure has a stochastic character, it is unlikely that a single sample or statistical volume element of microstructure [24,25] can accurately represent the microstructure. Therefore, one aim is to establish protocols for extracting a set of such samples that accurately reflect the natural variance in the microstructure, such that the statistics of the ensemble can be statistically representative (i.e., statistically homogeneous).

Obtaining 3-D microstructure datasets from samples continues to be a major challenge for most advanced material systems. Impressive advances have been made in generating 3-D microstructure datasets [151–162]. Despite these advances, the acquisition of such datasets remains effort intensive, highly expensive, very slow, and inaccessible to the broader scientific community. Moreover, the volumes of materials studied using these approaches are often limited, which raises critical questions regarding their adequacy for reliably extracting desired microstructure measures and their variances.

If we restrict attention to 2-point and 3-point statistics of the microstructure (these constitute the most important spatial correlations), it should be possible to acquire and assemble 3-D descriptions of these statistics from large 2-D microstructure scans on multiple oblique sections into the sample [138,163]. This is because all of the vectors used to define the 2-point and 3-point statistics can be populated on carefully selected oblique sections into the sample. Since it is significantly less expensive to obtain large 2-D microstructure scans in most materials, this approach is much more practical for statistically meaningful characterization of the microstructure in a given sample set.

It is important to emphasize that the complete set of 2-point statistics is an extremely large set. Compact and meaningful grouping of the dominant 2-point statistics in a given material system is an area of active current research. It has long been known that the complete set of 2-point statistics exhibits many interdependencies. Gokhale et al. [164] showed that at most N(N − 1)/2 2-point correlations are independent for a material system comprised of N distinct local states. Niezgoda et al. [165] recently showed that only (N − 1) of the 2-point correlations are actually independent by exploiting known properties of the Discrete Fourier transform representation of the correlations. However, for a material system comprising a very large number of local states (e.g. a polycrystalline material system), even tracking (N − 1) spatial correlations is an arduous task. It was recently demonstrated that techniques such as principal component analysis (PCA), commonly used in face recognition and pattern recognition, can be used to obtain objective low-dimensional representations of the 2-point statistics [21,22], as shown in Fig. 4. Since the n-point statistics are smooth analytic functions with a well-defined origin, PCA decomposition is quite effective in establishing an uncorrelated orthogonal basis for any ensemble of microstructure datasets. The PCA decomposition allows one to represent each microstructure as a linear combination of the eigenvectors of a correlation matrix, with the weights interpreted in the same manner as Fourier coefficients. The weights then form an objective low-dimensional representation of the much larger set of independent 2-point statistics. The PCA representations allow simple visualization of an ensemble of microstructures, and a rigorous quantification of the microstructural variance within the ensemble [120] (see Fig. 4). The same concepts can also be used effectively to define weighted sets of statistical volume elements (WSVEs) as an alternative to identifying RVEs for a given ensemble of microstructure datasets [22,166,167].
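As a concrete illustration of this reduced-order representation, the following sketch computes one 2-point correlation (the autocorrelation of a phase indicator) via the FFT for an ensemble of two-phase microstructures and projects the results into a low-dimensional PCA space. The function names and the synthetic ensemble are illustrative assumptions, not code from the cited studies.

```python
# Sketch: 2-point statistics via FFT (assuming periodic microstructures)
# followed by PCA; the random ensemble is a stand-in for real datasets.
import numpy as np
from sklearn.decomposition import PCA

def two_point_autocorrelation(phase_map):
    # f(t) = (1/S) * sum_s m_s * m_{s+t} for a binary phase indicator m_s
    F = np.fft.fftn(phase_map)
    return np.fft.ifftn(F * np.conj(F)).real / phase_map.size

# Placeholder ensemble of 2-D two-phase microstructures.
rng = np.random.default_rng(0)
ensemble = [(rng.random((64, 64)) > 0.5).astype(float) for _ in range(50)]
stats = np.array([two_point_autocorrelation(m).ravel() for m in ensemble])

# Each microstructure becomes a point (alpha_1, alpha_2, alpha_3) in PCA
# space, as in Fig. 4; convex hulls of such points visualize ensemble variance.
pca = PCA(n_components=3)
alphas = pca.fit_transform(stats)
```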

Fig. 4. Visualization of ensembles of microstructure datasets and their variance using PCA representations of the 2-point statistics. Three ensembles of microstructure datasets are shown with high, mid, and low variance. Each point in the figure corresponds to a complete set of 2-point statistics, projected in the first three dimensions (α1, α2, α3) of the PCA space [21]. The αi values in the figure are the corresponding PCA weights. The points corresponding to each ensemble are enclosed in convex hulls to help visualize the space spanned by each ensemble. Note that the range of the PCA weights decreases substantially with each new PCA dimension.

Successful pursuit of the ICME vision requires integration of all of the different components described earlier. The NMAB report on ICME [7] specifically identifies three main tasks for integration tools: (i) linking information from different sources and different knowledge domains, (ii) networking and collaborative development, and (iii) optimization. Certainly, collaborative multidisciplinary design optimization presents challenges due to the geographically distributed nature of the materials supply and product design enterprises, often undertaken in different corporations and in different divisions, as well as in consulting and university laboratories [15]. Major emphasis to date on design integration has been placed on use of commercial software [13,14] that includes capabilities to manage databases and pursue various optimization strategies. With regard to optimization, materials design differs substantially from conventional problems in design optimization owing to the complexity, nonlinearity, and uncertainty [168,169] of material process–structure and structure–property models. It is not merely a matter of identifying and deploying existing tools; rather, basic research issues at the intersection of computational materials science and engineering, experimental mechanics and processing, and multidisciplinary design optimization must be identified and addressed.

As mentioned earlier, one of the most difficult challenges in the integration is the visualization of microstructure–property–processing linkages in a common space. This is exacerbated by the fact that a comprehensive description of the microstructure requires a very large number of statistics or spatial correlations. Therefore, reduced-order representations of the microstructure are essential. The PCA decompositions of the n-point statistics discussed earlier offer exciting opportunities to build new integration tools. As an example, consider the process design problem where the objective is to identify hybrid processing routes, comprising an arbitrary sequence of available manufacturing routes, aimed at transforming a given initial microstructure into a desired one. Fig. 5 shows schematically how one might address the process design problem by representing the microstructure–property–processing linkages in a common (microstructure) space. The green volume in Fig. 5 depicts a microstructure hull corresponding to an ensemble of digitally created porous solids [22]. Each point in this space represents a microstructure (an example is depicted in the top right in Fig. 5) through the PCA weights of its 2-point correlations.

Fig. 5. Illustration of the compact representation of microstructure–property–processing linkages using the reduced-order representations of the microstructure obtained from principal component analyses of higher-order statistics (2-point statistics). The microstructure hull with processing pathlines is shown in the space of the first three PCA weights; the insets show an example microstructure and the predicted modulus (MPa) versus the second principal component, with the relative error of the predictions. The αi values in the figure represent the PCA weights in the reduced-order representation of the n-point spatial correlations. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

The elastic moduli for all microstructure members in the selected ensemble were evaluated using finite element models (FEM), and were correlated to the first two PCA weights using artificial neural networks. The correlation and its validation are depicted in the contour plot on the bottom right in Fig. 5. It is seen that the microstructure–modulus correlation provides excellent predictions for various members of the ensemble (with less than 5% error) [22]. It is noted that the PCA weights of the microstructure can also be related to properties of interest using various regression methods.
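The structure–property surrogate just described can be sketched as follows, with a generic neural network regressor standing in for the artificial neural networks of Ref. [22]; the training data below are synthetic placeholders, since the FEM-evaluated moduli of the cited study are not reproduced here.

```python
# Sketch: correlating reduced-order microstructure descriptors (PCA weights)
# with a property. MLPRegressor is a stand-in for the ANN used in the cited
# study; the data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
alphas = rng.normal(size=(200, 2))                              # first two PCA weights
moduli = 6.5 + 0.2 * alphas[:, 0] + 0.05 * alphas[:, 1] ** 2    # placeholder FEM results

X_tr, X_te, y_tr, y_te = train_test_split(alphas, moduli, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X_tr, y_tr)
rel_err = np.abs(surrogate.predict(X_te) - y_te) / y_te          # validation error
```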

Fig. 5 illustrates schematically the superposition of processing information on the microstructure hull in the form of process pathlines. In any selected processing option, the salient features of the microstructure are expected to change, which can be visualized as a pathline in the multi-dimensional microstructure space. Different processing operations are expected to change the microstructure in different ways, and therefore there are likely to be different processing pathlines intersecting at each microstructure. Identification (either experimentally or by modeling) of all the processing pathlines in the microstructure hull corresponding to all available processing options that may be exercised results in a large database (or a process network). Once such a database is established, it should be possible to identify the ideal hybrid process route (based on cost or another criterion such as sustainability) that will transform any selected initial microstructure into one that has been predicted to exhibit superior properties or performance characteristics. The basic concept was evaluated recently in a simple case study that aimed at transforming the given initial crystallographic texture in a sample into a desired one [123]. In all examples considered, the process network identified solutions that could not be arrived at by pure intuition or by brute-force methods (including constant performance maximization or repeated trials). There is a critical need to further develop this framework to include higher-order descriptions of microstructure (possibly based on n-point statistics) and higher-order homogenization theories (e.g. the MKS framework).

This linkage of models that explicitly consider microstructure to process path design is a key underdeveloped technology within the ICME paradigm, as discussed earlier in reference to Fig. 2.

6. Uncertainty in ICME

One of the key enablers of the ICME paradigm is the ability to account for different aspects of uncertainty within models, experiments, and the design process. Uncertainty is a first-order concern in the process of simulation-supported, decision-based materials design. It is often overlooked in discussion of ICME and materials design, possibly because it is assumed that suitable approaches already exist within systems design and uncertainty management methods. This is generally not the case in addressing the nuances and complexity of multi-level materials design. Only two paragraphs were devoted to uncertainty in the NMAB ICME report [7], and little discussion was devoted to MDO aspects and the richness of required verification and validation approaches; however, these are perhaps among the most critical scientific obstacles to realizing ICME owing to its nascent stages of formulation. The ICME report focused more on model integration/communication, i.e., assembling and networking modeling and simulation tools along with accompanying supportive experimental methods. There are various sources of uncertainty in addressing ICME, listed non-exhaustively and without classification as [15]:

• Variability induced by processing, operating conditions, etc., and due to randomness of microstructure.

• Incomplete knowledge of model parameters due to insufficient or inaccurate data, including initial and boundary conditions.

• Uncertain structure of a model due to insufficient knowledge (necessitating approximations and simplifications) about a physical process such as process–structure or structure–property relations.

• Uncertainty in meshing and numerical solution methods.

• Propagation of uncertainty through a chain of models (amplification or attenuation).

• Experimental idealizations and measurement error.

• Lack of direct connection of measured quantities to model degrees of freedom.

Accounting for uncertainty involves four aspects: uncertainty quantification, uncertainty propagation, uncertainty mitigation, and uncertainty management. Uncertainty quantification is the process of identifying different sources of uncertainty and developing corresponding mathematical representations. There are perhaps several unique aspects of materials with regard to uncertainty quantification [168]:

• The long-range character of defect/structure interactions in materials necessitates complex nonlocal formulations of continuum field equations or corresponding coarse-grain internal variables. An economics analogy might be consumer reaction to market instabilities abroad.

• Use of 2-D images to reconstruct 3-D microstructures to estimate properties/responses is often necessitated by the lack of 3-D quantitative microstructure information, which is expensive and time-consuming to acquire; moreover, measured microstructures of new materials of interest to materials design do not yet exist, and predicted microstructures may not be thermodynamically feasible or metastable.

• Extreme value (rare event) problems are common in failure of materials, such as localization of damage within the microstructure in fracture or high cycle fatigue. An analogy might be insurance company estimation of probabilities of rare events such as 1000-year floods, tsunamis, and so forth.

• The large range of equations used in solving field equations of nonlinear, time-dependent, multi-physics behavior of materials, including constraints, side conditions, jump conditions, thresholds, etc., which may vary from group to group (i.e., little standardization).

Uncertainty propagation involves determining the effect of input uncertainties on the output uncertainties, both for individual models and chains of models. The combination of uncertainty representation and propagation is sometimes referred to as uncertainty analysis. Uncertainty mitigation utilizes techniques for reducing the impact of input uncertainties on the performance of the material system. Finally, uncertainty management involves deciding the levels of uncertainty that are appropriate for making design decisions and prioritizing uncertainty reduction alternatives based on the tradeoff between efforts and benefits. While uncertainty quantification and propagation activities are carried out by analysts or model developers, uncertainty mitigation and management are activities performed by system designers during the design process. These four aspects and corresponding techniques are discussed in a fairly general way in Sections 6.1 through 6.4, followed by a brief discussion of some of the unique aspects associated with multiscale modeling and design of materials.

6.1. Uncertainty quantification

During the past few decades, a number of different classifications of uncertainty have been proposed. Different classifications have emerged within diverse fields such as economics and finance, systems engineering, management science, product development, and computational modeling and simulation. Uncertainty classification has also evolved over time; e.g., within economics the classification has evolved from (a) risk and uncertainty in the early 20th century, to (b) environmental and behavioral uncertainty in the mid 20th century, to the recent classification of fundamental uncertainty and ambiguity [169]. Uncertainty is also related to the concept of risk, which can be defined as a triplet of scenarios, likelihood of scenarios, and possible consequences [170]. Uncertainty gives rise to risk. While risk is always associated with undesirable consequences, uncertainty may have desirable consequences [171]. An understanding of these classifications is necessary to properly position the methods for design under uncertainty.

Within computational modeling and simulation, uncertainty has been classified from different standpoints. Chen et al. [172] classify uncertainty into three types: Type 1 uncertainty associated with inherent variation in the physical system, Type 2 uncertainty associated with lack of knowledge, and Type 3 uncertainty associated with known limitations of simulation models. With the specific goal of materials design, Choi et al. [173] classify uncertainty associated with different aspects of simulation models into model parameter uncertainty, model structural uncertainty, and propagated uncertainty.

The most widely adopted classification splits uncertainty into aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty is due to the inherent randomness in material systems, such as nearest-neighbor distributions of grain size, orientation, and shape in metallic polycrystals. This uncertainty is also referred to as irreducible uncertainty, statistical variability, inherent uncertainty, or stochastic uncertainty. Aleatory uncertainty associated with parameters is commonly represented in terms of probability distributions. Probability theory is very well developed and hence has served as a language and mathematics for uncertainty quantification. This is one of the reasons that aleatory uncertainty has received significant attention both in engineering design and computational materials science.

Epistemic uncertainty, on the other hand, refers to the lack of knowledge, which may be due to the use of idealizations (simplified relationships between inputs and outputs), approximations, unanticipated interactions, or numerical errors. Examples include the size of the representative volume element (RVE) of microstructure used to predict properties or responses, simplified material constitutive models, including the form and/or structure of such models, and approximate model parameters. Epistemic uncertainty is also referred to as reducible uncertainty, subjective uncertainty, and/or model-form uncertainty. Attempts have been made to use a Bayesian approach to quantify epistemic uncertainty using probability functions. Under the Bayesian framework for epistemic uncertainty, the subjective interpretation of probability functions is used [174]. Subjective interpretation of probability is different from the frequentist interpretation in the sense that subjective probabilities refer to degree of belief rather than the frequency of occurrence. The subjective interpretation allows for modeling all types of uncertainty (epistemic and aleatory) in a common framework, which is convenient for uncertainty analysis. The Bayesian approach involves determining prior probability distributions of uncertain parameters that represent the modeler's initial belief about uncertainty, and using Bayes' theorem to update the probability functions based on new pieces of information (e.g., from experiments).
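For illustration, the sketch below shows the simplest form of such a Bayesian update: a conjugate normal–normal update of a single uncertain model parameter from new measurements. The parameter, its prior, and the noise level are hypothetical placeholders, not values from the paper.

```python
# Sketch: Bayesian updating of an uncertain parameter, assuming a normal
# prior and a normal likelihood with known measurement variance (conjugate
# update). All numbers are illustrative placeholders.
import numpy as np

mu_prior, var_prior = 200.0, 25.0 ** 2    # prior belief (e.g., a strength parameter, MPa)
noise_var = 10.0 ** 2                     # assumed measurement variance
data = np.array([215.0, 222.0, 218.0])    # new experimental observations

# Posterior precision is the sum of prior and data precisions.
n = data.size
var_post = 1.0 / (1.0 / var_prior + n / noise_var)
mu_post = var_post * (mu_prior / var_prior + data.sum() / noise_var)
# The posterior mean shifts toward the data as evidence accumulates, and the
# shrinking posterior variance reflects reduced epistemic uncertainty.
```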

However, by the very nature of epistemic uncertainty, it is inherently difficult or impossible to characterize probability functions associated with limited knowledge such as model idealization. An incorrect representation of uncertainty may result in wrong decisions. Hence, different types of mathematical techniques are needed for representing epistemic uncertainty. Alternative approaches for representing epistemic uncertainty include fuzzy set theory, evidence theory, possibility theory, and interval analysis [175]. Different representations are suitable under different situations.


While there has been significant progress in accounting for aleatory uncertainty using probability theory, little progress has been made on (a) holistic quantification of epistemic uncertainty within multiscale material models, and (b) simultaneous consideration of different mathematical constructs for representing both aleatory and epistemic uncertainty in an integrated manner. We regard this as one of the primary challenges facing ICME.

6.2. Uncertainty propagation

Within statistics, uncertainty propagation refers to the effect of uncertainties in independent variables on the dependent variables. In the context of simulation-supported design (decisions based at least in part on modeling and simulation), this translates to the effect of input uncertainty on the output uncertainty. Of course, the same general logic applies to input–output relations in experiments, which assert certain models in their own right and have error and approximation. In addition to the uncertainty at individual scales, materials design is associated with propagation of uncertainties in networks of interconnected models, which may cause either amplification or reduction of uncertainties in serial application of models. Hence, for the realization of ICME, techniques are required for (a) propagating uncertainty from the inputs of one model to its outputs, (b) propagating uncertainty from lower-level models to upper-level models within a model chain, and (c) propagating uncertainty from material composition to structure through process route, from material microstructure to material properties or responses, and from material properties to product performance.

A number of techniques such as Monte Carlo techniques and global sensitivity analysis are available for propagating uncertainty in the inputs (represented as random variables) to the outputs. Probability-based techniques have also been developed for propagating uncertainty across models. Lee and Chen [176] discuss five categories of methods for (typically aleatory) uncertainty propagation: Monte Carlo methods, local expansion based methods (e.g., Taylor's series expansion), most probable point (MPP) based methods (e.g., first order reliability method and second order reliability method), functional expansion based methods (e.g., polynomial chaos expansion), and numerical integration based methods. A comparison of full factorial numerical integration, the univariate dimensional reduction method, and polynomial chaos expansion is presented in Ref. [176].
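The Monte Carlo route is the most transparent of these: sample the uncertain inputs, push each sample through the model chain, and summarize the output ensemble. The sketch below propagates input uncertainty through a hypothetical two-model (process, structure, property) chain; the models and distributions are placeholders, not from the paper.

```python
# Sketch: Monte Carlo uncertainty propagation through a chain of two
# hypothetical models (process -> structure -> property).
import numpy as np

rng = np.random.default_rng(0)

def process_to_structure(temperature):
    # Hypothetical process-structure model: grain size from temperature,
    # with additive scatter representing process variability.
    return 0.05 * temperature + rng.normal(0.0, 1.0, size=temperature.shape)

def structure_to_property(grain_size):
    # Hypothetical Hall-Petch-type structure-property model.
    return 150.0 + 350.0 / np.sqrt(np.clip(grain_size, 1e-3, None))

temperature = rng.normal(450.0, 15.0, size=100_000)   # uncertain input
strength = structure_to_property(process_to_structure(temperature))

# The output ensemble quantifies the propagated uncertainty, including tails.
print(strength.mean(), strength.std(), np.quantile(strength, [0.01, 0.99]))
```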

The primary challenge with Monte Carlo techniques is that they are computationally intensive in nature. Repeated execution of simulation models may be computationally expensive and generally infeasible when using multiscale models with a number of sources of uncertainty. Hence, efficient techniques for uncertainty propagation are necessary. Chen and co-authors [172] have developed techniques for efficient uncertainty propagation using metamodels, but the use of metamodels in multiscale modeling, where a sound physical basis must exist for models at fine scales, has not been fully acknowledged or pursued. There is strong suspicion that metamodels are, in general, too limited for this task. Yin et al. [177] use random fields to model uncertainty within material microstructure and use dimension reduction techniques for efficient uncertainty propagation. Methods for propagation of uncertainties represented as intervals are also available [178]. However, approaches for propagating different types of uncertainty in a single framework are in early stages.

A related issue is whether to engage description of stochasticity at each level of simulation in terms of parameter distributions (e.g., variation of microstructure, model parameters), or to use many deterministic realizations of microstructure and corresponding models/parameters extracted from assigned probabilistic distributions at some "primal" scale to propagate uncertainty to higher levels of predicted response(s). The latter approach is straightforward and tractable, but requires identification of some scale as a starting point at which finer-scale response can be treated as deterministic, which of course does not fully accord with physics. From a practical perspective, Table 1 suggests that each property or response may have a dominant minimal scale which can serve as a starting point for applying deterministic models with probability distributions of inputs to propagate through a multiscale modeling framework. On the other hand, the temptation with a fully stochastic approach is to avoid the hard but fruitful work of assessing the viability of such a scale, perhaps leading to an early concession to Bayesian approaches to provide answers where the physics is complex or microstructure-sensitive models are limited or avoided. In this regard, there are some recent works that merge the two approaches in multiscale modeling in a manner that may provide promising avenues for future applications [179].

6.3. Uncertainty mitigation

Uncertainty mitigation refers to reducing the effect of uncertainty on systems design. Techniques such as robust design and reliability-based design optimization (RBDO) have been developed in the systems design community to achieve desired system performance in the presence of uncertainty. Robust design techniques are based on the principle of minimizing the effects of uncertainty on system performance [180], whereas reliability-based design optimization techniques are based on finding optimized designs characterized by low probability of failure [181]. Note that reducing the effect of uncertainty on design using these approaches is different from reducing the (epistemic) uncertainty itself. In robust design and RBDO approaches, the levels of uncertainty are generally assumed to be fixed (or given). The goal is to determine the best design that is insensitive to the given uncertainty or has low probability of failure.
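A minimal sketch of the robust design idea follows: with the input uncertainty held fixed, the design variable is chosen to trade off mean performance against variability. The mean-plus-k-sigma composite objective is one common robust formulation; the performance function and numbers are hypothetical.

```python
# Sketch: robust design with fixed (given) input uncertainty, minimizing a
# mean-plus-k-sigma composite objective over a single design variable.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.05, size=5000)   # fixed input uncertainty (given)

def performance(x):
    # Hypothetical response whose sensitivity to noise grows with |x|.
    return np.sin(3.0 * x) + (1.0 + 2.0 * x ** 2) * noise

def robust_objective(x, k=3.0):
    y = performance(x)
    return -(y.mean() - k * y.std())       # favor high mean, low variability

result = minimize_scalar(robust_objective, bounds=(-1.0, 1.0), method="bounded")
# result.x is the robust design; compare with the maximizer of the mean alone.
```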

Robust design techniques are rooted in Taguchi's quality engineering methodology [182]. One of the earliest robust design approaches for systems design is the Robust Concept Exploration Method (RCEM) [183], which combines statistical techniques of surrogate modeling and compromise decision support problems [184] to perform robust design. RCEM has been extended to account for other sources of uncertainty (e.g., uncertainty in input variables) [185], and to develop product platforms [186]. The multi-disciplinary design optimization (MDO) community has developed techniques for optimizing systems in the presence of uncertainty [187–190].

Existing techniques for multi-level deterministic optimization have also been extended to account for uncertainty. An example is the extension of the Analytical Target Cascading (ATC) approach to Probabilistic Analytical Target Cascading (PATC) [191]. ATC is an approach for cascading overall system design targets to sub-system specifications based on a hierarchical multi-level decomposition [192], similar to the concurrent material and product design problem shown in Fig. 1, when layered with the understanding of Olson's Venn diagram. The subsystem design problems are formulated as minimum deviation optimization problems, and the outputs of the subsystem problems are used as inputs to the system-level problem. Within PATC, uncertainty is modeled in terms of random variables, and uncertainty propagation techniques are used to determine the impact of uncertainties at the bottom level on parameters at the top level.

While the analytical target cascading approach is based on choosing a design point during each iteration and repeating the optimization loops until convergence is achieved, approaches such as the Inductive Design Exploration Method (IDEM) use ranges of parameters and design variables [15,193,194]. The primary advantage of working with ranged sets of design points is the possibility of parallelization of the bottom–up simulations that project composition onto structure via process route and structure onto properties.

Fig. 6. Inductive Design Exploration Method [15,193–195]. Step 1 involves bottom–up simulations or experiments, typically conducted in parallel fashion, to map composition into structure and then into properties, with regions in blue showing the feasible ranged set of points from these mappings. Step 2 involves top–down evaluation of points from the ranged set of specified performance requirements that overlap feasible regions (red) established as subsets of bottom–up simulations in Step 1. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

IDEM facilitates design while accounting for uncertainty in models and its propagation through chains of models. IDEM [193,194] has two major objectives: (i) to explore top–down, requirements-driven design, guiding bottom–up modeling and simulation, and (ii) to manage uncertainty in model chains. Fig. 6 illustrates the concept of IDEM. We consider multiple spaces of different dimension (DOF), between which models, simulations or experiments serve as mappings that project points in a given space (e.g., composition) to the next (e.g., microstructure) and to the final (e.g., properties/responses). First we must configure the design system, including information flow between specific databases and modeling and simulation tools, and specify a ranged set of performance requirements. Then, the steps in IDEM, shown in Fig. 6, are as follows:

1. Perform parallel discrete-point evaluations at each level of the design process (bottom–up simulations and experiments).

2. Perform inductive (top–down) feasible space exploration based on metamodels from Step 1.

3. Obtain a robust solution with respect to model uncertainty.

Projected points need not comprise simply connected regions in each space, but we plot them as such for simplicity. There is also provision for reconfiguring the design system in case there is no overlap between performance requirements and feasible ranges of the bottom–up projections, and/or to mitigate uncertainty. Mitigation of uncertainty is a second primary feature of IDEM. If two candidate solutions exist, say Designs 1 and 2, for which final performance is identical, then which design should we select? In general, we select solutions that lie the furthest from the constraint boundaries in each space, including the boundaries of feasible regions established in Step 1 of IDEM (blue regions in Fig. 6). Current robust design methods cannot address this issue since they focus only on the final performance range. The fundamental difference between IDEM and other multi-level design methods is that IDEM involves finding ranged sets of design specifications by passing feasible solution spaces from final performance requirements, by way of an interdependent response space, to the design space, while preserving the feasible solution space to the greatest extent possible.
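The ranged-set logic can be sketched in a few lines for a single design variable and a single response: bottom–up evaluations with uncertainty intervals are computed in parallel over a discretized design space (Step 1), designs whose entire response interval satisfies the ranged performance requirement are retained (Step 2), and the design furthest from the requirement boundaries is preferred (Step 3). All functions, grids, and bounds are hypothetical placeholders, not the formal machinery of the cited works.

```python
# Sketch: ranged-set (IDEM-style) exploration for one design variable.
import numpy as np

def bottom_up_model(x):
    # Hypothetical mapping from design variable to response, returned as an
    # interval that accounts for model uncertainty.
    y = 2.0 * x + 0.5 * np.sin(5.0 * x)
    return y - 0.2, y + 0.2

# Step 1: parallel discrete-point evaluations over the design space.
designs = np.linspace(0.0, 1.0, 201)
y_lo, y_hi = bottom_up_model(designs)

# Step 2: top-down exploration -- keep designs whose entire uncertain
# response interval lies inside the ranged performance requirement.
req_lo, req_hi = 0.8, 1.6
feasible = (y_lo >= req_lo) & (y_hi <= req_hi)

# Step 3: prefer robust solutions far from the requirement boundaries.
margin = np.minimum(y_lo - req_lo, req_hi - y_hi)
best_design = designs[feasible][np.argmax(margin[feasible])]
```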

Choi and co-authors [195] use the IDEM framework to design a multi-functional energetic structural material (MESM) using a multiscale model consisting of a microscale discrete particle mixture (DPM) model and a continuum non-equilibrium thermodynamic mixture (NTM) model. The inputs of the DPM model are volume fractions and mean radii of the MESM constituents, and the output is the weighted average temperature of local hotspots. The inputs of the NTM model include volume fractions and critical temperature for reaction initiation, and the output is the accumulated reaction product. For each input–output mapping, the input space is discretized using experimental design techniques. The model is executed at discrete points considering various types of uncertainty (discussed in Section 6.1). Due to the uncertainties, each point maps to a subspace in the output space. Specific techniques have been developed to account for inputs shared across multiple levels. After generating the mappings between different levels, measures such as the Hyperdimensional Error-Margin Index (HD-EMI) are used to determine the robustness of the design to various forms of uncertainty while satisfying the problem constraints. Using the IDEM approach, Choi and co-authors [195] determine robust combinations of design variables (volume fractions and radii of constituents) to achieve the desired accumulation of reaction products. Other examples of IDEM implementation are discussed in [15,193,194].

The forward mappings in Step 1 of IDEM pertain either to multiscale models, in which information from a model at one scale is projected onto inputs into higher-scale models, or multi-level models, in which composition is mapped into structure and structure into properties. By remaining as far as possible from constraints, IDEM attempts to pursue solutions that are less sensitive to uncertainty, particularly in terms of solutions that avoid infeasible designs. The IDEM process accounts for the effect of aleatory uncertainty in the input parameters (such as randomness in particle sizes and randomness in simulated microstructure) on the outputs of the multiscale model, along with the propagation of aleatory uncertainty along a design chain. Instead of using a concurrent multiscale modeling approach, an information-passing approach is used in IDEM. The primary distinction is that instead of passing information about single design points throughout the model chain, ranged sets of solutions are passed across different levels. This allows for independent execution of models at different scales, which simplifies the uncertainty propagation within models. Since the process of uncertainty analysis is decoupled from design exploration, IDEM can be flexibly extended to other forms of uncertainty using existing uncertainty propagation techniques. Recently, IDEM has been extended to account for model parameter uncertainty [196]. Further work needs to be carried out to account for other representations of epistemic uncertainty such as intervals and fuzzy sets.

6.4. Uncertainty management

Uncertainty management is the process of deciding the levels of uncertainty that are appropriate for design decisions and prioritizing uncertainty reduction alternatives based on the tradeoff between efforts and benefits. The focus in uncertainty management is on deciding when to reduce epistemic uncertainty by gaining additional knowledge and when to reduce the complexity of the design process by using simpler models and design processes. The notion of introducing uncertainty by using simplified models during the early conceptual design phases has received little attention from the engineering design and materials science communities.

When designing a material system, the focus is on achieving properties such as strength and ductility that address performance requirements. In contrast, when managing uncertainty in the design process, focus is placed on design-process goals such as reducing the complexity of the design process and achieving the best design in the shortest possible time. The notion of designing the design process is also important because of (a) evolving simulation models, resulting in multiple fidelities of models at different stages of a design process, and (b) significant model development and execution costs, necessitating judicious use of computational resources. Moreover, the need for accurate information depends on whether the goal is to narrow down the alternatives to a specific class of materials (i.e., during the early design phase) or to optimize the composition and structure of a specific material (i.e., during the later stages of design). In the early phases of design, models and accurate representations of uncertainty may not be available; they may also not be necessary. Uncertainty management can be applied to problems such as refinement of simulation models [197], selection of models from a set of available models [198], and reducing the complexity of the design process [199].

A potential approach to uncertainty management is to utilize concepts and mathematical constructs from information economics [200,201], such as the value of information, to quantify the cost/benefit tradeoff and make the best design decisions in the presence of uncertainty. Information economics is the study of the costs and benefits of information in the context of decision-making. Metrics from information economics, specifically the expected value of information, are extended to account for both statistical variability and imprecision in models. The value of this added information refers to the improvement in a designer's decision-making ability. Mitigation of uncertainty in simulation models is effectively analogous to the acquisition of a new source of information, and the set of results from the execution of the refined model is analogous to additional information for making a decision.

Different measures have been developed for quantifying the benefit of uncertainty mitigation in individual models [197,202]. These include (a) the improvement potential, and (b) the process performance indicator (PPI). The improvement potential quantifies the maximum possible improvement in a designer's decision that can be achieved by refining a simulation model [197] (e.g., through increasing fidelity, improving the physical basis of the model, etc.). If the improvement potential is high, the design decision can possibly be improved by refining the simulation model. However, if the improvement potential is low, then the possibility of improving the design solution is low; hence, the current level of refinement of the model is appropriate for decision-making. The main assumption is that the bounds on the outputs of models, which translate to bounds on payoff, are available throughout the design space. The improvement potential measure has been applied to problems involving simulation model refinement [197] and the coupling between simulation models at different scales [15,199].

The PPI is developed for scenarios where information about upper and lower bounds on the outputs of models is not available, but it is possible to generate accurate information at a few points in the design space [198,203]. This is generally the case when physical experiments are expensive and can only be executed at a few points in the design space. Another application of the PPI is when there are multiple models (or refinements of the same model) that a designer can choose from, and one of the models is known to be the most truthful. The PPI is calculated as the difference in payoff between the outcomes of decisions made using the most truthful model and the outcomes achievable using less complex models.
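The following sketch conveys the flavor of such a payoff comparison: decisions are made with a cheap model and with the most truthful model, and the indicator is the payoff difference between the two decisions, both evaluated under the truthful model. The models and payoffs are hypothetical placeholders, not the formal PPI definition of Refs. [198,203].

```python
# Sketch: a PPI-style payoff comparison between a truthful model and a
# cheaper, biased model; all functions are hypothetical placeholders.
import numpy as np

designs = np.linspace(0.0, 2.0, 400)

def payoff_truthful(x):
    return -(x - 1.3) ** 2 + 2.0    # stand-in for the most truthful model

def payoff_cheap(x):
    return -(x - 1.0) ** 2 + 2.1    # stand-in for a less complex model

x_cheap = designs[np.argmax(payoff_cheap(designs))]     # decision from cheap model
x_true = designs[np.argmax(payoff_truthful(designs))]   # decision from truthful model

# Indicator: payoff lost by deciding with the cheaper model, with both
# outcomes evaluated under the most truthful model.
ppi_like = payoff_truthful(x_true) - payoff_truthful(x_cheap)
```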

The improvement potential and the process performance indicator account for the benefits of gaining additional information by reducing epistemic uncertainty, but do not directly capture the effort involved in reducing uncertainty. This is due to a lack of holistic models that predict the effort involved in developing and executing simulation models. The effort models must be predictive in nature because they need to be employed before the simulation is developed or refined. An effort model for ICME would be based on the cost models from software development and systems engineering [204]. The effort model accounts for three dimensions of effort: person-hours, monetary cost, and computational time. Using this effort model, measures that account for both value and effort are under development.

Constructs based on information economics can also be used to choose between a concurrent multiscale modeling approach and a hierarchical multiscale modeling approach. In the case of concurrent multiscale modeling, models at various scales are tightly coupled with each other. In contrast, hierarchical multiscale modeling approaches typically involve passing information between models in the form of ranges of values or surrogate models. Due to the tight coupling in concurrent multiscale models, it is more challenging to identify the impact of specific sources of uncertainty and specific models on the overall output uncertainty. Additionally, the impact of uncertainty propagation through individual models is difficult to evaluate. Hence, it is more challenging to determine which models within a multiscale framework should be refined to reduce the impact of uncertainty, or which models can be simplified to reduce the complexity of the process for initial design exploration purposes. Information economics can be used to strike a balance between the complexity of concurrent multiscale models and the loss of information in hierarchical multiscale models. These are very important practical considerations for tools for addressing uncertainty in materials design.

Efficient and effective means for uncertainty management must be integrated with the materials design strategy, and can be leveraged tremendously by incorporation of parallelized computational evaluations of process–structure and structure–property relations. Clearly, the parametric nature of the underlying modeling and simulation necessary to quantify sensitivity to variation of material structure must rely on increasing computational power to enhance feasibility and utility.


7. Verification and validation

Verification and validation were defined at the beginning of Section 5. In the context of ICME, the pursuit of V&V consists of the following activities:

1. Individual model V&V: a single model focusing on single length and/or time scales.

2. Multiscale model V&V: a single model or coupled set of models comprising a simulation operating over multiple length/time scales in a concurrent or hierarchical manner.

3. Multi-physics model V&V: ensuring that the implementation of a modeling framework spanning multiple phenomena is consistent mathematically and physically.

4. Design process validation: ensuring that the configured design process will yield a solution that satisfies design requirements.

5. Design (outcome) validation: comparing design outcomes to system-level requirements.

It is noted that this decomposition is much broader than typical notions of V&V applied either to models or design processes.

7.1. Individual model V&V

Development of a computational model involves various steps such as conceptual modeling, mathematical modeling, discretization and algorithm selection, computer programming, numerical solution, and solution representation [205]. Uncertainties and errors may be introduced during each of these steps. The process of verification and validation of simulation models is illustrated in Fig. 7. It consists of the following tasks [206]:

• Conceptual model validation (also called theoretical validation): the process of substantiating that the theories and assumptions underlying a model are correct and that the representation of the system is reasonable for the intended simulation study.

• Model verification: the process of assuring that a computerized model is a sufficiently accurate representation of the corresponding conceptual model.

• Operational validation: the process of determining whether the computerized model is sufficiently accurate for the needs of the simulation study.

• Data validation: the process of ensuring that all numerical data used to support the models are accurate and consistent.

Verification and validation of models has recently received significant attention in the simulation-based design community. The Bayesian framework is one of the commonly adopted frameworks for model validation [207]. Bayesian techniques for model validation have also been extended to include epistemic uncertainty regarding statistical quantities [208]. Another widely adopted technique involves concurrently validating models and exploring the design space [209]. The traditional view of validation of simulation models is that if the outputs of the models are close enough to the experimental data, the models are valid. However, this view is much too limiting in ICME, which is characterized by a hierarchy of material length scales and focuses on designing novel materials that may not have been previously realized or characterized. Furthermore, it is essential to recognize that experimental data also have uncertainty. This uncertainty typically increases at lower scales owing to an interplay between the spatial resolution of the measurement device/method and the duration of measurement signal accumulation. In fact, at the molecular level, the uncertainty in experiments due to signal resolution and accumulation may exceed that of models. It may be infeasible or too expensive to run the experiments needed for complete model validation before using the models for design. Hence, validation activities and design activities need to be carried out in an iterative manner.

Fig. 7. Model verification and validation process [206]. The diagram relates the reality of interest, the mathematical model, and the computer model through modeling, implementation, and simulation outcomes, with the assessment activities of confirmation, verification, and validation.

Models are used to guide design, and the design process guides the validation experiments.

There is another validation-related challenge in ICME due to the focus on design. When models are developed and employed to design materials that do not exist, the models cannot be experimentally validated before the system is designed. Experiments are also characterized by epistemic uncertainty, such as the lack of knowledge regarding the system configuration. Hence, validation using experiments must account for both aleatory and epistemic uncertainty associated with the experiments. In that sense, the distinction between simulation models and experiments is blurred.

Due to the specialized nature of models, their reuse is currently severely limited in ICME. It is generally assumed that someone who is familiar with the complete implementation details of the model is always available to customize or adapt the models for different scenarios. The notion of using models as black boxes is generally discouraged because their validity outside the ranges for which they are developed is unknown and generally undocumented. Currently, significant attention is not given to the reuse of models within ICME. While the models capture a significant amount of domain-specific knowledge, they are fragile in nature. Their fragility is not due to a non-robust implementation but due to a lack of documentation of their capabilities, scope of application, and limitations. Hence, the models cannot be used by system designers for integration into the design process as reusable, plug-and-play, "black box" units, as might be desired for a robust methodology. As ICME develops, we maintain that significant efforts are needed to embed knowledge and methods regarding the validity of models [210] so that they can be used by designers who may be generally familiar with the underlying physics but not familiar with all the internal details of the model and its implementation.

Finally, the validity of a model depends on its purpose. Model validation is effectively the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. As discussed in the uncertainty management section above, designers may prefer to use simpler models during the early design phases to narrow down the design space. Hence, there is a need for different levels of scope, maturity, and fidelity of models within the design process. A classification scheme is needed to classify models based on their scope and levels of fidelity [211], perhaps relating to a combination of model scope, readiness, and material length scale level of input/output structure. Systems designers can use this classification scheme in association with information economics constructs to choose appropriate models that are valid within a given application context.


7.2. Multiscale model V&V

Valid models for given characteristic length and time scales may not result in valid multiscale models across scales. Hence, multiscale model V&V is an important aspect of the overall V&V strategy, yet very little attention has been devoted to it. Multiscale model V&V consists of two aspects:

(a) ensuring compatibility between the domains of validity for individual models, and

(b) estimating the propagation of uncertainty through the chain of models.

Compatibility assessment is the process of determining whether the domain of inputs of an upper-level model is consistent with the domain of outputs of the lower-level model. Individual models are generally valid within a domain of inputs. The input domain can be defined as a hypercube in terms of the lower and upper bounds on input variables, [x_lb, x_ub]. The set of inputs within this hypercube generates an output domain, which can be non-convex with disjoint sub-domains. This output domain serves as the input domain to a higher-level model. Multiscale model V&V involves ensuring that the output domain of the lower-level model is a subset of the valid input domain of the upper-level model. Otherwise, the multilevel model would result in erroneous system-level predictions. Techniques such as support vector machines have been used for mathematically quantifying the validity domains of individual models [212].
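A compatibility check of this kind can be sketched as follows, with hypercube bounds and a sampled bounding box of the lower model's outputs; real output domains can be non-convex with disjoint sub-domains, which is why classifiers such as support vector machines are used in practice [212]. All models and bounds here are placeholders.

```python
# Sketch: hypercube compatibility check between two models in a chain.
import numpy as np

rng = np.random.default_rng(2)

# Valid input hypercube [x_lb, x_ub] of the lower-level model (two inputs).
x_lb, x_ub = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def lower_model(x):
    # Hypothetical lower-level model mapping two inputs to one output.
    return np.array([x[0] ** 2 + 0.5 * x[1]])

# Sample the valid input domain to bound the induced output domain.
samples = rng.uniform(x_lb, x_ub, size=(10_000, 2))
outputs = np.array([lower_model(x) for x in samples])
out_lb, out_ub = outputs.min(axis=0), outputs.max(axis=0)

# Valid input hypercube of the upper-level model.
up_lb, up_ub = np.array([0.0]), np.array([2.0])

# Compatible if the lower model's output domain is a subset of the
# upper model's valid input domain.
compatible = bool(np.all(out_lb >= up_lb) and np.all(out_ub <= up_ub))
```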

The second aspect of multiscale modeling V&V is related to the effect of uncertainty propagation across a chain of models. Uncertainty in models at lower material length scales and/or in the process–structure–property mappings may be amplified as information flows from lower-level to upper-level models. Both types of chains are relevant to materials design, the first in terms of multiscale modeling and the latter in terms of multi-level design. The goal of V&V is to ensure that the effects of uncertainty at the lower level(s) do not amplify beyond the desired uncertainty levels at which design decisions must be made. Uncertainty propagation can be viewed from both bottom–up and top–down perspectives. From the bottom–up perspective, the effect of lower-level uncertainties on system-level prediction is determined. From a top–down view, the allowable uncertainty in system-level predictions is used to determine the allowable uncertainty in lower-scale/level models.

7.3. Design process V&V

Design processes involve decisions regarding how to search the space of alternatives so that the design targets can be achieved at a minimum cost. They involve decisions such as how to decompose the design problem, how to reduce the design space, how to sequence the fixation of design parameters in iterations, which models to develop, how to invest the resources in achieving the design objectives, etc. These decisions affect not only the final design outcome but also the amount of effort required.

Analogous to model validation, where the goal is to ensure that models accurately predict the desired response, the goal of validating design processes is to ensure that a given design process will yield a solution that satisfies the design requirements. Design process validation is important to guide the selection of appropriate design methods and to support the development and evaluation of new methods. In simulation-based multilevel design, the design processes represent the manner in which the networks of decisions and simulation models are configured. One of the approaches for validating design methods is the validation square [213,214]. The validation square, shown in Fig. 8, consists of four phases:

Fig. 8. Four phases in the validation square [213,214]: (1) theoretical structural validity, (2) empirical structural validity, (3) empirical performance validity, and (4) theoretical performance validity.

1. Theoretical structural validation: Internal consistency of the design method, i.e., the logical soundness of its constructs both individually and integrated.

2. Empirical structural validation: The appropriateness of the chosen example problem(s) intended to test the design method.

3. Empirical performance validation: The ability to produce useful results for the chosen example problem(s).

4. Theoretical performance validation: The ability to produce useful results beyond the chosen example problem(s). This requires a ‘leap of faith’, which is eased by the process (1)–(3) of building confidence in the general usefulness of the design method.

7.4. Design (outcome) validation

Li et al. [215] argue that validation should be performed on the design rather than on the computational models used to obtain it. The goal of design validation is to gain confidence in the resulting design (of the material) rather than in the simulation model(s) used for design. Validation of the design involves determining whether the system as designed will perform as expected. The design outcomes are evaluated against the system-level design requirements. This is generally carried out using experiments.

8. Closure: advancing computational modeling and simulation to pursue the ICME vision

The vision of Integrated Computational Materials Engineering is compelling in terms of value-added technology that reduces time to market for new products that exploit advanced, tailored materials. This article addresses underdeveloped concepts/tools within the portfolio, as well as some important points regarding the contribution of multiscale modeling to the overall materials design process. Effectively, the role of computational modeling and simulation is so pervasive that the probability of success of ICME is inextricably intertwined with addressing certain scientific and technological ‘‘pressure points’’ as follows:

• ICME extends beyond informatics and combinatorics pursued at the level of individual phases, addressing hierarchical microstructures/structures of materials and embracing the ‘‘systems’’ character of integrating materials design with product design and development. An engineering systems approach is essential.

• Recognition of material structure hierarchy is of first-order importance in multiscale modeling and multi-level materials design. Multiscale modeling and multi-level materials design are complementary but distinct enterprises, with the former serving the needs of the latter in the context of ICME.


Fig. 9. Expanded elements of the ‘materials innovation infrastructure’.

• Novel methods for inverse design, including knowledge systems and process path design, are underdeveloped and require attention.

• Uncertainty is a pervasive and complex issue to address, with uncertainty quantification, propagation, mitigation and management carefully distinguished. Improvement of physically-based models provides a promising foundational technology. Approaches to uncertainty that recognize and embrace the full complexity of ICME are in their infancy.

• Integration of multiscale modeling and multi-level design of materials with manufacturing and systems optimization is essential to pursuing the vision of ICME.

In advancing methodologies and tools to address ICME, it is essential to maintain a consistent long-term vision among industry, academia, and government. A critical assessment of prospects for progress towards the ICME vision in this regard was recently offered by McDowell [216]. ICME is an engineering-directed activity, requiring distributed, collaborative systems-based design. Clearly, one consistent thread throughout the past two decades of developing this field of integrated design of materials and products is the trend towards increasing emphasis on the role of computational materials science and mechanics across a broad range of length and time scales, depending on application. Much is to be gained by extending beyond the conventional computational materials science community to incorporate important advances in multiscale modeling and design optimization from the computational mechanics of materials and structures communities, as well as the MDO and manufacturing communities. The tools and methods that are foundational to ICME effectively constitute the elements of an expanded ‘materials innovation infrastructure’ in the Materials Genome Initiative [16], as shown in Fig. 9.

Acknowledgments

JHP acknowledges financial support from National Science Foundation awards CMMI-0954447 and CMMI-1042350. SRK acknowledges support from Office of Naval Research (ONR) award N00014-11-1-0759. DLM acknowledges the support of the Carter N. Paden, Jr. Distinguished Chair in Metals Processing, as well as the NSF Industry/University Cooperative Research Center for Computational Materials Design (CCMD), supported by CCMD members, through grants NSF IIP-0541674 (Penn State) and IIP-541678 (Georgia Tech). Any opinions, findings and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the NSF or ONR.

References

[1] Ashby MF. Materials selection in mechanical design. 2nd ed. Oxford, UK: Butterworth-Heinemann; 1999.

[2] Broderick S, Suh C, Nowers J, Vogel B, Mallapragada S, Narasimhan B, et al. Informatics for combinatorial materials science. JOM Journal of the Minerals, Metals and Materials Society 2008;60:56–9.

[3] Billinge SJE, Rajan K, Sinnot SB. From cyberinfrastructure to cyberdiscovery in materials science: enhancing outcomes in materials research, Report of NSF-sponsored workshop held in Arlington, VA, August 3–5, 2006. http://www.mcc.uiuc.edu/nsf/ciw_2006/, accessed October 19, 2011. See also http://www.nsf.gov/crssprgm/cdi/, accessed October 19, 2011.

[4] Liu ZK. First principles calculations and Calphad modeling of thermodynamics. Journal of Phase Equilibria and Diffusion 2009;30:517–34.

[5] Hautier G, Fischer C, Ehrlacher V, Jain A, Ceder G. Data mined ionic substitutions for the discovery of new compounds. Inorganic Chemistry 2011;50:656–63.

[6] Dudiy SV, Zunger A. Searching for alloy configurations with target physical properties: impurity design via a genetic algorithm inverse band structure approach. Physical Review Letters 2006;97:046401 (4 pages).

[7] Pollock TM, Allison JE, Backman DG, Boyce MC, Gersh M, Holm EA, et al. Integrated computational materials engineering: a transformational discipline for improved competitiveness and national security. National Materials Advisory Board, NAE, National Academies Press; 2008. Report Number: ISBN-10: 0-309-11999-5.

[8] Apelian D, Alleyne A, Handwerker CA, Hopkins D, Isaacs JA, Olson GB, et al. Accelerating technology transition: bridging the valley of death for materials and processes in defense systems. National Materials Advisory Board, NAE, National Academies Press; 2004. Report Number: ISBN-10: 0-309-09317-1.

[9] Allison J, Backman D, Christodoulou L. Integrated computational materials engineering: a new paradigm for the global materials profession. JOM Journal of the Minerals, Metals and Materials Society 2006;58:25–7.

[10] Olson GB. Computational design of hierarchically structured materials. Science 1997;277:1237–42.

[11] McDowell DL, Story TL. New directions in materials design science and engineering (MDS&E), Report of a NSF DMR-sponsored workshop, October 19–21, 1998. http://www.me.gatech.edu/paden/material-design/md_se.pdf, accessed May 5, 2012.

[12] McDowell DL. Simulation-assisted materials design for the concurrent design of materials and products. JOM Journal of the Minerals, Metals and Materials Society 2007;59:21–5.

[13] McDowell DL, Olson GB. Concurrent design of hierarchical materials and structures. Scientific Modeling and Simulation 2008;15:207–40.

[14] McDowell DL, Backman D. Simulation-assisted design and accelerated insertion of materials. In: Ghosh S, Dimiduk D, editors. Computational methods for microstructure-property relationships. Springer; 2010.

[15] McDowell DL, Panchal JH, Choi H-J, Seepersad CC, Allen JK, Mistree F. Integrated design of multiscale, multifunctional materials and products. Elsevier; 2009.

[16] Materials genome initiative for global competitiveness, National Science and Technology Council, Committee on Technology, June 2011. http://www.whitehouse.gov/sites/default/files/microsites/ostp/materials_genome_initiative-final.pdf, accessed May 5, 2012.

[17] Freiman S, Madsen LD, Rumble J. A perspective on materials databases. American Ceramic Society Bulletin 2011;90:28–32.

[18] Milton GW. The theory of composites. Cambridge monographs on applied and computational mathematics, 2001.

[19] Torquato S. Random heterogeneous materials. New York: Springer-Verlag; 2002.


[20] Fullwood DT, Niezgoda SR, Adams BL, Kalidindi SR. Microstructure sensitive design for performance optimization. Progress in Materials Science 2010;55:477–562.

[21] Niezgoda SR, Yabansu YC, Kalidindi SR. Understanding and visualizing microstructure and microstructure variance as a stochastic process. Acta Materialia 2011;59:6387–400.

[22] Kalidindi SR, Niezgoda SR, Salem AA. Microstructure informatics using higher-order statistics and efficient data-mining protocols. JOM Journal of the Minerals, Metals and Materials Society 2011;63:34–41.

[23] McDowell DL. Materials design: a useful research focus for inelastic behavior of structural metals. Theoretical and Applied Fracture Mechanics 2001;37:245–59. Special issue on the prospects of mesomechanics in the 21st century: current thinking on multiscale mechanics problems.

[24] McDowell DL. Viscoplasticity of heterogeneous metallic materials. Materials Science and Engineering R 2008;62:67–123.

[25] McDowell DL. A perspective on trends in multiscale plasticity. International Journal of Plasticity 2010;26:1280–309.

[26] Ziegler H. In: Becker E, Budiansky B, Koiter WT, Lauwerier HA, editors. An introduction to thermomechanics. 2nd ed. North Holland series in applied mathematics and mechanics, vol. 21. Amsterdam–New York: North Holland; 1983.

[27] Drucker DC. Material response and continuum relations: or from microscales to macroscales. ASME Journal of Engineering Materials and Technology 1984;106:286–9.

[28] McDowell DL. Internal state variable theory. In: Yip S, Horstemeyer MF, editors. Handbook of materials modeling, Part A: methods. The Netherlands: Springer; 2005. p. 1151–70.

[29] Gumbsch P. An atomistic study of brittle fracture: toward explicit failure criteria from atomistic modeling. Journal of Materials Research 1995;10:2897–907.

[30] Weinan E, Huang Z. Matching conditions in atomistic-continuum modeling of materials. Physical Review Letters 2001;87:135501.

[31] Shilkrot LE, Curtin WA, Miller RE. A coupled atomistic/continuum model of defects in solids. Journal of the Mechanics and Physics of Solids 2002;50:2085–106.

[32] Shilkrot LE, Miller RE, Curtin WA. Multiscale plasticity modeling: coupled atomistics and discrete dislocation mechanics. Journal of the Mechanics and Physics of Solids 2004;52:755–87.

[33] Shiari B, Miller RE, Curtin WA. Coupled atomistic/discrete dislocation simulations of nanoindentation at finite temperature. ASME Journal of Engineering Materials and Technology 2005;127:358–68.

[34] Qu S, Shastry V, Curtin WA, Miller RE. A finite-temperature dynamic coupled atomistic/discrete dislocation method. Modelling and Simulation in Materials Science and Engineering 2005;13:1101–18.

[35] Nair AK, Warner DH, Hennig RG, Curtin WA. Coupling quantum and continuum scales to predict crack tip dislocation nucleation. Scripta Materialia 2010;63:1212–5.

[36] Tadmor EB, Ortiz M, Phillips R. Quasicontinuum analysis of defects in solids. Philosophical Magazine A 1996;73:1529–63.

[37] Tadmor EB, Phillips R, Ortiz M. Mixed atomistic and continuum models of deformation in solids. Langmuir 1996;12:4529–34.

[38] Shenoy VB, Miller R, Tadmor EB, Phillips R, Ortiz M. Quasicontinuum models of interfacial structure and deformation. Physical Review Letters 1998;80:742–5.

[39] Miller R, Tadmor EB, Phillips R, Ortiz M. Quasicontinuum simulation of fracture at the atomic scale. Modelling and Simulation in Materials Science and Engineering 1998;6:607–38.

[40] Shenoy VB, Miller R, Tadmor EB, Rodney D, Phillips R, Ortiz M. An adaptive finite element approach to atomic-scale mechanics — the quasicontinuum method. Journal of the Mechanics and Physics of Solids 1999;47:611–42.

[41] Curtarolo S, Ceder G. Dynamics of an inhomogeneously coarse grained multiscale system. Physical Review Letters 2002;88:255504.

[42] Zhou M, McDowell DL. Equivalent continuum for dynamically deforming atomistic particle systems. Philosophical Magazine A 2002;82:2547–74.

[43] Chung PW, Namburu RR. On a formulation for a multiscale atomistic-continuum homogenization method. International Journal of Solids and Structures 2003;40:2563–88.

[44] Zhou M. Thermomechanical continuum representation of atomistic deformation at arbitrary size scales. Proceedings of the Royal Society A 2005;461:3437–72.

[45] Dupuy LM, Tadmor EB, Miller RE, Phillips R. Finite-temperature quasicontinuum: molecular dynamics without all the atoms. Physical Review Letters 2005;95:060202.

[46] Ramasubramaniam A, Carter EA. Coupled quantum-atomistic and quantum-continuum mechanics methods in materials research. MRS Bulletin 2007;32:913–8.

[47] Rudd RE, Broughton JQ. Coarse-grained molecular dynamics and the atomic limit of finite elements. Physical Review B 1998;58:R5893–6.

[48] Rudd RE, Broughton JQ. Concurrent coupling of length scales in solid state systems. Physica Status Solidi B — Basic Research 2000;217:251–91.

[49] Rudd RE, Broughton JQ. Coarse-grained molecular dynamics: nonlinear finite elements and finite temperature. Physical Review B 2005;72:144104.

[50] Kulkarni Y, Knap J, Ortiz M. A variational approach to coarse-graining of equilibrium and non-equilibrium atomistic description at finite temperature. Journal of the Mechanics and Physics of Solids 2008;56:1417–49.

[51] Chen Y. Reformulation of microscopic balance equations for multiscale materials modeling. Journal of Chemical Physics 2009;130:134706.

[52] Wang YU, Jin YM, Cuitino AM, Khachaturyan AG. Nanoscale phase field microelasticity theory of dislocations: model and 3-D simulations. Acta Materialia 2001;49:1847–57.

[53] Chen L-Q. Phase-field models for microstructure evolution. Annual Review of Materials Research 2002;32:113–40.

[54] Shen C, Wang Y. Modeling dislocation network and dislocation–precipitate interaction at mesoscopic scale using phase field method. International Journal of Multiscale Computational Engineering 2003;1:91–104.

[55] Belak J, Turchi PEA, Dorr MR, Richards DF, Fattebert J-L, Wickett ME, et al. Coupling of atomistic and meso-scale phase-field modeling of rapid solidification, in: 2009 APS March Meeting, March 16–20, Pittsburgh, PA, 2009.

[56] Ghosh S, Lee K, Raghavan P. A multi-level computational model for multi-scale damage analysis in composite and porous materials. International Journal of Solids and Structures 2001;38:2335–85.

[57] Liu WK, Karpov EG, Zhang S, Park HS. An introduction to computational nanomechanics and materials. Computer Methods in Applied Mechanics and Engineering 2004;193:1529–78.

[58] Liu WK, Park HS, Qian D, Karpov EG, Kadowaki H, Wagner GJ. Bridging scale methods for nanomechanics and materials. Computer Methods in Applied Mechanics and Engineering 2006;195:1407–21.

[59] Oden JT, Prudhomme S, Romkes A, Bauman PT. Multiscale modeling of physical phenomena: adaptive control of models. SIAM Journal on Scientific Computing 2006;29:2359–89.

[60] Ghosh S, Bai J, Raghavan P. Concurrent multi-level model for damage evolution in microstructurally debonding composites. Mechanics of Materials 2007;39:241–66.

[61] Nuggehally MA, Shephard MS, Picu CR, Fish J. Adaptive model selection procedure for concurrent multiscale problems. Journal of Multiscale Computational Engineering 2007;5:369–86.

[62] Hao S, Moran B, Liu WK, Olson GB. A hierarchical multi-physics model for design of high toughness steels. Journal of Computer-Aided Materials Design 2003;10:99–142.

[63] Hao S, Liu WK, Moran B, Vernerey F, Olson GB. Multi-scale constitutive model and computational framework for the design of ultra-high strength, high toughness steels. Computer Methods in Applied Mechanics and Engineering 2004;193:1865–908.

[64] Shenoy M, Tjiptowidjojo Y, McDowell DL. Microstructure-sensitive modeling of polycrystalline IN 100. International Journal of Plasticity 2008;24:1694–730.

[65] Groh S, Marin EB, Horstemeyer MF, Zbib HM. Multiscale modeling of the plasticity in an aluminum single crystal. International Journal of Plasticity 2009;25:1456–73.

[66] Suquet PM. Homogenization techniques for composite media. Lecture notes in physics, Berlin: Springer; 1987.

[67] Mura T. Micromechanics of defects in solids. 2nd ed. The Netherlands: Kluwer Academic Publishers; 1987.

[68] Lebensohn RA, Liu Y, Ponte Castañeda P. Macroscopic properties and field fluctuations in model power-law polycrystals: full-field solutions versus self-consistent estimates. Proceedings of the Royal Society of London A 2004;460:1381–405.

[69] Warner DH, Sansoz F, Molinari JF. Atomistic based continuum investigation of plastic deformation in nanocrystalline copper. International Journal of Plasticity 2006;22:754–74.

[70] Capolungo L, Spearot DE, Cherkaoui M, McDowell DL, Qu J, Jacob K. Dislocation nucleation from bicrystal interfaces and grain boundary ledges: relationship to nanocrystalline deformation. Journal of the Mechanics and Physics of Solids 2007;55:2300–27.

[71] Needleman A, Rice JR. Plastic creep flow effects in the diffusive cavitation of grain-boundaries. Acta Metallurgica 1980;28:1315–32.

[72] Moldovan D, Wolf D, Phillpot SR, Haslam AJ. Role of grain rotation during grain growth in a columnar microstructure by mesoscale simulation. Acta Materialia 2002;50:3397–414.

[73] Cleri F, D'Agostino G, Satta A, Colombo L. Microstructure evolution from the atomic scale up. Computational Materials Science 2002;24:21–7.

[74] Kouznetsova V, Geers MGD, Brekelmans WAM. Multi-scale constitutive modelling of heterogeneous materials with a gradient-enhanced computational homogenization scheme. International Journal for Numerical Methods in Engineering 2002;54:1235–60.

[75] Kouznetsova VG, Geers MGD, Brekelmans WAM. Multi-scale second-order computational homogenization of multi-phase materials: a nested finite element solution strategy. Computer Methods in Applied Mechanics and Engineering 2004;193:5525–50.

[76] Larsson R, Diebels S. A second-order homogenization procedure for multi-scale analysis based on micropolar kinematics. International Journal of Numerical Methods in Engineering 2007;69:2485–512.

[77] Vernerey F, Liu WK, Moran B. Multi-scale micromorphic theory for hierarchical materials. Journal of the Mechanics and Physics of Solids 2007;55:2603–51.

[78] Luscher DJ, McDowell DL, Bronkhorst CA. A second gradient theoretical framework for hierarchical multiscale modeling of materials. International Journal of Plasticity 2010;26:1248–75.


[79] Luscher DJ, McDowell DL, Bronkhorst CA. Essential features of fine scale boundary conditions for second gradient multiscale homogenization of statistical volume elements. Journal of Multiscale Computational Engineering (in press).

[80] Amodeo RJ, Ghoniem NM. A review of experimental-observations and theoretical-models of dislocation cells and subgrains. Res Mechanica 1988;23:137–60.

[81] Kubin LP, Canova G. The modeling of dislocation patterns. Scripta Metallurgica 1992;27:957–62.

[82] Groma I. Link between the microscopic and mesoscopic length-scale description of the collective behavior of dislocations. Physical Review B 1997;56:5807–13.

[83] Zaiser M. Statistical modeling of dislocation systems. Materials Science and Engineering A 2001;309–310:304–15.

[84] Arsenlis A, Parks DM. Crystallographic aspects of geometrically-necessary and statistically-stored dislocation density. Acta Materialia 1999;47:1597–611.

[85] Zbib HM, de la Rubia TD, Bulatov V. A multiscale model of plasticity based on discrete dislocation dynamics. ASME Journal of Engineering Materials and Technology 2002;124:78–87.

[86] Zbib HM, de la Rubia TD. A multiscale model of plasticity. International Journal of Plasticity 2002;18:1133–63.

[87] Zbib HM, Khaleel MA. Recent advances in multiscale modeling of plasticity. International Journal of Plasticity 2004;20(6):979–1182.

[88] Rickman JM, LeSar R. Issues in the coarse-graining of dislocation energetics and dynamics. Scripta Materialia 2006;54:735–9.

[89] Ohashi T, Kawamukai M, Zbib H. A multiscale approach for modeling scale-dependent yield stress in polycrystalline metals. International Journal of Plasticity 2007;23:897–914.

[90] Acharya A. A model of crystal plasticity based on the theory of continuously distributed dislocations. Journal of the Mechanics and Physics of Solids 2001;49:761–84.

[91] Acharya A. Driving forces and boundary conditions in continuum dislocation mechanics. Proceedings of the Royal Society of London A 2003;459:1343–63.

[92] Acharya A, Beaudoin AJ, Miller R. New perspectives in plasticity theory: dislocation nucleation, waves and partial continuity of the plastic strain rate. Mathematics and Mechanics of Solids 2008;13:292–315.

[93] Acharya A, Roy A, Sawant A. Continuum theory and methods for coarse-grained, mesoscopic plasticity. Scripta Materialia 2006;54:705–10.

[94] Kuhlmann-Wilsdorf D. Theory of plastic deformation: properties of low energy dislocation structures. Materials Science and Engineering A 1989;113:1–41.

[95] Leffers T. Lattice rotations during plastic deformation with grain subdivision. Materials Science Forum 1994;157–162:1815–20.

[96] Hughes DA, Liu Q, Chrzan DC, Hansen N. Scaling of microstructural parameters: misorientations of deformation induced boundaries. Acta Materialia 1997;45:105–12.

[97] Ortiz M, Repetto EA, Stainier L. A theory of subgrain dislocation structures. Journal of the Mechanics and Physics of Solids 2000;48:2077–114.

[98] Carstensen C, Hackl K, Mielke A. Non-convex potentials and microstructures in finite-strain plasticity. Proceedings of the Royal Society of London A 2002;458:299–317.

[99] Li J. The mechanics and physics of defect nucleation. MRS Bulletin 2007;32:151–9.

[100] Zhu T, Li L, Samanta A, Kim HG, Suresh S. Interfacial plasticity governs strain rate sensitivity and ductility in nanostructured metals. Proceedings of the National Academy of Sciences of the United States of America 2007;104:3031–6.

[101] Zhu T, Li L, Samanta A, Leach A, Gall K. Temperature and strain-rate dependence of surface dislocation nucleation. Physical Review Letters 2008;100:025502.

[102] Deo CS, Srolovitz DJ. First passage time Markov chain analysis of rare events for kinetic Monte Carlo: double kink nucleation during dislocation glide. Modeling and Simulation in Materials Science and Engineering 2002;10:581–96.

[103] Li J, Kevrekidis PG, Gear CW, Kevrekidis IG. Deciding the nature of the coarse equation through microscopic simulations: the baby-bathwater scheme. SIAM Review 2007;49:469–87.

[104] Needleman A. Micromechanical modelling of interfacial decohesion. Ultramicroscopy 1992;40(3):203–14.

[105] Camacho G, Ortiz M. Computational modeling of impact damage in brittle materials. International Journal of Solids and Structures 1996;33:2899–938.

[106] Zhai J, Tomar V, Zhou M. Micromechanical simulation of dynamic fracture using the cohesive finite element method. Journal of Engineering Materials and Technology, ASME 2004;126:179–91.

[107] Tomar V, Zhou M. Deterministic and stochastic analyses of fracture processes in a brittle microstructure system. Engineering Fracture Mechanics 2005;72:1920–41.

[108] Hao S, Moran B, Liu W-K, Olson GB. A hierarchical multi-physics model for design of high toughness steels. Journal of Computer-Aided Materials Design 2003;10:99–142.

[109] CES selector, Granta Material Intelligence, 2008.

[110] Bendsoe MP, Sigmund O. Topology optimization: theory, methods and applications. New York: Springer-Verlag; 2003.

[111] Sigmund O, Torquato S. Design of materials with extreme thermal expansion using a three-phase topology optimization method. Journal of the Mechanics and Physics of Solids 1997;45:1037–67.

[112] Sigmund O. Materials with prescribed constitutive parameters: an inverse homogenization problem. International Journal of Solids and Structures 1994;31:2313–29.

[113] Sigmund O. Tailoring materials with prescribed elastic properties. Mechanics of Materials 1995;20:351–68.

[114] Gibiansky LV, Sigmund O. Multiphase composites with extremal bulk modulus. Journal of the Mechanics and Physics of Solids 2000;48:461–98.

[115] Neves MM, Rodrigues H, Guedes JM. Optimal design of periodic linear elastic microstructures. Computers & Structures 2000;76:421–9.

[116] Gu S, Lu TJ, Evans AG. On the design of two-dimensional cellular metals for combined heat dissipation and structural load capacity. International Journal of Heat and Mass Transfer 2001;44:2163–75.

[117] Knezevic M, Kalidindi SR. Fast computation of first-order elastic-plastic closures for polycrystalline cubic-orthorhombic microstructures. Computational Materials Science 2007;39:643–8.

[118] Adams BL, Henrie A, Henrie B, Lyon M, Kalidindi SR, Garmestani H. Microstructure-sensitive design of a compliant beam. Journal of the Mechanics and Physics of Solids 2001;49:1639–63.

[119] Proust G, Kalidindi SR. Procedures for construction of anisotropic elastic-plastic property closures for face-centered cubic polycrystals using first-order bounding relations. Journal of the Mechanics and Physics of Solids 2006;54:1744–62.

[120] Houskamp JR, Proust G, Kalidindi SR. Integration of microstructure-sensitive design with finite element methods: elastic-plastic case studies in FCC polycrystals. International Journal for Multiscale Computational Engineering 2007;5:261–72.

[121] Knezevic M, Kalidindi SR, Mishra RK. Delineation of first-order closures for plastic properties requiring explicit consideration of strain hardening and crystallographic texture evolution. International Journal of Plasticity 2008;24:327–42.

[122] Wu X, Proust G, Knezevic M, Kalidindi SR. Elastic-plastic property closures for hexagonal close-packed polycrystalline metals using first-order bounding theories. Acta Materialia 2007;55:2729–37.

[123] Shaffer JB, Knezevic M, Kalidindi SR. Building texture evolution networks for deformation processing of polycrystalline FCC metals using spectral approaches: applications to process design for targeted performance. International Journal of Plasticity 2010;26:1183–94.

[124] Fullwood DT, Adams BL, Kalidindi SR. Generalized Pareto front methods applied to second-order material property closures. Computational Materials Science 2007;38:788–99.

[125] Adams BL, Xiang G, Kalidindi SR. Finite approximations to the second-order properties closure in single phase polycrystals. Acta Materialia 2005;53:3563–77.

[126] Bunge H-J. Texture analysis in materials science. Mathematical methods. Göttingen: Cuvillier Verlag; 1993.

[127] Fast T, Niezgoda SR, Kalidindi SR. A new framework for computationally efficient structure-structure evolution linkages to facilitate high-fidelity scale bridging in multi-scale materials models. Acta Materialia 2011;59:699–707.

[128] Kalidindi SR, Niezgoda SR, Landi G, Vachhani S, Fast A. A novel framework for building materials knowledge systems. Computers, Materials & Continua 2009;17:103–26.

[129] Landi G, Kalidindi SR. Thermo-elastic localization relationships for multi-phase composites. Computers, Materials & Continua 2010;16:273–93.

[130] Landi G, Niezgoda SR, Kalidindi SR. Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia 2009;58:2716–25.

[131] Fast T, Kalidindi SR. Formulation and calibration of higher-order elastic localization relationships using the MKS approach. Acta Materialia 2011;59:4595–605.

[132] Kröner E. Statistical modelling. In: Gittus J, Zarka J, editors. Modelling small deformations of polycrystals. London: Elsevier Science Publishers; 1986. p. 229–91.

[133] Kröner E. Bounds for effective elastic moduli of disordered materials. Journal of the Mechanics and Physics of Solids 1977;25:137–55.

[134] Volterra V. Theory of functionals and integral and integro-differential equations. New York: Dover; 1959.

[135] Wiener N. Nonlinear problems in random theory. Cambridge, MA: MIT Press; 1958.

[136] Garmestani H, Lin S, Adams BL, Ahzi S. Statistical continuum theory for large plastic deformation of polycrystalline materials. Journal of the Mechanics and Physics of Solids 2001;49:589–607.

[137] Lin S, Garmestani H. Statistical continuum mechanics analysis of an elastic two-isotropic-phase composite material. Composites Part B: Engineering 2000;31:39–46.

[138] Mason TA, Adams BL. Use of microstructural statistics in predicting polycrystalline material properties. Metallurgical and Materials Transactions A: Physical Metallurgy and Materials Science 1999;30:969–79.

[139] Garmestani H, Lin S, Adams BL. Statistical continuum theory for inelastic behavior of a two-phase medium. International Journal of Plasticity 1998;14:719–31.

[140] Beran MJ, Mason TA, Adams BL, Olsen T. Bounding elastic constants of an orthotropic polycrystal using measurements of the microstructure. Journal of the Mechanics and Physics of Solids 1996;44:1543–63.

[141] Beran MJ. Statistical continuum theories. New York: Interscience Publishers; 1968.


[142] Milton GW. In: Ciarlet PG, Iserles A, Khon RV, Wright MH, editors. The theory of composites. Cambridge monographs on applied and computational mathematics, Cambridge: Cambridge University Press; 2002.

[143] Cherry JA. Distortion analysis of weakly nonlinear filters using Volterra series. Ottawa: National Library of Canada; 1995.

[144] Wray J, Green G. Calculation of the Volterra kernels of non-linear dynamic systems using an artificial neural network. Biological Cybernetics 1994;71:187–95.

[145] Korenberg M, Hunter I. The identification of nonlinear biological systems: Volterra kernel approaches. Annals of Biomedical Engineering 1996;24:250–68.

[146] Lee YW, Schetzen M. Measurement of the Wiener kernels of a non-linear system by cross-correlation. International Journal of Control 1965;2:237–54.

[147] Cockayne E, van de Walle A. Building effective models from sparse but precise data: application to an alloy cluster expansion model. Physical Review B 2009;81:012104.

[148] Kerschen G, Golinval J-C, Hemez FM. Bayesian model screening for the identification of nonlinear mechanical structures. Journal of Vibration and Acoustics 2003;125:389–97.

[149] Buede DM. The engineering design of systems: models and methods. New York, NY: John Wiley & Sons, Inc; 2000.

[150] AIAA, Guidelines for the verification and validation of computational fluid dynamics simulations, 1998, Report Number: AIAA-G-077-1998.

[151] Sivel VG, Van den Brand J, Wang WR, Mohdadi H, Tichelaar FD, Alkemade P, et al. Application of the dual-beam FIB/SEM to metals research. Journal of Microscopy 2004;21:237–45.

[152] Giannuzzi LA, Stevie FA. Introduction to focused ion beams: instrumentation, theory, techniques, and practice. New York: Springer; 2005.

[153] Spowart J, Mullens H, Puchala B. Collecting and analyzing microstructures in three dimensions: a fully automated approach. JOM Journal of the Minerals, Metals and Materials Society 2003;55:35–7.

[154] Spowart JE. Automated serial sectioning for 3-D analysis of microstructures. Scripta Materialia 2006;55:5–10.

[155] Salvo L, Cloetens P, Maire E, Zabler S, Blandin JJ, Buffière JY, et al. X-ray micro-tomography an attractive characterisation technique in materials science. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 2003;200:273–86.

[156] Midgley PA, Weyland M. 3-D electron microscopy in the physical sciences: the development of Z-contrast and EFTEM tomography. Ultramicroscopy 2003;96:413–31.

[157] Stiénon A, Fazekas A, Buffière JY, Vincent A, Daguier P, Merchi F. A new methodology based on X-ray micro-tomography to estimate stress concentrations around inclusions in high strength steels. Materials Science and Engineering: A 2009;513–514:376–83.

[158] Proudhon H, Buffière JY, Fouvry S. Three-dimensional study of a fretting crack using synchrotron X-ray micro-tomography. Engineering Fracture Mechanics 2007;74(5):782–93.

[159] Tiseanu I, Craciunescu T, Petrisor T, Corte AD. 3-D X-ray micro-tomography for modeling of Nb3Sn multifilamentary superconducting wires. Fusion Engineering and Design 2007;82(5–14):1447–53.

[160] Miller MK, Forbes RG. Atom probe tomography. Materials Characterization 2009;60(6):461–9.

[161] Arslan I, Marquis EA, Homer M, Hekmaty MA, Bartelt NC. Towards better 3-D reconstructions by combining electron tomography and atom-probe tomography. Ultramicroscopy 2008;108(12):1579–85.

[162] Rowenhorst DJ, Lewis AC, Spanos G. Three-dimensional analysis of grain topology and interface curvature in a β-titanium alloy. Acta Materialia 2010;58(16):5511–9.

[163] Gao X, Przybyla CP, Adams BL. Methodology for recovering and analyzing two-point pair correlation functions in polycrystalline materials. Metallurgical and Materials Transactions A 2006;37:2379–87.

[164] Gokhale AM, Tewari A, Garmestani H. Constraints on microstructural two-point correlation functions. Scripta Materialia 2005;53:989–93.

[165] Niezgoda SR, Fullwood DT, Kalidindi SR. Delineation of the space of 2-point correlations in a composite material system. Acta Materialia 2008;56:5285–92.

[166] Niezgoda SR, Turner DM, Fullwood DT, Kalidindi SR. Optimized structure based representative volume element sets reflecting the ensemble averaged 2-point statistics. Acta Materialia 2010;58:4432–45.

[167] McDowell DL, Ghosh S, Kalidindi SR. Representation and computational structure-property relations of random media. JOM Journal of the Minerals, Metals and Materials Society 2011;63:45–51.

[168] SAMSI workshop on uncertainty quantification in engineering and renewable energy, http://www.samsi.info/programs/2011-12-program-uncertainty-quantification-engineering-and-renewable-energy, accessed May 5, 2012.

[169] Thunnissen DP. Propagating and mitigating uncertainty in the design of complex multidisciplinary systems. Pasadena, CA: California Institute of Technology; 2005.

[170] Kaplan S, Garrick BJ. On the quantitative definition of risk. Risk Analysis 1981;1:11–26.

[171] McManus H, Hastings D. A framework for understanding uncertainty and its mitigation and exploitation in complex systems. IEEE Engineering Management Review 2006;34(3):81–94.

[172] Chen W, Baghdasaryan L, Buranathiti T, Cao J. Model validation via uncertainty propagation and data transformations. AIAA Journal 2004;42:1406–15.

[173] Choi H-J, McDowell DL, Rosen D, Allen JK, Mistree F. An inductive design exploration method for robust multiscale materials design. ASME Journal of Mechanical Design 2008;130:031402(1–13).

[174] Kennedy MC, O'Hagan A. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2001;63:425–64.

[175] Helton JC, Oberkampf WL. Alternative representations of epistemic uncertainty. Reliability Engineering & System Safety 2004;85:1–10.

[176] Lee SH, Chen W. A comparative study of uncertainty propagation methods for black-box-type problems. Structural and Multidisciplinary Optimization 2009;37:239–53.

[177] Yin X, Lee S, Chen W, Liu WK, Horstemeyer MF. Efficient random field uncertainty propagation in design using multiscale analysis. Journal of Mechanical Design 2009;131:021006(1–10).

[178] Ferson S, Kreinovich V, Hajagos J, Oberkampf W, Ginzburg L. Experimental uncertainty estimation and statistics for data having interval uncertainty, Sandia National Laboratories, 2007, Report Number: SAND2007-0939.

[179] Zabaras N. Advanced computational techniques for materials-by-design, in: Joint Contractors Meeting of the AFOSR Applied Analysis and Computational Mathematics Programs, Long Beach, CA, 2006 (August 7–11).

[180] Allen JK, Seepersad CC, Choi H-J, Mistree F. Robust design for multidisciplinary and multiscale applications. ASME Journal of Mechanical Design 2006;128:832–43.

[181] Gano SE, Agarwal H, Renaud JE, Tovar A. Reliability-based design using variable fidelity optimization. Structure and Infrastructure Engineering: Maintenance, Management, Lifecycle 2006;2:247–60.

[182] Taguchi G. Introduction to quality engineering. New York: UNIPUB; 1986.

[183] Chen W, Allen JK, Mistree F. A robust concept exploration method for enhancing productivity in concurrent systems design. Concurrent Engineering: Research and Applications 1997;5:203–17.

[184] Mistree F, Hughes O, Bras B. The compromise decision support problem and the adaptive linear programming algorithm. In: Kamat M, editor. Structural optimization: status and promise. Washington, DC: AIAA; 1992. p. 251–90.

[185] Chen W, Allen JK, Tsui K-L, Mistree F. A procedure for robust design: minimizing variations caused by noise factors and control factors. ASME Journal of Mechanical Design 1996;118:478–85.

[186] Simpson T, Maier J, Mistree F. Product platform design: method and application. Research in Engineering Design 2001;13:2–22.

[187] Sues RH, Oakley DR, Rhodes GS. Multidisciplinary stochastic optimization, in: 10th conference on engineering mechanics, Boulder, CO, 1995, p. 934–7.

[188] Oakley DR, Sues RH, Rhodes GS. Performance optimization of multidisciplinary mechanical systems subject to uncertainties. Probabilistic Engineering Mechanics 1998;13:15–26.

[189] Du X, Chen W. Efficient uncertainty analysis methods for multidisciplinary robust design. AIAA Journal 2002;40:545–52.

[190] Gu X, Renaud JE, Batill SM, Brach RM, Budhiraja AS. Worst case propagated uncertainty of multidisciplinary systems in robust design optimization. Structural and Multidisciplinary Optimization 2000;20:190–213.

[191] Liu H, Chen W, Kokkolaras M, Papalambros PY, Kim HM. Probabilistic analytical target cascading: a moment matching formulation for multilevel optimization under uncertainty. Journal of Mechanical Design 2006;128:991–1000.

[192] Kim HM, Michelena NF, Papalambros PY, Jiang T. Target cascading in optimal system design. Journal of Mechanical Design 2003;125:481–9.

[193] Choi H-J, Allen JK, Rosen D, McDowell DL, Mistree F. An inductive design exploration method for the integrated design of multi-scale materials and products, in: Design engineering technical conferences — design automation conference, Long Beach, CA, 2005. Paper Number: DETC2005-85335.

[194] Choi H-J. A robust design method for model and propagated uncertainty, Ph.D. Dissertation, The GW Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, 2005.

[195] Choi H-J, McDowell DL, Allen JK, Mistree F. An inductive design exploration method for hierarchical systems design under uncertainty. Engineering Optimization 2008;40:287–307.

[196] Sinha A, Panchal JH, Allen JK, Mistree F. Managing uncertainty in multiscale systems via simulation model refinement, in: ASME 2011 international design engineering technical conferences and computers and information in engineering conference, August 28–31, Washington, DC, 2011. Paper Number: DETC2011/47655.

[197] Panchal JH, Paredis CJJ, Allen JK, Mistree F. A value-of-information based approach to simulation model refinement. Engineering Optimization 2008;40:223–51.

[198] Messer M, Panchal JH, Krishnamurthy V, Klein B, Yoder PD, Allen JK, et al. Model selection under limited information using a value-of-information-based indicator. Journal of Mechanical Design 2010;132:121008(1–13).

[199] Panchal JH, Choi H-J, Allen JK, McDowell DL, Mistree F. A systems based approach for integrated design of materials, products, and design process chains. Journal of Computer-Aided Materials Design 2007;14:265–93.

[200] Howard R. Information value theory. IEEE Transactions on Systems Science and Cybernetics 1966;SSC-2:779–83.

[201] Lawrence DB. The economic value of information. New York: Springer; 1999.

[202] Panchal JH, Allen JK, Paredis CJJ, Mistree F. Simulation model refinement for decision making via a value-of-information based metric, in: Design automation conference, Philadelphia, PA, 2006. Paper Number: DETC2006-99433.

[203] Messer M, Panchal JH, Allen JK, Mistree F. Model refinement decisions using the process performance indicator. Engineering Optimization 2011;43(7):741–62.

[204] Valerdi R. The constructive systems engineering cost model (COSYSMO), Ph.D. Dissertation, Industrial and Systems Engineering, University of Southern California, Los Angeles, 2005.

[205] Oberkampf WL, DeLand SM, Rutherford BM, Diegert KV, Alvin KF. Estimation of total uncertainty in modeling and simulation, Sandia National Laboratories, 2000, Report Number: SAND2000-0824.

[206] Sargent R. Verification and validation of simulation models, in: Winter simulation conference, New Orleans, LA, 2003, p. 37–48.

[207] Bayarri MJ, Berger JO, Paulo R, Sacks J, Cafeo JA, Cavendish J, et al. A framework for validation of computer models. Technometrics 2007;49:138–54.

[208] Sankararaman S, Mahadevan S. Model validation under epistemic uncertainty. Reliability Engineering and System Safety 2011;96:1232–41.

[209] Chen W, Xiong Y, Tsui K-L, Wang S. A design-driven validation approach using Bayesian prediction models. ASME Journal of Mechanical Design 2008;130:021101(1–12).

[210] Malak RJ, Paredis CJJ. Validating behavioral models for reuse. Research in Engineering Design 2007;18:111–28.

[211] Mocko GM, Panchal JH, Fernández MG, Peak R, Mistree F. Towards reusable knowledge-based idealizations for rapid design and analysis, in: 45th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference, Palm Springs, CA, 2004. Paper Number: AIAA-2004-2009.

[212] Malak RJ, Paredis CJJ. Using support vector machines to formalize the valid input domain in predictive models in systems design problems. Journal of Mechanical Design 2010;132:101001(1–14).

[213] Pedersen K, Emblemsvag J, Bailey R, Allen J, Mistree F. The ‘validation square’ — validating design methods, in: ASME design theory and methodology conference, New York, 2000. Paper Number: ASME DETC2000/DTM-14579.

[214] Seepersad CC, Pedersen K, Emblemsvag J, Bailey RR, Allen JK, Mistree F. The validation square: how does one verify and validate a design method? In: Chen W, Lewis K, Schmidt L, editors. Decision-based design: making effective decisions in product and systems design. New York, NY: ASME Press; 2005.

[215] Li J, Mourelatos ZP, Kokkolaras M, Papalambros PY. Validating designs through sequential simulation-based optimization, in: ASME 2010 design engineering technical conferences & computers in engineering conference, Montreal, Quebec, 2010. Paper Number: DETC2010-28431.

[216] McDowell DL. Critical path issues in ICME. In: Arnold SM, Wong TT, editors. Models, databases, and simulation tools needed for the realization of integrated computational materials engineering, Proc. symposium held at Materials Science and Technology 2010. Houston, TX: ASM International; 2011. p. 31–7.