
NREL is a national laboratory of the U.S. Department of Energy Office of Energy Efficiency & Renewable Energy Operated by the Alliance for Sustainable Energy, LLC

This report is available at no cost from the National Renewable Energy Laboratory (NREL) at www.nrel.gov/publications.

Contract No. DE-AC36-08GO28308

Qualification Testing Versus Quantitative Reliability Testing of PV - Gaining Confidence in a Rapidly Changing Technology Preprint Sarah Kurtz1, Kent Whitfield2, Nancy Phillips3, Tony Sample4, Christos Monokroussos5, Edward Hsi6, Ingrid Repins1, Peter Hacke1, Dirk Jordan1, John Wohlgemuth7, Peter Seidel8, Ulrike Jahn9, Michael Kempe1, Tadanori Tanahashi10, Yingnan Chen11, Bengt Jaeckel12, and Masaaki Yamamichi13 1National Renewable Energy Laboratory, 2Underwriters Laboratories, 3DuPont, 4European Commission, 5TÜV, 6Swiss RE, Switzerland, 7PowerMark Corporation, 8First Solar, 9TÜV, 10National Institute of Advanced Industrial Science and Technology (AIST), 11China General Certification Center, 12Underwriters Laboratories, and 13RTS Corporation

Presented at the 2017 European PV Solar Energy Conference and Exhibition Amsterdam, The Netherlands September 25-29, 2017

Conference Paper NREL/CP-5J00-70163 October 2017


NOTICE

The submitted manuscript has been offered by an employee of the Alliance for Sustainable Energy, LLC (Alliance), a contractor of the US Government under Contract No. DE-AC36-08GO28308. Accordingly, the US Government and Alliance retain a nonexclusive royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for US Government purposes.

This report was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or any agency thereof.

This report is available at no cost from the National Renewable Energy Laboratory (NREL) at www.nrel.gov/publications.

Available electronically at SciTech Connect http://www.osti.gov/scitech

Available for a processing fee to U.S. Department of Energy and its contractors, in paper, from:

U.S. Department of Energy Office of Scientific and Technical Information P.O. Box 62 Oak Ridge, TN 37831-0062 OSTI http://www.osti.gov Phone: 865.576.8401 Fax: 865.576.5728 Email: [email protected]

Available for sale to the public, in paper, from:

U.S. Department of Commerce National Technical Information Service 5301 Shawnee Road Alexandria, VA 22312 NTIS http://www.ntis.gov Phone: 800.553.6847 or 703.605.6000 Fax: 703.605.6900 Email: [email protected]

Cover Photos by Dennis Schroeder: (left to right) NREL 26173, NREL 18302, NREL 19758, NREL 29642, NREL 19795.

NREL prints on paper that contains recycled content.


QUALIFICATION TESTING VERSUS QUANTITATIVE RELIABILITY TESTING OF PV – GAINING CONFIDENCE IN A RAPIDLY CHANGING TECHNOLOGY

Sarah Kurtz1, Kent Whitfield2, Nancy Phillips3, Tony Sample4, Christos Monokroussos5, Edward Hsi6, Ingrid Repins1, Peter Hacke1, Dirk Jordan1, John Wohlgemuth7, Peter Seidel8, Ulrike Jahn9, Michael Kempe1, Tadanori Tanahashi10, Yingnan Chen11, Bengt Jaeckel12, Masaaki Yamamichi13

1National Renewable Energy Laboratory, Golden, CO USA; 2Underwriters Laboratories, Fremont, CA USA; 3DuPont, USA; 4European Commission, JRC, Ispra, Italy; 5TÜV Rheinland, Shanghai, China; 6Swiss RE, Switzerland; 7PowerMark Corporation, USA; 8First Solar, Mainz, Germany; 9TÜV Rheinland, Cologne, Germany; 10National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan; 11China General Certification Center, Beijing, China; 12Underwriters Laboratories, Zeppelinheim, Germany; 13RTS Corporation, Tokyo, Japan

ABSTRACT: Continued growth of PV system deployment would be enhanced by quantitative, low-uncertainty predictions of the degradation and failure rates of PV modules and systems. The intended product lifetime (decades) far exceeds the product development cycle (months), limiting our ability to reduce the uncertainty of the predictions for this rapidly changing technology. Yet, business decisions (setting insurance rates, analyzing return on investment, etc.) require quantitative risk assessment. Moving toward more quantitative assessments requires consideration of many factors, including the intended application, the consequence of a possible failure, variability in the manufacturing, installation, and operation, as well as uncertainty in the measured acceleration factors, which provide the basis for predictions based on accelerated tests. As the industry matures, it is useful to periodically assess the overall strategy for standards development and the prioritization of research to provide a technical basis both for the standards and for the analysis related to the application of those standards. To this end, this paper suggests a tiered approach to creating risk assessments. Recent and planned improvements in international standards are also summarized.

Keywords: PV reliability, accelerated testing, risk assessment, lifetime prediction

1 INTRODUCTION

As the total investment in the solar industry has increased, persistent questions arise: “What is the degradation rate of the module and system, and how do they compare to the component manufacturer’s stated warranty and system performance guarantee?” “How much more should I be willing to pay for a module/system that has demonstrated improved reliability?” or, conversely, “What information demonstrates higher confidence to justify a higher price?” While there is ample evidence that PV installations have performed mostly as expected [1-3], there have also been some product recalls and observations of new (unexpected) failures.1

Fig. 1 shows our simplistic starting point: certifying a module design by testing 8 engineering samples to the IEC 61215 qualification test [4] is inadequate (and not intended) for making 25-year risk assessments.2 While IEC 61215 has been demonstrated to be valuable for rapidly uncovering well-known failure mechanisms, it is insufficient for assessing long-term risk, evaluating newer or less common materials and designs, establishing field performance degradation, setting insurance rates that must consider power generation over at least 25 years, or selecting the best product for a specific project. Today’s rapidly changing PV technology leads many companies to launch new versions of their product every few months while offering warranties that are decades long.

Today, in addition to qualification testing (IEC 61215 and IEC 61730 [5]), most PV companies require a robust quality management system that controls many aspects of the manufacturing process (incoming materials, processes, etc.) as well as testing beyond IEC 61215. As the PV industry matures, the methods used for quality control (QC) are evolving to utilize new knowledge and to be more consistent, enabling lower QC costs, as with IEC TS 62941 [6]. We expect this evolution to move from pass-fail qualification testing to more sophisticated analyses that provide more quantitative assessment of risk specific to a particular location or type of location and, thus, enable more quantitative assessment of the value of high-quality components, both in terms of degradation rates and failure rates. One proposed approach to completing a quantitative assessment assigns a Cost Priority Number (CPN) that reflects the cost of repair or loss of revenue associated with a problem [7]. Assignment of a CPN or other rating methodologies [8] relies on being able to link knowledge about the components and system with the anticipated outcomes. The industry has not yet agreed upon the best approaches for gathering and using the information needed for quantifying overall risk.

Footnote 1: Here, “failure” broadly refers to any problem that may cause risk for a business partner.
Footnote 2: Most investors today plan to obtain return on investment in < 25 years; we refer to 25 years to reflect warranty time lines as well as the needs of second owners.

This paper begins by describing approaches for risk assessment that range from an initial qualification test to the ideal for quantitative assessments. It then proposes a strategy for building on the foundational qualification testing to use standardized data collection from extended stress testing and thoughtful analysis of that data within the context of known field results to provide a useful risk assessment. An overview of a subset of recent and planned international standards development is also presented, highlighting the joint work of the International PV Quality Assurance Task Force (PVQAT) and IEC Technical Committee 82. While this paper is largely focused on PV modules, the concepts must be applied to all components as well as to the entire system.


Figure 1: Our challenge with this rapidly changing technology is to use a few months of testing to provide the basis for business decisions that may have impacts for decades. This paper discusses strategies for improving risk assessments. (Figure graphic: A 1-GW plant makes 4,000,000 panels/y. If we test 8 engineering samples by IEC 61215, are we prepared to set warranties, insurance rates, and estimate profit for > 25 y?)

2 QUALIFICATION vs QUANTITATIVE TESTING

As shown schematically in Figure 2, risk assessments of PV can evolve to be more quantitative. The initial idea (see left side of Fig. 2) is evaluated and analyzed; prototypes are stress tested to qualify a design, and samples from a production line are subjected to extended-stress testing. Finally, a mature product with a specified bill of materials manufactured within a defined process window for a specific application may be tested and modeled quantitatively. From left to right in the simplistic description in Fig. 2, the quantitative nature of the assessment increases. Differentiating the roles and utility of each type of test encounters many complexities, as discussed in this section to set the stage for the strategy discussed in section 3.

Figure 2: Evolution of maturity of risk assessment as a product evolves from concept to a mature product that can be tested in a more quantitative way because the product design stays fixed long enough to complete the needed testing.

Qualification tests typically provide a pass-fail outcome reflecting whether the test article exceeds a minimum acceptable key indicator. In contrast, a quantitative test is associated with a numerical output providing the user with access to the degree of impact a test has on a key indicator. When multiple data points are measured as a function of varying levels of a stress factor, these may be used to derive a model from which a quantitative prediction may be made. While there is common interest in developing quantitative models (right side of Fig. 2), many such models today fail to provide the desired accuracy and do not include all possible failure mechanisms/stresses; experts are often hesitant to suggest that such models can build confidence over the product’s lifetime. While we understand this hesitancy, we assert that we should not let that hesitancy deter us from acquiring and using all relevant data to make increasingly quantitative predictions. It is the purpose of this paper to explore the best strategies to accomplish this goal.

2.1 Can qualification testing be quantitative?
Today, most PV module designs are certified to both IEC 61215 [4] and IEC 61730 [5]; these prescribe a set of stress tests to qualify a prototype with respect to functionality and safety, respectively, before beginning manufacturing. While these two standards are pass-fail by nature, they can provide information beyond a simple pass-fail by:

• Differentiating PV modules with regard to some attributes that are defined on the data sheet, e.g. performance, system voltage, and mechanical load. As tests mature, additional attributes may be treated in this way.

• Reporting the observations. For example, if one test article fails IEC 61215, the test can be repeated and passed. The failure suggests risk that some samples are defective; such risk can be quantified through statistical sampling from the production population. Many manufacturers decline to share the detailed IEC 61215 test report with their customers even though the data in this report would be valuable toward analyzing the suitability of a module design. Transparency in sharing test results would be beneficial to the community.

2.2 Uncertainty as the metric for quantitative assessment

An accurate model-based prediction of degradation or failure rates of a PV product is very difficult. For example, when a degradation mode involves a multi-step process, understanding each step well enough to model it is a challenge. Furthermore, to test a model may require years of work. Often the most difficult part of creating the prediction is assessing the uncertainty of the prediction, because the uncertainty depends on variations in the manufacturing process as well as uncertainty in the kinetics of the failure mechanism and uncertainty in the applied stresses relative to those in the actual use conditions. At the extreme, a prediction of a service life of 25 years ± 25 years may be the result of the study, providing a scientifically rigorous result that has essentially no value.
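For illustration only, the following minimal sketch (with assumed temperatures, test duration, and activation energies that are not from this paper) shows how even a modest ±0.1 eV uncertainty in an Arrhenius activation energy propagates into a lifetime prediction spanning tens of years, the "25 years ± 25 years" situation described above:

    # Hypothetical sketch: propagate uncertainty in activation energy (Ea)
    # through an Arrhenius acceleration factor to a field-lifetime estimate.
    import math

    K_B = 8.617e-5                   # Boltzmann constant, eV/K
    T_TEST = 85 + 273.15             # assumed chamber temperature (85 C), in K
    T_FIELD = 25 + 273.15            # assumed effective field temperature (25 C), in K
    HOURS_TO_FAIL_IN_TEST = 2000.0   # assumed time-to-failure observed in the chamber

    def acceleration_factor(ea_ev):
        """Arrhenius acceleration factor between chamber and field temperatures."""
        return math.exp(ea_ev / K_B * (1.0 / T_FIELD - 1.0 / T_TEST))

    for ea in (0.6, 0.7, 0.8):       # assumed activation energy with +/- 0.1 eV uncertainty
        years = HOURS_TO_FAIL_IN_TEST * acceleration_factor(ea) / 8760.0
        print(f"Ea = {ea:.1f} eV -> predicted field life ~ {years:.0f} years")

With these assumed inputs, the predicted life ranges from roughly 11 to 42 years, which is why tightening the inputs (or the process variability behind them) can matter as much as refining the model itself.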

Studies of acceleration factors to enable prediction of PV degradation or failure rates often lack statistical confidence, or simply are not comprehensive. Typically, the focus is on some details of the prediction (the identification of failure mechanisms, measurement of the kinetic rates, and application of a model to the intended use environment) derived from a small sample set with little analysis of the uncertainty. An unintended impurity in an encapsulant may change its rate of discoloration; variations in the thickness of a solder bond may change its failure rate; mounting of a module in such a way that it overheats could accelerate its degradation, etc. As a result, many experimentally determined degradation rates are bound to have high uncertainties. The accuracy of the output of any model is subject to the degree of accuracy of its input parameters. Compared with the studies quantifying failure mechanisms, few studies have focused on quantifying and decreasing the uncertainty related to inputs such as the variability of the manufacturing process. Moreover, it is challenging to model the degradation behavior when parallel degradation processes take place. To overcome these difficulties, most of the proposed models introduce simplifications, assuming that certain processes will dominate under certain environmental conditions and ignoring the complexity of multi-dimensional reaction paths, which further increases the uncertainty of the prediction. Similarly, warranties may not specify the use environment, even though the physics predicts that the life of a module varies with use environment.

A general test such as IEC 61215 is not intended for quantitative predictions of lifetime. Efforts to quantify the implications of passing a general test are inherently limited. Although it is clear that different products may show different acceleration factors, preventing confident predictions from a general type-approval test, there is an urgent and practical need to better understand and analyze test results in order to provide the basis for setting insurance rates, estimating return on investment, and making other business decisions.

Thus, studies of the mechanisms and associated acceleration factors to make quantitative predictions should be encouraged, but these should always assess the expected uncertainty in the predictions. In some cases, the uncertainty may be reduced more by tighter control of the production process (when it is known which parameters need to be controlled) or by a better definition of the use environment than by improved measurements of the kinetics of the failure rates.

2.3 Is extended-stress testing useful?
Over the years, many protocols have been developed for extended stress testing [9-19]. The number of these tests and the frequency of their use by the community imply that they provide value. Most such tests are designed to apply stresses in a manner that reproduces relevant field stresses, either in combination or sequentially, and to record a series of data from the extended sequences. But there is usually a lot of guesswork and some debate about the “best” number of cycles (see section 2.4) and which sequences of testing should be chosen. There is concern that the tests sometimes cause irrelevant failures, adding unnecessary cost, raising unnecessary concerns, or leading to unwise decisions.

We suggest that extended-stress testing can make risk assessment more effective by:

• Comparisons: Provides a way to compare two versions of a product, which can be especially useful if a change in the bill of materials or manufacturing process may change field failure rates. Care must be taken to compare products only for test results that are relevant to the use environment. This may be the most common application of extended testing since many companies release new versions of their products every 3-12 months. The disconnect between the intended lifetime of the product (> 25 years) and the production lifetime (3-12 months) is at the core of our challenge.

• More extensive data: Most extended testing protocols report observations after each step, providing more data than a simple pass-fail at the end of the test. Standardization of the extended-stress test methodology enables comparison of the test data with a larger database of test results, facilitating the comparison of test results from many samples and the correlation of these with field data if accessible records are kept.

• Identify potential vulnerabilities: Similar to test-to-failure protocols, overstressing samples can identify vulnerabilities that should be assessed for relevance to the intended use environment. This includes component materials tests that screen out non-durable materials from the design, or identify when mitigation may be required.

• Use of production modules: Extended test protocols may be applied to production-line samples, possibly from multiple lines or factory sites, differentiating the extended test from qualification testing of prototypes and potentially leading to an improved understanding of product production controls.

• Educate the market: Many experts have reported that buyers and financial institutions often request extended testing, but have minimalistic requirements in terms of test duration, severity, and sample size, while expecting that results will provide confidence over the product’s expected lifetime. It is therefore important that the scientific community lay out fundamental guidelines for a test protocol with realistic test requirements that can be accomplished in a reasonable timeframe (e.g. 6 months). Such guidelines would provide a basis that will help educate the market about what is realistically achievable and align expectations with reality.

• Standardization as a means to build a market consensus: Those who use extended testing routinely report that such testing is very useful toward assessing risk even when the results are not directly associated with quantitative models. But a plethora of extended test protocols exists, and it is often confusing for buyers, or those who drive financial decisions, to understand which one is “best” for their needs. Standardization of the extended-stress test methodology would help build a consensus in the market in terms of both requirements and expectations, and would reduce the time and cost of testing. What the PV community needs is a single, commonly accepted and standardized protocol that provides the information to increase confidence beyond IEC 61215/61730 qualification and type approval.

2.4 More samples vs. longer stress
For a given budget, one can choose to test more samples for shorter stress or fewer/smaller samples for longer stress. Testing more samples is especially useful when an issue is observed infrequently. On the other hand, extended-stress testing identifies how a product wears out and potential weaknesses in a product, enabling more complete risk assessment, just as test-to-failure is used for risk assessment. While extended-stress testing has become quite popular (see section 2.3) and is offered by most test labs and required by many customers, there is substantial debate about the optimal balance between the number of samples and the length of the stress test. The use of smaller samples (e.g. 1- or 4-cell mini modules, or test coupons containing packaging materials with no electrical components or cells) reduces cost (but also introduces new uncertainties), and may be especially useful for screening new materials and for mechanism-specific testing. While it is not well established how to integrate this sort of information, an effective method to screen out materials that are not durable in the application will unquestionably improve the risk assessment. The optimal balance depends on the goal of the testing, as discussed below.
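To put the sample-size side of this tradeoff in perspective, a simple binomial bound (an illustrative sketch we add here; the sample counts are arbitrary and the calculation is not from this paper) shows how weakly a small, fully passing sample constrains the rate of an infrequent defect:

    # Illustrative sketch: one-sided upper confidence bound on the defect fraction
    # when n randomly selected samples all pass a given test (zero observed failures).
    def upper_bound_defect_fraction(n_samples, confidence=0.95):
        """Largest defect fraction p still consistent with 0 failures in n samples."""
        return 1.0 - (1.0 - confidence) ** (1.0 / n_samples)

    for n in (8, 100, 1000):
        p_max = upper_bound_defect_fraction(n)
        print(f"{n:5d} passing samples -> defect fraction could still be up to {p_max:.1%}")

Eight passing samples, for example, still leave open a defect fraction of roughly 30% at 95% confidence, whereas 1000 passing samples bound it near 0.3%; longer stress on the same few samples buys wear-out information instead, which is the tradeoff described above.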

2.5 Risk assessment versus service-life prediction
Although studies often state an ultimate goal of predicting service life for PV modules, failure and degradation rates as a function of time are much more useful to those who are estimating maintenance costs, setting insurance rates, or quantifying risk of an investment. Failure and degradation rates can be reduced into a single number (a service life prediction), but only the use of the time-dependent functions allows calculation of costs and revenues over the lifetime of the system. It is important to describe both the failure rates and degradation rates because failure of a module may require replacement and associated maintenance cost while degradation reduces the electricity generated, both of which can affect the return on investment. In practice, every failure/degradation mechanism must be quantified and the combined effects estimated. Here, we use “risk assessment” as a short way to refer to the evaluation of degradation and failure rates as well as any assessment of risk.
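As a simple illustration of why the time-dependent functions matter, the sketch below (all input values are assumed for illustration and are not from this paper) combines an annual degradation rate and an annual failure rate into the energy delivered by a hypothetical plant over 25 years, the kind of quantity that feeds cost and revenue calculations; a single service-life number cannot be used this way:

    # Illustrative sketch with assumed inputs: energy delivered each year depends on
    # both the degradation rate and the fraction of modules still surviving.
    INITIAL_YIELD_KWH = 1_600_000.0   # assumed first-year yield of a hypothetical 1-MW plant
    DEGRADATION_PER_YEAR = 0.006      # assumed performance degradation rate (0.6 %/yr)
    FAILURE_RATE_PER_YEAR = 0.002     # assumed fraction of modules failing (unrepaired) per year
    YEARS = 25

    total_kwh = 0.0
    surviving = 1.0
    for year in range(1, YEARS + 1):
        surviving *= 1.0 - FAILURE_RATE_PER_YEAR
        performance = (1.0 - DEGRADATION_PER_YEAR) ** (year - 1)
        total_kwh += INITIAL_YIELD_KWH * performance * surviving

    print(f"Energy over {YEARS} years: {total_kwh / 1e6:.1f} GWh "
          f"(vs {INITIAL_YIELD_KWH * YEARS / 1e6:.1f} GWh with no degradation or failures)")

Because each year's output is computed separately, time-varying rates and year-by-year discounting of revenue can be dropped straight into the same loop.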

For every risk function, quantifying the uncertainty (which generally increases with time into the project) is essential [7]. As described in section 2.2, quantifying the uncertainty is sometimes more difficult than making the prediction, and the challenge is even greater when the uncertainty increases as a function of expected lifetime duration.

Inherent to both risk assessment and service-life prediction is a means to describe the stresses in the specific application, e.g. the relevant application and climate specifics. The reliability of a given module for two different locations or application types (e.g. field- or roof-mounted) can be very different, and the relevant parameters (e.g. module temperature, extent of thermal cycling, exposure to salt spray or snow load) need to be understood so appropriate data can be included in the analysis.

2.6 Why should empirical data be included in risk assessments?
Physics-based models (based on understanding the mechanism and quantifying the kinetics) may be required to theoretically explain and understand the development of a degradation process or reaction, but due to the complexity of the processes involved in the field, it is often more practical to rely on experimental observations and correlations of laboratory and field testing. Insurance companies define their rates based on both “exposure rating” (based on physics-based studies) and “experience rating” (based on actuarial-type statistics, which are semi-quantitative and relevant even when they don’t reflect the physics of failure).

Using uncertainty as a metric, physics-based models may be inferior to experimental or field observations when the uncertainties in the physics-based predictions are large. In an example of damage caused by hurricanes, historical data may provide a more accurate prediction than a physical model. If the uncertainty of the empirical data is smaller than the uncertainty of the physics-based model, then the empirical data is likely to be superior and should be respected for the value it brings.

On the other hand, experimental data or field observations may be irrelevant if the prediction is made for a product that has different failure mechanisms than described by the so-far observed data. Thus, the best approach is likely to use correlations of accelerated and field test results combined with analyses of how the products differ and how those differences may affect the final results. Comparisons of field data can also provide very useful insight into the effects of design or component selection, if these are analyzed carefully.

3 STRATEGY FOR BETTER ASSESSING RISK

This section proposes a strategy for improving risk assessments through a tiered approach that uses standards, knowledge, and processes that consider multiple aspects of product design and implementation.

Goals for future improvements include:

1) Address failures that are being observed in the field,
2) Address new failure mechanisms for new products,
3) Reduce cost by removing unnecessary requirements and standardizing testing protocols, and
4) Reduce the uncertainty in reliability assessments.

3.1 Overall strategy and types of tests
Various types of testing and analysis were described in Fig. 2. We suggest that a robust approach to low-uncertainty risk assessment will benefit from using multiple elements in a systematic way (see Fig. 3).

Figure 3: Strategy for improving risk assessments of PV. The colors indicate the time line.

Qualification testing for the intended application provides the foundation (Tier 1 of Fig. 3) of our approach to quantifying risk. The second tier in Fig. 3 describes standardized data collection for larger sample sets and for extended-stress test sequences that have been found to be useful in identifying product that will be successful in the field. These data are analyzed using all available knowledge, including the correlation between laboratory test results and field results for the relevant use environment (third tier in Fig. 3), to determine the risk functions that define the final risk assessment (top tier in Fig. 3). The colors in Fig. 3 suggest a time line: green for Tier 1 suggests that qualification testing is already in place with need for only incremental changes; yellow suggests completion of a first version of a standard within a year or two; orange implies that more research is needed before we can standardize our approach; red implies that a critical confidence has to be reached in observations before making conclusions about risk. The following sections discuss each of these elements.

Table I: Summary of key goals of testing programs and recommended approaches.

Goal 1. Identify failure mechanisms for a totally new product
  Test element: Accelerated test to uncover new and known failure modes; test to failure.
  Approach: Simulate use conditions (including field deployment) to identify expected field failures; test to failure to identify product weaknesses.

Goal 2. Compare two similar versions of a product
  Test element: Accelerated test that quickly identifies (known) failures; test to failure.
  Approach: Use extended-stress tests known to cause the same failures as in the field, collecting multiple data points; standardize so as to leverage learning from the community.

Goal 3. Quality control
  Test element: Accelerated test that quickly identifies weakness.
  Approach: Use “smart” (comprehensive) sampling along with tests from IEC 61215, IEC 61730; use a standard extended-stress test for fewer samples.

Goal 4. Reliability assessment
  Test element: All of the above.
  Approach: Compare data with previously collected data and with literature publications to provide the best analysis possible.

3.2 Tier 1: Continued improvements of qualification tests
As products evolve, there is a constant need to update the standards for qualification testing. Based on current reports from PV systems, priority issues include:

1) A range of failures of power electronics associated with both design (IEC 62093) and quality assurance (IEC TS 63157),

2) Potential-induced degradation (PID) of modules,
3) Cracked cells in modules and a range of problems that may be caused during installation,
4) Hot spots, and
5) Quality issues resulting in inconsistent nameplate rating and/or durability.

In response to these and other opportunities, amendments of IEC 61215 and IEC 61730 are being discussed (see section 4). In general, new test methods are first published as IEC Technical Specifications (TS), typically without pass-fail requirements. After being used for several years and becoming well accepted by the community, the test method, along with a defined minimum threshold for establishing passage or failure of the test, may be merged into the IEC 61215 and IEC 61730 tests. As an example, IEC 62804-1 was published in 2015 to define how to test c-Si modules for PID. After using the two proposed tests, the IEC committee may now be ready to amend IEC 61215 to include a pass-fail PID test based on IEC 62804-1 experience.

A critical element of starting production of a new design is defining quality assurance procedures. IEC 62941 was developed to guide this process for modules. Methods for consistent implementation of IEC 62941 are being defined by IECRE [20].

Qualification testing provides a “pass” if the conditions of the tests are met, but the conditions may be intentionally varied. For example, the “pass” result may characterize the ability to withstand variable levels of system voltage, snow or wind load, and hail. Now, a Technical Specification is being developed for higher-temperature operation. If new Technical Specifications become well accepted by the community as useful tests, IEC 61215 may in the future be modified to test for multiple levels of operation at higher temperatures, for extended UV exposure, and for robustness to stresses during installation.

3.3 Tier 2: Strategy for additional data collection
While the need for more standardized data is clear, there is not yet agreement on the test methodology and data collection. We suggest that confusion arises from the multiple purposes of the additional data. Table I summarizes key goals and strategies for the data collection.

Outdoor testing in a range of use environments is part of evaluating any product, but getting useful results takes longer than the product-development cycle. Even so, systematic collection of data regarding both the performance and physical changes is essential to analyzing the relevance of the accelerated test data.

If a company is developing an entirely new product and has little information about how the product will fail, it is useful to carefully analyze possible problems, as well as to use accelerated stress testing to simulate conditions of the anticipated use environment [17, 18]; it is also useful to stress the product until it fails so as to identify its weaknesses. (See the first item in Table I.)

It is more common (second item in Table I) to compare two similar designs (e.g. with a different bill of materials, or similar products from two manufacturers). For common PV module designs, the probable failure and degradation mechanisms have been well studied and effective tests exist. Using a standard set of tests and data collection methodology enables consistent comparisons and leverages collective knowledge from the community. While these standardized tests should duplicate failures that are relevant in the field, there can also be value in testing to failure in cases where a change in a design may introduce a new weakness in the product. Thus, attempting to align the accelerated and field stresses (even if it were possible) may be less useful than cost-effectively collecting an extensive extended-stress data set and evaluating that data set based on field experience coupled with an understanding of the physical processes.

A third goal of collecting data is quality control. As part of a quality management system (QMS), companies develop reliability or quality monitoring programs that detect changes in the product. Companies may attempt to confirm product quality through a short (incomplete IEC 61215) set of tests before releasing inventory and then may continue stressing a subset of samples to increase confidence in the consistency of the manufacturing process. Some customers require third-party testing of random samples from the lots that are shipped to them. To provide effective quality control, it is essential to sample all production lines and to provide feedback very quickly; we suggest leveraging a quality assurance guide such as IEC 62941 and including:

1) Careful and consistent control of incoming materials/components,
2) “Smart” (rather than purely random) sampling to test appropriately weighted sample populations that represent the production population as a whole with respect to the failure modes of interest,
3) Rapid stress testing such as application of parts of IEC 61215/61730 for modules or IEC 62093 for inverters and other balance-of-system components.

In many cases, application of the standard qualification tests adequately identifies problematic variations in manufacturing. However, extended-stress testing of a subset of samples may also be useful. Although one could apply extended tests to the full sets of samples, practically, there is a tradeoff between the number of samples and the length of the stress tests; we suggest that, especially when quality control is the goal, shorter tests (to give quick feedback to the production process) and broader sampling (to test all production hardware) may be better than using extended testing for all samples.

Developing a useful, standardized set of extended stress tests is critical. For modules, multiple versions of extended-stress testing have already been introduced to the community [9-19]. In many cases, these apply IEC 61215 tests multiple times, recording the results after each cycle [11, 12, 15]. In some cases, different types of stress are applied sequentially [14, 15]. Applying stresses in combination more closely simulates the use environment, so including combined-stress tests is preferable if costs are acceptable. Additionally, it can be useful to measure indicators of changes such as adhesion, leakage current, and a polymer’s ability to deform without fracturing. Because acceleration factors can vary by orders of magnitude for different failure mechanisms and use conditions, it will not be possible to design the “perfect” extended-stress test that would identify all relevant failures while avoiding irrelevant failures. On the other hand, the industry’s extensive experience can be used to identify and standardize a useful test sequence that builds on the huge success of IEC 61215 and IEC 61730 to provide more extensive data at minimal cost.
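The scale of that variation can be illustrated with a simple sketch (the climates, temperature swings, and fatigue exponents below are assumed for illustration and are not from this paper): the same 200-cycle thermal-cycling test corresponds to very different amounts of equivalent field exposure depending on the mechanism and the deployment climate.

    # Illustrative sketch with assumed values: equivalent field years represented by a
    # fixed thermal-cycling test, using a simple Coffin-Manson-style power law.
    TEST_DELTA_T = 125.0   # chamber cycle from -40 C to +85 C
    TEST_CYCLES = 200      # thermal cycles, as in IEC 61215

    climates = {"temperate rooftop": 30.0, "desert tracker": 55.0}   # assumed daily delta-T (C)
    fatigue_exponents = {"solder fatigue (m = 2)": 2.0, "brittle interconnect (m = 4)": 4.0}  # assumed

    for climate, field_dt in climates.items():
        for mechanism, m in fatigue_exponents.items():
            accel = (TEST_DELTA_T / field_dt) ** m   # cycles-to-failure ratio, field vs chamber
            years = TEST_CYCLES * accel / 365.0      # assuming one field cycle per day
            print(f"{climate:18s} {mechanism:26s} ~ {years:6.1f} equivalent field years")

With these assumptions the same test spans roughly 3 to 170 equivalent field years across the four combinations, which is why a single sequence cannot be “perfect” for every mechanism and climate.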

Reliability assessments should use all of the available data in scientifically appropriate ways, as discussed below.

3.4 Tier 3: Strategy for data analysis
The industry will benefit from an increasing engagement by all parties in understanding and analysing data. This analysis will be easier and more effective if the set of tests and associated data collection methodology are standardized and if research efforts are directed toward comparing the standardized test results to field results in parallel with development of physics-based models, including more detailed understanding of the physical changes. As a body of knowledge is developed, that knowledge should be used to systematically improve the qualification tests, the standardized extended-test methodology, and the associated data analysis.

The focus of the data analysis should always be on providing useful inputs into the specific risk assessment (section 3.5), which may vary from project to project. For example, a module design that is susceptible to failure under high snow load may be quite acceptable for application in a tropical environment, or an inverter that is operated in an air-conditioned enclosure may not function well in a hot ambient environment with direct sunshine. Also, the data analysis should include considerations of both testing of the design and consistency of the manufacturing.

The data analysis should reflect both scientific studies that elucidate the details of the failure mechanisms and experimental or field observation studies that indicate the kinetics of the mechanism at stress levels present in the field. For example, potential-induced degradation depends not only on system voltage, but also on temperature, humidity, light, stress history, and electrical loading. There is opportunity for substantial research to better understand the stresses that cause failures/degradation as well as the material and product attributes that may affect the failures and degradation. This understanding, along with knowledge of the process controls that are used in the manufacturing and installation processes, will aid in making more accurate analyses.

Comparison of field experience with accelerated test results is a critical part of verifying any model, yet the number of years required for completion of such studies often prevents clear conclusions; as indicated above, the production lifetime for a fixed bill of materials and production process is often only 3-12 months, leaving manufacturers little incentive for lengthy studies. If records of accelerated test data and installation procedures can be kept alongside the as-built plant documentation, the meaning of future field outcomes will be clearer.

A scientific approach must always be taken when analysing both the accelerated test data and field data. The hardware and application being analysed may differ from those for which the data are available. Relevance of the results of extended-stress testing will depend on the site and application being evaluated.

3.5 Tier 4: Useful risk assessments
The highest tier shown in Fig. 3 describes the end goal: a risk assessment that highlights the greatest risks while providing the comprehensive information needed to make business decisions.

As discussed in section 2.5, service life prediction is useful, but not sufficient: we must quantify the expected degradation and failure rates as a function of time. Whether non-linear degradation occurs mostly near the beginning or end of life can have a large effect on the levelized cost of energy (LCOE) [1]. Thus, reliability functions that reflect both degradation and failure rates are much more useful than a single number that is intended to reflect an average lifetime.

Additionally, better data for business decisions will reduce the associated risk, and, potentially, the cost of capital, lowering the overall project cost. This requires more comprehensive information such as a five-part analysis (see the sketch after this list):

1. Failure/Degradation Mode
2. Failure/Degradation Consequence (severity) – Assign a weight to reflect the cost associated with the failure.
3. Failure/Degradation Timing – This assesses implications for both the time value of money and the investment horizon.
4. Failure/Degradation Cause – This may facilitate risk improvement and management.
5. Failure/Degradation Scale (occurrence) – Identifies the fraction of products that could be affected.
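As one hedged illustration of how these attributes could be combined (the failure modes, costs, occurrence fractions, and discount rate below are invented, and the formula is a simplification of our own rather than the CPN method of the references), the mode, consequence, timing, and scale can be rolled into an expected discounted cost per failure mode, while the cause informs mitigation rather than the arithmetic:

    # Illustrative sketch with invented numbers: expected discounted cost per failure mode.
    DISCOUNT_RATE = 0.06          # assumed cost of capital
    FLEET_SIZE = 4_000_000        # modules in a hypothetical 1-GW fleet (cf. Fig. 1)

    # mode: (consequence in $ per affected module, timing in years, occurrence fraction)
    failure_modes = {
        "junction-box failure":  (120.0,  4, 0.010),
        "cell-crack power loss": ( 40.0, 10, 0.050),
        "backsheet cracking":    (200.0, 15, 0.020),
    }

    for mode, (cost_per_module, year, occurrence) in failure_modes.items():
        expected_cost = cost_per_module * occurrence * FLEET_SIZE     # consequence x scale
        discounted = expected_cost / (1.0 + DISCOUNT_RATE) ** year    # timing -> time value of money
        print(f"{mode:22s} expected ${expected_cost:12,.0f}, discounted ${discounted:12,.0f}")

The uncertainty of each input (section 2.2) would need to be carried through such a calculation for it to support the risk assessment described next.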

All information used in the risk assessment has an associated uncertainty. Part of the risk assessment is estimating this uncertainty and including the uncertainty and its confidence in the conclusions.

3.6 Risk assessments at the system level
The four-tier approach of Fig. 3 should be applied to all of the components, and the summary risk assessment completed at the system level, using any available data from similar systems with as many years of experience as possible. System design, installation, and operation may affect component reliability functions, as well as the function of the entire system, so they are an essential part of the risk assessment.

4 RECENT PROGRESS IN STANDARDIZATION

4.1 IEC and PVQAT efforts
The International Electrotechnical Commission (IEC) Technical Committee 82 (TC82) writes standards for solar photovoltaic energy systems. The International PV Quality Assurance Task Force (PVQAT), initiated in 2011, established a Type A Liaison to IEC TC82 in 2017, supporting discussions and coordinated research to lay the groundwork for new and improved IEC standards.

In 2015, PVQAT-prioritized efforts to address issues identified from failures in the field were summarized (see Table III of [21]). Progress on these and related efforts is summarized in this section.

4.2 Recent progress in PV standards development
Table II summarizes selected recent improvements in IEC documents relevant to this paper.

A key effort of PVQAT has been to improve guidelines for quality assurance. PVQAT initiated discussions of this topic in 2011, and IEC TS 62941 was published in January 2016 to provide guidelines for quality assurance of PV module manufacturing [22, 23]. The IECRE conformity assessment board is implementing IEC TS 62941 [20]. Similarly, opportunities for improved quality assurance were identified for PV system installation and operations/maintenance, as described in IEC TS 63049 and implemented by IECRE.

The recent revision of IEC 61215 combined IEC 61215 and IEC 61646 into a single series of documents to align the common testing procedures for silicon and thin-film modules while retaining differentiation of test methods by type of module, where appropriate. The changes to this set of documents were extensive (see Table II for a few highlights).

Two test methods for identifying the susceptibility of silicon modules to potential-induced degradation (PID) were published in 2015 (IEC TS 62804-1).

The cyclic (dynamic) mechanical load test (IEC TS 62782) was published in 2016 with the intent that it be used, along with thermal cycling and humidity-freeze stress, to identify modules with cracked cells that are likely to degrade in performance after exposure in the field.

Table II: Summary of some recent improvements in IEC standards relevant to this paper.

IEC TS 62941 (published 2016): Quality assurance guideline. Value: Guideline to improve quality of PV modules.

IEC TS 63049 (published 2017): Quality assurance guideline. Value: Guideline to improve quality of PV plant installation and operation/maintenance.

IEC 61215 series (published 2016):
• Hot-spot test measures shunting of every cell. Value: Testing the most vulnerable cells is more likely to identify a problem.
• Marking requirements for nameplate and general documentation are better defined. Value: Provides customers important information in a consistent way.
• Power output is checked both before and after stress testing. Value: To pass, the modules must perform within specification before and after stress.
• NOCT (nominal operating cell temperature) replaced by NMOT (nominal module operating temperature). Value: NMOT reflects the module temperature when biased at the maximum power point.
• Robustness of terminations test evaluates both cables and junction boxes. Value: More likely to catch problems.

IEC 61730-1 and IEC 61730-2 (published 2016):
• Implemented insulation coordination, overvoltage category, classes, pollution degree, and materials groups. Value: Better aligned with other IEC documents, facilitating better treatment of these concepts.
• Definition of creepage, clearance, and distance through insulation. Value: Better aligned with other IEC documents; streamlines introduction of new materials such as edge seals.
• Implementation of component qualification. Value: Prequalifying junction boxes and other components reduces time to qualify a new design.

IEC TS 62804-1 (published 2015): Tests for susceptibility to potential-induced degradation (PID) in Si modules. Value: Provides two tests that may become a basis of a pass-fail test within IEC 61215.

IEC TS 62782 (published 2016): Cyclic (dynamic) mechanical load testing. Value: Detects cracked cells in modules.

IEC TS 62916 (published 2017): Bypass diode electrostatic discharge (ESD) susceptibility testing. Value: Quantifies control needed to avoid damage from ESD; referenced by IEC 62941.

IEC 62979 (published 2017): Bypass diode thermal runaway test. Value: Tests for thermal runaway when the diode switches from forward to reverse bias.

IEC 62788 materials testing series (published 2016-2017): 62788-1: Encapsulants; 62788-2: Frontsheets/backsheets; 62788-5: Edge seal; 62788-7: Component weathering exposures. Value: Facilitates comparison of candidate materials for use in PV modules and for quality control of incoming materials.

IEC 61724 series (published 2016-2017): Three parts define parameters and procedures for quantifying performance of PV plants. Value: Enables execution of performance guarantees, consistent comparison of systems, and tracking of system health.

Two new standards have been published to improve confidence in bypass diodes. The first addresses the control of electrostatic discharge (ESD); implementation of this standard is one of the key elements in the quality assurance guideline IEC TS 62941. Additionally, IEC 62979 was published in response to an observed failure mechanism of thermal runaway, in which a hot bypass diode switches from the forward-bias state to the reverse-bias state. This failure mechanism is difficult to characterize because it occurs during a quick transient; there is some discussion about whether IEC 62979 is the best approach given the details of the failure mechanism.

A suite of tests (IEC 62788) for characterizing key properties of PV module materials (e.g. encapsulant and back sheet) is being developed, with a number of individual documents already published. This set of documents provides consistent data to facilitate comparison of module materials and is designed to be very helpful to component suppliers and module manufacturers as they identify ways to reduce cost. These test procedures may also be used for quality control of incoming materials.

The IEC 61724 series describes characterization of PV system performance, including capacity- and energy-test methods. Ultimately, the electricity generated by a PV system is a key metric.

4.3 Planned progress in PV standards development
When the revisions of IEC 61215 and 61730 were published in 2016, there was a decision to delay some changes, but to move forward as quickly as possible with amendments to both documents. Table III summarizes some of the changes that are being discussed. Notably, there is a need for a pass-fail test in IEC 61215 for PID (derived from IEC 62804-1), and there is a plan to introduce the cyclic (dynamic) mechanical load test (IEC 62782) as part of a sequence to identify cracked cells in modules. Also, consistent characterization of bifacial and flexible modules will facilitate introduction of these new products into the marketplace. Finally, there has been discussion about modifying IEC 61215’s thermal cycling test to include some cycles during which current flows through the bypass diodes instead of the cells. Some failures of bypass diodes have been observed, and current flow that locally heats thermally fatigued joints accelerates the relevant failures without requiring as much stress time as cycling without current flow. PVQAT Task Group 4 recommends that 50 thermal cycles with current flow through the diodes be added to the 200 thermal cycles that are already done in IEC 61215 with current flow through the cells, for a total of 250 cycles. The functionality of the diodes must be verified at the end of this stress.

A suite of climate-specific tests was originally proposed as IEC 62892. In 2017, TC82 decided to revise the approach to climate-specific testing. The document describing extended thermal cycling for a range of applications will be published as a separate document. The stress conditions for encapsulants that had been discussed as IEC 62892-3 are now being discussed as IEC 62788-1-7.

A new effort (IEC 63126) is defining how to apply IEC 61215 and IEC 61730 tests to modules destined for operation at higher temperature. A key element of IEC 63126 may be to increase the thermal endurance testing of bypass diodes; there is a common view that the current thermal endurance test is inadequate for hotter applications. Once established, the option for higher temperature testing may be rolled into IEC 61215 and IEC 61730, much like these currently enable tests to be completed for a range of system voltages.

To address the common problems that are reported for inverters, IEC 62093 is being rewritten to reflect experience with testing inverters; the draft now includes tables differentiating testing of central, string, and micro inverters. A guide for quality assurance of inverter manufacturing (similar to IEC 62941) is also being developed as IEC TS 63157. Given the dominance of inverter problems, a meaningful inverter qualification test, coupled with the planned quality control guideline, should make a substantial difference to overall PV system reliability.

A test for permanent damage resulting from partial shade is being discussed as IEC TS 63140. The procedure includes methods for determining the size of an adverse shadow and for repeatedly applying that adverse shadow. The method is applicable to monolithically integrated PV modules with one series-connected cell group or with multiple series-connected cell groups that are in turn connected in parallel.

While IEC 62759-1 identifies modules that may be damaged during transportation to the installation site, it is widely noted that modules can also be damaged during installation if bounced, dropped, carried on hard hats, walked on, or installed with too much stress. It is anticipated that it would be useful to write a part 2 of IEC 62759 to identify the care that needs to be taken to avoid damage during installation.

Similarly, there is interest in developing tests for various building-integrated PV (BIPV) products. The approach for testing BIPV products is challenging because of the many potential form factors and because of a lack of data identifying all failure mechanisms. A single project team will be discussing the various types of BIPV products to define an approach to tackling this problem.

As discussed above, there is widespread interest in creating a standard for collecting data during and after application of extended stress. Existing extended-stress test sequences will be reviewed to identify commonalities and differences, and users will be surveyed to select the test sequences that have been most useful in identifying weaknesses that correlate with field failures. As noted above, it is not possible to define a test that uncovers all field-relevant failures without also causing some failures that will not be found in the specific use environment.

5 CONCLUSIONS

Practicality calls on us to do the impossible: Confidently predicting decades of performance for products that are designed and launched in a few months is a formidable challenge. Yet, billions of euros of investment each year require assessment of the risk in those investments. Here, we propose a multi-tier strategy for collecting and using information in the most effective ways, striving toward meeting this tremendous challenge.

Table III: Summary of priorities for planned amendments of IEC 61215 and IEC 61730 and development of new documents.

IEC 61215 new edition (anticipated 2019):
• Add cyclic (dynamic) mechanical load test (IEC TS 62782). Value: Detect cracked cells in modules.
• Add pass-fail requirement for PID (IEC TS 62804-1). Value: Define a consistent way of characterizing PID resistance.
• Methods for testing bifacial and flexible modules. Value: These new products are entering the marketplace without consistent ways of communicating their value.
• Correction of hot-spot test. Value: Addresses a problem with testing certain modules with series-parallel architecture.
• Improve simulator requirements. Value: More accurate power output measurements, with less effect from spectral mismatch.
• Thermal cycling with current flow through bypass diodes. Value: Thermal fatigue associated with the bypass diodes is sometimes observed; application of current flow through the diodes will increase the probability of detecting a problem.

IEC 62892 (anticipated 2018): Extended thermal cycling. Value: Provides greater confidence, especially for locations that experience more thermal fatigue.

IEC 61730-1 amendment (anticipated 2018):
• Allow for insulation thickness to be determined by the distance-through-insulation test of 62788-2 or MST-04. Value: Ensures required minimum insulation thickness after lamination.
• Addition of a weathering requirement for relied-upon insulation. Value: The current test only tests safety of initial conditions; this change gives confidence in safety after weathering.

IEC 61730-2 (anticipated 2018): How to deal with bubbles, in particular for a module with a cemented joint. Value: Ensures that a reduction in distance between the edge of the module and active cells is still safe.

IEC 62788 materials testing series (anticipated 2018-2020): 62788-1: Encapsulants; 62788-2: Frontsheets/backsheets; 62788-5: Edge seal; 62788-6: Multiple component tests; 62788-7: Component weathering exposures. Value: Facilitates comparison of candidate materials for use in PV modules and for quality control of incoming materials.

IEC TS 63126 (anticipated 2019): Defines modifications to module qualification tests for modules that will operate at higher temperatures. Value: Useful for modules that will be deployed in the hottest locations and applications.

IEC 62093 (anticipated 2019): Updates the qualification test for inverters to reflect years of experience. Value: Identify problems with inverter designs.

IEC TS 63157 (anticipated 2019): Guideline to improve quality control of power electronics manufacturing. Value: Improve quality of power electronics.

IEC TS 63140 (anticipated 2019): Partial shade endurance test. Value: Quantifies permanent changes after part of the module is shaded.

IEC 62759-2 (anticipated 2020): Test ability of modules to withstand stresses applied during installation. Value: Rough handling of modules during installation (or walking on them after installation) can cause cracks or other damage.

Not assigned (anticipated 2022): Various forms of BIPV will be considered by a single project team. Value: BIPV applications are growing, and more comprehensive standards will support growth.

TS for extended-stress test (anticipated 2020): Apply longer stress, periodically reporting changes. Value: Provides additional information about module durability, to be analyzed as discussed here.

The strategy starts with accelerated tests standardized as qualification tests that quickly identify design problems for the intended application. In manufacturing and installation, the initial qualification testing must be coupled with robust quality assurance programs such as those described by IEC TS 62941 and IEC TS 63049. Not only should incoming materials and components be tested, but smart sampling should be used to select finished product from all manufacturing lines and shifts to confirm uniform and appropriate production; during installation, systematic inspections should confirm that all crews have appropriate training and oversight. Qualification tests should be updated to reflect failures observed in the field and should provide options for testing to multiple levels of stress when that is shown to be useful, e.g., for varying snow loads and system voltages.
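
To make the idea of smart sampling concrete, the following is a minimal sketch of stratified selection of finished modules so that every manufacturing line and shift is represented in audit testing; the data fields, lot size, and per-stratum sample count are illustrative assumptions, not requirements of IEC TS 62941 or IEC TS 63049.

```python
import random
from collections import defaultdict

def stratified_sample(modules, per_stratum=2, seed=0):
    """Select a fixed number of finished modules from every
    (line, shift) combination so that audit testing covers all
    manufacturing lines and shifts, not just the most convenient lot.

    `modules` is a list of dicts with hypothetical keys
    'serial', 'line', and 'shift'."""
    random.seed(seed)
    strata = defaultdict(list)
    for m in modules:
        strata[(m["line"], m["shift"])].append(m)
    sample = []
    for _, group in sorted(strata.items()):
        k = min(per_stratum, len(group))
        sample.extend(random.sample(group, k))
    return sample

# Hypothetical production lot: 3 lines x 2 shifts x 50 modules each.
lot = [{"serial": f"{line}-{shift}-{i:03d}", "line": line, "shift": shift}
       for line in ("A", "B", "C") for shift in (1, 2) for i in range(50)]
audit_set = stratified_sample(lot, per_stratum=2)
print(len(audit_set), "modules selected for audit testing")  # 12
```

A stratified draw of this kind guards against an audit set that, by chance, over-represents a single line or shift.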

Building on the tests used for qualification testing and based on experience obtained during deployment of many GWs of PV, standardized extended-stress tests should be developed, widely used, well documented, and well understood. Such testing provides a means to compare different versions of products to facilitate improvements of the bill of materials and other aspects of the products. Standardization and openness of such data collection will facilitate scientific analysis by leveraging data from the broader community to identify test results that do or do not correlate with field problems in the application of interest. The extended tests should be designed to cost-effectively apply relevant stresses, testing to failure when the added cost is acceptable, and providing preventative measures to reduce technical risks in investments. Predictive models based on a detailed understanding of relevant failure mechanisms can then be applied to both the extended-test and field data to assess the accuracy of the models, identify indications of product weakness that may be relevant to the intended application, and estimate the uncertainty associated with the risk assessment.
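
As one hedged illustration of such a predictive model, the sketch below computes an Arrhenius acceleration factor relating an elevated chamber temperature to a field operating temperature for a thermally activated mechanism; the activation energy and temperatures are placeholders and would need to be established from the extended-test and field data described above.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(e_a_ev, t_field_c, t_test_c):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_field - 1/T_test)]
    for a thermally activated degradation mechanism.
    Temperatures are given in Celsius and converted to Kelvin."""
    t_field = t_field_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((e_a_ev / BOLTZMANN_EV) * (1.0 / t_field - 1.0 / t_test))

# Illustrative values only: Ea = 0.7 eV, 45 C field operating temperature,
# 85 C chamber temperature.
af = arrhenius_acceleration(0.7, 45.0, 85.0)
print(f"Acceleration factor: {af:.1f}")
# In this simplified picture, 1000 h in the chamber would represent
# roughly af * 1000 h of equivalent thermal exposure in the field.
```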

The final risk assessment should use all available data (published in the literature, as well as collected for the product of interest) to facilitate estimation of the anticipated return on investment (ROI) and the associated risk that the ROI may be lower than expected. This estimation process will need to include not only an understanding of the physics of failure, but also the variability in manufacturing, installation, and maintenance in order to assess the timing, consequence, and scale of possible failures and degradation. Many details must be understood to provide the best risk assessments; by pooling data and coordinating studies to better understand failure and degradation, the community will be able to accelerate progress toward the end goal of low-uncertainty predictions.
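
A simple way to express such a risk assessment quantitatively is a Monte Carlo calculation that propagates uncertainty in degradation rate and failure timing into a distribution of ROI; the sketch below uses invented distributions, prices, and system parameters purely for illustration and would need project-specific data in practice.

```python
import random

def simulate_roi(n_trials=10000, capex=1000.0, price_per_kwh=0.05,
                 annual_kwh_per_kw=1700.0, years=25, seed=1):
    """Monte Carlo sketch: distribution of lifetime ROI per kW under
    uncertain annual degradation and a small chance of early failure.
    All numerical inputs are illustrative placeholders."""
    random.seed(seed)
    rois = []
    for _ in range(n_trials):
        # Annual degradation rate (fraction/year), centered near 0.7 %/yr.
        degradation = max(0.0, random.gauss(0.007, 0.003))
        # 5 % chance of an early failure that ends production prematurely.
        end_year = years if random.random() > 0.05 else random.randint(5, years)
        revenue = sum(annual_kwh_per_kw * (1.0 - degradation) ** yr * price_per_kwh
                      for yr in range(end_year))
        rois.append((revenue - capex) / capex)
    rois.sort()
    return rois[len(rois) // 2], rois[int(0.05 * len(rois))]  # median, 5th percentile

median_roi, p5_roi = simulate_roi()
print(f"Median ROI: {median_roi:.0%}, 5th-percentile ROI: {p5_roi:.0%}")
```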

Standardized qualification and extended-stress testing can play an important role in accelerating this progress. Recent and planned progress in new standards is summarized in section 4.

6 REFERENCES

[1] D. C. Jordan, S. R. Kurtz, K. VanSant, and J. Newmiller, Progress in Photovoltaics: Research and Applications 24 (2016) 978.

[2] U. Jahn and W. Nasse, Progress in Photovoltaics: Research and Applications 12 (2004) 441.

[3] D. Jordan and S. Kurtz, 29th European PV Solar Energy Conference and Exhibition, (2014).

[4] IEC 61215 Terrestrial photovoltaic (PV) modules - Design qualification and type approval, 2016.

[5] IEC 61730-1 Photovoltaic (PV) module safety qualification, 2016.

[6] IEC/TS 62941 Guideline for increased confidence in PV module design qualification and type approval, 2016.

[7] U. Jahn, M. Herz, E. Ndrio, E. Janknecht, D. Moser, G. Belluardo, et al., Solar Bankability WP1 Deliverable D 1.

[8] S. M. Shrestha, J. K. Mallineni, K. R. Yedidi, B. Knisely, S. Tatapudi, J. Kuitche, et al., IEEE Journal of Photovoltaics 5 (2015) 174.

[9] D. Cunningham, B. Jaeckel, and A. Roth, PV Module Reliability Workshop, Golden, CO, (2012).

[10] B. Jaeckel, A. Krtschil, D. Cunningham, N. Forney, C. LaMothe, A. Nguyen, et al., Proceedings 26th European Photovoltaic Solar Energy Conference, Hamburg, (2011) 4AV.2.1, 3484.

[11] A. Funcell, PV Module Reliability Workshop, Golden, CO, (2012).

[12] J. Meydbray, PV Module Reliability Workshop, Golden, CO, (2012).

[13] D. Meakin, PV Module Reliability Workshop, Golden, CO, (2012).

[14] P. Hacke, K. Terwilliger, S. Glick, R. Smith, G. Perrin, S. Kurtz, et al., Proceedings 40th IEEE Photovoltaic Specialist Conference, (2014) 930.

[15] G. TamizhMani, PV Module Reliability Workshop, Golden, CO, (2012).

[16] "Introducing 'Best of modules' Quality assurance," PV Magazine, 2011 (2011) 92.

[17] K. P. Scott and A. Zielnik, PV Module Reliability Workshop, Golden, CO, (2012).

[18] A. Zielnik, "PV Durability and Reliability Issues," Photovoltaics World, Nov/Dec (2009) 10. http://www.renewableenergyworld.com/rea/news/article/2009/12/pv-durability-and-reliability-issues

[19] J. Wohlgemuth and S. Kurtz, Proceedings 40th IEEE Photovoltaic Specialist Conference, (2014) 3589.

[20] G. Kelly, A. Häring, T. Spooner, G. Ball, S. Kurtz, M. Heinze, et al., Proceedings 43rd IEEE Photovoltaic Specialists Conference, (2016) 1264.

[21] S. Kurtz, T. Sample, J. Wohlgemuth, W. Zhou, N. Bosco, J. Althaus, et al., Proceedings 42nd IEEE Photovoltaic Specialists Conference, (2015).

[22] G. Ramu, M. Yamamichi, W. Zhou, A. Mikonowicz, S. Lokanath, Y. Eguchi, et al., "Updated Proposal for a Guide for Quality Management Systems for PV Manufacturing. Supplemental Requirements to ISO 9001-2008," Technical Report No. NREL/TP-5J00-63742, National Renewable Energy Laboratory, Golden, CO (United States) (2015).

[23] Y. Eguchi, G. Ramu, S. V. Lokanath, M. Yamamichi, S. Kurtz, J. Wohlgemuth, et al., Proceedings 40th IEEE Photovoltaic Specialists Conference, (2014) 3574.

