New IJLCA Article Looks at LCA Aggregation Methods: What Insensitivity to Weighting Means for Decision-Making

If your life cycle assessment (LCA) results are, like many such findings, being used for decision-making, I’d like to draw your attention to my newly published article in the International Journal of Life Cycle Assessment.

This article, coauthored by colleagues from several organizations I’ve had the pleasure of working with during my professional journey, is part of the IJLCA special issue on Interpretation. It illustrates how our current methods of aggregating comparative LCA results can be insensitive to weight factors. The basic issue is a gap in interpretation research: we focus either on the results at the impact-category level (the facts) or on the weights associated with those impact categories (the values), but little has been done on how the two connect.

Weights in LCA are used to express values and priorities in decision-making, and they can help guide the selection among alternatives in comparative LCAs when tradeoffs must be made. Weights, and specifically weight factors, can be used in LCA to generate single scores and thus facilitate a decision. But while several studies delve into the evaluation and extraction of such weight values, few have explored the role weights play in producing the single score.
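
In generic notation (my shorthand here, not taken from the article), the conventional single score S for an alternative is a weighted sum of its normalized impact-category results:

$$S = \sum_{i} w_i \, N_i$$

where $w_i$ is the weight factor for impact category $i$ and $N_i$ is the alternative's normalized result in that category.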

Our new study evaluates the role of weight factors in two forms of aggregation: the conventional single score, which is a linear weighted sum, and outranking, a non-linear aggregation method based on mutual differences between alternatives that also incorporates uncertainty. The study tests both the CML-IA and ReCiPe impact assessment methods.
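
To make the two ideas concrete, here is a minimal sketch (my own illustration, not code or data from the article): a linear weighted sum per alternative versus an outranking-style tally of weighted pairwise wins per impact category. The category names, results, and weights are hypothetical, and the sketch leaves out the uncertainty handling that the outranking approach in the study includes.

```python
# Minimal illustration of the two aggregation forms (hypothetical data).
# Lower normalized results are assumed to be better.
results = {
    "A": {"climate change": 0.50, "acidification": 0.40, "toxicity": 0.40},
    "B": {"climate change": 0.90, "acidification": 0.30, "toxicity": 0.30},
}
weights = {"climate change": 1 / 3, "acidification": 1 / 3, "toxicity": 1 / 3}

def single_score(alt):
    """Conventional aggregation: linear weighted sum of normalized results."""
    return sum(weights[c] * results[alt][c] for c in weights)

def net_flow(a, b):
    """Outranking-style aggregation: total weight of categories where `a`
    beats `b`, minus the weight of categories where `b` beats `a`."""
    flow = 0.0
    for c, w in weights.items():
        if results[a][c] < results[b][c]:
            flow += w
        elif results[a][c] > results[b][c]:
            flow -= w
    return flow

print("single scores:", {alt: round(single_score(alt), 3) for alt in results})
print("net outranking flow, A vs B:", round(net_flow("A", "B"), 3))
# Here A wins on the weighted sum (its large climate-change advantage dominates
# the total), while B wins the outranking comparison (it is better in two of
# the three categories).
```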

The goal is to identify how weight values do and don’t affect the final aggregated score, which allows us to evaluate the sensitivity of each aggregation method.

We found that conventional aggregation methods are dominated by a few indicators and hence have low sensitivity to weights; they are, therefore, less receptive to values. Outranking, on the other hand, is more receptive to weights and gives the priorities assigned to different criteria a greater role in the final result.
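
A second hypothetical sketch shows why this happens. When one impact category dominates the normalized magnitudes, shifting the weights barely moves the weighted sum, whereas the outranking comparison, which only considers which alternative is better in each category, responds directly to the new weights. Again, all numbers below are illustrative, not from the article.

```python
# Hypothetical weight-sensitivity check. The climate-change results dwarf the
# other normalized values, so the weighted sum is dominated by that category.
results = {
    "A": {"climate change": 0.95, "acidification": 0.01, "toxicity": 0.01},
    "B": {"climate change": 0.50, "acidification": 0.05, "toxicity": 0.05},
}

def single_score(alt, weights):
    return sum(w * results[alt][c] for c, w in weights.items())

def net_flow(a, b, weights):
    flow = 0.0
    for c, w in weights.items():
        if results[a][c] < results[b][c]:
            flow += w
        elif results[a][c] > results[b][c]:
            flow -= w
    return flow

weight_sets = [
    {"climate change": 0.6, "acidification": 0.2, "toxicity": 0.2},
    {"climate change": 0.2, "acidification": 0.4, "toxicity": 0.4},
]
for weights in weight_sets:
    better_sum = "A" if single_score("A", weights) < single_score("B", weights) else "B"
    better_rank = "A" if net_flow("A", "B", weights) > 0 else "B"
    print(weights, "-> single score prefers", better_sum, "| outranking prefers", better_rank)
# The single score prefers B under both weight sets (climate change dominates),
# while the outranking preference flips from B to A when the weights shift.
```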

The study also finds that outranking generates consistent results across both impact assessment methods (CML-IA and ReCiPe). By contrast, the results of conventional aggregation, whether regional or global normalization references are used, are inconsistent across impact assessment methods, and at both normalization scales the results depend on only a few impact categories.

The bottom line is that this study illustrates how our aggregation methods need to take into account the mathematical relationship between “facts” (what we calculate with LCA) and “values” (what we reflect in our weightings), and how that relationship affects the overall score.

This is especially important for the decision-making process, as it means that values and priorities can be reflected in the interpretation of results.