Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Location: Ross Island/Morrison
Date: Tuesday, 25/Jun/2019
2:00pm - 3:30pm | Open: Exploring Sustainability in different ways
Session Chair: Lise Laurin
Ross Island/Morrison 
 
2:00pm - 2:20pm

Framework for Evaluating Water Security in Megacities from an Environmental and Socioeconomic Perspective

Tatiana C. G. Trindade, I. Daniel Posen, Heather L. MacLean

University of Toronto, Canada

With the abrupt and uncontrolled growth of megacities in the developing world, managing their urbanization process has become a daunting challenge for governments and stakeholders, whose capacity to respond to urban water issues has often been exceeded. Water management in megacities has been a recurrent research concern, and urban water (in)security is an emerging topic. To evaluate urban water security in megacities in the developing world and provide valuable insights for future water policies, plans, and projects, this paper develops a framework to assess current water-related issues from a socioeconomic and environmental perspective.

Typically, risk assessments tend to focus on a single aspect of urban water sustainability. Instead, this work proposes a critical review and deployment of multiple methodologies to identify the pivotal factors affecting the water security of a megacity. The critical review assesses different risk methodologies that have been used to analyse urban water issues, suggesting further improvements and modifications. The modified methods are then used to develop a framework to evaluate socio-environmental and economic aspects of urban water security. First, the social and environmental aspects are analysed by extending the Water Insecurity Index (WII), originally developed by [1] (Refer to pdf file). The index is composed of six categories of indicators: capacity, environment, use, access, resources, and climate. Then, the environmental analysis is complemented by a disaster risk assessment to identify flooding and landslide hazards based on geomorphologic, climatic, and environmental characteristics. Finally, the economic aspects are assessed by estimating the direct and intangible economic losses caused by flooding, considering the extent and damage of registered events. This framework was applied in a case study of the megacity of Sao Paulo, Brazil.
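As an illustration of how a composite index of this kind is assembled, the sketch below combines the six indicator categories into a single WII score for one district. The equal weights, the 0-1 scoring convention, and the example district values are illustrative assumptions, not the normalization and weighting actually used in the paper.

```python
# Minimal sketch of a composite Water Insecurity Index (WII) for one district.
# Assumes each category is scored on a 0-1 scale (1 = most insecure) and that
# categories are combined with equal weights; the study's actual scheme may differ.

CATEGORIES = ["capacity", "environment", "use", "access", "resources", "climate"]

def water_insecurity_index(scores, weights=None):
    """Weighted average of category scores; higher values mean greater insecurity."""
    if weights is None:
        weights = {c: 1.0 / len(CATEGORIES) for c in CATEGORIES}
    return sum(weights[c] * scores[c] for c in CATEGORIES)

# Hypothetical outskirt district scoring worse on access and capacity:
district = {"capacity": 0.7, "environment": 0.5, "use": 0.4,
            "access": 0.8, "resources": 0.3, "climate": 0.5}
print(round(water_insecurity_index(district), 3))  # 0.533, above the 0.425 city average
```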

The results highlighted how water insecurity tends to increase from the core to the outskirts of the city (Figure 1, refer to pdf file), and how this insecurity is highly correlated with average income and education. More than 70% of districts in the lowest income quintiles had a WII higher than Sao Paulo’s average (0.425), compared to less than 5% in the highest quintiles. However, considering vulnerability to urban disasters, the city centre is more prone to flooding, while the outskirts face a greater landslide hazard. Sao Paulo’s centre is located on flatter terrain with higher population density, whereas the outskirts have lower density and are located on areas of steep slopes. This indicates that precipitation run-off tends to cause mass movements in the outer parts of the city and to accumulate in its centre, explaining why the flooding hazard tends to increase with population density. Considering that a great part of Sao Paulo’s population works in its centre, these districts are expected to have the highest direct and intangible economic losses from flooding events. The results emphasize the need for better management of water runoff both in the outskirts and in the centre of Sao Paulo, but also highlight the importance of developing district-based urban policies that tackle the specific and heterogeneous vulnerabilities of each smaller portion of a megacity. Overall, this paper develops a comprehensive analysis of water security in megacities, identifying which neighbourhoods need more attention when different urban water risks are considered, and lays the groundwork for future studies on how this vulnerability might be affected by future climate and environmental changes.



2:20pm - 2:40pm

An evaluation of alternatives to diesel fuel for use in Canada’s long-haul heavy-duty vehicles

Maddy Ewing, Heather MacLean, I. Daniel Posen

University of Toronto, Canada

This research aims to evaluate promising alternatives to diesel for class 8b heavy-duty vehicles (HDVs) in Canada on the basis of greenhouse gas (GHG) emission reductions and total lifetime costs. These vehicles, weighing more than 27 tonnes, are responsible for the long-haul movement of freight and contribute disproportionately to Canada’s total heavy-duty vehicle kilometers travelled. A suite of alternatives to diesel, including biofuels, natural gas, hydrogen fuel cell and battery electric trucks, is presently being reviewed. Although electric and hydrogen fuel cell HDVs are appealing in terms of their elimination of tailpipe emissions, their contributions to GHG emission reduction targets depend on low-carbon electrical grids and hydrogen sources. Additionally, these technologies require major investments in infrastructure and thus may not yet be economically feasible. On the other hand, natural gas-based fuels may be appealing in terms of lower costs, but their ability to contribute to climate targets is uncertain. Finally, life cycle GHG emissions and costs associated with biofuels vary depending on the specific fuel, geographic location, production method, assumptions surrounding land use change, and vehicle engine design. Promising alternatives to diesel will be selected for further evaluation based on technological readiness for deployment and positive results from previous studies that demonstrate promising reductions in GHG emissions without substantial increases in cost. To evaluate potential contributions to Canada’s GHG emission reduction targets, a life cycle assessment (LCA) of each of the selected alternatives will be conducted. Results are primarily being generated through GHGenius, with key parameters updated to reflect the most recent and relevant available data for Canada. Preliminary results suggest that renewable natural gas produced through anaerobic digestion, dimethyl ether produced from wood residue, and fuel cell vehicles fuelled using hydrogen produced from wood residue are the three most promising technologies for GHG emission reductions. On the other hand, dimethyl ether and Fischer-Tropsch diesel produced from natural gas are not expected to produce any benefits. Lifetime costs, including purchase, fuel, and maintenance costs, of each alternative will be quantified using data from publicly available resources. Based on the results of the preceding analyses, the cost-effectiveness of GHG reductions for each alternative will be calculated (e.g., $/tonne of CO2 eq. reduced). Risks and opportunities associated with the adoption of each technology will be identified and discussed, including fuel availability, reliability of LCA results, impact on air quality, potential for future process improvement, and expected fluctuations in cost. The most robust alternatives to diesel (i.e., those that perform well under a variety of scenarios) will be identified, and opportunities for their adoption will be discussed by identifying the set of conditions that make each alternative favourable. This will include, for instance, examining the influence of the GHG intensity of upstream activities (e.g., electricity, natural gas or biomass production) on results, and identifying key thresholds that lead to one alternative being favoured over another. Results from this assessment will help policy-makers and fleet operators determine the most effective ways by which Canada can reduce GHG emissions from its long-haul HDV sector.
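To make the planned cost-effectiveness metric concrete, the sketch below computes dollars per tonne of CO2-eq. avoided from lifetime cost and lifetime GHG figures. The function and all numbers are illustrative placeholders, not results from this study.

```python
# Illustrative cost-effectiveness calculation for a diesel alternative, as described
# above: incremental lifetime cost divided by lifetime GHG reduction (tonnes CO2-eq).
# All numbers below are placeholders, not figures from the study.

def cost_per_tonne_avoided(cost_alt, cost_diesel, ghg_alt, ghg_diesel):
    """Return $/tonne CO2-eq avoided over the vehicle lifetime."""
    ghg_avoided = ghg_diesel - ghg_alt
    if ghg_avoided <= 0:
        raise ValueError("Alternative does not reduce GHG emissions")
    return (cost_alt - cost_diesel) / ghg_avoided

# Hypothetical class 8b truck over its full service life:
print(cost_per_tonne_avoided(cost_alt=1_450_000, cost_diesel=1_300_000,
                             ghg_alt=900, ghg_diesel=1_800))  # ~166.7 $/tonne
```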



2:40pm - 3:00pm

Quantifying the Environmental Impacts of Cannabis Cultivation

Hailey Summers, Drayton Browning, Jason Quinn

Colorado State University, United States of America

The Intergovernmental Panel on Climate Change has recently affirmed that anthropogenic greenhouse gas emissions are the primary driver of climate change. Industries historically targeted for the majority of anthropogenic emissions include transportation, electricity generation, and food, primarily beef. However, it is possible that a rapidly emerging industry could have a larger impact than those previously targeted. Even prior to medical and recreational legalization, the cannabis industry was recognized for its environmental burdens, but its impacts could only be roughly estimated because illegal, off-the-grid growing practices made the industry’s size difficult to quantify. With the recent legalization of medical and recreational cannabis use in several U.S. states, energy consumption data are now publicly available at an industry scale. This work aims to quantify the environmental impacts of cultivating cannabis bud, with initial work focused on the Colorado industry. Foundational work was developed through a detailed process model capturing the mass and energy required to cultivate cannabis in indoor, greenhouse, and outdoor environments. Process model results were combined with life cycle inventory data from Ecoinvent 3.4 and the U.S. Life Cycle Inventory database to characterize environmental impacts based on TRACI 2.1. Preliminary work shows that the emissions associated with the indoor and greenhouse cannabis industry in Colorado are larger than those of other anthropogenic industries currently targeted for significant contributions to climate change. The largest impacts observed for indoor and greenhouse cannabis cultivation are primarily due to high-intensity grow lights and the heating, ventilation, and air conditioning requirements associated with simulating plant growth environments indoors. Environmental impact results were significantly reduced when cultivation was switched to outdoor practices, such that the cannabis industry would no longer be the largest anthropogenic contributor in Colorado. Future work includes expanding results to include several use phases, thereby generating full life cycle assessments, quantifying impacts of the cannabis industry across the United States, and outreach efforts to inform cannabis cultivators. Through our research, we have seen that some cannabis cultivators are taking action to improve growing techniques directed at minimizing their carbon footprint. However, with little environmental impact information publicly available and little incentive due to high economic margins, the cannabis industry is expected to continue to be a major contributor to global greenhouse gas emissions.
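The sketch below shows, in back-of-envelope form, how cultivation energy inputs translate into a carbon footprint per kilogram of dried bud. The energy intensities and emission factors are placeholder assumptions for illustration only; they are not the process-model results reported in this study.

```python
# Back-of-envelope sketch of cultivation GHG emissions per kg of dried cannabis bud
# from electricity and natural gas inputs. All intensities and emission factors are
# illustrative placeholders, not the study's process-model outputs.

def cultivation_ghg(electricity_kwh_per_kg, gas_mj_per_kg,
                    grid_ef_kg_per_kwh=0.6, gas_ef_kg_per_mj=0.056):
    """Return kg CO2-eq per kg of dried bud from energy inputs."""
    return (electricity_kwh_per_kg * grid_ef_kg_per_kwh
            + gas_mj_per_kg * gas_ef_kg_per_mj)

indoor = cultivation_ghg(electricity_kwh_per_kg=4000, gas_mj_per_kg=5000)
outdoor = cultivation_ghg(electricity_kwh_per_kg=100, gas_mj_per_kg=0)
print(f"indoor  ~{indoor:,.0f} kg CO2-eq/kg")   # ~2,680 with these placeholder inputs
print(f"outdoor ~{outdoor:,.0f} kg CO2-eq/kg")  # ~60 with these placeholder inputs
```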



3:00pm - 3:20pm

New Developments in S-ROI

Lise Laurin, Caroline Taylor, Valentina Prado

EarthShift Global, United States of America

Sustainable Return on Investment (S-ROI), a method to assess the social, environmental, and economic impacts of a decision or investment, has been around for over two decades now. Its adoption has been sporadic and fleeting, even with strong support from the AIChE and the introduction of a LEED pilot credit. Rebranding, from Total Cost Assessment to S-ROI to Triple Bottom Line Cost Benefit Analysis, has not improved uptake. At the same time, there is strong interest in the return on investment of sustainability investments of all types. We have been exploring some of the reasons why the method has had such low adoption rates and working on solutions.

The first aspect of the methodology that has been resolved is how to properly assign values to environmental impacts. By first letting go of the need to have a single “right” value, and then by encompassing multiple viewpoints in the assessment, environmental impacts can now be assessed in a way that works for all stakeholders. A constantly updated library of these values allows us to rapidly assess the outputs of an LCA and/or risk assessment, providing meaningful and usable results.

A second aspect of the methodology, one that is particularly powerful yet at the same time a barrier to adoption, is the inclusion of stakeholder input. Stakeholder input is critical to properly assess societal impacts, yet including stakeholders in the S-ROI discussion may reveal proprietary company information that could put the company at risk. For impact investments, getting stakeholder input is costly and may provide unwelcome answers.

Yet, in many corporate decisions at least, the external stakeholder impacts are minimal or predictable by externally facing experts within the corporation. Most impacts of infrastructure projects such as green roofs, berms, and water catchments have predictable economic costs and economic, environmental, and social benefits. Impact Infrastructure has captured these costs and benefits in a successful tool called AutoCase. While AutoCase won’t capture the benefits of rooftop farms in low-income neighborhoods, for the vast majority of projects, most impacts will be captured. Similar to an LCA, this type of assessment requires only a tool with a library of data and an experienced modeler. Costs to complete an assessment are minimal.

Inspired by this example, EarthShift Global recently worked with an aerospace company to develop a new tool to assess corporate efficiency improvements. The types of projects turned out to be ubiquitous: HVAC and boiler replacements, chilled water recycling units, energy-efficient lighting installations, etc. One of the key additions to the tool was the inclusion of contingent liabilities, such as the consequences of an HVAC failure. Productivity impacts, often forgotten in a traditional ROI, can result in a return on investment fast enough to meet even the toughest financial requirements.
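A minimal sketch of how contingent liabilities and productivity benefits can change a simple payback calculation is shown below. The function and all dollar figures and probabilities are hypothetical illustrations, not values from the tool described above.

```python
# Sketch of a simple payback calculation for an efficiency project (e.g., an HVAC
# replacement) that adds a productivity benefit and the expected (probability-weighted)
# cost of a failure avoided by replacing the old equipment. All figures are hypothetical.

def simple_payback_years(capital_cost, annual_energy_savings,
                         annual_productivity_gain=0.0,
                         failure_probability_per_year=0.0,
                         failure_cost=0.0):
    """Payback period in years, including productivity and avoided contingent liability."""
    avoided_risk = failure_probability_per_year * failure_cost
    annual_benefit = annual_energy_savings + annual_productivity_gain + avoided_risk
    return capital_cost / annual_benefit

# Energy savings alone:
print(round(simple_payback_years(200_000, 25_000), 1))                          # 8.0 years
# Adding productivity gains and an expected HVAC-failure liability:
print(round(simple_payback_years(200_000, 25_000, 40_000, 0.05, 300_000), 1))   # 2.5 years
```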

The next step in the development will be to enable the addition of project-specific contingencies, opportunities or risks. This will allow the user to assess most of the impacts without stakeholder input, yet bring the stakeholders in as needed, add contingencies, and capture new or unexplored features of the investments.

 
4:00pm - 5:30pm | ESS: Ecosystem Services for Sustainable Systems
Session Chair: Rebecca Jan Hanes
Ross Island/Morrison 
 
4:00pm - 4:20pm

Spatially-Explicit Sustainable Manufacturing Site Design Using Techno-Ecological Synergy

Michael Charles, Bhavik Bakshi

The Ohio State University, United States of America

A history of separating technology and ecology has led to many unintentional environmental impacts. These impacts affect ecosystems that provide services we rely on for survival and well-being. Ironically, in the search for technology that improves the livelihood of mankind, we risk the well-being that nature provides. Therefore, to make better decisions, there is a need for designing with respect to both technological and ecological systems. Along with decreasing negative environmental impacts, including ecosystems in sustainable design can lead to innovative solutions that utilize the functions and co-benefits of nature as we search for sustainable solutions amid increasing population and demand on ecosystems.

Previous work in techno-ecological process design has lacked inclusion of the spatial heterogeneity of ecosystem service supply and demand. Previous research in ecosystem services does provide methods for mapping ecosystem services across different spaces and scales; however, it mostly fails to connect these services to specific beneficiaries, such as manufacturing sites. Without this connection, the concept of servicesheds cannot be easily quantified for use in design. Servicesheds are the areas that provide specific ecosystem services to specific beneficiaries. For spatially-explicit sustainable design to be a reality, it is important to understand how, where, and when mass and energy flow across components of the techno-ecological system. Understanding the capacity of local ecosystems to regulate the chemicals released to the atmosphere by industrial processes can provide an understanding of absolute sustainability, the condition where our demand on ecosystems is less than their available supply within a scale of interest. This research will both analyze the spatially-explicit interactions between ecosystem services and a manufacturing site and explore design potentials in response.
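The supply-versus-demand comparison behind absolute sustainability can be expressed compactly, as in the sketch below. The index form (supply minus demand, divided by demand) follows the techno-ecological synergy literature, and the pollutant quantities are hypothetical.

```python
# Sketch of the supply-versus-demand comparison behind "absolute sustainability":
# within a serviceshed, compare the pollutant load emitted by the site (demand on the
# air quality regulation service) with the deposition capacity of surrounding
# ecosystems (supply). Index form follows the techno-ecological synergy literature;
# all numbers are hypothetical.

def tes_sustainability_index(supply_t_per_yr, demand_t_per_yr):
    """(supply - demand) / demand; non-negative values mean demand is met locally."""
    return (supply_t_per_yr - demand_t_per_yr) / demand_t_per_yr

# Hypothetical NOx balance for a plant before and after restoring nearby land:
print(round(tes_sustainability_index(supply_t_per_yr=12.0, demand_t_per_yr=40.0), 2))  # -0.70
print(round(tes_sustainability_index(supply_t_per_yr=28.0, demand_t_per_yr=40.0), 2))  # -0.30
```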

The ecosystem service that this research initially focuses on is air quality regulation. As a result, it uses physical dispersion models to determine the spatial heterogeneity of air pollution and ecological deposition. As part of the analysis, we provided quantitative definitions for air-quality-regulating servicesheds to understand the location of ecosystems that provide services to the manufacturing site. These definitions were applied to a case study biodiesel plant along the Ohio River near Cincinnati. Further, we modeled additional scenarios, such as land management and restoration, technological process changes, and manufacturing site location. Using this information, we were able to determine optimal sets of land where management and restoration have the highest potential to increase the deposition capacity for criteria air pollutants. Further, we were able to use the results of these scenarios to compare techno-ecological design options based on sustainability and financial feasibility. As a step towards including multiple ecosystem services in the sustainability analysis of the scenarios, initial results of including other services will also be discussed. Providing quantitative definitions of servicesheds and exploring spatially-explicit design options for techno-ecological systems enables smarter industrial site design and is one step closer towards bridging ecological knowledge with engineering practice.



4:20pm - 4:40pm

Analysis of Urban Metabolism Models from an Ecological Perspective

Zackery B Morris1, Marc Weissburg2, Bert Bras1

1School of Mechanical Engineering, Georgia Institute of Technology, United States of America; 2School of Biological Sciences, Georgia Institute of Technology, United States of America

Nature is often copied or mimicked in science and engineering for novel solutions and unique analysis, e.g., in biologically inspired design of products. Another example of this is the field of Urban Metabolism, which is based on the idea that material and energy use in human population centers is similar to metabolism in a living organism. Urban Metabolism seeks to characterize what is consumed, how much is consumed, and what is exported (or excreted) within a specified geographic area. Tools such as material flow analysis and input-output analysis are used to track flows into, within, and out of a city. These flows are often a measured material flow such as water or nutrients but can also be energy or mass of coal equivalent.

While this is a unique way of looking at cities, viewing them as a biological process, it has been criticized for treating cities as an organism when their function is more closely related to a collection of organisms, in other words, an ecosystem. Ecologists have established different ways to analyze ecosystems to gain insight into how they function and develop. One of the most prominent forms of analysis used by ecologists is Ecological Network Analysis (ENA). ENA, rooted in information theory, is performed by creating a network of connections between organisms within the ecosystem and analyzing the structure and/or flow of material or energy between them. This analysis includes a number of ecological metrics that describe the overall health, maturity, and function of individual organisms as well as the ecosystem as a whole. These metrics look at the relationships between organisms and how those affect overall ecosystem performance. ENA is a tool for analyzing networks, and as a result it can be applied to cities, specifically Urban Metabolism models, to understand these systems through the lens of ecology.
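As a small illustration of the kind of structural metric this type of network analysis produces, the sketch below computes the number of links and connectance of a toy flow network. Both the network and the choice of metrics are illustrative; published ENA studies use a much richer metric set.

```python
# Toy illustration of two structural network metrics: number of realized links and
# connectance (realized links over possible links). The 4-node flow matrix is invented.

import numpy as np

def structural_metrics(T):
    """Return (number of links, connectance) for a flow matrix T[i][j] = flow i -> j."""
    T = np.asarray(T, dtype=float)
    n = T.shape[0]
    links = int(np.count_nonzero(T))
    return links, links / n**2

# Hypothetical 4-node urban flow network (units arbitrary):
T = [[0, 120, 0, 0],
     [0,   0, 80, 10],
     [0,   0,  0, 60],
     [5,   0,  0,  0]]
print(structural_metrics(T))  # (5, 0.3125)
```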

In this study, a number of cities with Urban Metabolism models are analyzed using ENA. The results show that, in terms of the ecological metrics, these cities are lacking in performance when compared to natural ecosystems. The cities have less cycling, lower resiliency, and far fewer connections than the natural ecosystems. If nature is to be used as a benchmark for sustainable design, these results indicate that there is still much room for improvement to reach true eco-cities that mimic the function of ecosystems. Additionally, the cities are compared to one another as well as to other human-designed networks to produce a ranking of ecological performance, and through this it is seen that most human-designed networks are similar. This analysis of flows around urban areas provides a better understanding of the performance of these networks that goes beyond the typical Urban Metabolism model and can be used to further design these systems using nature as the guiding principle.



4:40pm - 5:00pm

Machine learning-based model for estimating carbon losses linked to road expansion in the Peruvian Amazon

Gustavo Larrea-Gallegos, Ian Vázquez-Rowe

Pontificia Universidad Católica del Perú, Peru

Extraction of raw materials in the Amazon has increased in recent years, with cattle ranching, palm oil, urban sprawl, and informal mining generating significant deforestation rates. For these raw materials to be extracted efficiently, road expansion is needed. Recent studies, in fact, estimate that by 2050 there will be at least 60% more roads than in 2010, most of which will be constructed in tropical areas of the globe. In this context, the main objective of this study is to increase the understanding of the deforestation patterns linked to road planning and expansion in the Peruvian Amazon. For this, four machine learning techniques (e.g., random forest, neural networks) were implemented in order to propose and evaluate deforestation models adapted to regional characteristics, using variables associated with road and physical parameters. A large-scale analysis was performed for the whole Peruvian Amazon territory using cloud-based tools (e.g., Google Earth Engine). A one-hundred-meter-per-pixel resolution map has been generated containing the probability of deforestation and the amounts of released carbon. This map is publicly available for visualization and use. Carbon emissions for future scenarios can be estimated by combining georeferenced carbon density data and predicted deforestation. It is expected that these results will allow stakeholders, namely policy-makers, to quantify the environmental impacts of existing and future road expansion plans in the still sparsely populated Peruvian Amazon.
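The modelling chain described above (fit a classifier on road and physical covariates, then combine the predicted deforestation probability with a carbon density layer) can be sketched as follows. The covariates, synthetic data, and carbon values are placeholders, not the study's dataset or trained model.

```python
# Illustrative sketch: predict pixel-level deforestation probability from road and
# physical covariates, then multiply by a georeferenced carbon density value to
# estimate carbon at risk. All data below are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 50, n),     # distance to nearest road (km)
    rng.uniform(0, 2000, n),   # elevation (m)
    rng.uniform(0, 30, n),     # slope (degrees)
])
# Synthetic label: pixels closer to roads are more likely to be deforested.
y = (rng.random(n) < np.clip(0.8 - X[:, 0] / 50, 0.05, 0.95)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

new_pixels = np.array([[1.0, 300.0, 5.0], [40.0, 800.0, 20.0]])
p_deforest = model.predict_proba(new_pixels)[:, 1]
carbon_density_t_per_ha = np.array([110.0, 140.0])            # placeholder carbon map values
expected_carbon_loss = p_deforest * carbon_density_t_per_ha   # tonnes C per ha at risk
print(p_deforest.round(2), expected_carbon_loss.round(1))
```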

Keywords: deforestation; machine learning; neural networks; Peru; random forest



5:00pm - 5:20pm

Predicting spatially explicit life cycle environmental impacts of crops under future climate scenarios with machine learning approaches

Xiaobo Xue Romeiko

State University of New York at Albany, United States of America

Agricultural production, as a primary stage for providing essential food and fuel supplies, is associated with a range of environmental challenges spanning from burgeoning greenhouse gas (GHG) emissions to water pollution. Agriculture currently contributes approximately 20-25% of life cycle GHG emissions in the United States (US) and globally. Nitrogen and phosphorus from agricultural production are among the leading causes of water pollution. Without immediate and effective mitigation efforts, climate change, accompanied by the continuously increasing demand for food and fuel, will further accelerate environmental degradation. Efficiently quantifying environmental releases from agriculture is urgently required to ensure long-term sustainability.

However, our understanding of spatially explicit life cycle environmental impacts from crop production under future climate scenarios is very limited to date. Currently, emission factors and process-based mechanistic models are popular approaches for estimating agricultural life cycle impacts. Though valuable, emission factors are incapable of describing the spatial heterogeneity of agricultural emissions, whereas process-based mechanistic models, which capture that heterogeneity, tend to be very complicated and time-consuming to apply. To address this methodological challenge, this study develops rapid predictive approaches for estimating future life cycle environmental impacts from agricultural production, utilizing novel machine learning techniques.

To build rapid predictive models with the best accuracy, we first tested three cutting-edge machine learning techniques, including Boosted Regression Tree (BRT), Random Forest (RF), and Deep Neural Network (DNN), based on soil, climate, farming practice, and topographic information along with historical estimates of life cycle environmental impacts of crop production in Midwest counties. Then, using the best-fitting model, we estimated future life cycle environmental impacts under four representative scenarios identified by the Intergovernmental Panel on Climate Change (IPCC): Representative Concentration Pathway (RCP) scenarios 2.6, 4.5, 6.0, and 8.5.
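A hedged sketch of this model-selection step appears below: candidate regressors are compared by cross-validated R^2 and the best is retained for scenario prediction. The synthetic features and targets stand in for the soil, climate, and management inputs, and scikit-learn's gradient boosting and multilayer perceptron are used here as stand-ins for the BRT and DNN implementations actually employed.

```python
# Hedged sketch of model selection by cross-validation. Features and targets are
# synthetic stand-ins, not the county-level data used in the study.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                      # e.g., soil, climate, fertilizer rates
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5, 0.3]) + rng.normal(scale=0.5, size=500)

models = {
    "BRT": GradientBoostingRegressor(random_state=0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```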

The preliminary results suggested that machine learning models such as BRT, RF, and DNN yielded high predictive performance and cross-validated accuracy, with DNN presenting the highest cross-validation accuracy of 0.93. The life cycle GHG emissions and nutrient releases of crop production exhibited significant variability across Midwest counties in the US. The life cycle global warming and eutrophication impacts of crops (such as corn and soybean) under the RCP 8.5 scenario were significantly higher than under the RCP 4.5 scenario for the years 2020-2100. Overall, this study provides the first machine learning models for rapidly predicting life cycle environmental impacts of agricultural production at the county scale. Compared with traditional approaches, our machine learning models greatly improve computational efficiency (at least 1000 times faster than process-based approaches) and capture the spatial heterogeneity of life cycle environmental impacts.

 
7:30pm - 8:30pm | The Future of ISSST
Session Chair: Thomas Seager
Session Chair: Daniel Posen

All are welcome as we discuss the future of this conference and how we will go forward in 2020.

Ross Island/Morrison 
Date: Wednesday, 26/Jun/2019
8:00am - 9:30am | OC-1: Gaps and data integration in open communications
Session Chair: Brandon Kuczenski
Ross Island/Morrison 
 
8:00am - 8:20am

Developing Publicly Available LCA Guidance, Data, and Tools for Environmental Understanding of Emerging CO2 Utilization Research

Timothy John Skone1, Michele Mutchek2, Gregory Cooney2

1U.S. DOE, NETL, United States of America; 2Contractor to U.S. DOE, NETL, United States of America

Capturing carbon dioxide (CO2) emissions from power and industrial sources and using that CO2 to make useful products is an emerging area of research that will benefit from a consistent and unbiased framework like life cycle assessment/analysis (LCA) to understand the environmental impacts and net life cycle GHG reductions compared to the current alternatives in the marketplace. From a methodological perspective, CO2 utilization systems are complex due to the intrinsic links established between the power sector and the utilization sector (e.g., biofuels, cement, chemicals, etc.).

Technology developers and LCA analysts could benefit from guidance that establishes best practices for CO2 utilization LCA. For example, it is not uncommon to see CO2 utilization LCAs that focus mainly on the utilization technology and apply a simplified approach to the upstream CO2 source. We would argue that a robust treatment of the upstream CO2 source is imperative in any CO2 utilization LCA, because the source of the CO2 and the use of the CO2 are closely linked in determining the overall environmental impact.

Conducting LCA guidance work early is important in the development of these emerging technologies, because it allows time to implement change while technologies are still nascent. Additionally, the U.S. Department of Energy (DOE) and the federal government are increasingly requiring LCA as part of funding for primary research and tax incentives like “45Q” for CO2 capture, utilization, and storage projects (H.R. 1892, 2018).

In the interest of supporting the creation of useful LCAs of CO2 utilization projects, the DOE is developing guidance, data, and tools for CO2 utilization LCA. Working with actual CO2 utilization projects funded under Funding Opportunity Announcements (FOAs), the DOE is providing specific guidance on methodological issues and on choosing a comparison system. The DOE is also providing upstream and downstream data relevant to CO2 utilization projects. The guidance, data, and tools will be publicly available and free.



8:20am - 8:40am

Insights from the Database Integration Workshop: Building the Data Capacity for Food-Energy-Water Research

Yuan Yao1, Runze Huang2, Richard Venditti1, Kai Lan1, Zhenzhen Zhang3

1Department of Forest Biomaterials, North Carolina State University, United States of America; 2ExLattice, Inc. Raleigh, NC, United States of America; 3Department of Forestry and Environmental Resources, North Carolina State University, United States of America

Advancing the knowledge of Food, Energy, and Water (FEW) system interactions and identifying critical challenges that could be addressed by simultaneous management of the three systems require massive datasets. Government agencies, research communities, and industries have made intensive efforts to collect and generate datasets to meet the data needs of diverse stakeholders. However, those data sources are usually scattered and highly heterogeneous, making them difficult to use for data synthesis and integration in research and decision making in interdisciplinary areas such as FEW. It is critical to provide easily accessible, knowledge-sharing data management platforms or frameworks to support system-level analysis, decision-making, and stakeholder collaborations for a better understanding and improvement of FEW systems.

Funded by the U.S. Department of Agriculture (USDA), a FEW workshop focusing on database integration and capacity building was hosted at North Carolina State University on September 11, 2018. The workshop gathered participants from U.S. government agencies (i.e., USDA, Environmental Protection Agency, U.S. Department of Energy, and U.S. Forest Service), the International Energy Agency, five U.S. national labs, universities, and research institutes. The workshop was organized around three key questions:

• What are the frontiers of data from both public and private sources related to FEW systems?

• How can we leverage and integrate existing databases for new insights?

• Who should be involved and how can we encourage data generating, sharing, and engagement from a broad range of stakeholders in government, academia, and industry?

The presentation will discuss the insights learned from the workshop. To better understand the current data capacity, workshop participants generated a comprehensive list of existing databases and data resources related to FEW systems. Although diverse data resources are available, there are large data gaps and challenges in supporting current and future interdisciplinary FEW research, such as overlapping databases with inconsistencies, a lack of high-resolution data, low data discoverability, accessibility, and usability, and the varied data needs of inter-, multi-, and trans-disciplinary researchers. The presentation will discuss the vision of future database integration and data sharing proposed in the workshop. Challenges and barriers to integrating, sharing, and synthesizing diverse databases were identified and ranked by the workshop participants. We will present the results and discuss action plans with short-term and long-term goals to address the top challenges, especially those related to infrastructure, mechanisms, and policy to promote data sharing across stakeholders such as government, academia, the private sector, and the public.



8:40am - 9:00am

Identifying data gaps in the energy supply chains of manufacturing sectors with an input-output LCA model

Xiaoju (Julie) Chen1, H. Scott Matthews1, Rebecca Hanes2, Alberta Carpenter2

1Carnegie Mellon University, United States of America; 2The National Renewable Energy Laboratory, United States of America

U.S. manufacturing sectors’ fuel intensity decreased by more than 4% from 2010 to 2014. The decrease was possibly due to energy input switches and the incorporation of new technologies. Understanding the manufacturing energy consumption associated with these changes is important to further improve efficiency and sustainability in manufacturing industries. To better interpret the influence of these changes on manufacturing energy consumption, the National Renewable Energy Laboratory (NREL) recently developed the Materials Flows through Industry (MFI) tool, which analyzes energy consumption across the supply chains of U.S. manufacturing industries under different energy and technology scenarios. Due to limitations in the coverage of its data sources, the MFI tool may have incomplete energy consumption data in some industries’ supply chains. These data gaps affect the accuracy of the results provided by the tool. To overcome this issue, this study, a collaboration between Carnegie Mellon University and NREL, aims to identify data gaps in the MFI tool using the information in input-output life cycle assessment (IO-LCA) models. First, an IO-LCA model was created to estimate the total and sectoral energy consumption in each U.S. manufacturing industry’s entire supply chain. The IO-LCA model was the 2007 economic input-output LCA model, generated from data provided by the U.S. Department of Commerce Bureau of Economic Analysis and the U.S. Environmental Protection Agency (USEEIO). Then, for each industry, the estimates from the IO-LCA model were compared with the inventory data in the MFI tool. As the functional unit in the IO-LCA model was the U.S. dollar, different from the units used in the MFI tool, the comparison was based on the ratio of process energy to supply chain energy consumption for each industry. Based on the comparison, potential data gaps in the MFI tool were identified in many processes, such as gravel, sand, and iron ore. The results also indicated that, based on the significance of their data gaps, priority should be given to certain processes when new information becomes available. Based on the priority level, five scenarios were defined to provide guidance for data updates. Scenario 1 processes in the MFI tool should be given priority in terms of data updates, and scenario 5 processes were industries that did not have data gaps in the MFI tool. Examples of scenario 1 processes include gravel and sand processes, which should be given priority when updating their inventory data. Most of the plastic products were categorized as scenario 4 processes, which should not be prioritized compared with other processes in the MFI tool. Processes that fell in scenario 5 were fuel processes, such as crude oil and diesel. The results of this study can help LCA practitioners optimize activities to improve LCA models and assist data providers in prioritizing efforts to complete inventory data. The methodology provided in this study is an example of how to use top-down LCA models (IO-LCA models) to assist data updates and data collection in bottom-up LCA models (such as the MFI tool).
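For readers unfamiliar with the IO-LCA side of the comparison, the sketch below shows how total supply-chain energy intensity follows from the Leontief inverse and how the process-to-supply-chain energy ratio is then formed. The three-sector technology matrix, energy vector, and sector labels are toy values, not USEEIO or MFI data.

```python
# Minimal sketch of the IO-LCA calculation: total supply-chain energy intensity per
# dollar of final demand is e (I - A)^(-1); the ratio of direct "process" energy to
# total supply-chain energy is then compared sector by sector. Toy values only.

import numpy as np

A = np.array([[0.10, 0.20, 0.05],     # inter-industry requirements ($ input per $ output)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.02]])
e_direct = np.array([2.0, 8.0, 1.5])   # direct (process) energy use, MJ per $ of output

L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse
e_total = e_direct @ L                 # total energy intensity, MJ per $ of final demand
ratio = e_direct / e_total             # process share of supply-chain energy

for i, sector in enumerate(["sector 1", "sector 2", "sector 3"]):
    print(f"{sector}: direct {e_direct[i]:.2f}, total {e_total[i]:.2f} MJ/$, "
          f"process share {ratio[i]:.2f}")
```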



9:00am - 9:20am

Sensitivity to weighting in Life Cycle Impact Assessment (LCIA)

Valentina Prado1, Marco Cinelli2, Sterre F ter Haar3, Dwarakanath Ravikumar4, Reinout Heijungs5, Jeroen Guinée3, Thomas Seager6

1Earthshift Global, LLC, United States of America; 2Institute of Computing Science, Poznań University of Technology; 3Institute of Environmental Sciences (CML), Leiden University; 4School for Environment and Sustainability, University of Michigan; 5Department of Econometrics and Operations Research, Vrije Universiteit Amsterdam; 6Sustainable Engineering and the Built Environment, Arizona State University

Weighting in LCA incorporates stakeholder preferences into the decision-making process of comparative LCAs, and this study evaluates the relationship between normalization and weights and their effect on single scores. We evaluate the sensitivity of aggregation methods to weights in different LCIA methods to provide insight into the responsiveness of single-score results to value systems.

Sensitivity to weights in two LCIA methods is assessed by exploring weight spaces stochastically and evaluating the rank of alternatives via the Rank Acceptability Index (RAI). We assess two aggregation methods, a weighted sum based on externally normalized scores and a method of internal normalization based on outranking, across two midpoint impact assessment methods.

The study finds that the influence of weights on single scores depends on the scaling/normalization step more than on the values of the weights themselves. In each LCIA method, aggregated results from a weighted sum with external normalization references show higher weight insensitivity in the RAI than outranking-based aggregation, because in the former the results are driven by a few dominant impact categories due to the normalization procedure.

Contrary to the belief that the choice of weights is decisive in the aggregation of LCIA results, this case study shows that the normalization step has the greatest influence on the results. Practitioners aiming to include stakeholder values in single scores for LCIA should be aware of how the weights are treated in the aggregation method so as to ensure proper representation of values.
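A minimal sketch of the stochastic weight exploration and Rank Acceptability Index tabulation is given below. The three alternatives, their normalized scores, and the uniform (Dirichlet) sampling of the weight simplex are illustrative assumptions, not the study's LCIA data or exact sampling scheme.

```python
# Sketch of stochastic weight exploration: sample weight vectors uniformly from the
# simplex, score each alternative with a weighted sum of already-normalized impact
# scores, and tally how often each alternative attains each rank (the RAI).

import numpy as np

rng = np.random.default_rng(42)
# Rows: alternatives A, B, C; columns: normalized midpoint impact scores (lower = better).
scores = np.array([[0.20, 0.90, 0.40],
                   [0.50, 0.30, 0.60],
                   [0.45, 0.50, 0.30]])

n_samples, n_alts = 10_000, scores.shape[0]
rai = np.zeros((n_alts, n_alts))                   # rai[i, r] = P(alternative i has rank r)
for _ in range(n_samples):
    w = rng.dirichlet(np.ones(scores.shape[1]))    # uniform draw from the weight simplex
    ranking = np.argsort(scores @ w)               # rank 0 = best (lowest) single score
    for rank, alt in enumerate(ranking):
        rai[alt, rank] += 1
rai /= n_samples

for i, name in enumerate("ABC"):
    print(name, rai[i].round(2))
```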

 
10:30am - 12:00pm | OC-2: Thematic Keynote: Advancing Societal Sustainability with Open Data
Session Chair: Brandon Kuczenski
Ross Island/Morrison 
 
10:30am - 11:00am

Advancing Sustainability Science: A Political-Industrial Ecology Perspective

Jennifer Baka1, Joshua Cousins2

1Penn State, United States of America; 2SUNY-ESF, United States of America

This paper evaluates how the emerging field of political-industrial ecology (PIE) can advance sustainability science, particularly in terms of methodological and theoretical innovations. Less than a decade old, PIE integrates theories and methods from political and industrial ecology to evaluate how biophysical and political systems are entwined in shaping nature-society relations and processes. Normative in its approach, PIE seeks to better embed societal metabolisms within their broader historic, ecological, and political economic context in pursuit of reducing the environmental impact of industrial ecosystems and resource flows. In the paper, we first outline the theoretical and historical origins of the field before synthesizing the findings of eight PIE case studies which have helped to catalyze the field. We conclude with a discussion on the future prospects of PIE, specifically the methodological and data challenges/opportunities of the field.



11:00am - 11:30am

Building Energy Data Transparency -- the travails of making data accessible

Stephanie Pincetl

UCLA, United States of America

Building energy use is an important contributor to greenhouse gas emissions and is also strongly tied to thermal comfort. Understanding building energy use at a granular level provides many important insights that are unobtainable without that data. The UCLA Energy Atlas is built on address level billing data, aggregated for customer privacy on the public facing interactive website, but shows building energy use by neighborhood, city and Council of Government. Data is matched to attributes including vintage, square footage, industrial classification code, income, and more. It enables local governments and researchers to discover energy use patterns, target energy efficiency incentives, evaluate programs, and understand equity differences across regions and among residents and industries. Such data is indispensable for the energy transition. This talk explains the process of developing the Energy Atlas and some of the implications for sustainable systems and technology.



11:30am - 12:00pm

A General Data Model for Industrial Ecology and its Implementation in a data commons Prototype

Stefan Pauliuk

University of Freiburg

To this day, data in industrial ecology are commonly seen as existing within the domain of particular methods or models, such as input-output, life cycle assessment, urban metabolism, or material flow analysis data.

This artificial division of data into methods contradicts the common phenomena described by those data: the objects and processes in the industrial system, or socioeconomic metabolism. A consequence of this scattered organization of related data across methods is that IE researchers and consultants spend too much time searching for and reformatting data from diverse and incoherent sources, time that could instead be invested in quality control and analysis of model results. This talk outlines a solution to two major barriers to data exchange within industrial ecology: i) the lack of a generic structure for industrial ecology data and ii) the lack of a bespoke platform to exchange industrial ecology datasets.

We present a general data model for socioeconomic metabolism that can be used to structure all data that can be located in the industrial system, including process descriptions, product descriptions, stocks, flows, and coefficients of all kinds. We describe a relational database built on the general data model and a user interface to it, both of which are open source and can be implemented by individual researchers, groups, institutions, or the entire community. In the latter case, one could speak of an industrial ecology data commons (IEDC), and we unveil an IEDC prototype containing a diverse set of datasets from the literature.
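To make the idea of a generic relational structure concrete, the sketch below defines minimal process, object, and flow tables and stores one dated flow between two processes. The table and column names are an illustration in the spirit of the talk, not the published IEDC schema.

```python
# Minimal sketch of a relational structure for socioeconomic-metabolism data:
# processes, objects (products/materials), and dated flows between processes.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE process (id INTEGER PRIMARY KEY, name TEXT, location TEXT);
CREATE TABLE object  (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
CREATE TABLE flow (
    id INTEGER PRIMARY KEY,
    origin_process      INTEGER REFERENCES process(id),
    destination_process INTEGER REFERENCES process(id),
    object              INTEGER REFERENCES object(id),
    year                INTEGER,
    value               REAL
);
""")
conn.execute("INSERT INTO process VALUES (1, 'steelmaking', 'DE'), (2, 'construction', 'DE')")
conn.execute("INSERT INTO object VALUES (1, 'crude steel', 'Mt')")
conn.execute("INSERT INTO flow VALUES (1, 1, 2, 1, 2015, 12.3)")

total = conn.execute(
    "SELECT SUM(value) FROM flow WHERE object = 1 AND year = 2015").fetchone()[0]
print(total)  # 12.3
```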

 
2:00pm - 3:30pm | B&I-1: Resilience
Session Chair: Jeremy Gregory
Ross Island/Morrison 
 
2:00pm - 2:20pm

Hurricane Resilience: An approach for community-informed building-scale assessments

Ipek Bensu Manav, Jeremy Gregory, Randolph Kirchain

MIT, United States of America

Building-scale resilience assessments are generally carried out by convolving hazard curves with fragility curves. Hazard curves describe the probability of each excitation level (wind speed in the case of hurricanes), and fragility curves describe the probability of physical damage at each given excitation level. Our group at the MIT Concrete Sustainability Hub is working on updating hazard and fragility curves to include community characteristics, such as the orientation and mitigation of the building stock. Hazard curves are modified to include “texture”, the orientation of buildings relative to each other, which can act to amplify wind risk. Fragility curves are dictated by the design of structural and nonstructural elements. Design iterations are modeled and simulated using molecular dynamics to produce detailed fragility curves representing incremental levels of mitigation. Finally, characterizing the building stock enables aggregation of community-scale loss. The extent of this aggregated loss influences demand surge, the increase in repair time and unit repair costs in larger-scale disaster events when demand for materials and labor outpaces local supply. Our research goal is to understand the mechanisms of demand surge and couple surge with "texture" effects for an estimation of loss amplification in different communities. We find that building-scale repair costs are avoided for higher-standard buildings, especially when located in communities where surrounding buildings are also appropriately mitigated. Our aim is to promote performance-based design while emphasizing the importance of community-scale implementation.
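The hazard-fragility convolution can be illustrated with a simple discrete sum, as below. The wind-speed bins, probabilities, fragility values, and repair cost are placeholder numbers, not the Hub's calibrated hazard or fragility curves.

```python
# Simple numerical sketch of the convolution described above: combine a hazard curve
# (annual probability of each wind-speed bin) with a fragility curve (probability of
# damage given wind speed) and a repair cost to obtain expected annual loss.

wind_speeds_mph = [80, 100, 120, 140, 160]                   # wind-speed bins for the rows below
annual_probability = [0.050, 0.020, 0.008, 0.003, 0.001]     # hazard curve (per bin)
p_damage_typical = [0.02, 0.10, 0.35, 0.70, 0.95]            # fragility, typical construction
p_damage_mitigated = [0.00, 0.02, 0.10, 0.30, 0.60]          # fragility, mitigated construction
repair_cost = 250_000                                         # full repair cost, USD

def expected_annual_loss(p_hazard, p_damage, cost):
    """Sum over wind-speed bins of P(bin) * P(damage | bin) * repair cost."""
    return sum(ph * pd * cost for ph, pd in zip(p_hazard, p_damage))

print(expected_annual_loss(annual_probability, p_damage_typical, repair_cost))    # 2212.5
print(expected_annual_loss(annual_probability, p_damage_mitigated, repair_cost))  # 675.0
```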



2:20pm - 2:40pm

Metrics of Community Resilience: Estimating the social burden of attaining critical services following major power disruptions

Sara Peterson1, Susan Spierre Clark1, Robert Jeffers2, Michael Shelly1

1University at Buffalo; 2Sandia National Laboratories

Electric power is critical to almost every aspect of American life, powering everything from healthcare systems to transportation to telecommunications. These and other infrastructure systems vitally depend on a functional power grid; consequently, the federal government has deemed the energy sector “uniquely important” to the overall resilience of infrastructure systems. If the value of the nation’s infrastructure systems is understood to be derived from their ability to “provide the essential services that underpin American society” (PPD-21, 2013), then infrastructure resilience—defined as “the ability to prepare for and adapt to changing conditions and withstand and recover rapidly from disruptions” (PPD-21, 2013)—can be understood similarly. From this perspective, the value of infrastructure resilience (notably, grid resilience) can be assessed by the degree to which it serves to bolster community resilience.

Despite the practical connections between grid and community resilience, there is a disconnect between efforts to plan for these two objectives. Efforts to plan for grid resilience are often aligned with a utility perspective of energy, which focuses on the supply of energy and considers energy resilience in terms of the frequency and duration of a power outage. In contrast, efforts to plan for community resilience often focus on the human outcomes of energy supply, considering energy resilience through health and wellbeing indicators such as access to food, water, sanitation, and healthcare (The Rockefeller Foundation & Arup, 2015; Cutter, 2016). The disconnect between these two perspectives prevents efforts to plan and regulate for community-focused grid resilience. The lack of resilience metrics leaves utilities with few incentives to invest in grid resilience (Mukhopadhyay & Hastak, 2016), and fewer still to go beyond kilowatts and kilowatt-hours to evaluate potential investments in terms of how energy supply contributes to human wellbeing in outage events.

This research seeks to bridge the gap between these two perspectives and facilitate efforts to plan and regulate for community-focused grid resilience. Drawing upon theories of human development, namely the human capabilities approach (Nussbaum, 2003; Sen, 2005), this research explicitly draws the link between infrastructure systems and infrastructure services, and the ultimate human benefits they provide (Clark, Seager, & Chester, 2018). The capabilities framework provides a theoretical basis for the key objective of the project: the development and validation of a social burden metric to quantify the strain placed upon members of a community to attain all their infrastructure service needs after a disaster. The capabilities framework highlights three key concepts integral to the development of the social burden metric: need, or the ways in which different demographics require different types and quantities of particular services; ability, the differing resources certain populations have at their disposal and the ways in which these resources might facilitate resource acquisition; and acquisition effort, the difficulty of satisfying service needs, based on service availability and properties of the service location. Building from this theoretical basis, the social burden metric adapts a variant of the travel cost method (TCM) known as the Random Utility Model, an approach long used by environmental economists seeking to quantify the value of recreational services to communities (Heal, 2000). This adapted RUM explicitly reflects the needs of different populations, their abilities, and the level of effort necessary to acquire their service needs in power outages.
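A random-utility formulation of this kind can be sketched as a multinomial logit over service locations, as below; the drop in the log-sum (expected maximum utility) when sites become harder to reach is one way to read added burden. The travel costs and the cost coefficient are illustrative assumptions, not the project's estimated model.

```python
# Sketch of a multinomial-logit random utility model over service locations. The
# log-sum falls as travel costs rise; its drop during an outage is read here as
# added burden. Cost values and the coefficient are illustrative placeholders.

import math

def logit_choice_and_logsum(travel_costs, beta=0.8):
    """Return choice probabilities over locations and the log-sum accessibility term."""
    utilities = [-beta * c for c in travel_costs]
    exp_u = [math.exp(u) for u in utilities]
    denom = sum(exp_u)
    return [e / denom for e in exp_u], math.log(denom)

# Travel costs (hours) to three locations offering a critical service:
probs_normal, logsum_normal = logit_choice_and_logsum([0.2, 0.5, 0.8])   # normal conditions
probs_outage, logsum_outage = logit_choice_and_logsum([1.5, 2.0, 0.8])   # nearer sites degraded
print([round(p, 2) for p in probs_outage])
print("added burden:", round(logsum_normal - logsum_outage, 2))
```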

The employment of the RUM as a means of quantifying the social burden of power outages demonstrates a new application of a long-established method. In addition to this scholarly contribution, the research has the potential to inform planners and policymakers in assessing the human impact of proposed infrastructure investments.



2:40pm - 3:00pm

A resilience engineering approach to integrating human and socio-technical system capacities and processes for national infrastructure resilience

John Egbert Thomas1, Daniel Eisenberg2, Thomas Seager3, Erik Fisher4

1Resilience Engineering Institute, Tempe, AZ; 2Department of Operations Research, Naval Postgraduate School, Monterey, CA; 3School of Sustainable Engineering and the Built Environment, Arizona State University, Tempe, AZ; 4School of The Future of Innovation in Society, Arizona State University, Tempe, AZ

Despite Federal directives calling for an integrated approach to strengthening the resilience of critical infrastructure systems, little is known about the relationship between human behavior and infrastructure resilience. While it is well recognized that human response can either amplify or mitigate catastrophe, the role of human or psychological resilience when infrastructure systems are confronted with surprise remains an oversight in policy documents and resilience research. Existing research treats human resilience and technological resilience as separate capacities that may create stress conditions that act upon one another.

This interdependence between human and technological aspects of resilient infrastructure systems is not yet fully appreciated in infrastructure policy or practice. In particular, guiding Federal policy directives for U.S. infrastructure security and resilience do not explicitly identify human or social behavior as essential components of critical infrastructure system resilience. A prominent example of this is the National Infrastructure Protection Plan of 2013 (NIPP 2013) – a guide to managing national infrastructure risks created by the Department of Homeland Security in response to Presidential Policy Directive 21. Although NIPP 2013 names 16 sectors of critical infrastructure (including “communication” systems), and acknowledges that threat prevention, recovery, and mitigation require close coordination of partnerships between public and private interests, the document fails to consider how human behavior impacts infrastructure resilience. Further, while the NIPP emphasizes that critical infrastructure security and resilience is essential to national well-being, it makes no reference to how infrastructure designers, operators, maintenance workers, or users might contribute to or undermine infrastructure resilience. Thus, a gap remains regarding the study of human attributes that relate to infrastructure and help build resilience to support national goals. Given that human performance is dynamically coupled with infrastructure performance, a comprehensive approach to resilience must consider this coupling.

To address this gap, we review resilience engineering and psychology research to produce four novel outputs that inform an integrated perspective of human and infrastructure resilience not available elsewhere in the literature: (1) a list of resilient system capacities for engineered systems, (2) a list of human psychological resilience capacities for the people embedded in infrastructure systems, (3) a conceptual framework for linking system and human capacities together via four socio-technical processes for resilience: sensing, anticipating, adapting, and learning (SAAL), and (4) a mapping of human and system characteristics using the framework to inform infrastructure resilience policies. Our analysis shows that the human and technical resilience capacities reviewed are interconnected, interrelated, and interdependent when applied to the SAAL framework. While reinforcing the important roles of cognitive and behavioral dimensions, our findings further suggest that the affective dimension of human resilience is effectively ignored in the resilience engineering literature. Together, we present a simple way to link the resilience of technological systems to the cognitive, behavioral, and affective dimensions of the humans responsible for system design, operation, and management.

 
4:00pm - 5:30pm | Open-2: New Inspirations: Biomimicry; Sustainability Education
Session Chair: Thomas Seager
Ross Island/Morrison 
 
4:00pm - 4:20pm

Integrating LCA & Biomimicry – Paper Case Study

Rebe Feraldi

TranSustainable Enterprises, LLC, United States of America

Eco-design tools such as life cycle assessment (LCA) use a systematic approach for quantifying the environmental performance of industrial products and services. The LCA methodology has great potential for holistically identifying areas of a supply chain with relatively poor environmental performance, a.k.a. “hotspots.” However, LCA is poor at inspiring “productively disruptive innovations” (Feraldi 2018). In contrast, nature-inspired design strategies such as Biomimicry are based on learning from deep principles found in nature and “regard nature as the paradigm of sustainability” (de Pauw 2010). These tools offer a radically different approach for developing designs in balance with the natural environment. Biomimicry is often referred to as the “conscious emulation of life’s genius” in order to solve human design and engineering challenges (Benyus 1997). The emulation aspect of the tenets of Biomimicry emphasizes integrating biological knowledge at the form, process, and system levels into design and engineering by identifying biological strategies and mechanisms that have evolved to survive the test of time. This type of approach is inspiring a paradigm shift of sorts in terms of addressing human design challenges but lacks the quantitative rigor of tools such as LCA. This work describes the implementation of a sustainability approach that is an amalgam between LCA and Biomimicry. The quantitative value of LCA helps to make substantive assessments and measurements of hotspots in a product supply chain. With this information, the Biomimicry approach can be applied to open the design space at these hotspots and reconnect our vision of our built environment and its place within the rest of the biosphere. Printing and writing paper product life cycles are highlighted as an example to demonstrate the utility of using this integrated approach. The combined value of these sustainability tools has the potential to revolutionize how industry, analysts, and policymakers address our relationship with the built and natural environment. It is the author’s hope that this integrated approach can help humans raise the “sustainability” bar to not only endeavour to sustain human life but to create systems that, in the words of Biomimicry specialists, “create conditions conducive to [all] life” (Benyus 1997).

REFERENCES:

Benyus J (1997). Biomimicry: Innovation Inspired by Nature. HarperCollins Publishers Inc., New York.

de Pauw I, Kandachar P, Karana E, Peck D, Wever R (2010). Nature inspired design: Strategies towards sustainability. In: Proceedings of Knowledge Collaboration & Learning for Sustainable Innovation: 14th European Roundtable on Sustainable Consumption and Production (ERSCP) and 6th Environmental Management for Sustainable Universities (EMSU) Conference. Accessed September 30, 2018 at: https://repository.tudelft.nl/islandora/object/uuid:98ce3f26-eff8-40f5-82dc-ed92fec7e8f9?collection=research.

Feraldi R (2018). The Zoom Out, Environmental Leverage, Assess & Re-Invent (ZELAR) Approach: Plastic Box Case Study, Article on Sustainability Approach Amalgams for Biomimicry Masters Course on Communicating Biomimicry, January 2018.



4:20pm - 4:40pm

Biologically-Inspired Optimization of a Water Distribution Network

Stephen Malone1, Zackery B Morris1, Bert Bras1, Marc Weissburg2

1School of Mechanical Engineering, Georgia Institute of Technology, United States of America; 2School of Biological Sciences, Georgia Institute of Technology, United States of America

Mathematical modeling and optimization are established techniques that have proven effective as quantitative benchmarking tools at design conception, aiding an engineer’s ability to achieve desired results. However, when designing systems where sustainability is the desired outcome, engineers rely on qualitative performance targets that are subjective in nature, as comprehensive quantitative metrics are presently lacking. In the absence of such measures, engineers have limited capacity at the design phase to predict, monitor, and evaluate the performance of engineered systems with respect to sustainability. However, emerging studies suggest that biology may provide a useful template for establishing quantitative sustainability benchmarks. For example, biologists have applied principles of information theory to develop quantitative metrics derived from the exchange of resources found in ecological communities, describing indicators such as community health and maturity. Industrial ecologists have extended this type of network analysis by using ecological metrics to achieve a different perspective that can relate the configuration of already constructed engineered systems to material and energy cycling. The components of these engineered systems represent a network of consumers and producers (i.e., species), and the efficacy of the structural organization and flow of materials is determined by employing the ecological metrics and comparing the results to those found in natural communities.

This study extends the work of biologists and industrial ecologists by incorporating ecological metrics at conception in the design of engineered systems, demonstrated through a case study. The case study involves two optimization models of a water distribution network, both with the overall goal of cost minimization. The first model uses a traditional cost-based approach, summing the flow rates of water with infrastructure, pumping, and treatment costs. The second model uses these same initial cost calculations while adding a penalty parameter based on a cycling-based ecological metric, the Finn Cycling Index. The Finn Cycling Index quantifies the proportion of material cycled through flow in an ecological community, prompting its use by scientists as an indicator of community health and maturity. The penalty parameter is bounded by the range of Finn Cycling Index values found in mature ecological communities. Mature communities are those that have evolved over millennia into sustainable and robust networks of species that balance resource efficiency and redundancy. By contrasting the results of a traditional cost-based optimization model with the results of the same model with a metric-driven penalty, one may ascertain the influence of the ecological metrics on the optimization results. The results from this study demonstrate that the optimization model with the metric-driven penalty produces greater amounts of cycling within the network at a level of cost similar to the traditional cost-based model. This validates the use of ecological metrics as a quantitative benchmark in sustainable engineering design, one whose outcomes are complementary to those of traditional cost-based optimization models. This bio-inspired optimization approach demonstrates the potential of using the properties of natural systems to guide efficient and robust engineering design at conception, a tool presently lacking in the sustainable design of engineered systems.
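A minimal Python sketch of the two ingredients described above follows, using the standard ecological network analysis definition of the Finn Cycling Index (cycled throughflow divided by total system throughflow) together with an illustrative penalty term. The flows, bounds, and penalty weight are hypothetical and are not the study’s model.

# Sketch: Finn Cycling Index (FCI) of a flow network plus an FCI-based cost penalty.
import numpy as np

def finn_cycling_index(flows, inputs):
    """flows[i, j]: flow from node i to node j; inputs[i]: external inflow to node i."""
    throughflow = inputs + flows.sum(axis=0)          # T_j, input-oriented throughflow
    G = flows / throughflow[np.newaxis, :]            # g_ij = f_ij / T_j
    N = np.linalg.inv(np.eye(len(inputs)) - G)        # integral flow matrix
    cycled = ((np.diag(N) - 1.0) / np.diag(N)) * throughflow
    return cycled.sum() / throughflow.sum()

def penalized_cost(base_cost, fci, fci_range=(0.10, 0.30), weight=1e5):
    """Penalize designs whose FCI falls outside an assumed 'mature community' range.
    The bounds and weight here are illustrative, not the paper's values."""
    lo, hi = fci_range
    violation = max(0.0, lo - fci) + max(0.0, fci - hi)
    return base_cost + weight * violation

# Toy 3-node water network: 0 = source/treatment, 1 = users, 2 = reclamation.
flows = np.array([[0.0, 10.0, 0.0],
                  [0.0,  0.0, 4.0],
                  [3.0,  0.0, 0.0]])   # reclaimed water returned upstream
inputs = np.array([7.0, 0.0, 1.0])
fci = finn_cycling_index(flows, inputs)
print(f"FCI = {fci:.3f}, penalized cost = {penalized_cost(1.0e6, fci):,.0f}")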



4:40pm - 5:00pm

The Origins, Evolution, and Current Crises in Industrial Ecology

Thomas Seager

Arizona State University, United States of America

As a new science, industrial ecology was founded on a biomimicry hypothesis -- to wit, that the holistic environmental impact of technological systems could be reduced if they were organized to be more like ecological systems: increasingly interconnected (e.g., through the reuse of waste materials) and driven by abundant renewable energy. Two decades of analytic tool development, including environmental life cycle assessment (LCA) and materials flow analysis (MFA), have codified just one aspect of the natural analog, which has come to dominate the current paradigm of industrial ecology. Nonetheless, evidence that this paradigm is inadequate to the challenges of the post-industrial economy is now increasing. As the economy in developed countries evolves to incorporate the rapid growth of digital technologies, a resilience perspective on the natural analog has emerged as critically important. This presentation will identify the origins of the eco-efficiency perspective that currently dominates the natural analog, describe the existential dangers of this approach, and outline a complementary perspective on biomimicry that emphasizes adaptive capacity.



5:00pm - 5:20pm

Sustainability Education and Entrepreneurship (SEE): A new-knowledge process in the face of complexity and accelerating change for children in grades preK-6

Don Sweet1, Thomas P Seager2, Janet A McDonald3

1Sustainable Intelligence LLC, United States of America; 2Arizona State University; 3Rochester Roots, Inc.

The increasingly complex and rapidly evolving interdependencies of sociological, ecological and technological systems that characterize post-industrial societies pose tremendous challenges to current educational institutions. While the current educational paradigm emphasizes the efficient transfer of knowledge from experts to pupils, the wicked problems of the current age require constant transformation of knowledge – including knowledge co-creation, innovation, and embodiment in action. Each of these knowledge processes is characteristic of entrepreneurship, but they are rarely incorporated into elementary school education. Nevertheless, nurturing children’s innate capacities for knowledge transformation may prepare them for the life-long learning and adaptations necessary to build sustainable organizations and social institutions that contribute to human well-being in the face of accelerating change. Critical to this new educational paradigm, but largely absent from urban student populations, is an awareness of complex, living systems. Whereas prior generations may have engaged directly with such systems in agricultural settings, the experience of today's young children living in urban settings is largely sociological and technological, rather than ecological.

This paper describes a garden-based curriculum and Sustainability Education and Entrepreneurship (SEE) program, developed by a not-for-profit and a for-profit, Rochester Roots and Sustainable Intelligence, and delivered at public Montessori and traditional elementary schools in Rochester and Greece, NY. Children receive instruction in systems modeling for exploring the interconnectivity of the sociological, ecological, and technological systems dimensions of sustainability, participate in twenty-six interrelated Sustainability Laboratories, design products and manufacture prototypes, enlist the support of university faculty, students, businesspersons, and subject matter experts, and participate in a culminating symposium. The school garden serves both as a metaphor for knowledge transformation in sustainability and as a source of raw materials for many of the product and business concepts developed. Now in its ninth year, the program engages over 850 students in two schools in its different aspects.

The next phase of SEE is to support children going out into the community as ChangeMakers who share knowledge-transformation processes and entrepreneurship that add value to life by improving well-becoming pathways and trajectories. This is a transition that their experiences have prepared them for, including: 1) the mindset of being young citizens influencing SEE learning community critical thinking and collaborative decision-making, and 2) the leadership and interactional expertise developed through marshaling feedback from peers and adult mentors for their businesses. As citizens, they become catalysts that build the community’s SEE cognitive and cognition infrastructure, which we call Cognitive Resilient Infrastructure, with the capacity to inform three community knowledge-transformation processes for a sustainability milieu: knowledge-creation, adaptive-innovation, and resiliency-building.

 
5:45pm - 6:45pmDefeating ocean plastics… usefully
Session Chair: Caroline Taylor
Ross Island/Morrison 
 

Defeating ocean plastics… usefully

Caroline Taylor1, Julie Sinistore2, Peter Canepa3

1Earthshift Global; 2WSP; 3Oregon DEQ

 
Date: Thursday, 27/Jun/2019
7:30am - 8:30amISSST2020: Planning meeting (open to all)
Ross Island/Morrison 
8:30am - 10:00amSpecial Session: Reimagining LCA
Session Chair: Brandon Kuczenski

Panel discussion

Ross Island/Morrison 
 
8:30am - 10:00am

Special Session: Reimagining LCA

Brandon Kuczenski

University of California, Santa Barbara, United States of America

This panel session will include a brief summary of the persistent challenges faced by life cycle assessment before turning toward creative and boundary-breaking new ideas for escaping or overcoming them. Panel composition and exact topics TBD. Invitees:

Tim Skone, NETL (accepted)

Laurent Vandepaer, Paul Scherrer Institut (accepted)

Alberta Carpenter, NREL (invited)

Caroline Taylor, Earth Shift Global (invited)

Greg Schivley, CMU (invited)

 
10:30am - 12:00pmB&I-3: Building Energy
Session Chair: Ehsan Vahidi
Ross Island/Morrison 
 
10:30am - 10:50am

Assessing the impacts of energy code evolution on energy performance and embodied environmental impacts of multi-family residential buildings in the US

Ehsan Vahidi1, Randolph Kirchain2, Jeremy Gregory1

1Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139; 2Materials Research Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139

The buildings sector represents a significant fraction of greenhouse gas (GHG) emissions and energy consumption, making up 40% of total global energy demand across developed and developing countries. The sector's footprint is even larger when embodied GHG emissions from raw material extraction, building material manufacturing, and on-site construction are included. A life cycle perspective is therefore imperative when studying the environmental impacts of buildings, because any reduction in operational and embodied GHG emissions by the building sector can lead to significant environmental and economic gains.

Improvements to building energy efficiency through more stringent energy codes are often proposed as a mechanism for reducing GHG emissions in the building sector. Building life cycle assessments (LCA) have shown that GHG emissions due to energy consumption represent the largest fraction of the life cycle for conventional building designs; improving energy efficiency is thus a good strategy to lower GHG emissions. However, improving energy efficiency is often associated with more insulation and improved building materials, so the embodied share of the whole life cycle footprint is expected to increase.

The objectives of this analysis are to understand the potential for changes in building energy codes to reduce GHG emissions in the US and the impacts such changes will have on embodied emissions in building materials and construction. We use the case of multi-family residential buildings and conduct analyses for 14 cities representing each of the ASHRAE climate zones. Different design scenarios using both 2009 and 2015 codes developed by the International Energy Conservation Code (IECC) were investigated. Furthermore, to study future buildings that will need to comply with stricter codes, an energy efficient design scenario was also studied. Subsequently, considering the number of new multi-family residential buildings constructed in the United States, the impacts of building codes and standards on buildings’ embodied as well as operational GHG emissions were quantified in each state.

A 100,000 sqft multi-family residential apartment building with mixed one and two-bedroom units was analyzed in different locations over a life span of 50 years. Significant effort was invested to establish and account for a variety of design scenarios, including building envelope requirements for wood and insulated concrete form (ICF) wall systems, roof and slab insulations, and air leakage requirements.

The results show that applying the energy efficient design scenario to newly constructed multi-family buildings across the US can reduce GHG emissions by 17%, saving up to 69 million metric tons of CO2 (eq) nationwide. Our analysis of building code evolution showed that a concrete structure built to 2015 standards in Virginia demonstrates a considerable saving, generating 222 metric tons less CO2 (eq) than the same concrete structure built to 2009 standards in North Carolina. Furthermore, the fraction of embodied impacts in a multi-family structure increases from 9.3% to 14.9% after code evolution from IECC 2009 to the energy efficient design scenario.

Moreover, the majority of a building's energy loads, including lighting, large appliances, and miscellaneous activities, are constant across design scenarios. These loads account for between 56 and 73 percent of a building's overall energy consumption, which means that energy efficiency improvements mostly influence heating and cooling loads. It is therefore more meaningful to report the savings from design decisions relating to the building envelope in heating and cooling terms rather than as overall energy savings.
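A short illustration of this reporting point, with hypothetical numbers rather than the study's results: when fixed loads make up 60% of whole-building consumption, a 25% heating and cooling saving appears as only a 10% whole-building saving.

# Illustrative arithmetic (hypothetical shares and savings).
fixed_share = 0.60                 # lighting, appliances, miscellaneous
hvac_share = 1.0 - fixed_share     # heating and cooling
hvac_saving = 0.25                 # saving attributable to the envelope change
whole_building_saving = hvac_saving * hvac_share
print(f"Heating/cooling saving: {hvac_saving:.0%}")            # -> 25%
print(f"Whole-building saving: {whole_building_saving:.0%}")   # -> 10%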



10:50am - 11:10am

Buildings as batteries: an experimental investigation into energy efficiency impacts of demand response.

Aditya Keskar1, David Anderson2, Jeremiah Johnson1, Ian Hiskens2, Johanna Mathieu2

1North Carolina State University; 2University of Michigan, Ann Arbor

There is an increasing need for flexible resources to maintain reliable power grid operation due to the combined effect of reduced grid inertia and the addition of supply-side stochasticity from renewables. Commercial building heating, ventilation, and air conditioning (HVAC) systems are attractive candidates for load shifting due to their large thermal inertia and inherently sophisticated building controls. Recent work has suggested potential adverse impacts on energy efficiency associated with such demand response activity.

To explore this phenomenon, we conducted over one hundred experiments on three buildings at the University of Michigan and will soon be conducting similar experiments on three buildings at North Carolina State University. We perturb the building temperature setpoints in predefined patterns, causing the building to change its power consumption above and below its baseline power use, thereby acting like a battery from the grid's perspective. This emulation of energy-neutral demand response events is then used to assess the impact of the tests on overall building energy efficiency.

We developed novel metrics to assess building performance and evaluated various demand response signal designs. We present results from the experiments and quantify the efficiency of the building response by focusing on the round-trip efficiency as well as the additional energy consumed by the building while providing this demand response service. The three buildings respond with mean round-trip efficiencies ranging from 34% to 81%, with individual tests yielding efficiencies far outside that range. We also find that the efficiency of the building response depends on the magnitude and polarity of the temperature setpoint changes. Our results are consistent with past experimental results, but inconsistent with past modelling results. At North Carolina State University, we will assess the diversity in building characteristics while evaluating the response of buildings to concurrent demand response events. The change in location will also help us investigate the impact of local climatic conditions on the efficiency of the building response. Our findings offer new and practical insights into the impacts of demand response on building operations and the challenges that need to be overcome to achieve commercial viability.
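One plausible formulation of such a round-trip efficiency metric is sketched below in Python: consumption above the baseline is treated as charging and consumption below it as discharging, by analogy to a battery. The baseline and power series are hypothetical, and the study's exact metric definitions may differ.

# Sketch: round-trip efficiency of an energy-neutral demand response event.
import numpy as np

def round_trip_efficiency(power_kw, baseline_kw, dt_h=0.25):
    """Discharged energy (below baseline) divided by charged energy (above baseline)."""
    deviation = power_kw - baseline_kw
    charged = np.clip(deviation, 0.0, None).sum() * dt_h      # kWh drawn above baseline
    discharged = np.clip(-deviation, 0.0, None).sum() * dt_h  # kWh returned below baseline
    return discharged / charged if charged > 0 else float("nan")

# Toy 2-hour event at 15-minute resolution: setpoint lowered first (pre-cooling,
# extra consumption), then raised (consumption drops below baseline).
baseline = np.full(8, 100.0)
measured = np.array([112.0, 115.0, 110.0, 108.0, 92.0, 90.0, 94.0, 96.0])
print(f"Round-trip efficiency: {round_trip_efficiency(measured, baseline):.0%}")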



11:10am - 11:30am

Do LED lightbulbs save natural gas? Detecting simultaneity bias in examining program impacts

Oluwatobi G. Adekanye, Alex Davis, Inês L. Azevedo

Carnegie Mellon University, United States of America

Energy efficiency has been seen as a cost-effective means of reducing energy consumption in buildings. As a result, different energy efficiency programs have been implemented at the local, state, and federal levels to promote reductions in building energy use. Traditional assessments of these programs use ex-ante engineering analyses to estimate program impacts. However, such analyses are accurate only to the degree that their assumptions are met in the real world, and they will be inaccurate when those assumptions are wrong.

Complementing engineering analyses are data-driven approaches that provide empirical estimates of the energy savings of new technology in specific contexts, such as residential households. With appropriate statistical models, it is possible to empirically determine the impact of these new technologies on energy use. This data-driven approach requires different assumptions about behavior than engineering analyses. One of the most fundamental, and most difficult to test, is the assumption that the adoption of new technology occurs in the absence of other changes to a building's energy profile. Changes that occur at the same time as the introduction of the new technology will lead to biased estimates of the technology's energy savings, a phenomenon called simultaneity bias. While a randomized controlled trial can circumvent simultaneity bias (since households are randomly assigned to treatment and control groups), high costs and infeasibility for some projects make it difficult to implement. Therefore, most ex-post evaluation studies estimate program impacts with other quasi-experimental approaches in which simultaneity bias may be present.

Here, we provide a means of addressing simultaneity bias by examining monthly electricity and gas billing data from approximately 27,000 households in the City of Palo Alto, California from 2010 to 2016. By using simultaneous measurements of both electricity and gas consumption, we examine the effects of electricity saving programs on gas usage and vice versa. LED lightbulbs, for example, cannot physically change gas usage; therefore, finding impacts of LED lightbulb use on gas usage is a first-level indication of simultaneity bias. Using difference-in-differences and event history model approaches, we find varied effects of the different energy efficiency programs, with significant average reductions of 3% to 8% for electricity use and 4% to 8% for gas usage. We also find evidence that behavioral programs are more effective than financial incentive programs, as energy audit programs show the highest reductions in both electricity and gas use. However, we also find evidence of simultaneity bias: for some electricity-only programs, we observe significant simultaneous gas reductions. Even after accounting for short- and long-run effects, we still find evidence of simultaneity bias with an LED lightbulb program.
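A stripped-down sketch of this kind of difference-in-differences check is given below. It omits the household and time fixed effects and the event-history specification of the full analysis, and the data are randomly generated placeholders rather than the Palo Alto billing data: the idea is simply that a significant treated-by-post coefficient for gas use under an electricity-only program (e.g., an LED rebate) would be a first-level indication of simultaneity bias.

# Sketch: difference-in-differences test of an electricity-only program's "effect" on gas use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_households, n_months = 500, 24
df = pd.DataFrame({
    "household": np.repeat(np.arange(n_households), n_months),
    "month": np.tile(np.arange(n_months), n_households),
})
df["treated"] = (df["household"] < n_households // 2).astype(int)  # hypothetical LED adopters
df["post"] = (df["month"] >= 12).astype(int)                       # after program start
# Simulated log gas use with a built-in spurious -5% "LED effect" as a placeholder.
df["log_gas"] = 3.0 - 0.05 * df["treated"] * df["post"] + rng.normal(0, 0.2, len(df))

fit = smf.ols("log_gas ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["household"]})
print(fit.params["treated:post"], fit.pvalues["treated:post"])  # nonzero => possible bias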

Our findings have significant implications for future analysis: ex-post evaluations of program impacts must be designed carefully so that biases such as simultaneity bias are eliminated, enabling accurate detection of program impacts.



11:30am - 11:50am

The Potential for Emissions Reductions with Residential Demand Response

Jeremiah Johnson, Madeline Rose Macmillan

North Carolina State University, United States of America

The primary goal of demand response (DR) is to reduce peak electricity demand. In this study, we examine an alternative goal of using DR to reduce air emissions. For the US, we estimate the diurnal and seasonal demand profiles for suitable residential end uses, including air conditioning, electric heating, and water heating. We assume that the DR events are load-neutral and test a range of tolerances for demand deferral. We develop an emissions minimization model that utilizes hourly marginal emissions factors for 22 grid regions to show significant potential to reduce CO2 emissions through DR approaches. For each region, we calculate heating and cooling coefficients as the ratio of per-household heating or cooling electricity consumption to the cumulative annual heating degree day (HDD) and cooling degree day (CDD) values, respectively. These coefficients are then applied to region-level hourly HDD or CDD values to estimate hourly heating and cooling electricity demand in each region.
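A minimal Python sketch of these two steps follows, with hypothetical loads, degree-day values, and marginal emissions factors rather than the study's 22-region data: hourly heating demand is estimated from an HDD coefficient, and each hour's deferrable load is moved to the lowest-emissions hour within a tolerance window while keeping total load unchanged (load-neutral).

# Sketch: degree-day-based demand estimation and a load-neutral, emissions-minimizing shift.
import numpy as np

# Step 1: hourly heating demand from degree-days (hypothetical values).
annual_heating_kwh_per_hh = 3000.0
annual_hdd = 4000.0
heating_coeff = annual_heating_kwh_per_hh / annual_hdd   # kWh per household per HDD
hourly_hdd = np.array([0.9, 1.1, 1.3, 1.0, 0.6, 0.4, 0.5, 0.8])  # toy 8-hour slice
hourly_heating_kwh = heating_coeff * hourly_hdd

# Step 2: shift each hour's load to the cleanest hour within a +/- 2 hour window.
mef_kg_per_kwh = np.array([0.70, 0.65, 0.60, 0.55, 0.40, 0.45, 0.50, 0.62])
window = 2
shifted = np.zeros_like(hourly_heating_kwh)
for h, load in enumerate(hourly_heating_kwh):
    lo, hi = max(0, h - window), min(len(mef_kg_per_kwh), h + window + 1)
    best = lo + int(np.argmin(mef_kg_per_kwh[lo:hi]))   # cleanest hour in window
    shifted[best] += load                                # total load is preserved

base_emissions = (hourly_heating_kwh * mef_kg_per_kwh).sum()
shifted_emissions = (shifted * mef_kg_per_kwh).sum()
print(f"Emissions reduction: {1 - shifted_emissions / base_emissions:.1%}")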

Our results show that the magnitude of the benefits is limited by the length of the demand deferral and the DR adoption rate. With participation and a high tolerance for load shifting, we estimate up to 15%, 12%, and 13% decreases in CO2 emissions from electric heating, air conditioning, and electric water heating applications, respectively. The potential varies across regions; regions with high residential demand and high variance in marginal emissions factors yield the greatest potential to reduce CO2 emissions. The magnitude of the emissions reduction can differ greatly from the percent of emissions reduced, driven by the population of the region and the size of the electric load. Load deferral shows the greatest potential for electric heating during the winter and for air conditioning during the summer months, while the electric water heating potential is relatively constant throughout the year. This study shows that the magnitude of emissions reduction is sufficiently large and that these reductions can be achieved without significant loss of energy services. In addition to direct emissions reductions, DR initiatives introduce the possibility of alleviating some of the system constraints that lead to solar and wind curtailment, resulting in greater emissions reductions than DR initiatives alone.

 
2:00pm - 6:00pmPostconf Workshop
Session Chair: Brandon Kuczenski
Ross Island/Morrison 
 

Workflow Workshop, a.k.a. "how to get results"

Brandon Kuczenski

University of California, Santa Barbara, United States of America