Carbon Chaser: Upscaling Global Hourly CO2 Capture with Temporal Fusion Transformer
Accepted Oral Presentation at CVPR 2023 MultiEarth Workshop.
Problem & Motivation
With increasing attention and investment in carbon reduction from both public and private sectors, reliable estimates of carbon flux, including Gross Primary Productivity (GPP), are crucial for assessing the accountability and effectiveness of various climate change initiatives. FLUXNET provides empirical estimates of GPP from flux towers, which can be used for upscaling, i.e., extrapolating GPP estimates from the available local measurements across the globe. However, global estimates of GPP continue to have high uncertainty due to the limited number of flux sites that provide CO2 fluxes and the regional discrepancies among existing sites. Machine learning methods have been applied to GPP upscaling with limited success, and most do not account for the temporal dependencies within the data.
This research aims to improve the performance of global hourly GPP upscaling models through the Temporal Fusion Transformer (TFT) and to contribute to scientific understanding of GPP trends and their contributing factors through TFT's interpretability.
Data Source & Data Science Approach
The target feature of this research is the GPP measurement from 129 sites in FLUXNET, a network of flux tower sites that each collect various ecological measurements and pool them into public datasets for the global research community. These flux tower sites span the globe but are primarily concentrated in the United States and Europe. Though the FLUXNET datasets provide valuable predictor features, only features that are globally available can be used to train upscaling models. Therefore, a wide set of globally available meteorological and remote sensing datasets, each with varying temporal and spatial resolutions, was explored for use in modeling.
The primary goal of this research is to improve model performance by applying time-aware deep learning models, such as the Temporal Fusion Transformer (TFT), and to enable deeper insights into GPP phenomena via the interpretable outputs of attention-based models. The modeling process included setting a benchmark of TFT performance by training a model with past GPP values as predictors, as well as training two upscaling TFT models: one trained without past GPP as input, and a two-stage model that feeds GPP predictions from tree-based models into a final TFT model.
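The two-stage pipeline can be sketched as follows. This is a minimal illustration on synthetic data, not the project's implementation: scikit-learn's GradientBoostingRegressor stands in for the tree-based first stage (the project used XGBoost), and the final TFT stage is omitted; all variable names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for XGBoost
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for globally available predictors
# (e.g., meteorological and remote sensing features) and a GPP-like target.
X = rng.normal(size=(500, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: a tree-based model produces a preliminary GPP estimate
# from globally available predictors only (no past GPP values).
stage1 = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Stage 2: append the stage-1 estimate to the original predictors;
# in the project, this augmented input feeds a final TFT model.
X_train_aug = np.column_stack([X_train, stage1.predict(X_train)])
X_test_aug = np.column_stack([X_test, stage1.predict(X_test)])
print(X_train_aug.shape, X_test_aug.shape)
```

The design point is that the second stage never sees past target values directly; it only receives the first stage's prediction as one extra feature, which is what makes the approach usable for global upscaling where historical GPP is unavailable.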
Modeling & Evaluation
When compared to the benchmark performance metrics established in previous work (Bodesheim 2018), several of our tree-based baseline models and TFT-based upscaling models improved performance on RMSE, MAE, and Nash-Sutcliffe Efficiency (NSE). Though they improved on the benchmark results, the TFT-based models did not generalize well across the globe, and future work could explore training distinct models for each global region or climate type. This project also contributed to the advancement of GPP research by providing a set of model interpretation outputs, which researchers at the UC Berkeley Quantitative Ecosystem Dynamics Lab found highly useful.
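For reference, the three evaluation metrics named above can be computed as shown in this short sketch (the sample values are illustrative, not project data):

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean squared error."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((o - p) ** 2)))

def mae(observed, predicted):
    """Mean absolute error."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(o - p)))

def nse(observed, predicted):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than always predicting the observed mean."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return float(1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2))

obs = [2.0, 4.0, 6.0, 8.0]
pred = [2.5, 3.5, 6.5, 7.5]
print(rmse(obs, pred))  # 0.5
print(mae(obs, pred))   # 0.5
print(nse(obs, pred))   # 0.95
```

Unlike RMSE and MAE, NSE is normalized by the variance of the observations, which makes it comparable across flux sites with very different GPP magnitudes.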
Key Learnings & Impact
This project has been an impactful learning journey as the team studied and applied cutting-edge deep learning methods while diving deep into the niche ecological domain of carbon flux measurement. Within a semester, the team successfully applied TFT in an unconventional way to solve a time-series forecasting problem without past target values, and developed a novel analytical approach based on TFT's interpretability outputs to provide insight into the temporal dynamics of feature importance prior to prediction. These findings not only lay the foundation for future research in upscaling carbon flux, but can also be applied in other academic fields and industries to any upscaling problem or time-series forecasting task where past target values are unavailable.
We are grateful for the guidance and encouragement of our Capstone Advisors, Dr. Alberto Todeschini and Dr. Puya H. Vahabi, of the Master's in Data Science program at the School of Information, University of California, Berkeley. We could not have undertaken this journey without the advice and support of Dr. Yanghui Kang and Dr. Maoya Bassiouni from the Quantitative Ecosystem Dynamics Lab at UC Berkeley & Lawrence Berkeley National Lab. We also wish to thank Alex Carite for his assistance with XGBoost modeling.