Abstract
The current handling of data in Earth observation, modelling and prediction gives cause for critical consideration, since we all too often carelessly ignore data uncertainty. We believe that Earth scientists are generally aware of the importance of linking data to quantitative uncertainty measures, but we also think that uncertainty quantification of Earth observation data too often fails at very early stages. We claim that data acquisition without uncertainty quantification is not sustainable, and that machine learning and computational modelling cannot unfold their potential when analysing complex natural systems like the Earth. Current approaches, such as stochastic perturbation of parameters or initial conditions, cannot quantify the uncertainty or bias arising from the choice of model, which limits scientific progress. We need incentives that stimulate the honest treatment of uncertainty, starting during data acquisition and continuing through analysis methodology and prediction results. Computational modellers and machine learning experts have a critical role to play: they enjoy high esteem among stakeholders, and their methodologies and results depend critically on data uncertainty. If both want to advance the uncertainty assessment of their models and predictions of complex systems like the Earth, they have a common problem to solve. Together, computational modellers and machine learners could develop new strategies for bias identification and uncertainty quantification, offering a more all-embracing uncertainty quantification than any known methodology. But since for computational modellers and machine learners alike everything starts with data and their uncertainty, the fundamental first step in such a development would be leveraging stakeholder esteem to insistently advocate for the reduction of ignorance in the uncertainty quantification of data.