MRV/Transparency helpdesk



Comments for each discussion article
Technical resources for implementing the measurement, reporting and verification arrangements under the Convention and the enhanced transparency framework under the Paris Agreement.
Ideally, you should identify the probability density function (PDF) of each individual variable, run simulations based on each of those PDFs, and then combine the simulations to obtain the final distribution of the variable in question. You should only make assumptions about the distribution when you do not have access to the underlying dataset; in that case, you have to make educated assumptions about the data. There is extensive literature on the distribution of tree sizes, so you could research studies focusing on the geographic region and forest type of interest to come up with a reasonable probability density function. Because it is a natural phenomenon, one could assume that the distribution is normal. This, however, is a broad assumption, and it is better to verify it with an actual distribution fit to a PDF using a goodness-of-fit test, or through a review of the scientific literature.
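The workflow described above — assume or fit a PDF per variable, simulate, then combine — can be sketched in a few lines of Python. Everything here is an illustrative assumption: the lognormal shape for DBH, its parameters, and the allometric coefficients are placeholders, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Assumed PDF for tree diameter (DBH, cm): lognormal, since tree sizes
# are typically right-skewed. Parameters are illustrative only.
dbh = rng.lognormal(mean=3.0, sigma=0.4, size=n_sims)

# Illustrative allometric model: biomass (kg) ~ a * DBH^b.
# The coefficients a and b are hypothetical placeholders.
a, b = 0.1, 2.4
biomass = a * dbh ** b

# Combine the simulations to characterise the output distribution.
mean = biomass.mean()
ci_low, ci_high = np.percentile(biomass, [2.5, 97.5])
print(f"mean biomass: {mean:.1f} kg, 95% interval: [{ci_low:.1f}, {ci_high:.1f}] kg")
```

In practice the lognormal assumption would be replaced by a distribution fitted to plot data and checked with a goodness-of-fit test, as the answer recommends.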
(Anna McMurray, Winrock  International)
In step 1 of the Monte Carlo approach, which distribution would you use for combined variables, e.g. tree DBHs that are either in forest or non-forest land? Individually they may be normal, but their combination may not be.
It is good to use Monte Carlo for both of these scenarios, since the conditions for applying it are so flexible. Whether it is better to use Monte Carlo than propagation of error does not depend on the number of variables. Applying Monte Carlo will lead to more accurate results when the variables have high uncertainty, the distributions are non-normal, the data are correlated, the models are complex, or the uncertainty varies between inventory years.
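One of the conditions listed above — correlated data — can be illustrated with a small simulation. The numbers below are hypothetical: activity data (AD) and an emission factor (EF), each with 20% relative uncertainty and an assumed positive correlation, combined as emissions = AD × EF. Simple propagation of error that ignores the correlation understates the combined uncertainty, while Monte Carlo captures it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical inputs: AD and EF, 20% relative uncertainty each,
# positively correlated (rho = 0.7). All values are illustrative.
ad_mean, ef_mean, rel_unc, rho = 1000.0, 2.0, 0.20, 0.7

sd_ad, sd_ef = ad_mean * rel_unc, ef_mean * rel_unc
cov = np.array([
    [sd_ad ** 2,          rho * sd_ad * sd_ef],
    [rho * sd_ad * sd_ef, sd_ef ** 2],
])
ad, ef = rng.multivariate_normal([ad_mean, ef_mean], cov, size=n).T
emissions = ad * ef

# Propagation of error assuming independence: sqrt(0.2^2 + 0.2^2) ~ 28.3%.
prop_rel = np.sqrt(rel_unc ** 2 + rel_unc ** 2)

# Monte Carlo relative uncertainty: larger, because of the correlation.
mc_rel = emissions.std() / emissions.mean()
print(f"propagation (independence): {prop_rel:.1%}, Monte Carlo: {mc_rel:.1%}")
```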
(Anna McMurray, Winrock International)
Is it still appropriate to use the Monte Carlo approach when we have only one emission factor value (the activity data are varied)? Or is it more applicable to analyses involving more than one value for both the EF and the AD?
You can find the guidance document we developed on this approach, in English and in Spanish, here: Each software package will have its own guidance materials on its specific application.
(Anna McMurray, Winrock International)
Do you have any documents/tutorials about Monte Carlo approach? 
While there is no perfect software out there, among Excel-based options XLSTAT is, in our experience, the most comprehensive (i.e., it will allow you to perform all the steps needed to carry out the Monte Carlo approach). We cannot claim to have reviewed all available software and can only speak from our own experience.
(Anna McMurray, Winrock International)
As most software needs to be purchased, what is the most reliable software we should choose? Excel is easier, if available.
While R does require some knowledge of programming, and therefore may not be considered as easy to use as Excel, there is a huge amount of information available online on how to use it (search for "R CRAN" or "R statistics"). There are also a number of free courses that will teach you how to use it. That being said, to our knowledge there is no ready-made R script specifically designed to run Monte Carlo for our purposes.
(Anna McMurray, Winrock International)
R is open source, but we do not know programming. Is there any ready-made R script, or any suggestions for using R?
A section of the Uncertainty chapter of the 2006 IPCC Guidelines addresses this question; see below. Note that Approach 1 refers to propagation of error and Approach 2 refers to Monte Carlo.

“Where the conditions for applicability are met (relatively low uncertainty, no correlation between sources except those dealt with explicitly by Approach 1), Approach 1 and Approach 2 will give identical results. However, and perhaps paradoxically, these conditions are most likely to be satisfied where Tier 2 and Tier 3 methods are widely used and properly applied in the inventory, because these methods should give the most accurate and perhaps also the most precise results. There is therefore no direct theoretical connection between choice of Approach and choice of Tier. In practice, when Tier 1 methods are applied, Approach 1 will usually be used while the ability to apply Approach 2 is more likely where Tier 2 and 3 methods are being used, moreover for quantifying the uncertainty of emissions/removal estimates of complex systems such as in the AFOLU Sector.

When Approach 2 is selected, as part of QA/QC activities inventory agencies also are encouraged to apply Approach 1 because of the insights it provides and because it will not require a significant amount of additional work. Where Approach 2 is used, its estimates of overall uncertainty are to be preferred when reporting uncertainties (see Section”
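The quoted point that the two approaches give identical results where the applicability conditions hold (relatively low uncertainty, no correlation) can be checked with a small simulation. The inventory below is hypothetical: two independent sources with 5% relative uncertainty each.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical inventory: two independent sources with low (5%)
# relative uncertainty -- the conditions under which Approach 1
# (propagation of error) and Approach 2 (Monte Carlo) should agree.
e1_mean, e2_mean, rel = 100.0, 300.0, 0.05

# Approach 1: combine absolute uncertainties in quadrature.
u1, u2 = e1_mean * rel, e2_mean * rel
approach1_rel = np.sqrt(u1 ** 2 + u2 ** 2) / (e1_mean + e2_mean)

# Approach 2: simulate each source and summarise the total.
total = rng.normal(e1_mean, u1, n) + rng.normal(e2_mean, u2, n)
approach2_rel = total.std() / total.mean()

print(f"Approach 1: {approach1_rel:.2%}, Approach 2: {approach2_rel:.2%}")
```

Both come out at roughly 4%; introducing correlation between the sources or raising their uncertainties would make the two diverge, as the quoted text explains.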
(Anna McMurray, Winrock International)
Some say the Monte Carlo approach should be used for Tier 2. Is that correct?
There is nothing in the 2006 IPCC Guidelines or the UNFCCC reporting guidelines that forbids the use of Monte Carlo to estimate emissions. As encouraged by the IPCC Guidelines, Parties can also use national methodologies where they consider these better able to reflect their national situation, provided the methodologies are consistent, transparent, and well documented. Because of the potential inconsistencies that result from applying Monte Carlo only to estimate uncertainties and not to estimate the actual emissions, from our experience in the forestry sector we highly recommend using Monte Carlo for both, as resources permit.
(Anna McMurray, Winrock International)
How should compilers prioritize use of the Monte Carlo approach? Should they start by applying it to uncertainty estimation?