As performing an attribute agreement analysis can be tedious, costly and generally uncomfortable for all stakeholders (the analysis itself is simple compared with its execution), it is best to take a moment to really understand what should be done and why. The audit should help determine which specific individuals and codes are the main sources of the problems, and the attribute agreement analysis should help determine the relative contribution of repeatability and reproducibility issues to those specific codes (and individuals). In addition, many bug tracking systems have inherent accuracy problems with location codes, because they record where a defect was detected rather than where it originated. Knowing where an error was found does not help much in identifying its cause, so the accuracy of location assignments should also be part of the assessment. Often, what is being evaluated is too complex to rely on the judgment of a single person: contracts, design drawings with specifications and parts lists, or software code, for example. One solution is a team-based approach, or an inspection/verification meeting in which identifying errors is the focus of the discussion. Several people can often reach a shared assessment that is better than what any of them could have produced alone. This is one way to mitigate the sources of repeatability and reproducibility error that are hardest to control. However, a bug tracking system is not a continuous gage. The assigned values are either correct or incorrect; there is no (and should be no) grey area. If codes, locations and severity levels are defined effectively, there is exactly one correct attribute in each of these categories for a given defect.
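As a rough illustration of how an audit might surface these drivers, the sketch below tallies mis-assignment rates per appraiser and per correct code from an audited sample of defect records. The appraiser names, defect codes and record layout are hypothetical, used only to show the shape of the calculation.

```python
from collections import defaultdict

# Each record: (appraiser, code assigned in the tracking system, correct code per the audit)
audited = [
    ("alice", "UI",    "UI"),
    ("alice", "LOGIC", "DATA"),
    ("bob",   "UI",    "LOGIC"),
    ("bob",   "DATA",  "DATA"),
    ("carol", "LOGIC", "LOGIC"),
    ("carol", "UI",    "UI"),
]

def error_rates(records, key_index):
    """Fraction of audited records that were mis-coded, grouped by the chosen field."""
    totals, errors = defaultdict(int), defaultdict(int)
    for rec in records:
        key = rec[key_index]
        totals[key] += 1
        if rec[1] != rec[2]:   # the assigned code disagrees with the audited (correct) code
            errors[key] += 1
    return {k: round(errors[k] / totals[k], 2) for k in totals}

print("error rate by appraiser:   ", error_rates(audited, 0))  # who tends to mis-code
print("error rate by correct code:", error_rates(audited, 2))  # which codes get mis-assigned
```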

Once it is established that the bug tracking system is an attribute measurement system, the next step is to examine the concepts of accuracy and precision as they relate to the situation. First, it helps to understand that accuracy and precision are terms borrowed from the world of continuous (or variable) gages. For example, it is desirable that the speedometer in a car read the correct speed across a range of speeds (e.g. 25 mph, 40 mph, 55 mph and 70 mph), regardless of who is driving. The absence of bias across a range of values over time is generally described as accuracy (bias can be thought of as being wrong on average). The ability of different people to interpret and reproduce the same gage reading multiple times is called precision (and precision problems may be due to a problem with the gage itself, not necessarily with the people using it). Second, the detailed results of the database audit should provide a good deal of information that will help determine how the attribute agreement analysis can best be organized. Repeatability and reproducibility are components of precision in an attribute measurement system analysis, and it is advisable to determine first whether there is an accuracy problem. This means that before designing an attribute agreement analysis and selecting the appropriate scenarios, an analyst should strongly consider auditing the database to determine whether past events have been coded correctly. The precision of a measurement system is analyzed by breaking it into two main components: repeatability (the ability of a single appraiser to assign the same value or attribute multiple times under the same conditions) and reproducibility (the ability of multiple appraisers to agree with one another for a given set of circumstances).
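As a minimal sketch of how these quantities might be estimated from a small attribute study, the example below computes repeatability (within-appraiser agreement across repeated trials), reproducibility (agreement between appraisers) and accuracy (agreement with the audited standard) as simple percentages. The appraisers, defect IDs, codes and standard values are hypothetical, and a real study would normally also report kappa statistics and confidence intervals.

```python
from itertools import combinations

# ratings[appraiser][defect_id] -> codes assigned over repeated trials
ratings = {
    "alice": {"D1": ["UI", "UI"], "D2": ["LOGIC", "DATA"], "D3": ["DATA", "DATA"]},
    "bob":   {"D1": ["UI", "UI"], "D2": ["DATA", "DATA"],  "D3": ["DATA", "UI"]},
}
standard = {"D1": "UI", "D2": "DATA", "D3": "DATA"}   # correct codes established by the audit

def repeatability(appraiser):
    """Share of defects on which one appraiser agrees with him/herself across all trials."""
    trials = ratings[appraiser]
    return sum(len(set(codes)) == 1 for codes in trials.values()) / len(trials)

def reproducibility():
    """Share of defects on which every appraiser agrees, within and between themselves."""
    agree = 0
    for d in standard:
        if all(set(ratings[a][d]) == set(ratings[b][d]) == {ratings[a][d][0]}
               for a, b in combinations(ratings, 2)):
            agree += 1
    return agree / len(standard)

def accuracy(appraiser):
    """Share of individual assignments that match the audited (correct) code."""
    trials = ratings[appraiser]
    hits = sum(code == standard[d] for d, codes in trials.items() for code in codes)
    return hits / sum(len(codes) for codes in trials.values())

for a in ratings:
    print(a, "repeatability:", round(repeatability(a), 2), "accuracy:", round(accuracy(a), 2))
print("reproducibility (all appraisers):", round(reproducibility(), 2))
```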

In the case of an attribute measurement system, repeatability or reproducibility problems automatically become accuracy problems as well: because each defect has exactly one correct code, any disagreement, whether with oneself or with another appraiser, means that at least one of the assignments must be wrong.