A kappa coefficient is used to check agreement on the presence of the themes identified. The kappa coefficient is a statistical measure of reliability, or agreement between evaluators, used in the evaluation of qualitative documents to determine the concordance between two evaluators. It was introduced in the 1960s by Jacob Cohen, a leading statistician who developed the key statistic for measuring inter-rater reliability, Cohen's kappa (5). Cohen pointed out that some degree of agreement between data collectors is likely even when they do not know the right answer and merely guess; he hypothesized that a number of their answers would coincide by chance, and that agreement statistics would have to take this random concordance into account. He developed the kappa statistic as a tool to control for this chance-agreement factor. On this scale, different levels of agreement can be defined, with 0 representing agreement no better than chance and 1 representing complete agreement. The equation used for kappa's calculation is as follows:

κ = (Po − Pe) / (1 − Pe),

where Po is the observed proportion of agreement between the evaluators and Pe is the proportion of agreement expected by chance. The concept of "concordance between evaluators" is quite simple, and for many years inter-rater reliability was measured simply as the percentage of agreement between data collectors. . . .
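As a minimal sketch of the calculation described above, the following function computes Cohen's kappa for two raters' categorical labels; the function name, the example labels, and the two-category data are illustrative assumptions, not taken from the source.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (Po - Pe) / (1 - Pe) for two raters' labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Po: observed agreement, the fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Pe: chance agreement, summing the product of each rater's
    # marginal probability over every category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(counts_a) | set(counts_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters coding six items for a theme.
a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 3))  # 0.333
```

Here the raters agree on 4 of 6 items (Po = 0.667) while chance alone would produce Pe = 0.5, so kappa credits only the agreement beyond chance.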