Table 3 Inter-rater reliability measures for diagnostic and overall assessments

From: High inter-rater reliability in consensus diagnoses and overall assessment in the Asian Cohort for Alzheimer’s Disease Study

|  | Observed agreement (%) | Cohen’s Kappa (95% CI) | Standard error (SE) | Misrepresented data |
|---|---|---|---|---|
| Diagnosis agreement | 88 | 0.835 (0.700–0.971) | 0.069 | 0.119 |
| Overall assessment agreement | 88 | 0.835 (0.700–0.971) | 0.069 | 0.119 |

  1. The table presents inter-rater reliability data for diagnosis and overall assessment agreement in a clinical study, using Cohen’s Kappa to quantify agreement beyond chance. Both categories show 88% observed agreement and a Cohen’s Kappa of 0.835, indicating excellent reliability. The standard error (0.069) and the 95% confidence interval (0.700–0.971) indicate a precise kappa estimate. The “misrepresented data” value of 0.119 for both categories refers to inaccuracies in data representation, potentially due to errors in data entry or interpretation, which can affect the study’s accuracy and reliability.
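
As a minimal sketch (not part of the paper), the reported 95% confidence interval can be reproduced from the table’s kappa estimate and standard error using the usual normal approximation, kappa ± 1.96 × SE; the small difference in the upper bound is rounding.

```python
# Sketch only: reproduce the 95% CI reported in Table 3 from the
# kappa estimate and its standard error (normal approximation).
kappa = 0.835   # Cohen's kappa from Table 3
se = 0.069      # standard error from Table 3
z = 1.96        # z-value for a 95% confidence interval

lower = kappa - z * se  # ~0.700
upper = kappa + z * se  # ~0.970 (table reports 0.971 after rounding)
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```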