Table 3 Inter-rater reliability measures for diagnostic and overall assessments
| | Observed agreement (%) | Cohen’s kappa (lower–upper CI) | Standard error (SE) | Misrepresented data |
|---|---|---|---|---|
| Diagnosis agreement | 88 | 0.835 (0.700–0.971) | 0.069 | 0.119 |
| Overall assessment agreement | 88 | 0.835 (0.700–0.971) | 0.069 | 0.119 |
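As a consistency check on the table above, the reported confidence interval can be reconstructed from the kappa estimate and its standard error. This sketch assumes a 95% Wald-type interval (kappa ± 1.96 × SE); the z-value and interval type are not stated in the source, so they are assumptions here.

```python
# Reconstructing the kappa CI in Table 3 from the reported values.
# Assumption: a 95% Wald-type interval, i.e. kappa +/- 1.96 * SE.
kappa = 0.835  # Cohen's kappa from Table 3
se = 0.069     # standard error from Table 3
z = 1.96       # assumed 95% normal quantile

lower = kappa - z * se
upper = kappa + z * se
print(f"{lower:.3f}-{upper:.3f}")  # ≈ 0.700-0.970
```

Under this assumption the lower bound matches the table exactly (0.700) and the upper bound (0.970) agrees with the reported 0.971 up to rounding.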