Lack of agreement between rheumatologists in defining digital ulceration in systemic sclerosis

A L Herrick, C Roberts, A Tracey, A Silman, M Anderson, M Goodfield, Neil J McHugh, L Muir, C P Denton

Research output: Contribution to journal › Article

  • 30 Citations

Abstract

Objective. To test the intra- and interobserver variability, among clinicians with an interest in systemic sclerosis (SSc), in defining digital ulcers.
Methods. Thirty-five images of finger lesions, incorporating a wide range of abnormalities at different sites, were duplicated, yielding a data set of 70 images. Physicians with an interest in SSc were invited to take part in the Web-based study, which involved looking through the images in a random sequence. The sequence differed for individual participants and prevented cross-checking with previous images. Participants were asked to grade each image as depicting "ulcer" or "no ulcer," and if "ulcer," then either "inactive" or "active." Images of a range of exemplar lesions were available for reference purposes while participants viewed the test images. Intrarater reliability was assessed using a weighted kappa coefficient with quadratic weights. Interrater reliability was estimated using a multirater weighted kappa coefficient.
Results. Fifty individuals (most of them rheumatologists) from 15 countries participated in the study. There was a high level of intrarater reliability, with a mean weighted kappa value of 0.81 (95% confidence interval [95% CI] 0.77, 0.84). Interrater reliability was poorer (weighted kappa = 0.46 [95% CI 0.35, 0.57]).
Conclusion. The poor interrater reliability suggests that if digital ulceration is to be used as an end point in multicenter clinical trials of SSc, then strict definitions must be developed. The present investigation also demonstrates the feasibility of Web-based studies, for which large numbers of participants can be recruited over a short time frame.
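For context on the statistical methods named in the abstract (an editorial note, not part of the article record): Cohen's weighted kappa compares observed to chance-expected disagreement, with quadratic weights penalising larger category disagreements more heavily. For the study's three-category grading (no ulcer, inactive ulcer, active ulcer), the standard form is

$$
\kappa_w = 1 - \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, p^{o}_{ij}}{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\, p^{e}_{ij}},
\qquad
w_{ij} = \frac{(i-j)^2}{(k-1)^2}, \quad k = 3,
$$

where $p^{o}_{ij}$ and $p^{e}_{ij}$ are the observed and chance-expected proportions of duplicate-image pairs graded $i$ on one viewing and $j$ on the other.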
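A minimal sketch of how the intrarater estimate could be reproduced, using hypothetical ratings (the study's actual survey data are not included in this record); `cohen_kappa_score` in scikit-learn supports the quadratic weighting named in the Methods:

```python
# Illustrative sketch only (not the authors' code): intrarater
# reliability on the study's 3-category scale
# (0 = no ulcer, 1 = inactive ulcer, 2 = active ulcer).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical data: one rater's grades for the 35 images and for
# their 35 duplicates, as presented in random order in the survey.
first_view = rng.integers(0, 3, size=35)
disagree = rng.random(35) < 0.2                      # ~20% of pairs differ
second_view = np.where(
    disagree,
    np.clip(first_view + rng.choice([-1, 1], size=35), 0, 2),
    first_view,
)

# Quadratic weights penalise a no-ulcer vs. active-ulcer disagreement
# four times as heavily as an adjacent-category disagreement (k = 3).
kappa = cohen_kappa_score(first_view, second_view, weights="quadratic")
print(f"intrarater quadratic-weighted kappa: {kappa:.2f}")
```

The paper's interrater figure uses a multirater weighted kappa, for which scikit-learn has no direct equivalent; `fleiss_kappa` in statsmodels is a related, though unweighted, multirater measure.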
Language: English
Pages: 878-882
Number of pages: 5
Journal: Arthritis & Rheumatism
ISSN: 0004-3591
Volume: 60
Issue number: 3
DOI: 10.1002/art.24333
Status: Published - 2009

Fingerprint

Systemic Scleroderma
Ulcer
Confidence Intervals
Observer Variation
Fingers
Multicenter Studies
Clinical Trials
Physicians
Weights and Measures
Rheumatologists

Cite this

Herrick, A. L., Roberts, C., Tracey, A., Silman, A., Anderson, M., Goodfield, M., ... Denton, C. P. (2009). Lack of agreement between rheumatologists in defining digital ulceration in systemic sclerosis. Arthritis & Rheumatism, 60(3), 878-882. DOI: 10.1002/art.24333
