ACR Meeting Abstracts

Abstract Number: 2174

Test-Retest Reliability and Validity of a Mobile Health Application to Automate the 30 Seconds Chair Stand Test – Preliminary Data to Create a Contemporary Instrument for Randomized Clinical Trials

Lucas Dantas1, Matthew Harkey2, André Dantas3, Lori Lyn Price4, Jeffrey Driban5 and Timothy McAlindon1; 1Tufts Medical Center, Boston, MA; 2Tufts Medical Center, University of Massachusetts Medical School, Boston, MA; 3CI&T, Tokyo, Japan; 4Tufts Medical Center, Tufts University, Boston, MA; 5Tufts Medical Center, Boston

Meeting: 2019 ACR/ARP Annual Meeting

Keywords: clinical trials, digital technologies, outcome measures and technology, physical function

Session Information

Date: Tuesday, November 12, 2019

Title: Osteoarthritis – Clinical Poster II

Session Type: Poster Session (Tuesday)

Session Time: 9:00AM-11:00AM

Background/Purpose: Contemporary technologies offer potential solutions for improving and automating data collection in randomized clinical trials by moving assessments from the clinic into the real world. Mobile health (mHealth) applications (apps) that use smartphone-embedded sensors can automate objective physical function tests by tracking, recording, and analyzing quantitative participant data. An automated mHealth app gives clinicians and researchers the opportunity to reliably monitor physical function at more frequent intervals in a real-world setting. To create this tool, we collected preliminary data to establish the test-retest reliability and validity of an automated mHealth app for assessing performance on the 30-second chair-stand test compared with the gold-standard assessment technique.

Methods: We recruited 10 healthy individuals to participate in two data collection sessions separated by 7 days. Individuals were at least 21 years old, able to comfortably walk 20 meters without an assistive device, and owned an iPhone 5 or newer. We developed an mHealth app that applies algorithms to the smartphone's embedded motion sensors (gravity sensor and accelerometer) to automate the repetition count during the 30-second chair-stand test. During the test, the participant's smartphone was held in an elastic strap at chest level. Participants performed three trials in which they completed as many repetitions as possible of standing up from and sitting down on a standard chair in 30 seconds. For each trial, the mHealth app counted the number of repetitions the participant completed. During the same three trials, an assessor manually counted the number of repetitions as the gold standard for assessing performance. The number of repetitions was averaged across the three trials for the mHealth app and for the gold standard. Bland-Altman plots and intraclass correlation coefficients (ICCs) were used to establish agreement and inter-rater reliability, respectively, between the gold standard and the mHealth app. ICCs were also used to assess test-retest reliability between the two sessions for the gold standard and the mHealth app.
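The abstract does not describe the app's counting algorithm. As an illustration only (none of this is the study's code, and the threshold values and function names are hypothetical), a minimal repetition counter over a gravity-removed vertical acceleration trace could be a simple threshold state machine: each stand produces an upward acceleration peak, and each return to sitting produces a dip.

```python
# Hypothetical sketch of a sit-to-stand repetition counter.
# Thresholds are illustrative assumptions, not values from the study.
RISE_G = 0.15   # assumed vertical acceleration (in g) marking the push-up of a stand
FALL_G = -0.15  # assumed vertical acceleration (in g) marking the sit-back-down

def count_chair_stands(vertical_accel_g):
    """Count stand-sit repetitions in a gravity-removed vertical acceleration trace."""
    reps = 0
    standing = False
    for a in vertical_accel_g:
        if not standing and a > RISE_G:
            standing = True          # detected the upward push of a stand
        elif standing and a < FALL_G:
            standing = False         # detected the sit-down phase
            reps += 1                # one full repetition completed
    return reps

# Synthetic trace containing three clean repetitions.
trace = [0.0, 0.3, 0.4, 0.1, -0.3, 0.0] * 3
print(count_chair_stands(trace))  # → 3
```

A real implementation would additionally low-pass filter the sensor stream and debounce threshold crossings to reject tremor and strap motion, but the state-machine core is the same idea.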

Results: The majority of our sample was female (80%), with a mean ± standard deviation age of 30 ± 7 years and body mass index of 24.7 ± 5.4 kg/m². The Bland-Altman plots demonstrated good agreement between the gold-standard test and the mHealth app, since no data points fell outside the 95% limits of agreement (upper limit: 1.0; lower limit: -1.5 chair stands; Figure 1). Additionally, there was excellent reliability between the gold standard and the mHealth app (ICC(2,k) = 0.98). Across the two sessions, the gold standard (ICC(2,k) = 0.93) and the mHealth app (ICC(2,k) = 0.89) demonstrated similar test-retest reliability.
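The two statistics behind these results, Bland-Altman 95% limits of agreement and the two-way random-effects, average-measures ICC (ICC(2,k)), can be sketched in plain Python. This is an illustrative re-implementation of the standard formulas, not the study's analysis code:

```python
# Illustrative implementations of the agreement statistics used in the study.
from statistics import mean, stdev

def bland_altman_limits(gold, app):
    """95% limits of agreement (bias ± 1.96 SD of paired differences)."""
    diffs = [g - a for g, a in zip(gold, app)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias - spread, bias + spread

def icc_2k(ratings):
    """ICC(2,k): two-way random effects, average measures.

    `ratings` is a subjects-by-raters table of scores.
    """
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]            # per-subject means
    col_means = [mean(col) for col in zip(*ratings)]      # per-rater means
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects MS
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters MS
    sse = (sum((v - grand) ** 2 for row in ratings for v in row)
           - (n - 1) * msr - (k - 1) * msc)
    mse = sse / ((n - 1) * (k - 1))                       # residual MS
    return (msr - mse) / (msr + (msc - mse) / n)
```

With perfectly agreeing raters, `icc_2k` returns 1.0; values above roughly 0.9, as reported here, are conventionally read as excellent reliability.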

Conclusion: These results suggest that our mHealth app is a reliable and valid tool for objectively quantifying performance on the 30-second chair-stand test in healthy individuals. These data lay the foundation for further development of algorithms to automate other objective physical function tests, as well as for future implementation in clinical trials.

Figure 1. Bland-Altman Plot for the 30-second Chair Stand Test.


Disclosure: L. Dantas, None; M. Harkey, None; A. Dantas, None; L. Price, None; J. Driban, None; T. McAlindon, None.

To cite this abstract in AMA style:

Dantas L, Harkey M, Dantas A, Price L, Driban J, McAlindon T. Test-Retest Reliability and Validity of a Mobile Health Application to Automate the 30 Seconds Chair Stand Test – Preliminary Data to Create a Contemporary Instrument for Randomized Clinical Trials [abstract]. Arthritis Rheumatol. 2019; 71 (suppl 10). https://acrabstracts.org/abstract/test-retest-reliability-and-validity-of-a-mobile-health-application-to-automate-the-30-seconds-chair-stand-test-preliminary-data-to-create-a-contemporary-instrument-for-randomized-clinical-t/. Accessed .


