
Video Assisted Speech Technology to Enhance Motor Planning for Speech (VAST)

ClinicalTrials.gov Identifier: NCT04764539
Recruitment Status : Completed
First Posted : February 21, 2021
Results First Posted : May 9, 2023
Last Update Posted : May 9, 2023
Sponsor:
iTherapy, LLC
Collaborator:
National Institute on Deafness and Other Communication Disorders (NIDCD)
Information provided by (Responsible Party):
Lois Brady, iTherapy, LLC

Brief Summary:
Nearly 3.5 million Americans are diagnosed with Autism Spectrum Disorder (ASD), a communication disorder that limits skills in language acquisition, sensory integration, and behavior. This lack of functional language ability reduces conversation to its most basic parts, making daily tasks difficult for minimally verbal to nonverbal individuals. iTherapy is developing the VAST platform, a personalized educational experience for students with ASD: a virtual reality-based video-modeling program designed to stimulate engagement and speech production practice. By increasing speech abilities, the platform aims to help those with ASD build social networks, handle the events of daily life, and ultimately enhance their quality of life.

Condition or disease: Autism Spectrum Disorder; Apraxia of Speech
Intervention/treatment: Behavioral: Video Assisted Speech Therapy (VAST)
Phase: Not Applicable

Detailed Description:

Autism Spectrum Disorder (ASD) is a neurodevelopmental communication disorder, affecting over 3.5 million Americans, that results in functional language and behavioral delays. These delays vary with the severity of ASD symptoms but often result in limited speech and increased communication challenges. Alongside linguistic acquisition, oral motor coordination is a crucial part of speech production.

Current clinical techniques have shown varying degrees of efficacy in improving functional language proficiency. Most techniques follow a drill-like procedure in which the child repeats various sounds and phrases until they are retained. However, such a process can require more than twenty therapy sessions to show improvement, and that improvement may be limited to a single aspect of speech. This significantly limits the linguistic and social skills a student will acquire. To improve the efficacy of these therapy sessions, new technology must be developed to provide the most effective educational experience.

Video-assisted speech technology (VAST) is a method in which the individual watches a close-up video model of the mouth and speaks simultaneously with it. Rather than presenting the individual with a static photograph of the initial phoneme, the entire sequence of oral movements can be presented via video-recorded segments of the orofacial area producing connected speech, combining best practices, video modeling, and literacy with auditory cues to provide unprecedented support for the development of vocabulary, word combinations, and communication.

In this SBIR Phase I proposal, iTherapy will develop a personalized educational experience for students with ASD by creating a virtual reality (VR) based VAST program to stimulate engagement and speech production practice. VR offers several benefits as a therapy technique: overcoming sensory difficulties, generalizing information more effectively, employing visual learning, and providing individualized treatment. As a user moves through the stages of the program, they will be immersed in an interactive environment that keeps them engaged with continuous content.

Rather than presenting the individual with a static photograph of the initial phoneme, the entire sequence of oral movements can be presented via VR-modeled segments of the orofacial area producing connected speech, combining best practices, video modeling, music therapy, and literacy with auditory cues to provide unprecedented support for the development of vocabulary, word combinations, and communication. The innovation will be a video series of a realistic VR mouth, delivered through an app on a tablet or smartphone, VR goggles, and bone conduction headphones.

Study Type : Interventional  (Clinical Trial)
Actual Enrollment : 6 participants
Allocation: Randomized
Intervention Model: Parallel Assignment
Intervention Model Description: Six children with ASD, between the ages of 4 and 8, were recruited to participate in a 12-session study that used the Video-Assisted Speech Therapy (VAST) application. The participants were divided into two groups: one received the VR-integrated prototype, and one received a 2D application on a tablet. Each session was approximately 15 minutes long (±5 minutes), occurring twice per week.
Masking: None (Open Label)
Primary Purpose: Treatment
Official Title: Video Assisted Speech Technology to Enhance Functional Language Abilities in Individuals With Autism Spectrum Disorder
Actual Study Start Date : December 1, 2019
Actual Primary Completion Date : November 30, 2020
Actual Study Completion Date : November 30, 2020


Arm Intervention/treatment
Active Comparator: Stimuli administered via 2D format on an iPad Pro
Participants were given the Video-Assisted Speech Therapy (VAST) video-modeling stimuli in a 2D format (iPad Pro). Three children with ASD, between the ages of 4 and 8, participated in a 14-session study that used the tablet-based VAST application. Sessions were held twice a week, each lasting approximately 15 minutes (±5 minutes).
Behavioral: Video Assisted Speech Therapy (VAST)
Six children with ASD, between the ages of 4 and 8, participated in a 14-session study that used the VR-integrated and the tablet-based VAST applications. Three subjects received a 3D VR-integrated, bone conduction VAST prototype, while the remaining three received a tablet with a 2D version of the software. Sessions were held twice a week, each lasting approximately 15 minutes (±5 minutes).

Active Comparator: Stimuli administered in 3D format via VR goggles and bone conduction headphones
Participants were given the Video-Assisted Speech Therapy (VAST) video-modeling stimuli in a VR format paired with a custom 3D-printed VR headset. Three children with ASD, between the ages of 4 and 8, participated in a 14-session study that used a 3D VR-integrated VAST prototype with bone conduction audio. Sessions were held twice a week, each lasting approximately 15 minutes (±5 minutes).
Behavioral: Video Assisted Speech Therapy (VAST)
Six children with ASD, between the ages of 4 and 8, participated in a 14-session study that used the VR-integrated and the tablet-based VAST applications. Three subjects received a 3D VR-integrated, bone conduction VAST prototype, while the remaining three received a tablet with a 2D version of the software. Sessions were held twice a week, each lasting approximately 15 minutes (±5 minutes).




Primary Outcome Measures :
  1. Change in Mean Length of Utterance (MLU) [ Time Frame: Seven weeks--each subject participated in the study twice a week over a 7-week period for a total of 14 sessions. The first and last sessions (session #1 and session #14) were reserved for pre-test and post-test language sample collection and assessment. ]

    Participants (aged 4 to 8 years) were given pre- and post-test 15-minute language samples. MLU was calculated for each sample, and the gain from pre-test to post-test was compared.

    NOTE: This measure is calculated based on the change in the number of morphemes per utterance between the pre-test and post-test language samples. During a five-minute period, two licensed speech-language pathologists (SLPs) observed a parent interacting and talking with their child. Both SLPs transcribed the subjects' speech and calculated a mean length of utterance (MLU) for each subject. MLU was calculated by determining how many bound and free morphemes were included within every spoken utterance produced by a subject. The total number of morphemes produced within the 5-minute period was then divided by the total number of utterances, which produced the MLU for each subject. This procedure was used for determining MLU in both the pre- and post-testing procedures.
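
    As a minimal sketch of the arithmetic described above (not the study's scoring procedure; the morpheme segmentations shown are hypothetical, since in the study they were produced by the licensed SLPs), MLU is total morphemes divided by total utterances:

        # Minimal sketch: MLU = total morphemes / total utterances in one sample.
        # Each utterance is given as a list of its bound and free morphemes.
        def mean_length_of_utterance(utterances):
            if not utterances:
                return 0.0
            total_morphemes = sum(len(u) for u in utterances)
            return total_morphemes / len(utterances)

        # Hypothetical pre- and post-test samples (chosen only for illustration).
        pre = [["dog", "-s"], ["go"], ["want", "cookie", "-s"]]          # MLU = 6 / 3 = 2.0
        post = [["I", "want", "cookie", "-s"], ["dog", "run", "-ing"]]   # MLU = 7 / 2 = 3.5
        change = mean_length_of_utterance(post) - mean_length_of_utterance(pre)  # +1.5

    A positive change indicates a gain in morphemes per utterance from pre-test to post-test.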


  2. Change in Percentage of Correctly Transcribed Words Using Automatic Speech Recognition [ Time Frame: Seven weeks--each subject participated in the study twice a week over a 7-week period for a total of 14 sessions. The first and last sessions (session #1 and session #14) were reserved for pre-test and post-test language sample collection and assessment. ]

    Fifteen-minute pre- and post-test language samples were processed using speech recognition software and transcribed by a licensed speech-language pathologist. Differences between pre- and post-intervention were compared across groups and within groups.

    NOTE: During our assessment, we used Google's native closed captioning function (a tool that uses machine learning to recognize and transcribe speech) and a third-party app, Tactiq Pins, which allows users to keep a transcript of all speaker utterances during a call. We compared our video recordings to the Tactiq Pins transcripts to measure any change in the proportion of accurately transcribed spoken words between the pre-test and post-test language samples. Specific transcription results for each group can be found in the data tables provided.
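
    As a minimal sketch of this comparison (an assumed scoring scheme, not the study's actual pipeline), the percentage of correctly transcribed words can be estimated by matching the automatic transcript against the SLP's reference transcript; the simple bag-of-words match below ignores word order:

        # Minimal sketch: percentage of reference words found in the ASR transcript.
        from collections import Counter

        def percent_correctly_transcribed(reference_words, asr_words):
            ref, hyp = Counter(reference_words), Counter(asr_words)
            matched = sum(min(ref[w], hyp[w]) for w in ref)
            return 100.0 * matched / max(len(reference_words), 1)

        # Hypothetical example: the child said four words, the transcript caught three.
        reference = "I want the ball".lower().split()
        asr_output = "I want ball".lower().split()
        print(percent_correctly_transcribed(reference, asr_output))  # 75.0

    The study compared this percentage between pre-test and post-test samples, across groups and within groups.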


  3. Change in Articulation Accuracy [ Time Frame: Seven weeks--each subject participated in the study twice a week over a 7-week period for a total of 14 sessions. The first and last sessions (session #1 and session #14) were reserved for pre-test and post-test language sample collection and assessment. ]
    Change in % of correct phonemes in each attempted stimulus
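
    As a minimal sketch (illustrative only; phoneme correctness in the study was judged by the SLPs), articulation accuracy can be computed as the share of target phonemes produced correctly in an attempted stimulus:

        # Minimal sketch: percent of target phonemes judged correct for one stimulus.
        def percent_phonemes_correct(correct_flags):
            # correct_flags: one boolean per target phoneme in the stimulus word.
            return 100.0 * sum(correct_flags) / len(correct_flags)

        # Hypothetical example: "ball" -> /b/, /ɔ/, /l/; two of three produced correctly.
        print(round(percent_phonemes_correct([True, True, False]), 1))  # 66.7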


Secondary Outcome Measures :
  1. Parent Perceptions of Communication Changes, Resulting From Study Participation. [ Time Frame: Seven weeks--each subject participated in the study twice a week over a 7-week period for a total of 14 sessions. The first and last sessions (session #1 and session #14) were reserved for pre-test and post-test language sample collection and assessment. ]

    Parent observations -- perceptions of changes in their children's motor-speech, behavioral, and social communication skills after having participated in the study

    Scale title: Net Positive Changes Score
    Maximum possible value: 18
    Minimum possible value: -2
    Higher scores are better.


  2. Change in Type-Token Ratios [ Time Frame: Seven weeks--each subject participated in the study twice a week over a 7-week period for a total of 14 sessions. The first and last sessions (session #1 and session #14) were reserved for pre-test and post-test language sample collection and assessment. ]
    A type-token ratio measures lexical diversity: the number of unique words (types) divided by the total number of words (tokens) in a given segment of language.
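
    As a minimal sketch (with a made-up sample), the ratio divides the count of unique words by the count of all words:

        # Minimal sketch: type-token ratio = unique words (types) / total words (tokens).
        def type_token_ratio(words):
            return len(set(words)) / len(words) if words else 0.0

        sample = "dog go dog want cookie".split()
        print(type_token_ratio(sample))  # 4 types / 5 tokens = 0.8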

  3. Increase in Response Rate to Treatment Stimuli [ Time Frame: Seven weeks--each subject participated in the study twice a week over a 7-week period for a total of 14 sessions. The first and last sessions (session #1 and session #14) were reserved for pre-test and post-test language sample collection and assessment. ]
    The change in response rate measures differences in how often children responded to pre- and post-test stimuli after receiving treatment, compared between the iPad Pro and VR goggles groups. A response is a verbal or non-verbal reaction (e.g., eye contact, gestures, vocalizations) to the stimuli presented during the therapy sessions. Higher response rates indicate better engagement and responsiveness to the treatment. The change in response rate is calculated as the post-test value minus the pre-test value, with positive numbers representing increases and negative numbers representing decreases in response rate.





Ages Eligible for Study:   4 Years to 8 Years   (Child)
Sexes Eligible for Study:   All
Accepts Healthy Volunteers:   Yes
Criteria

Inclusion Criteria:

  • Nonverbal to minimally verbal children (0-5 words)
  • Diagnosis of Autism Spectrum Disorder

Exclusion Criteria:

  • History of seizures (excluded from participation with VR goggles).


Please refer to this study by its ClinicalTrials.gov identifier (NCT number): NCT04764539


Locations
United States, California
All research was conducted via tele-research due to COVID-19
Vallejo, California, United States, 94591
Sponsors and Collaborators
iTherapy, LLC
National Institute on Deafness and Other Communication Disorders (NIDCD)

Documents provided by Lois Brady, iTherapy, LLC:
Publications:
Autism Spectrum Disorder: Communication Problems in Children. (2018, August 30). Retrieved from https://www.nidcd.nih.gov/health/autism-spectrum-disorder-communication-problems-children
Lord, C., & Paul, R. (1997). Language and communication in autism. In D. Cohen & F. Volkmar (Eds.), Handbook of autism and pervasive developmental disorders (pp. 195-225). New York, NY: Wiley.
Rates of comorbid symptoms in children with ASD, ADHD, and comorbid ASD and ADHD. (2013, May 22). Retrieved from https://www.sciencedirect.com/science/article/pii/S0891422213001832
Symptoms and Diagnosis of ADHD | CDC. (n.d.). Retrieved from https://www.cdc.gov/ncbddd/adhd/diagnosis.html.
Sarah Parsons & Sue Cobb (2011) State-of-the-art of virtual reality technologies for children on the autism spectrum, European Journal of Special Needs Education, 26:3, 355-366, DOI: 10.1080/08856257.2011.593831
Evers, K., Noens, I., Steyaert, J., & Wagemans, J. (2011). Combining strengths and weaknesses in visual perception of children with an autism spectrum disorder: Perceptual matching of facial expressions. Research in Autism Spectrum Disorders,5(4), 1327-1342. doi:10.1016/j.rasd.2011.01.004
Handbook of Sensory Physiology - rd.springer.com. (n.d.). Retrieved from https://rd.springer.com/content/pdf/bfm:978-3-642-88658-4/1.pdf
Why Speech Rate? Why are results inconclusive? (n.d.). Retrieved from https://www.asha.org/Events/convention/handouts/2007/1337_Chon_Hee_Cheong/
Brown, R. (1973). A First Language. London: Allen and Unwin.

Responsible Party: Lois Brady, Principal Investigator, iTherapy, LLC
ClinicalTrials.gov Identifier: NCT04764539    
Other Study ID Numbers: 1R43DC018447-01 ( U.S. NIH Grant/Contract )
First Posted: February 21, 2021
Results First Posted: May 9, 2023
Last Update Posted: May 9, 2023
Last Verified: April 2023
Individual Participant Data (IPD) Sharing Statement:
Plan to Share IPD: No
Plan Description: All data from individual participants will be kept private.

Studies a U.S. FDA-regulated Drug Product: No
Studies a U.S. FDA-regulated Device Product: No
Keywords provided by Lois Brady, iTherapy, LLC:
autism
apraxia
speech therapy
speech pathology
nonverbal
Additional relevant MeSH terms:
Apraxias
Autistic Disorder
Autism Spectrum Disorder
Child Development Disorders, Pervasive
Neurodevelopmental Disorders
Mental Disorders
Psychomotor Disorders
Neurobehavioral Manifestations
Neurologic Manifestations
Nervous System Diseases