Bonini, Roberto
(2024)
Integrating Neuro-Ocular Data for Accurate Hand Movement Decoding in Brain-Controlled Robotic Systems.
[Master's degree thesis], Università di Bologna, Degree Programme in
Artificial Intelligence [LM-DM270]. Full-text document not available.
The full text is not available at the author's request.
Abstract
Brain-computer interfaces (BCIs) are rapidly advancing, particularly in cognitive areas like the Posterior Parietal Cortex (PPC). This study aims to enhance continuous hand movement decoding by integrating neural and eye movement data for direct robotic prosthetic control. We leveraged single-neuron activity from the PPC area V6A in macaques during two different reaching tasks, a Fixation-to-reach and a Constant-gaze task, combined with eye-coordinate data to form a comprehensive neuro-ocular dataset. Convolutional (CNN) and recurrent (RNN) neural networks were employed for decoding hand movements. Our findings reveal that incorporating ocular data significantly improves decoding accuracy, with the RNN-based model outperforming the others. This research also explores intuitive, non-invasive systems for robotic arm control using temporary surrogate interfaces instead of decoded velocities. These advancements not only deepen our understanding of neural decoding but also pave the way for sophisticated assistive technologies and enhanced robotic control systems.
Document type
Degree thesis
(Master's degree)
Thesis author
Bonini, Roberto
Thesis supervisor
Thesis co-supervisor
School
Degree programme
Degree programme regulations
DM270
Keywords
Brain-Computer Interfaces, Hybrid BCI, Ocular Data Integration, Neural decoding, Posterior Parietal Cortex, area V6A, Single-neuron activity, Convolutional Neural Networks, Recurrent Neural Networks, Robotic Arm, Assistive technologies, Robot Operating System (ROS)
Thesis defence date
23 July 2024
URI