Publication Date



Open access

Embargo Period


Degree Type


Degree Name

Doctor of Philosophy (PhD)


Biomedical Engineering (Engineering)

Date of Defense


First Committee Member

Justin C. Sanchez

Second Committee Member

Ozcan Ozdamar

Third Committee Member

Jorge E. Bohorquez

Fourth Committee Member

Edelle C. Field-Fote

Fifth Committee Member

Chris Bennett


Each year, more than 10 people per million incur a spinal cord injury (SCI), and one-third of these injuries are reported to result in tetraplegia. People living with tetraplegia rank hand function as the ability they would most like to see restored. With decreased use of hand movements, plastic reorganization causes secondary damage in the motor cortex, so methods are needed to help restore or supplement motor abilities. One approach to a more comprehensive therapy is to augment standard rehabilitation with new developments from the study of brain-computer interfaces (BCIs). A BCI records brain activity and translates it into actions in the physical world by decoding electroencephalography (EEG) data with a computer system to determine the user's intent. By engaging the user's brain to actively control the extremities during rehabilitation, BCIs combined with rehabilitation could offer the unique ability to rehabilitate the motor system as a whole, including the secondary damage in the motor cortex. Not all EEG signals can be directly mapped to desired outputs; however, including some of them may improve the performance of the BCI. One such signal is the error-related potential (ErrP), which occurs when the subject notices that an error has been made. A new BCI architecture that incorporates reinforcement learning and ErrPs could better process the EEG signal. To validate the reinforcement-learning-based BCI for rehabilitation, a closed-loop system was developed. The system presented cues instructing the user to perform motor imagery, thus generating motor potentials, and then provided feedback through a display and functional electrical stimulation (FES), causing the user to generate an ErrP whenever an error occurred. Using reinforcement learning, the system was able to determine the mapping of motor potentials to intended actions based on the user-generated ErrPs.
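The closed loop described above can be sketched in simplified form. This is a minimal illustration, not the dissertation's actual decoder: the feature counts, the linear decoder, and the simulated "true intent" mapping are all hypothetical, and a detected ErrP is modeled simply as a negative reward that drives the weight update.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8   # EEG-derived motor-potential features (hypothetical count)
N_ACTIONS = 2    # e.g., two FES-assisted hand movements (hypothetical)
LEARNING_RATE = 0.05

# Linear decoder: one weight vector per candidate action.
weights = rng.normal(scale=0.1, size=(N_ACTIONS, N_FEATURES))

def choose_action(features):
    """Pick the action whose weight vector scores the features highest."""
    return int(np.argmax(weights @ features))

def update(features, action, errp_detected):
    """Reinforcement-style update: a detected ErrP acts as a negative
    reward, its absence as a positive one."""
    reward = -1.0 if errp_detected else 1.0
    weights[action] += LEARNING_RATE * reward * features

# Simulated closed loop. The user's "true" intent is a stand-in for real
# motor potentials: a fixed linear map the decoder must discover.
true_map = rng.normal(size=(N_ACTIONS, N_FEATURES))
history = []
for trial in range(2000):
    x = rng.normal(size=N_FEATURES)        # features for this cue
    intended = int(np.argmax(true_map @ x))
    action = choose_action(x)
    update(x, action, errp_detected=(action != intended))
    history.append(action == intended)
```

Because the only training signal is the trial-by-trial reward, no labeled calibration set is needed; the mapping from motor potentials to actions emerges from the ErrP feedback alone, which is the property that makes this architecture attractive for rehabilitation.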
Choosing an appropriate size for a neural network when using reinforcement learning in a BCI application is difficult because of the bias-variance tradeoff. By starting with a small network and using dynamic feature addition to grow the number of inputs over time, the performance of the BCI can exceed that of both small and large fixed networks, in early trials as well as later ones. The order in which features are added during dynamic feature addition affects the performance of the system: by accounting for how useful each feature is for discriminating between cues, and adding the more useful features in early trials, performance can be improved further. Several update rules could be used in the rehabilitation system: back propagation, scaled back propagation, Hebbian-style learning, and scaled Hebbian-style learning. In simulations, Hebbian-style learning performed better than back propagation, and scaled Hebbian-style learning performed better still. Scaled Hebbian-style learning also takes advantage of the online nature of the reinforcement learning used in the system: by adjusting the learning rate, the algorithm adapts the weights more quickly in areas where the slope of the error surface is small and converges on a minimum more quickly in areas where the slope is high.
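The effect of scaling the learning rate by the local slope can be illustrated with a toy objective. This is only a plausible reading of the scaling idea, not the dissertation's update rule: normalizing the gradient step by its magnitude yields effectively larger steps on flat regions of the error surface and damped steps on steep ones.

```python
def grad(w):
    # Gradient of f(w) = (w - 3)**4: very flat near the minimum at
    # w = 3, very steep far away from it.
    return 4.0 * (w - 3.0) ** 3

def scaled_descent(w, lr=0.05, steps=200, eps=1e-8):
    """Slope-scaled descent (hypothetical stand-in for the scaled update
    rule): dividing by |grad| keeps the step size roughly constant, so
    progress is fast where the slope is small and overshoot is avoided
    where the slope is large."""
    for _ in range(steps):
        g = grad(w)
        w -= lr * g / (abs(g) + eps)
    return w
```

For comparison, plain gradient descent from the same starting point (w = 10) with the same base rate would take a first step of size 0.05 × 1372 ≈ 68 and diverge, while the scaled version walks steadily to the minimum; this mirrors the faster convergence on steep regions noted above.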


brain-computer interfaces; reinforcement learning; neural networks