NeuroToys is a senior capstone project aimed at developing a non-invasive brain-computer interface (BCI) that allows users to control a robotic device in real time using EEG signals. The system interprets brainwave patterns and translates them into movement commands for the robot, improving accessibility for individuals with physical disabilities. The project combines expertise in mechanical design, electrical engineering, and software development.
Currently, users can command the robotic car to move forward, turn left, and turn right. A single blink of the left eye turns the car left, a single blink of the right eye turns it right, and sustained concentration moves it forward. Everything is managed from a central PyQt GUI, which handles connecting to the EEG headset, streaming its data, connecting to the Bluetooth car, monitoring incoming data against the detection thresholds, setting manual focus thresholds, and calibrating the focus level. For focus detection, we use an RMS-based dynamic threshold on the beta band (12-30 Hz) of the signal from the AF7 sensor of the Muse 2 EEG headset, located directly above the left eye. The "focus level" is the power of these beta waves computed in real time; when it rises above the threshold, a Bluetooth command is sent to the car to move forward. For blink detection, we band-pass filter the signal to 0.5-5 Hz, where blink artifacts appear, using the AF7 (left eye) and AF8 (right eye) channels. A blink is classified when the filtered curve dips and then rises steeply within a short window, and the corresponding command turns the car left or right.
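A simplified sketch of this detection logic is shown below. It is illustrative rather than our exact implementation: the window length, filter order, and the microvolt thresholds in `detect_blink` are placeholder values that would need tuning per user and session, and `get_latest_window` and `send_command` are hypothetical stand-ins for the EEG stream reader and the Bluetooth sender.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 256  # nominal Muse 2 EEG sampling rate (Hz)

def bandpass(data, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter over a 1-D window of samples."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.asarray(data, dtype=float))

def focus_level(af7_window):
    """RMS power of the beta band (12-30 Hz) in a window of AF7 samples."""
    beta = bandpass(af7_window, 12.0, 30.0)
    return float(np.sqrt(np.mean(beta ** 2)))

def detect_blink(channel_window, drop_thresh=-60.0, rise_thresh=60.0, max_gap=FS // 2):
    """Heuristic blink detector: the 0.5-5 Hz filtered trace must dip sharply
    below drop_thresh and then rebound above rise_thresh within max_gap samples.
    Thresholds are placeholder microvolt values, not our tuned settings."""
    low = bandpass(channel_window, 0.5, 5.0)
    troughs = np.where(low < drop_thresh)[0]
    peaks = np.where(low > rise_thresh)[0]
    for t in troughs:
        # Look for a steep upward rebound shortly after the trough.
        if np.any((peaks > t) & (peaks <= t + max_gap)):
            return True
    return False

def step(get_latest_window, send_command, focus_threshold):
    """One pass of the decision loop; the two callables are hypothetical
    stand-ins for the stream reader and the Bluetooth command sender."""
    af7, af8 = get_latest_window()           # most recent ~1 s of samples per channel
    if detect_blink(af7):
        send_command("LEFT")                 # left-eye blink -> turn left
    elif detect_blink(af8):
        send_command("RIGHT")                # right-eye blink -> turn right
    elif focus_level(af7) > focus_threshold:
        send_command("FORWARD")              # sustained focus -> drive forward
```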
Because the focus threshold is sensitive, we provide a manual threshold setter, a threshold calibration mode (sketched below), and a threshold reset button for maximum flexibility. Even so, placement of the EEG on the forehead remains the biggest factor in success; for example, the BCI currently performs more accurately on some team members' heads than others. More user-friendly ways of calibrating the EEG so that it works universally are an area for future improvement. Blink detection can also be fragile. Our current threshold-based detector looks for a specific shape in the filtered curve, but real blinks do not always produce the same shape and are then misclassified. We have considered collecting data and training our own machine learning model to classify blinks for our setup, rather than relying on the hardcoded heuristic we use now. The GUI is also not yet standalone: we run a main Python script that launches the PyQt interface. Future improvements include a standalone GUI and a more user-friendly app to host it, so that users would not have to run a Python script in a terminal and install dependencies manually.
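As a rough illustration of what the calibration mode does conceptually, the sketch below samples the focus level while the user relaxes and places the threshold a margin above the resting distribution. The polling rate, duration, and margin multiplier are assumed tuning parameters, and `sample_focus_level` is a hypothetical callable rather than a function from our codebase.

```python
import time
import numpy as np

def calibrate_focus_threshold(sample_focus_level, seconds=10, rate_hz=10, margin=1.5):
    """Sample the resting focus level and set the threshold a margin above it.

    sample_focus_level: hypothetical callable returning the current beta-band RMS.
    seconds / rate_hz / margin: illustrative tuning parameters, not calibrated values.
    """
    readings = []
    for _ in range(int(seconds * rate_hz)):
        readings.append(sample_focus_level())
        time.sleep(1.0 / rate_hz)
    baseline = np.asarray(readings, dtype=float)
    # Place the threshold above the relaxed baseline so that only deliberate
    # concentration pushes the live focus level past it.
    return float(baseline.mean() + margin * baseline.std())
```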
To keep this interface working with an accessible forehead EEG, we rely on unconventional control techniques: focus detection and blinking, since the electrodes pick up voltages from the prefrontal cortex and muscle activity around the forehead and eyes. Without direct access to electrical impulses from other brain areas, such as the motor cortex, it is not possible to obtain accurate commands from thoughts about movement alone. This conclusion came directly from a study using the Muse 2 headset, which achieved low accuracy when matching detected brain patterns to specific thought patterns, due to the headset's electrode placement and non-invasive design. The same limitation contributes to the device's occasional inaccuracy, but it is the tradeoff for its approachable, non-invasive nature.