NeuralAnalytics: Real-Time Brain Signal Analysis for my Bachelor's Thesis

December 01, 2025 - Neirth

After months of hard work, I’m thrilled to finally share the culmination of my Bachelor’s Thesis at the Universitat Politècnica de València. What started as a curious exploration into the intersection of neuroscience and software engineering has evolved into a fully functional brain-computer interface system capable of analyzing EEG signals in real-time using deep learning techniques.

This project, which I’ve named NeuralAnalytics, represents not just the end of my academic journey, but also the beginning of something that I believe has genuine potential to improve lives. The core idea is straightforward but ambitious: capture brain signals, process them in real-time, and translate specific neural patterns into actionable commands—like turning on a light bulb using only the power of thought.

The Vision Behind the Project

When I first started thinking about what my Bachelor’s Thesis should be, I knew I wanted something challenging—something that would push me beyond the comfortable boundaries of conventional software development. My passion for informatics began in adolescence, but I wanted to apply it to a domain that could make a tangible difference in people’s lives. The idea of creating a system that could interpret human brain activity felt like the perfect intersection of my interests in embedded systems, deep learning, and real-time computing.

What particularly motivated me was the potential to help individuals with motor disabilities—when I read about Stephen Hawking’s communication challenges, I realized this technology could genuinely improve quality of life for people who struggle with traditional interfaces.

Additionally, I was driven by the desire to contribute to the scientific community. Rather than pursuing commercial gain, I chose to publish all code and models openly, hoping other researchers could build upon this work. The project represents not just an academic requirement, but a meaningful step toward making brain-computer interface technology more accessible and practical for everyday use.

The project follows the regulatory framework of UNE-EN 62304 for medical device software, which added an extra layer of complexity but also gave me invaluable experience in developing software for critical applications. This wasn’t just about writing code that worked; it was about writing code that could be trusted.

Understanding the Technical Challenge

The fundamental challenge of analyzing EEG signals in real-time lies in the nature of the data itself. Electroencephalographic signals are notoriously noisy, with artifacts from eye movements, muscle contractions, and electrical interference constantly threatening to overwhelm the actual neural patterns we’re trying to detect. Additionally, the brain regions we’re interested in—specifically the occipital and temporal lobes—produce signals that vary significantly between individuals and even between sessions for the same person.

To tackle this, I designed a system architecture that separates concerns cleanly. The signal acquisition layer uses the BrainBit device, a consumer-grade EEG headband that captures data from four channels (T3, T4, O1, O2). This data flows through a preprocessing pipeline that normalizes and segments the signals before feeding them to the deep learning model for classification.

The classification task itself is framed around three states:

  • RED: A specific mental task indicating one command
  • GREEN: A different mental task indicating another command
  • TRASH: Everything else—noise, artifacts, or ambiguous signals

This three-class approach allows the system to be conservative, defaulting to “TRASH” when the model isn’t confident, which is crucial for a real-time control system where false positives could be problematic.
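This defaulting behavior can be sketched as a small post-processing step. The function name and the 0.8 threshold below are illustrative assumptions, not values from the project:

```python
# Map model output probabilities to a command, defaulting to TRASH
# whenever the model is not confident enough to act.
CLASSES = ("RED", "GREEN", "TRASH")

def decide(probs, threshold=0.8):
    """probs: sequence of 3 floats (RED, GREEN, TRASH) summing to ~1."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    # Only act on RED/GREEN when confidence clears the threshold.
    if CLASSES[best] != "TRASH" and probs[best] >= threshold:
        return CLASSES[best]
    return "TRASH"
```

With this policy, `decide([0.05, 0.9, 0.05])` yields "GREEN", while an ambiguous `decide([0.4, 0.35, 0.25])` falls back to "TRASH" even though RED has the highest score.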

Deep Learning Architecture: CNN-LSTM Hybrid

After extensive experimentation with different model architectures, I settled on a hybrid approach combining Convolutional Neural Networks (CNN) for spatial feature extraction with Long Short-Term Memory (LSTM) networks for temporal pattern recognition.

The rationale behind this design is grounded in the nature of EEG data. The CNN layers excel at extracting local patterns—frequency components, amplitude variations, and cross-channel relationships—while the LSTM layers capture the temporal dynamics that distinguish one mental state from another.

import torch
import torch.nn as nn

class NeuralAnalyticsModel(nn.Module):
    def __init__(self):
        super(NeuralAnalyticsModel, self).__init__()

        # CNN Feature Extractor
        self.conv1 = nn.Conv1d(in_channels=4, out_channels=16, kernel_size=5, padding=2)
        self.bn1 = nn.BatchNorm1d(16)
        self.pool1 = nn.MaxPool1d(kernel_size=2, stride=2)

        self.conv2 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(32)
        self.pool2 = nn.MaxPool1d(kernel_size=2, stride=2)

        # LSTM Temporal Encoder
        self.lstm = nn.LSTM(input_size=32, hidden_size=32, num_layers=1,
                            batch_first=True, bidirectional=True)

        # Classifier
        self.classifier = nn.Sequential(
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(32, 3),
            nn.Softmax(dim=1)
        )

    def forward(self, x):
        # x: (batch, 4, 62), i.e. four EEG channels, one 62-sample window
        x = self.pool1(torch.relu(self.bn1(self.conv1(x))))  # -> (batch, 16, 31)
        x = self.pool2(torch.relu(self.bn2(self.conv2(x))))  # -> (batch, 32, 15)
        x = x.permute(0, 2, 1)          # -> (batch, 15, 32) for batch_first LSTM
        x, _ = self.lstm(x)             # -> (batch, 15, 64), bidirectional
        return self.classifier(x[:, -1, :])  # last timestep -> class probabilities

The input data is processed in windows of 62 samples with 50% overlap, providing sufficient context for pattern detection while maintaining real-time responsiveness. Each window passes through z-score normalization per channel, ensuring consistent feature scales regardless of signal amplitude variations.
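The windowing step can be sketched as follows. This is illustrative NumPy, not the project's actual pipeline code; with a 62-sample window and 50% overlap, consecutive windows start 31 samples apart:

```python
import numpy as np

def make_windows(signal, window=62, overlap=0.5):
    """Slice a (channels, samples) array into overlapping windows.

    Returns an array of shape (n_windows, channels, window).
    """
    step = int(window * (1 - overlap))  # 31 samples for 50% overlap
    n = (signal.shape[1] - window) // step + 1
    return np.stack([signal[:, i * step : i * step + window] for i in range(n)])

# Four channels, 155 samples -> four 62-sample windows starting at 0, 31, 62, 93
data = np.random.randn(4, 155)
windows = make_windows(data)
```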

The Importance of Data Normalization

One aspect that significantly impacted model performance was the normalization strategy. Initially, I experimented with various approaches, but z-score normalization per window proved to be the most robust:

X_norm = (X - μ) / σ

Where μ is the mean and σ is the standard deviation of each channel within the window. This approach accounts for the natural drift in EEG baseline values and ensures that the model focuses on relative patterns rather than absolute amplitudes.
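In code, the per-window z-score looks like this. This is an illustrative NumPy sketch; the epsilon guard is my addition to avoid division by zero on a flat channel:

```python
import numpy as np

def zscore_window(window, eps=1e-8):
    """Z-score normalize a (channels, samples) window per channel.

    Each channel ends up with mean ~0 and standard deviation ~1,
    regardless of its original baseline or amplitude.
    """
    mu = window.mean(axis=1, keepdims=True)
    sigma = window.std(axis=1, keepdims=True)
    return (window - mu) / (sigma + eps)
```

After normalization, a window recorded at 50 µV and one recorded at 5 µV present the same relative pattern to the model.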

Rust-Based Inference Engine

For the inference side, I chose Rust as the implementation language. This decision was driven by several factors: deterministic memory management for real-time constraints, excellent performance characteristics, and the availability of the Tract library for ONNX model inference.

The model trained in PyTorch is exported to ONNX format, allowing clean separation between the training environment (Python with GPU acceleration) and the inference environment (Rust on a Raspberry Pi 4). This architecture mirrors what I explored in my previous post about creating a predictive system in Rust and PyTorch, though the complexity here is significantly higher due to real-time constraints.

impl Default for NeuralAnalyticsService {
    fn default() -> Self {
        let model = tract_onnx::onnx()
            .model_for_path("assets/neural_analytics.onnx")
            .expect("Failed to load model")
            .with_input_fact(0, InferenceFact::dt_shape(f32::datum_type(), tvec!(1, 62, 4)))
            .expect("Failed to set input shape")
            .into_optimized()
            .expect("Failed to optimize model")
            .into_runnable()
            .expect("Failed to create runnable model");

        NeuralAnalyticsService { model }
    }
}

The state machine architecture handles the continuous signal flow, managing the buffering, preprocessing, and inference pipeline while maintaining strict timing constraints. The system runs on a Raspberry Pi 4 Model B (8GB) with a real-time operating system configuration to guarantee response times.
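In spirit, the buffering side of that state machine resembles the following minimal Python sketch. The actual engine is written in Rust; the state names and structure here are illustrative only:

```python
from enum import Enum, auto

class State(Enum):
    BUFFERING = auto()  # accumulating samples until a full window exists
    INFERRING = auto()  # a complete window is being handed to the model

class SignalPipeline:
    """Illustrative sketch of the buffer-then-infer loop."""

    WINDOW = 62
    STEP = 31  # 50% overlap between consecutive windows

    def __init__(self):
        self.buffer = []
        self.state = State.BUFFERING

    def push(self, sample):
        """Feed one 4-channel sample; return a window when one is ready."""
        self.buffer.append(sample)
        if len(self.buffer) >= self.WINDOW:
            self.state = State.INFERRING
            window = self.buffer[: self.WINDOW]
            # Drop only STEP samples so the next window overlaps by 50%.
            self.buffer = self.buffer[self.STEP :]
            self.state = State.BUFFERING
            return window
        return None
```

Each call to `push` is O(1) amortized, so the loop's cost is dominated by inference, which is what the timing budget has to account for.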

Hardware Integration and Smart Home Control

One of the most satisfying aspects of this project was the tangible output: controlling a Tapo Smart Bulb using brain signals. When the model detects a valid “GREEN” pattern with sufficient confidence, it triggers a state change in the smart bulb. The feedback loop is immediate and visceral—you think, and the light responds.

The first time the system actually worked as intended was both surreal and incredibly rewarding. After weeks of debugging signal processing issues and model accuracy problems, I had one late-night session where everything finally clicked. I was wearing the BrainBit headband, focusing on associating the color green with a specific mental visualization (imagining a bright, energizing light), and when the Tapo bulb switched on reliably in response to that thought pattern, I literally jumped out of my chair.

There were definitely funny moments along the way—like the time I kept getting false positives whenever I laughed, because the facial muscle movements were being misinterpreted as brain signals. Or when my cat walked across the keyboard during a recording session and somehow triggered a series of commands that made the light flicker erratically. These mishaps taught me valuable lessons about signal isolation and the importance of proper grounding.

The most memorable moment came during a dry run for my thesis defense presentation. I had invited a few friends to watch, and when I successfully turned the lamp on and off three times in a row using only thought commands, the room erupted in cheers. It was validation not just of the technical work, but of the months of persistent troubleshooting and refinement that had led to that point.

The BrainFlow SDK handles the Bluetooth communication with the BrainBit device, abstracting away the low-level protocol details and providing a clean streaming interface. This allowed me to focus on the signal processing and machine learning aspects without getting bogged down in hardware-specific implementation details.

Project Structure and Code Organization

The project follows a modular structure that separates concerns across different packages:

NeuralAnalytics/
├── packages/
│   ├── neural_analytics_core/     # Core Rust implementation
│   ├── neural_analytics_data/     # Data capture utilities
│   ├── neural_analytics_gui/      # GUI for signal visualization
│   └── neural_analytics_model/    # PyTorch model training
├── docs/                          # LaTeX thesis documentation
└── dataset/                       # Training data organized by class

Each package has clear responsibilities, and the boundaries between them are well-defined. This modular approach made iterative development much easier—I could refine the model training pipeline without touching the Rust inference code, and vice versa.

Challenges and Lessons Learned

The technical journey presented several significant challenges that forced me to deepen my understanding across multiple domains.

The hardest problem to solve was dealing with inter-session variability in EEG signals. Early in development, I noticed that a model trained on one day’s data would perform poorly the next day, even with the same subject and similar mental states. This wasn’t just about signal noise—it appeared to be fundamental shifts in the baseline neural patterns, possibly due to factors like fatigue, hydration levels, or even subtle changes in electrode positioning. I addressed this by implementing domain adaptation techniques and creating more robust normalization strategies that focused on relative patterns rather than absolute values.

Another major challenge was meeting real-time constraints on the Raspberry Pi 4. The initial Python prototype had unacceptable latency (over 200ms), which made the system feel unresponsive. Migrating the inference engine to Rust with the Tract library reduced this to under 50ms, but required completely rethinking my approach to memory management and data buffering.
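Latency figures like these can be gathered with a simple harness. This sketch is illustrative and is not the project's benchmarking code; using the median rather than the mean keeps occasional scheduler hiccups from skewing the number:

```python
import time

def measure_latency_ms(fn, *args, runs=100):
    """Return the median wall-clock latency of fn(*args) in milliseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return samples[len(samples) // 2]
```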

If I were to start over, I would invest more time upfront in designing a subject-independent feature extraction pipeline. Rather than trying to normalize away individual differences after the fact, I’d explore techniques like Riemannian geometry for covariance matrices or transfer learning approaches that could leverage population data while adapting quickly to new users. I would also implement a more comprehensive signal quality monitoring system from the beginning, rather than adding it as an afterthought when poor signal quality ruined entire recording sessions.

Beyond normalization, the session-to-session variability also shaped the training methodology itself: it led me to implement more robust augmentation strategies and to be more careful about the stratification of training and validation sets.

Another lesson was the importance of end-to-end testing. It’s one thing to achieve high accuracy on pre-recorded datasets, but real-time performance with a live signal stream is a different beast entirely. Latency, jitter, and the psychological pressure of a live demonstration all introduced factors that weren’t present in offline evaluation.

Media Coverage and Public Recognition

I was fortunate that this project caught the attention of major Spanish media outlets. El Español published an article about the project, and I was invited to demonstrate the system live on Antena 3’s “Y Ahora Sonsoles” program.

The media coverage was both unexpected and deeply humbling. When El Español reached out to feature the project, I was initially surprised that a technical thesis project would generate such interest, but it quickly became clear that the story resonated because it represented something tangible—technology that people could see and understand immediately.

Appearing on Antena 3’s “Y Ahora Sonsoles” program was an entirely different experience. The studio environment with its bright lights, multiple cameras, and live audience created pressure I hadn’t anticipated. Unlike my controlled lab environment, I couldn’t retake failed attempts or adjust parameters on the fly. There was a moment during the live demo where the signal quality dropped due to movement artifacts, and for several tense seconds, the system wasn’t responding as expected. I had to calmly guide the host through the recalibration process while millions watched—a situation that tested both my technical knowledge and my ability to communicate under pressure.

What struck me most was the genuine curiosity and enthusiasm from the audience and hosts. Rather than treating it as a magic trick, they asked thoughtful questions about how the technology actually worked, its limitations, and its potential applications. This reinforced my belief that when complex technology is explained accessibly, it can inspire meaningful conversations about innovation and its role in society.

The public interest in this project has been overwhelming. It reinforced my belief that technology, when applied thoughtfully, has the potential to genuinely improve people’s lives—particularly for those with motor disabilities who could benefit from brain-computer interfaces for communication and control.

Technical Specifications and Results

For those interested in the technical details, here’s a summary of the system specifications:

Component            Specification
-------------------  --------------------------------------------
EEG Device           BrainBit (4 channels: T3, T4, O1, O2)
Processing Platform  Raspberry Pi 4 Model B (8GB)
Model Architecture   CNN-LSTM Hybrid (16→32 conv, 32×2 LSTM)
Window Size          62 samples with 50% overlap
Input Normalization  Z-score per channel per window
Output Classes       3 (RED, GREEN, TRASH)
Inference Library    Tract (ONNX runtime for Rust)
Smart Device         Tapo Smart Bulb

The model achieves reliable classification performance in controlled conditions, with the system maintaining real-time responsiveness on the constrained hardware platform.

Future Directions

Looking ahead, I see brain-computer interface technology evolving along several exciting trajectories. The field is moving beyond simple binary control toward more nuanced, multi-dimensional interaction paradigms. I’m particularly excited about the potential for adaptive systems that can learn from users in real-time, reducing the calibration burden and accommodating natural variations in brain signals.

My personal plans involve continuing to contribute to open-source BCI tools and frameworks. I believe the key to widespread adoption lies in making these technologies more accessible—not just from a cost perspective, but also in terms of usability and setup simplicity. I’m exploring ways to simplify the signal processing pipeline while maintaining robustness, potentially leveraging edge AI accelerators for more complex feature extraction.

Specific applications I’m eager to explore include:

  • Communication aids: Developing more sophisticated text-entry systems for individuals with severe motor impairments
  • Neurofeedback applications: Creating tools for mental wellness, attention training, and stress management
  • Augmented reality interfaces: Combining BCI with AR/VR for immersive, hands-free interaction
  • Passive BCI: Using brain signals not for explicit commands, but for implicit feedback to adapt interfaces based on cognitive load or emotional state

The convergence of BCI with other emerging technologies—like advanced materials for more comfortable electrodes, improved signal processing algorithms, and more intuitive machine learning approaches—promises to unlock applications we can barely imagine today.

While this project represents the completion of my Bachelor’s Thesis, I don’t consider it finished. There are numerous avenues for improvement and extension:

  • Expanded command vocabulary: Moving beyond binary control to multiple distinct commands
  • Personalization pipelines: Real-time adaptation to individual users without extensive retraining
  • Alternative output modalities: Integration with wheelchair controls, computer cursors, or speech synthesis
  • Edge deployment optimization: Quantization and pruning for even lower latency

The field of brain-computer interfaces is evolving rapidly, and I’m excited to continue contributing to it.

Conclusion

This project has been one of the most challenging and rewarding experiences of my academic career. It pushed me to learn about domains I had never explored before—neuroscience, real-time systems, regulatory compliance for medical devices—while also deepening my expertise in areas I was already passionate about, like deep learning and Rust development.

Reflecting on this project, I realize it has fundamentally transformed how I view the intersection of technology and human potential. Before NeuralAnalytics, I saw software engineering primarily as a tool for building efficient systems and solving logical puzzles. This thesis revealed to me that technology’s true power lies in its ability to extend human capabilities—especially for those whose abilities are limited by circumstance.

The journey taught me that meaningful innovation requires more than technical skill; it demands empathy, interdisciplinary curiosity, and the courage to venture into unfamiliar territories. Working at the boundary of neuroscience and engineering forced me to learn new languages (both literal and figurative), to respect the complexity of biological systems, and to appreciate that sometimes the most elegant solutions come from embracing rather than fighting variability.

Professionally, this experience has solidified my commitment to developing technology that serves people first. Whether I’m working on biometric systems at Facephi or exploring other domains, I now constantly ask: “Who does this serve? How does it improve lives? Is it accessible and ethical?” The project also gave me confidence to tackle ambitious, ill-defined problems—knowing that persistence, systematic experimentation, and learning from failure can lead to breakthroughs even in seemingly opaque domains like brain signal interpretation.

Most importantly, NeuralAnalytics reminded me that engineering excellence isn’t just about writing correct code—it’s about creating systems that dignify and empower human experience. When I see someone smile as they turn on a light with their thoughts, I’m reminded why I fell in love with engineering in the first place: to build things that matter.

The complete source code is available on GitHub under the GPL-3.0 license. The repository includes the training code, the Rust inference engine, documentation, and everything needed to replicate or extend this work. I hope it serves as a useful reference for anyone interested in exploring the fascinating intersection of neuroscience and software engineering.

Credits

The header image of this post is made using Midjourney AI.