Boeing Research & Technology
Senior Machine Learning Engineer with a love for all things AI, which started during my Mechanical Engineering undergrad at SIUE (2016) and continued through my Master's in Engineering with an AI focus at Saint Louis University (2022). Over the years, I've dived deep into algorithm development for AI, ML, and Deep Learning, specializing in natural language processing, computer vision, anomaly detection, and multimodal time-series modeling.
Research is at the core of my career. Right now, I'm leading multiple cross-functional software, DevOps, and ML teams at Boeing to explore and deliver AI-powered capabilities built on custom Deep Neural Networks, particularly VAEs, GANs, and Transformers. This work puts me in a sweet spot: mining SOTA journal publications for quick prototypes, while still meeting business requirements for fully deployed systems on edge devices or in the cloud. Along the way, I've (unfortunately) picked up some quirks: I'm a proud vim elitist (spaces > tabs forever), I'll die on the hill of clean, bite-sized code commits, and I've acquired the strong belief that DNNs are not the solution to every problem (despite me wishing for the contrary).
When I'm not hacking away at the future of AI, you'll probably find me hiking with my dog Dani, getting lost in a Brandon Sanderson or Neil Gaiman book, or—let's be honest—still coding for fun. If any of that sounds cool, feel free to connect with me on social media!
Email me at kydepro@gmail.com if you have any questions!
Click here to download my resume.
Here are my projects from the past couple of years, ranging from GitHub repos and publication materials to personal projects. Unfortunately, some were developed in private repos, but enjoy those that weren't! :)
Personal exploratory effort to construct sophisticated DNN GenAI structures from scratch, without the assistance of the utility (crutch) functions provided in popular ML stacks (namely Autograd). Heavily inspired by eduardoleao052's work in the same domain, this PyTorch project includes implementations of LSTM and Multi-head Attention networks with fully defined forward and back-prop functions. The networks are trained on NLP corpora, mainly Jules Verne (13M characters) and Shakespeare (1M characters), and can generate text characteristic of the source.
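To give a flavor of the pattern (this is my own simplified NumPy sketch, not code from the repo), every layer caches what it needs on the forward pass and computes its own gradients explicitly on the backward pass, no autograd anywhere:

```python
import numpy as np

class Dense:
    """A dense layer with hand-written forward and backward passes."""
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                       # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, d_out, lr=1e-3):
        dW = self.x.T @ d_out            # gradient w.r.t. the weights
        dx = d_out @ self.W.T            # gradient handed to the previous layer
        self.W -= lr * dW                # plain SGD update, no optimizer object
        self.b -= lr * d_out.sum(axis=0)
        return dx
```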
Personal project based on the Kaggle competition to predict MDS-UPDRS scores, which measure progression in patients with Parkinson's disease. The Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) is a comprehensive assessment of both motor and non-motor symptoms associated with Parkinson's. Features are derived from raw data consisting of protein and peptide levels over time in subjects with Parkinson's disease versus normal age-matched control subjects. The current solution, inspired by Kaggle user Dott1718, is baselined with a Light Gradient Boosting Machine to identify useful characteristics and relationships in the raw data. Additionally, this problem set shows that a Neural Network using the competition metric SMAPE+1 as its loss function, with a leaky ReLU activation on the output, is another competitive solution.
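For the curious, that loss might look something like this in PyTorch, assuming the usual SMAPE+1 definition (SMAPE evaluated on targets shifted by one so the denominator never hits zero); this is a sketch of mine, not the competition's reference implementation:

```python
import torch

def smape_plus_one(y_pred, y_true):
    # SMAPE on shifted values (y + 1): zero targets can't blow up the denominator,
    # and the expression stays differentiable for use as a training loss.
    num = 2.0 * torch.abs(y_pred - y_true)
    den = torch.abs(y_true + 1.0) + torch.abs(y_pred + 1.0)
    return 100.0 * torch.mean(num / den)
```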
What started as a small Quantopian.com project to explore tensorflow and pandas real-time data processing blossomed (spun out of control?) into a multi-year project to create a dynamic machine learning framework for financial (quant) analysis. I was frustrated with how brittle the reference code on most ML forums seemed to be, and how un-generalizable any solution derived from those examples would ultimately be. Instead of building one-off models on hardcoded parameters, this library takes a design-driven approach to building machine learning models. Users specify the model type, unique features, prediction target, and financial asset, and the library dynamically handles the tedious work of calculating features and deriving model layers. Check out the Github page to see results of using the library to create three LSTM models from scratch (one using outputs of the other two as features) and then deploying them to predict results on unseen data. This work is entirely in Python, tensorflow, and pandas and is forked from my private repo, which has a bunch more features such as a full-blown backtesting engine...but it needs work. Please feel free to contact me for more information or to see more of the private repo.
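To illustrate the spirit of the design-driven approach (a hypothetical sketch, not the library's actual API; the spec keys and builder function are stand-ins of mine), the idea is that a declarative spec drives the layer derivation instead of hardcoded shapes:

```python
import tensorflow as tf

def build_from_spec(spec):
    # Derive the input shape and LSTM stack from the spec rather than hardcoding
    n_features = len(spec["features"])
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(spec["lookback"], n_features)),
        *[tf.keras.layers.LSTM(units,
                               return_sequences=(i < len(spec["units"]) - 1))
          for i, units in enumerate(spec["units"])],
        tf.keras.layers.Dense(1),   # single regression target
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

spec = {"features": ["rsi_14", "macd", "volume"], "lookback": 30, "units": [64, 32]}
model = build_from_spec(spec)
```

Swapping features or adding layers then means editing the spec, not the model code, which is what keeps the framework from collapsing into another pile of one-off scripts.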
Recently I've come to the conclusion that I can only play Baldur's Gate 3 so many times before having to put it down. To fill the hole in my life, I've moved on to brushing up the fundamental Data Structures and Computational Analytics skills I haven't really revisited since college. Please follow along as I spend my free time somehow being even more nerdy than playing D&D clones all night.
In SLU's CHROME lab, we primarily focused on the study of haptic sensation and how to develop technology around it. This project was a unique opportunity to reverse that order and instead teach haptic sensing to technology we had already developed, specifically a telerobotic arm. By extracting haptic features of objects held by the end effector, I was able to train models to detect what it was holding. The best model was an LSTM solution, but NN, Random Forest, and SVM techniques were also investigated. As expected, model performance suffered greatly on objects whose haptic profiles differed depending on how the object was oriented in the end effector; picture holding a hammer by the head vs. the handle. But I believe that as part of a sensor fusion solution, especially with a vision system, these results could enhance the overall performance of traditional detection algorithms!
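The classification setup looked roughly like the sketch below (the dimensions and layer sizes are assumptions of mine, not the thesis code): a recurrent model mapping a window of haptic sensor readings from the end effector to an object class.

```python
import tensorflow as tf

# Assumed dimensions: 200 time steps of 6 haptic channels, 5 object classes
n_timesteps, n_sensors, n_objects = 200, 6, 5

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_timesteps, n_sensors)),
    tf.keras.layers.LSTM(64),                            # summarize the haptic time series
    tf.keras.layers.Dense(n_objects, activation="softmax"),  # one probability per object
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```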
Imagine developing a game for your phone or tablet where you have full control of not only what the user sees and hears, but also what they FEEL! This specialized field of surface haptics is only starting to be explored with commercial devices, so my solution to this lack of technology was the HUE tablet. Built to be as user friendly as possible, this 14" tablet housed a Raspberry Pi/Arduino control solution and actuated two different forms of haptic sensation on a user's finger: ultrasonic vibration of the screen, driven by hidden piezo actuators, decreased surface friction at the touchscreen, while electrostatic actuation created the sensation of a "sticky" screen. By modulating these two sensations, I could create "3D" haptic textures, which I had the opportunity to demo at the 2018 IEEE Haptics Symposium. The software solution is written in a mix of Python and C (Arduino) and executes within the ROS framework.
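Conceptually, rendering a texture comes down to trading the two sensations off against each other as the finger moves. A hypothetical ROS sketch of that mixing (the topic names and the mixing rule here are mine, not the HUE codebase):

```python
import rospy
from std_msgs.msg import Float32

rospy.init_node("hue_texture_demo")
friction = rospy.Publisher("/hue/ultrasonic_amplitude", Float32, queue_size=1)
sticky = rospy.Publisher("/hue/electrostatic_amplitude", Float32, queue_size=1)

def render(finger_x):
    # Sweep between the two sensations with finger position: more ultrasonic
    # vibration feels "slippery", more electrostatic actuation feels "sticky".
    level = finger_x % 1.0
    friction.publish(Float32(1.0 - level))
    sticky.publish(Float32(level))
```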
Modeling the highly dynamic and unstable flight characteristics of the fighter jets we see every day is challenging enough, but actually deriving the control algorithms that make them pilotable is on another level. This project explores one technique for the task: linearizing the state-space representation of small sections of the overall non-linear flight envelope and solving for optimal control parameters in each localized solution space. The solution is written in Python and is inspired by the work of Dr. Kenneth Buckholtz.
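Assuming the optimal control flavor here is LQR (my reading of "optimal control parameters", not something stated outright), the core step per envelope slice looks like this, with placeholder dynamics rather than real aero data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linearized short-period model around a trim point (placeholder numbers)
A = np.array([[-0.5,  1.0],
              [-2.0, -1.5]])
B = np.array([[0.0],
              [3.0]])
Q = np.eye(2)            # state penalty
R = np.array([[1.0]])    # control penalty

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P            # optimal feedback u = -K x in this slice
```

Re-solving for K at each operating point, and scheduling between the gains, is what stitches the local solutions back into a controller for the full non-linear envelope.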
Here's a project where I was brought in as a consultant to write an application to control a 2DOF stage that moved a surface under a Laser Doppler Vibrometer (LDV) sensor to take accurate vibrational measurements. The code communicated with two stepper motors to precisely move the stage over every x,y positional combination while collecting corresponding data from an NI DAQ connected to the LDV sensor. After performing some FFT calculations, the software outputs the results to a local file for further data processing.
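The scan loop boiled down to something like this sketch (the stage and DAQ objects are hypothetical stand-ins, not the real stepper or NI APIs):

```python
import numpy as np

def scan(stage, daq, xs, ys, n_samples, fs):
    """Raster over the x,y grid, sampling the LDV and storing FFT magnitudes."""
    results = {}
    for x in xs:
        for y in ys:
            stage.move_to(x, y)                      # hypothetical stepper call
            signal = daq.read(n_samples)             # hypothetical DAQ read
            results[(x, y)] = np.abs(np.fft.rfft(signal))  # one-sided spectrum
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)   # frequency axis for the file
    return freqs, results
```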
My general GitHub profile, with past projects and the source code for these projects.