Research
I have many research interests. I’ve been interested in operating
systems and hypervisors for a long time, as well as computer architectures.
My professional work now as a researcher at the Johns Hopkins University
Applied Physics Lab centers around advanced techniques for computer security.
However, most of my current research interest centers on machine
learning and how it can be applied in new and interesting fields.
Machine Learning
- Neural Peripheral Device Modeling: I’m currently researching the use of deep recurrent neural networks to model the state machines of unknown computer peripheral devices. This approach can model more complex devices than traditional black-box learning approaches because the network learns a functional approximation of the device (a brief sketch appears after this list). The first part of this ongoing work will be presented at IJCNN 2018.
- Architecture Classification: I’ve collected a dataset of object code from many different architectures and have shown that machine learning techniques can be used to accurately classify unknown chunks of object code (a sketch of the idea follows this list). This work led to a conference paper at DFRWS 2015.
- Frequent Subgraph Mining for Image Classification: The hypothesis of this work is that natural images of objects can be converted into planar graphs, and that frequent subgraph mining over those graphs can then be used to classify the objects (see the graph-construction sketch after this list).
- Implementations of gSpan: I’ve written a few implementations of gSpan, a leading frequent subgraph mining algorithm, including one in Python and one in C. The goal of these implementations is less about performance and more about teaching how gSpan works (a toy example of its DFS codes follows this list).
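
To give a flavor of the peripheral-modeling item above, here is a minimal sketch, assuming PyTorch, of a recurrent network that maps a trace of encoded register accesses to predicted device responses. The encoding, dimensions, and names are illustrative only, not the model from the IJCNN paper.

```python
# Minimal sketch (not the published model): an LSTM that maps a sequence of
# encoded device register accesses to predicted response values.
import torch
import torch.nn as nn

class DeviceModel(nn.Module):
    """Approximates a peripheral's hidden state machine from I/O traces."""
    def __init__(self, n_inputs, n_outputs, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, accesses):
        # accesses: (batch, seq_len, n_inputs) encoded register reads/writes
        states, _ = self.rnn(accesses)
        return self.head(states)  # predicted response at every step

# Hypothetical usage: 16-dim encoded accesses, 8-dim predicted responses
model = DeviceModel(n_inputs=16, n_outputs=8)
trace = torch.randn(4, 100, 16)   # a batch of 4 traces, 100 accesses each
predicted = model(trace)          # shape: (4, 100, 8)
```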
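The architecture-classification idea can be sketched with simple byte-level features and an off-the-shelf classifier. This assumes scikit-learn and a hypothetical list of (bytes, label) samples; it is not the feature set or pipeline from the DFRWS paper.

```python
# Illustrative sketch: classify chunks of object code by instruction-set
# architecture using byte-frequency features and a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_histogram(blob: bytes) -> np.ndarray:
    """Normalized frequency of each byte value in a chunk of object code."""
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / max(len(blob), 1)

def train(samples):
    # samples: hypothetical list of (raw_bytes, architecture_label) pairs
    X = np.stack([byte_histogram(b) for b, _ in samples])
    y = [label for _, label in samples]
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(X, y)
    return clf

def classify(clf, blob: bytes) -> str:
    """Predict the architecture label of an unknown chunk of code."""
    return clf.predict(byte_histogram(blob).reshape(1, -1))[0]
```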
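For the subgraph-mining work, the graph-construction step might look roughly like the following, assuming NumPy and NetworkX: pixels become labeled nodes and 4-neighbor adjacency becomes edges, yielding a planar grid graph whose labeled substructures a miner such as gSpan can search. The quantization and labels here are placeholders, not the actual pipeline.

```python
# Illustrative sketch of turning an image into a labeled planar graph.
import numpy as np
import networkx as nx

def image_to_planar_graph(gray: np.ndarray, levels: int = 4) -> nx.Graph:
    """Nodes are pixels labeled by quantized intensity; edges connect
    4-neighbors, so the result is a planar grid graph."""
    q = (gray.astype(float) / 256.0 * levels).astype(int)  # quantize intensities
    h, w = q.shape
    g = nx.Graph()
    for r in range(h):
        for c in range(w):
            g.add_node((r, c), label=int(q[r, c]))
            if r > 0:
                g.add_edge((r, c), (r - 1, c))
            if c > 0:
                g.add_edge((r, c), (r, c - 1))
    return g

# Hypothetical usage on a tiny 8x8 grayscale patch
patch = np.random.randint(0, 256, size=(8, 8))
graph = image_to_planar_graph(patch)
```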
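And for gSpan itself, the central object is the DFS code: each edge visited during a depth-first traversal is recorded as a 5-tuple (i, j, vertex label, edge label, vertex label), and codes are compared lexicographically so every pattern has a canonical (minimum) form. The tiny labeled triangle below is made up purely to illustrate the encoding; gSpan's full ordering and growth rules are simplified away here.

```python
# Toy illustration of gSpan's DFS codes (not a full miner).
# A labeled triangle: two "A" vertices, one "B" vertex, edge labels "x"/"y".
vertex_labels = {0: "A", 1: "A", 2: "B"}
edge_labels = {frozenset({0, 1}): "x",
               frozenset({1, 2}): "y",
               frozenset({0, 2}): "y"}

# One depth-first traversal starting at vertex 0: 0 -> 1 -> 2, then the
# backward edge 2 -> 0 closes the cycle. Written as a DFS code:
dfs_code = [
    (0, 1, "A", "x", "A"),   # forward edge: discover vertex 1 from vertex 0
    (1, 2, "A", "y", "B"),   # forward edge: discover vertex 2 from vertex 1
    (2, 0, "B", "y", "A"),   # backward edge: return to an already-seen vertex
]

# The same triangle traversed starting from the "B" vertex instead:
other_code = [
    (0, 1, "B", "y", "A"),
    (1, 2, "A", "x", "A"),
    (2, 0, "A", "y", "B"),
]

# Python's tuple/list comparison is lexicographic, which is the order gSpan
# uses when checking whether a pattern's code is the canonical (minimum) one.
print(dfs_code < other_code)   # True: dfs_code sorts earlier
```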
Computer Security
- Maat: A Platform Service for Measurement and Attestation (arXiv)
- [Presentation]: Trustworthy Runtime Verification on Resource-constrained Platforms (ARM TechCon 2017)
- [Poster]: Extending Trust and Attestation to the Edge (IEEE/ACM SEC 2016)
- [Demo]: IoTA: IoT Assurance (accepted, IEEE HOST 2017)
OS / Linux Research
- Asymmetric Multi-processing for Linux: An early attempt at asymmetric multiprocessing on Linux.