Spotlight Projects
Affectiva
In 2017 and 2018, I spent twelve weeks living independently in Boston, New York, and abroad, programming for Affectiva, an emotion-measurement company spun out of MIT’s Media Lab and backed by Kleiner Perkins. After a few weeks on the job, I found myself on the research team, working increasingly with the speech research lead. Under her tutelage, I built a video dataset, created a deep-learning model, and designed a web demo. The following year, I worked on a method for determining heart rate, a stress indicator, from video alone.
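For a flavor of how video-only heart-rate estimation works in general, here is a minimal sketch of the classic green-channel approach (an illustrative reconstruction, not Affectiva's method; the estimate_bpm helper and its inputs are hypothetical):

```python
# Hypothetical sketch of green-channel remote photoplethysmography:
# average the green channel over a face region per frame, band-pass
# around plausible pulse rates, and read the dominant frequency as BPM.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(green_means: np.ndarray, fps: float) -> float:
    """green_means: per-frame mean green intensity over the face region."""
    signal = green_means - green_means.mean()            # remove DC offset
    # Band-pass 0.7-4.0 Hz (~42-240 BPM), the physiological pulse range.
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    peak_hz = freqs[np.argmax(spectrum)]                 # dominant pulse frequency
    return peak_hz * 60.0                                # Hz to beats per minute
```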
The project was an Emotion Speech API that analyzes a pre-recorded audio segment to identify emotion events and gender. The API tracks changes in speech paralinguistics (tone, loudness, tempo, and voice quality) to distinguish speech events, emotions, and gender. Our research had two objectives: an emotion classification model and a laughter regression model. Ultimately, the laughter model was featured at the MIT Media Lab’s 2017 Emotion AI Summit, was picked up by TechCrunch and NPR, and will result in a patent with me as a named inventor.
My joint patent with Affectiva on avatar animation using auto-encoder latent layers: Patent Swarm
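To give a sense of the paralinguistic cues involved, here is a hedged sketch of extracting pitch, loudness, tempo, and timbre statistics with librosa (illustrative only; the paralinguistic_features helper is hypothetical and not part of the API):

```python
# Illustrative paralinguistic feature extraction, not Affectiva's internals.
import librosa
import numpy as np

def paralinguistic_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # pitch track (tone)
    rms = librosa.feature.rms(y=y)[0]                   # frame energy (loudness)
    tempo = librosa.beat.tempo(y=y, sr=sr)[0]           # rough tempo proxy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # voice-quality timbre
    return np.hstack([
        f0.mean(), f0.std(),                            # pitch statistics
        rms.mean(), rms.std(),                          # loudness statistics
        tempo,
        mfcc.mean(axis=1),                              # timbre summary
    ])
```

A fixed-length vector like this is what a downstream emotion classifier or laughter regressor would consume per audio segment.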
StudyMuse
In 2019, I worked in Krishnaswamy Labs on StudyMuse, the name I use for my efforts in generative music. An early Markov model won the New England High School Hackathon. Since then, I've pivoted to deep neural networks to model the intricate structure of the music. Stay tuned for an ICML paper in the 2019-2020 year. The first nine seconds of the clip below are a Bach chorale seed, followed by music generated by my architecture.
[Audio clip: Bach chorale seed followed by generated music]
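For flavor, here is a minimal note-level Markov chain in the spirit of that early model (a hypothetical reconstruction; the C-major phrase is made up):

```python
# A tiny first-order Markov model over notes: count transitions from a
# training melody, then random-walk the transition table from a seed.
import random
from collections import defaultdict

def train(notes):
    """Count note-to-note transitions in a training melody."""
    transitions = defaultdict(list)
    for prev, nxt in zip(notes, notes[1:]):
        transitions[prev].append(nxt)
    return transitions

def generate(transitions, seed, length=32):
    """Sample a melody by walking the transition table from a seed note."""
    melody = [seed]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:                      # dead end: restart from the seed
            choices = [seed]
        melody.append(random.choice(choices))
    return melody

# Example with MIDI pitches from a C-major phrase:
phrase = [60, 62, 64, 65, 67, 65, 64, 62, 60]
model = train(phrase)
print(generate(model, seed=60))
```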
Planet Detection
In partnership with the Center for Astrophysics at Harvard, I created a convolutional neural network (CNN) discriminator to find tell-tale signs of planets in the Kepler K2 dataset. For my astrophysics research class, I presented several novel planet candidates.
GitHub: link
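A rough sketch of what such a discriminator can look like as a 1-D CNN over folded light curves (the PyTorch architecture below is illustrative, not the project's exact network):

```python
# Hypothetical 1-D CNN that maps a folded light curve to a planet/not-planet logit.
import torch
import torch.nn as nn

class TransitCNN(nn.Module):
    def __init__(self, n_points: int = 2001):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_points // 16), 1),  # single logit per light curve
        )

    def forward(self, flux):                      # flux: (batch, 1, n_points)
        return self.classifier(self.features(flux))

model = TransitCNN()
logit = model(torch.randn(8, 1, 2001))            # 8 folded light curves
```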
Peabody Notecards
The Peabody Institute of Archaeology had thousands of unorganized notecards from hundreds of archaeological digs around the world. I led the effort to convert them into a readable format using state-of-the-art optical character recognition (OCR). From there, we created a simple, searchable interface the museum uses to quickly sort through and locate its items.
GitHub: link link
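A minimal sketch of the pipeline's shape, assuming Tesseract for OCR and a naive in-memory index (the file paths and search helper are hypothetical):

```python
# OCR each scanned notecard, then build a trivially searchable index.
import glob
from PIL import Image
import pytesseract

index = {}
for path in glob.glob("scans/*.png"):
    text = pytesseract.image_to_string(Image.open(path))  # OCR one card
    index[path] = text.lower()

def search(query: str):
    """Return the cards whose OCR'd text mentions the query."""
    return [card for card, text in index.items() if query.lower() in text]

print(search("obsidian"))
```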
Public Mentions
My patent on avatar animation using auto-encoder latent layers: Patent Swarm
Mentions of the laughter detection and speech models I worked on: Businesswire Affectiva.com
Articles written after winning HackNEHS in 2016 and 2017: Phillipian Phillipian
Written after building an autonomous drone in the Makerspace: Phillipian