Hi there! I'm David (He/Him)

I'm passionate about making quality tools that empower AI practitioners and generative artists.
I'm an ML research engineer at stability.ai and eleuther.ai. Functionally, I'm an independent open source developer operating under a general mandate to "do cool shit" as long as it falls under the broad scope of contributing to open source ML.
Creator/maintainer/curator of https://github.com/pytti-tools
Some projects I'm currently or recently involved with:
- Developing tools for parameterizing complex, multi-scene animation sequences
- Inventing a technique for generating multi-scene music videos from audio
- Building state-of-the-art AI animation tools and techniques
- Working with electronic musicians and VJs to advance audio-reactive animation research
- Maintaining and extending pytti-tools
- Building a library that simplifies working with pre-trained CLIP-like models
- Building a library for organizing and managing messy research code
- Working with researchers on data collection tools that turn hackathon activity into training data for code-generating language models
- Implementing notebooks that make pre-trained research models, including FiLM and Blended Diffusion, accessible to AI artists
- Designing a forthcoming AI art model
- Shepherding the launch of the Stable Diffusion API and its accompanying infrastructure and tooling (public SDK, CI/CD, etc.) as engineering lead through the launch of the DreamStudio product
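To give a flavor of the "working with pre-trained CLIP-like models" bullet above: the computation all such models share is scoring images against text prompts via cosine similarity of their embeddings. This is a generic, self-contained sketch of that scoring step (not code from pytti-tools; the function name and toy embeddings are illustrative):

```python
import numpy as np

def clip_similarity(image_emb, text_emb, temperature=0.07):
    """Score images against text prompts the way CLIP-style models do."""
    # L2-normalize each embedding so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    # Scaled similarity matrix: one row per image, one column per prompt
    logits = image_emb @ text_emb.T / temperature
    # Softmax over prompts yields a probability per (image, prompt) pair
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

# Toy example: 2 images, 3 text prompts, 4-dim embeddings
rng = np.random.default_rng(0)
probs = clip_similarity(rng.normal(size=(2, 4)), rng.normal(size=(3, 4)))
print(probs.shape)  # (2, 3); each row sums to 1
```

In a real model the embeddings come from the image and text encoders, and the temperature is a learned parameter; everything downstream of that is the matrix above.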
Broad Research Interests
- Text guided image synthesis
- AI-assisted animation
- Representation learning
  - contrastive
  - semi-supervised
  - adversarial
  - composable
  - multi-modal
- Application of topological and geometric methods to machine learning
- Generative models
- Inductive priors
- Learning theory
Current Areas of Research Focus
- Modeling with implicit representations and operators
- Composable representations
- Latent-space manipulation
- Scale-agnostic learning
- Artistic applications of multi-modal generative models







