About the Role
We’re looking for a PhD student with a strong background in machine learning to explore new training strategies for building leaner, more interpretable neural network models. You'll work with us on Graybox, a research toolchain designed for fine-grained intervention in model training, including neuron freezing, pruning, and progressive expansion.
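To give a flavour of the kind of intervention involved, here is a minimal sketch of freezing a subset of neurons by masking their gradients in PyTorch. This is purely illustrative and is not the Graybox API; the layer sizes and frozen indices are arbitrary assumptions.

import torch
import torch.nn as nn

layer = nn.Linear(16, 8)
frozen = torch.tensor([0, 3, 5])  # hypothetical output-neuron indices to freeze

def mask_grad(grad):
    # Zero the gradient rows (or bias entries) of the frozen neurons
    grad = grad.clone()
    grad[frozen] = 0.0
    return grad

layer.weight.register_hook(mask_grad)
layer.bias.register_hook(mask_grad)

x = torch.randn(4, 16)
loss = layer(x).pow(2).mean()
loss.backward()
print(layer.weight.grad[frozen].abs().sum())  # tensor(0.): frozen neurons stop updating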
This opportunity is ideal for someone who enjoys blending theory with experimental practice and wants to help define more principled ways to grow and structure models, rather than just training them blindly.
What You’ll Work On
Explore progressive model growth strategies: starting from minimal architectures and selectively expanding capacity during training (see the sketch after this list)
Experiment with controlled neuron activation, using curated input subsets to guide learning
Design and test pruning or slimming strategies that preserve performance while improving interpretability and efficiency
Analyze redundancy and generalization in learned representations
Deliver short internal write-ups or summaries that capture your findings
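As a flavour of the first item above, here is a minimal sketch of function-preserving width growth for a small MLP: train a narrow model, then widen its hidden layer mid-training by copying existing weights and appending fresh units. The widths, the zero-initialised new connections, and the two-layer architecture are illustrative assumptions, not a prescribed method.

import torch
import torch.nn as nn

def widen(mlp, new_hidden):
    """Return a wider copy of a Linear->ReLU->Linear MLP, reusing old weights."""
    fc1, fc2 = mlp[0], mlp[2]
    old_hidden = fc1.out_features
    new_fc1 = nn.Linear(fc1.in_features, new_hidden)
    new_fc2 = nn.Linear(new_hidden, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight[:old_hidden] = fc1.weight
        new_fc1.bias[:old_hidden] = fc1.bias
        new_fc2.weight[:, :old_hidden] = fc2.weight
        new_fc2.bias.copy_(fc2.bias)
        # Outgoing weights of the new units start at zero, so the expansion
        # leaves the network's function unchanged at the moment of growth.
        new_fc2.weight[:, old_hidden:] = 0.0
    return nn.Sequential(new_fc1, nn.ReLU(), new_fc2)

model = nn.Sequential(nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2))
x = torch.randn(3, 10)
before = model(x)
model = widen(model, 16)              # expand hidden width 4 -> 16
after = model(x)
print(torch.allclose(before, after))  # True: growth is function-preserving

In practice the interesting research questions are when to grow, which units to add, and how to initialise them; the zero-init choice above is just one simple way to keep the expansion non-disruptive.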