Neurobus develops cutting-edge vision systems that leverage neuromorphic technologies to enhance the intelligence and efficiency of embedded devices and robots in the Space and Defense sectors.
We are opening an internship focused on deep learning for computer vision, with an emphasis on neuromorphic-inspired model architectures designed for efficient edge deployment. The core objective is to implement and evaluate a next-generation transformer-style vision model that operates on discrete, relative spike-timing representations and supports efficient scaling to higher-resolution inputs.
You will contribute to building a research-grade prototype in PyTorch, validating it on standard vision benchmarks, and iterating toward architectures that offer favorable trade-offs between accuracy, latency, memory bandwidth, and compute. Because parts of the approach are not widely supported in existing deep learning libraries, this internship involves implementing key components from scratch, including custom forward/backward passes when necessary.
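To give a flavor of what "implementing custom forward/backward passes" can look like in PyTorch, here is a minimal, illustrative sketch of a spiking threshold implemented with `torch.autograd.Function`: the forward pass is a hard (non-differentiable) step, and the backward pass substitutes a smooth surrogate gradient. The specific surrogate (a fast-sigmoid derivative) and its slope constant are illustrative assumptions, not Neurobus's actual method.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass.

    Illustrative sketch only: the surrogate choice and slope are assumptions.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Hard threshold: emit a spike (1.0) wherever the input is positive.
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Surrogate gradient: derivative of a fast sigmoid, a smooth stand-in
        # for the step function's zero-almost-everywhere true derivative.
        surrogate = 1.0 / (1.0 + 10.0 * x.abs()) ** 2
        return grad_output * surrogate

spike = SpikeFn.apply
```

Patterns like this let gradient-based training flow through discrete, event-like operations that standard autograd cannot differentiate.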
As a Deep Learning Computer Vision Intern at Neurobus, you will:
Implement core building blocks of an efficient, transformer-style vision model in PyTorch, including components that rely on discrete/event-like computations.
Establish training and evaluation baselines on standard computer vision tasks (e.g., image classification, object detection), demonstrating stable learning and reproducible results.
Extend the architecture with multi-scale or hierarchical processing to improve efficiency and scalability for larger images and higher token counts.
Benchmark performance against strong modern baselines, with attention to both model quality and efficiency metrics (runtime, memory, throughput).
Investigate positional and spatial representation strategies suited to discrete/event-like processing and assess their effect on training stability and accuracy.
Perform systematic ablation studies across key architectural and hyperparameter choices (e.g., depth, width, attention configuration, comparison/lookup mechanisms), and quantify impacts on compute, memory, and accuracy.
Explore regularization and robustness techniques tailored to discrete and lookup-based model components, and evaluate their benefits across tasks.
Document implementations, experiments, and results, and present progress updates to the team.
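As a sketch of the kind of efficiency benchmarking involved, the helper below measures average forward-pass latency and throughput for a PyTorch model. It is a minimal, assumed harness (CPU wall-clock timing; on GPU you would additionally synchronize and track peak memory via `torch.cuda.max_memory_allocated`), not a prescribed tool.

```python
import time
import torch

def benchmark(model, x, n_warmup=3, n_iters=10):
    """Rough forward-pass latency/throughput measurement (illustrative sketch).

    Uses CPU wall-clock timing; warm-up iterations amortize one-time costs
    such as lazy initialization before the timed loop.
    """
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):
            model(x)
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        elapsed = time.perf_counter() - start
    latency = elapsed / n_iters
    return {
        "latency_s": latency,
        "throughput_imgs_per_s": x.shape[0] / latency,
    }
```

Running this across architectural variants (depth, width, attention configuration) is one simple way to pair the accuracy numbers from ablations with the runtime side of the trade-off.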