**Role overview**
Do you think Computer Vision and Machine Learning can change the world? Do you think they can transform the way millions of people collect, discover, and share the most special moments of their lives? We truly believe they can! The System Intelligence Machine Learning (SIML) organization is looking for a Machine Learning Research Engineer with a strong foundation in Machine Learning and Computer Vision to develop next-generation multimodal human sensing technologies. We work at the core of Apple Intelligence, across modalities such as vision, language, gesture, gaze, and touch, to enable highly intuitive and personalized intelligence experiences across the Apple ecosystem.
You will be part of a fast-paced, impact-driven Applied Research organization working on cutting-edge machine learning that is at the heart of the most loved features on Apple platforms, including Apple Intelligence, Camera, Photos, Visual Intelligence, and more. Come join the team building the next generation of Apple Intelligence!
SELECTED REFERENCES TO OUR TEAM’S WORK:
https://www.youtube.com/watch?v=GGMhQkHCjxo&t=255s
https://support.apple.com/guide/iphone/use-visual-intelligence-iph12eb1545e/ios
**Description**
This role requires experience with vision-language models and the ability to fine-tune, adapt, and distill multimodal LLMs (a minimal illustrative sketch of this adaptation pattern follows the responsibilities list below). As a Machine Learning Research Engineer, you will help design and develop models and algorithms for multimodal perception and reasoning, leveraging Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs). You will collaborate with expert researchers and engineers to explore new techniques, evaluate performance, and translate product needs into impactful ML solutions. Your work will contribute directly to user-facing features across billions of devices.
**Responsibilities**
Contribute to the development and adaptation of AI/ML models for multimodal perception and reasoning
Develop robust algorithms that integrate visual and language data for comprehensive understanding
Collaborate closely with multi-functional teams to translate product requirements into effective ML solutions
Conduct hands-on experimentation, model training, and performance analysis
Communicate research outcomes effectively to technical and non-technical stakeholders, providing actionable insights
Stay current with emerging methods in VLMs, MLLMs, and related areas
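As a concrete illustration of the adaptation pattern referenced in the description above, here is a minimal, self-contained PyTorch sketch of one common recipe: freeze pretrained vision and language backbones and train only a small connector layer between them. This is illustrative only, not Apple code; `TinyVisionEncoder`, `TinyLanguageModel`, the adapter, and all dimensions are hypothetical stand-ins for real pretrained components.

```python
# Illustrative sketch only (not Apple code): adapting a multimodal model by
# freezing pretrained backbones and training a small connector layer.
# TinyVisionEncoder and TinyLanguageModel are hypothetical stand-ins for
# real pretrained components; dimensions and data are dummy values.
import torch
import torch.nn as nn

class TinyVisionEncoder(nn.Module):
    """Stand-in for a frozen pretrained image encoder (e.g., a ViT)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))

    def forward(self, images):
        return self.net(images)  # (B, dim) image embedding

class TinyLanguageModel(nn.Module):
    """Stand-in for a frozen pretrained language model."""
    def __init__(self, dim=64, vocab=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, prefix, token_ids):
        h = self.embed(token_ids) + prefix.unsqueeze(1)  # condition text on image
        return self.head(h)  # (B, T, vocab) next-token logits

vision, lm = TinyVisionEncoder(), TinyLanguageModel()
for p in list(vision.parameters()) + list(lm.parameters()):
    p.requires_grad = False  # freeze both pretrained backbones

adapter = nn.Linear(64, 64)  # the only trainable piece
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

images = torch.randn(8, 3, 32, 32)       # dummy image batch
tokens = torch.randint(0, 100, (8, 12))  # dummy caption tokens
logits = lm(adapter(vision(images)), tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 100), tokens[:, 1:].reshape(-1))
loss.backward()  # gradients flow only into the adapter
opt.step()
print(f"adapter-only training step done, loss={loss.item():.3f}")
```

In practice the same freeze-and-adapt idea scales up to parameter-efficient methods such as LoRA, where the frozen backbones are large pretrained VLM components rather than toy modules.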
**Preferred Qualifications**
Proven track record of research contributions demonstrated through publications in top-tier conferences and journals
Background in multimodal reasoning, VLM, and MLLM research, with a record of impactful software projects
Solid understanding of natural language processing (NLP) and computer vision fundamentals
**Minimum Qualifications**
Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field, or equivalent proven experience
Proficiency in Python and deep learning frameworks such as PyTorch
Practical experience with training and evaluating neural networks (see the sketch after this list)
Familiarity with multimodal learning, vision-language models, or large language models
Strong problem-solving skills and the ability to work in a collaborative, product-focused environment
Ability to communicate technical results clearly and concisely
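As a concrete reference for the training-and-evaluation qualification above, the following is a minimal, self-contained PyTorch train/eval loop on synthetic data; the model, dataset, and hyperparameters are illustrative placeholders, not a prescribed setup.

```python
# Illustrative sketch only: a minimal supervised train/evaluate loop in
# PyTorch. Model, synthetic data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))  # synthetic data
train_dl = DataLoader(TensorDataset(X[:400], y[:400]), batch_size=32, shuffle=True)
val_dl = DataLoader(TensorDataset(X[400:], y[400:]), batch_size=32)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    model.train()
    for xb, yb in train_dl:              # standard supervised training step
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

    model.eval()
    correct, total = 0, 0
    with torch.no_grad():                # evaluation: no gradient tracking
        for xb, yb in val_dl:
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
            total += yb.numel()
    print(f"epoch {epoch}: val accuracy = {correct / total:.2f}")
```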
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.