Director of AI

Los Angeles / $230,000 - $400,000 per annum

INFO

SALARY
$230,000 - $400,000

LOCATION
Los Angeles

JOB TYPE
Permanent

Senior AI/ML Engineer - Foundation Models & Agentic AI
Life Sciences AI | Biotech / Therapeutics / Computational Biology



What You'll Work On

Foundation Models for Multi-Omics

  • Develop and train proprietary foundation models on multi-omics data (genomics, transcriptomics, proteomics, metabolomics)
  • Design novel algorithms to integrate and differentiate across omics modalities
  • Own the full training pipeline: data ingestion, tokenization, pretraining, and evaluation
  • Push the state of the art - we are building models that do not exist yet

Agentic AI for Life Sciences

  • Build and deploy AI agents that assist researchers across therapeutic development, computational chemistry, and biology
  • Design multi-agent architectures using LangChain, LangGraph, and custom orchestration layers
  • Integrate agents with internal databases, experimental systems, and third-party scientific tools
  • Develop agentic workflows for use cases in cosmetics R&D, drug discovery, and clinical analysis

Production ML Infrastructure

  • Own model deployment, scaling, and serving infrastructure
  • Optimize inference using NVIDIA TensorRT-LLM, Dynamo, Triton, and related tooling
  • Partner with software engineering on integration into our broader platform
  • Maintain reliability, performance, and observability across deployed models

What We're Looking For

Required

  • Strong hands-on ML/AI engineering experience - you write real code, not just direct others
  • Deep proficiency in Python and PyTorch - this is non-negotiable
  • Experience training, fine-tuning, and deploying foundation models from scratch or from pretrained checkpoints
  • Familiarity with the Hugging Face ecosystem (Transformers, Datasets, PEFT, Accelerate)
  • Experience with LangChain and/or LangGraph for agentic pipeline development
  • Working knowledge of the NVIDIA stack: TensorRT-LLM, Triton Inference Server, Dynamo
  • Comfort with large-scale training tooling: DeepSpeed for distributed training and ZeRO optimization, xFormers for memory-efficient attention
  • Strong understanding of state-of-the-art model architectures (transformers, SSMs, diffusion, etc.)
  • Ability to collaborate across technical and non-technical teams - bio, chem, software, clinical
  • Excellent communication skills - you can explain a training run to a chemist

Strongly Preferred

  • Experience with omics data (any modality: genomic, proteomic, transcriptomic, metabolomic)
  • Background in computational biology, computational chemistry, or bioinformatics
  • Prior work in biotech, pharma, or life sciences AI
  • Experience building or contributing to multi-agent systems in a production environment
  • Track record of independent research or open-source contributions in ML

Technical Skills Summary

Core ML / Training

  • PyTorch - primary framework
  • DeepSpeed - distributed training and ZeRO optimization
  • xFormers - memory-efficient attention and transformer components
  • Hugging Face Transformers, PEFT, Accelerate
  • Training and fine-tuning large language and multimodal models
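To give a flavour of the parameter-efficient fine-tuning mentioned above: PEFT's LoRA adapts a frozen pretrained weight W with a trainable low-rank update (alpha / r) * B @ A. The sketch below is a toy numpy illustration of that idea only, not the PEFT API; all names and shapes are illustrative.

```python
import numpy as np

# Toy LoRA sketch: frozen base weight W plus a low-rank update
# (alpha / r) * B @ A; only A and B would be trained.
def lora_forward(x, W, A, B, alpha=16):
    r = A.shape[0]                      # adapter rank
    delta = (alpha / r) * (B @ A)       # low-rank weight update
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

x = rng.normal(size=(1, d_in))
# With B zero-initialised, the adapter is a no-op before training starts.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Zero-initialising B is what lets the adapted model start out identical to the base model, which is the same design choice PEFT's `LoraConfig` defaults encode.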

Inference & Deployment

  • NVIDIA TensorRT-LLM - high-performance LLM inference
  • NVIDIA Dynamo - inference orchestration and scheduling
  • Triton Inference Server - model serving and batching
  • Model quantization, distillation, and latency optimization
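As a concrete instance of the quantization work listed above: the simplest post-training scheme maps a float tensor to int8 with a single symmetric scale, which is the conceptual core that tools like TensorRT-LLM build on (their real pipelines add per-channel scales and calibration). A minimal numpy sketch, with illustrative names:

```python
import numpy as np

# Toy symmetric int8 post-training quantization: one scale maps the
# max-magnitude float value to 127; dequantization recovers an
# approximation with error bounded by the step size.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
assert np.max(np.abs(w - w_hat)) < s  # error stays below one step
```

The 4x memory reduction (float32 to int8) is what drives the latency wins on memory-bound LLM inference.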

Agentic & Orchestration

  • LangChain / LangGraph - agent design and multi-agent orchestration
  • Tool-use, RAG pipelines, and memory systems
  • API and system integration across scientific data sources
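The tool-use pattern above reduces to a loop: the model emits a tool call, the runtime executes it and feeds the result back until the model answers. The sketch below shows only that dispatch pattern, which frameworks like LangChain/LangGraph formalize; the model is stubbed, and every name and value (including the molecular weight) is illustrative, not a real API.

```python
# Minimal tool-dispatch loop. A real agent would call an LLM here;
# fake_model is a deterministic stand-in for illustration only.
TOOLS = {
    "lookup_compound": lambda name: {"name": name, "mw": 180.16},
}

def fake_model(messages):
    # First turn: request a tool. Later turns: answer from the result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_compound", "args": {"name": "aspirin"}}
    result = next(m for m in messages if m["role"] == "tool")["content"]
    return {"answer": f"{result['name']} has molecular weight {result['mw']}"}

def run_agent(question, max_turns=4):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        step = fake_model(messages)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])   # execute tool call
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(run_agent("What is the molecular weight of aspirin?"))
# → aspirin has molecular weight 180.16
```

Swapping the dict of lambdas for real scientific tools and databases, and the stub for an LLM with structured tool-calling, yields the integration work described in this role.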







CONTACT

Tim Lucas
Manager
CAN’T FIND THE RIGHT OPPORTUNITY?

If you can’t see what you’re looking for right now, send us your CV anyway – we’re always getting fresh new roles through the door.