Senior Data Platform Engineer

New York / $180,000 - $220,000 per annum

Salary: $180,000 - $220,000

Location: New York

Job Type: Permanent

Inspiren - Senior Data Platform Engineer


1. Role Overview

  • Title: Senior Data Platform Engineer
  • Department / Function: Engineering / Data Platform / Infrastructure
  • Reports To (Name / Title): Likely Engineering Leadership / Head of Data / Platform Lead / Aaron (Hiring Manager)
  • Level (IC / Manager / Director / Exec): Senior IC
  • Reason for Hire (Growth / Backfill / Transformation): Growth + Platform Transformation
  • Urgency / Timeline: High priority / active, immediate search

2. Compensation & Incentives

  • Base Salary Range: $180,000 - $200,000
  • Bonus (Structure / Target %): Likely discretionary / not primary lever
  • Equity (Yes/No + Details): Yes, meaningful startup equity likely included
  • Total Comp Range (if applicable): $200k+ depending on equity and experience
  • Flexibility on Comp: Moderate to strong for top-tier talent

3. Location & Work Environment

  • Primary Location: Remote (US or Canada), NYC preferred
  • Remote / Hybrid / Onsite (Days / Expectations): Fully remote with preference for proximity to NYC leadership hub
  • Time Zone Requirements: EST or overlap strongly preferred
  • Travel Requirements: Minimal; occasional team offsites likely
  • Relocation Offered (Yes/No): Likely case-by-case
  • Visa Sponsorship (Yes/No): Unknown / likely limited

4. Organization & Team Context

  • Company Overview (Product, Mission, Market):
    Inspiren builds AI-powered technology for senior living communities. Their platform blends real-time monitoring, analytics, workflow intelligence, and operational tools to improve resident outcomes, caregiver efficiency, and profitability.
  • Organizational Focus: AI, Data Infrastructure, Computer Vision, Healthcare Operations, Platform Scale
  • Funding Stage / Revenue / Growth:
    Series B growth-stage company with ~$155M total funding and strong recent momentum.
  • Team Size & Structure:
    Scaling engineering org with growing investments across data, ML, CV, and software engineering.
  • Cross-Functional Partners:
    Engineering, Product, Analytics, ML, Data Science, Operations, Leadership
  • Team Culture & Working Style:
    Fast-paced, mission-driven, high ownership, builder mentality, collaborative, pragmatic

5. Role Purpose

  • Core Problem This Role Solves:
    Build and scale the foundational data platform that powers analytics, ML systems, internal reporting, operational intelligence, and product decision-making.
  • Why This Role Matters Now:
    Company growth is accelerating and data has become mission-critical; mature infrastructure is needed to support scale, reliability, and AI expansion.
  • What Success Looks Like (6-12 months):
  1. Streaming ingestion layer modernized
  2. Databricks environment optimized for scale/cost
  3. Governance and RBAC strengthened
  4. Reusable tooling adopted by internal teams
  5. Reliable platform trusted across business units

6. Key Responsibilities

Core Ownership Areas

  • Databricks platform architecture
  • AWS data infrastructure
  • Streaming ingestion (Kafka / Kinesis)
  • Governance / RBAC / lineage
  • Cost optimization
  • Internal data tooling

Day-to-Day Responsibilities

  • Build pipelines and platform components
  • Optimize compute/storage spend
  • Partner with ML / analytics stakeholders
  • Troubleshoot reliability issues
  • Improve observability and standards

Short-Term Projects (First 3-6 months)

  • Assess current platform bottlenecks
  • Improve ingestion reliability
  • Implement quality frameworks
  • Tighten governance model

Long-Term Initiatives

  • Multi-year scalable data platform roadmap
  • ML / AI data enablement
  • Self-service analytics infrastructure
  • Best-in-class internal developer experience

Stakeholder Interaction

Frequent interaction with engineering, product, analytics, ML, and leadership.


7. Technical / Functional Requirements

Must-Haves

  • Strong Databricks hands-on experience
  • Modern lakehouse / warehouse expertise
  • AWS cloud experience
  • Pipeline architecture at scale
  • Governance / RBAC understanding
  • Strong Python / SQL
  • Strong communication skills

Nice-to-Haves

  • Kafka / Kinesis
  • Startup experience
  • Healthtech experience
  • AI tooling adoption (Cursor / Claude Code)

Tech Stack

  • Languages: Python, SQL
  • Frameworks: Spark / PySpark
  • Cloud / Infra: AWS, Databricks
  • Tools / Platforms: Kafka, Kinesis, Airflow, Terraform, monitoring stack

8. Ideal Candidate Profile

  • Years of Experience: 5-10+ years
  • Target Background:
    High-growth startups, healthcare tech, modern SaaS, data-heavy product companies, elite enterprise platform teams
  • Education Preferences: Strong technical foundation; degree helpful but not mandatory

Top Resume Signals

  1. Databricks ownership
  2. AWS production scale
  3. Streaming ingestion systems
  4. Governance / Unity Catalog / RBAC
  5. Cost optimization + stakeholder impact

Key Traits / Soft Skills

  • Ownership mentality
  • Strong communicator
  • Builder mindset
  • Comfortable in ambiguity
  • Pragmatic operator
  • Cross-functional collaborator

9. Red Flags / Non-Starters

  • Pure BI / reporting profile
  • No Databricks hands-on depth
  • No production scale systems
  • Weak communication
  • Highly siloed enterprise-only mindset
  • No ownership examples

10. Interview Process

  • Number of Stages: Likely 4-5
  • Interview Format: Recruiter screen, HM screen, technical deep dive, stakeholder rounds, final
  • Key Stakeholders Involved: Cameron, Aaron, Engineering leaders, cross-functional peers
  • Timeline: Fast-moving once a strong candidate is identified

Assessment Areas

  • Databricks depth
  • Platform architecture
  • Problem solving
  • Communication
  • Ownership / initiative
  • Startup adaptability

11. Hiring Criteria / Evaluation Framework

Core Competencies Being Assessed

  • Technical depth
  • System design
  • Communication
  • Leadership / ownership
  • Business thinking
  • Reliability mindset

Deal Breakers

  • Cannot operate autonomously
  • No hands-on architecture depth
  • Weak stakeholder presence
  • Overly theoretical profile

Nice Differentiators

  • Healthtech experience
  • AI/ML platform support experience
  • Elite company pedigree
  • Strong cost optimization track record

12. Selling Points / Why a Candidate Would Join

  • Mission-driven work improving elder care
  • Real-world AI impact
  • Strong funding and runway
  • High ownership role
  • Databricks + AWS modern stack
  • Ability to shape platform strategy early
  • Equity upside
  • Meaningful technical challenges

13. Additional Notes / Nuances

Need someone who can both build and influence. This is not a role for a passive ticket-taker. Strong preference for candidates who proactively create standards, tooling, and scalable systems.

CONTACT

Tim Greenwald

Senior Recruitment Consultant
