DATA ENGINEERING
FOCUS
Specialist Focus Areas
At Harnham, we specialize in the following key areas within Data Engineering:
- Analytics Engineering: Bridging the gap between data engineering and data science to enable sophisticated analytics.
- Cloud Engineering: Developing and managing scalable cloud-based data solutions.
- Data & DevOps: Integrating data management with DevOps practices to streamline workflows and enhance productivity.
- Data Architecture: Designing and implementing the overall data framework and architecture for your organization.
- Data Engineering & Big Data: Handling large-scale data processing and building data pipelines.
- Data Platform Engineering: Creating and managing data platforms that support analytics and data science operations.
- Data Product Management: Overseeing the development and management of data products.
- DevOps Engineering: Combining software development and IT operations to improve deployment and efficiency.
- Platform Engineering: Building and maintaining the platforms that support data operations.
- Software Engineering: Developing software solutions that enhance data engineering capabilities.
JOBS
LATEST Data Engineering OPPORTUNITIES
With over 17 years of experience, Harnham has established itself as the leading global authority in Data and AI Recruitment.
Sr. Fullstack Software Engineer
New York
$170,000 - $190,000
+ Data Engineering
Permanent | New York
Sr. Fullstack Engineer
Hybrid (NYC)
$170k – $190k Base + bonus & equity
Overview
A global, high‑growth technology company operating at the intersection of financial technology, consumer platforms, and digital security is expanding its engineering leadership team. The organisation builds large‑scale, consumer‑facing platforms used by hundreds of millions of users worldwide, with a strong focus on quality, long‑term maintainability, and responsible AI‑assisted development.
This role sits within a financial products marketplace that partners with major financial institutions to deliver personalised consumer experiences. The position is ideal for a senior‑level engineer who values engineering judgement, thoughtful architecture, and technical leadership over speed alone.
Responsibilities
- Own and uphold a high engineering quality bar across design reviews, code reviews, testing strategy, and production readiness
- Collaborate closely with product, design, and engineering partners to shape solutions and evaluate technical trade‑offs
- Architect and build frontend‑leaning, full‑stack systems that are modular, composable, and resilient to change
- Champion performance, reliability, and maintainability in AI‑assisted development environments
- Apply functional programming principles and immutability to reduce side effects and improve system clarity
- Lead the effective use of AI agents for prototyping, refactoring, and exploration while ensuring production‑grade outcomes
- Drive a robust automated testing strategy across unit, integration, UI/component, and end‑to‑end layers
- Provide technical leadership through mentorship, delegation, and high‑quality code reviews
Must‑Have Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 8+ years of experience building production web applications, including senior‑level ownership
- Prior experience operating at principal, lead, or staff engineer level
- Expert proficiency with TypeScript and React, including server‑side rendered frameworks such as Next.js
- Strong understanding of functional programming concepts and immutability in modern frontend systems
- Advanced knowledge of automated testing strategies and CI‑driven feedback loops

To Apply for this Job Click Here
Staff Data Engineer
San Francisco
$220,000 - $300,000
+ Data Engineering
Permanent | San Francisco, California
Staff Data Engineer
San Francisco, CA
$240K-$300K base + equity
We’re partnered with one of the most recognized developer platforms in the world, trusted by engineering teams at OpenAI, Meta, Netflix, and Adobe. The company is scaling its data platform and looking to hire a Staff Data Engineer to own the pipelines and foundations at the core of its data ecosystem.
You’ll own the full pipeline lifecycle, from ingestion architecture through to analytics-ready data, working closely with Data Platform Engineers and partnering with analysts and data scientists who depend on what you build every day.
What You’ll Do
- Design and build scalable ingestion pipelines and orchestration frameworks across structured, semi-structured, and event-based sources
- Own reliability, observability, and performance across the full pipeline lifecycle from raw ingestion to analytics-ready delivery
- Build and maintain dbt transformation pipelines serving as the single source of truth across Finance, Product, GTM, and Engineering
- Ensure revenue and billing data meets the accuracy and auditability required for public company reporting, including SOX compliance
- Apply software engineering principles throughout: CI/CD, testing, observability, version control, and automation
- Enable self-serve analytics through semantic layer development, strong abstractions, and clear documentation
- Champion data quality and governance across classification, ownership, access policies, and data lifecycle management
What We’re Looking For
- 8+ years in data engineering or a hybrid data/analytics engineering role, with a track record of owning pipelines end-to-end in high-growth or enterprise environments
- Advanced SQL and dbt (Core or Cloud), Snowflake or comparable cloud data warehouse, Python, and Airflow
- Deep experience with Kafka and streaming data systems, with strong working knowledge of ClickHouse, Iceberg, or similar technologies
- Experience with ingestion tools such as Fivetran, Airbyte, or Polytomic
- Cloud-native architecture expertise across AWS, GCP, or Azure
- Experience designing and scaling data systems to support petabyte-level workloads
- Track record of building data infrastructure that meets compliance and governance requirements, including SOX or equivalent
- Strong communication and collaboration skills, with the ability to operate as a senior technical voice across engineering and business stakeholders
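A central theme of this role is making revenue and billing data auditable enough for public-company reporting. As a purely illustrative sketch (the function, field names, and tolerance are hypothetical, not the company's actual controls), the kind of automated reconciliation check this implies can be expressed in a few lines of Python, using `Decimal` rather than floats so that rounding drift cannot creep into an audit-grade total:

```python
from decimal import Decimal

def reconcile_billing(invoices, ledger_total, tolerance=Decimal("0.01")):
    """Compare summed invoice amounts against a ledger control total.

    Returns (ok, difference). Decimal avoids binary-float rounding drift,
    which matters when totals feed SOX-style reporting controls.
    """
    invoice_total = sum((Decimal(str(i["amount"])) for i in invoices), Decimal("0"))
    difference = invoice_total - Decimal(str(ledger_total))
    return abs(difference) <= tolerance, difference

# Hypothetical batch of invoices checked against a ledger figure:
invoices = [{"id": "INV-1", "amount": "100.10"}, {"id": "INV-2", "amount": "49.95"}]
ok, diff = reconcile_billing(invoices, "150.05")  # ok is True, diff is 0
```

In practice a check like this would run as a pipeline test (for example a dbt test or an Airflow task) rather than as a standalone script, but the core idea is the same: every load is verified against an independent control total before it reaches reporting.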

To Apply for this Job Click Here
Senior Data Engineer
San Francisco
$180,000 - $240,000
+ Data Engineering
Permanent | San Francisco, California
Senior Data Engineer
San Francisco, CA
$180K - $240K base + equity
We’re partnered with one of the most recognized developer platforms in the world, trusted by engineering teams at OpenAI, Meta, Netflix, and Adobe. The company is scaling its data platform and looking to hire a Senior Data Engineer to own the pipelines and foundations at the core of its data ecosystem.
You’ll own the full pipeline lifecycle, from ingestion architecture through to analytics-ready data, working closely with Data Platform Engineers and partnering with analysts and data scientists who depend on what you build every day.
What you’ll do
- Design and build scalable ingestion pipelines and orchestration frameworks across structured, semi-structured, and event-based sources
- Own reliability, observability, and performance across the full pipeline lifecycle from raw ingestion to analytics-ready delivery
- Build and maintain dbt transformation pipelines serving as the single source of truth across Finance, Product, GTM, and Engineering
- Ensure revenue and billing data meets the accuracy and auditability required for public company reporting, including SOX compliance
- Apply software engineering principles throughout: CI/CD, testing, observability, version control, and automation
- Enable self-serve analytics through semantic layer development, strong abstractions, and clear documentation
- Champion data quality and governance across classification, ownership, access policies, and data lifecycle management
What we’re looking for
- 5+ years in data engineering or a hybrid data/analytics engineering role, with a track record of owning pipelines end-to-end in high-growth or enterprise environments
- Advanced SQL and dbt (Core or Cloud), Snowflake or comparable cloud data warehouse, Python, and Airflow
- Experience with Kafka and streaming data systems, with working knowledge of ClickHouse, Iceberg, or similar technologies
- Experience with ingestion tools such as Fivetran, Airbyte, or Polytomic
- Cloud-native architecture experience across AWS, GCP, or Azure
- Exposure to designing data systems that meet compliance and governance requirements, including SOX or equivalent
- Strong communication and collaboration skills, with the ability to contribute as a technical voice across engineering and business stakeholders

To Apply for this Job Click Here
Data Engineer (GCP/DBT)
Leeds
£65,000 - £70,000
+ Data Engineering
Permanent | Leeds, West Yorkshire
DATA ENGINEER
Up to £70,000 + BENEFITS
UK (Remote‑first)
This is a great opportunity to join a high‑growth education technology business where you can take ownership of building and scaling a modern data platform that underpins decision‑making across the entire organisation.
THE COMPANY:
The business is scaling rapidly and investing heavily in data to create a single, trusted view of performance across marketing, product and operations.
THE ROLE:
You will take ownership of a modern data platform. Key responsibilities include:
- Owning and developing the end‑to‑end data platform
- Building and maintaining robust data ingestion pipelines into BigQuery
- Designing and maintaining analytics‑ready dbt models
- Partnering closely with the BI Analyst to support company‑wide reporting and self‑serve analytics
YOUR SKILLS AND EXPERIENCE:
You will bring strong capability in:
- Google BigQuery/GCP (3+ years’ hands‑on experience)
- Data ingestion tools such as Fivetran
- dbt Cloud and analytics‑ready data modelling
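The skills above centre on building analytics-ready models that can be safely re-run. A minimal sketch of that idea, with hypothetical table and column names and written in plain Python rather than SQL, is the idempotent merge semantics behind an incremental dbt model or a BigQuery `MERGE`: replaying the same batch must leave the result unchanged.

```python
def upsert(target, rows, key="id"):
    """Merge new rows into target keyed by `key` -- the same idempotent
    semantics as an incremental dbt model or a warehouse MERGE statement:
    re-running the same batch is a no-op."""
    index = {r[key]: dict(r) for r in target}
    for row in rows:
        index[row[key]] = dict(row)  # insert, or overwrite on key match
    return sorted(index.values(), key=lambda r: r[key])

# Hypothetical example: a customer record is updated and a new one arrives.
target = [{"id": 1, "status": "trial"}]
batch = [{"id": 1, "status": "paid"}, {"id": 2, "status": "trial"}]
merged = upsert(target, batch)
assert upsert(merged, batch) == merged  # idempotent: replay changes nothing
```

Designing pipelines so that every step has this replay-safe property is what makes "owning the end-to-end platform" tractable: failed loads can simply be re-run.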
THE BENEFITS:
You will receive a salary of up to £70,000 depending on experience, along with a comprehensive benefits package.
HOW TO APPLY:
Please register your interest by sending your CV to Molly Bird via the apply link on this page.

To Apply for this Job Click Here
Senior Data Platform Engineer
New York
$180,000 - $220,000
+ Data Engineering
Permanent | New York
Inspiren – Senior Data Platform Engineer
1. Role Overview
- Title: Senior Data Platform Engineer
- Department / Function: Engineering / Data Platform / Infrastructure
- Reports To (Name / Title): Likely Engineering Leadership / Head of Data / Platform Lead / Aaron (Hiring Manager)
- Level (IC / Manager / Director / Exec): Senior IC
- Reason for Hire (Growth / Backfill / Transformation): Growth + Platform Transformation
- Urgency / Timeline: High priority / active immediate search
2. Compensation & Incentives
- Base Salary Range: $180,000 – $200,000
- Bonus (Structure / Target %): Likely discretionary / not primary lever
- Equity (Yes/No + Details): Yes, meaningful startup equity likely included
- Total Comp Range (if applicable): $200k+ depending on equity and experience
- Flexibility on Comp: Moderate to strong for top-tier talent
3. Location & Work Environment
- Primary Location: Remote (US or Canada), NYC preferred
- Remote / Hybrid / Onsite (Days / Expectations): Fully remote with preference for proximity to NYC leadership hub
- Time Zone Requirements: EST or overlap strongly preferred
- Travel Requirements: Minimal occasional team offsites likely
- Relocation Offered (Yes/No): Likely case-by-case
- Visa Sponsorship (Yes/No): Unknown / likely limited
4. Organization & Team Context
- Company Overview (Product, Mission, Market):
Inspiren builds AI-powered technology for senior living communities. Their platform blends real-time monitoring, analytics, workflow intelligence, and operational tools to improve resident outcomes, caregiver efficiency, and profitability.
- Organizational Focus: AI, Data Infrastructure, Computer Vision, Healthcare Operations, Platform Scale
- Funding Stage / Revenue / Growth:
Series B growth-stage company with ~$155M total funding and strong recent momentum.
- Team Size & Structure:
Scaling engineering org with growing investments across data, ML, CV, and software engineering.
- Cross-Functional Partners:
Engineering, Product, Analytics, ML, Data Science, Operations, Leadership
- Team Culture & Working Style:
Fast-paced, mission-driven, high ownership, builder mentality, collaborative, pragmatic
5. Role Purpose
- Core Problem This Role Solves:
Build and scale the foundational data platform that powers analytics, ML systems, internal reporting, operational intelligence, and product decision-making.
- Why This Role Matters Now:
Company growth is accelerating and data has become mission-critical. The team needs mature infrastructure to support scale, reliability, and AI expansion.
- What Success Looks Like (6-12 months):
- Streaming ingestion layer modernized
- Databricks environment optimized for scale/cost
- Governance and RBAC strengthened
- Reusable tooling adopted by internal teams
- Reliable platform trusted across business units
6. Key Responsibilities
Core Ownership Areas
- Databricks platform architecture
- AWS data infrastructure
- Streaming ingestion (Kafka / Kinesis)
- Governance / RBAC / lineage
- Cost optimization
- Internal data tooling
Day-to-Day Responsibilities
- Build pipelines and platform components
- Optimize compute/storage spend
- Partner with ML / analytics stakeholders
- Troubleshoot reliability issues
- Improve observability and standards
Short-Term Projects (First 3-6 months)
- Assess current platform bottlenecks
- Improve ingestion reliability
- Implement quality frameworks
- Tighten governance model
Long-Term Initiatives
- Multi-year scalable data platform roadmap
- ML / AI data enablement
- Self-service analytics infrastructure
- Best-in-class internal developer experience
Stakeholder Interaction
Frequent interaction with engineering, product, analytics, ML, and leadership.
7. Technical / Functional Requirements
Must-Haves
- Strong Databricks hands-on experience
- Modern lakehouse / warehouse expertise
- AWS cloud experience
- Pipeline architecture at scale
- Governance / RBAC understanding
- Strong Python / SQL
- Strong communication skills
Nice-to-Haves
- Kafka / Kinesis
- Startup experience
- Healthtech experience
- AI tooling adoption (Cursor / Claude Code)
Tech Stack
- Languages: Python, SQL
- Frameworks: Spark / PySpark
- Cloud / Infra: AWS, Databricks
- Tools / Platforms: Kafka, Kinesis, Airflow, Terraform, monitoring stack
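The stack above leans on streaming ingestion (Kafka / Kinesis feeding Databricks). As a broker-free illustration of the core aggregation pattern, the sketch below buckets timestamped events into fixed tumbling windows and counts occurrences per key; the event names are hypothetical stand-ins, not Inspiren's actual schema.

```python
from collections import defaultdict

def tumbling_counts(events, window_s=60):
    """Bucket (timestamp, key) events into fixed windows and count per key --
    the core of a streaming aggregation, minus the broker and checkpointing
    that Kafka/Kinesis plus Spark Structured Streaming would provide."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts // window_s * window_s  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical monitoring events: (epoch seconds, event type)
events = [(0, "fall_alert"), (30, "fall_alert"), (75, "door_open")]
result = tumbling_counts(events)  # {0: {"fall_alert": 2}, 60: {"door_open": 1}}
```

In the real platform the same logic would typically live in a PySpark Structured Streaming job with watermarking for late data, but the windowing arithmetic is identical.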
8. Ideal Candidate Profile
- Years of Experience: 5-10+ years
- Target Background:
High-growth startups, healthcare tech, modern SaaS, data-heavy product companies, elite enterprise platform teams
- Education Preferences: Strong technical foundation; degree helpful but not mandatory
Top Resume Signals
- Databricks ownership
- AWS production scale
- Streaming ingestion systems
- Governance / Unity Catalog / RBAC
- Cost optimization + stakeholder impact
Key Traits / Soft Skills
- Ownership mentality
- Strong communicator
- Builder mindset
- Comfortable in ambiguity
- Pragmatic operator
- Cross-functional collaborator
9. Red Flags / Non-Starters
- Pure BI / reporting profile
- No Databricks hands-on depth
- No production scale systems
- Weak communication
- Highly siloed enterprise-only mindset
- No ownership examples
10. Interview Process
- Number of Stages: Likely 4-5
- Interview Format: Recruiter screen, HM screen, technical deep dive, stakeholder rounds, final
- Key Stakeholders Involved: Cameron, Aaron, Engineering leaders, cross-functional peers
- Timeline: Fast-moving if strong candidate identified
Assessment Areas
- Databricks depth
- Platform architecture
- Problem solving
- Communication
- Ownership / initiative
- Startup adaptability
11. Hiring Criteria / Evaluation Framework
Core Competencies Being Assessed
- Technical depth
- System design
- Communication
- Leadership / ownership
- Business thinking
- Reliability mindset
Deal Breakers
- Cannot operate autonomously
- No hands-on architecture depth
- Weak stakeholder presence
- Overly theoretical profile
Nice Differentiators
- Healthtech experience
- AI/ML platform support experience
- Elite company pedigree
- Strong cost optimization track record
12. Selling Points / Why a Candidate Would Join
- Mission-driven work improving elder care
- Real-world AI impact
- Strong funding and runway
- High ownership role
- Databricks + AWS modern stack
- Ability to shape platform strategy early
- Equity upside
- Meaningful technical challenges
13. Additional Notes / Nuances
Need someone who can both build and influence. This should not be a passive ticket-taker. Strong preference for candidates who proactively create standards, tooling, and scalable systems.

To Apply for this Job Click Here
Data Engineer
Manhattan
$160,000 - $200,000
+ Data Engineering
Permanent | Manhattan, New York
Data Engineer
Location: San Francisco OR New York City
This is an onsite role; the expectation is that you are in the office 3-4x per week.
Compensation: Base $160,000 – $200,000 plus bonus
This role sits at the intersection of data engineering and investment analytics, responsible for building the data foundations that power AI systems, portfolio insights, and advanced analytics.
Key Responsibilities
- Build and maintain production-grade data pipelines powering AI systems, including ingestion from internal platforms, external vendors, and unstructured data sources
- Ensure data quality, consistency, and reliability across datasets used for AI applications and portfolio monitoring
- Collaborate with infrastructure teams on data architecture, integration, and security requirements
- Develop portfolio analytics solutions, including dashboards, external data integration platforms, and investment performance analysis
- Support analytics applications through engineering improvements and AI-assisted data processing workflows
- Provide quantitative and technical support to investment teams on live transactions using internal tools
- Build and manage external data vendor integrations, including APIs, schema documentation, and data lineage tracking
Requirements
- 2-4 years of experience in data engineering, building and maintaining production pipelines that feed GenAI systems
- Strong proficiency in SQL and Python
- Experience working with unstructured data is a must
- Experience with modern data stack tools (e.g. Snowflake, dbt, Airflow or equivalents)
- Experience with cloud data infrastructure (AWS), including ETL/ELT patterns and API integrations
- Exposure to datasets used in AI/ML workflows (e.g. embeddings, vector stores, retrieval pipelines) is a strong plus
- Familiarity with AI-native development tools and workflows (e.g. LLM-based coding assistants)
- Strong analytical mindset with attention to data quality and pipeline reliability
- Experience building dashboards and monitoring systems
- Strong communication and collaboration skills across technical and non-technical stakeholders
- Experience in financial datasets or financial services is beneficial but not required
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a quantitative discipline
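The requirements above mention pipelines that feed GenAI systems and exposure to embeddings and retrieval pipelines. A typical first step in such a pipeline is splitting unstructured documents into overlapping chunks before computing embeddings; the sketch below is a minimal, hypothetical version of that step (chunk sizes, overlap, and function name are illustrative, not this firm's actual tooling):

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping word-based chunks -- a common
    preprocessing step before embedding documents for retrieval.
    Overlap preserves context that would otherwise be cut at boundaries."""
    words = text.split()
    step = size - overlap  # how far the window advances each iteration
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # final window already covers the tail
    return chunks

# Toy example: 10 words, window of 4, overlap of 2.
chunks = chunk_text(" ".join(f"w{i}" for i in range(10)), size=4, overlap=2)
```

Each chunk would then be embedded and written to a vector store, with chunk IDs tracked for data lineage, which is exactly the kind of reliability concern the responsibilities above describe.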
Please note: this role is not eligible for visa sponsorship and we are unable to accept C2C or contractor arrangements.

To Apply for this Job Click Here
Data Engineer
San Francisco
$160,000 - $200,000
+ Data Engineering
Permanent | San Francisco, California
Data Engineer
Location: San Francisco OR New York City
This is an onsite role; the expectation is that you are in the office 3-4x per week.
Compensation: Base $160,000 – $200,000 plus bonus
This role sits at the intersection of data engineering and investment analytics, responsible for building the data foundations that power AI systems, portfolio insights, and advanced analytics.
Key Responsibilities
- Build and maintain production-grade data pipelines powering AI systems, including ingestion from internal platforms, external vendors, and unstructured data sources
- Ensure data quality, consistency, and reliability across datasets used for AI applications and portfolio monitoring
- Collaborate with infrastructure teams on data architecture, integration, and security requirements
- Develop portfolio analytics solutions, including dashboards, external data integration platforms, and investment performance analysis
- Support analytics applications through engineering improvements and AI-assisted data processing workflows
- Provide quantitative and technical support to investment teams on live transactions using internal tools
- Build and manage external data vendor integrations, including APIs, schema documentation, and data lineage tracking
Requirements
- 2-4 years of experience in data engineering, building and maintaining production pipelines that feed GenAI systems
- Strong proficiency in SQL and Python
- Experience working with unstructured data is a must
- Experience with modern data stack tools (e.g. Snowflake, dbt, Airflow or equivalents)
- Experience with cloud data infrastructure (AWS), including ETL/ELT patterns and API integrations
- Exposure to datasets used in AI/ML workflows (e.g. embeddings, vector stores, retrieval pipelines) is a strong plus
- Familiarity with AI-native development tools and workflows (e.g. LLM-based coding assistants)
- Strong analytical mindset with attention to data quality and pipeline reliability
- Experience building dashboards and monitoring systems
- Strong communication and collaboration skills across technical and non-technical stakeholders
- Experience in financial datasets or financial services is beneficial but not required
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a quantitative discipline
Please note: this role is not eligible for visa sponsorship and we are unable to accept C2C or contractor arrangements.

To Apply for this Job Click Here
Senior Software Engineer, Data Platform
Orange
$170,000 - $200,000
+ Data Engineering
Permanent | Orange, California
Senior Software Engineer Data Platform – Costa Mesa, CA – $170k-$200k + RSUs – Relocation support available – US citizenship or Green Card required
I’m partnering with a fast-scaling technology company on a senior platform hire. They operate in a complex, hardware-integrated environment and are at an inflection point – standing up large-scale manufacturing operations and needing the data infrastructure to match. This is a builder role for someone who thinks in systems, not features.
What you’ll be doing:
- Lead the design and roadmap for the data platform
- Build and own real-time streaming pipelines and batch data systems
- Architect and evolve the data lake, including modernisation and scalability
- Own ingest and egress frameworks that stitch together multiple data sources
- Build backend data services on top of the lake
- Drive reliability, observability, and SLA ownership across the platform
- Partner with engineering, operations, and analytics teams to advocate best practices
What they’re looking for:
- 6+ years blending software engineering and data platform experience
- Strong Python, SQL, and Java or Scala
- Hands-on experience with Spark and Kafka or Flink
- Solid distributed systems and architecture fundamentals
- Experience maintaining and evolving a data lake
- Background in hardware, manufacturing, or physical system data is a strong plus
- High ownership mindset – platform-level thinking, not just feature development
US citizenship or Green Card required.

To Apply for this Job Click Here
Data Engineer (GCP/DBT)
Northampton
£65,000 - £70,000
+ Data Engineering
Permanent | Northampton, Northamptonshire
DATA ENGINEER
Up to £70,000 + BENEFITS
UK (Remote‑first)
This is a great opportunity to join a high‑growth education technology business where you can take ownership of building and scaling a modern data platform that underpins decision‑making across the entire organisation.
THE COMPANY:
The business is scaling rapidly and investing heavily in data to create a single, trusted view of performance across marketing, product and operations.
THE ROLE:
You will take ownership of a modern data platform. Key responsibilities include:
- Owning and developing the end‑to‑end data platform
- Building and maintaining robust data ingestion pipelines into BigQuery
- Designing and maintaining analytics‑ready dbt models
- Partnering closely with the BI Analyst to support company‑wide reporting and self‑serve analytics
YOUR SKILLS AND EXPERIENCE:
You will bring strong capability in:
- Google BigQuery/GCP (3+ years’ hands‑on experience)
- Data ingestion tools such as Fivetran
- dbt Cloud and analytics‑ready data modelling
THE BENEFITS:
You will receive a salary of up to £70,000 depending on experience, along with a comprehensive benefits package.
HOW TO APPLY:
Please register your interest by sending your CV to Molly Bird via the apply link on this page.

To Apply for this Job Click Here
CAN’T FIND THE RIGHT OPPORTUNITY?
GET IN TOUCH TODAY
If you can’t see what you’re looking for right now, send us your CV anyway – we’re always getting fresh roles through the door.
Industry Hub
HARNHAM
News & Blog
With over 10 years’ experience working solely in the Data & AI sector, our consultants are able to offer detailed insights into the industry.
Visit our Blogs & News portal or check out our recent posts below.
Testimonials
Client and candidate testimonials
A trusted partner of professionals across the globe.
We understand the challenges our customers face and offer the recruitment solutions needed to drive business success through Data & AI.