Realize the full value of the cloud.
Forward Deployed Engineer – Data Migration, Data Consolidation Platforms
Location: United States
Posted: 24 days ago
Salary: Not specified
Job Description
Job Requirements
- 7-10+ years of progressive experience in enterprise data engineering, data migration, or large-scale system integration roles within complex, multi-platform environments
- 3-5+ years directly leading end-to-end data migration or multi-system consolidation programs for global enterprises and industry leaders, with full ownership of technical delivery and client outcomes
- Demonstrated client-facing experience serving as a trusted technical advisor to C-level executives, enterprise architecture teams, and cross-functional business stakeholders
- Proven industry depth in at least two of the following verticals: Healthcare, Financial Services, Manufacturing, Retail, Energy & Utilities, or Public Sector
- Hands-on migration complexity: successfully delivered programs involving 3+ heterogeneous source systems, 100M+ records, complex master data harmonization, and multi-phase cutover execution
- Advanced proficiency in Python and SQL with working experience in PySpark and TypeScript/JavaScript
- Hands-on expertise with modern ETL/ELT and data integration platforms (Informatica, Talend, Matillion, Fivetran, AWS Glue, Azure Data Factory)
- Proven ability to build scalable, version-controlled data pipelines with error handling, incremental loading, and Change Data Capture (CDC)
- Strong working knowledge of at least one major cloud provider (AWS, Azure, or GCP), including core infrastructure, managed data services, and security configurations
- Experience with enterprise data warehouse and lakehouse platforms (Snowflake, Databricks, BigQuery, Redshift, Synapse Analytics, Delta Lake)
- Familiarity with knowledge graph construction, semantic modeling, ontology frameworks (RDF, OWL), or platforms such as Neo4j, AI Foundry, or Stardog
- Practical experience integrating LLMs or AI-driven tooling into data transformation, schema inference, or mapping workflows (OpenAI, Anthropic, AWS Bedrock)
- Experience with low-code/no-code application platforms for rapid solution delivery (AI Foundry, Mendix, OutSystems, PowerApps)
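To illustrate the pipeline requirement above (incremental loading with CDC-style change tracking and idempotent error handling), here is a minimal watermark-based sketch. Table and column names (`orders`, `updated_at`, `_watermark`) are hypothetical, and real programs would typically rely on a platform's native log-based CDC rather than a timestamp watermark.

```python
# Watermark-based incremental load: copy only rows changed since the last run.
# Uses sqlite3 purely for a self-contained demonstration.
import sqlite3

def incremental_load(src: sqlite3.Connection, tgt: sqlite3.Connection) -> int:
    """Copy rows from src.orders changed since the target's stored watermark.

    Returns the number of changed rows applied.
    """
    tgt.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER)"
    )
    tgt.execute("CREATE TABLE IF NOT EXISTS _watermark (ts INTEGER)")
    row = tgt.execute("SELECT MAX(ts) FROM _watermark").fetchone()
    watermark = row[0] if row[0] is not None else -1

    changed = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()

    # Upsert so a re-run after a partial failure is idempotent; production
    # error handling would add retries and dead-letter routing on top.
    tgt.executemany(
        "INSERT INTO orders (id, amount, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET "
        "amount = excluded.amount, updated_at = excluded.updated_at",
        changed,
    )
    if changed:
        tgt.execute(
            "INSERT INTO _watermark (ts) VALUES (?)",
            (max(r[2] for r in changed),),
        )
    tgt.commit()
    return len(changed)
```

Because the watermark advances only after a successful commit, re-running the load after a crash replays the same window and the upsert absorbs the duplicates.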