Realize the full value of the cloud.
Forward Deployed Engineer - Data Migration & Data Consolidation Platforms
Location
United States
Posted
2 days ago
Salary
$164K - $274K / year
Seniority
Mid Level
Job Description
Key Responsibilities:
- Migration Execution & Cloud Architecture: Lead end-to-end delivery of enterprise data migrations from corporate systems (SAP, Oracle, Epic ERP) to target cloud data platforms, including the design of cloud landing zones, data governance frameworks, and system rationalization strategies. Establish migration compliance controls, automated rollback procedures, and operational readiness gates while owning full technical accountability for 12–18+ month migration roadmaps.
- Data Pipeline Engineering & Transformation: Build production-grade data connectors to SAP (RFC, IDoc, BAPI, OData), Oracle (AQ, GoldenGate, APIs), and SQL/non-relational sources. Develop ETL/ELT pipelines with LLM-enabled transformation logic, multi-layer validation and reconciliation frameworks, and optimized throughput for datasets scaling from tens of millions to billions of records with built-in CDC and incremental loading.
- Ontology Layer Development & Schema Automation: Construct semantic ontology layers translating raw ERP structures into business-consumable objects (Customer, Order, Invoice, Product, Vendor, Asset). Deploy automated schema mapping agents for source-to-target analysis and transformation logic generation. Build unified master data models with row/column-level security, cross-system lineage tracking, and AI-ready semantic structures.
- Application & Workflow Delivery: Build operational dashboards, migration control centers, and agent-driven workflows for automated validation, exception handling, and anomaly detection using low-code platform tools. Generate TypeScript/Python SDKs for custom integrations and deliver real-time monitoring and self-service interfaces for migration progress, data quality KPIs, and compliance tracking.
- Multi-System Consolidation & Master Data Management: Lead consolidation of 5–15+ fragmented ERP instances into standardized master data models. Resolve complex entity resolution challenges including customer matching, product harmonization, and chart of accounts unification. Establish golden record frameworks, data quality scorecards, survivorship rules, and data stewardship workflows for post-migration governance.
- Client Engagement, Discovery & Modernization Advisory: Serve as primary technical advisor to C-suite and enterprise architecture stakeholders across all engagement phases. Deploy discovery agents to analyze legacy data estates, conduct assessment workshops, facilitate solution design sessions, and deliver executive briefings, go/no-go readiness assessments, and prioritized modernization roadmaps.
- Knowledge Transfer, Enablement & IP Development: Build reusable migration accelerators, playbooks, and reference architectures that scale across engagements. Lead knowledge transfer to upskill client teams for post-migration ownership and collaborate with internal product and sales engineering teams to feed field insights back into platform development and delivery methodology.
- Leadership & Executive Engagement: Operate autonomously in ambiguous, high-stakes client environments, driving outcomes with minimal oversight; translate deeply technical concepts into clear, business-level narratives for C-suite audiences through executive briefings and stakeholder communications; navigate organizational complexity, competing stakeholder priorities, and enterprise change management dynamics to maintain momentum across multi-workstream engagements; mentor junior engineers, cultivate technical capability within delivery teams, and foster a culture of knowledge sharing and continuous improvement.
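The CDC and incremental-loading pattern named in the pipeline engineering responsibility above can be sketched as a watermark-based extract followed by a key-based upsert. This is a minimal illustrative example, not Rackspace tooling: the `Record` type and in-memory target dictionary are hypothetical stand-ins for a source table with a change-timestamp column and a warehouse merge.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Record:
    id: int           # primary key in the source system
    updated_at: int   # epoch seconds; stand-in for a source change-timestamp column
    payload: str

def incremental_extract(source: Iterable[Record], watermark: int) -> tuple[list[Record], int]:
    """Pull only rows changed since the last run, and advance the watermark."""
    changed = [r for r in source if r.updated_at > watermark]
    new_watermark = max((r.updated_at for r in changed), default=watermark)
    return changed, new_watermark

def merge_upsert(target: dict[int, Record], changes: list[Record]) -> dict[int, Record]:
    """Apply changed rows to the target keyed by primary key (last write wins)."""
    merged = dict(target)
    for r in changes:
        merged[r.id] = r
    return merged
```

Persisting the returned watermark between runs is what makes the extract incremental; a production pipeline would store it durably and handle late-arriving updates, which this sketch omits.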
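The automated schema mapping described in the ontology responsibility above typically layers LLM-driven analysis over a deterministic baseline. As a hedged illustration of that baseline only, the sketch below proposes source-to-target column mappings by normalized name similarity; the column names and the threshold are hypothetical, and low-confidence pairs are deliberately left unmapped for human (or LLM) review.

```python
import re
from difflib import SequenceMatcher

def normalize(column: str) -> str:
    """Strip case, underscores, and punctuation so e.g. CUST_ID and custid compare cleanly."""
    return re.sub(r"[^a-z0-9]", "", column.lower())

def propose_mapping(source_cols: list[str], target_cols: list[str],
                    threshold: float = 0.6) -> dict[str, str]:
    """Propose a source-to-target column mapping by normalized name similarity."""
    mapping: dict[str, str] = {}
    for s in source_cols:
        best, score = None, 0.0
        for t in target_cols:
            ratio = SequenceMatcher(None, normalize(s), normalize(t)).ratio()
            if ratio > score:
                best, score = t, ratio
        if best is not None and score >= threshold:
            mapping[s] = best   # below-threshold columns stay unmapped for review
    return mapping
```

A name-similarity pass like this resolves the easy majority of columns cheaply, leaving only the ambiguous remainder for the more expensive agent- or human-driven tiers.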
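The survivorship rules and golden record frameworks named in the consolidation responsibility above can be illustrated with a simple source-priority rule: for each attribute, the surviving value comes from the highest-priority source that has one. This is a sketch of one common rule, not a full MDM implementation; the source names and fields are hypothetical, and real engagements combine priority with recency, completeness, and stewardship overrides.

```python
def golden_record(candidates: list[dict], source_priority: list[str]) -> dict:
    """Build a golden record with field-level survivorship.

    For each attribute, the surviving value comes from the highest-priority
    source (earliest in source_priority) that has a non-empty value for it.
    """
    ranked = sorted(candidates, key=lambda c: source_priority.index(c["source"]))
    fields = {k for c in candidates for k in c if k != "source"}
    golden: dict = {}
    for field in sorted(fields):
        for c in ranked:
            value = c.get(field)
            if value not in (None, ""):
                golden[field] = value
                break   # first non-empty value from the best-ranked source survives
    return golden
```

Because survivorship is decided per field rather than per record, the golden record can blend attributes from several systems, which is exactly why lineage tracking back to each contributing source matters.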
Required Qualifications:
- 7–10+ years of progressive experience in enterprise data engineering, data migration, or large-scale system integration roles within complex, multi-platform environments
- 3–5+ years directly leading end-to-end data migration or multi-system consolidation programs for global enterprises and industry leaders, with full ownership of technical delivery and client outcomes
- Demonstrated client-facing experience serving as a trusted technical advisor to C-level executives, enterprise architecture teams, and cross-functional business stakeholders
- Proven industry depth in at least two of the following verticals: Healthcare, Financial Services, Manufacturing, Retail, Energy & Utilities, or Public Sector
- Hands-on migration complexity: successfully delivered programs involving 3+ heterogeneous source systems, 100M+ records, complex master data harmonization, and multi-phase cutover execution
- Advanced proficiency in Python and SQL with working experience in PySpark and TypeScript/JavaScript
- Hands-on expertise with modern ETL/ELT and data integration platforms (Informatica, Talend, Matillion, Fivetran, AWS Glue, Azure Data Factory)
- Proven ability to build scalable, version-controlled data pipelines with error handling, incremental loading, and Change Data Capture (CDC)
- Strong working knowledge of at least one major cloud provider (AWS, Azure, or GCP), including core infrastructure, managed data services, and security configurations
- Experience with enterprise data warehouse and lakehouse platforms (Snowflake, Databricks, BigQuery, Redshift, Synapse Analytics, Delta Lake)
- Familiarity with knowledge graph construction, semantic modeling, ontology frameworks (RDF, OWL), or platforms such as Neo4j, AI Foundry, or Stardog
- Practical experience integrating LLMs or AI-driven tooling into data transformation, schema inference, or mapping workflows (OpenAI, Anthropic, AWS Bedrock)
- Experience with low-code/no-code application platforms for rapid solution delivery (AI Foundry, Mendix, OutSystems, PowerApps)
Preferred Qualifications:
- Certifications: AI Foundry (Data Engineer, Ontologist, or Application Developer), SAP Certified Technology Associate/Professional, cloud architecture or data engineering credentials (AWS Solutions Architect, Azure Data Engineer, GCP Professional Data Engineer), or data governance/MDM certifications (CDMP, DAMA)
- Advanced Technical Skills: Deep, production-level knowledge of real-time event streaming platforms (Kafka, Kinesis, Event Hubs, Pub/Sub); demonstrated expertise with enterprise MDM platforms (Informatica MDM, SAP MDG, Profisee, Reltio); hands-on proficiency in API development, microservices architecture, and service mesh patterns; strong command of CI/CD pipelines and infrastructure-as-code tooling (Jenkins, GitLab CI, Azure DevOps, Terraform, ArgoCD); comprehensive understanding of data security, privacy, and regulatory compliance frameworks (GDPR, HIPAA, SOC 2, CCPA, FedRAMP)
- Domain Knowledge: Working understanding of financial close processes, supply chain operations, revenue cycle management, or procurement workflows; experience with industry-specific data standards (EDI, HL7, FHIR, SWIFT, XBRL); familiarity with process mining tools (Celonis, UiPath Process Mining, Signavio) and data observability, cataloging, and lineage platforms (Monte Carlo, Collibra, Alation, Apache Atlas)
About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work year after year by Fortune, Forbes, and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.
More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
Benefits
- Compensation reflects the cost of labor across several geographic markets.
- Base pay ranges from $164,851.50/year to $274,752.50/year based on location and experience.
- Compensation package may include incentive compensation opportunities, equity awards, and an Employee Stock Purchase Plan (ESPP).