Microsoft Fabric Data Engineer
Location: United States
Posted: 1 day ago
Salary: Not specified
Job Description
We are looking for a skilled Microsoft Fabric Data Engineer to design, build, and optimize enterprise-scale data solutions that enable data-driven decision-making. In this role, you will:
- Develop scalable data pipelines.
- Implement Lakehouse architectures.
- Integrate diverse data sources across cloud platforms.
- Work closely with IT, analytics, and business teams to translate complex requirements into robust data solutions.
- Mentor junior engineers while delivering high-quality, reliable data services.
The environment is collaborative, fast-paced, and focused on modern data engineering practices, including automation, real-time processing, and advanced analytics. This is a remote role with milestone-based travel requirements.
Accountabilities
- Design, build, and maintain distributed, scalable data pipelines using Microsoft Fabric and Apache Spark to process structured and unstructured data.
- Integrate data from multiple internal and external systems, ensuring consistency, reliability, and proper lineage.
- Optimize ETL/ELT workloads to improve throughput, cost efficiency, and performance of large-scale analytics environments.
- Implement and enforce data quality, metadata management, governance, and compliance standards.
- Collaborate with data scientists, analysts, architects, and business stakeholders to deliver insights and integrate analytical models.
- Document pipeline architectures, workflows, schemas, and operational processes, while troubleshooting and ensuring enterprise-grade reliability.
- Explore emerging technologies to enhance data engineering practices, including Lakehouse architecture, real-time processing, and automation.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
- 7+ years of experience in data engineering, data architecture, or large-scale data platform development.
- Strong expertise in Apache Spark for batch and stream processing.
- Hands-on experience with Microsoft Fabric, Data Factory, pipelines, and Lakehouse implementations.
- Advanced proficiency in SQL, Python, and/or Scala.
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Solid understanding of distributed systems, lakehouse architecture, and data modeling.
- Proven ability to design and optimize complex ETL/ELT pipelines.
- Strong communication, leadership, and mentoring skills.
Preferred Qualifications
- Certifications in Azure Data Engineering, Apache Spark, or Microsoft Fabric.
- Experience with real-time streaming technologies (Kafka, Azure Event Hubs).
- DevOps practices including CI/CD and Infrastructure as Code.
- Knowledge of Power BI or Tableau.
Benefits
- Competitive salary and performance-based incentives.
- Flexible remote work with milestone-based travel opportunities.
- Comprehensive healthcare and retirement plans.
- Opportunities for professional development and skill growth.
- Collaborative and innovative technology environment.
- Access to cutting-edge data engineering tools and cloud technologies.