We are a faith-based nonprofit organization devoted to helping people grow through meaningful service, community, and care. Our team combines professional excellence with a shared mission to make a lasting impact.
Data Engineer
Location
United States
Posted
8 days ago
Salary
Not specified
Job Description
Role Description
We are looking for a highly technical, detail-oriented Data Engineer to serve as the technical engine and quality-control specialist for our data operations. Reporting directly to the Director of Data (who also serves as the Lead Architect), you will execute the end-to-end development of our ETL/ELT pipelines under their strategic direction.
- Execute the full data lifecycle—Extract, Transform, Load—utilizing industry-standard tools like Fivetran to automate ingestion.
- Help manage and scale complex data workflows using tools like Apache Airflow, ensuring task dependencies are met and pipelines run reliably.
- Perform rigorous data cleansing and transformation within the SQL layer to turn raw, messy data into "Gold-standard" KPI-ready logic.
- Function as the primary monitor for the data environment. Execute automated monitoring protocols to identify schema drift or data anomalies before they reach a stakeholder.
- Assist the Director of Data in the maintenance and optimization of cloud data warehouses, specifically BigQuery, ensuring performance and cost-efficiency.
- Partner with Marketing and Content teams to normalize data from a complex marketing stack, including GA4, PostHog, Facebook Ads, Reddit, and custom website tracking, among others.
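The monitoring duty above, catching schema drift before it reaches a stakeholder, can be sketched as a minimal batch-level check. The expected schema, column names, and sample rows here are hypothetical; a production version would run inside an orchestrator such as Airflow and inspect warehouse metadata rather than raw rows:

```python
# Minimal schema-drift check: compare the columns of an incoming batch
# against the expected schema and flag additions, removals, and type changes.
# EXPECTED_SCHEMA and the sample batch below are invented for illustration.

EXPECTED_SCHEMA = {"user_id": int, "event_name": str, "revenue": float}

def detect_schema_drift(rows):
    """Return a dict of drift findings for a list of row dicts."""
    if not rows:
        return {"added": set(), "missing": set(), "type_changes": {}}
    observed = rows[0]
    added = set(observed) - set(EXPECTED_SCHEMA)
    missing = set(EXPECTED_SCHEMA) - set(observed)
    type_changes = {
        col: type(observed[col]).__name__
        for col, expected_type in EXPECTED_SCHEMA.items()
        if col in observed and not isinstance(observed[col], expected_type)
    }
    return {"added": added, "missing": missing, "type_changes": type_changes}

# A batch where a source started sending `revenue` as a string and
# introduced a new `session_id` column:
batch = [{"user_id": 1, "event_name": "purchase", "revenue": "19.99", "session_id": "abc"}]
print(detect_schema_drift(batch))
```

A check like this would typically run as a task between ingestion and transformation, failing the pipeline (or alerting) before bad data propagates downstream.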
Qualifications
- SQL Mastery: You are a "black belt" in SQL. You write complex joins, window functions, and CTEs that are both performant and easily maintainable.
- Pipeline Expertise: Proven experience building and managing data pipelines at scale using Fivetran or similar automated ingestion tools.
- Orchestration Power User: Experience with Apache Airflow (or similar tools like Prefect/Dagster) for managing complex pipeline dependencies and scheduling.
- Warehouse & Lake Experience: Strong technical proficiency in BigQuery and Redshift, including an understanding of partitioning, clustering, and cost-optimization.
- The "Data Instinct": You have a natural "whiz" for spotting when data is incorrect. You see the anomalies that indicate a tracking break, a sync error, or an API failure instantly.
Requirements
- Modern Marketing Stack Knowledge: A deep understanding of the nuances of GA4 and/or PostHog, web tracking schemas, and media pixel implementation.
- Content Management Data: Experience sourcing and modeling data from Content Management Systems (CMS) to track content performance and attribution.
- Marketing Analytics: Familiarity with how Marketing teams utilize data for campaign optimization and customer journey mapping.
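Normalizing marketing-stack data often means flattening nested event payloads into a tabular shape. Below is a minimal sketch assuming a GA4-like export where event parameters arrive as a list of typed key/value records; the field names are modeled loosely on the GA4 BigQuery export but simplified, and the sample event is hypothetical:

```python
def flatten_event(event):
    """Flatten a GA4-style event dict (params as a key/value list) into one flat row."""
    row = {
        "event_name": event["event_name"],
        "event_timestamp": event["event_timestamp"],
    }
    for param in event.get("event_params", []):
        # Each param carries its value under a typed key (string_value, int_value, ...);
        # keep whichever typed slot is populated.
        value = param["value"]
        row[param["key"]] = next(v for v in value.values() if v is not None)
    return row

raw = {
    "event_name": "page_view",
    "event_timestamp": 1700000000,
    "event_params": [
        {"key": "page_location", "value": {"string_value": "/pricing", "int_value": None}},
        {"key": "engagement_time_msec", "value": {"string_value": None, "int_value": 1200}},
    ],
}
print(flatten_event(raw))
```

A transformation like this is typically the first SQL or Python step before joining web events against ad-platform and CMS data for attribution.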