Data Engineer

Full Time · Remote · Mid Level · Team: 501-1,000

Location

United States

Posted

2 days ago

Salary

$43 - $45.38 / hour

Seniority

Mid Level

Skills

SQL · ETL · Azure Data Factory · Data lakes · Apache Spark · Parquet · Delta Lake · Data modeling · Data pipeline optimization · Cloud data systems

Job Description

Role Description

As a Data Engineer on the Data Pipeline team, you will build and support modern, large-scale analytics platforms that power data-driven decision-making across the studio. You’ll work hands-on with cloud-based lakehouse and warehouse architectures, designing and maintaining real-time and batch data pipelines that serve business, design, test, and development stakeholders. This role sits at the intersection of data engineering, analytics enablement, and documentation, helping teams not only use data effectively but also understand and trust the systems behind it.

You’ll contribute to a culture of test-driven, data-informed development by shaping how data is captured, processed, documented, and shared. Attention to detail, strong engineering fundamentals, and a passion for scalable, modern data architecture are key to success in this role.

Responsibilities

  • Design, build, and maintain scalable, high-quality data pipelines supporting both real-time and batch analytics workloads (see the streaming sketch after this list)
  • Develop and optimize ETL/ELT processes using modern cloud data technologies and orchestration tools
  • Contribute to the design and evolution of lakehouse and warehouse data models that support analytics and reporting needs
  • Partner with business, design, test, and development teams to understand data requirements and improve data capture strategies
  • Help lead technical writing initiatives, including goal-setting, planning, and execution
  • Write, edit, and maintain clear, concise, and well-structured technical documentation
  • Create and improve documentation standards, organization, and knowledge-sharing processes
  • Organize and curate documentation to ensure it is intuitive, discoverable, and up to date
  • Advocate for consistent knowledge sharing across teams and studios
  • Research and propose improvements to documentation workflows and solutions to existing pain points
  • Apply modern engineering best practices to ensure reliability, scalability, and data quality across platforms
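To make the "real-time and batch" pairing in the first bullet concrete, here is a minimal, hypothetical Spark Structured Streaming sketch of the real-time side: it ingests raw events from a Kafka topic and appends them to a Delta table. The broker address, topic name, and storage paths are placeholders, and the snippet assumes a Spark environment with the Kafka source and Delta Lake support available.

    # Illustrative sketch only -- a hypothetical streaming ingest job,
    # not code from the posting. Assumes Spark with the Kafka source
    # and Delta Lake support configured.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-streaming-ingest").getOrCreate()

    # Subscribe to a hypothetical Kafka topic of raw JSON events.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder
        .option("subscribe", "game-events")                            # placeholder topic
        .load()
        .select(
            F.col("value").cast("string").alias("payload"),
            F.col("timestamp").alias("ingested_at"),
        )
    )

    # Continuously append to a Delta table; the checkpoint directory lets
    # Spark resume after a restart without duplicating writes.
    query = (
        events.writeStream.format("delta")
        .option("checkpointLocation", "/tmp/checkpoints/game-events")  # placeholder
        .outputMode("append")
        .start("/tmp/delta/raw_game_events")                           # placeholder path
    )
    query.awaitTermination()

The batch counterpart of this pattern appears after the Qualifications list below.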

Qualifications

  • 5+ years of professional experience working with SQL
  • 5+ years of experience designing and implementing scalable ETL processes, including data movement and data quality tooling
  • Hands-on experience with cloud-based data orchestration solutions (e.g., Azure Data Factory or equivalent)
  • 3+ years of experience with modern big data analytics platforms (a batch sketch combining these follows this list), including:
    • Data lakes
    • Distributed processing frameworks (e.g., Spark)
    • Columnar storage formats such as Parquet
  • 2+ years of experience building and supporting cloud-hosted data systems
  • Strong understanding of data modeling, pipeline reliability, and performance optimization
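As a hedged illustration of the platform experience listed above, the following minimal PySpark sketch reads raw Parquet from a data lake, applies a simple quality gate, and writes a curated, partitioned Delta table. All paths, table, and column names are hypothetical, and a Delta Lake-enabled Spark environment (e.g., the delta-spark package) is assumed.

    # Illustrative sketch only -- not code from the posting.
    # Assumes a Spark environment with Delta Lake support configured.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("example-daily-events")  # hypothetical job name
        .getOrCreate()
    )

    # Read raw events stored in Parquet (columnar) format in the lake.
    raw = spark.read.parquet("/lake/raw/events/")  # hypothetical path

    # A simple data-quality gate plus a daily aggregation.
    daily = (
        raw.filter(F.col("event_id").isNotNull())          # drop malformed rows
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "event_type")
           .agg(F.count("*").alias("event_count"))
    )

    # Persist curated output as a Delta table, partitioned by date so
    # downstream queries can prune irrelevant files.
    (
        daily.write.format("delta")
             .mode("overwrite")
             .partitionBy("event_date")
             .save("/lake/curated/daily_events/")  # hypothetical path
    )

Partitioning by event_date is a common choice here because most analytics queries filter on date ranges, letting the engine skip files outside the requested window.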

Preferred Qualifications

  • Experience building data pipelines using cloud-native analytics platforms and services (e.g., data factory tools, analytics warehouses, and Spark-based processing)
  • Hands-on experience working with Delta Lake and transactional data lake formats
  • Experience querying and integrating data from high-performance analytics engines (e.g., time-series or log-based systems such as Kusto/Azure Data Explorer); see the Kusto sketch after this list
  • Exposure to AI/ML-focused data engineering use cases, including:
    • Feature engineering and feature stores
    • Model training and serving datasets
    • Model monitoring and observability pipelines
  • Experience preparing, governing, and securing datasets for modern AI applications, including:
    • LLM and RAG workflows
    • Experimentation and A/B testing
    • Privacy-aware and compliant data access patterns
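As a hedged illustration of the Kusto/Azure Data Explorer item above, the sketch below runs a small KQL aggregation against a hypothetical telemetry table using the azure-kusto-data Python SDK and loads the result into a pandas DataFrame. The cluster URL, database, and table names are placeholders, and the authentication method will vary by environment.

    # Illustrative sketch only -- hypothetical cluster, database, and table.
    # Requires the azure-kusto-data package (and pandas for the helper).
    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
    from azure.kusto.data.helpers import dataframe_from_result_table

    cluster = "https://example-cluster.kusto.windows.net"  # placeholder cluster
    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
    client = KustoClient(kcsb)

    # A simple KQL aggregation over a hypothetical Telemetry table.
    query = """
    Telemetry
    | where Timestamp > ago(1d)
    | summarize Events = count() by EventType
    | order by Events desc
    """

    response = client.execute("example_db", query)  # placeholder database
    df = dataframe_from_result_table(response.primary_results[0])
    print(df.head())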

Salary Range

At Blueprint, we strive to offer competitive pay that reflects the value of our team members. Compensation for this role is influenced by a variety of factors, including skills, education, responsibilities, experience, and geographic market. For candidates based in Washington State, the anticipated salary range is $43.00 to $45.38 USD per hour. Please note that we typically do not hire new employees at the top of the posted range. Actual starting pay will be determined based on experience, skills, and internal equity. The final salary and job title may vary depending on the selected candidate’s qualifications and could fall outside the stated range.

Benefits

  • Medical, dental, and vision coverage
  • Flexible Spending Account
  • 401k program
  • Competitive PTO offerings
  • Parental Leave
  • Opportunities for professional growth and development

Equal Opportunity Employer

Blueprint Technologies, LLC is an equal employment opportunity employer. Qualified applicants are considered without regard to race, color, age, disability, sex, gender identity or expression, orientation, veteran/military status, religion, national origin, ancestry, marital, or familial status, genetic information, citizenship, or any other status protected by law.

If you need assistance or a reasonable accommodation to complete the application process, please reach out to: recruiting@bpcs.com

