Openly

Premium, straightforward insurance

Senior Data Engineer (Remote, US)

Data Engineer · Remote · Senior · Team 201-500 · Since 2017 · H1B Sponsor

Location

United States

Posted

2 days ago

Salary

$140.8K - $158.4K / year

Seniority

Senior

Job Description

Why Openly

Openly is rebuilding insurance from the ground up. We are re-envisioning and enhancing every aspect of the customer experience. Doing this requires a rapidly growing team of exceptional, curious, empathetic people with a wide range of skill sets, spanning technology, data science, product, marketing, sales, service, claims handling, finance, and more.

The Openly Difference

We created Openly because we saw an evident gap in the market for premium insurance made simple. Consumers deserve more complete coverage at competitive prices.

  • The Price Difference: Using cutting-edge data and technology, we provide you with customizable, competitive prices to protect your most valuable assets.
  • The Policy Difference: Coverages are truly customizable to meet your individual protection needs, for both standard coverages and optional add-ons.
  • The Experience Difference: From tailored claims handling to highly responsive customer service, we are focused on making the home insurance purchasing process a better overall experience.
Welcome to your next adventure.

At Openly, our people are just as important as our product. For us, collaboration, communication, and work-life balance are more than nice-to-haves; they're the must-haves that make us who we are. We believe a great company is the result of a shared set of values, so we look for these qualities in every candidate we hire.

  • Integrity
  • Empathy
  • Teamwork
  • Curiosity
  • Urgency

We've designed our hiring process with you, the candidate, in mind. At every step, you have the chance to present your strengths and learn more about what makes Openly a great place to work.

We're committed to Diversity, Equity, & Inclusion

We embrace individuality and believe diverse teams are winning teams. Our commitment to inclusion across race, gender, age, religion, identity, and experience drives us forward every day.

Job Details   

We are seeking a Senior Data Engineer to be a technical leader on our data engineering team. This role involves designing and delivering robust, scalable data solutions for Openly's insurance platform. You will apply deep expertise across the data lifecycle—from architecture and pipeline design to optimization and mentoring—to shape how we build, manage, and access data for our products and business intelligence.

Key Responsibilities  
  • Architect, build, and maintain high-quality, scalable data pipelines, data models, and data infrastructure across our GCP-based platform.
  • Lead technical design decisions related to data architecture, pipeline orchestration, data modeling, and cloud infrastructure.
  • Develop and optimize complex SQL queries and data transformations in BigQuery and PostgreSQL, ensuring performance, reliability, and correctness.
  • Write production-grade code in Python and/or Go to build and enhance data management frameworks, services, and pipeline tooling.
  • Partner with data science, business intelligence, product, and operations teams to translate business requirements into reliable data solutions.
  • Own and improve the full Software Development Lifecycle (SDLC) for data projects: design, development, testing, CI/CD deployment, monitoring, and maintenance.
  • Leverage streaming and event-driven architectures using Kafka (Aiven/Debezium) to build real-time data pipelines.
  • Utilize and optimize distributed data processing frameworks such as Apache Spark for large-scale data transformations.
  • Mentor and provide technical guidance to junior and mid-level data engineers; conduct code reviews and promote engineering best practices.
  • Share your knowledge within the data engineering team and the broader engineering organization through all-hands presentations, learning hours, domain meetings, and written documentation.
Our stack
  • Backend/Core: Go & PostgreSQL
  • Frontend: Browser-based, VueJS, Vite, Webpack, Nuxt & Tailwind
  • Research/Data Science: R, ArcGIS, Vertex, & Python
  • Data: GCP GCS, BigQuery, Composer/Airflow, Cloud Functions, Postgres, SQL, Python, Aiven Debezium and Kafka, Fivetran
  • Infrastructure: Google Cloud, specifically Cloud Run, Kubernetes, Pub/Sub, BigQuery, and CloudSQL, managed with Terraform. We use GitHub for code hosting, DataDog for monitoring, PagerDuty for on-call, and CircleCI for running our CI/CD pipelines.
  • Remote work tools: Slack, Zoom
Requirements
  • 4+ years of data engineering and data management experience, with a proven track record of delivering complex, production-grade data systems.
  • Strong proficiency in SQL and SQL optimization — including query tuning, indexing strategies, execution plan analysis, and data modeling in BigQuery and PostgreSQL.
  • Expert-level scripting and programming in Python; experience with Go is a strong plus.
  • Deep expertise with Google Cloud Platform (GCP), including BigQuery, GCS, Composer/Airflow, Cloud Functions, Cloud Run, Pub/Sub, and CloudSQL.
  • Proven experience building and operating event-driven and streaming data pipelines using Kafka or similar technologies (Aiven/Debezium experience a plus).
  • Strong understanding of modern data warehouse and Lakehouse architectures, including multi-layered data modeling patterns (bronze/silver/gold or equivalent).
  • Infrastructure as Code (IaC) experience with Terraform to define, manage, and version cloud data infrastructure.
  • Solid understanding of Software Development Lifecycle (SDLC) best practices: CI/CD pipelines, automated testing, code review processes, code repositories, and deployment management.
  • Experience with data replication tools (e.g., Fivetran, Debezium) and understanding of CDC (Change Data Capture) patterns.
  • Ability to independently drive data architecture decisions, translate business requirements into source-to-target data mappings, and deliver working, maintainable solutions.
  • Strong communication skills; able to effectively collaborate with and educate both technical and non-technical stakeholders.
  • Experience mentoring engineers and leading technical initiatives within a team environment.
Nice to Have
  • Hands-on experience with Apache Spark or other distributed data processing frameworks for large-scale batch and/or streaming workloads.
  • Familiarity with data observability and monitoring tools (e.g., DataDog, Monte Carlo, Great Expectations).
  • Prior experience in a regulated industry such as insurance, finance, or healthcare.
  • Contributions to open-source data engineering projects or internal data platform tooling.
  • Experience with AI tools such as Claude, Copilot, or equivalent.

Compensation & Benefits: 

Below is the budgeted salary range for this position. Actual compensation will be determined based on the successful candidate's experience and skills. We are committed to providing a compensation package that reflects not only the responsibilities and requirements of the role, but also the unique expertise the chosen candidate will bring to our team.

Budgeted Salary Range

$140,800 — $158,400 USD

The full salary range shows the minimum to maximum salary for this position. Actual compensation will be commensurate with the candidate's qualifications, skills, and experience.

Full Salary Range

$140,800 — $211,200 USD

Benefits & Perks

  • Remote-First Culture - We supported #remotelife long before it was a given. We'll keep promoting it.
  • Competitive Salary & Equity
  • Comprehensive Medical, Dental, and Vision Plan Offerings
  • Life and disability coverage including voluntary options
  • Parental Leave - up to 8 weeks (320 hours) of paid parental leave based on meeting eligibility requirements
    (Birthing parents may be eligible for additional leave through STD)
  • 401K Company Contribution - Openly contributes 3% of the employee's gross income, even if the employee does not contribute.
  • Work-from-home stipend - We provide a $1,500 allowance to spend on setting up your home workplace
  • Annual Professional Development Fund: Each employee has $2,000 in professional development (PD) funds to spend on activities or resources annually. We want each Openly employee to achieve personal and professional success and to feel supported, confident, and informed about improving their efficiency and productivity.
  • Be Well Program - Employees receive $50 per month to use toward their overall well-being
  • Paid Volunteer Service Hours
  • Referral Program and Reward

Depending on the position, employees generally are eligible for cash incentive compensation, including commissions for sales-eligible roles. In all cases, eligibility for compensation and benefits is subject to applicable plan and policy terms in effect from time to time.

U.S. citizens, green card holders, and those authorized to work in the U.S. for any employer and currently residing in the U.S. will be considered.

Openly is committed to equal employment opportunity and non-discrimination for all employees and qualified applicants without regard to a person's race, color, sex, gender identity or expression, age, religion, national origin, ancestry, ethnicity, disability, veteran status, genetic information, sexual orientation, marital status, or any characteristic protected under applicable law. Openly is an E-Verify Employer in the United States. Openly will make reasonable accommodations for qualified individuals with known disabilities under applicable law.


We strive to provide an exceptional applicant and candidate journey when you engage with us. In an effort to respond to applicants in a timely manner, we leverage AI to organize applications and resumes based on required and applicable skills and experience. To allow our applicants to drive their initial interview experience with us, we may leverage an AI-supported scheduling tool so you can choose when to meet with our team. While AI assists with efficiency, all hiring decisions are made by our team members. Rest assured, your data is protected according to privacy laws and company policies. Contact our recruitment team with any questions about our AI-assisted hiring process.


Benefits

  • 401(K), Adoption Assistance, Childcare benefits, Company equity, Company-sponsored outings, Continuing education stipend, Dental insurance, Disability insurance, Volunteer in local community, Family medical leave, Fitness stipend, Flexible Spending Account (FSA), Flexible work schedule, Generous parental leave, Generous PTO, Health insurance, Job training & conferences, Open door policy, Life insurance, Paid volunteer time, Online course subscriptions available, Paid holidays, Paid industry certifications, Paid sick days, Performance bonus, Pet insurance, Promote from within, Lunch and learns, Remote work program, Return-to-work program post parental leave, Team workouts, Continuing education available during work hours, Mandated unconscious bias training, Vision insurance, Wellness programs, Mental health benefits, Home-office stipend for remote employees, Hiring practices that promote diversity, Employee resource groups, Employee-led culture committees, Employee awards, Pay transparency, Transgender health care benefits, Wellness days, Abortion travel benefits, Personal development training, Virtual coaching services, Flexible time off, Floating holidays, Bereavement leave benefits, Hardship benefits
