To enable broadband service providers of all sizes to simplify, innovate and grow.
Software Engineering Intern, Data Engineer
Location
United States, Canada, United Kingdom
Posted
8 days ago
Salary
$24 - $35 / hour
Job Description
We are looking for a Software Engineering Intern to join our Products organization for the summer. In this role, you will be part of a unique and award-winning internship program within the company. The program provides the opportunity to learn new skills through training and on-the-job learning. The duration of the program is expected to be 90 days.
The intern will join the software engineering and analytics team to design, develop, and optimize cloud-based workflows and data processing applications. The role involves working with technologies such as Python, AWS, and Elasticsearch to create reliable and scalable solutions that support analytics and system monitoring. Success in this position means writing clean, efficient code, learning modern engineering practices, and collaborating closely with experienced team members. This internship offers exposure to real-world production systems, mentorship opportunities, and hands-on experience at the intersection of software engineering, data analytics, and emerging AI frameworks.
Responsibilities and Duties:
- Develop and maintain Python-based applications and automation workflows.
- Build and enhance software pipelines that collect, process, and organize data from multiple sources.
- Support and deploy solutions using cloud services such as AWS Lambda, InfluxDB, or equivalent GCP tools.
- Analyze structured and unstructured data, including time-series and network telemetry data, to identify key patterns.
- Contribute to creating and maintaining analytics components using Elasticsearch and Kibana or equivalent.
- Collaborate with engineers using Git, CI/CD workflows.
- Document technical processes, write reusable code, and implement best practices for system reliability.
- Explore new technologies such as agentic AI frameworks (e.g., LangGraph) and LLM-based automation solutions.
Qualifications:
- Currently enrolled in a college degree program in Computer Science, Computer Engineering, Software Engineering, or a related field.
- Proficiency in Python for backend development, scripting, or automation.
- Understanding of data pipeline and integration concepts — collecting, transforming, and preparing data for analytics.
- Exposure to AWS cloud services (Lambda, InfluxDB) or corresponding GCP components.
- Familiarity with version control systems such as Git.
- Experience with analytics or data‑driven projects through coursework, internships, or research.
- Strong foundation in data structures, algorithms, and API design.
- Excellent problem-solving skills, attention to detail, and ability to work collaboratively.
- Preferred: Knowledge of the Elastic Stack (Elasticsearch, Kibana, Beats, Logstash) or equivalent.
- Preferred: Exposure to agentic or autonomous system development and machine learning workflows.
- Knowledge of time-series data or network telemetry data is a plus.
- Able to work for the complete summer break (May - August or June - September).
#LI-Remote
The base pay range for this position varies based on the geographic location. More information about the pay range specific to candidate location and other factors will be shared during the recruitment process. Individual pay is determined based on location of residence and multiple factors, including job-related knowledge, skills and experience.
San Francisco Bay Area:
27.60 - 34.50 USD Hourly
All Other US Locations:
24.00 - 30.00 USD Hourly