Data Operations Engineer at Tempus
About our teams:
At Tempus, products are owned and developed by small, autonomous teams composed of developers, designers, data scientists, and product managers. You and your team set the goals, build the software, deploy the code, and contribute to a growing software platform that will make a lasting impact in the field of cancer research and treatment.
As a Data Operations Engineer, you’ll be implementing data pipelines to integrate partner data with the world’s largest library of clinical and molecular data.
What you’ll do at Tempus:
- You’ll analyze, document, and implement data transformations to populate Tempus data models with partner data.
- You’ll implement resilient data pipelines as code to guarantee timely ingestion/egress of partner data.
- You’ll code instrumentation to measure source data quality, and communicate with partners’ technical contacts to resolve live integration issues.
- You’ll identify and socialize improvements to data processing infrastructure and workflows.
Why we’re looking for you:
- You write code to transform data between data models and formats, preferably in Python or PySpark.
- You've worked in agile environments and are comfortable iterating quickly.
- You are comfortable working from high-level direction and consulting subject matter experts to gather the information you need to operate in complex domains.
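As an illustrative sketch only, a transformation between data models of the kind described above might look like the following in plain Python. The field names (`PatientID`, `DOB`, `DxCode`) and the target model are hypothetical, not Tempus's actual schemas:

```python
from datetime import datetime

def transform_partner_record(raw: dict) -> dict:
    """Normalize a hypothetical partner record into an internal-style model.

    Field names here are illustrative assumptions, not a real schema.
    """
    return {
        "patient_id": raw["PatientID"].strip().upper(),
        # Partner dates arrive as MM/DD/YYYY strings in this sketch.
        "birth_date": datetime.strptime(raw["DOB"], "%m/%d/%Y").date(),
        # Treat empty or whitespace-only codes as missing.
        "diagnosis_code": raw.get("DxCode", "").strip() or None,
    }

record = {"PatientID": " tl-001 ", "DOB": "02/14/1970", "DxCode": "C50.9"}
print(transform_partner_record(record))
```

In practice, pipelines like this are typically expressed as PySpark jobs over full partner datasets rather than per-record functions, but the mapping logic is the same.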
Bonus points for:
- Experience with messaging platforms and real-time SOA integration patterns.
- Experience working with RESTful endpoints in ETL processes.
- Healthcare domain knowledge and experience implementing integrations with healthcare transaction protocols (FHIR, HL7, ANSI X12).
- Experience building cloud-native applications and with supporting technologies, patterns, and practices, including AWS, Docker, CI/CD, DevOps, and microservices.
- Knowledge of relational database physical modeling concepts and experience writing advanced SQL queries.