Senior Data Engineer
We are looking for a Senior Data Engineer to be a key member of the engineering team, helping build our personalized healthcare and financial platform from the ground up.
Yaro is building digital products that put consumers at the center of their healthcare. We’re an experience transformation engine putting the consumer first, so that it’s simple to make smart benefits choices, shop and pay for care, and establish financial well-being. We work with health plans, providers, and health systems to serve members as consumers, delivering better health outcomes, increased transparency, and lower costs. We want a continuous stream of diverse, smart engineers and designers who are excited and hungry to use software to solve challenging problems that immediately impact lives.
- Guided by clarity. We start with people - our clients, our users, and our team. By approaching our work with focus, transparency, and intention, we are clear on our priorities and what needs to get done and when.
- Building with intensity. Our passion has purpose, and that purpose drives us to build and sustain momentum. Staying agile and efficient gives us the ability to build and adapt quickly and securely. We want to get things done: built, launched, and making positive impact.
- Working in community. As team members, we radically collaborate in trusting relationships and create something we’re proud of. Bringing diverse experiences, perspectives, and skills, we build together as a company, with partners, and as part of the healthcare community.
As a senior data engineer on the team, you will get to solve real-world problems that affect all of us and in turn make a huge impact in the healthcare space. You will help create a healthcare and financial wellness platform curated for each person, combining massive amounts of third-party and internal data with complex rules to deliver targeted, proactive health and financial wellness solutions at scale. If you are passionate about building large-scale data processing systems and are motivated to make an impact in crafting next-generation data architectures, analytics, and machine learning algorithms, we would love to talk to you.
- You obsess over the customer and are excited about new ways to use data to continuously create a better customer experience
- You insist on the highest standards - You take responsibility for code quality and proliferate best practices across the development team through code reviews and testing
- You seek to build & simplify frameworks, tools & technologies and advocate for architectural improvements to minimize pain points within our infrastructure and code base
- You think big! You strive to automate all processes and focus on scale
- You believe in continuous delivery and iterating on code and features
- You take ownership of the results - You are interested in using analytics to derive insights through software
- You are able to manage your own time and are hungry to contribute and learn
- You are able to help build a team of smart creatives
What You Might Do in Your First Year:
- Build data pipelines to automate the ingestion, transformation, validation, and augmentation of structured and semi-structured data
- Advance the data architecture, ensuring commitment to architectural and industry best practices
- Own and extend the data pipelines through the collection, storage, and processing of large datasets
- Partner with Data Scientists, Software Engineers, and business groups to build the data platform, including data processes, data warehouses, and data pipelines
- Collaborate with Data Scientists to implement machine learning algorithms
- Build analytics reports examining usage patterns of applications across the product suite
- Guide engineering and business teams to understand and consume the data sets to drive improved decision-making and analytical capabilities
- Contribute to the engineering culture, and mentor other software engineers within the team
About You in the Role:
- Several years of solid, on-the-ground experience in the design and development of data pipelines, data marts, and data warehouses
- Proficient in Python
- Strong understanding of ETL processing with large data stores
- Experience working with ETL systems like Airflow, Spark, Talend, Alooma, or Nifi
- Strong understanding of stream processing services such as Kafka, Kinesis, Apache Storm, or Spark Streaming
- Expertise in working with and modeling data in varied forms, including relational, columnar, flattened, and dimensional data stores
- Experience designing and implementing business-critical data pipelines by leveraging modern big data architectures to satisfy real-time and batch processing requirements
- Proficient in creating and tuning data queries
- Strong communication skills and ability to collaborate across the engineering team
- A proven record of personally taking large data projects from ideation to implementation
You understand and evangelize our development philosophies:
- Distributed Systems, SaaS
- Domain Driven Design
- Event-Based Architecture
- Instrumentation to log data for all types of insights
- Code maintainability, SOLID Principles
- Don’t over-engineer the solution
- Discuss and learn from failure
Perks of the Job:
- The chance to help build a platform and a company - you will make an impact here!
- Competitive salary, bonus potential and stock option eligibility
- Open PTO and flexible work hours
- Casual, open-layout workspace right on Michigan Avenue at the Chicago River
- An amazing view from the 34th floor
- Fully stocked kitchen (with plenty of LaCroix) and access to the building’s gym and secure bike room
- Health insurance, disability and life insurance, 401(k) with match, commuter benefits, and more
- Team events and happy hours
- Parental leave
About the Team:
Yaro is an impassioned work environment built by a diverse group with a strong sense of community. We’re here to revolutionize healthcare, put the power in the hands of the customer, and provide an inspiring and meaningful place to work. (We’re also pretty fun.) Join us!