Data Engineer at Uptake
What we do
Uptake helps industrial companies digitally transform with open, purpose-built software that delivers outcomes that matter. Built on a foundation of data science and machine learning, our vision is to create a world that always works — one where the machines and equipment we depend on daily don’t break, and industrial companies are once again the creators of economic growth and opportunity.
What you’ll do
As a Data Engineer on the Data Science team, you’ll work with Uptake’s data scientists and product team to design and build data infrastructure in support of Uptake’s data science efforts. The tools you create will have a lasting impact on model development and deployment, performance and outcomes reporting, and data monitoring. The ideal candidate has strong analytical and technical skills and the flexibility to adapt to the team’s rapidly evolving needs.
- Design and implement data warehouses, real-time ETL, and batch processing of data to support modeling and reporting needs
- Work with data ingestion teams to develop data expertise and resolve upstream issues relating to data quality
- Define best practices and design for the management of data
- Partner with Data Scientists to build and maintain internal data processing and visualization tools
- Translate requests into replicable analytic reports using varying applications
- Create tools to serve data such as APIs and packages
What we look for
- Bachelor’s degree in computer science, information technology/information systems, or a related computational field, or 2+ years of experience working as a data engineer
- Ability to write efficient SQL queries
- Experience managing data ETL processes and making data available through service applications and databases
- 1+ years of experience with NoSQL databases (Cassandra or Elasticsearch preferred)
- 3+ years of experience with programming languages (Python, Java, R, and/or Scala preferred)
- Familiarity with a variety of data processing technologies (e.g., Spark, Kafka, Hadoop)
- Excellent communication skills, including a knack for clear documentation
- Experience with or knowledge of REST APIs and making data available through microservices
- Experience using version control (Git, Mercurial, SVN, etc.) for collaborative code development
Preferred qualifications
- MS or PhD in Computer Science or another technical field
- Ability to architect data solutions
- Some knowledge of machine learning and data science processes
- Experience supporting data science and analytical efforts
- Some experience with front-end web development
- Experience defining and implementing APIs
- Experience working with Docker
Why Work Here
We build and deliver, then explore to build more. Curiosity and flexibility enable everything we do, and we get stronger as we make each new industry smarter. As a team, we bring our diverse backgrounds, beliefs and experiences to solve problems no one has solved yet, at a speed no one has experienced yet. We support and challenge one another to bring out a new best in each of us, and we might have a little fun along the way.