Data Engineer (Remote) at Fetch Rewards
What we’re building and why we’re building it.
Fetch is a build-first technology company creating a rewards program to power the world. Over the last 5 years, we’ve grown from 0 to 10M active users and taken over the rewards game in the US with our free app. The foundation has been laid. In the next 5 years we will become a global platform that completely transforms how people connect with brands.
It all comes down to two core beliefs. First, that people deserve to be rewarded when they create value. If a third party directly benefits from an action you take or data you provide, you should be rewarded for it. And not just the “you get to use our product!” cop-out. We’re talkin’ real, explicit value. Fetch points, perhaps.
Second, we believe brands need a better, more direct connection with what matters most to them: their customers. Brands need to understand what people are doing and have a direct line to do something about it. Not just advertise, but ACT. Sounds nice, right?
That’s why we’re building the world’s rewards platform. A closed-loop, standardized rewards layer across all consumer behavior that will lead to happier shoppers and stronger brands.
Fetch Rewards is an equal employment opportunity employer.
The data engineering team uses the latest technology to build a performant, reliable, and scalable platform for delivering data. Data engineers enable all stakeholders to access and use endless amounts of data from an ever-growing variety of sources. At Fetch, our motto is to make data processing seamless and effortless for both producers and consumers of data. With a goal of world-class data availability across terabytes of daily data, data engineering is critical to Fetch’s success.
Required:
- Python programming skills
- Solid SQL skills
- Familiarity with Unix systems, shell scripting, and Git
- Experience with relational (SQL), non-relational (NoSQL), and/or object data stores (e.g., Snowflake, MongoDB, S3, HDFS, Postgres, Redis, DynamoDB)
- Interest in building and experimenting with different tools and tech, and sharing your learnings with the broader organization
- The desire to work with other teams in the organization (e.g., Development, Business Intelligence, Data Science) to build tools and solutions that support and help manage data within the Fetch ecosystem
- Bachelor’s degree in Computer Science (or equivalent)
- At least 3 years of relevant full-time work experience
Bonus points for:
- Excellent written and verbal communication skills
- Familiarity with open source software and dependency management
- ETL process, data pipeline, and/or micro-service development experience
- Cloud engineering and DevOps skills (e.g., AWS, CloudFormation, Docker)
- Familiarity with messaging and asynchronous technologies (e.g., SQS, Kinesis, RabbitMQ, Kafka)
- Big data development skills (e.g., Spark, Hadoop, MPP DW)
- Experience with visualization tools (e.g., Tableau)
- Love of dogs! ...Or just tolerance. We’re a very canine-friendly workplace.