At Fetch Rewards, one saying sums up the ethos of our organization, which launched in 2013:
Defeat the Odds!
At Fetch Rewards, our vision is to help people digitize their shopping in a way that is fun and rewarding. Millions of people use our app every month, and we are growing rapidly. Headquartered in Madison, WI, with offices in Chicago, Denver, San Francisco, and New York, we pride ourselves on two things: speed and excellence.
The DataOps team embodies these values and works with a laser-focused objective: to enable data-driven services and analytics for end users, internal stakeholders, and external partners. We are looking for a Data Engineer to contribute to this vision and reap the rewards of joining an exciting company in its high-growth phase.
Your focus will be on developing pipeline frameworks, microservices, and data solutions that can scale to match the company's growth trajectory. We don't lock ourselves into particular technologies, but some we are currently using include AWS, Snowflake, Python, Spark, Lambda, CloudFormation, Docker, Kinesis, MongoDB, and Tableau. You'll also get to join a team of talented individuals who will provide you with hands-on mentorship on topics ranging from engineering to DevOps to analytics. Success in this role requires the ability to analyze challenging problems, propose solutions under the guidance of experienced teammates, and implement designs within timeframes that keep up with business needs.

You possess:
- Solid SQL skills
- Familiarity with Unix systems, shell scripting, and Git
- Experience with relational (SQL), non-relational (NoSQL), and/or object data stores (e.g., Snowflake, MongoDB, S3, HDFS, Postgres, Redis, DynamoDB)
- Interest in building and experimenting with different tools and tech, and sharing your learnings with the broader organization
- The desire to work with other teams in the organization (e.g., Development, Business Intelligence, Data Science) to build tools and solutions that support and help manage data within the Fetch ecosystem
- Bachelor’s degree in Computer Science (or equivalent)
- At least 2 years of relevant full-time work experience
- Excellent written and verbal communication skills
- Familiarity with open source software and dependency management
- ETL process, data pipeline, and/or microservice development experience
- Cloud engineering and DevOps skills (e.g., AWS, CloudFormation, Docker)
- Familiarity with messaging and asynchronous technologies (e.g., SQS, Kinesis, RabbitMQ, Kafka)
- Big data development skills (e.g., Spark, Hadoop, MPP data warehouses)
- Experience with visualization tools (e.g., Tableau)
- Love of dogs! ... Or just tolerance. We're a very canine-friendly workplace.