SR. SOFTWARE ENGINEER, HADOOP at Epsilon
About the Opportunity
Conversant analyzes anonymized, privacy-safe data at internet scale. We handle 200B+ online interactions a day and record $3.8 trillion in multichannel purchases in our database. We are seeking a Senior Software Engineer with a balance of skills in design, development, and performance tuning to work out of our Chicago, IL location supporting our data science, engineering, and analytics teams.
Duties and Responsibilities:
- Meet with end users to gather feature enhancements and validate platform capabilities.
- Work closely with data science, engineering, analytics and data warehouse teams to define and implement solutions.
- Act as the on-site interface between the data science, engineering, analytics, and data warehouse teams and remote engineering resources.
- Write stories (or requirements) for platform enhancements.
- Work with data science, engineering, analytics and data warehouse teams during sprint cycles to validate design and provide feedback.
- Provide tuning support and analysis for applications.
- Provide framework support for Spark, HBase and Hive.
- Drive best practices for application development.
Requirements:
- 3-5 years of experience in software engineering.
- Consultative background, with the experience or personality to act as “first in”: you will be the one on the scene, developing relationships, delivering daily, communicating broadly, and tackling the problems at hand.
- Able to navigate multiple big data environments.
- Experience building data-driven solutions at scale. Preference given to candidates with demonstrable experience in machine learning, artificial intelligence, statistics, or graph algorithms.
- Experience with Scala and Spark environments.
- Experience with Keras or TensorFlow is a plus.
- Strong presentation and interdisciplinary communication skills (written and verbal).
Education / Certifications:
MS in Computer Science or a related discipline plus 3 years of experience, or BA/BS in Computer Science or a related discipline plus equivalent work experience, required. Hadoop and/or AWS certification is a plus. Spark certification is a plus.