Senior Software Engineer - Streaming Data
Discover. A brighter future.
With us, you’ll do meaningful work from Day 1. Our collaborative culture is built on three core behaviors: We Play to Win, We Get Better Every Day & We Succeed Together. And we mean it — we want you to grow and make a difference at one of the world's leading digital banking and payments companies. We value what makes you unique so that you have an opportunity to shine.
Come build your future, while being the reason millions of people find a brighter financial future with Discover.
Job Description
As a Senior Software Engineer – Streaming Data, you will provide engineering leadership to build products that support our streaming data pipelines and other real-time solutions. You will be on the cutting edge of finding and integrating new technologies and platforms for use by a large community of companywide engineers. We are looking for a talented individual who is technically strong, creative, and able to work across many new technologies. You will be expected to drive innovation, research and develop new solutions, and operate without prescriptive directives from management.
Some examples of initiatives you will work on include building out our Change Data Capture platform, Kafka/ksqlDB, and developing our real-time data transformation/consumption frameworks. The products your team develops must keep user experience top of mind as we strive to create products that bring delight and efficiency to our users. You will work with technologies such as Java, Python, Spark & Spark Streaming APIs, SQL, Kafka, Qlik Replicate, Snowflake, AWS/Google Cloud services, and CI/CD tools.
Responsibilities
- Develop data-driven solutions utilizing current and next-generation technologies to meet evolving business needs.
- Quickly identify opportunities and recommend possible technical solutions.
- Bring a strong desire and the capability to automate everything.
- Utilize multiple development languages & tools such as Qlik Replicate, Python, Java and SQL to build prototypes and evaluate results for effectiveness and feasibility.
- Operationalize open source data-analytic tools for enterprise use.
- Utilize the tools available to you across AWS and Google Cloud services.
- Develop real-time data ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark, Python, AWS/Google Services
- Develop custom data pipelines (cloud and locally hosted).
- Support deployed data applications and analytical models as a trusted advisor to Data Scientists and other data consumers, identifying data problems and guiding issue resolution with partner Data Engineers and source data providers.
- Provide subject matter expertise in the analysis, preparation of specifications and plans for the development of data processes.
- Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
- Develop and maintain complex front ends with a focus on user experience.
- Develop and maintain backend systems.
- Work directly with business partners to understand requirements and outline solutions; work with the engineering team to innovate on and enhance their development practices and processes.
- Support live systems to ensure business continuity.
- Create and maintain DevOps processes and application infrastructure, and utilize cloud services (including database systems and models/schemas).
Required Skills
- Bachelor's Degree in Information Technology or a related field
- 4+ years of experience in Computer Science, Information Technology, or other related field
- In lieu of a degree, 6+ years of experience in Computer Science, Information Technology, or other related field
Desired Skills
- Deep understanding of and implementation experience with AWS data services such as Lambda, Kinesis, SQS/SNS, EMR, S3, CloudWatch, etc.
- Experience with ELK stack
- Experience with the Snowflake MPP database
- Deep understanding of and efficiency with the Hadoop technology stack
- Proficient in Spark application coding
- Proficient in Java or Python development
- Expert-level knowledge of SQL and relational databases
- Experience as part of an Agile engineering or development team
- Strong understanding of object-oriented principles with an ability to write clean code
- Strong experience working with a relational database and NoSQL database
- Strong experience with CI/CD pipelines with Jenkins or similar; Git/GitHub; Artifactory
- Proven skills in high availability and scalability design, as well as performance monitoring
- Experience developing and implementing API service architecture
- Experience working in a cloud environment such as AWS, GCP, or Azure
- Understanding of messaging systems such as MQ, RabbitMQ, Kafka, or Kinesis
- Experience building secure web applications with user authentication
- Experience with relational databases such as MySQL or Postgres and understanding of columnar data stores such as Redshift or Snowflake
- Strong technical understanding of data architecture, data quality and related technologies
- Collaborative individual who excels at working within a team and with business partners to identify, develop, and deliver innovative data solutions
- Ability to demonstrate leadership to managers and peer-level staff
#Remote
What are you waiting for? Apply today!
The same way we treat our employees is how we treat all applicants – with respect. Discover Financial Services is an equal opportunity employer (EEO is the law). We thrive on diversity & inclusion. You will be treated fairly throughout our recruiting process and without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status in consideration for a career at Discover.