Lead Data Engineer
Job Description
At Discover, be part of a culture where diversity, teamwork and collaboration reign. Join a company that is just as focused on its employees as it is on its customers and is consistently awarded for both. We’re all about people, and our employees are why Discover is a great place to work. Be the reason we help millions of consumers build a brighter financial future, and achieve yours along the way with a rewarding career.
As a Lead Data Engineer, you will provide technical (design & development) leadership to the Enterprise Data Warehouse Team in development of Extract/Transform/Load (ETL) applications that will interface with all key Discover applications.
The position requires excellent communication skills for understanding the business vision and the ability to translate that vision into technical artifacts. A strong technical analysis and design background is also a must-have, ensuring that technical deliverables provide flexible, architecturally sound infrastructure that can be reused in future analytical and operational data stores in AWS.
Job Responsibilities:
- Develop data driven solutions utilizing current and next generation technologies to meet evolving business needs.
- Quickly identify opportunities and recommend possible technical solutions.
- Develop application systems that comply with the standard system development methodology and concepts for design, programming, backup, and recovery to deliver solutions that have superior performance and integrity.
- Contribute to determining programming approach, tools, and techniques that best meet the business requirements.
- Understand and follow the PDP process to develop, deploy and deliver the solutions.
- Be proactive and diligent in identifying and communicating design and development issues.
- Provide business analysis and develop ETL code and scripting to meet all technical specifications and business requirements according to the established designs.
- Offer system support as part of a support rotation with other team members.
- Utilize multiple development languages/tools such as Python, Spark, and Hive to build prototypes and evaluate their effectiveness and feasibility.
- Operationalize open source data-analytic tools for enterprise use.
- Develop real-time data ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark, Python, and AWS-based solutions.
- Develop custom data pipelines (cloud and locally hosted).
- Work heavily within the cloud ecosystem and migrate data from Teradata to an AWS-based platform.
- Support deployed data applications and analytical models as a trusted advisor to Data Scientists and other data consumers, identifying data problems and guiding issue resolution with partner Data Engineers and source data providers.
- Provide subject matter expertise in the analysis, preparation of specifications and plans for the development of data processes.
- Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
Minimum Qualifications
At a minimum, here’s what we need from you:
- Bachelor’s Degree in Computer Science, Business Computer Systems, or related technical field
- 6+ years of experience in Software Engineering or related field
- In lieu of degree, 8+ years of experience in Software Engineering or related field
Preferred Qualifications
If we had our say, we’d also look for:
- 6+ years of systems development and analysis experience in designing, developing, implementing & maintaining applications
- Deep knowledge of and strong skills in SQL and relational databases
- Strong analytical skills and experience with writing and performance tuning complex SQL queries
- Knowledge of Data Warehouse technology (Unix/Teradata/Ab Initio)
- Experience with Amazon Web Services (AWS) based solutions
- Experience in migrating ETL processes (not just data) from relational warehouse databases to AWS-based solutions
- Experience in building and utilizing tools and frameworks within the Big Data ecosystem, including Hadoop, Kafka, Hive, Spark, HBase, and NoSQL
- Strong written and verbal communication skills
- Ability to think independently and work with various teams on projects
- Ability to handle multiple initiatives concurrently
- Experience working in a collaborative team environment
#LI-KE
The same way we treat our employees is how we treat all applicants – with respect. Discover Financial Services is an equal opportunity employer (EEO is the law). We thrive on diversity & inclusion. You will be treated fairly throughout our recruiting process and without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status in consideration for a career at Discover.