Principal Cloud Data Engineer
At Discover, be part of a culture where diversity, teamwork and collaboration reign. Join a company that is as focused on its employees as it is on its customers, and is consistently awarded for both. We’re all about people, and our employees are why Discover is a great place to work. Be the reason we help millions of consumers build a brighter financial future, and achieve yours along the way with a rewarding career.
Discover Financial Services is seeking a Principal Cloud Data Engineer to join our Data Operations team. Successful candidates will partner with business stakeholders to understand their data needs and build data pipelines in on-premises environments as well as in the cloud. Additionally, you will provide engineering leadership to create and enhance data solutions that enable seamless integration and flow of data across our data ecosystem. You will also provide lead-level technical consulting to peer data engineers during design and development of highly complex and critical data projects. Some of these projects will include designing and developing data ingestion and processing/transformation frameworks leveraging open-source technologies such as Java, Python, Spark, Scala, and PySpark.
Duties may include:
- Be part of the Data Engineering team, translating business and technology requirements into our ETL/ELT architecture using tools such as Informatica or Ab Initio.
- Collaborate with cross-functional teams such as AWS cloud and platform engineering, DBAs, and business teams.
- Develop real-time and batch data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, Kinesis, Java, NoSQL databases, and AWS EMR.
- Develop data-driven solutions utilizing current and next-generation technologies to meet evolving business needs.
- Quickly identify opportunities and recommend possible technical solutions.
- Develop custom data pipelines (cloud and locally hosted).
- Provide support for deployed data applications and analytical models; act as a trusted advisor to data scientists and other data consumers by identifying data problems and guiding issue resolution with partner data engineers and source data providers.
- Provide subject matter expertise in the analysis, preparation of specifications and plans for the development of data processes.
- Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
- Provide senior-level technical consulting to peer data engineers during design and development of highly complex and critical data projects.
- Provide engineering leadership to create and enhance data solutions that enable seamless integration and flow of data across the data ecosystem.
- Design and develop data ingestion frameworks leveraging open-source tools such as NiFi, Sqoop, Hive, Java, Pig, and Python, as well as data processing/transformation frameworks leveraging open-source tools.
- Provide support for deployed data applications and analytical models.
- Design and develop real-time processing solutions using open-source tools.
At a minimum, here’s what we need from you:
- Bachelor’s Degree in Computer Science or related field
- 6+ years of experience in Data Platform Administration/Engineering
If we had our say, we’d also look for:
- 8+ years of experience in Data Platform Administration/Engineering
- Experienced in Agile methodologies.
- Experience working with relational and big data databases such as Teradata, Aster, Oracle, and Netezza, as well as open-source databases such as PostgreSQL and MySQL.
- Proficiency in at least one scripting or programming language such as Shell, Python, Scala, or Java.
- Good understanding of big data technology trends, with knowledge of technologies such as Kinesis, Kafka, Spark, Hive, and PySpark.
- Knowledge of infrastructure as code, immutable infrastructure, and continuous integration/deployment practices.
- Experience with one of the ETL tools (Ab Initio, Informatica, or DataStage) and with tools/frameworks within the big data ecosystem.
- Hands-on experience with AWS-based solutions such as RDS, Lambda, DynamoDB, Redshift, EC2, and S3, as well as Snowflake and Chef cookbooks.
- Experience in version control systems such as Git, GitLab, etc.
- Ability to work in a fast-paced, rapidly changing environment.
- AWS Cloud Services certification is a big plus.
Discover Financial Services is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or protected veteran status, among other protected characteristics.
So, what are you waiting for? Apply today!