Lead Platform Engineer (Hadoop)
Discover. A more rewarding way to work.
At Discover Financial Services, you’ll find yourself in the company of some of the industry’s smartest and most reliable professionals. And at a company that rewards dedication, values innovation and supports growth.
Thrive in an environment that promotes teamwork and shared success. Build on a foundation of mutual respect. Join the company that understands rewarding careers like no other, with this exceptional opportunity:
Lead Platform Engineer (Hadoop)
Job Description
We are seeking bright, talented and driven engineers to join a team of passionate and innovative technologists. In this role, you will gain hands-on engineering and administration experience on Discover's next-generation platforms supporting the most critical payments applications across all Discover Network brands.
Responsibilities:
- Serve as a contributing member of a high-performing engineering and administration team responsible for critical Hadoop application clusters
- Provide technical expertise to design efficient engineering solutions for next-generation platforms built on the following technologies: Kafka, Storm, Spark, Solr, ZooKeeper, NiFi, HBase, HDFS, Hive, YARN, Ranger, Knox, Ambari and Kerberos
- Big Data cluster platform provisioning and administration
- Big Data cluster resiliency and performance engineering and administration
- Big Data cluster security implementations
- Big Data engineering and administration for high availability, replication and disaster recovery solutions
- Big Data database engineering and administration
- Leverage DevOps techniques and practices, including continuous integration, continuous deployment and test/build automation, working with key application architects and application developers
- Promote a risk-aware culture; ensure efficient and effective risk and compliance management practices by adhering to required standards and processes
- Serve on the Level 2/3 "go-to" team for operational support
Skills
- Bachelor’s Degree (preferably in Information Technology) or the equivalent work experience
- 5+ years working within Infrastructure Technology
- 4+ years' experience with Hadoop cluster engineering and administration
- Hands-on experience with Kafka, Storm, Spark, Solr, ZooKeeper, NiFi, HBase, HDFS, Hive, YARN, Ranger, Knox, Ambari and Kerberos
- Experience leveraging DevOps techniques
#LI-MF1
We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state or local protected class.