Gladson is looking for a hands-on Data Engineer to work as a member of our data, platform, and analytics team, focusing on designing and implementing optimal data engineering solutions that scale with unpredictable data patterns while maintaining and monitoring them. This is an individual contributor position, requiring the ability to take on complex requests, transform them into clean data solutions, and integrate that data with the architecture used across the company. The role will also involve evaluating and implementing new tools and frameworks, or extending existing ones, by leveraging on-premises and cloud services in support of the company's data science, data warehousing, and visualization initiatives.
This full-time role will be based in Gladson's Chicago office. The interview process for this position requires the completion of a case study, which will be sent to selected candidates after an initial phone screen with the recruiting team.
- Design and implement end-to-end automated data pipelines, from ingestion to delivery, selecting and integrating the tools and frameworks required to provide the requested capabilities per business requirements.
- Perform the capacity planning required to create and maintain enterprise relational and NoSQL databases and meet processing demands, while providing all facets of database administration for production, development, and quality assurance systems.
- Implement ETL processes, monitor performance, and advise on any necessary infrastructure changes (e.g., defining data retention policies and database tuning parameters).
- Design and build self-improving software by leveraging machine learning techniques and technology at scale.
- Apply modern data processing technology stacks, streaming data architectures, and technologies for real-time, low-latency data processing.
- Proficient understanding of distributed computing principles and the integration of data from multiple data sources and formats.
- Leverage data mining, statistics, and machine learning to develop best-in-class analysis techniques and data visualizations that answer strategic client/category questions.
- Knowledge of various ETL techniques and frameworks, such as Flume, Talend, and Spoon, and of messaging systems such as Kafka.
- Understanding of how to build solutions for data science and client delivery, including productionizing machine learning models and collaborating across various teams.
- Understanding of agile development methods, including core values, guiding principles, and key agile practices.
- Understanding of the theory and application of Continuous Integration/Delivery.
- Experience with SQL and NoSQL databases, such as MSSQL, PostgreSQL, MongoDB, Cassandra, and Neo4j.
- Good understanding of cloud platforms (GCP/AWS) and of the Lambda Architecture, along with its advantages and drawbacks.
- Knowledge of High Availability (HA) and Disaster Recovery (DR) options.
- Bachelor's degree in computer science, mathematics, engineering, or equivalent.
- 4+ years in BI or data engineering, working with structured and unstructured data formats, along with batch and streaming framework design and implementation.
- Demonstrated knowledge of and experience with software development and deployment in on-premises and cloud environments.
- Strong consultative skills to establish relationships across the broader organization.
- Comfort communicating and interacting with cross-functional teams, as well as understanding and translating the science of data for a more general audience.
- Desire to "roll up your sleeves" and get into the heart of the business.
- Demonstrable ability to work across a global matrix organization, putting strong communication, leadership, and influencing skills to work.