LEAD DATA ENGINEER at Ulta Beauty | Chicago
Sorry, this job was removed at 2:49 p.m. (CST) on Tuesday, November 12, 2019

OUR STORY:

Ulta Beauty is the largest specialty beauty retailer in the United States and the place for the true beauty enthusiast who gets butterflies as she shops for beauty and experiments throughout our store. We are the only one to provide our guests prestige, mass and salon products and services under one roof – All Things Beauty, All in One Place™. We put our guests at the center of all we do, committing to offer her unrivaled ways to be beautiful in an environment that provides the thrill of exploration and delight of discovery.

 

The individual must be able to effectively troubleshoot application and technical operation issues, communicate root cause analysis results, and recommend improvements to the applications and systems. In addition, he/she needs to identify and implement applicable operational tools, processes, and procedures to facilitate proactive monitoring of the digital channels, and to establish a baseline of service level agreements (SLAs) to measure the overall availability and stability of the systems under his/her responsibility. The individual should be comfortable working with minimal direction for extended periods, as well as assisting other team members to accomplish project success. Assignments may vary from well-defined, long-term projects to urgent ad-hoc work requests critical to business success.

 

POSITION SUMMARY:

The Lead Data Engineer develops, documents, and supports our marketing initiatives in a highly dynamic and fast-paced environment. The individual will work closely with product management, software developers, and system engineers to understand system functionality and the overall system design/architecture so that appropriate operational tools can be implemented to proactively monitor the systems, ensuring maximum uptime. They will participate in quality assurance activities, including performance testing, to validate new business capabilities in order to prepare and plan for production deployments and subsequent operational support.

 

CORE JOB RESPONSIBILITIES:

  • Requirements Definition: Uses established techniques as directed to identify current problems and elicit, specify and document business functional, data and non-functional requirements for various subject areas with clearly defined boundaries.  Assists in defining acceptance tests for the specified requirements.
  • Business Process Modeling: Produces abstract or refined representations of real-world business situations in applications, to aid the communication and understanding of existing, conceptual, or proposed scenarios. Predominantly focused on the representation of processes, roles, data, organization, and time.
  • Programming/Software Development: Contributes to the design, development, testing and documentation of complex programs from agreed specifications, and subsequent iterations, using agreed standards and tools. Assesses own work and leads reviews of colleagues' work. Mentors less experienced colleagues as required.
  • Development Testing: Performs the execution of given test scripts under supervision. Records results and reports issues. Develops an understanding of the role of testing within system development, as a tool for design improvement as well as a validation process.
  • Release Deployment: Manages the processes, systems, and functions used to package, build, test, and deploy changes and updates (bounded as “releases”) into production or non-production environments, establishing or continuing the specified service and enabling controlled and effective handover to Operations and the user community.
  • Application Support: Identifies and resolves issues with applications, following agreed procedures.  Carries out agreed applications maintenance tasks.
  • Customer Service: Liaises as the routine contact point, receiving and handling requests for support. Carries out a broad range of service requests for support by providing information to fulfill requests or enable resolution.
  • Incident Management: Undertakes the identification, registration and categorization of incidents.  Gathers information to enable incident resolution and promptly escalates incidents as appropriate.
  • Research: Within given research goals, assists in selection and review of credible and reliable resources to gain an up-to-date knowledge of any relevant field. Documents work carried out and may contribute sections of material of publication quality.

 

ADDITIONAL RESPONSIBILITIES:

  • Support integrating enterprise systems to the Google Cloud Platform
  • Transform business data utilizing Google BigQuery and Cloud DataStore
  • Guide other team members on Google Cloud Platform related best practices
  • Monitor and retrain NLP data models
  • Lead a team of consultants to provide round-the-clock support and monitoring
  • Work alongside senior leadership to help develop business and technical strategy for our data and analytics practice
  • Architect and implement best-in-class solutions using technologies such as Spark, Cassandra, Kafka, Airflow, Google Cloud DataFlow, BigQuery, Snowflake, Amazon Kinesis / RedShift among others
  • Create robust and automated pipelines to ingest and process structured and unstructured data from source systems into analytical platforms using batch and streaming mechanisms
  • Work with data scientists to operationalize and scale machine learning training and scoring components by joining and aggregating data from multiple datasets to produce complex models and low-latency feature stores.
  • Leverage graph databases and other NoSQL data stores to accomplish tasks that are not possible with traditional databases
  • Assist in our recruiting and interviewing process
  • Develop content and thought leadership that can be published on our web site, or that can be presented at relevant conferences
  • Assist in the development of proposals and other business development related materials
  • Contribute to an internal knowledge base to build expertise and awareness within the organization
  • Document standard operating procedures
  • Build a knowledge base for easy onboarding of support resources
  • Other duties as assigned
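
To make the pipeline responsibilities above concrete, here is a minimal, self-contained sketch of a batch ingestion step that normalizes raw JSON events and aggregates them for an analytical platform. Every name here (guest_id, sku, amount, the schema itself) is a hypothetical illustration, not Ulta Beauty's actual data model, and a production pipeline would run equivalent logic inside a framework such as Cloud Dataflow or Spark rather than a plain Python loop:

```python
import json
from typing import Optional

def normalize_record(raw: str) -> Optional[dict]:
    """Parse one raw JSON event and coerce it into a target schema.
    Returns None for records that fail validation (these would be
    dead-lettered for inspection in a real pipeline)."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if "guest_id" not in event or "sku" not in event:
        return None
    return {
        "guest_id": str(event["guest_id"]),
        "sku": str(event["sku"]),
        "amount_usd": round(float(event.get("amount", 0.0)), 2),
    }

def run_batch(raw_records) -> dict:
    """Batch step: normalize each record, drop invalid rows,
    and aggregate spend per guest."""
    totals: dict = {}
    for raw in raw_records:
        row = normalize_record(raw)
        if row is None:
            continue
        totals[row["guest_id"]] = round(
            totals.get(row["guest_id"], 0.0) + row["amount_usd"], 2
        )
    return totals
```

The same normalize/validate/aggregate shape applies whether the records arrive as a nightly batch file or as a Kafka/Kinesis stream; only the runner around it changes.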

 



 

REQUIREMENTS FOR CONSIDERATION:

  • 3+ years of deep software development and engineering experience required
  • 1+ years of experience in the data and analytics space required
  • 1+ years in key aspects of software engineering such as parallel data processing, data flows, REST APIs, JSON, XML, and microservice architectures required
  • 2+ years of Java and/or Scala experience a plus
  • 2+ years of RDBMS concepts with strong SQL skills required

KNOWLEDGE, SKILLS, AND ABILITIES:

  • Collegial and collaborative working style
  • Must be a self-starter and team player, capable of working and communicating with internal and client resources
  • Strong verbal and written communication skills
  • A passion for technology with a strong desire to constantly be learning and honing your craft
  • An understanding of what it takes to make a modern data solution production-ready, including operations and monitoring strategies and tools
  • Detail oriented with the curiosity that compels you to dive deep into the problem, whether to identify the root cause of a quality issue or understand hidden patterns
  • Understanding and experience with stream processing and analytics tools and technologies such as Kafka, Spark Streaming, Storm, Flink, etc.
  • Experience working in a scrum/agile environment and associated tools (Jira)
  • Proficient with application build and continuous integration tools (e.g., Maven, Gradle, SBT, Jenkins, Git, etc.)
  • General knowledge of big data and analytics solutions provided by major public cloud vendors (AWS, Google Cloud Platform, Microsoft Azure) required
  • Hands-on experience with DevOps solutions such as Jenkins, Puppet, Chef, Ansible, CloudFormation, etc.
  • Any certifications related to Big Data platforms, NoSQL databases or cloud providers are a plus
  • Experience with large data sets and associated job performance tuning and troubleshooting.
  • Ability to wrangle data at scale using tools such as BigQuery, Hive, Spark, and other distributed data processing tools.
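
As a toy illustration of the stream-processing concepts listed above (Kafka, Spark Streaming, Storm, Flink), the tumbling-window aggregation those engines apply at scale can be sketched in a few lines of plain Python; the function name and event shape are assumptions for illustration only:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds: int = 60) -> dict:
    """Assign each (timestamp_seconds, key) event to a fixed,
    non-overlapping window and count events per (window_start, key) —
    the same semantics a Spark Structured Streaming or Flink tumbling
    window applies to an unbounded stream."""
    counts: dict = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)
```

Real engines add what this sketch omits: watermarking, late-data handling, and fault-tolerant state, which is where the operational experience asked for above comes in.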

#LI-CH1


Location

Our satellite campus is in Chicago at 120 S. Riverside Plaza, with 100 workstations and conference rooms that associates can reserve.