Data Engineer
Sorry, this job was removed at 1:12 p.m. (CST) on Tuesday, February 6, 2018
About the Role
We are looking for a talented, passionate Data Engineer with the skill and desire to contribute to our team. In simplest terms, you'll be building the foundation upon which we grow our business.
If you have experience extracting knowledge and actionable information from multiple data sources using Scala and/or Python, then we would love to talk with you. If you are the type of person who comes to work every day expecting to learn, contribute, teach, and have fun, then we think you will fit right in.
About our Team
We aim to derive meaning from our data, enabling us to run our business better, and also equip our clients to do the same. We believe in agile software development (lowercase 'a') and use elements from Scrum and Lean as a base for how we manage our work. 'Inspect & Adapt' is more than just a catchphrase to us. We pair regularly as-needed, but not as a rule. We are willing to have every problem under the sun exactly once, in exchange for never having the same problem twice.
Core Responsibilities & Qualifications
- In this role, you will focus on the design, implementation, and operation of data management systems to meet Brad's Deals' business needs. This includes designing how the data will be stored, consumed, and integrated into our systems.
- You will take business requirements, transform them into data models, and develop ETL processes to populate those models. We are a small (and growing) team so it's important that you enjoy working a project from end to end.
- You should have hands-on experience with a variety of data warehousing concepts and practices, covering both technical development and less technical practices such as data governance. These include data manipulation, database partitioning, data structures, data management, and engineering best practices.
- You have experience with JVM-based languages (such as Java, Scala, or Clojure) and scripting languages (such as Python).
- In addition to developing and implementing ETL processes, you have experience dealing with performance and scaling issues.
- The ideal candidate will have extensive experience with star schemas, dimensional models, and data marts, both in traditional data warehouses and in big data / advanced analytics domains.
- You have 3-7 years of end-to-end experience with data warehousing and BI systems (including data modeling, ETL architectures, and OLAP).
- You have hands-on experience with AWS and the different data tools offered on that platform (Redshift, EMR, Kinesis, S3, etc.).
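To give a flavor of the dimensional-modeling work described above, here is a minimal, illustrative sketch in Python of an ETL step that populates a tiny in-memory star schema: surrogate-keyed dimensions plus an aggregated fact table. All names and data here are hypothetical examples, not taken from any actual Brad's Deals system.

```python
from collections import defaultdict

# Hypothetical raw click events (illustrative data, not real).
raw_events = [
    {"user": "alice", "deal": "tv-sale", "date": "2018-02-01", "clicks": 3},
    {"user": "bob",   "deal": "tv-sale", "date": "2018-02-01", "clicks": 1},
    {"user": "alice", "deal": "laptop",  "date": "2018-02-02", "clicks": 2},
]

def build_dimension(events, attribute):
    """Assign a surrogate key (1, 2, ...) to each distinct attribute value."""
    dim = {}
    for event in events:
        dim.setdefault(event[attribute], len(dim) + 1)
    return dim

def build_fact_table(events, user_dim, deal_dim, date_dim):
    """Aggregate clicks per (user, deal, date), keyed by surrogate keys."""
    totals = defaultdict(int)
    for event in events:
        key = (user_dim[event["user"]],
               deal_dim[event["deal"]],
               date_dim[event["date"]])
        totals[key] += event["clicks"]
    return [
        {"user_key": u, "deal_key": d, "date_key": t, "clicks": c}
        for (u, d, t), c in totals.items()
    ]

user_dim = build_dimension(raw_events, "user")
deal_dim = build_dimension(raw_events, "deal")
date_dim = build_dimension(raw_events, "date")
fact_rows = build_fact_table(raw_events, user_dim, deal_dim, date_dim)
```

In a production warehouse the same idea would be expressed as SQL against, say, Redshift tables rather than Python dictionaries, but the shape of the work is the same: normalize descriptive attributes into dimensions, then load facts keyed by those dimensions.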