Principal Big Data Engineer | Chicago
Sorry, this job was removed at 11:06 a.m. (CST) on Tuesday, November 27, 2018

What we do

Uptake harnesses the power of underutilized data to empower businesses to make informed decisions. We partner with industry leaders to build a predictive analytics software platform that grows smarter in one industry because of what we learn in another. The result is a powerful platform that identifies problems before they happen, ultimately saving money, time and lives.

Why Work Here

Uptake is a values-driven organization, and we are excited about what we do. We’re flexible, honest, hardworking, and collaborative. As a team, we bring our diverse backgrounds, beliefs, and experiences together to solve tough, important problems. We support and challenge one another to bring out the best in each of us, and we might have a little fun along the way. We’re also proud to be one of Chicago’s best places to work in 2018 according to Forbes and Great Place to Work Institute.

We offer generous benefits including health, dental, vision, parental leave, 401K match, and unlimited vacation. We are lifelong learners, and our Uptake University program offers training and professional development on a wide variety of topics. We also have employee-led community groups including Women@Uptake, Pride@Uptake, Science@Uptake, Parents@Uptake, and many more. Learn more at https://www.uptake.com/careers.

What you'll do:

As a Principal Big Data Engineer, you'll be responsible for the architecture of a complex analytics platform that is already changing the way large industrial companies manage their assets. You understand cutting-edge tools and frameworks and can determine the best tool for any given task. You will enable and work alongside our other developers to apply modern technologies in fields such as distributed systems, data ingestion and mapping, and machine learning. We also strongly encourage engineers to tinker with existing tools, stay up to date, and evaluate new technologies—all with the aim of ensuring that our existing systems don't stagnate or deteriorate.

Responsibilities:

As a Big Data Engineer, your responsibilities may include, but are not limited to, the following:

● Build a scalable big data platform designed to serve many different use cases and requirements
● Build a highly scalable framework for ingesting, transforming and enhancing data at web scale
● Develop data structures and processes using components of the Hadoop ecosystem such as Avro, Hive, Parquet, Impala, HBase, Kudu, Tez, etc.
● Establish automated build and deployment pipelines
● Implement machine learning models that enable customers to glean hidden insights about their data
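The ingest/transform/enhance work described above can be sketched, in miniature, as a plain JVM-language pipeline. This is an illustrative toy only: `SensorReading`, the asset names, and the alert threshold are hypothetical stand-ins, not Uptake's actual schema or stack (which would run on the Hadoop ecosystem components listed).

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy ingest -> transform -> enhance pipeline. SensorReading and the
// 90.0 threshold are illustrative, not a real production schema.
class IngestSketch {
    record SensorReading(String assetId, double temperature) {}

    // "Enhance" step: flag readings that exceed a hypothetical alert threshold.
    static String enhance(SensorReading r) {
        String status = r.temperature() > 90.0 ? "ALERT" : "OK";
        return r.assetId() + ":" + status;
    }

    static List<String> pipeline(List<SensorReading> raw) {
        return raw.stream()
                  .filter(r -> !Double.isNaN(r.temperature())) // drop malformed records
                  .map(IngestSketch::enhance)                  // enrich each record
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<SensorReading> batch = List.of(
            new SensorReading("pump-1", 72.5),
            new SensorReading("pump-2", 95.1),
            new SensorReading("pump-3", Double.NaN));
        System.out.println(pipeline(batch)); // [pump-1:OK, pump-2:ALERT]
    }
}
```

At web scale the same filter/map shape would be expressed over a distributed engine rather than an in-memory stream, but the transform logic stays the same.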

Qualifications:

● Bachelor's degree in Computer Science or related field
● 6+ years of system building experience
● 4+ years of programming experience using JVM-based languages
● A passion for DevOps and an appreciation for continuous integration/deployment
● A passion for QA and an understanding that testing is not someone else’s responsibility
● Experience automating infrastructure and build processes
● Outstanding programming and problem solving skills
● Strong passion for technology and building great systems
● Excellent communication skills and ability to work using Agile methodologies
● Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
● Experience with service-oriented (SOA) and event-driven (EDA) architectures
● Experience using big data solutions in an AWS environment
● Experience with JavaScript or associated frameworks
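The event-driven (EDA) item above can be illustrated with a minimal in-memory event bus. This is a toy sketch of the pattern only; a real system would publish through a distributed broker such as the Kafka mentioned under preferred skills.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal in-memory event bus illustrating the event-driven style:
// producers publish events, and decoupled subscribers react independently.
class EventBus {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> handler) {
        subscribers.add(handler);
    }

    void publish(String event) {
        // Every subscriber receives the same event; none knows about the others.
        subscribers.forEach(h -> h.accept(event));
    }
}

class EventBusDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        List<String> audit = new ArrayList<>();
        bus.subscribe(e -> audit.add("logged:" + e));   // e.g. an audit-log service
        bus.subscribe(e -> audit.add("alerted:" + e));  // e.g. an alerting service
        bus.publish("pump-2 overheating");
        System.out.println(audit); // [logged:pump-2 overheating, alerted:pump-2 overheating]
    }
}
```

The design point is decoupling: adding a third consumer requires no change to the publisher, which is what makes the style attractive for platforms that keep growing new use cases.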

Preferred skills:

We value these qualities, but they’re not required for this role:

● Master's or Ph.D. in a related field
● Experience as an open source contributor
● Experience with Akka, stream processing technologies and concurrency frameworks
● Experience with data modeling
● Experience with Chef, Puppet, Ansible, Salt or equivalent
● Experience with Docker, Mesos and Marathon
● Experience with distributed messaging services, preferably Kafka
● Experience with distributed data processors, preferably Spark
● Experience with Angular, React, Redux, Immutable.js, Rx.js, Node.js or equivalent
● Experience with Reactive and/or Functional programming
● Understanding of Thrift, Avro or protocol buffers


Location

We are located in River North, just off the Brown Line's Chicago stop. We also provide a free shuttle service to/from Ogilvie and Union stations.
