Big Data Administrator
What we do:
Uptake harnesses the power of underutilized data to empower businesses to make informed decisions. We partner with industry leaders to build a predictive analytics software platform that grows smarter in one industry because of what we learn in another. The result is a powerful platform that identifies problems before they happen, ultimately saving money, time and lives.
What you’ll do:
As a Big Data Administrator, you’ll be responsible for the administration and governance of a complex analytics platform that is already changing the way large industrial companies manage their
assets. A Big Data Administrator understands cutting-edge tools and frameworks, and is able to
determine what the best tools are for any given task. You will enable and work with our other
developers to use cutting-edge technologies in the fields of distributed systems, data ingestion and
mapping, and machine learning, to name a few. We also strongly encourage everyone to tinker with
existing tools, and to stay up to date and test new technologies—all with the aim of ensuring that our
existing systems don’t stagnate or deteriorate.
As a Big Data Administrator, your responsibilities may include, but are not limited to, the following:
● Build a scalable Big Data Platform designed to serve many different use cases and requirements
● Build a highly scalable framework for ingesting, transforming and enhancing data at web scale
● Develop data structures and processes using components of the Hadoop ecosystem such as Avro, Hive, Parquet, Impala, HBase, Kudu, Tez, etc.
● Establish automated build and deployment pipelines
● Implement machine learning models that enable customers to glean hidden insights about their data
● Implement security and integrate with components such as LDAP, AD, Sentry and Kerberos
● Apply row-level and role-based security concepts such as inheritance
● Establish benchmarks that define predictable scalability thresholds
What you’ll need:
● Bachelor’s degree in Computer Science or related field
● 6+ years of system building experience
● 4+ years of programming experience using JVM based languages
● A passion for DevOps and an appreciation for continuous integration/deployment
● A passion for QA and an understanding that testing is not someone else’s responsibility
● Experience automating infrastructure and build processes
● Outstanding programming and problem solving skills
● Strong passion for technology and building great systems
● Excellent communication skills and ability to work using Agile methodologies
● Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
● Experience with service-oriented (SOA) and event-driven (EDA) architectures
● Experience using big data solutions in an AWS environment
● Experience with NoSQL and distributed data stores: Cassandra, HDFS and/or Elasticsearch
We value these qualities, but they’re not required for this role:
● Master’s or Ph.D. in a related field
● Experience as an open source contributor
● Experience with Akka, stream processing technologies and concurrency frameworks
● Experience with data modeling
● Experience with Chef, Puppet, Ansible, Salt or equivalent
● Experience with Docker, Mesos and Marathon
● Experience with distributed messaging services, preferably Kafka
● Experience with distributed data processors, preferably Spark
● Experience with Angular, React, Redux, Immutable.js, Rx.js, Node.js or equivalent
● Experience with Reactive and/or Functional programming
● Understanding of Thrift, Avro or protocol buffers
If you think you would be a good fit for this role, and are interested in joining the best engineering team in Chicago, please submit your resume along with a cover letter.