Ocient's Professional Services team is seeking an experienced and savvy Data Engineer to join our world-class team. We are looking for ultimate team players who are dedicated to improving our product and Customer experience with every interaction. This Data Engineer will act as a technical lead on some of the most exciting exabyte-scale data projects at private and public sector organizations, engaging from the Proof of Concept (PoC) stage through to production implementation in high-impact environments.
The hire will be responsible for customizing and optimizing our data and data pipeline architecture, as well as optimizing data flow for Customer implementations. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys building and optimizing data systems. We are looking for individuals who are driven to build cutting-edge products, solve complex problems, and push the bounds of what is possible in performance and scale. They must be self-motivated, enthusiastic, and driven. They will work with every part of the internal Ocient organization to help our Customers adopt and accelerate their use of Ocient's products and data solutions.
- Work with Ocient's Solutions Owner and the Customer to design, implement and deploy the recommended solutions based on the Customer's requirements
- Create and maintain optimal data pipeline architecture
- Assemble large, complex sets of data that meet non-functional and functional business requirements
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Work closely with Ocient's teams at all levels to assist with data-related technical issues and help ensure the success of Customer project engagements
- Document and present complex architectures for the Customer's technical teams
- Drive projects with Customers to successful completion
- Write and produce technical documentation and knowledge base articles
- Keep current with Big Data ecosystem technologies
- Possible travel up to 25% (post-COVID)
- Ability to understand and translate Customer requirements into technical requirements
- Strong experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Strong experience performing root cause analysis, debugging, and profiling on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Excellent analytical skills for working with structured and unstructured datasets
- Ability to build processes that support data transformation, workload management, data structures, dependency management, and metadata
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Understanding of network configuration, devices, protocols, speeds, and optimizations
- Experience supporting and working with cross-functional teams in a dynamic environment
- Excellent organizational, verbal, and written communication skills
- We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with relational SQL databases
- Experience with data pipeline tools, including Kafka
- Experience with stream-processing systems, including Kafka Streams
- Experience with object-oriented/object function scripting languages: Python, Java, etc.
- Experience with writing to network-based APIs, preferably REST/JSON
- Experience with implementing software and/or solutions in the enterprise Linux or Unix environments
Ocient is a Chicago-based, venture-funded startup building a SQL-compliant, exabyte-scale database platform that achieves better performance than Hadoop and NoSQL systems. It is a distributed system written in C++ and optimized for NVMe drives, RDMA networks, and high-core-count processors. We are led by a management team with seven successful startup exits, including Cleversafe, one of the largest software startup exits in Chicago's history.