Principal Data Engineer/Architect
Braintree lets you move money from one place to another safely and securely. Every time you pay for an Uber ride, book a stay through Airbnb, or pay with PayPal when you check out online, you’re probably using our product. It sounds complex (and it is), but we make it so simple you can’t tell we’re there.
We solve world-scale problems and provide opportunities to match. We build diverse teams that recognize our strengths and allow us to work on our weaknesses. You bring skills and a relentless focus on the customer, and we'll provide the support you need to do the best work of your life.
Check us out: GitHub | Blog | Twitter | LinkedIn | Facebook | The Muse | Glassdoor
Our focus is partnering with smart engineers who are passionate about their craft and excited to build software for our unique solutions in the payments space.
At Braintree, developers have the chance to work on various teams and stacks. Although most of our software is written in Ruby, we believe in using the best tool for the job. For example, we’ve written data platforms in Clojure, payment gateways in Java, and contextual commerce in Python. Here’s more:
- Communication is key to our process, and we don't want to hinder it with walls. Many teams program in pairs, which means you always have another set of eyes to help you.
- We practice test-driven development and believe that it helps us deliver simple solutions focused on real customer needs. We have no QA department – developers test, release and monitor their own code.
- We keep the team in sync with daily stand-ups and have regular retrospectives to discuss things that are going well and opportunities for improvement.
- We value unique perspectives brought by diverse backgrounds and experiences. A broad range of ideas and perspectives helps us create the best possible product.
As the Principal Data Engineer / Architect, you will be an integral part of our Data and Analytics organization, driving the design, development and implementation of the platforms and tooling we use to ingest, store and analyze massive quantities of data every day. You’ll also partner closely with teams across Product/Engineering, Analytics and Data Science (among others) to translate business needs into functional requirements for best-in-class data solutions and products that empower our internal stakeholders and merchants to turn insights into action.
- Provide thought leadership and architectural direction for building highly scalable, resilient, distributed systems based on modern “Big Data” architectural paradigms and industry best practices.
- Lead the evaluation, planning and deployment of proprietary, open-source or third-party tools for streaming and batch ingestion of data from heterogeneous systems.
- Foster a culture of engineering excellence, guiding establishment of standards and common practices around team collaboration, code quality, testing and benchmarking, continuous integration/deployment and monitoring.
- Build collaborative partnerships with architects, technical leads, product managers and key individuals within other cross functional groups.
- 8+ years of experience in software development
- 5+ years of experience leading the design, deployment and maintenance of large-scale distributed systems, streaming platforms, high-volume data processing frameworks and storage layers, query execution engines, ETL pipelines and workflow orchestration tools, including:
- Primary Hadoop ecosystem components (HDFS, Hive, Pig, Spark, Storm, HBase)
- Workflow/ETL orchestration tools (Apache Oozie, Airflow, Luigi)
- Streaming technologies (Kafka, Spark Streaming, Flink, Samza)
- Columnar and relational database platforms (PostgreSQL, MySQL, Redshift, Cassandra)
- NoSQL data stores (DynamoDB, Solr, or Elasticsearch)
- Experience driving engineering excellence through standards around:
- DevOps and continuous integration/deployment (Git, Jenkins)
- Virtualization and server automation technologies (Chef, Puppet, Docker, Kubernetes)
- Testing and quality assurance
- Experience with data science/ML workflow enablement, tools and infrastructure
- Experience with Agile methodologies/practices (Scrum, Lean, MVP)
- Excellent communication and collaboration skills
- B.S. or advanced degree in Computer Science, Engineering or equivalent experience
- Open dev days: every two weeks we spend a day working on projects that interest us and help us expand our skills and knowledge.
- Participation in the technology community: we help cover travel and attendance costs for conferences, and we offer opportunities and tools for speaking.
- Check out our Careers page for more company perks.
We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply. You can also check out our FAQ!