
Orijin

Senior Data Engineer

Reposted 5 Days Ago
Remote
Hiring Remotely in USA
120K-127K Annually
Senior level
As a Senior Data Engineer, you'll build and scale the data platform, ensuring data quality and supporting analytics and machine learning initiatives, while leading data governance, architecture and pipeline ownership.
Orijin is on a mission to prepare every justice-impacted individual for sustainable employment. We are a Public Benefit Corporation (PBC) and certified B-Corporation, with a business model that never charges incarcerated individuals or their friends and families for its technology or services. (https://orijin.works)

The Opportunity

    As a Senior Data Engineer at Orijin, you will be a technical leader responsible for building, scaling, and modernizing the company’s data platform. Your primary focus will be on data modeling, pipelines, architecture, reliability, and performance, ensuring that data is trusted, timely, and production-ready.
     
    You will partner closely with data analysts, engineers, and product managers to shape how data is modeled and used. You will bring an analytical mindset to pipeline design and enable high-quality insights across the organization. You will ensure company-wide confidence in data quality and enable data-driven, differentiating products and services.

Job Requirements

    Data Platform & Architecture Leadership
  • Design and evolve Orijin’s data architecture to support scalability, reliability, and near-real-time use cases.
  • Define standards for data modeling, orchestration, versioning, and deployment.
  • Lead efforts around data governance, security, lineage, and compliance in partnership with stakeholders.
  • Drive the transition toward modern data stack best practices (event-driven ingestion and streaming where appropriate).

    Data Engineering & Pipeline Ownership
  • Own the design, build, and maintenance of production-grade data pipelines across batch and streaming workloads.
  • Build systems that support:
      • Monitoring, alerting, and observability for data pipelines.
      • Backfills, re-runs, and safe rollbacks when failures or data issues occur.
      • High data quality and reliability through automated checks and validation.
  • Optimize pipelines for performance, cost efficiency, and scalability.
  • Lead the move toward near real-time data processing where it delivers business value.

    Tooling & Infrastructure
  • Architect and maintain data systems using tools such as:
      • AWS (S3, RDS, Redshift, Lambda, DMS, Glue, etc.)
      • Data orchestration and ETL tools like Airflow, Airbyte, and dbt
  • Improve CI/CD for data workflows, including testing, deployment, and environment management.
  • Evaluate and introduce new tooling for orchestration, monitoring, and data quality as the platform matures.

    ML & AI Enablement
  • Design, build, and operate data and feature pipelines that support machine learning and AI-driven product features, including training, evaluation, inference, monitoring, and safe rollout to downstream systems.
  • Support vectorization and embedding workflows, including generation, storage, refresh, and backfill of embeddings.
  • Partner with team stakeholders to translate model requirements into scalable, reliable data systems.
  • Contribute to early experimentation and prototyping of ML-powered features. 

    Analytics Enablement & Collaboration
  • Partner with analysts and product teams to ensure pipelines and data models support meaningful analysis and reporting.
  • Provide architectural input on metrics design, data models, and semantic layers.
  • Enable self-service analytics by ensuring clean, well-documented, and accessible datasets.
  • Apply basic proficiency in data visualization platforms to build and maintain data dashboards.
  • Contribute to exploratory analysis or metric definition when deeper engineering context is required.

    Efficiency & Reliability Focus
  • Continuously improve:
      • Query performance
      • Storage and compute costs
      • Pipeline runtime and failure rates
  • Lead incident response for data outages and quality issues, including root-cause analysis and permanent fixes.
  • Establish SLAs and reliability standards for critical data assets.

Qualifications

  • Bachelor’s or advanced degree in Computer Science, Engineering, Data Science, or equivalent work experience.
  • Expertise in data engineering, platform engineering, or backend engineering roles.
  • Proven experience designing and operating large-scale data pipelines and data platforms in production environments.
  • Strong proficiency in Python and SQL for data engineering workflows.
  • Hands-on experience with AWS data tools like Redshift, Lambda, and Glue or equivalents; experience with data orchestration and ETL tools like Airflow, Airbyte, and dbt in production environments.
  • Experience implementing monitoring, alerting, and data quality frameworks.
  • Familiarity with streaming or near-real-time systems (e.g., Kafka, Kinesis, or similar) is a plus.
  • Hands-on experience with PostgreSQL and NoSQL-style databases like MongoDB, DynamoDB, etc.
  • Experience supporting machine learning or AI workflows (e.g., feature engineering, embedding pipelines, model inputs/outputs, vector databases).
  • Strong collaboration and communication skills, with the ability to translate business and analytical needs into robust technical systems.
  • Experience with data governance, security, and compliance in regulated or sensitive-data environments.

Equal Opportunity Employer:
Orijin is an Equal Opportunity Employer and firmly believes in creating a workplace that respects and values diversity of cultural, ethnic, and experiential backgrounds. We encourage all qualified applicants to apply. As an organization committed to the successful reentry of justice-involved persons, we strongly encourage candidates who share the life experiences of the citizens we serve to apply.
 
Disclaimer: The above statements are intended to describe the general nature and level of work being performed by the individual assigned to this position. They are not intended to be an exhaustive list of all duties, responsibilities, and skills required. Job duties may change, or new duties may be assigned, at any time with or without notice.


