As a Data Engineer, you will architect and implement data solutions, collaborate with cross-functional teams, and support AI and ML tasks while managing data pipelines using technologies like Databricks and AWS.
hatch I.T. is partnering with Packaged Agile to find a Data Engineer. See details below:
About the Role:
As a Data Engineer, you will play a crucial part in architecting and implementing data solutions that drive business value and enable informed decision-making. As a key member of their team, you will collaborate closely with cross-functional teams to understand business requirements, design robust data pipelines, and leverage cutting-edge technologies such as Databricks, Python, and various AWS services. If you thrive in a dynamic and collaborative environment and are driven by the opportunity to solve complex challenges, we encourage you to apply and join their team at Packaged Agile.
They are proud to be a United States Small Business Administration (SBA) HUBZone certified company. Though not a requirement for this position, they strongly prefer candidates who live in an SBA HUBZone.
About the Company:
Packaged Agile is a boutique, agile, and digital services-focused coaching and consulting company. We help our clients deliver measurable value to their customers and aspire to bring "good agile" to clients across the public and private sectors.
We hire passionate people who believe in agility, understand how to meet people where they are, and treat them with respect while helping them succeed. And we want to hire people who see their continuous growth and improvement as a way of life.
Responsibilities:
- Collaborate with Artificial Intelligence and Machine Learning team members to operationalize data pipelines and ML tasks.
- Provide day-to-day support for deploying Python-native ML pipelines.
- Implement efficient data ingestion strategies and change data capture mechanisms, leveraging Databricks and AWS services, to ensure real-time and accurate data updates for analytics and decision-making processes.
- Utilize Databricks to design, develop, and maintain scalable data pipelines, enabling efficient extraction, transformation, and loading (ETL) processes.
- Leverage Python programming expertise to enhance data processing scripts, ensuring robustness, efficiency, and maintainability in AWS Lambda environments.
- Execute data engineering tasks to facilitate AI/ML capabilities.
- Communicate results effectively to diverse audiences through presentations.
- Provide architectural leadership, technical support, and advisory services to ensure that identity management system integrations meet security requirements.
- Support leadership engaging with senior-level executives at a public-facing Federal agency, offering subject matter expertise in security architecture and other key domains.
- Identify and address problems and inefficiencies, implementing solutions.
- Identify and resolve data bottlenecks, leveraging automation where applicable.
- Establish and manage data lifecycle policies, including retention, backups, and restores.
Requirements:
- US Citizenship with the ability to obtain a Public Trust clearance or higher
- Minimum 7 years of experience in software development focused on data, with proficiency in designing and maintaining data architectures that meet business needs.
- Experience in acquiring, processing, and optimizing datasets.
- Ability to support AI/ML teams by enhancing feature engineering code.
- Strong familiarity with Spark, including staying current with its regular releases.
- Skilled in creating, managing, and optimizing Spark Structured Streaming jobs, leveraging technologies like Delta Live Tables and/or dbt.
- Competence in developing and managing ksqlDB and Kafka Streams queries/code.
- Experience maintaining and updating Python-based data processing scripts executed on AWS Lambda.
- Commitment to conducting unit tests for all Spark, Python data processing, and Lambda code.
- Bachelor’s degree in computer science or a related field.
- Strongly preferred: candidates who live in an SBA HUBZone
Top Skills
AWS
AWS Lambda
Databricks
dbt
Delta Live Tables
Kafka Streams
ksqlDB
Python
Spark