
Kraken Digital Asset Exchange

Senior AI Compute Infrastructure Engineer

Posted Yesterday
Remote
Hiring Remotely in United States
$127K-$254K Annually
Senior level
Building the Future of Crypto 

Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.

What makes us different?

Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.

Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.

As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.

Become a Krakenite and build the future of crypto!

Proof of work
 
The team

Kraken is building a dedicated AI Compute and Infrastructure team to power the next generation of model training, inference, evaluation, and experimentation across the exchange. This team sits within engineering leadership and owns the infrastructure layer that lets Kraken run AI workloads with control, speed, reliability, and cost discipline.

The team is responsible for GPU and accelerator infrastructure, cluster operations, scheduling, model serving, observability, capacity planning, and cost-efficient compute at scale. This is the backbone that allows Kraken to train, serve, evaluate, and iterate on AI systems in-house where it matters for privacy, latency, reliability, cost, or product differentiation.

You will join a small, senior, high-impact team working directly with AI/ML researchers, platform engineers, security teams, and product teams. The mandate is simple: make Kraken's AI ambitions real by building compute infrastructure that is fast, dependable, efficient, and production-grade.

 
The opportunity
  • Own and operate GPU and accelerator clusters used for training, inference, evaluation, and experimentation, including drivers, runtimes, kernels, device plugins, node configuration, scheduling primitives, and workload isolation.

  • Design infrastructure that enables Kraken teams to run models locally on GPUs where it is strategically and economically preferable, reducing unnecessary dependency on external providers and containing compute costs.

  • Build and improve scheduling, orchestration, placement, quota management, and utilization systems across heterogeneous accelerator environments.

  • Optimize inference pipelines for latency, throughput, reliability, memory efficiency, and cost using frameworks such as vLLM, Triton Inference Server, TensorRT, or equivalent serving stacks.

  • Partner with ML engineers and researchers to remove bottlenecks in training, evaluation, batch inference, online inference, deployment, and production debugging workflows.

  • Build observability for GPU utilization, memory pressure, queue depth, saturation, token throughput, request latency, failed workloads, capacity pressure, and spend.

  • Drive reliability, incident response, alerting, runbooks, and post-incident improvements for always-on AI compute infrastructure.

  • Evaluate and integrate new hardware, cloud instance families, specialized accelerators, runtimes, schedulers, and serving frameworks as the AI infrastructure landscape evolves.

  • Build tooling that makes GPU usage visible, accountable, and easier for internal teams to consume without needing to become infrastructure experts.

  • Contribute to long-term architecture decisions that balance performance, cost efficiency, scalability, operational simplicity, and production safety.

 
Skills you should HODL
  • 5+ years of infrastructure engineering experience, with significant time spent on GPU compute, ML infrastructure, distributed systems, high-performance computing, or large-scale production platforms.

  • Hands-on experience operating GPU clusters or accelerator-backed infrastructure in production or production-like environments, including scheduling, orchestration, utilization monitoring, and cost optimization.

  • Strong systems engineering fundamentals across Linux, networking, storage, containers, Kubernetes, distributed runtimes, and production debugging.

  • Experience with ML serving frameworks such as vLLM, Triton Inference Server, TensorRT, TorchServe, KServe, Ray Serve, or equivalent systems.

  • Proficiency in Python for infrastructure automation, tooling, debugging, integration, and operational workflows.

  • Practical understanding of performance tradeoffs across batching, concurrency, memory usage, GPU utilization, model size, latency, throughput, availability, and cost.

  • Track record of optimizing compute costs while maintaining clear performance, reliability, and availability expectations.

  • Experience building observable systems with useful metrics, logs, traces, dashboards, alerts, and incident workflows.

  • Comfortable working in high-stakes, always-on environments where uptime, throughput, correctness, and operational discipline are critical.

  • Clear communicator who can translate infrastructure tradeoffs for researchers, product teams, platform engineers, security stakeholders, and engineering leadership.
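The batching/latency/throughput tradeoff named above can be illustrated with a toy decode-step model (the timing coefficients are invented, not measured): if one decode step serves the whole batch and step time grows mildly with batch size, larger batches raise aggregate throughput while adding per-request latency.

```python
# Toy continuous-batching model. One decode step emits one token per
# sequence; step time = base cost + a small per-sequence cost.
# Coefficients are hypothetical, for illustration only.

def decode_metrics(batch_size: int, base_step_ms: float = 20.0,
                   per_seq_ms: float = 0.5) -> tuple[float, float]:
    """Return (aggregate tokens/sec, per-request ms per token)."""
    step_ms = base_step_ms + per_seq_ms * batch_size
    tokens_per_second = batch_size * 1000.0 / step_ms
    return tokens_per_second, step_ms

for bs in (1, 8, 32):
    tps, latency = decode_metrics(bs)
    print(f"batch={bs:3d}  throughput={tps:7.1f} tok/s  "
          f"per-token latency={latency:.1f} ms")
```

At batch size 1 the model yields roughly 49 tok/s at 20.5 ms/token; at batch size 32 it yields roughly 889 tok/s at 36 ms/token, which is the shape of curve serving stacks like vLLM exploit with continuous batching.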

 
Nice to haves
  • Experience at a frontier AI lab, hyperscaler, high-frequency trading firm, research platform, or high-scale ML organization.

  • Familiarity with custom silicon or specialized accelerators such as TPUs, AWS Trainium, Gaudi, or similar platforms.

  • Background in capacity planning, procurement input, reserved capacity strategy, cloud accelerator economics, or GPU fleet cost management.

  • Experience with distributed training frameworks such as DeepSpeed, Megatron-LM, FSDP, Ray, or equivalent systems.

  • Experience debugging CUDA, NCCL, kernel, driver, runtime, memory, networking, or low-level performance issues.

  • Experience with Rust, C++, Go, CUDA, or other systems languages used for performance-critical infrastructure.

  • Crypto, financial services, trading infrastructure, or security-sensitive production infrastructure experience.

Unless a specific application deadline is stated in the job posting, applications are accepted on an ongoing basis.

Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.

We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.

Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out candidates with the abilities, knowledge, and skills most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!

We may ask candidates to complete job-related skills or work-style assessments as part of our hiring process. These assessments are designed to evaluate competencies relevant to the role and are applied consistently across candidates for similar positions. Assessment results are considered alongside other relevant information, such as experience and interviews, and are not the sole basis for any employment decision.

As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status or any other characteristic protected by federal, state or local laws.

Stay in the know

Follow us on Twitter

Learn on the Kraken Blog

Connect on LinkedIn


Candidate Privacy Notice

