Data Architect

Sorry, this job was removed at 9:49 a.m. (CST) on Thursday, March 8, 2018

As the Data Architect, you will expand our platform's ability to ingest, warehouse, query, and analyze data for our existing and future enterprise customers. You will drive decisions related to system architecture and integration of third-party technologies. As an engineering leader, you will evangelize standard methodologies throughout the organization and teach others to think about data from well-established industry perspectives.

Responsibilities:

  • Own and iterate on the technical roadmap of the data processing stack
  • Propose ways to improve the data stack by researching, prototyping, and benchmarking potential ideas
  • Evaluate and recommend the most suitable third-party tools to support the ETL, warehouse, and BI subsystems
  • Document and build diagrams describing the current and future data stack aligned with the roadmap
  • Coach developers on how to properly model and process data
  • Partner with the Product team to define platform features that conform to industry best-practices to avoid reinventing the wheel
  • Work with the DevOps and IT teams to configure and operate data processing subsystems that are fault-tolerant and highly scalable
  • Identify, track, and report on metrics related to data pipeline performance

Requirements:

  • 7+ years of experience in data warehouse design, development, and administration
  • Experience with data ingestion patterns such as ETL and ELT, and with associated tools such as SSIS or Talend
  • Extensive experience with SQL, including querying and operating more than one relational database such as SQL Server, PostgreSQL, or MySQL
  • Experience with Business Intelligence (BI) tools such as Tableau, Qlik, Power BI, and MicroStrategy
  • Experience with dimensional modeling techniques and OLAP
  • Experience with open-source data pipeline tools, for example Luigi and Airflow
  • Proven problem-solving and analytical skills to resolve technical issues
  • Strong written and verbal communication skills; the ability to speak to business owners, end users, and engineers
  • Proficiency in Python or another scripting language
  • Bachelor’s degree in a related field or equivalent
  • Experience in Agile/Scrum environments

Bonus:

  • Experience with ingesting data from a NoSQL database such as MongoDB, Cassandra, or Neo4j
  • Previous design or code contributions to an open-source database
  • Experience with SQL tuning or optimizing ETL processes
  • Big data technologies such as Hadoop, HDFS, and Spark SQL
  • Data streaming technologies, for example, Kafka

About Us:

Narrative Science is the leader in advanced natural language generation (Advanced NLG) for the enterprise. Quill, its Advanced NLG platform, learns and writes like a person, automatically transforming data into Intelligent Narratives—insightful, conversational communications full of audience-relevant information that provide complete transparency into how analytic decisions are made.

Narrative Science provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics.


Location

We became a distributed workforce in March 2020 as a result of COVID-19 and will remain distributed moving forward. Our Chicago HQ remains, but we want our team to have flexibility around when and where they work.
