Senior Software Engineer
Job Duties:
Act as a leading contributor to the implementation and maintenance of statistical software applications, machine learning applications, research databases, risk modeling, and other data products. This role requires significant interaction with upstream and downstream stakeholders across Technology, Data, Products, Sales/Service, and Research. Assist in the transition of approved research products from the prototype phase to fully fledged, scalable, client-facing services. These services must often be integrated into the company’s platform of financial products, such as Direct, so that clients can use these software tools in the investment decision-making process. Create efficient, straightforward ways to help the company’s applications transition successfully to the Amazon Web Services cloud. Build high-performance, technology-based risk model solutions that reduce the development and business cost of globalizing models and features.
Job Requirements:
- This position requires a Bachelor’s degree in Information Technology, Computer Science, or a related field and 5 years of experience in software engineering.
- In the alternative, we will accept a Master’s degree in Information Technology, Computer Science, or a related field and 2 years of experience in software engineering.
- Experience with Algorithms, Data Structures, Object Oriented Design, and Databases.
- Experience with at least one object-oriented language.
- Experience developing and deploying solutions using services in the AWS ecosystem (Lambda, EC2, RDS, EMR, DynamoDB).
- Experience developing APIs and microservices hosted in the cloud.
- Experience with back-end XML, relational, and file-based data stores (e.g., SQL databases such as Postgres, Redshift, and Netezza; HDFS).
- Experience with Agile software engineering practices.
- Experience with professional software engineering practices across the full software development life cycle, such as coding standards, code reviews, source control management (Stash/Git), build processes, testing, and operations.
- Academic or professional experience with common data mining and machine learning techniques such as preprocessing data, training and evaluation of classification and regression models, and statistical evaluation of experimental data.
- Academic or professional experience with the Hadoop stack (MapReduce, Pig, Hive, NiFi, Spark).
- Academic or professional experience with Python packages, including pandas and/or NumPy.
- Academic or professional experience developing high-performance, highly available, and scalable distributed computing applications.