4 Local Software Engineers Reveal the Challenges That Helped Them Grow

The life of a software engineer is one of continuous discovery. 4 local engineers share what they learned from overcoming some of their biggest challenges.   

Written by Olivia McClure
Published on Mar. 23, 2021

Software engineering is one of the world’s fastest-growing — and most rewarding — tech professions. 

According to a Payscale study, more than half of software engineers say they enjoy the work they do on a daily basis. And that work is hardly ever straightforward — which is exactly why people like it. 

Many people choose software engineering as a career path partly for the interesting challenges of the trade, from debugging code to integrating it into product infrastructure and everything in between. 

But sometimes engineers encounter new challenges that demand quick analytical thinking and plenty of teamwork. And despite the long hours spent resolving them, these roadblocks open up new avenues for growth and understanding. 

Built In Chicago caught up with four local software engineers to learn about the biggest technical challenges they’ve overcome recently. 

 

Eric Schneider
Software Engineer II • Rally Health

Rally Health’s platform is designed to help people unravel complex topics, discover nearby doctors, understand costs and set personalized health goals.   

 

What’s the biggest technical challenge you've faced recently in your work, and what made it so tricky? 

Definitely building the ingestion pipeline on our new data platform. The task has been particularly tricky because collecting data from all over the organization so it can be hosted on our data platform is no small feat. Data comes in all shapes and sizes. 

A prime example of a challenge my team and I have faced is our work with a relatively new technology called Debezium, which allows us to collect change data capture (CDC) events from upstream databases. It is an extremely powerful tool for the organization, as these events will be used to build better reporting, analytics and data science products. 

Although Debezium is fantastic, its novelty has caused us some grief as we work out the kinks. For example, we’ve had trouble validating that Debezium is working properly in the first place, since there’s no clear way to confirm that data is being ingested correctly. Indeed, we ran into a situation where Debezium seemed to be functioning correctly and was ingesting data but would periodically log an exception that should have caused the app to crash, which was very puzzling to us at first.
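
For readers unfamiliar with the setup: Debezium connectors are registered through the Kafka Connect REST API. Below is a minimal sketch of registering a Postgres source connector in Python; the hostnames, credentials and table names are placeholders, not Rally Health’s actual configuration.

    import requests

    # Hypothetical Debezium Postgres connector registration via the
    # Kafka Connect REST API. Hosts, credentials and table names are
    # placeholders for illustration only.
    connector = {
        "name": "orders-cdc",
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "database.hostname": "upstream-db.internal",
            "database.port": "5432",
            "database.user": "cdc_user",
            "database.password": "********",
            "database.dbname": "orders",
            "database.server.name": "orders",  # logical name prefixing change-event topics
            "table.include.list": "public.orders",
        },
    }

    resp = requests.post("http://kafka-connect:8083/connectors", json=connector)
    resp.raise_for_status()

Once a connector like this is running, change events for each included table stream onto Kafka topics named after the logical server name.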

 

How did you and your team overcome this challenge in the end? What were some of the specific tools or technologies you used?

We were eventually able to overcome this issue by testing Debezium and looking at the results of the data ingestion. Usually, engineers on the ingestion side of the data team don’t need to look at the data we’re processing; we instead focus on getting the data where it needs to land so that it’s easy to use further downstream. However, this problem was trickier: we were able to stand up all the apps correctly, but Debezium was throwing periodic errors that should have caused the Kafka instance it was running on to fail outright. The strange part was that aside from that exception, Debezium seemed to be running well and was ingesting data, so it wasn’t until we dug deeper into the data that we discovered the issue: Debezium had encountered the exception, failed, restarted and then started ingesting data from the beginning all over again.

This was incredibly difficult to discover because our only starting point was a stray exception, and we had to inspect other parts of the infrastructure using Databricks notebooks and kubectl commands to better understand the situation. 
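
There is no single command that proves a Debezium ingestion is healthy, but one useful starting point is the Kafka Connect status endpoint, which reports whether the connector and each of its tasks are running or have failed. A rough sketch, with placeholder host and connector names:

    import requests

    CONNECT_URL = "http://kafka-connect:8083"  # placeholder host
    CONNECTOR = "orders-cdc"                   # placeholder connector name

    # Poll the Kafka Connect REST API for connector and task state.
    # A FAILED task, and the stack trace attached to it, is exactly the
    # kind of silent failure that only surfaces if you go looking.
    status = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status").json()
    print("connector state:", status["connector"]["state"])
    for task in status["tasks"]:
        print(f"task {task['id']}: {task['state']}")
        if task["state"] == "FAILED":
            print(task.get("trace", "no stack trace reported"))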
 

I think helping solve this issue really allowed me to conceptualize the data ingestion pipeline in a more holistic way.”


How did this technical challenge help you grow as an engineer or strengthen a specific skill?

I think helping solve this issue really allowed me to conceptualize the data ingestion pipeline in a more holistic way. When you’re engineering new features or building tooling day to day, it’s easy to lose sight of the fact that all of our apps have to work together in a way that benefits the entire platform. Taking shortcuts and assuming an app is working when you’re seeing indications that it’s malfunctioning won’t affect only that app; left unchecked, it can ripple across the entire platform and derail deliverable timelines.

Becoming familiar with tools like Databricks notebooks to debug data and kubectl for understanding our Kubernetes infrastructure helped me better understand how our apps work. Fortunately, I work on an incredible engineering team and we were able to dig into and solve the problem while still meeting our deliverables.
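
A restart that re-ingests from the beginning shows up downstream as duplicate records, which is straightforward to check for in a Databricks notebook. A minimal sketch, assuming the change events land in a table with a natural key (the table and column names here are hypothetical):

    from pyspark.sql import functions as F

    # "spark" is predefined in Databricks notebooks. Table and column
    # names are hypothetical: if Debezium restarted and re-ingested from
    # the beginning, the same source rows appear more than once.
    events = spark.read.table("raw.orders_cdc")

    duplicates = (
        events.groupBy("order_id", "source_lsn")
        .count()
        .filter(F.col("count") > 1)
    )
    print(f"duplicated change events: {duplicates.count()}")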

 

Shelby Ruettiger
Data Engineer • Reverb

Reverb aims to help musicians and music sellers connect within its e-commerce marketplace. Its platform allows consumers to buy and sell musical gear, ranging from music-making software to guitars. 

 

What’s the biggest technical challenge you’ve faced recently in your work, and what made it so tricky?

As the largest online marketplace dedicated to musical instruments, Reverb has a massive database of musical gear, which creates interesting challenges, including one I tackled recently. Not only does music gear differ by category — from all types of guitars to band instruments and DJ gear — but each instrument differs based on characteristics like color, model, condition and year. We use machine learning to group similar instruments together, so if you search for a 2018 Sonic Red Fender Player Stratocaster, you’ll see all of the other listings of that same model and color. 

Our machine learning training pipeline is orchestrated with Apache Airflow. The way it was originally set up, we statically defined each brand and product type combo, like “Korg + Synthesizer,” so we had to create a pipeline for every single combo. At around 100 combos, Airflow ground to a halt. This was a problem because, for guitars alone, we have more than 100 combos; if we didn’t fix it, instruments that fell outside those 100 combos wouldn’t get in front of the musicians looking for them. 
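
To make the scaling problem concrete: statically defined pipelines mean one DAG object per brand and product type combination, all created at module import time for the scheduler to parse and track. A rough sketch of that original shape (the DAG names and tasks are illustrative, not Reverb’s actual code):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Illustrative only: one statically defined DAG per combo. Every new
    # combo adds another DAG for the scheduler to parse, which is what
    # ground Airflow to a halt at around 100 of them.
    COMBOS = [("Korg", "Synthesizer"), ("Fender", "Electric Guitar")]  # ...and many more

    for brand, product_type in COMBOS:
        dag_id = f"train_{brand}_{product_type}".lower().replace(" ", "_")
        with DAG(dag_id, start_date=datetime(2021, 1, 1), schedule_interval="@daily") as dag:
            PythonOperator(
                task_id="train_model",
                python_callable=lambda b=brand, p=product_type: print(f"training {b} / {p}"),
            )
        globals()[dag_id] = dag  # register each generated DAG at module scope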

 

How did you and your team overcome this challenge in the end? What were some of the specific tools or technologies you used?

We want to provide the best possible solution for small business owners, musicians and others who rely on our marketplace for things like income or the piece of music gear that will help them write their next song. Our product and engineering team has an environment that makes team members feel comfortable asking for help and collaborating, and we know that’s how we’ll get to the best solution. 

I brought this challenge to my manager during a one-on-one, and without hesitation, he scheduled an hour for pair programming so we could work through the problem the next day. I’ve never had a manager so willing to jump in and help out. During this discussion, we concluded that we needed to make the pipeline dynamic so we wouldn’t have to define the brand and product type combinations ahead of time. 

Ultimately, I came up with a solution that allows us to run these brand and product type combinations at runtime without having to define the combinations first. In Airflow, a directed acyclic graph (DAG) is a bunch of tasks that you can configure to run sequentially or in parallel. I used an obscure method to trigger the target DAG from within an operator in the parent DAG, which isn’t a normal procedure.  
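
Airflow’s documented route for triggering one DAG from another is TriggerDagRunOperator, which normally sits in a DAG’s static task list. One way to make it dynamic, and plausibly the kind of trick described here, is to instantiate the operator inside a PythonOperator callable and call its execute() method directly, once per combination discovered at runtime. A hedged sketch, with hypothetical DAG and task names:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.operators.trigger_dagrun import TriggerDagRunOperator

    def fan_out(**context):
        # In a real pipeline these combos would be discovered at runtime,
        # e.g. by querying the catalog; the values here are illustrative.
        combos = [("Korg", "Synthesizer"), ("Fender", "Electric Guitar")]
        for brand, product_type in combos:
            # Instantiating an operator inside a task and calling
            # execute() directly is unusual, but it lets one parent DAG
            # fan out to the target DAG without predefining each combo.
            TriggerDagRunOperator(
                task_id=f"trigger_{brand}_{product_type}".lower().replace(" ", "_"),
                trigger_dag_id="train_similar_listings",  # hypothetical target DAG
                conf={"brand": brand, "product_type": product_type},
            ).execute(context)

    with DAG("parent_training_pipeline", start_date=datetime(2021, 1, 1),
             schedule_interval="@daily") as dag:
        fan_out_task = PythonOperator(task_id="fan_out", python_callable=fan_out)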
 

Moving fast and adhering to processes have a place in software engineering, but sometimes you need to take the time to find a creative solution.”


How did this technical challenge help you grow as an engineer or strengthen a specific skill?

Pair programming is part of our operating rhythm at Reverb, which fosters a collaborative environment where diverse perspectives are welcome. And because we’re building something that doesn’t exist anywhere else, there are lots of opportunities to work together to solve challenging problems and see the impact of your work. 

Moving fast and adhering to processes have a place in software engineering, but sometimes you need to take the time to find a creative solution. At Reverb, being innovative includes having the freedom to dig more, experiment without having to stick with norms, and take the time we need to do things right.

I learned that if the obvious, and seemingly only, solution feels clunky, there’s probably a better solution hidden under the surface. If something doesn’t go as planned at first, I know I can search for a more sustainable solution even if it takes a little more time to get there — because it’s worth it. And at Reverb, I’m not afraid to ask for that time. 

 

Anusha Dwivedula
Software Engineering Manager • Morningstar

Morningstar builds products designed to connect financial professionals to the information and tools they need. 

 

What’s the biggest technical challenge you’ve faced recently in your work, and what made it so tricky?

Morningstar at its core is a data company, which makes us a good candidate to leverage big data, machine learning and cloud technologies to generate intelligent insights. However, the lack of a central data repository made it hard for our teams to use these technologies. Hence, my team was tasked with building a central data lake platform that makes data accessible and discoverable to all members of our data ecosystem, which includes data engineers, data scientists, data analysts and data SMEs.

A few aspects made this challenge particularly tricky. For instance, our platform should be able to ingest any kind of data — structured, unstructured and semi-structured, at any scale — and store it in a query-optimized format so that data consumers can run analytics in a performant way. Morningstar takes data quality seriously, so we also had to ensure quality in order for data consumers to trust the insights they generate from the lake.

Additionally, the platform should be simple and intuitive enough that non-technical members, such as data SMEs, can use it without worrying about the inner workings. It should also be self-service so that the central team doesn’t become a blocker for our users.

 

How did you and your team overcome this challenge in the end? What were some of the specific tools or technologies you used?

Our initial version began as a basic file storage mechanism that used AWS S3 for storage, AWS Athena for query capabilities and Apache Airflow for data movement. We provided services like basic RESTful CRUD APIs for our users to ingest data and the associated metadata into the lake, and we built search functionality. The whole system was event-driven and built on serverless design principles.
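
As a sketch of what “event-driven and serverless” can look like in that stack (the queue URL and field names are assumptions, not Morningstar’s actual design): an S3 upload event fires a Lambda that records the object’s metadata and queues it for conversion to a query-optimized format.

    import json

    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    # Hypothetical Lambda handler fired by an S3 ObjectCreated event.
    # It captures basic metadata and enqueues the object for downstream
    # conversion to a columnar, query-optimized format such as Parquet.
    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            head = s3.head_object(Bucket=bucket, Key=key)
            sqs.send_message(
                QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/ingest",  # placeholder
                MessageBody=json.dumps({
                    "bucket": bucket,
                    "key": key,
                    "size_bytes": head["ContentLength"],
                    "content_type": head.get("ContentType", "unknown"),
                }),
            )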

Even though this initial version served most of our use cases, we soon hit roadblocks with schema validation at scale and with our ability to transform more complex file types, like XML, complex Avro and Parquet, into a query-optimized format. As we searched for solutions, one of our team members learned about a third-party tool called Etleap at an AWS summit. The tool not only served most of our needs but also gave us the ability to build a multi-cloud data lake solution.

We went ahead and not only pivoted our implementation to use Etleap but also partnered with them to make their system event-driven to ensure low data latency. The last piece of the puzzle was the data quality framework, for which we integrated Great Expectations, an open-source Python library, with our platform.
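
Great Expectations lets a team declare checks like “this column is never null” and validate each batch before publishing it to the lake. A minimal sketch using the library’s classic pandas interface (the column names and thresholds are hypothetical):

    import great_expectations as ge
    import pandas as pd

    # Hypothetical batch of ingested records wrapped in a Great
    # Expectations dataset, which adds expect_* methods to the DataFrame.
    batch = ge.from_pandas(pd.DataFrame({
        "security_id": ["A1", "A2", None],
        "price": [101.5, 99.2, 100.0],
    }))

    batch.expect_column_values_to_not_be_null("security_id")
    batch.expect_column_values_to_be_between("price", min_value=0, max_value=10_000)

    # Run every declared expectation and report overall success.
    result = batch.validate()
    print("all checks passed:", result["success"])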
 

It’s important to not reinvent the wheel and instead leverage existing solutions.”


How did this technical challenge help you grow as an engineer or strengthen a specific skill?

First off, the technical challenge exposed me in a short span to a ton of AWS services, like S3, Athena, Glue Catalog, Lake Formation and Lambda, as well as open-source technologies like Apache Airflow and Great Expectations. It helped me learn best practices in architecting solutions on AWS.

Apart from the technical skills, there are a few key learnings I gained from this project. For example, it’s important to not reinvent the wheel and instead leverage existing solutions. It’s also crucial to pivot quickly and not marry yourself to an existing implementation when building next-gen technology products. In our case, we had to pivot multiple times when we found better tools and services in the market.

Technology conferences and meetups are a good way to learn about cutting-edge technologies, so it’s important for development teams to attend these on a regular basis. In fact, we found the solution to our technical challenge during one of those conferences. It’s critical for the whole team to participate in architectural design discussions instead of having an architect make all the decisions so that the team can own the design and pivot quickly if needed.

 

Abhay Kukreja
Executive Director, Enterprise Architecture • OCC

Options Clearing Corporation clears and settles trades for the options industry. 

 

What’s the biggest technical challenge you’ve faced recently in your work, and what made it so tricky?

We have a unique opportunity to embed practices that enhance security and regulatory adherence as part of our approach to technology innovation. We want to give our development and business community the latitude to work on new products and technology while ensuring that we are doing this in the safest and most resilient way possible.

 

How did you and your team overcome this challenge in the end? What were some of the specific tools or technologies you used?

We implemented a “trust but verify” mentality and built this into our process environment and data differentiation. “Trust but verify” is predicated on trusting that the technology community is taking the right steps to ensure security and regulations are upheld while verifying the development artifacts before deploying these products into higher environments where confidential data could be present. All this needs to happen within the processing framework of CI/CD tooling.
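
As a purely hypothetical illustration of the “verify” half: a promotion gate in a CI/CD pipeline might refuse to deploy an artifact to a higher environment unless its checksum matches the recorded build and its security scan came back clean. Nothing below reflects OCC’s actual tooling.

    import hashlib
    import json
    import sys

    # Hypothetical promotion gate. The build-record and scan-report
    # fields are invented for illustration; a real pipeline would pull
    # them from its CI/CD tooling.
    def verify_artifact(artifact_path: str, build_record_path: str) -> bool:
        with open(build_record_path) as f:
            record = json.load(f)

        with open(artifact_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        if digest != record["sha256"]:
            print("checksum mismatch: artifact differs from the recorded build")
            return False
        if record.get("scan_status") != "clean":
            print("security scan not clean; refusing promotion")
            return False
        return True

    if __name__ == "__main__":
        # Usage: python verify.py <artifact> <build_record.json>
        sys.exit(0 if verify_artifact(sys.argv[1], sys.argv[2]) else 1)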
 

We implemented a ‘trust but verify’ mentality and built this into our process environment and data differentiation.”


How did this technical challenge help you grow as an engineer or strengthen a specific skill?

Resolving any technical challenge strengthens technical skills and also improves collaboration and a sense of empowerment. At the technical level, you gain a better understanding of DevOps tools and the checks and balances built into them from a security, quality and governance point of view. From a behavior and relationship point of view, you learn the nuances of working with each team, which helps you take them on a journey of adoption without major disruption to their processes.

Responses have been edited for length and clarity. Images via listed companies.
