
Wicked Good Development Episode 6: The logic of code quality

Wicked Good Development is dedicated to the future of open source. This space is to learn about the latest in the developer community and talk shop with open source software innovators and experts in the industry.

Write code so good you don't need documentation to go along with it. Achieving code quality that is measurable, efficient, and scalable across even the leanest development teams can feel like a stunt. Yet, the costs of growing tech debt make setting a standard a no-brainer. From defining code quality, how to measure it, and the best time to involve quality checks in the development process, join a comprehensive talk on the logic of high code quality.

Listen to the episode


Wicked Good Development is available wherever you find your podcasts. Visit our page on Spotify.

Show notes


Guests

  • Rohan Bhaumik, Product Manager, Sonatype
  • Sal Kimmich, Developer Advocate, Sonatype
  • Stephen Magill, VP Product Innovation, Sonatype


Hosts

  • Kadi Grigg
  • Omar Torres

Topics discussed

The importance of code quality.

Transcript



Welcome to today's episode of Wicked Good Development. We have an amazing team here from Sonatype, including Product Manager Rohan Bhaumik, Developer Advocate extraordinaire Sal Kimmich, and our returning guest, VP of Product Innovation, Stephen Magill. Welcome all, and thanks for being here.


Thank you.


Thank you.


Thanks for having us.


Glad y'all are here. Today, our topic of conversation is going to be code quality, and I know there's a lot there that we can dig into. Um, but before we jump into some of the questions, could you just talk a little bit about why is this topic of interest?


Yeah, I, you know, I'm fascinated by code quality just because it's like this pervasive set of concerns that applies to all software projects, right? Like, you can be really into compilers and there's a lot of projects that require compiler knowledge, or, you know, maybe there's certain performance-critical software, and so, you know, that sort of expertise plays in there, but code quality really is pervasive.

And, you know, no matter what type of software you're developing, mobile apps, backend software, you know, whatever, having high-quality code makes a big difference in terms of how development progresses, how agile you can be, um, and really just how the code base continues to function well over time. Um, and so I think it's really important to understand, uh, no matter what your focus area is.


Yeah. Well, I think that this has been evolving over a couple of years, not just from my view of it, but from the way that we've had to change the way we think about it. It really did used to be that good code quality meant that your code was running. And that was a pretty minimally sufficient term for it.

But now, I think that that's changing a little bit to be more holistic and a little bit more encompassing. And what I'm seeing increasingly is not just using the static analysis tools and making sure that our code runs as an executable, but that it has to be human-readable on top of that. And I think that's an interesting extension of pairing together the, uh, you know, code base along with the organizational layer on top of that.

And I think that, as we become more mature in our enterprises, this is something we need to be considering more.


So my perspective on it is slightly different, primarily because I'm looking at it as: how does code quality affect me as a product manager, or product managers in general? Right.

I've had experiences where, you know, um, I'm working with a team and, you know, they have inherited a code base that wasn't easy to read, or, you know, there were, um, concessions made to code quality in the interest of time that came back to bite us.

Code bases that are "bad" or don't have really strong quality actually slow us down in more ways than we can imagine. And that is why I'm interested. I don't know if I can help show my team what is good code, but I think what I can provide is a perspective on why following good development practices and having good code quality can be beneficial, not just in the short term, but in the long term as well.


I'll just comment. I think that, you know, that leadership aspect that Rohan was getting to there at the end… Even if you can't define what code quality means for your team, you know, and maybe the team should probably define that themselves, right? It's not necessarily, um, something you need to impose, but just setting up that expectation and culture of, um, you know, we want to maintain a high-quality code base. We want that to be our standard. We want to all work together to be, uh, sort of maintaining those standards. Um, that's super important, that at least that bit of direction comes, you know, all up and down the management chain.


You know, for the sake of this conversation, just to make sure we're all on the same page, Stephen, can you tell us what code quality means? 


Yeah. So in my mind code quality is really about, sort of, all the non-functional requirements of software. And Sal, you know, emphasized one of those, which is super important: readability, um, you know, and maintainability. And, you know, they're sometimes referred to as the "ilities" because a lot of them follow that pattern. Not all of them do, right? Performance doesn't, but, um, you know, it's reliability, maintainability, um, performance, all the things that are sort of cross-cutting across, you know, various types of software. Um, and then there's a lot of, sort of, more minor things that play into those, right? Like, so, readability. It's about variable naming. It's about proper use of abstraction. It's about API design. You know, there's a lot of things that go into that. Um, but that general concept, um, you know, if you can really nail that, it makes a big difference in, um, as Rohan said, bringing a new team on, you know, uh, changing a software project, onboarding new developers, um, everything involved in continuing to maintain a software product.


You know, one thing that y'all mentioned was that code quality can mean different things for every team. What…is there, is there a standard of measurement or how can, how can teams measure whether or not their code is good?


Yeah, well, I, I mean, there's a couple of different approaches to this, right?

Because this is really the subjective side of executing code, right? There's a standard of excellence for running and there's a standard of excellence for readable and reusable and scalable. Uh, that's absolutely different. And I think that's why we have to use proxy measures in order to get this right.

Um, one way that I've seen this done well, and I welcome other, uh, sort of different opinions on this, but one way that I've seen this done really well, is at the end of a sprint, uh, just passing along the newly developed feature to an entirely different team, who's not familiar with the logic or the intention of the production code, and asking them if they would be able to summarize what the intention was for that feature, just from the code itself. And in elucidating what sort of shortcomings even well-executing code has in communicating what its logic and intention are, you can actually go in and backfill: either refactor that code to make it clearer, or add in the documentation that's going to allow you to hold on to that in the future. Because for all of us developers, right, we're not just creating a product. We are trying to hold the business logic within it. And that is what gets loosened up over time, when you don't have a historical record of what the intention was.
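To make that exercise concrete, here's a minimal, hypothetical sketch (all names and numbers are invented for illustration) of the same business rule written twice. A reviewer from an unfamiliar team could likely state the feature's intent from the second version alone:

```python
# Hard for an outside reviewer to summarize: what are d, n, f, and 9?
def calc(d, n, f):
    return d * n * (1 - f) if n > 9 else d * n


# Intention-revealing names and a named threshold carry the business logic,
# so the code itself records what the feature was meant to do.
BULK_DISCOUNT_THRESHOLD = 10


def order_total(unit_price, quantity, bulk_discount_rate):
    """Total order cost; orders at or above the bulk threshold get a discount."""
    subtotal = unit_price * quantity
    if quantity >= BULK_DISCOUNT_THRESHOLD:
        return subtotal * (1 - bulk_discount_rate)
    return subtotal
```

Both functions compute the same totals; only the second is likely to survive a change of ownership without a meeting.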


I think that's fantastic. Um, so Sal, I think the way you touched on that was primarily from a maintainability slash readability perspective. Uh, and definitely, you know, as I said before, I think when ownership of code bases changes over time, that's the first, uh, that's the first thing that people run into.

Like, oh, I don't know what this is, what this line of code or this block is, or what its function is actually trying to do. This is really complicated, and that really slows us down. So, um, I think what you highlighted could be a way for us to get over that. Um, my feeling on how you measure code quality is, like, I think we mentioned that it was kind of subjective based on where you are.

So how you define code quality can be a little subjective based on your, um, based on where you are as, as a team, as a company, as, as, as a project. Um, and therefore what you choose to measure will be different from somebody else. Um, You know, like for example, um, teams that are just starting out, um, you know, writing…oh God, I hate using the term MVP, but just stick with me here.

Let's say they're pushing out like an MVP. Um, and it may be okay, given their context, to be a little, um, let's say, um, you know, not building a very highly maintainable thing, just to go out and test the market. That is totally fine. However, you know, they wouldn't necessarily do that if they were building or adding to critical infrastructure.

Right. So given that, you know, how you talk about code quality is subjective, tied to your context, there's different ways to measure it. Um, the one thing that I will add about measurement is, like, you have to define metrics that matter to you. So, um, whatever you measure, like, you're able to kind of map that to metrics.

And once, once you're able to review those metrics, you know, constantly, not just point in time, uh, that is what will then give you a better picture of the overall quality of your product. So to summarize: it's indirect, dependent on context, and totally dependent on the metrics that the team chooses to follow.


Yeah, I'll just follow the metrics conversation, because I love talking about metrics and I think it's important how you design those. I like to give an example of, um, some quality attributes that, you know, you could decide to focus on and measure. Like, I think testability, I would claim as an important quality attribute of software, you know, writing software, that's easily testable that you can, you can mock out things and you can, uh, you know, sort of cover all the functionality, um, in your test cases.

Um, and so you can measure that using test coverage, right. That's a very common metric. It's important to track. Um, I think it's important also, whenever you're putting in place a metric like that, uh, not to have that be the sole focus, right. Because, uh, you know, if you have too narrow a set of metrics, uh, people start to game them.

Right. And you sometimes get unintended consequences. So, like, it's easy to boost test coverage by just writing tests that don't actually, uh, you know, exercise the functionality; they just cover it. You know, you run the code with this as a test input, say, oh, this is the output. Okay. We're going to test that the output always equals that, you know, without thinking about how this test is really trying to check some higher-level functionality goal, and should be somewhat independent of the code.

And so, um, you know, also paying attention to how many test failures you have, how often errors are caught by your test cases, uh, you know, as you're doing development, um, that gives you a better sense of the quality of those tests, and of whether you're hitting that testability quality goal in your software.
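As a small illustration of that coverage-gaming point (the function and numbers here are invented), compare a test that merely executes both branches with one written against the functional goal:

```python
def shipping_cost(weight_kg):
    """Hypothetical rule: orders over 20 kg ship free; otherwise a flat fee plus a per-kg charge."""
    if weight_kg > 20:
        return 0.0
    return 5.0 + 0.5 * weight_kg


# This "test" touches both branches, so coverage looks great -- but it only
# pins the code to whatever it already returns, so it can never fail:
def test_gamed_coverage():
    assert shipping_cost(25) == shipping_cost(25)
    assert shipping_cost(10) == shipping_cost(10)


# This test states the business rule independently of the implementation,
# so it would catch a regression in either branch:
def test_business_rule():
    assert shipping_cost(25) == 0.0              # heavy orders ship free
    assert shipping_cost(10) == 5.0 + 0.5 * 10   # flat fee plus per-kg charge


test_gamed_coverage()
test_business_rule()
```

Both test functions produce identical coverage numbers; only the second one measures whether the software still does what the business intended.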


I want to back up a minute. It's good to know that there there's metrics we can put on this and you all have said it's a little subjective, you know, as long as it's, it's tied back to the context in which it's in, right. My question is, is this something that every organization is doing? You know, doing code quality analysis, or is this something that organizations are viewed as more forward-thinking if they're doing this? Um, you know, and if they're not doing this, why should they be scanning and analyzing the code that's written?


Um, I'm curious what you've seen Sal. I've noticed a trend of companies becoming much more focused on code quality. Once they reach some threshold of, uh, sort of market penetration and development of their software, you know, like Rohan was saying when you're in that MVP stage and you're just trying to find out, you know, do people even care about the product I'm building?

You know, you're focused on that, and not necessarily on all the good engineering principles that, you know, are important longer-term. Uh, but then once you do establish that, and you're like, oh, okay, we're getting traction here, you know? And now we want to defend that and make sure we don't slip up. We don't disappoint our customers.

We don't allow our competitors to come in. Then there becomes much more focus on quality.


Yeah. I mean, I've definitely seen this as a trend, but I think it exists for a reason. Um, we're definitely moving into the age of dashboard development, if you will. Um, all of us are using graph-based metrics in order to be able to view and communicate, really more broadly, what our architectures are doing for us. Um, but because of that, there's been a shift in the way that we communicate, uh, the way that our feature development is tying to business value. And in some ways that's really good, right? It's a bottom-line indicator. If you've got bad runtime, uh, you are not going to be able to provide for your clients, and you might lose clients.

Um, but yes, there's absolutely this tipping point where a business goes from delivering features to capture customers, to maintaining architecture to deliver the product value that they've already secured a client base for. Now, I do think that that's why and when you start seeing these anti-patterns around quality, because quality can't just be a proxy metric that we derive from making the wrong choices about it. And I've definitely seen that pattern of engineering for the test, but I think that that's the really careful point. If you are making that transition to being more scalable, to trying to provide more service as you're accelerating on your feature delivery, you have to make sure that you are still developing with that integrity to provide business value.

It's the second that delivery of features or the delivery of even code refactoring becomes centered and isolated in its own environment, on a developer team that that becomes an issue. Um, I think one thing that I'm sort of thinking about right now, um, so Simon Brown has the C4 architecture models, which I often would use if I was going into a consulting space, just to help them figure out where their business intelligence was tying to their code and where that had become a dream instead of a real indicator of a roadmap.

Um, and what Simon Brown says is that your code should be good enough quality that you do not need documentation to go along with it. Now, that is an indicator of excellent code quality. And I will say every single organization, whether they've got five lines of code or 5 million, is doing code quality. It's just a matter of to what degree they're doing it well.

You can have incredibly poor code quality, which is exactly why code quality is something that you necessarily need to be paying attention to.


Rohan, I'm interested in your perspective here also, because you said as a product manager, you've seen it happen where people do invest time in it, where people don't invest time in it.

I'm curious, what's been your experience? 


First off, uh, it's hard to follow up on what Sal said. I think that was awesome. So I'm going to try, uh, I'm going to try to give my quick thought on where in the journey companies decide to start looking at, uh, analysis tools for code quality, right.

Um, it goes back to, I think, what the general trend of the conversation has been, right? Like, what is most important to you given your context? I would argue that at some level most companies, and I think Sal made that point as well, most companies are doing code quality throughout, right? Uh, like, I think they'll be doing different, um, aspects of it. Um, I mean, to oversimplify, of course: when do they start flipping over to, you know, the use of static analysis to solve very specific code quality problems?

That is slightly more nuanced. And I'm going to, like, I'm going to go a little bit into what I have seen and try to extrapolate from there. I would imagine that, um, a lot of this comes in when, like, you become larger as a company, and I'll try to quantify what I mean by larger. I think you'd likely start going from, you know, an engineering team of about 50 to probably a hundred, where you realize that, hey, it's time for us to standardize how we do software development. Um, it's as we start thinking about, you know, um, how we can be more efficient as a software development organization and how we can make sure that we are able to keep scaling.

So when you talk in terms of scale, when you talk in terms of being efficient, um, that's where you would be looking, if I was, you know, a VP of engineering. Um, it's at that point where I would start looking outside my toolbox, um, to go for help from the market.


I think, Rohan, you just made me think of this, you know, in talking about company scaling, right? There's a lot of heartache that goes along with that process. And honestly, I just think about it in my day-to-day life, you know: what am I responsible for? So when you do start scaling and growing, you're adding more people to the mix, and processes to the mix. Who ultimately becomes responsible for this?

Is it any one group's choice, or is it multiple people? You know what I mean? Like, I think it's like the chicken-or-the-egg question: who is responsible?


If you asked me 20 questions, my response to each of those 20 is going to be: it depends. You know, it varies from people to people. So it depends.

I've seen organizations where it's centralized, where, you know, leadership says, no, this is the tool and this is where we are going. And I've seen other organizations, I've been in other organizations, where it's more bottoms up. Um, I think whether an organization is top-down in these kinds of decision-making or bottom-up in those kinds of decision-making, again, depends on two factors. Uh, they are, you know, size, and I think the second is really about the culture.

Um, yeah, like, those are my offhand thoughts on the kind of people who would be involved. And what I will add is what I've seen. What I'm seeing nowadays is that the top-down thing is probably moving away. I.e., and who says "i.e."? That is, um, you know, most of the top-down decision-making will probably happen for large-scale infrastructure choices.

Like, for example, if I am looking at, uh, what is my cloud platform going to be? That is probably not something that a team can do independently, or maybe they can, but, like, I would be surprised. However, the choice of what tool to use for my team, for us to move forward on these projects that we're working on, I think more and more, it's becoming bottoms up.

Or at least, you know, from a team up to management, I don't know if Sal or Stephen, you have thoughts on that. 


Yeah, I think, um, more and more often it starts bottoms up. Uh, you know, I still see, even when it does sort of originate from the development team sort of self-selecting what tools they find valuable… Um, I do see a lot of value in eventual, uh, centralization and unification of the toolchain, um, for the reasons of scale that we were just discussing. You know, when you're a single development team, you know, it's really easy to say, okay, these are our coding standards. This is our style guide. You know, we're going to enforce this. We have this guy who reviews all our pull requests and is super pedantic and points out everything that we, uh, you know, do wrong.

Um, And that just doesn't scale, right? Unless you bring in tools to sort of take over that role, right. And this is what tools are really well suited to. Right. They can, they can check the style guide, they can check for, uh, you know, certain of these issues, security issues and things like that. Um, and just sort of take that off of the plate of the development team.

Um, which frees them up to focus on, you know, all the other things that developers are being asked to attend to now, you know, are supporting things in operations and, uh, building security and, and, you know, all of these other principles, um, you know, you can really, you can free up a lot of time and capacity and get a lot more consistency by, uh, eventual unification of that toolchain.


Yeah, I really wanted to speak to two points there. Um, I, it is, it is my personal belief that across an enterprise, no matter what the size you need to at least standardize your linters, you need to have a common expectation of how code is going to be presented and what you absolutely will not accept as a poor practice, even if it's not executionally limiting.

Um, but really, sort of, what we're speaking to here in, uh, the difference between a small and a large organization, even if we're talking between 10 and 100, um, that really gets down into, you know, organizational analysis and really what humans are limited in being able to communicate. Code is just a massive base of logic.

And if you have written it, or if you have dealt with that code base for a couple of months, you will have a strong understanding of it. But maximally, at any given time for a given function, it's likely that you will have six people at most, you know, a small agile team that's been working dedicated on a feature, down to one or two individuals within an organization, who actually understand how that works and how it fits into the larger code base. Now, that's incredibly important, because we're also seeing an order-of-magnitude difference in feature acceleration between a team of 10 and a team of 100. Right.

So when you start to see that problem, it begins to compound. You now have a lot of intellectually siloed teams that are delivering with a faster rate of acceleration. This does not necessarily mean that we're going to get a code base that functions well. It means we're going to get to Friday, we're going to try to merge, and then none of us can leave until 2:00 AM.

That's not a situation we want to get into. That's why, when you're getting to that scaling level, you really do have to put tooling in place to make sure that everyone has that standard of operation before you try to get to that point of scale.


Okay, on that note about how, you know, it's a complicated problem when you're scaling and you have a lot of teams inserting specific pieces of code into a larger code base, right. Um, so when do these tools come into play? When is it important to surface, you know, accurate feedback for developers? How do you do that for the smaller team, and then across teams, how does that happen?


I mean, my recommendation is always, uh, surface things before they're merged, before they're part of the main line of the code base. And, um, you know, there's various points that satisfy that criterion, right? There's the IDE, um, there's code review, there's the build chain, you know, you can, uh, in CI, block things. Um, and, you know, I would say that, uh, code review has sort of emerged as a particularly critical point, um, to surface at least certain sorts of feedback.

There have been a couple of really great Communications of the ACM articles by Google and Facebook, one from each company, talking about how they, um, deployed static analysis tools across their organizations, um, at scale. And, you know, there's sort of no larger scale than, like, Google and Facebook-size, uh, development teams.

Right. And, um, they, uh, they both independently sort of hit on code review as a really critical time, because it, um, it is pre-merge, right? It's before it's gone into the main branch, before it's gone into production. So you're catching things early. Um, but you still have enough time, sort of between when the code is submitted and when someone picks it up, to, you know, get eyes on that code and improve it.

Um, you have enough time to do a deeper sort of code analysis and, you know, maybe surface some issues that, uh, you can't catch with simple linting. Like, definitely run linters in your IDE, you know, but most places aren't standardized on one IDE. So, you know, that means different things for different people.

You won't necessarily have consistency there, but catching things early is important. Code review is where you can have consistency where you can go a little bit deeper, um, and where you can sort of enforce all of those standards. And it's, you know, it's this great process. That's like a social collaborative, you know, uh, attempt to maintain those quality standards, right?

It's the right place to have that conversation. So it, it has that benefit. 
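As a toy sketch of the kind of deeper, automated check that can run at code review time (the rule here, flagging functions without docstrings, is invented purely for illustration; real teams would lean on established analyzers), Python's standard `ast` module is enough:

```python
import ast


def functions_missing_docstrings(source):
    """Return names of functions in `source` that have no docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]


snippet = '''
def documented():
    """Explains its own intent."""
    return 1

def mystery(x):
    return x * 42
'''

# A hypothetical review bot could post this list as a comment on the pull request.
print(functions_missing_docstrings(snippet))  # ['mystery']
```

Because a check like this runs on the submitted diff rather than in any one person's editor, it gives every pull request the same treatment regardless of which IDE the author uses.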


Yeah, I haven't been in an engineering role for a minute, but when I was in charge of code reviews, I would wear the same shirt every time and it was a Bob Ross shirt that says no mistakes, only happy accidents, because that's really the attitude that you have to take into that.

Right? Like we are going to see some problems today and we are just going to be happy we caught them before production. 


I love that. Yeah. 


From my perspective, I guess, not much to add to that, really. What I would say is: consider the flip side. Uh, I keep saying "i.e." That is, what happens if you don't catch it at review, or if you don't have anything in place that tries to catch code quality issues during review?

Um, so as we heard, code review is when you're talking about what you're actually looking to merge, or what you're actually looking to introduce, right? It could be a new feature, it could be an improvement, whatever. And, um, that's when you have the most context around what it is that you're trying to, you know, uh, merge in or push out.

Um, I would argue that that is also where you have the most bandwidth, and where it is, and I'm putting on my financial hat here, the most ROI-positive or most profitable place, really, for you to fix something. Because what's going to happen if you don't do it then?

Uh, it makes its way out into the wild.

You move on. Somebody catches it later, someone creates, you know, a whole scene about it, comes back and tells you about it. And then you have to drop what you're doing, which is still valuable work, and go back and try to fix something. And, like, I'm not even talking about the potential exposure of shipping bad stuff that puts you and your team into trouble.

So yeah, I think the best place to do it, you know, it might seem a little slower, but in the long run, the best place to do it is in code review.


Is it fair to say that how strong your code review process is relates to tech debt? To me, it seems like you've all alluded to it, where sometimes the code review process, whether it's strong or not so hot, you know, can affect whether features and functions get pushed out faster, or whether you're going to be stuck doing refactoring or rework a lot.


So, and this is my view as a product manager, right, so I'm going to say that up front here. Um, in my view, or in my opinion, any code quality issues that you let fester will compound, and the way you tackle them is very similar to the way you tackle tech debt, except tech debt is probably further down the road and you chip away at it. And being mindful of code quality and investing in code quality tooling, or even not tooling but practices, is a way for you to not have to pay down tech debt, or to have to pay down less tech debt, in the future. Um, but yeah, the impact is the same, right? Like, for example, if I don't address, uh, code quality issues now, um, they will become tech debt, and that will make it more expensive for me to get back to a really healthy spot.

That's the very, like, overly simplistic view that I take. Um, Sal, Stephen, I don't know if you all have more nuanced views.


Yeah. Well, I will say, I don't think tech debt is the best proxy for the specific circumstance of code quality, because really, I think what Rohan was speaking to before is, you know, tech debt is really, you know, what is it that we've been wanting to take care of that's sitting in the backlog? But when you get into serious issues with code quality, honestly, the best proxy for that is whether or not your SRE team hates you. Right. It's whether or not you've got a system that's on fire all the time. Um, in my point of view. Um, and what that does is, you're not necessarily creating a new backlog; a really good proxy for that is actually how often you have to create a new vector or a new expectation for your forward-looking product roadmap.

Because when you're running into situations of severe code quality issues and the inability to integrate across a large enterprise, often we're seeing that not in the debt that you're developing, but in the limitation of being able to move forward with new production. You often have to change the direction or change the limitations that you're working with, because your architecture is becoming more complicated in order to be able to refactor.


Yeah, I think that's great. And, uh, I think, you know, I'd say the only similarity between the two is the way in which they accumulate if you don't address them. And that they're easier to pay down early, as Rohan was saying. I'll just leave it there. 


I know we're at about the 30-minute mark, so I think now is a good place to put your final thoughts on this code quality and review discussion. I'd really like to know: what do you think would be your best advice, whether you're a seasoned professional in the field or someone who's new to code quality review? You know, either or, I'll let you pick your poison. Um, but what would be your advice to those individuals?


Well, I mean, I think I say the same thing over and over, but I think it's so essential. Oftentimes developer teams, especially in large enterprises, can get over-focused on feature development in a way that removes them from providing immediate business value. But if you're focusing on what the business value is of the feature that I am intending to provide here, um, oftentimes it's a mental shift; keeping your focus there, keeping your focus on the business logic, can help to create simpler code, more communicable code.

Um, and code that someone in five years may be able to come back to with fresh eyes and still be able to refactor and reintegrate pretty easily.


From my perspective, what I will say is that, um, like I've said before, it has impact. It has impact not just on your teams writing and maintaining software, but it also has an impact on the success of a product and how quickly it can iterate and be, you know, commercially or, uh, non-commercially impactful, right, in the mind of the market or whatever.

Um, so much like how you have a dialogue with product in other departments within, within the company or within the team around paying down tech debt, I think it's important that people like even product managers understand the barriers that code quality can introduce.

And the first step is dialogue; once there is understanding, there is, um, to put it simply, room to actually go in and, you know, invest in good quality, whether that is in terms of tooling, or whether that is from the perspective of actually taking some time out and educating people within the org, you know, uh, and putting in place standards and practices.

So I'd highlight that. My advice would be: if you find that, you know, you're under the cosh to put features out the door all the time, use that as an opportunity to engage in dialogue.


Yeah, I would say, um, that it's important as you start taking steps down the code quality path to have a clear goal in mind.

Um, you know, it's not going to be super effective to just say, we want to improve code quality, go do that, right. Um, having clear goals, like, we want to improve testability, or we want to improve our test suite, or we want to improve readability, or performance has been an issue and we need to figure out how to handle that, or, you know, the architecture is holding us back, right? Having clear things that you want to address gives you a clear starting point, um, and lets you make incremental progress, uh, in this direction, and also, uh, makes it easier to measure, uh, how you are doing in reaching your goals.


Written by Kadi Grigg

Kadi is passionate about the DevOps / DevSecOps community since her days of working with COBOL development and Mainframe solutions. At Sonatype, she collaborates with developers and security researchers and hosts Wicked Good Development, a podcast about the future of open source. When she's not working with the developer community, she loves running, traveling, and playing with her dog Milo.