Code Quality is a technical debt management tool, part of an AI code analysis suite of products designed to ensure the highest standards of development. The project was done at OutSystems, a B2B low-code software development company.
User Experience Lead
Led the AI Experience team at OutSystems, responsible for multiple projects, including Code Quality. Reporting to UX management, I managed a multidisciplinary design team from concept to release.
The main goal was to port a widely successful tool from an existing product line to a new one, meeting the expectations of a highly demanding user base while adapting to the differences in experience and architecture unique to the new line.
New ways of design
Naming issues
The different pillars supporting the whole suite of AI products; the team worked on each of them separately.
The different stages at which we expected users to operate Code Quality. Work done with UX strategic research.
For Code Quality, we didn’t limit ourselves to rebuilding the same product we had before. We used the opportunity to review all the usage data we had already gathered and talk to some of our power users, who are often very vocal about what isn’t working or where there’s room for improvement.
One of the key new features we introduced was a full activity log with a commenting mechanism, which allowed teams to communicate more efficiently as they addressed the technical debt in their code.
This included adding a reason and explanation to each technical debt finding addressed, a tagging system to guarantee key changes stand out on the activity log, and the ability to add comments beyond any finding status change. All of it was based on feedback that teams often worked together on addressing technical debt and found it hard to keep track of who had already done what.
On the left, the new activity log with a few comments from a team working on technical debt; on the right, a pop-up for selecting a reason and adding a comment after dismissing a technical debt finding.