Improving code quality with SonarQube

Maintaining code quality and readability is crucial for successful development in today’s dynamic environment, where multiple teams work on the same code and changes are frequent. Such an environment requires following coding conventions that keep the code understandable to everyone involved in the process.

For us at Infobip, it is imperative that the products we deliver meet the highest quality standards. At the core of each product is the source code, making code quality the most important factor in determining the quality of the overall product.

To maintain high quality control, we chose to introduce code review into our development process. However, after some time the reviews began to take up too much time, steering us away from other important tasks. Code review consists not only of a creative review process, but also of checking for repetitive errors, adherence to convention rules, and so on. The entire process came into question, so the logical solution was to automate part of it.

In our development centers we try to automate as many manual tasks as we can, so we can focus on the innovation that makes us the best choice for our customers. At the moment our developers can automatically deploy services, quickly and easily write public APIs, and generate client libraries in different programming languages. It took us a while to find a solution that could partially automate code review. After researching various tools, we realized that SonarQube meets all of our requirements.

About SonarQube

SonarQube is an open source platform for code quality analysis. It belongs to the family of static code analysis tools, along with Understand, Semmle, and others.

The platform receives the source code as input. The code can either be sent from an IDE or pulled from an SCM. There are SonarQube plugins for the most popular IDEs that make running code analyses much easier. Based on the input, the platform applies predefined rules and checks whether they are fulfilled. The analysis output provides a lot of useful information, along with proposed improvements.

The main reason we chose SonarQube is its numerous and extensive Java rules. At the moment there are over 700 Java rules, and the number is constantly increasing. We primarily analyze code written in Java, but analysis can just as easily be performed on code written in one of 20 other programming languages.

SonarQube is also pluggable, so you can write your own plugins if you need support for a specific language or want to define your own rules. Custom code rules can be added either by using XPath expressions or by creating a new plugin in Java.
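To give an idea of what the Java approach looks like, here is a minimal sketch of a custom rule built on the public API used in SonarQube’s java-custom-rules example. The rule key, class name and parameter limit are invented for illustration, and a real plugin would also need to register the check (through a RulesDefinition and CheckRegistrar) before the server could load it:

    import java.util.Collections;
    import java.util.List;

    import org.sonar.check.Rule;
    import org.sonar.plugins.java.api.IssuableSubscriptionVisitor;
    import org.sonar.plugins.java.api.tree.MethodTree;
    import org.sonar.plugins.java.api.tree.Tree;

    // Illustrative custom rule: flag methods that declare too many parameters.
    @Rule(key = "example-too-many-parameters")
    public class TooManyParametersRule extends IssuableSubscriptionVisitor {

      private static final int MAX_PARAMETERS = 5; // arbitrary example threshold

      @Override
      public List<Tree.Kind> nodesToVisit() {
        // Only method declarations are interesting for this rule.
        return Collections.singletonList(Tree.Kind.METHOD);
      }

      @Override
      public void visitNode(Tree tree) {
        MethodTree method = (MethodTree) tree;
        if (method.parameters().size() > MAX_PARAMETERS) {
          reportIssue(method.simpleName(),
              "Reduce the number of parameters of this method to at most " + MAX_PARAMETERS + ".");
        }
      }
    }

Once such a rule is packaged and installed on the server, it can be added to a quality profile and applied to every analyzed project, just like the built-in rules.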

The SonarQube platform consists of four components: the analyzers, the server, the plugins installed on the server and, last but not least, the database.

SonarQube architecture

Analyzers are responsible for running line-by-line code analysis. They can provide information about technical debt, code coverage, code complexity, detected problems, and so on. The problems detected in the code can be bugs, potential bugs, or things that could lead to mistakes in the future. When the analysis is done, the results can be viewed on the web page hosted by the SonarQube web server. The web server simplifies configuring the SonarQube instance and installing plugins, and provides an intuitive results overview.

Results overview

There are many issues that can be found in a given piece of code. The rules differ and are placed into one of five groups based on their severity: Blocker, Critical, Major, Minor and Info. A bug or potential bug is characterized as a Blocker or Critical issue, while a rule such as “Magic numbers should not be used” is ranked with Minor or Info severity.

Here is an example of the most violated rules:

Most violated rules
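
To make the severity categories and the kinds of problems the analyzers report a bit more concrete, here is a small invented Java class that compiles but would trigger typical findings. The class, values and exact severities are illustrative; the actual severity depends on the rules in the active quality profile:

    // Invented example class; it compiles, but an analyzer would raise issues on it.
    public class PriceCalculator {

        public double withTax(double net) {
            // Minor/Info finding: "Magic numbers should not be used" - 1.19 should be a named constant.
            return net * 1.19;
        }

        public boolean isFree(String formattedPrice) {
            // Bug-level finding (typically Blocker or Critical): strings compared with ==
            // instead of equals(), which checks references rather than content.
            return formattedPrice == "0.00";
        }
    }

After fixing the findings – extracting the tax rate into a named constant and comparing the strings with equals() – the issues disappear from the next analysis.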

The great thing about SonarQube is that all the data is stored in a relational database, something all developers are generally familiar with. The database of our choice was MySQL, primarily because we are avid supporters of open source technologies. If you have other preferences or more experience working with another database, other supported options include PostgreSQL, Oracle, and more.

SonarQube setup

The SonarQube architecture allows separating the server and the database, and even replicating the database and deploying the server on multiple machines, to get better performance and scalability.

For testing purposes and for trying out various SonarQube features, you can start with a web server with an embedded database and analyze one or two projects. If the environment is set up correctly from the beginning, there will be no trouble migrating the database and the SonarQube server from a local development machine onto a dedicated server. Docker, with its containers, can help with that, because you can wrap up everything you need (code, runtime, system libraries) and easily deploy it on any machine. There is also an official SonarQube Docker image with an embedded H2 database, so you don’t have to waste time creating your own. An external database can also be configured, which is needed for any more serious work with SonarQube.

We started with one dedicated virtual machine and had no problems until the number of projects passed a certain point and we started to analyze code on a daily basis. At that point we noticed a drastic performance drop and longer analysis times, so we started to separate the different SonarQube architecture layers onto different machines.

For starters, we separated the database and the SonarQube server, following SonarQube’s recommendation to keep them on the same network. For the SonarQube deployment we are using a Docker container, which makes it easy to move it to another machine if we need better performance.

SonarQube and Continuous Integration

As mentioned previously, we care about automation and try to spend less effort on things that can be automated, creating more time for the creative part of the job. Code analysis fits perfectly into the story of Continuous Integration.

We practice and preach Continuous Integration and agile development, so this is what the process of solving a task looks like on our side:

It starts with a developer (let’s call her Alisa) assigning a task to herself and starting work on it. While solving the task, Alisa runs code analysis in her IDE from time to time and checks the results, so she can see whether all the code quality requirements are satisfied. When it comes to the logic, she needs to check it on her own. When Alisa is certain that the written code fulfils all the requirements, she commits the new code to the repository and asks Bob to review it. After the changes are committed to the git repository, a web hook triggers a Jenkins build. The build runs automatically, and the artifact with the new feature becomes available in the internal Maven repository, from where it can easily be deployed into production.

Bob reviews the code Alisa has written and runs the analysis to determine whether the code quality is at the desired level. After interpreting the results, Bob only has to do the creative part of the code review – reviewing the logic. If everything is OK, the task is complete and the new feature is production ready.

Source code management with CI server and SonarQube

Version 4.0 introduced the incremental analysis mode. This mode can be used to check whether new changes break any of the important code rules, and it should be run by the developer who made those changes. Before version 4.0, a developer would run the analysis on the whole project even if only two or three files had changed; with the new mode, the time needed for analysis is significantly shorter. Besides the new incremental mode, there is also whole-project analysis – a line-by-line analysis that sends the results to the server and stores them in the database – and preview mode, which does the complete analysis but without storing the results in the database.

At the moment we’re using whole-project analysis, with results stored, for nightly builds, and preview mode after every commit.

This provides a way to track the quality of the code from commit to commit. In addition, we have historical data about code quality, so we can observe how quality evolved over the duration of the project.

Conclusion

Using SonarQube facilitates code quality control and decreases the number of real and potential bugs. Developers are now more focused on the logic itself and can devote their time to business requirements and to finding the optimal solution for a concrete case. Also, after its introduction, managers started tracking the metrics, because they believe the results give them better insight into the development work.

I will quote John F. Woods:

Always code as if the guy who ends up maintaining your code is a violent psychopath who knows where you live.

What are you waiting for? Today is a great day for code analysis.

By Nevena Menkovic, Software Engineer
