Github reviews as a way to improve code quality?
Thursday, 17 March 11
Today I tweeted quite a bit about Ruby software quality.
The trigger was simply a Ruby/Sinatra application I run for personal use that crashed from time to time due to a GC bug in the Ruby interpreter I was using (the default of a recent Ubuntu install, sadly).
This is an interpreter bug, and I hope it is a rare one, but this episode made me think about how many bad stories I have accumulated since Ruby became my preferred high level programming language: poorly documented gems, gems that don't work, dependency hell, packages that are hard to install, bad performance, and so forth.
The test culture of Ruby helps a bit, but if you are an experienced software developer you know that a few tests can't guarantee software quality. Honestly, from time to time I saw testing as part of the problem, with the attitude of "if it passes the tests it can be merged". Testing is useful but not that powerful, unfortunately.
There is also a lot of great, well documented code, but in general it seems to me that there is a real problem with code quality, and maybe there is something that github.com can do to improve the state of things: make users aware that some code may not be perfect, and make the developers aware too.
It is as simple as this: github, please let us rate projects. If project-wide reviews are too bold, or may appear too rude, just do it the old way, with "stars". Users could rate:
- usability / installation
- documentation
- code quality
- stability
- performance
All this anonymously (you could be surprised how connected the people in a given community are: coworkers, friends, and in general not willing to tell the truth about your project), with a vote between 1 and 5 stars. This could help a lot of people form an initial idea about the quality of a project. The developer also gains very valuable information about what could be improved: there are a lot of people ready to say your project is cool, but constructive criticism about the weak sides of a project is hard to obtain.
This could be done as a separate web service using the github API, but what would be the point? Having it integrated into github is much better, of course. Still, if github thinks it's not a great match for the site, it could be an interesting weekend project to work on.
Please leave comments on HN instead of using the blog comments.
Edit: Arguments from discussions on twitter / comments:
- How to prevent spam? My only hope is that the OSS programming community is less inclined to spam votes. But you could restrict voting to followers of a project, or use other tricks, like only allowing a vote when the user looks trusted enough according to a given number of parameters.
- Using the number of followers of a project as a meter? I don't think it works: I'll follow a project that's not very good if I need it and there are few or no better alternatives. Also, I may love a project that is badly documented, so I'd give it five stars for everything but the docs.
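To make the "trusted enough by a given number of parameters" idea concrete, here is a minimal sketch in Ruby. The signals and thresholds (account age, followers, public repos) are illustrative assumptions on my part, not anything github actually exposes as a policy:

```ruby
# Hypothetical trust check before accepting a vote. The signals and
# thresholds below are assumptions for illustration only.
def trusted_voter?(user)
  score = 0
  score += 1 if user[:account_age_days] > 90  # not a throwaway account
  score += 1 if user[:followers] >= 5         # some community standing
  score += 1 if user[:public_repos] >= 1      # actually publishes code
  score >= 2                                  # require at least two signals
end
```

Requiring two independent signals instead of any single one makes mass-created spam accounts more expensive to use for vote stuffing.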
Edit2:
- What about a project getting better with time, or the other way around? Just give recent reviews a lot more weight in the average. Also keep the old ones in the mix, but with some math rule so that the older a review is, the less relevant it is in the weighted sum.
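One way to sketch that "the older, the less relevant" rule in Ruby is an exponential decay with a half-life. The 180-day half-life and the `[stars, age_in_days]` input shape are my own assumptions, just to show the weighted sum:

```ruby
# Recency-weighted star average: each review's weight halves every
# HALF_LIFE_DAYS days, so old reviews still count, just less.
HALF_LIFE_DAYS = 180.0

# reviews: array of [stars, age_in_days] pairs (hypothetical shape).
def weighted_average(reviews)
  weighted_sum = 0.0
  total_weight = 0.0
  reviews.each do |stars, age_days|
    weight = 2.0 ** (-age_days / HALF_LIFE_DAYS)
    weighted_sum += stars * weight
    total_weight += weight
  end
  return 0.0 if total_weight.zero?
  weighted_sum / total_weight
end
```

With this rule a project rated poorly a year ago but highly in recent months shows an average close to its recent ratings, reflecting its current state rather than its history.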