The pleasure of block selection whilst editing code

Sometimes you wish that writing code would be easier… or at least a bit more efficient. Today I used the block selection mode (default shortcut: Alt + Shift + A) in Eclipse once again. There was a need to insert a common piece of text into several lines. The first thing I noticed was that the lines were all aligned perfectly. This made me think that block selection mode would be the best way to save some time and avoid making mistakes.

The first thing I did was select the block of lines that needed editing, keeping the selection’s width at zero because there was nothing that needed replacing. Then it was time to type in the text, putting the same words into several lines at once! Was this efficient? Yes: any mistakes I made would be uniform and, as long as I stayed in block mode, so would the corrections. Then I decided to blog about it, so in the end I actually lost some time on it after all.
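The same kind of uniform multi-line insert can also be mimicked outside the IDE; a small command-line sketch (the text and line range below are made up for illustration):

```shell
# Insert the same word at the start of lines 2-3 of the input,
# mimicking a zero-width block selection followed by typing.
printf 'alpha\nbeta\ngamma\n' | sed '2,3s/^/static /'
```

The address range `2,3` plays the role of the block selection, and the substitution of `^` inserts the text at the same column on every selected line.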


Git-ting my branches up to date

Once upon a time a developer created a huge number, as far as more than five is considered huge, of feature branches; each of them based on a master branch and there wasn’t any significance to the actual timestamps of commits. “Soon”, he vowed, “… soon, I shall integrate all features! Soon but not today.” So he created a short script to update all branches to the latest and greatest of the master branch, adding some color for his own pleasure and gracefully failing the merges in case of any errors. The following piece of code was the result.

#!/bin/sh

# Let's define some nice colours
txtbld=$(tput bold)
txtrst=$(tput sgr0)
txtred=$(tput setaf 1)
txtwht=$(tput setaf 7)
txtblu=$(tput setaf 4)

git for-each-ref 'refs/heads/*' | {
   # maybe I want a list of all failed updates; the braces keep the
   # loop and this variable in the same subshell, otherwise the list
   # would be lost as soon as the loop ends
   fails=""
   while read -r rev type ref; do
      branch=$(expr "$ref" : 'refs/heads/\(.*\)')
      revs=$(git rev-list "$rev..master")
      if [ -n "$revs" ]; then
         # Ok, so this branch isn't up to date, mention it
         printf '%s\n' "${txtbld}$branch needs update${txtrst}"
         git rebase master "$branch"
         if [ -d "$(git rev-parse --git-dir)/rebase-apply" ]; then
            # fail gracefully, so basically abort and mention it
            git rebase --abort
            printf '%s\n' "${txtred}rebase aborted for branch ${txtblu}${txtbld}$branch${txtrst}"
            fails="$fails $branch"
         fi
      fi
   done
   if [ -n "$fails" ]; then
      printf '%s\n' "${txtred}failed updates:${txtrst}$fails"
   fi
}

After months of use this turned out to be a very useful script. It saved him from tediously updating each feature branch he had, and he never forgot to update them all.

Git rebasing to last public commit

As part of my workflow I make lots of small commits, which I push to a central vcs once I’ve finished a feature or fixed a bug. Often I find the need to rebase some of my commits interactively in git.

One reason is that I tend to forget to add something in a commit, so a fix-up I made afterwards needs to be squashed or placed closer to another commit. Another is that changing the order of my commits offers a better or more rational train of thought. And yet another reason is that at times my attention is so scattered that I have two or more branches for features, futures or wip/dev. All those branches are based on a local master branch, which in turn is based on the central vcs’ master.

I used to type, copy, or bash reverse-i-search the git rebase --interactive HEAD~N command. This could of course be shortened a bit to just git rebase -i HEAD~N, but it still left me with the task of figuring out how many commits I wanted to revise. Just guessing usually got me either too many or too few. Then I grew into the habit of firing up gitk, which in my case is an alias for gitk --all so I can see all branches, counting by hand how many commits I needed to revise and then using that number.

Although a slight improvement over just guessing, it still left an error margin of three commits in the worst case. Then I noticed that I usually revise up to a branching point, so using git rebase -i branch/tag-I-started-to-deviate is shorter and saves time. For this to work it’s of course mandatory to keep feature branches short and to regularly rebase all branches, including the local master branch, against its parent.
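When the exact branching point isn’t obvious from the branch and tag names, git can compute it itself; a minimal sketch, assuming the parent branch is called master:

```shell
# Rebase the current branch interactively onto the commit where it
# forked from master. merge-base finds the newest common ancestor of
# the two branches, so no counting of commits by hand is needed.
git rebase -i "$(git merge-base master HEAD)"
```

This picks up exactly the commits that exist on the current branch but not on master, which is usually the set I want to revise.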

How I fixed my PostgreSQL re-install problems

As mentioned in an earlier post, I had some trouble re-installing PostgreSQL. For some obscure reason re-installing PostgreSQL (postgres) locally wasn’t possible anymore. Over time something (or someone, probably me) had changed the system in a way that would always get me errors in the last steps of the PostgreSQL Windows installer: the part where the installer tries to set up the basic database structures.

The first time I met the problem I didn’t bother fixing it; the central database worked just fine for me. But as time passes, retracing your steps becomes harder. As long as there was no pressure and no time to spend on the problem, I stopped after a few fruitless attempts at finding the culprit. This time was different. Determined to squash this inconvenience permanently, I decided not to stop searching. After a while some mailing list posts seemed to lead to a solution to the problem.

It turned out the environment variable COMSPEC had a value that confused the install script of the PostgreSQL installer. The value was not a path to a shell executable like command.com or cmd.exe. The installer script, being written in VBScript, would run differently when a system or user variable with that name existed and its value was not a valid path to a shell executable. This is probably an interpreter feature and not a bug in the script. When I dropped the variable, the install finished successfully. I didn’t look any further, but one of the posts said that restoring the variable to its original value wouldn’t pose any problems.

Benefits of running your own test database

Setting up your work environment properly, so that it suits your needs, is an important and often undervalued part of software development. One of the things to consider is where and how to test your code. In big teams you have to conform to the standards or come up with a common understanding. I happen to have more freedom in this aspect, which made me change from the “everything works on my pc” idea to the “keep everything centralized” one, and then to the current one, a mix of both.

It has been only recently that I got it the way I wanted; more about that in another post. In the current configuration there is a production database, a central test database that conforms with the latest changes in the version control system, and a local test instance which has any changes that have not been committed yet. The main driver is that, among other things, the continuous integration (CI) server uses the central test database for testing, and I want to continue development without breaking the tests run by the CI server.

The advantages of my new configuration are:

  • the CI server can run its test without interference by developers
  • developers can run their tests without outside interference
  • speed, database access is less of a bottleneck

The disadvantages are:

  1. developers need their own test database to prevent interference
    1. the developer must update his test database schema with the latest changes from the central test database schema
    2. the developer must update the central test database’s schema, if necessary
  2. unrealistic performance expectations

Disadvantage 1 isn’t that hard to fix. For disadvantages 1.1 and 1.2 the solution is to restrict the number of developers touching code that interacts with the database, by funnelling it through a persistence subproject. The last disadvantage can exist on several levels: the test database’s server might perform differently from the production server, which in turn will certainly differ between clients.
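Switching between the central and the local test database can be made cheap by making the connection configurable; a hypothetical sketch (the variable name TEST_DB_URL and the connection strings are made up for illustration):

```shell
# Pick the test database per environment: the CI server would export
# TEST_DB_URL pointing at the central test database, while developers
# fall back to a local instance by default (names are examples only).
: "${TEST_DB_URL:=postgresql://localhost:5432/app_test}"
export TEST_DB_URL
echo "running tests against $TEST_DB_URL"
```

With this in place the same test suite runs unchanged on the CI server and on a developer machine, each against its own database.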

In conclusion I think I made my world a little better through these improvements. Another night with happy dreams ahead of me.