Continuous builds with Jenkins
by Sebastien Mirolo on Wed, 7 Sep 2011

Jenkins is another continuous integration server written as a Java webapp, just like CruiseControl. Jenkins seems to be the most popular continuous integration package today. There is extensive documentation to install it on many different operating systems and it comes with lots of plugins.
Having gone through the motions of installing an "apache-tomcat bridge to run a java webapp" once before, it shouldn't take too long to get Jenkins up and running.
I decided to download the latest war (1.425) and unpack it inside the apache tomcat webapps directory. I then had to configure my apache to redirect all /jenkins/* requests to tomcat, after which the startup page loaded correctly.
$ cd webapps
$ mkdir jenkins
$ cd jenkins
$ jar xvf ~/Downloads/jenkins.war
$ cd /etc/apache2/other
$ diff -u jk_mod.prev jk_mod.conf
 JkMount /cruisecontrol/* ajp13
+JkMount /jenkins/* ajp13
 <LocationMatch ".*WEB-INF.*">
$ sudo /usr/sbin/apachectl restart
$ apache-tomcat/bin/startup.sh
$ curl -O http://localhost/jenkins/
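As a side note, the JkMount directives only do something if tomcat is listening for AJP connections from the ajp13 worker. The stock apache-tomcat server.xml of that era ships with the connector below; the port is the default, so adjust if your installation differs.

<!-- conf/server.xml: AJP connector the ajp13 worker connects to -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />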
Time to build the first project and that's where it started to get tricky. Jenkins is fully configurable through the web user interface (no XML). Well, that is all good and great, but I would rather be able to automate installation, backup and restoration of our systems. I always find it very odd for tools like continuous integration servers to emphasize configuration through a graphical user interface instead of scripts. It seems very awkward to advocate manual configuration steps for a tool deployed for automation.
After looking around for a while, I found the following as a starting point. It indicates I should look into "the job configuration file (config.xml)". Without being able to find more information on config.xml, I resorted to using the web user interface to create a project and later reverse engineering the file format.
Job name: worlds
Description: Builds all of djaodjin public repositories
"Build a free-style software project"
Build/Shell: dws build http://localhost/reps/tests/client-test.git
I added a Build step to run a shell script, as that seemed the closest option to what I was looking for: running an external python script. The associated inline documentation is somewhat interesting in that respect. Though it is correct, I had to re-read it at least five times before I understood that the logical thing to do was simply to enter the command line that runs the python script with the appropriate arguments.
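For reference, the job configuration Jenkins saved under *jenkinsHome*/jobs/worlds/config.xml came out roughly like the sketch below, trimmed to the interesting part. The element names are from the version I installed, so treat them as indicative rather than definitive.

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description>Builds all of djaodjin public repositories</description>
  ...
  <builders>
    <!-- The "Execute shell" build step is saved as a hudson.tasks.Shell element -->
    <hudson.tasks.Shell>
      <command>dws build http://localhost/reps/tests/client-test.git</command>
    </hudson.tasks.Shell>
  </builders>
  <publishers/>
  <buildWrappers/>
</project>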
Jenkins comes with cvs and subversion support built-in, but not git, so I downloaded the appropriate plugin and installed it as described in the documentation.
$ cd ~/.jenkins/plugins
$ curl -O http://updates.jenkins-ci.org/latest/git.hpi
$ apache-tomcat/bin/shutdown.sh
$ apache-tomcat/bin/startup.sh
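Once the plugin is picked up after the restart, "Git" shows up as a source control choice on the job configuration page. In terms of config.xml, the job's scm element changes from the default to the git plugin's class, something along these lines (the inner layout of the GitSCM element depends on the plugin version, so I am not reproducing it here):

<!-- Default freestyle job, no source control -->
<scm class="hudson.scm.NullSCM"/>

<!-- After selecting Git in the job configuration -->
<scm class="hudson.plugins.git.GitSCM">
  ...
</scm>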
Distributed builds
The jenkins documentation about distributed builds is a good starting point. Technically you need an account on the "slave" build server that allows mechanical logins. The Jenkins master machine will use it to start remote commands on the slave. I found identity.key and secret.key in the jenkins home directory, but since I am not sure how they got created and what they are used for, I created a new key for the ssh connections instead.
# Commands to run on the master
# (Make sure to use an empty pass-phrase)
$ ssh-keygen -q -f /usr/share/jetty/.jenkins/jenkins_rsa -t rsa
$ scp /usr/share/jetty/.jenkins/jenkins_rsa.pub admin@buildIP:/home/admin/.ssh

# Commands to run on the slave
$ sudo yum install java-1.7.0-openjdk
$ sudo useradd jenkins
$ sudo -u jenkins mkdir -p /home/jenkins/.ssh
$ sudo mv /home/admin/.ssh/jenkins_rsa.pub /home/jenkins/.ssh/authorized_keys
$ sudo chown jenkins:jenkins /home/jenkins/.ssh/authorized_keys

# Test you can get a remote shell from master to slave
$ sudo -u jetty ssh -v -i .jenkins/jenkins_rsa jenkins@50.56.81.245
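If the test login is rejected even though the key is in place, it is worth checking permissions: depending on the sshd configuration (StrictModes), the .ssh directory and authorized_keys file must not be writable by anyone but the jenkins user. Tightening them on the slave looks like this:

$ sudo chmod 700 /home/jenkins/.ssh
$ sudo chmod 600 /home/jenkins/.ssh/authorized_keys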
Now that the master can use key-based authentication to run remote commands on the slave, go to http://jenkins/computer/, click on the link "New Node" and fill in the blanks.
--- jenkins/config.xml.prev
+++ jenkins/config.xml
@@ -14,6 +14,23 @@
   <clouds/>
   <slaves>
+    <slave>
+      <name>*nodeName*</name>
+      <description></description>
+      <remoteFS>/home/jenkins</remoteFS>
+      <numExecutors>2</numExecutors>
+      <mode>NORMAL</mode>
+      <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
+      <launcher class="hudson.plugins.sshslaves.SSHLauncher">
+        <host>*ipAddr*</host>
+        <port>22</port>
+        <username>jenkins</username>
+        <password></password>
+        <privatekey>*jenkinsHome*/jenkins_rsa</privatekey>
+      </launcher>
+      <label></label>
+      <nodeProperties/>
+    </slave>
   </slaves>
Everything should work out smoothly. You can check by going to http://jenkins/computer/ and clicking on the "refresh" link. Then, on the slave, make sure the connection initiated by the master got accepted and that some files were copied over into the jenkins home.
$ tail /var/log/auth.log
$ ls /home/jenkins
Historical design trickles through today's implementation. While buildbot was designed to validate code on multiple platforms, Jenkins was first designed to compile Java code and run unit tests. As a result, matrix validation is a little bit awkward to set up. If the job configuration files are any clue:
$ cat *jenkinsHome*/jobs/myjob/config.xml
<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  ...

$ cat *jenkinsHome*/jobs/myjob-on-all-platforms/config.xml
<?xml version='1.0' encoding='UTF-8'?>
<matrix-project>
  <actions/>
  ...
Thus multi-platform validation cannot be an afterthought. You will have to create a new "multi-configuration project" job if you suddenly decide to run a previously configured job on multiple platforms.
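For what it is worth, the part of a multi-configuration job's config.xml that actually describes the platform matrix is the axes element. A minimal sketch, assuming the slaves were given hypothetical labels such as linux and osx when they were created:

<matrix-project>
  ...
  <axes>
    <!-- Run the build once per slave label listed here -->
    <hudson.matrix.LabelAxis>
      <name>label</name>
      <values>
        <string>linux</string>
        <string>osx</string>
      </values>
    </hudson.matrix.LabelAxis>
  </axes>
  ...
</matrix-project>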
Job dependencies
Once you pass a threshold in complexity, your source base will be split into multiple projects, most likely in different repositories with long lists of prerequisites and dependencies. Simple jenkins jobs don't cut it anymore. It does make a lot of sense to create a job per project with the adequate dependencies, but before doing so three things need to be understood:
- Jenkins uses a workspace directory per job - as in jenkinsHome/jobs/jobName/workspace.
- Jenkins reuses that single workspace directory for every run of a job - as in, again, jenkinsHome/jobs/jobName/workspace :).
- Jenkins does not do deep recursion analysis when building a job on a machine and a dependent job on another machine.
All of this means:
- Your build scripts need to install the files produced by one job in a place the dependent job can find them, most likely outside the jobs' workspaces.
- You have to be very careful about when those prerequisite files get installed and how they are used with regards to the timing of the jenkins jobs scheduler. That is where the "Block build when upstream project is building" and "Block build when downstream project is building" configuration options come in handy (see the sketch after this list).
- You have to manually make sure all jobs in a dependency graph will execute on a single machine, or that your build scripts copy the relevant files around appropriately.
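As a rough idea of what this looks like once saved into the jobs' config.xml files, here is a trimmed sketch. The job names prerequisites and worlds are just placeholders, and the element names are what my installed version wrote out, so treat them as indicative.

<!-- Upstream job: *jenkinsHome*/jobs/prerequisites/config.xml -->
<project>
  ...
  <publishers>
    <!-- "Build other projects" post-build action: kicks off the downstream job -->
    <hudson.tasks.BuildTrigger>
      <childProjects>worlds</childProjects>
      <threshold>
        <name>SUCCESS</name>
        <ordinal>0</ordinal>
        <color>BLUE</color>
      </threshold>
    </hudson.tasks.BuildTrigger>
  </publishers>
</project>

<!-- Downstream job: *jenkinsHome*/jobs/worlds/config.xml -->
<project>
  ...
  <!-- The two "Block build when ... project is building" checkboxes -->
  <blockBuildWhenDownstreamBuilding>true</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>true</blockBuildWhenUpstreamBuilding>
  ...
</project>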
Conclusion
Jenkins is a great improvement over many continuous integration systems that have been available open source so far. It is very easy to set up and straightforward to create a first running job. Multi-configuration jobs and job dependencies are a bit clunky in my opinion, especially when you try to use both of them in conjunction. I wouldn't say it does not work, but it is definitely begging for huge improvements there. Last on Jenkins, the Chuck Norris plug-in is a must-have.
Jenkins is only a continuous integration system, and even if it has plug-ins to integrate with source control browsers, wikis, etc., I still like the idea of an all-in-one integrated product. Bitten, written in python and closely related to Trac, might be next on the play list.