Sunday, November 4, 2012

Install Sonatype Nexus Cluster/Replication/High Availability

Setup Sonatype Nexus Cluster
Nexus cluster with Nginx Box

Nexus is a repository manager that provides development teams with the ability to proxy remote repositories and share software artifacts.

The open-source version of Nexus provides fewer features than the professional version; in particular, replication and failover are not available.

I have implemented it without using any dedicated high-availability/cluster utility such as Linux-HA or DRBD.

I have used two Linux boxes running Nexus and one Nginx box serving as a proxy server.

Step-by-step guide to implementing the Nexus cluster.

Set up two Linux boxes as usual. Set one server's hostname to nexus1 and the other's to nexus2. Add the necessary DNS or /etc/hosts entries so the names resolve.

Ensure the names resolve by using the ping command:
ping nexus1
ping nexus2
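
If you are not relying on DNS, an /etc/hosts entry on both boxes along these lines will do (the IP addresses here are placeholders; substitute your own):

 192.168.1.11   nexus1
 192.168.1.12   nexus2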

Create directories and install Nexus


(I have created an application user to run Nexus; you can create any user.)
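
For reference, a minimal sketch of creating such a user on a typical Linux box (the name application simply matches the rest of this guide):

 #useradd -m application
 #passwd application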

Create a directory to store Nexus files.

 #mkdir -p /data/sonatype-work
 #chown application:application /data/sonatype-work


Change to the application user and continue with the following steps:


 sudo su - application  
 cd /opt/  
 wget http://www.sonatype.org/downloads/nexus-2.1.2-bundle.zip  
 unzip nexus-2.1.2-bundle.zip  
 mv nexus-2.1.2 nexus

 #chown -R application:application /opt/nexus
 #cd /opt/nexus  

Edit nexus/conf/nexus.properties, and update with the following contents:

 #vim nexus/conf/nexus.properties 
 # Jetty section  
 application-port=8081  
 application-host=0.0.0.0  
 nexus-webapp=${bundleBasedir}/nexus  
 nexus-webapp-context-path=/  
 # Nexus section  
 nexus-work=/data/sonatype-work/nexus  
 runtime=${bundleBasedir}/nexus/WEB-INF  
 pr.encryptor.publicKeyPath=/apr/public-key.txt  


Create directories for the Nexus data and the PID file.

 mkdir /data/sonatype-work/nexus  
 mkdir /home/application/pid  

(the Nexus process ID file will be stored in this directory)

Log out from the application user and continue as a user with root privileges:

 exit  
 cp /opt/nexus/bin/nexus /etc/init.d/  


Edit /etc/init.d/nexus, and update the following properties:

 vim /etc/init.d/nexus  
 NEXUS_HOME="/opt/nexus"  

Start Nexus:

 #chkconfig nexus on; service nexus start
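
To confirm Nexus came up on the configured port (context path / as set in nexus.properties), a quick check from the same box:

 curl -I http://localhost:8081/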



Passwordless Trust between the Two Nexus Boxes

I have enabled passwordless SSH trust between the two boxes so that rsync and the HA script can work without prompting for a password.
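
A minimal sketch of setting this up with OpenSSH, run as the application user on each box and pointed at the other (shown here from nexus1):

 ssh-keygen -t rsa
 ssh-copy-id application@nexus2   # use application@nexus1 when running on nexus2
 ssh application@nexus2 hostname  # should print the remote hostname without a password prompt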

Load Balance/High Availability

Only one Nexus server is active at any time. The passive server updates its files/repository from the active server, and when the active server fails the passive server automatically takes over and becomes active, while the server that went down becomes passive and starts updating its files/repository from the new active server.
Create a script using the code shown below and save it as
/usr/local/bin/check_remote_nexus

 #!/bin/bash
 # HA check script: starts Nexus locally if it is not running anywhere,
 # and resolves split-brain by stopping the instance that started last.
 # Peer host name: set to nexus1 when installing this script on nexus2.
 rsysname=nexus2
 pidfile=/home/application/pid/nexus.pid
 lpid=`cat $pidfile 2>/dev/null`
 rpid=`ssh $rsysname cat $pidfile 2>/dev/null`
 date_time=`date`
 if [ -f $pidfile ]; then
     echo "$date_time: Nexus is running on localhost"
 elif [ ! -z "$rpid" ]; then
     echo "$date_time: Nexus is already running on $rsysname. Remote PID $rpid"
 else
     echo "$date_time: Nexus is not available"
     echo "$date_time: Starting server"
     # Remove indexes copied over by rsync before starting, so Nexus rebuilds them.
     rm -rf /data/sonatype-work/nexus/timeline/index
     rm -rf /data/sonatype-work/nexus/indexer
     /etc/init.d/nexus start
     echo "$date_time: Nexus started"
     sleep 60
     curl --request DELETE -u admin:admin123 http://localhost:8081/service/local/data_index/repositories
     echo "Nexus index deleted"
 fi
 ######################### checking split brain condition ##################
 ## If Nexus is running on both boxes, stop the instance that started last.
 if [ -n "$lpid" ] && [ -n "$rpid" ]; then
     ################# checking uptime of Nexus ###################
     # Note: the etime-to-seconds conversion assumes an HH:MM:SS format.
     lpid_uptime=`ps -eo pid,etime | grep $lpid | grep -v grep | awk '{print $2}'`
     rpid_uptime=`ssh $rsysname ps -eo pid,etime | grep $rpid | grep -v grep | awk '{print $2}'`
     lpid_uptime=`echo $lpid_uptime | awk -F: '{seconds=($1*60)*60; seconds=seconds+($2*60); seconds=seconds+$3; print seconds}'`
     rpid_uptime=`echo $rpid_uptime | awk -F: '{seconds=($1*60)*60; seconds=seconds+($2*60); seconds=seconds+$3; print seconds}'`
     ################# shutting down youngest Nexus process ###################
     if [ $lpid_uptime -eq $rpid_uptime ]; then
         ((lpid_uptime++))
     fi
     if [ $lpid_uptime -gt $rpid_uptime ]; then
         echo "Nexus on localhost started earlier. Shutting down Nexus on $rsysname"
         ssh $rsysname /etc/init.d/nexus stop
         echo "$rsysname stopped."
     else
         echo "Nexus on localhost started after Nexus on $rsysname. Shutting down localhost Nexus"
         /etc/init.d/nexus stop
         echo "localhost Nexus stopped."
     fi
 fi
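
Make the script executable so cron can run it directly:

 #chmod +x /usr/local/bin/check_remote_nexus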

Add a cron task on each server.

On nexus1:
 0-59/6 * * * * /usr/local/bin/check_remote_nexus >> /var/log/check_remote_nexus.log  

On nexus2:

 3-59/6 * * * * /usr/local/bin/check_remote_nexus >> /var/log/check_remote_nexus.log  

Each server runs the script every 6 minutes, offset by 3 minutes from the other, so between the two boxes the Nexus status is checked every 3 minutes.


Replication
Rsync is configured in the application user's crontab. Each server checks whether Nexus is running locally; if it is not, it pulls data from the remote server, updating its /data/sonatype-work/nexus folder from the remote server's /data/sonatype-work/nexus/ folder.

The script is as follows. Change the sysname variable to your remote Nexus server's name.

 #!/bin/bash
 # Replication script: if Nexus is not running locally, pull the repository
 # data from the active node. Set sysname to nexus1 on the nexus2 box.
 sysname=nexus2
 date_time=`date`
 if [ -f /home/application/pid/nexus.pid ]; then
     echo "$date_time: Nexus is running on localhost. rsync not required"
 else
     echo "------------------starting rsync----------------------------------"
     echo "rsync pulling files from $sysname at $date_time"
     rsync -av --delete $sysname:/data/sonatype-work/ /data/sonatype-work/
     echo "-----------------rsync completed---------------------------------"
 fi

Save this script as /usr/local/bin/update_nexus.sh.
Create the log file /var/log/update_nexus.log and change its ownership to the application user (or whichever user you created):

 #touch /var/log/update_nexus.log
 #chown application:application /var/log/update_nexus.log
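
Also make the sync script executable so cron can run it (same path as above):

 #chmod +x /usr/local/bin/update_nexus.sh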

Add it to the application user's crontab on both servers (the entry below runs once an hour, at minute 2):

 crontab -e
 2 * * * *  /usr/local/bin/update_nexus.sh >> /var/log/update_nexus.log  

Configure nginx as load balancer/proxy

Set up nginx on a new box:
 yum install nginx  
 cd /etc/nginx/conf.d  



Create a file:
vi nexus.conf
 upstream nexus {  
     server nexus1:8081;  
     server nexus2:8081;  
 }  
 server {  
     listen 80;  
     server_name nexus.mydomain.com;  
     access_log /var/log/nginx/nginx.access.log main;  
     proxy_set_header Host $host;  
     proxy_set_header X-Real-IP $remote_addr;  
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  
     location / {  
         proxy_pass http://nexus/;  
         proxy_redirect default;  
         proxy_read_timeout 120;  
     }  
 }  
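
Since only one Nexus node is active at a time, you may optionally mark the second upstream server as a backup so nginx only tries it when the first node is unreachable (a sketch; keep or drop it as you prefer):

 upstream nexus {
     server nexus1:8081;
     server nexus2:8081 backup;
 }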
  
Restart the nginx server:

 #service nginx restart
  
 

Comments:

Note that you should avoid replicating all of sonatype-work for speed and stability.

    "It is likely that the replication will catch them in an inconsistent state, and this may cause trouble in the failover nexus instance when it starts up"

    https://support.sonatype.com/entries/21451383

    Additionally, running rsync against a multi-terabyte directory is not reasonable, and if you're in a space to need HA, it's probably pretty large. Focusing on only release-level repositories and excluding indexes significantly reduces the time and overhead.


Please enter comments to enhance this blog.