Sunday, February 26, 2017

Docker

  • Installation
    • dnf install docker
    • systemctl start docker
    • systemctl enable docker
    • sudo groupadd docker
    • sudo gpasswd -a ${USER} docker
    • sudo systemctl restart docker
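    A quick sanity check after the steps above (log out and back in first so the new docker group membership takes effect); both commands should then work without sudo:

      docker info
      docker run --rm hello-world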
  • docker pull fedora
  • docker save -o <path to save image> <image name>
  • docker load -i <path to image tar file>
  • docker images, docker ps, docker rmi, docker rm
  • docker run -v /home:/home --rm=true -i -t docker.io/fedora /bin/bash
  • docker run --rm -it docker_test_1 /bin/bash
  • docker commit <container> <image_name>
  • docker history <image_name>
  • Remove all stopped containers
    • docker rm $(docker ps -a -q)
  • Remove all dangling (untagged) images (see the prune sketch below)
    • docker rmi $(docker images -f "dangling=true" -q)
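    On Docker 1.13+ / CE the prune subcommands cover the same cleanup:

      docker container prune        # remove all stopped containers
      docker image prune            # remove dangling (untagged) images
      docker image prune -a         # also remove images not used by any container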
  • docker upgrade to CE
    • dnf remove docker docker-common docker-selinux docker-engine-selinux docker-engine
    • dnf -y install dnf-plugins-core
    • dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
    • dnf -y install docker-ce
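    If the daemon is not running after the package switch, re-enable and start it; docker version should then report the CE release:

      sudo systemctl enable docker
      sudo systemctl start docker
      docker version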
  • add user to docker
    • sudo groupadd docker
    • sudo gpasswd -a yjpark docker
    • sudo usermod -aG docker yjpark
  • docker-compose upgrade
    • curl -L https://github.com/docker/compose/releases/download/1.17.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    • chmod +x /usr/local/bin/docker-compose, then replace /usr/bin/docker-compose (or make sure /usr/local/bin comes first in PATH)
  • docker registry
    • docker pull registry:2
    • docker run -d -p 5000:5000 --restart always --name registry registry:2
vi /etc/docker/daemon.json

{
  "insecure-registries" : ["10.0.0.211:5000"]
}
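The daemon has to be restarted for changes to daemon.json to take effect:

  sudo systemctl restart docker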

      • push an image to the local registry
        • docker tag docker_test_1 10.0.0.211:5000/docker_test_1
        • docker push 10.0.0.211:5000/docker_test_1
      • pull docker image from repository
        • CE:

    vi /etc/docker/daemon.json

    {
      "insecure-registries" : ["10.4.38.164:5000"]
    }

        • Older:

    vi /etc/sysconfig/docker

    INSECURE_REGISTRY='--insecure-registry 10.4.38.164:5000'

      • docker pull 10.4.38.164:5000/utcit-image
      • docker tag  10.4.38.164:5000/utcit-image   utcit-image
      • look up the registry catalog
        • curl 'http://10.0.0.211:5000/v2/_catalog'
      • stop registry
        • docker stop registry
      • cleanup registry
        • docker rm -v registry
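        • By default the registry keeps pushed images inside the container, so the cleanup above discards them. To keep them across restarts, bind-mount the registry data directory (the host path below is just an example):
          docker run -d -p 5000:5000 --restart always --name registry -v /srv/registry:/var/lib/registry registry:2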
    • Docker Swarm Cheatsheet

      Cluster Status

      Use the following command to get the status of a cluster. This can only be run from a manager node.
      docker node ls
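      Illustrative output (IDs and hostnames are invented); the numbered notes below refer to the Leader, Drain, and Unreachable values and the * marker:
      ID                            HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
      1bcef6utixb0l0ca7gxuivsj0 *   manager1  Ready   Drain         Leader
      38ciaotwjuritcdtn9npbnkuz     manager2  Ready   Active        Reachable
      e216jshn25ckzbvmwlnh5jr3g     manager3  Down    Active        Unreachable
      9j68exjopxe7wfl6yuxml7a7j     worker1   Ready   Active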
      
      
      1. This node is the "primary" manager that actually manages the cluster. The other managers are replicas there for redundancy.
      2. This node is "drained", meaning that it is not running any services, nor will any services be deployed to it in the future. This can be good for managers as it means they are dedicated to managing.
      3. This node is unreachable, meaning it has likely gone offline.
      4. This star indicates the node that you ran the query on.
      You may notice that one of the nodes has a blank under the manager status. This means that it is a worker node and not a manager.

      Node Management

      Add A Node

      When you created the cluster, you should have been given the command that you need to give to other nodes in order to join the cluster. If you don't have this command anymore, you can retrieve it by executing:
      docker swarm join-token worker
      
      
      If you want to get the command to join nodes as managers, you can execute:
      docker swarm join-token manager
      
      
      The output of both of these commands will be similar to below:
      To add a manager to this swarm, run the following command:
      
          docker swarm join \
          --token SWMTKN-1-154zu01lysyz6qbqxhm5i27b591gg8ffhce2jq438damwelgz6-2qve6pcf9y6j5t3d9dmqnl5kg \
          10.1.0.48:2377
      
      
      Now that you have the relevant command, simply execute it from the node that you wish to have join the swarm. Make sure not to have more than 7 managers, but you can have as many workers as you like.

      Remove A Node

      To gracefully remove a node from a cluster, use the following command on the node itself:
      docker swarm leave
      
      
      If the node is currently a manager, you will need to demote it before trying to leave the swarm.
      The node will still show in the list of nodes in the cluster, but its status will be Down and its availability will be Active.
      To now remove the node from the cluster, run the following command on the manager.
      docker node rm $NODE_ID
      
      

      Inspect Node

      docker node inspect --pretty $NODE_ID
      
      

      Drain Node

      If you want to gracefully remove any containers from a node and prevent services from running on it in the future, then you want to drain it. This can be a useful step to run before updating and rebooting nodes.
      docker node update \
      --availability drain \
      $NODE_ID
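      To check that nothing is left running on the drained node:
      docker node ps $NODE_ID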
      
      
      Draining your manager nodes may be a good idea to keep them dedicated to managing the cluster. You may wish to maintain a few tiny, dedicated management nodes for redundancy/reliability and have larger worker nodes for hosting the services.
      To undo this change, execute:
      docker node update --availability active $NODE_ID
      
      
      Refer to the Docker documentation for more info on draining nodes.

      Demote Node

      To demote a node from a manager to a "follower" node, use the command below:
      docker node demote $NODE_ID
      
      

      Promote Node

      To promote a node from a worker to a manager, execute:
      docker node promote $NODE_ID
      
      
      Make sure to have an odd number of managers, and no more than 7. All managers should have minimal downtime and have static IPs.

      Services

      To deploy your application to a swarm cluster, you deploy it as a service. By being a service, it has the ability to:
      • be deployed to any of the nodes.
      • be automatically re-deployed if it dies.
      • be scaled to run any number of instances simultaneously across the cluster.
      • have requests be load-balanced across running containers of the service.

      Create Service

      To deploy your application as a service run:
      docker service create $IMAGE
      
      
      Chances are that you probably want to give your service a name to reference it by, and you may want to specify the number of instances/replicas:
      docker service create --name my_web --replicas 3 $IMAGE
      
      
      Refer to the Docker documentation for the full list of optional parameters.
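      A slightly fuller, hypothetical example that also publishes a port and constrains placement to worker nodes:
      docker service create \
      --name my_web \
      --replicas 3 \
      --publish 8080:80 \
      --constraint 'node.role == worker' \
      nginx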

      Remove Service

      When you want to remove a service from your cluster, use the following command:
      docker service rm $SERVICE_ID
      
      

      List Services

      To see which services are running on your cluster:
      docker service ls
      
      

      Scale Service

      If you want to scale up/down after a service has already been deployed...
      docker service scale $SERVICE_ID=$NUM_INSTANCES
      
      
      For example
      docker service scale whoami=3
      
      

      List Service Processes

      When there are multiple instances of a single service running, you can see them with:
      docker service ps $SERVICE_ID
      
      
      This will indicate which nodes the service's containers are running on.
      ID                         NAME          SERVICE     IMAGE   LAST STATE          DESIRED STATE  NODE
      8p1vev3fq5zm0mi8g0as41w35  helloworld.1  helloworld  alpine  Running 7 minutes   Running        worker2
      c7a7tcdq5s0uk3qr88mf8xco6  helloworld.2  helloworld  alpine  Running 24 seconds  Running        worker1
      6crl09vdcalvtfehfh69ogfb1  helloworld.3  helloworld  alpine  Running 24 seconds  Running        worker1
      auky6trawmdlcne8ad8phb0f1  helloworld.4  helloworld  alpine  Running 24 seconds  Accepted       manager1
      ba19kca06l18zujfwxyc5lkyn  helloworld.5  helloworld  alpine  Running 24 seconds  Running        worker2
      
      

      Overlay Networks

      Containers deployed on the same overlay network can communicate with each other, even when they are on different nodes. For more information please refer to the Docker documentation.

      Create Overlay Network

      The command below will create an overlay network called my-network.
      docker network create \
      --driver overlay \
      --subnet 10.0.9.0/24 \
      --opt encrypted \
      my-network
      
      

      Remove Overlay Network

      docker network rm $NETWORK_ID
      
      

      Deploy Service To Specific Network

      Use the --network [overlay network ID] option when creating a service if you wish to specify which network to join.
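      For example, assuming the my-network overlay created above:
      docker service create --name my_web --replicas 2 --network my-network $IMAGE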

      List Networks

      docker network ls
      
      Example output:
      NETWORK ID          NAME                DRIVER              SCOPE
      a924a6335935        bridge              bridge              local               
      0eb588929cb3        docker_gwbridge     bridge              local               
      6520f47d6e19        host                host                local               
      8puto62h939d        ingress             overlay             swarm               
      064db85ed9e3        none                null                local               
      0fv9x8scsntd        traefik-net         overlay             swarm
      
      

      Inspect Network

      docker network inspect $NETWORK_ID
      
      
    • delete volumes

      // see: https://github.com/chadoe/docker-cleanup-volumes
      
      $ docker volume rm $(docker volume ls -qf dangling=true)
      $ docker volume ls -qf dangling=true | xargs -r docker volume rm
      

      delete networks

      $ docker network ls  
      $ docker network ls | grep "bridge"   
      $ docker network rm $(docker network ls | grep "bridge" | awk '/ / { print $1 }')
      

      remove docker images

      // see: http://stackoverflow.com/questions/32723111/how-to-remove-old-and-unused-docker-images
      
      $ docker images
      $ docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
      
      $ docker images | grep "none"
      $ docker rmi $(docker images | grep "none" | awk '/ / { print $3 }')
      

      remove docker containers

      // see: http://stackoverflow.com/questions/32723111/how-to-remove-old-and-unused-docker-images
      
      $ docker ps
      $ docker ps -a
      $ docker rm $(docker ps -qa --no-trunc --filter "status=exited")
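      On Docker 1.13+ the prune family covers the same cleanup in fewer steps:

      $ docker system prune     # stopped containers, dangling images, unused networks
      $ docker volume prune     # unused local volumes
      $ docker network prune    # unused networks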
      

      Resize disk space for docker vm


      $ docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default


    • Test swarm:
    • Dockerfile
    FROM fedora:24

    # Clean cached package metadata in the new image.
    RUN dnf clean all

    # View repositories and the dnf configuration.
    RUN dnf repolist
    RUN cat /etc/dnf/dnf.conf

    # Install additional packages
    RUN dnf install -y tar
    RUN dnf install -y python-devel
    RUN dnf install -y gcc-c++
    RUN dnf install -y make
    RUN dnf install -y npm
    RUN dnf install -y bzip2

    # Install Javascript packages.
    RUN npm cache clean -f
    RUN npm install n -g
    RUN npm install npm@latest -g
    RUN n stable
    RUN npm install swagger -g

    RUN npm install pm2 -g
    RUN mkdir /code
    WORKDIR /code/
    ADD ./ /code/

    • docker build -t ci_vote .
    • docker tag ci_vote 10.0.0.211:5000/ci_vote
    • docker push 10.0.0.211:5000/ci_vote


    • docker volume create --name mongoconfig1
    • docker volume create --name mongodata1
    • docker network create --driver overlay --internal mongo
    • docker service create --replicas 1 --detach=false --name mongodb --network mongo --mount type=bind,source=`pwd`/mongodb/db,target=/data/db -p 27017:27017 mongo:latest mongod --smallfiles --logpath=/dev/null # --quiet
    • docker service create --replicas 1 --network mongo --mount type=volume,source=mongodata1,target=/data/db --mount type=volume,source=mongoconfig1,target=/data/configdb --constraint 'node.labels.mongo.replica == 1' --name mongo1 --detach=false mongo:3.2 mongod
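    • The node.labels constraint above only matches nodes that have been labeled beforehand, e.g. (run on a manager):
      docker node update --label-add mongo.replica=1 $NODE_ID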


    • git clone https://github.com/kotarac/rockmongo-docker.git
    • docker build -t rockmongo .
    • docker tag rockmongo 10.0.0.211:5000/rockmongo
    • docker push 10.0.0.211:5000/rockmongo


    • docker service create --replicas 1 --detach=false --name mongodb --mount type=bind,source=/home/yjpark/perforce/workspace/ci/mongodb/db,target=/data/db --env MONGO_DATA_DIR=/data/db --env MONGO_LOG_DIR=/dev/null -p 27017:27017 mongo:latest mongod --smallfiles --logpath=/dev/null # --quiet
    • docker service create --replicas 1 --detach=false --name rockmongo --env MONGO_HOST=10.0.0.211 -p 27018:80 10.0.0.211:5000/rockmongo
    • docker service create --replicas 1 --detach=false --name ci_vote --mount type=bind,source=/home/yjpark/perforce/workspace/ci/vote,target=/code:rw -p 9100:9100 10.0.0.211:5000/ci_vote /bin/bash -c "/usr/local/bin/npm install; /usr/local/bin/node app.js"
    • docker service create --name=visualizer --publish=8280:8080/tcp --constraint=node.role==manager --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --detach=false dockersamples/visualizer
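    • The ad-hoc service create commands above can also be captured in a compose v3 file and deployed with docker stack deploy; a rough sketch (not verified) that reuses the names and paths from above:

      # docker-stack.yml (deploy with: docker stack deploy -c docker-stack.yml ci)
      version: "3"
      services:
        mongodb:
          image: mongo:latest
          command: mongod --smallfiles --logpath=/dev/null
          ports:
            - "27017:27017"
          volumes:
            - /home/yjpark/perforce/workspace/ci/mongodb/db:/data/db
          deploy:
            replicas: 1
        rockmongo:
          image: 10.0.0.211:5000/rockmongo
          environment:
            - MONGO_HOST=10.0.0.211
          ports:
            - "27018:80"
          deploy:
            replicas: 1
        ci_vote:
          image: 10.0.0.211:5000/ci_vote
          command: /bin/bash -c "/usr/local/bin/npm install; /usr/local/bin/node app.js"
          volumes:
            - /home/yjpark/perforce/workspace/ci/vote:/code
          ports:
            - "9100:9100"
          deploy:
            replicas: 1
        visualizer:
          image: dockersamples/visualizer
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
          ports:
            - "8280:8080"
          deploy:
            replicas: 1
            placement:
              constraints:
                - node.role == manager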


    • Jenkins
      • mkdir jenkins
      • docker run -p 8080:8080 -p 50000:50000 -v /home/yjpark/perforce/workspace/ci/jenkins:/var/jenkins_home -d --restart always jenkins/jenkins:lts
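      • On first start, jenkins/jenkins:lts generates an initial admin password; it can be read from the container or from the bind-mounted home directory:
        • docker exec <container_id> cat /var/jenkins_home/secrets/initialAdminPassword
        • cat /home/yjpark/perforce/workspace/ci/jenkins/secrets/initialAdminPassword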