Sunday, February 26, 2017

Python


  • install
    • dnf install -y python-devel
    • dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel lcms2-devel libwebp-devel tcl-devel tk-devel
    • graph
      • PIL: pip install Pillow (the maintained fork of PIL)
      • dnf install tkinter
      • pip install matplotlib
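      • smoke test (a minimal sketch, assuming the packages above installed cleanly):
        • python -c "from PIL import Image; import matplotlib; print('graphics stack OK')"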
  • subprocess.Popen
    • import subprocess
    •
    • def subprocess_Popen(cmd, wait = True):
    •     # Run cmd through the shell, capturing stdout and stderr.
    •     pipe = subprocess.Popen(cmd, shell = True, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
    •     if not wait:
    •         return pipe  # caller can poll() or communicate() later
    •     outs, errs = pipe.communicate()  # blocks until the process exits
    •     if outs:
    •         print(outs)
    •     if errs:
    •         print(errs)
    •     return outs, errs
  • Django

Docker

  • Installation
    • dnf install docker
    • systemctl start docker
    • systemctl enable docker
    • sudo groupadd docker
    • sudo gpasswd -a ${USER} docker
    • sudo systemctl restart docker
  • docker pull fedora
  • docker save -o <save image to path> <image name>
  • docker load -i <path to image tar file>
  • docker images, docker ps, docker rmi, docker rm
  • docker run -v /home:/home --rm=true -i -t docker.io/fedora /bin/bash
  • docker run --rm -it docker_test_1 /bin/bash
  • docker commit <container> <image_name>
  • docker history <image_name>
  • Remove all stopped containers
    • docker rm $(docker ps -a -q)
  • Remove all dangling (untagged) images
    • docker rmi $(docker images -f "dangling=true" -q)
  • docker upgrade to CE
    • dnf remove docker docker-common docker-selinux docker-engine-selinux docker-engine
    • dnf -y install dnf-plugins-core
    • dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
    • dnf -y install docker-ce
  • add user to docker
    • sudo groupadd docker
    • sudo gpasswd -a yjpark docker
    • sudo usermod -aG docker yjpark
  • docker-compose upgrade
    • curl -L https://github.com/docker/compose/releases/download/1.17.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    • chmod +x /usr/local/bin/docker-compose
    • replace the old /usr/bin/docker-compose if one exists
  • docker registry
    • docker pull registry:2
    • docker run -d -p 5000:5000 --restart always --name registry registry
vi /etc/docker/daemon.json

{
  "insecure-registries" : ["10.0.0.211:5000"]
}

      • push docker
        • docker tag docker_test_1 10.0.0.211:5000/docker_test_1
        • docker push 10.0.0.211:5000/docker_test_1
      • pull docker image from repository
        • CE:

    vi /etc/docker/daemon.json

    {
      "insecure-registries" : ["10.4.38.164:5000"]
    }

        • Older:

    vi /etc/sysconfig/docker

    INSECURE_REGISTRY='--insecure-registry 10.4.38.164:5000'

      • docker pull 10.4.38.164:5000/utcit-image
      • docker tag  10.4.38.164:5000/utcit-image   utcit-image
      • lookup
        • curl 'http://10.0.0.211:5000/v2/_catalog'
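      • list the tags of one repository (uses the standard registry v2 API; the repository name is an example):
        • curl 'http://10.0.0.211:5000/v2/docker_test_1/tags/list'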
      • stop registry
        • docker stop registry
      • cleanup registry
        • docker rm -v registry
    • Docker Swarm Cheatsheet

      Cluster Status

      Use the following command to get the status of a cluster. This can only be run from a manager node.
      docker node ls
      
      
      1. The node whose MANAGER STATUS is "Leader" is the primary manager that actually manages the cluster. The other managers are replicas there for redundancy.
      2. A node whose availability is "Drain" is not running any services, nor will any services be deployed to it in future. This can be good for managers as it means they are dedicated to managing.
      3. A manager marked "Unreachable" has likely gone offline.
      4. The asterisk (*) after a node's ID indicates the node that you ran the query on.
      You may notice that one of the nodes has a blank under MANAGER STATUS. This means that it is a worker node and not a manager.

      Node Management

      Add A Node

      When you created the cluster, you should have been given the command that you need to give to other nodes in order to join the cluster. If you don't have this command anymore, you can retrieve it by executing:
      docker swarm join-token worker
      
      
      If you want to get the command to join nodes as managers, you can execute:
      docker swarm join-token manager
      
      
      The output of both of these commands will be similar to below:
      To add a manager to this swarm, run the following command:
      
          docker swarm join \
          --token SWMTKN-1-154zu01lysyz6qbqxhm5i27b591gg8ffhce2jq438damwelgz6-2qve6pcf9y6j5t3d9dmqnl5kg \
          10.1.0.48:2377
      
      
      Now that you have the relevant command, simply execute it from the node that you wish to have join the swarm. Make sure not to have more than 7 managers, but you can have as many workers as you like.
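      For reference, the join tokens originally come from initialising the swarm on the first manager (a sketch; the advertise address is an example):
      docker swarm init --advertise-addr 10.1.0.48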

      Remove A Node

      To gracefully remove a node from a cluster, use the following command on the node itself:
      docker swarm leave
      
      
      If the node is currently a manager, you will need to demote it before trying to leave the swarm.
      The node will still show in the list of nodes in the cluster, but its status will be Down and its availability will be Active.
      To now remove the node from the cluster, run the following command on the manager.
      docker node rm $NODE_ID
      
      

      Inspect Node

      docker node inspect --pretty $NODE_ID
      
      

      Drain Node

      If you want to gracefully remove any containers from a node, and prevent services from running on that node in future then you want to drain it. This could be a useful step to run before updating and rebooting nodes.
      docker node update \
      --availability drain \
      $NODE_ID
      
      
      Draining your manager nodes may be a good idea to keep them dedicated to managing the cluster. You may wish to maintain a few tiny, dedicated management nodes for redundancy/reliability and have larger worker nodes for hosting the services.
      To undo this change, execute:
      docker node update --availability active $NODE_ID
      
      
      Refer here for more info on draining nodes.

      Demote Node

      To demote a node from a manager to a "follower" node, use the command below:
      docker node demote $NODE_ID
      
      

      Promote Node

      To promote a node from a worker to a manager, execute:
      docker node promote $NODE_ID
      
      
      Make sure to have an odd number of managers, and no more than 7. All managers should have minimal downtime and have static IPs.

      Services

      To deploy your application to a swarm cluster, you deploy it as a service. By being a service, it has the ability to:
      • be deployed to any of the nodes.
      • be automatically re-deployed if it dies.
      • be scaled to run any number of instances simultaneously across the cluster.
      • have requests be load-balanced across running containers of the service.

      Create Service

      To deploy your application as a service run:
      docker service create $IMAGE
      
      
      Chances are you will want to give your service a name to reference it by, and you may want to specify the number of instances/replicas:
      docker service create --name my_web --replicas 3 $IMAGE
      
      
      Please refer here for the full list of optional parameters.
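      For example, a sketch combining common options (the image and published port are illustrative):
      docker service create --name my_web --replicas 3 --publish 8080:80 nginx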

      Remove Service

      When you want to remove a service from your cluster, use the following command:
      docker service rm $SERVICE_ID
      
      

      List Services

      To see which services are running on your cluster:
      docker service ls
      
      

      Scale Service

      If you want to scale up/down after a service has already been deployed...
      docker service scale $SERVICE_ID=$NUM_INSTANCES
      
      
      For example
      docker service scale whoami=3
      
      

      List Service Processes

      When there are multiple instances of a single service running, you can see them with:
      docker service ps $SERVICE_ID
      
      
      This will indicate which nodes the service's containers are running on.
      ID                         NAME          SERVICE     IMAGE   LAST STATE          DESIRED STATE  NODE
      8p1vev3fq5zm0mi8g0as41w35  helloworld.1  helloworld  alpine  Running 7 minutes   Running        worker2
      c7a7tcdq5s0uk3qr88mf8xco6  helloworld.2  helloworld  alpine  Running 24 seconds  Running        worker1
      6crl09vdcalvtfehfh69ogfb1  helloworld.3  helloworld  alpine  Running 24 seconds  Running        worker1
      auky6trawmdlcne8ad8phb0f1  helloworld.4  helloworld  alpine  Running 24 seconds  Accepted       manager1
      ba19kca06l18zujfwxyc5lkyn  helloworld.5  helloworld  alpine  Running 24 seconds  Running        worker2
      
      

      Overlay Networks

      Containers deployed on the same overlay network can communicate with each other, even when they are on different nodes. For more information please refer here.

      Create Overlay Network

      The command below will create an overlay network called my-network.
      docker network create \
      --driver overlay \
      --subnet 10.0.9.0/24 \
      --opt encrypted \
      my-network
      
      

      Remove Overlay Network

      docker network rm $NETWORK_ID
      
      

      Deploy Service To Specific Network

      Use the --network [overlay network ID] option when creating a service if you wish to specify which network to join.
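      For example (a sketch; my-network is the overlay created above, the image is illustrative):
      docker service create --name my_web --network my-network nginx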

      List Networks

      docker network ls
      
      Example output:
      NETWORK ID          NAME                DRIVER              SCOPE
      a924a6335935        bridge              bridge              local               
      0eb588929cb3        docker_gwbridge     bridge              local               
      6520f47d6e19        host                host                local               
      8puto62h939d        ingress             overlay             swarm               
      064db85ed9e3        none                null                local               
      0fv9x8scsntd        traefik-net         overlay             swarm
      
      

      Inspect Network

      docker network inspect $NETWORK_ID
      
      
    • delete volumes

      // see: https://github.com/chadoe/docker-cleanup-volumes
      
      $ docker volume rm $(docker volume ls -qf dangling=true)
      $ docker volume ls -qf dangling=true | xargs -r docker volume rm
      

      delete networks

      $ docker network ls  
      $ docker network ls | grep "bridge"   
      $ docker network rm $(docker network ls | grep "bridge" | awk '/ / { print $1 }')
      

      remove docker images

      // see: http://stackoverflow.com/questions/32723111/how-to-remove-old-and-unused-docker-images
      
      $ docker images
      $ docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
      
      $ docker images | grep "none"
      $ docker rmi $(docker images | grep "none" | awk '/ / { print $3 }')
      

      remove docker containers

      // see: http://stackoverflow.com/questions/32723111/how-to-remove-old-and-unused-docker-images
      
      $ docker ps
      $ docker ps -a
      $ docker rm $(docker ps -qa --no-trunc --filter "status=exited")
      

      Resize disk space for docker vm


      $ docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default


    • Test swarm:
    • Dockerfile
    FROM fedora:24

    # Clean the dnf cache in the new image.
    RUN dnf clean all

    # View and update repositories.
    RUN dnf repolist
    RUN cat /etc/dnf/dnf.conf

    # Install additional packages
    RUN dnf install -y tar
    RUN dnf install -y python-devel
    RUN dnf install -y gcc-c++
    RUN dnf install -y make
    RUN dnf install -y npm
    RUN dnf install -y bzip2

    # Install Javascript packages.
    RUN npm cache clean -f
    RUN npm install n -g
    RUN npm install npm@latest -g
    RUN n stable
    RUN npm install swagger -g

    RUN npm install pm2 -g
    RUN mkdir /code
    WORKDIR /code/
    ADD ./ /code/

    • docker build -t ci_vote .
    • docker tag ci_vote 10.0.0.211:5000/ci_vote
    • docker push 10.0.0.211:5000/ci_vote


    • docker volume create --name mongoconfig1
    • docker volume create --name mongodata1
    • docker network create --driver overlay --internal mongo
    • docker service create --replicas 1 --detach=false --name mongodb --network mongo --mount type=bind,source=`pwd`/mongodb/db,target=/data/db -p 27017:27017 mongo:latest mongod --smallfiles --logpath=/dev/null # --quiet
    • docker service create --replicas 1 --network mongo --mount type=volume,source=mongodata1,target=/data/db --mount type=volume,source=mongoconfig1,target=/data/configdb --constraint 'node.labels.mongo.replica == 1' --name mongo1 --detach=false mongo:3.2 mongod


    • git clone https://github.com/kotarac/rockmongo-docker.git
    • docker build -t rockmongo .
    • docker tag rockmongo 10.0.0.211:5000/rockmongo
    • docker push 10.0.0.211:5000/rockmongo


    • docker service create --replicas 1 --detach=false --name mongodb --mount type=bind,source=/home/yjpark/perforce/workspace/ci/mongodb/db,target=/data/db --env MONGO_DATA_DIR=/data/db --env MONGO_LOG_DIR=/dev/null -p 27017:27017 mongo:latest mongod --smallfiles --logpath=/dev/null # --quiet
    • docker service create --replicas 1 --detach=false --name rockmongo --env MONGO_HOST=10.0.0.211 -p 27018:80 10.0.0.211:5000/rockmongo
    • docker service create --replicas 1 --detach=false --name ci_vote --mount type=bind,source=/home/yjpark/perforce/workspace/ci/vote,target=/code:rw -p 9100:9100 10.0.0.211:5000/ci_vote /bin/bash -c "/usr/local/bin/npm install; /usr/local/bin/node app.js"
    • docker service create --name=visualizer --publish=8280:8080/tcp --constraint=node.role==manager --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --detach=false dockersamples/visualizer


    • Jenkins
      • mkdir jenkins
      • docker run -p 8080:8080 -p 50000:50000 -v /home/yjpark/perforce/workspace/ci/jenkins:/var/jenkins_home -d --restart always jenkins/jenkins:lts
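      • the first-run setup asks for the initial admin password; it can be read from the mounted home directory (assuming the bind mount above):
        • cat /home/yjpark/perforce/workspace/ci/jenkins/secrets/initialAdminPassword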


    Tuesday, February 14, 2017

    Fedora Commands

    • Linux version
      • uname -a
    • HDD capacity
      • df -h
    • Memory
      • free -m
    • Hostname
      • vi /etc/hostname
      • hostname -F /etc/hostname
    • Shell
      • cat /etc/shells
      • echo $SHELL
      • chsh: permanently change shell for user
    • Prompt
      • export PS1="[\\u@\\h:\\w:]\\$ "
    • Size of a directory
      • du -sh .
    • User
      • useradd -d <home-dir> userid
      • passwd userid
      • userdel -r userid
      • /etc/sudoers: add "user_id ALL=(ALL) NOPASSWD: ALL"
        • dnf install sudo
      • id username
    • Reset profile
      • dracut --regenerate-all --force
      • sync 
    • Extend the HDD
      • lvextend -l +100%FREE /dev/mapper/fedora-root
      • resize2fs /dev/mapper/fedora-root
    • Disk image backup & restore: disks and gparted
    • Manage start applications
      • via gnome-tweak-tool (installable from Software)
    • Manage services
      • systemctl status smb.service
    • To show gateway
      • ip route show
    • nis
      • stop & disable firewalld.service
      • edit /etc/sysconfig/selinux: set SELINUX=disabled
      • dnf install ypbind rpcbind
      • /etc/nfsmount.conf
        • Defaultvers=3
      • ypdomainname hq.k.grp
      • /etc/sysconfig/authconfig
        • USENIS=yes
      • /etc/yp.conf
        • domain hq.k.grp server 10.4.50.16
        • domain hq.k.grp server 10.4.50.17
      • /etc/nsswitch.conf
        • passwd: add nis
        • shadow: comment out
        • group: add nis
        • netgroup: nis sss
        • automount: files nis sss
      • systemctl enable rpcbind ypbind and reboot
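      • verify the binding after reboot (ypwhich/ypcat come with ypbind/yp-tools):
        • ypwhich
        • ypcat passwd | head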
    • autofs
      • dnf install autofs
      • /etc/sysconfig/autofs
        • BROWSE_MODE="yes"
      • /etc/auto.master
        • auto.master.mtvlnx
      • /etc/resolv.conf
        • search hq.k.grp
        • nameserver 10.4.40.7
        • nameserver 10.4.20.46
      • systemctl enable & start autofs.service
    • nfs
      • NFS mount
        • create /mnt/platform
        • mount -t nfs 10.73.1.118:/platform /mnt/platform
        • vi /etc/fstab
          • 10.73.1.118:/platform /mnt/platform nfs defaults 0 0
        • With username, password
          • mount -t cifs -o username=username,password=password //10.4.38.58/stormtest/flash/build /mnt/stormtest
        • vi /etc/fstab
          • //10.4.38.58/stormtest/flash/build /mnt/stormtest cifs username=username,password=password 0 0
      • NFS export
        • /etc/exports
        • /home/yjpark *(rw,no_root_squash)
        • systemctl start rpcbind nfs-server
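        • apply and verify the exports (exportfs and showmount are part of nfs-utils):
          • exportfs -ra
          • showmount -e localhost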
    • samba
      • yum install samba
      • system-config-samba or
      • vi /etc/samba/smb.conf
         [global]
         workgroup = HQ
         server string = yjpark_linux
         security = user

         [homes]
         comment = Home Directories
         valid users = %S, %D%w%S
         browseable = Yes
         writable = yes
         inherit acls = Yes

         [yjpark] => doesn't seem to be needed
         comment = Yongjin's linux PC
         path = /local/yjpark
         public = yes
         writable = yes
         browseable = Yes

      • service smb restart
      • smbpasswd -a yjpark
      • firewall-cmd --permanent --add-service=samba
      • service firewalld restart
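      • quick check that the shares are visible (a sketch using smbclient):
        • smbclient -L localhost -U yjpark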
    • http
      • systemctl enable/start httpd.service
    • telnet
      • yum install telnet-server
      • systemctl start telnet.socket or systemctl enable telnet.socket
      • firewall-cmd --permanent --add-service=telnet
      • service firewalld restart
    • ftp
      • yum install vsftpd
      • /etc/vsftpd/vsftpd.conf
             local_enable=yes
      • service vsftpd start
    • tftp
      • dnf install tftp-server
      • systemctl enable & start tftp.socket
      • cd /; ln -s /var/lib/tftpboot
      • change home: /lib/systemd/system/tftp.service
    • libs
      • dnf
        • sudo, procps, passwd, python-devel, libxslt-devel, libxml2-devel, redhat-rpm-config, python-pip, pandoc, nodejs, npm, daemonize, libXScrnSaver, libXScrnSaver-devel, GConf2, fontconfig, cairo, cairo-devel, cairomm-devel, libjpeg-turbo-devel, pango, pango-devel, pangomm, pangomm-devel, giflib-devel, libXi, libXcursor
      • pip
        • --upgrade pip, lxml, beautifulsoup4, junit_xml
      • npm
        • npm@latest -g, selenium-webdriver -g, selenium-webdriver (as localadmin)
    • OTV5-CI docker node setup:
      • create /home/jk for localadmin:localadmin
      • cp /users/yjpark/p4/p4 to /usr/local/bin
      • dnf update
      • dnf install docker, enable & start
      • docker load -i /users/yjpark/docker/*
      • pip install --upgrade pip, lxml, bs4, junit_xml
    • firewall
      • firewall-cmd --add-port=80/tcp --permanent
      • firewall-cmd --reload
      • service firewalld stop
    • query service ports
      • vi /etc/services
      • or sudo nmap -sT -O localhost
      • or netstat -anp
    • Apache2
      • dnf install httpd
      • version: httpd -v
      • systemctl start httpd.service
      • default index file: touch /var/www/html/index.html
    • PHP
      • dnf install php
      • image handling module: dnf install php-gd
      • multi language: dnf install php-mbstring
    • MySQL
      • dnf install mariadb
      • dnf install mariadb-server
      • systemctl start mariadb.service
      • /usr/bin/mysql_secure_installation
        • Setup root password, etc
      • dnf install php-mysql (FC24)
      • dnf install php-pdo_mysql (FC25)
      • dnf install MySQL-python
      • vi /etc/my.cnf
             [mysqld]
                    :
             character-set-server = utf8mb4
             collation-server = utf8mb4_unicode_ci
      • systemctl restart mariadb.service
      • mkdir www
      • exit
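      • create a database and app user (a sketch; database name, user, and password are placeholders):
        • mysql -u root -p -e "CREATE DATABASE mydb CHARACTER SET utf8mb4; CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'changeme'; GRANT ALL ON mydb.* TO 'webuser'@'localhost';"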
    • phpMyAdmin
      • dnf install phpmyadmin
      • vi /etc/httpd/conf.d/phpMyAdmin.conf
       AddDefaultCharset UTF-8

       <Directory /usr/share/phpMyAdmin/>
         <IfModule mod_authz_core.c>
           # Apache 2.4
           <RequireAny>
             # ADD the following line:
             Require all granted
             Require ip 127.0.0.1
             Require ip ::1
           </RequireAny>
         </IfModule>
         <IfModule !mod_authz_core.c>
           # Apache 2.2
           # CHANGE the following 2 lines:
           Order Allow,Deny
           Allow from All
           Allow from 127.0.0.1
           Allow from ::1
         </IfModule>
       </Directory>
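      • apply the change:
        • systemctl restart httpd.service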



    • ffmpeg
      • rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm
      • In case need GPG key:
        • https://rpmfusion.org/keys?action=AttachFile&do=view&target=RPM-GPG-KEY-rpmfusion-free-fedora-20
        • sudo rpm --import key_above
      • dnf install ffmpeg
    • nginx
      • dnf install nginx
      • systemctl enable nginx
      • vi /etc/nginx/nginx.conf:
              listen 8090 default_server;
              root /home/yjpark/www;
      • semanage port -l | grep http_port_t
      • semanage port -a -t http_port_t -p tcp 8090
      • systemctl start nginx
      • getenforce
      • setenforce Permissive
      • systemctl stop nginx; systemctl start nginx
      • chcon -Rt httpd_sys_content_t /home/yjpark/perforce/workspace/www/nginx
    • nginx: install from source
      • openssl:
        • wget http://www.openssl.org/source/openssl-1.0.2f.tar.gz
        • cd openssl-1.0.2f
        • ./Configure linux-x86_64 --prefix=/usr
        • make; sudo make install
      • pcre
        • wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.40.tar.gz
        • ./configure; make; sudo make install
      • wget http://zlib.net/zlib-1.2.11.tar.gz
        • ./configure; make; sudo make install
      • download https://github.com/arut/nginx-rtmp-module
      • wget http://nginx.org/download/nginx-1.11.9.tar.gz
        • ./configure --add-module=../nginx-rtmp-module-master --with-http_ssl_module
        • make; sudo make install
      • mkdir /tmp/HLS/cam
      • In /usr/local/nginx/conf/nginx.conf:

    rtmp {
        server {
            listen 1935;
            allow play all;

            chunk_size 4000;

            # live stream from the camera, repackaged as HLS
            application cam {
                allow play all;
                live on;
                hls on;
                hls_nested on;
                hls_path /tmp/HLS/cam;
                hls_fragment 10s;
                exec_static ffmpeg -i rtmps://stream-delta.dropcam.com/nexus/a203d89605a4454a94caed1b16024f79 -c:v libx264 -an -f flv rtmp://localhost:1935/cam/nest;
                exec_static ffmpeg -i rtmps://stream-delta.dropcam.com/nexus/a203d89605a4454a94caed1b16024f79 -g 1 -s 320x240 -vcodec libx264 -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1 -f flv rtmp://localhost:1935/cam/nest;
            }

            # video on demand for mp4 files
            application vod {
                allow play all;
                play /home/yjpark/www/nginx/mp4s;
            }

            application hls {
                live on;
                hls on;
                hls_path /tmp/HLS;
            }
        }
    }



    http {

        include       mime.types;

        default_type  application/octet-stream;



        #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

        #                  '$status $body_bytes_sent "$http_referer" '

        #                  '"$http_user_agent" "$http_x_forwarded_for"';

        #access_log  logs/access.log  main;

        sendfile        on;
        #tcp_nopush     on;

        #keepalive_timeout  0;
        keepalive_timeout  65;

        #gzip  on;

        server {
            listen       8090;
            server_name  localhost;

            #charset koi8-r;

            #access_log  logs/host.access.log  main;

            location / {
                root /home/yjpark/www/nginx/html;
                index index.html index.htm;
            }

            #creates the http-location for our full-resolution (desktop) HLS stream - "http://my-ip/cam/nest/index.m3u8"
            location /cam {
                types {
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
                }
                alias /tmp/HLS/cam;
                add_header Cache-Control no-cache;
            }

            #error_page  404              /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass   http://127.0.0.1;
            #}
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            #    root           html;
            #    fastcgi_pass   127.0.0.1:9000;
            #    fastcgi_index  index.php;
            #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
            #    include        fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny  all;
            #}
        }
    }

      • sudo /usr/local/nginx/sbin/nginx -s stop;  sudo /usr/local/nginx/sbin/nginx
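      • quick playback test of the HLS output (a sketch; ffplay ships with ffmpeg):
        • ffplay http://localhost:8090/cam/nest/index.m3u8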
    • Server/client certificates
      • http://theheat.dk/blog/?p=1023
      • https://engineering.circle.com/https-authorized-certs-with-node-js-315e548354a2
    • Perforce
      • Download p4d from https://www.perforce.com/downloads/helix-versioning-engine-p4d
      • vi /etc/systemd/system/p4d.service
        • # Example Perforce systemd file (p4d.service):
          #
          # This service file will start Perforce at boot, and
          # provide everything needed to use systemctl to control
          # the Perforce server process.
          
          [Unit]
          # Note that descriptions are limited to 80 characters:
          Description=Perforce Server
          
          # Starts Perforce only after the network services are 
          # ready:
          #After=network.target
          After=network-online.target
          
          [Service]
          # The type should always be set to "forking" to support
          # multiple Perforce processes:
          Type=forking
          
          # Set the system user used to launch this process (usually
          # 'perforce':
          User=perforce
          
          # The command used to start Perforce:
          ExecStart=/usr/bin/p4d -r /home/yjpark/p4d/p4d -p 10.0.0.211:1666 -d
          
          [Install]
          # Describes the target for this service -- this will always
          # be 'multi-user.target':
          WantedBy=multi-user.target
        • systemctl start p4d
        • systemctl enable p4d
          • a symbolic link to the p4d.service file will be created in /etc/systemd/system/multi-user.target.wants
      • Download p4v from https://www.perforce.com/downloads/helix-visual-client-p4v
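      • verify the server is answering (assumes the p4 command-line client is on PATH):
        • p4 -p 10.0.0.211:1666 info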
    • dual boot order
      • /etc/default/grub
      • change GRUB_DEFAULT=<number>
      • grub2-mkconfig -o /boot/grub2/grub.cfg
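      • to find the right <number> (entries are numbered from 0), list the menu entries first; a minimal sketch:
        • awk -F\' '/^menuentry/ {print i++ ": " $2}' /boot/grub2/grub.cfg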
    • NoMachine for Fedora 26+
      • Until NoMachine adds support for the Wayland protocol, disable Wayland in the gdm configuration by adding the following key in /etc/gdm/custom.conf:
      • WaylandEnable=false
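      • then restart the display manager to apply it (this ends the current graphical session):
        • systemctl restart gdm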