Overview
I’m setting up an OpenShift demo following the helloworld-msa guide.
The laptop I’m using is a Lenovo ThinkPad running Fedora 23. The notebook is used for my day-to-day work and additionally as a presentation and demo laptop. Setting up the OpenShift demo on it is therefore a natural step.
Preparing the Demo
Installing vagrant
I decided to install the Fedora-provided Vagrant. This delivers version 1.8.1 instead of 1.8.4 (the current version as of today). I will see whether this works out. I also want to use Vagrant with libvirt, as this is the default virtualization provider on Fedora, and I hope not to run into any dependency issues.
I follow this route:
https://fedoramagazine.org/running-vagrant-fedora-22/
[mschreie@mschreie ~]$ sudo dnf install vagrant
[mschreie@mschreie ~]$ sudo dnf install vagrant-libvirt
[mschreie@mschreie ~]$ sudo cp /usr/share/vagrant/gems/doc/vagrant-libvirt-0.0.32/polkit/10-vagrant-libvirt.rules /etc/polkit-1/rules.d/
[mschreie@mschreie ~]$ systemctl restart libvirtd
[mschreie@mschreie ~]$ systemctl restart polkit
[mschreie@mschreie ~]$ sudo usermod -aG vagrant mschreie
[mschreie@mschreie ~]$
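Before going further it does not hurt to confirm that the packaged Vagrant and the libvirt pieces are really in place. A minimal sanity check (just a sketch, nothing the setup depends on) could look like this:

# verify the Vagrant version shipped by Fedora (1.8.1 at the time of writing)
vagrant --version
# confirm both RPMs are installed
rpm -q vagrant vagrant-libvirt
# make sure libvirtd itself is up
systemctl is-active libvirtd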
I did not install the LXC drivers (as I prefer to use Docker).
Installing the Container Development Kit (CDK)
I downloaded
- Red Hat Container Tools (cdk-2.1.0.zip)
- RHEL 7.2 Vagrant for libvirt
from https://access.redhat.com/downloads/content/293/ver=2.1/rhel---7/2.1.0/x86_64/product-software
I moved the Vagrant box to a place in my filesystem where I hope it fits:
[mschreie@mschreie ~]$ sudo mkdir /VirtualMachines/vagrant
[mschreie@mschreie ~]$ sudo chown mschreie: /VirtualMachines/vagrant
[mschreie@mschreie ~]$ ln -s /VirtualMachines/vagrant vagrant
[mschreie@mschreie ~]$ mv "/Archive/RPMs&tgz/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box" vagrant/
and unpacked and installed the CDK content as follows:
[mschreie@mschreie RPMs&tgz]$ cd
[mschreie@mschreie ~]$ unzip /Archive/RPMs\&tgz/cdk-2.1.0.zip
[mschreie@mschreie ~]$ sudo dnf install ruby-devel zlib-devel
[mschreie@mschreie ~]$ sudo dnf install rubygem-rubyzip
Last metadata expiration check: 3:02:11 ago on Thu Jul 14 14:24:17 2016.
The Vagrant plugin installation runs smoothly:
[mschreie@mschreie ~]$ vagrant plugin install vagrant-service-manager
Installing the 'vagrant-service-manager' plugin. This can take a few minutes...
Installed the plugin 'vagrant-service-manager (1.2.0)'!
[mschreie@mschreie ~]$ vagrant plugin install vagrant-registration
Installing the 'vagrant-registration' plugin. This can take a few minutes...
Installed the plugin 'vagrant-registration (1.2.2)'!
[mschreie@mschreie ~]$ vagrant plugin install vagrant-sshfs
Installing the 'vagrant-sshfs' plugin. This can take a few minutes...
Installed the plugin 'vagrant-sshfs (1.1.0)'!
[mschreie@mschreie ~]$ vagrant plugin install zip
Installing the 'zip' plugin. This can take a few minutes...
Installed the plugin 'zip (2.0.2)'!
Now adding the vagrant box and starting it:
[mschreie@mschreie RPMs&tgz]$ vagrant box add --name cdkv2 ./vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'cdkv2' (v0) for provider:
    box: Unpacking necessary files from: file:///home/mschreie/vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
==> box: Successfully added box 'cdkv2' (v0) for 'libvirt'!
[mschreie@mschreie RPMs&tgz]$
[mschreie@mschreie RPMs&tgz]$ cd cdk/components/rhel/rhel-ose/
[mschreie@mschreie rhel-ose]$ export VM_MEMORY=8192
[mschreie@mschreie rhel-ose]$ vagrant up
[mschreie@mschreie rhel-ose]$ eval "$(vagrant service-manager env docker)"
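At this point the box should be running and the host docker CLI should be pointed at the daemon inside the VM. A quick check (a sketch, assuming the docker client is installed on the host):

# the machine should be reported as "running"
vagrant status
# the eval above should have exported DOCKER_HOST (plus TLS settings) pointing at the CDK VM
echo $DOCKER_HOST
# if so, the docker CLI on the host talks to the daemon inside the box
docker version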
I did not find a Fedora package containing oc and therefore downloaded the OpenShift 3.2 client (not 3.1 as linked on the documentation page) here:
https://access.redhat.com/downloads/content/290/ver=3.2/rhel---7/3.2.1.4/x86_64/product-software
[mschreie@mschreie RPMs&tgz]$ tar -xvf oc-3.2.1.4-linux.tar.gz mnt/redhat/staging-cds/ose-clients-3.2.1.4/usr/share/atomic-openshift/linux/oc
[mschreie@mschreie RPMs&tgz]$ ln -s `pwd`/mnt/redhat/staging-cds/ose-clients-3.2.1.4/usr/share/atomic-openshift/linux/oc ~/bin/oc
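Assuming ~/bin is on the PATH (which it usually is on Fedora once the directory exists), a quick check that the client is picked up:

# the symlinked binary should be found and report its client version
which oc
oc version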
Logged in via browser:
Open the OpenShift console: https://10.1.2.2:8443/console/
(Accept the certificate and proceed)
Use openshift-dev/devel as your credentials in the CDK
or log in via the CLI:
[mschreie@mschreie RPMs&tgz]$ oc login 10.1.2.2:8443 -u openshift-dev -p devel
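To confirm that the CLI login worked, two small checks (just a sanity sketch):

# show the logged-in user and the currently selected project
oc whoami
oc project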
Installing the helloworld-msa demo
I’m installing the tools needed to prepare the demo and using Andy Neeb’s scripts to speed things up.
[mschreie@mschreie rhel-ose]$ sudo dnf install maven npm
[mschreie@mschreie frontend]$ sudo npm install -g bower
[mschreie@mschreie rhel-ose]$ wget https://github.com/andyneeb/msa-demo/raw/master/create-msa-demo.sh
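Before running the demo script it is worth checking that the build tools really ended up on the PATH. A minimal sketch (assuming everything installed into the default locations):

# quick sanity check that the build tools are available
mvn -version
node --version
npm --version
bower --version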
I added a couple of lines at the top to capture the script’s output in a file. In case of any failure I wanted the script to stop, and I also wanted to know where it stopped. This approach is not elegant, but it works…
[mschreie@mschreie rhel-ose]$ cat create-msa-demo_msi.sh
#!/bin/bash
# catch output in a file
exec >> ./create-msa-demo.log
exec 2>&1
set +x

# Cleanup
# rm aloha/ api-gateway/ bonjour/ frontend/ hola/ ola/ -rf

# Login and create project
oc login 10.1.2.2:8443 -u openshift-dev -p devel || exit 1
oc new-project helloworld-msa || exit 2

# Deploy hola (JAX-RS/Wildfly Swarm) microservice
## git clone https://github.com/redhat-helloworld-msa/hola || exit 3
cd hola/
git pull || exit 3
oc new-build --binary --name=hola -l app=hola || exit 4
mvn package || exit 5
oc start-build hola --from-dir=. --follow || exit 6
oc new-app hola -l app=hola,hystrix.enabled=true || exit 7
oc expose service hola || exit 8
oc set probe dc/hola --readiness --get-url=http://:8080/api/health || exit 9
cd ..

# Deploy aloha (Vert.x) microservice
## git clone https://github.com/redhat-helloworld-msa/aloha || exit 10
cd aloha/
git pull || exit 10
oc new-build --binary --name=aloha -l app=aloha || exit 11
mvn package || exit 12
oc start-build aloha --from-dir=. --follow || exit 12
oc new-app aloha -l app=aloha,hystrix.enabled=true || exit 13
oc expose service aloha || exit 14
oc patch dc/aloha -p '{"spec":{"template":{"spec":{"containers":[{"name":"aloha","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}' || exit 15
oc set probe dc/aloha --readiness --get-url=http://:8080/api/health || exit 16
cd ..

# Deploy ola (Spring Boot) microservice
## git clone https://github.com/redhat-helloworld-msa/ola || exit 17
cd ola/
git pull || exit 17
oc new-build --binary --name=ola -l app=ola || exit 18
mvn package || exit 19
oc start-build ola --from-dir=. --follow || exit 20
oc new-app ola -l app=ola,hystrix.enabled=true || exit 21
oc expose service ola || exit 22
oc patch dc/ola -p '{"spec":{"template":{"spec":{"containers":[{"name":"ola","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}' || exit 23
oc set probe dc/ola --readiness --get-url=http://:8080/api/health || exit 24
cd ..

# Deploy bonjour (NodeJS) microservice
## git clone https://github.com/redhat-helloworld-msa/bonjour || exit 25
cd bonjour/
git pull || exit 25
oc new-build --binary --name=bonjour -l app=bonjour || exit 26
npm install || exit 27
oc start-build bonjour --from-dir=. --follow || exit 28
oc new-app bonjour -l app=bonjour || exit 29
oc expose service bonjour || exit 30
oc set probe dc/bonjour --readiness --get-url=http://:8080/api/health || exit 31
cd ..

# Deploy api-gateway (Spring Boot)
## git clone https://github.com/redhat-helloworld-msa/api-gateway || exit 32
cd api-gateway/
git pull || exit 32
oc new-build --binary --name=api-gateway -l app=api-gateway || exit 33
mvn package || exit 34
oc start-build api-gateway --from-dir=. --follow || exit 35
oc new-app api-gateway -l app=api-gateway,hystrix.enabled=true || exit 36
oc expose service api-gateway || exit 37
oc patch dc/api-gateway -p '{"spec":{"template":{"spec":{"containers":[{"name":"api-gateway","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}' || exit 38
oc set probe dc/api-gateway --readiness --get-url=http://:8080/health || exit 39
cd ..

# Deploy Kubeflix
oc create -f http://central.maven.org/maven2/io/fabric8/kubeflix/packages/kubeflix/1.0.17/kubeflix-1.0.17-kubernetes.yml || exit 40
oc new-app kubeflix || exit 41
oc expose service hystrix-dashboard || exit 42
oc policy add-role-to-user admin system:serviceaccount:helloworld-msa:turbine || exit 43

# Deploy Kubernetes ZipKin
oc create -f http://repo1.maven.org/maven2/io/fabric8/zipkin/zipkin-starter-minimal/0.0.8/zipkin-starter-minimal-0.0.8-kubernetes.yml || exit 44
oc expose service zipkin-query || exit 45

# Deploy frontend (NodeJS/HTML5/JS)
## git clone https://github.com/redhat-helloworld-msa/frontend || exit 46
cd frontend/
git pull || exit 46
oc new-build --binary --name=frontend -l app=frontend || exit 47
npm install || exit 48
oc start-build frontend --from-dir=. --follow || exit 49
oc new-app frontend -l app=frontend || exit 50
oc expose service frontend || exit 51
cd ..

# Deploy Jenkins
oc login -u admin -p admin || exit 52
oc project openshift || exit 53
oc create -f https://raw.githubusercontent.com/redhat-helloworld-msa/jenkins/master/custom-jenkins.build.yaml || exit 54
oc start-build custom-jenkins-build --follow || exit 55
oc login -u openshift-dev -p devel || exit 56
oc new-project ci || exit 57
oc new-app -p MEMORY_LIMIT=1024Mi https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/jenkins-ephemeral-template.json || exit 58
oc project helloworld-msa || exit 59
And then just run the script:
[mschreie@mschreie rhel-ose]$ bash -x create-msa-demo_msi.sh
The script takes a while. Please check the return code directly after the script finishes:
[mschreie@mschreie rhel-ose]$ echo $?
0
Seeing 0 is very good. Any other number tells you which "exit" command triggered the stop and therefore which command went wrong.
Additionally, it is wise to search the output for anything that went wrong:
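If the script exited with, say, code 27 (a purely hypothetical example), a quick way to map that back to the failing command is to look up the matching exit statement in the script:

# find the command guarded by "exit 27"
grep -n 'exit 27' create-msa-demo_msi.sh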
[mschreie@mschreie rhel-ose]$ egrep -i "err|warn|not found" ./create-msa-demo.log
Testing the setup:
Access the individual microservice endpoints:
- http://hola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/hola
- http://aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/aloha
- http://ola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/ola
- http://bonjour-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/bonjour
- http://api-gateway-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api
- http://hystrix-dashboard-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/
- http://zipkin-query-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/
and the frontend itself:
- http://frontend-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/
- https://jenkins-ci.rhel-cdk.10.1.2.2.xip.io/
To present the demo I also use some other scripts from Andy Neeb, which need to be downloaded:
[mschreie@mschreie rhel-ose]$ wget https://raw.githubusercontent.com/andyneeb/msa-demo/master/break-production.sh
[mschreie@mschreie rhel-ose]$ wget https://raw.githubusercontent.com/andyneeb/msa-demo/master/trigger-jenkins.sh
Using the environment / demoing
Starting and Stopping the environment
Stopping the demo
[mschreie@mschreie rhel-ose]$ vagrant halt
Maybe it is better to stop the box via "init 0" from within the box. At least some troubles during restart vanished when doing so.
[mschreie@mschreie rhel-ose]$ vagrant ssh
Last login: Tue Sep 20 09:06:12 2016 from 192.168.121.1
[vagrant@rhel-cdk ~]$ sudo -i
[root@rhel-cdk ~]# init 0
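The same shutdown can probably be triggered from the host in one step; a one-liner sketch using vagrant ssh -c (I have not verified this shortcut in this setup):

# shut the box down from the host, same effect as the interactive session above
vagrant ssh -c "sudo init 0"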
Starting the demo
[mschreie@mschreie ~]$ cd cdk/components/rhel/rhel-ose/
[mschreie@mschreie rhel-ose]$ export VM_MEMORY=8192
[mschreie@mschreie rhel-ose]$ vagrant up
[mschreie@mschreie rhel-ose]$ eval "$(vagrant service-manager env docker)"
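Since these four steps are always the same, they could be wrapped in a tiny helper script. This is just a sketch of my own (the name start-demo.sh and the path are my local choices, not part of the CDK); it should be sourced rather than executed so that the Docker environment variables survive in the current shell:

#!/bin/bash
# start-demo.sh - convenience wrapper around the start sequence above; run it with: . ./start-demo.sh
cd ~/cdk/components/rhel/rhel-ose/ || return 1
export VM_MEMORY=8192
vagrant up || return 2
# export DOCKER_HOST & co. into the calling shell
eval "$(vagrant service-manager env docker)"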
Demoing the CI/CD Pipeline
Look at what you have
The application frontend http://frontend-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/ should show a nice web page with four different backends saying hello in different languages.
Go to the OpenShift web UI https://10.1.2.2:8443/login and log in with openshift-dev / devel (admin / admin is not needed here) to get an overview of your projects. Navigate to the project helloworld-msa to find information about all pods in that project.
Scale Out via console
[mschreie@mschreie rhel-ose]$ oc login 10.1.2.2:8443
Authentication required for https://10.1.2.2:8443 (openshift)
Username: openshift-dev
Password:
Login successful.
You have access to the following projects and can switch between them with ‘oc project <projectname>’:
* ci
* helloworld-msa (current)
* helloworld-msa-dev
* helloworld-msa-qa
* sample-project
Using project “helloworld-msa”.
First find the right replication controller for your aloha service:
[mschreie@mschreie rhel-ose]$ oc get rc
NAME            DESIRED   CURRENT   AGE
....
aloha-7         0         0         54d
aloha-8         1         1         41m
api-gateway-1   0         0         63d
....
then scale it out to 3 pods (and watch the scaling in the OpenShift web frontend):
[mschreie@mschreie rhel-ose]$ oc scale --replicas=3 rc aloha-8
replicationcontroller "aloha-8" scaled
[mschreie@mschreie rhel-ose]$
Looking at the web frontend of your application you will see that the content is provided by different pods. This can easily be seen at http://aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/aloha
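The load balancing across the three pods can also be observed from the command line; a small loop (a sketch, assuming curl is installed on the host) should show varying pod hostnames in the responses:

# call the aloha endpoint a few times; the hostname in the reply should vary across the 3 pods
for i in $(seq 1 6); do
  curl -s http://aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/aloha
  echo
done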
Scale Back via Webfrontend
You can scale back the aloha service to one pod using the web UI.
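If you prefer the command line, the same scale-back works with the command used for scaling out, just with a single replica (replace aloha-8 with whatever your current replication controller is called):

# scale the aloha replication controller back to one pod
oc scale --replicas=1 rc aloha-8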
Prepare an error
Prepare an error in production so we have something to fix:
[mschreie@mschreie ~]$ cd cdk/components/rhel/rhel-ose/
[mschreie@mschreie rhel-ose]$ bash -x break-production.sh
Note: this error was injected directly into production. You will find other builds with different behavior in dev and qa.
You might want to check:
- http://aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/aloha or
- http://frontend-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/
You should see the output "aloca" instead of "aloha".
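This can also be checked quickly from the shell (a minimal probe, assuming curl is available):

# after break-production.sh the response should contain "Aloca" instead of "Aloha"
curl -s http://aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/aloha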
Demo the CI/CD Pipeline
Now we correct the code again:
[mschreie@mschreie rhel-ose]$ sed -i 's/return String.format("Aloca mai %s", hostname);/return String.format("Aloha mai %s", hostname);/g' aloha/src/main/java/com/redhat/developers/msa/aloha/AlohaVerticle.java
Then trigger the build pipeline through Jenkins:
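Before triggering the pipeline it is worth verifying that the correction really landed in the source file (just a sanity check on my side, assuming the string looks as sketched above):

# the file should contain "Aloha mai" again and no "Aloca" any more
grep -n 'Aloha mai' aloha/src/main/java/com/redhat/developers/msa/aloha/AlohaVerticle.java
grep -n 'Aloca' aloha/src/main/java/com/redhat/developers/msa/aloha/AlohaVerticle.java || echo "no Aloca left"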
[mschreie@mschreie rhel-ose]$ bash trigger-jenkins.sh
Please look at:
https://jenkins-ci.rhel-cdk.10.1.2.2.xip.io/job/Aloha%20Microservices/
Login with: admin / password
You will see that the build chain stops with "wait for approval".
Before continuing, please also check the OpenShift WebUI:
https://10.1.2.2:8443/console/
Login with: openshift-dev / devel
Navigate to the project helloworld-msa.
For aloha-helloworld-msa… please note the image ID.
You can also click on the service and verify the output.
Do the same for helloworld-msa-dev and helloworld-msa-qa.
The image ID should be identical in dev and qa and different in prod.
After approval you might see how a new pod is fired up in prod and afterwards the old pod is torn down. Prod should now have the same image ID.
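The image IDs can also be compared from the CLI instead of clicking through the web UI. A sketch (the jsonpath output format and the app=aloha label are assumptions on my side, not something from Andy's scripts):

# print the aloha container image used in each project
for p in helloworld-msa helloworld-msa-dev helloworld-msa-qa; do
  echo -n "$p: "
  oc get pods -n "$p" -l app=aloha -o jsonpath='{.items[*].spec.containers[*].image}'
  echo
done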
Troubleshooting:
You might run into some issues; some of them are mentioned here with an adequate solution:
“can’t find header files for ruby” and/or
“zlib is missing” while installing vagrant-service-manager
[mschreie@mschreie ~]$ vagrant plugin install vagrant-service-manager
might throw the following errors:
/usr/bin/ruby -r ./siteconf20160714-27092-1aqlxn4.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h
or:
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
/usr/bin/ruby -r ./siteconf20160714-27504-ti4z51.rb extconf.rb
zlib is missing; necessary for building libxml2
You need to install additional rpms to get rid of these errors:
[mschreie@mschreie ~]$ sudo dnf install ruby-devel zlib-devel
"cannot load such file -- zip" while adding the vagrant box:
The following error message means that the plugin named "zip" cannot be loaded; installing the additional Vagrant plugin fixed this:
[mschreie@mschreie ~]$ vagrant box add --name cdkv2 ./vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
Vagrant failed to initialize at a very early stage:
The plugins failed to load properly. The error message given is shown below.
cannot load such file -- zip
I fixed that with the following commands:
[mschreie@mschreie RPMs&tgz]$ sudo dnf install rubygem-rubyzip
Last metadata expiration check: 3:02:11 ago on Thu Jul 14 14:24:17 2016.
[mschreie@mschreie RPMs&tgz]$ vagrant plugin install zip
Installing the 'zip' plugin. This can take a few minutes...
Installed the plugin 'zip (2.0.2)'!
Accessing the Docker daemon via the docker CLI:
I had some issues with docker:
[mschreie@mschreie ~]$ docker ps -a -q --no-trunc
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
and fixed them via:
[mschreie@mschreie ~]$ sudo usermod -aG docker mschreie
The docker command ran through after a re-login.
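If you do not want to log out and back in, starting a new shell with the group already applied may be enough (a quick workaround I did not verify in this setup):

# pick up the new docker group membership in a fresh shell without a full re-login
newgrp docker
docker ps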
vagrant box hangs:
I experienced my box hanging repeatedly. This led to the following situations:
oc commands returned
Unable to connect to the server: net/http: TLS handshake timeout
or returned
The connection to the server 10.1.2.2:8443 was refused - did you specify the right host or port?
Also, the OpenShift web UI showed pods to be unresponsive:
"This pod has been stuck in the pending state for more than five minutes."
During these hangs I could not run any command at the SSH prompt of the box either. While the box was responsive, I checked the memory. If everything is correct it should look like this:
[mschreie@mschreie rhel-ose]$ vagrant ssh
Last login: Tue Jul 19 13:15:27 2016 from 192.168.121.1
[vagrant@rhel-cdk ~]$ cat /proc/meminfo | grep MemTotal
MemTotal:       8011096 kB
If this does not show 8 GB, then you did not set the memory correctly: you need to define and export the VM_MEMORY variable before starting the Vagrant box.
vagrant up issues:
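To avoid forgetting the export, the variable can be made permanent in the shell profile (a sketch; adjust the file to your own shell setup):

# make VM_MEMORY available in every new shell so vagrant up always picks it up
echo 'export VM_MEMORY=8192' >> ~/.bashrc
source ~/.bashrc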
[mschreie@mschreie rhel-ose]$ export VM_MEMORY=8192
[mschreie@mschreie rhel-ose]$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
Name `rhel-ose_default` of domain about to create is already taken. Please try to run `vagrant up` command again.
I had quite some hassle fixing this, but I believe the following commands did the trick:
[mschreie@mschreie rhel-ose]$ vagrant destroy
==> default: Remove stale volume...
==> default: Domain is not created. Please run `vagrant up` first.
[mschreie@mschreie rhel-ose]$ vagrant box list
cdkv2 (libvirt, 0)
[mschreie@mschreie rhel-ose]$ vagrant box remove cdkv2
Removing box 'cdkv2' (v0) with provider 'libvirt'...
Vagrant-libvirt plugin removed box only from you LOCAL ~/.vagrant/boxes directory
From libvirt storage pool you have to delete image manually(virsh, virt-manager or by any other tool)
[mschreie@mschreie rhel-ose]$ find / -name .vagrant 2>/dev/null
....
[mschreie@mschreie rhel-ose]$ rm -rf .vagrant/
[mschreie@mschreie rhel-ose]$ sudo virsh list | grep rhel-ose_default
[mschreie@mschreie rhel-ose]$ sudo virsh managedsave-remove rhel-ose_default
Removed managedsave image for domain rhel-ose_default
[mschreie@mschreie rhel-ose]$ sudo virsh undefine rhel-ose_default
Domain rhel-ose_default has been undefined
[mschreie@mschreie rhel-ose]$ sudo rm /VirtualMachines/rhel-ose_default.img
[mschreie@mschreie rhel-ose]$ systemctl restart libvirtd
[mschreie@mschreie rhel-ose]$
And then finally:
[mschreie@mschreie rhel-ose]$ vagrant box add --name cdkv2 ~/vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'cdkv2' (v0) for provider:
    box: Unpacking necessary files from: file:///home/mschreie/vagrant/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-libvirt.box
==> box: Successfully added box 'cdkv2' (v0) for 'libvirt'!
[mschreie@mschreie rhel-ose]$ export VM_MEMORY=8192
[mschreie@mschreie rhel-ose]$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default: -- Name:              rhel-ose_default
==> default: -- Domain type:       kvm
==> default: -- Cpus:              2
==> default: -- Memory:            8192M
==> default: -- Management MAC:
==> default: -- Loader:
==> default: -- Base box:          cdkv2
==> default: -- Storage pool:      default
==> default: -- Image:             /var/lib/libvirt/images/rhel-ose_default.img (41G)
==> default: -- Volume Cache:      default
==> default: -- Kernel:
==> default: -- Initrd:
==> default: -- Graphics Type:     vnc
==> default: -- Graphics Port:     5900
==> default: -- Graphics IP:       127.0.0.1
==> default: -- Graphics Password: Not defined
==> default: -- Video Type:        cirrus
==> default: -- Video VRAM:        9216
==> default: -- Keymap:            en-us
==> default: -- TPM Path:
==> default: -- INPUT:             type=mouse, bus=ps2
==> default: -- Command line :
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Registering box with vagrant-registration...
    default: Would you like to register the system now (default: yes)? [y|n]n
==> default: Configuring and enabling network interfaces...
Copying TLS certificates to /home/mschreie/cdk/components/rhel/rhel-ose/.vagrant/machines/default/libvirt/docker
==> default: Rsyncing folder: /home/mschreie/cdk/components/rhel/rhel-ose/ => /vagrant
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: Created symlink from /etc/systemd/system/multi-user.target.wants/openshift.service to /usr/lib/systemd/system/openshift.service.
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: Successfully started and provisioned VM with 2 cores and 8192 MB of memory.
==> default: To modify the number of cores and/or available memory set the environment variables
==> default: VM_CPU respectively VM_MEMORY.
==> default: You can now access the OpenShift console on: https://10.1.2.2:8443/console
==> default: To use OpenShift CLI, run:
==> default: $ vagrant ssh
==> default: $ oc login 10.1.2.2:8443
==> default: Configured users are (<username>/<password>):
==> default: openshift-dev/devel
==> default: admin/admin
==> default: If you have the oc client library on your host, you can also login from your host.
[mschreie@mschreie rhel-ose]$
trouble installing “frontend”
Some of my microservices just did not work, even though I could not find any "Error" in the output of the script. For the troublesome services I ran the commands one by one by hand and found an error somewhere in the output of
[mschreie@mschreie frontend]$ npm install
......
> bower install
sh: bower: command not found
There is no "error" keyword to grep for; perhaps the "WARN" messages are related to this, or grepping for "not found" would have helped.
After the following additional install, things ran smoothly.
[mschreie@mschreie frontend]$ sudo npm install -g bower
Interesting links:
The Red Hat Container Deployment Kit – getting started guide:
https://access.redhat.com/documentation/en/red-hat-container-development-kit/2.1/getting-started-guide/
The main page from which I set up my demo:
https://htmlpreview.github.io/?https://github.com/redhat-helloworld-msa/helloworld-msa/blob/master/readme.html
Andy Neeb's scripts:
https://github.com/andyneeb/msa-demo
Conclusion
We managed to set up an OpenShift demo on our laptop using Vagrant. It demonstrates a possible solution for typical requirements around automated yet controlled deployment chains.