2017-03-03 15:07
Hi

2017-03-03 15:08
How can we have access to Kubernetes Config? Deployed Kubernetes in Amazon

greg
2017-03-03 15:12
You can edit the role attributes on k8s-config (it is in the upper corner of the deployment view). Changing values and committing the deployment will make the attributes take effect.

greg
2017-03-20 17:49
Hi All - I've updated to the latest kargo and left most of our defaults the same. The big change is that the default api server port has moved from 443 to 6443.

greg
2017-03-20 17:50
So you will need to add :6443 to your url to access the api server.
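In practice that means any existing URL needs the explicit port. A minimal sketch, using a placeholder address (203.0.113.10 is not a real admin IP):

```shell
# The default API server port moved from 443 to 6443, so append the
# explicit port to existing URLs (the address below is a placeholder)
API_SERVER="https://203.0.113.10"
echo "${API_SERVER}:6443/version"
# a smoke test against a live cluster would then be:
#   curl -k "${API_SERVER}:6443/version"
```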

2017-03-20 18:02
Hello! Can anyone please tell me how to add additional arguments to openstack cloud provider? I'm using v3 api, and I need to add os-user-domain-name and os-identity-api-version, but there are no such fields in provider interface. Thanks in advance!

greg
2017-03-20 18:06
hmm - checking

greg
2017-03-20 18:23
@alex3594 - My guess is that we don't currently handle that.

greg
2017-03-20 18:24
We would need to update the cloudwrap container with a version of the openstack command that supports v3 if it doesn't already, and then add those two fields to the passed through structure and put them on the openstack command.

greg
2017-03-20 18:24
Can you open an issue for this, please?

2017-03-20 18:25
@zehicle Sure, thanks!

2017-03-20 18:30
I created the issue, here is the link: https://github.com/digitalrebar/digitalrebar/issues/237 Thanks!

greg
2017-03-20 18:31
Thanks

zehicle
2017-03-20 18:33
Here's the parameter mapping for the openstack CLI in cloudwrap

zehicle
2017-03-20 18:33
openstack --os-username '#{endpoint['os-username']}' \
  --os-password '#{endpoint['os-password']}' \
  --os-project-name '#{endpoint['os-project-name']}' \
  --os-region-name '#{endpoint['os-region-name']}' \
  --os-auth-url '#{endpoint['os-auth-url']}' \
  #{cmd}

zehicle
2017-03-20 18:34
what fields do you require for your regular openstack CLI calls?

greg
2017-03-20 18:34
@zehicle - he opened a bug for it with the fields and all.

2017-03-20 18:34
--os-user-domain-name 'default' and --os-identity-api-version '3'
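For reference, a sketch of what the final CLI call would look like with those two flags appended to the mapping zehicle posted. All endpoint values here are placeholders, and `server list` stands in for the wrapped command:

```shell
# Build the openstack command line with the two extra v3 flags;
# every value here is a placeholder, not a real endpoint
EXTRA="--os-user-domain-name 'default' --os-identity-api-version '3'"
CMD="openstack --os-username 'demo' --os-password 'secret' \
--os-project-name 'demo' --os-region-name 'RegionOne' \
--os-auth-url 'http://keystone.example.com:5000/v3' ${EXTRA} server list"
echo "$CMD"
```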

zehicle
2017-03-20 18:35
ah, thanks

zehicle
2017-03-21 13:12
I've got a patch out there for that --os issue

greg
2017-03-21 13:19
I'll attempt to pull it in and rebuild the container.

greg
2017-03-21 14:49
alex3594 - Containers and code are updated.

greg
2017-03-21 14:49
You can do a git pull in the digitalrebar directory.

greg
2017-03-21 14:50
You will need to do the following commands from the deploy/compose directory

greg
2017-03-21 14:50
docker-compose restart rebar-api

greg
2017-03-21 14:50
docker-compose stop cloudwrap ; docker-compose rm -f cloudwrap ; docker-compose up -d cloudwrap

greg
2017-03-21 14:50
or just start over.

2017-03-21 16:23
Thank you! Will test it now!

zehicle
2017-03-21 16:59
You should watch the cloudwrap log to make sure parameters are right

2017-03-21 20:04
hey guys

2017-03-21 20:05
I am trying to setup a digitalrebar admin server

2017-03-21 20:05
and I keep getting an error in this ansible task : TASK [Pull compose images [SLOW]]

2017-03-21 20:06
the output basically says that the images are already up-to-date

2017-03-21 20:07
https://gist.github.com/dadicool/e1ee0ab50bcec157dedf0ac4e3f18a57

2017-03-21 20:07
I tried rerunning multiple times as suggested here : http://digital-rebar.readthedocs.io/en/latest/deployment/troubleshooting/specific-environment/Run-In-System.html

2017-03-21 20:07
but I keep hitting the same error ...

2017-03-21 20:08
any suggestions would be welcome!

greg
2017-03-21 20:13
it is me.

greg
2017-03-21 20:13
I built a bad cloudwrap again.

2017-03-21 20:14
:)

2017-03-21 20:14
any reason why the script tries to pull with DR_TAG=master

2017-03-21 20:14
that feels ... adventurous

greg
2017-03-21 20:14
Because your git tree is master. We only currently have a functioning master. Our bad probably.

2017-03-21 20:15
is there anything I could do at this moment to get me a functional setup?

greg
2017-03-21 20:16
no - I need to repush an image. I'd love for people to yell at docker for changing how docker works on a mac, but ...

2017-03-21 20:16
got it

2017-03-21 20:16
is this a matter of minutes, hours, days?

greg
2017-03-21 20:16
And I need time to generate a good branch strategy.

greg
2017-03-21 20:16
hopefully, minutes to 1 hour.

2017-03-21 20:16
got it

2017-03-21 20:17
drop a message here when your push is good to go.

greg
2017-03-21 20:17
We have the tools, but haven't spent the time to push cut releases.

2017-03-21 20:17
and thanks!

2017-03-21 20:17
is this something we can help with in the open or is this behind the firewall?

greg
2017-03-21 20:18
All open. In theory, you could build your own image. It is in the containers tree of the source code.

greg
2017-03-21 20:18
./rebuild-containers --pull --force cloudwrap

2017-03-21 20:19
in which folder is that script?

2017-03-21 20:19
I don't see it in the checkout

greg
2017-03-21 20:19
containers

greg
2017-03-21 20:19
digitalrebar/containers

2017-03-21 20:20
I need a working GO environment for rebuilding the containers? fun times :)

greg
2017-03-21 20:20
sorry.

greg
2017-03-21 20:20
you need a sws command.

2017-03-21 20:21
since I don't have my GO env setup on this machine, I will wait for your signal that there is a new cloudwrap image

2017-03-21 20:22
and for reference, I am on ubuntu 16.04LTS with latest docker

2017-03-21 20:22
ubuntu@ip-172-30-0-51:~/digitalrebar/containers$ docker --version
Docker version 17.03.0-ce, build 60ccb22

greg
2017-03-21 20:22
ooo - shiny

2017-03-21 20:22
thanks a lot for the help - hoping for another drop later tonight (in CET here)

greg
2017-03-21 20:22
docker 1.12.5 is what I use on mac.

greg
2017-03-21 20:55
@dadicool - you should be good now.

2017-03-21 21:40
I can confirm the new image looks good - hoping to get my hands on my first digital.rebar environment shortly :)

2017-03-21 21:41
thanks @galthaus

greg
2017-03-21 21:41
awesome! Sorry about the break. We try to keep master runnable always.

greg
2017-03-21 21:42
At some point, it would be cool to hear about what you are doing with DR. publicly or privately.

2017-03-21 21:53
I am happy to share why digital.rebar caught my attention

2017-03-21 21:53
basically, we're a startup operating in the healthcare space in europe

2017-03-21 21:54
and there are pretty specific regulations that prevent us from hosting on public clouds

2017-03-21 21:54
we want to benefit from the latest innovation in infrastructure automation (looking really closely at K8S)

2017-03-21 21:55
but given that the approved hosting providers for healthcare data are 2-3x more expensive than your standard public cloud, we only want to host our production workloads at the "fancy" hosting provider, our dev, qa, etc would be on a public cloud

2017-03-21 21:56
therefore, we want to be able to deploy/manage K8S on public cloud and a "sea of VMs" using the same tools, especially for upgrades and rollouts

2017-03-21 21:57
this is where digital.rebar comes into the picture

2017-03-21 21:57
I really see it as the "spinnaker of infrastructure"

2017-03-21 21:57
hopefully, that's how you guys see it too:)

greg
2017-03-21 21:57
It should be able to do most of that, or it could be added.

greg
2017-03-21 21:58
big huge covering sail that drives everything.

greg
2017-03-21 21:58
nice - not one we've used before.

2017-03-21 21:59
now, my first question is how do I join existing nodes (say the VMs that the hosting provider has spun up for me) to DR?

2017-03-21 21:59
the docs are detailed on the baremetal case and the public cloud case

2017-03-21 22:01
I obviously already have an admin DR running

2017-03-21 22:01
on the same subnet

greg
2017-03-21 22:01
Initially, you will need a script for that. It mostly exists. Let me find it.

greg
2017-03-21 22:01
The second way is to create a provider for your cloud host. I assume they have some kind of API.

2017-03-21 22:02
I have no api available, it's one of those "managed vmware" environments

2017-03-21 22:02
so that script is probably the way to go

greg
2017-03-21 22:02
okay - we lose some manageability, but you may not need it.

2017-03-21 22:02
can you be more explicit?

greg
2017-03-21 22:03
well - we won't necessarily be able to reboot or configure BIOS and other stuff, but that is probably okay.

greg
2017-03-21 22:03
You want post os-install operation most likely.

greg
2017-03-21 22:03
so, the "join" script should be fine.

2017-03-21 22:03
indeed

greg
2017-03-21 22:04
you seem script-aware, so here we go.

greg
2017-03-21 22:04
There are two scripts.

greg
2017-03-21 22:06
in digitalrebar/deploy - add-from-ssh.sh - the script assumes you have some ssh path to the node in question. It will attempt to ssh into the provided IP and then run a couple of scripts to join the node to rebar so that rebar can ssh into it.

2017-03-21 22:06
let try that

greg
2017-03-21 22:06
it runs scripts/ssh-copy-id.sh to attempt to setup keys if your current running user can't access the node.

greg
2017-03-21 22:07
Then it runs scripts/join_rebar.sh from scripts/rebar_join.sh

greg
2017-03-21 22:07
BUT FIRST You need to make an edit that I haven't done yet. We don't use these often.

2017-03-21 22:07
what's the edit and can I submit it as a PR after I verify that it works? :-)

greg
2017-03-21 22:08
in all three scripts, find :3000 and remove it.
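That edit is easy to script with sed. Here is a self-contained demo of the substitution on a throwaway file (the real targets are the deploy scripts; exact paths may differ):

```shell
# Demo of stripping the stale ":3000" port; against the real tree it
# would be something like: sed -i.bak 's/:3000//g' <the three scripts>
printf 'export REBAR_ENDPOINT=${REBAR_ENDPOINT:-https://127.0.0.1:3000}\n' > /tmp/demo_3000.sh
sed -i.bak 's/:3000//g' /tmp/demo_3000.sh
cat /tmp/demo_3000.sh
# prints: export REBAR_ENDPOINT=${REBAR_ENDPOINT:-https://127.0.0.1}
```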

greg
2017-03-21 22:09
yep - that is annoying, but we missed those in the unified frontend change.

2017-03-21 22:09
ubuntu@ip-172-30-0-51:~/digitalrebar/deploy/scripts$ grep -r ":3000" *
wait_for_rebar.sh:export REBAR_ENDPOINT=${REBAR_ENDPOINT:-https://127.0.0.1:3000}

2017-03-21 22:09
do I need to change wait_for_rebar.sh?

greg
2017-03-21 22:10
checking. It won't hurt. :3000 isn't exposed directly, anymore

greg
2017-03-21 22:10
./add-from-ssh.sh --admin-ip <ADMINIP> <cidr IP of target node>

greg
2017-03-21 22:10
e.g. ./add-from-ssh.sh --admin-ip 192.168.124.11 192.168.124.100/24

2017-03-21 22:11
:3000 no more

greg
2017-03-21 22:11
cool

greg
2017-03-21 22:12
If that completes, you should be able to see the node in the UX.

2017-03-21 22:20
what is the argument to that script : init-ident

greg
2017-03-21 22:22
the user to log in as the first time.

greg
2017-03-21 22:22
that your system has access to, I think.

greg
2017-03-21 22:23
if your initial allowed user is fred (and it has sudo access), then use fred.

greg
2017-03-21 22:23
for example

2017-03-21 22:23
understood - I am finding a couple of typos in the scripts, working through them :)

2017-03-21 22:25
this is the first ssh call in the add-from-ssh script

2017-03-21 22:25
ssh -oBatchMode=yes -o StrictHostKeyChecking=no root@$IP date

greg
2017-03-21 22:25
we haven't driven them in a while. Yeah - it is trying to see if your current user has ssh access as root to the IP in question.

2017-03-21 22:25
that clearly assumes that root is a user that is (1) usable for ssh

2017-03-21 22:26
let me adjust things to take the script params into account

greg
2017-03-21 22:26
hmm - yeah.

greg
2017-03-21 22:30
Yes - the first one is testing if the current running system/user combo can log in as root.

greg
2017-03-21 22:30
If not, then you need to run ssh-copy-id.sh to add root access. The ssh-copy-id.sh script takes the init-ident parameter to let you use a non-root user to setup root user access.

greg
2017-03-21 22:30
The rest of the script assumes root.

greg
2017-03-21 22:31
In the end, you need the following.

2017-03-21 22:31
I am pretty familiar with bash

2017-03-21 22:31
I will get through this and I will try to clean things up and submit it back

greg
2017-03-21 22:31
okay - then I'll quit splaining that.

greg
2017-03-21 22:31
In the end, the goal is that the target node has the following:

greg
2017-03-21 22:32
the ssh access keys, pulled via curl from the rebar attribute, in root's authorized_keys file.

greg
2017-03-21 22:32
and that a node has been created in DR, with the rebar-joined node role attached, and the node marked alive.

greg
2017-03-21 22:33
That last one is done by the rebar-join script (calls to rebar).

greg
2017-03-21 22:34
we used this script to add packet nodes that were already existing so it will need to be tailored to your environment.

2017-03-21 22:41
The ssh part is working-ish with some hacking; now on to the node registration part

2017-03-21 22:41
I am hitting this : curl: (22) The requested URL returned error: 502 Bad Gateway

2017-03-21 22:42
I am trying to figure out why the admin is responding like that when it's clearly running fine

greg
2017-03-21 22:42
yeah - that is an issue. Give me a second.

greg
2017-03-21 22:44
do you have rebar cli in your path?

greg
2017-03-21 22:44
rebar nodes list

2017-03-21 22:45
nope

2017-03-21 22:45
another script typo, it seems :

greg
2017-03-21 22:45
no

greg
2017-03-21 22:45
oh

2017-03-21 22:45
ssh root@$IP /root/join_rebar.sh $ADMIN_IP

2017-03-21 22:46
when join_rebar.sh expects a second arg

greg
2017-03-21 22:46
ok

2017-03-21 22:46
ADMIN_IP=$1
PASSED_IN_IP=$2

2017-03-21 22:46
Let me try to fix this first

2017-03-21 22:47
ok, that didn't help

2017-03-21 22:47
so what about the rebar cli

2017-03-21 22:47
I must have missed a prerequisite step in the docs

greg
2017-03-21 22:47
I was looking at the wrong curl. You are farther along than I thought

greg
2017-03-21 22:48
add nodes curl call

2017-03-21 22:48
I am trying to add some echo logging to figure out which curl call in join_rebar is blowing up

greg
2017-03-21 22:49
Okay the PASSED_IN_IP isn't needed unless you want to explicitly declare the IP to use. If these aren't multi-homed boxes, it shouldn't matter.

greg
2017-03-21 22:49
The other thing is to log into your system and run the script from there with -x set.

greg
2017-03-21 22:50
This is in dire need of updating.

2017-03-21 22:51
:)

2017-03-21 22:51
I logged into the system I am trying to join to rebar

2017-03-21 22:51
and ran join_rebar by hand

greg
2017-03-21 22:54
yeah - I'll probably have to update this.

2017-03-21 22:54
This is the curl that's returning a 502 bad gateway

2017-03-21 22:54
exists=$(curl -k -s -o /dev/null -w "%{http_code}" --digest -u "$REBAR_KEY" \
  -X GET "$REBAR_WEB/api/v2/nodes/$HOSTNAME")

greg
2017-03-21 22:55
hmm - what are REBAR_KEY and REBAR_WEB?

2017-03-21 22:56
export REBAR_KEY="$REBAR_USER:$REBAR_PASSWORD"
export REBAR_WEB="https://$ADMIN_IP"

greg
2017-03-21 22:56
Have you logged in to the UX?

2017-03-21 22:57
yes

greg
2017-03-21 22:57
good.

greg
2017-03-21 22:57
just making sure

greg
2017-03-21 22:57
from the node do this:

greg
2017-03-21 22:58
curl http://<admin_ip>:8092/files/rebar -o rebar

greg
2017-03-21 22:58
chmod +x rebar

greg
2017-03-21 22:58
export REBAR_ENDPOINT=https://<ADMIN_IP>

greg
2017-03-21 22:58
export REBAR_KEY=rebar:rebar1

greg
2017-03-21 22:58
./rebar nodes list

2017-03-21 22:59
ubuntu@k8s-node1:~$ curl -v http://172.30.0.51:8092/files/rebar
*   Trying 172.30.0.51...
* Connected to 172.30.0.51 (172.30.0.51) port 8092 (#0)
> GET /files/rebar HTTP/1.1
> Host: 172.30.0.51:8092
> User-Agent: curl/7.47.0
> Accept: */*
>
* Connection #0 to host 172.30.0.51 left intact

2017-03-21 22:59
I am getting nothing back

greg
2017-03-21 23:00
is the admin node multi-homed?

greg
2017-03-21 23:00
hmm - that shouldn't matter.

greg
2017-03-21 23:00
at least not for this.

2017-03-21 23:00
it's not

2017-03-21 23:00
and that's the IP that I access the UI at

greg
2017-03-21 23:01
on the admin node as the user you run the install script from, ls ~/.cache/digitalrebar/tftpboot/files

2017-03-21 23:01
looking at the various docker containers of the admin, I don't see any port 8092 that is open ...

greg
2017-03-21 23:01
oh - sigh.

greg
2017-03-21 23:01
do you have a provisioner?

2017-03-21 23:01
not yet

2017-03-21 23:02
how does that work for nodes that are already existing?

greg
2017-03-21 23:02
thinking about it.

greg
2017-03-21 23:02
okay - should be okay, but you don't have a quick and handy rebar cli to get.

greg
2017-03-21 23:02
Sooo - from the admin node do this:

greg
2017-03-21 23:03
docker cp compose_rebar_api_1:/usr/local/bin/rebar rebar

greg
2017-03-21 23:03
that will get you one to play with

2017-03-21 23:03
got it

2017-03-21 23:03
now I want to push this to the node I want to join, right?

greg
2017-03-21 23:03
sure. Let's go for it.

2017-03-21 23:04
got it

greg
2017-03-21 23:04
that is handy to keep around on the admin node because you can tweak things and get at some things programmatically easier with it.

greg
2017-03-21 23:05
```
chmod +x rebar
export REBAR_ENDPOINT=https://<ADMIN_IP>
export REBAR_KEY=rebar:rebar1
./rebar nodes list
```

2017-03-21 23:05
ubuntu@k8s-node1:~$ ./rebar nodes list
[
  {
    "admin": false,
    "alive": true,
    "allocated": false,
    "arch": "x86_64",
    "available": true,
    "bootenv": "local",
    "created_at": "2017-03-21T21:43:48.650Z",
    "deployment_id": 1,
    "description": "",
    "icon": "check_circle",
    "id": 1,
    "name": "system-phantom.internal.local",
    "node-control-address": null,
    "os_family": "linux",
    "profiles": [],
    "provider_id": 1,
    "quirks": [],
    "state": 0,
    "system": true,
    "target_role_id": null,
    "tenant_id": 1,
    "updated_at": "2017-03-21T21:43:48.650Z",
    "uuid": "557d47bc-4abd-44a9-83e8-58d6176ffe1a",
    "variant": "phantom"
  }
]

2017-03-21 23:05
already did

greg
2017-03-21 23:05
yeah - something

greg
2017-03-21 23:05
so the node can drive it.

greg
2017-03-21 23:05
The curl commands are wonky.

2017-03-21 23:06
is the plan to replace the curls with the rebar cli ?

greg
2017-03-21 23:06
The curl worked for me on my system here.

greg
2017-03-21 23:06
So - I was wanting to make sure that something worked.

greg
2017-03-21 23:07
If you could add set -x to the top of the script and dump the curl command that is executing for me to try, that would be helpful.

2017-03-21 23:07
right away

2017-03-21 23:08
+ curl -k -f -g --digest -u rebar:rebar1 -X POST -d name=k8s-node1.xxxx \
    -d ip=172.30.yyy.zzz/24 -d provider=metal -d variant=metal \
    -d os_family=linux -d arch=x86_64 https://172.30.0.51/api/v2/nodes/

greg
2017-03-21 23:09
okay - here you go- change the curl to this:

2017-03-21 23:10
I think the curl I mentioned further above was the wrong one

2017-03-21 23:10
This is the one in the script that is blowing up

2017-03-21 23:10
curl -k -f -g --digest -u "$REBAR_KEY" -X POST \
  -d "name=$HOSTNAME" \
  -d "ip=$IP" \
  -d "provider=metal" \
  -d "variant=metal" \
  -d "os_family=linux" \
  -d "arch=$(uname -m)" \
  "$REBAR_WEB/api/v2/nodes/" || {

greg
2017-03-21 23:12
```
curl -k -f -g --digest -u "$REBAR_KEY" -X POST \
  -d "{\"name\": \"$HOSTNAME\", \"ip\": \"$IP\", \"provider\": \"metal\", \"variant\": \"metal\", \"os_family\": \"linux\", \"arch\": \"$(uname -m)\"}" \
  "$REBAR_WEB/api/v2/nodes/"
```

2017-03-21 23:13
There is a missing } somewhere

greg
2017-03-21 23:13
```
curl -k -f -g --digest -u "$REBAR_KEY" -X POST \
  -d "{\"name\": \"$HOSTNAME\", \"ip\": \"$IP\", \"provider\": \"metal\", \"variant\": \"metal\", \"os_family\": \"linux\", \"arch\": \"$(uname -m)\"}" \
  "$REBAR_WEB/api/v2/nodes/"
```

2017-03-21 23:14
trying

2017-03-21 23:17
youhou

2017-03-21 23:17
I got my first node in :)

greg
2017-03-21 23:17
okay

greg
2017-03-21 23:17
did the other curls fail or work?

2017-03-21 23:18
I just run that curl on the cli and the node appeared

2017-03-21 23:18
let me fix the script and try to run it completely

greg
2017-03-21 23:19
it won't, but it is better. :slightly_smiling_face:

2017-03-21 23:24
I have another one that's blowing up :

2017-03-21 23:24
curl -k -f -g --digest -u rebar:rebar1 -X POST \
  -d '{"node":"k8s-node1.xxx.zzz", "role":"rebar-joined-node"}' \
  https://172.30.0.51/api/v2/node_roles/

2017-03-21 23:28
ok, it's all good

2017-03-21 23:28
a couple of missing things

2017-03-21 23:28
you really need to have: -H "Content-Type: application/json"
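Putting those fixes together, the node-create call ends up roughly like this. The body is built separately so it is easy to inspect; the hostname and IP below are placeholders:

```shell
# Build the JSON body for the node-create POST; the explicit
# Content-Type header is what turns the 502 into a success
# (hostname and IP are placeholders)
HOSTNAME_FQDN="k8s-node1.example.com"
IP="172.30.0.100/24"
BODY=$(printf '{"name":"%s","ip":"%s","provider":"metal","variant":"metal","os_family":"linux","arch":"%s"}' \
  "$HOSTNAME_FQDN" "$IP" "$(uname -m)")
echo "$BODY"
# the call itself would then be:
#   curl -k -f --digest -u "$REBAR_KEY" -X POST \
#     -H "Content-Type: application/json" \
#     -d "$BODY" "$REBAR_WEB/api/v2/nodes/"
```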

greg
2017-03-21 23:28
yes

greg
2017-03-21 23:28
Did it make it through?

2017-03-21 23:28
that's what was leading to the 502

2017-03-21 23:28
yeap, full script made it through

2017-03-21 23:29
need to try running it now from the add-from-ssh.sh

greg
2017-03-21 23:29
so your node should have start starting to run on it.

greg
2017-03-21 23:29
stuff that is.

greg
2017-03-21 23:29
I think.

greg
2017-03-21 23:29
You should see it in the system deployment with some node roles

2017-03-21 23:29
I see

2017-03-21 23:30
side note : if rebar expects to ssh into nodes under management as root, that's a real problem

greg
2017-03-21 23:31
it currently does.

greg
2017-03-21 23:31
We can talk about that at some point.

2017-03-21 23:31
hmm

2017-03-21 23:31
ok

greg
2017-03-21 23:33
I think it will be a "small" change for your environment, but has consequences that I think I need to work through. We should change away from root access. The problem is that it requires sudo because some of the actions need root abilities. That isn't always done consistently.

greg
2017-03-21 23:33
for example, do you have unrestricted sudo access?

greg
2017-03-21 23:33
or is it command level enablement?

2017-03-21 23:34
for the user I would use for rebar, it would be unrestricted

greg
2017-03-21 23:34
okay - so, we have this thing called an SSH hammer. It is the underlying component that provides SSH access to the rest of the system.

greg
2017-03-21 23:34
It assumes that the root user has the keys in place.

2017-03-21 23:35
I understand that that is a sensible assumption for bare metal environments

2017-03-21 23:35
I am just a little surprised that the same practice carried over to the cloud provisioners for example

greg
2017-03-21 23:35
it is and it isn't. ubuntu makes us do extra gyrations

2017-03-21 23:35
(at least, that's my conclusion)

greg
2017-03-21 23:35
common shared code

greg
2017-03-21 23:36
we try to treat everything the same after certain points, from a normalization perspective.

2017-03-21 23:36
classic cross-OS scripting ...

greg
2017-03-21 23:36
That way the same code runs in both places. Part of the value. Normalize hardware so the cloud instance looks like the hardware instance and vice versa.

greg
2017-03-21 23:37
Yeah - but in this case it is more at platform boundaries as well.

2017-03-21 23:37
is there a way on gitter to send a screen capture?

greg
2017-03-21 23:37
not sure.

greg
2017-03-21 23:37
email:greg@rackn.com if that is easier

greg
2017-03-21 23:38
For a quick check, can you do this for me from the admin node:

greg
2017-03-21 23:38
rebar hammers list

2017-03-21 23:39
ubuntu@ip-172-30-0-51:~/digitalrebar/deploy$ ./rebar hammers list
[
  {
    "actions": {
      "power": ["reboot"],
      "run": ["run"],
      "xfer": ["copy_from", "copy_to"]
    },
    "available_hammer_id": 4,
    "endpoint": null,
    "id": 1,
    "name": "ssh",
    "node_id": 2,
    "priority": 0,
    "username": "root",
    "uuid": "c117a7d0-2b89-44bd-8333-82a5b39801ff"
  }
]

2017-03-21 23:40
I think the "node prep" isn't starting because the admin doesn't know how to ssh to node

greg
2017-03-21 23:40
that is the secure shell hammer (you don't have an IPMI because you aren't hardware)

greg
2017-03-21 23:40
okay - probably.

greg
2017-03-21 23:40
I suspect that we are missing a parameter on node create.

greg
2017-03-21 23:41
By the way, to start over. just delete the NODE in the UI and rerun the join script.

greg
2017-03-21 23:41
let me check real quick

greg
2017-03-21 23:41
The username in hammer could be changed to something else, but a core change is also needed.

greg
2017-03-21 23:41
no container rebuilds though

2017-03-21 23:42
what does this mean : "but a core change is also needed."

greg
2017-03-21 23:42
the rails app needs two changes. We didn't use username correctly in all the places and you need a way to set something other than root.

2017-03-21 23:43
yeap - understood

2017-03-21 23:44
ah

2017-03-21 23:44
I took a step back and reran add-from-ssh after dropping in the updated join_rebar.sh script, and things seem to be moving somehow

greg
2017-03-21 23:44
okay

2017-03-21 23:45
all components are green in the UI

2017-03-21 23:45
hmm

2017-03-21 23:45
It's getting late here (1am almost)

2017-03-21 23:46
I am gonna fork the repo, push my fixes to my fork, and try to clean up some things that I hardcoded to ubuntu

2017-03-21 23:46
and try with a second node out of the box tomorrow as I find time.

greg
2017-03-21 23:46
okay - I'm missing family dinner time here. :slightly_smiling_face: Thanks for playing along and working through.

greg
2017-03-21 23:46
That would be great. I'll be around tomorrow.

2017-03-21 23:46
sounds good

2017-03-21 23:46
thanks for all the help

2017-03-21 23:47
I understand we're a special case but I do believe we're not alone though

2017-03-21 23:47
to be continued

2017-03-21 23:47
cheers

greg
2017-03-21 23:47
no prob. This is good for us too. You are and you aren't. Later.

2017-03-23 16:30
hello, I'm new to DigitalRebar and will try some things out before I can make a recommendation for a customer project at my company. One point I have found is that full internet access is needed for parts of the installation. Can you provide a list of packages used/needed by DigitalRebar, so we can be sure all packages are available in a customer repository? In a productive customer environment we have no chance of direct internet access; all packages are stored on a private Satellite server, for example. Thanks!

greg
2017-03-23 16:43
@theta-my - is this for the initial install of digital rebar or post-install of Digital rebar and you are trying to provision the managed nodes?

2017-03-23 16:47
Hi, this is at first for the initial installation steps via quickstart.sh.

greg
2017-03-23 16:47
okay - so installing digitalrebar.

2017-03-23 16:47
yes

greg
2017-03-23 16:47
It is not package-based. It is container-based. So, you will need docker packages, ansible, and python packages.

2017-03-23 16:48
Another approach could be for me to find the missing packages in a log...

greg
2017-03-23 16:48
Then you will need a way for docker to get images from docker hub and then you will need to stage some boot discovery images in a cache directory.

greg
2017-03-23 16:49
There are a lot of steps that we haven't documented outside of code I'm afraid.

2017-03-23 16:49
I know, and that is my problem ;)

greg
2017-03-23 16:49
Our assumption is that the admin node will have at least proxy-based internet access. We are trying to change some of that in the coming months, but we aren't there yet.

greg
2017-03-23 16:50
Once installed, the admin node can be configured to point at internal repos and things like that to address the internet gap.

2017-03-23 16:51
I can understand your position, but in a proper production environment no one will give you a direct link to the internet to download stuff without a security check.

2017-03-23 16:52
And the best way to address that need is to have our own repository. That's why I need the list.

greg
2017-03-23 16:52
well - we've found that many differentiate tool install from tool use, but I understand your point.

2017-03-23 16:53
A proper log, where I could find all the needed packages from a first (dry) run, could be a good solution to reduce the effort on your side...

greg
2017-03-23 16:54
Tool use can use other repos. Tool install currently can't. You can get close by, from an internet-accessible location, docker pulling all of our images, docker exporting them, and then docker importing them on the admin node. That will get you most of the way to running digitalrebar without internet access, but there are all sorts of little gotchas. It just hasn't come up as a high-priority item. It is something we are tracking because we hear about it as a concern but not a blocker (until now).
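A sketch of that image-shipping loop. One caveat worth noting: for images, `docker save`/`docker load` is the usual pair (`export`/`import` operate on containers and drop tags). The image name below is a placeholder:

```shell
# Offline image transfer, sketched; IMG is a placeholder name
IMG="digitalrebar/cloudwrap"
TARBALL="${IMG##*/}.tar"     # strips the repo prefix -> cloudwrap.tar
echo "$TARBALL"
# on an internet-connected machine:
#   docker pull "$IMG" && docker save "$IMG" -o "$TARBALL"
# after copying the tarball to the admin node:
#   docker load -i "$TARBALL"
```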

greg
2017-03-23 16:55
What is the OS you are using for the admin node?

2017-03-23 16:56
This must run on RHEL7

2017-03-23 16:57
I will try to go further with the minimum tools/services you have described and will come back if I need more... ;)

2017-03-23 16:57
Thanks for your attention!

greg
2017-03-23 16:58
The base system will need: sudo, git, ansible, python-netaddr, curl, jq (epel-release required)
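On RHEL 7 that translates to roughly the following install commands, run as root (epel-release has to land first because jq comes from EPEL):

```shell
# The package set greg lists, as install commands to run as root;
# epel-release goes first since jq lives in EPEL
PKGS="sudo git ansible python-netaddr curl jq"
echo "yum install -y epel-release && yum install -y ${PKGS}"
```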

greg
2017-03-23 16:58
This will get you ansible capable.

greg
2017-03-23 16:59
You will need this: curl -so rebar https://s3-us-west-2.amazonaws.com/rebar-bins/linux/amd64/rebar - in /usr/local/bin and chmod +x

greg
2017-03-23 16:59
on the admin node target as a user:

greg
2017-03-23 16:59
you will need the digitalrebar repo pulled from git as the directory digitalrebar.

2017-03-23 17:01
The last step is clear. I did a git clone to my workstation and copied the complete download to the target server. Then I ran the quickstart script, and it ended with one error

2017-03-23 17:01
no proper epel repository access...

greg
2017-03-23 17:02
At that point, you should be able to run quickstart.sh (assuming other packages are installed).

greg
2017-03-23 17:02
So, what error do you get?

greg
2017-03-23 17:03
Did you add the provisioner flag to quickstart? You probably should.

2017-03-23 17:03
TASK [Install EPEL [SLOW]] *****************************************************
failed: [10.241.236.92] (item=[u'epel-release']) => {"changed": true, "failed": true, "item": ["epel-release"], "rc": 1,
"msg": "warning: /var/cache/yum/x86_64/7Server/BA-20170319-epel_rhel7_x86_64/packages/epel-release-7-7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for epel-release-7-7.noarch.rpm is not installed
 Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.",
"results": ["Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos,
             : subscription-manager
This system is receiving updates from RHN Classic or Red Hat Satellite.
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch      Version   Repository                         Size
================================================================================
Installing:
 epel-release      noarch    7-7       BA-20170319-epel_rhel7_x86_64      14 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 14 k
Installed size: 24 k
Downloading packages:
Public key for epel-release-7-7.noarch.rpm is not installed
"]}

2017-03-23 17:04
@theta-my were you able to get an end-to-end system running before trying to get DR running w/o internet connectivity?

2017-03-23 17:04
not for this case, but before you tried to run disconnected...

2017-03-23 17:05
only on a laptop at my site, yes, that should work

2017-03-23 17:06
you mean, I install all this stuff on a local system and provide the docker containers to the target location?

2017-03-23 17:07
that should work; then I only need the base services and tools to run docker.

2017-03-23 17:07
good idea

2017-03-23 17:08
Sometimes I can't see the forest for the trees... :)

2017-03-23 17:10
I will try this approach out. Thanks!

2017-03-25 19:06
Hi all. Just got started with a 3 node Dell r610 cluster. Just looking to see if I can also have the Admin Node run a workload as well.

2017-03-25 19:15
Also looking at clustering Admin Nodes between hosts.

2017-03-25 19:30
@Orionx86 - you already have rebar running or are you trying to get started?

2017-03-25 19:31
the default install will include several workloads, Kubernetes is the one I'd recommend first

2017-03-25 19:32
clustering of Rebar admin would likely take some 1x1 discussion

2017-03-25 19:32
building a k8s cluster should work fine - will require an external load balancer


2017-03-25 19:33
I have rebar running already on a dell Centos Admin node

2017-03-25 19:33
getting no available nodes for kubernetes wizard though

2017-03-25 19:34
nice, for the wizard, you need to have discovered nodes in a deployment (defaults to system)

2017-03-25 19:34
or create a provider and install to cloud

2017-03-25 19:34
the wizard will let you choose which deployment to use as the source for "use existing nodes"

2017-03-25 19:34
Can I add to system? won't let me throw on to it.

2017-03-25 19:35
discovered nodes go into system when they are booted

2017-03-25 19:35
are you trying to add existing nodes?

2017-03-25 19:35
I will but I wanted to see if I can also use the admin node since most of it is currently unused

2017-03-25 19:36
ah, not a recommended config because systems could be asked to reboot

2017-03-25 19:36
you CAN install the kvm-slaves and boot those locally

2017-03-25 19:36
we do that all the time to run the provisioning cycle

2017-03-25 19:37
gotcha. worth a try. Just sad because I have a 16 thread/32 GB system and I'm using threads and 4 GBs

2017-03-25 19:37
4 threads/4 GBs

2017-03-25 19:37
yeah, that's what we use core/tools/kvm-slave for ... getting a doc link

2017-03-25 19:37
Just about to spin up node 2. This is a great system. I've watched a bunch of your videos so far

2017-03-25 19:38
http://digital-rebar.readthedocs.io/en/latest/development/dev_env/kvm-slaves.html

2017-03-25 19:38
thanks!

2017-03-25 19:38
Will you guys be at docker con this year?

2017-03-25 19:38
yes, I'll be there (attending, no booth)

2017-03-25 19:38
we're based in Austin, so the team's nearby. (also, my daughter is presenting)

2017-03-25 19:39
I'll be in town all week. That's great! What's she presenting?

2017-03-25 19:42
talking about workplace diversity on Tuesday 11:05 - Moby's Theater, EH-4 & EH-5

2017-03-25 19:42
cool I'll make it a point to attend.

2017-03-25 19:44
I'll be online for a while if you have more questions

2017-03-25 19:45
yeah, definitely. I saw there is integration with the 4 big automation tools but haven't seen too much on saltstack. Are there any docs I can't find surrounding it? Looking to build a workflow around VMware

2017-03-25 19:46
Salt Proxy Minion is the obvious choice for me

2017-03-25 19:52
it's been a long time since we worked on the salt integration. it's still there but likely needs work

2017-03-25 19:52
ansible & bash are our primary targets for latest dev

2017-03-25 19:53
provisioning VMware or using a VMware provider?

2017-03-25 19:53
Yes, a provider.

2017-03-25 19:54
for providers, check out the pattern w/ the openstack provider. it's the most recent and splits all the code into a dedicated file.

2017-03-25 19:55
I'm happy to put together a training video on how to build a provider

2017-03-25 19:56
dusting off the salt jig would likely involve talking w/ @galthaus because there are some choices that we could revisit based on the use-cases. Right now, the plan was mainly to plug it into an existing salt server and turn over control. We've got some more integration capabilities that could be used.

2017-03-25 19:59
I understand. I've been toying with openstack ironic and saltstack in my org, so this is a welcome addition if I can get it up and running in my homelab at least.

2017-03-25 20:05
Have you seen any issues with people using DDWRT configs with PXE booting?

greg
2017-03-26 14:09
Not familiar with DDWRT off the bat.

2017-03-26 17:42
http://www.dd-wrt.com/help/english/HManagement.asp ? We used to see issues w/ port fast settings on switches not being set correctly.

greg
2017-03-26 22:28
Okay. We routinely tell people to set portfast on server ports, to keep iPXE firmware from timing out.