Installing single-node OpenShift (SNO) on a Beelink GTR5

After working on the HP Chromebox G1, I discovered that a single 32 GB DDR3 SODIMM was going to cost three times what the Chromebox itself cost me to begin with.  It quickly became evident that my OpenShift experiment was going to be limited on the Chromebox, so I decided to try another PC I had available: a Beelink GTR5.  In addition to the internal SSD, I also added a 1 TB NVMe drive.

The Chromebox G1s might still be usable as a MicroShift cluster, but I’m waiting on parts before I can really determine if that’s possible.

The GTR5 was previously used as a desktop machine running the i3 respin of Fedora.  First step was to back up everything, and then off to the races with OpenShift.

I started out following this guide.

https://www.redhat.com/sysadmin/low-cost-openshift-cluster

The installation followed the guide pretty closely; I’m only going to note the special steps I did on my side.

I’m running a pretty simple consumer-grade router, but it let me configure the DHCP hostname.  I set the GTR5 to “hive.geolaw.loc” and used that in the cluster details.

Cluster Name: hive
Base Domain: geolaw.loc

Copied my SSH public key and then generated the discovery ISO.

DNS entries: like I said, I’ve got a cheap consumer-class router that does not support adding DNS entries.
So on the machines I plan on using to access the web GUI or ‘oc’, I plan on just using the following /etc/hosts entries:

$ grep hive /etc/hosts
192.168.29.7 api.hive.geolaw.loc *.apps.hive.geolaw.loc api-int.hive.geolaw.loc
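One caveat worth flagging: the glibc resolver does not expand wildcards in /etc/hosts, so a *.apps entry never actually matches anything.  Each hostname under *.apps that you want to reach needs its own explicit line.  A sketch of what that could look like, using the default OpenShift console and OAuth route names for this cluster name and base domain (the IP is just the node’s address, substitute your own):

```
192.168.29.7 api.hive.geolaw.loc api-int.hive.geolaw.loc
192.168.29.7 console-openshift-console.apps.hive.geolaw.loc
192.168.29.7 oauth-openshift.apps.hive.geolaw.loc
```

Any other route you expose under *.apps would need its own line added the same way.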

Booting the discovery.iso

I had an existing Ventoy USB drive, and I first tried just dropping the ISO file into the Ventoy partition.  This did not boot properly for me and dropped to an emergency shell.  I then just used dd to write the discovery ISO to the thumb drive:
$ sudo dd if=discovery_image_hive.iso of=/dev/sdb bs=1024
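Writes like this can silently go bad on flaky thumb drives, so before rebooting it can be worth verifying the image.  This is just a sketch, not a step from the original install: it compares a checksum of the ISO against the first ISO-sized chunk read back from the device, and the paths in the usage line are placeholders.

```shell
# Compare the source ISO against the first N bytes read back from the target.
# verify_image ISO DEVICE -- both arguments are paths you substitute yourself.
verify_image() {
  iso=$1; dev=$2
  size=$(stat -c%s "$iso")                              # byte length of the ISO
  iso_sum=$(sha256sum "$iso" | cut -d' ' -f1)           # hash of the source
  dev_sum=$(head -c "$size" "$dev" | sha256sum | cut -d' ' -f1)  # hash of what landed
  [ "$iso_sum" = "$dev_sum" ] && echo "image verified" || echo "MISMATCH"
}
```

For example: verify_image discovery_image_hive.iso /dev/sdb (run with sudo if the device node isn’t readable by your user).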

Once this finished I rebooted the GTR5 and from the UEFI level selected the USB to boot from.

After booting, the agent.service was failing because it was unable to pull from the registry.redhat.io registry:

Jun 22 14:26:26 hive podman[17680]: Error: initializing source docker://registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-264: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/articles/3399531
Jun 22 14:26:29 hive podman[17749]: Trying to pull registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-264…


To fix this, I ssh’d into the OpenShift installer host, su’d to root, and then logged in to registry.redhat.io.  Once I logged in, I restarted the agent.service and away it went!


$ ssh core@hive
** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
This is a host being installed by the OpenShift Assisted Installer.
It will be installed from scratch during the installation.

The primary service is agent.service. To watch its status, run:
sudo journalctl -u agent.service

To view the agent log, run:
sudo journalctl TAG=agent
** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
Last login: Thu Jun 22 14:26:22 2023 from 192.168.29.16
[core@hive ~]$ sudo su -
Last login: Thu Jun 22 14:17:22 UTC 2023 on pts/0
[root@hive ~]# podman login registry.redhat.io
Authenticating with existing credentials for registry.redhat.io
Existing credentials are invalid, please enter valid username and password
Username (|uhc-pool-81ec5a21-635b-4c43-8409-63e45c46ad51): glaw@redhat.com
Password:
Login Succeeded!
[root@hive ~]# systemctl restart agent

The discovered host eventually popped up in the assisted installer and I was able to select my network and continue the install.

The host rebooted several times along the way as it was processing the install.

Watching the console I could see where it was pulling down the containers and starting them.

But again I was getting the registry errors, and the containers went into an ImagePullBackOff state:

Jun 22 15:42:07 hive kubenswrapper[2978]: E0622 15:42:07.423504 2978 pod_workers.go:965] “Error syncing pod, skipping” err=”failed to \”StartContainer\” for \”registry-server\” with ImagePullBackOff: \”Back-off pulling image \\\”registry.redhat.io/redhat/certified-operator-index:v4.13\\\”\”” pod=”openshift-marketplace/certified-operators-bb2nx” podUID=0f76c0fa-cb11-436f-9e7e-77357117b313


I tried doing the podman login again: as root, as core, as containers ... no bueno 🙁
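My hedged guess at the cause: the cluster’s pull secret never contained valid registry.redhat.io credentials, and the interactive podman login only fixed the live discovery environment, not the installed node.  Here is a tiny sketch, not part of the original attempt, for checking whether a saved pull secret (dockerconfigjson format) even has an entry for a given registry; the file path in the usage line is hypothetical:

```shell
# has_registry_auth PULL_SECRET_FILE REGISTRY
# Looks for a quoted registry hostname key inside the "auths" map of a
# .dockerconfigjson-style file. Purely illustrative, not from the post.
has_registry_auth() {
  grep -q "\"$2\"" "$1" && echo "auth entry present" || echo "auth entry missing"
}
```

Usage would look like: has_registry_auth ~/pull-secret.json registry.redhat.io.  If the entry is missing, the cluster would need its pull secret updated rather than another interactive login on the node.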


Oh well, good first test, will have to retry later.

Adventures with HP Chromebox Pt1

In my daytime job, I support ODF and Ceph.

I recently picked up 3 refurbished HP Chromebox G1s for $49 each, with hopes of creating a mini local ODF cluster a la https://www.redhat.com/sysadmin/low-cost-openshift-cluster

https://www.amazon.com/gp/product/B00URW6WEY/

System specs:
CPU: Intel Core i7-4600U @ 2.10 GHz
RAM: 4 GB DDR3

First step was going to be installing Linux.
I started following this guide:
https://rianoc.github.io/2020/04/19/Linux-Chromebox/ and then the links provided within.

1) First step was enabling “Developer mode”, which links to https://wiki.galliumos.org/Installing/Panther#Enable_Developer_Mode_and_Boot_Flags

a) First boot after reset
This worked as described, except I was using a Logitech wireless keyboard/mouse combo, and the Ctrl-D from the wireless keyboard was not being accepted.  Having previously had issues with recovery on a Mac mini, which only saw wireless keyboards on the innermost rear USB port, I tried the dongle in both the front and back USB ports.  I had to pull out a wired USB keyboard, and Ctrl-D was accepted right away.

b) Trying to run “sudo crossystem dev_boot_legacy=1” from the terminal window

Try as I might, sudo kept prompting me for a password, which went against almost every set of instructions I found: Ctrl-Alt-T for a terminal, “shell”, then “sudo crossystem dev_boot_legacy=1”.  On the first pass I had logged in using my Google ID; I have a G Suite (aka Google Apps) account for geolaw.com, and the device showed “this device is managed by geolaw.com”, so I was not sure if that was blocking me from getting into sudo.  I rebooted, and when it prompted me that “OS verification is off”, I turned verification back on and repeated the developer mode setup.

Second time around, after it reset and re-enabled developer mode:

At the “Welcome!” screen, I clicked “Enable debugging features”.  From there it prompted me to set a root password; the first time around I set a password, the second time, no password.  I just clicked “Enable” and then “OK”.  Back at the “Welcome!” screen, I clicked “Let’s go >”.  At “Connect to network”, I clicked “Next” since I was connected via Ethernet.  On the Google Chrome OS terms, I clicked “Accept and Continue”.  After a short “Checking for updates”, it prompted me to “Sign in to your Chromebox”.  I used “Browse as a Guest” at the bottom, but I was still unable to run “sudo crossystem dev_boot_legacy=1”, as it kept prompting me for a password.

What finally worked was Ctrl-Alt-F2 (using the wired keyboard), logging in as “root” with the password “test0000”.  From there, I ran “crossystem dev_boot_legacy=1” (I was root, so no sudo needed).  I also ran “chromeos-setdevpasswd” to set the chronos password, since the next step also required sudo 🙂

Ctrl-Alt-F1 took me back to the GUI, where I still had the terminal window open, and I ran:
cd; curl -LO https://mrchromebox.tech/firmware-util.sh && sudo bash firmware-util.sh

Note: the rianoc GitHub link above shows he used option 3, “Install/Update Full ROM Firmware”.
When I ran this on June 16th, 2023, it was option 2.

DUH! I forgot to remove the write-protect screw to enable the firmware update.

After shutting down and removing the firmware write-protect screw, I rebooted.  Note: after the reboot, the firmware-util.sh file did not persist (I was in a guest login, after all).  After fetching and running the script again, I chose “2”.  It prompted me to back up the current stock firmware, which I did to a USB thumb drive, and then it downloaded and flashed the firmware to the device.

“R” to reboot, and immediately I could see the difference: the Google boot screen was replaced with a rabbit logo.  There was no boot device: “Booting from ‘SATA: LITEON IT LST-16S9G-HP ‘ failed: verify it contains a 64-bit UEFI OS.”

Step 1 done; now to track down a CoreOS image for this device 🙂