Adventures with HP Chromebox Pt1

In my day job, I support ODF (OpenShift Data Foundation) and Ceph.

I recently picked up 3 refurbished HP Chromebox G1s for $49 each, with hopes of creating a mini local ODF cluster à la https://www.redhat.com/sysadmin/low-cost-openshift-cluster

https://www.amazon.com/gp/product/B00URW6WEY/

System specs:
CPU: Intel Core i7-4600U @ 2.10 GHz
RAM: 4 GB DDR3

The first step was getting Linux installed. I started by following this guide, https://rianoc.github.io/2020/04/19/Linux-Chromebox/, and then the links provided within.

1) The first step was enabling "Developer Mode", following https://wiki.galliumos.org/Installing/Panther#Enable_Developer_Mode_and_Boot_Flags

a) First boot after reset
This worked as described, except I was using a Logitech wireless keyboard/mouse combo and the Ctrl-D from the wireless keyboard was not being accepted. Having had issues previously with recovery on a Mac mini, which only saw wireless keyboards on the innermost rear USB port, I tried the dongle in both the front and back USB ports. In the end I had to pull out a wired USB keyboard, and Ctrl-D was accepted right away.

b) Trying to run "sudo crossystem dev_boot_legacy=1" from the terminal window.

Try as I might, sudo kept prompting me for a password, which went against almost every set of instructions I found: Ctrl-Alt-T for a terminal, then "shell", then "sudo crossystem dev_boot_legacy=1". On the first pass I had logged in with my Google ID – I have a G Suite (aka Google Apps) account for geolaw.com – and the device showed "this device is managed by geolaw.com", so I was not sure if that was what was blocking sudo. I rebooted, and when it prompted me that "OS verification is off", I turned verification back on and then repeated the Developer Mode process.

The second time around, after it reset and re-enabled Developer Mode:

At the "Welcome!" screen, I clicked "Enable debugging features". From there it prompted me to set a root password; the first time around I set a password, the second time I left it blank. I just clicked "Enable" and then "OK". Back at the "Welcome!" screen, I clicked "Let's go >". At "Connect to network" I clicked "Next", since I was connected via ethernet. At the Google Chrome OS terms I clicked "Accept and continue", and after a short "Checking for updates" it prompted me to "Sign in to your Chromebox". Even using "Browse as a Guest" at the bottom, I was still unable to run "sudo crossystem dev_boot_legacy=1" as it kept prompting me for a password.

What finally worked was Ctrl-Alt-F2 (using the wired keyboard), then logging in as "root" with the password "test0000". From there, I ran "crossystem dev_boot_legacy=1" (I was root, so no sudo needed). I also ran "chromeos-setdevpasswd" to set the chronos password, since the next step also required sudo 🙂
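For reference, the whole workaround from the VT2 console boils down to two commands as root:

# crossystem dev_boot_legacy=1
# chromeos-setdevpasswd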

Ctrl-Alt-F1 took me back to the GUI, where I still had the terminal window open:
cd;curl -LO https://mrchromebox.tech/firmware-util.sh && sudo bash firmware-util.sh

Note: the rianoc GitHub link above shows he used option 3, "Install/Update Full ROM Firmware".
When I ran this on June 16th, 2023, it was option 2.

DUH! I forgot to remove the write-protect screw to enable the firmware update.

After shutting down and removing the firmware write-protect screw, I rebooted. Note that after the reboot the firmware-util.sh file did not persist (I was in a guest login, after all), so I downloaded and ran the script again and chose option 2. It prompted me to back up the current stock firmware, which I did to a USB thumb drive, and then it downloaded and flashed the new firmware to the device.

"R" to reboot, and immediately I could see the difference: the Google boot screen was replaced with a rabbit logo. There was no boot device yet, though – "Booting from 'SATA: LITEON IT LST-16S9G-HP ' failed: verify it contains a 64-bit UEFI OS."

Step 1 done; now to track down a CoreOS image for this device 🙂

Adding NFS to my Buffalo LinkStation

I bought a second-hand Buffalo LinkStation off eBay several years ago, and although it's not the most powerful NAS unit, it still serves the basic purpose of file sharing. I also own a Lenovo ix2-dl, and the one complaint I have about the Buffalo unit is that there is no NFS built in. Years ago I had hacked in some other firmware, but after having to reset the NAS it defaulted back to the CIFS-only firmware.

  1. First step was to get in via ssh – done using the acp_commander Java code: https://advanxer.com/blog/2013/02/buffalo-linkstation-acp-commander-gui/
    1. After enabling ssh, I had to configure my .ssh/config like so to connect, due to the NAS's older ssh ciphers:
      Host buffalo-nas
        user root
        hostname 192.168.29.10
        KexAlgorithms diffie-hellman-group1-sha1
        PubkeyAcceptedKeyTypes +ssh-rsa
  2. Then I copied over my ssh keys:
    # ssh-copy-id root@buffalo-nas
  3. This much survives a reboot because the root home directory lives on /. However, the ipkg package manager and the packages it installs live on /opt, which does not exist as part of the default OS, so as soon as the NAS reboots it loses NFS every time.
  4. Next step: install ipkg – the instructions from https://github.com/skx/Buffalo-220-NAS worked fine.
    Then run the following to retrieve the list of available packages:
    # ipkg update
  5. From here, there are several NFS packages available. Despite the note not to use it, unfs3 is what finally worked for me:
    # ipkg install unfs3
  6. Then configure NFS – create /opt/etc/exports with my shares:
    # cat /opt/etc/exports
    /mnt/array1/Pictures 192.168.29.0(rw,sync)
    /mnt/array1/Music 192.168.29.0(rw,sync)
    /mnt/array1/storage 192.168.29.0(rw,sync)
    Then kill/restart NFS:
    # /opt/etc/init.d/S56unfsd
  7. Then I was able to see all of the exports and mount them from my newer Fedora 35 and Ubuntu 21.10 machines:
    $ showmount -e buffalo-nas
    Export list for buffalo-nas:
    /mnt/array1/storage 192.168.29.0
    /mnt/array1/Music 192.168.29.0
    /mnt/array1/Pictures 192.168.29.0
  8. Add to /etc/fstab
    # cat /etc/fstab |grep Pictures
    192.168.29.10:/mnt/array1/Pictures /Pictures nfs defaults,noatime,vers=3 0 0
  9. And mount
    # sudo mount /Pictures


Alternately, after setting up ssh and the ssh keys, I finally rewrote this process as an Ansible playbook [1].

The Python installed on the LinkStation is old (2.6) and does not have zlib, pip, or anything extra installed. You can force Ansible not to compress modules with zlib using the following config option:

$ cat ansible.cfg
[defaults]
module_compression = 'ZIP_STORED'

$ cat hosts
[buffalo]
buffalo-nas

And then it's just a matter of running the playbook:

# ansible-playbook -v -v -v -i hosts buffalo.yml

The playbook is fairly well commented, with plenty of checks.

[1] Playbook: buffalo.yml


BitTorrent in a Docker container with VPN

I cut the cord on cable years ago and have been relying on SABnzbd + Sick Beard/Sonarr to grab all of my TV shows off Usenet. Occasionally they will miss an episode, and by the time I go back to look for it, it is long gone from Usenet. That leaves me either watching the show "on demand", which means commercials, or once in a while reaching out to The Pirate Bay and torrenting a copy, which I usually try to avoid.

On the rare occasions I've had to do this in the past, the one time I forgot to check whether my VPN was up and running, I got a nastygram in the mail from AT&T a few weeks later, because apparently HBO was monitoring the torrent downloaders 😉

Enter docker – https://github.com/haugene/docker-transmission-openvpn/

Turns out this was so easy I don’t know why I did not look at doing it before.

My docker-compose.yml file – straight from the GitHub page except for the last line:

version: '3.3'
services:
  transmission-openvpn:
    cap_add:
      - NET_ADMIN
    volumes:
      - '/Downloads2/:/data'
    environment:
      - OPENVPN_PROVIDER=PIA
      - OPENVPN_CONFIG=ca_montreal,ca_ontario,ca_toronto,ca_vancouver
      - OPENVPN_USERNAME=XXXXX
      - OPENVPN_PASSWORD=XXXXX
      - LOCAL_NETWORK=192.168.0.0/16
      - PUID=1000
    logging:
      driver: json-file
      options:
        max-size: 10m
    ports:
      - '9091:9091'
    image: haugene/transmission-openvpn
    restart: unless-stopped

I added the last line to make sure this always auto started when the host machine rebooted.

Coupled with a bash script to check the VPN, this works perfectly.

It runs via cron every 10 minutes and makes sure the Docker container's IP is not the same as the host machine's IP (i.e. the VPN is up and running).

#!/bin/bash

function check {
     # hack to make sure docker container is using VPN
     ATT_IP=$(curl -s http://ipinfo.io/ip);

     # transmission container
     TID=$(docker ps | grep trans | awk '{print $1}');
     TRANS_IP=$(docker exec -it $TID /bin/bash -c "curl -s http://ipinfo.io/ip")
}

check
i="0"
echo $ATT_IP
echo $TRANS_IP
while [ $i -lt 5 ]; do

     if [ "$ATT_IP" == "$TRANS_IP" ]; then
          echo "uh oh, docker running on ATT IP restarting and retrying in 60 seconds"
          docker restart $TID
          i=$((i+1))
          sleep 60
          check
     else
          echo "we're good, docker running on VPN IP $TRANS_IP"
          exit
     fi
done
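For reference, a crontab entry along these lines runs the check every 10 minutes (the script path and log file here are just illustrative, not my actual locations):

*/10 * * * * /usr/local/bin/check_transmission_vpn.sh >> /var/log/check_transmission_vpn.log 2>&1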


Moving Plex to docker

I've been running plexmediaserver on my Linux rigs for several years now, but recently started moving several of my home media services over to Docker images.

With a little help from this Docker Compose YAML: https://hub.docker.com/r/linuxserver/plex

I was able to do this fairly quickly and painlessly, with (nearly) zero downtime, while retaining all of my historical Plex data and keeping my libraries fully intact. Hopefully this makes my Plex setup more portable 🙂

My Plex setup has my TV shows in /TV/TV/ and my movies in /TV/Movies – these are both on an NFS share coming off my QNAP NAS. The files are all owned by my user "glaw" (UID 1000) and group "users" (GID 100).
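For context, the host mounts that share over NFS with an fstab entry along these lines (the NAS hostname and export path here are illustrative, not my actual values):

$ cat /etc/fstab | grep TV
qnap-nas:/TV /TV nfs defaults,vers=3,noatime 0 0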

The first step was to prep the Docker Compose file:

$ cat docker-compose.yml

version: "2.1"
services:
  plex:
    image: ghcr.io/linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=100
      - VERSION=docker
      - PLEX_CLAIM=claim-SyGiy3XXXXXXXXXX
    volumes:
      - /var/lib/plexmediaserver:/config
      - /TV/TV:/TV/TV
      - /TV/Movies:/TV/Movies
    restart: unless-stopped

Since my media lives in subdirectories under /TV (capitalized), I adjusted the volumes to reflect the capitalization. The first time around, with /tv and /movies, none of my media would play.

I got the PLEX_CLAIM token from https://plex.tv/claim just before I did the docker-compose up below; the claim token is only good for 4 minutes.

Second, I stopped and disabled plexmediaserver on my main Linux rig (SS is an alias for sudo systemctl):

# SS stop plexmediaserver
# SS disable plexmediaserver
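For anyone copying along, that alias is simply:

$ alias SS='sudo systemctl'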

Third, I made a backup copy of all of my Plex data and then chowned it so it matches the PUID/PGID used in the Docker container:

# sudo cp -r /var/lib/plexmediaserver /var/lib/plexmediaserver.sav
# sudo chown -R glaw:users /var/lib/plexmediaserver

Next I brought up the container. The first time, it pulled down all of the Docker layers and then started up Plex; each additional time it just recreates the same container, since all of the layers are already present.

# docker-compose up

Pulling plex (ghcr.io/linuxserver/plex:)…
latest: Pulling from linuxserver/plex
1f5e15c78208: Pull complete
a8bf534b5e6e: Pull complete
e633a0fa06b1: Pull complete
e26072cac69d: Pull complete
57c07b9b6c59: Pull complete
b2d9d0061554: Pull complete
ec31a11d59ba: Pull complete
43c725c27329: Pull complete
Digest: sha256:f92f4238cd7bc72ba576f22571ddc05461a2467bc0a1a5dd41864db7064d6fa6
Status: Downloaded newer image for ghcr.io/linuxserver/plex:latest
Creating plex … done
Attaching to plex
plex | [s6-init] making user provided files available at /var/run/s6/etc…exited 0.

Lastly, I rebooted the host machine and verified all of the Docker containers were running:

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1760b65b5108 haugene/transmission-openvpn “dumb-init /etc/open?” About a minute ago Up 57 seconds (health: starting) 0.0.0.0:9091->9091/tcp transmission_transmission-openvpn_1
1f35aae81c73 ghcr.io/linuxserver/plex “/init” 24 minutes ago Up 2 minutes plex
c165f0c9d947 ghcr.io/linuxserver/jackett “/init” 24 hours ago Up 2 minutes 0.0.0.0:9117->9117/tcp jackett

The final test was to turn off wifi on my cell phone and verify I could still get to my home plex just as if plexmediaserver were still running natively on the host machine.

Lenovo ix2-dl corrupted firmware

What a pain in the $%^&*( ass.

Lenovo has no concept of using an MD5 checksum on a file to confirm its integrity before you flash it and end up with a corrupted NAS like me.

AND to top it off, Lenovo support only offers a destructive way to reflash the NAS, double %^&*()^& in the ass.

So, initially I thought the drives were set up as a ZFS disk set – I installed the needed ZFS debs on my Debian Jessie system, only to find out that the Lenovo disk set is just a Linux md RAID set. So (pulling this from my bash history):

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 276C4C51-BFA8-4E33-AB51-FC7033AA6D56

Device Start End Sectors Size Type
/dev/sdd1 65536 42008575 41943040 20G Microsoft basic data
/dev/sdd2 42008576 3907028991 3865020416 1.8T Microsoft basic data

Once I figured out it was just a Linux md RAID set, it was easy peasy to import:

root@dell:~# mdadm --assemble --run /dev/md1 /dev/sdd2
mdadm: /dev/md1 has been started with 1 drive (out of 2).

# DOH! no lvm2 installed on the system
root@dell:~# mount /dev/md1 /mnt
mount: unknown filesystem type 'LVM2_member'
root@dell:~# pvscan
-su: pvscan: command not found
root@dell:~# apt-get install lvm2

pvscan, vgscan, and lvscan brought the LV into device-mapper:

root@dell:~# pvscan
PV /dev/md1 VG bad2c48_vg lvm2 [1.80 TiB / 0 free]
Total: 1 [1.80 TiB] / in use: 1 [1.80 TiB] / in no VG: 0 [0 ]
root@dell:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group “bad2c48_vg” using metadata type lvm2
root@dell:~# lvscan
ACTIVE '/dev/bad2c48_vg/lv3140cc7e' [1.80 TiB] inherit

Mounted it up:
root@dell:~# sudo mount /dev/bad2c48_vg/lv3140cc7e /mnt

Bingo – now I can find 1.8 TB worth of space elsewhere to rsync all that %^&*( data off, so I can follow Lenovo's destructive NAS rebuild.
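A rough sketch of the copy-off step, assuming the spare space is mounted at /backup (that path is purely illustrative):

root@dell:~# rsync -avP /mnt/ /backup/ix2-dl-rescue/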

—————————————————————–
Update 07/01/2017
I got an email from a guy who had a 4 drive Lenovo PX4-300R NAS with a RAID 5 array set up.
He also had very little experience with Linux.

This is the rough process that worked for him

1. First, download Ubuntu from http://ubuntu.com and get it onto a USB drive (or burn it to a CD). https://wiki.ubuntu.com/Win32DiskImager/iso2usb

2. Pull your drives from the NAS one at a time. I am not sure if there is anything that designates the first disk, second disk, etc. – I think it was left to right in mine, if I remember correctly – but make sure to label them.

3. Prepare the PC you will be booting from: connect the 4 SATA cables/drives in order, or at least your best first guess, and boot up to the flash drive. Note: if you are playing with a UEFI machine you may need to disable Secure Boot. Ubuntu will start automatically and should pop up asking if you want to try or install; click "Try".
4. Right-click on the desktop and open a terminal. Commands below are prefixed by # or $ (the prompt) for readability; don't type the # or $.
ubuntu@ubuntu:~$ sudo su -
5. Get a list of the disks:
     root@ubuntu:~# fdisk -l
You should see your 4 identical disks; they should be obvious based on their size.

In my case I had 2 x 1.8 TB drives set up in a RAID 1, and since the disks are complete mirrors of each other, I could access the data with just a single disk.

My disk in question showed up like this:

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 276C4C51-BFA8-4E33-AB51-FC7033AA6D56

Device Start End Sectors Size Type
/dev/sdd1 65536 42008575 41943040 20G Microsoft basic data
/dev/sdd2 42008576 3907028991 3865020416 1.8T Microsoft basic data

Make a note of the devices that correlate to your NAS drives. Your first disk (which could be your Windows disk if you leave your Windows drive connected) may be /dev/sda, the second /dev/sdb, then /dev/sdc, /dev/sdd, etc. The USB drive might also show up as a /dev/sd* device.

6. This guy reported back to me that mdadm was not included on the default Ubuntu live CD, so install it:

# apt install mdadm
7. Assemble the RAID – using /dev/sdd from my blog above:
/dev/sdd1 above is, I'm guessing, something specific to the Lenovo setup.
/dev/sdd2 above is where the data lives.
So once you identify your drives – for example, here I will assume /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde – you should see the 2 partitions like /dev/sdd1 and /dev/sdd2 above for each (/dev/sdb1 and /dev/sdb2, /dev/sdc1 and /dev/sdc2, /dev/sdd1 and /dev/sdd2, /dev/sde1 and /dev/sde2).

The "2" partitions are the ones that go into the Linux md array, so something like this:

root@ubuntu:~# mdadm --assemble --run /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2

If this works, you should get something like :

mdadm: /dev/md1 has been started with 4 drives (out of 5).

8. If not, the disks may be in the wrong order. You *may* be able to just tweak the command and give them in reverse order, à la:

root@ubuntu:~# mdadm --assemble --run /dev/md1 /dev/sde2 /dev/sdd2 /dev/sdc2 /dev/sdb2
You want to shoot for the "mdadm: /dev/md1 has been started…" output.
If you can get there, you should be able to use the following to scan in the logical volume(s) off the RAID 5.
9. Load in the lvols:
root@dell:~# pvscan
PV /dev/md1 VG bad2c48_vg lvm2 [1.80 TiB / 0 free]
Total: 1 [1.80 TiB] / in use: 1 [1.80 TiB] / in no VG: 0 [0 ]
root@dell:~# vgscan
Reading all physical volumes. This may take a while…
Found volume group “bad2c48_vg” using metadata type lvm2
root@dell:~# lvscan 
ACTIVE '/dev/bad2c48_vg/lv3140cc7e' [1.80 TiB] inherit
^^^^^^^^^^^^^^^^^^^^^^^^^^^ – this is your logical volume name
You could realistically get multiple volume groups and logical volumes, depending on how the NAS slices up the array.
Mount it up; -o ro mounts it read-only, so you're not changing anything on it – just copying off:
root@dell:~# mount -o ro /dev/bad2c48_vg/lv3140cc7e /mnt
Then you can use this command to list your files:
root@ubuntu:~# ls -la /mnt
10. From here he was able to plug in another USB drive to recover his data to, then, using the "Files" icon on the sidebar, copy everything from /mnt to the USB drive.
Update – 2022
Shut down my LinkStation today and used this same concept to add the first disk into a Linux machine, then rsync data from another, smaller md RAID array with a failed disk onto the 3 TB disk. The plan from there: stop the first md RAID, insert the second disk, and then ??? hopefully be able to bring up the new md array.
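A hedged sketch of the mdadm side of that plan – device names and mount points below are purely illustrative:

root@dell:~# rsync -avP /mnt/failing-array/ /mnt/linkstation-disk/
root@dell:~# mdadm --stop /dev/md1
root@dell:~# mdadm --assemble --run /dev/md1 /dev/sdc2 /dev/sdd2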

Updating a Raspberry Pi 2 boot disk to a Raspberry Pi 3

This is just the basics as I figured them out, and a work in progress. I have not yet figured out which wireless network driver needs to be added to a Pi 2 image to make it see the Pi 3's embedded wireless adapter, but using wired ethernet it at least gives me the option to capture some code off a distributed Pi 2 image so it can be dropped onto a fresh install of 2016-05-27-raspbian-jessie.

So in this example, I am using the ProxyMagic image for a Raspberry Pi 2 and want to drop its code onto a newer Pi 3 Raspbian image.

  1. Download the latest Raspbian image – I am using the Debian Jessie version dated May 27th, 2016, from https://www.raspberrypi.org/downloads/raspbian/
  2. Unpack the .zip to expand to the .img file 2016-05-27-raspbian-jessie.img
  3. View the disk contents – this shows 2 partitions, the 63 MB MS-DOS boot partition and the 3.7 GB Linux partition:
    $> fdisk -lu 2016-05-27-raspbian-jessie.img

    Disk 2016-05-27-raspbian-jessie.img: 3.8 GiB, 4019191808 bytes, 7849984 sectors 
    Units: sectors of 1 * 512 = 512 bytes 
    Sector size (logical/physical): 512 bytes / 512 bytes 
    I/O size (minimum/optimal): 512 bytes / 512 bytes 
    Disklabel type: dos 
    Disk identifier: 0x14c20151 
    Device                          Boot  Start     End   Sectors  Size Id   Type 
    2016-05-27-raspbian-jessie.img1        8192  137215    129024   63M  c    W95 FAT32 (LBA)
    2016-05-27-raspbian-jessie.img2      137216 7849983   7712768  3.7G 83    Linux
    
  4. Copy this off to an 8 GB microSD card – mine came in as /dev/sdd; you can check your dmesg output after inserting the card to get the device.
    $> sudo dd if=2016-05-27-raspbian-jessie.img of=/dev/sdd
  5. As soon as the dd completes, my Linux file manager (nemo) refreshes with the "boot" partition and a 3.7 GB volume. I can click on each to mount them in userspace – i.e., they mount as /media/glaw/boot and /media/glaw/<some big long UUID>
  6. In a terminal window, I did a cd to the SD card's ext4 mount and wiped everything out:
    $> cd /media/glaw/fc254b57-8fff-4f96-9609-ea202d871acf
    $> sudo rm -rf *
    $> sudo sync
  7. Now to mount up the ProxyMagic image to copy the files over. I've read about how you can calculate the byte offset from the partition's start sector and the sector size and then specify that offset when doing a loopback mount (see the sketch after this list), but I found that kpartx does the trick very well.
    $> kpartx -v -a ProxyMagic-RPI-v1.img
    add map loop2p1 (253:0): 0 114688 linear /dev/loop2 8192
    add map loop2p2 (253:1): 0 5662720 linear /dev/loop2 122880
  8. Up pops an authentication window asking for sudo rights to mount the new boot filesystem – it should mount as /media/glaw/boot1; then click on the 2.9 GB volume to mount it as well.
  9. Locate the other
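As mentioned in step 7, the manual alternative to kpartx is a loopback mount with an explicit offset, computed as the partition's start sector times the 512-byte sector size. A sketch for the ProxyMagic boot partition (start sector 8192 per the kpartx output above; the mount point here is just illustrative):

$> sudo mkdir -p /mnt/pi-boot
$> sudo mount -o loop,ro,offset=$((8192 * 512)) ProxyMagic-RPI-v1.img /mnt/pi-boot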

Open Sourcing my Hackintosh ;)

Running an HP SPP mini tower with Yosemite installed, and finally getting tired of Finder and iTerm. Don't get me wrong, I like iTerm, but from my Linux background my fingers just know terminator so much better 🙂

I tried installing terminator via brew but it did not seem very stable – I opened up a 4-up terminator view and after the icon spun for 5 minutes, it crashed.

Fink seems to be much more stable and integrates its programs right into XQuartz, where the brew-installed version popped up a separate Python window.

1. Install Fink: a bash script to do it all for ya – https://raw.githubusercontent.com/fink/scripts/master/srcinstaller/Install%20Fink.tool

2. fink install terminator

3. fink install nautilus

4. Set up dbus for OS X:

sudo launchctl load -w /sw/share/dbus/launchd/org.finkproject.dbus-session.plist

launchctl load -w /sw/share/dbus/launchd/org.finkproject.dbus-session.plist


Remaster a Linux install CD to allow installation on a Macbook

It turns out this was pretty easy to get going on my 2006-model MacBook to install Xubuntu.

On my Fedora computer I installed isomaster (yum install isomaster), then opened the Xubuntu ISO I had downloaded with isomaster:

glaw@fedora ~ $ isomaster xubuntu-14.04-desktop-amd64.iso 

Highlight the EFI folder, and then click the 5th icon on the bottom (I think it's supposed to be a trashcan).

Then file -> Save As and save a new copy of the ISO. 

You will end up with a slightly smaller iso:

glaw@fedora ~ $ ls -la x.iso xubuntu-14.04-desktop-amd64.iso 
-rw------- 1 glaw users 953790464 Jun  4 20:22 x.iso
-rw-r--r-- 1 glaw users 957349888 May  6 10:21 xubuntu-14.04-desktop-amd64.iso

Now burn the x.iso with your favorite burner. 

glaw@fedora ~ $ brasero x.iso

Once you have burned this successfully, insert it into your MacBook, boot up while pressing the Alt/Option key, and select the "Windows" icon (the Mac thinks this is a Boot Camp install).

This other website will help in getting your Mac set up for the new OS:

http://www.rodsbooks.com/ubuntu-efi/


Docker UCLUG 5/13/2014

Awesome meeting last night with my fellow geeks at UCLUG  – Many Thanks to Tim Fowler – https://twitter.com/roobixx

http://youtube.com/watch?v=DuD0tYS_Sls

The topic was Docker. Docker is based on the concept of containers – not necessarily LXC containers (akin to Solaris zones), although Docker containers can run within LXC – and is basically described as chroot on steroids.

The minimum basis for a Docker container is the bare minimum set of libraries required to run a Linux distribution (probably something close to a net install). So I downloaded the base Fedora Docker image – about 250 MB – and it booted up in Docker in about 3 seconds.
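For anyone following along, pulling and booting into that base image is just a couple of commands (roughly what I ran; the tag defaults to latest):

$ docker pull fedora
$ docker run -i -t fedora /bin/bash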


The idea is that you start with one of these base images and yum install all of the packages required to run your app. For example, for a simple WordPress website container, you might install Apache, PHP, and MySQL. You then do a docker commit to save your changes, and the changes get saved as a separate layer.
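A hedged sketch of that workflow – the container name and the resulting image tag below are made up for illustration:

$ docker run -i -t --name wp-build fedora /bin/bash
bash-4.2# yum install -y httpd php php-mysql mariadb-server
bash-4.2# exit
$ docker commit wp-build glaw/wordpress-base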

So my understanding of Docker's internals is that when you boot a Docker image, the base image plus all of its layers get joined together via unionfs (http://en.wikipedia.org/wiki/UnionFS), and then the system chroots into that unionfs mount and invokes the normal system startup as if it were a standalone machine. It's almost a virtual machine, but it does not run a separate kernel – essentially you can pull down the base Ubuntu image and run Ubuntu on top of Fedora. In addition, there are settings that control CPU/memory affinity for each container, as well as some SDN (software-defined networking) components that control how one container can communicate with other containers and/or with the host machine. So you can expose port 80/443 running in a container to the host machine (and out to the network).
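Exposing a container port to the host is just the -p flag on docker run; for example, publishing a container's port 80 onto the host's port 80 (using the hypothetical image from the sketch above):

$ docker run -d -p 80:80 glaw/wordpress-base /usr/sbin/httpd -DFOREGROUND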

Unlike a virtual machine where you may want to duplicate a base “machine” to use it as the basis for several different applications, you can build several docker containers, all with their own unique purposes, based on the same fedora base image.

CONTAINER_A : APACHE/PHP stack
CONTAINER_B : MySQL DB

Unlike a virtual machine clone where you are duplicating the total OS, with docker, CONTAINER_A and CONTAINER_B  (as well as all other containers built off the same base image) all reference back to the same base fedora image and then just apply their own individual layers representing the deltas.

That’s it in a nutshell

WordPress – importing images from a non-WordPress images directory

I'm working on a freelance site, and somehow a bunch of images got uploaded to /images/2012/09/… What the heck? I cannot even figure out how that happened.

Trying this new “smush.it” plugin and guess what, it only works off images inside the media manager.

After looking around for a couple of hours, I finally figured it all out. DISCLAIMER: before trying this, make sure you back up your database. phpMyAdmin has an "Export" option that will let you do this.

  1. First is a plug-in called "Add From Server" – http://wordpress.org/extend/plugins/add-from-server/. This plugin gives you a file-manager-type interface to browse your website directories, so I pointed it at /images and let it import from there. I wish it were recursive, but no such luck. Lucky for me, I only had to drill down into /images/2012/09/ and import from there, as well as my top-level /images directory. Importing gives you the option of using the original file date, so it copies the files to /wp-content/uploads/2012/09 and maintains that same date-based hierarchy. From the /images directory I just imported those ones with today's date, so they ended up in /wp-content/uploads/2013/01/…
  2. After importing I had to refresh some of my MySQL skills.  The trick was to search the database for src=”/images/…” and update that to be src=”/wp-content/uploads/…”.  It took me a couple tries to get it right.  I started with a SELECT statement to get the syntax correct.
    SELECT post_content,replace(post_content,'src="/images','src="/wp-content/uploads') FROM `wp_posts` WHERE post_content like '%src="/images%';
    This let me verify the substitution was happening the way I wanted it to; then it was just a matter of rewriting it as an UPDATE query (after I BACKED UP MY DATABASE!). This was the query:
    update wp_posts set post_content=replace(post_content,'src="/images','src="/wp-content/uploads') WHERE post_content like '%src="/images%';