Useful Container Commands
Docker Snippets
Extract a Dockerfile from an Image
n.b. You won't get the names of the base images referenced by FROM, nor any changes made using docker commit
Collect the ImageID of the image you'd like to view the Dockerfile for:
argv=$(docker images | grep frontend | awk '{ print $3 }')
docker history --no-trunc $argv | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*[kMG]*B\s*$,,g' | head -n -1
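If you know the exact repository name, docker images -q resolves the ImageID directly instead of the grep/awk:
argv=$(docker images -q frontend | head -n 1)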
Filter Images based on Labels - Note: add more LABELs to your Dockerfiles!
docker images --filter label=maintainer=the.soulman.is@gmail.com
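This only works if the Dockerfile bakes the label in with a LABEL instruction, e.g.:
LABEL maintainer="the.soulman.is@gmail.com"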
Rename (Re-tag) an Image
docker image tag soulmanos-amzscraper-htmlify-csv:latest soulmanos/amzscraper-htmlify-csv
docker rmi soulmanos-amzscraper-htmlify-csv
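n.b. Because the image now carries two tags, the docker rmi above only removes the old tag; the underlying layers survive under the new name.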
Update an already-running container to always restart
root@SL-NL-UB-VPS-01:~# docker update --restart=always soulmanos-nginx
soulmanos-nginx
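Verify the policy took with docker inspect:
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' soulmanos-nginx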
Start the most recently created container and, once running, attach to it
last=$(docker ps -lq) && docker start $last && docker attach $last
Debug a shitty container by starting it at an easy entrypoint
docker run -it --entrypoint /bin/bash soulmanos/amzscraper
Install some stuff and commit the changes from another terminal window
docker commit <container name retrieved from docker ps> soulmanos/amzscraper
Another example of starting from a different entrypoint - Note: arguments come after the image name
docker run -d -e CATEGORY="Baby" --entrypoint /usr/bin/python3 soulmanos/amzscraper /usr/src/python-amzScraper/amzScraper.py
Build an Image from the Dockerfile in the current directory
docker build -t soulmanos/amzscraper-htmlify-csv .
LXC Snippets
Initial Config of LXD - lxd init
List running Containers - lxc list
List lxc image remotes - lxc remote list
List available images in repo -
lxc image list images:
lxc image alias list images: | grep centos
lxc image list images:centos/7/amd64
lxc image show images:centos/7
Copy image to local system -
lxc image copy images:/ubuntu/trusty/amd64 local: --alias=trusty-amd64
or using fingerprint
lxc image copy images:3874fcaa1fb3 local: --alias=ubuntu-16.04
lxc image copy images:87beb0b7ef9e local: --alias centos-7
Launch container - lxc launch local:ubuntu-16.04 grafana-ub-16-04
Output Container Info - lxc info local:pihole-ub
Stop a running Container - lxc stop pihole-ub
Delete a Container (add --force if it's still running) - lxc delete pihole-ub
Push File into a Container - lxc file push unifi_sysvinit_all.deb unifi-controller-5-6-30-ub-16-04/root/unifi_sysvinit_all.deb
Pull File from a Container lxc file pull mysql-db/root/ghost-blog-v2_20191117_222244.sql .
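Push a whole Directory into a Container (-r makes lxc file push recursive; ./confs is just an example directory) -
lxc file push -r ./confs unifi-controller-5-6-30-ub-16-04/root/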
Snapshot a Container -
lxc snapshot wordpress-4-9-4-ub-16-04 snap-1-initial-Install --verbose
Restore a Snapshot - lxc restore wordpress-4-9-4-ub-16-04 snap-1-initial-Install
List Snapshots - lxc info wordpress-4-9-4-ub-16-04 | grep -A4 Snapshots:
Delete a Snapshot - lxc delete magic-mirror-ub-16-04/created-pi-user-pre-npm-install
Console into a container -
lxc exec python-dev-ub-16-04 -- /bin/bash
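Run a one-off command without an interactive shell -
lxc exec python-dev-ub-16-04 -- uname -a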
Check Container Storage
for i in /var/lib/lxd/storage-pools/default/containers/*; do du -sh "$i"; done
snap.lxd.daemon.service fails to start because a disk was removed or failed?
- Error seen in
journalctl -xe -u snap.lxd.daemon.service
Sep 08 22:15:15 soulsvr.soulnet.local lxd.daemon[17404]: t=2020-09-08T22:15:15+0100 lvl=eror msg="Failed to start the daemon: Failed initializing storage pool \"zoneminder\": Failed to mount '/var/lib/snapd/hostfs/data-stratis/zon>
Sep 08 22:15:15 soulsvr.soulnet.local lxd.daemon[17404]: Error: Failed initializing storage pool "zoneminder": Failed to mount '/var/lib/snapd/hostfs/data-stratis/zoneminder' on '/var/snap/lxd/common/lxd/storage-pools/zoneminder':>
- Dump the schema
lxd sql global .schema
- You could use a SQL statement to delete the row
lxd sql global "select * from storage_pools;"
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
+----+------------------+--------+-------------+-------+
| id | name | driver | description | state |
+----+------------------+--------+-------------+-------+
| 4 | lxd-storage-pool | lvm | | 1 |
| 5 | zoneminder | dir | | 1 |
+----+------------------+--------+-------------+-------+
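You could then delete the broken pool's row and restart the daemon - double-check the id against your own output first:
lxd sql global "delete from storage_pools where id=5;"
systemctl restart snap.lxd.daemon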
Delete a Read-only Filesystem Snapshot created by LXC
# btrfs subvolume show /var/lib/lxd/storage-pools/default/snapshots/squid-ub-17-10/snap0.ro
snapshots/squid-ub-17-10/snap0.ro
Name: snap0.ro
UUID: 06e9c848-ffd6-744f-b796-86836ac969f6
Parent UUID: 2ab6820c-e97b-514f-810c-ab5a4dbe0341
Received UUID: -
Creation time: 2019-02-04 20:34:52 +0000
Subvolume ID: 415
Generation: 1254771
Gen at creation: 1254766
Parent ID: 258
Top level ID: 258
Flags: readonly
Snapshot(s):
# btrfs subvolume list /var/lib/lxd/storage-pools/default
...
ID 415 gen 1254771 top level 258 path snapshots/squid-ub-17-10/snap0.ro
...
# btrfs subvolume delete /var/lib/lxd/storage-pools/default/snapshots/squid-ub-17-10/snap0.ro
Delete subvolume (no-commit): '/var/lib/lxd/storage-pools/default/snapshots/squid-ub-17-10/snap0.ro'
Resize an LXC Volume
# lxc storage show default
config:
size: 15GB
source: /var/lib/lxd/disks/default.img
description: ""
name: default
driver: btrfs
# truncate -s +5G /var/lib/lxd/disks/default.img
# ls -hl /var/lib/lxd/disks/default.img
-rw------- 1 root root 20G Feb 5 21:06 /var/lib/lxd/disks/default.img
# REBOOT - Maybe not required, but I needed to
# btrfs filesystem show /var/lib/lxd/storage-pools/default
Label: 'default' uuid: 58c388ce-4f50-41a9-8b73-a60d619845f7
Total devices 1 FS bytes used 13.41GiB
devid 1 size 15.00GiB used 15.00GiB path /dev/loop0
root@SL-NL-UB-VPS-01:/var/lib/lxd/disks# btrfs filesystem resize max /var/lib/lxd/storage-pools/default
Resize '/var/lib/lxd/storage-pools/default' of 'max'
root@SL-NL-UB-VPS-01:/var/lib/lxd/disks# btrfs filesystem show /var/lib/lxd/storage-pools/default
Label: 'default' uuid: 58c388ce-4f50-41a9-8b73-a60d619845f7
Total devices 1 FS bytes used 13.23GiB
devid 1 size 20.00GiB used 16.00GiB path /dev/loop0
# df -h | grep lxd
tmpfs 100K 0 100K 0% /var/lib/lxd/shmounts
tmpfs 100K 0 100K 0% /var/lib/lxd/devlxd
/dev/loop0 20G 14G 5.5G 71% /var/lib/lxd/storage-pools/default
Basic post-launch Container Config
apt install -y curl ca-certificates && update-ca-certificates
apt upgrade -y
n.b. Without ca-certificates, curl can't pull data from https:// sites because of an SSL error, hence the requirement to install it and run update-ca-certificates
Optional: Enable SSH
apt install -y ssh
We can enable SSH into a container without setting the root password (passwd) or running ssh-keygen. ssh-keygen only needs to be run if you want to ssh 'from' the container to another host using public key authentication; password-based ssh still works without it.
ssh-keygen - Creates the ~/.ssh/ dir and the id_rsa/id_rsa.pub key pair - Required for ssh'ing into another host from the container using key-based authentication
cat ~/.ssh/id_rsa.pub - Your public key; this needs copying into the remote host's ~/.ssh/authorized_keys file
ssh-copy-id - If you ran passwd for root and set a password, run this command from the remote box to inject your client key into the container without the manual copy shown below
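e.g., from the remote box:
ssh-copy-id root@<container ip>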
In the Container - For times when you didn't run passwd and ssh-copy-id
mkdir ~/.ssh
echo 'ssh-rsa AAAAB3NzaC1yc2EAAA<< your client ssh key >>5GXp8jpopHE6eTMTuAGX root@SL-NL-UB-VPS-01' >> ~/.ssh/authorized_keys
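sshd's StrictModes check will refuse the key if permissions are too loose; tighten them if key auth fails:
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys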
LXC Virtual machines
Find VM images (Cloud image containing cloud-init)
lxc image list images: | grep centos
lxc image copy images:1ee42a5d5966 local: --alias=c8vm-cloud
Initialize the VM
lxc init local:c8vm-cloud c8vm-k8 --vm
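Optionally size the VM before first boot (limits.cpu and limits.memory are standard LXD config keys):
lxc config set c8vm-k8 limits.cpu 2
lxc config set c8vm-k8 limits.memory 4GB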
Use cloud-init to set the image root password. Create the config and set it as user.user-data
(
cat << EOF
#cloud-config
chpasswd:
  list: |
    root:centos
  expire: False
EOF
) | lxc config set c8vm-k8 user.user-data -
Mount the config as a device
lxc config device add c8vm-k8 config disk source=cloud-init:config
lxc config show c8vm-k8
lxc config edit c8vm-k8
Start the VM
lxc start c8vm-k8
Connect to the console from the host (Will also output boot log)
lxc console c8vm-k8
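Once booted, log in as root with the cloud-init password set above (centos); ctrl+a q should detach from the console.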