# proxmox-utils (EXPERIMENTAL)
A set of scripts for automating setup and tasks in Proxmox.
## TODO
- CT updates / upgrades
  Right now the simplest way to update the infrastructure CT's if the sources changed is to simply rebuild them -- add a rebuild command.
- backup
- build (new reserve)
- destroy
- clone
- cleanup
  One way to approach this is to add a set of default action scripts (see: nextcloud/*.sh); need to automate this...
- backup/restore
- config manager -- save/use/..
- mail server
- which is better?
  - Makefile (a-la ./wireguard/templates/root/Makefile)
  - shell (a-la ./shadow/templates/root/update-shadowsocks.sh)
- separate templates/assets into distribution and user directories
  ...this is needed to allow the user to change the configs without the fear of them being overwritten by git (similar to how config is handled)
## Motivation
This was simply faster to implement than learning and writing the same functionality in Ansible.
NOTE: for a fair assessment of the viability of further development, an Ansible version will be implemented next as a direct comparison.
Fun.
## Architecture
### Goals
- Separate concerns
  Preferably one service/role per CT.
- Keep things as light as possible
  This for the most part rules out Docker as a nested virtualization layer under Proxmox, and favors light distributions like Alpine Linux.
- Pragmatic simplicity
  This goal yields some compromises to the previous goals; for example, TKL is used as a base for Nextcloud, simplifying the setup and administration of all the related components at the cost of a heavier CT, transparently integrating multiple related services.
### Network
```
  Internet                                              Admin
     v                                                    v
+----|----------------------------------------------------|-----+
|    |                                                    |     |
|  (wan)                (lan)                          (admin)  |
|    |                    |                               |     |
|    |                    |                        pve --+      |
|    |                    |                               |     |
|    |                   +--------------------------------+     |
|    |                  / |                               |     |
|    +--($WAN_SSH_IP)- ssh ---------------+               |     |
|    |    ^                               |               |     |
|    | (ssh:23)                           |               |     |
|    |    .                               |               |     |
|    |    .               +------------------------(nat)--+     |
|    |    .              /                |               |     |
|    +------($WAN_IP)- gate ------(nat)---+               |     |
|                        .                |               |     |
|                        .                +-- ns ---------+     |
|                        .                |               |     |
|                        + - (udp:51820)->+-- wireguard --+     |
| System                 .                |               |     |
| - - - - - - - - - - -  . - - - - - - - -| - - - - - - - | - - |
| Application            .                +-- syncthing --+     |
|                        .                |                     |
|                        + - - - (https)->+-- nextcloud         |
|                        .                |                     |
|                        + - (ssh/https)->+-- gitea             |
|                                                               |
+---------------------------------------------------------------+
```
The system defines two networks:

- LAN
  Hosts all the service CT's (`*.srv`)
- ADMIN
  Used for administration (`*.adm`)

The ADMIN network is connected to the admin port.
Both networks are provided DNS and DHCP services by the `ns` CT.
Services on either network are connected to the outside world (WAN) via a NAT router implemented by the `gate` CT (iptables). The `gate` CT also implements a reverse proxy (traefik), routing requests from the WAN (`$WAN_IP`) to the appropriate service CT's on the LAN.
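The NAT routing described here can be sketched as an iptables-restore style rules file; a minimal illustration only -- the WAN interface name (`eth0`) and the wireguard CT address (`10.0.1.3`) are assumptions, not values taken from this setup:

```shell
# Write a NAT table sketch of the kind of routing the gate CT performs.
# eth0 and 10.0.1.3 are placeholder assumptions.
cat > nat-rules.v4 <<'EOF'
*nat
# Masquerade LAN/ADMIN traffic leaving via the WAN interface
-A POSTROUTING -o eth0 -j MASQUERADE
# Forward WireGuard traffic (udp:51820, as in the diagram) to the wireguard CT
-A PREROUTING -p udp --dport 51820 -j DNAT --to-destination 10.0.1.3:51820
COMMIT
EOF
```

A file of this shape could be loaded with `iptables-restore -n nat-rules.v4`.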
Services expose their administration interfaces only on the ADMIN network when possible.
The host Proxmox (pve.adm) is only accessible through the ADMIN network.
The gate and ns CT's are only accessible for administration from the
host (i.e. via lxc-attach ..).
Three ways of access to the ADMIN network are provided:

- `wireguard` VPN (CT) via the `gate` reverse proxy,
- `ssh` service (CT) via the `gate` reverse proxy,
- `ssh` service (CT) via the direct `$WAN_SSH_IP` (fail-safe).
## Getting started

### Prerequisites
Install Proxmox and connect it to your device/network.
Proxmox will need to have access to the internet to download assets and updates.
Note that Proxmox repositories must be configured for apt to work
correctly, i.e. either the subscription or no-subscription repos must be
active and working; for more info refer to:
https://pve.proxmox.com/wiki/Package_Repositories
### Notes
This setup will use three IP addresses:

- The (usually static) IP initially assigned to Proxmox on install; this will not be used after setup is done,
- The WAN IP address to be used for the main set of applications; this is the address from which all requests will be routed to the various services on the LAN network,
- The fail-safe ssh IP address; this is the connection used for recovery in case the internal routing fails.
### Setup
Open a terminal on the host, either ssh (recommended) or via the UI.
Optionally, set a desired default editor (default: nano) via:

```shell
export EDITOR=nano
```
Download the bootstrap.sh script and execute it:

```shell
curl -O 'https://raw.githubusercontent.com/flynx/proxmox-utils/refs/heads/master/scripts/bootstrap.sh' \
    && sudo bash bootstrap.sh
```
It is recommended to review the script/code before starting.
This will:

- Install basic dependencies,
- Clone this repo,
- Run `make bootstrap` on the repo:
  - configure the network (2 out of 3 bootstrap stages)
  - build and start the infrastructure CT's (`gate`, `ns`, `ssh`, and `wireguard`)
At this point the WAN interface exposes two IPs:

- Main server (config: `$DFL_WAN_IP` / `$WAN_IP`)
  - ssh:23
  - wireguard:51820
- Fail-safe ssh (config: `$DFL_WAN_SSH_IP` / `$WAN_SSH_IP`)
  - ssh:22
The Proxmox administrative interface is available behind the Wireguard proxy on the WAN port or directly on the ADMIN port, both on https://10.0.0.254:8006.
At this point, it is recommended to check both the fail-safe ssh
connection and the Wireguard access.
Additional administrative tasks can be performed now if needed.
To finalize the setup run:

```shell
make finalize
```
This will:

- Set up firewall rules.
  Note that the firewall will not be enabled; this should be done manually after rule review.
- Detach the host from any external ports and make it accessible only from the internal network.
See: Architecture and Bootstrapping
This will break the ssh connection when done; reconnect via the WAN port
to continue (see: Accessing the host), or connect
directly to the ADMIN port (DHCP) and ssh into `$HOST_ADMIN_IP` (default: 10.0.0.254).
Note that the ADMIN port is configured for direct connections only, connecting it to a configured network can lead to unexpected behavior -- DHCP races, IP clashes... etc.
### Accessing the host
The simplest way is to connect to wireguard VPN and open http://pve.adm:8006
in a browser (a profile was created during the setup process and stored
in the /root/clients/ directory on the wireguard CT).
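A client profile of this kind generally has the following shape; every key, address, and the endpoint below are placeholders, not values from this setup:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.10/32
DNS = 10.0.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = <WAN_IP>:51820
AllowedIPs = 10.0.0.0/24
```

A profile like this can be imported into a WireGuard client as-is, or brought up with `wg-quick up ./<profile>.conf`.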
The second approach is to ssh to either:

```shell
ssh -p 23 <user>@<WAN_IP>
```

or:

```shell
ssh <user>@<WAN_SSH_IP>
```

The latter will also work if the `gate` CT is down or not accessible.

And from the ssh CT:

```shell
ssh root@pve
```
WARNING: NEVER store any ssh keys on the ssh CT, use ssh-agent instead!
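A minimal sketch of the ssh-agent workflow this implies; the key path and hosts are placeholders:

```shell
# Start a local ssh agent; private keys never leave this machine.
eval "$(ssh-agent -s)" > /dev/null

# Load a key locally, then hop through the ssh CT with agent
# forwarding (-A), so the second hop authenticates against the
# local agent instead of a key stored on the CT:
#   ssh-add ~/.ssh/id_ed25519
#   ssh -A -p 23 <user>@<WAN_IP>
#   ssh root@pve          # from the ssh CT

echo "$SSH_AUTH_SOCK"
```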
## Configuration
XXX
The following CT's interfaces can not be configured in the Proxmox UI:

- `gate`
- `ns`
- `nextcloud`
- `wireguard`
This is done mostly to keep Proxmox from touching the hostname `$(hostname)`
directive (used by the DNS server to assign predefined IP's) and, in
the case of `gate` and `wireguard`, to keep it from touching the additional
bridges or interfaces defined.
(XXX this restriction may be lifted in the future)
## Services
Install all user services:

```shell
make all
```
Includes:
Install development services:

```shell
make dev
```
Includes:
### Syncthing

```shell
make syncthing
```
The Syncthing administration interface is accessible via https://syncthing.adm/ on the ADMIN network; it is recommended to set an admin password via the web interface as soon as possible.
No additional routing or network configuration is required; Syncthing is smart enough to handle its own connections itself.
For more info see: https://syncthing.net/
### Nextcloud

```shell
make nextcloud
```
Nextcloud will get mapped to subdomain $NEXTCLOUD_SUBDOMAIN of
$NEXTCLOUD_DOMAIN (defaulting to $DOMAIN, if not defined).
For basic configuration edit the generated config.global; for the defaults see config.global.example.
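Assuming config.global is a shell-style variable file, a hypothetical fragment using the variable names this section references; the values are examples only:

```shell
# Hypothetical config.global fragment; values are examples only.
DOMAIN=example.com
NEXTCLOUD_DOMAIN=$DOMAIN
NEXTCLOUD_SUBDOMAIN=cloud
```

With values like these, Nextcloud would be served at cloud.example.com.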
For deeper management use the TKL consoles
(via https://nextcloud.srv, on the LAN network) and ssh, for more details
see: https://www.turnkeylinux.org/nextcloud
For more info on Nextcloud see: https://nextcloud.com/
### Gitea

```shell
make gitea
```
Gitea is mapped to the subdomain $GITEA_SUBDOMAIN of $GITEA_DOMAIN
or $DOMAIN if the former is not defined.
For basic configuration edit the generated config.global; for the defaults see config.global.example.
For more info see: https://gitea.com/
### Custom services
XXX traefik rules
## Extending

### Directory structure
```
proxmox-utils/
+- <ct-type>/
|  +- templates/
|  |  +- ...
|  +- assets/
|  |  +- ...
|  +- staging/
|  |  +- ...
|  +- make.sh
|  +- config
|  +- config.last-run
+- ...
+- Makefile
+- config.global
+- config.global.example
```
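Following this layout, a new CT type could be scaffolded by hand; a sketch with a hypothetical name ("myservice") -- whether make picks a new directory up automatically is not guaranteed:

```shell
# Create the directory skeleton for a hypothetical new CT type.
ct=myservice
mkdir -p "$ct/templates" "$ct/assets" "$ct/staging"
touch "$ct/config"
cat > "$ct/make.sh" <<'EOF'
#!/bin/sh
# build steps for the new CT go here
EOF
chmod +x "$ct/make.sh"
```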
## Recovery and Troubleshooting
- Configuration or bridge failure while bootstrapping

  Remove all the CT's that were created by make:

  ```shell
  pct destroy ID
  ```

  Cleanup the interfaces:

  ```shell
  make clean-interfaces
  ```

  Revise the configuration in ./config.global if needed.

  Cleanup CT cached configuration:

  ```shell
  make clean
  ```

  Rebuild the bridges:

  ```shell
  make host-bootstrap
  ```

  And select (type "y") "Create bridges" while rejecting all other sections.

  Or, do a full rebuild selecting/rejecting the appropriate sections:

  ```shell
  make bootstrap
  ```

- Failure while creating the `gate` CT

  Check if the bridges are correct, and check if the host has internet access.

  Remove the `gate` CT (replacing 110 if you created it with a different ID):

  ```shell
  pct destroy 110
  ```

  Build the bootstrapped gate:

  ```shell
  make gate-bootstrap
  ```

  Check if gate is accessible and if it has internet access.

  Then create the base CT's:

  ```shell
  make ns ssh wireguard
  ```

  Finally, cleanup:

  ```shell
  make bootstrap-clean
  ```

  Now the setup can be finalized (see: Setup)

- Failure while creating other CT's

  Check if gate is accessible and if it has internet access; if it is not, this will fail -- check or rebuild the gate.

  Simply remove the CT:

  ```shell
  pct destroy ID
  ```

  Then rebuild it:

  ```shell
  make CT_NAME
  ```

- Full clean rebuild

  Remove any of the base CT's:

  ```shell
  pct destroy ID
  ```

  Restore bridge configuration:

  ```shell
  make clean-interfaces
  ```

  Cleanup the configuration data:

  ```shell
  make clean-all
  ```

  Follow the instructions in Setup