Architecture Review: Designing a Triple‑Lane Arch Setup (Stable / Canary / LTS) with True Isolation
Good afternoon everyone,
I’ve been working on a multi‑lane Arch Linux setup and recently discovered that my current configuration isn’t providing the isolation I originally intended. I’d like to outline my existing state, my goals, and the architecture I’m planning to move toward. I’d appreciate feedback from the community.
1. Current State:
- Disk layout:
- `/dev/nvme1n1p1` → EFI System Partition (vfat, ~10GB)
- `/dev/nvme1n1p2` → Single ext4 root filesystem (~900GB)
- I have three systemd‑boot entries:
- ArchA (stable kernel)
- ArchB (mainline/UKI)
- ArchC (LTS kernel)
- All three entries point to the **same root partition** (`/dev/nvme1n1p2`) and the same `/boot` (ESP).
This means:
- All three “systems” share the same `/`, `/etc`, `/usr`, `/var`, `/home`, and `/boot`.
- Switching between lanes feels identical (same Firefox tabs, Dolphin windows, Plasma session, etc.).
- Kernel and UKI updates affect all lanes simultaneously.
- Backups:
- I keep 5 rotating backups of the entire root filesystem.
- Combined size of all 5 backups ≈ 346GB.
- Free space on backup drive ≈ 490GB.
2. Goal:
I want three truly isolated Arch installations, each with its own root filesystem, package database, system configuration, and kernel track:
- ArchA → Stable lane
- ArchB → Canary / testing lane
- ArchC → LTS fallback lane
However, I want the user experience to remain continuous and seamless across all lanes, meaning:
- Same `/home`
- Same Firefox profile, software layouts, configurations, etc.
- Same Plasma layout
- Same application state
- Same SSH keys, configs, etc.
In other words:
Three isolated OS roots, one shared user environment.
3. Proposed Future Architecture:
3.1. Partition Layout:
nvme1n1p1 → ESP (/boot), shared by all lanes
nvme1n1p2 → ArchA root (ext4)
nvme1n1p3 → ArchB root (ext4)
nvme1n1p4 → ArchC root (ext4)
nvme1n1p5 → /home (ext4 or xfs), shared across all lanes
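For illustration, each lane's /etc/fstab would then look roughly like the sketch below (the UUID placeholders are made up, not values from this machine):

```
# /etc/fstab for ArchA (sketch -- UUIDs are placeholders)
UUID=<p2-uuid>   /       ext4   defaults   0 1
UUID=<p1-uuid>   /boot   vfat   defaults   0 2
UUID=<p5-uuid>   /home   ext4   defaults   0 2
```

ArchB and ArchC would differ only in the root line (pointing at p3 and p4 respectively).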
3.2. Rationale:
- Each OS gets its own isolated root (`/`), so package updates, kernel changes, and system configs don’t affect the others.
- `/home` is shared so the user experience remains identical across lanes.
- Backups become much smaller:
- Each root ≈ 20–30GB
- 5 backups per lane ≈ ~450GB total (instead of tripling 350GB).
3.3. Migration Plan (High‑Level)
1. Boot from Arch ISO.
2. Shrink `/dev/nvme1n1p2` to free space.
3. Create:
- p3 for ArchB root
- p4 for ArchC root
- p5 for shared `/home`
4. Move existing `/home` to p5.
5. Install ArchB into p3 and ArchC into p4.
6. Mount the shared ESP at `/boot` for all three.
7. Create separate systemd‑boot entries:
- `arch-stable.conf` → root=UUID(p2)
- `arch-canary.conf` → root=UUID(p3)
- `arch-lts.conf` → root=UUID(p4)
8. Ensure each lane builds its own UKIs into its own directory under `/boot/EFI/`.
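As a rough sketch of step 7, the loader entries could be generated like this (the ESP path, the EFI subdirectories, and the UUID are placeholders I made up, not values from this setup):

```shell
#!/bin/sh
# Hypothetical sketch: write one systemd-boot entry per lane on the shared ESP.
# $esp stands in for the real ESP mount point; UUIDs/paths are placeholders.
esp=/tmp/demo-esp
mkdir -p "$esp/loader/entries"
cat > "$esp/loader/entries/arch-stable.conf" <<'EOF'
title   Arch Stable (ArchA)
linux   /EFI/arch-stable/vmlinuz-linux
initrd  /EFI/arch-stable/initramfs-linux.img
options root=UUID=aaaa-bbbb rw
EOF
# arch-canary.conf and arch-lts.conf would be analogous, pointing at p3/p4
# and at each lane's own kernel/initramfs (or UKI) under its own directory.
grep '^options' "$esp/loader/entries/arch-stable.conf"
```

Keeping each lane's images under its own /boot/EFI/ subdirectory (as in step 8) is what prevents one lane's kernel update from clobbering another's.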
4. Questions:
1. Does this architecture make sense for a triple‑lane setup?
(Three isolated roots, shared ESP, shared `/home`.)
2. Are there any pitfalls with sharing `/home` across three separate Arch installations?
(Especially regarding Plasma, Firefox, and application configs.)
3. Is there a recommended size for each root partition?
I’m thinking 30–40GB per lane.
4. Any best practices for systemd‑boot when managing multiple UKIs across multiple OS roots?
5. Any suggestions for improving backup strategy in this multi‑lane design?
6. Any other strategy that you recommend for a triple-lane OS?
Thanks in advance for your guidance.
Offline
> I want three truly isolated Arch installations, each with its own root filesystem, package database, system configuration, and kernel track:
> However, I want the user experience to remain continuous and seamless across all lanes, meaning:
Those goals seem to contradict each other.
A few potential issues:
System-wide config is done in /etc, which lives in /, which means you'll have 3 different setups.
To keep the user experience consistent, all setups need to run the same versions of browser, DE, file manager, media players, etc.
How do you intend to achieve that?
We need to know more about your endgoal.
What differences (apart from kernel) do you want to be able to test between the individual machines ?
Clean chroot building not flexible enough?
Try clean-chroot-manager by graysky.
Offline
Lone_Wolf: Thanks for the thoughtful reply. Let me try to explain the logic and my understanding. Please correct me if I’m wrong anywhere.
1. About /etc differing across lanes
There will be only one user and superuser on this machine – me. My understanding is that /etc controls system-level behavior (services, drivers, systemd configs, kernel modules, networking, etc.), while the user experience, such as Plasma layout, Firefox profile, Dolphin state, Konsole settings, etc., lives almost entirely under:
• ~/.config
• ~/.local/share
• browser profiles
• Plasma configs
• application settings
Hence, my plan is to put /home on a separate partition and mount it identically across the three installations. That way, even if /etc differs, the user-facing environment should remain consistent, because the per-user configs are shared.
If I’m misunderstanding how much /etc influences the user experience, please let me know what an effective way of handling this would be.
2. Keeping user-facing applications aligned
To keep packages like Plasma, Firefox, Dolphin, etc. consistent across lanes, I plan to maintain a shared package list for user-facing applications and apply it to all three systems. Something like:
pacman -Qqe > pkglist.txt
pacman -S --needed - < pkglist.txt
This way, even though the system layers differ, the versions of user-facing apps remain synchronized.
If there’s a better approach, I’m open to it.
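To make any drift between lanes visible, a sketch like the following could diff two lanes' explicit package lists (the printf sample data stands in for real `pacman -Qqe` output, and the /tmp file paths are made up):

```shell
#!/bin/sh
# Hypothetical sketch: find packages explicitly installed in lane A but
# missing from lane B. Sample lists stand in for `pacman -Qqe` output.
printf 'firefox\nlinux\nplasma-desktop\n'      > /tmp/laneA.txt
printf 'firefox\nlinux-lts\nplasma-desktop\n'  > /tmp/laneB.txt
# comm expects sorted input; -23 keeps lines unique to the first file.
comm -23 /tmp/laneA.txt /tmp/laneB.txt
```

On a real system the printf lines would be replaced by `pacman -Qqe > /tmp/laneA.txt` run in each lane; kernel packages (linux vs. linux-lts) showing up in the diff would be an expected, intentional difference.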
3. Why I want three lanes
The project running on this machine is critical, and I can’t afford downtime if updates fail. During initial testing I bricked the system multiple times after updates (granted, it may have been user error and lack of experience), so I want to reduce downtime while providing a safety net. To me, safety is more important than fast recovery or smaller payloads.
The way I think about it is similar to power systems:
• ArchB is like solar — it takes the variability and “tests” the conditions first
• ArchA is like the utility grid — stable and used for daily work
• ArchC is like a battery backup — not meant for long term use, but essential if both primary sources fail
I can’t rely on the “battery” (ArchC / LTS) alone because the project requires recent CUDA, cuDNN, and NVIDIA drivers, which may not work well on LTS kernels. So ArchC is a safety net, not the primary environment.
The goal is to isolate system-level risk while keeping the user environment consistent and the project running; if a lane does break, I can work on fixing it as time permits. If there’s a flaw in this reasoning, I’d really appreciate your insight.
4. Why Arch then?
After testing 5 Linux distros, Arch ended up being the best fit for this project because of its flexibility, minimalism, modularity, and the simplicity and power of pacman. It lets me build exactly what I need without unnecessary components and dependencies found in other systems.
Offline
> 1. About /etc differing across lanes
> If I’m misunderstanding how much /etc influences user experience, then please let me know what the effective way of handling this would be.
It is not the different configuration in /etc that may cause problems; it is the different program versions. Think of a major Plasma update. Once you start the new versions, they may also change data structures in $HOME: config files/variables, databases, or their table structure.
Such a major update will affect you either in ArchB(test) or ArchA(stable) after a system update. If you then realize, "Oh, no good. I want/need to use one of the other lanes," the version status of the "old" lanes will encounter the already changed structure in $HOME. And that can lead to problems.
> 3. Why I want three lanes
In my opinion, it's okay if it makes you feel good.
However, I would perhaps advise the following:
1. Each of your "lanes" has its own main user account. This account always matches the respective software version of your own system (configs in $HOME).
2. If possible, move your project to a directory outside of the respective $HOMES. For example, to /home/project. Make sure that each of your individual Lane accounts has the necessary access to the project directory.
2a. This assumes that your project does not require any specific settings from the $HOMEs. Otherwise, you may need to try to keep the necessary configurations from the $HOMEs synchronized.
3. If you are satisfied with the state of the updated system, you should update the other lanes as soon as possible, starting/logging in if necessary, so that their user accounts also pick up any configuration changes. If you are not satisfied, you will have to think of a procedure for restoring this "lane" (and its user account) to its "old" good state.
If you don't like the idea of different user accounts per lane, another option would be to back up your current user account BEFORE each system update. In your case, backups of the $HOMEs are mandatory anyway.
Assumed scenario:
- You perform a system update in ArchB and notice that something is not working (e.g., with Plasma). However, user configurations in the current user account may have already been changed or adapted by starting/using the versions after the update.
- Therefore, you now switch to ArchA. Here, you must restore the backup of the user account from before the update in ArchB so that the configurations match the version status of ArchA. This is essentially a "downgrade" of the user account. Of course, this should happen BEFORE the user logs in.
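The pre-update backup in this scenario could be as simple as the following sketch (I'm using cp -a with /tmp stand-in paths so it is self-contained; on a real system, rsync -a to a dated directory on the backup drive would be the natural choice):

```shell
#!/bin/sh
# Hypothetical sketch: snapshot the user's $HOME before updating ArchB so
# ArchA can restore the pre-update state. /tmp paths are stand-ins.
src=/tmp/demo-home
dst=/tmp/demo-home-pre-update
mkdir -p "$src/.config"
echo 'old-plasma-settings' > "$src/.config/plasmarc"
cp -a "$src" "$dst"                                    # the backup before the update
echo 'new-plasma-settings' > "$src/.config/plasmarc"   # the update mutates $HOME
cat "$dst/.config/plasmarc"                            # snapshot still holds the old state
```

Restoring in ArchA would then mean copying the snapshot back over $HOME before logging in, exactly the "user downgrade" described above.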
tl;dr:
- All your lanes should quickly have the same (current) software status if you rate its condition as "good."
- New versions of software in any lane can also change configurations in $HOME through use ("user update"). This "update" usually works automatically without any problems. However, the use of new configurations by "older" program versions is usually problematic, as a kind of "user downgrade" is not provided for. An old version may not start because "cannot read configuration->parameters->foobar".
Offline
> 1. Each of your "lanes" has its own main user account. This account always matches the respective software version of your own system (configs in $HOME).
Use the same UID w/ separate /home and bind-mount (or at a push symlink) project data from a separate partition into $HOME/MDR so you can work on MDR/ColdHarbor using the same user/permissions on all systems.
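In each lane's fstab, that bind mount could be expressed roughly as follows (the paths are hypothetical placeholders, not the poster's actual layout):

```
# bind the project partition's data dir into $HOME (paths are placeholders)
/mnt/projects/MDR   /home/user/MDR   none   bind   0 0
```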
Sharing dotfiles across versions requires careful curation (because they're typically only forward-migrated)
You could maintain them w/ 3 separate git branches you can conditionally merge and cherry-pick, but that's gonna be effort.
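A minimal sketch of the three-branch idea (the repo path and the plasmarc file are made up for illustration):

```shell
#!/bin/sh
# Hypothetical sketch: one dotfile repo, one branch per lane, so each lane
# sees configs matching its own software versions.
repo=/tmp/demo-dotfiles
mkdir -p "$repo" && cd "$repo"
git init -q -b stable
git config user.email demo@example.com
git config user.name demo
echo 'theme=breeze' > plasmarc
git add plasmarc && git commit -qm 'stable config'
git branch canary && git branch lts     # lanes start from the same state
git switch -q canary
echo 'theme=breeze-6' > plasmarc        # canary-only change after an update
git commit -qam 'canary config'
git switch -q stable
cat plasmarc                            # stable keeps its own version
```

In each lane you would check out that lane's branch into $HOME (e.g. via a bare repo and a work-tree alias) and cherry-pick changes between branches once an update is judged good.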
Offline
GerBra/Seth, thank you for taking the time to respond with detailed explanations and challenges. I wanted to clarify one important point about the project, because it might change how you view the risk profile.
Project data is lane‑agnostic
All of the project directories, datasets, and the PostgreSQL cluster live on their own dedicated disks. They are completely independent of /home and of the system partitions. Any lane can mount the project disks and continue working without relying on user‑specific configs. So the only thing I’m really trying to keep consistent across the Arches is the user experience (Plasma layout, browser profile, Dolphin settings, etc.), not the project state.
One Account Across 3 Lanes:
I am starting to understand the risks you mentioned about forward‑migrated configs in ~/.config and ~/.local/share. But for my workflow, having a single user account is essential — I prefer the same identity, environment, dotfiles, and apps/modules setup across all lanes. I realize this choice adds some risk, but the seamless workflow is important enough to me that I’m willing to manage that risk carefully. I think that ArchC (LTS) is the most likely to lag behind in user‑facing app versions, so I will only boot into it when absolutely necessary.
Given that, and your comments/recommendations, I’m considering a disciplined workflow:
1. ArchB updates first, at least once a month, in a dedicated window to update, test, and check
2. If ArchB is good → update ArchA immediately
3. Minimize the window in which ArchB runs newer user‑facing apps than ArchA
4. Always back up /home before updating ArchB
5. ArchC (LTS) will also use the same user account, but I will only boot into it when absolutely necessary
I have some questions, based on your experience, how do you tackle things below:
1. How do you personally handle config drift across major DE updates?
Do you rely on backups, snapshots, or something else?
2. How often do you see backward‑incompatible changes in Plasma, Postgres, CUDA/CUDNN or major apps in real‑world use?
Is this something that happens frequently, or mainly during big transitions like Postgres 17 → 18?
3. If you were designing a mission‑critical multi‑lane setup with a single user account, what precautions or workflow would you put in place?
I’m trying to understand what approach you’d take based on your experience.
Thanks again for all the guidance; this has been incredibly valuable.
Last edited by z-06 (2026-03-11 17:56:58)
Offline
> 1. How do you personally handle config drift across major DE updates?
pacdiff; ~/.* needs to be curated manually, but most™ software tends to be backward compatible and/or migrates itself.
> 2. How often do you see backward‑incompatible changes in Plasma, Postgres, CUDA/CUDNN or major apps in real‑world use?
Rarely, because you typically don't provoke that situation. Keep in mind that a lot of configuration is also *written* by the application, so failing to properly read it and writing back defaults as a consequence can very quickly get you into trouble.
With the reliability demands you're suggesting, one such incident would be too many.
> 3. If you were designing a mission‑critical multi‑lane setup with a single user account, what precautions or workflow would you put in place?
"Not" - you have a production system and a test system; idk where you'd want to put a third system.
If anything, you mirror the setup on additional hardware to account for problems in that regard (the best/most reliable update strategy won't help when your CPU fuses).
You update the test system and assert its function, then either align the productive system or swap their roles (ie. test system becomes new production system and you're now testing updates on the former production system)
There are many ways to skin that cat, but both systems have to be fully disjunct, and you'd e.g. use rsync to align the $HOME or, as previously mentioned, maintain the active parts of your $HOME (i.e. stuff that's not the production data itself) with a version control system (git) where the branches provide the separation.
Maintaining your config data (dotfiles) in a VCS is a more commonly taken approach, also to track regressions through (manual) config changes, and as long as it's (usually) not sensitive: you get a github backup for free (well, MS pays for it).
Offline