<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Sadegh sobhi]]></title><description><![CDATA[Sadegh sobhi]]></description><link>https://blog.sadegh-sobhi.ir</link><generator>RSS for Node</generator><lastBuildDate>Thu, 23 Apr 2026 02:38:51 GMT</lastBuildDate><atom:link href="https://blog.sadegh-sobhi.ir/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Longhorn-Restoring a PostgreSQL Cluster]]></title><description><![CDATA[Restoring a PostgreSQL Cluster (CloudNativePG) on Kubernetes Using an Existing Backup & Longhorn
1. Scenario & Goal
Overview (one sentence): We use an existing Longhorn backup to seed a new PostgreSQL cluster (managed by CNPG) in a new Kubernetes clu...]]></description><link>https://blog.sadegh-sobhi.ir/longhorn-restoring-a-postgresql-cluster</link><guid isPermaLink="true">https://blog.sadegh-sobhi.ir/longhorn-restoring-a-postgresql-cluster</guid><dc:creator><![CDATA[Sadegh Sobhi]]></dc:creator><pubDate>Tue, 04 Nov 2025 11:07:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762255720675/b68bf12f-a1b1-46c7-bc10-ba73d15223c5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-restoring-a-postgresql-cluster-cloudnativepg-on-kubernetes-using-an-existing-backup-amp-longhorn">Restoring a PostgreSQL Cluster (CloudNativePG) on Kubernetes Using an Existing Backup &amp; Longhorn</h1>
<h1 id="heading-1-scenario-amp-goal">1. Scenario &amp; Goal</h1>
<p>Overview (one sentence): We use an existing Longhorn backup to seed a new PostgreSQL cluster (managed by CNPG) in a new Kubernetes cluster via a fixed flow: restore → PVC → VolumeSnapshot → CNPG bootstrap.</p>
<h1 id="heading-2-procedure">2. Procedure</h1>
<ol>
<li><p>Restore the backup (set replica count).</p>
<ul>
<li><p>Restore the required Longhorn backup and set the number of Longhorn replicas for the restored volume (e.g., 2 or 3).</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762253801127/730d4f6d-37f0-4484-b0c7-4a2b7a045e52.png" alt class="image--center mx-auto" /></p>
</li>
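<li><p>If you prefer a declarative route over the Longhorn UI, the restore can also be expressed as a Longhorn <code>Volume</code> resource with <code>fromBackup</code>. This is only a sketch: the volume name and the backup URL below are placeholders you must replace with your own values from the Longhorn backup list.</p>
<pre><code class="lang-yaml">apiVersion: longhorn.io/v1beta2
kind: Volume
metadata:
  name: restored-vol              # placeholder name for the restored volume
  namespace: longhorn-system
spec:
  numberOfReplicas: 2             # Longhorn replica count for the restored volume
  # Placeholder URL - copy the real backup URL from the Longhorn backup list
  fromBackup: "s3://my-bucket@us-east-1/longhorn?backup=backup-xxxx&amp;volume=old-vol"
</code></pre>
</li>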
</ul>
</li>
<li><p>Create a new PVC with a distinct name.</p>
<ul>
<li><p>In the target namespace, create a PersistentVolumeClaim that binds to Longhorn for this restored data. Use a name different from any previous PVC.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762253975463/77644ef6-2c2d-4880-8ab8-dc4143b0fbc1.png" alt class="image--center mx-auto" /></p>
</li>
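<li><p>One way to wire the restored Longhorn volume into Kubernetes is a static PV/PVC pair (Longhorn's "Create PV/PVC" restore option generates something similar automatically). This is a sketch: <code>restored-vol</code> and <code>restored-vol-pv</code> are assumed placeholder names, while <code>pvc-test</code> is the claim name used in the next step.</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-vol-pv           # placeholder PV name
spec:
  capacity:
    storage: 100Gi
  accessModes: [ReadWriteOnce]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    volumeHandle: restored-vol    # must equal the restored Longhorn volume name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
  namespace: pro
spec:
  storageClassName: longhorn
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
  volumeName: restored-vol-pv     # bind directly to the PV above
</code></pre>
</li>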
</ul>
</li>
<li><p>Create a Kubernetes VolumeSnapshot from that PVC</p>
<ul>
<li><p>Using the CSI snapshot API, create a VolumeSnapshot that references the newly created PVC as its source. This snapshot becomes the hand-off artifact to your CNPG bootstrap.</p>
</li>
<li><pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">snapshot.storage.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">VolumeSnapshotClass</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">longhorn-snapclass</span>
  <span class="hljs-attr">driver:</span> <span class="hljs-string">driver.longhorn.io</span>
  <span class="hljs-attr">deletionPolicy:</span> <span class="hljs-string">Retain</span>
  <span class="hljs-string">---</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">snapshot.storage.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">VolumeSnapshot</span>
  <span class="hljs-attr">metadata:</span> { <span class="hljs-attr">name:</span> <span class="hljs-string">volumesnapshot-test</span>, <span class="hljs-attr">namespace:</span> <span class="hljs-string">pro</span> }
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">volumeSnapshotClassName:</span> <span class="hljs-string">longhorn-snapclass</span>
    <span class="hljs-attr">source:</span> { <span class="hljs-attr">persistentVolumeClaimName:</span> <span class="hljs-string">pvc-test</span> }
</code></pre>
</li>
</ul>
</li>
<li><p>Bootstrap CNPG from the VolumeSnapshot.</p>
<ul>
<li><p>In the CNPG cluster spec, reference that VolumeSnapshot in the bootstrap section so the new PostgreSQL cluster initializes from it.</p>
</li>
<li><pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">postgresql.cnpg.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Cluster</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">project-postgres-cluster</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">pro</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">instances:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">imageName:</span> <span class="hljs-string">ghcr.io/cloudnative-pg/postgresql:17.5</span>
    <span class="hljs-attr">enableSuperuserAccess:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">storage:</span>
      <span class="hljs-attr">storageClass:</span> <span class="hljs-string">longhorn</span>
      <span class="hljs-attr">size:</span> <span class="hljs-string">100Gi</span>
    <span class="hljs-attr">bootstrap:</span>
      <span class="hljs-attr">recovery:</span>
        <span class="hljs-attr">volumeSnapshots:</span>
          <span class="hljs-attr">storage:</span>
            <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">snapshot.storage.k8s.io</span>
            <span class="hljs-attr">kind:</span> <span class="hljs-string">VolumeSnapshot</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">volumesnapshot-test</span>
</code></pre>
</li>
</ul>
</li>
</ol>
<ul>
<li>Important note: The PostgreSQL image version must exactly match the image version used in the old cluster from which the backup was taken.</li>
</ul>
<h1 id="heading-fainally">Finally</h1>
<p>Wait until all CNPG pods are Ready and the cluster status is Healthy.</p>
<p>Connect and run simple checks (e.g., record counts, integrity queries) to confirm the dataset is correct.</p>
]]></content:encoded></item><item><title><![CDATA[Longhorn — Backup & Restore Volumes]]></title><description><![CDATA[1. Prerequisites
Longhorn installed and healthy (longhorn-system pods ready).
A Backup Target reachable from all Longhorn manager pods:
S3-compatible bucket (with access/secret and optional custom endpoint), or
NFS export (read/write, stable network)...]]></description><link>https://blog.sadegh-sobhi.ir/longhorn-backup-and-restore-volumes</link><guid isPermaLink="true">https://blog.sadegh-sobhi.ir/longhorn-backup-and-restore-volumes</guid><dc:creator><![CDATA[Sadegh Sobhi]]></dc:creator><pubDate>Tue, 04 Nov 2025 10:06:16 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-1-prerequisites">1. Prerequisites</h1>
<ul>
<li><p>Longhorn installed and healthy (longhorn-system pods ready).</p>
</li>
<li><p>A Backup Target reachable from all Longhorn manager pods:</p>
<ul>
<li><p>S3-compatible bucket (with access/secret and optional custom endpoint), or</p>
</li>
<li><p>NFS export (read/write, stable network).</p>
</li>
</ul>
</li>
<li><p>Sufficient free space on the BackupStore.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762245379727/e46a9708-1685-4659-875e-344aefb384d5.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-2set-the-backup-target">2. Set the Backup Target</h1>
<p>UI (recommended)</p>
<ol>
<li><p>Open Longhorn UI → Settings → General.</p>
</li>
<li><p>Set Backup Target (examples):</p>
</li>
</ol>
<ul>
<li><p>S3: s3://my-bucket@us-east-1/longhorn</p>
</li>
<li><p>NFS: nfs://10.0.0.20:/export/longhorn-backups</p>
</li>
</ul>
<ol start="3">
<li>If using S3, set Backup Target Credential Secret (Access Key, Secret, Region, Endpoint if non-AWS).</li>
</ol>
<p>kubectl (advanced)</p>
<ul>
<li>Create a Secret with S3 creds in longhorn-system, then patch Settings (keys vary by version). In most cases the UI is simpler and safer.</li>
</ul>
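<p>A sketch of such a Secret (the key names follow Longhorn's backup-credential convention; every value here is a placeholder):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Secret
metadata:
  name: longhorn-backup-secret    # set this name as the Backup Target Credential Secret
  namespace: longhorn-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "PLACEHOLDER_ACCESS_KEY"
  AWS_SECRET_ACCESS_KEY: "PLACEHOLDER_SECRET_KEY"
  AWS_ENDPOINTS: "https://s3.example.com"   # only for non-AWS S3 endpoints
</code></pre>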
<h1 id="heading-3-create-snapshots-amp-backups">3. Create Snapshots &amp; Backups</h1>
<p>Manual (per volume)</p>
<ol>
<li><p>Longhorn UI → Volumes → click your volume.</p>
</li>
<li><p>Snapshots tab → Create Snapshot (name it).</p>
</li>
<li><p>Backups tab → Create Backup from the snapshot (or directly “Backup”).</p>
</li>
</ol>
<p>Recurring (recommended)</p>
<ul>
<li><p>Per volume → Recurring Jobs:</p>
</li>
<li><ul>
<li>Hourly/Daily Snapshot (e.g., keep last 7)</li>
</ul>
</li>
<li><ul>
<li>Daily/Weekly Backup (e.g., keep last 30)</li>
</ul>
</li>
<li><p>Attach the recurring job(s) to the volume (or set globally and opt-in).</p>
</li>
</ul>
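<p>The same recurring jobs can be declared as Longhorn <code>RecurringJob</code> resources; a sketch (the name, schedule, and retention below are examples, not requirements):</p>
<pre><code class="lang-yaml">apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-backup              # example name
  namespace: longhorn-system
spec:
  cron: "0 2 * * *"               # every day at 02:00
  task: backup                    # or "snapshot"
  retain: 30                      # keep the last 30 backups
  concurrency: 2
  groups:
    - default                     # volumes in the "default" group pick this job up
</code></pre>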
<h1 id="heading-4-verify-backups">4. Verify Backups</h1>
<p>UI</p>
<ul>
<li><p>Volume → Backups tab → you should see backup entries with timestamps and sizes.</p>
</li>
<li><p>Click a backup to view details (labels, CRC, etc.).</p>
</li>
</ul>
<p>Best practice</p>
<ul>
<li>Periodically restore a backup to a scratch/test volume and run a quick read/write check.</li>
</ul>
<h1 id="heading-5-restore-a-backup-new-volume">5. Restore a Backup (New Volume)</h1>
<p>UI</p>
<ol>
<li><p>Longhorn UI → Backups (left menu).</p>
</li>
<li><p>Choose the Backup Volume (it groups backups by original volume).</p>
</li>
<li><p>Pick the desired Backup → Restore.</p>
</li>
<li><p>Set New Volume Name, size (auto from backup), Data Engine, # of replicas, and (optional) Create PV/PVC.</p>
</li>
<li><p>Click OK. Longhorn creates the volume and starts rebuilding replicas.</p>
</li>
<li><p>When the volume is Healthy, Attach it to a node, then mount it via your workload (or use the auto-created PVC).</p>
</li>
</ol>
<p>Notes</p>
<ul>
<li><p>If you created PV/PVC automatically, just point your Pod/Deployment/StatefulSet to that PVC.</p>
</li>
<li><p>File system (ext4/xfs) will match the original backup unless you override.</p>
</li>
</ul>
<h1 id="heading-6-disaster-recovery-cross-cluster-or-failover">6. Disaster Recovery (Cross-cluster or Failover)</h1>
<p>DR Volume approach</p>
<ol>
<li><p>On the target cluster, configure the same Backup Target.</p>
</li>
<li><p>Longhorn UI → Backups → select the source Backup Volume.</p>
</li>
<li><p>Click Create Disaster Recovery Volume (DR Volume).</p>
</li>
<li><p>Longhorn creates a DR volume that periodically pulls incremental backups.</p>
</li>
<li><p>If the source cluster goes down, Activate the DR volume:</p>
</li>
</ol>
<ul>
<li><p>DR Volume → Activate → it becomes a regular volume.</p>
</li>
<li><p>Attach, and use like normal.</p>
</li>
</ul>
<p>When to use</p>
<ul>
<li>For warm standby and faster RTO than restoring from scratch.</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Longhorn]]></title><description><![CDATA[Waht is Longhorn?
Longhorn is a lightweight, reliable and easy-to-use distributed block storage system for Kubernetes.
Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a incubating project of ...]]></description><link>https://blog.sadegh-sobhi.ir/longhorn</link><guid isPermaLink="true">https://blog.sadegh-sobhi.ir/longhorn</guid><dc:creator><![CDATA[Sadegh Sobhi]]></dc:creator><pubDate>Tue, 04 Nov 2025 08:16:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762244176593/94f47675-9fab-4de1-9339-518b0d7fff68.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-waht-is-longhorn">Waht is Longhorn?</h1>
<p>Longhorn is a lightweight, reliable and easy-to-use distributed block storage system for Kubernetes.</p>
<p>Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as an incubating project of the Cloud Native Computing Foundation.</p>
<h1 id="heading-why-choose-longhorn">Why choose Longhorn?</h1>
<p>Simple to deploy &amp; operate: Install via Helm/manifests, clean web UI, great Rancher integration.</p>
<p>Kubernetes-native: Everything is CRDs/CSI; snapshots, backups, restores, and automation are all inside the cluster.</p>
<p>High availability &amp; self-healing: Each volume keeps multiple replicas across nodes; disk/node failures trigger automatic rebuilds.</p>
<p>Built-in backup/DR: Incremental backups to S3/NFS, recurring jobs, Disaster Recovery volumes, and cross-cluster restore.</p>
<p>Infra flexibility &amp; cost: Runs on local disks (SSD/HDD) on almost any hardware (on-prem/edge/ARM64); no vendor lock-in.</p>
<p>Useful features: Thin-provisioning, soft anti-affinity, RWX via NFS provisioner, fast local snapshots, online volume expansion.</p>
<p>Observability: Health checks, metrics, and a UI that shows data paths and replicas.</p>
<h1 id="heading-when-is-it-a-good-fit">When is it a good fit?</h1>
<p>Small to mid-size clusters, lean DevOps teams, and edge/branch sites.</p>
<p>General stateful workloads: app services, CI runners (Jenkins/GitLab), MinIO, light analytics—anything not ultra-latency-sensitive.</p>
<p>Straightforward, low-cost DR with S3/NFS backups and quick cross-cluster restore.</p>
<p>When Ceph feels too heavy but you still need HA block storage.</p>
<p>Limitations / performance notes</p>
<p>Network/CPU overhead from block-level replication; not ideal for ultra-IOPS, low-latency OLTP databases.</p>
<p>Throughput/latency is typically below direct local PVs or a well-tuned Ceph for very heavy workloads.</p>
<p>Needs adequate bandwidth and capacity between nodes (10 GbE recommended for serious loads).</p>
<p>Prefer SSDs/NVMe; pure HDD setups rebuild slowly and add latency.</p>
<h1 id="heading-best-practices-quick-hits">Best practices (quick hits)</h1>
<p>Run ≥3 nodes for true HA; set 2–3 replicas per volume.</p>
<p>Separate OS and data disks; use Maintenance Mode before draining a node.</p>
<p>Configure an S3/NFS BackupStore and recurring snapshot/backup jobs; test restores regularly.</p>
<p>Provide multiple StorageClasses (e.g., fast-ssd with replica=2; standard with replica=3).</p>
<p>Monitor node/filesystem health, free space, and rebuild progress with alerts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762240909633/d196f1f6-f2fe-49d5-8da7-a6da0f3eb829.jpeg" alt class="image--center mx-auto" /></p>
<h1 id="heading-installation-prerequisites-minimal-yet-sufficient">Installation prerequisites (minimal yet sufficient)</h1>
<p>Kubernetes v1.25+ with ≥3 worker nodes for true HA.</p>
<p>Nodes: x86_64 (SSE4.2) or ARM64; ≥4 GiB RAM (8 GiB+ recommended), up-to-date Linux.</p>
<p>Dedicated data disk for Longhorn (prefer SSD/NVMe); avoid placing data on the OS/root disk.</p>
<p>Stable, low-latency network between nodes; firewalls must not block intra-cluster storage traffic.</p>
<p>Container runtime: containerd or Docker (compatible versions).</p>
<p>iSCSI (classic data engine): install open-iscsi and keep iscsid running on every node.</p>
<p>NVMe/TCP (optional newer engine): ensure kernel modules nvme, nvme-core, nvme-tcp are available and loaded at boot.</p>
<p>Notes</p>
<p>multipathd can interfere with attaches—disable it or blacklist Longhorn devices.</p>
<p>Keep swap off (or properly configured) on Kubernetes nodes.</p>
<p>Use NTP/chrony for clock sync across nodes.</p>
<h1 id="heading-quick-install-ubuntudebian">Quick install (Ubuntu/Debian)</h1>
<pre><code class="lang-bash"><span class="hljs-comment"># On every node</span>
sudo apt-get update
sudo apt-get install -y open-iscsi nfs-common cryptsetup
sudo systemctl <span class="hljs-built_in">enable</span> --now iscsid

<span class="hljs-comment"># (Optional) NVMe/TCP</span>
<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"nvme\nnvme-core\nnvme-tcp"</span> | sudo tee /etc/modules-load.d/nvme-tcp.conf
sudo modprobe nvme nvme-core nvme-tcp
</code></pre>
<h1 id="heading-quick-install-rhelrockyalma">Quick install (RHEL/Rocky/Alma)</h1>
<pre><code class="lang-bash"><span class="hljs-comment"># On every node</span>
sudo dnf install -y iscsi-initiator-utils nfs-utils cryptsetup
sudo systemctl <span class="hljs-built_in">enable</span> --now iscsid

<span class="hljs-comment"># (Optional) NVMe/TCP</span>
<span class="hljs-built_in">echo</span> -e <span class="hljs-string">"nvme\nnvme-core\nnvme-tcp"</span> | sudo tee /etc/modules-load.d/nvme-tcp.conf
sudo modprobe nvme nvme-core nvme-tcp
</code></pre>
<h1 id="heading-deploy-longhorn-with-helm">Deploy Longhorn with Helm</h1>
<pre><code class="lang-bash">helm repo add longhorn https://charts.longhorn.io
helm repo update

kubectl create namespace longhorn-system

helm install longhorn longhorn/longhorn -n longhorn-system

<span class="hljs-comment"># Access the UI (expose the longhorn-frontend service, e.g. via port-forward, NodePort, or Ingress)</span>
kubectl -n longhorn-system get svc | grep longhorn-frontend
</code></pre>
<h1 id="heading-example-storageclasses">Example StorageClasses</h1>
<pre><code class="lang-yaml"><span class="hljs-comment"># fast-ssd (replica=2) — good for common workloads with moderate risk tolerance</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">storage.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">StorageClass</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">fast-ssd</span>
<span class="hljs-attr">provisioner:</span> <span class="hljs-string">driver.longhorn.io</span>
<span class="hljs-attr">parameters:</span>
  <span class="hljs-attr">numberOfReplicas:</span> <span class="hljs-string">"2"</span>
  <span class="hljs-attr">staleReplicaTimeout:</span> <span class="hljs-string">"30"</span>
  <span class="hljs-attr">fsType:</span> <span class="hljs-string">"ext4"</span>
<span class="hljs-attr">reclaimPolicy:</span> <span class="hljs-string">Delete</span>
<span class="hljs-attr">allowVolumeExpansion:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">volumeBindingMode:</span> <span class="hljs-string">WaitForFirstConsumer</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># standard-longhorn (replica=3) — higher resiliency for more critical data</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">storage.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">StorageClass</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">standard-longhorn</span>
<span class="hljs-attr">provisioner:</span> <span class="hljs-string">driver.longhorn.io</span>
<span class="hljs-attr">parameters:</span>
  <span class="hljs-attr">numberOfReplicas:</span> <span class="hljs-string">"3"</span>
  <span class="hljs-attr">staleReplicaTimeout:</span> <span class="hljs-string">"30"</span>
  <span class="hljs-attr">fsType:</span> <span class="hljs-string">"xfs"</span>
<span class="hljs-attr">reclaimPolicy:</span> <span class="hljs-string">Delete</span>
<span class="hljs-attr">allowVolumeExpansion:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">volumeBindingMode:</span> <span class="hljs-string">WaitForFirstConsumer</span>
</code></pre>
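<p>A workload then requests storage simply by naming one of these classes; for example (the claim name and size here are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-fast                 # illustrative name
spec:
  storageClassName: fast-ssd      # picks up replica=2, ext4, online expansion
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 20Gi
</code></pre>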
<h1 id="heading-backupstore-configuration">BackupStore configuration</h1>
<p>S3-compatible: Longhorn UI → Settings → Backup Target, e.g. s3://my-bucket@us-east-1/longhorn. Provide a Backup Target Credentials Secret with access key, secret, and custom endpoint if not AWS.</p>
<p>NFS: e.g. nfs://10.0.0.20:/export/longhorn-backups (must be reachable from Longhorn manager pods on all nodes).</p>
<h1 id="heading-recurring-jobs">Recurring jobs</h1>
<p>Per-volume, schedule hourly/daily snapshots and daily/weekly backups; keep sensible retention (e.g., 7/30 versions).</p>
<h1 id="heading-test-restore-recommended-routine">Test restore (recommended routine)</h1>
<p>Restore a recent backup into a new volume.</p>
<p>Bind to a temporary PVC and run read/write checks.</p>
<p>Record timings and results in your runbook.</p>
<h1 id="heading-operations-amp-maintenance">Operations &amp; maintenance</h1>
<p>Volume expansion: supported online via UI or by increasing spec.resources.requests.storage on the PVC.</p>
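<p>As a sketch, the expansion itself is just a change to the PVC spec (sizes are illustrative; the StorageClass must have <code>allowVolumeExpansion: true</code>):</p>
<pre><code class="lang-yaml"># kubectl edit pvc &lt;name&gt; and raise the request; Longhorn expands online
spec:
  resources:
    requests:
      storage: 40Gi               # e.g. raised from 20Gi
</code></pre>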
<p>Node offboarding: enable Maintenance Mode, drain, return node, then monitor rebuild to completion.</p>
<p>Thin provisioning: default is enabled—watch actual disk consumption and alert around 80–85% capacity.</p>
<p>Monitoring &amp; alerts</p>
<p>Track read/write latency, IOPS, rebuild duration/progress, free space, and replica health.</p>
<p>Use Prometheus/Grafana (or your stack) and alert on disk pressure and abnormal latency.</p>
<h1 id="heading-troubleshooting-common">Troubleshooting (common)</h1>
<p>Attach/Mount failures: verify iscsid (or NVMe/TCP) is running, multipathd disabled/blacklisted, firewalls open, and K8s/Longhorn versions compatible.</p>
<p>Slow rebuilds: network bottlenecks or HDD-only pools—migrate to SSD/NVMe and improve bandwidth.</p>
<p>High latency: too many replicas, pod contention on the same node, or disk contention—tune replica counts and placement.</p>
<h1 id="heading-nvmetcp-vs-iscsi-quick-note">NVMe/TCP vs iSCSI — quick note</h1>
<p>iSCSI (classic, mature): wide compatibility, simple setup on most distros.</p>
<p>NVMe/TCP (newer): potential for lower latency &amp; better throughput; requires newer kernels and loaded modules.</p>
<h1 id="heading-security-amp-compliance">Security &amp; compliance</h1>
<p>Keep RBAC enabled; scope Longhorn access to the longhorn-system namespace.</p>
<p>Pod Security: add the minimal required permissions (Privileged/HostPath) per Longhorn docs.</p>
<h1 id="heading-upgrades-safe-practice">Upgrades — safe practice</h1>
<p>Validate backups/DR and perform a test restore before upgrading.</p>
<p>Choose a chart version compatible with your Longhorn target; upgrade in stages and watch volume health/latency.</p>
]]></content:encoded></item><item><title><![CDATA[Install K3S]]></title><description><![CDATA[Hi Mahdi, it's me, hi]]></description><link>https://blog.sadegh-sobhi.ir/install-k3s</link><guid isPermaLink="true">https://blog.sadegh-sobhi.ir/install-k3s</guid><dc:creator><![CDATA[Sadegh Sobhi]]></dc:creator><pubDate>Sun, 19 Oct 2025 11:41:12 GMT</pubDate><content:encoded><![CDATA[<p>Hi Mahdi, it's me, hi</p>
]]></content:encoded></item></channel></rss>