Different updates
content/2.general/5.zfs.md (new file, +76 lines)
@@ -0,0 +1,76 @@
---
navigation: true
title: ZFS
main:
  fluid: false
---
:ellipsis{left=0px width=40rem top=10rem blur=140px}

# ZFS

::alert{type="info"}
🎯 __Objectives:__
- Understand what ZFS is and why it's useful
::

ZFS is widely used in the world of servers, NAS systems (like FreeNAS / TrueNAS), virtualization, and even by tech-savvy individuals who want reliable storage. It is both a _file system_ (like NTFS for Windows, EXT4, FAT32, etc.) and a __volume manager__ (similar to LVM).

To put it simply:
- A **volume manager** organizes physical storage (like one or more hard drives).
- A **file system** organizes how data blocks are written, read, and deleted within those volumes.

ZFS goes far beyond traditional file systems in terms of performance and features.
Here’s what we’re most interested in:
- Its __snapshot management__ features, allowing you to quickly roll back in case of issues.
- Its support for disk groupings and [__RAID-like structures__](/global/RAID) (Z-Mirror, RAIDZ1, RAIDZ2, RAIDZ3).
- Its __automatic recovery of corrupted data__ (through scrubbing).
- Its performance, enhanced by RAM caching (ZFS ARC).
- Its robust error notifications and monitoring.

## Structure

|
||||||
|
|
||||||
|
ZFS has a unique structure:
|
||||||
|
|
||||||
|
- **vdev** (virtual device): a group of physical or virtual disks.
|
||||||
|
- **zpool**: a collection of vdevs configured as a single storage pool. A zpool can contain multiple vdevs, but a vdev belongs to only one zpool.
|
||||||
|
- **dataset**: a logical data container within a zpool. Each dataset can have its own settings (compression, quotas, permissions, etc.).
|
||||||
|
|
||||||
|
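To make the vocabulary above concrete, here is a minimal sketch of how a pool is assembled on the command line. The pool name `tank` and the disk names are placeholders; adapt them to your own machine:

```bash
# Create a zpool named "tank" from a single RAIDZ1 vdev made of three disks
# (disk names are examples; check yours with `lsblk` first)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# Show the pool, its vdevs and the disks they contain
zpool status tank

# Create a dataset inside the pool (mounted at /tank/media by default)
zfs create tank/media
```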
There are several dataset types:
- **file system**: a standard ZFS filesystem, mounted like a regular directory; it has no fixed size unless you set a quota.
- **zvol**: a "virtual disk" with a defined size, which you can format and partition as if it were a physical disk.
- **snapshot**: a frozen-in-time version of another dataset. Snapshots can be created manually or through backup tools. They can be mounted to browse data as it was at the snapshot time.
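Here is a short, illustrative example of each dataset type (names and sizes are placeholders):

```bash
# A regular file system dataset, with an optional quota
zfs create tank/documents
zfs set quota=100G tank/documents

# A zvol: a 50 GiB block device, exposed under /dev/zvol/tank/vm-disk
zfs create -V 50G tank/vm-disk

# A snapshot of the documents dataset
zfs snapshot tank/documents@before-changes

# List everything the pool contains, whatever the type
zfs list -t all
```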
## Why ZFS over others?

### Data Integrity

ZFS continuously checks that your stored data hasn't become corrupted. Every block of data is associated with a checksum, allowing ZFS to detect even the smallest alteration. If corruption is found and a healthy copy exists elsewhere, ZFS can repair the data automatically.
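This verification can also be launched manually with a scrub, which reads every block in the pool and repairs what it can. A minimal sketch, still assuming a pool named `tank`:

```bash
# Read and verify every block in the pool, repairing corrupted copies when possible
zpool scrub tank

# Check the progress of the scrub and whether any errors were detected
zpool status -v tank
```

Many NAS systems (TrueNAS, for example) schedule this kind of scrub automatically.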
### Built-in RAID

ZFS includes its own volume management system (vdevs). You can build a zpool using multiple disks—similar to traditional [RAID](/global/RAID) setups—but with more flexibility. For example:
- **Z-mirror** → equivalent to RAID 1
- **RAIDZ1** → equivalent to RAID 5 (tolerates 1 disk failure)
- **RAIDZ2** → equivalent to RAID 6 (tolerates 2 disk failures)
- **RAIDZ3** → tolerates up to 3 disk failures

ZFS handles all this natively—no external RAID software needed.
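For reference, here is roughly what those layouts look like at pool-creation time. These are alternative examples with placeholder disk names, not commands to run as-is:

```bash
# Mirror of two disks (RAID 1 equivalent)
zpool create tank mirror /dev/sda /dev/sdb

# RAIDZ1 over four disks (RAID 5 equivalent, survives one disk failure)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# RAIDZ2 over six disks (RAID 6 equivalent, survives two disk failures)
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```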
::alert{type="info"}
:::list{type="info"}
- Check out the [article on RAID](/global/RAID) to find the right solution for your needs.
:::
::
### Snapshots and Clones

ZFS allows you to create snapshots—instantaneous images of a dataset's state. Snapshots take up minimal space and can be scheduled frequently. You can also create clones: writable copies of snapshots.
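In day-to-day use, this looks roughly like the following (dataset and snapshot names are placeholders):

```bash
# Take a snapshot before a risky change
zfs snapshot tank/documents@before-upgrade

# List existing snapshots
zfs list -t snapshot

# Roll the dataset back to the snapshot (changes made since are discarded)
zfs rollback tank/documents@before-upgrade

# Or create a writable clone of the snapshot to work on a copy
zfs clone tank/documents@before-upgrade tank/documents-test
```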
### Compression and Deduplication

ZFS can compress data on the fly (transparently to the user), saving disk space. It also supports deduplication (removing duplicate data), though this feature requires a lot of memory and is not recommended for all use cases.
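Compression is a per-dataset property. A minimal sketch, with a placeholder dataset name; `lz4` is a common, low-overhead choice:

```bash
# Enable LZ4 compression (applies to data written from now on)
zfs set compression=lz4 tank/documents

# See how much space compression is actually saving
zfs get compressratio tank/documents

# Deduplication exists too, but is very memory-hungry; check before enabling it
zfs get dedup tank/documents
```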
---

Now you know why ZFS is *the* file system to deploy on your NAS.
@@ -49,11 +49,20 @@ root
│   └── overseerr
│       └── config
└── media
    ├── downloads
    ├── tvseries
    ├── movies
    └── library
```

::alert{type="warning"}
:::list{type="warning"}
- __Warning:__ Make sure to follow this file structure carefully, especially the `media` folder. This folder must be mounted **exactly the same way** in both the _Qbittorrent_ compose file (`/your/path/media:/media`) and the _arr_ applications.
If not, the _arr_ apps may not recognize the path provided by Qbittorrent and will fail to create _hardlinks_.
Without hardlinks, the _arr_ apps will copy the files instead—**doubling the space used** on your storage.
:::
::
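To illustrate the point above, the `media` mapping should read identically in both compose files, along these lines (service names and the host path are only examples):

```yaml
# Hypothetical excerpt from the Qbittorrent stack
services:
  qbittorrent:
    volumes:
      - /your/path/media:/media
---
# Hypothetical excerpt from an *arr stack (Sonarr shown as an example)
services:
  sonarr:
    volumes:
      - /your/path/media:/media
```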
Open Docker and your `plex` stack. Modify the compose file as follows:
```yaml
---
@@ -7,6 +7,8 @@ main:
---
:ellipsis{left=0px width=40rem top=10rem blur=140px}

![Stockeex](/img/stockeex/stockeex-raid.svg)

::terminal{style="margin-top:80px;"}
---
content:
public/img/global/zfs.svg (new file, +4 lines, 143 KiB; diff suppressed because one or more lines are too long)
public/img/stockeex/stockeex-raid.svg (new file, +4 lines, 165 KiB; diff suppressed because one or more lines are too long)