Komga Full Text Search
Komga leverages Full Text Search (FTS hereafter) to provide relevant results from your libraries. This does not search inside the documents themselves; it searches the database fields holding the filenames and metadata of your books.
FTS will order results by relevance
FTS matches on complete words: bat will not match Batman
The order of words is not important: batman robin will match Robin & Batman
You can search by prefix by adding the * character: bat* will match Batman
You can search books by ISBN
You can search series by publisher using the publisher:term syntax: publisher:dc will match all series published by DC Comics
You can use the AND, OR and NOT operators (UPPERCASE) to build complex queries:
batman NOT publisher:dc
will match all Batman series not published by DC Comics
batman OR robin will match Batman or Robin
batman AND (robin OR superman) will match Superman & Batman and Batman & Robin
You can search by initial token using the ^ character: batman ^superman will match Superman/Batman but not Batman/Superman
You can search for sequence of terms by enclosing them in the " character: "three joker" will match Batman: Three Jokers but not The Joker War: Part Three
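Putting these rules together, here are a few sample queries (the titles are hypothetical, just to illustrate the syntax):

```
robin batman                matches "Batman & Robin" (word order ignored)
bat* rob*                   prefix search on both terms
"killing joke"              exact sequence of terms
^superman batman            Superman must be the first token of the title
batman NOT publisher:dc     Batman series not published by DC Comics
```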
ZFS
ZFS protects the data on disk against silent data corruption caused by bit rot, current spikes, bugs in disk firmware, phantom writes, misdirected reads/writes, memory parity errors between the array and server memory, driver errors and accidental overwrites.
ZFS ensures that data is always consistent on the disk using a number of techniques, including copy-on-write. What this means is that when data is changed it is not overwritten - it is always written to a new block and checksummed before pointers to the data are changed. The old data may be retained, creating snapshots of the data through time as changes are made. File writes using ZFS are transactional - either everything or nothing is written to disk. View changelogs on zfsonlinux.org
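The copy-on-write mechanism described above can be sketched with ordinary files, using a symlink as a stand-in for a block pointer (a toy illustration only, not how ZFS is implemented internally):

```shell
# Toy copy-on-write: data is never overwritten in place. A change is written
# to a NEW block, checksummed, and only then is the "pointer" flipped to it.
# The old block survives, which is exactly what a snapshot preserves.
demo=$(mktemp -d)
printf 'version 1' > "$demo/block_a"
ln -s block_a "$demo/current"                     # pointer to the live block

printf 'version 2' > "$demo/block_b"              # change goes to a new block
sha256sum "$demo/block_b" > "$demo/block_b.sum"   # checksum before repointing
ln -sf block_b "$demo/current"                    # flip the pointer

cat "$demo/current"                               # version 2 (live data)
cat "$demo/block_a"                               # version 1 (old data retained)
```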
This covers putting ZFS on Linux Mint for the home user or small business, aimed at the beginner.
If you want to run a free embedded version with a WebGUI off a server booting from a USB drive, use XigmaNAS (aka NAS4Free), which runs on FreeBSD.
If you want to run a free community edition with a dark-mode WebGUI off a dedicated server, use TrueNAS (aka FreeNAS), which also runs on FreeBSD, although they're switching to Debian 11 for TrueNAS SCALE to support Docker containers.
Parity files: say you downloaded a multi-part archive (.rar, .r01, .r02, .r03, .r04, .r05, .r06, .r07, .r08, .r09, .r10) and wanted to extract it to get one file. If .r03 was corrupt or missing, you were stuck. That's why parity files (.par, .p01, .p02, .p03) exist: with 10 to 20% parity you could usually repair or reconstruct a corrupt or missing .r03.
RAID 5 and RAID 6: in traditional RAID, when any disk failed it took ages for the array to rebuild. ZFS uses RAIDZ instead, and since ZFS 2.1.0 (July 2021) also dRAID. To see how much space will be used for parity, use a ZFS RAIDZ calculator.
RAIDZ1 with 5 disks is 20% parity, with 4 disks 25% parity, and with 3 disks 33% parity.
raidz1 (1-disk parity, similar to RAID 5, one disk can fail)
raidz2 (2-disk parity, similar to RAID 6, two disks can fail)
raidz3 (3-disk parity, no RAID analog, three disks can fail)
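Those percentages are simply parity disks divided by total disks. A quick sketch in plain shell (the real on-disk overhead also depends on record size and padding, which is why the calculator exists):

```shell
# Rough parity overhead of a RAIDZ vdev: parity_disks / total_disks.
# raidz1 has 1 parity disk, raidz2 has 2, raidz3 has 3.
raidz_parity_pct() {
    parity=$1
    total=$2
    awk -v p="$parity" -v t="$total" 'BEGIN { printf "%.0f\n", p / t * 100 }'
}

raidz_parity_pct 1 5   # raidz1, 5 disks -> 20
raidz_parity_pct 1 4   # raidz1, 4 disks -> 25
raidz_parity_pct 1 3   # raidz1, 3 disks -> 33
```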
ZFS does away with any RAID controller and is much easier to manage.
SSD drives can be used as cache drives or log drives (for home or small business this is overkill). Have 4GB+ of RAM though, and hopefully 8GB+. The ARC (adaptive replacement cache) is a block-level cache in system memory that speeds up reads of frequently accessed data. An SSD cache drive provides L2ARC (level 2 ARC), which extends the ARC onto the SSD; since OpenZFS 2.0.0 (November 2020) the L2ARC is persistent, meaning it survives a reboot so it doesn't have to warm up. An SSD log drive holds the ZIL (ZFS intent log), which acts as a logging mechanism to store synchronous writes until they are safely written to the main data structure on the storage pool. An SSD log drive will improve synchronous write performance.
zfs set compression=on zfsbaby
LZ4 has about a 2.0 compression ratio and is super fast. compression=zstd (ZStandard) was introduced in OpenZFS 2.0.0 (November 2020) and has close to a 3.0 compression ratio, albeit a tad slower. The Ratio vs Speed comparison at 4.0GHz is from 2017 and the 5.0GHz one is from 2018.
OpenZFS 2.1.0 was released in July 2021, but I'm still waiting for the .deb file to be released, which will have ZSTD for compression. ZFS also checks each block as it compresses: if a block won't compress to 7/8 of its original size or smaller, it is stored uncompressed.
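That 7/8 cut-off means a block must save at least 12.5% to be worth storing compressed. The decision can be sketched as a simple size comparison (plain shell with made-up sizes; not actual ZFS code):

```shell
# ZFS keeps a block compressed only if compressed_size <= 7/8 of the
# original size (at least 12.5% savings); otherwise it stores it raw.
store_compressed() {
    orig=$1
    comp=$2
    # integer math: comp*8 <= orig*7  is the same as  comp <= orig*7/8
    if [ $((comp * 8)) -le $((orig * 7)) ]; then
        echo "compressed"
    else
        echo "uncompressed"
    fi
}

store_compressed 131072 100000   # saves ~24% -> compressed
store_compressed 131072 120000   # saves ~8%  -> uncompressed
```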
To test out what kind of compression ZSTD offers, get the free multiplatform PeaZip (Windows, macOS, Linux).
The compressratio property (1.00x, or higher if compressed) will show you how much your data compressed. It's OK to enable compression after you've copied data, but only new data copied to the pool will be compressed.
checksum=on | off | fletcher2 | fletcher4 | sha256 | noparity | sha512 | skein | edonr. The default is fletcher4 (which is what on maps to), since it's super fast relative to the other checksum methods. If using deduplication you must use sha256.
copies=1 | 2 | 3. I would never use this: copies=2 creates two copies of each file, and copies=3 can't be used with encryption. There's already redundancy from the parity of a mirror or RAIDZ1/RAIDZ2/RAIDZ3.
Deduplication: never use it unless you have a million-dollar setup, since it's very RAM-intensive. It checks whether an identical copy of the data already exists on disk so it doesn't need to store it again, just reference the existing copy.
encryption=on (default is off) uses aes-256-ccm and must be set at creation time, e.g. zfs create -o encryption=on; it can't be enabled later with zfs set. Sending snapshots to remote locations without the encryption key works.
Encryption: enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused / groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).
Key rotation is managed by ZFS. Changing the user's key (e.g. a passphrase) does not require re-encrypting the entire dataset. Datasets can be scrubbed, resilvered, renamed, and deleted without the encryption keys being loaded (see the zfs load-key subcommand for more info on key loading).
Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root.
Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be.
userquota@user=size|none limits how much space the given user can consume, for example: zfs set userquota@alice=50G zfsbaby/data.
sharesmb=on | off | opts
Controls whether the file system is shared using Samba USERSHARES, and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE. Because SMB shares require a resource name, a unique resource name is constructed from the dataset name.
If the sharesmb property is set to off, the file systems are unshared.
The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (meaning Samba must be able to authenticate a real user via system passwd/shadow, LDAP or smbpasswd) by default.
zfs set mountpoint=/data zfsbaby/data
zfs set sharesmb=on zfsbaby/data
zfs share zfsbaby/data
Verify it's working, either remotely or locally:
smbclient -U guest -N -L localhost
Stop the Samba CIFS share:
zfs unshare tank/data
Disable the share forever:
zfs set sharesmb=off tank/data
View all properties of the pool:
zfs get all
NAME     PROPERTY       VALUE                  SOURCE
zfsbaby  type           filesystem             -
zfsbaby  creation       Thu Sep  9 23:42 2021  -
zfsbaby  used           183G                   -
zfsbaby  available      6.86T                  -
zfsbaby  referenced     183G                   -
zfsbaby  compressratio  1.00x                  -
zfsbaby  mounted        yes                    -
zfsbaby  quota          none                   default
zfsbaby  reservation    none                   default
zfsbaby  recordsize     128K                   default
zfsbaby  mountpoint     /zfsbaby               default
zfsbaby  sharenfs       off                    default
zfsbaby  checksum       on                     default
zfsbaby  compression    lz4                    local
zpool status -c lsblk,media
  pool: zfsbaby
 state: ONLINE
  scan: resilvered 828K in 0 days 00:00:01 with 0 errors on Fri Sep 10 02:59:22 2021
config:

        NAME        STATE     READ WRITE CKSUM  size  vendor  model             media
        zfsbaby     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0  3.7T  WD      My_Passport_25E2  hdd
            sdc     ONLINE       0     0     0  3.7T  WD      My_Passport_25E2  hdd
            sdd     ONLINE       0     0     0  3.7T  WD      My_Passport_2626  hdd

errors: No known data errors
Scrubbing
ZFS can be scheduled to perform a "scrub" on all the data in a storage pool, checking each piece of data with its corresponding checksum to verify its integrity, detect any silent data corruption and to correct any errors where possible.
When the data is stored in a redundant fashion - in a mirrored, RAIDZ or dRAID-type array - it can be self-healed automatically, and any data corruption found is logged. Scrubbing is given low I/O priority so that it has a minimal effect on system performance, and it can operate while the storage pool is in use and online.
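The core idea of a scrub, re-reading data and comparing it against stored checksums, can be mimicked at the file level with sha256sum (illustrative only; ZFS does this per block and can then repair from mirror or parity redundancy):

```shell
# Toy "scrub": hash every file into a manifest, then later re-verify each
# one against it, the way a scrub compares blocks to their checksums.
pool=$(mktemp -d)
printf 'good data' > "$pool/f1"
printf 'also good' > "$pool/f2"
( cd "$pool" && sha256sum f1 f2 > manifest )

printf 'bit rot!!' > "$pool/f2"    # simulate silent corruption

# a real scrub would now repair f2 from redundancy and log the error;
# here the mismatch is just reported
( cd "$pool" && sha256sum -c manifest 2>/dev/null; echo "scrub exit=$?" )
```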
Snapshots
An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are created extremely quickly, since all the data composing the snapshot is already stored. They are also space-efficient, since any unchanged data is shared between the file system and its snapshots. The first snapshot consumes no additional disk space within the pool. As data within the active dataset changes, the snapshot consumes disk space by continuing to reference the old data; as a result, the snapshot prevents that data from being freed back to the pool.
Create a snapshot:
zfs snapshot zfsbaby/data@2022-09-11
Destroy a snapshot:
zfs destroy zfsbaby/data@2022-09-11
Clones
Writeable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is possible due to the copy-on-write design.
Sending & Receiving Snapshots
Snapshots of ZFS file systems and volumes can be sent to remote hosts over the network. This data stream can be an entire file system or volume, or it can be the changes since it was last sent. When sending only the changes, the stream size depends on the number of blocks changed between the snapshots. This provides a very efficient strategy for synchronizing backups.
Create an initial snapshot:
zfs snapshot zfsbaby/data@initial
Send the initial snapshot to another local pool, named ''zfskid'', calling the dataset ''storage'':
zfs send zfsbaby/data@initial | zfs recv -F zfskid/storage
Send it to a remote pool, named ''zfsadult'', at the remote side:
zfs send zfsbaby/data@initial | ssh remotehost zfs recv -F zfsadult/data
After using ''zfsbaby/data'' for a week, create another snapshot:
zfs snapshot zfsbaby/data@2022-09-18T22-30
Incrementally send the new state to the remote:
zfs send -i initial zfsbaby/data@2022-09-18T22-30 | ssh remotehost zfs recv -F zfsbaby/data
Add a spare (hot-swap drive); if a disk fails, ZED will tell it to automatically do a sequential resilver:
zpool add -f zfsbaby spare sdg
Add an additional disk to the pool:
zpool add -f zfsbaby sdg
Replace a corrupted disk (sdg = corrupted, sdh = new):
zpool replace zfsbaby sdg sdh
Destroy a pool:
zpool destroy zfsbaby
View input/output stats every 11 seconds (Ctrl-C to stop):
zpool iostat 11
Move a pool to a new system: first export it, then import it on the new system:
zpool export zfsbaby
zpool import zfsbaby
Create a new pool using RAIDZ1, using -f (force) if there's data on the disks:
zpool create -f zfsbaby raidz1 sdb sdc sdd
Create a new pool using RAIDZ2:
zpool create -f zfsbaby raidz2 sdb sdc sdd sde sdf
Remove line breaks, line endings and paragraph marks automatically
If you copy and paste text from a PDF you always get unwanted line breaks and the text doesn't flow.
The best way is to use the online website https://removelinebreaks.net/
as it can convert paragraphs to Double Line.
This one can do it too:
https://anytexteditor.com/remove-line-breaks
But sometimes you need to choose Base number of chars (x) Manual 89 for Double Line to work.
On Linux in LibreOffice Writer
— View: check Formatting Marks to be able to view the paragraph marks
— Edit / Find & Replace: check Regular expressions, then Find: $ and Replace: [space]
But you still need to put back all the line breaks and double lines, so it's not a full solution. I finally created a macro in LibreOffice to remove line breaks: https://t.iss.one/geektips/270
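If you're already in a Linux terminal, the same unwrapping can be sketched with awk: blank lines are kept as paragraph separators while single line breaks become spaces (a rough equivalent of the websites above, not the macro from the link):

```shell
# Unwrap hard-wrapped text: join lines within a paragraph with spaces,
# keep blank lines as paragraph separators (awk "paragraph mode", RS="").
unwrap() {
    awk 'BEGIN { RS=""; ORS="\n\n" } { gsub(/\n/, " "); print }'
}

printf 'line one\nline two\n\nnext para\n' | unwrap
# prints "line one line two" and "next para" as two separate paragraphs
```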
Firefox: I only use a few extensions.
For dark mode on websites I use
Dark Background and Light Text
and just press F2 to toggle light/dark mode.
Global Preferences
Default foreground color #dcdccc
Default background color #333333
Default link color #ad7fa8
Default visited link color #ffafff
Default active link color #ff0000
Default selection color #d3d7cf
uBlock Origin
which is an ad blocker that can also block any other element on a website you wish to block out.
DownThemAll!
which lets you download all the images or MP3s on a web page.
Install them via Tools | Add-ons & Themes.
Record video from your iPhone or iPad
Compile and install RPiPlay, then start it in a terminal by typing rpiplay
https://github.com/FD-/RPiPlay
Use SimpleScreenRecorder and make sure PulseAudio is checked if you wish to do a voiceover for your video.
https://www.maartenbaert.be/simplescreenrecorder/
Use x264 and not x265 while recording to avoid dropped frames. You will want to re-encode with x265 afterwards anyway to massively reduce the file size. For audio, the Vorbis codec seemed best at 64 kbps.
Sometimes when trying to capture game footage, if the iPad screen barely changes but you made a move in the game, you have to move the screen a tad for it to detect motion. It doesn't happen all the time, but it does happen.