m4b-tool is multi-platform (Windows, Mac, Linux). Setting it up is for advanced users, but once it's set up you're good to go; each time you only need to change the directory location, filename, title, and author.
m4b-tool (free) is better than AudioBook Builder as it doesn't have the iPod 32-bit limitation, so the created audiobook can be as big as necessary. Also it can use threads so it's much faster. Plus it's free.
GitHub
GitHub - sandreas/m4b-tool: m4b-tool is a command line utility to merge, split and chapterize audiobook files such as mp3, ogg…
m4b-tool is a command line utility to merge, split and chapterize audiobook files such as mp3, ogg, flac, m4a or m4b - sandreas/m4b-tool
Install it according to the OS instructions, then when done don't forget the final step (scroll down) on the m4b-tool page and install the latest beta release.
If you are sure, all dependencies are installed, the next step is to download the latest release of m4b-tool from https://github.com/sandreas/m4b-tool/releases
Depending on the operating system, you can rename m4b-tool.phar to m4b-tool and run m4b-tool --version directly from the command line. If you are not sure, you can always use the command php m4b-tool.phar --version to check if the installation was successful. This should work on every system.
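The rename-and-verify steps above can be sketched as follows; the install path is an assumption, adjust to wherever you downloaded the release:

```shell
# Paths here are assumptions - adjust to your download location.
php m4b-tool.phar --version                 # works on every system with PHP installed
sudo mv m4b-tool.phar /usr/local/bin/m4b-tool   # rename and put it on the PATH
sudo chmod +x /usr/local/bin/m4b-tool       # make it executable
m4b-tool --version                          # should now work directly
```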
On Linux Mint 20.2 Uma and Ubuntu 20.04 I had to install these two packages, which solved the following error:
merging 4 files into test-tmpfiles/tmp_test.m4b, this can take a while Could not detect length for file 1-finished.m4b, output 'sh: 1: exec: mp4info: not found ' does not contain a valid length value
mp4v2-utils_2.0.0~dfsg0-6_amd64.deb
libmp4v2-2_2.0.0~dfsg0-6_amd64.deb
sudo dpkg -i mp4v2-utils_2.0.0~dfsg0-6_amd64.deb libmp4v2-2_2.0.0~dfsg0-6_amd64.deb
GitHub
Releases · sandreas/m4b-tool
m4b-tool is a command line utility to merge, split and chapterize audiobook files such as mp3, ogg, flac, m4a or m4b - sandreas/m4b-tool
m4b-tool list (list commands)
m4b-tool merge --help
Here is the command line I use in Terminal to create a single m4b audiobook with chapters from mp3 files in a directory. The mp3 filenames will be how the chapters are shown in the m4b audiobook. Put cover.jpg in the same directory.
m4b-tool merge -v --artist "God" --name "Bible King James Version" --use-filenames-as-chapters --no-chapter-reindexing --audio-bitrate 32k --audio-samplerate 22050 --audio-codec aac --audio-profile aac_he_v2 --jobs 4 "/home/mint/Music/bible" -o "Bible.m4b"

-v = verbose
--artist "Seneca" = author of audiobook for metadata. Title and Length metadata will be automatically included.
--name "Letters from a Stoic" = title of the audiobook for metadata
--use-filenames-as-chapters = name your files as you want the chapters to be named
--no-chapter-reindexing = forces it to use chapter names especially on large audiobooks
--audio-bitrate 32k = 32kbps seems to be good enough
--audio-codec aac = make sure you compiled or installed ffmpeg with libfdk_aac for best audio quality (very important) as detailed in the installation instructions.
--audio-profile aac_he_v2 = (Advanced Audio Codec High Efficiency version 2) saves 2-7MB or so per audiobook.
--jobs 4 = I have a quad-core CPU so I specify jobs 4 so it uses all CPU cores simultaneously
in quotes "put the path" to the .mp3 chapters to create your audiobook
-o "author and title of m4b audiobook" = output filename (make sure you include the .m4b extension, otherwise it will error and won't work)
If you want to update or correct the author (artist) or title (name) of an existing m4b audiobook, you can. Say you named it --name "Prince" but want to correct it to "The Prince", do like so:
m4b-tool meta --name "The Prince" "The Prince.m4b"

Split audio into 20 min equal-time segments (split00.mp3, split01.mp3, split02.mp3, split03.mp3):
%02d outputs 00, 01, 02, 03, etc. change to %03d for 000, 001, 002, 003, etc.
ffmpeg -i input.m4b -c copy -f segment -segment_time 20:00 -reset_timestamps 1 split%02d.mp3
-reset_timestamps 1 is critical, as for each segment it resets the timestamps to start from zero.
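The %02d / %03d numbering is a printf-style pattern; a quick demonstration of what each expands to:

```shell
# %02d zero-pads the sequence number to two digits, %03d to three -
# the same pattern ffmpeg's segment muxer uses in the output filename.
printf 'split%02d.mp3\n' 0 1 2 3
printf 'split%03d.mp3\n' 0 1 2 3
```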
1) Split an m4b audiobook into multiple mka or mkv audio files, one per chapter (if it already has chapters), with MKVToolnix. Drag the m4b file into the MKVToolnix Multiplexer, then choose the Output tab:
Split mode: Before chapters
Chapter numbers: all
Must rename chapters to their original names though.
2) With Freac, by importing the m4b and outputting an mp3 for each chapter, but it has to re-encode them.
3) With m4b-tool
m4b-tool split --audio-format mp3 --audio-bitrate 96k "data/my-audio-book.m4b"
extracts all chapters from the m4b audiobook and re-encodes them
Komga https://komga.org
Komga is a free PDF, CBZ, CBR (comic book), and epub server. (No built-in reader for epubs; you can just download them and see their covers.) A huge advantage over Ubooquity is that it streams a PDF a page or two at a time, so you don't have to download the entire PDF to read it. It tracks reading progress, has a built-in full-screen streaming reader, and supports regex searching for titles.
java -jar -Xmx2400m /home/mint/appimages/komga-0.149.2.jar --komga.libraries-scan-cron="0 0 2 * * FRI" --komga.remember-me.key="secretkey666"

This command line starts up the Komga server.
-Xmx2400m caps the Java heap at 2400MB; it starts smaller and can grow toward that limit if needed.
/home/mint/AppImages/komga is just the path of where the Komga java jar file resides.

--komga.libraries-scan-cron="0 0 2 * * FRI" will scan all libraries automatically every Friday at 2AM. The cron fields are second, minute, hour, day, month, weekday. (Month and weekday names can be given as the first three letters of the English names.) In "0 0 2 * * FRI", the first * says do it every day and the second * says do it every month.

--komga.remember-me.key="secretkey666" generates a cookie (valid for 2 weeks) to auto-login accounts.

Komga is great as a photo server also. Just zip up each folder of jpg images separately. If you have hundreds or thousands of folders you need to batch do this.
On Linux, use this command to zip each folder separately:
for i in */; do zip -r "${i%/}.zip" "$i"; done
On Windows, using 7-Zip, a script to compress every folder into an individual zip is:

for /d %%X in (*) do "c:\Program Files\7-Zip\7z.exe" a "%%X.zip" "%%X\"

or to do it without including the folder itself in the archive:

for /d %%X in (*) do "c:\Program Files\7-Zip\7z.exe" a "%%X.7z" ".\%%X\*"

For Mac, use Keka to batch zip folders separately.
Keka is free compression software for zip and other formats. It has one huge time-saving feature (Archive items separately) in case you need to batch zip up hundreds of folders with each folder zipped separately. Useful to take directories of photos and zip them up individually, then move those .zip files into the Komga book / comic book server.
Komga Full Text Search
Komga leverages Full Text Search (FTS hereafter) to provide relevant results from your libraries. This isn't searching inside the documents; it just searches all database fields of the filenames and metadata of books.
FTS will order results by relevance
FTS matches on complete words: bat will not match Batman
The order of words is not important: batman robin will match Robin & Batman
You can search by prefix by adding the * character: bat* will match Batman
You can search books by ISBN
You can search series by publisher using the publisher:term syntax: publisher:dc will match all series published by DC Comics
You can use the AND, OR and NOT operators (UPPERCASE) to build complex queries:
batman NOT publisher:dc
will match all Batman series not published by DC Comics
batman OR robin will match Batman or Robin
batman AND (robin OR superman) will match Superman & Batman and Batman & Robin
You can search by initial token using the ^ character: batman ^superman will match Superman/Batman but not Batman/Superman
You can search for sequence of terms by enclosing them in the " character: "three joker" will match Batman: Three Jokers but not The Joker War: Part Three
zfs
ZFS protects the data on disk against silent data corruption caused by bit rot, current spikes, bugs in disk firmware, phantom writes, misdirected reads/writes, memory parity errors between the array and server memory, driver errors and accidental overwrites.
ZFS ensures that data is always consistent on the disk using a number of techniques, including copy-on-write. What this means is that when data is changed it is not overwritten - it is always written to a new block and checksummed before pointers to the data are changed. The old data may be retained, creating snapshots of the data through time as changes are made. File writes using ZFS are transactional - either everything or nothing is written to disk. View changelogs on zfsonlinux.org
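Snapshots fall straight out of copy-on-write: old blocks are simply retained instead of overwritten. A minimal sketch, assuming a hypothetical dataset named tank/data (substitute your own):

```shell
# "tank/data" is a placeholder dataset name - substitute your own.
zfs snapshot tank/data@before-edit   # instant; just pins the current blocks
# ...change or delete files under the dataset's mountpoint...
zfs list -t snapshot                 # the snapshot now holds the old blocks
zfs rollback tank/data@before-edit   # transactional revert to the snapshot
```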
Putting ZFS on Linux Mint for the home user or small business, aimed at the beginner.
If you want to run a free embedded version with a WebGUI off a server booting from a USB drive, use XigmaNAS (aka NAS4Free), which uses FreeBSD.
If you want to run a free community edition with a dark-mode WebGUI off a dedicated server, use TrueNAS (aka FreeNAS), which uses FreeBSD, although they're switching to Debian 11 for TrueNAS Scale (enterprise) to use Docker containers.
zfsonlinux.org
OpenZFS on Linux
Native port of ZFS to Linux.
Parity on files: .rar, .r01, .r02, .r03, .r04, .r05, .r06, .r07, .r08, .r09, .r10.
If you downloaded those and wanted to extract them to get one file, and .r03 was corrupt or missing, you're screwed. So parity files exist: .par, .p01, .p02, .p03, and usually 10 to 20% worth of parity files can fix a corrupt or missing .r03.
RAID 5 and RAID 6: in traditional RAID, when any disk failed it would take ages to wait for the array to be rebuilt. ZFS uses RAIDZ and, since ZFS 2.1.0 (July 2021), dRAID. To see how much space will be used for parity, use a ZFS raidz calculator.
RAIDZ1 with 5 disks is 20% parity, with 4 disks is 27% parity and with 3 disks is 33% parity.
raidz1 (1-disk parity, similar to RAID 5, one disk can fail)
raidz2 (2-disk parity, similar to RAID 6, two disks can fail)
raidz3 (3-disk parity, no RAID analog, three disks can fail)
ZFS does away with any RAID controller and is much easier to manage.
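A minimal sketch of creating a RAIDZ1 pool from three spare disks. The pool and device names are placeholders (check yours with lsblk first); note that creating a pool destroys whatever is on those disks:

```shell
# "mypool" and sdb/sdc/sdd are placeholder names - verify with lsblk.
# raidz1 = one disk's worth of parity spread across the three disks.
sudo zpool create mypool raidz1 /dev/sdb /dev/sdc /dev/sdd
zpool status mypool    # verify the vdev layout
zfs list mypool        # usable space after parity is deducted
```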
SSD drives can be used as cache drives or log drives (for home or small business it's overkill). Have 4GB+ of RAM though, and hopefully 8GB+. The ARC (adaptive replacement cache) is a block-level cache in system memory: read requests for frequently accessed data are sped up. An SSD cache drive = L2ARC (level 2 ARC), and since OpenZFS 2.0.0 (November 2020) it is persistent, meaning it's maintained across a reboot so it doesn't have to warm up. An SSD log drive = ZIL or ZFS intent log, which acts as a logging mechanism to store synchronous writes until they are safely written to the main data structure on the storage pool. An SSD log drive will improve synchronous write performance.
zfs set compression=on <pool/dataset>
LZ4 has about a 2.0 compression ratio and is superfast. compression=zstd aka ZStandard was introduced in OpenZFS 2.0.0 (November 2020) and has close to a 3.0 compression ratio, albeit a tad slower. The Ratio vs Speed Comparison at 4.0GHz is from 2017 and the 5.0GHz one is from 2018.
OpenZFS 2.1.0 was released in July 2021, but I'm waiting for the deb file to be released, which will have ZSTD for compression. Also, ZFS evaluates each block: if it won't compress to 7/8 of its original size or less, that block is stored uncompressed.
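As a worked example of that 7/8 rule, here is the threshold for a default 128K record (simple arithmetic, not output from ZFS itself):

```shell
# A block is kept compressed only if the compressed size is
# at most 7/8 of the original. For a default 128K record:
echo $((128 * 1024))          # original record size: 131072 bytes
echo $((128 * 1024 * 7 / 8))  # must compress to <= 114688 bytes
```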
To test out what kind of compression ZSTD offers, get the free PeaZip, multiplatform (Win, Mac, Linux).
compressratio is a read-only property (value <1.00x, or higher if compressed>) that shows you the amount compressed. Enabling compression after you copy data is OK, but only new data copied to the pool will be compressed.
checksum=on | off | fletcher2 | fletcher4 | sha256 | noparity | sha512 | skein | edonr. The default is fletcher4 (which is what on selects), since it's superfast relative to the other checksum methods. If using deduplication you must use sha256.
copies=1 | 2 | 3 ... I would never use these; copies=2 creates two copies of each file. copies=3 can't be used with encryption. There's already redundancy with the parity of a mirror or RAIDZ1, RAIDZ2, RAIDZ3.
Deduplication: never use it unless you have a million-dollar setup, since it's RAM intensive. It checks if an exact copy of the data already exists so it doesn't need to store it again, just reference the existing copy.
encryption=on (default is off) uses aes-256-ccm and must be set at creation time (zfs create -o encryption=on ...), not afterwards with zfs set. Sending snapshots to remote locations without the encryption key works.
Encryption Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused / groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).
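A minimal sketch of creating an encrypted dataset; the dataset name is a placeholder, and the command prompts interactively for a passphrase:

```shell
# "tank/secure" is a placeholder - substitute your own pool/dataset.
# Encryption properties must be chosen at creation time.
zfs create -o encryption=on -o keyformat=passphrase tank/secure
zfs get encryption,encryptionroot tank/secure   # confirm it is an encryption root
```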
PeaZip file archiver utility, free RAR ZIP software
PeaZip free archiver utility, open extract RAR TAR ZIP files
Free file archiver utility for Windows, macOS, Linux, Open Source file compression and encryption software. Open, extract RAR TAR ZIP archives, 200+ formats supported
Key rotation is managed by ZFS. Changing the user's key (e.g. a passphrase) does not require re-encrypting the entire dataset. Datasets can be scrubbed, resilvered, renamed, and deleted without the encryption keys being loaded (see the zfs load-key subcommand for more info on key loading).
Creating an encrypted dataset requires specifying the encryption and keyformat properties at creation time, along with an optional keylocation and pbkdf2iters. After entering an encryption key, the created dataset will become an encryption root.
Encrypted datasets may not have copies=3 since the implementation stores some encryption metadata where the third copy would normally be.
userquota@user=size|none
sharesmb=on | off | opts
Controls whether the file system is shared by using Samba USERSHARES and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE. Because SMB shares require a resource name, a unique resource name is constructed from the dataset name.
If the sharesmb property is set to off, the file systems are unshared.
The share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user, via system passwd/shadow, LDAP or smbpasswd) by default.
zfs set mountpoint=/data zfsbaby/data
zfs set sharesmb=on zfsbaby/data
zfs share zfsbaby/data

Verify it's working, either remotely or locally:
smbclient -U guest -N -L localhost

Stop the samba CIFS share:
zfs unshare tank/data

Disable the share forever:
zfs set sharesmb=off tank/data

zfs get all shows every property of the dataset, for example:

NAME     PROPERTY       VALUE                  SOURCE
zfsbaby  type           filesystem             -
zfsbaby  creation       Thu Sep  9 23:42 2021  -
zfsbaby  used           183G                   -
zfsbaby  available      6.86T                  -
zfsbaby  referenced     183G                   -
zfsbaby  compressratio  1.00x                  -
zfsbaby  mounted        yes                    -
zfsbaby  quota          none                   default
zfsbaby  reservation    none                   default
zfsbaby  recordsize     128K                   default
zfsbaby  mountpoint     /zfsbaby               default
zfsbaby  sharenfs       off                    default
zfsbaby  checksum       on                     default
zfsbaby  compression    lz4                    local