On Mac and Windows, VeraCrypt with encrypted external drives just works under Emby. On Linux Mint, here's what needs to be done.

One-time setup, create the mount point:

sudo mkdir /media/emby

Then each time you mount the encrypted drive:

veracrypt /dev/sdb2 --filesystem=none

(or use the GUI: click Options, then under Filesystem check Do Not Mount). Change sdb2 to whatever your disk partition is.

sudo mount -o umask=022,uid=mint,gid=mint /dev/mapper/veracrypt1 /media/emby

Put your username in place of mint.

Then to unmount it:

sudo umount /media/emby

then dismount in the VeraCrypt GUI.
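The steps above can be collected into one small script so you don't retype them each time the drive is plugged in. This is only a sketch: the partition, username and mount point are the examples from the text, and I believe veracrypt -d <volume> is the CLI equivalent of dismounting in the GUI, but verify with veracrypt --help on your system.

```python
import subprocess

# Example values from the text -- adjust for your system.
PARTITION = "/dev/sdb2"
USER = "mint"
MOUNTPOINT = "/media/emby"

def mount_commands():
    """Map the VeraCrypt volume without a filesystem, then mount it for Emby."""
    return [
        ["veracrypt", PARTITION, "--filesystem=none"],
        ["sudo", "mount", "-o", f"umask=022,uid={USER},gid={USER}",
         "/dev/mapper/veracrypt1", MOUNTPOINT],
    ]

def unmount_commands():
    """Unmount the filesystem, then dismount the VeraCrypt volume."""
    return [
        ["sudo", "umount", MOUNTPOINT],
        ["veracrypt", "-d", PARTITION],
    ]

def run_all(commands):
    for cmd in commands:
        subprocess.run(cmd, check=True)

# Usage: run_all(mount_commands()) after plugging the drive in,
# and run_all(unmount_commands()) before unplugging it.
```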
m4b audiobook players
On iOS, BookPlayer (free) shows chapters.
On Android, Smartbook Audio Player ($2) and Listen Audiobook Player ($2) both show chapters.
On macOS, use IINA or VLC. On Linux, use Cozy, and for audiobooks with more than 132 chapters use the DeaDBeeF audio player.
GitHub: TortugaPower/BookPlayer, player for your DRM-free audiobooks.
m4b-tool is multi-platform (Windows, Mac, Linux). Setting it up is for advanced users, but once it's set up you're good to go, and each time you only need to change the directory location, filename, title, and author.

m4b-tool is better than Audiobook Builder because it doesn't have the iPod 32-bit limitation, so the created audiobook can be as big as necessary. It can also use threads, so it's much faster. Plus it's free.
GitHub: sandreas/m4b-tool, a command line utility to merge, split and chapterize audiobook files such as mp3, ogg, flac, m4a or m4b.
Install it according to the instructions for your OS, then when done don't forget the final step (scroll down) on the m4b-tool page: install the latest beta release.
If you are sure all dependencies are installed, the next step is to download the latest release of m4b-tool from https://github.com/sandreas/m4b-tool/releases
Depending on the operating system, you can rename m4b-tool.phar to m4b-tool and run m4b-tool --version directly from the command line. If you are not sure, you can always use the command php m4b-tool.phar --version to check if the installation was successful. This should work on every system.
On Linux Mint 20.2 Uma and Ubuntu 20.04 I had to install these two packages, which solved the following error:
merging 4 files into test-tmpfiles/tmp_test.m4b, this can take a while Could not detect length for file 1-finished.m4b, output 'sh: 1: exec: mp4info: not found ' does not contain a valid length value
mp4v2-utils_2.0.0~dfsg0-6_amd64.deb
libmp4v2-2_2.0.0~dfsg0-6_amd64.deb
sudo dpkg -i mp4v2-utils_2.0.0~dfsg0-6_amd64.deb libmp4v2-2_2.0.0~dfsg0-6_amd64.deb
GitHub: sandreas/m4b-tool Releases page.
m4b-tool list (list commands)
m4b-tool merge --help
Here is the command line I use in Terminal to create a single m4b audiobook with chapters from the mp3 files in a directory. The mp3 filenames will become the chapter names in the m4b audiobook. Put cover.jpg in the same directory.
m4b-tool merge -v --artist "God" --name "Bible King James Version" --use-filenames-as-chapters --no-chapter-reindexing --audio-bitrate 32k --audio-samplerate 22050 --audio-codec aac --audio-profile aac_he_v2 --jobs 4 "/home/mint/Music/bible" -o "Bible.m4b"

-v = verbose
--artist "Seneca" = author of the audiobook, for metadata. Title and Length metadata will be included automatically.
--name "Letters from a Stoic" = title of the audiobook, for metadata
--use-filenames-as-chapters = name your files as you want the chapters to be named
--no-chapter-reindexing = forces it to keep your chapter names, especially on large audiobooks
--audio-bitrate 32k = 32kbps seems to be good enough
--audio-codec aac = make sure you compiled or installed ffmpeg with libfdk_aac for best audio quality (very important), as detailed in the installation instructions.
--audio-profile aac_he_v2 = (Advanced Audio Coding High Efficiency version 2) saves 2-7MB or so per audiobook.
--jobs 4 = I have a quad-core CPU, so I specify 4 jobs so it uses all CPU cores simultaneously
"put the path" (in quotes) to the directory of .mp3 chapters used to create your audiobook
-o "author and title of m4b audiobook" = output filename (make sure you include the .m4b extension, otherwise it will error and won't work)
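Since only the author, title, source directory and output file change between runs, the long merge command can be assembled with a small Python helper. This is my own sketch, not part of m4b-tool, and the example path is hypothetical:

```python
import shlex

def merge_command(author, title, source_dir, output, jobs=4):
    """Build the m4b-tool merge invocation with the fixed flags used above."""
    return [
        "m4b-tool", "merge", "-v",
        "--artist", author,
        "--name", title,
        "--use-filenames-as-chapters",
        "--no-chapter-reindexing",
        "--audio-bitrate", "32k",
        "--audio-samplerate", "22050",
        "--audio-codec", "aac",
        "--audio-profile", "aac_he_v2",
        "--jobs", str(jobs),
        source_dir,
        "-o", output,
    ]

cmd = merge_command("Seneca", "Letters from a Stoic",
                    "/home/mint/Music/seneca", "Letters from a Stoic.m4b")
# Print a shell-quoted line to paste into a terminal, or pass cmd to subprocess.run
print(shlex.join(cmd))
```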
If you want to update or correct the author (artist) or title (name) of an existing m4b audiobook, you can. Say you named it --name "Prince" but want to correct it to "The Prince", do like so:
m4b-tool meta --name "The Prince" "The Prince.m4b"

To split audio into equal 20-minute segments (split00.mp3, split01.mp3, split02.mp3, split03.mp3, etc.):

ffmpeg -i input.m4b -c copy -f segment -segment_time 20:00 -reset_timestamps 1 split%02d.mp3

%02d outputs 00, 01, 02, 03, etc.; change it to %03d for 000, 001, 002, 003, etc.
-reset_timestamps 1 is critical, as it recalculates the timestamps for each segment.
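As a sanity check on the numbering, a small Python sketch (the function is mine, not part of ffmpeg) predicts the filenames the segment muxer will produce for a given duration:

```python
import math

def segment_names(total_seconds, segment_seconds=20 * 60, pattern="split%02d.mp3"):
    """Filenames ffmpeg's segment muxer produces; numbering starts at 0."""
    count = math.ceil(total_seconds / segment_seconds)
    return [pattern % i for i in range(count)]

# A 65-minute audiobook cut into 20-minute pieces yields four files:
print(segment_names(65 * 60))
# ['split00.mp3', 'split01.mp3', 'split02.mp3', 'split03.mp3']
```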
Three ways to split an m4b audiobook into per-chapter files:

1) Split it into multiple mka or mkv audio files, one per chapter (if it already has chapters), with MKVToolNix. Drag the m4b file into the MKVToolNix Multiplexer, then choose the Output tab:
Split mode: Before chapters
Chapter numbers: all
Must rename chapters to their original names though.
2) With fre:ac, by importing the m4b and outputting an mp3 for each chapter, though it has to re-encode them.
3) With m4b-tool
m4b-tool split --audio-format mp3 --audio-bitrate 96k "data/my-audio-book.m4b"
This extracts all chapters from the m4b audiobook and re-encodes them.
Komga https://komga.org
A free server for PDF, CBZ, CBR (comic books) and epub. (There's no built-in reader for epubs; you can only download them and see their covers.) A huge advantage over Ubooquity is that it streams a PDF a page or two at a time, so you don't have to download the entire PDF to read it. It tracks reading progress, has a built-in full-screen streaming reader, and supports regex searching for titles.
This command line starts up the Komga server:

java -jar -Xmx2400m /home/mint/appimages/komga-0.149.2.jar --komga.libraries-scan-cron="0 0 2 * * FRI" --komga.remember-me.key="secretkey666"

-Xmx2400m caps the Java heap at 2400MB; the JVM starts smaller and grows toward that limit if needed.

/home/mint/appimages/komga-0.149.2.jar is just the path where the Komga jar file resides.

--komga.libraries-scan-cron="0 0 2 * * FRI" will scan all libraries automatically every Friday at 2AM. The fields are second, minute, hour, day, month, weekday (month and weekday names can be given as the first three letters of the English names); the first * means every day and the second * means every month.

--komga.remember-me.key="secretkey666" generates a cookie (valid for 2 weeks) to auto-login accounts.

Komga is great as a photo server also. Just zip up each folder of jpg images separately. If you have hundreds or thousands of folders, you need to batch this.
On Linux, use this command to zip each folder into its own archive:
for i in */; do zip -r "${i%/}.zip" "$i"; done
On Windows, using 7-Zip, a script to compress every folder into an individual zip is:

for /d %%X in (*) do "c:\Program Files\7-Zip\7z.exe" a "%%X.zip" "%%X\"

or, to do it without including the folder itself in the archive:

for /d %%X in (*) do "c:\Program Files\7-Zip\7z.exe" a "%%X.7z" ".\%%X\*"

On Mac, use Keka to batch-zip folders separately.
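A cross-platform alternative to the per-OS commands above is Python's standard zipfile module. The function name and example path here are mine:

```python
import os
import zipfile

def zip_each_folder(parent="."):
    """Create <folder>.zip next to each sub-folder of parent."""
    for name in sorted(os.listdir(parent)):
        folder = os.path.join(parent, name)
        if not os.path.isdir(folder):
            continue
        with zipfile.ZipFile(folder + ".zip", "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _dirs, files in os.walk(folder):
                for fname in files:
                    path = os.path.join(root, fname)
                    # store paths relative to parent, like `zip -r name.zip name/`
                    zf.write(path, os.path.relpath(path, parent))

# Example: zip_each_folder("/home/mint/Pictures")
```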
Keka is free compression software for zip and other formats. It has one huge time-saving feature (Archive items separately) in case you need to batch-zip hundreds of folders, each into its own archive. Useful for taking directories of photos, zipping them up individually, and then moving those .zip files into the Komga book / comic book server.
Komga Full Text Search
Komga leverages Full Text Search (FTS hereafter) to provide relevant results from your libraries. This isn't searching inside the documents; it searches the database fields holding the books' filenames and metadata.
FTS will order results by relevance
FTS matches on complete words: bat will not match Batman
The order of words is not important: batman robin will match Robin & Batman
You can search by prefix by adding the * character: bat* will match Batman
You can search books by ISBN
You can search series by publisher using the publisher:term syntax: publisher:dc will match all series published by DC Comics
You can use the AND, OR and NOT operators (UPPERCASE) to build complex queries:
batman NOT publisher:dc
will match all Batman series not published by DC Comics
batman OR robin will match Batman or Robin
batman AND (robin OR superman) will match Superman & Batman and Batman & Robin
You can search by initial token using the ^ character: batman ^superman will match Superman/Batman but not Batman/Superman
You can search for a sequence of terms by enclosing them in " characters: "three joker" will match Batman: Three Jokers but not The Joker War: Part Three
ZFS
ZFS protects the data on disk against silent data corruption caused by bit rot, current spikes, bugs in disk firmware, phantom writes, misdirected reads/writes, memory parity errors between the array and server memory, driver errors and accidental overwrites.
ZFS ensures that data is always consistent on the disk using a number of techniques, including copy-on-write. What this means is that when data is changed it is not overwritten - it is always written to a new block and checksummed before pointers to the data are changed. The old data may be retained, creating snapshots of the data through time as changes are made. File writes using ZFS are transactional - either everything or nothing is written to disk. View changelogs on zfsonlinux.org
Putting ZFS on Linux Mint, for the home user or small business, aimed at the beginner.

If you want to run a free embedded version with a WebGUI off a server booting from a USB drive, use XigmaNAS (aka NAS4Free), which uses FreeBSD.

If you want to run a free community edition with a dark-mode WebGUI off a dedicated server, use TrueNAS (aka FreeNAS), which uses FreeBSD, although they're switching to Debian 11 for TrueNAS Scale (enterprise) to use dockers.
zfsonlinux.org: OpenZFS on Linux, the native port of ZFS to Linux.
Parity on files: .rar, .r01, .r02, .r03, .r04, .r05, .r06, .r07, .r08, .r09, .r10.

If you downloaded those and wanted to extract them to get one file, and .r03 was corrupt or missing, you're screwed. So parity files exist (.par, .p01, .p02, .p03), and with the usual 10 to 20% of parity files you could fix a corrupt or missing .r03.
RAID 5 and RAID 6: in traditional RAID, when any disk failed it would take ages for the array to be rebuilt. ZFS instead uses RAIDZ and, since ZFS 2.1.0 (July 2021), dRAID. To see how much space will be used for parity, use a ZFS raidz calculator.
RAIDZ1 with 5 disks is 20% parity, with 4 disks is 27% parity and with 3 disks is 33% parity.
raidz1 (1-disk parity, similar to RAID 5, one disk can fail)
raidz2 (2-disk parity, similar to RAID 6, two disks can fail)
raidz3 (3-disk parity, no RAID analog, three disks can fail)
ZFS does away with any RAID controller and is much easier to manage.
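The figures above can be sanity-checked with the simple parity fraction, parity disks over total disks. A raidz calculator reports slightly more for some widths (e.g. 27% rather than 25% for a 4-disk raidz1) because ZFS pads allocations, so treat this quick Python check as the nominal lower bound:

```python
# Nominal raidz parity fraction: parity disks / total disks.
# Real-world calculator figures run a bit higher for some widths
# because ZFS rounds allocations up to multiples of parity+1 sectors.

def parity_percent(disks, parity=1):
    return round(100 * parity / disks)

print(parity_percent(5))     # raidz1, 5 disks
print(parity_percent(3))     # raidz1, 3 disks
print(parity_percent(6, 2))  # raidz2, 6 disks
```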
SSD drives can be used as cache drives or log drives (for home or small business this is overkill). Have 4GB+ of RAM though, and hopefully 8GB+. The ARC (adaptive replacement cache) is a block-level cache in system memory that speeds up reads of frequently accessed data. An SSD cache drive holds the L2ARC (level 2 ARC), which since OpenZFS 2.0.0 (November 2020) is persistent, meaning it survives a reboot so it doesn't have to warm up. An SSD log drive holds the ZIL (ZFS intent log), which acts as a logging mechanism to store synchronous writes until they are safely written to the main data structure on the storage pool, so it improves synchronous write performance.