eBoox (free) works great and is the best free option on iOS. No need to wait for indexing. Fast. Can export annotations as text via email, text file, or Notes.
Only con is the Strong's links won't work in the table of contents, although the two Strong's indexes (Hebrew and Greek) will. It's not all that bad, as you can still access Strong's from verses by clicking the X.
Screenshot a predefined area on Linux.

With SimpleScreenRecorder you can simply select an area and it'll only video-record that area. There are quite a few screenshot apps on Linux. Usually I just use:
Alt-PrtSc = capture window
PrtSc = capture entire desktop
Shift-PrtSc = capture area selected with mouse
custom one Super-PrtSc = Normcap OCR

Defining the area each and every time is crazy if you're doing batches. I did it till card #150, then decided I'd better find a better way... I have to go to #541. Scrot app to the rescue.

sudo apt install scrot
scrot -a 24,318,538,538

24px in from the left side
318px down from the top of the screen
538px width of area to capture
538px height of area to capture

Create a custom keyboard shortcut. I chose Super+F1. Call it scrot and in the command section put
scrot -a 24,318,538,538 -e 'mv $f ~/Pictures/'
By default it'll capture to your home directory, so mv moves each screenshot into your Pictures directory. Now just hit Next for the next card and Super-F1 each time.

To convert HEIC (Apple) to JPG on Linux:
for file in *.heic; do heif-convert $file ${file/%.heic/.jpg}; done

Automatically truncate (cut out) silence with Audacity. Select All, then choose Effect | Truncate Silence. I set a minimum of 5 seconds; it found two areas of silence and replaced each with half a second of silence. You can batch-cut a few at a time; make sure to check Truncate track independently.

For testing purposes I generated silence at the beginning and end of a file with OcenAudio. I think in Audacity you'd have to install a plugin for that.

Add this part to remove silence from the entire audio (beginning, middle, end) in Videomass. Useful for doing hundreds at a time.
-c:a libopus -vbr off -b:a 32k -ar 48000 -af highpass=500,lowpass=1000,afftdn,aformat=channel_layouts=stereo,volume=12dB,"silenceremove=start_periods=1:stop_periods=-1:start_threshold=-50dB:stop_threshold=-50dB:start_silence=1:start_duration=2:stop_duration=5:detection=peak",dynaudnorm

If you only wish to remove silence from the beginning of the audio, put instead
start_periods=0
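For doing hundreds at a time outside Videomass, the same filter can be driven by a plain shell loop over ffmpeg. A rough sketch only: the input/ and output/ directory names and the .mp3/.opus extensions are my assumptions, and the filter string is the silenceremove one quoted above.

```shell
# Batch silence removal: read every .mp3 in ./input, write trimmed .opus
# files to ./output using the silenceremove settings quoted above.
mkdir -p output
filter='silenceremove=start_periods=1:stop_periods=-1:start_threshold=-50dB:stop_threshold=-50dB:start_silence=1:start_duration=2:stop_duration=5:detection=peak'
for f in input/*.mp3; do
  [ -e "$f" ] || continue            # skip if the glob matched nothing
  out="output/$(basename "${f%.mp3}").opus"
  ffmpeg -n -i "$f" -c:a libopus -b:a 32k -af "$filter,dynaudnorm" "$out"
done
```

The -n flag tells ffmpeg never to overwrite, so re-running the loop only processes files that failed or are new.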
What's the equivalent of detecting a minimum silence duration of, say, 5 seconds like you can in Audacity? stop_duration=5 is.

Follow-up to removing silence with ffmpeg:
I guess minimum duration of silence works with stop_duration=30, for example. It always removes silence at the beginning, though. In the black waveform you can see the 12s is still there, plus portions of the 1m 8s (1st part is 14s, 2nd part is 8s; the 3rd part was longer than 30s and was removed). It also removed the 1m 20s and seemed to remove the end.

The red waveform shows stop_duration=15 seconds, and it seems it worked well.

Last thing: I tried start_duration=2 and it did remove one click but missed the other click. So based on this it'll be pretty safe to use these default settings:
stop_duration=30:start_duration=2

Image upscale 2x, 4x, 8x, etc. There are some online services for this. GIMP | Image | Scale (choosing Linear | Cubic | NoHalo doesn't matter) isn't even close, even if you use a Sharpen filter afterwards.
https://github.com/nihui/waifu2x-ncnn-vulkan
./waifu2x-ncnn-vulkan -i ~/Pictures/rainbow.jpg -o ~/Pictures/rainbow4x.jpg -s 4 -n 1

Usage:
waifu2x-ncnn-vulkan -i infile -o outfile [options]...
-n noise-level  denoise level (-1/0/1/2/3, default=0)
-s scale        upscale ratio (1/2/4/8/16/32, default=2)
-t tile-size    >=32; if you get a GPU error, reduce the tile size to something like 64 or 128 (default=auto)

To batch process, see below.
[Image attachment: rainbow4x.jpg, 3.2 MB — image upscaled 4x to 2720x2196]
I tried a few online AI image upscaling services and this gives similar results.
[Image attachment: rainbowGimp.jpg, 1.3 MB — used GIMP to upscale 4x with NoHalo and here's the result... just terrible]
[Image attachment: rainbow_4ximgscaler.com.jpg, 2.1 MB — imgscaler dot com ($$$) for comparison, 4x 2720x2196]
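To compare the three results by numbers instead of eyeballing, ImageMagick's identify can print each file's dimensions and size in one pass. A sketch: it assumes ImageMagick is installed and that the filenames match the attachments above.

```shell
# Print filename, pixel dimensions and on-disk size for each variant.
# identify ships with ImageMagick; fall back to a message if it's missing.
have_im=$(command -v identify || true)
if [ -n "$have_im" ]; then
  identify -format '%f %wx%h %b\n' rainbow4x.jpg rainbowGimp.jpg rainbow_4ximgscaler.com.jpg || true
else
  echo "ImageMagick (identify) not installed"
fi
```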
VLC shows chapters on your computer (Linux, Mac or Windows). In Preferences, under Interface settings, set Continue playback to Always so it'll resume from where you stopped. If you wish to increase or decrease the playback speed (0.25x to 4.00x), be sure to check under Preferences | Audio that Enable Time-Stretching audio is checked, as it adjusts the pitch to improve output at faster or slower speeds. VLC on Linux doesn't show chapter durations or starting times, unfortunately.

VLC (free) on iOS can't stop playback at the end of a chapter but does have a sleep timer. It supports variable playback speed (0.25x to 8.00x). On iOS it'll show the duration of each chapter, and VLC on Android shows the starting time of each chapter. In VLC Settings on iOS, make sure Continue audio playback is set to Always so it'll resume from the point you last listened to. For variable playback speed, make sure Time-stretching audio is checked.

Making a PDF with an index / bookmarks of images along with their corresponding filenames. (Not technically bookmarks, but even I'm confused which is the correct term... so confusing. Bookmarks are usually user-created for certain points while reading a PDF document, so an index would be more accurate.)
Then compress each image before feeding it to PDFSAM to create the indexed PDF with images. img2pdf *.jpg -o output.pdf gives one PDF with all the images but doesn't show the filenames of the jpgs, thus this solution. Install ocrmypdf and parallel if not already installed. If you wish to avoid ocrmypdf's image compression, convert / compress beforehand with whatever image editor, as img2pdf makes the PDF with lossless compression.

Open a terminal in the directory of jpgs or other images:
mkdir output ; parallel -j2 img2pdf {} -o 'output/{.}.pdf' ::: *.jpg && parallel --tag -j2 ocrmypdf -s -O 2 --skip-big .1 '{}' './{}' ::: output/*.pdf

-j2 = two jobs simultaneously max
-O 2 = optimization level (it's a letter O, not a zero 0). Level 3 is best if the image quality is acceptable, which it usually is; only when doing OCR on scanned text does it blur the text at times... then use 2 instead.
-s = skip text OCR
--skip-big .1 = skips all OCR processing on pages over 0.1 megapixels, which is every page here
{.} = outputs the filename without its extension. If you just use {} here it outputs filename.jpg.pdf instead of filename.pdf, which is what you want. I did that at first and had to batch-rename them with Thunar or Nemo. Command line to batch-rename from filename.jpg.pdf to filename.pdf:
rename 's/.jpg//' *.pdf
But this step is avoided, luckily.
In PDFSAM Basic, drag all the PDF files into the merge section and, under Bookmarks handling, choose Create one entry for each merged document. Done.

If it's a huge number of images and PDFSAM crashes, then open the Java app PDFSAM with 4.4 GB of RAM like so:
java -jar -Xmx4400m /opt/pdfsam-basic/pdfsam-basic-4.3.1.jar
To quickly compare which compression you want:
ocrmypdf -O 2 -s --skip-big .1 someimage.jpg someimage_opt2.pdf
ocrmypdf -O 3 -s --skip-big .1 someimage.jpg someimage_opt3.pdf

The original image in the PDF was 240 KB; -O 2 gave 205 KB and -O 3 gave 115 KB. To me the file size savings with still-acceptable image quality are worth it, so I'm going with -O 3 (optimization level 3). For these 175 celebrity images it went from 39 MB down to 18 MB, so a tad more than 2x file size savings.

Then clean with ExifCleaner (free app); I use the AppImage on Linux to clean out all the PDF metadata. In this case it deleted Creator, Producer and ModifyDate.
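To double-check the scrub worked, the remaining PDF metadata can be listed from the terminal. A sketch: the choice of exiftool and the output.pdf filename are my assumptions.

```shell
# After ExifCleaner, the Creator / Producer / ModifyDate fields should be gone.
pdf=output.pdf
if command -v exiftool >/dev/null 2>&1 && [ -f "$pdf" ]; then
  exiftool "$pdf" | grep -iE 'creator|producer|modify' \
    || echo "no Creator/Producer/Modify fields left"
else
  echo "exiftool or $pdf not available - skipping check"
fi
```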
I wanted the images pretty much uniform in size; I didn't want a PDF with small, medium and super-large images. So I capped all of them at a max width and height of 1000 pixels using:
img2pdf --imgsize 1000x1000 *.jpg -o output.pdf

But before I did that, there were some small, say 400 pixel, photos in the Ben Garrison, Sheeple, Ads collection. So I batch-upscaled many of them 2x with waifu2x-ncnn-vulkan (see next post). Even if an image were already, say, 1500x1500 and you 2x upscale it to 3000x3000, that's fine since you've set the max to 1000x1000 with img2pdf.

Copy waifu2x-ncnn-vulkan into the /usr/bin path with elevated privileges.
sudo nemo
or
sudo thunar
which lets us copy the 3 model directories and the waifu2x-ncnn-vulkan executable to /usr/bin. Or on the command line (-r = recursive):
sudo cp -r waifu2x-ncnn-vulkan models-* /usr/bin
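A quick sanity check that the copy landed somewhere on the PATH. Just a sketch: command -v prints the resolved path if the shell can find the executable, and nothing otherwise.

```shell
# Verify the shell can now find waifu2x-ncnn-vulkan on the PATH.
w2x=$(command -v waifu2x-ncnn-vulkan || true)
if [ -n "$w2x" ]; then
  echo "found: $w2x"
else
  echo "not on PATH - re-check the copy to /usr/bin"
fi
```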
When you wish to delete (be careful):
cd /usr/bin
sudo rm -r waifu2x-ncnn-vulkan models-*
Upscale only one image:
waifu2x-ncnn-vulkan -i input.jpg -o output.jpg -s 4 -n 2

To batch-upscale many images (jpg/png/webp), specify the input and output directories plus the image format:
waifu2x-ncnn-vulkan -i ~/Pictures/input -o ~/Pictures/output -s 2 -n 0 -f jpg -t 64

-s scale 1/2/4/8/16/32 (default 2)
-n noise-level -1/0/1/2/3 (default 0)
-f format type for batch-processing directories
-t tile-size >=32 (default=auto). I got a GPU error, so I reduced the tile size to 64 with -t 64 and it worked fine, albeit even slower.
You don't need an expensive AMD or Nvidia graphics card either; it just takes longer.

k2pdfopt shows the white space to be cropped / trimmed / removed automatically.
k2pdfopt input.pdf -ui- -x -mode tm -om 0.01,0.01,0.01,0.01 -c
outputs a file named input_k2opt.pdf

-ui- disables the interactive GUI on Linux
-x exits when finished
-c color output, as the default is black and white
-mode tm = trim margins / auto-crop
-om 0.01,0.01,0.01,0.01 = output margins; adds just a little bit of margin to the left, top, right and bottom of pages

OCR works, but I'll stick with ocrmypdf, since when k2pdfopt OCRs text and images it converts the text to images, which is terrible: the images have the OCR text overlaid. In ocrmypdf you can --force-ocr and it will keep your text as text and overlay text on images too. Huge, major difference.

k2pdfopt input.pdf -ui- -p 1-4 -x -mode tm -om 0.01,0.01,0.01,0.01 -ocr t -ocrhmax 1.5 -ocrdpi 400 -ocrvis s -ocrd p -c

-p 1-4 = page range; 1- is page 1 to the end, e for even and o for odd pages
-ocrd p = send Tesseract a page at a time rather than a line at a time. This was necessary