GeekTips
Linux Mint, video encoding, ffmpeg, geek tips, regex, pdf manipulation, substitcher, mpv config
Split a double-page scanned PDF into single pages with ScanTailor Advanced.

Shown: the original pages 54 and 55. The source PDF is 46.4MB and the final PDF is 1.5MB. That is the biggest reason to optimize these files, besides making them searchable with OCR.

mkdir dump

pdftoppm -tiff -tiffcompression deflate -r 300 What-About-The-Seedline-Doctrine.pdf dump/img

Or extract to png instead of tiff if you have a fast computer:
pdftoppm -png -r 300 What-About-The-Seedline-Doctrine.pdf dump/img

-r 300 sets the dpi to 300; you can do 600 if you wish. deflate compression gives file sizes ~30% smaller than lzw and is only about 10% slower.

In ScanTailor Advanced, open the directory of png images.

edit: I used to use pdfimages, but it extracts each layer of a page separately. The only tool I found whose Export all images actually flattens is Master PDF Editor, and it puts a DEMO watermark on the output. pdftoppm does the same flattening of a PDF's multilayered images for free.
Under Split Pages, ScanTailor usually detects automatically where to split each page.
ScanTailor outputs the tif images into a subdirectory out/

img2pdf --pagesize A4 out/*.tif | ocrmypdf --optimize 3 --jbig2-lossy - output_ocr.pdf

Final result (kinda... Telegram compresses the image): page 54, in a PDF that now has a single page per page rather than two.
Page 55. I didn't make an index as it's only 56 pages; any PDF under 100 pages most likely doesn't need an index.
To make a cover fit exactly without margins in an A4 PDF, just resize an image to exactly 595 x 842 (A4 at 72 dpi) and save it as a .tif image.
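As a side note on the pixel math (my own aside, not from the post): 595 x 842 is just A4 at 72 dpi, so you can scale the same ratio up if you want a sharper cover at the dpi you scanned at:

```python
# A4 is 210 x 297 mm; 595 x 842 is what that rounds to at 72 dpi.
A4_MM = (210, 297)

def a4_pixels(dpi):
    """Pixel dimensions of an A4 page at the given dpi."""
    return tuple(round(mm / 25.4 * dpi) for mm in A4_MM)

print(a4_pixels(72))   # (595, 842)  -> the cover size above
print(a4_pixels(300))  # (2480, 3508) -> cover for a 300 dpi scan
```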

If you wish to add one after you've already OCR'd the PDF, use PDFArranger or PDFSlicer to remove the existing cover and add the new one. Save the image as .jpg, then:
img2pdf -S A4 001cover2x.jpg -o 001cover.pdf
Extract URLs from a webpage

https://www.convertcsv.com/url-extractor.htm

Say you wish to download an entire podcast series to turn it into a few opus audiobooks.

On Soundcloud I couldn't figure out how to get the URLs. On Podbean I would have had to go through 50 different pages. Radiopublic dot com had all 500 episodes listed on one page, which made it possible to extract all the URLs at once.

1) Load the URL from Radiopublic
2) Filter to URLs that contain mp3
3) Extract, then save the file as linksextracted.txt or whatever you wish
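The extractor site does the filter-and-extract for you, but the idea can be sketched with a regex (the html sample here is made up for illustration):

```python
import re

# Toy stand-in for a saved episode-list page; real pages are messier.
html = '''<a href="https://feeds.example.com/stream/001-part-1.mp3">ep1</a>
<a href="https://example.com/about">about</a>
<a href="https://feeds.example.com/stream/002-part-2.mp3">ep2</a>'''

# Grab every URL that ends in .mp3, skipping everything else.
mp3_urls = re.findall(r'https?://[^\s"\'<>]+\.mp3', html)
print("\n".join(mp3_urls))  # save this output as linksextracted.txt
```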

Had to change the filenames though.
yt-dlp -j -a linksextracted.txt

-j, --dump-json

With this command I can find the name I want: the downloads are coming out as TstlYjWBcmTs.128 [TstlYjWBcmTs.128].mp3, which I don't want. Looking at the screenshot, it appears to be outputting webpage_url_basename, so I want the output template option set to original_url instead.

• webpage_url (string): A URL to the video webpage which if given to yt-dlp should allow to get the same result again

• webpage_url_basename (string): The basename of the webpage URL

• webpage_url_domain (string): The domain of the webpage URL

• original_url (string): The URL given by the user (or same as webpage_url for playlist entries)

yt-dlp -o "%(original_url)s.%(ext)s" -a linksextracted.txt
That gives the filename I want; the unwanted text can be fixed when batch renaming the files.

https -_feeds.soundcloud.com_stream_1198353229-truth-in-history-556-divine-blueprint-part-2.mp3.unknown_video
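A sketch of that batch rename, assuming (as in the example above) the junk is everything up to "_stream_" plus the bogus .unknown_video suffix:

```python
from pathlib import Path

def clean_name(name: str) -> str:
    # Drop the fake extension, then keep only the episode slug
    # that follows "_stream_" in the mangled feed URL.
    name = name.removesuffix(".unknown_video")
    return name.rsplit("_stream_", 1)[-1]

# Apply to every downloaded file in the current directory.
for f in Path(".").glob("*.unknown_video"):
    f.rename(f.with_name(clean_name(f.name)))
```

(str.removesuffix needs Python 3.9+.)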
In Text Editor (xed) just press Ctrl-U for UPPERCASE, Ctrl-L for lowercase, Ctrl-T for Title Case; but for Title Case it's better to handle exceptions:

Title Case with exceptions for:
articles: a, an, the
prepositions: at, by, in, on, of, with
conjunctions: and, but, for, nor, so
except if they're first or last word of line

First and last word always capitalized even for articles, prepositions, conjunctions

single and double quotes
keep urls lowercase

Custom exception list, for, say, acronyms:
touch ~/.titlecase.txt
xed ~/.titlecase.txt

Add one word per line and it will always keep those exactly as you listed them.
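The rules above, as a minimal sketch of my own (the pip titlecase package below is far more thorough; I also added "to" to the small words to match the examples further down):

```python
# Small words stay lowercase unless they're first or last in the line.
SMALL = {
    "a", "an", "the",                      # articles
    "at", "by", "in", "on", "of", "with",  # prepositions
    "and", "but", "for", "nor", "so",      # conjunctions
    "to",                                  # added to match common usage
}

def title_case(line: str) -> str:
    words = line.lower().split()
    out = []
    for i, w in enumerate(words):
        if 0 < i < len(words) - 1 and w in SMALL:
            out.append(w)               # small word mid-line: keep lowercase
        else:
            out.append(w.capitalize())  # first/last/normal word: capitalize
    return " ".join(out)

print(title_case("nothing to be afraid of"))  # Nothing to Be Afraid Of
```

This sketch ignores the number prefixes, quotes, and URLs mentioned above; the real package handles those.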

python titlecase: tons of examples in its documentation.

pip3 install titlecase 

It includes a command-line tool named titlecase:
titlecase -f input.txt -o output.txt

Try not to feed it all UPPERCASE; it's better to feed it all lowercase. It will usually preserve acronyms (FBI, etc.) if they're already capitalized.

Say you have chapters with numbers: as long as you put a ':' after the number it works (the spaces are optional). A colon :, semicolon ;, or hyphen - all work:

3: a victory on a massive scale
3: A Victory on a Massive Scale

25 - the conquest of the world
25 - The Conquest of the World

2 - a bad time and a terrible waste of money
2 - A Bad Time and a Terrible Waste of Money

nothing to be afraid of
Nothing to Be Afraid Of

'small word in quotes - "a trick, perhaps?"'
'Small Word in Quotes - "A Trick, Perhaps?"'
The original is on the bottom; the text in the image is very light and hard to read. ScanTailor, with Otsu and other algorithms, darkens the text, as seen in the top image.
Use xreader instead of evince (which can't zoom thumbnails) as your PDF reader, so you can zoom in on thumbnails to quickly locate chapters and assign page numbers to them for booky.sh.

In this case I typed them out and didn't bother capitalizing anything, which saved me some time.
Convert to title case with article exceptions, and notice the : after each chapter number:
titlecase -f aaa_index.txt -o titlecase.txt
Generate an index for the PDF with booky.sh:
booky.sh output_ocr.pdf titlecase.txt
This bash script generates bookmarks automatically, as shown in the screenshot on the right. On the left, PDFSam Basic lets you choose whether to retain existing bookmarks. Here the Bible already had bookmarks, so it retains them.

Quite an impressive script for quickly getting bookmarks. I'll spend a little time trying to figure out how to retain existing bookmarks, but most likely that's over my head.

Combine multiple PDFs into a single PDF, creating one bookmark (the filename) for each PDF in the directory. Won't retain existing bookmarks like PDFSam can.
Author: Mateen Ulhaq
#!/bin/bash

out_file="combined.pdf"
tmp_dir="/tmp/pdftk_unite"
bookmarks_file="$tmp_dir/bookmarks.txt"
bookmarks_fmt="BookmarkBegin
BookmarkTitle: %s
BookmarkLevel: 1
BookmarkPageNumber: 1
"

rm -rf "$tmp_dir"
mkdir -p "$tmp_dir"

for f in *.pdf; do
    echo "Bookmarking $f..."
    title="${f%.*}"
    printf "$bookmarks_fmt" "$title" > "$bookmarks_file"
    pdftk "$f" update_info "$bookmarks_file" output "$tmp_dir/$f"
done

pdftk "$tmp_dir"/*.pdf cat output "$out_file"
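On the retain-existing-bookmarks wish: one hedged sketch (untested against real PDFs; the dump text below is illustrative) would be to pull each file's bookmarks with pdftk's dump_data, offset the page numbers by the pages that came before, and feed the merged list back through update_info:

```python
def merge_bookmarks(dumps):
    """dumps: list of (dump_data_text, page_count), one per input PDF."""
    out, offset = [], 0
    for text, pages in dumps:
        for line in text.splitlines():
            if line.startswith("BookmarkPageNumber:"):
                # shift this bookmark by the pages of all earlier PDFs
                page = int(line.split(":", 1)[1])
                out.append(f"BookmarkPageNumber: {page + offset}")
            elif line.startswith("Bookmark"):
                out.append(line)  # keep Begin/Title/Level lines as-is
        offset += pages
    return "\n".join(out)

# Illustrative dump_data-style records for two PDFs of 10 and 8 pages.
book1 = "BookmarkBegin\nBookmarkTitle: Intro\nBookmarkLevel: 1\nBookmarkPageNumber: 1"
book2 = "BookmarkBegin\nBookmarkTitle: Part Two\nBookmarkLevel: 1\nBookmarkPageNumber: 3"
print(merge_bookmarks([(book1, 10), (book2, 8)]))
```

Each PDF's page count is available from the NumberOfPages: line that dump_data also emits.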
This is pretty cool... Adobe renamed ClearScan to Edit Text in Image in 2015. It basically builds anti-aliased outlines around the text and vectorizes it. I spent many hours trying to get this to work and will spare you the details of trying this and that; here is what actually did work:

This downloads about 700MB and installs 1.8GB of texlive, the LaTeX packages recommended by pdfsak. You may skip the fonts and extras and see if it works, but the base package and potrace are definitely required.

sudo apt install texlive-latex-base texlive-fonts-recommended \
  texlive-fonts-extra texlive-latex-extra potrace

Download the AppImage of magick (ImageMagick), right-click it and under Permissions check Allow executing, then:
sudo cp magick /usr/bin/
Now install pdfsak with pip:
pip3 install --upgrade pdfsak
pdfsak -if input.pdf -o clearscan.pdf --clearscan

screenshots:
original on top (blurry)
clearscan on bottom

A few years ago, back when I was on a Mac, I used Adobe Acrobat DC Pro and remember it could replace the image with text by creating a custom font on the fly, producing a much, much smaller file. You still need to manually go through and check the accuracy of the words, so it's still somewhat time consuming. pdfsak doesn't do that, but it still really helps clean up old PDFs.
The screenshot shows a zoomed-in view after clearscan. Notice the many tiny dots.

It's best to despeckle (remove the tiny dots) in ScanTailor Advanced (unpaper is over my head). Some PDFs can be clearscanned first and then fed to ScanTailor, but old ones with a crappy background, double pages, etc. will have to go through ScanTailor first: ScanTailor it, img2pdf the result to a PDF, clearscan that, go back to ScanTailor to despeckle, then on to ocrmypdf.
A despeckle setting of 2.5 in ScanTailor, used after clearscan, looks like it removes ALL the dots. 2.0 removed almost all of them (marked in red), say 95%, but not all. The default is 1 and the max is 3.
This one is the true test.

screenshot
top one is ScanTailor only

The bottom one is clearscan (not cleanscan; I get it confused with ocrmypdf, which has a --clean option) and then processed with ScanTailor. The file size is a tad bigger, but I believe the improvement is worth it.
ScanTailorOnly.pdf is 109KiB
ClearScan-ScanTailor.pdf is 272KiB

I need to do this on an entire book to see if it doubles or triples the size. If so, I guess it may not be worth it.
edit: ClearScan not CleanScan
edit2: it seems I'll use ClearScan only when the image is rather good to start with, meaning the text in the image isn't blurred all that much.