Shows the white space to be cropped / trimmed / removed automatically with k2pdfopt.
k2pdfopt input.pdf -ui- -x -mode tm -om 0.01,0.01,0.01,0.01 -c

outputs a file named input_k2opt.pdf
-ui- disables the interactive GUI on Linux
-x exits when finished
-c color output, as the default is black and white
-mode tm (trim margins / auto crop)
-om 0.01,0.01,0.01,0.01 output margins: adds just a little bit of margin to the left, top, right, and bottom of pages
OCR works, but I'll stick with ocrmypdf: when k2pdfopt OCRs text and images, it converts the text to images, which is terrible. Images end up with overlaid OCR text. In ocrmypdf you can --force-ocr it and it will keep your text as text and overlay OCR text on images too. Huge difference.
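For reference, a minimal --force-ocr run looks like this (a sketch; input.pdf and output.pdf are placeholder names):

ocrmypdf --force-ocr input.pdf output.pdf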
Still, if you want k2pdfopt's built-in OCR:

k2pdfopt input.pdf -ui- -p 1-4 -x -mode tm -om 0.01,0.01,0.01,0.01 -ocr t -ocrhmax 1.5 -ocrdpi 400 -ocrvis s -ocrd p -c

-p 1-4 page range (pages 1 to 4); 1- is page 1 to end, e for even and o for odd pages
-ocrd p sends Tesseract a page at a time rather than a line at a time. This was necessary.
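If you have a whole folder of PDFs to trim, a simple loop works (a sketch; assumes the PDFs are in the current directory, and k2pdfopt writes each result as *_k2opt.pdf as above):

# trim margins on every PDF in the current directory
for f in *.pdf; do
  k2pdfopt "$f" -ui- -x -mode tm -om 0.01,0.01,0.01,0.01 -c
done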
ScanTailor a PDF from 19.2MB (on the left side, with yellow background) to 3x smaller at 6.5MB (on the right side), put uniform margins on each page, and add a chapter index.

1) extract images with pdfimages
2) change the black background with white text to a white background with black text using convert
3) ScanTailor: margins, deskew, despeckle
4) combine the .tif images and pipe to ocrmypdf for OCR and image optimization
5) create bookmarks / index with booky.sh
pdfimages -list -f 1 -l 5 Conquest-of-a-Continent.pdf
-list only lists the images in the PDF without extracting them
-f first page to process
-l last page to process

Showing just pages 1 to 5, you can see each page has 3 multi-layered images: an image with an alpha channel, a yellow background, and an smask (soft mask) jbig2.
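To quickly count how many images each page has (a sketch; NR>2 skips the two header lines that -list prints before the data rows):

pdfimages -list Conquest-of-a-Continent.pdf | awk 'NR>2 {print $1}' | sort -n | uniq -c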
In the directory with the Conquest of a Continent PDF:

mkdir images

Extract all images from the PDF; -p prepends the page number to each image:

pdfimages -jp2 -p Conquest-of-a-Continent.pdf images/img

img-001-000.jp2
img-001-001.jp2
img-001-002.pbm
img-002-003.jp2
img-002-004.jp2
img-002-005.pbm
img-003-006.jp2
img-003-007.jp2
img-003-008.pbm
The cover looks bad, so screenshot it and save it as -001cover.jpg to the images/ directory. (If you save the image from the PDF it'll have an alpha channel.) We only want the pbm files, so delete the .jp2 files:
cd images/
rm *.jp2

The pbm images are inverted, so we need to negate (invert) them with convert to get a white background with black text. We'll also convert them to png so ScanTailor can read them:
for f in *.pbm; do convert "$f" -negate "${f%.pbm}.png"; done
Check to make sure the png images look good, then delete the pbm images:

rm *.pbm
edit: Oops, found out ScanTailor can convert that yellow background to white, so no need for pdfimages and inverting, etc. If the PDF has masks, soft masks, or stencils (multi-layered) and you want one image per page instead of 3, use pdftoppm instead of pdfimages:
pdftoppm -png -r 300 input.pdf dump/img

Now add the directory of pngs in ScanTailor Advanced. On Linux, ScanTailor Advanced has an AppImage or Flatpak. On import I had to Fix DPI, so I chose 300 for all images.
In ScanTailor, after checking Orientation, Split Pages (for one page from two-page scans), Deskew (slightly rotated pages), and Select Content (defines text and image areas), I let it batch process all pages by clicking the green arrow. Set Margins and apply to "This page and the following ones".
Before batch processing it's best to select page 1. Once output is done batch processing, it saves *.tif files in a subdirectory named out/ and you can close the project and exit ScanTailor.

Combine all images into a PDF with img2pdf and pipe it to ocrmypdf (notice the - by itself). A4 sets all pages to that size, with the downside being some white margins on the cover, but all pages are uniform. A4 is close in size to Letter.

img2pdf --pagesize A4 out/*.tif | ocrmypdf --optimize 3 --jbig2-lossy - ../output_ocr.pdf

booky.sh is a script which lets you edit a text file to generate bookmarks / an outline. Install booky by unzipping the code file, then putting it somewhere safe to keep. Then add its directory to your PATH so it can be run from wherever the PDF is.
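A sketch of that install, with booky.zip as a placeholder name and the directory used in the PATH below:

unzip booky.zip -d ~/appimages/apps/booky/
chmod +x ~/appimages/apps/booky/booky.sh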
echo $PATH

/home/geektips/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/sbin:/usr/sbin:/sbin/
booky's directory isn't in there yet, so export it to your path: nano ~/.bashrc and at the end of the file append

export PATH=$PATH:/home/geektips/appimages/apps/booky/

Save it, then refresh bash without logging out:

source ~/.bashrc
echo $PATH

/home/geektips/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/sbin:/usr/sbin:/sbin:/home/geektips/appimages/apps/booky/
For booky, make a text file and save it as index.txt (or whatever). Copy from the PDF, or use NormCap or TextSnatcher to screenshot-OCR the chapters, then paste and edit them in a text editor. You can have line breaks between chapters, it doesn't matter. Just put a comma , after the chapter name, then the page #. Only one chapter per line.

The easiest way to find chapters, if the PDF lists the page #, is to search for said page # while looking for a matching chapter title in the left pane. It still takes about 5 to 10 minutes depending on how many chapters.
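For illustration, an index.txt might look like this (chapter names and page numbers here are hypothetical):

Introduction,1
Maps,9
The Conquest of the World,25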
Run booky:

booky.sh Conquest\ of\ a\ Continent.pdf index.txt
and it creates a new PDF with your chapter index, appending _new to the filename. Note: put a comma after Maps like so: Maps,

Split double-page scans into one PDF with ScanTailor Advanced

Showing the original pages 54 and 55: the PDF is 46.4MB and the final PDF is 1.5MB. This is the biggest reason to optimize these files, besides making them searchable with OCR.
mkdir dump

pdftoppm -tiff -tiffcompression deflate -r 300 What-About-The-Seedline-Doctrine.pdf dump/img

or extract to png instead of tiff if you have a fast computer:

pdftoppm -png -r 300 What-About-The-Seedline-Doctrine.pdf dump/img

-r 300 sets the DPI to 300... can do 600 if you wish. Deflate compression is ~30% better in file size than LZW and only about 10% slower.

In ScanTailor Advanced, open the directory of png images.
edit: I used to use pdfimages but it will extract each layer per page. Only Master PDF Editor's Export All Images actually flattens, but it puts a DEMO watermark. pdftoppm can do the same by flattening the multi-layered images in a PDF.

Make covers fit exactly without margins in an A4 PDF: just resize an image to exactly 595 x 842 and save it as a .tif image. If you wish to add one after you've already OCR'd the PDF, then use PDF Arranger or PDF Slicer to remove the existing cover and add a new cover. Save the file to .jpg, then
img2pdf -S A4 001cover2x.jpg -o 001cover.pdf
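For the resize step itself, a minimal ImageMagick sketch (cover.jpg is a placeholder source file; the ! forces the exact 595 x 842 size regardless of aspect ratio):

convert cover.jpg -resize '595x842!' 001cover2x.jpg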
Extract URLs from a webpage

https://www.convertcsv.com/url-extractor.htm
I wish to download an entire podcast series to turn it into a few opus audiobooks. On Soundcloud I couldn't figure out how to get the URLs. On Podbean I would have had to input 50 different pages. Radiopublic.com had all episodes (500) listed on one page, enabling extraction of all URLs at once.

1) Load the URL from radiopublic
2) filter URLs that only have mp3
3) extract, then save the file as linksextracted.txt or whatever you wish
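A rough command-line alternative to the web extractor (a sketch; assumes the mp3 URLs appear verbatim in the page source, which isn't true of every site):

curl -s 'https://example.com/podcast-page' | grep -oE 'https?://[^"]+\.mp3' | sort -u > linksextracted.txt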
I had to change the filenames though.

yt-dlp -j -a linksextracted.txt

-j, --dump-json prints the JSON metadata for each URL without downloading
With this command I can find the name I want, as it's giving me TstlYjWBcmTs.128 [TstlYjWBcmTs.128].mp3, which I don't want. Looking at the output, it appears to be using the webpage_url_basename, so I want to use the output template option with original_url. From yt-dlp's output template fields:

• webpage_url (string): A URL to the video webpage which if given to yt-dlp should allow to get the same result again
• webpage_url_basename (string): The basename of the webpage URL
• webpage_url_domain (string): The domain of the webpage URL
• original_url (string): The URL given by the user (or same as webpage_url for playlist entries)
yt-dlp -o "%(original_url)s.%(ext)s" -a linksextracted.txt
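To go from the downloaded mp3s to an opus audiobook, one possible ffmpeg approach (a sketch, not from the original workflow; assumes the files sort into episode order and that a 32k opus bitrate is fine for speech):

# build a concat list of the mp3s, then encode a single opus file
printf "file '%s'\n" *.mp3 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c:a libopus -b:a 32k audiobook.opus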
In Text Editor (xed) just press Ctrl-U for UPPERCASE, Ctrl-L for lowercase, Ctrl-T for Title Case, but for Title Case it's better to handle exceptions.

Title Case with exceptions for:
articles: a, an, the
prepositions: at, by, in, on, of, with
conjunctions: and, but, for, nor, so
except if they're the first or last word of a line
first and last word always capitalized, even for articles, prepositions, and conjunctions
single and double quotes handled
keep URLs lowercase
custom exception list, for say acronyms:
touch ~/.titlecase.txt
xed ~/.titlecase.txt
Add your exceptions to that file and then it will always keep those lowercase.
python titlecase (tons of examples on its GitHub page, linked below):

pip3 install titlecase

It includes a command-line tool named titlecase:
titlecase -f input.txt -o output.txt
Try not to feed it all UPPERCASE; it's better to feed it all lowercase. It will usually preserve acronyms (FBI, etc.) if already capitalized.
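To lowercase a file first, a quick sketch with standard tools (lowered.txt is a placeholder intermediate file):

tr '[:upper:]' '[:lower:]' < input.txt > lowered.txt
titlecase -f lowered.txt -o output.txt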
Say you have chapters with numbers... as long as you put a ': ' after the number (spaces are optional). A colon :, a semicolon ;, or a hyphen - all work:

3: a victory on a massive scale
3: A Victory on a Massive Scale
25 - the conquest of the world
25 - The Conquest of the World
2 - a bad time and a terrible waste of money
2 - A Bad Time and a Terrible Waste of Money
nothing to be afraid of
Nothing to Be Afraid Of
'small word in quotes - "a trick, perhaps?"'
'Small Word in Quotes - "A Trick, Perhaps?"'
GitHub - ppannuto/python-titlecase: Python library to capitalize strings as specified by the New York Times Manual of Style