I recently created scripts that saved me hours and will benefit future users of my project. This inspired me to share them here, and I may add more over time. A GitHub repository could house all these scripts! :)
It’s Time to Develop Scripts for Automation
Some of these scripts are short and simple, yet they can still be useful, even if they are only needed occasionally...
1. Copy the Contents of a File to the System Clipboard
Dependencies: xclip
copy_file_to_clipboard() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "copy_file_to_clipboard: Copies the contents of a file to the clipboard."
    echo "Usage: copy_file_to_clipboard <file_path>"
    return
  fi
  # Declare and assign separately so realpath's exit status isn't masked
  local file_path
  file_path=$(realpath "${1}") || return 1
  xclip -selection clipboard <"${file_path}"
}
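Once the function is sourced into your shell (from your .bashrc or .zshrc, for example), usage is a one-liner. The path below is just an illustration:
copy_file_to_clipboard ~/.ssh/id_ed25519.pub # e.g. grab a public key to paste into a web form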
2. Format C and C++ Files
Dependencies: clang-format
format_c_and_cplusplus_files() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "format_c_and_cplusplus_files: Formats C and C++ files using clang-format."
    echo "Usage: format_c_and_cplusplus_files [style]"
    return
  fi
  local style="${1:-chromium}"
  # The globs must stay unquoted so the shell expands them into file names
  clang-format -i --style "${style}" *.c &&
    clang-format -i --style "${style}" *.cpp
}
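For example, run it from the directory that holds your sources; the style argument is anything clang-format accepts:
format_c_and_cplusplus_files # default chromium style
format_c_and_cplusplus_files llvm # or any other built-in style
# A possible recursive variant for whole projects:
find . \( -name '*.c' -o -name '*.cpp' \) -exec clang-format -i --style=chromium {} +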
3. Format Shell Script Files
Dependencies: shfmt (the function as written also relies on zsh for NULL_GLOB)
format_bash_scripts() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "format_bash_scripts: Formats Bash scripts using shfmt."
    echo "Usage: format_bash_scripts [indent]"
    return
  fi
  # LOCAL_OPTIONS restores the option on return; NULL_GLOB avoids the
  # "no matches found" error when there are no .sh files
  setopt LOCAL_OPTIONS NULL_GLOB
  local sh_files=(*.sh)
  if [[ "${#sh_files[@]}" -eq 0 ]]; then
    echo "No .sh files found in the current directory."
    return
  fi
  local indent="${1:-2}"
  shfmt -i "${indent}" -w "${sh_files[@]}"
}
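A quick sketch of how I call it (remember it relies on zsh's NULL_GLOB option):
format_bash_scripts # 2-space indent by default
format_bash_scripts 4 # 4-space indent instead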
4. Format and Validate Python Script Files
Dependencies: ruff
format_and_lint_python_scripts() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "format_and_lint_python_scripts: Formats and checks Python scripts using ruff."
    echo "Usage: format_and_lint_python_scripts"
    return
  fi
  ruff format *.py && ruff check *.py
}
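Usage is a single call in the project directory; if you also want ruff to auto-fix what it safely can, that's one extra command:
format_and_lint_python_scripts
ruff check --fix *.py # optional follow-up for auto-fixable issues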
5. Convert Text to URL Slug
You might be wondering why this exists at all! Well, I've got to admit, I'm a bit lazier than you might think! :)
Dependencies: none beyond the standard tr and sed utilities
text_to_slug() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "text_to_slug: Converts the given text to a slug."
    echo "Usage: text_to_slug <text>"
    return
  fi
  # Lowercase, turn runs of non-word characters into hyphens, and trim stray
  # hyphens from the ends so trailing punctuation doesn't leave a dangling "-"
  tr '[:upper:]' '[:lower:]' <<<"${1}" | sed -E 's/\W+/-/g; s/^-+|-+$//g'
}
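Here is roughly what to expect (output shown as a comment); note that the apostrophe counts as a non-word character too:
text_to_slug "Hello, World! It's Slug Time"
# hello-world-it-s-slug-time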
6. Correct Punctuation and Spacing in Text
#!/usr/bin/env python
import re
import sys

# Join the CLI arguments into one string, or fall back to the sample text
original_text = (
    " ".join(sys.argv[1:])
    if len(sys.argv) >= 2
    else """
In the heart of the city ,the abandoned theater loomed like a forgotten relic.
Why had it been left to decay ?! Some said it was haunted, others claimed
it was cursed . I had to find out for myself—what if there was a hidden
history waiting to be discovered?! As I pushed open the heavy doors,
a chill ran down my spine.The air was thick with dust and secrets.
Could I unravel the mystery before it was too late? ?! I ventured deeper ,
my footsteps echoing in the empty halls..
Read more :
https://example.com
"""
)
# Remove excess whitespace
cleaned_text = re.sub(r"[ \t]{2,}", " ", original_text)
# No space before common punctuation
cleaned_text = re.sub(r"\s+([\?\.\!,;\:]+)", r"\1 ", cleaned_text)
# Replace multiple dots with an ellipsis
cleaned_text = re.sub(r"\.{2,}", "...", cleaned_text)
# Just maintain the line breaks
cleaned_text = re.sub(r"[ \t]*([\n\r]+)[ \t]*", r"\1", cleaned_text)
# Protect URLs by replacing them with a temporary placeholder
urls = re.findall(r"(?:https?|ftp)://\S+", cleaned_text)
for i, url in enumerate(urls):
    cleaned_text = cleaned_text.replace(url, f"URL_PLACEHOLDER_{i}")
# Split the text into sentences
tokenized_sentences = [
    sentence.strip()
    for sentence in re.split(r"(?<=[\.\?\!])[ \t]*(?=[A-Za-z0-9])", cleaned_text)
    if sentence
]
# Restore the URLs
for i, url in enumerate(urls):
    tokenized_sentences = [
        sentence.replace(f"URL_PLACEHOLDER_{i}", url)
        for sentence in tokenized_sentences
    ]
print(" ".join(tokenized_sentences))
7. Convert Text From CLI to Speech Using Google TTS
The gTTS library can already do this, but I wrote this version myself to fill some free time.
Dependencies: requests
#!/usr/bin/env python
import sys
import argparse
import urllib.parse
import requests
# Define constants
GOOGLE_TTS_URL = (
    "https://translate.google.com/translate_tts?ie=utf-8&client=tw-ob"
    "&tl={target_lang}&q={text}"
)
DEFAULT_TEXT = "Hello! This is your personalized report."
DEFAULT_LANG = "en"
DEFAULT_OUTPUT = (
    "C:\\Windows\\Temp\\generated_audio.mp3"
    if sys.platform in ("win32", "cygwin", "msys")
    else "/tmp/generated_audio.mp3"
)
# Argument parser setup
parser = argparse.ArgumentParser(description="Convert text to speech using Google TTS.")
parser.add_argument(
    "-t", "--text", type=str, default=DEFAULT_TEXT, help="text to convert to speech"
)
parser.add_argument(
    "-l",
    "--lang",
    type=str,
    default=DEFAULT_LANG,
    help=f"target language (default: {DEFAULT_LANG})",
)
parser.add_argument(
    "-o",
    "--output",
    type=str,
    default=DEFAULT_OUTPUT,
    help=f"output file (default: {DEFAULT_OUTPUT})",
)
args = parser.parse_args()
# Generate URL
encoded_text = urllib.parse.quote(args.text)
url = GOOGLE_TTS_URL.format(target_lang=args.lang, text=encoded_text)
# Fetch speech data
try:
    response = requests.get(url, timeout=30)  # Avoid hanging indefinitely on network issues
    response.raise_for_status()
except requests.exceptions.HTTPError as e:
    print(f"HTTP error: {e}")
    sys.exit(1)
except Exception as e:
    print(e)
    sys.exit(1)
# Save the speech data to a file
with open(args.output, "wb") as file:
    file.write(response.content)
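A hypothetical run, assuming the script is saved as google_tts.py; any audio player can handle the result:
python google_tts.py -t "Good morning!" -l en -o greeting.mp3
mpv greeting.mp3 # or mplayer, vlc, etc.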
8. Collect V2Ray Configs Using Subscription IDs
Dependencies: curl
#!/bin/bash
set -euo pipefail
readonly REPOSITORY_URL="https://raw.githubusercontent.com/barry-far/V2ray-Configs"
readonly DEFAULT_OUTPUT="v2ray_resources.txt"
if [[ "${1}" =~ ^(-h|--help)$ ]]; then
echo "Usage: ${0} <subscription_id1 subscription_id2 ...> [output_file]"
echo "Description: Download V2Ray config files from the repository and save to the output file."
echo "If no output_file is specified, ${DEFAULT_OUTPUT} will be used by default."
exit 0
fi
# Determine the output file
if [[ "${@: -1}" =~ ^[0-9]+$ ]]; then
output_file="${DEFAULT_OUTPUT}"
subscription_ids=("${@}")
else
output_file="${@: -1}" # Last argument is the output file
subscription_ids=("${@:1:$((${#} - 1))}") # Store all but the last argument
fi
# Download and save V2Ray configurations
for id in "${subscription_ids[@]}"; do
curl -s "${REPOSITORY_URL}/main/Sub${id}.txt" >>"${output_file}" &
done
# Wait for all background jobs to finish
wait
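For instance, after saving it as collect_v2ray_configs.sh (my name for it) and making it executable:
./collect_v2ray_configs.sh 1 2 3 # appends Sub1, Sub2, and Sub3 to v2ray_resources.txt
./collect_v2ray_configs.sh 1 2 3 my_configs.txt # or choose the output file yourself
Keep in mind that the downloads run in parallel, so the order of configs in the output file is not guaranteed.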
9. Easily Download Images From DDG With a Query
This is for my Thinga project, but I believe it also fits well here...
Dependencies: playwright, aiohttp, and aiofiles (plus a Playwright browser, e.g. playwright install chromium)
#!/usr/bin/env python
import argparse
import os
import uuid
import random
import re
import asyncio
import http
import urllib.parse
from typing import Optional

import aiohttp
import aiofiles
from playwright.async_api import Page, async_playwright


def _correct_image_url(url: str) -> str:
    """Preprocesses the image URL to remove extra slashes."""
    corrected_url = re.sub(r"^//", r"https://", url)
    return corrected_url


async def collect_image_urls(
    page: Page,
    query: str,
    max_images: int,
) -> list[str]:
    """Collects image URLs from DuckDuckGo."""
    # Encode the query so spaces and special characters survive the URL
    encoded_query = urllib.parse.quote_plus(query)
    await page.goto(
        f"https://duckduckgo.com/?t=h_&q={encoded_query}&iax=images&ia=images",
        wait_until="domcontentloaded",
    )
    await asyncio.sleep(8)  # Wait for the page to load
    image_elements = await page.query_selector_all(
        "xpath=//img[contains(@class, 'tile--img__img')]"
    )
    random.shuffle(image_elements)
    image_urls = [
        _correct_image_url(await img.get_attribute("src"))
        for img in image_elements[:max_images]
    ]
    return image_urls


def _get_file_extension_from_mime(mime: str) -> Optional[str]:
    """Converts a MIME to a file extension."""
    mime_to_extension = {
        "image/jpeg": ".jpg",
        "image/png": ".png",
        "image/gif": ".gif",
    }
    return mime_to_extension.get(mime)


async def _save_image(
    response: aiohttp.ClientResponse,
    output_dir: str,
) -> None:
    """Saves the image to the specified directory."""
    content_type = response.headers.get("content-type")
    file_extension = _get_file_extension_from_mime(content_type)
    if file_extension is None:
        print(f"Skipping `{response.url}` due to unsupported MIME type...")
        return None
    unique_id = uuid.uuid4().hex[:15]
    file_path = os.path.join(output_dir, f"{unique_id}{file_extension}")
    async with aiofiles.open(file_path, mode="wb") as f:
        async for chunk in response.content:
            await f.write(chunk)
    print(f"Downloaded `{response.url}` to `{file_path}`.")


async def download_images(image_urls: list[str], output_dir: str) -> None:
    """Downloads images from the given URLs and saves them to the specified directory."""
    os.makedirs(output_dir, exist_ok=True)
    async with aiohttp.ClientSession() as session:
        for image_url in image_urls:
            try:
                async with session.get(image_url) as response:
                    if response.status != http.HTTPStatus.OK:
                        print(f"Failed to download `{image_url}`...")
                        continue
                    await _save_image(response, output_dir)
            except Exception as e:
                print(f"Error downloading `{image_url}`: {e}")


async def main(args: argparse.Namespace) -> None:
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        context = await browser.new_context()
        page = await context.new_page()
        image_urls = await collect_image_urls(page, args.query, args.max_images)
        await download_images(image_urls, args.output_dir)
        await browser.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Download images from DuckDuckGo.")
    parser.add_argument("query", type=str, help="write what you are looking for")
    parser.add_argument(
        "-m",
        "--max-images",
        type=int,
        default=5,
        help="maximum images to download",
    )
    parser.add_argument(
        "-o",
        "--output-dir",
        type=str,
        default="collected-images",
        help="a folder to store images",
    )
    args = parser.parse_args()
    asyncio.run(main(args))
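Assuming the script lives in download_ddg_images.py and Playwright's browser is installed (playwright install chromium), a sample run looks like this:
python download_ddg_images.py "red panda" -m 10 -o red-pandas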
10. Use Bionic Reading to Read Faster
It may not strictly belong in the automation category, but in my personal experience it can be very useful at times!
#!/usr/bin/env python
import os
import re
import argparse

DEFAULT = "\033[0m"
FOCUS = "\033[1;31m"


def _process_token(token: str) -> str:
    word = re.search(r"[A-Za-z]+", token)
    if not word:
        return token
    word = word.group()
    # Highlight roughly the first half of each word as the "fixation" point
    divide_point = (len(word) + 1) // 2
    primary, secondary = word[:divide_point], word[divide_point:]
    styled_word = f"{FOCUS}{primary}{DEFAULT}{secondary}"
    return token.replace(word, styled_word)


def _process_file(file_path: str) -> None:
    if not os.path.isfile(file_path):
        print(f"File not found: {file_path}")
        return
    with open(file_path, "r") as file:
        lines = file.readlines()
    for line in lines:
        line = line.rstrip()
        tokens = re.findall(r"\s*\S+", line)
        processed_line = "".join(_process_token(token) for token in tokens)
        print(processed_line)


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Bionic reading script to highlight parts of words for easier reading."
    )
    parser.add_argument(
        "file_path", metavar="FILE", help="path to the file to be processed"
    )
    args = parser.parse_args()
    _process_file(args.file_path)


if __name__ == "__main__":
    main()
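For example, piping a file through it with my hypothetical filename bionic_read.py; the -R flag tells less to pass the ANSI color codes through instead of escaping them:
python bionic_read.py README.md | less -R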
11. Load Your Environment Variables From a File
load_env_variables() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "load_env_variables: Loads the available environment variables from the specified file."
    echo "Usage: load_env_variables <file_path>"
    return
  fi
  local file_path="${1:-.env}"
  if [[ ! -f "${file_path}" ]]; then
    echo "File not found: ${file_path}"
    return 1
  fi
  # Skip blank lines and comments; everything after the first '=' is the value
  while IFS='=' read -r key value; do
    if [[ -z "${key}" || "${key}" =~ ^# ]]; then
      continue
    fi
    export "${key}=${value}"
  done <"${file_path}"
}
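A small end-to-end sketch with a made-up .env file:
printf 'API_KEY=abc123\n# a comment\nDB_HOST=localhost\n' >.env
load_env_variables .env
echo "${DB_HOST}" # -> localhost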
12. Generate Unreserved Random Ports
Giving each backend project its own random, unreserved port avoids conflicts and lets several projects run independently on a single server.
Dependencies: netstat (usually from the net-tools package)
generate_free_port() {
  if [[ "${1}" =~ ^(-h|--help)$ ]]; then
    echo "generate_free_port: Generates a random unreserved port."
    echo "Usage: generate_free_port [lower_bound_port] [upper_bound_port]"
    return
  fi
  local lower_bound_port="${1:-8000}"
  local upper_bound_port="${2:-9500}"
  # Progress messages go to stderr so stdout carries only the port itself
  echo "Validating port range from ${lower_bound_port} to ${upper_bound_port}..." >&2
  if [[ "${lower_bound_port}" -gt "${upper_bound_port}" ]]; then
    echo "Error: Minimum port (${lower_bound_port}) must be less than or equal to maximum port (${upper_bound_port})." >&2
    return 1
  fi
  while true; do
    local random_port=$((RANDOM % (upper_bound_port - lower_bound_port + 1) + lower_bound_port))
    echo "Checking port ${random_port}..." >&2
    if [[ $(netstat -tln | grep -c ":${random_port} ") -eq 0 ]]; then
      echo "${random_port}"
      return 0
    else
      echo "Port ${random_port} is in use. Trying another one..." >&2
    fi
  done
}
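Because the progress messages go to stderr, only the port itself lands on stdout, so the function composes nicely with command substitution:
port=$(generate_free_port 3000 4000)
echo "Starting a throwaway server on port ${port}..."
python -m http.server "${port}" # just as an example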
13. Download and Install Nerd Fonts From Its Source
Feel free to use fonts other than Nerd Fonts if you'd like, but...
Dependencies: curl, unzip, and fontconfig (for fc-cache)
#!/bin/bash
set -eu

# Capture and save all Nerd Fonts download URLs for future fun! :)
# wget -O- https://www.nerdfonts.com/font-downloads |
#   grep -Eo 'href="[^"]+\.zip' |
#   sed 's/href="//' >./nerd_fonts_urls.txt

if [[ "${#}" -eq 0 || "${1}" =~ ^(-h|--help)$ ]]; then
  echo "Usage: ${0} <font_source_url1 font_source_url2 ...>"
  echo "Description: Download and install Nerd Fonts from provided URLs."
  exit 0
fi

readonly FONT_DIR="${HOME}/.fonts"
mkdir -p "${FONT_DIR}"

for font_source_url in "${@}"; do
  temp_folder=$(mktemp -d)
  echo "Downloading '${font_source_url}'..."
  curl -L "${font_source_url}" -o "${temp_folder}/font.zip"
  echo "Extracting and copying font files..."
  unzip -q "${temp_folder}/font.zip" -d "${temp_folder}"
  find "${temp_folder}" \( -iname "*.ttf" -o -iname "*.otf" \) -exec cp {} "${FONT_DIR}" \;
  # Remove the temp folder so repeated runs don't pile up in /tmp
  echo "Cleaning up..."
  rm -rf "${temp_folder}"
  echo "Font from '${font_source_url}' installed successfully."
done

echo "Updating font cache..."
fc-cache -fv

echo "All fonts installed successfully."
14. Copy Multiple Files’ Contents to the Clipboard
Dependencies: pyperclip
#!/usr/bin/env python
import os
import sys
from typing import Optional

import pyperclip


def collect_file_content(file_path: str) -> Optional[str]:
    try:
        file_absolute_path = os.path.abspath(file_path)
        print(f"Collecting content from {file_absolute_path!r}...")
        with open(file_absolute_path, "r") as file:
            file_header = f"# File: {file_path}\n{file.read()}"
        return file_header
    except FileNotFoundError:
        print(f"Error: File {file_path!r} not found.")
        return None
    except Exception as e:
        print(f"Error reading {file_path!r}: {e}")
        return None


def main() -> None:
    if len(sys.argv) < 2:
        print(f"Usage: python {sys.argv[0]} <file1> <file2> ...")
        sys.exit(1)
    collected_content = []
    for file_path in sys.argv[1:]:
        file_data = collect_file_content(file_path)
        if file_data is not None:
            collected_content.append(file_data)
    if not collected_content:
        print("No content collected from files!")
        sys.exit(1)
    final_content = "\n".join(collected_content)
    pyperclip.copy(final_content)
    print("Content from files copied to clipboard successfully!")


if __name__ == "__main__":
    main()
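Assuming it is saved as copy_files_to_clipboard.py (the paths below are illustrative), shell globs work out of the box:
python copy_files_to_clipboard.py src/*.py README.md
Each file lands on the clipboard prefixed with its # File: header, which is handy when pasting context into an issue or a chat.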
15. Divide Large Files Into Smaller Sections
#!/usr/bin/env python
import os
import sys


def chunk_file_by_lines(input_file_path: str, lines_per_chunk: int) -> None:
    try:
        with open(input_file_path) as file:
            lines = file.readlines()
    except FileNotFoundError:
        print(f"Error: File {input_file_path!r} not found.")
        return None
    except Exception as e:
        print(f"Error reading file: {e}")
        return None
    # Write each slice of lines_per_chunk lines straight to its own file
    for chunk_index, start_index in enumerate(range(0, len(lines), lines_per_chunk)):
        chunk = lines[start_index : start_index + lines_per_chunk]
        output_file_name = os.path.basename(input_file_path)
        output_file_path = f"{output_file_name}.chunk{chunk_index + 1}"
        with open(output_file_path, "w") as output_file:
            output_file.write("".join(chunk))
        print(f"Chunk {chunk_index + 1} saved to: {output_file_path}")


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print(f"Usage: python {sys.argv[0]} <input_file_path> <lines_per_chunk>")
        sys.exit(1)
    input_file_path = sys.argv[1]
    lines_per_chunk = int(sys.argv[2])
    chunk_file_by_lines(input_file_path, lines_per_chunk)
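For example, to split a long log into 500-line pieces (chunk_file.py and the log name are placeholders of mine):
python chunk_file.py big_server.log 500
# -> big_server.log.chunk1, big_server.log.chunk2, ...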
Conclusion
As technology advances, our world needs more and more automation to become more efficient and more pleasant to live in, but we must also consider automation's impact on employment!
If you have any suggestions that might help address the current issues and enhance performance, please feel free to share them in the comments section. Thank you! ;)
Some of my other projects and related works are as follows: