My Ez CLI
Command-line tools with no installation, using only Docker and shell scripts.
The setup script adds aliases to your ~/.zshrc file and symbolic links to your /usr/local/bin/ folder:
./setup.sh
# --------------------------------------------------------------------------------
# My Ez CLI • Setup
# --------------------------------------------------------------------------------
# Hope you enjoy it! :D
# --------------------------------------------------------------------------------
# Note: Aliases may be created in '~/.zshrc' file...
# --------------------------------------------------------------------------------
# Note: Symbolic links may be created in '/usr/local/bin/' folder...
# --------------------------------------------------------------------------------
# Warning: Root access may be needed.
# --------------------------------------------------------------------------------
# GitHub: https://github.com/DavidCardoso/my-ez-cli
# --------------------------------------------------------------------------------
# 1) ALL 7) npm 13) docker-compose-viz
# 2) aws 8) npx 14) playwright
# 3) terraform 9) yarn 15) python
# 4) cdktf 10) yarn-berry 16) promptfoo
# 5) gcloud 11) serverless 17) promptfoo-server
# 6) node 12) speedtest 18) EXIT
# Choose an option:
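For illustration, each tool here is a Docker container wrapped behind an alias. The line below is a hypothetical example of the kind of alias setup.sh might add to ~/.zshrc; the image tag and mount flags are assumptions, not the project's actual definitions:

```shell
# Hypothetical wrapper alias: run terraform from a throwaway container,
# mounting the current directory so local files are visible to the tool.
alias terraform='docker run --rm -it -v "$PWD:/app" -w /app hashicorp/terraform:latest'
```

With an alias like this, running `terraform plan` in any folder executes inside a container instead of requiring a local install.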
# help
aws help
# list buckets
aws s3 ls --profile my-aws-profile
# download a file from a bucket
aws s3 cp s3://my-bucket/my-file /path/to/local/file --profile my-aws-profile

# authenticate using MFA
aws-get-session-token <MFA_DIGITS>

# authenticate using SSO
aws-sso
# 1) configure
# 2) login
# 3) logout
# Choose an option:

If you need to see the SSO credentials being used, run:
aws-sso-cred $AWS_PROFILE
# or specify a profile of your choice
aws-sso-cred my-working-profile

The python command uses version 3.12.4 by default.
# see version
python --version
# run interpreter
python
# run a script
python main.py

This script uses the same env var as PyEnv, so all you need to do is declare PYENV_VERSION before calling the python command.
# Export it directly or add it to your profile config (e.g., .zshrc).
export PYENV_VERSION=3.9.19
python main.py
# or pass it inline
PYENV_VERSION=3.9.19 python main.py
# Note: the respective docker image will be downloaded if not found locally
# Unable to find image 'python:3.9.19' locally
# 3.9.19: Pulling from library/python
# 21988c13fd96: Download complete
# 42d758104bc9: Download complete
# 6d0099138f57: Download complete
# Digest: sha256:47d6f16aa0de11f2748c73e7af8d40eaf44146c6dc059b1d0aa1f917f8c5cc58
# Status: Downloaded newer image for python:3.9.19

The node command uses Node 22 (the current LTS version) by default.
# see node version
node -v
# run node interpreter
node
# run a node script
node somefile.js

# just add the node version as a suffix
node14 -v
node16 -v
node18 -v
node20 -v
node22 -v
node24 -v

Use the MEC_BIND_PORTS env var if you want to bind ports between the host and the container:
MEC_BIND_PORTS="8080:80 9090:80" node
MEC_BIND_PORTS="8080:80 9090:80" npm
MEC_BIND_PORTS="8080:80 9090:80" npx
MEC_BIND_PORTS="8080:80 9090:80" yarn
# or
export MEC_BIND_PORTS="8080:80 9090:80"
node
npm
yarn

To install NPM packages from a private registry, you need to provide the respective NPM_TOKEN.
Method 1: Export the NPM_TOKEN on demand
NPM_TOKEN=your-token-here yarn
NPM_TOKEN=your-token-here npm
# or
export NPM_TOKEN=your-token-here
yarn
npm

Method 2: Set it up in the ~/.npmrc config file
# ~/.npmrc example
# Set the default registry
registry=https://private.npm.registry.com/
# Example for accessing private repos using NPM_TOKEN
//private.npm.registry.com/:_authToken=${NPM_TOKEN}

Hint: you can set the token(s) in your default shell config file. Example for zsh:
echo "export NPM_TOKEN=your-token-here" >> ~/.zshrc
The npm command uses Node.js 22 by default.
# see npm version
npm -v
# create the package.json for a JS project
npm init
# install a package as dev dependency
npm install some-pkg --save-dev
# install a package globally
npm install -g another-pkg

Some NPM packages aren't compatible with older or newer Node.js versions.
# just add the node version as a suffix
npm14 -v
npm16 -v
npm18 -v
npm20 -v
npm22 -v

The npx command uses Node.js 22 by default.
# exec a standalone package
npx cowsay "Hello!"
# Need to install the following packages:
# cowsay@1.6.0
# Ok to proceed? (y) y
#  _______
# < Hello >
#  -------
#         \   ^__^
#          \  (oo)\_______
#             (__)\       )\/\
#                 ||----w |
#                 ||     ||

The yarn command uses Node 22 by default.
# see yarn version
yarn -v
# create the package.json for a JS project
yarn init
# install a package as dev dependency
yarn add some-pkg --dev
# install a package globally
yarn global add another-pkg

Some NPM packages aren't compatible with older or newer Node.js versions.
# just add the node version as a suffix
yarn14 -v
yarn16 -v
yarn18 -v
yarn20 -v
yarn22 -v

# if you have replaced yarn
yarn --version # it should show 3.6+
# otherwise
yarn-berry --version # it should show 3.6+

The serverless command is ready to work with AWS.
# see versions
serverless -v
# help
serverless --help
# Starting from a template
# note: replace "template-name" below with the folder name of the example you want to use
# method 1
serverless create \
-u https://github.com/serverless/examples/tree/master/template-name \
-n my-project-folder
# method 2 [recommended]
serverless init \
template-name \
-n my-project-folder
# Hint: if you get build errors, try this:
cd my-project-folder && yarn
# Serverless.com account login
# note: It is also possible to use an access key to authenticate via serverless CLI
serverless login
# Deploy your project
serverless deploy
# Invoke a Lambda Function
serverless invoke -f hello
# Invoke and display lambda logs
serverless invoke -f hello --log
# Fetch lambda logs
serverless logs -f hello
serverless logs -f hello --tail

Important: ensure you are using the right provider credentials/roles/permissions before executing any command.
Take a look at the Terraform AWS modules public registry and its usage examples.
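As a hypothetical starting point, a registry module can be pulled in like this (the module name, version, and values below are examples only, not part of this project):

```shell
# Write a minimal root module that pulls the community VPC module
# from the public Terraform registry (example values only).
cat > main.tf <<'EOF'
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}
EOF
# then run: terraform init && terraform plan
```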
# help
terraform -help
# start the terraform in a project
mkdir my-terraform-project
cd my-terraform-project
terraform init
# set the right environment
# useful for multiple environments
# hint: avoid using default environment
terraform workspace list
terraform workspace new ${ENVIRONMENT}
terraform workspace select ${ENVIRONMENT}
# validate terraform files
terraform validate
# see changes
terraform plan
# save changes to an output file (recommended)
terraform plan -out=tfplan
# apply changes to the providers (aws, gcp, azure, etc)
terraform apply
# apply changes using tfplan output file (recommended)
terraform apply tfplan
# destroy created resources on the providers
# warning: do not run it in production! ;D
terraform destroy

By default, the parent directory is mounted into the container. This allows files inside the parent folder to be referenced in the Terraform files.
For instance, if you need to use a Terraform module that is located two levels up
in the filesystem, you can set the CONTEXT variable before the terraform command
to define the absolute path to that module (or another folder).
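For example, a module reference like the one below (hypothetical layout) is where CONTEXT becomes necessary, because ../../ escapes the default mount:

```shell
# A Terraform file referencing a module two levels above the
# project folder (hypothetical layout):
cat > main.tf <<'EOF'
module "network" {
  source = "../../modules/network"
}
EOF
```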
# option 1
CONTEXT=$(cd "$PWD/../../" && pwd) terraform --version
CONTEXT=$(cd "$PWD/../../" && pwd) terraform init
# option 2
CONTEXT=$(cd "$PWD/../../" && pwd)
CONTEXT=$CONTEXT terraform --version
CONTEXT=$CONTEXT terraform init
# option 3
export CONTEXT=$(cd "$PWD/../../" && pwd)
terraform --version
terraform init

All variables in the DOTENV_FILE file will be available inside the container.
By default, the terraform container will use the ${PWD}/.env file.
Set a different value if you want to point to another one.
export DOTENV_FILE=local.env
terraform init
terraform plan
# or
DOTENV_FILE=local.env terraform init
DOTENV_FILE=local.env terraform plan

The TF_RC_FILE variable is used for Terraform Cloud login.
By default, the terraform container will use the ${HOME}/.terraformrc file.
Set a different value if you want to point to another one.
export TF_RC_FILE=/another/path/to/terraform-credentials/file
terraform init
# it should recognize the backend config pointing to your TF Cloud workspace(s)

The AWS_PROFILE and AWS_CREDENTIALS_FOLDER variables are used for AWS CLI authentication.
By default, the terraform container will use the ${HOME}/.aws folder.
Set a different value if you want to point to another one.
export AWS_PROFILE=your-aws-profile
export AWS_CREDENTIALS_FOLDER=/another/path/to/credentials/folder/
terraform init
terraform plan
terraform apply
# it should be able to deploy to your AWS account based on the credentials used

The GCLOUD_CREDENTIALS_FOLDER and GOOGLE_APPLICATION_CREDENTIALS variables are used for GCP CLI authentication.
By default, the terraform container will use the ${HOME}/.config/gcloud folder and the /root/.config/gcloud/application_default_credentials.json file, respectively.
The GOOGLE_APPLICATION_CREDENTIALS path starts with /root/ because root is the default user inside the container, so you should not change it to your local user.
Set different values if you want to point to other locations.
export GCLOUD_CREDENTIALS_FOLDER=/another/path/to/credentials/folder/
export GOOGLE_APPLICATION_CREDENTIALS=/root/another/path/to/credentials/file
terraform init
terraform plan
terraform apply
# it should be able to deploy to your cloud account based on the credentials used

The cdktf command is ready to work with Python.
mkdir /my/folder/learn-cdktf
cd /my/folder/learn-cdktf
cdktf --help
# starts a new project from a template
cdktf init --template="python" --providers="aws@~>4.0"

# help
speedtest --help
# run a speed test
speedtest

# If you are not logged in, run the command below and follow the steps:
# 1. Copy/paste the provided URL in your browser
# 2. Authorize using your Google account
# 3. Copy/paste the generated auth code back in your terminal
gcloud-login
# If your current project is [None] or you want to change it, set one.
gcloud config set project <PROJECT_ID>
# Test if it is working...
gcloud version
gcloud help
gcloud storage ls

docker-compose-viz creates a dependency graph in display-only, dot, or image formats
based on a docker-compose YAML file (defaults to ./docker-compose.yml).
For more info, please check its official documentation.
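To try it out, a minimal compose file is enough; the services below are hypothetical:

```shell
# Write a tiny compose file with one dependency edge to render
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
    depends_on:
      - db
  db:
    image: postgres
EOF
# then run: docker-compose-viz render --output-format=image
```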
# navigate to the directory where the docker-compose YAML file is
cd /my/project/with/docker-compose-file/
# using just default options
docker-compose-viz render
# using a custom docker compose file
docker-compose-viz render ./my-custom-docker-compose.yml
# dot output format
docker-compose-viz render --output-format=dot
# image output format
docker-compose-viz render --output-format=image
# setting the path/name of the output file
docker-compose-viz render --output-format=image --output-file=graph.png

playwright # it will open a /bin/bash shell inside the container
# then you can run the other test related commands
npx playwright install chromium
npm run test
# etc...

For more info, please check its official documentation.
promptfoo is an LLM model and prompt evaluation tool. For more info, check its official documentation.
# Environment variables that can be used:
export PROMPTFOO_CONFIG_DIR="/app/data" # folder to store promptfoo config and data
export PROMPTFOO_REMOTE_API_BASE_URL="http://localhost:33333" # API base URL
export PROMPTFOO_REMOTE_APP_BASE_URL="http://localhost:33333" # UI base URL
export OPENAI_API_KEY="your-key-here" # OpenAI API key
export ANTHROPIC_API_KEY="your-key-here" # Anthropic API key
export AZURE_API_KEY="your-key-here" # Azure API key
export LITELLM_API_KEY="your-key-here" # LiteLLM API key
export GITHUB_TOKEN="your-token-here" # GitHub token
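promptfoo eval reads a promptfooconfig.yaml from the working directory; below is a minimal hypothetical config (the prompt, provider, and test variables are placeholders, not part of this project):

```shell
# Write a minimal promptfoo config: one prompt, one provider, one test case
cat > promptfooconfig.yaml <<'EOF'
prompts:
  - "Write a one-line summary of: {{text}}"
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      text: "Docker lets you run tools without installing them."
EOF
# then run: promptfoo eval
```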
# see version
promptfoo --version
# help
promptfoo --help
# run an evaluation
promptfoo eval
# share the evaluation results with your team
promptfoo share
# or do both in one command
promptfoo eval --share

promptfoo-server runs a self-hosted Promptfoo server for the API and UI. For more info, check its official documentation.
# Environment variables that can be used:
export PROMPTFOO_CONTAINER_SUFFIX="my-suffix" # unique identifier for container name (default: timestamp)
export PROMPTFOO_CONFIG_DIR="/app/data" # folder to store promptfoo config and data
export PROMPTFOO_API_PORT=33333 # port for API and UI (default: 33333)
export OPENAI_API_KEY="your-key-here" # OpenAI API key
export ANTHROPIC_API_KEY="your-key-here" # Anthropic API key
export AZURE_API_KEY="your-key-here" # Azure API key
export LITELLM_API_KEY="your-key-here" # LiteLLM API key
export GITHUB_TOKEN="your-token-here" # GitHub token
# start the server (it runs in detached mode)
promptfoo-server
# check if the server is running
docker ps | grep promptfoo_server
# access the UI
open http://localhost:33333

Feel free to become a contributor! ;D