fullstack-template

This is intended to be a fast template for spinning up a full-stack project with a Python back-end and a React front-end.

Project Goals:

Should have built in:

  • Docker for easy development
  • OpenAPI
  • Frontend should use TypeScript
  • Frontend should run codegen against the OpenAPI spec
  • Wiring for common stuff like auth
  • Sending emails, with MailHog for testing email locally
  • Uploading files to S3 (getting signed URLs for frontend apps)
  • Wiring for job queues and workers
  • Support for .env files
  • Should be able to support AI projects (LangChain has an integration with FastAPI called LangServe)
  • Storybook prototyping
  • PostgreSQL DB
  • Enable the pgvector store
  • Alembic migrations
  • make scripts to run tasks quickly
  • Front-end for the auth functionality (login form, password reset, registration)

Notes on Current Implementation:

The current implementation is based on a project by the creator of FastAPI: tiangolo/full-stack-fastapi-postgresql ("Full stack, modern web application generator. Using FastAPI, PostgreSQL as database, Docker, automatic HTTPS and more"). Because of this work, I might consider using a Python cookiecutter for my template. (He's actually moving away from cookiecutter to use a git-based template!)

Back-end

  • Python
  • FastAPI
  • Stripped out a lot of the reverse proxy / load-balancer stuff from the original project to make it much simpler

Front-end

Legacy README

Everything from here down is from the project referenced above. I tried my best to cut out stuff that wasn't relevant, but I haven't really gone through it, so it's just there as a guide to skim and troubleshoot.

Backend Requirements

  • Docker
  • Docker Compose
  • Poetry for Python package management

Frontend Requirements

  • Node.js (with yarn)

Backend local development

  • Start the stack with Docker Compose:
docker compose up -d
  • Now you can open your browser and interact with these URLs:

Frontend, built with Docker: http://localhost:8080/

Backend, JSON-based web API based on OpenAPI: http://localhost:8888/api/

Automatic interactive documentation with Swagger UI (from the OpenAPI backend): http://localhost:8888/docs

Alternative automatic documentation with ReDoc (from the OpenAPI backend): http://localhost:8888/redoc

PGAdmin, PostgreSQL web administration: http://localhost:5050

Note: The first time you start your stack, it might take a minute for it to be ready, while the backend waits for the database and configures everything. You can check the logs to monitor it.

To check the logs, run:

docker compose logs

To check the logs of a specific service, add the name of the service, e.g.:

docker compose logs backend

Backend local development, additional details

General workflow

By default, the dependencies are managed with Poetry, so install it first.

From ./backend/app/ you can install all the dependencies with:

$ poetry install

Then you can start a shell session with the new environment with:

$ poetry shell

Next, open your editor at ./backend/app/ (instead of the project root: ./), so that you see an ./app/ directory with your code inside. That way, your editor will be able to find all the imports, etc. Make sure your editor uses the environment you just created with Poetry.

Modify or add SQLAlchemy models in ./backend/app/app/models/, Pydantic schemas in ./backend/app/app/schemas/, API endpoints in ./backend/app/app/api/, CRUD (Create, Read, Update, Delete) utils in ./backend/app/app/crud/. The easiest might be to copy the ones for Items (models, endpoints, and CRUD utils) and update them to your needs.
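
For illustration, here is a minimal sketch of what a new model/schema pair might look like, assuming the layout inherited from the upstream project (the Note model, its fields, and the base_class import path are illustrative, not part of the template):

    # ./backend/app/app/models/note.py -- hypothetical model, mirroring the Item example
    from sqlalchemy import Column, Integer, String

    from app.db.base_class import Base  # assumes the upstream base_class module

    class Note(Base):
        id = Column(Integer, primary_key=True, index=True)
        title = Column(String, index=True)
        body = Column(String)

    # ./backend/app/app/schemas/note.py -- the matching Pydantic schema
    from pydantic import BaseModel

    class NoteOut(BaseModel):
        id: int
        title: str
        body: str

        class Config:
            orm_mode = True  # lets FastAPI serialize the SQLAlchemy object directly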

Add and modify tasks to the Celery worker in ./backend/app/app/worker.py.
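
As a rough sketch, a new task might look like this (the task name and body are illustrative; this assumes the celery_app instance the upstream project exposes in app.core.celery_app):

    # ./backend/app/app/worker.py -- hypothetical additional task
    from app.core.celery_app import celery_app  # assumes the upstream module layout

    @celery_app.task(acks_late=True)
    def send_welcome_email(email: str) -> str:
        # Replace this body with a real email-sending call; it only shows the shape.
        return f"queued welcome email for {email}"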

If you need to install any additional package to the worker, add it to the file ./backend/app/celeryworker.dockerfile.

Docker Compose Override

During development, you can change Docker Compose settings that will only affect the local development environment, in the file docker-compose.override.yml.

The changes to that file only affect the local development environment, not the production environment. So, you can add "temporary" changes that help the development workflow.

For example, the directory with the backend code is mounted as a Docker "host volume", mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development; for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast.

There is also a command override that runs /start-reload.sh (included in the base image) instead of the default /start.sh (also included in the base image). It starts a single server process (instead of multiple, as would be the case for production) and reloads the process whenever the code changes. Keep in mind that if you have a syntax error and save the Python file, it will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:

$ docker-compose up -d

There is also a commented out command override, you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the container alive. That allows you to get inside your running container and execute commands inside, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes, or start a Jupyter Notebook session.

To get inside the container with a bash session you can start the stack with:

$ docker-compose up -d

and then exec inside the running container:

$ docker-compose exec backend bash

You should see an output like:

root@7f2607af31c3:/app#

that means that you are in a bash session inside your container, as a root user, under the /app directory.

There you can use the script /start-reload.sh to run the debug live reloading server. You can run that script from inside the container with:

$ bash /start-reload.sh

...it will look like:

root@7f2607af31c3:/app# bash /start-reload.sh

and then hit enter. That runs the live reloading server that auto reloads when it detects code changes.

Nevertheless, if the reloader picks up a syntax error rather than a working change, it will just stop with an error. But as the container is still alive and you are in a Bash session, you can quickly restart the server after fixing the error, running the same command ("up arrow" and "Enter").

...this previous detail is what makes it useful to have the container alive doing nothing and then, in a Bash session, make it run the live reload server.

Backend tests

To test the backend run:

$ DOMAIN=backend sh ./scripts/test.sh

The file ./scripts/test.sh has the commands to generate a testing docker-stack.yml file, start the stack and test it.

The tests run with Pytest; modify and add tests in ./backend/app/app/tests/.
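
As a minimal sketch, a new test might look like this (assuming the upstream app and settings modules; the file name and test body are illustrative):

    # ./backend/app/app/tests/api/test_smoke.py -- hypothetical smoke test
    from fastapi.testclient import TestClient

    from app.core.config import settings  # assumes the upstream settings object
    from app.main import app

    client = TestClient(app)

    def test_openapi_schema_available() -> None:
        # The OpenAPI schema should always be served; a cheap sanity check.
        response = client.get(f"{settings.API_V1_STR}/openapi.json")
        assert response.status_code == 200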

If you use GitLab CI the tests will run automatically.

Local tests

Start the stack with this command:

DOMAIN=backend sh ./scripts/test-local.sh

The ./backend/app directory is mounted as a "host volume" inside the docker container (set in the file docker-compose.dev.volumes.yml). You can rerun the tests against live code:

docker-compose exec backend /app/tests-start.sh

Test running stack

If your stack is already up and you just want to run the tests, you can use:

docker-compose exec backend /app/tests-start.sh

That /app/tests-start.sh script just calls pytest after making sure that the rest of the stack is running. If you need to pass extra arguments to pytest, you can pass them to that command and they will be forwarded.

For example, to stop on first error:

docker-compose exec backend bash /app/tests-start.sh -x

Test Coverage

Because the test scripts forward arguments to pytest, you can enable test coverage HTML report generation by passing --cov-report=html.

To run the local tests with coverage HTML reports:

DOMAIN=backend sh ./scripts/test-local.sh --cov-report=html

To run the tests in a running stack with coverage HTML reports:

docker-compose exec backend bash /app/tests-start.sh --cov-report=html

Live development with Python Jupyter Notebooks

If you know about Python Jupyter Notebooks, you can take advantage of them during local development.

The docker-compose.override.yml file sends a variable env with a value dev to the build process of the Docker image (during local development) and the Dockerfile has steps to then install and configure Jupyter inside your Docker container.

So, you can enter into the running Docker container:

docker-compose exec backend bash

And use the environment variable $JUPYTER to run a Jupyter Notebook with everything configured to listen on the public port (so that you can use it from your browser).

It will output something like:

root@73e0ec1f1ae6:/app# $JUPYTER
[I 12:02:09.975 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 12:02:10.317 NotebookApp] Serving notebooks from local directory: /app
[I 12:02:10.317 NotebookApp] The Jupyter Notebook is running at:
[I 12:02:10.317 NotebookApp] http://(73e0ec1f1ae6 or 127.0.0.1):8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
[I 12:02:10.317 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 12:02:10.317 NotebookApp] No web browser found: could not locate runnable browser.
[C 12:02:10.317 NotebookApp]

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://(73e0ec1f1ae6 or 127.0.0.1):8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397

you can copy that URL and modify the "host" to be localhost or the domain you are using for development (e.g. local.dockertoolbox.tiangolo.com), in the case above, it would be, e.g.:

http://localhost:8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397

and then open it in your browser.

You will have a full Jupyter Notebook running inside your container that has direct access to your database by the container name (db), etc. So, you can just run sections of your backend code directly, for example with VS Code Python Jupyter Interactive Window or Hydrogen.
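
For example, a notebook cell like this can query the database directly (a sketch assuming the upstream session and model modules):

    # Inside a notebook cell -- assumes the upstream module layout
    from app.db.session import SessionLocal
    from app.models.user import User

    db = SessionLocal()  # connects to the `db` container by name
    users = db.query(User).limit(5).all()
    [u.email for u in users]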

Migrations

As during local development your app directory is mounted as a volume inside the container, you can also run the migrations with alembic commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository.

Make sure you create a "revision" of your models and that you "upgrade" your database with that revision every time you change them, as this is what will update the tables in your database. Otherwise, your application will have errors.

  • Start an interactive session in the backend container:
$ docker-compose exec backend bash
  • If you created a new model in ./backend/app/app/models/, make sure to import it in ./backend/app/app/db/base.py; that Python module (base.py), which imports all the models, is used by Alembic (see the sketch after this list).

  • After changing a model (for example, adding a column), inside the container, create a revision, e.g.:

$ alembic revision --autogenerate -m "Add column last_name to User model"
  • Commit to the git repository the files generated in the alembic directory.

  • After creating the revision, run the migration in the database (this is what will actually change the database):

$ alembic upgrade head
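
As a sketch, that base.py import module looks roughly like this (the Note import is the hypothetical model from the earlier example; Item and User ship with the template):

    # ./backend/app/app/db/base.py -- import all models so Alembic can see them
    from app.db.base_class import Base  # noqa
    from app.models.item import Item  # noqa
    from app.models.user import User  # noqa
    from app.models.note import Note  # noqa -- hypothetical new model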

If you don't want to use migrations at all, uncomment the line in the file at ./backend/app/app/db/init_db.py with:

Base.metadata.create_all(bind=engine)

and comment the line in the file prestart.sh that contains:

alembic upgrade head

If you don't want to start with the default models and want to remove or modify them from the beginning, without having any previous revision, you can remove the revision files (.py Python files) under ./backend/app/alembic/versions/ and then create a first migration as described above.

Frontend development

  • Enter the frontend directory, install the NPM packages and start the live server using the npm scripts:
cd frontend
yarn
yarn dev

Then open your browser at http://localhost:8080

Notice that this live server is not running inside Docker, it is for local development, and that is the recommended workflow. Once you are happy with your frontend, you can build the frontend Docker image and start it, to test it in a production-like environment. But compiling the image at every change will not be as productive as running the local development server with live reload.

Check the file package.json to see other available options.

Deployment

You can deploy the stack to a Docker Swarm mode cluster with a main Traefik proxy, set up using the ideas from DockerSwarm.rocks, to get automatic HTTPS certificates, etc.

And you can use CI (continuous integration) systems to do it automatically.

But you have to configure a couple of things first.

Continuous Integration / Continuous Delivery

If you use GitLab CI, the included .gitlab-ci.yml can automatically deploy it. You may need to update it according to your GitLab configurations.

If you use any other CI / CD provider, you can base your deployment on that .gitlab-ci.yml file, as all the actual script steps are performed in bash scripts that you can easily re-use.

GitLab CI is configured assuming 2 environments following GitLab flow:

  • prod (production) from the production branch.
  • stag (staging) from the master branch.

If you need to add more environments (for example, you could imagine using a client-approved preprod branch), you can just copy the configurations in .gitlab-ci.yml for stag and rename the corresponding variables. The Docker Compose file and environment variables are configured to support as many environments as you need, so that you only need to modify .gitlab-ci.yml (or whichever CI system configuration you are using).

Docker Compose files and env vars

There is a main docker-compose.yml file with all the configurations that apply to the whole stack; it is used automatically by docker-compose.

And there's also a docker-compose.override.yml with overrides for development, for example to mount the source code as a volume. It is used automatically by docker-compose to apply overrides on top of docker-compose.yml.

These Docker Compose files use the .env file containing configurations to be injected as environment variables in the containers.

They also use some additional configurations taken from environment variables set in the scripts before calling the docker-compose command.

It is all designed to support several "stages", like development, building, testing, and deployment, while also allowing deployment to different environments like staging and production (and you can add more environments very easily).

They are designed to have the minimum repetition of code and configurations, so that if you need to change something, you have to change it in the minimum amount of places. That's why files use environment variables that get auto-expanded. That way, if for example, you want to use a different domain, you can call the docker-compose command with a different DOMAIN environment variable instead of having to change the domain in several places inside the Docker Compose files.

Also, if you want to have another deployment environment, say preprod, you just have to change environment variables, but you can keep using the same Docker Compose files.

The .env file

The .env file is the one that contains all your configurations, generated keys and passwords, etc.

Depending on your workflow, you might want to exclude it from Git, for example if your project is public. In that case, you would have to make sure to set up a way for your CI tools to obtain it while building or deploying your project.

One way to do it could be to add each environment variable to your CI/CD system, and update the docker-compose.yml file to read that specific env var instead of reading the .env file.
