Laravel – development environment

laravel with sail

basics

  • install docker compose using the commands specific to the system it runs on
  • install composer locally: wget https://getcomposer.org/installer
  • install sail (docker environment for laravel): curl -s "https://laravel.build/project-name?with=mysql,selenium,mailhog,redis" | bash
  • check the local and forwarded ports for docker in the .env file and set available ports, e.g.:
    • APP_PORT=38080
      FORWARD_DB_PORT=33306
      FORWARD_REDIS_PORT=36379
      FORWARD_MEILISEARCH_PORT=37700
      FORWARD_MAILHOG_PORT=31025
      FORWARD_MAILHOG_DASHBOARD_PORT=38025
  • start the docker environment in the background: ./vendor/bin/sail up -d
  • the new app may then be accessed via http://localhost:APP_PORT
  • stop the environment again: ./vendor/bin/sail down

command alias: alias sail='[ -f sail ] && sh sail || sh vendor/bin/sail'

Proxy

in App\Http\Middleware\TrustProxies the protected property $proxies must be changed to:

protected $proxies = ['127.0.0.1'];

git

  • initialize git repo in current folder: git init
  • git remote add laravel ssh://git@olkn.myvnc.com/home/git/repo-laravel-dev.git
  • .gitignore to list all files that should not be included in git repo (sail automatically generates the file)
  • add files to staging: git add -A
  • commit changes to repo: git commit -m "comment"
  • push changes to remote repo: git push laravel

Laravel Breeze

  • install the breeze package to start off with: sail composer require laravel/breeze --dev
  • install the blade frontend with breeze: sail php artisan breeze:install blade (complete template including user authentication)
  • compile CSS and refresh browser: sail npm run dev
  • migrate the database: sail php artisan migrate

Models/Migrations/Controllers

sail php artisan make:model <ModelName> -mrc

Models

interface to the tables in the database; eloquent model

app/Models/<ModelName>.php

Migrations

create and modify tables in the database

database/migrations/<timestamp>_create_<name>_tables.php

Controllers

process requests towards the application and return responses

app/Http/Controllers/<Name>Controller.php

deployment

steps on server

  • git clone serverAddressAndFolder
  • point web server root to folder public
  • modify/update .env file
  • php artisan key:generate – generates APP_KEY in .env file
  • php artisan migrate – migrate the database schema
  • php artisan db:seed – if you want to seed your database
  • php artisan down – shut website down for maintenance
  • git pull – pull latest git files to server
  • composer install – to check for any necessary updates from composer.lock file
  • php artisan migrate – migrate the database schema
  • systemctl restart apache2 – to kill any php session
  • php artisan queue:restart – to enable and restart any queues
  • php artisan cache:clear – clear cache
  • php artisan up – start laravel website again

.env

php artisan config:cache – enables caching of the env configuration in a production environment


APP_DEBUG=true # for development only
APP_ENV=staging # for a development server
APP_URL=http://localhost # may be different for a dev server
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=

maintenance mode

php artisan down #enable maintenance

php artisan up #disable maintenance

folder structure

  • app – core code
    • Broadcasting – broadcast channel classes
    • Console – custom artisan commands
    • Events – event classes
    • Exceptions – application exception handlers
    • Http – controllers and middleware
    • Jobs – queueable jobs
    • Listeners – classes that handle events
    • Mail – email classes
    • Models – eloquent model classes
    • Notifications – transactional notifications
    • Policies – authorization policy classes
    • Providers – service provider classes
    • Rules – custom validation rules
  • bootstrap – app.php to bootstrap the framework
  • config
  • database – database migrations, model factories and seeds
  • lang – language files
  • public – index.php as entry point
  • resources – all views and raw assets (CSS, JavaScript)
  • routes – all route definitions
  • storage – logs, compiled Blade templates, session files, file caches
  • tests – automated tests
  • vendor – composer dependencies

Request Lifecycle

some useful commands

  • sail shell (access a shell within the docker container)
  • sail root-shell
  • ./vendor/bin/sail php --version
  • ./vendor/bin/sail artisan --version
  • ./vendor/bin/sail composer --version
  • ./vendor/bin/sail npm --version

speed up

  • php artisan config:cache
  • php artisan route:cache
  • php artisan optimize --force

clean up

  • php artisan config:clear
  • php artisan route:clear
  • php artisan view:clear

Laravel Sail Docker Environment

vendor/laravel/sail/runtimes/8.2/Dockerfile

FROM ubuntu:22.04

LABEL maintainer="Taylor Otwell"

ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14

WORKDIR /var/www/html

ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.2-cli php8.2-dev \
php8.2-pgsql php8.2-sqlite3 php8.2-gd \
php8.2-curl \
php8.2-imap php8.2-mysql php8.2-mbstring \
php8.2-xml php8.2-zip php8.2-bcmath php8.2-soap \
php8.2-intl php8.2-readline \
php8.2-ldap \
# php8.2-msgpack php8.2-igbinary php8.2-redis php8.2-swoole \
# php8.2-memcached php8.2-pcov php8.2-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.2

RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail

COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.2/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container

EXPOSE 8000

ENTRYPOINT ["start-container"]

./docker-compose.yml

# For more information: https://laravel.com/docs/sail
version: '3'
services:
    laravel.test:
        build:
            context: ./vendor/laravel/sail/runtimes/8.1
            dockerfile: Dockerfile
            args:
                WWWGROUP: '${WWWGROUP}'
        image: sail-8.1/app
        extra_hosts:
            - 'host.docker.internal:host-gateway'
        ports:
            - '${APP_PORT:-80}:80'
            - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
        environment:
            WWWUSER: '${WWWUSER}'
            LARAVEL_SAIL: 1
            XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
            XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
        volumes:
            - '.:/var/www/html'
        networks:
            - sail
        depends_on:
            - mysql
            - mailhog
            - selenium
    mysql:
        image: 'mysql/mysql-server:8.0'
        ports:
            - '${FORWARD_DB_PORT:-3306}:3306'
        environment:
            MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
            MYSQL_ROOT_HOST: "%"
            MYSQL_DATABASE: '${DB_DATABASE}'
            MYSQL_USER: '${DB_USERNAME}'
            MYSQL_PASSWORD: '${DB_PASSWORD}'
            MYSQL_ALLOW_EMPTY_PASSWORD: 1
        volumes:
            - 'sail-mysql:/var/lib/mysql'
            - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
        networks:
            - sail
        healthcheck:
            test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
            retries: 3
            timeout: 5s
    mailhog:
        image: 'mailhog/mailhog:latest'
        ports:
            - '${FORWARD_MAILHOG_PORT:-1025}:1025'
            - '${FORWARD_MAILHOG_DASHBOARD_PORT:-8025}:8025'
        networks:
            - sail
    selenium:
        image: 'selenium/standalone-chrome'
        extra_hosts:
            - 'host.docker.internal:host-gateway'
        volumes:
            - '/dev/shm:/dev/shm'
        networks:
            - sail
networks:
    sail:
        driver: bridge
volumes:
    sail-mysql:
        driver: local

VSCode

Export extensions via a local shell command (CTRL+SHIFT+P, then open a local terminal):

code --list-extensions | sed -e 's/^/code --install-extension /' > my_vscode_extensions.sh

Import extensions via:

bash my_vscode_extensions.sh

factory method in python


class ObjectStore:
    """A generic class to store objects either to SQL or the filesystem.
    The interface storeaway uses a specific storeformat and gets the
    associated implementation (storer) from the factory."""
    def storeaway(self, storable, storeformat):
        # get the implementation for storeaway using the storeformat
        storer = factory.get_storeaway(storeformat)
        # invoke the implementation of storeaway for the given format
        storable.storeaway(storer)
        return

class StoreawayFactory:
    def __init__(self):
        self._creators = {}

    def register_format(self, storeformat, creator):
        self._creators[storeformat] = creator

    def get_storeaway(self, storeformat):
        creator = self._creators.get(storeformat)
        if not creator:
            raise ValueError(storeformat)
        return creator()

factory = StoreawayFactory()
factory.register_format("SQL", SQLStoreaway)
factory.register_format("FILE", FileStoreaway)

In the final object we want to store (the storable) we need to implement the storeaway(storer) method that invokes the storer for the chosen format.
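A self-contained sketch of the whole pattern with two hypothetical storer classes standing in for the SQL and filesystem implementations (the class bodies and the store method are made up for illustration):

```python
class SQLStoreaway:
    """Hypothetical storer that would write to SQL."""
    def store(self, data):
        return f"SQL: stored {data!r}"

class FileStoreaway:
    """Hypothetical storer that would write to the filesystem."""
    def store(self, data):
        return f"FILE: stored {data!r}"

class StoreawayFactory:
    """Maps a storeformat string to the class implementing it."""
    def __init__(self):
        self._creators = {}

    def register_format(self, storeformat, creator):
        self._creators[storeformat] = creator

    def get_storeaway(self, storeformat):
        creator = self._creators.get(storeformat)
        if creator is None:
            raise ValueError(storeformat)
        return creator()

factory = StoreawayFactory()
factory.register_format("SQL", SQLStoreaway)
factory.register_format("FILE", FileStoreaway)

result = factory.get_storeaway("SQL").store("hello")
```

Adding a new backend is then a single register_format call; none of the existing code needs to change.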

web scraper

contents

  • logging
  • database access
  • solr indexing
  • filesystem access
  • web scraping

logging
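
The logging section has no content yet; a minimal stdlib sketch of how the scraper might set up logging (the logger name and format string are assumptions):

```python
import logging

def get_logger(name="scraper", level=logging.INFO):
    """Return a configured logger writing to stderr
    (a FileHandler could be added the same way)."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger

log = get_logger()
log.info("scraper started")
```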

Database access

mysql in python


import mysql.connector
# from mysql.connector import Error

# pip3 install mysql-connector
# https://dev.mysql.com/doc/connector-python/en/connector-python-reference.html

class DB():
    def __init__(self, config):
        self.connection = None
        self.connection = mysql.connector.connect(**config)
        
    def query(self, sql, args):
        cursor = self.connection.cursor()
        cursor.execute(sql, args)
        return cursor

    def insert(self,sql,args):
        cursor = self.query(sql, args)
        id = cursor.lastrowid
        self.connection.commit()
        cursor.close()
        return id

    # https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
    def insertmany(self,sql,args):
        cursor = self.connection.cursor()
        cursor.executemany(sql, args)
        rowcount = cursor.rowcount
        self.connection.commit()
        cursor.close()
        return rowcount

    def update(self,sql,args):
        cursor = self.query(sql, args)
        rowcount = cursor.rowcount
        self.connection.commit()
        cursor.close()
        return rowcount

    def fetch(self, sql, args):
        rows = []
        cursor = self.query(sql, args)
        if cursor.with_rows:
            rows = cursor.fetchall()
        cursor.close()
        return rows

    def fetchone(self, sql, args):
        row = None
        cursor = self.query(sql, args)
        if cursor.with_rows:
            row = cursor.fetchone()
        cursor.close()
        return row

    def __del__(self):
        if self.connection is not None:
            self.connection.close()

    # write your functions here for CRUD operations
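
The same wrapper pattern can be exercised without a MySQL server using the stdlib sqlite3 module; the table name and columns below are made up for illustration:

```python
import sqlite3

class SQLiteDB:
    """Same interface idea as the DB class above, backed by sqlite3."""
    def __init__(self, path=":memory:"):
        self.connection = sqlite3.connect(path)

    def query(self, sql, args=()):
        cursor = self.connection.cursor()
        cursor.execute(sql, args)
        return cursor

    def insert(self, sql, args=()):
        cursor = self.query(sql, args)
        rowid = cursor.lastrowid
        self.connection.commit()
        cursor.close()
        return rowid

    def fetch(self, sql, args=()):
        cursor = self.query(sql, args)
        rows = cursor.fetchall()
        cursor.close()
        return rows

db = SQLiteDB()
db.query("CREATE TABLE pages (id INTEGER PRIMARY KEY, url TEXT)")
rowid = db.insert("INSERT INTO pages (url) VALUES (?)", ("https://example.com",))
rows = db.fetch("SELECT url FROM pages")
```

Note the `?` placeholder style: sqlite3 uses `?` where mysql.connector uses `%s`, but both keep SQL and arguments separate.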

solr indexing
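
This section is still empty; one stdlib-only way to sketch the indexing step is to build the JSON payload that Solr's /update endpoint accepts (the collection, field names, and documents below are assumptions), leaving the actual HTTP POST to e.g. urllib.request:

```python
import json

def build_solr_update(docs):
    """Serialize a list of documents for Solr's JSON update endpoint
    (/solr/<core>/update with Content-Type: application/json)."""
    return json.dumps(docs)

docs = [{"id": "doc1", "title": "example", "content": "scraped text"}]
payload = build_solr_update(docs)
# the payload would then be POSTed to Solr, followed by a commit
```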

filesystem access
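
Also still empty; a small pathlib sketch for saving scraped pages below a base directory (the directory layout is an assumption):

```python
from pathlib import Path
import tempfile

def save_page(basedir, name, content):
    """Write scraped content below basedir, creating directories as needed."""
    path = Path(basedir) / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")
    return path

base = tempfile.mkdtemp()  # throwaway directory for the example
p = save_page(base, "pages/example.html", "<html></html>")
```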

web scraping
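
No content yet either; a stdlib-only sketch of one scraping step, extracting links from already-fetched HTML (fetching itself could be done with urllib.request):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<html><body><a href="https://olkn.myvnc.com">home</a></body></html>'
parser = LinkExtractor()
parser.feed(html)
```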

development environment

summary

  • create folder on server as: repo-<name>-dev.git
  • initialize git repo: git init --bare
  • create folder on local machine
  • get files from remote repo: git clone ssh://git@olkn.myvnc.com/repo-<name>-dev.git
  • add files to staging: git add -A
  • commit changes to repo: git commit -m "<comment>"
  • push changes to remote repo: git push origin

git

config

  • git config --global user.name "olkn"
  • git config --global user.email "olkn@gmail.com"
  • git remote add olkn ssh://olkn.myvnc.com/git/repo-python-development
  • git config --global pull.rebase true
  • git config --global color.ui auto
  • git config --global core.editor vim
  • .gitignore to list all files that should not be included in git repo

initialise git repo

git init

git init --bare : if the repo is on a server and no working tree is necessary (folder name should end in .git)

stage

Staging adds file to a stage prepared for later commit

  • git add -A
  • git stage .

commit

All files in staging were added/committed to the repo

git commit -m "Comment for commit"

password less login

ssh-keygen -t rsa -b 4096 -C "<email>"

ssh-add ~/.ssh/id_rsa

ssh-copy-id -i ~/.ssh/id_rsa <user>@<host>

initialise git repo

mkdir <repo-name>.git

cd <repo-name>.git

git init --bare --shared

add files to repo

git add -A

git commit -m "<comment>"

add remote repo

git remote add live ssh://<user>@<server>/<path-to-repo>

git push live

docker

docker build -t <tag> . : build an image from the Dockerfile in the current folder

docker images : list all available docker images on the system

docker run -it <image> : run an image in interactive mode

docker run -d -p 5000:5000 <image> : run an image in daemon mode with ports exposed

docker ps : show all running docker containers

docker start/stop <container> : start/stop a container

docker-compose up -d : start docker environment detached

docker-compose down : stop all containers

docker-compose ps : list all running containers

docker-compose build : build images from scratch

docker-compose logs : show the log files for a container

python

project folder
|-- docker-compose.yml
|-- app (program sources we are working on)
|   |-- Dockerfile
|   |-- requirements.txt
|   |-- src
|       |-- python.py
|-- db (database server and config, if any)
|   |-- password.txt
|-- web (web server and config, if any)

docker composer file


version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - backend-network
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password

  app:
    build: app
    restart: always
    secrets:
      - db-password
    networks:
      - backend-network
      - frontend-network
    volumes:
      - ./app/src:/code
    ports:
      # debugger port for flask inside the container
      - 5678:5678

  web:
    build: web
    restart: always
    ports:
      - 80:80
    networks:
      - frontend-network

volumes:
  db-data:
secrets:
  db-password:
    file: db/password.txt
networks:
  backend-network:
  frontend-network:

docker-file


# set base image (host OS)
FROM python:3.8

# set the working directory in the container
WORKDIR /code

# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .

# command to run on container start
CMD [ "python", "./server.py" ]

laravel

laravel with sail

  • install docker compose
  • install composer: wget https://getcomposer.org/installer

initialize a new project

curl -s https://laravel.build/project-name?with=mysql,selenium,mailhog |bash

configuration

edit .env

docker composer file

docker-compose.yml

docker-file

programming

python

laravel

alternative

python

  • python3 -m venv <envname>
  • source <envname>/bin/activate

git

  • git init . (executed in the project folder, of course)
  • create .gitignore with contents (*.pyc and __pycache__)

README

create a file README with project description and usage instructions

markdown format (README.md) would be a good choice for the formatting

git commit

  • git add .gitignore README.md
  • git add *
  • git commit -m "initial release"

skeleton

file structure

  • folder docs – program documentation (Sphinx)
  • folder src/app – source code of the application
  • folder tests – unit test files
  • .gitignore – see above
  • requirements.txt – pip requirements file
  • README.md – see above
  • TODO.md – open issues to implement
  • LICENSE – ???
  • setup.py – see below

program skeleton

structure with functions, classes, modules, etc but with docstrings only as the contents will follow later with the implementation

create the main file, e.g. <name>.py


""" descriptive text"""
import sys

"""definition of constants"""
URL = "https://olkn.myvnc.com"

def main():
""" main entry point for the program """
pass

"""definition of functions"""
def get_url(url):
"""describe function"""
pass

if __name__ == '__main__'
sys.exit(main())

requirements

list all the requirements in a file: pip freeze > requirements.txt

solr – managed-schema field definitions

| name | type | description | active flags | disabled flags |
| --- | --- | --- | --- | --- |
| ignored_* | string | catchall for all undefined metadata | multiValued | |
| id | string | unique id field | stored, required | multiValued |
| _version_ | plong | internal solr field | indexed, stored | |
| text | text_general | content field for faceting | multiValued | docValues, stored |
| content | text_general | main content field as extracted by tika | stored, multiValued, indexed | docValues |
| author | string | author retrieved from tika | multiValued, indexed, docValues | stored |
| *author | string | dynamic field for authors retrieved from tika | multiValued, indexed, docValues | stored |
| title | string | title retrieved from tika | multiValued, indexed, docValues | stored |
| *title | string | dynamic title field retrieved from tika | multiValued, indexed, docValues | stored |
| date | string | date retrieved from tika | multiValued, indexed, docValues | stored |
| content_type | plongs | content_type retrieved from tika | multiValued, indexed, docValues | stored |
| stream_size | string | stream_size retrieved from tika | multiValued, indexed, docValues | stored |
| cat | string | category defined by user through manifoldcf | multiValued, docValues | stored |

Additional copyField statements to insert data in fields:

  • source="content" dest="text"
  • source="*author" dest="author"
  • source="*title" dest="title"

Linux Commands

tar:


Create tar archive:

tar czvf <archive>.tgz <files/folders>

rsync:


local:

rsync -a <source> <destination>

remote:

rsync -avnzb -e "ssh -l <user>" <source> <destination>

rsync -azP --exclude <pattern> <source> <user>@<host>:<destination>

Find:


find . -type f -printf "%20s %p\n" | sort -nr | head -20

du -s * | sort -nr | head -20

VNC:


ssh -L <localport>:localhost:<vncport> <user>@<server>

sieve commands:


sieve-filter -v -e -W -u <user> <script> <mailbox>