Espresso.Bin Firewall

After installing Armbian I ran into network issues, which could be solved with:

systemctl unmask systemd-networkd; systemctl start systemd-networkd

https://wiki.archlinux.org/title/Systemd-networkd#Wired_adapter_using_a_static_IP

root@ebin:/etc/systemd/network# cat 10-br0.network

[Match]
Name=br0

[Network]
Address=10.1.10.9/24
Gateway=10.1.10.1
DNS=10.1.10.1
#DNS=8.8.8.8

The same configuration, adapted to a 192.168.179.0/24 network:

[Match]
Name=br0

[Network]
Address=192.168.179.23/24
Gateway=192.168.179.1
DNS=192.168.179.1
#DNS=8.8.8.8
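
To apply and verify the new configuration, something along these lines should work (networkctl reload needs a reasonably recent systemd; on older versions restart the service instead):

# pick up the changed .network file
networkctl reload                  # or: systemctl restart systemd-networkd

# verify address, gateway and DNS on the bridge
networkctl status br0
ip addr show br0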

pocket file server

A small-sized server that can be used as a battery-powered file server.

The server shall support the following:

  • NFS
  • samba
  • ssh
  • ftp?
  • webdav
  • web front end
  • ePaper display for
    1. statistics on file access, space left, power consumption, etc
    2. battery status
    3. ap mode ip and ssid
  • buttons/switches for
    • AP and client mode
    • shutdown

development environment

summary

  • create folder on server as: repo--dev.git
  • initialize git repo: git init --bare
  • create folder on local machine
  • get files from remote repo: git clone ssh://git@olkn.myvnc.com/repo--dev.git
  • add files to staging: git add -A
  • commit changes to repo: git commit -m "<commit message>"
  • push changes to remote repo: git push origin
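
The whole round trip then looks roughly like this; the repository name repo-example-dev.git is made up, host and user are taken from the list above:

# on the server: create the bare repository
ssh git@olkn.myvnc.com 'mkdir repo-example-dev.git && cd repo-example-dev.git && git init --bare'

# on the local machine: clone, change, commit, push
git clone ssh://git@olkn.myvnc.com/repo-example-dev.git
cd repo-example-dev
git add -A
git commit -m "initial commit"
git push origin master             # or main, depending on the default branch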

git

config

  • git config --global user.name "olkn"
  • git config --global user.email "olkn@gmail.com"
  • git remote add olkn ssh://olkn.myvnc.com/git/repo-python-development
  • git config --global pull.rebase true
  • git config --global color.ui auto
  • git config --global core.editor vim
  • .gitignore lists all files that should not be included in the git repo

initialise git repo

git init

git init --bare : if the repo is on a server and no working tree is necessary (the folder name should end in .git)

stage

Staging adds files to the staging area (index) in preparation for a later commit.

  • git add -A
  • git stage .

commit

All staged files are committed to the repo:

git commit -m "Comment for commit"

passwordless login

ssh-keygen -t rsa -b 4096 -C "<email address>"

ssh-add .ssh/id_rsa

ssh-copy-id -i .ssh/id_rsa <user>@<host>
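
To check that the key is actually used, a login attempt with password authentication disabled helps:

# fails instead of asking for a password if the key is not accepted
ssh -o PasswordAuthentication=no -o PreferredAuthentications=publickey <user>@<host>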

initialise git repo

mkdir <repo name>.git

cd <repo name>.git

git init --bare --shared

add files to repo

git add -A

git commit -m "<commit message>"

add remote repo

git remote add live ssh://<user>@<server>/<path to repo>

git push live

docker

docker build -t <tag> . : build the given image based on the Dockerfile

docker images : list all available docker images on the system

docker run -it <image> : run an image in interactive mode

docker run -d -p 5000:5000 <image> : run an image in daemon mode with ports exposed

docker ps : show all running docker containers

docker start/stop <container> : start/stop a container

docker-compose up -d : start the docker environment detached

docker-compose down : stop all containers

docker-compose ps : list all running containers

docker-compose build : build the images from scratch

docker-compose logs : show the log files for a container
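
A small worked example tying these commands together (image and container names are made up):

docker build -t myapp .                          # build from the Dockerfile in the current folder
docker run -d -p 5000:5000 --name myapp-test myapp
docker ps                                        # the container should show up here
docker logs myapp-test                           # inspect its output
docker stop myapp-test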

python

project folder
|-- docker-compose.yml
|-- app (program sources we are working on)
|   |-- Dockerfile
|   |-- requirements.txt
|   `-- src
|       `-- python.py
|-- db (database server and config, if any)
|   `-- password.txt
`-- web (web server and config, if any)

docker compose file


version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - backend-network
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password

  app:
    build: app
    restart: always
    secrets:
      - db-password
    networks:
      - backend-network
      - frontend-network
    volumes:
      - ./app/src:/code
    ports:
      - 5678:5678   # debugger port for flask inside the container

  web:
    build: web
    restart: always
    ports:
      - 80:80
    networks:
      - frontend-network

volumes:
  db-data:

secrets:
  db-password:
    file: db/password.txt

networks:
  backend-network:
  frontend-network:

docker-file


# set base image (host OS)
FROM python:3.8

# set the working directory in the container
WORKDIR /code

# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .

# command to run on container start
CMD [ "python", "./server.py" ]
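
For a quick test of just this image outside of docker-compose, a build/run sketch (the tag python-app is arbitrary):

cd app
docker build -t python-app .
docker run --rm -p 5000:5000 python-app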

laravel

laravel with sail

  • install docker compose
  • install composer: wget https://getcomposer.org/installer

initialize a new project

curl -s "https://laravel.build/project-name?with=mysql,selenium,mailhog" | bash
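
Afterwards the stack is controlled through the sail wrapper inside the project folder (folder name as chosen above):

cd project-name
./vendor/bin/sail up -d             # start the application stack
./vendor/bin/sail artisan migrate   # run the database migrations
./vendor/bin/sail down              # stop the stack again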

configuration

edit .env

docker compose file

docker-compose.yml

docker-file

programming

python

laravel

alternative

python

  • python3 -m venv <env name>
  • source <env name>/bin/activate
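
A short sketch of the complete venv cycle (the environment name .venv is just a convention):

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt   # if a requirements file already exists
deactivate                        # leave the environment again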

git

  • git init . (executed in the project folder, of course)
  • create .gitignore with contents (*.pyc and __pycache__)

README

create a file README with project description and usage instructions

markdown format (README.md) would be a good choice for the formatting

git commit

  • git add .gitignore README.md
  • git add *
  • git commit -m "initial release"

skeleton

file structure

  • folder docs – program documentation (Sphinx)
  • folder src/app – source code of the application
  • folder tests – unit test files
  • .gitignore – see above
  • requirements.txt – pip requirements file
  • README.md – see above
  • TODO.md – open issues to implement
  • LICENSE – ???
  • setup.py – see below
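
A small shell sketch to lay out this structure in one go (the __init__.py files are an assumption for Python packages):

mkdir -p docs src/app tests
touch .gitignore requirements.txt README.md TODO.md LICENSE setup.py
touch src/app/__init__.py tests/__init__.py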

program skeleton

structure with functions, classes, modules, etc., but with docstrings only, as the contents will follow later with the implementation

create the main file, e.g. <name>.py


""" descriptive text"""
import sys

"""definition of constants"""
URL = "https://olkn.myvnc.com"

def main():
""" main entry point for the program """
pass

"""definition of functions"""
def get_url(url):
"""describe function"""
pass

if __name__ == '__main__'
sys.exit(main())

requirements

list all the requirements in a file: pip freeze > requirements.txt
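
The counterpart when setting the project up somewhere else is to install from that file:

pip install -r requirements.txt   # recreate the environment from requirements.txt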

solr – managed-schema field definitions

name         | type         | description                                   | active flags                    | inactive flags
ignored_*    | string       | catchall for all undefined metadata           | multiValued                     |
id           | string       | unique id field                               | stored, required                | multiValued
_version_    | plong        | internal solr field                           | indexed, stored                 |
text         | text_general | content field for faceting                    | multiValued                     | docValues, stored
content      | text_general | main content field as extracted by tika       | stored, multiValued, indexed    | docValues
author       | string       | author retrieved from tika                    | multiValued, indexed, docValues | stored
*author      | string       | dynamic field for authors retrieved from tika | multiValued, indexed, docValues | stored
title        | string       | title retrieved from tika                     | multiValued, indexed, docValues | stored
*title       | string       | dynamic title field retrieved from tika       | multiValued, indexed, docValues | stored
date         | string       | date retrieved from tika                      | multiValued, indexed, docValues | stored
content_type | plongs       | content_type retrieved from tika              | multiValued, indexed, docValues | stored
stream_size  | string       | stream_size retrieved from tika               | multiValued, indexed, docValues | stored
cat          | string       | category defined by user through manifoldcf   | multiValued, docValues          | stored

Additional copyField statements to insert data into fields:

  • source="content" dest="text"
  • source="*author" dest="author"
  • source="*title" dest="title"
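
Instead of editing managed-schema by hand, such copyField rules can also be added through the Schema API; a sketch for the first one (collection name and port are assumptions):

curl -X POST -H 'Content-type:application/json' \
  --data-binary '{"add-copy-field": {"source": "content", "dest": "text"}}' \
  http://localhost:8983/solr/<collection name>/schema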

solr search server with tika and manifoldcf

I finally managed to get my search server running, using solr as the main engine and tika for extraction. The setup is completed by manifoldcf for access to files, emails, wiki, RSS and web.

solr

A short overview on the basic file structure of solr is shown below:

file structure


<solr-home-directory>/
   solr.xml
   core_name1/
      core.properties
      conf/
         solrconfig.xml
         managed-schema
      data/

And here is my core.properties file for a single server without cloud mode; it is very basic as well.

core.properties


name=<collection name>
config=solrconfig.xml
dataDir=<collection name>/data

schema fields from tika

The following fields are essential for my setup:

  • id – the identifier unique for solr
  • _version_ – also some internal stuff for solr
  • content – the text representation of the extraction results from tika
  • ignored_* – as a catchall for any metadata that is not covered by a field in the index

The solr install follows the instructions given by the project team. As I am using Debian, the solr.in.sh is barely changed from the standard one. Here are the settings:


SOLR_PID_DIR="/var/solr"
SOLR_HOME="/var/solr/data"
LOG4J_PROPS="/var/solr/log4j2.xml"
SOLR_LOGS_DIR="/var/solr/logs"
SOLR_PORT="8983"

Solr is started via the old init.d-style script from the project team. No modifications here.

The specific managed-schema and solrconfig.xml files are not listed here, but they took the most time to get running. Some comments:

  • grab some information on the metadata extracted by tika to find the fields that should be worth a second look
  • check for the configuration given in /var/solr/data/conf/
  • especially the solr log at /var/solr/logs/solr.log
  • managed-schema should be adjusted for the metadata retrieved through tika
  • delete any old collection files by removing /var/solr/data/collection name/collection name/index/
  • solr cell is responsible for importing/indexing files in foreign formats like PDF, Word, etc
  • set stored false as often as possible
  • set indexed false as much as possible
  • remove copyfields as far as possible
  • set indexed false for text_general
  • use catchall field for indexing
  • start JVM in server mode
  • set logging on higher level only
  • integrate everything in tomcat
  • set indexed or docValues to true but not both
  • some field type annotations: Solr Manual 8.11

some interesting commands

  • /bin/solr start
  • /bin/solr stop -all
  • /bin/post -c collection input
  • /bin/solr delete -c collection
  • /bin/solr create -c collection -d configdir
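
A typical round trip with these commands; the collection name documents and the query term are made up, and the query uses the catchall text field from the schema above:

bin/solr create -c documents                 # create a collection with the default configset
bin/post -c documents ~/Documents/           # index a folder of files via Solr Cell
curl "http://localhost:8983/solr/documents/select?q=text:invoice&rows=10"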

velocity setup

Velocity may be used as a search interface for solr, but my setup is not completed yet.

tika

The tika server version is also installed as described by the project team. I only added a start script for systemd as follows:


[Unit]
Description=Apache Tika Server
After=network.target

[Service]
Type=simple
User=tika
Environment="TIKA_INCLUDE=/etc/default/tika.in.sh"
ExecStart=/usr/bin/java -jar /opt/tika/tika-server-standard-2.3.0.jar --port 9998 --config /opt/tika/tika-config.xml
Restart=always

[Install]
WantedBy=multi-user.target

The tika.in.sh is once again copied from the project team's suggestion without modifications:


TIKA_PID_DIR="/var/tika"
LOG4J_PROPS="/var/tika/log4j.properties"
TIKA_LOGS_DIR="/var/tika/logs"
TIKA_PORT="9998"
TIKA_FORKED_OPTS=""
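
Two quick checks that the tika server answers (port taken from TIKA_PORT above; the sample file name is a placeholder):

curl http://localhost:9998/tika                                        # greeting page of the server
curl -T sample.pdf -H "Accept: text/plain" http://localhost:9998/tika  # extract the plain text of a file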

The tika-config.xml is quite empty at the moment, but I hope to get logging running soon.

ManifoldCF

And finally the ManifoldCF installation from scratch, as the interface to the various information resources.

And here is my systemd start script:

[Unit]
Description=ManifoldCF service
[Service]
WorkingDirectory=/opt/manifoldcf/example
    ExecStart=/usr/bin/java -Xms512m -Xmx512m -Dorg.apache.manifoldcf.configfile=./properties.xml -Dorg.apache.manifoldcf.jettyshutdowntoken=secret_token -Djava.security.auth.login.config= -cp .:../lib/mcf-core.jar:../lib/mcf-agents.jar:../lib/mcf-pull-agent.jar:../lib/mcf-ui-core.jar:../lib/mcf-jetty-runner.jar:../lib/jetty-client-9.4.25.v20191220.jar:../lib/jetty-continuation-9.4.25.v20191220.jar:../lib/jetty-http-9.4.25.v20191220.jar:../lib/jetty-io-9.4.25.v20191220.jar:../lib/jetty-jndi-9.4.25.v20191220.jar:../lib/jetty-jsp-9.2.30.v20200428.jar:../lib/jetty-jsp-jdt-2.3.3.jar:../lib/jetty-plus-9.4.25.v20191220.jar:../lib/jetty-schemas-3.1.M0.jar:../lib/jetty-security-9.4.25.v20191220.jar:../lib/jetty-server-9.4.25.v20191220.jar:../lib/jetty-servlet-9.4.25.v20191220.jar:../lib/jetty-util-9.4.25.v20191220.jar:../lib/jetty-webapp-9.4.25.v20191220.jar:../lib/jetty-xml-9.4.25.v20191220.jar:../lib/commons-codec-1.10.jar:../lib/commons-collections-3.2.2.jar:../lib/commons-collections4-4.2.jar:../lib/commons-discovery-0.5.jar:../lib/commons-el-1.0.jar:../lib/commons-exec-1.3.jar:../lib/commons-fileupload-1.3.3.jar:../lib/commons-io-2.5.jar:../lib/commons-lang-2.6.jar:../lib/commons-lang3-3.9.jar:../lib/commons-logging-1.2.jar:../lib/ecj-4.3.1.jar:../lib/gson-2.8.0.jar:../lib/guava-25.1-jre.jar:../lib/httpclient-4.5.8.jar:../lib/httpcore-4.4.10.jar:../lib/jasper-6.0.35.jar:../lib/jasper-el-6.0.35.jar:../lib/javax.servlet-api-3.1.0.jar:../lib/jna-5.3.1.jar:../lib/jna-platform-5.3.1.jar:../lib/json-simple-1.1.1.jar:../lib/jsp-api-2.1-glassfish-2.1.v20091210.jar:../lib/juli-6.0.35.jar:../lib/log4j-1.2-api-2.4.1.jar:../lib/log4j-api-2.4.1.jar:../lib/log4j-core-2.4.1.jar:../lib/mail-1.4.5.jar:../lib/serializer-2.7.1.jar:../lib/slf4j-api-1.7.25.jar:../lib/slf4j-simple-1.7.25.jar:../lib/velocity-1.7.jar:../lib/xalan-2.7.1.jar:../lib/xercesImpl-2.10.0.jar:../lib/xml-apis-1.4.01.jar:../lib/zookeeper-3.4.10.jar:../lib/javax.activation-1.2.0.jar:../lib/javax.activation-api-1.2.0.jar: -jar start.jar
User=solr
Type=simple
SuccessExitStatus=143
TimeoutStopSec=10
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
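
Assuming the unit above is stored as /etc/systemd/system/manifoldcf.service, it is enabled like any other service; the single-process example then serves the crawler UI on port 8345 (stock ManifoldCF example defaults):

systemctl daemon-reload
systemctl enable --now manifoldcf
curl -I http://localhost:8345/mcf-crawler-ui/   # should answer once the service is up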

Linux Commands

tar:


Create tar archive:

tar czvf <archive>.tgz <folder to archive>
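
And for completeness, listing and unpacking it again (names are placeholders):

tar tzvf <archive>.tgz                   # list the contents
tar xzvf <archive>.tgz -C <target dir>   # extract into a target directory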

rsync:


local:

rsync -a <source> <destination>

remote:

rsync -avnzb -e "ssh -l <user>" <source> <host>:<destination>

rsync -azP --exclude <pattern> <source> <user>@<host>:<destination>

Find:


find . -type f -printf "%20s %p\n" | sort -nr | head -20

du -s * | sort -nr | head -20

VNC:


ssh -L <local port>:<vnc host>:<vnc port> -c <cipher> <user>@<remote host>
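
A concrete example of the tunnel, assuming the VNC server runs on display :1 (port 5901) of the remote machine:

ssh -L 5901:localhost:5901 <user>@<remote host>
# then, in a second terminal, point the viewer at the local end of the tunnel
vncviewer localhost:5901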

sieve commands:


sieve-filter -v -e -W -u <user> <sieve script> <mailbox>