BT Serial Console (Android->Linux)

– connect from android to linux box via BT (pair and, if necessary, set a key)
– check if serial service on linux is present: sdptool browse local
– note the channel above or add the service via: sdptool add --channel=22 SP
– listen on this channel via rfcomm: rfcomm listen /dev/rfcomm0 22
– use BlueTerm on android to connect to the linux box

Linux box side:
sdptool add --channel=3 SP
mknod -m 666 /dev/rfcomm0 c 216 0
rfcomm watch /dev/rfcomm0 3 /sbin/agetty rfcomm0 115200 linux

Client side:
sdptool add --channel=3 SP
rfcomm connect /dev/rfcomm0 [SERVER_ADDR] 3
screen /dev/rfcomm0 115200

/etc/bluetooth/rfcomm.conf:
rfcomm0 {
# Automatically bind the device at startup
bind no;
# Bluetooth address of the device
device 11:22:33:44:55:66;
# RFCOMM channel for the connection
channel 3;
# Description of the connection
comment "This is Device 1's serial port.";
}

hcitool scan
rfcomm bind 0 20:15:12:08:62:95 1

Laravel – development environment

laravel with sail

basics

  • install docker compose with the commands specific to the system it is used on
  • install composer locally: wget https://getcomposer.org/installer
  • install sail (docker environment for laravel): curl -s "https://laravel.build/project-name?with=mysql,selenium,mailhog,redis" | bash
  • check the local and forwarded ports for docker in the .env file and add available ports, e.g.:
    APP_PORT=38080
    FORWARD_DB_PORT=33306
    FORWARD_REDIS_PORT=36379
    FORWARD_MEILISEARCH_PORT=37700
    FORWARD_MAILHOG_PORT=31025
    FORWARD_MAILHOG_DASHBOARD_PORT=38025
  • start docker environment in background: ./vendor/bin/sail up -d
  • new app may be accessed via http://localhost:port
  • stop the server again: sail down

command alias: alias sail='[ -f sail ] && sh sail || sh vendor/bin/sail'
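The port choices in .env can be sanity-checked from the shell before `sail up`. This is just a sketch: it writes a sample fragment to /tmp (stand-in for the project's real .env) and greps the forwarded ports out of it:

```shell
# sample .env fragment (your real file lives in the project root)
cat > /tmp/sail-env-example <<'EOF'
APP_PORT=38080
FORWARD_DB_PORT=33306
FORWARD_REDIS_PORT=36379
EOF

# list every port the file forwards, one per line
grep -E '^(APP_PORT|FORWARD_[A-Z_]+_PORT)=' /tmp/sail-env-example | cut -d= -f2
```

Each printed port can then be compared against what is already in use on the host.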

Proxy

in App\Http\Middleware\TrustProxies the protected variable $proxies must be changed to:

protected $proxies = ['127.0.0.1'];

git

  • initialize git repo in current folder: git init
  • git remote add laravel ssh://git@olkn.myvnc.com/home/git/repo-laravel-dev.git
  • .gitignore to list all files that should not be included in git repo (sail automatically generates the file)
  • add files to staging: git add -A
  • commit changes to repo: git commit -m "comment"
  • push changes to remote repo: git push laravel

Laravel Breeze

  • install the breeze package to start off with: sail composer require laravel/breeze --dev
  • install blade frontend with breeze: sail php artisan breeze:install blade (complete template including user authentication)
  • compile CSS and refresh browser: sail npm run dev
  • migrate the database: sail php artisan migrate

Models/Migrations/Controllers

sail php artisan make:model ModelName -mrc (creates the model plus a matching migration and resource controller)

Models

interface to the tables in the database; Eloquent models

app/Models/<ModelName>.php

Migrations

create and modify tables in the database

database/migrations/<timestamp>_create_<name>_table.php

Controllers

processes requests to the application and returns responses

app/Http/Controllers/<Name>Controller.php

deployment

steps on server

  • git clone serverAddressAndFolder
  • point web server root to folder public
  • modify/update .env file
  • php artisan key:generate – generates APP_KEY in .env file
  • php artisan migrate – migrate the database schema
  • php artisan db:seed – if you want to seed your database
  • php artisan down – shut website down for maintenance
  • git pull – pull latest git files to server
  • composer install – to check for any necessary updates from composer.lock file
  • php artisan migrate – migrate the database schema
  • systemctl restart apache2 – to kill any php session
  • php artisan queue:restart – to enable and restart any queues
  • php artisan cache:clear – clear cache
  • php artisan up – start laravel website again
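The update steps above can be stitched into one small script. This is only a sketch built from the list (the apache2 service name and plain `php artisan migrate` come straight from it); with RUN=echo it prints the commands instead of executing them:

```shell
#!/bin/sh
# sketch of the update sequence above; call as `RUN=echo deploy` for a dry run
deploy() {
    $RUN php artisan down
    $RUN git pull
    $RUN composer install
    $RUN php artisan migrate
    $RUN systemctl restart apache2
    $RUN php artisan queue:restart
    $RUN php artisan cache:clear
    $RUN php artisan up
}

RUN=echo deploy   # dry run: prints the eight commands in order
```

Dropping the RUN prefix on the real server runs the commands for real, so maintenance mode brackets the whole update.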

.env

php artisan config:cache – enables caching of the env settings in a production environment


APP_DEBUG=true # for development only
APP_ENV=staging # for the development server
APP_URL=http://localhost # may be different for the dev server
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=

maintenance mode

php artisan down #enable maintenance

php artisan up #disable maintenance

folder structure

  • app – core code
    • Broadcasting – broadcast channel classes
    • Console – custom artisan commands
    • Events – event classes
    • Exceptions – app exceptions handlers
    • Http – controllers and middleware
    • Jobs – queueable jobs
    • Listeners – classes that handle events
    • Mail – email classes
    • Models – eloquent model classes
    • Notifications – transactional notifications
    • Policies – authorization policy classes
    • Providers – service provider classes
    • Rules – custom validation rules
  • bootstrap – app.php to bootstrap the framework
  • config
  • database – database migrations, model factories and seeds
  • lang – language files
  • public – index.php as entry point
  • resources – all views and raw assets (CSS, JavaScript)
  • routes – all route definitions
  • storage – logs, compiled Blade templates, session files, file caches
  • tests – automated tests
  • vendor – composer dependencies

Request Lifecycle

some useful commands

  • sail shell (access a shell within the docker container)
  • sail root-shell
  • ./vendor/bin/sail php --version
  • ./vendor/bin/sail artisan --version
  • ./vendor/bin/sail composer --version
  • ./vendor/bin/sail npm --version

speed up

  • php artisan config:cache
  • php artisan route:cache
  • php artisan optimize --force

clean up

  • php artisan config:clear
  • php artisan route:clear
  • php artisan view:clear

Laravel Sail Docker Environment

vendor/laravel/sail/runtimes/8.2/Dockerfile

FROM ubuntu:22.04

LABEL maintainer="Taylor Otwell"

ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14

WORKDIR /var/www/html

ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.2-cli php8.2-dev \
php8.2-pgsql php8.2-sqlite3 php8.2-gd \
php8.2-curl \
php8.2-imap php8.2-mysql php8.2-mbstring \
php8.2-xml php8.2-zip php8.2-bcmath php8.2-soap \
php8.2-intl php8.2-readline \
php8.2-ldap \
# php8.2-msgpack php8.2-igbinary php8.2-redis php8.2-swoole \
# php8.2-memcached php8.2-pcov php8.2-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.2

RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail

COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.2/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container

EXPOSE 8000

ENTRYPOINT ["start-container"]

./docker-compose.yml

# For more information: https://laravel.com/docs/sail
version: '3'
services:
    laravel.test:
        build:
            context: ./vendor/laravel/sail/runtimes/8.1
            dockerfile: Dockerfile
            args:
                WWWGROUP: '${WWWGROUP}'
        image: sail-8.1/app
        extra_hosts:
            - 'host.docker.internal:host-gateway'
        ports:
            - '${APP_PORT:-80}:80'
            - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
        environment:
            WWWUSER: '${WWWUSER}'
            LARAVEL_SAIL: 1
            XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
            XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
        volumes:
            - '.:/var/www/html'
        networks:
            - sail
        depends_on:
            - mysql
            - mailhog
            - selenium
    mysql:
        image: 'mysql/mysql-server:8.0'
        ports:
            - '${FORWARD_DB_PORT:-3306}:3306'
        environment:
            MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
            MYSQL_ROOT_HOST: "%"
            MYSQL_DATABASE: '${DB_DATABASE}'
            MYSQL_USER: '${DB_USERNAME}'
            MYSQL_PASSWORD: '${DB_PASSWORD}'
            MYSQL_ALLOW_EMPTY_PASSWORD: 1
        volumes:
            - 'sail-mysql:/var/lib/mysql'
            - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
        networks:
            - sail
        healthcheck:
            test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
            retries: 3
            timeout: 5s
    mailhog:
        image: 'mailhog/mailhog:latest'
        ports:
            - '${FORWARD_MAILHOG_PORT:-1025}:1025'
            - '${FORWARD_MAILHOG_DASHBOARD_PORT:-8025}:8025'
        networks:
            - sail
    selenium:
        image: 'selenium/standalone-chrome'
        extra_hosts:
            - 'host.docker.internal:host-gateway'
        volumes:
            - '/dev/shm:/dev/shm'
        networks:
            - sail
networks:
    sail:
        driver: bridge
volumes:
    sail-mysql:
        driver: local

VSCode

Export extensions via a local shell command (CTRL+SHIFT+P, then pick the local terminal):

code --list-extensions | sed -e 's/^/code --install-extension /' > my_vscode_extensions.sh

Import extensions via:

bash my_vscode_extensions.sh

web scraper

contents

  • logging
  • database access
  • solr indexing
  • filesystem access
  • web scraping

logging

Database access

– mysql in python


import mysql.connector
# from mysql.connector import Error

# pip3 install mysql-connector
# https://dev.mysql.com/doc/connector-python/en/connector-python-reference.html

class DB():
    def __init__(self, config):
        self.connection = None
        self.connection = mysql.connector.connect(**config)
        
    def query(self, sql, args):
        cursor = self.connection.cursor()
        cursor.execute(sql, args)
        return cursor

    def insert(self,sql,args):
        cursor = self.query(sql, args)
        id = cursor.lastrowid
        self.connection.commit()
        cursor.close()
        return id

    # https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
    def insertmany(self,sql,args):
        cursor = self.connection.cursor()
        cursor.executemany(sql, args)
        rowcount = cursor.rowcount
        self.connection.commit()
        cursor.close()
        return rowcount

    def update(self,sql,args):
        cursor = self.query(sql, args)
        rowcount = cursor.rowcount
        self.connection.commit()
        cursor.close()
        return rowcount

    def fetch(self, sql, args):
        rows = []
        cursor = self.query(sql, args)
        if cursor.with_rows:
            rows = cursor.fetchall()
        cursor.close()
        return rows

    def fetchone(self, sql, args):
        row = None
        cursor = self.query(sql, args)
        if cursor.with_rows:
            row = cursor.fetchone()
        cursor.close()
        return row

    def __del__(self):
        if self.connection != None:
            self.connection.close()

# write your functions for CRUD operations here

solr indexing

filesystem access

web scraping

solr – managed-schema field definitions

| name | type | description | active flags | deactivated flags |
| --- | --- | --- | --- | --- |
| ignored_* | string | catchall for all undefined metadata | multiValued | |
| id | string | unique id field | stored, required | multiValued |
| _version_ | plong | internal solr field | indexed, stored | |
| text | text_general | content field for faceting | multiValued | docValues, stored |
| content | text_general | main content field as extracted by tika | stored, multiValued, indexed | docValues |
| author | string | author retrieved from tika | multiValued, indexed, docValues | stored |
| *author | string | dynamic field for authors retrieved from tika | multiValued, indexed, docValues | stored |
| title | string | title retrieved from tika | multiValued, indexed, docValues | stored |
| *title | string | dynamic title field retrieved from tika | multiValued, indexed, docValues | stored |
| date | string | date retrieved from tika | multiValued, indexed, docValues | stored |
| content_type | plongs | content_type retrieved from tika | multiValued, indexed, docValues | stored |
| stream_size | string | stream_size retrieved from tika | multiValued, indexed, docValues | stored |
| cat | string | category defined by user through manifoldcf | multiValued, docValues | stored |

Additional copyField statements to insert data in fields:

  • source="content" dest="text"
  • source="*author" dest="author"
  • source="*title" dest="title"
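In managed-schema these statements read as follows (standard Solr copyField syntax):

```xml
<copyField source="content" dest="text"/>
<copyField source="*author" dest="author"/>
<copyField source="*title" dest="title"/>
```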

solr search server with tika and manifoldcf

I finally managed to get my search server running using solr as the main engine and tika for extraction. The setup is completed by ManifoldCF for access to files, emails, wiki, rss and web.

solr

A short overview on the basic file structure of solr is shown below:

file structure


<solr-home-directory>/
    solr.xml
    core_name1/
        core.properties
        conf/
            solrconfig.xml
            managed-schema
        data/

And here is my core.properties file without cloud on a single server and very basic as well.

core.properties


name=collection name
config=solrconfig.xml
dataDir=collection name/data

schema fields from tika

The following fields are essential for my setup:

  • id – the identifier unique for solr
  • _version_ – also some internal stuff for solr
  • content – the text representation of the extraction results from tika
  • ignored_* – as a catchall for any metadata that is not covered by a field in the index

The solr install follows the instructions given by the project team. As I am using debian, the solr.in.sh is pretty much standard. Here are the settings:


SOLR_PID_DIR="/var/solr"
SOLR_HOME="/var/solr/data"
LOG4J_PROPS="/var/solr/log4j2.xml"
SOLR_LOGS_DIR="/var/solr/logs"
SOLR_PORT="8983"

Solr is started via old init.d style script from the project team. No modifications here.

The specific managed-schema and solrconfig.xml files are not listed here but took the most time to get running. Some comments:

  • grab some information on the metadata extracted by tika to find the fields that should be worth a second look
  • check for the configuration given in /var/solr/data/conf/
  • especially the solr log at /var/solr/logs/solr.log
  • managed-schema should be adjusted for the metadata retrieved through tika
  • delete any old collection files by removing /var/solr/data/collection name/collection name/index/
  • solr cell is responsible for importing/indexing files in foreign formats like PDF, Word, etc
  • set stored false as often as possible
  • set indexed false as much as possible
  • remove copyField statements as far as possible
  • set indexed false for text_general
  • use catchall field for indexing
  • start JVM in server mode
  • set logging on higher level only
  • integrate everything in tomcat
  • set indexed or docValues to true but not both
  • some field type annotations: Solr Manual 8.11

some interesting commands

  • /bin/solr start
  • /bin/solr stop -all
  • /bin/post -c collection input
  • /bin/solr delete -c collection
  • /bin/solr create -c collection -d configdir
velocity setup

velocity may be used as a search interface for solr but my setup is not completed yet.

tika

The tika server version is also installed as described by the project team. I only added a start script for systemd as follows:


[Unit]
Description=Apache Tika Server
After=network.target

[Service]
Type=simple
User=tika
Environment="TIKA_INCLUDE=/etc/default/tika.in.sh"
ExecStart=/usr/bin/java -jar /opt/tika/tika-server-standard-2.3.0.jar --port 9998 --config /opt/tika/tika-config.xml
Restart=always

[Install]
WantedBy=multi-user.target

The tika.in.sh is once again copied from the project team's suggestion without modifications:


TIKA_PID_DIR="/var/tika"
LOG4J_PROPS="/var/tika/log4j.properties"
TIKA_LOGS_DIR="/var/tika/logs"
TIKA_PORT="9998"
TIKA_FORKED_OPTS=""

The tika-config.xml is quite empty at the moment but I hope to get logging running soon.

ManifoldCF

And finally the manifoldcf installation from scratch as the interface to the various information resources.

And here is my systemd start script:

[Unit]
Description=ManifoldCF service
[Service]
WorkingDirectory=/opt/manifoldcf/example
ExecStart=/usr/bin/java -Xms512m -Xmx512m -Dorg.apache.manifoldcf.configfile=./properties.xml -Dorg.apache.manifoldcf.jettyshutdowntoken=secret_token -Djava.security.auth.login.config= -cp .:../lib/mcf-core.jar:../lib/mcf-agents.jar:../lib/mcf-pull-agent.jar:../lib/mcf-ui-core.jar:../lib/mcf-jetty-runner.jar:../lib/jetty-client-9.4.25.v20191220.jar:../lib/jetty-continuation-9.4.25.v20191220.jar:../lib/jetty-http-9.4.25.v20191220.jar:../lib/jetty-io-9.4.25.v20191220.jar:../lib/jetty-jndi-9.4.25.v20191220.jar:../lib/jetty-jsp-9.2.30.v20200428.jar:../lib/jetty-jsp-jdt-2.3.3.jar:../lib/jetty-plus-9.4.25.v20191220.jar:../lib/jetty-schemas-3.1.M0.jar:../lib/jetty-security-9.4.25.v20191220.jar:../lib/jetty-server-9.4.25.v20191220.jar:../lib/jetty-servlet-9.4.25.v20191220.jar:../lib/jetty-util-9.4.25.v20191220.jar:../lib/jetty-webapp-9.4.25.v20191220.jar:../lib/jetty-xml-9.4.25.v20191220.jar:../lib/commons-codec-1.10.jar:../lib/commons-collections-3.2.2.jar:../lib/commons-collections4-4.2.jar:../lib/commons-discovery-0.5.jar:../lib/commons-el-1.0.jar:../lib/commons-exec-1.3.jar:../lib/commons-fileupload-1.3.3.jar:../lib/commons-io-2.5.jar:../lib/commons-lang-2.6.jar:../lib/commons-lang3-3.9.jar:../lib/commons-logging-1.2.jar:../lib/ecj-4.3.1.jar:../lib/gson-2.8.0.jar:../lib/guava-25.1-jre.jar:../lib/httpclient-4.5.8.jar:../lib/httpcore-4.4.10.jar:../lib/jasper-6.0.35.jar:../lib/jasper-el-6.0.35.jar:../lib/javax.servlet-api-3.1.0.jar:../lib/jna-5.3.1.jar:../lib/jna-platform-5.3.1.jar:../lib/json-simple-1.1.1.jar:../lib/jsp-api-2.1-glassfish-2.1.v20091210.jar:../lib/juli-6.0.35.jar:../lib/log4j-1.2-api-2.4.1.jar:../lib/log4j-api-2.4.1.jar:../lib/log4j-core-2.4.1.jar:../lib/mail-1.4.5.jar:../lib/serializer-2.7.1.jar:../lib/slf4j-api-1.7.25.jar:../lib/slf4j-simple-1.7.25.jar:../lib/velocity-1.7.jar:../lib/xalan-2.7.1.jar:../lib/xercesImpl-2.10.0.jar:../lib/xml-apis-1.4.01.jar:../lib/zookeeper-3.4.10.jar:../lib/javax.activation-1.2.0.jar:../lib/javax.activation-api-1.2.0.jar -jar start.jar
User=solr
Type=simple
SuccessExitStatus=143
TimeoutStopSec=10
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target

docker web dev

To simplify development I tried docker for building the runtime environment. Here are the major steps to get it running.

Prerequisites

The docker installation is straightforward as described on the docker home page:

un-install old version

aptitude remove docker docker-engine docker.io

install prerequisites

aptitude update

aptitude install apt-transport-https ca-certificates curl gnupg2 software-properties-common

add docker official key

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

check fingerprint

apt-key fingerprint 0EBFCD88

add repository

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

aptitude update

install docker from repo

aptitude install docker-ce

check if everything works

docker run hello-world

install docker compose

curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose

and check everything works

docker-compose --version
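A quick way to confirm both tools ended up on the PATH; the `have` helper is only for this snippet:

```shell
# succeeds when the named command exists on PATH
have() { command -v "$1" >/dev/null 2>&1; }

for tool in docker docker-compose; do
    if have "$tool"; then
        echo "$tool found"
    else
        echo "$tool missing"
    fi
done
```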

Docker Cheat Sheet

## List Docker CLI commands
docker
docker container --help

## Display Docker version and info
docker --version
docker version
docker info

## Execute Docker image
docker run hello-world

## List Docker images
docker image ls

## List Docker containers (running, all, all in quiet mode)
docker container ls
docker container ls --all
docker container ls -aq

Development Environment

create folder for docker environment

mkdir dockerproject

create yaml file for docker compose

vi docker-compose.yaml

At least the following sections should be available:

  • app – main application configuration
  • web – webserver configuration (required)
  • database – database configuration (required)

and here comes an example


version: '3'
services:
    app:
        build:
            context: ./
            dockerfile: app.dockerfile
        working_dir: /var/www
        volumes:
            - ./../laravel:/var/www
        environment:
            - "DB_PORT=3306"
            - "DB_HOST=database"
    web:
        build:
            context: ./
            dockerfile: web.dockerfile
        working_dir: /var/www
        volumes:
            - ./../laravel:/var/www
        ports:
            - 8080:80
    database:
        image: mysql:5.6
        volumes:
            - dbdata:/var/lib/mysql
        environment:
            - "MYSQL_DATABASE=homestead"
            - "MYSQL_USER=homestead"
            - "MYSQL_PASSWORD=secret"
            - "MYSQL_ROOT_PASSWORD=secret"
        ports:
            - "33061:3306"
volumes:
    dbdata:

and now the app configuration


FROM php:7.1.3-fpm

RUN apt-get update
RUN apt-get install -y libmcrypt-dev
RUN apt-get install -y mysql-client
RUN apt-get install -y libmagickwand-dev --no-install-recommends
RUN pecl install imagick
RUN docker-php-ext-enable imagick
RUN docker-php-ext-install mcrypt pdo_mysql
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN apt-get install -y git
RUN apt-get update && apt-get install -y zlib1g-dev
RUN docker-php-ext-install zip

the web configuration


FROM nginx:1.10

ADD vhost.conf /etc/nginx/conf.d/default.conf

snmp v3

to get the net-snmp-config tool the libsnmp-dev package must be installed:

# apt-get install libsnmp-dev
# net-snmp-config --create-snmpv3-user -ro -A 'geheim' -X 'secret' -a SHA -x AES icinga

you may also create a new user using snmp commands:

# snmpusm -v 3 -u <user> -l authNoPriv -a MD5 -A <passphrase> localhost passwd <old_passphrase> <new_passphrase>

to simplify the usage a local user profile should be created in ~/.snmp/snmp.conf:

defSecurityName <user>
defContext ""
defAuthType MD5
defSecurityLevel authNoPriv
defAuthPassphrase <passphrase>
defVersion 3

now a simple command looks like:

#snmpget localhost sysUpTime.0

mirror.sketch

The sketch for my mirror with display:

/* mirror
*
* control PWM for fan, read temperatures from DS1820 and interface to PIR
*/
#include <OneWire.h>
#include <DallasTemperature.h>
#include // interrupt routine
#define ONE_WIRE_BUS 3 // define port for DS1820 interface
#define TEMP_DEVICE_IN {0x28, 0xE0, 0x8F, 0x36, 0x06, 0x00, 0x00, 0x03}
#define TEMP_DEVICE_EX {0x28, 0x9A, 0x78, 0x37, 0x06, 0x00, 0x00, 0x2C}
#define TEMP_DEVICE_BOARD {0x28, 0x73, 0x16, 0x37, 0x06, 0x00, 0x00, 0x20}
#define LED 4
#define PIR_INTERFACE 5 // define port for PIR
#define FANSENSE 8
#define FANPWM 9
#define THRESHOLD_OFF 25
#define THRESHOLD_LOW 30
#define THRESHOLD_HIGH 40
#define THRESHOLD_ON 45
#define TASTER_DOWN 6
#define TASTER_UP 7

DeviceAddress devices[] = {TEMP_DEVICE_IN, TEMP_DEVICE_EX, TEMP_DEVICE_BOARD };
// Setup a oneWire instance to communicate with any OneWire devices
OneWire oneWire(ONE_WIRE_BUS);
// Pass our oneWire reference to Dallas Temperature.
DallasTemperature sensors(&oneWire);
int buttonPressed = 0;

void setup() {
  Serial.begin(9600); // start serial port
  sensors.begin(); // start up the library
  for (int i = 0; i < 3; i++) {
    sensors.setResolution(devices[i], 10); // set the resolution to 10 bit
  }
  pinMode(PIR_INTERFACE, INPUT); // read PIR status from external board
  pinMode(FANSENSE, INPUT); // read RPM data from sense of fan
  digitalWrite(FANSENSE, HIGH); // activate internal pull-up
  pinMode(LED, OUTPUT); // LED for status info
  pinMode(TASTER_DOWN, INPUT); // external keys for navigation input
  pinMode(TASTER_UP, INPUT);
}

void setLED(int pulse) {
  if (pulse > 0) {
    digitalWrite(LED, HIGH);
    delay(10 * pulse);
    digitalWrite(LED, LOW);
    delay(10 * pulse);
  }
}

void loop() {
  /* main loop sends list of values via serial line in format:
   * up/down (PIR status flag), TempExternal, TempInternal, TempBoard, RPM from Fan
   */
  float Temperature = 0;
  double frequency = 0;
  unsigned long pulseDuration = 0;
  int pirStatus = 0;
  pirStatus = digitalRead(PIR_INTERFACE);
  if (digitalRead(TASTER_UP) == HIGH) {
    buttonPressed = 2; // up taster pressed
  } else if (digitalRead(TASTER_DOWN) == HIGH) {
    buttonPressed = 1; // down taster pressed
  } else {
    buttonPressed = 0; // no button pressed
  }
  if (pirStatus == HIGH) {
    Serial.print("up,");
  } else {
    Serial.print("down,");
  }
  for (int i = 0; i < 3; i++) {
    Temperature = Temperature + sensors.getTempC(devices[i]);
    Serial.print(Temperature);
    Serial.print(",");
  }
  Temperature = Temperature / 3; // temperature hysteresis
  pulseDuration = pulseIn(FANSENSE, LOW);
  frequency = 1000000 / pulseDuration;
  Serial.print(frequency);
  Serial.println(); // new line for next data
  // analogWrite(FANPWM, 70); // set PWM signal
}
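Each loop iteration ends up as one CSV line on the serial port, e.g. `up,21.50,43.25,64.50,1250.00` (note the sketch prints the running sum of the temperatures, not each sensor on its own). On the receiving side such a line can be split into fields like this; the field names are my own:

```shell
# split one line as printed by the sketch into named fields
line="up,21.50,43.25,64.50,1250.00"
IFS=, read -r pir t1 t2 t3 fan_freq <<EOF
$line
EOF
echo "pir=$pir t1=$t1 t2=$t2 t3=$t3 fan_freq=$fan_freq"
```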

mirror – setup

Software Installations for Min Server

  • fetchmail
  • postfix
  • cups
  • dnsmasq
  • logwatch
  • smartmontools
  • ClamAV
  • fail2ban
  • dovecot
  • shellInABox
  • nagios
  • cacti
  • rsyslog
  • Mysql
  • netsnmp
  • spamassassin
  • git
  • apache
  • webalizer
  • wordpress
  • nextcloud
  • glype
  • gitlist
  • roundcube
  • libreoffice
  • php
  • imagemagick
  • amavis
  • spamd
  • managesieve
  • VPN Tunnel
  • ntpd
  • nfs
  • samba
  • music streamer
  • video streamer

Nagios Components

  • postfix
  • dovecot
  • apache
  • dnsmasq
  • cups
  • mysql
  • clamav
  • nextcloud
  • wordpress
  • rsyslog
  • ntpd
  • git
  • smartd
  • shellinabox
  • spamd
  • nfs
  • minidlna
  • samba
  • timemachine
  • afpd
  • cups

actual to do:

  • shinken as a nagios replacement
  • samba/timemachine
  • shellinabox

second network card with same driver

I own three network cards with an RTL 8139 chipset and finally managed to get them working with my installation by simply adding a new file:

/etc/modprobe.d/8139too.conf

alias eth1 8139too
alias eth2 8139too

The interface eth0 is reserved for the internal network card of the board.