Setup guide
This section describes the requirements needed to deploy a station: first the software requirements, then the hardware requirements, and finally their physical installation and deployment.
Device source code
This project contains code for two different STM32 boards. Each board's project is managed with STM32Cube, and the devices should be programmed using that same software.
Setting up Docker
The file docker-compose.yml provides the template necessary to launch those services. However, the configuration values in the file must be updated before deploying. The main parameters to modify are the user and password of both RabbitMQ and PostgreSQL. The other important parameter is the volume configuration for PostgreSQL, i.e. where the data is stored on the computer. The path before the colon points to the directory on the computer where the samples are stored; the path after the colon should not be modified.
Note
We can think of Docker as a virtual machine. We can provide some paths (here, volumes) on the computer that get linked to a path inside the container. The syntax is path_in_computer:path_in_docker.
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:7.3.0
    container_name: broker
    ports:
      # To learn about configuring Kafka for access across networks see
      # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
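The template above only shows the Kafka and Zookeeper services. The RabbitMQ and PostgreSQL entries mentioned earlier follow the same pattern; the sketch below is a minimal, illustrative version that would sit under the same services: key. The image tags, service names, credentials, and host path are assumptions to adapt to your deployment:

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: rabbitmq_user   # change before deploying
      RABBITMQ_DEFAULT_PASS: rabbitmq_pass   # change before deploying

  postgres:
    image: postgres:14
    container_name: postgres
    environment:
      POSTGRES_USER: postgres_user           # change before deploying
      POSTGRES_PASSWORD: postgres_pass       # change before deploying
    volumes:
      # path_in_computer:path_in_docker -- only edit the part before the colon
      - /path/on/host/samples:/var/lib/postgresql/data

Once the file is configured, launch the services in detached mode: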
$ docker-compose -f /path/to/docker-compose.yml up -d
$ docker-compose up -d # If in the same path as docker-compose.yml
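A quick way to confirm that all containers are running:

$ docker-compose ps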
Deployment services
Once all the software is installed and the hardware is properly connected, the station is ready for deployment, which is carried out with systemd services.
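The entry point of a station is a short Python script (main.py, the file launched by the service unit shown further below) that instantiates a reader and registers its handlers with a Dispatcher. The credentials and exchange names in the script are placeholders to adapt: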
#!/usr/bin/env python3
from sramplatform import Dispatcher, ConnParameters

# Custom implementation of Reader
from customreader import CustomReader

# Board type, port, and baudrate handled by this reader
reader = CustomReader("Discovery", 0, 125_000)

# RabbitMQ credentials
params = ConnParameters("rabbitmq_user", "rabbitmq_pass")

platform = Dispatcher(
    params, "exchange_commands", "station_name", "exchange_logs"
)

# Route incoming commands to the reader's handlers
platform.add_command({"method": "read"}, reader.handle_read)
platform.add_command({"method": "write", "data": True}, reader.handle_write)

if __name__ == '__main__':
    platform.run()
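To keep the dispatcher running unattended, the script can be wrapped in a systemd service. Saving the unit below as, for example, /etc/systemd/system/sramplatform.service (the name is illustrative) makes systemd restart it automatically on failure: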
[Unit]
Description=SRAM Reliability Platform
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=5
WorkingDirectory=/path/to/SRAMPlatform
ExecStart=/path/to/virtualenv/bin/python3 main.py
[Install]
WantedBy=multi-user.target
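After creating or editing the unit file, reload systemd and enable the service (using the illustrative name from above):

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sramplatform.service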
Operations can be scheduled by using the provided send_command.py script together with a systemd timer (very similar to a cron job). The following example illustrates how to create the files necessary to power off the platform every Friday at 17:00.
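Two unit files sharing the same base name are needed, for example poweroff-platform.timer and poweroff-platform.service (the names are illustrative). First, the timer: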
[Unit]
Description=Power off the SRAM Platform
[Timer]
OnCalendar=Fri *-*-* 17:00:00
Persistent=true
[Install]
WantedBy=timers.target
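And the matching oneshot service that the timer triggers: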
[Unit]
Description=Power off the SRAM Platform
After=network.target
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/path/to/SRAMPlatform
ExecStart=/path/to/virtualenv/bin/python3 send_command.py "OFF"
[Install]
WantedBy=multi-user.target
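The timer, not the service, is the unit to enable; systemctl list-timers shows when it will next fire (again using the illustrative names):

$ sudo systemctl enable --now poweroff-platform.timer
$ systemctl list-timers poweroff-platform.timer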
Configuring a dispatcher
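Each dispatcher reads its settings from a YAML file with three sections: agent (the RabbitMQ connection the dispatcher uses), reader (the board type, serial port, and baudrate the reader manages), and logging (the global log format plus a list of handler definitions). The annotated template below lists the available options: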
agent:
  url: "amqp://user:password@hostname"
  name: "agent name"
  exchange: "rabbitmq exchange"

reader:
  board_type: "Type of board the reader manages"
  port: "/path/to/ttyUSB"
  baudrate: 125000

logging:
  format: "[%(asctime)s] [%(levelname)-8s] %(message)s"
  datefmt: "%H:%M:%S %d-%m-%Y"
  loggers:
    - TelegramHandler:
        level: WARNING # INFO by default
        token: "Telegram Bot Token"
        chat_ids: 00000000000
        # Custom log format
        format: "[%(asctime)s] %(name)s\n%(message)s"
        # Filter out logs with a level higher than filter_level.
        # If both level and filter_level are defined, the logs allowed
        # satisfy: level <= record level < filter_level
        filter_level: RESULTS
    - RabbitMQHandler:
        key: "routing key"
        exchange: ""
    - StreamHandler:
        level: DEBUG
    - MailHandler:
        email: "email@gmail.com"
        oauth: "/path/to/oauth.json"
        recipients:
        subject:
    - FileHandler:
        path: "/path/to/file.log"
    - RotatingFileHandler:
        path: "/path/to/file.log"
        maxBytes: 20000
        backupCount: 7
    - TimedRotatingFileHandler:
        path: "/path/to/file.log"
        when: "midnight"
        backupCount: 7
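Since the configuration is plain YAML, it can be inspected before deployment. The sketch below is a minimal example assuming PyYAML is installed and the template is saved as dispatcher.yml (both the file name and this loader are assumptions, not part of the platform's API); it loads the file and prints the configured handlers:

#!/usr/bin/env python3
# Minimal sketch: load and inspect the dispatcher configuration above.
import yaml

with open("dispatcher.yml") as f:
    config = yaml.safe_load(f)

print(config["agent"]["url"])        # amqp://user:password@hostname
print(config["reader"]["baudrate"])  # 125000

# Each entry in logging.loggers is a one-key mapping: handler name -> options
for handler in config["logging"]["loggers"]:
    name, options = next(iter(handler.items()))
    print(name, options.get("level", "INFO"))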