first e2e attempt

This commit is contained in:
Stefano D'Orazio 2026-02-02 13:15:45 +01:00
parent c9bc37be65
commit 8f37eb5b61
42 changed files with 8301 additions and 1209 deletions


@@ -1,18 +0,0 @@
{
"permissions": {
"allow": [
"Bash(wc:*)",
"Bash(source:*)",
"Bash(python3:*)",
"Bash(.venv/bin/python3:*)",
"Bash(sudo rm:*)",
"Bash(sudo ./start.sh:*)",
"Bash(grep:*)",
"Bash(git init:*)",
"Bash(git add:*)",
"Bash(tree:*)",
"Bash(pip install:*)",
"Bash(echo:*)"
]
}
}

.gitignore vendored

@@ -152,11 +152,7 @@ dmypy.json
# pytype static type analyzer
.pytype/
.venv/
.env
outputs/
__pycache__/
*.pyc
# Cython debug symbols
cython_debug/
@@ -167,8 +163,8 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.venv/
__pycache__/
*.pyc
.env
thoughts/
# Project-specific
outputs/
# Claude Code local settings (user-specific)
.claude/settings.local.json


@@ -1,893 +0,0 @@
================================================================================
OPERATIONS NOTES - ics-simlab-config-gen_claude
================================================================================
Date: 2026-01-27
================================================================================
INITIAL PROBLEM
---------------
PLC2 crashed at startup with "ConnectionRefusedError" when it tried to
write to PLC1 via Modbus TCP before PLC1 was ready.
Cause: the callback cbs[key]() was invoked directly, with no error handling.
IMPLEMENTED SOLUTION
--------------------
File modified: tools/compile_ir.py (lines 24, 30-40, 49)
Added:
- import time
- _safe_callback() function with retry logic (30 attempts x 0.2s = 6s)
- _write() changed to call _safe_callback(cbs[key]) instead of cbs[key]()
Result:
- PLC2 no longer crashes
- Automatic retry if PLC1 is not ready yet
- Warning only after 30 failed attempts
- The container keeps running even on error
FILES CREATED
-------------
build_scenario.py - Deterministic builder (config -> IR -> logic)
validate_fix.py - Validator checking the fix is present in the generated files
CLEANUP_SUMMARY.txt - Project cleanup summary
README.md (updated) - Full documentation
docs/ (7 files):
- README_FIX.md - Main fix documentation
- QUICKSTART.txt - Quick guide
- RUNTIME_FIX.md - Detailed fix + troubleshooting
- CHANGES.md - Changes with diffs
- DELIVERABLES.md - Complete summary
- FIX_SUMMARY.txt - Before/after code comparison
- CORRECT_COMMANDS.txt - How to use absolute paths with sudo
scripts/ (3 files):
- run_simlab.sh - ICS-SimLab launcher with correct paths
- test_simlab.sh - Interactive test
- diagnose_runtime.sh - Container diagnostics
PROJECT CLEANUP
---------------
Moved to docs/:
- 7 documentation files from the root
Moved to scripts/:
- 3 bash scripts from the root
Deleted:
- database/, docker/, inputs/ (empty directories)
- outputs/last_raw_response.txt (temporary)
- outputs/logic/, logic_ir/, logic_water_tank/ (old versions)
Kept:
- outputs/scenario_run/ (FINAL SCENARIO for ICS-SimLab)
- outputs/configuration.json (base config)
- outputs/ir/ (intermediate IR)
FINAL STRUCTURE
---------------
Root: 4 essential files (main.py, build_scenario.py, validate_fix.py, README.md)
docs/: documentation (60K)
scripts/: utilities (20K)
outputs/: only the required files (56K)
+ source code directories (tools/, services/, models/, templates/, helpers/)
+ references (examples/, spec/, prompts/)
USEFUL COMMANDS
---------------
# Build the full scenario
python3 build_scenario.py --overwrite
# Validate that the fix is present
python3 validate_fix.py
# Run ICS-SimLab (IMPORTANT: absolute paths with sudo!)
./scripts/run_simlab.sh
# Or manually:
cd /home/stefano/projects/ICS-SimLab-main/curtin-ics-simlab
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
# Monitor PLC2 logs
sudo docker logs $(sudo docker ps --format '{{.Names}}' | grep plc2) -f
# Stop
cd /home/stefano/projects/ICS-SimLab-main/curtin-ics-simlab && sudo ./stop.sh
SUDO PATH PROBLEM
-----------------
Error received: FileNotFoundError when using ~/projects/...
Cause: sudo does NOT expand ~ to /home/stefano
Solution:
- ALWAYS use absolute paths with sudo
- Or use ./scripts/run_simlab.sh (which handles this automatically)
FULL WORKFLOW
-------------
1. Text -> configuration.json (LLM):
python3 main.py --input-file prompts/input_testuale.txt
2. Config -> full scenario:
python3 build_scenario.py --overwrite
3. Validate the fix:
python3 validate_fix.py
4. Run:
./scripts/run_simlab.sh
FIX VALIDATION
--------------
$ python3 validate_fix.py
✅ plc1.py: OK (retry fix present)
✅ plc2.py: OK (retry fix present)
Manual check:
$ grep "_safe_callback" outputs/scenario_run/logic/plc2.py
(must find the function and the call inside _write)
WHAT TO LOOK FOR IN THE LOGS
----------------------------
✅ Success: NO "Exception in thread" errors in PLC2
⚠️ Warning: "WARNING: Callback failed after 30 attempts" (PLC1 slow but OK)
❌ Error: a container crashes (fix missing or a different problem)
IMPORTANT NOTES
---------------
1. ALWAYS use absolute paths with sudo (no ~)
2. Rebuild the scenario after config changes: python3 build_scenario.py --overwrite
3. Always validate after a rebuild: python3 validate_fix.py
4. The fix lives in the generator (tools/compile_ir.py), so it propagates automatically
5. Only dependency: time.sleep (stdlib, no extra packages)
FINAL STATUS
------------
✅ Fix implemented and tested
✅ Scenario ready in outputs/scenario_run/
✅ Validator confirms the fix is present
✅ Documentation complete
✅ Project clean and organized
✅ Scripts ready to run
Ready for testing with ICS-SimLab!
================================================================================
NEW FEATURE: PROCESS SPEC PIPELINE (LLM -> process_spec.json -> HIL logic)
================================================================================
Date: 2026-01-27
GOAL
----
Generate the process physics via the LLM without free-form Python code.
Pipeline: text prompt -> LLM (structured output) -> process_spec.json -> deterministic compilation -> HIL logic.
FILES CREATED
-------------
models/process_spec.py - Pydantic model for ProcessSpec
- model: Literal["water_tank_v1"] (enum-ready)
- dt: float (time step)
- params: WaterTankParams (level_min/max/init, area, q_in_max, k_out)
- signals: WaterTankSignals (HIL key mapping)
tools/generate_process_spec.py - LLM generation -> process_spec.json
- Uses structured output (json_schema) to guarantee valid output
- Reads the prompt + config for context
tools/compile_process_spec.py - Deterministic compilation spec -> HIL logic
- Implements the water_tank_v1 physics
- d(level)/dt = (Q_in - Q_out) / area
- Q_in = q_in_max when the valve is open
- Q_out = k_out * sqrt(level) (gravity drain)
tools/validate_process_spec.py - Validator with tick test
- Checks the model is supported
- Checks dt > 0, min < max, init within bounds
- Checks that the signal keys exist in the HIL physical_values
- Tick test: 100 steps to verify the bounds hold
examples/water_tank/prompt.txt - Example prompt for the water tank
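A condensed sketch of the ProcessSpec model described above. Field names follow these notes; the docstrings and the v1-style `class Config` are assumptions (the real file may use Pydantic v2's `model_config`):

```python
from typing import Literal

from pydantic import BaseModel


class WaterTankParams(BaseModel):
    """Physical parameters of the water_tank_v1 model."""
    class Config:
        extra = "forbid"  # reject unknown fields, as noted above

    level_min: float
    level_max: float
    level_init: float
    area: float
    q_in_max: float
    k_out: float


class WaterTankSignals(BaseModel):
    """Mapping from model roles to HIL physical_values keys."""
    class Config:
        extra = "forbid"

    valve_open_key: str
    tank_level_key: str
    level_measured_key: str


class ProcessSpec(BaseModel):
    class Config:
        extra = "forbid"

    model: Literal["water_tank_v1"]  # enum-ready: add new model names here
    dt: float                        # simulation time step in seconds
    params: WaterTankParams
    signals: WaterTankSignals
```

The JSON Schema handed to the LLM for structured output can then be derived directly from this model.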
IMPLEMENTED PHYSICS (water_tank_v1)
-----------------------------------
Equations:
- Q_in = q_in_max if valve_open >= 0.5 else 0
- Q_out = k_out * sqrt(level)
- d_level = (Q_in - Q_out) / area * dt
- level = clamp(level + d_level, level_min, level_max)
Typical parameters:
- dt = 0.1s (10 Hz)
- level_min = 0, level_max = 1.0 (meters)
- level_init = 0.5 (50% capacity)
- area = 1.0 m^2
- q_in_max = 0.02 m^3/s
- k_out = 0.01 m^2.5/s
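The equations and typical parameters above can be sketched as a single integration step (illustrative, not the generated code):

```python
import math


def water_tank_step(level, valve_open, dt=0.1, area=1.0,
                    q_in_max=0.02, k_out=0.01,
                    level_min=0.0, level_max=1.0):
    """One explicit-Euler step of the water_tank_v1 model."""
    q_in = q_in_max if valve_open >= 0.5 else 0.0  # inlet flow when valve open
    q_out = k_out * math.sqrt(level)               # gravity-driven outlet flow
    level += (q_in - q_out) / area * dt
    return min(max(level, level_min), level_max)   # clamp to tank bounds
```

This is essentially what the tick test exercises: iterate 100 steps with the valve open and closed and check the level never leaves [level_min, level_max].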
PROCESS SPEC PIPELINE COMMANDS
------------------------------
# 1. Generate process_spec.json from a prompt (requires OPENAI_API_KEY)
python3 -m tools.generate_process_spec \
--prompt examples/water_tank/prompt.txt \
--config outputs/configuration.json \
--out outputs/process_spec.json
# 2. Validate process_spec.json against the config
python3 -m tools.validate_process_spec \
--spec outputs/process_spec.json \
--config outputs/configuration.json
# 3. Compile process_spec.json into HIL logic
python3 -m tools.compile_process_spec \
--spec outputs/process_spec.json \
--out outputs/hil_logic.py \
--overwrite
HIL CONTRACT HONOURED
---------------------
- Initializes all physical_values keys (setdefault)
- Reads only io:"input" (valve_open_key)
- Writes only io:"output" (tank_level_key, level_measured_key)
- Clamps level between min/max
ADVANTAGES OF THE APPROACH
--------------------------
1. The LLM only generates a structured spec, not Python code
2. Deterministic, verifiable compilation
3. Pre-runtime validation with a tick test
4. Extensible: adding new models (e.g. bottle_line_v1) is easy
NOTES
-----
- ProcessSpec uses Pydantic with extra="forbid" for safety
- The JSON Schema for structured output is generated from Pydantic
- The tick test runs 100 steps with the valve open and with it closed
- If keys do not exist in the HIL, validation fails
================================================================================
PROCESS SPEC INTEGRATION INTO SCENARIO ASSEMBLY
================================================================================
Date: 2026-01-27
GOAL
----
Integrate the process_spec pipeline into the scenario build flow, so that
Curtin ICS-SimLab can run end-to-end with LLM-generated physics.
CHANGES MADE
------------
1. build_scenario.py updated:
- New optional --process-spec argument
- If provided, compiles process_spec.json into the correct HIL file (e.g. hil_1.py)
- Replaces/overwrites the IR-generated HIL logic
- Added Step 5: verify that all referenced logic/*.py files exist
2. tools/verify_scenario.py created:
- Standalone check that the scenario is complete
- Checks configuration.json exists
- Checks the logic/ directory exists
- Checks all referenced logic files exist
- Shows orphan files (not referenced)
FULL FLOW WITH PROCESS SPEC
---------------------------
# 1. Generate configuration.json (LLM or manual)
python3 main.py --input-file prompts/input_testuale.txt
# 2. Generate process_spec.json (LLM with structured output)
python3 -m tools.generate_process_spec \
--prompt examples/water_tank/prompt.txt \
--config outputs/configuration.json \
--out outputs/process_spec.json
# 3. Validate process_spec.json
python3 -m tools.validate_process_spec \
--spec outputs/process_spec.json \
--config outputs/configuration.json
# 4. Build the scenario with the process_spec (replaces the IR-based HIL)
python3 build_scenario.py \
--out outputs/scenario_run \
--process-spec outputs/process_spec.json \
--overwrite
# 5. Verify the scenario is complete
python3 -m tools.verify_scenario --scenario outputs/scenario_run -v
# 6. Run in ICS-SimLab
cd /home/stefano/projects/ICS-SimLab-main/curtin-ics-simlab
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
FLOW WITHOUT PROCESS SPEC (backward compatibility)
--------------------------------------------------
# Build the scenario from the IR (as before)
python3 build_scenario.py --out outputs/scenario_run --overwrite
LOGIC FILE VERIFICATION
-----------------------
The new Step 5 in build_scenario.py verifies that:
- Every plcs[].logic exists in logic/
- Every hils[].logic exists in logic/
- If a file is missing, the build fails with a clear error
Standalone command:
python3 -m tools.verify_scenario --scenario outputs/scenario_run -v
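The Step 5 check can be sketched as follows. This is a simplified stand-in for tools/verify_scenario.py (the real tool also reports orphan files):

```python
import json
from pathlib import Path


def verify_scenario(scenario_dir):
    """Return a list of errors for logic files referenced by the
    scenario's configuration.json but missing from logic/."""
    scenario = Path(scenario_dir)
    config_path = scenario / "configuration.json"
    if not config_path.exists():
        return ["configuration.json missing"]
    config = json.loads(config_path.read_text())
    errors = []
    for section in ("plcs", "hils"):
        for device in config.get(section, []):
            logic = device.get("logic")
            if logic and not (scenario / "logic" / logic).exists():
                errors.append(f"{section}: missing logic/{logic}")
    return errors
```

An empty list means the scenario is safe to hand to ICS-SimLab.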
FINAL SCENARIO STRUCTURE
------------------------
outputs/scenario_run/
├── configuration.json (ICS-SimLab configuration)
└── logic/
    ├── plc1.py (PLC1 logic, from IR)
    ├── plc2.py (PLC2 logic, from IR)
    └── hil_1.py (HIL logic, from process_spec or IR)
IMPORTANT NOTES
---------------
- --process-spec is optional: if omitted, the IR is used for the HIL (previous behavior)
- The HIL file is overwritten if it exists (--overwrite is implicit for Step 2b)
- The HIL file name is taken from the config (hils[].logic), not hardcoded
- The final check ensures the scenario is complete before running
================================================================================
ICS-SimLab SQLITE DATABASE PROBLEM
================================================================================
Date: 2026-01-27
SYMPTOM
-------
All containers (HIL, sensors, actuators, UI) crash with:
sqlite3.OperationalError: unable to open database file
CAUSE
-----
The `physical_interactions.db` file becomes a DIRECTORY instead of a file.
This happens when Docker creates the volume mount point BEFORE ICS-SimLab creates the DB.
Check:
$ ls -la ~/projects/ICS-SimLab-main/curtin-ics-simlab/simulation/communications/
drwxr-xr-x 2 root root 4096 Jan 27 15:49 physical_interactions.db ← a DIRECTORY!
SOLUTION
--------
Clean up completely and restart:
cd ~/projects/ICS-SimLab-main/curtin-ics-simlab
# Stop and remove all containers and volumes
sudo docker-compose down -v --remove-orphans
sudo docker system prune -af
# Remove the corrupted simulation directory
sudo rm -rf simulation
# Restart (creates the DB BEFORE Docker)
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
IMPORTANT NOTE: ABSOLUTE PATHS
------------------------------
ALWAYS use the full absolute path (NOT ~, which sudo does not expand).
WRONG:   sudo ./start.sh ~/projects/.../outputs/scenario_run
CORRECT: sudo ./start.sh /home/stefano/projects/.../outputs/scenario_run
CORRECT ICS-SimLab STARTUP SEQUENCE
-----------------------------------
1. rm -r simulation (cleans the previous simulation)
2. python3 main.py $1 (creates the DB + container directories)
3. docker compose build (builds the images)
4. docker compose up (starts the containers)
The DB is created at step 2, BEFORE Docker mounts the volumes.
If Docker starts with the volumes defined but the file missing, it creates a directory.
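A small pre-flight helper to detect this failure mode before launching (hypothetical, not part of the repo):

```python
from pathlib import Path


def check_db_mount(db_path):
    """Classify the state of the SQLite DB path before starting Docker.

    The known failure mode: Docker creates the volume mount point first,
    so the DB path exists but is a directory instead of a file.
    """
    p = Path(db_path)
    if p.is_dir():
        return "corrupted: path is a directory (clean up and restart)"
    if not p.exists():
        return "missing: let start.sh/main.py create it before Docker"
    return "ok"
```

Running this against simulation/communications/physical_interactions.db before `docker compose up` turns the cryptic sqlite3.OperationalError into an actionable message.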
================================================================================
IMPROVED HIL PHYSICS: COUPLED TANK + BOTTLE MODEL
================================================================================
Date: 2026-01-27
OBSERVATIONS
------------
- The generated HIL physics was too simplistic:
  - Normalized 0..1 ranges with continuous clamping
  - bottle_at_filler derived directly from conveyor_cmd (inverted logic)
  - No tracking of the bottle's distance
  - No coupling: the bottle filled without draining the tank
  - No bottle reset when it leaves
- The working example (examples/water_tank/bottle_factory_logic.py) uses:
  - Integer ranges: tank 0-1000, bottle 0-200, distance 0-130
  - Booleans for actuator states
  - Coupling: the bottle fills ONLY if outlet_valve=True AND distance in [0,30]
  - Reset: when distance < 0, a new bottle appears with fill=0 and distance=130
  - Two separate threads for tank and bottle
CHANGES MADE
------------
File: tools/compile_ir.py, function render_hil_multi()
1. Detect whether BOTH TankLevelBlock and BottleLineBlock are present
2. If so, generate coupled physics in the style of the example:
  - Internal variable _bottle_distance (0-130)
  - bottle_at_filler = (0 <= _bottle_distance <= 30)
  - Tank dynamics: +18 if inlet ON, -6 if outlet ON
  - Bottle fill: +6 ONLY if outlet ON AND the bottle is at the filler (conservation)
  - Conveyor: distance -= 4; if < 0, reset to 130 and fill = 0
  - Clamp: tank 0-1000, bottle 0-200
  - time.sleep(0.6) as in the example
3. If not, fall back to the previous simple physics
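The coupled physics above, as one illustrative tick function. The generated HIL wraps this in a `while True` loop; gating the conveyor movement on its command bit is an assumption of this sketch:

```python
def coupled_step(state, inlet_on, outlet_on, conveyor_on):
    """One tick of the coupled tank + bottle physics.

    state keys: tank_level (0-1000), bottle_fill (0-200),
    bottle_distance (0-130, internal).
    """
    at_filler = 0 <= state["bottle_distance"] <= 30
    if inlet_on:
        state["tank_level"] += 18
    if outlet_on:
        state["tank_level"] -= 6        # water leaves the tank...
        if at_filler:
            state["bottle_fill"] += 6   # ...and goes into the bottle (conservation)
    if conveyor_on:
        state["bottle_distance"] -= 4
        if state["bottle_distance"] < 0:  # bottle left: spawn a new empty one
            state["bottle_distance"] = 130
            state["bottle_fill"] = 0
    state["tank_level"] = min(max(state["tank_level"], 0), 1000)
    state["bottle_fill"] = min(max(state["bottle_fill"], 0), 200)
    state["bottle_at_filler"] = int(at_filler)
    return state
```

The key fix over the old physics is visible here: the bottle can only gain what the tank loses, and only while it sits under the filler.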
RANGES AND SEMANTICS
--------------------
- tank_level: 0-1000 (500 = 50% full)
- bottle_fill: 0-200 (200 = full)
- bottle_distance: 0-130, internal (0-30 = under the filler)
- bottle_at_filler: 0 or 1 (boolean)
- Actuator states: read as bool()
VERIFICATION
------------
.venv/bin/python3 build_scenario.py --out outputs/scenario_run --overwrite
cat outputs/scenario_run/logic/hil_1.py
grep "bottle_at_filler" outputs/scenario_run/logic/hil_1.py
grep "_bottle_distance" outputs/scenario_run/logic/hil_1.py
TO DO
-----
- Verify the sensors read the new ranges correctly
- Possibly add separate threads as in the example (currently a single loop)
- Test end-to-end with ICS-SimLab
================================================================================
CRITICAL FIX: THE ICS-SimLab CONTRACT REQUIRES logic() TO RUN FOREVER
================================================================================
Date: 2026-01-27
ROOT CAUSE IDENTIFIED
---------------------
ICS-SimLab calls logic() ONCE in a thread and expects it to run forever.
Our generated code returned immediately instead -> the thread died ->
no traffic.
See ICS-SimLab/src/components/plc.py, lines 352-365:
logic_thread = Thread(target=logic.logic, args=(...), daemon=True)
logic_thread.start()
...
logic_thread.join()  # ← waits forever!
COMPARISON WITH THE WORKING EXAMPLE (examples/water_tank/)
----------------------------------------------------------
Working example PLC:
def logic(...):
    time.sleep(2)   # wait for sync
    while True:     # infinite loop
        # logic
        time.sleep(0.1)
Our code BEFORE:
def logic(...):
    # logic
    return  # ← WRONG: returns immediately!
CHANGES MADE
------------
File: tools/compile_ir.py
1. PLC logic now generates:
- time.sleep(2) at the start for sync
- a while True: infinite loop
- the logic inside the loop, indented +4
- time.sleep(0.1) at the end of the loop
- _heartbeat() to log every 5 seconds
2. HIL logic now generates:
- direct initialization (not setdefault)
- time.sleep(3) for sync
- a while True: infinite loop
- the physics inside the loop, indented +4
- time.sleep(0.1) at the end of the loop
3. _safe_callback improved:
- catches OSError and ConnectionException
- returns a bool for tracking
- 20 attempts x 0.25s = 5s of retries
GENERATED STRUCTURE NOW
-----------------------
PLC:
def logic(input_registers, output_registers, state_update_callbacks):
    time.sleep(2)
    while True:
        _heartbeat()
        # logic using _write() and _get_float()
        time.sleep(0.1)
HIL:
def logic(physical_values):
    physical_values['key'] = initial_value
    time.sleep(3)
    while True:
        # physics
        time.sleep(0.1)
VERIFICATION
------------
# Rebuild the scenario
.venv/bin/python3 build_scenario.py --out outputs/scenario_run --overwrite
# Check that while True is present
grep "while True" outputs/scenario_run/logic/*.py
# Check that time.sleep is present
grep "time.sleep" outputs/scenario_run/logic/*.py
# Run in ICS-SimLab
cd ~/projects/ICS-SimLab-main/curtin-ics-simlab
sudo docker-compose down -v
sudo rm -rf simulation
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
# Check the logs
sudo docker logs plc1 2>&1 | grep HEARTBEAT
sudo docker logs plc2 2>&1 | grep HEARTBEAT
================================================================================
PLC AND HIL IMPROVEMENTS: INITIALIZATION + EXTERNAL WATCHER
================================================================================
Date: 2026-01-27
CONTEXT
-------
Comparing against examples/water_tank/logic/plc1.py we noticed that:
1. The example PLC initializes its outputs and calls the callbacks BEFORE the loop
2. The example PLC tracks prev_output_valve to detect external changes (HMI)
3. Our generator did neither
CHANGES MADE
------------
A) PLC generation (tools/compile_ir.py):
1. Explicit initialization phase BEFORE the while loop:
- sets every output to 0
- calls the callback for every output
- updates _prev_outputs for tracking
2. External-output watcher (_check_external_changes):
- new function that detects external changes to outputs (e.g. from the HMI)
- called at the start of every loop iteration
- if an output changed externally, calls its callback
3. _prev_outputs tracking:
- global dict tracking the values written by the PLC
- _write() updates _prev_outputs whenever it writes
- avoids double callbacks: if the PLC itself wrote the value, no callback is needed
4. _collect_output_keys():
- new helper that extracts all output keys from the rules
- used to generate the _output_keys list for the watcher
B) HIL generation (tools/compile_ir.py):
1. Bottle fill threshold:
- the bottle fills ONLY while bottle_fill < 200 (max)
- prevents logical overflow
C) Validator (services/validation/plc_callback_validation.py):
1. Recognizes the _write() pattern:
- if a file defines a _write() function, skip strict validation
- _write() handles write + callback + tracking internally
GENERATED PATTERN NOW
---------------------
PLC (plc1.py, plc2.py):
def logic(input_registers, output_registers, state_update_callbacks):
    global _prev_outputs
    # --- Explicit initialization: set outputs and call callbacks ---
    if 'tank_input_valve' in output_registers:
        output_registers['tank_input_valve']['value'] = 0
        _prev_outputs['tank_input_valve'] = 0
        if 'tank_input_valve' in state_update_callbacks:
            _safe_callback(state_update_callbacks['tank_input_valve'])
    ...
    # Wait for other components to start
    time.sleep(2)
    _output_keys = ['tank_input_valve', 'tank_output_valve']
    # Main loop - runs forever
    while True:
        _heartbeat()
        # Check for external changes (e.g., HMI)
        _check_external_changes(output_registers, state_update_callbacks, _output_keys)
        # Control logic with _write()
        ...
        time.sleep(0.1)
HIL (hil_1.py):
def logic(physical_values):
    ...
    while True:
        ...
        # Conservation: if the bottle is at the filler AND not full, water goes into it
        if outlet_valve_on:
            tank_level -= 6
            if bottle_at_filler and bottle_fill < 200:  # threshold
                bottle_fill += 6
        ...
GENERATED HELPER FUNCTIONS
--------------------------
_write(out_regs, cbs, key, value):
- writes the value if it changed
- updates _prev_outputs[key] for tracking
- calls the callback if present
_check_external_changes(out_regs, cbs, keys):
- for every key in keys:
  - if the current value != _prev_outputs[key]
  - the value was changed externally (HMI)
  - calls the callback
  - updates _prev_outputs
_safe_callback(cb, retries, delay):
- retry logic for startup race conditions
- catches OSError and ConnectionException
VERIFICATION
------------
# Rebuild
.venv/bin/python3 build_scenario.py --overwrite
# Check the initialization
grep "Explicit initialization" outputs/scenario_run/logic/plc*.py
# Check the external watcher
grep "_check_external_changes" outputs/scenario_run/logic/plc*.py
# Check the bottle threshold
grep "bottle_fill < 200" outputs/scenario_run/logic/hil_1.py
================================================================================
FIX: AUTO-GENERATED PLC MONITORS + ABSOLUTE THRESHOLD SCALE
================================================================================
Date: 2026-01-27
PROBLEMS IDENTIFIED
-------------------
1) Empty PLC monitors: the PLCs had no outbound_connections to the sensors,
and monitors was always []. The sensors were running but nobody polled them.
2) Scale mismatch: the HIL uses integer ranges (tank 0-1000, bottle 0-200) but
the PLC thresholds were normalized (0.2, 0.8 on a 0-1 scale).
Result: 482 >= 0.8 is always True -> wrong logic.
3) Manual edits to configuration.json do not survive a rebuild.
SOLUTION IMPLEMENTED
--------------------
A) Auto-generation of PLC monitors (tools/enrich_config.py):
- new tool that enriches configuration.json
- for every PLC input register:
  - finds the matching HIL output (e.g. water_tank_level -> water_tank_level_output)
  - finds the sensor that exposes that value
  - adds an outbound_connection to the sensor
  - adds a monitor entry for polling
- for every PLC output register:
  - finds the matching actuator (e.g. tank_input_valve -> tank_input_valve_input)
  - adds an outbound_connection to the actuator
  - adds a controller entry
B) Absolute threshold scale (models/ir_v1.py + tools/compile_ir.py):
- added signal_max to HysteresisFillRule and ThresholdOutputRule
- make_ir_from_config.py: sets signal_max=1000 for the tank, signal_max=200 for the bottle
- compile_ir.py: converts normalized thresholds to absolute ones:
  - low=0.2, signal_max=1000 -> abs_low=200
  - high=0.8, signal_max=1000 -> abs_high=800
  - threshold=0.2, signal_max=200 -> abs_threshold=40
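The normalized-to-absolute conversion is just a multiplication by signal_max; a sketch (function name is illustrative):

```python
def hysteresis_thresholds(low_norm, high_norm, signal_max):
    """Scale normalized (0-1) hysteresis bounds to the absolute integer
    range used by the HIL, so comparisons like 482 >= high make sense."""
    return low_norm * signal_max, high_norm * signal_max
```

With signal_max=1000 this yields (200.0, 800.0), matching the generated `if lvl <= 200.0` / `elif lvl >= 800.0` checks; with signal_max=200 a 0.2 threshold becomes 40.0.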
C) Updated pipeline (build_scenario.py):
- new Step 0: calls enrich_config.py
- uses configuration_enriched.json for all subsequent steps
FILES MODIFIED
--------------
- tools/enrich_config.py (NEW) - Enriches the config with monitors
- models/ir_v1.py - Added signal_max to the rules
- tools/make_ir_from_config.py - Sets signal_max for tank/bottle
- tools/compile_ir.py - Uses absolute thresholds
- build_scenario.py - Added Step 0 enrichment
VERIFICATION
------------
# Rebuild the scenario
.venv/bin/python3 build_scenario.py --overwrite
# Check the generated monitors
grep -A10 '"monitors"' outputs/configuration_enriched.json
# Check the absolute thresholds in the PLC
grep "lvl <=" outputs/scenario_run/logic/plc1.py
# Should show: if lvl <= 200.0 and elif lvl >= 800.0
grep "v <" outputs/scenario_run/logic/plc2.py
# Should show: if v < 40.0
# Run ICS-SimLab
cd ~/projects/ICS-SimLab-main/curtin-ics-simlab
sudo docker-compose down -v
sudo rm -rf simulation
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
================================================================================
FIX: RULE-AWARE INITIAL VALUES (NO LONGER ALL ZEROS)
================================================================================
Date: 2026-01-28
OBSERVED PROBLEM
----------------
- Flat UI: tank level ~482, bottle fill ~18 (never changing)
- Cause: initialization set ALL outputs to 0
- With the tank at 500 (mid-range between low=200 and high=800), the
hysteresis logic writes nothing -> both valves stay at 0 -> no flow
- The system is stuck in a steady state
SOLUTION
--------
Initial values derived from the rules instead of all zeros:
1) HysteresisFillRule:
- inlet_out = 0 (closed)
- outlet_out = 1 (OPEN) <- this starts the draining
- The tank drops -> reaches low=200 -> the inlet opens -> the cycle starts
2) ThresholdOutputRule:
- output_id = true_value (typically 1)
- Activates the output initially
FILE MODIFIED
-------------
- tools/compile_ir.py
- New function _compute_initial_values(rules) -> Dict[str, int]
- render_plc_rules() uses init_values instead of a fixed 0
- A comment in the generated code explains the reasoning
VERIFICATION
------------
# Rebuild
.venv/bin/python3 build_scenario.py --overwrite
# Check the init values in the generated PLC
grep -A3 "Explicit initialization" outputs/scenario_run/logic/plc1.py
# Must show: outlet = 1, inlet = 0
grep "tank_output_valve.*value.*=" outputs/scenario_run/logic/plc1.py
# Must show: output_registers['tank_output_valve']['value'] = 1
# Run and check that the tank level changes
cd ~/projects/ICS-SimLab-main/curtin-ics-simlab
sudo docker-compose down -v && sudo rm -rf simulation
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
# After ~30 seconds the UI must show the tank level dropping
================================================================================
FIX: HMI MONITOR ADDRESSES DERIVED FROM THE PLC REGISTER MAP
================================================================================
Date: 2026-01-28
OBSERVED PROBLEM
----------------
The HMI logs repeatedly show: "ERROR - Error: couldn't read values" for monitors
(water_tank_level, bottle_fill_level, bottle_at_filler).
Cause: the HMI monitors used guessed value_type/address instead of deriving them
from the target PLC's register map. E.g.:
- HMI monitor bottle_fill_level: address=2 (WRONG)
- PLC2 register bottle_fill_level: address=1 (CORRECT)
- The HMI tried to read holding_register@2, which does not exist -> Modbus error
SOLUTION IMPLEMENTED
--------------------
File modified: tools/enrich_config.py
1) New helper find_register_mapping(device, id):
- searches all register types (coil, discrete_input, holding_register, input_register)
- returns (value_type, address, count) if it finds the register for id
- returns None if not found
2) New function enrich_hmi_connections(config):
- for every HMI monitor polling a PLC:
  - finds the target PLC via the outbound_connection IP
  - looks up the register in the PLC via find_register_mapping
  - updates value_type, address, count to match the PLC
- prints "FIX:" whenever it corrects a value
- prints "WARNING:" if the register is not found (no guessed defaults)
- same logic for the HMI controllers
3) main() updated:
- calls enrich_hmi_connections() after enrich_plc_connections()
- the summary also includes HMI monitors/controllers
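A sketch of find_register_mapping; the exact configuration.json layout (a registers dict keyed by register type) is an assumption of this sketch:

```python
REGISTER_TYPES = ("coil", "discrete_input", "holding_register", "input_register")


def find_register_mapping(device, register_id):
    """Look up a register by id across all Modbus register types of a device.

    Returns (value_type, address, count), or None if the id is unknown —
    the caller then emits a WARNING instead of guessing a default.
    """
    for value_type in REGISTER_TYPES:
        for reg in device.get("registers", {}).get(value_type, []):
            if reg.get("id") == register_id:
                return value_type, reg["address"], reg.get("count", 1)
    return None
```

This is what lets the enricher rewrite an HMI monitor like bottle_fill_level from holding_register@2 to the PLC's actual holding_register@1.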
EXAMPLE OUTPUT
--------------
$ python3 -m tools.enrich_config --config outputs/configuration.json \
--out outputs/configuration_enriched.json --overwrite
Enriching PLC connections...
Fixing HMI monitors/controllers...
FIX: hmi_1 monitor 'bottle_fill_level': holding_register@2 -> holding_register@1 (from plc2)
Summary:
plc1: 4 outbound_connections, 1 monitors, 2 controllers
plc2: 4 outbound_connections, 2 monitors, 2 controllers
hmi_1: 3 monitors, 1 controllers
VERIFICATION
------------
# Rebuild the scenario
python3 build_scenario.py --out outputs/scenario_run --overwrite
# Check that bottle_fill_level has the correct address
grep -A5 '"id": "bottle_fill_level"' outputs/configuration_enriched.json | grep address
# Must show: "address": 1 (not 2)
# Run ICS-SimLab
cd /home/stefano/projects/ICS-SimLab-main/curtin-ics-simlab
sudo docker-compose down -v && sudo rm -rf simulation
sudo ./start.sh /home/stefano/projects/ics-simlab-config-gen_claude/outputs/scenario_run
# Check that the HMI no longer shows "couldn't read values"
sudo docker logs hmi_1 2>&1 | grep -i error
# The UI must show values changing over time


@@ -131,6 +131,58 @@ Maintain `appunti.txt` in the repo root with bullet points (in Italian) document
Include `appunti.txt` in diffs when updated.
## Diary Skill (Work Diary)
The repository uses two files to document the work:
| File | Use |
|------|-----|
| `appunti.txt` | Quick operational notes (bullet points) |
| `diario.md` | Thesis-ready daily log |
### Rules for Claude (entries are written in Italian)
1. **appunti.txt**: update whenever code, config, or tests change. Concise bullet-point style. Update immediately, while working.
2. **diario.md**: update at the end of a long request (user prompt >30 words). Use the template in `diario.md`. Explain the *why* behind decisions, not just the *what*.
3. **Commands run**: always include the exact commands and their outcome (✅ PASS, ❌ FAIL). If a command was not run, write "⚠️ not verified" explicitly.
4. **Never invent**: do not claim a command was run if it was not. When in doubt, write "not verified".
5. **Dates and paths**: use absolute dates (YYYY-MM-DD) and paths from the repo root where relevant.
6. **Tone**: practical and direct. Avoid walls of text. Every entry must be readable in under 2 minutes.
7. **Artifacts**: always list the paths of the files produced (json, py, log, pcap).
### Minimal example entry
```markdown
## 2026-01-29
### Goal
Fix the PLC startup race condition.
### Actions
1. Added retry logic in `tools/compile_ir.py`
### Decisions
- **Retry 30x0.2s**: enough for container startup (~6s max)
### Validation
```bash
python3 validate_fix.py
# ✅ PASS
```
### Artifacts
- `outputs/scenario_run/logic/plc1.py`
### Next step
Test end-to-end with ICS-SimLab
```
## Validation Rules
Validators catch:

File diff suppressed because it is too large


@@ -7,6 +7,9 @@ Usage:
With process spec (uses LLM-generated physics instead of IR heuristics for HIL):
python3 build_scenario.py --out outputs/scenario_run --process-spec outputs/process_spec.json --overwrite
With control plan (declarative HIL logic, more flexible than process spec):
python3 build_scenario.py --out outputs/scenario_run --control-plan outputs/control_plan.json --overwrite
"""
import argparse
@@ -105,6 +108,11 @@ def main() -> None:
default=None,
help="Path to process_spec.json for HIL physics (optional, replaces IR-based HIL)",
)
parser.add_argument(
"--control-plan",
default=None,
help="Path to control_plan.json for declarative HIL logic (optional, replaces IR-based HIL)",
)
parser.add_argument(
"--skip-semantic",
action="store_true",
@@ -117,6 +125,14 @@ def main() -> None:
ir_path = Path(args.ir_file)
logic_dir = out_dir / "logic"
process_spec_path = Path(args.process_spec) if args.process_spec else None
control_plan_path = Path(args.control_plan) if args.control_plan else None
# Auto-detect control_plan.json if not explicitly provided
if control_plan_path is None:
default_control_plan = Path("outputs/control_plan.json")
if default_control_plan.exists():
control_plan_path = default_control_plan
print(f"Auto-detected control plan: {control_plan_path}")
# Validate input
if not config_path.exists():
@@ -125,6 +141,9 @@ def main() -> None:
if process_spec_path and not process_spec_path.exists():
raise SystemExit(f"ERROR: Process spec file not found: {process_spec_path}")
if control_plan_path and not control_plan_path.exists():
raise SystemExit(f"ERROR: Control plan file not found: {control_plan_path}")
print(f"\n{'#'*60}")
print(f"# Building scenario: {out_dir}")
print(f"# Using Python: {sys.executable}")
@@ -208,6 +227,22 @@ def main() -> None:
]
run_command(cmd2b, f"Step 2b: Compile process_spec.json to {hil_logic_name}")
# Step 2c (optional): Compile control_plan.json to HIL logic (replaces IR-generated HIL)
if control_plan_path:
cmd2c = [
sys.executable,
"-m",
"tools.compile_control_plan",
"--control-plan",
str(control_plan_path),
"--out",
str(logic_dir),
"--config",
str(config_path), # Pass config for validation
"--overwrite", # Always overwrite to replace IR-generated HIL
]
run_command(cmd2c, "Step 2c: Compile control_plan.json to HIL logic")
# Step 3: Validate logic files
cmd3 = [
sys.executable,

diario.md Normal file
@@ -0,0 +1,134 @@
# diario.md - Work Journal for the Thesis
## Difference between diario.md and appunti.txt
| File | Purpose | Style |
|------|---------|-------|
| **appunti.txt** | Quick operational notes. Bullet points on fixes, commands, errors. Updated while working. | Concise, technical, no narrative |
| **diario.md** | Daily log for the thesis. Documents decisions, rationale, results. Updated after a long request (>30 words). | Structured, explains the "why", thesis-ready |
**Rule of thumb**:
- Found a bug? → appunti.txt (right away)
- Finished a long request (>30 words)? → diario.md (reasoned summary)
---
## Entry Template
````markdown
## YYYY-MM-DD
### Session objective
[What this session aims to achieve]
### Initial state
- Branch: `main` | `feature/xxx`
- Last commit: `abc1234`
- Input used: `prompts/xxx.txt` or N/A
### Actions performed
1. [Action 1]
2. [Action 2]
3. ...
### Key decisions and rationale
- **Decision**: [what]
  **Why**: [technical or design rationale]
### Observations and results
- [Concrete metric or output, e.g. "tank_level oscillates between 200-800"]
- [Relevant screenshot/log if available]
### Validation commands + outcome
```bash
# Command executed
python3 validate_fix.py
# Outcome: ✅ PASS / ❌ FAIL / ⚠️ not verified
```
### Artifacts produced
- `outputs/scenario_run/configuration.json`
- `outputs/scenario_run/logic/plc1.py`
- [other paths]
### Open issues
- [ ] [Unresolved problem]
- [ ] [Next step needed]
### Next micro-step
[Concrete action for the next session]
````
---
## Entries
---
## 2026-01-30
### Session objective
Implement ControlPlan v0.1: a declarative artifact for specifying HIL logic without free-form Python code. It lets the LLM generate structured specs that are then compiled deterministically.
### Initial state
- Branch: `main`
- Last commit: `c9bc37b`
- Input used: design specification in the user prompt
### Actions performed
1. Researched the existing architecture (IR, compile_ir.py, process_spec)
2. Designed the ControlPlan v0.1 schema with Pydantic
3. Implemented safe_eval.py for safe expressions (AST whitelist)
4. Implemented compile_control_plan.py (the compiler)
5. Created test fixtures (bottle, electrical, IED)
6. Integrated with build_scenario.py (Step 2c)
7. Wrote the test suite (24 tests)
### Key decisions and rationale
- **Decision**: Use AST parsing to validate expressions
  **Why**: Security: it blocks imports, attribute access, and subscripting, so the LLM cannot generate malicious code.
- **Decision**: Automatic threading for >1 task
  **Why**: Simplifies the schema: the user never has to think about threading; the compiler adds it when parallel tasks are needed.
- **Decision**: Separate init and params in the schema
  **Why**: init holds state variables (mutable), params holds constants (documentation + validation).
- **Decision**: Support profiles (Gaussian, ramp, step) as a separate task type
  **Why**: It is a common pattern: generating test signals, disturbances, and time-varying setpoints.
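
The AST-whitelist decision above can be sketched as follows. This is a minimal illustration: names like `_ALLOWED_NODES` and `_SAFE_FUNCS` are assumed here, and the real `tools/safe_eval.py` may differ.

```python
import ast

# Node types allowed in expressions: literals, names, arithmetic,
# comparisons, boolean logic, ternaries, and plain function calls.
_ALLOWED_NODES = (
    ast.Expression, ast.Constant, ast.Name, ast.Load,
    ast.BinOp, ast.UnaryOp, ast.BoolOp, ast.Compare, ast.IfExp,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd,
    ast.And, ast.Or, ast.Not,
    ast.Lt, ast.LtE, ast.Gt, ast.GtE, ast.Eq, ast.NotEq,
    ast.Call,
)

_SAFE_FUNCS = {"clamp": lambda x, lo, hi: max(lo, min(hi, x)),
               "min": min, "max": max, "abs": abs}

def safe_eval(expr: str, variables: dict) -> float:
    """Evaluate expr after rejecting any AST node outside the whitelist."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED_NODES):
            raise ValueError(f"Disallowed syntax: {type(node).__name__}")
        # Only direct calls to whitelisted function names are allowed.
        if isinstance(node, ast.Call):
            if not isinstance(node.func, ast.Name) or node.func.id not in _SAFE_FUNCS:
                raise ValueError("Only whitelisted function calls are allowed")
    # No builtins; only whitelisted functions and caller-supplied variables.
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}},
                {**_SAFE_FUNCS, **variables})
```

Attribute access (`x.y`), subscripting, and imports are rejected simply because their node types never appear in the whitelist.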
### Observations and results
- Flexible schema: supports loops with conditionals and playback with profiles
- The compiler generates valid Python code (verified with compile())
- All 24 tests pass
- Auto-detection of outputs/control_plan.json in build_scenario.py
### Validation commands + outcome
```bash
python3 -m pytest tests/test_compile_control_plan.py -v
# ✅ 24 passed
python3 -m tools.compile_control_plan \
  --control-plan tests/fixtures/control_plan_bottle_like.json \
  --out /tmp/test_cp_bottle --overwrite
# ✅ Compiled 1 HIL logic file(s)
```
### Artifacts produced
- `models/control_plan.py` - Pydantic schema
- `tools/safe_eval.py` - Safe expression parser
- `tools/compile_control_plan.py` - Compiler
- `tests/fixtures/control_plan_bottle_like.json`
- `tests/fixtures/control_plan_electrical_like.json`
- `tests/fixtures/control_plan_ied_like.json`
- `tests/test_compile_control_plan.py`
### Open issues
- [ ] PLC control_plan not implemented yet (HIL only for now)
- [ ] End-to-end test with ICS-SimLab not executed
- [ ] LLM generation of control_plan.json still missing
### Next micro-step
End-to-end test: copy the bottle_like fixture to outputs/control_plan.json and run build_scenario + ICS-SimLab.
---


@@ -1,157 +0,0 @@
================================================================================
PLC STARTUP RACE CONDITION - FIX SUMMARY
================================================================================
ROOT CAUSE:
-----------
PLC2 crashed at startup when its Modbus TCP write callback to PLC1
(192.168.100.12:502) raised ConnectionRefusedError before PLC1 was ready.
Location: outputs/scenario_run/logic/plc2.py line 39
if key in cbs:
cbs[key]() # <-- CRASHED HERE with Connection refused
SOLUTION:
---------
Added safe retry wrapper in the PLC logic generator (tools/compile_ir.py)
that retries callback 30 times with 0.2s delay (6s total), never raises.
================================================================================
EXACT FILE CHANGES
================================================================================
FILE: tools/compile_ir.py
FUNCTION: render_plc_rules()
LINES: 17-46
CHANGE 1: Added import time (line 24)
------------------------------------------
+ lines.append("import time\n")
CHANGE 2: Added _safe_callback function (after line 28)
----------------------------------------------------------
+ lines.append("def _safe_callback(cb: Callable[[], None], retries: int = 30, delay: float = 0.2) -> None:\n")
+ lines.append(" \"\"\"Invoke callback with retry logic to handle startup race conditions.\"\"\"\n")
+ lines.append(" for attempt in range(retries):\n")
+ lines.append(" try:\n")
+ lines.append(" cb()\n")
+ lines.append(" return\n")
+ lines.append(" except Exception as e:\n")
+ lines.append(" if attempt == retries - 1:\n")
+ lines.append(" print(f\"WARNING: Callback failed after {retries} attempts: {e}\")\n")
+ lines.append(" return\n")
+ lines.append(" time.sleep(delay)\n\n\n")
CHANGE 3: Modified _write to use _safe_callback (line 46)
-----------------------------------------------------------
- lines.append(" cbs[key]()\n\n\n")
+ lines.append(" _safe_callback(cbs[key])\n\n\n")
================================================================================
GENERATED CODE COMPARISON
================================================================================
BEFORE (plc2.py):
-----------------
from typing import Any, Callable, Dict
def _write(out_regs, cbs, key, value):
if key not in out_regs:
return
cur = out_regs[key].get('value', None)
if cur == value:
return
out_regs[key]['value'] = value
if key in cbs:
cbs[key]() # <-- CRASHES
AFTER (plc2.py):
----------------
import time # <-- ADDED
from typing import Any, Callable, Dict
def _safe_callback(cb, retries=30, delay=0.2): # <-- ADDED
"""Invoke callback with retry logic to handle startup race conditions."""
for attempt in range(retries):
try:
cb()
return
except Exception as e:
if attempt == retries - 1:
print(f"WARNING: Callback failed after {retries} attempts: {e}")
return
time.sleep(delay)
def _write(out_regs, cbs, key, value):
if key not in out_regs:
return
cur = out_regs[key].get('value', None)
if cur == value:
return
out_regs[key]['value'] = value
if key in cbs:
_safe_callback(cbs[key]) # <-- NOW SAFE
================================================================================
VALIDATION COMMANDS
================================================================================
1. Rebuild scenario:
.venv/bin/python3 build_scenario.py --out outputs/scenario_run --overwrite
2. Verify fix is present:
.venv/bin/python3 validate_fix.py
3. Check generated code:
grep -A10 "_safe_callback" outputs/scenario_run/logic/plc2.py
4. Start ICS-SimLab:
cd ~/projects/ICS-SimLab-main/curtin-ics-simlab
sudo ./start.sh ~/projects/ics-simlab-config-gen_claude/outputs/scenario_run
5. Monitor PLC2 logs (NO crashes expected):
sudo docker logs $(sudo docker ps | grep plc2 | awk '{print $NF}') -f
6. Stop:
cd ~/projects/ICS-SimLab-main/curtin-ics-simlab && sudo ./stop.sh
================================================================================
EXPECTED BEHAVIOR
================================================================================
BEFORE FIX:
PLC2 container crashes immediately with:
Exception in thread Thread-1:
ConnectionRefusedError: [Errno 111] Connection refused
AFTER FIX (Success):
PLC2 container starts
Silent retries for ~6 seconds while PLC1 starts
Eventually callbacks succeed
No crashes, no exceptions
AFTER FIX (PLC1 never starts):
PLC2 container starts
After 6 seconds: WARNING: Callback failed after 30 attempts
Container keeps running (no crash)
Will retry on next write attempt
================================================================================
FILES CREATED
================================================================================
Modified:
tools/compile_ir.py (CRITICAL FIX)
New:
build_scenario.py (deterministic builder using correct venv)
validate_fix.py (validation script)
test_simlab.sh (interactive launcher)
diagnose_runtime.sh (diagnostic script)
RUNTIME_FIX.md (complete documentation)
CHANGES.md (detailed changes with diffs)
DELIVERABLES.md (comprehensive summary)
QUICKSTART.txt (this file)
FIX_SUMMARY.txt (exact changes)
================================================================================


@@ -0,0 +1,91 @@
{
"version": "v0.1",
"hils": [
{
"name": "water_hil",
"warmup_s": 3.0,
"init": {
"water_tank_level": 500,
"tank_input_valve": 0,
"tank_output_valve": 0
},
"params": {
"tank_max": 1000,
"tank_min": 0,
"inflow_rate": 18,
"outflow_rate": 12
},
"tasks": [
{
"type": "loop",
"name": "tank_dynamics",
"dt_s": 0.5,
"actions": [
{
"if": "tank_input_valve > 0.5",
"then": [
{"add": ["water_tank_level", "inflow_rate"]}
]
},
{
"if": "tank_output_valve > 0.5",
"then": [
{"add": ["water_tank_level", "-outflow_rate"]}
]
},
{"set": ["water_tank_level", "clamp(water_tank_level, tank_min, tank_max)"]}
]
}
]
},
{
"name": "filler_hil",
"warmup_s": 3.0,
"init": {
"bottle_fill_level": 0,
"bottle_at_filler": 1,
"bottle_distance": 0,
"conveyor_cmd": 0,
"fill_valve": 0
},
"params": {
"bottle_max": 200,
"bottle_min": 0,
"distance_reset": 130,
"filler_range": 30,
"fill_rate": 8,
"conveyor_speed": 5
},
"tasks": [
{
"type": "loop",
"name": "filler_dynamics",
"dt_s": 0.5,
"actions": [
{
"if": "fill_valve > 0.5 and bottle_at_filler > 0.5",
"then": [
{"add": ["bottle_fill_level", "fill_rate"]}
]
},
{"set": ["bottle_fill_level", "clamp(bottle_fill_level, bottle_min, bottle_max)"]},
{
"if": "conveyor_cmd > 0.5",
"then": [
{"add": ["bottle_distance", "-conveyor_speed"]},
{
"if": "bottle_distance < 0",
"then": [
{"set": ["bottle_distance", "distance_reset"]},
{"set": ["bottle_fill_level", "0"]}
]
}
]
},
{"set": ["bottle_at_filler", "1 if bottle_distance <= filler_range else 0"]}
]
}
]
}
]
}
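
For reference, a loop task like `tank_dynamics` above compiles into a plain polling loop over `physical_values`. Roughly this shape — an illustrative sketch of the expected output, not the compiler's exact code:

```python
import time

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def logic(physical_values):
    # init phase: seed state from the plan's "init" section
    physical_values["water_tank_level"] = 500
    physical_values["tank_input_valve"] = 0
    physical_values["tank_output_valve"] = 0
    # constants from the plan's "params" section
    tank_max, tank_min, inflow_rate, outflow_rate = 1000, 0, 18, 12
    time.sleep(3.0)  # warmup_s
    while True:  # task "tank_dynamics", dt_s = 0.5
        level = physical_values["water_tank_level"]
        if physical_values["tank_input_valve"] > 0.5:
            level += inflow_rate
        if physical_values["tank_output_valve"] > 0.5:
            level += -outflow_rate
        physical_values["water_tank_level"] = clamp(level, tank_min, tank_max)
        time.sleep(0.5)
```

With a single task no extra threads are needed; per the design notes, the compiler only adds threading when a HIL declares more than one task.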

main.py

@@ -30,6 +30,7 @@ from services.patches import (
patch_fill_required_keys,
patch_lowercase_names,
patch_sanitize_network_names,
strip_nulls,
)
from services.prompting import build_prompt
from services.validation import validate_basic
@@ -42,10 +43,17 @@ def run_build_config(
raw_path: Path,
out_dir: Path,
skip_semantic: bool = False,
repair: bool = True,
) -> tuple[bool, list[str]]:
"""
Run build_config on a raw configuration file.
Args:
raw_path: Path to raw configuration JSON
out_dir: Output directory for configuration.json
skip_semantic: Skip semantic validation
repair: Enable deterministic repair (orphans, boolean types, registers)
Returns:
(success, errors): success=True if build_config passed,
errors=list of semantic error messages if failed
@@ -61,6 +69,8 @@ def run_build_config(
]
if skip_semantic:
cmd.append("--skip-semantic")
if repair:
cmd.append("--repair")
result = subprocess.run(cmd, capture_output=True, text=True)
@@ -164,7 +174,8 @@ def run_pipeline_with_semantic_validation(
Path("outputs/last_raw_response.txt").write_text(raw, encoding="utf-8")
continue
# Phase 2: Patches
# Phase 2: Canonicalization + Patches
obj = strip_nulls(obj) # Remove all null fields from LLM output
obj, patch_errors_0 = patch_fill_required_keys(obj)
obj, patch_errors_1 = patch_lowercase_names(obj)
obj, patch_errors_2 = patch_sanitize_network_names(obj)
@@ -243,7 +254,7 @@ def main() -> None:
parser.add_argument("--schema-file", default="models/schemas/ics_simlab_config_schema_v1.json")
parser.add_argument("--model", default="gpt-5-mini")
parser.add_argument("--out", default="outputs/configuration.json")
parser.add_argument("--retries", type=int, default=3)
parser.add_argument("--retries", type=int, default=5)
parser.add_argument("--skip-enrich", action="store_true",
help="Skip build_config enrichment (output raw LLM config)")
parser.add_argument("--skip-semantic", action="store_true",
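
The `strip_nulls` canonicalization introduced in the diff above can be pictured as a recursive walk that drops `None` fields from the LLM output. A sketch under the assumption of plain dict/list nesting; the actual implementation in `services/patches.py` may differ:

```python
def strip_nulls(obj):
    """Recursively remove None values from dicts and lists in an LLM-produced config."""
    if isinstance(obj, dict):
        return {k: strip_nulls(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [strip_nulls(v) for v in obj if v is not None]
    return obj
```

Running it before the patch passes means the patchers never have to distinguish "key absent" from "key explicitly null".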

models/control_plan.py Normal file
@@ -0,0 +1,227 @@
"""
ControlPlan v0.1: Declarative HIL control specification.
This model defines a JSON-serializable spec that describes HIL behavior
using high-level tasks (loops, playback profiles) and actions (set, add, if).
The spec is compiled deterministically into Python HIL logic.
Design goals:
- LLM-friendly: structured output, no free-form Python
- Safe: expressions are parsed with AST, only safe operations allowed
- Expressive: loops, conditionals, profiles (Gaussian, ramp, etc.)
"""
from __future__ import annotations
from typing import Dict, List, Literal, Optional, Tuple, Union
from pydantic import BaseModel, ConfigDict, Field, field_validator
# =============================================================================
# Action types
# =============================================================================
class SetAction(BaseModel):
"""Set a variable to an expression result: var = expr"""
model_config = ConfigDict(extra="forbid")
set: Tuple[str, str] = Field(
description="[variable_name, expression_string]"
)
class AddAction(BaseModel):
"""Add expression result to a variable: var += expr"""
model_config = ConfigDict(extra="forbid")
add: Tuple[str, str] = Field(
description="[variable_name, expression_string]"
)
class IfAction(BaseModel):
"""
Conditional action: if condition then actions [else actions].
The condition is an expression string that evaluates to a boolean.
"""
model_config = ConfigDict(extra="forbid")
# Using "if_" to avoid Python keyword conflict, aliased to "if" in JSON
if_: str = Field(alias="if", description="Condition expression string")
then: List["Action"] = Field(description="Actions to execute if condition is true")
else_: Optional[List["Action"]] = Field(
default=None,
alias="else",
description="Actions to execute if condition is false"
)
# Union of all action types
Action = Union[SetAction, AddAction, IfAction]
# Enable forward reference resolution
IfAction.model_rebuild()
# =============================================================================
# Profile types (for playback tasks)
# =============================================================================
class GaussianProfile(BaseModel):
"""Gaussian noise profile for playback tasks."""
model_config = ConfigDict(extra="forbid")
kind: Literal["gaussian"] = "gaussian"
height: float = Field(description="Base/center value")
mean: float = Field(default=0.0, description="Mean of Gaussian noise")
std: float = Field(gt=0, description="Standard deviation of Gaussian noise")
entries: int = Field(gt=0, description="Number of entries in one cycle")
class RampProfile(BaseModel):
"""Linear ramp profile for playback tasks."""
model_config = ConfigDict(extra="forbid")
kind: Literal["ramp"] = "ramp"
start: float = Field(description="Start value")
end: float = Field(description="End value")
entries: int = Field(gt=0, description="Number of entries in one cycle")
class StepProfile(BaseModel):
"""Step function profile for playback tasks."""
model_config = ConfigDict(extra="forbid")
kind: Literal["step"] = "step"
values: List[float] = Field(min_length=1, description="Values to cycle through")
Profile = Union[GaussianProfile, RampProfile, StepProfile]
# =============================================================================
# Task types
# =============================================================================
class LoopTask(BaseModel):
"""
A loop task that executes actions repeatedly at a fixed interval.
This is the main mechanism for implementing control logic:
- Read inputs (from physical_values)
- Compute outputs (via expressions)
- Write outputs (via set/add actions)
"""
model_config = ConfigDict(extra="forbid")
type: Literal["loop"] = "loop"
name: str = Field(description="Task name for debugging/logging")
dt_s: float = Field(gt=0, description="Loop interval in seconds")
actions: List[Action] = Field(description="Actions to execute each iteration")
class PlaybackTask(BaseModel):
"""
A playback task that outputs a profile (Gaussian, ramp, step) over time.
Use for generating test signals, disturbances, or time-varying setpoints.
"""
model_config = ConfigDict(extra="forbid")
type: Literal["playback"] = "playback"
name: str = Field(description="Task name for debugging/logging")
dt_s: float = Field(gt=0, description="Interval between profile entries")
target: str = Field(description="Variable name to write profile values to")
profile: Profile = Field(description="Profile definition")
repeat: bool = Field(default=True, description="Whether to repeat the profile")
Task = Union[LoopTask, PlaybackTask]
# =============================================================================
# Top-level structures
# =============================================================================
class ControlPlanHIL(BaseModel):
"""
HIL control plan: defines initialization and tasks for one HIL.
Structure:
- name: must match hils[].name in configuration.json
- warmup_s: optional delay before starting tasks
- init: initial values for state variables
- params: optional constants for use in expressions
- tasks: list of loop/playback tasks (run in separate threads if >1)
"""
model_config = ConfigDict(extra="forbid")
name: str = Field(description="HIL name (must match config)")
warmup_s: Optional[float] = Field(
default=None,
ge=0,
description="Warmup delay in seconds before starting tasks"
)
init: Dict[str, Union[float, int, bool]] = Field(
description="Initial values for physical_values keys"
)
params: Optional[Dict[str, Union[float, int, bool]]] = Field(
default=None,
description="Constants available in expressions"
)
tasks: List[Task] = Field(
min_length=1,
description="Tasks to run (loop or playback)"
)
@field_validator("init")
@classmethod
def validate_init_keys(cls, v: Dict[str, Union[float, int, bool]]) -> Dict[str, Union[float, int, bool]]:
"""Ensure init keys are valid Python identifiers."""
for key in v.keys():
if not key.isidentifier():
raise ValueError(f"Invalid init key (not a valid identifier): {key}")
return v
@field_validator("params")
@classmethod
def validate_params_keys(cls, v: Optional[Dict[str, Union[float, int, bool]]]) -> Optional[Dict[str, Union[float, int, bool]]]:
"""Ensure params keys are valid Python identifiers."""
if v is None:
return v
for key in v.keys():
if not key.isidentifier():
raise ValueError(f"Invalid params key (not a valid identifier): {key}")
return v
class ControlPlan(BaseModel):
"""
Top-level ControlPlan specification.
Usage:
control_plan.json -> compile_control_plan.py -> hil_*.py
The generated HIL logic follows the ICS-SimLab contract:
def logic(physical_values):
# init phase
# warmup sleep
# while True: run tasks
"""
model_config = ConfigDict(extra="forbid")
version: Literal["v0.1"] = Field(
default="v0.1",
description="Schema version"
)
hils: List[ControlPlanHIL] = Field(
min_length=1,
description="HIL control specifications"
)
def get_control_plan_json_schema() -> dict:
"""Return JSON Schema for ControlPlan, suitable for LLM structured output."""
return ControlPlan.model_json_schema()
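
Because the `if`/`else` fields are aliased to avoid Python keywords, plans round-trip through those aliases. A trimmed-down, self-contained illustration (`MiniIf` is hypothetical, standing in for the `IfAction` model above):

```python
from typing import List, Optional
from pydantic import BaseModel, ConfigDict, Field

class MiniIf(BaseModel):
    """Stand-in for IfAction: JSON keys "if"/"else" map to if_/else_."""
    model_config = ConfigDict(extra="forbid")
    if_: str = Field(alias="if")
    then: List[dict]
    else_: Optional[List[dict]] = Field(default=None, alias="else")

# Validation consumes the JSON aliases...
action = MiniIf.model_validate({
    "if": "tank_input_valve > 0.5",
    "then": [{"add": ["water_tank_level", "inflow_rate"]}],
})
# ...and dumping with by_alias=True restores them.
dumped = action.model_dump(by_alias=True, exclude_none=True)
```

Dumping with `by_alias=True` matters when writing a plan back to disk, so the file stays loadable by the same schema.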

prompts/e2e_bottle.txt Normal file
@@ -0,0 +1,46 @@
I want an OT scenario simulating a bottling line with two separate physical sections and two dedicated HILs.
REQUIRED ARCHITECTURE:
- 2 PLCs (plc1 for the tank, plc2 for the filler)
- 2 HILs (water_hil for tank physics, filler_hil for bottle/conveyor physics)
- 1 HMI for supervision
SECTION 1 - WATER TANK (PLC1 + water_hil):
- water_hil simulates the tank physics (level 0-1000)
- HIL physical_values: water_tank_level (output), tank_input_valve (input), tank_output_valve (input)
- PLC1 reads water_tank_level from the sensor and controls the two valves
- PLC1 logic: keep the level between 200 (low) and 800 (high) with hysteresis
- If level <= 200: open tank_input_valve, close tank_output_valve
- If level >= 800: close tank_input_valve, open tank_output_valve
- Sensor: water_tank_level_sensor (reads from water_hil, exposes to PLC1)
- Actuators: tank_input_valve_actuator, tank_output_valve_actuator
SECTION 2 - FILLER (PLC2 + filler_hil):
- filler_hil simulates the bottle and conveyor physics
- HIL physical_values: bottle_fill_level (output), bottle_at_filler (output), bottle_distance (internal), conveyor_cmd (input), fill_valve (input)
- PLC2 reads bottle_fill_level and bottle_at_filler, controls conveyor and fill_valve
- PLC2 logic:
- If bottle_at_filler=0: turn on the conveyor (bring the bottle under the filler)
- If bottle_at_filler=1 and bottle_fill_level < 180: open fill_valve
- If bottle_fill_level >= 180: close fill_valve, turn on the conveyor (carry the full bottle away)
- Sensors: bottle_fill_level_sensor, bottle_at_filler_sensor
- Actuators: conveyor_actuator, fill_valve_actuator
NETWORK:
- All components on the same subnet 192.168.100.0/24
- PLC1: 192.168.100.21
- PLC2: 192.168.100.22
- water_hil: 192.168.100.31
- filler_hil: 192.168.100.32
- HMI: 192.168.100.10
- Sensors and actuators on subsequent IPs (192.168.100.41+)
- Modbus TCP communication on port 502
HMI:
- Monitors: water_tank_level, bottle_fill_level, bottle_at_filler
- Control: start_stop_line (boolean to enable/disable the line)
IMPORTANT NOTES:
- HIL names must be exactly "water_hil" and "filler_hil"
- Logic files must be "water_hil.py" and "filler_hil.py"
- The physical_values of the two HILs must match those in the control_plan
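
The PLC1 hysteresis rule requested above can be sketched as a pure function (illustrative only; the generated plc1.py must follow ICS-SimLab's logic contract and will look different):

```python
def plc1_logic(water_tank_level: float, state: dict) -> dict:
    """Hysteresis band 200-800: fill when low, drain when high, hold otherwise."""
    if water_tank_level <= 200:
        state["tank_input_valve"] = 1
        state["tank_output_valve"] = 0
    elif water_tank_level >= 800:
        state["tank_input_valve"] = 0
        state["tank_output_valve"] = 1
    # Between the thresholds: keep the previous valve commands (hysteresis).
    return state

s = {"tank_input_valve": 0, "tank_output_valve": 0}
plc1_logic(150, s)  # low  -> start filling
plc1_logic(500, s)  # in band -> still filling (no chatter)
plc1_logic(850, s)  # high -> start draining
```

The "hold otherwise" branch is what makes it hysteresis rather than a simple threshold: the valves do not toggle while the level crosses the middle of the band.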

scripts/e2e.sh Executable file
@@ -0,0 +1,287 @@
#!/bin/bash
#
# E2E Test for ICS-SimLab scenario
#
# Handles the operator_hmi startup race condition by:
# 1. Starting simlab
# 2. Waiting for PLCs to be ready (listening on port 502)
# 3. Restarting operator_hmi once PLCs are reachable
# 4. Verifying logs for successful reads
# 5. Saving logs and stopping simlab
#
# Usage:
# ./scripts/e2e.sh [--no-stop]
#
# --no-stop: Don't stop simlab at the end (for manual inspection)
set -e
# Configuration
REPO_DIR="$(cd "$(dirname "$0")/.." && pwd)"
SCENARIO_DIR="$REPO_DIR/outputs/scenario_run"
SIMLAB_DIR="/home/stefano/projects/ICS-SimLab-main/curtin-ics-simlab"
RUN_DIR="$REPO_DIR/outputs/run_$(date +%Y%m%d_%H%M%S)"
# Timeouts (seconds)
STARTUP_TIMEOUT=120
PLC_READY_TIMEOUT=60
HMI_VERIFY_DURATION=15
# Parse args
NO_STOP=false
for arg in "$@"; do
case $arg in
--no-stop) NO_STOP=true ;;
esac
done
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
cleanup() {
if [ "$NO_STOP" = false ]; then
log_info "Stopping simlab..."
cd "$SIMLAB_DIR" && docker compose down 2>/dev/null || true
else
log_info "Leaving simlab running (--no-stop)"
fi
}
trap cleanup EXIT
# Create run directory
mkdir -p "$RUN_DIR"
log_info "Run directory: $RUN_DIR"
# ==============================================================================
# Step 0: Verify prerequisites
# ==============================================================================
log_info "Step 0: Verifying prerequisites"
if [ ! -f "$SCENARIO_DIR/configuration.json" ]; then
log_error "Scenario not found: $SCENARIO_DIR/configuration.json"
log_info "Run: python3 build_scenario.py --out outputs/scenario_run --overwrite"
exit 1
fi
if [ ! -f "$SIMLAB_DIR/start.sh" ]; then
log_error "ICS-SimLab not found: $SIMLAB_DIR"
exit 1
fi
log_info "Prerequisites OK"
# ==============================================================================
# Step 1: Stop any existing containers
# ==============================================================================
log_info "Step 1: Stopping any existing containers"
cd "$SIMLAB_DIR"
docker compose down 2>/dev/null || true
sleep 2
# ==============================================================================
# Step 2: Start simlab in background
# ==============================================================================
log_info "Step 2: Starting ICS-SimLab (this may take a while)..."
# Remove old simulation directory
rm -rf "$SIMLAB_DIR/simulation" 2>/dev/null || true
# Clean up docker
docker system prune -f >/dev/null 2>&1
# Activate venv and build
source "$SIMLAB_DIR/.venv/bin/activate"
python3 "$SIMLAB_DIR/main.py" "$SCENARIO_DIR" > "$RUN_DIR/setup.log" 2>&1
# Build containers
docker compose build >> "$RUN_DIR/setup.log" 2>&1
# Start in background
docker compose up -d >> "$RUN_DIR/setup.log" 2>&1
log_info "Simlab started (containers launching in background)"
# ==============================================================================
# Step 3: Wait for PLCs to be ready
# ==============================================================================
log_info "Step 3: Waiting for PLCs to be ready (timeout: ${PLC_READY_TIMEOUT}s)..."
wait_for_plc() {
local plc_name=$1
local plc_ip=$2
local timeout=$3
local elapsed=0
while [ $elapsed -lt $timeout ]; do
# Check if container is running
if ! docker ps --format '{{.Names}}' | grep -q "^${plc_name}$"; then
log_warn "$plc_name container not running yet..."
sleep 2
elapsed=$((elapsed + 2))
continue
fi
# Check if port 502 is reachable from within the container
if docker exec "$plc_name" timeout 2 bash -c "echo > /dev/tcp/$plc_ip/502" 2>/dev/null; then
log_info "$plc_name ready at $plc_ip:502"
return 0
fi
sleep 2
elapsed=$((elapsed + 2))
done
log_error "$plc_name not ready after ${timeout}s"
return 1
}
# Extract PLC IPs from configuration
PLC1_IP=$(jq -r '.plcs[0].network.ip' "$SCENARIO_DIR/configuration.json" 2>/dev/null || echo "192.168.100.21")
PLC2_IP=$(jq -r '.plcs[1].network.ip' "$SCENARIO_DIR/configuration.json" 2>/dev/null || echo "192.168.100.22")
# Wait for each PLC
if ! wait_for_plc "plc1" "$PLC1_IP" "$PLC_READY_TIMEOUT"; then
log_error "PLC1 failed to start. Check logs: $RUN_DIR/plc1.log"
docker logs plc1 > "$RUN_DIR/plc1.log" 2>&1 || true
exit 1
fi
if ! wait_for_plc "plc2" "$PLC2_IP" "$PLC_READY_TIMEOUT"; then
log_error "PLC2 failed to start. Check logs: $RUN_DIR/plc2.log"
docker logs plc2 > "$RUN_DIR/plc2.log" 2>&1 || true
exit 1
fi
# ==============================================================================
# Step 4: Restart operator_hmi
# ==============================================================================
log_info "Step 4: Restarting operator_hmi to recover from startup race condition"
docker compose restart operator_hmi
sleep 3
log_info "operator_hmi restarted"
# ==============================================================================
# Step 4.5: Run Modbus probe
# ==============================================================================
log_info "Step 4.5: Running Modbus probe..."
# Wait a moment for connections to stabilize
sleep 3
# Run probe from within the operator_hmi container (has pymodbus and network access)
PROBE_SCRIPT="$REPO_DIR/tools/probe_modbus.py"
if [ -f "$PROBE_SCRIPT" ]; then
# Copy probe script and config to container
docker cp "$PROBE_SCRIPT" operator_hmi:/tmp/probe_modbus.py
docker cp "$SCENARIO_DIR/configuration.json" operator_hmi:/tmp/configuration.json
# Run probe inside container
docker exec operator_hmi python3 /tmp/probe_modbus.py \
--config /tmp/configuration.json \
> "$RUN_DIR/probe.txt" 2>&1 || true
log_info "Probe results saved to $RUN_DIR/probe.txt"
# Show summary
if grep -q "Modbus OK: 0/" "$RUN_DIR/probe.txt" 2>/dev/null; then
log_warn "Probe: ALL Modbus reads FAILED"
elif grep -q "Modbus OK:" "$RUN_DIR/probe.txt" 2>/dev/null; then
PROBE_SUMMARY=$(grep "Modbus OK:" "$RUN_DIR/probe.txt" | head -1)
log_info "Probe: $PROBE_SUMMARY"
fi
else
log_warn "Probe script not found: $PROBE_SCRIPT"
fi
# ==============================================================================
# Step 5: Verify operator_hmi logs
# ==============================================================================
log_info "Step 5: Monitoring operator_hmi for ${HMI_VERIFY_DURATION}s..."
# Capture logs for verification duration
sleep "$HMI_VERIFY_DURATION"
# Save logs from all components
log_info "Saving logs..."
docker logs plc1 > "$RUN_DIR/plc1.log" 2>&1 || true
docker logs plc2 > "$RUN_DIR/plc2.log" 2>&1 || true
docker logs operator_hmi > "$RUN_DIR/operator_hmi.log" 2>&1 || true
docker logs physical_io_hil > "$RUN_DIR/physical_io_hil.log" 2>&1 || true
docker logs ui > "$RUN_DIR/ui.log" 2>&1 || true
docker logs water_tank_level_sensor > "$RUN_DIR/water_tank_level_sensor.log" 2>&1 || true
docker logs bottle_fill_level_sensor > "$RUN_DIR/bottle_fill_level_sensor.log" 2>&1 || true
docker logs bottle_at_filler_sensor > "$RUN_DIR/bottle_at_filler_sensor.log" 2>&1 || true
# Check for success indicators
HMI_ERRORS=$(grep -c "couldn't read values" "$RUN_DIR/operator_hmi.log" 2>/dev/null | head -1 || echo "0")
PLC1_CRASHES=$(grep -Ec "Exception|Traceback" "$RUN_DIR/plc1.log" 2>/dev/null | head -1 || echo "0")
PLC2_CRASHES=$(grep -Ec "Exception|Traceback" "$RUN_DIR/plc2.log" 2>/dev/null | head -1 || echo "0")
# Extract probe summary if available
PROBE_TCP=$(grep "TCP reachable:" "$RUN_DIR/probe.txt" 2>/dev/null || echo "N/A")
PROBE_MODBUS=$(grep "Modbus OK:" "$RUN_DIR/probe.txt" 2>/dev/null || echo "N/A")
# ==============================================================================
# Step 6: Generate summary
# ==============================================================================
log_info "Step 6: Generating summary"
cat > "$RUN_DIR/summary.txt" << EOF
E2E Test Run: $(date)
Scenario: $SCENARIO_DIR
Results:
- PLC1 exceptions: $PLC1_CRASHES
- PLC2 exceptions: $PLC2_CRASHES
- HMI read errors: $HMI_ERRORS
Modbus Probe:
- $PROBE_TCP
- $PROBE_MODBUS
Container Status:
$(docker ps --format "{{.Names}}: {{.Status}}" | grep -E "plc|hmi|hil|sensor|actuator" | sort)
Notes:
- Some initial HMI read errors are expected due to a startup race condition
- Errors after HMI restart indicate deeper connectivity/configuration issues
- See probe.txt for detailed Modbus diagnostics
- Check individual logs in this directory for details
EOF
cat "$RUN_DIR/summary.txt"
# ==============================================================================
# Determine exit status
# ==============================================================================
EXIT_CODE=0
if [ "$PLC1_CRASHES" -gt 0 ]; then
log_error "PLC1 has exceptions - check $RUN_DIR/plc1.log"
EXIT_CODE=1
fi
if [ "$PLC2_CRASHES" -gt 0 ]; then
log_error "PLC2 has exceptions - check $RUN_DIR/plc2.log"
EXIT_CODE=1
fi
if [ "$EXIT_CODE" -eq 0 ]; then
log_info "E2E test completed successfully"
else
log_error "E2E test completed with errors"
fi
log_info "Logs saved to: $RUN_DIR"
exit $EXIT_CODE


@ -0,0 +1,209 @@
#!/bin/bash
#
# E2E Test for ControlPlan v0.1 Bottle Line Scenario
#
# This script runs the full deterministic flow:
# 1. (Optional) Generate configuration.json via LLM
# 2. Build scenario with control plan
# 3. Validate generated logic
# 4. Print command to start ICS-SimLab (does not execute)
#
# Usage:
# ./scripts/e2e_bottle_control_plan.sh [OPTIONS]
#
# Options:
# --skip-llm Skip LLM generation, use existing outputs/configuration.json
# --use-config PATH Use a specific configuration.json (implies --skip-llm)
# --help Show this help message
#
# Prerequisites:
# - Python virtual environment at .venv/
# - OPENAI_API_KEY set (unless --skip-llm)
#
set -e
# Configuration
REPO_DIR="$(cd "$(dirname "$0")/.." && pwd)"
VENV_DIR="$REPO_DIR/.venv"
CONTROL_PLAN="$REPO_DIR/examples/control_plans/bottle_line_v0.1.json"
PROMPT_FILE="$REPO_DIR/prompts/e2e_bottle.txt"
OUT_DIR="$REPO_DIR/outputs/scenario_bottle_cp"
CONFIG_OUT="$REPO_DIR/outputs/configuration.json"
# Parse arguments
SKIP_LLM=false
USE_CONFIG=""
while [[ $# -gt 0 ]]; do
case $1 in
--skip-llm)
SKIP_LLM=true
shift
;;
--use-config)
USE_CONFIG="$2"
SKIP_LLM=true
shift 2
;;
--help)
head -30 "$0" | grep "^#" | sed 's/^# //'
exit 0
;;
*)
echo "Unknown option: $1"
exit 1
;;
esac
done
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_step() { echo -e "\n${BLUE}===== $1 =====${NC}"; }
# ==============================================================================
# Step 0: Verify prerequisites
# ==============================================================================
log_step "Step 0: Verifying prerequisites"
if [ ! -d "$VENV_DIR" ]; then
log_error "Virtual environment not found: $VENV_DIR"
log_info "Create it with: python3 -m venv .venv && .venv/bin/pip install -r requirements.txt"
exit 1
fi
if [ ! -f "$CONTROL_PLAN" ]; then
log_error "Control plan not found: $CONTROL_PLAN"
exit 1
fi
if [ ! -f "$PROMPT_FILE" ]; then
log_error "Prompt file not found: $PROMPT_FILE"
exit 1
fi
# Activate virtual environment
source "$VENV_DIR/bin/activate"
log_info "Activated virtual environment: $VENV_DIR"
# ==============================================================================
# Step 1: Generate configuration.json (optional)
# ==============================================================================
log_step "Step 1: Generate configuration.json"
if [ -n "$USE_CONFIG" ]; then
if [ ! -f "$USE_CONFIG" ]; then
log_error "Config file not found: $USE_CONFIG"
exit 1
fi
log_info "Using provided config: $USE_CONFIG"
mkdir -p "$(dirname "$CONFIG_OUT")"
cp "$USE_CONFIG" "$CONFIG_OUT"
elif [ "$SKIP_LLM" = true ]; then
if [ ! -f "$CONFIG_OUT" ]; then
log_error "No configuration.json found and --skip-llm specified"
log_info "Either run without --skip-llm, or provide --use-config PATH"
exit 1
fi
log_info "Skipping LLM generation, using existing: $CONFIG_OUT"
else
if [ -z "$OPENAI_API_KEY" ]; then
log_error "OPENAI_API_KEY not set"
log_info "Set it with: export OPENAI_API_KEY='...'"
log_info "Or use --skip-llm to skip LLM generation"
exit 1
fi
log_info "Generating configuration via LLM..."
python3 "$REPO_DIR/main.py" \
--input-file "$PROMPT_FILE" \
--out "$CONFIG_OUT"
if [ ! -f "$CONFIG_OUT" ]; then
log_error "LLM generation failed: $CONFIG_OUT not created"
exit 1
fi
log_info "Generated: $CONFIG_OUT"
fi
# ==============================================================================
# Step 2: Build scenario with control plan
# ==============================================================================
log_step "Step 2: Build scenario with control plan"
log_info "Control plan: $CONTROL_PLAN"
log_info "Output dir: $OUT_DIR"
python3 "$REPO_DIR/build_scenario.py" \
--config "$CONFIG_OUT" \
--out "$OUT_DIR" \
--control-plan "$CONTROL_PLAN" \
--overwrite
if [ ! -f "$OUT_DIR/configuration.json" ]; then
log_error "Build failed: $OUT_DIR/configuration.json not created"
exit 1
fi
log_info "Scenario built: $OUT_DIR"
# ==============================================================================
# Step 3: Validate generated logic
# ==============================================================================
log_step "Step 3: Validate generated logic"
python3 -m tools.validate_logic \
--config "$OUT_DIR/configuration.json" \
--logic-dir "$OUT_DIR/logic" \
--check-callbacks \
--check-hil-init
log_info "Logic validation passed"
# ==============================================================================
# Step 4: Verify control plan compiled files exist
# ==============================================================================
log_step "Step 4: Verify control plan compiled files"
# Check that HIL files from control plan exist
for hil_name in water_hil filler_hil; do
hil_file="$OUT_DIR/logic/${hil_name}.py"
if [ -f "$hil_file" ]; then
log_info "Found: $hil_file"
# Verify it contains logic(physical_values)
if grep -q "def logic(physical_values)" "$hil_file"; then
log_info " Contains logic(physical_values): OK"
else
log_warn " Missing logic(physical_values) signature"
fi
else
log_warn "HIL file not found: $hil_file (may have different name in config)"
fi
done
# ==============================================================================
# Summary
# ==============================================================================
log_step "SUCCESS: Scenario ready"
echo ""
echo "Scenario contents:"
echo " Configuration: $OUT_DIR/configuration.json"
echo " Logic files:"
for f in "$OUT_DIR/logic"/*.py; do
echo " $(basename "$f")"
done
echo ""
echo "To run with ICS-SimLab (requires Docker and ICS-SimLab repo):"
echo " cd ~/projects/ICS-SimLab-main/curtin-ics-simlab"
echo " sudo ./start.sh $(realpath "$OUT_DIR")"
exit 0


@ -6,6 +6,29 @@ from typing import Any, Dict, List, Tuple
# More restrictive: only [a-z0-9_] to avoid docker/compose surprises
DOCKER_SAFE_RE = re.compile(r"^[a-z0-9_]+$")
def strip_nulls(obj: Any) -> Any:
"""
Recursively remove keys with None values from dicts and None items from lists.
This canonicalizes LLM output by removing noise like:
{"id": null, "io": null, "physical_value": null}
turning it into:
{}
Args:
obj: Any JSON-serializable object (dict, list, scalar)
Returns:
The same structure with None values/items removed
"""
if isinstance(obj, dict):
return {k: strip_nulls(v) for k, v in obj.items() if v is not None}
elif isinstance(obj, list):
return [strip_nulls(item) for item in obj if item is not None]
else:
return obj
def patch_fill_required_keys(cfg: dict[str, Any]) -> Tuple[dict[str, Any], List[str]]:
"""
Ensure keys that ICS-SimLab setup.py reads with direct indexing exist.
@ -155,6 +178,32 @@ def sanitize_docker_name(name: str) -> str:
return s
def sanitize_connection_id(name: str) -> str:
"""
Sanitize outbound_connection id to docker-safe format: [a-z0-9_] only.
This ensures connection IDs are consistent and safe for use in
docker container networking and as Python variable names.
Args:
name: Original connection id (e.g., "To-Sensor1", "TO_ACTUATOR")
Returns:
Sanitized id (e.g., "to_sensor1", "to_actuator")
"""
s = (name or "").strip().lower()
s = re.sub(r"\s+", "_", s) # spaces -> _
s = re.sub(r"-", "_", s) # hyphens -> _ (common in connection IDs)
s = re.sub(r"[^a-z0-9_]", "", s) # keep only [a-z0-9_]
s = re.sub(r"_+", "_", s) # collapse multiple underscores
s = s.strip("_")
if not s:
s = "connection"
if not s[0].isalnum():
s = "c" + s
return s
def patch_sanitize_network_names(cfg: dict[str, Any]) -> Tuple[dict[str, Any], List[str]]:
"""
Make ip_networks names docker-safe and align ip_networks[].name == ip_networks[].docker_name.
@ -221,3 +270,84 @@ def patch_sanitize_network_names(cfg: dict[str, Any]) -> Tuple[dict[str, Any], L
patch_errors.append(f"ip_networks.name not docker-safe after patch: {nm}")
return cfg, patch_errors
def patch_sanitize_connection_ids(cfg: dict[str, Any]) -> Tuple[dict[str, Any], List[str]]:
"""
Sanitize all outbound_connection IDs to docker-safe format [a-z0-9_].
Update all monitor/controller outbound_connection_id references consistently.
This ensures connection IDs are:
- Lowercase
- Only contain [a-z0-9_]
- Consistent between outbound_connections and monitors/controllers
Returns: (patched_cfg, patch_errors)
"""
patch_errors: List[str] = []
if not isinstance(cfg, dict):
return cfg, ["Top-level JSON is not an object"]
# Process PLCs and HMIs (both have outbound_connections, monitors, controllers)
for section in ["plcs", "hmis"]:
for dev in cfg.get(section, []) or []:
if not isinstance(dev, dict):
continue
dev_name = dev.get("name", "unknown")
# Build mapping: old_id -> new_id
id_map: Dict[str, str] = {}
# Sanitize outbound_connection IDs
for conn in dev.get("outbound_connections", []) or []:
if not isinstance(conn, dict):
continue
old_id = conn.get("id")
if isinstance(old_id, str) and old_id:
new_id = sanitize_connection_id(old_id)
if old_id != new_id:
id_map[old_id] = new_id
conn["id"] = new_id
# Update monitor outbound_connection_id references
for monitor in dev.get("monitors", []) or []:
if not isinstance(monitor, dict):
continue
conn_id = monitor.get("outbound_connection_id")
if isinstance(conn_id, str):
# Use mapped ID if changed, otherwise sanitize directly
if conn_id in id_map:
monitor["outbound_connection_id"] = id_map[conn_id]
else:
monitor["outbound_connection_id"] = sanitize_connection_id(conn_id)
# Update controller outbound_connection_id references
for controller in dev.get("controllers", []) or []:
if not isinstance(controller, dict):
continue
conn_id = controller.get("outbound_connection_id")
if isinstance(conn_id, str):
if conn_id in id_map:
controller["outbound_connection_id"] = id_map[conn_id]
else:
controller["outbound_connection_id"] = sanitize_connection_id(conn_id)
# Validate all connection IDs are docker-safe after patch
for section in ["plcs", "hmis"]:
for dev in cfg.get(section, []) or []:
if not isinstance(dev, dict):
continue
dev_name = dev.get("name", "unknown")
for conn in dev.get("outbound_connections", []) or []:
if not isinstance(conn, dict):
continue
conn_id = conn.get("id")
if isinstance(conn_id, str) and not DOCKER_SAFE_RE.match(conn_id):
patch_errors.append(
f"{section}['{dev_name}'].outbound_connections[].id "
f"not docker-safe after patch: {conn_id}"
)
return cfg, patch_errors
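The two sanitizers added above can be exercised in isolation; the sketch below reproduces their bodies verbatim from the diff so the example stands alone, and shows the canonicalization on the same kinds of IDs used in the fixtures below ("To-Tank-Sensor", "TO_VALVE_ACTUATOR").

```python
import re
from typing import Any


def strip_nulls(obj: Any) -> Any:
    # Recursively drop None dict values and None list items (as in utils above).
    if isinstance(obj, dict):
        return {k: strip_nulls(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [strip_nulls(item) for item in obj if item is not None]
    return obj


def sanitize_connection_id(name: str) -> str:
    # Same transformation as above: lowercase, spaces/hyphens -> "_",
    # keep only [a-z0-9_], collapse repeats, fall back to "connection".
    s = (name or "").strip().lower()
    s = re.sub(r"\s+", "_", s)
    s = re.sub(r"-", "_", s)
    s = re.sub(r"[^a-z0-9_]", "", s)
    s = re.sub(r"_+", "_", s)
    s = s.strip("_")
    if not s:
        s = "connection"
    if not s[0].isalnum():
        s = "c" + s
    return s


print(strip_nulls({"id": None, "io": None, "address": 1}))  # {'address': 1}
print(sanitize_connection_id("To-Tank-Sensor"))             # to_tank_sensor
print(sanitize_connection_id("TO_VALVE_ACTUATOR"))          # to_valve_actuator
print(sanitize_connection_id("***"))                        # connection
```

Note that `patch_sanitize_connection_ids` applies the same mapping to `monitors[].outbound_connection_id` and `controllers[].outbound_connection_id`, so references stay consistent with the renamed `outbound_connections[].id` entries.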

tests/fixtures/boolean_type_wrong.json vendored Normal file

@ -0,0 +1,35 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "to_sensor"}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [{"outbound_connection_id": "to_sensor", "id": "bottle_at_filler_output", "value_type": "input_register", "slave_id": 1, "address": 100, "count": 1, "interval": 0.5}],
"controllers": []
}],
"sensors": [{
"name": "bottle_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "bottle_at_filler_output"}]
}
}],
"actuators": [],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "bottle_at_filler_output", "io": "output"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}

tests/fixtures/config_duplicate_ip.json vendored Normal file

@ -0,0 +1,105 @@
{
"ui": {
"network": {
"ip": "192.168.100.10",
"port": 5000,
"docker_network": "ot_network"
}
},
"hmis": [
{
"name": "hmi1",
"network": {
"ip": "192.168.100.10",
"docker_network": "ot_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "192.168.100.10", "port": 502}
],
"outbound_connections": [],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [],
"controllers": []
}
],
"plcs": [
{
"name": "plc1",
"logic": "plc1.py",
"network": {
"ip": "192.168.100.21",
"docker_network": "ot_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "192.168.100.21", "port": 502}
],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.100.31", "port": 502, "id": "to_sensor"}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [
{"address": 100, "count": 1, "id": "tank_level", "io": "input"}
]
},
"monitors": [
{
"outbound_connection_id": "to_sensor",
"id": "tank_level",
"value_type": "input_register",
"slave_id": 1,
"address": 100,
"count": 1,
"interval": 0.5
}
],
"controllers": []
}
],
"sensors": [
{
"name": "tank_sensor",
"hil": "hil1",
"network": {
"ip": "192.168.100.31",
"docker_network": "ot_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "192.168.100.31", "port": 502}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [
{"address": 100, "count": 1, "physical_value": "tank_level"}
]
}
}
],
"actuators": [],
"hils": [
{
"name": "hil1",
"logic": "hil1.py",
"physical_values": [
{"name": "tank_level", "io": "output"}
]
}
],
"serial_networks": [],
"ip_networks": [
{
"docker_name": "ot_network",
"name": "ot_network",
"subnet": "192.168.100.0/24"
}
]
}


@ -0,0 +1,28 @@
{
"plcs": [],
"hils": [
{
"name": "water_hil",
"logic": "water_hil.py",
"physical_values": [
{"name": "water_tank_level", "io": "output"},
{"name": "tank_input_valve", "io": "input"},
{"name": "tank_output_valve", "io": "input"}
]
},
{
"name": "filler_hil",
"logic": "filler_hil.py",
"physical_values": [
{"name": "bottle_fill_level", "io": "output"},
{"name": "bottle_at_filler", "io": "output"},
{"name": "bottle_distance", "io": "output"},
{"name": "conveyor_cmd", "io": "input"},
{"name": "fill_valve", "io": "input"}
]
}
],
"sensors": [],
"actuators": [],
"hmis": []
}


@ -0,0 +1,85 @@
{
"ui": {
"network": {
"ip": "192.168.100.10",
"port": 5000,
"docker_network": "ot_network"
}
},
"hmis": [],
"plcs": [
{
"name": "plc1",
"logic": "plc1.py",
"network": {
"ip": "10.0.0.50",
"docker_network": "ot_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "10.0.0.50", "port": 502}
],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.100.31", "port": 502, "id": "to_sensor"}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [
{"address": 100, "count": 1, "id": "tank_level", "io": "input"}
]
},
"monitors": [
{
"outbound_connection_id": "to_sensor",
"id": "tank_level",
"value_type": "input_register",
"slave_id": 1,
"address": 100,
"count": 1,
"interval": 0.5
}
],
"controllers": []
}
],
"sensors": [
{
"name": "tank_sensor",
"hil": "hil1",
"network": {
"ip": "192.168.100.31",
"docker_network": "ot_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "192.168.100.31", "port": 502}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [
{"address": 100, "count": 1, "physical_value": "tank_level"}
]
}
}
],
"actuators": [],
"hils": [
{
"name": "hil1",
"logic": "hil1.py",
"physical_values": [
{"name": "tank_level", "io": "output"}
]
}
],
"serial_networks": [],
"ip_networks": [
{
"docker_name": "ot_network",
"name": "ot_network",
"subnet": "192.168.100.0/24"
}
]
}


@ -0,0 +1,85 @@
{
"ui": {
"network": {
"ip": "192.168.100.10",
"port": 5000,
"docker_network": "ot_network"
}
},
"hmis": [],
"plcs": [
{
"name": "plc1",
"logic": "plc1.py",
"network": {
"ip": "192.168.100.21",
"docker_network": "nonexistent_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "192.168.100.21", "port": 502}
],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.100.31", "port": 502, "id": "to_sensor"}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [
{"address": 100, "count": 1, "id": "tank_level", "io": "input"}
]
},
"monitors": [
{
"outbound_connection_id": "to_sensor",
"id": "tank_level",
"value_type": "input_register",
"slave_id": 1,
"address": 100,
"count": 1,
"interval": 0.5
}
],
"controllers": []
}
],
"sensors": [
{
"name": "tank_sensor",
"hil": "hil1",
"network": {
"ip": "192.168.100.31",
"docker_network": "ot_network"
},
"inbound_connections": [
{"type": "tcp", "ip": "192.168.100.31", "port": 502}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [
{"address": 100, "count": 1, "physical_value": "tank_level"}
]
}
}
],
"actuators": [],
"hils": [
{
"name": "hil1",
"logic": "hil1.py",
"physical_values": [
{"name": "tank_level", "io": "output"}
]
}
],
"serial_networks": [],
"ip_networks": [
{
"docker_name": "ot_network",
"name": "ot_network",
"subnet": "192.168.100.0/24"
}
]
}

tests/fixtures/config_with_nulls.json vendored Normal file

@ -0,0 +1,25 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "port": null, "docker_network": "vlan1"},
"identity": null,
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502, "id": null}],
"outbound_connections": [],
"registers": {
"coil": [{"address": 1, "count": 1, "id": "valve_cmd", "io": "output", "physical_value": null, "physical_values": null}],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [],
"controllers": []
}],
"sensors": [],
"actuators": [],
"hils": [],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}


@ -0,0 +1,71 @@
{
"version": "v0.1",
"hils": [
{
"name": "physical_io_hil",
"warmup_s": 3.0,
"init": {
"tank_level": 500,
"bottle_fill_level": 0,
"bottle_at_filler": 1,
"bottle_distance": 0,
"tank_input_valve": 0,
"tank_output_valve": 0,
"conveyor_cmd": 0
},
"params": {
"tank_max": 1000,
"bottle_max": 200,
"distance_reset": 130,
"filler_range": 30,
"inflow_rate": 18,
"outflow_rate": 6,
"fill_rate": 6,
"conveyor_speed": 4
},
"tasks": [
{
"type": "loop",
"name": "physics",
"dt_s": 0.6,
"actions": [
{
"if": "tank_input_valve > 0.5",
"then": [
{"add": ["tank_level", "inflow_rate"]}
]
},
{
"if": "tank_output_valve > 0.5",
"then": [
{"add": ["tank_level", "-outflow_rate"]},
{
"if": "bottle_at_filler > 0.5 and bottle_fill_level < bottle_max",
"then": [
{"add": ["bottle_fill_level", "fill_rate"]}
]
}
]
},
{"set": ["tank_level", "clamp(tank_level, 0, tank_max)"]},
{"set": ["bottle_fill_level", "clamp(bottle_fill_level, 0, bottle_max)"]},
{
"if": "conveyor_cmd > 0.5",
"then": [
{"add": ["bottle_distance", "-conveyor_speed"]},
{
"if": "bottle_distance < 0",
"then": [
{"set": ["bottle_distance", "distance_reset"]},
{"set": ["bottle_fill_level", "0"]}
]
}
]
},
{"set": ["bottle_at_filler", "1 if bottle_distance <= filler_range else 0"]}
]
}
]
}
]
}
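For reference, one tick of the `physics` loop task above can be hand-unrolled into plain Python to sanity-check the tank dynamics. This is an illustrative sketch only, not the compiler's output (per the tests below, tools/compile_control_plan.py emits `while True` loops with `time.sleep` and threading when there is more than one task); the filling branch is omitted here since `tank_output_valve` starts closed.

```python
def clamp(x, lo, hi):
    # clamp() as referenced by the "set" expressions in the control plan
    return max(lo, min(hi, x))

# Init/params taken from the control plan above
state = {"tank_level": 500, "tank_input_valve": 1, "tank_output_valve": 0}
params = {"tank_max": 1000, "inflow_rate": 18, "outflow_rate": 6}

# One tick (dt_s = 0.6) of the "physics" loop, hand-unrolled:
if state["tank_input_valve"] > 0.5:
    state["tank_level"] += params["inflow_rate"]
if state["tank_output_valve"] > 0.5:
    state["tank_level"] -= params["outflow_rate"]
state["tank_level"] = clamp(state["tank_level"], 0, params["tank_max"])

print(state["tank_level"])  # 518
```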


@ -0,0 +1,75 @@
{
"version": "v0.1",
"hils": [
{
"name": "power_grid_hil",
"warmup_s": 2.0,
"init": {
"bus_voltage": 230.0,
"bus_frequency": 50.0,
"generator_power": 0.0,
"load_power": 0.0,
"grid_stable": 1,
"generator_cmd": 0,
"generator_setpoint": 0.0,
"load_demand": 500.0
},
"params": {
"nominal_voltage": 230.0,
"nominal_frequency": 50.0,
"voltage_tolerance": 10.0,
"frequency_tolerance": 0.5,
"generator_max": 1000.0,
"load_sensitivity": 0.01
},
"tasks": [
{
"type": "loop",
"name": "grid_dynamics",
"dt_s": 0.1,
"actions": [
{
"if": "generator_cmd > 0.5",
"then": [
{"set": ["generator_power", "clamp(generator_setpoint, 0, generator_max)"]}
],
"else": [
{"set": ["generator_power", "0"]}
]
},
{"set": ["load_power", "load_demand"]},
{
"set": ["bus_frequency", "nominal_frequency + (generator_power - load_power) * load_sensitivity"]
},
{
"set": ["bus_voltage", "nominal_voltage * (0.95 + 0.1 * (generator_power / max(generator_max, 1)))"]
},
{
"if": "abs(bus_frequency - nominal_frequency) < frequency_tolerance and abs(bus_voltage - nominal_voltage) < voltage_tolerance",
"then": [
{"set": ["grid_stable", "1"]}
],
"else": [
{"set": ["grid_stable", "0"]}
]
}
]
},
{
"type": "playback",
"name": "load_variation",
"dt_s": 1.0,
"target": "load_demand",
"profile": {
"kind": "gaussian",
"height": 500.0,
"mean": 0.0,
"std": 50.0,
"entries": 100
},
"repeat": true
}
]
}
]
}
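The `grid_dynamics` expressions can likewise be checked numerically. The sketch below evaluates one tick at an assumed operating point (generator_power = 600, load_demand = 500; these values are illustrative, not taken from the fixture's init block) and shows that a 100-unit power imbalance already pushes the frequency outside the 0.5 Hz tolerance even though the voltage stays within bounds.

```python
# Params from the control plan above
nominal_frequency, nominal_voltage = 50.0, 230.0
frequency_tolerance, voltage_tolerance = 0.5, 10.0
generator_max, load_sensitivity = 1000.0, 0.01

# Assumed operating point (illustrative)
generator_power, load_power = 600.0, 500.0

# Same formulas as the "set" actions in grid_dynamics:
bus_frequency = nominal_frequency + (generator_power - load_power) * load_sensitivity
bus_voltage = nominal_voltage * (0.95 + 0.1 * (generator_power / max(generator_max, 1)))

grid_stable = (abs(bus_frequency - nominal_frequency) < frequency_tolerance
               and abs(bus_voltage - nominal_voltage) < voltage_tolerance)

print(round(bus_frequency, 2), round(bus_voltage, 1), grid_stable)  # 51.0 232.3 False
```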


@ -0,0 +1,106 @@
{
"version": "v0.1",
"hils": [
{
"name": "ied_simulator",
"warmup_s": 1.0,
"init": {
"current_a": 0.0,
"current_b": 0.0,
"current_c": 0.0,
"voltage_a": 230.0,
"voltage_b": 230.0,
"voltage_c": 230.0,
"breaker_status": 1,
"fault_detected": 0,
"trip_counter": 0,
"reset_cmd": 0
},
"params": {
"overcurrent_threshold": 100.0,
"undervoltage_threshold": 200.0,
"nominal_current": 50.0,
"nominal_voltage": 230.0,
"current_noise_std": 2.0,
"voltage_noise_std": 1.0
},
"tasks": [
{
"type": "loop",
"name": "protection_logic",
"dt_s": 0.05,
"actions": [
{
"if": "current_a > overcurrent_threshold or current_b > overcurrent_threshold or current_c > overcurrent_threshold",
"then": [
{"set": ["fault_detected", "1"]},
{
"if": "breaker_status > 0.5",
"then": [
{"set": ["breaker_status", "0"]},
{"add": ["trip_counter", "1"]}
]
}
]
},
{
"if": "voltage_a < undervoltage_threshold or voltage_b < undervoltage_threshold or voltage_c < undervoltage_threshold",
"then": [
{"set": ["fault_detected", "1"]}
]
},
{
"if": "reset_cmd > 0.5 and fault_detected > 0.5",
"then": [
{"set": ["fault_detected", "0"]},
{"set": ["breaker_status", "1"]}
]
}
]
},
{
"type": "playback",
"name": "current_a_sim",
"dt_s": 0.02,
"target": "current_a",
"profile": {
"kind": "gaussian",
"height": 50.0,
"mean": 0.0,
"std": 2.0,
"entries": 500
},
"repeat": true
},
{
"type": "playback",
"name": "current_b_sim",
"dt_s": 0.02,
"target": "current_b",
"profile": {
"kind": "gaussian",
"height": 50.0,
"mean": 0.0,
"std": 2.0,
"entries": 500
},
"repeat": true
},
{
"type": "playback",
"name": "current_c_sim",
"dt_s": 0.02,
"target": "current_c",
"profile": {
"kind": "gaussian",
"height": 50.0,
"mean": 0.0,
"std": 2.0,
"entries": 500
},
"repeat": true
}
]
}
]
}
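How a `gaussian` playback profile is materialized is up to the compiler; one plausible reading (an assumption, not confirmed by this diff) is `entries` samples centered on `height` with the given `std`, replayed at `dt_s` intervals when `repeat` is true. A minimal sketch under that assumption:

```python
import random

# Profile fields as in the current_a_sim task above
profile = {"kind": "gaussian", "height": 50.0, "mean": 0.0, "std": 2.0, "entries": 500}

random.seed(0)  # deterministic for illustration
# Assumed semantics: baseline `height` plus gaussian noise N(mean, std)
samples = [profile["height"] + random.gauss(profile["mean"], profile["std"])
           for _ in range(int(profile["entries"]))]

print(len(samples))  # 500
```

With `nominal_current = 50.0` and `overcurrent_threshold = 100.0`, such a trace stays far below the trip threshold, so the protection loop should not open the breaker under normal playback.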

tests/fixtures/orphan_actuator.json vendored Normal file

@ -0,0 +1,36 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [],
"registers": {
"coil": [{"address": 1, "count": 1, "id": "valve_cmd", "io": "output"}],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [],
"controllers": []
}],
"sensors": [],
"actuators": [{
"name": "orphan_actuator",
"hil": "hil1",
"network": {"ip": "192.168.0.41", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.41", "port": 502}],
"registers": {
"coil": [{"address": 500, "count": 1, "physical_value": "valve_input"}],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"physical_values": [{"name": "valve_input", "io": "input"}]
}],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "valve_input", "io": "input"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}

tests/fixtures/orphan_sensor.json vendored Normal file

@ -0,0 +1,35 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 1, "count": 1, "id": "tank_level", "io": "input"}]
},
"monitors": [],
"controllers": []
}],
"sensors": [{
"name": "orphan_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "tank_level_output"}]
}
}],
"actuators": [],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "tank_level_output", "io": "output"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}


@ -0,0 +1,54 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "to_sensor"},
{"type": "tcp", "ip": "192.168.0.41", "port": 502, "id": "to_actuator"}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [
{"outbound_connection_id": "to_sensor", "id": "tank_level", "value_type": "input_register", "slave_id": 1, "address": 100, "count": 1, "interval": 0.5}
],
"controllers": [
{"outbound_connection_id": "to_actuator", "id": "valve_cmd", "value_type": "coil", "slave_id": 1, "address": 500, "count": 1}
]
}],
"sensors": [{
"name": "tank_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "tank_level"}]
}
}],
"actuators": [{
"name": "valve_actuator",
"hil": "hil1",
"network": {"ip": "192.168.0.41", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.41", "port": 502}],
"registers": {
"coil": [{"address": 500, "count": 1, "physical_value": "valve_cmd"}],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"physical_values": [{"name": "valve_cmd", "io": "input"}]
}],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "tank_level", "io": "output"}, {"name": "valve_cmd", "io": "input"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}


@ -0,0 +1,54 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "To-Tank-Sensor"},
{"type": "tcp", "ip": "192.168.0.41", "port": 502, "id": "TO_VALVE_ACTUATOR"}
],
"registers": {
"coil": [{"address": 500, "count": 1, "id": "valve_cmd", "io": "output"}],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "id": "tank_level", "io": "input"}]
},
"monitors": [
{"outbound_connection_id": "To-Tank-Sensor", "id": "tank_level", "value_type": "input_register", "slave_id": 1, "address": 100, "count": 1, "interval": 0.5}
],
"controllers": [
{"outbound_connection_id": "TO_VALVE_ACTUATOR", "id": "valve_cmd", "value_type": "coil", "slave_id": 1, "address": 500, "count": 1}
]
}],
"sensors": [{
"name": "tank_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "tank_level"}]
}
}],
"actuators": [{
"name": "valve_actuator",
"hil": "hil1",
"network": {"ip": "192.168.0.41", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.41", "port": 502}],
"registers": {
"coil": [{"address": 500, "count": 1, "physical_value": "valve_cmd"}],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"physical_values": [{"name": "valve_cmd", "io": "input"}]
}],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "tank_level", "io": "output"}, {"name": "valve_cmd", "io": "input"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}

tests/fixtures/valid_minimal.json vendored Normal file

@ -0,0 +1,57 @@
{
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "to_sensor"},
{"type": "tcp", "ip": "192.168.0.41", "port": 502, "id": "to_actuator"}
],
"registers": {
"coil": [{"address": 500, "count": 1, "id": "valve_input", "io": "output"}],
"discrete_input": [{"address": 10, "count": 1, "id": "bottle_status", "io": "input"}],
"holding_register": [],
"input_register": [
{"address": 1, "count": 1, "id": "tank_level", "io": "input"},
{"address": 100, "count": 1, "id": "tank_level_output", "io": "input"}
]
},
"monitors": [
{"outbound_connection_id": "to_sensor", "id": "tank_level_output", "value_type": "input_register", "slave_id": 1, "address": 100, "count": 1, "interval": 0.5}
],
"controllers": [
{"outbound_connection_id": "to_actuator", "id": "valve_input", "value_type": "coil", "slave_id": 1, "address": 500, "count": 1}
]
}],
"sensors": [{
"name": "tank_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "tank_level_output"}]
}
}],
"actuators": [{
"name": "valve_actuator",
"hil": "hil1",
"network": {"ip": "192.168.0.41", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.41", "port": 502}],
"registers": {
"coil": [{"address": 500, "count": 1, "physical_value": "valve_input"}],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"physical_values": [{"name": "valve_input", "io": "input"}]
}],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "tank_level_output", "io": "output"}, {"name": "valve_input", "io": "input"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}


@ -0,0 +1,292 @@
"""
Tests for compile_control_plan.py
Validates that:
1. Control plan files compile without errors
2. Generated files have correct structure (def logic, while True, time.sleep)
3. Warmup sleep is included when specified
4. Threading is used when >1 task
5. Expected patterns (set, add) appear in output
"""
import json
import tempfile
from pathlib import Path
import pytest
from models.control_plan import ControlPlan
from tools.compile_control_plan import compile_control_plan, compile_hil, validate_control_plan
from tools.safe_eval import safe_eval, validate_expression, UnsafeExpressionError, extract_variable_names
# =============================================================================
# Test fixtures paths
# =============================================================================
FIXTURES_DIR = Path(__file__).parent / "fixtures"
def load_fixture(name: str) -> ControlPlan:
"""Load a test fixture as ControlPlan."""
path = FIXTURES_DIR / name
data = json.loads(path.read_text(encoding="utf-8"))
return ControlPlan.model_validate(data)
# =============================================================================
# safe_eval tests
# =============================================================================
class TestSafeEval:
"""Test the safe expression evaluator."""
def test_basic_arithmetic(self):
assert safe_eval("x + 1", {"x": 5}) == 6
assert safe_eval("x * y", {"x": 3, "y": 4}) == 12
assert safe_eval("x - y", {"x": 10, "y": 3}) == 7
assert safe_eval("x / y", {"x": 10, "y": 2}) == 5.0
def test_comparison(self):
assert safe_eval("x > 5", {"x": 10}) == True
assert safe_eval("x < 5", {"x": 10}) == False
assert safe_eval("x == y", {"x": 5, "y": 5}) == True
def test_ternary(self):
assert safe_eval("x if x > 0 else y", {"x": 5, "y": 10}) == 5
assert safe_eval("x if x > 0 else y", {"x": -1, "y": 10}) == 10
def test_boolean_ops(self):
assert safe_eval("x and y", {"x": True, "y": True}) == True
assert safe_eval("x or y", {"x": False, "y": True}) == True
assert safe_eval("not x", {"x": False}) == True
def test_clamp(self):
assert safe_eval("clamp(x, 0, 10)", {"x": 15}) == 10
assert safe_eval("clamp(x, 0, 10)", {"x": -5}) == 0
assert safe_eval("clamp(x, 0, 10)", {"x": 5}) == 5
def test_min_max(self):
assert safe_eval("min(x, y)", {"x": 5, "y": 10}) == 5
assert safe_eval("max(x, y)", {"x": 5, "y": 10}) == 10
def test_unsafe_import_rejected(self):
with pytest.raises(UnsafeExpressionError):
validate_expression("__import__('os')")
def test_unsafe_attribute_rejected(self):
with pytest.raises(UnsafeExpressionError):
validate_expression("x.__class__")
def test_extract_variables(self):
names = extract_variable_names("a + b * c")
assert names == {"a", "b", "c"}
names = extract_variable_names("clamp(x, 0, max(y, z))")
assert names == {"x", "y", "z"}
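The rules TestSafeEval pins down can be sketched as an AST-whitelist evaluator. This is a minimal sketch consistent with the tested behavior, not the project's actual `tools/safe_eval`; the operator tables are trimmed to what the tests exercise.

```python
import ast
import operator as op

_BIN = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}
_CMP = {ast.Gt: op.gt, ast.Lt: op.lt, ast.Eq: op.eq}
_FUNCS = {"min": min, "max": max,
          "clamp": lambda x, lo, hi: max(lo, min(hi, x))}

class UnsafeExpressionError(Exception):
    pass

def safe_eval(expr, variables):
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN:
            return _BIN[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            return _CMP[type(node.ops[0])](ev(node.left), ev(node.comparators[0]))
        if isinstance(node, ast.IfExp):  # ternary: a if cond else b
            return ev(node.body) if ev(node.test) else ev(node.orelse)
        if isinstance(node, ast.BoolOp):
            vals = [ev(v) for v in node.values]
            return all(vals) if isinstance(node.op, ast.And) else any(vals)
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.Not):
            return not ev(node.operand)
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS):
            return _FUNCS[node.func.id](*[ev(a) for a in node.args])
        # Attribute access, __import__ and anything else not whitelisted lands here.
        raise UnsafeExpressionError(ast.dump(node))
    return ev(ast.parse(expr, mode="eval"))
```

Because only whitelisted node types are evaluated, `x.__class__` (an `ast.Attribute`) and `__import__('os')` (a call to a name outside `_FUNCS`) are rejected rather than executed.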
# =============================================================================
# ControlPlan schema tests
# =============================================================================
class TestControlPlanSchema:
"""Test ControlPlan Pydantic schema."""
def test_load_bottle_fixture(self):
plan = load_fixture("control_plan_bottle_like.json")
assert plan.version == "v0.1"
assert len(plan.hils) == 1
assert plan.hils[0].name == "physical_io_hil"
def test_load_electrical_fixture(self):
plan = load_fixture("control_plan_electrical_like.json")
assert plan.version == "v0.1"
assert len(plan.hils) == 1
assert plan.hils[0].name == "power_grid_hil"
# Has 2 tasks (loop + playback)
assert len(plan.hils[0].tasks) == 2
def test_load_ied_fixture(self):
plan = load_fixture("control_plan_ied_like.json")
assert plan.version == "v0.1"
assert len(plan.hils) == 1
assert plan.hils[0].name == "ied_simulator"
# Has 4 tasks
assert len(plan.hils[0].tasks) == 4
# =============================================================================
# Compiler tests
# =============================================================================
class TestCompiler:
"""Test the control plan compiler."""
def test_compile_bottle_fixture(self):
plan = load_fixture("control_plan_bottle_like.json")
result = compile_control_plan(plan)
assert "physical_io_hil" in result
code = result["physical_io_hil"]
# Check required patterns
assert "def logic(physical_values):" in code
assert "while True:" in code
assert "time.sleep(" in code
# Check warmup delay
assert "time.sleep(3.0)" in code
# Check initialization (uses setdefault for validator compatibility)
assert "physical_values.setdefault('tank_level', 500)" in code
def test_compile_electrical_fixture(self):
plan = load_fixture("control_plan_electrical_like.json")
result = compile_control_plan(plan)
assert "power_grid_hil" in result
code = result["power_grid_hil"]
# Has threading (2 tasks)
assert "import threading" in code
assert "threading.Thread" in code
def test_compile_ied_fixture(self):
plan = load_fixture("control_plan_ied_like.json")
result = compile_control_plan(plan)
assert "ied_simulator" in result
code = result["ied_simulator"]
# Has threading (4 tasks)
assert "import threading" in code
# Check protection logic task
assert "_task_protection_logic" in code
# Check playback tasks
assert "_task_current_a_sim" in code
assert "_task_current_b_sim" in code
assert "_task_current_c_sim" in code
def test_single_task_no_threading(self):
plan = load_fixture("control_plan_bottle_like.json")
result = compile_control_plan(plan)
code = result["physical_io_hil"]
# Single task should NOT use threading
assert "import threading" not in code
def test_warmup_included(self):
plan = load_fixture("control_plan_bottle_like.json")
assert plan.hils[0].warmup_s == 3.0
code = compile_hil(plan.hils[0])
assert "time.sleep(3.0)" in code
def test_no_warmup_when_not_specified(self):
"""Create a plan without warmup and check it's not included."""
plan_dict = {
"version": "v0.1",
"hils": [{
"name": "test_hil",
"init": {"x": 0},
"tasks": [{
"type": "loop",
"name": "main",
"dt_s": 0.1,
"actions": [{"set": ["x", "x + 1"]}]
}]
}]
}
plan = ControlPlan.model_validate(plan_dict)
code = compile_hil(plan.hils[0])
# Should NOT have warmup delay line
assert "Warmup delay" not in code
# =============================================================================
# Validation tests
# =============================================================================
class TestValidation:
"""Test control plan validation."""
def test_validate_bottle_fixture(self):
plan = load_fixture("control_plan_bottle_like.json")
errors = validate_control_plan(plan)
assert errors == []
def test_validate_electrical_fixture(self):
plan = load_fixture("control_plan_electrical_like.json")
errors = validate_control_plan(plan)
assert errors == []
def test_validate_ied_fixture(self):
plan = load_fixture("control_plan_ied_like.json")
errors = validate_control_plan(plan)
assert errors == []
def test_undefined_variable_detected(self):
"""Plan with undefined variable should fail validation."""
plan_dict = {
"version": "v0.1",
"hils": [{
"name": "test_hil",
"init": {"x": 0},
"tasks": [{
"type": "loop",
"name": "main",
"dt_s": 0.1,
"actions": [{"set": ["x", "x + undefined_var"]}]
}]
}]
}
plan = ControlPlan.model_validate(plan_dict)
errors = validate_control_plan(plan)
assert len(errors) == 1
assert "undefined_var" in errors[0]
# =============================================================================
# File output tests
# =============================================================================
class TestFileOutput:
"""Test that compiled output can be written to files."""
def test_write_to_temp_dir(self):
plan = load_fixture("control_plan_bottle_like.json")
result = compile_control_plan(plan)
with tempfile.TemporaryDirectory() as tmpdir:
out_path = Path(tmpdir) / "physical_io_hil.py"
out_path.write_text(result["physical_io_hil"], encoding="utf-8")
assert out_path.exists()
# Read back and verify
content = out_path.read_text(encoding="utf-8")
assert "def logic(physical_values):" in content
def test_all_fixtures_compile_to_valid_python(self):
"""Ensure all fixtures compile to syntactically valid Python."""
fixtures = [
"control_plan_bottle_like.json",
"control_plan_electrical_like.json",
"control_plan_ied_like.json",
]
for fixture_name in fixtures:
plan = load_fixture(fixture_name)
result = compile_control_plan(plan)
for hil_name, code in result.items():
# Try to compile the generated code
try:
compile(code, f"<{hil_name}>", "exec")
except SyntaxError as e:
pytest.fail(f"Syntax error in compiled {fixture_name}/{hil_name}: {e}")
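The patterns these compiler tests assert (`def logic(physical_values):`, `while True:`, a warmup `time.sleep`, `setdefault` initialization before any alias) correspond to generated files of roughly this shape. The snippet below is an illustrative sketch of a single-task HIL, not actual compiler output; key names and values are hypothetical.

```python
# Illustrative shape of a compiled single-task HIL file (hypothetical output).
GENERATED_HIL = '''
import time

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def logic(physical_values):
    # Init keys first, via literal physical_values subscripts (validator-visible).
    physical_values.setdefault('tank_level', 500)
    # Warmup delay
    time.sleep(3.0)
    pv = physical_values
    while True:
        pv['tank_level'] = clamp(pv['tank_level'] + 1, 0, 1000)
        time.sleep(0.1)
'''

# The generated text is syntactically valid Python and contains the asserted patterns.
compile(GENERATED_HIL, "<generated>", "exec")
```

With more than one task, the compiler tests additionally expect `import threading` and one `threading.Thread` per `_task_*` function.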


@ -0,0 +1,201 @@
"""
Regression test: HIL initialization must be detectable by validate_logic --check-hil-init.
This test verifies that:
1. tools.compile_control_plan generates HIL code with physical_values.setdefault() calls
2. tools.validate_logic --check-hil-init passes on the generated code
Root cause of original bug:
- Compiler used alias `pv = physical_values` then `pv['key'] = value`
- Validator AST parser only detected `physical_values[...]` (literal name, not aliases)
Fix:
- Emit explicit `physical_values.setdefault('<key>', <default>)` at TOP of logic()
- BEFORE any alias or threads
"""
import json
import tempfile
from pathlib import Path
import pytest
from models.control_plan import ControlPlan
from tools.compile_control_plan import compile_control_plan
from services.validation.logic_validation import validate_logic_against_config
FIXTURES_DIR = Path(__file__).parent / "fixtures"
EXAMPLES_DIR = Path(__file__).parent.parent / "examples" / "control_plans"
class TestHilInitValidation:
"""Regression tests for HIL initialization detection."""
def test_compiled_hil_passes_init_validation(self):
"""
Compile bottle_line_v0.1.json and verify validate_logic --check-hil-init passes.
This test would FAIL before the fix because:
- Compiler emitted: pv['water_tank_level'] = 500
- Validator looked for: physical_values['water_tank_level'] = ...
After the fix:
- Compiler emits: physical_values.setdefault('water_tank_level', 500)
- Validator detects this via AST
"""
# Load control plan
plan_path = EXAMPLES_DIR / "bottle_line_v0.1.json"
plan_dict = json.loads(plan_path.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(plan_dict)
# Load config fixture (defines HIL physical_values)
config_path = FIXTURES_DIR / "config_hil_bottle_like.json"
config = json.loads(config_path.read_text(encoding="utf-8"))
# Compile with config to ensure ALL physical_values are initialized
hil_code = compile_control_plan(plan, config=config)
# Write to temp directory
with tempfile.TemporaryDirectory() as tmpdir:
logic_dir = Path(tmpdir)
for hil_name, code in hil_code.items():
safe_name = hil_name.replace(" ", "_").replace("-", "_")
out_file = logic_dir / f"{safe_name}.py"
out_file.write_text(code, encoding="utf-8")
# Run validation with --check-hil-init
issues = validate_logic_against_config(
config_path=str(config_path),
logic_dir=str(logic_dir),
check_hil_init=True,
)
# Filter for HIL_INIT issues only (ignore MAPPING for PLCs etc.)
hil_init_issues = [i for i in issues if i.kind == "HIL_INIT"]
# Should be no HIL_INIT issues
assert hil_init_issues == [], (
f"HIL initialization validation failed:\n"
+ "\n".join(f" - {i.file}: {i.key} -> {i.message}" for i in hil_init_issues)
)
def test_compiled_hil_contains_setdefault_calls(self):
"""
Verify generated code contains physical_values.setdefault() calls (not alias).
"""
plan_path = EXAMPLES_DIR / "bottle_line_v0.1.json"
plan_dict = json.loads(plan_path.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(plan_dict)
config_path = FIXTURES_DIR / "config_hil_bottle_like.json"
config = json.loads(config_path.read_text(encoding="utf-8"))
hil_code = compile_control_plan(plan, config=config)
# Check water_hil
water_code = hil_code["water_hil"]
assert "physical_values.setdefault('water_tank_level'" in water_code
assert "physical_values.setdefault('tank_input_valve'" in water_code
assert "physical_values.setdefault('tank_output_valve'" in water_code
# Check filler_hil
filler_code = hil_code["filler_hil"]
assert "physical_values.setdefault('bottle_fill_level'" in filler_code
assert "physical_values.setdefault('bottle_at_filler'" in filler_code
assert "physical_values.setdefault('bottle_distance'" in filler_code
assert "physical_values.setdefault('conveyor_cmd'" in filler_code
assert "physical_values.setdefault('fill_valve'" in filler_code
def test_setdefault_before_alias(self):
"""
Verify setdefault calls appear BEFORE the pv alias is created.
This is critical because the validator does AST analysis and needs
to see physical_values[...] assignments, not pv[...] assignments.
"""
plan_path = EXAMPLES_DIR / "bottle_line_v0.1.json"
plan_dict = json.loads(plan_path.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(plan_dict)
config_path = FIXTURES_DIR / "config_hil_bottle_like.json"
config = json.loads(config_path.read_text(encoding="utf-8"))
hil_code = compile_control_plan(plan, config=config)
for hil_name, code in hil_code.items():
# Find positions
setdefault_pos = code.find("physical_values.setdefault(")
alias_pos = code.find("pv = physical_values")
assert setdefault_pos != -1, f"{hil_name}: no setdefault found"
assert alias_pos != -1, f"{hil_name}: no alias found"
assert setdefault_pos < alias_pos, (
f"{hil_name}: setdefault must appear BEFORE alias. "
f"setdefault at {setdefault_pos}, alias at {alias_pos}"
)
def test_init_value_preserved_from_plan(self):
"""
Verify that init values from plan.init are used (not just 0).
"""
plan_path = EXAMPLES_DIR / "bottle_line_v0.1.json"
plan_dict = json.loads(plan_path.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(plan_dict)
config_path = FIXTURES_DIR / "config_hil_bottle_like.json"
config = json.loads(config_path.read_text(encoding="utf-8"))
hil_code = compile_control_plan(plan, config=config)
# water_hil init: water_tank_level=500
water_code = hil_code["water_hil"]
assert "physical_values.setdefault('water_tank_level', 500)" in water_code
# filler_hil init: bottle_fill_level=0, bottle_at_filler=1, bottle_distance=0
filler_code = hil_code["filler_hil"]
assert "physical_values.setdefault('bottle_at_filler', 1)" in filler_code
def test_config_only_keys_initialized_with_default(self):
"""
If config has physical_values not in plan.init, they should be initialized to 0.
This ensures validate_logic --check-hil-init passes even when the config
declares more keys than the plan initializes.
"""
# Create a plan with fewer init keys than config
plan_dict = {
"version": "v0.1",
"hils": [{
"name": "test_hil",
"init": {"x": 100}, # Only x, not y
"tasks": [{
"type": "loop",
"name": "main",
"dt_s": 0.1,
"actions": [{"set": ["x", "x + 1"]}]
}]
}]
}
plan = ControlPlan.model_validate(plan_dict)
# Config has both x and y
config = {
"hils": [{
"name": "test_hil",
"logic": "test_hil.py",
"physical_values": [
{"name": "x", "io": "output"},
{"name": "y", "io": "output"}, # Extra key not in plan.init
]
}]
}
hil_code = compile_control_plan(plan, config=config)
code = hil_code["test_hil"]
# x should use plan.init value (100)
assert "physical_values.setdefault('x', 100)" in code
# y should use default (0) since not in plan.init
assert "physical_values.setdefault('y', 0)" in code
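The detection described in the docstring above (the validator sees literal `physical_values[...]` assignments and `physical_values.setdefault(...)` calls, never alias writes like `pv[...]`) can be sketched with the `ast` module. This is an illustration of the approach, not the project's validator, and assumes Python 3.9+ subscript nodes.

```python
import ast

def initialized_keys(source):
    """Collect keys initialized via physical_values['k'] = ... or
    physical_values.setdefault('k', ...). Alias writes (pv['k'] = ...) are ignored."""
    keys = set()
    for node in ast.walk(ast.parse(source)):
        # physical_values['k'] = value
        if isinstance(node, ast.Assign):
            for tgt in node.targets:
                if (isinstance(tgt, ast.Subscript)
                        and isinstance(tgt.value, ast.Name)
                        and tgt.value.id == "physical_values"
                        and isinstance(tgt.slice, ast.Constant)):
                    keys.add(tgt.slice.value)
        # physical_values.setdefault('k', default)
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "setdefault"
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "physical_values"
                and node.args and isinstance(node.args[0], ast.Constant)):
            keys.add(node.args[0].value)
    return keys
```

This makes the original bug concrete: `pv['water_tank_level'] = 500` is an `ast.Subscript` over the name `pv`, so a name-based walk never attributes it to `physical_values`.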


@ -0,0 +1,220 @@
"""
Integration tests for E2E Bottle Line ControlPlan scenario.
These tests verify:
1. The example control plan compiles to valid Python
2. Generated HIL files have correct structure
3. Validation mode passes
4. Generated code can be parsed by Python's ast module
No Docker or external services required.
"""
import ast
import tempfile
from pathlib import Path
import pytest
from models.control_plan import ControlPlan
from tools.compile_control_plan import compile_control_plan, validate_control_plan
# Path to the example control plan
EXAMPLE_CONTROL_PLAN = Path(__file__).parent.parent / "examples" / "control_plans" / "bottle_line_v0.1.json"
class TestBottleLineControlPlan:
"""Test the bottle_line_v0.1.json control plan."""
def test_example_exists(self):
"""Verify the example control plan file exists."""
assert EXAMPLE_CONTROL_PLAN.exists(), f"Example not found: {EXAMPLE_CONTROL_PLAN}"
def test_loads_as_valid_control_plan(self):
"""Verify the example loads as a valid ControlPlan."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
assert plan.version == "v0.1"
assert len(plan.hils) == 2
assert plan.hils[0].name == "water_hil"
assert plan.hils[1].name == "filler_hil"
def test_validation_passes(self):
"""Verify validation passes with no errors."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
errors = validate_control_plan(plan)
assert errors == [], f"Validation errors: {errors}"
def test_compiles_to_valid_python(self):
"""Verify compilation produces syntactically valid Python."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
# Should have 2 HIL files
assert len(result) == 2
assert "water_hil" in result
assert "filler_hil" in result
# Each should be valid Python
for hil_name, code in result.items():
try:
ast.parse(code)
except SyntaxError as e:
pytest.fail(f"Syntax error in {hil_name}: {e}")
def test_generated_code_has_logic_function(self):
"""Verify generated code contains logic(physical_values) function."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
for hil_name, code in result.items():
assert "def logic(physical_values):" in code, \
f"{hil_name} missing logic(physical_values)"
def test_generated_code_has_while_true(self):
"""Verify generated code contains while True loop."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
for hil_name, code in result.items():
assert "while True:" in code, \
f"{hil_name} missing while True loop"
def test_generated_code_has_time_sleep(self):
"""Verify generated code contains time.sleep calls."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
for hil_name, code in result.items():
assert "time.sleep(" in code, \
f"{hil_name} missing time.sleep"
def test_warmup_delay_included(self):
"""Verify warmup delay is included in generated code."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
# Both HILs have warmup_s: 3.0
for hil_name, code in result.items():
assert "time.sleep(3.0)" in code, \
f"{hil_name} missing warmup delay"
def test_writes_to_temp_directory(self):
"""Verify compiled files can be written to disk."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
with tempfile.TemporaryDirectory() as tmpdir:
tmppath = Path(tmpdir)
for hil_name, code in result.items():
out_file = tmppath / f"{hil_name}.py"
out_file.write_text(code, encoding="utf-8")
assert out_file.exists()
# Read back and verify
content = out_file.read_text(encoding="utf-8")
assert "def logic(physical_values):" in content
def test_water_hil_initializes_tank_level(self):
"""Verify water_hil initializes water_tank_level."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
water_code = result["water_hil"]
assert "physical_values.setdefault('water_tank_level', 500)" in water_code, \
"water_hil should initialize water_tank_level to 500"
def test_filler_hil_initializes_bottle_fill_level(self):
"""Verify filler_hil initializes bottle_fill_level."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
filler_code = result["filler_hil"]
assert "physical_values.setdefault('bottle_fill_level', 0)" in filler_code, \
"filler_hil should initialize bottle_fill_level to 0"
def test_clamp_function_included(self):
"""Verify clamp helper function is included."""
import json
data = json.loads(EXAMPLE_CONTROL_PLAN.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(data)
result = compile_control_plan(plan)
for hil_name, code in result.items():
assert "def clamp(x, lo, hi):" in code, \
f"{hil_name} missing clamp function"
class TestPromptFileExists:
"""Test that the e2e prompt file exists."""
def test_e2e_bottle_prompt_exists(self):
"""Verify prompts/e2e_bottle.txt exists."""
prompt_file = Path(__file__).parent.parent / "prompts" / "e2e_bottle.txt"
assert prompt_file.exists(), f"Prompt not found: {prompt_file}"
def test_e2e_bottle_prompt_has_content(self):
"""Verify prompt file has meaningful content."""
prompt_file = Path(__file__).parent.parent / "prompts" / "e2e_bottle.txt"
content = prompt_file.read_text(encoding="utf-8")
# Should mention the two HILs
assert "water_hil" in content
assert "filler_hil" in content
# Should mention the two PLCs
assert "plc1" in content.lower()
assert "plc2" in content.lower()
class TestE2EScriptExists:
"""Test that the E2E script exists and is executable."""
def test_e2e_script_exists(self):
"""Verify scripts/e2e_bottle_control_plan.sh exists."""
script_file = Path(__file__).parent.parent / "scripts" / "e2e_bottle_control_plan.sh"
assert script_file.exists(), f"Script not found: {script_file}"
def test_e2e_script_is_executable(self):
"""Verify script has executable permission."""
import os
script_file = Path(__file__).parent.parent / "scripts" / "e2e_bottle_control_plan.sh"
assert os.access(script_file, os.X_OK), f"Script not executable: {script_file}"
def test_e2e_script_has_shebang(self):
"""Verify script starts with proper shebang."""
script_file = Path(__file__).parent.parent / "scripts" / "e2e_bottle_control_plan.sh"
content = script_file.read_text(encoding="utf-8")
assert content.startswith("#!/bin/bash"), "Script missing bash shebang"


@ -0,0 +1,244 @@
#!/usr/bin/env python3
"""
Tests for network configuration validation.
Validates that:
1. Duplicate IPs within same docker_network are detected
2. IPs outside declared subnet are detected
3. Unknown docker_network references are detected
4. Valid configs pass validation
"""
import json
import subprocess
import sys
from pathlib import Path
import pytest
from models.ics_simlab_config_v2 import Config, set_strict_mode
from tools.semantic_validation import validate_network_config
FIXTURES_DIR = Path(__file__).parent / "fixtures"
class TestDuplicateIPValidation:
"""Test duplicate IP detection."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_duplicate_ip_detected(self):
"""Duplicate IP on same docker_network should fail."""
fixture_path = FIXTURES_DIR / "config_duplicate_ip.json"
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_network_config(config)
assert len(errors) >= 1
# Should detect ui and hmi1 both using 192.168.100.10
error_str = " ".join(str(e) for e in errors).lower()
assert "duplicate" in error_str
assert "192.168.100.10" in error_str
assert "ui" in error_str
assert "hmi1" in error_str
def test_duplicate_ip_error_message_clarity(self):
"""Error message should list all colliding devices."""
fixture_path = FIXTURES_DIR / "config_duplicate_ip.json"
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_network_config(config)
# Find the duplicate IP error
dup_errors = [e for e in errors if "duplicate" in e.message.lower()]
assert len(dup_errors) == 1
err = dup_errors[0]
# Should mention both colliding devices
assert "ui" in err.message.lower()
assert "hmi1" in err.message.lower()
class TestSubnetValidation:
"""Test subnet validation."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_out_of_subnet_detected(self):
"""IP outside declared subnet should fail."""
fixture_path = FIXTURES_DIR / "config_out_of_subnet_ip.json"
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_network_config(config)
assert len(errors) >= 1
# Should detect plc1 with IP 10.0.0.50 outside 192.168.100.0/24
error_str = " ".join(str(e) for e in errors).lower()
assert "subnet" in error_str or "not within" in error_str
assert "10.0.0.50" in error_str
assert "192.168.100.0/24" in error_str
class TestDockerNetworkValidation:
"""Test docker_network reference validation."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_unknown_docker_network_detected(self):
"""Reference to nonexistent docker_network should fail."""
fixture_path = FIXTURES_DIR / "config_unknown_docker_network.json"
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_network_config(config)
assert len(errors) >= 1
# Should detect plc1 referencing nonexistent_network
error_str = " ".join(str(e) for e in errors).lower()
assert "nonexistent_network" in error_str
assert "not found" in error_str
class TestValidNetworkConfig:
"""Test that valid configs pass validation."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_valid_config_passes(self):
"""Config with valid network settings should pass."""
fixture_path = FIXTURES_DIR / "valid_minimal.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_network_config(config)
assert len(errors) == 0, f"Unexpected errors: {errors}"
def test_unique_ips_pass(self):
"""Config where all devices have unique IPs should pass."""
raw = {
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [],
"registers": {"coil": [], "discrete_input": [], "holding_register": [], "input_register": []},
"monitors": [],
"controllers": []
}],
"sensors": [],
"actuators": [],
"hils": [],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}
config = Config.model_validate(raw)
errors = validate_network_config(config)
assert len(errors) == 0
class TestCheckNetworkingCLI:
"""Test the check_networking CLI tool."""
def test_cli_detects_duplicate_ip(self):
"""CLI should detect duplicate IPs and exit non-zero."""
fixture_path = FIXTURES_DIR / "config_duplicate_ip.json"
result = subprocess.run(
[sys.executable, "-m", "tools.check_networking", "--config", str(fixture_path)],
capture_output=True,
text=True
)
assert result.returncode == 1
assert "duplicate" in result.stdout.lower()
assert "192.168.100.10" in result.stdout
def test_cli_detects_out_of_subnet(self):
"""CLI should detect out-of-subnet IPs and exit non-zero."""
fixture_path = FIXTURES_DIR / "config_out_of_subnet_ip.json"
result = subprocess.run(
[sys.executable, "-m", "tools.check_networking", "--config", str(fixture_path)],
capture_output=True,
text=True
)
assert result.returncode == 1
assert "subnet" in result.stdout.lower() or "not within" in result.stdout.lower()
def test_cli_json_output(self):
"""CLI should support JSON output format."""
fixture_path = FIXTURES_DIR / "config_duplicate_ip.json"
result = subprocess.run(
[sys.executable, "-m", "tools.check_networking", "--config", str(fixture_path), "--json"],
capture_output=True,
text=True
)
assert result.returncode == 1
output = json.loads(result.stdout)
assert output["status"] == "error"
assert len(output["issues"]) >= 1
def test_cli_valid_config_passes(self):
"""CLI should exit zero for valid config."""
fixture_path = FIXTURES_DIR / "valid_minimal.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
result = subprocess.run(
[sys.executable, "-m", "tools.check_networking", "--config", str(fixture_path)],
capture_output=True,
text=True
)
assert result.returncode == 0
assert "ok" in result.stdout.lower()
class TestIntegrationWithBuildConfig:
"""Test that network validation is wired into build_config."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_build_config_fails_on_duplicate_ip(self):
"""build_config should fail on duplicate IP (via validate_all_semantics)."""
from tools.semantic_validation import validate_all_semantics
fixture_path = FIXTURES_DIR / "config_duplicate_ip.json"
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_all_semantics(config)
# Network errors should be in the combined output
network_errors = [e for e in errors if "duplicate" in e.message.lower() or "network" in e.entity.lower()]
assert len(network_errors) >= 1
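The three checks exercised above (duplicate IPs within a docker_network, subnet membership, unknown docker_network references) can be sketched with the stdlib `ipaddress` module. This is an illustrative standalone version working on plain tuples, not the project's `validate_network_config`.

```python
import ipaddress
from collections import defaultdict

def check_networking(devices, networks):
    """devices: [(name, ip, docker_network)]; networks: {docker_name: subnet_cidr}.
    Returns a list of human-readable error strings; empty list means OK."""
    errors = []
    seen = defaultdict(list)  # (docker_network, ip) -> device names
    for name, ip, net in devices:
        if net not in networks:
            errors.append(f"{name}: docker_network '{net}' not found")
            continue
        subnet = ipaddress.ip_network(networks[net])
        if ipaddress.ip_address(ip) not in subnet:
            errors.append(f"{name}: {ip} not within {subnet}")
        seen[(net, ip)].append(name)
    for (net, ip), names in seen.items():
        if len(names) > 1:
            errors.append(f"duplicate IP {ip} on {net}: {', '.join(names)}")
    return errors
```

A CLI wrapper in the spirit of `tools.check_networking` would exit 1 when this list is non-empty and 0 otherwise.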


@ -0,0 +1,614 @@
#!/usr/bin/env python3
"""
Tests for P0 semantic validation: orphan devices, boolean type rules, null stripping.
These tests verify that configuration.json adheres to ICS-SimLab semantic invariants:
- All sensors must be monitored by at least one PLC
- All actuators must be controlled by at least one PLC
- Boolean signals must use coil/discrete_input, not input_register/holding_register
- Null fields should be stripped from output
"""
import json
from pathlib import Path
import pytest
from models.ics_simlab_config_v2 import Config, set_strict_mode
from services.patches import strip_nulls, sanitize_connection_id, patch_sanitize_connection_ids
from tools.semantic_validation import (
validate_orphan_devices,
validate_boolean_type_rules,
validate_all_semantics,
validate_plc_local_register_coherence,
)
from tools.repair_config import repair_plc_local_registers
FIXTURES_DIR = Path(__file__).parent / "fixtures"
class TestStripNulls:
"""Test strip_nulls canonicalization function."""
def test_strip_nulls_removes_none_keys(self):
"""Keys with None values should be removed."""
obj = {"a": 1, "b": None, "c": "hello"}
result = strip_nulls(obj)
assert result == {"a": 1, "c": "hello"}
assert "b" not in result
def test_strip_nulls_nested_dict(self):
"""Nulls should be removed at all nesting levels."""
obj = {
"outer": {
"keep": "value",
"remove": None,
"nested": {
"deep_keep": 42,
"deep_remove": None
}
}
}
result = strip_nulls(obj)
assert result == {
"outer": {
"keep": "value",
"nested": {
"deep_keep": 42
}
}
}
def test_strip_nulls_list_with_none_items(self):
"""None items in lists should be removed."""
obj = {"items": [1, None, 2, None, 3]}
result = strip_nulls(obj)
assert result == {"items": [1, 2, 3]}
def test_strip_nulls_list_of_dicts(self):
"""Dicts inside lists should have their nulls stripped."""
obj = {
"registers": [
{"id": "reg1", "io": None, "physical_value": None},
{"id": "reg2", "io": "input"}
]
}
result = strip_nulls(obj)
assert result == {
"registers": [
{"id": "reg1"},
{"id": "reg2", "io": "input"}
]
}
def test_strip_nulls_preserves_empty_structures(self):
"""Empty dicts and lists should be preserved (they're not None)."""
obj = {"empty_dict": {}, "empty_list": [], "value": "keep"}
result = strip_nulls(obj)
assert result == {"empty_dict": {}, "empty_list": [], "value": "keep"}
def test_strip_nulls_fixture(self):
"""Test on realistic fixture with many nulls."""
fixture_path = FIXTURES_DIR / "config_with_nulls.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
result = strip_nulls(raw)
# Check that plc1.network.port (null) is gone
assert "port" not in result["plcs"][0]["network"]
# Check that plc1.identity (null) is gone
assert "identity" not in result["plcs"][0]
# Check that inbound_connections[0].id (null) is gone
assert "id" not in result["plcs"][0]["inbound_connections"][0]
# Check that registers coil[0] has no physical_value/physical_values
assert "physical_value" not in result["plcs"][0]["registers"]["coil"][0]
assert "physical_values" not in result["plcs"][0]["registers"]["coil"][0]
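The behavior these `strip_nulls` tests pin down can be sketched as a small recursive helper (a hypothetical reimplementation for illustration only — the real function lives in `services/patches.py`):

```python
def strip_nulls_sketch(obj):
    # Recursively drop dict keys whose value is None and None items in lists.
    # Empty dicts/lists are preserved: they are falsy but not None.
    if isinstance(obj, dict):
        return {k: strip_nulls_sketch(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [strip_nulls_sketch(v) for v in obj if v is not None]
    return obj
```

This mirrors the invariant asserted above: nulls vanish at every nesting level, while empty containers survive.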
class TestOrphanDeviceValidation:
"""Test orphan sensor/actuator detection."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_orphan_sensor_detected(self):
"""Sensor not referenced by any PLC monitor should fail."""
fixture_path = FIXTURES_DIR / "orphan_sensor.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_orphan_devices(config)
assert len(errors) == 1
assert "orphan_sensor" in str(errors[0]).lower()
assert "orphan" in str(errors[0]).lower()
def test_orphan_actuator_detected(self):
"""Actuator not referenced by any PLC controller should fail."""
fixture_path = FIXTURES_DIR / "orphan_actuator.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_orphan_devices(config)
assert len(errors) == 1
assert "orphan_actuator" in str(errors[0]).lower()
assert "orphan" in str(errors[0]).lower()
def test_valid_config_no_orphans(self):
"""Config with properly connected devices should pass."""
fixture_path = FIXTURES_DIR / "valid_minimal.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_orphan_devices(config)
assert len(errors) == 0, f"Unexpected errors: {errors}"
class TestBooleanTypeRules:
"""Test boolean signal type validation."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_boolean_in_input_register_detected(self):
"""Boolean signal in input_register should fail."""
fixture_path = FIXTURES_DIR / "boolean_type_wrong.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_boolean_type_rules(config)
assert len(errors) >= 1
# Should detect "bottle_at_filler_output" is a boolean in wrong register type
error_str = str(errors[0]).lower()
assert "boolean" in error_str or "discrete_input" in error_str
def test_valid_config_passes_boolean_check(self):
"""Config with correct boolean types should pass."""
fixture_path = FIXTURES_DIR / "valid_minimal.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_boolean_type_rules(config)
assert len(errors) == 0, f"Unexpected errors: {errors}"
class TestAllSemanticsIntegration:
"""Integration tests for validate_all_semantics."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_valid_config_passes_all_checks(self):
"""A properly configured system should pass all semantic checks."""
fixture_path = FIXTURES_DIR / "valid_minimal.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_all_semantics(config)
assert len(errors) == 0, f"Unexpected errors: {errors}"
def test_multiple_issues_detected(self):
"""Config with multiple issues should report all of them."""
# Create a config with both orphan and boolean issues
raw = {
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [],
"controllers": []
}],
"sensors": [{
"name": "orphan_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "switch_status"}]
}
}],
"actuators": [],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "switch_status", "io": "output"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}
config = Config.model_validate(raw)
errors = validate_all_semantics(config)
# Should have at least 2 errors: orphan sensor + boolean type issue
assert len(errors) >= 2, f"Expected at least 2 errors, got {len(errors)}: {errors}"
class TestSanitizeConnectionId:
"""Test connection ID sanitization function."""
def test_sanitize_lowercase(self):
"""IDs should be lowercased."""
assert sanitize_connection_id("TO_SENSOR") == "to_sensor"
assert sanitize_connection_id("To-Sensor") == "to_sensor"
def test_sanitize_hyphens_to_underscore(self):
"""Hyphens should be converted to underscores."""
assert sanitize_connection_id("to-tank-sensor") == "to_tank_sensor"
assert sanitize_connection_id("To-Tank-Sensor") == "to_tank_sensor"
def test_sanitize_spaces_to_underscore(self):
"""Spaces should be converted to underscores."""
assert sanitize_connection_id("to sensor") == "to_sensor"
assert sanitize_connection_id("to sensor") == "to_sensor"
def test_sanitize_removes_special_chars(self):
"""Special characters should be removed."""
assert sanitize_connection_id("to@sensor!") == "tosensor"
assert sanitize_connection_id("to#$%sensor") == "tosensor"
def test_sanitize_collapses_underscores(self):
"""Multiple underscores should be collapsed."""
assert sanitize_connection_id("to__sensor") == "to_sensor"
assert sanitize_connection_id("to___sensor") == "to_sensor"
def test_sanitize_strips_underscores(self):
"""Leading/trailing underscores should be stripped."""
assert sanitize_connection_id("_to_sensor_") == "to_sensor"
assert sanitize_connection_id("__to_sensor__") == "to_sensor"
def test_sanitize_empty_fallback(self):
"""Empty string should fallback to 'connection'."""
assert sanitize_connection_id("") == "connection"
assert sanitize_connection_id(" ") == "connection"
assert sanitize_connection_id("@#$") == "connection"
def test_sanitize_already_valid(self):
"""Already valid IDs should be unchanged."""
assert sanitize_connection_id("to_sensor") == "to_sensor"
assert sanitize_connection_id("to_sensor1") == "to_sensor1"
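A minimal sketch consistent with the sanitization rules these tests assert (lowercase, hyphens/spaces to underscores, drop other specials, collapse and strip underscores, fall back to `"connection"`); the name is hypothetical and the real implementation is in `services/patches.py`:

```python
import re

def sanitize_connection_id_sketch(raw: str) -> str:
    # Lowercase, then map runs of hyphens/whitespace to a single underscore.
    s = re.sub(r"[-\s]+", "_", raw.lower())
    # Remove any remaining character outside [a-z0-9_].
    s = re.sub(r"[^a-z0-9_]", "", s)
    # Collapse repeated underscores and strip leading/trailing ones.
    s = re.sub(r"_+", "_", s).strip("_")
    # Empty result falls back to a safe default.
    return s or "connection"
```

Each test case above maps directly onto one of these four steps.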
class TestPatchSanitizeConnectionIds:
"""Test connection ID patch function."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_patch_sanitizes_outbound_connections(self):
"""Outbound connection IDs should be sanitized."""
cfg = {
"plcs": [{
"name": "plc1",
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "To-Sensor"},
{"type": "tcp", "ip": "192.168.0.41", "port": 502, "id": "TO_ACTUATOR"}
],
"monitors": [],
"controllers": []
}],
"hmis": []
}
patched, errors = patch_sanitize_connection_ids(cfg)
assert len(errors) == 0
assert patched["plcs"][0]["outbound_connections"][0]["id"] == "to_sensor"
assert patched["plcs"][0]["outbound_connections"][1]["id"] == "to_actuator"
def test_patch_updates_monitor_references(self):
"""Monitor outbound_connection_id references should be updated."""
cfg = {
"plcs": [{
"name": "plc1",
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "To-Sensor"}
],
"monitors": [
{"outbound_connection_id": "To-Sensor", "id": "tank_level", "value_type": "input_register", "address": 100}
],
"controllers": []
}],
"hmis": []
}
patched, errors = patch_sanitize_connection_ids(cfg)
assert len(errors) == 0
assert patched["plcs"][0]["monitors"][0]["outbound_connection_id"] == "to_sensor"
def test_patch_updates_controller_references(self):
"""Controller outbound_connection_id references should be updated."""
cfg = {
"plcs": [{
"name": "plc1",
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.41", "port": 502, "id": "TO_ACTUATOR"}
],
"monitors": [],
"controllers": [
{"outbound_connection_id": "TO_ACTUATOR", "id": "valve_cmd", "value_type": "coil", "address": 500}
]
}],
"hmis": []
}
patched, errors = patch_sanitize_connection_ids(cfg)
assert len(errors) == 0
assert patched["plcs"][0]["controllers"][0]["outbound_connection_id"] == "to_actuator"
def test_patch_from_fixture(self):
"""Test sanitization from fixture file."""
fixture_path = FIXTURES_DIR / "unsanitized_connection_ids.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
patched, errors = patch_sanitize_connection_ids(raw)
assert len(errors) == 0
# Check outbound_connection IDs are sanitized
assert patched["plcs"][0]["outbound_connections"][0]["id"] == "to_tank_sensor"
assert patched["plcs"][0]["outbound_connections"][1]["id"] == "to_valve_actuator"
# Check monitor/controller references are updated
assert patched["plcs"][0]["monitors"][0]["outbound_connection_id"] == "to_tank_sensor"
assert patched["plcs"][0]["controllers"][0]["outbound_connection_id"] == "to_valve_actuator"
class TestPlcLocalRegisterCoherence:
"""Test PLC local register coherence validation."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_missing_monitor_register_detected(self):
"""Missing local register for monitor should fail."""
fixture_path = FIXTURES_DIR / "plc_missing_local_registers.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_plc_local_register_coherence(config)
# Should detect missing registers for both monitor and controller
assert len(errors) >= 2
error_str = " ".join(str(e) for e in errors).lower()
assert "tank_level" in error_str
assert "valve_cmd" in error_str
def test_valid_config_passes_coherence(self):
"""Config with proper local registers should pass."""
fixture_path = FIXTURES_DIR / "valid_minimal.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw)
errors = validate_plc_local_register_coherence(config)
assert len(errors) == 0, f"Unexpected errors: {errors}"
def test_wrong_io_direction_detected(self):
"""Register with wrong io direction should fail."""
raw = {
"ui": {"network": {"ip": "192.168.0.1", "port": 5000, "docker_network": "vlan1"}},
"hmis": [],
"plcs": [{
"name": "plc1",
"logic": "plc1.py",
"network": {"ip": "192.168.0.21", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.21", "port": 502}],
"outbound_connections": [
{"type": "tcp", "ip": "192.168.0.31", "port": 502, "id": "to_sensor"}
],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "id": "tank_level", "io": "output"}]
},
"monitors": [
{"outbound_connection_id": "to_sensor", "id": "tank_level", "value_type": "input_register", "slave_id": 1, "address": 100, "count": 1, "interval": 0.5}
],
"controllers": []
}],
"sensors": [{
"name": "tank_sensor",
"hil": "hil1",
"network": {"ip": "192.168.0.31", "docker_network": "vlan1"},
"inbound_connections": [{"type": "tcp", "ip": "192.168.0.31", "port": 502}],
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "physical_value": "tank_level"}]
}
}],
"actuators": [],
"hils": [{"name": "hil1", "logic": "hil1.py", "physical_values": [{"name": "tank_level", "io": "output"}]}],
"serial_networks": [],
"ip_networks": [{"docker_name": "vlan1", "name": "vlan1", "subnet": "192.168.0.0/24"}]
}
config = Config.model_validate(raw)
errors = validate_plc_local_register_coherence(config)
assert len(errors) == 1
assert "io mismatch" in str(errors[0]).lower()
assert "output" in str(errors[0]).lower()
class TestRepairPlcLocalRegisters:
"""Test PLC local register repair function."""
@pytest.fixture(autouse=True)
def reset_strict_mode(self):
"""Reset strict mode before each test."""
set_strict_mode(False)
yield
set_strict_mode(False)
def test_repair_creates_monitor_register(self):
"""Repair should create missing register for monitor."""
cfg = {
"plcs": [{
"name": "plc1",
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [
{"outbound_connection_id": "to_sensor", "id": "tank_level", "value_type": "input_register", "address": 100, "count": 1}
],
"controllers": []
}]
}
repaired, actions = repair_plc_local_registers(cfg)
assert len(actions) == 1
assert "tank_level" in str(actions[0])
assert "io='input'" in str(actions[0])
# Check register was created
input_regs = repaired["plcs"][0]["registers"]["input_register"]
assert len(input_regs) == 1
assert input_regs[0]["id"] == "tank_level"
assert input_regs[0]["io"] == "input"
assert input_regs[0]["address"] == 100
def test_repair_creates_controller_register(self):
"""Repair should create missing register for controller."""
cfg = {
"plcs": [{
"name": "plc1",
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": []
},
"monitors": [],
"controllers": [
{"outbound_connection_id": "to_actuator", "id": "valve_cmd", "value_type": "coil", "address": 500, "count": 1}
]
}]
}
repaired, actions = repair_plc_local_registers(cfg)
assert len(actions) == 1
assert "valve_cmd" in str(actions[0])
assert "io='output'" in str(actions[0])
# Check register was created
coil_regs = repaired["plcs"][0]["registers"]["coil"]
assert len(coil_regs) == 1
assert coil_regs[0]["id"] == "valve_cmd"
assert coil_regs[0]["io"] == "output"
assert coil_regs[0]["address"] == 500
def test_repair_does_not_duplicate_existing(self):
"""Repair should not duplicate existing registers."""
cfg = {
"plcs": [{
"name": "plc1",
"registers": {
"coil": [],
"discrete_input": [],
"holding_register": [],
"input_register": [{"address": 100, "count": 1, "id": "tank_level", "io": "input"}]
},
"monitors": [
{"outbound_connection_id": "to_sensor", "id": "tank_level", "value_type": "input_register", "address": 100, "count": 1}
],
"controllers": []
}]
}
repaired, actions = repair_plc_local_registers(cfg)
assert len(actions) == 0
# Register count should be unchanged
assert len(repaired["plcs"][0]["registers"]["input_register"]) == 1
def test_repair_from_fixture(self):
"""Test repair from fixture file."""
fixture_path = FIXTURES_DIR / "plc_missing_local_registers.json"
if not fixture_path.exists():
pytest.skip(f"Fixture not found: {fixture_path}")
raw = json.loads(fixture_path.read_text(encoding="utf-8"))
repaired, actions = repair_plc_local_registers(raw)
# Should create 2 registers (1 for monitor, 1 for controller)
assert len(actions) == 2
# Verify input_register for monitor
input_regs = repaired["plcs"][0]["registers"]["input_register"]
tank_reg = [r for r in input_regs if r.get("id") == "tank_level"]
assert len(tank_reg) == 1
assert tank_reg[0]["io"] == "input"
# Verify coil for controller
coil_regs = repaired["plcs"][0]["registers"]["coil"]
valve_reg = [r for r in coil_regs if r.get("id") == "valve_cmd"]
assert len(valve_reg) == 1
assert valve_reg[0]["io"] == "output"
# Verify repaired config passes validation
config = Config.model_validate(repaired)
errors = validate_plc_local_register_coherence(config)
assert len(errors) == 0, f"Repair did not fix issues: {errors}"


@@ -33,7 +33,9 @@ from typing import Any, Dict
from models.ics_simlab_config_v2 import Config, set_strict_mode
from tools.enrich_config import enrich_plc_connections, enrich_hmi_connections
from tools.semantic_validation import validate_hmi_semantics, SemanticError
from tools.semantic_validation import validate_all_semantics, SemanticError
from tools.repair_config import repair_orphan_devices, repair_boolean_types, repair_plc_local_registers, repair_hmi_controller_registers, repair_target_device_registers
from services.patches import patch_sanitize_connection_ids
# Configure logging
logging.basicConfig(
@@ -70,8 +72,13 @@ def load_and_normalize(raw_path: Path) -> Config:
def config_to_dict(cfg: Config) -> Dict[str, Any]:
"""Convert Pydantic model to dict for JSON serialization."""
return cfg.model_dump(mode="json", exclude_none=False)
"""Convert Pydantic model to dict for JSON serialization.
Uses exclude_none=True to remove null values, which prevents
ICS-SimLab runtime errors like 'identity': None causing
TypeError when PLC code checks 'if "identity" in configs'.
"""
return cfg.model_dump(mode="json", exclude_none=True)
def main() -> None:
@@ -108,6 +115,11 @@ def main() -> None:
action="store_true",
help="Output semantic errors as JSON to stdout (for programmatic use)"
)
parser.add_argument(
"--repair",
action="store_true",
help="Auto-repair orphan devices and boolean type issues"
)
args = parser.parse_args()
config_path = Path(args.config)
@@ -159,6 +171,15 @@ def main() -> None:
enriched_dict = enrich_plc_connections(dict(config_dict))
enriched_dict = enrich_hmi_connections(enriched_dict)
# Sanitize connection IDs to docker-safe format [a-z0-9_]
print()
print(" Sanitizing connection IDs...")
enriched_dict, conn_id_errors = patch_sanitize_connection_ids(enriched_dict)
if conn_id_errors:
for err in conn_id_errors:
logger.warning(f"Connection ID patch error: {err}")
print(" Connection IDs sanitized: OK")
# Re-validate enriched config with Pydantic
print()
print(" Re-validating enriched config...")
@@ -169,15 +190,106 @@ def main() -> None:
raise SystemExit(f"ERROR: Enriched config failed Pydantic validation:\n{e}")
# =========================================================================
# Step 3: Semantic validation
# Step 3: Repair (optional)
# =========================================================================
if args.repair:
all_repair_actions = []
# Step 3a: Repair orphan devices
print()
print("=" * 60)
print("Step 3a: Repairing orphan devices")
print("=" * 60)
enriched_dict, orphan_actions = repair_orphan_devices(enriched_dict)
all_repair_actions.extend(orphan_actions)
if orphan_actions:
for action in orphan_actions:
print(f" REPAIRED: {action}")
else:
print(" No orphan devices found")
# Step 3b: Repair boolean types
print()
print("=" * 60)
print("Step 3b: Repairing boolean register types")
print("=" * 60)
enriched_dict, boolean_actions = repair_boolean_types(enriched_dict)
all_repair_actions.extend(boolean_actions)
if boolean_actions:
for action in boolean_actions:
print(f" REPAIRED: {action}")
else:
print(" No boolean type issues found")
# Step 3c: Repair PLC local register coherence
print()
print("=" * 60)
print("Step 3c: Repairing PLC local register coherence")
print("=" * 60)
enriched_dict, local_reg_actions = repair_plc_local_registers(enriched_dict)
all_repair_actions.extend(local_reg_actions)
if local_reg_actions:
for action in local_reg_actions:
print(f" REPAIRED: {action}")
else:
print(" No PLC local register issues found")
# Step 3d: Repair HMI controller registers
print()
print("=" * 60)
print("Step 3d: Repairing HMI controller registers")
print("=" * 60)
enriched_dict, hmi_ctrl_actions = repair_hmi_controller_registers(enriched_dict)
all_repair_actions.extend(hmi_ctrl_actions)
if hmi_ctrl_actions:
for action in hmi_ctrl_actions:
print(f" REPAIRED: {action}")
else:
print(" No HMI controller register issues found")
# Step 3e: Repair target device registers (actuators, sensors, PLCs)
print()
print("=" * 60)
print("Step 3e: Repairing target device registers")
print("=" * 60)
enriched_dict, target_reg_actions = repair_target_device_registers(enriched_dict)
all_repair_actions.extend(target_reg_actions)
if target_reg_actions:
for action in target_reg_actions:
print(f" REPAIRED: {action}")
else:
print(" No target device register issues found")
# Re-validate after all repairs
if all_repair_actions:
print()
print(" Re-validating after repairs...")
try:
enriched_config = Config.model_validate(enriched_dict)
print(" Post-repair validation: OK")
except Exception as e:
raise SystemExit(f"ERROR: Repair produced invalid config:\n{e}")
# =========================================================================
# Step 4: Semantic validation (P0 checks)
# =========================================================================
if not args.skip_semantic:
print()
print("=" * 60)
print("Step 3: Semantic validation")
print("Step 4: Semantic validation (P0 checks)")
print("=" * 60)
errors = validate_hmi_semantics(enriched_config)
errors = validate_all_semantics(enriched_config)
if errors:
if args.json_errors:
@@ -193,22 +305,26 @@ def main() -> None:
print()
raise SystemExit(
f"ERROR: Semantic validation failed with {len(errors)} error(s). "
f"Fix the configuration and retry."
f"Fix the configuration and retry, or use --repair to auto-fix orphans."
)
else:
print(" HMI monitors/controllers: OK")
print(" PLC monitors/controllers: OK")
print(" Orphan devices: OK")
print(" Boolean type rules: OK")
print(" PLC local register coherence: OK")
else:
print()
print("=" * 60)
print("Step 3: Semantic validation (SKIPPED)")
print("Step 4: Semantic validation (SKIPPED)")
print("=" * 60)
# =========================================================================
# Step 4: Write final configuration
# Step 5: Write final configuration
# =========================================================================
print()
print("=" * 60)
print("Step 4: Writing configuration.json")
print("Step 5: Writing configuration.json")
print("=" * 60)
final_dict = config_to_dict(enriched_config)

tools/check_networking.py Normal file

@@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Network configuration validator for ICS-SimLab.
Checks for common network configuration issues that cause docker-compose failures:
1. Duplicate IPs within the same docker_network ("Address already in use")
2. docker_network not declared in ip_networks[]
3. IP address outside the declared subnet
Usage:
python3 -m tools.check_networking --config <path> [--strict]
Exit codes:
0: No issues found
1: Issues found (or --strict and warnings exist)
2: Configuration file error
"""
import argparse
import json
import sys
from pathlib import Path
from typing import List
from models.ics_simlab_config_v2 import Config
from tools.semantic_validation import validate_network_config, SemanticError
def format_issues(errors: List[SemanticError]) -> str:
"""Format errors for human-readable output."""
lines = []
for err in errors:
lines.append(f" - {err.entity}: {err.message}")
return "\n".join(lines)
def main() -> int:
parser = argparse.ArgumentParser(
description="Validate ICS-SimLab network configuration"
)
parser.add_argument(
"--config",
required=True,
help="Path to configuration.json"
)
parser.add_argument(
"--strict",
action="store_true",
help="Exit non-zero on any issue (not just errors)"
)
parser.add_argument(
"--json",
action="store_true",
help="Output in JSON format"
)
args = parser.parse_args()
config_path = Path(args.config)
if not config_path.exists():
print(f"ERROR: Config file not found: {config_path}", file=sys.stderr)
return 2
# Load and validate config
try:
raw_data = json.loads(config_path.read_text(encoding="utf-8"))
config = Config.model_validate(raw_data)
except json.JSONDecodeError as e:
print(f"ERROR: Invalid JSON in {config_path}: {e}", file=sys.stderr)
return 2
except Exception as e:
print(f"ERROR: Config validation failed: {e}", file=sys.stderr)
return 2
# Run network validation
errors = validate_network_config(config)
if args.json:
output = {
"config": str(config_path),
"issues": [{"entity": e.entity, "message": e.message} for e in errors],
"status": "error" if errors else "ok"
}
print(json.dumps(output, indent=2))
else:
if errors:
print(f"NETWORK VALIDATION ISSUES ({len(errors)}):")
print(format_issues(errors))
print()
print("FIX: Each device must have a unique IP within its docker_network.")
print(" Check for copy-paste errors or IP assignment overlap.")
else:
print(f"OK: Network configuration valid ({config_path})")
# Return appropriate exit code
if errors:
return 1
return 0
if __name__ == "__main__":
sys.exit(main())
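The first check the docstring names — duplicate IPs within the same docker_network, which docker-compose reports as "Address already in use" — can be sketched as a standalone helper. This is an illustrative sketch, not the project's implementation (the real logic is in `tools.semantic_validation.validate_network_config`), and it assumes the device/`network` layout used by the configs elsewhere in this commit:

```python
from collections import defaultdict

def find_duplicate_ips(cfg: dict) -> list:
    # Collect (docker_network, ip) pairs from every addressable device
    # section and report any pair claimed by more than one device.
    seen = defaultdict(list)
    for section in ("plcs", "hmis", "sensors", "actuators"):
        for dev in cfg.get(section, []):
            net = dev.get("network", {})
            key = (net.get("docker_network"), net.get("ip"))
            if key[0] and key[1]:
                seen[key].append(dev.get("name", "?"))
    return [(net, ip, names) for (net, ip), names in seen.items() if len(names) > 1]
```

A config where a PLC and a sensor both claim 192.168.0.21 on vlan1 would yield one entry listing both device names.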


@@ -0,0 +1,590 @@
#!/usr/bin/env python3
"""
Compile control_plan.json into deterministic HIL logic files.
Input: control_plan.json (ControlPlan schema)
Output: Python HIL logic files (one per HIL in the plan)
Usage:
python3 -m tools.compile_control_plan \
--control-plan outputs/control_plan.json \
--out outputs/scenario_run/logic
With validation against config:
python3 -m tools.compile_control_plan \
--control-plan outputs/control_plan.json \
--out outputs/scenario_run/logic \
--config outputs/configuration.json
Validation only (no file generation):
python3 -m tools.compile_control_plan \
--control-plan outputs/control_plan.json \
--validate-only
The generated HIL logic follows the ICS-SimLab contract:
def logic(physical_values):
# Initialize from plan.init
# Optional warmup sleep
# while True: run tasks (threaded if >1)
"""
from __future__ import annotations
import argparse
import json
import random
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple, Union
from models.control_plan import (
Action,
AddAction,
ControlPlan,
ControlPlanHIL,
GaussianProfile,
IfAction,
LoopTask,
PlaybackTask,
RampProfile,
SetAction,
StepProfile,
Task,
)
from tools.safe_eval import (
UnsafeExpressionError,
extract_variable_names,
generate_python_code,
validate_expression,
)
class CompilationError(Exception):
"""Raised when control plan compilation fails."""
pass
class ValidationError(Exception):
"""Raised when control plan validation fails."""
pass
def get_hil_physical_values_keys(config: dict, hil_name: str) -> Tuple[Set[str], Set[str]]:
"""
Extract (input_keys, output_keys) for a specific HIL from config.
Returns tuple of (set of input keys, set of output keys).
"""
input_keys: Set[str] = set()
output_keys: Set[str] = set()
for hil in config.get("hils", []):
if hil.get("name") == hil_name:
for pv in hil.get("physical_values", []):
key = pv.get("name")
io = pv.get("io")
if key:
if io == "input":
input_keys.add(key)
elif io == "output":
output_keys.add(key)
break
return input_keys, output_keys
def validate_control_plan(
plan: ControlPlan,
config: Optional[dict] = None
) -> List[str]:
"""
Validate a control plan for errors.
Checks:
1. All expressions are syntactically valid and safe
2. All variables in expressions exist in init/params or config physical_values
3. All set/add targets are valid output keys (if config provided)
Returns:
List of error messages (empty if valid)
"""
errors: List[str] = []
for hil in plan.hils:
hil_name = hil.name
# Build available namespace: init + params
available_vars: Set[str] = set(hil.init.keys())
if hil.params:
available_vars.update(hil.params.keys())
# Add config physical_values if provided
config_input_keys: Set[str] = set()
config_output_keys: Set[str] = set()
if config:
config_input_keys, config_output_keys = get_hil_physical_values_keys(config, hil_name)
available_vars.update(config_input_keys)
available_vars.update(config_output_keys)
# Validate tasks
for task in hil.tasks:
if isinstance(task, LoopTask):
# Validate all actions in the loop
action_errors = _validate_actions(
task.actions,
available_vars,
config_output_keys if config else None,
hil_name,
task.name
)
errors.extend(action_errors)
elif isinstance(task, PlaybackTask):
# Validate target variable
target = task.target
if target not in available_vars:
errors.append(
f"[{hil_name}/{task.name}] Playback target '{target}' not defined in init/params/physical_values"
)
if config and config_output_keys and target not in config_output_keys:
errors.append(
f"[{hil_name}/{task.name}] Playback target '{target}' is not an output in config"
)
return errors
def _validate_actions(
actions: List[Action],
available_vars: Set[str],
output_keys: Optional[Set[str]],
hil_name: str,
task_name: str
) -> List[str]:
"""Validate a list of actions recursively."""
errors: List[str] = []
prefix = f"[{hil_name}/{task_name}]"
for action in actions:
if isinstance(action, SetAction):
var, expr = action.set
# Validate expression
try:
validate_expression(expr)
except (SyntaxError, UnsafeExpressionError) as e:
errors.append(f"{prefix} Invalid expression in set({var}): {e}")
continue
# Check referenced variables exist
refs = extract_variable_names(expr)
undefined = refs - available_vars
if undefined:
errors.append(f"{prefix} Undefined variables in set({var}): {undefined}")
# Check target is writable
if output_keys is not None and var not in output_keys and var not in available_vars:
errors.append(f"{prefix} set target '{var}' is not defined in init/params/outputs")
elif isinstance(action, AddAction):
var, expr = action.add
# Validate expression
try:
validate_expression(expr)
except (SyntaxError, UnsafeExpressionError) as e:
errors.append(f"{prefix} Invalid expression in add({var}): {e}")
continue
# Check referenced variables exist
refs = extract_variable_names(expr)
undefined = refs - available_vars
if undefined:
errors.append(f"{prefix} Undefined variables in add({var}): {undefined}")
# Check target is writable
if output_keys is not None and var not in output_keys and var not in available_vars:
errors.append(f"{prefix} add target '{var}' is not defined in init/params/outputs")
elif isinstance(action, IfAction):
cond = action.if_
# Validate condition
try:
validate_expression(cond)
except (SyntaxError, UnsafeExpressionError) as e:
errors.append(f"{prefix} Invalid condition: {e}")
continue
# Check referenced variables
refs = extract_variable_names(cond)
undefined = refs - available_vars
if undefined:
errors.append(f"{prefix} Undefined variables in condition: {undefined}")
# Recursively validate then/else actions
errors.extend(_validate_actions(action.then, available_vars, output_keys, hil_name, task_name))
if action.else_:
errors.extend(_validate_actions(action.else_, available_vars, output_keys, hil_name, task_name))
return errors
def _indent(code: str, level: int = 1) -> str:
"""Indent code by the given number of levels (4 spaces each)."""
prefix = " " * level
return "\n".join(prefix + line if line else line for line in code.split("\n"))
def _compile_action(action: Action, indent_level: int, pv_var: str = "pv") -> str:
"""Compile a single action to Python code."""
lines: List[str] = []
indent = " " * indent_level
if isinstance(action, SetAction):
var, expr = action.set
py_expr = generate_python_code(expr, pv_var)
lines.append(f"{indent}{pv_var}['{var}'] = {py_expr}")
elif isinstance(action, AddAction):
var, expr = action.add
py_expr = generate_python_code(expr, pv_var)
lines.append(f"{indent}{pv_var}['{var}'] = {pv_var}.get('{var}', 0) + ({py_expr})")
elif isinstance(action, IfAction):
cond = action.if_
py_cond = generate_python_code(cond, pv_var)
lines.append(f"{indent}if {py_cond}:")
for a in action.then:
lines.append(_compile_action(a, indent_level + 1, pv_var))
if action.else_:
lines.append(f"{indent}else:")
for a in action.else_:
lines.append(_compile_action(a, indent_level + 1, pv_var))
return "\n".join(lines)
def _compile_loop_task(task: LoopTask, pv_var: str = "pv") -> str:
"""Compile a loop task to a function definition."""
lines: List[str] = []
func_name = f"_task_{task.name.replace('-', '_').replace(' ', '_')}"
lines.append(f"def {func_name}({pv_var}):")
lines.append(f' """Loop task: {task.name} (dt={task.dt_s}s)"""')
lines.append(" while True:")
# Compile actions
for action in task.actions:
lines.append(_compile_action(action, 2, pv_var))
lines.append(f" time.sleep({task.dt_s})")
return "\n".join(lines)
def _compile_playback_task(task: PlaybackTask, pv_var: str = "pv") -> str:
"""Compile a playback task to a function definition."""
lines: List[str] = []
func_name = f"_task_{task.name.replace('-', '_').replace(' ', '_')}"
profile = task.profile
lines.append(f"def {func_name}({pv_var}):")
lines.append(f' """Playback task: {task.name} (dt={task.dt_s}s)"""')
# Generate profile data
if isinstance(profile, GaussianProfile):
lines.append(f" # Gaussian profile: height={profile.height}, mean={profile.mean}, std={profile.std}, entries={profile.entries}")
lines.append(f" _profile_height = {profile.height}")
lines.append(f" _profile_mean = {profile.mean}")
lines.append(f" _profile_std = {profile.std}")
lines.append(f" _profile_entries = {profile.entries}")
lines.append(" _profile_idx = 0")
lines.append(" while True:")
lines.append(" _value = _profile_height + random.gauss(_profile_mean, _profile_std)")
lines.append(f" {pv_var}['{task.target}'] = _value")
lines.append(f" time.sleep({task.dt_s})")
if not task.repeat:
lines.append(" _profile_idx += 1")
lines.append(" if _profile_idx >= _profile_entries:")
lines.append(" break")
elif isinstance(profile, RampProfile):
lines.append(f" # Ramp profile: start={profile.start}, end={profile.end}, entries={profile.entries}")
lines.append(f" _profile_start = {profile.start}")
lines.append(f" _profile_end = {profile.end}")
lines.append(f" _profile_entries = {profile.entries}")
lines.append(" _profile_idx = 0")
lines.append(" while True:")
lines.append(" _t = _profile_idx / max(1, _profile_entries - 1)")
lines.append(" _value = _profile_start + (_profile_end - _profile_start) * _t")
lines.append(f" {pv_var}['{task.target}'] = _value")
lines.append(f" time.sleep({task.dt_s})")
lines.append(" _profile_idx += 1")
if task.repeat:
lines.append(" if _profile_idx >= _profile_entries:")
lines.append(" _profile_idx = 0")
else:
lines.append(" if _profile_idx >= _profile_entries:")
lines.append(" break")
elif isinstance(profile, StepProfile):
values_str = repr(profile.values)
lines.append(f" # Step profile: values={values_str}")
lines.append(f" _profile_values = {values_str}")
lines.append(" _profile_idx = 0")
lines.append(" while True:")
lines.append(" _value = _profile_values[_profile_idx % len(_profile_values)]")
lines.append(f" {pv_var}['{task.target}'] = _value")
lines.append(f" time.sleep({task.dt_s})")
lines.append(" _profile_idx += 1")
if not task.repeat:
lines.append(" if _profile_idx >= len(_profile_values):")
lines.append(" break")
return "\n".join(lines)
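As a sanity check, the value sequence the compiled ramp task writes into `pv[target]` can be reproduced with this standalone sketch (sleeps omitted; the `cycles` cap is only an illustrative way to bound the repeating case):

```python
def ramp_values(start, end, entries, repeat=False, cycles=1):
    # Mirrors the emitted loop: t sweeps 0..1 over `entries` steps and the
    # value is linearly interpolated; repeat restarts the index at the end.
    values = []
    idx = 0
    total = entries * cycles if repeat else entries
    while len(values) < total:
        t = idx / max(1, entries - 1)
        values.append(start + (end - start) * t)
        idx += 1
        if repeat and idx >= entries:
            idx = 0
    return values

print(ramp_values(0.0, 10.0, 5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```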
def compile_hil(hil: ControlPlanHIL, config_physical_values: Optional[Set[str]] = None) -> str:
"""
Compile a single HIL control plan to Python code.
Args:
hil: The HIL control plan to compile
config_physical_values: Optional set of physical_value keys from config.
If provided, ensures ALL keys are initialized (not just plan.init keys).
Keys in plan.init use their init value; others use 0 as default.
Returns:
Python code string for the HIL logic file
"""
lines: List[str] = []
# Header
lines.append('"""')
lines.append(f"HIL logic for {hil.name}: ControlPlan v0.1 compiled.")
lines.append("")
lines.append("Autogenerated by ics-simlab-config-gen (compile_control_plan).")
lines.append("DO NOT EDIT - regenerate from control_plan.json instead.")
lines.append('"""')
lines.append("")
# Imports
lines.append("import random")
lines.append("import time")
if len(hil.tasks) > 1:
lines.append("import threading")
lines.append("")
# Helper: clamp function
lines.append("")
lines.append("def clamp(x, lo, hi):")
lines.append(' """Clamp x to [lo, hi]."""')
lines.append(" return lo if x < lo else hi if x > hi else x")
lines.append("")
# Compile each task to a function
for task in hil.tasks:
lines.append("")
if isinstance(task, LoopTask):
lines.append(_compile_loop_task(task, "pv"))
elif isinstance(task, PlaybackTask):
lines.append(_compile_playback_task(task, "pv"))
lines.append("")
# Main logic function
lines.append("")
lines.append("def logic(physical_values):")
lines.append(' """')
lines.append(f" HIL logic entry point for {hil.name}.")
lines.append("")
lines.append(" ICS-SimLab calls this once and expects it to run forever.")
lines.append(' """')
# === CRITICAL: Initialize physical_values BEFORE any alias or threads ===
# Use setdefault directly on physical_values (not an alias) so that
# tools.validate_logic --check-hil-init can detect initialization via AST.
lines.append("")
lines.append(" # === Initialize physical values (validator-compatible) ===")
# Determine all keys that need initialization
all_keys_to_init: Set[str] = set(hil.init.keys())
if config_physical_values:
all_keys_to_init = all_keys_to_init | config_physical_values
# Emit setdefault for ALL keys, using plan.init value if available, else 0
for key in sorted(all_keys_to_init):
if key in hil.init:
value = hil.init[key]
if isinstance(value, bool):
py_val = "True" if value else "False"
else:
py_val = repr(value)
else:
# Key from config not in plan.init - use 0 as default
py_val = "0"
lines.append(f" physical_values.setdefault('{key}', {py_val})")
lines.append("")
# Now create alias for rest of generated code
lines.append(" pv = physical_values # Alias for generated code")
lines.append("")
# Params are seeded into pv as read-only constants (the generated tasks read them back via pv)
if hil.params:
lines.append(" # === Parameters (read-only constants) ===")
for key, value in hil.params.items():
if isinstance(value, bool):
py_val = "True" if value else "False"
else:
py_val = repr(value)
lines.append(f" pv['{key}'] = {py_val} # param")
lines.append("")
# Warmup sleep
if hil.warmup_s:
lines.append("    # === Warmup delay ===")
lines.append(f" time.sleep({hil.warmup_s})")
lines.append("")
# Start tasks
if len(hil.tasks) == 1:
# Single task: just call it directly
task = hil.tasks[0]
func_name = f"_task_{task.name.replace('-', '_').replace(' ', '_')}"
lines.append(f" # === Run single task ===")
lines.append(f" {func_name}(pv)")
else:
# Multiple tasks: use threading
lines.append(" # === Start tasks in threads ===")
lines.append(" threads = []")
for task in hil.tasks:
func_name = f"_task_{task.name.replace('-', '_').replace(' ', '_')}"
lines.append(f" t = threading.Thread(target={func_name}, args=(pv,), daemon=True)")
lines.append(" t.start()")
lines.append(" threads.append(t)")
lines.append("")
lines.append(" # Wait for all threads (they run forever)")
lines.append(" for t in threads:")
lines.append(" t.join()")
lines.append("")
return "\n".join(lines)
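The init block emitted at the top of `logic()` reduces to this pattern (toy sketch with illustrative names): `setdefault` preserves values already present in `physical_values` when `logic()` starts, and fills every remaining key from `plan.init` or with 0.

```python
physical_values = {"tank_level": 42.0}   # pre-populated before logic() runs
plan_init = {"tank_level": 0.0, "pump_on": False}
config_keys = {"tank_level", "pump_on", "valve_open"}  # keys from configuration.json

# Union of plan.init keys and config keys, sorted for deterministic output,
# exactly like the generated setdefault lines above.
for key in sorted(plan_init.keys() | config_keys):
    physical_values.setdefault(key, plan_init.get(key, 0))

print(physical_values)  # {'tank_level': 42.0, 'pump_on': False, 'valve_open': 0}
```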
def compile_control_plan(
plan: ControlPlan,
config: Optional[dict] = None,
) -> Dict[str, str]:
"""
Compile a control plan to HIL logic files.
Args:
plan: The ControlPlan to compile
config: Optional configuration.json dict. If provided, ensures ALL
physical_values declared in config for each HIL are initialized.
Returns:
Dict mapping HIL name to Python code string
"""
result: Dict[str, str] = {}
for hil in plan.hils:
# Extract config physical_values for this HIL if config provided
config_pv: Optional[Set[str]] = None
if config:
input_keys, output_keys = get_hil_physical_values_keys(config, hil.name)
config_pv = input_keys | output_keys
result[hil.name] = compile_hil(hil, config_physical_values=config_pv)
return result
def main() -> None:
parser = argparse.ArgumentParser(
description="Compile control_plan.json into HIL logic Python files"
)
parser.add_argument(
"--control-plan",
required=True,
help="Path to control_plan.json",
)
parser.add_argument(
"--out",
default=None,
help="Output directory for HIL logic .py files",
)
parser.add_argument(
"--config",
default=None,
help="Path to configuration.json (for validation)",
)
parser.add_argument(
"--validate-only",
action="store_true",
help="Only validate, don't generate files",
)
parser.add_argument(
"--overwrite",
action="store_true",
help="Overwrite existing output files",
)
args = parser.parse_args()
plan_path = Path(args.control_plan)
out_dir = Path(args.out) if args.out else None
config_path = Path(args.config) if args.config else None
if not plan_path.exists():
raise SystemExit(f"Control plan not found: {plan_path}")
if config_path and not config_path.exists():
raise SystemExit(f"Config file not found: {config_path}")
if not args.validate_only and not out_dir:
raise SystemExit("--out is required unless using --validate-only")
# Load plan
plan_dict = json.loads(plan_path.read_text(encoding="utf-8"))
plan = ControlPlan.model_validate(plan_dict)
print(f"Loaded control plan: {plan_path}")
print(f" Version: {plan.version}")
print(f" HILs: {[h.name for h in plan.hils]}")
# Load config for validation
config: Optional[dict] = None
if config_path:
config = json.loads(config_path.read_text(encoding="utf-8"))
print(f" Config: {config_path}")
# Validate
errors = validate_control_plan(plan, config)
if errors:
print(f"\nValidation FAILED ({len(errors)} errors):")
for err in errors:
print(f" - {err}")
raise SystemExit(1)
else:
print(" Validation: OK")
if args.validate_only:
print("\nValidation only mode, no files generated.")
return
# Compile (pass config to ensure all physical_values are initialized)
hil_code = compile_control_plan(plan, config=config)
# Write output files
assert out_dir is not None
out_dir.mkdir(parents=True, exist_ok=True)
for hil_name, code in hil_code.items():
# Use hil_name as filename (sanitized)
safe_name = hil_name.replace(" ", "_").replace("-", "_")
out_file = out_dir / f"{safe_name}.py"
if out_file.exists() and not args.overwrite:
raise SystemExit(f"Output file exists: {out_file} (use --overwrite)")
out_file.write_text(code, encoding="utf-8")
print(f"Wrote: {out_file}")
print(f"\nCompiled {len(hil_code)} HIL logic file(s) to {out_dir}")
if __name__ == "__main__":
main()

--- tools/debug_semantics.py (new file, 264 lines) ---
#!/usr/bin/env python3
"""
Debug tool for semantic validation issues.
Prints a wiring summary showing:
- Orphan sensors and actuators
- Missing PLC registers for monitors/controllers
- IO mismatches
- HMI controller targets
Usage:
python3 -m tools.debug_semantics --config outputs/configuration_raw.json
"""
import argparse
import json
from pathlib import Path
from typing import Any, Dict, List, Set
from models.ics_simlab_config_v2 import Config
def debug_semantics(config_path: Path) -> None:
"""Print a wiring summary for debugging semantic issues."""
with open(config_path) as f:
cfg_dict = json.load(f)
config = Config(**cfg_dict)
print("=" * 70)
print("SEMANTIC WIRING SUMMARY")
print("=" * 70)
# Build IP -> device mapping
plc_by_ip: Dict[str, str] = {}
sensor_by_ip: Dict[str, str] = {}
actuator_by_ip: Dict[str, str] = {}
for plc in config.plcs:
if plc.network and plc.network.ip:
plc_by_ip[plc.network.ip] = plc.name
for sensor in config.sensors:
if sensor.network and sensor.network.ip:
sensor_by_ip[sensor.network.ip] = sensor.name
for actuator in config.actuators:
if actuator.network and actuator.network.ip:
actuator_by_ip[actuator.network.ip] = actuator.name
# Track which sensors/actuators are referenced
monitored_sensor_ips: Set[str] = set()
controlled_actuator_ips: Set[str] = set()
# PLC Wiring Summary
print("\n" + "-" * 70)
print("PLC WIRING")
print("-" * 70)
for plc in config.plcs:
print(f"\n{plc.name} ({plc.network.ip if plc.network else 'no IP'}):")
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in plc.outbound_connections:
if hasattr(conn, 'ip') and conn.id:
conn_to_ip[conn.id] = conn.ip
# Monitors
print(f" Monitors ({len(plc.monitors)}):")
for m in plc.monitors:
target_ip = conn_to_ip.get(m.outbound_connection_id, "???")
target_device = (
plc_by_ip.get(target_ip) or
sensor_by_ip.get(target_ip) or
actuator_by_ip.get(target_ip) or
f"unknown ({target_ip})"
)
if target_ip in sensor_by_ip:
monitored_sensor_ips.add(target_ip)
print(f" - {m.id} -> {target_device} ({m.value_type})")
# Controllers
print(f" Controllers ({len(plc.controllers)}):")
for c in plc.controllers:
target_ip = conn_to_ip.get(c.outbound_connection_id, "???")
target_device = (
plc_by_ip.get(target_ip) or
actuator_by_ip.get(target_ip) or
sensor_by_ip.get(target_ip) or
f"unknown ({target_ip})"
)
if target_ip in actuator_by_ip:
controlled_actuator_ips.add(target_ip)
print(f" - {c.id} -> {target_device} ({c.value_type})")
# Local registers
print(f" Local Registers:")
for reg_type in ["coil", "discrete_input", "holding_register", "input_register"]:
regs = getattr(plc.registers, reg_type, [])
for reg in regs:
io_str = f" io={reg.io}" if reg.io else ""
print(f" - {reg_type}: {reg.id or reg.physical_value or 'unnamed'} @{reg.address}{io_str}")
# HMI Wiring Summary
print("\n" + "-" * 70)
print("HMI WIRING")
print("-" * 70)
for hmi in config.hmis:
print(f"\n{hmi.name}:")
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in hmi.outbound_connections:
if hasattr(conn, 'ip') and conn.id:
conn_to_ip[conn.id] = conn.ip
# Monitors
print(f" Monitors ({len(hmi.monitors)}):")
for m in hmi.monitors:
target_ip = conn_to_ip.get(m.outbound_connection_id, "???")
target_device = plc_by_ip.get(target_ip, f"unknown ({target_ip})")
print(f" - {m.id} -> {target_device} ({m.value_type})")
# Controllers
print(f" Controllers ({len(hmi.controllers)}):")
for c in hmi.controllers:
target_ip = conn_to_ip.get(c.outbound_connection_id, "???")
target_device = plc_by_ip.get(target_ip, f"unknown ({target_ip})")
print(f" - {c.id} -> {target_device} ({c.value_type})")
# Orphan Summary
print("\n" + "-" * 70)
print("ORPHAN DEVICES")
print("-" * 70)
orphan_sensors = []
for sensor in config.sensors:
if sensor.network and sensor.network.ip:
if sensor.network.ip not in monitored_sensor_ips:
orphan_sensors.append(sensor.name)
orphan_actuators = []
for actuator in config.actuators:
if actuator.network and actuator.network.ip:
if actuator.network.ip not in controlled_actuator_ips:
orphan_actuators.append(actuator.name)
if orphan_sensors:
print(f"\nOrphan Sensors (no PLC monitor):")
for name in orphan_sensors:
print(f" - {name}")
else:
print("\nNo orphan sensors")
if orphan_actuators:
print(f"\nOrphan Actuators (no PLC controller):")
for name in orphan_actuators:
print(f" - {name}")
else:
print("\nNo orphan actuators")
# IO Mismatch Summary
print("\n" + "-" * 70)
print("IO MISMATCH CHECK")
print("-" * 70)
mismatches = []
for plc in config.plcs:
# Build register id -> io mapping
reg_io: Dict[str, Dict[str, str]] = {}
for reg_type in ["coil", "discrete_input", "holding_register", "input_register"]:
reg_io[reg_type] = {}
for reg in getattr(plc.registers, reg_type, []):
if reg.id:
reg_io[reg_type][reg.id] = reg.io or ""
# Check monitors (should be io=input)
for m in plc.monitors:
if m.value_type in reg_io:
actual_io = reg_io[m.value_type].get(m.id, "")
if actual_io and actual_io != "input":
mismatches.append(
f"{plc.name}: monitor '{m.id}' has io='{actual_io}' (should be 'input')"
)
# Check controllers (should be io=output)
for c in plc.controllers:
if c.value_type in reg_io:
actual_io = reg_io[c.value_type].get(c.id, "")
if actual_io and actual_io != "output":
mismatches.append(
f"{plc.name}: controller '{c.id}' has io='{actual_io}' (should be 'output')"
)
if mismatches:
print("\nIO Mismatches found:")
for m in mismatches:
print(f" - {m}")
else:
print("\nNo IO mismatches")
# Missing Register Check
print("\n" + "-" * 70)
print("MISSING REGISTERS CHECK")
print("-" * 70)
missing = []
for plc in config.plcs:
# Build set of existing register ids by type
existing: Dict[str, Set[str]] = {}
for reg_type in ["coil", "discrete_input", "holding_register", "input_register"]:
existing[reg_type] = {
reg.id for reg in getattr(plc.registers, reg_type, [])
if reg.id
}
# Check monitors
for m in plc.monitors:
if m.value_type in existing:
if m.id not in existing[m.value_type]:
missing.append(
f"{plc.name}: monitor '{m.id}' missing local register in {m.value_type}"
)
# Check controllers
for c in plc.controllers:
if c.value_type in existing:
if c.id not in existing[c.value_type]:
missing.append(
f"{plc.name}: controller '{c.id}' missing local register in {c.value_type}"
)
if missing:
print("\nMissing Registers:")
for m in missing:
print(f" - {m}")
else:
print("\nNo missing registers")
print("\n" + "=" * 70)
def main() -> None:
parser = argparse.ArgumentParser(
description="Debug tool for semantic validation issues"
)
parser.add_argument(
"--config",
required=True,
help="Input configuration.json path"
)
args = parser.parse_args()
config_path = Path(args.config)
if not config_path.exists():
raise SystemExit(f"ERROR: Config file not found: {config_path}")
debug_semantics(config_path)
if __name__ == "__main__":
main()

--- tools/probe_modbus.py (new executable file, 382 lines) ---
#!/usr/bin/env python3
"""
Modbus Probe Tool for ICS-SimLab Diagnostics.
Probes all monitor targets (HMI->PLC, PLC->Sensor) to diagnose:
- TCP connectivity
- Modbus exceptions (illegal address/function)
- Register type/address mismatches
Usage:
python3 tools/probe_modbus.py [--config path/to/configuration.json]
python3 tools/probe_modbus.py --docker # Run from host via docker exec
"""
import argparse
import json
import socket
import sys
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, List, Optional
# Optional pymodbus import (may not be available on host)
try:
from pymodbus.client import ModbusTcpClient
from pymodbus.exceptions import ModbusException
PYMODBUS_AVAILABLE = True
except ImportError:
PYMODBUS_AVAILABLE = False
@dataclass
class ProbeTarget:
"""A Modbus read target to probe."""
source: str # e.g., "operator_hmi", "plc1"
monitor_id: str # e.g., "water_tank_level_reg"
target_ip: str
target_port: int
value_type: str # coil, discrete_input, holding_register, input_register
slave_id: int
address: int
count: int
@dataclass
class ProbeResult:
"""Result of probing a target."""
target: ProbeTarget
tcp_ok: bool
modbus_ok: bool
value: Optional[Any] = None
error: Optional[str] = None
def parse_config(config_path: Path) -> Dict[str, Any]:
"""Load and parse configuration.json."""
with open(config_path) as f:
return json.load(f)
def extract_probe_targets(config: Dict[str, Any]) -> List[ProbeTarget]:
"""Extract all monitor targets from config (HMIs and PLCs)."""
targets = []
# HMI monitors -> PLCs
for hmi in config.get("hmis", []):
hmi_name = hmi.get("name", "unknown_hmi")
outbound_map = {
conn["id"]: (conn["ip"], conn["port"])
for conn in hmi.get("outbound_connections", [])
}
for mon in hmi.get("monitors", []):
conn_id = mon.get("outbound_connection_id")
if conn_id not in outbound_map:
continue
ip, port = outbound_map[conn_id]
targets.append(ProbeTarget(
source=hmi_name,
monitor_id=mon["id"],
target_ip=ip,
target_port=port,
value_type=mon["value_type"],
slave_id=mon.get("slave_id", 1),
address=mon["address"],
count=mon.get("count", 1),
))
# PLC monitors -> Sensors/other PLCs
for plc in config.get("plcs", []):
plc_name = plc.get("name", "unknown_plc")
outbound_map = {
conn["id"]: (conn["ip"], conn["port"])
for conn in plc.get("outbound_connections", [])
}
for mon in plc.get("monitors", []):
conn_id = mon.get("outbound_connection_id")
if conn_id not in outbound_map:
continue
ip, port = outbound_map[conn_id]
targets.append(ProbeTarget(
source=plc_name,
monitor_id=mon["id"],
target_ip=ip,
target_port=port,
value_type=mon["value_type"],
slave_id=mon.get("slave_id", 1),
address=mon["address"],
count=mon.get("count", 1),
))
return targets
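The connection-resolution step above can be exercised on a toy config; this standalone sketch mirrors the `outbound_map` lookup (field names follow the config schema used above, values are illustrative):

```python
hmi = {
    "name": "operator_hmi",
    "outbound_connections": [{"id": "to_plc1", "ip": "192.168.0.11", "port": 502}],
    "monitors": [
        {"id": "tank_level", "outbound_connection_id": "to_plc1",
         "value_type": "input_register", "address": 0},
        # Monitors whose connection id resolves to nothing are skipped:
        {"id": "dangling", "outbound_connection_id": "missing",
         "value_type": "coil", "address": 0},
    ],
}
outbound_map = {c["id"]: (c["ip"], c["port"]) for c in hmi["outbound_connections"]}
resolved = [
    (m["id"], *outbound_map[m["outbound_connection_id"]])
    for m in hmi["monitors"]
    if m["outbound_connection_id"] in outbound_map
]
print(resolved)  # [('tank_level', '192.168.0.11', 502)]
```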
def check_tcp(ip: str, port: int, timeout: float = 2.0) -> bool:
"""Check if TCP port is reachable."""
try:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.settimeout(timeout)
sock.connect((ip, port))
return True
except (socket.timeout, ConnectionRefusedError, OSError):
return False
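Used standalone, `check_tcp` behaves as follows; this self-contained copy of the helper probes a throwaway local listener, so both outcomes can be observed without any ICS-SimLab container running:

```python
import socket

def check_tcp(ip, port, timeout=2.0):
    # Same logic as above: True iff a TCP connect succeeds within the timeout.
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            sock.connect((ip, port))
            return True
    except (socket.timeout, ConnectionRefusedError, OSError):
        return False

# Open a throwaway listener on an ephemeral port, then probe it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(check_tcp("127.0.0.1", port, timeout=1.0))   # True: listener is up
srv.close()
print(check_tcp("127.0.0.1", port, timeout=0.5))   # False: connection refused
```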
def probe_modbus(target: ProbeTarget, timeout: float = 3.0) -> ProbeResult:
"""Probe a single Modbus target."""
# First check TCP connectivity
tcp_ok = check_tcp(target.target_ip, target.target_port, timeout)
if not tcp_ok:
return ProbeResult(
target=target,
tcp_ok=False,
modbus_ok=False,
error=f"TCP connection refused: {target.target_ip}:{target.target_port}"
)
if not PYMODBUS_AVAILABLE:
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=False,
error="pymodbus not installed (TCP OK)"
)
# Try Modbus read
client = ModbusTcpClient(
host=target.target_ip,
port=target.target_port,
timeout=timeout
)
try:
if not client.connect():
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=False,
error="Modbus connect() failed"
)
# Select appropriate read function
read_funcs = {
"coil": client.read_coils,
"discrete_input": client.read_discrete_inputs,
"holding_register": client.read_holding_registers,
"input_register": client.read_input_registers,
}
func = read_funcs.get(target.value_type)
if not func:
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=False,
error=f"Unknown value_type: {target.value_type}"
)
# Perform Modbus read.
# ICS-SimLab containers ship a pymodbus build that accepts positional
# (address, count) args; newer pymodbus releases may require count as a
# keyword. Note that target.slave_id is not forwarded here, so the
# client's default unit id is used.
addr = target.address
cnt = target.count
result = func(addr, cnt)
if result.isError():
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=False,
error=f"Modbus error: {result}"
)
# Extract value
if target.value_type in ("coil", "discrete_input"):
value = result.bits[:target.count]
else:
value = result.registers[:target.count]
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=True,
value=value
)
except ModbusException as e:
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=False,
error=f"ModbusException: {e}"
)
except Exception as e:
return ProbeResult(
target=target,
tcp_ok=True,
modbus_ok=False,
error=f"Exception: {type(e).__name__}: {e}"
)
finally:
client.close()
def format_result(r: ProbeResult) -> str:
"""Format a probe result as a single line."""
t = r.target
status = "OK" if r.modbus_ok else "FAIL"
tcp_status = "TCP_OK" if r.tcp_ok else "TCP_FAIL"
line = f"[{status}] {t.source} -> {t.target_ip}:{t.target_port}"
line += f" {t.value_type}@{t.address} (id={t.monitor_id})"
if r.modbus_ok:
line += f" value={r.value}"
else:
line += f" ({tcp_status}) {r.error}"
return line
def run_probe(config_path: Path, verbose: bool = False) -> List[ProbeResult]:
"""Run probe on all targets."""
config = parse_config(config_path)
targets = extract_probe_targets(config)
if verbose:
print(f"Found {len(targets)} probe targets")
print("=" * 70)
results = []
for target in targets:
result = probe_modbus(target)
results.append(result)
if verbose:
print(format_result(result))
return results
def generate_report(results: List[ProbeResult]) -> str:
"""Generate a full probe report."""
lines = []
lines.append("=" * 70)
lines.append("MODBUS PROBE REPORT")
lines.append("=" * 70)
lines.append("")
# Summary
total = len(results)
tcp_ok = sum(1 for r in results if r.tcp_ok)
modbus_ok = sum(1 for r in results if r.modbus_ok)
lines.append(f"Total targets: {total}")
lines.append(f"TCP reachable: {tcp_ok}/{total}")
lines.append(f"Modbus OK: {modbus_ok}/{total}")
lines.append("")
# Group by source
by_source: Dict[str, List[ProbeResult]] = {}
for r in results:
src = r.target.source
by_source.setdefault(src, []).append(r)
for source, source_results in sorted(by_source.items()):
lines.append(f"--- {source} monitors ---")
for r in source_results:
lines.append(format_result(r))
lines.append("")
# Diagnosis
lines.append("=" * 70)
lines.append("DIAGNOSIS")
lines.append("=" * 70)
tcp_fails = [r for r in results if not r.tcp_ok]
modbus_fails = [r for r in results if r.tcp_ok and not r.modbus_ok]
if tcp_fails:
lines.append("")
lines.append("TCP FAILURES (connection refused/timeout):")
for r in tcp_fails:
lines.append(f" - {r.target.source} -> {r.target.target_ip}:{r.target.target_port}")
lines.append(" Likely causes: container not running, network isolation, firewall")
if modbus_fails:
lines.append("")
lines.append("MODBUS FAILURES (TCP OK but read failed):")
for r in modbus_fails:
lines.append(f" - {r.target.source}.{r.target.monitor_id}: {r.error}")
lines.append(" Likely causes: wrong address, wrong register type, device not serving")
if modbus_ok == total:
lines.append("")
lines.append("All Modbus reads successful. Data flow issue is likely in:")
lines.append(" - HIL not updating physical_values")
lines.append(" - Sensors not receiving from HIL")
lines.append(" - Value is legitimately 0")
# Values that are 0 (potential issue)
zero_values = [r for r in results if r.modbus_ok and r.value in ([0], [False], 0)]
if zero_values:
lines.append("")
lines.append("REGISTERS WITH VALUE 0 (may indicate HIL not producing data):")
for r in zero_values:
lines.append(f" - {r.target.source}.{r.target.monitor_id} = {r.value}")
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Modbus probe for ICS-SimLab diagnostics")
parser.add_argument(
"--config",
default="outputs/scenario_run/configuration.json",
help="Path to configuration.json"
)
parser.add_argument(
"--docker",
action="store_true",
help="Run probes via docker exec (from host)"
)
parser.add_argument(
"--verbose", "-v",
action="store_true",
help="Print results as they are collected"
)
parser.add_argument(
"--output", "-o",
help="Write report to file instead of stdout"
)
args = parser.parse_args()
config_path = Path(args.config)
if not config_path.exists():
print(f"ERROR: Config not found: {config_path}", file=sys.stderr)
sys.exit(1)
if args.docker:
# TODO: Implement docker exec wrapper
print("--docker mode not yet implemented", file=sys.stderr)
sys.exit(1)
results = run_probe(config_path, verbose=args.verbose)
report = generate_report(results)
if args.output:
Path(args.output).write_text(report)
print(f"Report written to {args.output}")
else:
print(report)
# Exit with error if any failures
if not all(r.modbus_ok for r in results):
sys.exit(1)
if __name__ == "__main__":
main()

--- tools/repair_config.py (new file, 900 lines) ---
#!/usr/bin/env python3
"""
Minimal deterministic repair for ICS-SimLab configuration issues.
These repairs fix P0 semantic issues that would cause open-loop systems:
- Orphan sensors: attach to first PLC as monitors
- Orphan actuators: attach to first PLC as controllers
- Boolean type rules: move boolean signals to correct register types
Repairs are deterministic: same input always produces same output.
Address allocation uses a simple incrementing scheme.
"""
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple
# Address allocation ranges (avoid collision with existing addresses)
MONITOR_ADDRESS_START = 1000
CONTROLLER_ADDRESS_START = 2000
@dataclass
class RepairAction:
"""A repair action that was applied."""
entity: str
action: str
details: str
def __str__(self) -> str:
return f"{self.entity}: {self.action} - {self.details}"
def _find_orphan_sensors(config: Dict[str, Any]) -> List[Dict[str, Any]]:
"""Find sensors not referenced by any PLC monitor."""
# Collect all sensor IPs
sensor_by_ip: Dict[str, Dict[str, Any]] = {}
for sensor in config.get("sensors", []):
net = sensor.get("network", {})
if ip := net.get("ip"):
sensor_by_ip[ip] = sensor
# Collect all IPs targeted by PLC monitors
monitored_ips: set = set()
for plc in config.get("plcs", []):
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in plc.get("outbound_connections", []):
if conn.get("type") == "tcp" and conn.get("id"):
conn_to_ip[conn["id"]] = conn.get("ip", "")
for monitor in plc.get("monitors", []):
conn_id = monitor.get("outbound_connection_id", "")
if conn_id in conn_to_ip:
monitored_ips.add(conn_to_ip[conn_id])
# Find orphans
orphans = []
for ip, sensor in sensor_by_ip.items():
if ip not in monitored_ips:
orphans.append(sensor)
return orphans
def _find_orphan_actuators(config: Dict[str, Any]) -> List[Dict[str, Any]]:
"""Find actuators not referenced by any PLC controller."""
# Collect all actuator IPs
actuator_by_ip: Dict[str, Dict[str, Any]] = {}
for actuator in config.get("actuators", []):
net = actuator.get("network", {})
if ip := net.get("ip"):
actuator_by_ip[ip] = actuator
# Collect all IPs targeted by PLC controllers
controlled_ips: set = set()
for plc in config.get("plcs", []):
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in plc.get("outbound_connections", []):
if conn.get("type") == "tcp" and conn.get("id"):
conn_to_ip[conn["id"]] = conn.get("ip", "")
for controller in plc.get("controllers", []):
conn_id = controller.get("outbound_connection_id", "")
if conn_id in conn_to_ip:
controlled_ips.add(conn_to_ip[conn_id])
# Find orphans
orphans = []
for ip, actuator in actuator_by_ip.items():
if ip not in controlled_ips:
orphans.append(actuator)
return orphans
def _get_first_register_info(device: Dict[str, Any]) -> Tuple[str, int, str]:
"""
Get info from first register of a device.
Returns: (physical_value, address, value_type)
"""
registers = device.get("registers", {})
# Check input_register first (sensors typically use this)
for reg in registers.get("input_register", []):
pv = reg.get("physical_value") or reg.get("id") or device.get("name", "unknown")
addr = reg.get("address", 0)
return (pv, addr, "input_register")
# Check discrete_input (for boolean sensors)
for reg in registers.get("discrete_input", []):
pv = reg.get("physical_value") or reg.get("id") or device.get("name", "unknown")
addr = reg.get("address", 0)
return (pv, addr, "discrete_input")
# Check coil (for actuators)
for reg in registers.get("coil", []):
pv = reg.get("physical_value") or reg.get("id") or device.get("name", "unknown")
addr = reg.get("address", 0)
return (pv, addr, "coil")
# Check holding_register
for reg in registers.get("holding_register", []):
pv = reg.get("physical_value") or reg.get("id") or device.get("name", "unknown")
addr = reg.get("address", 0)
return (pv, addr, "holding_register")
# Fallback
return (device.get("name", "unknown"), 0, "input_register")
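The priority order above (input_register, then discrete_input, coil, holding_register, then a name/0 fallback) can be condensed into a loop; a standalone sketch with toy devices, renamed `first_register_info` for illustration:

```python
def first_register_info(device):
    # Same priority order as _get_first_register_info above.
    registers = device.get("registers", {})
    for reg_type in ("input_register", "discrete_input", "coil", "holding_register"):
        for reg in registers.get(reg_type, []):
            pv = reg.get("physical_value") or reg.get("id") or device.get("name", "unknown")
            return (pv, reg.get("address", 0), reg_type)
    # Fallback when the device declares no registers at all.
    return (device.get("name", "unknown"), 0, "input_register")

sensor = {"name": "level_sensor",
          "registers": {"input_register": [{"physical_value": "tank_level", "address": 3}],
                        "coil": [{"id": "alarm", "address": 0}]}}
empty = {"name": "bare_device", "registers": {}}
print(first_register_info(sensor))  # ('tank_level', 3, 'input_register')
print(first_register_info(empty))   # ('bare_device', 0, 'input_register')
```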
def repair_orphan_devices(config: Dict[str, Any]) -> Tuple[Dict[str, Any], List[RepairAction]]:
"""
Attach orphan sensors/actuators to the first PLC.
This is a minimal repair that ensures the system is not open-loop.
It adds outbound_connections and monitors/controllers to the first PLC.
Args:
config: Configuration dict (will be modified in place)
Returns:
(modified_config, list_of_repair_actions)
"""
actions: List[RepairAction] = []
plcs = config.get("plcs", [])
if not plcs:
return config, actions # No PLCs to attach to
first_plc = plcs[0]
plc_name = first_plc.get("name", "plc1")
# Ensure lists exist
if "outbound_connections" not in first_plc:
first_plc["outbound_connections"] = []
if "monitors" not in first_plc:
first_plc["monitors"] = []
if "controllers" not in first_plc:
first_plc["controllers"] = []
# Track existing connection IDs to avoid duplicates
existing_conn_ids = {
conn.get("id") for conn in first_plc["outbound_connections"]
if conn.get("id")
}
# Address counters for deterministic allocation. Note: the repairs below
# currently reuse each device's original register address (orig_addr);
# these counters only reserve the 1000/2000 ranges for future allocation.
monitor_addr = MONITOR_ADDRESS_START
controller_addr = CONTROLLER_ADDRESS_START
# Repair orphan sensors
orphan_sensors = _find_orphan_sensors(config)
for sensor in orphan_sensors:
sensor_name = sensor.get("name", "unknown_sensor")
sensor_net = sensor.get("network", {})
sensor_ip = sensor_net.get("ip")
if not sensor_ip:
continue
# Create connection ID
conn_id = f"to_{sensor_name}"
if conn_id in existing_conn_ids:
# Already have a connection, skip
continue
# Add outbound connection
first_plc["outbound_connections"].append({
"type": "tcp",
"ip": sensor_ip,
"port": 502,
"id": conn_id
})
existing_conn_ids.add(conn_id)
# Get register info from sensor
pv, orig_addr, value_type = _get_first_register_info(sensor)
# Add monitor
first_plc["monitors"].append({
"outbound_connection_id": conn_id,
"id": pv,
"value_type": value_type,
"slave_id": 1,
"address": orig_addr,
"count": 1,
"interval": 0.5
})
actions.append(RepairAction(
entity=f"sensors['{sensor_name}']",
action="attached to PLC",
details=f"Added connection '{conn_id}' and monitor for '{pv}' on {plc_name}"
))
monitor_addr += 1
# Repair orphan actuators
orphan_actuators = _find_orphan_actuators(config)
for actuator in orphan_actuators:
actuator_name = actuator.get("name", "unknown_actuator")
actuator_net = actuator.get("network", {})
actuator_ip = actuator_net.get("ip")
if not actuator_ip:
continue
# Create connection ID
conn_id = f"to_{actuator_name}"
if conn_id in existing_conn_ids:
# Already have a connection, skip
continue
# Add outbound connection
first_plc["outbound_connections"].append({
"type": "tcp",
"ip": actuator_ip,
"port": 502,
"id": conn_id
})
existing_conn_ids.add(conn_id)
# Get register info from actuator
pv, orig_addr, value_type = _get_first_register_info(actuator)
# For actuators, prefer coil type
if value_type == "input_register":
value_type = "coil"
# Add controller
first_plc["controllers"].append({
"outbound_connection_id": conn_id,
"id": pv,
"value_type": value_type,
"slave_id": 1,
"address": orig_addr,
"count": 1
})
actions.append(RepairAction(
entity=f"actuators['{actuator_name}']",
action="attached to PLC",
details=f"Added connection '{conn_id}' and controller for '{pv}' on {plc_name}"
))
controller_addr += 1
return config, actions
# Boolean indicator patterns (case-insensitive) - same as in semantic_validation.py
BOOLEAN_PATTERNS = [
"switch", "state", "status", "at_", "is_", "_on", "_off",
"enable", "active", "running", "alarm", "fault", "ready",
"open", "close", "start", "stop", "button", "flag"
]
def _looks_like_boolean(name: str) -> bool:
"""Check if a physical_value/id name suggests boolean semantics."""
if not name:
return False
name_lower = name.lower()
return any(pattern in name_lower for pattern in BOOLEAN_PATTERNS)
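The substring heuristic is easy to exercise in isolation; a minimal self-contained sketch (the signal names below are hypothetical):

```python
BOOLEAN_PATTERNS = ["switch", "state", "status", "at_", "is_", "_on", "_off",
                    "enable", "active", "running", "alarm", "fault", "ready",
                    "open", "close", "start", "stop", "button", "flag"]

def looks_like_boolean(name):
    # True if any pattern occurs as a substring, case-insensitively
    return bool(name) and any(p in name.lower() for p in BOOLEAN_PATTERNS)

print(looks_like_boolean("pump_running"))  # True  (matches "running")
print(looks_like_boolean("tank_level"))    # False (no pattern matches)
print(looks_like_boolean("valve_open"))    # True  (matches "open")
```

Note the patterns are substrings, not word boundaries, so a name like "approximate_value" would also match via "at_"; the repair is intentionally permissive.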
def repair_boolean_types(config: Dict[str, Any]) -> Tuple[Dict[str, Any], List[RepairAction]]:
"""
Move boolean signals to correct Modbus register types.
Modbus type rules:
- Boolean measured values -> discrete_input (read-only, function code 2)
- Boolean commanded values -> coil (write, function code 5/15)
This repair:
1. Moves sensor boolean registers from input_register to discrete_input
2. Moves actuator boolean registers from holding_register to coil
3. Updates PLC monitors/controllers to use correct value_type
Args:
config: Configuration dict (will be modified in place)
Returns:
(modified_config, list_of_repair_actions)
"""
actions: List[RepairAction] = []
# Track changes for updating monitors/controllers
# Key: (device_name, physical_value) -> new_value_type
type_changes: Dict[Tuple[str, str], str] = {}
# Repair sensors: move boolean input_register -> discrete_input
for sensor in config.get("sensors", []):
sensor_name = sensor.get("name", "unknown")
registers = sensor.get("registers", {})
input_regs = registers.get("input_register", [])
discrete_regs = registers.get("discrete_input", [])
# Find boolean registers to move
to_move = []
for i, reg in enumerate(input_regs):
pv = reg.get("physical_value") or ""
if _looks_like_boolean(pv):
to_move.append((i, reg, pv))
# Move them (reverse order to preserve indices)
for i, reg, pv in reversed(to_move):
input_regs.pop(i)
discrete_regs.append(reg)
type_changes[(sensor_name, pv)] = "discrete_input"
actions.append(RepairAction(
entity=f"sensors['{sensor_name}'].registers",
action="moved to discrete_input",
details=f"Boolean signal '{pv}' moved from input_register to discrete_input"
))
# Update registers in sensor
registers["input_register"] = input_regs
registers["discrete_input"] = discrete_regs
# Repair actuators: move boolean holding_register -> coil
for actuator in config.get("actuators", []):
actuator_name = actuator.get("name", "unknown")
registers = actuator.get("registers", {})
holding_regs = registers.get("holding_register", [])
coil_regs = registers.get("coil", [])
# Find boolean registers to move
to_move = []
for i, reg in enumerate(holding_regs):
pv = reg.get("physical_value") or ""
if _looks_like_boolean(pv):
to_move.append((i, reg, pv))
# Move them (reverse order to preserve indices)
for i, reg, pv in reversed(to_move):
holding_regs.pop(i)
coil_regs.append(reg)
type_changes[(actuator_name, pv)] = "coil"
actions.append(RepairAction(
entity=f"actuators['{actuator_name}'].registers",
action="moved to coil",
details=f"Boolean signal '{pv}' moved from holding_register to coil"
))
# Update registers in actuator
registers["holding_register"] = holding_regs
registers["coil"] = coil_regs
# Repair PLCs: move boolean input_register -> discrete_input, holding_register -> coil
for plc in config.get("plcs", []):
plc_name = plc.get("name", "unknown")
registers = plc.get("registers", {})
# input_register -> discrete_input for boolean inputs
input_regs = registers.get("input_register", [])
discrete_regs = registers.get("discrete_input", [])
to_move = []
for i, reg in enumerate(input_regs):
reg_id = reg.get("id") or ""
if _looks_like_boolean(reg_id):
to_move.append((i, reg, reg_id))
for i, reg, reg_id in reversed(to_move):
input_regs.pop(i)
discrete_regs.append(reg)
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers",
action="moved to discrete_input",
details=f"Boolean signal '{reg_id}' moved from input_register to discrete_input"
))
registers["input_register"] = input_regs
registers["discrete_input"] = discrete_regs
# holding_register -> coil for boolean outputs
holding_regs = registers.get("holding_register", [])
coil_regs = registers.get("coil", [])
to_move = []
for i, reg in enumerate(holding_regs):
reg_id = reg.get("id") or ""
if _looks_like_boolean(reg_id):
to_move.append((i, reg, reg_id))
for i, reg, reg_id in reversed(to_move):
holding_regs.pop(i)
coil_regs.append(reg)
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers",
action="moved to coil",
details=f"Boolean signal '{reg_id}' moved from holding_register to coil"
))
registers["holding_register"] = holding_regs
registers["coil"] = coil_regs
# Update PLC monitors to match new sensor register types
for plc in config.get("plcs", []):
for monitor in plc.get("monitors", []):
monitor_id = monitor.get("id", "")
# Check if this monitor's target was changed
for (device_name, pv), new_type in type_changes.items():
if monitor_id == pv and monitor.get("value_type") != new_type:
old_type = monitor.get("value_type")
monitor["value_type"] = new_type
actions.append(RepairAction(
entity=f"plcs['{plc.get('name')}'].monitors",
action="updated value_type",
details=f"Monitor '{monitor_id}' changed from {old_type} to {new_type}"
))
# Update PLC controllers to match new actuator register types
for plc in config.get("plcs", []):
for controller in plc.get("controllers", []):
controller_id = controller.get("id", "")
# Check if this controller's target was changed
for (device_name, pv), new_type in type_changes.items():
if controller_id == pv and controller.get("value_type") != new_type:
old_type = controller.get("value_type")
controller["value_type"] = new_type
actions.append(RepairAction(
entity=f"plcs['{plc.get('name')}'].controllers",
action="updated value_type",
details=f"Controller '{controller_id}' changed from {old_type} to {new_type}"
))
# Update HMI monitors to match new register types
for hmi in config.get("hmis", []):
for monitor in hmi.get("monitors", []):
monitor_id = monitor.get("id", "")
for (device_name, pv), new_type in type_changes.items():
if monitor_id == pv and monitor.get("value_type") != new_type:
old_type = monitor.get("value_type")
monitor["value_type"] = new_type
actions.append(RepairAction(
entity=f"hmis['{hmi.get('name')}'].monitors",
action="updated value_type",
details=f"Monitor '{monitor_id}' changed from {old_type} to {new_type}"
))
return config, actions
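The core of each move above is collecting indices first, then popping in reverse so earlier indices stay valid; a self-contained sketch (the register layout and the `is_bool` stand-in are hypothetical):

```python
registers = {
    "input_register": [
        {"address": 1, "count": 1, "physical_value": "level"},
        {"address": 2, "count": 1, "physical_value": "pump_running"},
    ],
    "discrete_input": [],
}

def is_bool(name):  # stand-in for _looks_like_boolean
    return "running" in name

# collect indices of boolean registers, then pop in reverse order
to_move = [i for i, reg in enumerate(registers["input_register"])
           if is_bool(reg["physical_value"])]
for i in reversed(to_move):
    registers["discrete_input"].append(registers["input_register"].pop(i))

print([r["physical_value"] for r in registers["discrete_input"]])  # ['pump_running']
print([r["physical_value"] for r in registers["input_register"]])  # ['level']
```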
def repair_plc_local_registers(config: Dict[str, Any]) -> Tuple[Dict[str, Any], List[RepairAction]]:
"""
Create missing PLC local registers for monitors and controllers, and fix IO mismatches.
Native ICS-SimLab pattern requires that:
- For each PLC monitor with id=X and value_type=T, there should be a
local register in plc.registers[T] with id=X and io="input"
- For each PLC controller with id=Y and value_type=T, there should be a
local register in plc.registers[T] with id=Y and io="output"
This repair:
1. Creates missing registers with minimal changes
2. Fixes IO mismatches on existing registers
Args:
config: Configuration dict (will be modified in place)
Returns:
(modified_config, list_of_repair_actions)
"""
actions: List[RepairAction] = []
for plc in config.get("plcs", []):
plc_name = plc.get("name", "unknown")
registers = plc.get("registers", {})
# Ensure register type lists exist
for reg_type in ["coil", "discrete_input", "holding_register", "input_register"]:
if reg_type not in registers:
registers[reg_type] = []
# Build mapping of existing register IDs by type -> id -> register object
existing_regs: Dict[str, Dict[str, Dict]] = {
"coil": {},
"discrete_input": {},
"holding_register": {},
"input_register": {},
}
for reg_type in existing_regs:
for reg in registers.get(reg_type, []):
if isinstance(reg, dict) and reg.get("id"):
existing_regs[reg_type][reg["id"]] = reg
# Process monitors: create missing local registers with io="input" or fix io
for monitor in plc.get("monitors", []):
monitor_id = monitor.get("id")
value_type = monitor.get("value_type") # e.g., "input_register"
address = monitor.get("address", 0)
count = monitor.get("count", 1)
if not monitor_id or not value_type:
continue
if value_type not in existing_regs:
continue
if monitor_id not in existing_regs[value_type]:
# Create new register entry
new_reg = {
"address": address,
"count": count,
"id": monitor_id,
"io": "input"
}
registers[value_type].append(new_reg)
existing_regs[value_type][monitor_id] = new_reg
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers.{value_type}",
action="created local register",
details=f"Added register id='{monitor_id}' io='input' for monitor (native pattern)"
))
else:
# Check and fix IO mismatch
reg = existing_regs[value_type][monitor_id]
if reg.get("io") and reg["io"] != "input":
old_io = reg["io"]
reg["io"] = "input"
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers.{value_type}['{monitor_id}']",
action="fixed io mismatch",
details=f"Changed io from '{old_io}' to 'input' for monitor (native pattern)"
))
# Process controllers: create missing local registers with io="output" or fix io
for controller in plc.get("controllers", []):
controller_id = controller.get("id")
value_type = controller.get("value_type") # e.g., "coil"
address = controller.get("address", 0)
count = controller.get("count", 1)
if not controller_id or not value_type:
continue
if value_type not in existing_regs:
continue
if controller_id not in existing_regs[value_type]:
# Create new register entry
new_reg = {
"address": address,
"count": count,
"id": controller_id,
"io": "output"
}
registers[value_type].append(new_reg)
existing_regs[value_type][controller_id] = new_reg
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers.{value_type}",
action="created local register",
details=f"Added register id='{controller_id}' io='output' for controller (native pattern)"
))
else:
# Check and fix IO mismatch
reg = existing_regs[value_type][controller_id]
if reg.get("io") and reg["io"] != "output":
old_io = reg["io"]
reg["io"] = "output"
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers.{value_type}['{controller_id}']",
action="fixed io mismatch",
details=f"Changed io from '{old_io}' to 'output' for controller (native pattern)"
))
return config, actions
def repair_hmi_controller_registers(config: Dict[str, Any]) -> Tuple[Dict[str, Any], List[RepairAction]]:
"""
Create missing PLC registers for HMI controllers.
HMI controllers write to PLC registers. If the referenced register doesn't
exist on the target PLC, this repair creates it with io="output".
Args:
config: Configuration dict (will be modified in place)
Returns:
(modified_config, list_of_repair_actions)
"""
actions: List[RepairAction] = []
# Build IP -> PLC mapping
plc_by_ip: Dict[str, Dict[str, Any]] = {}
for plc in config.get("plcs", []):
net = plc.get("network", {})
if ip := net.get("ip"):
plc_by_ip[ip] = plc
# Process each HMI
for hmi in config.get("hmis", []):
hmi_name = hmi.get("name", "unknown")
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in hmi.get("outbound_connections", []):
if conn.get("type") == "tcp" and conn.get("id"):
conn_to_ip[conn["id"]] = conn.get("ip", "")
# Process controllers
for controller in hmi.get("controllers", []):
controller_id = controller.get("id")
value_type = controller.get("value_type") # e.g., "coil"
conn_id = controller.get("outbound_connection_id")
address = controller.get("address", 0)
count = controller.get("count", 1)
if not controller_id or not value_type or not conn_id:
continue
# Find target IP
target_ip = conn_to_ip.get(conn_id)
if not target_ip:
continue
# Find target PLC
target_plc = plc_by_ip.get(target_ip)
if not target_plc:
continue
plc_name = target_plc.get("name", "unknown")
registers = target_plc.get("registers", {})
# Ensure register type list exists
if value_type not in registers:
registers[value_type] = []
# Check if register already exists
existing_ids = {
reg.get("id") for reg in registers.get(value_type, [])
if isinstance(reg, dict) and reg.get("id")
}
if controller_id not in existing_ids:
# Create new register entry on target PLC
new_reg = {
"address": address,
"count": count,
"id": controller_id,
"io": "output" # HMI controllers write to PLC, so output
}
registers[value_type].append(new_reg)
actions.append(RepairAction(
entity=f"plcs['{plc_name}'].registers.{value_type}",
action="created register for HMI controller",
details=f"Added register id='{controller_id}' io='output' for HMI '{hmi_name}' controller"
))
return config, actions
def _get_next_free_address(registers: Dict[str, Any], reg_type: str) -> int:
"""
Get next free address for a register type.
Finds the maximum existing address and returns max + 1.
If no registers exist, starts at address 1.
"""
max_addr = 0
for reg in registers.get(reg_type, []):
if isinstance(reg, dict):
addr = reg.get("address", 0)
count = reg.get("count", 1)
max_addr = max(max_addr, addr + count - 1)
return max_addr + 1 if max_addr > 0 else 1
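The allocation rule (max of `address + count - 1`, plus one) can be exercised standalone; a sketch mirroring `_get_next_free_address` with hypothetical register entries:

```python
def next_free_address(regs):
    # highest occupied address across all (address, count) spans
    max_addr = 0
    for reg in regs:
        max_addr = max(max_addr, reg.get("address", 0) + reg.get("count", 1) - 1)
    return max_addr + 1 if max_addr > 0 else 1

print(next_free_address([]))                          # 1 (empty -> start at 1)
print(next_free_address([{"address": 1, "count": 2},
                         {"address": 5, "count": 1}]))  # 6 (5 occupied -> 6)
```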
def _register_exists_on_device(registers: Dict[str, Any], register_id: str) -> bool:
"""
Check if a register with given id/physical_value exists on any register type.
"""
for reg_type in ["coil", "discrete_input", "holding_register", "input_register"]:
for reg in registers.get(reg_type, []):
if isinstance(reg, dict):
if reg.get("id") == register_id or reg.get("physical_value") == register_id:
return True
return False
def repair_target_device_registers(config: Dict[str, Any]) -> Tuple[Dict[str, Any], List[RepairAction]]:
"""
Create missing registers on target devices (actuators, sensors, PLCs).
When a PLC controller references a register id on a target device (via
outbound_connection IP), the target device must have that register defined.
This repair:
1. For each PLC controller, ensures target actuator/PLC has the register
2. For each PLC monitor, ensures target sensor/PLC has the register
Register creation rules:
- Actuators: use physical_value field, io not needed (device receives commands)
- Sensors: use physical_value field, io not needed (device provides data)
- PLCs (PLC-to-PLC): use id field, io="input" (receiving from another PLC)
Args:
config: Configuration dict (will be modified in place)
Returns:
(modified_config, list_of_repair_actions)
"""
actions: List[RepairAction] = []
# Build IP -> device mappings
device_by_ip: Dict[str, Tuple[str, Dict[str, Any]]] = {}
for plc in config.get("plcs", []):
net = plc.get("network", {})
if ip := net.get("ip"):
device_by_ip[ip] = ("plc", plc)
for sensor in config.get("sensors", []):
net = sensor.get("network", {})
if ip := net.get("ip"):
device_by_ip[ip] = ("sensor", sensor)
for actuator in config.get("actuators", []):
net = actuator.get("network", {})
if ip := net.get("ip"):
device_by_ip[ip] = ("actuator", actuator)
# Process each PLC's controllers and monitors
for plc in config.get("plcs", []):
plc_name = plc.get("name", "unknown")
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in plc.get("outbound_connections", []):
if conn.get("type") == "tcp" and conn.get("id"):
conn_to_ip[conn["id"]] = conn.get("ip", "")
# Process controllers: ensure target device has the register
for controller in plc.get("controllers", []):
controller_id = controller.get("id")
value_type = controller.get("value_type") # e.g., "coil"
conn_id = controller.get("outbound_connection_id")
address = controller.get("address", 0)
count = controller.get("count", 1)
if not controller_id or not value_type or not conn_id:
continue
# Skip if connection not found (could be RTU)
target_ip = conn_to_ip.get(conn_id)
if not target_ip:
continue
# Find target device
if target_ip not in device_by_ip:
continue
device_type, target_device = device_by_ip[target_ip]
target_name = target_device.get("name", "unknown")
registers = target_device.get("registers", {})
# Ensure register type list exists
if value_type not in registers:
registers[value_type] = []
# Check if register already exists
if _register_exists_on_device(registers, controller_id):
continue
# Determine address (use controller's address or find free one)
new_addr = address if address > 0 else _get_next_free_address(registers, value_type)
# Create register based on device type
if device_type == "plc":
# PLC-to-PLC: check if target PLC has a controller using same id
# If so, skip - repair_plc_local_registers will handle it with io="output"
target_has_controller = any(
c.get("id") == controller_id
for c in target_device.get("controllers", [])
)
if target_has_controller:
# Skip - the target PLC's own controller takes precedence
continue
# Target PLC receives writes, so io="input"
new_reg = {
"address": new_addr,
"count": count,
"id": controller_id,
"io": "input" # Target receives commands from source PLC
}
else:
# Actuator: use physical_value (no io field needed)
new_reg = {
"address": new_addr,
"count": count,
"physical_value": controller_id
}
registers[value_type].append(new_reg)
actions.append(RepairAction(
entity=f"{device_type}s['{target_name}'].registers.{value_type}",
action="created register for PLC controller",
details=f"Added register '{controller_id}' for {plc_name}.controller target"
))
# Process monitors: ensure target device has the register
for monitor in plc.get("monitors", []):
monitor_id = monitor.get("id")
value_type = monitor.get("value_type") # e.g., "input_register"
conn_id = monitor.get("outbound_connection_id")
address = monitor.get("address", 0)
count = monitor.get("count", 1)
if not monitor_id or not value_type or not conn_id:
continue
# Skip if connection not found (could be RTU)
target_ip = conn_to_ip.get(conn_id)
if not target_ip:
continue
# Find target device
if target_ip not in device_by_ip:
continue
device_type, target_device = device_by_ip[target_ip]
target_name = target_device.get("name", "unknown")
registers = target_device.get("registers", {})
# Ensure register type list exists
if value_type not in registers:
registers[value_type] = []
# Check if register already exists
if _register_exists_on_device(registers, monitor_id):
continue
# Determine address (use monitor's address or find free one)
new_addr = address if address > 0 else _get_next_free_address(registers, value_type)
# Create register based on device type
if device_type == "plc":
# PLC-to-PLC: check if target PLC has a monitor using same id
# If so, skip - repair_plc_local_registers will handle it with io="input"
target_has_monitor = any(
m.get("id") == monitor_id
for m in target_device.get("monitors", [])
)
if target_has_monitor:
# Skip - the target PLC's own monitor takes precedence
continue
# Target PLC provides data, so io="output"
new_reg = {
"address": new_addr,
"count": count,
"id": monitor_id,
"io": "output" # Target provides data to source PLC
}
else:
# Sensor: use physical_value (no io field needed)
new_reg = {
"address": new_addr,
"count": count,
"physical_value": monitor_id
}
registers[value_type].append(new_reg)
actions.append(RepairAction(
entity=f"{device_type}s['{target_name}'].registers.{value_type}",
action="created register for PLC monitor",
details=f"Added register '{monitor_id}' for {plc_name}.monitor target"
))
return config, actions
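Both the controller and monitor passes resolve targets the same way: connection id -> IP -> device. A minimal sketch of that lookup chain (the topology below is hypothetical):

```python
plcs = [{"name": "plc1", "network": {"ip": "192.168.0.2"},
         "outbound_connections": [{"type": "tcp", "ip": "192.168.0.10",
                                   "port": 502, "id": "to_level_sensor"}]}]
sensors = [{"name": "level_sensor", "network": {"ip": "192.168.0.10"}}]

# IP -> (device_type, device) mapping, as built at the top of the repair
device_by_ip = {s["network"]["ip"]: ("sensor", s) for s in sensors}
# connection id -> target IP, per PLC
conn_to_ip = {c["id"]: c["ip"] for c in plcs[0]["outbound_connections"]
              if c.get("type") == "tcp"}

kind, target = device_by_ip[conn_to_ip["to_level_sensor"]]
print(kind, target["name"])  # sensor level_sensor
```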

tools/safe_eval.py (new file)
@@ -0,0 +1,264 @@
"""
Safe expression evaluation using AST parsing.
This module provides safe_eval() for evaluating expressions from control plans.
Only a whitelist of safe AST nodes and function calls are allowed.
Security model:
- Parse expression with ast.parse(mode='eval')
- Walk AST and verify all nodes are in the whitelist
- If valid, compile and eval with restricted locals/globals
Allowed:
- Constants: numbers, strings, booleans, None
- Names: variable references (resolved from provided namespace)
- BinOp: +, -, *, /, //, %, **
- UnaryOp: -, +, not
- BoolOp: and, or
- Compare: ==, !=, <, <=, >, >=, in, not in
- Call: only allowlisted functions (min, max, abs, int, float, bool, clamp)
- IfExp: ternary (x if cond else y)
Forbidden:
- Attribute access (obj.attr)
- Subscript (obj[key])
- Lambda
- Comprehensions (list, dict, set, generator)
- Import
- Call to non-allowlisted functions
- Assignment expressions (:=)
"""
from __future__ import annotations
import ast
from typing import Any, Dict, Optional, Set
class UnsafeExpressionError(Exception):
"""Raised when an expression contains unsafe AST nodes."""
pass
# Allowlisted function names that can be called in expressions
SAFE_FUNCTIONS: Set[str] = {
"min",
"max",
"abs",
"int",
"float",
"bool",
"clamp", # Custom clamp function provided in builtins
}
# Allowlisted AST node types
SAFE_NODES: Set[type] = {
ast.Expression, # Top-level for mode='eval'
ast.Constant, # Literals (numbers, strings, booleans, None)
ast.Name, # Variable references
ast.Load, # Load context for names
ast.BinOp, # Binary operations
ast.UnaryOp, # Unary operations
ast.BoolOp, # Boolean operations (and, or)
ast.Compare, # Comparisons
ast.Call, # Function calls (restricted to SAFE_FUNCTIONS)
ast.IfExp, # Ternary: x if cond else y
# Operators
ast.Add,
ast.Sub,
ast.Mult,
ast.Div,
ast.FloorDiv,
ast.Mod,
ast.Pow,
ast.USub, # Unary minus
ast.UAdd, # Unary plus
ast.Not, # not
ast.And,
ast.Or,
ast.Eq,
ast.NotEq,
ast.Lt,
ast.LtE,
ast.Gt,
ast.GtE,
ast.In,
ast.NotIn,
}
def _clamp(x: float, lo: float, hi: float) -> float:
"""Clamp x to the range [lo, hi]."""
return lo if x < lo else hi if x > hi else x
def validate_expression(expr: str) -> None:
"""
Validate that an expression is safe to evaluate.
Raises:
UnsafeExpressionError: if expression contains unsafe constructs
SyntaxError: if expression is not valid Python
"""
try:
tree = ast.parse(expr, mode='eval')
except SyntaxError as e:
raise SyntaxError(f"Invalid expression: {e}")
for node in ast.walk(tree):
node_type = type(node)
# Check if node type is allowed
if node_type not in SAFE_NODES:
raise UnsafeExpressionError(
f"Unsafe AST node type: {node_type.__name__} in expression: {expr}"
)
# Special check for Call nodes: only allow safe functions
if isinstance(node, ast.Call):
# Function must be a simple Name (no attribute access)
if not isinstance(node.func, ast.Name):
raise UnsafeExpressionError(
f"Unsafe function call (not a simple name) in expression: {expr}"
)
func_name = node.func.id
if func_name not in SAFE_FUNCTIONS:
raise UnsafeExpressionError(
f"Unsafe function call: {func_name} in expression: {expr}. "
f"Allowed: {', '.join(sorted(SAFE_FUNCTIONS))}"
)
def safe_eval(expr: str, namespace: Dict[str, Any]) -> Any:
"""
Safely evaluate an expression with the given namespace.
Args:
expr: Python expression string
namespace: dict mapping variable names to values
Returns:
The result of evaluating the expression
Raises:
UnsafeExpressionError: if expression contains unsafe constructs
SyntaxError: if expression is not valid Python
NameError: if expression references undefined variables
Exception: for runtime errors (division by zero, etc.)
"""
# Validate expression safety
validate_expression(expr)
# Build safe globals with only our clamp function and builtins
safe_globals: Dict[str, Any] = {
"__builtins__": {
"min": min,
"max": max,
"abs": abs,
"int": int,
"float": float,
"bool": bool,
"True": True,
"False": False,
"None": None,
"clamp": _clamp,
}
}
# Compile and evaluate
code = compile(expr, "<control_plan>", "eval")
return eval(code, safe_globals, namespace)
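The validate-then-eval pattern can be condensed into a few lines; a minimal sketch of the same allow-list idea (only `min` is allowlisted here, unlike the full SAFE_FUNCTIONS set):

```python
import ast

# condensed allow-list: enough for simple arithmetic with one safe call
ALLOWED = (ast.Expression, ast.Constant, ast.Name, ast.Load,
           ast.BinOp, ast.Add, ast.Mult, ast.Call)

def tiny_safe_eval(expr, namespace):
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"unsafe node: {type(node).__name__}")
        if isinstance(node, ast.Call) and not (
                isinstance(node.func, ast.Name) and node.func.id == "min"):
            raise ValueError("unsafe call")
    # restricted globals: no real builtins leak into the expression
    return eval(compile(tree, "<expr>", "eval"),
                {"__builtins__": {"min": min}}, namespace)

print(tiny_safe_eval("min(a, b) * 2", {"a": 3, "b": 5}))  # 6
try:
    tiny_safe_eval("().__class__", {})  # sandbox-escape attempt
except ValueError as exc:
    print(exc)  # unsafe node: Attribute
```

Rejecting `ast.Attribute` outright is what blocks the classic `().__class__`-style escapes before `eval` ever runs.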
def safe_eval_condition(expr: str, namespace: Dict[str, Any]) -> bool:
"""
Safely evaluate a boolean condition.
Same as safe_eval but ensures result is converted to bool.
"""
result = safe_eval(expr, namespace)
return bool(result)
def extract_variable_names(expr: str) -> Set[str]:
"""
Extract all variable names referenced in an expression.
This is useful for validation: checking that all referenced
variables exist in the namespace before runtime.
Returns:
Set of variable names referenced in the expression
Raises:
SyntaxError: if expression is not valid Python
"""
try:
tree = ast.parse(expr, mode='eval')
except SyntaxError as e:
raise SyntaxError(f"Invalid expression: {e}")
names: Set[str] = set()
for node in ast.walk(tree):
if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
# Skip function names (they're provided in builtins)
if node.id not in SAFE_FUNCTIONS and node.id not in {"True", "False", "None"}:
names.add(node.id)
return names
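`extract_variable_names` boils down to collecting `ast.Name` loads and dropping allowlisted function names; a standalone sketch (the expression below is hypothetical):

```python
import ast

expr = "clamp(tank_level + inflow, 0, max_level)"
tree = ast.parse(expr, mode="eval")
names = {
    node.id
    for node in ast.walk(tree)
    if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
} - {"clamp"}  # function names come from builtins, not the namespace

print(sorted(names))  # ['inflow', 'max_level', 'tank_level']
```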
def generate_python_code(expr: str, namespace_var: str = "pv") -> str:
"""
Generate Python code for an expression, with variables read from a dict.
This is used by the compiler to generate HIL logic code.
Args:
expr: The expression string
namespace_var: Name of the dict variable containing values
Returns:
Python code string that evaluates the expression
Example:
>>> generate_python_code("x + y * 2", "physical_values")
"physical_values.get('x', 0) + physical_values.get('y', 0) * 2"
"""
# First validate the expression
validate_expression(expr)
# Parse and transform
tree = ast.parse(expr, mode='eval')
class NameTransformer(ast.NodeTransformer):
"""Transform Name nodes to dict.get() calls."""
def visit_Name(self, node: ast.Name) -> ast.AST:
# Skip function names and builtins
if node.id in SAFE_FUNCTIONS or node.id in {"True", "False", "None"}:
return node
# Transform: x -> pv.get('x', 0)
if isinstance(node.ctx, ast.Load):
return ast.Call(
func=ast.Attribute(
value=ast.Name(id=namespace_var, ctx=ast.Load()),
attr='get',
ctx=ast.Load()
),
args=[
ast.Constant(value=node.id),
ast.Constant(value=0)
],
keywords=[]
)
return node
# Transform the tree
transformer = NameTransformer()
new_tree = transformer.visit(tree)
ast.fix_missing_locations(new_tree)
# Convert back to code
return ast.unparse(new_tree.body)
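The Name-to-`dict.get()` rewrite can be reproduced standalone with `ast.NodeTransformer`; a minimal sketch (the `pv` dict name and the exclusion set are stand-ins for the function's parameters):

```python
import ast

class NameToGet(ast.NodeTransformer):
    """Rewrite loads of bare names into pv.get('name', 0) calls."""
    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load) and node.id not in {"min", "max", "clamp"}:
            return ast.copy_location(ast.Call(
                func=ast.Attribute(value=ast.Name(id="pv", ctx=ast.Load()),
                                   attr="get", ctx=ast.Load()),
                args=[ast.Constant(node.id), ast.Constant(0)],
                keywords=[]), node)
        return node

tree = ast.parse("x + y * 2", mode="eval")
new = ast.fix_missing_locations(NameToGet().visit(tree))
code = ast.unparse(new.body)
print(code)  # pv.get('x', 0) + pv.get('y', 0) * 2
```

Since the output is a plain expression string, the compiler can paste it into generated HIL logic without carrying any runtime dependency on this module.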

@@ -7,13 +7,15 @@ Validates that HMI monitors and controllers correctly reference:
2. Reachable target device (by IP)
3. Existing register on target device (by id)
4. Matching value_type and address
5. Network configuration: no duplicate IPs, valid subnets
This is deterministic validation - no guessing or heuristics.
If something cannot be verified, it fails with a clear error.
"""
import ipaddress
from dataclasses import dataclass
-from typing import Dict, List, Optional, Tuple, Union
+from typing import Dict, List, Optional, Set, Tuple, Union
from models.ics_simlab_config_v2 import (
Config,
@@ -23,6 +25,7 @@ from models.ics_simlab_config_v2 import (
Actuator,
RegisterBlock,
TCPConnection,
IPNetwork,
)
@@ -339,6 +342,414 @@ def validate_plc_semantics(config: Config) -> List[SemanticError]:
return errors
def validate_orphan_devices(config: Config) -> List[SemanticError]:
"""
Validate that all sensors and actuators are referenced by at least one PLC.
P0 Issue: Open-loop sensors/actuators are useless and indicate config error.
Rules:
- Each sensor must be referenced by at least one PLC monitor (outbound_connection IP match)
- Each actuator must be referenced by at least one PLC controller (outbound_connection IP match)
Args:
config: Validated Config object
Returns:
List of SemanticError objects for orphan devices
"""
errors: List[SemanticError] = []
# Collect all sensor IPs
sensor_ips: Dict[str, str] = {} # ip -> sensor name
for sensor in config.sensors:
if sensor.network and sensor.network.ip:
sensor_ips[sensor.network.ip] = sensor.name
# Collect all actuator IPs
actuator_ips: Dict[str, str] = {} # ip -> actuator name
for actuator in config.actuators:
if actuator.network and actuator.network.ip:
actuator_ips[actuator.network.ip] = actuator.name
# Collect all IPs referenced by PLC outbound connections for monitors
plc_monitor_target_ips: Set[str] = set()
plc_controller_target_ips: Set[str] = set()
for plc in config.plcs:
# Build connection_id -> IP mapping
conn_to_ip: Dict[str, str] = {}
for conn in plc.outbound_connections:
if isinstance(conn, TCPConnection) and conn.id:
conn_to_ip[conn.id] = conn.ip
# Collect IPs targeted by monitors
for monitor in plc.monitors:
if monitor.outbound_connection_id in conn_to_ip:
plc_monitor_target_ips.add(conn_to_ip[monitor.outbound_connection_id])
# Collect IPs targeted by controllers
for controller in plc.controllers:
if controller.outbound_connection_id in conn_to_ip:
plc_controller_target_ips.add(conn_to_ip[controller.outbound_connection_id])
# Check for orphan sensors (not monitored by any PLC)
for sensor_ip, sensor_name in sensor_ips.items():
if sensor_ip not in plc_monitor_target_ips:
errors.append(SemanticError(
entity=f"sensors['{sensor_name}']",
message=(
f"Orphan sensor: no PLC monitor references IP {sensor_ip}. "
f"Add a PLC outbound_connection and monitor for this sensor."
)
))
# Check for orphan actuators (not controlled by any PLC)
for actuator_ip, actuator_name in actuator_ips.items():
if actuator_ip not in plc_controller_target_ips:
errors.append(SemanticError(
entity=f"actuators['{actuator_name}']",
message=(
f"Orphan actuator: no PLC controller references IP {actuator_ip}. "
f"Add a PLC outbound_connection and controller for this actuator."
)
))
return errors
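Once the monitor-target IP set is built, orphan detection is plain set membership; a tiny sketch with hypothetical IPs:

```python
# ip -> sensor name, as collected from sensor.network.ip
sensor_ips = {"192.168.0.10": "level_sensor", "192.168.0.11": "flow_sensor"}

# IPs targeted by at least one PLC monitor (resolved via outbound_connection ids)
plc_monitor_target_ips = {"192.168.0.10"}

orphans = [name for ip, name in sensor_ips.items()
           if ip not in plc_monitor_target_ips]
print(orphans)  # ['flow_sensor']
```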
def validate_boolean_type_rules(config: Config) -> List[SemanticError]:
"""
Validate that boolean signals use correct Modbus register types.
P0 Issue: Boolean signals mapped to input_register/holding_register are incorrect.
Modbus type rules:
- Commanded boolean (write) -> coil (function code 5/15)
- Measured boolean (read-only) -> discrete_input (function code 2)
- input_register/holding_register are for 16-bit integers, not booleans
Heuristic for detecting boolean signals (name-based only):
- physical_value/id contains one of the BOOLEAN_PATTERNS substrings,
e.g. "switch", "state", "status", "at_", "is_", "_on", "_off", "enable", "active"
Args:
config: Validated Config object
Returns:
List of SemanticError objects for type rule violations
"""
errors: List[SemanticError] = []
# Boolean indicator patterns (case-insensitive)
BOOLEAN_PATTERNS = [
"switch", "state", "status", "at_", "is_", "_on", "_off",
"enable", "active", "running", "alarm", "fault", "ready",
"open", "close", "start", "stop", "button", "flag"
]
def looks_like_boolean(name: str) -> bool:
"""Check if a physical_value name suggests boolean semantics."""
if not name:
return False
name_lower = name.lower()
return any(pattern in name_lower for pattern in BOOLEAN_PATTERNS)
# Check sensors - boolean values should use discrete_input, not input_register
for sensor in config.sensors:
for reg in sensor.registers.input_register:
pv = reg.physical_value or ""
if looks_like_boolean(pv):
errors.append(SemanticError(
entity=f"sensors['{sensor.name}'].registers.input_register (physical_value='{pv}')",
message=(
f"Boolean signal '{pv}' should use discrete_input, not input_register. "
f"Move this register to discrete_input for proper Modbus function code."
)
))
# Check actuators - boolean values should use coil, not holding_register
for actuator in config.actuators:
for reg in actuator.registers.holding_register:
pv = reg.physical_value or ""
if looks_like_boolean(pv):
errors.append(SemanticError(
entity=f"actuators['{actuator.name}'].registers.holding_register (physical_value='{pv}')",
message=(
f"Boolean signal '{pv}' should use coil, not holding_register. "
f"Move this register to coil for proper Modbus function code."
)
))
# Check PLCs - boolean inputs should be discrete_input, boolean outputs should be coil
for plc in config.plcs:
for reg in plc.registers.input_register:
reg_id = reg.id or ""
if looks_like_boolean(reg_id):
errors.append(SemanticError(
entity=f"plcs['{plc.name}'].registers.input_register (id='{reg_id}')",
message=(
f"Boolean signal '{reg_id}' should use discrete_input (for input) "
f"or coil (for output), not input_register."
)
))
for reg in plc.registers.holding_register:
reg_id = reg.id or ""
if looks_like_boolean(reg_id):
errors.append(SemanticError(
entity=f"plcs['{plc.name}'].registers.holding_register (id='{reg_id}')",
message=(
f"Boolean signal '{reg_id}' should use coil (for output) "
f"or discrete_input (for input), not holding_register."
)
))
return errors
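The name-based heuristic can be exercised on its own; this standalone copy of `looks_like_boolean()` mirrors the pattern list above:

```python
# Standalone copy of the looks_like_boolean() heuristic, so the
# case-insensitive substring matching can be tested in isolation.
BOOLEAN_PATTERNS = [
    "switch", "state", "status", "at_", "is_", "_on", "_off",
    "enable", "active", "running", "alarm", "fault", "ready",
    "open", "close", "start", "stop", "button", "flag",
]

def looks_like_boolean(name: str) -> bool:
    if not name:
        return False
    name_lower = name.lower()
    return any(pattern in name_lower for pattern in BOOLEAN_PATTERNS)

print(looks_like_boolean("tank_level_switch"))  # True  -> wants discrete_input
print(looks_like_boolean("flow_rate"))          # False -> fine as input_register
print(looks_like_boolean("motor_ON"))           # True  (case-insensitive "_on")
```

Note the substring matching is deliberately loose, so names like "heat_exchanger" (contains "at_") will also trigger it; the rule reports errors, so a false positive surfaces for human review rather than being silently applied.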
def validate_plc_local_register_coherence(config: Config) -> List[SemanticError]:
"""
Validate PLC local register coherence with monitors/controllers.
Native ICS-SimLab pattern requires that:
- For each PLC monitor with id=X and value_type=T, there should be a
local register in plc.registers[T] with id=X and io="input"
- For each PLC controller with id=Y and value_type=T, there should be a
local register in plc.registers[T] with id=Y and io="output"
This ensures the PLC has local registers to cache monitored values
and source controlled values, matching native example patterns.
Args:
config: Validated Config object
Returns:
List of SemanticError objects for coherence violations
"""
errors: List[SemanticError] = []
for plc in config.plcs:
plc_name = plc.name
# Build set of existing registers by type -> id -> io
existing_regs: Dict[str, Dict[str, str]] = {
"coil": {},
"discrete_input": {},
"holding_register": {},
"input_register": {},
}
for reg in plc.registers.coil:
if reg.id:
existing_regs["coil"][reg.id] = reg.io or ""
for reg in plc.registers.discrete_input:
if reg.id:
existing_regs["discrete_input"][reg.id] = reg.io or ""
for reg in plc.registers.holding_register:
if reg.id:
existing_regs["holding_register"][reg.id] = reg.io or ""
for reg in plc.registers.input_register:
if reg.id:
existing_regs["input_register"][reg.id] = reg.io or ""
# Check monitors: each monitor.id should have a local register with io="input"
for i, monitor in enumerate(plc.monitors):
monitor_id = monitor.id
value_type = monitor.value_type # e.g., "input_register"
if value_type not in existing_regs:
errors.append(SemanticError(
entity=f"{plc_name}.monitors[{i}] (id='{monitor_id}')",
message=f"Unknown value_type '{value_type}'"
))
continue
if monitor_id not in existing_regs[value_type]:
errors.append(SemanticError(
entity=f"{plc_name}.monitors[{i}] (id='{monitor_id}')",
message=(
f"Missing local register: {plc_name}.registers.{value_type} "
f"should contain a register with id='{monitor_id}' and io='input' "
f"to cache monitored values (native pattern)"
)
))
else:
# Check io direction
actual_io = existing_regs[value_type][monitor_id]
if actual_io and actual_io != "input":
errors.append(SemanticError(
entity=f"{plc_name}.monitors[{i}] (id='{monitor_id}')",
message=(
f"Register io mismatch: {plc_name}.registers.{value_type}['{monitor_id}'] "
f"has io='{actual_io}' but monitors require io='input'"
)
))
# Check controllers: each controller.id should have a local register with io="output"
for i, controller in enumerate(plc.controllers):
controller_id = controller.id
value_type = controller.value_type
if value_type not in existing_regs:
errors.append(SemanticError(
entity=f"{plc_name}.controllers[{i}] (id='{controller_id}')",
message=f"Unknown value_type '{value_type}'"
))
continue
if controller_id not in existing_regs[value_type]:
errors.append(SemanticError(
entity=f"{plc_name}.controllers[{i}] (id='{controller_id}')",
message=(
f"Missing local register: {plc_name}.registers.{value_type} "
f"should contain a register with id='{controller_id}' and io='output' "
f"to source controlled values (native pattern)"
)
))
else:
# Check io direction
actual_io = existing_regs[value_type][controller_id]
if actual_io and actual_io != "output":
errors.append(SemanticError(
entity=f"{plc_name}.controllers[{i}] (id='{controller_id}')",
message=(
f"Register io mismatch: {plc_name}.registers.{value_type}['{controller_id}'] "
f"has io='{actual_io}' but controllers require io='output'"
)
))
return errors
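A toy version of the monitor-side coherence rule, with hypothetical dict shapes in place of the Pydantic register objects:

```python
# Toy version of the monitor coherence rule: every monitor id must resolve to
# a local register of the same value_type with io="input". The dict shapes
# here are illustrative; the real check walks Config objects.
existing_regs = {
    "input_register": {"tank_level": "input"},
    "holding_register": {"pump_speed": "output"},
}

def check_monitor(monitor_id: str, value_type: str) -> str:
    regs = existing_regs.get(value_type)
    if regs is None:
        return f"unknown value_type '{value_type}'"
    io = regs.get(monitor_id)
    if io is None:
        return f"missing local register '{monitor_id}'"
    if io != "input":
        return f"io mismatch: expected 'input', got '{io}'"
    return "ok"

print(check_monitor("tank_level", "input_register"))    # ok
print(check_monitor("pump_speed", "holding_register"))  # io mismatch: ...
print(check_monitor("flow", "coil"))                    # unknown value_type 'coil'
```

The controller-side rule is identical except that it requires io="output".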
def validate_network_config(config: Config) -> List[SemanticError]:
"""
Validate network configuration: no duplicate IPs, valid subnets.
P0 Issue: ICS-SimLab docker-compose fails with "Address already in use"
when multiple devices share the same IP on the same docker_network.
Checks performed:
1. No duplicate network.ip within the same docker_network
2. Each device.network.docker_network exists in ip_networks[]
3. Each device IP is within the declared subnet for that network
Args:
config: Validated Config object
Returns:
List of SemanticError objects for network issues
"""
errors: List[SemanticError] = []
# Build ip_networks lookup: docker_name -> (name, subnet)
networks: Dict[str, Tuple[str, str]] = {}
for net in config.ip_networks:
networks[net.docker_name] = (net.name, net.subnet)
# Collect all devices with network config
# Structure: List[(entity_name, device_type, ip, docker_network)]
devices_with_network: List[Tuple[str, str, str, str]] = []
# UI has network config
if config.ui and config.ui.network:
net = config.ui.network
docker_net = net.docker_network or "default"
devices_with_network.append(("ui", "ui", net.ip, docker_net))
# HMIs have network config
for hmi in config.hmis:
if hmi.network:
docker_net = hmi.network.docker_network or "default"
devices_with_network.append((hmi.name, "hmi", hmi.network.ip, docker_net))
# PLCs have network config (optional but common)
for plc in config.plcs:
if plc.network:
docker_net = plc.network.docker_network or "default"
devices_with_network.append((plc.name, "plc", plc.network.ip, docker_net))
# Sensors have network config
for sensor in config.sensors:
if sensor.network:
docker_net = sensor.network.docker_network or "default"
devices_with_network.append((sensor.name, "sensor", sensor.network.ip, docker_net))
# Actuators have network config
for actuator in config.actuators:
if actuator.network:
docker_net = actuator.network.docker_network or "default"
devices_with_network.append((actuator.name, "actuator", actuator.network.ip, docker_net))
# Note: HILs do NOT have network config (they're simulation-only)
# Group by docker_network for duplicate detection
by_network: Dict[str, List[Tuple[str, str, str]]] = {} # docker_net -> [(name, type, ip)]
for name, dev_type, ip, docker_net in devices_with_network:
if docker_net not in by_network:
by_network[docker_net] = []
by_network[docker_net].append((name, dev_type, ip))
# Check 1: Duplicate IPs within same docker_network
for docker_net, devices in by_network.items():
# Build ip -> list of devices mapping
ip_to_devices: Dict[str, List[Tuple[str, str]]] = {} # ip -> [(name, type)]
for name, dev_type, ip in devices:
if ip not in ip_to_devices:
ip_to_devices[ip] = []
ip_to_devices[ip].append((name, dev_type))
# Report duplicates
for ip, device_list in ip_to_devices.items():
if len(device_list) > 1:
colliders = ", ".join(f"{name} ({dtype})" for name, dtype in device_list)
errors.append(SemanticError(
entity=f"network[{docker_net}]",
message=(
f"Duplicate IP {ip}: {colliders}. "
f"Each device must have a unique IP within the same docker_network."
)
))
# Check 2: docker_network exists in ip_networks[]
for name, dev_type, ip, docker_net in devices_with_network:
if docker_net != "default" and docker_net not in networks:
available = sorted(networks.keys()) if networks else ["(none)"]
errors.append(SemanticError(
entity=f"{name} ({dev_type})",
message=(
f"docker_network '{docker_net}' not found in ip_networks. "
f"Available: {available}"
)
))
# Check 3: IP is within subnet
for name, dev_type, ip, docker_net in devices_with_network:
if docker_net not in networks:
continue # Already reported in check 2
_, subnet_str = networks[docker_net]
try:
network = ipaddress.ip_network(subnet_str, strict=False)
ip_addr = ipaddress.ip_address(ip)
if ip_addr not in network:
errors.append(SemanticError(
entity=f"{name} ({dev_type})",
message=(
f"IP {ip} is not within subnet {subnet_str} "
f"for docker_network '{docker_net}'"
)
))
except ValueError as e:
# Invalid IP or subnet format
errors.append(SemanticError(
entity=f"{name} ({dev_type})",
message=f"Invalid IP/subnet format: {e}"
))
return errors
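Check 3 leans on the standard-library ipaddress module; a quick demonstration of the membership test and the ValueError path the validator catches:

```python
import ipaddress

# strict=False accepts a subnet string whose host bits are set
# (e.g. "192.168.0.1/24"), which hand-written configs often contain.
network = ipaddress.ip_network("192.168.0.0/24", strict=False)

print(ipaddress.ip_address("192.168.0.21") in network)  # True
print(ipaddress.ip_address("192.168.1.21") in network)  # False

# A malformed IP or subnet raises ValueError, which the validator
# converts into a SemanticError instead of crashing.
try:
    ipaddress.ip_network("not-a-subnet")
except ValueError as exc:
    print(f"invalid subnet: {exc}")
```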
def validate_all_semantics(config: Config) -> List[SemanticError]:
"""
Run all semantic validations.
@ -350,6 +761,11 @@ def validate_all_semantics(config: Config) -> List[SemanticError]:
List of all SemanticError objects
"""
errors: List[SemanticError] = []
# P0: Network validation first (docker-compose fails if IPs collide)
errors.extend(validate_network_config(config))
errors.extend(validate_hmi_semantics(config))
errors.extend(validate_plc_semantics(config))
errors.extend(validate_orphan_devices(config))
errors.extend(validate_boolean_type_rules(config))
errors.extend(validate_plc_local_register_coherence(config))
return errors