remove old code
.flake8
@@ -1,13 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203, W503, E501
exclude =
    .git,
    __pycache__,
    .venv,
    venv,
    build,
    dist,
    *.egg-info
per-file-ignores =
    __init__.py:F401
@@ -1,399 +0,0 @@
# Watermaker PLC API - Complete Rebuild Specification

## Overview

This document specifies a from-scratch rebuild of the Watermaker PLC API. The rebuild must preserve the existing functionality exactly while improving code structure and documentation.

**Current API Version**: 1.1
**PLC Target**: 198.18.100.141:502 (Modbus TCP)
**Protocol**: Modbus TCP/IP

## Table of Contents

1. [API Routes Specification](#api-routes-specification)
2. [PLC Register Specifications](#plc-register-specifications)
3. [DTS Process Documentation](#dts-process-documentation)
4. [Timer Usage and Progress Monitoring](#timer-usage-and-progress-monitoring)
5. [Rebuild Architecture Plan](#rebuild-architecture-plan)
6. [Implementation Guidelines](#implementation-guidelines)

---
## API Routes Specification

### System & Status Endpoints

| Method | Endpoint | Description | Response Time |
|--------|----------|-------------|---------------|
| `GET` | [`/api/status`](watermaker_plc_api/controllers/system_controller.py:27) | Connection and system status | < 100ms |
| `GET` | [`/api/all`](watermaker_plc_api/controllers/system_controller.py:44) | All PLC data in one response | < 500ms |
| `GET` | [`/api/select`](watermaker_plc_api/controllers/system_controller.py:66) | Selective data retrieval (bandwidth optimized) | < 200ms |
| `GET` | [`/api/errors`](watermaker_plc_api/controllers/system_controller.py:122) | Recent errors (last 10) | < 50ms |
| `GET` | [`/api/config`](watermaker_plc_api/controllers/system_controller.py:194) | API configuration and available endpoints | < 50ms |
| `POST` | [`/api/write/register`](watermaker_plc_api/controllers/system_controller.py:133) | Write single holding register | < 100ms |

### Data Monitoring Endpoints

| Method | Endpoint | Description | Response Time |
|--------|----------|-------------|---------------|
| `GET` | [`/api/sensors`](watermaker_plc_api/controllers/sensors_controller.py:19) | All sensor data | < 100ms |
| `GET` | [`/api/sensors/category/<category>`](watermaker_plc_api/controllers/sensors_controller.py:32) | Sensors by category | < 100ms |
| `GET` | [`/api/timers`](watermaker_plc_api/controllers/timers_controller.py:18) | All timer data | < 100ms |
| `GET` | [`/api/timers/dts`](watermaker_plc_api/controllers/timers_controller.py:33) | DTS timer data | < 100ms |
| `GET` | [`/api/timers/fwf`](watermaker_plc_api/controllers/timers_controller.py:51) | Fresh Water Flush timer data | < 100ms |
| `GET` | [`/api/rtc`](watermaker_plc_api/controllers/timers_controller.py:69) | Real-time clock data | < 100ms |
| `GET` | [`/api/outputs`](watermaker_plc_api/controllers/outputs_controller.py:18) | Output control data | < 100ms |
| `GET` | [`/api/outputs/active`](watermaker_plc_api/controllers/outputs_controller.py:30) | Active output controls only | < 100ms |
| `GET` | [`/api/runtime`](watermaker_plc_api/controllers/sensors_controller.py:54) | Runtime hours data (IEEE 754 float) | < 100ms |
| `GET` | [`/api/water_counters`](watermaker_plc_api/controllers/sensors_controller.py:66) | Water production counters (gallon totals) | < 100ms |

**Valid Categories for `/api/sensors/category/<category>`:**
- `system` - System status and operational mode
- `pressure` - Water pressure sensors
- `temperature` - Temperature monitoring
- `flow` - Flow rate meters
- `quality` - Water quality (TDS) sensors

### DTS Control Endpoints

| Method | Endpoint | Description | Response Time |
|--------|----------|-------------|---------------|
| `POST` | [`/api/dts/start`](watermaker_plc_api/controllers/dts_controller.py:901) | Start DTS watermaker sequence (async) | < 100ms |
| `POST` | [`/api/dts/stop`](watermaker_plc_api/controllers/dts_controller.py:928) | Stop watermaker sequence (async, mode-dependent) | < 100ms |
| `POST` | [`/api/dts/skip`](watermaker_plc_api/controllers/dts_controller.py:956) | Skip current step automatically (async) | < 100ms |
| `GET` | [`/api/dts/status`](watermaker_plc_api/controllers/dts_controller.py:985) | Get latest DTS operation status | < 50ms |
| `GET` | [`/api/dts/status/<task_id>`](watermaker_plc_api/controllers/dts_controller.py:1016) | Get specific DTS task status (legacy) | < 50ms |
| `POST` | [`/api/dts/cancel`](watermaker_plc_api/controllers/dts_controller.py:1023) | Cancel running DTS operation | < 50ms |
| `POST` | [`/api/dts/cancel/<task_id>`](watermaker_plc_api/controllers/dts_controller.py:1051) | Cancel DTS task (legacy) | < 50ms |
| `GET` | [`/api/dts/current-step-progress`](watermaker_plc_api/controllers/dts_controller.py:1058) | Get current DTS step progress based on timers | < 100ms |
| `GET` | [`/api/dts/r1000-monitor`](watermaker_plc_api/controllers/dts_controller.py:1127) | Get R1000 monitoring status and recent changes | < 100ms |

---
## PLC Register Specifications

### System Control Registers

| Register | Name | Type | Values | Description |
|----------|------|------|--------|-------------|
| [`R1000`](watermaker_plc_api/models/sensor_mappings.py:10) | System Mode | Direct | See mode table below | Main system operational mode |
| [`R1036`](watermaker_plc_api/models/sensor_mappings.py:29) | System Status | Direct | 0=Standby, 5=FWF, 7=Service Mode | Current system status |
| `R71` | Control Commands | Direct | Various command codes | Used for valve control and system commands |
| `R67` | Step Skip Commands | Direct | 32841, 32968 | Used for skipping DTS steps |

**R1000 System Mode Values:**
- `2` - Home/Standby
- `3` - Alarm List
- `5` - DTS Prime
- `6` - DTS Initialization
- `7` - DTS Running/Production
- `8` - Fresh Water Flush
- `9` - Settings
- `15` - Service Mode (Quality & Flush Valves / Pumps)
- `16` - Service Mode (Double Pass & Feed Valves)
- `17` - Service Mode (APC Need Valves)
- `18` - Service Mode (Sensors - TDS, PPM, Flow, Temperature)
- `31` - Overview Schematic
- `32` - Contact Support
- `33` - Seawater (Choose Single or Double Pass)
- `34` - DTS Request
- `65535` - Standby
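For display and logging, the mode table above can be mirrored as a plain lookup. The dictionary and function below are an illustrative sketch built from the documented values, not code from the existing package:

```python
# Illustrative lookup built from the documented R1000 mode table.
R1000_MODES = {
    2: "Home/Standby",
    3: "Alarm List",
    5: "DTS Prime",
    6: "DTS Initialization",
    7: "DTS Running/Production",
    8: "Fresh Water Flush",
    9: "Settings",
    15: "Service Mode (Quality & Flush Valves / Pumps)",
    16: "Service Mode (Double Pass & Feed Valves)",
    17: "Service Mode (APC Need Valves)",
    18: "Service Mode (Sensors - TDS, PPM, Flow, Temperature)",
    31: "Overview Schematic",
    32: "Contact Support",
    33: "Seawater (Choose Single or Double Pass)",
    34: "DTS Request",
    65535: "Standby",
}


def describe_mode(raw: int) -> str:
    """Map a raw R1000 register value to a human-readable mode name."""
    return R1000_MODES.get(raw, f"Unknown mode ({raw})")
```

A rebuild could source this table from `config/register_mappings.py` so the API and documentation stay in sync.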
### Sensor Registers

| Register | Name | Scale | Unit | Category | Description |
|----------|------|-------|------|----------|-------------|
| [`R1003`](watermaker_plc_api/models/sensor_mappings.py:33) | Feed Pressure | Direct | PSI | pressure | Water feed pressure |
| [`R1007`](watermaker_plc_api/models/sensor_mappings.py:34) | High Pressure #2 | Direct | PSI | pressure | High pressure sensor 2 |
| [`R1008`](watermaker_plc_api/models/sensor_mappings.py:35) | High Pressure #1 | Direct | PSI | pressure | High pressure sensor 1 |
| [`R1017`](watermaker_plc_api/models/sensor_mappings.py:47) | Water Temperature | ÷10 | °F | temperature | Water temperature |
| [`R1125`](watermaker_plc_api/models/sensor_mappings.py:48) | System Temperature | ÷10 | °F | temperature | System temperature |
| [`R1120`](watermaker_plc_api/models/sensor_mappings.py:38) | Brine Flowmeter | ÷10 | GPM | flow | Brine flow rate |
| [`R1121`](watermaker_plc_api/models/sensor_mappings.py:39) | 1st Pass Product Flowmeter | ÷10 | GPM | flow | First pass product flow |
| [`R1122`](watermaker_plc_api/models/sensor_mappings.py:40) | 2nd Pass Product Flowmeter | ÷10 | GPM | flow | Second pass product flow |
| [`R1123`](watermaker_plc_api/models/sensor_mappings.py:43) | Product TDS #1 | Direct | PPM | quality | Total dissolved solids sensor 1 |
| [`R1124`](watermaker_plc_api/models/sensor_mappings.py:44) | Product TDS #2 | Direct | PPM | quality | Total dissolved solids sensor 2 |
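The Scale column above implies a small conversion step between raw register counts and engineering units. A minimal sketch, assuming the divide-by-10 set is taken from the table (register names only; not the codebase's actual helper):

```python
# Registers documented with "÷10" scaling in the sensor table above.
# This set is illustrative and covers only the sensors listed there.
SCALE_DIV_10 = {"R1017", "R1125", "R1120", "R1121", "R1122"}


def scale_sensor(register: str, raw: int) -> float:
    """Apply the documented scaling: divide-by-10 registers, else direct."""
    return raw / 10.0 if register in SCALE_DIV_10 else float(raw)
```

For example, a raw water-temperature reading of 785 on `R1017` would be reported as 78.5 °F.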
### Timer Registers

| Register | Name | Scale | Unit | Category | Expected Start | Description |
|----------|------|-------|------|----------|----------------|-------------|
| [`R136`](watermaker_plc_api/models/timer_mappings.py:10) | FWF Flush Timer | ÷10 | sec | fwf_timer | 600 | Fresh water flush timer |
| [`R138`](watermaker_plc_api/models/timer_mappings.py:13) | DTS Valve Positioning Timer | ÷10 | sec | dts_timer | 150 | Valve positioning during DTS start |
| [`R128`](watermaker_plc_api/models/timer_mappings.py:14) | DTS Priming Timer | ÷10 | sec | dts_timer | 1800 | DTS priming phase timer |
| [`R129`](watermaker_plc_api/models/timer_mappings.py:15) | DTS Initialize Timer | ÷10 | sec | dts_timer | 600 | DTS initialization timer |
| [`R133`](watermaker_plc_api/models/timer_mappings.py:16) | DTS Fresh Water Flush Timer | ÷10 | sec | dts_timer | 600 | DTS fresh water flush timer |
| [`R135`](watermaker_plc_api/models/timer_mappings.py:17) | DTS Stop Timer | ÷10 | sec | dts_timer | 100 | DTS stop sequence timer |
| [`R139`](watermaker_plc_api/models/timer_mappings.py:18) | DTS Flush Timer | ÷10 | sec | dts_timer | 600 | DTS flush timer |

"Expected Start" values are raw register counts; with the ÷10 scale they correspond to seconds (e.g. `R128` starts at 1800, i.e. 180 seconds of priming).

### Output Control Registers

| Register | Name | Bit Position | Description |
|----------|------|--------------|-------------|
| [`R257`](watermaker_plc_api/models/output_mappings.py:9) | Low Pressure Pump | 0 | Low pressure pump control |
| [`R258`](watermaker_plc_api/models/output_mappings.py:10) | High Pressure Pump | 1 | High pressure pump control |
| [`R259`](watermaker_plc_api/models/output_mappings.py:11) | Product Divert Valve | 2 | Product divert valve control |
| [`R260`](watermaker_plc_api/models/output_mappings.py:12) | Flush Solenoid | 3 | Flush solenoid control |
| [`R264`](watermaker_plc_api/models/output_mappings.py:13) | Double Pass Solenoid | 7 | Double pass solenoid control |
| [`R265`](watermaker_plc_api/models/output_mappings.py:14) | Shore Feed Solenoid | 8 | Shore feed solenoid control |
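Assuming the Bit Position column describes bits of a packed output-status word (the table itself does not say which register carries that word, so this is an assumption for illustration), testing an output reduces to a bitmask check:

```python
# Hypothetical helper: test a bit in a packed output-status word.
# Bit assignments follow the Bit Position column above; which register
# holds the packed word is an assumption, not taken from the source.
OUTPUT_BITS = {
    "low_pressure_pump": 0,
    "high_pressure_pump": 1,
    "product_divert_valve": 2,
    "flush_solenoid": 3,
    "double_pass_solenoid": 7,
    "shore_feed_solenoid": 8,
}


def output_active(status_word: int, name: str) -> bool:
    """Return True if the named output's bit is set in the status word."""
    return bool(status_word & (1 << OUTPUT_BITS[name]))
```

The `/api/outputs/active` endpoint could be implemented by filtering this mapping to the set bits.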
### Runtime & Counter Registers (32-bit IEEE 754 Float Pairs)

| Register | Name | Pair Register | Unit | Description |
|----------|------|---------------|------|-------------|
| [`R5014`](watermaker_plc_api/models/runtime_mappings.py:9) | Runtime Hours | R5015 | hours | Total system runtime |
| [`R5024`](watermaker_plc_api/models/runtime_mappings.py:15) | Single-Pass Total Gallons | R5025 | gallons | Total single-pass water produced |
| [`R5026`](watermaker_plc_api/models/runtime_mappings.py:17) | Single-Pass Since Last | R5027 | gallons | Single-pass water since last reset |
| [`R5028`](watermaker_plc_api/models/runtime_mappings.py:19) | Double-Pass Total Gallons | R5029 | gallons | Total double-pass water produced |
| [`R5030`](watermaker_plc_api/models/runtime_mappings.py:21) | Double-Pass Since Last | R5031 | gallons | Double-pass water since last reset |
| [`R5032`](watermaker_plc_api/models/runtime_mappings.py:23) | DTS Total Gallons | R5033 | gallons | Total DTS water produced |
| [`R5034`](watermaker_plc_api/models/runtime_mappings.py:25) | DTS Since Last Gallons | R5035 | gallons | DTS water since last reset |
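Each pair above packs one 32-bit IEEE 754 float into two consecutive 16-bit holding registers. A sketch of the decode step; note that Modbus word order varies by PLC, so the default `"little"` (first register holds the low word) is an assumption to verify against the device:

```python
import struct


def decode_float_pair(low_word: int, high_word: int, word_order: str = "little") -> float:
    """Combine two 16-bit Modbus registers into a 32-bit IEEE 754 float.

    "little" word order means the first register of the pair holds the
    low 16 bits; this is device-dependent and assumed here for illustration.
    """
    if word_order == "little":
        raw = (high_word << 16) | low_word
    else:
        raw = (low_word << 16) | high_word
    # Reinterpret the 32-bit pattern as a big-endian float.
    return struct.unpack(">f", raw.to_bytes(4, "big"))[0]
```

For example, reading `R5014`/`R5015` and passing the two raw words through this helper would yield total runtime hours as a float.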
### Real-Time Clock Registers

| Register | Name | Unit | Description |
|----------|------|------|-------------|
| [`R513`](watermaker_plc_api/models/timer_mappings.py:60) | RTC Minutes | min | Real-time clock minutes |
| [`R514`](watermaker_plc_api/models/timer_mappings.py:61) | RTC Seconds | sec | Real-time clock seconds |
| [`R516`](watermaker_plc_api/models/timer_mappings.py:62) | RTC Year | - | Real-time clock year |
| [`R517`](watermaker_plc_api/models/timer_mappings.py:63) | RTC Month | - | Real-time clock month |
| [`R518`](watermaker_plc_api/models/timer_mappings.py:64) | RTC Day | - | Real-time clock day |
| [`R519`](watermaker_plc_api/models/timer_mappings.py:65) | RTC Month (Alt) | - | Alternative month register |

---
## DTS Process Documentation

### DTS Screen Flow Sequence

```mermaid
graph TD
    A[Mode 34: DTS Requested] --> B[Mode 5: Priming Screen]
    B --> C[Mode 6: Init Screen]
    C --> D[Mode 7: Production Screen]
    D --> E[Mode 8: Fresh Water Flush]
    E --> F[Mode 2: Standby]

    B -.->|Skip R67=32841| D
    C -.->|Skip R67=32968| D

    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#fff3e0
    style D fill:#e8f5e8
    style E fill:#fce4ec
    style F fill:#f5f5f5
```

### DTS Screen Definitions

| Mode | Screen Name | Description | Timer | Duration | Skippable |
|------|-------------|-------------|-------|----------|-----------|
| [`34`](watermaker_plc_api/models/timer_mappings.py:26) | DTS Requested | Press and hold DTS to START | None | Manual | No |
| [`5`](watermaker_plc_api/models/timer_mappings.py:32) | Priming | Flush with shore pressure | R128 | 180 sec | Yes |
| [`6`](watermaker_plc_api/models/timer_mappings.py:37) | Init | High pressure pump initialization | R129 | 60 sec | Yes |
| [`7`](watermaker_plc_api/models/timer_mappings.py:44) | Production | Water flowing to tank | None | Manual | No |
| [`8`](watermaker_plc_api/models/timer_mappings.py:50) | Fresh Water Flush | End of DTS process | R133 | 60 sec | No |
### DTS Process Start Sequence

**API Endpoint**: `POST /api/dts/start`

**Sequence Steps** (as implemented in [`execute_dts_sequence()`](watermaker_plc_api/controllers/dts_controller.py:283)):

1. **Check R1000 value** - Read current system mode
2. **If not 34, set R1000=34** - Set preparation mode
3. **Wait 2 seconds** - Allow mode change to settle
4. **Set R71=256** - Send valve positioning command
5. **Wait 2 seconds** - Allow command processing
6. **Set R71=0** - Complete valve command sequence
7. **Monitor R138** - Wait for valve positioning (up to 15 seconds)
8. **Set R1000=5** - Start DTS priming mode

**Total Sequence Time**: ~10 seconds (excluding valve positioning)
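The eight steps above can be sketched as a function over a minimal register interface. Everything here is illustrative: the `client` object with `read(register)`/`write(register, value)` methods is a hypothetical abstraction, and the R138 completion condition (timer reaches 0 or reads 65535) is an assumption; the authoritative logic lives in `execute_dts_sequence()`:

```python
import time


def start_dts(client, wait=time.sleep):
    """Sketch of the documented DTS start sequence (steps 1-8).

    `client` is any object exposing read(register) and write(register, value);
    this interface is hypothetical and used only for illustration.
    """
    if client.read(1000) != 34:      # steps 1-2: enter preparation mode
        client.write(1000, 34)
    wait(2)                          # step 3: let the mode change settle
    client.write(71, 256)            # step 4: valve positioning command
    wait(2)                          # step 5: allow command processing
    client.write(71, 0)              # step 6: complete the command sequence
    remaining = 15                   # step 7: poll R138 for up to ~15 s
    while remaining > 0 and client.read(138) not in (0, 65535):
        wait(1)
        remaining -= 1
    client.write(1000, 5)            # step 8: start DTS priming mode
```

Injecting `wait` keeps the sequence testable without real delays, and the same shape works for the stop and skip sequences.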
### DTS Process Stop Sequences

**API Endpoint**: `POST /api/dts/stop`

Stop sequence varies by current system mode (as implemented in [`execute_stop_sequence()`](watermaker_plc_api/controllers/dts_controller.py:509)):

#### Mode 5 (Priming) Stop Sequence
1. Set R71=512
2. Wait 1 second
3. Set R71=0
4. Set R1000=8 (Fresh Water Flush)

#### Mode 7 (Production) Stop Sequence
1. Set R71=513
2. Wait 1 second
3. Set R71=0
4. Set R1000=8 (Fresh Water Flush)

#### Mode 8 (Flush) Stop Sequence
1. Set R71=1024
2. Wait 1 second
3. Set R71=0
4. Set R1000=2 (Standby)
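The three stop sequences differ only in the R71 command and the target R1000 mode, which suggests a table-driven rebuild. A sketch (the dictionary and function are illustrative, not the existing implementation):

```python
# (R71 command, next R1000 mode) keyed by current mode, per the
# stop sequences documented above. Illustrative only.
STOP_SEQUENCES = {
    5: (512, 8),    # Priming    -> Fresh Water Flush
    7: (513, 8),    # Production -> Fresh Water Flush
    8: (1024, 2),   # Flush      -> Standby
}


def stop_commands(current_mode: int):
    """Return the (R71 command, target R1000 mode) for the current mode."""
    if current_mode not in STOP_SEQUENCES:
        raise ValueError(f"No stop sequence defined for mode {current_mode}")
    return STOP_SEQUENCES[current_mode]
```

Centralizing these codes in `config/dts_config.py` would keep the mode-dependent behavior in one place instead of branching inside the controller.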
### DTS Step Skip Sequences

**API Endpoint**: `POST /api/dts/skip`

Skip sequences automatically determine the next step (as implemented in [`execute_skip_sequence()`](watermaker_plc_api/controllers/dts_controller.py:710)):

#### Skip from Mode 5 (Priming)
- **Command**: R67=32841
- **Result**: PLC advances to Mode 6 then Mode 7 (Production)

#### Skip from Mode 6 (Init)
- **Command**: R67=32968
- **Wait**: 1 second
- **Command**: R1000=7 (Production)

---
## Timer Usage and Progress Monitoring

### Timer-Based Progress Calculation

**Formula** (as implemented in [`calculate_timer_progress_percent()`](watermaker_plc_api/models/timer_mappings.py:127)):

```
Progress % = (Initial_Value - Current_Value) / Initial_Value * 100
```

**Special Values**:
- Timer value `65535` = Timer not active (0% progress)
- Timer value `0` = Timer complete (100% progress)
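The formula and its special cases can be sketched as a small pure function. This is an illustrative reimplementation of what `calculate_timer_progress_percent()` is documented to do, not the function itself:

```python
def timer_progress_percent(initial: int, current: int) -> float:
    """Progress formula from above, with the documented special values.

    65535 means the timer is not active (0%); 0 means complete (100%).
    Illustrative sketch of calculate_timer_progress_percent().
    """
    if current == 65535 or initial <= 0:
        return 0.0
    if current == 0:
        return 100.0
    return (initial - current) / initial * 100.0
```

For example, the priming timer `R128` counting down from 1800 to 900 reports 50% progress.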
### DTS Mode to Timer Mapping

| DTS Mode | Timer Register | Expected Duration | Progress Monitoring |
|----------|----------------|-------------------|---------------------|
| 5 (Priming) | [`R128`](watermaker_plc_api/models/timer_mappings.py:179) | 180 seconds | Yes |
| 6 (Init) | [`R129`](watermaker_plc_api/models/timer_mappings.py:180) | 60 seconds | Yes |
| 7 (Production) | None | Manual | No |
| 8 (Flush) | [`R133`](watermaker_plc_api/models/timer_mappings.py:182) | 60 seconds | Yes |

### Background Timer Monitoring

The system continuously monitors timers via [`update_dts_progress_from_timers()`](watermaker_plc_api/controllers/dts_controller.py:73), which:

1. **Reads current R1000 mode**
2. **Maps mode to timer register**
3. **Reads timer value**
4. **Calculates progress percentage**
5. **Updates operation state**
6. **Detects external changes**

---
## Rebuild Architecture Plan

### High-Level Architecture

```mermaid
graph TB
    subgraph "API Layer"
        A[Flask Application]
        B[Route Controllers]
        C[Request/Response Handlers]
        D[Middleware & Validation]
    end

    subgraph "Business Logic Layer"
        E[DTS Process Manager]
        F[Data Processing Service]
        G[Validation Service]
        H[Operation State Manager]
    end

    subgraph "Data Access Layer"
        I[PLC Connection Manager]
        J[Register Reader/Writer]
        K[Data Cache Repository]
        L[Configuration Manager]
    end

    subgraph "Infrastructure Layer"
        M[Background Task Manager]
        N[R1000 Monitor]
        O[Timer Progress Monitor]
        P[Error Handler & Logger]
    end

    A --> B
    B --> C
    C --> D
    D --> E
    E --> F
    F --> G
    G --> H
    H --> I
    I --> J
    J --> K
    K --> L
    M --> N
    N --> O
    O --> P
    E --> M
```
### Improved Project Structure

```
watermaker_plc_api/
├── api/
│   ├── routes/
│   │   ├── __init__.py
│   │   ├── system_routes.py          # System & status endpoints
│   │   ├── sensor_routes.py          # Sensor data endpoints
│   │   ├── timer_routes.py           # Timer & RTC endpoints
│   │   ├── output_routes.py          # Output control endpoints
│   │   └── dts_routes.py             # DTS control endpoints
│   ├── middleware/
│   │   ├── __init__.py
│   │   ├── error_handler.py          # Global error handling
│   │   ├── request_validator.py      # Request validation
│   │   ├── response_formatter.py     # Response formatting
│   │   └── cors_handler.py           # CORS configuration
│   ├── schemas/
│   │   ├── __init__.py
│   │   ├── request_schemas.py        # Pydantic request models
│   │   ├── response_schemas.py       # Pydantic response models
│   │   └── validation_schemas.py     # Validation rules
│   └── __init__.py
├── core/
│   ├── services/
│   │   ├── __init__.py
│   │   ├── plc_service.py            # PLC communication service
│   │   ├── dts_service.py            # DTS process management
│   │   ├── data_service.py           # Data processing & caching
│   │   ├── monitoring_service.py     # System monitoring
│   │   └── validation_service.py     # Business logic validation
│   ├── models/
│   │   ├── register_models.py
│   │   ├── sensor_models.py
│   │   ├── timer_models.py
│   │   └── dts_models.py
│   └── repositories/
│       ├── plc_repository.py
│       └── cache_repository.py
├── infrastructure/
│   ├── plc/
│   │   ├── connection.py
│   │   ├── register_reader.py
│   │   └── register_writer.py
│   ├── cache/
│   │   └── data_cache.py
│   └── background/
│       ├── task_manager.py
│       └── monitors.py
├── config/
│   ├── settings.py
│   ├── register_mappings.py
│   └── dts_config.py
└── utils/
    ├── logger.py
    ├── validators.py
    └── converters.py
```
@@ -1,496 +0,0 @@
#!/usr/bin/env python3
"""
Static analysis script to identify unused code in the watermaker PLC API.

This script analyzes the main application package to find unused functions,
methods, imports, variables, and classes.
"""

import ast
import os
import sys
from pathlib import Path
from typing import Dict, List, Set, Tuple, Any
from collections import defaultdict
import json


class CodeAnalyzer(ast.NodeVisitor):
    """AST visitor to analyze Python code for unused elements."""

    def __init__(self, file_path: str):
        self.file_path = file_path
        self.imports = {}  # name -> (module, alias)
        self.from_imports = {}  # name -> (module, original_name)
        self.function_defs = set()  # function names defined
        self.class_defs = set()  # class names defined
        self.method_defs = {}  # class_name -> set of method names
        self.variable_assignments = set()  # variable names assigned
        self.function_calls = set()  # function names called
        self.attribute_accesses = set()  # attribute names accessed
        self.name_references = set()  # all name references
        self.decorators = set()  # decorator names
        self.string_literals = set()  # string literals (for dynamic calls)
    def visit_Import(self, node):
        """Track import statements."""
        for alias in node.names:
            name = alias.asname if alias.asname else alias.name
            self.imports[name] = (alias.name, alias.asname)
        self.generic_visit(node)

    def visit_ImportFrom(self, node):
        """Track from-import statements."""
        module = node.module or ''
        for alias in node.names:
            name = alias.asname if alias.asname else alias.name
            self.from_imports[name] = (module, alias.name)
        self.generic_visit(node)

    def visit_FunctionDef(self, node):
        """Track function definitions."""
        self.function_defs.add(node.name)
        # Track decorators
        for decorator in node.decorator_list:
            if isinstance(decorator, ast.Name):
                self.decorators.add(decorator.id)
            elif isinstance(decorator, ast.Attribute):
                self.decorators.add(decorator.attr)
        self.generic_visit(node)

    def visit_AsyncFunctionDef(self, node):
        """Track async function definitions."""
        self.function_defs.add(node.name)
        # Track decorators
        for decorator in node.decorator_list:
            if isinstance(decorator, ast.Name):
                self.decorators.add(decorator.id)
            elif isinstance(decorator, ast.Attribute):
                self.decorators.add(decorator.attr)
        self.generic_visit(node)
    def visit_ClassDef(self, node):
        """Track class definitions and their methods."""
        self.class_defs.add(node.name)
        self.method_defs[node.name] = set()

        # Track decorators
        for decorator in node.decorator_list:
            if isinstance(decorator, ast.Name):
                self.decorators.add(decorator.id)
            elif isinstance(decorator, ast.Attribute):
                self.decorators.add(decorator.attr)

        # Track methods in this class
        for item in node.body:
            if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                self.method_defs[node.name].add(item.name)

        self.generic_visit(node)

    def visit_Assign(self, node):
        """Track variable assignments."""
        for target in node.targets:
            if isinstance(target, ast.Name):
                self.variable_assignments.add(target.id)
            elif isinstance(target, (ast.Tuple, ast.List)):
                for elt in target.elts:
                    if isinstance(elt, ast.Name):
                        self.variable_assignments.add(elt.id)
        self.generic_visit(node)
    def visit_Call(self, node):
        """Track function calls."""
        if isinstance(node.func, ast.Name):
            self.function_calls.add(node.func.id)
        elif isinstance(node.func, ast.Attribute):
            self.function_calls.add(node.func.attr)
            if isinstance(node.func.value, ast.Name):
                self.name_references.add(node.func.value.id)
        self.generic_visit(node)

    def visit_Attribute(self, node):
        """Track attribute accesses."""
        self.attribute_accesses.add(node.attr)
        if isinstance(node.value, ast.Name):
            self.name_references.add(node.value.id)
        self.generic_visit(node)

    def visit_Name(self, node):
        """Track name references."""
        if isinstance(node.ctx, ast.Load):
            self.name_references.add(node.id)
        self.generic_visit(node)

    def visit_Str(self, node):
        """Track string literals for potential dynamic calls."""
        self.string_literals.add(node.s)
        self.generic_visit(node)

    def visit_Constant(self, node):
        """Track constant values including strings."""
        if isinstance(node.value, str):
            self.string_literals.add(node.value)
        self.generic_visit(node)

class UnusedCodeDetector:
    """Main class for detecting unused code in the project."""

    def __init__(self, project_root: str):
        self.project_root = Path(project_root)
        self.package_root = self.project_root / "watermaker_plc_api"
        self.analyzers = {}  # file_path -> CodeAnalyzer
        self.all_functions = set()
        self.all_classes = set()
        self.all_methods = {}  # class_name -> set of methods
        self.all_imports = {}  # file_path -> imports
        self.all_variables = {}  # file_path -> variables
        self.used_names = set()
        self.flask_routes = set()
        self.entry_points = set()

    def analyze_file(self, file_path: Path) -> CodeAnalyzer:
        """Analyze a single Python file."""
        try:
            with open(file_path, 'r', encoding='utf-8') as f:
                content = f.read()

            tree = ast.parse(content, filename=str(file_path))
            analyzer = CodeAnalyzer(str(file_path))
            analyzer.visit(tree)
            return analyzer
        except Exception as e:
            print(f"Error analyzing {file_path}: {e}")
            return None

    def find_python_files(self) -> List[Path]:
        """Find all Python files in the main package."""
        python_files = []
        for root, dirs, files in os.walk(self.package_root):
            # Skip __pycache__ directories
            dirs[:] = [d for d in dirs if d != '__pycache__']

            for file in files:
                if file.endswith('.py'):
                    python_files.append(Path(root) / file)

        return python_files

    def identify_entry_points(self):
        """Identify entry points that should never be removed."""
        entry_points = {
            'main',  # from __main__.py
            'create_app',  # from app.py
            'parse_args',  # from __main__.py
        }

        # Flask route handlers are entry points
        for analyzer in self.analyzers.values():
            for decorator in analyzer.decorators:
                if 'route' in decorator:
                    # Find functions with route decorators
                    # This is a simplified approach
                    pass

        return entry_points

    def identify_flask_routes(self):
        """Identify Flask route handler functions."""
        routes = set()

        for file_path, analyzer in self.analyzers.items():
            if 'controller' in file_path:
                # Look for functions with @bp.route decorators
                try:
                    with open(file_path, 'r') as f:
                        lines = f.readlines()

                    for i, line in enumerate(lines):
                        if '@' in line and '.route(' in line:
                            # Next non-empty line should be function definition
                            for j in range(i + 1, min(i + 5, len(lines))):
                                next_line = lines[j].strip()
                                if next_line.startswith('def '):
                                    func_name = next_line.split('(')[0].replace('def ', '')
                                    routes.add(func_name)
                                    break
                except Exception as e:
                    print(f"Error finding routes in {file_path}: {e}")

        return routes
    def build_usage_graph(self):
        """Build a graph of what functions/classes are used where."""
        used_names = set()

        # Start with entry points
        used_names.update(self.entry_points)
        used_names.update(self.flask_routes)

        # Add commonly used patterns
        always_used = {
            '__init__', '__str__', '__repr__', '__call__',
            'get_logger', 'setup_error_handlers', 'create_app',
            'get_plc_connection', 'get_data_cache', 'get_operation_state_manager',
            'start_background_updates', 'get_task_manager'
        }
        used_names.update(always_used)

        # Iteratively find used functions
        changed = True
        iterations = 0
        max_iterations = 10

        while changed and iterations < max_iterations:
            changed = False
            iterations += 1
            old_size = len(used_names)

            for analyzer in self.analyzers.values():
                # If any function in this file is used, mark its dependencies
                file_has_used_function = bool(
                    analyzer.function_defs.intersection(used_names) or
                    analyzer.class_defs.intersection(used_names)
                )

                if file_has_used_function:
                    used_names.update(analyzer.function_calls)
                    used_names.update(analyzer.name_references)
                    used_names.update(analyzer.attribute_accesses)

                # Mark imported names as potentially used
                used_names.update(analyzer.imports.keys())
                used_names.update(analyzer.from_imports.keys())

            if len(used_names) > old_size:
                changed = True

        return used_names

    def find_unused_imports(self) -> Dict[str, List[str]]:
        """Find unused import statements."""
        unused_imports = {}

        for file_path, analyzer in self.analyzers.items():
            unused_in_file = []

            # Check regular imports
            for name, (module, alias) in analyzer.imports.items():
                if name not in analyzer.name_references and name not in self.used_names:
                    # Check if it's used in string literals (dynamic imports)
                    used_in_strings = any(name in s for s in analyzer.string_literals)
                    if not used_in_strings:
                        unused_in_file.append(
                            f"import {module}" + (f" as {alias}" if alias else "")
                        )

            # Check from imports
            for name, (module, original) in analyzer.from_imports.items():
                if name not in analyzer.name_references and name not in self.used_names:
                    # Check if it's used in string literals
                    used_in_strings = any(name in s for s in analyzer.string_literals)
                    if not used_in_strings:
                        import_stmt = f"from {module} import {original}"
                        if name != original:
                            import_stmt += f" as {name}"
                        unused_in_file.append(import_stmt)

            if unused_in_file:
                unused_imports[file_path] = unused_in_file

        return unused_imports

    def find_unused_functions(self) -> Dict[str, List[str]]:
        """Find unused function definitions."""
        unused_functions = {}

        for file_path, analyzer in self.analyzers.items():
            unused_in_file = []

            for func_name in analyzer.function_defs:
                # Skip special methods and entry points
                if (func_name.startswith('__') or
                        func_name in self.used_names or
                        func_name in self.flask_routes or
                        func_name in self.entry_points):
                    continue

                # Check if function is called anywhere
                is_used = False
                for other_analyzer in self.analyzers.values():
                    if (func_name in other_analyzer.function_calls or
                            func_name in other_analyzer.name_references or
                            any(func_name in s for s in other_analyzer.string_literals)):
                        is_used = True
                        break

                if not is_used:
                    unused_in_file.append(func_name)

            if unused_in_file:
                unused_functions[file_path] = unused_in_file

        return unused_functions

    def find_unused_variables(self) -> Dict[str, List[str]]:
        """Find unused variable assignments."""
        unused_variables = {}

        for file_path, analyzer in self.analyzers.items():
            unused_in_file = []

            for var_name in analyzer.variable_assignments:
                # Skip special variables and constants
                if (var_name.startswith('_') or
                        var_name.isupper() or  # Constants
                        var_name in self.used_names):
                    continue

                # Check if variable is referenced
                if (var_name not in analyzer.name_references and
                        not any(var_name in s for s in analyzer.string_literals)):
                    unused_in_file.append(var_name)

            if unused_in_file:
                unused_variables[file_path] = unused_in_file

        return unused_variables

    def analyze_project(self):
        """Run complete analysis of the project."""
        print("🔍 Analyzing watermaker PLC API for unused code...")

        # Find and analyze all Python files
        python_files = self.find_python_files()
|
||||
print(f"Found {len(python_files)} Python files to analyze")
|
||||
|
||||
for file_path in python_files:
|
||||
analyzer = self.analyze_file(file_path)
|
||||
if analyzer:
|
||||
self.analyzers[str(file_path)] = analyzer
|
||||
|
||||
print(f"Successfully analyzed {len(self.analyzers)} files")
|
||||
|
||||
# Identify entry points and Flask routes
|
||||
self.entry_points = self.identify_entry_points()
|
||||
self.flask_routes = self.identify_flask_routes()
|
||||
|
||||
print(f"Identified {len(self.flask_routes)} Flask route handlers")
|
||||
|
||||
# Build usage graph
|
||||
self.used_names = self.build_usage_graph()
|
||||
print(f"Identified {len(self.used_names)} used names")
|
||||
|
||||
# Find unused code
|
||||
unused_imports = self.find_unused_imports()
|
||||
unused_functions = self.find_unused_functions()
|
||||
unused_variables = self.find_unused_variables()
|
||||
|
||||
return {
|
||||
'unused_imports': unused_imports,
|
||||
'unused_functions': unused_functions,
|
||||
'unused_variables': unused_variables,
|
||||
'flask_routes': list(self.flask_routes),
|
||||
'entry_points': list(self.entry_points),
|
||||
'used_names_count': len(self.used_names),
|
||||
'total_files_analyzed': len(self.analyzers)
|
||||
}
|
||||
|
||||
def generate_report(self, results: Dict) -> str:
|
||||
"""Generate a detailed report of unused code."""
|
||||
report = []
|
||||
report.append("# Unused Code Analysis Report")
|
||||
report.append(f"Generated: {os.popen('date').read().strip()}")
|
||||
report.append("")
|
||||
|
||||
# Summary
|
||||
total_unused_imports = sum(len(imports) for imports in results['unused_imports'].values())
|
||||
total_unused_functions = sum(len(funcs) for funcs in results['unused_functions'].values())
|
||||
total_unused_variables = sum(len(vars) for vars in results['unused_variables'].values())
|
||||
|
||||
report.append("## Summary")
|
||||
report.append(f"- Files analyzed: {results['total_files_analyzed']}")
|
||||
report.append(f"- Flask routes found: {len(results['flask_routes'])}")
|
||||
report.append(f"- Used names identified: {results['used_names_count']}")
|
||||
report.append(f"- Unused imports: {total_unused_imports}")
|
||||
report.append(f"- Unused functions: {total_unused_functions}")
|
||||
report.append(f"- Unused variables: {total_unused_variables}")
|
||||
report.append("")
|
||||
|
||||
# Unused imports
|
||||
if results['unused_imports']:
|
||||
report.append("## Unused Imports")
|
||||
for file_path, imports in results['unused_imports'].items():
|
||||
rel_path = os.path.relpath(file_path, self.project_root)
|
||||
report.append(f"### {rel_path}")
|
||||
for imp in imports:
|
||||
report.append(f"- `{imp}`")
|
||||
report.append("")
|
||||
|
||||
# Unused functions
|
||||
if results['unused_functions']:
|
||||
report.append("## Unused Functions")
|
||||
for file_path, functions in results['unused_functions'].items():
|
||||
rel_path = os.path.relpath(file_path, self.project_root)
|
||||
report.append(f"### {rel_path}")
|
||||
for func in functions:
|
||||
report.append(f"- `{func}()`")
|
||||
report.append("")
|
||||
|
||||
# Unused variables
|
||||
if results['unused_variables']:
|
||||
report.append("## Unused Variables")
|
||||
for file_path, variables in results['unused_variables'].items():
|
||||
rel_path = os.path.relpath(file_path, self.project_root)
|
||||
report.append(f"### {rel_path}")
|
||||
for var in variables:
|
||||
report.append(f"- `{var}`")
|
||||
report.append("")
|
||||
|
||||
# Flask routes (for reference)
|
||||
if results['flask_routes']:
|
||||
report.append("## Flask Routes (Preserved)")
|
||||
for route in sorted(results['flask_routes']):
|
||||
report.append(f"- `{route}()`")
|
||||
report.append("")
|
||||
|
||||
return "\n".join(report)
|
||||
|
||||
|
||||
def main():
|
||||
"""Main function to run the unused code analysis."""
|
||||
project_root = os.getcwd()
|
||||
|
||||
print("🚀 Starting unused code analysis for Watermaker PLC API")
|
||||
print(f"Project root: {project_root}")
|
||||
|
||||
detector = UnusedCodeDetector(project_root)
|
||||
results = detector.analyze_project()
|
||||
|
||||
# Generate report
|
||||
report = detector.generate_report(results)
|
||||
|
||||
# Save report to file
|
||||
report_file = "unused_code_analysis_report.md"
|
||||
with open(report_file, 'w') as f:
|
||||
f.write(report)
|
||||
|
||||
print(f"📊 Analysis complete! Report saved to: {report_file}")
|
||||
|
||||
# Save detailed results as JSON
|
||||
json_file = "unused_code_analysis_results.json"
|
||||
with open(json_file, 'w') as f:
|
||||
json.dump(results, f, indent=2, default=str)
|
||||
|
||||
print(f"📋 Detailed results saved to: {json_file}")
|
||||
|
||||
# Print summary
|
||||
total_unused = (
|
||||
sum(len(imports) for imports in results['unused_imports'].values()) +
|
||||
sum(len(funcs) for funcs in results['unused_functions'].values()) +
|
||||
sum(len(vars) for vars in results['unused_variables'].values())
|
||||
)
|
||||
|
||||
print(f"\n✨ Found {total_unused} potentially unused code elements")
|
||||
print("Review the report before proceeding with removal!")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
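The detector above walks Python's `ast` module to collect imported and referenced names per file. A stripped-down sketch of that core idea (a hypothetical standalone helper, not the detector's actual API) looks like this:

```python
import ast

def unused_imports(source: str) -> list:
    """Return imported names that are never referenced as a Name node."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the top-level name "a"
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)
```

As in the full detector, attribute accesses such as `os.getcwd()` count as a use of `os` because the base of the attribute is itself an `ast.Name` node; the string-literal check for dynamic imports is deliberately omitted here.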
@@ -1,20 +0,0 @@
{
    "API_BASE_URL": "http://localhost:5000/api",
    "REQUEST_TIMEOUT": 10,
    "RETRY_ATTEMPTS": 3,
    "RETRY_DELAY": 1,
    "POLLING_INTERVAL": 1.0,
    "TRANSITION_TIMEOUT": 300,
    "PROGRESS_UPDATE_INTERVAL": 5,
    "CONSOLE_VERBOSITY": "INFO",
    "LOG_FILE_ENABLED": true,
    "CSV_EXPORT_ENABLED": true,
    "HTML_REPORT_ENABLED": false,
    "EXPECTED_SCREEN_DURATIONS": {
        "5": 180,
        "6": 60,
        "8": 60
    },
    "STUCK_TIMER_THRESHOLD": 30,
    "SLOW_TRANSITION_THRESHOLD": 1.5
}
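The JSON keys mirror the class attributes of `TestConfig` in the test suite. A minimal sketch of how such a file could override those class defaults (both `load_config` and the trimmed-down `TestConfig` here are illustrative, not the suite's actual loader):

```python
import json

class TestConfig:
    """Trimmed-down defaults matching a few of the JSON keys (illustrative)."""
    API_BASE_URL = "http://localhost:5000/api"
    POLLING_INTERVAL = 1.0
    LOG_FILE_ENABLED = True

def load_config(path: str) -> TestConfig:
    """Apply key/value pairs from a JSON file on top of the class defaults."""
    cfg = TestConfig()
    with open(path) as f:
        for key, value in json.load(f).items():
            setattr(cfg, key, value)  # keys absent from the file keep defaults
    return cfg
```

Setting attributes on an instance leaves the class-level defaults untouched, so each loaded config is independent.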
@@ -1,194 +0,0 @@
#!/usr/bin/env python3
"""
Debug script to investigate DTS step transition issues.
This script will monitor the DTS process and help identify why transitions aren't being detected.
"""

import sys
import os
import time
from datetime import datetime

# Add the package directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from watermaker_plc_api.services.plc_connection import get_plc_connection
from watermaker_plc_api.models.timer_mappings import (
    get_timer_for_dts_mode,
    get_timer_expected_start_value,
    calculate_timer_progress_percent,
    get_timer_info
)
from watermaker_plc_api.utils.logger import get_logger

logger = get_logger(__name__)


def monitor_dts_transition():
    """Monitor DTS process and detect transition issues"""
    plc = get_plc_connection()

    if not plc.connect():
        print("❌ Failed to connect to PLC")
        return

    print("🔍 DTS Step Transition Monitor")
    print("=" * 50)

    previous_mode = None
    previous_timer_value = None
    step_start_time = None

    try:
        while True:
            # Read current system mode
            current_mode = plc.read_holding_register(1000)
            if current_mode is None:
                print("❌ Failed to read R1000 (system mode)")
                time.sleep(2)
                continue

            # Check if mode changed
            if current_mode != previous_mode:
                print(f"\n🔄 Mode Change Detected: {previous_mode} → {current_mode}")
                previous_mode = current_mode
                step_start_time = datetime.now()
                previous_timer_value = None

            # Get timer for current mode
            timer_address = get_timer_for_dts_mode(current_mode)

            if timer_address:
                # Read timer value
                timer_value = plc.read_holding_register(timer_address)
                if timer_value is not None:
                    # Get timer info
                    timer_info = get_timer_info(timer_address)
                    expected_start = get_timer_expected_start_value(timer_address)

                    # Calculate progress
                    progress = calculate_timer_progress_percent(timer_address, timer_value)

                    # Check if timer value changed
                    timer_changed = timer_value != previous_timer_value
                    previous_timer_value = timer_value

                    # Calculate elapsed time since step started
                    elapsed = ""
                    if step_start_time:
                        elapsed_seconds = (datetime.now() - step_start_time).total_seconds()
                        elapsed = f" (elapsed: {elapsed_seconds:.1f}s)"

                    # Print status
                    status_icon = "⏳" if timer_changed else "⚠️"
                    print(f"{status_icon} Mode {current_mode} | Timer R{timer_address}: {timer_value} | "
                          f"Progress: {progress}% | Expected Start: {expected_start}{elapsed}")

                    # Check for potential issues
                    if timer_value == 0 and progress == 0:
                        print("   🚨 Timer is 0 but progress is 0% - possible issue!")
                    elif timer_value > expected_start:
                        print(f"   🚨 Timer value ({timer_value}) > expected start ({expected_start}) - unusual!")
                    elif not timer_changed and timer_value > 0:
                        print("   ⚠️ Timer not counting down - may be stuck!")

                    # Check if step should transition
                    if timer_value == 0 and current_mode == 5:
                        print("   ✅ Step 1 timer completed - should transition to Mode 6 soon")
                    elif timer_value == 0 and current_mode == 6:
                        print("   ✅ Step 2 timer completed - should transition to Mode 7 soon")

                else:
                    print(f"❌ Failed to read timer R{timer_address} for mode {current_mode}")
            else:
                print(f"ℹ️ Mode {current_mode} has no associated timer (production/transition mode)")

            time.sleep(1)  # Check every second

    except KeyboardInterrupt:
        print("\n👋 Monitoring stopped by user")
    except Exception as e:
        print(f"❌ Error during monitoring: {e}")
    finally:
        plc.disconnect()


def check_current_dts_state():
    """Check the current DTS state and provide detailed analysis"""
    plc = get_plc_connection()

    if not plc.connect():
        print("❌ Failed to connect to PLC")
        return

    print("📊 Current DTS State Analysis")
    print("=" * 40)

    try:
        # Read current mode
        current_mode = plc.read_holding_register(1000)
        print(f"Current Mode (R1000): {current_mode}")

        if current_mode is None:
            print("❌ Cannot read system mode")
            return

        # Get timer for current mode
        timer_address = get_timer_for_dts_mode(current_mode)

        if timer_address:
            timer_value = plc.read_holding_register(timer_address)
            timer_info = get_timer_info(timer_address)
            expected_start = get_timer_expected_start_value(timer_address)
            progress = calculate_timer_progress_percent(timer_address, timer_value)

            print(f"Timer Address: R{timer_address}")
            print(f"Timer Name: {timer_info.get('name', 'Unknown')}")
            print(f"Current Value: {timer_value}")
            print(f"Expected Start: {expected_start}")
            print(f"Progress: {progress}%")
            print(f"Scale: {timer_info.get('scale', 'direct')}")
            print(f"Unit: {timer_info.get('unit', '')}")

            # Analysis
            if timer_value == 0:
                print("✅ Timer completed - step should transition soon")
            elif timer_value == expected_start:
                print("🔄 Timer at start value - step just began")
            elif timer_value > expected_start:
                print("⚠️ Timer value higher than expected - unusual condition")
            else:
                remaining_time = timer_value / 10  # Assuming ÷10 scale
                print(f"⏳ Timer counting down - ~{remaining_time:.1f}s remaining")
        else:
            print("ℹ️ No timer for current mode (production/transition phase)")

        # Check other relevant registers
        print("\n🔍 Additional Register Values:")

        # Check R71 (command register)
        r71_value = plc.read_holding_register(71)
        print(f"R71 (Command): {r71_value}")

        # Check R138 specifically (DTS Step 1 timer)
        r138_value = plc.read_holding_register(138)
        print(f"R138 (DTS Step 1 Timer): {r138_value}")

        # Check R128 (DTS Step 2 timer)
        r128_value = plc.read_holding_register(128)
        print(f"R128 (DTS Step 2 Timer): {r128_value}")

    except Exception as e:
        print(f"❌ Error during analysis: {e}")
    finally:
        plc.disconnect()


def main():
    """Main function"""
    if len(sys.argv) > 1 and sys.argv[1] == "monitor":
        monitor_dts_transition()
    else:
        check_current_dts_state()
        print("\n💡 To continuously monitor transitions, run:")
        print("   python debug_dts_step_transition.py monitor")


if __name__ == "__main__":
    main()
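The debug script leans on `calculate_timer_progress_percent` and the "÷10 scale" assumption (registers store tenths of a second). A plausible sketch of that arithmetic, assuming a countdown from the expected start value to zero (a hypothetical re-implementation; the real helpers live in `timer_mappings`):

```python
def timer_progress_percent(current: int, expected_start: int) -> int:
    """Countdown timers run expected_start -> 0; progress is the elapsed fraction."""
    if expected_start <= 0:
        return 0
    elapsed = max(expected_start - current, 0)  # clamp readings above start
    return min(int(elapsed * 100 / expected_start), 100)

def remaining_seconds(current: int, scale: int = 10) -> float:
    """Convert a raw register value to seconds under the assumed ÷10 scale."""
    return current / scale
```

For example, a priming timer that started at 1800 tenths (180 s) and now reads 900 would report 50% progress with about 90 s remaining.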
demo_dts_test.py
@@ -1,173 +0,0 @@
#!/usr/bin/env python3
"""
Demo script to show DTS API Test Suite capabilities without running a full DTS sequence.
This demonstrates the monitoring and reporting features.
"""

import sys
import time
from dts_api_test_suite import DTSAPITester, TestConfig


def demo_api_connectivity():
    """Demo API connectivity testing"""
    print("🔍 Demo: API Connectivity Test")
    print("=" * 50)

    tester = DTSAPITester()

    print("Testing API connection...")
    if tester.check_system_status():
        print("✅ API is accessible and PLC is connected")

        # Test getting current step progress
        print("\nTesting current step progress endpoint...")
        progress = tester.get_current_step_progress()
        if progress:
            current_mode = progress.get("current_mode", "unknown")
            timer_based = progress.get("timer_based_progress", False)
            print(f"📊 Current system mode: {current_mode}")
            print(f"⏱️ Timer-based progress available: {timer_based}")

            if timer_based:
                timer_addr = progress.get("timer_address")
                timer_val = progress.get("current_timer_value")
                progress_pct = progress.get("progress_percent", 0)
                print(f"   Timer R{timer_addr}: {timer_val} ({progress_pct}% progress)")
        else:
            print("⚠️ Could not get current step progress")

        return True
    else:
        print("❌ API connectivity failed")
        return False


def demo_dts_status_monitoring():
    """Demo DTS status monitoring without starting DTS"""
    print("\n🔍 Demo: DTS Status Monitoring")
    print("=" * 50)

    tester = DTSAPITester()

    # Check if there are any existing DTS tasks
    print("Checking for existing DTS tasks...")
    response, _ = tester._make_api_request("GET", "/dts/status")

    if response:
        latest_task = response.get("latest_task")
        total_tasks = response.get("total_tasks", 0)
        active_tasks = response.get("active_tasks", [])

        print(f"📋 Total DTS tasks in history: {total_tasks}")
        print(f"🔄 Active tasks: {len(active_tasks)}")

        if latest_task:
            task_id = latest_task.get("task_id", "unknown")
            status = latest_task.get("status", "unknown")
            current_step = latest_task.get("current_step", "unknown")
            progress = latest_task.get("progress_percent", 0)

            print(f"📊 Latest task: {task_id}")
            print(f"   Status: {status}")
            print(f"   Current step: {current_step}")
            print(f"   Progress: {progress}%")

            if latest_task.get("timer_info"):
                timer_info = latest_task["timer_info"]
                print(f"   Timer info: Mode {timer_info.get('current_mode')}, "
                      f"Timer R{timer_info.get('timer_address', 'N/A')}")
        else:
            print("ℹ️ No previous DTS tasks found")
    else:
        print("❌ Could not retrieve DTS status")


def demo_configuration():
    """Demo configuration options"""
    print("\n🔍 Demo: Configuration Options")
    print("=" * 50)

    # Show default configuration
    config = TestConfig()
    print("Default configuration:")
    print(f"   API URL: {config.API_BASE_URL}")
    print(f"   Polling interval: {config.POLLING_INTERVAL}s")
    print(f"   Request timeout: {config.REQUEST_TIMEOUT}s")
    print(f"   Log file enabled: {config.LOG_FILE_ENABLED}")
    print(f"   CSV export enabled: {config.CSV_EXPORT_ENABLED}")

    # Show expected screen durations
    print("\nExpected screen durations:")
    for mode, duration in config.EXPECTED_SCREEN_DURATIONS.items():
        screen_names = {5: "Priming", 6: "Init", 8: "Fresh Water Flush"}
        screen_name = screen_names.get(int(mode), f"Mode {mode}")
        print(f"   {screen_name}: {duration}s")


def demo_api_endpoints():
    """Demo available API endpoints"""
    print("\n🔍 Demo: Available API Endpoints")
    print("=" * 50)

    tester = DTSAPITester()

    # Test config endpoint
    print("Testing API configuration endpoint...")
    response, _ = tester._make_api_request("GET", "/config")

    if response:
        api_version = response.get("api_version", "unknown")
        endpoints = response.get("endpoints", {})

        print(f"📡 API Version: {api_version}")
        print("📋 Available endpoints:")

        # Show DTS-related endpoints
        dts_endpoints = {k: v for k, v in endpoints.items() if "dts" in k.lower()}
        for endpoint, description in dts_endpoints.items():
            print(f"   {endpoint}: {description}")
    else:
        print("❌ Could not retrieve API configuration")


def main():
    """Run all demos"""
    print("🚀 DTS API Test Suite - Demo Mode")
    print("=" * 60)
    print("This demo shows the test suite capabilities without starting DTS")
    print("=" * 60)

    # Run demos
    success = True

    try:
        # Demo 1: API Connectivity
        if not demo_api_connectivity():
            success = False

        # Demo 2: DTS Status Monitoring
        demo_dts_status_monitoring()

        # Demo 3: Configuration
        demo_configuration()

        # Demo 4: API Endpoints
        demo_api_endpoints()

        print("\n" + "=" * 60)
        if success:
            print("✅ Demo completed successfully!")
            print("\nTo run actual DTS monitoring:")
            print("   python run_dts_test.py basic")
            print("   python dts_api_test_suite.py --verbose")
        else:
            print("❌ Demo completed with issues")
            print("Check API server status and PLC connection")
        print("=" * 60)

    except KeyboardInterrupt:
        print("\n👋 Demo interrupted by user")
    except Exception as e:
        print(f"\n❌ Demo error: {e}")
        success = False

    return 0 if success else 1


if __name__ == "__main__":
    sys.exit(main())
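The demo routes every call through `tester._make_api_request`, and the suite's `TestConfig` defines `RETRY_ATTEMPTS` and `RETRY_DELAY` for it. A generic retry helper along those lines (a hypothetical sketch of the pattern, not the suite's actual implementation) can be written as:

```python
import time

def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(); on exception, retry up to `attempts` total tries,
    sleeping `delay` seconds between tries, then re-raise the last error."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if i < attempts - 1:
                time.sleep(delay)
    raise last_exc
```

Wrapping a lambda over the request call, e.g. `with_retries(lambda: session.get(url), attempts=3, delay=1)`, keeps transient network errors from failing a whole monitoring run.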
@@ -1,186 +0,0 @@
#!/usr/bin/env python3
"""
Demonstration script showing how the R1000 monitoring system handles
different external change scenarios.

This script simulates various external HMI actions and shows the system response.
"""

import time
import json
from datetime import datetime


def print_scenario(title, description):
    """Print a formatted scenario header"""
    print("\n" + "="*70)
    print(f"SCENARIO: {title}")
    print("-" * 70)
    print(description)
    print("="*70)


def simulate_r1000_change(old_value, new_value, scenario_name):
    """Simulate what happens when R1000 changes externally"""

    # Import the monitoring components (would be running in background)
    from watermaker_plc_api.services.background_tasks import R1000Monitor
    from watermaker_plc_api.services.data_cache import get_data_cache

    # Create a mock monitor for demonstration
    monitor = R1000Monitor()
    cache = get_data_cache()

    # Simulate the change detection
    change_info = {
        "previous_value": old_value,
        "new_value": new_value,
        "change_time": datetime.now().isoformat(),
        "change_type": monitor._classify_change(old_value, new_value),
        "external_change": True
    }

    print(f"\n🔍 DETECTED CHANGE:")
    print(f"   R1000: {old_value} → {new_value}")
    print(f"   Type: {change_info['change_type']}")
    print(f"   Time: {change_info['change_time']}")

    # Show what would be logged
    log_message = f"R1000 External Change: {old_value} → {new_value} ({change_info['change_type']})"
    print(f"\n📝 LOGGED TO CACHE:")
    print(f"   {log_message}")

    # Show impact assessment
    print(f"\n⚠️ IMPACT ASSESSMENT:")
    if "Process_Start" in change_info['change_type']:
        print("   - External system started DTS process")
        print("   - API should be aware process is now running")
        print("   - Any pending API start operations may conflict")
    elif "Process_Stop" in change_info['change_type']:
        print("   - External system stopped DTS process")
        print("   - Running API tasks should be marked as externally stopped")
        print("   - Process ended without API control")
    elif "Step_Skip" in change_info['change_type']:
        print("   - External system skipped a DTS step")
        print("   - Running API tasks should update current step")
        print("   - Timer-based progress may be affected")
    elif "Step_Advance" in change_info['change_type']:
        print("   - External system advanced DTS step")
        print("   - Running API tasks should track the advancement")
        print("   - Normal process flow continued externally")
    elif "DTS_Start" in change_info['change_type']:
        print("   - DTS process started from requested state")
        print("   - Normal transition, may be API or external")
        print("   - Monitor will track subsequent steps")
    else:
        print("   - General mode change detected")
        print("   - May indicate system state change")
        print("   - Monitor will continue tracking")

    return change_info


def main():
    """Run demonstration scenarios"""

    print("R1000 MONITORING SYSTEM - EXTERNAL CHANGE SCENARIOS")
    print("This demonstration shows how the system detects and handles external changes")

    # Scenario 1: External HMI starts DTS process
    print_scenario(
        "External HMI Starts DTS Process",
        "User presses DTS start button on external HMI while system is in standby mode"
    )
    simulate_r1000_change(2, 5, "external_start")

    # Scenario 2: External HMI stops running DTS process
    print_scenario(
        "External HMI Stops Running DTS Process",
        "User presses stop button on external HMI while DTS is in production mode"
    )
    simulate_r1000_change(7, 2, "external_stop")

    # Scenario 3: External HMI skips priming step
    print_scenario(
        "External HMI Skips Priming Step",
        "User skips priming step on external HMI, jumping directly to production"
    )
    simulate_r1000_change(5, 7, "external_skip")

    # Scenario 4: External HMI advances from init to flush
    print_scenario(
        "External HMI Advances to Flush",
        "User advances from init step directly to flush step on external HMI"
    )
    simulate_r1000_change(6, 8, "external_advance")

    # Scenario 5: External system requests DTS
    print_scenario(
        "External System Requests DTS",
        "External system sets DTS requested mode, preparing for DTS start"
    )
    simulate_r1000_change(2, 34, "external_request")

    # Scenario 6: Normal DTS progression (for comparison)
    print_scenario(
        "Normal DTS Progression (For Comparison)",
        "Normal progression from DTS requested to DTS priming (may be API or external)"
    )
    simulate_r1000_change(34, 5, "normal_progression")

    # Summary
    print("\n" + "="*70)
    print("SUMMARY - R1000 MONITORING CAPABILITIES")
    print("="*70)

    capabilities = [
        "✅ Detects all R1000 changes in real-time",
        "✅ Classifies change types automatically",
        "✅ Logs changes to data cache for API access",
        "✅ Assesses impact on running DTS tasks",
        "✅ Provides detailed change history",
        "✅ Integrates with existing background task system",
        "✅ Offers API endpoints for monitoring status",
        "✅ Supports callback system for custom handling"
    ]

    for capability in capabilities:
        print(f"   {capability}")

    print("\n" + "="*70)
    print("INTEGRATION WITH EXISTING SYSTEM")
    print("="*70)

    integration_points = [
        "🔗 Background Tasks: Integrated into existing data update loop",
        "🔗 DTS Controller: Enhanced to detect external changes during operations",
        "🔗 Data Cache: Uses existing error logging system for change history",
        "🔗 API Endpoints: New endpoint provides monitoring status and history",
        "🔗 Logging System: Uses existing logger with appropriate warning levels",
        "🔗 PLC Connection: Uses existing PLC connection service"
    ]

    for point in integration_points:
        print(f"   {point}")

    print("\n" + "="*70)
    print("USAGE IN PRODUCTION")
    print("="*70)

    usage_scenarios = [
        "🏭 Operators can use external HMI while API monitoring continues",
        "🏭 System detects conflicts between API and external operations",
        "🏭 Maintenance staff can see history of external changes",
        "🏭 Debugging is easier with detailed change classification",
        "🏭 Process integrity is maintained with change awareness",
        "🏭 Real-time alerts possible for critical external changes"
    ]

    for scenario in usage_scenarios:
        print(f"   {scenario}")

    print(f"\n{'='*70}")
    print("DEMONSTRATION COMPLETE")
    print(f"{'='*70}")
    print("The R1000 monitoring system is ready to detect and handle external changes.")
    print("Run 'python test_r1000_monitoring.py' for live testing with actual PLC.")


if __name__ == "__main__":
    main()
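The scenarios above name the change types that `R1000Monitor._classify_change` produces (`Process_Start`, `Process_Stop`, `Step_Skip`, `Step_Advance`, `DTS_Start`). A hypothetical reconstruction of such a classifier, inferred only from those scenario outcomes (the real rules live in `background_tasks` and may differ, e.g. in how multi-step jumps are labeled):

```python
ACTIVE_DTS_MODES = {5, 6, 7, 8}   # priming, init, production, flush
DTS_ORDER = [34, 5, 6, 7, 8]      # requested -> priming -> init -> production -> flush

def classify_change(old: int, new: int) -> str:
    """Label an R1000 transition; a sketch inferred from the demo scenarios."""
    if old == 34 and new == 5:
        return "DTS_Start"           # normal requested -> priming transition
    if old not in ACTIVE_DTS_MODES and new in ACTIVE_DTS_MODES:
        return "Process_Start"       # e.g. standby (2) -> priming (5)
    if old in ACTIVE_DTS_MODES and new == 2:
        return "Process_Stop"        # e.g. production (7) -> standby (2)
    if old in DTS_ORDER and new in DTS_ORDER:
        jump = DTS_ORDER.index(new) - DTS_ORDER.index(old)
        if jump > 1:
            return "Step_Skip"       # e.g. priming (5) -> production (7)
        if jump == 1:
            return "Step_Advance"    # one step forward in the sequence
    return "Mode_Change"             # anything else, e.g. standby -> requested
```

Note this sketch would label the init-to-flush jump (6 → 8) a skip, whereas the demo narrates it as an advance; the real implementation is the authority on that edge case.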
@@ -1,691 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
DTS API Test Suite - Comprehensive testing for DTS mode progression
|
||||
Monitors all screen transitions, tracks timer progress, and provides detailed debugging information.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
import csv
|
||||
import argparse
|
||||
import requests
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, List, Any, Optional, Tuple
|
||||
from dataclasses import dataclass, asdict
|
||||
from pathlib import Path
|
||||
import logging
|
||||
from urllib.parse import urljoin
|
||||
|
||||
# Add the package directory to Python path for imports
|
||||
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
|
||||
|
||||
@dataclass
|
||||
class TransitionEvent:
|
||||
"""Represents a screen transition event"""
|
||||
timestamp: str
|
||||
from_mode: int
|
||||
to_mode: int
|
||||
from_screen: str
|
||||
to_screen: str
|
||||
timer_info: Dict[str, Any]
|
||||
duration_seconds: float
|
||||
api_response_time_ms: int
|
||||
|
||||
@dataclass
|
||||
class TimerProgress:
|
||||
"""Represents timer progress data"""
|
||||
timestamp: str
|
||||
mode: int
|
||||
timer_address: Optional[int]
|
||||
timer_value: Optional[int]
|
||||
progress_percent: int
|
||||
countdown_rate: float
|
||||
expected_duration: Optional[int]
|
||||
|
||||
@dataclass
|
||||
class TestResults:
|
||||
"""Test execution results"""
|
||||
start_time: Optional[str] = None
|
||||
end_time: Optional[str] = None
|
||||
total_duration_seconds: float = 0.0
|
||||
transitions_detected: int = 0
|
||||
screens_completed: int = 0
|
||||
api_errors: int = 0
|
||||
timer_issues: int = 0
|
||||
success: bool = False
|
||||
error_messages: List[str] = None
|
||||
performance_metrics: Dict[str, Any] = None
|
||||
|
||||
def __post_init__(self):
|
||||
if self.error_messages is None:
|
||||
self.error_messages = []
|
||||
if self.performance_metrics is None:
|
||||
self.performance_metrics = {}
|
||||
|
||||
class TestConfig:
|
||||
"""Configuration for DTS API testing"""
|
||||
|
||||
# API Settings
|
||||
API_BASE_URL = "http://localhost:5000/api"
|
||||
REQUEST_TIMEOUT = 10
|
||||
RETRY_ATTEMPTS = 3
|
||||
RETRY_DELAY = 1
|
||||
|
||||
# Monitoring Settings
|
||||
POLLING_INTERVAL = 1.0 # seconds
|
||||
TRANSITION_TIMEOUT = 300 # 5 minutes max per screen
|
||||
PROGRESS_UPDATE_INTERVAL = 5 # seconds
|
||||
|
||||
# Output Settings
|
||||
CONSOLE_VERBOSITY = "INFO" # DEBUG, INFO, WARNING, ERROR
|
||||
LOG_FILE_ENABLED = True
|
||||
CSV_EXPORT_ENABLED = True
|
||||
HTML_REPORT_ENABLED = False
|
||||
|
||||
# Test Parameters
|
||||
EXPECTED_SCREEN_DURATIONS = {
|
||||
5: 180, # Priming: 3 minutes
|
||||
6: 60, # Init: 1 minute
|
||||
8: 60 # Flush: 1 minute
|
||||
}
|
||||
|
||||
# Alert Thresholds
|
||||
STUCK_TIMER_THRESHOLD = 30 # seconds without timer change
|
||||
SLOW_TRANSITION_THRESHOLD = 1.5 # 150% of expected duration
|
||||
|
||||
class DTSAPITester:
    """Main test orchestrator for DTS API testing"""

    # Screen mode mappings
    SCREEN_MODES = {
        34: "dts_requested_active",
        5: "dts_priming_active",
        6: "dts_init_active",
        7: "dts_production_active",
        8: "dts_flush_active",
        2: "dts_process_complete",
    }

    # Timer mappings for progress tracking
    TIMER_MAPPINGS = {
        5: {"timer_address": 128, "expected_duration": 180, "name": "Priming"},
        6: {"timer_address": 129, "expected_duration": 60, "name": "Init"},
        8: {"timer_address": 133, "expected_duration": 60, "name": "Fresh Water Flush"},
    }

    def __init__(self, api_base_url: str = None, config: TestConfig = None):
        """Initialize the DTS API tester"""
        self.config = config or TestConfig()
        self.api_base_url = api_base_url or self.config.API_BASE_URL
        self.session = requests.Session()
        self.session.timeout = self.config.REQUEST_TIMEOUT

        # Test state
        self.current_task_id = None
        self.transition_history: List[TransitionEvent] = []
        self.timer_progress_history: List[TimerProgress] = []
        self.start_time = None
        self.test_results = TestResults()
        self.previous_state = None
        self.last_timer_values = {}

        # Setup logging
        self.logger = self._setup_logger()

        # Create output directories
        self._create_output_directories()
    def _setup_logger(self) -> logging.Logger:
        """Set up the logging configuration"""
        logger = logging.getLogger('DTSAPITester')
        logger.setLevel(getattr(logging, self.config.CONSOLE_VERBOSITY))

        # Console handler
        console_handler = logging.StreamHandler()
        console_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        console_handler.setFormatter(console_formatter)
        logger.addHandler(console_handler)

        # File handler if enabled
        self.log_file_path = None  # stays None when file logging is disabled
        if self.config.LOG_FILE_ENABLED:
            log_dir = Path("logs")
            log_dir.mkdir(exist_ok=True)
            log_file = log_dir / f"dts_test_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log"

            file_handler = logging.FileHandler(log_file)
            file_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            file_handler.setFormatter(file_formatter)
            logger.addHandler(file_handler)

            self.log_file_path = log_file

        return logger

    def _create_output_directories(self):
        """Create the necessary output directories"""
        for directory in ("logs", "reports", "data"):
            Path(directory).mkdir(exist_ok=True)
    def _make_api_request(self, method: str, endpoint: str, **kwargs) -> Tuple[Optional[Dict], int]:
        """Make an API request with error handling, retries, and timing"""
        url = urljoin(self.api_base_url + "/", endpoint.lstrip("/"))
        # requests.Session ignores a `timeout` attribute, so pass it per request
        kwargs.setdefault("timeout", self.config.REQUEST_TIMEOUT)
        start_time = time.time()

        for attempt in range(self.config.RETRY_ATTEMPTS):
            try:
                response = self.session.request(method, url, **kwargs)
                response_time_ms = int((time.time() - start_time) * 1000)

                if response.status_code == 200:
                    return response.json(), response_time_ms

                self.logger.warning(f"API request failed: {response.status_code} - {response.text}")
                if attempt == self.config.RETRY_ATTEMPTS - 1:
                    self.test_results.api_errors += 1
                    return None, response_time_ms

            except requests.exceptions.RequestException as e:
                self.logger.warning(f"API request error (attempt {attempt + 1}): {e}")
                if attempt < self.config.RETRY_ATTEMPTS - 1:
                    time.sleep(self.config.RETRY_DELAY * (attempt + 1))
                else:
                    self.test_results.api_errors += 1
                    return None, int((time.time() - start_time) * 1000)

        return None, 0
    def check_system_status(self) -> bool:
        """Check whether the API and system are ready"""
        self.logger.info("🔍 Checking system status...")

        response, _ = self._make_api_request("GET", "/status")
        if response:
            connection_status = response.get("connection_status", "unknown")
            plc_connected = response.get("plc_config", {}).get("connected", False)

            if connection_status == "connected" and plc_connected:
                self.logger.info("✅ System status: Connected")
            else:
                self.logger.warning(f"⚠️ System status: {connection_status}, PLC connected: {plc_connected}")
            return True  # Allow the test to proceed even when degraded
        else:
            self.logger.error("❌ System status: Failed to connect")
            return False

    def start_dts_sequence(self) -> bool:
        """Start the DTS sequence via the API"""
        self.logger.info("🔄 Starting DTS sequence...")

        response, response_time = self._make_api_request("POST", "/dts/start")
        if response and response.get("success"):
            self.current_task_id = response.get("data", {}).get("task_id")
            self.logger.info(f"📋 DTS sequence started - Task ID: {self.current_task_id}")
            self.logger.info(f"⚡ API Response Time: {response_time}ms")
            return True
        else:
            self.logger.error("❌ Failed to start DTS sequence")
            self.test_results.error_messages.append("Failed to start DTS sequence")
            return False

    def get_task_status(self) -> Optional[Dict]:
        """Get the current task status"""
        if not self.current_task_id:
            return None

        response, _ = self._make_api_request("GET", f"/dts/status/{self.current_task_id}")
        if response:
            return response.get("task", {})
        return None

    def get_current_step_progress(self) -> Optional[Dict]:
        """Get real-time step progress"""
        response, _ = self._make_api_request("GET", "/dts/current-step-progress")
        return response
    def detect_screen_transition(self, current_state: Dict, previous_state: Dict) -> Optional[TransitionEvent]:
        """Detect a screen transition between two polled states"""
        if not previous_state:
            return None

        # Check for a mode change in timer_info
        current_mode = current_state.get("timer_info", {}).get("current_mode")
        previous_mode = previous_state.get("timer_info", {}).get("current_mode")

        if current_mode != previous_mode and current_mode is not None:
            # Mode transition detected
            from_screen = self.SCREEN_MODES.get(previous_mode, f"mode_{previous_mode}")
            to_screen = self.SCREEN_MODES.get(current_mode, f"mode_{current_mode}")

            # Calculate the duration since the last transition
            duration = 0.0
            if self.transition_history:
                last_transition = self.transition_history[-1]
                last_time = datetime.fromisoformat(last_transition.timestamp.replace('Z', '+00:00'))
                duration = (datetime.now() - last_time).total_seconds()

            return TransitionEvent(
                timestamp=datetime.now().isoformat(),
                from_mode=previous_mode,
                to_mode=current_mode,
                from_screen=from_screen,
                to_screen=to_screen,
                timer_info=current_state.get("timer_info", {}),
                duration_seconds=duration,
                api_response_time_ms=0,  # Updated by the caller
            )

        return None
    def log_transition_event(self, transition: TransitionEvent):
        """Log detailed transition information"""
        self.transition_history.append(transition)
        self.test_results.transitions_detected += 1

        # Get screen names
        from_name = self._get_screen_name(transition.from_mode)
        to_name = self._get_screen_name(transition.to_mode)

        # Format duration
        duration_str = self._format_duration(transition.duration_seconds)

        self.logger.info("📺 Screen Transition Detected:")
        self.logger.info(f"   {from_name} → {to_name} (Mode {transition.from_mode} → {transition.to_mode})")

        # Log timer information
        timer_info = transition.timer_info
        if timer_info.get("timer_address"):
            timer_addr = timer_info["timer_address"]
            timer_val = timer_info.get("raw_timer_value", "N/A")
            progress = timer_info.get("timer_progress", 0)
            self.logger.info(f"   ⏱️ Timer R{timer_addr}: {timer_val} ({progress}% progress)")

        if transition.duration_seconds > 0:
            self.logger.info(f"   ⏳ Duration: {duration_str}")

        # Check for timing issues
        expected_duration = self.config.EXPECTED_SCREEN_DURATIONS.get(transition.from_mode)
        if expected_duration and transition.duration_seconds > 0:
            if transition.duration_seconds > expected_duration * self.config.SLOW_TRANSITION_THRESHOLD:
                self.logger.warning(f"   ⚠️ Slow transition: {duration_str} (expected ~{expected_duration}s)")
    def analyze_timer_progress(self, current_state: Dict):
        """Analyze and log timer progress"""
        timer_info = current_state.get("timer_info", {})
        if not timer_info:
            return

        current_mode = timer_info.get("current_mode")
        timer_address = timer_info.get("timer_address")
        timer_value = timer_info.get("raw_timer_value")
        progress = timer_info.get("timer_progress", 0)

        if timer_address and timer_value is not None:
            # Create a timer progress record
            timer_progress = TimerProgress(
                timestamp=datetime.now().isoformat(),
                mode=current_mode,
                timer_address=timer_address,
                timer_value=timer_value,
                progress_percent=progress,
                countdown_rate=self._calculate_countdown_rate(timer_address, timer_value),
                expected_duration=self.config.EXPECTED_SCREEN_DURATIONS.get(current_mode),
            )

            self.timer_progress_history.append(timer_progress)

            # Check for a stuck timer
            if self._is_timer_stuck(timer_address, timer_value):
                self.logger.warning(f"⚠️ Timer R{timer_address} appears stuck at {timer_value}")
                self.test_results.timer_issues += 1

    def _calculate_countdown_rate(self, timer_address: int, current_value: int) -> float:
        """Calculate the timer countdown rate in units per second"""
        if timer_address not in self.last_timer_values:
            self.last_timer_values[timer_address] = {
                "value": current_value,
                "timestamp": time.time(),
            }
            return 0.0

        last_data = self.last_timer_values[timer_address]
        time_diff = time.time() - last_data["timestamp"]
        value_diff = last_data["value"] - current_value

        if time_diff > 0:
            rate = value_diff / time_diff

            # Only refresh the stored snapshot when the value actually changed,
            # so _is_timer_stuck measures time since the last change rather
            # than time since the last poll
            if current_value != last_data["value"]:
                self.last_timer_values[timer_address] = {
                    "value": current_value,
                    "timestamp": time.time(),
                }

            return rate

        return 0.0

    def _is_timer_stuck(self, timer_address: int, current_value: int) -> bool:
        """Check whether a timer appears to be stuck"""
        if timer_address not in self.last_timer_values:
            return False

        last_data = self.last_timer_values[timer_address]
        time_since_change = time.time() - last_data["timestamp"]

        return (current_value == last_data["value"] and
                current_value > 0 and
                time_since_change > self.config.STUCK_TIMER_THRESHOLD)
    def _get_screen_name(self, mode: int) -> str:
        """Get a human-readable screen name"""
        screen_names = {
            34: "DTS Requested",
            5: "Priming",
            6: "Init",
            7: "Production",
            8: "Fresh Water Flush",
            2: "Standby (Complete)",
        }
        return screen_names.get(mode, f"Mode {mode}")

    def _format_duration(self, seconds: float) -> str:
        """Format a duration in human-readable form"""
        if seconds < 60:
            return f"{seconds:.1f}s"
        elif seconds < 3600:
            minutes = int(seconds // 60)
            secs = int(seconds % 60)
            return f"{minutes}m {secs}s"
        else:
            hours = int(seconds // 3600)
            minutes = int((seconds % 3600) // 60)
            return f"{hours}h {minutes}m"

    def display_progress_bar(self, progress: int, width: int = 20) -> str:
        """Create a visual progress bar"""
        filled = int(width * progress / 100)
        bar = "█" * filled + "░" * (width - filled)
        return f"{bar} {progress}%"
    def monitor_dts_progress(self) -> bool:
        """Main monitoring loop"""
        self.logger.info("🔍 Starting DTS progress monitoring...")

        start_time = time.time()
        last_progress_update = 0
        monitoring_active = True

        while monitoring_active:
            try:
                # Get the current task status
                current_state = self.get_task_status()
                if not current_state:
                    self.logger.error("❌ Failed to get task status")
                    time.sleep(self.config.POLLING_INTERVAL)
                    continue

                # Check for transitions
                if self.previous_state:
                    transition = self.detect_screen_transition(current_state, self.previous_state)
                    if transition:
                        self.log_transition_event(transition)

                # Analyze timer progress
                self.analyze_timer_progress(current_state)

                # Display progress updates
                current_time = time.time()
                if current_time - last_progress_update >= self.config.PROGRESS_UPDATE_INTERVAL:
                    self._display_current_status(current_state)
                    last_progress_update = current_time

                # Check whether the DTS process is complete
                task_status = current_state.get("status", "")
                current_step = current_state.get("current_step", "")

                if task_status == "completed" or current_step == "dts_process_complete":
                    self.logger.info("✅ DTS process completed successfully!")
                    self.test_results.success = True
                    monitoring_active = False
                elif task_status == "failed":
                    self.logger.error("❌ DTS process failed!")
                    # last_error may be present but None, so guard before .get()
                    error_msg = (current_state.get("last_error") or {}).get("message", "Unknown error")
                    self.test_results.error_messages.append(f"DTS process failed: {error_msg}")
                    monitoring_active = False

                # Check for timeout (TRANSITION_TIMEOUT caps total monitoring time)
                elapsed_time = current_time - start_time
                if elapsed_time > self.config.TRANSITION_TIMEOUT:
                    self.logger.error(f"❌ Monitoring timeout after {elapsed_time:.1f}s")
                    self.test_results.error_messages.append("Monitoring timeout exceeded")
                    monitoring_active = False

                # Store the current state for the next iteration
                self.previous_state = current_state

                # Wait before the next poll
                time.sleep(self.config.POLLING_INTERVAL)

            except KeyboardInterrupt:
                self.logger.info("👋 Monitoring stopped by user")
                monitoring_active = False
            except Exception as e:
                self.logger.error(f"❌ Error during monitoring: {e}")
                self.test_results.error_messages.append(f"Monitoring error: {str(e)}")
                time.sleep(self.config.POLLING_INTERVAL)

        # Calculate final results
        self.test_results.total_duration_seconds = time.time() - start_time
        self.test_results.screens_completed = len(self.transition_history)

        return self.test_results.success
    def _display_current_status(self, current_state: Dict):
        """Display current status information"""
        progress = current_state.get("progress_percent", 0)
        timer_info = current_state.get("timer_info", {})

        # Get the current mode and screen name
        current_mode = timer_info.get("current_mode", 0)
        screen_name = self._get_screen_name(current_mode)

        self.logger.info(f"⏳ Current: {screen_name} (Mode {current_mode})")

        if timer_info.get("timer_address"):
            timer_addr = timer_info["timer_address"]
            timer_val = timer_info.get("raw_timer_value", 0)
            progress_bar = self.display_progress_bar(progress)
            self.logger.info(f"📊 Progress: {progress_bar}")
            self.logger.info(f"⏱️ Timer R{timer_addr}: {timer_val}")

        # Show elapsed time
        if self.start_time:
            elapsed = time.time() - self.start_time
            self.logger.info(f"🕐 Elapsed: {self._format_duration(elapsed)}")
    def generate_reports(self):
        """Generate test reports"""
        self.logger.info("📊 Generating test reports...")

        # Update final test results
        self.test_results.end_time = datetime.now().isoformat()

        # Generate the JSON report
        self._generate_json_report()

        # Generate the CSV report if enabled
        if self.config.CSV_EXPORT_ENABLED:
            self._generate_csv_report()

        # Display the summary
        self._display_test_summary()
    def _generate_json_report(self):
        """Generate a detailed JSON report"""
        report_data = {
            "test_session": {
                "start_time": self.test_results.start_time,
                "end_time": self.test_results.end_time,
                "api_endpoint": self.api_base_url,
                "task_id": self.current_task_id,
                "total_duration_seconds": self.test_results.total_duration_seconds,
            },
            "results": asdict(self.test_results),
            "transitions": [asdict(t) for t in self.transition_history],
            "timer_progress": [asdict(t) for t in self.timer_progress_history],
        }

        # Save to file
        report_file = Path("reports") / f"dts_test_report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
        with open(report_file, 'w') as f:
            json.dump(report_data, f, indent=2)

        self.logger.info(f"📄 JSON report saved: {report_file}")

    def _generate_csv_report(self):
        """Generate a CSV report of the timer data"""
        if not self.timer_progress_history:
            return

        csv_file = Path("data") / f"dts_timer_data_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"

        with open(csv_file, 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow([
                "timestamp", "mode", "timer_address", "timer_value",
                "progress_percent", "countdown_rate", "expected_duration",
            ])

            for progress in self.timer_progress_history:
                writer.writerow([
                    progress.timestamp, progress.mode, progress.timer_address,
                    progress.timer_value, progress.progress_percent,
                    progress.countdown_rate, progress.expected_duration,
                ])

        self.logger.info(f"📊 CSV data saved: {csv_file}")
    def _display_test_summary(self):
        """Display the final test summary"""
        results = self.test_results

        self.logger.info("\n" + "=" * 60)
        self.logger.info("📊 DTS API Test Summary")
        self.logger.info("=" * 60)

        # Test outcome
        status_icon = "✅" if results.success else "❌"
        status_text = "SUCCESS" if results.success else "FAILED"
        self.logger.info(f"Status: {status_icon} {status_text}")

        # Timing information
        self.logger.info(f"Total Duration: {self._format_duration(results.total_duration_seconds)}")
        self.logger.info(f"Screens Completed: {results.screens_completed}")
        self.logger.info(f"Transitions Detected: {results.transitions_detected}")

        # Error information
        if results.api_errors > 0:
            self.logger.warning(f"API Errors: {results.api_errors}")
        if results.timer_issues > 0:
            self.logger.warning(f"Timer Issues: {results.timer_issues}")

        # Error messages
        if results.error_messages:
            self.logger.error("Error Messages:")
            for error in results.error_messages:
                self.logger.error(f"  - {error}")

        # Transition summary
        if self.transition_history:
            self.logger.info("\nTransition Summary:")
            for i, transition in enumerate(self.transition_history, 1):
                from_name = self._get_screen_name(transition.from_mode)
                to_name = self._get_screen_name(transition.to_mode)
                duration_str = self._format_duration(transition.duration_seconds)
                self.logger.info(f"  {i}. {from_name} → {to_name} ({duration_str})")

        self.logger.info("=" * 60)
    def run_test(self) -> bool:
        """Run the complete DTS API test"""
        self.logger.info("🚀 DTS API Test Suite v1.0")
        self.logger.info(f"📡 API Endpoint: {self.api_base_url}")
        self.logger.info(f"⏰ Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")

        self.start_time = time.time()
        self.test_results.start_time = datetime.now().isoformat()

        try:
            # Step 1: Check system status
            if not self.check_system_status():
                return False

            # Step 2: Start the DTS sequence
            if not self.start_dts_sequence():
                return False

            # Step 3: Monitor progress
            success = self.monitor_dts_progress()

            # Step 4: Generate reports
            self.generate_reports()

            return success

        except Exception as e:
            self.logger.error(f"❌ Test execution failed: {e}")
            self.test_results.error_messages.append(f"Test execution failed: {str(e)}")
            return False
def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(description="DTS API Test Suite")
    parser.add_argument("--api-url", default="http://localhost:5000/api",
                        help="API base URL")
    parser.add_argument("--verbose", action="store_true",
                        help="Enable verbose output")
    parser.add_argument("--export-csv", action="store_true",
                        help="Export timer data to CSV")
    parser.add_argument("--config", help="Configuration file path")
    parser.add_argument("--polling-interval", type=float, default=1.0,
                        help="Polling interval in seconds")

    args = parser.parse_args()

    # Create the configuration
    config = TestConfig()
    if args.verbose:
        config.CONSOLE_VERBOSITY = "DEBUG"
    if args.export_csv:
        config.CSV_EXPORT_ENABLED = True
    # --polling-interval always has a value (default 1.0), so assign directly
    config.POLLING_INTERVAL = args.polling_interval

    # Load a custom config file if provided (keys must match TestConfig attributes)
    if args.config and os.path.exists(args.config):
        with open(args.config, 'r') as f:
            custom_config = json.load(f)
        for key, value in custom_config.items():
            if hasattr(config, key):
                setattr(config, key, value)

    # Create and run the tester
    tester = DTSAPITester(api_base_url=args.api_url, config=config)
    success = tester.run_test()

    # Exit with the appropriate code
    sys.exit(0 if success else 1)


if __name__ == "__main__":
    main()
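The retry loop in `_make_api_request` above backs off linearly (`RETRY_DELAY * (attempt + 1)`) between failed attempts. A minimal standalone sketch of that pattern — the names `request_with_retry` and `flaky` are illustrative, not part of the suite:

```python
import time

def request_with_retry(do_request, attempts=3, base_delay=0.01):
    """Retry a callable with linearly growing delay, mirroring the
    suite's _make_api_request loop; returns None if all attempts fail."""
    for attempt in range(attempts):
        try:
            return do_request()
        except Exception:
            if attempt < attempts - 1:
                time.sleep(base_delay * (attempt + 1))
    return None

calls = []
def flaky():
    """Fails twice, then succeeds — simulates a transient API error."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient")
    return "ok"

print(request_with_retry(flaky))  # succeeds on the third attempt: ok
```

Unlike the real method, this sketch does not distinguish HTTP status codes from transport errors; it only demonstrates the backoff schedule.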
@@ -1,63 +0,0 @@
[build-system]
requires = ["setuptools>=45", "wheel", "setuptools_scm[toml]>=6.2"]
build-backend = "setuptools.build_meta"

[tool.black]
line-length = 88
target-version = ['py38']
include = '\.pyi?$'
extend-exclude = '''
/(
  # directories
  \.eggs
  | \.git
  | \.hg
  | \.mypy_cache
  | \.tox
  | \.venv
  | venv
  | _build
  | buck-out
  | build
  | dist
)/
'''

[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q --strict-markers --cov=watermaker_plc_api --cov-report=term-missing --cov-report=html"
testpaths = [
    "tests",
]
python_files = [
    "test_*.py",
    "*_test.py",
]
python_classes = [
    "Test*",
]
python_functions = [
    "test_*",
]

[tool.coverage.run]
source = ["watermaker_plc_api"]
omit = [
    "*/tests/*",
    "*/venv/*",
    "setup.py",
]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if self.debug:",
    "if settings.DEBUG",
    "raise AssertionError",
    "raise NotImplementedError",
    "if 0:",
    "if __name__ == .__main__.:",
    "class .*\\bProtocol\\):",
    "@(abc\\.)?abstractmethod",
]
13
pytest.ini
@@ -1,13 +0,0 @@
[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    -v
    --tb=short
    --strict-markers
    --disable-warnings
filterwarnings =
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
103
run_dts_test.py
@@ -1,103 +0,0 @@
#!/usr/bin/env python3
"""
Simple wrapper script to run DTS API tests with common configurations.
"""

import sys

from dts_api_test_suite import DTSAPITester, TestConfig


def run_basic_test():
    """Run a basic DTS test with default settings"""
    print("🚀 Running Basic DTS API Test")
    print("=" * 50)

    # Create a tester with the default configuration
    tester = DTSAPITester()

    # Run the test
    success = tester.run_test()

    if success:
        print("\n✅ Test completed successfully!")
        return 0
    else:
        print("\n❌ Test failed!")
        return 1


def run_verbose_test():
    """Run a DTS test with verbose output and CSV export"""
    print("🚀 Running Verbose DTS API Test")
    print("=" * 50)

    # Create a custom configuration
    config = TestConfig()
    config.CONSOLE_VERBOSITY = "DEBUG"
    config.CSV_EXPORT_ENABLED = True
    config.POLLING_INTERVAL = 0.5  # More frequent polling

    # Create the tester
    tester = DTSAPITester(config=config)

    # Run the test
    success = tester.run_test()

    if success:
        print("\n✅ Verbose test completed successfully!")
        return 0
    else:
        print("\n❌ Verbose test failed!")
        return 1


def run_custom_endpoint_test(api_url):
    """Run a DTS test against a custom API endpoint"""
    print(f"🚀 Running DTS API Test against {api_url}")
    print("=" * 50)

    # Create a tester with the custom endpoint
    tester = DTSAPITester(api_base_url=api_url)

    # Run the test
    success = tester.run_test()

    if success:
        print(f"\n✅ Test against {api_url} completed successfully!")
        return 0
    else:
        print(f"\n❌ Test against {api_url} failed!")
        return 1


def main():
    """Main entry point with simple command handling"""
    if len(sys.argv) < 2:
        print("Usage:")
        print("  python run_dts_test.py basic         # Run basic test")
        print("  python run_dts_test.py verbose       # Run verbose test with CSV export")
        print("  python run_dts_test.py custom <url>  # Run test against custom API endpoint")
        print("")
        print("Examples:")
        print("  python run_dts_test.py basic")
        print("  python run_dts_test.py verbose")
        print("  python run_dts_test.py custom http://192.168.1.100:5000/api")
        return 1

    command = sys.argv[1].lower()

    if command == "basic":
        return run_basic_test()
    elif command == "verbose":
        return run_verbose_test()
    elif command == "custom":
        if len(sys.argv) < 3:
            print("❌ Error: Custom command requires API URL")
            print("Usage: python run_dts_test.py custom <api_url>")
            return 1
        api_url = sys.argv[2]
        return run_custom_endpoint_test(api_url)
    else:
        print(f"❌ Error: Unknown command '{command}'")
        print("Available commands: basic, verbose, custom")
        return 1


if __name__ == "__main__":
    sys.exit(main())
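The wrapper's `main()` maps a command word to a handler with an if/elif chain. A dispatch dict is a common alternative; a small standalone sketch under hypothetical names (`dispatch`, `commands`), not code from this repository:

```python
def dispatch(argv, commands):
    """Route argv[1] to a handler from a dict, as run_dts_test.py does
    with its if/elif chain; unknown or missing commands return 1."""
    if len(argv) < 2 or argv[1] not in commands:
        return 1
    return commands[argv[1]](argv[2:])

# Toy handlers standing in for run_basic_test / run_custom_endpoint_test
commands = {
    "basic": lambda args: 0,
    "custom": lambda args: 0 if args else 1,  # requires a URL argument
}

print(dispatch(["prog", "basic"], commands))    # 0
print(dispatch(["prog", "custom"], commands))   # 1 (missing URL)
print(dispatch(["prog", "unknown"], commands))  # 1
```

The dict keeps the command list declarative, so adding a mode is a one-line change rather than another elif branch.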
37
setup.py
@@ -1,37 +0,0 @@
from setuptools import setup, find_packages

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

with open("requirements.txt", "r", encoding="utf-8") as fh:
    requirements = [line.strip() for line in fh if line.strip() and not line.startswith("#")]

setup(
    name="watermaker-plc-api",
    version="1.1.0",
    author="Your Name",
    author_email="paul@golownia.com",
    description="RESTful API for Watermaker PLC monitoring and control",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/terbonium/watermaker-plc-api.git",
    packages=find_packages(),
    classifiers=[
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
    ],
    python_requires=">=3.8",
    install_requires=requirements,
    entry_points={
        "console_scripts": [
            "watermaker-api=watermaker_plc_api.main:main",
        ],
    },
)
@@ -1,122 +0,0 @@
#!/usr/bin/env python3
"""
Test the external stop fix with the real running system.
"""

import sys
import os

# Add the project root to the Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from watermaker_plc_api.services.operation_state import get_operation_state_manager


def test_external_stop_with_running_system():
    """Test external stop detection with the current system state"""
    print("Testing External Stop Fix with Running System")
    print("=" * 50)

    # Get the current state manager
    state_manager = get_operation_state_manager()
    current_state = state_manager.get_current_state()

    print("1. Current system state:")
    print(f"   Status: {current_state['status']}")
    print(f"   Current Mode: {current_state.get('current_mode', 'None')}")
    print(f"   Current Step: {current_state.get('current_step', 'None')}")
    print(f"   Is Running: {state_manager.is_running()}")

    if current_state['status'] == 'cancelled' and current_state.get('current_mode') == 5:
        print("\n2. Simulating external stop from current DTS_Priming mode...")

        # Simulate what happens when the system detects an external stop:
        # add the stop change to the existing external changes
        external_changes = current_state.get("external_changes", [])
        stop_change = {
            "change_time": "2025-06-11T22:26:00.000000",
            "change_type": "Process_Stop: DTS_Priming → Standby",
            "external_change": True,
            "new_value": 2,
            "previous_value": 5,
        }
        external_changes.append(stop_change)

        # Reset the operation to the running state first
        # (to simulate that it was running when stopped)
        state_manager.update_state({
            "status": "running",
            "external_changes": external_changes,
            "last_error": None,
        })

        print("   Operation reset to running state")

        # Now simulate the external stop detection logic
        # from update_dts_progress_from_timers
        current_mode = 2  # This is what would be read from PLC R1000

        # Check whether this was an external stop
        recent_external_stop = any(
            change.get("new_value") == 2 and "Process_Stop" in change.get("change_type", "")
            for change in external_changes[-2:]  # Check the last 2 changes
        )

        if recent_external_stop:
            print("   External stop detected - applying fix...")
            updates = {
                "current_mode": current_mode,
                "note": "DTS process stopped externally via HMI - system in standby mode",
                "external_stop": True,
                "current_step": "dts_process_complete",
                "step_description": "DTS process stopped externally - system in standby mode",
                "progress_percent": 100,
            }
            state_manager.update_state(updates)
            state_manager.complete_operation(success=True)
            print("   Operation completed with external stop handling")

        # Check the final state
        print("\n3. Final state after external stop fix:")
        final_state = state_manager.get_current_state()
        print(f"   Status: {final_state['status']}")
        print(f"   Current Mode: {final_state['current_mode']}")
        print(f"   Current Step: {final_state['current_step']}")
        print(f"   Note: {final_state.get('note', 'None')}")
        print(f"   External Stop Flag: {final_state.get('external_stop', False)}")
        print(f"   Is Running: {state_manager.is_running()}")
        print(f"   Is Complete: {final_state.get('is_complete', False)}")

        # Verify the fix worked
        print("\n4. Fix Verification:")
        if final_state['current_mode'] == 2:
            print("   ✓ Current mode correctly shows 2 (Standby)")
        else:
            print(f"   ✗ Current mode incorrect: {final_state['current_mode']}")

        if final_state['current_step'] == 'dts_process_complete':
            print("   ✓ Current step correctly shows completion")
        else:
            print(f"   ✗ Current step incorrect: {final_state['current_step']}")

        if final_state['status'] == 'completed':
            print("   ✓ Status correctly shows 'completed'")
        else:
            print(f"   ✗ Status incorrect: {final_state['status']}")

        if final_state.get('external_stop'):
            print("   ✓ External stop flag correctly set")
        else:
            print("   ✗ External stop flag missing")

        print("\n" + "=" * 50)
        print("SUCCESS: The external stop fix is working correctly!")
        print("When an external stop is detected during a running DTS operation,")
        print("the API will now return the correct current_mode (2) and status.")

    else:
        print("\n2. System is not in a suitable state for testing external stop.")
        print("   The fix will work when there's a running DTS operation that gets externally stopped.")


if __name__ == "__main__":
    test_external_stop_with_running_system()
|
||||
print(" Current system shows correct idle/cancelled state behavior.")
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_external_stop_with_running_system()
|
||||
@@ -1,5 +0,0 @@
"""
Test suite for the Watermaker PLC API.
"""

# Test configuration and utilities can be placed here
@@ -1,304 +0,0 @@
"""
Tests for API controllers.
"""

import pytest
import json
from unittest.mock import Mock, patch, MagicMock
import sys
import os

# Add the parent directory to the path so we can import the package
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


class TestControllers:
    """Test cases for API controllers"""

    @pytest.fixture
    def app(self):
        """Create test Flask application"""
        # Mock the services to avoid actual PLC connections during testing
        with patch('watermaker_plc_api.services.plc_connection.get_plc_connection'), \
             patch('watermaker_plc_api.services.data_cache.get_data_cache'), \
             patch('watermaker_plc_api.services.background_tasks.start_background_updates'):

            from watermaker_plc_api.app import create_app
            from watermaker_plc_api.config import TestingConfig

            app = create_app(TestingConfig)
            app.config['TESTING'] = True
            return app

    @pytest.fixture
    def client(self, app):
        """Create test client"""
        return app.test_client()

    def test_status_endpoint(self, client):
        """Test /api/status endpoint"""
        with patch('watermaker_plc_api.controllers.system_controller.cache') as mock_cache, \
             patch('watermaker_plc_api.controllers.system_controller.plc') as mock_plc:

            mock_cache.get_connection_status.return_value = "connected"
            mock_cache.get_last_update.return_value = "2025-06-03T12:00:00"
            mock_plc.get_connection_status.return_value = {
                "ip_address": "127.0.0.1",
                "port": 502,
                "connected": True
            }

            response = client.get('/api/status')
            assert response.status_code == 200

            data = json.loads(response.data)
            assert 'connection_status' in data
            assert 'last_update' in data
            assert 'plc_config' in data
            assert 'timestamp' in data

    def test_config_endpoint(self, client):
        """Test /api/config endpoint"""
        response = client.get('/api/config')
        assert response.status_code == 200

        data = json.loads(response.data)
        assert 'api_version' in data
        assert 'endpoints' in data
        assert 'variable_groups' in data
        assert 'total_variables' in data

    def test_sensors_endpoint(self, client):
        """Test /api/sensors endpoint"""
        with patch('watermaker_plc_api.controllers.sensors_controller.cache') as mock_cache:
            # Mock cache data
            mock_cache.get_sensors.return_value = {
                "1000": {
                    "name": "System Mode",
                    "raw_value": 5,
                    "scaled_value": 5,
                    "unit": "",
                    "category": "system"
                }
            }
            mock_cache.get_last_update.return_value = "2025-06-03T12:00:00"

            response = client.get('/api/sensors')
            assert response.status_code == 200

            data = json.loads(response.data)
            assert 'sensors' in data
            assert 'last_update' in data
            assert 'count' in data
            assert data['count'] == 1

    def test_sensors_category_endpoint(self, client):
        """Test /api/sensors/category/<category> endpoint"""
        with patch('watermaker_plc_api.controllers.sensors_controller.cache') as mock_cache:
            mock_cache.get_sensors_by_category.return_value = {}
            mock_cache.get_last_update.return_value = "2025-06-03T12:00:00"

            # Test valid category
            response = client.get('/api/sensors/category/system')
            assert response.status_code == 200

            # Test invalid category
            response = client.get('/api/sensors/category/invalid')
            assert response.status_code == 400

    def test_timers_endpoint(self, client):
        """Test /api/timers endpoint"""
        with patch('watermaker_plc_api.controllers.timers_controller.cache') as mock_cache:
            # Mock cache data
            mock_cache.get_timers.return_value = {
                "136": {
                    "name": "FWF Timer",
                    "raw_value": 0,
                    "scaled_value": 0,
                    "active": False
                }
            }
            mock_cache.get_active_timers.return_value = []
            mock_cache.get_last_update.return_value = "2025-06-03T12:00:00"

            response = client.get('/api/timers')
            assert response.status_code == 200

            data = json.loads(response.data)
            assert 'timers' in data
            assert 'active_timers' in data
            assert 'total_count' in data
            assert 'active_count' in data

    def test_outputs_endpoint(self, client):
        """Test /api/outputs endpoint"""
        with patch('watermaker_plc_api.controllers.outputs_controller.cache') as mock_cache:
            # Mock cache data
            mock_cache.get_outputs.return_value = {
                "40017": {
                    "register": 40017,
                    "value": 0,
                    "binary": "0000000000000000",
                    "bits": []
                }
            }
            mock_cache.get_last_update.return_value = "2025-06-03T12:00:00"

            response = client.get('/api/outputs')
            assert response.status_code == 200

            data = json.loads(response.data)
            assert 'outputs' in data
            assert 'last_update' in data
            assert 'count' in data

    def test_write_register_endpoint(self, client):
        """Test /api/write/register endpoint"""
        # Test missing content-type (no JSON)
        response = client.post('/api/write/register')
        assert response.status_code == 400
        data = json.loads(response.data)
        assert "Request must be JSON" in data['message']

        # Test missing data with proper content-type
        response = client.post('/api/write/register',
                               data=json.dumps({}),
                               content_type='application/json')
        assert response.status_code == 400

        # Test missing fields
        response = client.post('/api/write/register',
                               data=json.dumps({"address": 1000}),
                               content_type='application/json')
        assert response.status_code == 400

        # Test invalid values
        response = client.post('/api/write/register',
                               data=json.dumps({"address": -1, "value": 5}),
                               content_type='application/json')
        assert response.status_code == 400

    def test_select_endpoint_no_params(self, client):
        """Test /api/select endpoint without parameters"""
        response = client.get('/api/select')
        assert response.status_code == 400

        data = json.loads(response.data)
        assert 'usage' in data['details']

    def test_dts_start_endpoint(self, client):
        """Test /api/dts/start endpoint"""
        with patch('watermaker_plc_api.controllers.dts_controller.start_dts_sequence_async') as mock_start:
            # Mock successful start
            mock_start.return_value = (True, "DTS sequence started", {"task_id": "abc12345"})

            response = client.post('/api/dts/start')
            assert response.status_code == 202

            data = json.loads(response.data)
            assert data['success'] is True
            assert 'task_id' in data
            assert 'status_endpoint' in data

    def test_dts_start_conflict(self, client):
        """Test /api/dts/start endpoint with conflict"""
        with patch('watermaker_plc_api.controllers.dts_controller.start_dts_sequence_async') as mock_start:
            # Mock operation already running
            mock_start.return_value = (False, "Operation already in progress", {"existing_task_id": "def67890"})

            response = client.post('/api/dts/start')
            assert response.status_code == 409

            data = json.loads(response.data)
            assert data['success'] is False

    def test_dts_status_endpoint_not_found(self, client):
        """Test /api/dts/status/<task_id> endpoint with non-existent task"""
        response = client.get('/api/dts/status/nonexistent')
        assert response.status_code == 404

        data = json.loads(response.data)
        assert 'available_tasks' in data['details']

    def test_dts_cancel_endpoint_not_found(self, client):
        """Test /api/dts/cancel/<task_id> endpoint with non-existent task"""
        response = client.post('/api/dts/cancel/nonexistent')
        assert response.status_code == 404

        data = json.loads(response.data)
        assert data['success'] is False

    def test_dts_cancel_endpoint_success(self, client):
        """Test successful task cancellation"""
        with patch('watermaker_plc_api.controllers.dts_controller.dts_operations') as mock_operations:
            # Mock existing running task
            mock_task = {
                "task_id": "abc12345",
                "status": "running",
                "current_step": "waiting_for_valves"
            }
            mock_operations.get.return_value = mock_task

            response = client.post('/api/dts/cancel/abc12345')
            assert response.status_code == 200

            data = json.loads(response.data)
            assert data['success'] is True

    def test_dts_cancel_endpoint_not_running(self, client):
        """Test cancelling non-running task"""
        with patch('watermaker_plc_api.controllers.dts_controller.dts_operations') as mock_operations:
            # Mock existing completed task
            mock_operations.get.return_value = {
                "task_id": "abc12345",
                "status": "completed",
                "current_step": "completed"
            }

            response = client.post('/api/dts/cancel/abc12345')
            assert response.status_code == 400

            data = json.loads(response.data)
            assert data['success'] is False


class TestErrorHandling:
    """Test cases for error handling across controllers"""

    @pytest.fixture
    def app(self):
        """Create test Flask application"""
        # Mock the services to avoid actual PLC connections during testing
        with patch('watermaker_plc_api.services.plc_connection.get_plc_connection'), \
             patch('watermaker_plc_api.services.data_cache.get_data_cache'), \
             patch('watermaker_plc_api.services.background_tasks.start_background_updates'):

            from watermaker_plc_api.app import create_app
            from watermaker_plc_api.config import TestingConfig

            app = create_app(TestingConfig)
            app.config['TESTING'] = True
            return app

    @pytest.fixture
    def client(self, app):
        """Create test client"""
        return app.test_client()

    def test_404_error(self, client):
        """Test 404 error handling"""
        response = client.get('/api/nonexistent')
        assert response.status_code == 404

        data = json.loads(response.data)
        assert data['success'] is False
        assert data['error'] == 'Not Found'

    def test_405_method_not_allowed(self, client):
        """Test 405 method not allowed error"""
        response = client.delete('/api/status')  # DELETE not allowed on status endpoint
        assert response.status_code == 405

        data = json.loads(response.data)
        assert data['success'] is False
        assert data['error'] == 'Method Not Allowed'
@@ -1,130 +0,0 @@
"""
Tests for data conversion utilities.
"""

import pytest
import struct
import sys
import os

# Add the parent directory to the path so we can import the package
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from watermaker_plc_api.utils.data_conversion import (
    scale_value,
    convert_ieee754_float,
    convert_gallon_counter,
    get_descriptive_value,
    validate_register_value,
    format_binary_string
)


class TestDataConversion:
    """Test cases for data conversion utilities"""

    def test_scale_value_direct(self):
        """Test direct scaling (no change)"""
        assert scale_value(100, "direct") == 100
        assert scale_value(0, "direct") == 0

    def test_scale_value_divide(self):
        """Test division scaling"""
        assert scale_value(100, "÷10") == 10.0
        assert scale_value(250, "÷10") == 25.0
        assert scale_value(1000, "÷100") == 10.0

    def test_scale_value_multiply(self):
        """Test multiplication scaling"""
        assert scale_value(10, "×10") == 100.0
        assert scale_value(5, "×2") == 10.0

    def test_scale_value_invalid(self):
        """Test invalid scaling types"""
        # Should return original value for invalid scale types
        assert scale_value(100, "invalid") == 100
        assert scale_value(100, "÷0") == 100  # Division by zero
        assert scale_value(100, "×abc") == 100  # Invalid multiplier

    def test_convert_ieee754_float(self):
        """Test IEEE 754 float conversion"""
        # Test known values
        # 1.0 in IEEE 754: 0x3F800000
        high = 0x3F80
        low = 0x0000
        result = convert_ieee754_float(high, low)
        assert result == 1.0

        # Test another known value
        # 3.14159 in IEEE 754: approximately 0x40490FD0
        high = 0x4049
        low = 0x0FD0
        result = convert_ieee754_float(high, low)
        assert abs(result - 3.14) < 0.01  # Allow small floating point differences

    def test_convert_gallon_counter(self):
        """Test gallon counter conversion (same as IEEE 754)"""
        high = 0x3F80
        low = 0x0000
        result = convert_gallon_counter(high, low)
        assert result == 1.0

    def test_get_descriptive_value_with_mapping(self):
        """Test getting descriptive value with value mapping"""
        config = {
            "values": {
                "0": "Standby",
                "5": "Running",
                "7": "Service"
            }
        }

        assert get_descriptive_value(0, config) == "Standby"
        assert get_descriptive_value(5, config) == "Running"
        assert get_descriptive_value(7, config) == "Service"
        assert get_descriptive_value(99, config) == "Unknown (99)"

    def test_get_descriptive_value_without_mapping(self):
        """Test getting descriptive value without value mapping"""
        config = {}
        assert get_descriptive_value(100, config) == 100

    def test_validate_register_value(self):
        """Test register value validation"""
        # Valid values
        assert validate_register_value(0) is True
        assert validate_register_value(1000) is True
        assert validate_register_value(65533) is True

        # Invalid values
        assert validate_register_value(None) is False
        assert validate_register_value(-1) is False
        assert validate_register_value(65534) is False
        assert validate_register_value(65535) is False
        assert validate_register_value("string") is False

    def test_validate_register_value_custom_max(self):
        """Test register value validation with custom maximum"""
        assert validate_register_value(100, max_value=1000) is True
        assert validate_register_value(1000, max_value=1000) is False
        assert validate_register_value(999, max_value=1000) is True

    def test_format_binary_string(self):
        """Test binary string formatting"""
        assert format_binary_string(5) == "0000000000000101"
        assert format_binary_string(255) == "0000000011111111"
        assert format_binary_string(0) == "0000000000000000"

        # Test custom width
        assert format_binary_string(5, width=8) == "00000101"
        assert format_binary_string(15, width=4) == "1111"

    def test_ieee754_edge_cases(self):
        """Test IEEE 754 conversion edge cases"""
        # Test with None return on error
        result = convert_ieee754_float(None, 0)
        assert result is None

        # Test zero
        result = convert_ieee754_float(0, 0)
        assert result == 0.0
@@ -1,125 +0,0 @@
"""
Tests for PLC connection functionality.
"""

import pytest
import sys
import os
from unittest.mock import Mock, patch

# Add the parent directory to the path so we can import the package
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from watermaker_plc_api.services.plc_connection import PLCConnection, get_plc_connection
from watermaker_plc_api.utils.error_handler import PLCConnectionError


class TestPLCConnection:
    """Test cases for PLCConnection class"""

    def setup_method(self):
        """Reset the global connection instance before each test"""
        # Clear the global singleton for clean tests
        import watermaker_plc_api.services.plc_connection
        watermaker_plc_api.services.plc_connection._plc_connection = None

    def test_singleton_pattern(self):
        """Test that get_plc_connection returns the same instance"""
        conn1 = get_plc_connection()
        conn2 = get_plc_connection()
        assert conn1 is conn2

    @patch('watermaker_plc_api.services.plc_connection.ModbusTcpClient')
    def test_successful_connection(self, mock_client_class):
        """Test successful PLC connection"""
        # Setup mock
        mock_client = Mock()
        mock_client.connect.return_value = True
        mock_client_class.return_value = mock_client

        # Test connection
        plc = PLCConnection()
        result = plc.connect()

        assert result is True
        assert plc.is_connected is True
        mock_client.connect.assert_called_once()

    @patch('watermaker_plc_api.services.plc_connection.ModbusTcpClient')
    def test_failed_connection(self, mock_client_class):
        """Test failed PLC connection"""
        # Setup mock
        mock_client = Mock()
        mock_client.connect.return_value = False
        mock_client_class.return_value = mock_client

        # Test connection
        plc = PLCConnection()
        result = plc.connect()

        assert result is False
        assert plc.is_connected is False

    @patch('watermaker_plc_api.services.plc_connection.ModbusTcpClient')
    def test_read_input_register(self, mock_client_class):
        """Test reading input register"""
        # Setup mock
        mock_client = Mock()
        mock_client.connect.return_value = True
        mock_result = Mock()
        mock_result.registers = [1234]
        mock_result.isError.return_value = False
        mock_client.read_input_registers.return_value = mock_result
        mock_client_class.return_value = mock_client

        # Test read
        plc = PLCConnection()
        plc.connect()
        value = plc.read_input_register(1000)

        assert value == 1234
        mock_client.read_input_registers.assert_called_with(1000, 1, slave=1)

    @patch('watermaker_plc_api.services.plc_connection.ModbusTcpClient')
    def test_write_holding_register(self, mock_client_class):
        """Test writing holding register"""
        # Setup mock
        mock_client = Mock()
        mock_client.connect.return_value = True
        mock_result = Mock()
        mock_result.isError.return_value = False
        mock_client.write_register.return_value = mock_result
        mock_client_class.return_value = mock_client

        # Test write
        plc = PLCConnection()
        plc.connect()
        success = plc.write_holding_register(1000, 5)

        assert success is True
        mock_client.write_register.assert_called_with(1000, 5, slave=1)

    def test_write_without_connection(self):
        """Test writing register without PLC connection"""
        with patch('watermaker_plc_api.services.plc_connection.ModbusTcpClient') as mock_client_class:
            # Setup mock to fail connection
            mock_client = Mock()
            mock_client.connect.return_value = False
            mock_client_class.return_value = mock_client

            plc = PLCConnection()

            with pytest.raises(PLCConnectionError):
                plc.write_holding_register(1000, 5)

    def test_get_connection_status(self):
        """Test getting connection status information"""
        plc = PLCConnection()
        status = plc.get_connection_status()

        assert isinstance(status, dict)
        assert "connected" in status
        assert "ip_address" in status
        assert "port" in status
        assert "unit_id" in status
        assert "timeout" in status
@@ -1,140 +0,0 @@
"""
Tests for register reader service.
"""

import pytest
import sys
import os
from unittest.mock import Mock, patch

# Add the parent directory to the path so we can import the package
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from watermaker_plc_api.services.register_reader import RegisterReader


class TestRegisterReader:
    """Test cases for RegisterReader service"""

    @patch('watermaker_plc_api.services.register_reader.get_plc_connection')
    @patch('watermaker_plc_api.services.register_reader.get_data_cache')
    def test_update_sensors_success(self, mock_cache, mock_plc):
        """Test successful sensor data update"""
        # Setup mocks
        mock_plc_instance = Mock()
        mock_plc_instance.read_input_register.return_value = 1000
        mock_plc.return_value = mock_plc_instance

        mock_cache_instance = Mock()
        mock_cache.return_value = mock_cache_instance

        # Test update
        reader = RegisterReader()
        result = reader.update_sensors()

        assert result is True
        mock_cache_instance.set_sensors.assert_called_once()

    @patch('watermaker_plc_api.services.register_reader.get_plc_connection')
    @patch('watermaker_plc_api.services.register_reader.get_data_cache')
    def test_update_sensors_read_failure(self, mock_cache, mock_plc):
        """Test sensor update with read failure"""
        # Setup mocks
        mock_plc_instance = Mock()
        mock_plc_instance.read_input_register.return_value = None  # Read failure
        mock_plc.return_value = mock_plc_instance

        mock_cache_instance = Mock()
        mock_cache.return_value = mock_cache_instance

        # Test update
        reader = RegisterReader()
        result = reader.update_sensors()

        assert result is True  # Should still succeed even if no valid reads
        mock_cache_instance.set_sensors.assert_called_once()

    @patch('watermaker_plc_api.services.register_reader.get_plc_connection')
    @patch('watermaker_plc_api.services.register_reader.get_data_cache')
    def test_update_timers_success(self, mock_cache, mock_plc):
        """Test successful timer data update"""
        # Setup mocks
        mock_plc_instance = Mock()
        mock_plc_instance.read_holding_register.return_value = 100
        mock_plc.return_value = mock_plc_instance

        mock_cache_instance = Mock()
        mock_cache.return_value = mock_cache_instance

        # Test update
        reader = RegisterReader()
        result = reader.update_timers()

        assert result is True
        mock_cache_instance.set_timers.assert_called_once()

    @patch('watermaker_plc_api.services.register_reader.get_plc_connection')
    @patch('watermaker_plc_api.services.register_reader.get_data_cache')
    def test_read_register_pair_success(self, mock_cache, mock_plc):
        """Test successful register pair reading"""
        # Setup mocks
        mock_plc_instance = Mock()
        mock_plc_instance.read_holding_register.side_effect = [0x3F80, 0x0000]  # IEEE 754 for 1.0
        mock_plc.return_value = mock_plc_instance

        mock_cache_instance = Mock()
        mock_cache.return_value = mock_cache_instance

        # Test read
        reader = RegisterReader()
        success, converted, high, low = reader.read_register_pair(5014, 5015, "ieee754")

        assert success is True
        assert converted == 1.0
        assert high == 0x3F80
        assert low == 0x0000

    @patch('watermaker_plc_api.services.register_reader.get_plc_connection')
    @patch('watermaker_plc_api.services.register_reader.get_data_cache')
    def test_read_register_pair_failure(self, mock_cache, mock_plc):
        """Test register pair reading with failure"""
        # Setup mocks
        mock_plc_instance = Mock()
        mock_plc_instance.read_holding_register.return_value = None  # Read failure
        mock_plc.return_value = mock_plc_instance

        mock_cache_instance = Mock()
        mock_cache.return_value = mock_cache_instance

        # Test read
        reader = RegisterReader()
        success, converted, high, low = reader.read_register_pair(5014, 5015, "ieee754")

        assert success is False
        assert converted is None

    @patch('watermaker_plc_api.services.register_reader.get_plc_connection')
    @patch('watermaker_plc_api.services.register_reader.get_data_cache')
    def test_read_selective_data(self, mock_cache, mock_plc):
        """Test selective data reading"""
        # Setup mocks
        mock_plc_instance = Mock()
        mock_plc_instance.read_input_register.return_value = 500
        mock_plc_instance.read_holding_register.return_value = 100
        mock_plc.return_value = mock_plc_instance

        mock_cache_instance = Mock()
        mock_cache.return_value = mock_cache_instance

        # Test selective read
        reader = RegisterReader()
        result = reader.read_selective_data(["temperature"], ["1036"])

        assert "sensors" in result
        assert "timers" in result
        assert "requested_groups" in result
        assert "requested_keys" in result
        assert "summary" in result

        assert result["requested_groups"] == ["temperature"]
        assert result["requested_keys"] == ["1036"]
@@ -1,15 +0,0 @@
"""
Watermaker PLC API Package

RESTful API for monitoring and controlling watermaker PLC systems.
Provides access to sensors, timers, controls, and watermaker operation sequences.
"""

__version__ = "1.1.0"
__author__ = "Your Name"
__email__ = "your.email@example.com"

from .app import create_app
from .config import Config

__all__ = ['create_app', 'Config']
@@ -1,100 +0,0 @@
#!/usr/bin/env python3
"""
Main entry point for the Watermaker PLC API server.
"""

import argparse
import sys
from .app import create_app
from .config import Config
from .utils.logger import get_logger
from .services.background_tasks import start_background_updates

logger = get_logger(__name__)


def parse_args():
    """Parse command line arguments"""
    parser = argparse.ArgumentParser(description='Watermaker PLC API Server')
    parser.add_argument('--host', default='0.0.0.0',
                        help='Host to bind to (default: 0.0.0.0)')
    parser.add_argument('--port', type=int, default=5000,
                        help='Port to bind to (default: 5000)')
    parser.add_argument('--plc-ip', default='198.18.100.141',
                        help='PLC IP address (default: 198.18.100.141)')
    parser.add_argument('--plc-port', type=int, default=502,
                        help='PLC Modbus port (default: 502)')
    parser.add_argument('--debug', action='store_true',
                        help='Enable debug mode')
    parser.add_argument('--no-background-updates', action='store_true',
                        help='Disable background data updates')
    return parser.parse_args()


def main():
    """Main application entry point"""
    args = parse_args()

    # Update config with command line arguments
    Config.PLC_IP = args.plc_ip
    Config.PLC_PORT = args.plc_port
    Config.DEBUG = args.debug

    logger.info("Starting Watermaker PLC API Server")
    logger.info(f"API Version: 1.1.0")
    logger.info(f"PLC Target: {Config.PLC_IP}:{Config.PLC_PORT}")
    logger.info(f"Server: http://{args.host}:{args.port}")

    # Create Flask application
    app = create_app()

    # Start background data updates (unless disabled)
    if not args.no_background_updates:
        start_background_updates()
        logger.info("Background data update thread started")
    else:
        logger.warning("Background data updates disabled")

    # Log available endpoints
    logger.info("Available endpoints:")
    logger.info(" Data Monitoring:")
    logger.info(" GET /api/all - All PLC data")
    logger.info(" GET /api/select - Selective data (bandwidth optimized)")
    logger.info(" GET /api/sensors - All sensors")
    logger.info(" GET /api/timers - All timers")
    logger.info(" GET /api/outputs - Output controls")
    logger.info(" GET /api/runtime - Runtime hours")
    logger.info(" GET /api/water_counters - Water production counters")
    logger.info(" GET /api/status - Connection status")
    logger.info(" GET /api/config - API configuration")
    logger.info("")
    logger.info(" Control Operations:")
    logger.info(" POST /api/dts/start - Start DTS watermaker sequence")
    logger.info(" POST /api/dts/stop - Stop watermaker sequence")
    logger.info(" POST /api/dts/skip - Skip current step")
    logger.info(" GET /api/dts/status - Get DTS operation status")
    logger.info(" POST /api/write/register - Write single register")
    logger.info("")
    logger.info(" Examples:")
    logger.info(" /api/select?groups=temperature,pressure")
    logger.info(" /api/select?keys=1036,1003,1017")
    logger.info(" curl -X POST http://localhost:5000/api/dts/start")

    try:
        # Start the Flask server
        app.run(
            host=args.host,
            port=args.port,
            debug=args.debug,
            threaded=True
        )
    except KeyboardInterrupt:
        logger.info("Server shutdown requested by user")
        sys.exit(0)
    except Exception as e:
        logger.error(f"Failed to start server: {e}")
        sys.exit(1)


if __name__ == '__main__':
    main()
@@ -1,79 +0,0 @@
"""
Flask application factory and setup.
"""

from flask import Flask
from flask_cors import CORS
from .config import Config
from .utils.logger import get_logger
from .utils.error_handler import setup_error_handlers

# Import controllers
from .controllers.system_controller import system_bp
from .controllers.sensors_controller import sensors_bp
from .controllers.timers_controller import timers_bp
from .controllers.outputs_controller import outputs_bp
from .controllers.dts_controller import dts_bp

logger = get_logger(__name__)


def create_app(config_object=None):
    """
    Application factory pattern for creating Flask app.

    Args:
        config_object: Configuration class or object to use

    Returns:
        Flask: Configured Flask application
    """
    app = Flask(__name__)

    # Configure the app
    if config_object is None:
        config_object = Config

    app.config.from_object(config_object)

    # Enable CORS for web-based control panels
    if config_object.CORS_ENABLED:
        CORS(app)
        logger.info("CORS enabled for web applications")

    # Setup error handlers
    setup_error_handlers(app)

    # Register blueprints
    register_blueprints(app)

    # Log application startup
    logger.info("Flask application created")
    logger.info(f"Debug mode: {app.config.get('DEBUG', False)}")
    logger.info(f"PLC target: {config_object.PLC_IP}:{config_object.PLC_PORT}")

    return app


def register_blueprints(app):
    """Register all route blueprints with the Flask app"""

    # System and status endpoints
    app.register_blueprint(system_bp, url_prefix='/api')

    # Data monitoring endpoints
    app.register_blueprint(sensors_bp, url_prefix='/api')
    app.register_blueprint(timers_bp, url_prefix='/api')
    app.register_blueprint(outputs_bp, url_prefix='/api')

    # Control endpoints
    app.register_blueprint(dts_bp, url_prefix='/api')

    logger.info("All blueprints registered successfully")

    # Log registered routes
    if app.config.get('DEBUG', False):
        logger.debug("Registered routes:")
        for rule in app.url_map.iter_rules():
            methods = ','.join(rule.methods - {'OPTIONS', 'HEAD'})
            logger.debug(f"  {rule.rule} [{methods}] -> {rule.endpoint}")
@@ -1,77 +0,0 @@
"""
Configuration settings for the Watermaker PLC API.
"""

import os
from typing import Dict, Any


class Config:
    """Application configuration"""

    # Flask Settings
    DEBUG = os.getenv('DEBUG', 'False').lower() == 'true'
    SECRET_KEY = os.getenv('SECRET_KEY', 'watermaker-plc-api-dev-key')

    # PLC Connection Settings
    PLC_IP = os.getenv('PLC_IP', '198.18.100.141')
    PLC_PORT = int(os.getenv('PLC_PORT', '502'))
    PLC_UNIT_ID = int(os.getenv('PLC_UNIT_ID', '1'))
    PLC_TIMEOUT = int(os.getenv('PLC_TIMEOUT', '3'))
    PLC_CONNECTION_RETRY_INTERVAL = int(os.getenv('PLC_CONNECTION_RETRY_INTERVAL', '30'))

    # API Settings
    API_VERSION = "1.1"
    CORS_ENABLED = True

    # Background Task Settings
    DATA_UPDATE_INTERVAL = int(os.getenv('DATA_UPDATE_INTERVAL', '5'))  # seconds
    ERROR_RETRY_INTERVAL = int(os.getenv('ERROR_RETRY_INTERVAL', '10'))  # seconds
    MAX_CACHED_ERRORS = int(os.getenv('MAX_CACHED_ERRORS', '10'))

    # Logging Settings
    LOG_LEVEL = os.getenv('LOG_LEVEL', 'INFO').upper()
    LOG_FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

    @classmethod
    def get_plc_config(cls) -> Dict[str, Any]:
        """Get PLC connection configuration"""
        return {
            "ip_address": cls.PLC_IP,
            "port": cls.PLC_PORT,
            "unit_id": cls.PLC_UNIT_ID,
            "timeout": cls.PLC_TIMEOUT,
            "connected": False,
            "client": None,
            "last_connection_attempt": 0,
            "connection_retry_interval": cls.PLC_CONNECTION_RETRY_INTERVAL
        }


class DevelopmentConfig(Config):
    """Development configuration"""
    DEBUG = True
    PLC_IP = '127.0.0.1'  # Simulator for development


class ProductionConfig(Config):
    """Production configuration"""
    DEBUG = False

    @property
    def SECRET_KEY(self):
        """Get SECRET_KEY from environment, required in production"""
        secret_key = os.getenv('SECRET_KEY')
        if not secret_key:
            raise ValueError("SECRET_KEY environment variable must be set in production")
        return secret_key


class TestingConfig(Config):
    """Testing configuration"""
    TESTING = True
    DEBUG = True
    PLC_IP = '127.0.0.1'
    DATA_UPDATE_INTERVAL = 1  # Faster updates for testing
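Since `Config` reads `os.getenv()` at class-definition time, environment overrides must be in place before the config module is imported. A minimal standalone sketch of that pattern (the two lookups are copied from the class above; the override values are illustrative):

```python
import os

# Set overrides BEFORE the config module would be imported;
# the class attributes are evaluated once, at definition time.
os.environ['PLC_IP'] = '10.0.0.50'
os.environ['DEBUG'] = 'true'

# Re-creating the relevant lookups from Config above:
PLC_IP = os.getenv('PLC_IP', '198.18.100.141')
DEBUG = os.getenv('DEBUG', 'False').lower() == 'true'

print(PLC_IP, DEBUG)  # 10.0.0.50 True
```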
@@ -1,17 +0,0 @@
"""
Controller modules for handling API routes and business logic.
"""

from .system_controller import system_bp
from .sensors_controller import sensors_bp
from .timers_controller import timers_bp
from .outputs_controller import outputs_bp
from .dts_controller import dts_bp

__all__ = [
    'system_bp',
    'sensors_bp',
    'timers_bp',
    'outputs_bp',
    'dts_bp'
]
File diff suppressed because it is too large
@@ -1,46 +0,0 @@
"""
Outputs controller for digital output control endpoints.
"""

from flask import Blueprint, jsonify
from ..services.data_cache import get_data_cache
from ..utils.logger import get_logger

logger = get_logger(__name__)

# Create blueprint
outputs_bp = Blueprint('outputs', __name__)

# Initialize services
cache = get_data_cache()


@outputs_bp.route('/outputs')
def get_outputs():
    """Get output control data"""
    outputs = cache.get_outputs()

    return jsonify({
        "outputs": outputs,
        "last_update": cache.get_last_update(),
        "count": len(outputs)
    })


@outputs_bp.route('/outputs/active')
def get_active_outputs():
    """Get only active output controls"""
    active_outputs = cache.get_active_outputs()

    # Calculate total active outputs across all registers
    total_active = sum(
        len(output.get("active_bits", []))
        for output in active_outputs.values()
    )

    return jsonify({
        "active_outputs": active_outputs,
        "total_active": total_active,
        "register_count": len(active_outputs),
        "last_update": cache.get_last_update()
    })
@@ -1,75 +0,0 @@
"""
Sensors controller for sensor data and runtime/water counter endpoints.
"""

from flask import Blueprint, jsonify
from ..services.data_cache import get_data_cache
from ..utils.logger import get_logger
from ..utils.error_handler import create_error_response

logger = get_logger(__name__)

# Create blueprint
sensors_bp = Blueprint('sensors', __name__)

# Initialize services
cache = get_data_cache()


@sensors_bp.route('/sensors')
def get_sensors():
    """Get all sensor data"""
    sensors = cache.get_sensors()
    last_update = cache.get_last_update()

    return jsonify({
        "sensors": sensors,
        "last_update": last_update,
        "count": len(sensors)
    })


@sensors_bp.route('/sensors/category/<category>')
def get_sensors_by_category(category):
    """Get sensors filtered by category"""
    valid_categories = ['system', 'pressure', 'temperature', 'flow', 'quality']

    if category not in valid_categories:
        return create_error_response(
            "Bad Request",
            f"Invalid category '{category}'. Valid categories: {', '.join(valid_categories)}",
            400
        )

    filtered_sensors = cache.get_sensors_by_category(category)

    return jsonify({
        "category": category,
        "sensors": filtered_sensors,
        "count": len(filtered_sensors),
        "last_update": cache.get_last_update()
    })


@sensors_bp.route('/runtime')
def get_runtime():
    """Get runtime hours data (IEEE 754 float)"""
    runtime_data = cache.get_runtime()

    return jsonify({
        "runtime": runtime_data,
        "last_update": cache.get_last_update(),
        "count": len(runtime_data)
    })


@sensors_bp.route('/water_counters')
def get_water_counters():
    """Get water production counter data (gallon totals)"""
    water_counter_data = cache.get_water_counters()

    return jsonify({
        "water_counters": water_counter_data,
        "last_update": cache.get_last_update(),
        "count": len(water_counter_data)
    })
@@ -1,359 +0,0 @@
"""
System controller for status, configuration, and general API endpoints.
"""

from flask import Blueprint, jsonify, request
from datetime import datetime
from ..config import Config
from ..services.data_cache import get_data_cache
from ..services.plc_connection import get_plc_connection
from ..services.register_reader import RegisterReader
from ..services.register_writer import RegisterWriter
from ..utils.logger import get_logger
from ..utils.error_handler import create_error_response, create_success_response, RegisterWriteError, PLCConnectionError

logger = get_logger(__name__)

# Create blueprint
system_bp = Blueprint('system', __name__)

# Initialize services
cache = get_data_cache()
plc = get_plc_connection()
reader = RegisterReader()
writer = RegisterWriter()


@system_bp.route('/status')
def get_status():
    """Get connection and system status"""
    plc_status = plc.get_connection_status()

    return jsonify({
        "connection_status": cache.get_connection_status(),
        "last_update": cache.get_last_update(),
        "plc_config": {
            "ip": plc_status["ip_address"],
            "port": plc_status["port"],
            "connected": plc_status["connected"]
        },
        "timestamp": datetime.now().isoformat()
    })


@system_bp.route('/all')
def get_all_data():
    """Get all PLC data in one response"""
    all_data = cache.get_all_data()
    summary = cache.get_summary_stats()

    return jsonify({
        "status": {
            "connection_status": all_data["connection_status"],
            "last_update": all_data["last_update"],
            "connected": plc.is_connected
        },
        "sensors": all_data["sensors"],
        "timers": all_data["timers"],
        "rtc": all_data["rtc"],
        "outputs": all_data["outputs"],
        "runtime": all_data["runtime"],
        "water_counters": all_data["water_counters"],
        "summary": summary
    })


@system_bp.route('/select')
def get_selected_data():
    """Get only selected variables by groups and/or keys to reduce bandwidth and PLC traffic"""
    # Get query parameters
    groups_param = request.args.get('groups', '')
    keys_param = request.args.get('keys', '')

    # Parse groups and keys
    requested_groups = [g.strip() for g in groups_param.split(',') if g.strip()] if groups_param else []
    requested_keys = [k.strip() for k in keys_param.split(',') if k.strip()] if keys_param else []

    if not requested_groups and not requested_keys:
        return create_error_response(
            "Bad Request",
            "Must specify either 'groups' or 'keys' parameter",
            400,
            {
                "usage": {
                    "groups": "Comma-separated list: system,pressure,temperature,flow,quality,fwf_timer,dts_timer,rtc,outputs,runtime,water_counters",
                    "keys": "Comma-separated list of register numbers: 1000,1003,1017,136,138,5014,5024",
                    "examples": [
                        "/api/select?groups=temperature,pressure",
                        "/api/select?keys=1036,1003,1017",
                        "/api/select?groups=dts_timer&keys=1036",
                        "/api/select?groups=runtime,water_counters"
                    ]
                }
            }
        )

    # Check PLC connection
    if not plc.is_connected:
        if not plc.connect():
            return create_error_response(
                "Service Unavailable",
                "PLC connection failed",
                503,
                {"connection_status": cache.get_connection_status()}
            )

    try:
        # Read selective data
        result = reader.read_selective_data(requested_groups, requested_keys)
        result["timestamp"] = datetime.now().isoformat()

        return jsonify(result)

    except Exception as e:
        logger.error(f"Error reading selective data: {e}")
        return create_error_response(
            "Internal Server Error",
            f"Failed to read selective data: {str(e)}",
            500
        )


@system_bp.route('/errors')
def get_errors():
    """Get recent errors"""
    errors = cache.get_errors(limit=10)

    return jsonify({
        "errors": errors,
        "count": len(errors)
    })


@system_bp.route('/write/register', methods=['POST'])
def write_register():
    """Write to a single holding register"""
    try:
        # Check if request has JSON data
        if not request.is_json:
            return create_error_response(
                "Bad Request",
                "Request must be JSON with Content-Type: application/json",
                400
            )

        data = request.get_json()
        if not data or 'address' not in data or 'value' not in data:
            return create_error_response(
                "Bad Request",
                "Must provide 'address' and 'value' in JSON body",
                400
            )

        address = int(data['address'])
        value = int(data['value'])

        # Validate the write operation
        is_valid, error_msg = writer.validate_write_operation(address, value)
        if not is_valid:
            return create_error_response("Bad Request", error_msg, 400)

        # Perform the write
        success = writer.write_holding_register(address, value)

        return create_success_response(
            f"Successfully wrote {value} to register {address}",
            {
                "address": address,
                "value": value,
                "timestamp": datetime.now().isoformat()
            }
        )

    except ValueError as e:
        return create_error_response(
            "Bad Request",
            f"Invalid address or value: {e}",
            400
        )
    except (RegisterWriteError, PLCConnectionError) as e:
        return create_error_response(
            "Service Unavailable",
            str(e),
            503
        )
    except Exception as e:
        logger.error(f"Unexpected error in write_register: {e}")
        return create_error_response(
            "Internal Server Error",
            "An unexpected error occurred",
            500
        )


@system_bp.route('/config')
def get_config():
    """Get API configuration and available endpoints"""
    return jsonify({
        "api_version": Config.API_VERSION,
        "endpoints": {
            "/api/status": "Connection and system status",
            "/api/sensors": "All sensor data",
            "/api/sensors/category/<category>": "Sensors by category (system, pressure, temperature, flow, quality)",
            "/api/timers": "All timer data",
            "/api/timers/dts": "DTS timer data",
            "/api/timers/fwf": "FWF timer data",
            "/api/rtc": "Real-time clock data",
            "/api/outputs": "Output control data",
            "/api/outputs/active": "Active output controls only",
            "/api/runtime": "Runtime hours data (IEEE 754 float)",
            "/api/water_counters": "Water production counters (gallon totals)",
            "/api/all": "All data in one response",
            "/api/select": "Selective data retrieval (groups and/or keys) - BANDWIDTH OPTIMIZED",
            "/api/errors": "Recent errors",
            "/api/config": "This configuration",
            "/api/dts/start": "POST - Start DTS watermaker sequence (async)",
            "/api/dts/stop": "POST - Stop watermaker sequence (async, mode-dependent)",
            "/api/dts/skip": "POST - Skip current step automatically (async)",
            "/api/dts/status": "Get latest DTS operation status",
            "/api/dts/status/<task_id>": "Get specific DTS task status",
            "/api/dts/cancel/<task_id>": "POST - Cancel running DTS task",
            "/api/write/register": "POST - Write single holding register"
        },
        "control_endpoints": {
            "/api/dts/start": {
                "method": "POST",
                "description": "Start DTS watermaker sequence (ASYNC)",
                "parameters": "None required",
                "returns": "task_id for status polling",
                "response_time": "< 100ms (immediate)",
                "sequence": [
                    "Check R1000 value",
                    "Set R1000=34 if not already",
                    "Wait 2 seconds",
                    "Set R71=256",
                    "Wait 2 seconds",
                    "Set R71=0",
                    "Monitor R138 for valve positioning",
                    "Set R1000=5 to start DTS mode"
                ],
                "polling": {
                    "status_endpoint": "/api/dts/status/{task_id}",
                    "recommended_interval": "1 second",
                    "total_duration": "~10 seconds"
                }
            },
            "/api/dts/stop": {
                "method": "POST",
                "description": "Stop watermaker sequence (ASYNC, mode-dependent)",
                "parameters": "None required",
                "returns": "task_id for status polling",
                "response_time": "< 100ms (immediate)",
                "mode_sequences": {
                    "mode_5_dts": "R71=512, wait 1s, R71=0, R1000=8",
                    "mode_7_service": "R71=513, wait 1s, R71=0, R1000=8",
                    "mode_8_flush": "R71=1024, wait 1s, R71=0, R1000=2"
                },
                "note": "Watermaker always ends with flush screen (mode 8)"
            },
            "/api/dts/skip": {
                "method": "POST",
                "description": "Skip current step automatically (ASYNC)",
                "parameters": "None required - auto-determines next step",
                "returns": "task_id for status polling",
                "response_time": "< 100ms (immediate)",
                "auto_logic": {
                    "from_mode_5": "Skip step 2 -> step 3: R67=32841 (PLC advances to mode 6)",
                    "from_mode_6": "Skip step 3 -> step 4: R67=32968, wait 1s, R1000=7"
                },
                "valid_from_modes": [5, 6],
                "example": "/api/dts/skip"
            },
            "/api/write/register": {
                "method": "POST",
                "description": "Write single holding register",
                "body": {"address": "register_number", "value": "value_to_write"},
                "example": {"address": 1000, "value": 5}
            }
        },
        "variable_groups": {
            "system": {
                "description": "System status and operational mode",
                "keys": ["1000", "1036"],
                "count": 2
            },
            "pressure": {
                "description": "Water pressure sensors",
                "keys": ["1003", "1007", "1008"],
                "count": 3
            },
            "temperature": {
                "description": "Temperature monitoring",
                "keys": ["1017", "1125"],
                "count": 2
            },
            "flow": {
                "description": "Flow rate meters",
                "keys": ["1120", "1121", "1122"],
                "count": 3
            },
            "quality": {
                "description": "Water quality (TDS) sensors",
                "keys": ["1123", "1124"],
                "count": 2
            },
            "fwf_timer": {
                "description": "Fresh water flush timers",
                "keys": ["136"],
                "count": 1
            },
            "dts_timer": {
                "description": "DTS process step timers",
                "keys": ["138", "128", "129", "133", "135", "139"],
                "count": 6
            },
            "rtc": {
                "description": "Real-time clock registers",
                "keys": ["513", "514", "516", "517", "518", "519"],
                "count": 6
            },
            "outputs": {
                "description": "Digital output controls",
                "keys": ["257", "258", "259", "260", "264", "265"],
                "count": 6
            },
            "runtime": {
                "description": "System runtime hours (IEEE 754 float)",
                "keys": ["5014"],
                "count": 1,
                "note": "32-bit float from register pairs R5014+R5015"
            },
            "water_counters": {
                "description": "Water production counters (gallon totals)",
                "keys": ["5024", "5028", "5032", "5034"],
                "count": 4,
                "note": "32-bit floats from register pairs (Single/Double/DTS Total/Since Last)"
            }
        },
        "selective_api_usage": {
            "endpoint": "/api/select",
            "description": "Retrieve only specified variables to reduce bandwidth and PLC traffic",
            "parameters": {
                "groups": "Comma-separated group names (system,pressure,temperature,flow,quality,fwf_timer,dts_timer,rtc,outputs,runtime,water_counters)",
                "keys": "Comma-separated register numbers (1000,1003,1017,136,etc.)"
            },
            "examples": {
                "temperature_and_pressure": "/api/select?groups=temperature,pressure",
                "specific_sensors": "/api/select?keys=1036,1003,1017,1121",
                "dts_monitoring": "/api/select?groups=dts_timer&keys=1036",
                "critical_only": "/api/select?keys=1036,1003,1121,1123",
                "runtime_and_counters": "/api/select?groups=runtime,water_counters"
            }
        },
        "total_variables": 36,
        "update_interval": f"{Config.DATA_UPDATE_INTERVAL} seconds (full scan) / on-demand (selective)",
        "plc_config": {
            "ip": Config.PLC_IP,
            "port": Config.PLC_PORT
        }
    })
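The `/api/select` query contract above (comma-separated `groups` and/or register-number `keys`) can be exercised with a small client-side URL builder. A hedged sketch — the endpoint path is taken from the controller, the base URL and helper name are illustrative:

```python
from urllib.parse import urlencode

def select_url(base: str, groups=None, keys=None) -> str:
    # Mirrors the query parameters parsed by get_selected_data():
    # comma-separated 'groups' and/or register-number 'keys'.
    params = {}
    if groups:
        params['groups'] = ','.join(groups)
    if keys:
        params['keys'] = ','.join(str(k) for k in keys)
    return f"{base}/api/select?{urlencode(params)}"

print(select_url("http://localhost:5000", groups=["temperature", "pressure"]))
# http://localhost:5000/api/select?groups=temperature%2Cpressure
```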
@@ -1,78 +0,0 @@
"""
Timers controller for timer and RTC data endpoints.
"""

from flask import Blueprint, jsonify
from ..services.data_cache import get_data_cache
from ..utils.logger import get_logger

logger = get_logger(__name__)

# Create blueprint
timers_bp = Blueprint('timers', __name__)

# Initialize services
cache = get_data_cache()


@timers_bp.route('/timers')
def get_timers():
    """Get all timer data"""
    timers = cache.get_timers()
    active_timers = cache.get_active_timers()

    return jsonify({
        "timers": timers,
        "last_update": cache.get_last_update(),
        "active_timers": active_timers,
        "total_count": len(timers),
        "active_count": len(active_timers)
    })


@timers_bp.route('/timers/dts')
def get_dts_timers():
    """Get DTS timer data"""
    dts_timers = cache.get_timers_by_category("dts_timer")
    active_dts_timers = [
        addr for addr, timer in dts_timers.items()
        if timer.get("active", False)
    ]

    return jsonify({
        "dts_timers": dts_timers,
        "active_timers": active_dts_timers,
        "total_count": len(dts_timers),
        "active_count": len(active_dts_timers),
        "last_update": cache.get_last_update()
    })


@timers_bp.route('/timers/fwf')
def get_fwf_timers():
    """Get Fresh Water Flush timer data"""
    fwf_timers = cache.get_timers_by_category("fwf_timer")
    active_fwf_timers = [
        addr for addr, timer in fwf_timers.items()
        if timer.get("active", False)
    ]

    return jsonify({
        "fwf_timers": fwf_timers,
        "active_timers": active_fwf_timers,
        "total_count": len(fwf_timers),
        "active_count": len(active_fwf_timers),
        "last_update": cache.get_last_update()
    })


@timers_bp.route('/rtc')
def get_rtc():
    """Get real-time clock data"""
    rtc_data = cache.get_rtc()

    return jsonify({
        "rtc": rtc_data,
        "last_update": cache.get_last_update(),
        "count": len(rtc_data)
    })
@@ -1,20 +0,0 @@
"""
Data models and register mappings for PLC variables.
"""

from .sensor_mappings import KNOWN_SENSORS, get_sensor_by_category
from .timer_mappings import TIMER_REGISTERS, RTC_REGISTERS, get_timer_by_category
from .output_mappings import OUTPUT_CONTROLS, get_output_controls
from .runtime_mappings import RUNTIME_REGISTERS, WATER_COUNTER_REGISTERS

__all__ = [
    'KNOWN_SENSORS',
    'TIMER_REGISTERS',
    'RTC_REGISTERS',
    'OUTPUT_CONTROLS',
    'RUNTIME_REGISTERS',
    'WATER_COUNTER_REGISTERS',
    'get_sensor_by_category',
    'get_timer_by_category',
    'get_output_controls'
]
@@ -1,83 +0,0 @@
"""
Output control register mappings and configuration.
"""

from typing import Dict, List, Any

# Output control mappings
OUTPUT_CONTROLS = {
    257: {"name": "Low Pressure Pump", "register": 40017, "bit": 0},
    258: {"name": "High Pressure Pump", "register": 40017, "bit": 1},
    259: {"name": "Product Divert Valve", "register": 40017, "bit": 2},
    260: {"name": "Flush Solenoid", "register": 40017, "bit": 3},
    264: {"name": "Double Pass Solenoid", "register": 40017, "bit": 7},
    265: {"name": "Shore Feed Solenoid", "register": 40017, "bit": 8}
}


def get_output_controls() -> Dict[int, Dict[str, Any]]:
    """
    Get all output control configurations.

    Returns:
        Dict of output control configurations
    """
    return OUTPUT_CONTROLS.copy()


def get_output_registers() -> List[int]:
    """
    Get list of unique output register addresses.

    Returns:
        List of register addresses (e.g., [40017, 40018, ...])
    """
    registers = set()
    for config in OUTPUT_CONTROLS.values():
        registers.add(config["register"])
    return sorted(registers)


def extract_bit_value(register_value: int, bit_position: int) -> int:
    """
    Extract a specific bit value from a register.

    Args:
        register_value: Full register value
        bit_position: Bit position (0-15)

    Returns:
        Bit value (0 or 1)
    """
    return (register_value >> bit_position) & 1


def create_output_bit_info(register: int, register_value: int) -> List[Dict[str, Any]]:
    """
    Create bit information for all outputs in a register.

    Args:
        register: Register address
        register_value: Current register value

    Returns:
        List of bit information dicts
    """
    bits = []

    for bit in range(16):
        bit_value = extract_bit_value(register_value, bit)
        output_addr = ((register - 40017) * 16) + (bit + 1) + 256

        control_info = OUTPUT_CONTROLS.get(output_addr, {})
        bits.append({
            "bit": bit,
            "address": output_addr,
            "value": bit_value,
            "name": control_info.get("name", f"Output {output_addr}"),
            "active": bit_value == 1
        })

    return bits
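A quick worked check of the address arithmetic in `create_output_bit_info`: bit 0 of register 40017 maps to output address 257, and a register value with bits 0, 1, and 8 set should report the Low Pressure Pump, High Pressure Pump, and Shore Feed Solenoid as active. The two helpers are re-implemented inline so the sketch runs standalone:

```python
def extract_bit_value(register_value: int, bit_position: int) -> int:
    # Same bit-shift used in output_mappings above
    return (register_value >> bit_position) & 1

def output_address(register: int, bit: int) -> int:
    # Same mapping as create_output_bit_info: 40017 / bit 0 -> 257
    return ((register - 40017) * 16) + (bit + 1) + 256

value = 0b100000011  # bits 0, 1, and 8 set
active = [output_address(40017, b) for b in range(16)
          if extract_bit_value(value, b)]
print(active)  # [257, 258, 265]
```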
@@ -1,28 +0,0 @@
"""
Runtime and water counter register mappings (32-bit values from register pairs).
"""

from typing import Dict, List, Any

# Runtime register mappings (32-bit IEEE 754 float pairs)
RUNTIME_REGISTERS = {
    5014: {"name": "Runtime Hours", "scale": "ieee754", "unit": "hours", "category": "runtime",
           "pair_register": 5015, "description": "Total system runtime"}
}

# Water counter register mappings (32-bit gallon counters)
WATER_COUNTER_REGISTERS = {
    5024: {"name": "Single-Pass Total Gallons", "scale": "gallon_counter", "unit": "gallons", "category": "water_counters",
           "pair_register": 5025, "description": "Total single-pass water produced"},
    5026: {"name": "Single-Pass Total Gallons since last", "scale": "gallon_counter", "unit": "gallons", "category": "water_counters",
           "pair_register": 5027, "description": "Total single-pass water produced since last"},
    5028: {"name": "Double-Pass Total Gallons", "scale": "gallon_counter", "unit": "gallons", "category": "water_counters",
           "pair_register": 5029, "description": "Total double-pass water produced"},
    5030: {"name": "Double-Pass Total Gallons since last", "scale": "gallon_counter", "unit": "gallons", "category": "water_counters",
           "pair_register": 5031, "description": "Total double-pass water produced since last"},
    5032: {"name": "DTS Total Gallons", "scale": "gallon_counter", "unit": "gallons", "category": "water_counters",
           "pair_register": 5033, "description": "Total DTS water produced"},
    5034: {"name": "DTS Since Last Gallons", "scale": "gallon_counter", "unit": "gallons", "category": "water_counters",
           "pair_register": 5035, "description": "DTS water since last reset"}
}
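Each runtime value spans two consecutive 16-bit holding registers (e.g. R5014+R5015). A minimal decode sketch, assuming big-endian byte order with the high word in the first register — Modbus word/byte ordering varies by PLC, so this should be verified against a known value before use:

```python
import struct

def decode_float_pair(high_word: int, low_word: int) -> float:
    # Pack the two 16-bit registers and reinterpret as an IEEE 754 float
    raw = struct.pack('>HH', high_word, low_word)
    return struct.unpack('>f', raw)[0]

# 0x42F6_0000 is 123.0 in IEEE 754 single precision
print(decode_float_pair(0x42F6, 0x0000))  # 123.0
```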
@@ -1,68 +0,0 @@
|
"""
Sensor register mappings and configuration.
"""

from typing import Dict, List, Any


# Known sensor mappings with categorization
KNOWN_SENSORS = {
    # System Status & Control
    1000: {"name": "System Mode", "scale": "direct", "unit": "", "category": "system",
           "values": {
               "65535": "Standby",
               "2": "Home",
               "3": "Alarm List",
               "5": "DTS Prime",
               "6": "DTS Initialization",
               "7": "DTS Running",
               "8": "Fresh Water Flush",
               "9": "Settings",
               "15": "Service - Service Mode / Quality & Flush Valves / Pumps",
               "16": "Service - Double Pass & Feed Valves",
               "17": "Service - APC Need Valves",
               "18": "Service - Sensors - TDS, PPM, Flow, Temperature",
               "31": "Overview Schematic",
               "32": "Contact support",
               "33": "Seawater - Choose Single or Double Pass",
               "34": "DTS Request"
           }},
    1036: {"name": "System Status", "scale": "direct", "unit": "", "category": "system",
           "values": {"0": "Standby", "5": "FWF", "7": "Service Mode"}},

    # Pressure Sensors
    1003: {"name": "Feed Pressure", "scale": "direct", "unit": "PSI", "category": "pressure"},
    1007: {"name": "High Pressure #2", "scale": "direct", "unit": "PSI", "category": "pressure"},
    1008: {"name": "High Pressure #1", "scale": "direct", "unit": "PSI", "category": "pressure"},

    # Flow Meters
    1120: {"name": "Brine Flowmeter", "scale": "÷10", "unit": "GPM", "category": "flow"},
    1121: {"name": "1st Pass Product Flowmeter", "scale": "÷10", "unit": "GPM", "category": "flow"},
    1122: {"name": "2nd Pass Product Flowmeter", "scale": "÷10", "unit": "GPM", "category": "flow"},

    # Water Quality
    1123: {"name": "Product TDS #1", "scale": "direct", "unit": "PPM", "category": "quality"},
    1124: {"name": "Product TDS #2", "scale": "direct", "unit": "PPM", "category": "quality"},

    # Temperature Sensors
    1017: {"name": "Water Temperature", "scale": "÷10", "unit": "°F", "category": "temperature"},
    1125: {"name": "System Temperature", "scale": "÷10", "unit": "°F", "category": "temperature"}
}


def get_sensor_by_category(category: str) -> Dict[int, Dict[str, Any]]:
    """
    Get sensors filtered by category.

    Args:
        category: Sensor category (system, pressure, temperature, flow, quality)

    Returns:
        Dict of sensors in the specified category
    """
    return {
        addr: config for addr, config in KNOWN_SENSORS.items()
        if config.get("category") == category
    }
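For reference, the `scale` field above maps raw Modbus register values to engineering units. A minimal sketch of that convention (the `apply_scale` helper is illustrative only and not part of the module):

```python
# Illustrative helper, not part of sensor_mappings: mirrors the two scale
# conventions used in KNOWN_SENSORS ("direct" and "÷10").
def apply_scale(raw: int, scale: str) -> float:
    return raw / 10 if scale == "÷10" else float(raw)


# A raw Brine Flowmeter (R1120) reading of 123 corresponds to 12.3 GPM:
print(apply_scale(123, "÷10"))     # 12.3
print(apply_scale(450, "direct"))  # 450.0
```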
@@ -1,282 +0,0 @@
"""
Timer and RTC register mappings and configuration.
"""

from typing import Dict, List, Any, Optional


# Timer register mappings
TIMER_REGISTERS = {
    # FWF Mode Timer
    136: {"name": "FWF Flush Timer", "scale": "÷10", "unit": "sec", "category": "fwf_timer", "expected_start_value": 600},

    # DTS Screen Timers
    138: {"name": "DTS Valve Positioning Timer", "scale": "÷10", "unit": "sec", "category": "dts_timer", "expected_start_value": 150},
    128: {"name": "DTS Priming Timer", "scale": "÷10", "unit": "sec", "category": "dts_timer", "expected_start_value": 1800},
    129: {"name": "DTS Initialize Timer", "scale": "÷10", "unit": "sec", "category": "dts_timer", "expected_start_value": 600},
    133: {"name": "DTS Fresh Water Flush Timer", "scale": "÷10", "unit": "sec", "category": "dts_timer", "expected_start_value": 600},
    135: {"name": "DTS Stop Timer", "scale": "÷10", "unit": "sec", "category": "dts_timer", "expected_start_value": 100},
    139: {"name": "DTS Flush Timer", "scale": "÷10", "unit": "sec", "category": "dts_timer", "expected_start_value": 600}
}

# DTS Screen Flow and Definitions
DTS_FLOW_SEQUENCE = [34, 5, 6, 7, 8]

# DTS Screen Definitions
DTS_SCREENS = {
    34: {
        "name": "DTS Requested",
        "description": "DTS requested - press and hold DTS to START",
        "timer": None,
        "duration_seconds": None
    },
    5: {
        "name": "Priming",
        "description": "Flush with shore pressure",
        "timer": 128,  # R128 - DTS Priming Timer
        "duration_seconds": 180
    },
    6: {
        "name": "Init",
        "description": "High pressure pump on, product valve divert",
        "timer": 129,  # R129 - DTS Init Timer
        "duration_seconds": 60
    },
    7: {
        "name": "Production",
        "description": "High pressure pump on - water flowing to tank",
        "timer": None,  # No timer for production
        "duration_seconds": None
    },
    8: {
        "name": "Fresh Water Flush",
        "description": "Fresh water flush - end of DTS process",
        "timer": 133,  # R133 - DTS Fresh Water Flush Timer
        "duration_seconds": 60
    }
}

# RTC register mappings
RTC_REGISTERS = {
    513: {"name": "RTC Minutes", "scale": "direct", "unit": "min", "category": "rtc"},
    514: {"name": "RTC Seconds", "scale": "direct", "unit": "sec", "category": "rtc"},
    516: {"name": "RTC Year", "scale": "direct", "unit": "", "category": "rtc"},
    517: {"name": "RTC Month", "scale": "direct", "unit": "", "category": "rtc"},
    518: {"name": "RTC Day", "scale": "direct", "unit": "", "category": "rtc"},
    519: {"name": "RTC Month (Alt)", "scale": "direct", "unit": "", "category": "rtc"}
}


def get_timer_by_category(category: str) -> Dict[int, Dict[str, Any]]:
    """
    Get timers filtered by category.

    Args:
        category: Timer category (fwf_timer, dts_timer)

    Returns:
        Dict of timers in the specified category
    """
    return {
        addr: config for addr, config in TIMER_REGISTERS.items()
        if config.get("category") == category
    }


def get_timer_info(address: int) -> Dict[str, Any]:
    """
    Get configuration info for a specific timer.

    Args:
        address: Timer register address

    Returns:
        Timer configuration dict or empty dict if not found
    """
    return TIMER_REGISTERS.get(address, {})


def get_rtc_info(address: int) -> Dict[str, Any]:
    """
    Get configuration info for a specific RTC register.

    Args:
        address: RTC register address

    Returns:
        RTC configuration dict or empty dict if not found
    """
    return RTC_REGISTERS.get(address, {})


def get_timer_expected_start_value(address: int) -> int:
    """
    Get the expected start value for a timer register.

    Args:
        address: Timer register address

    Returns:
        Expected start value in raw register units, or 0 if not found
    """
    timer_info = TIMER_REGISTERS.get(address, {})
    return timer_info.get("expected_start_value", 0)


def calculate_timer_progress_percent(address: int, current_value: Optional[int], initial_value: Optional[int] = None) -> int:
    """
    Calculate progress percentage for a countdown timer.

    Args:
        address: Timer register address
        current_value: Current timer value
        initial_value: Initial timer value when step started (optional, uses expected_start_value if not provided)

    Returns:
        Progress percentage (0-100)
    """
    # Handle invalid timer values
    if current_value is None:
        return 0

    # Check for max value (65535) which indicates timer is not active
    if current_value == 65535:
        return 0

    if initial_value is None:
        initial_value = get_timer_expected_start_value(address)

    if initial_value <= 0:
        return 0

    # Handle case where current value is greater than expected start (unusual but possible)
    if current_value > initial_value:
        # This might happen if the timer was pre-loaded with a different value
        # Return 0% progress in this case
        return 0

    # For countdown timers: progress = (initial - current) / initial * 100
    progress = max(0, min(100, int((initial_value - current_value) / initial_value * 100)))

    # If timer has reached 0, it's 100% complete
    if current_value == 0:
        progress = 100

    return progress
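A worked example of the countdown formula above, inlined so it runs standalone. `progress_percent` is a condensed restatement, not the module function, with R128's expected start value of 1800 raw units (180.0 s at ÷10 scale) baked in:

```python
# Condensed restatement of calculate_timer_progress_percent for R128
# (DTS Priming Timer): initial defaults to its expected start value.
def progress_percent(current: int, initial: int = 1800) -> int:
    if current == 65535 or initial <= 0 or current > initial:
        return 0  # inactive timer, unconfigured, or pre-loaded value
    if current == 0:
        return 100  # countdown finished
    return max(0, min(100, int((initial - current) / initial * 100)))


print(progress_percent(1800))   # 0   (priming just started)
print(progress_percent(900))    # 50  (halfway through the 180 s prime)
print(progress_percent(65535))  # 0   (timer not active)
```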


def get_dts_screen_timer_mapping() -> Dict[int, int]:
    """
    Get mapping of DTS mode (R1000 value) to corresponding timer register.

    Returns:
        Dict mapping mode values to timer register addresses
    """
    return {
        # Based on actual DTS flow sequence [34, 5, 6, 7, 8]:
        # Mode 34: DTS Requested - no timer
        5: 128,  # DTS Priming Timer (R128)
        6: 129,  # DTS Init Timer (R129)
        # 7: No timer - Production phase (water flowing to tank)
        8: 133,  # DTS Fresh Water Flush Timer (R133)
    }


def get_timer_for_dts_mode(mode: int) -> int:
    """
    Get the timer register address for a specific DTS mode.

    Args:
        mode: DTS mode value (R1000)

    Returns:
        Timer register address, or 0 if not found
    """
    mapping = get_dts_screen_timer_mapping()
    return mapping.get(mode, 0)


def get_dts_flow_sequence() -> List[int]:
    """
    Get the expected DTS flow sequence.

    Returns:
        List of mode values in expected order
    """
    return DTS_FLOW_SEQUENCE.copy()


def get_dts_screen_info(mode: int) -> Dict[str, Any]:
    """
    Get screen information for a specific DTS mode.

    Args:
        mode: DTS mode value (R1000)

    Returns:
        Screen configuration dict or empty dict if not found
    """
    return DTS_SCREENS.get(mode, {})


def get_current_dts_screen_name(mode: int) -> str:
    """
    Get the screen name for a specific DTS mode.

    Args:
        mode: DTS mode value (R1000)

    Returns:
        Screen name or empty string if not found
    """
    screen_info = get_dts_screen_info(mode)
    return screen_info.get("name", "")


def get_next_screen_in_flow(current_mode: int) -> int:
    """
    Get the next expected screen in the DTS flow.

    Args:
        current_mode: Current DTS mode value (R1000)

    Returns:
        Next mode value in flow, or 0 if not found or at end
    """
    try:
        current_index = DTS_FLOW_SEQUENCE.index(current_mode)
        if current_index < len(DTS_FLOW_SEQUENCE) - 1:
            return DTS_FLOW_SEQUENCE[current_index + 1]
        return 0  # End of flow
    except ValueError:
        return 0  # Mode not in flow


def is_screen_skippable(mode: int) -> bool:
    """
    Check if a DTS screen can be skipped.

    Args:
        mode: DTS mode value (R1000)

    Returns:
        True if screen can be skipped
    """
    # Based on current skip logic: modes 5 and 6 can be skipped
    return mode in [5, 6]


def is_mode_in_dts_flow(mode: int) -> bool:
    """
    Check if a mode is part of the DTS flow sequence.

    Args:
        mode: Mode value to check

    Returns:
        True if mode is in DTS flow
    """
    return mode in DTS_FLOW_SEQUENCE
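The flow helpers above can be exercised standalone; this sketch copies `DTS_FLOW_SEQUENCE` and condenses `get_next_screen_in_flow` into a local `next_mode` (illustrative only, not the module function):

```python
# Local copy of the flow constant and the next-screen rule; 0 marks the
# end of the flow or an unknown mode, as in get_next_screen_in_flow.
DTS_FLOW_SEQUENCE = [34, 5, 6, 7, 8]


def next_mode(mode: int) -> int:
    try:
        i = DTS_FLOW_SEQUENCE.index(mode)
        return DTS_FLOW_SEQUENCE[i + 1] if i < len(DTS_FLOW_SEQUENCE) - 1 else 0
    except ValueError:
        return 0


print(next_mode(34))  # 5 (DTS Requested -> Priming)
print(next_mode(7))   # 8 (Production -> Fresh Water Flush)
print(next_mode(8))   # 0 (end of flow)
```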
@@ -1,20 +0,0 @@
"""
Service layer for PLC communication, data caching, and background tasks.
"""

from .plc_connection import PLCConnection, get_plc_connection
from .data_cache import DataCache, get_data_cache
from .register_reader import RegisterReader
from .register_writer import RegisterWriter
from .background_tasks import start_background_updates, BackgroundTaskManager

__all__ = [
    'PLCConnection',
    'get_plc_connection',
    'DataCache',
    'get_data_cache',
    'RegisterReader',
    'RegisterWriter',
    'start_background_updates',
    'BackgroundTaskManager'
]
@@ -1,259 +0,0 @@
"""
Background task management for continuous data updates.
"""

import threading
import time
from typing import Optional
from datetime import datetime
from ..config import Config
from ..utils.logger import get_logger
from .register_reader import RegisterReader
from .plc_connection import get_plc_connection
from .data_cache import get_data_cache

logger = get_logger(__name__)


class R1000Monitor:
    """Monitor R1000 for external changes that bypass the API"""

    def __init__(self):
        self.plc = get_plc_connection()
        self.cache = get_data_cache()
        self._last_r1000_value = None
        self._last_change_time = None
        self._external_change_callbacks = []

    def add_change_callback(self, callback):
        """Add a callback function to be called when R1000 changes externally"""
        self._external_change_callbacks.append(callback)

    def check_r1000_changes(self):
        """Check for changes in R1000 and detect external modifications"""
        try:
            if not self.plc.connect():
                return

            current_r1000 = self.plc.read_holding_register(1000)
            if current_r1000 is None:
                return

            # Initialize on first read
            if self._last_r1000_value is None:
                self._last_r1000_value = current_r1000
                self._last_change_time = datetime.now()
                logger.info(f"R1000 Monitor: Initial value = {current_r1000}")
                return

            # Check for changes
            if current_r1000 != self._last_r1000_value:
                change_time = datetime.now()

                # Log the change
                logger.info(f"R1000 Monitor: Value changed from {self._last_r1000_value} to {current_r1000}")

                # Store change information in cache for API access
                change_info = {
                    "previous_value": self._last_r1000_value,
                    "new_value": current_r1000,
                    "change_time": change_time.isoformat(),
                    "change_type": self._classify_change(self._last_r1000_value, current_r1000),
                    "external_change": True  # Assume external until proven otherwise
                }

                # Add to cache errors for visibility
                self.cache.add_error(f"R1000 External Change: {self._last_r1000_value} → {current_r1000} ({change_info['change_type']})")

                # Call registered callbacks
                for callback in self._external_change_callbacks:
                    try:
                        callback(change_info)
                    except Exception as e:
                        logger.error(f"R1000 Monitor: Callback error: {e}")

                # Update tracking variables
                self._last_r1000_value = current_r1000
                self._last_change_time = change_time

        except Exception as e:
            logger.error(f"R1000 Monitor: Error checking changes: {e}")

    def _classify_change(self, old_value, new_value):
        """Classify the type of change that occurred"""
        # Define mode classifications
        mode_names = {
            2: "Standby",
            5: "DTS_Priming",
            6: "DTS_Init",
            7: "DTS_Production",
            8: "DTS_Flush",
            34: "DTS_Requested"
        }

        old_name = mode_names.get(old_value, f"Unknown({old_value})")
        new_name = mode_names.get(new_value, f"Unknown({new_value})")

        # Classify change types
        if old_value == 2 and new_value in [5, 34]:
            return f"Process_Start: {old_name} → {new_name}"
        elif old_value in [5, 6, 7, 8, 34] and new_value == 2:
            return f"Process_Stop: {old_name} → {new_name}"
        elif old_value in [5, 6] and new_value == 7:
            return f"Step_Skip: {old_name} → {new_name}"
        elif old_value in [5, 6, 7] and new_value == 8:
            return f"Step_Advance: {old_name} → {new_name}"
        elif old_value == 34 and new_value == 5:
            return f"DTS_Start: {old_name} → {new_name}"
        else:
            return f"Mode_Change: {old_name} → {new_name}"

    def get_current_r1000(self):
        """Get the last known R1000 value"""
        return self._last_r1000_value

    def get_last_change_time(self):
        """Get the time of the last R1000 change"""
        return self._last_change_time


class BackgroundTaskManager:
    """Manages background tasks for PLC data updates"""

    def __init__(self):
        self.reader = RegisterReader()
        self.r1000_monitor = R1000Monitor()
        self._update_thread: Optional[threading.Thread] = None
        self._running = False
        self._stop_event = threading.Event()

        # Register R1000 change callback
        self.r1000_monitor.add_change_callback(self._handle_r1000_change)

    def _handle_r1000_change(self, change_info):
        """Handle R1000 changes detected by the monitor"""
        logger.warning(f"External R1000 Change Detected: {change_info['change_type']} at {change_info['change_time']}")

        # Check if this might affect running DTS operations
        try:
            from ..controllers.dts_controller import handle_external_dts_change
            from ..services.operation_state import get_operation_state_manager

            state_manager = get_operation_state_manager()
            is_running = state_manager.is_running()

            # Check if this is an external DTS start without existing API operation
            new_value = change_info.get("new_value")
            previous_value = change_info.get("previous_value")

            # DTS modes that indicate active DTS process
            dts_active_modes = [5, 6, 7, 8, 34]  # Priming, Init, Production, Flush, Requested

            # If we're entering a DTS mode and there's no running operation, create external monitoring
            if (new_value in dts_active_modes and
                    previous_value not in dts_active_modes and
                    not is_running):

                logger.info(f"Creating external DTS monitoring for mode {new_value}")
                external_operation_id = handle_external_dts_change(change_info)
                if external_operation_id:
                    logger.info(f"External DTS monitoring started: {external_operation_id}")

            # If there's a running operation, add this change to its external changes
            elif is_running:
                logger.warning("R1000 change detected while DTS operation running - possible external interference")
                current_state = state_manager.get_current_state()
                external_changes = current_state.get("external_changes", [])
                external_changes.append(change_info)
                state_manager.update_state({"external_changes": external_changes})

        except Exception as e:
            logger.error(f"Error handling R1000 change impact on DTS operations: {e}")

    def start_data_updates(self):
        """Start the background data update thread"""
        if self._running:
            logger.warning("Background data updates already running")
            return

        self._running = True
        self._stop_event.clear()
        self._update_thread = threading.Thread(
            target=self._data_update_loop,
            daemon=True,
            name="PLCDataUpdater"
        )
        self._update_thread.start()
        logger.info("Background data update thread started")

    def stop_data_updates(self):
        """Stop the background data update thread"""
        if not self._running:
            return

        self._running = False
        self._stop_event.set()

        if self._update_thread and self._update_thread.is_alive():
            self._update_thread.join(timeout=5)
            if self._update_thread.is_alive():
                logger.warning("Background thread did not stop gracefully")

        logger.info("Background data updates stopped")

    def _data_update_loop(self):
        """Main data update loop running in background thread"""
        logger.info("Starting PLC data update loop with R1000 monitoring")

        while self._running and not self._stop_event.is_set():
            try:
                # Update all PLC data
                self.reader.update_all_data()

                # Monitor R1000 for external changes
                self.r1000_monitor.check_r1000_changes()

                # Update DTS progress from timers
                try:
                    from ..controllers.dts_controller import update_dts_progress_from_timers
                    update_dts_progress_from_timers()
                except Exception as dts_error:
                    logger.debug(f"DTS progress update error: {dts_error}")

                # Wait for next update cycle
                self._stop_event.wait(Config.DATA_UPDATE_INTERVAL)

            except Exception as e:
                logger.error(f"Error in data update loop: {e}")
                # Wait longer on error to avoid rapid retries
                self._stop_event.wait(Config.ERROR_RETRY_INTERVAL)

        logger.info("PLC data update loop ended")

    def is_running(self) -> bool:
        """Check if background updates are running"""
        return self._running and self._update_thread is not None and self._update_thread.is_alive()


# Global background task manager instance
_task_manager: Optional[BackgroundTaskManager] = None


def get_task_manager() -> BackgroundTaskManager:
    """
    Get the global background task manager instance.

    Returns:
        BackgroundTaskManager instance
    """
    global _task_manager
    if _task_manager is None:
        _task_manager = BackgroundTaskManager()
    return _task_manager


def start_background_updates():
    """Start background data updates using the global task manager"""
    manager = get_task_manager()
    manager.start_data_updates()
@@ -1,223 +0,0 @@
"""
Centralized data cache for PLC sensor data, timers, and status.
"""

import threading
from datetime import datetime
from typing import Dict, List, Any, Optional
from ..config import Config
from ..utils.logger import get_logger

logger = get_logger(__name__)


class DataCache:
    """Thread-safe data cache for PLC data"""

    def __init__(self):
        self._lock = threading.RLock()
        self._data = {
            "sensors": {},
            "timers": {},
            "rtc": {},
            "outputs": {},
            "runtime": {},
            "water_counters": {},
            "last_update": None,
            "connection_status": "disconnected",
            "errors": []
        }

    def get_all_data(self) -> Dict[str, Any]:
        """
        Get all cached data (thread-safe).

        Returns:
            Copy of all cached data
        """
        with self._lock:
            return {
                "sensors": self._data["sensors"].copy(),
                "timers": self._data["timers"].copy(),
                "rtc": self._data["rtc"].copy(),
                "outputs": self._data["outputs"].copy(),
                "runtime": self._data["runtime"].copy(),
                "water_counters": self._data["water_counters"].copy(),
                "last_update": self._data["last_update"],
                "connection_status": self._data["connection_status"],
                "errors": self._data["errors"].copy()
            }

    def get_sensors(self) -> Dict[str, Any]:
        """Get all sensor data"""
        with self._lock:
            return self._data["sensors"].copy()

    def get_sensors_by_category(self, category: str) -> Dict[str, Any]:
        """Get sensors filtered by category"""
        with self._lock:
            return {
                addr: sensor for addr, sensor in self._data["sensors"].items()
                if sensor.get("category") == category
            }

    def get_timers(self) -> Dict[str, Any]:
        """Get all timer data"""
        with self._lock:
            return self._data["timers"].copy()

    def get_active_timers(self) -> List[str]:
        """Get list of active timer addresses"""
        with self._lock:
            return [
                addr for addr, timer in self._data["timers"].items()
                if timer.get("active", False)
            ]

    def get_timers_by_category(self, category: str) -> Dict[str, Any]:
        """Get timers filtered by category"""
        with self._lock:
            return {
                addr: timer for addr, timer in self._data["timers"].items()
                if timer.get("category") == category
            }

    def get_rtc(self) -> Dict[str, Any]:
        """Get RTC data"""
        with self._lock:
            return self._data["rtc"].copy()

    def get_outputs(self) -> Dict[str, Any]:
        """Get output data"""
        with self._lock:
            return self._data["outputs"].copy()

    def get_active_outputs(self) -> Dict[str, Any]:
        """Get only active output controls"""
        with self._lock:
            active_outputs = {}
            for reg, output in self._data["outputs"].items():
                active_bits = [bit for bit in output.get("bits", []) if bit.get("active", False)]
                if active_bits:
                    active_outputs[reg] = {
                        **output,
                        "active_bits": active_bits
                    }
            return active_outputs

    def get_runtime(self) -> Dict[str, Any]:
        """Get runtime data"""
        with self._lock:
            return self._data["runtime"].copy()

    def get_water_counters(self) -> Dict[str, Any]:
        """Get water counter data"""
        with self._lock:
            return self._data["water_counters"].copy()

    def set_sensors(self, sensors: Dict[str, Any]):
        """Update sensor data"""
        with self._lock:
            self._data["sensors"] = sensors
            self._data["last_update"] = datetime.now().isoformat()

    def set_timers(self, timers: Dict[str, Any]):
        """Update timer data"""
        with self._lock:
            self._data["timers"] = timers
            self._data["last_update"] = datetime.now().isoformat()

    def set_rtc(self, rtc: Dict[str, Any]):
        """Update RTC data"""
        with self._lock:
            self._data["rtc"] = rtc
            self._data["last_update"] = datetime.now().isoformat()

    def set_outputs(self, outputs: Dict[str, Any]):
        """Update output data"""
        with self._lock:
            self._data["outputs"] = outputs
            self._data["last_update"] = datetime.now().isoformat()

    def set_runtime(self, runtime: Dict[str, Any]):
        """Update runtime data"""
        with self._lock:
            self._data["runtime"] = runtime
            self._data["last_update"] = datetime.now().isoformat()

    def set_water_counters(self, water_counters: Dict[str, Any]):
        """Update water counter data"""
        with self._lock:
            self._data["water_counters"] = water_counters
            self._data["last_update"] = datetime.now().isoformat()

    def set_connection_status(self, status: str):
        """Update connection status"""
        with self._lock:
            self._data["connection_status"] = status

    def add_error(self, error: str):
        """Add error to error list (thread-safe)"""
        with self._lock:
            self._data["errors"].append({
                "timestamp": datetime.now().isoformat(),
                "error": error
            })
            # Keep only last N errors
            max_errors = Config.MAX_CACHED_ERRORS
            if len(self._data["errors"]) > max_errors:
                self._data["errors"] = self._data["errors"][-max_errors:]

    def get_errors(self, limit: int = 10) -> List[Dict[str, Any]]:
        """Get recent errors"""
        with self._lock:
            return self._data["errors"][-limit:]

    def get_last_update(self) -> Optional[str]:
        """Get last update timestamp"""
        with self._lock:
            return self._data["last_update"]

    def get_connection_status(self) -> str:
        """Get current connection status"""
        with self._lock:
            return self._data["connection_status"]

    def get_summary_stats(self) -> Dict[str, int]:
        """Get summary statistics"""
        with self._lock:
            return {
                "sensor_count": len(self._data["sensors"]),
                "active_timer_count": len([
                    t for t in self._data["timers"].values()
                    if t.get("active", False)
                ]),
                "active_output_count": sum(
                    len([b for b in output.get("bits", []) if b.get("active", False)])
                    for output in self._data["outputs"].values()
                ),
                "runtime_count": len(self._data["runtime"]),
                "water_counter_count": len(self._data["water_counters"]),
                "error_count": len(self._data["errors"])
            }


# Global data cache instance
_data_cache: Optional[DataCache] = None


def get_data_cache() -> DataCache:
    """
    Get the global data cache instance (singleton pattern).

    Returns:
        DataCache instance
    """
    global _data_cache
    if _data_cache is None:
        _data_cache = DataCache()
        logger.info("Data cache initialized")
    return _data_cache
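The truncation in `add_error` keeps only the most recent entries; this standalone sketch shows the behavior (`MAX_CACHED_ERRORS = 3` is a stand-in value, the real limit comes from `Config.MAX_CACHED_ERRORS`):

```python
# Stand-in for Config.MAX_CACHED_ERRORS (assumed value, for illustration).
MAX_CACHED_ERRORS = 3

errors = []
for i in range(5):
    errors.append({"error": f"error {i}"})
    # Same truncation rule as DataCache.add_error: keep only the last N.
    if len(errors) > MAX_CACHED_ERRORS:
        errors = errors[-MAX_CACHED_ERRORS:]

print([e["error"] for e in errors])  # ['error 2', 'error 3', 'error 4']
```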
@@ -1,125 +0,0 @@
"""
Single operation state management for DTS operations.
"""

import threading
from datetime import datetime
from typing import Optional, Dict, Any, Tuple
from ..utils.logger import get_logger

logger = get_logger(__name__)


class OperationStateManager:
    """Manages single DTS operation state"""

    def __init__(self):
        self._state_lock = threading.Lock()
        self._operation_state = self._create_idle_state()
        self._operation_history = []  # Optional: keep recent history

    def _create_idle_state(self) -> Dict[str, Any]:
        """Create a clean idle state"""
        return {
            "status": "idle",
            "operation_type": None,
            "operation_id": None,
            "current_step": None,
            "progress_percent": 0,
            "start_time": None,
            "end_time": None,
            "initiated_by": None,
            "current_mode": None,
            "target_mode": None,
            "steps_completed": [],
            "last_error": None,
            "timer_info": None,
            "external_changes": [],
            "screen_descriptions": {}
        }

    def start_operation(self, operation_type: str, initiated_by: str = "api") -> Tuple[bool, str, Dict]:
        """Start a new operation if none is running"""
        with self._state_lock:
            if self._operation_state["status"] == "running":
                return False, "Operation already in progress", {
                    "current_operation": self._operation_state["operation_type"],
                    "current_step": self._operation_state["current_step"]
                }

            # Generate operation ID for logging
            operation_id = f"{operation_type}_{int(datetime.now().timestamp())}"

            self._operation_state = self._create_idle_state()
            self._operation_state.update({
                "status": "running",
                "operation_type": operation_type,
                "operation_id": operation_id,
                "start_time": datetime.now().isoformat(),
                "initiated_by": initiated_by
            })

            logger.info(f"Operation started: {operation_type} (ID: {operation_id})")
            return True, f"{operation_type} operation started", {"operation_id": operation_id}

    def update_state(self, updates: Dict[str, Any]) -> None:
        """Update current operation state"""
        with self._state_lock:
            self._operation_state.update(updates)

    def complete_operation(self, success: bool = True, error_msg: Optional[str] = None) -> None:
        """Mark operation as completed or failed"""
        with self._state_lock:
            self._operation_state["end_time"] = datetime.now().isoformat()
            self._operation_state["status"] = "completed" if success else "failed"

            if error_msg:
                self._operation_state["last_error"] = {
                    "message": error_msg,
                    "timestamp": datetime.now().isoformat()
                }

            # Add to history
            self._operation_history.append(dict(self._operation_state))

            # Keep only last 10 operations in history
            if len(self._operation_history) > 10:
                self._operation_history = self._operation_history[-10:]

    def cancel_operation(self) -> bool:
        """Cancel current operation if running"""
        with self._state_lock:
            if self._operation_state["status"] != "running":
                return False

            self._operation_state["status"] = "cancelled"
            self._operation_state["end_time"] = datetime.now().isoformat()
            self._operation_state["last_error"] = {
                "message": "Operation cancelled by user",
                "timestamp": datetime.now().isoformat()
            }
            return True

    def get_current_state(self) -> Dict[str, Any]:
        """Get current operation state (thread-safe copy)"""
        with self._state_lock:
            return dict(self._operation_state)

    def is_idle(self) -> bool:
        """Check if system is idle"""
        with self._state_lock:
            return self._operation_state["status"] == "idle"

    def is_running(self) -> bool:
        """Check if operation is running"""
        with self._state_lock:
            return self._operation_state["status"] == "running"


# Global state manager instance
_state_manager: Optional[OperationStateManager] = None


def get_operation_state_manager() -> OperationStateManager:
    """Get global operation state manager"""
    global _state_manager
    if _state_manager is None:
        _state_manager = OperationStateManager()
    return _state_manager
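The single-operation guard in `start_operation` reduces to a small pattern; `MiniStateGuard` below is a hypothetical minimal sketch of it, not part of the module:

```python
import threading


# Minimal sketch of OperationStateManager's guard: a second start while
# one operation is "running" is rejected under the same lock.
class MiniStateGuard:
    def __init__(self):
        self._lock = threading.Lock()
        self.status = "idle"

    def start(self) -> bool:
        with self._lock:
            if self.status == "running":
                return False  # mirrors the "Operation already in progress" branch
            self.status = "running"
            return True


guard = MiniStateGuard()
print(guard.start())  # True  (idle -> running)
print(guard.start())  # False (already running)
```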
@@ -1,216 +0,0 @@
"""
PLC connection management and Modbus communication.
"""

import time
from typing import Optional

from pymodbus.client import ModbusTcpClient

from ..config import Config
from ..utils.logger import get_logger
from ..utils.error_handler import PLCConnectionError

logger = get_logger(__name__)


class PLCConnection:
    """Manages PLC connection and provides thread-safe access"""

    def __init__(self):
        self.config = Config.get_plc_config()
        self.client: Optional[ModbusTcpClient] = None
        self._is_connected = False

    @property
    def is_connected(self) -> bool:
        """Check if PLC is currently connected"""
        return self._is_connected and self.client is not None

    def connect(self) -> bool:
        """
        Establish connection to PLC with retry logic.

        Returns:
            True if connected successfully, False otherwise
        """
        current_time = time.time()

        # Check if we should retry connection
        if (self.config["last_connection_attempt"] +
                self.config["connection_retry_interval"]) > current_time:
            return self._is_connected

        self.config["last_connection_attempt"] = current_time

        try:
            # Close existing connection if any
            if self.client:
                self.client.close()

            # Create new client
            self.client = ModbusTcpClient(
                host=self.config["ip_address"],
                port=self.config["port"],
                timeout=self.config["timeout"]
            )

            # Attempt connection
            self._is_connected = self.client.connect()
            self.config["connected"] = self._is_connected

            if self._is_connected:
                logger.info(f"Connected to PLC at {self.config['ip_address']}:{self.config['port']}")
                return True
            else:
                logger.error(f"Failed to connect to PLC at {self.config['ip_address']}:{self.config['port']}")
                return False

        except Exception as e:
            logger.error(f"Error connecting to PLC: {e}")
            self._is_connected = False
            self.config["connected"] = False
            return False

    def disconnect(self):
        """Disconnect from PLC"""
        if self.client:
            try:
                self.client.close()
                logger.info("Disconnected from PLC")
            except Exception as e:
                logger.warning(f"Error during PLC disconnect: {e}")
            finally:
                self.client = None
                self._is_connected = False
                self.config["connected"] = False

    def read_input_register(self, address: int) -> Optional[int]:
        """
        Read input register (function code 4).

        Args:
            address: Register address

        Returns:
            Register value or None if read failed
        """
        if not self.is_connected:
            if not self.connect():
                return None

        try:
            result = self.client.read_input_registers(
                address, 1, slave=self.config["unit_id"]
            )
            if hasattr(result, 'registers') and not result.isError():
                return result.registers[0]
            else:
                logger.warning(f"Failed to read input register {address}: {result}")
                return None
        except Exception as e:
            logger.error(f"Error reading input register {address}: {e}")
            self._is_connected = False
            return None

    def read_holding_register(self, address: int) -> Optional[int]:
        """
        Read holding register (function code 3).

        Args:
            address: Register address

        Returns:
            Register value or None if read failed
        """
        if not self.is_connected:
            if not self.connect():
                return None

        try:
            result = self.client.read_holding_registers(
                address, 1, slave=self.config["unit_id"]
            )
            if hasattr(result, 'registers') and not result.isError():
                return result.registers[0]
            else:
                logger.warning(f"Failed to read holding register {address}: {result}")
                return None
        except Exception as e:
            logger.error(f"Error reading holding register {address}: {e}")
            self._is_connected = False
            return None

    def write_holding_register(self, address: int, value: int) -> bool:
        """
        Write single holding register (function code 6).

        Args:
            address: Register address
            value: Value to write

        Returns:
            True if write successful, False otherwise
        """
        if not self.is_connected:
            if not self.connect():
                raise PLCConnectionError("Cannot write register - PLC not connected")

        try:
            result = self.client.write_register(
                address, value, slave=self.config["unit_id"]
            )
            if not result.isError():
                logger.info(f"Successfully wrote value {value} to register {address}")
                return True
            else:
                logger.error(f"Error writing register {address}: {result}")
                return False
        except Exception as e:
            logger.error(f"Exception writing register {address}: {e}")
            self._is_connected = False
            raise PLCConnectionError(f"Failed to write register {address}: {e}")

    def get_connection_status(self) -> dict:
        """
        Get current connection status information.

        Returns:
            Dict with connection status details
        """
        return {
            "connected": self.is_connected,
            "ip_address": self.config["ip_address"],
            "port": self.config["port"],
            "unit_id": self.config["unit_id"],
            "timeout": self.config["timeout"],
            "last_connection_attempt": self.config["last_connection_attempt"],
            "retry_interval": self.config["connection_retry_interval"]
        }


# Global PLC connection instance
_plc_connection: Optional[PLCConnection] = None


def get_plc_connection() -> PLCConnection:
    """
    Get the global PLC connection instance (singleton pattern).

    Returns:
        PLCConnection instance
    """
    global _plc_connection
    if _plc_connection is None:
        _plc_connection = PLCConnection()
    return _plc_connection


def initialize_plc_connection() -> PLCConnection:
    """
    Initialize and return the PLC connection.

    Returns:
        PLCConnection instance
    """
    connection = get_plc_connection()
    connection.connect()
    return connection
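The `connect()` method above throttles reconnect attempts: if `last_connection_attempt + connection_retry_interval` is still in the future, it returns the cached state instead of dialing the PLC again. That gate is easy to isolate and test as a pure function; `should_attempt_connect` below is a hypothetical helper written for illustration, not a function from the removed module:

```python
def should_attempt_connect(last_attempt: float, retry_interval: float, now: float) -> bool:
    """Return True when the retry interval has elapsed since the last attempt.

    Mirrors the inverse of the gate in PLCConnection.connect(), which skips
    the attempt while (last_attempt + retry_interval) > now.
    """
    return (last_attempt + retry_interval) <= now
```

Extracting time-based guards like this into pure functions makes them testable without mocking `time.time()` or a live Modbus endpoint.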
@@ -1,487 +0,0 @@
"""
Service for reading PLC registers and updating the data cache.
"""

from typing import Optional, Tuple, Dict, Any, List

from ..models import (
    KNOWN_SENSORS, TIMER_REGISTERS, RTC_REGISTERS,
    RUNTIME_REGISTERS, WATER_COUNTER_REGISTERS
)
from ..models.output_mappings import get_output_registers, create_output_bit_info
from ..utils.data_conversion import (
    scale_value, get_descriptive_value, validate_register_value,
    convert_ieee754_float, convert_gallon_counter, format_binary_string
)
from ..utils.logger import get_logger
from .plc_connection import get_plc_connection
from .data_cache import get_data_cache

logger = get_logger(__name__)


class RegisterReader:
    """Service for reading PLC registers and updating cache"""

    def __init__(self):
        self.plc = get_plc_connection()
        self.cache = get_data_cache()

    def read_register_pair(self, high_address: int, low_address: int, conversion_type: str) -> Tuple[bool, Optional[float], Optional[int], Optional[int]]:
        """
        Read a pair of registers and convert them based on type.

        Args:
            high_address: High register address
            low_address: Low register address
            conversion_type: Conversion type (ieee754, gallon_counter)

        Returns:
            Tuple of (success, converted_value, raw_high, raw_low)
        """
        high_value = self.plc.read_holding_register(high_address)
        low_value = self.plc.read_holding_register(low_address)

        if not validate_register_value(high_value) or not validate_register_value(low_value):
            return False, None, high_value, low_value

        if conversion_type == "ieee754":
            converted = convert_ieee754_float(high_value, low_value)
        elif conversion_type == "gallon_counter":
            converted = convert_gallon_counter(high_value, low_value)
        else:
            converted = None

        return True, converted, high_value, low_value

    def update_sensors(self) -> bool:
        """
        Update all sensor data in cache.

        Returns:
            True if successful, False otherwise
        """
        try:
            sensors = {}

            for address, config in KNOWN_SENSORS.items():
                raw_value = self.plc.read_input_register(address)

                if validate_register_value(raw_value):
                    scaled_value = scale_value(raw_value, config["scale"])
                    descriptive_value = get_descriptive_value(raw_value, config)

                    sensors[str(address)] = {
                        "name": config["name"],
                        "raw_value": raw_value,
                        "scaled_value": scaled_value,
                        "descriptive_value": descriptive_value if isinstance(descriptive_value, str) else scaled_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "scale": config["scale"]
                    }

            self.cache.set_sensors(sensors)
            return True

        except Exception as e:
            logger.error(f"Error updating sensors: {e}")
            self.cache.add_error(f"Sensor update failed: {e}")
            return False

    def update_timers(self) -> bool:
        """
        Update all timer data in cache.

        Returns:
            True if successful, False otherwise
        """
        try:
            timers = {}

            for address, config in TIMER_REGISTERS.items():
                raw_value = self.plc.read_holding_register(address)

                if validate_register_value(raw_value):
                    scaled_value = scale_value(raw_value, config["scale"])

                    timers[str(address)] = {
                        "name": config["name"],
                        "raw_value": raw_value,
                        "scaled_value": scaled_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "active": raw_value > 0
                    }

            self.cache.set_timers(timers)
            return True

        except Exception as e:
            logger.error(f"Error updating timers: {e}")
            self.cache.add_error(f"Timer update failed: {e}")
            return False

    def update_rtc(self) -> bool:
        """
        Update RTC data in cache.

        Returns:
            True if successful, False otherwise
        """
        try:
            rtc_data = {}

            for address, config in RTC_REGISTERS.items():
                raw_value = self.plc.read_holding_register(address)

                if validate_register_value(raw_value):
                    rtc_data[str(address)] = {
                        "name": config["name"],
                        "value": raw_value,
                        "unit": config["unit"]
                    }

            self.cache.set_rtc(rtc_data)
            return True

        except Exception as e:
            logger.error(f"Error updating RTC: {e}")
            self.cache.add_error(f"RTC update failed: {e}")
            return False

    def update_runtime(self) -> bool:
        """
        Update runtime data in cache.

        Returns:
            True if successful, False otherwise
        """
        try:
            runtime_data = {}

            for address, config in RUNTIME_REGISTERS.items():
                success, converted_value, high_raw, low_raw = self.read_register_pair(
                    address, config["pair_register"], "ieee754"
                )

                if success and converted_value is not None:
                    runtime_data[str(address)] = {
                        "name": config["name"],
                        "value": converted_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "description": config["description"],
                        "raw_high": high_raw,
                        "raw_low": low_raw,
                        "high_register": address,
                        "low_register": config["pair_register"]
                    }

            self.cache.set_runtime(runtime_data)
            return True

        except Exception as e:
            logger.error(f"Error updating runtime: {e}")
            self.cache.add_error(f"Runtime update failed: {e}")
            return False

    def update_water_counters(self) -> bool:
        """
        Update water counter data in cache.

        Returns:
            True if successful, False otherwise
        """
        try:
            water_counter_data = {}

            for address, config in WATER_COUNTER_REGISTERS.items():
                success, converted_value, high_raw, low_raw = self.read_register_pair(
                    address, config["pair_register"], "gallon_counter"
                )

                if success and converted_value is not None:
                    water_counter_data[str(address)] = {
                        "name": config["name"],
                        "value": converted_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "description": config["description"],
                        "raw_high": high_raw,
                        "raw_low": low_raw,
                        "high_register": address,
                        "low_register": config["pair_register"]
                    }

            self.cache.set_water_counters(water_counter_data)
            return True

        except Exception as e:
            logger.error(f"Error updating water counters: {e}")
            self.cache.add_error(f"Water counter update failed: {e}")
            return False

    def update_outputs(self) -> bool:
        """
        Update output control data in cache.

        Returns:
            True if successful, False otherwise
        """
        try:
            outputs = {}
            output_registers = get_output_registers()

            for reg in output_registers:
                modbus_addr = reg - 40001
                raw_value = self.plc.read_holding_register(modbus_addr)

                if validate_register_value(raw_value):
                    outputs[str(reg)] = {
                        "register": reg,
                        "value": raw_value,
                        "binary": format_binary_string(raw_value),
                        "bits": create_output_bit_info(reg, raw_value)
                    }

            self.cache.set_outputs(outputs)
            return True

        except Exception as e:
            logger.error(f"Error updating outputs: {e}")
            self.cache.add_error(f"Output update failed: {e}")
            return False

    def update_all_data(self) -> bool:
        """
        Update all PLC data in cache.

        Returns:
            True if all updates successful, False if any failed
        """
        success = True

        # Update connection status
        if self.plc.is_connected:
            self.cache.set_connection_status("connected")
        else:
            self.cache.set_connection_status("disconnected")
            if not self.plc.connect():
                self.cache.set_connection_status("connection_failed")
                return False

        # Update all data types
        success &= self.update_sensors()
        success &= self.update_timers()
        success &= self.update_rtc()
        success &= self.update_runtime()
        success &= self.update_water_counters()
        success &= self.update_outputs()

        if success:
            logger.debug("All PLC data updated successfully")
        else:
            logger.warning("Some PLC data updates failed")

        return success

    def read_selective_data(self, groups: List[str], keys: List[str]) -> Dict[str, Any]:
        """
        Read only selected variables by groups and/or keys.

        Args:
            groups: List of group names
            keys: List of register keys

        Returns:
            Dict containing selected data
        """
        result = {
            "sensors": {},
            "timers": {},
            "rtc": {},
            "outputs": {},
            "runtime": {},
            "water_counters": {},
            "requested_groups": groups,
            "requested_keys": keys
        }

        # Collect addresses to read based on groups and keys
        sensor_addresses = set()
        timer_addresses = set()
        output_registers = set()
        runtime_addresses = set()
        water_counter_addresses = set()

        # Add addresses by groups
        group_mappings = {
            "system": [1000, 1036],
            "pressure": [1003, 1007, 1008],
            "temperature": [1017, 1125],
            "flow": [1120, 1121, 1122],
            "quality": [1123, 1124],
            "fwf_timer": [136],
            "dts_timer": [138, 128, 129, 133, 135, 139],
            "rtc": [513, 514, 516, 517, 518, 519],
            "outputs": [40017, 40018, 40019, 40020, 40021, 40022],
            "runtime": list(RUNTIME_REGISTERS.keys()),
            "water_counters": list(WATER_COUNTER_REGISTERS.keys())
        }

        for group in groups:
            if group in group_mappings:
                addresses = group_mappings[group]
                if group == "outputs":
                    output_registers.update(addresses)
                elif group == "runtime":
                    runtime_addresses.update(addresses)
                elif group == "water_counters":
                    water_counter_addresses.update(addresses)
                elif group in ["fwf_timer", "dts_timer", "rtc"]:
                    timer_addresses.update(addresses)
                else:
                    sensor_addresses.update(addresses)

        # Add specific requested keys
        for key in keys:
            try:
                addr = int(key)
                if addr in KNOWN_SENSORS:
                    sensor_addresses.add(addr)
                elif addr in TIMER_REGISTERS or addr in RTC_REGISTERS:
                    timer_addresses.add(addr)
                elif addr in RUNTIME_REGISTERS:
                    runtime_addresses.add(addr)
                elif addr in WATER_COUNTER_REGISTERS:
                    water_counter_addresses.add(addr)
                elif addr >= 40017 and addr <= 40022:
                    output_registers.add(addr)
                else:
                    sensor_addresses.add(addr)
            except ValueError:
                continue

        # Read and populate selected data
        total_reads = 0

        # Read sensors
        for address in sensor_addresses:
            raw_value = self.plc.read_input_register(address)
            if validate_register_value(raw_value):
                config = KNOWN_SENSORS.get(address, {
                    "name": f"Register {address}",
                    "scale": "direct",
                    "unit": "",
                    "category": "unknown"
                })

                scaled_value = scale_value(raw_value, config["scale"])
                descriptive_value = get_descriptive_value(raw_value, config)

                result["sensors"][str(address)] = {
                    "name": config["name"],
                    "raw_value": raw_value,
                    "scaled_value": scaled_value,
                    "descriptive_value": descriptive_value if isinstance(descriptive_value, str) else scaled_value,
                    "unit": config["unit"],
                    "category": config.get("category", "unknown"),
                    "scale": config["scale"]
                }
                total_reads += 1

        # Read timers/RTC
        for address in timer_addresses:
            raw_value = self.plc.read_holding_register(address)
            if validate_register_value(raw_value):
                if address in TIMER_REGISTERS:
                    config = TIMER_REGISTERS[address]
                    scaled_value = scale_value(raw_value, config["scale"])

                    result["timers"][str(address)] = {
                        "name": config["name"],
                        "raw_value": raw_value,
                        "scaled_value": scaled_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "active": raw_value > 0
                    }
                elif address in RTC_REGISTERS:
                    config = RTC_REGISTERS[address]

                    result["rtc"][str(address)] = {
                        "name": config["name"],
                        "value": raw_value,
                        "unit": config["unit"]
                    }
                total_reads += 1

        # Read runtime registers
        for address in runtime_addresses:
            if address in RUNTIME_REGISTERS:
                config = RUNTIME_REGISTERS[address]
                success, converted_value, high_raw, low_raw = self.read_register_pair(
                    address, config["pair_register"], "ieee754"
                )

                if success and converted_value is not None:
                    result["runtime"][str(address)] = {
                        "name": config["name"],
                        "value": converted_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "description": config["description"],
                        "raw_high": high_raw,
                        "raw_low": low_raw,
                        "high_register": address,
                        "low_register": config["pair_register"]
                    }
                    total_reads += 2  # Register pair

        # Read water counter registers
        for address in water_counter_addresses:
            if address in WATER_COUNTER_REGISTERS:
                config = WATER_COUNTER_REGISTERS[address]
                success, converted_value, high_raw, low_raw = self.read_register_pair(
                    address, config["pair_register"], "gallon_counter"
                )

                if success and converted_value is not None:
                    result["water_counters"][str(address)] = {
                        "name": config["name"],
                        "value": converted_value,
                        "unit": config["unit"],
                        "category": config["category"],
                        "description": config["description"],
                        "raw_high": high_raw,
                        "raw_low": low_raw,
                        "high_register": address,
                        "low_register": config["pair_register"]
                    }
                    total_reads += 2  # Register pair

        # Read outputs
        for reg in output_registers:
            modbus_addr = reg - 40001
            raw_value = self.plc.read_holding_register(modbus_addr)
            if validate_register_value(raw_value):
                result["outputs"][str(reg)] = {
                    "register": reg,
                    "value": raw_value,
                    "binary": format_binary_string(raw_value),
                    "bits": create_output_bit_info(reg, raw_value)
                }
                total_reads += 1

        # Add summary
        result["summary"] = {
            "sensors_read": len(result["sensors"]),
            "timers_read": len(result["timers"]),
            "rtc_read": len(result["rtc"]),
            "outputs_read": len(result["outputs"]),
            "runtime_read": len(result["runtime"]),
            "water_counters_read": len(result["water_counters"]),
            "total_plc_reads": total_reads
        }

        return result
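The selective-read path above routes each requested key to a per-type address set by membership tests, silently skipping non-numeric keys and falling back to a sensor read for unknown addresses. That routing is independent of Modbus I/O and can be sketched in isolation; `route_keys` below is a hypothetical standalone helper, not a function from the removed service:

```python
def route_keys(keys, sensor_map, timer_map):
    """Route requested string keys to per-type address sets.

    Mirrors the key-routing loop in read_selective_data(): malformed keys
    are skipped, known addresses go to their type's set, and anything
    unrecognized falls back to the sensor set.
    """
    sensors, timers, fallback = set(), set(), set()
    for key in keys:
        try:
            addr = int(key)
        except ValueError:
            continue  # skip non-numeric keys, as the original loop does
        if addr in sensor_map:
            sensors.add(addr)
        elif addr in timer_map:
            timers.add(addr)
        else:
            fallback.add(addr)  # unknown address: attempted as a sensor read upstream
    return sensors, timers, fallback
```

Keeping the routing pure like this lets the group/key logic be unit-tested without a PLC or cache in the loop.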
@@ -1,111 +0,0 @@
"""
Service for writing to PLC registers.
"""

from typing import Dict, Any

from ..utils.logger import get_logger
from ..utils.error_handler import RegisterWriteError, PLCConnectionError
from .plc_connection import get_plc_connection
from .data_cache import get_data_cache

logger = get_logger(__name__)


class RegisterWriter:
    """Service for writing to PLC registers"""

    def __init__(self):
        self.plc = get_plc_connection()
        self.cache = get_data_cache()

    def write_holding_register(self, address: int, value: int) -> bool:
        """
        Write a single holding register.

        Args:
            address: Register address
            value: Value to write

        Returns:
            True if write successful

        Raises:
            RegisterWriteError: If write operation fails
            PLCConnectionError: If PLC connection fails
        """
        try:
            # Validate inputs
            if not isinstance(address, int) or address < 0:
                raise RegisterWriteError(f"Invalid register address: {address}")

            if not isinstance(value, int) or value < 0 or value > 65535:
                raise RegisterWriteError(f"Invalid register value: {value}. Must be 0-65535")

            # Ensure PLC connection
            if not self.plc.is_connected:
                if not self.plc.connect():
                    raise PLCConnectionError("Failed to connect to PLC")

            # Perform write operation
            success = self.plc.write_holding_register(address, value)

            if success:
                logger.info(f"Successfully wrote {value} to register {address}")
                return True
            else:
                raise RegisterWriteError(f"Failed to write register {address}")

        except (RegisterWriteError, PLCConnectionError):
            # Re-raise our custom exceptions
            raise
        except Exception as e:
            error_msg = f"Unexpected error writing register {address}: {e}"
            logger.error(error_msg)
            self.cache.add_error(error_msg)
            raise RegisterWriteError(error_msg)

    def write_multiple_registers(self, writes: Dict[int, int]) -> Dict[int, bool]:
        """
        Write multiple holding registers.

        Args:
            writes: Dict mapping addresses to values

        Returns:
            Dict mapping addresses to success status
        """
        results = {}

        for address, value in writes.items():
            try:
                results[address] = self.write_holding_register(address, value)
            except (RegisterWriteError, PLCConnectionError) as e:
                logger.error(f"Failed to write register {address}: {e}")
                results[address] = False

        return results

    def validate_write_operation(self, address: int, value: int) -> tuple:
        """
        Validate a write operation before execution.

        Args:
            address: Register address
            value: Value to write

        Returns:
            Tuple of (is_valid, error_message)
        """
        if not isinstance(address, int):
            return False, "Address must be an integer"

        if address < 0:
            return False, "Address must be non-negative"

        if not isinstance(value, int):
            return False, "Value must be an integer"

        if value < 0 or value > 65535:
            return False, "Value must be between 0 and 65535"

        return True, ""
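The writer validates every value against the 16-bit holding-register range (0-65535) before touching the PLC. The same pre-checks can be condensed into one small function; `validate_write` below is a hypothetical condensed form for illustration, not the removed `validate_write_operation` itself:

```python
def validate_write(address, value):
    """Condensed pre-check for a 16-bit Modbus holding-register write.

    Returns (is_valid, error_message), matching the shape of the
    original validate_write_operation().
    """
    if not isinstance(address, int) or address < 0:
        return False, "Address must be a non-negative integer"
    if not isinstance(value, int) or not (0 <= value <= 65535):
        return False, "Value must be an integer between 0 and 65535"
    return True, ""
```

Validating before connecting means malformed requests are rejected instantly instead of costing a Modbus round trip.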
@@ -1,15 +0,0 @@
"""
Utility modules for data conversion, logging, and error handling.
"""

from .logger import get_logger
from .data_conversion import scale_value, convert_ieee754_float, convert_gallon_counter
from .error_handler import setup_error_handlers

__all__ = [
    'get_logger',
    'scale_value',
    'convert_ieee754_float',
    'convert_gallon_counter',
    'setup_error_handlers'
]
@@ -1,144 +0,0 @@
"""
Data conversion utilities for PLC register values.
"""

import struct
from typing import Union, Optional

from .logger import get_logger

logger = get_logger(__name__)


def scale_value(value: Union[int, float], scale_type: str) -> Union[int, float]:
    """
    Apply scaling to sensor values based on scale type.

    Args:
        value: Raw register value
        scale_type: Scaling type (e.g., "direct", "÷10", "×100")

    Returns:
        Scaled value
    """
    if scale_type == "direct":
        return value
    elif scale_type.startswith("÷"):
        try:
            divisor = float(scale_type[1:])
            return value / divisor
        except (ValueError, ZeroDivisionError):
            logger.warning(f"Invalid divisor in scale_type: {scale_type}")
            return value
    elif scale_type.startswith("×"):
        try:
            multiplier = float(scale_type[1:])
            return value * multiplier
        except ValueError:
            logger.warning(f"Invalid multiplier in scale_type: {scale_type}")
            return value
    else:
        logger.warning(f"Unknown scale_type: {scale_type}")
        return value


def convert_ieee754_float(high_register: int, low_register: int) -> Optional[float]:
    """
    Convert two 16-bit registers to IEEE 754 32-bit float.

    Args:
        high_register: High 16 bits
        low_register: Low 16 bits

    Returns:
        Float value or None if conversion fails
    """
    try:
        # Combine registers into 32-bit value (big-endian)
        combined_32bit = (high_register << 16) | low_register

        # Convert to bytes and then to IEEE 754 float
        bytes_value = struct.pack('>I', combined_32bit)  # Big-endian unsigned int
        float_value = struct.unpack('>f', bytes_value)[0]  # Big-endian float

        return round(float_value, 2)  # Round to 2 decimal places like HMI
    except Exception as e:
        logger.error(f"Error converting IEEE 754 float: {e}")
        return None


def convert_gallon_counter(high_register: int, low_register: int) -> Optional[float]:
    """
    Convert two 16-bit registers to gallon counter value.

    Args:
        high_register: High 16 bits
        low_register: Low 16 bits

    Returns:
        Gallon count as float or None if conversion fails
    """
    try:
        # Combine registers into 32-bit value
        combined_32bit = (high_register << 16) | low_register

        # Convert to IEEE 754 float (same as runtime hours)
        bytes_value = struct.pack('>I', combined_32bit)
        float_value = struct.unpack('>f', bytes_value)[0]

        return round(float_value, 2)
    except Exception as e:
        logger.error(f"Error converting gallon counter: {e}")
        return None


def get_descriptive_value(value: Union[int, float], sensor_config: dict) -> Union[str, int, float]:
    """
    Convert numeric values to descriptive text where applicable.

    Args:
        value: Numeric register value
        sensor_config: Sensor configuration containing value mappings

    Returns:
        Descriptive string or original value
    """
    if "values" in sensor_config and isinstance(sensor_config["values"], dict):
        return sensor_config["values"].get(str(value), f"Unknown ({value})")
    return value


def validate_register_value(value: Optional[int], max_value: int = 65536) -> bool:
    """
    Validate that a register value is within acceptable range.

    Args:
        value: Register value to validate
        max_value: Maximum acceptable value (default: 65536)

    Returns:
        True if value is valid, False otherwise
    """
    if value is None:
        return False

    if not isinstance(value, (int, float)):
        return False

    if value < 0 or value >= max_value:
        return False

    return True


def format_binary_string(value: int, width: int = 16) -> str:
    """
    Format an integer as a binary string with specified width.

    Args:
        value: Integer value to format
        width: Number of bits to display (default: 16)

    Returns:
        Binary string representation
    """
    return format(value, f'0{width}b')
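Both `convert_ieee754_float` and `convert_gallon_counter` above use the same word-combining scheme: the high register supplies the upper 16 bits, the low register the lower 16, and the 32-bit result is reinterpreted as a big-endian IEEE 754 float via `struct`. A worked round-trip makes the scheme concrete; `regs_to_float`/`float_to_regs` below are illustrative helpers, with the rounding step omitted so the raw conversion is visible:

```python
import struct


def regs_to_float(high: int, low: int) -> float:
    """Combine two 16-bit registers (high word first) into an IEEE 754 float."""
    combined = (high << 16) | low
    # Reinterpret the 32-bit pattern as a big-endian float
    return struct.unpack('>f', struct.pack('>I', combined))[0]


def float_to_regs(value: float):
    """Inverse helper: split a float32 bit pattern into (high, low) registers."""
    combined = struct.unpack('>I', struct.pack('>f', value))[0]
    return (combined >> 16) & 0xFFFF, combined & 0xFFFF
```

For example, the float 1.0 has the bit pattern `0x3F800000`, so it arrives on the wire as high register `0x3F80` (16256) and low register `0x0000` — a useful sanity check when verifying register word order against the HMI.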
@@ -1,167 +0,0 @@
"""
Centralized error handling for the Flask application.
"""

from flask import Flask, jsonify, request
from datetime import datetime
from typing import Dict, Any, Optional
from .logger import get_logger

logger = get_logger(__name__)


def setup_error_handlers(app: Flask):
    """
    Set up error handlers for the Flask application.

    Args:
        app: Flask application instance
    """

    @app.errorhandler(400)
    def bad_request(error):
        """Handle 400 Bad Request errors"""
        logger.warning(f"Bad request: {request.url} - {error.description}")
        return jsonify({
            "success": False,
            "error": "Bad Request",
            "message": error.description or "Invalid request parameters",
            "timestamp": datetime.now().isoformat()
        }), 400

    @app.errorhandler(404)
    def not_found(error):
        """Handle 404 Not Found errors"""
        logger.warning(f"Not found: {request.url}")
        return jsonify({
            "success": False,
            "error": "Not Found",
            "message": f"Resource not found: {request.path}",
            "timestamp": datetime.now().isoformat()
        }), 404

    @app.errorhandler(405)
    def method_not_allowed(error):
        """Handle 405 Method Not Allowed errors"""
        logger.warning(f"Method not allowed: {request.method} {request.url}")
        return jsonify({
            "success": False,
            "error": "Method Not Allowed",
            "message": f"Method {request.method} not allowed for {request.path}",
            "allowed_methods": list(error.valid_methods) if hasattr(error, 'valid_methods') else [],
            "timestamp": datetime.now().isoformat()
        }), 405

    @app.errorhandler(409)
    def conflict(error):
        """Handle 409 Conflict errors"""
        logger.warning(f"Conflict: {request.url} - {error.description}")
        return jsonify({
            "success": False,
            "error": "Conflict",
            "message": error.description or "Request conflicts with current state",
            "timestamp": datetime.now().isoformat()
        }), 409

    @app.errorhandler(500)
    def internal_error(error):
        """Handle 500 Internal Server Error"""
        logger.error(f"Internal server error: {request.url} - {str(error)}")
        return jsonify({
            "success": False,
            "error": "Internal Server Error",
            "message": "An unexpected error occurred",
            "timestamp": datetime.now().isoformat()
        }), 500

    @app.errorhandler(503)
    def service_unavailable(error):
        """Handle 503 Service Unavailable errors"""
        logger.error(f"Service unavailable: {request.url} - {error.description}")
        return jsonify({
            "success": False,
            "error": "Service Unavailable",
            "message": error.description or "PLC connection unavailable",
            "timestamp": datetime.now().isoformat()
        }), 503

    logger.info("Error handlers configured")


def create_error_response(
    error_type: str,
    message: str,
    status_code: int = 400,
    details: Optional[Dict[str, Any]] = None
) -> tuple:
    """
    Create a standardized error response.

    Args:
        error_type: Type of error (e.g., "Bad Request", "PLC Error")
        message: Error message
        status_code: HTTP status code
        details: Optional additional error details

    Returns:
        Tuple of (response_dict, status_code)
    """
    response = {
        "success": False,
        "error": error_type,
        "message": message,
        "timestamp": datetime.now().isoformat()
    }

    if details:
        response["details"] = details

    return jsonify(response), status_code


def create_success_response(
    message: str = "Success",
    data: Optional[Dict[str, Any]] = None,
    status_code: int = 200
) -> tuple:
    """
    Create a standardized success response.

    Args:
        message: Success message
        data: Optional response data
        status_code: HTTP status code

    Returns:
        Tuple of (response_dict, status_code)
    """
    response = {
        "success": True,
        "message": message,
        "timestamp": datetime.now().isoformat()
    }

    if data:
        response.update(data)

    return jsonify(response), status_code


class PLCConnectionError(Exception):
    """Exception raised when PLC connection fails"""
    pass


class RegisterReadError(Exception):
    """Exception raised when register read operation fails"""
    pass


class RegisterWriteError(Exception):
    """Exception raised when register write operation fails"""
    pass


class DTSOperationError(Exception):
    """Exception raised when DTS operation fails"""
    pass
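The custom exceptions above are meant to be caught at the controller layer and translated into the standardized error shape. A framework-free sketch of that mapping — the exception-to-status table (and the 502 for register failures in particular) is an assumption for illustration, and `error_payload` mirrors `create_error_response` without `jsonify`:

```python
from datetime import datetime


class PLCConnectionError(Exception):
    """Raised when the PLC connection fails"""


class RegisterReadError(Exception):
    """Raised when a register read operation fails"""


# Hypothetical mapping; the real controllers may choose different codes
ERROR_STATUS = {
    PLCConnectionError: ("Service Unavailable", 503),
    RegisterReadError: ("PLC Error", 502),
}


def error_payload(exc: Exception) -> tuple:
    """Build the same response shape as create_error_response, minus jsonify."""
    error_type, status = ERROR_STATUS.get(type(exc), ("Internal Server Error", 500))
    return {
        "success": False,
        "error": error_type,
        "message": str(exc),
        "timestamp": datetime.now().isoformat(),
    }, status


payload, status = error_payload(PLCConnectionError("Modbus TCP timeout"))
print(status)            # -> 503
print(payload["error"])  # -> Service Unavailable
```

Unmapped exception types fall through to a generic 500, matching the `internal_error` handler's behavior.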
@@ -1,45 +0,0 @@
"""
Centralized logging configuration for the application.
"""

import logging
import sys
from typing import Optional
from ..config import Config


def get_logger(name: str, level: Optional[str] = None) -> logging.Logger:
    """
    Get a configured logger instance.

    Args:
        name: Logger name (typically __name__)
        level: Optional log level override

    Returns:
        logging.Logger: Configured logger instance
    """
    logger = logging.getLogger(name)

    # Only configure if not already configured
    if not logger.handlers:
        # Set log level
        log_level = level or Config.LOG_LEVEL
        logger.setLevel(getattr(logging, log_level, logging.INFO))

        # Create console handler
        handler = logging.StreamHandler(sys.stdout)
        handler.setLevel(logger.level)

        # Create formatter
        formatter = logging.Formatter(Config.LOG_FORMAT)
        handler.setFormatter(formatter)

        # Add handler to logger
        logger.addHandler(handler)

        # Prevent duplicate logs from parent loggers
        logger.propagate = False

    return logger
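The `if not logger.handlers` guard is what makes `get_logger` safe to call from every module: repeated calls for the same name reuse the one configured handler instead of stacking duplicates. A self-contained sketch — `Config` lives in the application package, so the `LOG_LEVEL` and `LOG_FORMAT` stand-ins below are assumed values, not the real configuration:

```python
import logging
import sys
from typing import Optional

# Stand-ins for Config.LOG_LEVEL / Config.LOG_FORMAT (assumed values)
LOG_LEVEL = "INFO"
LOG_FORMAT = "%(asctime)s %(name)s %(levelname)s: %(message)s"


def get_logger(name: str, level: Optional[str] = None) -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:  # guard: configure each named logger only once
        logger.setLevel(getattr(logging, level or LOG_LEVEL, logging.INFO))
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter(LOG_FORMAT))
        logger.addHandler(handler)
        logger.propagate = False  # avoid duplicate output via the root logger
    return logger


a = get_logger("watermaker.demo")
b = get_logger("watermaker.demo")  # second call reuses the same handler
print(a is b)           # -> True
print(len(a.handlers))  # -> 1
```

`logging.getLogger` returns the same object for the same name, so the guard turns configuration into an idempotent operation.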