Spellchecking

deajan 2025-02-19 17:58:23 +01:00
parent 4705f915a8
commit 087ab80cbf
32 changed files with 106 additions and 116 deletions


@@ -124,7 +124,7 @@ Link: https://www.virustotal.com/gui/file/48c7d828f878638ccf8736f1b87bccd0f8e544
 Build type: Standalone
 Compiler: Nuitka 2.6.4 commercial
 Backend msvc 143
-Signes: Yes (EV Code signing certificate)
+Signed: Yes (EV Code signing certificate)
 Build target: npbackup-cli.exe
 Result: No security vendors flagged this file as malicious
 Link: https://www.virustotal.com/gui/file/50a12c697684194853e6e4ec1778f50f86798d426853c4e0a744f2a5c4d02def


@@ -6,7 +6,7 @@
 - This is a major rewrite that allows using multiple repositories, adds repository groups and implements repository settings inheritance from group settings
 #### Features
-- New viewer mode allowing to browse/restore restic repositories without any NPBackup configuation
+- New viewer mode allowing to browse/restore restic repositories without any NPBackup configuration
 - Allows setting repository via environment variables, config file or directly in GUI
 - Multi repository support
 - Group settings for repositories
@@ -46,10 +46,10 @@
 - Metrics now include auto upgrade state
 - Dry mode now works for all operations where restic supports dry-mode
 - Implemented scheduled task creator for Windows & Unix
-- Added --no-cache option to disable cache for restic operations (neeeded on RO systems)
+- Added --no-cache option to disable cache for restic operations (needed on RO systems)
 - Added CRC32 logging for config files in order to know when a file was modified
 - Missing exclude files will now search in current binary directory for a excludes directory
-- Splitted releases between legacy and non legacy
+- Split releases between legacy and non legacy
 - Updated legacy tcl8.6.13 to tc8.6.15
 - Updated legacy Python 3.7 to Python 3.9 (with openssl 1.1.1) for x64 linux builds
 - Upgrade server
@@ -72,7 +72,7 @@
 - Concurrency checks (pidfile checks) are now directly part of the runner
 - Allow a 30 seconds grace period for child processes to close before asking them nicely, and than not nicely to quit
 - Fully refactored prometheus metrics parser to be able to read restic standard or json outputs
-- Reimplmented auto upgrade after CLI/GUI split
+- Reimplemented auto upgrade after CLI/GUI split
 - Added initial tests
 - Exclude lists have been updated
 - Removed Windows installer from the project. We need to come up with a better solution
@@ -101,7 +101,7 @@
 ## 2.2.0 - rtm - 03/06/2023
 - Fix potential deadlock in evaluate variables
-- Fix additionnal parameters should only apply for backup operations
+- Fix additional parameters should only apply for backup operations
 - Fix unnecessary exclude statements when no exclusions are given
 - Added source types (--files-from, --files-from-verbatim and --files-from-raw equivalent)
 - Add encrypted environment variables support
@@ -195,7 +195,7 @@
 - Fix config fails when restic password is an int
 - Fix empty config files did not show a proper error message
 - Fix various config file malformation will break execution
-- Fix backup hangs when no restic password is given (restic asks for password in backgroud job)
+- Fix backup hangs when no restic password is given (restic asks for password in background job)
 - Fix error message in logs when repo is not initialized
 ## v2.2.0 - rc1 - 02/02/2023


@@ -24,10 +24,10 @@ For the rest of this manual, we'll assume the you use:
 - Intel: `/usr/local/bin/python3`
 - ARM: `/opt/homebrew/bin/python3`
-You may also use a python virtual environement (venv) to have a python "sub interpreter", but this is out of scope here too.
+You may also use a python virtual environment (venv) to have a python "sub interpreter", but this is out of scope here too.
 Once you've got yourself a working Python environment, you should download and extract the NPBackup sources (or clone the git). NPBackup has multiple python dependencies, which are stated in a file named `requirements.txt`.
-You can install them all toghether by running `python -m pip install -r path/to/requirements.txt` (please note that `path/to/requirements.txt` would give something like `C:\path\to\requirements` on Windows)
+You can install them all together by running `python -m pip install -r path/to/requirements.txt` (please note that `path/to/requirements.txt` would give something like `C:\path\to\requirements` on Windows)
 Examples:
 - On Windows: `C:\python310-64\python.exe -m pip install -r c:\npbackup\npbackup\requirements.txt`


@@ -207,7 +207,7 @@ While admin user experience is important, NPBackup also offers a GUI for end use
 `npbackup-cli` has all the functions the GUI has, and can run on any headless server.
 It also has a `--json` parameter which guarantees parseable output.
-You may run operations on multiple repositories, or repositories groups by specifying paramater `--repo-group` or `--repo-name`.
+You may run operations on multiple repositories, or repositories groups by specifying parameter `--repo-group` or `--repo-name`.
 `--repo-name` allows to specify one or multiple comma separated repo names, also allows special `__all__` argument which selects all repositories.
 `--repo-group` allows to specify one or multiple comme separated repo group names, also allows special `__all__` argument which selects all groups.
@@ -312,7 +312,7 @@ python.exe -c "from windows_tools.signtool import SignTool; s=SignTool(); s.sign
 ## Misc
 NPBackup supports internationalization and automatically detects system's locale.
-Still, locale can be overrided via an environment variable, eg on Linux:
+Still, locale can be overridden via an environment variable, eg on Linux:
 ```
 export NPBACKUP_LOCALE=en-US
 ```


@@ -1,4 +1,4 @@
-## List of various restic problems encountered while developping NPBackup
+## List of various restic problems encountered while developing NPBackup
 As of 2024/01/02, version 0.16.2:


@@ -1,7 +1,7 @@
 ## What's planned / considered post v3
 ### Daemon mode (planned)
-Instead of relying on scheduled tasks, we could launch backup & housekeeping operations as deamon.
+Instead of relying on scheduled tasks, we could launch backup & housekeeping operations as daemon.
 Caveats:
 - We need a windows service (nuitka commercial implements one)
 - We need to use apscheduler (wait for v4)
@@ -40,7 +40,7 @@ We actually could improve upgrade_server to do so
 ### Hyper-V Backup plugin
 That's another story. Creating snapshots and dumping VM is easy
-Shall we go that route since alot of good commercial products exist ? Probably not
+Shall we go that route since a lot of good commercial products exist ? Probably not
 ### Full disk cloning
 Out of scope of NPBackup. There are plenty of good tools out there, designed for that job


@@ -45,7 +45,7 @@ Using `--show-config` should hide sensitive data, and manager password.
 # NPF-SEC-00009: Option to show sensitive data
-When using `--show-config` or right click `show unecrypted`, we should only show unencrypted config if password is set.
+When using `--show-config` or right click `show unencrypted`, we should only show unencrypted config if password is set.
 Environment variable `NPBACKUP_MANAGER_PASSWORD` will be read to verify access, or GUI may ask for password.
 Also, when wrong password is entered, we should wait in order to reduce brute force attacks.
@@ -62,6 +62,6 @@ Using obfuscation() symmetric function in order to not store the bare AES key.
 The PRIVATE directory might contain alternative AES keys and obfuscation functions which should never be bundled for a PyPI release.
-# NPF-SEC-00013: Don't leave encrypted envrionment variables for script usage
+# NPF-SEC-00013: Don't leave encrypted environment variables for script usage
 Sensitive environment variables aren't available for scripts / additional parameters and will be replaced by a given string from __env__.py


@@ -65,7 +65,7 @@ venv/bin/python upgrade_server/upgrade_server.py -c /etc/npbackup_upgrade_server
 ## Create a service
-You can create a systemd service for the upgrade server as `/etc/systemd/system/npbackup_upgrade_server.service`, see the systemd file in the example directoy.
+You can create a systemd service for the upgrade server as `/etc/systemd/system/npbackup_upgrade_server.service`, see the systemd file in the example directory.
 ## Statistics


@@ -380,7 +380,7 @@ def compile(
 print(f"ERROR: Could not sign: {output}")
 errors = True
 elif os.path.isfile(ev_cert_data):
-print(f"Signing with interal signer {ev_cert_data}")
+print(f"Signing with internal signer {ev_cert_data}")
 sign(
 executable=npbackup_executable,
 arch=arch,


@@ -104,7 +104,7 @@
 "type": "prometheus",
 "uid": "${DS_MIMIR}"
 },
-"description": "Number of succesful npbackup executions",
+"description": "Number of successful npbackup executions",
 "fieldConfig": {
 "defaults": {
 "color": {


@@ -15,7 +15,7 @@
 "type": "grafana",
 "id": "grafana",
 "name": "Grafana",
-"version": "11.4.0"
+"version": "11.5.1"
 },
 {
 "type": "panel",
@@ -158,7 +158,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -248,7 +248,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -274,7 +274,7 @@
 "type": "prometheus",
 "uid": "${DS_MIMIR}"
 },
-"description": "Number of succesful npbackup executions",
+"description": "Number of successful npbackup executions",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -338,7 +338,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -362,7 +362,7 @@
 "type": "prometheus",
 "uid": "${DS_MIMIR}"
 },
-"description": "Number of succesful npbackup executions",
+"description": "Number of successful backup backend executions",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -426,7 +426,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -450,7 +450,7 @@
 "type": "prometheus",
 "uid": "${DS_MIMIR}"
 },
-"description": "Number of succesful npbackup executions",
+"description": "Number of failed backup backend executions",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -510,7 +510,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -590,7 +590,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -672,7 +672,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -754,7 +754,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -836,7 +836,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -918,7 +918,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1000,7 +1000,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1082,7 +1082,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1164,7 +1164,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1236,11 +1236,12 @@
 "values": false
 },
 "tooltip": {
+"hideZeros": false,
 "mode": "single",
 "sort": "none"
 }
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1328,7 +1329,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1415,7 +1416,7 @@
 "textMode": "auto",
 "wideLayout": true
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1502,7 +1503,7 @@
 }
 ]
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1615,8 +1616,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "dark-red",
-"value": null
+"color": "dark-red"
 }
 ]
 },
@@ -1650,7 +1650,7 @@
 }
 ]
 },
-"pluginVersion": "11.4.0",
+"pluginVersion": "11.5.1",
 "targets": [
 {
 "datasource": {
@@ -1776,8 +1776,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "dark-red",
-"value": null
+"color": "dark-red"
 }
 ]
 },
@@ -1927,8 +1926,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "dark-red",
-"value": null
+"color": "dark-red"
 }
 ]
 },
@@ -2092,8 +2090,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2199,8 +2196,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2306,8 +2302,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2412,8 +2407,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2518,8 +2512,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2662,8 +2655,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2804,8 +2796,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -2909,8 +2900,7 @@
 "mode": "absolute",
 "steps": [
 {
-"color": "green",
-"value": null
+"color": "green"
 },
 {
 "color": "red",
@@ -3065,8 +3055,8 @@
 },
 "timepicker": {},
 "timezone": "",
-"title": "NPBackup v3 20250211",
+"title": "Sauvegardes NPBackup v3 20250211",
 "uid": "XNGJDIgRx",
-"version": 27,
+"version": 28,
 "weekStart": ""
 }


@@ -21,7 +21,7 @@ NPBACKUP_CONF_FILE_TEMPLATE="${ROOT_DIR}/npbackup-cube.conf.template"
 NPBACKUP_CONF_FILE="${ROOT_DIR}/npbackup-cube.conf"
 SNAPSHOT_FAILED_FILE="${ROOT_DIR}/SNAPSHOT_FAILED"
-# Superseed tenants if this is set, else it is extracted from machine name, eg machine.tenant.something
+# Supersede tenants if this is set, else it is extracted from machine name, eg machine.tenant.something
 # TENANT_OVERRIDE=netperfect
 # default tenant if extraction of tenant name failed
 DEFAULT_TENANT=netperfect
@@ -173,7 +173,7 @@ function remove_snapshot {
 qemu-img commit -dp "$disk_path" >> "$LOG_FILE" 2>&1
 log "Note that you will need to modify the XML manually"
-# virsh snapshot delete will erase commited file if exist so we don't need to manually tamper with xml file
+# virsh snapshot delete will erase committed file if exist so we don't need to manually tamper with xml file
 virsh snapshot-delete --current $vm
 # TODO: test2
 #virsh dumpxml --inactive --security-info "$vm" > "${ROOT_DIR}/$vm.xml.temp"
@@ -248,7 +248,7 @@ function main {
 # Make sure we remove snapshots no matter what
 trap 'cleanup' INT HUP TERM QUIT ERR EXIT
-log "#### Make sure all template variables are encypted"
+log "#### Make sure all template variables are encrypted"
 "${NPBACKUP_EXECUTABLE}" -c "${NPBACKUP_CONF_FILE_TEMPLATE}" --check-config-file
 log "#### Running backup `date`"


@@ -9,7 +9,7 @@ backup:
 - excludes/generic_excludes
 #- excludes/windows_excludes
 - excludes/linux_excludes
-exclude_case_ignore: false # Exclusions will always have case ignored on Windows systems regarless of this setting
+exclude_case_ignore: false # Exclusions will always have case ignored on Windows systems regardless of this setting
 one_file_system: true
 ## Paths can contain multiple values, one per line, without quotation marks
 paths: path_to_directory


@@ -9,7 +9,7 @@ backup:
 - excludes/generic_excludes
 - excludes/windows_excludes
 #- excludes/linux_excludes
-exclude_case_ignore: false # Exclusions will always have case ignored on Windows systems regarless of this setting
+exclude_case_ignore: false # Exclusions will always have case ignored on Windows systems regardless of this setting
 one_file_system: true
 ## Paths can contain multiple values, one per line, without quotation marks
 paths: path_to_directory


@@ -151,7 +151,7 @@
 ?:\Users\*\AppData\LocalLow
 ?:\Users\*\Tracing
-# Generic system file exlusions
+# Generic system file exclusions
 **\MSOCache
 **\MSOCache.*
 **\Config.Msi


@@ -8,7 +8,7 @@ __author__ = "Orsiris de Jong"
 __site__ = "https://www.netperfect.fr/npbackup"
 __description__ = "NetPerfect Backup Client"
 __copyright__ = "Copyright (C) 2023-2025 NetInvent"
-__build__ = "2024101201"
+__build__ = "2025021901"
 import sys
@@ -23,7 +23,7 @@ import json
 logger = getLogger()
-# If set, debugging will be enabled by setting envrionment variable to __SPECIAL_DEBUG_STRING content
+# If set, debugging will be enabled by setting environment variable to __SPECIAL_DEBUG_STRING content
 # Else, a simple true or false will suffice
 __SPECIAL_DEBUG_STRING = ""
 __debug_os_env = os.environ.get("_DEBUG", "False").strip("'\"")
@@ -57,7 +57,7 @@ _NPBACKUP_ALLOW_AUTOUPGRADE_DEBUG = (
 def exception_to_string(exc):
 """
-Transform a catched exception to a string
+Transform a caught exception to a string
 https://stackoverflow.com/a/37135014/2635443
 """
 stack = traceback.extract_stack()[:-3] + traceback.extract_tb(
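The hunk above only fixes the docstring of `exception_to_string`, a helper that renders a caught exception (with its traceback) as a string, following the Stack Overflow answer it links. As a hedged, stdlib-only sketch of the same idea — not the project's exact implementation, which splices `extract_stack` and `extract_tb`:

```python
import traceback

def exception_to_string(exc: BaseException) -> str:
    # Render the exception type, message and traceback as a single string
    return "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    ).strip()

try:
    1 / 0
except ZeroDivisionError as caught:
    text = exception_to_string(caught)
```

Here `text` ends with the `ZeroDivisionError` line and starts with the usual `Traceback` header, which is handy for logging a caught exception in one log record.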


@@ -402,7 +402,7 @@ This is free software, and you are welcome to redistribute it under certain cond
 json_error_logging(False, msg, "critical")
 sys.exit(71)
-# This must be run before any other command since it's the way we're checking succesful upgrade processes
+# This must be run before any other command since it's the way we're checking successful upgrade processes
 # So any pre-upgrade process command shall be bypassed when this is executed
 if args.check_config_file:
 json_error_logging(True, "Config file seems valid", "info")


@@ -23,7 +23,7 @@ from npbackup.core.nuitka_helper import IS_COMPILED
 # Python 3.7 versions are considered legacy since they don't support msgspec
-# Since developpment currently follows Python 3.12, let's consider anything below 3.12 as legacy
+# Since development currently follows Python 3.12, let's consider anything below 3.12 as legacy
 IS_LEGACY = True if sys.version_info[1] < 12 else False
 try:


@@ -593,7 +593,7 @@ def get_repo_config(
 """
 Create inherited repo config
 Returns a dict containing the repo config, with expanded variables
-and a dict containing the repo interitance status
+and a dict containing the repo inheritance status
 """
 def inherit_group_settings(
@@ -720,7 +720,7 @@ def get_repo_config(
 else:
 _config_inheritance.g(key)[v] = False
 else:
-# In other cases, just keep repo confg
+# In other cases, just keep repo config
 _config_inheritance.s(key, False)
 return _repo_config, _config_inheritance
@@ -859,7 +859,7 @@ def load_config(config_file: Path) -> Optional[dict]:
 config_file_is_updated = False
 # Make sure we expand every key that should be a list into a list
-# We'll use iter_over_keys instead of replace_in_iterable to avoid chaning list contents by lists
+# We'll use iter_over_keys instead of replace_in_iterable to avoid changing list contents by lists
 # This basically allows "bad" formatted (ie manually written yaml) to be processed correctly
 # without having to deal with various errors
 def _make_struct(key: str, value: Union[str, int, float, dict, list]) -> Any:
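The hunks above sit inside `get_repo_config`, which the changelog describes as building a repo config that inherits from group settings. A toy illustration of the merge direction only (explicit repo values win over group defaults), using hypothetical keys and assuming nothing about the real implementation:

```python
def inherit_group_settings(group: dict, repo: dict) -> dict:
    # Start from group defaults, then let explicit repo settings override them,
    # recursing into nested sections so partial overrides keep group siblings
    merged = dict(group)
    for key, value in repo.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = inherit_group_settings(merged[key], value)
        else:
            merged[key] = value
    return merged

group = {"backup_opts": {"compression": "auto", "one_file_system": True}}
repo = {"backup_opts": {"compression": "max"}}
merged = inherit_group_settings(group, repo)
```

With these sample dicts, the repo keeps its own `compression` while inheriting `one_file_system` from the group.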


@@ -221,7 +221,7 @@ class NPBackupRunner:
 self._live_output = False
 self._json_output = False
 # struct_output is msgspec.Struct instead of json, which is less memory consuming
-# struct_output neeeds json_output to be True
+# struct_output needs json_output to be True
 self._struct_output = False
 self._binary = None
 self._no_cache = False
@@ -551,7 +551,7 @@ class NPBackupRunner:
 }
 try:
 # When running group_runner, we need to extract operation from kwargs
-# else, operarion is just the wrapped function name
+# else, operation is just the wrapped function name
 # pylint: disable=E1101 (no-member)
 if fn.__name__ == "group_runner":
 operation = kwargs.get("operation")
@@ -686,7 +686,7 @@ class NPBackupRunner:
 js = {
 "result": False,
 "operation": operation,
-"reason": f"Runner catched exception: {exception_to_string(exc)}",
+"reason": f"Runner caught exception: {exception_to_string(exc)}",
 }
 return js
 return False
@@ -804,7 +804,7 @@ class NPBackupRunner:
 except KeyError:
 pass
 except ValueError:
-self.write_logs("Bogus backend connections value given.", level="erorr")
+self.write_logs("Bogus backend connections value given.", level="error")
 try:
 if self.repo_config.g("backup_opts.priority"):
 self.restic_runner.priority = self.repo_config.g("backup_opts.priority")
@@ -1490,7 +1490,7 @@ class NPBackupRunner:
 self.write_logs(msg, level="critical")
 return self.convert_to_json_output(False, msg)
-# Build policiy from config
+# Build policy from config
 policy = {}
 for entry in ["last", "hourly", "daily", "weekly", "monthly", "yearly"]:
 value = self.repo_config.g(f"repo_opts.retention_policy.{entry}")
@@ -1551,7 +1551,7 @@ class NPBackupRunner:
 Runs unlock, check, forget and prune in one go
 """
 self.write_logs("Running housekeeping", level="info")
-# Add special keywords __no_threads since we're already threaded in housekeeping function
+# Add special keywords __no_threads since we're already threaded in housekeeping function
 # Also, pass it as kwargs to make linter happy
 kwargs = {"__no_threads": True, "__close_queues": False}
 # pylint: disable=E1123 (unexpected-keyword-arg)
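The retention hunk above iterates `last/hourly/daily/weekly/monthly/yearly` to build a policy dict from `repo_opts.retention_policy`. restic consumes such a policy through its `forget --keep-last/--keep-hourly/...` flags; a hedged sketch of that translation, with made-up values in place of the real config lookup:

```python
# Hypothetical retention values; the real ones come from repo_opts.retention_policy
retention = {"last": 3, "hourly": 0, "daily": 7, "weekly": 4, "monthly": 12, "yearly": 2}

flags = []
for entry in ["last", "hourly", "daily", "weekly", "monthly", "yearly"]:
    value = retention.get(entry)
    if value:  # skip unset or zero entries so restic does not receive empty policies
        flags += [f"--keep-{entry}", str(value)]
```

The resulting `flags` list could be appended to a `restic forget` invocation; zero-valued entries (here `hourly`) produce no flag at all.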


@@ -46,12 +46,12 @@ def need_upgrade(upgrade_interval: int) -> bool:
 def _get_count(file: str) -> Optional[int]:
 try:
-with open(file, "r", encoding="utf-8") as fpr:
-count = int(fpr.read())
+with open(file, "r", encoding="utf-8") as fp:
+count = int(fp.read())
 return count
 except OSError as exc:
 # We may not have read privileges
-logger.eror(f"Cannot read upgrade counter file {file}: {exc}")
+logger.error(f"Cannot read upgrade counter file {file}: {exc}")
 except ValueError as exc:
 logger.error(f"Bogus upgrade counter in {file}: {exc}")
 return None


@@ -201,7 +201,7 @@ def _make_treedata_from_json(ls_result: List[dict]) -> sg.TreeData:
 {'name': 'xmfbox.tcl', 'type': 'file', 'path': '/C/GIT/npbackup/npbackup.dist/tk/xmfbox.tcl', 'uid': 0, 'gid': 0, 'size': 27064, 'mode': 438, 'permissions': '-rw-rw-rw-', 'mtime': '2022-09-05T14:18:52+02:00', 'atime': '2022-09-05T14:18:52+02:00', 'ctime': '2022-09-05T14:18:52+02:00', 'struct_type': 'node'}
 ]
-Since v3-rc6, we're actually using a msgspec.Struct represenation which uses dot notation, but only on Python 3.8+
+Since v3-rc6, we're actually using a msgspec.Struct representation which uses dot notation, but only on Python 3.8+
 We still rely on json for Python 3.7
 """
 treedata = sg.TreeData()
@@ -674,7 +674,7 @@ def _main_gui(viewer_mode: bool):
 "%Y-%m-%d %H:%M:%S"
 )
 else:
-snapshot_date = "Unparseable"
+snapshot_date = "Unparsable"
 snapshot_username = snapshot["username"]
 snapshot_hostname = snapshot["hostname"]
 snapshot_id = snapshot["short_id"]


@@ -79,7 +79,7 @@ def ask_manager_password(manager_password: str) -> bool:
 def config_gui(full_config: dict, config_file: str):
 logger.info("Launching configuration GUI")
-# Don't let SimpleGUI handle key errros since we might have new keys in config file
+# Don't let SimpleGUI handle key errors since we might have new keys in config file
 sg.set_options(
 suppress_raise_key_errors=True,
 suppress_error_popups=True,
@@ -588,7 +588,7 @@ def config_gui(full_config: dict, config_file: str):
 # First we need to clear the whole GUI to reload new values
 for key in window.AllKeysDict:
-# We only clear config keys, wihch have '.' separator
+# We only clear config keys, which have '.' separator
 if "." in str(key) and not "inherited" in str(key):
 if isinstance(window[key], sg.Tree):
 window[key].Update(sg.TreeData())
@@ -909,7 +909,7 @@ def config_gui(full_config: dict, config_file: str):
 ],
 ]
-# We need to set current_manager_password variable to make sure we have sufficient permissions to modifiy settings
+# We need to set current_manager_password variable to make sure we have sufficient permissions to modify settings
 full_config.s(
 f"{object_type}.{object_name}.current_manager_password",
 full_config.g(f"{object_type}.{object_name}.manager_password"),
@@ -1989,7 +1989,7 @@ Google Cloud storage: GOOGLE_PROJECT_ID GOOGLE_APPLICATION_CREDENTIALS\n\
 def global_options_layout():
 """ "
-Returns layout for global options that can't be overrided by group / repo settings
+Returns layout for global options that can't be overridden by group / repo settings
 """
 identity_col = [
 [sg.Text(_t("config_gui.available_variables_id"))],


@@ -318,7 +318,7 @@ def gui_thread_runner(
 read_queues = read_stdout_queue or read_stderr_queue
 if not read_queues:
-# Arbitrary wait time so window get's time to get fully drawn
+# Arbitrary wait time so window gets time to get fully drawn
 sleep(0.2)
 break


@@ -191,7 +191,7 @@ def restic_json_to_prometheus(
 found = True
 break
 if not found:
-logger.critical("Bogus data given. No message_type: summmary found")
+logger.critical("Bogus data given. No message_type: summary found")
 return False, [], True
 if not isinstance(restic_json, dict):


@@ -135,7 +135,7 @@ class ResticRunner:
 for env_variable, value in self.environment_variables.items():
 self.write_logs(
-f'Setting envrionment variable "{env_variable}"', level="debug"
+f'Setting environment variable "{env_variable}"', level="debug"
 )
 os.environ[env_variable] = value
@@ -144,7 +144,7 @@ class ResticRunner:
 value,
 ) in self.encrypted_environment_variables.items():
 self.write_logs(
-f'Setting encrypted envrionment variable "{encrypted_env_variable}"',
+f'Setting encrypted environment variable "{encrypted_env_variable}"',
 level="debug",
 )
 os.environ[encrypted_env_variable] = value
@@ -359,7 +359,7 @@ class ResticRunner:
 self.write_logs(
 "Running in dry mode. No modifications will be done", level="info"
 )
-# Replace first occurence of possible operation
+# Replace first occurrence of possible operation
 cmd = cmd.replace(operation, f"{operation} --dry-run", 1)
 _cmd = f'"{self._binary}"{additional_parameters}{self.generic_arguments} {cmd}'
@@ -607,7 +607,7 @@ class ResticRunner:
 Init repository. Let's make sure we always run in JSON mode so we don't need
 horrendous regexes to find whether initialized
---json output when inializing:
+--json output when initializing:
 {"message_type":"initialized","id":"8daef59e2ac4c86535ae3f7414fcac6534f270077176af3ebddd34c364cac3c2","repository":"c:\\testy"}
 --json output when already initialized (is not json !!!)
 """
@@ -1345,13 +1345,13 @@ class ResticRunner:
 snapshot_list: List, delta: int = None
 ) -> Tuple[bool, Optional[datetime]]:
 """
-Making the actual comparaison a static method so we can call it from GUI too
+Making the actual comparison a static method so we can call it from GUI too
 Expects a restic snasphot_list (which is most recent at the end ordered)
 Returns bool if delta (in minutes) is not reached since last successful backup, and returns the last backup timestamp
 """
 backup_ts = datetime(1, 1, 1, 0, 0)
-# Don't bother to deal with mising delta or snapshot list
+# Don't bother to deal with missing delta or snapshot list
 if not snapshot_list or not delta:
 return False, backup_ts
 tz_aware_timestamp = datetime.now(timezone.utc).astimezone()
@@ -1376,7 +1376,7 @@ class ResticRunner:
 """
 Checks if a snapshot exists that is newer that delta minutes
 Eg: if delta = -60 we expect a snapshot newer than an hour ago, and return True if exists
-if delta = +60 we expect a snpashot newer than one hour in future (!)
+if delta = +60 we expect a snapshot newer than one hour in future (!)
 returns True, datetime if exists
 returns False, datetime if exists but too old
@@ -1386,7 +1386,7 @@ class ResticRunner:
 kwargs = locals()
 kwargs.pop("self")
-# Don't bother to deal with mising delta
+# Don't bother to deal with missing delta
 if not delta:
 if self.json_output:
 msg = "No delta given"


@@ -62,7 +62,7 @@ def entrypoint(*args, **kwargs):
 )
 if not json_output:
 if not isinstance(result, bool):
-# We need to temprarily remove the stdout handler
+# We need to temporarily remove the stdout handler
 # Since we already get live output from the runner
 # Unless operation is "ls", because it's too slow for command_runner poller method that allows live_output
 # But we still need to log the result to our logfile


@@ -308,8 +308,8 @@ def create_scheduled_task_windows(
 </Exec>
 </Actions>
 </Task>"""
-# Create task file, without specific encoding in order to use platform prefered encoding
-# platform prefered encoding is locale.getpreferredencoding() (cp1252 on windows, utf-8 on linux)
+# Create task file, without specific encoding in order to use platform preferred encoding
+# platform preferred encoding is locale.getpreferredencoding() (cp1252 on windows, utf-8 on linux)
 try:
 # pylint: disable=W1514 (unspecified-encoding)
 with open(temp_task_file, "w") as file_handle:


@@ -28,7 +28,7 @@ en:
 one_per_line: one per line
 backup_priority: Backup priority
 additional_parameters: Additional parameters
-additional_backup_only_parameters: Additional backup only parmameters
+additional_backup_only_parameters: Additional backup only parameters
 minimum_backup_age: Minimum delay between two backups
 backup_repo_uri: backup repo URI / path
@@ -55,7 +55,7 @@ en:
 saved_initial_config: If you saved your configuration, you may now reload this program
 bogus_config_file: Bogus configuration file found
-encrypted_env_variables: Encrypted envrionment variables
+encrypted_env_variables: Encrypted environment variables
 env_variables: Environment variables
 no_runner: Cannot connect to backend. Please see logs
@@ -85,7 +85,7 @@ en:
 create_backup_scheduled_task_at: Create scheduled backup task every day at
 create_housekeeping_scheduled_task_at: Create housekeeping scheduled every day at
 scheduled_task_explanation: Task can run at a given time to run a backup which is great to make server backups, or run every x minutes, but only run actual backup when more than maximum_backup_age minutes was reached, which is the best way to backup laptops which have flexible power on hours.
-scheduled_task_creation_success: Scheduled task created successfuly
+scheduled_task_creation_success: Scheduled task created successfully
 scheduled_task_creation_failure: Scheduled task could not be created. See logs for further info
 machine_identification: Machine identification
@@ -93,7 +93,7 @@ en:
 machine_group: Machine group
 show_decrypted: Show sensitive data
-no_manager_password_defined: No manager password defined, cannot show unencrypted. If you just set one, you need to save the confiugration before you can use it
+no_manager_password_defined: No manager password defined, cannot show unencrypted. If you just set one, you need to save the configuration before you can use it
 # compression
 auto: Automatic
@@ -114,7 +114,7 @@ en:
 stdin_from_command: Standard input from command
 stdin_filename: Optional filename for stdin backed up data
-# retention policiy
+# retention policy
 retention_policy: Retention policy
 keep: Keep
 last: last snapshots


@@ -24,7 +24,7 @@ en:
 old: Old
 up_to_date: Up to date
 unknown: Unknown
-not_connected_yet: Not conntected to repo
+not_connected_yet: Not connected to repo
 size: Size
 path: Path
@@ -40,7 +40,7 @@ en:
 is_uptodate: Program Up to date
-succes: Succes
+succes: Success
 successfully: successfully
 failure: Failure


@@ -36,7 +36,7 @@ def check_private_ev():
 from PRIVATE._ev_data import AES_EV_KEY
 from PRIVATE._obfuscation import obfuscation
-print("We have private EV certifcate DATA")
+print("We have private EV certificate DATA")
 return obfuscation(AES_EV_KEY)
 except ImportError as exc:
 print("ERROR: Cannot load private EV certificate DATA: {}".format(exc))


@@ -22,7 +22,7 @@ import json
 logger = getLogger()
-# If set, debugging will be enabled by setting envrionment variable to __SPECIAL_DEBUG_STRING content
+# If set, debugging will be enabled by setting environment variable to __SPECIAL_DEBUG_STRING content
 # Else, a simple true or false will suffice
 __SPECIAL_DEBUG_STRING = ""
 __debug_os_env = os.environ.get("_DEBUG", "False").strip("'\"")