* Sample solution.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update modules/config.py

Increase efficiency by using a boolean in place of a functional check.
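
A minimal sketch of the pattern, with hypothetical names (not the actual attributes or checks in modules/config.py): evaluate the check once, store the result as a boolean, and read that attribute later instead of re-running a functional check.

```python
# Hypothetical illustration only -- names are not taken from modules/config.py.
class Settings:
    def __init__(self, data: dict):
        # Evaluate the (potentially expensive) functional check a single time...
        self.feature_enabled: bool = bool(data.get("settings", {}).get("feature", False))

    def process(self, items: list) -> list:
        # ...then read the cached boolean in the hot loop instead of calling
        # a check function for every item.
        return [item for item in items if self.feature_enabled]
```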

Co-authored-by: bobokun <12660469+bobokun@users.noreply.github.com>

* Bump softprops/action-gh-release from 1 to 2

Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/softprops/action-gh-release/compare/v1...v2)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update group_upload_speed logic

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Bump dependabot/fetch-metadata from 1.6.0 to 2.0.0

Bumps [dependabot/fetch-metadata](https://github.com/dependabot/fetch-metadata) from 1.6.0 to 2.0.0.
- [Release notes](https://github.com/dependabot/fetch-metadata/releases)
- [Commits](https://github.com/dependabot/fetch-metadata/compare/v1.6.0...v2.0.0)

---
updated-dependencies:
- dependency-name: dependabot/fetch-metadata
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump pre-commit from 3.6.2 to 3.7.0

Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 3.6.2 to 3.7.0.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v3.6.2...v3.7.0)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/hhatto/autopep8: v2.0.4 → v2.1.0](https://github.com/hhatto/autopep8/compare/v2.0.4...v2.1.0)
- [github.com/asottile/pyupgrade: v3.15.1 → v3.15.2](https://github.com/asottile/pyupgrade/compare/v3.15.1...v3.15.2)
- [github.com/psf/black: 24.2.0 → 24.3.0](https://github.com/psf/black/compare/24.2.0...24.3.0)

* Bump qbittorrent-api from 2024.2.59 to 2024.3.60

Bumps [qbittorrent-api](https://github.com/rmartin16/qbittorrent-api) from 2024.2.59 to 2024.3.60.
- [Release notes](https://github.com/rmartin16/qbittorrent-api/releases)
- [Changelog](https://github.com/rmartin16/qbittorrent-api/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rmartin16/qbittorrent-api/compare/v2024.2.59...v2024.3.60)

---
updated-dependencies:
- dependency-name: qbittorrent-api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump 4.0.10

* Revert "Add Aggregate Group Speed Capabilities"

* Bump gitpython from 3.1.42 to 3.1.43

Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.42 to 3.1.43.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.42...3.1.43)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add new args for mover.old

* Revert "Revert "Add Aggregate Group Speed Capabilities""

This reverts commit f112b2c92e.
Fixes a bug in the original commit's code

* Remove username and password from debug logs since they are redacted anyway

* Adds support for #513

* Fixes #501

* Fixes bug where torrents deleted after meeting share_limits were not being logged

* Adds additional error checking for invalid share_limits in config

* Fixes bug where the min_seeding_time_tag was not being removed from torrents

* Fixes bug in #494

* Bi-directional Wiki Sync Action

* Bump actions/checkout from 2 to 4

Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Fixes bug where share limits were being applied on every run

* Revert "Revert "Revert "Add Aggregate Group Speed Capabilities"""

This reverts commit 41d5170ba6.

* Fixes #494 with new enable_group_upload_speed flag

* Fixes bug found in #494

* 4.1.0

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Paul Walker <walkerp1@yahoo.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: bakerboy448 <55419169+bakerboy448@users.noreply.github.com>
bobokun 2024-04-05 20:39:00 -04:00 committed by GitHub
parent 29690e9581
commit eccc96e388
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
16 changed files with 193 additions and 61 deletions

View file

@@ -19,7 +19,7 @@ jobs:
# will not occur.
- name: Dependabot metadata
id: dependabot-metadata
uses: dependabot/fetch-metadata@v1.6.0
uses: dependabot/fetch-metadata@v2.0.0
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
# Here the PR gets approved.

.github/workflows/docs.yml (vendored, new file, 50 additions)
View file

@@ -0,0 +1,50 @@
name: Documentation
on:
push:
branches:
- develop
paths:
- "docs/**"
repository_dispatch:
types: [docs]
gollum:
env:
GIT_AUTHOR_NAME: Actionbot
GIT_AUTHOR_EMAIL: actions@github.com
jobs:
job-sync-docs-to-wiki:
runs-on: ubuntu-latest
if: github.event_name != 'gollum'
steps:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Sync docs to wiki
uses: newrelic/wiki-sync-action@main
with:
source: docs
destination: wiki
token: ${{ secrets.PAT }}
gitAuthorName: ${{ env.GIT_AUTHOR_NAME }}
gitAuthorEmail: ${{ env.GIT_AUTHOR_EMAIL }}
job-sync-wiki-to-docs:
runs-on: ubuntu-latest
if: github.event_name == 'gollum'
steps:
- name: Checkout Repo
uses: actions/checkout@v4
with:
token: ${{ secrets.PAT }} # allows us to push back to repo
ref: develop
- name: Sync Wiki to Docs
uses: newrelic/wiki-sync-action@main
with:
source: wiki
destination: docs
token: ${{ secrets.PAT }}
gitAuthorName: ${{ env.GIT_AUTHOR_NAME }}
gitAuthorEmail: ${{ env.GIT_AUTHOR_EMAIL }}
branch: develop

View file

@@ -61,7 +61,7 @@ jobs:
ghcr.io/${{ env.OWNER_LC }}/qbit_manage:${{ steps.get_version.outputs.VERSION }}
- name: Create release
id: create_release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
body_path: CHANGELOG
token: ${{ secrets.PAT }}

View file

@@ -17,7 +17,7 @@ repos:
- id: pretty-format-json
args: [--autofix, --indent, '4', --no-sort-keys]
- repo: https://github.com/hhatto/autopep8
rev: v2.0.4
rev: v2.1.0
hooks:
- id: autopep8
- repo: https://github.com/adrienverge/yamllint.git
@@ -38,12 +38,12 @@ repos:
name: isort (python)
args: [--force-single-line-imports, --profile, black]
- repo: https://github.com/asottile/pyupgrade
rev: v3.15.1
rev: v3.15.2
hooks:
- id: pyupgrade
args: [--py3-plus]
- repo: https://github.com/psf/black
rev: 24.2.0
rev: 24.3.0
hooks:
- id: black
language_version: python3

View file

@@ -1,22 +1,17 @@
# Requirements Updated
- qbittorrent-api==2024.2.59
- GitPython==3.1.42
- ruamel.yaml==0.18.6
- qbittorrent-api==2024.3.60
- GitPython==3.1.43
# New Features
- Adds support for filtering more than just Completed torrents. Closes [#115](https://github.com/StuffAnThings/qbit_manage/issues/115)
- Updates mover script (Add check if file is still on cache mount #493)
- Adds support for ghcr.io container registry
- Adds support for custom [share_limits/cross-seed tags](https://github.com/StuffAnThings/qbit_manage/commit/9f8be69a4f2680501d492a8c7148969ae5ac5b72#diff-e5794b6d2186004aa3ee69cd4dee7bbd48d8e0edd9f1da90d03393ec28cbf912) Closes [#457](https://github.com/StuffAnThings/qbit_manage/issues/457)
- Added Group Upload Speed to share limits config to apply upload speed limits at a group level. Closes [#494](https://github.com/StuffAnThings/qbit_manage/issues/494)
- Added Force Retag All in config settings to force retagging of all torrents. Closes [#513](https://github.com/StuffAnThings/qbit_manage/issues/513)
- Added `--mover-old` option to the mover script for users of the Mover Tuning Plugin
# Bug Fixes
- Fixes [#359](https://github.com/StuffAnThings/qbit_manage/issues/359)
- Fixes [#479](https://github.com/StuffAnThings/qbit_manage/issues/479)
- Fixes [#487](https://github.com/StuffAnThings/qbit_manage/issues/487)
- Fixes [#488](https://github.com/StuffAnThings/qbit_manage/issues/488)
- Fixes [#490](https://github.com/StuffAnThings/qbit_manage/issues/490)
- Update script header so that env python3 is used.
- Fixes [#501](https://github.com/StuffAnThings/qbit_manage/issues/501)
- Adds additional error checking for invalid share limits defined in config
- Minor bug fixes in share limits
Special thanks to @NooNameR, @ShanaryS, @ext4xfs for their contributions!
**Full Changelog**: https://github.com/StuffAnThings/qbit_manage/compare/v4.0.8...v4.0.9
Special thanks to @walkerp1 for their contributions!
**Full Changelog**: https://github.com/StuffAnThings/qbit_manage/compare/v4.0.9...v4.1.0

View file

@@ -1 +1 @@
4.0.9
4.1.0

View file

@@ -43,6 +43,7 @@ settings:
cat_filter_completed: True # Filters for completed torrents only when running cat_update command
share_limits_filter_completed: True # Filters for completed torrents only when running share_limits command
tag_nohardlinks_filter_completed: True # Filters for completed torrents only when running tag_nohardlinks command
force_retag_all: false # Forces all torrents to be updated on every run (may cause performance issues). This does not remove any previous tags assigned
directory:
# Do not remove these
# Cross-seed var: </your/path/here/> # Output directory of cross-seed
@@ -189,6 +190,7 @@ share_limits:
# Will default to -1 (no limit) if not specified for the group.
max_seeding_time: 129600
# <OPTIONAL> min_seeding_time <int>: Will prevent torrent deletion by the cleanup variable if the torrent has not yet reached the minimum seeding time (minutes).
# This should only be set if you are using this in conjunction with max_seeding_time and max_ratio. If you are not setting a max_ratio, then use max_seeding_time instead.
# If the torrent has not yet reached this minimum seeding time, it will change the share limits back to no limits and resume the torrent to continue seeding.
# Will default to 0 if not specified for the group.
min_seeding_time: 43200
@@ -198,6 +200,8 @@ share_limits:
last_active: 43200
# <OPTIONAL> Limit Upload Speed <int>: Will limit the upload speed KiB/s (KiloBytes/second) (`-1` : No Limit)
limit_upload_speed: 0
# <OPTIONAL> Enable Group Upload Speed <bool>: Upload speed limits are applied at the group level. This takes the limit_upload_speed defined and divides it equally among the torrents in the group.
enable_group_upload_speed: false
# <OPTIONAL> cleanup <bool>: WARNING!! Setting this to true will remove and delete the contents of any torrent that satisfies the share limits (max time OR max ratio)
cleanup: false
# <OPTIONAL> resume_torrent_after_change <bool>: This variable will resume your torrent after changing share limits. Default is true
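
The enable_group_upload_speed option above describes a simple division: the group's limit_upload_speed is split evenly across the torrents currently in the group. A minimal sketch of that arithmetic (hypothetical helper name; the real logic lives in the ShareLimits update shown later in this diff):

```python
# Hypothetical helper illustrating the even split described above.
def per_torrent_upload_limit(group_limit_kib: int, num_torrents: int) -> int:
    """Divide a group-level upload limit evenly among its torrents."""
    if group_limit_kib <= 0 or num_torrents == 0:
        return -1  # -1 means "no limit" in qBittorrent
    return round(group_limit_kib / num_torrents)

# Example: a 1000 KiB/s group limit shared by 4 torrents -> 250 KiB/s each.
print(per_torrent_upload_limit(1000, 4))  # 250
```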

View file

@@ -187,6 +187,9 @@ class Config:
"tag_nohardlinks_filter_completed": self.util.check_for_attribute(
self.data, "tag_nohardlinks_filter_completed", parent="settings", var_type="bool", default=True
),
"force_retag_all": self.util.check_for_attribute(
self.data, "force_retag_all", parent="settings", var_type="bool", default=False
),
}
self.tracker_error_tag = self.settings["tracker_error_tag"]
@@ -465,6 +468,16 @@ class Config:
do_print=False,
save=False,
)
self.share_limits[group]["enable_group_upload_speed"] = self.util.check_for_attribute(
self.data,
"enable_group_upload_speed",
parent="share_limits",
subparent=group,
var_type="bool",
default=False,
do_print=False,
save=False,
)
self.share_limits[group]["min_num_seeds"] = self.util.check_for_attribute(
self.data,
"min_num_seeds",
@@ -508,6 +521,25 @@ class Config:
save=False,
)
self.share_limits[group]["torrents"] = []
if (
self.share_limits[group]["min_seeding_time"] > 0
and self.share_limits[group]["min_seeding_time"] > self.share_limits[group]["max_seeding_time"]
):
err = (
f"Config Error: min_seeding_time ({self.share_limits[group]['min_seeding_time']}) is greater than "
f"max_seeding_time ({self.share_limits[group]['max_seeding_time']}) for the grouping '{group}'.\n"
f"min_seeding_time must be less than or equal to max_seeding_time."
)
self.notify(err, "Config")
raise Failed(err)
if self.share_limits[group]["min_seeding_time"] > 0 and self.share_limits[group]["max_ratio"] <= 0:
err = (
f"Config Error: min_seeding_time ({self.share_limits[group]['min_seeding_time']}) is set, "
f"but max_ratio ({self.share_limits[group]['max_ratio']}) is not set for the grouping '{group}'.\n"
f"max_ratio must be greater than 0 when min_seeding_time is set."
)
self.notify(err, "Config")
raise Failed(err)
else:
if self.commands["share_limits"]:
err = "Config Error: share_limits. No valid grouping found."
@@ -721,7 +753,7 @@ class Config:
if not self.dry_run:
for path in location_path_list:
if path != location_path:
util.remove_empty_directories(path, "**/*")
util.remove_empty_directories(path, "**/*", self.qbt.get_category_save_paths())
body += logger.print_line(
f"{'Did not delete' if self.dry_run else 'Deleted'} {num_del} files "
f"({util.human_readable_size(size_bytes)}) from the {location}.",

View file

@@ -85,7 +85,10 @@ class RemoveOrphaned:
if not self.config.dry_run:
orphaned_parent_path = set(self.executor.map(self.move_orphan, orphaned_files))
logger.print_line("Removing newly empty directories", self.config.loglevel)
self.executor.map(lambda dir: util.remove_empty_directories(dir, "**/*"), orphaned_parent_path)
self.executor.map(
lambda dir: util.remove_empty_directories(dir, "**/*", self.qbt.get_category_save_paths()),
orphaned_parent_path,
)
else:
logger.print_line("No Orphaned Files found.", self.config.loglevel)

View file

@@ -170,6 +170,8 @@ class ShareLimits:
logger.separator(
f"Updating Share Limits for [Group {group_name}] [Priority {group_config['priority']}]", space=False, border=False
)
group_upload_speed = group_config["limit_upload_speed"]
for torrent in torrents:
t_name = torrent.name
t_hash = torrent.hash
@@ -181,8 +183,16 @@
check_max_seeding_time = group_config["max_seeding_time"] != torrent.max_seeding_time
# Treat upload limit as -1 if it is set to 0 (unlimited)
torrent_upload_limit = -1 if round(torrent.up_limit / 1024) == 0 else round(torrent.up_limit / 1024)
if group_config["limit_upload_speed"] == 0:
if group_config["limit_upload_speed"] <= 0:
group_config["limit_upload_speed"] = -1
else:
if group_config["enable_group_upload_speed"]:
logger.trace(
"enable_group_upload_speed set to True.\n"
f"Setting limit_upload_speed to {group_upload_speed} / {len(torrents)} = "
f"{round(group_upload_speed / len(torrents))} kB/s"
)
group_config["limit_upload_speed"] = round(group_upload_speed / len(torrents))
check_limit_upload_speed = group_config["limit_upload_speed"] != torrent_upload_limit
hash_not_prev_checked = t_hash not in self.torrent_hash_checked
share_limits_not_yet_tagged = (
@@ -207,21 +217,6 @@
logger.trace(f"check_limit_upload_speed: {check_limit_upload_speed}")
logger.trace(f"hash_not_prev_checked: {hash_not_prev_checked}")
logger.trace(f"share_limits_not_yet_tagged: {share_limits_not_yet_tagged}")
if (
check_max_ratio or check_max_seeding_time or check_limit_upload_speed or share_limits_not_yet_tagged
) and hash_not_prev_checked:
if (
not is_tag_in_torrent(self.min_seeding_time_tag, torrent.tags)
and not is_tag_in_torrent(self.min_num_seeds_tag, torrent.tags)
and not is_tag_in_torrent(self.last_active_tag, torrent.tags)
):
logger.print_line(logger.insert_space(f"Torrent Name: {t_name}", 3), self.config.loglevel)
logger.print_line(logger.insert_space(f'Tracker: {tracker["url"]}', 8), self.config.loglevel)
if self.group_tag:
logger.print_line(logger.insert_space(f"Added Tag: {self.group_tag}", 8), self.config.loglevel)
self.tag_and_update_share_limits_for_torrent(torrent, group_config)
self.stats_tagged += 1
self.torrents_updated.append(t_name)
tor_reached_seed_limit = self.has_reached_seed_limit(
torrent=torrent,
@@ -233,6 +228,24 @@
resume_torrent=group_config["resume_torrent_after_change"],
tracker=tracker["url"],
)
# Get updated torrent after checking if the torrent has reached seed limits
torrent = self.qbt.get_torrents({"torrent_hashes": t_hash})[0]
if (
check_max_ratio or check_max_seeding_time or check_limit_upload_speed or share_limits_not_yet_tagged
) and hash_not_prev_checked:
if (
not is_tag_in_torrent(self.min_seeding_time_tag, torrent.tags)
and not is_tag_in_torrent(self.min_num_seeds_tag, torrent.tags)
and not is_tag_in_torrent(self.last_active_tag, torrent.tags)
) or share_limits_not_yet_tagged:
logger.print_line(logger.insert_space(f"Torrent Name: {t_name}", 3), self.config.loglevel)
logger.print_line(logger.insert_space(f'Tracker: {tracker["url"]}', 8), self.config.loglevel)
if self.group_tag:
logger.print_line(logger.insert_space(f"Added Tag: {self.group_tag}", 8), self.config.loglevel)
self.tag_and_update_share_limits_for_torrent(torrent, group_config)
self.stats_tagged += 1
self.torrents_updated.append(t_name)
# Cleanup torrents if the torrent meets the criteria for deletion and cleanup is enabled
if group_config["cleanup"]:
if tor_reached_seed_limit:
@@ -359,7 +372,6 @@ class ShareLimits:
body.append(msg)
# Update Torrents
if not self.config.dry_run:
if tags and not is_tag_in_torrent(tags, torrent.tags):
torrent.add_tags(tags)
torrent_upload_limit = -1 if round(torrent.up_limit / 1024) == 0 else round(torrent.up_limit / 1024)
if limit_upload_speed is not None and limit_upload_speed != torrent_upload_limit:
@@ -385,16 +397,22 @@
):
"""Check if torrent has reached seed limit"""
body = ""
torrent_tags = torrent.tags
def _has_reached_min_seeding_time_limit():
print_log = []
if torrent.seeding_time >= min_seeding_time * 60:
if is_tag_in_torrent(self.min_seeding_time_tag, torrent.tags):
def _remove_min_seeding_time_tag():
nonlocal torrent_tags
if is_tag_in_torrent(self.min_seeding_time_tag, torrent_tags):
if not self.config.dry_run:
torrent.remove_tags(tags=self.min_seeding_time_tag)
def _has_reached_min_seeding_time_limit():
nonlocal torrent_tags
print_log = []
if torrent.seeding_time >= min_seeding_time * 60:
_remove_min_seeding_time_tag()
return True
else:
if not is_tag_in_torrent(self.min_seeding_time_tag, torrent.tags):
if not is_tag_in_torrent(self.min_seeding_time_tag, torrent_tags):
print_log += logger.print_line(logger.insert_space(f"Torrent Name: {torrent.name}", 3), self.config.loglevel)
print_log += logger.print_line(logger.insert_space(f"Tracker: {tracker}", 8), self.config.loglevel)
print_log += logger.print_line(
@@ -411,20 +429,22 @@
)
if not self.config.dry_run:
torrent.add_tags(self.min_seeding_time_tag)
torrent_tags += f", {self.min_seeding_time_tag}"
torrent.set_share_limits(ratio_limit=-1, seeding_time_limit=-1, inactive_seeding_time_limit=-1)
if resume_torrent:
torrent.resume()
return False
def _is_less_than_min_num_seeds():
nonlocal torrent_tags
print_log = []
if min_num_seeds == 0 or torrent.num_complete >= min_num_seeds:
if is_tag_in_torrent(self.min_num_seeds_tag, torrent.tags):
if is_tag_in_torrent(self.min_num_seeds_tag, torrent_tags):
if not self.config.dry_run:
torrent.remove_tags(tags=self.min_num_seeds_tag)
return False
else:
if not is_tag_in_torrent(self.min_num_seeds_tag, torrent.tags):
if not is_tag_in_torrent(self.min_num_seeds_tag, torrent_tags):
print_log += logger.print_line(logger.insert_space(f"Torrent Name: {torrent.name}", 3), self.config.loglevel)
print_log += logger.print_line(logger.insert_space(f"Tracker: {tracker}", 8), self.config.loglevel)
print_log += logger.print_line(
@@ -441,22 +461,24 @@
)
if not self.config.dry_run:
torrent.add_tags(self.min_num_seeds_tag)
torrent_tags += f", {self.min_num_seeds_tag}"
torrent.set_share_limits(ratio_limit=-1, seeding_time_limit=-1, inactive_seeding_time_limit=-1)
if resume_torrent:
torrent.resume()
return True
def _has_reached_last_active_time_limit():
nonlocal torrent_tags
print_log = []
now = int(time())
inactive_time_minutes = round((now - torrent.last_activity) / 60)
if inactive_time_minutes >= last_active:
if is_tag_in_torrent(self.last_active_tag, torrent.tags):
if is_tag_in_torrent(self.last_active_tag, torrent_tags):
if not self.config.dry_run:
torrent.remove_tags(tags=self.last_active_tag)
return True
else:
if not is_tag_in_torrent(self.last_active_tag, torrent.tags):
if not is_tag_in_torrent(self.last_active_tag, torrent_tags):
print_log += logger.print_line(logger.insert_space(f"Torrent Name: {torrent.name}", 3), self.config.loglevel)
print_log += logger.print_line(logger.insert_space(f"Tracker: {tracker}", 8), self.config.loglevel)
print_log += logger.print_line(
@@ -473,6 +495,7 @@
)
if not self.config.dry_run:
torrent.add_tags(self.last_active_tag)
torrent_tags += f", {self.last_active_tag}"
torrent.set_share_limits(ratio_limit=-1, seeding_time_limit=-1, inactive_seeding_time_limit=-1)
if resume_torrent:
torrent.resume()
@@ -488,9 +511,10 @@
elif max_seeding_time == -2 and self.qbt.global_max_seeding_time_enabled:
seeding_time_limit = self.qbt.global_max_seeding_time
else:
_remove_min_seeding_time_tag()
return False
if seeding_time_limit:
if (torrent.seeding_time >= seeding_time_limit * 60) and _has_reached_min_seeding_time_limit():
if _has_reached_min_seeding_time_limit() and (torrent.seeding_time >= seeding_time_limit * 60):
body += logger.insert_space(
f"Seeding Time vs Max Seed Time: {timedelta(seconds=torrent.seeding_time)} >= "
f"{timedelta(minutes=seeding_time_limit)}",

View file

@@ -13,6 +13,7 @@ class Tags:
self.default_ignore_tags = qbit_manager.config.default_ignore_tags # default ignore tags
self.torrents_updated = [] # List of torrents updated
self.notify_attr = [] # List of single torrent attributes to send to notifiarr
self.force_retag = qbit_manager.config.settings["force_retag_all"] # Force retag of all torrents
self.tags()
self.config.webhooks_factory.notify(self.torrents_updated, self.notify_attr, group_by="tag")
@@ -25,7 +26,7 @@ class Tags:
for torrent in self.qbt.torrent_list:
check_tags = [tag for tag in util.get_list(torrent.tags) if self.share_limits_tag not in tag]
if torrent.tags == "" or (len([trk for trk in check_tags if trk not in ignore_tags]) == 0):
if torrent.tags == "" or (len([trk for trk in check_tags if trk not in ignore_tags]) == 0) or self.force_retag:
tracker = self.qbt.get_tags(torrent.trackers)
if tracker["tag"]:
t_name = torrent.name

View file

@@ -2,6 +2,7 @@
import os
import sys
from functools import cache
from qbittorrentapi import Client
from qbittorrentapi import LoginFailed
@@ -33,7 +34,7 @@ class Qbt:
self.password = params["password"]
logger.secret(self.username)
logger.secret(self.password)
logger.debug(f"Host: {self.host}, Username: {self.username}, Password: {self.password}")
logger.debug(f"Host: {self.host}")
ex = ""
try:
self.client = Client(
@@ -357,6 +358,7 @@ class Qbt:
logger.warning(e)
return tracker
@cache
def get_category(self, path):
"""Get category from config file based on path provided"""
category = ""
@@ -378,6 +380,17 @@ class Qbt:
logger.warning(e)
return category
@cache
def get_category_save_paths(self):
"""Get all categories from qBittorrent and return a list of save_paths"""
save_paths = set()
categories = self.client.torrent_categories.categories
for cat in categories:
save_path = categories[cat].savePath.replace(self.config.root_dir, self.config.remote_dir)
if save_path:
save_paths.add(save_path)
return list(save_paths)
def tor_delete_recycle(self, torrent, info):
"""Move torrent to recycle bin"""
try:
@@ -477,7 +490,7 @@ class Qbt:
# Delete torrent and files
torrent.delete(delete_files=to_delete)
# Remove any empty directories
util.remove_empty_directories(save_path, "**/*")
util.remove_empty_directories(save_path, "**/*", self.get_category_save_paths())
else:
torrent.delete(delete_files=False)
else:
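
The new get_category_save_paths method above reads each category's savePath from qBittorrent and rewrites it from qBittorrent's view of the filesystem (root_dir) to the local view (remote_dir) before deduplicating. A minimal standalone sketch of that translation, assuming the paths are plain string prefixes (hypothetical function and example paths):

```python
# Hypothetical standalone version of the save-path translation shown above.
def translate_save_paths(categories: dict, root_dir: str, remote_dir: str) -> list:
    """Map each category's savePath from qBittorrent's view to the local mount."""
    save_paths = set()
    for info in categories.values():
        save_path = info["savePath"].replace(root_dir, remote_dir)
        if save_path:
            save_paths.add(save_path)
    return list(save_paths)

# Example: qBittorrent sees /data/torrents, the host mounts it at /mnt/user/data/torrents.
cats = {"movies": {"savePath": "/data/torrents/movies"}, "tv": {"savePath": "/data/torrents/tv"}}
print(translate_save_paths(cats, "/data/", "/mnt/user/data/"))
```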

View file

@@ -421,7 +421,7 @@ def copy_files(src, dest):
logger.error(ex)
def remove_empty_directories(pathlib_root_dir, pattern):
def remove_empty_directories(pathlib_root_dir, pattern, excluded_paths=None):
"""Remove empty directories recursively."""
pathlib_root_dir = Path(pathlib_root_dir)
try:
@@ -435,6 +435,8 @@ def remove_empty_directories(pathlib_root_dir, pattern):
longest.append(pathlib_root_dir) # delete the folder itself if it's empty
for pdir in longest:
try:
if str(pdir) in excluded_paths:
continue
pdir.rmdir() # remove directory if empty
except (FileNotFoundError, OSError):
continue # catch and continue if non-empty, folders within could already be deleted if run in parallel
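
With the excluded_paths parameter added above, callers can pass the category save paths so those directories survive even when empty. A usage sketch under that assumption (the import path and example paths are assumed, not taken from the diff):

```python
from pathlib import Path

from modules import util  # project utility module; import path assumed

save_path = Path("/mnt/user/data/torrents/movies/Some.Movie.2023")
category_roots = ["/mnt/user/data/torrents/movies", "/mnt/user/data/torrents/tv"]

# Empty folders under the torrent's save path are removed, but any directory
# whose path appears in excluded_paths (here the category roots) is skipped.
util.remove_empty_directories(save_path, "**/*", category_roots)
```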

View file

@@ -1,2 +1,2 @@
flake8==7.0.0
pre-commit==3.6.2
pre-commit==3.7.0

View file

@@ -1,6 +1,6 @@
bencodepy==0.9.5
GitPython==3.1.42
qbittorrent-api==2024.2.59
GitPython==3.1.43
qbittorrent-api==2024.3.60
requests==2.31.0
retrying==1.3.4
ruamel.yaml==0.18.6

View file

@@ -16,13 +16,19 @@ parser.add_argument(
"--cache-mount",
"--cache_mount",
help="Cache mount point in Unraid. This is used to additionally filter for only torrents that exist on the cache mount."
"Use this option ONLY if you follow TRaSH Guides folder structure.",
"Use this option ONLY if you follow TRaSH Guides folder structure. (For default cache drive set this to /mnt/cache)",
default=None,
)
parser.add_argument(
"--days-from", "--days_from", help="Set Number of Days to stop torrents between two offsets", type=int, default=0
)
parser.add_argument("--days-to", "--days_to", help="Set Number of Days to stop torrents between two offsets", type=int, default=2)
parser.add_argument(
"--mover-old",
help="Use mover.old instead of mover. Useful if you're using the Mover Tuning Plugin",
action="store_true",
default=False,
)
# --DEFINE VARIABLES--#
# --START SCRIPT--#
@@ -88,10 +94,12 @@ if __name__ == "__main__":
stop_start_torrents(torrents, True)
time.sleep(10)
# Start mover
print("Starting Mover")
print(f"Starting {'mover.old' if args.mover_old else 'mover'} to move files older than {args.days_to} days to array disks.")
# Or using mover tunning
# os.system('/usr/local/sbin/mover start')
if args.mover_old:
os.system("/usr/local/sbin/mover.old start")
else:
os.system("/usr/local/sbin/mover start")
# Start Torrents
print(f"Resuming [{len(torrents)}] paused torrents from {args.days_from} - {args.days_to} days ago")
stop_start_torrents(torrents, False)