Merge branch 'develop' into patch-1

bobokun, 2023-06-11 16:45:37 -04:00, committed by GitHub
commit cbb12bd201
19 changed files with 192 additions and 92 deletions


@@ -45,7 +45,7 @@ repos:
     hooks:
       - id: black
         language_version: python3
-        args: [--line-length, '130']
+        args: [--line-length, '130', --preview]
   - repo: https://github.com/PyCQA/flake8
     rev: 6.0.0
     hooks:


@@ -1,23 +1,24 @@
 # Requirements Updated
 - Updates ruamel.yaml to 0.17.31
 - Updates qbittorrent-api to 2023.5.48
 - Separate out dev requirements into requirements-dev.txt
-# Breaking Changes
-- `tag_nohardlinks` only updates/removes `noHL` tag. **It does not modify or cleanup share_limits anymore.**
-- `tag_update` only adds tracker tags to torrents. **It does not modify or cleanup share_limits anymore.**
-- Please remove any references to share_limits from your configuration in the tracker/nohardlinks section
-- Migration guide can be followed here: [V4 Migration Guide](https://github.com/StuffAnThings/qbit_manage/wiki/v4-Migration-Guide)
-- Webhook payloads changed (See [webhooks](https://github.com/StuffAnThings/qbit_manage/wiki/Config-Setup#webhooks) for updated payload)
 # New Features
-- Adds new command `share_limits`, `--share-limits`, `QBT_SHARE_LIMITS=True` to update share limits based on tags/categories specified per group (Closes #88, Closes #306, Closes #259, Closes #308, Closes #137)
-- See [Config Setup - share_limits](https://github.com/StuffAnThings/qbit_manage/wiki/Config-Setup#share_limits) for more details
-- Adds new command `skip_qb_version_check`, `--skip-qb-version-check`, `QBT_SKIP_QB_VERSION_CHECK` to bypass the qBittorrent compatibility check (unsupported - thanks to @ftc2 #307)
-- Updates webhook notifications to group notifications when a function updates more than 10 torrents.
-- Adds new webhooks for `share_limits`
-- Adds rate limit to webhook notifications (1 msg/sec)
-# Bug Fixes
-- Fixes #302
-- Fixes #317
-**Full Changelog**: https://github.com/StuffAnThings/qbit_manage/compare/v3.6.4...v4.0.0
+- cross-seed will move torrent to an error folder if it fails to inject torrent
+# Bug Fixes
+- Fixes #329 (Updates missing share_limits tag even when share_limits are satisfied)
+# Enhancements
+- Logic for `share_limits_suffix_tag` changed to become a prefix tag instead, along with adding the priority of the group. The reason for this change is so it's easier to see the share limit groups together in qBittorrent, ordered by priority.
+- `share_limits_suffix_tag` key is now `share_limits_tag`
+- No config changes are required, as qbm will automatically change the previous `share_limits_suffix_tag` key to `share_limits_tag`
+- Changes the default value of `share_limits_tag` to `~share_limit`. The `~` is used to sort the `share_limits_tag` to the bottom of the qBittorrent WebUI for less clutter.
+Example based on config.sample:
+| old tag (v4.0.0) | new tag (v4.0.1) |
+| ----------- | ----------- |
+| noHL.share_limit | ~share_limit_1.noHL |
+| cross-seed.share_limit | ~share_limit_2.cross-seed |
+| PTP.share_limit | ~share_limit_3.PTP |
+| default.share_limit | ~share_limit_999.default |
+**Full Changelog**: https://github.com/StuffAnThings/qbit_manage/compare/v4.0.0...v4.0.1
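
For illustration, a minimal sketch of the tag rename described in the table above (the f-string mirrors the share_limits diff further down; the group name and priority are illustrative values taken from the table):

```python
# Sketch only: how the v4.0.1 share-limit tag is assembled, using the new
# default "~share_limit" and an illustrative group/priority from the table.
share_limits_tag = "~share_limit"
group_name, priority = "noHL", 1

old_tag = f"{group_name}.share_limit"                    # v4.0.0 suffix style
new_tag = f"{share_limits_tag}_{priority}.{group_name}"  # v4.0.1 prefix + priority
print(old_tag, "->", new_tag)  # noHL.share_limit -> ~share_limit_1.noHL
```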


@@ -6,7 +6,11 @@ venv: requirements.txt setup.py tox.ini
 .PHONY: test
 test:
-	tox
+	tox -e tests
+
+.PHONY: pre-commit
+pre-commit:
+	tox -e pre-commit
 
 .PHONY: clean
 clean:
@@ -14,3 +18,7 @@ clean:
 	find -name '__pycache__' -delete
 	rm -rf .tox
 	rm -rf venv
+
+.PHONY: install-hooks
+install-hooks:
+	tox -e install-hooks


@@ -1 +1 @@
-4.0.0
+4.0.1-develop4


@@ -28,7 +28,7 @@ settings:
   force_auto_tmm: False # Will force qBittorrent to enable Automatic Torrent Management for each torrent.
   tracker_error_tag: issue # Will set the tag of any torrents that do not have a working tracker.
   nohardlinks_tag: noHL # Will set the tag of any torrents with no hardlinks.
-  share_limits_suffix_tag: share_limit # Will add this suffix to the grouping separated by '.' to the tag of any torrents with share limits.
+  share_limits_tag: ~share_limit # Will add this tag when applying share limits to provide an easy way to filter torrents by share limit group/priority for each torrent
   ignoreTags_OnUpdate: # When running tag-update function, it will update torrent tags for a given torrent even if the torrent has at least one or more of the tags defined here. Otherwise torrents will not be tagged if tags exist.
     - noHL
     - issue


@@ -151,6 +151,11 @@ class Config:
         self.loglevel = "DRYRUN" if self.dry_run else "INFO"
         self.session = requests.Session()
 
+        share_limits_tag = self.data["settings"].get("share_limits_suffix_tag", "~share_limit")
+        # Convert previous share_limits_suffix_tag to new default share_limits_tag
+        if share_limits_tag == "share_limit":
+            share_limits_tag = "~share_limit"
         self.settings = {
             "force_auto_tmm": self.util.check_for_attribute(
                 self.data, "force_auto_tmm", parent="settings", var_type="bool", default=False
@@ -159,19 +164,22 @@
                 self.data, "tracker_error_tag", parent="settings", default="issue"
             ),
             "nohardlinks_tag": self.util.check_for_attribute(self.data, "nohardlinks_tag", parent="settings", default="noHL"),
-            "share_limits_suffix_tag": self.util.check_for_attribute(
-                self.data, "share_limits_suffix_tag", parent="settings", default="share_limit"
+            "share_limits_tag": self.util.check_for_attribute(
+                self.data, "share_limits_tag", parent="settings", default=share_limits_tag
             ),
         }
         self.tracker_error_tag = self.settings["tracker_error_tag"]
         self.nohardlinks_tag = self.settings["nohardlinks_tag"]
-        self.share_limits_suffix_tag = "." + self.settings["share_limits_suffix_tag"]
+        self.share_limits_tag = self.settings["share_limits_tag"]
         default_ignore_tags = [self.nohardlinks_tag, self.tracker_error_tag, "cross-seed"]
         self.settings["ignoreTags_OnUpdate"] = self.util.check_for_attribute(
             self.data, "ignoreTags_OnUpdate", parent="settings", default=default_ignore_tags, var_type="list"
         )
+        "Migrate settings from v4.0.0 to v4.0.1 and beyond. Convert 'share_limits_suffix_tag' to 'share_limits_tag'"
+        if "share_limits_suffix_tag" in self.data["settings"]:
+            self.util.overwrite_attributes(self.settings, "settings")
 
         default_function = {
             "cross_seed": None,
@@ -308,7 +316,7 @@
             else:
                 priority = max(priorities) + 1
                 logger.warning(
-                    f"Priority not defined for the grouping '{key}' in share_limits. " f"Setting priority to {priority}"
+                    f"Priority not defined for the grouping '{key}' in share_limits. Setting priority to {priority}"
                 )
             value["priority"] = self.util.check_for_attribute(
                 self.data,
@@ -543,14 +551,18 @@
             )
         if self.commands["rem_orphaned"]:
             exclude_orphaned = f"**{os.sep}{os.path.basename(self.orphaned_dir.rstrip(os.sep))}{os.sep}*"
-            self.orphaned["exclude_patterns"].append(exclude_orphaned) if exclude_orphaned not in self.orphaned[
-                "exclude_patterns"
-            ] else self.orphaned["exclude_patterns"]
+            (
+                self.orphaned["exclude_patterns"].append(exclude_orphaned)
+                if exclude_orphaned not in self.orphaned["exclude_patterns"]
+                else self.orphaned["exclude_patterns"]
+            )
         if self.recyclebin["enabled"]:
             exclude_recycle = f"**{os.sep}{os.path.basename(self.recycle_dir.rstrip(os.sep))}{os.sep}*"
-            self.orphaned["exclude_patterns"].append(exclude_recycle) if exclude_recycle not in self.orphaned[
-                "exclude_patterns"
-            ] else self.orphaned["exclude_patterns"]
+            (
+                self.orphaned["exclude_patterns"].append(exclude_recycle)
+                if exclude_recycle not in self.orphaned["exclude_patterns"]
+                else self.orphaned["exclude_patterns"]
+            )
 
         # Connect to Qbittorrent
         self.qbt = None
@@ -640,8 +652,10 @@
                     if empty_after_x_days <= days:
                         num_del += 1
                         body += logger.print_line(
+                            (
                             f"{'Did not delete' if self.dry_run else 'Deleted'} "
-                            f"{filename} from {folder} (Last modified {round(days)} days ago).",
+                            f"{filename} from {folder} (Last modified {round(days)} days ago)."
+                            ),
                             self.loglevel,
                         )
                         files += [str(filename)]
@@ -654,8 +668,10 @@
         for path in location_path_list:
             util.remove_empty_directories(path, "**/*")
         body += logger.print_line(
+            (
             f"{'Did not delete' if self.dry_run else 'Deleted'} {num_del} files "
-            f"({util.human_readable_size(size_bytes)}) from the {location}.",
+            f"({util.human_readable_size(size_bytes)}) from the {location}."
+            ),
             self.loglevel,
         )
         attr = {
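
A hedged sketch of what the settings migration above amounts to for a v4.0.0-era config. This is a simplified stand-in: the real code goes through `self.util.check_for_attribute` and `overwrite_attributes`, and the `settings` dict here stands in for `self.data["settings"]`.

```python
# Assumption: plain dict lookups approximate check_for_attribute's behavior.
settings = {"share_limits_suffix_tag": "share_limit"}  # v4.0.0-era config

# Read the old key first so existing configs keep working...
share_limits_tag = settings.get("share_limits_suffix_tag", "~share_limit")
# ...and swap the old default "share_limit" for the new default "~share_limit".
if share_limits_tag == "share_limit":
    share_limits_tag = "~share_limit"

# A user-supplied share_limits_tag wins; otherwise the migrated value is used,
# and (in the real code) the rewritten settings are persisted back to config.yml.
share_limits_tag = settings.get("share_limits_tag", share_limits_tag)
print(share_limits_tag)  # ~share_limit
```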


@@ -31,18 +31,21 @@ class CrossSeed:
         dir_cs = self.config.cross_seed_dir
         dir_cs_out = os.path.join(dir_cs, "qbit_manage_added")
         os.makedirs(dir_cs_out, exist_ok=True)
+        dir_cs_err = os.path.join(dir_cs, "qbit_manage_error")
+        os.makedirs(dir_cs_err, exist_ok=True)
         for file in cs_files:
             tr_name = file.split("]", 2)[2].split(".torrent")[0]
             t_tracker = file.split("]", 2)[1][1:]
             # Substring Key match in dictionary (used because t_name might not match exactly with self.qbt.torrentinfo key)
             # Returned the dictionary of filtered item
             torrentdict_file = dict(filter(lambda item: tr_name in item[0], self.qbt.torrentinfo.items()))
+            src = os.path.join(dir_cs, file)
+            dir_cs_out = os.path.join(dir_cs_out, file)
+            dir_cs_err = os.path.join(dir_cs_err, file)
             if torrentdict_file:
                 # Get the exact torrent match name from self.qbt.torrentinfo
                 t_name = next(iter(torrentdict_file))
                 dest = os.path.join(self.qbt.torrentinfo[t_name]["save_path"], "")
-                src = os.path.join(dir_cs, file)
-                dir_cs_out = os.path.join(dir_cs, "qbit_manage_added", file)
                 category = self.qbt.torrentinfo[t_name].get("Category", self.qbt.get_category(dest))
                 # Only add cross-seed torrent if original torrent is complete
                 if self.qbt.torrentinfo[t_name]["is_complete"]:
@@ -98,6 +101,7 @@ class CrossSeed:
                     logger.print_line(error, self.config.loglevel)
                 else:
                     logger.print_line(error, "WARNING")
+                util.move_files(src, dir_cs_err)
                 self.config.notify(error, "cross-seed", False)
         self.config.webhooks_factory.notify(self.torrents_updated, self.notify_attr, group_by="category")


@@ -64,8 +64,10 @@ class ReCheck:
                     )
                     logger.debug(
                         logger.insert_space(
+                            (
                             f"-- Seeding Time vs Max Seed Time: {timedelta(seconds=torrent.seeding_time)} < "
-                            f"{timedelta(minutes=torrent.max_seeding_time)}",
+                            f"{timedelta(minutes=torrent.max_seeding_time)}"
+                            ),
                             4,
                         )
                     )
@@ -85,7 +87,7 @@ class ReCheck:
                 ):
                     self.stats_resumed += 1
                     body = logger.print_line(
-                        f"{'Not Resuming' if self.config.dry_run else 'Resuming'} [{tracker['tag']}] - " f"{t_name}",
+                        f"{'Not Resuming' if self.config.dry_run else 'Resuming'} [{tracker['tag']}] - {t_name}",
                         self.config.loglevel,
                     )
                     attr = {


@@ -67,8 +67,10 @@ class RemoveOrphaned:
             logger.print_line(f"{num_orphaned} Orphaned files found", self.config.loglevel)
             body += logger.print_line("\n".join(orphaned_files), self.config.loglevel)
             body += logger.print_line(
+                (
                 f"{'Not moving' if self.config.dry_run else 'Moving'} {num_orphaned} Orphaned files "
-                f"to {self.orphaned_dir.replace(self.remote_dir,self.root_dir)}",
+                f"to {self.orphaned_dir.replace(self.remote_dir,self.root_dir)}"
+                ),
                 self.config.loglevel,
             )


@@ -145,22 +145,28 @@ class RemoveUnregistered:
         if self.stats_deleted >= 1 or self.stats_deleted_contents >= 1:
             if self.stats_deleted >= 1:
                 logger.print_line(
+                    (
                     f"{'Did not delete' if self.config.dry_run else 'Deleted'} {self.stats_deleted} "
-                    f".torrent{'s' if self.stats_deleted > 1 else ''} but not content files.",
+                    f".torrent{'s' if self.stats_deleted > 1 else ''} but not content files."
+                    ),
                     self.config.loglevel,
                 )
             if self.stats_deleted_contents >= 1:
                 logger.print_line(
+                    (
                     f"{'Did not delete' if self.config.dry_run else 'Deleted'} {self.stats_deleted_contents} "
-                    f".torrent{'s' if self.stats_deleted_contents > 1 else ''} AND content files.",
+                    f".torrent{'s' if self.stats_deleted_contents > 1 else ''} AND content files."
+                    ),
                     self.config.loglevel,
                 )
         else:
             logger.print_line("No unregistered torrents found.", self.config.loglevel)
         if self.stats_untagged >= 1:
             logger.print_line(
+                (
                 f"{'Did not delete' if self.config.dry_run else 'Deleted'} {self.tag_error} tags for {self.stats_untagged} "
-                f".torrent{'s.' if self.stats_untagged > 1 else '.'}",
+                f".torrent{'s.' if self.stats_untagged > 1 else '.'}"
+                ),
                 self.config.loglevel,
             )
         if self.stats_tagged >= 1:


@@ -24,10 +24,11 @@ class ShareLimits:
         self.share_limits_config = qbit_manager.config.share_limits  # configuration of share limits
         self.torrents_updated = []  # list of torrents that have been updated
         self.torrent_hash_checked = []  # list of torrent hashes that have been checked for share limits
-        self.share_limits_suffix_tag = qbit_manager.config.share_limits_suffix_tag  # suffix tag for share limits
+        self.share_limits_tag = qbit_manager.config.share_limits_tag  # tag for share limits
         self.group_tag = None  # tag for the share limit group
         self.update_share_limits()
+        self.delete_share_limits_suffix_tag()
 
     def update_share_limits(self):
         """Updates share limits for torrents based on grouping"""
@@ -166,6 +167,9 @@
         for torrent in torrents:
             t_name = torrent.name
             t_hash = torrent.hash
+            self.group_tag = (
+                f"{self.share_limits_tag}_{group_config['priority']}.{group_name}" if group_config["add_group_to_tag"] else None
+            )
             tracker = self.qbt.get_tags(torrent.trackers)
             check_max_ratio = group_config["max_ratio"] != torrent.max_ratio
             check_max_seeding_time = group_config["max_seeding_time"] != torrent.max_seeding_time
@@ -175,6 +179,7 @@
                 group_config["limit_upload_speed"] = -1
             check_limit_upload_speed = group_config["limit_upload_speed"] != torrent_upload_limit
             hash_not_prev_checked = t_hash not in self.torrent_hash_checked
+            share_limits_not_yet_tagged = True if self.group_tag and self.group_tag not in torrent.tags else False
             logger.trace(f"Torrent: {t_name} [Hash: {t_hash}]")
             logger.trace(f"Torrent Category: {torrent.category}")
             logger.trace(f"Torrent Tags: {torrent.tags}")
@@ -192,9 +197,11 @@
             )
             logger.trace(f"check_limit_upload_speed: {check_limit_upload_speed}")
             logger.trace(f"hash_not_prev_checked: {hash_not_prev_checked}")
-            if (check_max_ratio or check_max_seeding_time or check_limit_upload_speed) and hash_not_prev_checked:
+            logger.trace(f"share_limits_not_yet_tagged: {share_limits_not_yet_tagged}")
+            if (
+                check_max_ratio or check_max_seeding_time or check_limit_upload_speed or share_limits_not_yet_tagged
+            ) and hash_not_prev_checked:
                 if "MinSeedTimeNotReached" not in torrent.tags:
-                    self.group_tag = f"{group_name}{self.share_limits_suffix_tag}" if group_config["add_group_to_tag"] else None
                     logger.print_line(logger.insert_space(f"Torrent Name: {t_name}", 3), self.config.loglevel)
                     logger.print_line(logger.insert_space(f'Tracker: {tracker["url"]}', 8), self.config.loglevel)
                     if self.group_tag:
@@ -226,7 +233,7 @@
             # Remove previous share_limits tag
             tags = util.get_list(torrent.tags)
             for tag in tags:
-                if self.share_limits_suffix_tag in tag:
+                if self.share_limits_tag in tag:
                     torrent.remove_tags(tag)
 
     # Will tag the torrent with the group name if add_group_to_tag is True and set the share limits
@@ -371,8 +378,11 @@
             print_log += logger.print_line(logger.insert_space(f"Tracker: {tracker}", 8), self.config.loglevel)
             print_log += logger.print_line(
                 logger.insert_space(
+                    (
                     f"Min seed time not met: {timedelta(seconds=torrent.seeding_time)} <="
-                    f"{timedelta(minutes=min_seeding_time)}. Removing Share Limits so qBittorrent can continue seeding.",
+                    f" {timedelta(minutes=min_seeding_time)}. Removing Share Limits so qBittorrent can continue"
+                    " seeding."
+                    ),
                     8,
                 ),
                 self.config.loglevel,
@@ -401,8 +411,10 @@
         if seeding_time_limit:
             if (torrent.seeding_time >= seeding_time_limit * 60) and _has_reached_min_seeding_time_limit():
                 body += logger.insert_space(
+                    (
                     f"Seeding Time vs Max Seed Time: {timedelta(seconds=torrent.seeding_time)} >= "
-                    f"{timedelta(minutes=seeding_time_limit)}",
+                    f"{timedelta(minutes=seeding_time_limit)}"
+                    ),
                     8,
                 )
                 return True
@@ -422,3 +434,11 @@
         if _has_reached_seeding_time_limit():
             return body
         return False
+
+    def delete_share_limits_suffix_tag(self):
+        """ "Delete Share Limits Suffix Tag from version 4.0.0"""
+        tags = self.client.torrent_tags.tags
+        old_share_limits_tag = self.share_limits_tag[1:] if self.share_limits_tag.startswith("~") else self.share_limits_tag
+        for tag in tags:
+            if tag.endswith(f".{old_share_limits_tag}"):
+                self.client.torrent_tags.delete_tags(tag)
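
As a rough worked example, the one-time cleanup added above would match the v4.0.0-style tags from the changelog table like so (illustrative tag list; the real method reads the tag list from qBittorrent via the API client):

```python
# Sketch: detecting stale v4.0.0 suffix tags so they can be deleted.
share_limits_tag = "~share_limit"
old_share_limits_tag = share_limits_tag[1:] if share_limits_tag.startswith("~") else share_limits_tag

existing_tags = ["noHL.share_limit", "PTP.share_limit", "~share_limit_3.PTP", "noHL"]
stale = [tag for tag in existing_tags if tag.endswith(f".{old_share_limits_tag}")]
print(stale)  # ['noHL.share_limit', 'PTP.share_limit'] -- these would be deleted
```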


@@ -62,7 +62,7 @@ class TagNoHardLinks:
             self.stats_untagged += 1
             body = []
             body += logger.print_line(
-                f"Previous Tagged {self.nohardlinks_tag} " f"Torrent Name: {torrent.name} has hardlinks found now.",
+                f"Previous Tagged {self.nohardlinks_tag} Torrent Name: {torrent.name} has hardlinks found now.",
                 self.config.loglevel,
             )
             body += logger.print_line(logger.insert_space(f"Removed Tag: {self.nohardlinks_tag}", 6), self.config.loglevel)
@@ -121,16 +121,20 @@
             self.check_previous_nohardlinks_tagged_torrents(has_nohardlinks, torrent, tracker, category)
         if self.stats_tagged >= 1:
             logger.print_line(
+                (
                 f"{'Did not Tag' if self.config.dry_run else 'Added Tag'} for {self.stats_tagged} "
-                f".torrent{'s.' if self.stats_tagged > 1 else '.'}",
+                f".torrent{'s.' if self.stats_tagged > 1 else '.'}"
+                ),
                 self.config.loglevel,
             )
         else:
             logger.print_line("No torrents to tag with no hardlinks.", self.config.loglevel)
         if self.stats_untagged >= 1:
             logger.print_line(
+                (
                 f"{'Did not delete' if self.config.dry_run else 'Deleted'} "
                 f"{self.nohardlinks_tag} tags for {self.stats_untagged} "
-                f".torrent{'s.' if self.stats_untagged > 1 else '.'}",
+                f".torrent{'s.' if self.stats_untagged > 1 else '.'}"
+                ),
                 self.config.loglevel,
             )


@@ -9,7 +9,7 @@ class Tags:
         self.config = qbit_manager.config
         self.client = qbit_manager.client
         self.stats = 0
-        self.share_limits_suffix_tag = qbit_manager.config.share_limits_suffix_tag  # suffix tag for share limits
+        self.share_limits_tag = qbit_manager.config.share_limits_tag  # suffix tag for share limits
         self.torrents_updated = []  # List of torrents updated
         self.notify_attr = []  # List of single torrent attributes to send to notifiarr
@@ -21,7 +21,7 @@
         ignore_tags = self.config.settings["ignoreTags_OnUpdate"]
         logger.separator("Updating Tags", space=False, border=False)
         for torrent in self.qbt.torrent_list:
-            check_tags = [tag for tag in util.get_list(torrent.tags) if self.share_limits_suffix_tag not in tag]
+            check_tags = [tag for tag in util.get_list(torrent.tags) if self.share_limits_tag not in tag]
             if torrent.tags == "" or (len([trk for trk in check_tags if trk not in ignore_tags]) == 0):
                 tracker = self.qbt.get_tags(torrent.trackers)
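
A small sketch of the filtering change above, showing why a torrent that only carries a share-limit tag is still treated as untagged by `tag_update` (tag values are illustrative; the defaults follow this commit):

```python
# Sketch: share-limit tags are ignored when deciding whether a torrent needs tracker tags.
share_limits_tag = "~share_limit"
ignore_tags = ["noHL", "issue", "cross-seed"]

torrent_tags = "~share_limit_1.noHL, noHL"
check_tags = [tag for tag in torrent_tags.split(", ") if share_limits_tag not in tag]  # ['noHL']
needs_tracker_tag = len([tag for tag in check_tags if tag not in ignore_tags]) == 0
print(needs_tracker_tag)  # True -> tag_update would still add the tracker tag
```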


@@ -136,6 +136,13 @@ class check:
     def __init__(self, config):
         self.config = config
 
+    def overwrite_attributes(self, data, attribute):
+        """Overwrite attributes in config."""
+        yaml = YAML(self.config.config_path)
+        if data is not None and attribute in yaml.data:
+            yaml.data[attribute] = data
+            yaml.save()
+
     def check_for_attribute(
         self,
         data,


@@ -171,8 +171,10 @@ class Webhooks:
     def notify(self, torrents_updated=[], payload={}, group_by=""):
         if len(torrents_updated) > GROUP_NOTIFICATION_LIMIT:
             logger.trace(
+                (
                 f"Number of torrents updated > {GROUP_NOTIFICATION_LIMIT}, grouping notifications"
-                f"{f' by {group_by}' if group_by else ''}",
+                f"{f' by {group_by}' if group_by else ''}"
+                ),
             )
             if group_by == "category":
                 group_attr = group_notifications_by_key(payload, "torrent_category")
@@ -189,10 +191,14 @@
             attr = {
                 "function": group_attr[group]["function"],
                 "title": f"{group_attr[group]['title']} for {group}",
-                "body": group_attr[group]["body"]
+                "body": (
+                    group_attr[group]["body"]
                     if only_one_torrent_updated
-                    else f"Updated {num_torrents_updated} "
-                    f"{'torrent' if only_one_torrent_updated else 'torrents'} with {group_by} '{group}'",
+                    else (
+                        f"Updated {num_torrents_updated} "
+                        f"{'torrent' if only_one_torrent_updated else 'torrents'} with {group_by} '{group}'"
+                    )
+                ),
                 "torrents": group_attr[group]["torrents"],
             }
             if group_by == "category":


@@ -61,8 +61,10 @@ parser.add_argument(
     action="store",
     default="config.yml",
     type=str,
-    help="This is used if you want to use a different name for your config.yml or if you want to load multiple"
-    "config files using *. Example: tv.yml or config*.yml",
+    help=(
+        "This is used if you want to use a different name for your config.yml or if you want to load multiple"
+        "config files using *. Example: tv.yml or config*.yml"
+    ),
 )
 parser.add_argument(
     "-lf",
@@ -103,8 +105,10 @@ parser.add_argument(
     dest="tag_update",
     action="store_true",
     default=False,
-    help="Use this if you would like to update your tags and/or set seed goals/limit upload speed by tag."
-    " (Only adds tags to untagged torrents)",
+    help=(
+        "Use this if you would like to update your tags and/or set seed goals/limit upload speed by tag."
+        " (Only adds tags to untagged torrents)"
+    ),
 )
 parser.add_argument(
     "-ru",
@@ -136,10 +140,12 @@ parser.add_argument(
     dest="tag_nohardlinks",
     action="store_true",
     default=False,
-    help="Use this to tag any torrents that do not have any hard links associated with any of the files. "
+    help=(
+        "Use this to tag any torrents that do not have any hard links associated with any of the files. "
         "This is useful for those that use Sonarr/Radarr which hard link your media files with the torrents for seeding. "
         "When files get upgraded they no longer become linked with your media therefore will be tagged with a new tag noHL. "
-    "You can then safely delete/remove these torrents to free up any extra space that is not being used by your media folder.",
+        "You can then safely delete/remove these torrents to free up any extra space that is not being used by your media folder."
+    ),
 )
 parser.add_argument(
     "-sl",
@@ -147,9 +153,11 @@ parser.add_argument(
     dest="share_limits",
     action="store_true",
     default=False,
-    help="Use this to help apply and manage your torrent share limits based on your tags/categories."
+    help=(
+        "Use this to help apply and manage your torrent share limits based on your tags/categories."
         "This can apply a max ratio, seed time limits to your torrents or limit your torrent upload speed as well."
-    "Share limits are applied in the order of priority specified.",
+        "Share limits are applied in the order of priority specified."
+    ),
 )
 parser.add_argument(
     "-sc",
@@ -422,7 +430,9 @@ def start():
         next_run = nxt_run["next_run"]
         body = logger.separator(
             f"Finished Run\n{os.linesep.join(stats_summary) if len(stats_summary)>0 else ''}"
-            f"\nRun Time: {run_time}\n{next_run_str if len(next_run_str)>0 else ''}".replace("\n\n", "\n").rstrip()
+            f"\nRun Time: {run_time}\n{next_run_str if len(next_run_str)>0 else ''}".replace(
+                "\n\n", "\n"
+            ).rstrip()
         )[0]
         return next_run, body


@@ -184,8 +184,8 @@ def main():
         print(
             f"--- Torrent ages are below threshold of '{MIN_TORRENT_AGE} days'\n"
             f"--- Torrent seed ratios are below threshold of '{MIN_TORRENT_SHARE_RATIO}'\n"
-            f"--- Torrents have multiple hard links\n"
-            f"--- No torrents exists!"
+            "--- Torrents have multiple hard links\n"
+            "--- No torrents exists!"
         )
         quit_program(0)


@@ -30,8 +30,10 @@ setup(
     # repository. For example: MIT
     license="MIT",
     # Short description of your library
-    description="This tool will help manage tedious tasks in qBittorrent and automate them. "
-    "Tag, categorize, remove Orphaned data, remove unregistered torrents and much much more.",
+    description=(
+        "This tool will help manage tedious tasks in qBittorrent and automate them. "
+        "Tag, categorize, remove Orphaned data, remove unregistered torrents and much much more."
+    ),
     # Long description of your library
     long_description=long_description,
     long_description_content_type="text/markdown",

tox.ini

@@ -5,16 +5,28 @@ tox_pip_extensions_ext_pip_custom_platform = true
 tox_pip_extensions_ext_venv_update = true
 
 [testenv]
-deps = -r{toxinidir}/requirements.txt
+deps =
+    -r{toxinidir}/requirements.txt
+    -r{toxinidir}/requirements-dev.txt
 passenv = HOME SSH_AUTH_SOCK USER
-commands =
-    pre-commit install -f --install-hooks
-    pre-commit run --all-files --show-diff-on-failure
 
 [testenv:venv]
 envdir = venv
 commands =
 
+[testenv:install-hooks]
+deps = pre-commit
+commands = pre-commit install -f --install-hooks
+
+[testenv:pre-commit]
+deps = pre-commit
+commands = pre-commit run --all-files
+
+[testenv:tests]
+commands =
+    pre-commit install -f --install-hooks
+    pre-commit run --all-files
+
 [flake8]
 max-line-length = 130