[44] add more logging to imdb search

meisnate12 2024-03-21 18:26:56 -04:00
parent a92ee5de67
commit 203e085037
3 changed files with 32 additions and 28 deletions


@@ -8,14 +8,14 @@ Updated tmdbapis requirement to 1.2.7
 Due to FlixPatrol moving a lot of their data behind a paywall and reworking their pages to remove IMDb IDs and TMDb IDs, the flixpatrol builders and default files have been removed. There are currently no plans to re-add them.
 # New Features
+Added new [BoxOfficeMojo Builder](https://metamanager.wiki/en/latest/files/builders/mojo/). Credit to @nwithan8 for the suggestion and initial code submission.
 Added `monitor_existing` to sonarr and radarr to update the monitored status of items existing in Plex to match the declared `monitor` value.
 Added [Gotify](https://gotify.net/) as a notification service. Thanks @krstn420 for the initial code.
+Added [Trakt and MyAnimeList Authentication Page](https://metamanager.wiki/en/latest/config/auth/) allowing users to authenticate against those services directly from the wiki. Credit to @chazlarson for developing the script.
 # Updates
 Reworked PMM Default Streaming [Collections](https://metamanager.wiki/en/latest/defaults/both/streaming) and [Overlays](https://metamanager.wiki/en/latest/defaults/overlays/streaming) to utilize TMDB Watch Provider data, allowing users to customize regions without relying on mdblist. This data will now be more accurate and up to date.
-Added new [BoxOfficeMojo Builder](https://metamanager.wiki/en/latest/files/builders/mojo/). Credit to @nwithan8 for the suggestion and initial code submission.
 Added new [`trakt_chart` attributes](https://metamanager.wiki/en/latest/files/builders/trakt/#trakt-chart) `network_ids`, `studio_ids`, `votes`, `tmdb_ratings`, `tmdb_votes`, `imdb_ratings`, `imdb_votes`, `rt_meters`, `rt_user_meters`, and `metascores`, and removed the deprecated `network` attribute.
-Added [Trakt and MyAnimeList Authentication Page](https://metamanager.wiki/en/latest/config/auth/) allowing users to authenticate against those services directly from the wiki. Credit to @chazlarson for developing the script.
 Trakt Builder `trakt_userlist` value `recommendations` removed and `favorites` added.
 Mass Update operations can now be given a list of sources to fall back on when one fails, including a manual source.
 `mass_content_rating_update` has a new source, `mdb_age_rating`.
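The Mass Update fallback behavior described above amounts to trying each configured source in order and taking the first one that yields a value. A minimal sketch of that idea, using hypothetical source names and a stand-in `fetch` callable rather than PMM's actual operation code:

```python
def first_successful(sources, fetch):
    """Return the first non-None result from trying each source in order.

    `sources` and `fetch` are hypothetical stand-ins, not PMM's real API:
    a failing or empty source is skipped and the next one is consulted.
    """
    for source in sources:
        try:
            value = fetch(source)
        except Exception:
            continue  # source errored; fall back to the next one
        if value is not None:
            return value
    return None  # every source failed or returned nothing

ratings = {"mdb": None, "omdb": 7.4}
print(first_successful(["mdb", "omdb"], ratings.get))  # 7.4
```

A manual source would simply be the last entry in the list, guaranteeing the chain always resolves.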


@@ -1 +1 @@
-1.20.0-develop43
+1.20.0-develop44


@@ -417,31 +417,35 @@ class IMDb:
         imdb_ids = []
         logger.ghost("Parsing Page 1")
         response_json = self._graph_request(json_obj)
-        total = response_json["data"]["advancedTitleSearch"]["total"]
-        limit = data["limit"]
-        if limit < 1 or total < limit:
-            limit = total
-        remainder = limit % item_count
-        if remainder == 0:
-            remainder = item_count
-        num_of_pages = math.ceil(int(limit) / item_count)
-        end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
-        imdb_ids.extend([n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]])
-        if num_of_pages > 1:
-            for i in range(2, num_of_pages + 1):
-                start_num = (i - 1) * item_count + 1
-                logger.ghost(f"Parsing Page {i}/{num_of_pages} {start_num}-{limit if i == num_of_pages else i * item_count}")
-                json_obj["variables"]["after"] = end_cursor
-                response_json = self._graph_request(json_obj)
-                end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
-                ids_found = [n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]]
-                if i == num_of_pages:
-                    ids_found = ids_found[:remainder]
-                imdb_ids.extend(ids_found)
-        logger.exorcise()
-        if len(imdb_ids) > 0:
-            return imdb_ids
-        raise Failed("IMDb Error: No IMDb IDs Found")
+        try:
+            total = response_json["data"]["advancedTitleSearch"]["total"]
+            limit = data["limit"]
+            if limit < 1 or total < limit:
+                limit = total
+            remainder = limit % item_count
+            if remainder == 0:
+                remainder = item_count
+            num_of_pages = math.ceil(int(limit) / item_count)
+            end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
+            imdb_ids.extend([n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]])
+            if num_of_pages > 1:
+                for i in range(2, num_of_pages + 1):
+                    start_num = (i - 1) * item_count + 1
+                    logger.ghost(f"Parsing Page {i}/{num_of_pages} {start_num}-{limit if i == num_of_pages else i * item_count}")
+                    json_obj["variables"]["after"] = end_cursor
+                    response_json = self._graph_request(json_obj)
+                    end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
+                    ids_found = [n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]]
+                    if i == num_of_pages:
+                        ids_found = ids_found[:remainder]
+                    imdb_ids.extend(ids_found)
+            logger.exorcise()
+            if len(imdb_ids) > 0:
+                return imdb_ids
+            raise Failed("IMDb Error: No IMDb IDs Found")
+        except KeyError:
+            logger.error(f"Response: {response_json}")
+            raise

     def _award(self, data):
         final_list = []
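The pagination math in the hunk above computes how many GraphQL pages to fetch and trims the final page so exactly `limit` IDs are collected: the remainder of `limit % item_count` is the size of the last page, with a full page substituted when it divides evenly. A standalone sketch of just that arithmetic, with hypothetical values and no IMDb client:

```python
import math

def page_plan(total, limit, item_count):
    """Return (num_of_pages, last_page_size) using the same math as the diff.

    total      -- result count reported by the search
    limit      -- requested cap; <1 (or larger than total) means "take all"
    item_count -- results per page
    """
    if limit < 1 or total < limit:
        limit = total
    remainder = limit % item_count
    if remainder == 0:
        remainder = item_count  # limit divides evenly: final page is full
    num_of_pages = math.ceil(limit / item_count)
    return num_of_pages, remainder

# 120 results, no limit, 50 per page -> pages of 50, 50, 20
print(page_plan(120, 0, 50))    # (3, 20)
# limit 100 divides evenly -> two full pages
print(page_plan(120, 100, 50))  # (2, 50)
```

Trimming only the last page (`ids_found[:remainder]`) is what keeps the returned list at exactly `limit` entries even though the API always serves full pages.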