safire/safire.ipynb

{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "safire.ipynb",
"provenance": [],
"private_outputs": true,
"collapsed_sections": [
"Ov-m1nFU9jry"
],
"toc_visible": true,
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/88lex/safire/blob/master/safire.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dAku_zP_KxPG",
"colab_type": "text"
},
"source": [
"### Commands"
]
},
{
"cell_type": "code",
"metadata": {
"id": "OeQm5tl7dLxF",
"colab_type": "code",
"cellView": "form",
"colab": {}
},
"source": [
"command1 = \"list projects\" #@param {type:\"string\"}\n",
"command2 = \"\" #@param {type:\"string\"}\n",
"command3 = \"\" #@param {type:\"string\"}\n",
"command4 = \"\" #@param {type:\"string\"}\n",
"command5 = \"\" #@param {type:\"string\"}\n",
"\n",
"\n",
"\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "yiEEfpW_zeB9",
"colab_type": "text"
},
"source": [
"Enter one or more commands, then select Runtime > Run All from the menu (or press Ctrl-F9). See `Output` for results."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ov-m1nFU9jry",
"colab_type": "text"
},
"source": [
"### Safire"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1FV-VHZsLkhG",
"colab_type": "text"
},
"source": [
"###### Instructions\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "guMoYIjjMjgL",
"colab_type": "text"
},
"source": [
"\n",
"1. If using Colab:\n",
" - Save a copy of the ipynb file to your Drive (Colab Notebooks folder)\n",
" - Run the Init code cell to authenticate to your My Drive account. This gives you access to Drive, where you will store your creds, svcaccts and xlsx/csv files.\n",
"\n",
"2. If running locally, change the config to point at local folders\n",
"3. Run the Config cell once to create empty /creds, /data and /svcaccts folders in the /safire folder if they do not exist.\n",
"4. If you have creds.json and token files, drag them into the /safire/creds folder you just created.\n",
"5. If you do not have creds.json and token files, create a creds.json file and drop it in safire/creds, then run `auth projects` and `auth groups` (see [github](https://github.com/88lex/safire) for details).\n",
"\n",
"All of this is done once, after which you will have a structure that looks like this:\n",
"```\n",
"/content/drive/My Drive\n",
" /safire\n",
" /creds\n",
" /creds.json\n",
" /token.pickle\n",
" /grptoken.pickle\n",
" /data\n",
" /svcaccts\n",
"```\n",
"\n",
"After creating your credentials and folders:\n",
"6. Enter one or more commands in the Commands section (empty cells are ignored). Command syntax is:\n",
" - `list projects`\n",
" - `list all`\n",
" - `list count` will show you item counts for projects, drives, groups, etc.\n",
" - `add projects` will create the number of projects set in the config file, using the prefix you entered there\n",
" - `add projects 5 1 myproj` will create 5 projects starting with number 0001 using a prefix of myproj. Result: myproj0001, myproj0002 .. myproj0005\n",
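" - For example, a typical first session might enter commands like these, one per box (the `myproj` prefix and counts here are illustrative, using the syntax above):\n",
"```\n",
"list projects\n",
"add projects 2 1 myproj\n",
"add sas myproj\n",
"```\n",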
"\n",
"7. Run commands using `Runtime - Run All` or `Ctrl-F9`\n",
" - NOTE: If you run only the Command cell it will do nothing. You need to Run All in order to run all of the code.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rkhRMr8DliCs",
"colab_type": "text"
},
"source": [
"###### **Init**\n",
"\n",
"- Init needs to be run once each time the Colab instance or Jupyter notebook is started\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "e2xK3xi2MlXs",
"colab_type": "code",
"cellView": "both",
"colab": {}
},
"source": [
"import os, importlib.util, sys, uuid, pickle\n",
"import pandas as pd\n",
"from base64 import b64decode\n",
"from glob import glob\n",
"from json import loads\n",
"from time import sleep\n",
"from pathlib import Path\n",
"from google.auth.transport.requests import Request\n",
"from google_auth_oauthlib.flow import InstalledAppFlow\n",
"from googleapiclient.discovery import build\n",
"\n",
"if importlib.util.find_spec('fire') is None: os.system('pip install fire')\n",
"import fire\n",
"\n",
"#@markdown Root dir when using Colab GDrive mount (recommended to leave the default):\n",
"try:\n",
" from google.colab import drive\n",
" gdrive_mount = '/content/drive' #@param ['/content/drive'] {allow-input: true}\n",
" gd_output_dir = '/My Drive/safire' #@param ['/My Drive/safire'] {allow-input: true}\n",
" drive.mount(gdrive_mount)\n",
" safire_dir = gdrive_mount + gd_output_dir\n",
"except:\n",
" #@markdown Root directory when safire running locally (dropdown choices):\n",
" local_safire_dir = '/opt/safire/safire' #@param ['/opt/safire/safire', '~/safire', '~/../opt/safire/safire', \"safire\"] {allow-input: true}\n",
" safire_dir = local_safire_dir\n",
"#@markdown External config file located in safire_dir. Use any name. Default: config.py:\n",
"use_external_config = True #@param {type:\"boolean\"}\n",
"ext_conf_file = \"/config.py\" #@param [\"/config.py\"] {allow-input: true}\n",
"ext_conf_file = safire_dir + ext_conf_file"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "WmnTaw3f9DF3",
"colab_type": "text"
},
"source": [
"###### Config\n",
"- Tells safire where to read and write files, how many 'things' to create and what to call them"
]
},
{
"cell_type": "code",
"metadata": {
"id": "KEdQk5ai8JTW",
"colab_type": "code",
"cellView": "form",
"colab": {}
},
"source": [
"import os\n",
"import random\n",
"import string\n",
"from pathlib import Path\n",
"from time import sleep\n",
"\n",
"#@markdown Project prefix must be unique across all of Google. If blank a random project prefix will be generated\n",
"#@markdown : NOTE - DO NOT keep sample name below = `project`\n",
"project_prefix = \"project\" #@param [ \"\"] {allow-input: true}\n",
"if not project_prefix:\n",
" letters = string.ascii_lowercase\n",
" project_prefix = ''.join(random.choice(letters) for i in range(10))\n",
" print(f\"Project prefix = {project_prefix}\")\n",
"\n",
"#@markdown Prefix for service account emails is =required= (can be anything you like)\n",
"email_prefix = \"sa\" #@param [ \"sa\", \"svcacc\", \"bananas\"] {allow-input: true}\n",
"\n",
"#@markdown Prefix for JSON keys is =optional=\n",
"json_prefix = \"\" #@param {type:\"string\"}\n",
"#@markdown Enter a group email address if using group functions (e.g. mygroup@domain.com)\n",
"group_address = \"\" #@param [ \"\", \"mygroup@domain.com\"] {allow-input: true}\n",
"\n",
"#@markdown Slide bars and dropdowns can be adjusted with arrow keys\n",
"num_new_projects = 1 #@param {type:\"slider\", min:1, max:200, step:1}\n",
"sas_per_project = 100 #@param {type:\"slider\", min:1, max:100, step:1}\n",
"# max_projects = 100 #@param {type:\"integer\"} \n",
"next_project_num = 1 #@param {type:\"integer\"}\n",
"next_sa_num = 1 #@param {type:\"integer\"}\n",
"#@markdown Note: JSON numbering can be independent of SA numbering\n",
"next_json_num = 1 #@param {type:\"integer\"}\n",
"\n",
"#@markdown ---\n",
"\n",
"# #@markdown Enter 'safire_dir'where creds, SAs and data files will be saved. Default is \n",
"# #@markdown `/content/drive/My Drive/safire`\n",
"# safire_dir = \"/content/drive/My Drive/safire\"#@param [\"/content/drive/My Drive/safire\"] {allow-input: true}\n",
"\n",
"#@markdown These are subfolders of 'safire_dir' above. \n",
"#@markdown Creds and tokens are stored in the same folder\n",
"cred_folder = \"/creds\" #@param {type:\"string\"}\n",
"cred_path = safire_dir + cred_folder\n",
"cred_file = \"/creds.json\" #@param {type:\"string\"}\n",
"credentials = cred_path + cred_file\n",
"token_file = \"/token.pickle\" #@param {type:\"string\"}\n",
"token = cred_path + token_file\n",
"sa_folder = \"/svcaccts\" #@param {type:\"string\"}\n",
"sa_path = safire_dir + sa_folder\n",
"data_folder = \"/data\" #@param {type:\"string\"}\n",
"data_path = safire_dir + data_folder\n",
"\n",
"#@markdown Create and store a separate token for groups. You can use the same creds\n",
"group_cred_file = \"/creds.json\" #@param {type:\"string\"}\n",
"group_credentials = cred_path + group_cred_file\n",
"group_token_file = \"/grptoken.pickle\" #@param {type:\"string\"}\n",
"group_token = cred_path + group_token_file\n",
"\n",
"#@markdown ___\n",
"\n",
"#@markdown Set zero padding for names. e.g. jpad=6 yields 000001.json. Set to 1 if no padding is needed.\n",
"ppad = 4 #@param {type:\"integer\"}\n",
"spad = 6 #@param {type:\"integer\"}\n",
"jpad = 6 #@param {type:\"integer\"}\n",
"#@markdown retry and sleep_time help safire deal with slow google apis. Adjust as needed\n",
"retry = 5 #@param {type:\"integer\"}\n",
"sleep_time = 5 #@param {type:\"integer\"}\n",
"\n",
"## #@markdown [API Service Names and Versions]\n",
"DRIVE = [\"drive\", \"v3\"]\n",
"CLOUD = [\"cloudresourcemanager\", \"v1\"]\n",
"IAM = [\"iam\", \"v1\"]\n",
"ADMIN = [\"admin\", \"directory_v1\"]\n",
"SVCUSAGE = [\"serviceusage\", \"v1\"]\n",
"SHEETS = [\"sheets\", \"v4\"]\n",
"svcs_to_enable = [\"iam\", \"drive\"]\n",
"\n",
"### Generate empty directories if they do not exist\n",
"if not os.path.isdir(safire_dir):\n",
" os.makedirs(safire_dir, exist_ok=True)\n",
" sleep(sleep_time)\n",
"if not os.path.isdir(sa_path):\n",
" os.makedirs(sa_path, exist_ok=True)\n",
"if not os.path.isdir(cred_path):\n",
" os.makedirs(cred_path, exist_ok=True)\n",
"if not os.path.isdir(data_path):\n",
" os.makedirs(data_path, exist_ok=True)\n",
"\n",
"if use_external_config:\n",
" try:\n",
" exec(open(ext_conf_file).read())\n",
" except:\n",
" print(f\"Config file = {ext_conf_file} not found, not executed\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "tjDqfHn89Ilg",
"colab_type": "text"
},
"source": [
"###### Utils"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "67Ezfluw83Gb",
"colab_type": "text"
},
"source": [
"This section contains the utility functions, followed by the main commands/subcommands"
]
},
{
"cell_type": "code",
"metadata": {
"id": "2HM3pfrX86w0",
"colab_type": "code",
"colab": {}
},
"source": [
"import os\n",
"import pickle\n",
"import uuid\n",
"import pandas as pd\n",
"\n",
"from glob import glob\n",
"from pathlib import Path\n",
"from time import sleep\n",
"\n",
"from google.auth.transport.requests import Request\n",
"from google_auth_oauthlib.flow import InstalledAppFlow\n",
"from googleapiclient.discovery import build\n",
"\n",
"\n",
"class Help:\n",
" \"\"\"These small functions support repeated activities in other classes/functions\"\"\"\n",
"\n",
" def __init__(self):\n",
" super(Help, self).__init__()\n",
"\n",
" def _svc(self, service, version, tkn):\n",
" try:\n",
" with open(tkn, \"rb\") as t:\n",
" creds = pickle.load(t)\n",
" return build(service, version, credentials=creds)\n",
" except:\n",
" print(\"No valid token found\")\n",
" auth = Auth()\n",
" auth.check()\n",
"\n",
" def _export(self, dset, filter, file_tag, fields, ltype, prt=True):\n",
" df = pd.DataFrame(dset)\n",
" fname = f\"{ltype}_list_{filter}_{file_tag}\"\n",
" pd.set_option('display.max_rows', None)\n",
" df.to_csv(\"{}/{}.csv\".format(data_path, fname))\n",
" df.to_excel(\"{}/{}.xlsx\".format(data_path, fname))\n",
" if prt:\n",
" print(df[fields])\n",
" print(f\" {len(df)} {ltype} found\")\n",
" print(f\"\\nData for {ltype} exported to {fname}.csv and .xlsx in folder /{data_path}\\n\")\n",
"\n",
" def _print1(self, plist, ltype):\n",
" print('', *plist, sep='\\n')\n",
" print(f\" {len(plist)} {ltype} found\")\n",
"\n",
" def _pre_pad(self, prefix, pad, number):\n",
" return prefix + (('0' * pad) + str(number))[-pad:]\n",
"\n",
"\n",
"class BatchJob():\n",
"\n",
" def __init__(self, service):\n",
" self.batch = service.new_batch_http_request(callback=self.callback_handler)\n",
" self.batch_resp = []\n",
"\n",
" def add(self, to_add, request_id=None):\n",
" self.batch.add(to_add, request_id=request_id)\n",
"\n",
" def callback_handler(self, rid, resp, exception):\n",
" response = {'request_id': rid, 'exception': None, 'response': None}\n",
" if exception is not None:\n",
" response['exception'] = exception\n",
" else:\n",
" response['response'] = resp\n",
" self.batch_resp.append(response)\n",
"\n",
" def execute(self):\n",
" try:\n",
" self.batch.execute()\n",
" except:\n",
" pass\n",
" return self.batch_resp\n",
"\n",
"\n",
"class Auth:\n",
" \"\"\"Authorize the app to access your projects, SAs, drives and groups. To generate creds.json go to\n",
" https://developers.google.com/apps-script/api/quickstart/python , click Enable then download a json,\n",
" rename it to creds.json and put a copy in the /creds folder\"\"\"\n",
"\n",
" def __init__(self):\n",
" super(Auth, self).__init__()\n",
" self.scopes_proj = [\n",
" \"https://www.googleapis.com/auth/drive\",\n",
" \"https://www.googleapis.com/auth/cloud-platform\",\n",
" \"https://www.googleapis.com/auth/iam\",\n",
" \"https://www.googleapis.com/auth/spreadsheets\",\n",
" ]\n",
" self.scopes_group = [\n",
" \"https://www.googleapis.com/auth/admin.directory.group\",\n",
" \"https://www.googleapis.com/auth/admin.directory.group.member\",\n",
" ]\n",
" self.scopes_all = self.scopes_proj + self.scopes_group\n",
"\n",
" def ask(self):\n",
" pass\n",
"\n",
" def check(self):\n",
" filelist = [credentials, token, group_credentials, group_token]\n",
" file_exists = [os.path.isfile(i) for i in filelist]\n",
" [print(f\"File = {i[0]} exists = {i[1]}\") for i in zip(filelist, file_exists)]\n",
" if not file_exists[0]:\n",
" print(\n",
" f\"Credentials file is missing. Download from Google console and run 'auth'\"\n",
" )\n",
" if not file_exists[2]:\n",
" print(\n",
" f\"Group credentials file is missing. Download from Google console and run 'auth'. \"\n",
" f\"Note that the group credentials file is normally the same as the main projects credentials \"\n",
" f\"file - but you can optionally use separate credentials files. Specify in config.py\"\n",
" )\n",
" yes_no = input(\"Generate token for projects, SAs, drives? y/n: \")\n",
" if yes_no.lower() in [\"y\", \"yes\"]:\n",
" self.projects(credentials, token)\n",
" gyes_no = input(\"Generate token for groups? y/n: \")\n",
" if gyes_no.lower() in [\"y\", \"yes\"]:\n",
" self.groups(group_credentials, group_token)\n",
" exit()\n",
"\n",
" def projects(self, credentials=credentials, token=token):\n",
" \"\"\"Create an auth token for accessing and changing projects,\n",
" service accounts, json keys and drives\"\"\"\n",
" self.get_token(credentials, self.scopes_proj, token)\n",
"\n",
" def groups(self, credentials=credentials, group_token=group_token):\n",
" \"\"\"Create an auth token for adding/removing group members\"\"\"\n",
" self.get_token(credentials, self.scopes_group, group_token)\n",
"\n",
" def all(self, credentials=credentials, token=token):\n",
" \"\"\"Create an auth token with APIs enabled for projects and groups\"\"\"\n",
" self.projects(credentials, token)\n",
" self.groups(group_credentials, group_token)\n",
"\n",
" def get_token(self, credentials, scopes, token):\n",
" if not credentials or not glob(credentials):\n",
" print(\n",
" f\"Credentials file is missing. Download from Google console and save as {credentials}\"\n",
" )\n",
" exit()\n",
" cred_file = glob(credentials)\n",
" creds = None\n",
" if os.path.exists(token):\n",
" with open(token, \"rb\") as tkn:\n",
" creds = pickle.load(tkn)\n",
" if not creds or not creds.valid:\n",
" if creds and creds.expired and creds.refresh_token:\n",
" creds.refresh(Request())\n",
" else:\n",
" flow = InstalledAppFlow.from_client_secrets_file(cred_file[0], scopes)\n",
" yes_no = input(\n",
" \"Run local server with browser? If no, generate console link. y/n: \"\n",
" )\n",
" if yes_no.lower() in [\"y\", \"yes\"]:\n",
" creds = flow.run_local_server()\n",
" else:\n",
" creds = flow.run_console()\n",
" with open(token, \"wb\") as tkn:\n",
" print(\"Writing/updating token\")\n",
" pickle.dump(creds, tkn)\n",
" else:\n",
" print(\"Credentials and token exist and appear to be valid\")\n",
" print(f\"credentials = {cred_file[0]} and token = {token}\")\n",
" yes_no = input(\n",
" f\"Do you want to delete and regenerate your token file = {token}? y/n: \"\n",
" )\n",
" if yes_no.lower() in [\"y\", \"yes\"]:\n",
" os.remove(token)\n",
" self.get_token(credentials, scopes, token)\n",
" else:\n",
" print(\"Finished without creating new token\")\n",
"\n",
"\n",
"class Link():\n",
" \"\"\"Create a symlink between safire's directories and another location\"\"\"\n",
"\n",
" def dirs(self):\n",
" cwd = os.path.dirname(os.path.realpath(__file__))\n",
" dest = f\"{str(Path.home())}/safire\"\n",
" dest1 = input(f\"Choose dir to link. To keep default = {dest} press Enter: \")\n",
" if dest1:\n",
" dest = dest1\n",
" if os.path.exists(dest):\n",
" print(f\"Directory {dest} already exists. Exiting.\")\n",
" exit()\n",
" print(f\"Linking {cwd} to {dest}\")\n",
" if os.path.isdir(dest):\n",
" os.unlink(dest)\n",
" os.symlink(cwd, dest)\n",
"\n",
"\n",
"class Commands:\n",
" \"\"\"safire creates projects, service accounts (SAs), json keys for SAs and adds SAs as group members.\\n Type -h\n",
" or --help after any command for info. Usage: './safire list projects' or 'safire add members mygroup@domain.com'.\n",
" Most commands accept a 'filter' to process subsets. If no filter is given, all items are listed.\"\"\"\n",
"\n",
" def __init__(self):\n",
" # Init()\n",
" self.list = List()\n",
" self.add = Add()\n",
" self.remove = Remove()\n",
" self.auth = Auth()\n",
" self.rename = Rename()\n",
" self.link = Link()\n",
" # alias commands for ease of use. e.g. 'safire add projects' = 'safire create projects'\n",
" self.create = self.add\n",
" self.enable = self.add\n",
" self.delete = self.remove\n",
"\n"
],
"execution_count": null,
"outputs": []
},
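{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sketch of how the zero-padded names are built (a standalone re-statement of the `Help._pre_pad` logic from the Utils cell; the prefix and pad values are illustrative):\n",
"```python\n",
"# prefix + (('0' * pad) + str(number))[-pad:]\n",
"_pre_pad = lambda prefix, pad, n: prefix + (('0' * pad) + str(n))[-pad:]\n",
"_pre_pad('sa', 6, 1)       # -> 'sa000001'\n",
"_pre_pad('myproj', 4, 12)  # -> 'myproj0012'\n",
"```"
]
},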
{
"cell_type": "markdown",
"metadata": {
"id": "ouhOikY1qEZG",
"colab_type": "text"
},
"source": [
"###### List"
]
},
{
"cell_type": "code",
"metadata": {
"id": "5Hs3kkatqDOg",
"colab_type": "code",
"colab": {}
},
"source": [
"\n",
"class List(Help):\n",
" \"\"\"List drives, projects, service accounts (SAs), SA json keys, groups and group members. \n",
" In most cases a filter can be applied.\"\"\"\n",
"\n",
" def __init__(self):\n",
" super(List, self).__init__()\n",
"\n",
" def drives(self, filter=\"\", file_tag=\"\", token=token, prt=True):\n",
" \"\"\"List team/shared drives. Match 'filter'\"\"\"\n",
" drivelist = []\n",
" resp = {\"nextPageToken\": None}\n",
" drive = self._svc(*DRIVE, token)\n",
" while \"nextPageToken\" in resp:\n",
" resp = (\n",
" drive.drives()\n",
" .list(\n",
" fields=\"nextPageToken, drives(id,name)\",\n",
" pageSize=100,\n",
" pageToken=resp[\"nextPageToken\"],\n",
" )\n",
" .execute()\n",
" )\n",
" drivelist += resp[\"drives\"]\n",
" drivelist = [i for i in drivelist if str(filter) in i[\"name\"]]\n",
" if not prt:\n",
" return drivelist\n",
" self._export(drivelist, filter, file_tag, [\"id\", \"name\"], \"drives\", prt)\n",
"\n",
" def projects(self, filter=\"\", file_tag=\"\", token=token, prt=True, exact=False):\n",
" \"\"\"List projects. Match 'filter'\"\"\"\n",
" cloud = self._svc(*CLOUD, token)\n",
" plist = cloud.projects().list().execute()[\"projects\"]\n",
" plist = [i for i in plist if str(filter) in i[\"projectId\"]]\n",
" if exact:\n",
" plist = [i for i in plist if str(filter) == i[\"projectId\"]]\n",
" if not prt:\n",
" return [i[\"projectId\"] for i in plist]\n",
" self._export(\n",
" plist,\n",
" filter,\n",
" file_tag,\n",
" [\"projectNumber\", \"projectId\", \"name\"],\n",
" \"projects\",\n",
" prt,\n",
" )\n",
"\n",
" def sas(self, filter=\"\", exact=False, file_tag=\"\", token=token, prt=True):\n",
" \"\"\"List service accounts for projects. Projects match 'filter'.\n",
" Use '--exact=True' flag for exact project match.\"\"\"\n",
" svcacctlist, sa_summary = [], []\n",
" projId_list = self.projects(filter, file_tag, token, False, exact)\n",
" # if exact:\n",
" # projId_list = [i for i in projId_list if i == filter]\n",
" iam = self._svc(*IAM, token)\n",
" for project in sorted(projId_list):\n",
" try:\n",
" resp = (\n",
" iam.projects()\n",
" .serviceAccounts()\n",
" .list(name=\"projects/\" + project, pageSize=100)\n",
" .execute()\n",
" )[\"accounts\"]\n",
" svcacctlist += resp\n",
" sa_summary += [\n",
" f\" {len(resp)} service accounts (SAs) found in project: {project}\"\n",
" ]\n",
" except:\n",
" sa_summary += [\n",
" f\" No service accounts (SAs) found in project: {project}\"\n",
" ]\n",
" continue\n",
" svcacctlist.sort(key=lambda x: x.get(\"email\"))\n",
" if not prt:\n",
" return (\n",
" [i[\"email\"] for i in svcacctlist],\n",
" [i[\"uniqueId\"] for i in svcacctlist],\n",
" )\n",
" if svcacctlist:\n",
" self._export(svcacctlist, filter, file_tag, [\"email\"], \"svc_accts\")\n",
" print(\"\\nSummary:\", *sa_summary, sep=\"\\n\")\n",
"\n",
" def groups(self, filter=\"\", file_tag=\"\", group_token=group_token, prt=True):\n",
" \"\"\"List groups in the authorized account. Match 'filter'\"\"\"\n",
" svc = self._svc(*ADMIN, group_token)\n",
" grouplist = svc.groups().list(customer=\"my_customer\").execute()[\"groups\"]\n",
" grouplist = [i for i in grouplist if str(filter) in i[\"email\"]]\n",
" if not prt:\n",
" return [i[\"email\"] for i in grouplist]\n",
" self._export(\n",
" grouplist,\n",
" filter,\n",
" file_tag,\n",
" [\"id\", \"email\", \"directMembersCount\"],\n",
" \"groups\",\n",
" prt,\n",
" )\n",
"\n",
" def members(self, filter=\"\", file_tag=\"\", group_token=group_token, prt=True):\n",
" \"\"\"List members in groups. Groups match 'filter'\"\"\"\n",
" memberslist, member_summary = [], []\n",
" group_list = self.groups(filter, file_tag, group_token, False)\n",
" svc = self._svc(*ADMIN, group_token)\n",
" for group in group_list:\n",
" try:\n",
" response = []\n",
" resp = {\"nextPageToken\": None}\n",
" while \"nextPageToken\" in resp:\n",
" resp = (\n",
" svc.members()\n",
" .list(groupKey=group, pageToken=resp[\"nextPageToken\"])\n",
" .execute()\n",
" )\n",
" response += resp[\"members\"]\n",
" memberslist += response\n",
" except:\n",
" member_summary += [f\" No members found in group: {group}\"]\n",
" continue\n",
" member_summary += [f\" {len(response)} members found in group: {group}\"]\n",
" if not prt:\n",
" return sorted([i[\"email\"] for i in memberslist if i[\"role\"] != \"OWNER\"])\n",
" self._export(memberslist, filter, file_tag, [\"email\", \"role\"], \"members\")\n",
" print(\"\\nSummary:\", *member_summary, sep=\"\\n\")\n",
"\n",
" def jsons(self, sa_path=sa_path, filter=\"\", file_tag=\"\", prt=True):\n",
" \"\"\"alias: jsons = keys. List service account jsons/keys in the svcaccts folder. Match 'filter'\"\"\"\n",
" keys = glob(\"%s/*.json\" % sa_path)\n",
" json_keys = [loads(open(key, \"r\").read()) for key in keys]\n",
" if filter:\n",
" json_keys = [key for key in json_keys if str(filter) in key[\"project_id\"]]\n",
" if prt and json_keys:\n",
" self._export(\n",
" json_keys, filter, file_tag, [\"client_email\"], \"json_keys\", prt\n",
" )\n",
" print(f\"There are {len(json_keys)} json keys in {sa_path}\")\n",
" elif json_keys:\n",
" return [i[\"client_email\"] for i in json_keys]\n",
" # else:\n",
" # print(f\"No json keys in path: {sa_path} matching filter: {filter}\")\n",
"\n",
" keys = jsons\n",
"\n",
" def all(self):\n",
" \"\"\"List all drives, projects, service accounts, json keys, groups and group members.\n",
" Also exports these lists with full data fields to csv and xlsx files in data_path folder\"\"\"\n",
" self.drives()\n",
" self.projects()\n",
" self.sas()\n",
" self.groups()\n",
" self.members()\n",
" self.jsons()\n",
"\n",
" def count(self):\n",
" \"\"\"Count drives, projects, service accounts, json keys, groups and group members.\"\"\"\n",
" list_drives = self.drives(\"\", \"\", token, False)\n",
" print(\"Drive count :\", len(list_drives))\n",
" list_projects = self.projects(\"\", \"\", token, False)\n",
" print(\"Project count:\", len(list_projects))\n",
" list_groups = self.groups(\"\", \"\", group_token, False)\n",
" print(\"Group count :\", len(list_groups))\n",
" try:\n",
" list_jsons = self.jsons(sa_path, \"\", \"\", False)\n",
" print(\"JSON count :\", len(list_jsons))\n",
" except:\n",
" print(\"JSON count : 0\")\n",
" list_sas, _ = self.sas(\"\", False, \"\", token, False)\n",
" print(\"SA count :\", len(list_sas))\n",
" list_members = self.members(\"\", \"\", group_token, False)\n",
" print(\"Member count :\", len(list_members))\n",
"\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "2NcemUMdp92X",
"colab_type": "text"
},
"source": [
"###### Add"
]
},
{
"cell_type": "code",
"metadata": {
"id": "jL9KMGLDp8X5",
"colab_type": "code",
"colab": {}
},
"source": [
"\n",
"class Add(Help):\n",
" \"\"\"Add projects, drives, service accounts(SAs), SA keys/jsons and group members\"\"\"\n",
"\n",
" def __init__(self):\n",
" super(Add, self).__init__()\n",
" self._list = List()\n",
"\n",
" def projects(\n",
" self,\n",
" num_new_projects=num_new_projects,\n",
" next_project_num=next_project_num,\n",
" project_prefix=project_prefix,\n",
" retry=5,\n",
" token=token,\n",
" ppad=4,\n",
" prt=False,\n",
" ):\n",
" \"\"\"Create projects in authorized account. Usage: 'safire add projects 1'.\n",
" Uses defaults in config if none specified.\"\"\"\n",
" cloud = self._svc(*CLOUD, token)\n",
" batch = BatchJob(cloud)\n",
" start_projs = self._list.projects(\"\", \"\", token, prt)\n",
" start_proj_count = len(start_projs)\n",
" num_proj_created = 0\n",
" curr_projects = []\n",
" proj_count = 0\n",
" print(f\"Creating {num_new_projects} new projects\")\n",
" retries = retry\n",
" while num_new_projects and retries:\n",
" new_projs = [\n",
" self._pre_pad(project_prefix, ppad, next_project_num + i)\n",
" for i in range(num_new_projects)\n",
" ]\n",
" for proj in new_projs:\n",
" batch.add(\n",
" cloud.projects().create(body={\"project_id\": proj, \"name\": proj})\n",
" )\n",
" batch.execute()\n",
" retries -= 1\n",
" curr_projects = self._list.projects(\"\", \"\", token, prt)\n",
" proj_count = len(curr_projects)\n",
" num_proj_created = proj_count - start_proj_count\n",
" next_project_num = next_project_num + num_proj_created\n",
" num_new_projects = num_new_projects - num_proj_created\n",
" if num_proj_created < 1:\n",
" print(\n",
" \"0 projects created. Your project prefix + number may be used already\",\n",
" \"somewhere else in Google. Try changing your prefix.\",\n",
" )\n",
" exit()\n",
" new_projs = [i for i in curr_projects if i not in start_projs]\n",
" print(f\"\\nCreated {num_proj_created} projects:\", *new_projs, sep=\"\\n\")\n",
" print(f\"Total project count = {proj_count}\")\n",
" print(\n",
" \"Sleep briefly to allow backend to register projects. Then enabling services.\\n\"\n",
" )\n",
" sleep(sleep_time)\n",
" for proj in new_projs:\n",
" self.apis(proj)\n",
"\n",
" def apis(\n",
" self, filter=\"\", svcs_to_enable=svcs_to_enable, token=token, prt=False\n",
" ):\n",
" \"\"\"Enables apis for projects. 'drive' and 'iam' apis by default. Automatic when projects are\n",
" created but can be run manually also.\"\"\"\n",
" svcusage = self._svc(*SVCUSAGE, token)\n",
" projId_list = self._list.projects(filter, \"\", token, prt)\n",
" batch = BatchJob(svcusage)\n",
" svcs_to_enable = [i + \".googleapis.com\" for i in svcs_to_enable]\n",
" for project in projId_list:\n",
" for svc1 in svcs_to_enable:\n",
" print(f\"Enabling service: {svc1} in project: {project}\")\n",
" batch.add(\n",
" svcusage.services().enable(\n",
" name=f\"projects/{project}/services/{svc1}\"\n",
" )\n",
" )\n",
" batch.execute()\n",
"\n",
" def sas(\n",
" self,\n",
" filter=\"\",\n",
" sas_per_project=sas_per_project,\n",
" email_prefix=email_prefix,\n",
" next_sa_num=next_sa_num,\n",
" retry=retry,\n",
" exact=False,\n",
" file_tag=\"\",\n",
" prt=False,\n",
" ):\n",
" \"\"\"Create N service accounts/SAs in projects which match 'filter'. Usage: 'safire add sas xyz 5'\n",
" will add 5 SAs to all projects containing 'xyz' if fewer than 100 exist. Will not overwrite SAs.\"\"\"\n",
" iam = self._svc(*IAM, token)\n",
" projId_list = self._list.projects(filter, file_tag, token, prt)\n",
" all_sas = []\n",
" for project in projId_list:\n",
" # batch = BatchJob(iam)\n",
" sa_emails, _ = self._list.sas(project, True, file_tag, token, False)\n",
" start_sa_emails = sa_emails\n",
" start_sa_count = len(sa_emails)\n",
" all_sas = all_sas + sa_emails\n",
" count = min(sas_per_project, 100 - len(sa_emails))\n",
" print(\n",
" len(sa_emails), \"SAs exist. Creating\", count, \"SAs in project\", project\n",
" )\n",
" new_sas = []\n",
" retries = retry\n",
" while count and retries:\n",
" batch = BatchJob(iam)\n",
" for _ in range(count):\n",
" while [s for s in all_sas if str(next_sa_num) in s.split(\"@\")[0]]:\n",
" next_sa_num += 1\n",
" sa_id = self._pre_pad(email_prefix, spad, next_sa_num)\n",
" new_sas.append(sa_id)\n",
" next_sa_num += 1\n",
" name = f\"projects/{project}\"\n",
" body = {\n",
" \"accountId\": sa_id,\n",
" \"serviceAccount\": {\"displayName\": sa_id},\n",
" }\n",
" batch.add(\n",
" iam.projects().serviceAccounts().create(name=name, body=body)\n",
" )\n",
" batch.execute()\n",
" retries -= 1\n",
" sa_emails, _ = self._list.sas(project, True, file_tag, token, False)\n",
" curr_sa_count = len(sa_emails)\n",
" count = count - curr_sa_count + start_sa_count\n",
" sleep(sleep_time/5)\n",
" new_sa_emails = [i for i in sa_emails if i not in start_sa_emails]\n",
" num_sas_created = len(new_sa_emails)\n",
" print(\n",
" f\"\\nCreated {num_sas_created} sas in {project}:\",\n",
" *new_sa_emails,\n",
" sep=\"\\n\",\n",
" )\n",
" print(f\"Total SAs in {project} count = {len(sa_emails)}\")\n",
"\n",
" def jsons(\n",
" self,\n",
" filter=\"\",\n",
" sa_path=sa_path,\n",
" next_json_num=next_json_num,\n",
" json_prefix=json_prefix,\n",
" jpad=jpad,\n",
" retry=retry,\n",
" file_tag=\"\",\n",
" token=token,\n",
" prt=False,\n",
" ):\n",
" \"\"\"Create and download json/key files to svcaccts folder. Add to TDs and/or groups.\"\"\"\n",
" iam = self._svc(*IAM, token)\n",
" projId_list = self._list.projects(filter, file_tag, token, prt)\n",
" for project in sorted(projId_list):\n",
" batch = BatchJob(iam)\n",
" _, sa_uniqueId = self._list.sas(project, True, file_tag, token, prt)\n",
" num_sas = len(sa_uniqueId)\n",
" print(f\"Downloading {str(len(sa_uniqueId))} SA keys in project {project}\")\n",
" resp = []\n",
" retries = retry\n",
" while len(resp) < num_sas and retries:\n",
" for sa in sa_uniqueId:\n",
" batch.add(\n",
" iam.projects()\n",
" .serviceAccounts()\n",
" .keys()\n",
" .create(\n",
" name=f\"projects/{project}/serviceAccounts/{sa}\",\n",
" body={\n",
" \"privateKeyType\": \"TYPE_GOOGLE_CREDENTIALS_FILE\",\n",
" \"keyAlgorithm\": \"KEY_ALG_RSA_2048\",\n",
" },\n",
" )\n",
" )\n",
" resp = [i[\"response\"] for i in batch.execute()]\n",
" retries -= 1\n",
" for i in resp:\n",
" if i is not None:\n",
" k = (\n",
" i[\"name\"][i[\"name\"].rfind(\"/\") :],\n",
" b64decode(i[\"privateKeyData\"]).decode(\"utf-8\"),\n",
" )[1]\n",
" json_name = self._pre_pad(json_prefix, jpad, next_json_num)\n",
" with open(\"%s/%s.json\" % (sa_path, json_name), \"w+\") as f:\n",
" f.write(k)\n",
" next_json_num += 1\n",
" sleep(sleep_time)\n",
"\n",
" keys = jsons\n",
"\n",
" def drive(self, td_name):\n",
" \"\"\"Create a team/shared drive. Usage: 'safire add drive some_name'\"\"\"\n",
" drive = self._svc(*DRIVE, token)\n",
" body = {\"name\": td_name}\n",
" driveId = (\n",
" drive.drives()\n",
" .create(body=body, requestId=str(uuid.uuid4()), fields=\"id,name\")\n",
" .execute()\n",
" .get(\"id\")\n",
" )\n",
" print(f\" Drive ID for {td_name} is {driveId}\")\n",
" return td_name, driveId\n",
"\n",
" def drives(self, *td_list):\n",
" \"\"\"Create team/shared drives. Usage: 'safire add drives some_filename' containing \n",
" a list of drive names OR `safire add drives mytd1 mytd2 mytd3`\"\"\"\n",
" td_file = ' '.join(sys.argv[3:]).strip('\\\"')\n",
" if os.path.isfile(td_file):\n",
" with open(td_file, \"r\") as td_lst:\n",
" td_list = [i for i in td_lst]\n",
" for td_name in td_list:\n",
" print(f\"Creating {td_name}\")\n",
" self.drive(td_name.rstrip())\n",
"\n",
" def members(\n",
" self,\n",
" project_filter,\n",
" group_filter,\n",
" retry=5,\n",
" file_tag=\"\",\n",
" group_token=group_token,\n",
" prt=False,\n",
" ):\n",
" \"\"\"'add members' requires two arguments. Both 'project_filter' and 'group_filter' can be either the full\n",
" project/group name or a partial name which matches some projects/groups. \n",
" You can add SA emails from multiple projects to multiple groups if you wish.\"\"\"\n",
" admin = self._svc(*ADMIN, group_token)\n",
" projId_list = self._list.projects(project_filter, file_tag, token, prt)\n",
" group_list = self._list.groups(group_filter, file_tag, group_token, prt)\n",
" for group in group_list:\n",
" for project in projId_list:\n",
" batch = BatchJob(admin)\n",
" sa_emails, _ = self._list.sas(project, True, file_tag, token, False)\n",
" if len(sa_emails) == 0:\n",
" print(\"No service accounts found in\", project)\n",
" continue\n",
" group_members = self._list.members(group, file_tag, group_token, prt)\n",
" add_sas = [i for i in sa_emails if i not in group_members]\n",
" if len(add_sas) == 0:\n",
" continue\n",
" print(\n",
" f\"Adding {str(len(add_sas))} SAs to group: {group} from project: {project}\"\n",
" )\n",
" retries = retry\n",
" while add_sas and retries:\n",
" retries -= 1\n",
" for email in add_sas:\n",
" batch.add(\n",
" admin.members().insert(\n",
" groupKey=group, body={\"email\": email, \"role\": \"MEMBER\"}\n",
" )\n",
" )\n",
" batch.execute()\n",
" sleep(sleep_time)\n",
" group_members = self._list.members(\n",
" group_filter, file_tag, group_token, prt\n",
" )\n",
" add_sas = [i for i in sa_emails if i not in group_members]\n",
" print(\n",
" f\"{len(sa_emails)-len(add_sas)} SA emails from {project} are in {group}. {len(add_sas)} failed.\"\n",
" )\n",
"\n",
" def user(self, user, *td_id, role=\"organizer\"):\n",
" \"\"\"Add user (typically group name) to shared/team drive(s). Usage: 'safire add mygroup@domain.com td_filter'\"\"\"\n",
" for td_filt in td_id:\n",
" drives = self._list.drives(td_filt, \"\", token, False)\n",
" for td in drives:\n",
" if td_filt in td['name']:\n",
" print(f\"Adding {user} to {td['name']} {td['id']}\")\n",
" drive = self._svc(*DRIVE, token)\n",
" body = {\"type\": \"user\", \"role\": role, \"emailAddress\": user}\n",
" drive.permissions().create(body=body, fileId=td['id'], supportsAllDrives=True, fields=\"id\").execute()\n",
" else:\n",
" pass\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "0882m2pipfVU",
"colab_type": "text"
},
"source": [
"###### Remove"
]
},
{
"cell_type": "code",
"metadata": {
"id": "HzNDhRCHpea1",
"colab_type": "code",
"colab": {}
},
"source": [
"\n",
"class Remove(Help):\n",
" \"\"\"Delete sas, jsons/keys, drives and group members. Note: 'remove' and 'delete' are equivalent commands\"\"\"\n",
"\n",
" def __init__(self):\n",
" super(Remove, self).__init__()\n",
" self._list = List()\n",
"\n",
" def sas(\n",
" self,\n",
" project_filter,\n",
" token=token,\n",
" exact=False,\n",
" file_tag=\"\",\n",
" prt=False,\n",
" retry=retry,\n",
" ):\n",
" \"\"\"Usage: 'safire remove sas filter' where filter is a string to match the projects from which \n",
" you want to delete service accounts. To remove all SAs for all projects use \"\" as your filter\"\"\"\n",
" projId_list = self._list.projects(project_filter, file_tag, token, prt)\n",
" iam = self._svc(*IAM, token)\n",
" for project in projId_list:\n",
" sas, _ = self._list.sas(project, True, file_tag, token, False)\n",
" retries = retry\n",
" while sas and retries:\n",
" retries -= 1\n",
" if len(sas) == 0:\n",
" print(f\"0 service accounts in {project}. Moving to next project\")\n",
" continue\n",
" batch = BatchJob(iam)\n",
" print(f\"Removing {len(sas)} service accounts from {project}\")\n",
" for i in sas:\n",
" name = f\"projects/{project}/serviceAccounts/{i}\"\n",
" batch.add(iam.projects().serviceAccounts().delete(name=name))\n",
" batch.execute()\n",
" sleep(sleep_time)\n",
" sas, _ = self._list.sas(project, True, file_tag, token, False)\n",
"\n",
" def jsons(self, filter=\"\", sa_path=sa_path):\n",
" \"\"\"Remove json keys from svcaccts path\"\"\"\n",
" _, _, files = next(os.walk(sa_path))\n",
" if not files:\n",
" return f\"No json keys found in {sa_path}, Nothing to delete\"\n",
" delete_files = [i for i in sorted(files) if \".json\" and str(filter) in i]\n",
" print(f\"Files to be deleted:\\n\", *delete_files, sep=\"\\n\")\n",
" yes_no = input(\"Confirm you want to delete all of these files. y/n: \")\n",
" if yes_no.lower() in [\"y\", \"yes\"]:\n",
" for file in delete_files:\n",
" print(f\"Removing {file}\")\n",
" os.remove(f\"{sa_path}/{file}\")\n",
" else:\n",
" print(\"Deletion of json files aborted\")\n",
"\n",
" def members(\n",
" self,\n",
" group_filter,\n",
" retry=retry,\n",
" batch_size=50,\n",
" file_tag=\"\",\n",
" group_token=group_token,\n",
" prt=False,\n",
" ):\n",
" \"\"\"Remove members from groups. Match 'filter'\"\"\"\n",
" admin = self._svc(*ADMIN, group_token)\n",
" batch = BatchJob(admin)\n",
" group_list = self._list.groups(group_filter, file_tag, group_token, prt)\n",
" group_members = []\n",
" for group in group_list:\n",
" group_members = self._list.members(group, file_tag, group_token, prt)\n",
" retries = retry^2\n",
" while len(group_members) > 1 and retries:\n",
" print(\n",
" f\"Removing {str(len(group_members))} SAs from group: {group}. Batch = {batch_size}\"\n",
" )\n",
" retries -= 1\n",
" for member in group_members[:batch_size]:\n",
" batch.add(admin.members().delete(groupKey=group, memberKey=member))\n",
" batch.execute()\n",
" batch = BatchJob(admin)\n",
" sleep(sleep_time/4)\n",
" group_members = self._list.members(group, file_tag, group_token, prt)\n",
" print(\n",
" f\"{len(group_members)} members remaining in {group} (excluding OWNER)\"\n",
" )\n",
"\n",
" def drive(self, teamDriveId, token=token):\n",
" \"\"\"Delete a team/shared drive. Usage: 'safire add teamdrive unique ID'. USE CAREFULLY!\n",
" Does not work with non-empty drives.\"\"\"\n",
" drvsvc = self._svc(*DRIVE, token)\n",
" drives = self._list.drives(teamDriveId, \"\", token, False)\n",
" print(\"Drives to be removed:\", *drives, sep=\"\\n\")\n",
" yes_no = input(\"Confirm you want to delete all of these drives. y/n: \")\n",
" if not yes_no.lower() in [\"y\", \"yes\"]:\n",
" return \"Deletion of drives aborted\"\n",
" for drive in drives:\n",
" print(f\"Removing drive: {drive['name']} with id: {drive['id']}\")\n",
" drvsvc.drives().delete(driveId=str(drive[\"id\"])).execute()\n",
"\n",
" def drives(self, input_file, token=token):\n",
" \"\"\"Delete team/shared drives. Usage: 'safire add teamdrive some_filename' with unique IDs. USE CAREFULLY\"\"\"\n",
" td_list = open(input_file, \"r\")\n",
" for teamDriveId in td_list:\n",
" print(f\"Deleting {teamDriveId}\")\n",
" self.drive(teamDriveId.rstrip())\n",
"\n",
" def user(self, user, *td_id, role=\"organizer\", token=token):\n",
" \"\"\"Remove user (typically group name) from shared/team drive(s). Usage: 'safire remove mygroup@domain.com td_filter'\"\"\"\n",
" drives = self._list.drives(\"\", \"\", token, False)\n",
" for td_filt in td_id:\n",
" for td in drives:\n",
" if td_filt in td['name']:\n",
" drvsvc = self._svc(*DRIVE, token)\n",
" user_info = []\n",
" resp = {'nextPageToken':None}\n",
" while 'nextPageToken' in resp:\n",
" resp = drvsvc.permissions().list(fileId=td['id'],pageSize=100,fields='nextPageToken,permissions(id,emailAddress,role)',supportsAllDrives=True,pageToken=resp['nextPageToken']).execute()\n",
" user_info += resp['permissions']\n",
" for i in user_info:\n",
" if user in i['emailAddress']:\n",
" print(f\"Removing {user} with id {i['id']} from {td['name']} {td['id']}\")\n",
" drvsvc.permissions().delete(permissionId=i['id'], fileId=td['id'], supportsAllDrives=True, fields=\"id\").execute()\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "FEwTp1m6pA_-",
"colab_type": "text"
},
"source": [
"###### Rename"
]
},
{
"cell_type": "code",
"metadata": {
"id": "qeV1rF4ro9d7",
"colab_type": "code",
"colab": {}
},
"source": [
"\n",
"class Rename:\n",
" \"\"\"Rename json/key files to their email prefix, email numeric (omit prefix), uniqId or in a sequence.\n",
" Usage: 'safire rename jsons email' [choice email, email_seq, uniq, seq]\n",
" Renaming is repeatable. Can always delete and redownload keys if needed.\"\"\"\n",
" def jsons(self, rename_type, dir=f\"{sa_path}/\"):\n",
" \"\"\"Usage: 'safire rename jsons email' [choice email, email_seq, uniq, seq]\"\"\"\n",
" import json, os\n",
" filelist = os.listdir(dir)\n",
" print(\"\\nOriginal filenames:\", *sorted(filelist), sep=\"\\n\")\n",
" new_name = []\n",
" if rename_type == \"seq\":\n",
" [\n",
" os.rename(dir + file, dir + f\"{i+1}.json\")\n",
" for i, file in enumerate(sorted(filelist))\n",
" ]\n",
" else:\n",
" for file in sorted(filelist):\n",
" try:\n",
" with open(dir + file) as f:\n",
" data = json.load(f)\n",
" if rename_type == \"email\":\n",
" new_name = data[\"client_email\"].split(\"@\")[0] + \".json\"\n",
" if rename_type == \"email_seq\":\n",
" digits = list(\n",
" filter(\n",
" lambda x: x.isdigit(),\n",
" data[\"client_email\"].split(\"@\")[0],\n",
" )\n",
" )\n",
" new_name = \"\".join(digits) + \".json\"\n",
" if rename_type == \"uniq\":\n",
" new_name = data[\"client_id\"] + \".json\"\n",
" if os.path.exists(dir + new_name):\n",
" continue\n",
" os.rename(dir + file, dir + new_name)\n",
" except:\n",
" continue\n",
" print(\"\\nCurrent filenames:\", *sorted(os.listdir(dir)), sep=\"\\n\")\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "FmK4-LXJMcnU",
"colab_type": "text"
},
"source": [
"Add some command line tests\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bzeQ-rQXAe_L",
"colab_type": "text"
},
"source": [
"## Output"
]
},
{
"cell_type": "code",
"metadata": {
"id": "tTSI9mtOymet",
"colab_type": "code",
"cellView": "form",
"colab": {}
},
"source": [
"%%time\n",
"# %%timeit\n",
"\n",
"#@markdown\n",
"if \"safire.py\" == sys.argv[0] and __name__ == \"__main__\":\n",
" fire.Fire(Commands)\n",
"else:\n",
"    for command in [command1, command2, command3, command4, command5]:\n",
"        if command:\n",
"            sys.argv = [\"\"] + command.split(\" \")\n",
"            print(f\"safire command: {command} \\n\\nOutput: \\n\")\n",
"            fire.Fire(Commands)\n",
"# else:\n",
"# if __name__ == \"__main__\":\n",
"# fire.Fire(Commands)\n",
"\n",
"\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "eojC1KSyFlf3",
"colab_type": "text"
},
"source": [
"###### Vars and values"
]
},
{
"cell_type": "code",
"metadata": {
"id": "F2vRbufNwJg6",
"colab_type": "code",
"cellView": "form",
"colab": {}
},
"source": [
"#@markdown\n",
"%whos"
],
"execution_count": null,
"outputs": []
}
]
}