Mirror of https://github.com/ArchiveBox/ArchiveBox, synced 2024-11-14 00:17:08 +00:00
Merge pull request #97 from pigmonkey/configurableoutput

support a configurable output directory

Commit 6dafd53007, 2 changed files with 2 additions and 1 deletion
@@ -170,6 +170,7 @@ env CHROME_BINARY=google-chrome-stable RESOLUTION=1440,900 FETCH_PDF=False ./arc
 - user agent: `WGET_USER_AGENT` values: [`Wget/1.19.1`]/`"Mozilla/5.0 ..."`/`...`
 - chrome profile: `CHROME_USER_DATA_DIR` values: [`~/Library/Application\ Support/Google/Chrome/Default`]/`/tmp/chrome-profile`/`...`
 To capture sites that require a user to be logged in, you must specify a path to a chrome profile (which loads the cookies needed for the user to be logged in). If you don't have an existing chrome profile, create one with `chromium-browser --disable-gpu --user-data-dir=/tmp/chrome-profile`, and log into the sites you need. Then set `CHROME_USER_DATA_DIR=/tmp/chrome-profile` to make Bookmark Archiver use that profile.
+- output directory: `OUTPUT_DIR` values: [`$REPO_DIR/output`]/`/srv/www/bookmarks`/`...` Optionally output the archives to an alternative directory.

 (See defaults & more at the top of `config.py`)
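In the README's notation, the bracketed value is the default and exporting the variable overrides it. A minimal sketch of that behavior for the new `OUTPUT_DIR` option, using the same `os.getenv` fallback introduced in the config.py hunk below (the repo path here is only an illustrative stand-in for `REPO_DIR`):

```python
import os

# Illustrative stand-in for the repo checkout path (REPO_DIR in config.py).
REPO_DIR = '/home/user/bookmark-archiver'

# The env var wins if set; otherwise fall back to the bracketed default $REPO_DIR/output.
OUTPUT_DIR = os.getenv('OUTPUT_DIR', os.path.join(REPO_DIR, 'output'))

print(OUTPUT_DIR)
# unset                          -> /home/user/bookmark-archiver/output
# OUTPUT_DIR=/srv/www/bookmarks  -> /srv/www/bookmarks
```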
@@ -35,7 +35,7 @@ FOOTER_INFO = os.getenv('FOOTER_INFO', 'Content is hosted
 ### Paths
 REPO_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'))

-OUTPUT_DIR = os.path.join(REPO_DIR, 'output')
+OUTPUT_DIR = os.getenv('OUTPUT_DIR', os.path.join(REPO_DIR, 'output'))
 ARCHIVE_DIR = os.path.join(OUTPUT_DIR, 'archive')
 SOURCES_DIR = os.path.join(OUTPUT_DIR, 'sources')
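Because `ARCHIVE_DIR` and `SOURCES_DIR` are computed from `OUTPUT_DIR` after the override is applied, setting that one variable relocates the whole output tree. A self-contained sketch of the effect (the `resolve_output_dirs` helper and the example paths are purely illustrative, not part of config.py):

```python
import os

def resolve_output_dirs(repo_dir, env=os.environ):
    """Resolve the output tree the same way the changed config.py lines do."""
    output_dir = env.get('OUTPUT_DIR', os.path.join(repo_dir, 'output'))
    return {
        'OUTPUT_DIR': output_dir,
        'ARCHIVE_DIR': os.path.join(output_dir, 'archive'),
        'SOURCES_DIR': os.path.join(output_dir, 'sources'),
    }

# Default: everything stays under <repo>/output/
print(resolve_output_dirs('/opt/bookmark-archiver'))

# Override: the whole tree moves, because the subdirectories are derived
# from OUTPUT_DIR after the env var has been read.
print(resolve_output_dirs('/opt/bookmark-archiver',
                          env={'OUTPUT_DIR': '/srv/www/bookmarks'}))
```

Deriving the subdirectories from `OUTPUT_DIR` keeps the override a single knob rather than three separate settings.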