support for additional linters (link-checker & aspell) (#410)

* Added support for additional linters (html-proofer link checker)

- Added an additional test matrix item just for the linters
- Install and build mdbook only when needed, to speed up CI
- Reorganized the Travis scripts

* Added spellchecking script from rust-book to CI

Also fixed minor typos

* Updated and moved the serde_json links to their proper position

* Moved link checking to link-checker
Michał Budzyński 2018-06-06 17:33:25 +02:00 committed by Andrew Gauger
parent af25e615d2
commit 0c167cb32c
11 changed files with 554 additions and 24 deletions


@@ -12,8 +12,29 @@ env:
   global:
     - secure: m28oDDxTcaLlbCXv9la/yz0PzafOCDuhOhmHRoc1ELQC0wc3r6HT3a2myrP5ewQQhaxYDzd2XXYDJB3odFV1qLQOp0hFDgNn/w3ctWZpJdLxIJN6dsaPL/azhE2hz7T+SPEoWLwTW1va6bu4wwzSOykt9//RIK0ZoyVMCRSAlMB965iMV2Nkw7SWdQZ8SlskMVk8sB103N5+WTtt6rse54jHnXTpFEq9q0EAXC1R3GBDKEWB7iwb0c++Kw46Fz86ZJJDotiVuxMtsEk0VfT0Yxx665is5Ko6sV4cahbuXqMIqYYEfqpTHNHadHWD1m1i32hW9Rjtt9fFZ+a8m9zfTixPlkfOZvQ94RnD2zqv9qiwFr8oR7t2SsZaB4aqPlJd45DqgnwQ1B0cmrUAsjSB2+1DQDkR4FgKFB/o1c6F6g8imNh+2OwiZXVLwIimXNJQ5xfZeObXFMrEZ0+uj7oxFX49EcwE/SvwsVJHST3/zL5QuQwa9/uVhW/x135/Z2ypVao2xydpow/KL8VwhX9YsOSP5ApffL4OLJ5hE9qwS/SShHGg8AenFqqm/UNFqWDU+C097YaWvG5PEvCVXvOofic65AUTCmwB+h3MSQmZIqz2sb/kwdbtkoRRR6maMgelQmg1JdIfQcKeTJIStIihjk54VENHPVAslz0oV7Ia5Bo=
+matrix:
+  include:
+    - rust: stable
+      os: linux
+      env: CONTENT_TESTS=1
+    - rust: stable
+      os: linux
+      env: CONTENT_TESTS=1 CONTENT_TESTS_LINKS=1
+  allow_failures:
+    - rust: stable
+      os: linux
+      env: CONTENT_TESTS=1 CONTENT_TESTS_LINKS=1
+addons:
+  apt:
+    packages:
+      - aspell
+      - aspell-en
 before_install:
-  - cargo install mdbook --vers '0.1.7' --debug
+  - ./ci/install_deps.sh
   - export PATH=$HOME/.cargo/bin:$PATH
-after_success: ./deploy.sh
+script: ./ci/test_script.sh
+after_success: ./ci/deploy.sh


@@ -2,6 +2,8 @@
 set -o errexit -o nounset
 
+echo "Running $0"
+
 if [ -z "${TRAVIS_BRANCH:-}" ]; then
     echo "This script may only be run from Travis!"
     exit 1
@@ -14,16 +16,22 @@ if [[ "$TRAVIS_BRANCH" != "master" || "$TRAVIS_RUST_VERSION" != "stable" || "$TR
     exit 0
 fi
 
-# check for outdated dependencies on nightly builds
-if [ "${TRAVIS_EVENT_TYPE:-}" == "cron" ]; then
-    echo "This is cron build. Checking for outdated dependencies!"
-    rm ./Cargo.lock
-    cargo clean
-    # replace all [dependencies] versions with "*"
-    sed -i -e "/^\[dependencies\]/,/^\[.*\]/ s|^\(.*=[ \t]*\).*$|\1\"\*\"|" ./Cargo.toml
-    cargo test || { echo "Cron build failed! Dependencies outdated!"; exit 1; }
-    echo "Cron build success! Dependencies are up to date!"
+if [ -z "${CONTENT_TESTS:-}" ]; then
+    # check for outdated dependencies on nightly builds
+    if [ "${TRAVIS_EVENT_TYPE:-}" == "cron" ]; then
+        echo "This is cron build. Checking for outdated dependencies!"
+        rm ./Cargo.lock
+        cargo clean
+        # replace all [dependencies] versions with "*"
+        sed -i -e "/^\[dependencies\]/,/^\[.*\]/ s|^\(.*=[ \t]*\).*$|\1\"\*\"|" ./Cargo.toml
+        cargo test || { echo "Cron build failed! Dependencies outdated!"; exit 1; }
+        echo "Cron build success! Dependencies are up to date!"
+        exit 0
+    fi
+    echo "We deploy only after we also test the markup and descriptions!"
     exit 0
 fi
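For readers squinting at the sed expression above: inside the [dependencies] section it rewrites everything after each "=" to "*", so the subsequent cargo test runs against the newest published crate versions. A small demonstration on a throwaway file (illustrative only, assumes GNU sed; the sample Cargo.toml lines are hypothetical):

```bash
printf '[dependencies]\nserde = "1.0.27"\nserde_json = "1.0.9"\n[dev-dependencies]\n' > /tmp/Cargo.toml
sed -i -e "/^\[dependencies\]/,/^\[.*\]/ s|^\(.*=[ \t]*\).*$|\1\"\*\"|" /tmp/Cargo.toml
cat /tmp/Cargo.toml
# [dependencies]
# serde = "*"
# serde_json = "*"
# [dev-dependencies]
```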

ci/dictionary.txt Normal file

@@ -0,0 +1,328 @@
personal_ws-1.1 en 0 utf-8
abcde
ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz
Addr
addr
AddrParse
AddrParseError
AfterPath
Akshat
akshat
alisha
AlphaNumeric
ApiResponse
Appender
appender
args
ascii
ashley
AsRef
attr
Auth
auth
backend
BACKTRACE
Backtrace
backtrace
BeforePath
bitfield
bitflags
bitwise
bool
BufRead
BufReader
byteorder
ByteRecord
bytestring
cAFeEDEcafBAd
CannotBeABase
ccccff
ChainedError
CHARSET
chrono
ClientBuilder
CmdError
colombo
concat
Config
config
ConsoleLogger
const
ContentRange
ContentRangeSpec
ContentType
Cookin
cpus
CString
CSVError
CsvError
CsvInnerError
ctrl
Datelike
DateParse
DateTime
datetime
DEadBEEfc
DecodeError
dedup
deduplicated
delhi
Deref
Deserialize
deserialize
deserialized
Deserializer
deserializer
dest
DirEntry
dotfiles
DynamicImage
endian
endif
EnvLogger
EnvVar
eprintln
ErrorKind
extern
ferris
FileAppender
filename
filenames
filepath
filesystem
FilterType
FixedOffset
flate
fname
Formatter
foundational
fout
FromStr
FromStrError
FromUtf
gists
GitHub
github
GlobError
Guybrush
GzDecoder
GzEncoder
Hackerman
Hardcoded
hardcoded
HashMap
HashSet
hashtag
HASHTAG
hashtags
headders
HeadersEcho
hexa
HexColor
HEXUPPER
HMAC
hmac
Hong
href
html
http
HttpRequest
ifdef
ImageBuffer
ImageError
impl
incrementing
IndependentSample
init
initialised
IntoInnerError
IntoIter
IntoIterator
io's
IOError
IoError
IpAddr
ITER
iter
Iterable
iterable
Janeiro
journalctl
JSON
Json
json
julia
lang
LevelFilter
libhello
Libz
linux
LittleEndian
LogBuilder
LogConfig
logfile
login
LogLevel
LogLevelFilter
LogMetadata
LogRecord
LogTarget
loopback
MatchOptions
MediaWiki
mediawiki
memmap
Metadata
metadata
mkdir
Mmap
mpsc
MpscRecv
Mutex
MutexGuard
myfile
MyFlags
NaiveDate
NaiveDateTime
NaiveTime
nFun
NotFound
NulError
OAuth
oneline
ParallelIterator
parallelized
params
ParseError
ParseInt
ParseIntError
ParseOptions
PartialContent
PartialEq
PartialRangeIter
passwd
PathBuf
PatternEncoder
PatternError
PBKDF
pbkdf
PhoneNumber
PNGs
preallocating
prerelease
prev
printf
println
proc
programmatically
radix
RandomResponseError
RateLimit
ReadBytesExt
ReaderBuilder
recusively
recv
RecvError
REGEX
Regex
regex
RegexSet
RegexSetBuilder
Replacer
repo
reponse
ReqError
ReqParseError
RequestBuilder
Reqwest
reqwest
resize
resized
RESTful
ReverseDependencies
rustaceans
rustc
RwLock
SecureRandom
SemVer
Semver
semver
SemVerError
SemVerReq
Serde
serde
SetLogger
SetLoggerError
SigningKey
SocketAddrV
StatusCode
stderr
stdin
Stdout
stdout
strftime
StringRecord
StripPrefixError
strs
struct
structs
subcommands
subdirectories
subfolders
subprocess
subtype
sydney
symlinks
syslog
SyslogError
SystemRandom
SystemTime
SystemTimeError
TcpListener
TcpStream
TempDir
tempdir
ThreadPool
threadpool
Threepwood
Timelike
Timestamp
timestamp
timezones
TOML
Toml
toml
tuple
Tuple
typesafe
unary
unwinded
UpperHex
uptime
urlencoded
UrlParse
UrlParseError
urls
UserAgent
userid
usize
VarError
versa
Versioning
VersionReq
vreq
WalkDir
Walkdir
walkdir
webpage
webservice
whitespace
WriteBytesExt
XPoweredBy
XRateLimitLimit
XRateLimitRemaining
XRateLimitReset
YAML
YYYY
zurich

ci/install_deps.sh Executable file

@@ -0,0 +1,23 @@
#!/bin/bash
set -o errexit -o nounset

echo "Running $0"

if [ -z "${TRAVIS_BRANCH:-}" ]; then
    echo "This script may only be run from Travis!"
    exit 1
fi

if [[ "${CONTENT_TESTS:-}" == 1 ]]; then
    echo "Installing additional dependencies"
    if [[ "${CONTENT_TESTS_LINKS:-}" == 1 ]]; then
        pyenv global system 3.6
        pip3 install --user link-checker==0.1.0
    fi
    cargo install mdbook --vers '0.1.7' --debug
fi

exit 0

ci/spellcheck.sh Executable file

@@ -0,0 +1,102 @@
#!/bin/bash
# Copyright 2016 The Rust Project Developers. See the COPYRIGHT
# file at the top-level directory of this distribution and at
# http://rust-lang.org/COPYRIGHT.
#
# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
# option. This file may not be copied, modified, or distributed
# except according to those terms.

aspell --version

# Checks project markdown files for spelling errors.

# Notes:

# This script needs a dictionary file ($dict_filename) with project-specific
# valid words. If this file is missing, the first invocation of the script
# generates a file of words currently considered typos. The user should remove
# real typos from this file and leave only valid words. When the script
# produces a false positive after a source modification, the new valid word
# should be added to the dictionary file.

# The default mode of this script is interactive. Each source file is scanned
# for typos. aspell opens a window, suggesting fixes for each typo it finds.
# Original files with errors will be backed up to files named "filename.md.bak".

# When running in CI, this script should be run in "list" mode (pass "list"
# as the first argument). In this mode the script scans all files and reports
# the errors it finds. The exit code then depends on the scan result:
# 1 if any errors were found,
# 0 if all is clear.

# The script skips words with length less than or equal to 3. This helps to
# avoid some false positives.

# We can consider skipping source code in markdown files (```code```) to reduce
# the rate of false positives, but then we lose the ability to detect typos in
# code comments/strings etc.

shopt -s nullglob

dict_filename=./ci/dictionary.txt
markdown_sources=(./src/*.md)
mode="check"

# aspell repeatedly modifies the personal dictionary for some purpose,
# so we should use a copy of our dictionary
mkdir -p "/tmp/ci"
dict_path="/tmp/$dict_filename"

if [[ "$1" == "list" ]]; then
    mode="list"
fi

if [[ ! -f "$dict_filename" ]]; then
    # Pre-check mode: generates a dictionary of words aspell considers typos.
    # After the user validates that this file contains only valid words, we can
    # look for typos using this dictionary and some default aspell dictionary.
    echo "Scanning files to generate dictionary file '$dict_filename'."
    echo "Please check that it doesn't contain any misspellings."
    echo "personal_ws-1.1 en 0 utf-8" > "$dict_filename"
    cat "${markdown_sources[@]}" | aspell --ignore 3 list | sort -u >> "$dict_filename"
elif [[ "$mode" == "list" ]]; then
    # List mode: scan all files, report errors
    declare -i retval=0
    cp "$dict_filename" "$dict_path"
    if [ ! -f $dict_path ]; then
        retval=1
        exit "$retval"
    fi
    for fname in "${markdown_sources[@]}"; do
        command=$(aspell --ignore 3 --personal="$dict_path" "$mode" < "$fname")
        if [[ -n "$command" ]]; then
            for error in $command; do
                # FIXME: Find a more correct way to get the line number
                # (ideally from aspell). For now this can produce false
                # positives, because it is just a grep.
                grep --with-filename --line-number --color=always "$error" "$fname"
            done
            retval=1
        fi
    done
    exit "$retval"
elif [[ "$mode" == "check" ]]; then
    # Interactive mode: fix typos
    cp "$dict_filename" "$dict_path"
    if [ ! -f $dict_path ]; then
        retval=1
        exit "$retval"
    fi
    for fname in "${markdown_sources[@]}"; do
        aspell --ignore 3 --dont-backup --personal="$dict_path" "$mode" "$fname"
    done
fi
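As a quick orientation for the Notes block at the top of the script: CI calls the non-interactive list mode and keys off the exit code, while contributors can run the script with no argument for the interactive aspell session (illustrative invocations, assuming aspell and ci/dictionary.txt are present):

```bash
# CI / non-interactive: print suspected misspellings, exit 1 if any were found
./ci/spellcheck.sh list

# Interactive: aspell walks each source file and prompts for fixes
./ci/spellcheck.sh
```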

ci/test_script.sh Executable file

@@ -0,0 +1,51 @@
#!/bin/bash
set -o errexit -o nounset

echo "Running $0"

if [ -z "${TRAVIS_BRANCH:-}" ]; then
    echo "This script may only be run from Travis!"
    exit 1
fi

# Returns 1 if the program is installed and 0 otherwise
program_installed() {
    local return_=1
    type $1 >/dev/null 2>&1 || { local return_=0; }
    echo "$return_"
}

if [[ "${CONTENT_TESTS:-}" == 1 ]]; then
    # Ensure required programs are installed
    if [ $(program_installed mdbook) == 0 ]; then
        echo "Please install mdbook: cargo install mdbook."
        exit 1
    fi

    echo "Testing markup and descriptions"
    echo "Building site to book/"
    mdbook build

    echo "Checking spelling"
    ./ci/spellcheck.sh list

    if [[ "${CONTENT_TESTS_LINKS:-}" == 1 ]]; then
        if [ $(program_installed link-checker) == 0 ]; then
            echo "Please install link-checker: 'pip install link-checker'"
            exit 1
        fi
        echo "Checking local links:"
        # a failing local link test is a hard error, as there should be no false positives
        link-checker --no-external ./book/
        echo "Checking external links:"
        # we do not want to fail on false positives
        link-checker --no-local ./book/ --ignore '.*api.github.com*.' || true
    fi
else
    echo "Testing code snippets"
    cargo build --verbose
    cargo test --verbose
fi
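One stylistic note on the helper above: program_installed echoes "1" when `type` finds the command and "0" when it does not, which is why the callers compare against 0 to mean "missing". An equivalent written with exit status instead of echoed strings (a hypothetical alternative, not what this commit ships) could look like:

```bash
# Succeeds (exit 0) when the given command is on PATH, fails otherwise.
program_installed() {
    type "$1" >/dev/null 2>&1
}

if ! program_installed mdbook; then
    echo "Please install mdbook: cargo install mdbook."
    exit 1
fi
```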


@@ -151,7 +151,7 @@ fn run() -> Result<()> {
     let tar = GzDecoder::new(tar_gz);
     // Load the archive from the tarball
     let mut archive = Archive::new(tar);
-    // Unpack the archive inside curent working directory
+    // Unpack the archive inside current working directory
     archive.unpack(".")?;
     Ok(())


@@ -1585,7 +1585,7 @@ Parses a [`DateTime`] struct from strings representing the well-known formats
 Escape sequences that are available for the [`DateTime::parse_from_str`] can be
 found at [`chrono::format::strftime`]. Note that the [`DateTime::parse_from_str`]
-requires that such a DateTime struct must be creatable that it uniquely
+requires that such a DateTime struct can be created that it uniquely
 identifies a date and a time. For parsing dates and times without timezones use
 [`NaiveDate`], [`NaiveTime`], and [`NaiveDateTime`].
@@ -1720,7 +1720,7 @@ fn main() {
 Gets the current working directory by calling [`env::current_dir`],
 then for each entries in [`fs::read_dir`], extracts the
-[`DirEntry::path`] and gets the metada via [`fs::Metadata`]. The
+[`DirEntry::path`] and gets the metadata via [`fs::Metadata`]. The
 [`Metadata::modified`] returns the [`SystemTime::elapsed`] time since
 last modification of the entry. It's converted into seconds with
 [`Duration::as_secs`] and compared with 24 hours (24 * 60 * 60


@@ -179,7 +179,7 @@ fn main(){
 It is simple to build bundled C code with custom defines using [`cc::Build::define`].
 It takes an [`Option`] value, so it is possible to create defines such as `#define APP_NAME "foo"`
-as well as `#define WELCOME` (pass `None` as the value for a value-less defne). This example builds
+as well as `#define WELCOME` (pass `None` as the value for a value-less define). This example builds
 a bundled C file with dynamic defines set in `build.rs` and prints "**Welcome to foo - version 1.0.2**"
 when run. Cargo sets some [environment variables][cargo-env] which may be useful for some custom defines.


@@ -23,22 +23,16 @@
 [![serde-json-badge]][serde-json] [![cat-encoding-badge]][cat-encoding]
 
-The [`serde_json`] crate provides a [`from_str`] function to parse a `&str` of
+The [serde_json] crate provides a [`serde_json::from_str`] function to parse a `&str` of
 JSON into a type of the caller's choice.
 
-[`serde_json`]: https://docs.serde.rs/serde_json/
-[`from_str`]: https://docs.serde.rs/serde_json/fn.from_str.html
 Unstructured JSON can be parsed into a universal [`serde_json::Value`] type that
 is able to represent any valid JSON data.
 
-[`serde_json::Value`]: https://docs.serde.rs/serde_json/enum.Value.html
 The example below shows a `&str` of JSON being parsed and then compared to what
 we expect the parsed value to be. The expected value is declared using the
 [`json!`] macro.
 
-[`json!`]: https://docs.serde.rs/serde_json/macro.json.html
 ```rust
 # #[macro_use]
@@ -831,18 +825,21 @@ fn main() {
 <!-- API Reference -->
 [`csv::ByteRecord`]: https://docs.rs/csv/*/csv/struct.ByteRecord.html
+[`csv::invalid_option`]: https://docs.rs/csv/*/csv/fn.invalid_option.html
 [`csv::Reader::deserialize`]: https://docs.rs/csv/*/csv/struct.Reader.html#method.deserialize
 [`csv::Reader::deserialize`]: https://docs.rs/csv/\*/csv/struct.Reader.html#method.deserialize
 [`csv::StringRecord`]: https://docs.rs/csv/*/csv/struct.StringRecord.html
 [`csv::Writer`]: https://docs.rs/csv/*/csv/struct.Writer.html
-[`csv::invalid_option`]: https://docs.rs/csv/*/csv/fn.invalid_option.html
 [`flush`]: https://docs.rs/csv/*/csv/struct.Writer.html#method.flush
 [`form_urlencoded::byte_serialize`]: https://docs.rs/url/*/url/form_urlencoded/fn.byte_serialize.html
 [`form_urlencoded::parse`]: https://docs.rs/url/*/url/form_urlencoded/fn.parse.html
 [`FromStrError`]: https://docs.rs/mime/*/mime/struct.FromStrError.html
+[`json!`]: https://docs.rs/serde_json/*/serde_json/macro.json.html
 [`MIME`]: https://docs.rs/mime/*/mime/struct.Mime.html
 [`percent_decode`]: https://docs.rs/percent-encoding/*/percent_encoding/fn.percent_decode.html
 [`serde::Deserialize`]: https://docs.rs/serde/\*/serde/trait.Deserialize.html
+[`serde_json::from_str`]: https://docs.rs/serde_json/*/serde_json/fn.from_str.html
+[`serde_json::Value`]: https://docs.rs/serde_json/*/serde_json/enum.Value.html
 [`serialize`]: https://docs.rs/csv/*/csv/struct.Writer.html#method.serialize
 [`std::str::FromStr`]: https://doc.rust-lang.org/std/str/trait.FromStr.html
 [`utf8_percent_encode`]: https://docs.rs/percent-encoding/*/percent_encoding/fn.utf8_percent_encode.html


@@ -110,7 +110,7 @@ Keep lines sorted.
 [semver]: https://docs.rs/semver/
 [serde-badge]: https://badge-cache.kominick.com/crates/v/serde.svg?label=serde
 [serde-json-badge]: https://badge-cache.kominick.com/crates/v/serde_json.svg?label=serde_json
-[serde-json]: https://docs.serde.rs/serde_json/
+[serde-json]: https://docs.rs/serde_json/*/serde_json/
 [serde]: https://docs.rs/serde/
 [std-badge]: https://badge-cache.kominick.com/badge/std-1.25.0-blue.svg
 [std]: https://doc.rust-lang.org/std