Deprecating Elasticsearch, Logstash, and Kibana images

This commit is contained in:
Geoff Bourne 2019-05-11 10:11:24 -05:00
parent 8b253fe50e
commit 45d3ef288f
18 changed files with 5 additions and 838 deletions


@@ -8,3 +8,8 @@ This repository contains the various Dockerfile definitions I'm maintaining.
 ##### Cassandra
 I have found the [official image](https://hub.docker.com/_/cassandra/) to be quite sufficient
+##### ELK Stack (Elasticsearch, Logstash, Kibana)
+Each of the ELK components is now well supported by Elastic and the images here fell way
+behind the latest upstream releases.

build

@@ -2,8 +2,6 @@
 pkgs=ubuntu-openjdk-7
 pkgs="$pkgs minecraft-server"
-pkgs="$pkgs elasticsearch"
-pkgs="$pkgs kibana"
 pkgs="$pkgs titan-gremlin"
 for p in $pkgs


@@ -1,39 +0,0 @@
FROM openjdk:8u121-jre-alpine
LABEL maintainer "itzg"
RUN apk -U add bash
ARG ES_VERSION=5.5.2
# avoid conflicts with debian host systems when mounting to host volume
ARG DEFAULT_ES_USER_UID=1100
ADD https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VERSION.tar.gz /tmp
# need to adapt to both Docker's new remote-unpack-ADD behavior and the old behavior
RUN cd /usr/share && \
if [ -f /tmp/elasticsearch-$ES_VERSION.tar.gz ]; then \
tar xf /tmp/elasticsearch-$ES_VERSION.tar.gz; \
else mv /tmp/elasticsearch-${ES_VERSION} /usr/share; \
fi && \
rm -f /tmp/elasticsearch-$ES_VERSION.tar.gz
EXPOSE 9200 9300
HEALTHCHECK --timeout=5s CMD wget -q -O - http://$HOSTNAME:9200/_cat/health
ENV ES_HOME=/usr/share/elasticsearch-$ES_VERSION \
DEFAULT_ES_USER=elasticsearch \
DEFAULT_ES_USER_UID=$DEFAULT_ES_USER_UID \
ES_JAVA_OPTS="-Xms1g -Xmx1g"
RUN adduser -S -s /bin/sh -u $DEFAULT_ES_USER_UID $DEFAULT_ES_USER
VOLUME ["/data","/conf"]
WORKDIR $ES_HOME
COPY java.policy /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
COPY start /start
COPY log4j2.properties $ES_HOME/config/
CMD ["/start"]


@@ -1,258 +0,0 @@
This Docker image provides an easily configurable Elasticsearch node. Via port mappings, it is easy to create an arbitrarily sized cluster of nodes. As long as the versions match, you can mix and match "real" Elasticsearch nodes with containerized ones.
# NOTE for use on Linux hosts
Elasticsearch 5.x requires that the virtual memory mmap count is set sufficiently for stable,
production use. [Refer to this guide for more information](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html).
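On most Linux hosts this can be done with the following (262144 is the minimum that this image's start script checks for; see the linked guide for making the setting persistent):

    sysctl -w vm.max_map_count=262144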
# Basic Usage
To start an Elasticsearch data node that listens on the standard ports on your host's network interface:
docker run -d -p 9200:9200 -p 9300:9300 itzg/elasticsearch
You'll then be able to connect to the Elasticsearch HTTP interface to confirm it's alive:
http://DOCKERHOST:9200/
{
"status" : 200,
"name" : "Charon",
"version" : {
"number" : "1.3.5",
"build_hash" : "4a50e7df768fddd572f48830ae9c35e4ded86ac1",
"build_timestamp" : "2014-11-05T15:21:28Z",
"build_snapshot" : false,
"lucene_version" : "4.9"
},
"tagline" : "You Know, for Search"
}
Where `DOCKERHOST` would be the actual hostname of your host running Docker.
# Simple, multi-node cluster
To run a multi-node cluster (3-node in this example) on a single Docker machine use:
docker run -d --name es0 -p 9200:9200 itzg/elasticsearch
docker run -d --name es1 --link es0 -e UNICAST_HOSTS=es0 itzg/elasticsearch
docker run -d --name es2 --link es0 -e UNICAST_HOSTS=es0 itzg/elasticsearch
and then check the cluster health, such as http://192.168.99.100:9200/_cluster/health?pretty
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
If you have a Docker Swarm cluster already initialized you can download this
[docker-compose.yml](https://raw.githubusercontent.com/itzg/dockerfiles/master/elasticsearch/docker-compose.yml) and deploy a cluster using:
docker stack deploy -c docker-compose.yml es
With `docker service ls` you can confirm that 1 master, 2 data, and 1 gateway node are running:
```
ID NAME MODE REPLICAS IMAGE
9nwnno8hbqgk es_kibana replicated 1/1 kibana:latest
f5x7nipwmvkr es_gateway replicated 1/1 es
om8rly2yxylw es_data replicated 2/2 es
tdvfilj370yn es_master replicated 1/1 es
```
As you can see, there is also a Kibana instance included and available at port 5601.
# Health Checks
This container declares a [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#/healthcheck) that queries the `_cat/health`
endpoint for a quick, one-line gauge of health every 30 seconds.
The current health of the container is shown in the `STATUS` column of `docker ps`, such as
Up 14 minutes (healthy)
You can also check the history of health checks from `inspect`, such as:
```
> docker inspect -f "{{json .State.Health}}" es
{"Status":"healthy","FailingStreak":0,"Log":[...
```
# Configuration Summary
## Ports
* `9200` - HTTP REST
* `9300` - Native transport
## Volumes
* `/data` - location of `path.data`
* `/conf` - location of `path.conf`
# Configuration Details
The following configuration options are specified using `docker run` environment variables (`-e`) like
docker run ... -e NAME=VALUE ... itzg/elasticsearch
Since Docker's `-e` settings are baked into the container definition, this image provides an extra feature to change any of the settings below for an existing container. Either create/edit the file `env` in the `/conf` volume mapping or edit within the running container's context using:
docker exec -it CONTAINER_ID vi /conf/env
replacing `CONTAINER_ID` with the container's ID or name.
The contents of the `/conf/env` file are standard shell
NAME=VALUE
entries where `NAME` is one of the variables described below.
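For example, a `/conf/env` that joins a specific cluster through a known host (the values here are purely illustrative, reusing the examples from the sections below) would contain:

```
CLUSTER=dockers
UNICAST_HOSTS=192.168.0.100:9300
MIN_MASTERS=2
```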
Configuration options not explicitly supported below can be specified via the `OPTS` environment variable. For example, by default `OPTS` is set with
OPTS=-Dnetwork.bind_host=_non_loopback_
_NOTE: That option is set by default because `bind_host` defaults to `localhost` as of 2.0, which isn't helpful for
port mapping out from the container_.
## Cluster Name
If joining a pre-existing cluster, then you may need to specify a cluster name different than the default "elasticsearch":
-e CLUSTER=dockers
## Zen Unicast Hosts
When joining a multi-physical-host cluster, multicast may not be supported on the physical network. In that case, your node can reference one or more specific hosts in the cluster via the [Zen Unicast Hosts](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html#unicast) capability as a comma-separated list of `HOST:PORT` pairs:
-e UNICAST_HOSTS=HOST:PORT[,HOST:PORT]
such as
-e UNICAST_HOSTS=192.168.0.100:9300
## Plugins
You can install one or more plugins before startup by passing a comma-separated list of plugins.
-e PLUGINS=ID[,ID]
For example, this will install the Marvel plugin:
-e PLUGINS=elasticsearch/marvel/latest
Many more plugins [are available here](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html#known-plugins).
## Publish As
Since the container gives the Elasticsearch software an isolated perspective of its networking, it will most likely advertise its published address with a container-internal IP address. This can be overridden with a physical networking name and port using:
-e PUBLISH_AS=DOCKERHOST:9301
_Author Note: I have yet to hit a case where this was actually necessary. Other
than the cosmetic weirdness in the logs, Elasticsearch seems to be quite tolerant._
## Node Name
Rather than use the randomly assigned node name, you can indicate a specific one using:
-e NODE_NAME=Docker
## Node Type
If you refer to [the Node section](https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-node.html)
of the Elasticsearch reference guide, you'll find that there are three main types of nodes: master-eligible, data, and client.
In larger clusters it is important to dedicate a small number (>= 3) of master nodes. There are also cases where a large cluster may need dedicated gateway nodes that are neither master nor data nodes and purely operate as "smart routers" and have large amounts of CPU and memory to handle client requests and search-reduce.
To simplify all that, this image provides a `TYPE` variable to let you choose amongst these combinations. The choices are:
* (not set, the default) : the default node type which is both master-eligible and a data node
* `MASTER` : master-eligible, but holds no data. It is good to have three or more of these in a
large cluster
* `DATA` (or `NON_MASTER`) : holds data and serves search/index requests. Scale these out for elastic-y goodness.
* `NON_DATA` : performs all duties except holding data
* `GATEWAY` (or `COORDINATING`) : only operates as a client node or a "smart router". These are the ones whose HTTP port 9200 will need to be exposed
* `INGEST` : operates only as an ingest node and is not master or data eligible
A [Docker Compose](https://docs.docker.com/compose/overview/) file will serve as a good example of these three node types:
```
version: '3'
services:
gateway:
image: itzg/elasticsearch
environment:
UNICAST_HOSTS: master
TYPE: GATEWAY
ports:
- "9200:9200"
master:
image: itzg/elasticsearch
environment:
UNICAST_HOSTS: gateway
TYPE: MASTER
MIN_MASTERS: 2
data:
image: itzg/elasticsearch
environment:
UNICAST_HOSTS: master,gateway
TYPE: DATA
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://gateway:9200
```
## Minimum Master Nodes
In combination with the `TYPE` variable above, you will also want to configure the minimum master nodes to [avoid split-brain](https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-node.html#split-brain) during network outages.
The minimum, which can be calculated as `(master_eligible_nodes / 2) + 1`, can be set with the `MIN_MASTERS` variable.
Using the Docker Compose file above, a value of `2` is appropriate when scaling the cluster to 3 master nodes:
docker-compose scale master=3
## Multiple Network Binding, such as Swarm Mode
When using Docker Swarm mode the container is presented with multiple ethernet
devices. By default, all global, routable IP addresses are configured for
Elasticsearch to use as `network.host`.
That discovery can be overridden by providing a specific ethernet device name
to `DISCOVER_TRANSPORT_IP` and/or `DISCOVER_HTTP_IP`, such as
-e DISCOVER_TRANSPORT_IP=eth0
-e DISCOVER_HTTP_IP=eth2
## Heap size and other JVM options
By default this image will run Elasticsearch with a Java heap size of 1 GB. If that value
or any other JVM options need to be adjusted, then replace the `ES_JAVA_OPTS`
environment variable.
For example, this would allow for the use of 16 GB of heap:
-e ES_JAVA_OPTS="-Xms16g -Xmx16g"
Refer to [this page](https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html)
for more information about why both the minimum and maximum sizes were set to
the same value.


@@ -1,35 +0,0 @@
# This composition is known to work on a Swarm cluster consisting of
# 3 VM nodes with 1GB allocated to each.
version: '3'
services:
master:
image: itzg/elasticsearch
environment:
UNICAST_HOSTS: master
MIN_MASTERS: 1
ES_JAVA_OPTS: -Xms756m -Xmx756m
TYPE: NON_DATA
ports:
- "9200:9200"
- "9300:9300"
deploy:
replicas: 1
update_config:
parallelism: 1
data:
image: itzg/elasticsearch
deploy:
mode: global
update_config:
parallelism: 1
environment:
TYPE: DATA
UNICAST_HOSTS: master
ES_JAVA_OPTS: -Xms512m -Xmx512m
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://master:9200


@@ -1,35 +0,0 @@
version: '3'
services:
master:
build: .
environment:
TYPE: MASTER
UNICAST_HOSTS: master
MIN_MASTERS: 1
data:
build: .
environment:
TYPE: DATA
UNICAST_HOSTS: master
gateway:
build: .
ports:
- "9200:9200"
- "9300:9300"
environment:
TYPE: GATEWAY
UNICAST_HOSTS: master
ingest:
build: .
ports:
- "9222:9200"
environment:
TYPE: INGEST
UNICAST_HOSTS: master
kibana:
image: kibana:5.5.1
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://gateway:9200


@@ -1,21 +0,0 @@
version: '3'
services:
master:
image: itzg/elasticsearch
environment:
UNICAST_HOSTS: master
MIN_MASTERS: 1
ports:
- "9200:9200"
- "9300:9300"
deploy:
replicas: 1
update_config:
parallelism: 1
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://master:9200


@@ -1,44 +0,0 @@
version: '3'
services:
master:
image: itzg/elasticsearch
environment:
TYPE: MASTER
UNICAST_HOSTS: master
MIN_MASTERS: 1
deploy:
replicas: 1
update_config:
parallelism: 1
data:
image: itzg/elasticsearch
environment:
TYPE: DATA
UNICAST_HOSTS: master
deploy:
replicas: 2
update_config:
parallelism: 1
delay: 60s
gateway:
image: itzg/elasticsearch
ports:
- "9200:9200"
- "9300:9300"
environment:
TYPE: GATEWAY
UNICAST_HOSTS: master
ingest:
image: itzg/elasticsearch
ports:
- "9222:9200"
environment:
TYPE: INGEST
UNICAST_HOSTS: master
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://gateway:9200


@@ -1,6 +0,0 @@
grant {
// JMX Java Management eXtensions
permission javax.management.MBeanTrustPermission "register";
permission javax.management.MBeanServerPermission "createMBeanServer";
permission javax.management.MBeanPermission "-#-[-]", "queryNames";
};


@@ -1,74 +0,0 @@
status = error
# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.10000m%n
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
#rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
#logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = console
logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = console
logger.index_indexing_slowlog.additivity = false


@@ -1,165 +0,0 @@
#!/bin/sh
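# Entrypoint: verify kernel settings, discover network addresses, translate the
# documented environment variables into -E settings, then launch Elasticsearch,
# dropping to a non-root user when started as root.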
pre_checks() {
mmc=$(sysctl vm.max_map_count|sed 's/.*= //')
if [[ $mmc -lt 262144 ]]; then
echo "
ERROR: As of 5.0.0 Elasticsearch requires increasing mmap counts.
Refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
"
exit 1
fi
}
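# Look up the global-scope IP address on the given device and bind the given
# Elasticsearch setting (transport.host or http.host) to it.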
discoverIpFromLink() {
dev=$1
mode=$2
ip=`ipaddr show dev $dev scope global|awk '$1 == "inet" { if (!match($2,"/32")) { gsub("/.*","",$2) ; print $2 } }'`
echo "Discovered $mode address $ip for $dev"
OPTS="$OPTS -E $mode.host=$ip"
}
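# Collect every global-scope, non-/32 address (skipping IGNORE_NETWORK if set),
# retrying until at least one appears, and use the comma-separated list as network.host.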
discoverAllGlobalIps() {
if [ ${#IGNORE_NETWORK} -eq 0 ]
then
IGNORE_NETWORK='999.999.999.999'
fi
printf "Finding IPs"
while [ ${#ips} -eq 0 ]
do
printf "."
ips=`ipaddr show scope global| grep -v "inet ${IGNORE_NETWORK}" | awk '$1 == "inet" { if (!match($2,"/32")) { gsub("/.*","",$2) ; addrs[length(addrs)] = $2 } } END { for (i in addrs) { if (i>0) printf "," ; printf addrs[i] } }'`
sleep 1
done
echo " found! $ips"
OPTS="$OPTS -E network.host=$ips"
}
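# Translate CLUSTER, CLUSTER_FROM, NODE_NAME, MULTICAST, UNICAST_HOSTS,
# PUBLISH_AS, and MIN_MASTERS into their Elasticsearch settings.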
setup_clustering() {
if [ -n "$CLUSTER" ]; then
OPTS="$OPTS -E cluster.name=$CLUSTER"
if [ -n "$CLUSTER_FROM" ]; then
if [ -d /data/$CLUSTER_FROM -a ! -d /data/$CLUSTER ]; then
echo "Performing cluster data migration from $CLUSTER_FROM to $CLUSTER"
mv /data/$CLUSTER_FROM /data/$CLUSTER
fi
fi
fi
if [ -n "$NODE_NAME" ]; then
OPTS="$OPTS -E node.name=$NODE_NAME"
fi
if [ -n "$MULTICAST" ]; then
OPTS="$OPTS -E discovery.zen.ping.multicast.enabled=$MULTICAST"
fi
if [ -n "$UNICAST_HOSTS" ]; then
OPTS="$OPTS -E discovery.zen.ping.unicast.hosts=$UNICAST_HOSTS"
fi
if [ -n "$PUBLISH_AS" ]; then
OPTS="$OPTS -E transport.publish_host=$(echo $PUBLISH_AS | awk -F: '{print $1}')"
OPTS="$OPTS -E transport.publish_port=$(echo $PUBLISH_AS | awk -F: '{if ($2) print $2; else print 9300}')"
fi
if [ -n "$MIN_MASTERS" ]; then
OPTS="$OPTS -E discovery.zen.minimum_master_nodes=$MIN_MASTERS"
fi
}
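# Install each plugin in the comma-separated PLUGINS list, or just ensure the
# plugins directory exists.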
install_plugins() {
if [ -n "$PLUGINS" ]; then
for p in $(echo $PLUGINS | awk -v RS=, '{print}')
do
echo "Installing the plugin $p"
$ES_HOME/bin/elasticsearch-plugin install $p
done
else
mkdir -p $ES_HOME/plugins
fi
}
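# Map the TYPE variable onto the node.master/node.data/node.ingest combination
# described in the README.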
setup_personality() {
if [ -n "$TYPE" ]; then
case $TYPE in
MASTER)
OPTS="$OPTS -E node.master=true -E node.data=false -E node.ingest=false"
;;
GATEWAY|COORDINATING)
OPTS="$OPTS -E node.master=false -E node.data=false -E node.ingest=false"
;;
INGEST)
OPTS="$OPTS -E node.master=false -E node.data=false -E node.ingest=true"
;;
DATA)
OPTS="$OPTS -E node.master=false -E node.data=true -E node.ingest=false"
;;
NON_MASTER)
OPTS="$OPTS -E node.master=false -E node.data=true -E node.ingest=true"
;;
NON_DATA)
OPTS="$OPTS -E node.master=true -E node.data=false -E node.ingest=true"
;;
*)
echo "Unknown node type. Please use MASTER|GATEWAY|DATA|NON_MASTER"
exit 1
esac
fi
}
pre_checks
if [ -f /conf/env ]; then
. /conf/env
fi
if [ ! -e /conf/elasticsearch.* ]; then
cp $ES_HOME/config/elasticsearch.yml /conf
fi
if [ ! -e /conf/log4j2.properties ]; then
cp $ES_HOME/config/log4j2.properties /conf
fi
OPTS="$OPTS \
-E path.conf=/conf \
-E path.data=/data \
-E path.logs=/data \
-E transport.tcp.port=9300 \
-E http.port=9200"
discoverAllGlobalIps
if [ "${DISCOVER_TRANSPORT_IP}" != "" ]; then
discoverIpFromLink $DISCOVER_TRANSPORT_IP transport
fi
if [ "${DISCOVER_HTTP_IP}" != "" ]; then
discoverIpFromLink $DISCOVER_HTTP_IP http
fi
setup_personality
setup_clustering
install_plugins
mkdir -p /conf/scripts
echo "Starting Elasticsearch with the options $OPTS"
CMD="$ES_HOME/bin/elasticsearch $OPTS"
if [ `id -u` = 0 ]; then
echo "Running as non-root..."
chown -R $DEFAULT_ES_USER /data /conf
su -c "$CMD" $DEFAULT_ES_USER
else
$CMD
fi


@@ -1,22 +0,0 @@
FROM openjdk:8u111-jre
LABEL maintainer "itzg"
ENV KIBANA_VERSION 5.1.2
ADD https://artifacts.elastic.co/downloads/kibana/kibana-${KIBANA_VERSION}-linux-x86_64.tar.gz /tmp/kibana.tgz
RUN tar -C /opt -xzf /tmp/kibana.tgz && rm /tmp/kibana.tgz
ENV KIBANA_HOME /opt/kibana-$KIBANA_VERSION-linux-x86_64
# Simplify for cross-container
ENV ES_URL http://es:9200
WORKDIR $KIBANA_HOME
ADD start.sh /start
EXPOSE 5601
CMD ["/start"]


@@ -1,26 +0,0 @@
Provides a ready-to-run [Kibana](http://www.elasticsearch.org/overview/kibana/) server that can
easily hook into your [Elasticsearch containers](https://registry.hub.docker.com/u/itzg/elasticsearch/).
## Usage with Docker elasticsearch container
This is by far the easiest and most Docker'ish way to run Kibana.
Assuming you started one or more containers using something like
docker run -d --name your-es -p 9200:9200 itzg/elasticsearch
Start Kibana using
docker run -d -p 5601:5601 --link your-es:es itzg/kibana
Proceed to use Kibana starting from
[this point in the documentation](http://www.elasticsearch.org/guide/en/kibana/current/access.html)
## Usage with non-Docker elasticsearch
Start Kibana using
docker run -d -p 5601:5601 -e ES_URL=http://YOUR_ES:9200 itzg/kibana
Replacing `http://YOUR_ES:9200` with the appropriate URL for your system.


@@ -1,12 +0,0 @@
version: '2'
services:
es:
build: ../elasticsearch
ports:
- "9200:9200"
kibana:
build: .
ports:
- "5601:5601"


@@ -1,5 +0,0 @@
#!/bin/sh
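# Point Kibana at the configured Elasticsearch URL and serve on this container's hostname.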
OPTS="-e $ES_URL -H $HOSTNAME"
exec bin/kibana $OPTS


@@ -1,25 +0,0 @@
FROM itzg/ubuntu-openjdk-7
LABEL maintainer "itzg"
ENV LOGSTASH_VERSION 1.5.0-1
RUN wget -qO /tmp/logstash.deb http://download.elastic.co/logstash/logstash/packages/debian/logstash_${LOGSTASH_VERSION}_all.deb
RUN dpkg -i /tmp/logstash.deb && rm /tmp/logstash.deb
WORKDIR /opt/logstash
# For collectd reception
EXPOSE 25826
# /conf is the default directory where our logstash will read pipeline config files
# /logs is an optional attach point to reference something like /var/log on the host
VOLUME ["/conf","/logs"]
ENV PLUGIN_UPDATES 2015-06-10
RUN bin/plugin install logstash-input-heartbeat
RUN bin/plugin install logstash-output-elasticsearch_groom
CMD ["bin/logstash","agent","-f","/conf"]


@@ -1,44 +0,0 @@
This image bundles the latest (1.5.x) version of Logstash with the ability to
groom its own Elasticsearch indices.
# Basic Usage
To start a Logstash container, setup a directory on your host with one or more Logstash
pipeline configurations files, called `$HOST_CONF` here, and run
docker run -d -v $HOST_CONF:/conf itzg/logstash
# Accessing host logs
Logstash is much more useful when it is actually processing...logs. Logs inside the container
are non-existent, but you can attach the host machine's `/var/log` directory via the container's
`/logs` volume:
docker run ... -v /var/log:/logs ...
Keep in mind you will need to configure `file` inputs with a base path of `/logs`, such as
```
file {
path => ['/logs/syslog']
type => 'syslog'
}
```
# Receiving input from collectd
To allow for incoming [collectd](https://collectd.org/) content, **UDP** port 25826 is exposed and
can be mapped onto the host using:
docker run ... -p 25826:25826/udp
Regardless of the host port, be sure to configure the logstash input to bind at port `25826`, such
as
```
udp {
port => 25826
codec => collectd { }
buffer_size => 1452
}
```


@@ -1,25 +0,0 @@
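# Grooming pipeline: a heartbeat event fires every 11 seconds carrying the
# grooming parameters, which the elasticsearch_groom output applies to close
# open logstash-* indices older than four weeks.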
input {
heartbeat {
type => 'groom'
interval => 11
add_field => {
scope => 'open'
cutoff => '4w'
action => 'close'
}
}
}
output {
if [type] == 'groom' {
elasticsearch_groom {
host => 'es:9200'
index => 'logstash-%{+YYYY.MM.dd}'
scope => '%{scope}'
age_cutoff => '%{cutoff}'
action => '%{action}'
}
}
}