> Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic). Known for its simple REST APIs, distributed nature, speed, and scalability, Elasticsearch is the central component of the Elastic Stack, a set of open source tools for data ingestion, enrichment, storage, analysis, and visualization. Commonly referred to as the ELK Stack (after Elasticsearch, Logstash, and Kibana), the Elastic Stack now includes a rich collection of lightweight shipping agents known as Beats for sending data to Elasticsearch.
An Elasticsearch _index_ **is a collection of documents** that are related to each other. Elasticsearch stores data as JSON documents. Each document correlates a set of _keys_ (names of fields or properties) with their corresponding values (strings, numbers, Booleans, dates, arrays of _values_, geolocations, or other types of data).
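For example, a single document might look like this (a purely illustrative record, not from any real index):

```json
{
  "first_name" : "John",
  "last_name" : "Smith",
  "age" : 25,
  "interests" : [ "sports", "music" ]
}
```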
Elasticsearch uses a data structure called an _inverted index_, which is designed to allow very fast full-text searches. An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in.
During the indexing process, Elasticsearch stores documents and builds an inverted index to make the document data searchable in near real-time. Indexing is initiated with the index API, through which you can add or update a JSON document in a specific index.
The protocol used to access Elasticsearch is **HTTP** (by default on port **9200**). Accessing it via HTTP already reveals some interesting information: `http://10.10.10.115:9200/`
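For example, an unauthenticated GET request to the root path usually returns a banner with the node name, cluster name and version (the values below are illustrative):

```bash
curl -X GET "http://10.10.10.115:9200/"
{
  "name" : "iQEYHgS",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.4.2",
    ...
  },
  "tagline" : "You Know, for Search"
}
```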
**By default Elasticsearch doesn't have authentication enabled**, so you can access everything inside the database without using any credentials.
You can verify that authentication is disabled with a request to:
```bash
curl -X GET "ELASTICSEARCH-SERVER:9200/_xpack/security/user"
{"error":{"root_cause":[{"type":"exception","reason":"Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node."}],"type":"exception","reason":"Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node."},"status":500}
That response means that security is disabled and no credentials are needed. If the request instead returns a `security_exception` complaining about missing authentication credentials (HTTP 401), then authentication is configured and **you need valid credentials** to obtain any info from Elasticsearch. In that case, you can [**try to bruteforce it**](../brute-force.md#elasticsearch) (it uses HTTP basic auth, so anything that can brute-force HTTP basic auth can be used).\
Here you have a **list of default usernames**: _**elastic** (superuser), remote\_monitoring\_user, beats\_system, logstash\_system, kibana, kibana\_system, apm\_system, \_anonymous\__. Older versions of Elasticsearch have the default password **changeme** for the **elastic** user.
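For example, you can quickly test the well-known default credentials with HTTP basic auth (the host is the example one used throughout this page):

```bash
# Try the default credentials of the elastic superuser
curl -s -u elastic:changeme "http://10.10.10.115:9200/_xpack/security/user?pretty"
```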
Useful enumeration endpoints can be [**taken from the documentation**](https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html), where you can **find more**.
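For example, a few well-known read-only APIs (the host is the example one used throughout this page):

```bash
curl -s "http://10.10.10.115:9200/_cat/indices?v"          # list every index with doc counts and sizes
curl -s "http://10.10.10.115:9200/_cat/nodes?v"            # nodes in the cluster
curl -s "http://10.10.10.115:9200/_cluster/health?pretty"  # cluster health summary
curl -s "http://10.10.10.115:9200/_mapping?pretty"         # field mappings of every index
```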
To obtain **information about which kind of data is saved inside an index** you can access: `http://host:9200/<index>`, for example in this case `http://10.10.10.115:9200/bank`
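That request returns the index aliases, mappings and settings; trimmed down, the response looks roughly like this (illustrative excerpt):

```bash
curl -s "http://10.10.10.115:9200/bank?pretty"
{
  "bank" : {
    "aliases" : { },
    "mappings" : { ... },
    "settings" : { ... }
  }
}
```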
If you want to **dump all the contents** of an index you can access: `http://host:9200/<index>/_search?pretty=true`, for example `http://10.10.10.115:9200/bank/_search?pretty=true`
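The matching documents come back wrapped in a `hits` object; trimmed down, the response looks like this (illustrative):

```bash
curl -s "http://10.10.10.115:9200/bank/_search?pretty=true"
{
  "took" : 3,
  "timed_out" : false,
  "hits" : {
    "total" : 1000,
    "max_score" : 1.0,
    "hits" : [
      { "_index" : "bank", "_id" : "1", "_source" : { ... } },
      ...
    ]
  }
}
```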
_Take a moment to compare the contents of each document (entry) inside the bank index with the fields of this index that we saw in the previous section._
So, at this point you may notice that **there is a field called "total" inside "hits"** indicating that **1000 documents were found** inside this index, but only 10 were retrieved. This is because **by default there is a limit of 10 documents**.\
But, now that you know that **this index contains 1000 documents**, you can **dump all of them** by indicating the number of entries you want in the **`size`** parameter: `http://10.10.10.115:9200/bank/_search?pretty=true&size=1000`\
_Note: If you indicate a bigger number all the entries will be dumped anyway; for example you could indicate `size=9999`, and it would be unusual for the index to hold more entries than that (but you should check)._
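You can also page through large indices with the `from` parameter; note that by default Elasticsearch refuses requests where `from + size` exceeds 10,000 (the `index.max_result_window` setting), so for huge indices you would need the scroll API instead:

```bash
# Page through results 1000 at a time (from + size must stay <= index.max_result_window, 10000 by default)
curl -s "http://10.10.10.115:9200/bank/_search?pretty=true&size=1000&from=0"
curl -s "http://10.10.10.115:9200/bank/_search?pretty=true&size=1000&from=1000"
```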
In order to dump all the indices you can just go to the **same path as before but without indicating any index**: `http://host:9200/_search?pretty=true`, for example `http://10.10.10.115:9200/_search?pretty=true`\
Remember that in this case the **default limit of 10** results will be applied. You can use the `size` parameter to dump a **bigger amount of results**. Read the previous section for more information.
If you are looking for some information you can do a **raw search on all the indices** going to `http://host:9200/_search?pretty=true&q=<search_term>` like in `http://10.10.10.115:9200/_search?pretty=true&q=Rockwell`
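The `q` parameter accepts Lucene query-string syntax, so you can also restrict the search to a single field (the field name below is illustrative):

```bash
# Free-text search across all indices
curl -s "http://10.10.10.115:9200/_search?pretty=true&q=Rockwell"
# Target a specific field using query-string syntax
curl -s "http://10.10.10.115:9200/_search?pretty=true&q=address:Avenue"
```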
The index API can also be used to **write data** if the server permits it. For example, a request like the following (a sketch; the document values are illustrative) will create a **new index** called `bookindex` with a document of type `books` that has the attributes "_bookId_", "_author_", "_publisher_" and "_name_":
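```bash
# Create the index and insert a document in a single request (values are illustrative)
curl -X POST "http://10.10.10.115:9200/bookindex/books" -H 'Content-Type: application/json' -d '{
  "bookId" : "A00-3",
  "author" : "Sayed",
  "publisher" : "Mcgrow Hill",
  "name" : "Data Structures"
}'
```

Note that mapping types were removed in recent versions of Elasticsearch, where the same document would be indexed via `POST /bookindex/_doc` instead.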