Deploying Elasticflow on K8s

## Run Elasticsearch on ECK

Use kubectl

  1. Install [custom resource definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/):

    kubectl create -f https://download.elastic.co/downloads/eck/2.9.0/crds.yaml

The following Elastic resources have been created:

customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created
  2. Install the operator with its RBAC rules:

    kubectl apply -f https://download.elastic.co/downloads/eck/2.9.0/operator.yaml

The ECK operator runs by default in the elastic-system namespace. It is recommended that you choose a dedicated namespace for your workloads, rather than using the elastic-system or the default namespace (see the example after these steps).

  3. Monitor the operator logs:

    kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
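
As recommended above, you can create a dedicated namespace for your workloads before deploying them; the elasticflow namespace referenced at the end of this guide is used here as an illustration:

    kubectl create namespace elasticflow

If you deploy into a dedicated namespace, add -n <namespace> to the kubectl commands in the rest of this guide.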

## Deploy an Elasticsearch cluster

Apply a simple Elasticsearch cluster specification with one Elasticsearch node:

If your Kubernetes cluster does not have any Kubernetes nodes with at least 2GiB of free memory, the Pod will be stuck in a Pending state. Check Manage compute resources for more information about resource requirements and how to configure them.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.9.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

The operator automatically creates and manages Kubernetes resources to achieve the desired state of the Elasticsearch cluster. It may take up to a few minutes until all the resources are created and the cluster is ready for use.

Setting node.store.allow_mmap: false has performance implications and should be tuned for production workloads as described in the Virtual memory section.
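
For production workloads you will typically keep mmap enabled, which requires vm.max_map_count to be raised on the Kubernetes nodes (for example via a privileged init container, as described in the Virtual memory section), and size the Elasticsearch container explicitly. A minimal sketch of the relevant nodeSets fragment, with illustrative 4Gi / 1 CPU values:

    nodeSets:
    - name: default
      count: 1
      podTemplate:
        spec:
          # Illustrative only: raise vm.max_map_count so mmap can stay enabled.
          initContainers:
          - name: sysctl
            securityContext:
              privileged: true
              runAsUser: 0
            command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          containers:
          - name: elasticsearch
            resources:
              requests:
                memory: 4Gi
                cpu: 1
              limits:
                memory: 4Gi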

## Monitor cluster health and creation progress

Get an overview of the current Elasticsearch clusters in the Kubernetes cluster, including health, version and number of nodes:

kubectl get elasticsearch
NAME          HEALTH    NODES     VERSION   PHASE         AGE
quickstart    green     1         8.9.2     Ready         1m

When you create the cluster, there is no HEALTH status and the PHASE is empty. After a while, the PHASE turns into Ready, and HEALTH becomes green. The HEALTH status comes from Elasticsearch’s cluster health API.

One Pod is in the process of being started:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          79s

Access the logs for that Pod:

kubectl logs -f quickstart-es-default-0

## Request Elasticsearch access

A ClusterIP Service is automatically created for your cluster:

kubectl get service quickstart-es-http
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.15.251.145   <none>        9200/TCP   34m
  1. Get the credentials.
    A default user named elastic is automatically created with the password stored in a Kubernetes secret:
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
  2. Request the Elasticsearch endpoint.
    From inside the Kubernetes cluster:
curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

From your local workstation, use the following command in a separate terminal:

kubectl port-forward service/quickstart-es-http 9200

Then request localhost:

curl -u "elastic:$PASSWORD" -k "https://localhost:9200"

Disabling certificate verification using the -k flag is not recommended and should be used for testing purposes only. Check Setup your own certificate.

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "XqWg0xIiRmmEBg4NMhnYPg",
  "version" : {...},
  "tagline" : "You Know, for Search"
}
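
To avoid the -k flag, you can have the cluster serve a certificate you provide instead of the default self-signed one, as described in Setup your own certificate. A minimal sketch, assuming you have already created a TLS secret named quickstart-tls (an illustrative name) containing tls.crt and tls.key, and optionally ca.crt, is to reference it from the Elasticsearch spec:

    spec:
      http:
        tls:
          certificate:
            secretName: quickstart-tls  # illustrative secret name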

## Deploy a Kibana instance

To deploy your Kibana instance, go through the following steps.

  1. Specify a Kibana instance and associate it with your Elasticsearch cluster:

    cat <<EOF | kubectl apply -f -
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    metadata:
      name: quickstart
    spec:
      version: 8.9.2
      count: 1
      elasticsearchRef:
        name: quickstart
    EOF
  2. Monitor Kibana health and creation progress.
    Similar to Elasticsearch, you can retrieve details about Kibana instances:
kubectl get kibana

And the associated Pods:

kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
  3. Access Kibana.

A ClusterIP Service is automatically created for Kibana:

kubectl get service quickstart-kb-http

Use kubectl port-forward to access Kibana from your local workstation:

kubectl port-forward service/quickstart-kb-http 5601

Open https://localhost:5601 in your browser. Your browser will show a warning because the self-signed certificate configured by default is not verified by a known certificate authority and is not trusted by your browser. You can temporarily acknowledge the warning for the purposes of this quick start, but it is highly recommended that you configure valid certificates for any production deployments.
Log in as the elastic user. The password can be obtained with the following command:

kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
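
The browser warning comes from the same default self-signed certificate mechanism as for Elasticsearch. If you have your own certificate in a Kubernetes secret (quickstart-kb-tls is an illustrative name), the Kibana resource can reference it in the same way as the Elasticsearch sketch shown earlier:

    spec:
      http:
        tls:
          certificate:
            secretName: quickstart-kb-tls  # illustrative secret name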

## Use persistent storage

The cluster that you deployed in this quickstart guide only allocates a persistent volume of 1GiB for storage using the default storage class defined for the Kubernetes cluster. You will most likely want to have more control over this for production workloads. Refer to Volume claim templates for more information.
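
A minimal sketch of a nodeSet with an explicit volume claim template; the 100Gi size and the standard storage class are illustrative assumptions, and the claim name elasticsearch-data targets the data volume used by the Elasticsearch Pods:

    nodeSets:
    - name: default
      count: 1
      volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data     # targets the Elasticsearch data volume
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi           # illustrative size
          storageClassName: standard   # illustrative storage class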

To review or adjust the specification of an existing cluster, you can edit the Elasticsearch resource directly (shown here for a cluster in the elasticflow namespace):

kubectl edit -n elasticflow elasticsearches.elasticsearch.k8s.elastic.co quickstart