
Elastic officially released version 6.6.0 of the ELK stack on January 29, 2019, and it offers a lot of groundbreaking features and enhancements for Elasticsearch, Kibana, Logstash, APM, and Beats.

We’ve already tested Elasticsearch 6.6.0 with the brand-new Kibana and are excited to share our experience with valuable features such as Index Lifecycle Management and Remote Cluster Management. In this article, we’ll summarize these and other major new features for Elasticsearch, Kibana, and Elastic APM, and give you a glimpse of some cool things you can now do with your Elasticsearch indices in Kibana 6.6.0. Let’s get started!
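As a taste of Index Lifecycle Management, here is a minimal sketch of an ILM policy created through the REST API (the policy name and thresholds are illustrative, not recommendations): roll the hot index over once it passes a size or age limit, and delete indices 30 days after rollover.

```
PUT _ilm/policy/hot_delete_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

You can run this straight from Kibana’s Dev Tools console, then attach the policy to an index template so new indices pick it up automatically.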

Keep reading

Scaling Elasticsearch is not an easy task. In this article, we go over different methods for building a high-availability Logstash indexing solution using Qbox hosted Elasticsearch.

The Logstash indexer is the component that processes events and sends them to Elasticsearch for indexing and fast search. We will run multiple Logstash indexers with exactly the same configuration. Having multiple identically configured indexers opens up different possibilities for making a highly available Logstash tier in your ELK stack. These identical indexer nodes can easily be created using configuration management tools like Puppet or Chef.
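A minimal indexer pipeline along these lines might look as follows (the port and hostnames are placeholders for your own environment); the point is that the same file can be dropped, unchanged, onto every indexer node:

```
# /etc/logstash/conf.d/indexer.conf -- identical on every indexer node
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["es-node1:9200", "es-node2:9200"]
  }
}
```

Because every indexer accepts the same inputs and writes to the same cluster, shippers can fail over from one indexer to another without any per-node special cases.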

Keep reading

A comprehensive log management and analysis strategy is mission-critical, enabling organizations to understand the relationship between operational, security, and change management events and to maintain a complete picture of their infrastructure. Log files from web servers, applications, and operating systems also provide valuable data, albeit in different formats and in a scattered, distributed fashion.

Logs are a crucial part of any system because they give you insight into what a system is doing as well as what has happened. Virtually every process running on a system generates logs in some form or another. These logs are usually written to files on local disks. When your system grows to multiple hosts, managing and accessing the logs can get complicated.

Searching for a particular error across hundreds of log files on hundreds of servers is difficult without good tools. A common approach to this problem is to set up a centralized logging solution so that multiple logs can be aggregated in a central location. To effectively consolidate, manage, and analyze these different logs, many customers choose to implement centralized logging solutions using Elasticsearch, Logstash, and Kibana, popularly known as ELK Stack.
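As a rough illustration of the shipping side of such a setup, a Filebeat configuration on each host could forward local log files to the central Logstash tier (the paths and hostname here are assumptions for the sketch):

```
# filebeat.yml -- ship local logs to the central Logstash tier
filebeat.prospectors:    # called filebeat.inputs in newer Filebeat releases
  - type: log
    paths:
      - /var/log/*.log
output.logstash:
  hosts: ["logstash.internal:5044"]
```

With every host shipping to the same place, that one error can be found with a single search instead of a hunt across hundreds of servers.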

Keep reading

The penetration testing world moves fast and persistently demands new ideas, tools, and methods for solving problems and breaking things. In recent years, many people have grown accustomed to the idea of using Elasticsearch in the penetration testing workflow, most notably for hacking web applications.

More and more companies and websites are opening bug bounty programs. If you have new tools in your arsenal that other people don’t yet use or understand, you could be making a great deal more money from bug bounty hunting. This tutorial teaches you how to use new tools with Elasticsearch to give you that competitive edge.

Keep reading

Having the ability to deploy Elasticsearch, Logstash, and Kibana (ELK) from a single command is a wondrous thing. In this post, we will build an Ansible playbook to do just that.

There are some prerequisites. This Ansible playbook is written for, and was executed on, Ubuntu Server 16.04. A basic system with 2 CPU cores and 4GB of RAM will be enough to start; beyond that, the specs of the machine depend entirely on your situation and the volume of data.
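The skeleton of such a playbook can be as small as this (the `elk` host group and the role names are placeholders for the inventory and roles we will build):

```
# site.yml -- deploy the whole stack with: ansible-playbook -i hosts site.yml
- hosts: elk
  become: true
  roles:
    - elasticsearch
    - logstash
    - kibana
```

Splitting each component into its own role keeps the playbook reusable: the same roles can later target separate hosts as the cluster grows.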

This blog post is an alternative to using the ELK stack on Qbox. To easily deploy and run your own ELK setup on Qbox, simply sign up or launch your cluster here, and refer to the tutorial “Provisioning a Qbox Elasticsearch Cluster.”

Keep reading

In this tutorial series, we are going to use the ELK (Elasticsearch-Logstash-Kibana) stack to parse, index, visualize, and analyze logs. Nearly every process on a server or in an application writes to a log file. By analyzing these logs effectively, we can accomplish numerous things, ranging from troubleshooting to anomaly detection.

To analyze logs, one must first parse them into smaller components with appropriate fields and values, then index those components in a database and conduct the required analysis. One of the most reliable and scalable stacks for this purpose is the ELK stack: Logstash parses the logs and splits them into proper individual documents, those documents are indexed into the powerful text analytics engine Elasticsearch, and finally they are passed to the visualization tool Kibana.

In this edition of the ELK blog series, we cover the setup, the configuration, and a basic example of how to parse and index logs using Logstash.
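To preview where this is headed, a minimal Logstash pipeline for parsing Apache access logs might look like the following (the file path, host, and index name are illustrative):

```
# apache.conf -- read, parse, and index Apache access logs
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    # COMBINEDAPACHELOG splits each line into fields such as
    # clientip, verb, request, and response
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-logs"
  }
}
```

The grok filter does the heavy lifting here, turning each raw log line into a structured document before it reaches Elasticsearch.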

Keep reading