Training Video Course

SPLK-2002: Splunk Enterprise Certified Architect

PDFs and exam guides are not always the most efficient way to study. Prepare for your Splunk examination with our training course. The SPLK-2002 course contains a complete batch of videos that will provide you with profound and thorough knowledge related to the Splunk certification exam. Pass the Splunk SPLK-2002 test with flying colors.

Rating
4.47
Students
91
Duration
10:52:00 h
$16.49
$14.99

Curriculum for SPLK-2002 Certification Video Course

Name of Video Time
1. Introduction to Splunk (06:29)
2. Introduction to Docker Containers (09:47)
3. Setting up Docker Environment (11:55)
4. Installing Splunk - Docker Approach (06:00)
5. Installing Splunk - RPM Approach (06:16)
6. Data Persistence for Container Volumes (07:26)
7. Important Pointer for Docker in Windows (03:15)
8. Document - Persistent Docker Volume (04:17)
9. Splunk Licensing Model (02:35)
10. Important Pointer for Docker in Windows (02:02)
Name of Video Time
1. Importing Data to Splunk (12:03)
2. Security Use-Case - Finding Attack Vectors (14:45)
3. Search Processing Language (SPL) (18:14)
4. Splunk Search Assistant (04:10)
5. Splunk Reports (06:32)
6. Splunk Report - Email Clarification (Followup) (01:22)
7. Understanding Add-Ons and Apps (12:23)
8. Splunk Add-On for AWS (10:15)
9. Splunk App for AWS (11:29)
10. Overview of Dashboards and Panels (07:31)
11. Building Dashboard Inputs - Time Range Picker (07:29)
12. Building Dashboard Inputs - Text Box (05:34)
13. Building Dashboard Inputs - Drop down (05:49)
14. Building Dashboard Inputs - Dynamic DropDown (03:25)
Name of Video Time
1. Directory Structure of Splunk (10:48)
2. Splunk Configuration Directories (11:25)
3. Splunk Configuration Precedence (06:03)
4. Splunk Configuration Precedence - Apps and Locals (04:05)
5. Introduction to Indexes (12:06)
6. Bucket Lifecycle (17:19)
7. Warm to Cold Bucket Migration (07:28)
8. Archiving Data to Frozen Path (08:14)
9. Thawing Process (05:58)
10. Splunk Workflow Actions (05:50)
Name of Video Time
1. Overview of Universal Forwarders (04:26)
2. Installing Universal Forwarder in Linux (14:47)
3. Challenges in Forwarder Management (06:18)
4. Introduction to Deployment Server (08:36)
5. ServerClass and Deployment Apps (10:48)
6. Creating Custom Add-Ons for Deployment (11:24)
7. Pushing Splunk Linux Add-On via Deployment Server (08:54)
Name of Video Time
1. Understanding Regular Expressions (15:15)
2. Parsing Web Server Logs & Named Group Expression (15:15)
3. Importance of Source Types (07:16)
4. Interactive Field Extractor (IFX) (05:35)
5. props.conf and transforms.conf (16:16)
6. Splunk Event Types (06:08)
7. Tags (06:45)
8. Splunk Event Types Priority and Coloring Scheme (07:05)
9. Splunk Lookups (13:44)
10. Splunk Alerts (07:08)
Name of Video Time
1. Access Control (10:26)
2. Creating Custom Roles & Capabilities (10:52)
Name of Video Time
1. Overview of Distributed Splunk Architecture (07:05)
2. Understanding License Master (04:45)
3. Implementing License Master (05:36)
4. License Pools (06:04)
5. Indexer (04:29)
6. Masking Sensitive Data at Index Time (06:17)
7. Search Head (03:41)
8. Splunk Monitoring Console (06:23)
Name of Video Time
1. Overview of Indexer Clustering (04:12)
2. Deploying Infrastructure for Indexer Cluster (07:11)
3. Master Indexer (07:45)
4. Peer Indexers (06:21)
5. Testing Replication and Failover Capabilities (09:29)
6. Configuration Bundle (10:03)
7. Configuration Bundle - Part 02 (04:37)
8. Forwarding Logs to Indexer Cluster (11:34)
9. Indexer Discovery (10:02)
Name of Video Time
1. Overview of Search Head Clusters (03:50)
2. Deploying Infrastructure for Search Head Cluster (06:43)
3. Configuring Cluster Setup on Search Heads (12:00)
4. Validating Search Head Replication (02:18)
5. Pushing Artifacts through Deployer (06:50)
6. Connecting Search Head Cluster to Indexer Cluster (06:02)
Name of Video Time
1. Using Btool for Troubleshooting (08:54)
2. Overview of Data Models (05:02)
3. Creating Data Model - Practical (13:31)
4. Splunk Support Programs (08:06)

Splunk SPLK-2002 Exam Dumps, Practice Test Questions

100% Latest & Updated Splunk SPLK-2002 Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Splunk SPLK-2002 Premium Bundle
$54.98
$44.99

SPLK-2002 Premium Bundle

  • Premium File: 90 Questions & Answers. Last update: Nov 20, 2024
  • Training Course: 80 Video Lectures
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Free SPLK-2002 Exam Questions & SPLK-2002 Dumps

File Name                                                    Size     Votes
splunk.test-king.splk-2002.v2024-09-28.by.bella.50q.vce      69.8 KB  1
splunk.real-exams.splk-2002.v2021-05-05.by.easton.50q.vce    69.8 KB  2

Splunk SPLK-2002 Training Course

Want verified and proven knowledge for the Splunk Enterprise Certified Architect exam? It's easy when you have ExamSnap's Splunk Enterprise Certified Architect certification video training course by your side, which, along with our Splunk SPLK-2002 exam dumps and practice test questions, provides a complete solution to pass your exam.

Splunk Architecture

1. Directory Structure of Splunk

Hey everyone and welcome back! In today's video, we will be discussing the Splunk directory structure. Now, I remember that whenever I take Linux classes, during the initial set of lectures, one of the first things that we look into is the Linux directory structure. So let me quickly show you this. Basically, in Linux, whenever we do an ls -l on the root directory, you see that there are many files and directories present. Now, as a system administrator, it is very important to understand what each one of these files and directories is all about. So here you will see that this is bin, which contains binaries; then you have boot, which basically contains your kernel as well as the GRUB-related configuration; then you have dev; you have etc; and you also have var, where your variable files, like log files, are stored.

All of this understanding is very important when you are a system administrator, so whenever something does not work in your Linux system, having a strong understanding of the directory structure helps. Now the same goes for Splunk as well. Splunk also has its own directory structure, and as a Splunk solutions architect, it is important for us to understand the Splunk directory structure and what those directories stand for. Now, one important thing to remember is that by default, the Splunk installation happens in the /opt directory. Now, if you do an ls -l on /opt/splunk, you will see that there are a lot of directories and files present over here.

This is very similar to Linux, where you have a bin directory, an etc directory, and a var directory; along with those, you have directories like include, openssl, share, and various others. Now, in the day-to-day administration of Splunk, you will typically be administering as well as troubleshooting using certain of these directories, so understanding them is very important. So let's look into some of the most important ones that you will primarily be working with on a regular basis.

The first one is bin. Now, bin basically contains your Splunk binary as well as various other binaries, like btool and others. The second is var. Now, var basically means variable data, so any data which is variable in nature generally goes into the var directory. This can be your Splunk logs: whatever logs get created by Splunk are variable in nature, so those go inside the var directory. Another important kind of data that goes inside the var directory is your actual data; by actual data I mean whatever data your Splunk is indexing. That again is variable, so it also goes into the var directory. The third one is etc, which basically contains all of your configuration files as well as all of the add-ons and apps that you install in Splunk.
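The directory roles just described can be summarized in a small sketch. This is a simplified Python summary of the lecture's points, not an exhaustive listing of a real installation:

```python
# Simplified summary of the key directories under $SPLUNK_HOME
# (/opt/splunk by default), as described in the lecture.
SPLUNK_HOME_LAYOUT = {
    "bin": "Splunk binary plus helper binaries and scripts (e.g. btool)",
    "etc": "configuration files, add-ons, and apps",
    "var": "variable data: Splunk's own logs and the indexed data itself",
    "lib": "libraries that Splunk needs",
}

for directory, purpose in SPLUNK_HOME_LAYOUT.items():
    print(f"$SPLUNK_HOME/{directory}: {purpose}")
```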

The next one is the lib directory, which basically contains the libraries that Splunk needs. So let's look into how exactly these look within the file system. I'm in my Splunk Docker container, so let me quickly do a clear, and if I do an ls on /opt/splunk, you will see that there are a lot of directories, similar to what we saw in the earlier slides. Now one of the directories here is the bin directory. Now, bin stands for binaries, so any binaries or scripts that are required generally go inside the bin directory. So let's quickly go into /opt/splunk/bin. And here you will see that there are a lot of Python-related scripts. You will also have various binaries, like openssl, which is typically used when generating certificates. You have MongoDB-related binaries, you have bloom-related binaries, which we'll be discussing as well, and you also have your main splunk binary over here. So if you quickly run /opt/splunk/bin/splunk status, you will see the status of Splunk. And if you want to restart Splunk, you can just do a restart, and your Splunk process will get restarted.

So all of these are stored in the bin directory under /opt/splunk. The next important part to remember is the etc directory. Now, etc basically contains all of the important configuration files as well as all of the add-ons and apps that are part of Splunk. So let's go inside the /opt/splunk/etc directory. And if I do an ls here, you will find that there are a lot of configuration files present. One of the important directories here is the apps directory. And if I do an ls -l on apps, you will see that there are a lot of apps present over here. Now one of the apps here is the Splunk App for AWS, and we also see the Splunk Add-on for AWS as well as the Splunk Add-on for Unix and Linux. These are things that we downloaded in the earlier videos, and all of them are stored within the etc/apps directory. So anytime you want to make changes to an add-on or you want to create a new add-on, this is where it actually gets stored. Now, along with that, certain other things here are related to licenses. You have deployment-apps, and you also have master-apps; all of these will be studied when we look into the distributed architecture of Splunk. And you also have the passwd file, which is very similar to the Linux passwd file and holds the list of users, and the system directory.

Now, the system directory is something with which you will be interacting quite a lot. This is where you have a lot of important configurations, so let's look into some of them. So within system, if you look into default, you have a lot of important configuration files here; let me cat one of them. All of these are various configurations related to Splunk, and having them in place is very important. If you try deleting them, you are in for a very messy troubleshooting session, and things can get complicated if someone changes these configurations. Now along with that, let me show you one important one. Let's go to local, and within local you will see server.conf. Let me open server.conf: server.conf basically holds things like your server name. So this is your server name, and these are the pass4SymmKey symmetric keys. Now, this server name is something that you would typically see within the Splunk installation. If I quickly show you this, I have a Splunk installation here, and if you do a search like index=_internal, which is the internal index where Splunk typically stores its own logs, within the host field you will see that this is the server name it is reflecting. Ideally, this should be something more meaningful, like splunk.kplabs.in,

so we know what exactly this hostname is all about. Anyway, we'll be looking into this in the upcoming videos, but I just wanted to make sure that we know the basics. Along with that, we have the lib directory, which contains the libraries, and most importantly, we have the var directory. The var directory is quite important. So if you go into the var directory, you will see that you have lib and you have log; these are the two important directories here. Now, within log, you will typically see all the logs related to add-ons and to Splunk itself. Whenever you are troubleshooting Splunk, sometimes it happens that Splunk does not come up and you want to understand why; these are the log files that will help you understand that. So within /opt/splunk/var/log/splunk you will see files like metrics.log, splunkd.log, scheduler.log, and health.log; all of these logs you can even see from the GUI. But do remember that many times it happens that the Splunk UI itself does not come up, so there is no GUI for you to look into.

So in such cases, you will have to go into this directory to do the troubleshooting. And one last thing that I would like to show is the lib directory under var. The var/lib directory is where your actual Splunk indexed data gets stored. So this is the actual index data that you see in Splunk. If I quickly go to search, go to data summary, open up a host, and select the timeline as all time, the data that you see over here, all of this AWS or Linux/Unix data, gets stored in the var/lib/splunk directory. This is one important directory that you should avoid touching directly. So this is a basic overview of the Splunk directory structure. We'll be looking into it in greater detail whenever we come to a relevant use case where we are working with a specific configuration file. So that's about it for today's video. I hope this has been informative for you, and I look forward to seeing you in the next video.

2. Splunk Configuration Directories

Hey everyone, and welcome back. In today's video, we will be discussing the Splunk configuration directories. Now, in a typical Splunk instance, there are multiple versions of configuration files spread across the file system within the Splunk directory structure. We can have configuration files with the same name in the default, local, and app directories. This type of structure is also referred to as the "layering structure," and it helps Splunk determine priorities. So let's understand this with a simple example. Let's assume that the first block, the first column, is called default. The second block is called app, and the third block is called local. So these are three directories.

Now, within the default directory, you have a stanza that says server name is equal to default. You have the same stanza in the app directory, where it says server name is equal to app, and again you have the same stanza in the local directory, where server name is equal to king. This is called the "layering effect." So you can have the same file names across the default directory, the app directory, and the local directory, and these files can also have the same field names with different values. Here the field name is "server name," and the values are "default," "app," and "king." Now, for such cases, which value Splunk takes up depends upon the file precedence. Within Splunk, system/local has the highest file precedence, followed by app, then default.

So how it works is that, since we have server name equal to default in the default directory and server name equal to king in the system/local directory, the system/local server name will be taken up by Splunk even though the app and default directories have their own versions of the server name. We will be discussing more about this. Now, default configuration files within Splunk should never be edited. There is one golden rule to remember, and this is a nice quote I got from the Splunk documentation itself: "All these worlds are yours, except default. Attempt no editing there." So any configuration file that you see within the default directory, you should not touch. The reason is that whenever you do a Splunk upgrade, all the configurations within the default directory get replaced. So if you make any changes to the default configuration, they will be overwritten during the upgrade process, and your Splunk installation might stop working. So let's understand this with an example. Let me cd to /opt/splunk and, within this, go to etc/system. Now if I do an ls -l over here, you will see that there are two directories, one is default and the other is local.

Now, if I do an ls -l on default, there are a lot of configuration files over here. Do note that one of them is called server.conf. Now if you do an ls -l on local, you will see that server.conf is also present within the local directory. So what you have is one server.conf in each place. Let's understand this again: you have one server.conf within the default directory, and you have one server.conf within the local directory. This is what we were describing as the layering effect, where essentially we have the same configuration file in two places, in the default directory and in the local directory. Now, as for the configurations within those files, let's assume that within one configuration file you have server name equal to default and in the other you have server name equal to king. In this case, Splunk will pick up server name equal to king, and it will not pick up server name equal to default; a very important part to remember. So let's look at what that would look like. What I'll do is vim into the default server.conf. OK, vim is not there; let's use vi.

So within this, if you look into the general section, you have server name set to a $HOSTNAME placeholder. So this is one configuration file, and along with this there are various other configuration entries present, or I would say key-value pairs. Now, if you go into the local directory and open server.conf there, you will see that you again have the same section called "general," you have the server name, and you have a hardcoded value. So essentially, what is happening is that you have two files with the same name, one in local and one in the default directory, and in the local directory you have hardcoded this specific value. So whatever value is hardcoded within the local directory, which is system/local, has the highest precedence overall. This is like a king: any value within the app as well as the default directory will not be taken into consideration if the value is explicitly defined within the local directory.
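To make the layering concrete, the two copies of server.conf might look roughly like this. The "king" value is the lecture's illustrative example, and the exact contents of the shipped default file vary by version; treat this as a sketch:

```ini
# $SPLUNK_HOME/etc/system/default/server.conf (ships with Splunk; never edit)
[general]
serverName = $HOSTNAME

# $SPLUNK_HOME/etc/system/local/server.conf (your edits go here; wins on conflict)
[general]
serverName = king
```

With both files present, Splunk resolves serverName to "king", because system/local outranks system/default.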

This is an important part to remember. One thing that I also wanted to share is that many times you see the location written as $SPLUNK_HOME/etc/system/default. This $SPLUNK_HOME is something that you will typically see in the Splunk documentation. Basically, you can install Splunk in /opt, or you can install Splunk in a different directory altogether; this is the reason why it is treated as a variable. By default, it would be /opt/splunk; however, if you install in a different directory, then this variable will change. So this is an important part to remember. Now, speaking about the local configuration files, we have already discussed that whenever you make changes to the default configuration files, they typically get overwritten during the upgrade process, and it is not recommended to leave anything in default. This is the reason why it is recommended that whatever files you create or edit be placed under the local configuration directory. The local directory is typically $SPLUNK_HOME/etc/system/local. This is where you would typically make Splunk global configuration changes.

So with this theoretical session done, let's look into the practical aspect and see how exactly it works. The first thing that I'll do is quickly run an apt-get update with the sudo command. Basically, the reason I wanted to do this is that I wanted to install nano. Nano is a great editor available on Linux systems, and I generally prefer nano to vi; that's just a personal choice. Anyway, now that we have the nano editor up and running, let's look into the configuration file precedence. Let's go to the default directory; within default you have a lot of files, and we'll be looking at web.conf. If I open web.conf, you will see that there is a settings section, and if you go a bit down here, you have httpport equal to 8000. Now, if you remember, whenever we open Splunk in the browser, we have to manually specify port 8000, and this configuration file is what contains that setting. Now, let's assume that I want to change this HTTP port from 8000 to 8080. As you already know, we should never touch the default configuration file; this is a very important part to remember. So what we'll do is copy this specific configuration, go to local (let me go back), and create a file with the same name, which is web.conf. It should have the same name, and we'll copy the configuration here. Now, instead of port 8000, I'll say 8080, and I'll save this. Once I have saved this, we'll need to restart Splunk.

So I'll run /opt/splunk/bin/splunk restart. And now, within the prerequisite checks, you will see that it is actually checking whether port 8080 is open or not. And then it is trying to start the web server on port 8080, as opposed to port 8000, which is present in the default configuration files. And once that is done, you will see that the Splunk web interface is up at hostname:8080. So although we had files with the same name but different parameters in default and local, Splunk took up the configuration file within system/local and used that as the effective setting. Now, an important part to remember is that whatever you define within this directory, system/local, has the highest precedence; nothing can override it. It is like a king: whatever you put within this specific file will take effect, irrespective of what you define elsewhere. So this is one important part that we should remember.
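The override performed above boils down to creating this small file; only the changed key needs to be present, since Splunk layers it over the default web.conf:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
# Overrides only the port; every other setting still comes from default.
[settings]
httpport = 8080
```

After a restart, Splunk Web listens on 8080 while the shipped default file remains untouched and upgrade-safe.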

3. Splunk Configuration Precedence

Hey everyone, and welcome back. In today's video, we will be discussing Splunk configuration precedence in greater detail. Now, in the earlier video, I hope you remember, we already discussed how putting a field value within the system/local directory gives it a much higher precedence compared to the value in the default directory. Now, one important part that I would like to show specifically relates to the practical we did, where we changed the port from 8000 to 8080. In case you are doing it in Docker and then trying to reach something like localhost:8080, you will see that it will not really work. The reason it will not work is that the Docker port mapping is still on port 8000. So if I log out of Docker and do a docker ps, you will see here that port 8000 of my Windows machine is mapped to port 8000 inside the Docker container.

When Splunk was running on 8000, this mapping made sense, but now that we have changed it from 8000 to 8080, this port mapping will not work, and therefore you will not see any data over here. But in case you are running it outside of a Docker container, on a server or a virtual machine, you should be able to reach it on port 8080. So, in short, when it comes to the precedence order, it is important to remember that this is the order in which things are evaluated.

First, Splunk will look into the system/local directory, which has the highest priority; then the app local directory; then the app default directory; and fourth, the system/default directory, which has the lowest priority. So how things typically work is that whenever Splunk starts and looks for some global configuration, it will first look into the system/local directory. If it finds a specific configuration within the system/local directory, it will consider that the winning one, and that value will be part of the running Splunk configuration. Then, if it does not find certain fields within the system/local directory, it will look into the app local directory to see whether that field is there or not. If it is not there, then it will look into the app default directory.

And as a last resort, if a specific configuration is not present within any app directory, it will look into the system/default directory and fetch the configuration from there. Now there are certain important points for us to remember, such as the fact that app directory names also affect the precedence. We have already seen in the first slide that within the app layer there can be many applications installed. Let me quickly show you that. Currently I'm in /opt/splunk/etc/apps. If I do an ls -l, there are a lot of directories present, and each directory will have its own default and local configuration. If I open up any random one, let me look into Splunk_TA_nix: you will see that it also has its own default directory and can also have its own local directory. So each app has its own default and local directory. In such cases, if you are putting certain configurations into place, it might happen that you have the same configuration file with different values in two different apps. Now, how will Splunk know which one to prioritize?
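The four-layer lookup described above can be sketched as a toy resolver. This is a simplified model for intuition, not Splunk's actual implementation, and it ignores the per-app lexicographic ordering discussed next:

```python
# Toy model of Splunk's global-context configuration precedence:
# the first layer that defines a key wins.
PRECEDENCE = [
    "system/local",    # highest priority
    "app/local",
    "app/default",
    "system/default",  # lowest priority
]

def resolve(key, layers):
    """Return the value of `key` from the highest-priority layer that defines it."""
    for layer in PRECEDENCE:
        if key in layers.get(layer, {}):
            return layers[layer][key]
    raise KeyError(key)

layers = {
    "system/default": {"httpport": "8000", "serverName": "$HOSTNAME"},
    "system/local":   {"httpport": "8080"},
}
print(resolve("httpport", layers))    # "8080": system/local wins
print(resolve("serverName", layers))  # falls through to system/default
```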

For such cases, understanding the app precedence is important. Now, to determine the priority, because there can be multiple applications, Splunk basically uses lexicographical order. The way it works is that files in an app directory named A will typically have a higher priority than files present in an app directory named B. This comes from the lexicographical order. Another important part to remember is that all apps whose names start with uppercase letters will have precedence over apps whose names start with lowercase letters, again due to the lexicographical order.

So, for example, an app named a normally has precedence over an app named z; but an app named Z has higher precedence than an app named a, because Z is an uppercase letter and a is a lowercase letter. This is one important part that you need to remember. So, as a summary of the precedence order: you have $SPLUNK_HOME/etc/system/local, then $SPLUNK_HOME/etc/apps/<app>/local, then $SPLUNK_HOME/etc/apps/<app>/default, and then $SPLUNK_HOME/etc/system/default. (There was a little typo on the slide here, so you can just ignore that.) So you have system local, app local, app default, and then system default. This is one important thing that you should remember, and it will help you throughout the practical sessions that you will be doing.
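The uppercase-before-lowercase behaviour is just a consequence of ASCII ordering, which a quick sort illustrates (the app names here are made up for the example):

```python
# Hypothetical app directory names. ASCII sorting places every uppercase
# letter before every lowercase one, so "Zebra_app" outranks "apple_app"
# even though z comes after a alphabetically.
apps = ["zeta_app", "Alpha_app", "beta_app", "Zebra_app"]
print(sorted(apps))
# ['Alpha_app', 'Zebra_app', 'beta_app', 'zeta_app']
```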

4. Introduction to Indexes

Hey everyone, and welcome back. In today's video, we will be covering the basics of Splunk indexes. In very simple terms, an index is a repository of Splunk data: whatever data you put inside Splunk gets stored in an index. Splunk transforms the incoming data into events, which in turn are stored in indexes. Now, when Splunk indexes your data, it creates a number of files. These files fall into two categories: one is the raw data, which is in a compressed format, and the second is the index files that point to the raw data, referred to as the tsidx files, which also contain some metadata-related aspects.

So let's understand this in a practical scenario, since this is more of a theoretical aspect. I'm in my Splunk instance here, and if I go to Settings and click on Indexes, you see that there are various indexes present over here. Now, one important aspect to remember is that whatever data you put inside Splunk needs to be stored in an index; consider an index as a container into which you put your data. Now, if you look at this specific diagram (a pretty good one from the Splunk documentation; all credit goes to the documentation): depending on where your data comes from, it can come from a monitored input, from UDP, or from TCP, and it goes first into the parsing queue. From the parsing queue, it goes to the parsing pipeline. The parsing pipeline is where your data is converted into events: all of the normalisation, the regex transformations, everything is done in the parsing pipeline. Then it goes to the index queue, and from the index queue it goes to the indexing pipeline. From there, the data is stored in a specific directory, which contains raw data and index files. So let's understand this with a simple example. What I'll do is create a new index, name it kplabs, and ignore everything else for now; we'll just create a very simple, basic index with no additional configuration. And I'll click on Save. Now, once your index is created, you will see it in the list.

And it also shows the path where your index is stored. Now, in Splunk, your indexes are generally stored under $SPLUNK_HOME/var/lib/splunk. Within this directory, you will see all of the index directories. We recently created a new index called kplabs, and hence a new directory called kplabs got created. If I go inside kplabs, you will see that there are multiple directories inside it. We'll be speaking about these in the upcoming videos, but currently, the main directory that matters here is db. Inside db, right now you only have a creation-time file; you don't have any data, since this is an empty index. So now let's do one thing: let's add some sample data to the index. Speaking of sample data, I'll just create a sample here. I'll write "This is sample data 01", and I'll copy this line a few more times, replacing the numbering so that we have 02, 03, 04, and 05. I have saved this in a text document called sampledata.txt, and we'll add it to our Splunk instance. In order to add it, we'll go to Settings, click on Add Data, and then upload our sample TXT file.
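As a rough sketch of what this looks like on disk (assuming a default installation under /opt/splunk; your install path may differ):

```shell
# Default location of index directories (paths assume a standard install)
cd /opt/splunk/var/lib/splunk
ls            # defaultdb, _internaldb, audit, kplabs, ...
ls kplabs     # db, colddb, thaweddb, ...
ls kplabs/db  # only a creation-time file until data arrives
```

The defaultdb directory backs the main index, and _internaldb backs the _internal index, which we'll use later in this video.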

So I selected my sampledata.txt file, and it got uploaded. I'll go to Next. Now, each line here is an event, so I'll set the event breaks to every line. We now have five events. I'll click Next, and it will ask me for a source type; let's just say test. I click on Save. Now, for the index, you will see that by default the data goes into the main index. So there is a default index called main, and unless you specify that you want to put your data in a different index, Splunk will put everything inside the main index. However, since we have already created an index called kplabs, I'll just select kplabs over here. I'll do a review, and I'll click on Submit. Once I click on Submit, if I start searching, you see I have my five events present over here. Now let's look at the same aspect from the CLI perspective. Earlier, when we did an ls on the db directory, we only found the creation-time file. If we do an ls on db yet again, you will see that there is one more directory created, called hot_v1_0. So let's go there: I'll do a cd hot_v1_0 and then an ls -l over here. Within this, you have multiple files and directories, and there are two important ones that we want to look into. The first is the .tsidx file.
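The hot bucket we just navigated into looks roughly like this (bucket names and the exact set of metadata files vary by Splunk version, so this is a sketch):

```shell
# Inside the index's db directory after data arrives (illustrative)
cd /opt/splunk/var/lib/splunk/kplabs/db
ls          # creation-time file plus hot_v1_0
cd hot_v1_0
ls -l       # *.tsidx file(s), rawdata/ directory, plus bucket metadata files
```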

So this is also referred to as the index file. The .tsidx file basically contains a summary of what is present within the raw data. Now, if you open up the rawdata directory, there is a file named 0, and if you do a cat on 0, you'll see that it contains our sample data: sample data 01, sample data 03, sample data 04, and so on. So "raw" means that it contains the data itself; whatever data you import into Splunk gets stored within the raw data. Splunk will also compress the raw data. Currently it is not in a compressed format, but Splunk will apply gzip-based compression at a later stage. The .tsidx file, on the other hand, is like the index you find at the start of a textbook. By default, it will not contain each and every word; it only contains certain indexed fields like host, source, and sourcetype, plus specific metadata, but we have an option to store additional fields there. In order to understand this in a much better way, let's take a simple example. Let's go back and go into the directory of the _internal index, which is _internaldb. The directory structure will remain the same. If I do a cd and then an ls, you see that there are a lot of directories present (we'll be discussing those at a later stage), but we are more interested in a bucket that is no longer hot, so let's pick the first warm one. Now if you do an ls -l, you will have a similar structure: a .tsidx file and the rawdata directory. Within the rawdata directory, as you can see, you have a journal.gz file rather than direct raw data; Splunk compresses it so that disk space can be saved.
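To get a feel for why Splunk gzips the journal, here is a small stand-alone sketch (plain gzip on a synthetic log file, not Splunk itself) showing how well repetitive, log-style raw data compresses:

```shell
# Illustrating the kind of gzip compression Splunk applies to rawdata:
# repetitive log-style data compresses very well.
for i in $(seq 1 1000); do echo "This is sample data $i"; done > journal
gzip -c journal > journal.gz
wc -c journal journal.gz   # the .gz file is a small fraction of the original
```

The trade-off is exactly the one discussed next: compressed raw data saves disk, but searching it requires decompression, which is why the .tsidx summaries matter so much for speed.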

So generally speaking, there are two ways in which a search can scan your data. First, it can scan directly through the index, meaning the .tsidx file. The second way is to go into the raw data, uncompress it, and search through the events present there. Definitely, scanning through the .tsidx file will be much faster compared to scanning the raw data directly. In order to understand this, let's try it out. I have two commands. The first is index=_internal with a stats count by sourcetype. Now, sourcetype is one of the fields that gets stored within the .tsidx file; however, stats is a command that will look at the raw data. So let's do one thing: let's run the stats count by sourcetype and look into the job. We'll open the Job Inspector, which tells us how much time it took for the SPL to complete, and it is saying it took 2.59 seconds for the entire query to be executed.
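The two searches compared here can be written out as follows; the exact timings will of course vary per environment:

```
# Scans and uncompresses the raw events, then aggregates (slower)
index=_internal | stats count by sourcetype

# Reads only the tsidx files, never touching rawdata (much faster)
| tstats count where index=_internal by sourcetype
```

Note that tstats is a generating command (it starts with a pipe) and can only group by indexed fields such as host, source, and sourcetype, which is precisely why it can skip the raw data.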

So let me just quickly note that down: 2.59 seconds. Now let's try tstats. tstats will not look at the raw data; it will only look at the .tsidx file, which is of great importance. So let's close the Job Inspector and run the tstats command. And now, if you look into the Job Inspector, you see it took just 0.175 seconds. Let me just quickly write it down: 0.175 seconds. So the difference between the execution time of tstats and that of stats is huge: this one is around 0.175 seconds, and this one is around 2.59 seconds. That is the beauty of the .tsidx file. However, an important part to remember is that not everything gets stored in the .tsidx file, similar to how not everything is listed in the index of a book. Now, Splunk Enterprise comes with certain preconfigured indexes. These include the main index; we have already discussed that this is the default index, and all data will get stored within it unless you specify a different index altogether. The second is the _internal index, which is used for storing Splunk's internal logs. And you also have the _audit index, which contains events related to users' search history, file system change monitoring, and auditing-specific information. So these are the three indexes that Splunk generally comes with in a preconfigured manner.
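To poke at these preconfigured indexes yourself, a few example searches (the _audit action=search filter shown is a commonly used one for search history):

```
index=_internal              # Splunk's own logs (splunkd, metrics, ...)
index=_audit action=search   # audit events for who ran which searches
index=main                   # the default destination for incoming data
```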
