SPLK-2002 Splunk Enterprise Certified Architect – Forwarder & User Management

  1. ServerClass and Deployment Apps

Hey everyone and welcome back. In today's video we will be discussing server classes and deployment apps in detail. Now, we have already discussed that a server class is basically a group which can contain multiple deployment apps. So this is a nice little diagram that I have created where you have the server class. This outer container is a server class, and this server class is called Linux_servers. Within this server class there are two apps: one is Secure and the second is Audit. The Secure deployment app will basically pull all the logs which are in the path /var/log/secure, and the Audit deployment app will basically pull all the logs which are within the directory /var/log/audit. Now, along with that, there are certain rules which are being set.

Now, these rules are set inside the server class. Rules cannot be set in a deployment app; they are set inside the server class. So what are the rules? The first rule is that this server class should only be deployed to clients in the 10.77.* subnet. So if a server with a 192.168.10.x address, that is, an IP in the 192 series, tries to connect to the deployment server, it will get connected, but the deployment server will not push the Linux_servers server class and its associated apps to it. Along with that, there is one more rule which says exclude 10.77.45.0. So this is possible: you include the entire subnet and you just exclude certain IP addresses where you do not want this specific server class and its associated deployment apps to be pushed. One example I can give you is that let's say you have 100 servers within the 10.77.* subnet and five among them are Windows servers.

Now you can specify the IP addresses of the Windows servers so that this Linux_servers server class and its associated apps do not get pushed to those Windows systems. So this is the high-level overview. Let's jump to the practical and look into how we can achieve this. So I'm on my Splunk deployment server, I'll go to Settings, I'll go to Forwarder Management, and currently we do not have any server class. So let's create a server class here. Currently what we have is just a sample app. So you see we just have one sample app here, but the app needs to be associated with a server class. So we'll go ahead and create a server class by clicking on create one, I'll name it Linux_secure, and I'll go ahead and save it. Perfect. So once you create a server class, what you need to do is add apps over here. So basically you have an empty server class and you need to add deployment apps within the server class.

So currently we only have one sample app. So I'll click on the sample app, it goes to the right, and I'll click on Save. Now within Add Clients you see you have an option for include and an option for exclude. So if you want to achieve this specific use case, what you can do is put 10.77.* in the include list and specifically mention 10.77.45.0 in the exclude list. What that will do is exclude only this specific IP address; for all the other servers which connect to this deployment server, all the deployment apps associated with the Linux_secure server class will be deployed over there. This is one way; you can also specify DNS names. Like if your DNS domain is kplabs.internal, you can specify server1.kplabs.internal, and you can also specify DNS names in the exclude list. Anyway, to just keep everything simple, I'll put an asterisk over here; that means all the clients which connect to the deployment server will have this Linux_secure server class and its associated deployment apps.
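Behind the Forwarder Management UI, these rules are stored in serverclass.conf on the deployment server. A minimal sketch of what such a configuration could look like, assuming the Linux_secure server class, the sample_app deployment app, and the 10.77.* / 10.77.45.0 addresses from the diagram example:

    # $SPLUNK_HOME/etc/system/local/serverclass.conf on the deployment server
    [serverClass:Linux_secure]
    # include the whole 10.77.* subnet ...
    whitelist.0 = 10.77.*
    # ... but never push this server class to this one host
    blacklist.0 = 10.77.45.0

    [serverClass:Linux_secure:app:sample_app]
    # restart splunkd on the client after the app is deployed
    restartSplunkd = true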

I'll go ahead and I'll click on Save. Perfect. So now you have the sample app. You don't really have any clients, because we do not have any clients connecting to our deployment server right now. So what we'll do is create a new Docker container where we'll play around with the deployment server. We already have the CentOS Docker image available.

So I'll run docker run; the name will be forwarder02, and this time we'll set a hostname of forwarder02.kplabs.internal. Just for ease of use we'll do a -dt, we'll specify the image ID, and I'll press Enter. So now we should have three Docker containers running: the Splunk Enterprise container, the forwarder01 container that we had created earlier, and the newer one that we just created. So let me quickly connect to the forwarder02 server. I'll do a bash, and now if I quickly do a hostname you will see that it has a hostname of forwarder02.kplabs.internal. Perfect. So once we have one more Docker container up, we'll follow our standard guide which we were using to install the Splunk universal forwarder, and we'll use it to install it in the second container as well. So let me quickly install wget. Perfect. Now that we have wget installed, we'll go ahead and run the command to download the Splunk universal forwarder, and once it's done, I'll go ahead and install it. Perfect. So now let's do one thing: let's go to /opt/splunkforwarder/bin and run splunk start; it will ask for the license, I'll do a yes, and I'll set the admin username and password. Perfect.
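A rough sketch of the commands used in this step, assuming a CentOS image and the RPM package of the universal forwarder; the image ID and download URL are placeholders you would take from docker images and the Splunk download page:

    # start the second forwarder container
    docker run -dt --name forwarder02 --hostname forwarder02.kplabs.internal <centos-image-id>
    docker exec -it forwarder02 bash

    # inside the container: download and install the universal forwarder
    yum install -y wget
    wget -O splunkforwarder.rpm "<universal-forwarder-download-url>"
    rpm -ivh splunkforwarder.rpm

    # start it; accept the license and set the admin credentials when prompted
    /opt/splunkforwarder/bin/splunk start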

So now my Splunk universal forwarder is up and running. The second command that we typically used to run is splunk add monitor (splunk list monitor was just to see the list of log files that it will be, or should be, forwarding), and the next step was to run that command to add the /var/log directory. However, we will not be running these commands, because we now have a deployment server which our container should pull this configuration from. So basically all of this, like which log files the universal forwarder should be monitoring and which indexer the universal forwarder should be sending logs to, should come from the deployment server. You don't have to do this manually. In order to do that, you have a simple command, which is splunk set deploy-poll, then the IP address of your deployment server, and :8089. So let's go to /opt/splunkforwarder/bin and I'll run the command; I'll copy it, and the only command that we'll run is this one. It will ask me for the username and password, I'll enter them, and now you see it says configuration updated, so once that's done let's restart Splunk.
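For reference, a minimal sketch of this step on the forwarder, assuming the deployment server is reachable on its management port 8089 (the IP shown is a placeholder):

    cd /opt/splunkforwarder/bin
    # point the universal forwarder at the deployment server (management port 8089)
    ./splunk set deploy-poll <deployment-server-ip>:8089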

So I'll just say splunk restart. Perfect. Splunk has restarted successfully. So if we quickly go to /opt/splunkforwarder/var/log/splunk and do a tail on splunkd.log, let's look into the log file. All right, so here it is basically saying, for the deployment server, error, not connected. So it is still trying to connect, and if you go a bit down, you will see within the INFO section it says that the connection has been made. So what is happening is that the Splunk universal forwarder is advertising itself, saying this is my IP address, I have the IP address 172.17.0.4 and my hostname is forwarder02.kplabs.internal, and it is calling the phone-home URI on the deployment server. So now let's quickly refresh, and you will see that one client is now listed. You see I now have a hostname of forwarder02.kplabs.internal, and if I just expand this you will see that it has one app deployed, which is sample_app, and the associated server class is Linux_secure. So this is how you actually start to get your client details in. Now coming back to the forwarder instance, let's go to /opt/splunkforwarder; if I do an ls, you can go into etc.

And within etc you can look into the apps directory, and within the apps directory you will see that you have a sample_app present over here. So this is how you actually push your app, or whatever configurations you have, from the deployment server to the universal forwarder. Currently this is just a sample app; within the app we have not specified which log files it should be pushing towards the Splunk indexer.
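A quick sketch of that verification on the forwarder container, using the paths from this course:

    # deployed apps land under etc/apps on the universal forwarder
    cd /opt/splunkforwarder/etc/apps
    ls    # sample_app should now be listed here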

  1. Creating Custom Add-Ons for Deployment

Hey everyone and welcome back. In today's video we'll be discussing how we can create a custom add-on on the deployment server and push that add-on to the universal forwarders which are connected. Now, one important thing that I would quickly like to show you before we start is that although we have the forwarder02 universal forwarder connected, if you look into the data summary, it does not really have any data coming in. So the splunk add monitor /var/log step that we used to run has not been run here. If you are using a deployment server, those details actually come from the deployment server, and how that works is something we'll be discussing in today's video. So typically, whenever you are developing an add-on which is going to be pushed to a universal forwarder, there are two important files that you'll have to look into: one is inputs.conf and the second is outputs.conf.

Now, inputs.conf basically determines which log files you want to monitor, and outputs.conf basically determines which destination the logs should be forwarded to. So let's take an example of a sample inputs.conf configuration. Here we have a monitor stanza where we are monitoring /var/log. So this is the directory which we are monitoring, and we are saying that the data from this directory should be sent to the forwarder index, and disabled is equal to false. So this is the inputs.conf. Now if you have any more log directories that you might want to monitor, you can just add another stanza inside the inputs.conf, and that is sufficient.
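A minimal sketch of such an inputs.conf, matching the stanza just described (the index name comes from the lecture and would need to exist on the indexer):

    # inputs.conf -- what the universal forwarder should monitor
    [monitor:///var/log]
    index = forwarder
    disabled = false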

Now the second important question is what happens after you start to monitor. The universal forwarder obviously has to send the data somewhere, and that configuration is part of outputs.conf. So within the outputs.conf here you see that I have a default group called default-autolb-group, which is auto load balancing. We'll be discussing those features in an upcoming section, but just remember that there is an auto-load-balancing group. Within that group, I have specified the server 172.17.0.2, and I'm sure you know what port 9997 is all about. So these are the two files that will be needed for a custom add-on. So let's do one thing, let's go ahead and create a custom add-on on the deployment server. Now, before we do that, I would just quickly like to show you a few things. This is from the forwarder01 server that we were looking into earlier. Within forwarder01, if you do a splunk list monitor, you will see that it is monitoring the /var/log directory. Now, if you want to see where the inputs.conf and outputs.conf are, let's look into it.

So if you go in, let's do an ls: I'll say /opt/splunkforwarder/etc/apps/search/local/inputs.conf. So this is the file, and if I quickly do a cat, you will see that it has a very simple stanza where it is saying monitor /var/log and disabled is equal to false. So these are the only two lines which are basically required. Now coming to the outputs.conf: that would be /opt/splunkforwarder/etc/system/local/outputs.conf. So this is the file, and if we do a cat here you will see it has the similar stanza where you have 172.17.0.2. So this is the outputs.conf stanza. Since we had not executed the splunk add monitor command within the forwarder02 container, the inputs.conf and outputs.conf files will not be present there. Hence, we will be creating those two files on the deployment server, and we'll be pushing that package to the forwarder02 instance. Perfect. So now that we have the base set, this is the Splunk Enterprise container. I'll go to /opt/splunk, and within etc you know which directory it is: you have deployment-apps. Currently we only have a sample app here, so we need to create one more app, and I'll call it kplabs_linux. This is the naming convention that I'll use. Now, within kplabs_linux, I'll create a directory called local. So let's do a mkdir local, and within local I'll create two files: one is inputs.conf and one is outputs.conf. Perfect. So these are the two files where we will be putting data. In order to do that, I'll open inputs.conf, and so that we do not make any typo, I'll copy the data from the inputs.conf on forwarder01

and I'll paste it over here: the monitor stanza with the actual path, which is /var/log, and disabled is equal to false. You can even set a few more things like index, you can even set source type. All of these are available, but we are keeping it simple as of now and we'll look into more advanced configuration at a later stage. I'll go ahead and save it. In a similar way, I'll open outputs.conf, I'll just copy the outputs.conf which was present on the forwarder01 machine, and I'll paste it over here. So this is the outputs.conf, and I'll go ahead and save it. So these are the two files which are present within the deployment-apps directory of my Splunk instance. Going back to my Splunk instance, within the apps you will see that I have a new app. In case you do not see it, just refresh the page; sometimes we'll have to do a quick refresh. And I have two apps here: one is sample_app and the second is kplabs_linux. Within sample_app you will see in the clients section that you have one deployed, while within kplabs_linux you have zero deployed over here. So before we proceed further, we would like to remove this sample app from the forwarder02 server, because that was done just for demo purposes.
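To recap the app we just created, here is a sketch of how it could be laid out on the deployment server; the app name and stanzas follow the lecture, and the indexer address is the one used throughout this course:

    # on the Splunk Enterprise / deployment server container
    cd /opt/splunk/etc/deployment-apps
    mkdir -p kplabs_linux/local

    cat > kplabs_linux/local/inputs.conf <<'EOF'
    [monitor:///var/log]
    disabled = false
    EOF

    cat > kplabs_linux/local/outputs.conf <<'EOF'
    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = 172.17.0.2:9997
    EOF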

So to remove it, I'll go to the server class, click on Edit, then Edit Apps, and within Edit Apps I'll just unselect the sample app over here and click on Save. What generally happens is that the universal forwarder keeps on pinging my deployment server. So once the universal forwarder pings the deployment server, it will fetch the update stating that the Linux_secure server class no longer has sample_app, and hence the universal forwarder will also have the app uninstalled from its instance.
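That polling behaviour lives in deploymentclient.conf on the forwarder, which the splunk set deploy-poll command created earlier. A minimal sketch of what it could contain; the phone-home interval shown is an assumption rather than something we set in this course:

    # /opt/splunkforwarder/etc/system/local/deploymentclient.conf
    [deployment-client]
    # how often the client phones home to the deployment server (seconds)
    phoneHomeIntervalInSecs = 60

    [target-broker:deploymentServer]
    targetUri = <deployment-server-ip>:8089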

So let's do one thing: let's log into the forwarder02 server and see whether the app is still present or not. I'll quickly do a docker exec into forwarder02 with bash, and you know the location now, I hope: /opt/splunkforwarder/etc/apps. So now if I do an ls, you will see that the sample app has been removed, so we are sure that the universal forwarder has contacted the deployment server, and now you see the deployed-apps count associated with forwarder02 is zero. Perfect. So this is what we intended to do. Now let's go back to the app, and within the kplabs_linux app there is an Edit action, and within Edit you will typically have two options: one is Enable App and the second is Restart Splunkd. Restart Splunkd basically means that once this kplabs_linux app is pushed to the universal forwarder, the universal forwarder will also be restarted. So I'll go ahead and click on Save, and it says add at least one server class, so basically I need to associate it with a server class; I'll associate it with Linux_secure and click on Save. So again, there are multiple ways in which you can add it.

You can add it from the app itself, or you can go to the server class, edit apps, and do it from there as well. So now within the clients you see that I have one app which is deployed, and if you go into forwarder02 and do an ls, you will see that there is a kplabs_linux add-on installed. Now if you quickly open up kplabs_linux, you will see that there is a local and a metadata directory, and within local you would typically see an app.conf, an inputs.conf and an outputs.conf. App.conf is something we will be discussing in the relevant section, but let's quickly go inside kplabs_linux, go to local, and if you cat inputs.conf this is the stanza that we had enabled, and if you cat outputs.conf, this is the stanza that we had enabled. Now, to quickly verify whether our inputs.conf and outputs.conf are actually working, we need to go to the Splunk Enterprise instance, into the Search & Reporting app. And if you look into the data summary, you will see that one more host has come up, which is forwarder02.kplabs.internal. So if I quickly click here, you will see that it has started to index; this data is from /var/log, so the indexing has started to happen.
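If you prefer to verify from the search bar instead of the data summary, a quick search along these lines should show the new events from that host (a sketch, assuming the events went to the default index since our inputs.conf did not set one):

    host="forwarder02.kplabs.internal" source="/var/log/*"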

  1. Pushing Splunk Linux Add-On via Deployment Server

Hey everyone and welcome back. In the earlier video we looked into how we can create our own custom add-on containing inputs.conf and outputs.conf, push that add-on through the deployment server, and see how exactly the universal forwarder uses it to send the log files. However, the universal forwarder is not just limited to monitoring log files; it can actually do a huge bunch of things. And today's video is exactly about seeing the capabilities of the universal forwarder beyond just monitoring a log file. So let's look into some of them. For our demo purposes, what we'll be doing is using the official add-on which Splunk provides, and we'll be pushing that through the deployment server to the universal forwarder.

So within Splunkbase, when we type Linux, there is a Splunk Add-on for Unix and Linux, and you see it is built by Splunk. So basically we are interested in downloading this specific add-on. If you go a bit down you can go ahead and download it: just download this app, click on Agree to download, and it will be downloaded to your directory. Currently it is downloaded in my test directory. So this is the Splunk Add-on for Unix and Linux, and what we'll do is move this add-on inside the Docker container where our Splunk Enterprise is running. In order to do that, I am in my CLI, I'll go to the test directory, and if I do a dir you will see that I have the Splunk Add-on for Unix and Linux. Let's do a docker cp of the Splunk add-on, and I'll move it to the Docker container inside the /tmp directory. Once it's inside the /tmp directory, I'll log into my Docker instance. Let me quickly log in, and if you go to the /tmp directory you will see that I have the Splunk Add-on for Unix and Linux.
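Roughly, the copy step could look like this; the container name and archive file name are placeholders, since the actual file name depends on the version you downloaded from Splunkbase:

    # copy the downloaded add-on archive into the Splunk Enterprise container
    docker cp splunk-add-on-for-unix-and-linux_<version>.tgz <splunk-container-name>:/tmp/
    docker exec -it <splunk-container-name> bash
    ls /tmp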

So let's quickly extract it. I do a tar xzvf on the Splunk add-on archive and it has extracted. If I do an ls -l, you will see that you have Splunk_TA_nix. We know which directory we need to move it into if we want to push a specific add-on from the deployment server to the universal forwarder, so I'll do a mv on Splunk_TA_nix and put it in /opt/splunk/etc/deployment-apps. Once I have moved it into deployment-apps, if you go to the Splunk instance, go to Forwarder Management, and now within the apps you will see that you have one more app called Splunk_TA_nix. Now this app has a huge amount of capabilities other than just pushing log files, and we'll look into what these capabilities are. Before we do that, let's click on Edit over here; within the server class I'll select Linux_secure, I'll select restart Splunkd after installation, and I'll go ahead and save the changes. So now within the clients you will see that you have two deployed apps: Splunk_TA_nix and kplabs_linux.
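A sketch of those two steps inside the Splunk Enterprise container (the archive file name is again a placeholder):

    cd /tmp
    # extract the add-on; it unpacks into a Splunk_TA_nix directory
    tar xzvf splunk-add-on-for-unix-and-linux_<version>.tgz
    # move it to deployment-apps so it shows up in Forwarder Management
    mv Splunk_TA_nix /opt/splunk/etc/deployment-apps/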

So you can see how fast the new add-on has been deployed, and this is the reason why I really love the deployment server: it's extremely fast and pretty convenient for pushing whatever new add-ons you might have. So now, once the app is deployed, you can quickly see that both the Splunk_TA_nix and kplabs_linux apps have been deployed. If you quickly log into the forwarder instance, go to /opt/splunkforwarder/etc/apps, and do an ls, you should see that the Splunk_TA_nix add-on is present. Let's go in here, and within the default directory you will see that there are a lot of files present, one of them being inputs.conf. Now if I open inputs.conf, you will typically see that there are a lot of inputs present over here, and every input has a status of disabled = true. Disabled = true basically means that even though this input is configured within this inputs.conf file, the input is in the disabled state. So what we basically need to do is change the stanzas we want from disabled = true to disabled = false. What we'll do is copy this inputs.conf to the local directory, and now if I go inside the local directory we have inputs.conf, and we'll go ahead and edit it. So let's change certain configurations here so you can see disabled = 0.
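As an illustration of what that local override could look like, here is a minimal sketch enabling a couple of the scripted inputs that ship with the add-on; take the exact stanza names from the default/inputs.conf you copied:

    # Splunk_TA_nix/local/inputs.conf -- enable just the inputs you want
    [script://./bin/ps.sh]
    disabled = 0

    [script://./bin/top.sh]
    disabled = 0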

So I'll just select a few random ones for our testing purposes, and once you have selected them and set disabled = 0, you can go ahead and save the file, and we'll quickly do a splunk restart. Once it is restarted, let's go to the Splunk Enterprise instance, go to the Search & Reporting app, and if you open the data summary, within the sources you will see that a lot of new sources have come up, like df, netstat, open ports, packages, ps, and others as well. So let's click on ps, and here what you basically get is, my bad, not the list of packages; this is the process output, the process-related details. Now, in order to get more information, let's go back to the sources and go to package, and this is basically the package information that Splunk has collected.

So basically there are 133 lines, and these are all basically the packages, like binutils, you have groff and various others. Again, this is something where you need proper formatting and proper parsing, but we are just exploring what details we are basically getting. Since all of these details are being received from the forwarder02 instance, let's click here, and within the source types there will be a lot of source types: you have ps, you have top. Top is something which I'm sure everyone would be aware of. When you run the top command, you basically get the list of all the processes by CPU, memory, and time, you have the priority, the nice value, as well as the PID and the user it is running as.

So you get the same output over here. So basically, the Splunk universal forwarder not only has the capability to monitor log files, it also has the capability to run various custom scripts that you might define. This is the high-level overview of the capabilities of the Splunk Unix add-on and how you can utilize it to fetch various information from a universal forwarder. Now, in the next video we'll explore the directory structure of the Splunk Add-on for Unix and Linux, look into the things it is actually doing behind the scenes, and see how we can optimize it further.
