
Sumo Logic QuickStart Webinar - July 2018


Transcript


0:00

so want to welcome everybody to today's

0:02

webinar today we're going to be

0:03

discussing sumo logic and more

0:05

specifically Sumo Logic QuickStart so

0:07

how do we get you guys up to speed in

0:09

the understanding of how to use sumo

0:12

logic some of the features today's

0:14

webinar is going to be a combination of

0:17

some PowerPoint slides certainly some

0:19

in-app demos and if you guys have

0:21

questions if you want to go ahead and

0:23

post them into the chat window we can go

0:25

ahead and try to answer those so let's

0:27

go and get started so here we go so what

0:34

are we going to be discussing today so

0:35

we're gonna be discussing sumo logic and

0:37

more specifically becoming a sumo pro

0:40

user this information today is going to

0:42

correspond with our certification level

0:45

one we've recently launched a

0:47

certification program that currently

0:48

consists of three different levels

0:50

today's class will be level one and as

0:53

you all have registered for this webinar

0:55

I'm sure you saw the links to register

0:58

for the other ones and we certainly

0:59

encourage you to do so to expand your

1:01

Sumo Logic knowledge what are we going

1:05

to specifically be talking about over

1:06

the next hour or so so these are gonna

1:08

be the five steps to becoming a sumo Pro

1:10

user and we're going to discuss these

1:12

throughout the session today so how does

1:15

Sumo Logic help me

1:16

you now have Sumo Logic access and

1:18

how do you use it to benefit you and and

1:21

make sure that you can go ahead and view

1:23

that content that's going to be relevant

1:24

to you what data is available so I can

1:27

analyze certainly you now have access to

1:29

system and now you need to understand

1:30

what is there that you can go ahead and

1:32

look at and we'll show you some ways to

1:34

see what data has been essentially loaded

1:38

for you how can I search parse and

1:41

analyze my data so now you have all this

1:43

data available to you and you need to go

1:45

ahead and analyze it and so we're going

1:47

to take a look at some of the search

1:48

mechanisms some parsing and analyzing a

1:51

query language that will allow you to

1:52

essentially take your data and

1:55

slice it and dice it to a way that's

1:57

going to make sense to you and any users

1:59

that might need to look at that data how

2:01

can I monitor my trends and critical

2:03

events so certainly we want you to be

2:05

able to take advantage of sumo logic to

2:07

be able to go ahead and do those type of

2:08

things to see trends

2:10

look at patterns and potentially even

2:12

prediction models and outliers to see

2:15

what's going on with your trends and

2:17

with the monitoring one of the cool

2:19

things about Sumo Logic is you have

2:21

the ability to set up alerts so that you

2:23

can be notified of events that are

2:24

taking place when you're not in Sumo

2:26

Logic we certainly don't want you to

2:27

have to be logged in 24/7 staring at a

2:30

screen waiting for an event to occur

2:31

we'll show you some ways that you can

2:33

integrate with email or web hooks so you

2:35

can get a notification when these

2:37

events occur and then where do I go from

2:40

here so certainly we're going to give

2:41

you a fair amount of knowledge over the

2:43

next hour so and then you know what are

2:45

the next steps how do you go ahead and

2:47

continue and advance your knowledge of

2:48

sumo logic so let's go ahead and dive in

2:51

here there's going to be a corresponding

2:54

tutorial that has some hands-on

2:56

exercises that we would encourage you to

2:58

go through we'll be looking at most of

3:01

those today I'll be kind of covering the

3:02

tutorial steps but we would certainly

3:04

encourage you to go ahead and do this on

3:06

your own time and you know actually get

3:08

the hands-on experience so this is the

3:10

content the information that's going to

3:13

be necessary to log into those training

3:16

environments and I'll show

3:17

you how you can access this link so you

3:19

don't have to go ahead and write this

3:20

down or take a screenshot if you don't

3:21

need to okay questions so you're certainly

3:24

going to have questions after today we

3:27

have a very thorough documentation and a

3:30

community forum that are going to be

3:31

great for you to get some assistance

3:32

with answers so we would encourage you

3:35

to go ahead and attend or join the

3:38

community and you can also have a live

3:40

conversations with our community via

3:42

slack so I'll go ahead and point those

3:43

out later as well but for now I'll just

3:45

show you where those exist and then

3:48

finally based on the information that we're

3:51

going to provide you today this is going

3:52

to be enough content and information for

3:55

you to be able to go ahead and take our

3:57

first exam so the level 1 exam

3:59

sumo pro user as you can see here this

4:01

is some of the on the right-hand side of

4:03

the screen we have the preparation which

4:05

is going to be the QuickStart webinar

4:06

which we're about to do and the tutorial

4:08

which I just referenced a few moments

4:09

ago and so this will be a great you know

4:11

opportunity for you to go ahead and

4:13

prove that you have that Sumo

4:14

knowledge and experience for yourself and

4:17

for your organization so we would

4:18

certainly encourage you to take the

4:19

exams at a later time so let's go ahead

4:22

and dive in so how does Sumo

4:23

Logic help me you now have this system

4:25

how do you get it to do something and

4:27

take advantage of the service so what

4:32

I'm gonna do is I'm actually gonna start

4:33

with a demonstration so let me go ahead

4:34

and just switch screens here and I'm

4:37

gonna actually start with this right

4:40

over here and so what I want to show is

4:44

a demonstration and we're gonna go

4:47

through the scenario of using our

4:50

fictitious company called travel logic

4:52

and what this organization does is is a

4:55

travel agency so similar to if you go to

4:57

a website like Expedia or any of those

4:59

popular sites and you would go in

5:01

book a car book a flight

5:04

compare prices I think you guys have all

5:05

experienced that at some point and

5:07

that's gonna be the site we're

5:09

going to use for today's scenario so I

5:12

could go in here and start to book a

5:13

flight and you know look for a hotel and

5:16

all that I'm not going through just to

5:17

save a few moments of time because I'm

5:19

sure you have all experienced this

5:20

interface before but I want to start

5:22

with a scenario here so I get a call I'm

5:26

an operations individual and I get a

5:29

call from someone in upper

5:30

management and he said hey we're getting

5:32

reports that people cannot check out on

5:34

travel logic and as a result they can't

5:36

check out they can't pay for their

5:38

tickets they can't book their travel

5:40

certainly a problem for them and a

5:42

problem for us as an organization

5:44

because if they can't book travel we

5:45

don't get paid so that's certainly a

5:47

scenario that is going to be a problem

5:49

for organizations so at this point I

5:52

know this information that people can't

5:54

check out so that's the piece of

5:55

information I know at the same time

5:57

though I get a slack message and what

6:00

this Slack message is showing me is a

6:02

monitor alert and it says critical on

6:04

travel app high node CPU so that's

6:07

interesting I know that or I've received

6:09

a phone call from somebody that said hey

6:11

people can't check out so I have that

6:13

piece of information and now I have a

6:14

piece of information that seems that

6:16

there's a high CPU related to the travel

6:18

app and those are the two pieces of

6:19

information I know and so now I need to

6:22

go ahead and start to do some detective

6:23

work and figure out what's going on and

6:25

more importantly how to resolve it and

6:27

so that's where I'm gonna go ahead and

6:28

start to use Sumo Logic to help me so let

6:30

me go ahead and switch windows here

6:35

and I'm gonna jump into my travel logic

6:38

operational overview so at this point

6:41

what I'm looking at is a dashboard that

6:43

I've created that is going to allow me

6:45

to look at different operational events

6:47

and so what we're able to see here for

6:49

example and even if we don't know the

6:51

technical details on what these things

6:53

are just by taking a glance at these we

6:55

see reporting nodes you know they're

6:58

certainly reporting so that's a good sign

7:00

and we see these green indicators that

7:03

there's no errors and so in this case

7:04

green is good conversely red is bad and

7:08

so we see here we see that the response

7:10

times are a little bit higher than what

7:12

we would expect and more importantly the

7:14

check out service has produced about

7:16

4,000 errors in the last 60 minutes well

7:18

I know there's a or I believe there's an

7:20

issue with the check out service and

7:22

this is kind of confirming that there

7:24

have been a bunch of errors or

7:25

specifically about 4,000 of them in the

7:27

last 60 minutes so this gives me a

7:29

starting point to start to dig in and

7:31

now I want to go ahead and continue

7:33

digging in seeing what what's going on

7:35

and more importantly can I resolve it so

7:38

I'm gonna go ahead and click on this

7:39

icon here which is going to show in the

7:40

search basically I'm taking this panel

7:42

the dashboard I'm just gonna expand it

7:44

out and so now what I'm looking at are

7:47

the service details related to the

7:51

travel logic website and so I'm going to

7:52

go through some of the panels on this

7:54

dashboard view so over here we have our

7:57

list of our reporting on bookings and we

8:01

have our successes and fails and we see

8:03

here things were very successful at

8:06

8:20 people were able to book at 8:38

8:08

8:40 8:50 and so on and then right

8:11

about look about 8:57 or so

8:13

our successful bookings dropped and we

8:16

have a bunch of fails which is well that

8:19

seems to be a problem so we're looking

8:20

at this occurred at about 8:56 so okay

8:23

that's interesting at about 8:56 things

8:25

seem to switch over so to speak to going

8:28

bad and let's take a look at some other

8:30

panels here and see if this corroborates

8:32

our information or or helps in any way

8:36

if we look at this errors live for the

8:38

last seven days we see there are a

8:40

couple errors a few days ago we see that

8:42

there were errors some three days ago

8:44

some errors yesterday and then we

8:47

see there was a space where there were

8:48

no errors at all over the last seven

8:50

days and then once again about 8:55 or so

8:53

there started to be errors today so that

8:56

seems to kind of be in sync with the

8:57

time so it looks like maybe about 8:55

9:00

or so something went a little awry

9:03

or at least errors started to be reported

9:05

if we go ahead and look further down

9:07

here errors by node we're only going to

9:10

go ahead and discuss on a little bit

9:11

source name but if I mouse over these I

9:13

can see that there were errors related

9:16

to the cs-team.travel-checkout so

9:19

that's interesting there appears to be

9:21

errors specifically related to this

9:23

section so that's good to know
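[Editor's note] As a rough illustration of the rollup behind an errors-by-node panel like this one, here is a small Python sketch; the log records and source names are invented for the example, not taken from the demo data:

```python
from collections import Counter

# Hypothetical log records; in Sumo Logic each message carries
# metadata fields such as the source name, set at collection time.
logs = [
    {"source_name": "csteam.travel-checkout", "message": "ERROR access denied"},
    {"source_name": "csteam.travel-checkout", "message": "ERROR invalid SSL server certificate"},
    {"source_name": "csteam.travel-search", "message": "INFO search completed"},
    {"source_name": "csteam.travel-checkout", "message": "INFO starting cart checkout"},
]

# Count only error messages, grouped by source name -- the same
# rollup an "errors by node" dashboard panel displays.
errors_by_node = Counter(
    entry["source_name"] for entry in logs if "ERROR" in entry["message"]
)

print(errors_by_node.most_common())
```

Mousing over a panel slice corresponds to reading one of these counts.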

9:25

if I go over here and look at gateway

9:27

latency what I have here is a

9:30

essentially a threshold and an outlier

9:34

graph that's showing the information and

9:36

so what I'm what I'm able to interpret

9:38

from this is that our gateway latency was

9:42

determined to be within a certain

9:44

threshold range and as long as it's in

9:46

that range it would be considered to be

9:47

flat which is good so that things were

9:49

humming along as they should be and then

9:53

at about 8:55 or so we see that the

9:56

Gateway latency jumped up and we see

9:59

these pink triangles that represent

10:00

outliers and so basically we were or the

10:03

system was expecting some sort of norm

10:05

and then at about 8:55 8:56 the latency

10:09

went outside that norm okay that's

10:11

interesting as well and then finally we

10:13

have our CPU and total memory panel up

10:16

here and what we see here is if we look

10:19

at this at about you know this 8 o'clock

10:22

8:56 time or so we see an increase in or

10:26

start to see a spike in some of the

10:28

information in here and so we know that

10:31

due to the slack message I received that

10:33

something related to the CPU was was

10:38

high we don't know why it was high but

10:40

we also just know that the two pieces of

10:42

information seem to occur around the

10:44

same time checkout fails started to

10:46

occur and spikes in CPU usage started to

10:49

occur as well so what we're going to do

10:52

is I'm going to drill into the CPU in

10:54

total memory because this is the one

10:55

piece of information I know where I can

10:57

start to look at something a little more

10:58

specifically since I had that slack

11:00

message that

11:00

help point me in this direction so I'm

11:03

going to go ahead and drill in here and

11:09

so now what I'm looking at is a metrics

11:12

view that is showing information about

11:14

CPU and total memory usage and so right

11:17

here we see the CPU information and we

11:20

see the memory usage information and so

11:22

we see both of these pieces of

11:24

information within this graph and so I

11:27

don't really need to look at the memory

11:28

usage because the slack message I

11:30

received indicates something was up

11:32

related to the CPU so I'm going to go

11:34

ahead and click on this guy and just

11:35

essentially turn off or comment off the

11:38

memory usage since it's not really

11:39

relevant for me at this time and so what

11:42

I'm looking at now is the CPU usage and

11:45

so we still see those spikes once

11:48

again about this 8:56 time and so what I

11:51

want to do is right now I'm looking at

11:52

metrics and I want to go ahead and look

11:55

to identify why things were going on

11:57

with the metrics those are helping me to

11:59

identify what was going on and in this

12:01

case what was going on was increased CPU

12:03

usage but like I said I want to see why

12:06

it was going on so what I'm going to do

12:08

is I'm going to start to overlay some of

12:09

my logs and so I can look at essentially

12:12

a unified logs and metrics environment

12:15

so I'm going to go ahead down here and

12:16

start to define the logs that I want to

12:18

take a look at and if you remember I was

12:21

looking at source category cs-team

12:24

travel checkout so I'm

12:27

gonna go ahead and start to look at

12:28

those logs I'm also gonna look more

12:30

specifically in the logs rather than

12:31

looking at all the logs together I want

12:33

to look for errors that occurred within

12:35

the logs so I'm gonna go ahead and look

12:36

for the keyword of error and I'm

12:39

gonna hit enter and what's gonna happen

12:41

now is I'm gonna have a you know that

12:44

unified logs and metrics environment that

12:46

I was just referring to and I'm going

12:47

to see the log information up here so

12:50

what I'm looking at right now at the

12:51

very top in this orange bar is

12:53

essentially a heat map which is showing

12:56

the information related to this log

12:59

information and so what I see here is

13:01

prior to this 8:55 time or so there were

13:04

actually no messages that had the string

13:07

of error in it
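[Editor's note] The heat map described here is essentially a per-minute count of matching messages. A minimal Python sketch of that bucketing, using invented timestamps around the incident window:

```python
from collections import Counter
from datetime import datetime

# Invented timestamped log lines around the 8:55 incident window.
logs = [
    ("2018-07-10 08:54:10", "INFO checkout complete"),
    ("2018-07-10 08:55:05", "ERROR SSL handshake failed"),
    ("2018-07-10 08:55:42", "ERROR access denied"),
    ("2018-07-10 08:56:07", "ERROR SSL handshake failed"),
]

# Bucket messages containing "error" into one-minute slices -- the
# aggregation the heat map bar visualizes.
slices = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%H:%M")
    for ts, msg in logs
    if "error" in msg.lower()
)

print(sorted(slices.items()))
# Slices with no matching messages (like 08:54) simply have no entry,
# which is why the bar is blank before the incident.
```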

13:08

however at about 8:55 or so then we

13:10

started to see some errors in our logs and

13:13

the orange bar represented up here is

13:16

kind of color sensitive so the lighter

13:18

the color the less frequent the messages

13:20

which is why prior to 8:55 there

13:22

are actually no messages and why the bar

13:24

is white or blank but as we see here there

13:28

started to be an increase in messages

13:30

so what I'm going to do is I'm going to

13:32

go ahead and look at the logs related to

13:34

one of these time slices as we call them

13:36

these are essentially one minute chunks

13:38

right now of time and I'm going to take

13:40

a look at those logs and see if I can

13:41

see in the logs why things were haywire

13:45

so to speak so I'm gonna go ahead and

13:47

click on the heatmap the time slice and

13:50

I'm gonna do a shift click and that's

13:52

going to go ahead and open the logs now

13:55

in the log browser and so what I'm

13:58

looking at now is all the errors that

14:01

occurred with the travel check out

14:04

source category and so this is giving me

14:07

all these errors and as I see here there

14:09

were about 300 errors during this one

14:11

minute chunk and so I could see some of

14:13

these errors here and if I look at them

14:14

I see access denied

14:16

I see an error down here related to the

14:18

SSL server certificate so that's

14:21

interesting I'm seeing the errors but

14:23

I'm not seeing what caused them yet and

14:24

that's really what's going to be most

14:25

important to me and more importantly

14:27

gonna help me identify how to solve the

14:29

problem so I'm going to do two things at

14:31

this time the first is I'm going to

14:33

expand my search out so rather than just

14:35

looking for errors I'm gonna look for

14:37

all the events that occurred in these

14:39

logs I'm going with the idea that

14:42

something caused all these errors but

14:45

what caused the error would may have not

14:47

necessarily been an error itself so I'm

14:48

just basically expanding my search out
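[Editor's note] The idea of widening the search, dropping the error keyword while keeping the source category scope, can be sketched like this in Python; the records and category names are made up for the example:

```python
# Invented records tagged with a Sumo-style source category.
logs = [
    {"category": "cs-team/travel-checkout", "message": "starting cart checkout"},
    {"category": "cs-team/travel-checkout", "message": "ERROR access denied"},
    {"category": "cs-team/travel-search", "message": "search completed"},
]

def search(records, category, keyword=None):
    """Filter by source category, optionally by a keyword; dropping
    the keyword widens the search, as described in the demo."""
    hits = [r for r in records if r["category"] == category]
    if keyword is not None:
        hits = [r for r in hits if keyword.lower() in r["message"].lower()]
    return hits

narrow = search(logs, "cs-team/travel-checkout", keyword="error")
wide = search(logs, "cs-team/travel-checkout")  # all events, not just errors
print(len(narrow), len(wide))
```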

14:50

the other thing I want to do is I want

14:52

to expand my time search out so right

14:54

now I'm looking at a one minute chunk of

14:56

time where I know there were errors but

14:58

kind of using the same idea with

15:00

expanding or removing the error string I

15:02

want to go ahead and span my time out

15:04

because I for this minute segment I know

15:06

that there are errors but I'm looking to

15:08

see what caused this error so I'm gonna

15:10

go ahead and expand my search and look

15:11

at the last 60 minutes and so what I'm

15:14

doing now is I'm looking at the logs for

15:16

the travel check out source category

15:19

which is a naming convention that's used

15:21

to identify the data and now I'm

15:24

able to view all the logs that occurred

15:26

over

15:27

like I said the last 60 minutes and I see

15:29

there's about 57,000 logs which is you

15:31

know a fair amount and now I could go

15:34

ahead and start to scroll through these

15:35

and look at 2200 pages of logs but

15:38

that's really not an efficient use of

15:39

time and as I'm looking through these

15:42

messages of logs I do see that some

15:44

successful events occurred so I see here

15:46

the checkout service did occur and you

15:48

know I see some of those in here and

15:50

then I see some errors so still at

15:52

this point I'm looking at a mixture

15:53

of all my information and so like I said

15:56

I can go ahead and start to scroll

15:58

through 2,200 pages of data but that's

16:00

just not sensible in any way and since

16:04

you're using Sumo Logic you can start

16:06

to take advantage of some of the

16:07

features we have in there and one of the

16:09

features is advanced analytics and

16:11

particularly the LogReduce option and

16:14

I'm gonna go ahead and click on

16:15

LogReduce and when I do what's going to

16:17

occur is essentially it's gonna take all

16:19

those messages so approximately it was

16:21

about 58,000 messages and look at them

16:24

and look for pattern recognition

16:26

essentially and distill the messages

16:28

down into patterns that make sense so

16:31

it's gonna take all those messages and

16:33

combine them down into hopefully just a

16:37

fair handful of messages so I'm just

16:39

gonna let this finish we'll just take

16:40

another moment it's just about done now
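[Editor's note] Sumo Logic's actual LogReduce uses fuzzy pattern recognition, but a crude approximation conveys the idea: mask the variable tokens in each message and count the resulting signatures. Sorting those counts ascending is the "flip the count around" step that surfaces a one-off event. The sample messages here are invented:

```python
import re
from collections import Counter

# Invented sample standing in for the ~58,000 checkout-service messages.
logs = [
    "starting cart checkout process txn=1001",
    "starting cart checkout process txn=1002",
    "starting cart checkout process txn=1003",
    "ERROR access denied user=77",
    "ERROR access denied user=81",
    "travel logic app starting cluster checkout-service version 1.14-dev",
]

def signature(line):
    """Crude pattern signature: collapse digit runs so messages that
    differ only in IDs or version numbers fall into one bucket."""
    return re.sub(r"\d+", "*", line)

patterns = Counter(signature(line) for line in logs)

# Default view: most frequent patterns first.
print(patterns.most_common(1))

# Flipped view: least frequent first, which floats the one-off
# event -- here the stray dev build -- to the top.
rare_first = sorted(patterns.items(), key=lambda kv: kv[1])
print(rare_first[0])
```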

16:43

and so what I see at the very top is

16:46

there were 7,000 or about 8,000

16:48

messages that match this pattern and so

16:51

this is actually although I know that my

16:54

service so I know kind of what to expect

16:56

here these are a good sign because these

16:58

have a transaction ID so this is kind of

17:00

an indication that those checkouts did

17:02

occur here we see starting cart checkout

17:04

process and those are all kind of good

17:05

indicators but down here we see some

17:08

errors and we see about 1,500 of them

17:10

1,500 of these as well and so I'm in the

17:14

search mode of trying to find out once

17:17

again what caused these errors and so

17:19

what I want to look at is I want to flip

17:22

these patterns around so rather than

17:23

looking at events that occurred 8,000

17:25

times 7,000 times I want to look for

17:27

events that were probably more singular

17:29

or maybe less frequent my thinking or my

17:32

hypothesis being something caused those

17:35

errors and it was probably a singular

17:37

event

17:37

a bit of a leap but since I've done this

17:40

demo a few times I can make that

17:41

assumption so what I'm going to do is

17:43

I'm simply going to flip my count around

17:44

so rather than looking at the most

17:45

frequent I'm gonna look at the less

17:47

frequent and what I see right away is

17:50

this first line right here and what I

17:52

see is travel logic app starting cluster

17:54

the checkout service and I see version

17:57

1.1 for dev well this was a production

18:00

system and development code was loaded

18:02

on it and as a result that's what caused

18:04

these errors so somebody made a human

18:07

error let's say and just loaded the

18:10

wrong set of code and that caused the

18:12

SSL Certificates to fail and as a result

18:14

that caused the checkout service to fail

18:16

and kind of tada so what I wanted to

18:19

show here is taking sumo logic and using

18:23

it to certainly your advantage of course

18:25

and using it to troubleshoot and so I'm

18:28

gonna go ahead and jump back into slide

18:29

deck to kind of recap highlight or review

18:33

what we just did so let me go and grab

18:35

that slide there we go so what we did in

18:40

this demo was monitoring troubleshooting

18:43

and we looked at three different phases

18:45

of that the first part was we looked at

18:47

the alerts so we looked at notifications

18:49

of critical events in this case we saw a

18:51

slack message as an indication that an

18:54

event had occurred and we were able to

18:56

go into the dashboard and use that view

18:59

and kind of a simplistic view in a good

19:01

way to identify something taking place

19:04

where we were just able to go into the

19:05

dashboard and see red bad green good

19:07

and go ahead and help you know start to

19:10

drill down and see what was going on

19:12

another component of what we did was

19:14

using metrics to identify what's going

19:16

on so we were able to ingest our

19:19

hardware information and bringing that

19:21

CPU usage that memory usage certainly

19:24

there are other metrics that can be

19:25

brought in but we were able to use those

19:27

and identify what was going on which

19:29

was the increased CPU usage and

19:32

then I was gonna say more importantly

19:35

but if equal importance we were able to

19:37

use the logs to identify why it was

19:39

happening and so we were able to take

19:41

all that and put it together and

19:42

ultimately identify the issue and so

19:45

that's what I wanted to demonstrate

19:46

right there

19:47

so I'm going to show another

19:49

couple slides here and then we'll go

19:51

back

19:51

to some demonstrations the sumo logic

19:55

dataflow certainly want you guys to

19:57

understand how the dataflow works with

19:59

Sumo Logic and so these are split up

20:01

into three different areas the first one

20:03

would be data collection so you're going

20:06

to be using collectors to bring your

20:09

sets of data into Sumo Logic we'll go

20:12

over collectors in a brief view

20:14

in just a moment but the short

20:16

version of this story is essentially

20:18

your data needs to be brought into sumo

20:19

logic which I think makes sense

20:21

once the data is in Sumo Logic then you

20:23

can start to search and analyze that

20:25

data in step number two so now your data

20:27

resides in sumo it's sitting there and

20:29

now you need to do something with it

20:31

so you're going to use operators to go

20:33

ahead and start to dig through that data

20:35

and pull out the things that are

20:36

important to you you'll be able to use

20:38

the charts to represent kind of that

20:40

visual component of what's going on with

20:41

your data and that's going to be a great

20:43

way to analyze you know what's taking

20:45

place and then finally visualizing and

20:48

monitoring is that alerts and dashboards

20:49

component and so the idea there is that

20:52

even if you don't know your set of data

20:55

strongly or or you want somebody else

20:57

outside your team to be able to take a

20:59

look and see what's going on these

21:01

dashboards are going to be a great way

21:02

to at a glance see what's taking place

21:05

and so you could envision putting

21:07

those up in a NOC or a command center

21:09

so that at a glance you could say oh you

21:11

know everything's green that's good oh

21:13

something's red that's bad and even if

21:15

like I said if you don't know your data

21:16

that well just using that kind of color

21:18

scheme you can help identify what what's

21:21

taking place and then the alert

21:22

component as well it goes hand-in-hand

21:24

where you can be notified outside of

21:26

sumo logic so as I mentioned earlier at

21:28

the start of the call ideally we don't

21:31

want you just sitting in sumo logic

21:33

staring at a dashboard 24/7 so with the

21:36

alerts you can still be notified of

21:37

critical events and as you define those

21:40

events to be critical and then you can

21:42

go ahead and take action on them and

21:43

then log in to Sumo and start to do

21:45

your troubleshooting or searching and

21:47

analyzing to see what's taking place
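[Editor's note] A webhook alert like the Slack message in the demo is ultimately just a JSON body posted to a URL. A minimal sketch of assembling one; the function and field names here are illustrative, not Sumo Logic's actual template variables:

```python
import json

def build_alert_payload(alert_name, severity, source, value):
    """Assemble a Slack-style webhook body for a triggered monitor.
    Field names are illustrative; a real Sumo Logic webhook connection
    fills a user-defined JSON template from the triggering search."""
    text = f"{severity.upper()} on {source}: {alert_name} (value={value})"
    return json.dumps({"text": text})

payload = build_alert_payload(
    alert_name="High node CPU",
    severity="critical",
    source="travel-app",
    value=97,
)
print(payload)
# A real integration would POST this body to the webhook URL,
# e.g. with urllib.request.
```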

21:50

another slide here regarding sending

21:53

data to sumo logic I don't want to get

21:56

too bogged down into the details of this

21:58

we will cover this more specifically in

22:00

level three and actually setting up the

22:02

different collectors but what I want to

22:03

show you here is really

22:05

the variety of different ways that data

22:07

can be ingested into Sumo Logic and so

22:09

really the summation here is there's

22:12

a bunch of different ways and as long as

22:13

data is out there and is readable by a

22:17

human it can be ingested in Sumo Logic

22:19

and we can treat it and handle it

22:21

perfectly so there's a variety from

22:24

environments that can be brought into

22:26

Sumo Logic maybe you're using cloud

22:30

services like Amazon and using CloudWatch

22:33

or CloudTrail or any of those popular

22:35

ones an S3 bucket we can go ahead and

22:37

bring that data into sumo logic maybe

22:40

you're going to use an HTTP connection

22:42

maybe you can bring your information via

22:44

syslog and you'll have the ability to

22:46

install these collectors or actually

22:48

just point your data to them to the

22:51

collectors depending on if you're using

22:53

a cloud watch or the s3 properties or

22:56

not but like I said we cover this a

22:58

little more in more detail in level 3

23:00

but just for today I think it's just

23:02

important to note that you know wherever

23:04

your data is out there it can be brought

23:06

into Sumo Logic and then metadata I've

23:10

alluded to metadata a couple times

23:11

already but we haven't really discussed

23:13

what it is and it's certainly important

23:14

that you understand what it is and how

23:16

it's used

23:17

so metadata are going to be tags that

23:19

are associated with each log message so

23:22

essentially you have all these logs out

23:23

there and you're going to bring them

23:24

into Sumo Logic and then you need a way

23:26

to sort through them or look through

23:28

them and you're going to use some sort

23:30

of identification method to do so and so

23:32

there's going to be some that are

23:33

essentially pre-built or pre labeled so

23:36

you might want to go ahead and search

23:37

for your data based on the name of the

23:39

collector which would be that piece of

23:41

software then would be installed that

23:42

would be grabbing your data and bringing

23:44

it to Sumo Logic you might use the

23:46

source name so I believe we just

23:48

used the source name in our demo so

23:51

that's a way or you know another option

23:52

to go ahead and identify your data and

23:55

allow somebody to search through it and

23:57

then finally there's going to be a

23:58

source category and this is what we're

24:00

going to recommend you use and it's

24:03

gonna be freely configured which is

24:05

really where it's gonna help we'll go

24:06

ahead and take a look at some source

24:08

categories in just a few moments in the

24:09

demo but the way it's going to be used

24:11

is to identify that data and by

24:14

providing a good naming convention it's

24:18

going to make

24:18

it really easy for your users to locate

24:20

their data so rather than looking for

24:22

data on system 2849 they could go ahead

24:26

and look specifically for production

24:28

Apache data for example and we'll use

24:30

that Apache example throughout the rest

24:32

of today so that's why I bring it up now
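[Editor's note] A hierarchical source category naming convention pays off because queries can then scope with wildcards. A small sketch of that idea; the category values are invented:

```python
from fnmatch import fnmatch

# Hypothetical source categories following an
# environment/app/type naming convention.
categories = [
    "prod/apache/access",
    "prod/apache/error",
    "dev/apache/access",
    "prod/mysql/error",
]

# A scope such as _sourceCategory=prod/apache/* in a Sumo query
# behaves much like a glob match over these tags.
prod_apache = [c for c in categories if fnmatch(c, "prod/apache/*")]
print(prod_apache)
```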

24:33

so these are going to be typically

24:35

configured when the data is ingested

24:39

this is when the source category

24:40

specifically will be set up at that

24:42

point we would recommend a good naming

24:45

convention so that it's something that

24:46

you can at a glance recognize

24:49

what that data consists of we would

24:52

cover the naming conventions in level

24:54

three so if you're interested and want to

24:56

learn more about proper naming

24:58

conventions best practices things like

25:00

that I will discuss that in a future

25:02

webinar so let's go ahead and take a

25:05

look at what data can I analyze so you

25:08

now have access to Sumo Logic and you

25:11

want to see what data is out there for

25:13

you and so now let's go ahead and dig

25:15

into there there's actually two

25:17

ways to do so and I'll go ahead and

25:19

demonstrate those right now and so let's

25:22

go ahead and look at them so the first

25:25

one is going to be exploring your

25:26

collectors and so let me show you in app

25:28

what that looks like so let me just find

25:31

my window here so what I've done now is

25:38

I've logged into sumo logic hopefully

25:41

you guys have seen this environment

25:43

before but if you haven't this is the

25:44

first time when you log in you

25:46

would be brought to this home page we'll

25:49

go over some of the tabs in a little bit

25:51

but I'm gonna go ahead and just start

25:52

to dive in here and show you where the

25:55

information regarding the data that you

25:57

want to analyze can be found so the

25:59

first way you can go ahead and do so is

26:01

to actually look at the collectors that

26:03

are available to you and so to do so I'm

26:06

going to go to manage data and I'm going

26:07

to go to collection and what I'm going

26:09

to see are all the different collectors

26:11

which are those pieces of software

26:14

essentially that are grabbing the data

26:15

and now I can go ahead and view it and

26:18

so for example let's say I want to look

26:19

at Apache data which as I mentioned

26:21

earlier is what we'll be using today I

26:22

can go ahead and just type Apache and

26:25

I'm going to see all the different

26:27

sources of Apache data that are

26:29

available and so I can go ahead and

26:32

grab one of these and dig in and

26:34

analyze them and I'll show you how to do

26:36

that through the source queries in a few

26:39

moments so that's one way to see what's

26:41

available to you the other way is you

26:43

can just simply create a query on your

26:45

own so to do so I'm gonna go ahead

26:47

and click on new and I'm gonna do a new

26:49

log search and I'm gonna have my query

26:53

window up here and I'm gonna go ahead

26:55

and just enter a very simple query first

26:58

I'm gonna go ahead and enter the star

27:00

which is gonna represent that I want to

27:01

look at all my data and then I'm going

27:04

to add a new line and I'm gonna use a

27:05

pipe to recognize that that's a new line

27:08

and I'm gonna do a count by source

27:11

category and so now what I'm looking at

27:19

is pretty similar to the information I

27:21

was looking at in the collection window

27:23

but just a different view of it and so

27:25

here I see all the different source

27:27

categories that exist and I see a count

27:30

for the amount of messages that have

27:32

occurred in those logs for the last 15

27:36

minutes so we're looking at a 15-minute

27:37

chunk of time and so I see here you know

27:40

there were certainly a lot of Symantec

27:42

firewall logs and a lot of Cisco logs

27:46

not too many pager duty and you know you

27:49

can certainly just get a feel for where

27:51

or what's making up a large percentage

27:53

of our data if I look down here I see labs

27:55

Apache access representing 54,000 log

27:58

messages and like I mentioned earlier

28:00

this is the source category that we're

28:02

gonna go ahead and use for today so

28:04

that's an easy way that you can go ahead

28:06

and see what data is available for you
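The query typed in this demo is just a wildcard, a pipe, and an aggregation on the built-in `_sourceCategory` metadata field:

```
*
| count by _sourceCategory
```

Run over the selected time range (15 minutes by default), this returns one row per source category with a `_count` column — the same inventory of your data you would get from the collection page.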

28:09

let's go ahead and jump back into the

28:11

PowerPoint sorry I keep doing that

28:14

here it is and so now how can I

28:18

analyze my data so now you've

28:20

gone ahead and found the source

28:22

category you want to go ahead and start

28:24

to go ahead and analyze that data so how

28:27

do you do so so variety of different

28:29

ways to do so in the system I'm going to

28:31

show you a couple right now so I'm gonna

28:32

go ahead and jump back into actually

28:34

I'll show you a couple slides and then

28:35

we'll do in app so one way is you can

28:39

simply see what other people have done

28:43

within the environment so you can look

28:45

for shared content so basically I can

28:47

go ahead and see has anybody else looked

28:50

at the set of data have they created

28:51

queries or dashboards that might be

28:53

valuable to me and to do so I'm gonna go

28:56

ahead and jump back into the app here

28:57

and so I'm gonna go ahead and I'm gonna

29:05

go ahead and take a look at shared

29:09

content so up here on top of the screen

29:12

we actually have a couple different ways

29:13

to look at shared content on the

29:16

left-hand side of the screen I'm gonna

29:18

have four different options and let me

29:19

go through these now because this is as

29:21

good a point as any the first is I can

29:24

see some recent things that have been

29:25

shared with me recent things I've done

29:28

very similar if you've used most

29:30

products with a recents feature but

29:32

basically it's gonna be the things

29:33

that you've done you know recently kind

29:34

of a quick way to get access to those

29:36

that's a good way to see things that

29:38

you've done recently another's to look

29:41

at your favorites so you've created a

29:43

query maybe previously and you saved it

29:46

and you want to mark it as a favorite

29:47

and you can go ahead and do so in

29:48

sumo logic similar to the way you

29:50

would in the browser it's like a

29:52

bookmark essentially you have a personal

29:55

folder so here you can go ahead and view

29:57

all those different queries that you've

29:58

saved out both queries dashboards really

30:01

anything within sumo logic that you've

30:02

saved out and so here I've gone ahead

30:04

and I've created some Apache queries

30:07

before and some Microsoft Office 365

30:10

these are things I've done previously

30:12

but I want to in this case I want to see

30:14

what other people have done because I

30:15

want to take advantage of that so I can

30:18

go ahead and click on this icon here

30:19

which is gonna represent the library and

30:22

now I'm able to see all the other things

30:24

that other users have done now in this

30:27

case there's a lot of Apache folders

30:29

because I'm on a training site and we've

30:31

had other students use this site but

30:34

that's perfectly fine so let's say I

30:35

want to go ahead and look at one of

30:36

these and I want to see what

30:38

information is in here if I go ahead and

30:40

click on Apache

30:41

I'm gonna go ahead and see that there's

30:43

all these different queries the orange

30:45

represents queries and I'm gonna see the

30:47

green that represents dashboards and

30:48

there's all these different dashboards

30:50

and now I can go ahead and look at these

30:52

so let's say I see this Apache overview

30:54

well that sounds kind of interesting

30:55

what is it though let me go in and click

30:57

on it

30:59

and now what I get is an overview of in

31:03

this case visitor locations so in this

31:05

case I have a map that's been created

31:07

for me and I'm able to view the visitor

31:10

locations regionally and so that's kind

31:12

of cool one of the cool things about

31:14

these maps we'll look at the maps in a

31:16

little bit later but I can zoom

31:18

in these so let's say right now I'm able

31:20

to say you know there's been a lot of

31:22

traffic or visitor locations in the

31:25

Northeast let me go ahead and look more

31:27

specifically where those are and I can

31:28

actually drill into the map here and I

31:30

can see more specifically you know

31:32

pittsburgh area has this many and

31:33

washington has this many and so that's

31:35

kind of cool

31:36

and so you know maybe this is something

31:38

that i want to take advantage of

31:39

somebody did this hard work to go ahead

31:40

and create this map and maybe i want to

31:43

go ahead and modify it and what i can do

31:45

is i can click on this icon here that's

31:47

just showing search and what it's

31:49

going to do is bring that query

31:51

into the search window and it's going to

31:53

allow me to customize it

31:54

now this query language up here is going

31:56

to be maybe a little bit foreign to you

31:58

so this is certainly level one the

32:00

query language itself we would

32:02

cover in level two but this is a good

32:04

point where you can go ahead and see hey

32:05

this is what somebody else did this is

32:07

how they set it up and you can start to

32:09

use this on your own so maybe you want

32:11

to start to change some of this

32:12

information so rather than looking only

32:14

at say you know country name of the

32:18

united states you want to look at i

32:20

don't know let's see if there's any data

32:22

for actually i don't think there is any

32:28

but so you could go ahead and start to

32:30

modify in here different information so

32:36

let's go ahead and show another way

32:38

that you can take advantage of some

32:43

methods to look at your data rather than

32:46

just a straight query and let me go

32:47

ahead and jump back to the slide to show

32:50

what i want here we go in the slide

32:54

so the other option is to use our app

32:58

catalog and so what is the app catalog

33:00

and as it says up here if you want to

33:03

read it the apps are designed to

33:04

accelerate your time in sumo logic and

33:06

what they are is essentially

33:08

pre-configured searches and dashboards

33:10

for the most common use cases so the way

33:12

the apps are

33:13

work is and let me go ahead and jump

33:16

into the app catalog right there here we

33:17

go and so the way this is gonna work is

33:19

one of the cool things or as cool as

33:22

logs can be is that they're consistent

33:24

in the way they're designed so an Apache

33:27

access log is always going to be in the

33:29

same format regardless of whether you're

33:31

using it in your organization or another

33:34

organization is using it they're always

33:35

going to be in the same format we'll

33:37

look at the format's in a little bit but

33:39

so for example it always has

33:40

the IP address

33:41

it always has a timestamp following etc

33:44

etc and since we know that that's

33:47

going to take place we know that the

33:51

formatting is always going to be the

33:52

same we can take advantage of that and

33:54

as a result you can take advantage of

33:56

that so let's say I want to go ahead and

33:57

start to look at these predefined

34:00

example searches and dashboards so I'm

34:02

going to go ahead and click on Apache

34:03

and I see all the different types of in

34:07

this case the orange once again is the

34:08

queries and the green are the dashboards

34:11

and I see all these different types of

34:13

pieces of information that are available

34:15

to me and so I can go ahead and look at

34:17

some examples so let's say I want to go

34:18

ahead and look at the Apache overview I

34:21

can actually preview the dashboard up

34:23

here and I see this overview and once

34:25

again it's another map but I see you

34:28

know hey this is kind of interesting oh

34:29

this visitor access types yeah that will

34:31

be helpful to see visitor platforms and

34:33

so now I want to go ahead and take

34:35

advantage of these queries in order to

34:37

set it up it's gonna be really simple

34:38

and it could be really complicated

34:40

because I could go through the query and

34:42

and as you saw with that query language

34:44

I could go ahead and write that query

34:45

but that's pretty advanced so what I can

34:48

do to streamline the process as I click

34:50

Add to library and all I'm gonna need to

34:53

do is identify where my data is so I'm

34:55

going to tell it hey look in labs Apache

34:58

access because that's where my lab data

35:01

is and I'm gonna want it to reference

35:04

that environment the other thing I do is

35:07

just need to give it a name as you can

35:09

serve all that see there were a lot of

35:10

Apache so I'm gonna go ahead and just

35:11

give it a name that's hopefully unique

35:13

so I don't think there's a triple-a

35:15

Apache yet and I'm gonna click at the

35:17

library it's gonna take a moment to run

35:19

and once it's done it's going to allow

35:21

me to then go ahead and look at those

35:24

dashboards and

35:26

with the set of data that I just

35:29

referenced so here we see I'm now

35:31

looking at Triple A Apache I also have

35:33

it available to me if I look in my

35:36

library or in my personal library so

35:39

there it is a couple different ways to

35:40

look at it and now I can go ahead and

35:42

look at one of these so maybe let's say

35:44

I want to look at this time I look at

35:46

visitor access types when I click on it

35:49

it's going to go ahead and take the

35:52

query and essentially just overlay my

35:54

sets of data in there so now it's

35:55

looking specifically at lab Apache

35:57

access and it's showing my visitor

35:59

platforms and so this is a way that I

36:01

can go ahead and start taking advantage

36:03

of similar logic without doing any of

36:05

the heavy lifting that it comes with the

36:07

query and so we certainly encourage you

36:09

to take advantage of those app catalog

36:11

items rather than a blank slate as you saw it

36:13

was you know pretty easy to set up so

36:15

would recommend doing so let's go ahead

36:19

and jump back into the slide deck sorry

36:22

I keep doing that there we go and let's

36:25

go ahead into this portion so if you were

36:29

doing the tutorial at this point you

36:33

would go ahead and do some of these

36:35

steps as far as actually installing a

36:37

sumo logic app on your own logging in and

36:39

certainly searching for existing content

36:41

and so as I mentioned at the beginning

36:43

of the call I would certainly encourage

36:44

you to go ahead and do these

36:46

hands-on exercises on your own

36:48

I think it'll certainly be beneficial

36:49

but I'm gonna skip this portion just

36:52

kind of in the interest of time and as

36:53

you just saw me go through all those

36:55

steps so we kind of did that demo

36:57

together so now let's go ahead and get

37:00

into the data analytics side of things

37:03

regarding queries so taking kind of a

37:07

step back in our scenario you now have

37:09

all this data loaded into sumo logic you

37:11

saw what data is relevant to you or in

37:14

this case we're going to be looking at

37:15

it once again that Apache information

37:17

and now you need to go ahead and start

37:18

to query on it and so how does the query

37:20

work how does it work more particularly

37:22

in sumo logic so within the sumo logic

37:25

environment you're going to be using

37:27

keywords and operators that are going to

37:29

be separated by pipes and built on top

37:31

of one another so you can go ahead and

37:34

envision this model here where you have

37:39

a very big funnel

37:40

and all your data is at the top of the

37:41

funnel and ultimately you want to have

37:43

the good stuff or the goodness

37:46

essentially come out as your results and

37:47

so the way it's going to work is you're

37:49

going to start to load things into this

37:50

funnel and using these different

37:52

sections of syntax we're going to start

37:55

to go ahead and essentially filter out

37:59

the things that aren't needed so you're

38:02

gonna start with let's go ahead and look

38:03

at a query right here so this is a

38:05

sample query and so we're gonna start

38:07

with the metadata and keywords and so

38:09

for example and I'll start to do this in

38:10

a real example and actually probably in

38:13

a moment here but basically what you're

38:15

doing is you're starting to identify

38:16

this is the set of data I want to look

38:18

at and I want to look at a specific

38:20

keyword once you have that set of

38:23

data you're going to want to

38:24

go ahead and start to parse data out and

38:27

with parsing what you're going to do is

38:29

extract meaningful fields to provide

38:31

structure to your data so essentially

38:32

going to start to label some of those

38:34

fields of data next you're gonna go

38:36

ahead and filter some of the data out

38:38

and here's an example of parsing when

38:40

we'll look at these in just a minute

38:43

you're gonna go ahead and start to

38:44

filter results so you now have

38:46

information and fields that you've

38:48

created now you want to go ahead and

38:50

start to filter on those fields with the

38:53

aggregation you're going to go ahead and

38:54

start to place them into groups let's

38:57

see we have an example some of the

38:58

mathematical operators you can start to

39:00

do count so for example we just did that

39:02

count earlier of source categories we

39:05

can do averages and things like that and

39:07

we'll play with a couple of operators in

39:08

a moment and then ultimately you can go

39:11

ahead and start to format your results

39:12

so now you've you've created a whole

39:15

essentially a set of data and now you

39:17

want to manipulate it to something

39:20

that's a little more friendly so let's
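As a sketch of how those funnel stages stack up in one query — the source category, keyword, and field names here are illustrative, not a query shown on the slide:

```
_sourceCategory=labs/apache/access mozilla
| parse regex "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| where src_ip != "127.0.0.1"
| count by src_ip
| sort by _count
```

Reading top to bottom: metadata and keyword scope the data, `parse regex` extracts a field, `where` filters on it, `count by` aggregates, and `sort by` formats the results.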

39:22

go ahead and jump back into the

39:23

application and go through a more

39:25

specific example and show you what it

39:27

what those those things I just described

39:29

really look like so let's go ahead and

39:32

jump back in here and I'm gonna go to

39:35

the Home tab and I'm gonna go to new log

39:39

search you can also go to a new log

39:41

search over here but I'm gonna be either

39:43

way kind of same difference and so now I

39:45

have a window that's going to allow me

39:48

to start to do a query and so how do i

39:51

do my query and let's start with a very

39:53

simple one so up here we have a search window

39:56

and what I can do is I can just go ahead

39:58

and start to type my query so let's say

40:00

I want to go ahead and start to search

40:01

for Mozilla I can just very simply type

40:04

Mozilla and hit start and here we go and

40:09

now I have my results and so what I have

40:11

on here a couple of things that are

40:13

certainly worth noting on here is first

40:16

I have my search window here

40:18

this search is gonna be case-insensitive

40:21

so that's why even though I entered a

40:23

lowercase m we see capital Ms down here

40:25

and we actually see since we're looking

40:28

for a keyword in our result sets we

40:30

actually see the keyword highlighted so

40:33

that's going to be helpful we also

40:35

have a time window or time selector up

40:37

here and so what's going to happen by

40:39

default is I'm going to be able to

40:41

search for the last 15 minutes of data

40:43

and so these would be the messages from

40:46

last 15 minutes but maybe I want to

40:48

change that and certainly you would many

40:50

times you will want to change the time

40:52

and so to do so you're gonna go ahead

40:53

and click on the time here and you're

40:57

gonna have some predefined time

40:58

categories so maybe I want to look at

41:00

the last 60 minutes or I want to look at

41:02

data from the last three hours etc etc I

41:04

can go ahead and choose those time

41:05

options

41:06

another time option I have is I can go

41:08

to custom and I can use this kind of

41:10

calendaring feature to say all right I

41:12

specifically want to look for logs from

41:14

July 2nd to July 10th and from 3 a.m. on

41:20

the 2nd to you know 5 a.m. and you can

41:24

get very specific with your time in the

41:27

custom so that's a second way to set up

41:30

or configure your time and of course

41:32

there's gonna be a third way because we

41:33

want to give you many options and you

41:34

can kind of see the syntax for it up

41:36

here so when I chose last three hours

41:40

when I click on it I have this time

41:43

syntax of minus 3h and so what I can do

41:46

is I can use that to customize my time

41:48

and so I can more specifically say let's

41:50

say I want to look at the last 5 hours

41:51

of time I can just go ahead and do minus

41:54

5 H maybe I want to do the last 24

41:57

minutes of time I can just simply do

41:58

minus 24 M maybe I want to go ahead and

42:01

do the last 24 minutes to the last 12

42:04

minutes I can go ahead and specify and

42:06

you can

42:07

you see it right here in this case it

42:09

would look only from 9:22 to 9:35 so I

42:12

could really have a couple different

42:13

options of how I'm gonna choose my time
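The relative time expressions being typed here all follow one pattern — a minus sign, a number, and a unit (m for minutes, h for hours):

```
-15m          last 15 minutes
-3h           last 3 hours
-24m          last 24 minutes
-24m -12m     from 24 minutes ago until 12 minutes ago
```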

42:16

the other thing to be aware of in here

42:18

with time and I'm going to set it back

42:19

last 15 minutes is you have this option

42:22

of using receipt time and I just moused

42:24

over it and I think you should be able

42:25

to see what it says there but basically

42:27

it's going to allow you to search for

42:29

the messages by when they were received

42:32

in the system not by the dates parsed

42:34

from them and so the way that works I

42:36

don't want to get too bogged down in

42:38

that but I do want you guys to

42:39

understand how that works

42:40

is what it's looking at in these log messages

42:42

here is when the records were essentially

42:45

created or excuse me when the dates were

42:47

parsed from them so we see here that

42:49

this data was from you know just a few

42:52

minutes ago and it pulls up in here but

42:55

the scenario that could occur

42:57

however is let's say you go ahead and

43:00

this morning you've loaded data from the

43:03

last six months in you could go ahead

43:06

and do your search for the last six months

43:08

or you could go ahead and use your

43:09

receipt time and say I'm going to look

43:11

for that data and since it just got

43:13

loaded you could look at it when it

43:15

got loaded rather than using the dates

43:17

that were in the message itself so just

43:20

a couple different options there and

43:22

typically you would use the

43:24

default here and not use receipt time

43:26

but just want to make sure you're aware

43:28

of that feature let's see other things

43:31

on here there's a this is a busy screen

43:32

but in a good way because you're gonna

43:34

be spending a lot of time in it let me

43:35

go ahead and just so up here we have

43:40

some options as far as saving so we have

43:45

the favorite icon which I mentioned

43:48

earlier which you guys are I'm sure all

43:50

familiar with how to use you have the

43:52

save as so let's say you have created

43:54

this query and I want to go ahead and

43:56

save it for future use I can click Save

43:58

As and I can go ahead and give it a name

44:00

and it will show up and be saved in my

44:02

personal section so that's really an

44:03

option as well

44:07

and then there's a couple other options

44:10

in here you have the ability to go ahead

44:12

and share this query out so you've

44:13

created this query you want to go ahead

44:15

and give it to somebody else certainly

44:17

you can go ahead and direct them through

44:18

the library if you've shared it out in

44:20

that method but the other option is you

44:22

can go ahead and just give them a code

44:24

that they can use or URL so they can go

44:27

ahead and take this URL paste it into

44:28

the browser and they're going to go

44:30

ahead and get the query that you created

44:32

an even cooler option is using this code

44:35

here so I can just go ahead and copy

44:36

this code click done and now if I open a

44:39

new browser or a new search window

44:41

rather I can paste that code in and it's

44:44

going to bring in the query exactly this

44:47

I had it and that's a really cool way to

44:49

go ahead and share the queries via let's

44:51

say if you are collaborating with

44:53

someone via slack for example rather

44:56

than taking your query and cutting in

44:58

and pasting it or even giving a URL you

45:00

can just say hey look at this code

45:01

number you know 1 2 3 or whatever the

45:04

code is and then they would easily be

45:06

brought into that query and they would

45:09

essentially be looking at the same thing

45:10

that you had created which is really

45:12

useful let's see a couple of other

45:15

options here you can go ahead and pin

45:17

this search which is going to

45:18

allow the search to continue to run even

45:21

if you step away from it or log out

45:23

of your system and then Live Tail and

45:26

we'll look at Live Tail in a moment so

45:27

I'll kind of leave that on the back

45:28

burner other pieces of information on

45:31

this screen there so there's a lot going

45:33

on so I want to make sure that we're

45:34

aware but I'm just going to redo my

45:35

search here just to clean this up for a

45:37

second in the middle here we have a

45:40

histogram which is showing distribution

45:42

of messages across the time frame that

45:44

we've selected so we selected a

45:45

15-minute window and as a result we're

45:48

seeing essentially time slices by 1

45:50

minute so we see the count of messages

45:52

so for this query Mozilla for the

45:54

last 15 minutes there was a total of

45:56

about 83 thousand results but we see

45:59

here from 9:43 that window so 9:43 to 9:44

46:03

there were 5,000 messages next window

46:06

almost 6000 etc so that's a good way to

46:10

see at a glance kind of graphing

46:12

information about the distribution of

46:14

those messages and then down here we

46:16

have our results which is certainly

46:18

going to be important we're going to

46:20

have our time

46:21

stamp so this is going to go ahead and

46:23

indicate when that message is from and

46:27

then we're gonna have the message itself

46:28

and as I described a moment ago the key

46:31

words will be highlighted in there so

46:33

now we've gone ahead and we've done a

46:35

very basic query and we actually have

46:38

some more that we can go ahead and do so

46:40

let's go ahead and expand this query out

46:42

and actually make it a little more

46:44

useful so what I want to look at now is

46:45

rather than just looking at Mozilla

46:47

across the entire index and the entire

46:50

body of data I more specifically want

46:53

to look at our Apache data so I'm gonna

46:55

go ahead and I'm actually just

46:57

going to a new line just so

46:58

you can see the search ahead so I'm

47:01

gonna go ahead and define my source

47:03

category and before I even type it's

47:05

gonna go ahead and start to do a search

47:06

ahead so here I have a source category

47:09

so I can just click on that and then I

47:10

can go ahead and specify more

47:14

specifically the source category I want

47:15

to use and the search ahead will work

47:17

there so I'm gonna use labs Apache

47:20

access and I'm gonna go ahead as well

47:23

and still search for Mozilla and I'll

47:26

type Mozilla and I'm gonna hit start and

47:28

a couple things are happening here so

47:31

now we're specifically looking for lab

47:34

Apache access data that has the string

47:36

of Mozilla in it the other thing to keep

47:39

in mind up here is there's going to be

47:42

an implied AND statement so this and

47:45

this are the same thing and so just to

47:49

keep in mind if you want to leave out

47:50

the AND you can so of course up to you

47:52

and you know how you decide to kind of

47:53

code your query but just want you to

47:55

recognize there is an implied AND and

47:55

with the implied AND so it's looking for

47:59

lab Apache access and those logs that

48:03

have the keyword or the string of

48:05

Mozilla and so that's a very you know a
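The two equivalent forms of the query just described — with the AND implied and with it written out — are:

```
_sourceCategory=labs/apache/access Mozilla

_sourceCategory=labs/apache/access AND Mozilla
```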

48:07

very basic query but I want to go ahead

48:10

and start to make this more advanced and

48:11

so let's go ahead and do so so what I

48:14

want to do now in this scenario is we're

48:17

gonna take the set of data and I want to

48:18

look for what are called status codes

48:22

now some of you are probably familiar

48:24

with status codes some of you may not a

48:28

status code is basically when you go to

48:30

a website on the back end essentially

48:32

it's going to be a

48:34

an indication of the results of that

48:38

experience so if

48:40

you go to a site successfully you get

48:43

status code 200 I'm sure many of you

48:45

have seen status code 404s where

48:47

you go to a page and the file or the

48:49

page cannot be found and so we're gonna

48:51

play with those status codes right

48:53

now and so what I want to do with this

48:56

right here is I want to go ahead

48:58

and start to parse my information out

49:00

and by parsing it out basically I have

49:02

my log message here and I want to

49:04

identify within the system what to

49:08

look for in a status code or where to

49:10

find any of this information so for

49:12

example right here we have an IP address

49:14

we have some date information and over

49:18

here we have our status code in this

49:20

case this one is 304 and so like I said

49:22

I want to go ahead and start to look for

49:24

all these status codes the 304 or give

49:28

me any status code and so what I can do

49:31

is I can start to create a parsing

49:33

expression that's going to allow me to

49:34

do that and so I actually have two

49:37

different ways actually more like

49:39

three different ways I can go ahead and

49:41

parse these fields out so let me show

49:43

you one way to go ahead and do it and

49:46

actually for this example I'm gonna

49:48

actually parse out the IP address so

49:51

that will just be I think a

49:52

better illustration so I'm gonna go

49:53

ahead and just steal some code that I

49:55

was playing with earlier and then bring

49:57

it in here and I'll explain what exactly

49:58

I'm doing with this code so let me just

50:00

go ahead and there I go cool so what I

50:03

want to do now is I want to go ahead and

50:05

I want to look at my IP addresses so I

50:08

will take a break from these status codes

50:10

for just a second but I want to go ahead

50:12

and look for IP addresses and so I see

50:14

in these messages I see an IP address

50:16

and I see an IP address there and I'm

50:19

only looking at this IP address the host

50:21

IP address we're gonna ignore that's

50:22

referring more to the collector or the

50:24

metadata for the host that I'm not

50:26

really interested in right now in this

50:28

case I just want to see the IP address

50:29

that's associated with each message and

50:31

so how do I go ahead and use those and

50:33

more specifically how do I go

50:37

ahead and start to be able to take

50:41

advantage of those fields and start to

50:43

report on them so the first way that you

50:46

can go ahead and do so is you can use

50:47

regex

50:48

and I don't know if you guys are

50:50

familiar with regex I'm sure some of you

50:51

are and are dreading hearing the

50:53

word regex what it's going to

50:56

allow you to do well regex is a

50:58

global standard it's not a sumo logic

51:00

specific thing what it allows you to do

51:02

is essentially create some pattern

51:04

matching and so let me show you an

51:06

example of a regex statement and here we

51:08

have one right here so what we're doing

51:10

with this regex

51:12

is we're going ahead and identifying the

51:13

field and basically what we're telling

51:15

it to do is look for a digit that has

51:17

between 1 and 3 digits in it look for a

51:21

dot look for one to three digits look

51:24

for dot etc etc and so that would match

51:26

an IP address so an IP address is going

51:28

to be made up of essentially number dot

51:30

number dot number dot number and so

51:33

we're telling it to go ahead and look

51:34

for that pattern and I'm gonna go ahead

51:35

and hit enter and I'll show you

51:37

you know what's gonna occur and so when

51:40

I run this I now have my IP address

51:43

displayed here and I can start to use

51:45

these IP addresses and so I could go

51:47

ahead and for example let's say I wanted

51:49

to do a count of by IP addresses I can

51:53

go ahead and do so and so now what I did

51:59

is I first used the second line

52:02

here to go ahead essentially identify

52:05

or tell the system what an IP address

52:07

looks like or what to look for in the

52:10

pattern recognition and call that field

52:12

IP address which that's what this part

52:14

is doing is saying when you find this

52:15

pattern call that an IP address and then

52:18

once I have my IP addresses as displayed

52:20

down here then I can go ahead and start

52:22

to do some further querying on them so
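What this regex step is doing can be sketched outside Sumo Logic too. Here is a rough Python analogue — not Sumo syntax, and the sample log lines and IP values below are invented for illustration, not the actual demo data:

```python
import re
from collections import Counter

# Pattern analogous to the one in the demo: one to three digits, a dot,
# repeated to form number.number.number.number.
IP_PATTERN = re.compile(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b")

# Hypothetical Apache-style access log lines, just for illustration.
logs = [
    '54.190.91.33 - - [12/Jul/2018:09:59:01] "GET /index.html HTTP/1.1" 200 1043',
    '54.190.91.33 - - [12/Jul/2018:09:59:02] "GET /missing.png HTTP/1.1" 404 209',
    '98.26.133.6 - - [12/Jul/2018:09:59:03] "GET /index.html HTTP/1.1" 200 1043',
]

# Analogue of naming the matched pattern: pull the first IP from each message.
ips = [m.group(0) for line in logs if (m := IP_PATTERN.search(line))]

# Analogue of "count by ip_address".
print(Counter(ips))  # Counter({'54.190.91.33': 2, '98.26.133.6': 1})
```

The same idea carries over: first tell the engine what the field looks like, then aggregate on the field you named.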

52:26

that's one way to go ahead and start to

52:28

do your parsing particularly of parsing

52:33

via regex but as I mentioned regex

52:35

is very finicky it's very specific and

52:38

while it's a good standard it's not

52:41

the easiest way to do your query and we

52:44

want to make this easy for you guys so I

52:45

want to show you a couple alternatives

52:46

on how to go ahead and parse your data

52:49

out so what I'm going to do is I'm gonna

52:50

go ahead and just clean up my screen

52:52

here I'm gonna go ahead and comment

52:55

these two lines out by doing two slashes

52:58

on each line what a comment does is

53:00

essentially turns off that line so

53:02

that line when I run my query again it

53:05

will skip those lines in the query and

53:07

as you can see here it's basically just

53:08

running this first line and so what I

53:11

want to do now is I want to show you a

53:12

secondary way to do parsing and what

53:15

this is using is what we call parse

53:16

anchor and the way it works is you're

53:19

going to just use the mouse and

53:20

essentially start to click on some

53:22

sections so let me show you specifically

53:24

how it works so I'm gonna go I need to

53:26

pull that IP address so what I'm gonna

53:28

do is I'm going to take this whole

53:29

message and I'm gonna highlight it and

53:32

I'm going to click parse the selected

53:34

text and now I have my whole message

53:37

here and what I'm going to do now is I'm

53:39

gonna go ahead and start to identify

53:40

those pieces of the message and create

53:42

fields from them and use the pattern

53:45

recognition to do so as well

53:47

so I'm gonna take this first portion of

53:49

the message and I'm gonna highlight it

53:51

go click to extract this value and I'm

53:55

gonna go ahead and call this IP

53:57

address I could call it IP I could call

53:59

it Network identification I can call

54:01

whatever I want but I want to call it

54:03

something that makes sense to me and

54:05

others if they look at my code so I've

54:07

now labeled

54:08

this is an IP address and I'm gonna keep

54:11

going so I'm gonna go ahead and this

54:13

section here now I could go ahead and

54:15

specifically say that this is the year

54:17

this is the month this is the day this

54:19

is the hour etc etc but for my demo

54:23

and usually I don't really care about

54:25

that so I'm just gonna take this whole

54:26

thing and I'm just gonna call it time I

54:30

just kind of lumped it together and I'm

54:32

gonna keep going through here and just

54:33

give me a second so I'm gonna take this

54:35

and I'm gonna call this the referer and

54:38

I'm gonna take this guy and I'm gonna

54:41

call this the status code and as I

54:43

mentioned we'll be using the status code

54:44

going forward so I'm gonna call it

54:46

status code I can call it with

54:48

underscore I can't use a space so this

54:50

would not be acceptable but I could use

54:52

that or that and then let me just finish

54:55

this up I'm gonna call this the size

54:57

so this is the size of the message we're

55:00

not really going to use it today but

55:02

just so you're aware and then I'm going

55:03

to take this part almost here done just

55:07

a couple more sections here I'm gonna

55:10

call that the URL and then finally I'm

55:12

going to take this whole portion here

55:13

and I'm going to call it

55:15

user-agent what it is is it's an

55:18

identification of the browser or the

55:21

environment that the visitor is using

55:24

when they go to a web site so it's going

55:25

to show information about operating

55:27

system browser type those type of things

55:29

and I'm just gonna lump it together as

55:31

call it a user agent and I'm gonna click

55:35

Submit and what's gonna happen now is

55:37

it's gonna take this pattern recognition

55:39

and matching and go ahead and look

55:41

through our log messages for it and

55:44

match those patterns together and so

55:47

basically it's gonna look for the first

55:48

part of the message and it's gonna find

55:50

that part and it's gonna call that an IP

55:52

address it's gonna look for a space it's

55:54

gonna look for a dash it's gonna look

55:55

for a space look for a dash look for an

55:56

Open bracket and whatever's in that open

55:58

bracket it's gonna call time look for a

56:00

closed bracket etc so let's go ahead and

56:02

start and run this and we'll see what it

56:04

really is doing so here we now have

56:11

these sections broken out so now I have

56:13

my IP address I have my referer and my

56:15

size my status code you know all these

56:16

different fields that we just labeled

56:18

and so that's a good way that I can now

56:21

start to query on those and so what I

56:23

can do for example is now let's say

56:26

let's go to status codes because now I

56:28

can more easily access them now I can go

56:30

ahead and do a count by status code and

56:34

since the system knows hey this is a

56:37

status code essentially look for it

56:39

right here it's gonna go ahead and look

56:42

for those status codes and now I'm gonna

56:44

be doing a count so now in this case I'm

56:46

going ahead and doing some aggregation

56:48

doing some mathematics to go ahead and

56:51

say you know this is the amount of

56:53

status codes that have occurred over the

56:55

last 15 minutes so that's the second way

56:57

to parse your information out it looks

57:00

like we just got a question when parsing

57:02

with the mouse and highlighting is every

57:04

value created assumed to be a string or

57:05

can you assign datatypes a string so

57:08

these are strings it's just simply

57:09

looking at the message and doing the

57:11

very straightforward pattern recognition
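The parse-anchor clicks effectively build one pattern with named fields in it. A rough Python analogue using named regex groups might look like this — the line layout, field names, and sample values are assumptions for illustration, not the exact demo data or Sumo's generated pattern:

```python
import re

# Named groups mirroring the fields labeled in the demo: ip_address, time,
# url, status_code, size, referer, user_agent. A real Apache combined log
# may differ slightly in layout; this is a sketch.
LINE = re.compile(
    r'(?P<ip_address>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status_code>\d{3}) (?P<size>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

# Hypothetical sample message.
sample = ('54.190.91.33 - - [12/Jul/2018:09:59:01 +0000] '
          '"GET /index.html HTTP/1.1" 200 1043 '
          '"http://example.com/" "Mozilla/5.0 (Windows NT 10.0)"')

# Each labeled section becomes a field you can query on.
fields = LINE.match(sample).groupdict()
print(fields["status_code"], fields["ip_address"])  # 200 54.190.91.33
```

Note the field names follow the same rule mentioned in the demo: underscores are fine, spaces are not.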

57:13

so I want to show another way that we

57:15

can go ahead and I essentially pull out

57:19

these fields let me go ahead and clean

57:21

this up again and kind of reset myself

57:24

I'm gonna go ahead and remove the

57:27

parsing let me remove the count because

57:29

I don't

57:29

this so I'm just gonna turn off those

57:31

lines I might use them later so that's

57:32

why I'm gonna keep them and let me start

57:34

this and now we're just back to where we

57:35

were back to square one so let's say now

57:39

I want to go ahead and start to use

57:40

those status codes you'll notice or now

57:43

you'll notice that I pointed out on the

57:45

left side of the screen we have a

57:47

variety of different fields that we were

57:48

just actually using so we have IP

57:50

address we have status code and I can go

57:52

ahead and use these and so if I want to

57:55

go ahead and do a count by status code I

57:56

can go ahead and only do count by status

57:59

code and when I run it the system is

58:02

gonna know where status code is and how

58:05

to how to essentially utilize it now how

58:07

does the system know what the status

58:09

code is how did they create these fields

58:11

what's the mechanism to create

58:14

these fields is what we call field

58:17

extraction rules or FERs and the way

58:19

the field extraction rules work is

58:21

essentially the parsing is done upon

58:23

ingestion so in this case we did the

58:26

query and in our query we did the

58:29

parsing but you can actually have it set

58:31

up so that when the data gets ingested

58:32

into sumo logic these fields are

58:34

automatically applied and the reason it

58:37

works so well is since an Apache access

58:39

message for example Apache access but

58:41

these would work with any types of

58:43

content or log messages since an Apache

58:46

log message format is the same we can recognize

58:49

hey it's always going to be in this

58:50

pattern always look for that first space

58:53

or that first piece of content that's

58:55

going to be an IP address then look for

58:57

a space look for another space so it's

58:59

essentially doing the parsing upon

59:00

ingestion which is going to be a really

59:02

great way to go ahead and simplify this

59:04

process so that you don't have to go

59:06

ahead and do this parsing that we did up

59:08

here either via regex

59:10

or via this parsing mechanism so these

59:14

field extraction rules would be set up

59:15

previously I'm going to show you where

59:17

they are in the environment I don't want

59:19

you to get too bogged down with it but I

59:21

do just want to give the visual so if I

59:23

go ahead into my settings and these are

59:26

things we would cover more in session

59:29

two and session three but just while we're

59:30

here I think it's worth showing so

59:32

basically the way these rules work and

59:34

there's a lot of them because we do a

59:35

lot of testing with this account but

59:36

basically what it's doing is it's

59:38

looking at so what

59:41

it's doing is it's saying hey for the

59:43

Apache access rule I want to go ahead

59:45

and say all this data so anything that

59:47

comes in with in this case anything

59:50

Apache access is going to have this

59:51

parse expression applied to it and this

59:54

is the pattern recognition that we're

59:55

just discussing so here we see source IP

59:57

it's going to look for essentially

59:58

number dot number dot number dot number

59:59

and that will be the pattern so call

60:03

that a source IP then look for you

60:06

know something else call that a method

60:07

look for something else call that a URL

60:08

and so on and so forth and so those are

60:10

going to be really helpful that we would

60:12

encourage you to set up upon ingestion

60:14

to make both your life and your users

60:17

lives much easier but let's go back into

60:19

the query and continue further so um so

60:24

let's see where are we gonna let's go

60:27

back let's run this and so what we're

60:30

looking at right now is we're

60:32

essentially looking at our Apache access

60:35

information and we're still looking at

60:37

the string of mozilla and we're looking

60:40

at a count by status code and when we

60:42

run it since we're doing a count which

60:44

is essentially mathematics at this point

60:46

we get a new tab called aggregates and

60:50

that's shown here so we have our

60:52

messages tab that is going to show the

60:53

raw messages as well as the fields that

60:56

were parsing and then we're gonna have

60:58

our aggregates tab that will show that

61:00

mathematics in this case we were doing

61:01

count by status code so we see our

61:03

status codes and we see a count and once

61:05

again the count is representing all the

61:07

different messages that occurred that

61:09

matched this pattern over the last 15

61:12

minutes but let's go deeper now so we

61:14

now have our status codes now let's

61:16

start to play around with them and then

61:17

start to garner some information from

61:19

them so as I just mentioned I'm looking

61:23

at these status codes by essentially a

61:25

15-minute grouping but let's say I want

61:27

to look at trends over time so I want to

61:30

look at the status codes in 1 minute

61:32

increments similar to the way that

61:33

they're shown up here but I want to

61:35

actually look at the specific status

61:36

codes so what I can do is I can go ahead

61:38

and just add a new line up here and I'm

61:42

gonna add a line it's called time slice

61:43

and as I type it the search ahead is

61:45

gonna work for me and it's also gonna

61:47

tell me what it does so it says time

61:49

slice segments data by time periods or

61:50

buckets over a time range yep that's what

61:52

I want so I'm in time slice and I'm

61:54

gonna do it by 1 minute long chunks

61:56

and I'm gonna hit enter and now what I'm

61:59

gonna get oh I need to have one more

62:03

line here so now I need to tell it hey

62:06

do a count by those time slices or by

62:09

those 1 minute segments and show the

62:10

status code so now when I run it I'm

62:14

gonna have these time slices so here we

62:16

have these so here at 9:59 so

62:19

essentially the minute of 9:59 we

62:23

see that there were 612 304 status

62:26

codes and at 10:03 there were 36 403

62:30

codes and so on and so forth and so you

62:33

know this is good the results are there

62:35

but they're not in chronological

62:36

order which visually just you know makes

62:38

it a little tricky to look at so let me go

62:40

ahead and put them in the order and so

62:42

what I want to do is I want to order

62:43

these time slices in ascending order so

62:46

in my query I'm gonna do order by

62:50

time slice ascending and just gonna

62:57

run that now and what that's gonna do is

62:59

it's gonna do what you know what it

63:01

sounds like it's gonna

63:02

put those time slices in order so now I

63:04

see all the 9:52s so I see 9:52s 9:53s

63:09

9:54s and so this is you know it's better

63:11

it's starting to get some information

63:13

you know a little better visually

63:16

but it's not there it's not the way I

63:18

want it to be and so what I'd like to

63:20

see is a little list of the time slices

63:23

down the columns and status codes as my

63:26

rows so similar to the way you would do a

63:29

transposition or transposing in Excel

63:31

and so that's what I want to do and

63:33

let's go ahead and do that so I'm gonna

63:34

go ahead and add a new line and I'm

63:37

gonna do transpose and as I type it

63:39

says transpose aggregated results

63:41

that's what I want to do and I'm just

63:44

gonna go ahead and label my rows and

63:46

columns so I'm going to say my row put

63:48

the time slice there and my column put

63:50

the status code there and so now what I

63:57

get is something more visually pleasing

63:59

we're still gonna go further with it but

64:01

this is better so now I have my time so

64:04

I have my 953 and then I see the time

64:06

slices at the top so I can pretty easily

64:09

see at

64:10

9:55 there were 3200 which are the

64:13

successful connections and you know

64:16

there were only or not only but there were

64:17

138 404s and you know I'm able

64:22

to get some more information from this

64:23

but it's not really easy to digest the
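The pipeline built so far — timeslice, count by, order, transpose — can be sketched as a rough Python analogue. The timestamps and status codes below are invented, and this is not Sumo syntax, just the same shape of computation:

```python
from collections import Counter, defaultdict

# Hypothetical (timestamp, status_code) pairs standing in for parsed messages.
events = [
    ("09:53:10", "200"), ("09:53:40", "404"), ("09:53:55", "200"),
    ("09:54:05", "200"), ("09:54:30", "304"), ("09:54:50", "404"),
]

# "timeslice by 1 minute" analogue: truncate each timestamp to its minute,
# then "count by timeslice, status code": count each (minute, status) pair.
sliced = Counter((ts[:5], status) for ts, status in events)

# "transpose" analogue: minutes become rows, status codes become columns.
table = defaultdict(dict)
for (minute, status), count in sliced.items():
    table[minute][status] = count

# "order by time slice ascending" analogue: walk the minutes in sorted order.
for minute in sorted(table):
    print(minute, table[minute])
# 09:53 {'200': 2, '404': 1}
# 09:54 {'200': 1, '304': 1, '404': 1}
```

The transposed table is exactly what makes the chart readable: one row per minute, one column per status code.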

64:26

problem at this point so what I can do is

64:28

take advantage of this I can take a

64:30

look at some of the other charting

64:33

options that are built into sumo logic

64:35

so up here we've been looking at the one

64:38

we call our aggregates we've been

64:39

looking at this view which is kind of a

64:40

table view but let's say I want to look

64:42

at I want to convert this into a graph

64:44

maybe I want to do a bar chart here's my

64:47

information on a bar chart maybe I want

64:49

to do a column chart okay that's you

64:51

know that's kind of cool not really what

64:53

I want though this line chart I like

64:55

though you know this is showing me the

64:57

status codes and you know graphing them

65:00

over time so you know okay I'm gonna

65:01

stick with this one but you know this is

65:04

a little tricky this view because I see

65:07

my 200s I see my 304s

65:09

and then down here I mean I see those

65:11

other status codes but they're kind of

65:12

lumped together they're kind of hidden

65:14

and so what I want to do is I want to

65:16

clean this chart up a little bit and I

65:17

want to really look at this set of

65:19

information not these first two and so I

65:21

have two different ways I can do this

65:22

the first is I can actually do within

65:24

the graph so if I say all right I don't

65:26

want to see 200s just go over to

65:28

200 over here with the legend click 200

65:30

and it's been turned off same thing with

65:32

304 click 304 there it is turned off

65:35

and now my scale has been

65:38

adjusted and now you know once again

65:39

kind of visually it's more pleasing and

65:41

makes make sense as to what's going on

65:43

so that's one way I could go ahead and

65:46

essentially alter that set of data the

65:49

other though is let's say I want to do

65:50

it within my query I want to go ahead

65:52

and leave out the 200s and the

65:54

304s I can go ahead and

65:56

actually just add a line in my query so

65:58

what I'm going to do is add a new line

66:00

right here and I'm gonna do a where

66:02

statement so I'm gonna do where look

66:04

where status code is equal to 200 or

66:10

status code is equal to 304

66:16

and since I want to essentially remove

66:19

those I'm gonna go ahead and put an

66:20

exclamation point and so what I'm doing

66:22

here is I'm saying

66:23

pull the labs Apache access data with

66:26

the string in this case of Mozilla and look

66:29

for status code 200 or 304 or rather not so

66:33

basically exclude these two

66:38

excuse me so now if I run this query I

66:43

don't have my 200s or

66:45

304s in here and if I look at my raw

66:47

messages I'm not going to go ahead and

66:49

see a 200 or a 304 in

66:51

here but we'll say I do want to see
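The where-with-negation idea — keep only records whose status code is neither 200 nor 304 — can be sketched in Python; the records below are invented stand-ins for the parsed messages:

```python
# Hypothetical parsed records; only the fields needed here are shown.
records = [
    {"ip": "54.190.91.33", "status_code": "200"},
    {"ip": "98.26.133.6", "status_code": "404"},
    {"ip": "54.190.91.33", "status_code": "304"},
    {"ip": "10.1.2.3", "status_code": "500"},
]

# Analogue of negating "where status_code = 200 or status_code = 304":
# drop the successes and the not-modified responses, keep everything else.
errors = [r for r in records if r["status_code"] not in ("200", "304")]
print([r["status_code"] for r in errors])  # ['404', '500']
```

Same effect as toggling series off in the chart legend, except the filtering happens in the query itself, so downstream aggregation never sees the excluded codes.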

66:53

let's move let me go ahead and reset

66:55

this one more time because I do I do

66:56

want to show one more feature actually I

66:58

show a bunch more so let me go ahead and

67:01

let's remove this one for right now I

67:04

did want to show in the field browser

67:07

over here one thing that's really cool

67:10

and let we'll use the status codes as an

67:12

example so I have my query right here in

67:15

this case we're back to just looking

67:17

essentially just for labs Apache access

67:18

and Mozilla as the data set

67:20

we're formatting it down here but really

67:22

we're just looking for those here so

67:24

let's say and this will work with my

67:26

field extraction rules let's say I want

67:28

to go ahead and look at the status code

67:30

and get a feel for how many status codes

67:32

are occurring and you know break down I

67:34

can go ahead and click on this status code

67:37

actually I need to get rid of these

67:39

because these are sorry that these are

67:41

affecting my results let me go ahead and

67:44

start from scratch there actually I

67:47

can just go and delete all this since

67:48

we're not using it so I'm so I'm back to

67:50

my original query right now and so we

67:52

were looking at status codes what I can

67:54

do is I can go ahead and click on status

67:55

code here and it's gonna show me for

67:58

this 15-minute time window it's gonna

67:59

show me all the values up to I believe

68:02

only ten will show up in here so if there

68:04

were more it would be limited but it's

68:07

gonna show me the values it's gonna be

68:08

show me the number or the count of those

68:10

status codes so this is how many status

68:13

codes exist during this 15-minute time

68:15

window and the other thing it's gonna

68:17

show me is the percentage so here I can

68:20

see just easily you know 73% of the

68:23

status codes were 200 and

68:24

fifteen percent or so were 304

68:26

the other thing I can do here because

68:28

there's a bunch of things I can do here

68:30

is I can actually take one of these and

68:32

put them into my query so let's say I

68:34

want to go ahead and continue my query

68:35

but I only want to look at

68:37

200s I could go ahead and do where

68:39

status code equals 200 I'm not gonna do

68:42

it just to save myself typing for a

68:44

second or I can go ahead and just go

68:46

ahead and click on status code click on

68:48

200 and it's automatically gonna bring

68:50

that in to my query and so now I'm

68:52

looking at the status codes for 200 and I

68:56

didn't have to type in my query that I

68:59

want to look for status code 200 and the

69:01

other thing since I'm taking advantage

69:02

of the field extraction rules is I

69:04

didn't even have to do parsing to

69:05

identify what a status code is and so I'm

69:08

able to take advantage of this smart

69:10

logic that's going on with the

69:11

system and take advantage of that so

69:14

that's kind of cool

69:15

let's go ahead and let's see I want to

69:21

show you in here the ability to export

69:23

results so you create a query in this

69:26

case we're looking at status code 200

69:27

and you know that's cool data and let's

69:30

say you want to go ahead and export it

69:31

you just click on this gear icon it's

69:34

going to allow you to export either

69:35

only the display fields or all the

69:38

fields but it's going to push it out

69:40

into a CSV file and then you can take

69:42

that CSV file and do whatever you

69:44

need to do with it import it to another

69:45

system or just analyze that data so

69:47

that's gonna be an easy way for you to

69:49

take the data that's available to

69:53

you and just put it into a different

69:55

format that might be helpful let's go

69:58

ahead and look at some different

69:59

operators now so we've looked at a

70:01

couple but I want to show two in

70:04

particular so let me go ahead and jump

70:07

into our training folder training and I

70:16

want to go ahead and look at outliers

70:19

first so let's have an example here

70:22

there it is and so what I'm looking at

70:26

right now it's gonna take a second to

70:27

load off let me go ahead and specify

70:30

that so before I run

70:34

this let me show what I'm gonna do and

70:37

just so you can kind of see what's gonna

70:39

be popping up on the screen is I'm gonna

70:41

take our set of labs Apache access

70:44

data we're gonna look at the status code

70:46

200 and what we're gonna do is we're

70:47

gonna say I use the outlier command

70:50

we actually saw this earlier with the

70:51

Gateway latency but essentially what

70:54

we're gonna do is we're gonna create

70:55

thresholds and then we're going to be

70:58

able to be notified when those two

71:01

hundreds in this case exceed that

71:03

threshold so let's go ahead and just run
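The idea behind the outlier operator — a rolling baseline with an upper and lower bound, flagging points that fall outside it — can be sketched in Python. The window size, multiplier, and per-minute counts below are made up, and Sumo Logic's actual math (trailing points, standard deviations, consecutive-point settings) may differ in detail:

```python
from statistics import mean, stdev

# Hypothetical per-minute counts of 200 status codes.
counts = [3100, 3200, 3050, 3150, 3120, 2494, 3180, 3900]
WINDOW, N_STDEV = 5, 3  # trailing window size and stddev multiplier

flagged = []
for i in range(WINDOW, len(counts)):
    trail = counts[i - WINDOW:i]
    m, s = mean(trail), stdev(trail)
    # Bounds analogous to the light-blue threshold band in the chart.
    lower, upper = m - N_STDEV * s, m + N_STDEV * s
    if not (lower <= counts[i] <= upper):
        flagged.append(i)  # analogous to the pink outlier markers

print(flagged)  # minutes 5 (dip to 2494) and 7 (spike to 3900) -> [5, 7]
```

Restricting direction to above-only, as done later in the demo for 404s, would just mean checking `counts[i] > upper` instead of both bounds.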

71:05

this and see what kind of data we get

71:06

back so when I run it I get it still

71:12

in my tabular view and so I get my data

71:14

back and so here and well I'll show you

71:17

this in a better view in a second but

71:19

just to show you kind of the raw content

71:20

not to get too bogged down on it but

71:22

basically it's doing some mathematics

71:24

and so it's looking at 9:17 there were

71:26

almost 4,200 status codes and so it's

71:30

starting to set an upper and a lower

71:31

threshold limit and so it's saying

71:34

basically using you know this all this

71:37

information it was expecting between

71:39

about four thousand and two thousand and

71:41

that fits in there so that's you know

71:43

that's good and so let's go ahead and

71:45

look at this in a different way so what

71:48

I'm gonna do is I'm gonna actually gonna

71:48

flip this into the line chart and now

71:51

this will be much more visually pleasing

71:52

so what I see here is I have my dark

71:55

blue line which is representing the

71:57

essentially the amount of messages that

71:59

match the query so for example at 9:30

72:04

we see that there were 3,100 status

72:08

codes of 200 and there was based on this

72:11

outlier information up here there was a

72:14

threshold that was established between

72:16

about 2400 and 3500 and so since that

72:21

number is within that threshold it's

72:23

within the threshold the threshold being

72:26

represented by a light blue line but if

72:28

we look over here we see that based on

72:32

the mathematics the threshold was

72:33

expected to be between 2,500 and 3600

72:36

but there were only

72:39

2,494 as a count for

72:42

those status codes and that falls

72:44

outside of our norm or outside of our

72:46

threshold that we're expecting and so we

72:48

were able to see those type of outlier

72:50

events let me show you it in a better

72:52

way because I the 200 I think is useful

72:55

this would show successful connections

72:57

but let's say you want to see all the

72:58

unsuccessful connections so what I'm

73:00

doing is I'm

73:01

flipping this around looking for status

73:03

code 404s and now I see two

73:06

different examples here so keep in mind

73:09

those 404s are bad and so what I say

73:12

here is while they're bad they were

73:14

within that acceptable range throughout

73:16

most of this sixty minute window but we

73:19

see two examples of outliers here the

73:22

first one we see right here the system

73:25

was expecting between about 118 and 169

73:29

404 errors and there were only 113 so

73:32

that's outside the expected norm over

73:34

here we were expecting between 100 and

73:36

176 and there were 177 which is outside

73:39

that norm as well now in my mind

73:41

looking at this set of data the fact

73:43

that there were less 404s less

73:45

failed connections than expected is kind

73:48

of a good thing I said you know it's a

73:50

good problem to have and so I don't

73:52

really want to see when it's underneath

73:53

I only want to see when it's exceeding

73:55

what is expected so using the outlier I

73:58

can just change this option here which

74:00

is direction so rather than looking

74:01

above and below the threshold I just

74:04

want to look when it's above the

74:05

threshold and so now when I run this I

74:07

should lose that left pink triangle and

74:09

I only have the one where it exceeds and

74:11

so that's how I can go ahead and start

74:13

to use the outlier to help establish

74:17

acceptable range the documentation will

74:21

give you more detail as far as what

74:23

these do I don't really want to get too

74:24

bogged down what it's doing is saying

74:26

you know how many standard deviations do

74:27

you want to do and how many trailing

74:29

points you know those are the

74:31

nitty-gritty details which would of

74:33

course be relevant if you were setting

74:35

this up but for now I just want to

74:36

illustrate that view the other example I

74:39

want to show right now is plotting

74:41

requests on a map so let me go ahead and

74:43

pull this guy up and so what I'm doing

74:47

here

74:47

let me alter that too so what I'm doing

74:51

here is I'm using parsing to go ahead

74:55

and pull out these IP addresses so in

74:57

each of my apache messages i have that

74:59

IP address that i've referenced you know

75:01

a few times and i'm essentially pulling

75:03

it out and calling a client IP then what

75:06

I'm doing is assuming or as long as the

75:08

IP address is public you can use a geo

75:11

lookup up here to go ahead and or rather

75:14

we would actually do a lookup for you to

75:16

determine the latitude and longitude of

75:18

that IP address and you can actually

75:20

look up some other additional

75:22

information so you can see the city of

75:24

that IP address to state the country and

75:27

the country code as well

75:28

and so what I want to do is let's say I

75:30

want to go ahead and see a count of

75:33

where my people are or where my traffic

75:36

is coming from so if I look at the

75:38

aggregate I see essentially that

75:41

mathematics or a summation of that

75:42

information and I do see it here you

75:45

know I see that there were 7,000 people

75:49

or connections that came from latitude

75:53

37 longitude -122 now if I'm really

75:56

good at geography I would know that this

75:57

is wherever in the world but I'm not so

76:00

I wanted to have this information

76:01

displayed in something that's a little

76:02

more friendly to me so I'm going to take

76:05

advantage of these mapping options we

76:08

have here are the graphing options and

76:10

one of them is a map option and I just

76:12

simply click on the map and now I

76:14

get a map overlaid with those

76:15

connections and so now I can see that

76:18

you know that there are almost

76:22

20,000 connections in the northeast of

76:23

the US and about 2,000 in Europe over

76:28

here and you know so on and so forth
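The lookup-then-count flow that feeds the map panel can be sketched in Python. The coordinate table below is a made-up stand-in for the real GeoIP lookup Sumo Logic performs, and the IPs are invented:

```python
from collections import Counter

# Hypothetical stand-in for the geo lookup: client IP -> (latitude, longitude).
GEO = {
    "54.190.91.33": (45.8, -119.7),
    "98.26.133.6": (35.2, -80.8),
    "93.184.216.34": (52.4, 4.9),
}

# Hypothetical client IPs parsed out of the Apache messages.
client_ips = ["54.190.91.33", "98.26.133.6", "54.190.91.33", "93.184.216.34"]

# "count by latitude, longitude" analogue: one bucket per coordinate pair,
# which is what the map panel then plots as sized markers.
by_location = Counter(GEO[ip] for ip in client_ips)
print(by_location.most_common(1))  # [((45.8, -119.7), 2)]
```

The key point from the demo carries over: the lookup only works for public IP addresses, and the aggregation happens on the looked-up coordinates, not the raw IPs.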

76:30

and so this is a cool way that I can go

76:32

ahead and take that set of data

76:35

manipulate it to something that makes

76:37

sense and view that information on a

76:39

map and actually plot it and I'll show

76:41

you in a few moments how we can go ahead

76:43

and take that data and actually share it

76:45

out with other people as well let's go

76:48

ahead and look at another scenario we

76:51

actually looked at this a little bit

76:53

earlier but I want to show it a little

76:55

more thoroughly so I'm gonna do a new

76:56

tab just to kind of reset myself and

76:58

we're gonna look at a new set of data

77:00

we're gonna look at source category

77:05

security snort data and what snort data

77:10

is besides something that's fun to say

77:12

is essentially security data and so

77:17

what we see here we're gonna be using

77:19

the advanced analytics to actually look

77:20

through this but just visually if we

77:22

take a look we see different types of

77:24

classifications of different events that

77:28

occurred so here we see there

77:29

was an attempted information leak and

77:31

a web application attack and

77:35

denial of service I saw that around there

77:37

and a bunch of events I'm gonna expand

77:39

my timeframe range out just to get some

77:41

more data because this will demo a

77:43

little bit better with larger set of

77:45

data and so we see that there were about

77:47

6,000 different events or reports or

77:51

classifications that have occurred over

77:53

the past hour and so you know as I

77:56

showed earlier when I was doing the demo

77:59

when you get to this point you can

78:01

certainly scroll through all 243 pages

78:03

and start to look through this data and

78:05

start to you know try and get a sense of

78:06

it but that's not a very good use of

78:09

anyone's time so what you can do is you

78:10

can go ahead and click on log reduce and

78:12

as mentioned earlier that's gonna take

78:14

all the data and distill it down into

78:16

those common patterns and so let's let

78:19

it run and it's just about finished and

78:21

so now what we see is rather than

78:23

looking at about 200 pages we see these

78:28

breakdowns in the commonalities of

78:31

things that occurred so we see that

78:33

there were about 3,000 attempted

78:35

information leaks over the last 60

78:37

minutes and 624 of some other type and

78:41

we see you know network trojan was

78:44

detected now whether that's good or bad

78:46

in my mind it's kind of good that the

78:48

system caught it so that's a good thing

78:50

but regardless we see a bunch of

78:51

different events that have occurred but

78:53

let's say we want to go ahead and see

78:55

how this compares to another time period

78:57

if we click on a log compare what's

78:59

going to occur is the data is going to

79:01

go ahead and or the system Sumo Logic

79:03

is going to look at the data that

79:05

occurred the last 60 minutes from now so

79:08

from 9:23 to 10:23 Pacific time right now

79:12

and also go back 24 hours and see what

79:15

occurred from 9:23 to 10:23 yesterday

79:20

and here we're going to go ahead and get

79:21

some percentages so we see here that

79:23

there was a decrease of two percent of

79:26

attempted information leaks sounds like it's

79:28

a good thing at least it's a

79:29

decrease here we see there was a 21%

79:33

increase of successful administrator

79:35

privilege gains well I don't know if

79:37

that's good or bad but at least we see

79:38

that you know there was why was there a

79:40

lot more

79:40

from yesterday we can start to go ahead

79:42

and compare their the other thing we see

79:44

here is some that were new or gone so

79:47

yesterday there was not anything that

79:49

matched this pattern sorry today however

79:53

this one existed yesterday but today

79:56

it's gone so it doesn't actually exist

79:57

so it's just a good way that we can go

80:00

ahead and see it a quick glance you know

80:02

this this information was from you know

80:06

this period of time and then we can go

80:07

back to another period of time and look

80:10

for that same window to see what type of

80:12

numbers we get and see if there's cause

80:15

for concern or not so cool so let's go
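Both of those operators are single keywords appended to an ordinary search. A sketch against the snort data from this demo (the exact source category here is an assumption; // marks comments):

```
// distill the messages down to their common patterns
_sourceCategory=security/snort
| logreduce

// compare this window's patterns against the same window 24 hours ago
_sourceCategory=security/snort
| logcompare timeshift -24h
```

With logcompare, the output shows each pattern's count in both windows along with the percentage change, plus patterns that are new or gone.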

80:19

ahead and let's take a little step back

80:21

and so the messages that

80:25

I've been looking at are messages that

80:27

have been ingested into sumo logic and

80:30

then processed and ultimately I then

80:33

have access to go ahead and start to

80:34

look at that data using the queries

80:36

which you know I think makes sense let's

80:38

say I want to go ahead and start to look

80:39

at messages in real time what I can do

80:43

similar to a tail -f is I can go up

80:46

here I can go to new live tail and now I

80:50

can go ahead and type a somewhat basic

80:53

query but I can enter query in here so I

80:54

can go ahead and do source category

80:57

equals labs/apache/access and now when I

81:02

run this this is going to look at the

81:04

logs in real time as they're coming into

81:06

sumo logic and I'll see these you know

81:09

these events have occurred now this case

81:12

we're loading demo data so it's there's

81:14

not a ton of data but you can imagine if

81:15

you have you know hundreds and thousands

81:17

of records coming in you know per a

81:19

minute this screen could be flying by

81:20

pretty quickly you do have the

81:22

ability to do some basic query on here

81:24

so I could go ahead and say I want to

81:25

look for Mozilla that will work what you

81:28

can't do is for example say I want to

81:31

look for specific or I'll look for the

81:34

IP addresses or do calculations on that

81:36

the reason being this is data that

81:38

really hasn't been ingested or is being

81:40

ingested in sumo and hasn't gone through

81:42

that phase to get those field extraction

81:45

rules applied so just keep that in

81:46

mind it's real time it's not really

81:49

going to be I don't think you'll

81:51

use this very much but it does give you

81:54

the ability to monitor

81:54

live logs in a production environment

81:56

which is you know certainly a good thing

81:58

so I just wanted to point that out see
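For reference, a live tail session takes a simplified query much like the one shown; assuming the same demo source category, entering the query below streams matching Apache access messages as they arrive, and appending a keyword such as Mozilla filters the stream:

```
_sourceCategory=labs/apache/access Mozilla
```

Operators that depend on field extraction rules (parsing out IP addresses, doing calculations, and so on) aren't available here, since live tail shows messages before that processing has completed.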

82:02

so the last topic I want to go over

82:05

which is an important one is dashboards

82:07

let me go ahead and just close a couple

82:10

windows here and let's look at the

82:12

dashboards so actually let me go back to

82:16

slides and let me set the table there

82:18

I'm gonna skip through some of these so

82:21

live tail we just discussed advanced

82:24

analytics we looked at some of these so

82:26

we didn't actually do predict the way

82:29

predict works is similar to outliers

82:31

it's taking the events that have

82:33

occurred and then using prediction

82:37

analytics to say you know this is the

82:39

amount of with the 404s for example this

82:42

is the amount of 404s to expect so
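As a rough sketch of what a predict query can look like (the parse pattern and source category are assumptions based on the demo data, not something shown in the session):

```
_sourceCategory=labs/apache/access
| parse "HTTP/1.1\" * " as status_code
| where status_code = "404"
| timeslice 5m
| count by _timeslice
| predict _count by _timeslice model=ar forecast=10
```

This counts 404s per five-minute slice and asks the autoregressive model to forecast the next ten slices, much as outlier flags slices that deviate from the recent trend.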

82:44

certainly would encourage you to you

82:47

know play with these and the labs

82:49

will help walk you through here log

82:52

reduce log compare we discussed already

82:53

the log reduce is used to as we say here

82:57

find the needle in the haystack by using

82:59

the pattern recognition or

83:00

identification log compare is going to

83:03

compare those patterns of today with

83:05

patterns in the past once again if you

83:09

were following along or when you go

83:11

ahead and do the labs there will be

83:14

sections in there you'll be able to

83:16

you know go through and actually do this

83:18

on your own and so how can I monitor my

83:21

data so dashboards and alerts so let's

83:24

go ahead and talk about those so

83:27

monitoring dashboards how do they work

83:28

what are they used for you

83:31

certainly saw the examples within the

83:33

demo I did earlier so each panel is

83:36

gonna represent a single process from a

83:38

single search and so your dashboard is

83:41

going to be made up of panels so just

83:43

from a terminology standpoint dashboard

83:46

is the essentially the entire page the

83:48

panels are those portions of the

83:50

dashboard you'll be able to drill down

83:53

into the corresponding query or link to

83:55

another dashboard so I showed that in

83:57

the earlier demo where I was looking at

83:58

the operational dashboard and then I

84:01

drilled into the services dashboard

84:03

and so you do have that ability to keep

84:06

drilling in the other thing you can do

84:07

with those panels of the dashboard as I

84:11

showed and I'll probably show an example

84:13

again is you can actually look at the

84:15

query to see what was taking place it's

84:17

a good way to go ahead and jump into

84:19

looking to see kind of the backend of

84:22

how that panel was created you do have a

84:25

live mode which is going to provide a

84:26

live stream of data alternatively you

84:29

could set it to look at a specific

84:32

window so you could say as it shows here

84:34

I want to look at my location of

84:35

incoming requests for the last 60

84:37

minutes so just depends on you know

84:39

what's appropriate for what you're

84:40

trying to view and you can use the

84:42

dashboards as templates with filters

84:44

I'll try and show an example of that and

84:47

so you could go ahead and make it easier

84:49

for your users to do their

84:52

queries from a dashboard standpoint or

84:55

from a dashboard panel standpoint rather

84:57

than doing a query itself and like I

84:59

said I'll show ya I'll show that because

85:01

I think it's worth looking at so let's

85:04

go ahead and play around with some of

85:06

the dashboard so let me go ahead and

85:07

just jump back into my screen here and

85:11

let's go ahead and see do I have any of

85:16

these still open okay so let's say let's

85:21

take this one this is a good one so I've

85:23

created this search and both

85:29

turned it into a query and took the

85:30

query and turned it into a map and so

85:32

now I have this map and I like it I'm

85:35

proud of the work I did and I want to

85:37

share it with my whoever you know upper

85:40

management or with a peer or with a

85:43

customer you know ever and so to do so

85:46

it's gonna be pretty easy to start off

85:48

I'm you're gonna click Add to dashboard

85:50

when you do so you're gonna go ahead and

85:53

define the dashboard or you can use an

85:57

existing one so in this case I'm going

85:59

to create a new dashboard just for

86:00

illustrative purposes so I'm gonna call

86:02

this dashboard not going to give it a

86:04

great name hey 1 2 3 and I'm gonna

86:08

create a new dashboard and I'm going to

86:10

decide where do I want to save this

86:12

dashboard in my personal so using the

86:13

folder structure in this case I'm just

86:15

gonna save it kind of

86:18

so now the panel has been added to my

86:22

dashboard and so now I'm looking at this

86:25

this type of view so what I can do is I

86:28

now have the ability to edit this dashboard

86:30

and the panel as well so let's say maybe

86:32

I want to make it bigger I can go ahead

86:34

and do so I can go up to some of the

86:36

more actions in here and I can play with

86:38

some of the toggling so maybe I want to

86:40

go ahead and change the theme and put it

86:42

to a dark theme I can do that I can go

86:45

ahead and add a panel so maybe I want to

86:47

add a text panel and call this location

86:54

and then I can give it some information

86:57

this panel or this dashboard shows good

87:05

and so now that would be in here and so

87:08

now when I'm done editing I can go

87:11

ahead and start to share this dashboard

87:13

out so now I've created this dashboard

87:14

it's now saved maybe I want to go ahead

87:17

and give it to an individual I can go

87:19

ahead and click on this share icon up

87:20

here and I can decide who in the

87:23

organization I want to share with so I

87:25

can say I want to share it with a

87:27

specific person maybe I wanna share it

87:28

with Bob or Beth or maybe we want to be

87:30

more general and I want to send it to

87:32

somebody that has the role of analysts

87:35

we don't really talk about roles today

87:37

that's more of a level-3 thing where you

87:39

would talk about role based access

87:41

controls and setting those up but

87:44

basically you can provide these to

87:46

specific users or specific roles and you

87:49

can also say what they can do so maybe

87:50

you only want them to be able to view

87:51

the dashboard maybe you want them to

87:53

edit it you also have the ability to go

87:56

ahead and share this dashboard out with

87:57

people outside the organization and so

88:00

if you if you want to and if your

88:02

organization has allowed it you can

88:04

share this with anybody that may be

88:06

whitelisted and there's a whitelist

88:08

that's available in the administrative

88:09

side also more of a level-3 thing or you

88:12

could share with anybody in the world so

88:13

you could go ahead and say this

88:15

dashboard I want to share publicly now

88:17

you may want to go ahead and limit who

88:19

you define as public so maybe you do

88:22

want to go ahead and whitelist and you

88:23

know share with only people that are

88:25

coming out of an IP address that you

88:27

know it's part of your physical location

88:29

or you know you have

88:30

all those kind of options to choose

88:32

within there the other thing I want to

88:35

show in here is go ahead and let me set

88:42

it up kind of on the fly so I'm gonna go

88:44

in and edit and this is I want to show

88:45

filters so I've gone ahead and I've

88:47

created this dashboard here so let's go

88:51

ahead and look at the query just to kind

88:52

of remember what we're looking at so in

88:54

this case we're just looking

88:55

we're just parsing out the lab and

88:57

apache access data and then we're just

89:00

pulling the lab Apache access data

89:01

excuse me and then we're parsing out IP

89:03

address so very straightforward so this

89:04

is showing all the visitor locations or

89:06

all the connections that have occurred

89:08

over a period of time which I believe we

89:11

had in the last 60 minutes or last 15

89:13

minutes but let's say I want to go ahead

89:15

and make this easier for somebody else

89:17

to use and I want to start to look at

89:19

status codes so let's say I want to make

89:21

this and I really want to just look at

89:22

the status code for 404s how am I

89:25

gonna do that now I can go back into my

89:28

query and I could go ahead and say where

89:32

status code status code equals 404

89:38

that's one option on how to do this

89:40

another option is I can go to my

89:44

messages I can go to status code oh I'm in

89:48

the count so I need to turn off the

89:50

count before I can do that just clean

89:52

that up and then I can go ahead and grab

89:56

my status code and say 404 and I could

90:01

do it that way so that's two different

90:02

ways that I can go ahead and say

90:04

specifically I want to use 404s
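In query form, that first option amounts to appending a where clause (here status_code comes from the field extraction rules on the demo data; the count is an assumption about the panel's query):

```
_sourceCategory=labs/apache/access
| where status_code = "404"
| count by status_code
```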

90:05

but I want to show a third way so let me

90:08

go ahead and close this I'm just gonna

90:11

go back and reset my screen there we go

90:14

so now here's my query so so let me show

90:16

you the other way that can be done

90:18

within the dashboards itself and setting

90:21

up filters what I can do is I can go to

90:22

the top of the screen here click on

90:24

click on edit first click on filters and

90:27

I can apply a filter and I want to apply

90:29

the filter for status code so I'm gonna

90:31

look for the filter of status codes

90:34

there it is and click add and then click

90:38

done editing so now I'm done so now

90:40

let's say I want to go ahead and look

90:41

for 404s

90:43

type 404 click enter and this screen is

90:48

only going to show 404s and to

90:50

confirm that if I go ahead and look in

90:51

my query

90:52

there's the 404 so what I can do with

90:56

this now is I can share this dashboard

90:58

with somebody and even if they don't

91:00

know the query language they all they

91:02

need to know is the status code so they

91:04

can just say oh I want to you know look

91:06

at 200 now that sorry that was covering

91:08

that they can just simply type 200 and

91:10

they can see the different types of

91:12

status codes there's a sorry should have

91:15

the bracket there that's why it is

91:16

displayed and so this is a way that you

91:19

can go ahead and set up these dashboards

91:20

for somebody and even if they don't know

91:23

the data they don't know the query

91:26

language they can still go ahead and use

91:29

the dashboards and gain or garner

91:32

relevant information from them so that's

91:35

gonna be useful one one other thing I

91:38

want to show you and then we're going to

91:40

be close to wrapping up so let me go

91:44

ahead and show let's go back here so

91:48

this is gonna be true for any query I'm

91:49

gonna grab this query just because we've

91:51

been using it but you can do this for

91:52

any query so I've shown you to save as

91:55

and I certainly want to show this again

91:56

because this is gonna lead me to

91:57

something else that's going to be really

92:00

helpful and more specifically referring

92:02

to the alerts that I mentioned we've

92:05

mentioned a few different times so you

92:08

know we don't want you staring at the

92:09

dashboard we don't want you looking at

92:11

the queries you know non-stop you have

92:14

another mechanism to take advantage of

92:15

sumo and that's going to be those alerts

92:16

and so let's look at how those alerts

92:18

are set up and it's very simple

92:19

you're gonna go say that so you're gonna

92:21

create your query and then you're gonna

92:23

go save as and instead of going or in

92:27

addition but instead of saving it you're

92:28

gonna click scheduled to search and

92:30

you're gonna first decide how frequently

92:32

do you want this search to run so how

92:34

frequently essentially do you want your

92:36

sets of data so maybe I want to get this

92:38

set I want to get an alert every 15

92:40

minutes so choose 15 minutes then I'm

92:44

going to have a bunch of different

92:44

options here and I'll go over the alert

92:48

types first so how do I want to receive

92:50

those notifications do I want to get an

92:52

email and if I would get an email what

92:54

do I want in that email

92:55

I want to go ahead and send the

92:57

histogram this middle chart over here

93:00

do I want to include the search query in

93:02

the email maybe you do maybe you know

93:04

you know depending on the audience

93:05

certainly you know there's different

93:07

reasons why you may or may not other

93:09

options you can go ahead and have a

93:11

script action run so maybe once this

93:13

every 15 minutes some sort of service

93:15

will run so maybe when you get a bunch

93:21

of 404s you want a server to

93:23

reboot and you want a script to run

93:24

to reboot that server you could do

93:26

something like that ServiceNow so if

93:29

you're using ServiceNow and you want to

93:30

integrate with that you can very easily

93:32

and just essentially fill out these

93:34

fields and then webhook is really the

93:37

one I wanted to show and so this is

93:39

where you can go ahead and establish a

93:40

connection with really anything that you

93:43

will utilize webhook so slack Twitter

93:46

although we don't recommend sending a

93:48

message on Twitter that shows your 404s

93:51

but you know you could use any type of

93:53

webhook environment the other option

93:56

that I wanted to point out here is the

93:57

send notification let me just so I can

94:01

have this occur every time a search is

94:03

complete so let's say every 15 minutes

94:05

this search is gonna run it's gonna send

94:06

me all these results and that's fine but

94:09

maybe I want to only receive a

94:10

notification when a condition is met so

94:13

maybe I only want to know when there's

94:15

been a certain amount of 404s and

94:17

if I'm if I'm under that amount of 404s

94:19

I don't really care cuz that's you know

94:21

it's under my threshold and it's not

94:23

it's it's a good thing so I don't need

94:24

to be notified of it but maybe you need

94:26

to be notified when you hit a hundred

94:28

404s or you know when you hit some sort

94:30

of ratio between 404s and 200s or

94:32

something like that that can be all

94:34

configured for you as well or you can

94:35

configure that yourself rather but it

94:37

can be configured in the environment so
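Putting those pieces together, the scheduled search behind a threshold alert like this can be as simple as a count (field names assumed from the demo data); the schedule, say every 15 minutes, and the condition, say notify only when the number of results is greater than 100, are then set in the Save As dialog rather than in the query itself:

```
_sourceCategory=labs/apache/access
| where status_code = "404"
| count
```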

94:39

let's see let's go back to the slide

94:42

deck and kind of wrap up here today and

94:45

I'll open up to questions if you want to

94:47

go ahead and type a question in that's

94:50

cool otherwise I'll try and or just open

94:52

it up and you guys can ask questions but

94:57

let's see sorry about that noise there

95:00

we go so monitoring alerts this is we

95:02

just looked at it and these are you know

95:04

just some examples the hands-on lab as

95:08

mentioned this you know

95:09

I certainly encourage you guys

95:10

to go through these processes and

95:12

actually do the work yourself to get

95:14

the hands-on experience metrics I'm not

95:17

gonna really cover metrics too much

95:18

right now

95:20

we looked at it earlier and so with the

95:23

metrics we went ahead and when we did

95:26

the demo earlier we were using the

95:28

metrics to combine to see well once see

95:33

what was going on and then we were able

95:35

to go ahead and use that overlay as we

95:38

see here where we were looking at both

95:40

metrics and we were looking at logs at

95:42

the same time and using that overlay

95:44

to correlate the metrics to the relevant

95:46

logs and ultimately use the logs to

95:48

then identify why things were happening

95:53

ingesting metrics sources uh kind of

95:56

quick summation of this slide is there's

95:58

a bunch of different ways to ingest your

95:59

data into sumo logic for metrics similar

96:02

to the way that there's a bunch of

96:03

different ways to do it using your logs

96:05

and like I said earlier those would be

96:07

covered typically in level 3 so just

96:09

wanted to kind of highlight that you can

96:11

do it but I'm not gonna go for the

96:13

specifics on setting that up right now

96:14

and then metrics dashboards and alerts

96:17

so I've shown log metrics excuse me

96:21

log alerts log dashboards metrics also

96:24

have a similar component and we saw the

96:26

example of those in the the demo I did

96:29

earlier where we were looking at the top

96:31

right panel of CPU usage was an example

96:33

of the dashboard panel or a

96:36

metrics panel rather excuse me and then

96:38

the metrics alert was demonstrated

96:41

through the slack message I received at

96:42

the beginning so where do I go from here

96:45

so you've sat through today you know

96:47

what's where the next steps technical

96:50

resources so in the when you first log

96:53

into sumo logic you'll be brought to

96:56

that home page that you saw earlier

96:58

there's also learned tab and the learn

97:00

tab is going to have a bunch of great

97:01

resources all the ones that are listed

97:03

here so there's gonna be access to the

97:06

tutorials which essentially I pretty

97:10

much went through this one here the

97:11

using Sumo Logic tutorial but would

97:13

encourage you to do so you'll have

97:15

reference to technical documentation

97:17

shortcuts to cheat sheets which are

97:19

going to be really helpful to look at

97:21

the operator so you can see a whole list

97:23

of all

97:23

the operators and then decide you know

97:25

what what makes sense for you to use the

97:28

ability to go ahead and that to go ahead

97:33

and ask support for example or I would

97:36

encourage you to certainly join the

97:37

community forum that's gonna be a great

97:39

place to post questions and and find

97:42

answers and interact with other Sumo

97:44

Logic users and you know discuss your

97:50

use case and oftentimes you know other

97:52

organizations are doing a similar

97:55

use case and kind of bounce ideas back

97:57

and forth off each other let's see so

98:03

I'll go ahead and open it up to

98:04

questions let me go ahead and try to

98:06

unmute you guys let's see


This webinar provides an introduction to Sumo Logic's QuickStart program, covering five key steps to becoming a Sumo Pro user. It highlights how to access and utilize Sumo Logic for data analysis, including searching, parsing, and monitoring trends and critical events. The session features a live demo troubleshooting a scenario where users cannot check out on the fictional 'Travel Logic' website. This involves using dashboards, metrics, and logs to identify the root cause, which is discovered to be the deployment of development code on a production system. The webinar also details the Sumo Logic data flow (collection, analysis, visualization), methods for data ingestion, the importance of metadata for data organization, and how to analyze data through collectors and queries. Advanced features like log reduction, log comparison, outlier detection, and plotting requests on a map are demonstrated. Finally, it touches upon creating dashboards and alerts for monitoring and notification, emphasizing the availability of resources like tutorials, documentation, and community forums for continued learning and support.
