(Tess Berdiago Cahayag speaking) Hello, everyone! Welcome and thank you for joining today’s webinar, FireEye Helix Explained: Helix Analytics. My name is Tess Berdiago Cahayag, the Senior Marketing Manager here at FireEye, and I will be your host. So, let’s start by introducing our expert presenters. Sarah Cox is an instructor and curriculum developer at FireEye. Joining Sarah today is Dustin Siebel, Senior Manager of Detection Engineering at FireEye. In addition, we do have one other expert joining us for Q&A later on during the presentation, Todd Bane; he will be sharing his expertise with us. I’ll turn it over to Sarah to kick off the presentation. Sarah, the floor is yours. (Sarah Cox speaking) Thank you so much.
Thanks for taking us through those introductions. So, as an overview of what we’re going to be talking through over the next hour, we’re going to start with Helix in action. Obviously, we have folks coming in with different breadths of experience with the product. Some of you have not really had any experience with analytics. Maybe you’re also new to Helix. So, we’ll make sure we level set on that and then we’ll jump into talking about analytics: defining what they are, how we generate them, and how you can view and consume them and take action in your own environment. We’ll go over a lot of different examples, but we’ll also leave you with the information that you need to understand analytics in your environment: what you have enabled, and what data sources you could potentially add in to extend those analytics. We’ll also talk about some of the related services that we have in Helix. Entities is a good example of this, because the way that we’re powering it is related to the analytics engine. So, we’ll define that for you and show you how you can consume it. And we’ll leave you with some tips on how to leverage analytics in your own environment. So, just to start, I want to make sure that we’re all on the same page with the Helix platform. Helix is a single SOC platform meant to really consolidate your security tools. And we do that with the data that we feed into Helix. With Helix, we want to help you surface and prioritize critical threats. And this is really one of the pieces where analytics comes in and brings a lot of benefit to you, because it helps you surface these threats in a new way. We also want to empower and inform your decisions with intelligence and help you automate your responses in terms of the flow of events coming into Helix.
You may have seen something like this before. If you’ve been working with Helix you’ve seen this in action, but it starts with the data collection. And analytics is no different. The more rich data that we can collect and see in Helix, the better we can help you identify abnormalities in that data. We are feeding data into Helix from events and alerts in your local network, from cloud data sources, from security tools, and from other sources in your network. And as we do that collection, we are then able to match against that data: matching our intelligence and rules to detect threats and alert. And this is the point at which we are also applying our analytics. So, at its highest level, analytics are inspecting for anomalies in your environment. And those anomalies are really going to be contextual for what your organization looks like. Whether a login looks normal or abnormal really depends on the context of your organization. These analytics are happening in real time. Again, this is happening at that match phase as we’re collecting information. We compare the activity that we see with organizational baselines and user baselines to detect if it is abnormal, and we do scoring and provide a description of why we would determine something is abnormal, so that you can consume that and make decisions about how you want to take action on it. So, we can see represented here some of the types of activity that we’re looking for in terms of analytics: certainly logon activity and file transfer, and we also see some of the specific technologies represented here. When we move into the demo you’ll see how you can see the specific analytics that are covered in your environment and how you might be able to add data sources to extend that coverage. One piece we want to talk about, to help you understand how these analytics are happening, is the plug-in analysis that is also done on your events.
So, as we’re sending events into Helix, we want to have a very clear understanding that with each event, we’re processing it against the intelligence that we have and against the analytics that we have. We are also doing a plug-in analysis to look at events over time. And this is what drives our development and understanding of what is expected in your environment. So, we’re building out organizational baselines and user baselines, and that’s what we use for comparison in our analytics: as individual events come in, we compare them to those baselines. So, what does an organizational baseline look like? Well, we want to understand what the typical login locations are for your users, or what the local ISPs are for that user base. And of course that is going to be custom to your environment. We can build that understanding of what is normal based on the data that we see and then look for deviation. We’re also building out user baselines to understand, for a specific user, what the typical user agents or source ISPs they’re using are, and what their typical logon time is. Some users work very strict nine-to-fives; others enjoy flexibility, so they might not work your normal nine-to-five hours, and it might be more typical for them to log in in the evening. So, that piece of the plug-in analysis is really important, because that’s how, when we see a login event, we can compare it to a baseline and identify whether there is a difference and whether it looks anomalous. The plug-in development also helps us with analytics detection. There are certain types of events that really need to be understood contextually. Brute force analysis is a really good example of this. If we see a single failed login event, there’s no way for us to understand contextually how that fits in unless we’re able to see other related events. And so the plug-ins are helping us with event matches by looking at the events around it.
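As a very rough mental model of the baselining described here, the per-user frequency tracking might look like the sketch below. The structure, the trait names, and the one-percent anomaly threshold are all illustrative assumptions, not FireEye's actual implementation:

```python
from collections import Counter, defaultdict

class UserBaseline:
    """Tracks, per trait (source ISP, user agent, country, logon hour),
    how often each observed value has appeared for one user."""

    def __init__(self):
        self.traits = defaultdict(Counter)

    def observe(self, trait, value):
        """Record one observation, e.g. observe("country", "US")."""
        self.traits[trait][value] += 1

    def frequency(self, trait, value):
        """Fraction of this trait's history that matches `value`."""
        total = sum(self.traits[trait].values())
        return self.traits[trait][value] / total if total else 0.0

    def is_anomalous(self, trait, value, threshold=0.01):
        """Treat a value seen in under 1% of history as anomalous
        (the threshold here is an assumption)."""
        return self.frequency(trait, value) < threshold
```

A never-before-seen value has frequency zero, so it is always flagged; a value that dominates the history, like a user's home ISP, never is.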
So, when we have a match in analytics, we generate what we call an analytic advisory. For you, what that’s going to look like is a single event in your environment with the class of analytics. We can see that here in the example. And then just underneath that, one of the fields that’s going to describe that match is the application field. Here we’re seeing an example of a log tracker application field. Additional fields are going to be contextual to the analytic that we matched on, providing further detail. I wanted to point out a few key fields of interest that we’re going to see in all analytics events, like the application field, which tells you what the analytic is. We’re also going to score these. That score is going to be based on how different this particular event looks compared to the baseline, so the scoring is a quick indication for you of what’s happening. But we also provide an explanation field that gives you all of the details on why we detected this event as anomalous. We want to make sure you understand what’s in these events so you can consume them. I do also want to mention, with this example of the log tracker field, that this log tracker analytic is meant to track log volume over time and to detect fluctuations. In this example, the hourly volume dipped, and so we generated an analytic for it. This log tracker can be a really powerful tool for you to use operationally to make sure that your data sources are maintained. I want to just talk through one more slide on analytics before we get into the demo and really see this in action. So, this is the explanation field that I mentioned. Again, every analytic is going to have this field. Dustin is going to show you a couple more of these in action.
But here we can see that when you mouse over and click on this field, you get the pretty-printed JSON explanation, and that’s going to tell you all of the details of why we weighed this as an anomaly and why we wanted to report it to you. This example, we can see, is for a brute force, a kind of horizontal password spray attack where they’re trying a bunch of different usernames. So, this field is what is going to give you details on the analytic, and it is included in the analytic event itself. In terms of the analytics advisories, I mentioned that the class is analytics. So I want to give everyone homework to help you understand analytics in your environment, which is to log into Helix, run a search for class equals analytics, and just have a look at what kind of matches are happening in your own environment. Analytics come with your out-of-the-box Helix experience. You don’t really have to tune anything other than adding data sources, which you’ve already been doing. If you run that class equals analytics search, you can see what is already being reported there. And then, if you have many events and want to dig into them, try grouping by the application field to have a look at which analytics are being detected. Have a look at the score, where you’ll see the level of anomaly that is being reported, and pay attention to that severity field. So that’s a getting-started point for you all; it might be worth just spending a little bit of time seeing how this looks in your own environment.
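As a sketch of that homework, the searches described in this session might look something like the lines below. The exact query syntax is an assumption reconstructed from the field names mentioned here, so check it against your Helix instance:

```
class=analytics
class=analytics | groupby application
class=analytics | groupby severity
```

The first line lists all analytic events, the second shows which analytics are matching, and the third shows the spread of severities being reported.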
With that I’m going to pass it to Dustin to do a demo and to talk more about the specific analytics that we’re offering.
(Dustin Siebel speaking) All right, thanks Sarah. So, one of the questions you may be asking yourself is, what analytics are available in my environment, and out of those, which ones are supported by data that I’m sending in to Helix? To answer that question, we actually developed an analytic to help with that. And those analytic events, which we call analytics reporter, get sent to your Helix instance just like any other analytics event. We also have a dashboard that’s available in every environment that helps summarize this data. All right, so I pulled this up, and this is just a demo education instance. But you can see two columns, or two widgets, right here. One is for supported and one is for unsupported. What that means is that with all of these analytics, we obviously need certain sets of data in order for them to work properly. The way that we do that is we actually use a form of rules that help us down-select and filter that data in the back end into our analytics engine. Right, so if there are things like O365 or RDP events, we need to first define what those look like before we are able to actually deploy the analytics. And so all the supported-versus-unsupported data point comes from is really just whether we have seen any of those alerts or that data from an instance. If we have seen some of that data recently, then we will mark the analytic as supported, and unsupported if not. So I would encourage you, if you review this dashboard and you see an analytic on here, for example maybe an AWS analytic, and you know you have AWS CloudShell data but it is showing as unsupported, you may want to actually work with support. We can go ahead and investigate that as needed. Beyond that, this is also the best place to see what analytics are currently deployed, and you’ll see this widget here called all analytics. Unfortunately we’ve exceeded the max row limit on this widget.
So, if you pivot into search, you can see the full list. Right now we have 41 analytics deployed, and you can see the analytics supported column is right there as well. And the last widget is just a histogram showing analytics over time, to see if there have been any recent changes. Okay, so what I’ll do now is walk through an example of what an analyst might look at when they’re reviewing an analytics alert. I’m going to start by actually using the asset-based alert correlation feature for this. I’ve pre-filtered this to the user in question; this is Jake Smith. I can see he has a risk score here of 140 and two alerts. If we click into this profile, you can see there have been two correlated alerts that are related to analytics: one brute force and one abnormal logon. So, let’s go ahead and click into the brute force alert. Like Sarah mentioned, there are certain fields that give you the most amount of detail, right? There is a summary here that gives you information about who the user is, and what IP, ISP, and other data surround the advisory. But in this case, I’m going to pop right into the explanation field. This gives me, again, the most information to help me determine what happened. This is a brute force analytic; it tries to correlate events together to score the brute force in the best possible way. So the first step here is that we have to actually detect a brute force itself, and that is usually just a number of failed logons from the same source. You can see we detected that and gave it a score of one. The next thing we found was that this was likely a password spray attack, and the way we determine that is really just by looking at the distinct count of users that the source had failures on. We can see there are 11 unique accounts in this case, so we added a score of three.
Lastly, we also correlated a successful logon from the same source to our friend, Mr. Jake Smith, so that obviously looks like it could be bad, right? And this is why we scored it the way we did. At the end, all of these scores get added up, and that’s essentially what determines what severity we give it, which is here. And these severities will always match the level of these alerts. All right, so we pop into the other alert, the abnormal logon. Again, this is very similar; we have our most important data points at the top. But let’s pop into the explanation field. This analytic works a little bit differently. Instead of using correlation like we do with brute force, we’re using baselines. As Sarah mentioned, we process all of this data, and we are continuously updating baselines and checking against them. In this case, we see a number of what we call never-before-seen traits. We see the source ISP has never been seen before for this user, and we also give some extra information on what the most common values we’ve seen are. In this case, for Jake Smith, over 96 percent of the logons that are included in our baseline originated from the acme shell co ISP. So, we give that a score of one. We also noticed that this was a new user agent that had not been seen before. Sometimes this can prove to be very interesting. One caveat here: you may be asking, well, browsers will automatically update themselves, right? Chrome will just auto-update and those version numbers will increment. Obviously we don’t want to trigger an analytic hit every time Chrome updates itself. And so, we’re not showing it here, but we actually have some functionality that we call string similarity.
So, before we actually complete this check, we’ll run it through our string similarity algorithm, which will determine if the current never-before-seen string is similar to some of the most common strings that we’ve seen, the user agents in this case. So if there’s a minor version bump, that’s not going to change the string that much, and in that case we would still score this as being new, but then we’d add another section essentially deducting two points, or maybe one point, because it is very similar to one we’ve recently seen. Lastly, we have a never-before-seen country: the user has logged in from Brazil, which has not been seen before for this user. You can see that over 90 percent, almost 99 percent, have been from the US. We do have some other functionality in this plug-in, things like what we call org similarity, where we can actually check how common Brazil is within your organization. We create a separate baseline in this case for all O365 activity across the entire user base. So, if you have maybe a percentage of your users that actually legitimately do log in from Brazil, then we can account for that; we can reduce some points to say, well, this might be new for this user, but it’s fairly common within the organization to see logons from Brazil. And so we can reduce the score that way. And also, as Sarah mentioned, we do baseline just the volume, right? We check how many times the user logs in on a daily basis, weekly, by hour of the day and day of the week. So, if all of a sudden we see a big spike at two o’clock in the morning on a Sunday, and maybe we had never seen any activity during that hour on that day, that’s something that we would also include in this check. So, really, as an analyst, I think this gives you good information: we had two different alerts that were correlated together, and they corroborated each other.
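The string-similarity deduction Dustin describes could be approximated with Python's standard difflib, as in the hedged sketch below. The 0.9 ratio threshold and the point values are assumptions, not FireEye's actual algorithm:

```python
from difflib import SequenceMatcher

def score_user_agent(new_agent, baseline_agents, threshold=0.9):
    """Score a user agent against a user's baseline, deducting a point
    when a never-before-seen string is nearly identical to a known one
    (e.g. a Chrome minor-version bump)."""
    if new_agent in baseline_agents:
        return 0  # already in the baseline, nothing anomalous
    score = 2     # never-before-seen trait (weight is an assumption)
    best_ratio = max(
        (SequenceMatcher(None, new_agent, seen).ratio()
         for seen in baseline_agents),
        default=0.0,
    )
    if best_ratio >= threshold:
        score -= 1  # very similar to a known agent, so deduct a point
    return score
```

A version bump like `Chrome/96.0.4664.45` to `Chrome/96.0.4664.93` leaves most of the string intact, so the similarity ratio stays high and the deduction applies; a completely different agent keeps the full score.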
The initial attack was the password spray, which was successful, and that was corroborated by the subsequent logon. And from there, I would assume the analyst would take the normal response measures of seeing what other activity was done from that user and IP, possibly disabling the account, et cetera. All right, back to you, Sarah. (Sarah Cox speaking) Thanks, Dustin. So, up till now we’ve looked at the individual events, and you can certainly search on those and understand what’s in your environment. But with the analytics piece of this, what we really want the user experience to be is that we can provide this to you without any additional tuning on your part, and you can start to take advantage of it. One of the ways that we do that is by building out rules and detection based on these analytic events. So even if you’ve never searched for analytic events or looked at the analytics tracker, this is running in your environment. You can see here in this screenshot we have four separate rules built out for this one example detection. This is meant to show that our behavior can be different based on the scoring for that underlying analytic. And you can actually tune this and make adjustments. But the default behavior is that for low-severity or low-scored analytics, we’re not going to generate an alert. We still have those events and you can match on them. But for medium, high, and critical severity, we will generate an alert. And that alert looks just like any other alert that you might have seen and responded to in Helix. We’re providing all of the details on that match, why we’ve generated an alert on this event, and the details on that most recent event. And then, it’s cropped off on the bottom of the screen here, we have this events tab, and you can see we have an event attached to this alert. That event is our underlying analytic event that we just talked through and showed you.
So, that has all of the rich detail on that analytic. It has that explanation field that you can review. If there’s not enough detail in the alert to understand why this abnormal login is of interest to you, then you can review that. So we’re really trying to tie all of that together and bring that information to your fingertips. But again, this alerting is happening as the data is coming in, based on the severity, and you really don’t need to do any tuning to take advantage of it. So, that is essentially the alert piece; it’s just about being able to consume these. All right. At this point, I want to pass this back to Dustin to talk a little bit about how you might be able to tune alerts and look at the analytics data events in Helix, if you do want to try to extend or adjust how analytics are working in your environment.
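Pulling together the brute-force walkthrough (+1 for the failed-logon burst, +3 for the password spray) with the severity-based alerting Sarah just described, the additive scoring can be sketched as below. The thresholds, the successful-logon weight, and the severity cut-offs are illustrative assumptions; only the +1 and +3 increments come from the walkthrough:

```python
def score_brute_force(failed_logons, distinct_accounts, success_after):
    """Illustrative additive scoring for a brute-force analytic advisory."""
    score, checks = 0, []
    # Step 1: a burst of failed logons from one source looks like brute force.
    if failed_logons >= 10:                     # threshold is an assumption
        score += 1
        checks.append("brute force detected (+1)")
    # Step 2: failures across many distinct accounts suggest a password spray.
    if distinct_accounts >= 5:                  # threshold is an assumption
        score += 3
        checks.append(f"password spray across {distinct_accounts} accounts (+3)")
    # Step 3: a successful logon from the same source is the strongest signal.
    if success_after:
        score += 4                              # weight is an assumption
        checks.append("successful logon from same source (+4)")
    # The summed score maps to the severity the alert is raised at; by default
    # only medium and above would generate an alert.
    severity = "low" if score < 4 else "medium" if score < 8 else "high"
    return score, severity, checks
```

The Jake Smith example, with all three checks firing, lands at the top of the scale, while an isolated failed-logon burst stays low and, by default, never becomes an alert.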
(Dustin Siebel speaking) Alright thanks Sarah. I’m just going to pop back into the demo instead of the slides.
So, Sarah had mentioned that any medium or higher risk analytic will in most cases produce an alert out of the box, right? There’s nothing you need to do. Like I described earlier, we use a scoring system with analytics where, if a score reaches a certain threshold, it becomes a medium, or it becomes a high, et cetera. And this is really a way for us to be able to call out things that are anomalous but that maybe don’t have high enough fidelity for something that we would actually want to alert on. In fact, some of these low-severity hits would be very noisy for you if we alerted on them. But obviously every customer is different, right? So we have to try to find a balance that works best for all customers. For some customers, you might not see many analytics hits at all, so you might want to enable alerting for some of these lows. The first thing I’ll do is just show you how you can run a search. I’ve pre-constructed a search here where I’m just looking for all analytics. This colon is important because it allows me to do a prefix match, so this will include those beta events that Sarah mentioned. And then I’m going to just look for lows. So, if we run this, we do see some hits: both some regular analytics and some beta events. This is also how you can search for just the beta events if you want. And from here you can pivot into each one of these. So, maybe I’m interested in the low abnormal O365 logons; I could just append that using the application field. And let’s just say I determine that I actually do want to get alerts for this, right? This looks useful.
What I can do then, if I need a hint at what rule is referencing this: there are these detect decoration fields on these events. I can click and I can see, okay, this ties to this rule, abnormal logon. So, let’s go ahead and pivot into that rule. What I would do now is go into rules and click into the FireEye tab. The easiest way to find all analytics rules is simply to search for that in the name; we will almost always put the word analytics in these names, and with that filter alone you can see all those rules. But in this case, we want to look at all of the rules that are set to not create an alert.
And in this case, we wanted to look for O365. You can see, okay, here is the rule in question. If I click it, quite simply, I would just enable alerts using this toggle. Now, every time there’s a hit like we saw when we were in search, instead of just decorating the event, it would also create an alert.
Beyond that, you may want to tune analytics. Admittedly, there is a bit of a black box here: with these, in the background there’s code and a lot of things going on. It is not customer-codable, but there are some things you can do to adjust the alerts you receive. Let’s just say that I want to get alerts for these lows, but I’m seeing a lot of hits from one source ISP that is benign. Maybe it’s a back office or a vendor or something that, for whatever reason, is generating these hits, but I don’t care to see those anymore in my alert queue. Just like any rule, and this isn’t unique to analytics, I can tune a FireEye rule. What this means is that it’s going to leave the original query intact; that’s not tunable, but you can append any of your own custom logic. So maybe in this case I want to say source ISP equals 123.4, or maybe I would want to reference a list. All right, we’ll call it legit O365 sources. Now, any time that I add an IP address to this list, this rule will automatically filter out any of those hits and you won’t get an alert for them. Okay, the other thing I did want to show is that we do have some other dashboards. In the UI, go to Dashboards, Custom, and in this case I just searched for Helix. You could search for log tracking, or, in the case of the dashboard I looked at earlier, tracker, and you can even filter on FireEye or user dashboards. I already showed off the analytics tracker dashboard, and Sarah mentioned the log tracker analytic earlier. That concludes the presentation. At this point, I’ll turn it back to Tess for Q&A.
(Tess Berdiago Cahayag speaking) Awesome, thanks guys. Some really good information, some great insights, and I’m sure our audience has benefited from many of the key takeaways that you provided. Before we dive into our Q&A, I’d like to welcome Todd Bane. I mentioned Todd would be joining us for the Q&A portion. So, welcome, Todd. Let’s start with the first question: “What analytics are available in Helix?” (Sarah Cox speaking) So, I’ll grab that one, Tess. Again, the best single place to see those analytics and their descriptions is to go from the dashboards menu to the custom dashboards and review the analytics tracker. That’s going to show you a list of all the analytics, and it’s going to break down, based on your data, which analytics are supported given what is seen in your environment and which are unsupported, which could give you some suggestions for additional data sources to add in to leverage analytics in a more thorough way.
(Tess Berdiago Cahayag speaking) Perfect. All right, thank you Sarah. Next question, and I think this might be a Todd question: “How can I use analytics to monitor my data feed?” (Todd Bane speaking) Sure. So, as far as monitoring data feeds, like we mentioned a couple of times through this presentation, we have a few applications in the analytics that do this type of monitoring. We have one application titled EPS which gives you a snapshot of your contracted EPS for the instance but also the current EPS at the time of observation. It gives you some additional information about whether or not the threshold for the EPS contract is being exceeded, whether the EPS count in the past seven days is being exceeded, and also the percentage of EPS used at a given timestamp or date. The other one that we mentioned was log tracker, which gives you that observation at the class level as to the fluctuation, the increase or decrease, in EPS usage. (Tess Berdiago Cahayag speaking) Awesome. Thank you, Todd. “Is there an easy way to correlate or perform an index search for an analytics event, such as brute force, that shows all of the original events which created that analytics advisory?” (Dustin Siebel speaking) This is Dustin here; I’ll take it. It’s a great question. So, we down-select all the original events, as we mentioned, into the analytics engine, and the output is a new event, what we call a synthetic event. So the question is how you then pivot back into those source events. And I will admit that I think there’s some room for improvement here for us, but I think it really depends on the analytic. If there is a brute force, we do include what we feel are the most important data points that allow you to pivot from the analytics event. In the case of a brute force, there should be a source IP field as well as a username.
But even just starting with source IP, and then maybe adding your data source, if it’s O365 you add your class, I would just start from there and maybe do a groupby on the action field in that case, or whatever field would make sense for the source in question. (Tess Berdiago Cahayag speaking) Thanks, Dustin. Another question I’ve got here: “I tried searching class equals analytics for severity low and it yielded no results. Did I do something wrong?” Sarah, I think this might be something for you.
(Sarah Cox speaking) Yes, sure, I can take that. If there are no results, it won’t show results, and that can sometimes look like an error, especially if you’re expecting data. If you have a syntax error, you will get a report on the field name being wrong. So it may just be that you have no matching events. What I might suggest is tuning the query a little: maybe extending the search period back to seven days to see if you then get data back, or potentially taking out part of the query. In the example, you had specified severity equals low; maybe change it to class colon analytics star and then the group by, and then you would see all groups for all severities. It may just be that the search is so targeted that there really are no results, so try opening it up that way.
(Tess Berdiago Cahayag speaking) Next question we’ve got: “What data sources do I need to use analytics?” (Sarah Cox speaking) Let me grab that one. That is really going to depend on the type of analytic. As we said, we’ve got over 40 now, and they’re going to be driven by different data sources. What I would recommend is having a look at the available analytics. If your environment has an analytic that’s not supported based on the data, then you are going to have that listed in your unsupported analytics, and the naming is going to give an indication of what the data would be. So, O365 brute force, for example, points you at the data that you would need. If you have questions about the data sources, or you think that you’re sending a data source that should be supported and it’s not showing as supported, that would be a great question for support: just log on to the support site and engage with chat, and they can help you understand that. But hopefully the descriptions in the analytics tracker are going to give you the information that you need on which data sources to configure.
(Tess Berdiago Cahayag speaking) Awesome. Thank you, Sarah. Okay, another question we’ve got here: “Can analytics tell me when my data sources drop or increase?” Todd, I think that one might be a good one for you.
(Todd Bane speaking) Yeah, so that’s what we mentioned with the log tracker application. It gives you a delta of the percentage of increase or decrease of that EPS. If you look at the info parsed field of that analytic event, it will tell you something to the effect of: the hour’s event count for a class, say Windows event, decreased by 24 percent compared to the previous hour. And so you can use the elements and attributes within this analytic event itself as a means of creating a custom rule that you could then use to evaluate specific data delta thresholds. So if you were interested in identifying cases where, say, the EPS for a given class increased by three hundred percent, and you want to fire an alarm for something like that, you could use the ratio field as a means of identifying that threshold and generate a custom rule to alarm you when those events occur. And inversely, obviously, the same if there’s a significant drop as well.
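A custom rule like the one Todd sketches, firing on a class's hour-over-hour volume change, could be prototyped along these lines. The 300 percent spike and 90 percent drop thresholds are assumptions drawn from his description, not built-in defaults:

```python
def eps_delta_pct(previous_hour, current_hour):
    """Percentage change in event volume for a class, hour over hour."""
    if previous_hour == 0:
        # No prior volume: any events at all count as an infinite increase.
        return float("inf") if current_hour else 0.0
    return (current_hour - previous_hour) / previous_hour * 100.0

def should_alarm(previous_hour, current_hour, spike_pct=300.0, drop_pct=-90.0):
    """Fire when volume spikes above spike_pct or drops below drop_pct."""
    delta = eps_delta_pct(previous_hour, current_hour)
    return delta >= spike_pct or delta <= drop_pct
```

With these thresholds, the 24 percent dip in Todd's example would pass quietly, while a class that more than quadruples, or all but disappears, would trip the alarm.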
(Tess Berdiago Cahayag speaking) Alright. Thank you to our presenters. Thank you everyone. Bye now.
This webinar was recorded on January 14, 2021.
Experts from FireEye Education Services detail the ins and outs of analytics in FireEye Helix.
- How Helix Analytics automate the detection of suspicious activity in your environment using techniques that rules alone cannot provide
- The types of detectors employed by analytics to identify specific kinds of activity that are often associated with attackers
- Analytics Advisories that help you identify additional data sources for analytics detections in your environment
- Techniques for building context on analytics alerts to enhance alert analysis and response
- How to improve threat and vulnerability detection with advanced user behavioral analytics
- A demonstration of how to review Helix analytics detections in your environment and tune their effectiveness.
The webinar is followed by an in-depth Q&A session.