FireEye Helix Explained: Mandiant Query Language Searches

(Chris Schrieber speaking) Hey everyone, welcome to today’s webinar, FireEye Helix Explained: Mandiant Query Language Searches. My name is Chris Schrieber and I am a Platform Strategist here at FireEye, and I’ll be your host today. We’re really excited to kick off this first webinar in a series of topics covering our Helix platform. So, I’d like to start by introducing our expert presenters. Mike Kizerian is a senior technical instructor with FireEye. Joining Mike is Sarah Cox. Sarah is an instructor and curriculum developer here at FireEye. Today Mike and Sarah are going to highlight some specific functionality within our Helix platform to help you boost the effectiveness of your SOC. Specifically, we’re going to cover the Mandiant Query Language, or MQL. Let’s go ahead and get started. I’d like to turn it over to our experts to begin the presentation.

(Sarah Cox speaking) Thanks, Chris. I’ve got the agenda for today on the screen. We’re going to do a quick overview of Helix for any of you who are not familiar with the product, but most of our time today is going to be spent deep diving into the Mandiant Query Language. So, we’ll have tons of examples of searches, of how to modify those searches with directives, and of how to transform that data so that you can digest it. At the end of the presentation, Mike will do some demo searches. Helix is a cloud-hosted security operations platform that allows you to take control of any incident from alert to fix, and this FireEye ecosystem and global reach is feeding into our Helix product. We can see on the slide here, focusing on some of the cloud visibility that we get, that it’s super easy to integrate cloud sources through Helix Connect. All of the data sources that you have feeding into Helix are going to get the benefit of that global intelligence and machine intelligence for rules and analytics detection. But what’s really important to understand is that Helix is only as good as the data that you’re driving into it. So Helix will integrate with over three hundred FireEye and non-FireEye security tools, and we can see on the slide here just a sample of some of those various tools feeding into Helix. This is from our training environment. So we have Bro events, we have specific firewall and HTTP proxy events coming in, and we can see Windows events coming in, so we can get all of that data into Helix. And when we have the data, we can do amazing things with it. So far we’ve had a high-level view of Helix in our ecosystem; if we drill down to individual events coming in, those events enter Helix and are parsed. And that parsing gives us the power to manipulate and locate the data that we’re looking for.
So each piece of data comes in as an event with the raw message. Many of you will be familiar with what those log sources look like coming into Helix, and as they’re ingested into Helix they’re parsed. So, we can see the fields here highlighted in blue; those are pulled out of the events coming in, and the names of those fields get normalized so that we can search across disparate sources. But more than just pulling the data out of the events coming in, we’re also appending metadata, highlighted in green here. You can see some of the things that we append, like the timestamp of the event coming in or the ID of the device that sent it, and all of that is important operationally. But we’ll also be appending data related to the specific event type, and we’ll get a little more into this later. If an event matches intelligence that we have, we generate specific metadata, and we also generate specific metadata around the assets involved, to give you the power to search through your data for what you’re looking for.

We can also decorate the events with GeoIP data when that’s appropriate, and we can see those fields here. So, all told, once we parse the event and apply some metadata and geo fields, we have all of these fields accessible for searching. How we do that searching is through the Mandiant Query Language, MQL. The formal definition is that it’s a data analysis language using queries to retrieve events for further analysis. You’ve all worked with different query languages, so the syntax will be familiar. And just like other languages, it has some specifics that you’ll want to understand, and the more you work with it, the more familiarity and facility you’ll have in working with the platform.

MQL is really the backbone of Helix, so we use it everywhere. Understanding the events that you have in Helix, how they’re parsed, and what fields they have gives you the power to search for that data. We can see here just an example of identifying data from a particular source IP to a destination, and then we can understand very easily how we would search for data that has come into the environment. Once we develop the searches to identify the data that we’re looking for, we can build dashboards and rules to leverage those searches. We’re also using MQL in rules: as we write the MQL and craft it in the way that works for us to identify malicious activity, we turn that into a rule for detection. The point here really is that understanding MQL and being fluent with it is really going to help you leverage the Helix product. One last thing I want to touch on before I hand over this presentation to Mike: as the data is coming into Helix, we’re appending class information related to the specific data source. We can see two examples here with bluecoat and palo_alto_http. Sometimes it’s important to understand the specific class so that you can work with the data there, but a lot of times you might want to work with related classes. So we have a field called metaclass, which groups related classes together. For example, the metaclass http_proxy contains both of the classes bluecoat and palo_alto_http, plus any other proxy class. And we can see here on the slide a list of the 18 metaclasses that are supported this month. So, I’m just about to hand the slides over to Mike; I know I’m going really fast, we have a lot of content to get through. This is the anatomy of an MQL query, and we’re going to show a number of different examples of searching for data and identifying what you’re looking for.
We’re going to provide examples of how to search for string text, how to do comparisons, look for ranges, and use regular expressions, so we’ll have tons of examples for identifying the specific event log that you want. We’ll also show examples of applying directives to those searches, such as narrowing down the time frame through MQL rather than the Web UI, or limiting the results that you get back. And then, finally, examples of how to transform that data. Once you’ve identified and pulled back what you want, you want to be able to work with it manageably and to analyze and interpret it, and you can do those things with MQL as well.

(Mike Kizerian speaking) Thanks, Sarah. So, this is Mike. We’re going to go ahead and look at some of the MQL search clauses now and understand what the syntax actually looks like as you dig through your data. To start with, we have our free-form search. This leverages a fieldless way to search through the data. So, if you’re not quite sure what fields you want to look in, but you know a specific value, then you can search in a free-form way such as you see here on the screen. Now, this is going to chew through all of the data that has been ingested into Helix; it’s not going to look in a specific subset of data. Consequently, this does take a bit more time, and the amount of data you’re going to get back will be quite widespread. A more focused approach can be seen as we search with a field and value. The free-form search, of course, is done without any field; it can also be done against the raw message field. Then we can go in and leverage very specific fields. This is everything that is actually parsed, like Sarah mentioned earlier: everything that we parse out becomes a field that can be searched, okay? Now, we can look for exact matches with an equals sign, or we can use a colon, which performs more of a “contains” match. One of the key things to understand here is how your data looks, and we’re going to show some examples of how you can understand your data in our demo at the end of this presentation. Now, you may have multiple values you want to search for; that can be done with bracket notation as seen here. Bracket notation is an implied “or”; if the values need to be “and”ed together, you can do so with an ampersand immediately preceding the first bracket.
The bracket notation can also be used with the fields as we see here with an example of both the source and destination IP address. Now, a shortcut to some of the bracket notation for fields can be seen with our aliases. Aliases allow us to go through and set…

multiple fields that we can search across without having to write out each one of those. A very common alias here is the IPV4 which will be used for both…

source and destination. But, you can see some of the common aliases here.
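Pulling those patterns together, a few illustrative queries might look like the following. These are sketches based on the syntax described in this webinar; the IP addresses and domain values are hypothetical, and exact alias names should be confirmed against the MQL reference guide for your Helix version. The first is an exact match with the equals sign, the second a “contains” match with a colon, the third bracket notation as an implied “or”, the fourth an ampersand before the bracket to “and” the values, and the last the ipv4 alias covering both source and destination.

```
srcipv4=10.1.1.20

domain:example

srcipv4=[10.1.1.20, 10.1.1.21]

domain:&[mail, login]

ipv4=10.1.1.20
```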

Now moving on, we have the concept of lists. So, the question arises: if we can search across multiple values, what happens when we have a set of values we consistently want to look for, or when that value list becomes incredibly large? Do we keep adding them all here within this array? The answer is no. What we want to do is make use of a list: we can go in and create a list, add in those values, and then reference the list in all of our searches. So, this is an example here where we’re looking for domains. You may be looking specifically for these domains, or, if they are very common, we may be excluding these domains from our search.
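As a sketch, a search referencing a list might look like the following, assuming a hypothetical list named suspicious_domains has already been created in the UI. The reference syntax shown here is an assumption; check the MQL programming guide for the exact form your Helix version expects.

```
domain:[$suspicious_domains]
```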

Now, you may not be sure of the exact match, or quite what you’re looking for, or there may be a fairly consistent piece of the value with some randomization.

The asterisk, either immediately preceding the value or trailing it, gives you that wildcard effect. Please note that these asterisks are used exactly in this way; they are not wildcards that can be inserted in the middle of a value, okay? When we search this way, the results returned will contain the value as some kind of component, and note that we do need to use colons here because we’re not looking for exact matches. Now, we can also leverage regular expressions, and we have some examples coming up of what those look like. We highly recommend that you minimize the search results first, so that when a regular expression is run you’re not searching across the entire Helix data set within your environment. We can narrow the search by looking within classes first, and there are a few other examples that I’m going to show you here. One classic use of regular expressions occurred with Shellshock, leveraging a regular expression against the user agent string. If I just searched on this user agent string, the search would be aborted because it would exceed the stated limit from the preceding slide of fifty thousand returned events. So, I want to reduce this, and I can reduce it first by looking within my HTTP data; in this case, the class would be bro_http. However, the result set would still be quite extensive. Now I want to make sure that the event actually has the user agent string: this “has” directive here makes sure that that particular field is present in the parse. This reduces the data set a bit more, and now I can remove known user agent strings. In this case we’re leveraging the trailing asterisk wildcard, removing anything that looks like Mozilla, and in the next iteration “sxl” as well. Now I am at an amount of results that I can easily leverage within my…

result set. So, now with fifteen thousand results I can apply a regular expression across that, and I have twenty-one results that I can dig into and investigate.
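The Shellshock walk-through can be sketched as a progression of queries like the following. The class and field names (bro_http, useragent) come from the discussion here, but the exclusion operator, the regex delimiters, and the expression itself are illustrative assumptions rather than exact demo syntax.

```
class=bro_http

class=bro_http has(useragent)

class=bro_http has(useragent) useragent!:Mozilla*

class=bro_http has(useragent) useragent!:Mozilla* useragent:/\(\)\s*\{/
```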

We can search ranges as well, whether we’re looking for ports or across CIDR ranges. Note again we’re using our colon here as the field separator. A value range is written with angle brackets, and it is non-inclusive. We also have a special way to look for private IP addresses, with the specific phrase “private ip address lan” within quotes. The bottom example here is showing the destination IP, in other words, where any of those particular network connections (in this case HTTP) are hitting any privately addressed IPs.
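As a sketch of those range patterns, using hypothetical port and network values (remember the angle-bracket range is non-inclusive, as noted above):

```
dstport:<1024,49152>

srcipv4:192.168.0.0/16

dstipv4:"private ip address lan"
```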

In your return results, if you see anything with three dots (an ellipsis), that indicates we have a nested object. Hovering over it will show you what that nested object looks like. This is a JSON document, and referencing the items inside it requires dot notation. In this case the parent field is “detect rule matches”, and within that we would use dot notation, .confidence or .severity, whatever we’re wanting to flag within our search. We also have the ability to do sub searches through parentheses. In this case we can see how a sub search can be built: it leverages the output of one query as input to an additional query. The sub search, which is in parentheses, is resolved first, and then its output is used as input for the outer query. I’m going to go ahead and pass this back to Sarah to discuss a few more components of an overall MQL search clause.
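The dot notation and sub search ideas might be sketched like this. The detect_rulematches field spelling, the confidence value, and the sub search form are approximations of what was described; verify the exact parent field name against your parsed events.

```
detect_rulematches.confidence=high

srcipv4:(class=fireeye_nx has(srcipv4))
```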

(Sarah Cox speaking) Thank you, Mike. So Mike just showed a bunch of different examples of using field names to narrow down the results coming back to you.

We can also apply directives to narrow the events coming back. So, the first one that I’m showing here is narrowing by time. What’s really handy about the time directive is that you can either use the timestamp format shown on the slide or you can use natural language, things like “twenty-four hours ago” or “sixty seconds ago”.

So, we can see those examples here. Obviously you can also do this with the GUI, but adding the time into the MQL means you can document it, grab it later, and run the search again pretty easily. We also have a directive for the page size: if you ran the search shown and set the page size to 50, you would see not the normal 10 events on screen but 50 batched at a time. This directive is especially useful if you’re using API functions. We also have offset for the API, so if you wanted to cycle through returned events you can use page size and offset together in a loop. And then we also have a limit directive. We have a kind of blue warning sign on this: you really want to use care with limit zero. You will see it in some of the FireEye-provided dashboards, but keep in mind that limit zero basically turns off the limit and is going to return every single event. So it’s not something you want to use on your first pass; as you refine your searches, if you find you need more data, you can add the limit zero in. Just use it carefully, because it is going to go back and grab everything. The directives are pretty straightforward to understand. We also provide several different transforms, which are especially useful for manipulating the data to help you visualize your results. We’re going to do things like putting results into a table, sorting them, and grouping them, and then specifically on dashboards we have some special transforms that we can use. So, here we have some examples of sort order. The default sort order of events is based on the timestamp. If you want to sort on a different field, you put the pipe in to indicate you’re transforming the data, and then you indicate which field you want to sort on and which direction, either ascending or descending. And we have some examples here for sorting on one field or on multiple fields.
And if you want to have them going in different directions just separate them by the pipe and include the…

include the sort that you want.
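The directives and sort transform discussed above might be written along these lines. The directive spellings (start, end, page_size, offset, limit) follow the discussion here, but their exact punctuation can vary between Helix versions, so treat this as a sketch with hypothetical values.

```
class=bro_http start:"24 hours ago" end:"now"

class=bro_http page_size:50 offset:100

class=bro_http limit:0

class=bro_http | sort srcipv4 asc

class=bro_http | sort domain desc | sort srcipv4 asc
```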

We also have a group by transform; I think we’ve seen this in some of the examples that we’ve had. The top example is searching for bro_http data and then grouping those results by the user agent. By default it’s going to return the top 50 groups, and a lot of times that’s enough for you to get a sense of your data. I’ll show you ways to control that grouping on the next slide. If you don’t want the groups with the most events, if you want to focus on things that occur less frequently in your network, you can use the less-than sign with the group by to look at the less frequent groups. And of course you can group on multiple fields using that bracket notation we looked at earlier. When you’re using the group by, as I said on the last slide, by default we give you 50 groups back, but you can control that with limit. So if you wanted more than 50 groups, we can set the limit to a hundred or to any value; the example shows a hundred. If we wanted to show the least frequently occurring groups, and not 50 but a specific number, in the second example we’re using the less-than sign and limit ten to show the 10 user agent groups with the fewest items in them. And then with limit we can also set a threshold: if we add a threshold statement after the limit, it will return only groups that have at least 50 items in them. I find, as I’m getting started with MQL, that specifically spelling out limit and threshold is very useful for understanding what I’m requesting. But as you get more familiar, you can use the shorthand notation: just after the group by statement, after you indicate the field, the first number is the limit and the second is the threshold. So if you do not write out limit and threshold, the bare numbers are interpreted first as the limit and then as the threshold. I think we saw some examples of this as well.
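Putting the group by variations together, here is a sketch based directly on the behavior described above (default of 50 groups, the less-than sign for least frequent, the limit and threshold keywords, and the two-number shorthand); the field names are illustrative.

```
class=bro_http | groupby useragent

class=bro_http | groupby < useragent

class=bro_http | groupby useragent limit 100

class=bro_http | groupby < useragent limit 10

class=bro_http | groupby useragent limit 100 threshold 50

class=bro_http | groupby useragent 100 50
```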
One of the really helpful transforms is to put the data that is returned into a table. When you do that, the table is going to include the specific fields that you list out in the MQL statement. So in this example we’ve got the meta_ts field, the source IPv4, the destination IPv4, the destination port, and the domain; those are the fields in the table. And then we’re adding another transform, with the pipe again, sorting the table on the domain, and we can see that in the results here. We can use this table transform in our searches and also in our dashboards when we build them. The last transform I want to cover is the histogram. This is a special transform: it only works in dashboards. You can see here we’ve got a picture of the dashboard. The syntax is like all of the other transforms: it starts with the pipe, we use the histogram identifier, and then we indicate the field that we want to build the histogram on and the time period. That has to be day, hour, minute, or second, and then it will build a histogram of the event flow for that field, broken down by the period that you’ve indicated. So this is a really useful one on dashboards, and you’ll see it in some of the FireEye-provided samples in your environment. Very early on in this presentation, I showed you how we parse data coming into Helix and how our engine appends metadata onto events coming in, and I just want to talk through a few of those. This is a topic I could probably dive really deeply into, but for today, focusing on MQL, I want to make sure you at least have a sense of what these classes are and what the data looks like. The intel hit is a special synthetic event that is generated when an event enters Helix and matches some intelligence that we have. Helix will generate a new event with the class intel hit; it’s generated by the engine, so it’s synthetic.
And that event has all sorts of very useful information about that intelligence match. So, if you understand that an intel hit is generated when events match intelligence in Helix, we can then use that intel hit event to search and build context when researching an alert. Some of the fields that include that intelligence context are intel context, intel matches, and intel match class. We can see that the intel context field here has an icon indicating that it is a JSON field, so we can use the dot notation that Mike showed earlier to search for particular values in there, or group on particular values, to get a sense of what’s happening in our environment. When Mike goes into the demo, he’s going to show one of those.
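A sketch of using the intel hit class and its JSON fields, per the discussion above; the exact field spellings (intelmatchclass, intelcontext, and the nested confidence value) are assumptions that should be confirmed against your parsed events.

```
class=intel_hit | groupby intelmatchclass

class=intel_hit intelcontext.confidence=high
```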

The other metadata field that’s particularly useful is this asset field, and the asset field again is a JSON object. We’ve got a mouse-over here so we can see that JSON expanded. Just like with the intel hit JSON, we can search specific fields in the asset field using dot notation. So, in this search we just looked for events that have asset as a field, and then we’re grouping on asset names; that would help us understand which assets we have in our environment. This field is something that you can leverage in your own environment to build asset-based risk and workflow capabilities. Next, I want to talk about the functions. These are pretty straightforward and work as you would expect, so I’ll go through this pretty quickly. We’ve seen several examples with the “has” function: “has” will return any event that includes the field passed to the function, so has(dstcity). Mike will show an example of this as well when we get into the demo. “Missing” is the reverse of that: if you use the “missing” function, it’s going to list out events in your environment that do not have that field. These two in conjunction can really help you search your environment and understand what you have coming in. They can also help you narrow down your search results: if you know you’re looking for, say, a regex on a specific field, use the “has” function with that field first to make sure you’re only searching events that could potentially match.
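The has and missing functions, and the asset grouping just described, can be sketched as follows, straight from the patterns in this discussion:

```
has(dstcity)

missing(dstcity)

has(asset) | groupby asset.name
```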

We also have some functions that are only applicable in rules. Most of what we’re showing here applies in rules, search MQL, and dashboards; a few apply only in rules because of the resources they use. These all work pretty much as you would expect. For example, the length function calculates the length of the field that you pass it, the equals function tests field values to see if they match, and the hash function generates a hash of the field you pass it. So, we’ve gone through a lot of searches and examples in this presentation, and just to summarize some of the key points for using MQL effectively: you want to pay attention to which features apply only in rules versus those that can be used in search. Search is a little more forgiving, but keep that in mind; you don’t want to use search-only features like transforms or group by in your rules. And if you think about it, grouping events makes sense in search, but in a rule we’re applying an MQL query to one specific event, so it doesn’t really make sense in that context. You also want to use the taxonomy, and by that I just mean use the class and the metaclass to be efficient in your searching. Just adding class and metaclass is going to limit the data results and allow you to work in Helix more effectively. You do want to use proper syntax for field names and aliases; this really comes with experience, and if you key in a field name incorrectly or use the wrong case, you will get an error and learn as you go. Be sure to encase spaces and keywords in quotes. The one that always comes to mind is if you were searching, say on Windows, for “plug and play”: make sure you’ve got that in quotes, because of course “and” has a special meaning in the query language. So use quotes. You also want to anchor your regex searches.
Indicating the start or end of your search expression just makes it more efficient. Take advantage of lists: we’ve shown how to use them, though we didn’t go through creating them; it’s a very straightforward thing to do in the UI, so definitely think about ways you can leverage those. Also use favorites and saved searches. When you run a search in Helix, right under the search window there’s a little “save as” button. If you find yourself doing something once or twice, just save it, and it’ll make your life a little easier to come back to.
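Two of those best practices, quoting keywords with embedded spaces and anchoring regexes, might look like this; the class name and values here are hypothetical.

```
class=ms_windows_event message:"plug and play"

useragent:/^curl\//
```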

Before I hand this back to Mike to run some demo searches, I want to point out some resources; we have many in our docs portal, so have a look there. The URL is listed. We have release notes for new releases, admin guides, and some really useful reference guides like the full MQL programming guide, the self-service parsing guide, and so on. I also want to point out the support portal; if you need to file a support request, the chat function there is super useful for getting online with a FireEye expert. Another resource that we have available to you is our “Tips and Insights” feature videos. These are available on YouTube as well as through the support portal. What I really like about these is that they’re chunked into very small tidbits: you can tell by the name what each is going to be about, and each is just a few minutes long focusing on one topic, so you can pick and choose the videos that seem most useful for you.

OK, so with that I’m going to hand it over to Mike to run some searches in the live environment.

(Mike Kizerian speaking) Excellent. Thanks, Sarah. So, I’m going to switch over to my screen share.

And now everybody should be seeing my Helix interface here. We’re going to go through a couple of quick examples, and then we’ll move into the final piece of this webinar, along with Q&A. The first one that I’m going to show you is one we always recommend as people are getting familiar with their environment. This simply searches across all the data that has been parsed and gives you an understanding of what metaclasses and classes exist within your environment. If you ran this search in your environment and only got back a couple of classes regarding FireEye alerts, that would indicate that you aren’t pushing any logs into Helix, and you would definitely want to make some changes so that you’re getting a lot more value out of it.

And then from this we can see all the various fields here. Of course this will change depending on what class we’re looking at. Now, the next one that we’re going to look at is a search that allows us to look at the intel hit.
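A sketch of the kind of discovery search described here, grouping all parsed data by metaclass and class; the exact query used in the demo may differ.

```
has(class) | groupby [metaclass, class]
```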

It forms this into a table so that we can quickly see, in this case over the past twenty-four hours, what intel hits we have. Again, like Sarah mentioned, this is overlaying our security threat intel against the data that you have within your Helix environment. In this case I can see that I have several APT1 backdoors running in our environment; if this wasn’t a test environment, that would definitely be very concerning. I’ve got two more quick examples here. The next one is a use of a sub search, in which we are taking all of the metaclass connection information we have, looking for where that is headed out to Russia, and then passing that data in to look for…

any usernames where our source IP is private. So, all of these results that have been returned here represent data that has gone outbound to Russia.
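The sub search described here might be sketched roughly as follows. The country and field names, and the way the inner query feeds the outer one, are approximations of what was shown on screen, not exact demo syntax.

```
has(username) srcipv4:"private ip address lan" dstipv4:(metaclass=connection dstcountry:russia)
```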

Now, we’re passing in information regarding security event logs, and I realize that when we talk about Windows event logs and various types of security logs, this can be a very broad discussion depending on how much data you have coming in. This leverages a few different things that we’ve talked about: it uses regular expressions to pull information back, and it is specifically looking at network log-ons. This component right here is taking out any usernames that follow a typical machine-account format. We can also leverage some of the other features we’ve talked about to look at a variety of different log-on types and format this into a table. In this case we can see where systems were connecting from and where they were connecting to. If we look at log-on type three specifically, this shows us network log-ons and helps us identify potential lateral movement in our environment.
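As a sketch of this final demo search: event ID 4624 is the standard Windows successful-logon event, and logon type 3 is a network logon; machine accounts conventionally end in a dollar sign, so a negated match on a trailing "$" is one way to exclude them. The class name, field names, and negation operator here are illustrative assumptions, not exact demo syntax.

```
class=ms_windows_event eventid=4624 logontype=3 username!:*$ | table [srcipv4, hostname, username, logontype]
```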

So, that brings us to an end of our discussion. Now, I’ll go ahead and turn things back over to Chris.

(Chris Schrieber speaking) Cool. Thanks, Mike. So, we’ve had a bunch of questions come in, and a few people have asked, “How do you know what fields are available for searching?” So, Mike, do you want to walk through how you would take a look at what fields are available? (Mike Kizerian speaking) Sure, yeah. Whenever you’re performing your searches, one thing to keep in mind: what a lot of people are really asking behind that is, “Do you have a way to present exactly every single field that’s in the environment?” We don’t presently have that. However, if you do pull up a…

if you do pull up a quick search on a particular class, one of the things that you will see returned are all of the available fields within that.

And so we can see our fields here. In addition to that, we do have a way of viewing some of these fields through our cloud integration. I’m not going to take the time to log in and connect to that, but within your environment, if you go out to Cloud, you will see an integration available that allows you to see a list of fields.

(Chris Schrieber speaking) Cool. And I forgot to mention, we do have a couple more experts on the line to help answer questions here, so bad on me as a host. Harley Listen is a strategic pattern recognition engineer; he basically works with our Helix customers to help develop new parser rules and figure out how to get data into Helix. And Ryan Marino is a senior consultant for deployment and integration services, so he works a lot with helping customers get things up and running as they deploy Helix. Ryan, I think this question is probably in your neck of the woods: “How do we get bro_http data? I don’t see it in our Helix instance.”

(Ryan Marino speaking) Sure. Thank you. So, bro_http is the class that was automatically generated by some hardware that we used to use for forwarding logs, called cloud collectors. Currently we’re using NX appliances to forward those logs, and they will show up as fireeye_nx. You’ll be able to filter the type of data you’re seeing from that class using the metaclass field; those will break down into http traffic and so on. So it would be there if you’re forwarding traffic from an NX, but currently we’re using NXs instead of cloud collectors.

(Chris Schrieber speaking) And also, for any customers who are already using (undiscernible), we fully support those log types. We do have a lot of customers that are using either the open source (undiscernible) itself, or maybe something like Corelight or Security Onion, and we can definitely pull that data in. The key is that you have to have a bro deployment in place actually generating those bro data types. Otherwise, like Ryan mentioned, we can pull in things from our Network Security appliances directly, and that uses a different engine. It’s actually based on Suricata, but it gives a lot of the same types of data.

OK, so there’s a question here, and this may have been answered in the example that Mike did, but: How do we enter queries? Does the user need to have special permissions to enter a query? (Mike Kizerian speaking) I can tackle that one. You should not need any special permissions. When you connect into your system, all you need to do is select this little search button here, this magnifying glass, and that will drop the query window down; then you just start typing, and you can start with the examples that Sarah and I have presented. If for whatever reason you’re unable to view this, you’ll want to get with whoever provisioned your initial account. If you are that person, then please submit a ticket to the FireEye service desk.

(Chris Schrieber speaking) And this one I think might be for Harley: What fields are we currently pulling in from Office 365? (Harley Listen speaking) Right now with Office 365 we actually have quite a few different variations, so answering that specifically is a little tough. If you email in to customer support, they’ll be able to help you out. We’re using API calls to pull that in from cloud environments, but we also have other situations where Office 365 data is sent from on-prem. So that’s a little difficult to answer directly.

(Chris Schrieber speaking) This is probably for Ryan: Can we support integration with Symantec or Broadcom CASB? (Ryan Marino speaking) Sure thing. Yes, we support integrations with practically any solution that is able to output data in either a JSON or a syslog format. What that integration would look like: after we receive the data at the Helix level, assuming that it’s being parsed correctly, we would need to set up custom rules to trigger those as alerts in the console. That would be as simple as saying, if the source is Symantec and it’s categorized as this type of alert, we can assign it a severity. The enrichment portion of that is going to come from the Helix engine itself: when we see an alert, we try to correlate other events that have to do with that alert and present them to you with it. If you select an alert in the console and scroll down to the bottom, there are a couple of tabs down there that give next steps for analysis and a timeline of which events came in and in what order. There is another tab called Associated Events, and that’s where, if another log that came in within a certain time constraint shared a certain number of attributes with that alert, we would pull it in and say, hey, this log might be associated with this alert. So yes, we can integrate with those solutions, and we do have some enrichment around that, but there is a little bit of customization that needs to be done to get it to that point.
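As a rough illustration of the kind of custom rule condition Ryan describes (the class and field values here are hypothetical, since the actual rule definition depends on how the Symantec data is parsed in a given environment):

```
class=symantec_casb severity=high
```

A custom rule built on a condition like this could then be configured to raise matching events as alerts in the console.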

(Chris Schrieber speaking) OK, I think we got time for probably one or two more questions here.

So, Sarah, one of the questions is: I literally got access to our instance of Helix yesterday. Is there a good guide to use to figure out the syntax for reading MQL statements? (Sarah Cox speaking) I love this question. What I would really recommend for new customers is looking at the bottom of the dashboard, when you get on, for the event classes that come in. One of the best things you can do getting started with Helix is to understand what your data is, and then running something like what Mike showed with “has” and class, picking some of the classes that are there on your dashboard and looking at how they are parsed, is going to be a really good exercise. We do have a great MQL user guide, though it is very complex, so once you get more familiar I think that would be a good next step.

(Chris Schrieber speaking) Cool, thank you. And we’re going to slip in one last question here: What is the difference between running a search with the equals sign versus a colon? The example is just ISP equals versus ISP colon. Mike, do you want to take that one? (Mike Kizerian speaking) Sure, yeah. The difference is quite simple: the equals sign requires an exact match, while the colon matches if the field contains that value. One of the best examples is if you’re searching a Windows event log and you want to look within the message field; the message field in a Windows event log is quite verbose. In that case you would definitely want to use a colon, unless you’re typing the entire message out. An equals sign will only do an exact match, so if you’re using it, you must make sure that the value you’re searching for is exactly that value. We generally recommend, when you’re starting out, just to stick with colons.
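A quick sketch of the difference Mike describes, using a hypothetical Windows event search (the class name and message text are illustrative only):

```
class=ms_windows_event message:"logged on"
class=ms_windows_event message="An account was successfully logged on."
```

The first search matches any event whose message field contains “logged on” anywhere in its text; the second matches only events whose message field is exactly that full string.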

(Chris Schrieber speaking) Cool. And we’re coming up on the end of the hour here. If you’d like to learn more about Helix, you can visit our website www.FireEye.com/Helix and also follow us on Twitter @FireEye. We’re also on Facebook, LinkedIn and YouTube. Thanks again, everybody. Take care.

This webinar was recorded on March 24, 2020.

Experts from FireEye Education Services explain and demonstrate Helix searches using Mandiant Query Language.

Topics include:

  • The power of the Mandiant Query Language (MQL) to perform searches and apply transform clauses
  • Best practices and common use cases for Helix to boost the effectiveness of your security operations
  • Essential Helix features that improve visibility across your security operations platform
  • A demonstration of MQL in Helix

The webinar is followed by an in-depth Q&A session.
