(Sarah Cox speaking) Hello, everyone. Welcome and thank you for joining today’s webinar, FireEye Helix Explained: Preview of New Features. My name is Sarah Cox, Principal Instructor here at FireEye, and I’ll be your host. Let’s start by introducing our expert presenters. Vignesh Balan is the product manager for the Helix Security Operations platform, building FireEye’s XDR strategy.
Joining Vignesh is Todd Bane. Todd serves as the technical lead for the deployment and integrations professional services teams. Today Vignesh and Todd will highlight new Helix features that enhance the security analyst’s experience. These features are planned for release over the next few months. In addition to Vignesh and Todd, Veronica Carr from the product management team is joining us to answer questions as time allows. Before we get started, there’s one quick housekeeping item to cover by sharing this disclaimer slide. We’re excited to share these coming new features, but we want to be clear that these are forward-looking statements. Additionally, as a reminder, all information shared in this webinar is confidential and should not be shared broadly. I’ll now turn it over to our experts to begin the presentation. Vignesh and Todd, take it away. (Todd Bane speaking) Thank you for that, Sarah. So, in my mind there are several pain points which, as an analyst and threat hunter, make my role challenging. The role of the information security analyst in today’s world requires a high amount of technical knowledge and flexibility. As time moves on, adversaries continue to become more capable, sophisticated and creative in their approach to running malicious campaigns. We see this in some of the recent incidents of supply chain compromise and ransomware attacks.
This presents major challenges to organizations charged with defending their environment. This also in turn requires higher investment in technology and expertise to maintain security posture.
Other challenges include an increased potential attack surface of which threat actors may take advantage. As infrastructure, software and services evolve, securing these systems plays a critical role in ensuring full enterprise visibility and defensive coverage.
With the mass migration of traditional systems to cloud services, and with remote office work becoming a much more permanent reality, being able to adapt effectively has been a heavy focus for security teams globally.
The way that we hunt for and detect threats in these environments has to be able to adapt, or activity could be missed, and these are exactly the gaps that adversaries rely on in order to exploit the infrastructure.
It’s because of all these points that it’s appropriate to take a somewhat paranoid approach to security: either I am breached and I have evidence to prove it today, or I need to walk into my job assuming that the enterprise is breached and I just don’t know it yet. Having tools that can provide that evidence is important to me, because that will determine how quickly I can respond to an incident in order to minimize damage and loss. That time to respond is a critical factor in determining the outcome of a breach.
In previous times, the most sophisticated threat actors targeted large organizations or government entities for a number of reasons: they possessed the intellectual property worth targeting, and they had the finances that make a campaign’s reward worthwhile. But more recently that game has changed. We can see threats moving down-market to medium and even small businesses based on the ease of exploitation. Ransomware campaigns don’t require the same targeted TTPs in order to monetize the incident. And with the rising adoption of cryptocurrency and other electronic payment methods, it’s much easier to collect ransom from the victims of these attacks.
This forces smaller organizations, who traditionally are more compliance focused, to become more security conscious and adopt more enterprise-grade solutions to combat these threats. And this puts a heavy burden on small businesses who don’t have budgets for enterprise-grade solutions or who just lack the in-house skills.
While larger enterprises may not have the same budget constraints as a small business, an overabundance of security tools may also have a negative effect on the effectiveness of a security team. The fatigue of swivel-chair investigation has a real impact on an analyst’s ability to be efficient and timely when investigating an event of interest. There’s a calculable loss of time with repetitious tasks when it comes to performing event-enrichment data lookups, host interrogation requests and other small tasks of the like.
Problems like this loss of time are exacerbated by non-integrated, disparate tools. Security operations benefit from the greater efficiency of consolidated product suites that offer direct integration and automated actions. Besides the operational efficiency of a consolidated platform, the total cost of ownership is also significantly less than that of a vendor-diverse toolset.
Now, I’d like to take a moment to talk about detection features in Helix. In Helix, you have the power of rules, intel and analytics to assist the analyst with the detection of threat actors. Rules leverage query logic that detects known threat behavioral methodology and provides prioritization to the analyst based on organizational risk and impact.
Rules also provide important contextual metadata about the logic behind the rule, whether it’s based on a known vulnerability, living-off-the-land methodology or an advanced persistent threat indicator. Rules are at the core of what helps drive alerting and investigation of the threats that matter.
These features bring a lot of important data to the front of the analyst’s view, but there are definitely other sources of information that would help paint a more complete picture for my investigations. Vignesh, I hear you’ve been working on some new capabilities that would help improve upon this. Can you maybe tell me a little bit more about it? (Vignesh Balan speaking) Sure, Todd. So, we have made a lot of improvements over time to the detection features in Helix. I would like to particularly highlight some of the upcoming features. We have improved our detection models and analytics, which cut across our 600-plus and ever-growing integrations with third-party products. The rules and analytics, as you rightly said, are the backbone and the strength of Helix. We are enhancing them with better models and will be introducing machine learning as well. The enhancements are not only in the engine but in how we represent the alerts. We will be decorating the alerts with risk scores and details which give the analyst a heads-up: information like whether there are Mandiant Advantage threat intelligence hits available for the assets that are part of the alert, or whether there is what we call a VIP asset, an asset associated with an important person or hosting an important application or internal system, and much more. So, we’ll be seeing a quick demo of this and many more features in a short while. But I just wanted to touch on this and say yes, there are a lot of new things happening, and we like to call this threat detection enrichment. Todd, what would be the next feature that you would like to talk about? (Todd Bane speaking) Well, I would like to maybe talk about investigation. The features offered under the investigate menu provide security operations with a consolidated view of alerts over time, correlated alerts, dashboards and case management as well.
The alerts view highlights the alerts that matter over time and allows the analyst to drill down into the alert details and even manage alert queues for SOC workflows that require incident routing.
The asset-based alert correlation also allows the analyst to gauge the risk an asset poses to the organization based on a correlation of recent alerts, and the case management feature provides the mechanism to run investigative workflows from start to finish.
The case dashboard will give my SOC lead an overview of the current case workload at a glance as well.
While the investigate menu is definitely feature rich, I’m also a firm believer that there’s opportunity to improve. So, I’d be curious to know, Vignesh, what types of new capabilities you might have in store for us here? (Vignesh Balan speaking) Absolutely, Todd. We are improving a lot on the investigation front. We are introducing the correlation of alerts based on common observables like IP address and hostname. Let me explain it a bit more in detail. Let’s take a scenario where an alert is triggered by an endpoint that detected malware, or let’s say ransomware like Maze. An alert is generated tagging the infected asset, and then let’s say the payload which was downloaded onto the endpoint tried to beacon out to a known malicious C2 server, which is captured as an alert by a network sensor. Now you have two different alerts: an alert which was triggered by an endpoint because ransomware or malware was detected, and another alert where a network sensor has actually captured the beaconing messages to a known C2 server. Typically, an existing Helix customer would end up seeing these as two separate alerts. What we’re trying to do is correlate them. Let’s say, for example, that endpoint’s IP address is 10.1.1. We pick this IP address, correlate the associated alerts and group them as a threat or an incident. So, that’s a new workflow. In a typical SOC investigation cycle, these two alerts are presented separately. We are introducing a way to correlate them and present them as a threat or incident. This not only reduces the number of alerts an analyst handles, reducing alert fatigue, but gives a better view of an ongoing campaign, which normally cuts across multiple threat vectors.
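(Editor’s note) The correlation workflow Vignesh describes — grouping separate alerts that share a common observable, such as an IP address, into a single threat — can be sketched in a few lines of Python. This is a hypothetical illustration of the idea only, not Helix’s actual correlation engine; the alert fields and structure are assumptions.

```python
from collections import defaultdict

def correlate_alerts(alerts):
    """Group alerts that share a common observable (e.g. an IP address)
    into candidate 'threats'. Each alert is a dict with a 'name' and a
    set of 'observables'. Fields are illustrative, not the Helix schema."""
    by_observable = defaultdict(list)
    for alert in alerts:
        for obs in alert["observables"]:
            by_observable[obs].append(alert)
    # Any observable seen in two or more alerts becomes a threat grouping.
    return {obs: hits for obs, hits in by_observable.items() if len(hits) > 1}

alerts = [
    {"name": "Endpoint: ransomware detected", "observables": {"10.1.1.5"}},
    {"name": "Network: C2 beaconing", "observables": {"10.1.1.5", "evil.example.com"}},
    {"name": "Email: phishing attachment", "observables": {"bob@corp.example"}},
]
threats = correlate_alerts(alerts)
# The endpoint and network alerts share 10.1.1.5, so they are grouped
# into one threat; the unrelated email alert stays uncorrelated.
```

In the Maze scenario above, the endpoint alert and the network sensor alert would surface as one incident keyed on the shared endpoint IP, rather than two separate alerts.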
We will call it the correlation of alerts as threats. Todd? (Todd Bane speaking) That sounds like a pretty cool enhancement. I’m pretty excited about that. So, then maybe let me talk about hunting for a little bit. Signatureless hunting is one of the hallmark use cases of tactical SIEM analysis, and what I really like is Helix’s high-response index search capability. It’s a feature that essentially lets threat hunters perform more queries faster. The normalized taxonomy is also a huge benefit to me. It makes the query language a lot easier to pivot off of artifacts of interest for comparative analysis: say, if it’s a long or short tail analysis, you’re performing micro or macro pivoting in your searches, or you’re potentially doing complex subsearch queries. I’m able to find the needles in the haystack without having to spend large amounts of time crafting complex query syntax. I also make heavy use of the data transforms when performing analysis on datasets. It really helps me highlight the anomalies in the results which may be worth investigating. And this is truly one of my favorite features when it comes to Helix. So, I guess maybe I’m curious what other improvements you might have up your sleeve on this. (Vignesh Balan speaking) Yeah, sure, Todd. So, like I just highlighted, there are many features already available to existing Helix customers for the investigation workflow. In order to enhance our analyst experience, we’ll be introducing a new kind of visualization we call the threat correlation graph. It’s a new enhancement to our current text-based representation of an alert. The idea is to provide the analyst with a graphical representation of an alert, a representation that tells our analyst a story of what happened in the alert, a graph that provides some drill-down capabilities as well. If I had to use an analogy here, it would be like reading a comic book or graphic novel for an analyst.
So, we call this the threat graph visualization, and we have seen in a lot of our early user-experience testing that this is something which is going to move the needle for our analyst experience. Todd? (Todd Bane speaking) Thank you for that. That’s pretty interesting. Looks like we have a question about whether Helix permits flexible deviance alerting on fluctuations. So, to answer that quickly: you don’t necessarily have to craft a specific rule to detect these types of fluctuations and deviations; you can look at it as a form of variance in counts of events from whatever source you might be working with. I personally will use something called a histogram, which I can use to filter on a specific dataset that I then group into counts over time, and that can show me fluctuations in the count or number of occurrences of a particular type of event of interest. To give an example: if I had failed logins, I could use that as an opportunity to see if there’s a sudden spike in observable failed login attempts, which could potentially be a sign of brute-force attempts if it’s on one particular account. It could also be something a little more widespread at the enterprise, macro level, where multiple accounts are seeing a sudden rise in login attempts that fail. So, hopefully that answers your question.
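(Editor’s note) Todd’s histogram approach — bucketing failed-login events into counts over time and watching for a sudden spike — can be illustrated outside the product with a short Python sketch. The hourly bucketing and the 3× spike threshold are illustrative assumptions, not Helix defaults.

```python
from collections import Counter

def hourly_histogram(timestamps):
    """Bucket event timestamps (epoch seconds) into hourly counts."""
    return Counter(ts // 3600 for ts in timestamps)

def spikes(counts, factor=3.0):
    """Flag hours whose count exceeds `factor` times the running average
    of all preceding hours. `factor` is an assumed threshold."""
    flagged, hours_seen, total = [], 0, 0
    for hour in sorted(counts):
        if hours_seen and counts[hour] > factor * (total / hours_seen):
            flagged.append(hour)
        total += counts[hour]
        hours_seen += 1
    return flagged

# A baseline of 3 failed logins per hour, then a burst of 40 in hour 4 -
# the kind of sudden rise that might indicate a brute-force attempt.
events = [h * 3600 + i for h in range(4) for i in range(3)]
events += [4 * 3600 + i for i in range(40)]
print(spikes(hourly_histogram(events)))  # [4]
```

The same grouping-by-count-over-time idea applies whether the spike is on one account (targeted brute force) or spread across many accounts at the macro level.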
So, moving on to something else that I definitely wanted to cover as well, which is along the lines of response. When I think about what could possibly make my job easier and help me be more efficient in my role, automation is definitely one of the first things that comes to mind. Some of the most useful automation features that I use day in and day out are the guided investigation tips. When I’m reviewing an alert, I don’t have to jump out of the view to run other relevant searches; Helix makes them readily available to me in the console, dynamically crafted based on attributes of the source alert. And over the years, FireEye expertise has been built into the Helix pipeline through our various enrichment services, which helps me make more educated decisions about the activity that I observe. Things like geolocation enrichment, for example, I use constantly. It helps me make sense of network traffic connections which may be coming or going from countries that I wouldn’t necessarily expect to see in my organization. As an example, in an organization that primarily does business in North America, if I see an SSH connection coming to or from one of my hosts on a nonstandard port whose source country is, say, Iran, that might rank highly on my list of connections that I would want to investigate further. I might even take advantage of the Endpoint Security integration to initiate a host triage request from the alert console as a result of this finding.
So, in talking about automations and things like that, what types of improvements can we maybe expect from Helix going forward? (Vignesh Balan speaking) Yeah, sure, Todd. So, this is my favorite topic. We understand that one of the major challenges our customers face is in responding to a threat or an incident, be it triaging a threat, collating extra contextual information and threat intelligence feeds, kick-starting various workflows and, most importantly, taking swift remediation action where possible. Right now these abilities are available as part of the Security Orchestrator product, which is available with the Helix offering. But we acknowledge that the integration our current analysts are experiencing can be enhanced. So, we are introducing a tighter integration. When I say tighter integration, I mean the ability to trigger these response or automation workflows right from your investigation screen rather than having to pivot between two different products. It’s going to be a unified console experience for our analysts to trigger those automated actions right from the Helix screen. The trigger could be a manual trigger while they’re investigating a particular alert, or they could do automated triggering by associating a response action with a rule, so that whenever the rule matches and an alert is generated, that particular response action is triggered. So, that’s a brand-new highlight change we are bringing to Helix for automating response actions, and we are calling it automated response actions.
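(Editor’s note) The rule-bound triggering Vignesh describes — associating response playbooks with a rule so they fire whenever that rule matches — can be sketched as a simple dispatcher. Every name here (the playbooks, the bindings, the alert fields) is hypothetical and for illustration only; this is not the Helix or Security Orchestrator API.

```python
def run_playbooks(alert, bindings, playbooks):
    """When a rule fires, run every playbook bound to that rule and
    attach the results to the alert, so the analyst sees the outputs on
    the same investigation screen. Structure is illustrative only."""
    results = {}
    for name in bindings.get(alert["rule"], []):
        results[name] = playbooks[name](alert)
    alert["playbook_results"] = results
    return alert

# Hypothetical playbooks: create a ticket, enrich the alert's IPs.
playbooks = {
    "create_ticket": lambda a: f"ticket for {a['rule']}",
    "enrich_ip": lambda a: {ip: "lookup-pending" for ip in a["ips"]},
}
# Binding: whenever this rule matches, both playbooks run automatically.
bindings = {"ransomware_detected": ["create_ticket", "enrich_ip"]}

alert = {"rule": "ransomware_detected", "ips": ["10.1.1.5"]}
result = run_playbooks(alert, bindings, playbooks)
```

The same dispatcher covers the manual case too: an analyst clicking “confirm” on a chosen playbook is just a one-off binding for the alert in hand.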
(Todd Bane speaking) I’m really excited about these new features. This sounds pretty awesome. I’m not gonna lie. Can we see maybe what they would look like and how they would function? (Vignesh Balan speaking) Sure. So let’s have a quick walk through. I’m just going to take a moment to share my screen here.
Todd, let me know once you’re able to see my screen.
(Todd Bane speaking) Yeah, I can see it now. (Vignesh Balan speaking) Okay, great. Alright. So, like I mentioned, there are four new features, and we were talking about threat detection and enrichment, so let me start with that. What do we mean by threat detection and enrichment? We have enhanced a lot in our detection engine, be it the models or the logic used, and we are introducing machine learning as part of the detection. What I would like to showcase in this new, improved screen, which is an upcoming feature, is the ability to see all these details, like why was this particular threat given a critical score of 140? What kind of information can be presented as part of each and every threat is available right in the new dashboard. And not only that, we are trimming down the words we use to describe an alert or a threat by introducing user-friendly names for these threats and alerts, so it’s easier for an analyst to walk through them. We have also introduced new widgets that help you, as a company using our Helix product, understand how you are protected: have you covered all the threat vectors you possibly can? When you use a consolidated platform like Helix, you will want to bring in all your threat vectors to get the best out of it, and that coverage is visible as part of these widgets; there are a few more widgets available here as well. I’ll quickly jump to the next feature set we were talking about, which is the correlation of alerts. I have just opened an alert which was triggered here, and as you can see, it clearly denotes this as a threat, or an incident, wherein about 13 alerts were generated. If this correlation were not done, the analyst would have ended up screening through all 13 alerts and correlating them manually.
So, our correlation engine has detected that these alerts are related to each other, and they have been consolidated and presented as a threat which is available for our analyst to triage. A quick walkthrough of what is available here: as clearly noted, there is an email alert associated with it, which is basically saying there’s a retroactive detection here. There’s also a FireEye NX alert wherein a SmartVision event was triggered as part of this, and a handful of FireEye Endpoint events were triggered as well. So, this is what we call alert correlation, and it will significantly reduce the alert fatigue our analysts are going through and also give a better picture of a campaign which is running in the environment. That covers the second feature we were highlighting. Now I’m moving on to the new representation of the threat, which we call the threat correlation graph. We would like to present a story to our analysts so they can understand and analyze a threat. This is a summary view, and the user can expand these nodes and read it like a storybook from left to right to understand how this threat campaign unfolded. Quickly, to give you an idea, let’s say I expand to see what sources are involved. It says that I have a handful of hosts associated with this, and I would like to know how these assets, our hosts, are triggering these alerts. So, I’m going to expand this alert bubble, and I’m able to see that there were a handful of alerts associated with it. I’m going to see how they are related to the assets which were affected by this alert, and by clicking on each alert, the associations are clearly explained, and you can expand them further to clearly understand and build the actual story of how this threat evolved.
For example, in this case the endpoint alert was related to one of the users, “Vinent”, and it involved a system component and an IP address, 10.61.174.135. And not only that, if the user wants more details on an asset involved, they can click on it and that information is available right there without having to pivot to any other location. So, this is what we call the threat correlation graph, and it’s going to reduce the time analysts spend understanding an alert by letting them read through it like a storybook. That’s the threat correlation graph, which is the third feature we were talking about. The fourth feature is our automated response actions. I’m going to give an example here. Let’s take an alert where I would like to perform some response actions. As part of this deployment we have these playbooks, as we call them, which are readily available for an analyst to trigger right from the investigation screen. Let’s say I would like to create a ServiceNow ticket for this, and I would probably want to go ahead and enrich the indicators shown as part of this alert against VirusTotal, and maybe also check whether there are any users here and whether there is enrichment available from Azure. I go ahead and click confirm, triggering these playbooks, and once I trigger them, the outputs are readily available right here on the investigation screen. These are the playbooks, or response actions, I just triggered; to save time, I triggered a few of them a couple of hours back. So, let me show you some of the outputs here. For example, there was a ServiceNow ticket created already, and the output of that is readily available here, so the user can pivot to ServiceNow without having to create the ticket manually. A copy of this alert has been created as an incident in ServiceNow.
As another example, we have an integration with our cloud product which takes care of quarantining a cloud VM instance, and if this alert had a cloud VM instance ID associated with it, we would be able to quarantine that instance right from here. We have various other response actions readily available for an end user, and this is going to significantly reduce the time our analysts spend on each alert or threat. So, that’s the overview of all four features we were talking about. Todd, what is your first impression here? (Todd Bane speaking) Well, I’ll tell you off the bat, I really am impressed with the change in the way that data is being presented in the alerts view with the threats. I really like the way that the information is broken down. It seems to be more concise and to the point. As an analyst, obviously time is of the essence for me, and the more alerts that I can respond to, the more that I can accomplish in a day. But I’ll also say, looking at the automation piece here, that’s going to be an immense time saver for me and other individuals in the same type of role. The story time view, I think, is also pretty unique, because as they say, “a picture speaks a thousand words,” right? Sometimes it’s very difficult to visualize in your head how raw data sets interrelate, and I think the story time feature paints the picture that most analysts want to see of how the various elements of the investigation interrelate with each other. So, that I think is pretty impressive. I also liked how you tied together the different alert types, because I saw that you had Windows methodology, which is something that you would gather from an endpoint locally; the endpoint technology from FireEye Endpoint Security as well; network-based alert detection that was correlated; and cloud-based event feeds that were correlated with this as well.
So, it seems to me like you’ve done a really good job of being able to tie together all layers of the security stack, on-prem and cloud as well. I’ve got to say, this is really exciting. And it looks like we have a question here in the question list: “We today have many customers with FSO that create similar playbooks for response actions, which seems like reinventing the wheel over and over again. Will FireEye be creating more playbooks over time so that customers can leverage or share them?” Is this a question that you want to answer, Vignesh? Or is it one you want me to take a stab at first? (Vignesh Balan speaking) Let me quickly answer it. So, to answer your question, yes, we will be creating new playbooks which will be readily available for our customers to use. In fact, what we are doing for our existing FSO customers is that, rather than having them work with two different products, we are going to have a seamless integration between Helix and FSO. It’s going to be a single, unified console or platform where you can define your own response actions and, not only define them, actually use them by triggering them right from your Helix console without having to pivot into multiple different tools. That’s the idea behind this tighter integration. And to your question on playbooks being made available: yes, we do have new playbooks being created and made available for our customers. (Todd Bane speaking) Something that I’ll definitely mention on the topic of playbooks made available for FSO through FireEye: we have, on the FireEye Marketplace, a number of published ready-made playbooks that you can download today. As the lead of the team of automation developers and SOAR subject matter experts, I can say this is content that we are constantly developing that we think will be useful to our customer base.
So, if you’re interested in those types of reusable playbooks, I definitely would invite you to take a look at the FireEye Marketplace and look for the security orchestration playbooks that are readily available there. New content is being published on a monthly basis.
(Vignesh Balan speaking) While we are answering questions, I would like to also touch on an important topic here. We have been getting a lot of questions from our customers about FireEye XDR. So, what is FireEye XDR, and why are we building these new features? Apart from enhancing our customer experience, we are introducing these features to align our product strategy as an XDR offering. We have heard many of our customers ask, “What is XDR?” XDR is a construct, or an architecture, which consolidates your security control products in a unified console. FireEye XDR gives you the ability not only to natively integrate with the FireEye stack but also the flexibility to bring in your own third-party products, providing a unified experience. So, we wanted to cover this in a single slide and make clear that what you have been hearing about FireEye XDR and these features are related, and this will enable us to deliver many more feature enhancements which will be readily available for our customers to use. To quickly summarize what you saw in our demo: extending our detection and response across all threat vectors, correlation and prioritization of security data and alerts and, most importantly, automated response actions, all happening from a unified platform. What does this mean to you? Improved protection, increased analyst productivity and much more. That brings us to the end of the presentation part. So, Sarah, if there are any unanswered questions, we would be happy to answer them.
(Sarah Cox speaking) Thanks, Vignesh and Todd. You both provided great insights, and I’m sure our audience has benefited from seeing these new features and hearing about them with your enthusiasm. These are definitely going to maximize the visibility and efficiency of an organization’s SOC platform. I’m going to move us into the Q&A portion of the session. Before we dive into these questions, I’d like to welcome Veronica Carr from the product management team. I’ll start with the first question here, which is, “If I have Helix today, are the current data sources that I have feeding Helix relevant to these new features?” I’m going to throw this one to Todd.
(Todd Bane speaking) That’s actually a really good question, Sarah, thank you. So, the data sources that you’re feeding into Helix today are absolutely relevant. For this new version of Helix that we just demonstrated, for all intents and purposes the content really is no different. The analytics and the intel all utilize the same data feeds that you would implement in Helix as it is today. So, they definitely are relevant and, if anything, are enhanced through the features that Vignesh was able to demonstrate here.
(Sarah Cox speaking) Thanks, Todd. There was a clarifying question on the conversation we had about FSO. So, just to make sure this is stated clearly, the question was, “Is this new feature meant to replace FSO, and will there be a way to use existing FSO playbooks within the Helix console?” Let me throw that one to Veronica.
(Veronica Carr speaking) Sure. So, basically our roadmap is as follows. Vignesh and Todd did a great job of showing how we’re integrating the SOAR component into Helix as an overall platform. As we go forward, the answer is going to be yes, in that anything that you have as far as playbooks and the like will be part of the same platform. Now, there are two options today. We just put out a new edition called Helix Detect; it’s in preview, or limited availability, today, and that does have playbooks built in, as Vignesh showed you. The enterprise version of Helix will not just have the ability to point and click with the playbooks, it will also have the ability to build them like you do today, and for current customers, those playbooks that you have today will transfer over. And Vignesh, feel free to add anything; I know you are very close to this part of the product. (Vignesh Balan speaking) Yeah, you’ve covered everything. Just to add a few more points: does this replace FSO? No, this doesn’t replace FSO. We are actually folding the FSO features into Helix as a unified platform, so it’s not a replacement for FSO; FSO and its features are going to be available as part of the unified platform. That’s how I would put it. (Sarah Cox speaking) Thanks, Vignesh. Alright, here’s another one. “I know I can build dashboards to monitor data sources. Is there any automated monitoring of rate changes for log sources or specific log types?” Todd, do you want to take that one? (Todd Bane speaking) Yeah, for changes in log flow, you can absolutely use the analytics capabilities built into Helix today.
There’s a log tracker module that basically does hourly diffs between event feeds from one particular source class to the next and runs an average calculation over that hour’s period to identify significant increases or decreases in that particular data source. And it does this at the source-class level specifically. So that means when you’re feeding in, say, alerts from a Symantec antivirus product, or you’re feeding in Windows event monitoring from endpoints within your environment, any fluctuation or change in the EPS, the event count it sees over that hour’s period of time, gets diffed, and the module makes a determination as to whether there’s been, say, a 33% increase in events-per-second flow, or whether it has decreased by a significant amount, and it creates a synthetic event that allows you to query that information. We also have built-in rules that you can take advantage of, titled “system monitoring,” that are predefined for one-hour, four-hour, eight-hour, 12-hour and one-day timeframes, I believe, so you can take advantage of that pre-built content to get better operational monitoring of the inflow of your data sources. That way, if any changes occur that negatively impact your monitoring, you can adjust accordingly.
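(Editor’s note) The hourly-diff logic Todd outlines can be approximated in a short Python sketch: compare consecutive hourly counts and emit a synthetic event when the relative change crosses a threshold. The 33% figure comes from Todd’s description; the synthetic-event field names are illustrative assumptions, not the Helix schema.

```python
def rate_changes(hourly_counts, threshold=0.33):
    """Diff each hour's event count against the previous hour and emit a
    synthetic event whenever the relative change exceeds `threshold`
    (33% here, per the webinar). Field names are illustrative only."""
    events = []
    for prev, cur in zip(hourly_counts, hourly_counts[1:]):
        if prev == 0:  # no baseline to compare against
            continue
        delta = (cur - prev) / prev
        if abs(delta) > threshold:
            events.append({"change": round(delta, 2), "from": prev, "to": cur})
    return events

# Steady flow, then a sharp drop, then a sharp recovery - both the drop
# and the recovery exceed the 33% threshold and produce events.
synthetic = rate_changes([1000, 1020, 400, 410, 900])
```

Predefined rules could then query these synthetic events over whatever window (one hour, four hours, a day) fits the operational monitoring need.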
(Sarah Cox speaking) Awesome. Thank you. Alright, here’s a specific question about log data: “Do you have any plans to monitor at the log source level, instead of the class level, for when Helix stops receiving logs?” So this might be about just monitoring log data. (Todd Bane speaking) Yeah, I think, again, talking about the analytics and the way that it currently monitors information, we also have another feature that leverages the source IPv4 as a means of doing data source monitoring. What that really means in terms of Helix is that it looks at the sender IP based on how the broker software that receives the event stream observes the source. So, just to use Windows event logging as an example, if the endpoint itself was sending the Windows event monitoring through something like agent syslog forwarding directly to that broker, the broker would see that endpoint’s IP as the actual sender IP, or source IPv4 address. And so we would use that field as a means of doing what we call silent log detection, to be able to see whether or not that particular source has gone silent. But I would also say that if there are improvements or any ideas about how we can improve our log ingest monitoring, I would definitely invite you to either reach out to your account team, or to one of our D&I consultants if you have an active engagement, or you could also go to support directly to file an RFE, a request for enhancement, and share your ideas. I know the product team is always interested in this type of feedback.
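(Editor’s note) Silent log detection as Todd describes it — tracking the last time each sender IPv4 produced an event and flagging sources that go quiet — reduces to a last-seen map. This is an illustrative sketch only; the one-hour cutoff is an assumed threshold, not a documented Helix setting.

```python
def silent_sources(last_seen, now, max_quiet_secs=3600):
    """Return sender IPs that have not produced an event within
    `max_quiet_secs`. `last_seen` maps source IPv4 -> epoch seconds of
    that sender's most recent event, as observed by the broker."""
    return [ip for ip, ts in last_seen.items() if now - ts > max_quiet_secs]

# One endpoint last reported over an hour ago; the other is current.
last_seen = {"10.0.0.5": 10_000, "10.0.0.6": 13_500}
print(silent_sources(last_seen, now=14_000))  # ['10.0.0.5']
```

Keying on the sender IP rather than the source class is what lets this flag an individual endpoint going quiet even while the class as a whole keeps flowing.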
(Sarah Cox speaking) Thanks so much for that. Well, it looks like we’ve been able to field everyone’s questions and we’re nearing the end. Have a great rest of the day, everyone.
This webinar was recorded on September 21, 2021.
Join experts from FireEye Education Services as they present a preview of upcoming features for Helix that enhance the security analyst experience.
- Threat detection and enrichment
- Alert correlation across data sources
- Built-in automation for response action capabilities
- Threat graph visualization
The session includes a hands-on demonstration of these upcoming features that shows you how to take advantage of detection, correlation, and automated response actions. The presentation and demonstration are followed by an in-depth Q&A session.