Agile Project Management for Cybersecurity

In our recent webinar, “Agile Project Management for Cybersecurity,” Myriad’s PMP- and CSM-certified project manager Emma Sally presented an overview of agile project planning, its benefits and effects, terminology, how it compares to traditional practices, and how an IT department can implement this methodology today.

Sally starts by providing a comparison of project management styles teams typically use: waterfall, Scrum, Kanban, and “other,” as unique project management styles can be created by handpicking aspects of each methodology based upon a team’s needs. Afterwards, Sally explains the origins of agile project management, its manifesto, and the four values and twelve principles of the methodology.

The specifics surrounding the two most popular agile methodologies—Scrum and Kanban—are listed, including the use of a development board to visualize work, timeframes, designated roles, and rules. The project lifecycles of agile and waterfall are compared. Waterfall, a traditional sequential process, creates roadblocks: no phase of a project can move forward until the previous one has been completed and approved. In contrast, agile is iterative, so teams arrive at their minimum viable product sooner.

Sally walks attendees through Myriad’s agile project management framework, PDCA (Plan, Do, Check, Adjust), and demonstrates how it enhances readiness, visibility, and adaptability, all vital components of a strong cybersecurity posture. She goes over Myriad’s security reference model, which breaks down everything to be considered while evaluating the security strength of an IT infrastructure into 4 domains and 24 categories. This simplifies the products and services into easy-to-digest items, making the security landscape easier to navigate.

By addressing each category and creating a prioritized list of projects (then reprioritizing as team members complete projects and adjust), a project roadmap emerges to move a security posture from its current state to its ideal state. Zooming in to specific pieces of infrastructure and then zooming out to evaluate the infrastructure as a whole is crucial to maintaining the safety and competitive edge of any organization.

In all, as threat and risk profiles continue to evolve, IT departments are forced to reflect at regular intervals and adapt rapidly, making agile project management a great tool to support an organization in this endeavor. The ability to catch defects and course-correct more quickly increases productivity, efficiency, and quality while continuously arming stakeholders with information that moves the ROI needle.

To view the entire “Agile Project Management for Cybersecurity” webinar, please click here:

Understanding Data Loss Prevention

I’ll assume that you’ve watched the Indiana Jones movie Raiders of the Lost Ark – if not, be warned, there are spoilers in the next sentence. At the end of the movie, the Ark is reported to be in a place that is “very safe.” To which Indy replies, “From whom?” Later, the scene cuts to a man rolling a wooden crate through a huge warehouse, already filled with other large wooden crates, to an open spot on the floor where he leaves the crate to sit for eternity… or at least until the Kingdom of the Crystal Skull movie came out. Sure, the location had a menacing fence topped with barbed wire surrounding the perimeter, and guards posted to check that incoming and outgoing traffic was permitted or blocked based on the schedule (policy) for the day – but those didn’t seem to be enough to keep the data… errr… artifacts safe in the next movie, when a determined force of baddies set their sights on removing one.

Imagine the warehouse, the surrounding property, and the security measures in place as your IT ecosystem. The large wooden crates could be systems and files within the ecosystem, and the artifacts could be the data housed within those systems and files. Some of the crates might be consolidated into a specific area of the warehouse with carefully guarded access. Similarly, critical systems in organizations’ environments are likely grouped onto network segments with access policies governing who can reach the content. File permissions further restrict what actions specific people (or groups of people) can perform on data. These network- and system-level rules do not, however, provide complete coverage or control over all data to prevent its loss or leakage.

Consider for a moment the lifecycle of the content before it is placed onto the protected system – it might go through many iterations of refinement on a user’s desktop and be exchanged via email with various internal departments. At any time, that unprotected content might be leaked if a laptop or phone is stolen or compromised, or if an errant destination email address is typed.

According to Gartner’s Magic Quadrant for Enterprise Data Loss Prevention (DLP), the DLP market is defined as those technologies that, as a core function, provide remediation for data loss based on both content inspection and contextual analysis of data. In other words, a system or software that can interpret the content of a file (or email) and apply a set of rules based on how the content is being used (prevent everyone except authorized users from opening a file, prevent a file from being sent outside the organization, etc.). The interpretation and rule sets would be applied to data:

  • At rest on premises, or in cloud applications and cloud storage
  • In motion over the network
  • In use on a managed endpoint device

In its simplest form, a DLP solution provides the ability to assign a classification to data, which can then be matched against rules that apply some action to the data while it is at rest, in use, or in motion. For instance, a policy may require that data at rest be encrypted and decrypted only by someone with the correct entitlement, regardless of where the data resides – on a network share or on your desktop. The classification of the data resides within the file’s own metadata; therefore, enforcement of the policy follows the file! Agents on the network use policies to determine whether the data can be shared from one user to another, internally or externally, possibly even by time of day. Further enhancements to DLP include anomaly-based triggers that understand what is normal and what stands out as different, making deviations actionable.
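To make the classification-and-rules idea concrete, here is a minimal sketch of the matching logic described above. This is a toy illustration only – the labels, rule table, and default-deny action are invented for the example and do not reflect any vendor’s actual policy engine.

```python
# Toy DLP policy engine: match a data classification plus its state
# (at rest, in use, in motion) against a rule table and return the
# action a policy would apply. All names here are hypothetical.

DLP_RULES = [
    # (classification, data state, action)
    ("confidential", "at_rest", "encrypt"),
    ("confidential", "in_motion", "block_external_send"),
    ("internal", "in_motion", "allow_internal_only"),
    ("public", "in_motion", "allow"),
]

def evaluate(classification: str, state: str) -> str:
    """Return the first matching action; default-deny if nothing matches."""
    for rule_class, rule_state, action in DLP_RULES:
        if rule_class == classification and rule_state == state:
            return action
    return "block"  # default-deny posture for unclassified situations

print(evaluate("confidential", "at_rest"))  # encrypt
print(evaluate("public", "in_motion"))      # allow
print(evaluate("secret", "in_use"))         # block (no matching rule)
```

Because the classification travels in the file’s metadata, the same lookup can be enforced wherever the file lands, which is the point made above about policy following the file.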

In the warehouse example, these controls would have enabled the facility to truly ensure the artifact was “very safe”:

  • Encrypt the data at rest. (Seal the artifact in an unbreakable box inside the crate which only an authorized person can open.)
  • Protect the data in use. (Ensure the artifact could not be read, modified, moved, deleted, etc. by an unauthorized person.)
  • Protect the data in motion by sanitizing/redacting the sensitive data and enforcing strict send/receive rules. (Magically remove the artifact from the crate without anyone noticing, or prevent the artifact from being sent in the first place.)

Finally, I’d like to address Indy’s question regarding safety. We should also ask, “Safe from whom?” when considering data. Many threats to our networks today are internal – whether intentional or not. How many times have you typed the first two or three letters of someone’s email address and then hit tab or clicked the dropdown to complete it? Mistakes happen; however, a DLP solution can help prevent data loss and leakage outside of your network.

In conclusion, I hope the above scenario helped you to understand what a DLP solution is and how it can help your organization. The ability to classify data and implement rule sets based on classification adds another layer of defense and further protects a company’s intellectual property.

Network Visibility is A Security Must-Have 

Given the trends of massive growth in bandwidth, deployment of large enterprise data centers, and significant adoption of cloud-based applications, enterprise networks have become more challenging to secure and manage. Organizations now rely on the network for critical business functions, including remote access to data center and cloud-based applications. This increased reliance on the network has opened new avenues for cyberattacks. 

Hackers and cybercriminals are exploiting this increased network reliance to threaten most (if not all) organizations with operational disruption and data loss. The exponential growth in the network footprint, the shift in traffic patterns within the data center, and the sophistication of cyberattacks are breaking traditional network security designs – making it nearly impossible to secure the network perimeter alone. Worse, many security breaches are hard to detect, making remediation problematic. Security products (e.g. firewalls) are increasingly burdened with too many connections and too much traffic, resulting in significant impacts on performance. Visibility into who is accessing the network and the ability to identify anomalous traffic is essential to detecting cyberattacks and addressing security problems. 

If, Not When? Think Again 

The resulting growth in the frequency of malicious attacks has shifted the security landscape. IT security must now be designed to defend systems for when you are attacked, not if you will be attacked. The ramifications of successful attacks and the related adverse commercial impact have never been greater, so this shift in thinking is critical. The emphasis of security is now on detection, containment, and fast remediation. 

As a transit point for all information exchange, the network is the critical point for identifying cyberattacks. Network monitoring systems are key to detecting anomalies and variances compared to normal traffic flows. As networks grow, the amount of traffic can quickly exceed the capabilities of security devices and related management tools. High performance, intelligent network monitoring systems capture packets (the building blocks of information), apply pattern matching, and can then send relevant traffic or metadata to the appropriate security appliances. Thus, monitoring can help match (limited) security performance with rapidly growing network performance. The visibility provided by network monitoring enables rapid decision-making in real time in response to threats, before they have time to affect the entire infrastructure. 
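The filter-and-forward role described above can be sketched in a few lines. This is purely conceptual – the packet fields, port list, and size threshold are invented for illustration, not the behavior of any real packet broker product.

```python
# Conceptual sketch of a network packet broker's core job: inspect a
# high-volume packet stream, apply simple pattern matching, and forward
# only lightweight metadata for relevant traffic to a security tool,
# instead of mirroring the entire stream. All values are hypothetical.

SUSPICIOUS_PORTS = {23, 445, 3389}  # e.g. Telnet, SMB, RDP
LARGE_FLOW_BYTES = 1_000_000        # arbitrary example threshold

def broker(packets):
    """Yield metadata records for packets matching the filter rules."""
    for pkt in packets:
        if pkt["dst_port"] in SUSPICIOUS_PORTS or pkt["bytes"] > LARGE_FLOW_BYTES:
            yield {"src": pkt["src"], "dst_port": pkt["dst_port"], "bytes": pkt["bytes"]}

stream = [
    {"src": "10.0.0.5", "dst_port": 443, "bytes": 1200},       # normal HTTPS
    {"src": "10.0.0.9", "dst_port": 445, "bytes": 600},        # SMB - flagged
    {"src": "10.0.0.7", "dst_port": 80, "bytes": 2_000_000},   # oversized flow
]
forwarded = list(broker(stream))
print(len(forwarded))  # 2: only the SMB packet and the oversized flow
```

The security tool then sees two small metadata records rather than the whole stream, which is how monitoring helps match limited tool performance with rapidly growing network throughput.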

How Network Monitoring Tools Can Benefit Your Organization 

Using real-time network traffic is the best way to gain visibility into your IT infrastructure. To meet the monitoring and security needs of a modern network, a modern Network Packet Broker (NPB) is required. 

Monitoring throughout the network is critical to enable network visibility and dynamically mitigate security threats. Network monitoring tools must filter and aggregate huge traffic flows, isolate bad traffic on-demand to improve security, and ensure compliance. The specific benefits of enhanced network-wide visibility include: 

  • Rapid detection and resolution of application and network security breaches 
  • Improved network performance and reduced latency 
  • Ability to dynamically isolate bad traffic 
  • Improved tool utilization and performance  

The introduction of low-cost white box switches combined with SDN software has significantly improved the performance and manageability of pervasive network monitoring while lowering its cost. IT organizations can use these advances in monitoring performance and capacity to improve application performance and end-user satisfaction, and to identify security challenges. 

Secure Email Gateways: Perimeter Security for the Mailbox

In the business world, roughly 90 trillion emails are sent each year. According to Kaspersky research, over 57% of those emails are classified as spam and malware. This makes email a very real threat vector that can infiltrate a user’s endpoint, branch offices, and even the data center. It’s as important as ever to secure employee and user inboxes, and there have been continual advancements in security products that identify threats and safeguard users against them. 

Secure Email Gateways (SEGs) are one of the many product categories used to mitigate email threats. SEGs act as perimeter security for both incoming and outgoing email, ensuring that threats from outside an organization can be flagged and quarantined before reaching a user’s mailbox, and preventing the spread of malicious content from a compromised endpoint. 

For personal accounts, many popular email providers have some form of SEG implemented that will safeguard individual mailboxes. The business world, however, looks to specialized solutions that will provide a higher level of flexibility and effectiveness to meet the needs of an organization while safeguarding employee mailboxes from compromising attacks. 

SEGs for inbound email security work by positioning themselves in front of an organization’s email server, typically by being listed as the destination in the domain’s MX records. The system then scans each email that comes in before determining whether to block a message as spam or deliver it to an employee’s mailbox as legitimate email. This helps defend against external threats such as compromised accounts at partner organizations that normally communicate with the protected business, or botnets used to send out spam. 
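The inbound decision flow can be illustrated with a simple scoring model. Real gateways use far richer signals (sender reputation, content hashing, sandboxing), so treat the checks, weights, and threshold below as invented for the example.

```python
# Toy model of an inbound SEG decision: score a message against a few
# simple checks and quarantine anything at or above a threshold.
# Blocklist entries, phrases, and weights are all hypothetical.

BLOCKLISTED_SENDERS = {"botnet.example.net"}
SPAM_PHRASES = ("wire transfer urgently", "claim your prize")

def score_message(sender_domain: str, subject: str, has_macro_attachment: bool) -> int:
    """Accumulate a spam/threat score from independent signals."""
    score = 0
    if sender_domain in BLOCKLISTED_SENDERS:
        score += 10   # known-bad sending infrastructure
    if any(p in subject.lower() for p in SPAM_PHRASES):
        score += 5    # suspicious subject line
    if has_macro_attachment:
        score += 5    # macro-bearing attachments are a common lure
    return score

def disposition(score: int, threshold: int = 5) -> str:
    """Quarantine at or above the threshold; otherwise deliver."""
    return "quarantine" if score >= threshold else "deliver"

print(disposition(score_message("partner.example.com", "Q3 report", False)))
print(disposition(score_message("botnet.example.net", "Claim your prize!", False)))
```

A legitimate partner message scores zero and is delivered; the blocklisted sender with a spammy subject is quarantined before it ever reaches a mailbox.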

SEGs for outbound email security work similarly, except they mitigate email threats originating from within the organization. An IT administrator can configure the SEG to route all outbound email, whether addressed to people inside or outside the organization, through its system to apply the same email security scanning and block potential threats. 

Security vendors have built solutions that work with both a business’s on-premises email infrastructure and enterprise cloud email solutions such as Microsoft Office 365. At Myriad, we focus on vendors that have moved beyond standard email scanning and use more efficient pattern matching and hashing to balance security and speed to the mailbox. We also look for improvements in granularity based on an organization’s needs, placing enhanced security and safeguards on more at-risk departments or users, such as an organization’s finance department. 

While SEGs are not the only solution in the realm of email security, they are a very effective one, providing security at the perimeter of an organization’s email infrastructure. As new and inventive ways are developed to compromise users through their mailboxes, SEGs are constantly evolving to take on these changing threats and will continue to provide a layer of security on the de facto standard of electronic communication between businesses and users on the Internet in general. At Myriad, we’ll work to find, evaluate, recommend, and implement the best SEG to secure the perimeter of your business and help keep you ahead of the curve. 

UCaaS vs. UC: Keeping Up in The Modern World

Unified Communications (“UC”) and Unified Communications as a Service (“UCaaS”) are terms that have been widely overused in the tech industry, leading to confusion in the marketplace around what the terms mean, how they differ, and the benefits they bring to an organization.   

Unified Communications refers to premises-based systems which provide the integration of real-time communication tools such as IP telephony, conferencing/collaboration, instant messaging & presence (IM/P), email with voicemail (UM), and additional communication applications (e.g. contact center, SMS, MMS, etc.) into a single cohesive solution.  Unified Communications as a Service provides the same functions as its premises-based compadre, but in a cloud-based delivery model.   

Just confusing enough, right?  Let’s remove the technology aspect and compare it to how we watch movies…  

A long time ago in a galaxy far, far away, when people wanted to have a movie night, they would have to go to Blockbuster to rent or buy the VHS.  This required time to go to and from the store, a VHS player at home to play the movie, and a cost tied to each rental.  Archaic, right?  Thankfully, technology didn’t stay stagnant, and we moved from VHS to a better technology: DVDs! But wait—that disc won’t play on the machine we purchased not too long ago that still works fine… Well, if you wanted to keep up with the times, you had to buy a new machine.  Then we moved from DVD to Blu-ray, and people thought, “What do you mean this disc isn’t compatible? It looks and feels the same!” It became time to shell out more money on another machine, which may even have created the need for a new entertainment system to hold all three.   

Let’s fast-forward our way out of the Stone Age to modern day.  We have grown accustomed to our movie nights or binge-watching our favorite TV series being nothing more than a few clicks of a button away.  We don’t feel obligated to sit through 100 minutes of pain watching Caddyshack II because we paid to rent it and it’s due back tomorrow; we pay a flat monthly fee to gain access to endless options and the luxury of being able to pick a show back up from where we left off a month ago.  The physical machine no longer dictates what we can/can’t watch, nor does the format of the movie necessitate ongoing expenses; our only “worry” is how big of a screen we can watch it on. 

Believe it or not, this isn’t one long advertisement for “Netflix and Chill”—the necessity of constantly adapting in order to consume and utilize the best technology out there is just as relevant now as it was thirty years ago. When it comes to UCaaS vs. UC, it boils down to the consumption model and which one makes the most sense for an organization.  We live in a world where the only constant is change, and the pace of change is increasing by the minute.  By adopting cloud-based UCaaS solutions, organizations are able to increase their agility and keep up with advancements in technology as they come, removing the constraints that legacy, on-premises systems have imposed for years.  If you don’t agree, we respect your opinion and would love to discuss further—can we meet at the local Blockbuster next week? 

Ransomware is A Real Threat

Within hours of the outbreak of WannaCry, it had infected more than 230,000 computers in over 150 countries. We’ve all heard stories describing the severe impacts organizations face after being targeted by ransomware. Ransomware is a big business for organized crime rings and, as some suspect, even state-sponsored agencies. According to industry experts, ransomware extorted an estimated $1B in 2016. Money, personal files, and data aren’t all that’s at risk—when a hospital gets infected by ransomware and is unable to treat patients, it can be a life or death situation.  

What are the effects of ransomware? 

Beyond extortion (paying a ransom to regain access to infected systems), viruses and malware can impact productivity and lead to lost revenue from outages or employees losing access to systems. This highlights the importance of backups, both at home and in the business. Imagine losing all your precious family photos if your home computer gets infected. Now imagine those photos are an organization’s financials, client information, or patient data. That collateral is a big deal to your business, not just for productivity: the damage to your company’s reputation can have financial consequences that outlast the settlement from a lawsuit or fines imposed by governing bodies.  

How does malware infect an organization? 

Companies face the challenge of securing their resources against malware and viruses coming from different attack angles called “threat vectors.” Attacks can infiltrate an organization via websites, email, network, and remote/mobile workers. Sometimes an attack can be caused by something as seemingly innocuous as users bringing in a laptop from home or plugging a USB drive into the corporate network.  To prevent attacks from these different vectors, it’s necessary to secure internal resources from the inside out and the outside in. The question becomes: how do you protect your network from your own employees? 

What can an organization do to prevent attacks? 

Malware and viruses are constantly evolving, so there isn’t a sole, one-step solution that protects everything. Without layers of security, segmentation, authentication, blocking, visibility, and alerting, any user can gain access to the network and malware can spread from device to device without anyone knowing. For these reasons, automated blocking and a rapid time-to-detection are important. 

There are many ways an organization can be alerted to a security problem before it’s too late. The key is identifying what the threat vectors are and having a plan in place to address them. A simple first layer of defense is DNS protection, which automates the blocking of known malicious links and websites reached via the web or email, and can be added to a network with little effort. This service is offered as a cloud-based subscription with 1-, 3-, and 5-year licenses. 
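The core mechanism of DNS-layer protection is easy to sketch: refuse to resolve domains on a known-bad list so the connection never happens. The domains, sinkhole address, and stubbed upstream resolver below are invented for illustration and do not represent any specific DNS protection service.

```python
# Simplified sketch of DNS-layer blocking: known-bad domains resolve to
# a sinkhole address, so the endpoint never reaches the malicious host.
# All domains and addresses here are hypothetical examples.

MALICIOUS_DOMAINS = {"malware-drop.example", "phish-login.example"}
SINKHOLE_IP = "0.0.0.0"  # non-routable answer returned for blocked names

def resolve(domain: str, upstream: dict) -> str:
    """Sinkhole known-bad domains; otherwise ask the (stubbed) upstream."""
    if domain.lower() in MALICIOUS_DOMAINS:
        return SINKHOLE_IP
    return upstream.get(domain, "NXDOMAIN")

upstream_stub = {"intranet.example": "10.1.2.3"}  # stand-in for a real resolver
print(resolve("intranet.example", upstream_stub))      # 10.1.2.3
print(resolve("malware-drop.example", upstream_stub))  # 0.0.0.0 (blocked)
```

Because the check happens at name resolution, it works regardless of whether the malicious link arrived via the web, email, or any other channel – which is what makes it such an easy first layer to deploy.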

Ultimately, a layered approach is the best defense. Firewalls, email protection, DNS protection, and endpoint protection are all great tools for visibility and alerting. Many use user behavior analytics and artificial intelligence (AI) to flag zero-day threats. A zero-day threat is a new piece of malware or a virus for which firewalls don’t yet have definitions or signatures to identify and block it – the name “zero day” reflects that defenders have had zero days to prepare before the threat is first recognized. A layered approach which includes regular updates to your computers, intelligence to uncover current and emerging threats, visibility across all devices and ports (anywhere), and the power to block phishing, malware, and ransomware early on is critical, as it makes the job of the cybercriminals targeting your company and network harder and the economics less attractive.