Everything in network security design is a trade-off. The operational choices made during network security design and deployment carry significant cost implications, which ties those decisions to the budget. One noteworthy principle: the value of the asset being protected must be balanced against the cost of designing and deploying the network security that guards it.
Looking back over the last 20 years, the network security landscape saw only occasional upheavals, because few cyber-attacks had a direct financial impact on companies. More recently, however, reputational damage and privacy concerns have affected consumers' willingness to engage online, and incidents involving ransomware families such as Maze have caused financial losses, including losses from the shutdown of operations.
Here, let us revisit the fundamentals of network security and try to understand which parts of the security architecture a business must leverage to make proactive moves, and where the greatest risk lies.
Understanding the Fundamentals of Network Security Design
The fundamental concept of network security design, which also forms the foundation of an organization's security architecture, covers four key areas: physical security, access controls, authentication, and accountability. Every function of network security emerges from the need to take actionable measures in these areas in order to protect the network infrastructure from unauthorized access, destruction, and disclosure.
The six fundamental functions of network security are:
Network Segmentation
Network segmentation (also called isolation or compartmentalization) is based on the idea of restricting the effects of an incident to a limited set of assets. Traditional approaches to network segmentation rely on policies built around Access Control Lists (ACLs), internal firewalls, and Virtual Local Area Network (VLAN) configurations.
Such implementations help slow the spread of a cyber-attack and confine the damage to a specific section of the network. However, not only are these implementations complicated and costly, they are also not fully effective against every threat.
For example, many companies assume that placing different VLANs in different CIDR blocks gives them optimal segmentation, but VLANs are rarely fully isolated from each other. Even if you dedicate VLAN 1 to a specific protocol that you can define by group, such as Unidirectional Link Detection (UDLD), there is a good chance that VLAN 1 still carries multiple subnets.
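The core segmentation question, "does this flow stay inside one segment or cross a boundary?", can be sketched with the standard library. The segment names and CIDR blocks below are hypothetical, chosen only to illustrate the check:

```python
import ipaddress

# Hypothetical segment plan: one CIDR block per VLAN.
SEGMENTS = {
    "VLAN10-users":   ipaddress.ip_network("10.10.0.0/24"),
    "VLAN20-servers": ipaddress.ip_network("10.20.0.0/24"),
}

def segment_of(ip: str):
    """Return the name of the segment an address belongs to, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def crosses_boundary(src: str, dst: str) -> bool:
    """A flow crosses a segmentation boundary when its endpoints sit in
    different segments -- exactly the traffic a segmentation policy
    (ACL, internal firewall) must inspect or block."""
    return segment_of(src) != segment_of(dst)
```

This is only the addressing-level view; as noted above, sharing a CIDR plan does not by itself guarantee isolation at the VLAN layer.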
Security Policy Enforcement
Development and enforcement of security policies and controls enable organizations to detect hostile behavior and chronic or accidental policy abusers. The task of policy enforcement falls into three segments:
- The first is to analyze the system design and develop policy specifications based on key considerations such as the hardware and software technologies in use and the threat landscape.
- The second is to detect violations. When the policy provisions are clearly specified, it becomes easier for network administrators to spot unspecified behavior, assess incidents in real time, and keep a record of such incidents for use as evidence.
- The third is to take action against the violation and the violator, subject to the local security policy (or audit policy) that defines the control and authentication practices.
Multiple elements need to be covered when developing the policy, such as Access Control Lists (ACLs), firewall rules, application control, and file-blocking policies. Today, enterprises depend extensively on applications for their day-to-day operations. According to a Cloud Security Alliance report, an enterprise uses around 1,000 applications, and every application has its own security risk touchpoints, which simply means more options for hackers to exploit a vulnerability and gain access to the targeted system.
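To make the ACL/firewall-rule element concrete, here is a minimal sketch of first-match rule evaluation, the way a typical ACL is processed. The `Rule` type and the sample policy are illustrative assumptions, not any vendor's format:

```python
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source CIDR the rule applies to
    dst_port: Optional[int]  # None matches any destination port

def evaluate(rules, src_ip, dst_port, default="deny"):
    """First-match evaluation: the first rule whose source network and
    port both match decides the action; if nothing matches, fall back
    to an implicit default deny."""
    addr = ipaddress.ip_address(src_ip)
    for rule in rules:
        if addr in ipaddress.ip_network(rule.src) and \
           (rule.dst_port is None or rule.dst_port == dst_port):
            return rule.action
    return default

# Hypothetical policy: internal hosts may reach HTTPS; everything else is denied.
POLICY = [
    Rule("allow", "10.0.0.0/8", 443),
    Rule("deny",  "0.0.0.0/0", None),
]
```

The default-deny fallback mirrors the "detect unspecified behavior" idea above: anything the policy does not explicitly specify is treated as a violation.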
Another way into the internal corporate network is to transport malware through email, USB keys, and web downloads. Hackers also leverage programming flaws in web applications to manipulate back-end systems, either to gain access or to extract information, such as credit card numbers, that is otherwise unavailable through the front door.
Having a security policy that covers these elements is essential to keeping the network infrastructure safe: with every new technology, it is not just the defenses that advance but the hacking tactics as well.
Identity and Trust
This function of network security is about determining the level of trust for an entity before granting it access to a system. A simple analogy is an organization that issues identity cards to its employees: when you show your identity card at the door, the authority verifies the card's authenticity to determine your identity and grants you access to the premises. The concept of identity and trust in a network security solution is based on a similar system.
There are two crucial aspects of identity and trust that are usually applied to secure the network infrastructure: direct trust and third-party trust.
Direct trust security model: This model establishes a secure communication channel between two trusted parties, where one entity performs all authentication for the other. Since there is no delegation, all validations are done directly by the Certificate Authority (CA). Implementing a direct trust security model is both labor-intensive and costly; it is nevertheless widely used by financial institutions, including banks and insurance companies.
Third-party trust model: Here the trust relationship between two entities rests on a common relationship with a third party that vouches for the trustworthiness of each. A simple analogy: a bank acts as the third party between two users, a customer and a shopping application.
In a large-scale network there are countless users who share no personal relationship; in that case, public-key cryptography is made accessible to all users. A user trusts a public key because it is made available, and vouched for, by their organization.
To build a robust identity and trust architecture, organizations use multi-factor authentication, in which users must present identity proofs from at least two categories to gain access, such as something they know (a password or PIN) and something they have (a hardware token or a one-time password, OTP). Digital certificates and trusted IP addresses are further factors that can be verified to build trust mechanisms.
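The OTP factor mentioned above is commonly generated with a time-based one-time password (TOTP) algorithm in the style of RFC 6238. A minimal standard-library sketch follows; it is for illustration only, and a production system should use a vetted authentication library rather than hand-rolled code:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret: bytes, for_time: Optional[int] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based OTP (sketch, SHA-1 variant)."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)           # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both parties (the user's token and the server) compute the same code from a shared secret and the current time window, so the server can verify possession of the token without the secret ever crossing the wire.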
Security Visibility
To see what is happening in your security architecture, it is essential to build capability around security visibility. That visibility must span the entire network infrastructure and is achieved through advanced analytical tools.
Here is a list of things you need visibility into:
- Logging details: Logs are the element that establishes user accountability. Your network infrastructure contains a range of elements that produce logs, such as routers, switches, and databases. Logging details can help you spot unauthorized behavior or identify a potential threat.
- Traffic: Visibility into unidirectional traffic streams, such as source and destination IP addresses, protocol, and source and destination ports, provides insight into what is actually traversing the network.
- DNS: Monitoring recursive DNS queries, such as queries to unfamiliar domains or anomalous changes in DNS activity, can highlight outbound or inbound attacks, including DNS data leaks, compromised machines, malware, and DDoS activity.
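As a small illustration of the DNS point, one simple anomaly signal is a client querying an unusually large number of distinct domains, a pattern typical of DNS tunnelling or malware using domain-generation algorithms. The threshold below is an illustrative assumption that would be tuned against a baseline of normal behavior:

```python
from collections import defaultdict

def flag_dns_anomalies(queries, unique_domain_threshold=100):
    """queries: iterable of (client_ip, queried_domain) pairs taken from
    recursive-resolver logs. Returns the clients whose count of distinct
    queried domains meets the (illustrative) threshold."""
    seen = defaultdict(set)
    for client, domain in queries:
        seen[client].add(domain)
    return sorted(client for client, domains in seen.items()
                  if len(domains) >= unique_domain_threshold)
```

In practice this kind of aggregation is done by the analytics or SIEM layer, but the underlying logic is no more than counting distinct domains per source.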
The objective of visibility is to analyze the communication between two systems.
Correlation and Context
Visibility gives you information about incidents or changes in traffic patterns, but unless you add context to those events, the information is of little help in taking meaningful action. For example, a number of computer systems suddenly starting to communicate at the same time can be an indication of botnet traffic.
Botnets are used to run click-fraud campaigns, collect ransoms, send spam, or steal data. Similarly, two servers that have never communicated before suddenly talking to each other can indicate the presence of malware.
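The "two servers that never talked before" signal reduces to comparing observed flows against a historical baseline. A minimal sketch, with hypothetical host names:

```python
def first_time_pairs(baseline_flows, observed_flows):
    """Return host pairs seen communicating now that never appeared in
    the historical baseline. Flows are (host_a, host_b) tuples;
    direction is ignored for the comparison."""
    normalise = lambda pair: tuple(sorted(pair))
    known = {normalise(p) for p in baseline_flows}
    return sorted({normalise(p) for p in observed_flows} - known)
```

The output is the context-enriched event a SIEM would raise: not "traffic happened" but "this pair of hosts has no history of talking to each other."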
Contextualization applies to every component of the network security design and requires visibility into the security applications for in-depth analysis. This is not possible without the support of advanced Security Information and Event Management (SIEM) technologies, which provide real-time monitoring, event notifications, and other details derived from logging data, and which come with their own cost impact.
Resilience
Building resilience is necessary because absolute cybersecurity is impossible, regardless of how advanced the tools or how well-drafted the policies. The time has come for companies to think differently and adopt a proactive approach: instead of acting only once security has been breached, design a network infrastructure on the assumption that it will be breached.
To make the network resilient, a High-Availability (HA) pair of security devices is deployed as primary and secondary units: two identical firewalls configured for consistent connectivity and increased protection, so that if the primary unit fails, the secondary unit takes over.
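The failover logic of such an active/passive pair can be modeled as a heartbeat watchdog. This toy sketch only captures the takeover decision; real HA pairs also synchronize sessions and configuration between the units:

```python
class HAPair:
    """Toy model of active/passive failover: the secondary watches the
    primary's heartbeats and takes over after `max_missed` consecutive
    misses. Preemptive failback returns control once the primary recovers."""

    def __init__(self, max_missed: int = 3):
        self.max_missed = max_missed
        self.missed = 0
        self.active = "primary"

    def tick(self, primary_alive: bool) -> str:
        """Process one heartbeat interval; return the currently active unit."""
        if primary_alive:
            self.missed = 0
            self.active = "primary"
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                self.active = "secondary"
        return self.active
```

Requiring several consecutive missed heartbeats before failing over avoids flapping on a single lost probe, at the cost of a slightly longer outage window.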
Another way to build resilience is to design a network architecture that can withstand a DoS/DDoS attack, for example by increasing the bandwidth available to your servers.
For example, if an IP address suddenly starts receiving heavy traffic that threatens to overwhelm the system, but the attack is smaller than the bandwidth of the link between you and your ISP(s), you gain more time to mitigate the risk. Some firewalls today also ship with DoS/DDoS mitigation features such as SYN cookies, but their performance profiles must be carefully assessed before opting for one.
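One building block behind many flood-shedding features is a rate limiter. The token-bucket sketch below is a generic illustration of the mechanism, not any particular firewall's implementation; time is passed in explicitly to keep it deterministic:

```python
class TokenBucket:
    """Classic token-bucket rate limiter: `rate` tokens refill per second
    up to a `burst` ceiling, and each admitted request spends one token.
    Traffic beyond the sustained rate plus burst allowance is dropped."""

    def __init__(self, rate: float, burst: int, start: float = 0.0):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = start

    def allow(self, now: float) -> bool:
        """Admit one request at timestamp `now` (seconds) if a token is free."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Applied per source address, this bounds how much any one sender can consume, which is the same trade-off the article describes: legitimate bursts pass, sustained floods are shed.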
As you can see, at the enterprise level there are multiple factors to consider when developing a network security design and deployment strategy, each with a cost attached. Lately, many firewall vendors have come out with products that promise high security for a few thousand dollars, but in a budget-constrained environment this demands a more focused weighing of the fundamentals, in order to identify the most critical ones and design the security systems around them.