I took a new position, and along with life during Covid, I haven’t had much time for studying new topics. Until now.
A lot of ambiguity exists around how to properly manage vulnerabilities. A vulnerability management program encompasses multiple teams, tools, and processes with the common goal of securing and maintaining the security of the environment. A proper vulnerability management program also allows for management and measurement through metrics, which mature programs regularly produce.
Before performing vulnerability scans, a few things must be considered.
What should be scanned, and how should different segments be treated? Is one zone in scope for PCI and another for HITRUST, while the dev network is out of scope entirely?
Once the scope is identified, we need to maintain a valid asset inventory. Keeping an asset inventory is not a fun or cool thing to do, but it is important and often overlooked.
Another often-overlooked benefit of an up-to-date asset inventory is the ability to utilize outside sources of alerts. For example, if I know I have a ton of Apache web servers, I can set alerts from multiple sources to notify me or the team of any new vulnerabilities matching the criteria I define.
Once assets are discovered, we need to confirm custodianship/ownership: who is patching them?
After assets are known and owners are identified, we need to assign each asset a criticality rating based on its business function.
The asset, asset owner/custodian, and criticality rating of each asset should be included in the inventory along with the location, name, serial number, important software, OS and versions, and other relevant information.
Now the fun stuff…
Identifying Vulnerabilities (Detection Phase)
A vulnerability scan is a combination of automated or manual tools, techniques, and/or methods run against external and internal network devices and servers, designed to expose potential vulnerabilities in networks that could be exploited by malicious individuals. Once these weaknesses are identified, the entity should focus on remediating the deficiencies discovered and then repeat the scan to verify the vulnerabilities have been corrected.
A vulnerability scan should be performed against both the internal network and the external footprint of an organization. The frequency of the scans can vary depending upon compliance regulations, but a suggested practice is to scan at least quarterly and after any major changes to your environment.
PCI requires quarterly external scans performed by an Approved Scanning Vendor (ASV), quarterly internal scans, and an annual penetration test.
HITRUST requires quarterly vulnerability scanning as well.
Evaluation of Vulnerabilities (Prioritizing of Risk)
On a typical vulnerability scan, findings will be rated critical, high, medium, or low.
These ratings are tied to Common Vulnerabilities and Exposures (CVE) entries and the severity scores assigned to them.
The score attached to a CVE is used for prioritizing vulnerabilities. The CVE list is a project dedicated to tracking and documenting vulnerabilities in software and hardware. Since 1999, the MITRE Corporation has maintained the CVE system, which is funded by the National Cyber Security Division of the US Department of Homeland Security.
When a security researcher discovers a new vulnerability, it is evaluated and assigned an identifier by a CVE Numbering Authority. Once independent researchers confirm the vulnerability, it is entered into the NIST National Vulnerability Database (NVD).
The Common Vulnerability Scoring System (CVSS) is used to determine a final score for each vulnerability.
A CVSS score has three parts: the base score, the temporal score, and the environmental score.
Knowing the criticality rating of each system helps judge the risk associated with a vulnerability, considering both the vulnerability rating and the system rating. The CVSS environmental score helps determine a vulnerability’s risk specifically against your organization, while the temporal score is based on factors that change over time, such as the availability of patches or fixes.
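As a rough illustration of what feeds the base score, here is a minimal sketch that splits a CVSS v3.1 vector string into its base metrics. The vector shown is hypothetical and not taken from any real CVE.

```python
# Minimal sketch: split a CVSS v3.1 vector string into its base metrics.
# The vector below is illustrative, not tied to a real CVE.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

metric_names = {
    "AV": "Attack Vector",
    "AC": "Attack Complexity",
    "PR": "Privileges Required",
    "UI": "User Interaction",
    "S": "Scope",
    "C": "Confidentiality Impact",
    "I": "Integrity Impact",
    "A": "Availability Impact",
}

parts = vector.split("/")[1:]                     # drop the "CVSS:3.1" prefix
metrics = dict(part.split(":") for part in parts)
for key, value in metrics.items():
    print(f"{metric_names.get(key, key)}: {value}")
```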
A high vulnerability on a system that is not externally exposed and requires internal network access before it can be exploited will be rated lower than the same high vulnerability found on an externally exposed web server. This is a simple example, but real situations will be similar, and using a defined methodology will help.
Remediation (Fixing the Detected Issues)
Remediation efforts should be tracked, managed, and reviewed. Ideally a central ticketing system will be utilized, which can track remediation efforts and provide meaningful metrics to the overall vulnerability management program. Metrics from the remediation effort can include mean time to detection, mean time to remediation, average window of exposure, and the percentage of systems without critical or high vulnerabilities (a simple sketch of one of these calculations follows below).
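For example, mean time to remediation is just the average of remediation date minus detection date across your tickets. A minimal sketch, assuming hypothetical timestamps exported from a ticketing system:

```python
from datetime import datetime
from statistics import mean

# Hypothetical remediation tickets: (detected, remediated) timestamps
# pulled from a ticketing system export.
tickets = [
    (datetime(2020, 6, 1), datetime(2020, 6, 12)),
    (datetime(2020, 6, 3), datetime(2020, 6, 20)),
    (datetime(2020, 6, 10), datetime(2020, 6, 15)),
]

# Mean time to remediation, measured here from detection to fix.
mttr_days = mean((fixed - found).days for found, fixed in tickets)
print(f"Mean time to remediation: {mttr_days:.1f} days")
```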
A vulnerability management program should consist of a team of individuals including IT, Security, and a management-level IT or Security person. The idea is to include the IT staff who will most likely perform remediation and have the effort managed and overseen by Security. IT or Security management should be included to provide buy-in and to help with any approvals required.
A vulnerability management program provides an organization with the data needed to properly manage and measure its IT infrastructure.
A mature vulnerability program will encompass not only vulnerability scanning but inventory management, a remediation process for discovered vulnerabilities, penetration testing, and risk management. A mature program will have the ability to correlate the results of previous scans to create meaningful metrics that an organization can use to review its processes for improvement and additional risk reduction.
Sorry for the long delay. It’s been the busy season.
And I’ve been in full swing of migrating to compliance and the audit team.
For the December 2020 Jawn of the Month
It’s such a go-to for me now that I don’t even think twice about installing it.
It satisfies all of my Office needs and is completely open source.
Is the Microsoft Office suite better? Yes, but this is open source.
I took most of the summer off. I’ll have some cloud articles coming up on the docket for the next few months.
For August 2020 the Jawn of the Month is the PI-HOLE
The PI-HOLE is a DNS sinkhole application built to run on a Raspberry Pi. It acts as an internal DNS server for your network and blocks a majority of ads and some malware domains at the network layer, adding another layer of defense to your home network.
I was late to the party on utilizing a PI-HOLE as my internal DNS server. I used OpenDNS previously and thought it was sufficient to block ads and add a layer of protection on my home network, but I was wrong. Immediately after implementing the PI-HOLE, I could see that the usual ads which had escaped my in-browser protections were now completely blank.
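A quick way to sanity-check the blocking (a rough sketch, assuming the machine you run it on already uses the PI-HOLE as its DNS resolver and that the first domain is on your blocklist) is to resolve a known ad domain and a normal domain and compare the answers:

```python
import socket

# Domains are illustrative; blocked domains typically resolve to 0.0.0.0
# in Pi-hole's default null-blocking mode.
for domain in ["doubleclick.net", "example.com"]:
    try:
        ip = socket.gethostbyname(domain)
        status = " (blocked)" if ip == "0.0.0.0" else ""
        print(f"{domain} -> {ip}{status}")
    except socket.gaierror:
        print(f"{domain} -> no answer (possibly blocked via NXDOMAIN)")
```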
A PI-HOLE is absolutely worth implementing for your home network.
2020 is almost over…
For June 2020 the Jawn of the Month is Joplin.
Joplin is an open-source note-taking application. It is similar to OneNote and just as functional.
It has the ability to sync to a cloud service or file share.
Notes are searchable and taggable.
My friend Ian Terry recommended it to me and I thank him for that!
Cloud environments make a lot of sense for businesses of all types. As we move to a more agile workforce, utilizing cloud resources provides added functionality that was often not obtainable for small to medium businesses. Cloud resources are by nature highly available and highly scalable, and they make disaster recovery easier to implement.
In the past, for a smaller company to scale up to meet demand, it would require a huge up-front investment to acquire new hardware. That hardware would require time to set up and then configure. With the advent of cloud hosting, these operations have become much easier and follow a much more affordable subscription model instead of the previous up-front costs of hardware and licensing. I can see some situations that may still require on-site equipment for various compliance, legal, or cost requirements. Those situations will be the exception, as most businesses will benefit from the cost savings associated with cloud hosting. Additionally, as more companies utilize Linux systems for web applications and services, that will also reduce cost, as most Linux operating systems are open source and do not require any licensing fees.
One misconception about migrating to a cloud environment is that it is secure by default. I would say that is partially true but not completely. By default, AWS, for example, will utilize a deny-all policy for its security groups. Utilizing a default-deny policy is a best practice: any access to a system requires a security group (firewall) rule that specifically allows that traffic into the cloud resource. As a best practice, access should be opened up only according to need. In a cloud environment you are not responsible for the physical security of your cloud systems or the underlying network, but the security of your hosts and services is your responsibility. An application that is only utilized by your employees should limit access to those specific employees. This can be implemented by a direct connection, a VPN, a certificate to authenticate the device, or by whitelisting the specific IP addresses of the employees that require access (this can be painful if the users do not have static IPs). Whitelisting IP addresses for remote workers can be cumbersome as those addresses change, but for smaller organizations it could be feasible. If you have a web application that users should be able to access from anywhere in the world, then you’ll need to open up access to everyone for that application (a rough sketch of adding a narrow security group rule follows below).
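To make the default-deny point concrete, here is a minimal sketch using boto3 that allows HTTPS from a single whitelisted office IP into a security group. The group ID and IP address are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Add one narrow exception to an otherwise deny-all security group:
# allow HTTPS (443) only from a single static office IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "203.0.113.25/32", "Description": "Office static IP"}
            ],
        }
    ],
)
```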
Opening access to the world can be a scary concept. If your system is available over the open Internet, expect it to be tested constantly. Regardless of the service, if something exists and is accessible to everyone on the Internet, it will be discovered by crawlers; some of these will be for research purposes and others will be malicious. This is a fact of life that every organization, government, and Internet user must face. In order to protect your systems, you must implement proper access controls, secure transmissions, and permissions to limit the possibility of unauthorized access.
Making a system secure requires multiple layers of protection. A layered approach can deter an attacker, as it may be too difficult to gain entry, and it can also prevent a deeper breach or prevent an attacker from obtaining the keys to the kingdom (administrator access). Keeping an environment secure, either in the cloud or on-premises, requires the same concepts. Create a strong perimeter, either at the network layer or, if you’re following a Zero Trust model, on the host itself. That means shutting down services that aren’t required, utilizing a default deny-all rule, and allowing specific traffic into the host by exception. Add multi-factor authentication to your remote access methods to further secure your access.
After securing the perimeter, protections should be in place to limit file/network access to the user’s role. There is no need for a standard user to have privileged rights. Securing the perimeter and limiting user access are a great start for a program, but fully securing systems calls for additional techniques such as:
Centralizing Access – one location that stores all of the user information and can edit permissions/access at a moment’s notice. Changes in this system are reflected across all systems.
Centralizing Monitoring – all devices send their logs to a SIEM or syslog server, which can create metrics or trigger alerts for defined events (see the sketch after this list).
Adding Network Detection/Prevention Systems – these systems sit on a network, detect or prevent malicious activity, and send alerts based on triggers. They differ from a SIEM in that the triggers are set according to network traffic, while SIEM triggers are based on logs.
Application Firewalls – if you own an application that is exposed to the internet, you should have an application-layer firewall. These next-generation firewalls can detect and inspect application-layer traffic.
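As a small illustration of the centralized monitoring item, here is a minimal sketch of an application shipping its security events to a central syslog collector using Python’s standard library. The server hostname is a hypothetical placeholder for your SIEM or syslog server.

```python
import logging
import logging.handlers

# Hypothetical central syslog collector; in practice this would be
# your SIEM or syslog server.
SYSLOG_SERVER = ("syslog.example.internal", 514)

logger = logging.getLogger("app-security-events")
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=SYSLOG_SERVER)
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

# Example event that a SIEM rule could alert on.
logger.warning("failed login for user=%s from ip=%s", "jdoe", "198.51.100.7")
```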
To sum this rant up into something meaningful: your new and exciting cloud hosting will still require the same old boring security practices that helped keep your on-premises servers secure (mostly). As more organizations move to the cloud, they’ll need to hire staff with the skills to implement and utilize cloud features to make the cloud safe, secure, and cost-effective. Pick your vendors, contractors, and assessors wisely, as they’re not all created equal. When you talk to your third-party consultant, make sure they understand how cloud infrastructures work and function. All too often we see items a previous assessment team missed completely or simply misunderstood. Mistakes like these can go in either direction, such as providing an organization a false sense of security or requiring an organization to perform wasteful remediation for a system that meets requirements but is just poorly understood. As not all cloud-hosting providers are created equal, the same can be said about security organizations. Perform your due diligence: ask for the credentials of the team members, ask for references, and hold discussions with them to see if they’re on the level. Picking a partner to help secure your organization may be one of the most important choices you make.
April was a crappy month so we’ll just skip it…
The jawn of the month for May 2020 is Grammarly.
One area where I need to improve is my grammar and writing skills. It is something I’ve often overlooked and have not spent the time needed to improve. I wouldn’t doubt some of you have noticed my affinity for writing down rants and ramblings. Grammarly has helped turn some of those rants and ramblings into a somewhat readable format.
As a co-worker pointed out to me (if you read this and are OK with a shout-out, I will edit your name in), it wouldn’t be safe to use around sensitive information. But it can be used for any personal writing.
As out-of-office work has become more common, all businesses must address the security of their remote workforce. Early during this pandemic’s quarantine, businesses were forced to become extremely agile and required to adapt and adjust to the situation. However, now that Covid-19 is with us for the near future, we must pivot from making things work to making things work securely.
Once the pandemic eventually ends, normal business operations will resume with some resemblance of normalcy. But one thing is certain: compliance and regulatory requirements will remain, and the new remote workforce will need to be secured.
Most modern businesses had already implemented remote access methods prior to the pandemic. Still, for organizations of all sizes, adjusting to a 100% remote workforce caused unexpected consequences, and many organizations had to develop a disaster recovery/business continuity plan in real time. A common problem for medium-sized businesses was a lack of available bandwidth on the VPN appliance. Most VPN appliances are sized for fewer than 100-200 users, and once that threshold is met the device can become unstable for some or all VPN users. Utilizing a high-availability pair can provide load balancing for these types of events. In addition, a second VPN appliance, which often performs double duty as a firewall, can help increase bandwidth over the VPN. Having a duplicate VPN appliance can be costly, as you’re now purchasing two appliances instead of one, but that cost can be justified for businesses that require high uptime or are performing critical work that can’t be delayed. Scalability will not end with your VPN and may need to be addressed for other services or functions as well.
Additionally, organizations must begin to consider the level of access given to their users. One problem with VPNs is that they provide network-layer access to users, which is often unrestricted. It is commonplace for a VPN user to have access to the entire network if the network is not segmented. It is possible to limit VPN access or segment the network to avoid unauthorized access to subnets or networks that normal users wouldn’t require access to. In networking, the old philosophy of trusting traffic on the same access level is beginning to change. Zero Trust addresses this directly: all hosts are given zero trust, and in order for a user to access any system, the same authentication process is required regardless of the user’s connection origin. Whether the user is connecting to a host from a public Wi-Fi connection or is plugged into a switch right next to the host, the traffic is treated the same, requiring the same authentication process for either origin. Zero Trust utilizes systems that are often already established, such as centralized access controls, multi-factor authentication, and device authentication. A Zero Trust authentication will require a valid username/password tied to a central identity system (ideally), a multi-factor authentication token, and a certificate issued to the device at deployment. This method effectively authenticates the user, the session, and the device. Zero Trust and cloud computing work very nicely together, and I’d suggest that if you move to the cloud, that is the time to implement Zero Trust.
Cloud resources can now be utilized by businesses of all sizes and can remove the need for a VPN connection back to the physical office. The older model of IT infrastructure utilized on-premises servers that contained resources such as file shares, desktop applications, or desktop environments. This internal infrastructure required a VPN or less secure alternatives to access the internal network’s services and functions that employees need to perform their jobs. Some businesses may be required to keep their equipment on-site; however, the vast majority will have the ability to move to the cloud. The cloud offers alternatives, as you can utilize a cloud provider for file shares and web applications to replace older desktop applications. Pivoting to cloud resources can eliminate the need for a VPN and can enable a remote workforce to flourish.
If you’re considering adjusting your organization’s infrastructure placement, you must ask a series of questions about your organization.
The answers to all of these questions are most likely yes…
Once you’ve identified that you can move to the cloud, an organization must determine what services are required. Do you need a complete virtual environment for workers or just a web application? As desktop applications become more obsolete, web applications will continue to allow for more agility in our remote workforce. Previously, an organization may have had an internal application that was only accessible from an endpoint in the office or over a VPN; now a web application can be utilized instead. Access to a web application can be safe if you implement a centralized identity management system that utilizes a two-factor authentication token for verification. As the web application would be exposed over the Internet, the use of centralized identity management and two-factor authentication helps compensate and prevent unauthorized access.
One benefit of moving to a cloud environment is that disaster recovery and business continuity become much easier to execute. Spinning up a warm site or cold site used to take time and require a large effort around physical hardware. But in AWS or Azure, it can be a few mouse clicks away: replicate a server to a different Availability Zone (AZ) in AWS and/or a different region to spread it across multiple data centers within your cloud provider (a rough sketch follows below).
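As one small example of that idea, here is a minimal sketch using boto3 that copies a server image (AMI) from a primary region into a second region for disaster recovery. The AMI ID, image name, and regions are hypothetical placeholders.

```python
import boto3

# Hypothetical values: copy an AMI from us-east-1 into us-west-2 so a
# standby instance can be launched there if the primary region fails.
source_region = "us-east-1"
dr_region = "us-west-2"
source_ami = "ami-0123456789abcdef0"

ec2_dr = boto3.client("ec2", region_name=dr_region)
response = ec2_dr.copy_image(
    Name="app-server-dr-copy",
    SourceImageId=source_ami,
    SourceRegion=source_region,
)
print("DR copy started, new AMI:", response["ImageId"])
```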
Moving to the cloud does not guarantee a more or less secure environment. If you move to a cloud environment don’t forget the basics. Zero Trust is a great idea but it should be implemented along with the normal security operations such as limiting permissions for users, reviewing access, user awareness training, running frequent patching, and utilizing secure protocols for transmitting sensitive data. Often organizations look for security silver bullets and neglect the mundane tasks that will equate to a stronger security posture.
A third party is any contractor, business associate, business partner, vendor, or even volunteer who works within your organization. Working with a third party instantly creates additional risk for your organization. However, that level of risk will depend upon the third party, as not all third parties are created equal. A third-party security services firm with a positive reputation will most likely introduce less risk than a contractor hired to do marketing.
Third-party risk management has quickly been recognized as a vital task that must be taken seriously. As companies look to adopt newer technologies and become more agile, it is only reasonable to think that third parties will be utilized more frequently in the future for more and more services and tasks. Without assessing and managing third-party relationships, how can you be sure that your third party is taking your security seriously?
From a technical perspective, a third-party connection can come in various forms: for example, a web API (webhook) pulling data from one source into another system, which could inadvertently share information through web services. A more traditional third-party connection could be a remote user with VPN access or a direct connection through an application such as Citrix. Another possibility is a site-to-site VPN between corporate networks, which is always open and connected and could potentially carry the most risk. Additionally, a third party may have access to a secure file transfer (SFTP) service, which could be used to transfer and/or exchange information between companies. Each method entails a different level of access, and with that third-party access come potential risks.
A web API could be utilized to transfer PHI or other data over HTTP(S). This method is similar to a user accessing a web application to view a medical record, but instead of a user it’s an automated process that pulls the medical record or other requested information and returns the results to be incorporated or displayed by the program that made the request. An example: suppose I created a website dedicated to the price of gold. One of the main objectives of this website is to display the current price of gold. However, I don’t want to edit the website continuously to keep updating the price as it changes in real time. Instead, I could use an API that pulls the price of gold from another site of my choosing, which is then displayed on my website. The value would update in real time and would not require me to constantly update the gold price on my website (a rough sketch follows below).
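A minimal sketch of that gold-price example, assuming a hypothetical endpoint and response shape (a real site would use whatever API its data provider documents):

```python
import requests

# Hypothetical endpoint and response shape.
GOLD_API_URL = "https://api.example.com/v1/spot/gold"

def current_gold_price() -> float:
    """Fetch the current spot price from the upstream API."""
    response = requests.get(GOLD_API_URL, timeout=10)
    response.raise_for_status()
    data = response.json()          # e.g. {"metal": "XAU", "usd_per_oz": 1893.40}
    return data["usd_per_oz"]

if __name__ == "__main__":
    print(f"Current gold price: ${current_gold_price():,.2f}/oz")
```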
A remote VPN is pretty straightforward: a single user accesses a VPN to obtain network access and utilize the agreed-upon asset. Similarly, a direct access connection through a Citrix or VMware application environment can utilize a native ability to limit the contractor’s or third party’s access to only the required resources. A site-to-site VPN, by contrast, is a constantly open connection between your organization’s network and the third party’s network. Site-to-site VPNs should be avoided, as the trustworthiness of the third party can never truly be known without extensive audits. If a site-to-site VPN must be used, it is important to utilize a strict deny-all rule and allow only the specifically needed traffic through the VPN tunnel.
An SFTP connection is a common way to allow the exchange of information. A third party can be provided unique credentials and asked to supply their source IP addresses, so incoming connections can be validated and only known third parties can upload data to your file transfer service. Validating the source IP address along with a unique username/password for authentication over SFTP is a secure method. Ideally, each third party would only have permission to view the data or folders associated with their company and would only have the ability to upload data (a rough upload sketch follows below).
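For illustration, here is a minimal sketch of a third party uploading a file over SFTP using the paramiko library. The hostname, account, and paths are hypothetical, and in practice key-based authentication is preferable to a password.

```python
import paramiko

# Hypothetical host and credentials: a third party uploading a file
# into the folder provisioned for their company only.
HOST, PORT = "sftp.example.com", 22

transport = paramiko.Transport((HOST, PORT))
try:
    transport.connect(username="vendor_acme", password="prefer-a-key-instead")
    sftp = paramiko.SFTPClient.from_transport(transport)
    # The account's server-side permissions should restrict it to this directory.
    sftp.put("claims_batch.csv", "/uploads/acme/claims_batch.csv")
    sftp.close()
finally:
    transport.close()
```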
Additionally, a third party may actually host all of your data. With the strong adoption of cloud hosting, it is becoming more common for companies to offload their sensitive information to the cloud. If sensitive information is stored in the cloud, it should be detailed in a third-party risk tracker. Additionally, the vetting and continued assessment of any hosting provider should be a priority, as the wrong choice could have costly consequences.
All of these different methods have many similarities when it comes to managing third-party access. The use of third parties will only continue to increase with newer technologies and growing skill gaps between workers. If an organization properly vets, manages, and continually audits its third parties, it can effectively limit and document the risks involved. Below are some steps all organizations should follow to keep these relationships productive and secure.
The first step is to include, review, and update contract language in business associate agreements (BAAs) addressing third-party security requirements, obligations, and best practices. This is especially true for any service that hosts PHI.
Second is to track which contractors or third parties have access to what data or systems. A detailed tracker will go a long way toward documenting and understanding who has access and what level of access they possess. Additionally, for APIs, SFTP, VPNs, hosted data, and other connections, the tracker should include the IP addresses, ports, protocols, and as many other characteristics of the connection as possible.
Additionally, the technical aspects of third-party management should spin off recurring tasks that add another layer of defense. These audits can catch mistakes, anomalies, things falling through the cracks, and anything else regularly overlooked:
Verify unique IDs are in place for all users
Verify the appropriate level of privileges is set for all users
Test the monitoring to ensure it is in place and working properly, especially for third-party users
Verify secure protocols are implemented for all transmission of sensitive information
Verify that any PHI stored is encrypted while at rest
Review connection agreements to verify they match the agreed requirements
When it comes to securing your third-party relationships, follow the same principles as the rest of your security practices. Do the basic things: track the third-party users and connections, detail the level of access, implement strong access controls, and verify your monitoring is working properly. Also, make sure only secure protocols are used for any data exchanges, and always audit and review. Finally, thoroughly assessing and reviewing your third parties can help ensure the continued security of your sensitive infrastructure. It is better to discover the possible can of worms now, before you find yourself making a headline for the wrong reasons.
For February 2020 the Jawn of the month is…
NMAP is such a powerful tool that it can be used for multiple functions, including pen testing, network mapping, vulnerability scanning, and more (a small sketch of invoking it follows below).
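As a minimal sketch (assuming nmap is installed and you are authorized to scan the hypothetical target address), here is one way to wrap a basic service and vulnerability-script scan from Python:

```python
import subprocess

# Hypothetical target on a network you are authorized to scan.
target = "192.168.1.10"

# -sV: probe open ports for service/version info
# -p 1-1024: limit the scan to the first 1024 TCP ports
# --script vuln: run the NSE vulnerability-detection scripts
result = subprocess.run(
    ["nmap", "-sV", "-p", "1-1024", "--script", "vuln", target],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```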