Why Encryption Is Good

Privacy is a big topic, and it ties into encryption.

Our data, from the news sites we visit to our social media and purchases, is tracked and cataloged. This data is sometimes compiled from different sources to create profiles for each user. 

Most of the time, this information is collected without our explicit consent; sometimes, though, we hand it over voluntarily even when it isn’t required.

I know what you’re thinking… Maybe I can use a VPN to block some of this tracking, use a filtering DNS service, or use similar protection. It could help a bit, but tracking is now so sophisticated that I’m unsure how much it helps. Your credit card transactions will work against you, and the number of accounts and services required to do business on the Internet will likely create a trail that is hard to shake off.

With every data breach, more of our data is leaked: name, address,  phone number, age, gender, and email.  Other data points often collected are ethnicity, income level,  voting registration, and occupation.  

This amount of information, pieced together from various breaches, could be used maliciously. In the same way that advertisers now target us using this data, hackers are likely doing the same, and nation-states are almost certainly taking advantage of it as well.

I’m sure everyone reading this has received a notice, by email or postal mail, that their information was disclosed in a breach.

How do we prevent this? 

We can’t… Too many companies collect our data… There are companies neither of us has ever heard of that have profiles on us and everyone we know. These profiles are used for split-second advertising auctions that determine the ad you see or even the commercial on your streaming service… That is a whole other story.

Since we can’t stop this, what can we do?

Good IT practices are required to secure data, but most importantly, data must be handled carefully, and specifically it must be encrypted, so that it is only readable with a second factor (usually a key) to decrypt it, whether in transit or at rest.

If a hacker steals data that has been encrypted using strong ciphers, the data is useless unless the hacker also obtains the second factor, such as a key. If the hacker obtains the key too, well, it’s not the encryption’s fault at that point…
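
To make that concrete, here is a minimal sketch of key-based (symmetric) encryption using Python’s cryptography package (assumed installed); the key is the “second factor,” and without it the stolen ciphertext is just noise.

```python
# Minimal sketch of symmetric encryption with a key, using the
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the "second factor" - keep it separate from the data
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"4111 1111 1111 1111")   # data at rest, now unreadable
plaintext = cipher.decrypt(ciphertext)                 # only possible with the key

print(ciphertext)   # opaque bytes without the key
print(plaintext)    # b'4111 1111 1111 1111'
```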

Our only hope?

Regulatory and other compliance requirements create the frameworks or rules required to protect data in an IT system. 

For instance, a company that processes many credit card transactions must undergo an IT audit based on the Payment Card Industry Data Security Standard (PCI DSS). This audit verifies that cardholder data is protected, that only the data required is collected, and that other IT controls validating a safe computing environment are in place wherever credit card data is processed or stored. This is an example of an industry compliance standard; other regions and industries have their own requirements.

However, most companies collecting data on US citizens have few regulations today.

GDPR and NIS 2 are trying to address this for Europeans and reduce the risk by questioning what data is collected, creating requirements for how data is stored, and requiring that a user’s data be deleted on request.

FedRAMP is an authorization whose IT audit must be passed by cloud-based companies working with the US government. It is based on the NIST 800-53 framework and, in my opinion, is helping the industry as a whole, but it applies only to cloud-based government contractors.

Why is encryption good? 

Encryption is required to keep data safe and to ensure its confidentiality and integrity.

Encryption is a mechanism to render data unreadable outside the intended parties.

This can be used in two ways: at rest, when an item is stored (think of a network share drive or SharePoint), or in transit, when data is moving across the network (think of credit card transactions).
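
As a small illustration of encryption in transit, the sketch below opens a TLS connection with Python’s standard library and prints the protocol version and cipher suite protecting the traffic; the hostname is just a placeholder.

```python
# Minimal sketch of data-in-transit encryption: negotiate TLS and
# show what is protecting the connection. Standard library only.
import socket
import ssl

hostname = "www.example.com"   # placeholder host; any HTTPS site works
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
        print(tls.cipher())    # negotiated cipher suite encrypting the traffic
```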

Business can only be conducted online with strong, reliable encryption to protect transactions. 

Now, imagine if the government wanted to have a back door in any of the encryption standards…

Every financial transaction, from Amazon to FanDuel, must be encrypted to prevent snooping. A backdoor in the encryption protocol used for these transactions would shake confidence in any online transaction.

Any security professional will understand the CIA triad: confidentiality, integrity, and availability. These are the pillars of data security.

The government should understand that an encryption protocol with a back door is not secure. 

Please think twice before assuming encryption is a bad thing. Without encryption, online transactions would not be possible. 

The next time you bet on FanDuel or purchase on Amazon, think about how much you hope your information is encrypted securely.

Vulnerability Management

A lot of ambiguity exists around how to properly manage vulnerabilities. A vulnerability management program encompasses multiple teams, tools, and processes with the common goal of securing and maintaining the security of the environment. A proper vulnerability management program allows for management and measurement through metrics, which mature programs regularly produce.

Before performing vulnerability scans, a few things must be considered.

What should be scanned, and how should different segments be treated? Is one zone in scope for PCI and another for HITRUST, while the dev network is out of scope?

Once the scope is identified, we need to:

Discover Assets

Keeping a valid asset inventory is not a fun or cool thing to do, but it is important and often overlooked.

Another often-overlooked benefit of an up-to-date asset inventory is the ability to utilize outside sources of alerts. For example, if I know I have a ton of Apache web servers, I can set alerts from multiple sources to notify me or the team of any new vulnerabilities matching the criteria I define.
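
As a rough sketch of that kind of alerting, the snippet below queries NIST’s public NVD CVE API for recently published CVEs matching products from the asset inventory. The endpoint, parameters, and response fields shown are my assumptions about that API and should be checked against the current NVD documentation.

```python
# Rough sketch: turn the asset inventory into a vulnerability alert feed
# by polling the NVD CVE API for recently published CVEs per product.
import datetime as dt
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"   # assumed endpoint

def new_cves_for(keyword: str, days_back: int = 7) -> list[str]:
    """Return CVE IDs published in the last `days_back` days matching `keyword`."""
    end = dt.datetime.now(dt.timezone.utc)
    start = end - dt.timedelta(days=days_back)
    params = {
        "keywordSearch": keyword,
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "resultsPerPage": 50,
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

# Keywords driven by what the asset inventory says we actually run.
for product in ["Apache HTTP Server", "OpenSSH"]:
    for cve_id in new_cves_for(product):
        print(f"ALERT: new CVE {cve_id} may affect {product}")
```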

Once assets are discovered, we will need to confirm custodianship/ownership: who is patching them?

After assets are known and owners are identified, we need to assign a criticality rating to each asset based on its business function.

The asset, asset owner/custodian, and the criticality rating of each asset should be included in the inventory, along with the location, name, serial number, important software, OS and versions, and other relevant information.
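
A minimal sketch of what one inventory record might look like as a data structure is below; the field names are illustrative, not a standard schema.

```python
# Illustrative asset inventory record capturing the fields described above.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    serial_number: str
    location: str
    owner: str                 # custodian responsible for patching
    criticality: str           # "high", "medium", or "low", based on business function
    operating_system: str
    important_software: list[str] = field(default_factory=list)

inventory = [
    Asset("web01", "SN-1001", "us-east-1", "web team", "high",
          "Ubuntu 22.04", ["Apache HTTP Server 2.4"]),
    Asset("hr-share", "SN-2042", "HQ server room", "IT ops", "medium",
          "Windows Server 2019", ["SMB file share"]),
]
```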

Now the fun stuff…

Identifying Vulnerabilities (Detection Phase)

A vulnerability scan is a combination of automated or manual tools, techniques, and/or methods run against external and internal network devices and servers, designed to expose potential vulnerabilities that could be exploited by malicious individuals. Once these weaknesses are identified, the entity should focus on remediating the deficiencies discovered, then repeat the scan to verify the vulnerabilities have been corrected.

A vulnerability scan should be performed against both the internal network and the external footprint of an organization. The frequency of the scans can vary depending upon compliance regulations, but a suggested practice is to scan quarterly and after any major changes to your environment.

PCI requires quarterly external scans performed by an Approved Scanning Vendor (ASV), along with quarterly internal scans and annual penetration testing.

HITRUST does require vulnerability scanning quarterly as well. 

Evaluation of Vulnerabilities (Prioritizing Risk)

On a typical vulnerability scan, findings will be rated critical, high, medium, or low.

These ratings are derived from the severity scores assigned to Common Vulnerabilities and Exposures entries, commonly referred to as CVEs.

A CVE’s associated score is used for prioritizing the remediation of vulnerabilities. The CVE list is a project dedicated to tracking and documenting vulnerabilities in software and hardware. Since 1999, the MITRE Corporation has maintained the CVE system, which is funded by the National Cyber Security Division of the US Department of Homeland Security.

When a security researcher discovers a new vulnerability, it is evaluated and assigned an identifier by a CVE Numbering Authority (CNA). Once the vulnerability can be independently confirmed, it is entered into the NIST National Vulnerability Database (NVD).

The Common Vulnerability Scoring System (CVSS) is utilized to assign a final score to each vulnerability.

A CVSS score has three parts: the base score, the temporal score, and the environmental score.

Knowing the criticality rating of each system will help you judge the risk a vulnerability presents, considering both the vulnerability rating and the system rating. The environmental score helps determine a vulnerability’s risk specifically to your organization, while the temporal score reflects factors that change over time, such as the availability of patches or fixes for the vulnerability.

A high vulnerability on a system that is not externally exposed and requires internal network access before it can be exploited should be prioritized lower than the same high vulnerability found on an externally exposed web server. This is a simple example, but real situations will be similar, and using a defined methodology will help.
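
Here is a simplified sketch of what such a defined methodology could look like in code, combining the CVSS base score with asset criticality and exposure; the weights and thresholds are illustrative only, not part of any standard.

```python
# Illustrative prioritization: adjust the scanner's CVSS base score by
# asset criticality and internet exposure, then bucket the result.
def priority(cvss_base: float, asset_criticality: str, externally_exposed: bool) -> str:
    weights = {"low": 0.8, "medium": 1.0, "high": 1.2}
    score = cvss_base * weights[asset_criticality]
    if externally_exposed:
        score += 2.0          # internet-facing systems get bumped up
    if score >= 9.0:
        return "fix immediately"
    if score >= 7.0:
        return "fix this patch cycle"
    return "schedule / accept"

# The same CVSS 8.0 finding carries very different urgency:
print(priority(8.0, "high", externally_exposed=True))     # fix immediately
print(priority(8.0, "medium", externally_exposed=False))  # fix this patch cycle
```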

Remediation (Fixing the Detected Issues)

Remediation efforts should be tracked, managed, and reviewed. Ideally, a central ticketing system will be utilized to track remediation efforts and provide meaningful metrics to the overall vulnerability management program. Metrics from the remediation effort can include mean time to detection, mean time to remediation, average window of exposure, and the percentage of systems without critical or high vulnerabilities.
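
As a small sketch, the metrics above can be computed directly from exported ticket data; the ticket fields here are hypothetical and would map to whatever your ticketing system actually provides.

```python
# Illustrative program metrics computed from (hypothetical) ticket exports.
from datetime import date
from statistics import mean

tickets = [
    {"detected": date(2024, 1, 2), "remediated": date(2024, 1, 9),  "severity": "critical"},
    {"detected": date(2024, 1, 5), "remediated": date(2024, 1, 25), "severity": "high"},
    {"detected": date(2024, 1, 7), "remediated": None,              "severity": "medium"},
]

closed = [t for t in tickets if t["remediated"]]
mttr_days = mean((t["remediated"] - t["detected"]).days for t in closed)
open_crit_high = sum(1 for t in tickets
                     if t["remediated"] is None and t["severity"] in ("critical", "high"))

print(f"Mean time to remediation: {mttr_days:.1f} days")
print(f"Open critical/high findings: {open_crit_high}")
```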

  • Rinse, Repeat and Document
  • Set a defined frequency for:
    • Scanning
    • Patching
    • Reviewing Scans
    • Reviewing Remediation efforts

A vulnerability management program should consist of a team of individuals including IT, Security, and a management-level IT or Security person. The idea is to include the IT staff who will most likely perform remediation and have the work managed and overseen by Security. IT or Security management should be included to provide buy-in and to help with any approvals required.

A vulnerability management program provides an organization with the data needed to properly manage and measure the security of its IT infrastructure.

A mature vulnerability program will encompass not only vulnerability scanning but also inventory management, a remediation process for discovered vulnerabilities, penetration testing, and risk management. A mature program will have the ability to correlate the results of previous scans to create meaningful metrics that an organization can use to review its processes for improvements and additional risk reduction.

We’ve moved to the cloud; now what do we do?

Cloud environments make a lot of sense for businesses of all types. As we move to a more agile workforce, utilizing cloud resources provides added functionality that was often not obtainable for small to medium businesses. Cloud resources are by nature highly available and highly scalable, and they make disaster recovery easier to implement.

In the past, for a smaller company to scale up to meet demand would require a huge up-front investment to acquire new hardware. That hardware would then require time to set up and configure. With the advent of cloud hosting, these operations have become much easier and are offered on a far more affordable subscription model instead of the previous up-front costs of hardware and licensing. I can see some situations that may still require on-site equipment for various compliance, legal, or cost reasons. Those situations will be the exception, as most businesses will benefit from the cost savings associated with cloud hosting. Additionally, as more companies utilize Linux systems for their web applications and services, costs drop further, since most Linux operating systems are open source and do not require licensing fees.

One misconception about migrating to a cloud environment is that by default it is secure. I would say that is partially true but not completely. By default, AWS, for example, uses a deny-all policy for its security groups. A default-deny policy is a best practice that requires any access to a system to be specifically allowed by a security group (firewall) rule before traffic can reach the cloud resource. As a best practice, access should be opened up only according to need. In a cloud environment you are not responsible for the physical security of your cloud systems or the underlying network, but the security of your hosts and services is your responsibility. An application that is only used by your employees should limit access to those specific employees. This can be implemented with a direct connection, a VPN, a certificate to authenticate the device, or by whitelisting the specific IP addresses of the employees who require access (this can be painful if the users don’t have static IPs). Whitelisting IP addresses for remote workers can be cumbersome because those addresses change, but for smaller organizations it could be feasible. If you have a web application that users should be able to access from anywhere in the world, then you’ll need to open up access to everyone for that application.
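
As a rough illustration of that “open only what is needed” approach, here is a hedged sketch using boto3, the AWS SDK for Python; the security group ID and office CIDR are placeholders, and AWS credentials/region configuration are assumed to already be in place. Because the security group denies everything by default, only the single rule added here is allowed in.

```python
# Illustrative default-deny posture: the only ingress allowed is HTTPS
# from a known corporate address range.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",       # placeholder: office / VPN egress range
            "Description": "HTTPS from corporate network only",
        }],
    }],
)
```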

Opening access to the world can be a scary concept. If your system is available over the open Internet, expect it to be tested constantly. Regardless of the service, if something exists and is accessible to everyone on the Internet, it will be discovered by crawlers; some of these will be for research purposes and others will be malicious. This is a fact of life that every organization, government, and Internet user must face. To protect your systems, you must implement proper access controls, secure transmissions, and permissions to limit the possibility of unauthorized access.

Making a system secure requires multiple layers of protection. A layered approach can deter an attacker when entry becomes too difficult, and it can also prevent a deeper breach or stop an attacker from obtaining the keys to the kingdom (administrator access). Keeping an environment secure, whether in the cloud or on-premises, requires the same concepts. Create a strong perimeter, either at the network layer or, if you’re following a Zero Trust model, on the host itself. That means shutting down services that aren’t required, utilizing a default deny-all rule, and allowing specific traffic into the host only by exception. Add multi-factor authentication to your remote access methods to further secure your access.

After securing the perimeter, protections should be in place to limit file and network access according to the user’s role. There is no need for a standard user to have privileged rights. Securing the perimeter and limiting user access is a great start, but to fully secure systems you should add techniques such as the following:

Centralizing Access – one location that stores all of the user information and can edit permissions/access at a moment’s notice. Changes in this system are reflected in all systems.

Centralizing Monitoring – all devices send their logs to a SIEM or syslog server, which can create metrics or trigger alerts for defined events.

Adding Network Detection/Prevention Systems – these systems sit on the network, detect or prevent malicious activity, and send alerts based on triggers. They differ from a SIEM in that their triggers are based on network traffic, while SIEM triggers are based on logs.

Application Firewalls – if you own an application that is exposed to the Internet, you should have an application-layer firewall in front of it. These next-generation firewalls can detect and inspect application-layer traffic.
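
On the centralized monitoring point above: here is a minimal sketch of an application shipping its logs to a central syslog/SIEM collector using only Python’s standard library. The collector address is a placeholder for whatever your environment actually runs.

```python
# Illustrative log forwarding: send application events to a central
# syslog/SIEM collector so alerts can be triggered in one place.
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("webapp")
logger.setLevel(logging.INFO)

handler = SysLogHandler(address=("siem.example.internal", 514))   # placeholder collector
handler.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.warning("failed login for user=jsmith from 198.51.100.7")  # event the SIEM can alert on
```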

To sum this rant up into something meaningful: your new and exciting cloud hosting will still require the same old boring security practices that (mostly) helped keep your on-premises servers secure. As more organizations move to the cloud, they’ll need to hire staff with the skills to implement and utilize cloud features to make the cloud safe, secure, and cost-effective. Pick your vendors, contractors, and assessors wisely, as they’re not all created equal. When you talk to your third-party consultant, make sure they understand how cloud infrastructures work and function. All too often we see items a previous assessment team missed completely or did not fully understand. Mistakes like these can go in either direction, such as giving an organization a false sense of security or requiring an organization to perform wasteful remediation for a system that meets requirements but is just poorly understood. Just as not all cloud-hosting providers are created equal, the same can be said about security organizations. Perform your due diligence: ask for the credentials of the team members, ask for references, and hold discussions with them to see if they’re on the level. Picking a partner to help secure your organization may be one of the most important choices you make.

Is it time to move to the cloud?

As out-of-office work has become more common, all businesses must address the security of their remote workforce. Early during the pandemic’s quarantine, businesses were forced to become extremely agile and to adapt and adjust to the situation. However, now that Covid-19 is with us for the near future, we must pivot from making things work to making things work securely.

Once the pandemic eventually ends, normal business operations will resume with at least a few resemblances of normalcy. But one thing is certain: compliance and regulatory requirements will remain, and the new remote workforce will need to be secured.

Most modern businesses had already implemented remote access methods prior to the pandemic. Even so, for organizations of all sizes, adjusting to a 100% remote workforce caused unexpected consequences. Many organizations had to develop a disaster recovery/business continuity plan in real time. A common problem for medium-sized businesses was a lack of available bandwidth on the VPN appliance. Most VPN appliances are sized for fewer than 100-200 users, and once that threshold is exceeded the device can become unstable for some or all VPN users. Utilizing a high-availability pair can provide load balancing for these types of events. In addition, a second VPN appliance, which often performs double duty as a firewall, can also help increase bandwidth over the VPN. Having a duplicate VPN appliance can be costly, as it doubles the cost now that you’re utilizing two appliances instead of one. That high cost can be justified for businesses that require high uptime or perform critical work that can’t be delayed. But scalability will not end with your VPN and may need to be addressed for other services or functions as well.

Additionally, organizations must begin to consider the level of access given to their users. One problem with VPNs is that they provide network-layer access to users, which is often unrestricted. It is commonplace for a VPN user to have access to the entire network if the network is not segmented. It is possible to limit VPN access or segment the network to avoid unauthorized access to subnets or networks that normal users wouldn’t require access to. In networking, the old philosophy of trusting traffic at the same access level is beginning to change. The idea of Zero Trust addresses exactly this: all hosts are given zero trust, and for a user to access any system, the same authentication process is required regardless of the user’s connection origin. Whether the user is attempting to connect to a host from a public Wi-Fi connection or is plugged into a switch right next to the host, the traffic is treated the same, requiring the same authentication process from either origin. Zero Trust utilizes systems that are already established, such as centralized access controls, multi-factor authentication, and device authentication. A Zero Trust authentication will require a valid username/password tied to a central identity access system (ideally), then a multi-factor authentication token, and a certificate that was issued to the device at deployment. This method effectively authenticates the user, the session, and the device. Zero Trust and cloud computing work very nicely together, and I’d suggest that if you move to the cloud, that is the time to implement Zero Trust.
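
Here is a toy sketch of that three-part check in code. Everything in it is a hypothetical stand-in: the lookups (USER_DB, MFA_DB, DEVICE_DB) represent a real identity provider, MFA service, and device registry, not any particular product’s API, and the point is simply that all three checks run on every connection regardless of where it comes from.

```python
# Toy Zero Trust gate: verify the user (password), the session (MFA token),
# and the device (certificate fingerprint) on every access attempt.
import hashlib
import hmac

# Stand-ins for a central identity provider, MFA service, and device registry.
USER_DB   = {"jsmith": "correct horse battery staple"}                  # password on file
MFA_DB    = {"jsmith": "492039"}                                         # expected TOTP this window
DEVICE_DB = {"jsmith": {hashlib.sha256(b"device-cert-issued-at-deploy").hexdigest()}}

def verify_password(user: str, password: str) -> bool:
    return hmac.compare_digest(password, USER_DB.get(user, ""))

def verify_token(user: str, token: str) -> bool:
    return token == MFA_DB.get(user)

def verify_device(user: str, cert_pem: bytes) -> bool:
    return hashlib.sha256(cert_pem).hexdigest() in DEVICE_DB.get(user, set())

def allow_access(user: str, password: str, token: str, cert_pem: bytes) -> bool:
    # No implicit trust from network location: user, session, and device
    # must all verify, every time.
    return (verify_password(user, password)
            and verify_token(user, token)
            and verify_device(user, cert_pem))

print(allow_access("jsmith", "correct horse battery staple", "492039",
                   b"device-cert-issued-at-deploy"))   # True only if all three checks pass
```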

Cloud resources can now be utilized by businesses of all sizes and can remove the need for a VPN connection to the physical office. The older model of IT infrastructure used on-premises servers containing resources such as file shares, desktop applications, or desktop environments. This internal infrastructure required a VPN, or less secure alternatives, to reach the internal network’s services and the functions employees need to perform their jobs. Some businesses may be required to keep their equipment on-site; however, the vast majority will have the ability to move to the cloud. The cloud offers alternatives, as you can utilize a cloud provider for file shares and web applications to replace the older desktop applications. Pivoting to cloud resources can eliminate the need for a VPN and can enable a remote workforce to flourish.

If you’re considering adjusting your organization’s infrastructure placement, you must ask a series of questions about your organization. 

  • Will the move be Cost-Effective?
  • Will the cloud satisfy my Compliance Regulations?
  • Will the cloud satisfy my Business Agreement Obligations?
  • Will the cloud satisfy my Legal Requirements?

The answer to all of these questions is most likely yes…

Once you’ve identified that you can move to the cloud, the organization must determine what services are required. Do you need a complete virtual environment for workers or just a web application? As desktop applications become more obsolete, web applications will continue to allow for more agility in the remote workforce. Previously, an organization may have had an internal application that was only accessible on an endpoint in the office or over a VPN; now a web application can be utilized instead. Access to a web application can be safe if you implement a centralized identity management system that uses a two-factor authentication token for verification. Since the web application would be exposed over the Internet, centralized identity management and two-factor authentication are compensating controls that help prevent unauthorized access.

One benefit of moving to a cloud environment is that disaster recovery and business continuity can become much easier to execute. Spinning up a warm site or cold site with physical hardware can take time and a large effort. But in AWS or Azure, it can be a few mouse clicks away: replicate a server to a different Availability Zone (AZ) and/or a different region to spread it across multiple data centers within your cloud provider.
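
As a small illustration of how little effort that takes, here is a hedged boto3 sketch that copies a server image into a second AWS region so it can be launched there for DR; the AMI ID and regions are placeholders, and error handling is omitted.

```python
# Illustrative DR step: copy a server image to a second region.
import boto3

# Client in the DR region; the copy is initiated from the destination side.
dr_ec2 = boto3.client("ec2", region_name="us-west-2")

response = dr_ec2.copy_image(
    Name="app-server-dr-copy",
    SourceImageId="ami-0123456789abcdef0",   # placeholder image in the primary region
    SourceRegion="us-east-1",
)
print("DR image being created:", response["ImageId"])
```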

Moving to the cloud does not guarantee a more or less secure environment. If you move to a cloud environment, don’t forget the basics. Zero Trust is a great idea, but it should be implemented alongside normal security operations such as limiting permissions for users, reviewing access, user awareness training, frequent patching, and utilizing secure protocols for transmitting sensitive data. Too often, organizations look for security silver bullets and neglect the mundane tasks that add up to a stronger security posture.

Technical Risks of Third Parties

A third party can be defined as any contractor, business associate, business partner, vendor, or even volunteer who works within your organization. When you work with a third party, it instantly creates additional risk for your organization. However, the level of risk will depend upon the third party, as not all third parties are created equal. A third-party security services firm with a positive reputation will most likely introduce less risk than a contractor hired to do marketing.

Third-party risk management has quickly been recognized as a vital task that must be taken seriously. As companies look to adopt newer technologies and become more agile, it is only reasonable to think that third parties will be utilized more frequently in the future for more and more services and tasks. Without assessing and managing third-party relationships, how can you be sure that your third party is taking your security seriously?

From a technical perspective, a third-party connection can come in various forms: for example, a web API (webhook) pulling data from one source into another system, which could inadvertently share information through web services. A more traditional third-party connection could be a remote user with VPN access or a direct connection through an application such as Citrix. Another possibility is a site-to-site VPN between corporate networks, which is always open and connected and could potentially carry the most risk. Additionally, a third party may have access to a secure file transfer protocol (SFTP) service, which could be used to transfer and/or exchange information between companies. Each method provides a different level of access, and with that third-party access comes potential risk.

A web API can be utilized to transfer PHI or other data over HTTP(S). This method is similar to a user accessing a web application to view a medical record, but instead of a user it’s an automated process that pulls the medical record or other requested information and returns the results to be incorporated into or displayed by the program that made the request. An example would be a website I created that is dedicated to the price of gold. One of my main objectives for this website is to display the current price of gold. However, I don’t want to edit the website continuously to keep the price up to date as it changes in real time. Instead, I can call an API that pulls the price of gold from another site of my choosing, which is then displayed on my website. The price updates in real time and does not require me to consistently update it by hand.
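
A minimal sketch of that gold-price lookup is below; the API URL and response fields are hypothetical stand-ins for whatever provider the site actually uses.

```python
# Illustrative API pull: fetch the current gold price from an external
# service so the page never needs manual edits.
import requests

def current_gold_price() -> float:
    resp = requests.get("https://api.example-metals.com/v1/spot/gold", timeout=10)
    resp.raise_for_status()
    return float(resp.json()["price_usd_per_oz"])   # hypothetical response field

print(f"Gold is currently ${current_gold_price():,.2f}/oz")
```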

A remote VPN is pretty straightforward: a single user accesses a VPN to obtain network access to the agreed-upon asset. Similarly, a direct connection through a Citrix or VMware application environment can use native capabilities to limit the contractor’s or third party’s access to only the required resources. A site-to-site VPN, by contrast, is a constantly open connection between your organization’s network and the third party’s network. Site-to-site VPNs should be avoided, as the trustworthiness of the third party can never truly be known without extensive audits. If a site-to-site VPN must be used, it is important to apply a strict deny-all rule and allow only the specifically needed traffic through the VPN tunnel.

An SFTP connection is a common way to allow the exchange of information. A third party can be provided unique credentials and can supply their source IP addresses so that incoming connections can be validated, allowing only known third parties to upload data to your file transfer service. Validating the source IP address, along with a unique username/password for authentication over SFTP, is a secure method. Ideally, the third party would only have permissions to view data or folders associated with their company and the ability only to upload data.
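
Below is a minimal sketch of that exchange from the third party’s side, using the paramiko library (assumed installed); the host, account, and paths are placeholders, and the source-IP restriction described above would be enforced by your firewall or SFTP service, not in this script.

```python
# Illustrative vendor upload over SFTP with unique, per-vendor credentials.
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="vendor_acme", password="unique-per-vendor-secret")

sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # The vendor account should be restricted so it only sees its own folder.
    sftp.put("claims_batch_2024-01.csv", "/uploads/acme/claims_batch_2024-01.csv")
finally:
    sftp.close()
    transport.close()
```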

Additionally, a third party may actually host all of your data. With the strong adoption of cloud hosting, it is becoming more common for companies to offload their sensitive information to the cloud. If sensitive information is stored in the cloud, it should be detailed in a third-party risk tracker. The vetting and continued assessment of any hosting provider should also be performed, as the wrong choice could have costly consequences.

All of these different methods have many similarities when it comes to managing third-party access. The use of third parties will only continue to increase with newer technologies and widening skill gaps between workers. If an organization performs proper vetting, management, and continued auditing of its associated third parties, then it can effectively limit and document the risks involved. Below are some steps all organizations should follow to keep these relationships productive and secure.

The first step is to include, review, and update contract language in BAAs addressing third-party security requirements, obligations, and best practices. This is especially true for any service that hosts PHI.

Second is to track which contractors or third parties have access to what data or systems. A detailed tracker will go a long way toward documenting and understanding who has access and what level of access they possess. Additionally, entries for APIs, SFTP, VPNs, hosted data, and other connections should include the IP addresses of the connection, the ports and protocols used, and as many other characteristics of the connection as possible.

Additionally, the technical aspects of third-party management should spin off recurring tasks that are performed regularly to add a layer of defense. These audits can catch mistakes, anomalies, things falling through the cracks, and anything else that gets overlooked:

  • Verify unique IDs are in place for all users
  • Verify the appropriate level of privileges is set for each user
  • Test the monitoring in place to ensure it is working properly, especially for third-party users
  • Verify secure protocols are implemented for all transmission of sensitive information
  • Verify that any PHI stored is encrypted at rest
  • Review connection agreements to confirm they match the agreed requirements

When it comes to securing your third-party relationships, follow the same principles as the rest of your security practices. Do the basic things: track the third-party users and connections, detail the level of access, implement strong access controls, and verify your monitoring is working properly. Also, make sure only secure protocols are used for any data exchanges, and always audit and review. Finally, rigorously assessing and reviewing your third parties can help ensure the continued security of your sensitive infrastructure. It is better to discover the possible can of worms now than to find yourself making a headline for the wrong reasons.

Tips for Everyday Security

As any IT or security person knows, we’re often asked what a normal person can do to stay secure. I have some simple things to consider and some more difficult things to implement for your cyber life. These changes will make your accounts more cumbersome to access. However, consider this: if it is harder for the account holder, imagine how much more difficult accessing your account will be for a hacker.

Step 1. Use tougher passwords. I suggest using 10 or more characters with a mixture of lowercase, uppercase, numbers, and symbols. Passphrases are very popular now, such as Ithinkmypasswordisreallysecure2019!, but be mindful of common phrases. My best advice is to use a password manager, utilize its random password generator, and set the length to 16 characters. The longer the password and the more diverse the complexity, the longer an attempted brute-force attack will take to discover it. A randomly generated password using all possible characters and 16 characters or more will be nearly impossible to crack.
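
For the curious, generating that kind of password takes only a few lines of Python’s standard library; this is essentially what a password manager’s generator does for you.

```python
# Generate a random 16-character password from the full printable set.
import secrets
import string

def random_password(length: int = 16) -> str:
    alphabet = string.ascii_lowercase + string.ascii_uppercase + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())   # e.g. 'p7#Vq!2z...' - different every run
```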

Step 2. Use different passwords for every site. Insert groans here… Yes, this is a major pain in the ass, but it is the truth. When you use a password on any given site, you have no idea how securely that password is being stored. As we’ve learned from Facebook and other website breaches, websites can store passwords with weak hashes or even in plain text. If your password is compromised in plain text, or the hash is broken, that leaked password will be associated with your email address. If that email and password are used for multiple accounts, odds are that information will be used to access your other accounts. You need a different password for each site. At the very least, do it for the accounts that have access to your money!

Step 3. Use multi-factor authentication for any account that supports it. Any bank, financial services, or cryptocurrency site should offer MFA, so go ahead and enable it. Use an authenticator app such as Google Authenticator, and avoid using email as an MFA method; SMS will work if it’s the only option, but a separate app is better.
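
If you’re wondering what the authenticator app is actually doing, here is a small sketch using the pyotp package (my choice for illustration, not something your bank uses directly): a shared secret plus the current time produces the six-digit code, so nothing has to travel over email or SMS.

```python
# Illustrative TOTP flow: the site and your authenticator app share a secret
# at enrollment, then both derive the same time-based code independently.
import pyotp

secret = pyotp.random_base32()      # shared once via the enrollment QR code
totp = pyotp.TOTP(secret)

code = totp.now()                   # the 6-digit code your app displays
print(code, totp.verify(code))      # the site verifies it the same way -> True
```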

Step 4. Limit the information you make available about yourself. Facebook, LinkedIn, Instagram: all of these sites hold treasure troves of information about us, and hackers can and do use this information to craft specifically targeted attacks. If I see you went to Hawaii in 2014, I may add Hawaii2014 and every variation of it to a password list I’d use in a brute-force attack against your bank account.

Step 5. Turn off any services that are not in use; this goes for phones, tablets, and laptops. Turn off Bluetooth if it’s not in use, turn off sharing, and cover your laptop camera. If you have a smartphone, you already know you’re sacrificing privacy for convenience, so don’t act surprised when you talk about buying new shoes and later that day see shoe ads on the news sites you visit.

Step 6. Nothing is free. If you provide info to a company, most likely that company will be selling it. Be conscious of this, since anyone can buy that info. Your phone number, address, and email are very easy to obtain, so hesitate next time before you give that information away and ask yourself: do I really need to do this? Think of the spam calls you’re getting all the time (like me); they didn’t make your number up, they got it from somewhere.

Step 7. Trust no one… Microsoft won’t call you, and your bank probably isn’t calling you (if the bank is calling, it may be fraud prevention, and they won’t ask for any money, only to verify recent activity). If you receive a phone call and it doesn’t feel right, it probably isn’t. If someone calls you and pressures you to give them money, it’s a scam. If you think it could be real, ask for the person’s name and extension so you can call them back. If the answer sounds good, google the info you obtained and see whether the number is actually associated with that company, or whether the person can be found on LinkedIn as an employee of that company. Don’t trust anyone without verifying their identity. Scams happen every day; don’t fall victim to them.

Step 8. Spam mail protection: if you click a link and it takes you to a login page, stop. Close the link, open a new tab, and log in to the site by typing the address in the address bar or using your bookmark. This practice can help protect you in case you do click a link that is a phishing attempt (spam mail containing a link to a fake login screen that mimics a real site to steal a user’s password by tricking the user into entering it there). I have a poster below from SANS that provides a ton of detail regarding what to look for in a possible phishing attempt. But always remember: TRUST NO ONE!

Step 9. Use an ad blocker in your web browser. AdBlock and uBlock are good options, but others exist. Ad blockers can help prevent malicious advertisements, which can lead to malware installing itself on your device. Blocking malicious ads at the source with an ad blocker provides an additional layer of security.

Step 10. Avoid using public WiFi for financial transactions. I strongly believe that if it is not a necessity, it is best to avoid using any public WiFi to log in to any accounts that could lead to identity theft. I feel the same about accessing banking applications over cellular networks. This may be more of a personal feeling and less a proven technical theory, but proofs of concept do exist of rogue cellular networks capturing data transmitted over 4G. Ideally, I’d prefer to access banking or other financial sites only over a private WiFi connection. However, if you travel often and have to use public or hotel WiFi, I’d suggest utilizing a private VPN for that sensitive traffic. At least the VPN will encrypt that traffic over the public wireless connection and give you an additional level of protection. HTTPS should be used by those sites and will encrypt that data anyway, but as is commonly said in security, a layered defense is best.

Thank you for reading, and hopefully this helps keep your information a little safer.