How to design secure architectures – holistic strategy

The following information security strategy outlines common measures companies can take to improve their overall security posture. It also helps considerably when pursuing important industry certifications and complying with legal requirements.

Our data protection commitment

Our dedication to data protection begins with the following statement: “The protection of our customers’ data is our top concern.” This is not an empty phrase for us: we place a high value on openness and transparency. We guarantee that your personal information is always protected against misuse as far as technically possible. With this article, we would like to show you more about our ongoing efforts to protect your privacy.

We pledge the following:

  • We will never, ever sell user data. Your personal information is always kept private.
  • Sensitive data is only handled and kept in encrypted form, including backups. In addition, we do not use insecure protocols or unpatched systems.
  • The EU GDPR is very important to us. We only keep relevant information. You may also request a copy of your stored data at any time, as well as its correction or deletion where permitted by law.
  • Key infrastructure is housed in a private cloud environment in Germany, while our website is hosted on a public cloud platform in Frankfurt.
  • Our employees are not permitted to access your sensitive information.

How we protect your data

Information security always tries to achieve three different goals: Confidentiality, integrity, and availability. For us, this means:

  • Data may only be visible to those who have permission to read it.
  • Data must remain unaltered and trustworthy at all times.
  • Internal systems must be available when our customers need to access them.

We have defined eight different defenses to effectively protect both our internal and confidential corporate information as well as user and customer data from hacker attacks:

  • Network and infrastructure security to protect against malicious or unauthorized traffic
  • Malware protection of clients and servers
  • Strong end-to-end encryption and uniform data classification
  • Compliance with current security standards and regulations
  • Consistent logging, threat analysis, and effective performance metrics
  • Sophisticated privileged access management including monitoring options
  • Holistic (crisis) management of vulnerabilities
  • Comprehensive application security

Security of the network and infrastructure

Firewall rules are defensive measures that we use to protect our entire infrastructure. On the one hand, we only allow network traffic over protocols that we consider necessary and secure. On the other hand, we block IP addresses based on open-source information and previous attacks.
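
To make the principle tangible, here is a minimal Python sketch of such an allowlist/blocklist decision. The protocols and addresses are hypothetical examples, not our actual rule set:

```python
# Illustrative allowlist/blocklist firewall decision -- a sketch, not a real
# packet filter. Protocols and IP addresses are hypothetical examples.
ALLOWED_PROTOCOLS = {"https", "ssh", "dns"}      # only protocols considered necessary and secure
BLOCKED_IPS = {"203.0.113.17", "198.51.100.4"}   # fed from open-source information and past attacks

def is_traffic_allowed(protocol: str, source_ip: str) -> bool:
    """Default deny: traffic passes only if the protocol is allowlisted
    and the source address is not on the blocklist."""
    if source_ip in BLOCKED_IPS:
        return False
    return protocol.lower() in ALLOWED_PROTOCOLS

print(is_traffic_allowed("https", "192.0.2.10"))    # True
print(is_traffic_allowed("telnet", "192.0.2.10"))   # False: protocol not allowlisted
print(is_traffic_allowed("https", "203.0.113.17"))  # False: blocklisted source
```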

Intrusion detection and intrusion prevention systems are usually part of a firewall. They inspect and analyze data traffic and identify potential threats. Depending on the configuration, a detected threat either triggers an alarm (detection) or is blocked automatically (prevention). We use an IPS to detect and defend against potential attacks on our systems.

We require information from external sources on the Internet. A web proxy allows access to certain addresses and blocks all others. It is a simple yet very effective measure to further prevent attackers from compromising our systems. Access to the proxy is never anonymous, and the reachable target pages are strictly cataloged.

Administrative and internal systems can only be accessed via a virtual private network. They are additionally secured by a second factor and strong identity verification. Unauthorized as well as highly privileged access is also logged and continuously monitored for anomalies and possible threats.
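
As an illustration of such a second factor, the following sketch verifies a time-based one-time password (TOTP) with the pyotp library. The secret handling is deliberately simplified and does not reflect our production setup:

```python
# Minimal TOTP verification sketch using the pyotp library.
# In practice the secret is stored securely per user and never printed.
import pyotp

secret = pyotp.random_base32()   # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                # the six-digit code the authenticator app displays
print(totp.verify(code))         # True: second factor accepted
print(totp.verify("000000"))     # almost certainly False: access denied
```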

Different subnets separate highly critical security zones, such as those containing authentication servers, from regular services – for example, our website. Compromising one network does not necessarily compromise the entire infrastructure. In addition, publicly reachable services are isolated in a demilitarized zone, so that security-critical internal services are not accessible from the Internet at all.

Not everyone should be able to move freely and without restrictions on the network. So-called zero-trust networks prevent any communication between participants that has not been explicitly permitted in advance. Our company currently aligns its internal communication with the zero-trust methodology.
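
The core idea behind zero trust can be sketched in a few lines of Python. The service names, ports, and flows below are hypothetical:

```python
# Illustrative zero-trust policy check: communication is denied unless an
# explicit (source, destination, port) rule permits it in advance.
ALLOWED_FLOWS = {
    ("web-frontend", "order-service", 443),
    ("order-service", "postgres-primary", 5432),
}

def may_communicate(source: str, destination: str, port: int) -> bool:
    """Default deny: only pre-approved flows are permitted."""
    return (source, destination, port) in ALLOWED_FLOWS

print(may_communicate("web-frontend", "order-service", 443))      # True
print(may_communicate("web-frontend", "postgres-primary", 5432))  # False: no direct database access
```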

Our internal architecture comprises multiple independent cloud-native microservices and follows a service-oriented design. It is built to be resilient and, in combination with Kubernetes and the Open Telekom Cloud, can scale up within moments and deprovision servers just as quickly. This ensures the availability of our services at all times, regardless of the workload. Multi-regional data centers provide servers exactly where we need them, and data replication gives us sufficient protection against the failure of an entire data center.

Consistent use of content delivery networks implicitly protects us and our network from various types of attacks – including denial-of-service attacks. We also get valuable additional features such as geo-blocking or enforcement of certain protocols.

Malware protection for hosts and clients

One of the most efficient ways to protect against malware is to rely consistently on containerization: the more virtualized our infrastructure is, the better our sensitive data is protected. By moving to a public cloud environment, the responsibility for securing the underlying servers shifts to the large cloud service providers, who have an immense knowledge advantage and a security budget that only a handful of companies can afford.

Although much of this responsibility is outsourced to large cloud providers, our critical landscape must still be protected separately. In addition to a contingency plan for emergencies, we use traditional antivirus software, have defined measures for the proper handling of our hardware resources, and monitor our infrastructure around the clock.

Encryption and data classification

Proper data protection with the major cloud providers ensures a very high level of security. Many public cloud providers offer state-of-the-art, highly efficient cryptographic services for all scenarios and add complex behavioral analysis to their repertoire to further prevent malicious attacks and access. For the secure provision of our services, we rely on these technologies of the cloud providers – above all on their key management services (KMS).

Cryptographic keys behave similarly to normal passwords: they are generated using proven algorithms, with sufficient length, high entropy, and a random component. We rely exclusively on FIPS 140-3 certified encryption methods.
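
As a rough illustration of authenticated encryption – not our actual key handling, where keys live in the provider's key management service – here is a minimal AES-256-GCM example using the Python cryptography package:

```python
# Minimal authenticated-encryption sketch with AES-256-GCM.
# In production the key would come from the cloud provider's KMS,
# not be generated locally.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # random 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # fresh random nonce for every message
plaintext = b"IBAN DE89 3704 0044 0532 0130 00"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption fails loudly if the ciphertext was tampered with (integrity).
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```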

The encryption and decryption of said data is performed both by the cloud provider and by us. When we access keys ourselves, this must always happen via fine-grained identity and access management (IAM) rules and role-based access control (RBAC) policies that grant only a few select developers access to the keys.

We aim to enforce cryptographic protection across the platform. This includes HTTPS-only communication, the use of secure protocols, encrypted backups, and encrypted database entries that contain sensitive information.

In addition to the encryption itself, the data to be encrypted must first be identified and classified. This includes information that personally identifies users, such as addresses, names or tax numbers, and other information that can be classified as sensitive or confidential, such as transaction data.
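
A toy sketch of such field-level classification might look like this; the field names and categories are examples only, not our actual classification scheme:

```python
# Illustrative field-level data classification: tag each field so that
# downstream systems know which values must be stored encrypted.
SENSITIVE_FIELDS = {
    "name": "personal",
    "address": "personal",
    "tax_number": "personal",
    "transaction_amount": "confidential",
}

def classify(record: dict) -> dict:
    """Label every field as 'personal', 'confidential', or 'public'."""
    return {field: SENSITIVE_FIELDS.get(field, "public") for field in record}

record = {"name": "Erika Mustermann", "tax_number": "12/345/67890", "newsletter": True}
print(classify(record))
# {'name': 'personal', 'tax_number': 'personal', 'newsletter': 'public'}
```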

Security standards and regulations

Strict compliance with contractual and regulatory requirements is one of the top goals of many security programs. To achieve this, we enlist the help of trusted partners, handing over part of the responsibility to cloud providers, among others, and thereby raising the overall level of security. This is commonly referred to as the “shared responsibility” model. Specifically, we hand over much of the responsibility at the operating system, virtualization, network, infrastructure, and hardware levels and can focus almost entirely on securing our application, our own infrastructure configuration, business data, and access models.

Above all, the user management process is a central part of our company philosophy. It includes provisioning new users and adding the necessary access rights. In a second step, we focus on maintaining these rights: job titles change and competencies shift, and we have to react to this adequately and in real time. We also need to know at all times which user has which access rights to which system. Probably the most important function, however, is the revocation of permissions. Our user management system enables the immediate removal of a user without delay or technical problems.
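
The lifecycle described above can be sketched as follows. A real implementation would of course sit on top of a directory service rather than an in-memory dictionary:

```python
# Simplified user-lifecycle sketch: provisioning, role change, revocation.
users: dict[str, set[str]] = {}   # user -> currently granted access rights

def provision(user: str, rights: set[str]) -> None:
    """Create the user with exactly the rights needed for the job."""
    users[user] = set(rights)

def change_role(user: str, rights: set[str]) -> None:
    """Replace rights instead of accumulating them to avoid permission creep."""
    users[user] = set(rights)

def revoke(user: str) -> None:
    """Immediate, complete removal -- the most important operation of all."""
    users.pop(user, None)

provision("d.mueller", {"crm:read"})
change_role("d.mueller", {"billing:read"})   # the old CRM right disappears with the old role
revoke("d.mueller")
print(users)                                 # {} -- no residual access
```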

Every software enhancement we write, every partnership we form, and every decision we make can introduce further risk into our ecosystem. It is practically impossible to screen every single decision for potential risks. Therefore, as part of threat modeling, we have defined measures to be taken in the event of an incident for all threats that are relevant to us.

Since different risks have to be assessed differently, we have established the model of so-called compliance zones. We group systems with similar compliance requirements – for example, critical authentication servers – into a single zone and then protect each zone according to its criticality: with standard measures where the risk is low, or particularly well where it is high – for example, with a four-eyes principle for all administrative access. In addition, compliance zones allow us to better structure our IT landscape.
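
The four-eyes check itself is conceptually simple; a hypothetical sketch:

```python
# Illustrative four-eyes check for administrative access in a highly
# critical compliance zone: at least one independent second person must
# approve, and the requester can never approve their own action.
def four_eyes_approved(requester: str, approvers: set[str]) -> bool:
    return len(approvers - {requester}) >= 1

print(four_eyes_approved("alice", {"bob"}))    # True: a second pair of eyes is present
print(four_eyes_approved("alice", {"alice"}))  # False: self-approval is rejected
print(four_eyes_approved("alice", set()))      # False: no approver at all
```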

Logging, threat modeling, and key performance indicators

A holistic logging system is essential for us. We use a centralized security information and event management (SIEM) tool as the single source of truth for all our events, application logs, and additional metrics such as current server utilization and IP address capacity. As part of our security operations center activities, we continuously evaluate log entries, including examining them daily for anomalies.

We consolidate regular log data with infrastructure analytics data to gain accurate insights into the health of our systems and identify potential threats. For example, an unusually high number of failed login attempts could mean that someone is trying to enter the system without authorization. High network latency could point to a distributed denial-of-service attack. File accesses at unusual times? They could indicate that someone is trying to steal data when no one is watching. We specifically monitor unusual scenarios like these as part of our daily work.
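
In simplified form, the failed-login check might look like this; the threshold and event format are hypothetical:

```python
# Toy anomaly check: flag users whose failed logins exceed a threshold
# within the evaluation window.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 10   # per user, per window -- a hypothetical value

def suspicious_users(events: list[dict]) -> list[str]:
    failures = Counter(e["user"] for e in events if e["event"] == "login_failed")
    return [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]

events = [{"event": "login_failed", "user": "admin"}] * 12 \
       + [{"event": "login_failed", "user": "d.mueller"}] * 2
print(suspicious_users(events))   # ['admin'] -- a candidate for a brute-force attempt
```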

Privileged access management

Every authorization assignment within our infrastructure is based on the principle of least privilege. Authorizations are only granted for the execution of necessary tasks. Once the task is completed, the authorizations are reset as far as possible. This prevents the creation of users who have unrestricted access to all services, including sensitive, confidential corporate data and keys. If a non-administrative account is compromised, the damage is usually kept within manageable limits.
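
A minimal sketch of such a task-bound, time-boxed authorization, assuming an in-memory store purely for illustration:

```python
# Time-boxed privilege grant: rights expire automatically once the task
# window is over, implementing the principle of least privilege.
from datetime import datetime, timedelta, timezone

grants: dict[tuple[str, str], datetime] = {}   # (user, right) -> expiry time

def grant(user: str, right: str, minutes: int = 60) -> None:
    grants[(user, right)] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def has_right(user: str, right: str) -> bool:
    expiry = grants.get((user, right))
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant("d.mueller", "db:prod:write", minutes=30)   # only for the duration of the task
print(has_right("d.mueller", "db:prod:write"))    # True within the 30-minute window
print(has_right("d.mueller", "db:prod:drop"))     # False: never granted
```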

We primarily use an internal privileged access management (PAM) tool to record privileged activity and, if necessary, terminate the associated sessions and lock the user out. PAM shines not only because it protects the company from unauthorized access and prevents damage, but also because the records reduce the burden on administrators: errors can be traced, and since all activity is recorded, administrators can be cleared of false accusations.

Privileged identity management provides a secure way to verify and manage user identities: passwords for privileged accounts can be exchanged automatically or generated dynamically. According to our internal definition, an account is privileged if it has far-reaching rights that can influence the entire system. This includes, for example, Active Directory users as well as SQL users of production databases or of databases containing confidential or sensitive data.
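
Generating such a dynamic, short-lived password can be sketched with Python's standard secrets module:

```python
# Sketch of dynamically generating a strong password for a privileged
# account, as a privileged identity management tool would do on checkout.
import secrets
import string

def generate_privileged_password(length: int = 32) -> str:
    """Cryptographically strong random password, rotated after each use."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_privileged_password())   # unique per checkout
```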

Holistic vulnerability management

We use various types of vulnerability scanners to check our entire IT infrastructure for security gaps and subsequently close the gaps we find. Regular internal penetration tests and manual review of scan results enable us to continuously eliminate vulnerabilities and improve our services.

Datasets derived from the vulnerability scanners are presented graphically and displayed in the form of easy-to-understand performance metrics. Filter options and pre-prioritized results allow us to focus on acute issues at any time while still keeping lower-priority risks in view.

In addition to classic scanning for malware or many different vulnerabilities, we also use our scanners to find ownerless infrastructure. This can be servers that were simply forgotten over time, or servers that were placed in our network by external attackers – in both cases a serious threat.

We have started to implement fuzzing as well as static code and dependency scanning. In doing so, we hope to achieve a much more resilient code base and find vulnerabilities that classic scanners cannot detect. Overall, we test our entire infrastructure several times a week and process the results immediately.
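
To illustrate the idea behind fuzzing in its simplest form – real fuzzers such as Atheris are coverage-guided and far more sophisticated – consider this naive sketch against a hypothetical parser:

```python
# Naive fuzzing sketch: throw random byte strings at a parser and watch
# for unexpected exceptions. The parser below is a hypothetical example.
import secrets

def parse_record(data: bytes) -> str:
    """Hypothetical function under test."""
    return data.decode("utf-8").split(";")[1]

for _ in range(1000):
    sample = secrets.token_bytes(secrets.randbelow(32))
    try:
        parse_record(sample)
    except (UnicodeDecodeError, IndexError):
        pass                                  # known, handled failure modes
    except Exception as exc:                  # anything else is a finding
        print(f"unexpected crash: {sample!r} -> {exc!r}")
```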

Application security

All major IT companies have some type of information security program in place that, in addition to phishing awareness, provides insight into the world of application security. The Open Web Application Security Project (OWASP) has for many years published guidelines on classic application security measures to ward off known attacks such as SQL injection. We train employees in the basics of IT, and IT security in particular, regardless of their profession, position, or field of activity.
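
To give a concrete example from the OWASP canon: SQL injection is warded off with parameterized queries, demonstrated here with Python's built-in sqlite3 module purely for illustration:

```python
# Parameterized queries versus string concatenation -- the classic
# defense against SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a typical injection attempt

# Vulnerable: attacker input becomes part of the SQL statement itself.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())           # [('admin',)] -- injection succeeded

# Safe: the driver treats the input strictly as data, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- the attack fails
```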

Above all, our developers receive ongoing training on the subject of application security. Combined with secure code reviews and the four-eyes principle, this gives them the assurance and confidence to write professional and robust software. And if a mistake is made, we simply correct it and learn from it.

This training is not limited to the application itself but goes a step further and covers the entire lifecycle of the DevSecOps toolchain. The first half starts with planning – e.g., threat modeling or capturing technical debt – and continues with security plugins in the integrated development environment and the various scanning options such as fuzzing, interactive scanning, or dependency scanning. The second half ranges from software signatures and integrity checks to network monitoring, obfuscation, and vulnerability analysis, and closes with an outlook on the security landscape in the coming weeks and months.

Outlook

Although we have done a lot in recent years to ensure the security of our customers, there are still some exciting topics on our road map: from finalizing fuzzing, implementing red-teaming measures or deception technologies, to matching open-source intelligence – the end of 2022 will be a very exciting time, and we are already looking forward to the challenges that await us.
