Author: Alberto Domínguez Serra | Security Architect

The software you never developed…

Software management in day-to-day operations at organisations is a complex issue. If we think about the typical technological infrastructure that supports operations at organisations, we see that it is a very diverse element with many different components from different sources. In this article we are going to explore what we may find when software is not developed by the organisation first-hand.

If we classify software by its origin, there are three main types: software developed by the organisation itself, either with its own resources or through outsourcing; commercial software acquired from third parties; and software developed by open-source communities. It is worth noting that, in general, already-developed software components are used to create other elements. This means that each program comes with a set of dependencies (components) that may originally belong to any of the other types mentioned, so most artefacts are mixed in nature and contain internal components from different sources. Additionally, there may be several versions of the same software in use, each adding its own complexity: functionalities, configuration and, in terms of security, vulnerabilities.

Without an efficient approach, applying adequate measures to protect and maintain software in this scenario can become a titanic task. One of the elements that helps keep control over an organisation's technological infrastructure is the Configuration Management process, which helps inventory the different programs, applications and software, along with their versions and corresponding configuration files.

However, the problem originates in the dependencies previously mentioned, which come from sources external to the organisation. If vulnerabilities are detected in any of those components, the whole artefact could be compromised. Security requires order and control to be successful, so these dependencies (internal and external), just like the end components themselves, must also be managed by the Configuration Management process. Such is the theory, but in practice things are rather different.

All that glitters is not gold

The attack on Equifax, which affected 143 million people and led its CEO to resign, was caused by a flaw in a base component of one of its applications (the Apache Struts framework). According to the latest report by Black Duck, 97% of applications contain open-source software from third parties, and 67% of those also have well-known vulnerabilities.

Using third-party components, especially FOSS (Free and Open Source Software), can be very beneficial for organisations: it speeds up development, reduces costs, and builds on mature, well-tested components. However, such practices also require that organisations ensure a thorough follow-up of these components and their corresponding updates. When components are commercially acquired, software providers usually inform their clients of newly released patches or updates, but open-source communities often do not know who is using their products, and can only use their communication channels (websites, social networks, RSS, etc.) to report issues.

Thus, it falls to the organisation to keep everything under control and properly monitor the components that make up its applications, whether FOSS or commercial third-party software. This follow-up can be conducted through the online channels of providers and communities, or through public vulnerability repositories and CERTs (Computer Emergency Response Teams).
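This follow-up can be partly automated: in practice, advisories would be pulled from a public vulnerability repository (for example the OSV.dev API or the NVD feeds), but the sketch below uses hard-coded sample data, and every component name and advisory id is hypothetical.

```python
# Sketch: match an inventory of third-party components against published
# advisories. Advisories here are local sample data; in practice they would
# come from a public vulnerability repository. All names are hypothetical.

# Inventory: component name -> deployed version, as tuples for easy comparison.
inventory = {
    "example-json-lib": (2, 4, 1),
    "example-web-framework": (1, 9, 0),
}

# Advisories: (component, fixed-in version, advisory id). Any deployed
# version below "fixed-in" is considered affected.
advisories = [
    ("example-json-lib", (2, 4, 3), "ADV-2023-0001"),
    ("example-web-framework", (1, 8, 5), "ADV-2022-0042"),
]

def affected_components(inventory, advisories):
    """Return (component, deployed_version, advisory_id) for every match."""
    hits = []
    for name, fixed_in, adv_id in advisories:
        deployed = inventory.get(name)
        if deployed is not None and deployed < fixed_in:
            hits.append((name, deployed, adv_id))
    return hits

for name, version, adv in affected_components(inventory, advisories):
    print(f"{name} {'.'.join(map(str, version))} is affected by {adv}")
```

Note that only `example-json-lib` is flagged: the deployed web framework (1.9.0) is already newer than the fixed-in version of its advisory.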


The lack of control over software components can result in two main types of risk. Firstly, some components may directly pose a high risk to the organisation, depending on their vulnerabilities. Secondly, it can result in a loss of control over the infrastructure, with no way of knowing which software is running and where; this lack of knowledge about our own organisation would be critical in the event of an attack. The combination of both risks would be fatal, as it would allow attackers to access and take control of the technological infrastructure. Generally speaking, applications exposed online require more supervision, as they can become a direct point of entry. Vulnerable components that interact with the outside are usually detected during periodic audits; components used internally, however, can go unnoticed until some modification turns them into a feasible attack target.

Apart from applications, we should not forget other elements of the technological infrastructure, such as operating systems, web servers, application servers, database servers, and any other type of system and middleware. This software is usually the core support for the internal architectures of the infrastructure, which in turn supports all other applications and is essential to the organisation's global operations. Most of these components come from third parties rather than being developed by the organisation first-hand, and many of them are FOSS.

These elements have to be included in the organisation's Configuration Management process, just like the applications and their components, since all of them pose the same problems in terms of vulnerabilities. There should be a proper follow-up of vulnerabilities reported and patches released by the corresponding communities or software providers. In general, it is recommended to standardise and normalise the deployed versions of each software product to make security management more efficient, instead of having to monitor every version separately. The simpler the infrastructure, the easier it will be to protect.
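The version-standardisation advice above can be checked mechanically against a Configuration Management inventory. A minimal sketch, assuming the inventory can be exported as (host, product, version) records; the host and product names below are illustrative sample data:

```python
# Sketch: flag "version sprawl", i.e. the same product deployed in several
# different versions across hosts. Sample data is hypothetical.
from collections import defaultdict

# (host, product, version) records, as a Configuration Management
# inventory might export them.
deployments = [
    ("web-01", "openssl", "3.0.8"),
    ("web-02", "openssl", "1.1.1t"),
    ("db-01",  "openssl", "3.0.8"),
    ("web-01", "nginx",   "1.24.0"),
    ("web-02", "nginx",   "1.24.0"),
]

def version_sprawl(deployments):
    """Map each product to its sorted deployed versions, keeping only
    products that run in more than one version."""
    versions = defaultdict(set)
    for _host, product, version in deployments:
        versions[product].add(version)
    return {p: sorted(v) for p, v in versions.items() if len(v) > 1}

print(version_sprawl(deployments))
```

Products listed in the result are candidates for standardisation onto a single version, which also reduces the number of version-specific advisories to track.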

If the goal is to improve the security of this type of software, hardening actions may help mitigate potential vulnerabilities. For those unfamiliar with the concept, hardening consists of a set of practices and configurations applied to software in order to improve its security. Most often, these actions seek to eliminate unnecessary functionalities or services, modify default configurations, and delete bundled help documentation and examples.
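The hardening actions just listed can be expressed as a small checklist. The sketch below is illustrative only: the configuration keys and rules are assumptions, not a real product's settings, and real hardening guides (such as the CIS Benchmarks) are far more detailed.

```python
# Sketch: a tiny hardening checklist applied to a service configuration.
# All keys and rules are hypothetical examples of the actions described
# in the article: drop unneeded services, fix defaults, remove samples.

def hardening_findings(config):
    """Return a list of findings a hardening pass would address."""
    findings = []
    if config.get("debug_enabled"):
        findings.append("disable debug mode in production")
    if config.get("default_admin_password"):
        findings.append("change or remove the default admin password")
    if config.get("sample_pages_installed"):
        findings.append("delete bundled examples and help pages")
    required = config.get("required_services", [])
    for service in config.get("enabled_services", []):
        if service not in required:
            findings.append(f"disable unnecessary service: {service}")
    return findings

config = {
    "debug_enabled": True,
    "default_admin_password": True,
    "sample_pages_installed": False,
    "enabled_services": ["http", "ftp", "telnet"],
    "required_services": ["http"],
}
for finding in hardening_findings(config):
    print("-", finding)
```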

As if all this were not enough, there is yet another type of software that should be included in these control processes, since it can also contain vulnerabilities and compromise the infrastructure: the software embedded in the equipment that keeps the machines functioning, running on network devices, on the controllers of blades in data processing centres, and even on firewalls.

Normally, this software consists of proprietary firmware from the corresponding equipment providers. It has to be inventoried and controlled like the previous cases, especially the firmware that keeps the infrastructure up and running. Manufacturer updates and newly reported vulnerabilities should be monitored as well.


Given all this, it is essential to have proper control over all software implemented at all levels, and over its dependencies, as part of the day-to-day operations of organisations, which increasingly rely on third-party software.

Moreover, code security audits can help detect potential flaws in our software dependencies that would go unnoticed in other types of audits. These tests are particularly effective at finding vulnerable components in our applications, as they allow us to check the version currently in use and even perform internal analyses in the case of FOSS. Code audit results also improve control over the technological infrastructure, as they can be used to update the inventory of dependencies approved for use and execution in production.
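One concrete use of that approved inventory is to check an application's declared dependencies against it during an audit. A minimal sketch, assuming a simple `name==version` declaration format; the package names and approved versions are hypothetical:

```python
# Sketch: compare an application's declared dependencies against the
# inventory of versions approved for production. Format and names are
# illustrative assumptions, not a real project's data.

# Approved inventory: package -> set of versions cleared for production.
approved = {
    "example-orm": {"4.2.0", "4.2.1"},
    "example-crypto": {"41.0.3"},
}

def unapproved_dependencies(declared):
    """declared: lines like 'name==version'. Return every entry not on the
    approved inventory (unknown package or unapproved version)."""
    problems = []
    for line in declared:
        name, _, version = line.partition("==")
        if version not in approved.get(name, set()):
            problems.append(line)
    return problems

declared = ["example-orm==4.2.1", "example-crypto==39.0.0", "example-left-pad==1.0"]
print(unapproved_dependencies(declared))
```

Both an outdated version of a known package and a package absent from the inventory are reported, which is exactly the kind of drift a periodic audit should surface.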

Periodic reviews by means of security tests, together with control over all existing software in our infrastructure, would help maintain a significant level of stability in terms of security. This would make organisations mature enough to control the risk posed by the software they never developed.

