Early last week, a developer found a cleverly hidden backdoor in XZ Utils, an open-source compression tool that is part of most Linux distributions - someone, most likely a state actor, had worked for years to gain the trust of the program's maintainers and push seemingly innocuous pieces of code that would have let the attacker access any computer running the compromised software. The linked New York Times article does a good job of explaining the entire scheme, but it doesn't go into huge depth about what open-source systems are, or how they differ from other types of software.
Open-source software is exactly what it sounds like - anyone can see the code and submit changes. Trusted individuals called maintainers review the submissions and decide whether they should be merged into the main branch that everyone uses. Open-source projects are generally free to use, with restrictions specified by a given license. Some companies, such as Red Hat, sell support and services around open-source software, but so many projects that are critical to the functioning of the internet as we know it are maintained by hobbyists in their spare time and provided for free that xkcd has a comic about it. This can be beneficial - access to most of the internet would be significantly restricted for a large number of people if they had to pay for every piece of code they used, and in theory, this allows for more innovation. However, since this work is done on top of a maintainer's actual job, packages can quickly fall out of date if the maintainer does not have time to work on them. In addition, a loose structure without oversight can let bullying and personal biases hijack a community. Richard Stallman, the founder of the Free Software Foundation, is a prominent offender, and Linus Torvalds, the creator of Linux, temporarily stepped back from kernel development after being criticized for abusive behavior. While these men undoubtedly contributed greatly to the field, we will never know which potential successors were driven out of their careers by the toxic culture.
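For readers who haven't seen it in practice, the submit-review-merge workflow described above can be sketched with git, the version-control tool most open-source projects use. Everything here is illustrative - the repository, branch, and file names ("demo", "fix-greeting", util.txt) are made up, and on a site like GitHub the contributor's branch would arrive as a pull request from a fork:

```shell
# Set up a toy project standing in for an open-source repository.
mkdir demo && cd demo
git init -q
git checkout -qb main
git config user.email "dev@example.com"
git config user.name "Demo Dev"

# The project's current state on the main branch (note the typo).
echo "helo from demo" > util.txt
git add util.txt && git commit -qm "initial release"

# A contributor proposes a fix on their own branch.
git checkout -qb fix-greeting
echo "hello from demo" > util.txt
git commit -qam "fix typo in greeting"

# A maintainer inspects exactly what would change...
git diff main fix-greeting

# ...and, if satisfied, merges it into the branch everyone uses.
git checkout -q main
git merge -q --no-edit fix-greeting
```

The review step in the middle is the whole security model: nothing reaches the main branch unless a maintainer approves it, which is why gaining a maintainer's trust was the heart of the XZ Utils attack.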
In terms of cybersecurity, open-source code offers the benefit of many eyes to catch would-be attacks. At the same time, the fact that anyone can submit changes means it can be easier to infiltrate the system, as demonstrated by the attack that inspired this article. It is much easier to social-engineer your way into a codebase run by overworked maintainers spread across multiple continents than to get hired at Microsoft and slip malicious code into Windows. From a technical perspective, it is also generally easier to find weaknesses in open-source code because it is a "white box" system where you can see everything that goes on inside.
The other side of the coin is proprietary software, in which corporations pay for development and the code itself is not readily available to the public. Think iOS, Microsoft Office, or a Nintendo video game. The benefits of proprietary software from a company perspective are clear - you can pay experienced engineers to develop high-quality code, which you can then sell to others for a profit. Those engineers can work full-time on a single project, so other responsibilities won't get in the way. Unfortunately, company ownership can mean that development doesn't always align with the best needs of the user - like Facebook spending money to improve its ad software instead of combating misinformation or working to make the platform more accessible to those with disabilities. This also means that if the company makes a decision that actively harms the user, it can be more difficult for the general public to uncover it. The most famous example of this is Apple quietly slowing down older iPhones to compensate for aging batteries without telling users. At the same time, the structure and hierarchy of a company, at least in theory, is supposed to dilute the human biases that can proliferate in open-source settings. From a cybersecurity perspective, it requires more technical prowess to hack a proprietary system because it is a "black box" - you can only see inputs and outputs, not the full code, so you need to guess the best way to attack it.
The third most common type of software is called open access (you may also see the term "source-available"), and it falls in between open source and proprietary in terms of freedom to use. Open-access software means that the code itself is open for the public to see, but only employees of the company that develops it can make changes to it. The most famous example of this is Twitter - while Elon Musk claimed to be making the code open source, it is actually open access. The company claims to be working toward a fully open-source model, but no updates on a timeline have been provided. Open-access software is most useful in cases where the main customers are other developers who will be using your software in their own projects. This lets those developers see everything they need in order to integrate properly with their systems, but prevents them from making breaking changes to the code. In terms of vulnerability to attacks, open-access systems are easier to hack than proprietary software, but harder than open source, since you can see the code but can't push malicious changes.
It may seem unnecessary to understand the nuances of software types if you are not actively involved in developing it, but as technology permeates every aspect of our lives and laws struggle to catch up, it is vital to know what is going on - both so that you can protect yourself, and so that the wider world can work together to find ways forward that allow technology to develop without overlooking human concerns.
I learn something new every day.