There is a dystopian scenario in which a culture reaches a point where it is no longer able to maintain its existing technology. This could come about through idiocracy, in which dysgenic breeding fails to produce enough people of sufficient intelligence, or simply through short-sighted governmental policies that de-prioritize maintenance. The American Society of Civil Engineers, in its infrastructure report card, gives the U.S. a grade of C-minus, owing to the thousands of dams and bridges in poor repair. Our water infrastructure is leaking billions of gallons, our airports are sub-par, and anyone around our cities can attest to the gridlock. Public transit systems are likewise in disrepair, and don't have the capacity to be an alternative to cars.

The situation with cybersecurity is a very obvious example of what is going on all across US infrastructure. Its causes are manifold and seemingly complex, but they can be broken down into simple pieces. Although it is far from a new phenomenon, the past year has treated us to report after report of high-profile hacks and ransomware attacks. Millions of people have already had their complete identifying information stolen from credit bureaus such as Equifax, or from the government itself during the hack of the Office of Personnel Management. Most recently we've seen people's prescription history pilfered from a major pharmacy software provider. But along the way we have seen municipal governments, meat producers, hospitals, insurance providers and energy pipelines shut down as well. Hundreds of companies and government departments were compromised when network infrastructure monitoring software was modified to include back doors. The list of hacks and shutdowns is too long to enumerate, and most incidents are never publicized. Schools doing remote instruction have been shut down regularly by rented Distributed Denial of Service attacks, and even this was kept out of the headlines.

Bureaucratized companies hire teams of highly credentialed security experts who follow "industry best practices" flawlessly. And fail. And fail. And fail. There are many reasons why they are failing, and I'd like to highlight the most important of them.

Failure to Airgap

The first reason is a failure to address unquestioned fundamentals, one of which I will reveal with a question: is it necessary for every desktop and server in a business to have access to the Internet? The answer is no. For decades, even after the common deployment of computers and networks in businesses, business networks were isolated from the Internet. Rather than running TCP/IP, the protocols of the Internet, they ran protocols such as IPX/SPX. Internal email worked fine. Printers worked fine. Internal databases worked fine. This is just one example, but the key point is that even though hackers existed and the Internet existed, many businesses ran networks that were airgapped: physically inaccessible from the Internet. Though these networks could still get viruses, usually brought in by executives on floppy disks in the form of macro viruses, this risk was relatively easy to address and remediate in a variety of ways. Today, in many government departments that deal with classified information, personnel have two computers on their desks: one with some degree of access to the outside world, and the other theoretically running on an independent network with no access to the outside whatsoever.
Although today it is mostly not feasible to run different protocols on an internal network, it is entirely feasible to run the computers with access to a business's vital resources on an airgapped network. Admittedly, this can be a pain in the ass, because it requires updates and other important upgrades to be imported into local repositories, with all the configuration that goes along with that. But properly planned, this will work perfectly well. And if some people need access to the Internet for their jobs, then throw two computers on their desks and run the one with Internet access on a separate, insecure network.

So the first major problem here is a combination of cheapness, laziness and unquestioned assumptions. Given the cost of the average employee, the cost of running an extra Ethernet cable and deploying a second computer is minuscule. Given the cost of spinning up an internal virtual server for upgrades these days, the cost of airgapping business-critical systems is likewise small. It can present a few hurdles, but nothing that IT staff of the 1990s weren't dealing with routinely.

And let me give one more benefit. Companies have pervasive problems with having to monitor and lock down web browsing, because it wastes resources while employees surf social media, poses risks of downloading malware and so forth. They have to publish extensive acceptable use policies in their employee handbooks. Most of these problems go away for positions that do not require Internet access. Too bad your average CIO doesn't understand that a network isn't truly airgapped if there is a single host in the company with access to both that network AND the Internet.
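That last failure mode is easy to audit for. Below is a minimal sketch of such an audit, assuming the psutil package is installed and using 10.50.0.0/16 as a placeholder for the secure segment; run on each machine, it flags any host that is dual-homed across the airgap.

```python
# Sketch: flag hosts that straddle the "airgapped" network and any other one.
# The secure subnet below is a hypothetical placeholder; the psutil package
# is assumed to be installed.
import ipaddress
import psutil

AIRGAPPED_NET = ipaddress.ip_network("10.50.0.0/16")  # placeholder secure segment

def host_addresses():
    """Yield (interface, IPv4 address) for every address bound on this machine."""
    for iface, addrs in psutil.net_if_addrs().items():
        for addr in addrs:
            if addr.family.name == "AF_INET":
                yield iface, ipaddress.ip_address(addr.address)

inside, outside = [], []
for iface, ip in host_addresses():
    if ip.is_loopback:
        continue
    (inside if ip in AIRGAPPED_NET else outside).append((iface, ip))

if inside and outside:
    print("WARNING: this host bridges the airgapped network and another network:")
    for iface, ip in inside + outside:
        print(f"  {iface}: {ip}")
```

A dozen lines of inventory scripting like this, run from configuration management, would catch the one dual-homed box that quietly turns an airgap into a speed bump.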
Silos

The use of silos, under the doctrine of "least privilege," is pervasive. The doctrine of least privilege holds that any given person or application should have access only to those resources needed to do their tasks. In this fashion, the damage that can be done by compromising their associated accounts is limited. This makes sense for a wide variety of reasons, and should indeed be applied generally. There's no reason why a customer service representative in a call center should have access to the social security numbers of all employees, or the ability to override the fire alarm.

On the other hand, the overuse of silos in IT is a problem. Silos are enforced because IT personnel often have very high levels of access, and if their accounts were hijacked, or they became the classic "disgruntled employee," the damage they could do is extensive. Limiting their access and job function serves to limit the damage they could do in a worst-case scenario. The reasoning behind this is sensible, but it accepts certain premises without question. The most important premise is that it is acceptable to treat highly capable people (or anyone else) in a systematically unfair or unreasonable fashion, to such a degree that they would consider criminal mischief justifiable. When highly skilled people are treated as potential criminals from the very beginning, it establishes an adversarial relationship in which employer and employee pursue their own best interests, in harmony only where those interests happen to coincide. Loyalty is simply not there, because there is no trust.

What silos do in IT is limit visibility and access to narrow areas. A person may work only on switches, only on routers, only on databases, only on a web site or only on certain servers. Because of their limited visibility, they often have no idea how an entire system functions, how it is supposed to function, or what "normal" looks like outside their narrow space. The result is an inability to identify abnormalities that would be visible only with a wider view. This means threats and breaches will evade detection, because within each silo everything is performing as expected ... until it is not. This increases vulnerability.

Your average criminal is a short-range thinker, and short-range thinkers don't devote the time and effort to become highly skilled experts unless they already intend to exercise that expertise criminally. Highly skilled IT employees are similar to engineers, scientists and doctors. Although such people can be found doing professional work criminally, illegal organ harvesting for example, it is highly unusual for a gainfully employed professional to "go rogue" and abuse his skills for harm if he is not doing so already. Furthermore, especially in IT, going rogue carries the dire consequence of permanent unemployment in the field if caught ... and nearly all get caught. Although IT professionals have college degrees, their true credentials outside of experience are usually certifications from manufacturers (e.g. Cisco and Microsoft), certifying bodies (e.g. CompTIA) or foundations (e.g. The Linux Foundation). In ALL of these cases there is an ethics clause such that knowledge of your having gone rogue will result in the cancellation of those credentials and ineligibility to obtain them again. Likewise, rogue IT personnel are severely prosecuted. Just use any search engine to search for "prosecution logic bomb" (without the quotes) to find that when employees play this game, unlike hackers, they get jail time, fines and more.

So the benefits of widening visibility in many cases exceed the potential downsides. Obviously, I am not advocating that junior server administrators have root-level access to routers. A little common sense is required here. But I am saying that as IT personnel progress in skill, and their loyalty becomes more secure, their visibility and access outside their specific realm should be increased, along with some cross-training. It's not that long ago that your average IT expert ran the whole shebang: network, servers, desktops and you name it. And cybersecurity and virus threats, although pervasively present, were far less successful. This is not a coincidence. Greater visibility and knowledge among a larger proportion of people actually increases security, so long as you aren't running a sweatshop where mistreating people is a sacrament.
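To see concretely what siloed eyes miss, consider a minimal sketch that joins the DBA's view of the world with the network team's view. The file names, column names and the 3-sigma threshold are all hypothetical; the point is that each silo's numbers look normal on their own, and only the ratio between them exposes a slow exfiltration.

```python
# Sketch: an anomaly visible only across silos. The DBA's silo sees normal
# query volume; the network silo sees unremarkable egress totals. Joining
# the two views exposes hours where far too many bytes leave per row served.
# File names, column names and the 3-sigma threshold are all hypothetical.
import csv
import statistics
from collections import defaultdict

def hourly_totals(path, value_field):
    """Sum a numeric CSV column per hour, keyed on an ISO 'timestamp' column."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hour = row["timestamp"][:13]  # e.g. '2024-04-01T02'
            totals[hour] += float(row[value_field])
    return totals

queries = hourly_totals("db_queries.csv", "rows_returned")  # the DBA's view
egress = hourly_totals("db_egress.csv", "bytes_out")        # the network view

# Bytes leaving the database segment per row served: a number no single silo owns.
ratios = {h: egress[h] / queries[h] for h in queries if h in egress and queries[h] > 0}
mean = statistics.mean(ratios.values())
stdev = statistics.pstdev(ratios.values())

for hour, ratio in sorted(ratios.items()):
    if ratio > mean + 3 * stdev:
        print(f"{hour}: {ratio:,.0f} bytes/row; possible exfiltration")
```

Neither team owns that ratio today, and that is precisely the problem.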
But this brings me to the third problem.

Hyperspecialization

Hyperspecialization occurs for a lot of reasons, but one of them is credential creep. We see this broadly in the way a high school diploma, which used to be all somebody needed for most professions, has been displaced by increasingly advanced post-secondary degrees. Broadly, this has been necessitated by social promotion: when illiterate people receive high school diplomas, degrees costing tens (or hundreds) of thousands of dollars must assure employers that a prospective employee can at least read as well as a sixth grader could 100 years ago. But as bachelor's degrees have become the new high school diploma, and standards have dropped in order to avoid disparate ethnic impacts, functional illiteracy has become fairly common among college graduates as well, so the next stop is advanced degrees, and so on.

But in IT this is driven more by HR ignorance and bureaucratic insanity than any need to assure literacy. You have to be highly literate to pass even the most basic IT certification tests, and substantive IT certifications are readily accepted for college transfer credit because they are more rigorous than most college classes. So literacy is not the issue. Rather, the desire is for higher demonstrable competence, which is understandable, but it is generally self-defeating for the reasons I'll describe.

I will use the Cisco certifications in networking as an example. These are recognized worldwide for their rigor. When I last looked, the Cisco CCNA, their base certification, was accepted by some colleges as equivalent to six upper-level and six lower-level credits. That is the equivalent of an entire year of study, and it is not unusual for even bright people already in the business to have to invest between 500 and 1,000 hours of study to be sure of passing the test. For a person already employed, that means studying 20 hours a week for six months to a year: basically, all of their waking hours after they get home from work. No small task. Most critically, it leaves no room for learning anything else.

A person with a CCNA has a solid baseline in all the required aspects of networking: switching, routing, routing protocols and security. However, they do not know any of these in great depth. For example, a person whose knowledge is limited to what is demonstrated on the CCNA exam can set up OSPF, but cannot competently configure BGP. One way to make sure someone can competently configure BGP is to require a higher-level certification, such as the CCNP or CCIE. But achieving these, again, requires laser focus for an extended period that excludes the acquisition of any other skills, and it will furthermore include a lot of material unrelated to BGP and the adjunct skills needed to make it work right. This creates a sort of tunnel vision in which the professional has no choice but to specialize so narrowly that there is no time left to gain knowledge outside the specialty. This, then, plays right into the silo system as well, leaving a guy who knows networking but can't troubleshoot an issue anywhere else.
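To make the OSPF-versus-BGP gap concrete, here is a minimal sketch using the netmiko automation library (an assumption for illustration; the device address, credentials, AS numbers and neighbor parameters are all hypothetical placeholders). Advertising a network into OSPF takes two lines of configuration; a bare-minimum single-peer BGP session takes several, before any of the policy a production deployment requires.

```python
# Sketch: the gap between "can set up OSPF" and "can run BGP", pushed with
# the netmiko library. Device details, AS numbers and addresses are
# placeholders, and a real BGP deployment would also need prefix lists,
# route maps and filtering policy before a provider would turn the session up.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",      # hypothetical lab router
    "username": "admin",
    "password": "example-only",
}

ospf_config = [  # CCNA territory: advertise the internal networks, done
    "router ospf 1",
    "network 10.0.0.0 0.255.255.255 area 0",
]

bgp_config = [  # the bare minimum for a single external peer
    "router bgp 65001",
    "neighbor 203.0.113.1 remote-as 65002",
    "neighbor 203.0.113.1 description upstream-isp",
    "address-family ipv4",
    "neighbor 203.0.113.1 activate",
    "network 198.51.100.0 mask 255.255.255.0",
    "exit-address-family",
]

with ConnectHandler(**device) as conn:
    conn.send_config_set(ospf_config)
    conn.send_config_set(bgp_config)
```

The first list is CCNA material. The second list never really ends, and mastering it is exactly the multi-hundred-hour detour described above.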
The fourth problem is the opposite of hyperspecialization, and you can see it in many IT job postings.

Mile Wide, Inch Deep

Employers often err in the other direction: wanting to economize, they ask for expertise in a dozen different things. What happens is that, over time, companies adopt a variety of operating systems, internal software and networking systems that are all layered together. Often they are looking for someone who can step right in and run all of it, but in reality the only person who could do that is someone who already worked there and helped implement those systems. People should have areas of truly deep knowledge (ideally more than one) upon which they can build, with other matters as an adjunct. But this habit of employers encourages professionals to pursue knowledge that is a mile wide and an inch deep, which keeps them from the depth needed to truly understand when something unusual is occurring. What needs to happen is that a highly competent person, at that company, gets progressively trained over a period of years in areas outside his core competency, so that his knowledge is intimate, organic and contextual. This is someone who will have an understanding of what is normal, what is abnormal and, perhaps with a bit of help, where the holes are. Instead, because companies are too cheap to apprentice people and help them with career development, they replace the person just described with someone who has administrative knowledge but no contextual knowledge, and security holes develop over the few years it takes him to truly come up to speed.

Inheriting the Wheel

A modern programming paradigm called "Object-Oriented Programming" embodies an idea called code re-use. The idea is that a programmer need not reinvent the wheel each time one is needed, but merely use a wheel that somebody else created. This manifests in object-oriented programming, but also in the use of pre-made libraries for everything from web platforms to database and system access. Programmers treat these as "black boxes," simply using them via function calls with little thought to what is inside. This saves time, which means fewer programmers are needed to accomplish a given task, which saves money.

But no doubt you already see the problem: when you blindly use someone else's code, something nasty can be placed there, and you'll never know. Many libraries used as a matter of routine are actually created and maintained by unpaid volunteers who don't necessarily have the time and resources to dedicate to security. A major example was a Node.js library called "event-stream" that was modified to enable the hacking of people's bitcoin wallets via websites. It was depended upon so extensively that it was used by 1,600 other libraries and an estimated one million websites. Modern software development, with its focus on economic efficiency first and foremost, does not allocate time for programmers to examine the wheels they inherit. They just inherit them and plow forward to meet their performance metrics. This means malicious code can find its way into legitimate software ranging from desktops to servers to web pages, and it can be years before anyone knows.

This also happens with a great deal of the core systems software that runs the Internet. For example, people don't write their own DNS software; they deploy ISC BIND. This is good software, but right now it has over 600 open issues, with only 7 people doing 90% of the work on literally hundreds of source files. Most security issues in software are not a result of malice, but rather of a simple bug that allows a string to be written beyond the end of a variable's allocated space, or a pointer to be overwritten. The infamous SolarWinds hack had the reach it did because their network management software was compromised, and the thousands of companies and government agencies that used it were thereby compromised as well.

But all of this comes down, really, to economizing by hiring as few programmers as possible. The pursuit of greatest economic efficiency ultimately means building an interdependent house of cards, where a single piece of open source code maintained by a volunteer in his spare time can be compromised, and that compromise will ultimately find its way across the globe, allowing hackers to do whatever they wish.
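Nobody is going to read every line of every inherited wheel, but the blind spot can at least be fenced. Here is a minimal sketch, in Python for illustration, that compares installed packages against an allowlist of dependencies a human has actually reviewed; the allowlist file name and format are hypothetical, and a real pipeline would pin cryptographic hashes as well (pip's --require-hashes mode, for instance).

```python
# Sketch: a crude supply-chain tripwire. Compare every installed package
# against an allowlist of dependencies a human has actually reviewed, so a
# package pulled in silently (the event-stream pattern) at least gets noticed.
# The allowlist file name and format are hypothetical; a real pipeline would
# also pin cryptographic hashes (e.g. pip's --require-hashes mode).
from importlib import metadata

def load_allowlist(path="reviewed-packages.txt"):
    """Each non-comment line: 'name==version' for a reviewed dependency."""
    allowed = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                name, _, version = line.partition("==")
                allowed[name.lower()] = version
    return allowed

allowed = load_allowlist()
for dist in metadata.distributions():
    name = dist.metadata["Name"].lower()
    if name not in allowed:
        print(f"UNREVIEWED: {name}=={dist.version}")
    elif allowed[name] != dist.version:
        print(f"VERSION DRIFT: {name} reviewed at {allowed[name]}, installed {dist.version}")
```

It is not a substitute for reading the code, but it converts "we inherit blindly" into "we at least know what we inherited," which is more than most shops can say.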
The Security Guys

Computer security guys run the gamut and are generally very bright people. It takes serious time, effort and dedication to earn credentials as, for example, a pen tester or an ethical hacker. The result is the same sort of hyper-specialized tunnel vision described above. This credentialization also comes with a standardizing mindset that often can't see the forest for the trees, focusing on standards, best practices and so forth to the exclusion of understanding how things really work in depth. This makes security guys effective at the basics, such as establishing password complexity requirements or two-factor authentication, but otherwise largely annoying and ineffective.

In corporate environments, security guys effectively have authority over all other IT personnel, even though they don't necessarily understand what those personnel are doing. The result is requirements that make no sense or are unnecessary, and that often end up compromising security themselves. Somewhere out there is a Linux sysadmin reading this and nodding his head at all the stupid things he has done because Security dictated them, while Security overlooked actual problems it did not understand. Even worse, the Security guys are excellent at forms, buzzwords and procedures, and in the process create environments in which they are hated by nearly all employees. They come across like classic holier-than-thou leftists and grate on people at a visceral level, such that people rebel, and that rebellion creates security flaws. Of course, the rebellion would often never happen if security had a greater understanding of business needs. But that goes back to the silo.

The emphasis on "best practices" and so forth in security comes at the expense of creativity, and is basically a way to avoid being fired. As long as security personnel can show that they have dotted their Is and crossed their Ts, even if something bad happens, they don't get fired. But what we REALLY need in security, outside the basics, is creative and thoughtful people who live outside the silos and can work with other IT experts to fully understand what is being done, why it is done that way, and how to improve its security. But if there is one thing that gives corporate HR departments nightmares, it is the idea of a creative and thoughtful employee who can't be replaced as easily as a Tinkertoy, because his procedures are as dynamic as the threats they mitigate.

Bigger Picture

Being a competent programmer, systems administrator or network engineer requires a high degree of both intelligence and persistence. Fewer than 2% of people can do that kind of work, and they are the same 2% who might also become engineers, doctors or scientists. In the interests of efficiency, corporations have sought to replace people with software, and then tried to economize on the programmers who write and maintain it. They have further, through aggregation and cloud computing, tried to economize on administrators and engineers. Because such people should indeed be paid well, corporations have artificially lowered wages by offshoring and by importing as many visa workers as possible. This has made such jobs, in many cases, insufficiently remunerative for our own people to justify paying for college and studying for additional years to enter them. This exacerbates the problem in a downward spiral, until you end up with scandals in which Congress critters literally employ foreign spies for their IT. What could possibly go wrong?
As our reliance on computing infrastructure increases, instead of incentivizing high-IQ people to make more babies, every disincentive is employed, and low-IQ people are imported by the millions from the third world, with marginally high-IQ people from Asia supplementing them. Even with the savings gained from such computing infrastructure, every attempt is made to economize on programmers by inheriting almost everything, often relying ultimately on insecure software from unpaid volunteers. A great many core projects upon which the Internet relies are open source software maintained mostly by a handful of unpaid people. These projects are poorly documented, and deploying them securely is an art form acquired over time.

In the interests of security, IT often works in silos or hyper-specializes to such a degree that nobody has the cross-functional understanding to recognize when something is wrong, or to anticipate what might go wrong. When companies don't do that, they go too far in the other direction, employing people with almost no mastery of anything just to check buzzword boxes. Security personnel operate in their own safe-space bubble, far too often concentrating on best practices with zero creativity and even less understanding of systems, so that their dictates create more problems than they fix.

And what it all comes down to is that these problems will only get worse. Outside of a few extremely well-funded and hyper-competent organizations that won't skimp on hiring as many people as they need, and maybe even some surplus, the drive for economic efficiency, combined with visas, offshoring and outsourcing, means we lack the people necessary to maintain what is being built, and the shortfall will deepen until the entire system falls.

Our computing and Internet infrastructure is in some respects like our other public infrastructure. In the places people often visit and see, it looks new and slick. But when you look underneath the bridges, you see rebar emerging from deteriorating concrete, and there is no budget to fix it, because doing so won't buy as many votes as welfare will.