When considering security features, especially mitigations in the kernel and gcc, those who deploy them can take them for granted. This happens because of generalized assumptions about the power of those mitigations to prevent exploitation. A generalized understanding is one built mostly on theoretical experience with programming and with what the mitigations are for. To really appreciate security, one needs an operational understanding.
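As a concrete example of the mitigations in question, here is a sketch of common gcc/ld hardening flags. The flags are real toolchain options, but their availability and defaults vary by distribution and gcc version, and `server.c` is just a placeholder name:

```shell
# Illustrative gcc hardening flags; availability varies by toolchain.
# -fstack-protector-strong : stack canaries on functions with local buffers
# -D_FORTIFY_SOURCE=2      : checked variants of memcpy/sprintf/etc. (needs -O1 or higher)
# -fPIE -pie               : position-independent executable, so ASLR covers the binary
# -Wl,-z,relro -Wl,-z,now  : GOT made read-only after eager symbol binding
gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 -fPIE -pie \
    -Wl,-z,relro -Wl,-z,now -o server server.c
```

Each of these raises the cost of a specific exploit technique; none of them, alone or together, makes exploitation impossible, which is exactly the gap between the generalized and the operational view.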
An operational understanding is gained mostly from experience testing your own servers. You should pentest your systems yourself as well as you can, and if you work with other pentesters, you want good reporting from them. Overall you want experience in the operational matters of security, distilled into certain principles. Further, you should be able to turn that operational wisdom into further restrictions and mitigations of your own, which is where things like RBAC and SELinux can be extremely useful.
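One hedged sketch of what turning observations into restrictions looks like with SELinux: the standard workflow of reviewing audited denials and building a local policy module from them. The commands (`ausearch`, `audit2allow`, `semodule`) are the real SELinux tooling; `mylocalpolicy` is a made-up module name, and the generated policy should always be reviewed before loading:

```shell
# Review recent AVC denials recorded by the audit subsystem:
ausearch -m avc -ts recent

# Generate a loadable policy module from those denials.
# Inspect the generated mylocalpolicy.te before trusting it --
# blindly allowing everything the log shows defeats the point.
ausearch -m avc -ts recent | audit2allow -M mylocalpolicy

# Install the compiled module:
semodule -i mylocalpolicy.pp
```

The operational judgment is in the review step: deciding which denials reflect legitimate behavior to allow and which reflect behavior that should stay blocked.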
An operational understanding comes with time, and to get it you might have to take the risk of running systems. This isn't a good idea when you are only at that level of generalized understanding, with little experience in operational matters. It goes back to my views on opinion, bullshit, and fact: your generalized understanding sits in the bullshit area, because it's based on what you imagine from theory and non-production wisdom. You need to see the mechanics, the cause and effect, the logic of software to get into the operational side of things, and you need a production system with dumb customers.
By "operational" we should consider the semantics and meaning a bit. In the service of productive activity with economic or social gains, a system operator engaged in running such a system has a good idea of what to expect, because of technical knowledge of the system. Operational understanding of systems basically means knowing how a system operates, the way a system operator does. Such a person understands systems without much guesswork, and is highly experienced.