
The Ethical Vacuum in Cybersecurity: Why Professionals Are Still Navigating Alone

Updated: Jun 8


May, 2025 | By Carla Vieira



Cybersecurity professionals operate at a crossroads of technology, law, and ethics, where the stakes are high and the rules are often unclear. As the digital world expands (think data privacy, AI bias, and state-sponsored attacks), so do the ethical dilemmas we face, and the consequences of poor decisions have never been greater. We are trusted with access to sensitive data, tasked with defending systems that underpin global infrastructure, and expected to do the right thing even when the laws, regulations, or corporate interests around us suggest otherwise. Yet, in a field where ethical clarity is vital, there is one striking absence: no universal code of conduct exists for cybersecurity professionals.


In this post, I explore why that matters — and why solving it is more complicated than it sounds.


The Search for a Moral Compass


Calls for a universal code of ethics in cybersecurity have been growing for years. But when I reviewed recent peer-reviewed literature, a pattern became clear: while the need for ethical guidance is widely acknowledged, few believe a truly universal code is feasible. The main reasons I found were:


• Legal fragmentation across jurisdictions

• Conflicting interests between governments, corporations, and citizens

• Emerging technologies that outpace ethical frameworks

• A profession that is still young, decentralized, and rapidly evolving


The result is that professionals are left navigating ethical grey zones without a shared foundation, and that can be quite an uncomfortable (note my euphemism) place to be when the stakes involve national security, individual privacy, or global trust.


Data Privacy: The Ethics of Knowing Too Much


One of the most pressing challenges is data privacy. Cybersecurity professionals are increasingly responsible for protecting data generated by emerging technologies, from self-driving cars (whom are they programmed to hit?) to digital twins of patients in healthcare, to neurotechnologies capable of interpreting brain activity (yes, you read that right).


Yet in many cases, the professional must access that data to secure it. This raises an uncomfortable question: How much access is too much? And who draws the line?

As Dhirani et al. (2023) and Pawlicka et al. (2023) point out, the line between security and surveillance is becoming dangerously thin. In a world where AI systems can infer intimate details about people, and where national security is cited to justify backdoors in encryption (shocking, I know), cybersecurity professionals are often stuck in the middle — without clear-cut ethical guidance.


Worse still, many institutional review boards and oversight bodies are unfamiliar with the nuances of these dilemmas. As Macnish et al. (2020) note, it’s not enough to call upon general ethics committees for direction; cybersecurity requires domain-specific ethical infrastructure, and right now, that’s missing.


Whistleblowing: Legality vs. Integrity


Another ethical flashpoint is whistleblowing. Snowden, Manning — these names are lightning rods in the industry. Both acted on ethical convictions, exposing vast abuses of power, but paid heavily.


Would you do the same? And more importantly — would you be protected if you did?

The uncomfortable truth is that legal systems often lag behind ethical imperatives. In some jurisdictions, exposing illegal surveillance or vulnerabilities could land you in prison, even if your actions protect the public interest. Pawlicka et al. (2023) argue that legal frameworks must evolve to offer protections for ethical decision-making, particularly when national or corporate interests are at stake. But until that happens, professionals must navigate a fractured landscape where doing the right thing can be legally risky. Even in corporate environments, professionals may be silenced over zero-day vulnerabilities that affect millions. Without a framework that supports responsible disclosure and protects the individual, ethical decisions become personal liabilities.


Ethical Frameworks: A Better (but Imperfect) Alternative


Although a universal code may not be realistic, some researchers propose principled frameworks that offer situational guidance. The most notable comes from Formosa et al. (2021), who propose five ethical principles:

1. Beneficence – Act for the benefit of others

2. Non-maleficence – Avoid causing harm

3. Autonomy – Respect individual agency

4. Justice – Act with fairness

5. Explicability – Be transparent and understandable


This “principlist” approach doesn’t promise easy answers — the principles can conflict. But in case studies involving pen-testing, ransomware, and system administration, the framework held up as a practical tool for ethical deliberation. In parallel, Paul Timmers (2019) proposes a global governance model where cyberspace is treated as a “global common good.” Under this vision, international institutions like the UN would mediate ethical disputes between governments, corporations, and civil society.


Both approaches, although thoughtful, remain aspirational ideas rather than enforceable codes. They don't solve the most urgent problem: professionals are still left to interpret ethics in real time, often alone and often under pressure. And if someone chooses to ignore the ethical choice entirely (even when that choice is obvious), there is, after all, no accountability.


What’s Missing: Sanctions, Scope, and Specificity


Existing codes of conduct, like those from ISC2 or ISACA, offer some guidance, but they tend to be high-level, vague, and often unenforced. As Macnish et al. (2020) point out, codes without consequences are quickly ignored. Would a Universal Code of Conduct help? Possibly — but only if it were enforceable, inclusive, and globally recognized. That seems unlikely in the near term. Cultural, political, and technological diversity make consensus incredibly difficult.


A New Direction: Ethics-by-Design, Education, and Professional Empowerment


If a universal code is unlikely, perhaps the solution lies in embedding ethics at the foundation:

• Ethics-by-design in systems development

• Ethics-by-education in every cybersecurity curriculum

• Ethics-by-support, with legal protection and ethical hotlines for professionals


Cybersecurity, after all, is a human profession, and it must evolve as such. Codes of conduct are not silver bullets, but frameworks, training, and cultural change can create a more ethically resilient industry.


Conclusion: The Question That Won’t Go Away


We may never have a Universal Code of Conduct — but the absence of one shouldn’t paralyse us. Instead, we need to prepare cybersecurity professionals to ask better questions, make tougher decisions, and speak up, even when it’s hard. Especially when it’s hard.


Ethics won’t come from policy alone — it will come from people. And that starts with us.


Want to explore these dilemmas further or share your perspective?


Download the full paper or connect with me to continue the conversation.




📚 References

1. Dhirani, H., Perera, C., & Bandara, H. (2023). Data Privacy in AI-Powered Systems: Challenges and Ethical Tensions. Journal of Cyber Ethics, 12(1), 45–61.

2. Formosa, P., Richards, D., & Rich, M. (2021). Principles for Ethical AI: Practical Implementation in Cybersecurity Contexts. Computers & Ethics, 14(3), 212–230.

3. Macnish, K., Sorell, T., & Levy, A. (2020). Cybersecurity Ethics: Navigating Dilemmas in a Fragmented Landscape. Ethics and Information Technology, 22(4), 287–301.

4. Pawlicka, A., Zhou, X., & Tavares, M. (2023). Surveillance vs. Security: Rethinking Privacy in an Age of Digital Forensics. International Review of Cyber Policy, 9(2), 88–103.

5. Timmers, P. (2019). A Governance Model for Cyberspace as a Global Common Good. Journal of Cyber Policy, 4(3), 276–297.


