Unless you are living under an iceberg in the middle of Nunavut, you’ve probably heard or seen the popular buzz phrase: “Data is the new oil”.

Lately, every time I hear it, I can’t help but ask [read: overthink]: if that’s true, what does it mean for everything downstream that depends on data? Is data, by itself, as a stand-alone, enough to create real, sustainable value?

Instead, I think about it this way: if data is the crude oil, then surely information is the refined fuel, and knowledge is the vehicle (or engine) that moves us forward… driving innovation, collaboration, and progress (hopefully with a bit of laughter and conversation along the way)!

However, just like any vehicle, the journey can only go as far as how well we protect and maintain what powers it.

In today’s digital era, are organizations really treating data, information, and knowledge as the mission-critical assets they are?

For the last two semesters, I’ve had the privilege of studying data, information & knowledge at Columbia University. Over this time, a distinct picture and a real depth of understanding have taken shape. These terms are no longer abstract concepts in everyday jargon but real, strategic resources that need active stewardship. And, as AI systems take a more central role in business processes, the stakes get higher. Today, strong data governance and protection aren’t just smart, they’re non-negotiable. They’re what keep data trustworthy, information accurate, and knowledge usable in ways that are both ethical and impactful.

All of this directly influences how effective and fair AI models are, and ultimately, whether they deliver long-term value. So how do we protect these core assets while still pushing ahead with AI-driven growth?

Well, it starts with understanding the subtle, often underappreciated differences between data, information, and knowledge, and treating that nuance with the care it deserves. Building a solid data governance framework is key, especially for staying compliant… but I believe that’s only half the equation. Pair it with smart knowledge management strategies, and you’ve got a foundation for AI systems that don’t just meet regulatory and ethical standards; they actually help you hit your business goals!

In this article, I’ll explore the critical differences between data, information, and knowledge, and why they matter more than ever in building a future-ready AI strategy. I’ll look at how these layers interact, and how strong governance and well-designed knowledge management frameworks can amplify AI’s ability to protect and harness these strategic assets.

Defining the Assets: Distinguishing Data, Information, and Knowledge

Let’s start at the base: data – this is the raw, unprocessed stuff – numbers, text, images, audio, sensor readings. Essentially, it is isolated facts without structure or context. Organizations collect mountains of it every day from various sources. On its own, data has little strategic value. It’s just noise until it’s organized and analyzed with intention.

That’s where information comes in. When data is processed, cleaned, structured, and interpreted, it becomes information. It answers the “who,” “what,” “where,” and “when”, adding context and relevance. Information gives organizations the clarity needed to spot trends, evaluate performance, and make informed decisions.
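To make the distinction concrete, here’s a minimal Python sketch of that transformation; the temperature readings and the server-room scenario are invented purely for illustration:

    # Raw data: isolated readings with no structure or context.
    readings = [22.4, 22.9, 31.7, 23.1, 22.8]

    # Information: the same data, processed and given context
    # (what was measured, what it shows, and why it matters).
    average = sum(readings) / len(readings)
    peak = max(readings)
    summary = (
        f"Server room temperature averaged {average:.1f} C today; "
        f"a peak of {peak:.1f} C suggests a cooling issue worth investigating."
    )
    print(summary)

The list of numbers is data; the contextualized summary, which tells us the “what,” “where,” and “when,” is information.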

A great example of this transformation in action comes from NASA’s Apollo missions. As highlighted in Hoffman, Kohut, and Prusak’s The Smart Mission: NASA’s Lessons for Managing Knowledge, People, and Projects (MIT Press, 2023), NASA had to manage and make sense of massive streams of telemetry data in real time. Their ability to convert raw data into actionable information (insights that could be immediately understood and acted upon) was essential to mission success. Without that structured layer of information, decision-making would have been delayed, or worse, misinformed.

Here’s the kicker: information still isn’t the endgame.

Knowledge, the application of information in context, is where true strategic value is unlocked. It’s the “how” and “why” behind organizational decisions. It enables foresight, innovation, and learning. And this is where many organizations hit a wall. The challenge isn’t finding information; it’s knowing how to use it effectively.

Jacobson and Prusak nailed this in “The Cost of Knowledge,” Harvard Business Review (2006). They pointed out that while companies throw money at data lakes and search tools, they often ignore the harder problem: helping employees interpret, internalize, and apply what they find. Without a system that supports the conversion of information into knowledge, AI strategies will fall flat.

Data & Information Governance: Building AI on a Solid Foundation

In the current regulatory climate, where AI laws and regulations are quickly gaining momentum (think the EU’s AI Act, Canada’s Artificial Intelligence and Data Act (AIDA), and China’s administrative AI measures and draft AI law – let’s leave the US out for now), there’s no room for improvisation. Any AI strategy worth implementing must be rooted in compliance from inception.

It starts with the basics: privacy and data protection laws (*surprise, surprise*). Regulations like the General Data Protection Regulation (GDPR) in Europe define strict rules around how personal data is collected, used, stored, and shared. They demand transparency, informed consent, and user rights like data access and erasure. Fall short, and the consequences are real and costly: fines, reputational damage, and lost trust.

But this is just the floor, not the ceiling. Stay with me!

Emerging AI-specific regulations are now pushing beyond privacy to demand explainability, accountability, and fairness in AI systems. That means that as companies begin to develop AI strategies, they cannot just tack on compliance as an afterthought. These requirements must be built into your data governance frameworks by design and by default, from data collection all the way to model deployment and monitoring.

A robust data and information governance framework provides the infrastructure needed to manage assets across the AI lifecycle. This includes:

  • Data classification and role-based access controls to ensure only the right people have access to sensitive information;
  • Metadata management to track data provenance, lineage, and context;
  • Adequate security protocols including encryption to protect data in transit and at rest; and
  • Bias detection and mitigation tools embedded into the pipeline to uphold fairness and compliance.

The goal is to be proactive, ensuring end-to-end data and information protection throughout the entire AI system lifecycle.
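To make the first two bullets above tangible, here’s a minimal Python sketch of classification-aware, role-based access control. The labels, roles, and clearances are hypothetical, and a real implementation would live inside your identity and access management stack rather than a script:

    # Classification levels, ordered from least to most sensitive (hypothetical).
    CLASSIFICATIONS = ["public", "internal", "confidential", "restricted"]

    # Each role's maximum clearance (hypothetical roles).
    ROLE_CLEARANCE = {
        "analyst": "internal",
        "data_steward": "confidential",
        "privacy_officer": "restricted",
    }

    def can_access(role: str, data_classification: str) -> bool:
        """Allow access only if the role's clearance meets or exceeds the
        dataset's classification level; unknown roles default to 'public'."""
        clearance = ROLE_CLEARANCE.get(role, "public")
        return CLASSIFICATIONS.index(clearance) >= CLASSIFICATIONS.index(data_classification)

    print(can_access("analyst", "confidential"))        # False: access denied
    print(can_access("privacy_officer", "restricted"))  # True: access granted

The point isn’t the dozen lines of code; it’s that classification and access decisions become explicit, testable, and auditable rather than ad hoc.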

Too often, organizations view compliance as a one-time checklist; however, true governance is continuous. It’s about enabling accountability, building trust, and ensuring that the data feeding AI systems remains reliable, traceable, and protected. In an era where regulation is rapidly evolving, I’m sorry to say compliance is not just a legal necessity; it’s a strategic differentiator [Please, for the love of God, make friends with your legal & compliance teams].

Knowledge Management as a Strategic Enabler

Building on this, AI can elevate how organizations leverage data and information assets to create, share, and apply knowledge; but only if it’s embedded within a broader knowledge management (KM) strategy. As Davenport and Prusak emphasized decades ago in Working Knowledge: How Organizations Manage What They Know (Harvard Business Press, 1998), it’s not just about collecting knowledge, it’s about integrating it into workflows, decision-making, and the organizational fabric.

This distinction becomes clear when we look at some high-profile corporate failures.

Take the Equifax data breach, for example; months before the breach, a critical vulnerability in Apache Struts was known internally and a patch was available. However, due to poor asset classification and unclear accountability, the vulnerability remained unaddressed.

Why, if this was so easily fixable? This wasn’t just a technical failure; it was simultaneously a failure of data governance and knowledge management, a lack of strategic oversight, and a consequence of operational neglect. The knowledge existed, but there were no mechanisms to escalate, share, or act on it. It’s a classic example of how fragmented responsibility and poor data stewardship can undermine even well-resourced organizations, despite the internal knowledge being available.

A closer look at most data breaches will show that the failures weren’t just technical or a matter of data governance; they stemmed from a lack of cross-functional action, ownership, and proper knowledge management. Valuable internal knowledge existed, but there were no effective mechanisms to capture it, share it, or act on it in time. These aren’t just cautionary tales; they’re reminders of what happens when knowledge is siloed and feedback loops are broken.

To avoid these breakdowns, organizations must treat knowledge management as a core part of their data and information governance, and of their AI strategy. That means designing systems (and cultures) that not only protect data and information, but also help knowledge flow.

Key KM practices that align with AI strategies include:

  • Codifying expert insights into training datasets to improve model accuracy and contextual relevance
  • Building knowledge graphs that map relationships across teams, domains, and data sources
  • Using intelligent search and recommendation engines to connect employees with the right expertise
  • Creating space for tacit knowledge sharing, through intentional workspace design, collaborative rituals, mentorship programs, and cross-functional learning events

Tacit knowledge (i.e., what people know but may not formalize) is often the most strategic and the most underleveraged. AI can’t access it unless organizations build cultures and environments that encourage people to capture it, speak up, reflect, and collaborate. Proper knowledge management isn’t just infrastructure – it’s a mindset. (And as Ed Hoffman states: “People! People! People!”)

Without it, AI tools risk becoming impressive but disconnected—technically powerful, yet strategically shallow.

Putting It All Together: A Smarter Foundation for AI

As artificial intelligence becomes embedded in the core of how organizations operate, success won’t come from tech alone. It will come from how well leaders steward their data, information, and knowledge assets. Each plays a distinct role in shaping how AI is trained, deployed, and adapted over time, and each requires its own systems of stewardship.

  • Data must be accurate, secure, and governed from the start.
  • Information must be structured, contextual, and actionable.
  • Knowledge must be shared, applied, and continuously evolved to keep pace with change.

To fully realize the benefits of AI while minimizing risk, organizations must invest in:

  • Robust data governance frameworks that manage the entire AI data lifecycle
  • End-to-end regulatory compliance processes embedded by design and default
  • Comprehensive knowledge management strategies that turn insights into action

Together, these mechanisms don’t just protect critical assets; they amplify organizational intelligence, enabling continuous learning, strategic agility, and long-term innovation.

Call me optimistic – but I don’t believe AI will ever replace human insight. However, with the right foundation, it will become one of its most powerful enablers. The challenge is not just to build AI systems, but to build systems that are ready for AI… systems that connect data to decisions, insights to impact, and knowledge to the future.


Artificial Intelligence (AI) is revolutionizing industries across the globe, and the legal profession is no exception. With promises of unprecedented efficiency, improved access to justice, and cost-effectiveness, AI tools are quickly becoming integral to law practices. However, with great power comes great responsibility. The growing use of AI in legal work brings not only exciting potential but also significant ethical challenges that need careful governance.

The Double-Edged Sword of AI in Law

AI offers powerful tools to enhance legal practices, but it also introduces risks that could undermine fairness and justice. On one hand, AI can automate mundane tasks like document review and legal research, allowing lawyers to focus on more complex and strategic work. It can also level the playing field for smaller law firms by providing access to advanced tools that were once reserved for large, well-funded firms. For clients, AI has the potential to democratize legal services, making high-quality legal assistance more affordable and accessible.

However, these advantages come with considerable ethical concerns, such as bias in AI algorithms, data privacy issues, and a lack of transparency in AI decision-making processes. These challenges can impact the integrity of legal work, client trust, and the overall credibility of the justice system.

Key Ethical Concerns in AI Use in Legal Practice

  1. Bias and Fairness: One of the most pressing risks in AI deployment is bias. AI systems learn from historical data, which may contain inherent biases. These biases can manifest in the form of discriminatory outcomes, such as suggesting harsher sentences based on biased historical legal data. Legal practitioners must be vigilant about the potential for AI to perpetuate systemic injustices, particularly when using predictive analytics in cases involving sentencing or employment discrimination.
  2. The Black Box Problem: Some AI systems, particularly those using deep learning, operate as “black boxes,” meaning they provide results without transparency into the reasoning behind those conclusions. This lack of clarity is problematic in law, where accountability and justification are paramount. Lawyers must ensure that AI-generated outputs are explainable, particularly when used in critical legal decisions.
  3. Data Security and Confidentiality: Legal professionals handle sensitive, confidential information. The use of AI tools in legal workflows raises concerns about data security and privacy breaches. Legal firms must ensure that AI tools comply with data protection regulations and that robust safeguards are in place to protect client information from unauthorized access.
  4. AI Hallucinations and Inaccuracy: AI systems can generate plausible yet incorrect information, known as “hallucinations.” For example, AI may produce fictitious case law citations that appear legitimate but are entirely fabricated. This poses a significant risk in legal practice, where inaccurate or misleading information can have severe consequences for clients, courts, and the integrity of legal proceedings.
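A practical mitigation for that last risk is to verify every AI-suggested citation against an authoritative source before relying on it. Here’s a deliberately naive Python sketch of the idea; the citations and the lookup set are entirely fictional, and a real workflow would query an actual legal research database rather than a hard-coded set:

    # Stand-in for an authoritative case-law index (entirely fictional entries).
    VERIFIED_CITATIONS = {
        "Smith v Jones [2015] Fict 12",
        "Doe v Roe [1994] Fict 212",
    }

    def verify_citations(suggested: list[str]) -> dict[str, bool]:
        """Mark each AI-suggested citation as verified or not.
        Anything unverified must go to a human before it is used."""
        return {citation: citation in VERIFIED_CITATIONS for citation in suggested}

    print(verify_citations([
        "Doe v Roe [1994] Fict 212",      # found in the index
        "Acme v Nobody [2021] Fict 99",   # hallucinated: not found
    ]))

The toy lookup is beside the point; what matters is the workflow it represents: nothing an AI system generates should reach a filing or a client without passing a verification step.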

Building a Governance Framework for AI in Legal Practice

Given these risks, it’s essential for law firms and legal practitioners to develop robust AI governance frameworks that ensure responsible and ethical AI usage. This includes:

  • Guiding Development and Deployment: AI tools used in legal practice must be designed with fairness, transparency, and accountability in mind. Developers should collaborate with legal professionals to create systems that meet ethical standards and regulatory requirements.
  • Responsible Use and Compliance: Lawyers must understand the limitations of AI and use it as an assistant rather than a replacement for human judgment. They must also ensure compliance with data protection laws, maintain client confidentiality, and safeguard sensitive data when using AI tools.
  • Regular Audits and Oversight: Continuous monitoring of AI usage in legal practices is crucial to identifying and mitigating risks. Regular audits help ensure that AI tools are used appropriately and in compliance with ethical and legal standards.
  • Training and Awareness: Legal practitioners should be well-trained in the capabilities and limitations of AI technologies. This ensures that AI tools are used responsibly and ethically, and that lawyers can confidently integrate them into their practice without compromising client trust or professional standards.

The Role of Legal Professionals in AI Governance

While AI tools can enhance the efficiency of legal services, they cannot replace the need for human oversight. Lawyers have a professional duty to ensure the accuracy and reliability of AI-generated outputs. This includes verifying AI-generated information, ensuring transparency in decision-making, and safeguarding the confidentiality of client data.

Legal ethics also play a crucial role in AI adoption. Lawyers are bound by principles such as competence, confidentiality, integrity, and transparency. They must ensure that AI tools align with these ethical standards, both in terms of their development and their use within the practice.

Conclusion: Embracing AI with Responsibility

AI is transforming the legal profession, offering new opportunities for efficiency, accessibility, and innovation. However, to harness its full potential, legal practitioners must carefully navigate the ethical considerations associated with AI use. By establishing clear governance frameworks, ensuring transparency, and adhering to ethical principles, lawyers can leverage AI to enhance their practices while safeguarding justice, fairness, and client trust.

For a deeper exploration of these ethical considerations and best practices in AI governance for law practices, you can read my full paper on AI Governance: Ethical Considerations in the Transformative Use of AI in Your Law Practice here. This paper was initially presented at the 2024 Annual Jamaica Bar Association Flagship Conference.


We’re all familiar with how much of our lives are tracked and quantified in the digital age. From the step counter on your phone to the number of likes on your latest selfie, we are constantly leaving behind a trail of data. And while some of it might seem harmless (after all, who doesn’t love a GPS that guides them around traffic?), the reality is that all this data also raises big privacy concerns—especially when it comes to Artificial Intelligence (AI) in the telecom industry.

AI, Telecom, and Privacy: What’s the Connection?

Telecommunications (the networks that carry our calls, messages, and data) is the backbone of our connected world. Think of telecoms as the pipes that carry data like water, and AI as the smart system that analyzes and optimizes how that data flows. It’s a powerful combo, but it comes with a big question: How do we protect our privacy in a world where AI is positioned to process so much of our personal data?

You might already be familiar with AI as the brains behind chatbots or the villain in sci-fi movies. But AI is much more than that. It’s about machines that can learn, adapt, and make decisions that usually require human intelligence. From facial recognition to self-driving cars, AI is rapidly transforming how we live, work, and communicate. In the telecom sector, AI can analyze massive amounts of data, optimize network performance, and enhance user experiences. But here’s the thing: as AI collects, processes, and analyzes data, it has the potential to uncover deeply personal insights about us—without us even knowing.

The Data Deluge: What’s Being Collected?

Consider how much data is recorded about you every single day: your phone’s GPS tracks your movements, your IP address ties your browsing habits to your network connection, and your voice assistant is always listening for commands. All of this data flows through networks—and with 5G on the horizon, the amount of data moving through these pipes will only grow.

But with all this data being collected, there’s also a big risk. The more devices connected to the network, the more chances there are for data breaches or cyberattacks. Your smart fridge, your security camera, even your car—everything is potentially vulnerable to surveillance or compromise.

The Caribbean’s Privacy Challenge

So, where does the Caribbean stand in all of this? Our legal and regulatory systems for data protection are often fragmented, with each country having its own laws and guidelines. This patchwork approach can make it harder to enforce privacy and security consistently across the region. The EU, on the other hand, is a shining example of regional collaboration. Its General Data Protection Regulation (GDPR) is a robust framework that protects individual privacy while allowing innovation to flourish. The Caribbean could take inspiration from this model—creating a regional approach to AI and data governance that balances the need for privacy with the drive for technological progress.

Privacy Laws: Are They Keeping Up with AI?

Current privacy and data protection laws in the Caribbean often struggle to keep up with the rapid pace of technological change. For example, most laws focus on the collection of personal data—like your name, email, or location. But with the rise of AI, new categories of data are being generated. AI can infer things about us from patterns in our behavior, such as predicting our health, financial status, or even our likelihood to commit a crime. This inferred data can be just as revealing—and sometimes more invasive—than data we willingly share.

Take the (unverified) story of Target and the annoyed father. The story goes that the American retailer once used AI to predict when a customer was pregnant based on her shopping habits, and then sent highly personalized coupons to customers at specific stages of pregnancy. While this targeting was highly effective for Target, it raised privacy concerns when a father received maternity coupons for his teenage daughter—before he even knew she was pregnant. This story highlights two key issues:

  1. There are new types of data (like inferred data) that aren’t always covered by existing privacy laws.
  2. Inferences made by AI—whether accurate or not—can have serious implications for privacy.
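To see how easily inferred data arises, here’s a toy Python sketch; the products and the rule are invented and bear no resemblance to Target’s actual (and far more sophisticated) model:

    # Toy rule-based inference: purchases in, sensitive guess out (illustrative only).
    SIGNALS = {"unscented lotion", "prenatal vitamins", "cotton balls"}

    def pregnancy_signal_score(purchases: set[str]) -> float:
        """Fraction of 'signal' products present in a shopper's basket.
        The shopper disclosed nothing; the sensitive attribute is inferred."""
        return len(purchases & SIGNALS) / len(SIGNALS)

    basket = {"unscented lotion", "prenatal vitamins", "bread"}
    print(round(pregnancy_signal_score(basket), 2))  # 0.67

A few lines of code plus some purchase history create a new, highly sensitive data point that collection-focused privacy laws never anticipated.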

A New Approach to Data Classification

So, how do we fix this? One solution is to rethink how we classify and protect data. Right now, privacy laws focus heavily on identifiable information—data that can directly identify an individual, like your name or address. But with AI, data can be generated or inferred about you without your direct input. This includes things like your health status, your interests, or even your predicted behavior. A more comprehensive data governance framework for the Caribbean should take a risk-based approach and focus on four key areas:

  1. Source of Data: Where does the data come from? Is it directly provided by the individual, collected indirectly, or inferred through algorithms?
  2. Sensitivity of Data: How sensitive is the data? Does it pose a risk to the individual, others, or society as a whole if it’s exposed or misused?
  3. Intended Use: How is the data being used? Is it for personal, operational, or critical uses? Data used for healthcare or legal decisions, for example, requires stricter oversight than data used for marketing.
  4. Privacy Rights: Individuals should have control over how their data is used. This includes the right to access, correct, or delete personal data—and challenge inferences made by AI.
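A risk-based approach along these lines can be prototyped very simply. Here’s an illustrative Python sketch with hypothetical ordinal scales; a real framework would be far more granular, and the fourth area (privacy rights) attaches as obligations at every risk level rather than as a score:

    from dataclasses import dataclass

    # Hypothetical ordinal scales: higher value = higher risk.
    SOURCE_RISK = {"provided": 1, "collected": 2, "inferred": 3}
    SENSITIVITY_RISK = {"low": 1, "moderate": 2, "high": 3}
    USE_RISK = {"marketing": 1, "operational": 2, "critical": 3}

    @dataclass
    class DataItem:
        source: str        # "provided", "collected", or "inferred"
        sensitivity: str   # "low", "moderate", or "high"
        intended_use: str  # "marketing", "operational", or "critical"

    def risk_score(item: DataItem) -> int:
        """Sum the three factor scores; higher totals demand stricter oversight."""
        return (SOURCE_RISK[item.source]
                + SENSITIVITY_RISK[item.sensitivity]
                + USE_RISK[item.intended_use])

    # An AI-inferred health attribute used for a critical decision scores highest.
    print(risk_score(DataItem("inferred", "high", "critical")))  # 9

Even a crude score like this makes the conversation concrete: it tells a regulator or a data controller where the strictest controls, and the strongest individual rights, must apply.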

A Balanced Approach to Privacy

It’s clear that AI will continue to shape our lives. The challenge is to ensure that while we embrace these technologies, we also protect individuals’ privacy and rights. A balanced data governance framework—one that considers data’s source, sensitivity, and intended use—will help safeguard privacy while allowing for innovation. As the Caribbean continues to develop its approach to AI and data protection, regional cooperation will be key. A unified framework for AI and privacy can ensure that we protect personal data without stifling the technological growth that promises so much for the region.

To dive deeper into the complexities of AI, telecoms, and privacy in the Caribbean, check out my full paper on Navigating AI in Caribbean Telecom: A Privacy Perspective. This paper was initially presented at the 2024 University of the West Indies Cave Hill Faculty of Law Caribbean Commercial Law Workshop.


On 30th November 2021, the Government of Jamaica, through its publication in the Jamaica Gazette, enacted sections 2, 4, 56, 57, 60, 66, 74 and 77, and the First Schedule, of the Data Protection Act 2020, with an operative date of 1st December 2021. A week later, it was reported via local news outlets that the Governor General had also appointed an Information Commissioner, Ms. Celia Barclay, also with an effective date of 1st December 2021. These developments have the primary effect of:

  1. Establishing the Office of the Information Commissioner with certain powers, duties and responsibilities as conferred under the Act;
  2. Commencing the two year transitional period stipulated in section 76 of the Act; and
  3. Effecting immediate obligations & data standards for data that can’t be processed automatically, or that does not form a part of a structured filing system.

The Office of the Information Commissioner

The sections of the Act brought into operation with the gazette notice primarily apply to the establishment of the role and office of the Information Commissioner. With these enactments, the duties & responsibilities of the Commissioner are now operational. In particular, the Commissioner is to establish procedures and make regulations to give effect to the provisions of the Act, and to create a data sharing code after consultation with industry stakeholders. Additionally, the published notice officially conferred on the Commissioner the duties to prepare reports & guidelines for Parliament; to adhere to regulations for international co-operation; and to maintain confidentiality of information in her role. The newly appointed Information Commissioner, Ms. Celia Barclay, brings to her role a wealth of legal & regulatory experience, with over fourteen years at the bar and over seven years in public service.

Commencement of Transitional Period for Data Controllers

The Act directs controllers to take all necessary measures to ensure compliance with the provisions of the Act and the standards articulated therein for a period of two years after the earliest date of enactment. For this transitional period, no proceedings may be taken against a data controller for any processing done in good faith. Data controllers therefore have until 30th November 2023 to reform their data processing practices to ensure that they comply with the provisions of the Data Protection Act.

Immediately Effective Standards & Obligations

As of the earliest effective date of the Act, being December 1st 2021, any personal data that is held in a way that:

  1. does not allow the data to be processed automatically; or
  2. is not a part of a filing system where the information is structured (either by a reference to individuals or by reference to criteria relating to individuals) in a way that allows specific information relating to a particular individual to be readily accessible;

shall be subject to certain obligations under the Act. In particular, any such data must adhere to the following data standards in accordance with the Act:

  1. The personal data shall be processed fairly and lawfully;
  2. The personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with those purposes;
  3. The personal data shall be adequate, relevant, and limited to what is necessary for the purposes for which they are processed;
  4. The personal data processed for any purpose shall not be kept for longer than is necessary for that purpose;
  5. Appropriate technical and organisational measures shall be taken— (a) against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data; and (b) to ensure that the Commissioner is notified, without any undue delay, of any breach of the data controller’s security measures which affect or may affect any personal data;
  6. The personal data shall not be transferred to a State or territory outside of Jamaica unless that State or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data; and
  7. The personal data shall be processed in accordance with the rights of data subjects conferred under the Act, with the exception of the right to access and the right to request rectification of inaccuracies.

In addition to this, Controllers processing the data falling within this category are required to:

  1. Obtain consent for any direct marketing in accordance with the Act;
  2. Adhere to written requests for the prevention or cessation of processing in accordance with the Act;
  3. Respect the rights conferred on data subjects with regard to automated decision making;
  4. Meet registration requirements with the Information Commissioner; and
  5. Where applicable, appoint a data protection officer.

Notwithstanding this enactment, without the establishment of a formal registration process within the Office of the Information Commissioner, it is unlikely that these provisions will be immediately enforced. Moreover, where a data controller can demonstrate that he has been processing data in good faith during this transitional period, no proceedings may be brought against him under the Act.
