Most enterprise data platform decisions are made by the wrong people.
A data engineering team identifies a capability gap. IT runs a vendor comparison. Finance approves the budget. Legal reviews the contract. And privacy? Privacy shows up at the end, after the architecture has been set and the vendor is already onboarded.
That sequence is how organisations end up spending more money fixing privacy problems than the platform cost in the first place.
This guide covers what a data privacy consulting engagement looks for when evaluating an enterprise data platform before the contract is signed. Not after.
Why Choosing a Data Platform Is a Privacy Decision
Here is the version of events that plays out frequently. An organisation buys a data platform. It works well. Two years later, a regulator sends a request, a customer submits a right-to-access request, or an internal audit flags something. The organisation discovers that the platform cannot produce a full record of where a specific person’s data has been, who has seen it, or how it has been used.
At that point, the options are expensive. You can try to retrofit controls that the platform was not built to support. You can build manual processes around the gaps. Or you can start over.
None of those are good options.
The reason this happens is structural. Data platform evaluations tend to focus on what the platform can do: how fast it processes data, how many sources it connects to, how well it integrates with existing tools. Those are legitimate questions. But they sit alongside a set of questions that rarely get the same attention: what does this platform do with personal data, how does it enforce who can see what, and what happens when someone asks you to delete their information?
Privacy requirements do not disappear after a platform goes live. They get harder to meet the longer the platform runs without the right controls in place.
The Criteria That Matter
Can You See Where Every Piece of Data Has Been?
This is the foundation. If you cannot trace a piece of personal information from the moment it enters your organisation through every system that has touched it, you cannot respond to a regulator asking for that record. You also cannot respond to a customer asking what you know about them.
Ask the vendor: if I want to know everywhere a specific customer’s email address has been used, processed, or copied in this platform, can you show me that?
If the answer involves spreadsheets, manual investigation, or “it depends,” that is a gap. A well-built platform maintains a running record of where data comes from, where it goes, and what happens to it along the way. That record should be exportable and legible to someone who is not a data engineer.
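To make the expectation concrete, here is a minimal sketch of what such a running record might look like: an append-only log of lineage events, queryable by data subject. All names here (`LineageEvent`, `trace`, the field names) are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class LineageEvent:
    """One hop in a data element's journey: where it went and why."""
    subject_id: str    # hypothetical identifier for the person the data relates to
    data_element: str  # e.g. "email_address"
    system: str        # the system that touched it
    action: str        # "ingested", "copied", "transformed", "exported"
    timestamp: str     # ISO 8601

class LineageLog:
    """Append-only record of data movement, queryable by data subject."""

    def __init__(self) -> None:
        self._events: List[LineageEvent] = []

    def record(self, event: LineageEvent) -> None:
        self._events.append(event)

    def trace(self, subject_id: str, data_element: str) -> List[LineageEvent]:
        """Everywhere a specific person's data element has been, in order."""
        return [e for e in self._events
                if e.subject_id == subject_id and e.data_element == data_element]
```

The point of the sketch is the query shape: "show me everywhere this person's email address has been" should be a single lookup, not an investigation.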
Does the Platform Know What Kind of Data It Is Holding?
Not all data carries the same risk. A customer’s purchase history is different from their health information. A business email address is different from a passport number. The platform you use should be able to identify which data falls into sensitive categories, and treat it accordingly.
This matters for two reasons. First, privacy regulations in most markets impose stricter requirements on certain types of personal data. Second, you cannot protect what you cannot find.
Ask the vendor: how does the platform identify and label sensitive data, and what happens automatically when it finds some?
Watch for platforms that handle structured data well but have blind spots on anything else. A lot of sensitive information lives in documents, emails, and free-text fields, not just in clean database tables.
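As a rough illustration of what free-text scanning involves, here is a deliberately simplistic pattern-based classifier. The patterns are assumptions for demonstration only; a production classifier would use far broader rules, validation, and context.

```python
import re

# Hypothetical detection patterns; real classifiers are far more thorough.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "passport_number": re.compile(r"\b\d{9}\b"),  # naive nine-digit match
}

def classify(text: str) -> set:
    """Return the set of sensitive categories detected in a free-text field."""
    return {label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)}
```

Even this toy version makes the evaluation question sharper: does the platform run something like this over documents, emails, and free-text columns, or only over neatly typed database fields?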
Who Can See What, and Is That Actually Enforced?
The principle here is simple: people should only have access to the data they need to do their job, and nothing more. Most organisations agree with this in theory. In practice, access tends to expand over time because restricting it creates friction and nobody is actively reviewing it.
A good platform makes least-privilege access the default, not the exception. It should support fine-grained control, including the ability to restrict access at the field level. A sales analyst might see aggregated revenue data without seeing the names and addresses of every customer behind it.
Ask the vendor: how do we enforce that a user can only see the rows or columns they are permitted to see, and how do we verify that is actually working?
If the answer requires significant custom configuration on your side, that is a cost you need to factor in.
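The field-level restriction described above can be sketched as a policy that maps roles to permitted columns, enforced at query time. The roles and column names are hypothetical; the point is that filtering happens in the platform, not in the analyst's good intentions.

```python
# Hypothetical policy: which columns each role may see.
COLUMN_POLICY = {
    "sales_analyst": {"region", "revenue"},
    "support_agent": {"name", "email", "region"},
}

def enforce(role: str, rows: list) -> list:
    """Return rows stripped down to the columns the role is permitted to see.

    An unknown role gets nothing: least privilege is the default."""
    allowed = COLUMN_POLICY.get(role, set())
    return [{col: val for col, val in row.items() if col in allowed}
            for row in rows]
```

This is the behaviour to verify during a proof of concept: the sales analyst's query returns revenue by region, with the customer names and addresses already gone before the result reaches them.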
Does the Platform Respect Why Data Was Collected?
This one catches organisations off guard. Privacy law in most jurisdictions does not just say you need to protect personal data. It says you can only use it for the purpose it was collected for. Data gathered to process a transaction cannot be repurposed for marketing without consent. Data collected in one geography cannot always be processed in another.
Most platforms do not enforce this automatically: they store data and make it available to anyone with access. (Broader platform shortcomings of this kind are covered in our guide on platform engineering: platform-engineering-when-it-works-when-it-fails.) The question is whether the platform can connect a piece of data to the consent or permission that governs how it can be used, and whether it can block uses that fall outside that permission.
Ask the vendor: how does the platform know why a piece of data was collected, and what prevents someone from using it for a different purpose?
The honest answer from most vendors is that it cannot, and that this sits in a separate system. That is worth knowing before you sign. It means you are taking on the integration problem yourself.
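If you end up building that integration yourself, the core mechanism looks something like the following sketch: bind each value to the purposes consented to at collection, and refuse any other use. The class and function names are illustrative assumptions.

```python
from dataclasses import dataclass

class PurposeViolation(Exception):
    """Raised when data is used outside the purpose it was collected for."""

@dataclass(frozen=True)
class PurposeBoundValue:
    value: str
    purposes: frozenset  # purposes the data subject consented to at collection

def use(record: PurposeBoundValue, purpose: str) -> str:
    """Release the value only for a permitted purpose; block everything else."""
    if purpose not in record.purposes:
        raise PurposeViolation(f"{purpose!r} is not among the consented purposes")
    return record.value
```

The hard part in practice is not this check; it is keeping the purpose metadata attached to the data as it moves between systems, which is exactly the lineage problem from earlier.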
Can Data Cross Borders Without Creating Compliance Problems?
For any organisation operating across multiple countries, this is not a theoretical concern. The rules governing where personal data can be stored and processed are complex and vary significantly by country. Getting this wrong is not just a fine risk. It can interrupt operations.
The vendor question here is not just whether they offer regional deployment options. That is the easy answer. The harder question is whether data leaves the approved region in any form, including for support access, system monitoring, or performance logging.
Ask the vendor: where does data physically reside, and are there any circumstances under which support staff or automated systems access it from outside that region?
That second part is where problems tend to hide.
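One way to surface those hidden problems is to audit access logs against the approved region, treating support and monitoring access on the same terms as user access. A minimal sketch, assuming a log format with a `source_region` field (hypothetical):

```python
APPROVED_REGIONS = {"eu-west-1"}  # hypothetical approved region

def residency_violations(access_log: list) -> list:
    """Entries where data was touched from outside the approved region.

    Vendor support and automated monitoring appear in the same log,
    so they are checked exactly like ordinary user access."""
    return [entry for entry in access_log
            if entry["source_region"] not in APPROVED_REGIONS]
```

Run a check like this against real logs during the proof of concept, not against the vendor's architecture diagram.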
What Happens When Someone Asks to See or Delete Their Data?
Privacy regulations in most major markets give individuals the right to know what data an organisation holds about them, to request a copy of it, and in many cases to ask for it to be deleted. These are not edge cases. Any consumer-facing business should expect to receive these requests routinely.
The platform needs to support this. That means being able to find all data associated with a specific individual across every part of the system, produce it in a portable format, and delete it in a way that actually removes it rather than just marking it inactive.
The challenge is that most organisations have more than one platform. The rights fulfilment problem spans your entire data environment, not just one tool. But the platform you are evaluating should cover its portion cleanly.
Ask the vendor: show me how a deletion request propagates through the platform. How do we verify it is complete?
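What "propagates and verifies" means can be sketched in a few lines: fan the deletion out to every subsystem that may hold the person's data, then run a verification pass that fails loudly if anything remains. The `Store` class here is a stand-in, not any platform's real interface.

```python
class Store:
    """Stand-in for one subsystem that may hold a person's data."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._data = {}

    def put(self, subject_id: str, value: str) -> None:
        self._data[subject_id] = value

    def delete(self, subject_id: str) -> None:
        self._data.pop(subject_id, None)  # hard delete, not a soft "inactive" flag

    def holds(self, subject_id: str) -> bool:
        return subject_id in self._data

def process_deletion(subject_id: str, stores: list) -> list:
    """Fan a deletion request out to every store, then verify completeness.

    Returns the names of stores still holding data; an empty list
    means the request is verifiably complete."""
    for store in stores:
        store.delete(subject_id)
    return [store.name for store in stores if store.holds(subject_id)]
```

The verification pass is the part to insist on in the vendor demo. A platform that deletes without confirming deletion leaves you asserting compliance you cannot prove.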
Is the Vendor Itself a Privacy Risk?
This often goes unexamined. When you put personal data into a vendor’s platform, that vendor becomes a data processor under most privacy frameworks. That creates legal obligations on both sides.
You should require a Data Processing Agreement before the contract is signed. This document defines what the vendor can do with the data, how they will notify you if there is a breach, and what sub-processors they use. Sub-processors are the other companies the vendor relies on to deliver their service. Your data may flow through several of them.
Ask the vendor: provide your standard Data Processing Agreement, your current sub-processor list, and your most recent security certification.
If they cannot produce these documents quickly, that is the answer.
For regulated industries, also verify what certifications they hold. SOC 2 Type II and ISO 27001 are the common benchmarks. They are not a guarantee, but absence of them is a flag.
How to Run a Privacy-Focused Platform Evaluation
The order matters.
Start internally. Before any vendor conversations begin, map where personal data currently lives in your organisation and what privacy obligations attach to it. Without this map, you cannot assess whether a platform closes your gaps or creates new ones.
Build a privacy-specific section into your vendor assessment process. Most procurement teams use a standard security questionnaire. Privacy is a separate discipline with different questions. The two should not be conflated.
Test with your riskiest data. When you reach the proof-of-concept stage, do not test with sample data. Test with your highest-risk data category under realistic conditions. This is the only way to find the gaps that matter.
Get legal eyes on the Data Processing Agreement before the shortlist is finalised. Not after. DPA terms can be a dealbreaker, and discovering that after you have narrowed to one vendor removes your negotiating position.
The right people in the room for this process: your privacy or legal lead, the CISO or equivalent, whoever owns the data that carries the most regulatory risk, and the business sponsor who will live with the consequences of the decision.
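The internal mapping step above can be captured in something as simple as two tables: the controls you have per data category, and the controls your obligations require. The categories and control names below are hypothetical examples, not a compliance checklist.

```python
# Hypothetical inventory: data category -> controls currently in place.
INVENTORY = {
    "customer_pii": {"encryption", "access_review"},
    "health_data": {"encryption"},
}

# Hypothetical obligations: data category -> controls regulation requires.
OBLIGATIONS = {
    "customer_pii": {"encryption", "access_review", "deletion"},
    "health_data": {"encryption", "access_review", "deletion", "residency"},
}

def gap_analysis(inventory: dict, obligations: dict) -> dict:
    """Controls each data category still lacks, category by category."""
    return {category: required - inventory.get(category, set())
            for category, required in obligations.items()
            if required - inventory.get(category, set())}
```

Walking into vendor conversations with this output in hand turns "what can your platform do?" into "can your platform close these specific gaps?", which is a much stronger position.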
Where Organisations Most Commonly Go Wrong
The most common failure is treating privacy as a final sign-off step rather than an input to the decision. By the time privacy reviews the vendor, the team has a preferred choice and a timeline. Raising concerns at that stage tends to get managed rather than resolved.
The second failure is trusting certifications without testing controls. A certification tells you a vendor met a set of requirements at a point in time. It does not tell you whether their controls work in your specific environment, with your data, at your scale.
The third failure is letting the technical team run the evaluation alone. They will find the platform that works best for their use case. That is their job. Privacy is a different use case, and it needs a voice in the room before the recommendation is made.
Retrofitting privacy controls after a platform is in production is not impossible. It is just slow, expensive, and disruptive. The work done before the contract is signed is cheaper than anything done after.
Final Decision
A data platform that cannot satisfy these criteria will create regulatory exposure, operational debt, or both. The exposure may not be visible on day one. It tends to surface when a regulator asks a question you cannot answer, or when a customer asks for their data and you realise you cannot produce it cleanly.
Data privacy consulting at this stage is not about adding risk to the procurement process. It is about making the decision correctly the first time.
The right starting point is an internal gap analysis. Map what you have, identify what regulations apply to it, and go into vendor conversations knowing what you actually need. That context changes every question you ask and every answer you hear.