
Security as a Core Feature of Scientific AI

By Ambrose Sterr

When you choose an AI service to support your lab, you are deciding where some of your most valuable information will live, how it will be handled, and who might be able to see it. A single prompt to an insecure service can expose that information, so security is paramount. The decision touches research timelines, internal assay documentation, regulatory exposure, and the day-to-day trust your teams have in the tools they use. I want to explain why standards-based security matters in the AI services you choose, what "comprehensive" actually means in practice, and how we've built Potato to earn your trust.

AI changes the shape of risk

Most software stores data you already have. AI systems do something different: they ingest raw material, transform it, and generate new content. In many companies, those prompts and outputs quickly become some of the most sensitive material in the building. Think about a prompt that includes a batch-release deviation summary, a clinical narrative, a molecule design rationale, or a half-formed idea for a new indication. Even if the original data is protected elsewhere, the AI session becomes another home for it.

That is why a narrow or add-on security model is not enough for AI. You need a service that assumes sensitive data will show up in prompts and outputs, and protects it from the moment it enters the system until it is deleted. Otherwise you end up relying on users to self-censor in the tool. That is not realistic: in practice, self-censorship either doesn't happen or drives people to avoid the tool entirely.

What "comprehensive security" means for end users

Here are the basics that users of any service should be able to count on:

  1. Your data is treated as confidential by default.
    Anything you upload or type should be handled like sensitive information, not like public content. At Potato we classify and protect all customer data as confidential, including Personally Identifiable Information (PII) and non-PII business data, and we apply handling, retention, and disposal controls across its lifecycle.
  2. Only the right people can access it.
    The product should enforce "need-to-know" access rather than broad visibility, even within your team. It should apply formal user lifecycle controls for provisioning and removing access, retain evidence of who was permitted to see what, and regularly review internal access controls (a sketch of such a check follows this list). Employees of the platform provider should not be able to access or review your data without your consent.
  3. It is encrypted in transit and at rest.
    This is table stakes, but let's be crystal clear. Whether data is moving across networks or sitting in storage, it should be encrypted to modern standards and protected by strong key management. Keys should be rotated at least annually, and key access should follow dual-control principles (a key-management sketch also follows this list).
  4. The system is monitored for misuse and attacks.
    Security is not just design-time rules. It includes watching for things that should not be happening. We use defense-in-depth, continuous network and log monitoring, and alert correlation to spot unusual behavior quickly.
  5. The system is aligned to recognized frameworks.
    When someone says they are "secure," that can mean anything. Mature programs anchor on accepted standards so you can map controls to your internal requirements. Potato's security program is aligned to the NIST Cybersecurity Framework and follows a defense-in-depth approach across governance, engineering, access control, encryption, monitoring, incident response, and operational resilience.
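To make item 2 concrete, here is a minimal sketch of what a need-to-know check with an audit trail can look like. The roles, data classes, and log shape are hypothetical illustrations, not Potato's actual implementation:

```python
# A minimal sketch of a need-to-know access check with audit evidence.
# Roles, data classes, and the audit-log shape are illustrative only.
from datetime import datetime, timezone

GRANTS = {  # role -> set of data classes that role may read
    "lab_admin": {"session_transcript", "uploaded_document"},
    "support_engineer": set(),  # no customer data without explicit consent
}

audit_log = []

def can_read(role: str, data_class: str) -> bool:
    """Allow access only when the role's grants cover the data class,
    and record every decision so access can be reviewed later."""
    allowed = data_class in GRANTS.get(role, set())
    audit_log.append({"role": role, "data_class": data_class,
                      "allowed": allowed,
                      "at": datetime.now(timezone.utc).isoformat()})
    return allowed

assert can_read("lab_admin", "uploaded_document")
assert not can_read("support_engineer", "session_transcript")
```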
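And for item 3, a minimal sketch of envelope encryption with a rotatable key-encryption key, using the `cryptography` package. In a real deployment the key-encryption key would live in a KMS or HSM rather than in application code; the names here are assumptions for illustration:

```python
# A minimal sketch of envelope encryption: each record is encrypted under
# a fresh data key, and the data key is wrapped under a key-encryption key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_ENCRYPTION_KEY = AESGCM.generate_key(bit_length=256)  # stand-in for a KMS-held KEK

def encrypt_record(plaintext: bytes) -> dict:
    """Encrypt one record under a fresh data key, then wrap that key."""
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Wrap the data key under the KEK so it can be stored alongside the
    # ciphertext; rotating the KEK only requires re-wrapping data keys.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(KEY_ENCRYPTION_KEY).encrypt(wrap_nonce, data_key, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_record(record: dict) -> bytes:
    data_key = AESGCM(KEY_ENCRYPTION_KEY).decrypt(
        record["wrap_nonce"], record["wrapped_key"], None)
    return AESGCM(data_key).decrypt(record["nonce"], record["ciphertext"], None)
```

The point of the wrapping step is that rotating the key-encryption key never requires re-encrypting the underlying data, only re-wrapping the data keys.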

Potato operates under the belief that you should not have to ask for these details or accept vague assurances. Service providers should be able to show audited evidence that the product was built this way from the start.

Data lifecycle clarity

In regulated settings, the data lifecycle is part of compliance. You need straight answers about what is stored, for how long, where, and how it is disposed of. Any gaps here become audit pain later.

Our posture treats customer data as confidential across the full lifecycle, with retention and secure disposal practices designed around protecting confidentiality, integrity, and availability.
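A retention policy should be something a system can compute against, not just a paragraph in a contract. A minimal sketch of what retention enforcement can look like, with hypothetical data classes and periods that are not Potato's actual policy:

```python
# A minimal sketch of retention enforcement: each record carries a data
# class with a retention period, and disposal eligibility is computed
# from it. Classes and periods here are hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "session_transcript": timedelta(days=90),
    "uploaded_document": timedelta(days=365),
}

def eligible_for_disposal(record_class: str, created_at: datetime) -> bool:
    """True once a record has outlived its retention period."""
    return datetime.now(timezone.utc) - created_at > RETENTION[record_class]
```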

Why this matters for IP, not just privacy

Data security often starts with user privacy, but for science it cannot end there. AI sessions can include formulas, assay designs, manufacturing process notes, or competitive intelligence. Losing that data is not just a breach - it can be a direct hit to valuation and years of work.

Security controls help protect confidentiality, but they also protect integrity. If an attacker can tamper with data or models, they can mislead research decisions or quality investigations. A comprehensive program addresses both. That is why we emphasize controlled change, validated releases, and monitoring for anomalous behavior, not only perimeter defenses.
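To make the integrity point concrete, here is a minimal sketch of tamper detection with a keyed hash (HMAC): each stored record carries a tag that is verified before the data is used. The key handling and record contents are illustrative assumptions, not a description of our internals:

```python
# A minimal sketch of tamper detection via HMAC: any modification to a
# record invalidates its tag. Key handling here is illustrative only.
import hmac
import hashlib

INTEGRITY_KEY = b"replace-with-a-managed-secret"  # illustrative only

def sign(payload: bytes) -> bytes:
    return hmac.new(INTEGRITY_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels during comparison
    return hmac.compare_digest(sign(payload), tag)

record = b"assay results: batch 42"
tag = sign(record)
assert verify(record, tag)             # untouched data verifies
assert not verify(record + b"!", tag)  # any modification is detected
```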

How Potato tries to earn trust every day

Security is not one feature. It is a habit, a set of constraints, and a willingness to say no to shortcuts.

A few things we do that shape our product in noticeable ways:

  • We design under the assumption that sensitive information will be used in the platform. We do not rely on users to perfectly filter themselves.
  • We restrict access internally and treat privileged roles as high-risk, not high-convenience.
  • We encrypt and manage keys with the expectation of audits and adversaries, not just good faith.
  • We release changes carefully, even when it is tempting to move faster, and we proactively patch vulnerabilities as soon as they are identified.
  • We watch the platform like we expect it to be targeted, because AI systems are increasingly attractive to attackers.

We are not claiming perfection, nor should you believe anyone who does. What we can say is that our posture is deliberate, mapped to a recognized framework, and built into how we engineer and operate the platform. That is what "comprehensive" looks like in practice.

Looking Forward

AI is now part of how work gets done. The right question is not whether people will use it, but whether they will use it safely. A secure AI product removes friction from safety. It protects data and IP without asking users to become security experts, and it gives IT teams the controls and evidence they need to stand behind the tool with confidence.

With Potato, your teams can innovate without fear. We designed for security from the start, so you can be confident your data will stay yours.

Ready to experience the future of science?

Join the Beta Waitlist

Contact Us

Interested in piloting Potato? Have a partnership idea?

We'd love to hear from you. hello@potato.ai