
Delve accused of misleading customers with ‘fake compliance’

An anonymous Substack post published this week accuses compliance startup Delve of “falsely” convincing “hundreds of customers they were compliant” with privacy and security regulations, potentially exposing those customers to “criminal liability under HIPAA and hefty fines under GDPR.”

Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations in a post on its blog, calling the Substack post “misleading” and saying it “contains a number of inaccurate claims.”

The Substack post is credited to “DeepDelver,” who describes themselves as an employee of a (now former) Delve client.

DeepDelver recounted receiving an email in December claiming the startup had “leaked a spreadsheet with confidential client reports.” Although Delve CEO Karun Kaushik apparently assured customers in a subsequent email that they were in compliance and that no external party had gained access to sensitive data, DeepDelver said they and other customers had become suspicious.

“Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together,” they wrote.

Their conclusion? That Delve “achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance.”

DeepDelver went into considerable detail about those claims, accusing the startup of providing customers with “fabricated evidence of board meetings, tests, and processes that never happened,” then forcing those customers to “choose between adopting fake evidence or performing mostly manual work with little real automation or AI.”

DeepDelver also claimed that virtually all of Delve’s clients seem to have gone through two audit firms, Accorp and Gradient, which they described as “part of the same operation,” one that operates primarily in India, with only a nominal presence in the United States.

Those firms, they said, are just rubber-stamping reports that were generated by Delve. As a result, DeepDelver said the startup “inverts” the normal compliance structure: “By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation.”

In addition to accusing Delve of misleading its customers, DeepDelver said the startup is helping those customers “mislead the public by hosting trust pages that contain security measures that were never implemented.”

As for its own relationship with Delve, DeepDelver said their company has unpublished its trust page and no longer relies on the startup for compliance.

Delve responded to the accusations by saying it does not issue compliance reports at all. Instead, it’s an “automation platform” that ingests information about compliance, then provides auditors with access to that information.

“Final reports and opinions are issued solely by independent, licensed auditors, not Delve,” the company said.

Delve also said that its customers “can opt to work with an auditor of their choosing or opt to work with one from Delve’s network of independent, accredited third-party audit firms.” Those firms, the startup said, are “established firms used broadly across the industry, including by other compliance platforms.”

In response to the accusation that it’s providing customers with “fake evidence,” Delve countered that it’s simply offering “templates to help teams document their processes in accordance with compliance requirements, as do other compliance platforms.”

“Draft templates are not the same as ‘pre-filled evidence,’” the company said.

Delve added that it is “actively investigating any leaks” and is “still reviewing the Substack.”
