
GenAI Made Me a Perfect BLT: Here’s Why It Won’t Solve Your Data Problems (Yet)

The GenAI Hype Is Real—but So Are the Limitations

Every new technology follows a predictable cycle of hype, excitement, disillusionment, and eventual productive adoption—commonly known as the technology hype cycle. Generative AI (GenAI) and Large Language Models (LLMs), including agentic AI, are no exception. Currently, these technologies are at a stage where expectations often exceed practical capabilities, especially in complex enterprise scenarios.

As of 2025, GenAI technologies, while promising, are not yet mature enough to deliver comprehensive, end-to-end enterprise data solutions. Enterprises should approach GenAI investments cautiously, weighing current limitations against likely future developments rather than rushing into widespread implementation.

Document-Based Tasks: A Natural Fit (To a Point)

Currently, GenAI excels primarily in simpler document-based tasks such as handling Requests for Information (RFIs) and Statements of Work (SOWs). However, more complex documents like Requests for Proposals (RFPs), detailed contracts, and regulatory compliance documents still demand significant human oversight to ensure accuracy and contextual appropriateness.

Vendor Platforms Offer a Head Start

Vendor-based enterprise software solutions, such as SAP, Workday, and Salesforce, present attractive opportunities for integrating AI effectively. These platforms ship with built-in, customizable reference data models, which gives small and medium-sized enterprises (SMEs) a particular advantage: their implementations tend to stay close to those standardized reference models, so vendor-delivered AI features work with little adaptation.

Why Larger Enterprises May Face Challenges

Larger enterprises face additional complexity due to highly customized implementations, requiring substantial resources for integration, fine-tuning, and validation. Even with vendor-driven AI components that efficiently operate within their ecosystems, deriving meaningful insights often necessitates integrating carefully curated external data sources.

The Role of Metadata and the Cost of Tokens

Despite these challenges, solutions can be effectively engineered using LLMs in areas such as data inference and data quality improvement. Because LLM usage is priced per token of input and output, passing large data volumes directly through a model can be cost-prohibitive. Instead, leveraging summary data profiles (enriched metadata about datasets and their elements) can significantly enhance the reliability and cost-efficiency of AI or agent-based solutions.
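To make the token-cost argument concrete, here is a minimal sketch (the function name and profile fields are illustrative, not from any specific product) of building a compact per-column profile and prompting an LLM with the profile instead of the raw rows. The profile's size is bounded by the number of columns, not the number of rows, so prompt costs stay flat as the dataset grows:

```python
import json
from collections import Counter

def profile_dataset(rows):
    """Build a compact per-column profile to send to an LLM in place of raw data.

    The profile (null counts, distinct counts, top values) is orders of
    magnitude smaller than the data itself, so prompt token costs stay
    bounded regardless of row count.
    """
    profile = {}
    columns = rows[0].keys() if rows else []
    for col in columns:
        values = [r.get(col) for r in rows]
        non_null = [v for v in values if v is not None]
        counts = Counter(non_null)
        profile[col] = {
            "row_count": len(values),
            "null_count": len(values) - len(non_null),
            "distinct_count": len(counts),
            "top_values": [v for v, _ in counts.most_common(3)],
        }
    return profile

# Three rows stand in for millions; the profile does not grow with them.
rows = [
    {"country": "US", "revenue": 120},
    {"country": "US", "revenue": None},
    {"country": "DE", "revenue": 90},
]
summary = profile_dataset(rows)
prompt = "Assess the quality of this dataset from its profile:\n" + json.dumps(summary)
```

The same pattern applies to agent-based pipelines: the agent reasons over the profile and only requests targeted row-level samples when it needs them.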

Latency, Real-Time Demands, and the Semantic Layer

Latency issues add another layer of complexity, particularly when real-time or near-real-time insights are required. To address these latency challenges effectively, organizations must invest in comprehensive data curation, optimization, and structuring across multiple varied datasets—essentially building a semantic layer. This foundational layer enables consistent, meaningful access to business data and is currently beyond GenAI’s reach without extensive additional data engineering.
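A semantic layer can be pictured as a governed mapping from business terms to curated physical expressions, so that an AI agent (or a human analyst) always resolves a metric to one consistent definition. The sketch below is illustrative; the table and metric names are invented for the example, not taken from any particular platform:

```python
# A minimal semantic-layer sketch: business terms map to curated SQL
# fragments, so every consumer gets the same definition of each metric.
SEMANTIC_LAYER = {
    "net_revenue": {
        "table": "finance.orders",
        "expression": "SUM(amount - discount)",
    },
    "active_customers": {
        "table": "crm.customers",
        "expression": "COUNT(DISTINCT customer_id)",
    },
}

def resolve_metric(name):
    """Translate a business term into a governed SQL statement."""
    entry = SEMANTIC_LAYER[name]
    return f"SELECT {entry['expression']} FROM {entry['table']}"

print(resolve_metric("net_revenue"))
# SELECT SUM(amount - discount) FROM finance.orders
```

Building and maintaining this mapping across many varied datasets is precisely the data engineering investment the section describes; GenAI can consume such a layer, but it cannot yet construct one reliably on its own.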

First Things First: Fix Your Data

Another critical consideration for successful GenAI deployment is data quality and readiness. Since poor data inevitably results in suboptimal AI outcomes, enterprises should prioritize robust data governance, consistent data quality frameworks, and thorough data curation processes as foundational steps.
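Data quality frameworks often start as simple rule suites checked before data reaches any AI workflow. As a minimal sketch (the rules and field names here are hypothetical examples, not a specific framework's API):

```python
def check_quality(rows, rules):
    """Apply column-level validation rules; return failure counts per column."""
    failures = {}
    for col, rule in rules.items():
        bad = [r for r in rows if not rule(r.get(col))]
        if bad:
            failures[col] = len(bad)
    return failures

rows = [
    {"email": "a@example.com"},
    {"email": None},
    {"email": "not-an-email"},
]
# Illustrative rule: an email field must be a string containing "@".
rules = {"email": lambda v: isinstance(v, str) and "@" in v}
print(check_quality(rows, rules))  # {'email': 2}
```

Running checks like these as a gate, and tracking failure counts over time, is one practical way to operationalize the "fix your data first" principle before layering GenAI on top.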

Strategic Patience Beats Premature Investment

Enterprises should adopt a “wait-and-watch” strategy, carefully evaluating and monitoring developments from established solution vendors. By thoughtfully addressing foundational data readiness and aligning AI investments strategically, organizations can position themselves to derive substantial, sustainable value from future advancements in GenAI technologies.
