From Abstract to Concrete: The Path Forward for Machine Learning Patents

While the precedent set by Recentive Analytics, Inc. v. Fox Corp.[1] presents a formidable challenge, it does not represent a complete foreclosure of patent protection for machine learning inventions. The court’s opinion, though overly broad, left open the possibility that “patent-eligible improvements in technology” can and will emerge from the field of machine learning (ML). For innovators and patent practitioners, the key is to understand the narrow paths to eligibility that remain and to draft applications with a strategic focus on the specific types of technical contributions the courts are willing to recognize. The core of any successful strategy post-Recentive must be to shift the inventive narrative away from the abstract function of the ML model (e.g., “predicting,” “classifying,” or “optimizing”) and toward the concrete technical structure of the system that performs that function. This involves locating the inventive concept not in the abstract idea of applying artificial intelligence (AI), but in the specific, tangible implementation details of the invention.

The Technical Integration Doctrine: Rooting the Invention in Technology

One approach to demonstrating an inventive concept is to claim the ML invention as an integral and necessary component of a larger technical system, where its operation produces a tangible technical effect or solves a specific technological problem. The inquiry, as framed by prior Federal Circuit cases, is whether the claims are “rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks”[2] or another technical field. The invention must be presented not as a mathematical exercise, but as a “specific means or method that solves a problem in an existing technological process.”[3]

One strategy is to draft claims that encompass an entire integrated system, not just the ML model in isolation. The specification can clearly articulate how the ML components interact with other hardware and software elements to achieve an improvement in the functioning of the overall technology. Examples of such technical integration can include:

    • Control of Physical Systems: Using an ML model’s output as a direct control signal for a physical apparatus. This could involve a model that controls a robotic arm in a manufacturing process, adjusts the flight path of a drone, or manages the operation of an industrial process. In these cases, the invention is not just the model’s prediction but the entire control loop that translates that prediction into a physical action. A simplified sketch of such a loop follows this list.
    • Improvement to Computer Functionality: Demonstrating that the ML model improves the functioning of the computer itself can be a powerful argument because it directly addresses the court’s concern that the computer is merely being used as a generic tool. Such improvements could include using an ML model to manage memory allocation more efficiently, reduce network latency by predicting traffic patterns, or lower the energy consumption of a processing unit by optimizing computational loads.
    • Transformation of Data as a Technical Process: Claiming a process where the ML model is used to transform data from one state to another in a way that represents a technical improvement, not just an abstract analysis. This could involve using a model to enhance the quality of a digital image or audio signal, or to compress data for more efficient transmission or storage. The key is that the data itself is being technically modified, not just interpreted.
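
The control-loop pattern in the first bullet can be illustrated with a short sketch. This is a minimal illustration, not a claimed or cited implementation: the `read_sensors` and `set_joint_velocity` interfaces and the velocity bound are hypothetical placeholders for a real robotics stack, and the scikit-learn-style `predict` call stands in for whatever trained model the system uses.

```python
# Minimal sketch (hypothetical interfaces throughout): an ML model's output is
# used as a direct control signal inside a sense -> predict -> actuate loop,
# so the claimed subject matter is the loop, not the bare prediction.
import time

def control_loop(model, robot, period_s=0.01, max_velocity=0.5):
    """Run a fixed-period control loop that actuates on the model's output."""
    while robot.is_active():
        features = robot.read_sensors()            # sense: raw joint/torque data
        correction = model.predict([features])[0]  # predict: desired correction
        # Bound the raw prediction so it becomes a safe, physical control signal.
        command = max(-max_velocity, min(max_velocity, float(correction)))
        robot.set_joint_velocity(command)          # actuate: apply to the hardware
        time.sleep(period_s)
```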

Innovation in the Periphery: Pre- and Post-Processing as the Inventive Locus

When the core ML model or algorithm is conventional, the inventive concept can often be found in the novel and non-obvious steps that occur at the periphery of the model: the data pre-processing and post-processing stages. The Recentive court focused on the “black box” of the ML model, but significant innovation often lies in how data is prepared for that box and how its output is made useful. One strategy is to detail these peripheral steps in the specification and claims, arguing that they are patentable either in their own right or as an inventive combination with the model.

    • Inventive pre-processing: The quality and format of input data are critical to an ML model’s performance. Innovation can lie in novel methods of data collection, cleansing, normalization, or feature engineering that are specifically designed to overcome a technical challenge and enable the model to function effectively. For example, a patent could claim a unique method for constructing feature vectors from raw, noisy sensor data, transforming it into a structured format that a specific type of ML model can effectively process. This transformation is arguably a concrete technical step, not an abstract idea.
    • Inventive post-processing: The raw output of an ML model is often a set of numbers, such as probabilities or classifications. The inventive step can be the process that transforms this raw output into a concrete, technical action or a specific, useful data format for a downstream system. For instance, an invention could involve a novel process for converting a model’s predictive output (e.g., the likelihood of a system failure) into a specific set of control signals for a drone’s flight stabilization system or into a structured report that automatically triggers a maintenance workflow. This moves the claim beyond merely “displaying a result” and into the realm of a tangible, automated process. Both stages are illustrated in the sketch following this list.
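
A minimal sketch of how these peripheral stages might look in combination is shown below. The window size, the moving-average de-noising step, the failure-probability threshold, and the `open_maintenance_ticket` hook are assumptions introduced purely for illustration; the point is that the feature construction and the output-to-action conversion are concrete technical steps surrounding an otherwise conventional classifier.

```python
# Minimal sketch (illustrative assumptions only): pre-processing of raw, noisy
# sensor data into a feature vector, and post-processing of a raw model
# probability into an automated maintenance action.
import numpy as np

def build_feature_vector(raw_samples, window=64):
    """Pre-processing: de-noise a sensor window and extract summary features."""
    x = np.asarray(raw_samples[-window:], dtype=float)
    x = np.convolve(x, np.ones(5) / 5, mode="same")   # moving-average de-noising
    # Hand-engineered statistics in a structured format a classifier can consume.
    return np.array([x.mean(), x.std(), x.max() - x.min(), np.abs(np.diff(x)).mean()])

def open_maintenance_ticket(priority):
    """Placeholder for a real ticketing/workflow integration (hypothetical)."""
    print(f"Maintenance ticket opened with priority={priority}")

def act_on_prediction(failure_probability, threshold=0.8):
    """Post-processing: turn a raw probability into a concrete, automated action."""
    if failure_probability >= threshold:
        open_maintenance_ticket(priority="high")
        return "shutdown"   # e.g., a control signal sent to the monitored equipment
    return "continue"
```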

The Special Case of Large Language Models (LLMs): Inventive Workflows and Applications

The rise of powerful foundation models such as Large Language Models (LLMs) provides fertile ground for illustrating how the application of an existing, powerful tool can be highly inventive. Here, the invention can be the novel and specific technical workflow or system architecture built around the LLM. The patent strategy can focus on the novelty of this surrounding structure, detailing how the LLM is prompted, how its outputs are constrained and utilized, and how it interacts with other software agents or hardware components to solve a specific technical problem.

    • Multi-agent systems (MAS): A multi-agent system is a collection of autonomous agents that collaborate to solve complex problems that no single agent is equipped to solve alone. The invention can lie in the architecture of that collaboration: the communication protocols, the task decomposition strategies, and the methods for delegating sub-tasks. Examples include systems for automated software debugging, where one agent identifies a bug, another proposes a fix, and a third tests the solution, or sophisticated financial trading systems in which different agents analyze market data, sentiment, and risk profiles.
    • Novel human-AI interaction workflows: These involve the design and implementation of processes in which humans and AI systems collaborate to achieve common goals. For instance, the HealMe framework[4] uses an LLM to guide a user through a structured cognitive reframing process based on established psychotherapeutic principles, representing a novel application in computational mental health. Another example involves using LLMs to create “digital twins” of online communities[5], allowing social scientists to simulate and study community dynamics at scale, a novel workflow for computational social science.
    • Integration with external knowledge and tools: These systems intelligently augment a general-purpose LLM with specialized external knowledge sources, such as knowledge graphs (KGs), or with software tools. The inventive concept can center on the mechanism of integration. For example, Graph Memory-based Editing for Large Language Models (GMeLLo)[6] describes a system that translates a user’s natural language question into a formal, structured query to retrieve information from an external KG, thereby grounding the LLM’s response in verifiable facts; a simplified sketch of this query-translation pattern follows this list.
    • Automating complex technical processes: Applying LLMs to highly structured, unconventional, and technically demanding domains may itself be inventive. The Selene benchmark[7] demonstrates the use of LLMs to automate the generation of formal verification proofs for an operating system microkernel, a task that is complex, logical, and far removed from typical language generation.
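
As a rough illustration of the integration mechanism described above, and not of the actual GMeLLo or Selene implementations, the sketch below shows the general shape of a knowledge-graph-grounded workflow: the LLM translates the question into a structured query and verbalizes the retrieved facts, while the facts themselves come from the external store. The `llm_complete` and `run_graph_query` callables are hypothetical stand-ins for a real model API and a real graph-database client.

```python
# Minimal sketch (hypothetical interfaces throughout): grounding an LLM answer
# in an external knowledge graph via query translation. Neither llm_complete
# nor run_graph_query is a documented library call; both are placeholders.

def answer_with_kg(question, llm_complete, run_graph_query):
    # Step 1: use the LLM only to translate natural language into a structured query.
    query = llm_complete(
        "Translate this question into a SPARQL query over the knowledge graph.\n"
        f"Question: {question}\nQuery:"
    )
    # Step 2: retrieve verifiable facts from the external knowledge graph.
    facts = run_graph_query(query)
    if not facts:
        return "No supporting facts were found in the knowledge graph."
    # Step 3: have the LLM verbalize an answer constrained to the retrieved facts.
    return llm_complete(
        "Answer the question using ONLY the facts below.\n"
        f"Facts: {facts}\nQuestion: {question}\nAnswer:"
    )
```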

These examples demonstrate that even with an allegedly “generic” foundation model, immense inventive potential exists in the specific technical systems built around it. The key to patentability is to claim the novel structure of that system, not the abstract function of its components. This legal constraint, while burdensome, may have the unintended benefit of forcing innovators to pursue more holistic and robust patent protection. A patent on a single, narrow algorithm can often be designed around. However, a patent that covers an entire integrated system – including the unique data pre-processing pipeline, the specific model integration points, and the novel post-processing workflow – can be far more difficult to circumvent and may ultimately represent a more valuable commercial asset.

Despite the Federal Circuit’s ruling in the Recentive case, it is still possible to articulate an inventive concept that can withstand § 101 scrutiny by focusing on concrete technical structures: novel model architectures, specific technical integrations, inventive data processing at the periphery, or the design of novel application workflows. These pathways, however, are workarounds for a flawed patent system, not a substitute for sound legal doctrine.

[1] Recentive Analytics, Inc. v. Fox Corp., No. 23-2437 (Fed. Cir. 2025)

[2] DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014)

[3] Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150 (Fed. Cir. 2019)

[4] https://arxiv.org/abs/2403.05574

[5] https://www.sciencedirect.com/science/article/pii/S2352864824001706?via%3Dihub

[6] https://openreview.net/pdf/d069f9c812b129a08b12c808987bc97b270db560.pdf

[7] https://arxiv.org/abs/2401.07663