Generative AI models work with context windows, which restrict the number of tokens that can be processed in a single query.
So when you're setting up a workflow in Harvey AI, which may use multiple prompts to generate a legal document, each 'block' is limited to 240 pages of text, roughly 100,000 to 200,000 tokens.
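The page-to-token arithmetic can be sanity-checked with a rough heuristic. The per-page figures below are back-of-the-envelope assumptions derived from the ranges quoted above, not Harvey's actual tokenizer:

```python
# Rough estimate of where a 240-page block limit falls in token terms.
# Assumption: 100,000-200,000 tokens spread over 240 pages implies
# roughly 400-800 tokens per page of legal text.

BLOCK_PAGE_LIMIT = 240

def tokens_per_page(total_tokens: int, pages: int = BLOCK_PAGE_LIMIT) -> float:
    """Average tokens per page implied by a token budget over the block limit."""
    return total_tokens / pages

def fits_in_block(page_counts: list[int]) -> bool:
    """Check whether a set of documents stays under the 240-page block limit."""
    return sum(page_counts) <= BLOCK_PAGE_LIMIT

low = tokens_per_page(100_000)   # ~417 tokens/page at the low end
high = tokens_per_page(200_000)  # ~833 tokens/page at the high end
print(f"{low:.0f}-{high:.0f} tokens per page")
print(fits_in_block([80, 95, 40]))   # 215 pages -> True
print(fits_in_block([120, 90, 60]))  # 270 pages -> False
```

In practice the real token count depends on formatting and the model's tokenizer, so treat any page-based estimate as a ceiling check rather than an exact figure.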

This post on harvey.ai lists the following context limits:

A review table in Harvey is similar to a spreadsheet: each cell holds an entry for a category, with categories separated into columns. Each cell is limited to 60 pages of text.


In this demonstration, Harvey identifies which of the agreements uploaded as PDFs contain a particular type of provision, how that provision is defined, and the conditions under which the provision is triggered:

A Harvey thread is limited to 240 pages of text, which includes the prompt, the series of questions and answers, and any uploaded documents. Threads are used to get information about a subset of documents quickly, and it's possible to stay under the limit by structuring queries so that the system only analyzes relevant materials.
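One way to structure a query so that only relevant material is analyzed is to pre-filter documents before adding them to a thread. A minimal sketch of that idea, where the document records and the keyword filter are illustrative assumptions rather than anything Harvey exposes:

```python
# Sketch of pre-filtering documents so a thread stays under the
# 240-page limit. The document records and keyword test are illustrative
# assumptions; Harvey itself does not expose an API like this.

THREAD_PAGE_LIMIT = 240

def select_documents(docs: list[dict], keyword: str,
                     limit: int = THREAD_PAGE_LIMIT) -> list[dict]:
    """Keep only documents mentioning the keyword, stopping before the
    cumulative page count would exceed the thread limit."""
    selected, pages_used = [], 0
    for doc in docs:
        if keyword.lower() not in doc["text"].lower():
            continue
        if pages_used + doc["pages"] > limit:
            break
        selected.append(doc)
        pages_used += doc["pages"]
    return selected

docs = [
    {"name": "msa.pdf", "pages": 120, "text": "Indemnification obligations..."},
    {"name": "nda.pdf", "pages": 15, "text": "Confidentiality terms..."},
    {"name": "supply.pdf", "pages": 180, "text": "Indemnification and delivery..."},
]
picked = select_documents(docs, "indemnification")
print([d["name"] for d in picked])  # ['msa.pdf'] - supply.pdf would push past 240 pages
```

The same triage logic applies whether the filter is a keyword, a date range, or a contract type: analyze less, and the page budget stretches further.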

So, within a thread the user interacts with Harvey by entering commands in the pane on the left, which modify the work product shown on the right.

Harvey allows documents to be collected into vaults of up to 100,000 documents. Queries or commands can then be entered so that Harvey generates content based only on the documents in a vault. For example, a user could upload thousands of contracts a business is party to and have Harvey generate a table indicating how each agreement meets the requirements of a particular regulation.
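The vault-to-table workflow can be sketched in miniature: one row per document, one column per requirement. The requirement checks below are naive keyword tests, purely for illustration; Harvey's actual analysis is model-driven, not string matching:

```python
# Sketch of the vault-to-table idea: for each document in a large set,
# record whether it appears to meet each requirement of interest.
# The keyword checks are illustrative assumptions, not Harvey's method.

def build_review_table(vault: list[dict], requirements: dict[str, str]) -> list[dict]:
    """One row per document; one True/False column per requirement."""
    rows = []
    for doc in vault:
        row = {"document": doc["name"]}
        for req_name, req_phrase in requirements.items():
            row[req_name] = req_phrase.lower() in doc["text"].lower()
        rows.append(row)
    return rows

vault = [
    {"name": "contract_a.pdf", "text": "Data must be stored in the EU..."},
    {"name": "contract_b.pdf", "text": "No data residency clause present."},
]
requirements = {"data_residency": "stored in the EU"}
table = build_review_table(vault, requirements)
print(table)
```

The value of the vault feature is that this kind of per-document sweep scales to thousands of contracts without the user running thousands of individual queries.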
