As an associate at Idea Ventures, a VC firm built around deep technology and market research, I spend my days swimming in information: academic papers, market reports, interview notes, and written analyses. Our job is to synthesize these data points into a nuanced perspective that informs our investment decisions.
Reading the hype online, it's tempting to think you can simply delegate anything to AI. But for something so central to our job, we don't just need it done, we need it to be excellent. How much can AI really do for us?
In this piece, I'll share:
- How we structure instructions to get the best analysis out of an AI model
- Where I deliberately intervene and rely on my own thinking
- How you can get an AI to mirror the way you write
When relying on an LLM, you often get something that only looks good at first glance: the AI has frequently missed details or an important nuance. And for this core part of my job, decent isn't enough. I need the output to be excellent.
This AI accuracy gap creates a painful cycle where you spin in circles, re-prompting the system to get what you want until you're essentially left rewriting the entire output yourself. In the end, it's unclear whether AI helped at all. The more effective approach is to understand that you (the human) do the thinking and leave the writing (i.e., formatting and synthesis) to the LLM. This simple separation is what elevates AI-augmented workflows from decent to exceptional.
Right here’s an instance of how we construct these sorts of workflows at Idea Ventures, and how one can too. We’ll illustrate an instance with the automation of our inside market analysis experiences.
Step 1: Define the thinking process
Prepare a document with very detailed instructions on the underlying analysis you want to achieve: clearly outline the context and goals, then dive into all the details of how you deconstruct a broad analysis: the specific questions you'll ask, follow-up sub-questions, how they should be answered with data, and key callouts or exceptions.
You can use an AI assistant to help generate a first draft of this document, sharing completed examples and asking it to deconstruct the analysis. But these instructions are critical, so it's important to finish writing them by hand and to keep updating them over time as you tweak your analysis.
Example analysis instructions included in the prompt (note: the full instructions will typically run 2 to 10 or more pages):
- Analyze the underlying market structure: Is it fragmented or consolidated? Why? (e.g., high specialization needs, regulatory barriers, network effects, legacy tech debt). How is fragmentation changing over time, and does it differ across market segments?
- Use the following data sources and analyses: . . .
- Evaluate key market dynamics: What are the typical switching costs? How prevalent is tech debt? What are the typical sales cycles and buyer behaviors? How do incumbents maintain their position (moats)?
- Use the following data sources and analyses: . . .
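To make this concrete, here is a minimal sketch of how instruction sections like the ones above can be assembled into a single prompt document. All names here (`InstructionSection`, `build_instruction_prompt`, the example sources) are hypothetical illustrations, not the firm's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class InstructionSection:
    """One unit of the analysis: a question, its sub-questions,
    and the data sources it should draw on."""
    question: str
    sub_questions: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)

def build_instruction_prompt(context: str, sections: list[InstructionSection]) -> str:
    """Flatten the detailed instruction document into one prompt string."""
    parts = [f"Context and goals:\n{context}\n"]
    for i, s in enumerate(sections, start=1):
        parts.append(f"Analysis {i}: {s.question}")
        for sub in s.sub_questions:
            parts.append(f"  - {sub}")
        if s.data_sources:
            parts.append("  Use the following data sources and analyses: "
                         + "; ".join(s.data_sources))
    return "\n".join(parts)

# Hypothetical example mirroring the market-structure instruction above.
sections = [
    InstructionSection(
        question="Analyze the underlying market structure: fragmented or consolidated, and why?",
        sub_questions=["How is fragmentation changing over time?",
                       "Does it differ across market segments?"],
        data_sources=["industry reports", "expert interview notes"],
    ),
]
prompt = build_instruction_prompt("Internal market research report for segment X.", sections)
```

Keeping the instructions as structured data rather than one blob makes it easier to update individual questions over time, as Step 1 recommends.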
Step 2: Lay out your human-led analysis
Provide your primary analysis, along with raw notes and instructions, to the AI. We set our systems up so they require the user to supply their key takeaways and analysis to guide the system toward what's most important: highlighting areas of focus, key opportunities, and potential concerns. These are typically four to five detailed bullet points of two to four sentences each. This is the crux of the analysis and should therefore never be AI-generated.
Example key takeaways provided to the system:
- This market has historically been small and fragmented, without major software providers. We expect it to grow dramatically, primarily by automating today's labor spend and consolidating a set of point solutions. The underlying demand for this capability will also increase with XYZ challenges. We feel very confident in these two growth levers.
- There's substantial concentration at the upper end of the market. Leading platforms control around X% of the market and have all invested heavily in their own technology. But below the top-n largest players, there's a healthy cohort of medium-to-large buyers that have the scale to need this solution but don't want to build it. We think this is sufficient to build a sizeable company, although market concentration and build-versus-buy remains a key long-term risk.
Step 3: Run an interactive Q&A to hone the analysis
This dialogue is the most interesting and fun step: have the system generate questions to clarify the contours of your analysis. Based on the primary analysis, along with the notes and general instructions, the system asks about anything that either wasn't clear or contained conflicting information or instructions. This sharpens the analysis and gives the user the chance to share more of their thought process and guidance.
Example Q&A:
- Q from the AI: You said that leading platforms have invested heavily in this technology, but conversations with some of these companies indicated an eagerness to buy. Do you think that will be common, or were they exceptions?
- A from the human: Good point. I do think many of them will buy eventually, but because they've built a lot of technology internally, they're more likely to want a new platform only for certain components, rather than buying an end-to-end system. And the very largest companies (top three to five) will build everything in-house.
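The clarification loop above can be sketched as a short driver function. This is an assumption-laden illustration: `ask_model` stands in for a real LLM API call, and the scripted responses below only simulate one:

```python
def clarifying_qa(ask_model, get_human_answer, context: str, max_rounds: int = 3) -> str:
    """Run a short clarification loop: the model asks about gaps or
    conflicts in the analysis, the human answers, and each exchange
    is appended to the running context."""
    for _ in range(max_rounds):
        question = ask_model(
            context + "\n\nAsk ONE question about anything unclear or "
            "conflicting in the analysis above, or reply DONE.")
        if question.strip() == "DONE":
            break
        answer = get_human_answer(question)
        context += f"\n\nQ: {question}\nA: {answer}"
    return context

# Stubbed example run; a real system would call an LLM API here.
scripted = iter(["Will leading platforms buy, or keep building in-house?", "DONE"])
final_context = clarifying_qa(
    ask_model=lambda _prompt: next(scripted),
    get_human_answer=lambda q: "Most will buy components; the top 3-5 build in-house.",
    context="Primary analysis: (takeaways and notes go here)",
)
```

Capping the rounds (`max_rounds`) keeps the dialogue focused on genuine gaps rather than letting the model interrogate endlessly.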
Step 4: Share past work to match tone, not ideas
Use previous examples of your work to replicate tone and style only after the scaffolding work is done. Most people skip straight to this step, but we found (and research shows) that providing finished examples is most useful purely for matching tone and writing style, as opposed to shaping the analysis itself.
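Operationally, this ordering can be enforced by appending style exemplars only as the final step, with an explicit instruction to copy tone but not content. A minimal sketch; the `add_style_exemplars` helper and the excerpt text are hypothetical:

```python
def add_style_exemplars(final_prompt: str, past_reports: list[str]) -> str:
    """Only after the analysis is fixed, append excerpts of past
    reports as style references, instructing the model to mirror
    tone and structure without reusing conclusions or data."""
    header = ("The excerpts below are PAST reports. Match their tone, "
              "structure, and writing style only; do not reuse their "
              "conclusions or data.")
    excerpts = "\n---\n".join(past_reports)
    return f"{final_prompt}\n\n{header}\n{excerpts}"

# Hypothetical usage after Steps 1-3 have produced the full prompt.
styled_prompt = add_style_exemplars(
    "(analysis prompt from Steps 1-3 goes here)",
    ["Excerpt from a past market memo, included for tone only."],
)
```

Putting the exemplars last, behind an explicit guardrail, is one way to keep finished examples from bleeding into the analysis itself, which is the failure mode this step warns about.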
In researching the best AI-native products, we've seen that practically all of the work goes into defining the thinking and analysis portion of the problem (detailed instructions, guidelines, orchestration, and tooling) so the AI system knows what it should do and simply executes on it.
At Idea Ventures, we've started to mirror the same approach by developing highly constrained, human-in-the-loop workflows that direct the analysis, leaving the LLM to execute basic information extraction and synthesis. That's how we, and our AI systems, have started working smarter. Not by asking AI to think for us, but by helping it think better.