From ChatGPT to Niche Models: Vision of the AI Horizon by Maxime Le Dantec - Resonance
muffin.ai: the media for understanding the challenges of AI and leveraging them in our society and jobs, by a collective of French engineers & entrepreneurs.
Hello everyone!
We are excited to publish a new deep dive into the fascinating world of artificial intelligence with visionaries.
Your voice matters a great deal to us. Share your thoughts, questions, and suggestions with us.
Today we are sharing our conversation with Maxime Le Dantec, partner & co-founder at Resonance.
Resonance is one of the most active early-stage VCs in Europe, with 8 deals closed in the last 12 months.
Max's forecasts on the growth and transformation of the AI infrastructure market in the upcoming years are something we always look forward to!
Thanks Max for the interview, it was a real pleasure to connect.
On the menu today:
How to build defensibility with LLMs?
How the infrastructure market will take shape
Is AI hype the new crypto hype?
Some sources recommended by Maxime
If someone shared this newsletter with you and you'd like to subscribe, it's here:
MF: Max, I appreciate your time today. Before we delve deeper, is Resonance utilizing GenAI tools?
Maxime:
Thank you for having me. Indeed, at Resonance, we extensively use ChatGPT in two primary areas. First, it aids us in researching the competitive market and obtaining revenue or EBITDA figures. This has proven especially beneficial for publicly-traded companies with accessible data, saving us significant time. Additionally, ChatGPT plays a pivotal role in our content creation process, assisting us in refining blog posts and articles. GenAI is invaluable when seeking inspiration and crafting engaging phrases. Our analyst, Mathieu, is also in the process of creating a bespoke tool that can automatically summarize calls with founders and seamlessly integrate them into our CRM.
MF: How can one go beyond building a mere ChatGPT wrapper and establish a competitive edge with LLMs?
Maxime:
Absolutely! The approach isn't vastly different from how major AI enterprises have fortified their positions over the past decade. Two primary strategies for differentiation are harnessing unique data and designing an interface that captivates users, thereby fostering a feedback loop.
While OpenAI's recent release allowing GPT-3.5 to be fine-tuned on custom data is a commendable step forward, ChatGPT remains a closed system, limiting startups from gaining complete control over it. However, the emergence of open-source LLMs, such as France's Mistral.ai and Meta's Llama 2, coupled with unrestricted access to the model weights, empowers startups to tailor their training more closely to specific datasets and applications. This means that, rather than relying on GPT-3.5, which excels at general queries but lacks expertise in niche areas, startups can cultivate specialized domain experts. For instance, in the legal sector, emerging startups like Harvey and Jimini aim to develop AI legal aides proficient in public jurisprudence and proprietary contracts.
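To make the domain-adaptation idea concrete, here is a minimal sketch of the data-preparation step that typically precedes fine-tuning: turning proprietary question-answer pairs into a chat-format JSONL file, the kind of training file most fine-tuning stacks accept. The function names, the system prompt, and the sample pair are all illustrative, not any particular startup's pipeline.

```python
import json

def to_instruction_example(question, answer, system="You are a legal assistant."):
    """Format one domain Q&A pair as a chat-style fine-tuning record."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

def build_dataset(pairs, path):
    """Write one JSON object per line (JSONL), a common training-file format."""
    with open(path, "w", encoding="utf-8") as f:
        for question, answer in pairs:
            record = to_instruction_example(question, answer)
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Placeholder domain data; a real dataset would hold thousands of such pairs.
pairs = [
    ("Example question about public jurisprudence?",
     "Example answer grounded in the firm's proprietary contracts."),
]
build_dataset(pairs, "legal_finetune.jsonl")
```

The value lies in the data itself: the same script pointed at a unique corpus is what turns a generic open-source model into the kind of niche expert described above.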
The triumph of an AI-centric system hinges not just on its technological prowess but also on the user experience. A seamless, user-friendly interface, complemented by precise AI responses derived from high-quality training data, is paramount. Continual enhancement to keep pace with evolving usage is also vital. Hence, integrating a feedback mechanism, akin to the thumbs-up or thumbs-down feature in ChatGPT, helps hone the AI's precision over time. In essence, the goal isn't merely to craft a tool, but to sculpt an experience that's bespoke, user-focused, and perpetually evolving.
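The feedback loop above can be sketched in a few lines. This hypothetical `FeedbackLog` simply tallies thumbs-up/down votes per prompt and flags low-approval prompts for review, the raw material a team could feed into its next fine-tuning round; the class and threshold are illustrative, not a real product's API.

```python
from collections import defaultdict

class FeedbackLog:
    """Minimal thumbs-up/down log: flags weak prompts for retraining."""

    def __init__(self):
        # prompt -> [thumbs_up_count, thumbs_down_count]
        self.votes = defaultdict(lambda: [0, 0])

    def record(self, prompt, thumbs_up):
        """Store one user vote for a given prompt."""
        self.votes[prompt][0 if thumbs_up else 1] += 1

    def approval(self, prompt):
        """Share of positive votes, or None if the prompt has no votes."""
        up, down = self.votes[prompt]
        total = up + down
        return up / total if total else None

    def needs_review(self, threshold=0.5):
        """Prompts whose approval rate fell below the threshold."""
        return [p for p in self.votes if self.approval(p) < threshold]

log = FeedbackLog()
log.record("summarize this contract", True)
log.record("summarize this contract", False)
log.record("summarize this contract", False)
print(log.needs_review())  # the low-approval prompt surfaces for review
```

A production system would of course store the model's answer alongside the vote, but the principle is the same: user feedback becomes training signal.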
MF: How do you envision the market structure evolving? Will there be a dominant, all-purpose LLM like ChatGPT, or will we see a proliferation of niche-specific models?
Maxime:
Your question touches on a pivotal aspect of the AI landscape. While the appeal of a universal LLM such as ChatGPT is strong, the trend seems to be leaning towards the development of multiple niche-focused models, and here are the reasons.
To begin with, the financial aspect cannot be overlooked. Running inferences on a singular, massive model can be prohibitively expensive. In fact, the inference cost, which pertains to the expense associated with generating a response to a given prompt, is intrinsically linked to the "size" of the LLM. Thus, with a larger LLM, you might gain a broader intelligence, but at a steeper cost.
To put this into perspective, consider the realm of legal services. There exists a specialized LLM named 'Harvey' that's fine-tuned for legal inquiries. If OpenAI were to challenge Harvey by adapting ChatGPT specifically for legal matters, it would first need to sift through and select datasets pertinent to legal expertsβa considerable undertaking. Following this, the onus would be on ensuring that this adapted model either matches or outperforms Harvey in terms of response quality. Attaining such specialization within a more generalized model is not only demanding in terms of resources but also presents hurdles in guaranteeing the precision and applicability of the answers.
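The cost argument can be made concrete with a back-of-envelope calculation. A common rule of thumb is that a decoder's forward pass costs roughly 2 FLOPs per parameter per generated token; the model sizes below are illustrative, not figures from the interview.

```python
def inference_flops_per_token(n_params):
    # Rule of thumb: ~2 FLOPs per parameter per generated token
    # (matrix multiplies dominate; attention overhead is ignored).
    return 2 * n_params

niche = inference_flops_per_token(7e9)      # e.g. a 7B-parameter specialist
general = inference_flops_per_token(175e9)  # e.g. a 175B-parameter generalist
print(general / niche)  # the generalist costs ~25x more compute per token
```

Under this rough model, every token the generalist produces costs about 25 times the compute of the specialist's, which is why serving a fine-tuned niche model can be far cheaper at scale.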
MF: With the observed decline in ChatGPT's usage over recent months, could there be an AI hype similar to the crypto frenzy we witnessed a few years ago?
Maxime:
Your analogy draws an intriguing parallel. The crypto domain certainly went through a phase of intense enthusiasm, where, regrettably, a vast majority (around 90%) of the initiatives either turned out to be fraudulent or lacked tangible merit.
The fervor surrounding AI's potential often results in heightened anticipations. This surge in expectations, or 'hype', can be attributed to the occasional overblown claims about AI's capabilities.
A prominent challenge facing LLMs at present is their propensity for "hallucinations": they occasionally generate content that reads fluently but is not factually accurate.
However, in contrast to the crypto scenario, we're already witnessing concrete examples of genuine value derived from generative AI products. Jasper, Midjourney, and Stable Diffusion boast millions of monthly active users and have already amassed revenues in the tens of millions.
MF: Thanks, Maxime, for sharing. Can you leave us the sources you recommend for going further on the subject?
Maxime:
A few newsletters worth reading if you are interested in GenAI, and more broadly in tech:
Enjoy your week!
The muffin.ai team
If you enjoyed the edition, consider clicking on the ❤️ button and leaving a comment so that more people can discover muffin.ai on Substack.