Edelman's 2024 AI Landscape Report: The Communicator's Guide to Finding AI Tools You Can Trust is the recently released and inaugural evaluation in a series of reports from Edelman, focusing on the most enterprise-ready AI solutions for marketing and communications professionals. In the Enterprise AI in Focus interview series, report contributors are sharing their insights on the evaluation, the landscape, and what they see ahead. In this first installment of the series, we sat down with the author of the report's Major LLM category, Mirza Germovic, VP, AI Solutions & Advisory, to learn more about his findings.
Q: Can you give us a brief overview of the report and your focus on enterprise-ready LLMs for marcom?
Mirza Germovic: The generative AI market for marcom is highly saturated, with nearly 80,000 potential vendors. Where should someone even start looking? In the search for an enterprise-ready LLM, there are many factors to consider, especially when the outputs of these tools can impact millions, if not billions, of consumers, and just as many dollars in revenue. When we first embarked on this research, Edelman's team of AI experts considered some of the primary challenges decision-makers face when looking for tools that not only align with use cases and organizational goals, but also meet other pertinent requirements, such as technical guidelines and ethical standards. These challenges informed our own evaluation guidelines.
The enterprise-ready LLMs are in their own category because they support all six of the universal and priority use cases we identified. We have two other tool categories in this report: Creative and Design, and Analytics and Social Listening. AI can significantly speed up data analysis, offering insights that would normally take much longer to generate manually. AI also plays a huge role in design and creative production, helping marketers quickly develop campaign visuals and other creative assets. In those categories, we go into more depth on specific use cases.
Q: What are some misconceptions about LLMs in an enterprise MarCom setting?
MG: A common misconception is that LLMs will completely replace the work done by marketing and communications teams, such as drafting press releases, creating and editing content, or ideating campaigns. However, marcom requires human nuance in the way we communicate with each other. And at the end of the day, communication is a fundamentally human activity.
Basically, LLMs should be viewed as partners in work, not as a substitute for human expertise. Their real value lies in helping teams increase productivity, improve content quality, and accelerate content production. With LLMs, we'll see an age of "infinite content," where teams can produce high-quality outputs rapidly, but it's essential to understand that humans are still central to ensuring brand authenticity and creative direction.
Q: What criteria did you use to evaluate the enterprise readiness of major LLMs?
MG: In the report, we offer a look at our criteria used for evaluating LLMs in an enterprise context. Among them, data protection is paramount. Any tool that doesn't offer strong data protection mechanisms is a giant risk, especially if you're dealing with sensitive client or consumer information.
We also assessed the quality of data used to train the LLMs. Other criteria included ease of use: a tool that's difficult to learn will require extensive training and face slow user adoption. We also looked at explainability: can users understand how the tool processes inputs to generate outputs?
Two other criteria we considered were ecosystem partnerships and financial backing and stability. We looked at ecosystem partnerships because the higher the level of penetration in partnerships, the better prepared the tool is for an enterprise-grade environment. And we considered the financial aspect because tools backed by large organizations, or those with strong investor support, tend to have better quality and, potentially, greater longevity.
Q: What are the top enterprise use cases for generative AI tools included in your section of the report?
MG: For the major LLM category, we focused on six universal marcom use cases that generative AI tools can support: writing, research, ideation, analysis, synthesis, and design. These cover the core tasks that marcom teams handle daily. Some of the use cases are more nascent than others, so we've approached this by focusing on the priority use cases.
All of these elements are core to our industry and our function, which is why it was important to include them in the report.
Q: Over the course of your evaluations of major large language models, did you find anything particularly surprising or unexpected?
MG: The rapid development of AI tools never ceases to amaze me. I've worked on many AI-related projects over the last decade, and I'm continually surprised by the pace of AI development and the speed at which the major tool providers are adding new capabilities and features and optimizing their large language models. This rapid advancement means that major LLMs are learning faster and improving their ability to support the specific needs of marcom teams.
And another thing that continues to surprise me is that the cycles between new releases keep getting shorter as time goes on.
Q: What should decision makers consider when selecting a major LLM as part of their MarCom workflow?
MG: First, think about the LLM's enterprise-readiness. Security, data protection, and ease of integration into your existing tech stack should be top considerations. Beyond that, think about use case mapping: how does the tool support your team's specific needs? Don't just ask, "How do I use this tool?" The better question is, "How do I integrate this tool?" Each marcom team operates differently, and people within the team will have different use cases, so it's critical to align the LLM's features with your workflow.
Think of it as working backwards: take what you have, map your use cases against it, and then find the LLM that supports those use cases instead of being distracted by what I call "shiny object syndrome," in other words, considering something that might not be appropriate for your business simply because it is new and exciting.
Q: What trends do you foresee in the future of generative AI for MarCom teams?
MG: I'm seeing generative AI for marketing and communications going in several directions. The first area is content production and optimization. Traditional SEO is evolving into Generative Engine Optimization (GEO). As more users turn to LLMs instead of traditional search engines, understanding how to optimize your brand's content for LLMs will be crucial. This involves placing your content in the areas where major LLMs are crawling or scanning the web for that type of content.
The next direction is the custom AI model, which is going to become increasingly important for marcom teams. Custom models allow teams to use their proprietary brand data to inform and build automated AI models, helping them validate their content and providing feedback on campaign and business strategies. In short, these models are an excellent means of hyper-personalizing brand voice and content output.
I am also starting to see early interest in the idea of AI agents (conversational AI platforms that automate tasks). In the long term, these agents will automate some of the work the team is doing, enabling them to spend less time on repetitive, lower-tier work and opening up bandwidth for more strategic, higher-level work.
And of course, AI will always have to be used in partnership with humans, because generative AI, especially in a marcom environment, isn't about replacement. It's about augmentation and upskilling, and I don't see this changing.
To see Edelman's 2024 ranking of enterprise-ready major LLMs, download the full 2024 AI Landscape Report today.