Episode 025
AI Roundtable: Tangible Takeaways to Help Leverage Artificial Intelligence
In this special episode, we have exclusive audio from our recent webinar: Activating AI: Empowering Your Business for Success. The panel of experts from this webinar includes:
Alex Castrounis, Founder and CEO, Why of AI
Nik Kapauan, Principal, Access Holdings
Ken McLaren, Partner, AI, Frazier Healthcare Partners
Keith Thomas, National Practice Lead, Security Operations, AT&T
The panel, moderated by Karma School of Business host, Sean Mooney, includes the following topics:
4:40 - What precursor activities to perform before diving into AI
10:20 - Things that companies should keep in mind to remain secure
13:50 - Expert opinions on best-in-class tools
27:23 - Most addressable use cases for everyday companies
34:07 - Considerations companies should weigh when implementing off-the-shelf tools
39:03 - Things that may take shape in the near future
46:55 - Answering FAQs
To view the on-demand webinar, go to www.bluwave.net/demystifying-ai-webinar-registration/.
For more information on this podcast, go to www.bluwave.net/podcast.
www.whyofai.com
www.accessholdings.com
www.frazierhealthcare.com
www.att.com
EPISODE TRANSCRIPT
Sean Mooney:
Welcome to the Karma School of Business, a podcast about the private equity industry, business best practices and real-time trends. This episode is a special presentation of a webinar we recorded with some of the very best experts in AI from our private equity client base and our business builder network. We discuss important precursor steps needed to adopt AI, key technologies, things you can do right now and a lens into the future of AI.
This episode is brought to you today by BluWave. I'm Sean Mooney, BluWave's founder and CEO. BluWave is the go-to expert of those with expertise. BluWave connects proactive business builders, including more than 500 of the world's leading private equity firms, to the very best service providers for their critical, variable, on point and on time business needs.
I'm very pleased to be joined by Alex Castrounis, Nik Kapauan, Ken McLaren and Keith Thomas. And so what we'll do briefly here is make introductions from each of the panelists so you understand who's joining us today and the insights that they're bringing. To kick things off, Alex, can you give us a quick intro?
Alex Castrounis:
Thanks, Sean. Hey everyone, I'm Alex Castrounis. I'm the founder and CEO of Why of AI. We're an AI strategy consulting and business training company that helps companies demystify and understand AI and develop responsible AI strategies to drive business growth and customer success. I'm also a professor of artificial intelligence at Northwestern University's Kellogg School of Management as part of their MBAi program. And I'm the author of a book called AI for People in Business. Great to be here. Thanks.
Sean Mooney:
Thanks, Alex. Nik, how about yourself?
Nik Kapauan:
Thanks, Sean. Hey, everyone. My name is Nik Kapauan and I lead digital at Access Holdings, among other things. And thanks to Sean and BluWave, who have been incredible partners to us throughout the years. At Access we aspire, and we're very early in the journey, but we aspire to build what we call the next generation private equity firm, and analytics and data is central to that. Thinking through how do we apply all these technologies to improve what we do as a sponsor across the investment lifecycle, but then also for value creation in our portfolio businesses.
So prior to Access I was a consultant. I was with McKinsey for six years, primarily focused on digital business building and innovation strategy. And started off my career as a software engineer and startup founder. Looking forward to being on this panel with everyone.
Sean Mooney:
Great, thank you. Ken, how about yourself?
Ken McLaren:
Hey, nice to see everyone. Ken McLaren, part of Frazier Healthcare Partners. If you don't know Frazier, we're a mid-market healthcare focused PE firm, investing out of [inaudible 00:03:01]. We're about a one and a half billion fund exclusively focused in healthcare for 32 years. That's all we do.
To Sean's point up front, we believe significantly in value building. Our center of excellence team, which I'm a part of, actually has more members than our investment team, which is a little bit unique in the mid-market. But working closely with our [inaudible 00:03:18] to build value is what we do. Data and AI is the vertical that I lead, and we've been investing behind data and AI as a strategy for the last three years. We just really believe in the value it will bring to our portfolio companies. So excited to be here. Prior to Frazier I spent about 20 years as an executive, building and leading data and analytics businesses in the healthcare space.
Sean Mooney:
That's great. Thanks Ken. Keith, bring us home.
Keith Thomas:
Thanks. Keith Thomas, AT&T Cybersecurity. I've been with AT&T for a little over 14 years. I've got about a 25-year track record in IT, wearing all kinds of hats. And today we've been really fortunate in looking at the AI space and trying to understand how to implement AI tools and other cybersecurity technologies safely, for our customers to consume and for businesses like the partners that are here today to consume and use safely.
Sean Mooney:
Right. Thank you Keith.
So as we kick off here, maybe to give people a sense for agenda, we're going to talk a little bit about the precursors that most companies should be thinking about before they really are able to activate the AI journey, some of the tools that are going to be available or are available right now, and then maybe a lens into where the future is and how quickly things will change. And so I think as we get into the first part of the webinar here, I'd like to maybe set the stage. So jumping into AI with both feet is all well and good, and I think people have to do it. But in working with private equity firms in support of this journey, we found a number of things should probably be done in advance that are going to have maybe more immediate ROI, but also are going to enable the effectiveness of your AI journey.
And so to get started off I'd love to get some of your thoughts, Nik, about what are some of these precursor activities that people should be thinking about?
Nik Kapauan:
Yeah, for sure. And I'll lean back on my consulting days when we did these big digital transformations. At McKinsey we typically had almost like a three-layer-cake conceptual model when we thought about these big transformations. So you have strategy at the top, then you have your foundational enablers in the middle, and at the bottom you have execution and change management to actually roll this out. So it starts with strategy. I think that's the first point I'd make when it comes to this: understanding where the value is, understanding how you're going to prioritize things, the basics of a good strategy really set your strategic compass for when you deploy these technologies. And that strategy needs to be tied together. Your strategy for using AI obviously needs to tie to your broader digital strategy, which needs to tie to your broader business strategy as a firm: where the value is, where you're going to focus.
And I'd also bifurcate it, because when we say AI it's a broad spectrum of things. You have your traditional... Normally I think about it as you have your traditional analytics, which is descriptive analytics, just getting stuff on a screen and reporting. And then you have your more predictive analytics for predicting the future. Then prescriptive analytics, like predicting the future and then actually doing something about it automatically.
And with those kinds of use cases the value is well known, the use cases are... It's a bit more mature. So you can be a bit more systematic, with a strategy of taking some of these best practices and applying them to your specific business context. Versus something like generative AI, which I think is what we're all excited to talk about here, like ChatGPT and LLMs and things like that, where we know there's a ton of value on the table but it's still new. So the way you'd approach that strategy is a bit more iterative, a bit more experimental, getting use cases going and experimenting as soon as you can to figure out where the value is. But in either case, whether you're in well-trodden waters or experimenting with something new, it's about having that strategic compass, knowing where the value is and how it relates to what you're trying to achieve as a business.
And then for the second layer below that, I'll talk about foundational enablers. The biggest one here is obviously data. You need to do analytics on something, so it's getting your data in order. Oftentimes when I've done some of these big analytics projects in the past, 80, 90% of the effort is just getting the data centralized and clean. The analytics is almost the fun part, once you can actually get to it, once your data is all there. So that's definitely something everyone should be doing now as a no-brainer, to really extract the value from some of these advanced analytics capabilities.
And there are different ways you can do that depending on how complex your organization is. For us at Access, there's a Microsoft Azure data platform that we roll out at all of our portfolio businesses that ingests data from all their source systems, serves it for analytics, and then sends it over to us at Access for portfolio monitoring and fund valuations and things like that. So just getting your data ecosystem in order is critical.
Some of the other enablers here are talent, technology, and there's a build versus buy question when it comes to those things. How much do you insource versus partner with? And the answer will depend on your specific context, but definitely important ones to think through.
And then finally, my one last comment on the change management piece. Having done a lot of these digital transformations before I think one of the biggest predictors of success is a champion inside the organization that could really own the vision and drive the opportunity. And often that's from the CEO or something, someone the CEO directly holds accountable for the digital agenda. And really having that leadership voice to set the vision and drive the organization and mobilize change is critical to success for analytics and any other kind of major digital transformation.
Sean Mooney:
Nik, I thought you brought up a lot of really good points. And one I think is so important is just the basics. You've got to do the unglamorous data cleanliness part. When I was investing in information and data and analytics businesses, and certainly everyone here at BluWave [inaudible 00:09:38], the only thing worse than no data is bad data. So you've got to do the unglamorous part of making sure that stuff's good and keeping it good, because it's like a piece of equipment, it's got to be maintained. Anytime there's rotation or force on anything, it wants to lose calibration. So I thought that was a great point.
There's also this idea of just doing the basics. You're going to get a lot of ROI out of visualizing your data and analyzing it, so don't forget those really good precursor things. And then there's the change management, and we can talk a little bit later about this too: AI is going to be part of your strategy. It's a tactic, it's not your strategy. So I think there was a lot in there that was extremely helpful. And so if we think about that, you're on this journey, you've got your data, it's clean, you're analyzing it, you're getting ready to kick off the AI. Keith, what are some of the things that any company should be really thoughtful about in terms of protecting the castle as you're building these data assets up?
Keith Thomas:
Yeah, absolutely. There are some great points here. I love that you said change management first of all, because if you don't have a structured change management process to support the integrity of the data being put in, then you're going to end up with... We'll call it data poisoning, but it could be inaccuracies in your data that then throw everything else off. And we don't want to see that. So we want to make sure that organizations protect the data that they're actually ingesting, and definitely using a change management process is highly important.
As these AI systems get rolled out, and Nik, you had mentioned that you have them at your different organizations, there's a dependency on AI at this point. And if you don't get that data, if you don't get those analytics that you're looking for because the AI system goes down or becomes unavailable, then how does your organization respond to that? And so it's a very important piece, that you want to make sure that you have some type of disaster recovery failover capability. Even if it is to go to a manual approach, that's okay. Having the plan is the most important part of that.
And then finally I'll say one more thing, Sean, which is that we're going to invest a lot of time and a lot of resources into building these data models and these systems. And we want to make sure that we protect them from theft, making sure that if someone gets into our organization they can't pull that model out and take it with them to use somewhere else. And there are ways that we protect against that, using different security tools and different security capabilities to address the threat of model theft by attackers.
Sean Mooney:
That's great. I think those are great points, and once again we're seeing this theme of failing to prepare is preparing to fail. And so you've got to do the work in advance, not just even on the data and the analytics side, but also on protecting your data. At BluWave we call the cybersecurity issue our Friday at five call, because it's always Friday at five. And there are things that large companies certainly have better, bigger resources for.
But I'll tell you, just from the everyday work here, there are all sorts of resources available to SMBs and companies of any size that can do a lot to protect your organization. And in some ways, increasingly, your most critical asset is your data. So if you're looking for this you can go to almost any MSP, and it can take you a long way as a business. And so I think this is a great way to start: doing the precursor work that's maybe not as glamorous but that's going to enable you to do a lot of the really cool, fun stuff in the days ahead.
So you have your systems protected, you're cleaning your data, you're visualizing, you're analyzing, you have change management and you're ready to start putting AI in place. Let's talk about some of the tech that people and business leaders should be thinking about, considering above and beyond maybe even some of the consumer available LLMs like ChatGPT and Bard, et cetera. So Ken, I'd be curious, what's your thoughts on some of the best-in-class tools and tech stacks that businesses should be thinking about as they get going on this journey?
Ken McLaren:
Thanks Sean, and love the preamble 'cause I think a lot of people like to jump into the tech and the AI, but getting the foundations in place first is critical. Once you've got those foundations, though, I think technology is important. We still do most of our prototyping on desktops. So once you've got your data in place, just use Jupyter Notebook and Python and pick your toolkit: R, scikit-learn, PyTorch, TensorFlow, everyone has their own tools of the trade. We do a lot of prototyping on desktops, and as we prove the value and the use cases we then start getting ready for production. But don't build in production first; get the proof of value with your customer market in place before you start building.
When we're ready to build we do like Microsoft Azure. We actually say we're cloud-agnostic, but the reality of being in mid-market healthcare is the majority of our portfolio companies are already in Microsoft Office. The majority of our companies are already using Azure Cloud in some ways, so it's a natural step. And we do really love the tooling that Microsoft has been putting in place. We work with other cloud providers as well, but Microsoft Azure is just an easy path for us. And we really like, to Keith's point before, that we've got a lot of good terms and conditions, cybersecurity, data privacy, active directory. A lot of the stuff you don't want to be thinking about with AI kind of comes out of the box with Microsoft. And so that's a nice place to start.
From there the business has been investing... Microsoft is investing heavily behind AI, so not just the OpenAI partnership but their Azure ML environment before that. The cognitive services, the open source tooling that they bring in so you can natively work in Databricks and Spark, you can use Kubernetes and Kafka. And so whether you're streaming or batching, your tool of choice is pretty much available for you there. So it's a good platform and a good [inaudible 00:15:44] for all of those reasons. And so we really do have a lot of good success working there.
The number one thing we always will start with is just having a good data lake infrastructure. You can pick some of your tools around that, but it's not just the clean data, it's that that data is hosted and production ready. So having your data pipes with things like Azure Data Factory, having good storage, whether it's Azure Data Lake storage or using Databricks Delta Lake on top of that, having a production-ready data environment is important. And then once you've got your models ready, really thinking about the productionalization, which is your MLOps world. And again, you can do a lot through Azure ML, but you can plug in a lot of open source tools.
So there's really no one platform to rule it all. There's a lot of best of breed tools, but whether it is Azure or Google or AWS, you've got good environments to work in. And we just probably spend 80% of our time in Azure for the reasons I mentioned before, and they continue to invest in and innovate in that platform space. So we see a lot of great benefits, and because of the healthcare world that we live in, privacy, security, easy access, reasonable scaled production-ready tools is really important to us. We're not a hyper scaled organization so we can get a lot of good bang for buck and move fast. So good tool for us to use. Most of our vendors and partners have experience with it, so it's pretty easy to get talent in there and move pretty fast.
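The raw-to-production data flow Ken describes (landing source data, cleaning it, then serving it for analytics) can be sketched at a toy scale. This is not Azure Data Factory or Delta Lake; it is a plain-Python illustration of the staging pattern, and every name and record in it is made up for the example.

```python
# Toy sketch of the raw -> clean -> serve staging pattern Ken describes.
# Real pipelines would use tools like Azure Data Factory and a data lake;
# this just makes the zones visible with plain Python.

def ingest(raw_rows):
    """Land source-system records as-is (the 'raw' zone)."""
    return list(raw_rows)

def clean(raw):
    """Standardize fields and drop bad records (the 'curated' zone)."""
    curated = []
    for row in raw:
        if row.get("revenue") is None:  # reject incomplete records
            continue
        curated.append({
            "company": row["company"].strip().title(),
            "revenue": float(row["revenue"]),
        })
    return curated

def serve(curated):
    """Aggregate for downstream analytics (e.g. portfolio monitoring)."""
    return {
        "companies": len(curated),
        "total_revenue": sum(r["revenue"] for r in curated),
    }

# Hypothetical source records, including one that fails cleaning.
raw = [
    {"company": " acme health ", "revenue": "120.5"},
    {"company": "Beta Labs", "revenue": None},
    {"company": "gamma care", "revenue": "79.5"},
]
report = serve(clean(ingest(raw)))
print(report)
```

The point of the separation is the one Nik and Ken both make: most of the effort lives in the ingest and clean stages, and the analytics on top only works once those are reliable.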
Sean Mooney:
That's great, Ken. And I think particularly with your background in healthcare there are all sorts of much higher order things to think about in terms of HIPAA data and very sensitive information. We've heard a lot from our clients that they're probably the earliest at adopting Microsoft Azure, particularly as it relates to the OpenAI partnership and the confidentiality it provides. We're going to go deeper into this, but can you talk about some of the things that you've seen Azure solve, particularly as you try to bring ChatGPT-like capabilities into organizations?
Ken McLaren:
For sure. So just in general, I'd say yes, out of the box Azure's posture around cybersecurity, data privacy, active directory, you get a lot of security compliance out of the box. And certainly putting that in place is critical, as Keith shared up front. We love where AI is going. We're intrigued by the use cases gen AI is opening up. Many of our portfolio companies are concerned about using some of the open tools where it's not clear what will happen to the data that goes into a prompt. Will it be used to fine tune or train a model? And eventually what happens to that data? I think the open tools are starting to catch up and share some better language around protecting user data, but we still guide our portfolio companies for sensitive business data, customer data, keep it out of any open tool and use the protection of the Microsoft Azure environment where OpenAI services and Microsoft has strong protections for business and customer data.
And again, to the healthcare point, Sean, yes, for HIPAA and PHI we do a lot of work just to make sure those environments are protected in the right ways, not just for our businesses and teams but for our customers and patients that are all impacted by the data that's there. So that's pretty critical for us, and we certainly do guide in those directions, and we like the Microsoft terms and conditions around the commitments not to use Azure OpenAI data in any of the underlying models that they train.
Sean Mooney:
Yeah, and that's spot on, so I appreciate that. So Alex, what about you? What are some of the things that you're seeing in terms of the types of tech stacks and tools that companies are starting to adopt to enable these activities?
Alex Castrounis:
Yeah, absolutely. I mean, as Ken said, there are multiple cloud platforms out there that are really quite powerful and do amazing things. As we've seen in software development in general, there's this movement more and more towards no-code, low-code solutions. Part of the benefit of those is, one, accessibility, making it easier for people and organizations to sort of build software, or in this case train models, iterate on models, tune them, optimize them, deploy them and so on. But it also gives you the ability to abstract away some of the underlying DevOps or orchestration that needs to be done. Or, most importantly sometimes, governance, like model governance and data pipeline governance and so on.
So in addition to the Azure offerings that Ken talked about, Google has recently released their Generative AI Studio as part of Vertex AI. And it's really quite interesting because, if you think of the superhero movies, like Iron Man with Jarvis, and I think in Black Panther they have something like that, a lot of the way AI could be really powerful, especially through conversational and natural language interfaces, isn't so much just having the parameters of a model like GPT-4 or ChatGPT generate text, like if you're trying to write an email or summarize something, but rather creating an interface that becomes an information retrieval system or a question answering system on top of your data.
And I think that's really where we're going to see a lot of the real value, already today but especially in the near future. And I think some of the platforms out there, not just Google but also some of the open source stuff like LangChain, which I'll talk more about here in a minute, are really pioneering and pushing that forward. So in the case of Generative AI Studio, what's really amazing there is it gives you the ability to load up your data in different formats, whether it's CSV or PDF or text in other formats or what have you. And then on top of it they have a model hub. In that model hub you can choose Google's own model, Bard, which is similar to ChatGPT; the underlying model is actually PaLM. So you can choose different versions of that, but it's also a hub that has open source models like Llama and other models.
So it really gives you a lot of flexibility there to put a LLM interface like I'm talking about on top of your data for information retrieval and question answering, as well as even Google search itself on top of your data, which is pretty powerful. And it allows you to bias actually between how much of the output of this generative AI is generated by the model parameters, in the same way if you just went into ChatGPT's user interface and asked it to summarize something versus from your actual data. So you can kind of bias one way or another, 'cause obviously one of the main concerns a lot of organizations would have are things like trustworthiness, hallucinations and so on.
So trying to get more towards a true information retrieval system that you can rely on and that's, excuse me, trustworthy. As well as it's all very containerized, where you can have your own encryption key that Google doesn't have access to. So it can be HIPAA-compliant and everything else, and no one can really get access to your data except yourself. So it solves a lot of those issues that I know a lot of organizations are wondering about when it comes to proprietary data, confidential data and intellectual property, and just sending that off to a closed-source proprietary API in the case of some of the OpenAI stuff. And then it also allows for a lot of that other ops and governance type stuff in terms of model tuning and iteration and deployment and so on.
One of the other things that's interesting with a tool like LangChain, for example, is that if you're just going into a user-facing interface like ChatGPT, it's hard to track your prompts and version them as you're doing prompt engineering and so on. These kinds of frameworks like LangChain really help you, excuse me, to templatize, if you will, your prompts and also create variables in your prompts so you can inject different things. You can iterate on the prompts, you can version them, and then you can also chain prompts, so the output of one LLM call becomes the input of another one. And it allows you to then set up these pretty sophisticated systems that not only do that chaining but also can integrate with outside sources like the web or Wikipedia or other types of databases and APIs.
So I think a lot of that is coming. And of course Hugging Face is one of the really big open source options out there; it's really compelling and has done a lot to make transformers widely accessible to folks.
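The templating-and-chaining idea Alex describes can be sketched in plain Python. To be clear, this is not the actual LangChain API: `PromptTemplate`, `chain` and `fake_llm` below are illustrative stand-ins (the fake model just wraps its input so the flow is visible), but the mechanics of injecting variables and feeding one call's output into the next prompt are the same.

```python
# Plain-Python sketch of prompt templating and chaining, as Alex describes.
# Frameworks like LangChain add real model calls, versioning, and tool
# integrations on top of this basic pattern.

def fake_llm(prompt):
    """Stand-in for an LLM call; a real chain would hit a model API."""
    return f"SUMMARY[{prompt}]"

class PromptTemplate:
    """A reusable prompt with named variables you can inject."""
    def __init__(self, template):
        self.template = template

    def format(self, **variables):
        return self.template.format(**variables)

def chain(templates, llm, **first_inputs):
    """Run templates in sequence: each output feeds the next prompt."""
    text = templates[0].format(**first_inputs)
    for tmpl in templates[1:]:
        text = tmpl.format(previous=llm(text))
    return llm(text)

# Two chained steps: summarize, then rewrite the summary.
summarize = PromptTemplate("Summarize this report: {report}")
translate = PromptTemplate("Translate to plain English: {previous}")

result = chain([summarize, translate], fake_llm, report="Q3 revenue grew 12%")
print(result)
```

Because the templates are ordinary objects, they can be stored, diffed and versioned, which is exactly the prompt-tracking problem Alex says a chat interface alone doesn't solve.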
Sean Mooney:
I'd say so too. I mean, I think it's really clear that, A, Microsoft got an early start, as Ken said, and they've got the most established systems in place. But Google's catching up real quick, and my sense is that Google's probably had this stuff on the shelf for a while, and they've just been trying to decide how they want to put the cards on the table. Are you seeing some of that in the Google stack?
Alex Castrounis:
Yeah, I mean for sure. And I know they have some partnerships now with some pretty big companies that are building on top of this. So I think Uber, if I'm not mistaken, or Uber Eats is building now on that generative AI Vertex platform. I think Wayfair, I think one of the big travel companies, maybe it's KAYAK, I'm not 100% sure, but definitely... And one thing worth mentioning too is there's also part of the data store or the database... Ken had talked a bit about having a data lake and so on. One of the things that's also becoming really prevalent with these new generative AI solutions is the idea of a vector database. And so Pinecone is a good example of that, and Chroma DB is another example of that.
And these are ways to, I don't want to get too technical here so I'll try and summarize it at a very high level, but essentially when you have unstructured data, like your text data, your documents, whatever it is, you can convert that into a very specific kind of numerical format that gets stored in a specific kind of database called a vector database, which essentially encodes all the semantic meaning of your unstructured natural language data and the contextual meaning of it. That becomes very handy later on for doing certain things like putting that conversational interface on top of your data. Because now, when you're doing queries, whether it's through a chat interface or something like that, it allows you to translate your query into pulling the right data out of the database to pass to the large language model, without having to do SQL queries or anything like that, or writing any sort of code. So it's almost automated in that way, once you have your data stored in one of these vector databases.
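The vector-database retrieval idea Alex describes can be shown at a toy scale. This is a deliberately crude sketch: real systems like Pinecone or Chroma use learned embeddings from a model and their own APIs, whereas `embed()` here just counts words from a small hypothetical vocabulary. The shape of the flow is the same, though: embed the documents, embed the query, return the nearest documents to pass to the LLM.

```python
# Toy sketch of vector-database retrieval, as Alex describes it.
# embed() is a crude keyword-count stand-in for a real learned embedding.
import math

VOCAB = ["revenue", "growth", "patient", "care", "security", "breach"]

def embed(text):
    """Crude embedding: one dimension per vocabulary word."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    """Similarity between two vectors, 0.0 when either is empty."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# The "vector database": documents stored alongside their embeddings.
docs = [
    "Quarterly revenue growth beat plan",
    "New patient care protocols rolled out",
    "Security breach response playbook updated",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query, k=1):
    """Return the k documents whose embeddings best match the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("how did revenue growth look?"))
```

In the conversational setups Alex describes, the retrieved text would then be inserted into the LLM prompt, which is what grounds the model's answer in your own data rather than its training parameters.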
Sean Mooney:
That's great. No, that makes a ton of sense. And so I think what's clear to us is there are a lot of technical tools and a couple of leaders, and that's going to change very quickly. I can tell you, even personally, BluWave has been building our recommendation engines using multiple AI technologies for over a year, and we're already replacing maybe half of what we've already done. And we started with the best-in-class tools that were available a year ago. So this is going to require agility; there are going to be new developments and new tools available, and you're going to be on a journey. And I think it shows you that there are some major titans helping, but there are some new things coming as well.
And so Nik, let's jump in as well. You've got these tools at your disposal, you've done your precursor work. What are some of the most addressable use cases, the things that most everyday companies can start doing today?
Nik Kapauan:
Yeah, no, and great commentary from Alex and Ken. And I was furiously writing down notes 'cause these are all issues that we're thinking of as well, trying to figure out how to give it access to our proprietary data and stuff like that.
But as I think about use cases, the construct that I find helpful, talking specifically about generative AI, is that the power of these large language models is tied to the power of language itself, because these things can synthetically generate language. Academically, language serves three functions: language as information, language as expression and language as instruction. So similarly you can bucket use cases for generative AI across those dimensions.
So when it comes to information, and Ken touched on a bunch of this, it's really the power of these models to rapidly collect and synthesize data and information. And you really get something differentiating when you can connect that to your proprietary data. So this would be things like market and customer research for a marketing and sales function. Or report generation for customer operations: you collect all of your customer's data and feed it back to them in an easy-to-read report automatically.
On the sponsor side we do a lot of market research, and we have these documents that we prepare using analyst and associate capacity. Can you automate a lot of that using generative AI? So I think about these gen AI for information use cases as almost a Google-on-steroids kind of use case. And the value prop there is really the time savings.
The next level for these kinds of models is that language isn't just information, it's also expression, being able to convey thoughts and ideas. And that's really the power of these models to generate creative content on any topic, in any voice, in real time. So these could be things like... Content for marketing and sales is the obvious one to start with: content generation for personalized marketing campaigns. Can you create sales agents or chatbots for customer operations? Things like that. Generating a question list for expert interviews is one of the use cases that we've deployed on the sponsor side.
And then finally, and this is where I think there's going to be a ton of value, especially down the road, would be language as instruction. So not only can you intelligently create language and content, but you can also use that to do things, to effect change in the world. Being able to generate code using these LLMs is a perfect example: being able to do software development cheaper and faster than before. I think you're going to start seeing the rise of autonomous agents that can automate a lot of tasks. I've been hearing that Salesforce is launching Salesforce GPT, and all these models that can not only generate content for marketing campaigns but can actually execute campaigns. They could bid on keywords, things like that, to really drive automation.
So that's the concept of how I think about the different use cases. When it comes to which functions I think will see the highest value, there are a few really obvious ones. Marketing and sales, like we said: anything around marketing campaign generation, marketing and customer research, sales agents. Customer operations, so [inaudible 00:31:35], chatbots, customer support. I think R&D and product development could be another big function where this stuff will add a ton of value: being able to quickly test ideas, but also rapid prototyping, using these models as a man-in-the-box MVP for product development. And then obviously software engineering and software development are going to see a big uplift from gen AI too.
And then also data analytics. So I almost see some of these models as a smart analyst in your organization, that you could feed it data and have it run ad hoc analytics and use it in your data science, data analytics work. So that's a few thoughts on use cases and where I think some of the value is going to be in the early stages.
Sean Mooney:
Yeah, I think those are spot on. And maybe to give some personal journey stuff, we've seen our content creation at BluWave go up 500% since using these generative engines, and arguably it's better SEO-ed. So you can get some real value out of these things. I saw a statistic, now this is not validated so it might be a hallucination itself, that 40% of the code in GitHub now has had some AI generative aspect to it. And so there are going to be really huge increases in productivity, of the magnitude that we saw with the internet in the nineties, that are going to take place.
And then the real question, and maybe we'll talk about this a little bit later, is who is the beneficiary of that? Should your coders be cheaper or faster or both? Should your lawyers be cheaper or faster or both? Who gets that beneficial aspect? The spoils of this productivity is going to be something I think humanity will be working through a lot of. And I think the points that you're bringing up are spot-on use cases.
What I think is interesting is that in some ways it's probably the opposite of what people thought it would be. They thought it would start with robotic process automation and plants, then reach the mid-organization, and then creative work. It seems like it's starting at creative and working its way through the rest of the organization, with content and imagery like DALL-E and whatnot. So I think those are really important things that every business should think about in terms of their own organizations, Nik.
So Keith, let's talk about this. So you've got these great engines that are amazing that you can feed things into and they give these great distillations back to you. What are some of the things that companies should be considerate of in terms of using particularly the publicly available models and the implications that they could have on a company?
Keith Thomas:
Sure. Yeah. We just heard a whole bunch of acronyms, a whole bunch of technologies and tools. And me as a technology nerd, I was geeking out, so thank you all for that. There are some challenges with that, as you heard, and for people who are not technology focused, there's a black box problem here. We don't necessarily know what's behind those AI systems. And so to Ken's point, they're building out their own infrastructure and their own platforms, and that's a way to reduce the black box problem, knowing what's behind all of these generative AIs and all the reports that get built.
So we want to make sure that you have an architecture and a design that people can follow, and that we can look at as technology experts to help align not just security, but the integrity and the availability of these systems, so that they stay online and they stay available. And then to the point of what we put in there: data privacy. And I'll keep going back to this. What are the data privacy rules in place to protect your organization and your customers? And how do we align those with our AI models, and with all the different tools that we're going to build our AI platform with?
Sean Mooney:
Yeah. And talk about that. So what happens if someone puts sensitive company data into ChatGPT with history turned on?
Keith Thomas:
Yeah. So if you've got history turned on, well then it becomes part of that AI system, it gets built into the models, and there's the ability for the model to use that data. What if it's poisoned data? What if it's corrupted information that's stuck in there? Then that corrupts your analysis, your models and your reporting. There could be something malicious in there as well, and that gives adversaries the ability to mount an attack. It could be an attack to modify or corrupt the data, or an attack to try to take the data. There are lots of different ways they could come in; vectors is what we call them.
So those are a few of the ways, or a couple of the ways actually, that you want to be aware of when you're building out these models and looking at the overall architecture and design.
Sean Mooney:
Yeah, and I think that's a really good point, because if you think about it, whether it's ChatGPT or Bard, if you put your company's data in, that becomes part of the model. And then if that's something that's competitively sensitive or advantageous to your business, you've just given it to the world. And so...
Keith Thomas:
Yeah, Sean, normally we'll want to have a privacy policy in place for our organization, so that users of the AI platforms know what their company policy is and what they're allowed to put in. And then we also want to make sure we apply least-privilege access to the data. Your company has multiple pools of data, and we want to make sure that each employee only has access to the data that they need access to. That will prevent them from inadvertently taking data and putting it into an AI model.
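Keith's least-privilege idea can be sketched in a few lines; the role names, data pools and `submit_to_ai` helper here are hypothetical, purely for illustration of checking access before anything reaches an AI tool:

```python
# A minimal sketch of least-privilege access checks before data reaches
# an AI platform. The roles and data pools are hypothetical examples.

ROLE_POOLS = {
    "marketing_analyst": {"campaign_metrics", "web_analytics"},
    "finance_analyst": {"general_ledger", "campaign_metrics"},
}

def can_access(role, pool):
    # An employee's role only grants the pools explicitly assigned to it.
    return pool in ROLE_POOLS.get(role, set())

def submit_to_ai(role, pool, payload):
    # Refuse the submission outright if the role lacks access to the pool.
    if not can_access(role, pool):
        raise PermissionError(f"{role} may not send data from {pool}")
    # ... forward payload to the approved AI platform here ...
    return f"submitted {len(payload)} records from {pool}"

print(submit_to_ai("marketing_analyst", "campaign_metrics", ["row1", "row2"]))
# A marketing analyst sending general-ledger data would raise PermissionError.
```

In practice this check would live in whatever gateway sits between employees and the AI tool, backed by the company's real identity and access management system.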
Sean Mooney:
Great lesson. Every company needs to have a policy on these engines so people are very clear about what they can put in, making sure you can't just carte blanche copy your customer list and paste it in because it makes it easier to structure in Excel.
Keith Thomas:
Right.
Sean Mooney:
But it's really tempting and really easy to do, and we certainly have the same temptations with our team, but people don't realize the secondary and tertiary effects that it could have in terms of the competitive protections that a company has. So I think those are great.
So in order to leave a little bit of time for some Q and A, I think let's do a little bit of a lightning round here. And so this is the million-dollar question, and probably the billion-dollar question in terms of, where's all this going? And we're going to talk crystal ball time. The crystal ball on my desk is pretty hazy and it seems to be changing shape and form every time I show up in the morning. And so let's talk about where you all see AI going in the visible future. And so Alex, what are some of the things maybe that you think may take shape, to the extent you can?
Alex Castrounis:
Yeah. I mean, things are moving quickly. Just yesterday, I think, or a couple days ago with Google adding multi-modality to Bard and PaLM, they did a demo where they took a picture of a mobile app screen and then code was automatically generated, actual functional code, just based on pixels of an image. So it's pretty remarkable I think. Not only are we going to see these large language models keep evolving on the tech side, but we'll also see that multi-modality. So adding audio capabilities, adding video capabilities, which are really just sequences of images at a certain frame rate and so on.
And so we'll go from... Right now generative models like ChatGPT and GPT-4 essentially just keep iteratively predicting the next word or the next token, and that's what generates text. But we could see that happening with video, for example: given a sequence of frames, what's the next image or frame in the video? Predicting, and then just recursively doing that. Or before too long, instead of just saying, "Hey, write an email," you'll say, "Hey, I want a 30-second company explainer video." And the next thing you know, it creates the script, the animations, the background music, the whole enchilada, all at once. Things are just advancing very quickly, as are the real-world applications and use cases of some of this generative AI stuff.
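The iterative next-token loop Alex describes can be sketched in a few lines; the `TOY_MODEL` lookup table and `predict_next` function here are hypothetical stand-ins for a real language model:

```python
# A minimal sketch of autoregressive generation: the model repeatedly
# predicts the next token and appends it, so each prediction is
# conditioned on everything generated so far. `TOY_MODEL` is a
# hypothetical stand-in for a real model's scoring of the vocabulary.

TOY_MODEL = {
    ("write",): "an",
    ("write", "an"): "email",
    ("write", "an", "email"): "<end>",
}

def predict_next(tokens):
    # A real model would score every token in its vocabulary; this toy
    # just looks up a continuation for the exact context seen so far.
    return TOY_MODEL.get(tuple(tokens), "<end>")

def generate(prompt_tokens, max_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        if nxt == "<end>":  # stop token ends the loop
            break
        tokens.append(nxt)
    return tokens

print(generate(["write"]))  # ['write', 'an', 'email']
```

The same loop structure applies to the video case Alex mentions: swap tokens for frames and the "model" for one that predicts the next frame from the frames so far.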
So yeah, it's going to be moving quickly. It's hard to predict where we'll be even three months from now, but people need to get started, need to start learning about it. Don't wait. At least look at what's happening right now and start building small and prototyping out specific use cases. Get ahead of the game.
Sean Mooney:
And Alex, I think it's a good point. It's so easy to get overwhelmed by this and just say, "I'm just going to wait." Don't wait. Get to doing something.
Alex Castrounis:
Yeah. And one last thing, sorry: don't forget, and I believe Nik made this point earlier, that AI is more than just generative AI. The good thing about ChatGPT's release is that it exponentially increased public awareness of and interest in AI. But on the other hand, it's also over-indexed people on AI as being just LLMs, when in reality it's this massive field. So when I say get started, it isn't just go all in on LLMs or generative AI. Don't forget, there's a lot you can do with the broader field of AI and machine learning.
Sean Mooney:
And it's an excellent point. Ken, how about you? What does your crystal ball see?
Ken McLaren:
Agree with Alex. Hard to predict. What we can predict is that the pace of change is going to be relentless and unlike what we've seen before, I think is the message. So get ready for rapid change. That's what we share with our leadership teams: the pace of change is going to be much faster than what we've seen in past decades. Sam Altman, CEO of OpenAI, was interviewed by Lex Fridman on his podcast shortly after the launch, and he was asked, "What is GPT-4?" And he said, "It's a system that we'll look back at in the future and say it was a very early AI. It's slow, it's buggy, it doesn't do a lot of things very well." This is the CEO, right after his big new product launch, saying, "It's not that good, but neither were computers when they first launched. And so they're going to get a lot better, and they're going to get better fast."
So the other point I'd raise is, I think we all know about Moore's Law, which has really driven innovation for almost 60 years: the number of transistors on a chip doubling, or the cost per compute halving, roughly every two years. What we see now with AI algorithms is their performance doubling almost every three to six months. Big tech companies have about $400 billion of operating cash flow annually that they're going to be investing behind this, and so you see all of these dollars starting to divert toward rapid innovation.
So from the first iPhone launch to the iPhone 14, it took about 16 years to go from gen one to gen 14. Think about what would happen if ChatGPT's 14th version came out 16 months from now instead of 16 years from now. The pace of change is going to be much faster. So our businesses need to be ready to adopt the new technology, but not anchor on ChatGPT, or whichever tool you pick, being the last and final word. This is the start of a long journey that's going to move quite quickly. So be agile, be ready for change, have good foundations in place with all the stuff the team talked about here today, but expect a pretty rigorous pace of change. The future's really in front of us; we're just getting started with this.
Sean Mooney:
I think it's great advice, and it's easy to be daunted but it's better to get excited. And if you get excited about it, boy, the future is kind of uncapped. Nik, how about you?
Nik Kapauan:
Yeah. I mean, I'm a big believer that generative AI has as much transformative potential as some of the other big technologies we've seen in our lifetime: the personal computer, the internet, the mobile internet, the cloud. So it's going to be transformational.
And you'll probably be users of generative AI even if you're not trying to be, right? Google search, Bing search, your favorite CRM, Microsoft Copilot, all these tools that we're already using are going to get enabled by generative AI. It's going to become like electricity or the internet or analytics: something foundational that businesses just need as table stakes.
I think the way we're thinking about it, and it's still very early days, is that it's about experimenting. Obviously be careful of privacy and data and all that, but anytime you need to Google something or search for something, try an LLM. And every time you need to generate content for something, try experimenting with an LLM. It's through that that you'll uncover really powerful use cases that can be scaled and productionized across your business. So I'd say don't be afraid of it. Experiment with it and find exciting use cases, responsibly, because it's transformative.
Sean Mooney:
That's great. Keith, how about yourself?
Keith Thomas:
Sure. I always think about it as: how are we going to enhance our users' experience? And that's something generative AI can do. Obviously today everyone's experimenting, to Nik's point, which is fantastic. I do it too. And if you aren't, you should.
I see some pretty big areas actually where AI is going to be very supportive for us. I think it was Bill Gates who said that the winners in AI are the ones who are going to create the personal assistants, the ones that are going to help us. And so we think about how AI might be used in training, taking someone through a training program and being able to answer questions on the fly.
And then also, there are certain industries that don't have enough professionals to staff their positions. Cybersecurity is a huge, tremendous industry where there aren't enough skilled workers to fill all the open positions. So we'll see AI come in, and it won't necessarily replace those professionals, but it'll relieve them of work that can be handed off to an AI platform, and then they can pivot and start working on things that improve and grow the business in different directions. So those are a few of the ways I see AI helping us in the future.
Sean Mooney:
Awesome. Really, really important considerations. Okay, well we've had I think a rapid fire of really insightful information here. We have a few questions that have come in, so let me try to paraphrase if we can get to one or two of them. And so one of them is, if you think about these hallucinations that we've talked about in different places during this conversation, and there's this propensity for errors, how do you think about applying some of these generative AIs in different use cases and what use cases maybe are more appropriate than others, given the fact that we don't quite know how it comes up with some of these answers and why they're right and why they're wrong in certain places? Does anyone have a take on that?
Ken McLaren:
I can take a quick go, Sean. So for our [inaudible 00:47:30], again, in the healthcare space, we're very sensitive to the ultimate use case, and hallucinations would not be good if we were in a customer-, patient- or market-facing position. So a human in the loop is something we talk about quite a bit: using gen AI, embracing it, but having a human in the loop, in the process, so you're putting some gating around it. That's something we always advocate, number one.
Two, where we think there can be some more scaled use cases: there's definitely the ability today to train your LLMs. You can fine-tune them and put a lot of guardrails around them to say, "Here is our protocol, this is the way that we manage our communications." By fine-tuning and training, and there are a number of tools and techniques, you heard Alex talk a bit about vector embeddings, you can put the LLM on guardrails, still with a human in the loop, but trained so it does what you direct it to. So those are two initial thoughts and things we spend time talking about. But responsible AI, and you heard a lot of different versions of that today, is critically important. So that's definitely needed in our space.
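The guardrail idea Ken mentions, grounding the model in approved material via vector embeddings, can be sketched roughly like this; the documents and their 3-dimensional vectors are hypothetical (real embeddings have hundreds of dimensions and come from an embedding model):

```python
# A minimal sketch of embedding-based retrieval: embed your approved
# documents, find the ones closest to a question, and pass only those to
# the model as context, so its answers stay on your guardrails.
# The documents and tiny 3-d vectors below are hypothetical examples.
import math

DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "escalation protocol": [0.1, 0.8, 0.2],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query and keep the top k.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A question whose embedding points toward the refund policy:
print(retrieve([0.85, 0.15, 0.0]))  # ['refund policy']
```

The retrieved text would then be placed in the prompt alongside the user's question, which is what keeps the model answering from your protocol rather than from its general training data.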
Sean Mooney:
And I think that's a great answer, the human in the loop. And I think that's one of the other big themes that we touched on a little bit, is a lot of these are copilots, right? You're not supposed to let them off on their own. It's something that's going to drive productivity and enhancements, but you can't just let them untethered out into your customer bases or your users, et cetera. And certainly that's something that I think maybe for future conversations we can talk about the moral, the ethical, the compliance considerations that we should think about.
I think we have one more question that we can probably get to here, and it's in more of the practical side. What do you think are some of the AI applications that can be used to maybe optimize some of the mid and back-office functions within companies and drive savings that might not necessarily be LLM, but could actually improve the efficiencies of companies through the middle of the business?
Nik Kapauan:
Yeah, I think there's a lot. The easy examples are all customer facing, marketing and sales. But in terms of back-office operations, the big one that comes to mind is reporting and analytics in general. So being able to quickly synthesize customer data and generate reports; on the private equity side, things like rapidly synthesizing investment memos, or going into a data room and listing out all the interesting things. I think of an LLM almost as a really strong-performing analyst or associate, just another member of the team that you can point at different tasks to automate. And sometimes they're wrong, just like real people are wrong, so you have to evaluate that. But I think time savings on the back end for things like knowledge discovery, information synthesis and report generation could have huge impact.
Sean Mooney:
Yeah, I think that's going to be massive as well. The only other use case I'll mention that we see coming in from our client base is around call centers. There's a lot of training involved, so it's really more on the LLM side, but how do you resolve your customers' needs earlier using this kind of dynamic, bilateral generative AI? That lets the humans in your call centers work on the really hard stuff and/or find answers to the really obscure questions much quicker. So what we're seeing is that it can have a cost-savings impact, but it can also enable you to serve your customers better and faster. That 10-minute wait time can go to 30 seconds, and your customers can get access to better information faster. So that's something where, at least from a what's-coming-in perspective, we're seeing some helpful use cases.
The only last question I have, this sounds like a plug but it's not, are there any partners that can assist you with building out use cases? And the answer is yes, you can call BluWave and we know them all. So a lot of [inaudible 00:51:34], we know a lot of them. But we'd be happy to help with this ecosystem that we're building out rapidly, that are addressable for not only larger companies but particularly SMBs.
So that is all we have for today. I want to give really, really special thanks to Alex and Nik and Ken and Keith here for sharing these insights. And maybe we'll get the band back together again in a few months and we'll say that it's all changed again. So thank you so much everyone for joining us today.
For more information on how your business can be connected with top advisors, experts and/or consultants in the world of business intelligence, predictive analytics, AI or anything else related to building your business with more speed and certainty, please visit bluwave.net. That's B-L-U-W-A-V-E.net. Please continue to look for the Karma School of Business podcast anywhere you find your favorite podcasts, including Apple, Google and Spotify.
We truly appreciate your support. If you like what you hear, please follow, review and share. It really helps us when you do that, so thank you in advance. In the meantime, let us know if there's anything we can do to support your success. Onward.
Welcome to the Karma School of Business, a podcast about the private equity industry, business best practices and real-time trends. This episode is a special presentation of a webinar we recorded with some of the very best experts in AI from our private equity client base and our business builder network. We discuss important precursor steps needed to adopt AI, key technologies, things you can do right now and a lens into the future of AI.
This episode is brought to you today by BluWave. I'm Sean Mooney, BluWave's founder and CEO. BluWave is the go-to expert of those with expertise. BluWave connects proactive business builders, including more than 500 of the world's leading private equity firms, to the very best service providers for their critical, variable, on point and on time business needs.
I'm very pleased to be joined by Alex Castrounis, Nik Kapauan, Ken McLaren and Keith Thomas. And so what we'll do briefly here is make introductions from each of the panelists so you understand who's joining us today and the insights that they're bringing. To kick things off, Alex, can you give us a quick intro?
Alex Castrounis:
Thanks, Sean. Hey everyone, I'm Alex Castrounis. I'm the founder and CEO of Why of AI. We're an AI strategy consulting and business training company that helps companies demystify and understand AI and develop responsible AI strategies to drive business growth and customer success. I'm also a professor of artificial intelligence at Northwestern University's Kellogg School of Management as part of their MBAi program. And I'm the author of a book called AI for People in Business. Great to be here. Thanks.
Sean Mooney:
Thanks, Alex. Nik, how about yourself?
Nik Kapauan:
Thanks, Sean. Hey, everyone. My name is Nik Kapauan and I lead digital at Access Holdings, among other things. And thanks to Sean and BluWave, who have been an incredible partner to us throughout the years. At Access we aspire, and we're very early in the journey, to build what we call the next-generation private equity firm, and analytics and data are central to that: thinking through how we apply all these technologies to improve what we do as a sponsor across the investment lifecycle, but also for value creation in our portfolio businesses.
So prior to Access I was a consultant. I was with McKinsey for six years, primarily focused on digital business building and innovation strategy. And started off my career as a software engineer and startup founder. Looking forward to being on this panel with everyone.
Sean Mooney:
Great, thank you. Ken, how about yourself?
Ken McLaren:
Hey, nice to see everyone. Ken McLaren, part of Frazier Healthcare Partners. If you don't know Frazier, we're a mid-market, healthcare-focused PE firm, investing out of [inaudible 00:03:01]. We're about a one-and-a-half-billion-dollar fund, exclusively focused on healthcare for 32 years. That's all we do.
To Sean's point up front, we believe significantly in value building. Our center of excellence team, which I'm a part of, actually has more members than our investment team, which is a little bit unique in the mid-market. We really work closely with our [inaudible 00:03:18] to build value; that's what we do. Data and AI is the vertical that I lead, and we've been investing behind data and AI as a strategy for the last three years. We just really believe in the value it will bring to our portfolio companies. So excited to be here. Prior to Frazier I spent about 20 years as an executive, building and leading data and analytics businesses in the healthcare space.
Sean Mooney:
That's great. Thanks Ken. Keith, bring us home.
Keith Thomas:
Thanks. Keith Thomas, AT&T Cybersecurity. I've been with AT&T for a little over 14 years, and I've got about a 25-year track record in IT, wearing all kinds of hats. Today we've been really fortunate in looking at the AI space and trying to understand how to implement AI tools and other cybersecurity technologies safely, for our customers and for businesses like the partners here today to consume and use safely.
Sean Mooney:
Right. Thank you Keith.
So as we kick off here, maybe to give people a sense of the agenda: we're going to talk a little bit about the precursors that most companies should be thinking about before they're really able to activate the AI journey, some of the tools that are available or will be soon, and then maybe a lens into where the future is and how quickly things will change. As we get into the first part of the webinar, I'd like to set the stage. Jumping into AI with both feet is all well and good, and I think people have to do it. But in working with private equity firms in support of this journey, we've found a number of things that should probably be done in advance, things that may have more immediate ROI but will also enable the effectiveness of your AI journey.
And so to get started off I'd love to get some of your thoughts, Nik, about what are some of these precursor activities that people should be thinking about?
Nik Kapauan:
Yeah, for sure. And I'll lean back on my consulting days, when we did these big digital transformations. At McKinsey we typically had almost a three-layer-cake conceptual model when we thought about these big transformations: you have strategy at the top, then your foundational enablers in the middle, and at the bottom execution and change management to actually roll it out. So it starts with strategy. I think that's the first point to make: understanding where the value is and how you're going to prioritize, the basics of a good strategy, really sets your strategic compass for when you deploy these technologies. And your strategy for using AI obviously needs to tie to your broader digital strategy, which needs to tie to your broader business strategy as a firm: where the value is, where you're going to focus.
And I'd also bifurcate it, because when we say AI, it's a broad spectrum of things. Normally I think about it as: you have your traditional analytics, which is descriptive analytics, just getting stuff on a screen and reporting. Then you have predictive analytics, for predicting the future. Then prescriptive analytics: predicting the future and then actually doing something about it automatically.
And with those kinds of use cases, the value is well known and the practice is a bit more mature. So you can be more systematic with a strategy of taking best practices and applying them to your specific business context. Versus something like generative AI, which I think is what we're all excited to talk about here, ChatGPT and LLMs and things like that, where we know there's a ton of value on the table but it's still new. So the way you approach that strategy is a bit more iterative, a bit more experimental: getting to use cases and experimenting as soon as you can to figure out where the value is. But in either case, whether you're in well-trodden waters or experimenting with something new, it's about having that strategic compass, knowing where the value is and how it relates to what you're trying to achieve as a business.
And then the second layer below that is the foundational enablers. The biggest one here is obviously data; you need to do analytics on something. So it's getting your data in order. When I've done some of these big analytics projects in the past, 80 to 90% of the effort is just getting the data centralized and clean; the analytics is almost the fun part, once your data is all there. So that's definitely something everyone should be doing now as a no-brainer, to really extract the value from some of these advanced analytics capabilities.
And there are different ways you can do that, depending on how complex your organization is. For us at Access, there's a Microsoft Azure data platform that we roll out at all of our portfolio businesses. It ingests data from all their source systems, serves it for analytics, and then sends it over to us at Access for portfolio monitoring, fund valuations and things like that. So getting your data ecosystem in order is critical.
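The centralize-and-clean step Nik describes can be sketched as a tiny ingest-and-normalize routine; the source systems, field names and records below are hypothetical, just to show the shape of the work:

```python
# A minimal sketch of data centralization: pull records from several
# source systems, rename source-specific fields to one shared schema,
# coerce types, and land everything in one place for analytics.
# The "CRM" and "ERP" records and their field names are hypothetical.

crm_rows = [{"CustID": "17", "Rev": "1200.50"}]   # strings, odd field names
erp_rows = [{"customer_id": "17", "revenue": 1200.5}]  # already clean

def normalize(row, mapping):
    # Rename fields per the mapping, then coerce to the shared types.
    out = {target: row[src] for src, target in mapping.items()}
    out["customer_id"] = str(out["customer_id"])
    out["revenue"] = float(out["revenue"])
    return out

warehouse = []
warehouse += [normalize(r, {"CustID": "customer_id", "Rev": "revenue"})
              for r in crm_rows]
warehouse += [normalize(r, {"customer_id": "customer_id", "revenue": "revenue"})
              for r in erp_rows]

print(warehouse)
# [{'customer_id': '17', 'revenue': 1200.5}, {'customer_id': '17', 'revenue': 1200.5}]
```

A real platform like the Azure setup Nik mentions does this at scale with scheduled pipelines, but the core job, mapping messy source schemas onto one clean one, is the same.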
Some of the other enablers here are talent, technology, and there's a build versus buy question when it comes to those things. How much do you insource versus partner with? And the answer will depend on your specific context, but definitely important ones to think through.
And then finally, one last comment on the change management piece. Having done a lot of these digital transformations before, I think one of the biggest predictors of success is a champion inside the organization who can really own the vision and drive the opportunity. Often that's the CEO, or someone the CEO directly holds accountable for the digital agenda. Really having that leadership voice to set the vision, drive the organization and mobilize change is critical to success for analytics and any other kind of major digital transformation.
Sean Mooney:
Nik, I thought you brought up a lot of really good points. And one I think is so important is just the basics: you've got to do the unglamorous data cleanliness part. When I was investing in information, data and analytics businesses, and certainly everyone here at BluWave [inaudible 00:09:38], the only thing worse than no data is bad data. So you've got to do the unglamorous part of making sure that stuff's good, and keep it good, because it's like a piece of equipment: it has to be maintained. Anytime there's rotation or force on anything, it wants to lose calibration. So I thought that was a great point.
There's also this idea of just doing the basics. You're going to get a lot of ROI out of visualizing and analyzing your data; don't forget those really good precursor steps. And then it's the change management, and we can talk a little bit later about this too: AI is going to be part of your strategy, but it's a tactic, it's not your strategy. So I think there was a lot in there that was extremely helpful. And so if we think about that journey: you've got your data, it's clean, you're analyzing it, you're getting ready to kick off the AI. Keith, what are some of the things that any company should be really thoughtful about in terms of protecting the castle as you're building these data assets up?
Keith Thomas:
Yeah, absolutely. There's some great points here. I love that you said change management first of all, because if you don't have a structured change management process to support the integrity of the data being put in then you're going to end up with... We'll call it data poisoning, but it could be inaccuracies in your data that then throw everything else off. And we don't want to see that. So we want to make sure that organizations protect the data that they're actually ingesting, and definitely using a change management process is highly important.
As these AI systems get rolled out, and Nik, you had mentioned that you have them at your different organizations, there's a dependency on AI at this point. And if you don't get that data, if you don't get those analytics that you're looking for because the AI system goes down or becomes unavailable, then how does your organization respond to that? And so it's a very important piece, that you want to make sure that you have some type of disaster recovery failover capability. Even if it is to go to a manual approach, that's okay. Having the plan is the most important part of that.
And then finally I'll say one more thing, Sean, which is we're going to invest a lot of time, a lot of resources into building these data models, into building these systems. And we want to make sure that we protect them from theft, making sure that if someone gets into our organization that they can't pull that model out and take it with them to use somewhere else. And there's some ways that we protect using different security tools and different security capabilities to support the idea of a model theft by attackers.
Sean Mooney:
That's great. I think those are great points, and once again we're seeing this theme of failing to prepare is preparing to fail. And so you've got to do the work in advance, not just even on the data and the analytics side, but also on protecting your data. At BluWave we call the cybersecurity issue our Friday at five call, because it's always Friday at five. And there are things that large companies certainly have better, bigger resources for.
But I'll tell you, just through the everyday here there are all sorts of resources available to SMBs and any size company that can do a lot to protect your organization. And in some ways, increasingly, your most critical assets are your data. And so if you're looking for this you can go to almost any MSP, and it can take you a long way as a business. And so I think this is a great way to start, is doing the precursor work that's maybe not as glamorous but that's going to enable you to do a lot of the really cool fun stuff in the days ahead here.
So you have your systems protected, you're cleaning your data, you're visualizing, you're analyzing, you have change management and you're ready to start putting AI in place. Let's talk about some of the tech that people and business leaders should be thinking about, considering above and beyond maybe even some of the consumer available LLMs like ChatGPT and Bard, et cetera. So Ken, I'd be curious, what's your thoughts on some of the best-in-class tools and tech stacks that businesses should be thinking about as they get going on this journey?
Ken McLaren:
Thanks Sean, and love the preamble 'cause I think a lot of people like to jump into the tech and the AI but getting the foundations in place first is critical. But once you've got those foundations I think technology is important. We do still do most of our prototyping on desktops. So once you've got your data in place just using Jupyter Notebook, Python and you pick your toolkit. R, scikit-learn, PyTorch, TensorFlow, everyone has their own tool of the trade. But we do a lot of prototyping on desktops, and as we prove the value and the use cases we then start getting ready for production. But don't build in production first, get the proof of value with your customer market in place before you start building.
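Ken's prototype-on-the-desktop-first advice can be sketched even without scikit-learn; the toy data and single-threshold "model" below are purely illustrative stand-ins for a real notebook workflow, not anyone's actual pipeline:

```python
import random

# Toy labeled data: one feature per row, drawn around -1 for class 0 and
# around +1 for class 1. Purely illustrative, not a real dataset.
random.seed(0)
labels = [random.randint(0, 1) for _ in range(200)]
rows = [(random.gauss(1.0 if y else -1.0, 0.5), y) for y in labels]

# Hold out part of the data, as you would in a notebook prototype.
train, holdout = rows[:150], rows[150:]

# "Model": a single threshold at the midpoint of the per-class means.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

# Score on held-out data before ever thinking about production.
accuracy = sum((x > threshold) == bool(y) for x, y in holdout) / len(holdout)
print(f"held-out accuracy: {accuracy:.2f}")
```

Only once a held-out metric like this proves the value would you invest in building it for production.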
When we're ready to build we do like Microsoft Azure. We actually say we're cloud-agnostic, but the reality of being in mid-market healthcare is the majority of our portfolio companies are already in Microsoft Office. The majority of our companies are already using Azure Cloud in some ways, so it's a natural step. And we do really love the tooling that Microsoft has been putting in place. We work with other cloud providers as well, but Microsoft Azure is just an easy path for us. And we really like, to Keith's point before, that we've got a lot of good terms and conditions, cybersecurity, data privacy, active directory. A lot of the stuff you don't want to be thinking about with AI kind of comes out of the box with Microsoft. And so that's a nice place to start.
From there the business has been investing... Microsoft is investing heavily behind AI, so not just the OpenAI partnership but their Azure ML environment before that. The cognitive services, the open source tooling that they bring in so you can natively work in Databricks and Spark, you can use Kubernetes and Kafka. And so whether you're streaming or batching, your tool of choice is pretty much available for you there. So it's a good platform and a good [inaudible 00:15:44] for all of those reasons. And so we really do have a lot of good success working there.
The number one thing we always will start with is just having a good data lake infrastructure. You can pick some of your tools around that, but it's not just the clean data, it's that that data is hosted and production ready. So having your data pipes with things like Azure Data Factory, having good storage, whether it's Azure Data Lake storage or using Databricks Delta Lake on top of that, having a production-ready data environment is important. And then once you've got your models ready, really thinking about the productionalization, which is your MLOps world. And again, you can do a lot through Azure ML, but you can plug in a lot of open source tools.
So there's really no one platform to rule it all. There's a lot of best of breed tools, but whether it is Azure or Google or AWS, you've got good environments to work in. And we just probably spend 80% of our time in Azure for the reasons I mentioned before, and they continue to invest in and innovate in that platform space. So we see a lot of great benefits, and because of the healthcare world that we live in, privacy, security, easy access, reasonable scaled production-ready tools is really important to us. We're not a hyper scaled organization so we can get a lot of good bang for buck and move fast. So good tool for us to use. Most of our vendors and partners have experience with it, so it's pretty easy to get talent in there and move pretty fast.
Sean Mooney:
Ken, and... That's great. And I think particularly with your background in healthcare there's all sorts of much higher order things to think about in terms of HIPAA data and very sensitive information. And we've heard a lot from our clients that they're probably the earliest at adopting Microsoft Azure, particularly as it relates to the OpenAI partnership and the confidentiality it provides. We're going to go deeper into this, but can you talk about maybe some of the things that you've seen that Azure solves, particularly as you try to bring in ChatGPT-like capabilities into organizations?
Ken McLaren:
For sure. So just in general, I'd say yes, out of the box Azure's posture around cybersecurity, data privacy, active directory, you get a lot of security compliance out of the box. And certainly putting that in place is critical, as Keith shared up front. We love where AI is going. We're intrigued by the use cases gen AI is opening up. Many of our portfolio companies are concerned about using some of the open tools where it's not clear what will happen to the data that goes into a prompt. Will it be used to fine tune or train a model? And eventually what happens to that data? I think the open tools are starting to catch up and share some better language around protecting user data, but we still guide our portfolio companies for sensitive business data, customer data, keep it out of any open tool and use the protection of the Microsoft Azure environment where OpenAI services and Microsoft has strong protections for business and customer data.
And again, to the healthcare point Sean, yes, for HIPAA and PHI we do a lot of work just to make sure those environments are protected in the right ways, for not just our businesses and teams but our customers and patients that are all impacted by the data that's there. So that's pretty critical for us, and we certainly do guide in those directions, and we like the Microsoft terms and conditions around the commitments not to use Azure OpenAI data in any of the underlying models that they train.
Sean Mooney:
Yeah, and that that's spot on, and so I appreciate that. So Alex, what about you? What are some of the things that you're seeing in terms of what types of tech stacks and tools that companies are starting to adopt to now enable the activities?
Alex Castrounis:
Yeah, absolutely. I mean, as Ken said, there's multiple cloud platforms out there that are really quite powerful, do amazing things. As we've seen in software development in general, there's sort of this movement more and more towards no-code, low-code solutions. And part of the benefit of those things is, one, accessibility and making it easier for people and organizations to sort of build software, or in this case train models, iterate on models, tune them, optimize them, deploy them and so on. But it also sort of gives you the ability to abstract away some of the underlying DevOps or orchestration that needs to be done. Or most importantly sometimes, governance, like model governance and data pipeline governance and so on.
So in addition to the Azure offerings that Ken had talked about, I think Google has released recently their generative AI studio as part of Vertex. And it's really quite interesting because I think... If you think of the superhero movies, like Iron Man with Jarvis and I think in Black Panther they have something like that, a lot of the way that AI could be really powerful using especially the conversational interfaces and natural language interfaces isn't so much just having the parameters of a model like GPT-4 or ChatGPT to generate text, like if you're trying to write an email or summarize something, but rather to create this interface that becomes an information retrieval system or a question answering system on top of your data.
And I think that's really where we're going to see a lot of the real value already, but especially in the near future. And I think some of the platforms out there, not just Google but also some of the open source stuff like LangChain, which I'll talk more about here in a minute, are really pioneering and pushing that a lot forward. So in the case of Generative AI Studio, what's really amazing there is it gives you the ability to load up your data in different formats, whether it's CSV or PDF or text in other formats or what have you. And then you can put on top of it, they have a model hub. And in that model hub you could choose either... Google has their own model Bard which is similar to ChatGPT. The underlying model is actually PaLM. So you can choose different versions of that, but it's also a hub that has open source models like LLaMA and other models.
So it really gives you a lot of flexibility there to put a LLM interface like I'm talking about on top of your data for information retrieval and question answering, as well as even Google search itself on top of your data, which is pretty powerful. And it allows you to bias actually between how much of the output of this generative AI is generated by the model parameters, in the same way if you just went into ChatGPT's user interface and asked it to summarize something versus from your actual data. So you can kind of bias one way or another, 'cause obviously one of the main concerns a lot of organizations would have are things like trustworthiness, hallucinations and so on.
So trying to get more towards a true information retrieval system that you can rely on, and that's, excuse me, trustworthy, as well as it's all very containerized where you can have your own encryption key that Google doesn't have access to. So it can be HIPAA-compliant and everything else, and no one can really get access to your data except yourself. So it solves a lot of those issues that I know a lot of organizations are wondering about when it comes to proprietary data, confidential data and intellectual property, and just kind of sending that off to a closed-source proprietary API in the case of some of the OpenAI stuff. And then it also allows for a lot of that other ops and governance type stuff in terms of model tuning and iteration and deployment and so on.
One of the other things that's interesting with a tool like LangChain for example, is if you're just going into a user-facing interface like ChatGPT, it's hard to track your prompts and version them as you're doing prompt engineering and so on. And so these kinds of frameworks like LangChain really help you, excuse me, to templatize, if you will, your prompts and also create variables in your prompts so you can inject different things. You can iterate on the prompts, you can version them and then you can also chain prompts. So the output of one LLM call becomes the input of another one, and it allows you to then set up these pretty sophisticated systems that not only do that chaining but also can integrate with outside sources like the web or Wikipedia or other types of databases and so on, and APIs.
So I think a lot of that, and of course Hugging Face is one of the really big open source options out there that's really compelling, has done a lot with making transformers widely accessible to folks.
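As a rough illustration of the prompt templating and chaining Alex describes, here is a plain-Python sketch; `fake_llm`, the template names and the chain structure are hypothetical stand-ins for the concept, not LangChain's actual API:

```python
# Illustrative two-step prompt chain. fake_llm stands in for a real LLM
# call (via LangChain, an API client, etc.); templates carry version
# suffixes so prompt-engineering changes can be tracked over time.
def fake_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

def fill(template: str, **variables: str) -> str:
    """Inject named variables into a prompt template."""
    return template.format(**variables)

SUMMARIZE_V2 = "Summarize the following notes in one sentence:\n{notes}"
FOLLOW_UP_V1 = "Write a short follow-up email based on this summary:\n{summary}"

# Chaining: the output of the first LLM call becomes input to the second.
summary = fake_llm(fill(SUMMARIZE_V2, notes="Q3 pipeline review notes..."))
email = fake_llm(fill(FOLLOW_UP_V1, summary=summary))
print(email)
```

A framework like LangChain adds, on top of this basic shape, the bookkeeping for versions, variables and integrations with outside data sources.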
Sean Mooney:
I'd say too. I mean, I think it's really clear that, A, I think Microsoft got an early start according to Ken, and they've got the most, I think, established systems in place. But Google's catching up real quick, and my sense is that Google's probably had this stuff on the shelf, they're just trying to decide how they want to put the cards on the table for a while. Are you seeing some of that in the Google stack?
Alex Castrounis:
Yeah, I mean for sure. And I know they have some partnerships now with some pretty big companies that are building on top of this. So I think Uber, if I'm not mistaken, or Uber Eats is building now on that generative AI Vertex platform. I think Wayfair, I think one of the big travel companies, maybe it's KAYAK, I'm not 100% sure, but definitely... And one thing worth mentioning too is there's also part of the data store or the database... Ken had talked a bit about having a data lake and so on. One of the things that's also becoming really prevalent with these new generative AI solutions is the idea of a vector database. And so Pinecone is a good example of that, and Chroma DB is another example of that.
And these are ways to, I don't want to get too technical here so I'll try and summarize it very high level, but essentially when you have unstructured data, like your text data, your documents, whatever it is, you can convert that into a very specific kind of numerical format that gets stored in a specific kind of database called a vector database, which essentially encodes all the semantic meaning of your unstructured natural language data and the contextual meaning of it, and that becomes very handy later on to do certain things like putting that conversational interface on top of your data. Because now it allows, when you're doing queries by... whether it's through a chat interface or something like that, to translate your query into pulling the right data out of the database to pass to the large language model without having to do SQL queries or anything like that, or writing any sort of code. So it's almost automated in that way, once you have your data stored in one of these vector databases.
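A minimal sketch of that vector-database idea, with hand-made three-dimensional "embeddings" standing in for what an embedding model and a store like Pinecone or Chroma would actually provide; the documents and vectors are invented for illustration:

```python
import math

# Hand-made 3-d "embeddings"; in practice an embedding model produces
# high-dimensional vectors and a vector database (Pinecone, Chroma, etc.)
# indexes them so semantically similar text lands close together.
docs = {
    "refund policy": [1.0, 0.0, 0.0],
    "shipping times": [0.0, 1.0, 0.0],
    "warranty terms": [0.7, 0.1, 0.7],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings best match the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector near the "refunds" direction pulls the right document
# with no SQL and no hand-written matching logic.
print(retrieve([0.9, 0.1, 0.1]))
```

The retrieved documents are what would then be passed to the large language model as context for answering the question.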
Sean Mooney:
That's great. No, that makes a ton of sense. And so I think what's clear to us is there's a lot of technical tools, there's a couple of leaders and that's going to change very quickly. And I can tell even personally, BluWave has been building our recommendation engines using multiple AI technologies for over a year, and we're already replacing maybe half of what we've already done. And we started with the best-in-class tools that were available a year ago. And so this is going to require agility, and there are going to be things that are developing and new tools available, and you're going to be on a journey. And I think it shows you that there's some major titans helping, but there's some new things coming as well.
And so Nik, let's jump as well into... You've got these tools at your disposal, you've done your precursor. What are some of the most addressable use cases that companies should start thinking about this, for most everyday companies can start doing today?
Nik Kapauan:
Yeah, no, and great commentary from Alex and Ken. And I was furiously writing down notes 'cause these are all issues that we're thinking of as well, trying to figure out how to give it access to our proprietary data and stuff like that.
But as I think about use cases, the construct that I find helpful is the power of... Talking specifically about generative AI, that the power of these large language models is tied to the power of language itself, because these things can synthetically generate language. So academically, language serves three functions. It's language as information, language as expression and language as instruction. So similarly you can think of bucket use cases for generative AI across those dimensions.
So when it comes to information this is really... And Ken touched on a bunch of this, it's really the power of these models to rapidly collect and synthesize data, information. And you really get something differentiating when you can connect that to your proprietary data. So this would be things like market and customer research for a marketing and sales function. Report generation for customer operations, you collect all of your customer's data, feed it to them in an easy-to-read report automatically.
On the sponsor side we do a lot of market research and we have these documents that we prepare using the analysts' associative capacity. Can you automate a lot of that using generative AI? So I think about that, these kind of gen AI for information as almost like a Google on steroids kind of use case. And the value prop there is really when it comes to time savings.
The next level for these kind of models is, language isn't just information, it's not data, but it's also expressions, being able to convey thoughts and ideas. And that's really the power of these models to generate creative content on any topic, in any voice, in real time. So these could be things like... Content marketing and sales is kind of the obvious one to start with. So content generation for personalized marketing campaigns. Can you create sales agents or chatbots for customer operations? Things like that. So being able to create... Generating a question list for expert interviews is one of the use cases that we've deployed on the sponsor side.
And then finally, and this is where I think there's going to be a ton of value here, especially down the road, would be language as instruction. So not only can you intelligently create language and content, but you could also use that to do things, to effect change in the world. So being able to generate code using these LLMs, that's a perfect example. So being able to do software development cheaper and faster than before. I think you're going to start seeing the rise of autonomous agents that can automate a lot of tasks using these models. So I've been hearing about Salesforce launching Salesforce GPT, and all these models that can not only generate content for marketing campaigns but can actually execute campaigns. It could bid on keywords, things like that, to really drive automation.
So that's kind of a concept of how I think about the different use cases. When it comes to what function, I think we'll see the highest value. There's a few really obvious ones. So marketing and sales, like we said, anything kind of marketing campaign generation, marketing customer research, sales agents, customer operations, so [inaudible 00:31:35], chatbots, customer support. I think R and D and product development could be another big function where this stuff will add a ton of value. So being able to quickly test ideas, but also for rapid prototyping. So being able to use these models as a man-in-the-box MVP for product development. And then obviously software engineering, software development too is going to be seeing a big uplift from gen AI.
And then also data analytics. So I almost see some of these models as a smart analyst in your organization, that you could feed it data and have it run ad hoc analytics and use it in your data science, data analytics work. So that's a few thoughts on use cases and where I think some of the value is going to be in the early stages.
Sean Mooney:
Yeah, I think those are spot on. And maybe to give even some personal journey stuff, we've seen our content creation at BluWave go up 500% since using these generative engines, and arguably they're better SEO-ed. And so you can get some real value out of these things. I saw a statistic, now this is not validated so this might be a hallucination itself, but 40% of the code in GitHub now has had some AI generative aspect to it. And so there's going to be really huge increases in productivity, kind of the magnitude that we saw on the internet in the nineties, that are going to take place.
And then the real question, and maybe we'll talk about this a little bit later, is who is the beneficiary of that? Should your coders be cheaper or faster or both? Should your lawyers be cheaper or faster or both? Who gets that beneficial aspect? The spoils of this productivity is going to be something I think humanity will be working through a lot of. And I think the points that you're bringing up are spot-on use cases.
What I think is interesting is in some ways it's opposite of probably what they thought it would be. They thought it would be more robotic process automation in plants, and then it would get to mid-organization and then creative. It seems like it's starting at creative and then working its way through the rest of the organization with content and imagery like DALL-E and whatnot. So I think those are really important things that every business should think about in terms of their own organizations, Nik.
So Keith, let's talk about this. So you've got these great engines that are amazing that you can feed things into and they give these great distillations back to you. What are some of the things that companies should be considerate of in terms of using particularly the publicly available models and the implications that they could have on a company?
Keith Thomas:
Sure. Yeah. We just heard a whole bunch of acronyms, a whole bunch of technologies and tools. And me as a technology nerd, I was geeking out. So thank you all for that. There are some challenges with that, as you heard, and for people who are not technology focused, there's a black box problem here. We don't necessarily know what's behind those AI systems. And so to Ken's point, they're building out their own infrastructure, they're building out their own platforms, and that's a way to reduce the black box problem of knowing what's behind all of these generative AIs, knowing what's behind all the reports that get built.
So we want to make sure that you have an architecture and a design that people can follow, and that we can look at as technology experts and help align, not just security, but align the integrity and the availability of these systems so that they stay online and they stay available. And then to the point of what we put in there, data privacy. And I'll kind of keep going back to this. What are the data privacy rules that are in place to protect your organizations and your customers? And how do we align that with our AI models and how do we align it with all the different tools that we are going to build out our AI platform with?
Sean Mooney:
Yeah. And talk about that. So what happens if someone puts sensitive company data into ChatGPT with history turned on?
Keith Thomas:
Yeah. So if you've got history turned on, well then it becomes part of that AI system, it gets built into the models. And there's the ability for the model to use that data. What if it's poisoned data? What if it's corrupted information that's stuck in there? Then that messes up your analysis and your models and your reporting. It could be something malicious in there as well, and then that gives adversaries the ability to do an attack. It could be an attack to modify or corrupt the data, it could be an attack to try to take the data. There's lots of different ways that they could come in, vectors is what we call them, that they can come in.
So those are a few of the ways, or a couple of the ways actually, that you want to be aware of when you're building out these models and looking at the overall architecture and design.
Sean Mooney:
Yeah, and I think that's a really good point because if you think about it, whether it's ChatGPT or Bard, if you put that data in, you put your company's data in, that becomes part of the model. And then if that's something that's competitively sensitive or advantageous to your business you've just given it to the world. And so...
Keith Thomas:
Normally... Yeah, Sean, normally we'll want to have a privacy policy in place for our organization so that users of the AI platforms know what their company policy is and what they're allowed to put in. And then we also want to make sure we do least privilege access to the data. So your company has multiple pools of data, we want to make sure that each employee only has access to the data that they need access to. That will prevent them from actually taking data inadvertently and putting it into an AI model.
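Keith's least-privilege idea can be enforced programmatically; the roles and dataset names in this sketch are hypothetical, and a real deployment would sit behind a directory service rather than a hard-coded table:

```python
# Hypothetical role-to-dataset grants enforcing least-privilege access:
# each employee can reach only the data pools their role actually needs,
# so they can't inadvertently feed restricted data into an AI tool.
ROLE_ACCESS = {
    "marketing_analyst": {"campaign_metrics"},
    "support_agent": {"ticket_history", "product_docs"},
}

def can_use_in_prompt(role: str, dataset: str) -> bool:
    """Gate what a user may pass to an AI system by their role's grants."""
    return dataset in ROLE_ACCESS.get(role, set())

print(can_use_in_prompt("support_agent", "ticket_history"))  # True
print(can_use_in_prompt("support_agent", "customer_pii"))    # False
```

Pairing a check like this with the written privacy policy Keith mentions gives both a human and a technical control.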
Sean Mooney:
Great lesson. Every company needs to have a policy on these engines so people are very clear about what they can put in, and making sure that you just can't carte blanche copy your customer list and paste it in because it makes it easier to structure in Excel.
Keith Thomas:
Right.
Sean Mooney:
But it's really tempting and really easy to do, and we certainly have the same temptations with our team, but people don't realize the secondary and tertiary effects that it could have in terms of the competitive protections that a company has. So I think those are great.
So in order to leave a little bit of time for some Q and A, I think let's do a little bit of a lightning round here. And so this is the million-dollar question, and probably the billion-dollar question in terms of, where's all this going? And we're going to talk crystal ball time. The crystal ball on my desk is pretty hazy and it seems to be changing shape and form every time I show up in the morning. And so let's talk about where you all see AI going in the visible future. And so Alex, what are some of the things maybe that you think may take shape, to the extent you can?
Alex Castrounis:
Yeah. I mean, things are moving quickly. Just yesterday, I think, or a couple days ago with Google adding multi-modality to Bard and PaLM, they did a demo where they took a picture of a mobile app screen and then code was automatically generated, actual functional code, just based on pixels of an image. So it's pretty remarkable I think. Not only are we going to see these large language models keep evolving on the tech side, but we'll also see that multi-modality. So adding audio capabilities, adding video capabilities, which are really just sequences of images at a certain frame rate and so on.
And so we'll go from... Right now generative models like ChatGPT and GPT-4 essentially just keep iteratively predicting the next word or the next token, and that's what generates text. But we could see that happening with video, for example. So given a sequence of frames of video, what's the next image or frame in the video? Predicting and then just keep recursively doing that. Or before too long, instead of just saying, "Hey, write an email," say, "Hey, I want a 30-second company explainer video." And the next thing it creates the script, the animations in it, the background music, the whole enchilada all at once. Things are just advancing very quickly, as well as the real world applications and use cases of some of this generative AI stuff.
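The iterative next-token loop Alex describes can be mimicked with a toy lookup table in place of a trained model; everything in this sketch is an illustrative assumption:

```python
# Toy greedy decoder: like an LLM, it repeatedly predicts the next token
# from the current context and appends it. A hard-coded bigram table
# stands in for the model's learned next-token distribution.
NEXT_TOKEN = {
    "the": "model",
    "model": "predicts",
    "predicts": "the",
}

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = NEXT_TOKEN.get(tokens[-1])
        if nxt is None:  # no continuation known: stop, like an end-of-text token
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # the model predicts the model predicts
```

Swap the lookup table for a model that predicts the next frame instead of the next word and you have the video-generation loop Alex is pointing at.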
So yeah, it's going to be moving quickly. It's hard to predict where we'll be even three months from now, but people need to get started, need to start learning about it. Don't wait. At least look at what's happening right now and start building small and prototyping out specific use cases. Get ahead of the game.
Sean Mooney:
And Alex, I think it's a good point. It's so easy to get overwhelmed by this and just say, "I'm just going to wait." Don't wait. Get to doing something.
Alex Castrounis:
Yeah. And don't forget... And one last thing, sorry, and don't forget that AI too, I believe Nik made this point earlier, is more than just generative AI. The good thing about, I guess, ChatGPT's release and exponentially increasing public awareness and interest in AI is that it did exactly that. But on the other hand it's also over-indexed for people on AI as being just LLMs, when in reality it's this massive field. So when I say get started it isn't just go all in on LLMs or generative AI. Don't forget, there's a lot you can do with AI and machine learning.
Sean Mooney:
And it's an excellent point. Ken, how about you? What does your crystal ball see?
Ken McLaren:
Agree with Alex. Hard to predict. What we can predict is the pace of change is going to be relentless and unlike what we've seen before, I think is the message. So get ready for rapid change. That's what we share with our leadership teams: the pace of change is going to be much faster than what we've seen in the past decades. Sam Altman, CEO of OpenAI, was interviewed just after ChatGPT came out, and he was asked by Lex Fridman on his podcast, "What is GPT-4?" And he said, "It's a system that we'll look back at in the future and say it was a very early AI. It's slow, it's buggy, it doesn't do a lot of things very well." This is the CEO of his big new product launch saying, "It's not that good, but neither were computers when they first launched. And so they're going to get a lot better and they're going to get better fast."
So the other point I'd raise is, I think we all know about Moore's Law, which has really driven innovation for the last almost 60 years. So the number of transistors on a chip, or the cost per compute, changing at a rate where every two years the power doubles or the cost halves. What we see now with AI algorithms is that almost every three to six months the performance of AI algorithms is doubling. The big tech companies have about $400 billion of operating cash flow annually that they're going to be investing behind this. And so you see all of these dollars starting to divert towards rapid innovation.
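The gap between those two doubling cadences compounds dramatically; as a back-of-the-envelope sketch (cadences as stated above, four-year horizon chosen arbitrarily for illustration):

```python
# Performance multiple after a fixed horizon, given a doubling cadence:
# 2 ** (months_elapsed / months_per_doubling).
years = 4
moore = 2 ** (years * 12 / 24)  # Moore's Law: doubling every ~24 months
ai = 2 ** (years * 12 / 6)      # AI algorithms: doubling every ~6 months
print(f"After {years} years: {moore:.0f}x vs {ai:.0f}x")
```

At a six-month cadence, four years yields a 256x improvement rather than 4x, which is the "relentless pace" point in concrete terms.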
So from the first iPhone launch to iPhone 14, it took about 16 years to go from gen one to gen 14. Think about what would happen if ChatGPT's 16th version or 14th version came out 16 months from now instead of 16 years from now. The pace of change is going to be much faster. And so for our businesses to be ready to adopt the new technology, but not anchor on ChatGPT or what... you know, you pick your tool being the last and final, this is the start of a long journey that's going to move quite quickly. So be agile, be ready for change, have good foundations in place with all the stuff the team talked about here today. But be expecting of a pretty rigorous pace of change. And the future's really in front of us, we're just getting started with this.
Sean Mooney:
I think it's great advice, and it's easy to be daunted but it's better to get excited. And if you get excited about it, boy, the future is kind of uncapped. Nik, how about you?
Nik Kapauan:
Yeah. I mean, I'm a big believer that generative AI has as much transformative potential as the other big technologies we've seen in our lifetime: the personal computer, the internet, the mobile internet, the cloud. So it's going to be transformational.
And you'll probably be users of generative AI even if you're not trying to be, right? Google search, Bing search, your favorite CRM, Microsoft Copilot, all these tools we're already using are going to get enabled by generative AI. It's going to become like electricity or the internet or analytics, something foundational that businesses just need as table stakes.
I think the way we're thinking about it, and it's still very early days, is that it's about experimenting. Obviously be careful with privacy and data and all that, but anytime you need to Google something or search for something, try an LLM. And every time you need to generate content for something, try experimenting with an LLM. It's through that process that you'll uncover really powerful use cases that can be scaled and productionized across your business. So I'd say don't be afraid of it. Experiment with it and find exciting use cases responsibly, because it's transformative.
Sean Mooney:
That's great. Keith, how about yourself?
Keith Thomas:
Sure. I always think about it as, how are we going to enhance our users' experience? That's something generative AI can do. Obviously today everyone's experimenting, as Nik just said, which is fantastic. I do it too, and if you aren't, you should.
I see some pretty big areas actually where AI is going to be very supportive for us. I think it was Bill Gates who said that the winners in AI are the ones who are going to create the personal assistants, the ones that are going to help us. And so we think about how AI might be used in training, taking someone through a training program and being able to answer questions on the fly.
And then also there are certain industries that don't have enough professionals to fill their positions. Cybersecurity is a huge industry, a tremendous industry, where there aren't enough skilled workers to fill all the open roles. And so we'll see AI come in, and it won't necessarily replace those professionals, but it'll relieve them of work that can be handed off to an AI platform, and then they can pivot and start working on things that improve and grow the business in different directions. So those are a few of the ways I see AI helping us in the future.
Sean Mooney:
Awesome. Really, really important considerations. Okay, we've had a rapid fire of really insightful information here. We have a few questions that have come in, so let me try to paraphrase one or two of them. One of them is: thinking about the hallucinations we've mentioned at different points in this conversation, and this propensity for errors, how do you think about applying generative AI across different use cases? Which use cases are more appropriate than others, given that we don't quite know how it comes up with some of these answers, or why they're right or wrong in certain places? Does anyone have a take on that?
Ken McLaren:
I can take a quick go, Sean. For our [inaudible 00:47:30], again, in the healthcare space, we're very sensitive to the ultimate use case, and hallucinations would not be good in a customer-, patient-, or market-facing position. So a human in the loop is something we talk about quite a bit: using gen AI, embracing it, but keeping a human in the loop in the process so you're putting some gating around it. That's something we always advocate, number one.
Two, where we think there can be more scaled use cases, there's definitely the ability today to train your LLMs: you can fine-tune them and put a lot of guardrails around them to say, "Here is our protocol, this is the way we manage our communications." Through fine-tuning and training, and there are a number of tools and techniques for this, you heard Alex talk a bit about vector embeddings, you can put the LLM on guardrails, still with a human in the loop, but trained so it does what you direct it to. So those are two initial thoughts and things we spend time talking about. Responsible AI, and you heard a lot of different versions of that today, is critically important. It's definitely needed in our space.
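Ken's two points, guardrails on the model plus a human in the loop, can be sketched as a simple gating pattern. This is an illustrative shape only: `call_llm`, the `Draft` type, and the confidence threshold are hypothetical stand-ins, not any specific vendor's API or the panelists' actual systems.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # 0.0-1.0, as scored by the model or a separate checker

def call_llm(prompt: str) -> Draft:
    """Hypothetical stand-in for a fine-tuned, guardrailed LLM call."""
    return Draft(text=f"Drafted reply to: {prompt}", confidence=0.62)

def answer_with_human_in_loop(prompt: str, threshold: float = 0.8) -> str:
    """Gate every model draft: low-confidence output is routed to a person instead of the customer."""
    draft = call_llm(prompt)
    if draft.confidence < threshold:
        return f"[QUEUED FOR HUMAN REVIEW] {draft.text}"
    return draft.text
```

The point is the pattern, not the specifics: the model drafts, a gate decides, and a person stays in the loop for anything customer- or patient-facing.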
Sean Mooney:
And I think that's a great answer, the human in the loop. That's one of the other big themes we touched on a little bit: a lot of these are copilots, right? You're not supposed to let them off on their own. They're something that's going to drive productivity and enhancements, but you can't just let them go untethered into your customer base or out to your users, et cetera. And certainly, maybe in future conversations we can talk about the moral, ethical, and compliance considerations we should think about.
I think we have one more question we can probably get to here, and it's on the more practical side. What do you think are some of the AI applications that can be used to optimize mid- and back-office functions within companies and drive savings? Applications that might not necessarily be LLM-based, but could actually improve companies' efficiency through the middle of the business?
Nik Kapauan:
Yeah, I think there's a lot. The easy examples are all customer-facing, marketing and sales. But in terms of back-office operations, the big one that comes to mind is reporting and analytics in general. Being able to quickly synthesize customer data and generate reports, or on the private equity side, things like rapidly synthesizing investment memos, or going into a data room and listing out all the interesting things. I think of an LLM almost as a really strong-performing analyst or associate, just another member of the team that you can point at different tasks to automate. Sometimes they're wrong, just like real people are wrong, so you have to evaluate their output. But I think the time savings in the back end for things like knowledge discovery, information synthesis, and report generation could have huge impact.
Sean Mooney:
Yeah, I think that's going to be massive as well. The other use case I'll mention, which we see coming in from our client base, is around call centers. There's a lot of training involved, so it's really more on the LLM side, but how do you resolve your customers' needs earlier using this kind of dynamic, bilateral generative AI? That then lets the humans in your call centers work on the really hard stuff and/or find answers to the really obscure questions much quicker. So I think what we're seeing is that it can have a cost-savings impact, but it can also enable you to serve your customers better and faster. That 10-minute wait time can go to 30 seconds, and your customers can get access to better information faster. From a what's-coming-in perspective, at least, we're seeing some helpful use cases there.
The last question I have, and this sounds like a plug but it's not: are there any partners that can assist you with building out use cases? The answer is yes, you can call BluWave and we know them all. So a lot of [inaudible 00:51:34], we know a lot of them. We'd be happy to help through this ecosystem we're building out rapidly, one that's addressable not only for larger companies but particularly for SMBs.
So that is all we have for today. I want to give really, really special thanks to Alex and Nik and Ken and Keith here for sharing these insights. And maybe we'll get the band back together again in a few months and we'll say that it's all changed again. So thank you so much everyone for joining us today.
For more information on how your business can be connected with top advisors, experts and/or consultants in the world of business intelligence, predictive analytics, AI or anything else related to building your business with more speed and certainty, please visit bluwave.net. That's B-L-U-W-A-V-E.net. Please continue to look for the Karma School of Business podcast anywhere you find your favorite podcasts, including Apple, Google and Spotify.
We truly appreciate your support. If you like what you hear, please follow, review and share. It really helps us when you do that, so thank you in advance. In the meantime, let us know if there's anything we can do to support your success. Onward.
THE BUSINESS BUILDER’S PODCAST
Private equity insights for and with top business builders, including investors, operators, executives and industry thought leaders. The Karma School of Business Podcast goes behind the scenes of PE, talking about business best practices and real-time industry trends. You'll learn from leading professionals and visionary business executives who will help you take action and enhance your life, whether you’re at a PE firm, a portco or a private or public company.
BluWave Founder & CEO Sean Mooney hosts the Private Equity Karma School of Business Podcast. BluWave is the business builders’ network for private equity grade due diligence and value creation needs.