Someone might ask you about a friend, and you might respond, "Yeah, he's been like that his whole life."
At an atomic level, your friend has changed many times over during his life. He is, in fact, quite different from when he was a young child. You guessed the question was about his personality changes, not his atomic changes. Will large language models (LLMs) make those kinds of guesses as easily when asked about the state of the network?
I suspect the granularity of concern will be challenging for LLMs like ChatGPT and Bard when they are integrated into network equipment, monitoring systems, and other tools, as will how vendors handle the different results that different prompts deliver.
** Please note I am not involved in what the company I work for will or will not do in this area. These thoughts are my own **
LLM Prompts and Context
If I asked ChatGPT to write a story about "Network Engineering," I might get a short story that starts something like "Once upon a time, in the bustling city of Technopolis..." (read the entire response).
OTOH, if I provide the following prompt, "Write a story about a network engineer in the style of James Patterson who cleverly diverts traffic maliciously generated to bring down the Internet," I might get a story that starts like this "Max Turner, a maverick network engineer, worked in a small, dark office in the heart of New York City, his work unnoticed by the billions whose lives he kept running smoothly." (read the entire response).
And I could go on about how you can request a plot, add characters with various neuroses, expand on each character, make the technical details more realistic, and so on.
The bottom line is that how you craft the prompt makes a BIG difference: the less you specify, the more the LLM will fill in the blanks for you, and the less tailored the response will be.
I suspect one of the differentiators in various LLM integrations will be determining the context of the prompt so that unspecified assumptions can be filled in optimally for a great UX.
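As a sketch of what that context-filling might look like, an integration could prepend what it knows about the operator and environment to the raw question before the prompt ever reaches the model. Everything here, from the field names to the wrapper function, is hypothetical for illustration, not any vendor's actual API:

```python
def build_prompt(question: str, context: dict) -> str:
    """Wrap a raw operator question with operational context
    so the LLM has fewer blanks to fill in on its own.
    (Hypothetical sketch, not a real integration API.)"""
    lines = ["You are assisting a network operations engineer."]
    for key, value in context.items():
        lines.append(f"{key}: {value}")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_prompt(
    "Tell me the network health",
    {"scope": "core routers", "concern": "SLA violations only"},
)
print(prompt)
```

The point of the sketch is simply that the integration, not the human, supplies the unspecified assumptions, which is where the UX differentiation would live.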
Granularity of Concern
In the cloud, especially, there is this idea that, much like our hypothetical friend above, certain things are persistent, like a service, while other things are transient, like the specific container instances delivering the service. As a result, one school of thought holds that there should not be an alarm for every container failure, only when an SLA, SLO, or SLI is violated. For sure, only get someone out of bed if that is the case.
In networking, we can sometimes adopt similar thinking; for example, don't raise an alarm every time there is a BGP update; we expect the Internet's topology will change over time, perhaps frequently. We might even assert the view of best / available Internet routes is never the same, everywhere, at the same time. OTOH, if critical physical network equipment fails, we want to know about it.
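That alerting philosophy can be expressed in a few lines. This is a deliberately simplified sketch; the event-type names and the two buckets are made up for illustration:

```python
# Event types we expect to churn constantly; no human should be paged.
TRANSIENT = {"bgp_update", "container_restart"}

# Event types that always warrant waking someone up.
CRITICAL = {"sla_violation", "hardware_failure"}

def should_page(event_type: str) -> bool:
    """Page a human only for critical events, never for expected churn."""
    if event_type in TRANSIENT:
        return False
    return event_type in CRITICAL

print(should_page("bgp_update"))        # expected Internet churn
print(should_page("hardware_failure"))  # critical physical failure
```

A real system would key off SLO measurements rather than a static allowlist, but the shape of the decision is the same: classify the event's granularity before deciding whether anyone cares.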
OK, so imagine you are in network operations, and you go to your LLM and say, "Tell me the network health." Should the LLM reply with only the things that have SLA violations, or with everything that is flapping, failing to converge, or trending toward failure, or something else entirely? If you ask an LLM a simple question, it will make many assumptions for you.
The LLM is going to need to know which question you are really asking:
Should you get someone out of bed?
What proactive measures need to be taken before an incident happens?
What looks like it is constantly failing and recovering even though the SLA is currently not violated?
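One crude way an integration might work out which of those concerns a prompt implies is keyword matching before the question is handed to the model. This is a toy heuristic with made-up labels, not a real intent classifier:

```python
def interpret_concern(prompt_text: str) -> str:
    """Guess which granularity of concern a prompt implies.
    A toy keyword heuristic; a real system might use the LLM itself
    or surrounding context to classify intent."""
    text = prompt_text.lower()
    if "out of bed" in text or "page" in text:
        return "active_sla_violations"
    if "proactive" in text or "trend" in text:
        return "predicted_failures"
    if "flap" in text:
        return "recurring_recoveries"
    return "general_summary"  # the prompt left the concern unspecified

print(interpret_concern("Should we get someone out of bed?"))
print(interpret_concern("Tell me the network health"))
```

Note the fallback case: a vague prompt like "Tell me the network health" lands in a default bucket, which is exactly where the LLM would otherwise fill in assumptions on its own.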
LLMs will either have to rely on effective prompt engineering by the people and workflows using them or have some way of working out the context surrounding the prompt. We already see some of the latter in today's paradigms; the former may effectively become yet another CLI or query language. Then again, there may be a best-of-both-worlds scenario. It will be fascinating to see which direction engineers and product management teams take.
Conclusion
There have already been announcements of LLM integration into monitoring platforms and other networking / IT doohickeys.
I suspect we are very early in understanding how UX will apply here. However, my intuition is that prompt engineering, granularity of concern, and context setting will be essential to using this unbelievable technology effectively.