Anirban Ghoshal
Senior Writer

ServiceNow, Nvidia to bring generative AI to enterprise workflows

News
17 May 2023 | 4 mins
Artificial Intelligence | Enterprise Applications | Generative AI

The first results of the collaboration, which includes Nvidia’s NeMo foundation models, Nvidia’s DGX Cloud, and ServiceNow’s workflow platform, will see the companies offer generative AI applications for IT.

Credit: Pressmaster / Shutterstock

ServiceNow and Nvidia on Wednesday said that they were collaborating to build generative AI applications for different enterprise functions in an effort to optimize business processes and workflows.

ServiceNow will use data available on its workflow platform along with Nvidia’s DGX Cloud, Nvidia DGX SuperPOD, and the Nvidia AI Enterprise software suite to develop custom large language models for enterprises, said Rama Akkiraju, vice president of AI for IT at Nvidia.

These models will then be used to develop generative AI use cases for enterprise IT departments, customer service teams, employees, and developers, with the aim of increasing productivity through greater workflow automation, Akkiraju added.

The first results of the collaboration will be aimed at building generative AI applications for enterprise IT departments, the companies said, adding that tasks such as ticket summarization, ticket auto-routing, incident severity prediction, intent detection, semantic search, ticket auto-resolution, root cause analysis, and similar-incident detection were being targeted.

As an example of the productivity gains, Akkiraju said that AI-generated ticket summaries can save agents at least seven to eight minutes per interaction.
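
Neither company detailed the implementation, but ticket summarization of this sort typically comes down to a single LLM call over a ticket’s work notes. The sketch below uses a generic Hugging Face summarization pipeline purely as a stand-in for the custom NeMo-based models the companies describe; the model choice and ticket text are illustrative assumptions.

```python
# Minimal sketch of LLM-based ticket summarization; the pipeline and model below
# are a generic stand-in, not the ServiceNow/Nvidia implementation.
from transformers import pipeline

# Any summarization-capable model could be slotted in; this one is just widely available.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_ticket(work_notes: str) -> str:
    """Condense a ticket's work notes into a short hand-off summary for the next agent."""
    result = summarizer(
        work_notes,
        max_length=60,    # rough cap on summary length, in tokens
        min_length=20,
        do_sample=False,  # deterministic output for consistent summaries
    )
    return result[0]["summary_text"]

# Hypothetical ticket text, for illustration only.
ticket = (
    "User reports VPN drops every 30 minutes since the 4.2 client update. "
    "Reinstalled the client and rotated certificates; issue persists on office "
    "Wi-Fi but not on wired connections. Escalated to the network team."
)
print(summarize_ticket(ticket))
```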

Most of the generative AI use cases will be delivered through virtual assistants: purpose-built chatbots that use large language models, focus on defined IT tasks, and can resolve a broad range of user questions and support requests, the companies said.

“To simplify the user experience, enterprises can customize chatbots with proprietary data to create a central generative AI resource that stays on topic while resolving many different requests,” the companies added.
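
The companies did not spell out how that customization works, but a common pattern is to ground the chatbot in proprietary content through semantic search: embed the knowledge base, retrieve the passages closest to each question, and hand them to the model as context. Below is a rough sketch of the retrieval step; the embedding model and knowledge-base entries are placeholders, not ServiceNow’s actual stack.

```python
# Rough sketch of grounding a chatbot in proprietary data via semantic search.
# The embedding model and knowledge-base entries are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

# Small general-purpose embedding model, used here only as an example.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-in for an enterprise knowledge base (KB articles, runbooks, past tickets).
knowledge_base = [
    "To request VPN access, open a Service Catalog item under Network > Remote Access.",
    "Password resets are self-service via the identity portal; accounts lock after 5 failed attempts.",
    "Laptop refresh eligibility begins 36 months after the asset's deployment date.",
]
kb_embeddings = encoder.encode(knowledge_base, convert_to_tensor=True)

def retrieve_context(question: str, top_k: int = 2) -> list[str]:
    """Return the KB passages most similar to the question, to be passed to the LLM as context."""
    query_embedding = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, kb_embeddings, top_k=top_k)[0]
    return [knowledge_base[hit["corpus_id"]] for hit in hits]

# The retrieved passages would be prepended to the chatbot's prompt so that answers
# stay on topic and reflect company-specific policy rather than generic web text.
print(retrieve_context("How do I get VPN access for a new hire?"))
```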

These implementations can also be mirrored across customer service teams, ServiceNow and Nvidia said.

Generative AI, according to both companies, can be used by enterprises to improve the employee experience by helping staffers identify growth opportunities and by recommending courses and mentors based on natural-language queries and information from a staffer’s profile.

Generative AI to streamline IT operations for Nvidia

As part of the collaboration, ServiceNow is building generative AI use cases in an effort to streamline IT operations for Nvidia, Akkiraju said.

To develop these applications, ServiceNow is using Nvidia’s data on the ServiceNow platform to customize foundation models with Nvidia’s NeMo framework, running on DGX Cloud and on-premises DGX SuperPOD systems.

Nvidia’s NeMo framework, which is a part of the company’s AI Enterprise software suite, includes features such as prompt tuning, supervised fine-tuning, and knowledge retrieval tools to help developers build, customize and deploy language models for enterprise use cases.
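
Nvidia has not published the training recipes used in the partnership, but prompt tuning in general works by freezing the base model and learning a small set of “virtual token” embeddings per task. The sketch below shows the idea using the Hugging Face PEFT library as a generic stand-in rather than NeMo’s own tooling; the base model and task are illustrative assumptions.

```python
# Conceptual sketch of prompt tuning: freeze the base LLM and learn a handful of
# "virtual token" embeddings for one IT task. Hugging Face PEFT is used here as a
# generic stand-in; it is not the NeMo tooling the companies are using.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

base_model_name = "gpt2"  # placeholder base model, chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Learn 16 virtual prompt tokens while keeping every base-model weight frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=16,
)
model = get_peft_model(model, peft_config)

# Only a tiny fraction of parameters is trainable, which is what makes
# per-task customization of a large base model cheap.
model.print_trainable_parameters()

# From here, a standard training loop (or transformers.Trainer) would be run over
# task-specific examples, e.g. ticket text paired with the desired routing label.
```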

The software suite itself, according to Nvidia, accelerates the data science pipeline and streamlines development and deployment of production AI including generative AI, computer vision, and speech AI.

The suite contains more than 50 frameworks, pretrained models, and development tools, Nvidia said, and also comes with NeMo Guardrails, software that enables developers to add safety and security features to AI chatbots.

In early May, ServiceNow partnered with Hugging Face to release StarCoder, a free large language model (LLM) trained to generate code, in an effort to take on AI-based programming tools.
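
NeMo Guardrails, mentioned above, is the component aimed at keeping enterprise chatbots on topic. As a loose illustration only, the sketch below wires up a single rail that refuses off-topic questions; the example utterances, flow, and backing model are assumptions for demonstration, not the companies’ configuration.

```python
# Loose sketch of keeping an IT-support chatbot on topic with NeMo Guardrails.
# The rails, example utterances, and backing model are illustrative assumptions.
from nemoguardrails import LLMRails, RailsConfig

# Colang rails: recognize off-topic questions and steer the bot back to IT support.
colang_content = """
define user ask off topic
  "what do you think about politics?"
  "can you give me investment advice?"

define bot refuse off topic
  "I can only help with IT support requests such as tickets, access, and outages."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

# Placeholder model configuration; any LLM engine supported by Guardrails could be used.
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Should I buy tech stocks?"}])
print(response["content"])  # expected to hit the refusal rail rather than answer
```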
