Why use AI agents?
Feb 15, 2024

Why Use Agents?

Agents are powered by LLMs and perform complex tasks, using data and reasoning to inform their decision-making. With agents, a human sets a goal, and the agent reasons, makes decisions, and chooses the actions needed to accomplish it. Agents excel in complex systems that may require splitting out subtasks to other models or tools.

Use case: Enhancing an Artificial Plant Webshop

Suppose your artificial plant webshop wants to improve its customer service chatbot. An agent-equipped chatbot can guide customers in finding the perfect artificial plant based on their needs. For instance, a customer might ask, “What plant would look good in a low-light office space with a modern design?” An agent could break down this request into subqueries such as:

  • What plants thrive in low-light environments?

  • Which artificial plants match a modern design style?

  • What are the recommended sizes for office spaces?

The agent can then use specific tools to search through your product catalog, applying filters for light conditions, design style, and size recommendations, ultimately providing a tailored response to the customer.
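
As an illustration, such a catalog-search tool can be as simple as a filter function the agent is allowed to call. The sketch below is hypothetical: the in-memory catalog, its field names, and the search_catalog helper are made up for this example; only the FunctionTool wrapper is LlamaIndex's real API for exposing a Python function as an agent tool.

from llama_index.core.tools import FunctionTool

# Hypothetical in-memory catalog; in practice this would query your product database.
CATALOG = [
    {"name": "45cm Premium Monstera", "light": "low", "style": "modern", "height_cm": 45},
    {"name": "30cm Modern Fiddle Leaf Fig", "light": "low", "style": "modern", "height_cm": 30},
    {"name": "120cm Tropical Palm", "light": "bright", "style": "tropical", "height_cm": 120},
]

def search_catalog(light: str, style: str, max_height_cm: int = 200) -> list[dict]:
    """Return artificial plants matching a light condition, design style, and size limit."""
    return [
        p for p in CATALOG
        if p["light"] == light and p["style"] == style and p["height_cm"] <= max_height_cm
    ]

# Expose the function to an agent as a callable tool.
catalog_tool = FunctionTool.from_defaults(fn=search_catalog)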

Personalized recommendations with Agents

Beyond answering questions, an AI agent can also provide personalized product recommendations. By integrating customer browsing behavior, previous purchases, and preferences, the agent can suggest relevant artificial plants that match the customer's style and practical needs. For example:

  • Recommending tall artificial plants for creating privacy in open office spaces.

  • Suggesting low-maintenance, realistic plants for busy professionals.

  • Offering curated collections like 'Tropical Vibes' or 'Minimalist Design' based on trending styles.

Agents can also enhance the webshop's search functionality. Instead of relying solely on keyword matches, the agent can interpret natural language queries such as "I need something for my hallway that doesn't need sunlight."

The agent could then apply multiple filters automatically, showcasing only the most relevant products. This approach increases the chance of conversion by presenting the right products more quickly to the customer.
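
One way to implement this, sketched below under the assumption that an LLM is already configured via LlamaIndex's Settings, is to have the model translate the free-text query into structured filters before searching. The prompt wording and the filter fields are illustrative assumptions, not a fixed API, and the output could feed a catalog search like search_catalog above.

import json
from llama_index.core import Settings

def filters_from_query(query: str) -> dict:
    """Ask the configured LLM to turn a natural-language query into catalog filters."""
    response = Settings.llm.complete(
        "Extract search filters from this customer query. "
        'Respond in pure JSON like {"light": "low", "style": "modern", "max_height_cm": 100}. '
        f"Query: {query}"
    )
    return json.loads(str(response))

# e.g. filters_from_query("I need something for my hallway that doesn't need sunlight")
# might return filters such as {"light": "low"}, which are then applied to the catalog.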

On the backend, agents can support operational efficiency by analyzing sales data, identifying trends in customer preferences, and suggesting inventory adjustments. For example, if the agent detects an increase in searches for 'large artificial plants,' it could prompt stock updates or highlight those products in marketing campaigns.
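
On the analytics side, trend detection can start out very simply, for instance by comparing search-term counts across two periods. The log format below is an assumption for illustration.

from collections import Counter

def rising_search_terms(last_week: list[str], this_week: list[str], min_increase: int = 10) -> list[str]:
    """Return search terms whose weekly count grew by at least min_increase."""
    before, after = Counter(last_week), Counter(this_week)
    return [term for term, count in after.items() if count - before[term] >= min_increase]

# If "large artificial plants" jumps from 5 to 40 searches, it shows up here and can
# trigger a stock review or a highlighted spot in marketing campaigns.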

NVIDIA LlamaIndex Agents

Using NIM Microservices with LlamaIndex

NIM microservices are flexible to deploy and straightforward to use with LlamaIndex. Once they're deployed, you will need to install the NVIDIA LlamaIndex packages to use them.
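
As a minimal sketch, assuming a chat model NIM is already running locally and exposing an OpenAI-compatible endpoint, you would install the NVIDIA connector packages and point LlamaIndex at your deployment. The model names and the base_url below are placeholders for your own setup; omitting base_url uses NVIDIA-hosted endpoints with an NVIDIA_API_KEY instead.

pip install llama-index-core llama-index-llms-nvidia llama-index-embeddings-nvidia

from llama_index.core import Settings
from llama_index.llms.nvidia import NVIDIA
from llama_index.embeddings.nvidia import NVIDIAEmbedding

# Point the connectors at a locally deployed NIM (placeholder URL and model names).
Settings.llm = NVIDIA(model="meta/llama-3.1-70b-instruct", base_url="http://localhost:8000/v1")
Settings.embed_model = NVIDIAEmbedding(model="nvidia/nv-embedqa-e5-v5")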

"What are the best-selling artificial plants under €50 for living rooms, and are they available with free shipping?"

The agent could break down the query into smaller subqueries like:

  1. What are the best-selling artificial plants?
    (Route to the sales analytics system or database.)

  2. Which of the best-selling artificial plants are priced under €50?
    (Filter products from the previous result using price criteria.)

  3. Which of these plants are suitable for living rooms?
    (Check product descriptions or tags for "living room" suitability.)

  4. Do these plants qualify for free shipping?
    (Cross-check the shipping policy and pricing conditions.)

The agent can then combine the results into an accurate, comprehensive response, such as:

"Our best-selling artificial plants for living rooms under €50 include the 45cm Premium Monstera and 30cm Modern Fiddle Leaf Fig. Both are available with free shipping."

This pattern of decomposing a query into sub-questions supports several other use cases across the webshop:

1. Customer support automation

You could implement a query engine on your website to help answer complex customer questions. For example:

  • Customer query: "What are the best artificial plants for a low-light office, and do you offer bulk discounts for businesses?"

  • Sub-questions:

    • "What artificial plants are suitable for low-light environments?"

    • "What are the current bulk discount policies for businesses?"

    • "What are the top-selling office plants in our catalog?"

2. Product recommendation system

You could build an AI-powered product recommendation tool that asks follow-up questions to guide customers:

  • Customer query: "I need plants for a modern living room with white and grey decor."

  • Sub-questions:

    • "What modern-style artificial plants are available?"

    • "Which plants match white and grey decor?"

    • "Do you have any customer reviews or images of these plants in similar settings?"

3. Content generation and SEO

For your content marketing and SEO strategy, you could generate blog topics or detailed product descriptions:

  • SEO query: "What topics should we cover to rank for 'realistic artificial plants for home decor'?"

  • Sub-questions:

    • "What are the trending keywords related to artificial home decor?"

    • "What content formats (e.g., blogs, videos, images) perform best in this niche?"

    • "What are our competitors writing about in this space?"

4. Ad copy and marketing automation

For Google Ads or social media:

  • Marketing query: "Create ad copy for our new line of premium outdoor artificial plants."

  • Sub-questions:

    • "What are the unique selling points of our outdoor artificial plants?"

    • "What promotions or discounts are currently available?"

    • "What keywords should we focus on for ad targeting?"

The complete workflow is implemented in the Python script below:

import json

from llama_index.core import Settings
from llama_index.core.agent import ReActAgent
from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class QueryEvent(Event):
    question: str


class AnswerEvent(Event):
    question: str
    answer: str


class SubQuestionQueryEngine(Workflow):
    @step
    async def query(self, ctx: Context, ev: StartEvent) -> QueryEvent:
        # Stash the original query, LLM, and tools in the workflow context.
        if hasattr(ev, "query"):
            await ctx.set("original_query", ev.query)
            print(f"Query is {await ctx.get('original_query')}")
        if hasattr(ev, "llm"):
            await ctx.set("llm", ev.llm)
        if hasattr(ev, "tools"):
            await ctx.set("tools", ev.tools)

        # Ask the LLM to split the question into sub-questions, returned as JSON.
        response = (await ctx.get("llm")).complete(
            f"""
            Given a user question, and a list of tools, output a list of
            relevant sub-questions, such that the answers to all the
            sub-questions put together will answer the question. Respond
            in pure JSON without any markdown, like this:
            {{
                "sub_questions": [
                    "What artificial plants are suitable for low-light offices?",
                    "What bulk discounts are available for businesses?",
                    "What are the best-selling office plants in our catalog?"
                ]
            }}
            Here is the user question: {await ctx.get('original_query')}
            And here is the list of tools: {await ctx.get('tools')}
            """
        )

        print(f"Sub-questions are {response}")
        response_obj = json.loads(str(response))
        sub_questions = response_obj["sub_questions"]
        await ctx.set("sub_question_count", len(sub_questions))

        # Emit one QueryEvent per sub-question so each can be answered independently.
        for question in sub_questions:
            ctx.send_event(QueryEvent(question=question))
        return None

    @step
    async def sub_question(self, ctx: Context, ev: QueryEvent) -> AnswerEvent:
        print(f"Sub-question is {ev.question}")
        # A ReAct agent answers each sub-question using the available tools.
        agent = ReActAgent.from_tools(
            await ctx.get("tools"), llm=await ctx.get("llm"), verbose=True
        )
        response = agent.chat(ev.question)
        return AnswerEvent(question=ev.question, answer=str(response))

    @step
    async def combine_answers(
        self, ctx: Context, ev: AnswerEvent
    ) -> StopEvent | None:
        # Wait until every sub-question has produced an AnswerEvent.
        ready = ctx.collect_events(
            ev, [AnswerEvent] * await ctx.get("sub_question_count")
        )
        if ready is None:
            return None

        answers = "\n\n".join(
            [
                f"Question: {event.question}: \n Answer: {event.answer}"
                for event in ready
            ]
        )
        prompt = f"""
            You are given an overall question that has been split into sub-questions,
            each of which has been answered. Combine the answers to all the sub-questions
            into a single answer to the original question.
            Original question: {await ctx.get('original_query')}
            Sub-questions and answers:
            {answers}
        """
        print(f"Final prompt is {prompt}")
        response = (await ctx.get("llm")).complete(prompt)
        print("Final response is", response)
        return StopEvent(result=str(response))


# Example usage (run inside an async context, e.g. a notebook).
# Settings.llm and query_engine_tools are assumed to be configured as shown above.
engine = SubQuestionQueryEngine(timeout=120, verbose=True)
result = await engine.run(
    llm=Settings.llm,
    tools=query_engine_tools,
    query="What are the best artificial plants for a modern living room with white and grey decor?",
)
print(result)