
Tool Use, AI Agents, and the next few years of AI

With AI hype at an all-time high, what is the actual value?

In the near term, the answer is: Tool Use, Tool Chains, and finally, AI Agents.

QUICK NOTE - The code used in this post comes from LangChain’s open-source documentation and Quickstart examples. You can view the code I used in the Colab notebook linked below.

What is Tool Use? 

Tool use, also referred to as function calling, is a capability that lets certain Large Language Models reason about a request and invoke external tools on a user’s behalf using natural language (for example: APIs, search engines, database calls, and functions). This gives us the ability not just to ask LLMs questions, but to accomplish real-world tasks.

By the end of this 5-minute article, you will fully understand how to use natural language to:

  1. Access revenue data in a Snowflake Data Warehouse

  2. Access revenue projections stored in a Google Drive document

  3. Ask the LLM the question “Did our Black Friday revenue this year beat our projections?”

  4. Get the correct answer.

This scenario is a perfect example of Tool Use, which is, in my opinion, the number one area where AI will add massive enterprise value in the coming years.

How does Tool Use work? 

First, the user registers the tools with the LLM. This is commonly done with the @tool decorator in Python.

Then, any time a prompt sent to the LLM requires one of those pre-defined tools, the LLM can reason about which tool to call and infer the parameters with which to execute that call. Finally, the user executes the tool call, accomplishing whatever task was in the prompt.

Live example of Tool Use

Looking at the QuickStart in LangChain, we can easily create a sample tool that multiplies two numbers together. 

from langchain_core.tools import tool

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

And when we invoke that tool, we see that we get the right answer!

multiply.invoke({"first_int": 4, "second_int": 5})

20

NOTE TO READER - You can test the code in this blog post for yourself, and even play around with this functionality, at this link: https://colab.research.google.com/drive/1nHBloKVeh4d8Rcob9tAOZG2yaVrMboon

You might be thinking “Multiplication isn’t new. What’s the big deal?”

Look at the first line of the code block above. Do you see that “@tool” decorator? That is how we register the multiply function as a tool with the Large Language Model (LLM). Once the tool is registered, the LLM can use that functionality intelligently on its own. When I ask “What is 5 times 4”, the LLM can reason that it should use my multiplication function to answer, and it can deduce that the two parameters are 5 and 4.
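To make that concrete, here is a minimal sketch of what that reasoning step looks like in LangChain. The specific chat model is my assumption; any model with tool-calling support would work.

from langchain_openai import ChatOpenAI

# Assumption: any chat model that supports tool calling works here
llm = ChatOpenAI(model="gpt-4o-mini")

# Register the tool with the model
llm_with_tools = llm.bind_tools([multiply])

# The model does not run the function; it returns which tool it chose
# and the arguments it inferred from the prompt
msg = llm_with_tools.invoke("What is 5 times 4")
print(msg.tool_calls)
# e.g. [{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 4}, ...}]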

That’s pretty impressive. Now let’s map that same Tool Use capability to a real enterprise use case: querying a Snowflake Data Warehouse containing a table with your company’s historical sales data.

If I use LangChain (an open-source framework that helps developers build applications with LLMs) and its built-in Snowflake integration, I can create a tool that queries Snowflake tables and loads Snowflake documents with natural language. Suddenly, I can ask questions like “How much revenue did we generate on Black Friday this year?” and that query is answered for me automatically, without any need to connect to the database and write a custom SQL statement!
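Here is a minimal sketch of what such a tool could look like, assuming the snowflake-sqlalchemy driver is installed; the account, credentials, database, and warehouse names below are all placeholders.

from langchain_community.utilities import SQLDatabase
from langchain_core.tools import tool

# Hypothetical Snowflake connection; every identifier here is a placeholder
db = SQLDatabase.from_uri(
    "snowflake://USER:PASSWORD@ACCOUNT/SALES_DB/PUBLIC?warehouse=COMPUTE_WH"
)

@tool
def query_snowflake(query: str) -> str:
    """Run a SQL query against the company's historical sales data in Snowflake."""
    return db.run(query)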

Now THAT is valuable. 

But one tool on its own is not that impressive. What is impressive? Invoking multiple tools.

That’s where Tool Chains come in.

What is a Tool Chain?

A Tool Chain is when multiple tools are called sequentially. It’s literally a “chain” of “tools”. Pretty easy concept to grasp.

Keeping with the math example, let’s add a second tool: addition!

@tool
def add(first_int: int, second_int: int) -> int:
    """Add two integers."""
    return first_int + second_int

This simple tool adds two numbers together, and it works the same way as the multiplication tool. Now that there are two tools registered, I can prompt the LLM to use either one.
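The chain invoked below has to route each prompt to whichever tool the model selects. Here is one minimal way to wire that up, reusing the llm from the earlier sketch; the tool_map dispatch is my own glue code, not a fixed LangChain API.

# Register both tools with the model
llm_with_tools = llm.bind_tools([multiply, add])
tool_map = {"multiply": multiply, "add": add}

def call_tool(msg):
    """Execute whichever tool the model chose, with the arguments it supplied."""
    tool_call = msg.tool_calls[0]
    return tool_map[tool_call["name"]].invoke(tool_call["args"])

# Piping a plain function after a runnable coerces it into a chain step
chain = llm_with_tools | call_tool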

chain.invoke("What's 5 times 4")

20

chain.invoke("What's 20 plus 100")

120

We now have multiple tools in our chain. We can call either of them as needed!

Earlier, we created our Snowflake connection and queried a Snowflake table via natural language to find our Black Friday revenue. Now, we set up a Google Drive connector via Cohere’s Google Drive Quickstart Connector. In our hypothetical scenario, this is where we keep our company’s financial projections for the year. We can use natural language to ask both:

“How much revenue did we make on Black Friday this year?”

And

“Did we meet our 2024 Black Friday revenue projections?”

This is a perfect example of a Tool Chain: a chain of tools that each accomplish an individual task and together accomplish a larger goal, such as retrieving the Black Friday revenue from Snowflake and then this year’s Black Friday revenue projections from Google Drive.
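For illustration, the projections lookup can also be wrapped as a tool. This is a purely hypothetical sketch: the fetch_drive_document helper below stands in for whatever search call your Google Drive connector exposes, and is not a real API.

from langchain_core.tools import tool

def fetch_drive_document(query: str) -> str:
    """Hypothetical stand-in for the Google Drive connector's search endpoint."""
    raise NotImplementedError("Wire this up to your Drive connector.")

@tool
def get_revenue_projection(topic: str) -> str:
    """Look up this year's revenue projections in our Google Drive documents."""
    return fetch_drive_document(query=topic)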

But there is one limitation: I can only invoke one of these tools at a time. What if I wanted to ask “Did our Black Friday revenue this year beat our projections?”

This is where AI Agents come in.

AI Agents

The core idea of AI Agents (sometimes called Intelligent Agents) is to use a language model to choose a sequence of actions to take. In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.

Looking back at our addition and multiplication example from the LangChain documentation, we can now invoke both tools in one question!
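The chain invoked below is really an agent under the hood. Here is a minimal sketch of how one could be assembled with LangChain’s tool-calling agent constructor, reusing the llm from the earlier sketch; the system prompt wording is my own, and the chain.invoke call below corresponds to invoking an executor like this one.

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),  # prompt wording is an assumption
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # where intermediate tool calls accumulate
])

agent = create_tool_calling_agent(llm, [multiply, add], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[multiply, add])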

chain.invoke("What is 5 times 4, plus 100?")

.
.
.

"5 × 4 equals 20, and adding 100 to that total gives us **120**"

This ability to take my natural language prompt, deduce which functions should be used, and determine which parameters to pass to those functions (in this case, which two numbers to multiply, and which two numbers to add together) is all quite impressive! Let’s relate this to our enterprise example one final time.

With the ability to invoke both my Snowflake tool and my Google Drive tool in one call, I can ask the question “Did our Black Friday jean sales this year beat our projections?” and the LLM will call the Snowflake tool to return the revenue, call the Google Drive tool to find our Black Friday projection, compare the two results, and then return an answer such as:

“Black Friday sales were $100,000, which is higher than the projected $80,000 for this year.”

chain.invoke("Did our Black Friday jean sales this year beat our projections?")

.
.
.

"Black Friday sales were $100,000, which is higher than the projected $80,000 for this year”.

Conclusion

Tool Use and Intelligent Agents are the next phase of AI.

With NVIDIA at a nearly $3 trillion market cap, and seemingly every tech company spending money on AI, I think it’s safe to say AI hype is at an all-time high.

But, as with any hyped technology, it seems like we could be approaching a bubble. So where is the real return on investment for companies when it comes to AI?

In the near term? Tool Use and Intelligent Agents.

Tool Use and Intelligent Agents are going to completely transform knowledge work, giving users the ability to automate tasks they couldn’t have dreamed of before the advent of LLMs.

These agents will expand far beyond asking questions of a Snowflake table. Soon, entire workflows will be automated without any human intervention, with the LLM functioning as a sort of human-like brain that can do anything within the limits of the target tools’ APIs.

The next few years will bring massive changes to enterprise automation and knowledge work.

___________________________________________________________

Google Colab used in this blog, with code directly from LangChain’s documentation: https://colab.research.google.com/drive/1nHBloKVeh4d8Rcob9tAOZG2yaVrMboon