How I Made My Own AI Assistants Do My Work For Me: CrewAI

Introduction

Have you ever found yourself on the verge of a questionable purchase, only to hesitate at the last moment? That inner dialogue is what's known as system two thinking. It is slow, conscious thinking that requires effort and time. System one thinking, on the other hand, is subconscious and automatic, like effortlessly recognizing a familiar, friendly face in a crowd.

In this blog, we will explore how AI assistants, despite their incredible capabilities, are currently limited to system one thinking. However, people have found ways to work around this limitation by simulating rational, system two thinking. We will discuss two methods: tree of thought prompting and agent platforms like CrewAI. We will also look at how to make AI assistants even smarter by giving them access to real-world data, and how to avoid fees and protect your privacy by running models locally.

Simulating Rational Thinking

Tree of Thought Prompting

Tree of thought prompting is a simple and effective way to simulate rational thinking in AI assistants. The approach prompts the language model to consider an issue from multiple perspectives, or from the viewpoints of several experts, so that it reaches a more informed and rational decision by weighing each contribution. While effective, the method is limited in its ability to produce complex solutions to intricate problems.
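As a rough sketch of what such a prompt can look like, here it is in Python, sent through the OpenAI client; the exact wording, the model name, and the example question are illustrative assumptions rather than a fixed template.

```python
# A minimal tree-of-thought style prompt sketch. The wording and the example
# question are illustrative, not a canonical template; adapt them to your problem.
TREE_OF_THOUGHT_PROMPT = """
Imagine three different experts are answering this question.
Each expert writes down one step of their thinking, then shares it with the group.
The group critiques each step, and any expert who realizes they are wrong
leaves the discussion. Continue until the remaining experts agree on an answer.

Question: Should I buy a standing desk for my home office?
"""

# Send the prompt to whichever chat model you normally use; this sketch assumes
# the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[{"role": "user", "content": TREE_OF_THOUGHT_PROMPT}],
)
print(response.choices[0].message.content)
```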

Agent Platforms like CrewAI

Another way to overcome the limitations of system one thinking in AI assistants is to use agent platforms like CrewAI. These platforms let anyone, even those without programming knowledge, build their own custom AI agents or experts. The agents can collaborate with each other to solve complex tasks and produce more comprehensive solutions. By tapping into hosted models or running local ones, these platforms offer flexibility and control over the agents' capabilities.

Building a Team of AI Agents

To demonstrate how to assemble your own crew of smarter AI agents, let's set up three agents to analyze and refine a startup concept. Here's a step-by-step guide (a code sketch of steps 4-9 follows below):

  1. Open VS Code on your device, then open a new terminal.

  2. Create and activate a virtual environment.
  3. Install CrewAI from the terminal (pip install crewai).
  4. Import necessary modules and packages.
  5. Define three agents with specific roles and goals.
  6. Define tasks for each agent, specifying the desired results.
  7. Instantiate the crew or team of agents, including all the agents and tasks.
  8. Define the process by which the agents will work together.
  9. Run the crew to see the results.

In this example, we have a marketer, a technologist, and a business development expert as part of the team. Each agent is assigned specific tasks related to the startup concept, such as analyzing the potential demand, providing suggestions for product improvement, and writing a business plan. The crew works sequentially, with the output of one agent becoming the input for the next agent.
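Here is a minimal code sketch of steps 4-9, assuming a recent version of CrewAI and an OpenAI API key in your environment; the roles, goals, and task descriptions are illustrative placeholders rather than the exact wording you need to use.

```python
# Sketch of steps 4-9. Install CrewAI first:  pip install crewai
# Assumes OPENAI_API_KEY is set so the agents can use the default hosted model.
from crewai import Agent, Task, Crew, Process

# Step 5: define three agents with specific roles and goals.
marketer = Agent(
    role="Market Research Analyst",
    goal="Find out how big the demand for the product is",
    backstory="An expert at understanding market demand and target audiences.",
    verbose=True,
)
technologist = Agent(
    role="Technology Expert",
    goal="Assess how the product could be improved technologically",
    backstory="A visionary who evaluates technical feasibility and suggests improvements.",
    verbose=True,
)
business_consultant = Agent(
    role="Business Development Consultant",
    goal="Evaluate the business model and write a concise business plan",
    backstory="A seasoned consultant focused on profitability and scalability.",
    verbose=True,
)

# Step 6: define tasks for each agent, specifying the desired results.
# The startup concept here is a placeholder; swap in your own idea.
task1 = Task(
    description="Analyze the potential demand for a smart-home device that helps people save energy.",
    expected_output="A short market analysis listing the key customer segments.",
    agent=marketer,
)
task2 = Task(
    description="Suggest how the product could be improved technologically.",
    expected_output="A bullet list of concrete technical improvements.",
    agent=technologist,
)
task3 = Task(
    description="Write a business plan based on the previous analyses.",
    expected_output="A one-page business plan with goals and a rough timeline.",
    agent=business_consultant,
)

# Steps 7-8: instantiate the crew and have the agents work sequentially,
# so the output of one agent becomes the input for the next.
crew = Crew(
    agents=[marketer, technologist, business_consultant],
    tasks=[task1, task2, task3],
    process=Process.sequential,
    verbose=True,
)

# Step 9: run the crew and print the final result.
result = crew.kickoff()
print(result)
```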

Making AI Assistants Smarter

While AI agents are already capable, there are ways to make them even smarter. Giving them access to real-world data, such as emails or Reddit conversations, greatly enhances what they can do.

Adding Built-in Tools

One way to make AI agents smarter is to give them the built-in tools that ship with LangChain. These tools provide access to functionality such as text-to-speech, YouTube data, and Google data. By incorporating them, your agents can reach real-time information and generate more grounded, realistic outputs.
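As a sketch of what this can look like, the snippet below hands an agent LangChain's YouTube search tool. Package names and whether LangChain tools can be passed to an agent directly vary between CrewAI and LangChain releases, so treat the details as assumptions to check against your installed versions.

```python
# Sketch: attaching a built-in LangChain tool to an agent.
# Assumes: pip install crewai langchain-community youtube_search
# Note: newer CrewAI releases may require wrapping LangChain tools,
# while older ones accept them directly via the `tools=` argument.
from crewai import Agent
from langchain_community.tools import YouTubeSearchTool

youtube_tool = YouTubeSearchTool()  # returns links to matching YouTube videos

video_researcher = Agent(
    role="Video Researcher",
    goal="Find recent YouTube videos relevant to a given topic",
    backstory="Keeps track of what is being published on YouTube right now.",
    tools=[youtube_tool],
    verbose=True,
)

# The tool can also be called on its own to see what the agent would receive:
print(youtube_tool.run("CrewAI tutorial"))
```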

Custom Made Tools

Another approach to improving the intelligence of AI agents is by creating your own custom tools. For example, you can develop a tool that scrapes the latest posts and comments from a specific subreddit. This allows you to gather information from a source that interests you and tailor the output of your AI agents accordingly.
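Here is one possible sketch of such a tool, built with the LangChain tool decorator and the praw Reddit client; the credentials are placeholders, and newer CrewAI releases may prefer tools defined through crewai_tools instead.

```python
# Sketch: a custom tool that scrapes recent posts from a subreddit.
# Assumes: pip install praw  plus Reddit API credentials from
# https://www.reddit.com/prefs/apps (the strings below are placeholders).
import praw
from langchain.tools import tool  # decorator-style tool definition


@tool("Scrape subreddit")
def scrape_subreddit(subreddit_name: str) -> str:
    """Return titles and top comments from the newest hot posts in a subreddit."""
    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder
        client_secret="YOUR_CLIENT_SECRET",  # placeholder
        user_agent="crewai-research-script",
    )
    scraped = []
    for submission in reddit.subreddit(subreddit_name).hot(limit=5):
        submission.comments.replace_more(limit=0)  # drop "load more comments" stubs
        top_comments = [comment.body for comment in submission.comments[:3]]
        scraped.append(f"Title: {submission.title}\nComments: {top_comments}")
    return "\n\n".join(scraped)


# An agent can then be given this tool, e.g. Agent(..., tools=[scrape_subreddit]).
```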

Running Local Models and Keeping Conversations Private

Running models locally offers several advantages, such as avoiding API fees and protecting your privacy. By using open-source models like Llama 2, you can run AI models on your own machine without relying on external APIs. Keep in mind, however, that running models locally requires sufficient RAM, and the memory you have available limits the size of model you can practically run.
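One common setup (an assumption here, since other local runtimes exist) is to serve the model with Ollama and hand it to your agents through LangChain's Ollama wrapper; newer CrewAI versions may instead accept a model string such as "ollama/openchat".

```python
# Sketch: pointing a CrewAI agent at a locally served model.
# Assumes Ollama is installed and a model has been pulled, e.g.:  ollama pull openchat
from crewai import Agent
from langchain_community.llms import Ollama

# Talks to the local Ollama server (http://localhost:11434 by default),
# so no API keys are needed and prompts never leave your machine.
local_llm = Ollama(model="openchat")

offline_researcher = Agent(
    role="Researcher",
    goal="Summarize recent discussions about local language models",
    backstory="Works entirely offline using a locally hosted model.",
    llm=local_llm,
    verbose=True,
)
```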

Choosing the Right Local Model

During testing, various open-source models were evaluated, and their performance varied. Some, such as the Llama 2 series and Phi-2, performed poorly at understanding the given tasks, while models like OpenChat and Mistral produced better results, though they still had limitations. It's essential to experiment with different models and prompts to find the best fit for your specific needs.

Improving the Quality of Outputs

To improve the quality of outputs, it's crucial to find reliable sources of information. In the example of generating a newsletter about AI and machine learning innovations, retrieving data from the LocalLLaMA subreddit proved more effective than using the pre-built tools. Custom tools can be developed to scrape specific sources, providing more relevant and up-to-date information.

Conclusion

AI assistants have immense potential to enhance our decision-making and problem-solving capabilities. While they currently operate on system one thinking, there are ways to simulate rational, system two thinking. Through methods like tree of thought prompting and agent platforms like CrewAI, we can leverage the power of AI to solve complex tasks and make more informed decisions.

By giving AI agents access to real-world data and running models locally, we can further enhance their intelligence and protect our privacy. These advancements open up new possibilities for automating tasks and conducting in-depth research with the help of AI assistants or AI crews.

Experimenting with different models and tools lets us find the best fit for our specific needs and ensures high-quality output. As AI assistants continue to evolve, it's worth exploring their capabilities and pushing the boundaries of what they can achieve.

What are your experiences with AI assistants? Have you tried CrewAI or experimented with running models locally? Share your experiences with us!

Thank you for reading!

