selvakumar palanisamy

Mastering Mock Interviews with AI: A Deep Dive into CrewAI and Agentic Design

Are you preparing for a technical interview and wishing you had a personalized, intelligent interviewer to practice with? Look no further! My latest project, ai_mock_interview, demonstrates a powerful application of AI agents using the CrewAI framework to create a dynamic and realistic mock interview experience.

This blog post will walk you through the core components of the ai_mock_interview project, highlighting how specific Python functions are designed to act as intelligent agents and how they collaborate within a "crew" to deliver a comprehensive mock interview and feedback session.

The Power of Agentic AI with CrewAI
Before diving into the code, let's briefly understand the underlying magic. CrewAI is an open-source framework for orchestrating role-playing autonomous AI agents. It allows you to define agents with specific roles, backstories, and goals, and then assign tasks to them. These agents can communicate, delegate, and collaborate to achieve a common objective, mimicking a real-world team.
In the ai_mock_interview project, there are several agents, each responsible for a distinct phase of the interview process.
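
As a quick orientation before we get to the project code, here is a minimal, hypothetical CrewAI example (not from this repo) showing the role/goal/backstory pattern:

# A minimal, hypothetical CrewAI example (not from this project).
# Assumes an LLM API key (e.g. OPENAI_API_KEY) is set in the environment.
from crewai import Agent, Task, Crew, Process

greeter = Agent(
    role="Greeter",
    goal="Welcome readers warmly",
    backstory="You are a friendly assistant who greets people.",
)

greeting_task = Task(
    description="Write a one-sentence greeting for a blog reader.",
    expected_output="A single friendly sentence.",
    agent=greeter,
)

crew = Crew(agents=[greeter], tasks=[greeting_task], process=Process.sequential)
result = crew.kickoff()
print(result)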

Refer to my GitHub repo: https://github.com/selvakumarsai/ai_mock_interview

https://github.com/selvakumarsai/ai_mock_interview/blob/main/interview_practice_system.py

Let's break down the key Python functions and classes. At a high level, the interview flow runs in six stages:

  1. Preparation: Research the company and role, then generate a primary interview question.
  2. Concurrent Follow-up Generation: While the user answers, a follow-up question is already being prepared in the background.
  3. User Interaction: The main question is presented, and the user's answer is captured.
  4. Initial Evaluation: The user's first answer is evaluated against the model answer, providing immediate feedback.
  5. Dynamic Follow-up: The pre-generated follow-up question is presented, allowing for a deeper assessment.
  6. Final Evaluation: The follow-up answer is also evaluated, completing the mock interview cycle.

This multi-stage, agent-driven approach provides a robust, interactive, and highly valuable tool for anyone looking to sharpen their technical interview skills. By leveraging AI agents, we've created a system that is not only automated but also intelligent and adaptable, simulating the dynamic nature of real-world interviews.
Check out the script to see this sophisticated agentic design in action!

Defining the Output Structure: QuestionAnswerPair
Before delving into agents, we define a Pydantic BaseModel to structure the output of our question-generating agents.

from pydantic import BaseModel, Field


class QuestionAnswerPair(BaseModel):
    """Schema for the question and its correct answer."""
    question: str = Field(..., description="The technical question to be asked")
    correct_answer: str = Field(..., description="The correct answer to the question")


This QuestionAnswerPair class ensures that when an agent generates a question, it also provides the correct_answer in a standardized format, which is crucial for the evaluation phase.
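
For instance (illustrative values only; real pairs come from the question_preparer agent), the model validates and serializes cleanly:

# Illustrative values, not project output.
pair = QuestionAnswerPair(
    question="What does Python's GIL do?",
    correct_answer="It ensures only one thread executes Python bytecode at a time.",
)
print(pair.model_dump())  # Pydantic v2; use pair.dict() on Pydantic v1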

1. The Company Research Specialist Agent (company_researcher)

This agent is the intelligence gatherer, laying the groundwork for relevant questions.

Python Object: company_researcher (an instance of Agent).
Role: "Company Research Specialist"
Backstory: "You are an expert in researching companies and creating technical interview questions. You have deep knowledge of tech industry hiring practices and can create relevant questions that test both theoretical knowledge and practical skills."
Goal: "Gather information about the company and create interview questions with answers"
Tools: It's equipped with SerperDevTool(), allowing it to perform web searches to gather company-specific information.

# Imports shared by the snippets in this walkthrough.
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

company_researcher = Agent(
    role="Company Research Specialist",
    goal="Gather information about the company and create interview questions with answers",
    backstory="""You are an expert in researching companies and creating technical interview questions.
    You have deep knowledge of tech industry hiring practices and can create relevant
    questions that test both theoretical knowledge and practical skills.""",
    tools=[search_tool],
    verbose=True,
)

Integration: This agent is assigned the create_company_research_task, which uses its research capabilities to provide a summary of the company's technical requirements and interview process. This output then informs the question_preparer.

def create_company_research_task(company_name: str, role: str, difficulty: str) -> Task:
    return Task(
        description=f"""Research {company_name} and gather information about:
        1. Their technical interview process
        2. Common interview questions for {role} positions at {difficulty} difficulty level
        3. Technical stack and requirements

        Provide a summary of your findings.""",
        expected_output="A report about the company's technical requirements and interview process",
        agent=company_researcher,
    )

2. The Question and Answer Preparer Agent (question_preparer)

This agent is the content creator, responsible for crafting the interview questions.

Python Object: question_preparer (an instance of Agent).
Role: "Question and Answer Preparer"
Backstory: "You are an experienced technical interviewer who knows how to create challenging yet fair technical questions and provide detailed model answers. You understand how to assess different skill levels and create questions that test both theoretical knowledge and practical problem-solving abilities."
Goal: "Prepare comprehensive questions with model answers"

question_preparer = Agent(
    role="Question and Answer Preparer",
    goal="Prepare comprehensive questions with model answers",
    backstory="""You are an experienced technical interviewer who knows how to create
    challenging yet fair technical questions and provide detailed model answers.
    You understand how to assess different skill levels and create questions that
    test both theoretical knowledge and practical problem-solving abilities.""",
    verbose=True,
)


Integration: This agent is assigned the create_question_preparation_task. It takes the research from the company_researcher and then generates a technical question at the specified difficulty, along with a comprehensive model answer, adhering to the QuestionAnswerPair Pydantic schema for structured output.

def create_question_preparation_task(difficulty: str) -> Task:
    return Task(
        description=f"""Based on the company research, create:
        1. A technical question at {difficulty} difficulty level that tests both theory and practice
        2. A comprehensive model answer that covers all key points
        3. Key points to look for in candidate answers

        The question should be appropriate for {difficulty} difficulty level - challenging but fair, and the answer should be detailed.""",
        expected_output="A question and its correct answer",
        output_pydantic=QuestionAnswerPair,
        agent=question_preparer,
    )


3. The Answer Evaluator Agent (answer_evaluator)

This agent is the critic, providing crucial feedback on the candidate's answers.

Python Object: answer_evaluator (an instance of Agent).
Role: "Answer Evaluator"
Backstory: "You are a senior technical interviewer who evaluates answers against the expected solution. You know how to identify if an answer is technically correct and complete."
Goal: "Evaluate if the given answer is correct for the question"

answer_evaluator = Agent(
    role="Answer Evaluator",
    goal="Evaluate if the given answer is correct for the question",
    backstory="""You are a senior technical interviewer who evaluates answers
    against the expected solution. You know how to identify if an answer is
    technically correct and complete.""",
    verbose=True,
)

Integration: This agent is central to the create_evaluation_task. It receives the original question, the user's answer, and the correct answer, then provides a detailed evaluation, including correctness, key points covered/missing, and an explanation.

def create_evaluation_task(
    question: str, user_answer: str, correct_answer: str
) -> Task:
    return Task(
        description=f"""Evaluate if the given answer is correct for the question:
        Question: {question}
        Answer: {user_answer}
        Correct Answer: {correct_answer}
        Provide:
        1. Whether the answer is correct (Yes/No)
        2. Key points that were correct or missing
        3. A brief explanation of why the answer is correct or incorrect""",
        expected_output="Evaluation of whether the answer is correct for the question with feedback",
        agent=answer_evaluator,
    )


4. The Follow-up Question Specialist Agent (follow_up_questioner)

This agent simulates a dynamic interview by generating follow-up questions.

Python Object: follow_up_questioner (an instance of Agent).
Role: "Follow-up Question Specialist"
Backstory: "You are an expert technical interviewer who knows how to create meaningful follow-up questions that probe deeper into a candidate's knowledge and understanding. You can create questions that build upon previous answers and test different aspects of the candidate's technical expertise."
Goal: "Create relevant follow-up questions based on the context"

follow_up_questioner = Agent(
    role="Follow-up Question Specialist",
    goal="Create relevant follow-up questions based on the context",
    backstory="""You are an expert technical interviewer who knows how to create
    meaningful follow-up questions that probe deeper into a candidate's knowledge
    and understanding. You can create questions that build upon previous answers
    and test different aspects of the candidate's technical expertise.""",
    verbose=True,
)

Integration: This agent is tasked by create_follow_up_question_task. It takes the original question, company context, role, and difficulty to craft a new question that deepens the assessment, also returning its output as a QuestionAnswerPair.

def create_follow_up_question_task(
    question: str, company_name: str, role: str, difficulty: str
) -> Task:
    return Task(
        description=f"""Based on the following context, create a relevant follow-up question:
        Original Question: {question}
        Company: {company_name}
        Role: {role}
        Difficulty Level: {difficulty}

        Create a follow-up question that:
        1. Builds upon the original question
        2. Tests deeper understanding of the topic
        3. Is appropriate for the specified difficulty level
        4. Is relevant to the company and role

        The follow-up question should be challenging but fair, and should help
        assess the candidate's technical depth and problem-solving abilities.""",
        expected_output="A follow-up question that builds upon the original question",
        output_pydantic=QuestionAnswerPair,
        agent=follow_up_questioner,
    )


Orchestrating the Interview

This system intelligently uses two main crews, one for preparation and one for evaluation, with an additional crew for generating follow-up questions concurrently.

A. The preparation_crew (Question Preparation)

Agents: company_researcher, question_preparer
Tasks:
create_company_research_task: company_researcher researches the company, role, and difficulty.
create_question_preparation_task: question_preparer uses the research to generate the primary technical question and its correct answer.
Process: Process.sequential – the tasks are executed one after another.
Outcome: This crew's kickoff() returns a result whose pydantic attribute holds a QuestionAnswerPair containing the main question and its correct answer, ready to be presented to the user.

    preparation_crew = Crew(
        agents=[company_researcher, question_preparer],
        tasks=[
            create_company_research_task(company_name, role, difficulty),
            create_question_preparation_task(difficulty),
        ],
        process=Process.sequential,
        verbose=True,
    )
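
For illustration, because the final task in the sequence sets output_pydantic, the validated pair can be read off the kickoff result's pydantic attribute, the same access pattern the evaluation snippet below relies on:

preparation_result = preparation_crew.kickoff()

# Validated QuestionAnswerPair produced by the last task in the sequence.
main_question = preparation_result.pydantic.question
model_answer = preparation_result.pydantic.correct_answer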


B. The evaluation_crew (Answer Evaluation)

Agents: answer_evaluator
Tasks:
create_evaluation_task: answer_evaluator assesses the user's provided answer against the expected correct answer for the initial question.
Process: Process.sequential
Outcome: This crew's kickoff() provides a detailed textual evaluation of the user's response.

    evaluation_crew = Crew(
        agents=[answer_evaluator],
        tasks=[
            create_evaluation_task(
                question=preparation_result.pydantic.question,
                user_answer=user_answer,
                correct_answer=preparation_result.pydantic.correct_answer,
            )
        ],
        process=Process.sequential,
        verbose=True,
    )
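
Kicking off this crew yields the evaluator's textual feedback, which can be shown to the user directly; for example:

evaluation_result = evaluation_crew.kickoff()
print(evaluation_result)  # the crew output renders as the evaluator's textual feedback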


C. The follow_up_crew (Follow-up Question Generation)

This crew is created and kicked off asynchronously using asyncio, meaning it runs in the background while the user answers the main question.
Agents: follow_up_questioner
Tasks:
create_follow_up_question_task: The follow_up_questioner generates a relevant follow-up question based on the initial question and context.
Process: Process.sequential
Outcome: By the time the user finishes answering the first question, the follow_up_question_task (which is an asyncio.Task) is awaited, and its result (a QuestionAnswerPair for the follow-up) is retrieved. This allows for a more seamless interview flow.

    follow_up_question_task = asyncio.create_task(
        generate_follow_up_question(
            question=preparation_result.pydantic.question,
            company_name=company_name,
            role=role,
            difficulty=difficulty,
        )
    )
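
The generate_follow_up_question coroutine itself isn't reproduced above; here is a plausible sketch of how it could wrap the synchronous kickoff() call without blocking the event loop (asyncio.to_thread is one option; the actual script may differ):

import asyncio

async def generate_follow_up_question(
    question: str, company_name: str, role: str, difficulty: str
):
    follow_up_crew = Crew(
        agents=[follow_up_questioner],
        tasks=[
            create_follow_up_question_task(question, company_name, role, difficulty)
        ],
        process=Process.sequential,
        verbose=True,
    )
    # kickoff() is blocking, so run it in a worker thread; the event loop
    # stays free while the user is typing their first answer.
    return await asyncio.to_thread(follow_up_crew.kickoff)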


The start_interview_practice function orchestrates these crews. It first runs the preparation_crew, then concurrently initiates the follow_up_crew while prompting the user for an answer to the main question. Once the user answers, the evaluation_crew assesses the first answer. Finally, it presents the pre-generated follow-up question and evaluates the user's second response.
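
To make that sequencing concrete, here is a condensed sketch of the flow, reusing the names from the snippets above; the input/output handling is illustrative and the real script differs in detail:

import asyncio

async def start_interview_practice(company_name: str, role: str, difficulty: str):
    # 1. Preparation: research the company, then generate the main question.
    preparation_crew = Crew(
        agents=[company_researcher, question_preparer],
        tasks=[
            create_company_research_task(company_name, role, difficulty),
            create_question_preparation_task(difficulty),
        ],
        process=Process.sequential,
    )
    preparation_result = preparation_crew.kickoff()
    qa = preparation_result.pydantic  # QuestionAnswerPair

    # 2. Start generating the follow-up question in the background.
    follow_up_question_task = asyncio.create_task(
        generate_follow_up_question(qa.question, company_name, role, difficulty)
    )

    # 3. Ask the main question; read input in a thread so the loop keeps running.
    print(qa.question)
    user_answer = await asyncio.to_thread(input, "Your answer: ")

    # 4. Evaluate the first answer.
    evaluation_crew = Crew(
        agents=[answer_evaluator],
        tasks=[create_evaluation_task(qa.question, user_answer, qa.correct_answer)],
        process=Process.sequential,
    )
    print(evaluation_crew.kickoff())

    # 5. Await the pre-generated follow-up and run the second round.
    follow_up = (await follow_up_question_task).pydantic
    print(follow_up.question)
    follow_up_answer = await asyncio.to_thread(input, "Your answer: ")
    follow_up_eval_crew = Crew(
        agents=[answer_evaluator],
        tasks=[
            create_evaluation_task(
                follow_up.question, follow_up_answer, follow_up.correct_answer
            )
        ],
        process=Process.sequential,
    )
    print(follow_up_eval_crew.kickoff())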
