
David Britt for Nosana

Originally published at nosana.com

Nosana Builders Challenge: Agent-101

The main goal of this Nosana Builders Challenge is to teach participants to build and deploy AI agents. The first step is to get a basic agent running and give it some basic functionality by adding a tool, i.e. using the agent's tool-calling capabilities. Tools are essentially TypeScript functions that, for example, retrieve data from a weather API or post a tweet via an API call.

Mastra

For this challenge, we will be using Mastra to build our tool.

Mastra is an opinionated TypeScript framework that helps you build AI applications and features quickly. It gives you the set of primitives you need: workflows, agents, RAG, integrations, and evals. You can run Mastra on your local machine, or deploy to a serverless cloud.
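
To make tool calling concrete, here is a minimal sketch of a custom Mastra tool, loosely based on the Mastra tool-calling docs. The todo example, the identifiers, and the import paths are illustrative assumptions; check the Mastra documentation and the template files in this repo for the exact API of the version you install.

// Hypothetical sketch of a Mastra tool (names and import paths are assumptions)
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const addTodoTool = createTool({
  id: "add-todo",
  description: "Add a task to the user's todo list",
  inputSchema: z.object({
    task: z.string().describe("The task to add"),
  }),
  outputSchema: z.object({
    confirmation: z.string(),
  }),
  // `context` holds the validated input the LLM supplied when it decided to call the tool
  execute: async ({ context }) => {
    // A real tool would persist the task or call an external API here
    return { confirmation: `Added "${context.task}" to your list.` };
  },
});

The agent's LLM decides when to call the tool and fills in the input schema; your execute function does the actual work, such as an API call, a database write, or a calculation.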

Required Reading

We recommend reading the Mastra documentation on how to create an Agent and how to implement Tool Calling before you get started.

Get Started

To start developing, run the following commands.
We recommend using pnpm, but you can use npm or bun if you prefer.

pnpm install
pnpm run dev

Assignment

Challenge Overview

Welcome to the Nosana AI Agent Hackathon! Your mission is to build and deploy an AI agent on Nosana.
While we provide a weather agent as an example, your creativity is the limit. Here are some ideas at different skill levels:

Beginner Level:

  • Simple Calculator: Perform basic math operations with explanations
  • Todo List Manager: Help users track their daily tasks

Intermediate Level:

  • News Summarizer: Fetch and summarize latest news articles
  • Crypto Price Checker: Monitor cryptocurrency prices and changes
  • GitHub Stats Reporter: Fetch repository statistics and insights

Advanced Level:

  • Blockchain Monitor: Track and alert on blockchain activities
  • Trading Strategy Bot: Automate simple trading strategies
  • Deploy Manager: Deploy and manage applications on Nosana

Or any other innovative AI agent idea at your skill level!

Getting Started

  1. Fork the Nosana Agent Challenge to your GitHub account
  2. Clone your fork locally
  3. Install dependencies with pnpm install
  4. Run the development server with pnpm run dev
  5. Build your agent using the Mastra framework

How to build your Agent

Here we will describe the steps needed to build an agent.

Folder Structure

Provided in this repo is the Weather Agent, a fully working agent that lets a user chat with an LLM and fetches real-time weather data for the provided location.

There are two main folders we need to pay attention to:

In src/mastra/agents/weather-agent/ you will find a complete example of a working agent: the agent definition, API calls, interface definitions, and basically everything needed to get a fully fledged agent up and running.
In src/mastra/agents/your-agents/ you will find a bare-bones example of the components and imports needed to start building your own agent. We recommend you rename this folder and its files to get started.

Rename these files to reflect the purpose of your agent and tools. You can use the Weather Agent example as a guide while you build, and delete it before your final submission.
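
As a rough sketch of where this ends up, an agent definition in your renamed folder might look something like the following, with a custom tool wired in. The file names, identifiers, and import paths here are assumptions; treat the provided template files as the authoritative starting point.

// Hypothetical sketch of src/mastra/agents/your-agent/your-agent.ts (paths are assumptions)
import { Agent } from "@mastra/core/agent";
import { addTodoTool } from "./your-tool";   // a custom tool, e.g. the sketch shown earlier
import { model } from "../../config";        // the LLM configured for the project (see LLM-Endpoint below)

export const todoAgent = new Agent({
  name: "Todo Agent",
  instructions:
    "You help users track their daily tasks. Use the add-todo tool whenever the user asks you to remember something.",
  model,
  tools: { addTodoTool },
});

The agent also needs to be registered with the Mastra instance (typically in src/mastra/index.ts) so it shows up in the playground and on the API.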

As a bonus for the more ambitious, we have also provided src/mastra/agents/weather-agent/weather-workflow.ts as an example of how you can chain agents and tools into a workflow: the user provides their location, and the agent retrieves the weather for that location and suggests an itinerary.

LLM-Endpoint

Agents depend on an LLM to be able to do their work.

Running Your Own LLM with Ollama

The default configuration uses a local Ollama LLM.
For local development, or if you prefer to run your own LLM, you can use Ollama to serve the lightweight qwen2.5:1.5b model.

Installation & Setup:

  1. Install Ollama:

  2. Start Ollama service:

ollama serve
  3. Pull and run the qwen2.5:1.5b model:
ollama pull qwen2.5:1.5b
ollama run qwen2.5:1.5b
  4. Update your .env file

The .env file defines two environments: one for local development, and another with a larger model, qwen2.5:32b, for more complex use cases.

Why qwen2.5:1.5b?

  • Lightweight (only ~1GB)
  • Fast inference on CPU
  • Supports tool calling
  • Great for development and testing

Do note qwen2.5:1.5b is not suited for complex tasks.

The Ollama server will run on http://localhost:11434 by default and is compatible with the OpenAI API format that Mastra expects.
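
Because the endpoint speaks the OpenAI API format, you can point any OpenAI-compatible provider at it and pass the resulting model to your agent. The sketch below uses the AI SDK's OpenAI provider; the package choice and the environment variable names are assumptions, so adapt them to whatever your .env actually defines.

// Hypothetical sketch: wire the agent's model to a local Ollama endpoint
// via an OpenAI-compatible provider. Env variable names are illustrative.
import { createOpenAI } from "@ai-sdk/openai";

const ollama = createOpenAI({
  baseURL: process.env.API_BASE_URL ?? "http://localhost:11434/v1", // Ollama's OpenAI-compatible route
  apiKey: "ollama", // Ollama ignores the key, but the SDK requires a value
});

// Pass this as the `model` property of your Agent
export const model = ollama(process.env.MODEL_NAME_AT_ENDPOINT ?? "qwen2.5:1.5b");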

Testing your Agent

You can read the Mastra Documentation: Playground to learn more about how to test your agent locally.
Before deploying your agent to Nosana, it's crucial to test it thoroughly to ensure everything works as expected. Follow these steps to validate your agent:

Local Testing:

  1. Start the development server with pnpm run dev and navigate to http://localhost:8080 in your browser
  2. Test your agent's conversation flow by interacting with it through the chat interface
  3. Verify tool functionality by triggering scenarios that call your custom tools (a scripted example follows this list)
  4. Check error handling by providing invalid inputs or testing edge cases
  5. Monitor the console logs to ensure there are no runtime errors or warnings
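
If you also want a scripted check alongside the chat interface, you can call the dev server's HTTP API directly. The route below is an assumption based on the Mastra server docs (/api/agents/<agentId>/generate), and yourAgent stands in for whatever id your agent is registered under; verify both against your Mastra version before relying on this.

// Hypothetical smoke test against the local dev server (route and agent id are assumptions)
// Run with e.g. `npx tsx smoke-test.ts` while `pnpm run dev` is running
const res = await fetch("http://localhost:8080/api/agents/yourAgent/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Add 'buy milk' to my todo list" }],
  }),
});
console.log(await res.json()); // should contain the agent's reply and any tool results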

Docker Testing:
After building your Docker container, test it locally before pushing to the registry:

# Build your container
docker build -t yourusername/agent-challenge:latest .

# Run it locally with environment variables
docker run -p 8080:8080 --env-file .env yourusername/agent-challenge:latest

# Test the containerized agent at http://localhost:8080

Ensure your agent responds correctly and all tools function properly within the containerized environment. This step is critical as the Nosana deployment will use this exact container.

Submission Requirements

1. Code Development

  • Fork this repository and develop your AI agent
  • Your agent must include at least one custom tool (function)
  • Code must be well-documented and include clear setup instructions
  • Include environment variable examples in a .env.example file

2. Docker Container

  • Create a Dockerfile for your agent
  • Build and push your container to Docker Hub or GitHub Container Registry
  • Container must be publicly accessible
  • Include the container URL in your submission
Build, Run, Publish

Note: You'll need an account on Docker Hub


# Build and tag
docker build -t yourusername/agent-challenge:latest .

# Run the container locally
docker run -p 8080:8080 yourusername/agent-challenge:latest

# Login
docker login

# Push
docker push yourusername/agent-challenge:latest

3. Nosana Deployment

  • Deploy your Docker container on Nosana
  • Your agent must successfully run on the Nosana network
  • Include the Nosana job ID or deployment link
Nosana Job Definition

We have included a Nosana job definition at ./nos_job_def/nosana_mastra.json, which you can use to publish your agent to the Nosana network.

A. Deploying using @nosana/cli

  • Edit the file and set the image property to your published Docker image, e.g. "image": "docker.io/yourusername/agent-challenge:latest"
  • Download and install the @nosana/cli
  • Load your wallet with some funds
    • Retrieve your address with: nosana address
    • Go to our Discord and ask for some NOS and SOL to publish your job.
  • Run: nosana job post --file nosana_mastra.json --market nvidia-3060 --timeout 30
  • Go to the Nosana Dashboard to see your job

B. Deploying using the Nosana Dashboard

  • Make sure you have the Phantom wallet (https://phantom.com/) installed in your browser.
  • Go to our Discord and ask for some NOS and SOL to publish your job.
  • Click the Expand button on the Nosana Dashboard
  • Copy and paste your edited Nosana job definition file into the text area
  • Choose an appropriate GPU for the AI model that you are using
  • Click Deploy

4. Video Demo

  • Record a 1-3 minute video demonstrating:
    • Your agent running on Nosana
    • Key features and functionality
    • Real-world use case demonstration
  • Upload to YouTube, Loom, or similar platform

5. Documentation

  • Update this README with:
    • Agent description and purpose
    • Setup instructions
    • Environment variables required
    • Docker build and run commands
    • Example usage

Submission Process

  1. Complete all requirements listed above
  2. Commit all of your changes to the main branch of your forked repository
    • All your code changes
    • Updated README
    • Link to your Docker container
    • Link to your video demo
    • Nosana deployment proof
  3. Social Media Post: Share your submission on X (Twitter)
    • Tag @nosana_ai
    • Include a brief description of your agent
    • Add hashtag #NosanaAgentChallenge
  4. Finalize your submission on the https://earn.superteam.fun/agent-challenge page
    • Remember to add your forked GitHub repository link
    • Remember to add a link to your X post.

Judging Criteria

Submissions will be evaluated based on:

  1. Innovation (25%)
    • Originality of the agent concept
    • Creative use of AI capabilities
  2. Technical Implementation (25%)
    • Code quality and organization
    • Proper use of the Mastra framework
    • Efficient tool implementation
  3. Nosana Integration (25%)
    • Successful deployment on Nosana
    • Resource efficiency
    • Stability and performance
  4. Real-World Impact (25%)
    • Practical use cases
    • Potential for adoption
    • Value proposition

Prizes

We’re awarding the top 10 submissions:

  • πŸ₯‡ 1st: $1,000 USDC
  • πŸ₯ˆ 2nd: $750 USDC
  • πŸ₯‰ 3rd: $450 USDC
  • πŸ… 4th: $200 USDC
  • πŸ”Ÿ 5th–10th: $100 USDC

All prizes are paid out directly to participants on Superteam.

Resources

Support

Important Notes

  • Ensure your agent doesn't expose sensitive data
  • Test thoroughly before submission
  • Keep your Docker images lightweight
  • Document all dependencies clearly
  • Make your code reproducible
  • You can vibe code it if you want πŸ˜‰
  • Only one submission per participant
  • Submissions that do not compile or do not meet the specified requirements will not be considered
  • The deadline is 9 July 2025, 12:01 PM
  • Winners will be announced about one week later; keep an eye on our socials for the exact date

Don’t Miss Nosana Builder Challenge Updates

Good luck, builders! We can't wait to see the innovative AI agents you create for the Nosana ecosystem.
Happy Building!
