AWS Bedrock Generative AI + Demo Projects - Part 1

This is part 1 of a series of blog posts on AWS generative AI and Bedrock, with some production-ready demo projects.

What Is AWS Bedrock?

Amazon Bedrock is a part of Amazon Web Services (AWS) that offers developers access to foundational models and the tools to customize them for specific applications. Developers don’t need to build their own infrastructure to train and host their applications. Instead, they rely on AWS’s cloud.

The goal of Amazon Bedrock is to make it as easy as possible for developers to build and deploy generative AI applications. It does this by offering foundational models, the large language models (LLMs) built by other companies, to serve as the backbone of a new application. AWS partners with AI21 Labs, Anthropic, and Stability AI to offer their LLMs for developers to build on. Developers can then add their own custom data to further train a model and build out their applications before deploying them on AWS's cloud.

The Generative AI Opportunity:

The global generative AI market could reach $109 billion by 2030, according to analysis from Grand View Research. With the growing demand for generative AI applications in just about every industry, cloud computing providers have an opportunity to help businesses develop and scale their applications. About 10% to 20% of total revenue in generative AI goes to cloud providers, according to analysts at Andreessen Horowitz.

What are the benefits of using Bedrock?

AWS Bedrock offers swift AI access without the complexities of infrastructure, leveraging AWS's cloud expertise for seamless integration.

How to activate Model Access in the AWS Console?

1. On the Bedrock service page in the AWS Console, choose Model Access. Depending on the region you are in, it will show the available base models, as mentioned below.


2. Click Edit in the top right corner, select the base models required for your development task, and save the changes.

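Once model access is enabled, you can verify which base models are available programmatically. Below is a minimal sketch using boto3 (it assumes your AWS credentials are configured and uses us-east-1 as an example region; adjust the region to wherever you enabled access):

import boto3

# The "bedrock" client exposes control-plane operations such as listing models.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["providerName"])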

What are the Foundational Models (FMs)?

Foundational Models are collaborative creations with leading AI pioneers like AI21 Labs, Anthropic, and Stability AI. They serve as the building blocks for diverse AI applications, providing the bedrock upon which developers construct their AI visions.

Note: For security, the data we input stays within AWS and does not leave the AWS environment while using Bedrock.

If we look into the model providers, we can see the details and specifications of each model. Bedrock provides access to a range of powerful FMs for handling text and images, including Amazon's Titan FMs and two recently introduced large language models (LLMs). This service is managed by AWS, ensuring scalability, dependability, and data security. With Bedrock's serverless approach, users can effortlessly discover, customize, and incorporate these FMs into their applications using familiar AWS tools, all without the need to manage the infrastructure.

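As a rough sketch of how you might inspect an individual model's specifications from code (the model ID below is just an example; swap in any ID returned by list_foundation_models, and verify the response fields against the current boto3 documentation):

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Fetch the details of a single foundation model by its ID.
details = bedrock.get_foundation_model(modelIdentifier="amazon.titan-text-express-v1")["modelDetails"]
print(details["modelName"], details["inputModalities"], details["outputModalities"])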

1. AI21 Labs Jurassic

[Chart comparing Jurassic-2 models. Source: medium.com]

Note: The above chart illustrates that Jurassic-2 Ultra offers superior quality, while Jurassic-2 Mid provides a cost-efficient option with slightly lower latency.

Use Cases of Jurassic are:

  • Financial Services: Streamline intricate financial reports, extract crucial information from papers, and craft custom legal and financial documents. Moreover, design a natural language interface for materials and data to allow users to engage with the content through common phrases while gleaning insights.
  • Retail: Create descriptions for products, evaluate and summarize reviews of those products, and develop customized marketing materials that fit the desired tone, length, and style.
  • Customer Support: Provide immediate and organic language replies to customer queries by employing documents, policies, and information sheets.
  • Knowledge Management: Make it easier for employees to access organizational data and extract valuable insights through natural language interactions.
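
As an illustration, here is a minimal sketch of invoking a Jurassic-2 model through the Bedrock runtime API with boto3 (the model ID, prompt, and parameters are examples, and the request/response fields follow AI21's format as I understand it; verify them against the Bedrock documentation):

import boto3, json

# The "bedrock-runtime" client is used for inference (invoke_model).
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "Summarize the key risks in this quarterly financial report: ...",
    "maxTokens": 200,
    "temperature": 0.5,
})

response = runtime.invoke_model(
    modelId="ai21.j2-mid-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["completions"][0]["data"]["text"])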


2. Amazon Titan


There are two categories of Titan models: Embeddings and Text Generation.

The Titan Embeddings models convert text inputs such as words and phrases into numeric representations (embeddings) that capture their semantic meaning. Although they do not generate text output themselves, these models play a significant role in personalization and search tasks, where comparing embeddings produces contextually appropriate results.
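
A minimal sketch of generating an embedding with a Titan Embeddings model (the model ID and response field names reflect the Titan format as commonly documented; treat them as assumptions to verify):

import boto3, json

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({"inputText": "Wireless noise-cancelling headphones"})

response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response contains a numeric vector that can be compared (e.g. by cosine similarity) for search.
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding), embedding[:5])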

Titan Text models, by contrast, are generative LLMs used for tasks such as summarization, text generation, classification, open-ended Q&A, and information extraction. They are also trained on a variety of programming languages and rich-text formats, including JSON, tables, and CSV, among others.

To support responsible AI, the Titan foundation models include built-in capabilities to detect and remove harmful content from data and to filter out inappropriate user-generated output such as hate speech, profanity, and violence-related material.

There are several use cases for Amazon Titan, such as:

With Amazon Titan, you can benefit from features such as text generation for creating content, summarization for concise overviews of long documents, and semantic search using Titan Embeddings. Retrieval Augmented Generation (RAG) is also possible, improving the accuracy of responses to user queries by connecting foundation models to your own data sources.

This technology has a diverse range of uses, from generating human-like language for summarizing content or providing answers to questions. Additionally, it can improve the accuracy and relevance of search results while giving personalized recommendations. The inclusion of responsible AI functionality ensures that inappropriate or harmful material is kept at bay. Furthermore, Amazon Titan models are highly customizable using your own data, enabling them to perform unique tasks tailored specifically to your organization's needs.
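
For text generation, a Titan Text model can be invoked in much the same way as other Bedrock models. A minimal sketch (the model ID and request fields are assumptions based on the Titan Text request format):

import boto3, json

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Write a short product description for a reusable water bottle.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5, "topP": 0.9},
})

response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])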

3. Anthropic Claude


Anthropic, a research company, developed the sophisticated large language model (LLM) known as Claude. It is specifically designed to handle a large volume of tokens in its context window during inference, which allows it to understand and produce responses based on lengthy documents.

Some use cases of Claude are –

Customer service: Claude can serve as an always-available virtual service representative, handling service inquiries promptly and in a friendly manner and raising customer satisfaction.

Operations: Claude is skilled at gathering pertinent data from corporate emails and documents, compiling survey answers into a clear and concise style, and quickly and accurately processing enormous amounts of text.

Legal: Claude is equipped to analyze legal documents and provide answers to questions related to them, allowing lawyers to reduce costs and concentrate on more advanced tasks.

Coding: Claude models are always improving their mathematical, reasoning, and coding skills. The most recent model, Claude 2, performed better on the Codex HumanEval, a Python coding exam, with a score of 71.2% (up from 56.0%).

By utilizing Anthropic Claude, you can obtain the following advantages.

Industry-leading 100K token context window: Claude provides a safe means of handling large volumes of data with its 100,000-token context window, roughly equivalent to 75,000 words. Claude can help with any type of content, including documents, research reports, emails, FAQs, chat transcripts, records, and more.

State-of-the-art capabilities for a variety of tasks: Claude can handle a wide range of jobs with versatility and skill, including complex reasoning, coding, following precise instructions, and creative content creation. Its functions include content-based Q&A, categorization, rephrasing, summarizing, extracting structured data, and editing.

Frontier AI safety features: Claude uses methods like Constitutional AI and is built on Anthropic's state-of-the-art safety research. Its design seeks to reduce brand risk by being harmless, honest, and helpful.
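
A minimal sketch of calling Claude on Bedrock follows. Claude v2 expects the Human/Assistant prompt format shown below; the model ID, prompt, and parameters are examples to adapt:

import boto3, json

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude expects the prompt wrapped in "\n\nHuman:" / "\n\nAssistant:" turns.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the main obligations in this contract clause: ...\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = runtime.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    contentType="application/json",
    accept="application/json",
)

print(json.loads(response["body"].read())["completion"])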

4. Cohere Command


Cohere Command is a text generation model that, when given specific inputs, generates text-based responses suited to business needs.

Among Cohere's use cases are:

  • Spelling and grammar check — Checks a given text for proper capitalization of phrases and fixes capitalization mistakes.
  • Transcript Summarization — Extracts the primary message or the main idea from a conversation.
  • Headline Market Analysis — When provided with a news headline, it determines whether the article falls into the categories of technology, economy, or health.
  • Product Description to Benefits — Transforms a product description into a list of benefits, including functional, emotional, and social advantages.
  • Semantic Similarity — Imagine the frequent recurrence of inquiries faced by customer service representatives on a daily basis. Language models can evaluate text similarity and determine if an incoming question bears resemblance to questions previously answered in the FAQ section. Upon receiving similarity scores, the system can take various actions, such as displaying the answer to the most similar question (if it surpasses a certain similarity threshold) or suggesting it to a customer service agent.
  • Subreddit Titles — Recognizes distinctive groupings or themes among posts within subreddits.
  • Invoice Identity Extraction — When presented with an invoice, it can extract the named entities mentioned in the document.
  • Product Description — Generates a product description based on provided keywords and the product’s name.
  • Chatbot — Operates as a chatbot, capable of handling a variety of conversational tasks.
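
A minimal sketch of a Cohere Command invocation on Bedrock, for example to generate product benefit statements (the model ID and request/response fields are assumptions based on Cohere's text generation format; verify them before use):

import boto3, json

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "Write three benefit statements for a solar-powered garden light.",
    "max_tokens": 200,
    "temperature": 0.7,
})

response = runtime.invoke_model(
    modelId="cohere.command-text-v14",
    body=body,
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["generations"][0]["text"])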

5. Stable Diffusion XL

Stable Diffusion XL is a deep learning, text-to-image model used to create detailed images from text descriptions, and to perform image inpainting, image outpainting, and image-to-image translation.

With one of the highest parameter counts of any openly accessible image model, SDXL 1.0 has a base model of 3.5 billion parameters and a model ensemble pipeline of 6.6 billion parameters; the outputs of the two models are combined.

The complete model is based on a two-step latent diffusion model. The base model first produces noisy latent representations. These are then refined by a dedicated model for the denoising step. What’s important is that the base model also works independently.

The two-stage approach preserves speed and does not require excessive compute resources, while still guaranteeing robustness in image production. SDXL 1.0 is expected to run efficiently on consumer GPUs with 8GB of VRAM or on widely available cloud instances.

A two-stage image model with one of the largest parameter counts in open access, ensuring robust image generation

Some use cases of Stable Diffusion XL are –

Advertising and Promotions: Enables the creation of customized campaigns and a wide range of marketing materials.

Media and Entertainment: Enables the generation of boundless creative assets and visual imagination.

Gaming and the Metaverse: Pushes the limits of immersive experiences by enabling the creation of new characters, scenarios, and virtual worlds.
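
To tie this together, here is a minimal sketch of generating an image with SDXL on Bedrock and saving it to disk (the model ID, parameters, and response fields follow the Stability request format as I understand it; check them against the current Bedrock documentation):

import base64
import boto3, json

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "text_prompts": [{"text": "A cozy mountain cabin at sunset, digital art"}],
    "cfg_scale": 7,
    "steps": 30,
    "seed": 42,
})

response = runtime.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The image is returned as a base64-encoded string.
image_b64 = json.loads(response["body"].read())["artifacts"][0]["base64"]
with open("cabin.png", "wb") as f:
    f.write(base64.b64decode(image_b64))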

END OF PART 1

The remaining content in this topic will be covered in the next part, as follows:

  • Code Generation Project Demo
