putmistral

package v9.0.0
Published: Apr 17, 2025 License: Apache-2.0 Imports: 14 Imported by: 0

Documentation

Overview

Create a Mistral inference endpoint.

Creates an inference endpoint to perform an inference task with the `mistral` service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
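A hedged sketch of creating such an endpoint with the typed Go client follows. The call shape mirrors the typed-client index, but the inference ID (`my-mistral-endpoint`), task type (`text_embedding`), API key, and model name are placeholder assumptions rather than values from this page:

package main

import (
	"context"
	"log"

	"github.com/elastic/go-elasticsearch/v9"
	"github.com/elastic/go-elasticsearch/v9/typedapi/inference/putmistral"
	"github.com/elastic/go-elasticsearch/v9/typedapi/types"
	"github.com/elastic/go-elasticsearch/v9/typedapi/types/enums/mistralservicetype"
)

func main() {
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatalf("error creating the client: %s", err)
	}

	// Placeholder task type and inference ID.
	res, err := es.Inference.PutMistral("text_embedding", "my-mistral-endpoint").
		Request(&putmistral.Request{
			Service: mistralservicetype.Mistral,
			ServiceSettings: types.MistralServiceSettings{
				ApiKey: "MISTRAL_API_KEY", // placeholder credential
				Model:  "mistral-embed",   // placeholder model name
			},
		}).
		Do(context.Background())
	if err != nil {
		log.Fatalf("error creating the inference endpoint: %s", err)
	}
	log.Printf("created inference endpoint %s", res.InferenceId)
}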

Index

Constants

This section is empty.

Variables

View Source
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")

ErrBuildPath is returned in case of missing parameters within the build of the request.

Functions

This section is empty.

Types

type NewPutMistral

type NewPutMistral func(tasktype, mistralinferenceid string) *PutMistral

NewPutMistral is a function type used by the library's API index to construct PutMistral requests.

func NewPutMistralFunc

func NewPutMistralFunc(tp elastictransport.Interface) NewPutMistral

NewPutMistralFunc returns a new instance of PutMistral with the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.
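As a brief sketch, assuming `es` is a typed client whose exported Transport field satisfies elastictransport.Interface, the returned function is invoked with the task type and inference ID (placeholders below) to obtain a request builder:

	// Obtain a builder bound to the client's transport.
	newPutMistral := putmistral.NewPutMistralFunc(es.Transport)
	req := newPutMistral("text_embedding", "my-mistral-endpoint")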

type PutMistral

type PutMistral struct {
	// contains filtered or unexported fields
}

func New

func New(tp elastictransport.Interface) *PutMistral

Create a Mistral inference endpoint.

Creates an inference endpoint to perform an inference task with the `mistral` service.

See the package overview above for notes on model deployment and how to verify it.

https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put-mistral

func (*PutMistral) ChunkingSettings

func (r *PutMistral) ChunkingSettings(chunkingsettings types.InferenceChunkingSettingsVariant) *PutMistral

The chunking configuration object. API name: chunking_settings

func (PutMistral) Do

func (r PutMistral) Do(providedCtx context.Context) (*Response, error)

Do runs the request through the transport, handles the response and returns a putmistral.Response.

func (*PutMistral) ErrorTrace

func (r *PutMistral) ErrorTrace(errortrace bool) *PutMistral

ErrorTrace When set to `true`, Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace

func (*PutMistral) FilterPath

func (r *PutMistral) FilterPath(filterpaths ...string) *PutMistral

FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path

func (*PutMistral) Header

func (r *PutMistral) Header(key, value string) *PutMistral

Header sets a key, value pair in the PutMistral headers map.

func (*PutMistral) HttpRequest

func (r *PutMistral) HttpRequest(ctx context.Context) (*http.Request, error)

HttpRequest returns the http.Request object built from the given parameters.

func (*PutMistral) Human

func (r *PutMistral) Human(human bool) *PutMistral

Human When set to `true`, statistics are returned in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines. API name: human

func (PutMistral) Perform

func (r PutMistral) Perform(providedCtx context.Context) (*http.Response, error)

Perform runs the http.Request through the provided transport and returns an http.Response.

func (*PutMistral) Pretty

func (r *PutMistral) Pretty(pretty bool) *PutMistral

Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option for debugging only. API name: pretty

func (*PutMistral) Raw

func (r *PutMistral) Raw(raw io.Reader) *PutMistral

Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
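A short sketch of the raw path, assuming the same typed client `es` as in the overview example and that `strings` and `context` are imported (the JSON values are placeholders):

	// The raw JSON body takes precedence over any typed Request.
	body := strings.NewReader(`{
	  "service": "mistral",
	  "service_settings": {
	    "api_key": "MISTRAL_API_KEY",
	    "model": "mistral-embed"
	  }
	}`)

	res, err := es.Inference.PutMistral("text_embedding", "my-mistral-endpoint").
		Raw(body).
		Do(context.Background())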

func (*PutMistral) Request

func (r *PutMistral) Request(req *Request) *PutMistral

Request allows setting the request property with the appropriate payload.

func (*PutMistral) Service

func (r *PutMistral) Service(service mistralservicetype.MistralServiceType) *PutMistral

The type of service supported for the specified task type. In this case, `mistral`. API name: service

func (*PutMistral) ServiceSettings

func (r *PutMistral) ServiceSettings(servicesettings types.MistralServiceSettingsVariant) *PutMistral

Settings used to install the inference model. These settings are specific to the `mistral` service. API name: service_settings

type Request

type Request struct {

	// ChunkingSettings The chunking configuration object.
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// Service The type of service supported for the specified task type. In this case,
	// `mistral`.
	Service mistralservicetype.MistralServiceType `json:"service"`
	// ServiceSettings Settings used to install the inference model. These settings are specific to
	// the `mistral` service.
	ServiceSettings types.MistralServiceSettings `json:"service_settings"`
}

Request holds the request body struct for the package putmistral

https://github.com/elastic/elasticsearch-specification/blob/52c473efb1fb5320a5bac12572d0b285882862fb/specification/inference/put_mistral/PutMistralRequest.ts#L29-L78

func NewRequest

func NewRequest() *Request

NewRequest returns a Request
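For illustration, a Request can also be populated field by field before being passed to (*PutMistral).Request; the service-settings values below are placeholder assumptions:

	req := putmistral.NewRequest()
	req.Service = mistralservicetype.Mistral
	req.ServiceSettings = types.MistralServiceSettings{
		ApiKey: "MISTRAL_API_KEY", // placeholder credential
		Model:  "mistral-embed",   // placeholder model name
	}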

func (*Request) FromJSON

func (r *Request) FromJSON(data string) (*Request, error)

FromJSON allows loading an arbitrary JSON payload into the request structure.
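A sketch of the JSON path, mirroring the wire format of the request body (values are placeholders):

	req, err := putmistral.NewRequest().FromJSON(`{
	  "service": "mistral",
	  "service_settings": {
	    "api_key": "MISTRAL_API_KEY",
	    "model": "mistral-embed"
	  }
	}`)
	if err != nil {
		log.Fatalf("invalid request JSON: %s", err)
	}

The resulting Request can then be supplied via (*PutMistral).Request before calling Do.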

type Response

type Response struct {

	// ChunkingSettings Chunking configuration object
	ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
	// InferenceId The inference Id
	InferenceId string `json:"inference_id"`
	// Service The service type
	Service string `json:"service"`
	// ServiceSettings Settings specific to the service
	ServiceSettings json.RawMessage `json:"service_settings"`
	// TaskSettings Task settings specific to the service and task type
	TaskSettings json.RawMessage `json:"task_settings,omitempty"`
	// TaskType The task type
	TaskType tasktype.TaskType `json:"task_type"`
}

Response holds the response body struct for the package putmistral

https://github.com/elastic/elasticsearch-specification/blob/52c473efb1fb5320a5bac12572d0b285882862fb/specification/inference/put_mistral/PutMistralResponse.ts#L22-L25

func NewResponse

func NewResponse() *Response

NewResponse returns a Response