prompting.protocol#
Module Contents#
Classes#
- Prompting: The Prompting subclass of the Synapse class encapsulates the functionalities related to prompting scenarios.
- StreamPrompting: StreamPrompting is a specialized implementation of the StreamingSynapse tailored for prompting functionalities within the Bittensor network.
- class prompting.protocol.Prompting#
Bases:
bittensor.Synapse
The Prompting subclass of the Synapse class encapsulates the functionalities related to prompting scenarios.
It specifies three fields - roles, messages and completion - that define the state of the Prompting object. The roles and messages are read-only fields defined during object initialization, and completion is a mutable field that can be updated as the prompting scenario progresses.
The Config inner class specifies that assignment validation should occur on this class (validate_assignment = True), meaning value assignments to the instance fields are checked against their defined types for correctness.
- roles#
A list of roles in the prompting scenario. This field is both mandatory and immutable.
- Type:
List[str]
- messages#
A list of messages in the prompting scenario. This field is both mandatory and immutable.
- Type:
List[str]
- required_hash_fields#
A list of fields that are required for the hash.
- Type:
List[str]
The Prompting class also overrides the deserialize method, returning the instance itself when this method is invoked. Additionally, it provides a Config inner class that enforces the validation of assignments (validate_assignment = True).
Here is an example of how the Prompting class can be used:
```python
# Create a Prompting instance
prompt = Prompting(
    roles=["system", "user"],
    messages=["You are a helpful assistant.", "Hi, what is the meaning of life?"],
)

# Print the roles and messages
print("Roles:", prompt.roles)
print("Messages:", prompt.messages)

# Update the completion
model_prompt = ...  # Use prompt.roles and prompt.messages to generate a prompt for your LLM as a single string.
prompt.completion = model(model_prompt)

# Print the completion
print("Completion:", prompt.completion)
```
This will output:
```
Roles: ['system', 'user']
Messages: ['You are a helpful assistant.', 'Hi, what is the meaning of life?']
Completion: "The meaning of life is 42. Deal with it, human."
```
This example demonstrates how to create an instance of the Prompting class, access the roles and messages fields, and update the completion field.
- class Config#
Pydantic model configuration class for Prompting. Setting validate_assignment to True means the pydantic model will validate attribute assignments on the class.
- validate_assignment = True#
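As an illustration of what assignment validation means in practice, the minimal sketch below constructs a Prompting instance and then tries to reassign one of its read-only fields. The exact exception type raised depends on the pydantic version in use, so it is caught broadly here; this is a sketch, not part of the module itself.

```python
# Illustrative sketch only: shows the effect of validate_assignment together
# with the read-only roles and messages fields. The exact exception raised
# depends on the pydantic version, so it is caught broadly here.
from prompting.protocol import Prompting

prompt = Prompting(roles=["system", "user"], messages=["Hello", "Hi"])

# completion is a mutable string field, so a plain string assignment is accepted.
prompt.completion = "Hi there!"

# roles is documented as immutable, so reassigning it should be rejected.
try:
    prompt.roles = ["assistant"]
except Exception as err:
    print("Assignment rejected:", err)
```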
- deserialize()#
Returns the instance of the current Prompting object.
This method can be overridden by subclasses to provide custom deserialization logic. In the context of the Prompting class, it simply returns the instance itself; subclasses inheriting from this class may supply their own deserialization implementation if needed.
- Returns:
The current instance of the Prompting class.
- Return type:
Prompting
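A minimal usage sketch (assuming a directly constructed instance) is shown below; it simply confirms that deserialize hands back the same object.

```python
# Minimal sketch: Prompting.deserialize returns the instance itself.
from prompting.protocol import Prompting

prompt = Prompting(roles=["user"], messages=["Hello"])
assert prompt.deserialize() is prompt
```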
- class prompting.protocol.StreamPrompting#
Bases:
bittensor.StreamingSynapse
StreamPrompting is a specialized implementation of the StreamingSynapse tailored for prompting functionalities within the Bittensor network. This class is intended to interact with a streaming response that contains a sequence of tokens, which represent prompts or messages in a certain scenario.
As a developer, when using or extending the StreamPrompting class, you should be primarily focused on the structure and behavior of the prompts you are working with. The class has been designed to seamlessly handle the streaming, decoding, and accumulation of tokens that represent these prompts.
Attributes:
- roles (List[str]): A list of roles involved in the prompting scenario. This could represent different entities or agents involved in the conversation or use-case. They are immutable to ensure consistent interaction throughout the lifetime of the object.
- messages (List[str]): These represent the actual prompts or messages in the prompting scenario. They are also immutable to ensure consistent behavior during processing.
- completion (str): Stores the processed result of the streaming tokens. As tokens are streamed, decoded, and processed, they are accumulated in the completion attribute. This represents the “final” product or result of the streaming process.
- required_hash_fields (List[str]): A list of fields that are required for the hash.
Methods:
- process_streaming_response: This method asynchronously processes the incoming streaming response by decoding the tokens and accumulating them in the completion attribute.
- deserialize: Converts the completion attribute into its desired data format, in this case, a string.
- extract_response_json: Extracts relevant JSON data from the response, useful for gaining insights on the response’s metadata or for debugging purposes.
Note: While you can directly use the StreamPrompting class, it’s designed to be extensible. Thus, you can create subclasses to further customize behavior for specific prompting scenarios or requirements.
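For example, a subclass might override deserialize to post-process the accumulated completion. The sketch below only illustrates the extension point; the subclass name and its behavior are hypothetical.

```python
# Hypothetical subclass: only the deserialize step is customized, while the
# streaming and accumulation behavior is inherited from StreamPrompting.
from prompting.protocol import StreamPrompting

class StrippedStreamPrompting(StreamPrompting):
    """Illustrative subclass that trims whitespace from the final completion."""

    def deserialize(self) -> str:
        # Reuse the accumulated completion, then apply custom post-processing.
        return (self.completion or "").strip()
```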
- async process_streaming_response(response)#
process_streaming_response is an asynchronous method designed to process the incoming streaming response from the Bittensor network. It’s the heart of the StreamPrompting class, ensuring that streaming tokens, which represent prompts or messages, are decoded and appropriately managed.
As the streaming response is consumed, the tokens are decoded from their ‘utf-8’ encoded format, split on newline characters, and concatenated into the completion attribute, so the streamed content builds up in completion as a continuous and coherent whole.
- Parameters:
response (starlette.responses.StreamingResponse) – The streaming response object containing the content chunks to be processed. Each chunk in this response is expected to be a set of tokens that can be decoded and split into individual messages or prompts.
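The decode, split, and concatenate behavior described above can be pictured with the self-contained sketch below, in which a plain async generator of byte chunks stands in for the real streaming response object; the helper names fake_chunks and accumulate are illustrative only, not part of the module.

```python
# Stand-alone sketch of the accumulation logic described above. A plain async
# generator of byte chunks stands in for the real streaming response object.
import asyncio

async def fake_chunks():
    for chunk in (b"The meaning of ", b"life is 42.\n"):
        yield chunk

async def accumulate(chunks) -> str:
    completion = ""
    async for chunk in chunks:
        # Decode each utf-8 chunk, split on newlines, and concatenate the tokens.
        for token in chunk.decode("utf-8").split("\n"):
            completion += token
    return completion

print(asyncio.run(accumulate(fake_chunks())))
# -> The meaning of life is 42.
```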
- deserialize()#
Deserializes the response by returning the completion attribute.
- Returns:
The completion result.
- Return type:
str
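An assumed usage sketch is shown below; in real use, completion would be populated by process_streaming_response rather than assigned by hand.

```python
# Assumed usage sketch: completion is set by hand here purely for illustration.
from prompting.protocol import StreamPrompting

synapse = StreamPrompting(roles=["user"], messages=["Hello"])
synapse.completion = "The meaning of life is 42."
assert synapse.deserialize() == "The meaning of life is 42."
```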
- extract_response_json(response)#
extract_response_json is a method that performs the crucial task of extracting pertinent JSON data from the given response. The method is especially useful when you need a detailed insight into the streaming response’s metadata or when debugging response-related issues.
Beyond just extracting the JSON data, the method also processes and structures the data for easier consumption and understanding. For instance, it extracts specific headers related to dendrite and axon, offering insights about the Bittensor network’s internal processes. The method ultimately returns a dictionary with a structured view of the extracted data.
- Parameters:
response (starlette.responses.StreamingResponse) – The response object from which to extract the JSON data. This object typically includes headers and content which can be used to glean insights about the response.
- Returns:
- A structured dictionary containing:
- Basic response metadata such as name, timeout, total_size, and header_size.
- Dendrite and Axon related information extracted from headers.
- Roles and Messages pertaining to the current StreamPrompting instance.
- The accumulated completion.
- Return type:
dict
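Based on the description above, the returned dictionary can be pictured roughly as follows. The exact keys, values, and nesting are determined by the implementation, so this is only an assumed shape and the variable name example_response_json is illustrative.

```python
# Assumed shape of the extracted JSON, based on the description above; the
# real keys and nesting are determined by the implementation.
example_response_json = {
    "name": "StreamPrompting",
    "timeout": 12.0,
    "total_size": 0,
    "header_size": 0,
    "dendrite": {},  # dendrite-related header fields
    "axon": {},      # axon-related header fields
    "roles": ["system", "user"],
    "messages": ["You are a helpful assistant.", "Hi, what is the meaning of life?"],
    "completion": "The meaning of life is 42. Deal with it, human.",
}
```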