Prompts
OptimizeRequest
Bases: BaseModel
Schema for prompt optimization request.
Attributes:
- instruction (str) – The user instruction to optimize.
- connector_ids (List[str]) – List of connector IDs that provide context for optimization.
Source code in app/api/v1/endpoints/prompts.py
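Based on the attribute list above, the request schema can be sketched as a minimal Pydantic model (field names come from the docs; the docstring wording and example payload are illustrative assumptions):

```python
from typing import List

from pydantic import BaseModel


class OptimizeRequest(BaseModel):
    """Schema for prompt optimization request."""

    instruction: str          # the user instruction to optimize
    connector_ids: List[str]  # connector IDs that provide context


# Example payload, as a client might send it:
req = OptimizeRequest(instruction="Summarize Q3 sales", connector_ids=["conn-1"])
```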
OptimizeResponse
Bases: BaseModel
Schema for prompt optimization response.
Attributes:
- optimized_instruction (str) – The optimized version of the instruction.
Source code in app/api/v1/endpoints/prompts.py
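The response schema mirrors the request: a single field carrying the optimized text. A minimal sketch, assuming only what the attribute list states:

```python
from pydantic import BaseModel


class OptimizeResponse(BaseModel):
    """Schema for prompt optimization response."""

    optimized_instruction: str  # the optimized version of the instruction


# Example instance (contents are illustrative):
resp = OptimizeResponse(optimized_instruction="Summarize Q3 sales by region.")
```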
optimize_prompt(request, service, current_user) async
Optimizes a user instruction using LLM and context from connectors.
Parameters:
- request (OptimizeRequest) – The optimization request containing the instruction and connector IDs.
- service (Annotated[PromptService, Depends(get_prompt_service)]) – The prompt service instance.
- current_user (Annotated[User, Depends(get_current_user)]) – The currently authenticated user.
Returns:
- OptimizeResponse (OptimizeResponse) – The response containing the optimized instruction.
Raises:
- FunctionalError – If there's a functional error during optimization.
- TechnicalError – If there's a technical error during optimization.
Source code in app/api/v1/endpoints/prompts.py
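The flow of the handler can be sketched end to end with the two schemas and a stand-in service. Note the hedges: `StubPromptService` and its `optimize` method name/signature are assumptions for illustration, the real code injects `service` and `current_user` via FastAPI's `Depends`, and authentication plus FunctionalError/TechnicalError handling are omitted here:

```python
import asyncio
from typing import List

from pydantic import BaseModel


class OptimizeRequest(BaseModel):
    instruction: str
    connector_ids: List[str]


class OptimizeResponse(BaseModel):
    optimized_instruction: str


class StubPromptService:
    """Stand-in for PromptService; the `optimize` name and signature are assumed."""

    async def optimize(self, instruction: str, connector_ids: List[str]) -> str:
        # A real service would call the LLM here, using context
        # fetched from the given connectors.
        return f"[optimized] {instruction}"


async def optimize_prompt(
    request: OptimizeRequest, service: StubPromptService
) -> OptimizeResponse:
    # In the real endpoint, `service` and `current_user` arrive via Depends,
    # and FunctionalError / TechnicalError may propagate from the service call.
    optimized = await service.optimize(request.instruction, request.connector_ids)
    return OptimizeResponse(optimized_instruction=optimized)


response = asyncio.run(
    optimize_prompt(
        OptimizeRequest(instruction="draft a follow-up email", connector_ids=[]),
        StubPromptService(),
    )
)
```

Keeping the handler a thin wrapper over the service, as above, keeps LLM and connector logic testable without the HTTP layer.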