Sample Files: AI-Complaint-Assistant with prompt

You can share your own app files on this page. Please include a brief description of your app files.


[[Media: Sample-Complaint-Assessment-with-prompt.zip]]
<blockquote>
Description: The settings query openFDA for product code FMF over a date range from 2020 to the present; the prompts generate similarity scores against the top matching MAUDE reports. Sketches of the corresponding openFDA request and model call follow the settings below.
'''''Reference files'''''
[[File: ComplaintAssessmentPrompt-Schema.zip]]
[[File: openFDA device_search_fields.xlsx]]
<code>
Settings.txt:
CONFIG:OpenFDA-Vertex-V1.0  //identifies schema version
 
Open FDA API Query specification:
https_url1:api.fda.gov/device/event.json?
limit1:100
 
REPORT
report_title:Complaint Reportability Assessment
 
// Data fields analyzed in MDR query results
data1_fields:product_problems, device.brand_name, patient.patient_problems, device.manufacturer_d_name, report_number, mdr_text.text
 
// General keyword search terms (in any field)
KEYWORD1-SEARCH:
 
// Query search fields and terms
SEARCH1-FIELDS:SEARCH1-TERMS
date_received:[20200101+TO+20240315]
device.device_report_product_code:(FMF)
 
// Query results sorting
SORT1-FIELD:SORT1-TERM
date_received:desc
 
// Query count field/term
COUNT1-FIELD:COUNT1-TERM
 
AI CONFIGURATION & PROMPTS:
// Prompt that summarizes the product problem
AI-ProblemSummaryPrompt:Describe the following product problem in a couple of sentences. Include essential details.
// Prompt that counts the instances of items
AI-CountSummaryPrompt:List each unique item along with the count for each item.
// Prompt that summarizes the problem similarity to a product problem
AI-ProblemSimilarityPrompt:Analyze the similarity of this problem ({{problem_input}}) to the following problem. Use semantic similarity measurement. Present the result with the similarity score as a percentage followed by a concise explanation of the similarity score.
// Prompt that analyzes the similarity between input problem and MAUDE results matching query criteria
AI-MDRSimilarityPrompt:Match the most similar problem reports to this ({{problem_input}}). Include all important details. Include all reference numbers. Include the similarity scores as a percentage. Explain the similarities. Use semantic similarity measurements. Present results in descending similarity scores. For each matching result, include the matching report number followed by the similarity score followed by a brief description of the problem followed by an explanation of the similarities.
// Prompt that summarizes the most similar problems
AI-ReportSummaryPrompt: List the top matching problem reports with the highest similarity scores. Include all details about the matching problems.
// Maximum number of words in each intermediate report.
AI-WordsPerReport:1500
// LLM Pro-Vision temperature index (0..1f)
AI-ModelTemperature:0.05
// LLM Pro-Vision TOP_P index (0..1f)
AI-ModelTopP:0.4
// LLM Pro-Vision TOP_K (number of highest-probability tokens considered for next-token prediction)
AI-ModelTopK:10
// LLM Maximum output tokens (1..2048)
AI-ModelMaxOutputTokens:2048
</code>
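For orientation, the sketch below shows roughly how the limit1, SEARCH1, and SORT1 settings above could be assembled into a single openFDA device-event request. It is an illustration only, not part of the sample app: the use of the Python requests library and the build_search helper are assumptions, while the query syntax follows the public openFDA API documentation.
<code>
import requests

# Values taken from Settings.txt above.
BASE_URL = "https://api.fda.gov/device/event.json"   # https_url1
LIMIT = 100                                           # limit1
SEARCH_TERMS = {                                      # SEARCH1-FIELDS:SEARCH1-TERMS
    "date_received": "[20200101+TO+20240315]",
    "device.device_report_product_code": "FMF",
}
SORT = "date_received:desc"                           # SORT1-FIELD:SORT1-TERM

def build_search(terms: dict) -> str:
    """Join field:term pairs with AND, as the openFDA search parameter expects."""
    return "+AND+".join(f"{field}:{term}" for field, term in terms.items())

# Build the query string manually so the openFDA range syntax is kept verbatim.
url = f"{BASE_URL}?search={build_search(SEARCH_TERMS)}&sort={SORT}&limit={LIMIT}"
response = requests.get(url, timeout=30)
response.raise_for_status()

# Each returned record carries the data1_fields that the prompts analyze,
# e.g. product_problems, device.brand_name, mdr_text.text.
results = response.json().get("results", [])
print(f"Retrieved {len(results)} MDR records")
</code>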
</blockquote>
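The AI-Model* values above correspond to standard LLM sampling parameters. As a hedged sketch only, the snippet below shows how they might be passed to a Gemini Pro Vision model through the Vertex AI Python SDK; the project ID, region, and exact model name are illustrative assumptions, not values taken from the sample files.
<code>
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

# Illustrative project and region; replace with your own Vertex AI setup.
vertexai.init(project="my-gcp-project", location="us-central1")

# Sampling values copied from the AI CONFIGURATION section of Settings.txt.
generation_config = GenerationConfig(
    temperature=0.05,        # AI-ModelTemperature
    top_p=0.4,               # AI-ModelTopP
    top_k=10,                # AI-ModelTopK
    max_output_tokens=2048,  # AI-ModelMaxOutputTokens
)

# "LLM Pro-Vision" in the settings suggests a Gemini Pro Vision model;
# the exact model identifier used by the app is an assumption here.
model = GenerativeModel("gemini-pro-vision")

problem_input = "Device alarm failed to sound during infusion."  # example input
prompt = (
    "Describe the following product problem in a couple of sentences. "
    "Include essential details.\n\n" + problem_input  # AI-ProblemSummaryPrompt
)

response = model.generate_content(prompt, generation_config=generation_config)
print(response.text)
</code>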
 
==Sample App Files==
[[Media: Basic-Complaint-Assessment-with-prompt.zip]]
Description: The settings filter on product code FMF with a date range from 2020 to the present. The prompts match the top similar product problem codes and individual MAUDE reports using semantic similarity.
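The similarity prompts in these samples ask the LLM itself to measure semantic similarity and report it as a percentage. As a rough non-LLM illustration of the same idea, the sketch below scores an input problem against a few MDR narratives using TF-IDF cosine similarity from scikit-learn; this lexical stand-in, the example narratives, and the report numbers are all assumptions rather than the app's actual scoring method.
<code>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# problem_input plays the role of {{problem_input}} in the prompts;
# the narratives stand in for mdr_text.text pulled from the query results.
problem_input = "Pump alarm did not sound when an occlusion occurred during infusion."
mdr_narratives = {
    "MDR-0000001": "Device alarm failed to activate after line occlusion; infusion continued.",
    "MDR-0000002": "Battery depleted earlier than expected; no patient harm reported.",
    "MDR-0000003": "Occlusion alarm sounded late, delaying the clinical response.",
}

# Vectorize the input problem together with the narratives, then compare.
texts = [problem_input] + list(mdr_narratives.values())
matrix = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

# Present results in descending similarity order with percentage scores,
# mirroring the output format requested by AI-MDRSimilarityPrompt.
ranked = sorted(zip(mdr_narratives, scores), key=lambda pair: pair[1], reverse=True)
for report_number, score in ranked:
    print(f"{report_number}: {score:.0%}")
</code>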