Sample Files: AI-Complaint-Assistant with prompt

[[File: ComplaintAssessmentPrompt-Schema.zip]]
[[File: openFDA device_search_fields.xlsx]]
<code>
Settings.txt:
CONFIG:OpenFDA-Vertex-V1.0  //identifies schema version

OpenFDA API query specification:
https_url1:api.fda.gov/device/event.json?
limit1:100

REPORT
report_title:Complaint Reportability Assessment

// Data fields analyzed in MDR query results
data1_fields:product_problems, device.brand_name, patient.patient_problems, device.manufacturer_d_name, report_number, mdr_text.text

// General keyword search terms (in any field)
KEYWORD1-SEARCH:

// Query search fields and terms
SEARCH1-FIELDS:SEARCH1-TERMS
date_received:[20200101+TO+20240315]
device.device_report_product_code:(FMF)

// Query results sorting
SORT1-FIELD:SORT1-TERM
date_received:desc

// Query count field/term
COUNT1-FIELD:COUNT1-TERM

AI CONFIGURATION & PROMPTS:
// Prompt that summarizes the product problem
AI-ProblemSummaryPrompt:Describe the following product problem in a couple of sentences. Include essential details.

// Prompt that counts the instances of items
AI-CountSummaryPrompt:List each unique item along with the count for each item.

// Prompt that summarizes the problem's similarity to a product problem
AI-ProblemSimilarityPrompt:Analyze the similarity of this problem ({{problem_input}}) to the following problem. Use semantic similarity measurement. Present the result with the similarity score as a percentage, followed by a concise explanation of the similarity score.

// Prompt that analyzes the similarity between the input problem and MAUDE results matching the query criteria
AI-MDRSimilarityPrompt:Match the most similar problem reports to this ({{problem_input}}). Include all important details. Include all reference numbers. Include the similarity scores as a percentage. Explain the similarities. Use semantic similarity measurements. Present results in descending order of similarity score. For each matching result, include the matching report number, followed by the similarity score, a brief description of the problem, and an explanation of the similarities.

// Prompt that summarizes the most similar problems
AI-ReportSummaryPrompt:List the top matching problem reports with the highest similarity scores. Include all details about the matching problems.

// Maximum number of words in each intermediate report
AI-WordsPerReport:1500

// LLM Pro-Vision temperature index (0..1f)
AI-ModelTemperature:0.05

// LLM Pro-Vision TOP_P index (0..1f)
AI-ModelTopP:0.4

// LLM Pro-Vision TOP_K (number of tokens considered for next-token prediction)
AI-ModelTopK:10

// LLM maximum output tokens (1..2048)
AI-ModelMaxOutputTokens:2048
</code>
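
The settings file uses a simple line-oriented format: full lines starting with <code>//</code> are comments, blank lines are ignored, and every other line is a KEY:VALUE pair split on the first colon (values such as the prompts may themselves contain colons). As a rough illustration, a loader might look like the Python sketch below; the function name and the inline-comment stripping are assumptions, not part of the schema.

<code>
def load_settings(path: str) -> dict[str, str]:
    """Parse the line-oriented KEY:VALUE format shown in Settings.txt."""
    settings: dict[str, str] = {}
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.strip()
            # Skip blank lines and full-line // comments.
            if not line or line.startswith("//"):
                continue
            # Drop trailing //comments, e.g. after the CONFIG value
            # (no sample value contains "//", so this split is safe here).
            line = line.split("//", 1)[0].rstrip()
            if ":" not in line:
                continue  # bare section markers such as "REPORT"
            key, value = line.split(":", 1)  # split on the first colon only
            settings[key.strip()] = value.strip()
    return settings
</code>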
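
At query time, the https_url1, SEARCH1, SORT1, and limit1 values map directly onto openFDA's documented URL query syntax: search clauses are joined with <code>+AND+</code>, while sort and limit are passed as separate parameters. A minimal sketch using Python's <code>requests</code> library, with the sample values above hard-coded for clarity (the helper name is hypothetical):

<code>
import requests

# Values transcribed from Settings.txt above
BASE_URL = "https://api.fda.gov/device/event.json"
SEARCH_TERMS = [
    "date_received:[20200101+TO+20240315]",
    "device.device_report_product_code:(FMF)",
]
SORT = "date_received:desc"
LIMIT = 100

def fetch_mdr_results() -> list[dict]:
    """Fetch MAUDE device-event records matching the configured query."""
    # openFDA joins search clauses with +AND+; the query string is built
    # by hand so the brackets and plus signs are not percent-encoded.
    search = "+AND+".join(SEARCH_TERMS)
    url = f"{BASE_URL}?search={search}&sort={SORT}&limit={LIMIT}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

# The data1_fields of each record (report_number, mdr_text.text, ...)
# are then extracted and fed into the AI prompt stage sketched next.
</code>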
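
Each AI-* prompt carries a <code>{{problem_input}}</code> placeholder that is filled with the complaint text before the prompt is sent to the model, while AI-ModelTemperature, AI-ModelTopP, AI-ModelTopK, and AI-ModelMaxOutputTokens set the model's sampling parameters. A minimal sketch, assuming the Vertex AI Python SDK; the project, location, and model name are placeholders, and the exact wiring is an assumption rather than part of the schema:

<code>
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

# Prompt template transcribed from Settings.txt above
PROBLEM_SIMILARITY_PROMPT = (
    "Analyze the similarity of this problem ({{problem_input}}) to the "
    "following problem. Use semantic similarity measurement. Present the "
    "result with the similarity score as a percentage, followed by a "
    "concise explanation of the similarity score."
)

def assess_similarity(problem_input: str, candidate_problem: str) -> str:
    """Score one MAUDE problem report against the complaint text."""
    vertexai.init(project="your-project-id", location="us-central1")  # placeholders
    # Fill the {{problem_input}} placeholder with the complaint text.
    prompt = PROBLEM_SIMILARITY_PROMPT.replace("{{problem_input}}", problem_input)
    prompt += "\n\n" + candidate_problem

    model = GenerativeModel("gemini-pro")  # model name is an assumption
    config = GenerationConfig(
        temperature=0.05,        # AI-ModelTemperature
        top_p=0.4,               # AI-ModelTopP
        top_k=10,                # AI-ModelTopK
        max_output_tokens=2048,  # AI-ModelMaxOutputTokens
    )
    return model.generate_content(prompt, generation_config=config).text
</code>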