Sample Files: AI-Complaint-Assistant with prompt
Revision as of 20:30, 7 April 2024
You can share your own app files on this page. Please include a brief description of each file.
Reference files:
- File:ComplaintAssessmentPrompt-Schema.zip
- File:OpenFDA device search fields.xlsx
- Settings.txt
- CONFIG:OpenFDA-Vertex-V1.0 // identifies schema version
- OpenFDA API query specification:
- https_url1:api.fda.gov/device/event.json?
- limit1:100
- REPORT
- report_title:Complaint Reportability Assessment
- // Data fields analyzed in MDR query results
- data1_fields:product_problems, device.brand_name, patient.patient_problems, device.manufacturer_d_name, report_number, mdr_text.text
- // General keyword search terms (in any field)
- KEYWORD1-SEARCH:
- // Query search fields and terms
- SEARCH1-FIELDS:SEARCH1-TERMS
- date_received:[20200101+TO+20240315]
- device.device_report_product_code:(FMF)
- // Query results sorting
- SORT1-FIELD:SORT1-TERM
- date_received:desc
- // Query count field/term
- COUNT1-FIELD:COUNT1-TERM
- AI CONFIGURATION & PROMPTS:
- // Prompt that summarizes the product problem
- AI-ProblemSummaryPrompt:Describe the following product problem in a couple of sentences. Include essential details.
- // Prompt that counts the instances of items
- AI-CountSummaryPrompt:List each unique item along with the count for each item.
- // Prompt that summarizes the problem similarity to a product problem
- AI-ProblemSimilarityPrompt:Analyze the similarity of this problem ({{problem_input}}) to the following problem. Use semantic similarity measurement. Present the result with the similarity score as a percentage followed by a concise explanation of the similarity score.
- // Prompt that analyzes the similarity between input problem and MAUDE results matching query criteria
- AI-MDRSimilarityPrompt:Match the most similar problem reports to this ({{problem_input}}). Include all important details. Include all reference numbers. Include the similarity scores as a percentage. Explain the similarities. Use semantic similarity measurements. Present results in descending similarity scores. For each matching result, include the matching report number followed by the similarity score followed by a brief description of the problem followed by an explanation of the similarities.
- // Prompt that summarizes the most similar problems
- AI-ReportSummaryPrompt: List the top matching problem reports with the highest similarity scores. Include all details about the matching problems.
- // Maximum number of words in each intermediate report.
- AI-WordsPerReport:1500
- // LLM Pro-Vision temperature index (0..1f)
- AI-ModelTemperature:0.05
- // LLM Pro-Vision TOP_P index (0..1f)
- AI-ModelTopP:0.4
- // LLM Pro-Vision TOP_K (number of candidate tokens considered for next-token prediction)
- AI-ModelTopK:10
- // LLM maximum output tokens (1..2048)
- AI-ModelMaxOutputTokens:2048
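The settings above split the openFDA device-event query into a base URL (`https_url1`), search fields and terms (`SEARCH1-FIELDS`/`SEARCH1-TERMS`), a sort order, and a result limit. A minimal sketch of how such a query URL could be assembled from this format — the `parse_settings` helper and the concatenation approach are illustrative assumptions, not part of the schema itself:

```python
# Sketch (assumed helper, not part of the schema): parse "key:value" lines
# from a Settings.txt-style file and build the openFDA query URL.

def parse_settings(text: str) -> dict:
    """Parse key:value lines, skipping blanks and // comment lines."""
    settings = {}
    for line in text.splitlines():
        line = line.strip().lstrip("-").strip()
        if not line or line.startswith("//"):
            continue
        key, _, value = line.partition(":")
        # drop trailing inline comments such as "// identifies schema version"
        value = value.split("//")[0].strip()
        settings[key.strip()] = value
    return settings

SAMPLE = """
CONFIG:OpenFDA-Vertex-V1.0 //identifies schema version
https_url1:api.fda.gov/device/event.json?
limit1:100
"""

cfg = parse_settings(SAMPLE)

# The SEARCH1 fields/terms from the settings above. openFDA expects the
# '+' characters literally, so the URL is concatenated rather than passed
# through an encoder that would escape them.
search = ("date_received:[20200101+TO+20240315]"
          "+AND+device.device_report_product_code:(FMF)")
url = ("https://" + cfg["https_url1"]
       + "search=" + search
       + "&sort=date_received:desc"
       + "&limit=" + cfg["limit1"])
print(url)
```

This mirrors the filter in the sample settings: reports for product code FMF received between 2020-01-01 and 2024-03-15, newest first, 100 per request.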
Sample App Files
Media: Basic-Complaint-Assessment-with-prompt.zip
Description: Settings include a filter on product code FMF and a date range from 2020 to the present. Prompts match the top similar product problem codes and individual MAUDE reports using semantic similarity.
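The similarity prompts in the settings carry a problem-input placeholder (written `{{problem_input}}` in the raw settings file) that is filled with the complaint under assessment before the prompt is sent to the model. A minimal sketch of that substitution step, assuming a simple string replacement; the helper name and example complaint text are hypothetical:

```python
# Sketch (assumed helper): substitute the complaint text into a prompt
# template that uses the {{problem_input}} placeholder.

def fill_prompt(template: str, problem_input: str) -> str:
    """Replace every {{problem_input}} occurrence with the complaint text."""
    return template.replace("{{problem_input}}", problem_input)

# Template text taken from AI-ProblemSimilarityPrompt above (abridged).
template = ("Analyze the similarity of this problem ({{problem_input}}) "
            "to the following problem.")
prompt = fill_prompt(template, "Battery overheats during charging")
print(prompt)
```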