Original Article

Development and validation of a data visualization dashboard for automatic pain assessment and artificial intelligence analyses in cancer patients

Francesco Cutugno1, Marco Cascella2

1Department of Electrical Engineering and Information Technologies, Università di Napoli “Federico II”, Napoli, Italy; 2Unit of Anesthesiology, Intensive Care Medicine, and Pain Medicine, Department of Anesthesia and Critical Care, Department of Medicine, Surgery, and Dentistry, University of Salerno, Baronissi, Italy

Contributions: (I) Conception and design: Both authors; (II) Administrative support: None; (III) Provision of study materials or patients: M Cascella; (IV) Collection and assembly of data: F Cutugno; (V) Data analysis and interpretation: F Cutugno; (VI) Manuscript writing: Both authors; (VII) Final approval of manuscript: Both authors.

Correspondence to: Prof. Marco Cascella, MD, PhD. Unit of Anesthesiology, Intensive Care Medicine, and Pain Medicine, Department of Anesthesia and Critical Care, Department of Medicine, Surgery, and Dentistry, University of Salerno, Via Allende, Baronissi 84082, Italy. Email: mcascella@unisa.it.

Background: Pain is one of the most common and debilitating symptoms in cancer patients. Although accurate assessment is fundamental for pain treatment, unidimensional and multidimensional subjective instruments have important limitations. This article aims to introduce a dashboard designed for multimodal data collection and visualization, which is essential for developing artificial intelligence (AI) models for automatic pain assessment (APA) in cancer pain.

Methods: Functional and non-functional requirements were defined and integrated. Concerning non-functional requirements, the dashboard was developed as a web app. The Figma web app was used to create the mock-ups. Shneiderman’s eight golden rules and Nielsen’s 10 heuristics were followed for interface design. Subsequently, a usability test was conducted with five clinicians.

Results: The dashboard was developed. The average success rate of the usability test was 80%, and no major usability issues were identified. One user reported difficulties in task execution. Another user completed all tasks within the allotted time, except for the task of adding a new drug to the system. The feedback analysis revealed a lack of experience with computer systems in one of these users. Potential solutions include introducing an initial tutorial for less experienced users and making the relevant fields more clearly visible.

Conclusions: Since the use of AI-powered APA techniques is still in its infancy, further developments are needed for their widespread implementation in clinical practice. Data collection is the key step for these developments as it can act as ground truth for the training of automated systems. This user-friendly graphical interface can be utilized to design high-performance AI models for personalized pain management.

Keywords: Cancer pain; pain; automatic pain assessment (APA); dashboard; artificial intelligence (AI)


Received: 28 July 2024; Accepted: 18 October 2024; Published online: 20 December 2024.

doi: 10.21037/jmai-24-251


Highlight box

Key findings

• We developed and validated a data visualization dashboard for artificial intelligence (AI)-powered automatic pain assessment (APA) in cancer patients.

• The dashboard integrates multimodal data for more accurate and efficient pain assessment.

• Usability tests showed an 80% success rate, highlighting user-friendliness and effectiveness.

What is known and what is new?

• Current pain assessment methods have limitations.

• AI for APA is an emerging field of research.

• The tool is a comprehensive, data-driven instrument for pain assessment, supporting AI model development.

What is the implication, and what should change now?

• Enhanced real-time pain assessment and management, and telehealth use.

• Design of high-performance AI models for APA.

• Tailored pain management.


Introduction

Pain is one of the most common and debilitating symptoms in cancer patients. It is estimated that up to 55% of patients experience pain during anticancer treatment, and this percentage rises to 66% for those with metastatic, advanced, or terminal disease (1). Notably, for effective pain management, an accurate assessment of the symptoms is mandatory (2). Nevertheless, pain assessment according to unidimensional pain ratings such as the 0–10 numeric rating scale (NRS) has important limitations. While these tools are simple and quick to use, they are prone to reporting bias influenced by psychosocial factors, such as a tendency to catastrophize or underreport pain (3). Multidimensional scales, like the Brief Pain Inventory (4) and the McGill Pain Questionnaire (5), offer a more comprehensive assessment by considering various aspects of pain, including sensory, emotional, cognitive, and social factors. However, these scales have limitations, such as limited specificity and susceptibility to response bias (6). Moreover, they do not provide an objective and exhaustive measurement of pain. Nociception, indeed, is just one of the components responsible for pain expression, and a multitude of biopsychosocial elements combine to shape complex clinical scenarios (7).

Pain assessment can also be addressed through artificial intelligence (AI). Research is oriented toward the development of AI-powered models for automatic pain assessment (APA) systems. This term encompasses a range of methodologies that study the objective aspects of pain, such as biosignal responses and observable behaviors like facial expressions and speech patterns. These approaches can be further explored using AI techniques (8-10). Specifically, AI can analyze large datasets of biosignals and other data, identifying patterns and correlations that might be undetectable to clinicians. For instance, machine learning algorithms can be trained to recognize specific physiological responses associated with varying pain levels, providing a more objective and consistent measure of pain intensity (11). Additionally, computational language analysis can be employed for pain and emotion recognition (12,13), while computer vision appears well-suited for examining pain-related facial expressions in non-cancer (14) and adult cancer pain (15).

However, while APA methods offer the potential for more objective insights into pain intensity, it is important to recognize their limitations (16). These include a lack of high-quality validation studies, uncertainty about which parameters are most effective in different settings, and technical challenges such as timing of implementation (17). Therefore, a comprehensive pain assessment should ideally integrate both subjective self-reporting and objective measures to provide a more holistic understanding of pain and support better-informed pain management strategies (18). The aim is to isolate the nociceptive component of pain by estimating its weight within the perceived symptomatology.

The use of these techniques is still in its infancy, and further developments are needed for their widespread implementation in clinical practice. Data collection is the key step for these developments as it can act as ground truth for the training of automated systems. Although several generic datasets on pain are available (19), there is a need to develop specific datasets based on subjects suffering from oncological diseases.

In this clinical setting, the creation of such a multimodal dataset could allow the training of automatic systems for the effective recognition of cancer pain. Multimodal information, including physiological data (e.g., heart rate, temperature, pulse oximetry, electrodermal activity, and others), audiovisual material (video diaries), and questionnaires on pain and quality of life, should be channeled into a database. Data storage must follow criteria that facilitate writing and reading operations. However, a graphical interface is essential for the easy use of the data and to make the most of all the information available. For this purpose, a dashboard must be as complete as it is simple to use. A dashboard is a tool that aggregates data from multiple visual sources, like graphs and maps, allowing users to monitor, analyze, and support decision-making across different organizational levels (20). Importantly, from the perspective of AI researchers, several advantages can be found. These include having a topic-specific dataset with multimodal data that can be mined and analyzed based on various needs. This approach significantly simplifies the preprocessing and feature engineering phases, while also providing benefits during exploratory machine learning analysis.

This article aims to present a dashboard for the collection and visualization of data to be used for the study of APA in cancer pain. The research question is: How can an easy-to-use graphical interface be developed to integrate both subjective and objective data, thereby facilitating a more comprehensive assessment of pain?


Methods

To answer the research question, we designed the dashboard, which was subsequently tested for usability. The research did not involve patients; thus, ethical approval was waived. Informed consent was obtained from the five clinicians who participated in the usability assessment.

Dashboard development

Requirements elicitation

The definition of the data visualization dashboard requirements involves the integration of functional and non-functional requirements. Specifically, functional requirements are the essential features necessary for the dashboard to perform its intended tasks. On the other hand, non-functional requirements address the broader aspects of user experience, performance, and accessibility.

Functional requirements include:

  • Log in to the dashboard;
  • Log out;
  • View the list of patients registered on the platform;
  • View a summary of the patient’s information;
  • View the history of the collected data:
    • NRS values entered;
    • Vital signs collected;
    • Recorded video diaries (with or without sentiment analysis);
    • Completed questionnaires;
    • Crisis reports;
    • Activities entered;
  • View the patient’s current therapy;
  • Assign a new therapy to a patient;
  • View the list of drugs on the platform;
  • Insert, modify or delete a drug.

We focused on these requirements for APA investigations. For further descriptive and predictive analyses, the patient’s ID can be used to access additional information regarding the pathology, imaging, and therapies through the electronic medical record.
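As an illustrative sketch only (not part of the platform code), the entities implied by the functional requirements above can be represented as a minimal data model; all class and field names below are hypothetical.

```dart
// Minimal, hypothetical data model for the entities implied by the
// functional requirements; names, fields, and units are illustrative only.
class Patient {
  final String id; // pseudonymized identifier linked to the electronic medical record
  final String displayName;
  Patient({required this.id, required this.displayName});
}

class NrsEntry {
  final DateTime timestamp;
  final int score; // 0-10 numeric rating scale value
  NrsEntry({required this.timestamp, required this.score})
      : assert(score >= 0 && score <= 10);
}

class VitalSigns {
  final DateTime timestamp;
  final double? heartRate;   // beats per minute
  final double? spo2;        // pulse oximetry, %
  final double? temperature; // degrees Celsius
  final double? eda;         // electrodermal activity, microsiemens
  VitalSigns(
      {required this.timestamp,
      this.heartRate,
      this.spo2,
      this.temperature,
      this.eda});
}

class TherapyItem {
  final String drugName;
  final String dose;
  final String schedule;
  TherapyItem(
      {required this.drugName, required this.dose, required this.schedule});
}
```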

Concerning non-functional requirements, the dashboard was developed as a web app. In this way, no software installation is needed, and the dashboard can be accessed quickly from any device with a web browser. For this aim, the web app supports Chrome, Firefox, Microsoft Edge, and Safari. Furthermore, the web app is responsive and adapts to various screen types, while maintaining a graphical interface oriented to a desktop or tablet view. The system guarantees a start-up time of less than 4 seconds in 90% of cases and a response time to any search of less than 5 seconds in 90% of cases. Finally, it is designed so that the user can understand the entire set of platform functions after an average of 1 hour of use. The web application was designed by the team of Prof. Cutugno at the Department of Electrical Engineering and Information Technologies, Università di Napoli “Federico II”.
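As a hedged illustration of the responsiveness requirement, a Flutter layout can switch between a wide (desktop/tablet-oriented) and a narrow arrangement depending on the available width; the widget below is a sketch under our own assumptions, and the 900-pixel breakpoint is arbitrary rather than the value used in the platform.

```dart
import 'package:flutter/material.dart';

/// Illustrative responsive wrapper: a two-column layout on wide screens
/// (desktop/tablet) and a single column otherwise. The breakpoint is an
/// assumption made for this sketch.
class ResponsiveDashboardBody extends StatelessWidget {
  final Widget sidePanel;
  final Widget mainPanel;
  const ResponsiveDashboardBody(
      {super.key, required this.sidePanel, required this.mainPanel});

  @override
  Widget build(BuildContext context) {
    return LayoutBuilder(builder: (context, constraints) {
      if (constraints.maxWidth >= 900) {
        // Desktop/tablet view: fixed-width side panel plus main content.
        return Row(children: [
          SizedBox(width: 280, child: sidePanel),
          Expanded(child: mainPanel),
        ]);
      }
      // Narrow view: stack the panels vertically.
      return Column(children: [sidePanel, Expanded(child: mainPanel)]);
    });
  }
}
```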

Tree graph

After defining the software requirements, the corresponding functions were outlined, leading to the creation of a flowchart that illustrates the dashboard’s screens and the navigation between them. This tree helps to quickly identify the macrostructure of the functions and of the user interface, which were then tested with end users (see Usability assessment of the dashboard). The Draw.io web app was used to create the tree graph (Figure 1).

Figure 1 Tree graph. *, the logout is always reachable, but to simplify the graph it has been reached after two screens; ^, the “Patient summary” subtree shows information from the last available date, by default.
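To illustrate how a navigation tree of this kind can translate into code, the following sketch uses Flutter’s named-route mechanism; the route names and placeholder screens are hypothetical and do not reproduce the actual dashboard implementation.

```dart
import 'package:flutter/material.dart';

// Hypothetical named routes mirroring the macrostructure of the tree graph
// in Figure 1; the screen widgets are placeholders for this sketch.
final Map<String, WidgetBuilder> dashboardRoutes = {
  '/login': (context) => const Placeholder(),            // Login screen
  '/patients': (context) => const Placeholder(),         // Patient list
  '/patients/summary': (context) => const Placeholder(), // Patient summary
  '/patients/therapy': (context) => const Placeholder(), // Current therapy
  '/drugs': (context) => const Placeholder(),            // Drug list and editing
};

// Usage: MaterialApp(initialRoute: '/login', routes: dashboardRoutes);
```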

High-fidelity mock-ups

High-fidelity mock-ups represent software in the late stages of design. They go beyond placeholders and wireframe “lorem ipsum” and include actual content, fonts, colors, images, and branding.

For the creation of the mock-ups, the Figma web app was used due to its flexibility and collaborative capabilities. It is a complete and easy-to-use web app, and its robust set of features greatly enhances the design process. The tool also facilitates collaboration among team members, including designers and domain experts. This is especially important in our context of cancer pain assessment, where input from pain physicians is crucial to ensure that the design meets the complex needs of the users. Moreover, Figma’s ability to create interactive prototypes allowed us to simulate the user experience more effectively, helping to identify potential usability issues early in the design process.

Interface design according to criteria of user experience design

In designing the interface, each choice was made according to well-established user experience design principles. In particular, Shneiderman’s eight golden rules of interface design (21) and Nielsen’s 10 heuristics (22) were applied. The aim was to make the dashboard clear and easy to use.

The Shneiderman Golden Rules include:

  • Rule 1: strive for consistency;
  • Rule 2: seek universal usability;
  • Rule 3: offer informative feedback;
  • Rule 4: design dialogs to yield closure;
  • Rule 5: prevent errors;
  • Rule 6: permit easy reversal of actions;
  • Rule 7: keep users in control;
  • Rule 8: reduce short-term memory load.

For example, following the first rule, we designed most of the clickable elements and confirmation buttons to be light blue. Moreover, if the user adds a drug to the therapy while forgetting to enter the dose, the system prevents the error by displaying a message inviting the user to enter it (rule 5).
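A minimal sketch of how the rule-5 behavior (blocking a drug added without a dose) can be enforced in a Flutter form, assuming a validator on the dose field; the widget name and message text are illustrative, not the platform’s actual code.

```dart
import 'package:flutter/material.dart';

// Illustrative dose field: the validator prevents submission when the dose is
// missing, showing a message that invites the user to enter it (rule 5).
class DoseField extends StatelessWidget {
  final TextEditingController controller;
  const DoseField({super.key, required this.controller});

  @override
  Widget build(BuildContext context) {
    return TextFormField(
      controller: controller,
      decoration: const InputDecoration(labelText: 'Dose'),
      validator: (value) => (value == null || value.trim().isEmpty)
          ? 'Please enter the dose before adding the drug'
          : null,
    );
  }
}
```

When the field is placed inside a Form, calling formKey.currentState!.validate() before adding the drug blocks the action and shows the message until a dose is provided.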

The Nielsen Usability Heuristics are 10 general principles for interaction design. They are called “heuristics” because they are general rules and not specific usability guidelines. The principles include:

  • Principle 1: visibility of system status;
  • Principle 2: match between the system and the real world;
  • Principle 3: user control and freedom;
  • Principle 4: consistency and standards;
  • Principle 5: error prevention;
  • Principle 6: recognition rather than recall;
  • Principle 7: flexibility and efficiency of use;
  • Principle 8: aesthetic and minimalist design;
  • Principle 9: help users recognize, diagnose, and recover from errors;
  • Principle 10: help and documentation.

For example, if the operator adds a drug to a therapy and then has second thoughts, the drug can easily be removed thanks to the red “X” next to it in the drug list (principle 3). In addition, we implemented a design that is as minimal as possible: in each window, screen elements are limited to those strictly necessary for the specific task. To simplify data entry, text field “hints” were used (principle 8). These are predefined texts within the fields that disappear as soon as the user types something and reappear if the field is emptied.
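As a hedged example of principles 3 and 8, a list tile with a red removal icon and a text field with a hint might look like the following in Flutter; the helper functions and hint text are illustrative assumptions rather than the dashboard’s actual widgets.

```dart
import 'package:flutter/material.dart';

// Illustrative widgets for Nielsen's principles 3 and 8: the red "X" lets the
// user undo the addition of a drug, and the hint text disappears while typing.
Widget therapyDrugTile(String drugName, VoidCallback onRemove) {
  return ListTile(
    title: Text(drugName),
    trailing: IconButton(
      icon: const Icon(Icons.close, color: Colors.red),
      tooltip: 'Remove drug from therapy',
      onPressed: onRemove, // easy reversal of the action (principle 3)
    ),
  );
}

Widget doseHintField(TextEditingController controller) {
  return TextField(
    controller: controller,
    decoration: const InputDecoration(
      hintText: 'e.g., 10 mg twice daily', // hint disappears as the user types
    ),
  );
}
```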

Flutter implementation

The platform was developed as a web app using the cross-platform Flutter framework, an open-source framework developed by Google for creating high-performance native interfaces. Flutter allows development for almost any device, from the desktop (Windows, Mac, or Linux) to the web, and from iOS to Android, with a single code base. It can also import external packages and allows the transition from a web app to a desktop or mobile app with minimal or no code changes.

Flutter uses the Dart programming language. Because Flutter code specifies what must be displayed rather than how to display it, it follows a declarative programming model. This mainly concerns the way the graphical interface is built and populated: Flutter is provided with the state of the application (i.e., the data) and with how to display it, and the framework creates the user interface to reflect the current state of the application. This can be summarized by the formula:

UI=f(state)

where UI is the graphical interface, f is the build method that constructs the interface, and state is the state of the application.

When the state of the app changes (for example, the user changes an option in the settings screen), Flutter is notified of the modification and triggers a refresh of the user interface. It is not necessary to mutate the interface manually, as in imperative programming (e.g., by executing an instruction such as widget.setText(“new text”)): when the state changes, the user interface is rebuilt from scratch.
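A minimal sketch of this declarative pattern, assuming a simple settings toggle: the build method plays the role of f, the boolean flag is the state, and calling setState makes the framework rebuild the interface from the new state.

```dart
import 'package:flutter/material.dart';

// Sketch of UI = f(state): build() is the function f, and _darkMode is the
// state. Changing the state inside setState() makes Flutter rebuild the UI.
class SettingsToggle extends StatefulWidget {
  const SettingsToggle({super.key});

  @override
  State<SettingsToggle> createState() => _SettingsToggleState();
}

class _SettingsToggleState extends State<SettingsToggle> {
  bool _darkMode = false;

  @override
  Widget build(BuildContext context) {
    return SwitchListTile(
      title: Text(_darkMode ? 'Dark mode on' : 'Dark mode off'),
      value: _darkMode,
      onChanged: (value) => setState(() => _darkMode = value),
    );
  }
}
```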

Usability assessment of the dashboard

The usability of a system is the extent to which users succeed in using it to achieve specific objectives effectively and efficiently, while at the same time deriving satisfaction from the process.

Usability assessment involves different activities depending on the method used, including data acquisition, data analysis, and feedback (critique).

Data acquisition

This phase concerns the collection of usability data through usability tests and subjective evaluations. Usability tests can be divided into two macro-categories, namely task tests and scenario tests. To evaluate the usability of the dashboard, we carried out only task tests, observing and analyzing user behavior to understand if, where, and why users encountered difficulties. Task tests are a series of tests in which users perform individual tasks that exercise multiple system functions. They are very useful because they simulate real situations that users will encounter: although users are asked to achieve a given goal, they are not told which intermediate steps to follow.

The tasks selected to carry out the tests are:

  • Search for a patient and view their monitoring;
  • Enter a new drug on the platform;
  • Delete a drug from the platform;
  • Log off the platform.

A time limit of 5 minutes was imposed for each task. The selected users (n=5) were randomly chosen among clinicians from the Cancer Institute of Naples, Italy. To familiarize users with the application, a preliminary session was run in which clinicians could use it for 2 minutes without any specific task and without knowing what would be required in the test, thus avoiding bias.

Feedback

This phase is the interpretation of the collected data to identify problems. All the difficulties encountered by the various users in interacting with the system during task execution were grouped, and suggestions for solutions or improvements to mitigate the problems were evaluated.

Statistical analysis

The calculation of the average success rate for the usability test was performed by using the following formula:

Average success rate = [(N_success · P_success) + (N_partial success · P_partial success)] / N_total performed tests

where N is the number of events and P is the weight. A full success was weighted as 1 and a partial success as half a success (i.e., 0.5).
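For illustration, the formula can be computed with a short Dart function (a sketch of ours, not part of the platform); the values in main() are those reported in the Results section.

```dart
// Computes the average success rate as defined above: full successes weigh 1,
// partial successes weigh 0.5, divided by the total number of performed tests.
double averageSuccessRate(
    {required int successes,
    required int partialSuccesses,
    required int totalTests}) {
  return (successes * 1.0 + partialSuccesses * 0.5) / totalTests;
}

void main() {
  // Values from the usability test reported in the Results section:
  // 15 full successes and 2 partial successes out of 20 tasks -> 0.80 (80%).
  print(averageSuccessRate(successes: 15, partialSuccesses: 2, totalTests: 20));
}
```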


Results

Following the research framework, we developed the “APA Pascale” dashboard.

Figure 2 shows the login and logout views.

Figure 2 Login and logout views.

The patient’s monitoring view is shown in Figure 3.

Figure 3 Physiological monitoring.

The results of the usability tests are shown in Table 1.

Table 1 Test results

User | Search for a patient and view their monitoring | Enter a new drug on the platform | Delete a drug from the platform | Log off the platform
User 1 | S | P | S | S
User 2 | S | S | S | S
User 3 | F | F | F | P
User 4 | S | S | S | S
User 5 | S | S | S | S

S, success; P, partial success; F, failure.

Consequently, the success rate was [15 + (2 × 0.5)]/20 = 80%. This result gives a strong indication of the usability of the system: users were able to complete almost all tasks in a short time.

No serious usability problems were highlighted. Only User 3 reported failures in task execution, being accustomed to analog tools rather than computer systems. User 1, on the other hand, correctly carried out all the operations within the pre-established time, except for the insertion of a new drug into the system. The criticality analysis (feedback) concerned the two users who encountered problems:

  • Criticality 1: one user was not accustomed to computer systems and had problems with all of the assigned tasks.
    • Possible solution: add an initial tutorial for less experienced users.
  • Criticality 2: a user was unable to complete the task of inserting a drug into the system in the set time.
    • Possible solution: highlight the relevant fields more clearly.

Discussion

We present the development and validation of a dashboard for APA in cancer patients, a crucial step for investigating this complex research scenario. Specifically, its primary purpose is to collect and organize data that could be used to generate risk assessments or develop predictive multimodal AI-powered models for objective pain evaluation. For example, the platform is designed for the acquisition of short videos (video diaries). These data can be used to perform sentiment analysis based on language analysis of speech and audio features, as well as by investigating facial expressions (e.g., action units and movements) (23). Although, in pain medicine, other dashboards have been designed mainly to facilitate the clinician-patient relationship (24) and to support quality improvement teams in pain management (25), no comparable tools are available for AI-based APA research in cancer pain.

Importantly, the proposed research addresses a field with several critical gaps that need to be filled, particularly concerning datasets for oncological pain. In previous studies, we developed models for oncological pain using datasets originally intended for non-oncological chronic pain (15). This limitation significantly impacted the results we were able to achieve.

In developing the tool, it is essential to ensure the dashboard’s effectiveness in clinical processes by prioritizing its capabilities and addressing implementation challenges. Therefore, functional and non-functional requirements have been chosen by focusing on the available literature and the aims of our research (i.e., AI models for APA). Evidence-based medicine research, for instance, has shown that functional requirements for dashboards include features such as reporting, reminders, customization options, tracking, alert creation, and the assessment of performance indicators. In contrast, non-functional requirements encompass factors like dashboard speed, security, user-friendliness, compatibility with various devices, integration with other systems, data currency, and the use of data visualization elements tailored to user needs. Additionally, a web-based design was suggested (26). Furthermore, in our case, the user experience was a critical factor.

The mock-ups provide a clear reference for implementing the interface and functionality, ensuring that the final product aligns closely with the approved design. In our dashboard development process, they served a dual purpose. Initially, they were used for visual design approval, ensuring that the layout, user interface, and overall aesthetics met the project requirements and stakeholder expectations. Subsequently, once the design was approved, these high-fidelity mock-ups guided the development team during the coding phase.

The platform was developed based on general guidelines and our expertise in managing oncology patients with pain symptoms. As such, we prioritized multiparametric assessments, video diaries, questionnaires, and pain events for APA objectives. Additionally, we sought to create a tool for evaluating treatment effectiveness, for example, by correlating pain crises with medication use. This link could provide valuable insights into typical manifestations of oncological pain, such as breakthrough cancer pain (27). For more detailed descriptive and predictive analyses, the patient’s ID can be used to access additional information on cancer disease, imaging, and anticancer therapies through the electronic medical record. For this reason, the dashboard’s functionality was kept simple. For example, the pain scale is not displayed on the dashboard; patients are instructed to use the scale (NRS 0–10) and report the value once a day and during crises.

Research implications

The development of the dashboard for APA in cancer pain has significant implications for clinical practice, future research, and policy-making. For instance, regarding clinical practice, the tool has the potential to revolutionize how pain is assessed and managed. It can integrate multimodal data, including both subjective and objective measures, providing a more comprehensive understanding of a patient’s pain experience. Additionally, clinicians could use the dashboard to monitor pain in real time, adjust treatment plans more accurately, and respond more swiftly to changes in a patient’s condition. This aim aligns with the new avenue of telehealth-based management for challenging conditions such as cancer pain (28-30). Notably, since the instrument provides a standardized tool for collecting and analyzing pain data, it can also serve as a valuable resource for research in pain assessment. For example, the resulting dataset can be used to train and refine AI models for cancer pain investigations. For this aim, wearable health monitoring systems can be integrated into the research framework (31-33). On a policy level, the availability of data-driven insights into pain management could lead to the development of new protocols and best practices that ensure more consistent and effective care across healthcare settings. Furthermore, policymakers could use data from the dashboard to allocate resources more effectively, targeting areas where pain management practices need improvement and ensuring that all patients have access to high-quality pain assessment tools.

Limitations and improvements

Since the graphical interface of the dashboard has been designed for desktop or tablet use, it is not well suited to smaller mobile devices such as smartphones. Taking this into account, an improvement could be a fully responsive dashboard that can be used on both a smartphone and a computer. Finally, given that the app provided to patients was developed only for the Android operating system, a further improvement could be to create an iOS counterpart.

Regarding the usability assessment, the main limitations concern the limited complexity of the test flows used and the small number of users enrolled. Conducting more complex tests with longer use flows could provide a deeper understanding of the application’s usability. Nevertheless, the current tasks were designed to ensure initial user comprehension and to identify any immediate usability issues. The selected tasks were chosen because, during the development of the dashboard, we noticed that the critical phases concerned the visualization of patient data, monitoring, and, above all, drug management. Regarding the number of volunteers, we acknowledge that a sample size of five may not provide statistically significant results; our initial pilot study was intended to gather preliminary insights. Within the research protocols on APA, we are planning to verify usability through cross-sectional investigations (34). This will allow us to prepare the tool for real-world deployment.


Conclusions

Within the field of APA, data visualization is of paramount importance. The development of an easy-to-use system for multimodal data acquisition in cancer pain patients represents a significant advancement toward personalized pain management strategies. Oncological pain is a complex condition in which pain transcends the simple concept of nociception and is compounded by significant psycho-emotional components. In this context, the integration of multimodal data, encompassing both subjective and objective metrics, is crucial for capturing the full spectrum of the patient’s experience. For example, the monitoring of pain crises and the subsequent AI-powered analysis of the linked variables could allow us to capture typical, but still poorly characterized, cancer pain phenomena such as breakthrough cancer pain. Our research also addresses the critical gap in available datasets specifically tailored to oncological pain. Currently, such dedicated datasets are not available, which poses a substantial challenge in structuring effective research frameworks.


Acknowledgments

Funding: None.


Footnote

Data Sharing Statement: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-251/dss

Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-251/prf

Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-251/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The research did not involve patients; thus, the ethical approval was waived. Informed consent was obtained from the five clinicians who participated in the usability assessment.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. van den Beuken-van Everdingen MH, Hochstenbach LM, Joosten EA, et al. Update on Prevalence of Pain in Patients With Cancer: Systematic Review and Meta-Analysis. J Pain Symptom Manage 2016;51:1070-1090.e9. [Crossref] [PubMed]
  2. Cascella M, Vittori A, Petrucci E, et al. Strengths and Weaknesses of Cancer Pain Management in Italy: Findings from a Nationwide SIAARTI Survey. Healthcare (Basel) 2022;10:441. [Crossref] [PubMed]
  3. Caraceni A, Shkodra M. Cancer Pain Assessment and Classification. Cancers (Basel) 2019;11:510. [Crossref] [PubMed]
  4. Poquet N, Lin C. The Brief Pain Inventory (BPI). J Physiother 2016;62:52. [Crossref] [PubMed]
  5. Ngamkham S, Vincent C, Finnegan L, et al. The McGill Pain Questionnaire as a multidimensional measure in people with cancer: an integrative review. Pain Manag Nurs 2012;13:27-51. [Crossref] [PubMed]
  6. Schmitt JS, Abbott JH. Patient global ratings of change did not adequately reflect change over time: a clinical cohort study. Phys Ther 2014;94:534-42. [Crossref] [PubMed]
  7. Cascella M, Muzio MR, Monaco F, et al. Pathophysiology of Nociception and Rare Genetic Disorders with Increased Pain Threshold or Pain Insensitivity. Pathophysiology 2022;29:435-52. [Crossref] [PubMed]
  8. Prkachin KM, Hammal Z. Corrigendum: Computer Mediated Automatic Detection of Pain-Related Behavior: Prospect, Progress, Perils. Front Pain Res (Lausanne) 2022;3:849950. [Crossref] [PubMed]
  9. Aung MSH, Kaltwang S, Romera-Paredes B, et al. The Automatic Detection of Chronic Pain-Related Expression: Requirements, Challenges and the Multimodal EmoPain Dataset. IEEE Trans Affect Comput 2016;7:435-51. [Crossref] [PubMed]
  10. Wu CL, Liu SF, Yu TL, et al. Deep Learning-Based Pain Classifier Based on the Facial Expression in Critically Ill Patients. Front Med (Lausanne) 2022;9:851690. [Crossref] [PubMed]
  11. Lötsch J, Ultsch A, Mayer B, et al. Artificial intelligence and machine learning in pain research: a data scientometric analysis. Pain Rep 2022;7:e1044. [Crossref] [PubMed]
  12. Albashayreh A, Bandyopadhyay A, Zeinali N, et al. Natural Language Processing Accurately Differentiates Cancer Symptom Information in Electronic Health Record Narratives. JCO Clin Cancer Inform 2024;8:e2300235. [Crossref] [PubMed]
  13. Machová K, Szabóova M, Paralič J, et al. Detection of emotion by text analysis using machine learning. Front Psychol 2023;14:1190326. [Crossref] [PubMed]
  14. D'Antoni F, Russo F, Ambrosio L, et al. Artificial Intelligence and Computer Vision in Low Back Pain: A Systematic Review. Int J Environ Res Public Health 2021;18:10909. [Crossref] [PubMed]
  15. Cascella M, Vitale VN, Mariani F, et al. Development of a binary classifier model from extended facial codes toward video-based pain recognition in cancer patients. Scand J Pain 2023;23:638-45. [Crossref] [PubMed]
  16. El-Tallawy SN, Pergolizzi JV, Vasiliu-Feltes I, et al. Incorporation of "Artificial Intelligence" for Objective Pain Assessment: A Comprehensive Review. Pain Ther 2024;13:293-317. [Crossref] [PubMed]
  17. Cascella M, Schiavo D, Cuomo A, et al. Artificial Intelligence for Automatic Pain Assessment: Research Methods and Perspectives. Pain Res Manag 2023;2023:6018736. [Crossref] [PubMed]
  18. Cascella M, Di Gennaro P, Crispo A, et al. Advancing the integration of biosignal-based automated pain assessment methods into a comprehensive model for addressing cancer pain. BMC Palliat Care 2024;23:198. [Crossref] [PubMed]
  19. Mende-Siedlecki P, Qu-Lee J, Lin J, et al. The Delaware Pain Database: a set of painful expressions and corresponding norming data. Pain Rep 2020;5:e853. [Crossref] [PubMed]
  20. Gröger C, Hillmann M, Hahn F, et al. The operational process dashboard for manufacturing. Procedia CIRP 2013;7:205-10. [Crossref]
  21. UX Matters. Applying the 8 Golden Rules of User-Interface Design. Last Accessed: Aug 30, 2024. Available online: https://www.uxmatters.com/mt/archives/2022/10/applying-the-8-golden-rules-of-user-interface-design.php
  22. Nielsen J. How I Developed the 10 Usability Heuristics. UXTigers. Last Accessed: Aug 30, 2024. Available online: https://www.uxtigers.com/post/usability-heuristics-history
  23. Gomutbutra P, Kittisares A, Sanguansri A, et al. Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository. Front Artif Intell 2022;5:942248. [Crossref] [PubMed]
  24. Bach K, Marling C, Mork PJ, et al. Design of a clinician dashboard to facilitate co-decision making in the management of non-specific low back pain. Journal of Intelligent Information Systems 2019;52:269-84. [Crossref]
  25. Rabiei R, Almasi S. Requirements and challenges of hospital dashboards: a systematic literature review. BMC Med Inform Decis Mak 2022;22:287. [Crossref] [PubMed]
  26. Opie J, Bellio M, Williams R, et al. Requirements for a Dashboard to Support Quality Improvement Teams in Pain Management. Front Big Data 2021;4:654914. [Crossref] [PubMed]
  27. Cascella M, Racca E, Nappi A, et al. Bayesian Network Analysis for Prediction of Unplanned Hospital Readmissions of Cancer Patients with Breakthrough Cancer Pain and Complex Care Needs. Healthcare (Basel) 2022;10:1853. [Crossref] [PubMed]
  28. Bramanti A, Ciurleo R, Vecchione C, et al. Telerehabilitation: A Solution for Patients After Hip Fracture? Transl Med UniSa 2024;26:30-7. [Crossref] [PubMed]
  29. Cascella M, Coluccia S, Grizzuti M, et al. Satisfaction with Telemedicine for Cancer Pain Management: A Model of Care and Cross-Sectional Patient Satisfaction Study. Curr Oncol 2022;29:5566-78. [Crossref] [PubMed]
  30. Buonanno P, Marra A, Iacovazzo C, et al. Telemedicine in Cancer Pain Management: A Systematic Review and Meta-Analysis of Randomized Controlled Trials. Pain Med 2023;24:226-33. [Crossref] [PubMed]
  31. Bhatkar V, Picard R, Staahl C. Combining Electrodermal Activity With the Peak-Pain Time to Quantify Three Temporal Regions of Pain Experience. Front Pain Res (Lausanne) 2022;3:764128. [Crossref] [PubMed]
  32. Campanella S, Altaleb A, Belli A, et al. A Method for Stress Detection Using Empatica E4 Bracelet and Machine-Learning Techniques. Sensors (Basel) 2023;23:3565. [Crossref] [PubMed]
  33. Schuurmans AAT, de Looff P, Nijhof KS, et al. Validity of the Empatica E4 Wristband to Measure Heart Rate Variability (HRV) Parameters: a Comparison to Electrocardiography (ECG). J Med Syst 2020;44:190. [Crossref] [PubMed]
  34. Almasi S, Bahaadinbeigy K, Ahmadi H, et al. Usability Evaluation of Dashboards: A Systematic Literature Review of Tools. Biomed Res Int 2023;2023:9990933. [Crossref] [PubMed]
doi: 10.21037/jmai-24-251
Cite this article as: Cutugno F, Cascella M. Development and validation of a data visualization dashboard for automatic pain assessment and artificial intelligence analyses in cancer patients. J Med Artif Intell 2025;8:21.
