take pictures of those wounded. When an image is taken of a wound, the app would automatically
classify the wound (Laceration, Abrasion, Contusion, Avulsion, etc.), measure its dimensions (height,
width, depth), automatically calculate how much suturing material and other supplies are needed (per
patient and for the entire location), assess the risk of infection, estimate how long the wound would
take to heal, and automatically set a reminder to check on the wound.
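The per-wound outputs listed above suggest a single record per analyzed image. A minimal data-model sketch, in which every field name and example value is an illustrative assumption rather than anything specified in this proposal:

```python
# A minimal data-model sketch for the per-wound outputs described above.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WoundRecord:
    wound_type: str        # e.g. "Laceration", "Abrasion", "Contusion", "Avulsion"
    height_cm: float
    width_cm: float
    depth_cm: float
    infection_risk: str    # e.g. "low" / "medium" / "high"
    est_healing_days: int  # estimated time for the wound to heal
    sutures_needed: int    # feeds the supply calculation

rec = WoundRecord("Laceration", 4.0, 0.8, 0.6, "low", 14, 8)
print(rec.wound_type, rec.sutures_needed)  # Laceration 8
```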
Moreover, the app would automate the triage process of wounds (based on the calculations). This
would allow the volunteers walking around to take pictures of those wounded, and help them identify
which patients should receive immediate medical care. It would also automate the process of
estimating the supplies needed (how many sutures, how much sterilizing alcohol, how many band-aids, etc.).
It is often difficult to rapidly classify the conditions of people displaced from their homes in areas
troubled by natural disaster and war. Often those individuals are wounded,
and trained health care providers need to be present in order to triage the patients and assess their
wounds. But what if we do not have enough medical professionals to reach everyone in time? What if
non-medical volunteers could take pictures of patients' wounds with their mobile phones and
have those pictures automatically analyzed by the app?
Moreover, if the application can rapidly calculate the number and type of sutures and other supplies
needed, per patient and per group, we will achieve a high level of automation in the supply chain, as
we will be able to accurately estimate the supplies required per location.
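The per-patient and per-location roll-up might look like the sketch below. The item names, the 0.5 cm suture-spacing rule, and the quantities are all illustrative assumptions, not clinical guidance:

```python
# A sketch of the per-patient / per-location supply roll-up described above.
# Item names and quantity rules are illustrative assumptions only.
from collections import Counter

def supplies_for_wound(wound_type: str, length_cm: float) -> Counter:
    """Estimate supplies for one wound (hypothetical rules)."""
    needed = Counter()
    if wound_type == "laceration":
        # assume roughly one suture per 0.5 cm of wound length
        needed["sutures"] += max(1, round(length_cm / 0.5))
    needed["alcohol_wipes"] += 2
    needed["bandages"] += 1
    return needed

def supplies_for_location(wounds: list[tuple[str, float]]) -> Counter:
    """Aggregate supplies across every patient seen at one camp."""
    total = Counter()
    for wound_type, length_cm in wounds:
        total += supplies_for_wound(wound_type, length_cm)
    return total

camp = [("laceration", 4.0), ("abrasion", 2.0), ("laceration", 1.5)]
print(supplies_for_location(camp))  # totals: 11 sutures, 6 wipes, 3 bandages
```

Summing the same counters across camps would give the regional picture mentioned later in the proposal.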
People escaped to refugee camps as their homes were bombed, and many were injured. Consider a
refugee camp with 500 people, 250 of whom are injured or wounded. If we had 10 health care
professionals, how long would it take them to triage the wounds and accurately assess the supplies
needed to treat them? Since non-medical volunteers are easier to find in numbers than medical
professionals, what if instead we had 5 physicians and 10 volunteers, each volunteer with a
smartphone? The volunteers could each attend to 25 patients, taking pictures of their wounds and
making that information readily available to those in charge. Let's walk through the scenario for
one patient:
1. The volunteer would approach the patient and take a picture of the patient's full body, including
the wound (this requires a relatively high-resolution camera).
2. The app would first identify the patient's skin colour. This helps the app detect
wounds (a different colour than the surrounding skin) and also identify any other colours that may
indicate infection bordering or inside the wound.
3. The app would use a built-in 3D mesh of the human body to indicate the location of the wound
based on the 2D picture taken (optional).
4. The app would measure the dimensions of the wound (height, width, depth).
5. The app would use a reference library of pictures of the different wound types to classify
the wound (Laceration, Abrasion, Contusion, Avulsion, etc.).
6. The volunteer, after taking the picture, can use the voice recording feature (built into the app)
to record the patient's name, age, and gender.
7. The app would calculate how many stitches are required (and all other immediate and future
supplies required for this wound).
8. The app would triage the wounds based on a severity index, and immediately alert volunteers to
instruct patients to seek medical assistance if needed.
9. The app would calculate the expected duration for healing once the wound has been sutured,
based on the current dimensions of the wound.
10. The app would aggregate the supplies needed for all patients, and compare that information
against available supplies to forecast over- and under-stocked items.
11. If the app is used for an entire region (e.g. 5 camps), it will help stock controllers plan
supply deliveries to the entire region, and supply movement between camps, as needed.
12. Once the patient arrives at the medical professional, their file, with the picture and their
information, can be pulled up; the medical professional would then suture the wound and record
the time it was sutured.
13. Recording the time, along with the earlier estimate of how long the wound will take to
heal, helps automate deciding when the patient should come back for a follow-up. This will
significantly streamline the scheduling process, and provide an accurate estimate of the
related workload for the medical professionals.
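Steps 7 to 9 and step 13 above can be sketched as below. Every formula and threshold here (the suture spacing, the severity weighting, the healing baseline) is a placeholder assumption for illustration, not an established clinical rule:

```python
# A sketch of steps 7-9 and 13: suture estimate, severity triage,
# healing-time estimate, and follow-up date. All formulas and thresholds
# are placeholder assumptions, not clinical guidance.
from datetime import date, timedelta

def estimate_stitches(length_cm: float, spacing_cm: float = 0.5) -> int:
    """Step 7: assume roughly one stitch per spacing_cm of wound length."""
    return max(1, round(length_cm / spacing_cm))

def severity_index(depth_cm: float, area_cm2: float, infected: bool) -> int:
    """Step 8: toy 0-10 index; deeper, larger, infected wounds score higher."""
    score = min(4, depth_cm * 2) + min(4, area_cm2 / 5)
    if infected:
        score += 2
    return round(min(score, 10))

def healing_days(area_cm2: float) -> int:
    """Step 9: assume a 7-day baseline plus one day per cm^2 (illustrative)."""
    return 7 + round(area_cm2)

def follow_up(sutured_on: date, area_cm2: float) -> date:
    """Step 13: schedule the check-up at the estimated healing date."""
    return sutured_on + timedelta(days=healing_days(area_cm2))

print(estimate_stitches(4.0))                                   # 8
print(severity_index(depth_cm=1.2, area_cm2=12.0, infected=True))  # 7
print(follow_up(date(2024, 5, 1), area_cm2=6.0))                # 2024-05-14
```

Real values for the spacing, weights, and healing baseline would come from the clinical best practices discussed in the challenges below.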
will attempt to outline the ones I am aware of:
1. Volunteers' familiarity with mobile technology. This can only be estimated from up-to-
date technology penetration rates in the affected region.
2. Using technology to estimate the depth of the wound. Depth maps can address this.
3. Developing an accurate Wound Image Reference Library, and its related algorithm. This
will help the app compare the wounds of patients to those in the library, in order to automate the
classification process. Platforms like OpenCV and object-recognition technologies (e.g. Microsoft
Kinect) can add significant value to such a development.
4. Developing the algorithm for wound triage. Best practices need to be established for which
variables are considered when triaging the wound.
5. Matching the wound location in a 3D mesh based on a 2D image. There are various
software packages available that convert a 2D image into a 3D model. The idea would be to
map the patient's picture onto a standard built-in 3D mesh of the human body.
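Challenge 3, matching a wound image against a reference library, reduces at its simplest to nearest-neighbour search over image features. The sketch below uses colour histograms and synthetic random images purely to show the matching mechanics; a real system would use OpenCV descriptors or a trained model, and a curated clinical library:

```python
# A toy version of the reference-library matching in challenge 3:
# classify an image by nearest-neighbour search over colour histograms.
# The "library" is synthetic random data, purely to show the mechanics.
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised per-channel histogram of an HxWx3 uint8 image."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def classify(image: np.ndarray, library: dict[str, np.ndarray]) -> str:
    """Return the label whose reference histogram is closest (L1 distance)."""
    query = colour_histogram(image)
    return min(library, key=lambda lbl: np.abs(library[lbl] - query).sum())

rng = np.random.default_rng(0)
library = {
    # two synthetic "reference wounds" with distinct colour ranges
    "laceration": colour_histogram(rng.integers(0, 120, (32, 32, 3), dtype=np.uint8)),
    "abrasion": colour_histogram(rng.integers(120, 256, (32, 32, 3), dtype=np.uint8)),
}
dark_image = rng.integers(0, 120, (32, 32, 3), dtype=np.uint8)
print(classify(dark_image, library))  # laceration
```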
linking the app to an Air Quality Modelling app connected to a High Volume Sampler (HVS), which
measures the pollution in the area, and assessing the risks of infection accordingly.
dimensions of the wound. The Height-Catcher is what inspired this idea.
2. OpenCV is an open-source computer vision library that offers advanced image processing,
geometric description, gesture recognition, and object fitting. It could serve as a
development platform for such apps.
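The geometric core of the measurement step can be shown without OpenCV itself: given a segmented binary mask of the wound and a pixels-per-centimetre scale (which would come from a reference object or depth data, and is an assumed input here), height and width fall out of a bounding box. OpenCV's findContours and minAreaRect would do the same more robustly on real images:

```python
# A minimal sketch of measuring wound height/width from a segmented binary
# mask. The pixels-per-cm scale is an assumed input; OpenCV's
# findContours/minAreaRect would handle real, irregular segmentations.
import numpy as np

def wound_dimensions(mask: np.ndarray, pixels_per_cm: float) -> tuple[float, float]:
    """Return (height_cm, width_cm) of the region where mask is True."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return 0.0, 0.0
    height_px = rows.max() - rows.min() + 1
    width_px = cols.max() - cols.min() + 1
    return height_px / pixels_per_cm, width_px / pixels_per_cm

mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:45] = True  # a 40 x 15 pixel synthetic "wound"
print(wound_dimensions(mask, pixels_per_cm=10.0))  # (4.0, 1.5)
```

Depth, the third dimension mentioned earlier, would come from the depth-map approach rather than from a single 2D mask.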
graphic designers. In addition, once the prototype is developed, it requires thorough testing in a clinical
setting to ensure its accuracy and to make the needed adjustments before releasing the solution. A
proposed method is to use the prototype in a busy ER (by a volunteer, with hospital and patient
consent) and compare the generated results against the ER physician's input on a weekly basis.
Every week, the development team would meet to discuss the cases the app got right and wrong during
the past week, and adjust the app accordingly. After the modifications have been made,
the same images recorded during the previous week would be run through the app for a final check to
ensure that the software is functioning accurately. Depending on how busy the ER is, and on the variety
of wounds treated, within a month or two the app should have enough knowledge to provide estimates
close to those made by physicians.
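The weekly review described above is essentially an agreement check between the app and the physician. A minimal sketch, with illustrative case records (the field names and labels are assumptions):

```python
# A sketch of the weekly validation loop: compare the app's classifications
# to the ER physician's and surface disagreements for the team meeting.
# Case records below are illustrative only.
def weekly_agreement(cases: list[dict]) -> tuple[float, list[dict]]:
    """Return (accuracy, disagreements) for one week of cases."""
    wrong = [c for c in cases if c["app"] != c["physician"]]
    accuracy = 1 - len(wrong) / len(cases)
    return accuracy, wrong

week = [
    {"id": 1, "app": "laceration", "physician": "laceration"},
    {"id": 2, "app": "abrasion", "physician": "abrasion"},
    {"id": 3, "app": "contusion", "physician": "abrasion"},  # mismatch
    {"id": 4, "app": "avulsion", "physician": "avulsion"},
]
accuracy, to_review = weekly_agreement(week)
print(f"{accuracy:.0%} agreement; review cases {[c['id'] for c in to_review]}")
# 75% agreement; review cases [3]
```

Re-running the adjusted app over the same stored images, as the text proposes, would simply mean recomputing the "app" column and calling the same check again.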