Hitaya_OneAPI: Healthcare for underserved communities

Jayita Bhattacharyya
4 min read · Mar 27, 2023


Healthcare is a pressing area of concern. As demand for treatment rises alongside alarming disease rates, affordability is at stake. Healthcare organizations sometimes mislead people for the sake of money. Unfortunately, underserved communities often become targets of organ-trade rackets and other criminal schemes, or exhaust their entire savings travelling from one organization to another. Negligence towards health and the lack of proper monitoring mechanisms make things worse. Financial constraints and a lack of knowledge in the relevant fields lead to lives being lost at early stages.

Recent studies have found that over 8 million people die yearly due to poor access to proper treatment.

In this article, we present our ideation and approach to solving such problems with an AI-powered software application. We first walk you through our client-server architecture and API interactions. In the next article, we show what happens internally at the backend and how we build a machine learning pipeline.

Hitaya_OneAPI: The digital healthcare genie

With the advent of technology, the world has seen ease of access in every aspect of life. AI and machine learning have been revolutionizing industries over the past few years. The internet and mobile phones, now inexpensive, are present in nearly every household, giving underserved people a chance to make changes in their lifestyles.

Hitaya_OneAPI is a one-stop solution for a set of the most commonly diagnosed diseases, using emerging technologies such as AI/ML along with the Intel oneAPI AI Analytics Toolkit and its libraries for enhanced result predictions. We propose this healthcare app to support patients at the check-up stage, when reaching a doctor may not be feasible or when lab tests take days to return results. It can also be used for day-to-day health tracking.

Architecture

Our approach focuses on delivering the best user experience and encourages user-friendliness within the app. Upon signing up, the app suggests categories of diseases to select. It starts with basic data collection, whether in the form of images or text. Collected data is saved to a blockchain, ensuring data security. Once these details are filled in and submitted, the backend infrastructure takes over. Here, various machine learning models come into the picture, accelerated with the Intel oneAPI Deep Neural Network Library (oneDNN), which have been trained on similar data points to analyze the input and produce on-par predictions.

The model is designed to handle newly added data and to have it labelled via benchmarking methods. We've set up a doctor dashboard to collect feedback on unlabeled or unseen data points, which helps us improve the system. Live data is fed into the system, and the model is retrained from time to time for better performance. The final result is displayed on the user's screen with a confidence score and a message indicating whether or not the person is detected as having the disease. For performance optimization, we plan to use Intel Optimization for TensorFlow/PyTorch, which could help us achieve faster response times.
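As a rough illustration of that planned optimization step: recent stock TensorFlow builds already bundle Intel's oneDNN kernels, which can be requested explicitly through an environment flag set before TensorFlow is imported. This is a hedged sketch of one way to enable it, not the project's actual configuration:

```python
import os

# Request Intel oneDNN-optimized kernels in TensorFlow (2.9+ ships them).
# The flag must be set before `import tensorflow` for it to take effect.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # subsequent model inference picks up oneDNN kernels
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```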

Code Walkthrough

The following code snippet shows how the backend FastAPI service works. A POST request is made to the server when the URL endpoint "/pneumonia" is hit. Here we make use of the Intel Distribution for Python. An asynchronous function is called that accepts file inputs in jpeg, jpg, or png format. If the file isn't in any of these three formats, an error with a suggested message is returned. On acceptance, the image file is sent through preprocessing steps, namely resizing, conversion to RGB format, and conversion into a numeric array to make it machine-understandable.

import numpy as np
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
model = None  # populated by load_model()


@app.post("/pneumonia")
async def pneumonia(file: UploadFile = File(...)):
    # Accept only common image formats.
    allowed = file.filename.split(".")[-1].lower() in ("jpg", "jpeg", "png")
    if not allowed:
        return "Image must be jpg or png format!"

    predictions = {}
    load_model()  # loads the pre-trained model into the global `model`

    # Decode the upload, then resize / convert to RGB / turn into an array.
    image = read_imagefile(await file.read())
    image = process_image(image)
    try:
        out = model.predict(image)
        predictions = {
            "positive": str(np.round(out[0][1], 2)),
            "negative": str(np.round(out[0][0], 2)),
        }
    except Exception:
        predictions = {}

    return predictions
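The helper functions `read_imagefile` and `process_image` referenced above are not shown in the article; a minimal sketch using Pillow and NumPy might look like the following (the 224×224 input size is our assumption, not stated in the original):

```python
import io

import numpy as np
from PIL import Image

IMG_SIZE = (224, 224)  # assumed model input size; not stated in the article


def read_imagefile(data: bytes) -> Image.Image:
    """Decode the raw uploaded bytes into a PIL image."""
    return Image.open(io.BytesIO(data))


def process_image(image: Image.Image) -> np.ndarray:
    """Resize, convert to RGB, and scale into a batched float array."""
    image = image.convert("RGB").resize(IMG_SIZE)
    arr = np.asarray(image, dtype=np.float32) / 255.0  # pixel values in [0, 1]
    return np.expand_dims(arr, axis=0)  # shape (1, 224, 224, 3)
```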

Simultaneously, the function for loading the pre-trained model is called. The preprocessed image is passed into the model's prediction phase, and the output array "out" holds the class scores. Finally, the positive and negative scores, rounded to two decimal places, are sent back to the user's screen. The complete implementation can be found at the link mentioned below.
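The format gate at the top of the route boils down to a suffix check on the filename. Pulled out into its own function (the name `is_allowed` is ours, introduced for illustration), it can be exercised in isolation:

```python
ALLOWED_FORMATS = ("jpg", "jpeg", "png")


def is_allowed(filename: str) -> bool:
    """Mirror the route's gate: accept only jpg/jpeg/png uploads."""
    return filename.rsplit(".", 1)[-1].lower() in ALLOWED_FORMATS


print(is_allowed("chest_xray.png"))  # True
print(is_allowed("report.pdf"))      # False
```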

Conclusion

Our end goal is to give our users a hassle-free experience and let them take the upper hand in caring for their wellbeing. As a future goal, we plan to use more edge technologies such as speech-to-text and text-to-speech capabilities, which would make the app accessible to more users. Making the app multilingual would also help overcome language barriers. In addition, we have lined up more disease-detection models, which we are preparing and which will be up and running in the next phase. The next article, covering model training using oneAPI, is linked below.
