Search results for: accuracy
463 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data
Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene
Abstract:
Cutaneous melanoma is a melanocytic skin tumour with a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for disease staging, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using ultrasound data (11 acoustical, 4 shape and 15 textural parameters), together with 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the spectrophotometric imaging device SIAscope). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz centre-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each melanocytic skin tumour (MST) were evaluated during routine histological examination after excision and used as a reference. The results of this study show that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging
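As a rough illustration of the optical-image part of such a pipeline only, the sketch below segments a grey-level lesion image with Otsu thresholding, extracts a few shape features and trains a simple least-squares linear classifier. All images, features and labels are synthetic placeholders; the authors' acoustic parameters, SIAscope features and actual classifier are not reproduced here.

    # Minimal illustrative sketch (not the authors' implementation).
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def lesion_shape_features(grey_img):
        # Segment the darker (lesion-like) pixels with Otsu's threshold and
        # return a few shape descriptors of the largest connected region.
        mask = grey_img < threshold_otsu(grey_img)
        regions = regionprops(label(mask))
        r = max(regions, key=lambda p: p.area)
        return np.array([r.area, r.perimeter, r.eccentricity, r.solidity])

    # Synthetic "optical images" and labels (0 = benign naevus, 1 = melanoma)
    rng = np.random.default_rng(0)
    images = [rng.random((64, 64)) for _ in range(40)]
    X = np.array([lesion_shape_features(img) for img in images])
    y = rng.integers(0, 2, size=40)

    # "Linear regression classifier": least-squares fit of the labels on the
    # features, thresholded at 0.5 to yield a class decision.
    A = np.hstack([X, np.ones((X.shape[0], 1))])     # add an intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = (A @ w > 0.5).astype(int)
    print("training accuracy on synthetic data:", (pred == y).mean())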
Procedia PDF Downloads 270
462 Viability of EBT3 Film in Small Dimensions to Be Used for in-Vivo Dosimetry in Radiation Therapy
Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed
Abstract:
The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence and near-tissue equivalence, which make it viable for in-vivo dosimetry in external beam and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate the viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 (Lot no. A05151201, Make: ISP) film was cut into five different sizes in order to establish the relationship between absorbed dose and film dimensions. The film dimensions were 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator for a dose range from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth and a 10x10 cm2 field size for photons, and 100 cm SSD, 1.4 cm depth and a 15x15 cm2 applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of at least 48 hours. Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions ranging from 3 x 3 mm to 20 x 20 mm is very minimal, with a maximum standard deviation of 0.0058 in optical density for a dose level of 3000 cGy, and the standard deviation increases with increasing dose level. So precautions must be taken when using small-dimension films for higher doses. The analysis shows that there is insignificant variation in the absorbed dose with a change in dimension of the EBT3 film. The study concludes that film dimensions as small as 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need for recalibration of the particular dimension in use for dosimetric applications. However, for higher dose levels, one may need to calibrate the films for the particular dimension in use for higher accuracy. It was also noticed that the crystalline structure of the film was damaged at the edges while cutting the film, which can contribute to a wrong dose if the region of interest includes the damaged area of the film.
Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry
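The dose-response analysis sketched below is a generic illustration of how net optical density is obtained from scanner pixel values and fitted to delivered dose; the pixel values, doses and the a·netOD + b·netOD^n fitting form are illustrative assumptions rather than the calibration data or fit used in this study.

    # Illustrative sketch of a radiochromic-film calibration (hypothetical data).
    import numpy as np

    def net_optical_density(pv_unexposed, pv_exposed):
        # netOD = log10(I_unexposed / I_exposed) from mean pixel values
        return np.log10(pv_unexposed / pv_exposed)

    # Hypothetical calibration points: delivered doses (cGy) and mean
    # red-channel pixel values of unexposed / exposed film pieces.
    doses = np.array([0, 100, 200, 400, 600, 800, 1000], dtype=float)
    pv_unexposed = 42000.0
    pv_exposed = np.array([42000, 36500, 33000, 28500, 25500, 23500, 22000], dtype=float)

    net_od = net_optical_density(pv_unexposed, pv_exposed)

    # A commonly used empirical form: D = a*netOD + b*netOD**n with n fixed
    n = 2.5
    design = np.column_stack([net_od, net_od**n])
    (a, b), *_ = np.linalg.lstsq(design, doses, rcond=None)

    def dose_from_net_od(od):
        return a * od + b * od**n

    print("fitted a, b:", a, b)
    print("dose at netOD = 0.15:", round(dose_from_net_od(0.15), 1), "cGy")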
Procedia PDF Downloads 494
461 Surgical Imaging in Ancient Egypt
Authors: Haitham Nabil Zaghlol Hasan
Abstract:
This research aims to study surgical science and imaging in ancient Egypt: how surgical cases, whether caused by injury or by disease requiring surgical intervention, were diagnosed and treated. Ancient Egyptian physicians tried to move away from magical and theological thinking towards a stand-alone experimental science; they were able to distinguish between diseases and divided them into internal and external diseases, a division that still exists in modern medicine. Apart from skeletal remains, there is no evidence of the extent of human knowledge of medicine and surgery in prehistoric times, although it is likely that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a certain way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints to treat bone fractures. Surgery developed further when copper was discovered, which contributed to the advancement of Egyptian civilization, and more advanced tools, such as the knife or scalpel, appeared in the operating theatre. There is evidence of surgery performed in ancient Egypt during the dynastic period (3200–323 BC). The climate and environmental conditions have preserved medical papyri and human remains that confirm knowledge of surgical methods, including sedation. The ancient Egyptians achieved a high level in surgery, as evidenced by the scenes that depict pathological conditions and surgical procedures; the image alone, however, is not sufficient to prove the pathology, its presence in ancient Egypt and its method of treatment. There are also a number of medical papyri, especially the Edwin Smith and Ebers papyri, which prove the ancient Egyptian surgeon's knowledge of pathological conditions requiring surgical intervention; otherwise, their diagnosis and method of treatment would not have been described with such accuracy in these texts. Some surgeries are described in the surgery section of the Ebers papyrus (recipes 863 to 877). The level of surgery in ancient Egypt was high, and operations such as the treatment of hernias and aneurysms were performed; however, no lengthy explanation of the various surgeries has reached us, and the surgeon usually only noted: “treated surgically”. It is evident in the Ebers papyrus that sharp surgical tools and cautery were used in operations where bleeding was expected, such as hernias, arterial sacs and tumours.
Keywords: Egypt, ancient Egypt, civilization, archaeology
Procedia PDF Downloads 69
460 Prediction of Cardiovascular Markers Associated With Aromatase Inhibitors Side Effects Among Breast Cancer Women in Africa
Authors: Jean Paul M. Milambo
Abstract:
Purpose: Aromatase inhibitors (AIs) are indicated in the treatment of hormone-receptive breast cancer in postmenopausal women in various settings. Studies have shown cardiovascular events in some developed countries. To date, the data are sparse for evidence-based recommendations in African clinical settings due to the lack of cancer registries, capacity building and surveillance systems. Therefore, this study was conducted to assess the feasibility of HyBeacon® probe genotyping adjunctive to standard care for timely prediction and diagnosis of AI-associated adverse events in breast cancer survivors in Africa. Methods: A cross-sectional study was conducted to assess the knowledge of point-of-care testing (POCT) across six African countries using an online survey and telephone contact. The incremental cost-effectiveness ratio (ICER) was calculated using a diagnostic accuracy study, based on mathematical modelling. Results: One hundred twenty-six participants were considered for analysis (mean age = 61 years; SD = 7.11 years; 95% CI: 60-62 years). Comparison of genotyping from HyBeacon® probe technology to Sanger sequencing showed a sensitivity of 99% (95% CI: 94.55% to 99.97%), specificity of 89.44% (95% CI: 87.25 to 91.38%), PPV of 51% (95% CI: 43.77 to 58.26%), and NPV of 99.88% (95% CI: 99.31 to 100.00%). Based on the mathematical model, the assumptions revealed that the ICER was R7 044.55. Conclusion: POCT using HyBeacon® probe genotyping for AI-associated adverse events may be cost-effective in many African clinical settings. Integration of preventive measures for early detection and prevention, guided by breast cancer subtype diagnosis with specific clinical, biomedical and genetic screenings, may improve cancer survivorship. The feasibility of POCT was demonstrated, but implementation could be achieved by improving the integration of POCT within primary health care and referral cancer hospitals, with capacity-building activities at different levels of the health system. This finding is pertinent for a future envisioned implementation and global scale-up of POCT-based initiatives as part of risk communication strategies with clear management pathways.
Keywords: breast cancer, diagnosis, point of care, South Africa, aromatase inhibitors
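For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity, specificity, PPV, NPV and an ICER are computed from a confusion matrix and from cost/effect differences; the counts and costs are invented (chosen only to roughly echo the reported 99%/89% figures) and do not come from the study data.

    # Illustrative computation of diagnostic accuracy metrics and an ICER.
    tp, fn = 98, 1        # hypothetical genotyping results vs. Sanger sequencing reference
    tn, fp = 830, 98

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
          f"PPV={ppv:.3f} NPV={npv:.3f}")

    # ICER = (cost_new - cost_standard) / (effect_new - effect_standard)
    cost_new, cost_standard = 1500.0, 800.0       # hypothetical costs (Rand)
    effect_new, effect_standard = 0.92, 0.82      # hypothetical effectiveness values
    icer = (cost_new - cost_standard) / (effect_new - effect_standard)
    print(f"ICER = R{icer:.2f} per unit of effect gained")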
Procedia PDF Downloads 78
459 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project
Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson
Abstract:
The Site C hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential to monitor the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without the use of depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and bedding planes at new locations in a fast and accurate manner. To develop the CBR, a database was built from 64 pressure sensors already installed on key bedding planes BP25, BP28, and BP31 on the right bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, while similarity was defined as the distance between previous cases and recent cases used to predict the depth of significant BPs. The average difference between actual and predicted elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within a ±99 cm range. Eventually, the actual results will be used to extend the database and improve the BPEP so that it performs as a learning machine and predicts more accurate BP elevations for future sensor installations.
Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction
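A minimal sketch of the case-based reasoning idea is given below, assuming a plain inverse-distance-weighted average over the nearest previous cases; the coordinates, elevations and the exact similarity measure are illustrative stand-ins, not the BPEP implementation.

    # Distance-weighted case-based prediction of a bedding-plane elevation.
    import numpy as np

    # Case base: (easting, northing) coordinates and measured BP elevations (m)
    cases_xy = np.array([[100.0, 200.0], [140.0, 230.0], [180.0, 260.0], [90.0, 310.0]])
    cases_z = np.array([412.3, 411.8, 411.1, 413.0])

    def predict_elevation(xy_new, k=3, eps=1e-6):
        # Inverse-distance weighted average over the k most similar (closest) cases
        d = np.linalg.norm(cases_xy - xy_new, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + eps)
        return float(np.sum(w * cases_z[nearest]) / np.sum(w))

    print("predicted BP elevation:", round(predict_elevation(np.array([120.0, 240.0])), 2), "m")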
Procedia PDF Downloads 80
458 An Elaboration Likelihood Model to Evaluate Consumer Behavior on Facebook Marketplace: Trust on Seller as a Moderator
Authors: Sharmistha Chowdhury, Shuva Chowdhury
Abstract:
Buying and selling new as well as second-hand goods like tools, furniture, household items, electronics, clothing, baby stuff, vehicles, and hobby items through the Facebook Marketplace has become a new paradigm for c2c sellers. This phenomenon encourages and empowers decentralised, home-oriented sellers. This study adopts the Elaboration Likelihood Model (ELM) to explain consumer behaviour on the Facebook Marketplace (FM). The ELM suggests that consumers process information through central and peripheral routes, which eventually shape their attitudes towards posts. The central route focuses on information quality, and the peripheral route focuses on cues. Sellers’ FM posts usually include product features, prices, conditions, pictures, and pick-up location. This study uses information relevance and accuracy as central route factors. The post’s attractiveness represents cues and creates positive or negative associations with the product. A post with remarkable pictures increases the attractiveness of the post, so post aesthetics is used as a peripheral route factor. People influenced via the central or peripheral route form an attitude that includes multiple processes – response and purchase intention. People respond to FM posts through saves, shares and chats. Purchase intention reflects a positive image of the product and a higher likelihood of purchase. This study proposes trust in the seller as a moderator to test the strength of its influence on consumer attitudes and behaviour. Trust in the seller is assessed by whether sellers have badges or not. To conduct the study, a sample questionnaire will be developed and distributed among a group of random FM sellers who are selling vehicles on this platform. The chosen product of this study is the vehicle, a high-value purchase item. A high-value purchase requires consumers to seriously consider forming their attitude without any sign of impulsiveness. Hence, vehicles are the perfect choice to test the strength of consumer attitudes and behaviour. The findings of the study add to the elaboration likelihood model and online second-hand marketplace literature.
Keywords: consumer behaviour, elaboration likelihood model, Facebook Marketplace, c2c marketing
Procedia PDF Downloads 139
457 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo (MC) simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as is the case for Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
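The core inversion step can be illustrated with a small sketch of the COS expansion: recovering a density from its characteristic function on a truncated interval. The standard normal is used only as a check case with a known answer; the portfolio-loss characteristic function of the factor-copula model is not reproduced here.

    # COS-method sketch: density recovery from a characteristic function.
    import numpy as np

    def cos_density(phi, x, a, b, N=128):
        # Approximate the density at points x from characteristic function phi
        # using a truncated Fourier-cosine expansion on [a, b].
        k = np.arange(N)
        u = k * np.pi / (b - a)
        Fk = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
        Fk[0] *= 0.5                                   # first term gets weight 1/2
        return np.cos(np.outer(x - a, u)) @ Fk

    phi_normal = lambda u: np.exp(-0.5 * u**2)         # char. function of N(0, 1)
    x = np.linspace(-4, 4, 9)
    approx = cos_density(phi_normal, x, a=-10.0, b=10.0)
    exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    print("max abs error vs. exact normal density:", np.max(np.abs(approx - exact)))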
Procedia PDF Downloads 168
456 Investigating the Effect of Orthographic Transparency on Phonological Awareness in Bilingual Children with Dyslexia
Authors: Sruthi Raveendran
Abstract:
Developmental dyslexia, characterized by reading difficulties despite normal intelligence, presents a significant challenge for bilingual children navigating languages with varying degrees of orthographic transparency. This study bridges a critical gap in dyslexia interventions for bilingual populations in India by examining how consistency and predictability of letter-sound relationships in a writing system (orthographic transparency) influence the ability to understand and manipulate the building blocks of sound in language (phonological processing). The study employed a computerized visual rhyme-judgment task with concurrent EEG (electroencephalogram) recording. The task compared reaction times, accuracy of performance, and event-related potential (ERP) components (N170, N400, and LPC) for rhyming and non-rhyming stimuli in two orthographies: English (opaque orthography) and Kannada (transparent orthography). As hypothesized, the results revealed advantages in phonological processing tasks for transparent orthography (Kannada). Children with dyslexia were faster and more accurate when judging rhymes in Kannada compared to English. This suggests that a language with consistent letter-sound relationships (transparent orthography) facilitates processing, especially for tasks that involve manipulating sounds within words (rhyming). Furthermore, brain activity measured by event-related potentials (ERP) showed less effort required for processing words in Kannada, as reflected by smaller N170, N400, and LPC amplitudes. These findings highlight the crucial role of orthographic transparency in optimizing reading performance for bilingual children with dyslexia. They also emphasize the need for language-specific intervention strategies that consider the unique linguistic characteristics of each language. While acknowledging the complexity of factors influencing dyslexia, this research contributes valuable insights into the impact of orthographic transparency on phonological awareness in bilingual children. This knowledge paves the way for developing tailored interventions that promote linguistic inclusivity and optimize literacy outcomes for children with dyslexia.
Keywords: developmental dyslexia, phonological awareness, rhyme judgment, orthographic transparency, Kannada, English, N170, N400, LPC
Procedia PDF Downloads 11
455 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars
Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez
Abstract:
Semi-intensive aquaculture in traditional earth ponds is the main rearing system in Southern Spain. These fish-rearing systems account for approximately two thirds of aquaculture production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish are put into the ponds but not how many will be harvested at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually take up a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure to count fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems linked to aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained with multibeam sonars. The results indicate that it is possible to obtain a realistic approximation of the number of fish, their sizes and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project (‘Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E’) financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars
Procedia PDF Downloads 229
454 Cost Overruns in Mega Projects: Project Progress Prediction with Probabilistic Methods
Authors: Yasaman Ashrafi, Stephen Kajewski, Annastiina Silvennoinen, Madhav Nepal
Abstract:
Mega projects, whether in the construction, urban development or energy sectors, are one of the key drivers that build the foundation of wealth and modern civilizations in regions and nations. Such projects require economic justification and substantial capital investment, often derived from individual and corporate investors as well as governments. Cost overruns and time delays in these mega projects demand a new approach to more accurately predict project costs and establish realistic financial plans. The significance of this paper is that the cost efficiency of megaprojects will improve and cost overruns will decrease. This research will assist project managers (PMs) to make timely and appropriate decisions about both the cost and outcomes of ongoing projects. This research, therefore, examines the oil and gas industry, where most mega projects apply the classic methods of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and rely on project data to forecast cost and time. Because these projects always overrun in cost and time, even in the early phase of the project, the probabilistic methods of Monte Carlo Simulation (MCS) and Bayesian adaptive forecasting were used to predict project cost at completion. The current theoretical and mathematical models which forecast the total expected cost and project completion date during the execution phase of an ongoing project will be evaluated. The Earned Value Management (EVM) method is unable to predict cost at completion of a project accurately due to the lack of sufficiently detailed project information, especially in the early phase of the project. During the project execution phase, the Bayesian adaptive forecasting method incorporates predictions into the actual performance data from earned value management and revises pre-project cost estimates, making full use of the available information. The outcome of this research is to improve the accuracy of both cost prediction and final duration. This research will provide a warning method to identify when current project performance deviates from planned performance and creates an unacceptable gap between preliminary planning and actual performance. This warning method will support project managers in taking corrective actions on time.
Keywords: cost forecasting, earned value management, project control, project management, risk analysis, simulation
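A brief sketch of the classic EVM indices and a Monte Carlo estimate at completion may help fix ideas; all monetary figures and the lognormal assumption on future CPI are hypothetical, not taken from real project data.

    # Illustrative EVM indices and Monte Carlo estimate at completion (EAC).
    import numpy as np

    bac = 500.0e6                                  # budget at completion
    pv, ev, ac = 180.0e6, 150.0e6, 200.0e6         # planned value, earned value, actual cost

    cpi = ev / ac                                  # cost performance index
    spi = ev / pv                                  # schedule performance index
    eac_point = bac / cpi                          # classic point estimate of cost at completion
    print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}, EAC = {eac_point/1e6:.1f} M")

    # Monte Carlo: treat future CPI as uncertain (lognormal around the observed
    # value) and propagate to the estimate at completion EAC = AC + (BAC - EV)/CPI.
    rng = np.random.default_rng(1)
    future_cpi = rng.lognormal(mean=np.log(cpi), sigma=0.15, size=100_000)
    eac_samples = ac + (bac - ev) / future_cpi
    print(f"EAC mean = {eac_samples.mean()/1e6:.1f} M")
    print(f"EAC P80  = {np.percentile(eac_samples, 80)/1e6:.1f} M")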
Procedia PDF Downloads 404
453 Applications of Artificial Intelligence (AI) in Cardiac Imaging
Authors: Angelis P. Barlampas
Abstract:
The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, and especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views and consequent quantification of chamber volumes/mass, ejection fraction and longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinction between athlete's heart and hypertrophic cardiomyopathy, as well as restrictive cardiomyopathy and constrictive pericarditis; prediction of all-cause mortality. CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; prediction of major cardiovascular events in patients with suspected CAD. MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; distinction between patients with myocardial infarction and control subjects; potential cost reduction, since it would preclude the need for gadolinium-enhanced CMR; prediction of 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; prediction of cardiac death; ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.
Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine
Procedia PDF Downloads 79
452 Climate Changes Impact on Artificial Wetlands
Authors: Carla Idely Palencia-Aguilar
Abstract:
Artificial wetlands play an important role in the Guasca Municipality in Colombia, not only because they are used for agroindustry, but also because more than 45 species were found there, some of which are endemic or migratory birds. Remote sensing was used to determine the changes in the area occupied by water in the artificial wetlands by means of Aster and Modis images for different time periods. Evapotranspiration was also determined by three methods: the Surface Energy Balance System (SEBS) algorithm of Su, the Surface Energy Balance Algorithm (SEBAL) of Bastiaanssen, and the FAO potential evapotranspiration method. Empirical equations were also developed to determine the relationship between the Normalized Difference Vegetation Index (NDVI) and net radiation, ambient temperature and rain, with an obtained R2 of 0.83. Groundwater level fluctuations on a daily basis were studied as well. Data from a piezometer placed next to the wetland were fitted to rain changes (with two weather stations located in the proximity of the wetlands) by means of multiple regression and time-series analysis; the R2 between calculated and measured values was higher than 0.98. Information from nearby weather stations provided input for ordinary kriging as well as for the Digital Elevation Model (DEM) developed using PCI software. Standard models (exponential, spherical, circular, Gaussian, linear) to describe spatial variation were tested. Ordinary cokriging between the height and rain variables was also tested to determine whether the accuracy of the interpolation would increase. The results showed no significant differences, given that the mean result of the spherical function for the rain samples after ordinary kriging was 58.06 with a standard deviation of 18.06, while cokriging, using a spherical function for the rain variable, a power function for the height variable and a spherical function for the cross-variable (rain and height), had a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the lack of awareness of neighbours and government deficiencies. Water quality was determined over the years; different parameters were studied to determine the chemical characteristics of the water. In addition, 600 pesticides were studied by gas and liquid chromatography. Results showed that coliforms, nitrogen, phosphorus and prochloraz were the most significant contaminants.
Keywords: DEM, evapotranspiration, geostatistics, NDVI
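As a small illustration of two of the analysis steps mentioned above, the sketch below computes NDVI from red and near-infrared reflectance and fits an ordinary least-squares regression of NDVI on net radiation, temperature and rain; all values are synthetic, and the actual empirical equations of the study are not reproduced.

    # NDVI and a multiple linear regression on synthetic predictor data.
    import numpy as np

    def ndvi(nir, red):
        return (nir - red) / (nir + red)

    print("example NDVI for NIR=0.45, red=0.12:", round(ndvi(0.45, 0.12), 3))

    rng = np.random.default_rng(0)
    n = 60
    net_radiation = rng.uniform(100, 250, n)       # W/m2
    temperature = rng.uniform(8, 20, n)            # deg C
    rain = rng.uniform(0, 30, n)                   # mm
    # Synthetic NDVI loosely driven by the three predictors plus noise
    y = 0.002 * net_radiation + 0.01 * temperature + 0.004 * rain + rng.normal(0, 0.03, n)

    X = np.column_stack([np.ones(n), net_radiation, temperature, rain])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print("regression coefficients:", beta)
    print("R2:", round(r2, 3))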
Procedia PDF Downloads 120
451 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastases growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III which reflects the relations between primary tumor and secondary distant metastases; 3) analyzing the CoM-III scope of application; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of deterministic nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows different growth periods of primary tumor and secondary distant metastases to be calculated for patients with lymph node metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for secondary distant metastases; 3) the ‘visible period’ for secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, while the others are based on additional statistical data. Thus, the CoM-III model and predictive software: a) detect the different growth periods of primary tumor and secondary distant metastases in patients with lymph node metastases; b) forecast the period of distant metastases appearance in patients with lymph node metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts on breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by the CoM-III: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of secondary distant metastases. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) the CoM-III correctly describes primary tumor and secondary distant metastases growth of stages IA, IIA, IIB, IIIB (T1-4N1-3M0) in patients with lymph node metastases (N1-3); b) it facilitates the understanding of the appearance period and inception of secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival
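The exponential-growth bookkeeping behind such models can be illustrated with a short sketch that derives the volume doubling time from two measured volumes and counts the doublings needed to reach a detectable size; the volumes and dates are illustrative, not the CoM-III equations themselves.

    # Exponential growth bookkeeping: doubling time and number of doublings.
    import numpy as np

    def doubling_time(v1, v2, dt_days):
        # Volume doubling time (days) from volumes v1, v2 measured dt_days apart
        return dt_days * np.log(2) / np.log(v2 / v1)

    def doublings_between(v_start, v_end):
        # Number of volume doublings needed to grow from v_start to v_end
        return np.log2(v_end / v_start)

    v_cell = 1e-9          # rough order-of-magnitude volume of a single cell, mm^3
    v_detectable = 4.19    # sphere of 2 mm diameter, (4/3)*pi*1**3 mm^3

    dt = doubling_time(v1=113.1, v2=268.1, dt_days=180)   # e.g. 6 mm -> 8 mm diameter over 180 days
    print("volume doubling time:", round(dt, 1), "days")
    print("doublings in the 'non-visible' period:",
          round(doublings_between(v_cell, v_detectable), 1))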
Procedia PDF Downloads 302
450 Creation of a Clinical Tool for Diagnosis and Treatment of Skin Disease in HIV Positive Patients in Malawi
Authors: Alice Huffman, Joseph Hartland, Sam Gibbs
Abstract:
Dermatology is often a neglected specialty in low-resource settings, despite the high morbidity associated with skin disease. This becomes even more significant when associated with HIV infection, as dermatological conditions are more common and aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and data collected from clinics. The aim was to improve diagnostic accuracy and provide guidance for the treatment of skin disease in HIV-positive patients. A literature search within Embase, Medline and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes if they develop in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included and a booklet created using the simple layout of a picture, a diagnostic description of the disease and treatment options. Clinical photographs were collected from local clinics (with the full consent of the patient) or from the book ‘Common Skin Diseases in Africa’ (permission granted if fully acknowledged and used in a not-for-profit capacity). The tool was evaluated by the local staff, alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.
Keywords: dermatology, HIV, Malawi, skin disease
Procedia PDF Downloads 205
449 Diagnostic Value of CT Scan in Acute Appendicitis
Authors: Maria Medeiros, Suren Surenthiran, Abitha Muralithar, Soushma Seeburuth, Mohammed Mohammed
Abstract:
Introduction: Appendicitis is the most common surgical emergency globally and can have devastating consequences. Diagnostic imaging has become increasingly common in aiding the diagnosis of acute appendicitis. Computed tomography (CT) and ultrasound (US) are the most commonly used imaging modalities for diagnosing acute appendicitis. Pre-operative imaging has contributed to a reduction of negative appendicectomy rates from 10-29% to 5%. The literature reports that CT has a diagnostic sensitivity of 94% in acute appendicitis. This clinical audit was conducted to establish whether the diagnostic yield of CT for acute appendicitis matches the literature. CT has a high sensitivity and specificity for diagnosing acute appendicitis, and its use can result in a lower negative appendicectomy rate. The aim of this study is to compare the pre-operative imaging findings from CT scans to the post-operative histopathology results and establish the accuracy of CT scans in aiding the diagnosis of acute appendicitis. Methods: This was a retrospective study focusing on adult presentations to the general surgery department of a district general hospital in central London with an impression of acute appendicitis. We analyzed all patients from July 2022 to December 2022 who underwent a CT scan preceding appendicectomy. Pre-operative CT findings and post-operative histopathology findings were compared to establish the efficacy of CT scans in diagnosing acute appendicitis. Our results were also cross-referenced with the pre-existing literature. Data were collected and anonymized using CERNER and analyzed in Microsoft Excel. Exclusion criteria: children, age <16. Results: 65 patients had CT scans in which the report stated acute appendicitis. Of those 65 patients, 62 underwent diagnostic laparoscopies. 100% of patients who underwent an appendicectomy with a pre-operative CT scan showing acute appendicitis had acute appendicitis on histopathology analysis. Three of the 65 patients who had a CT scan showing appendicitis received conservative treatment. Conclusion: CT scans positive for acute appendicitis had a sensitivity and positive predictive value of 100%, which matches published research studies (sensitivity of 94%). The use of CT scans in the diagnostic work-up for acute appendicitis can be extremely helpful in a) confirming the diagnosis and b) reducing the rates of negative appendicectomies, consequently reducing unnecessary operative risks for patients, reducing costs and reducing pressure on emergency theatre lists.
Keywords: acute appendicitis, CT scan, general surgery, imaging
Procedia PDF Downloads 94
448 Drug Therapy Problem and Its Contributing Factors among Pediatric Patients with Infectious Diseases Admitted to Jimma University Medical Center, South West Ethiopia: Prospective Observational Study
Authors: Desalegn Feyissa Desu
Abstract:
Drug therapy problems are a significant challenge in providing high-quality health care services for patients. They are associated with morbidity, mortality, increased hospital stay, and reduced quality of life. Moreover, pediatric patients are quite susceptible to drug therapy problems. Thus, this study aimed to assess drug therapy problems and their contributing factors among pediatric patients diagnosed with infectious diseases admitted to the pediatric ward of Jimma University Medical Center from April 1 to June 30, 2018. A prospective observational study was conducted among pediatric patients with infectious diseases admitted from April 1 to June 30, 2018. Drug therapy problems were identified using Cipolle and Strand's drug-related problem classification method. Patients' written informed consent was obtained after explaining the purpose of the study. Patient-specific data were collected using a structured questionnaire. Data were entered into Epi Data version 4.0.2 and then exported to the statistical software package version 21.0 for analysis. To identify predictors of drug therapy problem occurrence, multiple stepwise backward logistic regression analysis was done. The 95% CI was used to show the accuracy of the data analysis, and statistical significance was considered at a p-value < 0.05. A total of 304 pediatric patients were included in the study. Of these, 226 (74.3%) patients had at least one drug therapy problem during their hospital stay. A total of 356 drug therapy problems were identified among these 226 patients. Non-compliance (28.65%) and dose too low (27.53%) were the most common types of drug-related problems, while disease comorbidity [AOR = 3.39, 95% CI: 1.89-6.08], polypharmacy [AOR = 3.16, 95% CI: 1.61-6.20] and a hospital stay of more than six days [AOR = 3.37, 95% CI: 1.71-6.64] were independent predictors of drug therapy problem occurrence. Drug therapy problems were common in pediatric patients with infectious diseases in the study area. Presence of comorbidity, polypharmacy and prolonged hospital stay were the predictors of drug therapy problems in the study area. Therefore, to overcome the significant gaps in pediatric pharmaceutical care, clinical pharmacists, pediatricians, and other health care professionals have to work in collaboration.
Keywords: drug therapy problem, pediatric, infectious disease, Ethiopia
Procedia PDF Downloads 153
447 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI
Authors: Brennan Lodge
Abstract:
Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models do have their downsides: they cannot easily expand or revise their memory, they cannot straightforwardly provide insight into their predictions, and they may produce “hallucinations.” Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure that pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies
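A highly simplified sketch of the retrieval step in a RAG pipeline is given below: passages and the query are embedded, ranked by cosine similarity, and the top-k passages are prepended to the prompt handed to the generator. A real system would use a neural sentence encoder and a dense vector index as described above; the bag-of-words embedding and the compliance passages here are invented for illustration.

    # Toy retrieval-augmented-generation retrieval step (illustrative only).
    import re
    import numpy as np

    corpus = [
        "Access reviews must be completed quarterly for privileged accounts.",
        "Encryption keys are rotated every 12 months or upon suspected compromise.",
        "Third-party vendors require a completed risk assessment before onboarding.",
    ]

    def tokenize(text):
        return re.findall(r"[a-z0-9]+", text.lower())

    vocab = sorted({w for doc in corpus for w in tokenize(doc)})

    def embed(text):
        # Term-frequency vector over the corpus vocabulary (stand-in for a real encoder)
        v = np.zeros(len(vocab))
        for w in tokenize(text):
            if w in vocab:
                v[vocab.index(w)] += 1.0
        return v

    index = np.vstack([embed(doc) for doc in corpus])

    def retrieve(query, k=2):
        q = embed(query)
        sims = index @ q / (np.linalg.norm(index, axis=1) * (np.linalg.norm(q) + 1e-12))
        return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

    query = "How often should privileged access be reviewed?"
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    print(prompt)   # this augmented prompt is what the generative model would receive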
Procedia PDF Downloads 95
446 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and changes to these stocks in case of land-use change. However, Thailand presently lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: an approximately 50-year-old secondary forest, a 4-year-old fallow, and a 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Density database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. This study provides insights for reforestation management and can be used to prepare baseline data on Thailand's carbon stocks for REDD+ and other carbon trading schemes. These may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
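As an illustration of how an allometric equation of the common log-log form ln(AGB) = a + b·ln(WD·DBH²·H) is fitted, the sketch below uses a handful of synthetic tree measurements; the coefficients obtained are not the equations developed in this study.

    # Fitting a generic log-log allometric biomass equation to synthetic trees.
    import numpy as np

    # DBH (cm), height (m), wood density (g/cm3), oven-dry aboveground biomass (kg)
    dbh = np.array([2.1, 4.5, 7.8, 12.0, 18.5, 25.0, 31.0])
    height = np.array([3.0, 5.5, 8.0, 11.0, 15.0, 18.0, 21.0])
    wd = np.array([0.55, 0.48, 0.60, 0.52, 0.58, 0.62, 0.50])
    agb = np.array([1.1, 5.2, 21.0, 68.0, 230.0, 540.0, 820.0])

    x = np.log(wd * dbh**2 * height)
    y = np.log(agb)
    b, a = np.polyfit(x, y, 1)                 # slope and intercept of the log-log fit
    print(f"ln(AGB) = {a:.3f} + {b:.3f} * ln(WD*DBH^2*H)")

    def predict_agb(dbh_cm, h_m, wd_gcm3):
        # Note: a back-transformation correction factor is often added in practice
        return np.exp(a + b * np.log(wd_gcm3 * dbh_cm**2 * h_m))

    print("predicted AGB for DBH=10 cm, H=9 m, WD=0.55:",
          round(predict_agb(10, 9, 0.55), 1), "kg")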
Procedia PDF Downloads 284
445 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied in two different settings, namely the single-participant level and the group level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction was 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results showed the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding
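A much-simplified sketch of the two analysis stages, on synthetic signals, is given below: ICA unmixing followed by an L1-penalised (sparse) logistic regression on per-trial component features. scikit-learn's FastICA and LogisticRegression stand in for the specific ICA and SLR implementations used in the study, and the data are random rather than real EEG.

    # Toy two-stage pipeline: ICA unmixing, then sparse logistic regression.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples = 200, 16, 50
    n_classes = 8                                   # eight movement directions

    # Synthetic "EEG": class-dependent sources mixed into channels plus noise
    labels = rng.integers(0, n_classes, n_trials)
    sources = rng.normal(size=(n_trials, n_channels, n_samples))
    sources[:, 0, :] += labels[:, None] * 0.5       # one source carries class information
    mixing = rng.normal(size=(n_channels, n_channels))
    eeg = np.einsum("ij,tjs->tis", mixing, sources)

    # Stage 1: learn an unmixing matrix from the concatenated data
    flat = eeg.transpose(0, 2, 1).reshape(-1, n_channels)
    ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
    components = ica.fit_transform(flat).reshape(n_trials, n_samples, n_channels)

    # Stage 2: sparse (L1) logistic regression on per-trial component features
    features = components.mean(axis=1)              # crude per-trial summary of each IC
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te), "(chance ~", 1 / n_classes, ")")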
Procedia PDF Downloads 138
444 Flow Boiling Heat Transfer at Low Mass and Heat Fluxes: Heat Transfer Coefficient, Flow Pattern Analysis and Correlation Assessment
Authors: Ernest Gyan Bediako, Petra Dancova, Tomas Vit
Abstract:
Flow boiling heat transfer remains an important area of research due to its relevance in thermal management systems and other applications. Despite the enormous work done in the field of flow boiling heat transfer over the years to understand how flow parameters such as mass flux, heat flux, saturation conditions and tube geometries influence its characteristics, there are still many contradictions and a lack of agreement on the actual mechanisms controlling heat transfer and how flow parameters affect it. This work thus seeks to experimentally investigate the heat transfer characteristics and flow patterns at low mass flux, low heat flux and low saturation pressure conditions, which receive less attention in the literature but are prevalent in refrigeration, air-conditioning and heat pump applications. In this study, a flow boiling experiment was conducted with R134a as the working fluid in a 5 mm internal diameter stainless steel horizontal smooth tube, with mass flux ranging from 80 to 100 kg/m2s, heat flux ranging from 3.55 to 25.23 kW/m2, and a saturation pressure of 460 kPa. Vapor quality ranged from 0 to 1. The well-known flow pattern map created by Wojtan et al. was used to predict the flow patterns observed during the study. The experimental results were compared with well-known flow boiling heat transfer correlations from the literature. The findings show that the heat transfer coefficient was influenced by both mass flux and heat flux. However, for increasing heat flux, nucleate boiling was observed to be the dominant mechanism controlling the heat transfer, especially in the low vapor quality region. For increasing mass flux, convective boiling was the dominant mechanism controlling the heat transfer, especially in the high vapor quality region. The study also observed an unusually high heat transfer coefficient at low vapor qualities, which could be due to periodic wetting of the tube walls caused by slug and stratified-wavy flow patterns. The flow patterns predicted by the Wojtan et al. map were a mixture of slug and stratified-wavy, purely stratified-wavy and dryout. Statistical assessment of the experimental data against various well-known correlations from the literature showed that none of the reported correlations could predict the experimental data with sufficient accuracy.
Keywords: flow boiling, heat transfer coefficient, mass flux, heat flux
Procedia PDF Downloads 116
443 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
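The final regression stage described above can be sketched as a PCA step followed by a random-forest regressor predicting remaining useful life; the feature table below is synthetic, standing in for indicators generated by the aging-infused Newman model (e.g. varying SEI thickness).

    # PCA + random-forest regression of remaining useful life (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_cells, n_features = 300, 12
    X = rng.normal(size=(n_cells, n_features))     # e.g. capacity, resistance, SEI proxy, ...
    rul = 1000 - 400 * X[:, 0] + 150 * X[:, 1] + rng.normal(0, 30, n_cells)   # cycles

    X_tr, X_te, y_tr, y_te = train_test_split(X, rul, random_state=0)
    model = make_pipeline(PCA(n_components=5),
                          RandomForestRegressor(n_estimators=200, random_state=0))
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    rel_err = np.mean(np.abs(pred - y_te) / np.abs(y_te))
    print(f"mean relative error on held-out cells: {rel_err:.1%}")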
Procedia PDF Downloads 70
442 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural state, normally a dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and high depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model prediction and experimental results is obtained.
Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
Procedia PDF Downloads 363441 AIR SAFE: an Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms
Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita
Abstract:
Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of aerial viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people’s well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) Improved monitoring and prediction accuracy; (ii) Enhanced decision-making and mitigation strategies; (iii) Real-time air quality information; (iv) Increased efficiency in data analysis and processing; (v) Advanced early warning systems for air pollution events; (vi) Automated and cost-effective monitoring network; and (vii) A better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures to ensure the occupants’ wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.Keywords: air quality, internet of things, artificial intelligence, smart home
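A minimal sketch of the predict-then-act loop described above, using synthetic data: a simple autoregressive model forecasts the next CO₂ reading from recent sensor history, and a ventilation action is triggered when the forecast exceeds a comfort threshold. The model choice, threshold, and data are illustrative assumptions, not the AIR SAFE implementation.

```python
# Sketch of forecasting the next CO2 reading from lagged sensor values and deciding
# on a ventilation action. Data, model, and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
co2 = 600 + np.cumsum(rng.normal(2.0, 5.0, size=300))  # synthetic indoor CO2 trace (ppm)

LAGS = 10
X = np.array([co2[i:i + LAGS] for i in range(len(co2) - LAGS)])  # sliding history windows
y = co2[LAGS:]                                                   # next reading after each window

model = LinearRegression().fit(X[:-1], y[:-1])  # train on all but the most recent window
forecast = model.predict(X[-1:])[0]             # one-step-ahead forecast

CO2_THRESHOLD_PPM = 1000.0  # assumed comfort threshold
action = "open window / increase ventilation" if forecast > CO2_THRESHOLD_PPM else "keep closed"
print(f"forecast CO2: {forecast:.0f} ppm -> {action}")
```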
Procedia PDF Downloads 93440 Conversational Assistive Technology of Visually Impaired Person for Social Interaction
Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer
Abstract:
Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with face, object, and activity recognition. This model evaluates the accuracy of activity recognition. The device captures the user's front view, detects objects, recognizes activities, and answers the user's queries. It is implemented using the front-view camera. A local dataset was collected that includes different first-person human activities. The results obtained show identification of the activities on which the VGG-16 model was trained, such as hugging, shaking hands, talking, walking, and waving.Keywords: dataset, visually impaired person, natural language process, human activity recognition
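The activity-recognition component described above could be prototyped as transfer learning on a VGG-16 backbone, as in the following sketch; the class list and training pipeline are assumptions, since the authors' local dataset and exact setup are not detailed here.

```python
# Transfer-learning sketch: an ImageNet-pretrained VGG-16 backbone with a small
# classification head for assumed first-person activity classes.
import tensorflow as tf
from tensorflow.keras import layers, models

ACTIVITIES = ["hugging", "shaking_hands", "talking", "walking", "waving"]  # assumed classes

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(len(ACTIVITIES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then consume frames extracted from the front-view camera, e.g.:
# model.fit(train_frames, train_labels, validation_data=(val_frames, val_labels), epochs=10)
```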
Procedia PDF Downloads 58439 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed from the process is corrected for lens distortion, topographic relief, and camera tilt. This can be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired at the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified with the same co-ordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before performing the ortho-rectification process. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area. This can then be used for comparison with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) Decompilation of the video stream into individual frames; (2) Finding the interior camera orientation parameters; (3) Finding the relative exterior orientation parameters for each video frame with respect to the others; (4) Finding the absolute exterior orientation parameters, using self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric mapping, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria, was used for testing our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the data were processed using the four-step ortho-rectification procedure. The results demonstrated that the geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the six control points measured by a GPS receiver, is between 3 and 5 meters.Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
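Steps (1) and (4) of the procedure can be sketched as follows: decompile the video stream into frames and warp each frame onto a ground plane. The sketch uses a homography computed from four assumed ground-control correspondences, whereas the paper derives the exterior orientation from telemetry and self-calibration adjustment; the file name and coordinates are hypothetical.

```python
# Sketch of frame decompilation (step 1) and plane rectification (step 4) with OpenCV.
# The pixel <-> map correspondences below are hypothetical ground control points.
import cv2
import numpy as np

def rectify_video(video_path: str, out_size=(1000, 1000)):
    img_pts = np.float32([[100, 200], [1800, 250], [1750, 1000], [150, 950]])  # image pixels
    map_pts = np.float32([[0, 0], [out_size[0], 0],
                          [out_size[0], out_size[1]], [0, out_size[1]]])       # map coordinates
    H = cv2.getPerspectiveTransform(img_pts, map_pts)

    cap = cv2.VideoCapture(video_path)          # step 1: decompile the stream into frames
    rectified = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rectified.append(cv2.warpPerspective(frame, H, out_size))  # step 4: rectify each frame
    cap.release()
    return rectified  # frames ready to be mosaicked into a planimetric map

if __name__ == "__main__":
    frames = rectify_video("uav_clip.mp4")  # hypothetical file name
    print(f"rectified {len(frames)} frames")
```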
Procedia PDF Downloads 478438 Evaluation of Commercial Back-analysis Package in Condition Assessment of Railways
Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman
Abstract:
Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have been imposing costly maintenance actions on the railway sector. The need to develop a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through the back-analysis technique. Although different commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test was developed and validated against available field data. Then, a virtual experimental database (including 218 sets of FWD testing data) was generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated considering various moduli for each layer of the track substructure over a predefined range. The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; conducted near Leominster station on the same section where the FWD test was performed). The results reveal that BAKFAA overestimates the modulus of each substructure layer. To adjust BAKFAA to the CPT data, this study introduces a correlation model that makes BAKFAA applicable to railway applications.Keywords: back-analysis, BAKFAA, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)
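The back-analysis idea, adjusting layer moduli until a forward deflection model reproduces the measured FWD deflection bowl, can be sketched with a generic least-squares fit. The forward model below is a crude placeholder rather than BAKFAA's layered-elastic solution, and the "measured" deflections are synthetic; only the fitting structure is illustrated.

```python
# Sketch of FWD back-analysis: fit layer moduli so a forward model matches the
# measured deflection bowl. The forward model is a placeholder assumption in which
# each layer's contribution decays with offset at a layer-specific rate.
import numpy as np
from scipy.optimize import least_squares

offsets_m = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])   # geophone offsets from the load
load_kn = 50.0
decay = np.array([8.0, 3.0, 1.0])  # assumed decay rates: ballast, sub-ballast, subgrade

def forward_deflections(moduli_mpa, offsets):
    """Placeholder forward model: total surface deflection (microns) as a sum of
    layer contributions, each inversely proportional to that layer's modulus."""
    contrib = load_kn * 1000.0 / (moduli_mpa[:, None] * (1.0 + decay[:, None] * offsets[None, :]))
    return contrib.sum(axis=0)

measured = forward_deflections(np.array([120.0, 80.0, 45.0]), offsets_m)  # synthetic "FWD data"

def residuals(log_moduli):
    return forward_deflections(np.exp(log_moduli), offsets_m) - measured

fit = least_squares(residuals, x0=np.log([200.0, 200.0, 200.0]),
                    bounds=(np.log(10.0), np.log(1000.0)))
print("back-calculated moduli (MPa):", np.round(np.exp(fit.x), 1))
```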
Procedia PDF Downloads 129437 Association Between Type of Face Mask and Visual Analog Scale Scores During Pain Assessment
Authors: Merav Ben Natan, Yaniv Steinfeld, Sara Badash, Galina Shmilov, Milena Abramov, Danny Epstein, Yaniv Yonai, Eyal Berbalek, Yaron Berkovich
Abstract:
Introduction: Postoperative pain management is crucial for effective rehabilitation, with the Visual Analog Scale (VAS) being a common tool for assessing pain intensity due to its sensitivity and accuracy. However, challenges such as misunderstanding of instructions and discrepancies in pain reporting can affect its reliability. Additionally, the mandatory use of face masks during the COVID-19 pandemic may impair nonverbal and verbal communication, potentially impacting pain assessment and overall care quality. Aims: This study examines the association between the type of mask worn by healthcare professionals and the assessment of pain intensity in patients after orthopedic surgery using the visual analog scale (VAS). Design: A nonrandomized controlled trial was conducted among 176 patients hospitalized in an orthopedic department of a hospital located in northern-central Israel from January to March 2021. Methods: In the intervention group (n = 83), pain assessment using the VAS was performed by a healthcare professional wearing a transparent face mask, while in the control group (n = 93), pain assessment was performed by a healthcare professional wearing a standard non-transparent face mask. The initial assessment was performed by a nurse, and 15 minutes later, an additional assessment was performed by a physician. Results: Healthcare professionals wearing a standard non-transparent mask obtained higher VAS scores than healthcare professionals wearing a transparent mask. In addition, nurses obtained lower VAS scores than physicians. A discrepancy in VAS scores between nurses and physicians was found in 50% of cases. This discrepancy was more prevalent among female patients, patients after knee replacement or spinal surgery, and when healthcare professionals were wearing a standard non-transparent mask. Conclusions: This study supports the use of transparent face masks by healthcare professionals in an orthopedic department, particularly by nurses. In addition, this study raises concerns regarding the reliability of the VAS.Keywords: postoperative pain management, visual analog scale, face masks, orthopedic surgery
Procedia PDF Downloads 28436 Proteomic Analysis of Cytoplasmic Antigen from Brucella canis to Characterize Immunogenic Proteins Responded with Naturally Infected Dogs
Authors: J. J. Lee, S. R. Sung, E. J. Yum, S. C. Kim, B. H. Hyun, M. Her, H. S. Lee
Abstract:
Canine brucellosis, mainly caused by Brucella canis, is a critical problem in dogs, leading to reproductive diseases. There are, however, no clear symptoms, so it may go unnoticed in most cases. Serodiagnosis of canine brucellosis has not been well established and faces substantial difficulties due to broad cross-reactivity between the rough cell wall antigens of B. canis and heterospecific antibodies present in normal, uninfected dogs. Thus, this study was conducted to characterize the immunogenic proteins in the cytoplasmic antigen (CPAg) of B. canis, which define the antigenic sensitivity of the humoral antibody responses in B. canis-infected dogs. In the analysis of B. canis CPAg, we first extracted and purified the cytoplasmic proteins from cultured B. canis by hot-saline inactivation, ultrafiltration, sonication, and ultracentrifugation, step by step, according to the sonicated antigen extraction method. For characterization of this antigen, we checked the type and size range of each protein on SDS-PAGE and verified the immunogenic proteins reacting with antisera of B. canis-infected dogs. Selected immunodominant proteins were identified using MALDI-MS/MS. As a result, in an immunoproteomic assay, several polypeptides in CPAg on one- or two-dimensional electrophoresis (1-DE or 2-DE) reacted specifically with antisera from B. canis-infected dogs but not from non-infected dogs. The polypeptides of approximately 150, 80, 60, 52, 33, 26, 17, 15, 13, and 11 kDa on 1-DE were dominantly recognized by antisera from B. canis-infected dogs. In the immunoblot profiles on 2-DE, ten immunodominant proteins in CPAg were detected with antisera of infected dogs between pI 3.5-6.5 at approximately 35 to 10 kDa, without any nonspecific reaction with sera from non-infected dogs. The ten immunodominant proteins were identified by MALDI-MS/MS as superoxide dismutase, bacterioferritin, amino acid ABC transporter substrate-binding protein, extracellular solute-binding protein family 3, transaldolase, 26 kDa periplasmic immunogenic protein, rhizopine-binding protein, enoyl-CoA hydratase, arginase, and type 1 glyceraldehyde-3-phosphate dehydrogenase. Most of these proteins are characterized by cytoplasmic or periplasmic localization and by metabolic or transporter functions. Consequently, this study identified the prominent immunogenic proteins in B. canis CPAg, highlighting that these antigenic proteins may enable specific serodiagnosis of canine brucellosis. Furthermore, we will evaluate these immunodominant proteins for application in advanced diagnostic methods with high specificity and accuracy.Keywords: Brucella canis, canine brucellosis, cytoplasmic antigen, immunogenic proteins
Procedia PDF Downloads 147435 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition
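The two ideas described above can be sketched with synthetic data: (i) re-identify users from the flattened weights of one layer of their local HAR models, and (ii) blunt the attack by perturbing the updates with Laplace noise before sharing, in the spirit of local differential privacy. The dimensions, user counts, and noise scale are illustrative assumptions, not the PUR-Attack or SAFER configuration.

```python
# Sketch of a parameter-based re-identification attack and a noise-based mitigation.
# The synthetic "layer weights" cluster around a user-specific vector, which is the
# signal such an attack exploits; real federated updates would replace them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n_users, rounds_per_user, dim = 20, 15, 64

centers = rng.normal(0, 1, size=(n_users, dim))  # each user's characteristic weight vector
X = np.vstack([centers[u] + rng.normal(0, 0.1, size=(rounds_per_user, dim))
               for u in range(n_users)])          # anonymous per-round updates
y = np.repeat(np.arange(n_users), rounds_per_user)  # true user identities (attack labels)

def attack_accuracy(updates):
    X_tr, X_te, y_tr, y_te = train_test_split(updates, y, test_size=0.3, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

print("attack accuracy without protection:", round(attack_accuracy(X), 3))

# Mitigation sketch: perturb updates with Laplace noise before sharing.
noise_scale = 1.0  # assumed; an adaptive scheme would tune this to the assessed risk
X_noisy = X + rng.laplace(0.0, noise_scale, size=X.shape)
print("attack accuracy with noisy updates:", round(attack_accuracy(X_noisy), 3))
```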
Procedia PDF Downloads 11434 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition
Authors: Habtamu Garoma Debela, Gemechis File Duressa
Abstract:
In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are basically narrow regions in the neighborhood of the boundary of the domain, where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to the appearance of the layer phenomena, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' refers to numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture some notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator method combined with numerical integration to solve the problem. The non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method. Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh size are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted which support all of our theoretical findings. A concise conclusion is provided at the end of this work.Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent
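To illustrate the fitted-operator idea and Richardson extrapolation on a simple model, the following sketch applies the classical Il'in-Allen-Southwell exponentially fitted scheme to the constant-coefficient test problem -εu'' + u' = x on (0, 1) with u(0) = u(1) = 0. This is not the authors' nonstandard fitted operator for the semilinear, non-local problem; it only shows the mechanics of replacing the diffusion coefficient with a fitted one and extrapolating solutions from two meshes.

```python
# Exponentially fitted finite differences for -eps*u'' + u' = x, u(0) = u(1) = 0,
# plus Richardson extrapolation of the coarse- and fine-mesh solutions.
import numpy as np

def solve_fitted(N: int, eps: float) -> np.ndarray:
    """Il'in-Allen-Southwell fitted scheme on a uniform mesh with N subintervals."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    sigma = 0.5 * h / np.tanh(0.5 * h / eps)    # fitted replacement for eps
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = A[N, N] = 1.0                     # Dirichlet conditions u(0) = u(1) = 0
    for i in range(1, N):
        A[i, i - 1] = -sigma / h**2 - 1.0 / (2.0 * h)
        A[i, i] = 2.0 * sigma / h**2
        A[i, i + 1] = -sigma / h**2 + 1.0 / (2.0 * h)
        b[i] = x[i]                             # right-hand side f(x) = x
    return np.linalg.solve(A, b)

def exact(x, eps):
    """Closed-form solution of the test problem, written to avoid overflow."""
    layer = (np.exp((x - 1.0) / eps) - np.exp(-1.0 / eps)) / (1.0 - np.exp(-1.0 / eps))
    return 0.5 * x**2 + eps * x - (0.5 + eps) * layer

eps, N = 1e-4, 64
x = np.linspace(0.0, 1.0, N + 1)
u_coarse = solve_fitted(N, eps)
u_fine = solve_fitted(2 * N, eps)[::2]          # fine-mesh solution at the coarse nodes
u_rich = 2.0 * u_fine - u_coarse                # Richardson extrapolation for a first-order scheme

for name, u in (("coarse", u_coarse), ("fine", u_fine), ("Richardson", u_rich)):
    print(f"{name:10s} max abs error: {np.max(np.abs(u - exact(x, eps))):.2e}")
```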
Procedia PDF Downloads 143